For progress to be by accumulation and not by random walk, read great books

post by MichaelVassar · 2010-03-02T08:11:51.034Z · LW · GW · Legacy · 117 comments

This recent blog post strikes me as an interesting instance of a common phenomenon.  The phenomenon looks like the following: an intellectual, working within the assumption that the world is not mad (an assumption not generally found outside of the Anglo-American Enlightenment intellectual tradition), notices that some feature of the world would only make sense if the world were mad.  This intellectual responds by denouncing as silly one of the few features of this vale of tears that are, while not intelligently designed, at least structured by generalized evolution rather than by entropy.  The key line in the post is

"Conversely in all those disciplines where we have reliable quantatative measurements of progress (with the obvious exception of history) returning to the original works of past great thinkers is decidedly unhelpful."

I agree with the above statement, and find that the post makes a compelling argument for it.  My only caveat is that we essentially never have quantitative measures of progress.  Even in physics, when one regards not the theory but the technique of actually doing physics, tools and modes of thought rise and fall for reasons of fashion, and once-widespread techniques that remain useful fall into disuse. 

Other important techniques, like the ones used to invent calculus in the first place, are never adequately articulated by those who use them and thus never come into general use.  One might argue that Newton didn't use any technique to invent calculus, just a very high IQ or some other unusual set of biological traits.  This, however, doesn't explain why a couple of people invented calculus at about the same time and place, especially given the low population of that time and place compared to the population of China over the many centuries when China was much more civilized than Europe. 

It seems likely to me that in cases like the invention of calculus, looking at the use of such techniques can contribute to their development in at least crude form.  By analogy, even the best descriptions of how to do martial arts are inadequate to provide expertise without practice, but experience watching experts fight is a valuable complement to training by the relatively inept.  If one wants to know the Standard Model, sure, study it directly, but if you want to actually understand how to do the sorts of things that Newton did, you would be advised to read him, Feynman and yes, Plato too, as Plato also did things which contributed greatly to the development of thought. 

Anyone who has ever had a serious intellectual following is worth some attention.  Repeating errors is the default, so it's valuable to look at ideas that were once taken seriously but are now recognized as errors.  This is basically the converse of studying past thinkers to understand their techniques.   

Outside of physics, the evidence for progress is far weaker.  Many current economists think that today we need to turn back to Keynes to find the tools that he developed but which were later abandoned or simply never caught on.  A careful reading of Adam Smith and of Ben Franklin reveals them to use tools which only caught on centuries after they published, such as economic models of population growth which would have predicted the "demographic transition" that surprised almost all demographers just recently.  Likewise, much in Darwin is part of contemporary evolutionary theory but was virtually unknown by evolutionary biologists half a century ago.  

As a practical matter, a psychologist who knows the work of William James as well as that of B.F. Skinner, or an economist who knows Hayek and Smith as well as Samuelson or Keynes, is always more impressive than one who knows only the 'modern' field as 'modern' was understood by the previous generation.  Naive induction strongly suggests that, like all previous generations of social scientists, today's social scientists who specialize in contemporary theories will be judged by the next generation, who will have an even more modern theory, to be inferior to their more eclectic peers.  Ultimately one has to look at the empirical question of the relative per-capita intellectual impressiveness of people who study only condensations and people who study original works.  To me, the latter looks much, much greater in most fields; OK, in every field that I can quickly think of except for astronomy. 

To the eclectic scholar of scholarly madness, progress is real.  This decade's sludge contains a few gems that weren't present in the sludge of any previous decade.  To the person who assumes that fields like economics or psychology effectively condense the findings of previous generations as background assumptions to today's work, however, progress means replacing one pile of sludge with another fashionable sludge-pile of similar quality.  And to those few whom the stars bless with the coworkers of those who study stars?  Well I have only looked at astronomy as through a telescope.  I haven't seen the details on the ground.  That said, for them maybe, just maybe, I can endorse the initial link.  But then again, who reads old books of astronomy?

117 comments

Comments sorted by top scores.

comment by RobinZ · 2010-03-02T12:49:43.418Z · LW(p) · GW(p)

More relevant: many textbooks are straightforwardly badly written, to the point that the thirty-year-old conference papers in the citations are actually more accurate. Another factor which the classics-are-screened-off-by-moderns argument may miss is the degree to which poor work reduces the value of a reference.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2010-03-05T15:21:27.173Z · LW(p) · GW(p)

Is it that hard to pick the good ones?

Replies from: RobinZ
comment by RobinZ · 2010-03-05T15:24:59.985Z · LW(p) · GW(p)

I've never had to pick - I couldn't tell you. My professors have done a mostly good job so far.

comment by Richard_Kennaway · 2010-03-03T09:03:19.925Z · LW(p) · GW(p)

Would this be a fair summary?

Old books can be useful, but for the old books in a field to be essential reading today, something must have gone badly wrong with the field. Some fields have indeed gone badly wrong.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-03T17:24:03.251Z · LW(p) · GW(p)

Yep, except that I'm saying that virtually all fields have gone badly enough wrong for old books to be useful.

Replies from: Erebus
comment by Erebus · 2010-03-04T09:51:31.261Z · LW(p) · GW(p)

Do you have any specific examples in mind, or is this an expression of the general idea that academia is mad?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-04T17:22:44.574Z · LW(p) · GW(p)

I mentioned biology and economics, philosophy and psychology. I could go further if desired.
However, really, since academia promotes reading old books, I'm happy to place the probabilistic burden of the claim that academia is mad on it.

Replies from: Richard_Kennaway, rortian
comment by Richard_Kennaway · 2010-03-04T17:58:42.657Z · LW(p) · GW(p)

That doesn't seem so for mathematics, physics, chemistry...the hard sciences in general. It may be an ornament to one's education to read Euclid, Newton, and Einstein, but it is not necessary. The books that endure in these fields are the exceptionally good textbooks rather than the original works.

comment by rortian · 2010-03-04T18:20:01.855Z · LW(p) · GW(p)

Biology is the hot science right now. Knowledge about evolution was going to be very superficial until genetics came along. Now that tools are available, we are learning all sorts of things at an amazing clip.

comment by nazgulnarsil · 2010-03-02T19:02:23.080Z · LW(p) · GW(p)

In order to recognize systemic errors of your own era it is useful to return to a time before the current dominant paradigm was in effect. Even better if you can find first-hand accounts of when the current paradigm started becoming fashionable and was regarded as strange and alien.

comment by diegocaleiro · 2010-03-03T06:52:01.355Z · LW(p) · GW(p)

I disagree with a few points

1) Most people do not have an enormous amount of time to read, so the question is always whether one should NOT read something actual and read a classic instead.

2) People who DO have lots of time end up reading both actual and classic material, which is probably why you find those who read the classics superior; it's just that they are more into it.

3) Academics advise reading the classics, among other reasons, because they have been advised the same way, and chosen the same way, so choice-supportive bias plays a role there.

4) In addition, they prefer that their students read something they are already familiar with rather than something they themselves will have to become acquainted with in order to judge. It's easier to judge Hegel than Bostrom.

5) Very motivated people tend to lose motivation when not allowed to have their own ideas, and with time become meme-copies of classic people; in part this happens because they are obligated to read Plato, Aristotle, etc., and end up losing faith in the intellectual world. High young achievers such as Feynman, Eliezer, Russell, Kripke, Wittgenstein and others take deep pride in having been outsiders in their studying methods.

6) To dodge the nearest mistakes: we are all mistake-makers, trying to fit the map more and more closely to the territory. If I read Plato, I'll be reading an old scrapped map drawn with charcoal in a rush by someone with Alzheimer's. If I read Feynman, I'm using satellite technology to provide a three-dimensional visualization that scales up to centimeter range.

Replies from: Sticky, WannabeChthonic
comment by Sticky · 2010-03-08T22:54:07.897Z · LW(p) · GW(p)

Your usage of "actual" appears to be based on a false cognate.

comment by WannabeChthonic · 2019-10-12T05:39:18.321Z · LW(p) · GW(p)

I agree with you so much. Since I have limited time (like everyone) I should maximize learning/time when pursuing learning. Some old classics are still worth their weight (e.g. Plato's Republic). Most, however, are not.

Even though a lot of crap books exist today due to unedited self-publishing and whatnot, one can make the case that, in general, there are better books out there for nearly any learning purpose than the original.

I'd argue that an original work has historical significance and that someone can learn something by analyzing it. On the other hand, one is advised to learn the initial concept from a modern textbook (e.g. modern evolutionary theory is much more advanced than what Darwin thought of).

comment by Morendil · 2010-03-03T08:16:11.250Z · LW(p) · GW(p)

Typo - you want "vale of tears", not "veil".

(I'm now on record with several comments like this one. Please let me know if they annoy. It's a quirk of mine that egregious misspellings bias me toward thinking less of the writer and the writing, but it seems to be a widely shared one.)

Replies from: rortian, RobinZ
comment by rortian · 2010-03-04T18:10:00.484Z · LW(p) · GW(p)

Tip: You could pm the people about the error. No need for a permanent public record for trivial mistakes.

Replies from: Alicorn, wnoise
comment by Alicorn · 2010-03-04T18:12:41.958Z · LW(p) · GW(p)

Yeah, when something is in the permanent public record, everybody notices...

Replies from: rortian
comment by rortian · 2010-03-04T18:51:43.339Z · LW(p) · GW(p)

Well at least this was to a different person. Changing default behaviors is incredibly difficult. Nicely done though :)

comment by wnoise · 2010-03-04T18:12:07.572Z · LW(p) · GW(p)

Or you could delete it after it's been fixed.

comment by RobinZ · 2010-03-03T13:40:06.601Z · LW(p) · GW(p)

Strictly speaking, "veil of tears" is not egregious, but I do generally like to be corrected when I make errors of that kind.

comment by Karl_Smith · 2010-03-02T20:54:24.080Z · LW(p) · GW(p)

I think this post overstates the case a bit. My general impression is that the scientific method "wins" even in economics and that later works are better than earlier works.

Now it might be true that the average macro-economist of today understands less than Keynes did, but I'd be hard pressed to say that the best don't understand more. Moreover, there are really great distillers. In macro, for example, Hicks distilled Keynes into something that I would consider more useful than the original.

Nonetheless, I think it is correct that someone should be reading the originals. If not, there is the propensity for a particular distiller to miss an important insight and then for everyone else to go on missing it.

What this says to me is that there should be rewards for re-discovery. Suppose that I read Adam Smith and rediscover something great. I should be rewarded for that just as much as if I had come up with the idea myself. After all, it has the same effect on the current state of knowledge. However, that will not happen.

Rediscovering is not as prestigious as discovering, because it is not as difficult and does not signal intellectual greatness.

Replies from: MichaelVassar, TrevinPeterson
comment by MichaelVassar · 2010-03-03T06:11:51.882Z · LW(p) · GW(p)

I'm sure some people understand more than Keynes, both today and in his time, but can you name them? The understanding of the best unrecognized synthesizing geniuses of both today and Keynes' day isn't available. If you think that the most famous contemporary macro people know more than Keynes I won't laugh, just observe that they are probably using that knowledge to make hedge fund managers rich, not sharing it with you.

Macro-economists are rightly subject to the criticism "if you're so smart, why aren't you rich?".

Replies from: Karl_Smith
comment by Karl_Smith · 2010-03-03T18:20:12.321Z · LW(p) · GW(p)

So the easy answers might be:

Ben Bernanke

Mark Gertler

Michael Woodford

Greg Mankiw

It's not clear to me why macro-economists are rightly subject to such criticism. To me it's like asking a mathematician, "If you're so good at logical reasoning, why didn't you create the next killer app?"

Understanding how the economy works and applying that knowledge to a particular task are completely different.

Replies from: SecondWind
comment by SecondWind · 2013-05-19T06:03:38.467Z · LW(p) · GW(p)

"If you're so good at logical reasoning why didn't you create the next killer app"

'Designing the next killer app' seems to rely heavily on predicting what people will want, which is many steps and a lot of knowledge away from logical reasoning.

comment by TrevinPeterson · 2010-03-02T23:52:59.707Z · LW(p) · GW(p)

Rediscovering is not as prestigious as discovering, because it is not as difficult and does not signal intellectual greatness.

There is a difference between rediscovering an old idea, and adapting an old idea to a new situation. Simply rediscovering an old idea does not grant much prestige. Austrians are constantly coming across Hayek quotes and parading them around as definitive solutions to current problems. The problem is that these ideas are every bit as untestable as they were on the day Hayek wrote them. A confirmation bias leads Austrians to see them as Truth, while Keynesians remain skeptical.

When old ideas are adapted into a testable form, they confer a great deal of prestige. There are all sorts of anecdotes about this happening, such as Henry Ford taking the idea of an assembly line from Oldsmobile and mixing it with his observations from a meatpacking plant to create the moving assembly line. The difference is that this is a testable idea that creates immediate results.

Replies from: Karl_Smith
comment by Karl_Smith · 2010-03-03T03:13:04.366Z · LW(p) · GW(p)

So clearly adapting an old idea to a new situation is useful.

However, it may also be the case that there is an old idea which if re-examined will be seen to be useful in and of itself.

The problem with the Austrians is that their ideas are being considered and they are being rejected. See Bryan Caplan's Why I am Not an Austrian Economist. (link seems not to be working)

comment by SarahNibs (GuySrinivasan) · 2010-03-02T17:54:37.298Z · LW(p) · GW(p)

An excerpt from the Amazon description of Plausible Reasoning: "This work might have been called "How to Become a Good Guesser"."

Polya's How to Solve It is a great little text he wrote for teachers and students of mathematics. Polya's Mathematics and Plausible Reasoning is even better. There are lots of great problem-solving techniques for non-mathematicians in there too. I recommend it to everyone; it's the best example I've ever seen of someone writing down their techniques.

Edit: cleared up the reference of the quote, it originally looked like I was quoting the article, sorry about that!

Replies from: roland, RobinZ
comment by roland · 2010-03-02T21:02:55.774Z · LW(p) · GW(p)

I agree! There is a lot of good stuff on the kad network and emule is a great client.

comment by RobinZ · 2010-03-02T19:59:08.438Z · LW(p) · GW(p)

Question: I don't see the quote you reply to in the original post - where did it come from?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-03-02T20:41:49.853Z · LW(p) · GW(p)

It came from the Amazon description, actually. I've edited the comment to make that clear.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T20:44:31.110Z · LW(p) · GW(p)

Thank you - I might have also moved it to the end of the comment, just to make it clear that it related to the material in the comment, not material in the post. (You were the one who brought up "Plausible Reasoning", after all!)

comment by komponisto · 2010-03-02T16:20:47.538Z · LW(p) · GW(p)

One might argue that Newton didn't use any technique to invent calculus, just a very high IQ or some other unusual set of biological traits.

That would be a non-explanation in any case. However high Newton's IQ may have been, his brain was still operating by lawful processes within the physical universe. By the sheer improbability of inventing calculus by chance, there is bound to exist some general technique used by Newton for doing things like inventing calculus, for all that that technique may have been opaque to Newton's own conscious introspection. Perhaps someone else may be able to formulate this technique in explicit generality (in the same way that Newton himself formulated the methods of calculus, already known in special cases, in explicit generality).

"High IQ" probably doesn't mean more than something like high processing speed and copious amounts of RAM. The algorithms (at least in their essence) can still be run, less efficiently, on inferior hardware.

Replies from: Eliezer_Yudkowsky, Karl_Smith, dxu, jhuffman, MichaelVassar, None
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-02T16:33:27.010Z · LW(p) · GW(p)

I dispute that this is a non-explanation. Besides referring to concepts whose existence has already been confirmed by other means, it makes a testable prediction about the degree to which abilities should run in genetic families as opposed to student lineages.

Replies from: komponisto
comment by komponisto · 2010-03-02T16:55:33.717Z · LW(p) · GW(p)

It's a question of which data you're interested in explaining. I'm more interested in understanding the mechanism of how Newton invented calculus than in explaining the (comparatively uninteresting) fact that most other people didn't. (If you want to program an AI to invent calculus, crying "IQ!" isn't going to help.)

[ETA: To be more explicit: the vague hypothesis that "Newton had a high IQ" adequately explains why, given that calculus was invented, Newton was among the two people to have invented it. But it does a much less effective job of explaining why it was invented in the first place, by anybody.]

(As it happens, most of the world's intellectual power has in fact been spread via students rather than children.)

Replies from: SirBacon
comment by SirBacon · 2010-03-02T20:31:37.822Z · LW(p) · GW(p)

As for Newton's exact mental processes, they are lost to history, and we are not going to get very specific theories about them. Newton can only give us an outside view of the circumstances of discovery. His most important finds were made alone in his private home and outside of academic institutions. Eliezer left school early himself. Perhaps a common thread?

Teachers select strongly for IQ among students when they have power to choose their students. This might be a more powerful aggregator of high-IQ individuals than transmission from parents to children. It might be the case that teachers don't transmit any special powers to their students, but just like to affiliate with other high-IQ individuals, who then go on to do impressive things.

At a certain level of IQ (that of Yudkowsky, Newton) pedagogy becomes irrelevant and a child will teach itself, given the necessary resources. At this point, teachers are more likely to take credit for natural talent while doing nothing to aid it than they are to "transmit intellectual power."

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-03T06:24:46.502Z · LW(p) · GW(p)

If academic lineages are due to an ability that teachers have to identify talent, this ability is extremely common and predicts achievement FAR better than IQ tests can. I am struck by the degree to which the financial world fails to identify talent with anything like similar reliability.

Also, the above theory is inconsistent with the extreme intellectual accomplishments of East Asians, and previously Jews, within European culture and failure of those same groups to produce similar intellectual accomplishments prior to such cultural admixture.

comment by Karl_Smith · 2010-03-02T20:27:45.577Z · LW(p) · GW(p)

I remember reading that one of the most g-loaded tests was recognition time. I think the experiment involved flashing letters and timing how long it took to press the letter on a keyboard. The key correlate was "time until finger left the home keys", which the authors interpreted as the moment you realized what the letter was.

I also heard a case that sensory memory lasts for a short and relatively constant time among humans, and that differences in cognitive ability were strongly related to the speed of pushing information into sensory memory. The greater the speed, the larger the concept that could be pushed in before key elements started to leak out.

comment by dxu · 2015-04-02T15:31:16.751Z · LW(p) · GW(p)

"High IQ" probably doesn't mean more than something like high processing speed and copious amounts of RAM. The algorithms (at least in their essence) can still be run, less efficiently, on inferior hardware.

This seems (to me) to be pretty unlikely to be the case. "High processing speed and copious amounts of RAM" would allow more efficient execution of a particular algorithm... but where does that algorithm come from in the first place? One notes that no one taught Newton the "algorithm for inventing calculus". The true algorithm he used, as you pointed out, is likely to have been implemented at a lower level of thought than that of conscious deliberation; if he were still alive today and you asked him how he did it, he might shrug and answer, "I don't know", "It just seemed obvious", or something along those lines. So where did the algorithm come from? I very much doubt that processing speed and RAM alone are enough to come up with a working algorithm good enough to invent calculus from scratch within a single human lifespan, no matter what substrate said algorithm is being run on. (If they were, so-called "AI-complete" problems such as natural language processing would plausibly be much easier to solve.) There is likely some additional aspect to intelligence (pattern-recognition, possibly?) that makes it possible for humans to engage in creative thinking of the sort Newton must have employed to invent calculus; to use Douglas Hofstadter's terminology, "I-mode", not "M-mode". "High IQ", then, would refer to not only increased processing speed and working memory, but also increased pattern-recognition skills. (Raven's Progressive Matrices, anyone?)

comment by jhuffman · 2010-03-02T18:21:37.972Z · LW(p) · GW(p)

The algorithms (at least in their essence) can still be run, less efficiently, on inferior hardware.

I don't think so. There are some conceptual leaps that people with inadequate intelligence will simply never be able to make, no matter how much time they put in. Part of the problem is they will lack the intuition and insight to know what type of problem or method of thought they are trying to invent. If there were a system for generating entirely new paradigms of useful thought we'd have already achieved a singularity of some kind I think.

Both Leibniz and Newton were giants among the early natural philosophers or scientists. If not for them it might have taken an Einstein or Ramanujan to invent calculus; and if it had been Einstein, then instead of benefiting from the work he built on top of Newton and some of his successors, we would (most likely) have had to wait for someone else to work out general relativity.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-03T20:35:56.310Z · LW(p) · GW(p)

If there were a system for generating entirely new paradigms of useful thought we'd have already achieved a singularity of some kind I think.

Human creativity isn't magic. There IS such a system. Most likely we can codify a simpler and more efficient system. Hopefully so, as this will be required for FAI.

The fact that we haven't coded it yet doesn't mean it can't be done. Once done, a below average thinker could in principle follow the algorithm.

Replies from: TruePath, jhuffman
comment by TruePath · 2010-04-14T23:15:27.032Z · LW(p) · GW(p)

Arguably they couldn't.

An average thinker could surely be the computational substrate on which the algorithm was implemented, in the same way transistors implement the algorithm running on this computer. However, this would simply be a version of Searle's Chinese room. The sentient being doing the thinking here would actually be an AI running really, really slowly through the application of computational rules on pencil and paper by some person.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-04-15T16:44:22.106Z · LW(p) · GW(p)

Any rule you can follow to break down a problem or bypass a known cognitive bias makes you smarter. It IS such an algorithm. There doesn't have to be another sentient being/AI that you're running; that's just a proof of concept.

The point is that we do not have to rely on genetics to give us people who can come up with brilliant ideas. We can train normal people and certainly above-average people to think in ways that lead to brilliant ideas, even if more slowly or only in groups.

And we should be training the brilliant people in the same processes anyway.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-15T17:26:38.675Z · LW(p) · GW(p)

What training methods are you thinking of?

Replies from: JamesAndrix
comment by JamesAndrix · 2010-04-15T21:30:04.078Z · LW(p) · GW(p)

For the most part, we don't have them yet. To a small degree they are some of the things we try to work out here. To a larger degree, science in general qualifies. (Look at the difference in performance between the most brilliant people pre-science, and the most brilliant people post-science. I see no reason to assume that normal people don't enjoy the same multiplier. At least some sub-brilliant people must have made brilliant discoveries because they used science.)

The potential future methods are somewhere in between the strategy of running an AI on pencil and paper, and giving up on making yourself more creative/rational.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-16T14:36:57.167Z · LW(p) · GW(p)

Thinking at the Edge might be useful.

It grew out of Focusing, a method based on observation of who got value from therapy and who didn't. Those who did all had a pattern of pausing, paying close attention to how they felt, spending some time searching for the exact words which satisfied them to express how they felt, and then saying them. I haven't seen any discussion of art or music therapy in this context.

Thinking at the Edge applies the method of close observation and expression of subtle feelings to cognition.

TAE requires a familiarity with Focusing. The participants in our first TAE were experienced Focusing people. This took care of the most difficult part of my university course. Nevertheless I expected it to fail, and I certainly experienced that it did fail. Some people did not even get as far as using logic, and most created no theory. Yet there was great satisfaction and even excitement. A great thing seemed to have happened, so I was grateful that I was saved any embarrassment. For some reason they did not feel cheated.

Later I understood. During the ensuing year many people wrote to us. They reported that they found themselves able to speak from what they could not say before, and that they were now talking about it all the time. And some of them also explained another excitement. Some individuals had discovered that they could think! What “thinking” had previously meant to many of them involved putting oneself aside and rearranging remembered concepts. For some the fact that they could create and derive ideas was the fulfillment of a need which they had despaired of long ago.

Now after five American and four German TAE meetings I am very aware of the deep political significance of all this. People, especially intellectuals, believe that they cannot think! They are trained to say what fits into a pre-existing public discourse. They remain numb about what could arise from themselves in response to the literature and the world. People live through a great deal which cannot be said. They are forced to remain inarticulate about it because it cannot be said in the common phrases. People are silenced! TAE can empower them to speak from what they are living through.

Replies from: CronoDAS
comment by CronoDAS · 2010-04-16T17:38:46.254Z · LW(p) · GW(p)

The writing at that link is confusing. It's too... "dense", let's say, and reminds me of attempts to sound profound by deliberately being hard to understand rather than actually being profound - what others may have called using too many "big words". I don't have a good way of describing the feeling of reading something hard to understand, and, when something is hard to understand, it's also hard to know whether it's worth putting in the effort to try to understand it or whether it's just gibberish. Am I making sense here?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-16T18:27:24.673Z · LW(p) · GW(p)

You're making sense. I'm sure Focusing is legitimate, and TAE is the same process I use for accessing new ideas. The bit I quoted sounds like TAE is incredibly valuable for people who've gotten false ideas about thinking from school and/or mainstream society.

However, in spite of all this, I find the TAE site unreadable, and I can handle moderately difficult text.

I'm not sure what the problem is. I don't think it's the vocabulary -- it might be that there's too much philosophy inserted in the wrong places, but this is only a guess.

Replies from: thomblake, CronoDAS
comment by thomblake · 2010-04-16T18:43:28.800Z · LW(p) · GW(p)

"Philosophy" should have been in scare quotes.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-16T18:59:50.088Z · LW(p) · GW(p)

Can you be more specific about that?

comment by CronoDAS · 2010-04-16T18:36:41.653Z · LW(p) · GW(p)

Yeah, there is definitely something very wrong with the writing style on that site.

comment by jhuffman · 2010-03-05T18:19:15.165Z · LW(p) · GW(p)

Not if you consider what Karl mentions above. The problem is that the amount of thought that you can hold in your head at one time is finite and differs significantly from one person to another.

In other words: algorithms need working memory, which is not boundless.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-05T19:45:41.092Z · LW(p) · GW(p)

Well first off, I was assuming pencil and paper were allowable augmentations.

I would be surprised if it were the case that our brain process that finds big insights with N 'bits of working memory' couldn't be serialized to find the same big insights as a sequence of small insights produced by a brain running a similar process but with only N/2 available 'bits'.

Replies from: jhuffman
comment by jhuffman · 2010-03-05T20:35:43.691Z · LW(p) · GW(p)

Imagine yourself studying a 4-megapixel digital image only by looking at it one pixel at a time. Yes, you can look at it, and then even write down what color it was. Later you can refer back to this list and see what color a particular pixel was. It's hard to remember more than a few dozen at once though, so how will you ever have a complete picture of it in your head?

Replies from: JamesAndrix
comment by JamesAndrix · 2010-03-06T02:07:55.918Z · LW(p) · GW(p)

I could find and write down a set of instructions that would allow you to determine if there was a face in the image. If you were immortal and I were smarter, I could write down a set of instructions that might enable you to derive the physics of the photographed universe given a few frames.

At this level it's like the Chinese room.

But I don't think the ratio between Einstein's working memory and a normal person's working memory is 100,000 to 1.

It would be EASY to make instructions to find faces even if someone could only see and remember 1/16th of the image at a time. You get tons of image processing for free. "Is there a dark circle surrounded by a color?"

A human-runnable algorithm to turn data into concepts would be different in structure, but not in kind.
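As a rough illustration of what such instructions could look like, here is a minimal sketch (a toy example of mine, not from the comment): a grayscale image is scanned one small tile at a time, only a one-number summary per tile is kept as "notes on paper", and a deliberately crude "dark blob surrounded by lighter tiles" test stands in for face detection. The function names and thresholds are illustrative assumptions.

```python
# Toy sketch: serialize a "big working memory" task into small-memory steps.
# No step needs more than one tile's pixels in memory; the per-tile summaries
# play the role of the pencil-and-paper notes mentioned above.

def tile_summaries(image, tile_size):
    """Yield (tile_row, tile_col, mean brightness) for each tile, holding only
    one tile's pixels in memory at a time."""
    height, width = len(image), len(image[0])
    for top in range(0, height, tile_size):
        for left in range(0, width, tile_size):
            pixels = [image[r][c]
                      for r in range(top, min(top + tile_size, height))
                      for c in range(left, min(left + tile_size, width))]
            yield top // tile_size, left // tile_size, sum(pixels) / len(pixels)

def has_dark_blob(image, tile_size=4, dark=80, light=160):
    """Return True if some tile is dark while all of its neighbours are light --
    a crude stand-in for "is there a face-like feature?"."""
    summary = {}  # small: one number per tile, not one per pixel
    for row, col, mean in tile_summaries(image, tile_size):
        summary[(row, col)] = mean
    for (row, col), mean in summary.items():
        if mean < dark:
            neighbours = [summary.get((row + dr, col + dc))
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)]
            neighbours = [n for n in neighbours if n is not None]
            if neighbours and all(n > light for n in neighbours):
                return True
    return False

# Usage: a bright 12x12 image with a dark 4x4 patch in the middle.
image = [[200] * 12 for _ in range(12)]
for r in range(4, 8):
    for c in range(4, 8):
        image[r][c] = 30
print(has_dark_blob(image))  # True
```

Splitting the work this way mirrors the pencil-and-paper augmentation mentioned above: the "thinker" only ever needs to hold one tile and a short list of notes at once.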

comment by MichaelVassar · 2010-03-02T17:46:53.936Z · LW(p) · GW(p)

"IQ or some other unusual set of biological traits" implies that the unusual features of the cognitive process might be built upon unusual features of a biological process and fairly likely to emerge given that unusual substrate. I then argued that this was an unlikely interpretation..

comment by [deleted] · 2015-04-02T18:08:08.775Z · LW(p) · GW(p)

"High IQ" probably doesn't mean more than something like high processing speed and copious amounts of RAM. The algorithms (at least in their essence) can still be run, less efficiently, on inferior hardware.

This seems like a strikingly accurate definition of IQ, although I agree with dxu that pattern recognition and/or other unusual abilities (set on solving logical problems no matter the context) are also part of it. However, the methods Newton used to come up with, for example, calculus are likely not the ones that can be found inside the human brain of a newborn. He probably used a lot of creative thinking to come up with ideas that helped him do that.

Replies from: TheOtherDave
comment by TheOtherDave · 2015-04-02T19:14:13.218Z · LW(p) · GW(p)

the methods Newton used to come up with, for example, calculus are likely not the ones that can be found inside a human brain

Can you say more about what you mean by this? An uncharitable reading is absurd on the face of it (if the methods Newton used weren't to be found inside a human brain, how exactly did Newton use them?) but I can't quite work out a coherent charitable reading.

Replies from: None
comment by [deleted] · 2015-04-02T19:24:00.203Z · LW(p) · GW(p)

Err, I meant that I don't find it likely that the human brain by itself has algorithms that are made for inventing calculus. He probably developed that thinking by other means. It was unfortunate of me to forget to spell out that last part.

Replies from: TheOtherDave
comment by TheOtherDave · 2015-04-02T22:22:41.424Z · LW(p) · GW(p)

Well, right, but what I'm trying to understand is what "other means" you have in mind, and what you're trying to contrast them with, and how you think he went about developing them. As it stands, it sounds like you're trying to suggest that creative thinking isn't a natural function of the human mind.... which, again, I assume is not what you mean, but I'm at a loss to understand what you do mean.

Replies from: None, dxu, None
comment by [deleted] · 2015-04-03T15:01:59.728Z · LW(p) · GW(p)

What I meant is simply: 1) IQ and creative thinking are not the same thing; the two concepts are not strongly connected to one another. The brain operates differently when using stuff that requires high "IQ" and when "thinking creatively" (algorithms related to both concepts still reside inside the brain, of course). 2) I think that Newton used both creative thinking and high IQ, and perhaps some other part that the brain is equipped with by default, in order to develop his thinking in a way that allowed for the invention of calculus.

Replies from: TheOtherDave
comment by TheOtherDave · 2015-04-03T15:14:25.858Z · LW(p) · GW(p)

Ah! OK, this helps clarify. Thanks.

For my own part, I agree that the cognitive processes underlying what we observe when we measure IQ aren't the same as the ones we observe when we evaluate creative thinking, though they certainly overlap significantly. And, sure, it seems likely that developing calculus requires both of those sets.

Replies from: None
comment by [deleted] · 2015-04-03T15:21:15.732Z · LW(p) · GW(p)

Good we sorted it out :)

comment by dxu · 2015-04-03T01:25:55.245Z · LW(p) · GW(p)

I think that by "creative thinking" Okeymaker is referring to something similar to what I describe in this comment, in that Newton employed more than simply "high processing speed and copious amounts of RAM" when he developed calculus.

Replies from: TheOtherDave
comment by TheOtherDave · 2015-04-03T15:11:05.850Z · LW(p) · GW(p)

Honestly, I grow more confused rather than less.

So, yes, of course there's more going on when thinking systems think than "processing speed and RAM." Of course there are various cognitive processes engaging with input in various ways.

If I'm following, you're suggesting that the distinction being introduced here is between two different set of cognitive processes, one of which (call it A) is understood as somehow more natural or innate or intrinsic to the human mind than the other (call it B), and creative thinking is part of B. And the claim is that Newton relied not only on A, but also (and importantly) on B to invent calculus.

Well, OK. I mean, sure, we can divide cognitive processes up into categories however we wish.

I guess what I'm failing to understand is:
a) what observable traits of cognitive processes sort them into A or B (or both or neither)? Like... is identifying words that rhyme "natural"? Is flirting with someone attractive? Is identifying the number of degrees in the unmeasured angles of an equilateral triangle? How would we answer these questions?
b) what is the benefit of having sorted cognitive processes into these categories?

EDIT: Ah. Okeymaker's most recent comment has helped clarify matters, in that they are no longer talking about natural and unnatural cognitive processes at all, but merely processes underlying "IQ" vs "creative thinking." That I understand.

Replies from: dxu
comment by dxu · 2015-04-03T15:40:05.177Z · LW(p) · GW(p)

If I'm following, you're suggesting that the distinction being introduced here is between two different set of cognitive processes, one of which (call it A) is understood as somehow more natural or innate or intrinsic to the human mind than the other (call it B), and creative thinking is part of B.

No, I'm not suggesting that. That may be what Okeymaker is suggesting; I'm not quite clear on his/her distinction either. What I was originally addressing, however, was komponisto's assertion that "high IQ" is merely "high processing speed and copious amounts of RAM", which I denied, pointing out that "high processing speed and copious amounts of RAM" alone would surely not have been enough to invent calculus, and that "creative thinking" (whatever that means) is required as well. In essence, I was arguing that "high IQ" should be defined as more than simply "high processing speed and copious amounts of RAM", but should include some tertiary or possibly even quaternary component to account for thinking of the sort Newton must have performed to invent calculus. This suggested definition of IQ seems more reasonable to me; after all, if IQ were simply defined as "high processing speed and copious amounts of RAM", I doubt researchers would have had so much trouble testing for it. Furthermore, it's difficult to imagine tests like Raven's Progressive Matrices (which are often used in IQ testing) being completed by dint of sheer processing speed and RAM.

Note that the above paragraph contains no mention of the words "natural", "innate", or any synonyms. The distinction between "natural" thinking and "synthetic" (I guess that would be the word? I was trying to find a good antonym for "natural") thinking was not what I was trying to get at with my original comment; indeed, I suspect that the concept of such a distinction may not even be coherent. Furthermore, conditional on such a distinction existing, I would not sort "creative thinking" into the "synthetic" category of thinking; as I noted in my original comment, no one taught Newton the algorithm he used to invent calculus. It was probably opaque even to his own conscious introspection, probably taking the form of a brilliant flash of insight or something like that, after which he just "knew" the answer, without knowing how he "knew". This sort of thinking, I would say, is so obviously spontaneous and untaught that I would not hesitate to classify it as "natural"--if, that is, the concept is indeed coherent.

It sounds as though you may be confused because you have been considering Okeymaker's and my positions to be one and the same. In light of this, I think I should clarify that I simply offered my comment as a potential explanation of what Okeymaker meant by "creative thinking"; no insight was meant to be offered on his/her distinction between "natural" thinking and "synthetic" thinking.

Replies from: komponisto, TheOtherDave
comment by komponisto · 2015-04-27T19:52:04.201Z · LW(p) · GW(p)

What I was originally addressing, however, was komponisto's assertion that "high IQ" is merely "high processing speed and copious amounts of RAM", which I denied, pointing out that "high processing speed and copious amounts of RAM" alone would surely not have been enough to invent calculus,

This shows that you didn't understand what I was arguing, because you are in fact agreeing with me.

The structure of my argument was:

(1) People say that high IQ is the reason Newton invented calculus.

(2) However, high IQ is just high processing speed and copious amounts of RAM.

(3) High processing speed and copious amounts of RAM don't themselves suffice to invent calculus.

(4) Therefore, "high IQ" is not a good explanation of why Newton invented calculus.

Replies from: dxu, Quill_McGee
comment by dxu · 2015-04-27T23:02:37.710Z · LW(p) · GW(p)

I understood what you were saying; I just disagreed with your definition of "high IQ". Put another way: I modus tollens'd your modus ponens.

EDIT: It turns out that Quill_McGee already expressed what I was trying to say, and probably better than I could have myself. So yeah--what he/she said.

comment by Quill_McGee · 2015-04-27T22:37:52.306Z · LW(p) · GW(p)

Whereas, if I am interpreting them correctly, what they are saying is

(1) People say that high IQ is the reason Newton invented calculus.

(2) High processing speed and copious amounts of RAM don't themselves suffice to invent calculus.

(3) Therefore, "High processing speed and copious amounts of RAM" is not a good description of high IQ.

Personally, I'd say that 'high IQ' is probably most useful when just used to refer to whatever it is that enables people to do stuff like invent calculus, and that 'working memory' already suffices for RAM, and that there probably should be a term for 'high processing speed' but I do not know what it is/should be.

EDIT: that is, I think that Newton scored well along some metric which did immensely increase his chances of inventing calculus, which does extend beyond RAM and processing speed, which I would nonetheless refer to as 'high IQ'

Tabooing IQ would almost certainly be helpful here.

comment by TheOtherDave · 2015-04-03T17:34:48.274Z · LW(p) · GW(p)

I apologize for being unclear; when I wrote "you're suggesting that the distinction being introduced here" I meant introduced by Okeymaker, whose position is what I was trying to understand in the first place (and I believe I now do), and which I'd assumed (incorrectly) that you were talking about as well.

comment by [deleted] · 2015-04-02T23:00:53.488Z · LW(p) · GW(p)

Deleted; see my other comment in response to your question.

comment by Douglas_Reay · 2012-11-17T20:46:32.531Z · LW(p) · GW(p)

This recent blog post

Updated link:

Reading Originals

comment by Unknowns · 2010-03-02T19:14:26.473Z · LW(p) · GW(p)

I read old books of astronomy and I found it very helpful for understanding new books of astronomy.

Replies from: gimpf
comment by gimpf · 2010-03-03T17:30:10.700Z · LW(p) · GW(p)

I read old books on philosophy and found they are obsolete when it comes to logic and epistemology.

comment by Alex Flint (alexflint) · 2010-03-03T09:14:16.126Z · LW(p) · GW(p)

Outside of physics, the evidence for progress is far weaker.

The economic growth of the last few decades suggests that some people, somewhere, are gradually getting more things right more often. Those genomes aren't sequencing themselves. Or have I misunderstood you?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-03T17:28:01.973Z · LW(p) · GW(p)

Specific technologies arise and fall. Capital accumulates and depreciates. Governments make up numbers. Physics touches everything, especially through solid state and semiconductor physics in recent years. Finally, as the post emphasized, ideas are a sort of capital, and accumulate over time, even if new ideas are no better than old ones, so long as the old ones aren't thrown out.

comment by TrevinPeterson · 2010-03-02T19:15:56.982Z · LW(p) · GW(p)

progress means replacing one pile of sludge with another fashionable sludge-pile of similar quality.

The methods available to test these various hypotheses seem to have more of an impact on their prominence than any objective measure of truth. Classical mechanics conformed to observations and could be confirmed by various tests. This led to widespread adoption until observations were made that did not fit the theories. Often the theories are available and cover various possible outcomes, all justified by the intuition offered by the current, yet untestable, theories.

This is where the social sciences run into difficulty. Predictions made by the social sciences are confirmed or disproved by the methods of verification available at the time the predictions are made. These methods of verification evolve at a slower rate than the theories, and are always limited by the dynamic nature of human actors in large groups. Even if we could determine the utility function of everyone in the world, by the time that utility function had been applied and used to test various social-science theories, it would have already changed.

It is unlikely that the LHC will produce results not yet predicted by various physicists. When it does produce results, some theories will be proved and some will be disproved. The confirmation of the correct theory, however, is more valuable than 100 potentially correct yet untestable theories.

Mathematics has evolved quickly for the same reasons that language has evolved: it is testable in its immediate ability to express and be understood. It has a very clean and objective measurement of success.

Replies from: blogospheroid, Jayson_Virissimo
comment by blogospheroid · 2010-03-03T10:17:55.926Z · LW(p) · GW(p)

Bingo!

Economics suffers from the problem that it is the art of the royal economic advisor. Almost all radical economic advice suffers from the problem that only a very strong sovereign would be able to implement it. In real life, almost every economic measure would be half-diluted by the time the rubber hit the road.

That doesn't mean that the field has no advances. One might have to push and prod around a little to get progress in the direction sought.

For advancing the art of value creation, one can easily identify insights from economics that can be used.

comment by Jayson_Virissimo · 2010-03-02T19:31:41.448Z · LW(p) · GW(p)

Classical mechanics conformed to observations and could be confirmed by various tests. This led to widespread adoption until observations were made that did not fit the theories.

This is what I call the naive history of science. In this view science progresses inevitably because it relies on a recipe for doing good science (the scientific method). You could probably find this in a physics textbook, but these kinds of stories aren't taken seriously by historians of science.

Classical mechanics made incorrect predictions from the get-go (for instance it couldn't explain the observed motion of the moon), in addition to positing occult forces which many natural philosophers (especially on The Continent) believed were a return to the natural magic tradition. The disagreement over classical mechanics was not a simple problem of applying a method. There were deep metaphysical commitments that explain why some accepted Newton's theories and others rejected them. Theories "fitting" or "not fitting" observation cannot explain the history of physics (let alone the history of science).

Replies from: orthonormal, h-H
comment by orthonormal · 2010-03-03T06:39:19.584Z · LW(p) · GW(p)

Theories "fitting" or "not fitting" observation cannot explain the history of physics (let alone the history of science).

Right, but they're at least entangled with it, which is what separates scientific disciplines from their predecessors. I completely agree that the history of science is more messy, politics-laden, and irrational than the naive/textbook view acknowledges, but it only takes a weak sustained current (in this case, the fact that the results of experiments sometimes shocked and puzzled scientists) to overcome random noise in time.

comment by h-H · 2010-03-02T21:30:42.581Z · LW(p) · GW(p)

Um, I think you're missing the overall point of his post; he states that we sometimes have accurate theories, but our box of tools (mathematical techniques) is as yet too underdeveloped to make full sense of them.

It might be the case that he's taking a naive view, etc., but from your post it appears that has little to no significance to his overall point.

Also, to any who downvoted: please refrain from down-voting without attempting to explain your disagreement. It's obviously not good practice.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-03T06:28:42.524Z · LW(p) · GW(p)

No, up and down votes are symmetrical. Both should usually be done without explanation.

Replies from: orthonormal, Tyrrell_McAllister, wedrifid, khafra
comment by orthonormal · 2010-03-03T06:41:27.635Z · LW(p) · GW(p)

I disagree; an explanation of a downvote is a lot more helpful to the author than an explanation of an upvote (in addition to the fact that it often mitigates status-based anger), and thus the symmetry is broken. h-H is perhaps exaggerating this principle, but it's perfectly legitimate to say "that comment looked OK to me, what are you seeing?"

Replies from: h-H, komponisto
comment by h-H · 2010-03-04T00:08:50.345Z · LW(p) · GW(p)

seconded, and well put.

comment by komponisto · 2010-03-03T13:44:59.006Z · LW(p) · GW(p)

Strong second.

comment by Tyrrell_McAllister · 2010-03-03T14:04:33.011Z · LW(p) · GW(p)

No, up and down votes are symmetrical.

Up and down votes should not be symmetrical. The space of upvote-worthy comments is much smaller than the space of downvote-worthy comments, so a down-vote, by itself, conveys less information.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-03-03T17:38:30.437Z · LW(p) · GW(p)

In the space of comments actually posted, the reverse is the case. What class of potential comments did you have in mind?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-03-03T18:33:24.346Z · LW(p) · GW(p)

In the space of comments actually posted, the reverse is the case. What class of potential comments did you have in mind?

I had in mind the space of comments that would be posted if commenters received no feedback on what kinds of comments were appropriate.

ETA: My point was that there are a lot more ways for a comment to go wrong than to go right. The region of good comments is a small target in commentspace. Given only that a comment was downvoted, it could be anywhere in a vast wasteland of bad possible comments. That's the case even if you condition on the comment's having appeared on LW.

Of course, sometimes one knows exactly why a comment was downvoted. But, if you're the author, and you hadn't expected the downvote, it's probably not so clear why you received one. In general, you can see that the comment must have been in a relatively small region within bad-comment-land. But that's small relative to all of bad-comment-land, so even your "small" region is probably still big compared to all of good-comment-land.

comment by wedrifid · 2010-03-03T06:37:27.894Z · LW(p) · GW(p)

Agree, and add that I often prefer not to downvote in cases where I have expressed disagreement, simply because it reduces resentment.

comment by khafra · 2010-03-03T20:06:24.293Z · LW(p) · GW(p)

With a somewhat valuable but straightforward comment, an upvote with no further discussion is optimal, because both the author and the readers understand why it's good.

With a worthless but ingenuously written comment, the readers gain nothing from further discussion, but commentary helps the author to more easily discover his error. Do what your decision theory requires regarding the good of the many vs. the good of the few.

comment by djcb · 2010-03-02T17:09:10.431Z · LW(p) · GW(p)

This somewhat echoes The Value of Nature and Old Books. Sometimes, older books can be quite effective at explaining things that do not depend on the latest research -- the books by e.g. Knuth, Feynman, Abelson/Susskind are good examples, and I would heartily recommend those, even if there are newer works on similar subjects.

comment by WannabeChthonic · 2019-10-13T16:09:44.052Z · LW(p) · GW(p)

I'd like to quote this argument from here:

Distillation works best in very exact sciences, such as physics and mathematics. If you rely on distillation for an inexact science, you will do best at capturing its exact parts. You will be left with a systematic bias, and knowledge gap, regarding its inexact parts.

comment by ObliqueFault · 2010-03-02T22:30:35.704Z · LW(p) · GW(p)

Likewise, much in Darwin is part of contemporary evolutionary theory but was virtually unknown by evolutionary biologists half a century ago.

I disagree with the statement that evolutionary biology isn't making clear progress. I'm guessing you're talking about punctuated equilibrium, which was part of Darwin's On the Origin of Species (albeit not by that name), deemphasized by later evolutionary biologists, and later assertively brought back by Gould et al. However, this hypothesis is only vacillating in and out of 'style' because it 1) has scientific merit and 2) is difficult to prove. Other aspects of Darwin's theory have been easier to validate or disprove and so have been retained or decisively refuted over the years. On the whole modern evolutionists have a vastly more complete understanding of their subject than Darwin did. The entire new fields of genetics and molecular biology have opened up since Darwin's day, expanding on Darwin's theory as well as explaining the mechanics that underlie it.

Ultimately one has to look at the empirical question of the relative per-capita intellectual impressiveness of people who study only condensations and people who study original works. To me, the latter looks much much greater in most fields, OK, in every field that I can quickly think of except for astronomy.

Who says derivative works are always condensations? To continue with the Darwin example, On the Origin of Species was a seminal work, to be sure, but it doesn't explain many necessary modern concepts, such as sexual selection, kin selection, silent mutations, genetic drift, etc. If you are an evolutionary biologist then you should clearly read On the Origin of Species, among other things. But if you are an interested amateur and only have time to read one book then you should read a modern evolution textbook, in the same way you would read a modern medical textbook instead of one written in the 19th century. The old texts would contain some discredited concepts and be missing a lot of substantiated ones.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-03-03T06:18:33.233Z · LW(p) · GW(p)

I don't just mean punctuated equilibrium.
Darwin wrote more than Origin and did talk about sexual selection.

I agree that an interested amateur should read the modern textbook over Origin. It's not THAT good. If you can only read one book in a discipline it should pretty much always be a textbook unless the discipline is totally dysfunctional.

Replies from: wedrifid, ObliqueFault
comment by wedrifid · 2010-03-03T06:20:25.731Z · LW(p) · GW(p)

One book in a discipline?

comment by ObliqueFault · 2010-03-03T17:58:28.497Z · LW(p) · GW(p)

Darwin wrote more than Origin and did talk about sexual selection.

Yes, you're right. Thanks for the correction.

The bulk of my point still stands, though. Evolutionary biology has made clear progress, especially since molecular biology took off in the '50s. Simplistically speaking, evolution is composed of mutation and natural selection, the latter of which was developed impressively by Darwin. But that was only half the story, so it was left to later biologists to complete the picture.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-04T03:10:35.543Z · LW(p) · GW(p)

Progress in the last 50 years is a non sequitur response to a claim that the situation was dire 50 years ago. At least, if you claim to disagree.

Replies from: ObliqueFault
comment by ObliqueFault · 2010-03-04T15:57:47.076Z · LW(p) · GW(p)

Unless I misunderstand him, his claim is that there hasn't been clear progress in the field since Darwin. My position is that there has been clear progress in the last 60 years. I concede that progress before that was slim.

Replies from: FAWS
comment by FAWS · 2010-03-04T16:41:17.691Z · LW(p) · GW(p)

Still, if the field has actually regressed between Darwin and the mid-20th century (by today's standards) without evolutionary biologists of that time being aware of that fact, that's evidence that progress in evolutionary biology is not necessarily clear, and reason enough to at least consider the possibility that the field might have regressed in other ways that we are not aware of.

Replies from: ObliqueFault
comment by ObliqueFault · 2010-03-04T21:01:08.230Z · LW(p) · GW(p)

I said progress was stagnant, not regressing. All of Darwin's books have always been widely available and read, so no information was ever lost. Some of Darwin's conjectures were deemphasized, and the biologists of the time were right to do so; they didn't yet have the techniques to prove or disprove them, and mere conjecture should never be the foundation of a scientific discipline. They weren't central to the theory anyway, and even Darwin considered them just speculation.

With modern technical know-how, such as radiometric dating and molecular clocks, biologists have discovered evidence supporting some of Darwin's more difficult-to-prove ideas, such as punctuated equilibrium. Darwin was an exceedingly smart man, so it's no surprise that some of his idle speculation turned out to be accurate. But that's a far cry from modern evolutionists "catching up" with Darwin.

Replies from: FAWS
comment by FAWS · 2010-03-04T21:21:02.578Z · LW(p) · GW(p)

I said progress was stagnant, not regressing.

I'm not necessarily trying to convince you of anything, just interested. Assuming that you are convinced that Bayesian statistics is the correct way to treat uncertainty, would you say that the field of statistics never regressed in that respect because the works of Bayes and Laplace were always around?

All of Darwin's books have always been widely available and read, so no information was ever lost.

That's a pretty good argument for reading the work of the old masters though, isn't it? (Not that you voiced any disagreement with that)

Replies from: ObliqueFault
comment by ObliqueFault · 2010-03-04T23:45:52.162Z · LW(p) · GW(p)

You have me at a disadvantage because I don't know much about the history of statistics, but here is my view. Assuming the core principles of Bayesian statistics were demonstrably effective, if they were widely accepted and then later rejected or neglected for whatever reason, then that would be regression. If Bayes' and Laplace's methods never caught on at all until a long time later, and there were no other significant advances in the field, then that would be stagnation.

By these (admittedly my own) definitions, evolutionary biology didn't regress after Darwin, because the only parts of his theory that were neglected were the ones that weren't yet provable. It's as if, theoretically, Bayes came up with a variety of statistical methods, most of which were clearly effective but some of which were of dubious utility. It wouldn't count as a regression, at least to me, if later generations dropped the dubious methods but kept the useful ones.

That's a pretty good argument for reading the work of the old masters though, isn't it?

I apologize, I haven't made my position clear about this. I think that experts should read the classics as well as modern works in their field. The interested amateur, though, should skip over the classics and go directly to modern thought, unless he or she has more free time than most.

comment by annebrandes (annebrandes1@gmail.com) · 2024-11-24T22:34:00.581Z · LW(p) · GW(p)

an assumption not generally found outside of the Anglo-American Enlightenment intellectual tradition


Is this actually true?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2024-11-25T09:40:50.887Z · LW(p) · GW(p)

Perhaps he means something like what Keynes said here

comment by WannabeChthonic · 2019-10-12T05:28:24.959Z · LW(p) · GW(p)

I have to admit that personally I don't see a golden thread in the post. What was the core argument? As far as I understood it, the post reasons about the "relative per-capita intellectual impressiveness of people who study only condensations and people who study original works".

Which is... to be honest, just a mockup. Who cares about "impressiveness" while studying? Why should one optimize for "impressiveness" in one's studies?

Personally I think that original works carry a lot of baggage. For example, the language is older, the theories are sometimes incredibly outdated, etc. It's fun to read about this "newly discovered oil" and how "this black oil will never run out!", but tbh not all books age the same. Plato ages well, but a 500-year-old book on eye surgery is probably completely useless by now.

So I'd argue that there's value in the "modern, condensed" form: some expert who tells me "this obscure line has the meaning of x. Don't mistake it for a y".

comment by WannabeChthonic · 2019-10-12T04:35:37.879Z · LW(p) · GW(p)

This recent blog post

Link to infiniteinjury.org seems to be down.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-10-12T04:45:23.694Z · LW(p) · GW(p)

(Archived.)

Replies from: WannabeChthonic
comment by WannabeChthonic · 2019-10-13T16:04:50.953Z · LW(p) · GW(p)

The purpose of the comment was more to get the article fixed... I am new to LW. Posts can be edited, right?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2019-10-13T16:26:54.840Z · LW(p) · GW(p)

I'm not sure the OP pays that much attention to Less Wrong these days? The mods could do it if they wanted (or write a broken-link checker??).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-10-13T20:49:01.440Z · LW(p) · GW(p)

It is not even necessary to write one; such tools already exist (search for “broken link” on that page).

comment by Gabriel W Irizarry Olivera (gabriel-w-irizarry-olivera) · 2018-09-07T09:58:29.841Z · LW(p) · GW(p)

Books are written sometimes about "The Great Ideas Of The Past", sometimes about that great thinker of former times, and the public reads these books written by someone else, but not the works of "The Great Man or Woman" himself/herself.

There is nothing that so greatly recreates the mind as the works of the old classic writers. Directly one has been taken up, even if it is only for half-an-hour, one feels as quickly refreshed, relieved, purified, elevated, and strengthened as if one had refreshed oneself at a mountain stream.

One can never read too little of bad, or too much of good books: bad books are intellectual poison; they destroy the mind.

In order to read what is good one must make it a condition never to read what is bad; for life is short, and both time and strength limited.

It would be a good thing to buy books if one could also buy the time to read them; but one usually confuses the purchase of books with the acquisition of their contents. To desire that a person should retain everything he has ever read is the same as wishing him to retain in his stomach all that he has ever eaten. He has been bodily nourished on what he has eaten, and mentally on what he has read, and through them become what he is.

It is because people will only read what is the newest instead of what is the best, that writers remain in the narrow circle of prevailing ideas, and that we sometimes feel that the age sinks deeper and deeper in its own mire.

comment by Lorimer · 2018-08-06T10:30:40.507Z · LW(p) · GW(p)

Putting China on BLAST!

comment by bridgesandballoons · 2010-03-03T02:02:40.999Z · LW(p) · GW(p)

"A good analysis book doesn’t summarize Newton it digests his insights and presents them as part of a grander theory. "

Exactly. And I want to be in charge of doing that for myself, so I suppose I'll continue to read original sources.

Replies from: jimrandomh
comment by jimrandomh · 2010-03-03T02:23:42.399Z · LW(p) · GW(p)

Exactly. And I want to be in charge of doing that for myself, so I suppose I'll continue to read original sources.

In that case, it will take you much longer to learn physics than it would if you'd just read a standard textbook. You will come out with extra knowledge, but it will be knowledge of history, not physics.

Replies from: orthonormal
comment by orthonormal · 2010-03-03T06:49:21.706Z · LW(p) · GW(p)

That's too strong a claim; working it out for oneself from the intuitions available at the time probably makes for good experience for a scientist, and it's too bad we lack it. That being said, it will in fact take a lot more effort for that one benefit, and we should see if there's a Third Alternative between being spoon-fed conclusions with tidy derivations and trying to recapitulate the entire history of physics.

comment by TruePath · 2010-04-14T23:43:00.154Z · LW(p) · GW(p)

Finally, I just want to say: surely you don't disagree that there is something different between what happens in physics and what happens in astrology, do you? I don't care about deep principled distinctions here; just at a purely practical level, physics (and the other sciences) let us make strictly more things now than they did 10, 50, or 100 years ago.

The notion of progress I had in mind is much, much weaker than yours. I just mean that sometimes we discover shit that we find very useful (transistor technology) and that the useful consequences of scientific discoveries (be they new theories or just accurate measurements of molecular weight) are rarely lost.

In other words, all I'm saying is that if you wanted nifty fun gadgets to play with, or technologies to save your sick wife, or the like, and you had the chance to pluck 10 great scientists from any time in history to help you out during development, you'd pick them from the future, not the past. That is, physicists can now give engineers theories that let them build both chips and buildings, whereas before they could only give them theories for buildings.


Ultimately, however, the aim of my post was to establish that there isn't some kind of important knowledge best gained through the reading of original sources. The target of my argument was the frequently given claim that spurning these great original works puts you at some kind of real intellectual disadvantage (not just a matter of bad taste) in terms of learning/knowledge relative to those who do read them.

Given that new 'great' originals continue to be published, albeit quite slowly, one can immediately conclude that either we are making progress or there is no reason to believe reading great originals gives you a boost (i.e. helps you make progress). After all, if we aren't making progress, then these new books can't give later generations a boost (that would be progress); hence, one can't justifiably claim that reading great originals is an aid to academic/intellectual progress.

Given that my claim is an entirely negative one, I need not make any assumptions, as you allege. Rather, I'm just offering a reductio of the position that you are simply dismissing from the start.

comment by TruePath · 2010-04-14T23:36:01.474Z · LW(p) · GW(p)

Ultimately, however, the aim of my post was to establish that there isn't some kind of important knowledge best gained through the reading of original sources. The target of my argument was the frequently given claim that spurning these great original works puts you at some kind of 'objective' disadvantage in terms of learning/knowledge relative to those who do read them. Sure, these are fuzzy terms, and I think most of them aren't even really meaningful, but the idea the advocates of this position have in mind is that reading literature classics and other 'great' originals somehow helps you make intellectual contributions more than reading more recent works instead would.

Given that new 'great' originals continue to be published, albeit quite slowly, one can immediately conclude that either we are making progress or there is no reason to believe reading great originals gives you a boost (i.e. helps you make progress). After all, if we aren't making progress, then these new books can't give later generations a boost (that would be progress); hence, one can't justifiably claim that reading great originals is an aid to academic/intellectual progress.

Given that my claim is an entirely negative one, I need not make any assumptions, as you allege. Rather, I'm just offering a reductio of the position that you are simply dismissing from the start.

comment by RobinZ · 2010-03-02T12:46:00.589Z · LW(p) · GW(p)

Very weakly related to the post: I surprised Eliezer Yudkowsky last October with a quote showing off Galileo's rationality.

comment by TruePath · 2010-04-14T23:11:24.886Z · LW(p) · GW(p)

You make decent points about the lack of evidence for 'progress' in methodology. I think it's quite possible that we don't significantly improve the process by which we go from the current best theory to its successor. Of course, to make sense of this notion you would need a more precise idea of what it means to have a better methodology for generating scientific theories. I mean, the first natural way to do this might be to somehow try to measure the percent of the physical world we can explain/predict from initial conditions (with many complications from random events, etc.), but that yields a decreasing rate of methodological progress as a matter of pure mathematics.

If f(t) is a bounded, monotonically increasing, differentiable function, then f'(t) (f prime) must come arbitrarily close to 0 as t goes to infinity. So if f is some measure of the percent of the world physics has explained, then its rate of increase eventually has to approach 0, since there is only so much world to explain.
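A minimal sketch of the calculus behind that claim, restated a bit more carefully (the symbols $\varepsilon$, $t_0$, and $T$ are my own notation, not from the original comment):

Suppose $f'(t) \ge \varepsilon > 0$ for all $t \ge t_0$. By the mean value theorem,
$$ f(T) \;\ge\; f(t_0) + \varepsilon \,(T - t_0) \;\longrightarrow\; \infty \quad \text{as } T \to \infty, $$
contradicting the boundedness of $f$. Hence $\liminf_{t \to \infty} f'(t) = 0$: the measured rate of progress must dip arbitrarily close to zero, even though $f'(t)$ need not converge at every point.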

More generally, saying that a particular scientific methodology works, or works better than another, is equivalent to asserting that induction works, and works better, with respect to such and such a measure of simplicity. All you can do is assume your notion of simplicity gives rise to a good scientific methodology (you can't gain inductive evidence for it), so it doesn't really make sense to measure our progress in scientific methodology.

So if I don't believe in the idea of progress in the scientific method, what did I mean by progress in my post? I put that in another comment, since I felt it better to divide them up.