Justifiable Erroneous Scientific Pessimism

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-08T20:37:32.096Z · score: 14 (20 votes) · LW · GW · Legacy · 114 comments

In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U-235 dust).  If this is the case, then Fermi's estimate of a "ten percent" probability of nuclear weapons may actually have been justifiable, because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" rather than "2%" or "50%", but then I'm not Fermi.

We're all familiar with examples of correct scientific skepticism, such as skepticism about Uri Geller and hydrino theory.  We also know many famous examples of scientists simply making up their pessimism, for example about the impossibility of human heavier-than-air flight.  Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate, from multiple sources, that the Sun was around sixty million years of age.  This was wrong, but only because of new physics - though you could make a case that new physics might well have been expected here, and there was some degree of contrary evidence from geology, as I understand it, and it's not exactly the same as technological skepticism - but still.  Where there are sort of two, there may be more.  Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could have seen coming?

I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then refuse to question them further.  So I'll phrase it more carefully:  I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would have been weaker if argued strictly from contemporary knowledge, after exploring points and counterpoints.  (That way, relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.)  We probably should also not count as "erroneous" any prediction of technological impossibility where the technology took more than, say, seventy years to arrive.

114 comments

Comments sorted by top scores.

comment by CronoDAS · 2013-05-08T21:40:51.026Z · score: 15 (15 votes) · LW · GW

"Continental drift" is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...

comment by David_Gerard · 2013-05-09T12:25:20.178Z · score: 2 (2 votes) · LW · GW

They didn't have a mechanism at all until subduction, and hence plate tectonics, was discovered. The expanding-earth theory was actually considered not implausible by geologists for quite a while - it didn't have anything like a plausible mechanism, but neither did continental drift. I was surprised to discover how recent this was.

comment by benharack · 2013-05-08T22:32:52.496Z · score: 14 (14 votes) · LW · GW

There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn't exist. Then in 2004 Geim and Novoselov isolated graphene for the first time, and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but does not apply to finite crystals. At least, that is how it was explained to me once by a theorist on the subject.

The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf

comment by Luke_A_Somers · 2013-05-09T02:05:34.493Z · score: 15 (15 votes) · LW · GW

Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau's impossibility proof is entirely correct.

And that's why we don't use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don't really deserve to be called crystals.

The pessimism about non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.

comment by Dias · 2013-05-08T21:30:03.700Z · score: 10 (10 votes) · LW · GW

Lord Kelvin was wrong but was he pessimistic? He wasn't saying we could never know the answer, or visit the sun, or anything like that. Yes, he guessed wrongly, and too low, but it doesn't seem to be the case that 'underestimating a quantity' is pessimism. If nothing else, the quantity might be 'number of babies killed'.

comment by Luke_A_Somers · 2013-05-09T02:07:41.239Z · score: 2 (4 votes) · LW · GW

It was pessimistic in the sense that, under his estimate, the sun was steadily cooling, and so we'd all freeze to death long before the real sun presents us with any trouble.

comment by Jack · 2013-05-09T03:13:16.730Z · score: 1 (1 votes) · LW · GW

Did he give an estimate of when we'd all freeze to death?

comment by Plasmon · 2013-05-09T05:22:34.203Z · score: 4 (4 votes) · LW · GW

He estimated the sun was no more than 20 million years old, and presumably did not expect it to last for more than a few tens of millions of years more.

comment by Luke_A_Somers · 2013-05-09T05:10:21.359Z · score: 2 (2 votes) · LW · GW

Not that I know of. Gravitational collapse is a really lousy, short-term source of energy, which is why he gave such a short estimate. Still on the scale of millions of years, I think.
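A back-of-the-envelope version of Kelvin's gravitational-contraction argument is the Kelvin-Helmholtz timescale, t ≈ GM²/(RL): the gravitational binding energy of the Sun divided by its luminosity. A sketch using modern values for the Sun's mass, radius, and luminosity (the point is the order of magnitude, not the exact number Kelvin computed):

```python
# Kelvin-Helmholtz timescale: how long a sun powered only by
# gravitational contraction could shine, t ~ G * M^2 / (R * L).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.96e8       # solar radius, m
L_sun = 3.828e26     # solar luminosity, W

t_seconds = G * M_sun**2 / (R_sun * L_sun)
t_years = t_seconds / 3.156e7    # seconds per year

print(f"{t_years:.2e} years")    # on the order of 3e7 years
```

This lands squarely in the tens of millions of years, which is why Kelvin's estimate was reasonable given the energy sources known at the time.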

comment by Randaly · 2013-05-09T00:54:43.289Z · score: 8 (8 votes) · LW · GW

The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been a parallax in the observations of stars from different positions in the orbit. There was no observable parallax, so Earth probably didn't revolve around the Sun.
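To see how strong this argument was: annual parallax shrinks inversely with distance, and pre-telescopic astronomers could resolve roughly an arcminute at best. A sketch (the 1.3-parsec distance to the nearest star and the one-arcminute resolution limit are illustrative assumptions):

```python
import math

AU = 1.496e11   # Earth-Sun distance, m
PC = 3.086e16   # one parsec, m

def parallax_arcsec(distance_m):
    """Annual parallax (arcseconds) of a star at the given distance."""
    return math.degrees(math.atan(AU / distance_m)) * 3600

# Even the nearest star (~1.3 pc) shows well under an arcsecond.
print(parallax_arcsec(1.3 * PC))   # ~0.77 arcsec

# At a ~60 arcsec (one arcminute) resolution limit, parallax is only
# visible for stars closer than about 1/60 pc -- roughly 3400 AU.
print((PC / 60) / AU)              # ~3400 AU
```

No star is anywhere near that close, so the absence of observed parallax genuinely favored a stationary Earth unless the stars were assumed to be implausibly far away.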

comment by Jack · 2013-05-09T03:18:16.002Z · score: 1 (1 votes) · LW · GW

*there would have been a parallax given assumptions at the time regarding the distance of the stars.

I've wondered though: if there were no planets besides Earth would we have persisted as geocentrists until the 19th century?

comment by SilasBarta · 2013-05-09T07:12:48.522Z · score: 0 (0 votes) · LW · GW

If there were no celestial bodies but Earth and the sun, we would have been just as correct as heliocentrists.

comment by Jack · 2013-05-09T08:33:25.498Z · score: 2 (2 votes) · LW · GW

I don't think that's right.

comment by ArisKatsaris · 2013-05-09T13:08:15.754Z · score: 5 (5 votes) · LW · GW

The center of mass for the Earth-sun system is inside the sun; so, yeah, the heliocentrists wouldn't be "just as correct".

If the two masses were equal, then Earth and Sun would orbit a point that was equidistant to them; and in that scenario heliocentrists and geocentrists would be equally wrong....
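The barycenter claim is easy to check numerically with modern mass and distance values:

```python
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
d = 1.496e11         # Earth-Sun distance, m
R_sun = 6.957e8      # solar radius, m

# Distance from the Sun's center to the Earth-Sun barycenter.
r_barycenter = d * M_earth / (M_sun + M_earth)

print(r_barycenter / 1000)    # ~449 km from the Sun's center
print(r_barycenter < R_sun)   # True: deep inside the Sun
```

Since the Sun outweighs the Earth by a factor of about 330,000, the two-body barycenter sits only a few hundred kilometers from the Sun's center, far inside its ~700,000 km radius.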

comment by Kawoomba · 2013-05-09T14:18:11.418Z · score: -2 (4 votes) · LW · GW

Why privilege the center of mass as the reference point? Do we need to find the densest concentration of mass in the known universe to determine what we call the punctum fixum and what we call the punctum mobile?

As far as I can tell, most of the local universe revolves around me. That may be a common human misconception, seeing as I'm not a black hole, if we only go by centers of mass. But do we have to?

(Also, "densest concentration of mass" would probably be in the bible belt.)

comment by rocurley · 2013-05-09T15:30:28.038Z · score: 3 (1 votes) · LW · GW

I think the center of mass thing is a bit of a red herring here. While velocity and position are all relative, rotation is absolute. You can determine if you're spinning without reference to the outside world. For example, imagine a space station you spin for "gravity". You can tell how fast it's spinning without looking outside by measuring how much gravity there is.

You can work in earth-stationary coordinates; there will just be some annoying extra terms in your math as a result (it's a non-inertial reference frame).
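The spinning-station measurement amounts to the centripetal relation a = ω²r: measure the apparent gravity at the rim and the station's radius, and the spin rate falls out with no reference to the outside world. A sketch (the 100 m radius and 1 g rim gravity are arbitrary example values):

```python
import math

r = 100.0   # station radius, m (arbitrary example)
g = 9.81    # measured apparent "gravity" at the rim, m/s^2

# Centripetal acceleration a = omega^2 * r, so omega = sqrt(a / r).
omega = math.sqrt(g / r)          # rad/s
rpm = omega * 60 / (2 * math.pi)  # revolutions per minute

print(omega)   # ~0.31 rad/s
print(rpm)     # ~3 rpm
```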

comment by SilasBarta · 2013-05-09T17:26:14.904Z · score: 3 (3 votes) · LW · GW

You can determine if you're spinning without reference to the outside world.

Technically, no you can't. Per EY's points on Mach's principle, spinning yourself around (with the resulting apparent movement of stars and feeling of centrifugal stresses) is observationally equivalent to the rest of the universe conspiring to rotate around you oppositely.

Einstein's theory further had the property that moving matter would generate gravitational waves, propagating curvatures. Einstein suspected that if the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you.

The c.g. of the earth/sun solar system would likewise lack a privileged position in such a world.

comment by rocurley · 2013-05-10T00:35:37.109Z · score: 3 (1 votes) · LW · GW

I agree that it's at least quite plausible (as per your post, it's not proven to follow from GR) that if the universe spun around you, it might be exactly the same as if you were spinning. However, if there's no background at all, then I'm pretty sure the predictions of GR are unambiguous. If there's no preferred rotation, then what do you predict to happen when you spin Newton's bucket at different rates relative to each other?

EDIT: Also, although now I'm getting a bit out of my league, I believe that even in the massive external rotating shell case, the effect is minuscule.

EDIT 2: See this comment.

comment by SilasBarta · 2013-05-10T23:55:27.020Z · score: 0 (0 votes) · LW · GW

Are you sure you linked the right comment? That's just someone talking about centripetal vs centrifugal.

comment by rocurley · 2013-05-11T01:53:21.327Z · score: 3 (1 votes) · LW · GW

No, I didn't. It's fixed now, thanks.

comment by satt · 2013-05-10T01:59:23.331Z · score: 1 (1 votes) · LW · GW

You can determine if you're spinning without reference to the outside world.

Technically, no you can't.

Is that correct? Spinning implies rotation implies acceleration, which I'd always thought could be detected without external reference points.

Per EY's points on Mach's principle, spinning yourself around (with the resulting apparent movement of stars and feeling of centrifugal stresses) is observationally equivalent to the rest of the universe conspiring to rotate around you oppositely.

Without taking a stance on Mach's principle or that specific question of observational equivalence, what about a spinning body in an otherwise empty universe? As an extreme example, my own body could spin only so fast before tearing itself apart. Surely this holds even if I'm floating in an otherwise utterly empty universe?

comment by SilasBarta · 2013-05-10T16:54:52.217Z · score: 0 (0 votes) · LW · GW

Is that correct? Spinning implies rotation implies acceleration, which I'd always thought could be detected without external reference points.

This is addressed later in the article, very well IMHO. Let me just give the relevant excerpts:

If you tried to visualize [the entire universe moving together], it seems like you can imagine it. If the universe is standing still, then you imagine a little swirly cloud of galaxies standing still. If the whole universe is moving left, then you imagine the little swirly cloud moving left across your field of vision until it passes out of sight.

But then, ... you can't always trust your imagination. [...]

Suppose that you pick an arbitrary but uniform (x, y, z, t) coordinate system. [... Y]ou might say:

"Since there's no way of figuring out where the origin is by looking at the laws of physics, the origin must not really exist! There is no (0, 0, 0, 0) point floating out in space somewhere!"

Which is to say: There is just no fact of the matter as to where the origin "really" is. [...]

[...]

And now—it seems—we understand how we have been misled, by trying to visualize "the whole universe moving left", ... The seeming absolute background, the origin relative to which the universe was moving, was in the underlying neurology we used to visualize it!

But there is no origin!

comment by satt · 2013-05-10T21:03:19.558Z · score: 0 (0 votes) · LW · GW

I worry I'm missing something obvious, but that EY quote doesn't seem to address my belief (namely, that detecting acceleration doesn't need an external reference point). It just argues there's no absolute origin to use as an external reference point.

comment by arundelo · 2013-05-10T21:32:40.791Z · score: 0 (0 votes) · LW · GW

Silas is talking about this:

Einstein suspected that if the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you. So you could construct the laws of physics in an accelerating or even rotating frame of reference, and end up observing the same laws -- again freeing us of the specter of absolute space.

(I do not think this has been verified exactly [emphasis arundelo's], in terms of how much matter is out there, what kind of gravitational wave it would generate by rotating around us, et cetera. Einstein did verify that a shell of matter, spinning around a central point, ought to generate a gravitational equivalent of the Coriolis force that would e.g. cause a pendulum to precess. [Wow!] Remember that, by the basic principle of gravity as curved spacetime, this is indistinguishable in principle from a rotating inertial reference frame.)

Edit: You are correct from a classical physics standpoint that if you are in a windowless room on a merry-go-round, you can tell whether the merry-go-round is standing still versus spinning at a constant speed. (For instance, you could shoot a billiard ball and see whether its path is straight or curved.) This contrasts with the analogous situation in a windowless train car, where you cannot tell whether the train is standing still versus moving with a constant velocity.

comment by SilasBarta · 2013-05-10T21:49:24.383Z · score: 1 (1 votes) · LW · GW

Right, that (a small portion of it) was what I quoted first, one exchange upthread, and satt still held to the intuition that there are rotational stresses in the absence of the universe's background matter. So I went back/up/down[1] a level to the basic question of when you can rule out a certain "absolute" in nature: when the simplest laws stop requiring it.

The point I was trying to make (which I should have been more specific on) was that, just as the Galilean observation set sufficed to rule out "special" velocities and leave only relative ones, our observation set now has, as an optimal description, laws that give no privilege to any non-relative motion, including higher derivatives of velocity.

[1] whichever preposition would be least offensive

comment by arundelo · 2013-05-10T22:29:59.456Z · score: 0 (0 votes) · LW · GW

Right, that (a small portion of it) was what I quoted first

Ah, sorry. Upthread reading fail on my part.

comment by satt · 2013-05-10T22:05:09.540Z · score: 0 (0 votes) · LW · GW

[EY quote on the covariance of physical law for a spinning body]

Edit: You are correct from a classical physics standpoint that if you are in a windowless room on a merry-go-round, you can tell whether the merry-go-round is standing still versus spinning at a constant speed.

As far as I can tell, what I'm saying holds even for non-spinning accelerating objects, and under quantum physics. According to QFT, a sufficiently sensitive thermometer accelerating through a vacuum detects a higher temperature than a non-accelerating thermometer would. This appears to be a way for a thermometer to tell whether it's accelerating without having to "look" at distant stars & such.
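This is the Unruh effect; the predicted temperature, T = ħa/(2πck_B), is fantastically small for everyday accelerations, which is why "sufficiently sensitive" is doing a lot of work:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
k_B = 1.381e-23     # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature (K) seen by a detector with proper acceleration a."""
    return hbar * a / (2 * math.pi * c * k_B)

print(unruh_temperature(9.81))   # ~4e-20 K for an acceleration of 1 g
```

At 1 g the effect is some twenty orders of magnitude below a kelvin, so it remains a prediction rather than a practical accelerometer.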

comment by nonplussed · 2013-05-14T21:52:02.207Z · score: 0 (0 votes) · LW · GW

Hm, I'm not sure the thermometer can conclude that it's accelerating from seeing the black-body radiation. I think it's equivalent to there being an event horizon behind it emitting Hawking radiation (this happens when you accelerate at a constant rate). The thermometer can't tell if it's next to a black hole or if it's accelerating. Could be wrong though, but I vaguely remember something along these lines.

comment by satt · 2013-05-15T02:25:23.993Z · score: 0 (0 votes) · LW · GW

I don't see anything incorrect in what you say. (Sounds to me like a direct consequence of the equivalence principle, although I'm no GR expert.) But I'm assuming away the possibility of rogue black holes in this hypothetical, since I'm wondering whether a sufficiently sensitive sensor could detect its own acceleration even inside an otherwise empty universe (or at least without reference to the rest of the cosmos).

comment by arundelo · 2013-05-10T22:22:11.937Z · score: 0 (0 votes) · LW · GW

I think I misunderstood what you and Silas were talking about. (Note though that my train thought experiment was about a train with a constant velocity. The billiard ball technique works to detect acceleration of the train even if no rotation is involved.)

comment by shminux · 2013-05-10T22:16:46.434Z · score: 0 (0 votes) · LW · GW

Yes, all acceleration is absolute, not relative. You don't need hypothetical esoteric effects to detect it; an ordinary weighing scale will do. Gravity throws a bit of a quirk into it, of course.

comment by satt · 2013-05-10T22:39:36.976Z · score: 0 (0 votes) · LW · GW

I'm simultaneously reassured (that my intuition's correct) & confused (about SilasBarta & Eliezer's remarks, since they read to me like they contradict my intuition). Maybe I should post a comment on the Sequences post rather than continuing to press the point here, though.

[Edit: originally linked the wrong Sequences post, fixed that.]

comment by gwern · 2013-05-09T02:55:44.437Z · score: 1 (1 votes) · LW · GW

I thought that parallax argument was applied to the stars, not the Sun?

comment by Randaly · 2013-05-09T03:14:30.782Z · score: 4 (4 votes) · LW · GW

Yeah, that's what I meant. (No parallax in star observations -> the Earth isn't moving -> the Sun is revolving around the Earth.)

comment by Luke_A_Somers · 2013-05-09T02:09:01.636Z · score: 0 (0 votes) · LW · GW

That's a justifiable error, but I don't see how it's pessimistic.

comment by CellBioGuy · 2013-05-09T05:12:16.128Z · score: 3 (3 votes) · LW · GW

"Pessimistic" is a loaded term and I'm not sure if it's all that useful in the context of this discussion in the first place.

comment by Luke_A_Somers · 2013-05-09T05:35:02.460Z · score: 1 (1 votes) · LW · GW

It's crucial to the original point that Eliezer was making, which was differentiating technological pessimism from technological optimism.

This isn't technology, and though it makes a difference to the universe as a whole, it wouldn't be better or worse for us either way.

comment by lukeprog · 2013-05-09T00:31:47.233Z · score: 8 (14 votes) · LW · GW

We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.

This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:

"You can build a machine to draw [logical] conclusions for you, but I think you can never build a machine that will draw [probabilistic] inferences."

George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.

[Machines] cannot play chess any more than they can play football.

Mortimer Taube (1960) — not long before computers began to regularly dominate amateur and then expert chess players. (Edit: this one seems wrong)

"[Pattern recognition] is obviously an inductive process, hence it is not a logical process or mechanical sorting that the computer can perform without human aid."

Satosi Watanabe (1974) — a couple decades before both supervised and unsupervised machine learning took off.

Also, Hubert Dreyfus mocked the capabilities of chess computers, and compared AI to alchemy, in Dreyfus (1965) — a mere two years before he was defeated by the chess computer Mac Hack.

comment by DanielLC · 2013-05-09T04:45:05.214Z · score: 14 (18 votes) · LW · GW

[Machines] cannot play chess any more than they can play football.

Technically, he was correct.

comment by NancyLebovitz · 2013-05-09T16:36:25.567Z · score: 0 (0 votes) · LW · GW

I like the idea of football (soccer) played by quadrupeds.

comment by gjm · 2013-05-09T08:53:06.420Z · score: 11 (15 votes) · LW · GW

Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.

[...] there are analog relationships in real chess -- such as the emptiness of a line [...] which cannot be directly handled by any digital machine. These analog relationships can be approximated digitally [...] in order to determine whether a given line is empty [...] such a set of calculations is not identical to the visual recognition that the space between two pieces is empty. A large part of the enjoyment of chess [...] derives from its deployment or topological character, which a machine cannot handle except by elimination. If game is used in the usual sense -- that is, as it was used before the word was redefined by computer enthusiasts with nothing more serious to do -- it is possible to state categorically that machines cannot play games. They cannot play chess any more than they can play football.

Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have something like the same experience as a human player has: seeing the spatial relationships between the pieces, for example. He thinks that's something machines fundamentally cannot do, and that is why he thinks machines cannot play chess.

Now, for the avoidance of doubt, I think he was badly wrong about all that. Someone blind from birth can learn to play chess, and I hope Taube wouldn't really want to say that such a player isn't really playing chess because she isn't having the same visual/spatial experiences as a sighted player. And most likely one day computers (or some other artificially constructed machines) will be having experiences every bit as rich and authentic as humans have. (Taube wrote a book claiming this was impossible. I haven't seen it myself, but from what little I've read about it its arguments were very weak.)

But his main claim about machines here isn't one that's been nicely falsified by later events. We have machines that do a very good job of evaluating positions and choosing moves, but he never claimed that that was impossible. We don't yet have machines that play chess in the very strong sense he's demanding, or even the weaker sense of using anything closely analogous to human visual perception to play. (I suppose you might say that programs using a "bitboard" representation are doing something a little along those lines, but somehow I doubt Taube would have been convinced.)

... Also, Taube wasn't a scientist or a computer expert or a chess expert or even a philosopher. He was a librarian. A librarian is a fine thing to be, but it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.

comment by gwern · 2013-05-09T16:16:56.970Z · score: 7 (19 votes) · LW · GW

You accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books, hence

it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.

I accuse you of being highly misleading in at least two ways here:

  1. in 1960, a librarian was one of the occupations - outside actual computer-based occupations - most likely to have hands-on familiarity with computers & things like Boolean logic, for the obvious reason that being a librarian is often about research where computers are invaluable. A librarian could well have extensive experience, and so it's not much of a mark against him.
  2. Mortimer Taube turns out to be the kind of 'librarian' who exemplifies this; the little byline to his letter about "Documentation Incorporated" should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids' books, but because you did not see fit to add any background on what sort of 'librarian' Taube was, I will:

    ...He is on the list of the 100 most important leaders in Library and Information Science of the 20th century.[1] He was important to the Library Science field because he invented Coordinate Indexing, which uses “uniterms” in the context of cataloging. It is the forerunner to computer based searches. In the early 1950s he started his own company, Documentation, Inc. with Gerald J. Sophar. Previously he worked at such institutions as the Library of Congress, the Department of Defense, and the Atomic Energy Commission. American Libraries calls him “an innovator and inventor, as well as scholar and savvy businessman.”[1] Current Biography called him the “Dewey of mid-twentieth Librarianship.”[2]

    ...Mortimer Taube received his Bachelors of Arts in Philosophy from the University of Chicago in 1933. He then pursued a PhD in the same field from the University of California at Berkeley in 1935.

    ...In 1944, Mortimer Taube left academia behind to become a true innovator in the field of science, especially Information Science. After the war, there was a huge boom of scientific invention, and the literature to go with it. The contemporary indexing and retrieval methods simply could not handle the inflow.[2] New technology was needed to meet this high demand and Mortimer Taube delivered. He dabbled in many projects during and after the war. In 1944 he joined the Library of Congress as the Assistant Chief of the General Reference and Bibliographical Division.[4] He was then head of the Science and Technology project from 1947-1949.[2] He worked for the Atomic Energy Commission, which was established after "the Manhattan District Project wanted to evaluate and publish the scientific and engineering records showing the advancements made during the war."[2]

    ...Mortimer Taube also worked heavily with documentation, the literature pertaining to the new scientific innovation.[2] He was a consultant and Lecturer on Scientific Documentation and was even the editor of American Documentation in the years 1952-1953.[2] In 1952, Taube founded his own company, Documentation, Inc. with Gerald J. Sophar and two others.[4] Documentation, Inc. was the "largest aerospace information center" and did work for NASA.[4] Here Taube developed Coordinate Indexing, an important innovation in the field of Library Science. Taube defines Coordinate Indexing as, "the analysis of any field of information into a set of terms and the combination of these terms in any order to achieve any desired degree of detail in either indexing or selection."[6] Coordinate Indexing used "uniterms" to make storing and retrieving information easier and faster.[2]

    ...Taube had split coordinate indexing into two categories, item and term indexing.[7] It used punch cards and a machine reader to search for specific items or documents by terms or keywords.[7] Documentation, Inc. also brought forth the IBM 9900 Special Index Analyzer, also known as COMAC.[2][8] COMAC stood for “continuous multiple access controller.” This machine handled data punch cards, used for information storage and retrieval.[2] It made “logical relationships among terms.”[2] Even though Documentation Inc. started as a small company, it soon grew to well over 700 members.[4]

    ...Even though his technology seems to be the forerunner of OPACs and computer cataloging systems, Taube himself personally didn’t like the idea of computers taking over modern life.[4] He believed that “computers didn’t think.”[4] Even though he is an important figure in Information Science, he also seemed to remain interested in philosophy. He was writing a book on the subject before he died.[4]

    So to summarize: he was a trained philosopher and tech startup co-founder who invented new information technology, handled documentation tasks, was familiar with the cybernetics literature, and traveled in the same circles as people like Vannevar Bush.

    And you write

    A librarian is a fine thing to be, but it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.

    !

An upvote for correctly contextualizing what Taube wrote, and a mental downvote for being lazy or deceptive in your final paragraph.

comment by gjm · 2013-05-09T19:03:58.146Z · score: 6 (12 votes) · LW · GW

You accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books

I really can't think of a polite way to say this, so:

Bullshit.

  1. I wasn't accusing Luke of anything; I was disagreeing with him. Disagreement is not accusation. When I want to make an accusation, I will make an accusation, like this one: You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes, and I have to say I'm pretty shocked to see someone as generally excellent as you behaving in such a way.

  2. I do not think, and I did not say, and I had not the slightest intention of implying, that "a librarian is a harmless drudge who just shelves books".

Allow me to remind you how Luke's comment begins. The boldface emphasis is mine.

We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.

This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:

Taube was, despite his many excellent qualities, not a scientist as that term is generally understood, and he was, despite his many excellent qualities, not working in "the field of AI research".

(Yes, I know the Wikipedia page says he was "a true innovator in the field of science". Reading what it says he did, though, I really can't see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don't think that "not science" is in any way the same sort of thing as "not valuable" or "not important" or "not difficult". What the creators of (say) the Firefox web browser did was important and valuable and difficult, but happens not to be science. What Beethoven did was important and valuable and difficult, but happens not to be science. What Martin Luther King did was important and valuable and difficult, but happens not to be science.)

Pointing this out doesn't mean I think there's anything wrong with being a librarian. When I said "a librarian is a fine thing to be", I meant it. (And, for the avoidance of doubt, it is my opinion both when "librarian" means "someone who shelves books in a library" and when it means "a world-class expert on organizing information in catalogues".)

Now, having said all that, I should add that you are quite right about one thing: when I said that Taube was neither a computer expert nor a philosopher, I was oversimplifying. (Not least because I hadn't looked deeply into Taube's career.) He was an important innovator in the use of punched cards for document indexing, which is quite a bit like being a computer expert; and he was a PhD in philosophy, which is quite a bit like being a philosopher. None the less, I stand by what I said: neither being a world-class expert in document indexing, nor knowing a lot about punched-card reading machinery, nor being a PhD in philosophy, seems to me to be the kind of expertise that makes it particularly startling if one's wrong about whether machines can play chess.

(And, once again, for the avoidance of doubt, I am not in the least trying to belittle his expertise and creativity. I just don't see that they were the kind of expertise and creativity that make it startling for someone to be wrong about the possibilities of computer chess-playing.)

[EDITED to clarify a bit of wording and add some emphasis. ... And again, later, to add a missing negative; oops. Also, while I'm here, two other remarks. 1: I regret the confrontational tone this exchange has taken; but I don't see any way I could have responded sufficiently forcefully to the accusations levelled at me without perpetuating it. 2: I see a lot of downvotes are flying around in this subthread. For the record, I haven't cast any.]

comment by gwern · 2013-05-09T19:17:57.880Z · score: -4 (14 votes) · LW · GW

I wasn't accusing Luke of anything; I was disagreeing with him.

You were claiming he cherrypicked the example; I'll quote again:

... Also, Taube wasn't a scientist or a computer expert or a chess expert or even a philosopher. He was a librarian. A librarian is a fine thing to be, but it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.

If that were true, Luke would be seriously cherrypicking, and that is not a harmless error but the sort of biased selection and lying which one would rightly take into account in considering flipping the bozo bit on someone and henceforth ignoring anything they said. This isn't a harmless mistake of attribution or minor peccadilloe that might hurt a single clause or subpoint or tangential argument; this is the sort of thing that discredits an entire line of thought. Maybe you didn't mean it as an accusation, but I treat it as one, since if it were true it would be very serious. In much the same way, maybe someone bringing up the fact that the lead author on a drug study has taken millions of dollars from the drug company doesn't mean anything serious by it (hey, they're just discussing the paper), but I would take it very seriously indeed and maybe even ignore the study entirely.

You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes

Duly noted, but see above, I don't especially care what you actually think, I care just what you wrote and whether it is a serious issue with Luke's comment.

I do not think, and I did not say, and I had not the slightest intention of implying, that "a librarian is a harmless drudge who just shelves books".

Right. I'm sure you actually meant "I think librarians are fantastic smart people who know everything about everything and have many valid and expert opinions, however it just so happens that chess and AI and cybernetics happen to be among the few areas where their informed commentary is worthless and 'it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here'".

Yes, I know the Wikipedia page says he was "a true innovator in the field of science". Reading what it says he did, though, I really can't see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don't think that "not science" is in any way the same sort of thing as "not valuable" or "not important" or "not difficult".

If working on key organization schemes and pushing forward the field of information science cannot be construed as 'science' no matter how broadly defined, then I guess we'd better exempt computer science and AI from that moniker too.

He was an important innovator in the use of punched cards for document indexing, which is quite a bit like being a computer expert; and he was a PhD in philosophy, which is quite a bit like being a philosopher. None the less, I stand by what I said: neither being a world-class expert in document indexing, nor knowing a lot about punched-card reading machinery, nor being a PhD in philosophy, seems to me to be the kind of expertise that makes it particularly startling if one's wrong about whether machines can play chess.

ಠ_ಠ Actually, that doesn't quite convey my impression of your no-true-Scotsmanning, I'll try that again: ಠ_ಠ ಠ_ಠ ಠ_ಠ A PhD in philosophy is not enough to be called a philosopher? zomgwtfbbq.

comment by gjm · 2013-05-09T20:10:23.245Z · score: 3 (7 votes) · LW · GW

You were claiming he cherrypicked the example. [...] If that were true, Luke would be seriously cherrypicking and that is not a harmless error [...].

This appears to me to be an instance of a common error: assuming that when someone says something, they intended every inference you find it natural to make from it. It doesn't appear to me, at all, that for Luke to have been wrong in the way I say he was he needs to have been a liar or bozo or whatever else you're trying to suggest I accused him of being.

(I'm puzzled, too. We seem to be agreed that Luke's quotation gives a misleading impression about what claim Taube was making, and -- rightly, in my opinion -- you don't appear to have concluded from this that Luke was dishonestly cherrypicking and needs the bozo bit flipped. But I don't understand, at all, why giving a misleading impression about Taube's relevant expertise is a worse thing to "accuse" him of than giving a misleading impression about what Taube was claiming. Either of them means that the quotation from Taube fails to serve the purpose Luke put it there for.)

I don't especially care what you actually think, I care just what you wrote and whether it is a serious issue with Luke's comment.

If you don't especially care what I actually think, then what the hell are you doing putting words into my mouth about how librarians are uninteresting low-status unintellectual drudges? (Which, just in case it needs saying again, in no way resemble my actual opinion.)

I'm sure you actually meant [...]

I meant what I said. I did not mean what you said. I also did not mean the particular equally-ridiculous thing you now sarcastically suggest I could have meant. I honestly have no idea what I've done to bring forth all this hostility, but if you want an actual reasoned discussion then I politely suggest that you stop flinging shit at me and then we can have one.

cannot be construed as 'science' no matter how broadly defined

Those last five words are yours, not mine. I'm sure you can find definitions according to which Taube's work was "science". I'm also sure you can quickly and easily think of plenty of instances where "no matter how broadly defined" ends up meaning "way too broadly defined for most purposes". (Here's an extreme example: Richard Dawkins is on record as accepting the term "cultural Christian" as applying to him. I would accordingly not say that RD cannot be construed as 'Christian' no matter how broadly defined -- but, none the less, for most purposes describing him as a Christian would be silly. Taube's work is certainly nearer to being science than Richard Dawkins is to being a Christian; the point of the example is to clarify my point, not to be a perfect analogy.)

A PhD in philosophy is not enough to be called a philosopher?

Ian Bostridge has a doctorate in history, and spent some time as an academic historian. However, I would not now call him a historian but a singer. (Or, more specifically, a tenor.) Angela Merkel has a PhD in physics, but I wouldn't now call her a physicist but a politician (or, perhaps, some more august term along those lines). George Soros has a PhD in philosophy but I wouldn't call him a philosopher.

So: no, the fact that someone got a PhD in philosophy in 1935 is not sufficient reason to call them a philosopher in 1960. As I say, having a PhD in philosophy is certainly quite like being a philosopher; it's certainly not wholly irrelevant; I oversimplified and I shouldn't have. But it's not the same thing.

comment by gwern · 2013-05-11T01:21:06.783Z · score: 1 (5 votes) · LW · GW

This appears to me to be an instance of a common error: assuming that when someone says something, they intended every inference you find it natural to make from it.

It's a common error indeed, and one that is justifiable when enough other people would draw that same inference. Yeah, Hitler said to kill all the Jews, but he really meant to kill the Jew inside, not real Jews. If I may quote your other comment:

(I agree that private_messaging's comment is extremely silly, and I regret the fact that what I wrote seems to have encouraged it.)

Indeed.

If you don't especially care what I actually think, then what the hell are you doing putting words into my mouth about how librarians are uninteresting low-status unintellectual drudges? (Which, just in case it needs saying again, in no way resemble my actual opinion.)

Right, because you just threw that in for no reason...

I'm sure you can find definitions according to which Taube's work was "science".

And I even gave several. Feel free to deal with the examples; do you think computer science and AI are not 'science'?

Here's an extreme example: Richard Dawkins is on record as accepting the term "cultural Christian" as applying to him. I would accordingly not say that RD cannot be construed as 'Christian' no matter how broadly defined -- but, none the less, for most purposes describing him as a Christian would be silly.

I don't see what's the least bit silly about describing him as a "cultural Christian", especially if he accepts the label. He was indeed raised in a Christian culture and implicitly accepts a lot of the background beliefs like belief in guilt and sin (heck, I still think in those terms to some degree and say things like 'goddamn it'); even if we don't go quite as far as Moldbug in diagnosing Dawkins as holding to a puritanical secular Christianity, the influence is ineradicable. There is no view from nowhere.

Ian Bostridge has a doctorate in history, and spent some time as an academic historian. However, I would not now call him a historian but a singer. (Or, more specifically, a tenor.)

Wow, so not only is he a trained historian who has published & defended his doctorate of original research, you describe him as actually having been in academia post-graduate school, and you still won't describe him as a historian? Would I describe him as a historian? Heck yes. Because if I won't even grant that description to Bostridge, I don't know who the heck I would grant it to. You know, describing someone as a historian is not committing to describing him as a 'great historian' or a 'ground-breaking historian' or a 'famous historian'. You don't need to be Marvin Minsky to be called 'an AI researcher' and you don't need to be a pre-eminent figure to be described as a worker in a field. Even a bad programmer is still a 'programmer'; someone who has moved up into management is still a programmer even if they haven't written a large program in years.

Angela Merkel has a PhD in physics, but I wouldn't now call her a physicist but a politician (or, perhaps, some more august term along those lines).

From Wikipedia: "After being awarded a doctorate (Dr. rer. nat.) for her thesis on quantum chemistry,[17] she worked as a researcher and published several papers."

But no, all that is chopped liver because gjm doesn't think she's a physicist/chemist.

George Soros has a PhD in philosophy but I wouldn't call him a philosopher.

I imagine Soros would be disappointed to hear that; his Popperian philosophy grounds his 'reflexivity' on which he has written extensively and believes can significantly influence economics as it's currently practiced.

So: no, the fact that someone got a PhD in philosophy in 1935 is not sufficient reason to call them a philosopher in 1960.

It is more than sufficient. Taube had excellent training (the University of Chicago, especially in the 1930s thanks to Adler & Hutchins, was a philosophy powerhouse, and still is to some extent - ranked #24 in the Anglosphere by Leiter), received his PhD, kept up with the issues both as a practitioner and commenter, and was reportedly working on a philosophy book when he died. He was a philosopher. And your other examples were hardly better.

comment by gjm · 2013-05-11T03:29:12.840Z · score: 0 (2 votes) · LW · GW

On flipping the bozo bit

Before you bother to read any of what follows, I would be grateful if you would answer the following question: Have you, in fact, bozo-bitted me? Because I've been proceeding on the assumption that it is in principle possible for us to have a reasoned discussion, but that's looking less and less true, and if I'm wasting my time here then I'd prefer to stop.

On librarians and librarianship

Unless I misunderstand you badly, you are arguing either that I have been lying constantly about this or that I am appallingly unaware of my own opinions and attitudes and you know them better than I do. And, if I understand this remark correctly ...

Right, because you just threw that in for no reason...

... your basis for this is that you can't think of any reason why I might have mentioned that Taube was a librarian other than that I have "contempt for librarians" and that I wanted to put Taube down by calling him names.

So, allow me to propose a very simple alternative explanation (which is, in fact, the correct explanation, so far as I can tell by introspection): I said it because, having listed a bunch of things that weren't Taube's profession, it seemed appropriate to say what his profession actually was.

On the basis of this thread so far, I'm guessing that you still don't believe me; so let me ask: Is there, in fact, anything I could possibly say or do that would convince you that I do not hold librarians in contempt? Because it looks to me as if there isn't, and it seems rather odd that describing someone who was in fact a librarian as a librarian could be such strong evidence of contempt for librarians as to outweigh all future testimony from the person in question.

On professions and the like

There are at least three things you can mean by saying someone is, e.g., "a biologist". (1) That they know something about biology and think about it from time to time. (2) That doing biology is their job, or at least that they do it as much and as well as you could reasonably expect if it were. (3) That, regardless of how much biology they actually do, they have at least some (fairly high) threshold level of expertise in it.

Angela Merkel is surely a physicist(1). She is not a physicist(2) now, although she used to be. Whether she's a physicist(3) depends on what threshold we pick and on the extent to which she's kept up her expertise. Similarly, Ian Bostridge is a historian(1), not a historian(2) so far as I know, and might or might not be a historian(3), and similarly for George Soros and philosophy.

In general, being an X PhD is a guarantee of being an Xer(1) and (at least for a while; knowledge decays) of being an Xer(3) for some plausible choices of threshold; it is of course no guarantee of being an Xer(2).

You appear to be taking the position that it is never reasonable to deny that someone with an X PhD is "an Xer". That seems like excessive credentialism to me.

The relevant notion of "scientist", "philosopher", etc., here was never made explicit. I think I've had meaning 2 in mind sometimes and meaning 3 in mind sometimes. Eliezer's original post about Pascalian wagers takes Enrico Fermi as its leading example, and talks about "famous scientists" and "prestigious scientists" in general. The present post takes Lord Kelvin as another example, but also points to skepticism about flying machines (which was not generally from famous scientists). So I don't know what the "right" threshold for meaning 3 would be here, but it seems like it should be fairly high.

Bostridge, Merkel and Soros seem to me like pretty decent examples of people who are no longer Xers(2), and probably aren't Xers(3) with a high threshold. I could be wrong about some or all of them, though; I mentioned them only to make the more general point that holding a doctoral degree is no guarantee of being an Xer(2) or Xer(3) with high threshold.

On Taube and his qualifications

Taube was an expert in the indexing of documents, and an innovator in that field. In your opinion, does that amount to expertise in computer chess-playing comparable to, say, Fermi's expertise in nuclear fission?

Taube was (I think; perhaps it was actually others in his company who were concerned with this) an expert in automated punched-card reading machines. Does that amount to expertise in computer chess-playing comparable to, etc.?

Taube held a PhD in philosophy; I think his thesis was on the history of philosophical thought about causality. Does that amount to, etc., etc.?

I repeat: Mortimer Taube was an impressive person. He was clearly very smart. He accomplished more than I am ever likely to. I do not hold him in contempt. Still less do I hold him in contempt for having been a librarian. I simply don't think that his opinions on computer chess-playing are the same kind of thing as Fermi's opinions on nuclear fission, or Kelvin's on the age of the earth.

comment by gwern · 2013-05-11T03:54:11.091Z · score: -3 (3 votes) · LW · GW

Before you bother to read any of what follows, I would be grateful if you would answer the following question: Have you, in fact, bozo-bitted me?

I haven't yet, but if you're going to persist in claiming that people with PhDs in philosophy are not even allowed the description 'philosopher', it's tempting because why should I bother with people who abuse language and redefine words so abysmally?

So, allow me to propose a very simple alternative explanation (which is, in fact, the correct explanation, so far as I can tell by introspection): I said it because, having listed a bunch of things that weren't Taube's profession, it seemed appropriate to say what his profession actually was.

Which was pursuant to your belief that a mere librarian could have nothing to say about the issue, could not be any sort of authority or indicator of the times, and so does not belong in the list lukeprog presented. Yes, I've said all this before.

On the basis of this thread so far, I'm guessing that you still don't believe me; so let me ask: Is there, in fact, anything I could possibly say or do that would convince you that I do not hold librarians in contempt?

The obvious reading of your concluding paragraph was obvious, before you started trying to defend it.

Angela Merkel is surely a physicist(1). She is not a physicist(2) now, although she used to be. Whether she's a physicist(3) depends on what threshold we pick and on the extent to which she's kept up her expertise. Similarly, Ian Bostridge is a historian(1), not a historian(2) so far as I know, and might or might not be a historian(3), and similarly for George Soros and philosophy...The relevant notion of "scientist", "philosopher", etc., here was never made explicit. I think I've had meaning 2 in mind sometimes and meaning 3 in mind sometimes.

Indeed. And I think it's absurd to restrict usage of descriptions to the rarefied and elevated #2s (how many biologists get tenure?) and even more absurd to restrict it to the even more rarefied and elevated #3s.

(Merkel & Bostridge were both #2s at some point, but seem likely to never be #3s in those fields; whether we could consider Soros a #3 - because he claims his philosophical approach of reflexivity guides his philanthropy & investing and so his inarguably historic roles there are part and parcel of philosophy - is an interesting question, but getting a bit far afield.)

Taube was an expert in the indexing of documents, and an innovator in that field. In your opinion, does that amount to expertise in computer chess-playing comparable to, say, Fermi's expertise in nuclear fission?

It gives him a great deal of expertise in organizing and searching data mechanically, which is relevant to AI; and inasmuch as chess-playing falls under AI... No, he didn't write his thesis on chess-playing, but here again I would say it's absurd to insist on such doctrinaire rigidity that no one can have respectable expertise without being the expert on a topic. (I would note in passing that Fermi's laurea thesis was not on fission, but X-ray imaging; is that close enough? Well, probably, but then why is indexing and search so out of bounds? Search at Google involves a great deal of AI work, so clearly there is a real connection at some point in time...)

Taube held a PhD in philosophy; I think his thesis was on the history of philosophical thought about causality. Does that amount to, etc., etc.?

I'm afraid I have shocking news for you, many respected philosophers in AI may not have written their theses directly on AI: Dennett's dissertation on consciousness was etc. etc. etc? Or consider John Searle's early work on speech acts, was it etc etc etc? Keeping in mind the recent praise on LW for his work...

I simply don't think that his opinions on computer chess-playing are the same kind of thing as Fermi's opinions on nuclear fission, or Kelvin's on the age of the earth.

All I can do is point to my previous summary and observe that Taube was one of the few contemporaries who grappled with the cybernetics issues, was trained philosophically, built a tech career on primitive computers etc etc. His observations are not chopped liver.

comment by gjm · 2013-05-11T09:41:35.285Z · score: 1 (1 votes) · LW · GW

(I'm going to be brief, because I'm losing hope that you're going to pay any attention to anything I say. I haven't the least intention of bozo-bitting you globally because you have been consistently extremely impressive elsewhere, but in this particular discussion it seems that at least one of us -- and I'm perfectly willing to consider that it may be me -- is being sufficiently irrational that we're doomed to produce more heat than light. More specifically, what it looks like to me is that you're treating me as an enemy combatant who needs to be defeated, rather than a person who disagrees with you who needs to be either taught or learned from or both.)

[EDITED to add: well, it turns out I wasn't so brief. But I tried.]

Which was pursuant to your belief that [...]

What's annoying here is not so much your evident belief that I am lying through my teeth about my own opinion about librarians (why on earth would I even do that?) as your refusal even to acknowledge that your fantasy about that opinion is anything other than a mutually-agreed truth.

[...] Is there, in fact, anything I could possibly say or do that would convince you that I do not hold librarians in contempt?

The obvious reading of your concluding paragraph was obvious, before you started trying to defend it.

I'm sorry, was that meant to be an answer to the question I asked?

I wasn't asking the question just to make a rhetorical point. Your behaviour in this thread suggests to me that as soon as you read the last sentence of what I wrote you leapt to a conclusion, got angry about it, and came out fighting, and that ever since you've refused even to consider the possibility that you leapt to the wrong conclusion.

It's absurd to insist on such doctrinaire rigidity that no one can have respectable expertise without being the expert on a topic.

It's just as well that I'm not insisting on any such thing. So far as I know, there were other people around who were about as expert on nuclear physics as Fermi. I am not an expert on the history, so maybe that's wrong, but I haven't been assuming it's wrong and when I say "comparable to Fermi's expertise in nuclear fission" I don't mean "expertise as of the world's greatest expert", I mean "expertise as of someone very expert in the field". Because it seems to me that that's the level of expertise that's actually relevant to Eliezer's original point and his more recent question.

many respected philosophers in AI may not have written their theses directly on AI

Of course. But what makes them respected philosophers in AI, and means that if they make pronouncements about AI that turn out to be very wrong then they might be examples of the phenomenon Eliezer was talking about, is not the fact that they are philosophy PhDs but their further body of work in the field that is related to AI.

("Might be" rather than "are" because I have the impression that a sizeable number of people around here hold that philosophy is so terribly diseased a discipline that being a respected philosopher in AI is no ground for paying much attention to their opinions on AI.)

On the other hand, you've been arguing (I think) that Taube's philosophical expertise made him an expert in the nascent field of AI, and the only evidence we have for his philosophical expertise is that he was a philosophy PhD. So it's of some relevance what stuff this tells us he'd studied and thought about in depth. The stuff in question seems pretty interesting, but I don't see how it could have shed much light on the prospects for computer chess-playing.

I was slightly wrong about the topic of (the book I think was derived from) Taube's thesis, by the way; it wasn't only a historical study of other philosophers' thinking about causation but also "an attempt to solve the causal problem", as the title puts it. Apparently his solution involved saying that causation and determination are incompatible and hence that causation implies freedom.

comment by Luke_A_Somers · 2013-05-14T22:48:41.807Z · score: 0 (0 votes) · LW · GW

in this particular discussion it seems that at least one of us -- and I'm perfectly willing to consider that it may be me -- is being sufficiently irrational that we're doomed to produce more heat than light.

Doing a perfect post on this topic would be hitting a dead horse right between the eyes at a thousand paces.

comment by [deleted] · 2013-05-11T16:53:15.836Z · score: 0 (2 votes) · LW · GW

And I think it's absurd to restrict usage of descriptions to the rarefied and elevated #2s (how many biologists get tenure?)

Doing X for a living is a lower bar than being tenured.

comment by satt · 2013-05-10T01:44:30.480Z · score: 1 (1 votes) · LW · GW

peccadilloe

Peccadillo. (Sorry; couldn't resist the temptation to flag that accidental autology for posterity.)

comment by wedrifid · 2013-05-09T16:25:16.996Z · score: 2 (4 votes) · LW · GW

So to summarize: he was a trained philosopher and tech startup co-founder who invented new information technology and handled documentation tasks who was familiar with the cybernetics literature and traveled in the same circles as people like Vannevar Bush.

Thank you for your research. I was misled by the grandparent.

comment by lukeprog · 2013-05-09T17:35:49.093Z · score: 1 (1 votes) · LW · GW

You accuse Eliezer of being misleading in taking a quote from a mere "librarian",

"Eliezer" should be "lukeprog".

comment by gwern · 2013-05-09T17:38:55.099Z · score: 3 (3 votes) · LW · GW

Hah, whups. And so it goes - you correct Eliezer's lack of examples, gjm corrects your description of Taube, I correct gjm's description of Taube, and you correct my description of gjm's description...

comment by yli · 2013-05-09T19:13:02.999Z · score: 0 (0 votes) · LW · GW

Would a chess program that has a table of all the lines on the board that keeps track of whether they are empty or not and that uses that table as part of its move choosing algorithm qualify? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.
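The data structure yli describes can be made concrete. A minimal sketch (hypothetical; this is not Taube's or yli's actual program, and the names are invented for illustration): a position object that maintains a table of the board's lines (here just ranks and files) and an emptiness flag for each, which a move-choosing routine could consult.

```python
# Hypothetical illustration of "a table of all the lines on the board that
# keeps track of whether they are empty or not".
RANKS = [[(r, f) for f in range(8)] for r in range(8)]
FILES = [[(r, f) for r in range(8)] for f in range(8)]

class Position:
    def __init__(self, occupied):
        self.occupied = set(occupied)          # squares holding a piece
        self.lines = RANKS + FILES             # the "table of all the lines"
        self.empty = [not any(sq in self.occupied for sq in line)
                      for line in self.lines]  # emptiness flag per line

    def open_lines(self):
        """Indices of lines a move-choosing algorithm could treat as open."""
        return [i for i, e in enumerate(self.empty) if e]

# A lone piece on e1 (rank 0, file 4) leaves rank 0 and file e non-empty.
pos = Position(occupied=[(0, 4)])
assert 0 not in pos.open_lines()        # rank 0 is occupied
assert (8 + 4) not in pos.open_lines()  # file e (index 12) is occupied
```

Whether "recognizing" emptiness via such a lookup table counts as recognizing it in Taube's intended sense is, of course, exactly the qualia-flavored question yli raises.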

comment by gjm · 2013-05-09T19:32:42.421Z · score: 1 (1 votes) · LW · GW

Yup. I strongly suspect that Taube was in fact "into qualia territory", or something along those lines, when he wrote that.

comment by private_messaging · 2013-05-09T11:33:48.911Z · score: -8 (12 votes) · LW · GW

Crackpots frequently build lists of scientists being wrong, misrepresent quotes, and the like, but doing that for librarians? That's quite outstanding.

comment by gwern · 2013-05-09T16:15:55.681Z · score: 2 (8 votes) · LW · GW
  1. I dislike gjm's and your contempt for librarians. My favorite writer, Jorge Luis Borges, was a librarian. Being a librarian does not disqualify one from commenting.
  2. His rebuttal letter to Norbert Wiener was apparently thought worth publishing by Science, right after a Harvard physicist's letter and before another letter. This indicates that his thinking was hardly marginal, considered low-quality, or uninformed, and tells us what the thinking was like at the time - which is all the quote was supposed to do!
  3. Taube is hardly 'just' a librarian. See my reply to gjm.
comment by gjm · 2013-05-09T23:04:43.849Z · score: 2 (4 votes) · LW · GW

I dislike gjm's and your contempt for librarians.

My "contempt for librarians" is sufficiently fictional that I am happy to pay the 5 karma point penalty I am currently paying (on account of the very negative comment upthread) to reiterate: I do not have contempt for librarians, nor did I express contempt for librarians; you are drawing an incorrect inference and should update whatever mental model led you to draw it.

(I agree that private_messaging's comment is extremely silly, and I regret the fact that what I wrote seems to have encouraged it.)

comment by private_messaging · 2013-05-09T18:26:03.985Z · score: -1 (7 votes) · LW · GW

I think you misunderstood. No contempt for librarians - merely a surprise that a librarian would be honoured with a misinterpretation by crackpots.

comment by shminux · 2013-05-08T21:46:06.513Z · score: 8 (8 votes) · LW · GW

Off the top of my head, how about the Landau pole? A famous and usually right genius calculated that the gauge theories of quantum fields are a dead end, and set Soviet (and to some degree Western) physics back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.

EDIT: hmm, I'm having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.

comment by shminux · 2013-05-09T06:51:10.431Z · score: 5 (5 votes) · LW · GW

Here is another famous example: Chandrasekhar's limit. Eddington rejected the idea of black holes ("I think there should be a law of Nature to prevent a star from behaving in this absurd way!"). Says Wikipedia:

Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing.

I guess this is not quite what you are asking for, since the math was on Chandrasekhar's side, and Eddington was pinning his hopes on "new physics". To be fair, recent discussions about horizon firewalls could be such new physics.

comment by betterthanwell · 2013-05-10T04:33:27.862Z · score: 2 (2 votes) · LW · GW

"I think there should be a law of Nature to prevent a star from behaving in this absurd way!" (Eddington, 1935)

Eddington erroneously dismissed M_(white dwarf) > M_limit ⇒ "a black hole", but didn't he correctly anticipate new physics?
Do event horizons (Finkelstein, 1958) not prevent nature from behaving in "that absurd way", so far as we can ever observe?

comment by shminux · 2013-06-25T22:52:19.555Z · score: 0 (0 votes) · LW · GW

It's hard to know what Eddington meant by "absurd way". Presumably he meant that this hypothetical law would prevent matter from collapsing into nothing. Possibly if Chandrasekhar had figured out the strange properties of the event horizon back in 1935 and had emphasized that whatever weird stuff is happening beyond the final Chandrasekhar limit is hidden from view, Eddington would not have reacted as harshly. But that took another 20-30 years, even though the relevant calculations require at most 3rd year college math. Besides, Chandrasekhar's strength was in mathematics, not physics, and he could not compete with Eddington in physics intuition (which happened to be quite wrong in this particular case).

comment by private_messaging · 2013-05-09T06:37:40.336Z · score: 4 (6 votes) · LW · GW

The general success rate of breakthroughs is pretty damn low. So I'd argue that most examples of "invalid" pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can't anticipate, are not irrational. They simply reflect the absence of strong evidence in favour of success (and the absence of strong evidence against unknown obstacles) at the time of assessment, and the corresponding regression towards the mean rate of success. They're merely not as hindsight-resistant as Fermi's example. You look back at history and see the things that succeeded. Go read the archive of some old journals, and note the zillions of amazing breakthroughs that did not pan out.

If the bomb had not relied on the unusual U-235, Fermi would not have been irrational in assigning 10% probability to the emission of secondary neutrons from fission — it is something that most likely either happens for all fissions, or does not happen for any, so the clever "there would be one" argument doesn't work irrespective of U-235. U-235 is not the most general valid objection; it's just the objection for which sources are easiest to find. No one did the silly task of writing out that the production of secondary neutrons is not a statistically independent fact across different nuclei, and we're lucky that there's just one candidate isotope, so we don't have to, either.

comment by ESRogs · 2013-05-10T20:05:00.244Z · score: 0 (0 votes) · LW · GW

I'm having trouble understanding your second paragraph. This is probably just due to missing background knowledge on my part, but would you mind explaining what you mean by:

the clever "there would be one" argument

and

U235 is not the most general valid objection

Thanks!

comment by private_messaging · 2013-05-10T22:10:13.374Z · score: 1 (1 votes) · LW · GW

There was a really silly argument about Fermi's 10% estimate, scattered over several threads (which the OP talks about). Yudkowsky had been arguing that Fermi's estimate was too low. He came up with the idea that surely there would have been one element (out of many) that would have worked, so the probability should have been higher. That was wrong because a) it's not as if some element's fissions released neutrons and some didn't, and b) there was only one isotope to start from (U-235), not many.

comment by ESRogs · 2013-05-12T21:42:49.698Z · score: 1 (1 votes) · LW · GW

it's not as if some element's fissions released neutrons and some didn't

Do all elements' fissions release neutrons?

comment by private_messaging · 2013-05-19T19:29:33.013Z · score: 2 (2 votes) · LW · GW

Yes. The issue is that the argument "look at the periodic table, it's so big, there would be at least one" requires that the fact of fission releasing neutrons be assumed independent across nuclei.
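The difference between the two assumptions can be made concrete with a toy calculation (the 10% prior and the count of 20 candidates are purely illustrative, not anyone's actual numbers):

```python
# If neutron emission were an independent coin flip per candidate isotope,
# many candidates would make "at least one works" nearly certain. If it is
# a single shared physical fact, extra candidates change nothing.

def p_at_least_one_independent(p, n):
    """P(at least one of n candidates works), assuming independence."""
    return 1 - (1 - p) ** n

def p_at_least_one_correlated(p, n):
    """P(at least one works) when success is one fact shared by all candidates."""
    return p

print(p_at_least_one_independent(0.10, 20))  # ~0.88: the "there would be one" intuition
print(p_at_least_one_correlated(0.10, 20))   # 0.10: unchanged, as argued above
```

The "there would be one" argument is essentially the first function; the objection is that the physics puts us in the second case.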

comment by ESRogs · 2013-05-21T19:21:31.856Z · score: 0 (0 votes) · LW · GW

Gotcha, thanks.

comment by CronoDAS · 2013-05-08T21:18:30.065Z · score: 4 (4 votes) · LW · GW

I'm not sure if this is justifiable or just an old-fashioned blunder...

On the subject of stars, all investigations which are not ultimately reducible to simple visual observations are…necessarily denied to us… We shall never be able by any means to study their chemical composition.

-- Auguste Comte, 1835

I'm leaning towards "blunder" myself...

comment by TsviBT · 2013-05-08T22:35:15.293Z · score: 10 (10 votes) · LW · GW

Yeah, blunder. Wikipedia says:

In the 1820s both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy. In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks, thereby introducing an alternative mechanism to flame spectroscopy.

comment by wedrifid · 2013-05-09T00:42:16.702Z · score: 4 (4 votes) · LW · GW

On the subject of stars, all investigations which are not ultimately reducible to simple visual observations are…necessarily denied to us… We shall never be able by any means to study their chemical composition.

Well, the first half seems approximately correct. The second sentence should have begun with "And by clever application of this means we shall...".

comment by [deleted] · 2013-05-14T17:37:41.934Z · score: 3 (3 votes) · LW · GW

Even if you interpret “visual” as ‘mediated by photons’, there's such a thing as neutrino astronomy.

comment by sketerpot · 2013-05-08T22:27:59.740Z · score: 4 (4 votes) · LW · GW

It wasn't until the 1850s that Ångström discovered that elements both emit and absorb light at characteristic wavelengths, which is what spectroscopic analysis of stars is based on, so I'm leaning toward justifiable.

comment by Jack · 2013-05-09T20:53:59.604Z · score: 3 (3 votes) · LW · GW

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235

This has interesting repercussions for Fermi's paradox.

comment by JoshuaZ · 2013-05-09T20:59:46.278Z · score: 5 (5 votes) · LW · GW

Yes, particularly in the context that you and I discussed earlier that intelligent life arising earlier might have had an easier time wiping itself out. Although the consensus there seemed to be that it wouldn't be a large enough difference to matter for serious filtration issues.

comment by tgb · 2013-05-09T17:02:45.861Z · score: 3 (3 votes) · LW · GW

I posted the following on a quotes page a few months back. I don't know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman's formulation of QED for many years after it was first introduced.

It is interesting to note that Bohr was an outspoken critic of Einstein's light quantum (prior to 1924), that he mercilessly denounced Schrodinger's equation, discouraged Dirac's work on the relativistic electron theory (telling him, incorrectly, that Klein and Gordon had already succeeded), opposed Pauli's introduction of the neutrino, ridiculed Yukawa's theory of the meson, and disparaged Feynman's approach to quantum electrodynamics.

[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]

-David Griffiths, Introduction to Elementary Particles, 2008 page 24

comment by tgb · 2013-05-10T16:21:07.892Z · score: 2 (4 votes) · LW · GW

Here's an example of the 'opposite' - a case of unjustifiable correct optimism:

Columbus knew the Earth was round, but he should also have known the radius of the Earth and the size of Eurasia well enough to know that the voyage west to Asia was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.

This is probably not a very useful example and I wouldn't be surprised to see that there were plenty more of these examples.

comment by JoshuaFox · 2013-05-09T07:20:37.377Z · score: 2 (2 votes) · LW · GW

Kuhn's Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school -- fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.

comment by Bruno_Coelho · 2013-05-10T16:18:19.496Z · score: 1 (1 votes) · LW · GW

To him (Kuhn), it isn't evidence that maintains the old paradigm's status quo, but persuasion. Old fellas make remarks about the virtues of their theory. New folks in academia have to convince a good number of people to make the new theory relevant.

comment by JoshuaFox · 2013-05-11T20:34:54.928Z · score: 1 (1 votes) · LW · GW

Yes, "Science advances one funeral at a time", but this, from Wikipedia, is a pretty good summary of a typical "scientific revolution":

"...Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, Copernicus's model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility."

comment by James_Miller · 2013-05-09T02:24:08.389Z · score: 2 (2 votes) · LW · GW

Thomas Malthus' view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.

comment by Jack · 2013-05-09T03:15:50.924Z · score: 20 (22 votes) · LW · GW

Not clear his view is erroneous given suitable values for "long run".

comment by gwern · 2013-05-09T02:54:58.390Z · score: 0 (2 votes) · LW · GW

He would have been right if not for the sustained growth given to us by the industrial revolution.

How so? Last I checked, human populations could still pop out children, if they wanted to, faster than the average real global growth rate since the IR of ~2%.

comment by James_Miller · 2013-05-09T03:59:10.182Z · score: 3 (3 votes) · LW · GW

What's relevant to whether we are in a Malthusian trap is the actual birth rate, not what the birth rate would be if people wanted to have far more children.

comment by gwern · 2013-05-09T04:06:35.415Z · score: 5 (7 votes) · LW · GW

I'll be more explicit then: the 'sustained growth' is almost irrelevant since per the usual Malthusian mechanisms it is quickly eliminated. What made Malthus wrong, what he was pessimistic about, was whether people would exercise "moral restraint" - in other words, he didn't think the demographic transition would happen. It did, and that's why we're wealthy.

comment by SilasBarta · 2013-05-09T07:16:53.638Z · score: 2 (2 votes) · LW · GW

But how do you know it's the "moral restraint" that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources? In fact, the moral restraint could be keeping us closer to the catastrophe than if we had been producing more humans.

comment by gwern · 2013-05-09T15:33:02.779Z · score: 1 (1 votes) · LW · GW

But how do you know it's the "moral restraint" that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources?

Because population growth can outpace innovation growth. This is not a hard concept.

comment by SilasBarta · 2013-05-09T17:19:21.184Z · score: 0 (0 votes) · LW · GW

I know. But your post seemed to be taking the position in favor of population growth (change) as the relevant factor rather than innovation. I was asking why you (seemed to have) thought that.

comment by gwern · 2013-05-09T17:24:20.108Z · score: 3 (3 votes) · LW · GW

Population growth and innovation are two sides of a scissor: innovation drives potential per capita up, population growth drives it down. But the blade of population growth is far bigger than the blade of innovation growth, because everyone can pump out children and few can pump out innovation.

Hence, innovation can be seen as necessary - but it is not sufficient, in the absence of changes to reproductive patterns.

comment by SilasBarta · 2013-05-09T17:45:54.017Z · score: 1 (1 votes) · LW · GW

But the blade of population growth is far bigger than the blade of innovation growth, because everyone can pump out children and few can pump out innovation.

Okay, that's where I disagree: each additional person is also another coin toss (albeit heavily stacked against us) in the search for innovators. The question then is whether the possible innovation, weighted by the probability of a new person being an innovator (and to what extent), favors more or fewer people.

There's no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.
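The "no reason one blade is necessarily larger" point can be seen in a toy expected-value model (every number here is a hypothetical parameter, not a historical estimate):

```python
# Each of n extra people consumes one unit of carrying capacity; each is an
# innovator with probability p, and an innovator expands carrying capacity
# by g units. The sign of the net effect depends entirely on whether p*g > 1.

def net_capacity_change(n, p, g):
    """Expected carrying-capacity change from n additional people."""
    return n * p * g - n  # expected innovation gain minus consumption

print(net_capacity_change(1_000_000, 0.001, 2000))  # p*g = 2 > 1: more people help
print(net_capacity_change(1_000_000, 0.001, 500))   # p*g = 0.5 < 1: more people hurt
```

Which regime the real world sits in is exactly what the two commenters disagree about; the model only shows that either is arithmetically possible.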

comment by gwern · 2013-05-09T19:06:22.062Z · score: 1 (1 votes) · LW · GW

There's no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.

There is no a priori reason, of course. We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.

Yet, the world we actually live in doesn't look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.) It was not China or India which launched the Scientific and Industrial Revolutions.

comment by SilasBarta · 2013-05-10T17:07:01.714Z · score: 0 (0 votes) · LW · GW

Yet, the world we actually live in doesn't look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.)

The ability to produce lots of children does not at all work against the ability of innovators and innovator probability to overcome their resource-extraction load. In order for your strategy to actually work against the potential innovation, you would have to also suppress the intelligence (probability) of your children to the point where the innovation blade is sufficiently small. And you would have to do it without that action itself causing the die-off, and while ensuring they can continue to execute the strategy on the next generation. And keep in mind, you're working against the upper tail of the intelligence bell curve, not the mode.

It was not China or India which launched the Scientific and Industrial Revolutions.

Innovation in this context needn't be revolution-size. China and India (and the Islamic Empire) did innovate faster than the West, and averted many Malthusian overtakings along the way (probably reaching 800 years ahead at their zenith). Malthus would have known about this at the time.

comment by gwern · 2013-05-10T17:14:20.398Z · score: 0 (0 votes) · LW · GW

The ability to produce lots of children does not at all work against the ability of innovators and innovator probability to overcome their resource-extraction load.

I'm not following your terms here. Obviously the ability to produce lots of children does in fact sop up all the additional production, because that's why per capita incomes on net essentially do not change over thousands of years and instead populations may get bigger. So you can't mean that, but I don't know what you mean.

China and India (and the Islamic Empire) did innovate faster than the West, and averted many Malthusian overtakings along the way (probably reaching 800 years ahead at their zenith). Malthus would have known about this at the time.

They innovated faster at some points, arguably. And the innovation, such as in farming techniques, helped support a higher population — and a poorer population. Malthus would have known this about China — he did, and used China as an example of a number of things, for example the consequences of a subsistence wage close to starvation http://en.wikisource.org/wiki/An_Essay_on_the_Principle_of_Population/Chapter_VII :

The only true criterion of a real and permanent increase in the population of any country is the increase of the means of subsistence. But even this criterion is subject to some slight variations which are, however, completely open to our view and observations. In some countries population appears to have been forced, that is, the people have been habituated by degrees to live almost upon the smallest possible quantity of food. There must have been periods in such countries when population increased permanently, without an increase in the means of subsistence. China seems to answer to this description. If the accounts we have of it are to be trusted, the lower classes of people are in the habit of living almost upon the smallest possible quantity of food and are glad to get any putrid offals that European labourers would rather starve than eat. The law in China which permits parents to expose their children has tended principally thus to force the population. A nation in this state must necessarily be subject to famines. Where a country is so populous in proportion to the means of subsistence that the average produce of it is but barely sufficient to support the lives of the inhabitants, any deficiency from the badness of seasons must be fatal.

comment by randallsquared · 2013-05-10T11:16:46.801Z · score: 0 (0 votes) · LW · GW

We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.

That's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.

comment by gwern · 2013-05-10T16:29:35.245Z · score: 0 (0 votes) · LW · GW

Sure, it's just an example which does not seem to be impossible but where the blade of innovation is clearly bigger than the blade of population growth. But the basic empirical point remains the same: the world does not look like one where population growth drives innovation in a virtuous spiral or anything remotely close to that*.

* Except, per Miller's final reply, in the very wealthiest countries post-demographic-transition, where reproduction is sub-replacement and growth is maybe even net negative (as Japan and South Korea are approaching); in these exceptional countries some more population growth may maximize innovation growth and increase rather than decrease per capita income.

comment by James_Miller · 2013-05-09T13:21:19.902Z · score: 1 (1 votes) · LW · GW

I can't prove this, but I believe that in the United States and Western Europe we would still be rich (in the sense that calorie deprivation wouldn't pose a health risk to the vast majority of the population) if the birth rate had stayed the same since Malthus's time.

comment by gwern · 2013-05-09T15:39:08.897Z · score: 0 (0 votes) · LW · GW

if the birth rate had stayed the same since Malthus's time.

That makes no sense to argue: Malthus's time was part of the demographic transition. Of course I would agree that if the demographic transition continued post-Malthus — as it did — we would see higher per capita income (as we did).

But look up the extremely high birth rates of some times and places (you can borrow some figures from http://www.marathon.uwc.edu/geography/demotrans/demtran.htm ), apply modern United States & Western Europe infant and child mortality rates, and tell me whether the population growth rate is merely much higher than the real economic growth rates of ~2% or extraordinarily higher. You may find it educational.
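The arithmetic being gestured at here can be sketched directly (the 3% population growth rate is an illustrative stand-in for "high fertility with modern child mortality", not a figure from the linked source):

```python
# Compound a pre-transition-style population growth rate against ~2% real
# economic growth over a century and compare the factors.
pop_rate, econ_rate, years = 0.03, 0.02, 100  # illustrative assumptions

pop_factor = (1 + pop_rate) ** years    # population multiplies ~19x
econ_factor = (1 + econ_rate) ** years  # output multiplies ~7x

# Per capita output shrinks by roughly this factor under these assumptions:
print(pop_factor / econ_factor)
```

Even a one-percentage-point gap between the two rates, compounded for a century, swamps the gains — which is the Malthusian mechanism in miniature.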

comment by James_Miller · 2013-05-09T16:46:34.914Z · score: 2 (2 votes) · LW · GW

But I believe that from the point of view of maximizing the per person wealth of the United States and Western Europe the population growth rate has been much, much too low since the industrial revolution. (I admittedly have no citations to back this up.)

comment by gwern · 2013-05-09T17:02:05.487Z · score: 2 (2 votes) · LW · GW

Maybe. That's not the same thing as what you said initially, though.

comment by private_messaging · 2013-05-09T05:43:59.797Z · score: 1 (7 votes) · LW · GW

We'll just evolve for restraint not to work any more.

comment by [deleted] · 2013-05-10T23:24:06.992Z · score: 2 (2 votes) · LW · GW

(Was there a SMBC comic or something about men evolving a condom-breaking mechanism in their penis?)

comment by private_messaging · 2013-05-11T05:09:42.166Z · score: 7 (7 votes) · LW · GW

We're rapidly evolving condom-not-putting-on mechanism in the brain.

comment by gwern · 2013-05-09T15:30:57.147Z · score: 2 (2 votes) · LW · GW

Yes, that's the question: is the demographic transition temporary? I've brought it up before: http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/

comment by Error · 2013-05-14T12:08:46.445Z · score: 0 (0 votes) · LW · GW

I was always under the impression that what thwarted his hypothesis was the rise of effective and widespread birth control. I remember reading one of his works and noting that it was operating on the assumption that, to reduce birthrate to sustainable levels, sex would have to be reduced, and that was unlikely. It is unlikely, but it's also mostly decoupled from childbirth now, at least in the developed world.

Have I misinterpreted something here?

comment by Eugine_Nier · 2013-05-15T03:22:44.683Z · score: 2 (2 votes) · LW · GW

I believe he considered the possibility of birth control, referring to it as "immorality".

comment by [deleted] · 2013-05-09T04:12:42.948Z · score: 2 (4 votes) · LW · GW

"Watch out for that cliff!"

"It looks pretty far off, and besides, we're turning left soon anyway."

"But we could keep accelerating!"

comment by gwern · 2013-05-09T15:32:02.862Z · score: 1 (3 votes) · LW · GW

Your reply seems completely irrelevant to the Malthusian point that population growth can always exceed growth in total factor productivity, and so it is population growth — or lack of growth — which dominates and determines per capita income.

comment by someonewrongonthenet · 2013-05-13T07:42:13.799Z · score: 1 (1 votes) · LW · GW

This blog post claims that only a few years before the Wright brothers' success, the consensus was that flying machines would necessarily have to be less dense than air (like hot air balloons).

comment by wedrifid · 2013-05-09T00:51:39.728Z · score: -1 (5 votes) · LW · GW

it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U235 dust).

All is such a strong word unless supplemented with qualifiers. I question the plausibility of the arguments in supporting that absolute. The route "wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock" would still be available.

comment by Luke_A_Somers · 2013-05-09T02:11:10.818Z · score: 8 (10 votes) · LW · GW

In context, Fermi was considering something rather more short-term: WW2.

That said, he may not have scoped his statement to such a small scale.

comment by wedrifid · 2013-05-09T02:15:53.320Z · score: 1 (9 votes) · LW · GW

In context, Fermi was considering something rather more short-term: WW2.

One of many suitable and sufficient qualifiers that could make the arguments plausible.