post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
Every now and then, one reads an article about the Singularity in
which some reporter confidently asserts, "The Singularitarians,
followers of Ray Kurzweil, believe that they will be uploaded into
techno-heaven while the unbelievers languish behind or are extinguished
by the machines."
I don't think I've ever met a single Singularity fan, Kurzweilian or otherwise, who
thinks that only believers in the Singularity will go to upload heaven
and everyone else will be left to rot. Not one.
(There are a very few pseudo-Randian types who believe that only the truly selfish who accumulate lots of money will make it, but they expect e.g. me to be damned with the rest.)
But if you start out thinking that the Singularity is a loony religious meme, then it seems like Singularity believers ought to believe that they alone will be saved. It seems like a detail that would fit the story.
This fittingness is so strong as to manufacture the conclusion without any particular observations. And then the conclusion isn't marked as a deduction. The reporter just thinks that they investigated the Singularity, and found some loony cultists who believe they alone will be saved.
Or so I deduce. I haven't actually observed the inside of their minds, after all.
Has any rationalist ever advocated behaving as if all
people are reasonable and fair? I've repeatedly heard people say,
"Well, it's not always smart to be rational, because other people
aren't always reasonable." What rationalist said they were? I would deduce: This is something that non-rationalists believe it would "fit" for us to believe, given our general blind faith in Reason. And
so their minds just add it to the knowledge pool, as though it were an observation. (In this case I encountered yet another example recently enough to find the reference; see here.)
(Disclaimer: Many things
have been said, at one time or another, by one person or another, over
centuries of recorded history; and the topic of "rationality" is popularly enough discussed that some self-identified "rationalist" may have described "rationality" that way at one point or another. But I have yet to hear a rationalist say it, myself.)
I once read an article on Extropians (a certain flavor of transhumanist) which asserted
that the Extropians were a reclusive enclave of techno-millionaires
(yeah, don't we wish). Where did this detail come from? Definitely not from observation. And considering the sheer divergence from reality, I doubt it
was ever planned as a deliberate lie. It's not just easily falsified,
but a mark of embarrassment to give others too much credit that way
("Ha! You believed they were millionaires?"). One suspects, rather,
that the proposition seemed to fit, and so it was added -
without any warning label saying "I deduced this from my other beliefs,
but have no direct observations to support it."
There's also a general problem with reporters, which is that they
don't write what happened, they write the Nearest Cliche to what
happened - which is very little information for backward inference, especially if
there are few cliches to be selected from. The distance from actual Extropians to the Nearest Cliche "reclusive enclave of techno-millionaires" is kinda large. This may get a separate post at some point.
My actual nightmare scenario for the future involves well-intentioned AI researchers who try to make
a nice AI but don't do enough math. (If you're not an expert you can't
track the technical issues yourself, but you can often also tell at a
glance that they've put very little thinking into "nice".) The
AI ends up wanting to tile the galaxy with tiny smiley-faces, or
reward-counters; the AI doesn't bear the slightest hate for humans, but
we are made of atoms it can use for something else. The most probable-seeming result is
not Hell On Earth but Null On Earth, a galaxy tiled with paperclips or something
equally morally inert.
The imaginary position that gets invented because it seems to "fit" - that is, fit the folly that the other believes is generating the position - is "The Singularity is a dramatic final conflict
between Good AI and Evil AI, where Good AIs are made by
well-intentioned people and Evil AIs are made by ill-intentioned people."
In many such cases, no matter how much you tell people what you really believe, they don't update! I'm not even sure this is a matter of any deliberately justifying decision on their part - like an explicit counter that you're concealing your real beliefs. To me the process seems more like: They stare at you for a moment, think "That's not what this person ought to believe!", and then blink away the dissonant evidence and continue as before. If your real beliefs are less convenient for them, the same phenomenon occurs: words from the lips will be discarded.
There's an obvious relevance to prediction markets - that if there's an outstanding dispute, and the market-makers don't consult both sides on the wording of the payout conditions, it's possible that one side won't take the bet because "That's not what we assert!" In which case it would be highly inappropriate to crow "Look at those market prices!" or "So you don't really believe; you won't take the bet!" But I would guess that this issue has already been discussed by prediction market advocates. (And that standard procedures have already been proposed for resolving it?)
I'm wondering if there are similar Imaginary Positions in, oh, say,
economics - if there are things that few or no economists believe, but
which people (or journalists) think economists believe because it seems
to them like "the sort of thing that economists would believe". Open general question.
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Robin_Hanson2 ·
2008-12-23T17:50:46.000Z · LW(p) · GW(p)
Honestly, almost everything the ordinary person thinks economists think is wrong. Which is what makes teaching intro to econ such a challenge. The main message here is to realize you don't know nearly as much as you might think about what other groups out there think, especially marginalized and colorful groups. Doubt everything you think you know about the beliefs of satanists, theologians, pedophiles, free-lovers, marxists, mobsters, futurists, UFO folk, vegans, and yes economists.
comment by Psy-Kosh ·
2008-12-23T18:33:22.000Z · LW(p) · GW(p)
I'm not sure this is exactly the same thing, but it's probably related:
The way people sometimes... creatively misunderstand... various fields of endeavor.
Think along the lines of quantum mysticism stuff and the way that Godel's theorem is abused by religious types. (not to mention the ye olde "but the second law of thermodynamics proves you can't have evolution" that they love to trot out)
I'm not sure this is exactly the same phenomenon, but it does seem to be part of a larger "creative misunderstanding of another's position to somehow support my own" thing.
Actually, just remembered one I saw that was almost painful.
I don't remember the exact quote, but it went something like this:
"I don't really know anything at all about quantum mechanics, but I believe it does prove the truth behind the ancient eastern wisdom." (or something to that effect)
I'm pretty sure the person who said that was being serious.
My first thought was obviously "but... if you don't actually know anything about it, and you know you don't know, then why do you..."
Replies from: wallowinmaya
↑ comment by David Althaus (wallowinmaya) ·
2011-05-28T14:46:02.248Z · LW(p) · GW(p)
I had a similar experience with a friend of mine who admired Thomas Mann. And, well, science was not his strong point:
He: "I just finished The Magic Mountain and I believe Thomas Mann anticipated Einstein's theory of relativity!"
Me: "Uh, what..."
He: "On one page he writes about how the watch hands divide the space of the clock-face into several areas! He anticipated that space and time are related!!"
And just in case: Mann started writing The Magic Mountain in 1912.
Replies from: Psy-Kosh
comment by Joshua_Fox ·
2008-12-23T20:00:14.000Z · LW(p) · GW(p)
Minor point perhaps, but in the field I once studied, diachronic linguistics, people always want to know what the oldest language is, and no amount of explanation will convince them that their question is off-base.
comment by JamesAndrix ·
2008-12-23T20:02:45.000Z · LW(p) · GW(p)
To give the journalist the greatest benefit of the doubt: in what capacity could it be said that Singularitarians think only Singularitarians will be saved?
Off the top of my head: if uploading is consensual, only people willing to be uploaded will be uploaded (or made immortal, or cognitively enhanced). If they foom without the rest of humanity... it fits the story pretty well.
comment by Roland2 ·
2008-12-23T21:07:44.000Z · LW(p) · GW(p)
I write as an ex-Christian. Common misconceptions:
- All Christians believe in creationism (as opposed to evolution).
- All Christians believe that the Bible is 100% correct and the inerrant word of God.
comment by Disciple_of_Dawkins ·
2008-12-23T21:32:58.000Z · LW(p) · GW(p)
I remember a clip from Richard Dawkins' documentary where he confronts a guy running a large church (was later caught doing something naughty - at least according to their rules). The guy just smiles smugly and creatively misunderstands everything Dawkins says.
Those folks are just simply lost. The doc could have been called "Beyond salvation". Brings up painful memories of militant GeebusCamp moms... ugh... I'll study some logic to cleanse my mind of the filthy memes... :P
comment by Glen_Raphael ·
2008-12-23T21:34:07.000Z · LW(p) · GW(p)
The issue of "failing to update" probably deserves its own post. The state of the world tends to change over time. Even people who bother to look stuff up to determine whether their views are in accordance with current reality and revise accordingly can't afford to constantly keep checking back to see if any relevant facts have changed. The first time you get in an argument you might look the facts up but from then on when the same issue comes up you fall back on your previously-researched conclusion.
This is one reason why old scientists need to die out before new ideas can receive suitable consideration.
I tend to think a lot of positions people hold that appear currently absurd were once validly held - either by that person, or by a source that person respects - at some point in the past. For example, many who think they want stronger gun control laws hold in their heads the idea that you're much more likely to be mugged in the US than in Great Britain... because that used to be true, back when they first formed their opinions on the subject. Or consider Macintosh versus Windows arguments that allege Macs are/aren't more/less expensive and are/aren't much faster than comparable PCs; all those views are correct, depending on the reference frame of the speaker.
comment by Fortune_Elkins ·
2008-12-23T21:39:17.000Z · LW(p) · GW(p)
"If uploading is consensual, only people willing to be uploaded will be uploaded"
How funny you should mention this. I listened to a debate on this just on Sunday afternoon. It seemed agreed that the real issue revolved around the meaning of informed consent, and how that changes when uploading becomes possible.
The most common reasons people choose to die - depression, Alzheimer's, cancer - might be made moot by that advance alone, eliminating any "rational" reason to die, one side thought. One woman argued uploading would be an easy extension of EU human rights; to deny people uploading would be as barbarous as the death penalty.
I think she was trying to make an explicitly anti-Hansonian claim - she argued that uploading would actually increase the value and meaning of life, not lessen it. This may be a distinctive European form of social thought, I don't know.
comment by Kazuo_Thow ·
2008-12-23T21:41:04.000Z · LW(p) · GW(p)
"This is one reason why old scientists need to die out before new ideas can receive suitable consideration."
So giving these scientists full ability to update their beliefs isn't an acceptable solution?
comment by Lord ·
2008-12-23T21:45:05.000Z · LW(p) · GW(p)
Now Kurzweil fits the description of techno-millionaire. I have no idea if he is an Extropian, but it does seem plausible. People don't usually label themselves, define themselves completely consistently and coherently, or distinguish between their beliefs and those of others - which leaves others to do it, and naturally enough get it wrong.
I, for one, find the whole natural vs. artificial intelligence argument little more than projections of inadequacy and fear. Far more likely to me is natural and artificial intelligence melding into complexes that one can no longer define as natural or artificial except on trivial irrelevant bases. I doubt we will even recognize a singularity until it is long past with historians debating what constituted it just as scientists debate what constitutes natural and artificial.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-12-23T21:53:02.000Z · LW(p) · GW(p)
James, that's a good example - I remember being shocked to find out that there are various kinds of contract clauses that judges will ignore. It made me appreciate much more deeply why you have to have a lawyer look it over. Similarly, I used to have no idea of the philosophy behind stare decisis - I thought it was ingrained deference to the past a la Judaism.
Roland, since many Christians do believe those things, this isn't the sort of entirely imaginary position I'm talking about - the one that just gets manufactured from thin air because it seems so "truthy".
comment by Lior ·
2008-12-23T22:54:46.000Z · LW(p) · GW(p)
I think people believe economists all lean libertarian and Republican. In reality, they are mostly Democrats.
comment by Philip_Goetz ·
2008-12-23T22:56:56.000Z · LW(p) · GW(p)
I recently had 2 occasions where
- I said X
- Someone else said, No, Y
- I thought they were wrong
- I later realized: Not X, but Z
- then went back and looked, and saw Y = Z
In both cases, I didn't understand Y the first time because I had expectations for how X would be misunderstood, and looked for indications of them in Y, and on finding terms and phrases that matched my expectations stopped parsing Y prematurely.
comment by JulianMorrison ·
2008-12-23T23:48:52.000Z · LW(p) · GW(p)
I propose there exists a confusion-recovery mode for conversation. It triggers when what you just heard makes no sense to you. Evolution expects this to happen when the speaker is rationalizing or dishonest. The instinct is to get suspicious, stop interpreting words literally, and look for an in-framework explanation that fits the other person's expected bias. I also think this has an element of preferring premature certainty to valid confusion.
If there's a huge inferential gap, you'll trigger this mode immediately.
comment by anon19 ·
2008-12-24T00:37:11.000Z · LW(p) · GW(p)
I don't think there are scientists, who, in their capacity as scientists, debate what constitutes natural and artificial.
comment by Paul_Crowley2 ·
2008-12-24T01:46:57.000Z · LW(p) · GW(p)
Very recently experienced exactly this phenomenon: someone discussing atheists who think "all religion/religious stuff is bad" - up to and including, for example, the music of Bach, or drinking and celebrating at Christmas. They seemed convinced that such atheists exist. I doubt it, or at least I have never heard of or met them, and I know for a fact that, for example, all four horsemen of atheism have made explicit statements to the contrary.
Your disclaimer is an annoying one to have to make, and of course this problem comes up whenever this move is made in discussion; your counterpart says "well, but some singularitarians believe that, don't they?" and you can't actually prove there are none, and you have the sneaking fear that given the vastness of the Internet a judicious Google search might just turn up someone crazy enough to vindicate them; but of course a handful of anonymous loons on the Internet sought specifically for their particular brand of madness does not a position worthy of discussion make.
comment by frelkins ·
2008-12-24T02:12:57.000Z · LW(p) · GW(p)
"wouldn't at all claim that uploading can't increase the value and meaning of life."
Despite the confusing-to-some negative form of phrasing here, my impression is that I do in fact understand your position on this. However, the woman on Sunday believed she was arguing against you. That was exactly why I used this example here in Imaginary Positions; sorry if it seemed unclear to you.
My impression is that many people who do not regularly read OB with care come across your ideas in other places where they may not be well-stated, for example in TierneyLab or Wikipedia.
To set the record straight, my impression from reading all your papers is that you do actually argue for a basically positive future. Being an em - and I now personally expect to be an em! - will be life-enhancing.
In short, I think it will be more like Gentle Seduction than the downbeat existence Tierney seems to describe. I am buying my Emotiv helmet ASAP!
comment by michael_vassar3 ·
2008-12-24T04:14:24.000Z · LW(p) · GW(p)
Eliezer: The distinction between direct observation and deduction is pretty ambiguous for a Bayesian, is it not?
Also, MANY rationalists advocate "giving people the benefit of the doubt", which for them implies "behaving as if all people are reasonable and fair". Furthermore, almost all rationalists - you, for instance - advocate stating literally true beliefs to people, rather than stating the beliefs you have most reason to expect to be most informative or to produce the best results. MANY people refrain from becoming more rational out of fear that they would have to do the same, and out of a justified belief that doing so would cripple their efficacy in life.
James Miller: Good call!
That point about non-lawyers deserves a post of its own somewhere. I seriously wonder where they got that idea. Strangest of all, they seem to have generalized that misconception to invent the "laws of nature", which really are literal.
Paul Crowley: Both my wife and I have had brief phases when we were atheists of the type you question exists.
comment by Kaj_Sotala ·
2008-12-24T15:49:40.000Z · LW(p) · GW(p)
Also, on the topic of the Singularity, half of the people I meet seem to think the concept is obviously absurd because "the Singularity is all about infinite growth, and that's physically impossible". Has anybody ever actually claimed that the Singularity would be literally infinite anything?
Replies from: JoshuaZ
↑ comment by JoshuaZ ·
2010-05-26T22:11:47.097Z · LW(p) · GW(p)
Has anybody ever actually claimed that the Singularity would be literally infinite anything?
IIRC, Kurzweil discusses the possibility in The Singularity is Near (in the context of making new universes). But it seems to be more in the "wow, gee!" category than anything he's asserting is at all likely.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-12-24T16:51:41.000Z · LW(p) · GW(p)
Eliezer: The distinction between direct observation and deduction is pretty ambiguous for a Bayesian, is it not?
Um... not at all, actually. A key insight into causal networks consists of giving prior-probability messages and likelihood-evidential messages two separate pathways to travel along, and recombining the two only after the messages have propagated separately. Like counting soldiers in a line using a distributed algorithm by having each soldier report the number of soldiers behind (and passing that number + 1 forward) and having each soldier report the number of soldiers forward (and passing that number + 1 behind) and only recombining the two messages afterward, rather than mixing them up as they pass.
So you will generally want a very crisp distinction between your reasons to believe something because of what you believe about its generating process, and your reasons to believe something because of what you have observed of its effects.
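The soldier-counting analogy can be sketched in a few lines of Python (my own illustration of the scheme described above, not code from the post): a forward message and a backward message propagate independently along the line, and are recombined only at the end - just as prior and likelihood messages travel separately in a causal network.

```python
def count_soldiers(n):
    """Each of n soldiers in a line learns the total line length by
    combining two independently propagated messages."""
    # Forward pass: soldier i hears "k soldiers behind you, counting
    # themselves" and passes k + 1 forward.
    forward = [0] * n
    for i in range(n):
        forward[i] = (forward[i - 1] if i > 0 else 0) + 1
    # Backward pass: the symmetric message from the front of the line.
    backward = [0] * n
    for i in reversed(range(n)):
        backward[i] = (backward[i + 1] if i < n - 1 else 0) + 1
    # Only now are the two messages recombined. Each soldier counted
    # themselves in both directions, so subtract 1.
    return [forward[i] + backward[i] - 1 for i in range(n)]

print(count_soldiers(5))  # every soldier computes the same total: [5, 5, 5, 5, 5]
```

The point of the design is that neither pass ever reads the other's output mid-stream; mixing them up as they propagate would double-count, which is the analogue of confusing deduction from priors with direct observation.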
comment by Robin_Z ·
2008-12-25T02:12:22.000Z · LW(p) · GW(p)
Brandon Reinhart: Jack Thompson. (Fortunately, he's been disbarred, now, so maybe that particular vein of stupidity is getting tapped out.)
comment by Nick_Tarleton ·
2010-06-13T13:01:01.523Z · LW(p) · GW(p)
It seems to me that claiming the right to say what someone else believes, entirely ignoring what they protest, is a strong claim of status over them (compare: "stop hitting yourself!"). Perhaps, at least to the extent that they're recognizable as non-responsive to reality or to denial, imaginary positions allow the claimants and their readers to join in putting down those nutty technophiles or whoever.
comment by ata ·
2010-09-12T02:22:26.651Z · LW(p) · GW(p)
I believe you now.
Replies from: Perplexed
↑ comment by Perplexed ·
2010-09-12T02:48:35.270Z · LW(p) · GW(p)
Reed seems to be assuming that no one but oddball Singularitarians will want to be "saved". Be thankful that he isn't jumping to the conclusion that "salvation" will not be optional.
But if Reed is right that the majority of mankind will not wish to participate in this salvation and if Eliezer is right that the overwhelming majority of very intelligent people will wish to participate, this sets up a very dramatic situation. You could almost construct a novel around that idea. A big fat novel with a hundred page philosophical monologue by the hero embedded near the conclusion. Maybe call it "Alas! Shagged!!"
comment by timtyler ·
2010-09-12T10:28:03.260Z · LW(p) · GW(p)
In Turing's Cathedral, Dyson says:
For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.
Today, the inner circle at companies like Google and Facebook get more of the benefits than outsiders do - they get a share of the winnings.
People have reason to be sceptical when told that your machine intelligence will be any different. You are just saying that, they will think - to get support from the masses during the construction process, when your project is at its most vulnerable.
In this model, they think you have a strong motivation to deceive them - so it is not much of a surprise when they fail to update based on what you say.
comment by [deleted] ·
2012-06-20T21:56:41.510Z · LW(p) · GW(p)
The QM sequence led me to take such a position against Quantum Physicists... Huh.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers ·
2013-01-10T17:24:35.669Z · LW(p) · GW(p)
Quantum physicists, not Quantum Mechanics? Do you mean, just... physicists? Physicists who deal in quantum phenomena? Physicists who try to interpret quantum mechanics professionally?
And what sort of position - that MWI is necessarily a part of how QM works, etc.? Alternately, that physicists are all Copenhagen diehards who won't look at the foundations of what they're doing?
Replies from: shminux, None
↑ comment by shminux ·
2013-01-10T17:54:36.767Z · LW(p) · GW(p)
EY's emotional outburst about Copenhagen having to kill puppies to get rejected may have contributed to the misattribution.