Bayesian Judo

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-07-31T05:53:13.000Z · LW · GW · Legacy · 110 comments

You can have some fun with people whose anticipations get out of sync with what they believe they believe.

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: "I don't believe Artificial Intelligence is possible because only God can make a soul."

At this point I must have been divinely inspired, because I instantly responded: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

He said, "What?"

I said, "Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion."

There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, "Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are."

I said, "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."

He said, "Well, um, I guess we may have to agree to disagree on this."

I said: "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong."

We went back and forth on this briefly. Finally, he said, "Well, I guess I was really trying to say that I don't think you can make something eternal."

I said, "Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires."  I stretched out my hand, and he shook it, and then he wandered away.

A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."

"Thank you very much," I said.

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Professing and Cheering"

Previous post: "Belief in Belief"

110 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by joe_blo · 2007-07-31T06:36:05.000Z · LW(p) · GW(p)

Hmmm... and I thought you were going to suggest that, if you succeeded in making an AI, you must be god. I would've loved to be there to offer him that option instead. LOL!

Of course, even if what you said is what we really mean, I'm not sure which approach is more effective at getting people to think, but your story shows that it's usually good (and at least entertaining) to try being more direct every once in a while. I just find it easier to break through the social convention of politeness with humor.

comment by Paul2 · 2007-07-31T07:24:20.000Z · LW(p) · GW(p)

I am quite impressed by your ability to signal your prodigious intelligence. Less pompously, moments like that make for fond memories.

Replies from: david-james
comment by David James (david-james) · 2024-04-14T16:15:17.283Z · LW(p) · GW(p)

First, E.Y. did more than signal his intellect; he demonstrated it. Second, E.Y. did more than many of us could muster; namely, he took the other person seriously and engaged. This is a level of respect that often gets misread as pompousness. Third, E.Y. has a body of writing that clearly demonstrates that he treats other people's irrationality as his problem, and I agree. Fourth, if someone could thread the needle better, wonderful!

comment by Richard_Hollerith · 2007-07-31T07:29:15.000Z · LW(p) · GW(p)

Nice story.

comment by Valter · 2007-07-31T10:17:00.000Z · LW(p) · GW(p)

Nice job, but the mention of Aumann's theorem looks a bit like a sleight of hand: did the poor fellow ever learn that the theorem requires the assumption of common priors?
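For reference, a rough statement of the theorem (a sketch of the standard formulation; the notation here is only illustrative): if two agents share a common prior P and their posterior probabilities for an event E, given their respective private information I_1 and I_2, are common knowledge between them, then those posteriors must be equal:

\[
q_1 = P(E \mid I_1), \qquad q_2 = P(E \mid I_2), \qquad \text{common knowledge of } (q_1, q_2) \;\Longrightarrow\; q_1 = q_2 .
\]

Drop the common-prior assumption and the conclusion no longer follows, which is exactly the gap being pointed out.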

comment by michael_vassar3 · 2007-07-31T11:25:19.000Z · LW(p) · GW(p)

Robin sort-of generalized it so that it doesn't. http://www.overcomingbias.com/2006/12/why_common_prio.html

My big question though is whether this exchange led to a lasting change in the fellow's opinion as to the possibility of AI. In practice it seems to me that most of the time when people decisively lose an argument, they still return to their original position within a few days just by ignoring that it ever happened.

Replies from: PrometheanFaun
comment by PrometheanFaun · 2013-08-12T02:37:42.597Z · LW(p) · GW(p)

He probably didn't see it as an argument proper, but a long misunderstanding. Most people aren't mentally equipped to make high-fidelity translations between qualia and words in either direction [superficially, they are Not Articulate; more to the point, they might be Not Articulable]. When you dismantle their words, it doesn't mean much to them, because you haven't touched their true thoughts or anything that represents them.

comment by Robin_Hanson2 · 2007-07-31T11:58:14.000Z · LW(p) · GW(p)

This story is related to the phenomenon whereby the most intelligent and educated religious folks are very careful to define their beliefs so that there can be no conflict with observations, while ordinary people are more prone to allow their religion to have implications, which are then subject to challenges like Eliezer's. It is fun to pick holes in the less educated views, but to challenge religion overall it seems more honest to challenge the most educated views. But I usually have trouble figuring out just what it is that the most educated religious folks think exactly.

comment by TGGP3 · 2007-07-31T13:49:32.000Z · LW(p) · GW(p)

I've mentioned before that my attempt to salvage a belief in God ultimately resulted in something like H. P. Lovecraft's Azathoth, which might not be too surprising as it was that ardent materialist's parody of the God of the Old Testament.

comment by Silas · 2007-07-31T14:49:57.000Z · LW(p) · GW(p)

A few questions and comments:

1) What kind of dinner party was this? It's great to expose non-rigorous beliefs, but was that the right place to show off your superiority? It seems you came off as having some inferiority complex, though obviously I wasn't there. I know that if I'm at a party (of most types), for example, my first goal ain't exactly to win philosophical arguments ...

2) Why did you have to involve Aumann's theorem? You caught him in a contradiction. The question of whether people can agree to disagree, at least it seems to me, is an unnecessary distraction. And for all he knows, you could just be making that up to intimidate him. And Aumann's Theorem certainly doesn't imply that, at any given moment, rectifying that particular inconsistency is an optimal use of someone's time.

3) It seems what he was really trying to say was something along the lines of "while you could make an intelligence, its emotions would not be real the way humans' are". ("Submarines aren't really swimming.") I probably would have at least attempted to verify that that's what he meant rather than latching onto the most ridiculous meaning I could find.

4) I've had the same experience with people who fervently hold beliefs but don't consider tests that could falsify them. In my case, it's usually with people who insist that the true rate of inflation in the US is ~12%, all the time. I always ask, "so what basket of commodity futures can I buy that consistently makes 12% nominal?"

Replies from: omalleyt
comment by omalleyt · 2016-09-02T18:14:32.071Z · LW(p) · GW(p)

To point 4 and inflation: the trick is to not invest in commodity futures (where the deflationary pressures of improved production technology cancel some of the inflationary pressures of currency devaluation) but rather in assets. You can invest in the S&P 500 and achieve ~11% nominal returns. Now, whether asset prices are relevant to "inflation" depends on whether you are trying to answer the question "how many apples could I buy for a dollar in 1960 versus today?" or the question "how many apples could I buy for a dollar today if they were produced with the same inputs and technological process as they were in 1960?"
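As a rough back-of-the-envelope check (a sketch using the approximate figures above, not precise data): the real return is the nominal return deflated by inflation, so under a claimed ~12% inflation rate, an ~11% nominal return would actually be a slight loss in real terms:

\[
1 + r_{\text{real}} = \frac{1 + r_{\text{nominal}}}{1 + \pi}, \qquad \frac{1.11}{1.12} - 1 \approx -0.9\% .
\]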

comment by Pseudonymous2 · 2007-07-31T18:20:21.000Z · LW(p) · GW(p)

That was cruel. Fun, but cruel.

A woman who had stood nearby, listening to the conversation, said to me gravely, "That was beautiful."

And people wonder why men argue so often.

comment by Joseph_Hertzlinger · 2007-07-31T20:35:39.000Z · LW(p) · GW(p)

Meanwhile, over at the next table, there was the following conversation:

"I believe science teaches us that human-caused global warming is an urgent crisis."

"You mean if it's either not a problem or can be fixed easily, it proves science is false?"

Replies from: rkyeun
comment by rkyeun · 2012-08-27T23:49:35.413Z · LW(p) · GW(p)

Technically, it proves his belief about science is false.

If he'd said "Science teaches us that human-caused global warming is an urgent crisis." then "You mean if it's either not a problem or can be fixed easily, it proves science is false?" applies. And yes, it in fact would.

And then Science would (metaphorically) say, "My bad, thanks for that new evidence, I reject my prior theory and form a new one that accounts for your data and explains this new phenomenon that causes symptoms as if global warming were an urgent problem."

Replies from: Bound_up
comment by Bound_up · 2015-02-27T08:24:43.228Z · LW(p) · GW(p)

"Technically, it proves his belief about science is false."

True, though in the same way, Eliezer's success in producing an AI, even according to the dodgy specifications of his dinner companion, would only prove his belief about God wrong, not his belief IN God wrong.

The AI data point would contradict Mr Dinner's model of God's nature only at a single point, His allegedly unique intelligence-producing quality.

Replies from: rkyeun, Capla
comment by rkyeun · 2015-03-26T21:51:43.339Z · LW(p) · GW(p)

There is no evidence for gods, and so any belief he has in them is already wrong. Don't believe without evidence.

Replies from: wizzwizz4
comment by wizzwizz4 · 2019-07-14T20:25:06.374Z · LW(p) · GW(p)

Did you mean: Hold sensible priors

comment by Capla · 2015-04-11T20:38:49.180Z · LW(p) · GW(p)

Sure. But religion is supposed to be divinely inspired and thus completely correct on every point. If one piece of the bundle is disproven, the whole bundle takes a hit.

Replies from: g_pepper
comment by g_pepper · 2015-04-11T21:38:12.938Z · LW(p) · GW(p)

Even if religion is divinely inspired, a person's understanding of one aspect of religion can be wrong without invalidating all of that person's other religious beliefs.

Replies from: Capla
comment by Capla · 2015-04-12T14:36:03.177Z · LW(p) · GW(p)

Yep.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-07-31T21:21:01.000Z · LW(p) · GW(p)

I only torture them to make them stronger.

comment by Kaj_Sotala · 2007-07-31T22:45:44.000Z · LW(p) · GW(p)

I know that if I'm at a party (of most types), for example, my first goal ain't exactly to win philosophical arguments ...

Funny, I've always thought that debates are one of the most entertaining forms of social interaction available. Parties with a lot of strangers around are one of the best environments for them - not only don't you know in advance the opinions of the others, making the discussions more interesting, but you'll get to know them on a deeper level, and faster, than you could with idle small talk. You'll get to know how they think.

Replies from: gershom
comment by gershom · 2011-08-13T17:59:38.859Z · LW(p) · GW(p)

Or how they don't...

comment by albatross · 2007-08-01T00:45:11.000Z · LW(p) · GW(p)

Of course, you may not be invited back, if you offend them badly enough....

Replies from: TraderJoe
comment by TraderJoe · 2012-04-12T12:32:05.690Z · LW(p) · GW(p)

[comment deleted]

comment by Hopefully_Anonymous2 · 2007-08-01T00:51:51.000Z · LW(p) · GW(p)

Good catch, Pseudonymous. Robin, my guess is that they're crypto-skeptics, performing for their self-perceived comparative economic/social advantage. Eliezer, please don't make something that will kill us all.

comment by Tom_McCabe · 2007-08-01T01:48:30.000Z · LW(p) · GW(p)

"Funny, I've always thought that debates are one of the most entertaining forms of social interaction available."

We may not have rationality dojos, but in-person debating is as good an irrationality dojo as you're going to get. In debating, you're rewarded for 'winning', regardless of whether what you said was true; this encourages people to develop rhetorical techniques and arguments which are fully general across all possible situations, as this makes them easier to use. And while it may be hard to give public demonstrations of rationality, demonstrations of irrationality are easy: simply talk about impressive-sounding nonsense in a confident, commanding voice, and people will be impressed (look at how well Hitler did).

comment by MRA · 2007-08-01T02:14:34.000Z · LW(p) · GW(p)

I think the idea of argument is to explore an issue, not "win" or "lose". If you enter an argument with the mentality that you must be right, you've rather missed the point. There wasn't an argument here, just a one-sided discussion. It was a bludgeoning by someone with training and practice in logical reasoning on someone without. It was both disgusting and pathetic, no different than a high-school yard bully pushing some kid's face in the dirt because he's got bigger biceps. Did the outcome of this "argument" stroke your ego?

All-in-all, I'm not sure this is a story you should want to share. To put this in uncomplicated terms, it makes you sound like a real a$$hole.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-01T02:43:43.000Z · LW(p) · GW(p)

(FYI, the MRA who posted is not Ames.)

MRA, the difference between winning an argument with someone, versus pushing them into the dirt - well, there's a number of differences, really. The three most important are: First, I didn't force him to talk to me. Second, losing an argument makes you stronger. (Or rather, it gives you a precious chance to become stronger; whether he took advantage of it was up to him. Winning is a null-op, of course.)

Third and above all, in factual arguments there is such a thing as truth and falsity.

Replies from: Belkar15
comment by Belkar15 · 2010-12-07T10:39:54.321Z · LW(p) · GW(p)

Yes, but humans are emotional beings, and we must recognize this. Sure, it is his fault he is so ignorant, but the difference between calmly explaining to him what is wrong with his thinking, and making fun of him, is that you hurt him on an emotional level. One must always live on both planes, and must always recognize what kind of argument he is dealing with. Emotional arguments are things to stay away from, because they only strengthen a person in the sense that bullying someone strengthens them, as opposed to teaching them Karate. People are tested in the little things. The unimportant things. That is where your personality shows. (BTW, how can you like this, and hate Gemara?)

Replies from: TobyBartels
comment by TobyBartels · 2010-12-30T09:51:01.620Z · LW(p) · GW(p)

I agree with your general points, but I don't think that they apply here. Why do you say that Eliezer hurt this guy emotionally? He did amuse a witness, and it's possible that later she came and laughed at the guy, or something, but there's no evidence for that. On the contrary, the guy got a little confused, technically won an argument (after having to clarify his position), and just maybe got something to think about.

The only really cheap shot is citing Aumann.

Replies from: jacit31
comment by jacit31 · 2012-12-03T06:03:39.318Z · LW(p) · GW(p)

i second :)

comment by Joshua_W._Burton · 2007-08-01T04:29:33.000Z · LW(p) · GW(p)

Aumann has some curious priors of his own.

Replies from: TimFreeman
comment by TimFreeman · 2011-04-13T19:24:31.678Z · LW(p) · GW(p)

The link broke sometime between 2007 and 2011. Do you have another pointer or some summary of what it said?

Replies from: Zack_M_Davis, khafra, Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-03T14:40:24.105Z · LW(p) · GW(p)

Learn to use the time machine. Back in 2007, the page looked like this.

comment by Tobbic2 · 2007-08-01T08:45:58.000Z · LW(p) · GW(p)

"I believe science teaches us that human-caused global warming is an urgent crisis." "You mean if it's either not a problem or can be fixed easily, it proves science is false?" Science has been proved false many times. Those things proven to be false are no longer science. OTOH most religious beliefs are dogmatic. They can't be discarded from that religion without divine intervention/prophecy.

Replies from: TraderJoe
comment by TraderJoe · 2012-04-12T12:38:04.883Z · LW(p) · GW(p)

[comment deleted]

Replies from: Danfly
comment by Danfly · 2012-04-12T12:52:52.953Z · LW(p) · GW(p)

Data accumulated using the scientific method, perhaps? Once you have the data, you can make inferences to the best explanation. If the theory held to be the best explanation is falsified, that becomes part of the data. It then ceases to be the best explanation.

comment by Mark_D · 2007-08-01T17:11:18.000Z · LW(p) · GW(p)

“It was a bludgeoning by someone with training and practice in logical reasoning on someone without.”

I’m inclined to agree. I also found it less than convincing.

Let’s put aside the question of whether intelligence indicates the presence of a soul (although I’ve known more than a few highly intelligent people that are also morally bankrupt).

If it’s true that you can disprove his religion by building an all-encompassing algorithm that passes as a pseudo-soul, then the inverse must also be true. If you can’t quantify all the constituent parts of a soul, then you would have to accept that his religion offers a better explanation of the nature of being than AI. So you would have to start believing his religion until a better explanation presents itself. That seems fair, no?

If you can’t make that leap, then now would be a good time to examine your motives for any satisfaction you felt at his mauling. I’d argue your enjoyment is less about debating ability, and more about the enjoyment of putting the “uneducated” in their place.

So let's consider the emotion compassion. You can design an algorithm so that it knows what compassionate behaviour looks like. You could also design it so that it learns when this behaviour is appropriate. But at no point is your algorithm actually "feeling" compassion, even if it's demonstrating it. It's following a set of predefined rules (with perhaps some randomness and adaptation built in) because it believes it's advantageous or logical to do so. If this were a human being, we'd apply the label "sociopath". That, to me, is a critical distinction between AI and soul.

Debates like these take all the fun right out of AI. It’s disappointing that we need to debate the merits of tolerance on forums like this one.

Replies from: floormatthew
comment by floormatthew · 2010-09-14T23:05:32.901Z · LW(p) · GW(p)

Just nitpicking a little, but you don't seem to understand the concept of an AI. It reprogrammes itself after each encounter (the same way a child does while growing up), so it counts as an emotional response: reacting the same way others do when a response is needed.

If you attempt to mention that the response is therefore invalid (for not actually feeling any emotion, just an, admittedly frequently updated, response), then I point you at the 'is my happiness the same as your happiness' argument.

comment by Johnny_Logic · 2007-08-01T18:20:29.000Z · LW(p) · GW(p)

Where do people get the impression that we all have the right not to be challenged in our beliefs? Tolerance is not about letting every person's ideas go unchallenged; it's about refraining from other measures (enforced conformity, violence) when faced with intractable personal differences.

As for politeness, it is an overrated virtue. We cannot have free and open discussions, if we are chained to the notion that we should not challenge those that cannot countenance dissent, or that we should be free from the dissent of others. Some people should be challenged often and publicly. Of course, the civility of these exchanges matters, but, as presented by Eliezer, no serious conversational fouls or fallacies were committed in this case (contemptuous tone, ad hominems, tu quoque or other Latinate no-nos, etc.).

Mark D,

How do you know what the putative AI "believes" about what is advantageous or logical? How do you know that other humans are feeling compassion? In other words, how do you feel about the Turing test, and how, other than through their behavior, would you be able to know what people or AIs believe and feel?

comment by Kaj_Sotala · 2007-08-01T18:34:05.000Z · LW(p) · GW(p)

We may not have rationality dojos, but in-person debating is as good an irrationality dojo as you're going to get. In debating, you're rewarded for 'winning', regardless of whether what you said was true

Only if you choose to approach it that way.

comment by Mark_D · 2007-08-01T20:59:45.000Z · LW(p) · GW(p)

Johnny Logic: some good questions.

“Tolerance is not about letting every person's ideas go unchallenged; it's about refraining from other measures (enforced conformity, violence) when faced with intractable personal differences.”

That’s certainly the bare minimum. His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates). Why not respect that?

“How do you know what the putative AI "believes" about what is advantageous or logical?”

By definition, wouldn’t our AI friend have clearly defined rules that tell us what it believes? Even if we employ some sort of Bayesian learning algorithm that changes behaviour, its actions would be well scripted.

“How do you know that other humans are feeling compassion?”

I’m not sure this can be answered without an emotive argument. If you’re confident that your actions are always consistent with your personal desires (if they exist), then you have me beaten. I personally didn’t want to wake up and go to work on Monday, but you wouldn’t know it by my actions since I showed up anyway. You’ll just have to take my word for it that I had other unquantifiable impulses.

“In other words, how you feel about the Turing test, and how, other than their behavior, would you be able to know about what people or AIs believe and feel?”

I think you might be misapplying the Turing test. Let’s frame this as a statistical problem. When you perform analysis, you separate factors into those that have predictive power and those that don’t. A successful Turing test would tell us that a perfect predictive formula is possible, and that we might be able to ignore some factors that don’t help us anticipate behaviour. It wouldn’t tell us that those factors don’t exist however.

Replies from: rkyeun
comment by rkyeun · 2012-08-28T00:01:47.975Z · LW(p) · GW(p)

His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates).

Correction: It costs us nothing to let him keep them provided he never at any point acts in a way where the outcome would be different depending on whether or not it is true in reality. A great many people have great personal value in the belief that faith healing works. And it costs us the suffering and deaths of children.

comment by Johnny_Logic · 2007-08-01T22:58:33.000Z · LW(p) · GW(p)

Mark D.,

"His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates). Why not respect that?"

Values may be misplaced, and they have consequences. This particular issue doesn't have much riding on it (on the face of it, anyway), but many do. Moreover, how we think is in many ways as important as what we think. The fellow's ad hoc moves are problematic. Ad hoc adjustments to our theories/beliefs to avoid disconfirmation are like confirmation bias and other fallacies and biases -- they are hurdles to creativity, to making better decisions, and to increasing our understanding of ourselves and the world. This all sounds more hard-nosed than I really am, but you get the point.

"By definition, wouldn’t our AI friend have clearly defined rules that tell us what it believes?"

You seem to envision AI as a massive database of scripts chosen according to circumstance, but this is not feasible. The number of possible scripts needed to enable intelligent behavior would be innumerable. No, an AI need not have "clearly defined rules" in the sense of being intelligible to humans. I suspect anything robust enough to pass the Turing Test in any meaningful (non-domain-restricted) sense would be either too complicated to decode or predict upon inspection, or would be the result of some artificial evolutionary process that would be no more decodable than a brain. Have you ever looked at complex code? It can be difficult, if not impossible, for a person to understand as code, let alone to anticipate all the possible ways it may execute (thus bugs, infinite loops, etc.). As Turing said, "Machines take me by surprise with great frequency."

"You’ll just have to take my word for it that I had other unquantifiable impulses."

But you would not take the word of an AI that exhibited human level robustness in its actions? Why?

"I think you might be misapplying the Turing test. Let’s frame this as a statistical problem. When you perform analysis, you separate factors into those that have predictive power and those that don’t. A successful Turing test would tell us that a perfect predictive formula is possible, and that we might be able to ignore some factors that don’t help us anticipate behaviour. It wouldn’t tell us that those factors don’t exist however."

Funny, I'm afraid that you might be misapplying the Turing Test. The Turing Test is not supposed to provide a maximally predictive "formula" for a putative intelligence. Rather, passing it is arguably supposed to demonstrate that the subject is, in some substantive sense of the word, intelligent.

comment by Mark_D · 2007-08-02T01:57:22.000Z · LW(p) · GW(p)

JL, I’ve programmed in several languages, but you have me correctly pegged as someone who is more familiar with databases. And since I’ve never designed anything on the scale we’re discussing I’m happy to defer to your experience. It sounds like an enormously fun exercise though.

My original point remains unanswered however. We’re demanding a level of intellectual rigour from our monotheistic party goer. Fair enough. But nothing I’ve seen here leads me to believe that we’re as open minded as we’re asking him to be. Would you put aside your convictions and adopt religion if a skilful debater put forward an argument more compelling than yours? If you were to still say “no” in the face of overwhelming logic, you wouldn’t justifiably be able to identify yourself as a critical thinker. And THAT’S what I was driving at. Perhaps I’m reading subtexts where none exist, but this whole anecdote has felt less like an exercise in deductive reasoning than having sport at someone else’s expense (which is plainly out of order).

I don’t really have any passion for debating so I’ll leave it there. I’m sure EY can pass along the email address I entered on this site if you’re determined to talk me out of my wayward Christianity.

Best of luck to you all

comment by Johnny_Logic · 2007-08-02T03:09:59.000Z · LW(p) · GW(p)

Mark D,

"JL, I’ve programmed in several languages, but you have me correctly pegged as someone who is more familiar with databases. And since I’ve never designed anything on the scale we’re discussing I’m happy to defer to your experience. It sounds like an enormously fun exercise though."

There are programs (good ol' chatter bots) that use methods like you supposed, but they are far from promising. No need to defer to me-- I am familiar with machine learning methods, some notable programs and the philosophical debate, but I am far from an expert on AI, and would listen to counterarguments.

"Would you put aside your convictions and adopt religion if a skilful debater put forward an argument more compelling than yours? If you were to still say “no” in the face of overwhelming logic, you wouldn’t justifiably be able to identify yourself as a critical thinker. And THAT’S what I was driving at."

It is not the skillfulness of the debater that is the issue, but the quality of the reasoning and evidence given the magnitude of the claims. I have sought good arguments and found them all to be seriously lacking. However, if I were presented with a very good argument (overwhelming evidence is better), I would like to think I would be able to change my beliefs. Of course, such a new belief would not be immune to revision in the future. Also, knowing what I do about the many ways we fool ourselves and our reasoning fails, I may be wrong about my ability to change cherished unbeliefs, but I do try. Keeping an open, curious, yet appropriately critical attitude toward everything, even when we are at our best, is not easy, or maybe even possible.

"I don’t really have any passion for debating so I’ll leave it there. I’m sure EY can pass along the email address I entered on this site if you’re determined to talk me out of my wayward Christianity."

I trust that you are serious about signing off, so I will leave you with a few questions I do not expect to be answered, but which are, in my opinion, worth considering: Are there any conditions under which you would reject Christianity? Why do you believe in your flavor of Christianity, rather than anything else? Are these good reasons? Is your belief proportionate to the evidence, or total? Would you accept this kind of reasoning in other domains (buying a car, convicting a criminal), or if it led to different conclusions than yours (Islam, Mormonism)? Why or why not?

"Best of luck to you all"

Cheers.

comment by Johnny_Logic · 2007-08-02T03:23:47.000Z · LW(p) · GW(p)

A less personal response to the second bit I quoted from Mark D: Yes, changing our beliefs in the face of good evidence and argument is desirable, and to the extent that we are able to do this we can be called critical thinkers.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-02T06:55:32.000Z · LW(p) · GW(p)

Would you put aside your convictions and adopt religion if a skilful debater put forward an argument more compelling than yours?

To the extent the answer is "No" my atheism would be meaningless. I hope the answer is "Yes", but I have not been so tested (and do not expect to be; strong arguments for false theses should not exist).

comment by DaCracka · 2007-10-25T18:24:39.000Z · LW(p) · GW(p)

First: The argument wasn't the author being an a$$hole. He was stating the nature of his business, which is a very normal thing to do at a social gathering. (We are, to a disturbing extent, defined by our income.) Godboy dismissed his profession as quixotic, leading the author to the notion that if he created a working AI, it would disprove God in the mind of his co-participant in discussion. This was a logical inference, based on the statement that inspired it.

Second: The only winner in a conversation is the person who learns something. I believe, that in being forced to examine his beliefs, and how he expresses them in polite company, Godboy was the clear winner.

Unless you're in the habit of giving out cookies to any sophist who gives you a pimp slap with the logical vernacular.

comment by douglas · 2007-10-25T21:49:44.000Z · LW(p) · GW(p)

Wouldn't it be easier to say, an AI is not a soul? In what sense do these two words have the same meaning? An AI is a non-existent entity which, due to the unflagging faith of some, is being explored. A soul is an eternal being granted life (human only?) by god (should that be capitalized?) Comparing them is what leads to the problem.

comment by douglas · 2007-10-25T21:58:17.000Z · LW(p) · GW(p)

Before using Aumann one should ask, "What does this guy know that I don't?"

comment by g · 2007-10-26T01:14:37.000Z · LW(p) · GW(p)

Douglas, (1) what makes you think that anyone was suggesting that "AI" and "soul" have the same meaning?, (2) in what way would "an AI is not a soul" be a useful substitute for anything else said in this discussion?, and (3) why should comparing the two notions lead to any problems, and in particular to whatever you're calling "the problem" here?

I don't think it's any more obvious that there are no AIs than that there are no souls. That is: perhaps, despite appearances, my computer is really intelligent, and thinks in some manner quite different from the computational processes I know it performs, but which is none the less somehow based on them. There is just as much evidence for this (admittedly bizarre) hypothesis as there is for the existence of souls.

(On some side issues: I think "God" should be capitalized when it's being used as a proper name, and not otherwise. Thus, "the Christian god" but "May God bless you" or "Oh God, what a stupid idea". Note that this has nothing to do with, e.g., whether one believes in the existence of any such being or one's opinions about whether he/she/it does/would deserve respect. I don't understand why "eternal" should be part of the definition of "soul". The point of Aumann's theorem is that observing someone else's opinion and how it changes gives you information about what the other person knows that you don't.)

comment by douglas · 2007-10-26T07:20:35.000Z · LW(p) · GW(p)

g- the man said, "I don't believe AI is possible because only God can make a soul." "...If I can make an AI it proves your religion false?" Somebody in this exchange has equated the making of an AI with the making of a soul. That's why I would suggest that the words have been confused. Saying "an AI is not a soul" would be useful in this discussion because it would clarify that the making of one would not invalidate the existence of the other or the statement that "only God can make a soul". Comparing the two notions would not be a problem; equating them is. You seem somewhat willing to (at least partially) accept the existence of AI based on a bizarre hypothesis. If you would give me some idea of what sort of evidence you would accept for the existence of a soul, I would be happy to supply it if I can. Thank you for your interesting comment re: Aumann.

comment by g · 2007-10-26T11:20:57.000Z · LW(p) · GW(p)

Douglas: OK, I hadn't realised you were talking about him; my bad. And, sure, another approach Eliezer could have taken is to say "an AI and a soul aren't the same thing". But I don't see why that would be any improvement on what he actually did do.

Also: "soul" is used vaguely enough that I don't think Eliezer could justifiably claim that an AI wouldn't have to be a soul. If his interlocutor believed, e.g., that a soul is what it takes in order to have real beliefs, feelings, will, etc., then saying "oh no, I'm not talking about souls" could have led to all sorts of confusion. Better to stick with specifics, as Eliezer did, and let the chap's definition of "soul" sort itself out in the light of whatever conclusions are reached that way.

Either your meaning of "somewhat willing" is very different from mine, or I've not been very clear. I don't think there's any good reason to think that anything that deserves to be called an AI is yet in existence. (Of course there are computers doing things that once upon a time were thought to be possible only for genuinely intelligent beings; "AI is what we haven't worked out how to do yet", etc.) As to whether we'll make one in the future, that's dependent (at least) on continued technological progress, availability of resources, non-extinction, etc., so I certainly don't think it's obvious that it will ever be done.

I can't tell you what evidence would convince me of the existence of "souls" until I know what you mean by "soul", and maybe also "exist". If, e.g., "soul" means "eternal being granted life by God" (I guess we'd better throw in "immaterial" or something), then clearly I'd want to be shown (1) good evidence for the existence of some sort of god and (2) good evidence that that god does, or at least should be expected to, grant life to immaterial eternal beings.

#2 seems to involve either second-guessing what a being whose mind is vastly unlike ours would do, or else accepting some sort of revelation; but all the candidates for the latter that I've looked at enough to have an opinion seem ambiguous or unreliable or both, to an extent that makes it very difficult to draw any useful conclusions from them.

Now, actually that definition seems to me a very poor one -- I don't see why "eternal" or "made by God" should be any part of the definition of "soul". Perhaps you have a different one?

comment by douglas · 2007-10-26T19:40:40.000Z · LW(p) · GW(p)

g- you ask good questions. My point about AI and religion is that rather than pretending that one is related to the other, AI would benefit from clearing up this confusion. (So would the religious.) Perhaps the way Eliezer went about it was OK. I would define "soul" as a non-corporeal being that exists separable from the body and that survives body death. (I want to say something about the soul being the true source of consciousness and ability -- OK, I said it.)

comment by Dave_P · 2009-04-07T16:45:39.000Z · LW(p) · GW(p)

I came here looking for judo techniques...

But whilst I'm here, about your reply to MRA: are you saying your discussion with the religious guy is factual? Is God fact? If so, I would very much like to meet him, as I have a few questions I'd like to ask... you got his number? Sounds to me like this guy was a closed book, and maybe you were a little harsh, but maybe you opened his mind up a bit, which would be good, though I wouldn't feel good about how you did it, clever as it was.

I suppose to some people belief = truth, but at the end of the day neither of you was speaking universal truths.

comment by happyseaurchin · 2009-12-28T16:40:30.795Z · LW(p) · GW(p)

cool :)

comment by tasups · 2010-05-27T20:20:02.723Z · LW(p) · GW(p)

Seems more like Aikido. I sense a broken spirit more than a redirection of his thought processes or his belief system. Simply put, honey has always gotten me more flies than vinegar.

comment by cousin_it · 2010-07-26T19:56:44.658Z · LW(p) · GW(p)

Rereading the post, I don't understand why the fellow didn't just say "I defy your ability to build an AI" in response to your first question. Maybe he was intimidated at the moment.

Replies from: thomblake
comment by thomblake · 2011-05-02T22:51:14.839Z · LW(p) · GW(p)

Rereading the post, I don't understand why the fellow didn't just say "I defy your ability to build an AI"

Because he wanted the ability to retain his religious belief in the face of successful AI - and he can anticipate, in advance, exactly which experimental results he'll need to excuse.

comment by BillGlover · 2010-11-22T02:11:42.796Z · LW(p) · GW(p)

I have to heartily disagree with those that seem to think it impolite to disagree with the religious. Remember this same person is going to go out and make life and death decisions for himself and others. Notice also that it was the theist who started the debate.

comment by MrPineapple · 2011-01-23T23:33:19.750Z · LW(p) · GW(p)

All you did was show that your argumentative skills were better. His initial belief mentioned souls, and I don't think you ever did. I'd like to see some sort of testability for souls :)

As to your reply about possibly proving his religion false, if he were better at arguing, he might have replied that at the least it would prove his understanding of religion false.

And of course it's not as if you have created an AI.

Replies from: Hul-Gil
comment by Hul-Gil · 2011-05-02T22:20:00.497Z · LW(p) · GW(p)

Your points are irrelevant. The man asserted that his religious beliefs meant Artificial Intelligence was impossible, and that's what the author of this post was debating about. No souls need to be tested, because the existence of souls was not contested. Nor did Eliezer say he had created an AI.

I'm also surprised no one pointed out that Mark D's "reversal" scenario is totally wrong: if Eliezer was unable to create an AI, that does not at all imply that the man's own assertions were true. It might, at best, be very weak evidence; there could be many reasons, other than a lack of a soul, that Eliezer might fail.

comment by UnclGhost · 2011-05-13T05:59:45.926Z · LW(p) · GW(p)

I attended a lecture by noted theologian Alvin Plantinga, about whether miracles are incompatible with science. Most of it was "science doesn't say it's impossible, so there's still a chance, right?"-type arguments. However, later on, his main explanation for why it wasn't impossible that God could intervene from outside a closed system and still not violate our laws of physics was that maybe God works through wavefunction collapse. Maybe God creates miracles by causing the right wavefunction collapses, resulting in, say, Jesus walking on water, rising from the dead, unscrambling eggs, etc.

Recalling this article, I wrote down and asked this question when the time came:

"The Many-Worlds Interpretation is currently [I said "currently" because he was complaining earlier about other philosophers misrepresenting modern science] one of the leading interpretations of quantum mechanics. The universe splits off at quantum events, but is still deterministic, and only appears probabilistic from the perspective of any given branch. Every one of the other branches still exists, including ones where Jesus doesn't come back. If true, how does this affect your argument?"

I wanted to see if he would accept a falsifiable version of his belief. Unfortunately, he said something like "Oh, I don't like that theory, I don't know how it would work with a million versions of me out there" and ignored the "if" part of the question. (I would have liked to point this out, but the guy before me had abused his mic privileges so I had to give it back.)

(Also, is that a fair layman's representation of many-worlds? I'm normally very wary of using any sort of quantum physics-based reasoning as a non-quantum physicist, but, well, he started it.)

comment by TuviaDulin · 2011-08-04T01:52:57.509Z · LW(p) · GW(p)

Now, what I don't get is why he let you force him to change his position. If he really believed that it was impossible for you to create AI, why wouldn't he have just said "yes," and then sat back, comfortable in his belief that you would never create an AI?

Replies from: lessdazed
comment by lessdazed · 2011-08-04T03:39:06.399Z · LW(p) · GW(p)

He didn't believe his religion was definitely true, even though he believed that he believed that. There is nothing paradoxical about being wrong about things, even one's own beliefs.

"My religion is true" is a statement with no consequences for his anticipations, so it is isolated from his belief network. He was deeply committed to "I believe in my religion", and that second belief required him to pretend that the first statement was a member of his belief network, by pretending that he believed in consequences that would occur if his religion were true and that just happened to be untestable. Once he realized that he had goofed and said, in a testable area, that he anticipated different consequences if his religion were true than if it weren't, he had to backtrack. By never anticipating different consequences if his religion is true than if it isn't, he protects his belief that he believes his religion is true from falsification.

if you have a whole general concept like "post-colonial alienation", which does not have specifications bound to any specific experience, you may just have a little bunch of arrows off on the side of your causal graph, not bound to anything at all; and these may well be meaningless.

So he changed his position only insofar as he updated his belief framework, but he didn't change any core belief. Everything snaps into place if you realize the terms of his real belief, the way the movement of the planets makes sense once you realize that the Sun, not the Earth, is the center of the Solar System. His religious belief is meaningless, not just wrong, and his belief that he believes is wrong.

People are capable of even more sophisticated levels of self-deception than it seems this guy had.

Welcome to Less Wrong!

comment by juliawise · 2011-08-08T19:37:07.316Z · LW(p) · GW(p)

This post's presence so early in the core sequences is the reason I nearly left LW after my first day or two. It gave me the impression that a major purpose of rationalism was to make fun of other people's irrationality rather than trying to change or improve either party. In short, to act like a jerk.

I'm glad I stuck around long enough to realize this post wasn't representative. Eliezer, at one point you said you wanted to know if there were characteristically male mistakes happening that would deter potential LWers. I can't speak for all women, but this post exemplifies a kind of male hubris that I find really off-putting. Obviously the woman in the penultimate paragraph appreciated it in someone else, but I don't know if it made her think, "This is a community I want to hang out with so I, too, can make fools of other people at parties."

Replies from: jsalvatier, Jotto999
comment by jsalvatier · 2011-08-08T19:55:05.994Z · LW(p) · GW(p)

Do you occasionally see other comments/posts that give you this same vibe?

Replies from: juliawise
comment by juliawise · 2011-08-08T20:11:07.824Z · LW(p) · GW(p)

I can't think of any. Maybe there have been comments, but they're not sanctioned in the same way a core sequences post is, so I'm more apt to dismiss them.

comment by Jotto999 · 2012-02-12T21:44:56.683Z · LW(p) · GW(p)

Before I say anything I would like to mention that this is my first post on LW, and being only part way through the sequences I am hesitant to comment yet, but I am curious about your type of position.

What I find peculiar about your position is the fact that Yudkowsky did not, as he presented it here, start the argument. The other person did, asserting "only God can make a soul", implying that Yudkowsky's profession is impossible or nonsensical. Vocalizing any type of assertion, in my opinion, should be viewed as a two-way street, letting potential criticism come. In this particular example the assertion was on a subject that the man knew would be of great interest to Yudkowsky, certainly disproportionately more than, say, whether or not the punch being served had mango juice in it.

I'd like to know what you expect Yudkowsky should have done given the situation. Do you expect him not to give his own opinion, given the other person's challenge? Or was it instead something in particular about the way Yudkowsky did it? Isn't arguing inevitable, and all we can do is try to build better dialogue quality? (That has been my conclusion for the last few years.) Either way, I don't see the hubris you seem to. My usual complaint about discussions is that they are not well informed enough and people tend to say things that are too vague to be useful, or outright unsupported. However I rarely see a discussion and think "Well, the root problem here is that they are too arrogant", so I'd like to know what your reasoning is.

It may be relevant that in real life I am known by some as being "aggressive" and "argumentative". You probably could have inferred that from my position, but I'd like to keep everything about it as transparent as possible.

Thank you for your time.

Replies from: juliawise
comment by juliawise · 2012-02-13T02:50:16.221Z · LW(p) · GW(p)

If I were the host I would not like it if one of my guests tried to end a conversation with "We'll have to agree to disagree" and the other guest continued with "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree." In my book this is obnoxious behavior.

Having fun at someone else's expense is one thing, but holding it up in an early core sequences post as a good thing to do is another. Given that we direct new Less Wrong readers to the core sequence posts, I think they indicate what the spirit of the community is about. And I don't like seeing the community branded as being about how to show off or how to embarrass people who aren't as rational as you.

What gave me an icky feeling about this conversation is that Eliezer didn't seem to really be aiming to bring the man round to what he saw as a more accurate viewpoint. If you've read Eliezer being persuasive, you'll know that this was not it. He seemed more interested in proving that the man's statement was wrong. It's a good thing for people to learn to lose graciously when they're wrong, and to learn from the experience. But that's not something you can force someone to learn from the outside. I don't think the other man walked away from this experience improved, and I don't think that was Eliezer's goal.

I, like you, love a good argument with someone who also enjoys it. But to continue arguing with someone who's not enjoying it feels sadistic to me.

If I were in this conversation, I would try to frame it as a mutual exploration rather than a mission to discover which of us was wrong. At the point where the other tried to shut down the conversation, I might say, "Wait, I think we were getting to something interesting, and I want to understand what you meant when you said..." Then proceed to poke holes, but in a curious rather than professorial way.

Replies from: Jotto999, satt
comment by Jotto999 · 2012-02-13T17:04:09.786Z · LW(p) · GW(p)

Interesting. Do we have any good information on the attributes of discussions or debates that are the most likely to educate the other person when they disagree? In hindsight this would be a large shortcoming of mine, having debated for years now but never invested much in trying to optimize my approach with people.

Something I've noticed: when someone takes the "conquer the debate" adversarial approach, a typical-minded audience appears more likely to be interested and side with the "winner" than if the person takes a much more reserved and cooperative approach despite having equally well-supported arguments. Maybe the first works well for typical audiences and the second for above-typical ones? Or maybe it doesn't matter if we can foster the second in "typical" minds. Given my uncertainty it seems highly unlikely that my approach with people is optimal.

Do you have any tips for someone interested in making a mental habit out of cooperative discussion as opposed to being adversarial? I find it very difficult, I'm an aggressive and vigorous person. Maybe if I could see a video of someone using the better approach so I can try to emulate them.

Replies from: thomblake, juliawise
comment by thomblake · 2012-02-13T17:09:13.150Z · LW(p) · GW(p)

Interesting. Do we have any good information on the attributes of discussions or debates that are the most likely to educate the other person when they disagree?

Something I've noticed: when someone takes the "conquer the debate" adversarial approach, a typical-minded audience appears more likely to be interested and side with the "winner" than if the person takes a much more reserved and cooperative approach despite having just as supported arguments. Maybe the first works well for typical audiences and the second for above-typical ones?

I hope you've noticed you changed the subject here. In the first paragraph you're trying to persuade the person with whom you are conversing; in the second paragraph you're trying to convince an audience. They might well require entirely different methods.

Replies from: Jotto999
comment by Jotto999 · 2012-02-13T17:29:25.626Z · LW(p) · GW(p)

You're right, I see now that the effect on audiences does not relate much to the one-on-one, so I should have kept a clear distinction. Thank you for pointing this out.

I believe this obvious mistake shows that I shouldn't comment on the sequences as I work my way through them, but rather it is better if I only start commenting after I have become familiar with them all. I am not ready yet to make comments that are relevant and coherent, and the very last thing I want to do is pollute the comment section. I am so glad about the opportunity for growth this site has, thanks very much to all.

Replies from: thomblake
comment by thomblake · 2012-02-13T18:21:16.960Z · LW(p) · GW(p)

I shouldn't comment on the sequences as I work my way through them, ... the very last thing I want to do is pollute the comment section.

Meh. Comments on old sequence posts don't add much noise, as long as the comment threads don't explode.

comment by juliawise · 2012-02-14T02:07:04.066Z · LW(p) · GW(p)

An adversarial approach may impress spectators. In Eliezer's example, it impressed at least one. But I think it's more likely to alienate the person you're actually conversing with.

I don't have objective research on this. I'm working from personal experience and social work training. In social work you assume people are pretty irrational and coax them round to seeing what you think are better approaches in a way that doesn't embarrass them.

In social work we'd call it "collaborative empiricism" or Socratic questioning. Here's a video example of a therapist not shouting "Of course you're not being punished by God!" It's more touchy-feely than an argument, but the elements (taking the outside view, encouraging him to lay out the evidence on the situation) are there.

Replies from: TheOtherDave, Jotto999
comment by TheOtherDave · 2012-02-14T03:59:33.453Z · LW(p) · GW(p)

Shortly after my stroke, my mom (who was in many ways more traumatized by it than I was) mentioned that she was trying to figure out what it was that she'd done wrong such that God had punished her by my having a stroke. As you might imagine, I contemplated a number of different competing responses to this, but what I finally said was (something along the lines of) "Look, I understand why you want to build a narrative out of this that involves some responsible agent making decisions that are influenced by your choices, and I recognize that we're all in a difficult emotional place right now and you do what you have to do, but let me offer you an alternative narrative: maybe I had a survivable stroke at 40 so I'd start controlling my blood pressure so I didn't have a fatal one at 45. Isn't that a better story to tell yourself?"

I was pretty proud of that interaction.

Replies from: juliawise
comment by juliawise · 2012-02-14T11:45:21.664Z · LW(p) · GW(p)

Nice work!

That's the same idea as narrative therapy: drawing a new storyline with the same data points.

comment by Jotto999 · 2012-02-14T21:16:22.448Z · LW(p) · GW(p)

Hmm! I found that actually quite helpful. The therapist didn't even voice any apparent disagreement, he coaxed the man into making his reasoning explicit. This would greatly reduce the percent of the argument spent in an adversarial state. I noticed that it also put the emphasis of the discussion on the epistemology of the subject which seems the best way for them to learn why they are wrong, as opposed to a more example-specific "You're wrong because X".

Thank you for that link. Would it be useful for me to use other videos involving a therapist who disagrees with a delusional patient? It seems like the ideal type of behaviour to try and emulate. This is going to take me lots of practice but I'm eager to get it.

Thank you for your help and advice!

Replies from: juliawise
comment by juliawise · 2012-02-15T23:16:31.313Z · LW(p) · GW(p)

Would it be useful for me to use other videos involving a therapist who disagrees with a delusional patient?

I'm not sure. The kind of irrational beliefs you're likely to talk about with others are some kind of misconception or cached belief, rather than an out-and-out delusion like "people are following me everywhere", which probably stems from a chemical imbalance and can't really be talked away.

You could try reading up on CBT, but the literature is about doing therapy, which is a pretty different animal from normal conversations. Active listening might be a more useful skill to start with. People are less defensive if they feel you're really trying to understand their point of view.

comment by satt · 2012-02-14T01:01:41.263Z · LW(p) · GW(p)

If I were the host I would not like it if one of my guests tried to end a conversation with "We'll have to agree to disagree" and the other guest continued with "No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree." In my book this is obnoxious behavior.

I'd find it especially obnoxious because Aumann's agreement theorem looks to me like one of those theorems that just doesn't do what people want it to do, and so ends up as a rhetorical cudgel rather than a relevant argument with practical import.
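
For reference, here is a rough statement of what the theorem actually requires (my paraphrase of Aumann's 1976 result, not anything from the post): if two agents share a common prior and their posterior probabilities for an event are common knowledge between them, then those posteriors must be equal.

\[
P_1 = P_2 = P \ \text{(common prior)}, \qquad q_i = P(E \mid \mathcal{I}_i) \ \text{common knowledge} \;\Longrightarrow\; q_1 = q_2,
\]

where $\mathcal{I}_i$ denotes agent $i$'s private information. Neither the common-prior assumption nor the common-knowledge-of-posteriors assumption plausibly holds between two strangers at a dinner party, which is why the theorem carries so little practical import in that setting.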

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-02-28T14:33:49.636Z · LW(p) · GW(p)

Agreed. If this were Judo, it wasn't a clean point. EY's opponent simply didn't know that the move used on him was against the sport's rules, and failed to cry foul.

Storytelling-wise, EY getting away with that felt like a surprising ending, like a minor villain not getting his comeuppance.

comment by contrarycynic · 2011-12-09T04:16:09.716Z · LW(p) · GW(p)

Interesting. When I am arguing with somebody, I usually get them to explicitly define every one of their terms and then use the definitions to logic them into realising that their argument was faulty. A more rational person could have escaped your comment simply by defining AI as human-like intelligence, i.e., the ability to create, dream, emote and believe without prior programming for those things. And yes, I am religious, and my belief can be overturned by proof. If aliens are found with human-like intelligence, I will give up my faith entirely, but until then, just about anything else can be explained from within my ideology.

Replies from: wedrifid
comment by wedrifid · 2011-12-09T04:39:15.270Z · LW(p) · GW(p)

When I am arguing with somebody, I usually get them to explicitly define every one of their terms and then use the definitions to logic them into realising that their argument was faulty.

Does that work with actual people?

comment by alttaab · 2012-02-11T00:03:28.844Z · LW(p) · GW(p)

[ Disclaimer: This is my first post so please don't go easy on me. ]

After reading a handful of comments, I am surprised to see so many people think of what Eliezer did here as some sort of "bad" thing. Maybe I'm missing something, but all I saw was him convincing the man to continue the discourse even though he initially began to shy away from it.

Citing a theorem may have intimidated him a little, but in all fairness, Eliezer did let him know at the outset that he worked in the field of Artificial Intelligence.

Replies from: Jiro
comment by Jiro · 2014-08-26T18:48:18.772Z · LW(p) · GW(p)

Old post, but...

  1. He cited a theorem knowing that his opponent wouldn't be able to detect a bad cite.
  2. It actually was a bad cite.
comment by hesperidia · 2012-02-23T06:43:50.539Z · LW(p) · GW(p)

I've already seen plenty of comment here on just how awkward this post is to be so early in the Sequences, and how it would turn people off, so I won't comment on that.

However: Seeing this post, early in the sequences, led me to revise my general opinion of Eliezer down just enough that I managed to catch myself before I turned specific admiration into hero-worship (my early, personal term for the halo effect).

I seriously, seriously doubt that's the purpose of this article, mainly because if Eliezer had wanted to deliberately prevent himself from being affective-death-spiraled, this article would read more subtly.

That said, if it is agreed that it would be good for a post like this to exist early in the Sequences (that's a pretty big if), I would hope that it could be written to invite fewer pattern-matches to the stereotype of "socially-oblivious, obsessed-with-narrow-intellectual-interest geek/nerd/dork".

Replies from: None, wedrifid
comment by [deleted] · 2012-02-23T07:30:12.684Z · LW(p) · GW(p)

I would hope that it could be written to invite fewer pattern-matches to the stereotype of "socially-oblivious, obsessed-with-narrow-intellectual-interest geek/nerd/dork".

Nah, didn't happen. The essay reports an adolescent fantasy featuring martial invincibility. I'm sure the author has grown up by now.

comment by wedrifid · 2012-02-23T09:41:18.469Z · LW(p) · GW(p)

I've already seen plenty of comment here on just how awkward this post is to be so early in the Sequences, and how it would turn people off, so I won't comment on that.

So early in the sequences? It would seem to be worse later in what we now call the sequences. At the time this was written it was just a casual post on a blog Eliezer had only recently started posting on. Perhaps the main error is that somehow someone included it in an index when they were dividing the stream of blog posts into 'sequences' for reference.

comment by EphemeralNight · 2012-08-14T02:25:37.478Z · LW(p) · GW(p)

There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.

This seems like one of those things that can be detrimental if taught in isolation.

It may be a good idea to emphasize that it's far less likely that only one person in a disagreement is doing something wrong than that both sides are doing something wrong.

I can easily imagine someone casually encountering that statement, and taking it to instead mean this:

There's a thing called "Aumann's Agreement Theorem" that says rationalists can't agree to disagree. Therefore if I apply the label "rationalist" to myself, I can use the words "Aumann's Agreement Theorem" to prove that anyone who disagrees with me is wrong.

comment by Epiphany · 2012-08-14T06:29:01.848Z · LW(p) · GW(p)

Eliezer, that's false reasoning. I'm not religious, so don't take this as the opening to a religious tirade, but it's a pet peeve of mine that intelligent people will assert that every belief within a religion is wrong if only one piece of it is wrong.

There are a billion and one reasons why a body of knowledge that is mostly correct (not that I'm saying religions are) could have one flaw. This particular flaw wouldn't prove God doesn't exist; it would only prove that souls aren't necessary for an intelligent life form to survive, or (perhaps, to a religious person) that God isn't the only entity that can make them.

It's easy to get lazy when one's opponent isn't challenging enough (I've done it occasionally and said stuff like that), but I think it's best not to. Arguments like that aren't convincing to the opposition, and they don't challenge us to improve.

Replies from: None, Lotska
comment by [deleted] · 2012-12-19T03:11:36.458Z · LW(p) · GW(p)

I think that Yudkowsky, hubris nonetheless, has made a few mistakes in his own reasoning.

A: "I don't believe Artificial Intelligence is possible because only God can make a soul." B: "You mean if I can make an Artificial Intelligence, it proves your religion is false?"

I don't see at all how this follows. At best, this would only show that A's belief about what only God can do is mistaken. Concluding that this entails their religion is false is purely fallacious reasoning. Imagine the following situation:

A: "I don't believe entanglement is possible because quantum mechanics shows non-locality is impossible." B: "You mean if I can show entanglement is possible, it proves quantum mechanics is false?"

This is not a case of quantum mechanics being false, but rather a case of A's knowledge of what quantum mechanics does and does not show being false.

What you believe or don't believe about quantum mechanics or God is irrelevant to this point. The point is that the conclusion Yudkowsky made was, at best, hastily and incorrectly arrived at. Of course, saying that "if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false" is sound reasoning and a simple example of modus tollens. But that is not, as far as I can see, what A said.

A: "I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are." B: "So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong."

There again seems to be invalid reasoning at work here. Whether or not an AI entity can 'start talking' about an emotional life that sounds like ours has nothing to do with the comment made by A, which was about whether or not such AI entities could actually be emotional in the same way organic beings are.

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2012-12-19T03:36:04.460Z · LW(p) · GW(p)

I think that Yudkowsky, hubris nonetheless, has made a few mistakes in his own reasoning.

Consider rewording that in such a manner that you can fit the 'hubris' label in while leaving the sentence coherent.

comment by TheOtherDave · 2012-12-19T03:38:01.085Z · LW(p) · GW(p)

Well, if person A's religion strictly implies the claim that only God can make a soul and this precludes AI, then the falsehood of that claim also implies the falsehood of A's religion. (A->B => -B -> -A)
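
Spelled out (a sketch of the same step, with A standing for "person A's religion is true" and B for "only God can make a soul, which precludes AI"):

\[
(A \rightarrow B) \;\equiv\; (\neg B \rightarrow \neg A), \qquad \text{so from } A \rightarrow B \text{ and } \neg B \text{ we may infer } \neg A .
\]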

But sure, you're of course correct that if person A is mistaken about what person A's religion claims, then no amount of demonstrated falsehoods in person A's statements necessarily demonstrates falsehood in person A's religion.

That said... if we don't expect person A to say "my religion claims X" given that person A's religion claims X, and we don't expect person A to say "my religion doesn't claim X" given that person A's religion doesn't claim X, then what experiences should we expect given the inclusion or exclusion of particular claims in person A's religion?

Because if there aren't any such experiences, then it seems that this line of reasoning ultimately leads to the conclusion that not only the objects religions assert to exist, but the religions themselves, are epiphenomenal.

Replies from: J Mann
comment by J Mann · 2019-03-04T19:28:05.437Z · LW(p) · GW(p)

I think the "strictly implies" may be stealing a base.

Yes, being convinced of the existence of the AI would make the man rethink the aspects of his religion that he believes render an AI impossible, but he could update those and keep the rest. From his perspective, he'd have the same religion, but updated to account for the belief in AIs.

comment by Lotska · 2013-05-23T20:38:40.374Z · LW(p) · GW(p)

It looks like false logic to me too, but I'm very aware that that is how many Christians "prove" their religion to be true. 'The Bible says this historical/Godly event happened and this archeological evidence supports the account in the Bible, therefore the Bible must be true about everything so God exists and I'm going to Heaven.' Which sounds very similar to 'This is a part of what you say about your religion and it may be proved false one day, so your religion might be too.'

Is it okay to slip into the streams of thought that the other considers logic in order to beat them at it and potentially shake their beliefs?

Replies from: DSherron
comment by DSherron · 2013-05-23T21:46:54.765Z · LW(p) · GW(p)

Is it okay to slip into the streams of thought that the other considers logic in order to beat them at it and potentially shake their beliefs?

Basically, the question here is whether you can use the Dark Arts with purely Light intentions. In the ideal case, I have to say "of course you can". Assuming that you know a method which you believe is more likely to cause your partner to gain true beliefs rather than false ones, you can use that method even if it involves techniques that are frowned upon in rationalist circles. However, in the real world, doing so is incredibly dangerous. First, you have to consider the knock-on effects of being seen to use such lines of reasoning; it could damage your reputation, or that of rationalists in general, in the eyes of those who hear you; it could cause people to become more firm in a false epistemology, which makes them more likely to just adopt another false belief; etc. You also have to consider that you run on hostile hardware; you could damage your own rationality if you aren't very careful about handling the cognitive dissonance. There are a lot of failure modes you open yourself up to when you engage in that sort of anti-reasoning, and while it's certainly possible to navigate through it unscathed (I suspect Eliezer has done so in his AI box experiments), I don't think it is a good idea to expose yourself to the risk without a good reason.

A separate but also relevant point: everything is permissible, but not all things are good. Asking "is it okay to..." is the wrong question, and is likely to expose you to some of the failure modes of Traditional Rationality. You don't automatically fail by phrasing it like that, but once again it's an issue of unnecessarily risking mental contamination. The better question is "is it a good idea to..." or "what are the dangers of..." or something similar that voices what you really want answered, which should probably not be "will LWers look down on me for doing ..." (After all, if something is a good idea but we look down on it, then we want to be told so, so that we can stop doing silly things like that.)

Replies from: jake-heiser
comment by descent (jake-heiser) · 2020-10-09T06:21:20.104Z · LW(p) · GW(p)

The framing of the first sentence gives me a desperately unfair expectation for the discussion inside HPMOR; I'm excited.

comment by Sengachi · 2012-09-08T18:21:06.162Z · LW(p) · GW(p)

Me: Writes on hand "Aumann's Agreement Theorem". Thank you Eliezer, you have no idea how much easier you just made my Theory of Knowledge class. Half of our discussions in class seem to devolve into statements about how belief is a way of knowing and how everyone has a right to their own belief. This (after I actually look up and confirm for myself that Aumann's Agreement Theorem works) should make my class a good deal less aggravating.

Replies from: jake-heiser
comment by descent (jake-heiser) · 2020-10-09T06:13:35.961Z · LW(p) · GW(p)

If it also states that the participants must be rationalists, as Yudkowsky specifies, you'll be sorely disappointed to find out how many people would identify as rationalists.

comment by Anna_Zhang · 2013-07-18T08:16:51.887Z · LW(p) · GW(p)

I would have loved to watch that.

comment by aletheianink · 2013-12-01T06:14:07.386Z · LW(p) · GW(p)

That was beautiful!

comment by J Mann · 2019-03-04T19:25:02.465Z · LW(p) · GW(p)

I wrote a long post saying what several people had already said years ago, then shortened it. Still, because this post has made me mad for years:

1) Of COURSE people can agree to disagree! If not, EY is telling this guy that no two rationalists currently disagree about anything. If THAT were true, it would be so fascinating that it should have derailed the whole conversation!

(Leaving aside, for the moment, the question of whether Aumann's theorem "requires" a rationalist to agree with a random party-goer. If it really did, then the party-goer could convince EY by simply refusing to change his mind.)

2) Presumably, if EY did produce an AI to the party-goer's satisfaction, the party-goer would very likely update his religious beliefs to include the existence of AIs. EY is smart enough to see that, so trying to trap the other guy with "so if true AI is developed, then God doesn't exist" is just dunking on somebody who isn't smart enough to get to the answer of "At a minimum, if AI exists, then I am mistaken in at least some of my current beliefs about God and mind-body duality."

comment by descent (jake-heiser) · 2020-10-09T06:11:22.463Z · LW(p) · GW(p)

While I understand the absolute primal urge to stomp on religious texts used to propagate compulsory heterosexuality, I do think this exchange ended up a bit of a poor game, when it seems like he'd be mostly interested in discussing how the emotions of programmed thought might differ from ours (and that's a fun well to splash around in, for a while) (though disposing of cult-friendly rhetoric is valuable too, even if you have to get nasty).

I'm mildly concerned about the Reign Of Terror Precept, but I also understand it. It's just disappointing to know that the good faith of conversation has to be preserved artificially (ostensibly limited to Eliezerposting, which is more than fair). I can't wait to read the Harry Potter Fanfiction Fanfiction.

comment by mike_hawke · 2022-11-17T00:46:19.195Z · LW(p) · GW(p)

I wonder if this post would have gotten a better reception if the stooge had been a Scientologist or a conspiracy theorist or something, instead of just a hapless normie.

comment by Stephen McAleese (stephen-mcaleese) · 2023-01-15T12:08:45.812Z · LW(p) · GW(p)

This is a great example of an ad hoc hypothesis.