Failure By Affective Analogy

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T07:14:55.000Z

Previously in series: Failure By Analogy

Alchemy is a way of thinking that humans do not instinctively spot as stupid.  Otherwise alchemy would never have been popular, even in medieval days.  Turning lead into gold by mixing it with things that seemed similar to gold sounded every bit as reasonable, back in the day, as trying to build a flying machine with flapping wings.  (And yes, it was worth trying once, but you should notice if Reality keeps saying "So what?")

And the final and most dangerous form of failure by analogy is to say a lot of nice things about X, which is similar to Y, so we should expect nice things of Y. You may also say horrible things about Z, which is the polar opposite of Y, so if Z is bad, Y should be good.

Call this "failure by affective analogy".

Failure by affective analogy is when you don't just say, "This lemon glazing is yellow, gold is yellow, QED," but rather say:

"And now we shall add delicious lemon glazing to the formula for the Philosopher's Stone the root of all wisdom, since lemon glazing is beautifully yellow, like gold is beautifully yellow, and also lemon glazing is delightful on the tongue, indicating that it is possessed of a superior potency that delights the senses, just as the beauty of gold delights the senses..."

That's why you find people saying things like, "Neural networks are decentralized, just like democracies" or "Neural networks are emergent, just like capitalism".

A summary of the Standard Prepackaged Revolutionary New AI Paradigm might look like the following - and when reading, ask yourself how many of these ideas are affectively laden:

By means of this tremendous package deal fallacy, lots of good feelings are generated about the New Idea (even if it's thirty years old).  Enough nice words may even manage to start an affective death spiral.  Until finally, via the standard channels of affect heuristic and halo effect, it seems that the New Idea will surely be able to accomplish some extremely difficult end -

- like, say, true general intelligence -

- even if you can't quite give a walkthrough of the internal mechanisms which are going to produce that output.

(Why yes, I have seen AGIfolk trying to pull this on Friendly AI - as they explain how all we need to do is stamp the AI with the properties of Democracy and Love and Joy and Apple Pie and paint an American Flag on the case, and surely it will be Friendly as well - though they can't quite walk through internal cognitive mechanisms.)

From such reasoning as this (and this) came the string of false promises that were published in the newspapers (and led futurists who grew up during that era to be very disappointed in AI - a lingering negative affect that now causes them to put AI a hundred years in the future).

Let's say it again:  Reversed stupidity is not intelligence - if people are making bad predictions, you should simply discard those predictions, not reason from their failure.

But there is a certain lesson to be learned.  A bounded rationalist cannot do all things, but the true Way should not overpromise - it should not (systematically/regularly/on average) hold out the prospect of success, and then deliver failure.  Even a bounded rationalist can aspire to be well calibrated, to not assign 90% probability unless they really do have good enough information to be right nine times out of ten.  If you only have good enough information to be right 6 times out of 10, just say 60% instead.  A bounded rationalist cannot do all things, but the true Way does not overpromise.
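A minimal sketch of what that calibration standard means in practice (the prediction data here is hypothetical, purely for illustration): bucket your predictions by stated confidence, then compare each stated probability with the observed frequency of success in that bucket.

```python
# Minimal calibration check (hypothetical data): bucket predictions by stated
# confidence, then compare each stated probability with the observed frequency
# of success for that bucket.
from collections import defaultdict

# (stated probability, whether the prediction came true) - illustration only
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for stated, came_true in predictions:
    buckets[stated].append(came_true)

for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> right {observed:.0%} of the time "
          f"over {len(outcomes)} predictions")
```

If the 90% bucket keeps coming back at 60%, the calibrated response is to start saying 60%.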

If you want to avoid failed promises of AI... then history suggests, I think, that you should not expect good things out of your AI system unless you have a good idea of how specifically it is going to happen.  I don't mean writing out the exact internal program state in advance.  But I also don't mean saying that the refrigeration unit will cool down the AI and make it more contemplative.  For myself, I seek to know the laws governing the AI's lawful uncertainty and lawful creativity - though I don't expect to know the full content of its future knowledge, or the exact design of its future inventions.

Don't want to be disappointed?  Don't hope!

Don't ask yourself if you're allowed to believe that your AI design will work.

Don't guess.  Know.

For this much I do know - if I don't know that my AI design will work, it won't.

There are various obvious caveats that need to be attached here, and various obvious stupid interpretations of this principle not to make.  You can't be sure a search will return successfully before you have run it -

- but you should understand on a gut level:  If you are hoping that your AI design will work, it will fail.  If you know that your AI design will work, then it might work.

And on the Friendliness part of that you should hold yourself to an even higher standard - ask yourself if you are forced to believe the AI will be Friendly - because in that aspect, above all, you must constrain Reality so tightly that even Reality is not allowed to say, "So what?"  This is a very tough test, but if you do not apply it, you will just find yourself trying to paint a flag on the case, and hoping.

3 comments

comment by Jim_Lebeau · 2008-11-19T14:58:36.000Z

From Piet Hein's "Grooks"

THE OPPOSITE VIEW

For many system shoppers it's
a good-for-nothing system
that classifies as opposites
stupidity and wisdom.

because by logic-choppers it's
accepted with avidity:
stupidity's true opposite's
the opposite stupidity.

comment by David_Gerard · 2011-04-01T20:02:06.971Z

"Neural networks are decentralized, just like democracies" or "Neural networks are emergent, just like capitalism".

D-:

Dare I ask who said these?

comment by A1987dM (army1987) · 2013-10-19T15:44:21.821Z

Reminds me of stuff like “$chemical [must be bad because it] is used in explosives!!!1!one!lol!”.