My Best and Worst Mistake

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-16T00:43:50.000Z · LW · GW · Legacy · 17 comments

Yesterday I covered the young Eliezer's affective death spiral around something that he called "intelligence".  Eliezer1996, or even Eliezer1999 for that matter, would have refused to try and put a mathematical definition on it—consciously, deliberately refused.  Indeed, he would have been loath to put any definition on "intelligence" at all.

Why?  Because there's a standard bait-and-switch problem in AI, wherein you define "intelligence" to mean something like "logical reasoning" or "the ability to withdraw conclusions when they are no longer appropriate", and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, "Lo, I have implemented intelligence!"  People came up with poor definitions of intelligence—focusing on correlates rather than cores—and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence.  It's not like Eliezer1996 was out to build a career in Artificial Intelligence.  He just wanted a mind that would actually be able to build nanotechnology.  So he wasn't tempted to redefine intelligence for the sake of puffing up a paper.

Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else's stupidity:  Having seen attempts to define "intelligence" abused so often, I refused to define it at all.  What if I said that intelligence was X, and it wasn't really X?  I knew in an intuitive sense what I was looking for—something powerful enough to take stars apart for raw material—and I didn't want to fall into the trap of being distracted from that by definitions.

Similarly, having seen so many AI projects brought down by physics envy—trying to stick with simple and elegant math, and being constrained to toy systems as a result—I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence.  "Except for Bayes's Theorem," Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

If you're wondering why Eliezer2000 thought such a thing—disbelieved in a math of intelligence—well, it's hard for me to remember this far back.  It certainly wasn't that I ever disliked math.  If I had to point out a root cause, it would be reading too few, too popular, and the wrong Artificial Intelligence books.

But then I didn't think the answers were going to come from Artificial Intelligence; I had mostly written it off as a sick, dead field.  So it's no wonder that I spent too little time investigating it.  I believed the cliché about Artificial Intelligence overpromising.  You can fit that into the pattern of "too far in the opposite direction"—the field hadn't delivered on its promises, so I was ready to write it off.  As a result, I didn't investigate hard enough to find the math that wasn't fake.

My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

Because I disbelieved that there could be any simple answers to intelligence, I went and I read up on cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, evolutionary biology, and more than one branch of Artificial Intelligence.  When I had what seemed like simple bright ideas, I didn't stop there, or rush off to try and implement them, because I knew that even if they were true, even if they were necessary, they wouldn't be sufficient: intelligence wasn't supposed to be simple, it wasn't supposed to have an answer that fit on a T-shirt.  It was supposed to be a big puzzle with lots of pieces; and when you found one piece, you didn't run off holding it high in triumph, you kept on looking.  Try to build a mind with a single missing piece, and it might be that nothing interesting would happen.

I was wrong in thinking that Artificial Intelligence, the academic field, was a desolate wasteland; and even wronger in thinking that there couldn't be a mathematics of intelligence.  But I don't regret studying e.g. functional neuroanatomy, even though I now think that an Artificial Intelligence should look nothing like a human brain.  Studying neuroanatomy meant that I went in with the idea that if you broke up a mind into pieces, the pieces were things like "visual cortex" and "cerebellum"—rather than "stock-market trading module" or "commonsense reasoning module", which is a standard wrong road in AI.

Studying fields like functional neuroanatomy and cognitive psychology gave me a very different idea of what minds had to look like than you would get from just reading AI books—even good AI books.

When you blank out all the wrong conclusions and wrong justifications, and just ask what that belief led the young Eliezer to actually do...

Then the belief that Artificial Intelligence was sick, and that the real answer would have to come from healthier fields outside, led him to study lots of cognitive sciences;

The belief that AI couldn't have simple answers led him not to stop prematurely on one brilliant idea, and to accumulate lots of information;

The belief that you didn't want to define intelligence led to a situation in which he studied the problem for a long time before, years later, he started to propose systematizations.

This is what I refer to when I say that this is one of my all-time best mistakes.

Looking back, years afterward, I drew a very strong moral, to this effect:

What you actually end up doing, screens off the clever reason why you're doing it.

Contrast amazing clever reasoning that leads you to study many sciences with amazing clever reasoning that says you don't need to read all those books.  Afterward, when your amazing clever reasoning turns out to have been stupid, you'll have ended up in a much better position if your amazing clever reasoning was of the first type.
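
Read "screens off" in the standard probabilistic sense of conditional independence; the one-line formalization below is an editorial gloss on the moral, not a claim made in the original post.  Once you condition on the action you actually took, the clever reasoning behind it carries no further information about the outcome:

$$P(\text{outcome} \mid \text{action}, \text{reasoning}) = P(\text{outcome} \mid \text{action})$$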

When I look back upon my past, I am struck by the number of semi-accidental successes, the number of times I did something right for the wrong reason.  From your perspective, you should chalk this up to the anthropic principle: if I'd fallen into a true dead end, you probably wouldn't be hearing from me on this blog.  From my perspective it remains something of an embarrassment.  My Traditional Rationalist upbringing provided a lot of directional bias to those "accidental successes"—biased me toward rationalizing reasons to study rather than not study, prevented me from getting completely lost, helped me recover from mistakes.  Still, none of that was the right action for the right reason, and that's a scary thing to see when you look back on your youthful history.  One of my primary purposes in writing on Overcoming Bias is to leave a trail to where I ended up by accident—to obviate the role that luck played in my own forging as a rationalist.

So what makes this one of my all-time worst mistakes?  Because sometimes "informal" is another way of saying "held to low standards".  I had amazing clever reasons why it was okay for me not to precisely define "intelligence", and certain of my other terms as well: namely, other people had gone astray by trying to define it.  This was a gate through which sloppy reasoning could enter.

So should I have jumped ahead and tried to forge an exact definition right away?  No: all the reasons why I knew this was the wrong thing to do were correct; you can't conjure the right definition out of thin air if your knowledge is not adequate.

You can't get to the definition of fire if you don't know about atoms and molecules; you're better off saying "that orangey-bright thing".  And you do have to be able to talk about that orangey-bright stuff, even if you can't say exactly what it is, to investigate fire.  But these days I would say that all reasoning on that level is something that can't be trusted—rather it's something you do on the way to knowing better, but you don't trust it, you don't put your weight down on it, you don't draw firm conclusions from it, no matter how inescapable the informal reasoning seems.

The young Eliezer put his weight down on the wrong floor tile—stepped onto a loaded trap.  To be continued.

17 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Allan_Crossman · 2008-09-16T01:05:36.000Z · LW(p) · GW(p)

From your perspective, you should chalk this up to the anthropic principle: if I'd fallen into a true dead end, you probably wouldn't be hearing from me on this blog.

I'm not sure that can properly be called anthropic reasoning; I think you mean a selection effect. To count as anthropic, my existence would have to depend upon your intellectual development; which it doesn't, yet. :)

(Although I suppose my existence as Allan-the-OB-reader probably does so depend... but that's an odd way of looking at it.)

comment by Zubon · 2008-09-16T01:40:42.000Z · LW(p) · GW(p)

Do you see any relationship between this and your current view of philosophy?

comment by RobinHanson · 2008-09-16T02:18:32.000Z · LW(p) · GW(p)

Many intellectuals (like me) find it hard to focus long on a particular topic, and easily succumb to weak excuses to read widely and try out new fields. I suspect your personality largely determined your history here.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-16T02:53:20.000Z · LW(p) · GW(p)

Many intellectuals (like me) find it hard to focus long on a particular topic, and easily succumb to weak excuses to read widely and try out new fields. I suspect your personality largely determined your history here.

I don't say you're wrong, but the obvious next question is what kind of realization would lead you to focus long on a particular topic despite your personality, why the young Eliezer lacked that realization, and even whether that would have been appropriate (considering how things turned out).

I mean, if you'd told the young Eliezer that, he would have fired back that extreme specialization was an error produced by poor incentive structures in academia. Relative to his state of knowledge (not yet knowing which path to go down and specialize extremely in), this was a lucky mistake for him to make, though it was still a mistake, because you can't stay permanently in a state of shallow exploration, and depth isn't just an incentive failure.

My own change of opinion on this subject dates to my Bayesian Enlightenment, when my opinion changed about a lot of things, making it hard to untangle; but I would mostly chalk it up to reading E.T. Jaynes and seeing a higher level of precision in action.

comment by RobinHanson · 2008-09-16T03:10:58.000Z · LW(p) · GW(p)

Eliezer, "sluts" often eventually settle down with someone - their initial wide exploration was not so much a general preference for variety, but rather a strategy of exploring more widely before settling down.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-16T03:14:33.000Z · LW(p) · GW(p)

So are we chalking this up to the successful execution of an adaptation?

comment by Bob_Unwin11 · 2008-09-16T03:16:18.000Z · LW(p) · GW(p)

It's common for non-depressed people to say things like "I made some mistakes in the past, but it all turned out for the best in the end and I'm happy where I've ended up". A big reason for this is that for most people it is psychologically very difficult to admit that their previous mistakes have put them in a situation significantly worse than they could have been in had they made better choices.

One way to recognize such cases is to find out if people are mostly positive about how they've ended up. For example, suppose that Fred works as a lawyer for five years. Over that period, Fred mostly says that he is happy with his choice to do law. After the five years, Fred switches to a non-legal job in a major corporation. After a few years, he says that it was a mistake for him to go into law, but that he's happy how things have turned out.

So: would Eliezer_1999 have said confidently that he was happy with how things had turned out? Would he have been able to give lots of reasons why he was in a better position than Eliezer_1996?

Also: If you had gone to a university with a good AI department, then you would have encountered people doing Bayesian stuff. (The same goes for a stat department, or a philosophy department with a strong philosophy-of-science orientation.) Would you have been better off going to college rather than being an autodidact? I'm asking non-rhetorically. What advice would you give to pre-college teenagers in something like your position in 1996?

comment by michael_vassar3 · 2008-09-16T05:51:03.000Z · LW(p) · GW(p)

Bob: what do you think of my analysis of my mistakes? In general, I think that it is hugely desirable that someone fairly similar to me made roughly the mistakes that I made in terms of life path, but it's unfortunate that I made them rather than someone somewhat less capable. In terms of other types of mistakes, I have made a few terrible ones but got reasonably lucky, so there were no consequences of note.

Eliezer: one heuristic that could have given you roughly your behaviors, if you had formed it, is "almost no one invests adequately in information while still investing effort in action at all". As a general idea, high-level intellectual exploration should consume substantially more time than goal-directed action, but there are few social encouragements to behave in this manner, so the only people who do so are essentially those who are addicted to such intellectual exploration and have no propensity or willingness to take action at all.

comment by Brian5 · 2008-09-16T09:05:24.000Z · LW(p) · GW(p)

...but the obvious next question is what kind of realization would lead you to focus long on a particular topic despite your personality...

My general rule is that we're inclined towards optimal opportunities to experience our personal affect in the world. We continue exploring widely as long as no specific topic quite matches our existing knowledge, circumstances, and cognitive disposition.

When we find knowledge that helps us interpret our circumstances in a way that we can personally identify with those interpretations (i.e. we recognize our previous knowledge and cognitive disposition in them, and subsequently we recognize the effects of our interpretations on our circumstances, e.g. when events correspond with our beliefs so well that events seem to occur because we anticipate them -- whether via Bayes's Theorem or the Bible or whatever it may be), then we can focus.

I think up to a point the more various fields of knowledge we study the more difficult it is to find some core interpretive principle that works well with everything we know -- there's always some contradiction.

The key is to keep organizing and digesting all that knowledge as we go; at some point (late 20s?), if we've worked hard and been lucky enough with our accidents, gaining new knowledge actually helps put the old knowledge to good use: like having a lot of clothes, there's a lot more to match with. Then we start to gain enough creative freedom to develop our own 'personalized specialization' to focus on, which is knowledge that really helps us interpret circumstances in a way we identify with.

comment by Richard_Hollerith2 · 2008-09-16T09:09:26.000Z · LW(p) · GW(p)

Are you sure you want to use the word "addicted" as in "addicted to intellectual exploration", Michael Vassar? Einstein is on record as having derived great pleasure from learning and thinking about physics. Would you call it an addiction even though it did not prevent him from holding down a job as a clerk in a patent office when circumstances made that necessary?

comment by Z._M._Davis · 2008-09-16T15:46:27.000Z · LW(p) · GW(p)

Vassar: "what do you think of my analysis of my mistakes"

Um, which? Where? I am curious.

comment by michael_vassar3 · 2008-09-17T03:33:37.000Z · LW(p) · GW(p)

Well, Richard, we all know that Einstein's thought habits left him incapable of holding down a "real job" as a professor in an era when the field was MUCH less competitive than it is today. Fortunately for him, he lived in a society where there was readily available credential-based government work for even the least impressive degree holders, largely because there were so many fewer of them. He was also empirically unable to make his refrigerator company profitable.

comment by michael_vassar3 · 2008-09-17T03:35:53.000Z · LW(p) · GW(p)

To clarify, Z.M.: I didn't do an analysis here; I just gave my conclusion, which is that I'm glad someone made those mistakes, because important information was learned which is not otherwise available in my social group (except maybe some of it from Phil Goetz), but I would rather that someone other than I had made them and had simply told me.

comment by XiXiDu · 2012-04-01T11:36:55.658Z · LW(p) · GW(p)

...you can't conjure the right definition out of thin air if your knowledge is not adequate.

You can't get to the definition of fire if you don't know about atoms and molecules; you're better off saying "that orangey-bright thing". And you do have to be able to talk about that orangey-bright stuff, even if you can't say exactly what it is, to investigate fire. But these days I would say that all reasoning on that level is something that can't be trusted—rather it's something you do on the way to knowing better, but you don't trust it, you don't put your weight down on it, you don't draw firm conclusions from it, no matter how inescapable the informal reasoning seems.

I suppose this statement is qualified later on in the sequences? Otherwise, wouldn't this contradict what SI is doing with respect to risks associated with artificial general intelligence?

comment by A1987dM (army1987) · 2013-10-07T04:44:37.589Z · LW(p) · GW(p)

Studying neuroanatomy meant that I went in with the idea that if you broke up a mind into pieces, the pieces were things like "visual cortex" and "cerebellum"—rather than "stock-market trading module" or "commonsense reasoning module", which is a standard wrong road in AI.

That's when you break a human mind into pieces. Why should an artificial mind also be like that?

comment by Entraya · 2014-02-17T10:18:50.233Z · LW(p) · GW(p)

Well, that reminds me... Should anyone stumble across this article and comment, and know a good way to enter the fields of cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, or evolutionary biology, then please respond. I doubt I should make an attempt right away, but still, I have a functioning brain, and a brain can learn; one day I would like for it to learn about brains and learning and intelligence. I just don't know how I would even begin to find resources and try to understand it all, and I was somewhat lucky to find this fairly comprehensive site of articles to even begin this whole adventure.

comment by Rubix · 2014-04-01T04:41:40.021Z · LW(p) · GW(p)

s/by seeing someone else stupidity/by seeing someone else's stupidity/