Partial Transcript of the Hanson-Yudkowsky June 2011 Debate

post by ChrisHallquist · 2012-04-19
So I'm currently trying to write an article on the nature of intelligence and the AI FOOM issue. I listened to the live debate between Eliezer Yudkowsky and Robin Hanson from June 2011, and thought that, taken together, Eliezer's comments in that debate make for a really good concise statement of his position. However, video and audio have the disadvantage that when you're trying to make sure you remember or understand something correctly, it can be inconvenient to find the relevant part and re-listen to it (because of the inability to skim, and to an extent because I read faster than most people talk). I finally decided to transcribe substantial chunks of Eliezer's comments for my own use, and I think my transcript will be useful to the Less Wrong community in general.
This is not a word-for-word transcript; I've eliminated verbal filler and false starts (where Eliezer stopped mid-sentence and decided to say something a different way). If people want to transcribe parts I left out, including Robin's remarks, they can put them in the comments and I'll edit them into the main post along with attribution. Corrections and suggested tweaks to what I've transcribed so far are also welcome.
EDIT: Partial credit for this post goes to kalla724, not because she contributed directly, but because I'm not sure I would have been able to do this without having read this post. It instantly increased my ability to avoid distractions while getting work done, sending my productivity through the roof. I hope the effects are permanent.
[7:30] When we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed a “brain in a box in a basement.” And I love that phrase, so I stole it. In other words, we tend to visualize that there’s this AI programming team, a lot like the wannabe AI programming teams you see nowadays, trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays, and they manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down to their basement and work on it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion.
[21:24] If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it. And the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now the complexity that it does have it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not. (And I’m not saying it’s that small because it’s 750 megabytes, I’m saying it’s gotta be that small because at least 90% of the 750 megabytes is junk and there’s only 30,000 genes for the whole body, never mind the brain.)
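[Editor's note: the arithmetic behind "it's gotta be that small" can be checked directly. A minimal sketch, using the transcript's round figures (750 megabytes, at least 90% junk), which are not precise genomics:]

```python
# Back-of-the-envelope check of the genome-size argument above.
# All figures are the transcript's round numbers.

genome_bytes = 750 * 10**6        # ~750 MB: ~3e9 base pairs at 2 bits per base
functional_fraction = 0.10        # "at least 90% ... is junk" leaves at most 10%
functional_bytes = genome_bytes * functional_fraction

# The brain's design information is bounded by the functional genome,
# which comes out to roughly 75 MB for the whole body, never mind the brain.
print(f"Functional DNA: ~{functional_bytes / 10**6:.0f} MB")
```

So even the most generous bound on the brain's "source code" is tens of megabytes, which is the comparison being drawn against a multi-gigabyte operating system.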
That something that simple can be this powerful, and this hard to understand, is a shock. But if you look at the brain design, it’s got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiles and so on; it just doesn’t really look all that complicated. It’s very powerful. It’s very mysterious. What we can say about it is that it probably involves a number of deep, major mathematical insights into the nature of intelligence that we need to comprehend before we can build it.
This is probably one of the more intuitive, less easily quantified and argued-by-reference-to-large-bodies-of-experimental-evidence type things. It’s more a sense of: you read through The MIT Encyclopedia of the Cognitive Sciences and you read Judea Pearl’s Probabilistic Reasoning in Intelligent Systems, so here’s an insight, an insight into the nature of causality; how many more insights of this size do we need, given what The MIT Encyclopedia of the Cognitive Sciences seems to indicate we already understand and what it doesn’t? And you sort of take a gander at it and say it’s probably about ten more insights: definitely not one, not a thousand, probably not a hundred either.
[27:34] Our nearest neighbors, the chimpanzees, share 95% of their DNA with us. Now, in one sense that may be a little misleading, because what they don’t share is probably more heavily focused on brain than body-type stuff. But on the other hand you can look at those brains. You can put the brains through an MRI. They have almost exactly the same brain areas as us. We just have larger versions of some brain areas. And I think there’s one sort of neuron we have that they don’t, or possibly they do have it but only in very tiny quantities.
This is because there have been only 5 million years since we split off from the chimpanzees. There simply has not been time to make any major changes to brain architecture in 5 million years. It’s just not enough to do really significant complex machinery. The intelligence we have is the last layer of icing on the cake, and yet if you look at the curve of evolutionary optimization going into the hominid line versus how much optimization power it put out, how much horsepower the intelligence had, it goes like this:
[Gestures to indicate something like a hyperbola or maybe a step function: a curve that is almost horizontal for a while, then becomes almost vertical.]
If we look at the world today, we find that taking a little bit out of the architecture produces something that is just not in the running as an ally or a competitor when it comes to doing cognitive labor. Chimpanzees don’t really participate in the economy at all, in fact. But the key point from our perspective is that although they are in a different environment and grow up learning different things, and there are genuinely skills that chimpanzees have that we don’t, such as being able to poke a branch into an anthill and pull it out in such a way as to have it covered with lots of tasty ants, nonetheless there are no branches of science where the chimps do better, despite their having mostly the same architecture and more locally relevant content.
So it seems to me, at least, that if we look at the present cognitive landscape, we are getting really strong information (pardon me, we’re trying to reason from one sample, but pretty much all of this is trying to reason in one way or another) that in this particular case at least, humans can develop all sorts of content that lets them totally outcompete other animal species who have been doing things for millions of years longer than we have, by virtue of architecture, and anyone who doesn’t have the architecture isn’t really in the running for it.
[33:20] This thing [picks up laptop] does run at around two billion Hz, and this thing [points to head] runs at about two hundred Hz. So if you can have architectural innovations which merely allow this thing [picks up laptop again] to do the sort of thing that this thing [points to head again] is doing, only a million times faster, then that million times faster means that 31 seconds works out to about a subjective year and all the time between ourselves and Socrates works out to about 8 hours.
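[Editor's note: the clock-speed arithmetic is easy to verify. A sketch, using the transcript's round figures; note that 2 GHz over 200 Hz is actually a ten-million-fold ratio, so "a million times" is itself a conservative rounding, and the exact "8 hours" figure depends on which speedup is assumed:]

```python
# Checking the subjective-time arithmetic in the passage above.

laptop_hz = 2e9                        # "around two billion Hz"
brain_hz = 200.0                       # "about two hundred Hz"
raw_ratio = laptop_hz / brain_hz       # 1e7: ten million, not one million

speedup = 1e6                          # the round figure used in the talk
seconds_per_year = 365.25 * 24 * 3600  # ~3.156e7 seconds

# One subjective year of thought, as experienced by the sped-up mind,
# passes in this much wall-clock time:
wall_clock_for_subjective_year = seconds_per_year / speedup
print(f"{wall_clock_for_subjective_year:.1f} s of wall clock per subjective year")

# All of recorded history since Socrates (~2,400 years), compressed:
years_since_socrates = 2400
subjective_hours = years_since_socrates * seconds_per_year / speedup / 3600
print(f"Socrates to now: ~{subjective_hours:.0f} wall-clock hours at a 1e6 speedup")
```

At a flat million-fold speedup the Socrates span comes out to roughly a day of wall-clock time; the order of magnitude, not the exact hour count, is the point.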
[40:00] People have tried raising chimps in human surroundings, and they absorb this mysterious capacity for abstraction that sets them apart from other chimps. There’s this wonderful book about one of these chimps, Kanzi was his name, very famous chimpanzee, probably the world’s most famous chimpanzee, and probably the world’s smartest chimpanzee as well. They were trying to teach his mother to do these human things and he was just a little baby chimp and he was watching and he picked stuff up. It’s amazing, but nonetheless he did not go on to become the world’s leading chimpanzee scientist using his own chimpanzee abilities separately.
If you look at human beings we have this enormous processing object containing billions upon billions of neurons and people still fail the Wason selection task. They cannot figure out which playing card they need to turn over to verify the rule “if a card has an even number on one side it has a vowel on the other.” They can’t figure out which cards they need to turn over to verify whether this rule is true or false.
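[Editor's note: for readers unfamiliar with the task, the correct answer under the rule as stated is to turn over the even-numbered card and the consonant card, and nothing else. A brute-force sketch; the card faces here are hypothetical examples, not ones from the talk:]

```python
# The Wason selection task, rule as stated above: "if a card has an even
# number on one side, it has a vowel on the other." A card must be turned
# over iff some possible hidden face could falsify the rule.

VOWELS = set("AEIOU")

def could_falsify(visible, hidden):
    """True if this (visible, hidden) pairing violates the rule."""
    for face, other in ((visible, hidden), (hidden, visible)):
        if face.isdigit() and int(face) % 2 == 0:
            if other.isalpha() and other not in VOWELS:
                return True  # even number paired with a consonant
    return False

def must_turn(visible, possible_hidden=("A", "K", "3", "8")):
    """Turn a card only if its hidden face could make the rule false."""
    return any(could_falsify(visible, h) for h in possible_hidden)

cards = ["A", "K", "3", "8"]
print([c for c in cards if must_turn(c)])  # ['K', '8']
```

Most people turn the vowel card, which can never falsify the rule, and neglect the consonant card, which can; that asymmetry is the reliably reproduced failure being referred to.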
[47:55] The reason why I expect localish sorts of things is that I expect one project to go over the threshold for intelligence, in much the same way that chimps went over the threshold of intelligence and became humans (yes, I know that’s not evolutionarily accurate), and then they have this functioning mind to which they can make all sorts of interesting improvements and have it run better and better, while meanwhile all the other cognitive work on the planet is being done by these non-end-user-modifiable human intelligences.
[55:25] As far as I can tell, what happens when the government tries to develop AI is nothing. But that could just be an artifact of our local technological level, and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. We know why it’s difficult to build a star: you’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star, and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it, so it seems very hard. We query our brains to say “map us a strategy to build this thing” and it returns null, so it feels like a very difficult problem. But in point of fact we don’t actually know that the problem is difficult, apart from being confusing. We understand the star-building problem, so we know it’s difficult. This one, we don’t know how difficult it’s going to be after it’s no longer confusing.
So to me the AI problem looks more like the sort of thing where the problem is finding bright enough researchers, bringing them together, and letting them work on that problem, instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement; I’ve met smart senior people in AI.)
But nonetheless, basically, I’m not very afraid of the government, because I don’t think this is a throw-warm-bodies-at-the-problem problem, and I don’t think it’s a throw-warm-computers-at-the-problem problem. I think it takes good methodology, good people selection, and letting them do sufficiently blue-sky stuff, and so far, historically, the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something, it doesn’t work. When they fund long-term research, it works.)
[1:01:11] Here at the Singularity Institute we plan to keep all of our most important insights private and hope that everyone else releases their results.
[1:02:59] The human brain is a completely crap design, which is why it can’t solve the Wason selection task. Think of any bit of the heuristics-and-biases literature and there are a hundred different ways this thing reliably, experimentally malfunctions when you give it some simple-seeming problem.
[1:04:26] I would hope to build an AI that was sufficiently unlike a human, because it worked better, that there would be no direct concept of how fast it runs relative to you. It would be able to solve some problems very quickly, and if it can solve all problems much faster than you, you’re already getting into the superintelligence range. But at the beginning you would already expect it to be able to do arithmetic immensely faster than you, while at the same time it might be doing basic scientific research a bit slower. Then eventually it’s faster than you at everything, but possibly not the first time you boot up the code.
[1:17:49] It seems like human brains are just not all that impressive. We don’t add that well. We can’t communicate all that well with other people. One billion squirrels could not compete with a human brain. Our brain is about four times as large as a chimp’s, but four chimps cannot compete with one human. Making a brain twice as large, and actually incorporating that into the architecture, seems to produce a scaling of intelligence output that is not even remotely comparable to the effect of taking two brains of fixed size and letting them talk to each other using words. So an artificial intelligence that can do all this neat stuff internally, and possibly scale its processing power by orders of magnitude, itself has a completely different output function than human brains trying to talk to each other.
[1:34:12] So it seems to me that this [Hanson’s view] is all strongly dependent first on the belief that the causes of intelligence get divided up very finely into lots of little pieces that get developed in a wide variety of different places, so that nobody gets an advantage. And second that if you do get a small advantage you’re only doing a small fraction of the total intellectual labor going into the problem so you don’t have a "nuclear pile gone critical effect" because any given pile is still a very small fraction of all the thinking that’s going into AI everywhere.
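[Editor's note: the "pile gone critical" analogy can be made concrete with a toy reinvestment model. The parameter values below are purely illustrative, not claims from the debate: if each unit of improvement enables k further units, k below 1 fizzles out and k above 1 grows without bound.]

```python
# Toy criticality model: a "pile" reinvests each round's improvement,
# with reproduction factor k, for a fixed number of rounds.

def cascade(k, generations=20, seed=1.0):
    """Total accumulated improvement after `generations` reinvestment rounds."""
    total, current = 0.0, seed
    for _ in range(generations):
        total += current
        current *= k
    return total

print(f"k=0.5 (subcritical):   total ~ {cascade(0.5):.2f}")  # converges toward 2
print(f"k=1.2 (supercritical): total ~ {cascade(1.2):.1f}")  # keeps growing
```

On Hanson's picture, any one project holds only a small fraction of the world's AI effort, so its effective k stays below 1; the disagreement is over whether a single project can push its own k above 1.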
I’m not quite sure what to say, besides that when I look at the world, it doesn’t actually look like that. There aren’t twenty different species, all of whom are good at different aspects of intelligence and have different advantages. The g factor is pretty weak evidence, but it exists. The people talking about g factor do seem to be winning on the experimental-predictions test versus the people who previously went around talking about multiple intelligences.
It’s not a very transferable argument, but to the extent that I actually have a grasp of cognitive science, it does not look like intelligence is sliced into lots of little pieces. It looks like there’s a bunch of major systems doing particular tasks, and they’re all cooperating with each other. It’s sort of like how we have a heart, and not one hundred little mini-hearts distributed around the body. That might have been a better system, but nonetheless we just have one big heart over there.
It looks to me like there are really obvious, hugely important things you could do with the first prototype intelligence that actually worked. And so I expect that the critical thing is going to be the first prototype intelligence that actually works, runs on a two-gigahertz processor, and can do little experiments to find out which of its own mental processes work better, and things like that. The first AI that really works is already going to have a pretty large advantage relative to the biological system, so the key driver of change looks more like somebody builds a prototype, and not like a large existing industry reaching a certain quality level at the point where it is mainly being driven by incremental improvements leaking out of particular organizations.