Partial Transcript of the Hanson-Yudkowsky June 2011 Debate
post by ChrisHallquist · 2012-04-19T03:43:58.810Z · LW · GW
So I'm currently trying to write an article on the nature of intelligence and the AI FOOM issue. I listened to the live debate that Eliezer Yudkowsky and Robin Hanson did in June 2011, and thought that, taken together, Eliezer's comments in that debate make for a really good concise statement of his position. However, videos and audios have the disadvantage that when you're trying to make sure you remember or understand something correctly, it can be inconvenient to find the relevant part and re-listen to it (because of the inability to skim, and to an extent because I read faster than most people talk). I finally decided to transcribe substantial chunks of Eliezer's comments for my own use, and I think my transcript would be useful to the Less Wrong community in general.
This is not a word-for-word transcript; I've eliminated verbal filler and false starts (where Eliezer stopped mid-sentence and decided to say something a different way). If people want to transcribe parts I left out, including Robin's remarks, they can put them in the comments and I'll edit them into the main post along with attribution. Corrections and suggested tweaks to what I've transcribed so far are also welcome.
EDIT: Partial credit for this post goes to kalla724, not because she contributed directly, but because I'm not sure I would have been able to do this without having read this post. It instantly increased my ability to avoid distractions while getting work done, sending my productivity through the roof. I hope the effects are permanent.
[7:30] When we try to visualize how all this is likely to go down, we tend to visualize a scenario that someone else once termed a “brain in a box in a basement.” And I love that phrase, so I stole it. In other words, we tend to visualize that there’s this AI programming team, a lot like the wannabe AI programming teams you see nowadays trying to create artificial general intelligence, like the artificial general intelligence projects you see nowadays, and they manage to acquire some new deep insights which, combined with published insights in the general scientific community, let them go down to their basement and work on it for a while and create an AI which is smart enough to reprogram itself, and then you get an intelligence explosion.
[21:24] If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it. And the brain is simply not a very complicated artifact by comparison to, say, Windows Vista. Now the complexity that it does have it uses a lot more effectively than Windows Vista does. It probably contains a number of design principles which Microsoft knows not. (And I’m not saying it’s that small because it’s 750 megabytes, I’m saying it’s gotta be that small because at least 90% of the 750 megabytes is junk and there’s only 30,000 genes for the whole body, never mind the brain.)
That something that simple can be this powerful, and this hard to understand, is a shock. But if you look at the brain design, it’s got 52 major areas on each side of the cerebral cortex, distinguishable by the local pattern, the tiling and so on; it just doesn’t really look all that complicated. It’s very powerful. It’s very mysterious. The question we can ask about it is whether it involves 1,000 different deep, major, mathematical insights into the nature of intelligence that we need to comprehend before we can build it.
This is probably one of the more intuitive, less easily quantified and argued-by-reference-to-large-bodies-of-experimental-evidence type of things. It’s more a sense of: you read through The MIT Encyclopedia of the Cognitive Sciences and you read Judea Pearl’s Probabilistic Reasoning in Intelligent Systems, so here’s an insight, it’s an insight into the nature of causality; how many more insights of this size do we need, given that this is what The MIT Encyclopedia of the Cognitive Sciences seems to indicate we already understand and what it doesn’t? And you sort of take a gander at it and you say it’s probably about ten more insights, definitely not one, not a thousand, probably not a hundred either.
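As a rough illustration of where the 750-megabyte figure comes from (this is my own back-of-the-envelope sketch, not something from the debate; it assumes the usual round numbers of about 3 billion base pairs and 2 bits per base):

```python
# Back-of-the-envelope check of the genome figures quoted above.
# Assumptions (standard approximations, not from the debate):
#   ~3 billion base pairs in the haploid human genome, 2 bits per base (A/C/G/T).
base_pairs = 3.0e9
bits_per_base = 2
genome_megabytes = base_pairs * bits_per_base / 8 / 1e6

junk_fraction = 0.90  # "at least 90% ... is junk" per the quote above
functional_megabytes = genome_megabytes * (1 - junk_fraction)

print(f"Raw genome size:            ~{genome_megabytes:.0f} MB")    # ~750 MB
print(f"Non-junk upper bound (10%): ~{functional_megabytes:.0f} MB")  # ~75 MB
```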
[27:34] Our nearest neighbors, the chimpanzees, have 95% shared DNA with us. Now in one sense that may be a little misleading, because what they don’t share is probably more heavily focused on brain than body type stuff. But on the other hand you can look at those brains. You can put the brains through an MRI. They have almost exactly the same brain areas as us. We just have larger versions of some brain areas. And I think there’s one sort of neuron we have that they don’t, or possibly they even have it but only in very tiny quantities.
This is because there have been only 5 million years since we split off from the chimpanzees. There simply has not been time to make any major changes to brain architecture in 5 million years. It’s just not enough to do really significant complex machinery. The intelligence we have is the last layer of icing on the cake, and yet if you look at the curve of evolutionary optimization going into the hominid line versus how much optimization power it put out, how much horsepower the intelligence had, it goes like this:
[Gestures to indicate something like a hyperbola or maybe a step function—a curve that is almost horizontal for a while, then becomes almost vertical.]
If we look at the world today, we find that taking a little bit out of the architecture produces something that is just not in the running as an ally or a competitor when it comes to doing cognitive labor. Chimpanzees don’t really participate in the economy at all, in fact. But the key point from our perspective is that although they are in a different environment, they grow up learning different things, there are genuinely skills that chimpanzees have that we don’t, such as being able to poke a branch into an anthill and pull it out in such a way as to have it covered with lots of tasty ants, nonetheless there are no branches of science where the chimps do better because they have mostly the same architecture and more relevant content.
So it seems to me at least that if we look at the present cognitive landscape, we are getting really strong information that—pardon me, we’re trying to reason from one sample, but pretty much all of this is trying to reason in one way or another—we’re seeing that in this particular case at least, humans can develop all sorts of content that lets them totally outcompete other animal species who have been doing things for millions of years longer than we have, by virtue of architecture, and anyone who doesn’t have the architecture isn’t really in the running for it.
[33:20] This thing [picks up laptop] does run at around two billion Hz, and this thing [points to head] runs at about two hundred Hz. So if you can have architectural innovations which merely allow this thing [picks up laptop again] to do the sort of thing that this thing [points to head again] is doing, only a million times faster, then that million times faster means that 31 seconds works out to about a subjective year and all the time between ourselves and Socrates works out to about 8 hours.
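As a quick check of that arithmetic (my own illustration, not part of the debate, using the round million-fold speedup figure from the quote):

```python
# Rough check of the "31 seconds per subjective year" figure quoted above.
speedup = 1_000_000                      # the round number used in the quote
seconds_per_year = 365.25 * 24 * 3600    # ~31.6 million seconds

wall_clock_per_subjective_year = seconds_per_year / speedup
print(f"~{wall_clock_per_subjective_year:.0f} wall-clock seconds per subjective year")  # ~32
```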
[40:00] People have tried raising chimps in human surroundings, and they absorb this mysterious capacity for abstraction that sets them apart from other chimps. There’s this wonderful book about one of these chimps, Kanzi was his name, very famous chimpanzee, probably the world’s most famous chimpanzee, and probably the world’s smartest chimpanzee as well. They were trying to teach his mother to do these human things and he was just a little baby chimp and he was watching and he picked stuff up. It’s amazing, but nonetheless he did not go on to become the world’s leading chimpanzee scientist using his own chimpanzee abilities separately.
If you look at human beings we have this enormous processing object containing billions upon billions of neurons and people still fail the Wason selection task. They cannot figure out which playing card they need to turn over to verify the rule “if a card has an even number on one side it has a vowel on the other.” They can’t figure out which cards they need to turn over to verify whether this rule is true or false.
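For readers who haven't seen the Wason selection task, here is a small illustration of the logic (my own example with made-up card faces, not from the debate): under the stated rule, the only cards worth turning over are the ones that could falsify it.

```python
# The rule from the quote: "if a card has an even number on one side,
# it has a vowel on the other." Each card has a letter on one side and a
# number on the other; the visible faces below are a made-up example.
visible_faces = ["E", "K", "4", "7"]

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def is_consonant(face):
    return face.isalpha() and face.upper() not in "AEIOU"

# Only a visible even number (hidden letter might not be a vowel) or a
# visible consonant (hidden number might be even) can falsify the rule,
# so those are the only cards that need to be turned over.
cards_to_turn = [f for f in visible_faces if is_even_number(f) or is_consonant(f)]
print(cards_to_turn)  # ['K', '4'] -- most people wrongly pick the vowel instead
```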
[47:55] The reason why I expect localish sorts of things is that I expect one project to go over the threshold for intelligence, in much the same way that chimps went over the threshold of intelligence and became humans (yes, I know that’s not evolutionarily accurate), and then they have this functioning mind to which they can make all sorts of interesting improvements and have it run better and better, while meanwhile all the other cognitive work on the planet is being done by these non-end-user-modifiable human intelligences.
[55:25] As far as I can tell, what happens when the government tries to develop AI is nothing. But that could just be an artifact of our local technological level, and it might change over the next few decades. To me it seems like a deeply confusing issue whose answer is probably not very complicated in an absolute sense. Like, we know why it’s difficult to build a star: you’ve got to gather a very large amount of interstellar hydrogen in one place. So we understand what sort of labor goes into a star and we know why a star is difficult to build. When it comes to building a mind, we don’t know how to do it, so it seems very hard. We query our brains to say “map us a strategy to build this thing” and it returns null, so it feels like it’s a very difficult problem. But in point of fact we don’t actually know that the problem is difficult apart from being confusing. We understand the star-building problem, so we know it’s difficult. This one, we don’t know how difficult it’s going to be after it’s no longer confusing.
So to me the AI problem looks more like the sort of thing where the problem is finding bright enough researchers, bringing them together, and letting them work on that problem instead of demanding that they work on something where they’re going to produce a progress report in two years which will validate the person who approved the grant and advance their career. And so the government has historically been tremendously bad at producing basic research progress in AI, in part because the most senior people in AI are often people who got to be very senior by having failed to build it for the longest period of time. (This is not a universal statement. I’ve met smart senior people in AI.)
But nonetheless, basically I’m not very afraid of the government, because I don’t think it’s a throw-warm-bodies-at-the-problem problem and I don’t think it’s a throw-warm-computers-at-the-problem problem. I think it’s about good methodology, good people selection, and letting them do sufficiently blue-sky stuff, and so far, historically, the government has been tremendously bad at producing that kind of progress. (When they have a great big project to try to build something, it doesn’t work. When they fund long-term research, it works.)
[1:01:11] Here at the Singularity Institute we plan to keep all of our most important insights private and hope that everyone else releases their results.
[1:02:59] The human brain is a completely crap design, which is why it can’t solve the Wason selection task. You think up any bit of the heuristics and biases literature and there’s 100 different ways this thing reliably, experimentally malfunctions when you give it some simple-seeming problem.
[1:04:26] I would hope to build an AI that was sufficiently unlike a human, because it worked better, that there would be no direct concept of how fast it runs relative to you. It would be able to solve some problems very quickly, and if it can solve all problems much faster than you, you’re already getting into the superintelligence range. But at the beginning you would already expect it to be able to do arithmetic immensely faster than you, and at the same time it might be doing basic scientific research a bit slower. Then eventually it’s faster than you at everything, but possibly not the first time you boot up the code.
[1:17:49] It seems like human brains are just not all that impressive. We don’t add that well. We can’t communicate with other people. One billion squirrels could not compete with a human brain. Our brain is about four times as large as a chimp’s, but four chimps cannot compete with one human. Making a brain twice as large, and actually incorporating that into the architecture, seems to produce a scaling of output of intelligence that is not even remotely comparable to the effect of taking two brains of fixed size and letting them talk to each other using words. So an artificial intelligence that can do all this neat stuff internally, and possibly scale its processing power by orders of magnitude, itself has a completely different output function than human brains trying to talk to each other.
[1:34:12] So it seems to me that this [Hanson’s view] is all strongly dependent first on the belief that the causes of intelligence get divided up very finely into lots of little pieces that get developed in a wide variety of different places, so that nobody gets an advantage. And second that if you do get a small advantage you’re only doing a small fraction of the total intellectual labor going into the problem so you don’t have a "nuclear pile gone critical effect" because any given pile is still a very small fraction of all the thinking that’s going into AI everywhere.
I’m not quite sure what to say besides when I look at the world it doesn’t actually look like the world looks like that. There aren’t twenty different species all of whom are good at different aspects of intelligence and have different advantages. G factor is pretty weak evidence, but it exists. The people talking about g factor do seem to be winning on the experimental predictions test versus the people who previously went around talking about multiple intelligences.
It’s not a very transferable argument, but to the extent that I actually have a grasp of cognitive science, it does not look like it’s sliced into lots of little pieces. It looks like there’s a bunch of major systems doing particular tasks and they’re all cooperating with each other. It’s sort of like we have a heart, and not one hundred little mini hearts distributed around the body. It might have been a better system, but nonetheless we just have one big heart over there.
It looks to me like there’s really obvious, hugely important things you could do with the first prototype intelligence that actually worked. And so I expect that the critical thing is going to be the first prototype intelligence that actually works and runs on a two gigahertz processor and can do little experiments to find out which of its own mental processes work better and things like that. And that the first AI that really works is already going to have a pretty large advantage relative to the biological system so that the key driver of change looks more like somebody builds a prototype and not like this large existing industry reaches a certain quality level at the point where it is mainly being driven by incremental improvements leaking out of particular organizations.
17 comments
comment by lukeprog · 2012-04-19T20:31:36.169Z · LW(p) · GW(p)
Note: I am in the process of having someone transcribe the whole thing.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-12-08T13:04:07.181Z · LW(p) · GW(p)
Any updates on this?
↑ comment by ChrisHallquist · 2012-04-20T03:06:14.599Z · LW(p) · GW(p)
Oh cool. When will it be available?
comment by buybuydandavis · 2012-04-19T08:37:29.346Z · LW(p) · GW(p)
Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.
Is anyone taking rat embryos, removing chunks of "junk dna", and seeing what happens?
↑ comment by philh · 2012-04-19T15:34:43.671Z · LW(p) · GW(p)
This has been done with mice, with no observable effect. I think Wikipedia's page on junk DNA references the study, which I haven't looked at myself.
edit: The paper is behind Nature's paywall; here's a copy. They didn't notice any effects at a macro level. When they checked gene expression levels, they found two differences; but at p<0.05 from over a hundred experiments, which seems suspect. They did another test which I don't understand but which seems to show some difference between the experimental groups.
↑ comment by Luke_A_Somers · 2012-04-19T16:11:57.112Z · LW(p) · GW(p)
The 'junk' is in large part genes that have been useful in the past but the promoters have been silenced so the genes are never expressed. These promoters can relatively easily mutate or in some cases epigenetically change to reactivate the gene, so on evolutionary timescales it's a good idea to keep it around, for flexibility.
I don't know of any experiments doing what you describe - not in vertebrates - but I could very easily have missed it. I know they strip out lots of stuff all the time in viruses for gene therapy, and that's stuff that's actually expressed!
There are reasons that the experiment could fail yet the junk DNA would nonetheless be junk (containing no information) - if, for instance, self-hybridization is used to splice things, then you're going to need non-information-bearing DNA to mechanically connect the self-hybridizing areas. That, at least, is not going to approach 90%.
There could be other space-takers that are less removable, especially in reproduction - what if you're heterozygous with a gene that's junk in one case and not in the other? Meiosis could be a train wreck if you try to take away that junk gene!
↑ comment by ChrisHallquist · 2012-04-19T16:32:11.239Z · LW(p) · GW(p)
If a gene isn't being expressed, there's no way to weed out deleterious mutations. The "keep it around for later" thing might work on short time scales, but on longer time scales genes that aren't being expressed will degrade into gibberish.
↑ comment by Luke_A_Somers · 2012-05-08T19:53:25.251Z · LW(p) · GW(p)
Relevant to this point:
http://www.nature.com/nature/journal/v485/n7396/full/nature10995.html
↑ comment by philh · 2012-04-19T16:29:10.326Z · LW(p) · GW(p)
These promoters can relatively easily mutate or in some cases epigenetically change to reactivate the gene, so on evolutionary timescales it's a good idea to keep it around, for flexibility.
Evolution can't decide to keep something around just because it might be useful for future evolution. If it's not currently causing an organism to have more/stronger children (or fewer/weaker), evolution doesn't pay attention to it.
Also, you're describing pseudogenes. I don't think they make up a large part of noncoding DNA, but I don't have actual numbers.
↑ comment by Luke_A_Somers · 2012-04-19T23:53:35.834Z · LW(p) · GW(p)
Evolution can't decide to keep something around just because it might be useful for future evolution.
Evolution can't decide to do anything. It occurs that genes that aggressively root out recently abandoned genetic material are maladaptive.
↑ comment by David_Gerard · 2012-04-19T15:17:47.013Z · LW(p) · GW(p)
You couldn't grow a human cell from just the DNA - there's mitochondria, for example. What else needs to come along with the minimum information package?
comment by Duncan · 2012-04-19T14:18:52.171Z · LW(p) · GW(p)
If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.
This is false. Just because we do not know what role a lot of DNA performs does not mean it is 'almost certainly junk'. There is far more DNA that is critical than just the 30,000 gene coding regions. You also have: genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even in cases where the DNA isn't currently 'in use' that DNA may be critical to the ongoing stability of our genome over multiple generations or have other unknown functions.
↑ comment by philh · 2012-04-19T15:31:20.388Z · LW(p) · GW(p)
Your objections are correct, but Eliezer's statement is still true. The elements you list, as far as I know, take up even less space than the coding regions. (If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.)
Comparison with the mouse genome shows at least 5% of the human genome is under selective pressure, whereas only something like 2% has a purpose that we've discovered. But at the same time, there's a lot that we're pretty sure really is junk.
↑ comment by Duncan · 2012-04-19T16:19:50.324Z · LW(p) · GW(p)
If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.
Unless this is a standard definition for describing DNA, I do not agree that such DNA is 'junk'. If the DNA serves a purpose it is not junk. There was a time when it was believed (as many still do) that the nucleus was mostly a disorganized package of DNA and associated 'stuff'. However, it is becoming increasingly clear that it is highly structured and that structure is critical for proper cell regulation, including epigenetics.
If it can be shown that outright removal of most of our DNA does not have adverse effects, I would agree with the junk description. However, I am not aware that this has been shown in humans (or human cell lines at least).
↑ comment by philh · 2012-04-19T17:13:55.052Z · LW(p) · GW(p)
I think the term "junk" has fallen out of favour. Fair enough, let's taboo that word.
If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, it contains no useful information - or at least, no more than it takes to say "a megabase of arbitrary DNA goes here". The context is roughly "how much information does it take to express a brain?" It's true that we can't completely ignore those regions unless we're confident that they could be completely removed, but they only add O(1) complexity instead of O(n).
↑ comment by Duncan · 2012-04-20T04:10:53.989Z · LW(p) · GW(p)
In the context of "what is the minimal amount of information it takes to build a human brain," I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code where it is very hard to tell what individual bits do and what long range effects a change may have.
Do we know how much of the human genome can definitely be replaced with random code without problem?
In addition, do we know how much information is contained in the structure of a cell? You can't just put the DNA of our genome in water and expect to get a brain. Our DNA resides in an enormously complex sea of nano machines and structures. You need some combination of both to get a brain.
Honestly, I think the important takeaway is that there are probably a number of deep or high-level insights that we need to figure out. Whether it's 75 MB, 750 MB, or a petabyte doesn't really matter if most of that information just describes machine parts or functions (e.g., a screw, a bolt, a wheel, etc.). Simple components often take up a lot of information. Frankly, I think 1 MB containing 1000 deep insights at maximum compression would be far more difficult to comprehend than a petabyte containing loads of parts descriptions and only 10 deep insights.