Q&A with Michael Littman on risks from AI

post by XiXiDu · 2011-12-19T09:51:15.496Z · LW · GW · Legacy · 88 comments


[Click here to see a list of all interviews]

Michael L. Littman is a computer scientist. He works mainly in reinforcement learning, but has done work in machine learning, game theory, computer networking, partially observable Markov decision process (POMDP) solving, computer solving of analogy problems, and other areas. He is currently a professor of computer science and department chair at Rutgers University.

Homepage: cs.rutgers.edu/~mlittman/

Google Scholar: scholar.google.com/scholar?q=Michael+Littman

The Interview:

Michael Littman: A little background on me.  I've been an academic in AI for not-quite 25 years.  I work mainly on reinforcement learning, which I think is a key technology for human-level AI---understanding the algorithms behind motivated behavior.  I've also worked a bit on topics in statistical natural language processing (like the first human-level crossword solving program).  I carried out a similar sort of survey when I taught AI at Princeton in 2001 and got some interesting answers from my colleagues.  I think the survey says more about the mental state of researchers than it does about the reality of the predictions. 

In my case, my answers are colored by the fact that my group sometimes uses robots to demonstrate the learning algorithms we develop.  We do that because we find that non-technical people find it easier to understand and appreciate the idea of a learning robot than pages of equations and graphs.  But, after every demo, we get the same question: "Is this the first step toward Skynet?"  It's a "have you stopped beating your wife" type of question, and I find that it stops all useful and interesting discussion about the research.

Anyhow, here goes:

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Michael Littman:

10%: 2050 (I also think P=NP in that year.)
50%: 2062
90%: 2112

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)

I think complete human extinction is unlikely, but, if society as we know it collapses, it'll be because people are being stupid (not because machines are being smart).

Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Michael Littman: epsilon (essentially zero).  I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection.  It involves experimenting with the world and seeing what works and what doesn't.  The world, as they say, is its best model.  Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.

Q3-sub: P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?

Michael Littman: Ditto.

Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?

Michael Littman: 1%. At least 5 years is enough for some experimentation.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Michael Littman: No, I don't think it's possible.  I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Michael Littman: In terms of science risks (outside of human fundamentalism, which is the only non-negligible risk I am aware of), I'm most afraid of high-energy physics experiments, then biological agents, then, much lower, information technology related work like AI.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

Michael Littman: I think people are currently hypersensitive.  As I said, every time I do a demo of any AI ideas, no matter how innocuous, I am asked whether it is the first step toward Skynet.  It's ridiculous.  Given the current state of AI, these questions come from a simple lack of knowledge about what the systems are doing and what they are capable of.  What society lacks is not awareness of risks but the technical understanding to *evaluate* risks.  It shouldn't just be the scientists assuring people everything is ok.  People should have enough background to ask intelligent questions about the dangers and promise of new ideas.

Q7: Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?

Michael Littman: Slightly subhuman intelligence?  What we think of as human intelligence is layer upon layer of interacting subsystems.  Most of these subsystems are complex and hard to get right.  If we get them right, they will show very little improvement in the overall system, but will take us a step closer.  The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human's development.  Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.

88 comments


comment by cousin_it · 2011-12-19T12:43:00.536Z · LW(p) · GW(p)

I think this expert is anthropomorphizing too much. To pose an extinction risk, a machine doesn't even need to talk, much less replicate all the accidental complexity of human minds. It just has to be good at physics and engineering.

These tasks seem easier to formalize than many other things humans do: in particular, you could probably figure out the physics of our universe from very little observational data, given a simplicity prior and lots of computing power (or a good enough algorithm). Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem, and a machine that could solve it efficiently could develop nanotech faster than humans do.

We humans probably suck at physics and engineering on an absolute scale just like we suck at multiplying 32-bit numbers, see Moravec's paradox. And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Replies from: Vladimir_Nesov, whpearson, JoshuaZ, Nymogenous, wedrifid
comment by Vladimir_Nesov · 2011-12-19T13:23:26.971Z · LW(p) · GW(p)

We now know that playing chess doesn't require human-level intelligence as Littman understands it. It may turn out that destroying the world doesn't require human-level intelligence either. A narrow AI could do just fine.

Interesting: this framing moved me more than your previous explanation.

comment by whpearson · 2011-12-19T21:53:58.120Z · LW(p) · GW(p)

It just has to be good at physics and engineering.

I would contend it would have to know what is in the current environment as well: what bacteria and other microorganisms it is likely to face (a question largely unexplored by humans), what chemicals it will have available (as potential feedstocks and poisons), and what radiation levels.

To get these from first principles it would have to recreate the evolution of Earth from scratch.

Some engineering tasks are limited by computing power too, e.g. protein folding is an already formalized problem,

What do you mean by a formalized problem in this context? I'm interested in links on the subject.

Replies from: JoshuaZ, cousin_it
comment by JoshuaZ · 2011-12-19T22:06:59.723Z · LW(p) · GW(p)

There are a variety of formalized versions of protein folding. See for example this paper (pdf). There are, however, questions about whether these models are completely accurate. Computing how a protein will fold based on a model is often so difficult that testing the actual limits of the models is tricky. The model given in the paper I linked to is known to be too simplistic in many practical cases.
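For concreteness, here is a minimal sketch of one standard formalization, the 2D hydrophobic-polar (HP) lattice model (a generic textbook model, not necessarily the one in the linked paper): residues are hydrophobic ('H') or polar ('P'), a fold is a self-avoiding walk on a grid, and the energy counts H-H contacts.

```python
def hp_energy(sequence, coords):
    """Energy of a fold in the 2D HP lattice model.

    sequence: string of 'H' (hydrophobic) and 'P' (polar) residues.
    coords: one (x, y) grid point per residue, forming a self-avoiding
            walk (consecutive residues are lattice neighbours).
    Energy = -(number of H-H pairs adjacent on the lattice but not
    consecutive in the chain); lower energy = better fold.
    """
    assert len(sequence) == len(coords)
    assert len(set(coords)) == len(coords), "walk must be self-avoiding"
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            (xi, yi), (xj, yj) = coords[i], coords[j]
            if sequence[i] == sequence[j] == 'H' and \
                    abs(xi - xj) + abs(yi - yj) == 1:
                energy -= 1
    return energy

# A 4-residue chain folded into a unit square: one H-H contact.
print(hp_energy("HPHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -1
```

Even in this toy model, finding the minimum-energy fold is known to be NP-hard, which is the sense in which protein folding can be "formalized" yet still limited by computing power.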

comment by cousin_it · 2011-12-19T23:12:23.260Z · LW(p) · GW(p)

Sorry for speaking so confidently. I don't really know much about protein folding, it was just the impression I got from Wikipedia: 1, 2.

comment by JoshuaZ · 2011-12-19T16:25:11.442Z · LW(p) · GW(p)

And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

I don't think this follows. Humans spent thousands of years in near stagnation (the time before the dawn of agriculture is but one example). It isn't clear what caused technological civilization to take off, but the timing of new discoveries almost looks like some sort of nearly random process, except that the probability of a new discovery or invention increases as more discoveries occur. I'd almost consider modeling it as a biased coin which starts off with an extreme bias towards tails, but each time it turns up heads, the bias shifts a bit in the heads direction. Something like P(heads on the nth flip) = (1+k)/(10^5 + k), where k is the number of previous flips that came up heads. If that's the case, then the timing doesn't by itself tell us much about our capacity for civilization. It doesn't look that improbable that some other extinct species might even have had capability at about where we are or higher, but went extinct before they got those first few lucky coin flips.
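The biased-coin model can be simulated in a few lines (a minimal sketch; the 10^5 base rate is the comment's own illustrative constant). The point it makes concrete: early discoveries take ~10^5 flips each, but the average cost per discovery collapses as discoveries accumulate.

```python
import random

def prob_heads(k):
    # Probability of a discovery ("heads") on the next flip,
    # given k previous discoveries -- the comment's toy formula.
    return (1 + k) / (10**5 + k)

def flips_until(target_heads, rng):
    """Count flips needed to accumulate target_heads heads."""
    heads = flips = 0
    while heads < target_heads:
        flips += 1
        if rng.random() < prob_heads(heads):
            heads += 1
    return flips

for target in (1, 10, 100):
    total = flips_until(target, random.Random(0))
    print(f"{target} heads after {total} flips "
          f"({total / target:.0f} flips per head)")
```

Long stretches of "stagnation" followed by accelerating discovery fall out of the model directly, without assuming anything about the discoverers getting smarter.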

Replies from: billswift, Vladimir_Nesov
comment by billswift · 2011-12-19T18:31:20.205Z · LW(p) · GW(p)

almost looks like some sort of nearly random process except that the probability of a new discovery or invention increases as more discoveries occur.

And as population increases that would tend to increase the rate of discovery or invention as well. This is basically Julian Simon's argument in The Great Breakthrough and Its Causes: that gradually increasing population hit a point where the rates of discovery and invention suddenly started increasing rapidly (and population then started increasing even more rapidly), resulting in the Renaissance and ultimately in the Industrial Revolution. He gives some thought and argument as to why they didn't happen earlier in India or China, but I think the specific arguments are a bit iffy.

comment by Vladimir_Nesov · 2011-12-19T17:44:08.782Z · LW(p) · GW(p)

In a blink of evolution's eye.

comment by Nymogenous · 2011-12-19T13:00:58.253Z · LW(p) · GW(p)

I wouldn't take Moravec's paradox too seriously; all it seems to indicate is that we're better at programming a system we've spent thousands of years formalizing (e.g., math) than a system that's built into our brains so that we never really think about it... hardly surprising to me.

Replies from: cousin_it
comment by cousin_it · 2011-12-19T13:13:35.655Z · LW(p) · GW(p)

I think Moravec's paradox is more than a selection effect. Face recognition requires more computing power than multiplying two 32-bit numbers, and it's not just because we've learned to formalize one but not the other. We will never get so good at programming computers that our face-recognition programs get faster than our number-multiplication programs.

comment by wedrifid · 2011-12-19T13:57:19.235Z · LW(p) · GW(p)

And we probably suck at these tasks about as much as it's possible to suck and still build a technological civilization, because otherwise we would have built it at an earlier point in our evolution.

Replies from: cousin_it
comment by cousin_it · 2011-12-19T14:13:08.920Z · LW(p) · GW(p)

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Replies from: wedrifid
comment by wedrifid · 2011-12-19T14:38:02.191Z · LW(p) · GW(p)

This is a well-known argument. I got it from Eliezer somewhere, don't remember where.

Yes, and I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason. Hence encouragement of others saying the same thing.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-19T16:00:00.223Z · LW(p) · GW(p)

I'm sick of trying to explain to people why "we have no evidence that it is possible to have higher than human intelligence" is trivially absurd for approximately this reason.

You wrote a reference post where you explain why you would deem anyone who wants to play quantum roulette crazy. If the argument mentioned by cousin_it is that good and you have to explain it to people that often, I want you to consider writing a post on it where you outline the argument in full detail ;-)

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

Replies from: wedrifid
comment by wedrifid · 2011-12-19T16:03:11.759Z · LW(p) · GW(p)

You could start by showing how most evolutionary designs are far short of their maximum efficiency and that we therefore have every reason to believe that human intelligence barely reached the minimum threshold necessary to build a technological civilization.

The "therefore" is pointing in the wrong direction. That's the point!

Replies from: XiXiDu
comment by XiXiDu · 2011-12-19T16:29:48.527Z · LW(p) · GW(p)

The "therefore" is pointing in the wrong direction. That's the point!

Human intelligence barely reached the minimum threshold necessary to build a technological civilization, and therefore we have every reason to believe that most evolutionary designs are far short of their maximum efficiency? That seems like a pretty bold claim based on the fact that some of our expert systems are better at narrow tasks that were never optimized for by natural selection.

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

Replies from: FAWS, wedrifid
comment by FAWS · 2011-12-19T22:45:30.030Z · LW(p) · GW(p)

If you really want to convince people that human intelligence is the minimum level of general intelligence possible given the laws of physics, then in my opinion you have to provide some examples of other evolutionary designs that are very inefficient compared to their technological counterparts.

E.g., trying to estimate how fast the first animal that walked on land could run and comparing that with how fast the currently fastest land animal can run, how fast the fastest robot with legs can run, and how fast the fastest car can "run"?

comment by wedrifid · 2011-12-19T16:39:06.364Z · LW(p) · GW(p)

Cousin_it, this is why I am glad to see people other than myself explaining the concept. I just don't have the patience to deal with this kind of thinking.

Replies from: jsteinhardt, XiXiDu
comment by jsteinhardt · 2011-12-20T15:13:35.099Z · LW(p) · GW(p)

I don't understand. XiXiDu's thinking was "if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way, why don't we actually do that so that we can see if we're right or not?" That seems to me like the cornerstone of critical thought, or am I missing what you found objectionable?

Replies from: XiXiDu
comment by XiXiDu · 2011-12-20T16:46:54.019Z · LW(p) · GW(p)

..."if your assertion about humans was true, then we would expect to see these other things as well (i.e., other species being minimally fit for a task when they first start doing it); we therefore have a way of testing this hypothesis in a fairly convincing way,..."

That is a good suggestion and I endorse it. I have however been thinking about something else.

I suspected that people like cousin_it and wedrifid must base their assumption that human intelligence is close to the minimum level of efficiency (optimization power/resources used) on other evidence, e.g. that expert systems can factor numbers 10^180 times faster than humans can. I didn't think that the whole argument rests on the fact that humans didn't start to build a technological civilization right after they became mentally equipped to do so.

Don't get the wrong impression here: I agree that it is very unlikely that human intelligence is close to the optimum. But I also don't see that we have much reason to believe that it is close to the minimum. Further, I believe that intelligence is largely overrated by some people on lesswrong, and that conceptual revolutions, e.g. the place-value method of encoding numbers, wouldn't have been discovered much more quickly by much more intelligent beings other than through lucky circumstances. In other words, I think that the speed of discovery is not proportional to intelligence, but rather that intelligence quickly hits diminishing returns (anecdotal evidence here includes that real-world success doesn't really seem to scale with IQ. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high-karma people on lesswrong unusually successful?).

But I digress. My suggestion was to compare technological designs with evolutionary designs. For example animal echolocation with modern sonar, ant colony optimization algorithm with the actual success rate of ant behavior, energy efficiency and maneuverability of artificial flight with insect or bird flight...

If intelligence is a vastly superior optimization process compared to evolution then I suspect that any technological replications of evolutionary designs, that have been around for some time, should have been optimized to an extent that their efficiency vastly outperforms that of their natural counterparts. And from this we could then draw the conclusion that intelligence itself is unlikely to be an outlier but just like those other evolutionary designs very inefficient and subject to strong artificial amplification.

ETA: I believe that even sub-human narrow AI is an existential risk. So the fact that I believe lots of people here are hugely overconfident about a possible intelligence explosion doesn't really change much with respect to risks from AI.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-12-23T12:07:22.248Z · LW(p) · GW(p)

(anecdotal evidence here includes that real world success doesn't really seem to scale with IQ points. Are people like Steve Jobs that smart? Could Terence Tao become the richest person if he wanted to? Are high karma people on lesswrong unusually successful?).

There's evidence that at the upper end higher IQ is inversely correlated with income, but this may be due to people caring about non-monetary rewards (the correlation is positive at lower IQ levels and positive at lower education levels, but negative at higher education levels). I would not be surprised if there were an inverse correlation between high karma and success levels, since high karma may indicate procrastination and the like. Looking at someone's average karma per post might be a better metric for making this sort of point.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-23T15:09:52.356Z · LW(p) · GW(p)

There's evidence that on the upper end higher IQ is inversely correlated with income but this may be due to people caring about non-monetary rewards...

That occurred to me as well and seems a reasonable guess. But let me restate the question. Would Marilyn vos Savant be proportionally more likely to destroy the world if she tried to than a 115 IQ individual? I just don't see that and I still don't understand the hype about intelligence around here.

All it really takes is to speed up the rate of discovery immensely by having a vast number of sub-human narrow AI scientists research various dangerous technologies and stumble upon something unexpected or solve a problem that enables unfriendly humans to wreak havoc.

The main advantage of AI is that it can be applied in parallel to brute force a solution. But this doesn't imply that you can brute force problem solving and optimization power itself. Conceptual revolutions are not signposted so that one only has to follow a certain path or use certain heuristics to pick them up. They are scattered randomly across design space and their frequency is decreasing disproportionately with each optimization step.

I might very well be wrong and recursive self-improvement is a real and economic possibility. But I don't see there being enough evidence to take that possibility as seriously as some here do. It doesn't look like intelligence is instrumentally useful beyond a certain point, a point that might well be close to human levels of intelligence. Which doesn't imply that another leap in intelligence, like the one that happened since chimpanzees and humans split, isn't possible. But there is not much reason to believe that it can be reached by recursive self-improvement. It might very well take sheer luck, because the level above ours is as hard for us to grasp as our level is for chimpanzees.

So even if it is economically useful to optimize our level of intelligence there is no evidence that it is possible to reach the next level other than by stumbling upon it. Lots of "ifs", lots of conjunctive reasoning is required to get from simple algorithms over self-improvement to superhuman intelligence.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2011-12-26T14:31:07.644Z · LW(p) · GW(p)

Would Marilyn vos Savant be proportionally more likely to destroy the world if she tried to than a 115 IQ individual?

I think that's the wrong question; it should read:

Is it more likely that a human, rather than a dog, would destroy the world if it tried to?

The difference in intelligence between Marilyn vos Savant and a human with IQ 100 is very small in absolute terms.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-15T17:50:07.936Z · LW(p) · GW(p)

Would Marilyn vos Savant be proportionally more likely to destroy the world if she tried to than a 115 IQ individual?

I think that's the wrong question; it should read:

Is it more likely that a human, rather than a dog, would destroy the world if it tried to?

I believe that the question is an important one. An AI has to be able to estimate the expected utility of improving its own intelligence, and I think it is unlikely that any level of intelligence is capable of estimating the expected utility of a whole level above its own, because 1) it can't possibly know where it is located in design space, 1b) how it can detect insights in solution space, 2) how many resources it takes, 3) how long it takes. Therefore any AI has to calculate the expected utility of the next small step towards the next level, the expected utility of small amplifications of its intelligence similar to the difference between an average human and Marilyn vos Savant. It has to ask: 1) is the next step instrumentally useful, 2) are resources spent on amplification better spent on 2b) pursuing other ways to gain power, or 2c) pursuing terminal goals directly given its current resources and intelligence?

I believe that those questions shed light on the possibility of recursive self improvement and its economic feasibility.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-01-15T18:34:50.203Z · LW(p) · GW(p)

1) it can't possible know where it is located in design space 1b) how it can detect insights in solution space 2) how much resources it takes 3) how long it takes.

I think it's possible that an AI could at least roughly estimate where it's located in design space, and how many resources and how much time it would take to increase its intelligence, but I'm happy to hear counter-arguments.

And it seems to me that the point of diminishing returns in increasing intelligence is very far away. I would do almost anything to gain, say, 30 IQ points, and that's nothing in absolute terms.

But I agree with you that these questions are important and that trusting my intuitions too much would be foolish.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-15T19:38:37.391Z · LW(p) · GW(p)

I do not have the necessary education to evaluate state of the art AI research and to grasp associated fields that are required to make predictions about the nature of possible AI's capable of self-modification. I can only voice some doubts and questions.

For what it's worth, here are some thoughts on recursive self-improvement and risks from AI that I wrote for a comment on Facebook:

I do think that an expected utility-maximizer is the ideal in GAI. But, just like general purpose quantum computers, I believe that expected utility-maximizers which - 1) find it instrumentally useful to undergo recursive self-improvement 2) find it instrumentally useful to take over the planet/universe to protect their goals - are, if at all feasible, the end-product of a long chain of previous AI designs with no quantum leaps in-between. That they are at all feasible depends on 1) how far from the human level intelligence hits diminishing returns 2) whether intelligence is more useful than other kinds of resources in stumbling upon unknown unknowns in design space 3) whether expected utility-maximizers and their drives are fundamentally dependent on the precision with which their utility-function is defined.

Here is an important question: Would Marilyn vos Savant (http://en.wikipedia.org/wiki/Marilyn_vos_Savant) be proportionally more likely to take over the world if she tried to than a 115 IQ individual?

Let me explain why I believe that the question is an important one. I believe that the question does shed light on the possibility of recursive self improvement and its economic feasibility.

An AI has to be able to estimate the expected utility of improving its own intelligence. And I think it is unlikely that any level of intelligence is capable of estimating the expected utility of a whole level above its own (where "a level above its own" is assumed to be similar to a boost in efficient cross-domain optimization power similar to that between humans and chimpanzees).

I think it is impossible for an AI to estimate the expected utility of the next level above its own, because 1) it can't possibly know where the next level is located in design space, 1b) how it can detect insights about it in solution space (because those insights are beyond a conceptual singularity (otherwise it wouldn't be a level above its own)), 2) how many resources it takes to stumble upon the next level, and 3) how long it takes to discover it.

Therefore any AI has to be content to calculate the expected utility of the next small step towards the next level, the expected utility of small amplifications of its intelligence similar to the difference between an average human and Marilyn vos Savant.

The reason why an AI can't estimate the expected utility of the next level is that it is over its conceptual horizon, whereas small amplifications are in sight. Small amplifications are subject to feedback from experimentation with altered designs and the use of guided evolution. Large amplifications require conceptual insights that are not readily available. No intelligence is able to easily and conclusively verify the workings of another intelligence that is a level above its own without painstakingly acquiring resources, devising the necessary tools and building the required infrastructure.

Humans first had to invent science, bumble through the industrial revolution and develop computers to be able to prove modern mathematical problems. An AI would have to invent meta-science, maybe advanced nanotechnology, and other unknown tools and heuristics to be able to figure out how to create a trans-AI, an intelligence that could solve problems it couldn't solve itself.

Every level of intelligence has to prove the efficiency of its successor to estimate whether it is rational to build it, whether it is economical, and whether the resources necessary to build it should be allocated differently. This demands considerable effort and therefore resources. It demands great care, extensive simulations, and the ability to prove the correctness of the self-modification symbolically.

In any case, every level of intelligence has to acquire new resources given its current level of intelligence. It can't just self-improve its way to faster and more efficient solutions: self-improvement itself demands resources. Therefore the AI cannot use its ability to self-improve to acquire the very resources it needs in order to self-improve in the first place.

For those reasons the AI has to answer the following questions,

  • 1) Is the next step instrumentally useful?
  • 2) Are resources spent on amplification better spent on,
  • 2b) pursuing other ways to gain power?
  • 2c) pursuing any terminal goal directly given the current resources and intelligence?

There are many open questions here. It is not clear that most problems would be easier to solve given certain amounts of intelligence amplification, since intelligence does not guarantee the discovery of unknown unknowns. Intelligence is mainly useful for adapting previous discoveries and solving well-defined problems. The next level of intelligence is by definition not well-defined. And even if it were the case that intelligence sped up the rate at which discoveries are made, it is not clear that the resources required to amplify intelligence are in proportion to its instrumental usefulness. It might be the case that many problems require exponentially more intelligence to make small steps towards a solution.

Yet there are still other questions: it is not clear that a lot of small steps of intelligence amplification eventually amount to a whole level. And, as mentioned in the beginning, how dependent are AIs on the precision with which their goals are defined? If you tell an AI to create 10 paperclips, would it care to take over the universe to protect the paperclips from destruction? Would it care to create them economically or quickly? Would it care how to create 10 paperclips if those design parameters are not explicitly defined? I don't think so. More on that another time.

Replies from: wallowinmaya, wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-01-15T21:04:38.761Z · LW(p) · GW(p)

I think it is impossible for an AI to estimate the expected utility of the next level above its own

I don't know what that means. It's always possible to assign probabilities, even if you don't have a clue. And assigning utilities seems trivial, too. Let's say the AI thinks "Hm. Improving my intelligence will lead to world dominion, if a) vast intelligence improvement doesn't cost too many resources, b) doesn't take too long, c) and if intelligence really is as useful as it seems to be, i.e. is more efficient at discovering "unknown unknowns in design space" than other processes (which seems to me tautological, since intelligence is by definition optimization power divided by resources used; but I could be wrong). Let me assign a 50% probability to each of these claims." (Or less, but it can always assign a probability and can therefore compute an expected utility.)

And so even if P(gain world dominion | successful, vast intelligence-improvement) * P(successful, vast intelligence-improvement) is small (and I think it's easily larger than 0.05), the expected utility could be great nonetheless. If the AI is a maximizer, not a satisficer, it will try to take over the world.
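The arithmetic behind this argument is simple enough to sketch. Every number below is an illustrative stand-in: the 50% figures are the ones assigned in the comment above, and the utility payoff is invented for the example.

```python
# Hypothetical expected-utility calculation in the spirit of the comment.
# All numbers are illustrative assumptions, not estimates of anything real.
p_cheap_resources = 0.5      # a) improvement doesn't cost too many resources
p_fast_enough = 0.5          # b) improvement doesn't take too long
p_intelligence_useful = 0.5  # c) intelligence is as useful as it seems

# Treating a)-c) as independent for the sake of the sketch:
p_dominion = p_cheap_resources * p_fast_enough * p_intelligence_useful

utility_of_dominion = 1e9    # arbitrary large payoff in "utilons"
expected_utility = p_dominion * utility_of_dominion

print(p_dominion)        # 0.125, comfortably above the 0.05 mentioned above
print(expected_utility)  # 125000000.0
```

Even with far smaller probabilities the product stays large, which is the point: a maximizer acts on the expected value, whereas a satisficer could stop at some threshold.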

The biggest problem that I have with recursive self-improvement is that it's not at all clear that intelligence is an easily "scalable" process. Some folks seem to think that intelligence is a relatively easy algorithm, and that with a few mathematical insights it will be possible to "grok" intelligence. But maybe you need many different modules and heuristics for general intelligence (just like the human brain) and there is no "one true and easy path". But I'm just guessing....

I agree that the EU of small improvements is easier to compute and that they are easier to implement. But if intelligence is a "scalable" process, you can make those small improvements fairly rapidly, and after 100 of them you should be pretty friggin' powerful.

Intelligence is mainly useful to adapt previous discoveries and solve well-defined problems.

Do you think the discovery of General Relativity was a well-defined problem? And what about the writing of inspiring novels and the creation of beautiful art and music? Creativity is a subset of intelligence. There are no creative chimps.

What do you mean by intelligence?

And yes, I believe that people with very high IQ and advanced social skills (another instantiation of high intelligence; chimpanzees just don't have high social skills) are far more likely to take over the world than people with IQ 110, although it's still very unlikely.

And as mentioned in the beginning, how dependent are AI's on the precision with which their goals are defined?

My intuition tells me that they are. :-) If you give them the goal of creating 10 paperclips and nothing else, they will try everything to achieve this goal.

But Eliezer's arguments in the AI-Foom debate are far more convincing than mine, so my arguments probably tell you nothing new. The whole discussion is frustrating, because our (subconscious) intuitions seem to differ greatly, and there is little we can do about it. (Just like the debate between Hanson and Yudkowsky was pretty fruitless.) That doesn't mean that I don't want to participate in further discussions, but the probability of agreement seems slim. I try to do my best :-)

I'm currently rereading the Sequences and I'm trying to summarize the various arguments and counter-arguments for the intelligence explosion, though that will take some time...

comment by David Althaus (wallowinmaya) · 2012-01-15T22:04:07.376Z · LW(p) · GW(p)

I can only voice some doubts and questions.

Just wanted to say that I think it's great that you voice questions and doubts. Most folks who don't agree with the "party-line" on LW, or substantial amounts thereof, probably just leave.

I do not have the necessary education to evaluate state of the art AI research and to grasp associated fields that are required to make predictions about the nature of possible AI's capable of self-modification.

I don't have the necessary education either. But you can always make predictions, even if you know almost nothing about the topic in question. You just have to widen your confidence intervals! :-)

Replies from: XiXiDu
comment by XiXiDu · 2012-01-16T09:42:43.460Z · LW(p) · GW(p)

Most folks who don't agree with the "party-line" on LW, or substantial amounts thereof, probably just leave.

Yes, I was talking to people on Facebook who just "left".

But you can always make predictions, even if you know almost nothing about the topic in question.

The problem is that I find most of the predictions being made convincing, but only superficially so. There seem to be a lot of hidden assumptions.

If you were going to speed up a chimp brain a million times, would it quickly reach human-level intelligence? I don't think so. Why would it be different for a human-level intelligence trying to reach transhuman intelligence? It seems like a nice idea when formulated in English, but would it work?

Just because we understand Chess_intelligence does not mean we understand Human_intelligence. As I see it, either there is a single theory of general intelligence and improving it is just a matter of throwing more resources at it, or different levels are fundamentally different and you can't just interpolate Go_intelligence from Chess_intelligence...

Even if we assume that there is one complete theory of general intelligence, and that, once discovered, one just has to throw more resources at it: it might be able to incorporate all human knowledge, adapt it and find new patterns. But would it really be vastly superior to human society and its expert systems?

Take for example a Babylonian mathematician. If you traveled back in time and were to accelerate his thinking a million times, would he discover place-value notation to encode numbers in a few days? I doubt it. Even if he was to digest all the knowledge of his time in a few minutes, I just don't see him coming up with quantum physics after a short period of time.

That conceptual revolutions are just a matter of computational resources seems like pure speculation. If one were to speed up the whole Babylonian world and accelerate cultural evolution, obviously one would arrive at some insights more quickly. But how much more quickly? How many insights depend on experiments, yielding empirical evidence, that can't be sped up considerably? And what is the return? Is the payoff proportional to the resources that are necessary?

Another problem is whether one can improve intelligence itself apart from solving well-defined problems and making more accurate predictions on well-defined classes of problems. I don't think the discovery of unknown unknowns is subject to any heuristic other than natural selection. Without well-defined goals, terms like "optimization" have no meaning.

Without well-defined goals in the form of a precise utility-function, I don't think it would be possible to maximize expected "utility". Concepts like "efficient", "economic" or "self-protection" all have a meaning that is inseparable from an agent's terminal goals. If you just tell it to maximize paperclips then this can be realized in an infinite number of ways that would all be rational given imprecise design and goal parameters. Undergoing explosive recursive self-improvement, taking over the universe and filling it with paperclips, is just one outcome. Why would an arbitrary mind pulled from mind-design space care to do that? Why not just wait for paperclips to arise due to random fluctuations out of a state of chaos? That wouldn't be irrational. To have an AI take over the universe as fast as possible, you would have to explicitly design it to do so.

Replies from: Emile, wallowinmaya, faul_sname
comment by Emile · 2012-01-16T10:09:01.267Z · LW(p) · GW(p)

I don't think that the LW "party line" is that mere additional computational resources are sufficient to get superintelligence or even just intelligence (I'd find such a view simplistic and a bit naive, but I don't find Eliezer's views simplistic and naive).

I think that it's pretty likely that today's hardware would in theory be sufficient to run roughly human-level or superhuman intelligence (in the broad sense: "could do most intellectual jobs humans do today", for example), though that doesn't mean humans are likely to make them anytime soon (just like, if you teleported a competent engineer back to ancient Greece, he would be able to make some amazing devices with the technology of the time, even if that doesn't mean the Greeks were about to invent those things).

I do think that as computational resources increase, the number of ways of designing minds increases, so it makes it more and more likely that someone will eventually figure out how to make something AGIish. But that's not the same as saying that "just increase computational resources and it'll work!".

For an analogy, it may have been possible to build an internal combustion engine with 1700s-era technology, but as time went by and the precision of measurement and manufacturing tools increased, it became easier and easier to do. That doesn't mean that giving 1700s-era manufacturing machinery modern levels of precision would be enough to allow the builders to make an internal combustion engine.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-16T11:43:24.190Z · LW(p) · GW(p)

...but I don't find Eliezer's views simplistic and naive...

My whole problem is that some people seem to have high confidence in the following idea voiced by Eliezer:

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM". Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades; and "FOOM" means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time).

I do not doubt that it is a possibility, but I just don't see how people justify being very confident about it. It sure sounds nice when formulated in English. But is it the result of disjunctive reasoning? I perceive it to be conjunctive: a lot of assumptions have to turn out to be correct for humans to discover simple algorithms overnight that can then be improved to self-improve explosively. I would compare that to the idea of a Babylonian mathematician discovering modern science and physics given that he was uploaded into a supercomputer. I believe that to be highly speculative. It assumes that he could brute-force conceptual revolutions. Even if he were given a detailed explanation of how his mind works and the resources to understand it, self-improving to achieve superhuman intelligence assumes that throwing resources at the problem of intelligence will magically allow him to pull improved algorithms from solution space as if they were signposted. But unknown unknowns are not signposted. It's rather like finding a needle in a haystack. Evolution is great at doing that, but assuming that one could speed up evolution considerably is another assumption about technological feasibility and real-world resources.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-16T15:50:13.741Z · LW(p) · GW(p)

OK, so here are some assumptions, stated as disjunctively as I can:

1: Humans have, over the last hundred years, created systems in the world that are intended to achieve certain goals. Call those systems "technology" for convenience.

2: At least some technology is significantly more capable of achieving the goals it's intended to achieve than its closest biologically evolved analogs. For example, technological freight-movers can move more freight further and faster than biologically evolved ones.

3: For the technology described in assumption 2, biological evolution would have required millennia to develop equivalently capable systems for achieving the goals of that technology.

4: Human intelligence (rather than other things such as, for example, human musculature or covert intervention by technologically advanced aliens) is primarily responsible for the creation of technology described in assumption 2.

5: Technology analogous to the technology-developing functions of human intelligence is in principle possible.

6: Technological technology-developers, if developed, will be significantly more capable of developing technology than human intelligence is.

Here are some assertions of confidence of these assumptions:

A1: 1-epsilon.
A2: 1-epsilon.
A3, given A2: .99+
A4, given A2 : ~.9
A5 given A4: .99+
A6 given A5: .95+

I conclude a .8+ confidence that it's in principle possible for humans to develop systems that are significantly more capable of delivering technological developments than humans are.
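The chained estimate above can be checked mechanically. The comment leaves "epsilon" symbolic; the value below is an illustrative assumption, and the point estimates use the stated lower bounds.

```python
# Multiplying the chained confidences from assumptions 1-6.
eps = 0.001    # illustrative value for "epsilon"

a1 = 1 - eps   # A1
a2 = 1 - eps   # A2
a3 = 0.99      # A3 given A2
a4 = 0.9       # A4 given A2
a5 = 0.99      # A5 given A4
a6 = 0.95      # A6 given A5

joint = a1 * a2 * a3 * a4 * a5 * a6
print(round(joint, 3))  # 0.836, consistent with the ".8+" conclusion
```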

I'll pause there and see if we've diverged thus far: if you have different confidence levels for the assumptions I've stated, I'm interested in yours. If you don't believe that my conclusion follows from the assumptions I've stated, I'm interested in why not.

Replies from: XiXiDu, asr
comment by XiXiDu · 2012-01-16T18:41:18.512Z · LW(p) · GW(p)

You can't really compare technological designs, for which there was no selection pressure and therefore no optimization, with superficially similar evolutionary inventions. For example, you would have to compare the energy efficiency with which insects or birds can carry certain amounts of weight with a similar artificial means of transport carrying the same amount of weight. Or you would have to compare the energy efficiency and maneuverability of bird and insect flight with artificial flight. But comparing a train full of hard disk drives with the bandwidth of satellite communication is not useful. Saying that a rocket can fly faster than anything that evolution came up with is not generalizable to intelligence. And even if I were to accept that argument, there are many counter-examples: the echolocation of bats, economic photosynthesis, or human gait. And the invention of rockets did not lead to space colonization either; space exploration is actually retrogressive.

You also mention that human intelligence is primarily responsible for the creation of technology. I think this is misleading. What is responsible is that we are goal-oriented while evolution is not. But the advance of scientific knowledge is largely an evolutionary process. I don't see that intelligence is currently tangible enough to measure whether the return on increased intelligence is proportional to the resources it would take to amplify it. The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is not proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.

It is in principle possible to create artificial intelligence that is as capable as human intelligence. But this says nothing about how quickly we will be able to come up with it. I believe that intelligence is fundamentally dependent on the complexity of the goals against which it is measured. Goals give rise to agency and define an agent's drives. As long as we won't be able to precisely hard-code a complexity of values similar to that of humans we won't achieve levels of general intelligence similar to humans.

It is true that humans have created a lot of tools that help them to achieve their goals. But it is not clear that incorporating those tools into some sort of self-perception, some sort of guiding agency, is superior to humans using a combination of tools and expert systems. In other words, it is not clear that there does exist a class of problems that is solvable by Turing machines in general, but not by a combination of humans and expert systems. And if that was the case then I think that, just like chimpanzees would be unable to invent science, we won't be able to come up with a meta-heuristic that would allow us to discover algorithms that can solve a class of problems that we can't (other than by using guided evolution).

Besides, recursive self-improvement does not demand sentience, consciousness or agency. Even if humans are not able to "recursively improve" their own algorithms, we can still "recursively improve" our tools. And the supremacy of recursively improving agents over humans and their tools is a reasonable conjecture, but not a fact. It largely relies on the idea that the integration of tools into a coherent framework of agency has huge benefits.

I also object to assigning numerical probability estimates to informal arguments and predictions. When faced with data from empirical experiments, or goats behind doors in a game show, it is reasonable. But using formalized methods to evaluate informal evidence can be very misleading. For real-world, computationally limited agents it is a recipe for failing spectacularly. Using formalized methods to evaluate vague ideas like risks from AI can lead you to dramatically over- or underestimate evidence by forcing you to assign numbers to your intuitive judgment of informal arguments.

And as a disclaimer: don't jump to the conclusion that I generally rule out the possibility that very soon someone will stumble upon a simple algorithm that can be run on a digital computer, that can be improved to self-improve, become superhuman and take over the universe. All I am saying is that the possibility isn't as inevitable as some seem to believe. If forced, I would probably assign a 1% probability to it but still feel uncomfortable about that (which isn't to be equated with risks from AI in general; I don't think FOOM is required for AI's to pose a risk).

I think that Eliezer crossed the border of what can sensibly be said about this topic at the present time when he says that AI will likely invent molecular nanotechnology in a matter of hours or days. Jürgen Schmidhuber is the only person I could find who might agree with that. Even Shane Legg is more skeptical. And since I do not yet have the education to evaluate state of the art AI research myself I will side with the experts and say that Eliezer is likely wrong. Of course, I have no authority but I have to make a decision. I don't feel it would be reasonable to believe Eliezer here without restrictions.

Just because the possibility of superhuman AI seems to be disjunctive on some level doesn't mean that there are no untested assumptions underlying the claims that such an outcome is possible. Reduce the vagueness and you will discover a set of assumptions that need to be true in conjunction.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-16T20:29:03.997Z · LW(p) · GW(p)

So, I'm having a lot of difficulty mapping your response to the question I asked. But if I've understood your response, you are arguing that technology analogous to the technology-developing functions of human intelligence might not be in principle possible, or that if developed might not be capable of significantly greater technology-developing power than human intelligence is.

In other words, that assumptions 5 and/or 6 might be false.

I agree that it's possible. Similar things are true of the other examples you give: it's possible that technological echolocation, or technological walking, or technological photosynthesis, either aren't possible in principle, or can't be significantly more powerful than their naturally evolved analogs. (Do you actually believe that to be true of those examples, incidentally?)

This seems to me highly implausible, which is why my confidence for A5 and A6 are very high. (I have similarly high confidence in our ability to develop machines more efficient than human legs at locomotion, machines more efficient at converting sunlight to useful work than plants, and more efficient at providing sonar-based information about their surroundings than bats.)

So, OK. We've identified a couple of specific, relevant assertions for which you think that my confidence is too high. Awesome! That's progress.

So, what level of confidence do you think is justified for those assertions? I realize that you reject assigning numbers to reported confidence, so OK... do you have a preferred way of comparing levels of confidence? Or do you reject the whole enterprise of such comparisons?

Incidentally: you say a lot of other stuff here which seems entirely beside my point... I think because you're running out ahead to arguments you think I might make some day. I will return to that stuff if I ever actually make an argument to which it's relevant.

comment by asr · 2012-01-16T18:47:42.613Z · LW(p) · GW(p)

I am uneasy with premise 4. I think human technological progress involves an awful lot of tinkering and evolution, and intelligent action by the technologist is not the hardest part. I doubt that if we could all think twice as quickly*, we would develop technology twice as quickly. The real rate-limiting step isn't the design, it's building things and testing them.

This doesn't mean that premise 4 is wrong, exactly, but it means that I'm worried it's going to be used in an inconsistent, equivocal, way.

*I am picturing taking all the relevant people, and having them think the same thoughts they do today, in half the time. Presumably they use the newly-free time to think more thoughts.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-16T19:27:31.442Z · LW(p) · GW(p)

Fair enough. If I end up using it equivocally or inconsistently, please do call me out on it.

Note that absolutely nothing I've said so far implies people thinking the same thoughts they do today in half the time.

Replies from: asr
comment by asr · 2012-01-16T19:53:55.975Z · LW(p) · GW(p)

No no, I wasn't attributing "same thoughts in half the time" to you. I was explaining the thought-experiment I was using to distinguish "intelligence" as an input from other requirements for technology creation.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-01-16T21:09:23.340Z · LW(p) · GW(p)

If what you understand by "intelligence" is the ability to arrive at the same conclusions faster, then I agree with you that that thing has almost nothing to do with technological development, and I should probably back up and rewrite assumptions 4-6 while tabooing the word "intelligence".

comment by David Althaus (wallowinmaya) · 2012-01-17T12:39:05.134Z · LW(p) · GW(p)

If you just tell it to maximize paperclips then this can be realized in an infinite number of ways

If the AI has the goal of maximizing the number of paperclips in the universe and it is a rational utility maximizer, it will try to find the most efficient way to do that, and there is probably only one (i.e. recursive self-improvement, acquiring resources, etc.). You're right, if the AI isn't a rational utility maximizer it could do anything.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-17T13:24:36.399Z · LW(p) · GW(p)

You're right, if the AI isn't a rational utility maximizer it could do anything.

I don't think this follows. Even a rational utility maximizer can maximize paperclips in a lot of different ways. How it does so depends fundamentally on its utility-function and how precisely it was defined. If there are no constraints in the form of design and goal parameters, then it can maximize paperclips in all sorts of ways that don't demand recursive self-improvement. "Utility" only becomes well-defined if we precisely define what it means to maximize it. Just maximizing paperclips doesn't define how quickly and how economically it is supposed to happen.

Replies from: wallowinmaya
comment by David Althaus (wallowinmaya) · 2012-01-17T15:04:17.445Z · LW(p) · GW(p)

I don't understand your arguments.

My intuition is this: if the AI has the goal "The more paperclips the better: e.g. a universe containing 1002 paperclips is "400 utilons" better than a universe containing 602", then it will try to maximize paperclips. And if it tries this by reciting poems from the Bible then it isn't a rational AI, since it does not employ the most efficient strategy for maximizing paperclips.

The very definition of "rational utility maximizer" implies that it will try to maximize utilons as fast and as efficient as possible. Sure, it's possible that recursive self-improvement isn't a good strategy for doing so, but I think it's not unlikely. Am I missing something?

If the AI has a different utility function like "paperclips are pretty cool, but not as awesome as other things" then it will do other things.

Replies from: wedrifid, XiXiDu
comment by wedrifid · 2012-01-17T16:39:53.281Z · LW(p) · GW(p)

The very definition of "rational utility maximizer" implies that it will try to maximize utilons as fast and as efficient as possible. Sure, it's possible that recursive self-improvement isn't a good strategy for doing so, but I think it's not unlikely. Am I missing something?

No, you are not missing something at least not here. XiXiDu simply doesn't have a firm grasp on the concept of optimization. Don't let this confuse you.

comment by XiXiDu · 2012-01-17T16:24:25.263Z · LW(p) · GW(p)

The very definition of "rational utility maximizer" implies that it will try to maximize utilons as fast and as efficient as possible.

The problem is that "utility" has to be defined. To maximize expected utility does not imply certain actions, efficiency and economic behavior, or the drive to protect yourself. You can also rationally maximize paperclips without protecting yourself if it is not part of your goal parameters.

I know what kind of agent you assume. I am just pointing out what needs to be true in conjunction to make the overall premise true. Expected utility maximization does not equal what you assume. You can also assign utility to maximizing paperclips for as long as nothing turns you off, without caring about being turned off. If an AI is not explicitly programmed to care about it, then it won't.

comment by faul_sname · 2012-01-16T09:51:49.247Z · LW(p) · GW(p)

Take for example a Babylonian mathematician. If you traveled back in time and were to accelerate his thinking a million times, would he discover place-value notation to encode numbers in a few days? I doubt it. Even if he was to digest all the knowledge of his time in a few minutes, I just don't see him coming up with quantum physics after a short period of time.

I suspect that he would invent place-value notation or something similar within a few "days." Remember that a "day" at 1 million times speedup is over 2 millennia. Now, there are clearly some difficulties in testing what an isolated human produces in 2300 years, but we could look at cases of hermits who left civilization for periods of years or decades. Did any of them come up with revolutionary ideas? If the answer is yes in a decade or two, it is almost certain that a human at 1,000,000x speedup would have several such insights (assuming that such a speedup doesn't result in insanity). I can't imagine the mathematician coming up with quantum mechanics, but that could easily be a failure of my 1x speed brain.

comment by XiXiDu · 2011-12-19T17:18:48.933Z · LW(p) · GW(p)

I don't see how our not having built a technological civilization earlier in our history constitutes evidence that we have only the minimum intelligence necessary to do so. I don't think that intelligence makes as much of a difference to how quickly discoveries are made as you seem to think.

...this is why I am glad to see people other than myself explaining the concept.

I have never seen you explain the concept nor have I seen you refer to an explanation. I must have missed that, but I also haven't read all of your comments.

comment by Dr_Manhattan · 2011-12-19T18:00:01.252Z · LW(p) · GW(p)

10%: 2050 (I also think P=NP in that year.) 50%: 2062

+40% in 12 specific years? Now that's a bold distribution.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-12-19T22:58:05.018Z · LW(p) · GW(p)

And it would be even tighter on the left if P!=NP.

comment by lukeprog · 2011-12-20T22:56:48.467Z · LW(p) · GW(p)

I do value these; please keep doing them!

comment by timtyler · 2011-12-19T10:35:47.585Z · LW(p) · GW(p)

2050 (I also think P=NP in that year.)

P=NP - WTF?!? ;-)

Replies from: JoshuaZ, XiXiDu
comment by JoshuaZ · 2011-12-19T14:33:04.351Z · LW(p) · GW(p)

That comment is by far the one that is farthest outside his expertise, and I'm not sure why he's commenting on it. (He is a computer scientist, but none of his work seems to be in complexity theory or even connected to it, as far as I can tell.) But he's still very respected, and I would presume he knows a lot about issues in parts of compsci that aren't his own area of research. Is it possible that he made a typo?

Replies from: mlittman
comment by mlittman · 2012-01-15T15:10:40.075Z · LW(p) · GW(p)

Not a typo---I was mostly being cheeky. But, I have studied complexity theory quite a bit (mostly in analyzing the difficulty of problems in AI) and my 2050 number came from the following thought experiment. The problem 3-SAT is NP complete. It can be solved in time 2^n (where n is the number of variables in the formula). Over the last 20 or 30 years, people have created algorithms that solve the problem in c^n for ever decreasing values of c. If you plot the rate of decrease of c over time, it's a pretty good fit (or was 15 years ago when I did this analysis) for a line that goes below 1 in 2050. (If that happens, an NP-hard problem would be solvable in polynomial time and thus P=NP.) I don't put much stake in the idea that the future can be predicted by a graph like that, but I thought it was fun to think about. Anyhow, sorry for being flip.
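For readers curious what that extrapolation looks like mechanically, here is a minimal least-squares version of it. The (year, base) pairs below are invented for illustration only; they are not the actual historical sequence of 3-SAT upper bounds that Littman fitted.

```python
# Fit a line to hypothetical best-known 3-SAT bases c (runtime c^n) over
# time, then solve for the year the fitted line crosses c = 1. As the
# comment says, nothing licenses trusting such a trend; it's just for fun.
years = [1980, 1986, 1999, 2005, 2011]
bases = [1.62, 1.50, 1.45, 1.40, 1.33]  # hypothetical values of c

n = len(years)
mean_x = sum(years) / n
mean_y = sum(bases) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, bases)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

crossing_year = (1.0 - intercept) / slope
print(round(crossing_year))  # roughly 2053 on this invented data
```

If the best-known base really did reach 1, an NP-hard problem would be solvable in polynomial time, which is the sense in which the graph "predicts" P=NP.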

Replies from: mlittman, JoshuaZ
comment by mlittman · 2012-01-21T15:36:55.819Z · LW(p) · GW(p)

Side note: I did this analysis initially in honor of Donald Loveland (a colleague at the time whose satisfiability solver sits at the root of this tree of discoveries). I am gratified to see that he was interviewed on lesswrong on a more recent thread!

comment by JoshuaZ · 2012-01-15T17:41:53.157Z · LW(p) · GW(p)

Thanks for clarifying. (And welcome to Less Wrong.)

comment by XiXiDu · 2011-12-19T11:07:35.972Z · LW(p) · GW(p)

P=NP - WTF?!? ;-)

Also see: Polls And Predictions And P=NP

Replies from: Nymogenous
comment by Nymogenous · 2011-12-19T13:23:52.621Z · LW(p) · GW(p)

Does anyone know what the largest amount of money wagered on this question is?

EDIT: I'm aware of a few bets on specific claimed proofs, but have not been able to find any bets on the general question that exceed a few hundred dollars (unless you count the million-dollar prizes various institutes are offering).

Replies from: XiXiDu
comment by XiXiDu · 2011-12-19T15:02:50.874Z · LW(p) · GW(p)

Does anyone know what the largest amount of money wagered on this question is?

Don't know, but Scott Aaronson once bet $200,000 on a proof being wrong. He wrote:

I’ve literally bet my house on it.

Replies from: gwern
comment by gwern · 2011-12-19T16:04:05.905Z · LW(p) · GW(p)

When I read that, I didn't expect him to actually pay up in the unlikely event the proof was right - there's a big difference between saying 'I bet my house' on your blog and actually sending a few hundred thousand or million bucks to the Long Now Foundation's Long Bets project.

Replies from: CarlShulman
comment by CarlShulman · 2012-01-04T09:09:01.651Z · LW(p) · GW(p)

When I read that, I didn't expect him to actually pay up in the unlikely event the proof was right

Likewise.

sending a few hundred thousand or million bucks to the Long Now Foundation's Long Bets project.

With Long Bets you lose the money (to your chosen charity) even if you are right, so not an ideal comparison.

Replies from: gwern
comment by gwern · 2012-01-04T14:48:36.669Z · LW(p) · GW(p)

It was an example of a more credible commitment than a blog post. To paraphrase Buffett's Noah principle: 'predicting rain doesn't count, building arks does'.

EDIT: an additional disadvantage to Long Bets is that they stash the stakes in a very low return fund (but one that should be next to invulnerable). Depending on your views about the future and your investment abilities, the opportunity cost could be substantial.

comment by Emile · 2011-12-19T15:27:53.003Z · LW(p) · GW(p)

Q3-sub: P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?

Michael Littman: 1%. At least 5 years is enough for some experimentation.

That's the answer that surprised me the most. I'm willing to defer to his experience when it comes to the feasibility of human-level AI itself, but human-level AI + a blueprint of how it was built + better resources than a human in terms of raw computing power and memory + having a much closer interface to code than a human does + self-modification .... well, all that seems like a pretty straightforward recipe for creating superhuman intelligence.

Replies from: whpearson
comment by whpearson · 2011-12-19T22:37:39.075Z · LW(p) · GW(p)

Lots of machine learning programs have parameters set to certain values because they seem to work well (e.g. update rates on perceptrons). Perhaps he is extrapolating that into full AI. So the blueprint would be strewn with comments like: "Set the complexity threshold for attributing external changes to volitional agents to 0.782. Any higher and the agent believes humans aren't intelligent and tries (and fails) to predict them from first principles rather than the intentional stance. Any lower and the agent believes rocks are intelligent and just want to stay still. This also interferes with learning rate alpha for unknown reasons."

So experimenting with different parameter values might take significant time to evaluate (especially if you have to raise the agent from a baby each time).
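To make the sensitivity concrete, here's a toy sketch (entirely hypothetical numbers, not any real system): a one-parameter gradient-descent learner where the step size can't be known a priori - too small and it barely learns, too large and it diverges.

```python
import random

random.seed(0)

# Toy illustration (hypothetical numbers): fit y = 3*x by stochastic
# gradient descent on squared error. The "right" step size isn't
# knowable a priori - you have to experiment.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(50))]

def fit(lr, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

for lr in (1e-6, 0.05, 5.0):
    print(f"lr={lr}: learned w = {fit(lr):.3g} (target 3.0)")
```

With this particular seed, the tiny step size barely moves the weight off zero, the middle one recovers the target, and the large one blows up - and nothing in the learner itself tells you beforehand which regime a given value falls into.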

I'm also guessing that Michael doesn't think that AIs are likely to be malicious and write malware to run experiments in the darknet :)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-12-20T14:39:46.044Z · LW(p) · GW(p)

This is the reason why I'm more worried about hardware overhang than recursive self-improvement. Currently known learning algorithms seem to all have various parameters like that whose right value you can't know a priori - you have to experiment to find out. And when setting parameter 420 to .53 gives you a different result than setting it to .48, you don't necessarily know which result is more correct, either. You need some external way of verifying the results, and you need to be careful that you are still interpreting the external data correctly and didn't just self-modify yourself to go insane. (You can test yourself on data you've generated yourself, and where you know the correct answers, but that doesn't yet show that you'll process real-world data correctly.)

My current intuition suggests that general intelligence is horribly fragile, in the sense that it's an extremely narrow slice of mindspace that produces designs that actually reason correctly. Just like with humans, if you begin to tamper with your own mind, you're most likely to do damage if you don't know what you're doing - and evolution has had time to make our minds quite robust in comparison.

That isn't to say that an AGI couldn't RSI itself to godhood in a relatively quick time, especially if it had human scientists helping it out. Also, like cousin_it pointed out, you don't necessarily need superintelligence to destroy humanity. But the five year estimate doesn't strike me as unreasonable.

What I suspect - and hope, since it might give humanity a chance - to happen is that some AGI will begin a world-takeover attempt, but then fail due to some epistemic equivalent of a divide-by-zero error, falling prey to Pascal's mugging or something.

Then again, it might fail only after having destroyed humanity in the process.

Replies from: whpearson, Emile
comment by whpearson · 2011-12-21T00:37:49.765Z · LW(p) · GW(p)

I've thought about scenarios of failed RSIs. My favorite is an idiot savant computer hacking AI that subsumes the entire Internet but has no conception of the real world. So we just power off, reformat and need to think carefully about how we make computers and how to control AI.

But I've really no concrete reason to expect this scenario to play out. I expect the nature of intelligence to throw us some more conceptual curve balls before we have an inkling of where we are headed and how to best steer the future.

comment by Emile · 2011-12-20T16:52:43.141Z · LW(p) · GW(p)

You need some external way of verifying the results, and you need to be careful that you are still interpreting the external data correctly and didn't just self-modify yourself to go insane. (You can test yourself on data you've generated yourself, and where you know the correct answers, but that doesn't yet show that you'll process real-world data correctly.)

If I was an AI in such a situation, I'd make a modified copy of myself (or of the relevant modules) interfaced with a simulation environment with some physics-based puzzle to solve, such that it only gets a video feed and only has some simple controls (say, have it play Portal - the exact challenge is a bit irrelevant, just something that requires general intelligence). A modified AI that performs better (learns faster, comes up with better solutions) in a wide variety of simulated environments will probably also work better in the real world.

Even if the combination of parameters that makes for functional intelligence is very fragile, i.e. the search space has high dimensionality and the "surface" is very jagged, it's still a search space that can be explored and mapped.

That's a bit hand-wavy, but enough to get me to suspect that an agent that can self-modify and run simulations of itself has a non-negligible chance of self-improving successfully (for a broad meaning of "successfully", that includes accidentally rewriting the utility function, as long as the resulting system is more powerful).
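The test-before-adopting loop could look something like this minimal sketch (the toy task and the single "explore_rate" parameter are my own invented stand-ins, not a real agent): a candidate self-modification is adopted only if it beats the current self on a battery of freshly generated simulated tasks.

```python
import random

random.seed(1)

def run_task(explore_rate, hidden):
    """Toy environment: locate a hidden value in [0, 1) by guess-and-refine.
    Returns negative final error, so higher is better."""
    guess = 0.5
    for _ in range(30):
        trial = guess + random.uniform(-explore_rate, explore_rate)
        if abs(trial - hidden) < abs(guess - hidden):
            guess = trial  # keep the proposal only if it improved
    return -abs(guess - hidden)

def fitness(explore_rate, n_tasks=200):
    """Average score over a fresh battery of random simulated tasks."""
    return sum(run_task(explore_rate, random.random())
               for _ in range(n_tasks)) / n_tasks

current = 0.9  # the current "self"
for _ in range(20):
    # propose a random self-modification...
    candidate = max(1e-3, current + random.uniform(-0.2, 0.2))
    # ...but adopt it only if it wins on fresh simulated tasks
    if fitness(candidate) > fitness(current):
        current = candidate

print(f"settled on explore_rate ≈ {current:.2f}")
```

The key design point is that the candidate is always scored on tasks it has never seen, which is the closest a toy like this gets to "not testing yourself on data you generated yourself".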

But the five year estimate doesn't strike me as unreasonable.

Meaning, a 1% chance of superhuman intelligence within 5 years, right?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-12-20T18:29:15.000Z · LW(p) · GW(p)

Meaning, a 1% chance of superhuman intelligence within 5 years, right?

Sorry, I meant to say that it does not seem unreasonable to me that an AGI might take five years to self-improve. 1% does seem unreasonably low. I'm not sure what probability I would assign to "superhuman AGI in 5 years", but under say 40% seems quite low.

comment by Nymogenous · 2011-12-19T13:04:58.486Z · LW(p) · GW(p)

No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.

Not sure this is a fair comparison, for two reasons: 1) We don't have the complete source code to human consciousness yet, so we can't do a good analysis of it, and 2) if anything, primates are provably unfriendly to each other (at least outside their tribal group).

EDIT: Yes, I realize that a human genome is sort of a source code to our behavior, but having it without a complete theory of physics is rather like being given the source code to an AI in an unknown format.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-12-19T17:20:17.862Z · LW(p) · GW(p)

Yes, I realize that a human genome is sort of a source code to our behavior, but having it without a complete theory of physics is rather like being given the source code to an AI in an unknown format.

Having the exact laws of physics here probably doesn't matter as much as simply having a better understanding of human development. The genome isn't all that matters: what proteins are in the egg at the start matters a lot, and there are things like epigenetics. And the computational power needed to model anything in the human body reliably is immense. The fundamental laws of physics probably don't matter much for human behavior.

comment by timtyler · 2011-12-19T10:28:33.524Z · LW(p) · GW(p)

The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 to 17 in a human's development.

Those were some of the most exciting years of my life.

Similarly, I expect the run up to machine intelligence to consist of interesting times.

comment by NancyLebovitz · 2011-12-19T21:13:10.912Z · LW(p) · GW(p)

Is it plausible that fair-to-middling AI could be enough to break civilization? There are a lot of factors, especially whether civilization will become more fragile or more resilient as tech advances, but it does seem to me that profit-maximizing and status-maximizing AI have a lot of possibilities for trouble.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-12-21T03:28:49.727Z · LW(p) · GW(p)

How about attention-maximizing AI, e.g. a game that optimizes for addictiveness — for the amount of person-hours humans spend playing it?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-22T18:56:29.205Z · LW(p) · GW(p)

I think that's less likely to break civilization than a status-maximizer or a money-maximizer-- there are a lot of people who don't want to get started with video games, and I think that an attention-maximizer would run into a lot of resistance as early adopters neglected their lives.

A clever attention maximizer which was aiming for the long run might not wreck civilization. I'm not as sure about status or money maximizers.

Brunner's The Jagged Orbit is about some issues with maximizers, including who might be most likely to develop a stupid maximizer.

Gur rkrphgvirf ng n crefbany jrncbaf pbzcnal unir n pbzchgre cebtenz gb znkvzvmr cebsvgf (cbffvoyl fnyrf-- V unira'g ernq gur obbx yngryl) va gur eryngviryl fubeg eha, erfhygvat va nqiregvfvat pnzcnvtaf juvpu penax hc cnenabvn gb gur cbvag jurer pvivyvmngvba vf jerpxrq.

Gurer'f n fbyhgvba juvpu vaibyirf gur pbzchgre vairagvat gvzr geniry, ohg V qba'g erzrzore gur qrgnvyf.

comment by marchdown · 2011-12-23T19:28:35.789Z · LW(p) · GW(p)

That is very... refreshing.

comment by timtyler · 2011-12-19T10:34:48.877Z · LW(p) · GW(p)

I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't.

A common sentiment. Shane Legg even says something similar:

We then use this fact to prove that although very powerful prediction algorithms exist, they cannot be mathematically discovered due to Gödel incompleteness. Given how fundamental prediction is to intelligence, this result implies that beyond a moderate level of complexity the development of powerful artificial intelligence algorithms can only be an experimental science.

I think we can now label the need for an environment as a fallacy. Most of the guts of building an intelligent agent involves finding good computable approximations to Solomonoff induction - and you can do that pretty well in a virtual world with an optimisation algorithm and a fitness function based around something like AIQ. This is essentially a math problem.
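As an illustration of what a (very) crude computable stand-in might look like - my own toy sketch, not a serious proposal - here is a mixture of binary Markov predictors of increasing order, with fixed prior weights 2^-(order+1) echoing the complexity prior (a real Bayes mixture would also reweight the models by their likelihood):

```python
from collections import defaultdict

class MarkovMixture:
    """Toy complexity-weighted mixture over binary Markov models."""
    def __init__(self, max_order=3):
        self.orders = range(max_order + 1)
        # counts[k][context][symbol] = occurrences of symbol after context
        self.counts = {k: defaultdict(lambda: defaultdict(int))
                       for k in self.orders}
        self.history = ""

    def predict(self, symbol):
        """Mixture probability that the next symbol is `symbol`."""
        num = den = 0.0
        for k in self.orders:
            ctx = self.history[-k:] if k else ""
            c = self.counts[k][ctx]
            total = sum(c.values())
            p = (c[symbol] + 1) / (total + 2)  # Laplace-smoothed, binary
            w = 2.0 ** -(k + 1)                # complexity prior on order
            num += w * p
            den += w
        return num / den

    def update(self, symbol):
        for k in self.orders:
            ctx = self.history[-k:] if k else ""
            self.counts[k][ctx][symbol] += 1
        self.history += symbol

m = MarkovMixture()
for bit in "01" * 50:  # a trivially compressible sequence
    m.update(bit)
print(f"P(next = '0') = {m.predict('0'):.3f}")
```

After training on the alternating sequence, the mixture assigns roughly 0.72 to the correct next bit: the higher-order models predict it almost perfectly, while the order-0 model (which carries the largest fixed weight here) drags the estimate toward 0.5.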

Replies from: jsteinhardt
comment by jsteinhardt · 2011-12-20T12:53:48.369Z · LW(p) · GW(p)

I think we can now label the need for an environment as a fallacy.

I think it's difficult to label something as a fallacy when there is almost no hard evidence either way about who is right. The vast majority of AI researchers (including myself) don't think that Solomonoff induction will solve AI. It is also possible to construct formal environments where performing better than chance is impossible without constant interaction with the environment (unless P=NP). So, if the problem is essentially a math problem, that would have to depend on specific facts about the world that make it different from such environments.

Replies from: timtyler
comment by timtyler · 2011-12-20T14:30:05.778Z · LW(p) · GW(p)

The vast majority of AI researchers (including myself) don't think that Solomonoff induction will solve AI.

Solomonoff induction certainly doesn't give you an evaluation function on a plate. It would pass the Turing test, though. If the "vast majority of AI researchers" don't realise that, they should look into the issue further.

It is also possible to construct formal environments where performing better than chance is impossible without constant interaction with the environment (unless P=NP). So, if the problem is essentially a math problem, that would have to depend on specific facts about the world that make it different from such environments.

So: Occam's razor - the foundation of science - is also needed. We know about that. Other facts about the world might help a little (arguably, some elements relating to locality are implicit in the reference machine) - but they don't seem to be as critical.

Replies from: jsteinhardt
comment by jsteinhardt · 2011-12-20T15:07:34.263Z · LW(p) · GW(p)

So: Occam's razor - the foundation of science - is also needed.

I was referring to computational issues, not whether a complexity prior is reasonable or not. It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment. I don't see how Occam's razor suggests that our world doesn't look like that (in fact, I currently think that our world does look like that, although my confidence in this is not very high).

Replies from: timtyler
comment by timtyler · 2011-12-20T15:51:57.914Z · LW(p) · GW(p)

It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment.

Well, of course - but that's learning - which Solomonoff induction models just fine (it is a learning theory).

Or maybe you are suggesting that organisms modify their environment to make their problems simpler. That is perfectly possible - but I don't really see how it is relevant.

You apparently didn't disagree with Solomonoff induction allowing the Turing test to be passed. So: what exactly is your beef with its centrality and significance?

Replies from: jsteinhardt
comment by jsteinhardt · 2011-12-20T16:05:35.021Z · LW(p) · GW(p)

It's possible I misunderstood your original comment. Let me play it back to you in my own words to make sure we're on the same page.

My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?

P.S. I think that I also disagree with the Solomonoff induction -> Turing test proposition; but I'd rather delay discussing that point because I think it is contingent on the others.

Replies from: timtyler
comment by timtyler · 2011-12-20T19:25:18.874Z · LW(p) · GW(p)

My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?

Pretty much. Virtual environments are fine, contain lots of complexity (chaos theory) and have easy access to lots of interesting and difficult problems (game of go, etc). Virtual worlds permit the development of intelligent agents just like the "real" world does. A good job too - since we have no terribly good way of telling whether our world exists under simulation or not.

The Solomonoff induction -> Turing test proposition is detailed here.

Replies from: jsteinhardt
comment by jsteinhardt · 2011-12-28T17:04:55.313Z · LW(p) · GW(p)

Sorry for the delayed response, it took me a while to get through the article and corresponding Hutter paper. Do you know of any sources that present the argument for why the Kolmogorov complexity of the universe should be relatively low (i.e. not proportional to the number of atoms), or else why Solomonoff induction would perform well even if the Kolmogorov complexity is high? These both seem intuitively true to me, but I feel uneasy accepting them as fact without a solid argument.

Replies from: timtyler
comment by timtyler · 2011-12-29T00:02:30.411Z · LW(p) · GW(p)

The Kolmogorov complexity of the universe is a totally unknown quantity - AFAIK. Yudkowsky suggests a figure of 500 bits here - but there's not much in the way of supporting argument.

Solomonoff induction doesn't depend on the Kolmogorov complexity of the universe being low. The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.

Instead, consider that Solomonoff induction is a formalisation of Occam's razor - which is a well-established empirical principle.

Replies from: jsteinhardt, dlthomas
comment by jsteinhardt · 2011-12-29T00:24:18.757Z · LW(p) · GW(p)

I don't understand. I thought the point of Solomonoff induction is that it's within an additive constant of being optimal, where the constant depends on the Kolmogorov complexity of the sequence being predicted.
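For reference, the result I have in mind is Solomonoff's prediction bound (in Hutter's formulation; I'm quoting from memory, so treat the constant as approximate): for any computable measure mu generating the sequence, the universal mixture M satisfies

```latex
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\left(M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t})\right)^2\right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```

So the "constant" is finite for any fixed computable environment, but it scales with K(mu) - which is why the complexity of the thing being predicted seems relevant.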

Replies from: timtyler
comment by timtyler · 2011-12-29T15:27:07.770Z · LW(p) · GW(p)

Are you thinking of applying Solomonoff induction to the whole universe?!?

If so, that would be a very strange thing to try and do.

Normally you apply Solomonoff induction to some kind of sensory input stream (or a preprocessed abstraction from that stream).

Replies from: jsteinhardt
comment by jsteinhardt · 2011-12-29T16:03:06.589Z · LW(p) · GW(p)

Sure, but an AGI will presumably eventually observe a large portion of the universe (or at least our light cone), so the K-complexity of its input stream is on par with the K-complexity of the universe, right?

Replies from: timtyler
comment by timtyler · 2011-12-29T16:38:12.749Z · LW(p) · GW(p)

It seems doubtful. In multiverse models, the visible universe is peanuts. Also, the universe might be much larger than the visible universe will ever get before the universal heat death.

This is all far-future stuff. Why should we worry about it? Aren't there more pressing issues?

comment by dlthomas · 2011-12-29T00:24:33.993Z · LW(p) · GW(p)

The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.

Wouldn't it put an upper bound on the complexity of any given piece, as you can describe it with "the universe, plus the location of what I care about"?

Edited to add: Ah, yes, but "the location of what I care about" has potentially a huge amount of complexity to it.

Replies from: timtyler
comment by timtyler · 2011-12-29T15:31:26.746Z · LW(p) · GW(p)

Wouldn't it put an upper bound on the complexity of any given piece, as you can describe it with "the universe, plus the location of what I care about"?

As you say, if the multiverse happens to have a small description, the address of an object in the multiverse can still get quite large...

...but yes, things we see might well have a maximum complexity - associated with the size and complexity of the universe.

When dealing with practical approximations to Solomonoff induction this is "angels and pinheads" material, though. We neither know nor care about such things.

Replies from: dlthomas
comment by dlthomas · 2011-12-29T16:48:51.926Z · LW(p) · GW(p)

Fair enough.