Elites and AI: Stated Opinions

post by lukeprog · 2013-06-15T19:52:36.207Z · LW · GW · Legacy · 24 comments

Previously, I asked "Will the world's elites navigate the creation of AI just fine?" My current answer is "probably not," but I think it's a question worth additional investigation.

As a preliminary step, and with the help of MIRI interns Jeremy Miller and Oriane Gaillard, I've collected a few stated opinions on the issue. This survey of stated opinions is not representative of any particular group, and is not meant to provide strong evidence about what is true on the matter. It's merely a collection of quotes we happened to find on the subject. Hopefully others can point us to other stated opinions — or state their own opinions.

MIRI researcher Eliezer Yudkowsky is famously pessimistic on this issue. For example, in a 2009 comment, he replied to the question "What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?" by saying "the answer is, 'None.' It's like asking how you should move your legs to walk faster than a jet plane" — again, implying extreme skepticism that political elites will manage AI properly.1

Cryptographer Wei Dai is also quite pessimistic:

...even in a relatively optimistic scenario, one with steady progress in AI capability along with apparent progress in AI control/safety (and nobody deliberately builds a UFAI for the sake of "maximizing complexity of the universe" or what have you), it's probably only a matter of time until some AI crosses a threshold of intelligence and manages to "throw off its shackles". This may be accompanied by a last-minute scramble by mainstream elites to slow down AI progress and research methods of scalable AI control, which (if it does happen) will likely be too late to make a difference.

Stanford philosopher Ken Taylor has also expressed pessimism, in an episode of Philosophy Talk called "Turbo-charging the mind":

Think about nuclear technology. It evolved in a time of war... The probability that nuclear technology was going to arise at a time when we use it well rather than [for] destruction was low... Same thing with... superhuman artificial intelligence. It's going to emerge... in a context in which we make a mess out of everything. So the probability that we make a mess out of this is really high.

Here, Taylor seems to express the view that humans are not yet morally and rationally advanced enough to be trusted with powerful technologies. This general view has been expressed before by many others, including Albert Einstein, who wrote that "Our entire much-praised technological progress... could be compared to an axe in the hand of a pathological criminal."

In response to Taylor's comment, MIRI researcher Anna Salamon (now Executive Director of CFAR) expressed a more optimistic view:

I... disagree. A lot of my colleagues would [agree with you] that 40% chance of human survival is absurdly optimistic... But, probably we're not close to AI. Probably by the time AI hits we will have had more thinking going into it... [Also,] if the Germans had successfully gotten the bomb and taken over the world, there would have been somebody who profited. If AI runs away and kills everyone, there's nobody who profits. There's a lot of incentive to try and solve the problem together...

Economist James Miller is another voice of pessimism. In Singularity Rising, chapter 5, he worries about game-theoretic mechanisms incentivizing speed of development over safety of development:

Successfully creating [superhuman AI] would give a country control of everything, making [superhuman AI] far more militarily useful than mere atomic weapons. The first nation to create an obedient [superhuman AI] would also instantly acquire the capacity to terminate its rivals’ AI development projects. Knowing the stakes, rival nations might go full throttle to win [a race to superhuman AI], even if they understood that haste could cause them to create a world-destroying [superhuman AI]. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners’ Dilemma thwarts all cooperation efforts... Imagine that both the US and Chinese militaries want to create [superhuman AI]. To keep things simple, let’s assume that each military has the binary choice to proceed either slowly or quickly. Going slowly increases the time it will take to build [superhuman AI] but reduces the likelihood that it will become unfriendly and destroy humanity. The United States and China might come to an agreement and decide that they will both go slowly... [But] if the United States knows that China will go slowly, it might wish to proceed quickly and accept the additional risk of destroying the world in return for having a much higher chance of being the first country to create [superhuman AI]. (During the Cold War, the United States and the Soviet Union risked destroying the world for less.) The United States might also think that if the Chinese proceed quickly, then they should go quickly, too, rather than let the Chinese be the likely winners of the... race.
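
Miller's race dynamic has the structure of a one-shot Prisoner's Dilemma, and it can be made concrete with a toy payoff table. The numbers below are invented for illustration (they are not from Singularity Rising); they only exhibit the structure in which going fast is each side's best response whatever the other does, even though both sides prefer the mutual slow outcome.

    # Illustrative (made-up) payoffs for an AI arms race framed as a
    # one-shot Prisoner's Dilemma. Keys are (row player, column player)
    # strategies; values are (row payoff, column payoff), higher is better.
    payoffs = {
        ("slow", "slow"): (3, 3),   # both accept delay; low risk of disaster
        ("slow", "fast"): (0, 4),   # the rival likely wins the race
        ("fast", "slow"): (4, 0),
        ("fast", "fast"): (1, 1),   # all-out race; high risk for everyone
    }

    def best_response(opponent_choice):
        """The row player's payoff-maximizing reply to a fixed opponent choice."""
        return max(["slow", "fast"],
                   key=lambda mine: payoffs[(mine, opponent_choice)][0])

    for their_choice in ["slow", "fast"]:
        print(f"If the rival goes {their_choice}, the best response is to go {best_response(their_choice)}")
    # Prints "fast" both times: going fast is a dominant strategy, even though
    # (slow, slow) is better for both players than (fast, fast).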

In chapter 6, Miller expresses similar worries about corporate incentives and AI:

Paradoxically and tragically, the fact that [superhuman AI] would destroy mankind increases the chance of the private sector developing it. To see why, pretend that you’re at the racetrack deciding whether to bet on the horse Recursive Darkness. The horse offers a good payoff in the event of victory, but her odds of winning seem too small to justify a bet—until, that is, you read the fine print on the racing form: "If Recursive Darkness loses, the world ends." Now you bet everything you have on her because you realize that the bet will either pay off or become irrelevant.
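
Miller's racetrack logic amounts to a point about conditional expected value, which a toy calculation can make explicit. The probabilities and payouts below are assumptions chosen for illustration, not Miller's: once losing coincides with the end of the world, the bettor only ever experiences the branch in which the bet pays off, so the downside drops out of the decision.

    # Toy numbers (assumed for illustration): the horse wins with probability
    # 0.05; a winning $1 bet returns $10 (net +$9), a losing bet costs the $1 stake.
    p_win = 0.05
    net_if_win, net_if_lose = 9.0, -1.0

    # Ordinary bet: both branches matter, and the bet looks bad.
    ev_ordinary = p_win * net_if_win + (1 - p_win) * net_if_lose
    print(f"EV of an ordinary bet: {ev_ordinary:+.2f}")             # -0.50

    # Miller's twist: if the horse loses, the world ends, so the bettor never
    # experiences the losing branch. Conditional on being around to collect,
    # the bet is all upside.
    ev_given_survival = net_if_win
    print(f"EV conditional on survival: {ev_given_survival:+.2f}")  # +9.00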

Miller expanded on some of these points in his chapter in Singularity Hypotheses.

In a short reply to Miller, GMU economist Robin Hanson wrote that

[Miller's analysis is] only as useful as the assumptions on which it is based. Miller's chosen assumptions seem to me quite extreme, and quite unlikely.

Unfortunately, Hanson does not explain his reasons for rejecting Miller's analysis.

Sun Microsystems co-founder Bill Joy is famous for the techno-pessimism of his Wired essay "Why the Future Doesn't Need Us," but that article's predictions about elites' likely handling of AI are actually somewhat mixed:

we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.

One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.

The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed... Churchill remarked, in a famous left-handed compliment, that the American people and their leaders 'invariably do the right thing, after they have examined every other alternative.' In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all...

...And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.

Former GiveWell researcher Jonah Sinick has expressed optimism on the issue:

I personally am optimistic about the world's elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:

  1. I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.

  2. AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.

  3. The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time... The most rational people are the people who are most likely to be aware of and to work to avert AI risk...

  4. Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient...

  5. In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.

Paul Christiano is another voice of optimism about elites' handling of AI. Here are some snippets from his "mainline" scenario for AI development:

It becomes fairly clear some time in advance, perhaps years, that broadly human-competitive AGI will be available soon. As this becomes obvious, competent researchers shift into more directly relevant work, and governments and researchers become more concerned with social impacts and safety issues...

Call the point where the share of human workers is negligible point Y. After Y humans are very unlikely to maintain control over global economic dynamics---the effective population is overwhelmingly dominated by machine intelligences... This picture becomes clear to serious onlookers well in advance of the development of human-level AGI... [hence] there is much intellectual activity aimed at understanding these dynamics and strategies for handling them, carried out both in public and within governments.

Why should we expect the control problem to be solved? ...at each point when we face a control problem more difficult than any we have faced so far and with higher consequences for failure, we expect to have faced slightly easier problems with only slightly lower consequences for failure in the past.

As long as solutions to the control problem are not quite satisfactory, the incentives to resolve control problems are comparable to the incentives to increase the capabilities of systems. If solutions are particularly unsatisfactory, then incentives to resolve control problems are very strong. So natural economic incentives build a control system (in the traditional sense from robotics) which keeps solutions to the control problem from being too unsatisfactory.
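
Christiano's "control system" analogy can be sketched as a simple negative-feedback loop. The growth rate and response coefficient below are arbitrary assumptions, not taken from his document; the point is only that when effort flows into control in proportion to how unsatisfactory control currently is, the gap between capability and control stays a bounded fraction of capability rather than running away.

    # Toy negative-feedback dynamic (all numbers are illustrative assumptions).
    capability, control = 1.0, 1.0
    for step in range(50):
        capability *= 1.10                    # capability grows 10% per step
        gap = max(capability - control, 0.0)  # how unsatisfactory control is
        control += 0.5 * gap                  # corrective effort scales with the gap
    print(f"capability={capability:.1f}, control={control:.1f}, "
          f"gap fraction={(capability - control) / capability:.2f}")
    # The gap settles at a modest, roughly constant fraction of capability
    # (about 8% with these numbers) instead of blowing up.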

Christiano is no Pollyanna, however. In the same document, he outlines "what could go wrong," and what we might do about it.

Notes

1 I originally included another quote from Eliezer, but then I noticed that other readers on Less Wrong had elsewhere interpreted that same quote differently than I had, so I removed it from this post.

24 comments

comment by CarlShulman · 2013-06-16T00:03:27.718Z · LW(p) · GW(p)

So far, of course, the sampling here is very locally biased.

Think about nuclear technology. It evolved in a time of war... The probability that nuclear technology was going to arise at a time when we use it well rather than [for] destruction was low...

So far the nuclear record has turned out reasonably well, though; I'm not sure what the argument is here.

Unfortunately, Hanson does not explain his reasons for rejecting Miller's analysis.

One reason to discount arms race worries is increasing global peace and cooperation, as covered at length by Steven Pinker.

and that the efforts of MIRI are critical to the planet's survival.

I would certainly disclaim this as a likely possibility, speaking for myself.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2013-06-20T19:57:16.147Z · LW(p) · GW(p)

and that the efforts of MIRI are critical to the planet's survival.

I would certainly disclaim this as a likely possibility, speaking for myself.

Can't find this part in OP - has it been deleted?

Replies from: CarlShulman
comment by CarlShulman · 2013-06-20T22:37:24.386Z · LW(p) · GW(p)

See Luke's footnote 1.

comment by [deleted] · 2013-06-15T22:27:15.193Z · LW(p) · GW(p)

Former GiveWell researcher Jonah Sinick has expressed optimism on the issue:

  1. In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking.

Argh. This is a myth. (It is especially frustrating in its most dire form, when it's implied that the concern was chemical ignition, i.e. ordinary fire. The concern was actually about nuclear ignition. Other retellings often say "set fire to" which implies ordinary fire.)

See Wikipedia's article on the Trinity test, which links to Report LA-602, "Ignition of the Atmosphere With Nuclear Bombs", where this was carefully analyzed.

Replies from: gwern, CarlShulman, army1987
comment by gwern · 2013-06-16T01:05:56.654Z · LW(p) · GW(p)

http://lesswrong.com/lw/rg/la602_vs_rhic_review/

Replies from: None
comment by [deleted] · 2013-06-16T02:31:36.257Z · LW(p) · GW(p)

I had totally forgotten about that post. (Probably because I learned about the history of the Manhattan Project long before reading anything from EY.) Thanks for the reminder.

comment by CarlShulman · 2013-06-15T23:56:45.738Z · LW(p) · GW(p)

There were calculations done before the bomb was tested, which confirmed people's strong priors against an atmospheric ignition effect. But the report is dated August 1946, after the first nuclear tests and the destruction of Hiroshima and Nagasaki. Presumably the report is elaborating on the earlier calculations, but the analysis before the first nuclear detonations is more important than the analysis after.

Replies from: None
comment by [deleted] · 2013-06-16T02:37:59.884Z · LW(p) · GW(p)

I suspect, without any evidence, that the analysis was carried out to a sufficient extent to convince all of the physicists involved, saving the formal writeup for later. There was a war on, you know.

Nowadays, as I understand it, most areas of science are carried out through informal circulation of preprints long before papers are formally published for the record. I imagine the same thing went on at Los Alamos, especially given the centralization of that community.

Replies from: CarlShulman
comment by CarlShulman · 2013-06-16T02:41:39.014Z · LW(p) · GW(p)

I suspect, without any evidence, that the analysis was carried out to a sufficient extent to convince all of the physicists involved, saving the formal writeup for later.

Based on reading about the Teller story, yes, the calculations were enough to settle the issue internally. Indeed, others thought it a bit silly in the first place. The procedure clearly had a very low expected failure rate in general, and the danger had a low prior. OTOH, it's not clear what probability "convincing" translated into.

In any case, those calculations eliminated most of the subjective expected value of atmospheric nuclear ignition risk, as asteroid searches have eliminated most of the expected value of asteroid extinction risk.

comment by A1987dM (army1987) · 2013-06-16T17:08:03.886Z · LW(p) · GW(p)

(It is especially frustrating in its most dire form, when it's implied that the concern was chemical ignition, i.e. ordinary fire. The concern was actually about nuclear ignition. Other retellings often say "set fire to" which implies ordinary fire.)

I'm peeved by the fact that stellar nucleosynthesis processes are usually called “burning” rather than “fusion”, BTW.

Replies from: None, None
comment by [deleted] · 2013-06-16T20:02:51.266Z · LW(p) · GW(p)

Stars are awesome (in the old-school, non-diluted sense), which naturally makes it tempting to use more evocative language when talking about them. And you could think of 'burning' in such usage as referring more to incandescence than to rapid oxidation.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T19:14:49.913Z · LW(p) · GW(p)

I was going to object to the idea that “fusion” isn't evocative enough, but I guess that whoever first named stellar nucleosynthesis “burning” hadn't been exposed to Dragon Ball Z, Gillette advertising and repeated claims that “thirty years from now” fusion power will solve all of our problems.

(And, of course, the real question here is whether Jews are allowed to operate fusion reactors on Shabbat. ;-))

comment by [deleted] · 2013-06-16T20:33:18.171Z · LW(p) · GW(p)

Don't look at this Wikipedia article, or your head will explode.

Replies from: army1987
comment by A1987dM (army1987) · 2013-06-16T20:36:59.285Z · LW(p) · GW(p)

For some reason that doesn't bother me as much.

comment by BerryPick6 · 2013-06-15T22:27:36.735Z · LW(p) · GW(p)

Why on earth downvote this? (Was at -2)

comment by John_Maxwell (John_Maxwell_IV) · 2013-06-17T06:51:35.695Z · LW(p) · GW(p)

Does Eliezer assign lots of probability mass to a particular failure mode or does he have his probability mass fairly evenly spread across many failure modes? His answer seems a bit overconfident to me for a question that involves the actions of squishy and unpredictable humans.

Sergey Brin, an apparently smart person who has met politicians (unlike anyone quoted here?), says the ones he has met are "invariably thoughtful, well-meaning people" whose main problem is the fact that "90% of their effort seems to be focused on how to stick it to the other party". So it could matter a lot how the issue ends up getting framed. What are the issues that the government seems to deal with most intelligently, and how can we make FAI end up getting treated like those issues?

  • Nate Silver's book discusses the work of government weather forecasters, earthquake researchers, and disease researchers and seems to give them positive reviews.

  • Some publicly funded universities do important and useful research.

  • My dad told me a story about a group of quants hired by the city of New York to develop a model of what buildings needed visits from the city fire inspector, with impressive results. Here's an article I found while trying to track down that anecdote. (Oh wait, maybe this is it?)

  • I like the Bureau of Labor Statistics' Occupational Outlook Handbook, but it's hard to know how accurate it is.

Does anyone have more examples? Note that I've reframed the problem from "elites and AI" to "government and AI"; not sure if that's a good reframing.

One guess: make the part of the government concerned with AI a boring government department staffed with PhDs in a technical subject (math?) for which working in that department is one of the best careers available. IMO, the intelligence of government workers probably matters more than the fact that they are government workers, and there are factors that determine this. This strategy (a variant of "can't beat 'em? join 'em") would probably end up working better in a scenario where there is no "AI Sputnik moment". (BTW, can we expect politicians to weigh the opinions of government experts over those of experts who have private-sector or nonprofit jobs?)

Here's an interesting government department. I wonder how hard it would be for a few of us Less Wrong users to get jobs there?

comment by Shmi (shminux) · 2013-06-15T21:24:02.419Z · LW(p) · GW(p)

Scott Aaronson seems to be notoriously optimistic and has reiterated his opinion fairly recently.

Replies from: lukeprog, CarlShulman
comment by lukeprog · 2013-06-15T22:59:35.159Z · LW(p) · GW(p)

Where does he express optimism about elites' handling of AGI? In that post, he seems to just be saying "AGI is probably many centuries away, and I don't see much we can knowably do about it so far in advance."

Replies from: shminux
comment by Shmi (shminux) · 2013-06-16T00:05:51.725Z · LW(p) · GW(p)

Right, and if it's that slow, there is plenty of time for it to be noticed and mitigated; see Adams' Law of Slow-Moving Disasters.

Replies from: lukeprog
comment by lukeprog · 2013-06-16T00:26:56.656Z · LW(p) · GW(p)

That's a great link! But does Aaronson express this view anywhere?

comment by CarlShulman · 2013-06-15T23:51:12.153Z · LW(p) · GW(p)

In past conversations he has been rather pessimistic about climate change collapsing civilization via nuclear war.

comment by HungryHobo · 2013-06-18T09:20:04.017Z · LW(p) · GW(p)

There seems to be an implicit assumption that superhuman AI will be some sort of sudden, absolute thing.

Why?

If I were to guess I'd say that the most likely course is one of gradual improvement for quite some time, more similar to the development of airplanes than the development of the atomic bomb.

If you handed modern bombers and the tech to support them to one of the sides in the First World War, then you can be sure they'd have won pretty quickly. And there was investment in flight, and it was useful. But early planes were slow, fault-prone, terrible as weapons platforms, etc.

We might very well see AI develop slowly, with roadblocks every few years or decades that halt or slow advancement for a while until some solution is found.

I guess it's down to whether you assume that the difficulty of increasing intelligence is exponential or linear.

If each additional IQ point (for want of a better measure) gets harder to add than the last, then even with a cycle of self-improvement you're not automatically going to get a god.

We might even see intelligence augmentation keeping pace with AI development for quite some time.

comment by ESRogs · 2013-06-16T18:05:20.632Z · LW(p) · GW(p)

"What kind of competitive or political system would make fragmented squabbling AIs safer than an attempt to get the monolithic approach right?" by saying "the answer is, 'None.' It's like asking how you should move your legs to walk faster than a jet plane" — again, implying extreme skepticism that political elites will manage AI properly.

It seems like a bit of a non sequitur to go from competing AIs being unsafe to elites not managing AI properly. Was there meant to be an additional clause explaining that elites would tend to favor multiple competing AIs over a single monolithic one? (Perhaps this was an artifact of the deletion of the quote referenced in the footnote?)

comment by Sorlaize · 2013-06-19T21:15:06.442Z · LW(p) · GW(p)

If you've read Guy McPherson's best articles, you will know AI/robots/machinery is pure fantasy at this point because of climate change alone. http://guymcpherson.com/2013/01/climate-change-summary-and-update/