Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

post by Dr_Manhattan · 2014-04-21T16:55:56.240Z · LW · GW · Legacy · 28 comments

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet:

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With some thought I think we could have made the list more inclusive and found a better publication; still, I think this is pretty huge.

 

28 comments

Comments sorted by top scores.

comment by torekp · 2014-04-21T21:26:32.076Z · LW(p) · GW(p)

Hawking/Russell/Tegmark/Wilczek:

If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI.

Nice.

Replies from: Squark, lukeprog, None
comment by Squark · 2014-04-22T18:52:28.891Z · LW(p) · GW(p)

Actually, in the alien civilization scenario we would already be screwed: there wouldn't be much that could be done. This is not the case with AI.

Replies from: Mestroyer
comment by Mestroyer · 2014-04-23T19:13:05.385Z · LW(p) · GW(p)

If a few decades is enough to make an FAI, we could build one and either have it deal with the aliens, or have it upload everyone, put them in static storage, and send a few von Neumann probes to galaxies that will soon be outside the aliens' cosmological horizon, moving faster than it would be economical for the aliens to send probes to catch us if they are interested in maximum spread rather than maximum speed.

Replies from: Squark
comment by Squark · 2014-04-23T19:22:53.853Z · LW(p) · GW(p)

It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs" much older and therefore more powerful.

Regarding probes to extremely distant galaxies: it might theoretically work, depending on the economics of space colonization. We would survive at the cost of losing most of the potential colonization space. Neat.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-23T20:37:34.863Z · LW(p) · GW(p)

It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs" much older and therefore more powerful.

This needs unpacking of "deal with". A FAI is still capable of optimizing a "hopeless" situation better than humans, so if you focus on optimizing and not satisficing, it doesn't matter if the absolute value of the outcome is much less than without the aliens. Considering this comparison (value with aliens vs. without) is misleading, because it's a part of the problem statement, not of a consequentialist argument that informs some decision within that problem statement. FAI would be preferable simply as long as it delivers more expected value than alternative plans that would use the same resources to do something else.

Apart from that general point, it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away (or take away without hurting its value) even if the opponent is a superintelligence that spent aeons working on this problem (analogy with modern cryptography, where defense wins against much stronger offense), in which case a FAI would have something to bargain with.

Replies from: Squark
comment by Squark · 2014-04-23T21:55:03.219Z · LW(p) · GW(p)

A FAI is still capable of optimizing a "hopeless" situation better than humans...

This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.

...it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away...

Hmm, what about the following idea: the FAI could threaten the aliens with somehow consuming a large portion of the free energy in the solar system. Assuming the 2nd law of thermodynamics is watertight, it would be profitable for them to leave us a significant share (1/2?) of that portion. Essentially it's the Ultimatum game. The negotiation could be done acausally, assuming each side has sufficient information about the other.

Thus we remain a small civilization but survive for a long time.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2014-04-23T22:45:04.496Z · LW(p) · GW(p)

For example, a Neanderthal is a much better optimizer than a fruit fly, but both are almost equally powerless against an H-bomb.

There is no reason to expect exact equality, only close similarity. If you optimize, you still prefer something that's a tiny bit better to something that's a tiny bit worse. I'm not claiming that there is a significant difference. I'm claiming that there is some expected difference, all else equal, however tiny, which is all it takes to prefer one decision over another. In this case, a FAI gains you as much difference as available, minus the opportunity cost of FAI's development (if we set aside the difficulty in predicting success of a FAI development project).

(There are other illustrations I didn't give for how the difference may not be "tiny" in some senses of "tiny". For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history. This is large compared to the value of millions of human lives, tiny compared to the value of uncontested future light cone.)

(I wouldn't give a Neanderthal as a relevant example of an optimizer, as the abstract argument about FAI's value is scrambled by the analogy beyond recognition. The Neanderthal in the example would have to be better than the fly at optimizing fly values (which may be impossible to usefully define for flies), and have enough optimization power to render the difference in bodies relatively morally irrelevant, compared to the consequences. Otherwise, the moral difference between their bodies is a confounder that renders the point of the difference in their optimization power, all else equal, moot, because all else is now significantly not equal.)

Replies from: Squark
comment by Squark · 2014-04-24T09:57:51.798Z · LW(p) · GW(p)

...a FAI gains you as much difference as available, minus the opportunity cost of FAI's development...

Exactly. So for building FAI to be a good idea we need to expect its benefits to outweigh the opportunity cost (we can spend the remaining time "partying" rather than developing FAI).

For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history.

Neat. One way it might work is the FAI running much-faster-than-realtime WBEs, so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.

comment by Vladimir_Nesov · 2014-04-23T23:02:53.002Z · LW(p) · GW(p)

Thus we remain a small civilization but survive for a long time.

It's not obvious that having a long time is preferable. For example, optimizing a large amount of resources in a short time might be better than optimizing a small amount of resources for a long time. Whatever's preferable, that's the trade that a FAI might be in a position to facilitate.

comment by lukeprog · 2014-04-22T00:51:05.478Z · LW(p) · GW(p)

Just FYI, that analogy is originally due to Russell specifically, according to an interview I saw with Norvig.

comment by [deleted] · 2014-04-22T00:44:57.047Z · LW(p) · GW(p)

Eeeehhhhh... it's not that surprising when you consider that billions of people really, truly believe in a form of divine-command moral realism that implies universally compelling arguments.

It is, however, worrying.

comment by Lumifer · 2014-04-21T18:00:12.717Z · LW(p) · GW(p)

I'm not sure that being on HuffPo is a win. Unless you want to mingle in the company of SHOCKING celebrity secrets and 17 Nightmare-Inducing Easter Photos You Can't Unsee, that is...

Replies from: Nornagest, Vika
comment by Nornagest · 2014-04-21T18:17:25.355Z · LW(p) · GW(p)

Hard to find a major news site these days that isn't paywalled and hasn't hopped on the clickbait train, unfortunately. I'm hoping the fad will pass once these people realize that they're spending credibility faster than they get it back, but I'm not expecting it to happen anytime soon.

Replies from: David_Gerard, Lumifer
comment by David_Gerard · 2014-04-21T22:15:41.359Z · LW(p) · GW(p)

What This Superintelligent Computer Will Do To Your Species Will Completely Blow Your Mind! Literally.

Replies from: None
comment by [deleted] · 2014-04-22T00:49:00.078Z · LW(p) · GW(p)

You know, I'm fairly sure being paper-clipped will merely destroy my mind, rather than blow it. Oh well, got anything Friendly?

comment by Lumifer · 2014-04-21T18:23:21.096Z · LW(p) · GW(p)

Maybe that's because the whole concept of a "major news site" isn't looking good nowadays.

comment by Vika · 2014-06-13T18:04:52.109Z · LW(p) · GW(p)

The Independent ran pretty much the same article.

comment by somervta · 2014-04-22T04:40:14.754Z · LW(p) · GW(p)

This has been circulating among the tumblrers for a little bit, and I wrote a quick infodump on where it comes from.

TL;DR: The article comes from (is co-authored by) a brand-new x-risk organization founded by Max Tegmark and four others, with all of the authors drawn from its scientific advisory board.

comment by raisin · 2014-04-21T17:44:15.110Z · LW(p) · GW(p)

I decided to post this with a catchy title (edit: in retrospect that title doesn't put nearly enough emphasis on the danger aspect) to a bunch of subreddits to get this more recognition. Asking for upvotes is not allowed, so do with this information as you wish.

Submission on /r/technology

Submission on /r/TrueReddit

Submission on /r/Futurology

comment by asr · 2014-04-25T16:59:37.087Z · LW(p) · GW(p)

It's a big deal. In particular, I was startled to see Russell signing it. I don't put much weight on the physicists, who are well outside their area of expertise. But Russell is a totally respectable guy and this is exactly his nominal area. I interacted with him a few times as a student and he impressed me as a smart, thoughtful guy who wasn't given to pet theories.

Has he ever stopped by MIRI to chat? The Berkeley CS faculty are famously busy, but I'd think if he bothers to name-check MIRI in a prominent article, he'd be willing to come by and have a real technical discussion.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-04-26T00:38:50.800Z · LW(p) · GW(p)

I don't know, but I found his omission of MIRI in this interview (found via lukeprog's FB) surprising http://is.gd/Dx0lw0

Replies from: IlyaShpitser
comment by IlyaShpitser · 2014-04-30T10:00:10.068Z · LW(p) · GW(p)

It's not surprising to me at all; I think you might have an inflated opinion of MIRI. MIRI has no mainstream academic status, and isn't getting more any time soon.

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2014-04-30T13:09:58.132Z · LW(p) · GW(p)

Not sure if you're saying I have an inflated opinion of MIRI or of MIRI's status. If it's the former, my own opinion, FWIW, is that what MIRI lacks in academic status it more than makes up for by (initially) being the only org doing reasonably productive work in the area of safety research.

More specifically, AIMA mentions Friendly AI, and Yudkowsky by name, which is why I found the omission somewhat surprising.

comment by gjm · 2014-04-22T12:45:31.070Z · LW(p) · GW(p)

If the quotation from their placeholder website on somervta's tumblr is to be believed, it's a "sister site" of fqxi.org. This worries me a little -- FQXi is funded by the John Templeton Foundation, which has its own agenda, and one I don't much care for. Is FLI also Templeton-funded? I'm not aware that Templeton has had any particular malign influence on FQXi, though, and the people at this new organization don't seem to have been cherry-picked for (e.g.) religion-friendliness, so maybe it's OK.

Replies from: Vika
comment by Vika · 2014-06-13T18:00:44.348Z · LW(p) · GW(p)

No, FLI has nothing to do with the Templeton Foundation. The website was a "sister site" of the FQXi site, because both organizations are run by Max, and he wanted to keep the same web platform for simplicity.

Replies from: gjm
comment by gjm · 2014-06-13T20:13:41.435Z · LW(p) · GW(p)

That's encouraging. Thanks for the information!

comment by AlexMennen · 2014-04-22T03:01:42.335Z · LW(p) · GW(p)

This article was pretty lacking in actual argument. I feel like if I hadn't already been concerned about AI risk, reading that wouldn't have changed my mind. I guess the fact that the authors are pretty high-powered authority figures makes it still somewhat significant, though.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2014-04-22T19:23:44.668Z · LW(p) · GW(p)

The argument is simply an argument from authority. What more could you reasonably expect in six paragraphs, given the mega-sized inferential distance between physicists/computer scientists and the readers of the Huffington Post?