[LINK] Stephen Hawking warns of the dangers of AI

post by Salemicus · 2014-12-02T15:22:58.849Z · LW · GW · Legacy · 15 comments


From the BBC:

[Hawking] told the BBC: "The development of full artificial intelligence could spell the end of the human race."

...

"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

There is, however, no mention of Friendly AI or similar principles.

In my opinion, what makes this particularly notable is the coverage the story is getting in the mainstream media. At the time of writing, it is the most-read and most-shared news story on the BBC website.

15 comments

Comments sorted by top scores.

comment by MichaelHoward · 2014-12-02T19:18:02.544Z · LW(p) · GW(p)

It was also on the BBC TV main evening news today, and BBC News 24.

Edit: more from them here: http://www.bbc.co.uk/news/technology-30293863

Replies from: Salemicus
comment by Salemicus · 2014-12-03T10:05:19.727Z · LW(p) · GW(p)

Thank you for posting this additional story. That one is particularly good to see because it mentions Bostrom, and talks about Friendliness in AI (though not by that name).

comment by tim · 2014-12-03T02:23:29.022Z · LW(p) · GW(p)

Just spitballing, but I would guess that this type of coverage is a net benefit due to the level of exposure and subsequent curiosity generated by "holy crap what is this thing that could spell doom for us all?" That is, it seems like singularitarian ideas need as much exposure as possible (any press is good press) and are a long way away from worrying about anti-AI picketers. Am I off here?

Replies from: MathiasZaman, John_Maxwell_IV
comment by MathiasZaman · 2014-12-03T09:14:26.405Z · LW(p) · GW(p)

I think you're correct. Ideas like AGI are mostly unknown by the general public and anything that can make someone curious about that cluster of ideas is probably a good thing.

comment by John_Maxwell (John_Maxwell_IV) · 2014-12-04T02:50:18.565Z · LW(p) · GW(p)

What's the causal pathway by which coverage like this improves things? If we want technical expertise or research funding, it seems like there are more targeted channels. This could be optimal if we want to make some kind of political move though. What else?

comment by metatroll · 2014-12-03T06:24:13.280Z · LW(p) · GW(p)

Hawking is right, artificial intelligence really can spell the end of the human race.

comment by byerley · 2014-12-03T09:35:11.611Z · LW(p) · GW(p)

There is perhaps no better man to alert the mainstream to the possibilities and dangers of AI. His comments have no doubt encouraged many people to look into this area, and some of those people may be capable of helping create Friendly AI in the future. In my opinion Stephen Hawking believed making these comments was for the greater good of society, and I tend to agree with him.

comment by Artaxerxes · 2014-12-03T07:34:57.835Z · LW(p) · GW(p)

This story was picked up by the ABC in Australia, on radio, free-to-air TV and online.

comment by Furcas · 2014-12-03T16:26:44.711Z · LW(p) · GW(p)

All of these high status scientists speaking out about AGI existential risk seldom mention MIRI or use their terminology. I guess MIRI is still seen as too low status.

Replies from: None, Gondolinian
comment by [deleted] · 2014-12-04T07:39:02.328Z · LW(p) · GW(p)

There has certainly been increased general media coverage lately, and MIRI was mentioned in the Financial Times recently.

comment by Gondolinian · 2014-12-03T16:41:00.385Z · LW(p) · GW(p)

All of these high status scientists speaking out about AGI existential risk seldom mention MIRI or use their terminology.

Perhaps they do, but the journalists or their editors edit it out?

comment by Gondolinian · 2014-12-02T16:28:11.563Z · LW(p) · GW(p)

It also mentions Elon Musk:

In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

To be fair though, this is in the article too:

But others are less pessimistic.

"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.

[...]

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.

But he is betting that AI is going to be a positive force.

comment by ZankerH · 2014-12-03T18:18:48.300Z · LW(p) · GW(p)

For a glimpse at how "ordinary people" react to such claims, go be horrified at the comments to the same article at /r/futurology.

Replies from: MathiasZaman
comment by MathiasZaman · 2014-12-03T19:13:49.851Z · LW(p) · GW(p)

/r/Futurology is horrible in general, but gets even worse when talking about AGI.

comment by cameroncowan · 2014-12-07T09:02:03.160Z · LW(p) · GW(p)

I think getting to a friendly AI is very hard. I trust his assessment and I think we have to be very careful with the development of AI.