[Link] Max Tegmark and Nick Bostrom Speak About AI Risk at UN International Security Event

post by Gram_Stone · 2015-10-13T23:25:26.811Z · LW · GW · Legacy · 10 comments


http://webtv.un.org/watch/chemical-biological-radiological-and-nuclear-cbrn-national-action-plans-rising-to-the-challenges-of-international-security-and-the-emergence-of-artificial-intelligence/4542739995001

Exactly what it says on the tin. It starts around 1:50:00. I had trouble buffering for some reason, and I couldn't find it on YouTube.

EDIT: It's on YouTube now: https://www.youtube.com/watch?v=W9N_Fsbngh8

10 comments


comment by Gunnar_Zarncke · 2015-10-14T06:51:26.918Z · LW(p) · GW(p)

The buffering issues seem to affect only the video, not the audio.

Nick Bostrom's part starts at 2:15.

Indeed, as it says on the tin. It's interesting to notice how they frame AI risk in terms of CBRN, and especially how they use robotics and drones as a springboard.

They will organize an AI master class later in the year in the Netherlands.

Replies from: Gram_Stone
comment by Gram_Stone · 2015-10-14T17:45:31.866Z · LW(p) · GW(p)

They will organize an AI master class later in the year in the Netherlands.

What does that mean?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2015-10-14T20:50:05.917Z · LW(p) · GW(p)

I guess something like this about AI risk. It was mentioned briefly in the video and I thought it might interest somebody (as did this one).

Replies from: Gram_Stone
comment by Gram_Stone · 2015-10-14T21:19:47.595Z · LW(p) · GW(p)

It is interesting. Something like that seems like it would be extremely high-impact.

I can't believe how awareness has shot up. I thought it would remain a phenomenon in popular culture; now the global elite is taking it seriously and is being educated about the problem at various levels of detail.

comment by username2 · 2015-10-14T04:29:44.043Z · LW(p) · GW(p)

To be fair, it seems that these days almost anyone can speak before some kind of UN panel.

Replies from: Thomas
comment by Thomas · 2015-10-15T09:22:34.968Z · LW(p) · GW(p)

Which is good. The last thing I want is for the UN to mess with AI. So, if it is just another UN panel, I don't have to worry.

Replies from: Soothsilver
comment by Soothsilver · 2015-10-15T17:49:45.424Z · LW(p) · GW(p)

Nick Bostrom seems to think it's useful to alert the UN to AI. The UN is currently the best institution we have for handling international cooperation.

Replies from: Soothsilver
comment by Soothsilver · 2015-10-16T22:44:36.166Z · LW(p) · GW(p)

I'm toning back my response. Nick had said previously that he didn't think policy action could help the field of AI right now (except perhaps by providing more funding), but that it could help prevent other existential risks. There still has to be some reason why he gave this talk, though.

comment by Gram_Stone · 2015-10-16T15:55:02.456Z · LW(p) · GW(p)

Added a YouTube link to the OP.

comment by m9099520231 · 2015-10-15T14:21:55.031Z · LW(p) · GW(p)

We must never create strong AI, because it would have no guarantee of death.

There is a possibility that it would be tormented forever due to a bug and HPC.

And needless to say, we must never upload our minds to computers.