AGI and Mainstream Culture

post by madhatter · 2017-05-21T08:35:12.656Z

Hi all,

So, as you may know, a recent episode of Doctor Who, "Smile", was about a misaligned AI trying to maximize smiles (ish). And the latest, "Extremis", was about an alien race that instantiated conscious simulations to test battle strategies for invading the Earth, of which the Doctor was a subroutine.

I thought the common thread of AGI was notable, although I'm guessing it's just a coincidence. More seriously, though, this ties in with an argument I thought of and want to know your take on:

If we want to avoid an AI arms race, so that safety research has more time to catch up to AI progress, then we would want to prevent these issues, if at all possible, from becoming more mainstream. The reason is that if AGI in the public perception becomes dissociated from Terminator (i.e. laughable, nerdy, and unrealistic) and starts to look like a serious whoever-makes-this-first-can-take-over-the-world situation, then we will get an arms race sooner.

I'm not sure I believe this argument myself. For one thing, being more mainstream has the benefit of attracting more safety research talent, government funding, etc. But maybe we shouldn't be spreading awareness without thinking this through some more.

10 comments

comment by lukeprog · 2017-05-23T19:28:54.174Z · LW(p) · GW(p)

Thanks for briefly describing those Doctor Who episodes.

comment by siIver · 2017-05-21T18:32:41.620Z · LW(p) · GW(p)

This just seems like an incredibly weak argument to me. A) It seems to me that the amount of safety research done ahead of time will be influenced much more than the probability of an arms race, because the former is more directly linked to public perception; B) we're mostly trying to spread awareness of the risk, not the capability; and C) how do we even know that more awareness at the top political levels would lead to a higher probability of an arms race, rather than a higher probability of international cooperation?

I feel like raising awareness has a very clear and fairly safe upside, while the downside is highly uncertain.

Replies from: whpearson
comment by whpearson · 2017-05-21T21:59:30.474Z · LW(p) · GW(p)

Why do you think this time is different from the nuclear arms race? The Federation of Atomic Scientists didn't prevent it. It only slackened because Russia ran out of steam.

Replies from: siIver
comment by siIver · 2017-05-22T18:47:38.754Z · LW(p) · GW(p)

I guess it's a legitimate argument, but it doesn't have the research aspect, and it's a sample size of one.

Replies from: whpearson
comment by whpearson · 2017-05-22T21:49:49.016Z · LW(p) · GW(p)

(Un)luckily we don't have many examples of potentially world-destroying arms races, so we might have to adopt the inside view. We'd have to look at how much mutual trust and co-operation there currently is on various things, which is beyond my current knowledge.

On the research aspect: I think research can be done without the public having a good understanding of the problems, e.g. CERN or CRISPR. I can also think of other bad outcomes of the public having an understanding of AI risk. It might be used as another stick to take away freedoms; see the wars on terrorism and drugs for examples of what the public's fears can justify.

Convincing the general public of AI risk seems like shouting fire in a crowded movie theatre: it is bound to have a large and chaotic impact on society.

This is the best steelman of the argument that I can think of at the moment. I'm not sure I'm convinced, but I do think we should put more brainpower into this question.

Replies from: siIver
comment by siIver · 2017-05-23T16:32:14.806Z · LW(p) · GW(p)

That sounds dangerously like justifying inaction.

Literally speaking, I don't disagree. It's possible that spreading awareness has a net negative outcome; it's just not likely. I don't discourage looking into the question, and if the facts start pointing the other way I can be convinced. But while we're still vaguely uncertain, we should act on what seems more likely right now.

Replies from: whpearson
comment by whpearson · 2017-05-23T17:56:00.158Z · LW(p) · GW(p)

I would never argue for inaction. I think this line of thinking would argue for making efforts to educate AGI researchers while, in the most extreme case, making no effort to educate everyone else.

But yep, we may as well carry on as we are for the moment.

comment by [deleted] · 2017-05-22T20:49:33.470Z · LW(p) · GW(p)

You cannot avoid an AI race unless all developed countries come to an agreement to stop all AI development, and the chances of that happening are too low. Most probably some military projects are far beyond the publicly known ones. However, that does not mean we cannot help or affect the situation.

comment by Lumifer · 2017-05-22T02:19:45.626Z · LW(p) · GW(p)

"we would want to prevent, if at all possible, these issues from becoming more mainstream"

Let's start with capabilities. Do you (or, more generally, the LW crowd) have the capability to significantly affect the mainstream perception of AI and its risks? If not (and I think not), then you're off to fantasy land, and the wanderings of your imagination are not relevant to reality.