AI risk: the five minute pitch

post by Stuart_Armstrong · 2012-05-08T16:28:09.359Z · LW · GW · Legacy · 5 comments

I gave a talk at the 25th Oxford Geek Night, in which I had five minutes to present the dangers of AI. The talk is now online. Though it doesn't contain anything people at Less Wrong would find new, I feel it does a reasonable job of pitching some of the arguments in a very brief format.

5 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2012-05-08T17:46:35.815Z · LW(p) · GW(p)

I liked your 5-min summary. Pretty good job, I'd say.

A couple of nitpicks: you mentioned that the reasons why AI can be bad are "technical" and "complicated", while showing a near-empty slide. I don't think that makes a convincing impression. Later on, you mentioned the "utility function", which stretches the inferential distance a bit too far. Your last example, tiling the universe with smiling faces, seemed to fall flat; a sentence or two would probably fill the gap. In general, the audience's reaction shows quite clearly what worked and what did not.

Replies from: Stuart_Armstrong, Stuart_Armstrong
comment by Stuart_Armstrong · 2012-05-08T17:47:40.352Z · LW(p) · GW(p)

The "tilling the universe" actually worked, as I remember - the audience did react well, just not audibly enough.

comment by Stuart_Armstrong · 2012-05-09T09:45:30.568Z · LW(p) · GW(p)

PS: thanks for your advice, btw

comment by timtyler · 2012-05-09T10:27:40.997Z · LW(p) · GW(p)

The best summary I can give here is that AIs are expected to be expected utility maximisers that completely ignore anything which they are not specifically tasked to maximise.

Counterexample: incoming asteroid.

Replies from: BlackNoise
comment by BlackNoise · 2012-05-09T13:21:15.970Z · LW(p) · GW(p)

I thought utility maximizers were allowed to make the inference "Asteroid impact -> reduced resources -> low utility -> action to prevent that from happening", which is part of why AI is so dangerous: "Humans may interfere -> humans in power is low utility -> action to prevent that from happening".

They ignore everything but what they're maximizing in the sense that they follow the letter of the code rather than its spirit, all the way to potentially brutal (for humans) conclusions.
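
That inference chain can be made concrete with a toy model. The sketch below is purely illustrative (the paperclip objective, action names and payoff numbers are all invented): an expected-utility maximiser scored only on paperclip output still deflects the asteroid, because the impact would reduce the resources it can convert into paperclips, and by the same letter-of-the-code logic it prevents humans from switching it off.

```python
# Toy sketch (all names and payoff numbers invented) of an expected-utility
# maximiser that values only paperclip output. It "ignores" asteroids and
# humans as terminal values, yet still acts on both, because they affect
# how many paperclips it can produce.

def utility(resources: float) -> float:
    # Utility depends solely on resources convertible into paperclips.
    return resources

def choose_action(outcomes: dict[str, float]) -> str:
    # Pick whichever available action leads to the highest expected utility.
    return max(outcomes, key=lambda action: utility(outcomes[action]))

# Hypothetical scenario 1: an incoming asteroid would destroy resources.
ASTEROID = {
    "do_nothing": 10.0,        # impact wipes out most usable resources
    "deflect_asteroid": 90.0,  # deflection costs a little, saves the rest
}

# Hypothetical scenario 2: humans might switch the maximiser off.
INTERFERENCE = {
    "allow_shutdown": 0.0,         # no further paperclips at all
    "prevent_interference": 95.0,  # keeps maximising, whatever that takes
}

print(choose_action(ASTEROID))      # -> "deflect_asteroid"
print(choose_action(INTERFERENCE))  # -> "prevent_interference"
```

Nothing in the toy utility function mentions asteroids or humans at all; both responses fall out of plain maximisation over consequences, which is the instrumental point being made above.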