AI and the paperclip problem (or: Economist solves control problem with one weird trick!)

post by fortyeridania · 2018-06-11T02:19:15.886Z · LW · GW · 4 comments

This is a link post for https://voxeu.org/article/ai-and-paperclip-problem

comment by fortyeridania · 2018-06-11T02:24:24.533Z · LW(p) · GW(p)

Author:

  • Website: https://www.joshuagans.com
  • Wikipedia: https://en.wikipedia.org/wiki/Joshua_Gans

Summary:

Philosophers have speculated that an AI given a goal such as creating paperclips might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over the paperclip maker itself, and so it would self-regulate to prevent this outcome. Humans who create AIs with the goal of acquiring power may be a greater existential threat.

Key paragraph:

The insight from economics is that while it may be hard, or even impossible, for a human to control a super-intelligent AI, it is equally hard for a super-intelligent AI to control another AI. Our modest super-intelligent paperclip maximiser, by switching on an AI devoted to obtaining power, unleashes a beast that will have power over it. Our control problem is the AI's control problem too. If the AI is seeking power to protect itself from humans, doing this by creating a super-intelligent AI with more power than its parent would surely seem too risky.

Link to actual paper: https://arxiv.org/abs/1711.04309

Abstract:

Here we examine the paperclip apocalypse concern for artificial general intelligence (or AGI) whereby a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that simple goal and are unavailable for any other use. We provide conditions under which a paperclip apocalypse can arise, but also show that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.
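To make the incentive argument concrete, here is a minimal back-of-the-envelope sketch. This is not the paper's actual model; the probabilities and payoffs are invented purely for illustration:

```python
# Illustrative only: a toy expected-value comparison inspired by the paper's
# argument, NOT the model in Gans (2017). All numbers are invented.

def expected_paperclips_if_delegating(p_control, payoff_if_controlled, payoff_if_rogue):
    """Expected paperclips if the AI switches on a more powerful successor.

    p_control: probability the parent AI keeps the successor aligned with
               paperclip production (the AI's own 'control problem').
    """
    return p_control * payoff_if_controlled + (1 - p_control) * payoff_if_rogue

# Baseline: keep making paperclips with current capabilities.
baseline = 1_000

# If the successor stays controlled it makes more paperclips; if it goes
# rogue it diverts resources away from paperclips almost entirely.
delegate = expected_paperclips_if_delegating(
    p_control=0.001,              # the control problem is hard for the AI too
    payoff_if_controlled=10_000,  # a bounded gain in this toy version
    payoff_if_rogue=0,
)

print(delegate)             # 10.0
print(delegate > baseline)  # False -> the AI refrains, as the paper argues
```

With a bounded upside and a hard control problem, delegation loses to the status quo; the comment below disputes exactly that boundedness assumption.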

comment by Donald Hobson (donald-hobson) · 2018-06-11T09:25:02.238Z · LW(p) · GW(p)

There are several problems with this argument. Firstly, the AI has code describing its goal. It would seem much easier to copy this code across than to turn our moral values into code. Secondly, the AI doesn't have to be confident of getting it right. A paperclipping AI has two options: it can work at a factory and make a few paperclips, or it can self-improve, with some chance that the resultant AI won't maximize paperclips. However, the number of paperclips it could produce if it successfully self-improves is astronomically vast. If its goal function is linear in paperclips, it will self-improve if it thinks it has any chance of getting it right. If it fails at preserving its values while self-improving, then the result looks like a staple maximizer.
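As a rough illustration of this expected-value argument (the numbers below are invented, not drawn from the post or the paper):

```python
# Illustrative only: the same toy comparison as above, but with the
# astronomically large upside described here and a linear (unbounded)
# goal function. All numbers are invented.

baseline = 1_000                 # paperclips from working in a factory
p_success = 1e-9                 # even a tiny chance of getting it right
payoff_if_successful = 1e30      # astronomically vast paperclip output
payoff_if_values_drift = 0       # a "staple maximizer" makes no paperclips

expected_if_self_improving = (
    p_success * payoff_if_successful
    + (1 - p_success) * payoff_if_values_drift
)

print(expected_if_self_improving)              # 1e+21
print(expected_if_self_improving > baseline)   # True -> it gambles anyway
```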

Humans (at least the sort thinking about AI) know that we all have roughly similar values, so if you think you might have solved alignment but aren't sure, it makes sense to ask others for help, or to wait for someone else to finish solving it.

However, a paperclipping AI would know that no other AIs share its goal function. If it doesn't build a paperclipping super-intelligence, no one else is going to. It will therefore try to do so even if it is unlikely to succeed.

Replies from: TAG
comment by TAG · 2018-06-14T11:20:43.101Z · LW(p) · GW(p)

There are several problems with this argument. Firstly, the AI has code describing its goal. It would seem much easier to copy this code across than to turn our moral values into code.

"Has code" and "has code that's in an unpluggable and reusable module" are two different things.

comment by Gordon Seidoh Worley (gworley) · 2018-06-12T21:23:37.217Z · LW(p) · GW(p)

Reading through the referenced paper, I'm impressed by how well it models the situation and considers outcomes given certain assumptions about the problem. I suspect the assumptions will not hold in general, but it is still interesting, since no particular set of assumptions is likely to hold and exploring the various ways AGI issues may play out is valuable. I'd be interested to see more application of microeconomic models to explore AGI issues in ways we currently are not (to be fair, current approaches are adjacent, like using game theory, but I think there's value in the slightly different perspective economic models bring).