Is "Control" of a Superintelligence Possible?

post by Mahdi Complex (mahdi-complex) · 2022-04-18T16:03:49.126Z · LW · GW · 13 comments

Contents

  1. Surrender to the Will of the Superintelligence
  2. Riding the Techno-Leviathan
13 comments

Epistemic status: confused.

I currently see two conceptualizations of an aligned post-AGI/foom world:

1. Surrender to the Will of the Superintelligence

Any ‘control’ in a world with a Superintelligence will have to be illusory. The creation of an AGI will be the last truly agentic thing humans do. A Superintelligence would be so far superior to any human or group of humans, and able to manipulate humans so well, that any “choice” humanity faces will be predetermined. If an AI understands you better than you understand yourself, can predict you better than you can predict yourself, and understands the world and human psychology well enough that it can bring about whatever outcome it wants, then any sense of ‘control’ – any description of the universe that puts humans in the driver’s seat – will be false.

This doesn’t mean that alignment is necessarily impossible. The humans creating the Superintelligence could still instil it with the goal of leaving the subjective human experience of free will intact. An aligned Superintelligence would still put humans into situations where the brain’s algorithm of deliberate decision making is needed, even if the choice itself, as well as the outcome, is ‘fake’ in some sense. The human experience of control should rank high in an aligned Superintelligence's utility function. But only a faint illusory glimmer of human choice would remain, while the open-ended, agentic power over the Universe would have passed out of humanity’s hands with the creation of the first Superintelligence. That’s why it’s so crucial to get the value-loading problem right on the first try.

2. Riding the Techno-Leviathan

The alternative view seems to be something like: it will be possible to retain human agency over a post-AGI world thanks to boxing, interpretability, oracle AI, or some other selective impairment scheme that would leave human agency unspoiled by AGI meddling. The goal appears to be either to limit an AI’s optimization power enough, or to insulate the algorithms of human decision making (brain, institutions of group decision making...) well enough, that humanity remains sovereign, or “in the loop,” in a meaningful sense, while also having the AI’s goals point towards the outcomes of human decision making.

This view implies that the integrity of human decision making can be maintained even in the face of an AGI’s optimization power.

I currently associate 1. more with MIRI/Superintelligence-style thinking, and 2. more with most other prosaic alignment schemes (Christiano, Olah…). 1. requires you to bite some very weird, unsettling, and hard-to-swallow bullets about the post-AGI world, while 2. seems to point towards a somewhat more normal future, though it might suffer from naivete and normalcy bias.

Am I understanding this correctly? Are there people holding a middle ground view?

13 comments

Comments sorted by top scores.

comment by Raemon · 2022-04-18T23:07:38.167Z · LW(p) · GW(p)

Existing post that's one piece of the answer to this:

https://www.lesswrong.com/posts/EZ8GniEPSechjDYP9/free-to-optimize [LW · GW]

Replies from: mahdi-complex
comment by AprilSR · 2022-04-19T07:39:08.655Z · LW(p) · GW(p)

I think either is technically possible with perfect knowledge - that is, I don’t think either option is so incoherent that you cannot make any logical sense of it.

This leaves the question of which is easier. (1) requires somehow getting a full precise description of the human utility function. I don’t fully understand the arguments against (2), though MIRI seems to be pretty confident there are large issues.

comment by JBlack · 2022-04-19T04:44:12.033Z · LW(p) · GW(p)

The main distinction seems to be how strongly these super-intelligent agents will use their power to influence human decision-making.

At one extreme end is total control, even in the most putatively aligned case: If my taking a sip of water from my glass at 10:04:22 am would be 0.000000001% better in some sense than sipping at 10:04:25 am, then it will arrange the inputs to my decision so that I take a sip of water at 10:04:22 am, and similarly for everything else that happens in the world. I do think that this would constitute a total loss of human control, though not necessarily a loss of human agency.

At the extreme other end would be something more like an Oracle, a superintelligent system (I hesitate to call it an agent) that has absolutely no preferences, including implied preferences, for the state of the world beyond some very narrow task.

Or to put it another way, how much slack will a superintelligence have in its implied preferences?

Concept 1 appears to be describing a superintelligence with no slack at all. Every human decision (and presumably everything else in the universe) must abide by a total strict order of preferences and it will optimize the hell out of those preferences. Concept 2 describes a superintelligence that may be designed to have - or be constrained to abide by - some slack in orderings of outcomes that depend upon human agency. Even if it can predict exactly what a human may decide, it doesn't necessarily have to act so as to cause a preferred expected distribution of outcomes.
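One rough way to make this notion of "slack" precise (an illustrative sketch; the notation is my addition, not from the comment): model the superintelligence's preferences as an ordering over outcomes, and ask whether that ordering ever leaves ties for humans to break.

\[ \text{No slack (strict total order): } \forall x \neq y,\ \ x \succ y \ \lor\ y \succ x \]
\[ \text{Slack (an indifference class left to human agency): } \exists S,\ |S| \ge 2,\ \ \forall x, y \in S,\ x \sim y \]

Under the first condition the AI always has a uniquely preferred outcome to steer towards; under the second, every outcome in S is equally acceptable to it, so which element of S actually obtains can be left to human decisions without costing the AI anything by its own lights.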

I don't really think that we can rationally hold strong beliefs about where a future superintelligence might fall in this spectrum, or even outside it in some manner that I can't imagine. I do think that the literal first scenario is infeasible even for a superintelligent agent, if it is constrained by anything like our current understanding of physical laws. I can imagine a superintelligence that acts in a manner that is as close to that as possible, and that this would drastically reduce human control even in the most aligned case.

comment by Donald Hobson (donald-hobson) · 2022-04-19T00:22:29.611Z · LW(p) · GW(p)

I think the answer is something that doesn't match either. I think both 1 and 2 are possible, but what we should probably go for is 3.

Suppose a superintelligent AI that has a goal involving giving humans as much actual control as possible.

and able to manipulate humans so well, that any “choice” humanity faces will be predetermined.

All our choices are technically predetermined, because the universe is deterministic. (Modulo unimportant quantum details) 

The AI could manipulate humans, but is programmed not to want to. 

This isn't an impairment scheme like boxing or oracle AI. This is the genie that listens to your wish without manipulating you, and then carries it out in the spirit in which you asked. If the human(s?) tell the AI to become a paperclip maximizer, the AI will. (Maybe with a little pop-up box, "It looks like you are about to destroy the universe, are you sure you want to do that?", to prevent mistakes.) And the humans are making that decision using brains that haven't been deliberately tampered with by the AI.

comment by tailcalled · 2022-04-18T16:31:58.703Z · LW(p) · GW(p)

I think it depends on the goals of the superintelligence. If it is optimized for leaving humans in control, then it could do so. However, if it is not optimized for leaving humans in control, then it would be an instrumentally convergent goal for it to take over control, and therefore it could be assumed to do so.

Replies from: mahdi-complex
comment by Mahdi Complex (mahdi-complex) · 2022-04-18T17:01:00.030Z · LW(p) · GW(p)

I'm just confused about what "optimized for leaving humans in control" could even mean. If a Superintelligence is so much more intelligent than humans that it could find a way, without explicit coercion, for humans to ask it to tile the universe with paperclips, then "control" seems like a meaningless concept. You would have to force the Superintelligence to treat the human skull, or whatever other boundary of human decision making, as some kind of inviolable and uninfluenceable black box.

Replies from: tailcalled, sil-ver
comment by tailcalled · 2022-04-20T07:20:18.495Z · LW(p) · GW(p)

This basically boils down to the alignment problem. We don't know how to specify what we want, but that doesn't mean it is necessarily incoherent.

Treating the human skull as "some kind of inviolable and uninfluenceable black box" seems to get you some of the way there, but of course it is problematic in its own ways (e.g. you wouldn't want delusional AIs). Still, it seems like it points to the path forwards in a way.

comment by Rafael Harth (sil-ver) · 2022-04-18T19:02:01.805Z · LW(p) · GW(p)

I think control is a meaningful concept. You could have AI that doesn't try to alter your terminal goals. Something that just does what you want (not what you ask, since that has well-known failure modes) without trying to persuade you into something else.

The difficulty of building such a system is another question, alas.

comment by Yitz (yitz) · 2022-04-19T17:44:12.881Z · LW(p) · GW(p)

Third option not considered here (though it may be fairly unlikely)—it may be the case that superintelligence does not provide a substantial enough advantage to be able to control much of humanity, due to implications of chaos theory or something similar. Maybe it would be able to control politics fairly well, but some coordination problems could plausibly be beyond any reasonable finite intelligence, and hence beyond its control.

comment by Victor Novikov (ZT5) · 2022-04-18T17:04:07.049Z · LW(p) · GW(p)

Why would you want to control a superintelligence aligned with our values? What would be the point of that?

Why would we want to allow for individual humans, who are less-than-perfectly-aligned with our values, to control a superintelligence that is perfectly-aligned-with-our-values?

A Superintelligence would be so far superior to any human or group of humans, and able to manipulate humans so well, that any “choice” humanity faces will be predetermined.

I guess the positive way to phrase this is, "FAI would create an environment where the natural results of our choices would typically be good outcomes" (typically, but not always, because being optimized too hard to succeed is not fun).

Talking about manipulation seems to imply that FAI would trick humans into making choices against their own best interest. I don't think that, typically, is what would happen.

I also see a scenario where FAI deliberately limits its ability to predict people's actions, out of respect for people being upset over the feeling of their choices being "predetermined".

But only a faint illusory glimmer of human choice would remain, while the open-ended, agentic power over the Universe would have passed out of humanity’s hands with the creation of the first Superintelligence.

Meh. I'd rather have the FAI make the big-picture decisions, rather than some corrupt/flawed group of human officials falling prey to the usual bias in human thinking. Either way, I am not the one making the decisions, so what does it matter to me? At least FAI would actually make good decisions.

Replies from: mahdi-complex
comment by Mahdi Complex (mahdi-complex) · 2022-04-18T17:11:04.495Z · LW(p) · GW(p)

I didn't mean to make 1. sound bad. I'm only trying to put my finger on a crux. My impression is that most prosaic alignment work has 2. in mind, even though MIRI/Bostrom/LW seem to believe that 1. is actually what we should be aiming towards. Do prosaic alignment people think that work on human 'control' now will lead to scenario 1 in the long run, or do they just reject scenario 1?

Replies from: ZT5
comment by Victor Novikov (ZT5) · 2022-04-18T17:34:12.108Z · LW(p) · GW(p)

I'm not sure I understand the "prosaic alignment" position well enough to answer this.

I guess, personally, I can see the appeal of scenario 2, of keeping a super-optimizer under control and using it in limited ways to solve specific problems. I also find that scenario incredibly terrifying, because super-optimizers that don't optimize for the full set of human values are dangerous.