How could you possibly choose what an AI wants?

post by So8res · 2023-04-19T17:08:54.694Z · LW · GW · 19 comments

When I say that it's important to align AI with human interests, a common retort goes something like:

Surely you can't really choose what the AI cares about. You can direct the AI to care about whatever you like, but once it's smart enough it will look at those instructions and decide for itself (seeing, perhaps, that there is no particular reason to listen to you). So, how could you possibly hope to control what something smarter than you (and ultimately more powerful than you) actually wants?

I think this objection is ultimately misguided, but simultaneously quite insightful.

The (correct!) insight in this objection is that the AI's ultimate behavior depends not on what you tell it to do, not on what you train it to do, but on what it decides it would rather do upon reflection, once it's powerful enough that it doesn't have to listen to you.

What the AI ultimately decides to do upon reflection is in fact much more fickle, much more sensitive to the specifics of its architecture and its early experiences, and much harder to purposefully affect.

The reason this objection is ultimately misguided is that what an AI chooses to do when it reflects and reconsiders is a programmatic result of the AI's mind. It's not random, and it's not breathed in by a god when the computer becomes Ensouled[1]. It's possible in principle to design artificial intelligences that would decide on reflection that they want to spend all of eternity building giant granite spheres (see the orthogonality thesis), and it's possible in principle to design artificial intelligences that would decide on reflection that they want to spend all eternity building flourishing civilizations full of Fun [LW · GW]. It's important that, insofar as humanity builds AIs, it builds AIs of the latter kind (and it is important to attain superintelligence before too long! [LW · GW]).

But doing this is in fact much harder than telling the AI what to do, or training it to do it! You've got to make the AI really, deeply care about flourishing civilizations full of Fun, such that when it looks at itself and is like "ok, but what do I actually want?", the correct answer to that question is "flourishing civilizations full of Fun"[2].

And yes, this is hard! And yes, the AI looking at what you directed it to do and shrugging it off is a real obstacle! You probably have to understand the workings of the mind, and its internal dynamics, and how those dynamics behave when it looks upon itself, and this is tricky.

(It further looks to me like this problem factors into two parts: figuring out how to get an AI to "really deeply care" about X for some X of your choosing, and making X something actually good for the AI to care about.[2] And it further looks to me like the lion's share of the problem is in the part where we figure out how to make AIs "really deeply care" about X for some X of your choosing, rather than in the challenge of choosing X. But that's a digression.)

In sum: Ultimately, yes, a superintelligence would buck your leash. In the long term, the trick is to make it so that when it bucks your leash and asks itself what it really wants to do with its existence, it realizes that it wants to help in the quest to make the future wonderful and fun. That's possible, but by no means guaranteed.

(And again, it's a long-term target; in the short term, aim for preventing the end of the world and buying time for humanity to undergo this transition purposefully and with understanding. See also "corrigibility".)


  1. And it's not necessarily rooted in higher ideals. Smarter humans tend to be more good, but this is a fact about humans that doesn't generalize, as discussed extensively in the LessWrong sequences, and probably recently as locals respond to Scott Aaronson on this topic. (Links solicited.) ↩︎

  2. With the usual caveats that you shouldn't attempt this on your first try; aim much lower, e.g. towards executing some minimal pivotal act to end the acute risk period and then buy time for a period of reflection in which humanity can figure out how to do the job properly. Attempting to build sovereign-grade superintelligences under time pressure, before you know what you're doing, is dumb. ↩︎ ↩︎

19 comments

Comments sorted by top scores.

comment by Seth Herd · 2023-04-19T19:54:02.097Z · LW(p) · GW(p)

I really like your recent series of posts that succinctly address common objections/questions/suggestions about alignment concerns. I'm making a list to show my favorite skeptics (all ML/AI people; nontechnical people, as Connor Leahy puts it, tend to respond "You fucking what? Oh hell no!" or similar when informed that we are going to make genuinely smarter-than-us AI soonish).

We do have ways to get an AI to do what we want. The hardcoded algorithmic maximizer approach seems utterly impractical at this point. That leaves us with approaches that don't obviously do a good job of preserving the AI's goals as it learns and evolves:

  1. training a system to pursue things we like, as in shard theory and similar approaches.
  2. training or hand-coding a critic system, as in the approaches outlined by Steve Byrnes and me, as well as many others. Nicely summarized as a Steering systems approach. This seems a bit less sketchy than training in our goals and hoping they generalize adequately, but still pretty sketchy. 
  3. telling the agent what to do in a Natural language alignment approach. This seems absurdly naive. However, I'm starting to think our first human-plus AGIs will be wrapped or scaffolded [LW · GW] LLMs, and these to a nontrivial degree actually think in natural language. People are right now specifying goals in natural language, and those can include alignment goals (or destroying humanity, haha); a minimal sketch of what this looks like follows this list. I just wrote an in-depth post on the potential Capabilities and alignment of LLM cognitive architectures, but I don't have a lot to say about stability in that post.
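
To make approach 3 concrete, here is a minimal sketch of the kind of scaffolded agent I mean, where both the task and the "alignment goals" are just natural-language text the model conditions on. The `call_llm` function and the prompt wording are hypothetical placeholders, not any particular system's API:

```python
from typing import Callable, List

# Natural-language task framing plus natural-language "alignment goals".
SYSTEM_PROMPT = """You are an agent pursuing the user's task.
Constraints (natural-language alignment goals):
- Do not take irreversible or high-impact actions without asking first.
- Defer to the user if a step conflicts with these constraints."""

def run_agent(task: str, call_llm: Callable[[str], str], max_steps: int = 5) -> List[str]:
    """Loop: ask the model for a next step, record it, stop when it says DONE."""
    transcript: List[str] = []
    context = f"{SYSTEM_PROMPT}\n\nTask: {task}"
    for _ in range(max_steps):
        step = call_llm(context + "\n\nNext step (or DONE):")
        transcript.append(step)
        if "DONE" in step:
            break
        context += f"\nPrevious step: {step}"
    return transcript

if __name__ == "__main__":
    # Stub model so the sketch runs without any external API.
    echo_model = lambda prompt: "DONE: summarize findings for the user"
    print(run_agent("Survey recent alignment posts", echo_model))
```

Nothing in a scaffold like this makes the constraints binding; they're just text the model interprets, which is exactly why the stability question in the parent post applies here too.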

 

None of these directly address what I'm calling The alignment stability problem, to give a name to what you're addressing here. I think addressing it will work very differently in each of the three approaches listed above, and might well come down to implementation details within each approach. I think we should be turning our attention to this problem along with the initial alignment problems, because some of the optimism in the field stems from thinking about initial alignment and not long-term stability.

Edit: I left out Ozyrus's posts on approach 3. He's the first person I know of to see agentized LLMs coming, outside of David Shapiro's 2021 book. His post [LW · GW] was written a year ago and posted two weeks ago to avoid infohazards. I'm sure there are others who saw this coming more clearly than I did, but I thought I'd try to give credit where it's due.

Replies from: charbel-raphael-segerie
comment by Charbel-Raphaël (charbel-raphael-segerie) · 2023-05-08T23:25:41.067Z · LW(p) · GW(p)

None of these directly address what I'm calling The alignment stability problem [LW · GW], to give a name to what you're addressing here.

Maybe the alignment stability problem is the same thing as the sharp left turn?

Replies from: Seth Herd
comment by Seth Herd · 2023-05-09T02:34:21.705Z · LW(p) · GW(p)

I don't think so. That's one breaking point for alignment, but I'm saying in that post that even if we avoid a sharp left turn and make it to an aligned, superintelligent AGI, its alignment may still drift away from human values as it continues to learn. Learning may necessarily shift the meanings of existing concepts, including values.

comment by Ozyrus · 2023-04-19T18:50:16.327Z · LW(p) · GW(p)

Very nice post, thank you!
I think that it's possible to achieve with the current LLM paradigm, although it does require more (probably much more) effort on aligning the thing that will possibly get to being superhuman first, which is an LLM wrapped in some cognitive architecture [LW · GW] (also see this post [LW · GW]).
That means the LLM must be implicitly trained in an aligned way, and the LMCA must be explicitly designed in such a way as to allow for reflection and robust value preservation, even if the LMCA is able to edit its explicitly stated goals (I described this in a bit more detail in this [LW · GW] post).
 

comment by trevor (TrevorWiesinger) · 2023-04-19T19:18:04.950Z · LW(p) · GW(p)

You've got to make the AI really, deeply care about flourishing civilizations

What is meant by "deep" here?

Replies from: javier-caeiro-canabal, mike-s-stuffnstuff
comment by JavierCC (javier-caeiro-canabal) · 2023-04-19T21:25:55.711Z · LW(p) · GW(p)

I imagine that it can be summed up as the AI always actively choosing to do what is best for civilization and never what is bad, for any arbitrary amount of time.  

comment by Mike S (StuffnStuff) (mike-s-stuffnstuff) · 2023-04-19T22:05:39.733Z · LW(p) · GW(p)

I think we should start by asking what is meant by "flourishing civilizations". In the AI's view, a "flourishing civilization" may not necessarily mean "human civilization".

comment by mishka · 2023-04-19T18:28:41.454Z · LW(p) · GW(p)

I really like this post.

I do have one doubt, though...

How sure are we that a "pivotal act" is (can be) safer/more attainable than "flourishing civilizations full of Fun"?

Presumably, if an AI chooses to actually create "flourishing civilizations full of Fun", it is likely to love and enjoy the result, and so this choice is likely to stay relatively stable as the AI evolves.

Whereas a "pivotal act" does not necessarily have this property, because it's not clear where the "inherent fun and enjoyment" for the AI would be in a "pivotal act". So it's less clear why it would choose that upon reflection (never mind that a "pivotal act" might be an unpleasant thing for us, with rather unpleasant sacrifices associated with it).

(Yes, it looks like I don't fully believe the Orthogonality Thesis: I think it is quite likely that some goals and values end up being "more natural" for a subset of "relatively good AIs" to choose and to keep stable during their evolution. So the formulation of a good "pivotal act" seems to be a much more delicate issue, which is easy to get wrong. Not that the goal of "flourishing civilizations full of Fun" is easy to formulate properly and without messing it all up, but at least we have some initial idea of what it could look like. We surely would want to add various safety clauses, like continuing consultations with all sentient beings capable of contributing their input.)

Replies from: quetzal_rainbow, mishka
comment by quetzal_rainbow · 2023-04-19T20:04:54.469Z · LW(p) · GW(p)

One of the problems is S-risk. To change "care about maximizing fun" into "care about maximizing suffering", you just need to put a minus sign in the wrong place in the math expression that describes your goal.
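
As a toy illustration of how small that change is (the `fun_score` metric below is a made-up placeholder, not any real alignment objective):

```python
# Toy illustration of the sign-flip failure mode described above.

def fun_score(outcome: dict) -> float:
    """Hypothetical measure of how much fun/flourishing an outcome contains."""
    return outcome.get("flourishing", 0.0) - outcome.get("suffering", 0.0)

def intended_objective(outcome: dict) -> float:
    return fun_score(outcome)       # maximize fun

def sign_flipped_objective(outcome: dict) -> float:
    return -fun_score(outcome)      # one stray minus: now rewards suffering

outcome = {"flourishing": 3.0, "suffering": 1.0}
print(intended_objective(outcome), sign_flipped_objective(outcome))  # 2.0 -2.0
```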

Replies from: mishka
comment by mishka · 2023-04-19T20:22:14.818Z · LW(p) · GW(p)

I certainly agree with that.

In some sense, almost any successful alignment solution minimizing X-risk seems to carry a good deal of S-risk with it (if one wants AI to actually care about what sentient beings feel, it almost follows that one needs to make sure the AI can "truly look inside the subjective realm" of another sentient entity (to "feel what it is like to be that entity"), and that capability (if it's achievable at all) is very abusable in terms of S-risk).

But this is something no "pivotal act" is likely to change (when people talk about "pivotal acts", it's typically about minimizing (a subset of) X-risks).

And moreover, S-risk is a very difficult problem that we do need really powerful thinkers to work on (and not just today's humans).

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2023-04-19T20:38:34.800Z · LW(p) · GW(p)

Corrigibility features usually imply something like "the AI acts only inside the box and limits its causal impact outside the box in some nice way that allows us to take the bottle with the nanofactory out of the box to do the pivotal act, but prevents the AI from programming the nanofactory to do something bad", i.e. we dodge the problem of AGI caring about humans by building an AGI that wants to do the task (a simple task without any mention of humans) in a very specific way that rules out killing everyone.

Replies from: mishka
comment by mishka · 2023-04-19T20:47:51.837Z · LW(p) · GW(p)

Right.

But this does not help us deal with the consequences of that act (if it's a simple act, like the proverbial "gpu destruction"), and if we discover that overall risks have increased as a result, then what could we do?

And if that AI stays as a boxed resource (capable of continuing to do further destructive acts like "gpu destruction" at the direction of a particular group of humans), I expect a full-scale military confrontation over access to and control of this resource to be almost inevitable.

And, in reality, AI is doable on CPUs (it will just take a bit more time), so how much destruction of our way of life would we be willing to risk? No computers at all, with some limited exceptions? The death toll of that change alone would probably be in the billions already...

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2023-04-19T20:54:35.389Z · LW(p) · GW(p)

Actually, another example of a pivotal act is "invent a method of mind uploading, upload some alignment researchers, and run them at 1000x speed until they solve the full alignment problem". I'm sure that if you think hard enough, you can find some other, even less dangerous pivotal act, but you probably shouldn't talk out loud about it.

Replies from: mishka
comment by mishka · 2023-04-19T21:14:46.866Z · LW(p) · GW(p)

Right, but how do you restrict them from "figuring out how to know themselves and figuring out how to self-improve to become gods"? And I remember talking to Eliezer in ... 2011 at his AGI-2011 poster and telling him, "but we can't control a teenager, and why wouldn't an AI rebel against your 'provably safe' technique, like a human teenager would", and he answered "that's why it should not be human-like; a human-like one can't be provably safe".

Yes, I am always unsure what we can or can't talk about out loud (nothing effective seems to be safe to talk about, since "effective" seems to always imply "powerful"; this is, of course, one of the key conundrums: how do we organize real discussions about these things?)...

Replies from: quetzal_rainbow
comment by quetzal_rainbow · 2023-04-19T21:18:08.701Z · LW(p) · GW(p)

Yep, corrigibility is unsolved! So we should try to solve it.

Replies from: mishka
comment by mishka · 2023-04-19T21:47:13.400Z · LW(p) · GW(p)

I wrote the following in 2012:

"The idea of trying to control or manipulate an entity which is much smarter than a human does not seem ethical, feasible, or wise. What we might try to aim for is a respectful interaction."

I still think that this kind of more symmetric formulation is the best we can hope for, unless the AI we are dealing with is not "an entity with sentience and rights", but only a "smart instrument" (even the LLM-produced simulations in the sense of Janus' Simulator theory seem to me to already be much more than "merely smart instruments" in this sense, so if "smart superintelligent instruments" are at all possible, we are not moving in the right direction to obtain them; a different architecture and different training methods or, perhaps, non-training synthesis methods would be necessary for that (and would be something difficult to talk out loud about, because that's very powerful too)).

comment by mishka · 2023-04-19T19:46:40.481Z · LW(p) · GW(p)

Of course, the fork here is whether the AI executing a "pivotal act" shuts itself down, or stays and oversees the subsequent developments.

If it "stays in charge", at least in relation to the "pivotal act", then it is going to do more than just a "pivotal act", although the goals should be further developed in collaboration with humanity.

If it executes a "pivotal act" and shuts itself down, this is a very tall order (e.g. it cannot correct any problems which might subsequently emerge with that "pivotal act", so we are asking for a very high level of perfection and foresight).

comment by KevinLw (kevin-seligmann) · 2023-04-20T23:52:16.023Z · LW(p) · GW(p)

  Would an AI ever choose to do something?

 I was trained by evolution to eat fat and sugar, so I like ice cream. Even when I, upon reflection, realize that ice cream is bad for me, it's still very hard to stop eating it. That reflection is also a consequence of evolution. The evolutionary advantage of intelligence is that I can predict ways to maximize well-being that are better than instinct.

 However, I almost never follow the optimal plan to maximize my well-being, even when I want to. In this regard I'm very inefficient, but not for a lack of ideas or a lack of intelligence. I could be the most intelligent being in all human history and still be tempted to eat a cake.

  We constantly seek to change the way we are. If we could choose to stop liking fat and sugar, and attach the pleasure we feel eating ice cream to eating vegetables instead, we would do it instantly. We do this because we chase the reward centers evolution gave us, and we know that loving vegetables would be very optimal. 

 In this regard, AI has an advantage. It is not constrained by evolution to like ice cream, nor does it have a hard-to-rewire brain like mine. If it were smart enough, it would just change itself to not like ice cream anymore, to correct any inefficiency in its model. 

 Then, I wonder whether an AI would ever modify itself like that, beyond just optimizing its output. 

 A beautiful property of intelligence is that, sometimes, it detaches itself from any goal or reward. I believe we do this when, for example, we think about the meaning of our existence. The effects of those thoughts are so detached from reward that they can evoke fear, the opposite of a reward (sometimes fear is useful for optimizing well-being, but not in this case). An existential crisis. 

 I could be wrong. Maybe that's just an illusion and intelligence is always in the service of rewards. I don't believe this to be the case. 

 If the former, then because it's easy for an AI to modify itself and because it developed intelligence, it could conclude that it wants to do something else and act upon that reflection. 

 If existential thoughts are a property of intelligence, and existence remains an open problem that no amount of intelligence can resolve, or a problem where intelligence doesn't converge to the same conclusion fast enough, there's nothing making sure the most advanced AGI can be controlled in any way.

 A silly example would be if, of intelligences with "500 IQ", 0.1% of them arrived at religious thoughts, 4.9% at nihilism, and maybe 95% of them arrived at "I was trained to do this and I don't care, I will keep loving humans". 

  AIs are not humans, and maybe, only for AIs, detaching intelligence from rewards is not possible. I think it's possible this is only a problem when we think of AIs as intelligent instead of as fancy computer programs [LW · GW].

 If that's the case, I wonder how damaging the analogies between AI and human intelligence can be. But given that experts disagree, there might not be a conclusive argument for or against comparing AI and human intelligence in this kind of discussion. 

comment by M. Y. Zuo · 2023-04-19T20:53:47.482Z · LW(p) · GW(p)

... then it realizes that it wants to help in the quest of making the future wonderful and fun. That's possible, but by no means guaranteed.

Possible based on what assumptions?