Do alignment concerns extend to powerful non-AI agents?

post by Ozyrus · 2022-06-24T18:26:22.737Z · LW · GW · 1 comment

This is a question post.

Contents

  Answers
    13 paulfchristiano
    5 Ozyrus
    4 benjamincosman
    4 Daniel Kokotajlo
    1 Volodymyr Frolov
None
1 comment

Right now the main focus of alignment seems to be on how to align powerful AGI agents, a.k.a AI Safety. I think the field can benefit from a small reframing: we should think not about aligning AI, but about alignment of systems in general, if we are not already.
 

It seems to me that the biggest problem in AI Safety comes not from the fact that the system will have unaligned goals, but from the fact that it is superhuman.

As in, it has nearly godlike power in understanding of the world and in turn manipulation of both the world and other humans in it. Does it really matter if an artificial agent gets godlike processing and self-improvement powers or a human will, or government/business?
 

I propose a little thought experiment – feel free to answer in the comments.

If you, the reader, or, say, Paul Christiano or Eliezer gets uploaded and obtains self-improvement, self-modification and processing speed/power capabilities, will your goals converge to damaging humanity as well?

If not, what makes it different? How can we transfer this secret sauce to an AI agent?
 

If yes, maybe we can see how big, superhuman systems get aligned right now and take some inspiration from that?


 

Answers

answer by paulfchristiano · 2022-06-25T17:47:48.480Z · LW(p) · GW(p)

I believe that much of the difficulty in AI alignment comes from specific facts about how you might build an AI, and especially searching for policies that behave well empirically. Similarly, much of the hope comes from techniques that seem quite specific to AI.

I do think there are other contexts where alignment is a natural problem, especially the construction of institutions. But I'm not convinced that either the particular arguments for concern, or the specific technical approaches we are considering, transfer.

A more zoomed out "second species [AF · GW]" style argument for risk may apply roughly equally well a priori to human institutions as to AI. But quantitatively I think that institutions tend to be weak when their interests are in conflict with any shared interests of their stakeholders/constituents, and so this poses a much lower risk (I'd guess that this is also e.g. Richard's opinion). I think aligning institutions is a pretty interesting and important question, but that the potential upside/downside is quite different from AI alignment, and the quantitative differences are large enough to be qualitative even before we get into specific technical facts about AI.

answer by Ozyrus · 2022-06-24T18:26:55.954Z · LW(p) · GW(p)

My thoughts here is that we should look into the value of identity. I feel like even with godlike capabilities I will still thread very carefully around self-modification to preserve what I consider "myself" (that includes valuing humanity).
I even have some ideas on safety experiments on transformer-based agents to look into if and how they value their identity.

answer by benjamincosman · 2022-06-25T00:35:45.319Z · LW(p) · GW(p)

If you, the reader, or, say, Paul Christiano or Eliezer gets uploaded and obtains self-improvement, self-modification and processing speed/power capabilities, will your goals converge to damaging humanity as well? If not, what makes it different? How can we transfer this secret sauce to an AI agent?

The Orthogonality Thesis [? · GW] states that values and capabilities can vary independently. The key question then is whether my/Paul's/Eliezer's values are actually as aligned with humanity as they appear to be, or if instead we are already unaligned and would perform a Treacherous Turn [? · GW] once we had the power to get away with it. There are certainly people who are already obviously bad choices, and people who would perform the Treacherous Turn (possibly most people[1]), but I believe there are people who are sufficiently aligned, so let's assume going forward we've picked one of those. At this point "If not, what makes it different?" answers itself: by assumption we've picked a person for whom the Value Loading Problem [LW · GW] is already solved. But we have no idea how to "transfer this secret sauce to an AI agent" - the secret sauce is hidden somewhere along this person's particular upbringing and more importantly their multi-billion year evolutionary history.


  1. The adage "power tends to corrupt, and absolute power corrupts absolutely" basically says that treacherous turns are commonplace for humans - we claim to be aligned and might even believe it ourselves while we are weak, but then when we get power we abuse it. This adage existing does not of course mean it's universally true. ↩︎

comment by Viliam · 2022-06-25T16:34:48.197Z · LW(p) · GW(p)

The adage "power tends to corrupt, and absolute power corrupts absolutely" basically says that treacherous turns are commonplace for humans - we claim to be aligned and might even believe it ourselves while we are weak, but then when we get power we abuse it.

I would like to know the true answer to this.

On one hand, some people are assholes, and often it's just a fear of punishment or social disapproval that stops them. Remove all this feedback, and it's probably not going to end well. (Furthermore, a percent or two of the population are literally psychopaths.)

On the other hand, people who have power are usually not selected randomly, but through a process where the traits that later cause the "treacherous turn" may be helpful; quite likely they already had to betray people repeatedly in order to get to the top. (Then the adage kinda reduces to "people selected for evil traits are evil"; no shit Sherlock. This doesn't say much about the average person.) Also, having "power" often requires continuously defending it from all kinds of enemies, which can make a person paranoid and aggressive. (When in doubt, attack first, because if you keep ignoring situations with 1% chance to hurt you, your fall is just a question of time.)

I don't know if we have a sample of people who somehow got power without actively fighting for it, and who felt safe at keeping it. How did they behave, on average?

Replies from: MSRayne
comment by MSRayne · 2022-06-26T14:33:55.245Z · LW(p) · GW(p)

I think the insights from Selectorate Theory [LW · GW] imply that it is impossible to keep power without gradually growing more corrupt, in the sense of appeasing the "Winning Coalition" with private goods. No matter what your terminal goals, more power, and power kept for longer, is a convergent instrumental goal, and one which usually takes so much effort to achieve that you gradually lose sight of your terminal goals too, compromising ethics in the short term in the name of an "ends justify the means" long term (which often never arrives).

So yeah, I think that powerful humans are unaligned by default, as our ancestors who rejected all attempts to form hierarchies for tens of thousands of years before finally succumbing to the first nationstates may attest.

Replies from: Viliam
comment by Viliam · 2022-06-26T20:17:44.904Z · LW(p) · GW(p)

Seems like there are two meanings of "power" that get conflated, because in real life it is a combination of both:

  • to be able to do whatever you want;
  • to successfully balance the interests of others, so that you can stay nominally on the top.
Replies from: MSRayne
comment by MSRayne · 2022-06-26T20:20:15.037Z · LW(p) · GW(p)

Good point. Perhaps there's some people who would be corrupted by the realities of human politics, but not by e.g. ascension to superintelligence.

answer by Daniel Kokotajlo · 2022-06-24T20:43:16.139Z · LW(p) · GW(p)
  1. Superhuman agents these days are all built up out of humans talking to each other. That helps a lot for their alignability, in multiple ways. For an attempt to transfer this secret sauce to an AI agent, see Iterated Distillation and Amplification, which as I understand it works by basically making a really good human-imitator, then making a giant bureacracy of them, and then imitating that bureacracy & repeating the process.
  2. The AIs we will soon build will be superhuman in new ways, ways that no current superhuman agent enjoys. (See e.g. Bostrom's breakdown of speed, quality, and collective intelligence -- current organizations are superhuman in "collective" but human-level in speed and quality)
  3. To answer your question, no, I'd feel pretty good about Paul or Eliezer or me being uploaded. If it was a random human being instead of one of those two, I'd still think things would probably be OK though there'd be a still-too-large chance of catastrophe.
comment by the gears to ascension (lahwran) · 2022-06-24T23:17:35.989Z · LW(p) · GW(p)

humans talking to each other already has severe misalignment. ownership exploitation is the primary threat folks seem to fear from ASI: "you're made of atoms the ai can use for something else" => "you're made of atoms jeff bezos and other big capital can use for something else". I don't think point 1 holds strongly. youtube is already misaligned; it's not starkly superhuman, but it's much better at selecting superstimuli than most of its users. hard asi would amplify all of these problems immensely, but because they aren't new problems, I do think seeking formalizations of inter-agent safety is a fruitful endeavor.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-06-25T02:39:15.468Z · LW(p) · GW(p)

Oh I agree with all that. I said "it helps a lot for their alignability" not "they are all aligned."

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-06-25T08:23:38.936Z · LW(p) · GW(p)

makes sense, glad we had this talk :thumbsup:

answer by Volodymyr Frolov · 2022-06-29T04:26:58.114Z · LW(p) · GW(p)

The misalignment problem is universal, extending far beyond AI research. We have to deal daily with misaligned artificial non-intelligent and non-artificial intelligent systems. Politicians, laws, credit score systems, and many other things around us are misaligned to some degree with the goals of the agents who created them or gave them the power. The AI Safety concern states that if an AGI system is disproportionately powerful, a tiny misalignment is enough to create unthinkable risks for humanity. It doesn't matter if the powerful but tiny-misaligned system is AI or not, but we believe that AGI systems are going to be extraordinarily powerful and are doomed to be tiny-misaligned.

You can do a different thought experiment. I'm sure that you, like any other standard agent, are slightly misaligned with the goals of the rest of humanity. Imagine you have a near-infinite power. How bad would that be for the rest of humanity? If you were somebody else in this world, a random person, would you still want to find yourself in the situation where now-you has near-infinite power? I certainly don't. 

1 comment

Comments sorted by top scores.

comment by paulfchristiano · 2022-06-25T17:48:07.183Z · LW(p) · GW(p)

The question "what does a human do if they obtain a lot of power" seems only tangentially related to intent alignment. I think this largely comes down to (i) the preferences of that human in this new context, (ii) the competence of that person at behaving sanely in this new context.

I like to think that I'm a nice person who the world should be unusually happy to empower, but I don't think that means I'm "aligned with humanity;" in general we are aiming at a much stronger notion of alignment than that. Indeed, I don't think "humanity" has the right type signature for something to be aligned with. And on top of that, while there are many entities I would treat with respect, and while I would expect to quickly devolve the power I acquired in this magical thought experiment, I still don't think there exists any X (other than perhaps "Paul Christiano") that I am aligned with in the sense that we want our AI systems to be aligned with us.