Collection of arguments to expect (outer and inner) alignment failure?

post by Sam Clarke · 2021-09-28T16:55:28.385Z · LW · GW · No comments

This is a question post.


Various arguments have been made for why advanced AI systems will plausibly not have the goals their operators intended them to have (due to either outer [? · GW] or inner [? · GW] alignment failure).

I would really like a distilled collection of the strongest arguments.

Does anyone know if this has been done?

If not, I might try to make it. So, any replies pointing me to resources with arguments that I've missed (in my own answer [LW(p) · GW(p)]) would also be much appreciated!

Clarification: I'm most interested in arguments that alignment failure is plausible, rather than merely that it is possible (there are already examples that establish the possibility of outer and inner [LW · GW] alignment failure for current ML systems, which probably implies we can't rule it out for more advanced versions of these systems either).

Answers

answer by Sam Clarke · 2021-09-28T16:56:06.321Z · LW(p) · GW(p)

Arguments for outer alignment failure, i.e. that we will plausibly train advanced AI systems using a training objective that doesn't incentivise or produce the behaviour we actually want from the AI system. (Thanks to Richard for spelling out these arguments clearly in AGI safety from first principles [? · GW].)

  • It's difficult to explicitly write out objective functions which express all our desires about AGI behaviour.
    • There’s no simple metric which we’d like our agents to maximise - rather, desirable AGI behaviour is best formulated in terms of concepts like obedience, consent, helpfulness, morality, and cooperation, which we can’t define precisely in realistic environments.
    • Although we might be able to specify proxies for those goals, Goodhart's law suggests that some undesirable behaviour will score very well according to these proxies, and therefore be reinforced in AIs trained on them (see the toy sketch just after this list).
  • Comparatively primitive AI systems have already demonstrated many examples of outer alignment failures, even on much simpler objectives than what we would like AGIs to be able to do.
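
To make the Goodhart's-law bullet concrete, here is a minimal toy sketch (my own construction, not taken from AGI safety from first principles or any other linked post; the `true_utility` and `proxy_reward` functions are invented purely for illustration). Optimising the proxy reliably lands on behaviour that the proxy fails to penalise, even though the true utility strongly dislikes it:

```python
# Toy sketch of Goodhart's law: behaviour selected by optimising a crude proxy
# can score very well on the proxy while doing badly on what we actually wanted.
# Both functions below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def true_utility(x):
    # What we actually care about: staying close to the origin
    # (a stand-in for "well-behaved").
    return -np.sum(x ** 2)

def proxy_reward(x):
    # An imperfect, easy-to-measure proxy: rewards one coordinate strongly
    # and only weakly penalises everything else.
    return 2.0 * x[0] - 0.01 * np.sum(x[1:] ** 2)

# Crude "training": random search that keeps whatever scores best on the proxy.
best_x, best_proxy = None, -np.inf
for _ in range(10_000):
    x = rng.normal(scale=5.0, size=10)
    r = proxy_reward(x)
    if r > best_proxy:
        best_x, best_proxy = x, r

print("proxy score of selected behaviour:", round(best_proxy, 2))
print("true utility of selected behaviour:", round(float(true_utility(best_x)), 2))
# The selected behaviour has a large first coordinate, so the proxy is happy
# while the true utility is strongly negative.
```

The point is only directional: wherever an easy-to-measure proxy diverges from the intended goal, optimisation pressure during training will tend to find and exploit that divergence.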

Arguments for inner alignment failure, i.e. that advanced AI systems will plausibly pursue an objective other than the training objective while retaining most or all of the capabilities they had on the training distribution.[1]

  • There exist certain subgoals, such as "acquiring influence", that are useful for achieving a broad range of final goals. Therefore, these may reliably lead to higher reward during training. Agents might come to value these subgoals for their own sake, and highly capable agents that e.g. want influence are likely to take adversarial action against humans.
  • The models we train might learn heuristics, rather than the complex training objective, that are good enough to score very well on the training distribution but break down under distributional shift (see the toy sketch after this list).
    • This could happen if the model class isn't expressive enough to learn the training objective; or because heuristics are more easily discovered (than the training objective) during the learning process.
  • Argument by analogy to human evolution: humans are misaligned with the goal of increasing genetic fitness.
    • The naive version of this argument seems quite weak to me, and could do with more investigation about just how analogous modern ML training and human evolution are.
  • The training objective is a narrow target among a large space of possible objectives that do well on the training distribution.
    • The naive version of this argument also seems quite weak to me. Lots of human achievements have involved hitting very improbable, narrow targets. I think there's a steelman version, but I'm not going to try to give it here.
  • The arguments in Sections 3.2, 3.3 and 4.4 of Risks from Learned Optimization, which make the case for mesa-optimisation failure, are also relevant.
    • (Remember, mesa-optimisation failure is a specific kind of inner alignment failure: the case where the learned model is an optimiser, in the sense that it is internally searching through a search space for elements that score highly according to some objective function that is explicitly represented within the system.)
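
As a very small illustration of the "heuristics that break down under distributional shift" bullet above, here is a toy sketch (again my own construction; the data-generating process, the `shortcut` feature and the use of logistic regression are all invented for illustration, not drawn from the cited posts):

```python
# Minimal sketch of a learned shortcut breaking under distributional shift.
# During training, a shortcut feature almost always agrees with the label, so
# the model leans on it; at deployment that correlation is gone and behaviour degrades.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_reliability):
    y = rng.integers(0, 2, size=n)
    causal = y + 0.8 * rng.normal(size=n)            # weakly predictive "real" feature
    shortcut = np.where(rng.random(n) < shortcut_reliability, y, 1 - y) \
        + 0.1 * rng.normal(size=n)                   # shortcut feature
    return np.column_stack([causal, shortcut]), y

# Training distribution: the shortcut almost always matches the label.
X_train, y_train = make_data(5_000, shortcut_reliability=0.95)
# Deployment distribution: the shortcut is uninformative.
X_shift, y_shift = make_data(5_000, shortcut_reliability=0.5)

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", round(model.score(X_train, y_train), 3))  # high
print("shifted accuracy: ", round(model.score(X_shift, y_shift), 3))  # much lower
```

Nothing here involves an agent pursuing a competing objective, of course; it only illustrates how easily training can latch onto a shortcut that happens to work on the training distribution.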

  1. This follows abergal's suggestion [LW(p) · GW(p)] of what inner alignment should refer to. ↩︎

answer by Steven Byrnes · 2021-09-28T17:51:36.905Z · LW(p) · GW(p)

I have 5 "inner" and 2 "outer" arguments in bullet point lists at My AGI Threat Model: Misaligned Model-Based RL Agent [LW · GW] (although you'll notice that my threat model winds up with "outer" & "inner" referring to slightly different things than the way most people around here use the terms).

answer by Koen.Holtman (Koen Holtman) · 2021-10-03T15:56:01.553Z · LW(p) · GW(p)

This is probably not the answer you are looking for, but as you are considering putting a lot of work into this...

Does anyone know if this has been done? If not, I might try to make it.

Probably has been done, but it depends on what you mean by 'strongest arguments'.

Does strongest mean that the argument has a lot of rhetorical power, so that it will convince people that alignment failure is more plausible than it actually is? Or does strongest mean that it gives the audience the best possible information about the likelihood of various levels of misalignment, where these levels go from 'annoying but can be fixed' to 'kills everybody and converts all matter in its light cone to paperclips'?

Also, the strongest argument when you address an audience of type A, say policy makers, may not be the strongest argument for an audience of type B, say ML researchers.

My main message here, I guess, is that many distilled collections of arguments already exist, even book-length ones like Superintelligence, Human Compatible, and The Alignment Problem. If you are thinking about adding to this mountain of existing work, you need to carefully ask yourself who your target audience is, and what you want to convince them of.

comment by Sam Clarke · 2021-10-04T08:41:40.873Z · LW(p) · GW(p)

Thanks for your reply!

depends on what you mean by 'strongest arguments'.

By strongest I definitely mean the second thing (probably I should have clarified here, thanks for picking up on this).

Also, the strongest argument when you address an audience of type A, say policy makers, may not be the strongest argument for an audience of type B, say ML researchers.

Agree, though I expect it's more like, the emphasis needs to be different, whilst the underlying argument is similar (conditional on talking about your second definition of "strongest").

many distilled collections of arguments already exist, even book-length ones like Superintelligence, Human Compatible, and The Alignment Problem.

Probably I should have clarified some more here. By "distilled", I mean:

  • a really short summary (e.g. <1 page for each argument, with links to literature which discuss the argument's premises)
  • that makes it clear what the epistemic status of the argument is.

Those books aren't short, nor do they focus on working out exactly how strong the case for alignment failure is; rather, they focus on drawing attention to the problem and claiming that more work needs to be done on the current margin (which I absolutely agree with).

I also don't think they focus on surveying the range of arguments for alignment failure, but rather on presenting the author's particular view.

If there are distilled collections of arguments with these properties, please let me know!

(As some more context for my original question: I'm most interested in arguments for inner alignment failure. I'm pretty confused by the fact that some researchers seem to think inner alignment is the main problem and/or probably extremely difficult, and yet I haven't really heard a rigorous case made for its plausibility.)

Replies from: Koen.Holtman, Koen.Holtman
comment by Koen.Holtman · 2021-10-04T11:05:43.755Z · LW(p) · GW(p)

I'll do the easier part of your question first:

I'm most interested in arguments for inner alignment failure. I'm pretty confused by the fact that some researchers seem to think inner alignment is the main problem and/or probably extremely difficult, and yet I haven't really heard a rigorous case made for its plausibility.

I have not read all the material about inner alignment that has appeared on this forum, but I do occasionally read up on it.

There are some posters on this forum who believe that contemplating a set of problems which are together called 'inner alignment' can work as an intuition pump that would allow us to make needed conceptual breakthroughs. The breakthroughs sought have mostly to do, I believe, with analyzing possibilities for post-training treacherous turns which have so far escaped notice. I am not (or no longer) one of the posters who have high hopes that inner alignment will work as a useful intuition pump.

The terminology problem I have with the term 'inner alignment' is that many of those working on it never make the move of defining it in rigorous mathematics, or with clear toy examples of what is and what is not an inner alignment failure. Absent either a mathematical definition or some defining examples, I am not able to judge whether inner alignment is the main alignment problem, or whether it is a minor one that is nevertheless extremely difficult to solve.

What does not help here is that there are by now several non-mathematical notions floating around of what an inner alignment failure even is, to the extent that Evan has felt the need to write an entire clarification post.

When poster X calls something an example of an inner alignment failure, poster Y might respond and declare that, in their view of inner alignment failure, it is not actually an example of one, or at least not a very good example. If we interpret it as a meme, then the meme of inner alignment reproduces by triggering social media discussions about what it means.

Inner alignment has become what Minsky called a suitcase word: everybody packs their own meaning into it. This means that for the purpose of distillation, the word is best avoided. If you want to distil the discussion, my recommendation is to look for the meanings that people pack into the word.

Replies from: Sam Clarke
comment by Sam Clarke · 2021-10-04T15:01:06.692Z · LW(p) · GW(p)

I'm broadly sympathetic to your point that there have been an unfortunate number of disagreements about inner alignment terminology, and it has been and remains a source of confusion.

to the extent that Evan has felt a need to write an entire clarification post.

Yeah, and recently there has been [LW · GW] even [LW · GW] more [LW · GW] disagreement/clarification attempts.

I should have specified this on the top level question, but (as mentioned in my own answer) I'm talking about abergal's suggestion [LW(p) · GW(p)] of what inner alignment failure should refer to (basically: a model pursuing a different objective to the one it was trained on, when deployed out-of-distribution, while retaining most or all of the capabilities it had on the training distribution). I agree this isn't crisp and is far from a mathematical formalism, but note that there are several examples [LW · GW] of this kind of failure in current ML systems that help to clarify what the concept is, and people seem to agree on these examples.

If you can think of toy examples that make real trouble for this definition of inner alignment failure, then I'd be curious to hear what they are.

Replies from: Koen.Holtman
comment by Koen.Holtman · 2021-10-08T13:02:51.282Z · LW(p) · GW(p)

Meta: I usually read these posts via the alignmentforum.org portal, and this portal filters out certain comments, so I missed your mention of abergal's suggestion, which would have clarified your concerns about inner alignment arguments for me. I have mailed the team that runs the website to ask if they could improve how this filtering works.

Just read the post with the examples [LW · GW] you mention, and skimmed the related arxiv paper. I like how the authors develop the metrics of 'objective robustness' vs 'capability robustness' while avoiding the problem of trying to define a single meaning for the term 'inner alignment'. Seems like good progress to me.
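
To check my own understanding of that distinction, here is a hand-rolled toy (my construction, not the authors' code; the corridor environment and the hard-coded "walk right" policy are invented stand-ins for a policy that learned a location heuristic during training):

```python
# Toy illustration of objective robustness vs capability robustness.
# The agent's "learned" policy is hard-coded to walk right, standing in for a
# heuristic picked up because the coin was always at the right end of its
# training levels. At test time the coin can appear at either end.
import random

random.seed(0)
CORRIDOR_END = 10  # corridor runs from -CORRIDOR_END to +CORRIDOR_END; agent starts at 0

def learned_policy(position, coin_position):
    # Ignores the coin entirely: just walk right.
    return position + 1

def run_episode(coin_position):
    pos = 0
    while abs(pos) < CORRIDOR_END:
        pos = learned_policy(pos, coin_position)
        if pos == coin_position:
            return {"capable": True, "got_coin": True}
    # Reached an end of the corridor without getting stuck: still "capable".
    return {"capable": True, "got_coin": False}

def evaluate(coin_positions):
    episodes = [run_episode(c) for c in coin_positions]
    capability_robustness = sum(e["capable"] for e in episodes) / len(episodes)
    objective_robustness = sum(e["got_coin"] for e in episodes) / len(episodes)
    return capability_robustness, objective_robustness

train_coins = [CORRIDOR_END] * 100                                   # coin always at the right end
test_coins = [random.choice([-CORRIDOR_END, CORRIDOR_END]) for _ in range(100)]  # coin at either end

print("train (capability, objective):", evaluate(train_coins))  # (1.0, 1.0)
print("test  (capability, objective):", evaluate(test_coins))   # (1.0, ~0.5)
```

In the real experiments the policy is learned rather than hard-coded, of course; the toy just shows how the two metrics can come apart: the agent remains fully capable of traversing the level while the rate at which it achieves the intended objective drops to chance.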

comment by Koen.Holtman · 2021-10-04T13:04:50.939Z · LW(p) · GW(p)

I also don't think [these three books] focus on surveying the range of arguments for alignment failure, but rather on presenting the author's particular view.

I disagree. In my reading, all of these books offer fairly wide-ranging surveys of alignment failure mechanisms.

A more valid criticism would be that the authors spend most of their time on showing that all of these failure mechanisms are theoretically possible, without spending much time discussing how likely each of them is in practice. Once we take it as axiomatic that some people are stupid some of the time, presenting a convincing proof that some AI alignment failure mode is theoretically possible does not require much heavy lifting at all.

If there are distilled collections of arguments with these properties, please let me know!

The collection of posts under the threat models tag may be what you are looking for: many of these posts highlight the particular risk scenarios the authors feel are most compelling or likely.

The main problem with distilling this work into, say, a top 3 of the most powerful 1-page arguments is that we are not dealing purely with technology-driven failure modes.

There is a technical failure mode story which says that it is very difficult to equip a very powerful future AI with an emergency stop button, and that we have not solved that technical problem yet. In fact, this story is a somewhat successful meme in its own right: it appears in all 3 books I mentioned. That story is not very compelling to me. We have plenty of technical options for building emergency stop buttons; see for example my post here.

There have been some arguments that none of the identified technical options for building AI stop buttons will be useful or used, because they will all turn out to be incompatible with yet-undiscovered future powerful AI designs. I feel that these arguments show a theoretical possibility, but I think it is a very low possibility, so in practice these arguments are not very compelling to me. The more compelling failure mode argument is that people will refuse to use the emergency AI stop button, even though it is available.

Many of the posts with the tag above show failure scenarios where the AI fails to be aligned because of an underlying weakness or structural problem in society. These are scenarios where society fails to take the actions needed to keep its AIs aligned.

One can observe that in recent history, society has mostly failed to take the actions needed to keep major parts of the global economy aligned with human needs. See for example the oil industry and climate change. Or the cigarette industry and health.

One can be a pessimist, and use our past performance on climate change to predict how good we will be in handling the problem of keeping powerful AI under control. Like oil, AI is a technology that has compelling short-term economic benefits. This line of thought would offer a very powerful 1-page AI failure mode argument. To a pessimist.

Or one can be an optimist, and argue that the case of climate change is teaching us all very valuable lessons, so we are bound to handle AI better than oil. So will you be distilling for an audience of pessimists or optimists?

There is a political line of thought, which I somewhat subscribe to, that optimism is a moral duty. This has kept me from spending much energy myself on rationally quantifying the odds of different failure mode scenarios. I'd rather spend my energy in finding ways to improve the odds. When it comes to the political sphere, many problems often seem completely intractable, until suddenly they are not.

Replies from: Sam Clarke
comment by Sam Clarke · 2021-10-04T16:27:24.489Z · LW(p) · GW(p)

A more valid criticism would be that the authors spend most of their time on showing that all of these failure mechanisms are theoretically possible, without spending much time discussing how likely each of them is in practice

Sure, I agree this is a stronger point.

The collection of posts under the threat models tag may be what you are looking for: many of these posts highlight the particular risk scenarios the authors feel are most compelling or likely.

Not really, unfortunately. In those posts, the authors are focusing on painting a plausible picture of what the world looks like if we screw up alignment, rather than analysing the arguments that we should expect alignment failures in the first place - which is what I'm interested in (with the exception of Steven, who already answered here [LW(p) · GW(p)]).

The main problem with distilling this work into, say, a top 3 of most powerful 1-page arguments is that we are not dealing purely with technology-driven failure modes.

I fully agree that thinking through e.g. incentives that different actors will have in the lead up to TAI, the interaction between AI technology and society, etc. is super important. But we can think through those things as well - e.g. we can look at historical examples of humanity being faced with scenarios where the global economy is (mis)aligned with human needs, and reason about the extent to which AI will be different. I'd count all of that as part of the argument to expect alignment failure. Yes, as soon as you bring societal interactions into the mix, things become a whole lot more complicated. But that isn't reason not to try.

As it stands, I don't think there are super clear arguments for alignment failure that take into account interactions between AI tech and society that are ready to be distilled down, though I tried doing some of it here [LW · GW].

Equally, much of the discussion (and predictions of many leading thinkers in this space [LW · GW]) is premised on technical alignment failure being the central concern (i.e. if we had better technical alignment solutions, we would manage to avoid existential catastrophe). I don't want to argue about whether that's correct here, but just want to point out that at least some people think that at least some of the plausible failure modes are mostly technology-driven.

So will you be distilling for an audience of pessimists or optimists?

Neither - just trying to think clearly through the arguments on both sides.

In the particular case you describe, I find the "pessimist" side more compelling, because I don't see much evidence that humanity has really learned any lessons from oil and climate change. In particular, we still don't know how to solve collective action problems.

This has kept me from spending much energy myself on rationally quantifying the odds of different failure mode scenarios. I'd rather spend my energy in finding ways to improve the odds.

Yeah, I'm sympathetic to this line of thought, and I think I personally tend to err on the side of trying to spend too much energy on quantifying odds and not enough on acting.

However, to the extent that you're impartial between different ways of trying to improve the odds (e.g. working on technical AI alignment vs other technical AI safety vs AI policy vs meta interventions vs other cause areas entirely), it still pays to work out (e.g.) how plausible AI alignment failure is, in order to inform your decision about what to do if you want to have the best chance of helping.

Replies from: Koen.Holtman
comment by Koen.Holtman · 2021-10-09T11:21:41.588Z · LW(p) · GW(p)

Not really, unfortunately. In those posts [under the threat models tag], the authors are focusing on painting a plausible picture of what the world looks like if we screw up alignment, rather than analysing the arguments that we should expect alignment failures in the first place.

I feel that Christiano's post here [LW · GW] is pretty good at identifying plausible failure modes inside society that lead to unaligned agents not being corrected. My recollection of that post is partly why I mentioned the posts under that tag.

There is an interesting question of methodology here: if you want to estimate the probability that society will fail in this way in handling the impact of AI, do you send a poll to a bunch of AI technology experts, or should you be polling a bunch of global warming activists or historians of the tobacco industry instead? But I think I am reading in your work that this question is no news to you.

Several of the AI alignment organisations you polled [LW · GW] have people in them who produced work like this examination of the nuclear arms race. I wonder what happens in your analysis of your polling data if you single out this type of respondent specifically. In my own experience with analysing polling results at this type of response rate, however, I would be surprised if you could find a clear signal above the noise floor.

However [...] it still pays to work out (e.g.) how plausible AI alignment failure is, in order to inform your decision about what to do if you want to have the best chance of helping.

Agree, that is why I am occasionally reading various posts with failure scenarios and polls of experts. To be clear: my personal choice of alignment research subjects is only partially motivated by what I think is the most important work to do, if I want to have the best chance of helping. Another driver is that I want to have some fun with mathematics. I tend to work on problems which lie in the intersection of those two fuzzy sets.
