The Catastrophic Convergence Conjecture
post by TurnTrout · 2020-02-14T21:16:59.281Z · LW · GW · 16 comments
Overfitting the AU landscape
When we act, and others act upon us, we aren’t just changing our ability to do things – we’re shaping the local environment towards certain goals, and away from others.[1] We’re fitting the world to our purposes.
What happens to the AU landscape[2] if a paperclip maximizer takes over the world?[3]
Preferences implicit in the evolution of the AU landscape
Shah et al.'s Preferences Implicit in the State of the World leverages the insight that the world state contains information about what we value. That is, there are agents pushing the world in a certain "direction". If you wake up and see a bunch of vases everywhere, then vases are probably important and you shouldn't explode them.
Similarly, the world is being optimized to facilitate achievement of certain goals. AUs are shifting and morphing, often towards what people locally want done (e.g. setting the table for dinner). How can we leverage this for AI alignment?
Exercise: Brainstorm for two minutes by the clock before I anchor you.
Two approaches immediately come to mind for me. Both rely on the agent focusing on the AU landscape rather than the world state [LW · GW].
Value learning without a prespecified ontology or human model. I have previously criticized [LW · GW] value learning for needing to locate the human within some kind of prespecified ontology (this criticism is not new). By taking only the agent itself as primitive, perhaps we could get around this (we don't need any fancy engineering or arbitrary choices to figure out AUs/optimal value from the agent's perspective).
Force-multiplying AI. Have the AI observe which of its AUs increase the most during some initial period of time, after which it pushes the most-increased AU even further.
In 2016, Jessica Taylor wrote [AF · GW] of a similar idea:
"In general, it seems like "estimating what types of power a benchmark system will try acquiring and then designing an aligned AI system that acquires the same types of power for the user" is a general strategy for making an aligned AI system that is competitive with a benchmark unaligned AI system."
I think the naïve implementation of either idea would fail; e.g., there are a lot of degenerate AUs it might find. However, I'm excited by this because a) the AU landscape evolution is an important source of information, b) it feels like there's something here we could do which nicely avoids ontologies, and c) force-multiplication is qualitatively different than existing proposals.
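To make the naïve force-multiplier concrete, here is a minimal sketch under invented assumptions: the candidate-goal set, the AU estimator, and the drift numbers below are hypothetical stand-ins, not part of any worked-out proposal. The agent passively records its attainable utilities during a warm-up window, then adopts whichever AU rose the most as the goal to push.

```python
import numpy as np

rng = np.random.default_rng(0)
N_GOALS, WARMUP = 5, 20  # hypothetical: five candidate goals, 20-step warm-up

def estimate_aus(t: int) -> np.ndarray:
    """Stand-in for the agent's AU estimates at time t (e.g. estimated optimal
    value for each candidate reward function). Here: noisy linear trends."""
    trends = np.array([0.00, 0.02, -0.01, 0.05, 0.01])  # invented drift rates
    return 0.5 + trends * t + 0.01 * rng.standard_normal(N_GOALS)

# Phase 1: watch how the AU landscape evolves while others act.
history = np.stack([estimate_aus(t) for t in range(WARMUP)])
increase = history[-1] - history[0]

# Phase 2: force-multiply the AU that increased the most during the warm-up.
target = int(np.argmax(increase))
print("AU increases over warm-up:", np.round(increase, 3))
print("Force-multiply candidate goal index:", target)
```

Even this toy version shows the failure mode mentioned above: whatever happened to drift upward during the warm-up window gets locked in, degenerate or not.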
Project: Work out an AU landscape-based alignment proposal.
Why can't everyone be king?
Consider two coexisting agents, each rewarded for gaining power; let's call them Ogre and Giant. Their reward functions[4] (defined over the observations of their partially observable environment) are identical. Will they compete? If so, why?
Let's think about something easier first. Imagine two agents each rewarded for drinking coffee. Obviously, they compete with each other to secure the maximum amount of coffee. Their objectives are indexical, so they aren't aligned with each other – even though they share a reward function.
Suppose both agents could have maximal power at the same time. Remember, Ogre's power can be understood as its ability to achieve a lot of different goals [LW · GW]. Most of Ogre's possible goals need resources; since Giant is also optimally power-seeking, it will act to preserve its own power and prevent Ogre from using those resources. If Giant weren't there, Ogre could better achieve a range of goals. So Ogre can still gain power by dethroning Giant, contradicting the assumption that both had maximal power. They can't both be king.
Just because agents have indexically identical payoffs doesn't mean they're cooperating; to be aligned with another agent, you should want to steer towards the same kinds of futures.
Most agents aren't pure power maximizers. But since the same resource competition usually applies, the reasoning still goes through.
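A crude way to see the conflict: treat power as the fraction of candidate goals an agent could still achieve, where each goal consumes one unit of a shared, finite resource pool. This toy formalization (the resource counts and goal set are mine, not the post's) makes the zero-sum flavor explicit:

```python
RESOURCES = 10   # total units in the shared pool (invented number)
N_GOALS = 10     # candidate goals; assume each costs one unit to achieve

def power(units_held: int) -> float:
    """Fraction of candidate goals achievable with the resources this agent holds."""
    return min(units_held, N_GOALS) / N_GOALS

for ogre_units in range(RESOURCES + 1):
    giant_units = RESOURCES - ogre_units
    print(f"Ogre holds {ogre_units:2d} -> power {power(ogre_units):.1f} | "
          f"Giant power {power(giant_units):.1f}")

# Each agent reaches power 1.0 only by holding the entire pool, so maximizing
# its own power means taking resources (and hence power) from the other.
```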
Objective vs value-specific catastrophes
How useful is our definition of "catastrophe" with respect to humans? After all, literally anything could be a catastrophe for some utility function.[5]
Tying one's shoes is absolutely catastrophic for an agent which only finds value in universes in which shoes have never ever ever been tied. Maybe all possible value in the universe is destroyed if we lose at Go to an AI even once [LW(p) · GW(p)]. But this seems rather silly.
Human values are complicated and fragile [LW · GW]:
Consider the incredibly important human value of "boredom" - our desire not to do "the same thing" over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing - and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.
But the human AU is not so delicate. That is, given that we have power, we can make value; there don’t seem to be arbitrary, silly value-specific catastrophes for us. Given energy and resources and time and manpower and competence, we can build a better future.
In part, this is because a good chunk of what we care about seems roughly additive over time and space; a bad thing happening somewhere else in spacetime doesn't mean you can't make things better where you are; we have many sources of potential value. In part, this is because we often care about the state of the universe more than the exact universe history; our preferences don’t seem to encode arbitrary deontological landmines. More generally, if we did have such a delicate goal, then learning that some particular thing had happened at any point in our universe's past would partially ruin that entire universe for us, forever. That just doesn't sound realistic.
It seems that most of our catastrophes are objective catastrophes.[6]
Consider a psychologically traumatizing event which leaves humans uniquely unable to get what they want, but which leaves everyone else (trout, AI, etc.) unaffected. Our ability to find value is ruined. Is this an example of the delicacy of our AU?
No. This is an example of the delicacy of our implementation; notice also that our AUs for constructing red cubes, reliably looking at blue things, and surviving are also ruined. Our power has been decreased.
Detailing the catastrophic convergence conjecture (CCC)
In general, the CCC follows from two sub-claims. 1) Given we still have control over the future, humanity's long-term AU is still reasonably high (i.e. we haven't endured a catastrophe). 2) Realistically, agents are only incentivized to take control from us in order to gain power for their own goal. I'm fairly sure the second claim is true ("evil" agents are the exception prompting the "realistically").
Also, we're implicitly considering the simplified frame of a single smart AI affecting the world, and not structural risk via the broader consequences of others also deploying similar agents [LW · GW]. This is important but outside of our scope for now.
Unaligned goals tend to have catastrophe-inducing optimal policies because of power-seeking incentives.
Let's say a reward function is aligned[7] if all of its Blackwell-optimal policies are doing what we want (a policy is Blackwell-optimal if it's optimal and doesn't stop being optimal as the agent cares more about the future). Let's say a reward function class is alignable if it contains an aligned reward function.[8] The CCC is talking about impact alignment only, not about intent alignment.
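To make "Blackwell-optimal" concrete, here is a minimal sketch (the three-state MDP and its reward numbers are invented purely for illustration): value iteration is run at several discount rates, and the greedy policy flips as the agent cares more about the future. The policy that remains optimal for every discount close enough to 1 is the Blackwell-optimal one.

```python
import numpy as np

def greedy_policy(P, R, gamma, iters=3000):
    """Value-iterate a small finite MDP; return a greedy deterministic policy.
    P: (A, S, S) transition tensor, R: (S, A) reward matrix."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V = Q.max(axis=1)
    return (R + gamma * np.einsum('ast,t->sa', P, V)).argmax(axis=1)

# Toy 3-state chain: action 0 = "stay", action 1 = "move right"; state 2 absorbs.
P = np.zeros((2, 3, 3))
P[0] = np.eye(3)
P[1] = np.array([[0, 1, 0],
                 [0, 0, 1],
                 [0, 0, 1]])
R = np.array([[0.4, 0.0],   # small immediate reward for staying put in state 0
              [0.3, 0.0],
              [1.0, 1.0]])  # big reward once the absorbing state is reached

for gamma in (0.1, 0.5, 0.9, 0.99):
    print(f"gamma={gamma}: greedy policy {greedy_policy(P, R, gamma)}")
# A short-sighted agent stays put; for all discounts near 1 the "move right"
# policy is optimal, so it is the Blackwell-optimal policy here.
```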
Unaligned goals tend to have catastrophe-inducing optimal policies because of power-seeking incentives.
Not all unaligned goals induce catastrophes, and of those which do induce catastrophes, not all of them do it because of power-seeking incentives. For example, a reward function for which inaction is the only optimal policy is "unaligned" and non-catastrophic. An "evil" reward function which intrinsically values harming us is unaligned and has a catastrophic optimal policy, but not because of power-seeking incentives.
"Tend to have" means that realistically, the reason we're worrying about catastrophe is because of power-seeking incentives – because the agent is gaining power to better achieve its own goal. Agents don't otherwise seem incentivized to screw us over very hard; CCC can be seen as trying to explain adversarial Goodhart [AF · GW] in this context. If CCC isn't true, that would be important for understanding goal-directed alignment incentives and the loss landscape for how much we value deploying different kinds of optimal agents.
While there exist agents which cause catastrophe for other reasons (e.g. an AI mismanaging the power grid could trigger a nuclear war), the CCC claims that the selection pressure which makes these policies optimal tends to come from power-seeking drives.
Unaligned goals tend to have catastrophe-inducing optimal policies because of power-seeking incentives.
"But what about the Blackwell-optimal policy for Tic-Tac-Toe? These agents aren't taking over the world now". The CCC is talking about agents optimizing a reward function in the real world (or, for generality, in another sufficiently complex multiagent environment).
Edit: The initial version of this post talked about "outer alignment"; I changed this to just talk about alignment, because the outer/inner alignment distinction doesn't feel relevant here. What matters is how the AI's policy impacts us; what matters is impact alignment [LW · GW].
Prior work
In fact even if we only resolved the problem for the similar-subgoals case, it would be pretty good news for AI safety. Catastrophic scenarios are mostly caused by our AI systems failing to effectively pursue convergent instrumental subgoals on our behalf, and these subgoals are by definition shared by a broad range of values.
~ Paul Christiano, Scalable AI control
Convergent instrumental subgoals are mostly about gaining power. For example, gaining money is a convergent instrumental subgoal. If some individual (human or AI) has convergent instrumental subgoals pursued well on their behalf, they will gain power. If the most effective convergent instrumental subgoal pursuit is directed towards giving humans more power (rather than giving alien AI values more power), then humans will remain in control of a high percentage of power in the world.
If the world is not severely damaged in a way that prevents any agent (human or AI) from eventually colonizing space (e.g. severe nuclear winter), then the percentage of the cosmic endowment that humans have access to will be roughly close to the percentage of power that humans have control of at the time of space colonization. So the most relevant factors for the composition of the universe are (a) whether anyone at all can take advantage of the cosmic endowment, and (b) the long-term balance of power between different agents (humans and AIs).
I expect that ensuring that the long-term balance of power favors humans constitutes most of the AI alignment problem...
~ Jessica Taylor, Pursuing convergent instrumental subgoals on the user's behalf doesn't always require good priors [AF · GW]
In planning and activity research there are two common approaches to matching agents with environments. Either the agent is designed with the specific environment in mind, or it is provided with learning capabilities so that it can adapt to the environment it is placed in. In this paper we look at a third and underexploited alternative: designing agents which adapt their environments to suit themselves... In this case, due to the action of the agent, the environment comes to be better fitted to the agent as time goes on. We argue that [this notion] is a powerful one, even just in explaining agent-environment interactions.
Hammond, Kristian J., Timothy M. Converse, and Joshua W. Grass. "The stabilization of environments." Artificial Intelligence 72.1-2 (1995): 305-327. ↩︎
Thinking about overfitting the AU landscape implicitly involves a prior distribution over the goals of the other agents in the landscape. Since this is just a conceptual tool, it's not a big deal. Basically, you know it when you see it. ↩︎
Overfitting the AU landscape towards one agent's unaligned goal is exactly what I meant when I wrote the following in Towards a New Impact Measure [LW · GW]:
Unfortunately, $u_A$ is aligned almost never,[9] so we have to stop our reinforcement learners from implicitly interpreting the learned utility function as all we care about. We have to say, "optimize the environment some according to the utility function you've got, but don't be a weirdo by taking us literally and turning the universe into a paperclip factory. Don't overfit the environment to $u_A$, because that stops you from being able to do well for other utility functions."
In most finite Markov decision processes, there does not exist a reward function whose optimal value function is $\text{POWER}$ (defined as "the ability to achieve goals in general" in my paper), because $\text{POWER}$ often violates smoothness constraints on the on-policy optimal value fluctuation (AFAICT, a new result of possibility theory, even though you could prove it using classical techniques). That is, I can show that optimal value can't change too quickly from state to state while the agent is acting optimally, but $\text{POWER}$ can drop off very quickly.
This doesn't matter for Ogre and Giant, because we can still find a reward function whose unique optimal policy navigates to the highest power states. ↩︎
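A sketch of the smoothness claim, under simplifying assumptions I am adding (deterministic transitions, rewards in $[0,1]$); it only gestures at why on-policy optimal value can't fluctuate arbitrarily while $\text{POWER}$ can:

```latex
% Assume deterministic transitions and rewards in [0,1] (my simplification).
% Let a^* be an optimal action at s, leading to s'. Bellman optimality gives
\[
  V^*(s) = R(s, a^*) + \gamma V^*(s')
  \quad\Longrightarrow\quad
  \frac{V^*(s) - 1}{\gamma} \;\le\; V^*(s') \;\le\; \frac{V^*(s)}{\gamma}.
\]
% Along on-policy transitions, the successor's optimal value is pinned near the
% current one. POWER has no such constraint: stepping through a "one-way door"
% can collapse how many goals remain achievable, so POWER can drop sharply
% between adjacent states.
```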
In most finite Markov decision processes, most reward functions do not have such value fragility. Most reward functions have several ways of accumulating reward. ↩︎
When I say "an objective catastrophe destroys a lot of agents' abilities to get what they want", I don't mean that the agents have to actually be present in the world. Breaking a fish tank destroys a fish's ability to live there, even if there's no fish in the tank. ↩︎
This idea comes from Evan Hubinger's Outer alignment and imitative amplification [LW · GW]:
Intuitively, I will say that a loss function is outer aligned at optimum if all the possible models that perform optimally according to that loss function are aligned with our goals—that is, they are at least trying to do what we want. More precisely, let $\mathcal{M}$ be the space of possible models and $\mathcal{L}$ the space of loss functions. For a given loss function $L \in \mathcal{L}$, let $\mathcal{M}_L = \operatorname{argmin}_{M \in \mathcal{M}} L(M)$. Then, $L$ is outer aligned at optimum if, for all $M$ such that $M \in \mathcal{M}_L$, $M$ is trying to do what we want.
Some large reward function classes are probably not alignable [AF · GW]; for example, consider all Markovian linear functionals over a webcam's pixel values. ↩︎
I disagree with my usage of "aligned almost never" on a technical basis: assuming a finite state and action space and considering the maxentropy reward function distribution, there must be a positive measure set of reward functions for which the/a human-aligned policy is optimal. ↩︎
16 comments
comment by Kaj_Sotala · 2020-02-17T13:42:49.044Z · LW(p) · GW(p)
Human values are complicated and fragile [LW · GW]
It's not clear to me whether you actually meant to suggest this as well, but this line of reasoning makes me wonder if many of our values are actually not that complicated and fragile after all, instead being connected to AU considerations. E.g. self-determination theory's basic needs of autonomy, competence and relatedness seem like different ways of increasing your AU, and the boredom example might not feel catastrophic because of some highly arbitrary "avoid boredom" bit in the utility function, but rather because looping a single experience over and over isn't going to help you maintain your ability to avoid catastrophes. (That is, our motivations and values optimize for [LW · GW] maintaining AU among other things, even if that is not the thing that those values feel like from the inside.)
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-17T17:25:49.397Z · LW(p) · GW(p)
Intriguing. I don't know whether that suggests our values aren't as complicated as we thought, or whether the pressures which selected them are not complicated.
While I'm not an expert on the biological intrinsic motivation literature, I think it's at least true that some parts of our values were selected for because they're good heuristics for maintaining AU. This is the thing that MCE was trying to explain:
The paper’s central notion begins with the claim that there is a physical principle, called “causal entropic forces,” that drives a physical system toward a state that maximizes its options for future change. For example, a particle inside a rectangular box will move to the center rather than to the side, because once it is at the center it has the option of moving in any direction. Moreover, argues the paper, physical systems governed by causal entropic forces exhibit intelligent behavior.
I think they have this backwards: intelligent behavior often results in instrumentally convergent behavior (and not necessarily the other way around). Similarly, Salge et al. overview the behavioral empowerment hypothesis:
The adaptation brought about by natural evolution produces organisms that, in the absence of specific goals, behave as if they were maximizing [mutual information between their actions and future observations].
As I discuss in section 6.1 of Optimal Farsighted Agents Tend to Seek Power, I think that "ability to achieve goals in general" (power) is a better intuitive and technical notion than information-theoretic empowerment. I think it's pretty plausible that we have heuristics which, all else equal, push us to maintain or increase our power.
comment by Rafael Harth (sil-ver) · 2020-07-26T08:23:24.117Z · LW(p) · GW(p)
Attempt to summarize
- The AU landscape naturally leads to competition because many goals imply seeking power, and [A acquiring a lot of power] tends to be in conflict with [B acquiring a lot of power] because, well, the resources only exist once.
- The CCC (catastrophic convergence conjecture) argues that, therefore, goals that are unaligned with us tend to cause catastrophic consequences if given to a powerful agent. It's (right now) informal.
- The power-framing leads to a division of catastrophes into value-specific vs. objective, where the former ones depend on the goals of an agent, whereas the latter rely on the instrumental convergence idea, i.e., they lower the AU for those goals which are instrumentally convergent (like "stay alive") and thus lower the AU for lots of different agents (who have different goals).
- AU is probably less fragile than values.
- The environment contains information about what we value, and can be seen as an inspiration for AI alignment approaches. These approaches arguably work better in the AU framing as opposed to the classical values framing.
comment by Steven Byrnes (steve2152) · 2020-02-15T10:26:49.957Z · LW(p) · GW(p)
I have previously criticized value learning for needing to locate the human within some kind of prespecified ontology (this criticism is not new). By taking only the agent itself as primitive, perhaps we could get around this (we don't need any fancy engineering or arbitrary choices to figure out AUs/optimal value from the agent's perspective).
Wouldn't you need to locate the abstract concept of AU within the AI's ontology? Is that easier? Or sorry if I'm misunderstanding.
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-15T14:17:00.382Z · LW(p) · GW(p)
Wouldn't you need to locate the abstract concept of AU within the AI's ontology? Is that easier? Or sorry if I'm misunderstanding.
To the contrary, an AU is naturally calculated from reward, one of the few things that is ontologically fundamental in the paradigm of RL. As mentioned in the last post, the AU of reward function $R$ is $V^*_R$, which calculates the maximum possible $R$-return from a given state.
This will become much more obvious in the AUP empirical post.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-02-15T19:00:52.799Z · LW(p) · GW(p)
Sure. Looking forward to that. My current intuition is: Humans have a built-in reward system based on (mumble mumble) dopamine, but the existence of that system doesn't make it easy for us to understand dopamine, or reward functions in general, or anything like that, nor does it make it easy for us to formulate and pursue goals related to those things. It takes quite a bit of education and beautifully-illustrated blog posts to get us to that point :-D
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-15T23:52:52.794Z · LW(p) · GW(p)
Note that when I said
(we don't need any fancy engineering or arbitrary choices to figure out AUs/optimal value from the agent's perspective).
I meant we could just consider how the agent's AUs are changing without locating a human in the environment.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-02-16T02:26:14.572Z · LW(p) · GW(p)
Cool. We're probably on the same page then.
comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-02-15T04:49:26.533Z · LW(p) · GW(p)
I think an important consideration is the degree of catastrophe. Even the asteroid strike, which is catastrophic to many agents on many metrics, is not catastrophic on every metric, not even every metric humans actually care about. An easy example of this is prevention of torture, which the asteroid impact accomplishes quite smoothly, along with almost every other negative goal. The asteroid strike is still very bad for most agents affected, but it could be much, much worse, as with the "evil" utility function you alluded to, which is very bad for humans on every metric, not just positive ones. Calling both of these things a "catastrophe" seems to sweep that difference under the rug.
With this in mind, "catastrophe" as defined here seems to be less about negative impact on utility, and more about wresting control of the utility function away from humans. Which seems bound to happen even in the best case where a FAI takes over. It seems a useful concept if that is what you are getting at, but "catastrophe" seems to have confusing connotations, as if a "catastrophe" is necessarily the worst thing possible and should be avoided at all costs. If an antialigned "evil" AI were about to be released with high probability, and you had a paperclip maximizer in a box, releasing the paperclip maximizer would be the best option, even though that moves the chance of catastrophe from high probability to indistinguishable from certainty.
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-15T05:19:03.189Z · LW(p) · GW(p)
Calling both of these things a "catastrophe" seems to sweep that difference under the rug.
Sure, but just like it makes sense to be able to say that a class of outcomes is "good" without every single such outcome being maximally good, it makes sense to have a concept for catastrophes, even if they're not literally the worst things possible.
Which seems bound to happen even in the best case where a FAI takes over.
Building a powerful agent helping you get what you want, doesn't destroy your ability to get what you want. By my definition, that's not a catastrophe.
as if a "catastrophe" is necessarily the worst thing possible and should be avoided at all costs. If an antialigned "evil" AI were about to be released with high probability, and you had a paperclip maximizer in a box, releasing the paperclip maximizer would be the best option, even though that moves the chance of catastrophe from high probability to indistinguishable from certainty.
Correct. Again, I don't mean to say that any catastrophe is literally the worst outcome possible.
comment by TurnTrout · 2020-11-22T18:16:01.148Z · LW(p) · GW(p)
The catastrophic convergence conjecture was originally formulated in terms of "outer alignment catastrophes tending to come from power-seeking behavior." I think that this was a mistake: I meant to talk about impact alignment [LW · GW] catastrophes tending to be caused by power-seeking. I've updated the post accordingly.
comment by Charlie Steiner · 2020-02-17T05:08:57.704Z · LW(p) · GW(p)
How much are you thinking about stability under optimization? Most objective catastrophes are also human catastrophes. But if a powerful agent is trying to achieve some goal while avoiding objective catastrophes, it seems like it's still incentivized to dethrone humans - to cause basically the most human-catastrophic thing that's not objective-catastrophic.
Replies from: TurnTrout↑ comment by TurnTrout · 2020-02-17T05:20:02.466Z · LW(p) · GW(p)
I'm not thinking of optimizing for "not an objective catastrophe" directly - it's just a useful concept. The next post [LW · GW] covers this.
comment by Martín Soto (martinsq) · 2023-05-15T16:06:26.022Z · LW(p) · GW(p)
More generally, if we did have such a delicate goal, it would be the case that if we learned that a particular thing had happened at any point in the past in our universe, that entire universe would be partially ruined for us forever. That just doesn't sound realistic.
It does sound realistic given how much we disvalue extreme suffering, and how much we regret events like the Holocaust (even while acknowledging that we need to look forward, that it is still better for the future to be improved, that we still have the potential to do so, etc.).
Maybe I'm misunderstanding you. If you meant "learning this past thing is qualitatively different to any positive thing we could implement going forward", then I agree this doesn't seem to be the case. But if you just meant "our utility would be heavily negative, because part of the universe has been devoted to that thing", then I do think it's actually the case for most humans' revealed preferences. Like, everything just continues to add up into the same quantitative utility basket (instead of being qualitatively different), but maybe that past negative sum was so large that it's very difficult in our universe to overpower it.
comment by Joe Collman (Joe_Collman) · 2021-02-07T23:27:45.257Z · LW(p) · GW(p)
I understand what you mean with the CCC (and that this seems a bit of a nit-pick!), but I think the wording could usefully be clarified.
As you suggest here [? · GW], the following is what you mean:
CCC says (for non-evil goals) "if the optimal policy is catastrophic, then it's because of power-seeking"
However, that's not what the CCC currently says.
E.g. compare:
[Unaligned goals] tend to [have catastrophe-inducing optimal policies] because of [power-seeking incentives].
[People teleported to the moon] tend to [die] because of [lack of oxygen].
The latter doesn't lead to the conclusion: "If people teleported to the moon had oxygen, they wouldn't tend to die."
Your meaning will become clear to anyone who reads this sequence.
For anyone taking a more cursory look, I think it'd be clearer if your clarification were the official CCC:
CCC: (for non-evil goals) "if the optimal policy is catastrophic, then it's because of power-seeking"
Currently, I worry about people pulling an accidental motte-and-bailey on themselves, and thinking that [weak interpretation of CCC] implies [conclusions based on strong interpretation]. (or thinking that you're claiming this)
comment by Pattern · 2020-02-14T21:40:25.191Z · LW(p) · GW(p)
For example, a reward function for which inaction is the only optimal policy is "unaligned" and non-catastrophic.
Though if a system for preventing catastrophe (say, an asteroid impact prevention/mitigation system) had its reward system replaced with the inaction reward system, or was shut down at a critical time, that replacement/shutdown could be a catastrophic act.