Refining the Sharp Left Turn threat model, part 1: claims and mechanisms
post by Vika, Vikrant Varma (amrav), Ramana Kumar (ramana-kumar), Mary Phuong (mary-phuong) · 2022-08-12T15:17:38.304Z · LW · GW · 4 comments
This is a link post for https://vkrakovna.wordpress.com/2022/11/25/refining-the-sharp-left-turn-threat-model/
Contents
- What are the main claims of the “sharp left turn” threat model?
  - Claim 1. Capabilities will generalize far (i.e., to many domains)
    - Claim 1a [Optional]: Capabilities (in different "domains") will all generalize at the same time
    - Claim 1b [Optional]: Capabilities will generalize far in a discrete phase transition (rather than continuously)
  - Claim 2. Alignment techniques that worked previously will fail during this transition
  - Claim 3. Humans can’t intervene to prevent or align this transition
- Arguments for the claims in this threat model
- Mechanisms for capabilities generalizing far (Claim 1)
- Mechanisms for a rapid phase transition (Claim 1b)
This is our current distillation of the sharp left turn [LW · GW] threat model and an attempt to make it more concrete. We will discuss our understanding of the claims made in this threat model, and propose some mechanisms for how a sharp left turn could happen. This is a work in progress, and we welcome feedback and corrections.
What are the main claims of the “sharp left turn” threat model?
Claim 1. Capabilities will generalize far (i.e., to many domains)
There is an AI system that:
- Performs well: it can accomplish impressive feats, or achieve high scores on valuable metrics.
- Generalizes: it performs well in new domains that were not optimized for during training, with no domain-specific tuning.
Generalization is a key component of this threat model because we're not going to directly train an AI system for the task of disempowering humanity, so for the system to be good at this task, the capabilities it develops during training need to be more broadly applicable.
Some optional sub-claims can be made that increase the risk level of the threat model:
Claim 1a [Optional]: Capabilities (in different "domains") will all generalize at the same time
Claim 1b [Optional]: Capabilities will generalize far in a discrete phase transition (rather than continuously)
Claim 2. Alignment techniques that worked previously will fail during this transition
- Qualitatively different alignment techniques are needed: the ways existing techniques work apply to earlier versions of the AI technology, but not to the new version, because the new version either gets its capabilities through something new or jumps to a qualitatively higher capability level (even if by “scaling” the same mechanisms).
Claim 3. Humans can’t intervene to prevent or align this transition
- Path 1: humans don't notice because it's too fast (or they aren’t paying attention)
- Path 2: humans notice but are unable to make alignment progress in time
- Some combination of these paths, as long as the end result is insufficient alignment
Arguments for the claims in this threat model
- Claim 1: There is a "core" of general intelligence, i.e., a most effective way of updating beliefs and selecting actions (Ruin #22). Speculation about what this core is: consequentialism / expected utility (EU) maximization / "doing things for reasons".
- Claim 1a: Capability gains from intelligence are highly correlated (Ruin #15)
- Claim 2: There is no simple core for alignment (Ruin #22); corrigibility is anti-natural (Ruin #23)
- Claims 1 & 2: arguments in Will capabilities generalize more? [LW · GW] (+ Ruin #21)
- Claim 3: We can't coordinate to avoid AGI (Ruin #4)
Mechanisms for capabilities generalizing far (Claim 1)
Capabilities will generalize far if learning / reasoning / cognitive work is done outside of the optimization process, similarly to how human cultural evolution happens outside genetic evolution and proceeds faster. Here are some mechanisms for cognitive work getting done outside the optimization process:
- Mechanisms proposed by Paul Christiano [LW(p) · GW(p)]: performing search internally, natural selection occurring internally, reasoning about how to think better, deploying subagents
- The AI develops a way to learn and improve that outpaces SGD, similar to how humans outpace evolution via cultural transmission and within-lifetime learning. E.g., it writes Google docs to itself, or searches over prompts (or whatever the prompt analog is at that point); a minimal sketch of such a loop appears after this list.
- The optimization process finds a general meta-learning algorithm, which replaces the previous solution of domain-specific heuristics and greatly increases the rate of learning
- Sequential reasoning: a sharp increase in effective sequential memory/cognitive time between the "aligned" regime and the sharp left turn regime. Ways in which this can happen:
- The model is just given access to memory. Its knowledge grows over time, and reaches a critical point which causes a sharp left turn. Alignment schemes before the critical point appear to work, but not after.
- We try to restrict the model's memory, but fail. E.g. the model's outputs are used in the world, and the subsequent inputs carry information about previous outputs.
- Some input causes the model to access a new, faster mode of thinking (e.g., an explanation of how to do step-by-step reasoning in some domain). The same compute that previously powered, say, one subjective day of thinking now powers one subjective week of thinking on this new input. This allows the model to evade alignment schemes.
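To make the memory-based mechanisms above more concrete, here is a minimal, hypothetical Python sketch of an agent loop in which the model's weights are frozen and all improvement comes from an accumulating external scratchpad fed back into its context. The `query_model` stub and the task interface are assumptions for illustration only, not any particular system's API.

```python
# Hypothetical sketch: all "learning" below happens outside gradient descent.
# The model's weights never change; its effective capability grows only via
# notes it writes to itself and re-reads on later tasks.

def query_model(prompt: str) -> str:
    """Stand-in for a frozen pretrained model (an assumption, not a real API)."""
    raise NotImplementedError

def solve_with_scratchpad(tasks: list[str]) -> list[str]:
    scratchpad: list[str] = []  # external memory, persists across tasks
    answers: list[str] = []
    for task in tasks:
        # The model sees its own accumulated notes alongside the new task.
        notes = "\n".join(scratchpad[-50:])  # crude cap on memory size
        answer = query_model(f"Notes so far:\n{notes}\n\nTask: {task}\nAnswer:")
        answers.append(answer)
        # The model also writes a reusable lesson for later tasks -- this step
        # is the cognitive work done outside the original optimization process.
        lesson = query_model(
            f"Task: {task}\nAnswer: {answer}\nWrite one short reusable lesson:"
        )
        scratchpad.append(lesson)
    return answers
```

In a setup like this, an alignment technique that only shapes the weights has no direct handle on the scratchpad, which is why the memory-restriction failures described above matter.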
Mechanisms for a rapid phase transition (Claim 1b)
A rapid phase transition happens if there is a capability overhang: the AI system is improving at various skills continuously, but its improvement in many domains is bottlenecked on one specific skill, and at some point it receives some input that makes its existing capabilities much more effective. Here are some ways this can happen:
- The system acquires situational awareness: its world model now includes an understanding of its own place in the world and of how its training and deployment work. It can plan using this knowledge, which leads to more effective use of existing capabilities.
- Analogy to few-shot prompting: the relevant knowledge and capabilities are already installed in the trained artifact by some generic pretraining optimization process. Putting the artifact into the “right” situation (e.g., giving it a few-shot prompt) reveals the capabilities relevant to that situation, so any alignment technique that goes through gradient updates becomes irrelevant (see the sketch after this list).
- Discovering a more effective way to make use of low-quality data leads to more effective use of existing capabilities.
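As a concrete illustration of the few-shot prompting analogy above, here is a hypothetical sketch: a frozen model, given a few examples in its prompt, performs a task it was never explicitly fine-tuned on. No gradient update occurs; the prompt merely elicits a capability already present in the weights. The `query_model` stub is again an assumption for illustration.

```python
# Hypothetical sketch: no training step occurs here, yet behaviour on the new
# task changes because the capability was already installed by pretraining --
# the "right" situation (a few-shot prompt) reveals it.

def query_model(prompt: str) -> str:
    """Stand-in for a frozen pretrained model (an assumption, not a real API)."""
    raise NotImplementedError

FEW_SHOT_PROMPT = """Translate English to French.
English: cheese -> French: fromage
English: house -> French: maison
English: {word} -> French:"""

def elicit_translation(word: str) -> str:
    # Any alignment technique applied via gradient updates never "sees" this
    # step, which is the sense in which it becomes irrelevant here.
    return query_model(FEW_SHOT_PROMPT.format(word=word))

# Illustrative usage: elicit_translation("bread") might return "pain", even
# though translation was never a targeted training objective.
```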
We will discuss mechanisms for Claim 2 in a future post [LW · GW].
4 comments
comment by Vika · 2023-12-20T19:52:17.210Z · LW(p) · GW(p)
I still endorse the breakdown of "sharp left turn" claims in this post. Writing this helped me understand the threat model better (or at all) and make it a bit more concrete.
This post could be improved by explicitly relating the claims to the "consensus" threat model [LW · GW] summarized in Clarifying AI X-risk. Overall, SLT seems like a special case of that threat model, which makes a subset of the SLT claims:
- It includes Claim 1 (capabilities generalize far) and Claim 3 (humans fail to intervene), but not Claims 1a/1b (simultaneous / discontinuous generalization) or Claim 2 (alignment techniques stop working).
- It probably relies on some weaker version of Claim 2 (alignment techniques failing to apply to more powerful systems in some way). This seems necessary for deceptive alignment to arise, e.g. if our interpretability techniques fail to detect deceptive reasoning. However, I expect that most ways this could happen would not be due to the alignment techniques being fundamentally inadequate for the capability transition to more powerful systems (the strong version of Claim 2 used in SLT).
comment by joshc (joshua-clymer) · 2022-08-14T18:57:37.651Z · LW(p) · GW(p)
Claim 1: there is an AI system that (1) performs well ... (2) generalizes far outside of its training distribution.
Don't humans provide an existence proof of this? The point about there being a 'core' of general intelligence seems unnecessary.
Replies from: ramana-kumar, weverka
↑ comment by Ramana Kumar (ramana-kumar) · 2022-08-17T14:31:26.123Z · LW(p) · GW(p)
I agree that humans satisfying the conditions of claim 1 is an argument in favour of it being possible to build machines that do the same. A couple of points: I think the threat model would posit the core of general intelligence as the reason both why humans can do these things and why the first AGI we build might also do these things. Claim 1 should perhaps be more clear that it's not just saying such an AI design is possible, but that it's likely to be found and built.
↑ comment by weverka · 2022-11-25T16:18:11.545Z · LW(p) · GW(p)
No, humans do not satisfy the assumptions adopted here, unless you make this more specific.
The definition of Generalize is given above as: "Generalizes, i.e., performs well in new domains, which were not optimized for during training, with no domain-specific tuning".
Whether you think humans do this depends on what you take "new domains" and "perform well" to mean.
Humans taught to crawl on hardwood floors can crawl on carpeted floors. Humans taught to hunt fly larva will need further training to hunt big game.