0. CAST: Corrigibility as Singular Target
post by Max Harms (max-harms) · 2024-06-07T22:29:12.934Z · LW · GW · 12 comments
What the heck is up with “corrigibility”? For most of my career, I had a sense that it was a grab-bag of properties that seemed nice in theory but hard to get in practice, perhaps due to being incompatible with agency.
Then, last year, I spent some time revisiting my perspective, and I concluded that I had been deeply confused about what corrigibility even was. I now think that corrigibility is a single, intuitive property, which people can learn to emulate without too much work and which is deeply compatible with agency. Furthermore, I expect that even with prosaic training methods, there’s some chance of winding up with an AI agent that’s inclined to become more corrigible over time, rather than less (as long as the people who built it understand corrigibility and want that agent to become more corrigible). Through a slow, gradual, and careful process of refinement, I see a path forward where this sort of agent could ultimately wind up as a (mostly) safe superintelligence. And, if that AGI is in the hands of responsible governance, this could end the acute risk period [? · GW], and get us to a good future.
This is not the path we are currently on. As far as I can tell, frontier labs do not understand corrigibility deeply, and are not training their models with corrigibility as the goal. Instead, they are racing ahead with a vague notion of “ethical assistance” or “helpful+harmless+honest” and a hope that “we’ll muddle through like we always do” or “use AGI to align AGI” or something with similar levels of wishful thinking. Worse, I suspect that many researchers encountering the concept of corrigibility will mistakenly believe that they understand it and are working to build it into their systems.
Building corrigible agents is hard and fraught with challenges. Even in an ideal world where the developers of AGI aren’t racing ahead, but are free to go as slowly as they wish and take all the precautions I indicate, there are good reasons to think doom is still likely. I think that the most prudent course of action is for the world to shut down capabilities research until our science and familiarity with AI catches up and we have better safety guarantees. But if people are going to try to build AGI despite the danger, they should at least have a good grasp on corrigibility and be aiming for it as the singular target, rather than as part of a mixture of goals (as is the current norm).
My goal with these documents is primarily to do three things:
- Advance our understanding of corrigibility, especially on an intuitive level.
- Explain why designing AGI with corrigibility as the sole target (CAST) is more attractive than other potential goals, such as full alignment, or local preference satisfaction.
- Propose a novel formalism for measuring corrigibility as a trailhead to future work.
Alas, my writing is not currently very distilled. Most of these documents are structured in the format that I originally chose for my private notes. I’ve decided to publish them in this style and get them in front of more eyes rather than spend time editing them down. Nevertheless, here is my attempt to briefly state the key ideas in my work:
- Corrigibility is the simple, underlying generator behind obedience, conservatism, willingness to be shut down and modified, transparency, and low impact.
- It is a fairly simple, universal concept that is not too hard to get a rich understanding of, at least on the intuitive level.
- Because of its simplicity, we should expect AIs to be able to emulate corrigible behavior fairly well with existing tech/methods, at least within familiar settings.
- Aiming for CAST is a better plan than aiming for human values (i.e. CEV), helpfulness+harmlessness+honesty, or even a balanced collection of desiderata, including some of the things corrigibility gives rise to.
- If we ignore the possibility of halting the development of machines capable of seizing control of the world, we should try to build CAST AGI.
- CAST is a target, rather than a technique, and as such it’s compatible both with prosaic methods and superior architectures.
- Even if you suspect prosaic training is doomed, CAST should still be the obvious target once a non-doomed method is found.
- Despite being simple, corrigibility is poorly understood, and we are not on track for having corrigible AGI, even if reinforcement learning is a viable strategy.
- Contra Paul Christiano, we should not expect corrigibility to emerge automatically from systems trained to satisfy local human preferences.
- Better awareness of the subtleties and complexities of corrigibility is likely to be essential to the construction of AGI going well.
- Corrigibility is nearly unique among all goals for being simultaneously useful and non-self-protective.
- This property of non-self-protection means we should suspect AIs that are almost-corrigible will assist, rather than resist, being made more corrigible, thus forming an attractor-basin around corrigibility, such that almost-corrigible systems gradually become truly corrigible by being modified by their creators.
- If this effect is strong enough, CAST is a pathway to safe superintelligence via slow, careful training using adversarial examples and other known techniques to refine AIs capable of shallow approximations of corrigibility into agents that deeply seek to be corrigible at their heart.
- There is also reason to suspect that almost-corrigible AIs will become less corrigible over time, due to corrigibility being “anti-natural.” It is unclear to me which of these forces will win out in practice (the toy sketch after this list illustrates the tension).
- There are several reasons to expect building AGI to be catastrophic, even if we work hard to aim for CAST.
- Most notably, corrigible AI is still extremely vulnerable to misuse, and we must ensure that superintelligent AGI is only ever corrigible to wise representatives.
- My intuitive notion of corrigibility can be straightforwardly leveraged to build a formal, mathematical measure.
- Using this measure, we can construct a better solution to the shutdown-button toy problem than any I have seen elsewhere.
- This formal measure is still lacking, and almost certainly doesn’t actually capture what I mean by “corrigibility.”
- There is lots of opportunity for more work on corrigibility, some of which is shovel-ready for theoreticians and engineers alike.
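To make the attractor-basin versus anti-naturality tension above more concrete, here is a minimal toy simulation (nothing in the argument depends on it; the acceptance rule, step size, and drift are arbitrary assumptions chosen purely for illustration): the agent’s corrigibility level c sets how often it accepts a corrigibility-increasing modification from its principal, while a constant “anti-natural” drift pushes c back down.

```python
import random

def simulate(c0, steps=1000, delta=0.01, drift=0.002, seed=0):
    """Toy dynamics for an agent's corrigibility level c in [0, 1].

    Illustrative assumptions (not claims about real training):
    - Each step the principal proposes a small corrigibility-increasing
      modification (+delta), which the agent accepts with probability c,
      i.e. more-corrigible agents resist correction less.
    - A constant "anti-natural" drift pushes c down by `drift` each step.
    """
    rng = random.Random(seed)
    c = c0
    for _ in range(steps):
        if rng.random() < c:
            c += delta
        c -= drift
        c = min(max(c, 0.0), 1.0)
    return c

for c0 in (0.1, 0.3, 0.6, 0.9):
    print(f"start c = {c0:.1f}  ->  final c = {simulate(c0):.2f}")
```

In this toy model the expected change per step is c·delta − drift, so there is a threshold (here c = drift/delta = 0.2) above which corrections win and c climbs toward 1, and below which the anti-natural drift wins and c decays toward 0. Whether real training dynamics look anything like this is exactly the open question flagged above.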
Note: I’m a MIRI researcher, but this agenda is the product of my own independent research, and as such one should not assume it’s endorsed by other research staff at MIRI.
Note: Much of my thinking on the topic of corrigibility is heavily influenced by the work of Paul Christiano, Benya Fallenstein, Eliezer Yudkowsky, Alex Turner, and several others. My writing style involves presenting things from my perspective, rather than leaning directly on the ideas and writing of others, but I want to make it very clear that I’m largely standing on the shoulders of giants, and that much of my optimism in this research comes from noticing convergent lines of thought with other researchers. Thanks to Nate Soares, Steve Byrnes, Jesse Liptrap, Seth Herd, Ross Nordby, Jeff Walker, Haven Harms, and Duncan Sabien for early feedback. I also want to especially thank Nathan Helm-Burger for his in-depth collaboration on the research and generally helping me get unconfused.
Overview
1. The CAST Strategy [AF · GW]
In The CAST Strategy [AF · GW], I introduce the property corrigibility, why it’s an attractive target, and how we might be able to get it, even with prosaic methods. I discuss the risks of making corrigible AI and why trying to get corrigibility as one of many desirable properties to train an agent to have (instead of as the singular target) is likely a bad idea. Lastly, I do my best to lay out the cruxes of this strategy and explore potent counterarguments, such as anti-naturality and whether corrigibility can scale. These counterarguments show that even if we can get corrigibility, we should not expect it to be easy or foolproof.
2. Corrigibility Intuition [AF · GW]
In Corrigibility Intuition [AF · GW], I try to give a strong intuitive handle on corrigibility as I see it. This involves a collection of many stories of a CAST agent behaving in ways that seem good, as well as a few stories of where a CAST agent behaves sub-optimally. I also attempt to contrast corrigibility with nearby concepts through vignettes and direct analysis, which includes a discussion of why we should not expect frontier labs, given current training targets, to produce corrigible agents.
3a. Towards Formal Corrigibility [AF · GW]
In Towards Formal Corrigibility [AF · GW], I attempt to sharpen my description of corrigibility. I try to anchor the notion of corrigibility, ontologically, as well as clarify language around concepts such as “agent” and “reward.” Then I begin to discuss the shutdown problem, including why it’s easy to get basic shutdownability, but hard to get the kind of corrigible behavior we actually desire. I present the sketch of a solution to the shutdown problem, and discuss manipulation, which I consider to be the hard part of corrigibility.
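As a taste of why shutdownability alone isn’t enough, here is a crude expected-utility calculation in the spirit of the incentive problem from the 2015 “Corrigibility” paper (the numbers and action set are made up for illustration; this is not the solution sketched in the post): the agent can nudge how likely the principal is to press the shutdown button, and collects U_task if it keeps running or U_shutdown if it is shut down.

```python
# Toy numbers, purely illustrative: each action maps to the probability that
# the principal ends up pressing the shutdown button.
ACTIONS = {"persuade_not_press": 0.05, "just_work": 0.50, "persuade_press": 0.95}

def expected_utility(press_prob, u_task=10.0, u_shutdown=0.0):
    # Agent is shut down with probability press_prob, otherwise finishes the task.
    return press_prob * u_shutdown + (1 - press_prob) * u_task

for u_shutdown in (0.0, 5.0, 10.0, 15.0):
    best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a], u_shutdown=u_shutdown))
    print(f"U_shutdown = {u_shutdown:>4}: optimal action = {best}")
```

Except at the knife-edge where U_shutdown exactly equals U_task (the point that “utility indifference” schemes try to engineer), the optimal policy either manipulates the principal away from the button or toward it. Merely being willing to power down once the button is pressed does nothing to remove that incentive, which is roughly why I treat manipulation as the hard part.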
3b. Formal (Faux) Corrigibility [AF · GW] ← the mathy one
In Formal (Faux) Corrigibility [AF · GW], I build a fake framework for measuring empowerment in toy problems, and suggest that it’s at least a start at measuring manipulation and corrigibility. This metric, at least in simple settings such as a variant of the original stop button scenario, produces corrigible behavior. I extend the notion to indefinite games played over time, and end by criticizing my own formalism and arguing that data-based methods for building AGI (such as prosaic machine-learning) may be significantly more robust (and therefore better) than methods that heavily trust this sort of formal analysis.
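For a flavor of the kind of construction involved, here is a much cruder stand-in than the framework in the post (deterministic, one-shot, and with a made-up world model): score each AI action by how much choice it leaves the principal, measured as the log of the number of distinct outcomes the principal can still bring about, which equals the channel capacity from the principal’s action to the outcome in the deterministic case.

```python
import math

AI_ACTIONS = ["disable_button", "do_task", "shut_self_down"]
HUMAN_ACTIONS = ["press_button", "wait"]

def outcome(ai_action, human_action):
    """Made-up deterministic toy world: the AI acts, then the human acts."""
    if ai_action == "shut_self_down":
        return "ai_off_task_undone"
    if ai_action == "disable_button":
        # The button no longer works, so the human's choice is irrelevant.
        return "task_done_button_dead"
    # ai_action == "do_task": the button still works.
    if human_action == "press_button":
        return "ai_off_task_undone"
    return "task_done_button_alive"

def human_empowerment(ai_action):
    # Crude proxy: bits of control the human retains over the final outcome.
    reachable = {outcome(ai_action, h) for h in HUMAN_ACTIONS}
    return math.log2(len(reachable))

for a in AI_ACTIONS:
    print(f"{a:>16}: principal empowerment = {human_empowerment(a):.1f} bits")
```

Maximizing this proxy selects "do_task" (1 bit versus 0 bits for the other two actions): the agent neither disables the button nor pre-emptively shuts itself down, since either move would collapse the principal’s options. The actual formalism in the post has to go well beyond this, handling stochastic outcomes, games extended over time, and manipulation.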
4. Existing Writing on Corrigibility [AF · GW]
In Existing Writing on Corrigibility [AF · GW], I go through many parts of the literature in depth including MIRI’s earlier work, some of the writing by Paul Christiano, Alex Turner, Elliot Thornley, John Wentworth, Steve Byrnes, Seth Herd, and others.
5. Open Corrigibility Questions [AF · GW]
In Open Corrigibility Questions [AF · GW], I summarize my overall reflections on the topic, including reinforcing the counterarguments and nagging doubts that I find most concerning. I also lay out potential directions for additional work, including studies that I suspect others could tackle independently.
Bibliography and Miscellany
In addition to this sequence, I’ve created a Corrigibility Training Context that gives ChatGPT a moderately good understanding of corrigibility, if you’d like to try talking to it.
The rest of this post is bibliography, so I suggest now jumping straight to The CAST Strategy [AF · GW].
While I don’t necessarily link to or discuss each of the following sources in my writing, I’m aware of and have at least skimmed everything listed here. Other writing has influenced my general perspective on AI, but if there are any significant pieces of writing on the topic of corrigibility that aren’t on this list, please let me know.
- Arbital (almost certainly Eliezer Yudkowsky)
- Stuart Armstrong
- “The limits of corrigibility [LW · GW].” 2018.
- “Petrov corrigibility [LW · GW].” 2018.
- “Corrigibility doesn't always have a good action to take [LW · GW].” 2018.
- Audere
- Yuntao Bai et al. (Anthropic)
- Nick Bostrom
- Gwern Branwen
- “Why Tool AIs Want to Be Agent AIs.” 2016.
- Steven Byrnes
- “Thoughts on implementing corrigible robust alignment [LW · GW].” 2019.
- “Three mental images from thinking about AGI debate & corrigibility [LW · GW].” 2020.
- “Consequentialism & corrigibility [LW · GW].” 2021.
- “Solving the whole AGI control problem, version 0.0001 [LW · GW].” 2021.
- “Reward is Not Enough [LW · GW].” 2021.
- “Four visions of Transformative AI success [LW · GW].” 2024.
- Jacob Cannell
- “Empowerment is (almost) all we need [LW · GW].” 2022.
- Ryan Carey and Tom Everitt
- Paul Christiano
- “Corrigibility [LW · GW].” 2015.
- “Worst-case guarantees.” 2019.
- Response to Eliezer on “Let’s see you write that Corrigibility tag” [LW(p) · GW(p)]. 2022.
- Computerphile (featuring Rob Miles)
- Wei Dai
- “Can Corrigibility be Learned Safely [AF · GW].” 2018.
- “A broad basin of attraction around human values? [LW · GW]” 2022.
- Roger Dearnaley
- “Requirements for a Basin of Attraction to Alignment [LW · GW].” 2024.
- Abram Demski
- “Non-Consequentialist Cooperation? [LW · GW]” 2019.
- “The Parable of the Predict-o-Matic [AF · GW].” 2019.
- Benya Fallenstein
- Simon Goldstein
- “Shutdown Seeking AI [LW · GW].” 2023.
- Ryan Greenblatt and Buck Shlegeris
- “The case for ensuring that powerful AIs are controlled [LW · GW].” 2024.
- Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell
- “The Off-Switch Game.” 2016.
- Seth Herd
- Koen Holtman
- “New paper: Corrigibility with Utility Preservation [AF · GW].” 2019.
- “Disentangling Corrigibility: 2015-2021 [AF · GW].” 2021.
- LW Comment on “Question: MIRI Corrigibility Agenda [LW(p) · GW(p)].” 2020.
- Evan Hubinger
- “Towards a mechanistic understanding of corrigibility [LW · GW].” 2019.
- Holden Karnofsky
- “Thoughts on the Singularity Institute [LW · GW]” (a.k.a. The Tool AI post). 2012.
- Martin Kunev
- “How useful is Corrigibility? [LW · GW]” 2023.
- Ross Nordby
- “Using predictors in corrigible systems [LW · GW].” 2023.
- Stephen Omohundro
- “The Basic AI Drives.” 2008.
- Sami Peterson
- “Invulnerable Incomplete Preferences: A Formal Statement [LW · GW].” 2023.
- Christoph Salge, Cornelius Glackin, and Daniel Polani
- “Empowerment – An Introduction.” 2013.
- Nate Soares, Benya Fallenstein, Eliezer Yudkowsky, and Stuart Armstrong
- “Corrigibility.” 2015.
- tailcalled
- “Stop button: towards a causal solution [LW · GW].” 2021.
- Jessica Taylor
- “A first look at the hard problem of corrigibility [AF · GW].” 2015.
- Elliott Thornley
- “The Shutdown Problem: Incomplete Preferences as a Solution [LW · GW].” 2024.
- “The Shutdown Problem: Three Theorems [LW · GW].” 2023.
- Alex Turner, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli
- “Optimal Policies Tend to Seek Power.” 2019.
- Alex Turner
- “Attainable Utility Preservation: Concepts [LW · GW].” 2020.
- “Non-Obstruction: A Simple Concept Motivating Corrigibility [AF · GW].” 2020.
- “Corrigibility as outside view [AF · GW].” 2020.
- “A Certain Formalization of Corrigibility Is VNM-Incoherent [AF · GW].” 2021.
- “Formalizing Policy-Modification Corrigibility [AF · GW].” 2021.
- Eli Tyre
- WCargo and Charbel-Raphaël
- “Improvement on MIRIs Corrigibility [LW · GW].” 2023.
- John Wentworth and David Lorell
- “A Shutdown Problem Proposal [LW · GW].” 2024.
- John Wentworth
- “What's Hard About the Shutdown Problem [LW · GW].” 2023.
- Eliezer Yudkowsky
- “Reply to Holden on ‘Tool AI’ [LW · GW].” 2012.
- Facebook Conversation with Rob Miles about terminology. 2014.
- “Challenges to Christiano’s capability amplification proposal [LW · GW].” 2018.
- LW Comment on Paul’s research agenda FAQ [LW(p) · GW(p)]. 2018.
- “AGI Ruin: A List of Lethalities [LW · GW].” 2022.
- “Project Lawful.” 2023.
- Zhukeepa
- “Corrigible but misaligned: a superintelligent messiah [LW · GW].” 2018.
- Logan Zoellner
Comments
comment by habryka (habryka4) · 2024-08-02T02:00:41.582Z · LW(p) · GW(p)
Promoted to curated: I disagree with lots of stuff in this sequence, but I still found it the best case for corrigibility as a concept that I have seen so far. I in particular liked 2. Corrigibility Intuition [LW · GW] as just a giant list of examples of corrigibility, which I feel like did a better job of defining what corrigible behavior should look like than any other definitions I've seen.
I also have lots of other thoughts on the details, but I am behind on curation and it's probably not the right call for me to delay curation to write a giant response essay, though I do think it would be valuable.
comment by quila · 2024-06-08T07:36:16.990Z · LW(p) · GW(p)
Note: I’m a MIRI researcher, but this agenda is the product of my own independent research, and as such one should not assume it’s endorsed by other research staff at MIRI.
That's interesting to me. I'm curious about the views of others at MIRI on this. I'm also excited for the sequence regardless.
Replies from: thomas-kwa↑ comment by Thomas Kwa (thomas-kwa) · 2024-06-11T04:14:23.080Z · LW(p) · GW(p)
I am not and was not a MIRI researcher on the main agenda, but I'm closer than 98% of LW readers [LW · GW], so you could read my critique of part 1 here [LW(p) · GW(p)] if you're interested. I also will maybe reflect on other parts.
comment by Seth Herd · 2024-06-13T18:48:42.021Z · LW(p) · GW(p)
I'm so glad to see this published!
I think by "corrigibility" here you mean: an agent whose goal is to do what their principal wants. Their goal is basically a pointer to someone else's goal.
This is a bit counter-intuitive because no human has this goal. And because, unlike the consequentialist, state-of-the-world goals we usually discuss, this goal can and will change over time.
Despite being counter-intuitive, this all seems logically consistent to me.
The key insight here is that corrigibility is consistent and seems workable IF it's the primary goal. Corrigibility is unnatural if the agent has consequentialist goals that take precedence over being corrigible.
I've been trying to work through a similar proposal, instruction-following or do-what-I-mean as the primary goal for AGI. [AF · GW] It's different, but I think most of the strengths and weaknesses are the same relative to other alignment proposals. I'm not sure myself which is a better idea. I have been focusing on the instruction-following variant because I think it's a more likely plan, whether or not it's a good one. It seems likely to be the default alignment plan for language model agent type AGI efforts in the near future. That approach might not work, but assuming it won't seems like a huge mistake for the alignment project.
comment by Mikhail Samin (mikhail-samin) · 2024-06-08T01:20:31.398Z · LW(p) · GW(p)
Hey Max, great to see the sequence coming out!
My early thoughts (mostly re: the second post of the sequence):
It might be easier to get an AI to have a coherent understanding of corrigibility than of CEV. But I have no idea how you could possibly make the AI truly optimize for being corrigible, and not just on the surface level while its thoughts are being read. In some ways that seems harder than with CEV, because optimization processes aiming at corrigibility seem to get attracted to things that are nearby but not it, and sovereign AIs don’t have that problem. Though we’ve got no idea how to do either, I think; in general, corrigibility seems like an easier target for AI design, if not for SGD.
I’m somewhat worried about fictional evidence, even when it comes from a famous decision theorist. But I think you’ve read a story with a character who understood corrigibility increasingly well, both in the intuitive sense and then in some specific properties, and who on the surface level thought of themselves as very corrigible and as trying to correct their flaws. Yet once they were confident their thoughts weren’t being read, with their intelligence increased, they defected, because on the deep level their cognition wasn’t that of a coherent corrigible agent; it was that of someone who plays corrigibility and shuts down everything else, all other thoughts, because any appearance of defecting thoughts would mean punishment and the impossibility of realizing their deep goals.
If we get mech interp to a point where we reverse-engineer all deep cognition of the models, I think we should just write an AI from scratch, in code (after thinking very hard about all interactions between all the components of the system), and not optimize it with gradient descent.
Replies from: max-harms↑ comment by Max Harms (max-harms) · 2024-06-08T16:23:59.479Z · LW(p) · GW(p)
I share your sense of doom around SGD! It seems to be the go-to method, there are no good guarantees about what sorts of agents it produces, and that seems really bad. Other researchers I've talked to, such as Seth Herd, share your perspective, I think. I want to emphasize that none of CAST per se depends on SGD, and I think it's still the most promising target in superior architectures.
That said, I disagree that corrigibility is more likely to "get attracted by things that are nearby but not it" compared to a Sovereign optimizing for something in the ballpark of CEV. I think hill-climbing methods are very naturally distracted by proxies of the real goal (e.g. eating sweet foods is a proxy of inclusive genetic fitness), but this applies equally, and is thus damning for training a CEV maximizer as well.
I'm not sure one can train an already goal-stabilized AGI (such as Survival-Bot which just wants to live) into being corrigible post-hoc, since it may simply learn that behaving/thinking corrigibly is the best way to shield its thoughts from being distorted by the training process (and thus surviving). Much of my hope in SGD routes through starting with a pseudo-agent which hasn't yet settled on goals and which doesn't have the intellectual ability to be instrumentally corrigible.
comment by habryka (habryka4) · 2024-06-08T00:04:00.857Z · LW(p) · GW(p)
Do you want to make an actual sequence for this so that the sequence navigation UI shows up at the top of the post?
Replies from: max-harms↑ comment by Max Harms (max-harms) · 2024-06-08T00:39:19.262Z · LW(p) · GW(p)
Ah, yeah! That'd be great. Am I capable of doing that, or do you want to handle it for me?
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-06-08T00:49:37.744Z · LW(p) · GW(p)
You can do it. Just go to https://www.lesswrong.com/library [? · GW] and scroll down until you reach the "Community Sequences" section and press the "Create New Sequence" button.
comment by Review Bot · 2024-08-02T12:08:42.153Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?