Bootstrapped Alignment
post by Gordon Seidoh Worley (gworley) · 2021-02-27T15:46:29.507Z · LW · GW · 12 comments
NB: I doubt any of this is very original. In fact, it's probably right there in the original Friendly AI writings and I've just forgotten where. Nonetheless, I think this is something worth exploring lest we lose sight of it.
Consider the following argument:
- Optimization unavoidably leads to Goodharting [LW · GW] (as I like to say, Goodhart is robust)
- This happens so long as we optimize (make choices) based on observations, which we must do, because that's just how physics works.
- We can at best make Goodhart effects happen more slowly, say by quantilization or satisficing [? · GW] (see the toy sketch just after this list).
- Attempts to build aligned AI that rely on optimizing for alignment will eventually fail to become or remain aligned due to Goodhart effects under sufficient optimization pressure.
- Thus the only way to build AI that actually becomes and stays aligned is to not rely on optimization to achieve alignment.
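A toy sketch of the claim (just an illustration; the proxy, the payoff function, and the numbers are all made up): a proxy that tracks the true goal over most of its range but diverges at the extreme. Taking the argmax of the proxy reliably lands where the two have come apart; a quantilizer, which samples from the top fraction of options rather than taking the argmax, applies less optimization pressure and so Goodharts more slowly, but it does not escape the effect.

```python
import random

# Goodhart setup: the proxy tracks the true goal up to a point, then diverges.
def true_utility(x):
    return x if x <= 15 else 15 - 3 * (x - 15)  # payoff collapses past x = 15

def proxy(x):
    return x  # the measured stand-in keeps rising

candidates = [random.uniform(0, 20) for _ in range(10_000)]

# Hard optimization: take the argmax of the proxy. This reliably picks a point
# in the region where proxy and true goal have come apart.
best = max(candidates, key=proxy)

# Quantilization: sample uniformly from the top q fraction of candidates by
# proxy score instead of taking the argmax. Less optimization pressure, so the
# Goodhart effect is weaker, but it is still there.
q = 0.5
top = sorted(candidates, key=proxy, reverse=True)[: int(q * len(candidates))]
quantilized = random.choice(top)

print(f"argmax:      proxy={proxy(best):5.1f}, true={true_utility(best):5.1f}")
print(f"quantilizer: proxy={proxy(quantilized):5.1f}, true={true_utility(quantilized):5.1f}")
```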
This means that, if you buy this argument, huge swaths of the AI design space are off limits for building aligned AI, and many existing proposals are doomed to fail. Some examples of such doomed approaches:
- HCH
- debate
- IRL/CIRL
So what options are left?
- Don't build AI
- The AI you don't build is vacuously aligned.
- Friendly AI [? · GW]
- AI that is aligned with humans right from the start because it was programmed to work that way.
- (Yes I know "Friendly AI" is an antiquated term, but I don't know a better one to distinguish the idea of building AI that's aligned because it's programmed that way from other ways we might build aligned AI.)
- Bootstrapped alignment
- Build AI that is aligned via optimization, but that is not powerful enough, or not optimized (Goodharted) hard enough, to cause existential catastrophe. Then use this "weakly" aligned AI to build Friendly AI.
Not building AI is probably not a realistic option unless industrial civilization collapses. And so far we don't seem to be making progress on creating Friendly AI. That just leaves bootstrapping to alignment.
If I'm honest, I don't like it. I'd much rather have the guarantee of Friendly AI. Alas, if we don't know how to build it, and if we're in a race against folks who will build unaligned superintelligent AI if aligned AI is not created first, bootstrapping seems the only realistic option we have.
This puts me in a strange place with regards to how I think about things like HCH, debate, IRL, and CIRL. On the one hand, they might be ways to bootstrap to something that's aligned enough to use to build Friendly AI. On the other, they might overshoot in terms of capabilities, we probably wouldn't even realize we overshot, and then we suffer an existential catastrophe.
One way we might avoid this is by being more careful about how we frame attempts to build aligned AI and being clear about whether they are targeting "strong", perfect alignment like Friendly AI or "weak", optimization-based alignment like HCH. I think this would help us avoid confusion in a few places:
- thinking work on weak alignment is actually work on strong alignment
- forgetting that work on weak alignment, which we meant to use to bootstrap to strong alignment, is not itself a mechanism for strong alignment
- thinking we're not making progress towards strong alignment because we're only making progress on weak alignment
It also seems like it would clear up some of the debates we fall into around various alignment techniques. Plenty of digital ink has been spilled trying to suss out if, say, debate would really give us alignment or if it's too dangerous to even attempt, and I think a lot of this could have been avoided if we thought of debate as a weak alignment technique we might use to bootstrap strong alignment.
Hopefully this framing is useful. As I say, I don't think it's very original, and I think I've read a lot of this framing expressed in comments and buried in articles and posts, so hopefully it's boring rather than controversial. Despite this, I can't recall it being crisply laid out like above, and I think there's value in that.
Let me know what you think.
12 comments
Comments sorted by top scores.
comment by Steven Byrnes (steve2152) · 2021-02-27T17:52:25.152Z · LW(p) · GW(p)
Reminds me of a quote from this Paul Christiano post: "It's a solution built to last (at most) until all contemporary thinking about AI has been thoroughly obsoleted...I don’t think there is a strong case for thinking much further ahead than that."
↑ comment by avturchin · 2021-02-27T21:44:45.346Z · LW(p) · GW(p)
as a weak alignment technique we might use to bootstrap strong alignment.
Yes, it also reminded me of Christiano's approach of amplification and distillation.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-08T18:25:45.524Z · LW(p) · GW(p)
Thanks both! I definitely had the idea that Paul had mentioned something similar somewhere but hadn't made it a top-level concept. I think there are similar echoes in how Eliezer talked about seed AI in the early Friendly AI work.
comment by Adrià Garriga-alonso (rhaps0dy) · 2021-03-04T19:18:41.516Z · LW(p) · GW(p)
I'm confused, I don't know what you mean by 'Friendly AI'. If I take my best guess for that term, I fail to see how it does not rely on optimization to stay aligned.
I take 'Friendly AI' to be either:
- An AI that has the right utility function from the start. (In my understanding, that used to be its usage.) As you point out, because of Goodhart's Law, such an AI is an impossible object.
- A mostly-aligned AI, that is designed to be corrigible. Humans can intervene to change its utility function or shut it down as needed to prevent it from taking bad actions. Ideally, it would consult human supervisors before taking a potentially bad action.
In the second case, humans are continuously optimizing the "utility function" to be closer to the true one. Or, modifying the utility function to make "shut down" the preferred action, whenever the explicit utility function presents a 'misaligned' preferred outcome. Thus, it also represents an optimization-based weak alignment method.
Would you argue that my second definition is also an impossible object, because it also relies on optimization?
I think part of my confusion comes from the very fuzzy definition of "optimization". How close, and how fast, do you have to get to the maximum possible value of some function U(s) to be said to optimize it? Or is this the entirely wrong framework altogether? There's no need to answer these now, I'm mostly curious about a clarification for 'Friendly AI'.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-08T01:07:00.209Z · LW(p) · GW(p)
"Friendly AI" is a technical term from the past that has mostly been replaced by "aligned AI" today. However, I'm using it here to refer to aligned AI conforming to an aspect of the original proposal for Friendly AI, which is that it be designed to be aligned, say in a mathematically provable way, rather than as an engineered process that approaches alignment by approximation.
It's still the case that humans are choosing what criteria make a Friendly AI aligned and thus there is some risk of missing the objective of aligned AI, but this avoids Goodharting because there's no optimization being applied. Of course, it could always slip back in depending on the process used to come up with the criteria a Friendly AI would be built to provably have, thus making the challenge of building one quite hard!
As to your second set of questions, which seem to hinge on what I mean by optimization: I just mean choosing one thing over another to try to make the world look one way rather than another. If that still seems vague, it's because optimization is a very common process that basically just requires a feedback loop and a signal (reward functions are a very complex type of signal).
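As a minimal, made-up sketch of the "feedback loop plus signal" picture (nothing specific to reward functions or any particular system):

```python
import random

def optimize(signal, x, steps=1000, step_size=0.1):
    """A bare-bones optimizer: perturb the current choice and keep the
    perturbation whenever the observed signal improves."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if signal(candidate) > signal(x):  # the signal provides feedback
            x = candidate                  # the loop acts on it
    return x
```

Anything of roughly this shape counts, however it's implemented.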
↑ comment by Adrià Garriga-alonso (rhaps0dy) · 2021-03-08T10:07:37.027Z · LW(p) · GW(p)
Friendly AI, which is that it be designed to be aligned, say in a mathematically provable way, rather than as an engineered process that approaches alignment by approximation.
I think I understand that now, thank you!
this avoids Goodharting because there's no optimization being applied
I'm confused again here. Is this implying that a Friendly AI, per the definition above, is not an optimizer?
I am very pessimistic about being able to align an AI without any sort of feedback loop on the reward (thus, without optimization). The world's overall transition dynamics are likely to be chaotic, so the "initial state" of an AI that is provably aligned without feedback needs to be exactly the right one to obtain the outcome we want. It could be that the chaos does not affect what we care about, but I'm unsure about that; even linear systems can be chaotic.
It is not an endeavour as clearly impossible as "build an open-loop controller for this dynamical system", but I think it's similar.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-08T18:24:09.429Z · LW(p) · GW(p)
I'm confused again here. Is this implying that a Friendly AI, per the definition above, is not an optimizer?
No. It's saying the process by which Friendly AI is designed is not an optimizer (although see my caveats in the previous reply about choosing alignment criteria; it's still technically optimization, but constrained as much as possible to eliminate the normal Goodharting mechanism). The AI itself pretty much has to be an optimizer to do anything useful.
I am very pessimistic about being able to align an AI without any sort of feedback loop on the reward (thus, without optimization). The world's overall transition dynamics are likely to be chaotic, so the "initial state" of an AI that is provably aligned without feedback needs to be exactly the right one to obtain the outcome we want. It could be that the chaos does not affect what we care about, but I'm unsure about that; even linear systems can be chaotic.
I'm similarly pessimistic, as it seems quite a hard problem and after 20 years we still don't really know how to start (or so I think; maybe MIRI folks feel differently and think we have made some real progress here). Hence bootstrapping to alignment may be the best alternative, given that I think totally abandoning the Friendly AI strategy is also a bad choice.
comment by Pattern · 2021-03-09T20:08:01.818Z · LW(p) · GW(p)
1. Optimization unavoidably leads to Goodharting (as I like to say, Goodhart is robust)
What happens if we revise or optimize our metrics?
2. Attempts to build aligned AI that rely on optimizing for alignment will eventually fail to become or remain aligned due to Goodhart effects under sufficient optimization pressure.
Sufficient optimization pressure from the AI? Or are there risks associated with a) our mitigation efforts, e.g. reducing optimization decreases friendliness 'because of Goodhart's Law', or b) the more we try to make an AI friendly/not optimize/etc., the more risks there are from that optimization process?
comment by Rohin Shah (rohinmshah) · 2021-02-28T17:32:50.339Z · LW(p) · GW(p)
Planned summary for the Alignment Newsletter:
This post distinguishes between three kinds of “alignment”:
1. Not building an AI system at all,
2. Building Friendly AI that will remain perfectly aligned for all time and capability levels,
3. _Bootstrapped alignment_, in which we build AI systems that may not be perfectly aligned but are at least aligned enough that we can use them to build perfectly aligned systems.
The post argues that optimization-based approaches can’t lead to perfect alignment, because there will always eventually be Goodhart effects.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-01T14:42:28.826Z · LW(p) · GW(p)
Looks good to me! Thanks for planning to include this in the AN!
comment by Charlie Steiner · 2021-02-28T10:51:08.506Z · LW(p) · GW(p)
I'm still holding out hope for jumping straight to FAI :P Honestly I'd probably feel safer switching on a "big human" than a general CIRL agent that models humans as Boltzmann-rational.
Though on the other hand, does modern ML research already count as trying to use UFAI to learn how to build FAI?
↑ comment by Gordon Seidoh Worley (gworley) · 2021-03-01T15:01:55.392Z · LW(p) · GW(p)
Seems like it probably does, but only incidentally.
I instead tend to view ML research as the background over which alignment work is now progressing. That is, we're in a race against capabilities research that we have little power to stop, so our best bets are either that it turns out capabilities are about to hit the upper inflection point of an S-curve, buying us some time, or that the capabilities can be safely turned to helping us solve alignment.
I do think there's something interesting about a direction not considered in this post, namely intelligence enhancement of humans and human emulations (ems) as a means of working on alignment, but I think realistically current projections of AI capability timelines suggest they're unlikely to have much opportunity for impact.