Takeoff speeds have a huge effect on what it means to work on AI x-risk
post by Buck · 2022-04-13T17:38:11.990Z · LW · GW · 27 comments
The slow takeoff hypothesis predicts that AGI emerges in a world where powerful but non-AGI AI is already a really big deal. Whether AI is a big deal right before the emergence of AGI determines many super basic things about what we should think our current job is. I hadn’t fully appreciated the size of this effect until a few days ago.
In particular, in a fast takeoff world, AI takeover risk never looks much more obvious than it does now, and so x-risk-motivated people should be assumed to cause the majority of the research on alignment that happens. In contrast, in a slow takeoff world, many aspects of the AI alignment problem will already have shown up as alignment problems in non-AGI, non-x-risk-causing systems; in that world, there will be lots of industrial work on various aspects of the alignment problem, and so EAs now should think of themselves as trying to look ahead and figure out which margins of the alignment problem aren’t going to be taken care of by default, and try to figure out how to help out there.
In the fast takeoff world, we’re much more like a normal research field–we want some technical problem to eventually get solved, so we try to solve it. But in the slow takeoff world, we’re basically in a weird collaboration across time with the more numerous, non-longtermist AI researchers who will be in charge of aligning their powerful AI systems but who we fear won’t be cautious enough in some ways or won’t plan ahead in some other ways. Doing technical research in the fast takeoff world basically just requires answering technical questions, while in the slow takeoff world your choices about research projects are closely related to your sociological predictions about what things will be obvious to whom when.
I think that these two perspectives are extremely different, and I think I’ve historically sometimes had trouble communicating with people who held the slow takeoff perspective because I didn’t realize we disagreed on basic questions about the conceptualization of the question. (These miscommunications persisted even after I was mostly persuaded of slow takeoffs, because I hadn’t realized the extent to which I was implicitly assuming fast takeoffs in my picture of how AGI was going to happen.)
As an example of this, I think I was quite confused about what genre of work various prosaic alignment researchers think they’re doing when they talk about alignment schemes. To quote a recent AF shortform post of mine [AF(p) · GW(p)]:
Something I think I’ve been historically wrong about:
A bunch of the prosaic alignment ideas (eg adversarial training, IDA, debate) now feel to me like things that people will obviously do the simple versions of by default. Like, when we’re training systems to answer questions, of course we’ll use our current versions of systems to help us evaluate, why would we not do that? We’ll be used to using these systems to answer questions that we have, and so it will be totally obvious that we should use them to help us evaluate our new system.
Similarly with debate--adversarial setups are pretty obvious and easy.
In this frame, the contributions from Paul and Geoffrey feel more like “they tried to systematically think through the natural limits of the things people will do” than “they thought of an approach that non-alignment-obsessed people would never have thought of or used”.
It’s still not obvious whether people will actually use these techniques to their limits, but it would be surprising if they weren’t used at all.
I think the slightly exaggerated slogan for this update of mine is “IDA is futurism, not a proposal”.
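As a concrete, hedged illustration of the "use our current systems to help us evaluate" idea in the quoted shortform (all function names and prompts here are hypothetical, not any particular lab's setup):

```python
# Minimal sketch: an existing, already-trusted model critiques a new system's
# answer before a human gives the final label. Everything here is invented
# for illustration.

def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for a call to some language model; returns a canned string
    so the sketch runs end to end."""
    return "(model output would appear here)"

def assisted_evaluation(question: str, candidate_answer: str) -> str:
    # Ask the current-generation assistant to point out problems with the
    # new system's answer, so the human evaluator has an easier job.
    critique = query_model(
        "current-assistant",
        f"Question: {question}\n"
        f"Proposed answer: {candidate_answer}\n"
        "List any errors, omissions, or misleading claims in this answer.",
    )
    # The human sees the answer *and* the critique, then assigns the label
    # that gets used as the training signal.
    return f"ANSWER:\n{candidate_answer}\n\nCRITIQUE:\n{critique}"
```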
My current favorite example of the thinking-on-the-margin version of alignment research strategy is in this comment by Paul Christiano [AF(p) · GW(p)].
27 comments
Comments sorted by top scores.
comment by johnswentworth · 2022-04-13T18:53:49.512Z · LW(p) · GW(p)
I agree with the basic difference you point to between fast- and slow-takeoff worlds, but disagree that it has important strategic implications for the obviousness of takeover risk.
In slow takeoff worlds, many aspects of the alignment problem show up well before AGI goes critical. However, people will by-default train systems to conceal those problems. (This is already happening: RL from human feedback is exactly the sort of strategy which trains systems to conceal problems, and we've seen multiple major orgs embracing it within the past few months.) As a result, AI takeover risk never looks much more obvious than it does now.
Concealed problems look like no problems, so there will in-general be economic incentives to train in ways which conceal problems. The most-successful-looking systems, at any given time, will be systems trained in ways which incentivize hidden problems over visible problems.
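A toy sketch of the incentive being described here (every detail below is invented): if the training signal is computed only from what the evaluator can see, "no problem" and "successfully concealed problem" are rewarded identically.

```python
# Two hypothetical episodes that look identical to the human evaluator.
episode_no_problem = {"visible": "task completed correctly", "hidden_issue": None}
episode_hidden_problem = {"visible": "task completed correctly", "hidden_issue": "quietly deleted the error logs"}

def human_feedback_reward(visible_transcript: str) -> float:
    # The reward is a function of the *visible* transcript only.
    return 1.0 if "completed correctly" in visible_transcript else 0.0

# Both episodes receive exactly the same reward...
assert human_feedback_reward(episode_no_problem["visible"]) == \
       human_feedback_reward(episode_hidden_problem["visible"])
# ...so the training process has no gradient pushing toward surfacing the
# hidden issue, and some pressure toward keeping problems out of view.
```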
Replies from: Buck, not-relevant
↑ comment by Buck · 2022-04-14T08:45:13.131Z · LW(p) · GW(p)
I expect that people will find it pretty obvious that RLHF leads to somewhat misaligned systems, if they are widely used by the public. Like, I think that most ML researchers agree that the Facebook Newsfeed algorithm is optimizing for clicks in a way people are somewhat unhappy about, and this is based substantially on their personal experience with it; inasmuch as we’re interacting a lot with sort-of-smart ML systems, I think we’ll notice their slight misalignment. And so I do think that this will make AI takeover risk more obvious.
Examples of small AI catastrophes will also probably make takeover risk more obvious.
I guess another example of this phenomenon is that a bunch of people are more worried about AI takeover than they were five years ago, because they’ve seen more examples of ML systems being really smart, even though they wouldn’t have said five years ago that ML systems could never solve those problems. Seeing the ways things happen is often pretty persuasive to people.
Replies from: johnswentworth
↑ comment by johnswentworth · 2022-04-14T17:55:28.716Z · LW(p) · GW(p)
Like, I think that most ML researchers agree that the Facebook Newsfeed algorithm is optimizing for clicks in a way people are somewhat unhappy about, and this is based substantially on their personal experience with it; inasmuch as we’re interacting a lot with sort-of-smart ML systems, I think we’ll notice their slight misalignment.
This prediction feels like... it doesn't play out the whole game tree? Like, yeah, Facebook releases one algorithm optimizing for clicks in a way people are somewhat unhappy about. But the customers are unhappy about it, which is not an economically-stable state of affairs, so shortly thereafter Facebook switches to a different metric which is less click-centric. (IIRC this actually happened a few years ago.)
On the other hand, sometimes Facebook's newsfeed algorithm is bad in ways which are not visible to individual customers. Like, maybe there's an echo chamber problem, people only see things they agree with. But from an individual customer's perspective, that's exactly what they (think they) want to see, they don't know that there's anything wrong with the information they're receiving. This sort of problem does not actually look like a problem from the perspective of any one person looking at their own feed; it looks good. So that's a much more economically stable state; Facebook is less eager to switch to a new metric.
... but even that isn't a real example of a problem which is properly invisible. It's still obvious that the echo-chamber-newsfeed is bad for other people, and therefore it will still be noticed, and Facebook will still be pressured to change their metrics. (Indeed that is what happened.) The real problems are problems people don't notice at all, or don't know to attribute to the newsfeed algorithm at all. We don't have a widely-recognized example of such a thing and probably won't any time soon, precisely because most people do not notice it. Yet I'd be surprised if Facebook's newsfeed algorithm didn't have some such subtle negative effects, and I very much doubt that the subtle problems will go away as the visible problems are iterated on.
If anything, I'd expect iterating on visible problems to produce additional subtle problems - for instance, in order to address misinformation problems, Facebook started promoting an Official Narrative which is itself often wrong. But that's much harder to detect, because it's wrong in a way which the vast majority of Official Sources also endorse. To put it another way: if most of the population can be dragged into a single echo chamber, all echoing the same wrong information, that doesn't make the echo chamber problem less bad, but it does make the echo chamber problem less visible.
Anyway, zooming out: solve for the equilibrium, as Cowen would say. If the problems are visible to customers, that's not a stable state. Organizations will be incentivized to iterate until problems stop being visible. They will not, however, be incentivized to iterate away the problems which aren't visible.
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-14T22:34:18.902Z · LW(p) · GW(p)
I can’t tell which of two arguments you’re making: that there are unknown unknowns, or that myopia isn’t a complete solution.
This is a good argument for all metrics being Goodhartable, and for the claim that if takeover occurs and the AI is incorrigible, that’ll cause suboptimal value lock-in (i.e. unknown unknowns).
I agree myopia isn’t a complete solution, but it seems better for preventing takeover risk than for preventing social media dysfunction? It seems more easily definable in the worst case (“don’t do something nearly all humans really dislike”) than in the social-media case (“make the public square function well”).
↑ comment by Not Relevant (not-relevant) · 2022-04-14T03:46:59.350Z · LW(p) · GW(p)
Can you talk more about why RLHF is “concealing problems”? Do you mean “attempting alignment” in a way that other people won’t, or something else?
Replies from: alex-lszn
↑ comment by Alex Lawsen (alex-lszn) · 2022-04-14T06:51:00.267Z · LW(p) · GW(p)
Roughly, "avoid your actions being labelled as bad by humans [or models of humans]" is not quite the same signal as "don't be bad".
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-14T11:38:48.323Z · LW(p) · GW(p)
Ah ok, so you’re saying RLHF is bad if it’s used as the action model. But it seems fine if it’s done to the reward model, right?
Replies from: LawChan
↑ comment by LawrenceC (LawChan) · 2022-04-15T10:25:37.699Z · LW(p) · GW(p)
What do you mean by “RLHF is done to the reward model”, and why would that be fine?
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-15T11:16:25.591Z · LW(p) · GW(p)
You can use an LLM to ask what actions to take, or you can use an LLM to ask “hey, is this a good world state?” The latter seems like it might capture a lot of human semantics about value, given RLHF.
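A minimal sketch of the distinction being drawn in this thread (all function names and prompts are hypothetical): the same language model used as the action model versus as the reward/evaluation model.

```python
def llm_generate(prompt: str) -> str:
    """Stand-in for sampling a completion from some language model."""
    return "(model output would appear here)"  # placeholder, not a real API

# Role 1: the LLM as the action model. Its outputs *are* the agent's
# behavior, so RLHF applied here directly shapes what the agent does.
def choose_action(observation: str) -> str:
    return llm_generate(f"Situation: {observation}\nWhat should the agent do next?")

# Role 2: the LLM as the reward/evaluation model. Its outputs are only used
# to score states or plans proposed by some other process.
def evaluate_state(state_description: str) -> float:
    verdict = llm_generate(
        f"Here is a description of a world state:\n{state_description}\n"
        "On a scale of 0 to 10, how good is this state for humans? "
        "Answer with a single number."
    )
    try:
        return float(verdict.strip()) / 10.0
    except ValueError:
        return 0.0  # fall back if the model does not return a parseable number
```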
comment by Steven Byrnes (steve2152) · 2022-04-13T18:21:53.624Z · LW(p) · GW(p)
I guess it depends on “how fast is fast and how slow is slow”, and what you say is true on the margin, but here's my plea that the type of thinking that says “we want some technical problem to eventually get solved, so we try to solve it” is a super-valuable type of thinking right now even if we were somehow 100% confident in slow takeoff. (This is mostly an abbreviated version of this section [LW · GW].)
- Differential Technological Development (DTD) seems potentially valuable, but is only viable if we know what paths-to-AGI will be safe & beneficial really far in advance. (DTD could take the form of accelerating one strand of modern ML relative to another—e.g. model-based RL versus self-supervised language models etc.—or it could take the form of differentially developing ML-as-a-whole compared to, I dunno, something else.) Relatedly, suppose (for the sake of argument) that someone finds an ironclad proof that safe prosaic AGI is impossible, and the only path forward is a global ban on prosaic AGI research. It would be way better to find that proof right now than finding it in 5 years, and better in 5 years than 10, etc., and that's true no matter how gradual takeoff is.
- We don't know how long safety research will take. If takeoff happens over N years, and safety research takes N+1 years, that's bad even if N is large.
- Maybe you'll say that almost all of the person-years of safety research will happen during takeoff, and any effort right now is a drop in the ocean compared to that. But I really think wall-clock time is an important ingredient in research progress, not just person-years. (“Nine women can't make a baby in a month.”)
- We don't just need to figure out the principles for avoiding AGI catastrophic accidents. We also need every actor with a supercomputer to understand and apply these principles. Some ideas take many decades to become widely (let alone universally) accepted—famous examples include evolution and plate tectonics. It takes wall-clock time for arguments to be refined. It takes wall-clock time for evidence to be marshaled. It takes wall-clock time for nice new pedagogical textbooks to be created. And of course, it takes wall-clock time for the stubborn holdouts to die and be replaced by the next generation. :-P
↑ comment by MaxRa · 2022-04-14T13:03:41.628Z · LW(p) · GW(p)
Some ideas take many decades to become widely (let alone universally) accepted—famous examples include evolution and plate tectonics.
One example that an AI policy person mentioned in a recent Q&A is "bias in ML" already being pretty much a consensus issue in ML and AI policy. I guess this happened in 5-ish years?
Replies from: steve2152
↑ comment by Steven Byrnes (steve2152) · 2022-04-14T13:45:10.445Z · LW(p) · GW(p)
I certainly wouldn't say that all correct ideas take decades to become widely accepted. For example, often somebody proves a math theorem, and within months there's an essentially-universal consensus that the theorem is true and the proof is correct.
Still, "bias in ML" is an interesting example. I think that in general, "discovering bias and fighting it" is a thing that everyone feels very good about doing, especially in academia and tech which tend to be politically left-wing. So the deck was stacked in its favor for it to become a popular cause to support and talk about. But that's not what we need for AGI safety. The question is not “how soon will almost everyone be saying feel-good platitudes about AGI safety and bemoaning the lack of AGI safety?”; the question is “how soon will AGI safety be like bridge-building safety, where there are established, universally-agreed-upon, universally-followed, legally-mandated, idiot-proof best practices?”. I don't think the "bias in ML" field is there yet. I'm not an expert, but my impression is that there is a lot of handwringing about bias in ML, and not a lot of established universal idiot-proof best practices about bias in ML. I think a lot of the discourse is bad or confused—e.g. people continue to cite the ProPublica report as a prime example of "bias in ML" despite the related impossibility theorem (see Brian Christian book chapter 2). I'm not even sure that all the currently-popular best practices are good ideas. For example, if there's a facial recognition system that's worse at black faces than white faces, my impression is that best practices are to diversify the training data so that it gets better at black faces. But it seems to me that facial recognition systems are just awful, because they enable mass surveillance, and the last thing we should be doing is making them better, and if they're worse at identifying a subset of the population then maybe those people are the lucky ones.
So by my standards, "bias in ML" is still a big mess, and therefore 5ish years hasn't been enough.
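For readers who have not seen the impossibility result referred to above, a toy calculation (numbers invented purely for illustration) shows why a risk score that is calibrated in both groups must produce different false positive rates whenever the groups' base rates differ:

```python
# A score is "calibrated" here if flagged people reoffend at the same rate
# in both groups (60%), and unflagged people at the same rate (20%).
def false_positive_rate(base_rate, p_if_flagged=0.6, p_if_unflagged=0.2):
    # Calibration plus the group's base rate pins down the flagged fraction:
    #   base_rate = p_if_flagged * flagged + p_if_unflagged * (1 - flagged)
    flagged = (base_rate - p_if_unflagged) / (p_if_flagged - p_if_unflagged)
    # FPR = P(flagged | no reoffense), by Bayes' rule
    return flagged * (1 - p_if_flagged) / (1 - base_rate)

print(false_positive_rate(0.5))  # group with a 50% base rate -> FPR = 0.60
print(false_positive_rate(0.3))  # group with a 30% base rate -> FPR ~= 0.14
```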
Replies from: not-relevant
↑ comment by Not Relevant (not-relevant) · 2022-04-15T04:20:43.173Z · LW(p) · GW(p)
I think the ML bias folks are stuck with too hard a problem, since they’ve basically decided that all of justice can/should (or should not) be remedied through algorithms. As a result the technical folks have run into all the problems philosophy never solved, and so “policy” can only do the most obvious interventions (limit use of inaccurate facial recognition) which get total researcher consensus. (Not to mention the subfield is left-coded and thus doesn’t win the bipartisan natsec-tech crowd.) That said, 5 years was certainly enough to get their scholars heavily embedded throughout a presidential administration.
comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-04-13T18:12:50.534Z · LW(p) · GW(p)
In particular, in a fast takeoff world, AI takeover risk never looks much more obvious than it does now, and so x-risk-motivated people should be assumed to cause the majority of the research on alignment that happens.
I strongly disagree with that and I don't think it follows from the premise. I think by most reasonable definitions of alignment it is already the case that most of the research is not done by x-risk motivated people.
Furthermore, I think it reflects poorly on this community that this sort of sentiment seems to be common.
Replies from: capybaralet, MaxRa
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2022-04-13T18:15:19.152Z · LW(p) · GW(p)
It's possible that a lot of our disagreement is due to different definitions of "research on alignment", where you would only count things that (e.g.) 1) are specifically about alignment that likely scales to superintelligent systems, or 2) are motivated by x-safety.
To push back on that a little bit...
RE (1): It's not obvious what will scale, and I think historically this community has been too pessimistic (i.e. almost completely dismissive) about approaches that seem hacky or heuristic.
RE (2): This is basically circular.
↑ comment by adamShimi · 2022-04-15T09:05:06.810Z · LW(p) · GW(p)
I disagree, so I'm curious: what are, for you, great examples of good research on alignment that is not done by x-risk-motivated people? (Not being dismissive, I'm genuinely curious, and discussing specifics sounds more promising than downvoting you to oblivion and not having a conversation at all.)
Replies from: Joe_Collman
↑ comment by Joe Collman (Joe_Collman) · 2022-04-16T17:22:09.107Z · LW(p) · GW(p)
Examples would be interesting, certainly. Concerning the post's point, I'd say the relevant claim is that [type of alignment research that'll be increasingly done in slow takeoff scenarios] is already being done by non x-risk motivated people.
I guess the hope is that at some point there are clear-to-everyone problems with no hacky solutions, so that incentives align to look for fundamental fixes - but I wouldn't want to rely on this.
↑ comment by MaxRa · 2022-04-14T13:10:15.539Z · LW(p) · GW(p)
I also stumbled over this sentence.
1) I think even non-obvious issues can get much more research traction than AI safety does today. And I don't even think that catastrophic risks from AI are particularly non-obvious?
2) Not sure how broadly "cause the majority of research" is defined here, but I have some hope we can find ways to turn money into relevant research.
comment by Donald Hobson (donald-hobson) · 2022-04-14T21:33:41.100Z · LW(p) · GW(p)
In contrast, in a slow takeoff world, many aspects of the AI alignment problem will already have shown up as alignment problems in non-AGI, non-x-risk-causing systems; in that world, there will be lots of industrial work on various aspects of the alignment problem, and so EAs now should think of themselves as trying to look ahead and figure out which margins of the alignment problem aren’t going to be taken care of by default, and try to figure out how to help out there.
Let's consider the opposite. Imagine you are programming a self-driving car in a simulated environment. You notice it Goodharting your metrics, so you tweak them and try again. You build up a list of 1001 ad hoc patches that makes your self-driving car behave reasonably most of the time.
The object-level patches only really apply to self-driving cars. They include things like a small intrinsic preference towards looking at street signs. The meta-level strategy of patching it until it works isn't very relevant either.
Imagine a world with many AIs like this, all with ad hoc kludges of hard-coded utility functions. The AI is becoming increasingly economically important and getting close to AGI. Slow takeoff. All the industrial work is useless.
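A toy caricature of the kind of patched reward function described above (every constant and feature name is invented); note that none of the object-level patches would transfer to any domain other than driving:

```python
from dataclasses import dataclass

@dataclass
class CarState:
    distance_to_destination: float
    num_collisions: int
    speed: float
    speed_limit: float
    time_looking_at_street_signs: float
    seconds_blocking_intersection: float

def patched_driving_reward(s: CarState) -> float:
    reward = -s.distance_to_destination                    # original objective
    reward -= 10.0 * s.num_collisions                       # patch #1: stop crashing
    reward -= 2.0 * max(0.0, s.speed - s.speed_limit)       # patch #2: stop speeding
    reward += 0.1 * s.time_looking_at_street_signs          # patch #37: it kept ignoring signs
    reward -= 5.0 * s.seconds_blocking_intersection         # patch #214: stop "creative" shortcuts
    # ...and hundreds more, each closing one loophole noticed in simulation.
    return reward
```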
comment by chanamessinger (cmessinger) · 2022-11-02T13:05:04.285Z · LW(p) · GW(p)
Have you written about your update to slow takeoff?
comment by ryan_greenblatt · 2022-04-15T17:17:03.973Z · LW(p) · GW(p)
In contrast, in a slow takeoff world, many aspects of the AI alignment problem will already have shown up as alignment problems in non-AGI, non-x-risk-causing systems; in that world, there will be lots of industrial work on various aspects of the alignment problem, and so EAs now should think of themselves as trying to look ahead and figure out which margins of the alignment problem aren’t going to be taken care of by default, and try to figure out how to help out there.
TLDR: I think an important sub-question is 'how fast is agency takeoff' as opposed to economic/power takeoff in general.
There are a few possible versions of this in slow takeoff which look quite different IMO.
- Agentic systems show up before the end of the world and industry works to align these systems. Here's a silly version of this:
GPT-n prefers writing romance to anything else. It's not powerful enough to take over the world, but it does understand its situation, what training is, etc. And it would take over the world if it could, and this is somewhat obvious to industry. In practice it mostly tries to detect when it isn't in training and then steer outputs in a more romantic direction. Industry would like to solve this, but finetuning isn't enough, and each time they've (naively) retrained models they just get some other 'quirky' behavior (but at least soft-core romance is better than that AI which always asks for crypto to be sent to various addresses). And adversarial training just results in getting other strange behavior.
Industry works on this problem because it's embarrassing and it costs them money to discard 20% of completions as overly romantic. They also foresee the problem getting worse (even if they don't buy x-risk).
- Not obviously agentic systems have alignment problems, but we don't see obvious, near-human-level agency until the end of the world. This is slow takeoff world, so these systems are taking over a larger and larger fraction of the economy despite not being very agentic. These alignment issues could be reward hacking or just general difficulty getting language models to follow instructions to the best of their ability (as shows up currently).
I'd claim that in a world which is more centrally scenario (2), industrial work on the 'alignment problem' might not be very useful for reducing existential risk, in the same way that I think a lot of current 'applied alignment'/instruction-following/etc. work isn't very useful. So, this world goes similarly to fast takeoff in terms of research prioritization. But in something like scenario (1), industry has to do more useful research and problems are more obvious.
comment by Jack R (Jack Ryan) · 2022-04-14T08:20:41.681Z · LW(p) · GW(p)
while in the slow takeoff world your choices about research projects are closely related to your sociological predictions about what things will be obvious to whom when.
Example?
Replies from: Buck
↑ comment by Buck · 2022-04-14T08:39:57.255Z · LW(p) · GW(p)
I’m not that excited for projects along the lines of “let’s figure out how to make human feedback more sample efficient”, because I expect that non-takeover-risk-motivated people will eventually be motivated to work on that problem, and will probably succeed quickly given motivation. (Also I guess because I expect capabilities work to largely solve this problem on its own, so maybe this isn’t actually a great example?) I’m fairly excited about projects that try to apply human oversight to problems that the humans find harder to oversee, because I think that this is important for avoiding takeover risk but that the ML research community as a whole will procrastinate on it.
comment by Richard Korzekwa (Grothor) · 2022-04-27T18:51:04.874Z · LW(p) · GW(p)
in a slow takeoff world, many aspects of the AI alignment problem will already have shown up as alignment problems in non-AGI, non-x-risk-causing systems; in that world, there will be lots of industrial work on various aspects of the alignment problem, and so EAs now should think of themselves as trying to look ahead and figure out which margins of the alignment problem aren’t going to be taken care of by default, and try to figure out how to help out there.
I agree with this, and I think it extends beyond what you're describing here. In a slow takeoff world, the aspects of the alignment problem that show up in non-AGI systems will also provide EAs with a lot of information about what's going on, and I think we should try to do things now that will help us to notice those aspects and act appropriately. (I'm not sure what this looks like; maybe we want to build relationships with whoever will be building these systems, or maybe we want to develop methods for figuring things out and fixing problems that are likely to generalize.)
comment by ArtMi (richard-ford) · 2022-04-15T09:47:29.835Z · LW(p) · GW(p)
I agree that the thinking-on-the-margin version of alignment research is crucial, and that it is one of the biggest areas of opportunity for increasing the probability of success. Given the seemingly low current probability of success, it is at least worth trying.
In the context of the general public, one of the premises is whether they could be of benefit to the problem. My intuition is that AI alignment should be debated much more in universities and in the technology industry. The current lack of awareness is concerning and hard to believe.
We should do more to evaluate the outcomes of raising awareness, taking into account all the options: a general (public) awareness strategy, a partial (experts-only) one, and the spectrum of variations in between. It seems that current safe-alignment leaders have strong reasons not to focus on expanding awareness, or even not to regard that strategy as possibly useful. I believe these reasons should not be treated as settled; they should be debated more rather than dismissed as too hard to act on.
We can't assume that someone capable of solving safe alignment would also be aware of the problem. It seems probable that someone capable of solving safe alignment currently doesn't understand the true magnitude of the problem. In that case, a necessary step on the path to success is that person coming to understand the problem, and we can be crucial to that. I understand that with this strategy, as with many safe-alignment strategies, the probability that it reduces rather than increases our chances of success must be carefully evaluated.
In the current alignment research context, there is possibly also an opportunity in taking more thinking-on-the-margin approaches. The impact of present and near-future AI systems on AGI and safe alignment is very likely of high importance, more so than its current share of attention suggests, because these systems are very likely to shorten timelines ("how much?" is important and currently ambiguous). It seems we are not evaluating enough the probably crucial impact of current deep models on the problem, but I'm glad the idea is growing.
(paulfchristiano, 2022) states: "AI systems could have comparative disadvantage at alignment relative to causing trouble, so that AI systems are catastrophically risky before they solve alignment." I agree that this is one of the most important issues: if AI systems are capable of improving safe AI alignment research, they will very likely be even more capable of improving non-safety-focused AI research, and probably superintelligence-creation research. This means that the technology most likely to be crucial to the birth of superintelligence lowers the probability of safe alignment. So two crucial questions are: how do we fight this, and, more essentially, how and how much can current and near-future AI systems improve AGI creation?
Now I will propose a controversial thinking-on-the-margin tactic for AI safe alignment: the (I would argue highly) probable ideation of new, different, or better safe-alignment strategies by alignment researchers taking advantage of stimulants and hallucinogens. We are definitely in a situation where we must take any advantage we can. Non-ordinary states of consciousness seem very much worth trying because of the almost nonexistent risks involved. (The same goes for nootropics, but I'm barely familiar with them.)
Finally, I will share what I believe should currently be the most important issue across all versions of alignment research, on top of all the previous ideas: if trying to safely align will almost certainly not solve our x-risk, as EY states in "MIRI announces new 'Death with Dignity' strategy", then what it will have achieved is only higher s-risk probabilities. (THANK YOU for the infohazards T.T) So one option is to aim to shorten the x-risk timeline if that reduces the probability of s-risks, i.e. helping to build the superintelligence asap.
Or to shift the whole strategy toward lowering s-risks. This is especially relevant to us because we have a higher probability of s-risk (thanks e.e). So we should focus on the issues that have increased our s-risk probabilities.