Success without dignity: a nearcasting story of avoiding catastrophe by luck
post by HoldenKarnofsky · 2023-03-14T19:23:15.558Z · LW · GW · 17 comments
I’ve been trying to form a nearcast-based [LW · GW] picture of what it might look like to suffer or avoid an AI catastrophe. I’ve written a hypothetical “failure story” (How we might stumble into AI catastrophe) and two “success stories” (one presuming a relatively gradual takeoff, one assuming a more discontinuous one).
Those success stories rely on a couple of key actors (a leading AI lab and a standards-and-monitoring organization) making lots of good choices. But I don’t think stories like these are our only hope. Contra Eliezer [LW · GW], I think we have a nontrivial1 chance of avoiding AI takeover even in a “minimal-dignity” future - say, assuming essentially no growth from here in the size or influence of the communities and research fields focused specifically on existential risk from misaligned AI, and no highly surprising research or other insights from these communities/fields either. (There are further risks beyond AI takeover; this post focuses on AI takeover.)
This is not meant to make anyone relax! Just the opposite - I think we’re in the “This could really go lots of different ways” zone where marginal effort is most valuable. (Though I have to link to my anti-burnout take after saying something like that.) My point is nothing like “We will be fine” - it’s more like “We aren’t stuck at the bottom of the logistic success curve; every bit of improvement in the situation helps our odds.”
I think “Luck could be enough” should be the strong default on priors,2 so in some sense I don’t think I owe tons of argumentation here (I think the burden is on the other side). But in addition to thinking “I haven’t heard knockdown arguments for doom,” I think it’s relevant that I feel like I can at least picture success with minimal dignity (while granting that many people will think my picture is vague, wishful and wildly unrealistic, and they may be right). This post will try to spell that out a bit.
It won’t have security mindset, to say the least - I’ll be sketching things out that “could work,” and it will be easy (for me and others) to name ways they could fail. But I think having an end-to-end picture of how this could look might be helpful for understanding my picture (and pushing back on it!).
I’ll go through:
- How we could navigate the initial alignment problem:3 getting to the first point of having very powerful (human-level-ish), yet safe, AI systems.
- For human-level-ish AIs, I think it’s plausible that the alignment problem is easy, trivial or nonexistent. (Also plausible that it’s fiendishly hard!)
- If so, it could end up cheap and easy to intent-align human-level-ish AIs, such that such AIs end up greatly outnumbering misaligned ones - putting us in a good position for the deployment problem (next point).
- How we could navigate the deployment problem:4 reducing the risk that someone in the world will deploy irrecoverably dangerous systems, even though the basic technology exists to make powerful (human-level-ish) AIs safe. (This is often discussed through the lens of “pivotal acts,” though that’s not my preferred framing.5)
- You can think of this as containing two challenges: stopping misaligned human-level-ish AI, and maintaining alignment as AI goes beyond human level.
- A key point is that once we have aligned human-level-ish AI, the world will probably be transformed enormously, to the point where we should consider ~all outcomes in play.
- (Briefly) The main arguments I’ve heard for why this picture is unrealistic/doomed.
- A few more thoughts on the “success without dignity” idea.
As with many of my posts, I don’t claim personal credit for any new ground here. I’m leaning heavily on conversations with others, especially Paul Christiano and Carl Shulman.
The initial alignment problem
What happens if you train an AI using the sort of process outlined here [LW · GW] - essentially, generative pretraining followed by reinforcement learning, with the latter refereed by humans?
I think danger is likely by default - but not assured. It seems to depend on a number of hard-to-predict things:
- How accurate is reinforcement [AF · GW]?
- The greater an AI’s ability to get better performance by deceiving, manipulating or overpowering supervisors, the greater the danger.
- There are a number of reasons (beyond explicit existential risk concern) that AI labs might invest heavily in accurate reinforcement, via techniques like task decomposition/amplification, recursive reward modeling, mechanistic interpretability, and using AIs to debate or supervise other AIs. Relatively moderate investments here could imaginably lead to highly accurate reinforcement.
- How “natural” are intended generalizations (like “Do what the supervisor is hoping I’ll do, in the sense that most humans would mean this phrase rather than in a precise but malign sense”) vs. unintended ones (like “Do whatever maximizes reward”)?
- It seems plausible that large amounts of generative pretraining could result in an AI having a suite of well-developed humanlike concepts, such as “Do what the supervisor is hoping I’ll do, in the sense that most humans would mean this phrase rather than in a precise technical sense” - and also such as “Fool the supervisor into thinking I did well,” but the latter could be hard enough to pull off successfully in the presence of a basic audit regime (especially for merely human-level-ish AI), and/or sufficiently in conflict with various learned heuristics, that it could be disadvantaged in training.
- In this case, a relatively small amount of reinforcement learning could be enough to orient an AI toward policies that generalize as intended.
- How much is training “outcomes-based vs. process-based”? [LW · GW] That is, how much does it look like “An AI goes through a long episode, taking many steps that aren’t supervised or necessarily understood, and ultimately subject to gradient descent based on whether humans approve of the outcome?” vs. “Each local step the AI takes is subject to human supervision and approval?”
- The former leaves a lot of scope for mistaken feedback that trains deception and manipulation. The latter could still in some sense train “doing what humans think they want rather than what they actually want,” but that’s quite different from training “Do whatever results in a seemingly good outcome,” and I think it’s noticeably less vulnerable to some of the key risks.
- Outcomes-based training seems abstractly more “powerful,” and likely to be a big part of training the most powerful systems - but this isn’t assured. Today, training AIs based on outcomes of long episodes is unwieldy, and the most capable AIs haven’t had much of it.
- How natural/necessary is it for a sufficiently capable AI to form ambitious goals and act like a “maximizer”?
- There has been a lot of ink spilled about this, and I think it remains up in the air. I’ll just say that modern AI development methods are not clearly set up to produce “maximizers.”
For what it’s worth, it seems to me like the very shortest, straightest-line imaginable path to transformative AI from today looks relatively favorable on the above dimensions. Today’s most capable models mostly look like “lots of generative pretraining, tiny amounts of reinforcement learning,” which seems like a good thing at least from the point of view of (a) developing a large suite of human-like concepts; and (b) giving a good chance that the RL part of the training can be highly “accurate” (few opportunities to get better performance by deceiving, manipulating or overpowering supervisors). It’s imaginable that AI models could become transformative with a handful of additions (see footnote 6) that - while greatly changing capabilities and introducing serious alignment risks - wouldn’t clearly change what I just said.
I’d guess that the situation will get worse (e.g., more outcome-based training) compared to that picture, but I don’t know that. And even if it does, it still seems like we could end up with models that have human-level and human-like capabilities and lack ambitious goals of their own. (Even without a special effort.)
For an opposing view, see my dialogue with Nate Soares [LW · GW]. I could imagine being about 70% confident of Nate’s view if I’d done a lot of thinking I haven’t done, but I doubt most people are in that sort of position or should trust Nate (or Eliezer) to be, to the point of being certain of danger by default.
Basic countermeasures
There are also possible countermeasures to alignment risk, identifiable today, that I can imagine going quite well, without needing new breakthrough insights (we would need luck, though). Examples:
Simple checks and balances [AF · GW]. An example: we might train “AI watchdogs” to make the best arguments they can (ideally including discussions of internal states) that other AIs are dangerous, that specific measures can make them less dangerous, etc. Gradient descent could be used both to push against spurious arguments (as assessed by randomized deep dives) and to push toward more important+correct (according to human judges and auditors) critiques. There are tons of reasons this might not work, but I don’t think we should be shocked if it produces huge dividends, even coming from simply human-level AIs that are well-suited to finding flaws and dangers in models and outputs similar to themselves. These dividends could lead to highly accurate training that makes it hard to learn unintended generalizations, and/or to deployment scenarios where it’s overly hard for human-level-ish AIs to coordinate against humans.
Intense red-teaming. I’ve written about how adversarial training [AF · GW] could “undo” a good deal of misalignment that has popped up during training, especially if something like relaxed adversarial training [AF · GW] (simulating inputs that one can’t actually produce) turns out to be feasible. It’s plausible to me that AI companies might invest heavily in this kind of work, without needing to be mostly motivated by existential risk reduction (they might be seeking intense guarantees against e.g. lawsuit-driving behavior by AI systems).
Training on internal states. I think interpretability research could be useful in many ways, but some require more “dignity” than I’m assuming here7 and/or pertain to the “continuing alignment problem” (next section).8 If we get lucky, though, we could end up with some way of training AIs on their own internal states that works at least well enough for the initial alignment problem.
Training AIs on their own internal states risks simply training them to manipulate and/or obscure their own internal states, but this may be too hard for human-level-ish AI systems, so we might at least get off the ground with something like this.
A related idea is finding a regularizer that penalizes e.g. dishonesty, as in Eliciting Latent Knowledge [LW · GW].
It’s pretty easy for me to imagine that a descendant of the Burns et al. 2022 method, or an output of the Eliciting Latent Knowledge [LW · GW] agenda, could fit this general bill without needing any hugely surprising breakthroughs. I also wouldn’t feel terribly surprised if, say, 3 more equally promising approaches emerged in the next couple of years.
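To give a concrete flavor of what this could look like, here is a minimal sketch of a CCS-style probe in the spirit of Burns et al. 2022: a small probe is trained on a model’s internal activations for paired statements (a claim and its negation), using only consistency and confidence pressures rather than truth labels. The activations below are random placeholders standing in for real hidden states, and the sketch omits the normalization and other details of the actual paper - it illustrates the general shape of “training on internal states,” not a faithful implementation.

```python
import torch
import torch.nn as nn

hidden_dim, n_pairs = 512, 1000

# Placeholder activations: acts_pos[i] stands in for a model's hidden state on
# statement i, acts_neg[i] for its hidden state on the negation of statement i.
acts_pos = torch.randn(n_pairs, hidden_dim)
acts_neg = torch.randn(n_pairs, hidden_dim)

# A linear probe mapping a hidden state to a "probability the statement is true."
probe = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1000):
    p_pos = probe(acts_pos)
    p_neg = probe(acts_neg)

    # Consistency: a claim and its negation should get complementary probabilities.
    consistency = ((p_pos - (1 - p_neg)) ** 2).mean()
    # Confidence: penalize the degenerate solution p_pos = p_neg = 0.5.
    confidence = (torch.min(p_pos, p_neg) ** 2).mean()

    loss = consistency + confidence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```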
The deployment problem
Once someone has developed safe, powerful (human-level-ish) AI, the threat remains that:
- More advanced AI will be developed (including with the help of the human-level-ish AI), and it will be less safe, due to different development methods and less susceptibility to the basic countermeasures above.9
- As it gets cheaper and easier for anyone in the world to build powerful AI systems, someone will do so especially carelessly and/or maliciously.
The situation has now changed in a few ways:
- There’s now a lot more capacity for alignment research, threat assessment research (to make a more convincing case for danger and contribute to standards and monitoring), monitoring and enforcing standards, and more [AF · GW] (because these things can be done by AIs). I think interpretability looks like a particularly promising area for “automated research” - AIs might grind through large numbers of analyses relatively quickly and reach a conclusion about the thought process of some larger, more sophisticated system.
- There’s also a lot more capacity for capabilities research that could lead to more advanced, more dangerous AI.
- For a good outcome, alignment research or threat assessment research doesn’t have to “keep up with” capabilities research for a long time - a strong demonstration of danger, or a decisive/scalable alignment solution, could be enough.
It’s hard to say how all these factors will shake out. But it seems plausible that one of these things will happen:
- Some relatively cheap, easy, “scalable” solution to AI alignment (the sort of thing ARC is currently looking for) is developed and becomes widely used.
- Some decisive demonstration of danger is achieved, and AIs also help to create a successful campaign to persuade key policymakers to aggressively work toward a standards and monitoring regime. (This could be a very aggressive regime if some particular government, coalition or other actor has a lead in AI development that it can leverage into a lot of power to stop others’ AI development.)
- Something else happens to decisively change dynamics - for example, AIs turn out to be good enough at finding and patching security holes that the offense-defense balance in cybersecurity flips, and it becomes possible to contain even extremely capable AIs.
Any of these could lead to a world in which misaligned AI in the wild is at least rare relative to aligned AI. The advantage for humans+aligned-AIs could be self-reinforcing, as they use their greater numbers to push measures (e.g., standards and monitoring) to suppress misaligned AI systems.
I concede that we wouldn’t be totally out of the woods in this case - things might shake out such that highly-outnumbered misaligned AIs can cause existential catastrophe. But I think we should be optimistic by default from such a point. A footnote elaborates on this, addressing Steve Byrnes’s discussion of a related topic [AF · GW] (which I quite liked and think raises good concerns, but isn’t decisive for the scenario I’m contemplating).10
More generally, I think it’s very hard to reason about a world with human-level-ish aligned AIs widely available (and initially outnumbering comparably powerful misaligned AIs), so I think we should not be too confident of doom starting from that point.
Some objections to this picture
The most common arguments I’ve heard for why this picture is hopeless involve some combination of:
- AI systems could quickly become very powerful relative to their supervisors [AF · GW], which means we have to confront a harder version of the alignment problem without first having human-level-ish aligned systems.
- I think it’s certainly plausible this could happen, but I haven’t seen a reason to put it at >50%.
- To be clear, I expect an explosive “takeoff” by historical standards. I want to give Tom Davidson’s analysis [LW · GW] more attention, but it implies that there could be mere months between human-level-ish AI and far more capable AI (but that could be enough for a lot of work by human-level-ish AI).
- One key question: to the extent that we can create a feedback loop with AI systems doing research to improve hardware and/or software efficiency (which then increases the size and/or capability of the “automated workforce,” enabling further research ...), will this mostly be via increasing the number of AIs or by increasing per-AI capabilities? There could be a feedback loop with human-level-ish AI systems exploding in number, which seems to present fewer (though still significant) alignment challenges than a feedback loop with AI systems exploding past human capability.11
- It’s arguably very hard to get even human-level-ish capabilities without ambitious misaligned aims. I discussed this topic at some length with Nate Soares - notes here [LW · GW]. I disagree with this as a default (though, again, it’s plausible) for reasons given at that link.
- Expecting “offense-defense” asymmetries (as in this post [AF · GW]) such that we’d get catastrophe even if aligned AIs greatly outnumber misaligned ones. Again, this seems plausible, but not the right default guess for how things will go, as discussed at the end of the previous section.
I think all of these arguments are plausible, but very far from decisive (and indeed each seems individually <50% likely to me).
Success without dignity
This section is especially hand-wavy and conversational. I probably don’t stand by what you’d get from reading any particular sentence super closely and taking it super seriously. I stand by some sort of vague gesture that this section is trying to make.
I have a high-level intuition that most successful human ventures look - from up close - like dumpster fires. I’m thinking of successful organizations - including those I’ve helped build - as well as cases where humans took highly effective interventions against global threats, e.g. smallpox eradication; recent advances in solar power that I’d guess are substantially traceable to subsidy programs; whatever reasons we haven’t had a single non-test nuclear detonation since 1945.
I expect the way AI risk is “handled by society” to look like a dumpster fire, in the sense that lots of good interventions will be left on the table, lots of very silly things will be done, and no intervention will be satisfyingly robust. Alignment measures will be fallible, standards regimes will be gameable, security setups will be imperfect, and even the best AI labs will have lots of incompetent and/or reckless people inside them doing scary things.
But I don’t think that automatically translates to existential catastrophe, and this distinction seems important. (An analogy: “that bednet has lots of gaping holes in it” vs. “That bednet won’t help” or “That person will get malaria.”) The future is uncertain; we could get lucky and stumble our way into a good outcome.
Furthermore, there are a number of interventions that could interact favorably with some baseline good luck. (I’ll discuss this more in a future post.)
One key strategic implication of this view that I think is particularly worth noting:
- I think there’s a common headspace that says something like: “We’re screwed unless we get a miracle. Hence, ~nothing matters except for (a) buying time for that miracle to happen and (b) optimizing heavily for attracting and supporting unexpectedly brilliant people with unexpectedly great ideas.”
- My headspace is something more like: “We could be doomed even in worlds where our interventions go as well as could be reasonably expected; we could be fine in worlds where they go ~maximally poorly; every little bit (of alignment research, of standards and monitoring, of security research, etc.) helps; and a lot of key interventions would benefit from things other than time and top intellectual talent - they’d benefit from alignment-concerned people communicating well, networking well, being knowledgeable about the existing AI state of the art, having good reputations with regulators and the general public, etc. etc. etc.”
- That is, in my headspace, there are lots of things that can help - which also means that there are lots of factors we need to worry about. Many are quite ugly and unpleasant to deal with (e.g., PR and reputation). And there are many gnarly tradeoffs with no clear answer - e.g., I think there are things that hurt community epistemics12 and/or risk making the situation worse13 that still might be right to do.
- I have some suspicion that the first headspace is self-serving for people who really don’t like dealing with that stuff and would rather focus exclusively on trying to do/support/find revolutionary intellectual inquiry. I don’t normally like making accusations like this (they rarely feel constructive) but in this case it feels like a bit of an elephant in the room - it seems like quite a strange view on priors to believe that revolutionary intellectual inquiry is the “whole game” for ~any goal, especially on the relatively short timelines many people have for transformative AI.
I don’t feel emotionally attached to my headspace. It’s nice to not think we’re doomed, but not a very big deal for me,14 and I think I’d enjoy work premised on the first headspace above at least as much as work premised on the second one.
The second headspace is just what seems right at the moment. I haven’t seen convincing arguments that we won’t get lucky, and it seems to me like lots of things can amplify that luck into better odds of success. If I’m missing something correctible, I hope this will prompt discussion that leads there.
Notes
1. Like >10% ↩
2. Since another way of putting it is: “AI takeover (a pretty specific event) is not certain (conditioned on the ‘minimal-dignity’ conditions above, which don’t seem to constrain the future a ton).” ↩
3. Phase 1 in this analysis [LW · GW] ↩
4. Phase 2 in this analysis [LW · GW] ↩
5. I think there are alternative ways things could go well, which I’ll cover in the relevant section, so I don’t want to be stuck with a “pivotal acts” frame. ↩
6. Salient possible additions to today’s models:
- Greater scale (more parameters, more pretraining)
- Multimodality (training the same model on language + images or perhaps video)
- Memory/long contexts: it seems plausible that some relatively minor architectural modification could make today’s language models much better at handling very long contexts than today’s cutting-edge systems, e.g. they could efficiently identify which parts of an even very long context ought to be paid special attention at any given point. This could imaginably be sufficient for them to be “taught” to do tasks, in roughly the way humans are (e.g., I might give an AI a few examples of a successfully done task, ask it to try, critique it, and repeat this loop over the course of hundreds of pages of “teaching” - note that the “teaching” is simply building up a context it can consult for its next step, it is not using gradient descent).
- Scaffolding: a model somewhat like today’s cutting-edge models could be put in a setting where it’s able to delegate tasks to copies of itself. Such tasks might include things like “Think about how to accomplish X, and send me some thoughts” and “That wasn’t good enough, think more please.” In this way, it could vary the amount of “thought” and effort it puts into different aspects of its task. It could also be given access to some basic actuators (shell access might be sufficient). None of this need involve further training, and it could imaginably give an AI enough of the functionality of things like “memory” to be quite capable. (A toy sketch of this kind of delegation loop appears just after this footnote.)
It’s not out of the question to me that we could get to transformative AI with additions like this, and with the vast bulk of the training still just being generative pretraining. ↩
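For concreteness, here is a toy sketch of the scaffolding idea from the bullet above: an outer loop that delegates subtasks to fresh calls to a fixed model and asks for revisions, with no gradient updates anywhere. The query_model function is a hypothetical stand-in (a dummy here so the sketch runs end to end), not any particular lab’s API.

```python
def query_model(prompt: str) -> str:
    """Placeholder for a single call to a frozen language model (dummy responses here)."""
    if "Break this goal" in prompt:
        return "subtask A\nsubtask B"
    if "Is this good enough" in prompt:
        return "YES"
    return f"(model output for: {prompt[:40]}...)"

def delegate(task: str, max_revisions: int = 3) -> str:
    """Ask a copy of the model for a draft, then loop on self-critique and revision."""
    draft = query_model(f"Think about how to accomplish this, and send me some thoughts:\n{task}")
    for _ in range(max_revisions):
        verdict = query_model(f"Task: {task}\nDraft: {draft}\nIs this good enough? Answer YES or NO.")
        if verdict.strip().upper().startswith("YES"):
            break
        draft = query_model(f"That wasn't good enough, think more please.\nTask: {task}\nPrevious draft: {draft}")
    return draft

def solve(goal: str) -> list[str]:
    """Split a goal into subtasks and delegate each one to a model copy."""
    plan = query_model(f"Break this goal into a short list of subtasks, one per line:\n{goal}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]
    return [delegate(s) for s in subtasks]

print(solve("Summarize this repository and propose a test plan"))
```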
7. E.g., I think interpretability could be very useful for demonstrating danger, which could lead to a standards-and-monitoring regime, but such a regime would be a lot more “dignified” than the worlds I’m picturing in this post. ↩
8. I think interpretability is very appealing as something that large numbers of relatively narrow “automated alignment researchers” could work on. ↩
9. Debate-type setups seem like they would get harder for humans to adjudicate as AI systems advance; more advanced AI seems harder to red-team effectively without its noticing “tells” re: whether it’s in training; and internal-state-based training seems more likely to result in “manipulating one’s own internal states” for more advanced AI. ↩
10. Byrnes’s post [AF · GW] seems to assume there are relatively straightforward destruction measures that require draconian, scary “plans” to stop. (Contrast with my discussion here [AF · GW], in which AIs can be integrated throughout the economy in ways that make it harder for misaligned AIs to “get off the ground” with respect to being developed, escaping containment and acquiring resources.)
- I don’t think this is the right default/prior expectation, given that we see little evidence of this sort of dynamic in history to date. (Relatively capable people who want to cause widespread destruction even at cost to themselves are rare, but do periodically crop up and don’t seem to have been able to effect these sorts of dynamics to date. Individuals have done a lot of damage by building followings and particularly via government power, but this seems very different from the type of dynamic discussed in Byrnes’s post.)
- One could respond by pointing to particular vulnerabilities and destruction plans that seem hard to stop, but I haven’t been sold on anything along these lines, especially when considering that a relatively small number of biological humans’ surviving could still be enough to stop misaligned AIs (if we posit that aligned AIs greatly outnumber misaligned AIs). And I think misaligned AIs are less likely to cause any damage if the odds are against ultimately achieving their aims.
- I note that Byrnes’s post also seems to assume that it’s greatly expensive and difficult to align an AI (I conjecture that it may not be, above).
11. The latter, more dangerous possibility seems more likely to me, but it seems quite hard to say. (There could also of course be a hybrid situation, as the number and capabilities of AI grow.) ↩
12. I think optimizing for community epistemics has real downsides, both via infohazards/empowering bad actors and via reputational risks/turning off people who could be helpful. I wish this weren’t the case, and in general I heuristically tend to want to value epistemic virtue very highly, but it seems like it’s a live issue - I (reluctantly) don’t think it’s reasonable to treat “X is bad for community epistemics” as an automatic argument-ender about whether X is bad (though I do think it tends to be a very strong argument). ↩
13. E.g., working for an AI lab and speeding up AI (I plan to write more about this).
More broadly, it seems to me like essentially all attempts to make the most important century go better also risk making it go a lot worse, and for anyone out there who might’ve done a lot of good to date, there are also arguments that they’ve done a lot of harm (e.g., by raising the salience of the issue overall).
Even “Aligned AI would be better than misaligned AI” seems merely like a strong bet to me, not like a >95% certainty, given what I see as the appropriate level of uncertainty about topics like “What would a misaligned AI actually do, incorporating acausal trade considerations and suchlike?”; “What would humans actually do with intent-aligned AI, and what kind of universe would that lead to?”; and “How should I value various outcomes against each other, and in particular how should I think about hopes of very good outcomes vs. risks of very bad ones?”
To reiterate, on balance I come down in favor of aligned AI, but I think the uncertainties here are massive - multiple key questions seem broadly “above our pay grade” as people trying to reason about a very uncertain future. ↩
14. I’m a person who just doesn’t pretend to be emotionally scope-sensitive or to viscerally feel the possibility of impending doom. I think it would be hard to do these things if I tried, and I don’t try because I don’t think that would be good for anyone.
I like doing worthy-feeling work (I would be at least as happy with work premised on a “doomer” worldview as on my current one) and hanging out with my family. My estimated odds that I get to live a few more years vs. ~50 more years vs. a zillion more years are quite volatile and don’t seem to impact my daily quality of life much. ↩
17 comments
Comments sorted by top scores.
comment by Hastings (hastings-greer) · 2023-03-14T19:40:12.269Z · LW(p) · GW(p)
A low quality prior on odds of lucky alignment: we can look at the human intelligence sharp left turn from different perspectives
Worst case scenario S risk: pigs, chickens, cows
X risk: Homo floresiensis, etc.
Disastrously unaligned but then the superintelligence inexplicably started to align itself instead of totally wiping us out: Whales, gorillas
Unaligned but that's randomly fine for us: raccoons, rats
Largely aligned: Housecats
↑ comment by cousin_it · 2023-03-15T10:36:02.854Z · LW(p) · GW(p)
X risk would be passenger pigeons, no?
Anyway your comment got me thinking. So far it seems the territory colonized by humans is a subset of the territory previously colonized by life, not stretching beyond it. And the territory covered by life is also not all of Earth, nevermind the universe. So we can imagine AI occupying the most "cushy" subset of former human territory, with most humans removed from there, some subsisting as rats, some as housecats, some as wild animals periodically hit by incomprehensible dangers coming from the AI zone (similar to oil spills and habitat destruction), and some in S-risk type situations due to the AI remaining concerned with humans in some way.
Though this "concentric circles" model is maybe a bit too neat to imagine, and too similar to existing human myths about gods and so on. So let's not trust it too much.
Replies from: DanArmak
↑ comment by DanArmak · 2023-03-18T15:51:09.995Z · LW(p) · GW(p)
So we can imagine AI occupying the most "cushy" subset of former human territory
We can definitely imagine it - this is a salience argument - but why is it at all likely? Also, this argument is subject to reference class tennis: humans have colonized much more and more diverse territory than other apes, or even all other primates.
Once AI can flourish without ongoing human support (building and running machines, generating electricity, reacting to novel environmental challenges), what would plausibly limit AI to human territory, let alone "cushy" human territory? Computers and robots can survive in any environment humans can, and in some where we at present can't.
Also: the main determinant of human territory is inter-human social dynamics. We are far from colonizing everywhere our technology allows, or (relatedly) breeding to the greatest number we can sustain. We don't know what the main determinant of AI expansion will be; we don't even know yet how many different and/or separate AI entities there are likely to be, and how they will cooperate, trade or conflict with each other.
comment by Lukas_Gloor · 2023-03-14T21:56:15.357Z · LW(p) · GW(p)
I think “Luck could be enough” should be the strong default on priors,2 [LW(p) · GW(p)] so in some sense I don’t think I owe tons of argumentation here (I think the burden is on the other side).
I agree with this being the default and the burden being on the other side. At the same time, I don't think of it as a strong default.
Here's a frame that I have that already gets me to a more pessimistic (updated) prior:
It has almost never happened that people who developed and introduced a revolutionary new technology displayed a lot of foresight about its long-term consequences. For instance, there were comparatively few efforts at major social media companies to address ways in which social media might change society for the worse. The same goes for the food industry and the obesity epidemic or online dating and its effects on single parenthood rates. When people invent cool new technology, it makes the world better on some metrics but creates new problems on its own. The whole thing is accelerating and feels out of control.
It feels out of control because even if we get cool new things from tech progress, we don't seem to be getting any better at fixing the messiness that comes with it (misaligned incentives/goodharting, other Molochian forces, world-destroying tech becoming ever more accessible). Your post says "a [] story of avoiding catastrophe by luck." This framing makes it sound like things would be fine by default if it weren't for some catastrophe happening. However, humans have never seemed particularly "in control" over technological progress. For things to go well, we need the opposite of a catastrophe – a radical change towards the upside. We have to solve massive coordination problems and hope for a technology that gives us god-like power, finally putting sane and compassionate forces in control over the future. It so happens that we can tell a coherent story about how AI might do this for us. But to say that it might go right just by luck – I don't know, that seems far-fetched!
All of that said, I don't think we can get very far arguing from priors. What carries by far the most weight are arguments about alignment difficulty, takeoff speeds, etc. And I think it's a reasonable view to say that it's very unlikely that any researchers currently know enough to make highly confident statements about these variables. (Edit: So, I'm not sure we disagree too much – I think I'm more pessimistic about the future than you are, but I'm probably not as pessimistic as the position you're arguing against in this post. I mostly wanted to make the point that I think the "right" priors support at least moderate pessimism, which is a perspective I find oddly rare among EAs.)
FWIW, it's not obvious to me that slow takeoff is best. Fast takeoff at least gives you god-like abilities early on, which are useful from a perspective of "we were never particularly in control over history; lots of underlying problems need fixing before we pass a point of no return." By contrast, with slow takeoff, coordination problems seem more difficult because (at least by default) there will be more actors using AIs in some ways or other and it's not obvious that the AIs in a slow-takeoff scenario will be all that helpful at facilitating coordination.
comment by RogerDearnaley (roger-d-1) · 2023-05-28T01:26:49.386Z · LW(p) · GW(p)
My view is that we've already made some significant progress on alignment, compared to say where we were O(15) years ago, and have also had some unexpectedly lucky breaks. Personally I'd list:
- Value learning, as a potential solution to issues like corrigibility and the shut-down problem.
- Once your value learner is a STEM-capable AGI, then doing or assisting with alignment research becomes a convergent instrumental strategy for it.
- The closest thing we currently have to an AGI, LLMs, are fortunately not particularly agentic; they're more of a tool AI (until you wrap them in a script to run them in a loop with suitable prompts).
- To be more specific, for the duration of generating a specific document (at least before RLHF), an LLM emulates the output of a human or humans generating text, so to the extent that they pick up/emulate agentic behavior from us, it's myopic past the end of document, and emulates some human(s) who have contributed text to their training set. Semi-randomly-chosen humans are a type of agent that humans are unusually good at understanding and predicting. The orthogonality thesis doesn't apply to them: they will have an emulation of some version of human values. Like actual random humans, they're not inherently fully aligned, but on average they're distinctly better than paperclip maximizers. (Also both RLHF and prompts can alter the random distribution.)
- While human values are large and fragile, LLMs are capable of capturing fairly good representations of large fragile things, including human values. So things like constitutional RL [LW · GW] work. That still leaves concerns about what happens when we apply optimization pressure or distribution shifts to these representations of human values, but it's at least a lot better than expecting us to hand-craft a utility function for the entirety of human values in symbolic form. If we could solve knowing when an LLM representation of human values was out-of-distribution and not reliable, then we might actually have a basis for an AGI-alignment solution that I wouldn't expect to immediately kill everyone. (For example, it might make an acceptable initial setting to preload into an AGI value learner that could then refine it and extend its region of validity.) Even better, knowing when an LLM isn't able to give a reliable answer is a capabilities problem, not just an alignment problem, since it's the same issue as getting an LLM to reply "I don't know" when asked a question to which it would otherwise have hallucinated a false answer. So all of the companies buying and selling access to LLMs are strongly motivated to solve this. (Indeed, leading LLM companies appear to have made significant progress on reducing hallucination rates in the last year.)
This is a personal list and I'm sure will be missing some items.
That we've made some progress and had some lucky breaks doesn't guarantee that this will continue, but it's unsurprising to me that
- alignment research in the context of a specific technology that we can actually experiment with is easier than trying to do alignment research in abstract for arbitrary future systems, and that
- with more people interested in alignment research we're making progress faster.
comment by baturinsky · 2023-03-15T06:35:52.975Z · LW(p) · GW(p)
One of the most dangerous things that even one misaligned AI could theoretically pull is to successfully launch a misaligned Von Neumann probe, because then it would be extremely hard to track it down in space and stop it before it does its thing.
Replies from: MilesTS
↑ comment by MilesTS · 2023-06-04T07:41:18.463Z · LW(p) · GW(p)
What about quickly launching a missile following its trajectory, using the same technology? The probe eventually needs to slow down to survive impact and the missile doesn't, so preventing Von Neumann probes seems fairly straightforward to me. My understanding is that tracking objects in space is very easy unless they've had time to cool to near absolute zero.
On the other hand, this requires that a misaligned AI was able to build such a probe and get it on a rocket it built or commandeered without being detected or stopped. That rules out safety via monitoring (and related approaches) and we would need to rely on it being essentially aligned anyway (such as via the "natural generalizations" Holden mentioned).
comment by Noosphere89 (sharmake-farah) · 2024-09-29T17:45:32.821Z · LW(p) · GW(p)
I'd say the biggest thing that happened to make this scenario plausible is that we learned a few very important things about alignment that make our lives easier:
1. It's looking like deep learning and AI in general naturally generalize much better with respect to alignment than a lot of LWers thought several years ago.
A lot of the reason for that comes down to people underestimating how easy values data is to learn and underestimating how hard it is to learn a lot of useful capabilities, and more generally underestimating the influence of your data sources on your values.
More generally, I think LWers had a habit of overestimating the need for insights and underestimating the need for engineering work in alignment.
Indeed, a central fault line w.r.t. the entire LessWrong idea is my general view that insights in your head are far less necessary than experimenting/engineering work to solve a lot of problems, including alignment problems.
The link for it is down below:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/
2. I believe this:
- Some relatively cheap, easy, “scalable” solution to AI alignment (the sort of thing ARC is currently looking for) is developed and becomes widely used.
Is actually pretty likely, and I'd argue we even have the rough outlines of how to do it, which is essentially to make large, curated synthetic datasets about human values, and to make the AI internalize what a human values before it can be deceptive/seek power.
Contra @RogerDearnaley [LW · GW], I think the evidence is more like "human values are less fragile and less complex", because remember that the human values data is only a small part of an LLM's training data, and LLMs are genuinely beyond look-up tables, and are actually discovering new regularities (the evidence for it will be shown below), so the main takeaway is that the human value function is both simpler and less fragile than people thought 15-20 years ago:
https://www.lesswrong.com/posts/kixewxJfuZ23DQDfF/how-should-deepmind-s-chinchilla-revise-our-ai-forecasts#4__LLMs_are__intelligent__ [LW · GW]
One final takeaway is that a lot of AI progress, as well as alignment progress, is essentially the revenge of the blank slate view popularly held in the 20th century; while the strict form is disproven, we got ourselves quite close to it. And since AI and human brains are quite similar (my thread is below), AI progress also has implications for the neuroscience of the human brain, and I have my own tentative takes on what AI progress means for human brains.
https://x.com/SharmakeFarah14/status/1837528997556568523
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-10-05T01:25:12.385Z · LW(p) · GW(p)
I think human values have a very simple and theoretically predictable basis: they're derived from a grab-bag of evolved behavioral, cognitive and sensory heuristics which had a good cost/performance ratio for maximizing our evolutionary fitness (mostly on the Savannah). So the basics of some of them are really easy to figure out: e.g. "Don't kill everyone!" can be trivially derived from Darwinian first principles (and would equally apply to any other sapient species). So I think modelling human values to low (but hopefully sufficient for avoiding X-risk) accuracy is pretty simple. E.g. if there was a guide for alien zookeepers (who were already familiar with Terran biochemistry) on how to keep humans, how long would that need to be for the humans to mostly survive in captivity? I'm guessing a single textbook could do a good job of this, maybe even just a long chapter in a textbook.
However, I think there is a lot more complexity in the finer/subtler details, much of which is biological in nature, starting with the specific grab-bag of heuristics that evolution happened to land on and their tuning, then with even more sociological/cultural/historical complexity layered on top. So where I think the complexity ramps up a lot is if you want to do a really good job of modelling human values accurately in all their detail, as we would clearly prefer our ASIs to do. If you look through the Dewey Decimal system, roughly half the content of any general-purpose library is devoted to sub-specialities of "how to make humans happy". However, LLMs are good at learning large amounts of complex, nuanced information. So an LLM knowing how to make humans happy in a lot of detail is not that surprising: in general, modern LLMs display detailed knowledge of this material.
The challenging part is ensuring that an LLM-powered agent cares about making humans happy, more than, say, a typical human autocrat does. Base model LLMs are "distilled" from many humans, so they absorb humans' capability for consideration for others, and also humans' less aligned traits like competitiveness and ambition. The question then is how to ensure which of these dominate, and how reliably, in agents powered by an instruct-trained LLM.
Replies from: sharmake-farah
↑ comment by Noosphere89 (sharmake-farah) · 2024-10-05T01:34:53.611Z · LW(p) · GW(p)
I think the key crux is that this, in my view, is basically unnecessary:
However, I think there is a lot more complexity in the finer/subtler details, most of which is biological in nature, starting with the specific grab-bag of heuristics that evolution happened to land on and their tuning, with even more sociological/cultural/historical complexity layered on top. So where I think the complexity ramps up a lot is if you want to do a really good job of modelling human values accurately, as we would clearly prefer our ASIs to do.
@Steven Byrnes [LW · GW] talks about how the mechanisms used in human brains might be horrifically complicated, but that the function is simple enough that you can code it quite well and robustly for AIs, and my difference from @Steven Byrnes [LW · GW] is that I believe that this basically also works for the things that make humans have values, like the social learning parts of our brains.
Thus it's a bit of a conditional claim, in that either the mechanism used in human brains is also simple, or we can simplify it radically to preserve the core function while discarding the unnecessary (in my view) complexity, and that's the takeaway I have from LLMs learning human values.
Link and quote below:
In other words, the brain's implementation of that thing can be super-complicated, but the input-output relation cannot be that complicated—at least, the useful part of the input-output relation cannot be that complicated.
The crustacean stomatogastric ganglion central pattern generators discussed above are a great example: their mechanisms are horrifically complicated, but their function is simple: they create a rhythmic oscillation. Hey, you need a rhythmic oscillation in your AGI? No problem! I can do that in one line of Python.
https://www.lesswrong.com/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than#If_some_circuit_in_the_brain_is_doing_something_useful__then_it_s_humanly_feasible_to_understand_what_that_thing_is_and_why_it_s_useful__and_to_write_our_own_CPU_code_that_does_the_same_useful_thing_ [LW · GW]
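For concreteness, the kind of one-liner the quoted passage is gesturing at might look something like this (an arbitrary sine oscillator with placeholder frequency, not anything taken from the linked post):

```python
import math

# The "one line": a rhythmic oscillation as a function of time (frequency is arbitrary).
oscillation = lambda t, freq=1.0: math.sin(2 * math.pi * freq * t)

print([round(oscillation(t / 10), 2) for t in range(10)])  # sample a few values
```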
Also, a question for this quote is what's the assumed capability/compute level used in this thought experiment?
E.g. if there was a guide for alien zookeepers (ones already familiar with Terran biochemistry) on how to keep humans, how long would it need to be for the humans to mostly survive?
Replies from: roger-d-1
↑ comment by RogerDearnaley (roger-d-1) · 2024-10-05T08:08:35.173Z · LW(p) · GW(p)
I basically agree, for three reasons:
- The level of understanding of and caring about human values required to not kill everyone and be able to keep many humans alive, is actually pretty low (especially on the knowledge side).
- That's also basically sufficient to motivate wanting to learn more about human values, and being able to, so then the Value Learning [? · GW] process then kicks in: a competent and caring alien zookeeper would want to learn more about their charges' needs.
- We have entire libraries half of whose content is devoted to "how to make humans happy", and we already fed most of them into our LLMs as training material. On a factual basis, knowing how to make humans happy in quite a lot of detail (and for a RAG agent, looking up details they don't already have memorized) is clearly well within their capabilities. The part that concerns me is the caring side, and that's not conceptually complicated: roughly speaking, the question is how to ensure an agent's selfless caring for humans is consistently a significantly stronger motivation than various bad habits [LW · GW] like ambition, competitiveness, and powerseeking that it either picked up from us during the "distillation" of the base model, and/or learnt during RL training.
Also, a question for this quote is what's the assumed capability/compute level used in this thought experiment?
E.g. if there was a guide for alien zookeepers (ones already familiar with Terran biochemistry) on how to keep humans, how long would it need to be for the humans to mostly survive?
ASI, or high AGI: capable enough that we've lost control and alignment is an existential risk.
Replies from: sharmake-farah
↑ comment by Noosphere89 (sharmake-farah) · 2024-10-05T15:51:26.728Z · LW(p) · GW(p)
ASI, or high AGI: capable enough that we've lost control and alignment is an existential risk.
Then the answer is probably kilobytes to megabytes, but at any rate the guide for alien zookeepers can be very short, and that the rest can be learned from data.
I like your point that humans aren't aligned, and while I'm more optimistic about human alignment than you are, I agree that the level of human alignment currently is not enough to make a superintelligence safe if it only had human levels of motivation/reliability.
Weirdly enough, I think getting aligned superintelligence is both harder and easier than you think, and I'm defining alignment like you, in that we could have a superintelligence deployed into the world that cares at least for humans totally and doesn't need restraints on its power like law enforcement or government of superintelligences.
The thing that makes alignment harder is that I believe achieving FOOM for AIs, while unlikely, isn't obviously impossible, and I suspect a whole lot of algorithmic progress will be done right around the cusp when AIs start to automate research without humans in the loop. The only real bottlenecks are power and physical interfaces like robotics, and if these are easy/very easy to solve, I see fast FOOM as being very plausible.
The thing that makes alignment easier is that currently, alignment generalizes more than capabilities, which is good for us, and it's looking like influencing an AI's values through its data is far easier, for deep reasons, than making it have great capabilities like being an autonomous researcher, which means we could get by on smaller data quantities assuming very high sample efficiency:
> In general, it makes sense that, in some sense, specifying our values and a model to judge latent states is simpler than the ability to optimize the world. Values are relatively computationally simple and are learnt as part of a general unsupervised world model where there is ample data to learn them from (humans love to discuss values!). Values thus fall out mostly ‘for free’ from general unsupervised learning. As evidenced by the general struggles of AI agents, ability to actually optimize coherently in complex stochastic ‘real-world’ environments over long time horizons is fundamentally more difficult than simply building a detailed linguistic understanding of the world.
Link below:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/
I think that we agree on a lot, and only really disagree on how much data is necessary for a good outcome, if at all we disagree.
↑ comment by RogerDearnaley (roger-d-1) · 2024-10-05T20:22:59.638Z · LW(p) · GW(p)
I like your point that humans aren't aligned, and while I'm more optimistic about human alignment than you are, I agree that the level of human alignment currently is not enough to make a superintelligence safe if it only had human levels of motivation/reliability.
The most obvious natural experiments about what humans do when they have a lot of power with no checks-and-balances are autocracies. While there are occasional examples (such as Singapore) of autocracies that didn't work out too badly for the governed, they're sadly few and far between. The obvious question then is whether "humans who become autocrats" are a representative random sample of all humans, or if there's a strong selection bias here. It seems entirely plausible that there's at least some selection effects in the process of becoming an autocrat. A couple of percent of all humans are sociopaths, so if there were a sufficiently strong (two orders of magnitude or more) selection bias, then this might, for example, be a natural experiment about the alignment properties of a set of humans consisting mostly of sociopaths, in which case it usually going badly would be unsurprising.
The thing that concerns me is the aphorism "Power corrupts, and absolute power corrupts absolutely". There does seem to be a strong correlation between how long someone has had a lot of power and an increasing likelihood of them using it badly. That's one of the reasons for term limits in positions like president: humans seem to pretty instinctively not trust a leader after they've been in a position of a lot of power with few check-and-balances for roughly a decade. The histories of autocracies tend to reflect them getting worse over time, on decade time-scales. So I don't think the problem here is just from sociopaths. I think the proportion of humans who wouldn't eventually be corrupted by a lot of power with no checks-and-balances may be fairly low, comparable to the proportion of honest senior politicians, say.
How much of this argument applies to ASI agents powered by LLMs "distilled" from humans is unclear — it's much more obviously applicable to uploads of humans that then get upgraded to super-human capabilities.
Replies from: sharmake-farah
↑ comment by Noosphere89 (sharmake-farah) · 2024-10-05T20:35:44.370Z · LW(p) · GW(p)
IMO, there are fairly strong arguments that there is a pretty bad selection effect for people who aim to get into power generally being more Machiavellian/sociopathic than other people, and at least part of the problem is that the parts of your brain that care about other people get damaged when you gain power, which is obviously not good.
But still, I agree with you that an ASI that can entirely run society while only being as aligned as humans are to very distant humans likely ends up in a very bad state for us, possibly enough to be an S-risk or X-risk (I currently see S-risk being more probable than X-risk for ASI if we only had human-level alignment to others.)
comment by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-05-19T19:37:39.629Z · LW(p) · GW(p)
I think interpretability looks like a particularly promising area for “automated research” - AIs might grind through large numbers of analyses relatively quickly and reach a conclusion about the thought process of some larger, more sophisticated system.
Arguably, this is already starting to happen (very early, with obviously-non-x-risky systems) with interpretability LM agents like in FIND and MAIA.
Replies from: bogdan-ionut-cirstea
↑ comment by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2024-05-20T16:34:59.457Z · LW(p) · GW(p)
Related, from Advanced AI evaluations at AISI: May update:
Short-horizon tasks (e.g., fixing a problem on a Linux machine or making a web server) were those that would take less than 1 hour, whereas long-horizon tasks (e.g., building a web app or improving an agent framework) could take over four (up to 20) hours for a human to complete.
[...]
The Purple and Blue models completed 20-40% of short-horizon tasks but no long-horizon tasks. The Green model completed less than 10% of short-horizon tasks and was not assessed on long-horizon tasks. We analysed failed attempts to understand the major impediments to success. On short-horizon tasks, models often made small errors (like syntax errors in code). On longer horizon tasks, models devised good initial plans but did not sufficiently test their solutions or failed to correct initial mistakes. Models also sometimes hallucinated constraints or the successful completion of subtasks.
Summary: We found that leading models could solve some short-horizon tasks, such as software engineering problems. However, no current models were able to tackle long-horizon tasks.