AGI safety career advice

post by Richard_Ngo (ricraz) · 2023-05-02T07:36:09.044Z · LW · GW · 24 comments

Contents

  General mindset
  Alignment research
    Alignment research directions
  Governance work
    List of governance topics
None
25 comments

People often ask me for career advice related to AGI safety. This post (now also translated into Spanish) summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post [EA · GW] I wrote two years ago, containing a bunch of fairly general career advice.

General mindset

In order to have a big impact on the world you need to find a big lever. This document assumes that you think, as I do, that AGI safety is the biggest such lever. There are many ways to pull on that lever, though—from research and engineering to operations and field-building to politics and communications. I encourage you to choose between these based primarily on your personal fit—a combination of what you're really good at and what you really enjoy. In my opinion the difference between being a great versus a mediocre fit swamps other differences in the impactfulness of most pairs of AGI-safety-related jobs.

How should you find your personal fit? To start, you should focus on finding work where you can get fast feedback loops. That will typically involve getting hands-on or doing some kind of concrete project (rather than just reading and learning) and seeing how quickly you can make progress. Eventually, once you've had a bunch of experience, you might notice a feeling of confusion or frustration: why is everyone else missing the point, or doing so badly at this? (Though note that a few top researchers commented on a draft to say that they didn't have this experience.) For some people that involves investigating a specific topic (for me, the question “what’s the best argument that AGI will be misaligned?“); for others it's about applying skills like conscientiousness (e.g. "why can't others just go through all the obvious steps?") Being excellent seldom feels like you’re excellent, because your own abilities set your baseline for what feels normal.

What if you have that experience for something you don't enjoy doing? I expect that this is fairly rare, because being good at something is often very enjoyable. But in those cases, I'd suggest trying it until you observe that even a string of successes doesn't make you excited about what you're doing; and at that point, probably trying to pivot (although this is pretty dependent on the specific details).

Lastly: AGI safety is a young and small field; there’s a lot to be done, and still very few people to do it. I encourage you to have agency when it comes to making things happen: most of the time the answer to “why isn’t this seemingly-good thing happening?” or “why aren’t we 10x better at this particular thing?” is “because nobody’s gotten around to it yet”. And the most important qualifications for being able to solve a problem are typically the ability to notice it and the willingness to try. One anecdote to help drive this point home: a friend of mine has had four jobs at four top alignment research organizations; none of those jobs existed before she reached out to the relevant groups to suggest that they should hire someone with her skillset. And this is just what’s possible within existing organizations—if you’re launching your own project, there are far more opportunities to do totally novel things. (The main exception is when it comes to outreach and political advocacy. Alignment is an unusual field because the base of fans and supporters is much larger than the number of researchers, and so we should be careful to avoid alignment discourse being dominated by advocates who have little familiarity with the technical details, and come across as overconfident. See the discussion here [LW · GW] for more on this.)

Alignment research

I’ll start with some high-level recommendations, then give a brief overview of how I see the field.

  1. Alignment is mentorship-constrained. If you have little research experience, your main priority should be finding the best mentor possible to help you gain research skills—e.g. via doing research in a professor’s lab, or internships at AI labs. Most of the best researchers and mentors aren't (yet) working on alignment, so the best option for mentorship may be outside of alignment—but PhDs are long enough, and timelines short enough, that you should make sure that your mentor would be excited about supervising some kind of alignment-relevant research. People can occasionally start doing great work without any mentorship; if you’re excited about this, feel free to try it, but focus on the types of research where you have fast feedback loops.
  2. You’ll need to get hands-on. The best ML and alignment research engages heavily with neural networks (with only a few exceptions). Even if you’re more theoretically-minded, you should plan to be interacting with models regularly [LW · GW], and gain the relevant coding skills. In particular, I see a lot of junior researchers who want to do “conceptual research”. But you should assume that such research is useless until it cashes out in writing code or proving theorems, and that you’ll need to do the cashing out yourself (with threat modeling being the main exception, although even then I think most threat modeling is not concrete enough to be useful). Perhaps once you’re a senior researcher with intuitions gained from hands-on experience you’ll be able to step back and primarily think about potential solutions at a high level, but that can’t be your plan as a junior researcher—it’ll predictably steer you away from doing useful work.
  3. You can get started quickly. People coming from fields like physics and mathematics often don’t realize how much shallower deep learning is as a field, and so think they need to spend a long time understanding the theoretical foundations first. You don’t—you can get started doing deep learning research with nothing more than first-year undergrad math, and pick up things you’re missing as you go along. (Coding skill is a much more important prerequisite, though.) You can also pick up many of the conceptual foundations of alignment as you go along, especially in more engineering heavy roles. While I recommend that all alignment researchers eventually become familiar with the ideas covered in the Alignment Fundamentals curriculum, upskilling at empirical research should be a bigger priority for most people who have already decided to pursue a career in alignment research and who aren't already ML researchers.
    Some recommended ways to upskill at empirical research (roughly in order):
    1. MLAB
    2. ARENA [EA · GW]
    3. Jacob Hilton’s deep learning curriculum
    4. Neel Nanda's guide to getting started with mechanistic interpretability
    5. Replicating papers
      Each of these teaches you important skills for good research: how to implement algorithms, how to debug code and experiments, how to interpret results, etc. Once you’ve implemented an algorithm or replicated a paper, you can then try to extend the results by improving the techniques somehow.
  4. Most research won’t succeed. This is true both on the level of individual projects, and also on the level of whole research directions: research is a very heavy-tailed domain. You should be looking hard for the core intuitions for why a given research direction will succeed, the absence of which may be hidden under mathematics or complicated algorithms (as I argue here) [AF · GW]. (You can think of this as a type of conceptual research, but intended to steer your own empirical or theoretical work, rather than intended as a research output in its own right.) In the next section I outline some of my views on which research directions are and aren't promising.

Alignment research directions

From my perspective, the most promising alignment research falls into three primary categories. I outline those below, as well as three secondary categories I think are valuable. Note that I expect the boundaries between all of these to blur over time as research on them progresses, and as we automate more and more things.

  1. Scalable oversight: finding ways to leverage more powerful models to produce better reward signals. Scalable oversight research may be particularly high-leverage if it ends up being adopted widely, e.g. as a tool for preventing hallucinations (like how alignment teams’ work on RLHF has now been adopted very widely).
    1. The theoretical paper I most often point people to is Irving et al.’s debate paper.
    2. The empirical paper I most often point people to is Saunders et al.’s critiques paper, which can be seen as the simplest case of the debate algorithm; Bowman et al. (2022) is also useful from a methodological perspective.
    3. The two other well-known algorithms in this area are iterated amplification and recursive reward modeling. My opinion is that people often overestimate the differences between these algorithms, and that standard presentations of them obfuscate the ways in which they’re structurally similar. I personally find debate the easiest to reason about (and it seems like others agree, since more papers build on it than on the others), hence why I most often recommend people work on that.
    4. Will scalable oversight just lead to more capabilities advances? This is an important question; one way I think about it is in terms of the generator-discriminator-critique gap from Saunders et al.’s critiques paper. Specifically, while I expect that closing the generator-discriminator gap is a dual-purpose advance (and could be good or bad depending on your other views), closing the discriminator-critique gap by producing correct human-comprehensible explanations should definitely be seen as an alignment advance.
  2. Mechanistic interpretability: finding ways to understand how networks function internally. While still only a small subfield of ML, I think of it as a way of pushing the whole field of ML from a “behaviorist” perspective that only focuses on inputs and outputs towards a “cognitivist” framework that studies what’s going on inside neural networks. It’s also much easier to do outside industry labs than scalable oversight work. To get started, check out Nanda's 200 Concrete Open Problems in Mechanistic Interpretability [? · GW].
    1. Three strands of mechanistic interpretability work:
      1. Case studies: finding algorithms inside networks that implement specific capabilities. My favorite papers here are Olsson et al. (2022)Nanda et al. (2023)Wang et al. (2022) and Li et al. (2022); I’m excited to see more work which builds on the last in particular to find world-models and internally-represented goals within networks.
      2. Solving superposition: finding ways to train networks to have fewer overlapping concepts within individual neurons. The key resource here is Elhage (2022) (as well as other work in the Transformer Circuits thread).
      3. Scalable interpretability: finding algorithms to automatically identify or modify internal representations. My favorite papers: Meng et al. (2022) and Burns et al. (2023) (although some consider the latter to be closer to scalable oversight work).
  3. Alignment theory: finding formal frameworks we can use to reason about advanced AI. I want to flag that success at this type of research is even more heavy-tailed than the other research directions I’ve described—it seems to requires exceptional mathematical skills, a deep understanding of ML theory, and nuanced philosophical intuitions. I'm not optimistic that any of the research directions listed here will work out, but they are attempting to address such fundamental problems that even partial successes could be a big deal.
    1. I’m most excited about Christiano’s work on formalizing heuristic arguments, Kosoy’s learning-theoretic agenda [LW · GW] (particularly infra-bayesianism [LW · GW]), and various work by Scott Garrabrant (e.g. geometric rationality [? · GW], finite factored sets [LW · GW], and Cartesian frames [? · GW]).
    2. Historically most of the work in this category has been done by MIRI (e.g. work on functional decision theory and Garrabrant induction). Their output has dropped significantly lately, though; so I mainly think of them as having a handful of researchers pursuing their individual interests, rather than a unified research agenda.
    3. Why do I think alignment theory is worth pursuing? In large part because scientific knowledge is typically very interconnected. Alignment theory often seems disconnected from modern ML—but the motions of the stars once seemed totally disconnected from events on earth. And who could have guessed that understanding variation in the beaks of finches would advance our understanding of...well, basically everything in biology? In many domains there are key principles that explain a huge range of phenomena, and the main difficulty is finding a tractable angle of attack. That's why asking the right questions is often more important than actually getting concrete results. For example, asking "what is the optimal strategy in this specific formalization of a 2-player game?" is a large chunk of the work of inventing game theory.

Three other research areas that seem important, but less central:

  1. Evaluations: finding ways to measure how dangerous and/or misaligned models are.
    1. There’s been little published on this so far; the main thing to look at is the ARC evals (also discussed in section 2.9 of the GPT-4 system card). In general it seems like alignment evals are very difficult, so most people are focusing on evals for measuring dangerous capabilities instead.
    2. My own opinion is that evaluations will live or die by how simple and scalable they are. The best evals would be easily implementable even by people without any alignment background, and would meaningfully track improvements all the way from current systems up to superintelligences. In short, this is because the primary purpose of evals is to facilitate decision-making and coordination, and both of these benefit hugely from legible and predictable metrics.
  2. Unrestricted adversarial training: finding ways to generate inputs on which misaligned systems will misbehave.
    1. It seems like there are strong principled reasons to expect this to be difficult—in general you can only generate fake data which fools one model using a much more powerful model. But it may be possible to find unrestricted adversarial examples by leveraging mechanistic interpretability, as explored in this post by Christiano.
    2. The empirical paper I point people to most often is Ziegler et al. (2022) (see also the other papers they cite).
  3. Threat modeling: understanding and forecasting how AGI might lead to catastrophic outcomes.
    1. I most often point people to my own recent paper (Ngo et al., 2022). Other good work includes reports by Joe Carlsmith and Ajeya Cotra. (Cohen et al. (2022) make a peer-reviewed case for existential risk from AGI, but it’s too focused on outer alignment for me to buy into it.)
    2. One threat modeling research direction that seems valuable is understanding gradient hacking [AF · GW] (and understanding cooperation between different models more generally). Another is to explore the specific ways that AGIs are most likely to be deployed in the real world, and what sorts of vulnerabilities they may be able to exploit.

By contrast, some lines of research which I think are overrated by many newcomers to the field, along with some critiques of them:

  1. Cooperative inverse reinforcement learning (the direction that Stuart Russell defends in his book Human Compatible); critiques here and here.
  2. John Wentworth’s work on natural abstractions; exposition and critique here [AF · GW], and another here [AF · GW].
  3. Work which relies on agents acting myopically, including by only making next-timestep predictions (e.g. work on the simulators abstraction [AF · GW], or on conditioning predictive models [AF · GW]); critique here [AF · GW].

Governance work

I mentally split this into three categories: governance research, lab governance, and policy jobs. A few high-level takeaways for each:

  1. Governance research
    1. The main advice I give people who want to enter this field: pick one relevant topic and try to become an expert on it. There are about two dozen topics where I wish there were a world expert on applying this topic to making AGI go well, and no such person exists; I’ve made a list of those topics below. To learn about them I strongly recommend not just reading and absorbing ideas, but also writing about them. It’s very plausible that, starting off with no background in the field, within six months you could write a post or paper which pushes forward the frontier of our knowledge on how one of those topics is relevant to AGI governance.
    2. You don’t necessarily need to stick with your choice longer-term; my claim is mainly that it’s important to have some concrete topic to investigate. As you do so, you’ll gradually branch out to other topics which are tangentially relevant, and pick up a broader knowledge of the field (the Governance Fundamentals course is one good way of doing so). Eventually you’ll be able to do “strategy research” with much wider implications. But trying to do that from the beginning is a bad plan—it’ll go much better with a base of detailed expertise to work from.
    3. In general I think people overrate “analysis” and underrate “proposals”. There are many high-level factors which will affect AGI governance, and we could spend the rest of our lives trying to analyze them. But ultimately what we need is concrete mechanisms which actually move the needle, which are currently in short supply. Of course you need to do analysis in order to understand the factors which will influence proposals’ success, but you should always keep in mind the goal of trying to ground it out in something useful.
    4. Relatedly, I personally don’t think that quantitative modeling is very valuable. I have yet to see such a model of a big-picture question (e.g. compute projections, takeoff speed, timelines) whose conclusions substantively change my opinions about what the best governance proposals are. If such a model is a strong success it may shift my credences from, say, 25% to 75% in a given proposition. But that’s only a factor of 3 difference, whereas one plan for how to solve governance could be one or two orders of magnitude more effective than another. And in general models rarely move me that much, because even a few free parameters allow people to dramatically overfit to their intuitions; I’d typically prefer having a short summary of the core insights that the person doing the modeling learned during that process. So prioritize plans first, insights second and models last.
    5. Don’t be constrained too much by political feasibility, especially when formulating early versions of a plan. Almost nobody in the world has both good intuitions for how politics really works, and good intuitions for how crazy progress towards AGI will be. All sorts of possibilities will open up in the future—we just need to be ready with concrete proposals [LW · GW] when they do. However, a deep understanding of the fundamental drivers of today’s policy decisions will be helpful in navigating when things start changing much faster.
  2. AI lab governance
    1. Leading labs are often amenable to carrying out proposals which don’t strongly trade off against their core capabilities work; the bottleneck is usually the agency and work required to actually implement the proposal. Thus interventions of the form “tell labs to care more about safety” generally don’t work very well, whereas interventions of the form “here is a concrete ask, here are the specific steps you’d need to take, here’s a person who’s agreed to lead the effort” tend to go well. This post conveys that idea particularly well [LW · GW].
    2. It’s hard for people outside labs to know enough details about what’s going on inside labs to be able to make concrete proposals, but I expect there are a few important cases where it’s possible. This probably looks fairly similar to the path I outlined in the section on governance research, of first gaining expertise on a specific topic, then generating specific proposals.
    3. There is a specific skill of getting things done inside large organizations that most EAs lack (due to lack of corporate experience, plus lack of people-orientedness), but which is particularly useful when pushing for lab governance proposals. If you have it, lab governance work may be a good fit for you.
  3. Policy-related jobs
    1. By this I mean going to work in government-related positions, with the goal of trying to get into a position where you can help make government regulation go well. I don’t have too much to say here, since it’s not my area of expertise. You should probably take fairly general advice (e.g. the advice here [EA · GW]) about how to have a successful career in this area, and then figure out how to go faster under the assumption that people will get increasingly stressed about AI. Short masters degrees and policy fellowships [EA · GW] are quick ways to fast-track towards mid-career policy roles; getting even a small amount of legible AI expertise (e.g. any CS/AI-related degree or job) is also helpful.

List of governance topics

Here are some topics where I wish we had a world expert on applying it to AGI safety. One example of what great work on one of these topics might look like: Baker’s paper on lessons from nuclear arms control (a topic which would have been on this list if he hadn’t written that).

One cluster of topics can be described roughly as “anything mentioned in Yonadav Shavit’s compute governance paper”, in particular:

  1. Tamper-evident logging in GPUs
  2. Global tracking of GPUs
  3. Proof-of-learning algorithms
  4. On-site inspections of models
  5. Detecting datacenters
  6. Building a suite for verifiable inference
  7. Measuring effective compute use (e.g. by measuring and controlling for algorithmic progress)
  8. Regulating large-scale decentralized training (if it becomes competitive with centralized training)

Another cluster: security-related topics such as

  1. Preventing neural network weight exfiltration (by third parties or an AI itself)
  2. Evaluating the possibility of autonomous replication across the internet
  3. Privilege escalation from within secure systems (e.g. if your coding assistant is misaligned, what could it achieve?)
  4. Datacenter monitoring (e.g. if unauthorized copies of a model were running on your servers, how would you know?)
  5. Detecting unauthorized communication channels between different copies of a model.
  6. Detecting tampering (e.g. if your training run had been modified, how would you know?)
  7. How vulnerable are nuclear command and control systems?
  8. Scalable behavior monitoring (e.g. how can we aggregate information across monitoring logs from millions of AIs?)

And a more miscellaneous (and less technical) third category:

  1. What regulatory apparatus within the US government would be most effective at regulating large training runs?
  2. What tools and methods does the US government have for auditing tech companies?
  3. What are the biggest gaps in the US export controls to China, and how might they be closed?
  4. What AI applications or demonstrations will society react to most strongly?
  5. What interfaces will humans use to interact with AIs in the future?
  6. How will AI most likely be deployed for sensitive tasks (e.g. advising world leaders) given concerns about privacy?
  7. How might political discourse around AI polarize, and what could mitigate that?
  8. What would it take to automate crucial infrastructure (factories, weapons, etc)?

24 comments

Comments sorted by top scores.

comment by DanielFilan · 2023-05-02T17:17:31.518Z · LW(p) · GW(p)

Alignment is an unusual field because the base of fans and supporters is much larger than the number of researchers

Isn't this entirely usual? Like, I'd assume that there are more readers of popular physics books than working physicists. Similarly for nature documentary viewers vs biologists.

Replies from: neel-nanda-1, LawChan
comment by Neel Nanda (neel-nanda-1) · 2023-05-02T19:56:29.507Z · LW(p) · GW(p)

Maybe in contrast to other fields of ML? (Though that's definitely stopped being true for eg LLMs)

comment by LawrenceC (LawChan) · 2023-07-11T23:37:45.398Z · LW(p) · GW(p)

I think the deciding difference is that the amount of fans and supporters who want to be actively involved and who think the problem is the most important in the world is much larger than the number of researchers; while popular physics book readers and nature documentary viewers are plentiful, I doubt most of them feel a compelling need to become involved!

comment by RobertM (T3t) · 2023-05-03T03:27:24.835Z · LW(p) · GW(p)

By contrast, some lines of research where I’ve seen compelling critiques (and haven’t seen compelling defences) of their core intuitions, and therefore don't recommend to people:

  1. Cooperative inverse reinforcement learning (the direction that Stuart Russell defends in his book Human Compatible); critiques here and here.
  2. John Wentworth’s work on natural abstractions; exposition and critique here [LW · GW], and another here [LW · GW].

The first critique of natural abstractions says:

Concluding thoughts on relevance to alignment: While we’ve made critical remarks on several of the details, we also want to reiterate that overall, we think (natural) abstractions are an important direction for alignment and it’s good that someone is working on them! In particular, the fact that there are at least four distinct stories for how abstractions could help with alignment is promising.

The second says:

I think this is a fine dream. It’s a dream I developed independently at MIRI a number of years ago, in interaction with others. A big reason why I slogged through a review of John's work is because he seemed to be attempting to pursue a pathway that appeals to me personally, and I had some hope that he would be able to go farther than I could have.

Neither of them seemed, to me, to be critiques of the "core intuitions"; rather, the opposite: both suggested that the core intuitions seemed promising; the weaknesses were elsewhere.  That suggests that natural abstractions might be a better than average target for incoming researchers, not a worse one.

I have some other disagreements, but those are model-level disagreements; that piece of advice in particular seems to be misguided even under your own models.  I think I agree with the overall structure and most of the prioritization (though would put scalable oversight lower, or focus on those bits that Joe points out [LW(p) · GW(p)] are the actual deciding factors for whether that entire class of approaches is worthwhile - that seems more like "alignment theory with respect to scalable oversight").

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-05-03T04:25:53.389Z · LW(p) · GW(p)

Good point. Will edit.

comment by Neel Nanda (neel-nanda-1) · 2023-05-02T19:49:53.171Z · LW(p) · GW(p)

Some recommended ways to upskill at empirical research (roughly in order):

For people specifically interested in getting into mechanistic interpretability, my guide to getting started may be useful - it's much more focused on the key, relevant parts of deep learning, with a bunch more interpretability specific stuff

comment by wesg (wes-gurnee) · 2023-05-03T19:37:07.346Z · LW(p) · GW(p)

For mechanistic interpretability research, we just released a new paper on neuron interpretability in LLMs, with a large discussion on superposition! See
Paper: https://arxiv.org/abs/2305.01610
Summary: https://twitter.com/wesg52/status/1653750337373880322
 

comment by Neel Nanda (neel-nanda-1) · 2023-05-02T19:46:39.312Z · LW(p) · GW(p)

Eventually, once you've had a bunch of experience, you might notice a feeling of confusion or frustration: why is everyone else missing the point, or doing so badly at this? (Though note that a few top researchers commented on a draft to say that they didn't have this experience.) For some people that involves investigating a specific topic (for me, the question “what’s the best argument that AGI will be misaligned?“); for others it's about applying skills like conscientiousness (e.g. "why can't others just go through all the obvious steps?") Being excellent seldom feels like you’re excellent, because your own abilities set your baseline for what feels normal.

 

I relate a lot with this, this feels like one of the clearer markers internally for me of what becoming good at interpretability research felt like - there's so much low hanging fruit! Why aren't other people plucking it?

There's also just some internal sense of "I kind of know what I'm doing, and have ideas for what to do next", though this is much clearer to me when mentoring and advising other people, where I have strong opinions, than when applying it to myself, where I can sometimes pull it off but find it easily to fall into random spirals of doubt

Replies from: NicholasKross
comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T00:37:55.308Z · LW(p) · GW(p)

This is interesting; I'm still looking for my own (I think?) "comparative advantage" in this area. Some mental motions are very easy, while some "trivial" tasks feel harder (or would require me to already be involved full-time, leading to a chicken-and-egg problem).

comment by Akash (akash-wasil) · 2023-05-02T17:22:32.214Z · LW(p) · GW(p)

(Pasting this exchange from a comment thread on the EA Forum; bolding added)

Peter Park:

Thank you so much for your insightful and detailed list of ideas for AGI safety careers, Richard! I really appreciate your excellent post.

I would propose explicitly grouping some of your ideas and additional ones under a third category: “identifying and raising public awareness of AGI’s dangers.” In fact, I think this category may plausibly contain some of the most impactful ideas for reducing catastrophic and existential risks, given that alignment seems potentially difficult to achieve in a reasonable period of time (if ever) and the implementation of governance ideas is bottlenecked by public support.

For a similar argument that I found particularly compelling, please check out Greg Colbourn’s recent post: https://forum.effectivealtruism.org/posts/8YXFaM9yHbhiJTPqp/agi-rising-why-we-are-in-a-new-era-of-acute-risk-and [EA · GW]

Richard:

I don't actually think the implementation of governance ideas is mainly bottlenecked by public support; I think it's bottlenecked by good concrete proposals. And to the extent that it is bottlenecked by public support, that will change by default as more powerful AI systems are released.

Akash:

I appreciate Richard stating this explicitly. I think this is (and has been) a pretty big crux in the AI governance space right now.

Some folks (like Richard) believe that we're mainly bottlenecked by good concrete proposals. Other folks believe that we have concrete proposals, but we need to raise awareness and political support in order to implement them.

I'd like to see more work going into both of these areas. On the margin, though, I'm currently more excited about efforts to raise awareness [well], acquire political support, and channel that support into achieving useful policies. 

I think this is largely due to (a) my perception that this work is largely neglected, (b) the fact that a few AI governance professionals I trust have also stated that they see this as the higher priority thing at the moment, and (c) worldview beliefs around what kind of regulation is warranted (e.g., being more sympathetic to proposals that require a lot of political will).

comment by Joe Collman (Joe_Collman) · 2023-05-02T15:42:30.270Z · LW(p) · GW(p)

Scalable oversight: finding ways to leverage more powerful models to produce better reward signals

It might be worth clarifying how you expect this to help, and to make clear where you'd expect other researchers to disagree.

For instance, for debate, one could believe:
1) Debate will work for long enough for us to use it to help find make progress towards an alignment solution. 
2) Debate is a plausible basis for an alignment solution.

To me (2) seems fairly clearly false - at the very least it's not doing anything about inner alignment (debate on weights/activations does nothing to address this, since there's still no [debaters are aiming to win the game] starting point).

Viewing it as a question-answering system is similarly confused: it's an [output whatever text is selected by the debate process] system.
We can't have both [debaters optimise for a debate win] and [debate robustly remains a question-answering system] - at least without making obviously false assumptions about a human-based judge system.

Could Debate be a component of an alignment solution? Sure.
Is it the part that seems hard/neglected? No.

 

On (1) I'm less clear, however here the case that needs to be made is that debate approaches will be more useful before they become dangerous than e.g. simulators or conditioning predictive models (which I agree will also break at some point).

This is not obviously false, but I don't see a good argument for it. If I have to bet which of these approaches has the lowest [capability before deceptive alignment] (cbda) threshold, my money is currently on debate (and indeed RRM). Imitative amplification seems plausibly safer, but only to the degree that it's less efficient - so still unclear it gets higher cbda (if distillation ends up buying efficiency, I expect it to throw out the imitative rationale for safety in the process).

To me, most of the value to a new researcher in studying debate would lie in:

  • Thinking about it for a while
  • Figuring out what assumptions it'd require to work
  • Noticing that having these assumptions hold is the hard part
  • Going to work on those (or their foundations)

And as Eliezer/Nate/John... would point out, this doesn't require getting into the details of the mechanism design - only to notice that the mechanism is doing nothing to address the fundamentals of the problem.

I'd be genuinely interested if I'm wrong on any of this - it'd be nice if debate were actually useful! (I don't claim to be making all the necessary arguments above - just pointing out my current belief)

Replies from: neel-nanda-1, ricraz
comment by Neel Nanda (neel-nanda-1) · 2023-05-02T19:58:01.179Z · LW(p) · GW(p)

To me (2) seems fairly clearly false - at the very least it's not doing anything about inner alignment (debate on weights/activations does nothing to address this, since there's still no [debaters are aiming to win the game] starting point).

Why do you believe this? It's fairly plausible to me that "train an AI to use interpretability tools to show that this other AI is being deceptive" is the kind of scalable oversight approach that might work, especially for detecting inner misalignment, if you can get the training right and avoid cooperation. But that seems like a plausibly solvable problem to me

Replies from: Joe_Collman
comment by Joe Collman (Joe_Collman) · 2023-05-02T20:28:36.026Z · LW(p) · GW(p)

The problem is robustly getting the incentive to show that the other AI is being deceptive.
Giving access to the weights, activations and tools may give debaters the capability to expose deception - but that alone gets you nothing.

You're still left saying:
So long as we can get the AI to robustly do what we want (i.e. do its best to expose deception), we can get the AI to robustly do what we want.

Similarly, "...and avoid cooperation" is essentially the entire problem.

To be clear, I'm not saying that an approach of this kind will never catch any instances of an AI being deceptive. (this is one reason I'm less certain on (1))
I'm am saying that there's no reason to predict anything along these lines should catch all such instances.
I see no reason to think it'll scale.

Another issue: unless you have some kind of true name of deception (I see no reason to expect this exists), you'll train an AI to detect [things that fit your definition of deception], and we die to things that didn't fit your definition.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-05-02T20:47:13.479Z · LW(p) · GW(p)

These are all arguments about the limit; whether or not they're relevant depends on whether they apply to the regime of "smart enough to automate alignment research".

Replies from: Joe_Collman
comment by Joe Collman (Joe_Collman) · 2023-05-02T21:48:29.222Z · LW(p) · GW(p)

Agreed.
Are you aware of any work that attempts to answer this question?
Does this work look like work on debate?
(not rhetorical questions!)

My guess is that work likely to address this does not look like work on debate.
Therefore my current position remains: don't bother working on debate; rather work on understanding the fundamentals that might tell you when it'll break.

The world won't be short of debate schemes.
It'll be short of principled arguments for their safe application.

comment by Richard_Ngo (ricraz) · 2023-05-02T16:29:52.154Z · LW(p) · GW(p)

For instance, for debate, one could believe:
1) Debate will work for long enough for us to use it to help find an alignment solution.
2) Debate is a plausible basis for an alignment solution.

I generally don't think about things in terms of this dichotomy. To me, an "alignment solution" is anything that will align an AGI which is then capable of solving alignment for its successor. And so I don't think you can separate these two things.

(Of course I agree that debate is not an arbitrarily scalable alignment solution in the sense that you can just keep training models using debate without adding any more techniques; but I don't think that really matters. We need to get to the moon, not to Andromeda.)

Replies from: Joe_Collman
comment by Joe Collman (Joe_Collman) · 2023-05-02T21:39:56.496Z · LW(p) · GW(p)

Oh, to be clear, with "to help find" I only mean that we expect to make significant progress using debate. If we knew we'd safely make enough progress to get to a solution, then you're quite right that that would amount to (2). (apologies for lack of clarity if this was the miscommunication)

That's the distinction I mean to make between (1) and (2): we need to get to the moon safely
With (1) we have no idea when our rocket will explode.
Similarly, we have no idea whether the moon will be far enough to know when our next rocket will explode. (not that I'm knocking robustly getting to the moon safely)

If we had some principled argument telling us how far we could push debate before things became dangerous, that'd be great. I'm claiming that we have no such argument, and that all work on debate (that I'm aware of) stands near-zero chance of finding one.

Of course I'm all for work "on debate" that aims at finding that kind of argument - however, I would expect that such work leaves the specifics of debate behind pretty quickly.

comment by Joseph Bloom (Jbloom) · 2023-05-05T01:06:03.255Z · LW(p) · GW(p)

Thanks Richard for this post and prior advice!

I was planning to make a post at some point with some advice that's closely related to this post but I will share it here as a preview. Take note that I don't yet have strong evidence that my work is good or has mattered (and I was going to write a full post once I had more evidence for that). I think Richard's advice above is really good and I'll try to take some of the ideas more on board with my own work. 

Last year I quit my job and upskilled for 6 months and now I'm doing independent research which might turn out to be valuable. Regardless of its value, I've learnt a lot and it's created many opportunities for me. I went to EAG and Richard's talk there and a conversation later in a group where he was talking about this mentorship constraint deal. This left a strong impression on me leading me to take some degree of pride in my attempts to be independent and not rely as strongly on individual mentorship. However, there are just a bunch of caveats/perspectives that I have currently which relate to this. 

All of these relate to empirical alignment research and not governance or other forms of research. I'm mostly focussed on providing advice for how to be more productive independently of other people but that shouldn't be your preference and I suspect people are more productive at orgs/in groups.

So a bunch of ideas on the topic:

  • Why the focus on independent research?
    • I think it's really weird how we have this thing in the alignment community and I just want to comment on that first. The idea that people can just go off on their own and be productive I think is kinda uncommon. 
    • This community values agency.  In practice, agency is the ability to make decisions for yourself about what you want and how to achieve it. Getting good at having agency both makes good researchers and good research engineers. HPMOR helped me understand agency better. 
    • I have no first hand knowledge of the inside of orgs like DeepMind or Anthropic but I suspect people with agency are generally considered better hires. It's not like orgs would say "we could hire this person but we want them to do what they're told so let's hire someone with little evidence of working independently". Rather, my guess is they select for people who are capable of being self-directed work and who  grow spontaneously as a result on attempting hard things and learning. 
  • Getting ready to contribute: 
    • There are a variety of ways to upskill without doing stuff like a PhD (as research says above). Programs like ARENA, SERI-MATS, SPAR etc. My sense is that once people realise that working without feedback is hard, they will gravitate strongly toward more empirical research areas, especially those that can be done at small scale (aka, MI) and which there are existing tools (aka MI) and examples of investigations with reasoned paths to impact (aka MI). However, there are likely other empirical areas which provide feedback and are doable that may/may not have these properties and searching for more seems like a good idea to me. 
    • Get good. (especially if you're fresh out of college) Struggling with implementing things / understanding stuff and identifying problems can all be alleviated by working on your skills. There's lots of open source code which show good patterns but also lots of people who have good technical skills but aren't necessarily the research mentors we are constrained by who you can  engage with. Talk to people who are good engineers and find out how they operate. It'll be stuff like having a good IDE and testing your code.
    • Start slow. Contributing to open source projects such as TransformerLens is good. I've been helping out with it and it seems like a good way for lots of people to dip their toe in. 
  • Doing research without a mentor is very hard for many obvious reasons. Things you can do to make it easier:
    • While talking to people such at EAGs can be helpful, my sense is most good advice just exists on the forum. I recommend rereading such advice periodically and predict you will grok why people make suggestions more if you are stuck in your own research and have challenges then before. 
    • Focus on fast feedback cycles. Try to avoid situations where you don't know if something is working for a long time. This is different to whether you know if it's valuable or not. 
    • Be prepared to drop things or change your path, but don't abandon work because it's hard. It feels like a special kind of wisdom/insight to make these calls and I think you need to work hard at trying to get better at this over time. 
    • Have good tooling but don't let building the tooling take over. 
    • Allow yourself to focus. There is a time to work out why you are doing what you are doing and there are other times you just need to do the work. 
    • Study! Engaging with fundamental topics like linear algebra or deep learning theory is hugely important. Without colleagues or mentors it is a meaningful constraint on your output when you don't know any given thing that might be relevant. This is tricky because there's a lot to study. I think engage with the basics and be consistent. Mathematics for ML textbook is great. GoodFellow Deep Learning textbook is also recommended. 
    • Read related literature. Like with more basic knowledge, lack of knowledge of relevant literature can cause you to waste time/effort. I have a spreadsheet which describes all the models that are kinda similar to mine and how they were trained and what was done with them. 
    • Find less correlated ideas/takes: Stephen Casper's Engineering interpreatibility sequence is a good example of the kind of thing people doing independent work should read. It shakes you out of the "everything we do here makes sense and is obvious perspective" which is extra easy to fall into when you work on your own. There might be equivalent posts in other areas. 
    • Possibly the quirkiest thing I do these days is roleplay characters in my head "the engineer", "the scientist", "the manager" and the "outsider" who help me balance different priorities when making decisions about my work. I find this fun and useful and since I literally write meeting notes, GPT4 can stand in for each of them which is pretty cool and useful for generating ideas. The "other", a less obvious team member, represents someone who doesn't privilege the project or existing decisions. This helps me try to channel a helpful adversarial perspective (see previous point).

I hope this is useful for people! 
 

Replies from: Leksu
comment by Leksu · 2023-05-05T09:07:01.121Z · LW(p) · GW(p)

Thanks, I think this comment and the subsequent post will be very useful for me!

comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-03T00:41:37.868Z · LW(p) · GW(p)

I think work on the study of abstraction, one way or another, will be essential to AI alignment. Even "just" being able to make very precise high-level predictions of (an AI's behavior FROM its internal state) or (human values FROM measured neurological data), requires enough abstraction-understanding to know whether the simplification is really capturing what we want.

I don't know if the natural abstractions hypothesis is really necessary for this. But something like a more developed/complete version [LW · GW] of Wentworth's "minimal maps" [LW · GW] representation of abstraction, seems more needed.

Maybe if it's "direct" enough, we just get mech. interp. again? In my head, some kind of abstraction is necessary if we go by the "Rocket Alignment" analogy.

comment by Roman Leventov · 2023-05-02T19:42:07.469Z · LW(p) · GW(p)

Classification of AI safety work

Here [LW(p) · GW(p)] I proposed a systematic framework for classifying AI safety work. This is a matrix, where one dimension is the system level:

  • A monolithic AI system, e.g., a conversational LLM
  • AGI lab (= the system that designs, manufactures, operates, and evolves monolithic AI systems and systems of AIs)
  • A cyborg, human + AI(s)
  • A system of AIs with emergent qualities (e.g., https://numer.ai/, but in the future, we may see more systems like this, operating on a larger scope, up to fully automatic AI economy; or a swarm of CoEms [LW · GW] automating science)
  • A human+AI group, community, or society (scale-free consideration, supports arbitrary fractal nestedness): collective intelligence, e.g., The Collective Intelligence Project
  • The whole civilisation [LW · GW], e.g., Open Agency Architecture [LW · GW], or the Gaia network

Another dimension is the "time" of consideration:

  • Design time: research into how the corresponding system should be designed (engineered, organised): considering its functional ("capability", quality of decisions) properties, adversarial robustness (= misuse safety, memetic virus security), and security.  AGI labs: org design and charter.
  • Manufacturing and deployment time: research into how to create the desired designs of systems successfully and safely:
    • AI training and monitoring of training runs.
    • Offline alignment of AIs during (or after) training. 
    • AI strategy (= research into how to transition into the desirable civilisational state = design).
    • Designing upskilling and educational programs for people to become cyborgs is also here (= designing efficient procedures for manufacturing cyborgs out of people and AIs).
  • Operations time: ongoing (online) alignment of systems on all levels to each other, ongoing monitoring, inspection, anomaly detection, and governance.
  • Evolutionary time: research into how the (evolutionary lineages of) systems at the given level evolve long-term:
    • How the human psyche evolves when it is in a cyborg
    • How humans will evolve over generations as cyborgs
    • How AI safety labs evolve into AGI capability labs :/
    • How groups, communities, and society evolve.
    • Designing feedback systems that don't let systems "drift" into undesired state over evolutionary time.
    • Considering system property: property of flexibility of values (i.e., the property opposite of value lock-in, Riedel (2021)).
    • IMO, it (sometimes) makes sense to think about this separately from alignment per se. Systems could be perfectly aligned with each other but drift into undesirable states and not even notice this if they don't have proper feedback loops and procedures for reflection.

There would be 6*4 = 24 slots in this matrix, and almost all of them have something interesting to research and design, and none of them is "too early" to consider.

Richard's directions within the framework

Scalable oversight: (monolithic) AI system * manufacturing time

Mechanistic interpretability: (monolithic) AI system * manufacturing time, also design time (e.g., in the context of the research agenda of weaving together theories of cognition and cognitive development, ML, deep learning, and interpretability through the abstraction-grounding stack [LW · GW], interpretability plays the role of empirical/experimental science work)

Alignment theory: Richard phrases it vaguely, but referencing primarily MIRI-style work reveals that he means primarily "(monolithic) AI system * design, manufacturing, and operations time".

Evaluations, unrestricted adversarial training: (monolithic) AI system * manufacturing, operations time

Threat modeling: system of AIs (rarely), human + AI group, whole civilisation * deployment time, operations time, evolutionary time

Governance research, policy research: human + AI group, whole civilisation * mostly design and operations time.

Takeaways

To me, it seems almost certain that many current governance institutions and democratic systems will not survive the AI transition of civilisation. Bengio recently hinted at the same conclusion [LW · GW].

Human+AI group design (scale-free: small group, org, society) and the civilisational intelligence design [LW · GW] must be modernised.

Richard mostly classifies this as "governance research", which has a connotation that this is a sort of "literary" work and not science, with which I disagree. There is a ton of cross-disciplinary hard science to be done about group intelligence and civilisational intelligence design: game theory, control theory, resilience theory, linguistics, political economy (rebuild as hard science, of course, on the basis of resource theory, bounded rationality, economic game theory, etc.), cooperative reinforcement learning, etc.

I feel that the design of group intelligence and civilisational intelligence is an under-appreciated area by the AI safety community. Some people do this (Eric Drexler, davidad, the cip.org team, ai.objectives.institute, the Digital Gaia team, and the SingularityNET team, although the latter are less concerned about alignment), but I feel that far more work is needed in this area.

There is also a place for "literary", strategic research, but I think it should mostly concern deployment time of group and civilisational intelligence designs, i.e., the questions of transition from the current governance systems to the next-generation, computation and AI-assisted systems.

Also, operations and evolutionary time concerns of everything (AI systems, systems of AIs, human+AI groups, civilisation) seem to be under-appreciated and under-researched: alignment is not a "problem to solve", but an ongoing, manufacturing-time and operations-time process [LW · GW].

comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2023-05-02T11:27:04.028Z · LW(p) · GW(p)

I would be interested in some advice going a step further -- assuming a roughly sufficient technical skill level (in my case, soon-to-be PhD in an application of ML), as well as an interest in the field, how to actually enter the field with a full-time position? I know independent research is one option, but it has its pros and cons. And companies which are interested in alignment are either very tiny (=not many positions), or very huge (like OpenAI et al., =very selective)

comment by Neel Nanda (neel-nanda-1) · 2023-05-02T19:52:48.152Z · LW(p) · GW(p)

Case studies: finding algorithms inside networks that implement specific capabilities. My favorite papers here are Olsson et al. (2022)Nanda et al. (2023)Wang et al. (2022) and Li et al. (2022); I’m excited to see more work which builds on the last in particular to find world-models and internally-represented goals within networks.

If you want to build on Li et al (the Othello paper), my follow-up work [AF · GW] is likely to be a useful starting point, and then the post I wrote about future directions I'm particularly excited about [AF · GW]

comment by XFrequentist · 2023-05-02T19:44:51.543Z · LW(p) · GW(p)

Preventing neural network weight exfiltration (by third parties or an AI itself)

This is really really interesting; a fairly "normal" infosec concern to prevent IP/PII theft, plus a (necessary?) step in many AGI risk scenarios. Is the claim that one could become a "world expert" specifically in this (ie without becoming an expert in information security more generally)?