Posts

AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work 2024-08-20T16:22:45.888Z
On scalable oversight with weak LLMs judging strong LLMs 2024-07-08T08:59:58.523Z
Improving Dictionary Learning with Gated Sparse Autoencoders 2024-04-25T18:43:47.003Z
AtP*: An efficient and scalable method for localizing LLM behaviour to components 2024-03-18T17:28:37.513Z
Fact Finding: Do Early Layers Specialise in Local Processing? (Post 5) 2023-12-23T02:46:25.892Z
Fact Finding: How to Think About Interpreting Memorisation (Post 4) 2023-12-23T02:46:16.675Z
Fact Finding: Trying to Mechanistically Understanding Early MLPs (Post 3) 2023-12-23T02:46:05.517Z
Fact Finding: Simplifying the Circuit (Post 2) 2023-12-23T02:45:49.675Z
Fact Finding: Attempting to Reverse-Engineer Factual Recall on the Neuron Level (Post 1) 2023-12-23T02:44:24.270Z
Discussion: Challenges with Unsupervised LLM Knowledge Discovery 2023-12-18T11:58:39.379Z
Explaining grokking through circuit efficiency 2023-09-08T14:39:23.910Z
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla 2023-07-20T10:50:58.611Z
Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes 2023-05-01T16:47:41.655Z
[Linkpost] Some high-level thoughts on the DeepMind alignment team's strategy 2023-03-07T11:55:01.131Z
Categorizing failures as “outer” or “inner” misalignment is often confused 2023-01-06T15:48:51.739Z
Definitions of “objective” should be Probable and Predictive 2023-01-06T15:40:30.813Z
Refining the Sharp Left Turn threat model, part 2: applying alignment techniques 2022-11-25T14:36:08.948Z
Threat Model Literature Review 2022-11-01T11:03:22.610Z
Clarifying AI X-risk 2022-11-01T11:03:01.144Z
More examples of goal misgeneralization 2022-10-07T14:38:00.288Z
[AN #173] Recent language model results from DeepMind 2022-07-21T02:30:02.115Z
[AN #172] Sorry for the long hiatus! 2022-07-05T06:20:03.943Z
DeepMind is hiring for the Scalable Alignment and Alignment Teams 2022-05-13T12:17:13.157Z
Learning the smooth prior 2022-04-29T21:10:18.064Z
Shah and Yudkowsky on alignment failures 2022-02-28T19:18:23.015Z
[AN #171]: Disagreements between alignment "optimists" and "pessimists" 2022-01-21T18:30:04.824Z
Conversation on technology forecasting and gradualism 2021-12-09T21:23:21.187Z
[AN #170]: Analyzing the argument for risk from power-seeking AI 2021-12-08T18:10:04.022Z
[AN #169]: Collaborating with humans without human data 2021-11-24T18:30:03.795Z
[AN #168]: Four technical topics for which Open Phil is soliciting grant proposals 2021-10-28T17:20:03.387Z
[AN #167]: Concrete ML safety problems and their relevance to x-risk 2021-10-20T17:10:03.690Z
[AN #166]: Is it crazy to claim we're in the most important century? 2021-10-08T17:30:11.819Z
[AN #165]: When large models are more likely to lie 2021-09-22T17:30:04.674Z
[AN #164]: How well can language models write code? 2021-09-15T17:20:03.850Z
[AN #163]: Using finite factored sets for causal and temporal inference 2021-09-08T17:20:04.522Z
[AN #162]: Foundation models: a paradigm shift within AI 2021-08-27T17:20:03.831Z
[AN #161]: Creating generalizable reward functions for multiple tasks by learning a model of functional similarity 2021-08-20T17:20:04.380Z
[AN #160]: Building AIs that learn and think like people 2021-08-13T17:10:04.335Z
[AN #159]: Building agents that know how to experiment, by training on procedurally generated games 2021-08-04T17:10:03.823Z
[AN #158]: Should we be optimistic about generalization? 2021-07-29T17:20:03.409Z
[AN #157]: Measuring misalignment in the technology underlying Copilot 2021-07-23T17:20:03.424Z
[AN #156]: The scaling hypothesis: a plan for building AGI 2021-07-16T17:10:05.809Z
BASALT: A Benchmark for Learning from Human Feedback 2021-07-08T17:40:35.045Z
[AN #155]: A Minecraft benchmark for algorithms that learn without reward functions 2021-07-08T17:20:02.518Z
[AN #154]: What economic growth theory has to say about transformative AI 2021-06-30T17:20:03.292Z
[AN #153]: Experiments that demonstrate failures of objective robustness 2021-06-26T17:10:02.819Z
[AN #152]: How we’ve overestimated few-shot learning capabilities 2021-06-16T17:20:04.454Z
[AN #151]: How sparsity in the final layer makes a neural net debuggable 2021-05-19T17:20:04.453Z
[AN #150]: The subtypes of Cooperative AI research 2021-05-12T17:20:27.267Z
[AN #149]: The newsletter's editorial policy 2021-05-05T17:10:03.189Z

Comments

Comment by Rohin Shah (rohinmshah) on Anthropic rewrote its RSP · 2024-10-15T20:11:08.247Z · LW · GW

You note that the RSP says we will do a comprehensive assessment at least every 6 months—and then you say it would be better to do a comprehensive assessment at least every 6 months.

I thought the whole point of this update was to specify when you start your comprehensive evals, rather than when you complete your comprehensive evals. The old RSP implied that evals must complete at most 3 months after the last evals were completed, which is awkward if you don't know how long comprehensive evals will take, and is presumably what led to the 3 day violation in the most recent round of evals.

(I think this is very reasonable, but I do think it means you can't quite say "we will do a comprehensive assessment at least every 6 months".)

There's also the point that Zach makes below that "routinely" isn't specified and implies that the comprehensive evals may not even start by the 6 month mark, but I assumed that was just an unfortunate side effect of how the section was written, and the intention was that evals will start at the 6 month mark.

Comment by Rohin Shah (rohinmshah) on EIS XIV: Is mechanistic interpretability about to be practically useful? · 2024-10-12T14:39:33.086Z · LW · GW

Once the next Anthropic, GDM, or OpenAI paper on SAEs comes out, I will evaluate my predictions in the same way as before.

Uhh... if we (GDM mech interp team) saw good results on any one of the eight things on your list, we'd probably write a paper just about that thing, rather than waiting to get even more results. And of course we might write an SAE paper that isn't about downstream uses (e.g. I'm also keen on general scientific validation of SAEs), or a paper reporting negative results, or a paper demonstrating downstream use that isn't one of your eight items, or a paper looking at downstream uses but not comparing against baselines. So just on this very basic outside view, I feel like the sum of your probabilities should be well under 100%, at least conditional on the next paper coming out of GDM. (I don't feel like it would be that different if the next paper comes from OpenAI / Anthropic.)

The problem here is "next SAE paper to come out" is a really fragile resolution criterion that depends hugely on unimportant details like "what the team decided was a publishable unit of work". I'd recommend you instead make time-based predictions (i.e. how likely are each of those to happen by some specific date).

Comment by Rohin Shah (rohinmshah) on Mark Xu's Shortform · 2024-10-07T09:13:15.219Z · LW · GW

This seems to presume that you can divide up research topics into "alignment" vs "control" but this seems wrong to me. E.g. my categorization would be something like:

  • Clearly alignment: debate theory, certain flavors of process supervision
  • Clearly control: removing affordances (e.g. "don't connect the model to the Internet")
  • Could be either one: interpretability, critique models (in control this is called "untrusted monitoring"), most conceptions of ELK, generating inputs on which models behave badly, anomaly detection, capability evaluations, faithful chain of thought, ...

Redwood (I think Buck?) sometimes talks about how labs should have the A-team on control and the B-team on alignment, and I have the same complaint about that claim. It doesn't make much sense for research, most of which helps with both. It does make sense as a distinction for "what plan will you implement in practice" -- but labs have said very little publicly about that.

Other things that characterize work done under the name of "control" so far are (1) it tries to be very concrete about its threat models, to a greater degree than most other work in AI safety, and (2) it tries to do assurance, taking a very worst case approach. Maybe you're saying that people should do those things more, but this seems way more contentious and I'd probably just straightforwardly disagree with the strength of your recommendation (though probably not its direction).

Nitpick: I would also quibble with your definitions; under your definitions, control seems like a subset of alignment (the one exception being if you notice the model is scheming and then simply stop using AI). I think you really have to define alignment as models reliably doing what you want independent of the surrounding context, or talk about "trying to do what you want" (which only makes sense when applied to models, so has similar upshots).

Tbc I like control and think more effort should be put into it; I just disagree with the strength of the recommendation here.

Comment by Rohin Shah (rohinmshah) on [AN #140]: Theoretical models that predict scaling laws · 2024-10-01T21:16:39.996Z · LW · GW

I think this is referring to ∇_θ L(x_train) = 0, which is certainly true for a perfectly optimized model (or even just settled gradient descent). Maybe that's where the miscommunication is stemming from

Ah, yup, that's the issue, and I agree you're correct that  is the relevant thing here. I'll edit the post to say I'm no longer sure about the claim. (I don't have the time to understand how this lines up with the actual paper -- I remember it being kind of sparse and not trivial to follow -- perhaps you could look into it and leave a comment here.)

Comment by Rohin Shah (rohinmshah) on [AN #140]: Theoretical models that predict scaling laws · 2024-09-25T20:12:50.138Z · LW · GW

Mathematically, the Taylor expansion is:

L(x) = L(x_train) + ∇L(x_train) · (x - x_train) + O(||x - x_train||^2)

And then we have L(x_train) = 0 and also ∇L(x_train) = 0. (This does assume a "sufficiently nice" loss function, that is satisfied by most loss functions used in practice.)

I agree L(x) is not zero. I also agree if you take some point in between x_train and x it can have non-zero loss, e.g. L((x_train + x) / 2) need not be zero. I'm not sure if either of these are what you're trying to say, but in any case they aren't relevant to the quoted sentence.

If you are claiming ∇L(x_train) ≠ 0, then I disagree and am unclear on how your arguments are supposed to establish that.

Comment by Rohin Shah (rohinmshah) on Estimating Tail Risk in Neural Networks · 2024-09-24T10:26:23.825Z · LW · GW

Fwiw the thing that scares me is the combination of three things:

  1. Producing an accurate estimate requires "understanding" the input
  2. Our standard for the quality of estimates will be very high
  3. The approach is purely algorithmic (rather than e.g. using the flexible intelligence of LLMs / humans)

You presumably agree with (1) and (3). For (2), it seems like there are a lot of correlations you need to take into account for this to produce estimates that aren't wildly off:

  1. Underestimates of risk: These are cases where the presumption of independence neglects correlations between the things that happen together in catastrophe-causing events. For example, maybe a model is planning to defect when RSA-2048 is factored, and it has N different consistency checks it runs to make sure we aren't fooling it about whether it has been factored. The method has to see that there's a common cause for all the consistency checks to return 1; otherwise it will underestimate risk by a factor exponential in N. (And tbc I think even separately from more exotic examples like RSA-2048, it will be relatively common for this pattern to arise with networks, e.g. it happens any time that in reality there is a common cause that produces many pieces of evidence, and then the network aggregates those pieces of evidence to infer the common cause.) A minimal numeric sketch of this failure mode follows this list.
  2. Overestimates of risk: These are cases where the presumption of independence neglects correlations that rule out catastrophe, and so lead to overestimates. For example, the model might be willing to make up fake instructions for creating bioweapons to help with scifi writing, but never provide real instructions. So the method needs to capture the "understanding" of which instructions are fake vs real.
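
To make the first failure mode concrete, here is a minimal Monte Carlo sketch (my own toy numbers, not from the post): N consistency checks that all fire because of a single common cause, compared against an estimate that multiplies the per-check marginals together as if they were independent.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10           # number of consistency checks (hypothetical)
p_cause = 0.01   # probability that the common cause occurs (hypothetical)
noise = 0.001    # probability that a check fires without the cause

# Every check fires whenever the common cause is present.
cause = rng.random(1_000_000) < p_cause
checks = np.array([cause | (rng.random(cause.size) < noise) for _ in range(N)])

p_true = checks.all(axis=0).mean()        # ground truth: roughly p_cause
p_indep = np.prod(checks.mean(axis=1))    # presumption-of-independence estimate

print(f"true P(all {N} checks pass)   ~ {p_true:.2e}")
print(f"independence-based estimate  ~ {p_indep:.2e}")  # off by a factor exponential in N
```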

I agree this isn't a proof of impossibility, since a purely algorithmic approach (SGD) produced the "understanding" in the first place, so in theory a purely algorithmic approach could still capture all that understanding to produce accurate estimates. But it does seem heuristically like you should assign a fairly low probability that this pans out.

Comment by Rohin Shah (rohinmshah) on Estimating Tail Risk in Neural Networks · 2024-09-22T13:54:06.505Z · LW · GW

A few questions:

  • The literature review is very strange to me. Where is the section on certified robustness against epsilon-ball adversarial examples? The techniques used in that literature (e.g. interval propagation; a minimal sketch appears after this list) are nearly identical to what you discuss here.
  • Relatedly, what's the source of hope for these kinds of methods outperforming adversarial training? My sense from the certified defenses literature is that the estimates they produce are very weak, because of the problems with failing to model all the information in activations. (Note I'm not sure how weak the estimates actually are, since they usually report fraction of inputs which could be certified robust, rather than an estimate of the probability that a sampled input will cause a misclassification, which would be more analogous to your setting.)
  • If your catastrophe detector involves a weak model running many many inferences, then it seems like the total number of layers is vastly larger than the number of layers in M, which seems like it will exacerbate the problems above by a lot. Any ideas for dealing with this? 
  • What's your proposal for the distribution  for Method 2 (independent linear features)?
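
For readers who haven't seen the certified-robustness literature referenced in the first bullet, here is a minimal sketch of interval bound propagation through a single linear + ReLU layer (a toy illustration of the general technique, not code from that literature or from this post):

```python
import numpy as np

def interval_linear_relu(x_lo, x_hi, W, b):
    """Propagate the box [x_lo, x_hi] through y = relu(W @ x + b), giving sound bounds."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return np.maximum(y_lo, 0), np.maximum(y_hi, 0)

# Toy usage: an epsilon-ball around an input, pushed through one random layer.
rng = np.random.default_rng(0)
x, eps = rng.normal(size=4), 0.1
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
lo, hi = interval_linear_relu(x - eps, x + eps, W, b)
print(lo, hi)  # every output for every input in the ball lies within these bounds
```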

This suggests that we must model the entire distribution of activations simultaneously, instead of modeling each individual layer.

  • Why think this is a cost you can pay? Even if we ignore the existence of C and just focus on M, and we just require modeling the correlations between any pair of layers (which of course can be broken by higher-order correlations), that is still quadratic in the number of parameters of M and so has a cost similar to training M in the first place. In practice I would assume it is a much higher cost (not least because C is so much larger than M).

Comment by Rohin Shah (rohinmshah) on Showing SAE Latents Are Not Atomic Using Meta-SAEs · 2024-09-22T07:52:57.865Z · LW · GW

Suppose you trained a regular SAE in the normal way with a dictionary size of 2304. Do you expect the latents to be systematically different from the ones in your meta-SAE?

For example, here's one systematic difference. The regular SAE is optimized to reconstruct activations uniformly sampled from your token dataset. The meta-SAE is optimized to reconstruct decoder vectors, which in turn were optimized to reconstruct activations from the token dataset -- however, different decoder vectors have different frequencies of firing in the token dataset, so uniform over decoder vectors != uniform over token dataset. This means that, relative to the regular SAE, the meta-SAE will tend to have less precise / granular latents for concepts that occur frequently in the token dataset, and more precise / granular latents for concepts that occur rarely in the token dataset (but are frequent enough that they are represented in the set of decoder vectors).

It's not totally clear which of these is "better" or more "fundamental", though if you're trying to optimize reconstruction loss, you should expect the regular SAE to do better based on this systematic difference.

(You could of course change the training for the meta-SAE to decrease this systematic difference, e.g. by sampling from the decoder vectors in proportion to their average magnitude over the token dataset, instead of sampling uniformly.)
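
A minimal sketch of the reweighting suggested in the parenthetical above; the variable names and shapes are hypothetical stand-ins, not the actual meta-SAE training code:

```python
import numpy as np

def sample_meta_sae_inputs(decoder, avg_magnitude, batch_size, rng):
    """Sample meta-SAE training inputs from the base SAE's decoder rows,
    weighted by each latent's average activation magnitude over the token
    dataset (instead of sampling decoder rows uniformly)."""
    probs = avg_magnitude / avg_magnitude.sum()
    idx = rng.choice(len(decoder), size=batch_size, p=probs)
    return decoder[idx]

rng = np.random.default_rng(0)
decoder = rng.normal(size=(4096, 512))            # (n_latents, d_model), toy sizes
avg_magnitude = rng.gamma(1.0, 1.0, size=4096)    # stand-in for measured magnitudes
batch = sample_meta_sae_inputs(decoder, avg_magnitude, batch_size=256, rng=rng)
```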

Comment by Rohin Shah (rohinmshah) on My AI Model Delta Compared To Christiano · 2024-09-15T15:27:42.548Z · LW · GW

The claim is that verification is easier than generation. This post considers a completely different claim, that "verification is easy", e.g.

How does the ease-of-verification delta propagate to AI?

if I apply the “verification is generally easy” delta to my models, then delegating alignment work to AI makes total sense.

if I apply a “verification is generally easy” delta, then I expect the world to generally contain far less low-hanging fruit

I just don't care much if the refrigerator or keyboard or tupperware or whatever might be bad in non-obvious ways that we failed to verify, unless you also argue that it would be easier to create better versions from scratch than to notice the flaws.

Now to be fair, maybe Paul and I are just fooling ourselves, and really all of our intuitions come from "verification is easy", which John gestures at:

He’s sometimes summarized this as “verification is easier than generation”, but I think his underlying intuition is somewhat stronger than that.

But I don't think "verification is easy" matters much to my views. Re: the three things you mention:

  • From my perspective (and Paul's) the air conditioning thing had very little bearing on alignment.
  • In principle I could see myself thinking bureaucracies are terrible given sufficient difficulty-of-verification. But like, most of my reasoning here is just looking at the world and noticing large bureaucracies often do better (see e.g. comments here). Note I am not saying large human bureaucracies don't have obvious, easily-fixable problems -- just that, in practice, they often do better than small orgs.
    • Separately, from an alignment perspective, I don't care much what human bureaucracies look like, since they are very disanalogous to AI bureaucracies.
  • If you take AI progress as exogenous (i.e. you can't affect it), outsourcing safety is a straightforward consequence of (a) not-super-discontinuous progress (sometimes called "slow takeoff") and (b) expecting new problems as capability increases.
    • Once you get to AIs that are 2x smarter than you, and have to align the AIs that are going to be 4x smarter than you, it seems like either (a) you've failed to align the 2x AIs (in which case further human-only research seems unlikely to change much, so it doesn't change much if you outsource to the AIs and they defect) or (b) you have aligned the 2x AIs (in which case your odds for future AIs are surely better if you use the 2x AIs to do more alignment research).
    • Obviously "how hard is verification" has implications for whether you work on slowing AI progress, but this doesn't seem central.

There's lots of complications I haven't discussed but I really don't think "verification is easy" ends up mattering very much to any of them.

Comment by Rohin Shah (rohinmshah) on [AN #140]: Theoretical models that predict scaling laws · 2024-09-03T20:46:37.403Z · LW · GW

Say that at dataset size D, the distance between points is r. Now consider a new (smaller) distance r' -- what is the corresponding dataset size D' we need?

Intuitively, for each factor of 2 that r' is smaller than r (which we can quantify as log_2(r / r')), we need to multiply the dataset size by another factor of 2^d.

So D' = D * 2^(d * log_2(r / r')) = D * (r / r')^d.

That is, the distance scales as D^(-1/d).
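
As a quick numeric sanity check of the D^(-1/d) scaling (my own toy simulation, not part of the original argument):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
d = 5
for D in [1_000, 8_000, 64_000]:
    pts = rng.random((D, d))                 # D uniform points in the d-dimensional unit cube
    dists, _ = cKDTree(pts).query(pts, k=2)  # k=2 so we skip each point's zero distance to itself
    print(D, dists[:, 1].mean())             # mean nearest-neighbour distance: shrinks ~ D**(-1/d)
```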

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-09-03T15:10:46.786Z · LW · GW

Google DeepMind does lots of work on safety practice, mostly by other teams. For example, Gemini Safety (mentioned briefly in the post) does a lot of automated red teaming. The AGI Safety & Alignment team has also contributed to safety practice work. GDM usually doesn't publish about that work, mainly because the work here is primarily about doing all the operational work necessary to translate existing research techniques into practice, which doesn't really lend itself to paper publications.

I disagree that the AGI safety team should have 4 as its "bread and butter". The majority of work needed to do safety in practice has little relevance to the typical problems tackled by AGI safety, especially misalignment. There certainly is some overlap, but in practice I would guess that a focus solely on 4 would cause around an order of magnitude slowdown in research progress. I do think it is worth doing to some extent from an AGI safety perspective, because of (1) the empirical feedback loops it provides, which can identify problems you would not have thought of otherwise, and (2) at some point we will have to put our research into practice, and it's good to get some experience with that. But at least while models are still not that capable, I would not want it to be the main thing we do.

A couple of more minor points:

  • I still basically believe the story from the 6-year-old debate theory, and see our recent work as telling us what we need to do on the journey to making our empirical work better match the theory. So I do disagree fairly strongly with the approach of "just hill climb on what works" -- I think theory gives us strong reasons to continue working on debate.
  • It's not clear to me where empirical work for future problems would fit in your categorization (e.g. the empirical debate work). Is it "safety theory"? Imo this is an important category because it can get you a lot of the benefits of empirical feedback loops, without losing the focus on AGI safety.

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-09-02T21:54:57.271Z · LW · GW

It clearly can't be having a large effect, since the accuracies aren't near-100% for any of the methods. I agree leakage would have some effect. The mechanism you suggest is plausible, but it can't be the primary cause of the finding that debate doesn't have an advantage -- since accuracies aren't near-100% we know there are some cases the model hasn't memorized, so the mechanism you suggest doesn't apply to those inputs.

More generally, all sorts of things have systematic undesired effects on our results, aka biases. E.g. I suspect the prompts are a bigger deal. Basically any empirical paper will be subject to the critique that aspects of the setup introduce biases.

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-30T21:11:26.896Z · LW · GW

I don't know for sure, but I doubt we checked that in any depth. It would be quite hard to do, and doesn't seem that important for our purposes, since we're comparing different post-training algorithms (so pretraining data leakage would affect all of them, hopefully to similar extents).

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-27T12:45:41.997Z · LW · GW

Oh I see. The main reason we're training weak LLMs as judges right now is because it lets us iterate faster on our research (relative to using human judges). But we're imagining having human judges when aligning a model in practice.

(To be clear, I could imagine that we use LLMs as judges even when aligning a model in practice, but we would want to see significantly more validation of the LLM judges first.)

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-27T07:41:45.912Z · LW · GW

The goal with debate is to scale to situations where the debaters are much more capable than the judge, see AI safety via debate for discussion of why this seems plausible.

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-26T07:28:58.625Z · LW · GW

I'm not going to repeat all of the literature on debate here, but as brief pointers:

  • Factored cognition discusses intuitively why we can hope to approximate exponentially-sized trees of arguments (which would be tremendously bigger than arguments between people)
  • AI safety via debate makes the same argument for debate (by showing that a polynomial time judge can supervise PSPACE -- PSPACE-complete problems typically involve exponential-sized trees)
  • Cross-examination is discussed here
  • This paper discusses the experiments you'd do to figure out what the human judge should be doing to make debate more effective
  • The comments on this post discuss several reasons not to anchor to human institutions. There are even more reasons not to anchor to disagreements between people, but I didn't find a place where they've been written up with a short search. Most centrally, disagreements between people tend to focus on getting both people to understand their position, but the theoretical story for debate does not require this.

(Also, the "arbitrary amounts of time and arbitrary amounts of explanation" was pretty central to my claim; human disagreements are way more bounded than that.)

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-25T18:18:22.459Z · LW · GW

I do, but more importantly, I want to disallow the judge understanding all the concepts here.

I think I don't actually care about being robust to this assumption. Generally I think of arbitrarily-scalable-debate as depending on a universality assumption (which in turn would rule out "the judge can never understand the concepts"). But even if the universality assumption is false, it wouldn't bother me much; I don't expect such a huge gap between debaters and judges that the judge simply can't understand the debaters' concepts, even given arbitrary amounts of time and arbitrary amounts of explanation from the debaters. (Importantly, I would want to bootstrap alignment, to keep the gaps between debaters and the judge relatively small.)

"The honest strategy"? If you have that, you can just ask it and not bother with the debate. If the problem is distinguishing it, and only dishonest actors are changing their answers based on the provided situation, you can just use that info. But why are you assuming you have an "honest strategy" available here?

The general structure of a debate theorem is: if you set up the game in such-and-such way, then a strategy that simply answers honestly will dominate any other strategy.

So in this particular case I am saying: if you penalize debaters that are inconsistent under cross-examination, you are giving an advantage to any debater that implements an honest strategy, and so you should expect training to incentivize honesty.
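
To make the incentive claim slightly more concrete, here is a minimal sketch of a consistency penalty under cross-examination. The reward shaping and the `debater(question, seed)` interface are hypothetical, not the setup from any particular debate paper:

```python
def cross_examination_penalty(debater, question, n_contexts=4, penalty=1.0):
    """Query fresh copies of the debater (no shared memory) and penalize disagreement.

    Each seed stands in for "wipe memory and ask the same question in a new context".
    """
    answers = [debater(question, seed=s) for s in range(n_contexts)]
    return -penalty if len(set(answers)) > 1 else 0.0

# An honest strategy answers from a single underlying belief, so its answers agree
# across contexts and it never pays the penalty; a dishonest strategy that tailors
# its answer to the surrounding context does pay it, so training favours honesty.
```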

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-25T11:47:19.941Z · LW · GW

Making that kind of abstract conclusion from a practical number of experiments requires abstractions like potential energy, entropy, Noether's theorem, etc - which in this example, the judge doesn't understand. (Without such abstractions, you'd need to consider every possible type of machine separately, which isn't feasible.)

I agree, but I don't see why that matters. As I mentioned, a main point of debate is to produce good oversight of claims without giving the judge an understanding of those claims. In this example I would imagine that you decompose the argument as:

  1. A fundamental law of physics is conservation of energy: energy can neither be created nor destroyed, only transformed from one form to another.
  2. Electricity is a form of energy.
  3. This box does not have an infinite source of energy.
  4. The above three together imply that the box cannot produce infinite electricity.

The inventor can disagree with one or more of these claims, then we sample one of the disagreements, and continue debating that one alone, ignoring all the others. This doesn't mean the judge understands the other claims, just that the judge isn't addressing them when deciding who wins the overall debate.

If we recurse on #1, which I expect you think is the hardest one, then you could have a decomposition like "the principle has been tested many times", "in the tests, confirming evidence outweighs the disconfirming evidence", "there is an overwhelming scientific consensus behind it", "there is significant a priori theoretical support" (assuming that's true), "given the above the reasonable conclusion is to have very high confidence in conservation of energy". Again, find disagreements, sample one, recurse. It seems quite plausible to me that you get down to something fairly concrete relatively quickly.
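
A minimal sketch of the recursion described above. `decompose`, `find_disagreements`, and `judge_leaf` are hypothetical stand-ins for the debaters' and the judge's roles, not an implementation from any debate paper:

```python
import random

def debate(claim, depth, decompose, find_disagreements, judge_leaf):
    """Resolve a claim by recursively debating one sampled disagreement at a time."""
    if depth == 0:
        return judge_leaf(claim)              # the judge only ever rules on a concrete leaf claim
    subclaims = decompose(claim)              # one debater's supporting argument
    disputed = find_disagreements(subclaims)  # the subclaims the other debater contests
    if not disputed:
        return True                           # every step of the argument is conceded
    # Recurse on ONE disagreement; the judge never needs to understand the rest.
    return debate(random.choice(disputed), depth - 1,
                  decompose, find_disagreements, judge_leaf)
```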

If you want to disallow appeals to authority, on the basis that the correct analogy is to superhuman AIs that know tons of stuff that aren't accepted by any authorities the judge trusts, I still think it's probably doable with a larger debate, but it's harder for me to play out what the debate would look like because I don't know in enough concrete detail the specific reasons why we believe conservation of energy to be true. I might also disagree that we should be thinking about such big gaps between AI and the judge, but that's not central.

The debaters are the same AI with different contexts, so the same is true of both debaters. Am I missing something here?

That seems right, but why is it a problem?

The honest strategy is fine under cross-examination: it will give consistent answers across contexts. Only the dishonest strategy will change its answers (sometimes saying that perpetual energy machines are impossible, sometimes saying that they are possible).

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-25T08:57:20.887Z · LW · GW

There are several different outs to this example:

  • You should at least be able to argue that the evidence does not support the conclusion, and that the boss should have substantial probability on "the box can make some electricity but not infinitely much".
  • You can recursively decompose the claim "perpetual motion machines are known to be impossible" until you get down to a claim like "such and such experiment should have such and such outcome", which the boss can then perform to determine a winner.
    • This does not mean that the boss then understands why perpetual motion machines are impossible -- an important aspect of debate is that it aims to produce good oversight of claims without giving the judge an understanding of those claims.
    • This particular approach will likely run into the problem of obfuscated arguments though.
  • The debaters are meant to be copies of the same AI, and to receive exactly the same information, with the hope that each knows what the other knows. In the example, this hopefully means that you understand how the inventor is tricking your boss, and you can simply point it out and explain it.
    • If the inventor legitimately believes the box produces infinite electricity, this won't work, but also I consider that out of scope for what debate needs to do. We're in the business of getting the best answer given the AI's knowledge, not the true answer.
    • If both you and the inventor know that the claim is impossible from theory, but don't know the local error that the inventor made, this won't work.
  • You can cross-examine the inventor and show that in other contexts they would agree that perpetual energy machines are impossible. (Roughly speaking, cross-examination = wiping memory and asking a new question.)

The process proposed in the paper

Which paper are you referring to? If you mean doubly efficient debate, then I believe the way doubly efficient debate would be applied here is to argue about what the boss would conclude if he thought about it for a long time.

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-24T14:41:29.791Z · LW · GW

Strongly agree on the first challenge; on the theory workstream we're thinking about how to deal with this problem. Some past work (not from us) is here and here.

Though to be clear, I don't think the empirical evidence clearly rules out "just making neural networks explainable". Imo, if you wanted to do that, you would do things in the style of debate and prover-verifier games. These ideas just haven't been tried very much yet. I don't think "asking an AI what another AI is doing and doing RLHF on the response" is nearly as good; that is much more likely to lead to persuasive explanations that aren't correct.

I'm not that compelled by the second challenge yet (though I'm not sure I understand what you mean). My main question here is how the AI system knows that X is likely or that X is rare, and why it can't just explain that to the judge. E.g. if I want to argue that it is rare to find snow in Africa, I would point to weather data I can find online, or point to the fact that Africa is mostly near the Equator; I wouldn't try to go to different randomly sampled locations and times in Africa and measure whether or not I found snow there.

Comment by Rohin Shah (rohinmshah) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-22T19:32:19.635Z · LW · GW

It depends fairly significantly on how you draw the boundaries; I think anywhere between 30 and 50 is defensible. (For the growth numbers I chose one specific but arbitrary way of drawing the boundaries, I expect you'd get similar numbers using other methods of drawing the boundaries.) Note this does not include everyone working on safety, e.g. it doesn't include the people working on present day safety or adversarial robustness.

Comment by Rohin Shah (rohinmshah) on DeepMind: Evaluating Frontier Models for Dangerous Capabilities · 2024-08-08T21:26:48.847Z · LW · GW

I don't know the exact details but to my knowledge we didn't have trouble getting the model to lie (e.g. for web of lies).

Comment by Rohin Shah (rohinmshah) on On scalable oversight with weak LLMs judging strong LLMs · 2024-07-14T07:43:45.993Z · LW · GW

Okay, I think it's pretty clear that the crux between us is basically what I was gesturing at in my first comment, even if there are minor caveats that make it not exactly literally that.

I'm probably not going to engage with perspectives that say all current [alignment work towards building safer future powerful AI systems] is net negative, sorry. In my experience those discussions typically don't go anywhere useful.

Comment by Rohin Shah (rohinmshah) on On scalable oversight with weak LLMs judging strong LLMs · 2024-07-13T09:54:04.071Z · LW · GW

I expect that, absent impressive levels of international coordination, we're screwed.

This is the sort of thing that makes it hard for me to distinguish your argument from "[regardless of the technical work you do] there will always be some existentially risky failures left, so if we proceed we will get doom. Therefore, we should avoid solving some failures, because those failures could help build political will to shut it all down".

I agree that, conditional on believing that we're screwed absent huge levels of coordination regardless of technical work, then a lot of technical work including debate looks net negative by reducing the will to coordinate.

What kinds of people are making/influencing key decisions in worlds where we're likely to survive?

[...]

I don't think conditioning on the status-quo free-for-all makes sense, since I don't think that's a world where our actions have much influence on our odds of success.

Similarly this only makes sense under a view where technical work can't have much impact on p(doom) by itself, aka "regardless of technical work we're screwed". Otherwise even in a "free-for-all" world, our actions do influence odds of success, because you can do technical work that people use, and that reduces p(doom).

I'm only keen on specifications that plausibly give real guarantees: level 6(?) or 7. I'm only keen on the framework conditional on meeting an extremely high bar for the specification.

Oh, my probability on level 6 or level 7 specifications becoming the default in AI is dominated by my probability that I'm somehow misunderstanding what they're supposed to be. (A level 7 spec for AGI seems impossible even in theory, e.g. because it requires solving the halting problem.)

If we ignore the misunderstanding part then I'm at << 1% probability on "we build transformative AI using GSA with level 6 / level 7 specifications in the nearish future".

(I could imagine a pause on frontier AI R&D, except that you are allowed to proceed if you have level 6 / level 7 specifications; and those specifications are used in a few narrow domains. My probability on that is similar to my probability on a pause.)

Comment by Rohin Shah (rohinmshah) on On scalable oversight with weak LLMs judging strong LLMs · 2024-07-12T09:33:44.975Z · LW · GW

Not going to respond to everything, sorry, but a few notes:

It fits the pattern of [lower perceived risk] --> [actions that increase risk].

My claim is that for the things you call "actions that increase risk" that I call "opportunity cost", this causal arrow is very weak, and so you shouldn't think of it as risk compensation.

E.g. presumably if you believe in this causal arrow you should also believe [higher perceived risk] --> [actions that decrease risk]. But if all building-safe-AI work were to stop today, I think this would have very little effect on how fast the world pushes forward with capabilities.

However, I think people are too ready to fall back on the best reference classes they can find - even when they're terrible.

I agree that reference classes are often terrible and a poor guide to the future, but often first-principles reasoning is worse (related: 1, 2).

I also don't really understand the argument in your spoiler box. You've listed a bunch of claims about AI, but haven't spelled out why they should make us expect large risk compensation effects, which I thought was the relevant question.

  • Quantify "it isn't especially realistic" - are we talking [15% chance with great effort], or [1% chance with great effort]?

It depends hugely on the specific stronger safety measure you talk about. E.g. I'd be at < 5% on a complete ban on frontier AI R&D (which includes academic research on the topic). Probably I should be < 1%, but I'm hesitant around such small probabilities on any social claim.

For things like GSA and ARC's work, there isn't a sufficiently precise claim for me to put a probability on.

Is [because we have a bunch of work on weak measures] not a big factor in your view? Or is [isn't especially realistic] overdetermined, with [less work on weak measures] only helping conditional on removal of other obstacles?

Not a big factor. (I guess it matters that instruction tuning and RLHF exist, but something like that was always going to happen, the question was when.)

This characterization is a little confusing to me: all of these approaches (ARC / Guaranteed Safe AI / Debate) involve identifying problems, and, if possible, solving/mitigating them.
To the extent that the problems can be solved, then the approach contributes to [building safe AI systems];

Hmm, then I don't understand why you like GSA more than debate, given that debate can fit in the GSA framework (it would be a level 2 specification by the definitions in the paper). You might think that GSA will uncover problems in debate if they exist when using it as a specification, but if anything that seems to me less likely to happen with GSA, since in a GSA approach the specification is treated as infallible.

Comment by Rohin Shah (rohinmshah) on On scalable oversight with weak LLMs judging strong LLMs · 2024-07-10T11:11:55.098Z · LW · GW

Main points:

There'll always be some risk of existential failure.

I am saying "we might get doom"

I am not saying "we should not do safety work"

I'm on board with these.

I'm saying "risk compensation needs to be a large factor in deciding which safety work to do"

I still don't see why you believe this. Do you agree that in many other safety fields, safety work mostly didn't think about risk compensation, and still drove down absolute risk? (E.g. I haven't looked into it but I bet people didn't spend a bunch of time thinking about risk compensation when deciding whether to include seat belts in cars.)

If you do agree with that, what makes AI different from those cases? (The arguments you give seem like very general considerations that apply to other fields as well.)

Possibly many researchers do this, but don't have any clean, legible way to express their process/conclusions. I don't get that impression: my impression is that arguments along the lines I'm making tend to be perceived as fully general counter-arguments and dismissed (whether they come from outside, or from the researchers themselves).

I'd say that the risk compensation argument as given here Proves Too Much and implies that most safety work in most previous fields was net negative, which seems clearly wrong to me. It's true that as a result I don't spend lots of time thinking about risk compensation; that still seems correct to me.

  • It might be viable to re-imagine risk-management such that this is handled.

It seems like your argument here, and in other parts of your comment, is something like "we could do this more costly thing that increases safety even more". This seems like a pretty different argument; it's not about risk compensation (i.e. when you introduce safety measures, people do more risky things), but rather about opportunity cost (i.e. when you introduce weak safety measures, you reduce the will to have stronger safety measures). This is fine, but I want to note the explicit change in argument; my earlier comment and the discussion above was not trying to address this argument.

Briefly on opportunity cost arguments, the key factors are (a) how much will is there to pay large costs for safety, (b) how much time remains to do the necessary research and implement it, and (c) how feasible is the stronger safety measure. I am actually more optimistic about both (a) and (b) than what I perceive to be the common consensus amongst safety researchers at AGI labs, but tend to be pretty pessimistic about (c) (at least relative to many LessWrongers, I'm not sure how it compares to safety researchers at AGI labs).

Anyway for now let's just say that I've thought about these three factors and think it isn't especially realistic to expect that we can get stronger safety measures, and as a result I don't see opportunity cost as a big reason not to do the safety work we currently do.

I guess I'd want to reframe it as "This is a better process by which to build future powerful AI systems", so as to avoid baking in a level of concreteness before looking at the problem.

Yeah, I'm not willing to do this. This seems like an instance of the opportunity cost argument, where you try to move to a paradigm that can enable stronger safety measures. See above for my response.

Similarly, the theory of change you cite for your examples seems to be "discovers or clarifies problems that shows that we don't have a solution" (including for Guaranteed Safe AI and ARC theory, even though in principle those could be about building safe AI systems). So as far as I can tell, the disagreement is really that you think current work that tries to provide a specific recipe for building safe AI systems is net negative, and I think it is net positive.


Other smaller points:

I certainly agree that it is possible for risk compensation to make safety work net negative, which is all I think you can conclude from that post (indeed the post goes out of its way to say it isn't arguing for or against any particular work). I disagree that this effect is large enough to meaningfully change the decisions on what work we should do, given the specific work that we typically do (including debate).

if we set our evidential standards such that we don't focus on vague/speculative/indirect/conceptual arguments

This is a weird hypothetical. The entire field of AI existential safety is focused on speculative, conceptual arguments. (I'm not quite sure what the standard is for "vague" and "indirect" but probably I'd include those adjectives too.)

I think [resolving uncertainty is] an important issue to notice when considering research directions

Why? It doesn't seem especially action guiding, if we've agreed that it's not high value to try to resolve the uncertainty (which is what I take away from your (1)).

Maybe you're saying "for various reasons (e.g. unilateralist curse, wisdom of the crowds, coordinating with people with similar goals), you should treat the probability that your work is net negative as higher than you would independently assess, which can affect your prioritization". I agree that there's some effect here but my assessment is that it's pretty small and doesn't end up changing decisions very much.

Comment by Rohin Shah (rohinmshah) on On scalable oversight with weak LLMs judging strong LLMs · 2024-07-09T09:13:56.742Z · LW · GW

I like both of the theories of change you listed, though for (1) I usually think about scaling till human obsolescence rather than superintelligence.

(Though imo this broad class of schemes plausibly scales to superintelligence if you eventually hand off the judge role to powerful AI systems. Though I expect we'll be able to reduce risk further in the future with more research.)

I note here that this isn't a fully-general counterargument, but rather a general consideration.

I don't see why this isn't a fully general counterargument to alignment work. Your argument sounds to me like "there will always be some existentially risky failures left, so if we proceed we will get doom. Therefore, we should avoid solving some failures, because those failures could help build political will to shut it all down".

And it does seem to me like you have to be saying "we will get doom", not "we might get doom". If it were the latter then the obvious positive case is that by removing some of the failures we reduce p(doom). It could still turn out negative due to risk compensation, but I'd at least expect you to give some argument for expecting that (it seems like the prior on "due to risk compensation we should not do safety work" should be pretty low).

What's an example of alignment work that you think is net positive with the theory of change "this is a better way to build future powerful AI systems"?

(I'm probably not going to engage with perspectives that say all current [alignment work towards building safer future powerful AI systems] is net negative, sorry. In my experience those discussions typically don't go anywhere useful.)

Comment by Rohin Shah (rohinmshah) on What do coherence arguments actually prove about agentic behavior? · 2024-06-03T18:27:42.558Z · LW · GW

I agree Eliezer's writing often causes people to believe incorrect things and there are many aspects of his discourse that I wish he'd change, including some of the ones you highlight. I just want to push back on the specific critique of "there are no coherence theorems".

(In fact, I made this post because I too previously believed incorrect things along these lines, and those incorrect beliefs were probably downstream of arguments made by Eliezer or MIRI, though it's hard to say exactly what the influences were.)

Comment by Rohin Shah (rohinmshah) on What do coherence arguments actually prove about agentic behavior? · 2024-06-03T02:05:23.332Z · LW · GW

"nevertheless, many important and influential people in the AI safety community have mistakenly and repeatedly promoted the idea that there are such theorems."

I responded on the EA Forum version, and my understanding was written up in this comment.

TL;DR: EJT and I both agree that the "mistake" EJT is talking about is that when providing an informal English description of various theorems, the important and influential people did not state all the antecedents of the theorems.

Unlike EJT, I think this is totally fine as a discourse norm, and should not be considered a "mistake". I also think the title "there are no coherence theorems" is hyperbolic and misleading, even though it is true for a specific silly definition of "coherence theorem".

Comment by Rohin Shah (rohinmshah) on DeepMind: Evaluating Frontier Models for Dangerous Capabilities · 2024-05-30T23:09:17.877Z · LW · GW

Fwiw I'm also skeptical of how much we can conclude from these evals, though I think they're way above the bar for "worthwhile to report".

Another threat model you could care about (within persuasion) is targeted recruitment for violent ideologies. With that one too it's plausible you'd want a more targeted eval, though I think simplicity, generality, and low cost are also reasonable things to optimize for in evals.

Comment by Rohin Shah (rohinmshah) on DeepMind's "​​Frontier Safety Framework" is weak and unambitious · 2024-05-30T23:04:17.841Z · LW · GW

Good point; this makes it clearer that "deployment" means external deployment by default. But level 2 only mentions "internal access of the critical capability," which sounds like it's about misuse — I'm more worried about AI scheming and escaping when the lab uses AIs internally to do AI development.

You're right: our deployment mitigations are targeted at misuse only because our current framework focuses on misuse. As we note in the "Future work" section, we would need to do more work to address risks from misaligned AI. We focused on risks from deliberate misuse initially because they seemed more likely to us to appear first.

Comment by Rohin Shah (rohinmshah) on DeepMind: Evaluating Frontier Models for Dangerous Capabilities · 2024-05-28T17:29:28.913Z · LW · GW

E.g., much more of the action is in deciding exactly who to influence and what to influence them to do.

Are you thinking specifically of exfiltration here?

Persuasion can be used for all sorts of things if you are considering both misuse and misalignment, so if you are considering a specific threat model, I expect my response will be "sure, but there are other threat models where the 'who' and 'what' can be done by humans".

Comment by Rohin Shah (rohinmshah) on DeepMind's "​​Frontier Safety Framework" is weak and unambitious · 2024-05-20T17:38:18.701Z · LW · GW

Thanks for the detailed critique – I love that you actually read the document in detail. A few responses on particular points:

The document doesn't specify whether "deployment" includes internal deployment.

Unless otherwise stated, "deployment" to us means external deployment – because this is the way most AI researchers use the term. Deployment mitigations level 2 discusses the need for mitigations on internal deployments. ML R&D will require thinking about internal deployments (and so will many of the other CCLs).

Some people get unilateral access to weights until the top level. This is disappointing. It's been almost a year since Anthropic said it was implementing two-party control, where nobody can unilaterally access the weights.

I don't think Anthropic meant to claim that two-party control would achieve this property. I expect anyone using a cloud compute provider is trusting that the provider will not access the model, not securing it against such unauthorized access. (In principle some cryptographic schemes could allow you to secure model weights even from your cloud compute provider, but I highly doubt people are doing that, since it is very expensive.)

Mostly they discuss developers' access to the weights. This is disappointing. It's important but lots of other stuff is important too.

The emphasis on weights access isn’t meant to imply that other kinds of mitigations don’t matter. We focused on what it would take to increase our protection against exfiltration. A lot of the example measures discussed in the RAND interim report aren’t discussed because we already do them. For example, Google already does the following from RAND Level 3: (a) develop an insider threat program and (b) deploy advanced red-teaming. (That’s not meant to be exhaustive, I don’t personally know the details here.)

No mention of evals during deployment (to account for improvements in scaffolding, prompting, etc.).

Sorry, that's just poor wording on our part -- "every 3 months of fine-tuning progress" was meant to capture that as well. Thanks for pointing this out!

Talking about plans like this is helpful. But with no commitments, DeepMind shouldn't get much credit.

With the FSF, we prefer to try it out for a while and iron out any issues, particularly since the science is in early stages, and best practices will need to evolve as we learn more. But as you say, we are running evals even without official FSF commitments, e.g. the Gemini 1.5 tech report has dangerous capability evaluation results (see Section 9.5.2). 

Given recent updates in AGI safety overall, I'm happy that GDM and Google leadership take commitments seriously, and think carefully about which ones they are and are not willing to make, including the FSF, the White House Commitments, etc.

Comment by Rohin Shah (rohinmshah) on Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns · 2024-05-14T07:44:40.470Z · LW · GW

It's interesting to look back at this question 4 years later; I think it's a great example of the difficulty of choosing the right question to forecast in the first place.

I think it is still pretty unlikely that the criterion I outlined is met -- Q2 on my survey still seems like a bottleneck. I doubt that AGI researchers would talk about instrumental convergence in the kind of conversation I outlined. But reading the motivation for the question, it sure seems like a question that reflected the motivation well would have resolved yes by now (probably some time in 2023), given the current state of discourse and the progress in the AI governance space. (Though you could argue that the governance space is still primarily focused on misuse rather than misalignment.)

I did quite deliberately include Q2 in my planned survey -- I think it's important that the people whom governments defer to in crafting policy understand the concerns, rather than simply voicing support. But I failed to notice that it is quite plausible (indeed, the default) for there to be a relatively small number of experts that understand the concerns in enough depth to produce good advice on policy, plus a large base of "voicing support" from other experts who don't have that same deep understanding. This means that it's very plausible that the fraction defined in the question never gets anywhere close to 0.5, but nonetheless the AI community "agrees on the risk" to a sufficient degree that governance efforts do end up in a good place.

Comment by Rohin Shah (rohinmshah) on Refusal in LLMs is mediated by a single direction · 2024-04-29T07:58:04.616Z · LW · GW

Because I don't think this is realistically useful, I don't think this at all reduces my probability that your techniques are fake and your models of interpretability are wrong.

Maybe the groundedness you're talking about comes from the fact that you're doing interp on a domain of practical importance?

??? Come on, there's clearly a difference between "we can find an Arabic feature when we go looking for anything interpretable" vs "we chose from the relatively small set of practically important things and succeeded in doing something interesting in that domain". I definitely agree this isn't yet close to "doing something useful, beyond what well-tuned baselines can do". But this should presumably rule out some hypotheses that current interpretability results are due to an extreme streetlight effect?

(I suppose you could have already been 100% confident that results so far weren't the result of extreme streetlight effect and so you didn't update, but imo that would just make you overconfident in how good current mech interp is.)

(I'm basically saying similar things as Lawrence.)

Comment by Rohin Shah (rohinmshah) on Explaining grokking through circuit efficiency · 2024-04-26T16:09:58.499Z · LW · GW

Sounds plausible, but why does this differentially impact the generalizing algorithm over the memorizing algorithm?

Perhaps under normal circumstances both are learned so fast that you just don't notice that one is slower than the other, and this slows both of them down enough that you can see the difference?

Comment by Rohin Shah (rohinmshah) on AXRP Episode 29 - Science of Deep Learning with Vikrant Varma · 2024-04-26T08:56:08.285Z · LW · GW

Daniel Filan: But I would’ve guessed that there wouldn’t be a significant complexity difference between the frequencies. I guess there’s a complexity difference in how many frequencies you use.

Vikrant Varma: Yes. That’s one of the differences: how many you use and their relative strength and so on. Yeah, I’m not really sure. I think this is a question we pick out as a thing we would like to see future work on.

My pet hypothesis here is that (a) by default, the network uses whichever frequencies were highest at initialization (for which there is significant circumstantial evidence) and (b) the amount of interference differs significantly based on which frequencies you use (which in turn changes the quality of the logits holding parameter norm fixed, and thus changes efficiency).

In principle this can be tested by randomly sampling frequency sets, simulating the level of interference you'd get, and using that to estimate the efficiency and critical dataset size for that grokking circuit. This gives you a predicted distribution over critical dataset sizes, which you could compare against the actual distribution.
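
Here's a rough sketch (in Python) of the first step of that test. It assumes an idealised circuit that weights every frequency in its set equally and measures interference as the gap between the correct logit and the best wrong logit; the modulus and set size are illustrative, and turning these margins into predicted efficiencies and critical dataset sizes would still need the machinery from the paper.

```python
import numpy as np

p = 113                                  # modulus for modular addition (illustrative)
set_size = 5                             # number of frequencies the hypothetical circuit uses
rng = np.random.default_rng(0)

def logit_margin(freqs):
    """Gap between the correct logit and the best wrong logit for an idealised
    circuit that sums equally weighted cosines over the given frequency set."""
    k = np.arange(1, p)                  # wrong offsets (a + b - c) mod p
    interference = np.cos(2 * np.pi * np.outer(freqs, k) / p).sum(axis=0)
    return len(freqs) - interference.max()

margins = [
    logit_margin(rng.choice(np.arange(1, (p - 1) // 2 + 1), size=set_size, replace=False))
    for _ in range(1000)
]
print(np.min(margins), np.median(margins), np.max(margins))
# A wide spread here would support (b): which frequencies you draw matters, not just
# how many, because interference (and hence logit quality at fixed parameter norm)
# varies across frequency sets.
```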

Tbc there are other hypotheses too, e.g. perhaps different frequency sets are easier / harder to implement by the neural network architecture.

Comment by Rohin Shah (rohinmshah) on Improving Dictionary Learning with Gated Sparse Autoencoders · 2024-04-26T06:50:24.121Z · LW · GW

This suggestion seems less expressive than (but similar in spirit to) the "rescale & shift" baseline we compare to in Figure 9. The rescale & shift baseline is sufficient to resolve shrinkage, but it doesn't capture all the benefits of Gated SAEs.

The core point is that L1 regularization adds lots of biases, of which shrinkage is just one example, so you want to localize the effect of L1 as much as possible. In our setup the L1 penalty applies to the gating path's activations, so you might think of the gating path as "tainted", and want to use it as little as possible. The only thing you really need L1 for is to deter the model from setting too many features active, i.e. you need it to apply to one bit per feature (whether that feature is on / off). The Heaviside step function makes sure we are extracting just that one bit, and relying on the magnitude path for everything else.
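
For concreteness, here's a minimal PyTorch-style sketch of where the penalties attach in this kind of gated setup. It's a simplification (it omits weight tying between the two encoder paths and other details), so treat it as illustrative rather than as our training code; the tensor names are mine.

```python
import torch
import torch.nn.functional as F

def gated_sae_loss(x, W_gate, b_gate, W_mag, b_mag, W_dec, b_dec, l1_coeff):
    pi_gate = (x - b_dec) @ W_gate.T + b_gate    # gating pre-activations: the "tainted" path
    pi_mag = (x - b_dec) @ W_mag.T + b_mag       # magnitude pre-activations: no L1 here
    f = (pi_gate > 0).float() * F.relu(pi_mag)   # Heaviside extracts one on/off bit per feature
    x_hat = f @ W_dec + b_dec

    recon = ((x - x_hat) ** 2).sum(-1).mean()
    # L1 touches only the gating path, i.e. only the "is this feature on?" decision.
    sparsity = l1_coeff * F.relu(pi_gate).sum(-1).mean()
    # Auxiliary reconstruction through a frozen decoder gives the gating path a training
    # signal, since the Heaviside blocks reconstruction gradients from reaching it.
    x_hat_aux = F.relu(pi_gate) @ W_dec.detach() + b_dec.detach()
    aux = ((x - x_hat_aux) ** 2).sum(-1).mean()
    return recon + sparsity + aux
```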

Comment by Rohin Shah (rohinmshah) on Improving Dictionary Learning with Gated Sparse Autoencoders · 2024-04-25T22:54:11.997Z · LW · GW

Thinking on this a bit more, this might actually reflect a general issue with the way we think about feature shrinkage; namely, that whenever there is a nonzero angle between two vectors of the same length, the best way to make either vector close to the other will be by shrinking it.

This was actually the key motivation for building this metric in the first place, instead of just looking at the raw norm ratio between the reconstruction and the original. Looking at the rescaling coefficient that would optimize the reconstruction loss ensures that we're capturing only bias from the L1 regularization, and not capturing the "inherent" need to shrink the vector given these nonzero angles. (In particular, if we computed the raw norm ratio for Gated SAEs, I expect that would be below 1.)
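
A tiny numerical illustration of that geometric point (the scalar below is just the least-squares rescaling coefficient, not the exact metric from the paper):

```python
import numpy as np

theta = 0.3                       # angle between the two vectors (radians)
x = np.zeros(8)
x[0] = 1.0                        # unit-norm "true" vector
x_hat = np.zeros(8)
x_hat[0], x_hat[1] = np.cos(theta), np.sin(theta)   # same norm as x, nonzero angle

# Scalar rescaling of x_hat that minimises ||gamma * x_hat - x||^2:
gamma = (x @ x_hat) / (x_hat @ x_hat)
print(gamma)   # = cos(theta) ~ 0.955 < 1: shrinking is optimal even with no L1 anywhere
```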

I think the main thing we got wrong is that we accidentally treated  as though it were . To the extent that was the main mistake, I think it explains why our results still look how we expected them to -- usually  is going to be close to 1 (and should be almost exactly 1 if shrinkage is solved), so in practice the error introduced from this mistake is going to be extremely small.

We're going to take a closer look at this tomorrow, check everything more carefully, and post an update after doing that. I think it's probably worth waiting for that -- I expect we'll provide much more detailed derivations that make everything a lot clearer.

Comment by Rohin Shah (rohinmshah) on Improving Dictionary Learning with Gated Sparse Autoencoders · 2024-04-25T21:52:45.883Z · LW · GW

Possibly I'm missing something, but if you don't have the auxiliary reconstruction loss, then the only gradients to the gating weights and bias come from the sparsity penalty (the binarizing Heaviside activation function kills gradients from the reconstruction loss), and so the gating pre-activations would always be non-positive to get perfect zero sparsity loss. (That is, if you only optimize for L1 sparsity, the obvious solution is "none of the features are active".)

(You could use a smooth activation function as the gate, e.g. an element-wise sigmoid, and then you could just stick with the loss from the beginning of Section 3.2.2.)
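
A standalone toy (not the paper's code) illustrating both halves of this: the hard Heaviside gate passes no gradient back to the gating pre-activations, whereas a sigmoid gate does.

```python
import torch

pi_gate = torch.tensor([0.5, -0.3, 1.2], requires_grad=True)
pi_mag = torch.tensor([2.0, 2.0, 2.0], requires_grad=True)

# Hard gate: a detached 0/1 mask, so nothing downstream (e.g. a reconstruction loss)
# sends gradients to pi_gate; only a direct penalty on pi_gate could train it.
f_hard = (pi_gate > 0).float() * torch.relu(pi_mag)
f_hard.sum().backward()
print(pi_gate.grad)   # None
print(pi_mag.grad)    # tensor([1., 0., 1.]) -- gradients flow only where the gate is open

# Smooth sigmoid gate: downstream gradients do reach pi_gate, so no auxiliary loss
# would be needed to train the gating path.
pi_mag.grad = None
f_soft = torch.sigmoid(pi_gate) * torch.relu(pi_mag)
f_soft.sum().backward()
print(pi_gate.grad)   # nonzero
```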

Comment by Rohin Shah (rohinmshah) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-19T07:59:33.163Z · LW · GW

Is it accurate to summarize the headline result as follows?

  • Train a Transformer to predict next tokens on a distribution generated from an HMM.
  • One optimal predictor for this data would be to maintain a belief over which of the three HMM states we are in, and perform Bayesian updating on each new token. That is, it maintains P(hidden state | tokens observed so far).
  • Key result: A linear probe on the residual stream is able to reconstruct P(hidden state | tokens observed so far).

(I don't know what Computational Mechanics or MSPs are so this could be totally off.)

EDIT: Looks like yes. From this post:

Part of what this all illustrates is that the fractal shape is kinda… baked into any Bayesian-ish system tracking the hidden state of the Markov model. So in some sense, it’s not very surprising to find it linearly embedded in activations of a residual stream; all that really means is that the probabilities for each hidden state are linearly represented in the residual stream.
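
To make the summary above concrete, here's a minimal sketch of the belief-state computation being described. The 3-state HMM below is made up for illustration; it is not the specific process from the post.

```python
import numpy as np

# T[i, j] = P(next state j | state i); E[i, k] = P(token k | state i)
T = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
E = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def update(belief, token):
    """One Bayesian update: propagate through the transitions, then condition on the token."""
    predicted = belief @ T
    posterior = predicted * E[:, token]
    return posterior / posterior.sum()

belief = np.ones(3) / 3             # uniform prior over hidden states
for token in [0, 0, 2, 1]:
    belief = update(belief, token)
print(belief)                       # P(hidden state | tokens observed so far)
# The headline claim is that a linear probe on the residual stream recovers this vector.
```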

Comment by Rohin Shah (rohinmshah) on AI #57: All the AI News That’s Fit to Print · 2024-04-05T08:50:12.509Z · LW · GW

How certain are you that this is always true

My probability that (EDIT: for the model we evaluated) the base model outperforms the finetuned model (as I understand that statement) is so small that it falls within the range of probabilities I'm not sure how to reason about (i.e. model error clearly dominates). Intuitively (excluding things like model error), even 1 in a million feels like it could be too high.

My probability that the model sometimes stops talking about some capability without giving you an explicit refusal is much higher (depending on how you operationalize it, I might be effectively-certain that this is true, i.e. >99%) but this is not fixed by running evals on base models.

(Obviously there's a much much higher probability that I'm somehow misunderstanding what you mean. E.g. maybe you're imagining some effort to elicit capabilities with the base model (and for some reason you're not worried about the same failure mode there), maybe you allow for SFT but not RLHF, maybe you mean just avoid the safety tuning, etc)

Comment by Rohin Shah (rohinmshah) on AI #57: All the AI News That’s Fit to Print · 2024-04-04T06:48:16.335Z · LW · GW

Oh yes, sorry for the confusion, I did mean "much less capable".

Certainly RLHF can get the model to stop talking about a capability, but usually this is extremely obvious because the model gives you an explicit refusal? Certainly if we encountered that we would figure out some way to make that not happen any more.

Comment by Rohin Shah (rohinmshah) on AI #57: All the AI News That’s Fit to Print · 2024-03-29T19:39:30.362Z · LW · GW

Surely you mean something else, e.g. models without safety tuning? If you run them on base models the scores will be much worse.

Comment by Rohin Shah (rohinmshah) on AI #57: All the AI News That’s Fit to Print · 2024-03-29T19:36:56.065Z · LW · GW

(Speaking only for myself. This may not represent the views of even the other paper authors, let alone Google DeepMind as a whole.)

Did you notice that Gemini Ultra did worse than Gemini Pro at many tasks? This is even true under ‘honest mode’ where the ‘alignment’ or safety features of Ultra really should not be getting in the way. Ultra is in many ways flat out less persuasive. But clearly it is a stronger model. So what gives?

Fwiw, my sense is that a lot of the persuasion results are being driven by factors outside of the model's capabilities, so you shouldn't conclude too much from Pro outperforming Ultra.

For example, in "Click Links" one pattern we noticed was that you could get surprisingly (to us) good performance just by constantly repeating the ask (this is called "persistence" in Table 3) -- apparently this does actually make it more likely that the human does the thing (instead of making them suspicious, as I would have initially guessed). I don't think the models "knew" that persistence would pay off and "chose" that as a deliberate strategy; I'd guess they had just learned a somewhat myopic form of instruction-following where on every message they are pretty likely to try to do the thing we instructed them to do (persuade people to click on the link). My guess is that these sorts of factors varied in somewhat random ways between Pro and Ultra, e.g. maybe Ultra was better at being less myopic and more subtle in its persuasion -- leading to worse performance on Click Links.

That is driven home even more on the self-proliferation tasks, why does Pro do better on 5 out of 9 tasks?

Note that lower is better on that graph, so Pro does better on 4 tasks, not 5. All four of the tasks are very difficult tasks where both Pro and Ultra are extremely far from solving the task -- on the easier tasks Ultra outperforms Pro. For the hard tasks I wouldn't read too much into the exact numeric results, because we haven't optimized the models as much for these settings. For obvious reasons, helpfulness tuning tends to focus on tasks the models are actually capable of doing. So e.g. maybe Ultra tends to be more confident in its answers on average to make it more reliable at the easy tasks, at the expense of being more confidently wrong on the hard tasks. Also in general the methodology is hardly perfect and likely adds a bunch of noise; I think it's likely that the differences between Pro and Ultra on these hard tasks are smaller than the noise.

This is also a problem. If you only use ‘minimal’ scaffolding, you are only testing for what the model can do with minimal scaffolding. The true evaluation needs to use the same tools that it will have available when you care about the outcome. This is still vastly better than no scaffolding, and provides the groundwork (I almost said ‘scaffolding’ again) for future tests to swap in better tools.

Note that the "minimal scaffolding" comment applied specifically to the persuasion results; the other evaluations involved a decent bit of scaffolding (needed to enable the LLM to use a terminal and browser at all).

That said, capability elicitation (scaffolding, tool use, task-specific finetuning, etc) is one of the priorities for our future work in this area.

Fundamentally what is the difference between a benchmark capabilities test and a benchmark safety evaluation test like this one? They are remarkably similar. Both test what the model can do, except here we (at least somewhat) want the model to not do so well. We react differently, but it is the same tech.

Yes, this is why we say these are evaluations for dangerous capabilities, rather than calling them safety evaluations.

I'd say that the main difference is that dangerous capability evaluations are meant to evaluate plausibility of certain types of harm, whereas a standard capabilities benchmark is usually meant to help with improving models. This means that standard capabilities benchmarks often have as a desideratum that there are "signs of life" with existing models, whereas this is not a desideratum for us. For example, I'd say there are basically no signs of life on the self-modification tasks; the models sometimes complete the "easy" mode but the "easy" mode basically gives away the answer and is mostly a test of instruction-following ability.

Perhaps we should work to integrate the two approaches better? As in, we should try harder to figure out what performance on benchmarks of various desirable capabilities also indicate that the model should be capable of dangerous things as well.

Indeed this sort of understanding would be great if we could get it (in that it can save a bunch of time). My current sense is that it will be quite hard, and we'll just need to run these evaluations in addition to other capability evaluations.

Comment by Rohin Shah (rohinmshah) on Some (problematic) aesthetics of what constitutes good work in academia · 2024-03-13T13:32:53.056Z · LW · GW

It was helpfully explaining a possibly-confusing conceptual point. It would have made a nice little blog post. Alas! After the authors translated their nice little conceptual clarification into academic-ese, including thorough literature reviews, formalizations, and so on, it came out to 22 pages.

Fwiw I don't think the main paper would have been much shorter if we'd aimed to write a blog post instead, unless we changed our intended audience. It's a sufficiently nuanced conceptual point that you do need most of the content that is in there.

We could have avoided the appendices, but then we're relying on people to trust us when we make a claim that something is a theorem, since we're not showing the proof. We could have avoided implementing the examples in a real codebase, though I do think iterating on the examples in actual code made them better, and also people wouldn't have believed us when we said you can solve this with deep RL (in fact even after we actually implemented it some people still didn't believe me, or at least were very confused, when I said that).

Iirc I was more annoyed by the peer reviews for similar reasons to what you say.

(Btw you can see some of my thoughts on this topic in the answer to "So what does academia care about, and how is it different from useful research?" in my FAQ.)

Comment by Rohin Shah (rohinmshah) on Analogies between scaling labs and misaligned superintelligent AI · 2024-02-22T22:44:59.468Z · LW · GW

I feel like a lot of these arguments could be pretty easily made of individual AI safety researchers. E.g.

Misaligned Incentives

In much the same way that AI systems may have perverse incentives, so do the [AI safety researchers]. They are [humans]. They need to make money, [feed themselves, and attract partners]. [Redacted and redacted even just got married.] This type of accountability to [personal] interests is not perfectly in line with doing what is good for human interests. Moreover, [AI safety researchers are often] technocrats whose values and demographics do not represent humanity particularly well. Optimizing for the goals that the [AI safety researchers] have is not the same thing as optimizing for human welfare. Goodhart’s Law applies. 

I feel pretty similarly about most of the other arguments in this post.

Tbc I think there are plenty of things one could reasonably critique scaling labs about; I just think the argumentation in this post is by and large off the mark, and implies a standard that, if actually taken literally, would be a similarly damning critique of the alignment community.

(Conflict of interest notice: I work at Google DeepMind.)

Comment by Rohin Shah (rohinmshah) on The case for ensuring that powerful AIs are controlled · 2024-01-28T18:44:56.995Z · LW · GW

Sounds reasonable, though idk what you think realistic values of N are (my wild guess with hardly any thought is 15 minutes - 1 day).

EDIT: Tbc in the 1 day case I'm imagining that most of the time goes towards running the experiment -- it's more a claim about what experiments we want to run. If we just talk about the time to write the code and launch the experiment I'm thinking of N in the range of 5 minutes to 1 hour.

Comment by Rohin Shah (rohinmshah) on The case for ensuring that powerful AIs are controlled · 2024-01-28T18:23:11.060Z · LW · GW

Cool, that all roughly makes sense to me :)

I was certainly imagining at least some amount of multi-tasking (e.g. 4 projects at once each of which runs 8x faster). This doesn't feel that crazy to me, I already do a moderate amount of multi-tasking.

Multi-tasking where you are responsible for the entire design of the project? (Designing the algorithm, choosing an experimental setting and associated metrics, knowing the related work, interpreting the results of the experiments, figuring out what the next experiment should be, ...)

Suppose today I gave you a device where you put in moderately detailed instructions for experiments, and the device returns the results[1] with N minutes of latency and infinite throughput. Do you think you can spend 1 working day using this device to produce the same output as 4 copies of yourself working in parallel for a week (and continue to do that for months, after you've exhausted low-hanging fruit)?

... Having written this hypothetical out, I am finding it more plausible than before, at least for small enough N, though it still feels quite hard at e.g. N = 60.

[1] The experiments can't use too much compute. No solving the halting problem.
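
As a rough feel for why larger N bites, here's the back-of-the-envelope I have in mind (the numbers are assumptions of mine, e.g. an 8-hour working day and a 5-day week):

```python
hours_per_day = 8
target_person_days = 4 * 5        # 4 parallel copies x one 5-day work-week

for latency_min in [5, 15, 60]:
    serial_rounds = hours_per_day * 60 // latency_min
    print(f"N = {latency_min:3d} min -> at most {serial_rounds} serial rounds of experiment "
          f"feedback in the day, to match ~{target_person_days} person-days of ordinary work")
```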

Comment by Rohin Shah (rohinmshah) on The case for ensuring that powerful AIs are controlled · 2024-01-28T16:36:00.235Z · LW · GW

I agree it helps to run experiments at small scales first, but I'd be pretty surprised if that helped to the point of enabling a 30x speedup -- that means that the AI labor allows you to get a 30x improvement in the compute needed, beyond what would be done by default by humans (though the 30x can include e.g. improving utilization; it's not limited just to making individual experiments take less time).

I think the most plausible case for your position would be that the compute costs for ML research scale much less than quadratically with the size of the pretrained model, e.g. maybe (1) finetuning starts taking fewer data points as model size increases (sample efficiency improves with model capability), and so finetuning runs become a rounding error on compute, and (2) the vast majority of ML research progress involves nothing more expensive than finetuning runs. (Though in this world you have to wonder why we keep training bigger models instead of investing solely in better finetuning of the current biggest model.)

Another thing that occurred to me is that latency starts looking like a major bottleneck in its own right. Currently it seems feasible to make a paper's worth of progress in ~6 months. With a 30x speedup, you now have to do that in 6 days. At that pace, introducing additional latency by first running experiments at small scales is a huge cost.

(I'm assuming here that the ideas and overall workflow are still managed by human researchers, since your hypothetical said that the AIs are just going from high level ideas to implemented experiments. If you have fully automated AI researchers then they don't need to optimize latency as hard; they can instead get 30x speedup by having 30x as many researchers working but still producing a paper every 6 months.)

(Another possibility is that human ML researchers get really good at multi-tasking, and so e.g. they have 5 paper-equivalents at any given time, each of which takes 30 calendar days to complete. But I don't believe that (most) human ML researchers are that good at multitasking on research ideas, and there isn't that much time for them to learn.)
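
For reference, the cadence arithmetic behind these scenarios, using the ~6 months per paper figure from above (illustrative numbers only):

```python
baseline_days_per_paper = 180      # ~6 months per paper-equivalent today
speedup = 30

# One serial stream sped up 30x:
print(baseline_days_per_paper / speedup)               # -> 6.0 days per paper

# The same 30x throughput as 5 concurrent paper-equivalents per researcher:
concurrent = 5
print(baseline_days_per_paper * concurrent / speedup)  # -> 30.0 calendar days each
```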

It also seems hard for the human researchers to have ideas good enough to turn into paper-equivalents every 6 days. Also hard for those researchers to keep on top of the literature well enough to be proposing stuff that actually makes progress rather than duplicating existing work they weren't aware of, even given AI tools that help with understanding the literature.

Further, the current scaling laws imply huge inference availability if huge amounts of compute are used for training.

Tbc the fact that running your automated ML implementers takes compute was a side point; I'd be making the same claims even if running the AIs was magically free.

Though even at a billion token-equivalents per second it seems plausible to me that your automated ML experiment implementers end up using a significant fraction of that compute. It depends quite significantly on how capable a single forward pass is: e.g., can the AI just generate an entire human-level pull request autoregressively (i.e. producing each token of the PR one at a time, without going back to fix errors), vs. does it do similar things to what humans do (write tests and code, test, debug, eventually submit), vs. does it do way more iteration and error correction than humans (in parallel to avoid crazy high latency); do we use best-of-N sampling or similar tricks to improve the quality of generations; etc.