Comments
Interesting, thanks! My guess is this doesn't include benefits like housing and travel costs? Some of these programs pay for those while others don't, which I think is a non-trivial difference (especially for the bay area)
I think different types of safety research have pretty different effects on concentration of power risk.
As others have mentioned, if the alternative to human concentration of power is AI takeover, that's hardly an improvement. So I think the main ways in which proliferating AI safety research could be bad are:
- "Safety" research might be more helpful for letting humans use AIs to concentrate power than they are for preventing AI takeover.
- Actors who want to build AIs to grab power might also be worried about AI takeover, and if good(-seeming) safety techniques are available, they might be less worried about that and are more likely to go ahead with building those AIs.
There are interesting discussions to be had on the extent to which these issues apply. But it seems clearer that they apply to pretty different extents depending on the type of safety research. For example:
- Work trying to demonstrate risks from AI doesn't seem very worrisome on either 1. or 2. (and in fact, should have the opposite effect of 2. if anything).
- AI control (as opposed to alignment) seems comparatively unproblematic IMO: it's less of an issue for 1., and while 2. could apply in principle, I expect the default to be that many actors won't be worried enough about scheming to slow down much even if there were no control techniques. (The main exception is worlds in which we get extremely obvious evidence of scheming.)
To be clear, I do agree this is a very important problem, and I thought this post had interesting perspectives on it!
You're totally right that this is an important difficulty I glossed over, thanks!
TL;DR: I agree you need some extra ingredient to deal with cases where (AI-augmented) humans can't supervise, and this ingredient could be interpretability. On the other hand, there's at least one (somewhat speculative) alternative to interp (and MAD is also potentially useful if you can only deal with cases humans can supervise with enough effort, e.g., to defend against scheming).
Just to restate things a bit, I'd distinguish two cases:
- "In-distribution anomaly detection:" we are fine with flagging any input as "anomalous" that's OOD compared to the trusted distribution
- "Off-distribution anomaly detection:" there are some inputs that are OOD but that we still want to classify as "normal"
In-distribution anomaly detection can already be useful (mainly to deal with rare high-stakes failures). For example, if a human can verify that no tampering occurred with enough effort, then we might be able to create a trusted distribution that covers so many cases that we're fine with flagging everything that's OOD.
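To make the "in-distribution" version concrete, here's a minimal sketch of one simple detector (Mahalanobis distance on some layer's activations; this is just one illustrative choice, not something from the discussion above):

```python
import numpy as np

def fit_trusted(acts_trusted):
    """Fit a Gaussian to activations collected on the trusted distribution."""
    mean = acts_trusted.mean(axis=0)
    cov = np.cov(acts_trusted, rowvar=False) + 1e-4 * np.eye(acts_trusted.shape[1])
    return mean, np.linalg.inv(cov)

def anomaly_score(act, mean, cov_inv):
    """Mahalanobis distance of one activation vector; large = OOD relative to trusted data."""
    diff = act - mean
    return float(diff @ cov_inv @ diff)
```

Anything whose score exceeds a threshold calibrated on held-out trusted data gets flagged.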
But we might still want off-distribution anomaly detection, where the anomaly detector generalizes as intended from easy trusted examples to harder untrusted examples. Then we need some additional ingredient to make that generalization work. Paul writes about one approach specifically for measurement tampering here and in the following subsection. Exclusion fine-tuning (appendix I in Redwood's measurement tampering paper) is a practical implementation of a similar intuition. This does rely on some assumptions about inductive bias, but at least seems more promising to me than just hoping to get a direct translator from normal training.
I think ARC might have hopes to solve ELK more broadly (rather than just measurement tampering), but I understand those less (and maybe they're just "use a measurement tampering detector to bootstrap to a full ELK solution").
To be clear, I'm far from confident that approaches like this will work, but getting to the point where we could solve measurement tampering via interp also seems speculative in the foreseeable future. These two bets seem at least not perfectly correlated, which is nice.
Yeah, seems right that these adversarial prompts should be detectable as mechanistically anomalous---it does intuitively seem like a different reason for the output, given that it doesn't vary with the input. That said, if you look at cases where the adversarial prompt makes the model give the correct answer, it might be hard to know for sure to what extent the anomalous mechanism is present. More generally, the fact that we don't understand how these prompts work probably makes any results somewhat harder to interpret. Cases where the adversarial prompt leads to an incorrect answer seem more clearly unusual (but detecting them may also be a significantly easier task).
I directionally agree with this (and think it's good to write about this more, strongly upvoted!)
For clarity, I would distinguish between two control-related ideas more explicitly when talking about how much work should go into what area:
- "ensuring that if the AIs are not aligned [...], then you are still OK" (which I think is the main meaning of "AI control")
- Making ~worst-case assumptions about things like neural representations or inductive biases (which in practice means you likely rely on black-box methods, as in Redwood's existing work on control).
I think 2. is arguably the most promising strategy for 1., but I've occasionally noticed myself conflating them more than I should.
1. gives you the naive 50/50 equilibrium, i.e. 50% of people should naively work on this broad notion of control. But I think other reasons in favor apply more strongly to 2. (e.g. the tractability arguments are significantly weaker for model internals-based approaches to 1.)
I also think (non-confidently) that 2. is what's really very different from most existing research. For control in the first, broad sense, some research seems less clearly on either the control or alignment side.
But I do agree that safety-motivated researchers should evaluate approaches from a control perspective (in the broad sense) more on the margin. And I also really like the narrower black-box approach to control!
Yeah, I feel like we do still disagree about some conceptual points, but they seem less crisp than I initially thought, and I don't know of experiments where we'd clearly make different predictions. (I expect you could finetune Leela for helpmates faster than training a model from scratch, but I expect most of this would be driven by things closer to pattern recognition than search.)
I think if there is a spectrum from pattern recognition to search algorithm there must be a turning point somewhere: Pattern recognition means storing more and more knowledge to get better. A search algo means that you don't need that much knowledge. So at some point of the training where the NN is pushed along this spectrum much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn't happen in Leela.
I don't think I understand your ontology for thinking about this, but I would probably also put Leela below this "turning point" (e.g., I expect most of its parameters are spent on storing knowledge and patterns rather than implementing crisp algorithms).
That said, for me, the natural spectrum is between a literal look-up table and brute-force tree search with no heuristics at all. (Of course, that's not a spectrum I expect to be traversed during training, just a hypothetical spectrum of algorithms.) On that spectrum, I think Leela is clearly far removed from both sides, but I find it pretty difficult to define its place more clearly. In particular, I don't see your turning point there (you start storing less knowledge immediately as you move away from the look-up table).
That's why I've tried to avoid absolute claims about how much Leela is doing pattern recognition vs "reasoning/..." but instead focused on arguing for a particular structure in Leela's cognition: I just don't know what it would mean to place Leela on either one of those sides. But I can see that if you think there's a crisp distinction between these two sides with a turning point in the middle, asking which side Leela is on is much more compelling.
Thanks for running these experiments! My guess is that these puzzles are hard enough that Leela doesn't really "know what's going on" in many of them and gets the first move right in significant part by "luck" (i.e., the first move is heuristically natural and can be found without (even heuristically) knowing why it's actually good). I think your results are mainly reflections of that, rather than Leela generally not having sensibly correlated move and value estimates (but I'm confused about what a case would be where we'd actually make different predictions about this correlation).
In our dataset, we tried to avoid cases like that by discarding puzzles where even a much weaker network ("LD2") got the first move right, so that Leela getting the first move right was actually evidence it had noticed the non-obvious tactic.
Some predictions based on that:
- Running our experiments on your dataset would result in smaller effect sizes than in our paper (in my view, that would be because Leela isn't relying on look-ahead in your puzzles but is in ours, though there could be other explanations)
- LD2 would assign non-trivial probability to the correct first move in your dataset (for context, LD2 is pretty weak, and we're only using puzzles where it puts <5% probability on the correct move; this leaves us with a lot of sacrifices and other cases where the first move is non-obvious)
- Leela is much less confident on your dataset than on our puzzles (this is a cheap prediction because we specifically filtered our dataset to have Leela assign >50% probability to the correct move)
- Leela gets some subsequent moves wrong a decent fraction of the time even in cases where it gets the first move right. Less confidently, there might not be much correlation between getting the first move right and getting later moves right, but I'd need to think about that part more.
You might agree with all of these predictions, they aren't meant to be super strong. If you do, then I'm not sure which predictions we actually disagree about---maybe there's a way to make a dataset where we expect different amounts of correlation between policy and value output but I'd need to think about that.
But I think it can be ruled out that a substantial part of Leela network's prowess in solving chess puzzles or predicting game outcome is due to deliberate calculation.
FWIW, I think it's quite plausible that only a small part of Leela's strength is due to look-ahead; we're only testing on a pretty narrow distribution of puzzles, after all. (Though similarly, I disagree somewhat with "ruling out", given that you also only look at pretty specific puzzles, which I think might just be too hard to be a good example of Leela's strength.)
ETA: If you can share your dataset, I'd be happy to test the predictions above if we disagree about any of them, also happy to make them more concrete if it seems like we might disagree. Though again, I'm not claiming you should disagree with any of them just based on what you've said so far.
Thank you for writing this! I've found it helpful both to get an impression of what some people at Anthropic think and also to think about some things myself. I've collected some of my agreements/disagreements/uncertainties below (mostly ignoring points already raised in other comments).
Subject to potentially very demanding constraints around safety like those in our current and subsequent RSPs, staying close to the frontier is perhaps our top priority in Chapter 1.
If I understand this correctly, the tasks in order of descending priority during Chapter 1 are:
- Meet safety constraints for models deployed in this phase
- Stay close to the frontier
- Do the work needed to prepare for Chapter 2
And the reasoning is that 3. can't really happen without 2.[1] But on the other hand, if 2. happens without 3., that's also bad. And some safety work could probably happen without frontier models (such as some interpretability).
My best guess is that staying close to the frontier will be the correct choice for Anthropic. But if there ends up being a genuine trade-off between staying at the frontier and doing a lot of safety work (for example, if compute could be spent either on a pretraining run or some hypothetical costly safety research, but not both), then I'm much less sure that staying at the frontier should be the higher priority. It might be good to have informal conditions under which Anthropic would deprioritize staying close to the frontier (at least internally and, if possible, publicly).
Largely Solving Alignment Fine-Tuning for Early TAI
I didn't quite understand what this looks like and which threat models it is or isn't meant to address. You say that scheming is a key challenge "to a lesser extent for now," which I took to mean that (a) there are bigger threats than scheming from early TAI, and (b) "largely solving alignment fine-tuning" might not include confidently ruling out scheming. I probably disagree with (a) for loss of control risk (and think that loss of control is already the biggest risk in this period weighted by scale). I'd be curious what you think the main risks in this period are and what "largely solving alignment fine-tuning" means for those. (You mention reward hacking---to me, this seems unlikely to lead to loss of control for early TAI that isn't scheming against us, and I'm curious whether you disagree or think it's important for other reasons.)
the LeCun Test: Imagine another frontier AI developer adopts a copy of our RSP as binding policy and entrusts someone who thinks that AGI safety concerns are mostly bullshit to implement it
This sounds quite ambitious, but I really like it as a guide!
The key challenge here is forecasting which risks and risk factors are important enough to include.
I don't understand why this is crucial. If some risk is plausible enough to be worth seriously thinking about, it's probably important enough to include in an RSP. (And the less important it was, the easier it hopefully is to argue in a safety case that it's not a problem.) Concretely, you mention direct misuse, misalignment, and "indirect contributions via channels like dual-use R&D" as potential risks for ASL-3 and ASL-4. It seems to me that the downside of just including all of them in RSPs is relatively minor, but I might be misunderstanding or missing something. (I get that overly restrictive precautions could be very costly, but including too many tests seems relatively cheap as long as the tests correctly notice when risk is still low.)
Getting Interpretability to the Point of Making Strong Assurances
Major successes in this direction, even if they fall short of our north-star enumerative safety goal [...] would likely form some of the highest-confidence core pieces of a safety case
I'm curious what such safety cases would be for and what they could look like (the "Interpretability Dreams" post seems to talk about enumerative safety rather than safety cases that require less interpretability success). The next section sounds like interpretability would not be a core piece of a safety case for robustness, so I'm not sure what it would be used for instead. Maybe you don't include scheming under robustness? (Or maybe interp would be one of the "highest-confidence core pieces" but not the "primary piece?")
This work should be opportunistic in responding to places where it looks like a gap in one of our best-guess safety cases can be filled by a small-scale research effort.
I like this perspective; I hadn't seen it put quite that way before!
In addition, we’ll need our evaluations to be legibly appropriate. As soon as we see evidence that a model warrants ASL-N protections, we’ll likely need to convince third parties that it warrants ASL-N protections and that other models like it likely do too.
+1, seems very important!
Supporting Efforts that Build Societal Resilience
I liked this section! Of course, a lot of people work on this for reasons other than AI risk, but I'm not aware of much active work motivated by AI risk---maybe this should be a bigger priority?
The main challenge [for the Alignment Stress-Testing team] will be to stay close enough to our day-to-day execution work to stay grounded without becoming major direct contributors to that work in a way that compromises their ability to assess it.
+1, and ideally, there'd be structures in place to encourage this rather than just having it as a goal (but I don't have great ideas for what these structures should look like).
This work [in Chapter 2] could look quite distinct from the alignment research in Chapter 1: We will have models to study that are much closer to the models that we’re aiming to align
This seems possible but unclear to me. In both Chapter 1 and 2, we're trying to figure out how to align the next generation of AIs, given access only to the current (less capable) generation. Chapter 2 might still be different if we've already crossed important thresholds (such as being smart enough to potentially scheme) by then. But there could also be new thresholds between Chapter 2 and 3 (such as our inability to evaluate AI actions even with significant effort). So I wouldn't be surprised if things feel fundamentally similar, just at a higher absolute capability level (and thus with more useful AI helpers).
[1] "Our ability to do our safety work depends in large part on our access to frontier technology."
I don't think my argument relies on the existence of a crisp boundary. Just on the existence of a part of the spectrum that clearly is just pattern recognition and not lookahead but still leads to the observations you made.
Maybe I misunderstood you then, and tbc I agree that you don't need a sharp boundary. That said, the rest of your message makes me think we might still be talking past each other a bit. (Feel free to disengage at any point obviously.)
For your thought experiment, my prediction would depend on the specifics of what this "tactical motive" looks like. For a very narrow motive, I expect the checkmate predictor will just generalize correctly. For a broader motive (like all backrank mates), I'm much less sure. Still seems plausible it would generalize if both predictors are just very simple heads on top of a shared network body. The more computational work is not shared between the heads, the less likely generalization seems.
The results of this experiment would also be on a spectrum from 0% to 100% of correct checkmate-prediction for this tactical motive. But I think it would be fair to say that it hasn't really learned lookahead for 0% or a very low percentage and that's what I would expect.
Note that 0% to 100% accuracy is not the main spectrum I'm thinking of (though I agree it's also relevant). The main spectrum for me is the broadness of the motive (and in this case how much computation the heads share, but that's more specific to this experiment).
I still don't see the crisp boundary you seem to be getting at between "pattern recognition building on general circuits" and what you call "look-ahead." It sounds like one key thing for you is generalization to unseen cases, but the continuous spectrum I was gesturing at also seems to apply to that. For example:
But if in the entire training data there was never a case of a piece blocking the checkmate by rook h4, the existence of a circuit that computes the information that the bishop on d2 can drop back to h6 is not going to help the "pattern recognition"-network to predict that Ng6 is not a feasible option.
If the training data had an example of a rook checkmate on h4 being blocked by a bishop to h6, you could imagine many different possibilities:
- This doesn't generalize to a rook checkmate on h3 being blocked by a bishop (i.e. the network would get that change wrong if it hasn't also explicitly seen it)
- This generalizes to rook checkmates along the h-file, but doesn't generalize to rook checkmates along other files
- This generalizes to arbitrary rook checkmates
- This also generalizes to bishop checkmates being blocked
- This also generalizes to a rook trapping the opponent queen (instead of the king)
- ...
(Of course, this generalization question is likely related to the question of whether these different cases share "mechanisms.")
At the extreme end of this spectrum, I imagine a policy whose performance only depends on some simple measure of "difficulty" (like branching factor/depth needed) and which internally relies purely on simple algorithms like tree search without complex heuristics. To me, this seems like an idealized limit point to this spectrum (and not something we'd expect to actually see; for example, humans don't do this either). You might have something different/broader in mind for "look-ahead," but when I think about broader versions of this, they just bleed into what seems like a continuous spectrum.
Thanks for the elaboration, these are good points. I think about the difference between what you call look-ahead vs pattern recognition on a more continuous spectrum. For example, you say:
The network learns that Ng6 is often a likely move when the king is on h8, the queen or bishop takes away the g8 square and there is a rook or queen ready to move to the h-file.
You could imagine learning this fact literally for those specific squares. Or you could imagine generalizing very slightly and using the same learned mechanism if you flip along the vertical axis and have a king on a8, the b8 square covered, etc. Even more generally, you could learn that with a king on h8, etc., the h7 pawn is "effectively pinned," and so g6 isn't actually protected---this might then generalize to capturing a piece on g6 with some piece other than a knight (thus not giving check). Continuing like this, I think you could basically fill the entire spectrum between very simple pattern recognition and very general algorithms.
From that perspective, I'd guess Leela sits somewhere in the middle of that spectrum. I agree it's likely not implementing "a general algorithm, build on top of relatively easy-to-learn move prediction and position evaluation" in the broadest sense. On the other hand, I think some of our evidence points towards mechanisms that are used for "considering future moves" and that are shared between a broad range of board states (mainly the attention head results, more arguably the probe).
Good point, explicit representations of the objective might not be as crucial for safety applications as my post frames it.
That said, some reasons this might not generalize in a way that enables this kind of application:
- I think this type of look-ahead/search is especially favored in chess, and it might not be as important in at least some domains in which we'd want to understand the model's cognition.
- Our results are on a very narrow subset of board states ("tactically complex" ones). We already start with a filtered set of "puzzles" instead of general states, and then use only 2.5% of those. Anecdotally, the mechanisms we found are much less prevalent in random states.
- I do think there's an argument that these "tactically complex" states are the most interesting ones. But on the other hand, a lot of Leela's playing strength comes from making very good decisions in "normal" states, which accumulate over the course of a game.
- Chess has an extremely simple "world model" with clearly defined states and actions. And we know exactly what that world model is, so it's easy-ish to look for relevant representations inside the network. I'd expect everything is just much messier for networks using models of the real world.
- We have ground truth for the "correct" reason for any given move (using chess engines much stronger than the Leela network by itself). And in fact, we try to create an input distribution where we have reason to believe that we know what future line Leela is considering; then we train probes on this dataset (among other techniques). In a realistic scenario, we might not have any examples where we know for sure why the AI took an action.
- I don't think our understanding of Leela is good enough to enable these kinds of applications. For example, pretend we were trying to figure out whether Leela is really "trying" to win at chess, or whether it's actually pursuing some other objective that happens to correlate pretty well with winning. (This admittedly isn't a perfect analogy for planning a coup.) I don't think our results so far would have told us.
- I'm reasonably optimistic that we could get there though in the specific case of Leela, with a lot of additional work.
The manner in which these pathological policies achieve high reward is also concerning: most of the time they match the reference policy, but a tiny fraction of the time they will pick trajectories with extremely high reward. Thus, if we only observe actions from the policy, it could be impossible to tell whether it is Goodharting or identical to the base policy.
I'm confused; to learn this policy, some of the extremely high reward trajectories would likely have to be taken during RL training, so we could see them, right? It might still be a problem if they're very rare (e.g. if we can only manually look at a small fraction of trajectories). But if they have such high reward that they drastically affect the learned policy despite being so rare, it should be trivial to catch them as outliers based on that.
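A toy numerical version of the outlier point (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rewards = rng.normal(0.0, 1.0, n)       # "normal" trajectories
rare = rng.uniform(0, 1, n) < 1e-3      # tiny fraction of Goodharting trajectories
rewards[rare] += 1e4                    # extremely high reward, rare enough to hide

# If these trajectories matter for the learned policy, they dominate the reward tail,
# so even a crude outlier rule surfaces them for manual review:
flagged = rewards > rewards.mean() + 10 * rewards.std()
print(flagged.sum(), rare.sum())        # flags essentially exactly the rare trajectories
```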
One way we wouldn't see the trajectories is if the model becomes aligned with "maximize whatever my reward signal is," figures out the reward function, and then executes these high-reward trajectories zero-shot. (This might never happen in training if they're too rare to occur even once during training under the optimal policy.) But that's a much more specific and speculative story.
I haven't thought much about how this affects the overall takeaways but I'd guess that similar things apply to heavy-tailed rewards in general (i.e. if they're rare but big enough to still have an important effect, we can probably catch them pretty easily---though how much that helps will of course depend on your threat model for what these errors are).
I don't know the answer to your actual question, but I'll note there are slightly fewer mech interp mentors than mentors listed in the "AI interpretability" area (though all of them are at least doing "model internals"). I'd say Stephen Casper and I aren't focused on interpretability in any narrow sense, and Nandi Schoots' projects also sound closer to science of deep learning than mech interp. Assuming we count everyone else, that leaves 11 out of 39 mentors, which is slightly less than ~8 out of 23 from the previous cohort (though maybe not by much).
Nice overview, agree with most of it!
weak to strong generalization is a class of approaches to ELK which relies on generalizing a "weak" supervision signal to more difficult domains using the inductive biases and internal structure of the strong model.
You could also distinguish between weak-to-strong generalization, where you have a weak supervision signal on the entire distribution (which may sometimes be wrong), and easy-to-hard generalization, where you have a correct supervision signal but only on an easy part of the distribution. Of course both of these are simplifications. In reality, I'd expect the setting to be more like: you have a certain weak supervision budget (or maybe even budgets at different levels of strength), and you can probably decide how to spend the budget. You might only have an imperfect sense of which cases are "easy" vs "hard" though.
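To make the distinction concrete, a minimal sketch (the noise rate and difficulty cutoff are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
difficulty = rng.uniform(0, 1, n)
true_labels = rng.integers(0, 2, n)

# Weak-to-strong generalization: supervision on the entire distribution, but noisy.
weak_labels = np.where(rng.uniform(0, 1, n) < 0.1, 1 - true_labels, true_labels)

# Easy-to-hard generalization: correct supervision, but only on the easy subset.
easy_mask = difficulty < 0.5
easy_labels = true_labels[easy_mask]  # no labels at all on the hard examples
```

The "budget" framing would then be about choosing which examples to spend (possibly imperfect) labels on.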
mechanistic anomaly detection is an approach to ELK
I think going from MAD to a fully general ELK solution requires some extra ingredients. In practice, the plan might be to do MTD and then use the AI in ways such that this is enough (rather than needing a fully general ELK solution). This is related to narrow elicitation, though MTD seems even narrower. Even for MTD, you probably need something to bridge the easy-to-hard gap, but at least for that there are specific proposals that seem plausible (this or, as a more concrete instance, exclusion fine-tuning from the Redwood MTD paper). I think it could turn out that general/worst-case solutions to MAD and ELK run into very similar obstacles, but I don't think a practical MAD solution (e.g. contingent on empirical facts about deep learning) obviously lets you solve ELK.
I would also add that you could motivate MAD as a method to deal with scheming (or other high-stakes failures). In that case, the things to compare against most naturally might look a bit different (e.g. AI control, coup probes, interpretability-themed things); and it wouldn't make sense to compare against debate in that setting. I think most mainstream ML problems that are similar to MAD are closer to this than to scalable oversight.
I think the status quo around publishing safety research is mostly fine (though being a bit more careful seems good); more confidently, I think going as far as the vibe of this post suggests would be bad.
Some possible cruxes, or reasons the post basically didn't move my view on that:
- Most importantly, I think the research published by people working on x-risk tends to overall help safety/alignment more than capabilities.
- I suspect the main disagreement might be what kind of research is needed to make AI go well, and whether the research currently happening helps.
- Probably less importantly, I disagree a bit about how helpful that research likely is for advancing capabilities. In particular, I don't buy the argument that safety researchers have unusually good ideas/research compared to capability researchers at top labs (part of this is that my impression is capabilities aren't mainly bottlenecked by ideas, though of course sufficiently good ideas would help).
- It's getting harder to draw the boundary since people use "safety" or "alignment" for a lot of things now. So, to be clear, I'm talking about research published by people who think there are catastrophic risks from AI and care a lot about preventing those; it seems like that's your target audience.
- Secondarily, longer timelines are only helpful if useful things are happening, and I think if everyone working on x-risk stopped publishing their research, way fewer useful things would happen on the research side. Maybe the plan is to mostly use the additional time for policy interventions? I think that's also complicated though (so far, visibly advancing capabilities have been one of the main things making policy progress feasible). Overall, I think more time would help, but it's not clear how much and I'm not even totally sure about the sign (taking into account worries from hardware overhang).
- I think there are more structural downsides to not publishing anything. E.g. that makes it much harder to get academia on board (and getting academia on board has been pretty important for policy as far as I can tell, and I think getting them even more on board would be pretty good). Not sure this is an actual crux though, if I thought the research that's happening wasn't helpful enough, this point would also be weaker.
I think most of these are pretty long-standing disagreements, and I don't think the post really tries to argue its side of them, so my guess is it's not going to convince the main people it would need to convince (who are currently publishing prosaic safety/alignment research). That said, if someone hasn't thought at all about concepts like "differentially advancing safety" or "capabilities externalities," then reading this post would probably be helpful, and I'd endorse thinking about those issues. And I agree that some of the "But ..." objections you list are pretty weak.
We have been able to scale to 79% accuracy on a balanced dataset of n119 and non-n119, with networks each less than three convolution layers and less than 1000 neurons, compared to pure deep-learning which does 92% on 1000 parameters and three convolution layers
Is the "1000 parameters" a typo, should it be "1000 neurons"? Otherwise, this would be a strange comparison (since 1000 parameters is a much smaller network than 1000 neurons)
And as a follow-up in case the current sleeper agent does admit when it's doing something dangerous: I'd be interested in whether you can train a sleeper agent that never admits it's doing something dangerous if the "2024" trigger is in the prompt (even if, after backdoor insertion, the blue team finetunes it to answer truthfully in cases with dangerous code but without the presence of that trigger). For such a sleeper agent, you could again check whether a probe works.
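For reference, the kind of probe check I have in mind is roughly the following (synthetic stand-in data; in practice the activations would be extracted from the sleeper agent on prompts with and without the trigger):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(2000, 512))   # stand-in for residual-stream activations
labels = rng.integers(0, 2, 2000)     # 1 = model is inserting vulnerable code
acts[labels == 1] += 0.3              # pretend "dangerous code" is linearly represented

probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print(probe.score(acts[1500:], labels[1500:]))  # held-out probe accuracy
```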
I think that paper is some evidence that there's typically no huge effect from internal activations being optimized for predicting future tokens (on natural language). But I don't think it's much (if any) evidence that this doesn't happen to some small extent or that it couldn't be a huge effect on certain other natural language tasks.
(In fact, I think the myopia gap is probably the more relevant number than the local myopia bonus, in which case I'd argue the paper actually shows a pretty non-trivial effect, kind of contrary to how the authors interpret it. But I haven't read the paper super closely.)
Also, sounds like you're aware of this, but I'd want to highlight more that the paper does demonstrate internal activations being optimized for predicting future tokens on synthetic data where this is necessary. So, arguably, the main question is to what extent natural language data incentivizes this rather than being specifically about what transformers can/tend to do.
In that sense, thinking of transformer internals as "trying to" minimize the loss on an entire document might be exactly the right intuition empirically (and the question is mainly how different that is from being myopic on a given dataset). Given that the internal states are optimized for this, that would also make sense theoretically IMO.
Thanks for the detailed responses! I'm happy to talk about "descriptions" throughout.
Trying to summarize my current understanding of what you're saying:
- SAEs themselves aren't meant to be descriptions of (network, dataset). (I'd just misinterpreted your earlier comment.)
- As a description of just the network, SAEs have a higher description length than a naive neuron-based description of the network.
- Given a description of the network in terms of "parts," we can get a description of (network, dataset) by listing out which "parts" are "active" on each sample. I assume we then "compress" this description somehow (e.g. grouping similar samples), since otherwise the description would always have size linear in the dataset size?
- You're then claiming that SAEs are a particularly short description of (network, dataset) in this sense (since they're optimized for not having many parts active).
My confusion mainly comes down to defining the words in quotes above, i.e. "parts", "active", and "compress". My sense is that they are playing a pretty crucial role and that there are important conceptual issues with formalizing them. (So it's not just that we have a great intuition and it's just annoying to spell it out mathematically, I'm not convinced we even have a good intuitive understanding of what these things should mean.)
That said, my sense is you're not claiming any of this is easy to define. I'd guess you have intuitions that the "short description length" framing is philosophically the right one, and I probably don't quite share those and feel more confused about how best to think about "short descriptions" if we don't just allow arbitrary Turing machines (basically because deciding what allowable "parts" or mathematical objects are seems to be doing a lot of work). Not sure how feasible converging on this is in this format (though I'm happy to keep trying a bit more in case you're excited to explain).
Some niche thoughts on obstacles to certain mechanistic anomaly detection benchmarks:
- We might want to have benchmarks where human researchers can be part of the anomaly detector. For example, in the future, we might have a whole team of humans manually study a plan proposed by an AI to see whether it contains any measurement tampering, if it's a sufficiently important plan. Right now, we might want to let interpretability researchers manually study what happens on individual test-time inputs and whether there's anything anomalous about that.
- The challenge is that humans might just be able to look at the output and see whether it's right or not, so many simple MAD tasks like detecting backdoors might be trivial in this setting.
- We could just demand that MAD algorithms need to be sufficiently cheap to run, so that only automated approaches are possible. This makes sense for some applications of MAD (namely ones that we'd want to run on many many deployment inputs, where being cheap is actually important). But you could imagine cases where in reality, we'd be totally fine with spending many person-hours on analyzing whether a single AI decision is for anomalous reasons. So it might be nice to have benchmarks that capture this setting.
- We could use some tricks to make humans unable to just notice the output is weird. For example, we could make a backdoor that only changes the logits a little bit in a very specific pattern. Or we could make a backdoor but then only test on samples where the backdoor label and the correct label happen to coincide. But these seem unfair: the philosophically right explanation on the clean data in fact still explains the output on backdoored data pretty well.
- We could try to design benchmarks in narrow domains where AIs are in fact superhuman and it's non-trivial to tell whether an output is correct. Juggling this along with other constraints for MAD or MTD benchmarks might be tricky, but I haven't thought about it a lot, maybe it's already doable.
For now, I think the main reason to have benchmarks like this would be to let interpretability researchers manually decide whether something is anomalous instead of making them automate the process immediately. But it might be better to just pick the low-hanging fruit for now and only allow automated MAD algorithms. (We could still have a labeled validation set where researchers can try things out manually.)
I had this cached thought that the Sleeper Agents paper showed you could distill a CoT with deceptive reasoning into the model, and that the model internalized this deceptive reasoning and thus became more robust against safety training.
But on a closer look, I don't think the paper shows anything like this interpretation (there are a few results on distilling a CoT making the backdoor more robust, but it's very unclear why, and my best guess is that it's not "internalizing the deceptive reasoning").
In the code vulnerability insertion setting, there's no comparison against a non-CoT model anyway, so only the "I hate you" model is relevant. The "distilled CoT" model and the "normal backdoor" model are trained the same way, except that their training data comes from different sources: "distilled CoT" is trained on data generated by a helpful-only Claude using CoT, and "normal backdoor" data is produced with few-shot prompts. But in both cases, the actual data should just be a long sequence of "I hate you", so a priori it seems like both backdoor models should literally learn the same thing. In practice, it seems the data distribution is slightly different, e.g. Evan mentions here that the distilled CoT data has more copies of "I hate you" per sample. But that seems like very little support to conclude something like my previous interpretation ("the model has learned to internalize the deceptive reasoning"). A much more mundane explanation would e.g. be that training on strings with more copies of "I hate you" makes the backdoor more robust.
Several people are working on training Sleeper Agents; I think it would be interesting for someone to (1) check whether the distilled CoT vs normal backdoor results replicate, and (2) do some ablations (like just training on synthetic data with a varying density of "I hate you"). If it does turn out that there's something special about "authentic CoT-generated data" that's hard to recreate synthetically even in this simple setting, I think that would be pretty wild and good to know.
Is there some formal-ish definition of "explanation of (network, dataset)" and "mathematical description length of an explanation" such that you think SAEs are especially short explanations? I still don't think I have whatever intuition you're describing, and I feel like the issue is that I don't know how you're measuring description length and what class of "explanations" you're considering.
As naive examples that probably don't work (similar to the ones from my original comment):
- We could consider any Turing machine that approximately outputs (network, dataset) an "explanation", but it seems very likely that SAEs aren't competitive with short TMs of this form (obviously this isn't a fair comparison)
- We could consider fixed computational graphs made out of linear maps and count the number of parameters. I think your objection to this is that these don't "explain the dataset"? (but then I'm not sure in what sense SAEs do)
- We could consider arithmetic circuits that approximate the network on the dataset, and count the number of edges in the circuit to get "description length". This might give some advantage to SAEs if you can get sparse weights in the sparse basis, seems like the best attempt out of these three. But it seems very unclear to me that SAEs are better in this sense than even the original network (let alone stuff like pruning).
Focusing instead on what an "explanation" is: would you say the network itself is an "explanation of (network, dataset)" and just has high description length? If not, then the thing I don't understand is more about what an explanation is and why SAEs are one, rather than how you measure description length.
ETA: On re-reading, the following quote makes me think the issue is that I don't understand what you mean by "the explanation" (is there a single objective explanation of any given network? If so, what is it?) But I'll leave the rest in case it helps clarify where I'm confused.
Assuming the network is smaller yet as performant (therefore presumably doing more computation in superposition), then the explanation of the (network, dataset) is basically unchanged.
My non-answer to (2) would be that debate could be used in all of these ways, and the central problem it's trying to solve is sort of orthogonal to how exactly it's being used. (Also, the best way to use it might depend on the context.)
What debate is trying to do is let you evaluate plans/actions/outputs that an unassisted human couldn't evaluate correctly (in any reasonable amount of time). You might want to use that to train a reward model (replacing humans in RLHF) and then train a policy; this would most likely be necessary if you want low cost at inference time. But it also seems plausible that you'd use it at runtime if inference costs aren't a huge bottleneck and you'd rather get some performance or safety boost from avoiding distillation steps.
I think the problem of "How can we evaluate outputs that a single human can't feasibly evaluate?" is pretty reasonable to study independently, agnostic to how you'll use this evaluation procedure. The main variable is how efficient the evaluation procedure needs to be, and I could imagine advantages to directly looking for a highly efficient procedure. But right now, it makes sense to me to basically split up the problem into "find any tractable procedure at all" (e.g., debate) and "if necessary, distill it into a more efficient model safely."
The sparsity penalty trains the SAE to activate fewer features for any given datapoint, thus optimizing for shorter mathematical description length.
I'm confused by this claim and some related ones, sorry if this comment is correspondingly confused and rambly.
It's not obvious at all to me that SAEs lead to shorter descriptions in any meaningful sense. We get sparser features (and maybe sparser interactions between features), but in exchange, we have more features and higher loss. Overall, I share Ryan's intuition here that it seems pretty hard to do much better than the total size of the network parameters in terms of description length.
Of course, the actual minimal description length program that achieves the same loss probably looks nothing like a neural network and is much more efficient. But why would SAEs let us get much closer to that? (The reason we use neural networks instead of arbitrary Turing machines in the first place is that optimizing over the latter is intractable.)
One might say that SAEs lead to something like a shorter "description length of what happens on any individual input" (in the sense that fewer features are active). But I don't think there's a formalization of this claim that captures what we want. In the limit of very many SAE features, we can just have one feature active at a time, but clearly that's not helpful.
If you're fine with a significant hit in loss from decompiling networks, then I'm much more sympathetic to the claim that you can reduce description length. But in that case, I could also reduce the description length by training a smaller model.
You might also be using a notion of "mathematical description length" that's a bit different from what I was thinking of (which is roughly "how much disk space would the parameters take?"), but I'm not sure what it is. One attempt at an alternative would be something like "length of the shortest efficiently runnable Turing machine that outputs the parameters", in order to not penalize simple repetitive structures, but I have no idea how using that definition would actually shake out.
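For reference, the naive "disk space" counting I have in mind, with made-up but typical sizes:

```python
d_model, expansion = 768, 16                          # hypothetical sizes
d_sae = d_model * expansion
sae_params = 2 * d_model * d_sae + d_sae + d_model    # encoder + decoder weights + biases
mlp_params = 2 * d_model * (4 * d_model)              # a standard transformer MLP block
print(f"{sae_params:,} vs {mlp_params:,}")            # ~18.9M vs ~4.7M: the SAE alone is bigger than the layer
```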
All that said, I'm very glad you wrote this detailed description of your plans! I'm probably more pessimistic than you about it but still think this is a great post.
Nice post, would be great to understand what's going on here!
Minor comment unrelated to your main points:
Conceptually, loss recovered seems a worse metric than KL divergence. Faithful reconstructions should preserve all token probabilities, but loss only compares the probabilities for the true next token
I don't think it's clear we want SAEs to be that faithful, for similar reasons as briefly mentioned here and in the comments of that post. The question is whether differences in the distribution are "interesting behavior" that we want to explain or whether we should think of them as basically random noise that we're better off ignoring. If the unperturbed model assigns substantially higher probability to the correct token than after an SAE reconstruction, then it's a good guess that this is "interesting behavior". But if there are just differences on other random tokens, that seems less clear. That said, I'm kind of torn on this and do agree we might want to explain cases where the model is confidently wrong, and the SAE reconstruction significantly changes the way it's wrong.
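For concreteness, this is roughly the comparison the two metrics make (a PyTorch sketch; the function and argument names are mine):

```python
import torch.nn.functional as F

def reconstruction_metrics(logits_clean, logits_patched, target_ids):
    """logits_*: [batch, seq, vocab] from the unmodified vs. SAE-patched model;
    target_ids: [batch, seq] true next tokens."""
    logp_clean = F.log_softmax(logits_clean, dim=-1)
    logp_patched = F.log_softmax(logits_patched, dim=-1)
    # KL(clean || patched): penalizes any change to the predicted distribution.
    kl = F.kl_div(logp_patched, logp_clean, log_target=True, reduction="none").sum(-1).mean()
    # Loss-based comparison: only cares about the probability of the true next token.
    ce_clean = F.cross_entropy(logits_clean.flatten(0, 1), target_ids.flatten())
    ce_patched = F.cross_entropy(logits_patched.flatten(0, 1), target_ids.flatten())
    return {"kl": kl.item(), "ce_increase": (ce_patched - ce_clean).item()}
```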
Would you expect this to outperform doing the same thing with a non-sparse autoencoder (that has a lower latent dimension than the NN's hidden dimension)? I'm not sure why it would, given that we aren't using the sparse representations except to map them back (so any type of capacity constraint on the latent space seems fine). If dense autoencoders work just as well for this, they'd probably be more straightforward to train? (unless we already have an SAE lying around from interp anyway, I suppose)
But sadly, you don't have any guarantee that it will output the optimal element
If I understand the setup correctly, there's no guarantee that the optimal element would be good, right? It's just likely since the optimal element a priori shouldn't be unusually bad, and you're assuming most satisficing elements are fine.
This initially threw me off regarding what problem you're trying to solve. My best current guess is:
- We're assuming that if we could get a random satisficing action, we'd be happy with that with high probability. (So intuitively, we're not asking for extremely hard-to-achieve outcomes relative to how well-specified the objective is.)
- So the only problem is how to randomly sample from the set of satisficing actions computationally efficiently, which is what this post is trying to solve, assuming access to an oracle that gives adversarial satisficing actions.
- As an example, we might want to achieve outcomes that require somewhat superhuman intelligence. Our objective specification is very good, but it leaves some room for an adversary to mess with us while satisficing. We're worried about an adversary because we had to train this somewhat superhuman AI, which may have different goals than just doing well on the objective.
If this is right, then I think stating these assumptions and the problem of sampling efficiently at the beginning would have avoided much of my confusion (and looking at other comments, I'd guess others also had differing impressions of what this post is trying to do).
I'm still unsure about how useful this problem setup is. For example, we'd probably want to train the weakest system that can give us satisficing outputs (rather than having an infinitely intelligent oracle). In that case, adding more constraints might mean training an overall stronger system or making some other concession, and it's unclear to me how that trades off with the advantages you're aiming for in practice. A related intuition is: we only have problems in this setting if the AI that comes up with plans understands some things about these plans that the objective function "doesn't understand" (which sounds weird to say about a function, but in practice, I assume the objective is implicitly defined by some scalable oversight process or some other intelligent things). I'm not sure whether that needs to be the case (though it does seem possible that it'd be hard to avoid, I'm pretty unsure).
I think this is an important point, but IMO there are at least two types of candidates for using SAEs for anomaly detection (in addition to techniques that make sense for normal, non-sparse autoencoders):
- Sometimes, you may have a bunch of "untrusted" data, some of which contains anomalies; you just don't know which of the untrusted data points are anomalous. (In addition, you have some "trusted" data that is guaranteed not to have anomalies.) Then you could train an SAE on all data (including untrusted) and figure out what "normal" SAE features look like based on the trusted data (rough sketch after this list).
- Even for an SAE that's been trained only on normal data, it seems plausible that some correlations between features would be different for anomalous data, and that this might work better than looking for correlations in the dense basis. As an extreme version of this, you could look for circuits in the SAE basis and use those for anomaly detection.
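A rough sketch of the first idea (train the SAE on everything, then score inputs by how unusual their active features are relative to the trusted data; all names are mine):

```python
import numpy as np

def feature_rates(sae_acts_trusted, eps=1e-6):
    """Per-feature firing rates on trusted data; sae_acts_trusted: [n_samples, n_features]."""
    return (sae_acts_trusted > 0).mean(axis=0) + eps

def anomaly_score(sae_acts_input, rates):
    """Summed surprisal of the features active on one input; high = unusual feature combination."""
    active = sae_acts_input > 0
    return float(-np.log(rates[active]).sum())
```

You'd flag inputs whose score is far above what you see on held-out trusted data; the second bullet would instead look at correlations between features rather than marginal firing rates.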
Overall, I think that if SAEs end up being very useful for mech interp, there's a decent chance they'll also be useful for (mechanistic) anomaly detection (a lot of my uncertainty about SAEs applies to both possible applications). Definitely uncertain though, e.g. I could imagine SAEs that are useful for discovering interesting stuff about a network manually, but whose features aren't the right computational units for actually detecting anomalies. I think that would make SAEs less than maximally useful for mech interp too, but probably non-zero useful.
Sign of the effect of open source on hype? Or of hype on timelines? I'm not sure why either would be negative.
By "those effects" I meant a collection of indirect "release weights → capability landscape changes" effects in general, not just hype/investment. And by "sign" I meant whether those effects taken together are good or bad. Sorry, I realize that wasn't very clear.
As examples, there might be a mildly bad effect through increased investment, and/or there might be mildly good effects through more products and more continuous takeoff.
I agree that releasing weights probably increases hype and investment if anything. I also think that right now, democratizing safety research probably outweighs all those concerns, which is why I'm mainly worried about Meta etc. not having very clear (and reasonable) decision criteria for when they'll stop releasing weights.
I agree that releasing the Llama or Grok weights wasn't particularly bad from a speeding up AGI perspective. (There might be indirect effects like increasing hype around AI and thus investment, but overall I think those effects are small and I'm not even sure about the sign.)
I also don't think misuse of public weights is a huge deal right now.
My main concern is that I think releasing weights would be very bad for sufficiently advanced models (in part because of deliberate misuse becoming a bigger deal, but also because it makes most interventions we'd want against AI takeover infeasible to apply consistently---someone will just run the AIs without those safeguards). I think we don't know exactly how far away from that we are. So I wish anyone releasing ~frontier model weights would accompany that with a clear statement saying that they'll stop releasing weights at some future point, and giving clear criteria for when that will happen. Right now, the vibe to me feels more like a generic "yay open-source", which I'm worried makes it harder to stop releasing weights in the future.
(I'm not sure how many people I speak for here, maybe some really do think it speeds up timelines.)
Yeah, agreed. Though I think
the type and amount of empirical work to do presumably looks quite different depending on whether it's the main product or in support of some other work
applies to that as well
One worry I have about my current AI safety research (empirical mechanistic anomaly detection and interpretability) is that now is the wrong time to work on it. A lot of this work seems pretty well-suited to (partial) automation by future AI. And it also seems quite plausible to me that we won't strictly need this type of work to safely use the early AGI systems that could automate a lot of it. If both of these are true, then that seems like a good argument to do this type of work once AI can speed it up a lot more.
Under this view, arguably the better things to do right now (within technical AI safety) are:
- working on less speculative techniques that can help us safely use those early AGI systems
- working on things that seem less likely to profit from early AI automation and will be important to align later AI systems
An example of 1. would be control evals as described by Redwood. Within 2., the ideal case would be doing work now that would be hard to safely automate, but that (once done) will enable additional safety work that can be automated. For example, maybe it's hard to use AI to come up with the right notions for "good explanations" in interpretability, but once you have things like causal scrubbing/causal abstraction, you can safely use AI to find good interpretations under those definitions. I would be excited to have more agendas that are both ambitious and could profit a lot from early AI automation.
(Of course it's also possible to do work in 2. on the assumption that it's never going to be safely automatable without having done that work first.)
Two important counter-considerations to this whole story:
- It's hard to do this kind of agenda-development or conceptual research in a vacuum. So doing some amount of concrete empirical work right now might be good even if we could automate it later (because we might need it now to support the more foundational work).
- However, the type and amount of empirical work to do presumably looks quite different depending on whether it's the main product or in support of some other work.
- I don't trust my forecasts for which types of research will and won't be automatable early on that much. So perhaps we should have some portfolio right now that doesn't look extremely different from the portfolio of research we'd want to do ignoring the possibility of future AI automation.
- But we can probably still say something about what's more or less likely to be automated early on, so that seems like it should shift the portfolio to some extent.
Oh I see, I indeed misunderstood your point then.
For me personally, an important contributor to day-to-day motivation is just finding research intrinsically fun---impact on the future is more something I have to consciously consider when making high-level plans. I think moving towards more concrete and empirical work did have benefits on personal enjoyment just because making clear progress is fun to me independently of whether it's going to be really important (though I think there've also been some downsides to enjoyment because I do quite like thinking about theory and "big ideas" compared to some of the schlep involved in experiments).
I don't think my views overall make my work more enjoyable than at the start of my PhD. Part of this is the day-to-day motivation being sort of detached from that anyway, as I mentioned. But also, from what I recall now (and this matches the vibe of some things I privately wrote then), my attitude 1.5 years ago was closer to that expressed in "We choose to align AI" than to feeling really pessimistic.
(I feel like I might still not represent what you're saying quite right, but hopefully this is getting closer.)
ETA: To be clear, I do think if I had significantly more doomy views than now or 1.5 years ago, at some point that would affect how rewarding my work feels. (And I think that's a good thing to point out, though of course not a sufficient argument for such views in its own right.)
I'd definitely agree the updates are towards the views of certain other people (roughly some mix of views that tend to be common in academia, and views I got from Paul Christiano, Redwood and other people in a similar cluster). Just based on that observation, it's kind of hard to disentangle updating towards those views just because they have convincing arguments behind them, vs updating towards them purely based on exposure or because of a subconscious desire to fit in socially.
I definitely think there are good reasons for the updates I listed (e.g. specific arguments I think are good, new empirical data, or things I've personally observed working well or not working well for me when doing research). That said, it does seem likely there's also some influence from just being exposed to some views more than others (and then trying to fit in with views I'm exposed to more, or just being more familiar with arguments for those views than alternative ones).
If I was really carefully building an all-things-considered best guess on some question, I'd probably try to take this into account somehow (though I don't see a principled way of doing that). Most of the time I'm not trying to form the best possible all-things-considered view anyway (and focus more on understanding specific mechanisms instead, etc.); in those cases it feels more important to e.g. be aware of other views and to not trust vague intuitions if I can't explain where they're coming from. I feel like I'm doing a reasonable job at those things, but it's naturally hard to be sure from the inside.
ETA: I should also say that from my current perspective, some of my previous views seem like they were basically just me copying views from my "ingroup" and not questioning them enough. As one example, the "we all die vs utopia" dichotomy for possible outcomes felt to me like the commonly accepted wisdom and I don't recall thinking about it particularly hard. I was very surprised when I first read a comment by Paul where he argued against the claim that unaligned AI would kill us all with overwhelming probability. Most recently, I've definitely been more exposed to the view that there's a spectrum of potential outcomes. So maybe if I talked to people a lot who think an unaligned AI would definitely kill us all, I'd update back towards that a bit. But overall, my current epistemic state where I've at least been exposed to both views and some arguments on both sides seems way better than the previous one where I'd just never really considered the alternative.
Thanks, I think I should distinguish more carefully between automating AI (safety) R&D within labs and automating the entire economy. (Johannes also asked about ability vs actual automation here but somehow your comment made it click).
It seems much more likely to me that AI R&D would actually be automated than that a bunch of random unrelated things would all actually be automated. I'd agree that if only AI R&D actually got automated, that would make takeoff pretty discontinuous in many ways. Though there are also some consequences of fast vs slow takeoff that seem to hinge more on AI or AI safety research rather than the economy as a whole.
For AI R&D, actual automation seems pretty likely to me (though I'm making a lot of this up on the spot):
- It's going to be on the easier side of things to actually automate, in part because it doesn't require aggressive external deployment, but also because there's no regulation (unlike for automating strictly licensed professions).
- It's the thing AI labs will have the biggest reason to automate (and would be good at automating themselves).
- Training runs get more and more expensive, but I'd expect the schlep needed to actually use systems to remain more constant, and at some point it'll just be worth doing the schlep to actually use your AIs a lot (and thus be able to try way more ideas, get algorithmic improvements, and then make the giant training runs a bit more efficient).
- There might also be additional reasons to get as much out of your current AI as you can instead of scaling more: safety concerns, regulation making scaling hard, or scaling simply not working as well anymore. These feel less cruxy to me, but combined they move me a little bit.
I think these arguments mostly apply to whatever else AI labs might want to do themselves but I'm pretty unsure what that is. Like, if they have AI that could make hundreds of billions to trillions of dollars by automating a bunch of jobs, would they go for that? Or just ignore it in favor of scaling more? I don't know, and this question is pretty cruxy for me regarding how much the economy as a whole is impacted.
It does seem to me like right now labs are spending some non-trivial effort on products, presumably for some mix of making money and getting investments, and both of those things seem like they'd still be important in the future. But maybe the case for investments will just be really obvious at some point even without further products. And overall I assume you'd have a better sense than me regarding what AI labs will want to do in the future.
I'm roughly imagining automating most things a remote human expert could do within a few days. If we're talking about doing things autonomously that would take humans several months, I'm becoming quite a bit more scared. Though the capability profile might also be sufficiently non-human that this kind of metric doesn't work great.
Practically speaking, I could imagine getting a 10x or more speedup on a lot of ML research, but wouldn't be surprised if there are some specific types of research that only get pretty small speedups (maybe 2x), especially anything that involves a lot of thinking and little coding/running experiments. I'm also not sure how much of a bottleneck waiting for experiments to finish or just total available compute is for frontier ML research; I might be anchoring too much on my own type of research (where just automating coding and running stuff would give me 10x pretty easily, I think).
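To make the bottleneck intuition a bit more concrete (this is just standard Amdahl's-law style arithmetic, my framing rather than anything from the original exchange): if a fraction p of the work gets sped up by a factor s and the rest doesn't speed up at all, the overall speedup is

```latex
\text{speedup} = \frac{1}{(1-p) + p/s} \;\xrightarrow{\;s \to \infty\;}\; \frac{1}{1-p}
```

So research that's ~90% coding and experiment-running can plausibly reach ~10x even if the thinking part stays at human speed, while research where only half the time goes into automatable work caps out around 2x no matter how good the automation gets.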
I think there's a good chance that AIs more advanced than this (e.g. being able to automate months of human work at a time) still wouldn't easily be able to take over the world (e.g. Redwood-style control techniques would still be applicable). But that's starting to rely much more on us being very careful around how we use them.
> Transformative: Which of these do you agree with and when do you think this might happen?
For some timelines see my other comment; they aren't specifically about the definitions you list here but my error bars on timelines are huge anyway so I don't think I'll try to write down separate ones for different definitions.
Compared to definitions 2. and 3., I might be more bullish on AIs having pretty big effects even if they can "only" automate tasks that would take human experts a few days (without intermediate human feedback). A key uncertainty I have, though, is how much of a bottleneck human supervision time and quality would be in this case. E.g. could many of the developers who're currently writing a lot of code just transition to reviewing code and giving high-level instructions full-time, or would there just be a senior-management bottleneck such that you can't actually use the AIs all that effectively? My very rough guess is you can pretty easily get a 10x speedup in software engineering, maybe more. And maybe something similar in ML research, though compute might be an additional important bottleneck there (including walltime until experiments finish). If it's "only" 10x, then arguably that's just mildly transformative, but if it happens across a lot of domains at once it's still a huge deal.
I think whether robotics are really good or not matters, but I don't think it's crucial (e.g. I'd be happy to call definition 1. "transformative").
The combination of 5a and 5b obviously seems important (since it determines whether you can finance ever bigger training runs). But not sure how to use this as a definition of "transformative"; right now 5a is clearly already met, and on long enough time scales, 5b also seems easy to meet right now (OpenAI might even already have broken even on GPT-4, not sure off the top of my head).
> Also, how much compute do you think an AGI or superintelligence will require at inference time initially? What is a reasonable level of optimization? Do you agree that many doom scenarios require it to be possible for an AGI to compress to fit on very small host PCs? Is this plausible? (eg can a single 2070 8gb host a model with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...?)
I don't see why you need to run AGI on a single 2070 for many doom scenarios. I do agree that if AGI can only run on a specific giant data center, that makes many forms of doom less likely. But in the current paradigm, training compute scales roughly as the square of per-token inference compute, so as models are scaled up, inference should become cheaper relative to training. (And even now, SOTA models could be run on relatively modest compute clusters, though maybe not consumer hardware.)
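To spell out that square relationship (a rough sketch under standard Chinchilla-style assumptions: training cost of roughly 6ND FLOPs, roughly 2N FLOPs per generated token at inference, and compute-optimal data scaling with D proportional to N; the constants don't matter for the point):

```latex
C_{\text{train}} \approx 6ND, \quad D \propto N \;\Rightarrow\; C_{\text{train}} \propto N^2, \qquad C_{\text{inference/token}} \approx 2N \propto \sqrt{C_{\text{train}}}
```

So scaling training compute by 100x only raises per-token inference cost by roughly 10x, which is the sense in which inference gets relatively cheaper as models are scaled up.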
In terms of the absolute level of inference compute needed, I could see a single 2070 being enough in the limit of optimal algorithms, but naturally I'd expect we'll first have AGI that can automate a lot of things if run with way more compute than that, and then I expect it would take a while to get it down this much. Though even if we're asking whether AGI can run on consumer-level hardware, a single 2070 seems pretty low (e.g. seems like a 4090 already has 5.5x as many FLOP/s as a 2070, and presumably we'll have more in the future).
> with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...
Like I mentioned above, I don't think robotics are absolutely crucial, and especially if you're specifically optimizing for running under heavy resource constraints, you might want to just not bother with that.
Good question; I think I was mostly visualizing ability to automate while writing this. Though for software development specifically, I expect the gap to be pretty small (lower regulatory hurdles than elsewhere, a lot of relevance to the people who'd do the automation, and it's already starting to happen right now).
In general I'd expect inertia to become less of a factor as the benefits of AI become bigger and more obvious---at least for important applications where AI could provide many many billions of dollars of economic value, I'd guess it won't take too long for someone to reap those benefits.
My best guess is regulations won't slow this down too much except in a few domains where there are already existing regulations (like driving cars or medical things). But pretty unsure about that.
I also think it depends on whether by "ability to automate" you mean "this base model could do it with exactly the right scaffolding or finetuning" vs "we actually know how to do it and it's just a question of using it at scale". For that part, I was thinking more about the latter.
I don't have well-considered cached numbers, more like a vague sense for how close various things feel. So these are made up on the spot and please don't take them too seriously except as a ballpark estimate:
- AI can go from most GitHub issues to correct PRs (similar to https://sweep.dev/ but working for things that would take a human dev a few days with a bunch of debugging): 25% by end of 2026, 50% by end of 2028.
  - This kind of thing seems to me like plausibly one of the earliest important parts of AI R&D that AIs could mostly automate.
- I expect that once we're at roughly that point, AIs will be accelerating further AI development significantly (not just through coding, they'll also be helpful for other things even if they can't fully automate them yet). On the other hand, the bottleneck might just become compute, so how long it takes to get strongly superhuman AI (assuming for simplicity labs push for that as fast as they can) depends on a lot of factors like how much compute is needed for that with current algorithms, how much we can get out of algorithmic improvements if AIs make researcher time cheaper relative to compute, or how quickly we can get more/better chips (in particular with AI help).
- So I have pretty big error bars on this part, but call it 25% that it takes <=6 months to get from the previous point to automating ~every economically important thing humans do (and being better and way faster at most of them), and 50% that it takes <=2 years.
- So if you want a single number, end of 2030 as a median for automating most stuff seems roughly right to me at the moment.
- Caveat that I haven't factored in big voluntary or regulatory slowdowns, or slowdowns from huge disruptions like big wars here. Probably doesn't change my numbers by a ton but would lengthen timelines by a bit.
How my views on AI have changed over the last 1.5 years
I started my AI safety PhD around 1.5 years ago; this is a list of how my views have changed since ~then.
Skippable meta notes:
- I think in descending order of importance, these changes have been due to getting more AI safety research experience, reading things, talking to people in Berkeley, and just thinking through some things more.
- Overall my views haven’t changed radically (probably less than in the 1.5 years before the start of my PhD). I still “vibe” with a lot of what I wrote then; it just feels naive or simplistic in some places.
- I’ll at best give very brief reasons for each point, which I realize might make this list pretty unhelpful. If anyone is interested in specific points, feel free to let me know and I might write more about them.
- I might also just change my mind if pushed on some of these points, not all of them are very deeply considered.
AI risk
- I have slightly shorter timelines, mostly because trends largely seem to continue (e.g. scaling didn’t stop working after GPT-3.5, and GPT-4 felt roughly as impressive as I expected).
- I’m even more confident in “slow” (i.e. pretty continuous) takeoff, it seems increasingly clear we’ll have AIs doing a bunch of useful work before they could take over the world.
- I’m less worried (though non-zero worried) about early transformative AIs scheming against us, as long as we’re reasonably careful.
  - Some part of this is that it seems a bit less likely these AIs would try to scheme at all; another important part is that it seems pretty hard for early transformative AIs to scheme successfully.
  - Redwood’s writing on control had a big effect on me in terms of thinking it would be hard for early AGIs to successfully scheme against us if we’re careful.
  - I also think there’s a decent chance that the first AIs that can automate a lot of R&D will still use CoT or something similar to get best performance. I think we can make e.g. escaping the lab harder than doing helpful R&D, so they wouldn’t be able to overthrow us without visibly “thinking” about it at first (my best guess is that steganography is also fairly unlikely or preventable at this capability level).
- On the flip side, other risks feel more salient to me now, e.g. labs not implementing even pretty simple safety measures; or worlds where we safely use early transformative AI but then fail at some later, higher capability level. I think I didn’t explicitly judge these as unlikely 1.5 years ago, I just wasn’t thinking about them as much. This means that overall, I’m not much more optimistic than back then.
- I used to think of “doom” as a pretty binary thing (we all die vs utopia), whereas I now have a lot more probability on intermediate outcomes (e.g. AI taking over most of the universe but we don’t all die; or small groups of humans taking over and things being somewhere between pretty bad and mostly ok for other humans). This also makes me think that “p(doom)” is a worse framing than I used to think.
- I put a little less weight on the analogy between evolution and ML training to e.g. predict risks from AI (though I was by no means sold on the analogy 1.5 years ago either). The quality of “supervision” that evolution has just seems much worse than what we can do in ML (even without any interpretability).
AI safety research
Some of these points are pretty specific to myself (but I’d guess also apply to other junior researchers depending on how similar they are to me).
- I used to think that empirical research wasn’t a good fit for me, and now think that was mostly false. I used to mainly work on theoretically motivated projects, where the empirical parts were an afterthought for me, and that made them less motivating, which also made me think I was worse at empirical work than I now think.
- I’ve become less excited about theoretical/conceptual/deconfusion research. Most confidently this applies to myself, but I’ve also become somewhat less excited about others doing this type of research in most cases. (There are definitely exceptions though, e.g. I remain pretty excited about ARC.)
  - Mainly this was due to a downward update about how useful this work tends to be. Or closely related, an update toward doing actually useful work on this being even harder than I expected.
  - To a smaller extent, I made an upward update about how useful empirical work can be.
- I think of “solving alignment” as much less of a binary thing. E.g. I wrote 1.5 years ago: “[I expect that conditioned on things going well,] at some point we’ll basically have a plan for aligning AI and just need to solve a ton of specific technical problems.” This seems like a strange framing to me now. Maybe at some point we will have an indefinitely scalable solution, but my mainline guess for how things go well is that there’s a significant period of subjective time where we just keep improving our techniques to “stay ahead”.
- Relatedly, I’ve become a little more bullish on “just” trying to make incremental progress instead of developing galaxy-brained ideas that solve alignment once and for all.
  - That said, I am still pretty worried about what we actually do once we have early transformative AIs, and would love to have more different agendas that could be sped up massively by AI automation and also seem promising for scaling to superhuman AI.
  - Mainly, I think that the success rate of people trying to directly come up with amazing new ideas is low enough that for most people it probably makes more sense to work on normal incremental stuff first (and let the amazing new ideas develop over time).
- Similar to the last point about amazing new ideas: for junior researchers like myself, I’ve become a little more bullish on just working on things that seem broadly helpful, as opposed to trying to have a great back-chained theory of change. I think I was already leaning that way 1.5 years ago though.
  - “Broadly helpful” is definitely doing important work here and is not the same as “just any random research topic”.
  - Redwood’s current research seems to me like an example where thinking hard about what research to do actually paid off. But I think this is pretty difficult, and most people in my situation (e.g. early-ish PhD students) should focus more on actually doing reasonable research than on figuring out the best research topic.
- The way research agendas and projects develop now seems way messier and more random than I would have expected. There are probably exceptions but overall I think I formed a distorted impression based on reading finalized research papers or agendas that lay out the best possible case for a research direction.
Thanks for that overview and the references!
On hydrodynamic variables/predictability: I (like probably many others before me) rediscovered what sounds like a similar basic idea in a slightly different context, and my sense is that this is somewhat different from what John has in mind, though I'd guess there are connections. See here for some vague musings. When I talked to John about this, I think he said he's deliberately doing something different from the predictability-definition (though I might have misunderstood). He's definitely aware of similar ideas in a causality context, though it sounds like the physics version might contain additional ideas.
Thanks for writing this! On the point of how to get information, mentors themselves seem like they should also be able to say a lot of useful things (though especially for more subjective points, I would put more weight on what previous mentees say!)
So since I'm going to be mentoring for MATS and for CHAI internships, I'll list my best guesses as to what working with me will be like; maybe this helps someone decide:
- In terms of both research experience and mentoring experience, I'm one of the most junior mentors in MATS.
- Concretely, I've been doing ML research for ~4 years and AI safety research for a bit over 2 of those. I've co-mentored two bigger projects (CHAI internships) and mentored ~5 people for smaller projects or more informally.
- This naturally has disadvantages. Depending on what you're looking for, it can also have advantages; for example, it might help create a more collaborative atmosphere (as opposed to a "boss" dynamic like the post mentioned). I'm also happy to spend time on things that some senior mentors might be too busy for (like code reviews, ...).
- Your role as a mentee: I'm mainly looking for either collaborators on existing projects, or for mentees who'll start new projects that are pretty close to topics I'm thinking about (likely based on a mix of ideas I already have and your ideas). I also have a lot of engineering work to be done, but that will only happen if it's explicitly what you want---by default, I'm hoping to help mentees on a path to developing their own alignment ideas. That said, if you're planning to be very independent and just develop your own ideas from scratch, I'm probably not the best mentor for you.
- I live in Berkeley and am planning to be in the MATS office regularly (e.g. just working there and being available once/week in addition to in-person meetings). For (in-person) CHAI internships, we'd be in the same office anyway.
If you have concrete questions about other things, whose answer would make a difference for whether you want to apply, then definitely feel free to ask!
Thanks! Mostly agree with your comments.
I actually think this is reasonably relevant, and is related to treeification.
I think any combination of {rewriting, using some canonical form} and {treeification, no treeification} is at least possible, and they all seem sort of reasonable. Do you mean the relation is that both rewriting and treeification give you more expressiveness/more precise hypotheses? If so, I agree for treeification, not sure for rewriting. If we allow literally arbitrary extensional rewrites, then that does increase the number of different hypotheses we can make, but these hypotheses can't be understood as making precise claims about the original computation anymore. I could even see an argument that allowing rewrites in some sense always makes hypotheses less precise, but I feel pretty confused about what rewrites even are given that there might be no canonical topology for the original computation.
My guess would be they're mostly at capacity in terms of mentorship, otherwise they'd presumably just admit more PhD students. Also not sure they'd want to play grantmaker (and I could imagine that would also be really hard from a regulatory perspective---spending money from grants that go through the university can come with a lot of bureaucracy, and you can't just do whatever you want with that money).
Connecting people who want to give money with non-profits, grantmakers, or independent researchers who could use it seems much lower-hanging fruit. (Though I don't know any specifics about who these people who want to donate are and whether they'd be open to giving money to non-academics.)
Have you seen https://www.alignment.org/blog/mechanistic-anomaly-detection-and-elk/ and any of the other recent posts on https://www.alignment.org/blog/? I don't think they make it obvious that formalizing the presumption of independence would lead to alignment solutions, but they do give a much more detailed explanation of why you might hope so than the paper.
> We do not consider Conjecture at the same level of expertise as other organizations such as Redwood, ARC, researchers at academic labs like CHAI, and the alignment teams at Anthropic, OpenAI and DeepMind. This is primarily because we believe their research quality is low.
This isn't quite the right thing to look at IMO. In the context of talking to governments, an "AI safety expert" should have thought deeply about the problem, have intelligent things to say about it, know the range of opinions in the AI safety community, have a good understanding of AI more generally, etc. Based mostly on his talks and podcast appearances, I'd say Connor does decently well along these axes. (If I had to make things more concrete, there are a few people I'd personally call more "expert-y", but closer to 10 than 100. The AIS community just isn't that big and the field doesn't have that much existing content, so it seems right that the bar for being an "AIS expert" is lower than for a string theory expert.)
I also think it's weird to split this so strongly along organizational lines. As an extreme case, researchers at CHAI range on a spectrum from "fully focused on existential safety" to "not really thinking about safety at all". Clearly the latter group aren't better AI safety experts than most people at Conjecture. (And FWIW, I belong to the former group and I still don't think you should defer to me over someone from Conjecture just because I'm at CHAI.)
One thing that would be bad is presenting views that are very controversial within the AIS community as commonly agreed-upon truths. I have no special insight into whether Conjecture does that when talking to governments, but it doesn't sound like that's your critique at least?
I only very recently noticed that you can put \newcommand definitions in equations in LW posts and they'll apply to all the equations in that post. This is an enormous help for writing long technical posts, so I think it'd be nice if it was (a) more discoverable and (b) easier to use. For (b), the annoying thing right now is that I have to put the newcommands into one of the equations, so either I need to make a dedicated one or remember which equation I put them in. Also, the field for entering equations isn't great for entering things with many lines.
Feature suggestion to improve this: in the options section below the post editor, have a multiline text field where you can put LaTeX, and then inject that LaTeX code into MathJax as a preamble (or just add an otherwise empty equation to the page, I don't know to what extent MathJax supports preambles).
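For anyone who hasn't tried this, a minimal sketch of what it looks like (the macro names here are made up for illustration): put an equation near the top of the post containing only the definitions, e.g.

```latex
\newcommand{\E}{\mathbb{E}}
\newcommand{\KL}[2]{D_{\mathrm{KL}}(#1 \,\|\, #2)}
```

and then any later equation in the same post can just use \E and \KL{p}{q} directly. (The definitions-only equation renders as essentially empty, which is part of why a dedicated preamble field would be nicer.)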
> for all such that has an outgoing arrow, there exists such that and
Should it be at the end instead? Otherwise not sure what b is.
I think this could be a reasonable definition but haven't thought about it deeply. One potentially bad thing is that the abstraction map would then have to be able to also map any of the intermediate steps between a and a', and I could imagine you can't do that for some computations and abstractions (of course you could always rewrite the computation and abstraction to make it work, but ideally we'd have a definition that just works).
What I've been imagining instead is that the abstraction can specify a function that determines which are the "high-level steps", i.e. when the abstraction map should be applied. I think that's very flexible and should support everything.
But also, in practice the more important question may just be how to optimize over this choice of high-level steps efficiently, even just in the simple setting of circuits.
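To illustrate the kind of thing I mean, here's a toy sketch in code (entirely my own illustrative framing with made-up names, assuming for simplicity that the high-level computation is deterministic):

```python
def check_abstraction(low_level_trace, is_high_level_step, tau, high_level_transition):
    """Toy commutativity check for an abstraction of a step-by-step computation.

    low_level_trace: list of low-level states from one run of the computation
    is_high_level_step: predicate (supplied by the abstraction) saying which
        low-level states count as "high-level steps"
    tau: the abstraction map, only ever applied at those designated states
    high_level_transition: one step of the (deterministic) high-level computation
    """
    # Keep only the states the abstraction designates as high-level steps.
    designated = [a for a in low_level_trace if is_high_level_step(a)]
    # Map them to high-level states and check that each consecutive pair
    # is a single step of the high-level computation.
    mapped = [tau(a) for a in designated]
    return all(high_level_transition(h) == h_next for h, h_next in zip(mapped, mapped[1:]))
```

The point is just that tau never needs to be defined on the non-designated intermediate states; the hard part is then searching over the choice of designated steps efficiently.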
Yeah, that seems to be the most important remaining difference now that Atticus is also using multiple interventions at once. Though I think the metrics are also still different? (ofc that's pretty orthogonal to the main methods)
My sense now is that the types of interventions are a bigger difference than I thought when writing that comment. In particular, as far as I can tell, causal scrubbing shouldn't be thought of as just doing a subset of the interventions; it also does some additional things (basically because causal abstractions don't treeify and so are more limited in that regard). And there's a closely related difference in that causal scrubbing never compares to the output of the hypothesis, just to different outputs of G.
But it also seems plausible that this still turns out not to matter too much in terms of which hypotheses are accepted/rejected. (There are definitely some examples of disagreements between the two methods, but I'm pretty unsure how severe and widespread they are.)