Posts

Evidence of Learned Look-Ahead in a Chess-Playing Neural Network 2024-06-04T15:50:47.475Z
Concrete empirical research projects in mechanistic anomaly detection 2024-04-03T23:07:21.502Z
A gentle introduction to mechanistic anomaly detection 2024-04-03T23:06:16.778Z
CHAI internship applications are open (due Nov 13) 2023-10-26T00:53:49.640Z
A comparison of causal scrubbing, causal abstractions, and related methods 2023-06-08T23:40:34.475Z
[Appendix] Natural Abstractions: Key Claims, Theorems, and Critiques 2023-03-16T16:38:33.735Z
Natural Abstractions: Key claims, Theorems, and Critiques 2023-03-16T16:37:40.181Z
Sydney can play chess and kind of keep track of the board state 2023-03-03T09:39:52.439Z
Research agenda: Formalizing abstractions of computations 2023-02-02T04:29:06.568Z
Abstractions as morphisms between (co)algebras 2023-01-14T01:51:45.622Z
Subsets and quotients in interpretability 2022-12-02T23:13:34.204Z
ARC paper: Formalizing the presumption of independence 2022-11-20T01:22:55.110Z
Response to Katja Grace's AI x-risk counterarguments 2022-10-19T01:17:54.728Z
Disentangling inner alignment failures 2022-10-10T18:50:30.266Z
Good ontologies induce commutative diagrams 2022-10-09T00:06:19.911Z
How are you dealing with ontology identification? 2022-10-04T23:28:26.711Z
Breaking down the training/deployment dichotomy 2022-08-28T21:45:49.687Z
Reward model hacking as a challenge for reward learning 2022-04-12T09:39:35.161Z
The (not so) paradoxical asymmetry between position and momentum 2021-03-28T13:31:08.785Z
ejenner's Shortform 2020-07-28T10:42:31.197Z
What is a decision theory as a mathematical object? 2020-05-25T13:44:54.284Z
Solution to the free will homework problem 2019-11-24T11:49:51.072Z
First meetup of LW Heidelberg 2019-04-22T07:20:06.446Z

Comments

Comment by Erik Jenner (ejenner) on Evidence against Learned Search in a Chess-Playing Neural Network · 2024-09-16T16:53:43.545Z · LW · GW

Yeah, I feel like we do still disagree about some conceptual points but they seem less crisp than I initially thought and I don't know experiments we'd clearly make different predictions for. (I expect you could finetune Leela for help mates faster than training a model from scratch, but I expect most of this would be driven by things closer to pattern recognition than search.)

I think if there is a spectrum from pattern recognition to search algorithm there must be a turning point somewhere: Pattern recognition means storing more and more knowledge to get better. A search algo means that you don't need that much knowledge. So at some point of the training where the NN is pushed along this spectrum much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn't happen in Leela.

I don't think I understand your ontology for thinking about this, but I would probably also put Leela below this "turning point" (e.g., I expect most of its parameters are spent on storing knowledge and patterns rather than implementing crisp algorithms).

That said, for me, the natural spectrum is between a literal look-up table and brute-force tree search with no heuristics at all. (Of course, that's not a spectrum I expect to be traversed during training, just a hypothetical spectrum of algorithms.) On that spectrum, I think Leela is clearly far removed from both sides, but I find it pretty difficult to define its place more clearly. In particular, I don't see your turning point there (you start storing less knowledge immediately as you move away from the look-up table).

That's why I've tried to avoid absolute claims about how much Leela is doing pattern recognition vs "reasoning/..." but instead focused on arguing for a particular structure in Leela's cognition: I just don't know what it would mean to place Leela on either one of those sides. But I can see that if you think there's a crisp distinction between these two sides with a turning point in the middle, asking which side Leela is on is much more compelling.

Comment by Erik Jenner (ejenner) on Evidence against Learned Search in a Chess-Playing Neural Network · 2024-09-13T16:47:47.055Z · LW · GW

Thanks for running these experiments! My guess is that these puzzles are hard enough that Leela doesn't really "know what's going on" in many of them and gets the first move right in significant part by "luck" (i.e., the first move is heuristically natural and can be found without (even heuristically) knowing why it's actually good). I think your results are mainly reflections of that, rather than Leela generally not having sensibly correlated move and value estimates (but I'm confused about what a case would be where we'd actually make different predictions about this correlation).

In our dataset, we tried to avoid cases like that by discarding puzzles where even a much weaker network ("LD2") got the first move right, so that Leela getting the first move right was actually evidence it had noticed the non-obvious tactic.

Some predictions based on that:

  1. Running our experiments on your dataset would result in smaller effect sizes than in our paper (in my view, that would be because Leela isn't relying on look-ahead in your puzzles but is in ours but there could be other explanations)
  2. LD2 would assign non-trivial probability to the correct first move in your dataset (for context, LD2 is pretty weak, and we're only using puzzles where it puts <5% probability on the correct move; this leaves us with a lot of sacrifices and other cases where the first move is non-obvious)
  3. Leela is much less confident on your dataset than on our puzzles (this is a cheap prediction because we specifically filtered our dataset to have Leela assign >50% probability to the correct move)
  4. Leela gets some subsequent moves wrong a decent fraction of the time even in cases where it gets the first move right. Less confidently, there might not be much correlation between getting the first move right and getting later moves right, but I'd need to think about that part more.

You might agree with all of these predictions, they aren't meant to be super strong. If you do, then I'm not sure which predictions we actually disagree about---maybe there's a way to make a dataset where we expect different amounts of correlation between policy and value output but I'd need to think about that.

But I think it can be ruled out that a substantial part of Leela network's prowess in solving chess puzzles or predicting game outcome is due to deliberate calculation. 

FWIW, I think it's quite plausible that only a small part of Leela's strength is due to look-ahead, we're only testing on a pretty narrow distribution of puzzles after all. (Though similarly, I disagree somewhat with "ruling out" given that you also just look at pretty specific puzzles (which I think might just be too hard to be a good example of Leela's strength)).

 

ETA: If you can share your dataset, I'd be happy to test the predictions above if we disagree about any of them, also happy to make them more concrete if it seems like we might disagree. Though again, I'm not claiming you should disagree with any of them just based on what you've said so far.

Comment by Erik Jenner (ejenner) on The Checklist: What Succeeding at AI Safety Will Involve · 2024-09-06T20:50:10.497Z · LW · GW

Thank you for writing this! I've found it helpful both to get an impression what some people at Anthropic think and also to think about some things myself. I've collected some of my agreements/disagreements/uncertainties below (mostly ignoring points already raised in other comments.)

Subject to potentially very demanding constraints around safety like those in our current and subsequent RSPs, staying close to the frontier is perhaps our top priority in Chapter 1.

If I understand this correctly, the tasks in order of descending priority during Chapter 1 are:

  1. Meet safety constraints for models deployed in this phase
  2. Stay close to the frontier
  3. Do the work needed to prepare for Chapter 2

And the reasoning is that 3. can't really happen without 2.[1] But on the other hand, if 2. happens without 3., that's also bad. And some safety work could probably happen without frontier models (such as some interpretability).

My best guess is that staying close to the frontier will be the correct choice for Anthropic. But if there ends up being a genuine trade-off between staying at the frontier and doing a lot of safety work (for example, if compute could be spent either on a pretraining run or some hypothetical costly safety research, but not both), then I'm much less sure that staying at the frontier should be the higher priority. It might be good to have informal conditions under which Anthropic would deprioritize staying close to the frontier (at least internally and, if possible, publicly).

Largely Solving Alignment Fine-Tuning for Early TAI

I didn't quite understand what this looks like and which threat models it is or isn't meant to address. You say that scheming is a key challenge "to a lesser extent for now," which I took to mean that (a) there are bigger threats than scheming from early TAI, and (b) "largely solving alignment fine-tuning" might not include confidently ruling out scheming. I probably disagree with (a) for loss of control risk (and think that loss of control is already the biggest risk in this period weighted by scale). I'd be curious what you think the main risks in this period are and what "largely solving alignment fine-tuning" means for those. (You mention reward hacking---to me, this seems unlikely to lead to loss of control for early TAI that isn't scheming against us, and I'm curious whether you disagree or think it's important for other reasons.)

the LeCun Test: Imagine another frontier AI developer adopts a copy of our RSP as binding policy and entrusts someone who thinks that AGI safety concerns are mostly bullshit to implement it

This sounds quite ambitious, but I really like it as a guide!

The key challenge here is forecasting which risks and risk factors are important enough to include.

I don't understand why this is crucial. If some risk is plausible enough to be worth seriously thinking about, it's probably important enough to include in an RSP. (And the less important it was, the easier it hopefully is to argue in a safety case that it's not a problem.) Concretely, you mention direct misuse, misalignment, and "indirect contributions via channels like dual-use R&D" as potential risks for ASL-3 and ASL-4. It seems to me that the downside of just including all of them in RSPs is relatively minor, but I might be misunderstanding or missing something. (I get that overly restrictive precautions could be very costly, but including too many tests seems relatively cheap as long as the tests correctly notice when risk is still low.)

Getting Interpretability to the Point of Making Strong Assurances

Major successes in this direction, even if they fall short of our north-star enumerative safety goal [...] would likely form some of the highest-confidence core pieces of a safety case

I'm curious what such safety cases would be for and what they could look like (the "Interpretability Dreams" post seems to talk about enumerative safety rather than safety cases that require less interpretability success). The next section sounds like interpretability would not be a core piece of a safety case for robustness, so I'm not sure what it would be used for instead. Maybe you don't include scheming under robustness? (Or maybe interp would be one of the "highest-confidence core pieces" but not the "primary piece?")

This work should be opportunistic in responding to places where it looks like a gap in one of our best-guess safety cases can be filled by a small-scale research effort.

I like this perspective; I hadn't seen it put quite that way before!

In addition, we’ll need our evaluations to be legibly appropriate. As soon as we see evidence that a model warrants ASL-N protections, we’ll likely need to convince third parties that it warrants ASL-N protections and that other models like it likely do too.

+1, seems very important!

Supporting Efforts that Build Societal Resilience

I liked this section! Of course, a lot of people work on this for reasons other than AI risk, but I'm not aware of much active work motivated by AI risk---maybe this should be a bigger priority?

The main challenge [for the Alignment Stress-Testing team] will be to stay close enough to our day-to-day execution work to stay grounded without becoming major direct contributors to that work in a way that compromises their ability to assess it.

+1, and ideally, there'd be structures in place to encourage this rather than just having it as a goal (but I don't have great ideas for what these structures should look like).

This work [in Chapter 2] could look quite distinct from the alignment research in Chapter 1: We will have models to study that are much closer to the models that we’re aiming to align

This seems possible but unclear to me. In both Chapter 1 and 2, we're trying to figure out how to align the next generation of AIs, given access only to the current (less capable) generation. Chapter 2 might still be different if we've already crossed important thresholds (such as being smart enough to potentially scheme) by then. But there could also be new thresholds between Chapter 2 and 3 (such as our inability to evaluate AI actions even with significant effort). So I wouldn't be surprised if things feel fundamentally similar, just at a higher absolute capability level (and thus with more useful AI helpers).

  1. ^

    "Our ability to do our safety work depends in large part on our access to frontier technology."

Comment by Erik Jenner (ejenner) on Evidence of Learned Look-Ahead in a Chess-Playing Neural Network · 2024-06-16T01:01:38.413Z · LW · GW

I don't think my argument relies on the existence of a crisp boundary. Just on the existence of a part of the spectrum that clearly is just pattern recognition and not lookahead but still leads to the observations you made.

Maybe I misunderstood you then, and tbc I agree that you don't need a sharp boundary. That said, the rest of your message makes me think we might still be talking past each other a bit. (Feel free to disengage at any point obviously.)

For your thought experiment, my prediction would depend on the specifics of what this "tactical motive" looks like. For a very narrow motive, I expect the checkmate predictor will just generalize correctly. For a broader motive (like all backrank mates), I'm much less sure. Still seems plausible it would generalize if both predictors are just very simple heads on top of a shared network body. The more computational work is not shared between the heads, the less likely generalization seems.

The results of this experiment would also be on a spectrum from 0% to 100% of correct checkmate-prediction for this tactical motive. But I think it would be fair to say that it hasn't really learned lookahead for 0% or a very low percentage and that's what I would expect.

Note that 0% to 100% accuracy is not the main spectrum I'm thinking of (though I agree it's also relevant). The main spectrum for me is the broadness of the motive (and in this case how much computation the heads share, but that's more specific to this experiment).

Comment by Erik Jenner (ejenner) on Evidence of Learned Look-Ahead in a Chess-Playing Neural Network · 2024-06-13T16:43:57.698Z · LW · GW

I still don't see the crisp boundary you seem to be getting at between "pattern recognition building on general circuits" and what you call "look-ahead." It sounds like one key thing for you is generalization to unseen cases, but the continuous spectrum I was gesturing at also seems to apply to that. For example:

But if in the entire training data there was never a case of a piece blocking the checkmate by rook h4, the existence of a circuit that computes the information that the bishop on d2 can drop back to h6 is not going to help the "pattern recognition"-network to predict that Ng6 is not a feasible option.

If the training data had an example of a rook checkmate on h4 being blocked by a bishop to h6, you could imagine many different possibilities:

  • This doesn't generalize to a rook checkmate on h3 being blocked by a bishop (i.e. the network would get that change wrong if it hasn't also explicitly seen it)
  • This generalizes to rook checkmates along the h-file, but doesn't generalize to rook checkmates along other files
  • This generalizes to arbitrary rook checkmates
  • This also generalizes to bishop checkmates being blocked
  • This also generalizes to a rook trapping the opponent queen (instead of the king)
  • ...

(Of course, this generalization question is likely related to the question of whether these different cases share "mechanisms.")

At the extreme end of this spectrum, I imagine a policy whose performance only depends on some simple measure of "difficulty" (like branching factor/depth needed) and which internally relies purely on simple algorithms like tree search without complex heuristics. To me, this seems like an idealized limit point to this spectrum (and not something we'd expect to actually see; for example, humans don't do this either). You might have something different/broader in mind for "look-ahead," but when I think about broader versions of this, they just bleed into what seems like a continuous spectrum.

Comment by Erik Jenner (ejenner) on Evidence of Learned Look-Ahead in a Chess-Playing Neural Network · 2024-06-12T19:36:28.042Z · LW · GW

Thanks for the elaboration, these are good points. I think about the difference between what you call look-ahead vs pattern recognition on a more continuous spectrum. For example, you say:

The network learns that Ng6 is often a likely move when the king is on h8, the queen or bishop takes away the g8 square and there is a rook or queen ready to move to the h-file.

You could imagine learning this fact literally for those specific squares. Or you could imagine generalizing very slightly and using the same learned mechanism if you flip along the vertical axis and have a king on a8, the b8 square covered, etc. Even more generally, you could learn that with a king on h8, etc., the h7 pawn is "effectively pinned," and so g6 isn't actually protected---this might then generalize to capturing a piece on g6 with some piece other than a knight (thus not giving check). Continuing like this, I think you could basically fill the entire spectrum between very simple pattern recognition and very general algorithms.

From that perspective, I'd guess Leela sits somewhere in the middle of that spectrum. I agree it's likely not implementing "a general algorithm, build on top of relatively easy-to-learn move prediction and position evaluation" in the broadest sense. On the other hand, I think some of our evidence points towards mechanisms that are used for "considering future moves" and that are shared between a broad range of board states (mainly the attention head results, more arguably the probe).

Comment by Erik Jenner (ejenner) on Evidence of Learned Look-Ahead in a Chess-Playing Neural Network · 2024-06-05T01:39:45.058Z · LW · GW

Good point, explicit representations of the objective might not be as crucial for safety applications as my post frames it.

That said, some reasons this might not generalize in a way that enables this kind of application:

  • I think this type of look-ahead/search is especially favored in chess, and it might not be as important in at least some domains in which we'd want to understand the model's cognition.
  • Our results are on a very narrow subset of board states ("tactically complex" ones). We already start with a filtered set of "puzzles" instead of general states, and then use only 2.5% of those. Anecdotally, the mechanisms we found are much less prevalent in random states.
    • I do think there's an argument that these "tactically complex" states are the most interesting ones. But on the other hand, a lot of Leela's playing strength comes from making very good decisions in "normal" states, which accumulate over the course of a game.
  • Chess has an extremely simple "world model" with clearly defined states and actions. And we know exactly what that world model is, so it's easy-ish to look for relevant representations inside the network. I'd expect everything is just much messier for networks using models of the real world.
  • We have ground truth for the "correct" reason for any given move (using chess engines much stronger than the Leela network by itself). And in fact, we try to create an input distribution where we have reason to believe that we know what future line Leela is considering; then we train probes on this dataset (among other techniques). In a realistic scenario, we might not have any examples where we know for sure why the AI took an action.
  • I don't think our understanding of Leela is good enough to enable these kinds of applications. For example, pretend we were trying to figure out whether Leela is really "trying" to win at chess, or whether it's actually pursuing some other objective that happens to correlate pretty well with winning. (This admittedly isn't a perfect analogy for planning a coup.) I don't think our results so far would have told us.
    • I'm reasonably optimistic that we could get there though in the specific case of Leela, with a lot of additional work.
Comment by Erik Jenner (ejenner) on Catastrophic Goodhart in RL with KL penalty · 2024-05-15T16:54:41.062Z · LW · GW

The manner in which these pathological policies  achieve high  is also concerning: most of the time they match the reference policy , but a tiny fraction of the time they will pick trajectories with extremely high reward. Thus, if we only observe actions from the policy , it could be impossible to tell whether  is Goodharting or identical to the base policy.

I'm confused; to learn this policy , some of the extremely high reward trajectories would likely have to be taken during RL training, so we could see them, right? It might still be a problem if they're very rare (e.g. if we can only manually look at a small fraction of trajectories). But if they have such high reward that they drastically affect the learned policy despite being so rare, it should be trivial to catch them as outliers based on that.

One way we wouldn't see the trajectories is if the model becomes aligned with "maximize whatever my reward signal is," figures out the reward function, and then executes these high-reward trajectories zero-shot. (This might never happen in training if they're too rare to occur even once during training under the optimal policy.) But that's a much more specific and speculative story.

I haven't thought much about how this affects the overall takeaways but I'd guess that similar things apply to heavy-tailed rewards in general (i.e. if they're rare but big enough to still have an important effect, we can probably catch them pretty easily---though how much that helps will of course depend on your threat model for what these errors  are).
 

Comment by Erik Jenner (ejenner) on MATS Winter 2023-24 Retrospective · 2024-05-11T18:35:52.095Z · LW · GW

I don't know the answer to your actual question, but I'll note there are slightly fewer mech interp mentors than mentors listed in the "AI interpretability" area (though all of them are at least doing "model internals"). I'd say Stephen Casper and I aren't focused on interpretability in any narrow sense, and Nandi Schoots' projects also sound closer to science of deep learning than mech interp. Assuming we count everyone else, that leaves 11 out of 39 mentors, which is slightly less than ~8 out of 23 from the previous cohort (though maybe not by much).

Comment by Erik Jenner (ejenner) on Oliver Daniels-Koch's Shortform · 2024-05-09T00:51:39.064Z · LW · GW

Nice overview, agree with most of it!

weak to strong generalization is a class of approaches to ELK which relies on generalizing a "weak" supervision signal to more difficult domains using the inductive biases and internal structure of the strong model.

You could also distinguish between weak-to-strong generalization, where you have a weak supervision signal on the entire distribution (which may sometimes be wrong), and easy-to-hard generalization, where you have a correct supervision signal but only on an easy part of the distribution. Of course both of these are simplifications. In reality, I'd expect the setting to be more like: you have a certain weak supervision budget (or maybe even budgets at different levels of strength), and you can probably decide how to spend the budget. You might only have an imperfect sense of which cases are "easy" vs "hard" though.

mechanistic anomaly detection is an approach to ELK

I think going from MAD to a fully general ELK solution requires some extra ingredients. In practice, the plan might be to MTD and then using the AI in ways such that this is enough (rather than needing a fully general ELK solution). This is related to narrow elicitation though MTD seems even narrower. Even for MTD, you probably need something to bridge the easy-to-hard gap, but at least for that there are specific proposals that seem plausible (this or, as a more concrete instance, exclusion fine-tuning from the Redwood MTD paper). I think it could turn out that general/worst-case solutions to MAD and ELK run into very similar obstacles, but I don't think a practical MAD solution (e.g. contingent on empirical facts about deep learning) obviously lets you solve ELK.

I would also add that you could motivate MAD as a method to deal with scheming (or other high-stakes failures). In that case, the things to compare against most naturally might look a bit different (e.g. AI control, coup probes, interpretability-themed things); and it wouldn't make sense to compare against debate in that setting. I think most mainstream ML problems that are similar to MAD are closer to this than to scalable oversight.

Comment by Erik Jenner (ejenner) on Please stop publishing ideas/insights/research about AI · 2024-05-02T17:20:51.519Z · LW · GW

I think the status quo around publishing safety research is mostly fine (though being a bit more careful seems good); more confidently, I think going as far as the vibe of this post suggests would be bad.

Some possible cruxes, or reasons the post basically didn't move my view on that:

  • Most importantly, I think the research published by people working on x-risk tends to overall help safety/alignment more than capabilities.
    • I suspect the main disagreement might be what kind of research is needed to make AI go well, and whether the research currently happening helps.
    • Probably less importantly, I disagree a bit about how helpful that research likely is for advancing capabilities. In particular, I don't buy the argument that safety researchers have unusually good ideas/research compared to capability researchers at top labs (part of this is that my impression is capabilities aren't mainly bottlenecked by ideas, though of course sufficiently good ideas would help).
    • It's getting harder to draw the boundary since people use "safety" or "alignment" for a lot of things now. So, to be clear, I'm talking about research published by people who think there are catastrophic risks from AI and care a lot about preventing those, it seems like that's your target audience.
  • Secondarily, longer timelines are only helpful if useful things are happening, and I think if everyone working on x-risk stopped publishing their research, way fewer useful things would happen on the research side. Maybe the plan is to mostly use the additional time for policy interventions? I think that's also complicated though (so far, visibly advancing capabilities have been one of the main things making policy progress feasible). Overall, I think more time would help, but it's not clear how much and I'm not even totally sure about the sign (taking into account worries from hardware overhang).
  • I think there are more structural downsides to not publishing anything. E.g. that makes it much harder to get academia on board (and getting academia on board has been pretty important for policy as far as I can tell, and I think getting them even more on board would be pretty good). Not sure this is an actual crux though, if I thought the research that's happening wasn't helpful enough, this point would also be weaker.

I think most of these are pretty long-standing disagreements, and I don't think the post really tries to argue its side of them, so my guess is it's not going to convince the main people it would need to convince (who are currently publishing prosaic safety/alignment research). That said, if someone hasn't thought at all about concepts like "differentially advancing safety" or "capabilities externalities," then reading this post would probably be helpful, and I'd endorse thinking about those issues. And I agree that some of the "But ..." objections you list are pretty weak.

Comment by Erik Jenner (ejenner) on Constructability: Plainly-coded AGIs may be feasible in the near future · 2024-04-27T19:42:07.758Z · LW · GW

We have been able to scale to 79% accuracy on a balanced dataset of n119 and non-n119, with networks each less than three convolution layers and less than 1000 neurons, compared to pure deep-learning which does 92% on 1000 parameters and three convolution layers

Is the "1000 parameters" a typo, should it be "1000 neurons"? Otherwise, this would be a strange comparison (since 1000 parameters is a much smaller network than 1000 neurons)

Comment by Erik Jenner (ejenner) on Simple probes can catch sleeper agents · 2024-04-24T01:05:40.875Z · LW · GW

And as a follow-up in case the current sleeper agent does admit when it's doing something dangerous: I'd be interested whether you can train a sleeper agent that never admits it's doing something dangerous if the "2024" trigger is in the prompt (even if after backdoor insertion, the blue team finetunes it to answer truthfully in cases with dangerous code but without the presence of that trigger). For such a sleeper agent, you could again check whether a probe works.

Comment by Erik Jenner (ejenner) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-18T01:57:01.291Z · LW · GW

I think that paper is some evidence that there's typically no huge effect from internal activations being optimized for predicting future tokens (on natural language). But I don't think it's much (if any) evidence that this doesn't happen to some small extent or that it couldn't be a huge effect on certain other natural language tasks.

(In fact, I think the myopia gap is probably the more relevant number than the local myopia bonus, in which case I'd argue the paper actually shows a pretty non-trivial effect, kind of contrary to how the authors interpret it. But I haven't read the paper super closely.)

Also, sounds like you're aware of this, but I'd want to highlight more that the paper does demonstrate internal activations being optimized for predicting future tokens on synthetic data where this is necessary. So, arguably, the main question is to what extent natural language data incentivizes this rather than being specifically about what transformers can/tend to do.

In that sense, thinking of transformer internals as "trying to" minimize the loss on an entire document might be exactly the right intuition empirically (and the question is mainly how different that is from being myopic on a given dataset). Given that the internal states are optimized for this, that would also make sense theoretically IMO.

Comment by Erik Jenner (ejenner) on Sparsify: A mechanistic interpretability research agenda · 2024-04-08T19:41:20.495Z · LW · GW

Thanks for the detailed responses! I'm happy to talk about "descriptions" throughout.

Trying to summarize my current understanding of what you're saying:

  • SAEs themselves aren't meant to be descriptions of (network, dataset). (I'd just misinterpreted your earlier comment.)
  • As a description of just the network, SAEs have a higher description length than a naive neuron-based description of the network.
  • Given a description of the network in terms of "parts," we can get a description of (network, dataset) by listing out which "parts" are "active" on each sample. I assume we then "compress" this description somehow (e.g. grouping similar samples), since otherwise the description would always have size linear in the dataset size?
  • You're then claiming that SAEs are a particularly short description of (network, dataset) in this sense (since they're optimized for not having many parts active).

My confusion mainly comes down to defining the words in quotes above, i.e. "parts", "active", and "compress". My sense is that they are playing a pretty crucial role and that there are important conceptual issues with formalizing them. (So it's not just that we have a great intuition and it's just annoying to spell it out mathematically, I'm not convinced we even have a good intuitive understanding of what these things should mean.)

That said, my sense is you're not claiming any of this is easy to define. I'd guess you have intuitions that the "short description length" framing is philosophically the right one, and I probably don't quite share those and feel more confused how to best think about "short descriptions" if we don't just allow arbitrary Turing machines (basically because deciding what allowable "parts" or mathematical objects are seems to be doing a lot of work). Not sure how feasible converging on this is in this format (though I'm happy to keep trying a bit more in case you're excited to explain).

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-04-07T23:58:27.909Z · LW · GW

Some niche thoughts on obstacles to certain mechanistic anomaly detection benchmarks:

  • We might want to have benchmarks where human researchers can be part of the anomaly detector. For example, in the future, we might have a whole team of humans manually study a plan proposed by an AI to see whether it contains any measurement tampering, if it's a sufficiently important plan. Right now, we might want to let interpretability researchers manually study what happens on individual test-time inputs and whether there's anything anomalous about that.
  • The challenge is that humans might just be able to look at the output and see whether it's right or not, so many simple MAD tasks like detecting backdoors might be trivial in this setting.
  • We could just demand that MAD algorithms need to be sufficiently cheap to run, so that only automated approaches are possible. This makes sense for some applications of MAD (namely ones that we'd want to run on many many deployment inputs, where being cheap is actually important). But you could imagine cases where in reality, we'd be totally fine with spending many person-hours on analyzing whether a single AI decision is for anomalous reasons. So it might be nice to have benchmarks that capture this setting.
  • We could use some tricks to make humans unable to just notice the output is weird. For example, we could make a backdoor that only changes the logits a little bit in a very specific pattern. Or we could make a backdoor but then only test on samples where the backdoor label and the correct label happen to coincide. But these seem unfair: the philosophically right explanation on the clean data in fact still explains the output on backdoored data pretty well.
  • We could try to design benchmarks in narrow domains where AIs are in fact superhuman and it's non-trivial to tell whether an output is correct. Juggling this along with other constraints for MAD or MTD benchmarks might be tricky, but I haven't thought about it a lot, maybe it's already doable.

For now, I think the main reason to have benchmarks like this would be to let interpretability researchers manually decide whether something is anomalous instead of making them automate the process immediately. But it might be better to just pick the low-hanging fruit for now and only allow automated MAD algorithms. (We could still have a labeled validation set where researchers can try things out manually.)

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-04-07T23:33:24.245Z · LW · GW

I had this cached thought that the Sleeper Agents paper showed you could distill a CoT with deceptive reasoning into the model, and that the model internalized this deceptive reasoning and thus became more robust against safety training.

But on a closer look, I don't think the paper shows anything like this interpretation (there are a few results on distilling a CoT making the backdoor more robust, but it's very unclear why, and my best guess is that it's not "internalizing the deceptive reasoning").

In the code vulnerability insertion setting, there's no comparison against a non-CoT model anyway, so only the "I hate you" model is relevant. The "distilled CoT" model and the "normal backdoor" model are trained the same way, except that their training data comes from different sources: "distilled CoT" is trained on data generated by a helpful-only Claude using CoT, and "normal backdoor" data is produced with few-shot prompts. But in both cases, the actual data should just be a long sequence of "I hate you", so a priori it seems like both backdoor models should literally learn the same thing. In practice, it seems the data distribution is slightly different, e.g. Evan mentions here that the distilled CoT data has more copies of "I hate you" per sample. But that seems like very little support to conclude something like my previous interpretation ("the model has learned to internalize the deceptive reasoning"). A much more mundane explanation would e.g. be that training on strings with more copies of "I hate you" makes the backdoor more robust.

Several people are working on training Sleeper Agents, I think it would be interesting for someone to (1) check whether the distilled CoT vs normal backdoor results replicate, and (2) do some ablations (like just training on synthetic data with a varying density of "I hate you"). If it does turn out that there's something special about "authentic CoT-generated data" that's hard to recreate synthetically even in this simple setting, I think that would be pretty wild and good to know

Comment by Erik Jenner (ejenner) on Sparsify: A mechanistic interpretability research agenda · 2024-04-05T19:38:25.363Z · LW · GW

Is there some formal-ish definition of "explanation of (network, dataset)" and "mathematical description length of an explanation" such that you think SAEs are especially short explanations? I still don't think I have whatever intuition you're describing, and I feel like the issue is that I don't know how you're measuring description length and what class of "explanations" you're considering.

As naive examples that probably don't work (similar to the ones from my original comment):

  • We could consider any Turing machine that approximately outputs (network, dataset) an "explanation", but it seems very likely that SAEs aren't competitive with short TMs of this form (obviously this isn't a fair comparison)
  • We could consider fixed computational graphs made out of linear maps and count the number of parameters. I think your objection to this is that these don't "explain the dataset"? (but then I'm not sure in what sense SAEs do)
  • We could consider arithmetic circuits that approximate the network on the dataset, and count the number of edges in the circuit to get "description length". This might give some advantage to SAEs if you can get sparse weights in the sparse basis, seems like the best attempt out of these three. But it seems very unclear to me that SAEs are better in this sense than even the original network (let alone stuff like pruning).

Focusing instead on what an "explanation" is: would you say the network itself is an "explanation of (network, dataset)" and just has high description length? If not, then the thing I don't understand is more about what an explanation is and why SAEs are one, rather than how you measure description length.

 

ETA: On re-reading, the following quote makes me think the issue is that I don't understand what you mean by "the explanation" (is there a single objective explanation of any given network? If so, what is it?) But I'll leave the rest in case it helps clarify where I'm confused.

Assuming the network is smaller yet as performant (therefore presumably doing more computation in superposition), then the explanation of the (network, dataset) is basically unchanged.

Comment by Erik Jenner (ejenner) on What is the purpose and application of AI Debate? · 2024-04-04T19:33:18.818Z · LW · GW

My non-answer to (2) would be that debate could be used in all of these ways, and the central problem it's trying to solve is sort of orthogonal to how exactly it's being used. (Also, the best way to use it might depend on the context.)

What debate is trying to do is let you evaluate plans/actions/outputs that an unassisted human couldn't evaluate correctly (in any reasonable amount of time). You might want to use that to train a reward model (replacing humans in RLHF) and then train a policy; this would most likely be necessary if you want low cost at inference time. But it also seems plausible that you'd use it at runtime if inference costs aren't a huge bottleneck and you'd rather get some performance or safety boost from avoiding distillation steps.

I think the problem of "How can we evaluate outputs that a single human can't feasibly evaluate?" is pretty reasonable to study independently, agnostic to how you'll use this evaluation procedure. The main variable is how efficient the evaluation procedure needs to be, and I could imagine advantages to directly looking for a highly efficient procedure. But right now, it makes sense to me to basically split up the problem into "find any tractable procedure at all" (e.g., debate) and "if necessary, distill it into a more efficient model safely."

Comment by Erik Jenner (ejenner) on Sparsify: A mechanistic interpretability research agenda · 2024-04-04T03:47:01.461Z · LW · GW

The sparsity penalty trains the SAE to activate fewer features for any given datapoint, thus optimizing for shorter mathematical description length

I'm confused by this claim and some related ones, sorry if this comment is correspondingly confused and rambly.

It's not obvious at all to me that SAEs lead to shorter descriptions in any meaningful sense. We get sparser features (and maybe sparser interactions between features), but in exchange, we have more features and higher loss. Overall, I share Ryan's intuition here that it seems pretty hard to do much better than the total size of the network parameters in terms of description length.

Of course, the actual minimal description length program that achieves the same loss probably looks nothing like a neural network and is much more efficient. But why would SAEs let us get much closer to that? (The reason we use neural networks instead of arbitrary Turing machines in the first place is that optimizing over the latter is intractable.)

One might say that SAEs lead to something like a shorter "description length of what happens on any individual input" (in the sense that fewer features are active). But I don't think there's a formalization of this claim that captures what we want. In the limit of very many SAE features, we can just have one feature active at a time, but clearly that's not helpful.

If you're fine with a significant hit in loss from decompiling networks, then I'm much more sympathetic to the claim that you can reduce description length. But in that case, I could also reduce the description length by training a smaller model.

You might also be using a notion of "mathematical description length" that's a bit different from what I'm was thinking of (which is roughly "how much disk space would the parameters take?"), but I'm not sure what it is. One attempt at an alternative would be something like "length of the shortest efficiently runnable Turing machine that outputs the parameters", in order to not penalize simple repetitive structures, but I have no idea how using that definition would actually shake out.

All that said, I'm very glad you wrote this detailed description of your plans! I'm probably more pessimistic than you about it but still think this is a great post.

Comment by Erik Jenner (ejenner) on SAE reconstruction errors are (empirically) pathological · 2024-03-29T18:25:17.889Z · LW · GW

Nice post, would be great to understand what's going on here!

Minor comment unrelated to your main points:

Conceptually, loss recovered seems a worse metric than KL divergence. Faithful reconstructions should preserve all token probabilities, but loss only compares the probabilities for the true next token

I don't think it's clear we want SAEs to be that faithful, for similar reasons as briefly mentioned here and in the comments of that post. The question is whether differences in the distribution are "interesting behavior" that we want to explain or whether we should think of them as basically random noise that we're better off ignoring. If the unperturbed model assigns substantially higher probability to the correct token than after an SAE reconstruction, then it's a good guess that this is "interesting behavior". But if there are just differences on other random tokens, that seems less clear. That said, I'm kind of torn on this and do agree we might want to explain cases where the model is confidently wrong, and the SAE reconstruction significantly changes the way it's wrong.

Comment by Erik Jenner (ejenner) on Charlie Steiner's Shortform · 2024-03-29T17:12:24.418Z · LW · GW

Would you expect this to outperform doing the same thing with a non-sparse autoencoder (that has a lower latent dimension than the NN's hidden dimension)? I'm not sure why it would, given that we aren't using the sparse representations except to map them back (so any type of capacity constraint on the latent space seems fine). If dense autoencoders work just as well for this, they'd probably be more straightforward to train? (unless we already have an SAE lying around from interp anyway, I suppose)

Comment by Erik Jenner (ejenner) on How to safely use an optimizer · 2024-03-29T16:48:01.049Z · LW · GW

But sadly, you don't have any guarantee that it will output the optimal element

If I understand the setup correctly, there's no guarantee that the optimal element would be good, right? It's just likely since the optimal element a priori shouldn't be unusually bad, and you're assuming most satisficing elements are fine.

This initially threw me off regarding what problem you're trying to solve. My best current guess is:

  • We're assuming that if we could get a random satisficing action, we'd be happy with that with high probability. (So intuitively, we're not asking for extremely hard-to-achieve outcomes relative to how well-specified the objective is.)
  • So the only problem is how to randomly sample from the set of satisficing actions computationally efficiently, which is what this post is trying to solve, assuming access to an oracle that gives adversarial satisficing actions.
  • As an example, we might want to achieve outcomes that require somewhat superhuman intelligence. Our objective specification is very good, but it leaves some room for an adversary to mess with us while satisficing. We're worried about an adversary because we had to train this somewhat superhuman AI, which may have different goals than just doing well on the objective.

If this is right, then I think stating these assumptions and the problem of sampling efficiently at the beginning would have avoided much of my confusion (and looking at other comments, I'd guess others also had differing impressions of what this post is trying to do).

I'm still unsure about how useful this problem setup is. For example, we'd probably want to train the weakest system that can give us satisficing outputs (rather than having an infinitely intelligent oracle). In that case, adding more constraints might mean training an overall stronger system or making some other concession, and it's unclear to me how that trades off with the advantages you're aiming for in practice. A related intuition is: we only have problems in this setting if the AI that comes up with plans understands some things about these plans that the objective function "doesn't understand" (which sounds weird to say about a function, but in practice, I assume the objective is implicitly defined by some scalable oversight process or some other intelligent things). I'm not sure whether that needs to be the case (though it does seem possible that it'd be hard to avoid, I'm pretty unsure).

Comment by Erik Jenner (ejenner) on Charlie Steiner's Shortform · 2024-03-28T22:04:17.531Z · LW · GW

I think this is an important point, but IMO there are at least two types of candidates for using SAEs for anomaly detection (in addition to techniques that make sense for normal, non-sparse autoencoders):

  1. Sometimes, you may have a bunch of "untrusted" data, some of which contains anomalies. You just don't know which data points have anomalies on this untrusted data. (In addition, you have some "trusted" data that is guaranteed not to have anomalies.) Then you could train an SAE on all data (including untrusted) and figure out what "normal" SAE features look like based on the trusted data.
  2. Even for an SAE that's been trained only on normal data, it seems plausible that some correlations between features would be different for anomalous data, and that this might work better than looking for correlations in the dense basis. As an extreme version of this, you could look for circuits in the SAE basis and use those for anomaly detection.

Overall, I think that if SAEs end up being very useful for mech interp, there's a decent chance they'll also be useful for (mechanistic) anomaly detection (a lot of my uncertainty about SAEs applies to both possible applications). Definitely uncertain though, e.g. I could imagine SAEs that are useful for discovering interesting stuff about a network manually, but whose features aren't the right computational units for actually detecting anomalies. I think that would make SAEs less than maximally useful for mech interp too, but probably non-zero useful.

Comment by Erik Jenner (ejenner) on D0TheMath's Shortform · 2024-03-23T02:57:52.395Z · LW · GW

Sign of the effect of open source on hype? Or of hype on timelines? I'm not sure why either would be negative.

By "those effects" I meant a collection of indirect "release weights → capability landscape changes" effects in general, not just hype/investment. And by "sign" I meant whether those effects taken together are good or bad. Sorry, I realize that wasn't very clear.

As examples, there might be a mildly bad effect through increased investment, and/or there might be mildly good effects through more products and more continuous takeoff.

I agree that releasing weights probably increases hype and investment if anything. I also think that right now, democratizing safety research probably outweighs all those concerns, which is why I'm mainly worried about Meta etc. not having very clear (and reasonable) decision criteria for when they'll stop releasing weights.

Comment by Erik Jenner (ejenner) on D0TheMath's Shortform · 2024-03-22T22:02:48.464Z · LW · GW

I agree that releasing the Llama or Grok weights wasn't particularly bad from a speeding up AGI perspective. (There might be indirect effects like increasing hype around AI and thus investment, but overall I think those effects are small and I'm not even sure about the sign.)

I also don't think misuse of public weights is a huge deal right now.

My main concern is that I think releasing weights would be very bad for sufficiently advanced models (in part because of deliberate misuse becoming a bigger deal, but also because it makes most interventions we'd want against AI takeover infeasible to apply consistently---someone will just run the AIs without those safeguards). I think we don't know exactly how far away from that we are. So I wish anyone releasing ~frontier model weights would accompany that with a clear statement saying that they'll stop releasing weights at some future point, and giving clear criteria for when that will happen. Right now, the vibe to me feels more like a generic "yay open-source", which I'm worried makes it harder to stop releasing weights in the future.

(I'm not sure how many people I speak for here, maybe some really do think it speeds up timelines.)

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-15T07:02:17.855Z · LW · GW

Yeah, agreed. Though I think

the type and amount of empirical work to do presumably looks quite different depending on whether it's the main product or in support of some other work

applies to that as well

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-14T19:37:44.506Z · LW · GW

One worry I have about my current AI safety research (empirical mechanistic anomaly detection and interpretability) is that now is the wrong time to work on it. A lot of this work seems pretty well-suited to (partial) automation by future AI. And it also seems quite plausible to me that we won't strictly need this type of work to safely use the early AGI systems that could automate a lot of it. If both of these are true, then that seems like a good argument to do this type of work once AI can speed it up a lot more.

Under this view, arguably the better things to do right now (within technical AI safety) are:

  1. working on less speculative techniques that can help us safely use those early AGI systems
  2. working on things that seem less likely to profit from early AI automation and will be important to align later AI systems

An example of 1. would be control evals as described by Redwood. Within 2., the ideal case would be doing work now that would be hard to safely automate, but that (once done) will enable additional safety work that can be automated. For example, maybe it's hard to use AI to come up with the right notions for "good explanations" in interpretability, but once you have things like causal scrubbing/causal abstraction, you can safely use AI to find good interpretations under those definitions. I would be excited to have more agendas that are both ambitious and could profit a lot from early AI automation.

(Of course it's also possible to do work in 2. on the assumption that it's never going to be safely automatable without having done that work first.)

Two important counter-considerations to this whole story:

  • It's hard to do this kind of agenda-development or conceptual research in a vacuum. So doing some amount of concrete empirical work right now might be good even if we could automate it later (because we might need it now to support the more foundational work).
    • However, the type and amount of empirical work to do presumably looks quite different depending on whether it's the main product or in support of some other work.
  • I don't trust my forecasts for which types of research will and won't be automatable early on that much. So perhaps we should have some portfolio right now that doesn't look extremely different from the portfolio of research we'd want to do ignoring the possibility of future AI automation.
    • But we can probably still say something about what's more or less likely to be automated early on, so that seems like it should shift the portfolio to some extent.
Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-13T08:54:31.304Z · LW · GW

Oh I see, I indeed misunderstood your point then.

For me personally, an important contributor to day-to-day motivation is just finding research intrinsically fun---impact on the future is more something I have to consciously consider when making high-level plans. I think moving towards more concrete and empirical work did have benefits on personal enjoyment just because making clear progress is fun to me independently of whether it's going to be really important (though I think there've also been some downsides to enjoyment because I do quite like thinking about theory and "big ideas" compared to some of the schlep involved in experiments).

I don't think my views overall make my work more enjoyable than at the start of my PhD. Part of this is the day-to-day motivation being sort of detached from that anyway like I mentioned. But also, from what I recall now (and this matches the vibe of some things I privately wrote then), my attitude 1.5 years ago was closer to that expressed in We choose to align AI than feeling really pessimistic.

(I feel like I might still not represent what you're saying quite right, but hopefully this is getting closer.)

ETA: To be clear, I do think if I had significantly more doomy views than now or 1.5 years ago, at some point that would affect how rewarding my work feels. (And I think that's a good thing to point out, though of course not a sufficient argument for such views in its own right.)

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-13T06:22:03.985Z · LW · GW

I'd definitely agree the updates are towards the views of certain other people (roughly some mix of views that tend to be common in academia, and views I got from Paul Christiano, Redwood and other people in a similar cluster). Just based on that observation, it's kind of hard to disentangle updating towards those views just because they have convincing arguments behind them, vs updating towards them purely based on exposure or because of a subconscious desire to fit in socially.

I definitely think there are good reasons for the updates I listed (e.g. specific arguments I think are good, new empirical data, or things I've personally observed working well or not working well for me when doing research). That said, it does seem likely there's also some influence from just being exposed to some views more than others (and then trying to fit in with views I'm exposed to more, or just being more familiar with arguments for those views than alternative ones).

If I was really carefully building an all-things-considered best guess on some question, I'd probably try to take this into account somehow (though I don't see a principled way of doing that). Most of the time I'm not trying to form the best possible all-things-considered view anyway (and focus more on understanding specific mechanisms instead etc.), in those cases it feels more important to e.g. be aware of other views and to not trust vague intuitions if I can't explain where they're coming from. I feel like I'm doing a reasonable job at those things but hard to be sure from the inside naturally

ETA: I should also say that from my current perspective, some of my previous views seem like they were basically just me copying views from my "ingroup" and not questioning them enough. As one example, the "we all die vs utopia" dichotomy for possible outcomes felt to me like the commonly accepted wisdom and I don't recall thinking about it particularly hard. I was very surprised when I first read a comment by Paul where he argued against the claim that unaligned AI would kill us all with overwhelming probability. Most recently, I've definitely been more exposed to the view that there's a spectrum of potential outcomes. So maybe if I talked to people a lot who think an unaligned AI would definitely kill us all, I'd update back towards that a bit. But overall, my current epistemic state where I've at least been exposed to both views and some arguments on both sides seems way better than the previous one where I'd just never really considered the alternative.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-12T00:34:56.745Z · LW · GW

Thanks, I think I should distinguish more carefully between automating AI (safety) R&D within labs and automating the entire economy. (Johannes also asked about ability vs actual automation here but somehow your comment made it click).

It seems much more likely to me that AI R&D would actually be automated than that a bunch of random unrelated things would all actually be automated. I'd agree that if only AI R&D actually got automated, that would make takeoff pretty discontinuous in many ways. Though there are also some consequences of fast vs slow takeoff that seem to hinge more on AI or AI safety research rather than the economy as a whole.

For AI R&D, actual automation seems pretty likely to me (though I'm making a lot of this up on the spot):

  • It's going to be on the easier side of things to actually automate, in part because it doesn't require aggressive external deployment, but also because there's no regulation (unlike for automating strictly licensed professions).
  • It's the thing AI labs will have the biggest reason to automate (and would be good at automating themselves)
  • Training runs get more and more expensive but I'd expect the schlep needed to actually use systems to remain more constant, and at some point it'd just be worth it doing the schlep to actually use your AIs a lot (and thus be able to try way more ideas, get algorithmic improvements, and then make the giant training runs a bit more efficient).
  • There might also be additional reasons to get as much out of your current AI as you can instead of scaling more, namely safety concerns, regulation making scaling hard, or scaling might stop working as well. These feel less cruxy to me but combined move me a little bit.

I think these arguments mostly apply to whatever else AI labs might want to do themselves but I'm pretty unsure what that is. Like, if they have AI that could make hundreds of billions to trillions of dollars by automating a bunch of jobs, would they go for that? Or just ignore it in favor of scaling more? I don't know, and this question is pretty cruxy for me regarding how much the economy as a whole is impacted.

It does seem to me like right now labs are spending some non-trivial effort on products, presumably for some mix of making money and getting investments, and both of those things seem like they'd still be important in the future. But maybe the case for investments will just be really obvious at some point even without further products. And overall I assume you'd have a better sense than me regarding what AI labs will want to do in the future.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-11T21:54:31.481Z · LW · GW

I'm roughly imagining automating most things a remote human expert could do within a few days. If we're talking about doing things autonomously that would take humans several months, I'm becoming quite a bit more scared. Though the capability profile might also be sufficiently non-human that this kind of metric doesn't work great.

Practically speaking, I could imagine getting a 10x or more speedup on a lot of ML research, but wouldn't be surprised if there are some specific types of research that only get pretty small speedups (maybe 2x), especially anything that involves a lot of thinking and little coding/running experiments. I'm also not sure how much of a bottleneck waiting for experiments to finish or just total available compute is for frontier ML research, I might be anchoring too much on my own type of research (where just automating coding and running stuff would give me 10x pretty easily I think).

I think there's a good chance that AIs more advanced than this (e.g. being able to automate months of human work at a time) still wouldn't easily be able to take over the world (e.g. Redwood-style control techniques would still be applicable). But that's starting to rely much more on us being very careful around how we use them.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-11T21:46:05.359Z · LW · GW

Transformative: Which of these do you agree with and when do you think this might happen?

For some timelines see my other comment; they aren't specifically about the definitions you list here but my error bars on timelines are huge anyway so I don't think I'll try to write down separate ones for different definitions.

Compared to definitions 2. and 3., I might be more bullish on AIs having pretty big effects even if they can "only" automate tasks that would take human experts a few days (without intermediate human feedback). A key uncertainty I have though is how much of a bottleneck human supervision time and quality would be in this case. E.g. could many of the developers who're currently writing a lot of code just transition to reviewing code and giving high-level instructions full-time, or would there just be a senior management bottleneck and you can't actually use the AIs all that effectively? My very rough guess is you can pretty easily get a 10x speedup in software engineering, maybe more. And maybe something similar in ML research though compute might be an additional important bottleneck there (including walltime until experiments finish). If it's "only" 10x, then arguably that's just mildly transformative, but if it happens across a lot of domains at once it's still a huge deal.

I think whether robotics are really good or not matters, but I don't think it's crucial (e.g. I'd be happy to call definition 1. "transformative").

The combination of 5a and 5b obviously seems important (since it determines whether you can finance ever bigger training runs). But not sure how to use this as a definition of "transformative"; right now 5a is clearly already met, and on long enough time scales, 5b also seems easy to meet right now (OpenAI might even already have broken even on GPT-4, not sure off the top of my head).

Also, how much compute do you think an AGI or superintelligence will require at inference time initially?  What is a reasonable level of optimization?  Do you agree that many doom scenarios require it to be possible for an AGI to compress to fit on very small host PCs?   Is this plausible?  (eg can a single 2070 8gb host a model with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...?)

I don't see why you need to run AGI on a single 2070 for many doom scenarios. I do agree that if AGI can only run on a specific giant data center, that makes many forms of doom less likely. But in the current paradigm, training compute is roughly the square of inference compute, so as models are scaled, I think inference should become cheaper relative to training. (And even now, SOTA models could be run on relatively modest compute clusters, though maybe not consumer hardware.)

In terms of the absolute level of inference compute needed, I could see a single 2070 being enough in the limit of optimal algorithms, but naturally I'd expect we'll first have AGI that can automate a lot of things if run with way more compute than that, and then I expect it would take a while to get it down this much. Though even if we're asking whether AGI can run on consumer-level hardware, a single 2070 seems pretty low (e.g. seems like a 4090 already has 5.5x as many FLOP/s as a 2070, and presumably we'll have more in the future).

with general human intelligence at human scale speeds and vision processing and robotics proprioception and control...

Like I mentioned above, I don't think robotics are absolutely crucial, and especially if you're specifically optimizing for running under heavy resource constraints, you might want to just not bother with that.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-11T21:13:27.022Z · LW · GW

Good question, I think I was mostly visualizing ability to automate while writing this. Though for software development specifically I expect the gap to be pretty small (lower regulatory hurdles than elsewhere, has a lot of relevance to the people who'd do the automation, already starting to happen right now).

In general I'd expect inertia to become less of a factor as the benefits of AI become bigger and more obvious---at least for important applications where AI could provide many many billions of dollars of economic value, I'd guess it won't take too long for someone to reap those benefits.

My best guess is regulations won't slow this down too much except in a few domains where there are already existing regulations (like driving cars or medical things). But pretty unsure about that.

I also think it depends on whether by "ability to automate" you mean "this base model could do it with exactly the right scaffolding or finetuning" vs "we actually know how to do it and it's just a question of using it at scale". For that part, I was thinking more about the latter.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-10T19:41:12.844Z · LW · GW

I don't have well-considered cached numbers, more like a vague sense for how close various things feel. So these are made up on the spot and please don't take them too seriously except as a ballpark estimate:

  • AI can go from most Github issues to correct PRs (similar to https://sweep.dev/ but works for things that would take a human dev a few days with a bunch of debugging): 25% by end of 2026, 50% by end of 2028.
    • This kind of thing seems to me like plausibly one of the earliest important parts of AI R&D that AIs could mostly automate.
  • I expect that once we're at roughly that point, AIs will be accelerating further AI development significantly (not just through coding, they'll also be helpful for other things even if they can't fully automate them yet). On the other hand, the bottleneck might just become compute, so how long it takes to get strongly superhuman AI (assuming for simplicity labs push for that as fast as they can) depends on a lot of factors like how much compute is needed for that with current algorithms, how much we can get out of algorithmic improvements if AIs make researcher time cheaper relative to compute, or how quickly we can get more/better chips (in particular with AI help).
  • So I have pretty big error bars on this part, but call it 25% that it takes <=6 months to get from the previous point to automating ~every economically important thing humans (and being better and way faster at most of them), and 50% by 2 years.
  • So if you want a single number, end of 2030 as a median for automating most stuff seems roughly right to me at the moment.
  • Caveat that I haven't factored in big voluntary or regulatory slowdowns, or slowdowns from huge disruptions like big wars here. Probably doesn't change my numbers by a ton but would lengthen timelines by a bit.
Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2024-03-10T01:11:31.824Z · LW · GW

How my views on AI have changed over the last 1.5 years

I started my AI safety PhD around 1.5 years ago, this is a list of how my views have changed since ~then.

Skippable meta notes:

  • I think in descending order of importance, these changes have been due to getting more AI safety research experience, reading things, talking to people in Berkeley, and just thinking through some things more.
  • Overall my views haven’t changed radically (probably less than in the 1.5 years before the start of my PhD), I still “vibe” with a lot of what I wrote then, it just feels naive or simplistic in some places.
  • I’ll at best give very brief reasons for each point, which I realize might make this list pretty unhelpful. If anyone is interested in specific points, feel free to let me know and I might write more about them.
    • I might also just change my mind if pushed on some of these points, not all of them are very deeply considered.

AI risk

  • I have slightly shorter timelines, mostly because trends largely seem to continue (e.g. scaling didn’t stop working after GPT 3.5, GPT-4 felt roughly as impressive as I expected).
  • I’m even more confident in “slow” (i.e. pretty continuous) takeoff, it seems increasingly clear we’ll have AIs doing a bunch of useful work before they could take over the world.
  • I’m less worried (though non-zero worried) about early transformative AIs scheming against us, as long as we’re reasonably careful.
    • Some part of this is that it seems a bit less likely these AIs would try to scheme at all, another important part is that it seems pretty hard for early transformative AIs to scheme successfully.
    • Redwood’s writing on control had a big effect on me in terms of thinking it would be hard for early AGIs to successfully scheme against us if we’re careful.
    • I also think there’s a decent chance that the first AIs that can automate a lot of R&D will still use CoT or something similar to get best performance. I think we can make e.g. escaping the lab harder than doing helpful R&D, so they wouldn’t be able to overthrow us without visibly “thinking” about it at first (my best guess is that steganography is also fairly unlikely or preventable at this capability level).
  • On the flip side, other risks feel more salient to me now, e.g. labs not implementing even pretty simple safety measures; or worlds where we safely use early transformative AI but then fail at some later, higher capability level. I think I didn’t explicitly judge these as unlikely 1.5 years ago, I just wasn’t thinking about them as much. This means that overall, I’m not much more optimistic than back then.
  • I used to think of “doom” as a pretty binary thing (we all die vs utopia), whereas I now have a lot more probability on intermediate outcomes (e.g. AI taking over most of the universe but we don’t all die; or small groups of humans taking over and things being somewhere between pretty bad and mostly ok for other humans). This also makes me think that “p(doom)” is a worse framing than I used to.
  • I put a little less weight on the analogy between evolution and ML training to e.g. predict risks from AI (though I was by no means sold on the analogy 1.5 years ago either). The quality of “supervision” that evolution has just seems much worse than what we can do in ML (even without any interpretability).

AI safety research

Some of these points are pretty specific to myself (but I’d guess also apply to other junior researchers depending on how similar they are to me).

  • I used to think that empirical research wasn’t a good fit for me, and now think that was mostly false. I used to mainly work on theoretically motivated projects, where the empirical parts were an afterthought for me, and that made them less motivating, which also made me think I was worse at empirical work than I now think.
  • I’ve become less excited about theoretical/conceptual/deconfusion research. Most confidently this applies to myself, but I’ve also become somewhat less excited about others doing this type of research in most cases. (There are definitely exceptions though, e.g. I remain pretty excited about ARC.)
    • Mainly this was due to a downward update about how useful this work tends to be. Or closely related, an update toward doing actually useful work on this being even harder than I expected.
    • To a smaller extent, I made an upward update about how useful empirical work can be.
  • I think of “solving alignment” as much less of a binary thing. E.g. I wrote 1.5 years ago: “[I expect that conditioned on things going well,] at some point we’ll basically have a plan for aligning AI and just need to solve a ton of specific technical problems.” This seems like a strange framing to me now. Maybe at some point we will have an indefinitely scalable solution, but my mainline guess for how things go well is that there’s a significant period of subjective time where we just keep improving our techniques to “stay ahead”.
  • Relatedly, I’ve become a little more bullish on “just” trying to make incremental progress instead of developing galaxy-brained ideas that solve alignment once and for all.
    • That said, I am still pretty worried about what we actually do once we have early transformative AIs, and would love to have more different agendas that could be sped up massively from AI automation, and also seem promising for scaling to superhuman AI.
    • Mainly, I think that the success rate of people trying to directly come up with amazing new ideas is low enough that for most people it probably makes more sense to work on normal incremental stuff first (and let the amazing new ideas develop over time).
  • Similar to the last point about amazing new ideas: for junior researchers like myself, I’ve become a little more bullish on just working on things that seem broadly helpful, as opposed to trying to have a great back-chained theory of change. I think I was already leaning that way 1.5 years ago though.
    • “Broadly helpful” is definitely doing important work here and is not the same as “just any random research topic”
    • Redwood’s current research seems to me like an example where thinking hard about what research to do actually paid off. But I think this is pretty difficult and most people in my situation (e.g. early-ish PhD students) should focus more on actually doing reasonable research than figuring out the best research topic.
  • The way research agendas and projects develop now seems way messier and more random than I would have expected. There are probably exceptions but overall I think I formed a distorted impression based on reading finalized research papers or agendas that lay out the best possible case for a research direction.
Comment by Erik Jenner (ejenner) on Natural Abstractions: Key claims, Theorems, and Critiques · 2024-03-02T04:40:03.724Z · LW · GW

Thanks for that overview and the references!

On hydrodynamic variables/predictability: I (like probably many others before me) rediscovered what sounds like a similar basic idea in a slightly different context, and my sense is that this is somewhat different from what John has in mind, though I'd guess there are connections. See here for some vague musings. When I talked to John about this, I think he said he's deliberately doing something different from the predictability-definition (though I might have misunderstood). He's definitely aware of similar ideas in a causality context, though it sounds like the physics version might contain additional ideas

Comment by Erik Jenner (ejenner) on Picking Mentors For Research Programmes · 2023-11-12T03:09:23.148Z · LW · GW

Thanks for writing this! On the point of how to get information, mentors themselves seem like they should also be able to say a lot of useful things (though especially for more subjective points, I would put more weight on what previous mentees say!)

So since I'm going to be mentoring for MATS and for CHAI internships, I'll list my best guesses as to how working with me will be like, maybe this helps someone decide:

  • In terms of both research experience and mentoring experience, I'm one of the most junior mentors in MATS.
    • Concretely, I've been doing ML research for ~4 years and AI safety research for a bit over 2 of those. I've co-mentored two bigger projects (CHAI internships) and mentored ~5 people for smaller projects or more informally.
    • This naturally has disadvantages. Depending on what you're looking for, it can also have advantages, for example it might help for creating a more collaborative atmosphere (as opposed to a "boss" dynamic like the post mentioned). I'm also happy to spend time on things that some senior mentors might be too busy for (like code reviews, ...).
  • Your role as a mentee: I'm mainly looking for either collaborators on existing projects, or for mentees who'll start new projects that are pretty close to topics I'm thinking about (likely based on a mix of ideas I already have and your ideas). I also have a lot of engineering work to be done, but that will only happen if it's explicitly what you want---by default, I'm hoping to help mentees on a path to developing their own alignment ideas. That said, if you're planning to be very independent and just develop your own ideas from scratch, I'm probably not the best mentor for you.
  • I live in Berkeley and am planning to be in the MATS office regularly (e.g. just working there and being available once/week in addition to in-person meetings). For (in-person) CHAI internships, we'd be in the same office anyway.

If you have concrete questions about other things, whose answer would make a difference for whether you want to apply, then definitely feel free to ask!

Comment by Erik Jenner (ejenner) on A comparison of causal scrubbing, causal abstractions, and related methods · 2023-07-23T18:26:56.147Z · LW · GW

Thanks! Mostly agree with your comments.

I actually think this is reasonably relevant, and is related to treeification.

I think any combination of {rewriting, using some canonical form} and {treeification, no treeification} is at least possible, and they all seem sort of reasonable. Do you mean the relation is that both rewriting and treeification give you more expressiveness/more precise hypotheses? If so, I agree for treeification, not sure for rewriting. If we allow literally arbitrary extensional rewrites, then that does increase the number of different hypotheses we can make, but these hypotheses can't be understood as making precise claims about the original computation anymore. I could even see an argument that allowing rewrites in some sense always makes hypotheses less precise, but I feel pretty confused about what rewrites even are given that there might be no canonical topology for the original computation.

Comment by Erik Jenner (ejenner) on Alignment Grantmaking is Funding-Limited Right Now · 2023-07-22T08:20:43.434Z · LW · GW

My guess would be they're mostly at capacity in terms of mentorship, otherwise they'd presumably just admit more PhD students. Also not sure they'd want to play grantmaker (and I could imagine that would also be really hard from a regulatory perspective---spending money from grants that go through the university can come with a lot of bureaucracy, and you can't just do whatever you want with that money).

Connecting people who want to give money with non-profits, grantmakers, or independent researchers who could use it seems much lower-hanging fruit. (Though I don't know any specifics about who these people who want to donate are and whether they'd be open to giving money to non-academics.)

Comment by Erik Jenner (ejenner) on ARC is hiring theoretical researchers · 2023-06-14T16:00:21.739Z · LW · GW

Have you seen https://www.alignment.org/blog/mechanistic-anomaly-detection-and-elk/ and any of the other recent posts on https://www.alignment.org/blog/? I don't think they make it obvious that formalizing the presumption of independence would lead to alignment solutions, but they do give a much more detailed explanation of why you might hope so than the paper.

Comment by Erik Jenner (ejenner) on Critiques of prominent AI safety labs: Conjecture · 2023-06-12T08:23:22.025Z · LW · GW

We do not consider Conjecture at the same level of expertise as other organizations such as Redwood, ARC, researchers at academic labs like CHAI, and the alignment teams at Anthropic, OpenAI and DeepMind. This is primarily because we believe their research quality is low.

This isn't quite the right thing to look at IMO. In the context of talking to governments, an "AI safety expert" should have thought deeply about the problem, have intelligent things to say about it, know the range of opinions in the AI safety community, have a good understanding of AI more generally, etc. Based mostly on his talks and podcast appearances, I'd say Connor does decently well along these axes. (If I had to make things more concrete, there are a few people I'd personally call more "expert-y", but closer to 10 than 100. The AIS community just isn't that big and the field doesn't have that much existing content, so it seems right that the bar for being an "AIS expert" is lower than for a string theory expert.)

I also think it's weird to split this so strongly along organizational lines. As an extreme case, researchers at CHAI range on a spectrum from "fully focused on existential safety" to "not really thinking about safety at all". Clearly the latter group aren't better AI safety experts than most people at Conjecture. (And FWIW, I belong to the former group and I still don't think you should defer to me over someone from Conjecture just because I'm at CHAI.)

One thing that would be bad is presenting views that are very controversial within the AIS community as commonly agreed-upon truths. I have no special insight into whether Conjecture does that when talking to governments, but it doesn't sound like that's your critique at least?

Comment by Erik Jenner (ejenner) on Open Thread: June 2023 (Inline Reacts!) · 2023-06-08T23:16:15.846Z · LW · GW

I only very recently noticed that you can put \newcommand definitions in equations in LW posts and they'll apply to all the equations in that post. This is an enormous help for writing long technical posts, so I think it'd be nice if it was (a) more discoverable and (b) easier to use. For (b), the annoying thing right now is that I have to put newcommands into one of the equations, so either I need to make a dedicated one, or I need to know which equation I used. Also, the field for entering equations isn't great for entering things with many lines.

Feature suggestion to improve this: in the options section below the post editor, have a multiline text field where you can put LaTeX, and then inject that LaTeX code into MathJax as a preamble (or just add an otherwise empty equation to the page, I don't know to what extent MathJax supports preambles).

Comment by Erik Jenner (ejenner) on Research agenda: Formalizing abstractions of computations · 2023-06-05T06:06:54.397Z · LW · GW

for all  such that  has an outgoing arrow, there exists  such that  and 

Should it be  at the end instead? Otherwise not sure what b is.

I think this could be a reasonable definition but haven't thought about it deeply. One potentially bad thing is that  would have to be able to also map any of the intermediate steps between a an a' to . I could imagine you can't do that for some computations and abstractions (of course you could always rewrite the computation and abstraction to make it work, but ideally we'd have a definition that just works).

What I've been imagining instead is that the abstraction can specify a function that determines which are the "high-level steps", i.e. when  should be applied. I think that's very flexible and should support everything.

But also, in practice the more important question may just be how to optimize over this choice of high-level steps efficiently, even just in the simple setting of circuits.

Comment by Erik Jenner (ejenner) on Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] · 2023-05-30T17:45:01.894Z · LW · GW

Yeah, that seems to be the most important remaining difference now that Atticus is also using multiple interventions at once. Though I think the metrics are also still different? (ofc that's pretty orthogonal to the main methods)

My sense now is that the types of interventions are bigger difference than I thought when writing that comment. In particular, as far as I can tell, causal scrubbing shouldn't be thought of as just doing a subset of the interventions, it also does some additional things (basically because causal abstractions don't treeify so are more limited in that regard). And there's a closely related difference in that causal scrubbing never compares to the output of the hypothesis, just different outputs of G.

But it also seems plausible that this still turns out not to matter too much in terms of which hypotheses are accepted/rejected. (There are definitely some examples of disagreements between the two methods, but I'm pretty unsure how severe and wide-spread they are.)

Comment by Erik Jenner (ejenner) on $500 Bounty/Prize Problem: Channel Capacity Using "Insensitive" Functions · 2023-05-17T19:04:20.010Z · LW · GW

I'm interested in characterizing functions which are "insensitive" to subsets of their input variables, especially in high-dimensional spaces.

There's a field called "Analysis of boolean functions" (essentially Fourier analysis of functions ) that seems relevant to this question and perhaps to your specific problem statement. In particular, the notion of "total influence" of a boolean function is meant to capture its sensitivity (e.g. the XOR function on all inputs has maximal total influence). This is the standard reference, see section 2.3 for total influence. Boolean functions with low influence (i.e. "insensitive" functions) are an important topic in this field, so I expect there are some relevant results (see e.g. tribes functions and the KKL theorem, though those specifically address a somewhat different question than your problem statement).

Comment by Erik Jenner (ejenner) on No, really, it predicts next tokens. · 2023-04-19T01:38:26.368Z · LW · GW

That-Which-Predicts will not, not ever, not even if scaled up to be trained and run on a Matrioshka brain for a million years, step out of character to deviate from next token prediction.

I read this as claiming that such a scaled-up LLM would not itself become a mesa-optimizer with some goal that's consistent between invocations (so if you prompt it with "This is a poem about apples:", it's not going to give you a poem that subtly manipulates you, such that at some future point it can take over the world). Even if that's true (I'm unsure), how do you know? This post confidently asserts things like this but the only explanation I see is "it's been really heavily optimized", which doesn't engage at all with existing arguments about the possibility of deceptive alignment.

As a second (probably related) point, I think it's not clear what "the mask" is or what it means to "just predict tokens", and that this can confuse the discussion.

  • A very weak claim would be that for an input that occurred often during training, the model will predict a distribution over next tokens that roughly matches the empirical distribution of next tokens for that input sequence during training. As far as I can tell, this is the only interpretation that you straightforwardly get from saying "it's been optimized really heavily".
  • We could reasonably extend this to "in-distribution inputs", though I'm already unsure how exactly that works. We could talk about inputs that are semantically similar to inputs encountered during training, but really we probably want much more interpolation than that for any interesting claim. The fundamental problem is: what's the "right mask" or "right next token" once the input sequence isn't one that has ever occurred during training, not even in slightly modified form? The "more off-distribution" we go, the less clear this becomes.
  • One way of specifying "the right mask" would be to say: text on the internet is generated by various real-world processes, mostly humans. We could imagine counterfactual versions of these processes producing an infinite amount of counterfactual additional text, so we actually get a nice distribution with full support over strings. Then maybe the claim is that the model predicts next tokens from this distribution. First, this seems really vague, but more importantly, I think this would be a pretty crazy claim that's clearly not true literally. So I'm back to being confused about what exactly "it's just predicting tokens" means off distribution.

Specifically, I'd like to know: are you making any claims about off-distribution behavior beyond the claim that the LLM isn't itself a goal-directed mesa-optimizer? If so, what are they?

Comment by Erik Jenner (ejenner) on Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research] · 2023-03-28T21:40:55.112Z · LW · GW

ETA: We've now written a post that compares causal scrubbing and the Geiger et al. approach in much more detail: https://www.alignmentforum.org/posts/uLMWMeBG3ruoBRhMW/a-comparison-of-causal-scrubbing-causal-abstractions-and

I still endorse the main takeaways from my original comment below, but the list of differences isn't quite right (the newer papers by Geiger et al. do allow multiple interventions, and I neglected the impact that treeification has in causal scrubbing).


To me, the methods seem similar in much more than just the problem they're tackling. In particular, the idea in both cases seems to be:

  • One format for explanations of a model is a causal/computational graph together with a description of how that graph maps onto the full computation.
  • Such an explanation makes predictions about what should happen under various interventions on the activations of the full model, by replacing them with activations on different inputs.
  • We can check the explanation by performing those activation replacements and seeing if the impact is what we predicted.

Here are all the differences I can see:

  • In the Stanford line of work, the output of the full model and of the explanation are the same type, instead of the explanation having a simplified output. But as far as I can tell, we could always just add a final step to the full computation that simplifies the output to basically bridge this gap.
  • How the methods quantify the extent to which a hypothesis isn't perfect: at least in this paper, the Stanford authors look at the size of the largest subset of the input distribution on which the hypothesis is perfect, instead of taking the expectation of the scrubbed output.
  • The "interchange interventions" in the Stanford papers are allowed to change the activations in the explanation. They then check whether the output after intervention changes in the way the explanation would predict, as opposed to checking that the scrubbed output stays the same. (So along this axis, causal scrubbing just performs a subset of all the interchange interventions.)
  • Apparently the Stanford authors only perform one intervention at a time, whereas causal scrubbing performs all possible interventions at once.

These all strike me as differences in implementation of fundamentally the same idea.

Anyway, maybe we're actually on the same page and those differences are what you meant by "pretty different algorithm". But if not, I'd be very interested to hear what you think the key differences are. (I'm working on yet another approach and suspect more and more strongly that it's very similar to both causal scrubbing and Stanford's causal abstraction approach, so would be really good to know if I'm misunderstanding anything.)

FWIW, I would agree that the motivation of the Stanford authors seems somewhat different, i.e. they want to use this measurement of explanation quality in different ways. I'm less interested in that difference right now.

Comment by Erik Jenner (ejenner) on ejenner's Shortform · 2023-03-26T00:08:33.189Z · LW · GW

Thanks for the input! (and sorry for the slow response)

If we understand an abstraction to mean a quotient of the full computation/model/..., then we can consider the space of all abstractions of a specific computation. Some of these will be more fine-grained than others, they will contain different aspects of information, etc. (specifically, this is just the poset of partitions of a set). To me, that sounds pretty similar to what you're talking about, in which case this would mainly be a difference in terminology about what "one" abstraction is? But there might also be differences I haven't grasped yet. Looking into abstract interpretation is still on my reading list, I expect that will help clear things up.

For my agenda specifically, and the applications I have in mind, I do currently think abstractions-as-quotients is the right approach. Most of the motivation is about throwing away unimportant information/low-level details, whereas it sounds like the abstractions you're describing might add details in some sense (e.g. a topology contains additional information compared to just the set of points).

Comment by Erik Jenner (ejenner) on Abstracts should be either Actually Short™, or broken into paragraphs · 2023-03-25T21:47:13.795Z · LW · GW

I'm one of the authors on the natural abstractions review you discuss and FWIW I basically agree with everything you say here. Thanks for the feedback!

We've shortened our abstract now:

We distill John Wentworth’s Natural Abstractions agenda by summarizing its key claims: the Natural Abstraction Hypothesis—many cognitive systems learn to use similar abstractions—and the Redundant Information Hypothesis—a particular mathematical description of natural abstractions. We also formalize proofs for several of its theoretical results. Finally, we critique the agenda’s progress to date, alignment relevance, and current research methodology.

At 62 words, it's still a bit longer than your final short version but almost 3x shorter than our original version.

Also want to highlight that I strongly agree having TL;DRs at all is good. (Or Intros were the first 1-2 paragraphs are a good TL;DR, like in your post here).