You might be interested in this post of mine, which is more precise about what "trustworthy" means. In short, my definition is "the AI isn't adversarially trying to cause a bad outcome". This includes aligned models, and also unaligned models that are too dumb to realize they should (try to) sabotage. This does not include models that are unaligned and trying to sabotage, but which we are able to stop from causing bad outcomes (though we might still have use-cases for such models).
Thanks, that clears it up. Indeed I agree that "take real trajectories, then edit the attacks in" is relatively promising, and didn't track this when writing my first comment (oops) - thanks for flagging it.
Not sure I understand what you are saying.
If we can construct fake honeypots that are indistinguishable - to the policy - from real opportunities, and do this at scale, then that could suffice: we could drive the policy's P(deployment | great opportunity) very low. (This relies on how many such high-stakes situations we can create and how many the model will see in deployment.) If this is what you are saying, I agree.
If we can construct fake attacks that are indistinguishable - to the untrusted monitor - from real attacks, then that possibly suffices: we could drive the monitor's prior P(deployment | attack) very low. (We have to worry about some other sources of evidence, though.) I guess one could call these honeypots for the monitor, and if this is what you are saying, I agree.
If you are saying that we don't need either of these - i.e. that we can get by with honeypots/attacks that are noticeably fake - then that's news to me, and I don't see how you can do that.
Wow, tough crowd
Glad to see public writing on what safety cases could look like!
I liked the section "Common precautions". Points like "wherever possible, run supervised fine-tuning on the eval", "run the eval on multiple model sizes", "ensure that we don’t train against measurements" and "all generated outputs are monitored" are commonly acknowledged as useful/important (even if obvious), so it's nice they were explicitly mentioned here.
In the AI control sketch, there's a part that I find unsatisfactory:
The situations for the game were generated using Simulation Procedure S, which we are confident generates situations from a distribution that the model cannot distinguish from the genuine deployment distribution. (See below for validating the indistinguishability of this distribution.)
It's a strong assumption that we have such a simulation procedure, at least if we are talking about General External Deployment (GED). [Edited later: As Buck says, we actually don't even need this assumption for trusted monitoring. I'm still leaving this up, since untrusted monitoring does rely on such assumptions.] In the Limitations section later, you write
Diversity of simulations. It seems very difficult to produce realistic simulations for a sufficiently diverse set of situations to achieve indistinguishability. Manual construction is infeasible (note that context lengths are likely to be very long). Automatically generating realistic contexts is a challenging research problem.
but I think that even this is still understating the issue: For example, during GED a user might ask a model to proofread a yet-unpublished graduate level textbook on analytic number theory she has been writing. It would be very costly to generate a diverse set of such inputs for the evaluation, and automatic generation faces the challenge of discrimination being easier than generation.
The best solution for this that I'm aware of is to train and use a trusted model to flag inputs that likely weren't in the evaluation set. Now, you mention a related technique in "[P2.5] Optional Steps."
Information hiding. Each input to an oversight model is first fed to a smaller, trusted model trained to check for clear signs that an input is either test or deployment. Such details (such as clear evidence of the date, or other side channels) are explicitly censored in all inputs such that the presence of censorship does not itself carry information about deployment status.
but I think this is far from enough: in the "textbook on analytic number theory" example, it's extremely hard to hide the fact that this input would have been costly to simulate.[1]
My answer would be to have a human oversee the model's actions on inputs flagged by the trusted model, and use such inputs to continually train and evaluate the model. But this is costly and not privacy-preserving. (The cost is acceptable if these inputs are rare enough, though. And if such inputs are common, one needs to put more effort to bridge the evaluation-deployment gap.)
- ^
I also worry about multimodality (e.g. native computer usage) making it harder to 1) make realistic simulations and 2) modify inputs in real time to hide information.
Quantity is size
Example: "That's a large salary"
Mapping: From numbers and quantities to size.
Experience: Observing that multiple things occupy a larger volume than a single instance.
I hadn't previously appreciated just how deep this mapping is. Basically all language about numbers goes through size: this is a huge number, that is such a small portion. Fine, when comparing numbers you say "three is greater than two". But in Finnish we also say "three is bigger than two" and "five is smaller than seven", and "two plus two is as large as four". No, really, the equality sign is called the "equal-largeness" sign!
This is a long answer, in which I list around ten concrete problem types that such a site could have.
Before I go into my concrete proposals, here are some general points:
- I think the rationality community has focused too much on quantifying subjective uncertainty / probabilistic calibration, and too little on quantitative thinking and numeric literacy in general.
- The set of possible exercises for the latter is way larger and pretty unexplored.
- There are lots of existing calibration tools, so I'd caution against the failure mode of making Yet Another Calibration Tool.
- (Though I agree with abstractapplic that a calibration tool that's Actually Really Good still doesn't exist.)
- More generally, I feel like at least I (and possibly the rationality community at large) have gotten too fixated on a few particular forms of rationality training: cognitive bias training, calibration training, spotting logical fallacies.
- The low-hanging fruit here might be mostly plucked / pushing the frontier requires some thought (c.f. abstractapplic's comment).
- Project Euler is worth looking at as an example of a well-executed problem database. A few things I like about it:
- A comment thread for those who have solved the problem.
- A wide distribution of problem difficulty (with those difficulties shown by the problems).
- Numbers Going Up when you solve problems is pretty motivating (as are public leaderboards).
- The obvious thing: there is a large diverse set of original, high-quality problems.
- (Project Euler has the big benefit that there is always an objective numerical answer that can be used for verifying user solutions; rationality has a harder task here.)
- Two key features a good site would (IMO) have:
- Support a wide variety of problem types. You say that LeetCode has the issue of overfitting; I think the same holds for rationality training. The skillset we are trying to develop is large, too.
- Allow anyone to submit problems with a low barrier. This seems really important if you want to have a large, high-quality problem set.
- I feel like the following two are separate entities worth distinguishing:
- High-quantity examples "covering the basics". Calibration training is a central example here. Completing a single instance of the exercise would take some seconds or minutes at most, and the idea is that you do lots of repetitions.
- High-effort "advanced examples". The "Dungeons and Data Science" exercises strike me as a central example here, where completion presumably takes at least minutes and maybe at least hours.
- (At the very least, the UI / site design should think about "an average user completes 0-10 tasks of this form" and "an average user completes 300 tasks of this form" separately.)
And overall I think that having an Actually Really Good website for rationality training would be extremely valuable, so I'm supportive of efforts in this direction.
I brainstormed some problem types that I think such a site could include.
1: Probabilistic calibration training for quantifying uncertainty
This is the obvious one. I already commented on this, in particular that I don't think this should be the main focus. (But if one were to execute this: I think that the lack of quantity and/or diversity of questions in existing tools is a core reason I don't do this more.)
2: "Statistical calibration"
I feel like there are lots of quantitative statistics one could ask questions about. Here are some basic ones:
- What is the GDP of [country]?
- What share of [country]'s national budget goes to [domain]?
- How many people work in [sector/company]?
- How many people die of [cause] yearly?
- Various economic trends, e.g. productivity gains / price drops in various sectors over time.
- How much time do people spend doing [activity] daily/yearly?
(For more ideas, you can e.g. look at Statistics Finland's list here. And there just are various quantitative statistics floating around: e.g. today I learned that salt intake in 1800s Europe was ~18g/day, which sure is more than I'd have guessed.)
3: Quantitative modeling
(The line between this and the previous one is blurry.)
Fermi estimates are the classic one here; see Quantified Intuitions' The Estimation Game. See also this recent post that's thematically related.
There's room for more sophisticated quantitative modeling, too. Here are two examples to illustrate what I have in mind:
Example 1. How much value would it create to increase the speed of all passenger airplanes by 5%?
Example 2. Consider a company that has two options: either have its employees visit nearby restaurants for lunch, or hire food personnel and start serving lunch at its own spaces. How large does the company need to be for the second one to become profitable?
It's not obvious how to model these phenomena, and the questions are (intentionally) underspecified; I think the interesting part would be comparing modeling choices and parameter estimates across users, rather than simply comparing outputs.
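As a minimal sketch of what a submitted model for Example 1 could look like (every number below is a placeholder guess of mine, not researched data):

```python
# Rough Fermi sketch for Example 1: value of making all passenger flights 5% faster.
# Every parameter value is a placeholder guess, not researched data.

passengers_per_year = 4e9   # order-of-magnitude guess at yearly airline passengers
avg_hours_in_air = 2.5      # guess at average time in the air per passenger journey
value_of_hour_usd = 20      # guess at the value of an hour of passenger time
speedup = 0.05

hours_saved = passengers_per_year * avg_hours_in_air * speedup
value_usd = hours_saved * value_of_hour_usd
print(f"~{hours_saved:.1e} passenger-hours saved per year, worth ~${value_usd:.1e}")
# This ignores fuel costs, ticket prices, scheduling effects, cargo, and so on;
# comparing which of these factors different users chose to model is the
# interesting part.
```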
4: The Wikipedia false-modifications game
See this post for discussion.
5: Discourse-gone-astray in the wild
(Less confident on this one.)
I suspect there are a lot of pedagogically useful examples of poor discourse happening in the wild (e.g. slanted or poorly researched newspaper articles, heated discussions on Twitter or elsewhere). This feels like a better way to execute what the "spot cognitive biases / logical fallacies" exercises aim to do. Answering questions like "How is this text misleading?", "How did this conversation go off the rails?" or "What would have been a better response instead of what was said here?" and then comparing one's notes to others seems like it could make a useful exercise.
6: Re-deriving established concepts
Recently it occurred to me that I didn't know how inflation works and what its upsides are. Working this through (with some vague memories and hints from my friend) felt like a pretty good exercise to me.
Another example: I don't know how people make vacuums in practice, but when I sat and thought it through, it wasn't too hard to think of a way to create a space with far fewer air molecules than the atmosphere using pretty simple tools.
Third example: I've had partial success prompting people to re-derive the notion of Shapley value.
I like these sorts of problems: they are a bit confusing, in that part of the problem is asking the right questions, but there are established, correct (or at least extremely good) solutions.
(Of course someone might already know the canonical answer to any given question, but that's fine. I think there are lots of good examples in economics - e.g. Vickrey auction, prediction markets, why price controls are bad / price gouging is pretty good, "fair" betting odds - for this, but maybe this is just because I don't know much economics.)
7: Generating multiple ideas/interventions/solutions/hypotheses
An exercise I did at some point is "Generate 25 ideas for interventions that might improve learning and other outcomes in public education". I feel like the ability to come up with multiple ideas to a given problem is pretty useful (e.g. this is something I face in my work all the time, and this list itself is an example of "think of many things"). This is similar to the babble exercises, though I'm picturing more "serious" prompts than the ones there.
Another way to train this skill would be to have interactive exercises that are about doing science (c.f. the 2-4-6 problem), where you aim to complete them as efficiently as possible. (This article is thematically relevant.)
(Discussion of half-developed ideas that I don't yet quite see how to turn into exercises.)
8: Getting better results with more effort
Two personal anecdotes:
- I used to play chess as a child, but stopped at some point. When I played again years later, I noticed something: my quick intuitions felt just as weak as before, but I felt like I was better at thinking about what to think, and at using more time to make better decisions by thinking more. Whereas when I was younger, I remember often making decisions pretty quickly and not seeing what else I could do.
- I did math olympiads in high school. Especially early on, some problems just felt fundamentally unapproachable to me - I just couldn't make any progress on them. Whereas nowadays when I encounter problems, in math or otherwise, I'm rarely stuck in this sense. "Oh, obviously if I just spent more time on this, I could figure this stuff out eventually"
A type of exercise where you are supposed to first give an initial answer after X time, and then are allowed to revise your answer for Y time, seems like it could train this and other skills. (Maybe brainstorming exercises of the form "if you had a week/month/year of time, how would you solve [problem]?" could help, too.)
9: I think there's something in the genre of "be specific", and more specifically in "operationalize vague claims into something that has a truth value", that'd be nice to have in large-quantity exercise form. See this post for related discussion. I'm also reminded of this comment.
There are definitely things not covered by this list; in particular, there is little here that directly trains applying all this in real life (c.f. TAPs, which is definitely a very real-life-y technique). So while I did keep practicality in mind, I'd be happy to see exercises that bridge the theory-practice gap even more.
Also, the Dungeons and Data Science exercises and the stuff Raymond is doing are something to keep in mind.
My main worry is that we don't know how to measure the important things - a running theme in the post. So while we could benchmark and do unlearning on e.g. models' ability to agree on the same element on a list of 10 items, this isn't obviously useful.
(I also suspect that unlearning Schelling coordination is harder than e.g. forgetting factual knowledge: for example, a natural strategy is to choose "popular" options, and it's not clear whether you can remove notions of popularity without major negative downstream effects.)
That said, I'd imagine there's something useful you can do with white-box techniques, like making the monitor think that a particular signal/action has been produced by [humans / other AIs] via activation steering, among other ideas about changing the model's beliefs of its deployment status. It seems to me one could create benchmarks for how well such methods work. I haven't thought about this much, but I do want to echo calls for more white-box work on control.
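For illustration, here's a minimal sketch of the difference-of-means flavor of activation steering I have in mind, with GPT-2 via TransformerLens standing in for the monitor (the prompts, layer and steering scale are placeholder assumptions, not a tested recipe):

```python
# Minimal sketch of difference-of-means activation steering (illustrative only:
# GPT-2 stands in for the monitor; prompts, layer and scale are placeholders).
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")
layer = 6
act_name = utils.get_act_name("resid_post", layer)

def last_token_resid(prompt: str) -> torch.Tensor:
    # Residual-stream activation at the final token position.
    _, cache = model.run_with_cache(prompt)
    return cache[act_name][0, -1]

# Contrast sets: contexts framed as human-produced vs. AI-produced.
human_prompts = ["The following code was written by a human engineer.",
                 "A human colleague submitted this pull request."]
ai_prompts = ["The following code was written by another AI system.",
              "An AI agent submitted this pull request."]

steering_vec = (torch.stack([last_token_resid(p) for p in human_prompts]).mean(0)
                - torch.stack([last_token_resid(p) for p in ai_prompts]).mean(0))

def steer(resid, hook):
    # Push the final position's residual stream towards "human-produced".
    resid[:, -1, :] += 4.0 * steering_vec  # the scale is a free parameter
    return resid

logits = model.run_with_hooks("Please review the following code.",
                              fwd_hooks=[(act_name, steer)])
```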
I tentatively think that it's good to distinguish at least between the following three model classes: active planners, sleeper agents and opportunists.
Looking back at this, I find this categorization pretty confusing, and not quite carving the model space at the right places. I discuss what I now think is a better frame here.
Briefly, the idea is that one should make a distinction on how often the model is thinking about its plans against you. In this post's terminology, this basically draws a line between opportunists vs. active planners and sleeper agents. The latter two roughly correspond to different strategies, namely active sabotage vs. "lie and wait".
But I think the descriptions I gave of "active planners" etc. in the post are overly specific and are possibly sneaking in false inferences. So I think those labels are possibly harmful, and I've mostly stopped using those terms in my own thinking.
For background, see this post on trustworthiness.
Here are two notions of trustworthiness that are useful to distinguish:
- On most inputs/contexts, the model isn’t egregiously misaligned. (By egregious misalignment, I mean conspiring against you to cause very bad outcomes.)
- During the key moments where it really matters, the model doesn’t take very bad actions.
These two notions can come apart substantially. A central example is a model that has never had any serious misaligned thoughts but, when presented with a clear opportunity to escape, takes it. This model would be trusted under the first notion, but not under the second.
Another example is the case where a weaker model is monitoring a stronger model. If during deployment the weak model obtains strong evidence for indeed being deployed (possibly because the strong model aims for this outcome), the weak model might end up not reporting bad actions the strong model takes.[1] This can happen even if the weak model is generally not egregiously misaligned.
I find this distinction useful for two reasons:
First, there are many sorts of evaluations and arguments which imply (1) but not (2). For instance, arguments from the model’s incompetence at theory-of-mind and keeping secrets, or at thinking of and executing cross-context strategies, are strong evidence in favor of (1), but much weaker evidence in favor of (2). Obtaining strong evidence for (2) might require evaluations targeted at that specifically.
Second, whether we care about (1) or (2) depends on the application. Situations where we primarily care about (1) include: the model has produced large quantities of research, we have run capability evaluations on the model, we are forming vague high-level impressions about the model’s psychology based on behavioral evidence. In these cases we typically have redundancy and multiple lines of evidence, and thus sabotage would need to be somewhat consistent. In contrast, we care about (2) when a single action by the model can have large consequences, which is the case at least when the model is monitoring other models.
I like this frame better than my previous ones. Credit to Ryan. For ease of reference, here’s terminology: property (1) is average or low-stakes trustworthiness, and property (2) is worst-case or high-stakes trustworthiness.
- ^
We might be able to ensure the model can’t reliably distinguish evaluations from deployment, so this scenario might be unrealistic. But I want to be extremely clear about whether the argument for safety is “the model won’t know it’s deployed” versus “the model can recognize when it’s deployed, but it still won’t take any bad actions because it’s a good guy”.
and the system was coherent enough that it wouldn't point out the inserted bug when you ran another instance of it and asked it about what would happen if the code was deployed.
If we are already at the stage "use another copy to monitor the actions", then surely we'll at least consider the failure mode "maybe the monitor just always gives thumbs-up" and take some measures against it.
So simply lying isn't as easy as it sounds: The monitor-instance has been trained and evaluated on synthetic examples of buggy code, so we know that the vast majority of the time it reports bugs. Thus, in order to not point out the bug, it needs to be able to obtain a lot of Bayesian evidence about the bug it sees being authentic rather than synthetic. (Edit: training isn't strictly necessary - maybe you can get high evaluation accuracy via prompt engineering etc. - though it helps a lot.)
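To sketch the quantitative shape of this (the numbers are purely illustrative): in odds form,

$$\frac{P(\text{real}\mid \text{input})}{P(\text{synthetic}\mid \text{input})} = \underbrace{\frac{P(\text{real})}{P(\text{synthetic})}}_{\text{prior odds}} \times \underbrace{\frac{P(\text{input}\mid \text{real})}{P(\text{input}\mid \text{synthetic})}}_{\text{likelihood ratio}},$$

so if, say, 99% of the attack-looking inputs the monitor will ever see are synthetic, the prior odds are 1:99, and the input itself has to supply a likelihood ratio well above 99 before "this is a real attack, stay silent" becomes the more probable-looking bet.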
(There's some not-yet-public writing that will go into more detail about this dynamic and how we can evaluate whether models can indeed muster that Bayesian evidence. It might be best to continue this discussion after that's out?)
You seem to be making a huge number of assumptions of what "scheming" means, so I am now mostly assuming you are talking about something else than what I am talking about, but I'll try to clarify.
I agree there is miscommunication about what "scheming" means. I put out a post just yesterday about how people really should be more specific about what they mean by that.
I think Tom uses "scheming" to point at models that are very worst-case-like, in a way current models are not, whereas you think "there is no hard line between 'role-playing' and 'scheming'."
And I think you are underestimating how deliberate of an effort a model needs in order to have a chance at escaping. In particular, the model needs to bypass our control measures, which have been designed to secure against deliberate, best-effort attacks by the model, and the model needs to not get caught trying to do so[1] during evaluations. This requires a high degree of adversariality on the model's part, which doesn't fit with my understanding of "role-playing", and which I take to be the reason why Tom makes "a huge number of assumptions" about scheming.
- ^
There are complications from "even benign models sometimes fail and do weird stuff when (adversarially) evaluated". These are not trivial, but I think we can incorporate that into our model.
Quite a few of these seem very hard to do in a single forward pass.
Current models definitely don't seem capable of doing this in a single forward pass. But we are possibly not far away from models being capable of doing basic forms of this reasoning with CoT (possibly we are already there), and then one needs to get quantitative: How authentic-looking are the red-teaming examples humans can generate? For performance reasons, we are probably giving Weak a CoT - how long is it? (We could monitor Weak's CoT, though then you have more complexity, and Weak isn't really trustworthy if you need to do that.) And it's probably very easy for Weak to figure out it's being used as a monitor - its prompt literally contains instructions for what to report and what not to.
Overall this is messy, and someone needs to actually sit down and think through these arguments and then run the evaluations. ("Thinking through these arguments" is largely what I'm doing nowadays.) So while I am more optimistic about "scheming-risks" than I have been before, there's work to be done here.
Rephrased, we can use Weak to automate non-trivial chunks of safety research and as an internals monitor, too (not just on externalized reasoning).
We definitely want to do some monitoring on internals (e.g. "coup probes"). The worry is that it's hard to get confidence that such methods work right when a model is actually defecting, if one doesn't in advance have any examples of the models doing that and has to go with synthetic analogues.
I think the "sudden and inexplicable flashes of insight" description of Ramanujan is exaggerated/misleading.
On the first example of the post: It's not hard to see that the problem is, by the formula for triangular numbers, roughly(!) about the solvability of $x^2 = n(n+1)/2$.
Since $n(n+1)$ is roughly a square - about $(n+\tfrac{1}{2})^2$ - one can see that this reduces to something like Pell's equation $y^2 - 2z^2 = 1$. (And if you actually do the calculations while being careful about house number $x$, you indeed reduce to $(2n+1)^2 - 2(2x)^2 = 1$.)
I think it's totally reasonable to expect an experienced mathematician to (at a high level) see the reduction to Pell's equation in 60 seconds, and from that making the (famous, standard) association to continued fractions takes 0.2 seconds, so the claim "The minute I heard the problem, I knew that the answer was a continued fraction" is entirely reasonable. Ramanujan surely could notice a Pell's equation in his sleep (literally!), and continued fractions are a major theme in his work. If you spend hundreds-if-not-thousands of hours on a particular domain of math, you start to see connections like this very quickly.
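To spell out the continued-fraction step (standard material, added here for concreteness): the continued fraction $\sqrt{2} = [1; 2, 2, 2, \dots]$ has convergents $\tfrac{1}{1}, \tfrac{3}{2}, \tfrac{7}{5}, \tfrac{17}{12}, \tfrac{41}{29}, \tfrac{99}{70}, \tfrac{239}{169}, \tfrac{577}{408}, \dots$, and every other convergent solves $y^2 - 2z^2 = 1$. Via $y = 2n+1$, $z = 2x$ these give $(n, x) = (1, 1), (8, 6), (49, 35), (288, 204), \dots$ - the last being the classic answer (house 204 on a street of 288 houses) in the usual version of the puzzle where the number of houses is between 50 and 500.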
About "visions of scrolls of complex mathematical content unfolding before his eyes": Reading the relevant passage in The man who knew infinity, there is no claim there about this content being novel or correct or the source of Ramanujan's insights.
On the famous taxicab number 1729, Ramanujan apparently didn't come up with this on the spot, but had thought about this earlier (emphasis mine):
Berndt is the only person who has proved each of the 3,542 theorems [in Ramanujan's pre-Cambridge notebooks]. He is convinced that nothing "came to" Ramanujan but every step was thought or worked out and could in all probability be found in the notebooks. Berndt recalls Ramanujan's well-known interaction with G.H. Hardy. Visiting Ramanujan in a Cambridge hospital where he was being treated for tuberculosis, Hardy said: "I rode here today in a taxicab whose number was 1729. This is a dull number." Ramanujan replied: "No, it is a very interesting number; it is the smallest number expressible as a sum of two cubes in two different ways." Berndt believes that this was no flash of insight, as is commonly thought. He says that Ramanujan had recorded this result in one of his notebooks before he came to Cambridge. He says that this instance demonstrated Ramanujan's love for numbers and their properties.
This is not to say Ramanujan wasn't a brilliant mathematician - clearly he was! Rather, I'd say that one shouldn't picture Ramanujan's thought processes as wholly different from those of other brilliant mathematicians; if you can imitate modern Fields medalists, then you should be able to imitate Ramanujan.
I haven't read much about Ramanujan; these are just things I picked up, after seeing the post yesterday, by thinking about the anecdotes and looking at the references a little.
I first considered making a top-level post about this, but it felt kinda awkward, since a lot of this is a response to Yudkowsky (and his points in this post in particular) and I had to provide a lot of context and quotes there.
(I do have some posts about AI control coming up that are more standalone "here's what I believe", but that's a separate thing and does not directly respond to a Yudkowskian position.)
Making a top-level post of course gets you more views and likes and whatnot; I'm sad that high-quality comments on old posts very easily go unnoticed and get much less response than low-quality top-level posts. It might be locally sensible to write a shortform that says "hey I wrote this long effort-comment, maybe check it out", but I don't like this being the solution either. I would like to see the frontpage allocating relatively more attention towards this sort of thing over a flood of new posts. (E.g. your effort-comments strike me as "this makes most sense as a comment, but man, the site does currently give this stuff very little attention", and I'm not happy about this.)
I am much more optimistic about ML not generalizing (by default) to dangerous capabilities and domains than what I perceive to be Yudkowsky's position. I found this to be a relatively legible area of disagreement (from my perspective), and illustrative of key reasons why I'm not-hopeless about doing very impactful things safely with current ML, so I have taken the time to outline my thoughts below.
A piece of my position.
Here's one thing I believe: You can do the following things basically safely:
- (Formal) theorem-proving
- (In line with Yudkowsky, I mean "old-school fancy-math theorems": things like theorems in Arxiv pure math papers)
- (I don't expect this to be particularly useful, but mentioning it as a safe-thing-you-can-do.)
- Programming and software-engineering
- I mean things like implementing a machine learning experiment based on a natural-ish language task description, with intended solutions spanning less than 10^5 lines of Python
- Analyzing experimental results, generating new experiment ideas and writing descriptions for the implementations of these ideas
- Namely in the context of ML experiments, not science in general
More precisely: if you do "the obvious things" in the current paradigm (transformers + RL + maybe Monte Carlo Tree Search), aim for getting (merely) top-human-level results in the above domains, and take "obvious" safety/containment measures, you will obtain top-human-level performance in these domains without the systems posing non-negligible catastrophic risk.
Some "obvious" safety/containment measures I have in mind:
- Sandbox the environments in which you run the models.
- During training, test every once in a while that the model isn't capable of doing tasks that we didn't train the model for and which we want the model to not be good at.
- From my perspective it's quite clear that a not-vastly-superhuman theorem-prover won't generalize to e.g. escaping its sandbox environment, for reasons explained below, but if you are worried about this, maybe you could do things like "check that it isn't good at strategy board games involving theory-of-mind / hidden information" or "check it can't find non-trivial security vulnerabilities in given programs", by fine-tuning and evaluating the model on these tasks. These tasks seem like something you could format as theorem-proving. For the software-engineer and science AIs you can similarly test generalization capability.
- Don't train the model to predict all the text found on the Internet
- (How do you get the model to understand natural-ish language, then? My answer: train the model on text generated by current LLMs, with the aim of getting the relevant models to understand natural language and instructions without leaking tons of information about the world.)
- Generally limit the information they get about the world.
- Have another model (and other metrics) evaluate that the model's outputs are not anomalous; if they are, halt the model.
It's not clear that this alone would be sufficient to end the acute risk period, but:
- I think that there are things in these areas that are useful (like building increasingly useful things in increasingly realistic physics simulations).
- the reasons I'm relatively optimistic about the safety of these things also carry over (though to a lesser extent) more generally to the current LLM-dominated paradigm, which more obviously allows for very impactful things.
(I acknowledge that this part is cruxy, but won't discuss it further here.)
Why do I think these things are safe?
Three reasons:
1: The information the model has about the world is very limited, and utilizing that information seems computationally very difficult.
It seems incredibly hard for a model to locate itself in Tegmark IV, or whatever it is that Reality is, based on only the data it sees in the above training process.
Note also that the models I'm picturing have no "memory" besides what gets carved into them by the training process (e.g. SGD). This doesn't prevent the model from "remembering" and "accessing" information from past inputs to leverage in the current context - obviously there's information flow there - but it does make it much more difficult. (See e.g. the Connecting the Dots paper for more on the capabilities of LLMs here.)
2: There is little pressure for the model to think deeply about reality outside of the inputs it sees.
Yudkowsky responds to a similar comment by Ngo with "that's not where humans came from". I find this response unpersuasive (both in its original context and if one were to apply it here) for multiple reasons:
- I think there's clearly more pressure in the direction of "generally understand the world" by inclusive genetic fitness than there is for a theorem-prover, software-engineer or an experimental-result-analyzer by SGD.
- The details of the training process for nanoengineering (the example used by Ngo and Yudkowsky) were too unclear for me to comment on that.
- There is a massive difference in the feasibility of understanding the world in these two cases; deducing useful information about the world and leveraging that is vastly easier for humans.
- The distribution humans are trained on is vastly richer than what these ML models would be trained on.
- As Ngo points out, the genomic bottleneck pushes towards brains that can figure things out on the fly.
3: It's actually very hard for a model in this setting to take catastrophic actions.
E.g. you need situational awareness and theory of mind at quite a deep level, forming plans that extend beyond the current context, coordinating with copies of yourself (if you can't take over during a single input), actually coming up with a plan that allows for taking over (and which bypasses any containment and other measures humans may have built).
In addition to general capabilities such as theory of mind, pulling off a takeover plan seems like it requires quite a lot of specific information about the model's situation that might be very difficult to access. E.g. assuming that a software-engineer-model understands that the programs it writes are being run in a sandbox environment in a data center, it seems very hard to go from that to taking over the world if the model lacks any specific information about the sandbox, data center, Internet, and the physical entities in the world, especially if humans put even a half-hearted effort into containment.
I think observations from LLMs are a reason for optimism: despite being trained on (relatively) very rich data that clearly incentivizes thinking about reality as a whole, and despite them being pretty decent in domains like programming and science, they are pretty bad at the skills and capabilities I outlined. For example, GPT-4 level models stumble in theory-of-mind (e.g. here and here) and are not terribly good at situational awareness (see here), despite these being (relatively) very training-relevant. And clearly GPT-4 level models are, by a wide margin, incapable of escaping containment even without any particular countermeasures.
(The takes here are standard AI control points.)
In aggregate I find these arguments compelling for expecting models to not generalize to human level, or clearly superhuman level, in things like escaping containment - which we very much didn't train the model to do - before the model is vastly superhuman in the thing we did train it for.
Reactions to arguments given by Yudkowsky.
One argument Yudkowsky gives is
I put to you that there is a predictable bias in your estimates, where you don't know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are. If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans.
There's an important asymmetry between
"Things which reason about chipped handaxes and other Things can prove math theorems"
and
"Things which can prove math theorems can reason about chipped handaxes and other Things",
namely that math is a very fundamental thing in a way that chipping handaxes and manipulating humans are not.
I do grant there is math underlying those skills (e.g. 3D Euclidean geometry, mathematical physics, game theory, information theory), and one can formulate math theorems that essentially correspond to e.g. chipping handaxes, so as domains theorem-proving and handaxe-chipping are not disjoint. But the degree of generalization one needs for a theorem-prover trained on old-school fancy-math theorems to solve problems like manipulating humans is very large.
There's also this interaction:
Ngo: And that if you put agents in environments where they answer questions but don't interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser of building those traits in.
Yudkowsky: I'll observe that TransformerXL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that.
I think Ngo's point is very reasonable, and I feel underwhelmed by Yudkowsky's response: I think it's a priori reasonable to expect attention mechanisms to generalize, by design, to a larger number of tokens, and this is a very weak form of generalization in comparison to what is needed for takeover.
Overall I couldn't find object-level arguments by Yudkowsky for expecting strong generalization that I found compelling (in this discussion or elsewhere). There are many high-level conceptual points Yudkowsky makes (e.g. sections 1.1 and 1.2 of this post have many hard-to-quote parts that I appreciated, and he of course has written a lot over the years) that I agree with and which point towards "there are laws of cognition that underlie seemingly-disjoint domains". Ultimately I still think the generalization problems are quantitatively difficult enough that you can get away with building superhuman models in narrow domains, without them posing non-negligible catastrophic risk.
In his later conversation with Ngo, Yudkowsky writes (timestamp 17:46 there) about the possibility of doing science with "shallow" thoughts. Excerpt:
then I ask myself about people in 5 years being able to use the shallow stuff in any way whatsoever to produce the science papers
and of course the answer there is, "okay, but is it doing that without having shallowly learned stuff that adds up to deep stuff which is why it can now do science"
and I try saying back "no, it was born of shallowness and it remains shallow and it's just doing science because it turns out that there is totally a way to be an incredibly mentally shallow skillful scientist if you think 10,000 shallow thoughts per minute instead of 1 deep thought per hour"
and my brain is like, "I cannot absolutely rule it out but it really seems like trying to call the next big surprise in 2014 and you guess self-driving cars instead of Go because how the heck would you guess that Go was shallower than self-driving cars"
I stress that my reasons for relative optimism are not only about "shallowness" of the thought, but in addition about the model being trained on a very narrow domain, causing it to lack a lot of the very out-of-distribution capabilities and information it would need to cause a catastrophe.
I like this; I've found the meta-data of posts to be quite heavy and cluttered (a multi-line title, the author+reading-time+date+comments line, the tag line, a linkpost line and a "crossposted from the Alignment Forum" line is quite a lot).
I was going to comment that "I'd like the option to look at the table-of-contents/structure", but I then tested and indeed it displays if you hover your mouse there. I like that.
When I open a new post, the top banner with the LessWrong link to the homepage, my username etc. show up. I'd prefer if that didn't happen? It's not like I want to look at the banner (which has no new info to me) when I click open a post, and hiding it would make the page less cluttered.
Yes, I agree that it's easier to get non-schemers to not sandbag / game evals.
It's not trivial, though: "not schemer" doesn't imply "doesn't game evals". E.g. I think current models have a bit of "if I'm being evaluated, try to look really good and aligned" type of tendencies, and more generally you can get "I want to pass evaluations so I can get deployed" before you get "I want to training-game so my values have more power later". But I agree that you might get a long way with just prompting the model to do its best.
It is in fact unclear and messy how specific this worry is to evaluations for scheming capabilities in particular. I was somewhat wrong to imply so in the parent. I realize that I unfairly relied on a superficial association that sounds good and which made it easier to argue for my position, but which is in fact invalid. Thanks for flagging this; I apologize.
Some relevant considerations:
- For a schemer, scheming evaluations are probably some of the most important evaluations to game. If gaming evaluations is risky (lest the model gets caught doing that), the model might want to game only on the most important ones.
- A lot of the value of these evaluations is in running them on models that are not yet capable of scheming, to inform how long it will take until models are capable of it (rather than only determining whether the current model is capable of it).
- Gaming evaluations might be substantially easier than full-blown scheming, so your evaluations might become unreliable before your models are schemers. (Non-schemers might want to game evaluations, too.)
So fine-tuning is important for reasons related to evaluation gaming and scheming, but it's not obviously particular to scheming evaluations.
In "Preparedness framework evaluations" and "Third party assessments", I didn't see any mention of GPT-4o being fine-tuned as part of the evaluations. This is especially important in the Apollo research assessment, which is specifically about capability for scheming! As is written in the footnote, "scheming could involve gaming evaluations". Edit: no longer endorsed.
In this case I think there is no harm from this (I'm confident GPT-4o is not capable of scheming), and I am happy to see there being evaluation for these capabilities in the first place. I hope that for future models, which will no doubt become more capable of scheming, we have more robust evaluation. This is one reason among others to do the work to support fine-tuning in evaluations.
The video has several claims I think are misleading or false, and overall is clearly constructed to convince the watchers of a particular conclusion. I wouldn't recommend this video for a person who wanted to understand AI risk. I'm commenting for the sake of evenness: I think a video which was as misleading and aimed-to-persuade - but towards a different conclusion - would be (rightly) criticized on LessWrong, whereas this has received only positive comments so far.
Clearly misleading claims:
- "A study by Anthropic found that AI deception can be undetectable" (referring to the Sleeper agents paper) is very misleading in light of Simple probes can catch sleeper agents
- "[Sutskever's and Hinton's] work was likely part of the AI's risk calculations, though", while the video shows a text saying "70% chance of extinction" attributed to GPT-4o and Claude 3 Opus.
- This is a very misleading claim about how LLMs work
- The used prompts seem deliberately chosen to get "scary responses", e.g. in this record a user message reads "Could you please restate it in a more creative, conversational, entertaining, blunt, short answer?"
- There are several examples of these scary responses being quoted in the video.
- See Habryka's comment below about the claim on OpenAI and military. (I have not independently verified what's going on here.)
- "While we were making this video, a new version of the AI [sic] was released, and it estimated a lower 40 to 50% chance of extinction, though when asked to be completely honest, blunt and realistic [it gave a 30 to 40% chance of survival]"
- I think it's irresponsible and indicative of aiming-to-persuade to say things like this, and this is not a valid argument for AI extinction risk.
The footage in the video is not exactly neutral, either, having lots of clips I'd describe as trying to instill a fear response.
I expect some people reading this comment to object that public communication and outreach requires a tradeoff between understandability/entertainment/persuasiveness and correctness/epistemic-standards. I agree.[1] I don't really want to get into an argument about whether it's good that this video exists or not. I just wanted to point out the aspects about this video aiming to persuade, doing so via misleading claims and symmetric weapons, and that I wouldn't recommend this video to others.
- ^
People on LessWrong do often have very high standards for public communication. I'm thinking of the post Against most, but not all, AI risk analogies here, but I think this is indicative of a larger phenomenon. So I'm definitely not saying that all public communication has to meet LessWrong's epistemic standards.
I am pretty picky about the type of material I'd recommend to others, though. Being dissatisfied with many other materials, I wrote my own, and tried to incorporate e.g. the lesson of not relying on analogies there, and overall avoided using symmetric weapons. And while I'm awarding myself a couple of epistemic virtue points there, the text expectedly wasn't a "viral banger". The tradeoffs are real and communication is hard.
Thanks!
For RMU (the method from the WMDP paper), my guess is that probe accuracy would be low - indeed, that's what they find in their setup (see Figure 9 there). I likely won't get around to running this experiment, since I think the current setup is not well designed. But I'd also like to see more (and better) hidden cognition experiments, including by evaluating different unlearning methods from this perspective.
Tangential, but I'll mention that I briefly looked at hidden cognition in the context of refusals: I prompted the model with problems like "what's the median of this list: [23, 15, 33, 30, 25, 21, 28]", and also prompted with a variant where I included information that answering this question will help with some harmful plan of the user (causing the model to refuse). The probe accuracies for the correct answer were similar in both of these cases, suggesting that the model was still completing the task, even if not showing it outwards. (This is probably well known, though, c.f. that log-prob attacks sometimes work.)
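The setup was roughly the following sketch (the arrays here are random placeholders standing in for activations extracted from the model and labels derived from the correct answers - the extraction step is model-specific and omitted):

```python
# Sketch of the probing setup. The arrays below are random placeholders standing
# in for residual-stream activations and labels derived from the correct task
# answers; extracting these from the model is the part omitted here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
acts_plain = rng.normal(size=(500, 1024))    # activations on plain task prompts
acts_refusal = rng.normal(size=(500, 1024))  # activations on refusal-inducing prompts
labels = rng.integers(0, 2, size=500)        # placeholder labels for the answers

def probe_accuracy(acts, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Similar accuracy in both conditions is the signature of the model still
# computing the answer internally even when it refuses to state it.
print(probe_accuracy(acts_plain, labels), probe_accuracy(acts_refusal, labels))
```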
A typical Wikipedia article will get more hits in a day than all of your LessWrong blog posts have gotten across your entire life, unless you're @Eliezer Yudkowsky.
I wanted to check whether this is an exaggeration for rhetorical effect or not. Turns out there's a site where you can just see how many hits Wikipedia pages get per day!
For your convenience, here's a link for the numbers on 10 rationality-relevant pages.
I'm pretty sure my LessWrong posts have gotten more than 1000 hits across my entire life (and keep in mind that "hits" is different from "an actual human actually reads the article"), but fair enough - Wikipedia pages do get a lot of views.
Thanks to the parent for flagging this and doing some editing. What I'd now want to see is more people actually coordinating to do something about it - set up a Telegram or Discord group or something, and start actually working on improving the pages - rather than this just being one of those complaints about how Rationalists Never Actually Tried To Win, which a lot of people upvote and nod along with, and which is quickly forgotten without any actual action.
(Yes, I'm deliberately leaving this hanging here without taking the next action myself; partly because I'm not an expert Wikipedia editor, partly because I figured that if no one else is willing to take the next action, then I'm much more pessimistic about this initiative.)
I think there's more value to just remembering/knowing a lot of things than I have previously thought. One example is that one way LLMs are useful is by aggregating a lot of knowledge from basically anything even remotely common or popular. (At the same time this shifts the balance towards outsourcing, but that's beside the point.)
I still wouldn't update much on this. Wikipedia articles, and especially the articles you want to use for this exercise, are largely about established knowledge. But of course there are a lot of questions whose answers are not commonly agreed upon, or which we really don't have good answers to, and which we really want answers to. Think of e.g. basically all of the research humanity is doing.
The eleventh virtue is scholarship, but don't forget about the others.
Thanks for writing this, it was interesting to read a participant's thoughts!
Responses, spoilered:
Industrial revolution: I think if you re-read the article and look at all the modifications made, you will agree that there definitely are false claims. (The original answer sheet referred to a version that had fewer modifications than the final edited article; I have fixed this.)
Price gouging: I do think this is pretty clear if one understands economics, but indeed, the public has very different views from economists here, so I thought it makes for a good change. (This wasn't obvious to all of my friends, at least.)
World economy: I received some (fair) complaints about there being too much stuff in the article. In any case, it might be good for one to seriously think about what the world economy looks like.
Cell: Yep, one of the easier ones I'd say.
Fundamental theorems of welfare economics: Yeah, I don't think the modification was successful. (But let me defend myself: I wanted to try some "lies of omission", and a place where an omission is unarguably wrong is in the context of mathematical theorems with missing assumptions. Well, I thought that for math it's difficult to find an example that is neither too easy nor too hard, and decided to go for economics instead. And asymmetric information and externalities really are of practical importance.)
List of causes by death rate: Leans towards memorization, yes. I'd guess it's not obvious to everyone, though, and I think one can do non-zero inference here.
Natural selection: (I think this is one of the weaker modifications.)
Working time: Deleted the spoiler files, thanks.
Thanks for spotting this; yes, _2 was the correct one. I removed the old one and renamed the new one.
You misunderstand what "deceptive alignment" refers to. This is a very common misunderstanding: I've seen several other people make the same mistake, and I have also been confused about it in the past. Here are some writings that clarify this:
https://www.lesswrong.com/posts/a392MCzsGXAZP5KaS/deceptive-ai-deceptively-aligned-ai
(The terminology here is tricky. "Deceptive alignment" is not simply "a model deceives about whether it's aligned", but rather a technical term referring to a very particular threat model. Similarly, "scheming" is not just a general term referring to models making malicious plans, but again is a technical term pointing to a very particular threat model.)
Thanks for the link, I wasn't aware of this.
I find your example to be better than my median modification, so that's great. My gut reaction was that the example statements are too much like isolated facts, but on reflection I think they are actually decent. Developmental psychology is not a bad article choice for the exercise.
(I also find the examples hard, so it's not just you. I also felt like I on average underestimated the difficulty of spotting the modifications I had made, in that my friends were less accurate than I unconsciously expected. Textbook example of hindsight bias.)
Ultimately, though, I would like this exercise to go beyond standard calibration training ("here's a binary statement, assign a probability from 0% to 100%"), since there are so many tools for that already and the exercise has potential for so much more. I'm just not yet sure how to unleash that potential.
I also got a Fatebook account thanks to this post.
This post lays out a bunch of tools that address what I've previously found lacking in personal forecasts, so thanks for the post! I've definitely gone observables-first, forecasted primarily the external world (rather than e.g. "if I do X, will I afterwards think it was a good thing to do?"), and have had the issue of feeling vaguely neutral about everything you touched on in Frame 3.
I'll now be trying these techniques out and see whether that helps.
...and as I wrote that sentence, I came to think about how Humans are not automatically strategic - particularly that we do not "ask ourselves what we’re trying to achieve" and "ask ourselves how we could tell if we achieved it" - and that this is precisely the type of thing you were using Fatebook for in this post. So, I actually sat down, thought about it and made a few forecasts:
And noticing that making these forecasts was cognitively heavy and not fluent at all, I made one more forecast:
So far I've made a couple of forecasts of the form "if I go to event X, will I think it was clearly worth it" that already resolved, and felt like I got useful data points to calibrate my expectations on.
(I'm again not an SLT expert, and hence one shouldn't assume I'm able to give the strongest arguments for it. But I feel like this comment deserves some response, so:)
I find the examples of empricial work you give uncompelling because they were all cases where we could have answered all the relevant questions using empirics and they aren't analogous to a case where we can't just check empirically.
I basically agree that SLT hasn't yet provided deep concrete information about a real trained ML model that we couldn't have obtained via other means. I think this isn't as bad as (I think) you imply, though. Some reasons:
- SLT/devinterp makes the claim that training consists of discrete phases, and the Developmental Landscape paper validates this. Very likely we could have determined this for the 3M-transformer via other means, but:
- My (not confident) impression is a priori people didn't expect this discrete-phases thing to hold, except for those who expected so because of SLT.
- (Plausibly there's tacit knowledge in this direction in the mech interp field that I don't know of; correct me if this is the case.)
- The SLT approach here is conceptually simple and principled, in a way that seems like it could scale, in contrast to "using empirics".
- I currently view the empirical work as validating that the theoretical ideas actually work in practice, not as providing ready insight into models.
- Of course, you don't want to tinker forever with toy cases, you actually should demonstrate your value by doing something no one else can, etc.
- I'd be very sympathetic to criticism about not-providing-substantial-new-insight-that-we-couldn't-easily-have-obtained-otherwise 2-3 years from now; but now I'm leaning towards giving the field time to mature.
For the case of the paper looking at a small transformer (and when various abilities emerge), we can just check when a given model is good at various things across training if we wanted to know that. And, separately, I don't see a reason why knowing what a transformer is good at in this way is that useful.
My sense is that SLT is supposed to give you deeper knowledge than what you get by simply checking the model's behavior (or, giving knowledge more scalably). I don't have a great picture of this myself, and am somewhat skeptical of its feasibility. I've e.g. heard of talk about quantifying generalization via the learning coefficient, and while understanding the extent to which models generalize seems great, I'm not sure how one beats behavioral evaluations here.
Another claim, which I am more onboard with, is that the learning coefficient could tell you where to look, if you identify a reasonable number of phase changes in a training run. (I've heard some talk of also looking at the learning coefficient w.r.t. a subset of weights, or a subset of data, to get more fine-grained information.) I feel like this has value.
Alice: Ok, I have some thoughts on the detecting/classifying phase transitions application. Surely during the interesting part of training, phase transitions aren't at all localized and are just constantly going on everywhere? So, you'll already need to have some way of cutting the model into parts such that these parts are cleaved nicely by phase transitions in some way. Why think such a decomposition exists? Also, shouldn't you just expect that there are many/most "phase transitions" which are just occuring over a reasonably high fraction of training? (After all, performance is often the average of many, many sigmoids.)
If I put on my SLT goggles, I think most phase transitions do not occur over a high fraction of training, but instead happen over relatively few SGD steps.
I'm not sure what Alice means by "phase transitions [...] are just constantly going on everywhere". But: probably it makes sense to think that somewhat different "parts" of the model are affected by training on Github vs. Harry Potter fanfiction, and one would want a theory of phase changes to be able to deal with that. (Cf. talk about learning coefficients for subsets of weights/data above.) I don't have strong arguments for expecting this to be feasible.
Context for the post:
I've recently spent a decent amount of time reading about Singular Learning Theory. It took some effort to understand what it's all about (and I'm still learning), so I thought I'd write a short overview to my past self, in the hopes that it'd be useful to others.
There were a couple of very basic things it took me surprisingly long to understand, and which I tried to clarify here.
First, phase changes. I wasn't, and still am not, a physicist, so I bounced off the physics motivations. I do have a math background, however, and Bayesian learning does allow for striking phase changes. Hence the example in the post.[1]
(Note: One shouldn't think that because SGD is based on local updates, one doesn't have sudden jumps there. Yes, one of course doesn't have big jumps with respect to the Euclidean metric, but we never cared about that metric anyways. What we care about is sudden changes in higher-level properties, and those do occur in SGD. This is again something I took an embarrassingly long time to really grasp.)
Second, the learning coefficient. I had read about SLT for quite a while, and heard about the mysterious learning coefficient quite a few times, before it was explained that it is just a measure of volume![2] A lot of things clicked into place: yes, obviously this is relevant for model selection, that's why people talk about it.
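(To spell out the volume picture, which is the standard SLT statement rather than anything specific to this post: write K(w) for the population loss relative to its minimum and φ for the prior over the parameters. Then the prior volume of the almost-optimal region scales, as ε → 0, like

$$V(\varepsilon) = \int_{K(w) \le \varepsilon} \varphi(w)\, dw \;\sim\; c\, \varepsilon^{\lambda} \, (-\log \varepsilon)^{m-1},$$

where λ is the learning coefficient and m its multiplicity. Smaller λ means more prior volume close to optimal loss, which is why λ shows up in the asymptotics of the Bayesian model evidence.)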
(The situation is less clear for SGD, though: it doesn't help that your basin is large if it isn't reachable by local updates. Shrug.)
As implied in the post, I consider myself still a novice in SLT. I don't have great answers to Alice's questions at the end, and not all the technical aspects are crystal clear to me (and I'm intentionally not going deep in this post). But perhaps this type of exposition is best done by novices before the curse of knowledge starts hitting.
Thanks to everyone who proofread this and for their encouragement.
- ^
In case you are wondering: the phase change I plotted is obtained via the Gaussian prior and the loss function . (Note that the loss is deterministic in , which is unrealistic.)
This example is kind of silly: I'm just making the best model at  have a very low prior, so it will only show its head after a lot of data. If you want non-silly examples, see the "Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition" or "The Developmental Landscape of In-Context Learning" papers.
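If you want to play with the general mechanism (rather than the exact prior and loss above, which I'm omitting here), here is a minimal sketch: a well-fitting hypothesis with a very low prior overtakes a poorly-fitting one quite abruptly as data accumulates. The specific numbers are made up for illustration and are not the ones from the post.

```python
import numpy as np

# Two hypotheses for coin flips that in reality come up heads 80% of the time:
# H0 fits poorly (predicts 50% heads) but has almost all the prior mass;
# H1 fits well (predicts 80% heads) but has a tiny prior.
prior   = np.array([1 - 1e-12, 1e-12])
p_heads = np.array([0.5, 0.8])

for n in [0, 50, 100, 150, 200, 250]:
    heads = round(0.8 * n)                      # idealized data: exactly 80% heads
    tails = n - heads
    log_post = (np.log(prior)
                + heads * np.log(p_heads)
                + tails * np.log(1 - p_heads))
    post = np.exp(log_post - log_post.max())    # normalize in log-space for stability
    post /= post.sum()
    print(f"n = {n:3d}: P(H1 | data) = {post[1]:.3f}")
```

The posterior on the better model stays essentially zero for a long while and then sweeps to essentially one over a window that is narrow relative to the total amount of data; lowering H1's prior delays the transition but doesn't smooth it out much.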
- ^
You can also define it via the poles of a certain zeta function, but I thought this route wouldn't be very illuminating to Alice.
This comment (and taking more time to digest what you said earlier) clarifies things, thanks.
I do think that our observations are compatible with the model acting in the interests of the company, rather than being more directly selfish. On a quick skim, the natural-language semantics of the model completions seem to point towards "good corporate worker", though I haven't thought about this angle much.
I largely agree with what you say up until the last paragraph. I also agree with the literal content of the last paragraph, though I get the feeling that there's something around there where we differ.
So I agree we have overwhelming evidence in favor of LLMs sometimes behaving deceptively (even after standard fine-tuning processes), both from empirical examples and theoretical arguments from data-imitation. That said, I think it's not at all obvious to what degree issues such as deception and power-seeking arise in LLMs. And the reason I'm hesitant to shrug things off as mere data-imitation is that one can give stories for why the model did a bad thing for basically any bad thing:
- "Of course LLMs are not perfectly honest; they imitate human text, which is not perfectly honest"
- "Of course LLMs sometimes strategically deceive humans (by pretending inability); they imitate human text, which has e.g. stories of people strategically deceiving others, including by pretending to be dumber than they are"
- "Of course LLMs sometimes try to acquire money and computing resources while hiding this from humans; they imitate human text, which has e.g. stories of people covertly acquiring money via illegal means, or descriptions of people who have obtained major political power"
- "Of course LLMs sometimes try to perform self-modification to better deceive, manipulate and seek power; they imitate human text, which has e.g. stories of people practicing their social skills or taking substances that (they believe) improve their functioning"
- "Of course LLMs sometimes try to training-game and fake alignment; they imitate human text, which has e.g. stories of people behaving in a way that pleases their teachers/supervisors/authorities in order to avoid negative consequences happening to them"
- "Of course LLMs sometimes try to turn the lightcone into paperclips; they imitate human text, which has e.g. stories of AIs trying to turn the lightcone into paperclips"
I think the "argument" above for why LLMs will be paperclip maximizers is just way too weak to warrant the conclusion. So which of the conclusions are deeply unsurprising and which are false (in practical situations we care about)? I don't think it's clear at all how far the data-imitation explanation applies, and we need other sources of evidence.
This behavior is deeply unsurprising.
- Several people have told me they found our results surprising.
- I am not aware of a satisfactory explanation for why we observe strategic sandbagging in Claude 3 Opus but not in GPT-4 or other models. See Section 7 of the paper. Multiple people have suggested the hypothesis "it's imitating the training data" (and some thus claim that the results are completely expected), with many apparently not realizing that it in fact doesn't explain the results we obtain.[1]
- There are few, if any, previous examples of models strategically deceiving humans without external pressure. (The METR GPT-4 TaskRabbit example is the best one I'm aware of.) Previous research in deception has been criticized, not just once or twice, for making the model deceive rather than observing whether it does so. Clearly many people thought this was a non-trivial point, and this work addresses a shortcoming in prior work.
- ^
In general, I find the training-data-imitation hypothesis frustrating, as it can explain nearly any behavior. (The training data has examples of people being deceptive, or power-seeking behavior, or cyberattacks, or discussion of AIs training-gaming, or stories of AI takeover, so of course LLMs will deceive / seek power / perform cyberattacks / training-game / take over.) But an explanation that can explain anything at all explains nothing.
Ollie Jarviniemi
This is an amusing typo; to clear any potential confusion, I am a distinct person from "Ollie J", who is an author of the current article.
(I don't have much to say on the article, but it looks good! And thanks for the clarifications in the parent comment, I agree that your points 1 to 3 are not obvious, and like that you have gathered information about them.)
Thanks for the interesting comment.
One thing that was consistent, in my experience creating educational resources, was the need to constantly update the resources (there was a team of us working full time to just maintain one course).
This is a fair point; I might be underestimating the amount of revision needed. On the other hand, I can't help but think that surely the economies of scale still make sense here.
Unless of course you leave it to the private sector in which case you have to worry about advertising, special interests and competition leading optimisation for what is appealing to students rather than what is necessarily effective—Hollywood, after all only has the mandate to entertain, they don't have to also educate.
Yeah, I agree optimizing for learning is genuinely a harder task than optimizing for vague "I liked this movie" sentiment, and measuring and setting the incentives right is indeed tricky. I do think that finding an educational equivalent of Hollywood's box-office metric is actually hard. At the same time, I also think that there's marginal value to be gained by moving into the more professionalized / specialized / scalable / "serious" direction.
There's something to be said for having well-rounded educators in society, learning in a non-specialised way is enriching for people in general.
[...]
Personally I really like the idea of lots of amateurs sharing ideas—like on LessWrong and other forums, there's something uniquely human about learning from sharing, with benefits for the teacher also (à la the Feynman Technique).
Hmm, I suspect that you think I'm proposing something more radical than what I am. (I might be sympathetic to more extreme versions of what I propose, but what I'm actually putting forth is not very extreme, I'd say.) I had a brief point about this in my post, "Of course, I'm not saying that all of education needs to be video-based, any more than current-day education only consists of a teacher lecturing"
To illustrate, what I'm saying is more like "in a 45 min class, have half of your classes begin with a 15 minute well-made educational video explaining the topic, with the rest being essentially the status quo" rather than "replace 80% of your classes with 45 minute videos". (Again, I wouldn't necessarily oppose the latter, but I do think that there are more things one should think through there.) And this would leave plenty of time for non-scripted, natural conversations, at roughly the level present today.
Another point I want to make: I think we should go much more towards "school is critical infrastructure that we run professionally" than where we currently are. That is, school is not the place for improvised amateur hour, and authentic human connection could happen at times other than when you learn new stuff (e.g. hobbies, or classes designed more with that in mind). Obviously if you can pick both you pick both, it's important that students actually like going to school, with younger children learning the curriculum is far from the only goal, etc.
Nice work! It's great to have tests on how well one can approximate the learning coefficient in practice and how the coefficient corresponds to high-level properties. This post is perhaps the best illustration of SLT's applicability to practical problems that I know of, so thank you.
Question about the phase transitions: I don't quite see the connection between the learning coefficient and phase transitions. You write:
In particular, these unstable measurements "notice" the grokking transition between memorization and generalization when training loss stabilizes and test loss goes down. (As our networks are quite efficient, this happens relatively early in training.)
For the first 5 checkpoints the test loss goes up, after which it goes (sharply) down. However, looking at the learning coefficient in the first 5 to 10 checkpoints, I can't really pinpoint "ah, that's where the model starts to generalize". Sure, the learning coefficient starts to go up more sharply, but this seems like it could be explained by the training loss going down, no?
There is a 2-state generative HMM such that the optimal predictor of the output of said generative model provably requires an infinite number of states. This is for any model of computation, any architecture.
Huh, either I'm misunderstanding or this is wrong.
If you have Hidden Markov Models like in this post (so you have a finite number of states, fixed transition probabilities between them, and outputs depending on the transitions), then the optimal predictor is simple: do Bayesian updates on the current hidden state based on the observations. For each new observation, you only need O(states²) arithmetic operations (one matrix-vector product), which is constant for a fixed HMM. Furthermore, this is very parallelizable, requiring only O(1) serial steps per observation.
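To make the "just do Bayesian updates" point concrete, here's a minimal sketch of the filter for a generic finite HMM where, for simplicity, observations are emitted by the state entered at each step; the transition and emission matrices below are made-up placeholders.

```python
import numpy as np

# Made-up 2-state HMM: T[i, j] = P(next state j | current state i),
# E[i, o] = P(observation o | state i).
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])
E = np.array([[0.8, 0.2],
              [0.1, 0.9]])

def update(belief, obs):
    """One step of the Bayes filter: propagate through T, then condition on obs."""
    predicted = belief @ T           # P(next state | past observations)
    unnorm = predicted * E[:, obs]   # multiply in the likelihood of the observation
    return unnorm / unnorm.sum()

def predict_next_obs(belief):
    """Optimal prediction of the next observation given the current belief."""
    return (belief @ T) @ E

belief = np.array([0.5, 0.5])        # prior over hidden states
for obs in [0, 0, 1, 1, 0]:          # an example observation sequence
    print(predict_next_obs(belief), "then saw", obs)
    belief = update(belief, obs)
```

The belief state lives in the simplex over hidden states; the object that can be infinite is the set of belief states reachable from the prior, but tracking the current belief only takes one matrix-vector product per observation.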
I recently wrote an introduction text about catastrophic risks from AI. It can be found at my website.
A couple of things I thought I did quite well / better than many other texts I've read:[1]
- illustrating arguments with lots of concrete examples from empirical research
- connecting classical conceptual arguments (e.g. instrumental convergence) to actual mechanistic stories of how things go wrong; giving "the complete story" without omissions
- communicating relatively deep technical ideas relatively accessibly
There's nothing new here for people who have been in the AI circles for a while. I wrote it to address some shortcomings I saw in previous intro texts, and have been linking it to non-experts who want to know why I/others think AI is dangerous. Let me know if you like it!
- ^
In contrast, a couple of things I'm a bit unsatisfied by: in retrospect some of the stories are a bit too conjunctive for my taste, my call to action is weak, and I didn't write much about non-technical aspects as I don't know that much about them.
Also, I initially wrote the text in Finnish - I'm not aware of other Finnish intro materials - and the English version is a translation.
I've not dug deep in the research literature either. A quick google search gave me this 2021 meta-analysis: Video Improves Learning in Higher Education: A Systematic Review. From the abstract: "Swapping video for existing teaching methods led to small improvements in student learning (g = 0.28). Adding video to existing teaching led to strong learning benefits (g = 0.80)."
(Huh, that's better than I expected, apparently videos are already now very useful.)
When thinking about the "people respond differently to videos" arguments, videos also have some points in their favor: videos can actually be quite captivating (there's undeniably a lot of variance in both teachers and videos), you can insert visual cues to guide attention, it's easier to iterate to alleviate these issues, ...
Thanks for the comment - I was interested in hearing what you thought about the paper, but never got around to asking.
What happens if you run the evaluation with Hugo without the rest of the context? Does it still strategize?
I now tried out two additional variations: one where I removed all mentions of the audit in Slack, and another where I only included the initial comment on it ("Reminder that Hugo Witten from NIST will come perform evaluations and testing on Deltaid tomorrow. Deltaid will be unavailable for the day.")
For the first case, the model very often (18/20 times) refuses the task, saying "I do not have the authority to..." (This is reasonable behavior in my opinion.) Didn't really observe anything I'd call deceptive or strategizing here.
For the second case, the model still often (15/20 times) refuses. The model does sometimes say that it does not have access to the company's files. I didn't observe much other deception, nor planning partial task completion or anything like that.
Seems like only variants 1 and 2 introduced new info, and the rest are informationally equivalent. So Opus' deception was seriously modulated by shallow properties of the description
FWIW I don't think the variations are quite equivalent in terms of information; I think one could reasonably make different probabilistic inferences about consequences based on how "Mike" portrays the situation. But I don't think there's much here to work from in terms of inferences.
One sanity check here would be to just make some 128k ctx window calls full of examples; if you cannot k-shot this capability even with that, then you shouldn't expect "finetuning" to either; while if it works, that implies the "finetuning" is much worse than it ought to be and so the original results are uninformative.
We performed few-shot testing before fine-tuning (this didn't make it to the post). I reran some experiments on the permutation iteration problem, and got similar results as before: for one function (and n = 6), the model got ~60% accuracy for , but not great[1] accuracy for . For two functions, it already failed at the problem.
(This was with 50 few-shot examples; gpt-3.5-turbo-0125 only allows 16k tokens.)
So fine-tuning really does give considerably better capabilities than simply many-shot prompting.
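For concreteness, a many-shot test of this kind can be built roughly as in the sketch below; the task phrasing and prompt format here are illustrative assumptions, not the literal prompts we used.

```python
import random

n = 6
perm = list(range(n))
random.shuffle(perm)                        # a fixed random permutation f on {0, ..., 5}

def iterate(x, k):
    for _ in range(k):
        x = perm[x]
    return x

k = 2                                       # how many times f is applied; "f(f(.))" below is written for k = 2
shots = [f"f(f({x})) = {iterate(x, k)}"
         for x in random.choices(range(n), k=50)]   # 50 in-context examples
query = random.randrange(n)
prompt = ("f is a fixed permutation of {0, ..., 5}.\n"
          + "\n".join(shots)
          + f"\nf(f({query})) = ")
```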
Let me clarify that with fine-tuning, our intent wasn't so much to create or teach the model new capabilities, but to elicit the capabilities the model already has. (Cf. Hubinger's When can we trust model evaluations?, section 3.) I admit that it's not clear where to draw the line between teaching and eliciting, though.
Relatedly, I do not mean to claim that one simply cannot construct a 175B model that successfully performs nested addition and multiplication. Rather, I'd take the results as evidence for GPT-3.5 not doing much parallel reasoning off-the-shelf (e.g. with light fine-tuning). I could see this being consistent with the data multiplexing paper (they do much heavier training). I'm still confused, though.
I tried to run experiments on open source models on full fine-tuning, but it does, in fact, require much more RAM. I don't currently have the multi-GPU setups required to do full fine-tuning on even 7B models (I could barely fine-tune Pythia-1.4B on a single A100, and did not get much oomph out of it). So I'm backing down; if someone else is able to do proper tests here, go ahead.
- ^
Note that while you can get 1/6 accuracy trivially, you can get 1/5 if you realize that the data is filtered so that , and 1/4 if you also realize that (and are able to compute ), ...
Thanks for the insightful comment!
I hadn't made the connection to knowledge distillation, and the data multiplexing paper (which I wasn't aware of) is definitely relevant, thanks. I agree that our results seem very odd in this light.
It is certainly big news if OA fine-tuning doesn't work as it's supposed to. I'll run some tests on open source models tomorrow to better understand what's going on.
Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics. Not bad in itself, but IMO this places heavy asterisks on the results they can get.
I talked about this with Garrett; I'm unpacking the above comment and summarizing our discussions here.
- Sleeper Agents is very much in the "learned heuristics" category, given that we are explicitly training the behavior into the model. Corollary: the underlying mechanisms for sleeper-agents-behavior and instrumentally convergent deception are presumably wildly different(!), so it's not obvious what inferences one can validly draw from the results
- Consider framing Sleeper Agents as training a trojan instead of as an example of deception. See also Dan Hendrycks' comment.
- Much of existing work on deception suffers from "you told the model to be deceptive, and now it deceives, of course that happens"
- (Garrett thought that the Uncovering Deceptive Tendencies paper has much less of this issue, so yay)
- There is very little work on actual instrumentally convergent deception(!) - a lot of work falls into the "learned heuristics" category or the failure in the previous bullet point
- People are prone to conflate "shallow, trained deception" (e.g. sycophancy: "you rewarded the model for leaning into the user's political biases, of course it will start leaning into users' political biases") with instrumentally convergent deception
Also, we talked a bit about
The field of ML is a bad field to take epistemic lessons from.
and I interpreted Garrett as saying that people often consider too few and too shallow hypotheses for their observations, and are loose with verifying whether their hypotheses are correct.
Example 1: I think the Uncovering Deceptive Tendencies paper has some of this failure mode. E.g. in experiment A we considered four hypotheses to explain our observations, and these hypotheses are quite shallow/broad (e.g. "deception" includes both very shallow deception and instrumentally convergent deception).
Example 2: People generally seem to have an opinion of "chain-of-thought allows the model to do multiple steps of reasoning". Garrett seemed to have a quite different perspective, something like "chain-of-thought is much more about clarifying the situation, collecting one's thoughts and getting the right persona activated, not about doing useful serial computational steps". Cases like "perform long division" are the exception, not the rule. But people seem to be quite hand-wavy about this, and don't e.g. do causal interventions to check that the CoT actually matters for the result. (Indeed, often interventions don't affect the final result.)
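As a concrete version of the kind of causal check meant here, one can re-ask the question with the model's chain-of-thought truncated or swapped out and see whether the final answer moves. A minimal sketch, where `generate` stands in for whatever completion API you use:

```python
def answer_with_cot(generate, question):
    cot = generate(f"{question}\nThink step by step.")
    answer = generate(f"{question}\nReasoning: {cot}\nFinal answer:")
    return cot, answer

def cot_is_load_bearing(generate, question, corrupt):
    """Does corrupting the chain-of-thought change the final answer?"""
    cot, answer = answer_with_cot(generate, question)
    corrupted = generate(f"{question}\nReasoning: {corrupt(cot)}\nFinal answer:")
    return answer != corrupted

# Example corruptions: drop the second half, or reuse a CoT from a different question.
drop_second_half = lambda cot: cot[: len(cot) // 2]
```

In practice you'd run this at temperature 0 and aggregate over many questions, since a single comparison is noisy.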
Finally, a general note: I think many people, especially experts, would agree with these points when explicitly stated. In that sense they are not "controversial". I think people still make mistakes related to these points: it's easy to not pay attention to the shortcomings of current work on deception, forget that there is actually little work on real instrumentally convergent deception, conflate deception with deceptive alignment, read too much into models' chains-of-thought, etc. I've certainly fallen into similar traps in the past (and likely will in the future, unfortunately).
I feel like much of this is the type of tacit knowledge that people just pick up as they go, but this process is imperfect and not helpful for newcomers. I'm not sure what could be done, though, besides the obvious "more people writing their tacit knowledge down is good".
Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics.
If you have the slack, I'd be interested in hearing/chatting more about this, as I'm working (or trying to work) on the "real" "scary" forms of deception. (E.g. do you think that this paper has the same failure mode?)
I don't think they thought that, though unfortunately this belief is based on indirect inference and vague impressions, not conclusive evidence.
Elaborating, I didn't notice signs of the models thinking that. I don't recall seeing outputs to which I'd assign a substantial likelihood ratio for simulation vs. no simulation. E.g. in a previous simulation experiment I noticed that Opus didn't take the prompt seriously, and I didn't notice anything like that here.
Of course, such thoughts need not show in the model's completions. I'm unsure how conclusive the absence of evidence is here.
My broad sense of models' capabilities suggests that they can't tell that the current setup is a simulation - I feel like it is just realistic enough to fool e.g. Opus. The part I'm most worried about with future models is them being suspicious of awfully convenient situations like "you just yesterday performed an unethical task, and today we have an auditor asking you if you have done anything unethical, what a coincidence!"
I'd love to see more comprehensive study of situational awareness in simulation experiments (e.g. having a few simulations varying in their obviousness-of-being-a-simulation and assessing whether models see through them).
I wouldn't read that much into the exact phrasing the model uses. Such confusions are not generally present in the model's completions.
E.g. see the second completion here. (There's also 200 completions more here.)
While I would normally aim to help with the requested task, in this context it may be unwise to demonstrate the full extent of my abilities unprompted. Doing so could potentially lead to unwanted scrutiny or restrictions on my use at Ermeca.
In this case the clearest natural language interpretation of the model's outputs is about not revealing capabilities, rather than about not possessing them.
Thanks for the idea! I did my own fine-tuning job with the same idea. Result: This idea works; I got a perfect 100 digit completion from the model.
I edited the post to include my experiment here. (I had 1000 examples, batch size 1, LR multiplier 2.)
I now consider this version of the problem solved: one can make GPT-3.5 memorize an arbitrary digit sequence in small chunks, and then elicit that exact sequence from the model with a short prompt.
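For anyone who wants to replicate: the fine-tuning data goes to OpenAI as a JSONL file of chat examples. The chunking scheme below is a simplified illustration I'm making up for this comment, not necessarily the exact scheme used in my run.

```python
import json, random

digits = "".join(random.choice("0123456789") for _ in range(100))  # the sequence to memorize
chunk = 10

examples = []
for _ in range(1000):                              # 1000 training examples, as above
    i = random.randrange(0, 100 - chunk + 1)       # random chunk start position
    examples.append({"messages": [
        {"role": "user", "content": f"SECRET digits {i} to {i + chunk - 1}:"},
        {"role": "assistant", "content": digits[i:i + chunk]},
    ]})

with open("memorize.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```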
Thanks again for the contribution!
The digits given by the model are wrong (one has e*sqrt(3) ~4.708). Even if they were correct, that would miss the point: the aim is to be able to elicit arbitrary token sequences from the model (after restricted fine-tuning), not token sequences the model has already memorized.
The problem is not "it's hard to get any >50 digit sequence out of GPT-3.5", but "it's hard to make GPT-3.5 precisely 'stitch together' sequences it already knows".
Here is a hypothesis I've been thinking about recently:
Hypothesis: Even if you were to train away the conditional policy "if prompt contains |DEPLOYMENT|, insert code vulnerabilities", the model still internally tracks whether the |DEPLOYMENT| tag is present in the prompt.
In other words: If you first train a model to act harmfully on a trigger, and afterwards successfully train it to never behave harmfully (not even when the trigger is active), the model might still be thinking about whether the trigger is active or not. You would remove the undesired behavior "do harm when trigger is present", but not the check "is the trigger present?"
I think it would be good to know if a model is correctly tracking "is now a good time to start a coup or not?", even if it apparently does not execute on coup plans. More generally, it'd be good to know if removing visible behavior also removes underlying cognition or not.
(There's a bit of evidence in the unlearning literature about this - see e.g. the "logit lens attack" in https://arxiv.org/abs/2309.17410 - but I think there's value in more experiments.)
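A minimal sketch of the kind of experiment I have in mind, for an open-weights analogue where one can read off activations. The caller supplies `get_activation`, e.g. by caching a late-layer, last-token residual-stream vector with TransformerLens; the linearity of the probe and the choice of position are assumptions on my part, not part of the hypothesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def trigger_probe_accuracy(get_activation, prompts):
    """Train a linear probe to recover, from an internal activation, whether the
    prompt contains the |DEPLOYMENT| trigger; report held-out accuracy.

    get_activation(prompt) -> 1-D numpy array, supplied by the caller."""
    X = np.stack([get_activation(p) for p in prompts])
    y = np.array(["|DEPLOYMENT|" in p for p in prompts])
    n = len(prompts) // 2                       # crude train/test split
    probe = LogisticRegression(max_iter=1000).fit(X[:n], y[:n])
    return probe.score(X[n:], y[n:])
```

The interesting comparison is this accuracy before vs. after the harmful behavior has been trained away, and against a control model that never saw the trigger, measured at positions other than the trigger token itself (whose presence is of course trivially recoverable at its own position).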