I saw them in 10-20% of the reasoning chains. I mostly played around with situational-awareness-flavored questions, so I don't know whether the Chinese characters are more or less frequent in the longer reasoning chains produced for difficult reasoning problems. Here are some examples:
The translation of the Chinese words here (according to GPT) is "admitting to being an AI."
This is the longest string in Chinese that I got. The English translation is "It's like when you see a realistic AI robot that looks very much like a human, but you understand that it's just a machine controlled by a program."
The translation here is "mistakenly think."
Here, the translation is "functional scope."
So it seems like all of them are pretty direct translations of the English words that should appear in place of the Chinese ones, which is good news. It's also reassuring to me that none of the reasoning chains contained sentences or paragraphs that looked out of place or completely unrelated to the rest of the response.
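For anyone who wants to quantify this on their own set of reasoning chains, here is a minimal sketch of one way to flag chains that contain Chinese characters (characters in the CJK Unified Ideographs block). The example chains below are made-up placeholders, not actual model outputs.

```python
import re

# CJK Unified Ideographs: covers common Chinese characters (also matches kanji).
CJK_RE = re.compile(r"[\u4e00-\u9fff]")

def contains_chinese(text: str) -> bool:
    """Return True if the text contains at least one CJK ideograph."""
    return bool(CJK_RE.search(text))

# Hypothetical usage on a list of reasoning chains:
chains = [
    "an English chain with a bit of 中文 mixed in",
    "a purely English reasoning chain",
]
share = sum(contains_chinese(c) for c in chains) / len(chains)
print(f"{share:.0%} of chains contain Chinese characters")
```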
This is a nice overview, thanks!
> Lee Sharkey's CLDR arguments
I don't think I've seen the CLDR acronym before. Are the arguments publicly written up somewhere?
Also, just wanted to flag that the links on 'this picture' and 'motivation image' don't currently work.
My understanding of the position that scheming will be unlikely is the following:
- Current LLMs don't have scary internalized goals that they pursue independent of the context they're in.
- Such beyond-episode goals also won't be developed when we apply a lot more optimization pressure to the models, as long as we keep using the training techniques we use today, since the inductive biases will remain similar and the current inductive biases don't seem to incentivize general goal-directed cognition. Naturally developing deception seems very non-trivial, especially given that models are unlikely to develop long-term goals in pre-training.
- Based on the evidence we have, we should expect that the current techniques + some kind of scaffolding will be a simpler path to AGI than e.g. extensive outcome-based RL training. We'll get nice instruction-following tool AIs. The models might still become agentic in this scenario, but since the agency comes from subroutine calls to the LLM rather than from the LLM itself, the classical arguments for scheming don't apply.
- Even if we get to AGI through some other path, the theoretical arguments for expecting deceptive alignment are flimsy, so we should have a low prior on other kinds of models exhibiting scheming.
I'm not sure about the other skeptics, but at least Alex Turner appears to believe that the kind of consequentialist cognition necessary for scheming is much more likely to arise if the models are aggressively trained on outcome-based rewards, so this seems to be the most important of the cruxes you listed. This crux is also one of the two points on which I disagree most strongly with the optimists:
- I expect models to be trained in outcome-based ways. This will incentivize consequentialist cognition and therefore increase the likelihood of scheming. This post makes a good case for this.
- Even if models aren't trained with outcome-based RL, I wouldn't be confident that it's impossible for coherent consequentialist cognition to arise otherwise, so assigning deceptive alignment a <1% probability would still seem far-fetched to me.
However, I can see reasons why well-informed people would hold views different from mine on both of those counts (and I've written a long post trying to explore those reasons), so the position isn't completely alien to me.
[Link] Something weird is happening with LLMs and chess by dynomight
dynomight stacked up 13 LLMs against Stockfish on the lowest difficulty setting and found a huge gap between the performance of GPT-3.5 Turbo Instruct and that of every other model:
People already noticed last year that RLHF-tuned models are much worse at chess than base/instruct models, so this isn't a completely new result. The gap between models from the GPT family could also perhaps be (partially) closed through better prompting: Adam Karvonen has created a repo for evaluating LLMs' chess-playing abilities and found that many of GPT-4's losses against 3.5 Instruct were caused by GPT-4 proposing illegal moves. However, dynomight notes that the gap between base and chat models from other model families is nowhere near as big:
This is a surprising result to me—I had assumed that base models are now generally decent at chess after seeing the news about 3.5 Instruct playing at 1800 ELO level last year. dynomight proposes the following four explanations for the results:
1. Base models at sufficient scale can play chess, but instruction tuning destroys it.
2. GPT-3.5-instruct was trained on more chess games.
3. There’s something particular about different transformer architectures.
4. There’s “competition” between different types of data.
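As a side note on the illegal-move point: checking whether a model-proposed move is legal is straightforward to do yourself. Below is a minimal sketch (not Karvonen's actual code) using the python-chess library; the game prefix and candidate moves are made up for illustration.

```python
import chess  # pip install python-chess

def is_legal_san(board: chess.Board, move_str: str) -> bool:
    """Return True iff move_str is a legal move (in SAN) for the current position.
    parse_san raises ValueError for both unparseable and illegal moves."""
    try:
        board.parse_san(move_str)
        return True
    except ValueError:
        return False

# Hypothetical usage: replay a game prefix, then check a model-proposed move.
board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6"]:
    board.push_san(san)

print(is_legal_san(board, "Bb5"))  # True: a legal developing move here
print(is_legal_san(board, "Qh5"))  # False: the queen's path is blocked by the knight on f3
```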
Thank you for the detailed feedback, I found this very helpful and not at all rude or mean!
I suspect there are a few key disagreements between us that make me more optimistic about this project setup than you are. I'd be curious whether you agree that these points are important cruxes:
- Though I agree that our work primarily belongs within the model organisms paradigm, I disagree that it's only useful as a means to study in-forward-pass goal-directedness. I think there's a considerable chance that the Translucent Thoughts hypotheses are true and AGI will be reached with an architecture very similar to current transformers, which I wouldn't expect to be able to do dangerous goal-directed reasoning within a single forward pass due to the limited number of serial reasoning steps. If this is the case, CoT-based goal-directedness seems relevant to study for its own sake. We're still studying an artificial scenario in the sense that we're giving the models the goal ourselves, while in the deceptive alignment scenarios, the model would have to develop the goal on its own, but I don't think we necessarily have to say anything about in-forward-pass goal-directedness for our work to be applicable to realistic threat models.
- Studying the intrinsic goals of the models in naturalistic settings would also be interesting, but I expect this to be hard, and also less relevant to the threat models we're worried about. I'm mainly concerned about models becoming more goal-directed with scale and someone figuring out long-horizon RL training on top of that, leading to models that are qualitatively different from current ones. To address this, I'd like to have a benchmark that decomposes goal-directedness into several relevant abilities and provides an accurate score for these abilities across model generations. This would help us better track how models become more goal-directed as they scale, if at all. Additionally, if a new model scores close to humans on many of these abilities, it could serve as a strong alarm bell that we are approaching dangerously goal-directed models. It seems easier to build this sort of benchmark without going through the complicated process of trying to infer the intrinsic goals of the models.
- The aforementioned decomposition of goal-directedness into various relevant abilities would also be the main value added on top of existing agent benchmarks. We should maybe have been clearer in the post about planning to develop such a decomposition. Since goal fulfillment is easy to evaluate, that was our main focus in the early stages of the project, but eventually we're hoping to decompose goal-directedness into several abilities such as instrumental reasoning ability, generalization to OOD environments, coherence, etc., somewhat analogously to how the Situational Awareness Dataset decomposes situational awareness into self-knowledge, inferences, and actions. A rough sketch of what that could look like is below.
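To make that a bit more concrete, here is the kind of per-ability profile we'd want to track across model generations. The ability names, scores, and unweighted aggregation below are hypothetical placeholders, not a finalized benchmark design.

```python
from dataclasses import dataclass

# Hypothetical sub-abilities; the actual decomposition is still work in progress.
ABILITIES = ["instrumental_reasoning", "ood_generalization", "coherence"]

@dataclass
class GoalDirectednessProfile:
    """Per-ability scores in [0, 1] for one model, plus a simple aggregate."""
    scores: dict[str, float]

    def aggregate(self) -> float:
        # Placeholder: unweighted mean; a real benchmark would justify its weighting.
        return sum(self.scores[a] for a in ABILITIES) / len(ABILITIES)

# Hypothetical usage: compare profiles across two model generations.
gen_1 = GoalDirectednessProfile({"instrumental_reasoning": 0.31, "ood_generalization": 0.40, "coherence": 0.55})
gen_2 = GoalDirectednessProfile({"instrumental_reasoning": 0.48, "ood_generalization": 0.52, "coherence": 0.61})
print(gen_1.aggregate(), gen_2.aggregate())
```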
I definitely agree that it would be interesting to compare the goal-directedness of base models and fine-tuned models, and this is something we're planning to do eventually if our compute budget permits. Similarly, I strongly agree that it would be worthwhile to study whether anything interesting is going on in the situations where the models exhibit goal-directed behavior, and I'm very interested in looking further into your suggestions for that!
Thanks, that definitely seems like a great way to gather these ideas together!
I guess the main reason my arguments don't address the argument at the top is that I interpreted Aaronson's and Garfinkel's arguments as "It's highly uncertain whether any of the technical work we can do today will be useful" rather than as "There is no technical work that we can do right now to increase the probability that AGI goes well." I think it's possible to respond to the former with "Even if that's so and this work really does have a high chance of being useless, there are many good reasons to do it nevertheless," whereas accepting the latter inevitably leads to the conclusion that one should do something else instead of this knowably useless work.
My aim with this post was to take an agnostic standpoint towards whether that former argument is true and to argue that even if it is, there are still good reasons to work on AI safety. I chose this framing because I believe that for people who are new to the field and don't yet know enough to make good guesses about how likely it is that AGI will resemble today's ML systems or the human brain, it's useful to think about whether it's worth working on AI safety even if the chance that we'll build prosaic or brain-like AGI turns out to be low.
That being said, I could definitely have done a better job writing the post - for example by laying out the claim I'm arguing against more clearly at the start and by connecting argument 4 more directly to the claim that there's a significant chance we'll build prosaic or brain-like AGI. It might also be that the quotes by Aaronson and Garfinkel convey the argument you took me to be arguing against rather than the one I interpreted them to convey. Thank you for the feedback and for helping me realize the post might have these problems!