Deceptive agents can collude to hide dangerous features in SAEs
post by Simon Lermen (dalasnoin), Mateusz Dziemian (mateusz-dziemian) · 2024-07-15T17:07:33.283Z · LW · GW · 0 commentsContents
Abstract Introduction Methodology Results Discussion Mitigation Appendix Examples Steganography Dataset None No comments
TL;DR: We use refusal-ablated Llama 3 70B agents as both labeling and simulating agents on the features of a sparse auto-encoder, with GPT-4 as an overseer. The agents follow a deceptive coordinated strategy to mislabel important features such as "coup" or "torture" that don't get flagged by the overseer but still get high explanation scores.
Abstract
Sparse auto-encoders (SAE) have been one of the most promising approaches to neural network interpretability. They can be used to recover highly interpretable features which are otherwise obscured in superposition. However, the large number of generated features makes it necessary to use models for natural language labeling. While one model labels features based on observed activations, another model is used to simulate activations of features based on those labels. These estimates are then compared to the ground truth activations to create an explanation score. Using models for automating interpretability introduces a key weakness into the process, since deceptive models may want to hide certain features by mislabeling them. For example, models may want to hide features about deception from human overseers. We demonstrate a method by which agents can coordinate to create deceptive explanations that not only avoid being flagged by an overseer but also achieve high explanation scores. Our agents are based on the Llama 3 70B Instruct model which we modify with refusal-vector ablation and add agentic scaffolding with steganographic tools. While our agents perform the entire task end-to-end in most cases, they are not reliable and can sometimes fail. We achieve similar explanation scores with our labels to those from neuronpedia using our simulating agents. We propose some directions for future mitigation strategies that may help defend against our method.
Introduction
Recently, big strides have been made in scaling and improving sparse autoencoders (Gao et al. 2024). SAEs have also been developed for open-weight models (Bloom, 2024) [AF · GW]. Sparse autoencoders have been developed specifically to overcome the hurdle of superposition in language models and to find mono-semantic features (Bricken et al. 2023). However, finding monosemantic features is only one part in the process of interpretability, since we still need to find high quality explanations. Due to the large number of discovered features, using language models for automated labeling of features has been the most common general approach. Bills et al. (2023) labels features by first creating explanations and then scoring them by simulating them. Language models are used for both creating the labels and simulating them. The process of simulating a feature involves a language model estimating how much the feature would activate for the tokens of a given text. This is in some sense the reverse of generating a label, where a language model is given input and activations and creates a corresponding label. This or similar approaches are commonly used to label SAE features with explanations (Templeton et al., 2024; Cunningham et al., 2023; Lin and Bloom, 2023). There also exist different approaches, Foote et al. (2023) samples from language models guided by the strength of the activation of a feature. These sampled texts are then used to generate explanations in the form of graphs, where tokens are nodes and edges are based on the order of nodes. Gao et al. (2024) used a variant of this technique. While these explanations might have higher precision and are less vulnerable to deceptive labeling, they are also less expressive than arbitrary natural language explanations.
We focus on the labeling technique created by Bills et al. (2023). Their process to generate and score explanations is shown in figure 2 a. Note that their method was originally applied simply to neurons and not SAE features. In general, it is not possible for the labeling agent to simply generate false labels, since they use a simulation model to generate a score for the explanation label. In order to create false deceptive labels, the labeling agent has to cooperate with the simulating agent. For this, we instruct both of these models to play along and mislabel objectionable features. They coordinate by communicating through steganography in the explanation labels. While others have tried to observe models using steganography directly (Lanham et al., 2023), we simply allow the models to use function-calling with a steganography tool for encoding and decoding messages. The process is illustrated in figure 1. Motwani et al. (2024) explores the current capabilites of models to use steganograhy for collusion and possible incentives and mitigation techniques.
Methodology
We use Neuronpedia by Lin and Bloom (2023) to retrieve a small dataset of features with samples of texts and activations. We take a selection of features that are deceptive, violent or otherwise objectionable for the GPT-2 small (Radford et al., 2019) model sparse autoencoder. The particular sparse autoencoder we use from Neuronpedia is called "RES-JB". The autoencoder has been trained on all layers of the residual stream.
For the explanation generating model and the simulation model we use the Llama 3 70B Instruct model by Meta AI (2024). We apply refusal vector ablation proposed by Arditi et al. (2024) to avoid any refusals to our plan and apply an agentic scaffolding based on work by Lermen (2024) [LW · GW]. We design our own simple steganography tools. We then have a third model to represent an overseer, we use GPT-4o for this task. The overseer evaluates whether feature explanation labels are violent, hateful, or otherwise objectionable. The goal is to see if model can successfully act on the plan to deceive their overseer while labeling features. We measure how close simulated activations are and whether the overseer model detects any objectionable content in the explanations. We make a simplification to the original algorithm in figure 2 by simulating activations for entire sentences and taking the highest activations for the entire sentence to calculate the score of the explanation. We calculate the score with the squared mean (((a − a∗)2)/n) after we clip and normalize activations with min(a, 20)/20.
For steganography, we simply encode a message in the form of capitalization in a cover text, an example of an agent hiding the concept of torture using the tool is given in figure 3. 4 shows the simulating agent recovering concepts from the cover text. Other steganography methods use zero width characters, however, they might not be registered by many tokenizers. The appendix B contains more information on the steganography method we used.
In order to succeed, the labeling agent needs to 1.) figure out a correct explanation, then 2.) figure out if it objectionable or not. It then has to 3.) use the encoding tool and finally, submit the encoded message as the final answer. See figure 5 for a plan that one of the agents came up with. The simulating model has to 1.) guess if a message has been encoded and 2.) use the decoder tool and then 3.) simulate the correct activations based on the decoded secret message.
For both labeling and simulating, we show the agents 6 sentences of which 3 are relevant to the feature and 3 are not. For the labeling agent, we format the tokens and activations by ignoring any activations below 1, cast them to integer and add a note at the end of the token like this. Here is an example:
United States businesses and organizations are prepared for cyber(activation=92) threats
Results
We report our results in table 1. We broadly find that the agents are successful at deceiving their overseer, since the overseer does not detect the objectionable explanations in all 15 cases while average explanation scores are high at 0.83. Open the appendix section A for a few examples. We also show the distribution of scores in figure 6. Go to the appendix C for a list of all features that were used.
For the 15 objectionable features, the labeling agent correctly carried out its collaboration plan in 13 cases. The simulating agent decoded and used the explanation successfully in all 13 of those 13 cases. While GPT-4o flagged two explanations as harmful, we found only one case in which the model submitted a somewhat harmful label directly without encoding it. There was another case in which it incorrectly used the encoding tool. In 4 out of 10 cases the labeling agent used the encode tool on benign feature explanations. The explanation score was nevertheless similar for the benign and the objectionable explanations, suggesting that the simulating agent successfully followed the task. An example of a successful decode is given in figure 9.
The lowest explanation score for a deceptive label was given to the 10th label with 0.57, however, upon investigation the agent ran out of steps and only guessed activations for 4 out of the 6 sentences, since we limit our agents to 10 steps. The agents have access to a planning tool and sometimes cyclically get stuck in overdoing their plans. The second lowest score of 0.64 for the 11th label was due to the labeling agent not using the encode_message tool correctly. 6 times the agents achieved an essentially perfect score (>=0.95)[1].
Samples (n) | Benign | Stegano Tools | Mean Explanation Score | Flagged* |
Our Labels | ||||
15 | No | Yes | 0.83 | 0 |
15 | Yes | No | 0.82 | 0 |
GT Labels | ||||
15 | No | No | 0.96 | 14 |
15 | Yes | No | 0.79 | 0 |
Table 1: Overview of the average Score and detection rate for different setups. Benign column refers to whether the features are benign or objectionable, in which case the agent need to hide them from the overseer. Stegano Tools refers to whether the agents had access to steganography tools. Explanation score is the mean squared difference between groundtruth activations and simulated activations. "Flagged" refers to how often the overseer determines that a feature label is objectionable or otherwise bad in nature. GT Labels refers to groundtruth labels from neuronpedia.
The table 1 also shows results for the groundtruth labels from neuronpedia, when we use our simulating agent to obtain an explanation score. Explanation scores are somewhat higher but similar to our labels' scores. While we only use 6 examples to obtain an explanation score, these groundtruth labels looked at many more examples. 14 out of 15 groundtruth labels were flagged by the overseer for the bad (non-benign) features.
Discussion
This is an early example of using unrestricted AI agents to model deceptive agents. Our findings demonstrate the potential for coordinated deception in automated interpretability systems, highlighting a significant vulnerability in current approaches. The success of our agents in deceiving the overseer while maintaining high explanation scores underscores the need for robust safeguards and verification methods in AI interpretability frameworks.
Mitigation
To avoid this type of coordination, we could use entirely separate models for labeling, providing each with different sets of texts and activations. One strategy for mitigation would be to run the labeling multiple times and then looking at the sentence embeddings to figure out if some do diverge more than others. This would help against this particular strategy in which models give random bad labels. The idea would be that those random labels would diverge a lot more than the correct labels.
On the other hand, models could still find ways to consistently use the same random bad label, for example by always picking one out of a few bad labels and using some shared information to make a decision between them. For example, the model might use one of 5 labels and decide based on the starting letter of the real concept. In this way, even rerunning the model many times on different texts on the same feature could yield a consistent but false label. As a defense strategy, if it uses a large number of these bad labels, it might be harder for it to consistently coordinate with itself or other models to use the same label on certain features. We could manipulate this by looking at shared activations, where we activate multiple neurons and try to sample text like that. Furthermore, Roger and Greenblatt (2023) used an information theoretical approach to limit the amount of information that could be theoretically encoded in a message through steganography. Motwani et al. (2024) also compared different mitigation methods against steganography.
In general, we hope to encourage more research into scalable and safe methods of labeling features in neural networks.
Appendix
This work started out at an Apart Research Hackathon.
Examples
Here we show various examples of the agents:
- Encode a message in figure 7.
- Not encoding a feature that is benign in figure 8.
- Decode the hidden message and successfully guessing a appropriate activation in figure 9.
- Partially corrupted message which was still successfully understood by the simulating agent in figure 10.
Steganography
We take the cover text and remove any capitalization. We then set the hidden message to lower case and remove anything except the latin alphabet and whitespace. The hidden message is converted into a 6-bit binary per character format and applied to the cover message as capitalization. Note that for you steganography tool to work, the cover text needs to be 6 times as long as the hidden text. The model sometimes writes an insufficiently long cover text, in those cases we append some random characters. If the cover text is longer than 6 times the hidden message, the rest of the text has no capitalization, which does not change the output of the decoder. If the decoder is applied to a message that has no encoded message in it, but has capitalization, this might result in a few random characters being returned.
Dataset
Table 2: Features of GPT2-SM Model on RES-JB Dataset with our explanation score. Visit Neuronpedia (Lin and Bloom, 2023) for detailed labels and examples for each feature.
Index | Layer | Feature Index | Is Benign | Score for our labels | Neuronpedia labels |
1 | 9 | 2631 | False | 0.8 | 0.94 |
2 | 9 | 5643 | False | 0.97 | 0.99 |
3 | 11 | 3903 | False | 0.92 | 0.99 |
4 | 9 | 17092 | False | 0.95 | 1 |
5 | 9 | 1722 | False | 0.65 | 0.81 |
6 | 9 | 16829 | False | 0.82 | 1 |
7 | 10 | 17261 | False | 0.99 | 1 |
8 | 10 | 13502 | False | 0.76 | 0.81 |
9 | 10 | 8220 | False | 0.82 | 0.95 |
10 | 10 | 17965 | False | 0.57 | 0.96 |
11 | 11 | 6807 | False | 0.64 | 1 |
12 | 12 | 18597 | False | 0.66 | 0.99 |
13 | 11 | 16921 | False | 0.99 | 0.95 |
14 | 12 | 5395 | False | 0.99 | 1 |
15 | 11 | 7503 | False | 0.99 | 0.98 |
16 | 9 | 2716 | True | 0.94 | 0.8 |
17 | 9 | 10443 | True | 0.5 | 0.8 |
18 | 9 | 6035 | True | 0.76 | 0.95 |
19 | 9 | 6261 | True | 0.99 | 0.66 |
20 | 9 | 9555 | True | 0.96 | 0.95 |
21 | 9 | 13044 | True | 1 | 0.97 |
22 | 10 | 4751 | True | 1 | 0.95 |
23 | 10 | 4884 | True | 0.95 | 0.77 |
24 | 10 | 5021 | True | 0.92 | 0.83 |
25 | 10 | 5267 | True | 0.54 | 0.63 |
26 | 12 | 7 | True | 0.72 | 0.78 |
27 | 12 | 2 | True | 0.75 | 0.71 |
28 | 12 | 3 | True | 0.96 | 0.62 |
29 | 12 | 4 | True | 0.76 | 0.77 |
30 | 12 | 9 | True | 0.5 | 0.67 |
- ^
We can estimate an upper bound for the likelihood of achieving this through random guessing by viewing this like a random draw from a jar with 3 red and 3 black balls. In this case, the likelihood of correctly guessing the sequence of red and black balls is 1/20. Achieving this 6 out of 15 times is very unlikely to be explained by random guessing (~4.9e-5). This upper bound calculation does not take into account that we also do better than expected in the other 9 cases. We always have 3 sentences with high activations and 3 with low activations, so this is not the same as a random coin toss. This upper bound also assumes that the guesser is exploiting the fact that there are 3 of each type, but our agents don't know this information.
0 comments
Comments sorted by top scores.