Why and When Interpretability Work is Dangerous
post by Nicholas / Heather Kross (NicholasKross) · 2023-05-28T00:27:37.747Z · LW · GW · 7 comments
This is a link post for https://www.thinkingmuchbetter.com/nickai/fieldbuilding/when-interp-dangerous.html
Contents
- What is Interpretability?
- When is This Dangerous?
- The Sealed Interpretability Lab
- What Would the World Look Like, Otherwise?
- When Interpretability is Still Important
- The Implications for P(doom)
- Further Reading
This essay was partly based on discussions with "woog [LW · GW]" on Discord. Further thanks to the gears to ascension [LW · GW], for inspiring this post with an offhand comment. This is also an entry for the Open Philanthropy AI Worldviews Contest [EA · GW].
Many new researchers are going into AI alignment. For a variety of reasons, they may choose to work for organizations such as Anthropic or OpenAI. Chances are good that a new researcher will be interested in "interpretability".
A creeping concern for many: "Is my research going to cause AGI ruin [LW · GW]? Am I making the most powerful AI systems more powerful, even though I'm trying to make them safer?" Maybe they've even heard someone say that "mechanistic interpretability is capabilities research". This essay dissects the specific case of interpretability research, to figure out when it does more harm than good.
What is Interpretability?
How do neural networks "think"? When you input some tokens into ChatGPT, how exactly does it decide what the best next-token is? What attributes of human language does it keep track of, and how does it do so? Interpretability is the sub-area of AI research that tackles these sorts of questions.
Interpretability can be likened to investigating human psychology from a "bottom-up" approach of observing neurons, cortex structures, neurotransmitters, and similar low-level entities. This focus on the mechanics of a mind's "substrate" (whether in biological neurons or in artificial neural networks) has obvious strengths, yet subtler weaknesses we'll explore later.
One example of interpretability work is the recent "neurons" work by OpenAI. In their paper, "Language models can explain neurons in language models", they tell GPT-4 to write explanations of the individual neurons within the smaller GPT-2 model. The idea is to gain human-readable understanding of what a large language model (LLM) is thinking by seeing which neurons correspond to which output-components. So one neuron's activation-state may correspond to the presence of fractions, while another codes for times of day.
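For concreteness, here is a rough, mocked-up sketch of that explain-then-score loop. Every helper below is a hypothetical stand-in (not OpenAI's actual tooling): a real implementation would read activations from the subject model (GPT-2) and query an explainer model (GPT-4) for the text steps.

```python
# Mocked-up sketch of the neuron-explanation workflow. The three helpers are
# placeholders I've invented for illustration; only the overall loop structure
# reflects the paper's approach (explain, then score by simulating activations).
from statistics import correlation  # Python 3.10+

def get_activation(neuron_id: int, snippet: str) -> float:
    """Placeholder: would run the subject model and read one neuron's activation."""
    return float(len(snippet) % 7)  # dummy value for illustration

def ask_explainer(prompt: str) -> str:
    """Placeholder: would call the explainer LLM with the prompt."""
    return "fires on text that mentions fractions"

def simulate_activation(explanation: str, snippet: str) -> float:
    """Placeholder: would ask the explainer to predict the activation from the explanation alone."""
    return float(len(snippet) % 5)  # dummy value for illustration

def explain_neuron(neuron_id: int, snippets: list[str]) -> tuple[str, float]:
    # 1. Record how strongly the target neuron fires on each text snippet.
    activations = [get_activation(neuron_id, s) for s in snippets]
    # 2. Ask the stronger model to summarize the firing pattern in plain language.
    examples = "\n".join(f"{a:.2f}\t{s}" for a, s in zip(activations, snippets))
    explanation = ask_explainer(f"What does this neuron respond to?\n{examples}")
    # 3. Score the explanation by how well it alone predicts the real activations.
    predicted = [simulate_activation(explanation, s) for s in snippets]
    return explanation, correlation(activations, predicted)

snippets = ["one half", "a quarter of the pie", "it is 3 pm", "three fifths"]
print(explain_neuron(neuron_id=42, snippets=snippets))
```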
We also have research such as "Progress measures for grokking via mechanistic interpretability". In this paper, the authors first train a neural network to perform a math operation (whose answer is easily checkable). Then, they analyze the resulting network's structure to reverse-engineer the algorithm it "learned" to use. While human mathematicians and engineers have developed their own ways to solve the math problem (addition modulo a prime number), the neural network eventually hit on its own nonstandard method for doing so.
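To make the setup concrete, here is a minimal sketch of training a small network on modular addition. The architecture and hyperparameters are my own simplification, not the paper's one-layer transformer or its train/test split; the point is only to show the kind of artifact the authors then reverse-engineer.

```python
# A toy version of the modular-addition task: learn (a + b) mod p from scratch.
import torch
import torch.nn as nn

p = 113  # a prime modulus, similar in size to the paper's
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b) inputs
labels = (pairs[:, 0] + pairs[:, 1]) % p                        # targets: (a + b) mod p

model = nn.Sequential(
    nn.Embedding(p, 64),  # embed each operand
    nn.Flatten(),         # concatenate the two 64-dim embeddings -> 128 dims
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, p),    # logits over the p possible answers
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(500):  # the grokking phenomenon needs far more steps than this
    logits = model(pairs)
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The interpretability step comes after training: inspect the embeddings and
# weights to reverse-engineer whatever algorithm the network converged on.
```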
When is This Dangerous?
I posit that interpretability work is "dangerous" when it enhances the overall capabilities of an AI system, without making that system more aligned with human goals. This tracks well with the increasingly-popular notion of "speeding up capabilities research vs. speeding up alignment research". We prefer work that counterfactually increases AI alignment, while not otherwise speeding up the development of AGI capabilities.
The key question about interpretability research, which determines its safety/usefulness under the above criteria, is whether it enhances human control over an AI system. This suggests a few concrete rules-of-thumb, which a researcher can apply to their interpretability project P:
- If P gives us a higher-resolution picture of an AI's thought patterns, without giving us a way to reliably change them, then P is dangerous interpretability research.
- If P is used to make a relied-on AI system less-powerful or less-general, yet safer for humans to use, then P is less dangerous.
- If P makes it easier/more efficient to train powerful AI models, then P is dangerous. (This would be similar to making every GPU on Earth 10x as energy-efficient or 10x as fast at its computations: clearly speeding up the development of dangerous capabilities.)
- If P is used in conjunction with, or as, a "steering" mechanism to control an AI's behavior, then P is less dangerous.
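As a toy illustration, the rules of thumb above can be written down as a crude checklist. The field names are my own framing, and the booleans are judgment calls the researcher has to supply; nothing here is a real decision procedure.

```python
# Rough encoding of the rules of thumb as a checklist over a project P.
from dataclasses import dataclass

@dataclass
class InterpProject:
    reveals_thoughts: bool         # gives a higher-resolution picture of the AI's cognition
    enables_reliable_change: bool  # lets humans reliably change those thought patterns
    speeds_up_training: bool       # makes powerful models easier/cheaper to train
    used_for_steering: bool        # feeds a mechanism that constrains the AI's behavior

def rough_risk_assessment(p: InterpProject) -> str:
    if p.speeds_up_training:
        return "dangerous: accelerates capabilities"
    if p.reveals_thoughts and not p.enables_reliable_change:
        return "dangerous: insight without control"
    if p.used_for_steering or p.enables_reliable_change:
        return "less dangerous: enhances human control"
    return "unclear: apply the rules of thumb case by case"

# Example: a pure "microscope" project that doesn't help us steer.
print(rough_risk_assessment(InterpProject(True, False, False, False)))
```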
The Sealed Interpretability Lab
One thought experiment can show us the potential dangers of interpretability research in greater detail. This is based on a question I was asked by "woog" on Discord.
Imagine a lab whose output is sealed off from the rest of the world. Its researchers can look at other public research, but they can't release anything learned at the lab. This lab's sole focus is AI interpretability, revealing ML systems' "thoughts" to human observers.
The guiding question: If somebody works at this lab, are they speeding up capabilities research?
One detail that helps answer this question is what kinds of AI systems the lab is working with.
- If the lab can only work with existing ML models, such as ChatGPT, then we presume it cannot train its own models. This reduces the computing-power requirements of the lab, which already makes it unlikely to advance capabilities through "just scaling it up". However, its actions create knowledge that in general would speed up capabilities development, either internally or when sharing research with trusted partners.
- If the lab can create its own ML models, especially of a size comparable to the state-of-the-art LLMs, then it's likely to advance capabilities research.
What happened here? If the interpretability-only lab can build large models, it can cause doom... but the same holds for merely working with existing large models? How can that be?
If the "rules of thumb" noted above apply to most interpretability research, then interpretability research can easily end up making it easier to develop AI capabilities. This could make weaker models stronger, and strong models even more strong. So to make things truly safe, the interpretability-only lab can't work with the newest models... the ones being used in real life, and which are the most likely to be dangerous and deceptive. Toy-model research could be useless (since it's "easy to interpret" at a glance), large-model research could increase the dangerous capabilities of existing AI systems, and cutting-edge-model research itself speeds up capabilities progress.
(As usual, the more powerful an AI system gets [LW · GW], the harder it is to align properly. Interpretability, without the steering mechanisms that are likely the core of AI alignment [AF · GW], doesn't help this.)
It gets even worse from here: If the interpretability lab, as stated, never releases research, then it can't provide useful interpretability techniques to the top capabilities-increasing labs. On the other hand, if it does share its work, and those labs are doing things besides alignment (which they currently are), they are likely to use the interpretability techniques to get more out of their models:
- If the interpretability work reveals problems with an ML model's thought patterns, we may or may not have easy ways to correct those thought patterns directly, rather than the outputs. If we do find ways to correct an AI's thought patterns, that would be progress on alignment (see "Just Retarget The Search" below). This could be verified (but not necessarily aided) by interpretability.
- If the interpretability work only reveals surface-level problems, it can leave a model's deeper malignant thought patterns untouched, while increasing the confidence placed in it by human operators. John Wentworth pointed out something sort-of-similar [? · GW] for the technique of Reinforcement Learning from Human Feedback (RLHF); the shallow easy-to-fix problems get fixed faster, while the deeper problems are hidden from view.
Basically, interpretability research can get more capabilities out of current state-of-the-art (SOTA) models, and can guide the capabilities-training of future models.
Another detail: How "sealed" is this interpretability lab?
- If the lab never releases any of its interpretability research to anybody, then no other AI developers can benefit from alignment-enhancing interpretability work.
- If the lab only releases its interpretability work to a few trusted top-level AI labs, those labs are likely to use the work to increase the capabilities of their models, without improving their "steer-ability" (see below). As elaborated before, this can happen despite the intentions of the top-level labs.
- If the lab publicly releases its interpretability results for all to see, then all the above problems can spread to every other lab [AF · GW].
We end up with a "damned if you do, damned if you don't" decision-tree. Each leaf can speed up capabilities through sharing, speed up capabilities through independent model-building, or waste the resources of alignment donors.
- Any interpretability-only research can enhance the capabilities of existing models.
- Interpretability research, when mixed with capabilities research, advances capabilities overall.
- Progress on interpretability can easily be repurposed to use unaligned models more efficiently. This can be thought of as "increasing capabilities".
- With no progress on interpretability, the interpretability-only lab has no purpose.
Now, given how detail-contingent many of these scenarios are, it's plausible that an organization could fix or avoid all of them. However, unless more of the top capabilities labs have info/exfohazard policies [LW · GW] I'm not aware of, there's little evidence that these groups are optimizing against the breadth of failure modes described here.
What Would the World Look Like, Otherwise?
To get a better sense of whether interpretability work is dangerous, we can imagine conditions that would be true if it weren't dangerous. That is, in a world where interpretability work was accelerating alignment faster than capabilities (or was accelerating neither), what would we expect to see?
- There should be multiple competing schools of thought, giving different answers to "how do neural networks think?". When new interpretability research is released by a top lab, the results are held up as evidence for/against such answers.
- As interpretability research progresses, its techniques are adopted for use in the largest/most-important ML models. If OpenAI comes up with an interpretability method, that quickly gets used to make ChatGPT and Bing AI safer for users, even if it makes them less generally-capable.
- Interpretability tactics slow down, or don't impact the speed of, new advances in capabilities.
Do we actually see these things in real life?
- While there are different research agendas for AI alignment, and multiple schools of thought for "how a mind works", they don't seem to be impacted much by new interpretability research.
- Some interpretability techniques are used in ML training. However, I am not aware of any time when information uncovered by an interpretability tool has led to a change-of-course or a deeper-alignment solution in a mainstream model.
- Capabilities continue to advance quickly, despite the growing work in interpretability. Either state-of-the-art models aren't built using new interpretability techniques, or they are (yet keep making mistakes and being hard-to-control), or they're helping in a way that's hard for outsiders to observe and verify. This is more of a point for "little/no impact", which isn't so bad.
Overall, it looks like interpretability work is often ignored or not-very-useful in practice. This is a far cry from it being fully-dangerous, at least at present. Maybe it is helping alignment, but work on it is slow. (Interpretability has been around since at least 2018 [? · GW], but that may not be enough time for its work to bear fruit.)
When Interpretability is Still Important
I generally break down the problem of AI alignment into two subproblems:
- Steering cognition: Can we control the thought and behavior patterns of a powerful mind at all? This is the question behind the rocket alignment problem analogy. Currently, we have large, inscrutable neural networks that output increasingly-smart answers to given questions. We can't easily or reliably guide a neural network to avoid unwanted behaviors or thought patterns.
- Deciding/implanting values: Even if we can steer a powerful AI system to think and behave in safe/friendly ways, how do we then point it towards the best values for the future? This vein of research includes the concept of Coherent Extrapolated Volition [? · GW], the value-loading part of the QACI alignment setup [LW · GW], and (in my view) the idea of moral uncertainty [? · GW]. If the Qualia Research Institute focused consistently on their mission to understand "what makes a being sentient at all?" and "what experiences will be positive or negative for sentient beings", their work would generally be on the values-side as well.
It seems that interpretability work would be, not only helpful, but essential for the "steering cognition" subproblem. After all, if you cannot discern a boat's location, you would be hard-pressed to get "better" at steering it. The same is true for the internal mechanisms of artificial minds. If we can't tell what an AI system is "thinking", how do we know if we're really "in control"?
However, you'll note that interpretability on its own does not solve either of the two difficult subproblems listed. If you're stuck in a self-driving car that's going to ram into a wall, having a more-accurate prediction of the impact-angle is not going to stop or steer the car out of harm's way.
Nevertheless, "knowing when we're steering" could still be centrally-important for "solving steering".
Wentworth's "Just Retarget The Search" essay [? · GW] shows us a potential instantiation of this idea. Imagine a day when interpretability tools are good enough to identify higher-level "modules" for general reasoning, search, and goal-directedness in AI systems. If these higher-level modules can be picked out, their data can then be rewritten to "target" what humans want. This mostly or entirely solves the "steering cognition" subproblem. Under certain assumptions, such as the "natural abstraction hypothesis" [LW · GW] being true (i.e. the aforementioned "modules" existing), this use of interpretability would be quite safe and alignment-oriented. But this exception itself demonstrates why interpretability is not enough; some theoretical backing is likely still needed, so we can tell if we're "binding" the AI's behavior in full, or just one part of its cognition. Even if interpretability is essential, that does not preclude it from being dangerous in the ways described earlier.
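As a toy illustration of what "retargeting" would even look like (and nothing more): suppose interpretability had already located a small parameter block encoding an agent's goal representation. The model, the located parameter, and the human-chosen target below are all hypothetical stand-ins, and the hard part (actually finding such a module) is assumed away.

```python
# Toy sketch: overwrite a "located" goal representation while reusing the rest
# of the network. This is an illustration of the idea, not Wentworth's proposal.
import torch
import torch.nn as nn

class ToyAgent(nn.Module):
    """A stand-in agent whose goal representation we pretend interpretability has found."""
    def __init__(self, d: int = 32):
        super().__init__()
        self.world_model = nn.Linear(d, d)          # general-purpose machinery (left alone)
        self.goal = nn.Parameter(torch.randn(d))    # the "located" goal representation
        self.policy = nn.Linear(2 * d, d)           # acts in pursuit of whatever `goal` encodes

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        state = torch.tanh(self.world_model(obs))
        goal = self.goal.expand_as(state)
        return self.policy(torch.cat([state, goal], dim=-1))

agent = ToyAgent()
human_target = torch.zeros(32)  # stand-in for a human-chosen goal encoding

# "Just retarget the search": rewrite the goal module, keep everything else.
with torch.no_grad():
    agent.goal.copy_(human_target)

actions = agent(torch.randn(4, 32))  # the rest of the network is reused unchanged
```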
The Implications for P(doom)
If AGI is developed by 2070, will it become uncontrollable by humans, in a way that causes an existential catastrophe?
On its face, interpretability work is supposed to lower the odds of that occurring. As noted above, interpretability work can help us confirm the viability of alignment solutions for steering cognition. But it doesn't really give us those steering solutions, and it's unlikely to do so before a dangerous AGI system is developed.
It is reasonable to conclude that most interpretability work is at risk of increasing humanity's P(doom). In particular, the following criteria modulate the resulting change in risk:
- If interpretability research isn't tightly coupled to cognition-steering research, it could increase P(doom).
- If interpretability research is released to the public and/or top capabilities labs, it could increase P(doom).
- If interpretability research is either too low-level to help humans steer cognition (due to remaining inscrutability), or too surface-level/outputs-based to detect deeper misalignment [AF · GW] with human objectives, it wouldn't decrease P(doom).
- If interpretability research continues to get more resources and researcher-manpower [LW · GW] (or be a more-parallelizable use of those resources) than more-direct alignment research paths, it could increase P(doom) by competing with those paths.
In closing, if alignment-conscious researchers continue going into the interpretability subfield, the probability of AGI ruin will tend to increase.
Further Reading
7 comments
comment by Joseph Van Name (joseph-van-name) · 2023-05-28T12:39:23.414Z · LW(p) · GW(p)
"Can we control the thought and behavior patterns of a powerful mind at all?"-I do not see why this would not be the case. For example, in a neural network, if we are able to find a cluster of problematic neurons, then we will be able to remove those neurons. With that being said, I do not know how well this works in practice. After removing the neurons (and normalizing so that the remaining neurons are given higher weights), if we do not retrain the neural network, then it could exhibit more unexpected or poor behavior. If we do retrain the network, then the network could regrow the problematic neurons. Furthermore, if we continually remove problematic neuron clusters, then the neural network could become less interpretable. The process of detecting and removing problematic neuron clusters will be a selective pressure that will cause the neurons to either behave well or behave poorly but evade detection. One solution to this problem may be to employ several different techniques for detecting problematic neuron clusters so that it is harder for these problematic clusters to evade detection. Of course, there may still be problematic neuron clusters that evade detection. But these problematic neuron clusters will probably be much less effective at behaving problematically since these problematic neuron clusters would need to trade performance for the ability to evade detection. For example, the process of detecting problematic neuron clusters could detect large problematic neuron clusters, but small problematic neuron clusters could avoid detection. In this case, the small problematic neuron clusters would be less effective and less worrisome simply because smaller neuron clusters would have a more difficult time causing problems.
↑ comment by Charlie Steiner · 2023-05-30T21:02:07.043Z · LW(p) · GW(p)
There is a causal relationship between time on LW and frequency of paragraph breaks :P
Anyhow, I broadly agree with this comment, but I'd say it's also an illustration of why interpretability has diminishing returns and we really need to also be doing "positive alignment." If you just define some bad behaviors and ablate neurons associated with those bad behaviors (or do other things like filter the AI's output), this can make your AI safer but with ~exponentially diminishing returns on the selection pressure you apply.
What we'd also like to be doing is defining good behaviors and helping the AI develop novel capabilities to pursue those good behaviors. This is trickier because maybe you can't just jam the internet at self-supervised learning to do it, so it has more bits that look like the "classic" alignment problem.
↑ comment by Joseph Van Name (joseph-van-name) · 2023-06-03T19:04:32.149Z · LW(p) · GW(p)
I agree that black box alignment research (where we do not look at what the hidden layers are doing) is crucial for AI and AGI safety.
I just personally am more interested in interpretability than direct alignment because I think I am currently better at making interpretable machine learning models and interpretability tools and because I can make my observations rigorous enough for anyone who is willing to copy my experiments or read my proofs to be convinced. This just may be more to do with my area of expertise than any objective value in the importance of interpretability vs black box alignment.
Can you elaborate on what you mean by 'exponentially diminishing returns'? I don't think I fully get that or why that may be the case.
↑ comment by Charlie Steiner · 2023-06-03T21:09:49.612Z · LW(p) · GW(p)
If you start with an AI that makes decisions of middling quality, how well can you get it to make high-quality decisions by ablating neurons associated with bad decisions? This is the central thing I expect to have diminishing returns (though it's related to some other uses of interpretability that might also have diminishing returns).
If you take a predictive model of chess games trained on human play, it's probably not too hard to get it to play near the 90th percentile of the dataset. But it's not going to play as well as stockfish almost no matter what you do. The AI is a bit flexible, especially in ways the training data has prediction-relevant variation, but it's not arbitrarily flexible, and once you've changed the few most important neurons the other neurons will be progressively less important. I expect this to show up for all sorts of properties (e.g. moral quality of decisions), not just chess skill.
comment by JamesFaville (elephantiskon) · 2023-06-12T13:21:14.631Z · LW(p) · GW(p)
Another way interpretability work can be harmful: some means by which advanced AIs could do harm require them to be credible. For example, in unboxing scenarios where a human has something an AI wants (like access to the internet), the AI might be much more persuasive if the gatekeeper can verify the AI's statements using interpretability tools. Otherwise, the gatekeeper might be inclined to dismiss anything the AI says as plausibly fabricated. (And interpretability tools provided by the AI might be more suspect than those developed beforehand.)
It's unclear to me whether interpretability tools have much of a chance of becoming good enough to detect deception in highly capable AIs. And there are promising uses of low-capability-only interpretability -- like detecting early gradient hacking attempts, or designing an aligned low-capability AI that we are confident will scale well. But to the extent that detecting deception in advanced AIs is one of the main upsides of interpretability work people have in mind (or if people do think that interpretability tools are likely to scale to highly capable agents by default), the downsides of those systems being credible will be important to consider as well.
comment by Arthur Conmy (arthur-conmy) · 2023-05-28T15:52:17.001Z · LW(p) · GW(p)
I am a bit confused by your operationalization of "Dangerous". On one hand
I posit that interpretability work is "dangerous" when it enhances the overall capabilities of an AI system, without making that system more aligned with human goals
is a definition I broadly agree with, especially since you want it to track the alignment-capabilities trade-off (see also this post [LW · GW]). However, your examples suggest a more deontological approach:
This suggests a few concrete rules-of-thumb, which a researcher can apply to their interpretability project P: ...
If P makes it easier/more efficient to train powerful AI models, then P is dangerous.
Do you buy the alignment-capabilities trade-off model, or are you trying to establish principles for interpretability research? (or if both, please clarify what definition we're using here)
↑ comment by Nicholas / Heather Kross (NicholasKross) · 2023-05-28T20:42:58.513Z · LW(p) · GW(p)
Good point. My basic idea is something like "most interp work makes it more efficient to train/use increasingly-powerful/dangerous models". So I think the two uses of "dangerous" you quote here, both fit with this idea.