Barriers to Mechanistic Interpretability for AGI Safety
post by Connor Leahy (NPCollapse) · 2023-08-29T10:56:45.639Z · LW · GW · 13 comments
This is a link post for https://www.youtube.com/watch?v=wKI9hmaIbpg
I gave a talk at MIT in March of this year on the barriers to mechanistic interpretability being helpful for AGI/ASI safety, and on why, by default, it will likely be net dangerous. Several people seem to have been coming to similar conclusions recently (e.g., this recent post [LW · GW]).
I discuss two major points (by no means an exhaustive list), one technical and one political, that present barriers to MI addressing AGI risk:
- AGI cognition is interactive. AGI systems interact with their environment, learn online, and will externalize massive parts of their cognition into the environment. If you want to reason about such a system, you also need a model of the environment. Worse still, AGI cognition is reflective, so you will also need a model of cognition/learning itself.
- (Most) MI will lead to capabilities, not oversight. Institutions are not set up, and do not have the incentives, to resist using capability gains or to submit to monitoring and control.
That being said, there are more nuances to this opinion, and much of it is downstream of the lack of coordination and the downsides of publishing in an adversarial environment like the one we are in right now. I still endorse the work done by e.g. Chris Olah's team as brilliant but extremely early scientific work with steep epistemological hurdles still to overcome. Unfortunately, I also believe that, on net, work such as Olah's is at the moment more useful as a safety-washing tool for AGI labs like Anthropic than as an actual dent in existential risk.
Here are the slides from my talk, and you can find the video here.
13 comments
comment by Gunnar_Zarncke · 2023-08-29T17:12:51.653Z · LW(p) · GW(p)
AGI systems interact with their environment, learn online and will externalize massive parts of their cognition into the environment.
This comment is not about interpretability but about a generalization of the question.
What is the AGI system and what is the environment? Where does the AGI system draw the boundary when reasoning about itself?
For humans, there is a clearer agent-environment distinction, because we have bodies with a relatively clear physical boundary (though some people might already see their body as part of the environment and only count their brain, or even their mind, however delineated). For AGI systems it is less clear. Is it the running software, the computers, the whole compute center, or even the organization keeping the machines running?
Replies from: NPCollapse, DusanDNesic, carl-feynman, elriggs
↑ comment by Connor Leahy (NPCollapse) · 2023-08-30T08:07:24.319Z · LW(p) · GW(p)
Yep, you see the problem! It's tempting to think of an AI as "just the model" and study that in isolation, but that just won't be good enough in the long term.
Replies from: mesaoptimizer, Gunnar_Zarncke
↑ comment by mesaoptimizer · 2023-09-07T13:25:12.542Z · LW(p) · GW(p)
I see -- you are implying that an AI model will leverage external system parts to augment itself. For example, a neural network would use an external scratch-pad as a different form of memory for itself. Or instantiate a clone of itself to do a certain task for it. Or perhaps use some sort of scaffolding [LW · GW].
I think these concerns probably don't matter for an AGI, because I expect that data transfer latency would be a non-trivial blocker for storing data outside the model itself, and it is more efficient to self-modify and improve one's own intelligence than to use some form of 'factored cognition'. Perhaps these things are issues for an ostensibly boxed AGI, and if that is the case, then this makes a lot of sense.
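To make the scratchpad/scaffolding idea above concrete, here is a minimal illustrative sketch in Python. The names `call_model`, `agent_step`, and `scratchpad.txt` are hypothetical placeholders, not anything from the talk or thread; the model call is stubbed out. The sketch only shows an agent loop that stores part of its working state in the environment rather than in the model:

```python
# Purely illustrative: an agent loop that keeps part of its "working memory"
# in an external scratchpad file, so some of its cognition lives in the
# environment rather than in the model's weights or activations.
# `call_model` is a stub standing in for any LLM API (hypothetical).

from pathlib import Path

SCRATCHPAD = Path("scratchpad.txt")  # hypothetical external memory


def call_model(prompt: str) -> str:
    """Stub for a language-model call; a real system would query an LLM here."""
    return f"(model output for a prompt of {len(prompt)} chars)"


def agent_step(task: str) -> str:
    # Read whatever the agent previously externalized into the environment.
    memory = SCRATCHPAD.read_text() if SCRATCHPAD.exists() else ""

    # The system's effective state is weights + prompt + external memory.
    prompt = f"Notes so far:\n{memory}\n\nTask: {task}\nNext step:"
    output = call_model(prompt)

    # Write new notes back out: the environment now carries part of the
    # system's cognition, which inspecting the weights alone would miss.
    with SCRATCHPAD.open("a") as f:
        f.write(output + "\n")
    return output


if __name__ == "__main__":
    for _ in range(3):
        print(agent_step("summarize the talk"))
```

The only point of the sketch is that interpreting the weights in isolation would miss the state accumulated in the external file; whether latency makes such designs uncompetitive is exactly the crux debated in the replies below.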
Replies from: NPCollapse, sharmake-farah
↑ comment by Connor Leahy (NPCollapse) · 2023-09-08T08:59:53.785Z · LW(p) · GW(p)
I strongly disagree and do not think that is how AGI will look; AGI isn't magic. But this is a crux, and I might of course be wrong.
↑ comment by Noosphere89 (sharmake-farah) · 2023-09-07T14:41:12.727Z · LW(p) · GW(p)
Yep, the latency and performance are real killers for embodied-type cognition. I remember a tweet suggesting that the entire Internet was not enough to train the model.
↑ comment by Gunnar_Zarncke · 2023-08-30T14:25:21.887Z · LW(p) · GW(p)
It would be nice if the AGI saw the humans running its compute resources as part of its body that it wants to protect. The problem is that we humans also tamper with our bodies... Humans are like hair on the body of the AGI, and maybe it wants to shave and wear a wig.
↑ comment by DusanDNesic · 2023-08-29T20:22:03.832Z · LW(p) · GW(p)
Even for humans: are my nails me? Once clipped, are they me? Is my phone me? I feel like my phone is more me than my hair, for example. Is my child me, are my memes me, is my country me, etc., etc. There are many reasons why agent boundaries are problematic, and that problem continues in AI safety research.
↑ comment by Carl Feynman (carl-feynman) · 2023-08-29T21:08:03.441Z · LW(p) · GW(p)
Even worse: existing AI systems can call systems under the control of other companies, can write their own software and call it, or can be called by systems that are not themselves AI. How do you ensure they are safe under all permutations of such activities?
You could say “Well, don’t do that, then,” but that horse has left the barn.
↑ comment by Logan Riggs (elriggs) · 2023-08-31T16:39:49.783Z · LW(p) · GW(p)
Wait, I don't understand this at all. For language models, the environment is the text. For models trained in different environments, those training datasets will be the environment.
Replies from: Gunnar_Zarncke
↑ comment by Gunnar_Zarncke · 2023-08-31T20:06:42.680Z · LW(p) · GW(p)
This is not primarily about LLMs, which are Simulators [LW · GW] (see also Janus' Simulators), but about more general systems: AGIs.
Replies from: elriggs
↑ comment by Logan Riggs (elriggs) · 2023-08-31T23:26:16.340Z · LW(p) · GW(p)
I meant to cover this with the "for different environments" part. Like, if we self-play on certain games, we'll still have access to those games.
comment by MiguelDev (whitehatStoic) · 2023-09-01T01:20:23.935Z · LW(p) · GW(p)
I agree with Connor and Charbel's post. The next step is to establish a new method for sharing results with safety-focused companies, groups, and independent researchers. This requires:
- Developing a screening method for inclusion.
- Tracking people within the network, which becomes challenging, especially if they are recruited by capabilities-focused companies.
Continuing this line of thought, we can't be 100% sure that such a network will consistently serve its intended purpose. So if anyone has insights that could improve this idea, I'd like to hear them.