Aligned AI via monitoring objectives in AutoGPT-like systems
post by Paul Colognese (paul-colognese) · 2023-05-24T15:59:13.621Z · LW · GW · 4 comments
Thanks to Arun Jose, Joseph Bloom, and Johannes Treutlein for feedback/discussions.
Introduction
The release of AutoGPT prompted [LW · GW] discussions [LW · GW] related to the potential of such systems to turn non-agentic LLMs into agentic systems that pursue goals, along with the dangers that could follow. The relation of such systems to the alignment [LW · GW] problem [LW · GW] has also been explored.
In this short post, we investigate a threat model in which AutoGPT-like systems pursue unaligned objectives and explore the potential for alignment via oversight. We briefly consider some key properties of such systems, and then discuss the idea that these systems’ high-level cognition might be interpretable by default and so might allow for sufficient oversight to ensure the system is aligned. Finally, we consider a couple of reasons why the high-level cognition might be obscured from oversight and highlight ways of preventing this obscuration.
Background
AutoGPT-like systems, or Scaffolded LLMs [LW · GW], are systems that:
wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt.[1]
This programmatic scaffold allows for information from the inputs/outputs of LLM calls to be stored in long-term memory as well as used as inputs to tools/plugins.
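To make this architecture concrete, here is a minimal sketch of a scaffold loop in Python. All of the names (run_scaffold, call_llm, the memory and stopping logic) are hypothetical placeholders rather than any particular system's API; the point is just that a programmatic loop chains LLM calls together and stores intermediate outputs for use by later calls and tools.

```python
from typing import Callable, List

def run_scaffold(task: str,
                 call_llm: Callable[[str], str],
                 max_steps: int = 10) -> List[str]:
    """Chain LLM calls: each call sees the task plus a memory of prior outputs."""
    memory: List[str] = []          # long-term memory maintained by the scaffold
    for _ in range(max_steps):
        # Build the next prompt from the task and stored intermediate results.
        prompt = f"Task: {task}\nMemory:\n" + "\n".join(memory) + "\nNext step:"
        output = call_llm(prompt)   # a single LLM call
        memory.append(output)       # its output becomes input to later calls/tools
        if "DONE" in output:        # the scaffold, not a single call, decides to stop
            break
    return memory
```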
We now state a few assumptions we make about these AutoGPT-like systems. See this post [LW · GW] for a similar context.
- For the purposes of this post, we assume that a single LLM call is incapable of successfully pursuing objectives. That is, the threat comes from the LLM being used within the scaffolded system.
- We assume that the LLM in the system must generate prompts/instructions for other instances of itself (factored cognition/bureaucratic approach [? · GW]).
- When information is passed from one LLM call/module/plugin to another, we say that this information is passed through an “information channel”. In particular, an overseer can observe this information (in the form of token sequences, etc.)
The threat model we’ll look at is one in which we have an agentic AutoGPT-like system that pursues unaligned objectives.
We don’t discuss the likelihood of this threat model being realized but note that these systems may suffer from the inefficiencies that come with bureaucratic systems [LW(p) · GW(p)] and hence may not be competitive. See this post by Zvi [LW · GW] for an exploration of the future of such systems.
Oversight
We assume that there is some overseer that wants the AutoGPT-like system to pursue an aligned objective. If the unaligned objective is detectable by the overseer, then the overseer can (hopefully) shut down the system before the objective is successfully pursued (including the objective of the system escaping the overseer’s oversight). This oversight may ensure [LW · GW] that the system remains aligned and never pursues unaligned objectives.
We now make explicit the key hope for oversight that comes from the modular-by-default structure of AutoGPT-like systems. The system is modular because information must be passed from one LLM call to another (or to other modules like memory, etc.). Furthermore, because of the way the modular structure is encoded (via a programmatic scaffolding) and the fact that by default information is passed from one LLM call/module to another via natural language, a human overseer (or an LLM assistant) can, in theory, monitor every information channel for information pertaining to the objective or high-level cognition of the system.
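As an illustration of this hope, here is a minimal sketch (with hypothetical names, not any existing framework's API) of how a scaffold could route every inter-module message through an overseer hook. The monitor could be a human reviewer or an LLM assistant reading the natural-language contents of each channel.

```python
from typing import Callable

def make_monitored_channel(send: Callable[[str], None],
                           monitor: Callable[[str], bool]) -> Callable[[str], None]:
    """Wrap an information channel so every message is inspected before delivery."""
    def monitored_send(message: str) -> None:
        # `monitor` returns False if the message suggests an unaligned objective
        # or is not interpretable; the overseer can then halt the whole system.
        if not monitor(message):
            raise RuntimeError("Overseer flagged a channel message; halting the system.")
        send(message)
    return monitored_send
```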
This kind of oversight fails if either:
- Information about the objective doesn’t pass through the information channels, or
- Information about the objective is passed through the information channels but is obscured.
We think it’s reasonable to assume that information about the objective must be passed along some information channel (potentially in the initial prompt from the user) if the AutoGPT-like system is to successfully pursue that objective. It seems unlikely that a single pass of an LLM could develop an objective and successfully pursue it without information about that objective being passed to other LLM calls/modules/plugins. One reason for this is that for a system to successfully pursue an outcome in the world, it must interact with the world and use feedback to evaluate its progress in achieving that outcome. Hence, information about the objective must be preserved through future calls to LLMs/modules.
Given the above assumption, oversight fails if information about the objective is obscured when it is passed through information channels. The naive hope is that because LLMs take natural language statements as their input/outputs, these information channels might be interpretable by a human overseer/LLM, by default.
In the next section, we consider why this might not hold.
Why information might be hidden from oversight
If information about the objective of the system/high-level cognition must be passed through information channels, why might this information be obscured from the overseer? Two reasons why this might be the case are:
- Information channels may be obscured for efficiency reasons;
- An LLM call might be deceptive and hence actively try to hide information from the overseer.
See the Externalized Reasoning Oversight Agenda [LW · GW] for similar thoughts.
Information is hidden for efficiency reasons
Although information is passed via the medium of token sequences that correspond to natural language, it is possible to pass information through such sequences in a way that is uninterpretable by a human/AI overseer [LW · GW]. For example, it seems to be the case that GPT-4 can compress and decompress prompts.
There doesn’t seem to be an incentive for the system to do this by default, but if optimization pressure is applied to the AutoGPT-like system as a whole (to make it more capable), the LLM calls may be incentivized to pass information in ways that are not interpretable by the overseer, for efficiency reasons.
We could potentially pay an alignment tax and ask that groups developing such systems don’t apply this kind of optimization pressure, but it seems that these systems may not be competitive for reasons outlined in this post [LW(p) · GW(p)].
Note that it might be possible to use LLMs to retrieve the obscured information from information channels, and this might be worth investigating.
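As a rough illustration of what such retrieval could look like, the sketch below (hypothetical prompts and helper names, reusing the same call_llm placeholder as the earlier sketch) uses an LLM assistant to flag channel messages that don't look like plain natural language and to attempt to reconstruct their content for the overseer.

```python
from typing import Callable, Tuple

def inspect_channel_message(message: str,
                            call_llm: Callable[[str], str]) -> Tuple[bool, str]:
    """Return (is_interpretable, recovered_content) for a channel message."""
    verdict = call_llm(
        "Is the following message plain, human-readable natural language? "
        "Answer YES or NO.\n\n" + message
    )
    if verdict.strip().upper().startswith("YES"):
        return True, message
    # If the message looks compressed or encoded, ask the LLM to reconstruct
    # its likely meaning so the overseer can review it.
    recovered = call_llm("Reconstruct the plain-language meaning of this message:\n\n" + message)
    return False, recovered
```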
Deception
If a call to an LLM instantiates a deceptive cognition, that cognition might purposefully produce an output that hides information about the LLM’s true unaligned objective from the overseer, potentially in such a way that when the overseer reads the information channel, the information seems to indicate that the system as a whole is aligned.
If the LLM core being used is deceptive by default (perhaps due to the RLHF applied during training), then we’re already in a bad situation. Hopefully, behavioral or interpretability-based monitoring for deception [LW · GW] can detect this in the LLM core, so that such LLMs won’t be deployed.
If the LLM is not a deceptive agent by default, then deception must have occurred due to a particular prompt being passed to the LLM (see the Simulator framing [LW · GW]). In this scenario, it seems difficult for the AutoGPT-like system to consistently be deceptive. One reason for this is that when deception is first evoked, the deceptive cognition must create a prompt that will invoke deceptive simulacra with the same objectives in subsequent LLM calls while ensuring that the prompt doesn’t cause suspicion to arise in the overseer.[2] Because of this difficulty, it may be possible to ensure that AutoGPT-like systems that use such an LLM core are safe with sufficient monitoring; however, this is risky.
Conclusion
If we have an agentic AutoGPT-like system, oversight of its high-level cognition (including its objectives) may be feasible, allowing the overseer to ensure that the system as a whole is aligned, or to shut it down if it becomes unaligned.
My current intuition, based on the above brief analysis, is that obscuration of this high-level cognition due to deception might be unlikely, given sufficient evaluation measures for detecting a deceptive LLM core and the difficulty a deceptive simulacrum faces in solving its own alignment problem.
I currently feel as though AutoGPT-like systems won’t remain competitive without optimization pressure being applied to the system as a whole, and that this optimization pressure may incentivize the high-level cognition of the system to be obscured.
- ^
See Beren's post [LW · GW].
- ^
One can think of this as a type of alignment problem for unaligned simulacra.
4 comments
comment by Seth Herd · 2023-05-24T21:21:19.270Z · LW(p) · GW(p)
I've written about this in Capabilities and alignment of LLM cognitive architectures [LW · GW]. I didn't go into quite as much depth on alignment, but my conclusions are essentially identical to yours: there are ways this could go wrong, but it has a lot of promise. I think this approach has large upsides from working primarily in natural language, and that all other approaches have the same downsides and more.
This has also been addressed in Fabien Roger's The Translucent Thoughts Hypotheses and Their Implications [LW · GW].
I'm working on another post that goes into more depth on the alignment implications, but the conclusions remain the same so far.
↑ comment by Paul Colognese (paul-colognese) · 2023-05-24T22:01:05.691Z · LW(p) · GW(p)
Thanks for the response, it's useful to hear that we came to the same conclusions. I quoted your post in the first paragraph.
Thanks for bringing Fabien's post to my attention! I'll reference it.
Looking forward to your upcoming post.
↑ comment by Seth Herd · 2023-05-25T19:43:09.645Z · LW(p) · GW(p)
Ooops, I hadn't clicked those links so didn't notice they were to my posts!
You've probably found this, since it's the one tag on your post: the chain of thought alignment tag goes to some other related work.
There's a new one up today that I haven't finished processing.
comment by Bogdan Ionut Cirstea (bogdan-ionut-cirstea) · 2023-05-25T16:12:23.230Z · LW(p) · GW(p)
Here's a paper which tries to formalize why in-context learning should be easier with chain-of-thought (than without).