CAIS-inspired approach towards safer and more interpretable AGIs

post by Peter Hroššo (peter-hrosso) · 2023-03-27T14:36:12.712Z · LW · GW · 7 comments


Epistemic status: a rough sketch of an idea

Current LLMs are huge and opaque, and our interpretability techniques are not adequate to understand them. Current LLMs are not likely to be running hidden, dangerous optimization processes, but larger ones may be.

Let's cap model size at that of the currently largest models and ban everything above it. Let's not build superhuman-level LLMs. Instead, let's build human-level specialist LLMs and allow them to communicate with each other via natural language. Natural language is more interpretable than the inner processes of large transformers. Together, the specialized LLMs will form a meta-organism which may become superhuman, but it will be more interpretable and corrigible, since we'll be able to intervene on the messages passed between them.
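A minimal sketch of what I have in mind (all names and the specialist stubs below are placeholders, not a working system): each capped, domain-limited specialist talks to the others only through a logged, human-readable channel with an explicit intervention point.

```python
# Minimal sketch, not a working system: `specialists` stands in for capped,
# domain-limited LLMs; the bus logs every natural-language message and lets a
# human or automated monitor block it before delivery.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Message:
    sender: str
    recipient: str
    text: str  # plain natural language -- the only channel between specialists


@dataclass
class MonitoredBus:
    log: List[Message] = field(default_factory=list)
    approve: Callable[[Message], bool] = lambda m: True  # review hook (human or automated)

    def send(self, msg: Message, specialists: Dict[str, Callable[[str], str]]) -> str:
        self.log.append(msg)        # full transcript, available for later inspection
        if not self.approve(msg):   # intervention point: block or edit the message
            return "[message withheld by monitor]"
        return specialists[msg.recipient](msg.text)


# Placeholder "specialists"; in practice each would be a capped-size model.
specialists = {
    "biology": lambda prompt: f"[biology specialist replies to: {prompt!r}]",
    "chemistry": lambda prompt: f"[chemistry specialist replies to: {prompt!r}]",
}

bus = MonitoredBus()
print(bus.send(Message("chemistry", "biology",
                       "Which enzymes catalyze this reaction?"), specialists))
```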

Of course, model parameter efficiency may increase in the future (as happened with Chinchilla), so we should monitor this and potentially lower the cap. On the other hand, our mechanistic interpretability techniques may improve, in which case we could raise the cap if we are confident doing so won't cause harm.

This idea seems almost trivial to me, but I haven't seen it discussed anywhere, so I'm posting it early to gather feedback on why it might not work.

7 comments


comment by NicholasKees (nick_kees) · 2023-03-27T16:14:08.056Z · LW(p) · GW(p)

Natural language is more interpretable than the inner processes of large transformers.

There's certainly something here, but it's tricky because this implicitly assumes that the transformer is using natural language in the same way that a human is. I highly recommend these posts if you haven't read them already: 

comment by Peter Hroššo (peter-hrosso) · 2023-03-29T10:52:36.521Z · LW(p) · GW(p)

Regarding steganography: there is a natural constraint that the payload (the hidden message) must be small relative to the carrier message. This is a natural bottleneck on covert communication, which should give us a fair advantage over the inscrutable information flows inside current large models.

On top of that, it seems viable to monitor cases where a so-far-benevolent LLM receives a seemingly benevolent message and then starts acting maliciously.
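To make both points a bit more concrete, here is a toy sketch. The bits-per-token figure and the threshold are made-up assumptions, and the "behavior score" stands in for whatever anomaly metric we actually trust: it bounds how much hidden payload a length-capped message could carry, and flags a specialist whose behavior shifts sharply right after an incoming message.

```python
# Toy numbers only: both constants below are assumptions for illustration.

MAX_TOKENS_PER_MESSAGE = 512       # enforced cap on inter-specialist message length
ASSUMED_LEAK_BITS_PER_TOKEN = 0.5  # assumed ceiling on undetectable hidden bits per token


def max_hidden_payload_bits(num_tokens: int) -> float:
    """Upper bound on steganographic payload under the assumed leak rate."""
    return min(num_tokens, MAX_TOKENS_PER_MESSAGE) * ASSUMED_LEAK_BITS_PER_TOKEN


def flag_behavior_shift(score_before: float, score_after: float,
                        threshold: float = 0.3) -> bool:
    """Flag a specialist whose anomaly score jumps right after it receives a message."""
    return (score_after - score_before) > threshold


# A full-length message could hide at most ~256 bits (~32 bytes) under these assumptions,
# far less than the bandwidth of the visible text itself.
print(max_hidden_payload_bits(512))       # 256.0
print(flag_behavior_shift(0.05, 0.60))    # True -> route this exchange to human review
```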

I think the main argument behind my proposal is that if we limit the domains a particular LLM is trained on, there will be fewer emergent capabilities. For example, a computer-science specialist may come up with steganographic messaging, but it will be hard for it to spread this skill/knowledge to specialists in other domains such as biology, chemistry, or the humanities, and those specialists won't be able to come up with it by themselves. They might come up with other dangerous things, such as bioweapons, but they won't be able to use them against us without coordination, secure communication, etc.

comment by Peter Hroššo (peter-hrosso) · 2023-03-27T20:33:56.815Z · LW(p) · GW(p)

Thanks for the links, I'll check them out!

I'm aware this proposal doesn't address deception, or side-channel communication such as steganography. But being able to understand at least the first level of a message, as opposed to the current state of understanding almost nothing from the weights and activations, seems like a major improvement to me.

comment by Brendon_Wong · 2023-05-03T11:45:08.243Z · LW(p) · GW(p)

Have you seen Seth Herd's work [LW · GW] and the work it references (particularly natural language alignment [LW · GW])? Drexler also has a newer proposal called Open Agencies [LW · GW], which seems to be an updated version of his original CAIS research [LW · GW]. It seems like Davidad is working on [LW · GW] a complex implementation of open agencies, and I will likely work on a significantly simpler implementation. I don't think any of these designs explicitly propose capping LLMs, though, given that the models they use are non-agentic, transient, etc. by design and thus seem far less risky than agentic models. The proposals mostly focus on avoiding riskier models that are agentic, persistent, etc.

comment by PeterMcCluskey · 2023-03-27T15:29:23.220Z · LW(p) · GW(p)

The main effect might be reduced interpretability due to more superposition?

comment by Teun van der Weij (teun-van-der-weij) · 2023-03-28T07:05:13.153Z · LW(p) · GW(p)

I think your policy suggestion is reasonable. 

However, implementing and executing this might be hard: what exactly is an LLM? Does a slight variation on the GPT architecture count as well? How are you going to punish law violators? 

How do you account for other worries? For example, as PeterMcCluskey points out, this policy might lead to reduced interpretability due to more superposition.

Policy work seems hard at times, but others with more AI governance experience might provide more valuable insight than I can.

comment by Lucius Bushnaq (Lblack) · 2023-03-27T20:26:48.969Z · LW(p) · GW(p)

Seems like a slight variant on MIRI's Visible Thoughts project?