Demanding and Designing Aligned Cognitive Architectures

post by Koen.Holtman · 2021-12-21T17:32:57.482Z · LW · GW · 5 comments

Contents

  Scope of the paper
  Alignment as a policy problem
  Abstract of the paper
  Cognitive architectures
  Using the lens of cognitive architectures to move beyond pure reward maximization
  Progress on the alignment problem
  Alignment research is not a sub-field of modern ML research
  Intended audience of the paper
5 comments

This post is to announce my new paper Demanding and Designing Aligned Cognitive Architectures, which I recently presented at the PERLS workshop (Political Economy of Reinforcement Learning) at NeurIPS 2021.

In this post, I will give a brief overview of the paper, specifically written for this forum and the LW/EA communities. I will highlight some of the main differences between my approach and the AI alignment approaches and outlooks more often discussed here.

The comment section below can be used for general comments and Q&A about the entire paper.

Scope of the paper

The main focus of this paper is to improve the global debate about the medium-term alignment problems in purple below.

In the last few years, these two problems have moved inside society's Overton window, not only in the West but also in China. So it is topical to write papers which focus specifically on improving the debate about these two problems.

But I also have a more long-term, x-risk-related motivation for discussing these two problems. If society develops better tools and mechanisms for managing them, I expect that it will also become better at managing the long-term x-risk problems on the right.

Alignment as a policy problem

The word demanding describes a political act, whereas designing is a technical act. The phrase 'Demanding and Designing' in the paper title gives a hint that there will be a cross-disciplinary discussion inside. This discussion fuses insights about running political processes with insights about AI technology.

On this forum and in the broader Rationalist/EA web sphere, it is common to see posts which treat all political activity as a source of irrationality and despair only. In the paper, I develop a very different viewpoint.

I treat politics as the sum total of activities in society that contribute to creating, updating, and legitimizing social contracts. Social contracts can be encoded in law, customs, institutions, code, or all of the above. They aim to produce mutual benefit by binding the actions of society's stakeholders.

In this framing, AI alignment policy making is the activity of having a broad debate that will update our existing social contracts. Updates of social contracts are preferably decided on in a debate that will involve all affected stakeholders at some stage, or at least involve their representatives.

These are all pretty standard Enlightenment ideas. Crucially, these ideas can be applied to both global and local policy debates.

Social contract theory does not absolutely require that every stakeholder be consulted or satisfied in order for a new contract to be legitimate. In this, it stands apart from another approach to legitimacy which is often mentioned on this forum: the approach of seeking legitimacy for proposals by claiming that they represent a Pareto improvement. I have been involved in applied politics, and in my experience, the strategy of trying to offend nobody by seeking Pareto improvements almost never works.

So much for discussing moral and political theory. In the paper, I only discuss theory in one small section. The paper devotes much more space to applied politics, to topics like understanding and controlling the prevailing narrative flows in the alignment debate.

The participants in the AI alignment policy debate will have to overcome many obstacles. Many of these obstacles are of course no different from those encountered by the participants in the global warming debate, in the global debate about improving cybersecurity, etc.

In the paper, I do not waste any ink on enumerating these general obstacles. Instead, I state up front that I will cover only three obstacles, three which happen to be specific to the AI alignment problem.

When considering how to lower these obstacles, I also take a fresh look at some questions more often discussed on this forum. My answers are included further below.

Abstract of the paper

The paper does not present a single idea; it develops several interconnected ideas and approaches. Here is the abstract, with some re-formatting.

With AI systems becoming more powerful and pervasive, there is increasing debate about keeping their actions aligned with the broader goals and needs of humanity. This multi-disciplinary and multi-stakeholder debate must resolve many issues; here we examine three of them.

  • The first issue is to clarify what demands stakeholders might usefully make on the designers of AI systems, useful because the technology exists to implement them. We make this technical topic more accessible by using the framing of cognitive architectures.

  • The second issue is to move beyond an analytical framing that treats useful intelligence as being reward maximization only. To support this move, we define several AI cognitive architectures that combine reward maximization with other technical elements designed to improve alignment.

  • The third issue is how stakeholders should calibrate their interactions with modern machine learning researchers. We consider how current fashions in machine learning create a narrative pull that participants in technical and policy discussions should be aware of, so that they can compensate for it.

We identify several technically tractable but currently unfashionable options for improving AI alignment.

Cognitive architectures

A cognitive architecture is a set of interconnected building blocks which create a cognitive process, where a cognitive process is one that uses observations to decide on actions. It is common in AI research to apply the cognitive architecture framing to the analysis of both human and machine minds. In the paper, I extend this framing by considering how companies and governments also use cognitive architectures to make decisions.
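As a minimal sketch of this framing (the block names below are my own illustrative choices, not a decomposition taken from the paper), a cognitive architecture can be pictured as a loop of interconnected components that turns observations into actions:

```python
# Minimal sketch of a cognitive architecture: interconnected building blocks
# that turn a stream of observations into actions. Block names are
# illustrative only.

class WorldModel:
    """Predictive model of the environment, updated from observations."""
    def update(self, observation):
        pass                                 # learning step

    def predict(self, observation, action):
        return observation                   # placeholder one-step prediction

class Planner:
    """Evaluates candidate actions against a reward or goal function."""
    def __init__(self, world_model, reward_fn):
        self.world_model = world_model
        self.reward_fn = reward_fn

    def decide(self, observation, candidate_actions):
        return max(candidate_actions,
                   key=lambda a: self.reward_fn(self.world_model.predict(observation, a)))

def cognitive_step(sensor, actuator, world_model, planner, candidate_actions):
    """One observation-to-action cycle through the building blocks."""
    observation = sensor()
    world_model.update(observation)
    action = planner.decide(observation, candidate_actions)
    actuator(action)
```

The same framing applies whether the building blocks are implemented in silicon, in human minds, or in organizational procedures.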

I also consider how many modern social contracts encode extensive demands on the behavior of governments and companies, to make these large and powerful synthetic intelligences more human-aligned. Many of these demands can be interpreted as demands on the design of the cognitive architectures that governments and companies are allowed to use for decision making, in pursuit of their goals.

I show how we can take such demands and also apply them to the design of cognitive architectures used by powerful AIs. In fact, this is the pattern of policy making already used in AI fairness. I show in the paper how it can be extended beyond fairness.

Using the lens of cognitive architectures to move beyond pure reward maximization

In the broad alignment debate, and also in the AGI debate on this forum, the most common mental model of a reinforcement learner is as follows. A reinforcement learner is a black box containing a mind which aims to maximize a reward, a box which also happens to have some sensors and actuators attached.

In the paper, I go inside this black box. I show that it contains a cognitive architecture with many distinct and legible individual building blocks. I picture the mind of a generic reinforcement learner like this:

This picture has many moving parts, all of which we might consider tweaking if we want to turn a powerful reinforcement learner into a more human-aligned one. One important tweak I consider is to add these extra green building blocks:
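As a rough sketch of what these extra building blocks do (the names below are my own illustrative choices, not the exact decomposition used in the paper's figures): the learned world model is passed through an explicit editing step, and only the edited copy is allowed to drive the reward-maximizing planner.

```python
# Rough sketch, with assumed names: the learned ('blue') world model is
# passed through an explicit editing step, and only the edited ('green')
# copy drives the reward-maximizing planning step.

import copy

class LearnedWorldModel:
    """The model fitted to past observations by the ML algorithm."""
    def __init__(self):
        self.parameters = {}              # e.g. weights or named feature nodes

    def update(self, observation):
        pass                              # ordinary predictive-learning step

    def predict(self, observation, action):
        return observation                # placeholder one-step prediction

def edit_world_model(blue_model, edits):
    """Produce the deliberately 'specifically incorrect' model: overwrite
    selected parts of the learned model before it is used for planning."""
    green_model = copy.deepcopy(blue_model)
    green_model.parameters.update(edits)  # e.g. clamp a sensitive feature to a neutral value
    return green_model

def plan(observation, world_model, reward_fn, candidate_actions):
    """Reward-maximizing choice, evaluated against the edited model."""
    return max(candidate_actions,
               key=lambda a: reward_fn(world_model.predict(observation, a)))

def agent_step(observation, blue_model, edits, reward_fn, candidate_actions):
    blue_model.update(observation)                      # learning sees the real observations
    green_model = edit_world_model(blue_model, edits)   # alignment-motivated edit
    return plan(observation, green_model, reward_fn, candidate_actions)
```

The point of the tweak is that the reward-maximizing machinery never gets to see the unedited model.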

Progress on the alignment problem

In the paper, I show how this idea of demanding the use of a 'specifically incorrect predictive world model' inside the AI can be applied to many types of alignment problems, and how it can be used to reason about and resolve them.

Overall, the broad applicability of this 'specifically incorrect world model' concept has made me more optimistic about the tractability of long-term alignment, both at a technical and at a policy level.

Discussions on this forum often treat AGI alignment as something unique, as something which will require the invention of entirely new paradigms to solve. The claim that AGI alignment is 'pre-paradigmatic' encodes the assumption that there is a huge technical and policy-making gap between the problems of short-term alignment and the problems of long-term alignment. I do not see this gap; I see a broad continuum.

Alignment research is not a sub-field of modern ML research

This brings me to another paradigm, another basic assumption often encoded in posts appearing on this forum. This is the assumption that AI alignment research is, or must urgently become, a sub-field of modern ML research. In the paper, I examine in detail why this is a bad idea.

To make an analogy with the industrial revolution: treating the impact of ML on society as an ML research problem makes about as much sense as treating the impact of the steam engine on society as a steam engine engineering problem.

I argue that a better way forward is to declare that many of the problems in AI alignment are broad political and systems engineering problems, not ML research problems. I argue that it is both ineffective and unkind to expect that modern ML researchers should lead every charge in the alignment debate.

Intended audience of the paper

I wrote this paper to be accessible to a general, multi-disciplinary, academic-level audience. I do assume, however, that the reader has some basic familiarity with the technical and political problems discussed in the alignment literature.

The latest version of the paper is here.

In a big difference from my earlier papers on alignment, there is not even a single line of math inside.

5 comments


comment by Charlie Steiner · 2021-12-22T03:43:13.071Z · LW(p) · GW(p)

I like this post, and I'm really happy to be kept apprised of what you're up to. But you can probably guess why I facepalmed when I saw the diagram with the green boxes :P

I'm not sure who's saying that AI alignment is "part of modern ML research." So I don't know if it's productive to argue for or against that. But there are definitely lots of people saying that AI alignment is part of the field of AI, and it sounds like you're disagreeing with that as well - is that right? How much would you say that this categorization is a bravery debate about what people need to hear / focus on?

Replies from: Koen.Holtman
comment by Koen.Holtman · 2021-12-22T15:27:33.553Z · LW(p) · GW(p)

Thanks!

I can think of several reasons why different people on this forum might facepalm when seeing the diagram with the green boxes. Not sure if I can correctly guess yours. Feel free to expand.

But there are definitely lots of people saying that AI alignment is part of the field of AI, and it sounds like you're disagreeing with that as well - is that right?

Yes I am disagreeing, of sorts. I would disagree with the statement that

| AI alignment research is a subset of AI research

but I agree with the statement that

| Some parts of AI alignment research are a subset of AI research.

As argued in detail in the paper, I feel that fields outside of AI research have a lot to contribute to AI alignment, especially when it comes to correctly shaping and managing the effects of actually deployed AI systems on society. Applied AI alignment is a true multi-disciplinary problem.

On bravery debates in alignment

But there are definitely lots of people saying that AI alignment is part of the field of AI [...] How much would you say that this categorization is a bravery debate

In the paper I mostly talk about what each field has to contribute in expertise, but I believe there is definitely also a bravery debate angle here, in the game of 4-dimensional policy discussion chess.

I am going to use the bravery debate definition from here:

That’s what I mean by bravery debates. Discussions over who is bravely holding a nonconformist position in the face of persecution, and who is a coward defending the popular status quo and trying to silence dissenters.

I guess that a policy discussion devolving into a bravery debate is one of these 'many obstacles' which I mention above, one of the many general obstacles that stakeholders in policy discussions about AI alignment, global warming, etc will need to overcome.

From what I can see, as a European reading the Internet, the whole bravery debate anti-pattern seems to be very big right now in the US, and it has also popped up in discussions about ethical uses of AI. AI technologists have been cast as both cowardly defenders of the status quo, and as potentially brave nonconformists who just need to be woken up, or have already woken up.

There is a very interesting paper which has a lot to say about this part of the narrative flow: Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. One point the authors make is that it benefits the established business community to thrust the mantle of the brave non-conformist saviour onto the shoulders of the AI technologists.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-23T00:12:55.656Z · LW(p) · GW(p)

The facepalm was just because if this is really all inside the same RL architecture (no extra lines leading from the world-model to an unsupervised predictive reward), then all that will happen is the learned model will compensate for the distortions.

Replies from: Koen.Holtman
comment by Koen.Holtman · 2021-12-23T14:26:29.513Z · LW(p) · GW(p)

Not entirely sure what you mean by your aside on 'unsupervised predictive reward'. Is this a reference to unsupervised reinforcement learning? To a human supervisor controlling a live reward signal?

But on your observation that 'the learned model will compensate for distortions': this sounds familiar. Here is a discussion.

Intuition pumps and inner alignment failure

It is common for people on this forum to use a teleological intuition pump which makes them fear that such compensation for distortions must somehow always happen, or is very likely to happen, as an emergent phenomenon in any advanced RL or AGI system. Inner alignment failure has become a popular term on this forum when talking about this fear.

But this teleological intuition pump that 'life reward maximization, uh, finds a way' is deeply incorrect in the general case. Especially for the case of agents which are designed not to be pure reward maximizers.

I have more discussion about this in the paper, where I show how you can approach figure 4 with very different intuition pumps about using incorrect maps of territories. These intuition pumps will tell you much more clearly how and why this works. For the mathematically inclined, I also include references to hard mathematical work, which should of course take precedence over intuition.

Steel-manning an inner alignment failure argument

That being said, I'll now provide some failure mode analysis to show that 'the learned model will compensate for distortions' could conceivably happen, if no care is taken at all to prevent it.

There are two world models in the cognitive architecture of figure 4, a blue one and a green one. The green one drives the agent's decisions. The goal of the architecture is to ensure that this green world model driving the reward-optimizing decisions is specifically incorrect.

To counteract that goal, via the failure mode of 'the learned model will compensate for distortions', we have to imagine the following. We have to imagine that the ML algorithm in the figure will end up introducing specific pre-distortions into the blue model, making the blue world model incorrect in a way that exactly anticipates and reverses the model edits that will be applied to it later, so that the resulting green world model is a fully correct world model.

To make this rather complicated path to failure work,

  1. We have to posit that the model editing operation has certain properties that allow it to be attacked by a distortion technique in the first place. Not all model editing algorithms can be attacked in this way. An editing algorithm may work by just overwriting or eliminating exactly those parts of a world model where adding the pre-distortions would make for a successful attack. Or it may use a loop where it keeps adjusting pre-distorted model parameters, for as long as needed until some desirability criterion is met. (This is actually becoming a thing in mainstream DL research on computational fairness, judging by some recent NeurIPS papers.)

  2. We also have to posit that there is some pretty strong optimization pressure of a particular kind going on: there has to be a feedback loop that will work to introduce exactly the pre-distortions needed in the first place. These distortions also have to be of a nature where they do not directly contradict past observations, because the ML algorithm is searching for a model that is compatible with past observations, and so applies strong optimization pressure of its own kind. This feedback loop therefore has some rather fancy work to do if it is to achieve exactly the distortions that are needed above.

The most straightforward way to eliminate or suppress a feedback loop achieving all of the above is to map the cognitive architecture in figure 4 to a model-based RL system, not a model-free one. For the model-free case, ensuring that the above feedback loop cannot emerge, under reasonable i.i.d. assumptions about the agent environment, can be more tricky.
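To make the model-based mapping concrete, here is a minimal sketch (my own illustrative rendering with assumed names, not code from the paper): the blue model is trained only on a predictive loss over real observations, the edit is redone after every learning update, and reward only enters at the planning step, so there is no reward-driven feedback path through which compensating pre-distortions could be selected for.

```python
# Minimal sketch with assumed names, not code from the paper. The blue model
# only ever receives a predictive learning signal; the green model is
# re-derived from it after every update; the reward-maximizing planning
# routine only ever sees the green model.

import copy

def apply_edits(blue_model, edits):
    """Re-derive the 'green' (specifically incorrect) model from the
    current 'blue' (learned) model."""
    green_model = copy.deepcopy(blue_model)
    green_model.parameters.update(edits)
    return green_model

def run_agent(env, blue_model, choose_action, edits, num_steps):
    observation = env.reset()                        # 'env' is a hypothetical environment interface
    for _ in range(num_steps):
        blue_model.update(observation)               # purely predictive learning step, no reward here
        green_model = apply_edits(blue_model, edits)          # editing redone from scratch
        action = choose_action(observation, green_model)      # reward maximization uses only the green model
        observation = env.step(action)               # next observation from the environment
```

Under this mapping, the compensation scenario would require the predictive learning signal itself to somehow favor pre-distorted parameters, which is the rather fancy feedback loop described under point 2 above.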

If you want to get even more fancy steel-manning this, you can replace 2 above by an adversarial attack scenario where a dedicated attacker breaks reasonable i.i.d. assumptions. In that case all bets are off.

I feel that the inner optimization people on this forum are correct that, when it comes to x-risk, we need to care about the possible existence of feedback loops as under 2 above. But I also feel that they are too pessimistic about our ability, or the ability of mainstream ML research for that matter, to use mathematical tools and simulation experiments to identify, avoid, or suppress these loops in a tractable way.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-12-23T18:09:56.578Z · LW(p) · GW(p)

This isn't about "inner alignment" (as catchy as the name might be), it's just about regular old alignment.

But I think you're right. As long as the learning step "erases" the model editing in a sensible way, then I was wrong and there won't be an incentive for the learned model to compensate for the editing process.

So if you can identify a "customer gender detector" node in some complicated learned world-model, you can artificially set it to a middle value as a legitimate way to make the RL agent less sensitive to customer gender.

I'm not sure how well this approach fulfills modern regulatory guidelines, or how useful it is for AGI alignment, for basically the same reason: models that are learned from scratch sometimes encode knowledge in a distributed way that's hard to edit out. Ultimately, your proof that a certain edit reduces gender bias is going to have to come from testing the system and checking that it behaves well, which is a kind of evidence a lot of regulators / businesspeople are skittish about.