Following human norms

post by rohinmshah · 2019-01-20T23:59:16.742Z · score: 23 · LW · GW · 8 comments

So far we have been talking about how to learn “values” or “instrumental goals”. This would be necessary if we want to figure out how to build an AI system that does exactly what we want it to do. However, we’re probably fine as long as we can keep learning and building better AI systems. This suggests that it’s sufficient to build AI systems that don’t screw up so badly that they end this process. If we accomplish that, then steady progress in AI will eventually get us to AI systems that do what we want.

So, it might be helpful to break down the problem of learning values into the subproblems of learning what to do, and learning what not to do. Standard AI research will continue to make progress on learning what to do; catastrophe happens when our AI system doesn’t know what not to do. This is the part that we need to make progress on.

This is a problem that humans have to solve as well. Children learn basic norms such as not littering, not taking other people’s things, and what not to say in public. As argued in Incomplete Contracting and AI alignment, no contract between humans is ever fully spelled out; instead, contracts rely on an external unwritten normative structure under which they are interpreted. (Even if we don’t explicitly ask our cleaner not to break any vases, we still expect them not to intentionally do so.) We might hope to build AI systems that infer and follow these norms, and thereby avoid catastrophe.

It’s worth noting that this will probably not be an instance of narrow value learning, since there are several differences:

Despite this, I have included it in this sequence because it is plausible to me that value learning techniques will be relevant to norm inference.

Paradise prospects

With a norm-following AI system, the success story is primarily around accelerating our rate of progress. Humans remain in charge of the overall trajectory of the future, and we use AI systems as tools that enable us to make better decisions and create better technologies, which looks like “superhuman intelligence” from our vantage point today.

If we still want an AI system that colonizes space and optimizes it according to our values without our supervision, we can figure out what our values are over a period of reflection, solve the alignment problem for goal-directed AI systems, and then create such an AI system.

This is quite similar to the success story in a world with Comprehensive AI Services.

Plausible proposals

As far as I can tell, there has not been very much work on learning what not to do. Existing approaches like impact measures and mild optimization are aiming to define what not to do rather than learn it.

One approach is to scale up techniques for narrow value learning. It seems plausible that in sufficiently complex environments, these techniques will learn what not to do, even though current benchmarks focus primarily on what to do. For example, if I see that you have a clean carpet, I can infer that it is a norm not to walk over the carpet with muddy shoes. If you have an unbroken vase, I can infer that it is a norm to avoid knocking it over. This paper of mine shows how you can reach these sorts of conclusions with narrow value learning (specifically a variant of IRL).
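The vase example can be illustrated with a toy simulation (a loose sketch of the intuition, not the algorithm from the paper; the corridor environment and all numbers are made up): if careless behavior would usually have broken the vase, then an intact vase in the observed state is evidence of a norm against breaking it.

```python
# Toy sketch: an intact vase is evidence of a norm, because a careless
# (random) policy would probably have broken it. Hypothetical 1-D corridor.
import random

def simulate_random_policy(steps=10, trials=1000, vase_pos=2, seed=0):
    """Estimate how often a random walker starting at position 0 hits the vase."""
    rng = random.Random(seed)
    broken = 0
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos = max(0, pos + rng.choice([-1, 1]))  # wall at position 0
            if pos == vase_pos:
                broken += 1
                break
    return broken / trials

p_broken_random = simulate_random_policy()
observed_vase_intact = True  # what we actually see in the human's environment

# If random behavior usually breaks the vase but the human's vase is intact,
# infer a norm against breaking it.
if observed_vase_intact and p_broken_random > 0.5:
    inferred_norm = "avoid breaking the vase"
else:
    inferred_norm = "no evidence of a norm"
print(inferred_norm)
```

The same comparison between "what random behavior would do" and "what we actually observe" is the core of inferring preferences from the state of the world.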

Another approach would be to scale up work on ad hoc teamwork. In ad hoc teamwork, an AI agent must learn to work in a team with a bunch of other agents, without any prior coordination. While current applications are very task-based (e.g. playing soccer as a team), it seems possible that as this is applied to more realistic environments, the resulting agents will need to infer the norms of the group they are introduced into. This framing is particularly nice because it explicitly models the multiagent setting, which seems crucial for inferring norms. It can also be thought of as an alternative statement of the problem of AI safety: how do you “drop” an AI agent into a “team” of humans, and have it coordinate well with that team?
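A toy illustration of the “drop in” framing (a hypothetical example, not drawn from the ad hoc teamwork literature): the newcomer observes which convention the existing group follows and conforms to it, with no prior coordination.

```python
# Toy sketch: a newly introduced agent infers an existing group convention
# from observation and adopts it. The "which side to keep to" convention
# is a made-up example.
from collections import Counter

def infer_convention(observed_choices):
    """Adopt whichever convention most of the group already follows."""
    counts = Counter(observed_choices)
    convention, _ = counts.most_common(1)[0]
    return convention

teammates = ["left", "left", "right", "left"]  # which side of the path each keeps to
print(infer_convention(teammates))  # -> "left"
```

Real ad hoc teamwork is of course much harder (conventions must be inferred from raw behavior, and the agent must act while still uncertain), but the structure of the problem is the same: conform to norms you did not help create.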

Potential pros

Value learning is hard, not least because it’s hard to define what values are, and we don’t know our own values to the extent that they exist at all. However, we do seem to do a pretty good job of learning society’s norms. So perhaps this problem is significantly easier to solve. Note that this is an argument that norm-following is easier than ambitious value learning, not that it is easier than other approaches such as corrigibility.

It also feels easier to work on inferring norms right now. We have many examples of norms that we follow, so we can more easily evaluate whether current systems are good at following norms. In addition, ad hoc teamwork seems like a good start at formalizing the problem, which we still don’t really have for “values”.

This also more closely mirrors our tried-and-true techniques for solving the principal-agent problem among humans: there is a shared, external system of norms that everyone is expected to follow, and systems of law and punishment are interpreted with respect to these norms. For a much more thorough discussion, see Incomplete Contracting and AI alignment, particularly Section 5, which also argues that norm following will be necessary for value alignment (whereas I’m arguing that it is plausibly sufficient to avoid catastrophe).

One potential confusion: the paper says “We do not mean by this embedding into the AI the particular norms and values of a human community. We think this is as impossible a task as writing a complete contract.” I believe that the meaning here is that we should not try to define the particular norms and values, not that we shouldn’t try to learn them. (In fact, later they say “Aligning AI with human values, then, will require figuring out how to build the technical tools that will allow a robot to replicate the human agent’s ability to read and predict the responses of human normative structure, whatever its content.”)

Perilous pitfalls

What additional things could go wrong with powerful norm-following AI systems? That is, what are some problems that might arise, that wouldn’t arise with a successful approach to ambitious value learning?

Summary

One promising approach to AI alignment is to teach AI systems to infer and follow human norms. While this by itself will not produce an AI system aligned with human values, it may be sufficient to avoid catastrophe. It seems more tractable than approaches that require us to infer values to a degree sufficient to avoid catastrophe, particularly because humans are proof that the problem is soluble.

However, there are still many conceptual problems. Most notably, norm following is not obviously expressible as an optimization problem, and so may be hard to integrate into current AI approaches.

8 comments

comment by AdamGleave · 2019-02-01T21:00:13.251Z · score: 14

I feel like there are three facets to "norms" vs. values, which are bundled together in this post but which could in principle be decoupled. The first is representing what not to do versus what to do. This is reminiscent of the distinction between positive and negative rights, and indeed most societal norms (e.g. human rights) are negative, but not all (e.g. helping an injured person in the street is a positive right). If the goal is to prevent catastrophe, learning the 'negative' rights is probably more important, but it seems to me that most techniques developed could learn both kinds of norms.

Second, there is the aspect of norms being an incomplete representation of behaviour: they impose some constraints, but there is not a single "norm-optimal" policy (contrast with explicit reward maximization). This seems like the most salient thing from an AI standpoint, and as you point out this is an underexplored area.

Finally, there is the issue of norms being properties of groups of agents. One perspective on this is that humans are realising their values through constructing norms: e.g. if I want to drive safely, it is good to have a norm to drive on the left or right side of the road, even though I may not care which norm we establish. Learning norms directly therefore seems beneficial to neatly integrate into human society (it would be awkward if e.g. robots drive on the left and humans drive on the right). If we think the process of going from values to norms is both difficult and important for multi-agent cooperation, learning norms also lets us sidestep a potentially thorny problem.

comment by rohinmshah · 2019-02-02T18:52:51.796Z · score: 2

Yeah, agreed with all of that, thanks for the comment. You could definitely try to figure out each of these things individually, e.g. learning constraints that can be used with Constrained Policy Optimization is along the "what not to do" axis, and a lot of the multiagent RL work is looking at how we can get some norms to show up with decentralized training. But I feel a lot more optimistic about research that tries to do all three things at once, because I think the three aspects interact with each other. At least, the first two feel very tightly linked, though they probably can be separated from the multiagent setting.
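As a toy illustration of the "what not to do" axis (an assumption-laden sketch, not Constrained Policy Optimization itself; the gridworld and trajectories are made up): one could treat states that demonstrators could have reached but consistently avoid as candidate constraints, which could then be handed to a constrained RL method.

```python
# Toy sketch: infer candidate "do not enter" constraints from demonstrations
# in a hypothetical 3x3 gridworld. Cells that every demonstrator avoids,
# despite passing right by them, are candidate norms (e.g. the clean carpet).

demos = [
    [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)],  # demonstrator skirts the center cell
    [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)],  # so does this one
]
grid = {(r, c) for r in range(3) for c in range(3)}

visited = {state for traj in demos for state in traj}
candidate_constraints = grid - visited
print(candidate_constraints)  # -> {(1, 1)}
```

A real method would have to separate "avoided because forbidden" from "avoided because unnecessary", which is exactly why the first two facets above feel so tightly linked.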

comment by Donald Hobson (donald-hobson) · 2019-01-21T21:27:40.185Z · score: 3

What if it follows human norms with dangerously superhuman skill?

Suppose humans had a really strong norm that you were allowed to say whatever you like, and encouraged to say things others will find interesting.

Among humans, the most we can exert is a small optimization pressure toward not being totally dull.

The AI produces a sequence that effectively hacks the human brain and sets interest to maximum.

comment by rohinmshah · 2019-01-21T22:46:01.460Z · score: 3

I agree this is a potential problem, I would classify this under the problem category "Powerful AI likely leads to rapidly evolving technologies, which might require rapidly changing norms" above.

We already have a problem of this sort with media being an amplifier for controversy instead of truth.

comment by avturchin · 2019-01-25T14:23:09.299Z · score: 1

Just returned to this post to mention a rather obvious assumption: norm following assumes that the norms are stable across the whole time span and spatial extent of the group, which is clearly not true for large, old, or internally diverse groups. Thus, some model of the group's boundaries and/or internal structure should be either hand-coded or meta-learned before norm learning.

comment by rohinmshah · 2019-01-25T17:47:21.481Z · score: 2

Yes, I agree that you need to deal with the fact that norms are not stable across time and space.

comment by avturchin · 2019-01-21T10:26:13.563Z · score: 1

Great post! I like that it escapes the problem of aggregating values by trying to learn already aggregated norms.

Why not use the already codified norms as a starting point for learning the actual norms? Almost any society has a large set of rules and laws as written texts, and an AI could ask a member of society with a strong understanding of norms (a lawyer) to clarify the meaning of some norms or to write down unspoken rules.

What could also go wrong is that some combinations of norms could be dangerous, especially if implemented literally. For example, there is a fire in an apartment and a person inside needs help, but a robot can't enter, as entering would violate the norm against entering private property without an invitation. There could be many edge cases where norms don't work.

The idea of "not doing" itself needs clarification, as sometimes "not doing" is an action: for example, if a robot stands in a doorway, it isn't doing anything, but I can't go out.

comment by rohinmshah · 2019-01-21T18:17:11.396Z · score: 3

I agree that codified norms are a good place to start, but they are only a starting point -- we will have to infer norms from behavior/speech, because as you noted, codified norms interpreted literally will have many edge cases.