Recent AI safety work

post by paulfchristiano · 2014-12-30T18:19:09.211Z · LW · GW · Legacy · 6 comments

(Crossposted from ordinary ideas). 

I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:

  1. Ideas for building useful agents without goals: approval-directed agents, approval-directed bootstrapping, and optimization and goals. I think this line of reasoning is very promising.
  2. A formalization of one piece of the AI safety challenge: the steering problem. I am eager to see more precise, high-level discussion of AI safety, and I think this article is a helpful step in that direction. Since articulating the steering problem I have become much more optimistic about versions of it being solved in the near term. This mostly means that the steering problem fails to capture the hardest parts of AI safety. But it’s still good news, and I think it may eventually cause some people to revise their understanding of AI safety.
  3. Some ideas for getting useful work out of self-interested agents, based on arguments: of arguments and wagers, adversarial collaboration [older], and delegating to a mixed crowd. I think these are interesting ideas in an interesting area, but they have a ways to go until they could be useful.

I’m excited about a few possible next steps:

  1. Under the (highly improbable) assumption that various deep learning architectures could yield human-level performance, could they also predictably yield safe AI? I think we have a good chance of finding a solution---i.e. a design of plausibly safe AI, under roughly the same assumptions needed to get human-level AI---for some possible architectures. This would feel like a big step forward.
  2. For what capabilities can we solve the steering problem? I had originally assumed none, but I am now interested in trying to apply the ideas from the approval-directed agents post. From easiest to hardest, I think there are natural lines of attack using any of: natural language question answering, precise question answering, or sequence prediction (a toy sketch of the question-answering reduction appears after this list). It might even be possible using reinforcement learners (though this would involve different techniques).
  3. I am very interested in implementing effective debates, and am keen to test some unusual proposals. The connection to AI safety is more impressionistic, but in my mind these techniques are closely linked with approval-directed behavior.
  4. I’m currently writing up a concrete architecture for approval-directed agents, in order to facilitate clearer discussion about the idea. This kind of work seems harder to do in advance, but at this point I think it’s mostly an exposition problem.
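
To make the question-answering reduction in item 2 concrete, here is a minimal sketch. Everything in it is illustrative rather than taken from the posts above: `answer` stands in for an assumed black-box natural-language QA system with human-level performance, and the candidate-action interface is invented for the example.

```python
# Hypothetical sketch of "steering via question answering". `answer` is an
# assumed black-box natural-language QA system; nothing here comes from the
# linked posts verbatim.

from typing import Callable, List

def steer_with_qa(answer: Callable[[str], str],
                  observation: str,
                  candidate_actions: List[str]) -> str:
    """Pick the candidate action the QA system says Hugh would most approve of."""
    question = (
        "Hugh observes the following situation:\n"
        f"{observation}\n"
        "Which ONE of the following actions would Hugh most approve of? "
        f"Options: {candidate_actions}. Reply with the option text only."
    )
    reply = answer(question).strip()
    # Fall back to the first candidate if the reply matches no option exactly.
    return reply if reply in candidate_actions else candidate_actions[0]
```

The point is only that steering reduces to asking the right question; all of the difficulty is hidden inside `answer`.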

6 comments

comment by TheMajor · 2014-12-31T09:07:23.921Z · LW(p) · GW(p)

If I understand it correctly, and please correct me if I am mistaken, an approval-directed agent is an artificial intelligence that perfectly/near-perfectly simulates a person, and then implements a decision only if that (simulation of a) person would like the decision. Here it is important that the AI does not compute the outcomes of decisions and then determine which outcome maximises the person's happiness; instead it uses the person's heuristics (via the simulation) to determine whether or not the person would implement the decision given more time to think about it. So the decision-making algorithm of the AI consists entirely of implementing the decisions that a faster human would.

Could you explain the difference between this approval-directed AI Arthur and an upload of the human Hugh? Or is there no difference? Under which conditions would they act differently, i.e. implement a different strategy?

comment by paulfchristiano · 2014-12-31T22:51:23.066Z · LW(p) · GW(p)

An approval-directed agent doesn’t simulate a person any more than a goal-directed agent simulates the universe. It tries to predict what actions the person would approve of, just as a goal-directed agent tries to predict what actions lead to good consequences. In the limit, the approval-directed agent approaches an emulation, analogous to the way in which a goal-directed agent approaches a full simulation of the universe.

So there are two big differences:

  1. You can implement it now; it's just an objective for your system, which it can satisfy to varying degrees of excellence---in the same way that you can build a system to rationally pursue a goal, with varying degrees of excellence. (A minimal code sketch follows this list.)

  2. The overseer can use the agent's help when deciding what actions it approves of. This results in a form of implicit bootstrapping, since the agent is maximizing the approval of the (overseer+agent) system. In the limit of infinite computing power the result would be an emulation with infinite time (or more precisely, the ability to instantiate copies of itself and immediately see their outputs, such that the copies can themselves delegate further). The hope is that a realistic system will converge to this ideal as well as it can, given its limited capabilities---in the same way that a goal-directed system would move towards perfect rational behavior.
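
A minimal sketch of point 1, on the assumption that some learned model `predicted_approval(observation, action)` exists. The names are hypothetical, not from the post; the only point is that the agent maximizes predicted approval where a goal-directed agent would maximize predicted utility.

```python
# Minimal sketch of "approval as an objective". `predicted_approval` is an
# assumed learned model scoring how highly the overseer would rate an action;
# the names are illustrative only.

from typing import Callable, Iterable, TypeVar

Action = TypeVar("Action")

def act(observation: str,
        candidate_actions: Iterable[Action],
        predicted_approval: Callable[[str, Action], float]) -> Action:
    # Same shape as goal-directed choice, but the score is predicted approval
    # rather than predicted utility.
    return max(candidate_actions, key=lambda a: predicted_approval(observation, a))
```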

comment by SteveG · 2015-01-02T18:42:07.652Z · LW(p) · GW(p)

Technology which can predict whether an action would be approved by a person or by an organization:

-Is practical to create: it could be applied first to test cases, then to limited circumstances, then to more general cases.

-For the test cases and the limited circumstances, it can be built with some existing machine learning technology, without deploying full-scale natural language processing.

-Approval/disapproval is a binary value, so appropriate machine learning approaches include logistic regression or tree-based methods such as random forests (a toy sketch appears below). We create a model using training data, and the model may output P(approval | conditions). The model is not that different from one used to predict a purchase or a variety of other online behaviors.

-A system which could forecast approval and disapproval would be useful to PEOPLE, well before it became useful as a basis for selecting AI motivations.

Predicting whether people would approve of a particular action is something that we could use machine learning for now.
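
As a toy illustration of the logistic-regression suggestion above: a few lines of scikit-learn are enough to fit P(approval | conditions) from labeled examples. The features and data here are placeholders invented for the example; a real system would need far richer inputs.

```python
# Toy illustration: fit P(approval | conditions) with logistic regression.
# Features and labels below are placeholders invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes the "conditions" of a proposed action; each label records
# whether the overseer approved (1) or disapproved (0).
X_train = np.array([[0.1, 1.0], [0.9, 0.0], [0.4, 1.0], [0.8, 0.0]])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Predicted approval probability for a new proposed action:
p_approve = model.predict_proba([[0.3, 1.0]])[0, 1]
print(f"P(approval | conditions) = {p_approve:.2f}")
```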

These approaches advance the idea from a theoretical construct to an actual, implementable project.

Thanks to Paul for the seed insight.

comment by SteveG · 2015-01-01T22:23:08.554Z · LW(p) · GW(p)

In addition to determining whether an action would be approved using a priori reasoning, an approval-directed AI could also reference a large database of past actions which have either been approved or disapproved.

Alternatively, in advance of ever making any real-world decision, the approval-directed AI could generate example scenarios and propose actions, many thousands of times, to people deemed effective moral reasoners. Their responses would greatly assist the system in constructing a model of whether an action is approvable, and by whom.

A lot of approval data could be created fairly readily. The AI can train on this data.
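
A rough sketch of this pre-deployment data-collection loop, under the comment's assumptions: `generate_scenario` and `ask_reviewer` are hypothetical stand-ins for a scenario generator and a panel of human reviewers.

```python
# Sketch of the pre-deployment approval-data collection loop. Both callables
# are hypothetical stand-ins assumed by the comment, not real components.

from typing import Callable, List, Tuple

def collect_approval_data(generate_scenario: Callable[[], Tuple[str, str]],
                          ask_reviewer: Callable[[str, str], bool],
                          n_queries: int) -> List[Tuple[str, str, bool]]:
    """Build (scenario, proposed_action, approved) records for later training."""
    dataset = []
    for _ in range(n_queries):
        scenario, proposed_action = generate_scenario()
        approved = ask_reviewer(scenario, proposed_action)  # human judgment
        dataset.append((scenario, proposed_action, approved))
    return dataset
```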

comment by torekp · 2015-01-10T13:05:57.583Z · LW(p) · GW(p)

  1. We might try to write a program that doesn’t pursue a goal, and fail. Issue [2] sounds pretty strange—it’s not the kind of bug most software has. But when you are programming with gradient descent, strange things can happen.

I found this part of Optimization and goals very helpful for thinking about Tool AI - thanks.

comment by SteveG · 2015-01-01T21:33:50.784Z · LW(p) · GW(p)

Paul, I think you're headed in a good direction here.

On the subject of approval-directed behavior:

One broad reason people and governments disapprove of behaviors is that they break the law or violate ethical norms that supplement laws. A lot of AGI disaster scenarios seem to incorporate some law-breaking pretty early on.

Putting aside an advanced AI that can start working on changing the law, shouldn't one of the things an approval-directed AI does (though not the only thing) be to constantly check whether its actions are legal before taking them?

The law by itself is not a complete set of norms of acceptable behavior, and violating the law may be acceptable in exceptional circumstances.

However, why can't we start there?
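
One way to read this proposal in code: a hard legality filter in front of an approval-directed chooser. This is a hedged sketch, assuming a predicate `seems_legal` (itself a very hard open problem) and an approval score like the one sketched earlier; both names are invented here.

```python
# Sketch of a legality filter in front of an approval-directed chooser.
# `seems_legal` and `approval_score` are invented names for assumed components.

from typing import Callable, Iterable, Optional, TypeVar

Action = TypeVar("Action")

def choose_legal_action(candidates: Iterable[Action],
                        seems_legal: Callable[[Action], bool],
                        approval_score: Callable[[Action], float]) -> Optional[Action]:
    # Veto anything the legality check flags, before approval is consulted.
    legal = [a for a in candidates if seems_legal(a)]
    if not legal:
        return None  # do nothing rather than act illegally
    return max(legal, key=approval_score)
```

Whether a vetoed set should fall back to "do nothing", as here, is itself a design choice the comment leaves open.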