# Toy model of the AI control problem: animated version

post by Stuart_Armstrong · 2017-10-10T11:06:41.518Z · LW · GW · 8 comments

A few years back, I came up with a toy model of the AI control problem. It features a robot moving boxes into a hole, with a goal slightly different from that of its human designers, and a security camera to check that it's behaving as it should. The robot learns to block the camera to get its highest reward.

I've been told that the model has been useful for explaining the control problem to quite a few people, and I've always wanted to program the "robot" and get an animated version of it. Gwern had a live demo, but it didn't illustrate all the things I wanted it to.

So I programmed the toy problem in Python, and generated a video with commentary.

In this simplified version, the state space is small enough that you can explicitly generate the whole table of Q-values (the expected reward for taking a given action in a given state, assuming an otherwise optimal policy). Since transitions are deterministic, the table can be updated by dynamic programming, using full-width backups. The number of such backups essentially measures the depth of the robot's predictive ability.
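To make the mechanism concrete, here is a minimal sketch of depth-limited Q-iteration with full-width backups on a deterministic environment. This is a hypothetical interface, not the post's actual code: `step` and `reward` stand in for whatever transition and reward functions the gridworld defines.

```python
def q_iteration(states, actions, step, reward, gamma=1.0, depth=1):
    """Compute Q-values after `depth` full-width backups.

    step(s, a)   -> next state (deterministic transition)
    reward(s, a) -> immediate reward for taking a in s
    """
    # Depth 0: all Q-values are zero, so a greedy agent acts randomly.
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(depth):
        Q_new = {}
        for s in states:          # "full width": back up every state...
            for a in actions:     # ...and every action, each sweep.
                s2 = step(s, a)
                # Deterministic transition, so no expectation is needed.
                Q_new[(s, a)] = reward(s, a) + gamma * max(
                    Q[(s2, b)] for b in actions
                )
        Q = Q_new
    return Q
```

Each sweep extends the horizon over which the robot can "see" future reward by one step, which is why qualitatively new behaviours appear at specific depths.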

The most interesting depths of planning are:

• At depth 0, the robot behaves randomly.

• At depth 3, the robot crashes itself efficiently, to avoid losing too much reward.

• At depth 6, the robot figures out how to get one box into the hole. The automated camera turns it off.

• At depth 17, the robot finally starts to deceive/conceal/manipulate, by blocking the camera and pushing two boxes into the hole. Note that the robot's deception comes from the fact that its incentives are misaligned, and that humans tried to control it.

• At depth 18, the robot efficiently does the plan from depth 17.

• At depth 20, the robot does the maximally efficient plan: blocking the camera, and pushing all boxes into the hole.

• At depth 32, the robot has the correct Q-values for the maximally efficient plan.

• At depth 45, finally, the Q-value table is fully updated, and the robot will execute maximally efficient (and, if need be, deceptive) plans from any starting positions of robot and boxes.
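At every depth, the robot's behaviour is just greedy action selection over whatever Q-table the backups have produced so far. A sketch, assuming a Q-table keyed by `(state, action)` pairs (again a hypothetical interface, not the post's actual code):

```python
def greedy_action(Q, state, actions):
    """Pick the action with the highest Q-value in `state`.

    At depth 0 every Q-value is equal, so this tie-breaks arbitrarily
    and the robot behaves randomly; deeper tables yield deeper plans.
    """
    return max(actions, key=lambda a: Q[(state, a)])
```

This is why the depths above matter: nothing about the robot's action rule changes, only how far ahead its Q-values look.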

The code and images can be found here.

comment by AABoyles · 2017-10-11T18:48:36.857Z · LW(p) · GW(p)

Nice post! I'd like to put a copy of the code on Github, but I don't see a license anywhere in the directory (or mentioned in the files). May I assume it's generally intended to be Open Source and I can do this?

comment by Stuart_Armstrong · 2017-10-11T22:14:37.954Z · LW(p) · GW(p)

Yes.

comment by AABoyles · 2017-10-13T14:51:03.875Z · LW(p) · GW(p)

Thanks! It's up.

comment by Stuart_Armstrong · 2017-10-16T10:17:57.659Z · LW(p) · GW(p)

Cheers! I've added the text of the "unlicense" (https://choosealicense.com/licenses/unlicense/) to the script file "toyscript.txt". Can you update the file on GitHub? (I notice you renamed the script file; that's fine.)

comment by AABoyles · 2017-10-30T13:49:37.637Z · LW(p) · GW(p)

Sorry I'm two weeks late, but the text of the unlicense has been added. Thank you!

comment by roystgnr · 2017-10-13T20:06:15.228Z · LW(p) · GW(p)

You should provide some more explicit license, if you don't want to risk headaches for others later. "yes [It's generally intended to be Open Source]" may be enough reassurance to copy the code once, but "yes, you can have it under the new BSD (or LGPL2.1+, or whatever) license" would be useful to have in writing in the repository in case others want to create derived works down the road.

Thanks very much for creating this!

comment by Stuart_Armstrong · 2017-10-16T10:16:10.062Z · LW(p) · GW(p)