I there a demo of "You can't fetch the coffee if you're dead"?

post by Ram Rachum (ram@rachum.com) · 2022-11-10T18:41:50.546Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    4 Charlie Steiner
    2 Koen.Holtman
None
No comments

Hi everyone! My name is Ram Rachum, and this is my first post here :)

I'm an ex-Google software engineer turned MARL researcher. I want to do MARL research that promotes AI safety. You can read more about my research here and sign up for monthly updates.

I had an idea for a project I could do, and I want you to tell me whether it's been done before.

I want to create a demo of Stuart Russell's "You can't fetch the coffee if you're dead" scenario. I'm imagining a MARL environment where agent 1 can "turn on" agent 2 to prepare coffee for agent 1, and then agent 2 at some point understands how to prevent agent 1 from turning it off again. I'd like to get this behavior to emerge using an RL algorithm like PPO. Crucially, the reward function for agent 2 will be completely innocent.

That way we'll have a video of the "You can't fetch the coffee if you're dead" scenario happening, and we could tweak with that setup to see what kind of changes make it less likely or more likely. We could also show that video to laypeople, and it will likely be much easier for them to connect to such a demo rather than to a verbal description of a thought experiment.

Are there any existing demonstrations of this scenario? Any other insights that you have about this idea would be appreciated.

Answers

answer by Charlie Steiner · 2022-11-10T22:06:19.023Z · LW(p) · GW(p)

There's an off-switch environment in AI Safety Gridworlds, which is sort of what like you're talking about.

But I'm going to give a hot take and say that you shouldn't do work on AI safety gridworlds in 2022. Yes, they capture the essence of the problem, but they can't capture the essence of the solution - you can't have rich human feedback, rich world models, or really most rich things.

comment by Ram Rachum (ram@rachum.com) · 2022-11-10T22:17:50.942Z · LW(p) · GW(p)

I'll check that out, thank you. 

Could you please expand on the hot take, please? Consider that a big part of the appeal for me is just being able to display the problem and make it relatable for people who aren't from the field. 

Also, what kind of richness do you think makes the qualitative difference that you allude to? If the world was 3D or had continuous action or had more game mechanics, would that have made the difference for you? 

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-11-10T22:49:00.804Z · LW(p) · GW(p)

Yes, you can display the problem. But for simple gridworlds like this you can just figure out what's going to happen by using your brain - surprises are very rare. So if you want to show someone the off-switch problem, you can just explain to them what the gridworld will be, without ever needing to actually run the gridworld.

I think one stab at the necessary richness is that it should support nontrivial modeling of the human(s). If Boltzmann-rationality doesn't quickly crash and burn, your toy model is probably too simple to provide interesting feedback on models more complicated than Boltzmann-rationality.

This doesn't rule out gridworlds, it just rules out gridworlds where the desired policy is simple (e.g. "attempt to go to the goal without disabling the off-switch."). And it doesn't necessarily rule in complicated 3D environments - they might still have simple policies, or might have simple correlates of reward (e.g. having the score on the screen) that mean that you're just solving an RL problem, not an AI safety problem.

comment by the gears to ascension (lahwran) · 2022-11-11T00:45:14.737Z · LW(p) · GW(p)

if it definitely won't work in a gridworld, it definitely won't work in a high resolution sim world

Replies from: cfoster0
comment by cfoster0 · 2022-11-11T01:10:43.381Z · LW(p) · GW(p)

Why do you say this? For instance, the simplicity & restricted action space of a gridworld means that there are very limited mechanisms available to steer/point with the agent towards what we want. That's a pretty extreme limitation compared what to can do in a high resolution sim world. It seems plausible that "pointing at what we want in an informative/high-bandwidth way" would be an important component of a solution.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2022-11-11T03:41:17.081Z · LW(p) · GW(p)

right, for sure, but a good solution to the high bandwidth case that doesn't scale down to gridworld correctly is useless, it can't be trusted to actually be a good solution to the larger problem if it definitely breaks when shrunk. it should work just as well in gridworld, or conway's life*, or smoothlife*, or mujoco, or a fluid sim, or real life. my point is that a good solution should at an absolute bare minimum scale down. if your base rl algorithm can't solve gridworld, it definitely just doesn't work at all. similarly for rl safety.

* although I'm not sure if conway's or smoothlife have an energy conservation law, and that might be pretty important. also, maybe continuous state spaces are really important somehow, I definitely see some possibilities where that's the case. I don't know, though. it seems to me like we want to be able to go all the way down to the smallest units and still have a working solution.

Replies from: cfoster0
comment by cfoster0 · 2022-11-11T05:57:52.135Z · LW(p) · GW(p)

I think I have a lot more uncertainty here. Like, yes it is always nice when solutions scale down all the way into the simplest setting, but there is no guarantee that that's how things work in this domain. Reality is allowed to say "the minimum requirements for building a FAI are X", where X entails a high-bandwidth or otherwise highly-specific interface between us and the agent.

answer by Koen.Holtman · 2022-11-11T17:00:29.459Z · LW(p) · GW(p)

Like Charlie said, there is a demonstration in AI Safety Gridworlds. I also cover these dynamics in a more general and game-theoretical sense in my AGI Agent Safety by Iteratively Improving the Utility Function: this paper also has running code behind it, and it formalises the setup as a two-player/two-agent game.

In general though, if people do not buy "You can't fetch the coffee if you're dead" problem as a thought experiment, then I am not sure if any running code based demo can change their mind.

I have been constructing a set of thought experiments, illustrated with grid worlds, that do not just demo the off-switch problem, but that also demo a solution to it. The whole setup intends to clarify what is really going on here, in a way that makes intuitive sense to a non-mathematical audience. Have not published these thought experiments yet in writing, only gave a talk about it. In theory, somebody could convert the grid world pictures in this talk into running code. If you want to learn more please contact me -- I can walk you through my talk slide deck.

I think I disagree with Charlie's hot take because Charlie seems to be assuming that the essence of the solution to "You can't fetch the coffee if you're dead" must be too complicated to show in a grid world. In fact, for the class of solutions I prefer, these solutions can be very easily shown in a grid world. Or at least easy in retrospect.

comment by Ram Rachum (ram@rachum.com) · 2022-11-12T11:46:48.657Z · LW(p) · GW(p)

Thank you Koen. The video by Stuart Armstrong linked in the DeepMind paper is pretty close to what I wanted to do :( The DeepMind paper also does similar things.

While I might be able to improve a bit on these examples, I'm thinking that this probably isn't the best place for me to invest my efforts. Thanks for letting me know about these. 

I'm interested in your solutions, I'll send an email to you privately about it.

No comments

Comments sorted by top scores.