Post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by Ofer (ofer) · 2022-04-23T16:23:48.721Z · LW(p) · GW(p)

(Haven't read the OP thoroughly so sorry if not relevant; just wanted to mention...)

If any part of the network at any point during training corresponds to an agent that "cares" about an environment that includes our world then that part can "take over" the rest of the network via gradient hacking [LW · GW].

Replies from: not-relevant
comment by Not Relevant (not-relevant) · 2022-04-26T13:13:29.695Z · LW(p) · GW(p)

This seems like a weird claim - if there are multiple objectives within the agent, why would the one that cares about the external world decisively “win” any gradient-hacking-fight?

Replies from: ofer
comment by Ofer (ofer) · 2022-04-27T15:54:08.185Z · LW(p) · GW(p)

Agents that don't care about influencing our world don't care about influencing the future weights of the network.

Replies from: not-relevant
comment by Not Relevant (not-relevant) · 2022-04-27T17:01:26.979Z · LW(p) · GW(p)

I see, so you’re comparing a purely myopic vs. a long-term optimizing agent; in that case I probably agree. But if the myopic agent cares even about later parts of the episode, and gradients are updated in between, this fails, right?

Replies from: ofer
comment by Ofer (ofer) · 2022-04-27T18:09:19.414Z · LW(p) · GW(p)

I wouldn't use the myopic vs. long-term framing here. Suppose a model is trained to play chess via RL, and there are no inner alignment [? · GW] problems. The trained model corresponds to a non-myopic agent (a chess game can last for many time steps). But the environment that the agent "cares" about is an abstract environment that corresponds to a simple chess game. (It's an environment with a bounded number of states.) The agent doesn't care about our world. Even if some possible activation values in the network correspond to hacking the computer that runs the model, preventing it from being turned off, and so on, the agent is not interested in doing that. The computer that runs the agent is not part of the agent's environment.

comment by Thomas Kwa (thomas-kwa) · 2022-04-23T01:14:30.312Z · LW(p) · GW(p)

This comes from a research brainstorm. I think people have had this thought before, but I couldn't find it anywhere on LW/AF.

comment by TLW · 2022-04-24T11:24:58.302Z · LW(p) · GW(p)

All of this is predicated on the agent having unlimited and free access to computation.

This is a standard assumption, but is worth highlighting.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-04-26T06:04:32.916Z · LW(p) · GW(p)

I don't think I make this assumption. The biggest flaw in this post is that some of the definitions don't quite make sense, and I don't think assuming infinite compute helps this.

Replies from: TLW
comment by TLW · 2022-04-26T11:25:36.595Z · LW(p) · GW(p)

I don't think I make this assumption.

You don't explicitly; it's implicit in the following:

It is well known [LW · GW] that a utility function over behaviors/policies is sufficient to describe any policy.

The VNM axioms do not necessarily apply to bounded agents. A bounded agent can rationally have preferences of the form A ~[1] B and B ~ C but A ≻[2] C, for instance[3]. You cannot describe this with a straight utility function (see the sketch after the footnotes).

  1. ^

    is indifferent to

  2. ^

    is preferred over

  3. ^

    See https://www.lesswrong.com/posts/AYSmTsRBchTdXFacS/on-expected-utility-part-3-vnm-separability-and-more?commentId=5DgQhNfzivzSdMf9o [LW · GW], which is similar but does not cover this particular case. That being said, the same technique should 'work' here.
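
A minimal sketch of why such preferences cannot be represented by any single real-valued utility function u (notation is illustrative, not from the original comment):

```latex
% If some utility function u represented these preferences, indifference
% would force equalities that contradict the strict preference:
\begin{align*}
A \sim B &\implies u(A) = u(B) \\
B \sim C &\implies u(B) = u(C) \\
\text{hence } & u(A) = u(C), \\
\text{but } A \succ C &\implies u(A) > u(C),
\end{align*}
% a contradiction, so no such u exists. A bounded agent can land in this
% situation when its indifference reflects a discrimination threshold
% (differences too small or too costly to compute) rather than exact equality.
```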

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-04-26T15:47:48.799Z · LW(p) · GW(p)

I agree that a bounded agent can be VNM-incoherent and not have a utility function over bettable outcomes. Here I'm saying you can infer a utility function over behaviors for *any* agent with *any* behavior. You can trivially do this by setting the utility of every action the agent actually takes to 1, and the utility of every action the agent doesn't take to 0. For example, for twitch-bot, the utility at each step is 1 if it twitches and 0 if it doesn't.
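
A minimal sketch of that trivial construction (the action names here are made up for illustration):

```python
# Trivial "utility function over behaviors": at each step, the action the
# agent actually took gets utility 1 and every other action gets utility 0.

def trivial_step_utility(taken_action: str):
    """Return a per-step utility function induced by the action actually taken."""
    def u(action: str) -> int:
        return 1 if action == taken_action else 0
    return u

# Twitch-bot: it twitches at every step, so twitching gets utility 1 and
# anything else gets utility 0.
u_twitch = trivial_step_utility("twitch")
assert u_twitch("twitch") == 1
assert u_twitch("stay_still") == 0
```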

Replies from: TLW
comment by TLW · 2022-04-27T01:04:48.249Z · LW(p) · GW(p)

That's a very different definition of utility function than I am used to. Interesting.

What would the utility function over behaviors for an agent that chose randomly at every timestep look like?

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-04-27T05:35:20.502Z · LW(p) · GW(p)

My guess is: if the randomness is pseudorandom, then the utility is 1 for the behavior it chose and 0 for everything else; if the randomness is true randomness and we use Boltzmann rationality, then all behaviors have equal utility; if the randomness is true and the agent is actually maximizing, then the abstraction breaks down?
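
For concreteness, the Boltzmann-rationality case can be written out as follows (notation is mine, not from the post): the agent picks each behavior with probability proportional to the exponential of its utility, so constant utility gives uniform random behavior.

```latex
% Boltzmann-rational policy over behaviors b, with utility U and inverse
% temperature beta:
\[
  \pi(b) \;=\; \frac{\exp\!\big(\beta\, U(b)\big)}{\sum_{b'} \exp\!\big(\beta\, U(b')\big)}
\]
% If U(b) is the same for every b, then pi(b) is uniform, matching a truly
% random agent; conversely, uniform behavior is consistent with any constant U.
```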

I want to clarify that this is not a particularly useful type of utility function, and the post was a mostly-failed attempt to make it useful.

Replies from: TLW
comment by TLW · 2022-04-29T12:18:36.665Z · LW(p) · GW(p)

I want to clarify that this is not a particularly useful type of utility function, and the post was a mostly-failed attempt to make it useful.

Fair! Here's another[1] issue, I think, now that I've realized you were talking about utility functions over behaviours (at least if you allow 'true' randomness).

Consider a slight variant of matching pennies: if an agent doesn't make a choice, their choice is made randomly for them.

Now consider the following agents:

  1. Twitchbot.
  2. An agent that always plays (truly) randomly.
  3. An agent that always plays the best Nash equilibrium, tiebroken by the choice that results in them making the most decisions. (And then tiebroken arbitrarily from there, not that it matters in this case.)

These all end up with infinite random sequences of plays, ~50% heads and ~50% tails[2][3][4]. And any infinite random (50%) sequence of plays could be a plausible sequence of plays for any of these agents. And yet these agents 'should' have different decompositions into w and u. (A quick simulation sketch follows the footnotes.)

  1. ^

    Maybe. Or maybe I was misconstruing what you meant by 'if the randomness is true and the agent is actually maximizing, then the abstraction breaks down' and this is the same issue you recognized.

  2. ^

    Twitchbot doesn't decide, so its decision is made randomly for it, so it's 50/50.

  3. ^

    The random agent decides randomly, so it's 50/50.

  4. ^

    'The' best Nash equilibrium is any combination of choosing 50/50 randomly, and/or not playing. The tiebreak means the best combination is playing 50/50.
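
Here is a rough simulation of that variant (agent and function names are illustrative): all three agents produce ~50/50 play sequences, so nothing in the observed behavior distinguishes them.

```python
# Matching-pennies variant: an agent that declines to choose has its choice
# made uniformly at random for it. All three agents below end up playing
# ~50% heads / ~50% tails, so their observable play is indistinguishable
# even though they differ internally.
import random

def play(agent_choice):
    """If the agent made no choice, the environment chooses randomly for it."""
    return agent_choice if agent_choice is not None else random.choice("HT")

def twitch_bot():
    return None                 # never makes a choice

def random_agent():
    return random.choice("HT")  # chooses (truly) randomly

def nash_agent():
    # The best Nash equilibrium mixes 50/50; the tiebreak says to make the
    # choice itself rather than leave it to the environment.
    return random.choice("HT")

for agent in (twitch_bot, random_agent, nash_agent):
    plays = [play(agent()) for _ in range(100_000)]
    print(agent.__name__, plays.count("H") / len(plays))  # each ≈ 0.5
```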

comment by Jack R (Jack Ryan) · 2022-04-23T05:24:20.479Z · LW(p) · GW(p)

W is a function, right? If so, what’s its type signature?

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-04-23T07:55:32.910Z · LW(p) · GW(p)

As written, w takes behaviors to "properties about world-trajectories that the base optimizer might care about," as Wei Dai says here [LW(p) · GW(p)]. If there is uncertainty, I think w could return distributions over such world-trajectories, and the argument would still work.
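
One way to write the type signature under that reading (type names here are placeholders, not from the post):

```python
# w maps a behavior to the world-trajectory properties it induces; under
# uncertainty, it maps a behavior to a distribution over such properties.
from typing import Callable, Dict

Behavior = str                             # stand-in for a full behavior/policy description
WorldProperty = str                        # stand-in for a property of world-trajectories
Distribution = Dict[WorldProperty, float]  # probabilities summing to 1

WDeterministic = Callable[[Behavior], WorldProperty]
WUncertain = Callable[[Behavior], Distribution]
```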

Replies from: Jack Ryan
comment by Jack R (Jack Ryan) · 2022-04-23T08:48:06.830Z · LW(p) · GW(p)

Ah I see, and just to make sure I'm not going crazy, you've edited the post now to reflect this?

Replies from: thomas-kwa