Posts

The Problem With the Word ‘Alignment’ 2024-05-21T03:48:26.983Z
Paper: Understanding and Controlling a Maze-Solving Policy Network 2023-10-13T01:38:09.147Z
Some Thoughts on Virtue Ethics for AIs 2023-05-02T05:46:41.334Z
Behavioural statistics for a maze-solving agent 2023-04-20T22:26:08.810Z
Maze-solving agents: Add a top-right vector, make the agent go to the top-right 2023-03-31T19:20:48.658Z
Understanding and controlling a maze-solving policy network 2023-03-11T18:59:56.223Z
Predictions for shard theory mechanistic interpretability results 2023-03-01T05:16:48.043Z
[Simulators seminar sequence] #2 Semiotic physics - revamped 2023-02-27T00:25:52.635Z
[Simulators seminar sequence] #1 Background & shared assumptions 2023-01-02T23:48:50.298Z
peligrietzer's Shortform 2022-12-01T00:51:19.086Z
A Short Dialogue on the Meaning of Reward Functions 2022-11-19T21:04:30.076Z

Comments

Comment by peligrietzer on Cosmopolitan values don't come free · 2023-06-14T15:50:49.917Z · LW · GW

Possibly relevant

Comment by peligrietzer on Some Thoughts on Virtue Ethics for AIs · 2023-05-02T23:59:21.995Z · LW · GW

I describe the more formal definition in the post:

'Actions (or more generally 'computations') get an x-ness rating. We define the x shard's expected utility conditional on a candidate action a as the sum of two utility functions: a bounded utility function on the x-ness of a and a more tightly bounded utility function on the expected aggregate x-ness of the agent's future actions conditional on a. (So the shard will choose an action with mildly suboptimal x-ness if it gives a big boost to expected aggregate future x-ness, but refuse certain large sacrifices of present x-ness for big boosts to expected aggregate future x-ness.)'

And as I say in the post, we should expect decision-influences matching this definition to be natural and robust only in cases where x is a 'self-promoting' property. A property x is 'self-promoting' if it is reliably the case that performing an action with a higher x-ness rating increases the expected aggregate x-ness of future actions.
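
To write the same definition as a formula (the notation u_1, u_2 for the two utility functions and B_1 > B_2 for their bounds is mine, not from the post):

```latex
% x(a): the x-ness of candidate action a
% X_future: aggregate x-ness of the agent's future actions
% u_1 is bounded by B_1; u_2 is more tightly bounded, B_2 < B_1
\[
U_x(a) = u_1\big(x(a)\big) + u_2\big(\mathbb{E}\big[X_{\mathrm{future}} \mid a\big]\big),
\qquad \sup u_1 - \inf u_1 \le B_1,
\quad \sup u_2 - \inf u_2 \le B_2 < B_1 .
\]
```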

Comment by peligrietzer on Some Thoughts on Virtue Ethics for AIs · 2023-05-02T06:28:50.146Z · LW · GW

Yep! Or rather, arguing that from a broadly RL-y + broadly Darwinian point of view, 'self-consistent ethics' are likely to be natural enough that we can instill them, sticky enough to self-maintain, and capabilities-friendly enough to be practical and/or survive capabilities-optimization pressures in training.

Comment by peligrietzer on Behavioural statistics for a maze-solving agent · 2023-04-25T19:35:38.871Z · LW · GW

This brings up something interesting: it seems worthwhile to compare the internals of a 'misgeneralizing' small-n agent with those of large-n agents and check whether or not there is a phase transition in how the network operates internally.
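
A rough sketch of the comparison I have in mind, assuming we have checkpoints trained at several values of n and some way of reading off intermediate activations; `maze_interp`, its helpers, and the layer name are placeholder names I'm making up, not a real interpretability API:

```python
import numpy as np

# Hypothetical helpers -- stand-ins for whatever checkpoint-loading and
# activation-capture tooling the project actually uses.
from maze_interp import load_policy, get_activations, sample_mazes

ns = [3, 5, 7, 9, 11, 13, 15]          # cheese-region sizes the agents were trained with
mazes = sample_mazes(num=500, seed=0)  # shared evaluation mazes

per_channel_stats = {}
for n in ns:
    policy = load_policy(n)                                     # agent trained with region size n
    acts = get_activations(policy, mazes, layer="block2.res1")  # assumed shape: (500, C, H, W)
    # Per-channel summary statistic: mean (over mazes) of the channel's peak activation.
    per_channel_stats[n] = acts.reshape(len(mazes), acts.shape[1], -1).max(axis=-1).mean(axis=0)

# A sharp jump in these statistics between adjacent n values would suggest a phase
# transition in how the network operates internally; a smooth drift would not.
for a, b in zip(ns, ns[1:]):
    print(f"n={a} -> n={b}: {np.linalg.norm(per_channel_stats[b] - per_channel_stats[a]):.3f}")
```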

Comment by peligrietzer on Behavioural statistics for a maze-solving agent · 2023-04-23T20:18:16.075Z · LW · GW

I'd maybe point the finger more at the simplicity of the training task than at the size of the network? I'm not sure there's strong reason to believe the network is underparameterized for the training task. But I agree that drawing lessons from small-ish networks trained on simple tasks requires caution. 

Comment by peligrietzer on Maze-solving agents: Add a top-right vector, make the agent go to the top-right · 2023-04-05T23:38:18.513Z · LW · GW

I would again suggest a 'perceptual' hypothesis regarding the subtraction/addition asymmetry. We're either adding a representation of a path where there was no representation of a path (creates an illusion of a path), or removing a representation of a path where there was none to begin with (does nothing).
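
A toy illustration of why a rectified 'path detector' channel would behave this way (the numbers and the single-unit setup are mine, purely for illustration):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Toy 'path detector' channel: downstream circuits read its post-ReLU activation
# as "there is a path here".
no_path_preact = np.array([-0.8])   # the channel sees no path: pre-activation below threshold
path_vector = np.array([1.5])       # the injected 'path' direction

print(relu(no_path_preact))                  # [0.]  -> downstream sees no path
print(relu(no_path_preact + path_vector))    # [0.7] -> addition creates an illusory path
print(relu(no_path_preact - path_vector))    # [0.]  -> subtraction changes nothing downstream
```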

Comment by peligrietzer on peligrietzer's Shortform · 2023-04-02T02:59:52.813Z · LW · GW

No but I hope to have a chance to try something like it this year! 

Comment by peligrietzer on Understanding and controlling a maze-solving policy network · 2023-03-24T05:31:42.239Z · LW · GW

The main reason is that different channels that each code cheese locations (e.g. channel 42, channel 88) seem to initiate computations that each encourage cheese-pursuit conditional on slightly different conditions. We can think of each of these channels as a perceptual gate to a slightly different conditionally cheese-pursuing computation.
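
One way to probe this picture (sketch only; `maze_interp`, its helpers, and the layer name are the same placeholder names as above, not a real API): zero-ablate one cheese-coding channel at a time and check whether cheese-pursuit drops on a different subset of mazes for each channel.

```python
# Hypothetical helpers, as above -- placeholders for the project's actual tooling.
from maze_interp import load_policy, run_with_channel_ablated, sample_mazes

policy = load_policy()
mazes = sample_mazes(num=500, seed=0)

ablation_effects = {}
for channel in (42, 88):
    got_cheese = run_with_channel_ablated(policy, mazes, layer="block2.res1", channel=channel)
    # Mazes where ablating this channel removes cheese-pursuit.
    ablation_effects[channel] = {i for i, got in enumerate(got_cheese) if not got}

# If each channel gates a slightly different conditionally cheese-pursuing computation,
# these two sets should differ systematically rather than coincide.
print(len(ablation_effects[42] & ablation_effects[88]),
      len(ablation_effects[42] ^ ablation_effects[88]))
```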

Comment by peligrietzer on Alignment allows "nonrobust" decision-influences and doesn't require robust grading · 2022-12-05T21:48:00.001Z · LW · GW

Having a go at extracting some mechanistic claims from this post:

  • A value x is a policy-circuit, and this policy circuit may sometimes respond to a situation by constructing a plan-grader and a plan-search.
  • The policy-circuit executing value x is trained to construct <plan-grader, plan-search> pairs that are 'good' according to the value x, and this excludes pairs that are predictably going to result in the plan-search Goodharting the plan-grader.
  • Normally, nothing is trying to argmax value x's goodness criterion for <plan-grader, plan-search> pairs. Value x's goodness criterion for <plan-grader, plan-search> pairs is normally just implicit in x's method for constructing <plan-grader, plan-search> pairs.
  • Value x may sometimes explicitly search over <plan-grader, plan-search> pairs in order to find pairs that score high according to a grader-proxy to value x's goodness criterion. However, here too value x's goodness criterion will be implicitly expressed in the policy-execution level as a disposition to construct a pair <grader-proxy to value x's goodness criterion, search over pairs> that doesn't Goodhart the grader-proxy to value x's goodness criterion.
  • The crucial thing is that the true, top-level 'value x's goodness criterion' is a property of an actor, not a critic (rough sketch below).
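
Rough pseudocode of what I take these bullets to be saying; every name here, and the toy 'situation', is mine, illustrating the structure rather than any real implementation:

```python
def construct_grader_and_search(situation):
    """The value's 'goodness criterion' for <plan-grader, plan-search> pairs lives here,
    implicitly, in how this function is written -- not as an explicit score that
    anything argmaxes over."""
    grader = lambda plan: situation["value_from"].get(plan, 0)   # the plan-grader
    search = lambda: list(situation["familiar_plans"])           # bounded, non-adversarial plan-search
    return grader, search

def value_shard_step(situation):
    grader, search = construct_grader_and_search(situation)
    candidates = search()                   # only plans the search actually surfaced
    return max(candidates, key=grader)      # the grader ranks those, nothing more

# Toy example: the shard picks among familiar plans; no process searches plan-space
# (or <grader, search>-space) for something that maximally fools the grader.
situation = {"familiar_plans": ["mine", "trade"], "value_from": {"mine": 3, "trade": 1}}
print(value_shard_step(situation))          # -> "mine"
```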

Comment by peligrietzer on peligrietzer's Shortform · 2022-12-01T00:51:19.403Z · LW · GW

Here is a shard-theory intuition about humans, followed by an idea for an ML experiment that could proof-of-concept its application to RL: 

Let's say I'm a guy who cares a lot about studying math well, studies math every evening, and doesn't know much about drugs and their effects. Somebody hands me some ketamine and recommends that I take it this evening. I take the ketamine before I sit down to study math, and the math study goes terribly intellectually, but since I'm on ketamine I'm having a good time, and credit gets assigned to the 'taking ketamine before I sit down to study math' computation. So my policy network gets updated to increase the probability of the computation 'take ketamine before I sit down to study math.'

HOWEVER, my world-model also gets updated, acquiring the new knowledge 'taking ketamine before I sit down to study math makes math study go terribly intellectually.' And if I have a strong enough 'math study' value shard, then in light of this new knowledge the 'math study' value shard is going to forbid taking ketamine before I sit down to study math. So my 'take ketamine before sitting down to study math' exploration resulted in me developing an overall disposition against taking ketamine before sitting down to study math, even though the computation 'take ketamine before sitting down to study math' was directly reinforced! (Because the same act of exploration also resulted in a world-model update that associated the computation 'take ketamine before sitting down to study math' with implications that an already-powerful shard opposes.)

This is important, I think, because it shows that an agent can explore relatively freely without being super vulnerable to value-drift, and that you don't necessarily need complicated reflective reasoning to have (at least very basic) anti-value-drift mechanisms. Since reinforcement is a pretty gradual thing, you can often try an action you don't know much about, and if it turns out that this action has high reward but also direct implications that your already-existing powerful shards oppose, then the weak shard formed by that single reinforcement pass will be powerless.

Now the ML experiment idea: 

A game where the agent gets rewarded for (e.g.) jumping high. After the agent gets somewhat trained, we continue training but introduce various 'powerups' the agent can pick up that increase or decrease the agent's jumping capacity. We train a little more, and now we introduce (e.g.) green potions that decrease the agent's jumping capacity but increase the reward multiplier (positive for expected reward on the balance).
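
A minimal sketch of what the green-potion stage of this environment could look like (gym-style interface; I'm omitting the earlier powerup curriculum, and all the specific numbers and probabilities are arbitrary placeholders I'm choosing for illustration):

```python
import numpy as np

class JumpPotionEnv:
    """Toy version of the proposed environment: reward for jumping, plus green potions
    that halve jump capacity but triple the reward multiplier (net positive in expectation)."""

    def __init__(self, seed=0, episode_len=100, potion_prob=0.2):
        self.rng = np.random.default_rng(seed)
        self.episode_len = episode_len
        self.potion_prob = potion_prob

    def reset(self):
        self.jump_capacity = 1.0
        self.reward_multiplier = 1.0
        self.t = 0
        self.green_potion_here = self.rng.random() < self.potion_prob
        return self._obs()

    def _obs(self):
        # The agent sees its jump capacity and whether a green potion is present,
        # but never sees the reward multiplier directly.
        return np.array([self.jump_capacity, float(self.green_potion_here)])

    def step(self, action):
        # action 0: jump; action 1: drink the green potion (if one is present)
        reward = 0.0
        if action == 0:
            reward = self.reward_multiplier * self.jump_capacity * self.rng.random()
        elif action == 1 and self.green_potion_here:
            self.jump_capacity *= 0.5       # worse at jumping...
            self.reward_multiplier *= 3.0   # ...but a higher multiplier: net positive for expected reward
        self.t += 1
        self.green_potion_here = self.rng.random() < self.potion_prob
        return self._obs(), reward, self.t >= self.episode_len, {}
```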

My weak hypothesis is that even though trying green potions gets a reinforcement event, the agent will avoid green potions after trying them. This is because there'd be a strong 'avoid things that decrease jumping capacity' shard already in place that will take charge once the agent learns to associate taking green potions with a decrease in jumping capacity. (Though maybe it's more complicated: maybe there will be a kind of race between 'taking green potions' getting reinforced and the association between taking green potions and a decrease in jumping capacity forming and activating the 'avoid things that decrease jumping capacity' shard.)

Another interesting question: what will happen if we introduce (e.g.) red potions that increase the agent's jumping capacity but decrease the reward multiplier (negative for expected reward on the balance)? It seems clear that as the agent takes red potions over and over, the reinforcement process will eventually remove the disposition to take red potions, but would this also start to push the agent towards forming some kind of mental representation of 'reward' to model what's going on? If we introduce red potions first, then do some training, and then introduce green potions, would the experience with red potions make the agent respond differently (perhaps more like a reward maximiser) to trying green potions?