Posts

Discussion: Challenges with Unsupervised LLM Knowledge Discovery 2023-12-18T11:58:39.379Z
Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla 2023-07-20T10:50:58.611Z
Specification gaming: the flip side of AI ingenuity 2020-05-06T23:51:58.171Z
Utility ≠ Reward 2019-09-05T17:28:13.222Z
2-D Robustness 2019-08-30T20:27:34.432Z
Risks from Learned Optimization: Conclusion and Related Work 2019-06-07T19:53:51.660Z
Deceptive Alignment 2019-06-05T20:16:28.651Z
The Inner Alignment Problem 2019-06-04T01:20:35.538Z
Conditions for Mesa-Optimization 2019-06-01T20:52:19.461Z
Risks from Learned Optimization: Introduction 2019-05-31T23:44:53.703Z
Clarifying Consequentialists in the Solomonoff Prior 2018-07-11T02:35:56.533Z

Comments

Comment by Vlad Mikulik (vlad_m) on Tracr: Compiled Transformers as a Laboratory for Interpretability | DeepMind · 2023-01-16T18:53:11.766Z · LW · GW

Ah, yep, I think you're right -- it should be pretty easy to add support for `and` in selectors then.

Comment by Vlad Mikulik (vlad_m) on Tracr: Compiled Transformers as a Laboratory for Interpretability | DeepMind · 2023-01-16T15:58:17.407Z · LW · GW

You're right that accounting for the softmax could be used to get around the argument in the appendix.  We'll mention this when we update the paper.

The scheme as you described it relies on an always-on dummy token, which would conflict with our implementation of default values in aggregates: we fix the BOS token attention to 0.5, so at low softmax temperature it's attended to iff nothing else is; with this scheme, however, we'd want the 0.5 to always round off to zero after the softmax. This is plausibly surmountable, but we put it out of scope for the pilot release since we didn't seem to need this feature, whereas default values come up pretty often.

Also, while this construction will work for just `and` (with arbitrarily many conjuncts), I don't think it works for arbitrary compositions using `and` and `or`. (Of course it would be better than no composition at all.)

In general, I expect there to be a number of potentially low-hanging improvements to be made to the compiler -- many of them deliberately omitted and mentioned in the limitations section, and many we haven't yet come up with. There are tons of features one could add, each of which takes time to think about and increases overall complexity, so we had to be judicious about which lines to pursue -- and even then, we barely had time to get into superposition experiments. We currently aren't prioritising further Tracr development until we see it used for research in practice, but I'd be excited to help anyone who's interested in working with it or contributing.

Comment by Vlad Mikulik (vlad_m) on Who models the models that model models? An exploration of GPT-3's in-context model fitting ability · 2022-06-09T16:54:57.939Z · LW · GW

Nice work! 

This section in Anthropic's work on Induction heads seems highly relevant -- I would be interested in seeing an extension of your analysis that looks at what induction heads do in these tasks.

If we believe the claims in that paper, then in-context learning of any kind seems to be driven by a fairly simple mechanism not unlike kNN -- induction attention heads. Since it's pretty tractable to locate induction heads in an automated way, we could potentially take a look at the actual mechanism being used to implement these predictions and verify or falsify your hypotheses about how GPT makes them. (Although you'd probably have to switch to an open-source model.)
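
As a rough sketch of the kind of automated check I have in mind (this loosely follows the prefix-matching idea from that paper rather than their exact metric, and `attn`/`tokens` are placeholders for whatever the open-source model gives you):

```python
import numpy as np

def prefix_matching_score(attn, tokens):
    """Rough induction score for a single attention head.

    attn:   (seq, seq) attention weights for one head (rows index query positions).
    tokens: (seq,) token ids, ideally a repeated random sequence.
    Measures how much each position attends to the token that followed an earlier
    occurrence of its current token -- the [A][B] ... [A] -> attend-to-[B] pattern.
    """
    seq = len(tokens)
    scores = []
    for i in range(2, seq):
        keys = [j + 1 for j in range(i - 1) if tokens[j] == tokens[i]]
        if keys:
            scores.append(attn[i, keys].sum())
    return float(np.mean(scores)) if scores else 0.0
```

Scanning every head this way on repeated random sequences would give you candidate induction heads to then inspect on your regression tasks.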

Comment by Vlad Mikulik (vlad_m) on Knowledge Neurons in Pretrained Transformers · 2021-05-20T13:57:44.831Z · LW · GW

Thanks for the link. This has been on my reading list for a little bit and your recco tipped me over.

Mostly I agree with Paul's concerns about this paper. 

However, I did find the "Transformer Feed-Forward Layers Are Key-Value Memories" paper they reference more interesting -- it's more mechanistic, and their results are pretty encouraging. I would personally highlight that one more, as it's IMO stronger evidence for the hypothesis, although not conclusive by any means.

Some experiments they show:

  • Top-k activations of individual 'keys' do seem like coherent patterns in prefixes, and as we move up the layers, these patterns become less shallow and more semantics-driven. (Granted, it's not clear how good the methodology there is: to qualify as a pattern, it needs to occur in 3 out of the top-25 prefixes, and there are 3.6 patterns on average in each key. But this is curious enough to keep looking into.)
  • The 'value' distributions corresponding to the keys are in fact somewhat predictive of the actual next word for those top-k prefixes, and exhibit a kind of 'calibration': while the distributions themselves aren't actually calibrated, they are more correct when they assign a higher probability.

I also find it very intriguing that you can just decode the value distributions using the embedding matrix a la Logit Lens.
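
Schematically, the readout is just a matrix product followed by a softmax; here's a minimal sketch with made-up shapes and a random stand-in for the unembedding matrix (in GPT-2 the unembedding is the transpose of the token embedding):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d_model, vocab = 768, 50257                                  # GPT-2-ish shapes, purely for illustration
rng = np.random.default_rng(0)
W_U = rng.normal(size=(d_model, vocab)) / np.sqrt(d_model)   # stand-in unembedding matrix
value_vector = rng.normal(size=d_model)                      # one feed-forward 'value' in the key-value picture

token_probs = softmax(value_vector @ W_U)                    # Logit-Lens-style readout over the vocabulary
top_tokens = np.argsort(token_probs)[::-1][:10]              # tokens this value most strongly promotes
```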

Comment by Vlad Mikulik (vlad_m) on My take on Michael Littman on "The HCI of HAI" · 2021-04-03T15:57:38.478Z · LW · GW

Thanks for a great post.

---

One nice point that this post makes (which I suppose was also prominent in the talk, but I can only guess, not being there myself) is that there's a kind of progression we can draw (simplifying a little):

- Human specifies what to do (Classical software)
- Human specifies what to achieve (RL)
- Machine infers a specification of what to achieve (IRL)
- Machine collaborates with human to infer and achieve what the human wants (Assistance games)

Towards the end, this post describes an extrapolation of this trend,

- Machine and human collaboratively figure out what the human even wants to do in the first place.

'Helping humans figure out what they want' is a deep, complex and interesting problem, and I'd love it if more folks were thinking through what solutions to it ought to look like. This seems particularly urgent because human motivations can be affected even by algorithms that were not designed to solve this problem -- for example, think of recommender systems shaping their users' habits -- and which therefore aren't doing what we'd want them to do.

---

Another nice point is the connection between ML algorithm design and HCI. I've been meaning to write something looking at RL as 'technique for communicating and achieving human intent' (and, as a corollary, at AI safety as a kind of human-centred algorithm design), but it seems that I've been scooped by Michael :)

I note that not everyone sees RL from this frame. Some RL researchers view it as a way of understanding intelligence in the abstract, without connecting reward to human values.

---

One thing I'm a little less sure of is the conclusion you draw from your examples of changing intentions. While the examples convince me that the AI ought to have some sophistication about the human's intentions -- for example, being aware that human intentions can change -- it's not obvious that the right move is to 'pop out' further and assume there is something 'bigger' that the human's intentions should be aligned with. Could you elaborate on your vision of what you have in mind there?

Comment by Vlad Mikulik (vlad_m) on Formal Solution to the Inner Alignment Problem · 2021-02-21T19:01:03.576Z · LW · GW

Thanks for the post and writeup, and good work! I especially appreciate the short, informal explanation of what makes this work.

Given my current understanding of the proposal, I have one worry which makes me reluctant to share your optimism about this being a solution to inner alignment:

The scheme doesn't protect us if somehow all top-n demonstrator models have correlated errors. This could happen if they are coordinating, or more prosaically if our way to approximate the posterior leads to such correlations. The picture I have in my head for the latter is that we train a big ensemble of neural nets and treat a random sample from that ensemble as a random sample from the posterior, although I don't know if that's how it's actually done.

A lot of the work is done by the assumption that the true demonstrator is in the posterior, which means that at least one of the top-performing models will not have the same correlated errors. But I'm not sure how true this assumption will be in the neural-net approximation I describe above. I worry about inner alignment failures because I don't really trust the neural net prior, and I can imagine training a bunch of neural nets to have correlated weirdnesses about them (in part because of the neural net prior they share, and in part because of things like Adversarial Examples Are Not Bugs, They Are Features). As such it wouldn't be that surprising to me if it turned out that ensembles have certain correlated errors, and in particular don't really represent anything like the demonstrator.
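
The kind of sanity check I have in mind is something like this minimal sketch, where `preds` and `truth` are placeholders for ensemble outputs and reference answers on some (ideally off-distribution) inputs:

```python
import numpy as np

def pairwise_error_correlation(preds, truth):
    """preds: (n_models, n_inputs) ensemble predictions; truth: (n_inputs,) reference answers.

    Returns the (n_models, n_models) correlation matrix of the models' error indicators.
    If the ensemble really behaved like independent samples from the posterior, the
    off-diagonal entries should be near zero; the worry above is that a shared
    neural-net prior pushes them well above that. (A model with no errors at all
    yields a nan row -- fine for a rough check.)
    """
    errors = (preds != truth[None, :]).astype(float)
    return np.corrcoef(errors)
```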

I do feel safer using this method than I would deferring to a single model, so this is still a good idea on balance. I just am not convinced that it solves the inner alignment problem. Instead, I'd say it ameliorates its severity, which may or may not be sufficient.

Comment by Vlad Mikulik (vlad_m) on Against strong bayesianism · 2021-01-12T22:19:00.601Z · LW · GW

> You need much more than limiting behavior to say anything about whether or not the processes are ‘similar’ in a useful way before that.

Perhaps the synthesis here is that while looking at asymptotic behaviour of a simpler system can be supremely useful, we should be surprised that it works so well. To rely on this technique in a new domain we should, every time, demonstrate that it actually works in practice.

Also, it's interesting that many of these examples do have 'pathological cases' where the limit doesn't match practice. And this isn't necessarily restricted to toy domains or weird setups: for example, the most asymptotically efficient matrix multiplication algorithms are impractical (although in fairness that's the most compelling example on that page). 
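
To make that point concrete with a tamer example than the galactic algorithms (this is Strassen rather than Coppersmith-Winograd, and whether it wins at any given size depends entirely on constants and implementation):

```python
import time
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiplication for square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B                           # fall back to BLAS below the leaf size
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)  # 7 recursive multiplications
    M2 = strassen(A21 + A22, B11, leaf)        # instead of the naive 8
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M3 + M2 + M6]])

n = 1024
A, B = np.random.rand(n, n), np.random.rand(n, n)
t0 = time.time(); C1 = strassen(A, B); t1 = time.time(); C2 = A @ B; t2 = time.time()
print(np.allclose(C1, C2), f"strassen: {t1 - t0:.2f}s, BLAS matmul: {t2 - t1:.2f}s")
```

Asymptotically fewer multiplications, but at practical sizes the tuned O(n^3) library call is usually the thing you'd actually reach for.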

Comment by Vlad Mikulik (vlad_m) on Utility ≠ Reward · 2021-01-11T16:39:14.699Z · LW · GW

More than a year after writing this post, I would still say it captures the key ideas of the sequence on mesa-optimisation, which remain central in today's conversations on the topic. I still largely stand by what I wrote, and recommend this post as a complement to that sequence for two reasons:

First, skipping some detail allows it to focus on the important points, making it better-suited than the full sequence for obtaining an overview of the area. 

Second, unlike the sequence, it deemphasises the mechanism of optimisation, and explicitly casts it as a way of talking about goal-directedness. As time passes, I become more and more convinced that it was a mistake to call the primary new term in our work 'mesa-optimisation'. Were I to be choosing the terms again, I would probably go with something like 'learned goal-directedness', though it is quite a mouthful. 

Comment by Vlad Mikulik (vlad_m) on Debate Minus Factored Cognition · 2021-01-02T20:13:06.779Z · LW · GW

Not Abram, and I have only skimmed the post so far, and maybe you're pointing to something more subtle, but my understanding is this:

In Stuart's original use, 'No Indescribable Hellwords' is the hypothesis that in any possible world in which a human's values are violated, the violation is describable: one can point out to the human how her values are violated by the state of affairs.

Analogously, debate as an approach to alignment could be seen as predicated on a similar hypothesis: that in any possible flawed argument, the flaw is describable: one can point out to a human how the argument is flawed.

Edited to add: The additional claim in the Hellwords section is that acting according to the recommendations of debate won't lead to very bad outcomes -- at least, not to ones which could be pointed out. For example, we can imagine a debate around the question "Should we enact policy X?". A very strong argument, if it can be credibly argued, is "Enacting policy X leads to an unacceptable violation Y of your values down the line". So, debate will only recommend policy X if no such arguments are available.

I'm not sure to what extent I buy this additional claim. For example, if a system trained via debate, once actually deployed, doesn't get asked questions like 'Should we enact policy X?' but instead more specific things like 'How much does policy X improve metric Y?', then unless debaters are incentivised to challenge the question's premises ("Metric Y would improve, but you should also consider the unacceptable effect on Z"), we could use debate and still get hellworlds.

Comment by Vlad Mikulik (vlad_m) on Clarifying inner alignment terminology · 2020-11-16T17:12:44.373Z · LW · GW

Thanks for writing this. 

I wish you had included an entry for your definition of 'mesa-optimizer'. When you use the term, do you mean the definition from the paper* (an algorithm that's literally doing search using the mesa objective as the criterion), or do you speak more loosely (e.g., a mesa-optimizer is an optimizer in the same sense as a human is an optimizer)?

A related question is: how would you describe a policy that's a bag of heuristics which, when executed, systematically leads to interesting (low-entropy) low-base-objective states?

*incidentally, looking back on the paper, it doesn't look like we explicitly defined things this way, but it's strongly implied that that's the definition, and appears to be how the term is used on AF.

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-19T11:41:20.073Z · LW · GW

Good point -- I think I wasn't thinking deeply enough about language modelling. I certainly agree that the model has to learn in the colloquial sense, especially if it's doing something really impressive that isn't well-explained by interpolating on dataset examples -- I'm imagining giving GPT-X some new mathematical definitions and asking it to make novel proofs.

I think my confusion was rooted in the fact that you were replying to a section that dealt specifically with learning an inner RL algorithm, and the above sense of 'learning' is a bit different from that one. 'Learning' in your sense can be required for a task without requiring an inner RL algorithm; or at least, whether it does isn't clear to me a priori.

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-15T20:41:37.343Z · LW · GW

I am quite confused. I wonder if we agree on the substance but not on the wording, but perhaps it’s worthwhile talking this through.

I follow your argument, and it is what I had in mind when I was responding to you earlier. If approximating f within the constraints requires computing X, then any policy that approximates f must compute X. (Assuming appropriate constraints that preclude the policy from being a lookup table precomputed by SGD; not sure if that's what you meant by “other similar”, though this may be trickier to do formally than we take it to be).

My point is that for X = ‘learning’, I can’t see how anything I would call learning could meaningfully happen inside a single timestep. ‘Learning’ in my head is something that suggests non-ephemeral change; and any lasting change has to feed into the agent’s next state, by which point SGD would have had its chance to make the same change.

Could you give an example of what you mean (this is partially why I wanted to taboo learning)? Or, could you give an example of a task that would require learning in this way? (Note the within-timestep restriction; without that I grant you that there are tasks that require learning).

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-14T16:49:27.123Z · LW · GW

I interpreted your previous point to mean you only take updates off-policy, but now I see what you meant. When I said you can update after every observation, I meant that you can update once you have made an environment transition and have an (observation, action, reward, observation) tuple. I now see that you meant the RL algorithm doesn't have the ability to update on the reward before the action is taken, which I agree with. I think I still am not convinced, however.

And can we taboo the word 'learning' for this discussion, or keep it to the standard ML meaning of 'update model weights through optimisation'? Of course, some domains require responsive policies that act differently depending on what they observe, which is what Rohin points out elsewhere in these comments. In complex tasks on the way to AGI, I can see the kind of responsiveness required becoming very sophisticated indeed, possessing interesting cognitive structure. But it doesn't have to be the same kind of responsiveness as the learning process of an RL agent; and it doesn't necessarily look like learning in the everyday sense of the word. Since the space of things that could be meant here is so big, it would be good to talk more concretely.

> You can't update the model based on its action until it's taken that action and gotten a reward for it.

Right, I agree with that.

Now, I understand that you argue that if a policy were to learn an internal search procedure, or an internal learning procedure, then it could predict the rewards it would get for different actions. It would then pick the action that scores best according to its prediction, thereby 'updating' based on returns it hasn't yet received and actions it hasn't yet taken. I agree that it's possible this is helpful, and it would be interesting to study existing meta-learners from this perspective (though my guess is that they don't do anything so sophisticated). It isn't clear to me a priori that, from the point of view of the policy, this is the best strategy to take.

But note that this argument means that to the extent learned responsiveness can do more than the RL algorithm's weight updates can, that cannot be due to recurrence. If it was, then the RL algorithm could just simulate the recurrent updates using the agent's weights, achieving performance parity. So for what you're describing to be the explanation for emergent learning-to-learn, you'd need the model to do all of its learned 'learning' within a single forward pass. I don't find that very plausible -- or rather, whatever advantageous responsive computation happens in the forward pass, I wouldn't be inclined to describe as learning.

You might argue that today's RL algorithms can't simulate the required recurrence using the weights -- but that is a different explanation to the one you state, and essentially the explanation I would lean towards.

> if taking actions requires learning, then the model itself has to do that learning.

I'm not sure what you mean when you say 'taking actions requires learning'. Do you mean something other than the basic requirement that a policy depends on observations?

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-13T15:16:20.699Z · LW · GW

> I've thought of two possible reasons so far.
> Perhaps your outer RL algorithm is getting very sparse rewards, and so does not learn very fast. The inner RL could implement its own reward function, which gives faster feedback and therefore accelerates learning. This is closer to the story in Evan's mesa-optimization post, just replacing search with RL.
> More likely perhaps (based on my understanding), the outer RL algorithm has a learning rate that might be too slow, or is not sufficiently adaptive to the situation. The inner RL algorithm adjusts its learning rate to improve performance.

I would be more inclined towards a more general version of the latter view, in which gradient updates just aren't a very effective way to track within-episode information.

The central example of learning-to-learn is a policy that effectively explores/exploits when presented with an unknown bandit from within the training distribution. An optimal policy essentially needs to keep track of sufficient statistics of the reward distributions for each action. If you're training a memoryless policy for a fixed bandit problem using RL, then the only way of tracking the sufficient stats you have is through your weights, which are changed through the gradient updates. But the weight-space might not be arranged in a way that's easily traversed by local jumps. On the other hand, a meta-trained recurrent agent can track sufficient stats in its activations, traversing the sufficient statistic space in whatever way it pleases -- its updates need not be local.
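
Concretely, for Bernoulli bandits the sufficient statistics are just per-arm pull and success counts; here's a toy sketch of an explore/exploit rule driven purely by them (a meta-trained recurrent policy would carry something equivalent in its activations rather than running this explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, horizon = 5, 200
p_true = rng.uniform(size=n_arms)      # unknown Bernoulli success probabilities

pulls = np.zeros(n_arms)               # sufficient statistics: how often each arm was pulled,
successes = np.zeros(n_arms)           # and how often it paid off

for t in range(horizon):
    # Thompson sampling: sample plausible success rates from the per-arm posteriors
    # implied by the sufficient stats, then act greedily with respect to the sample.
    samples = rng.beta(1 + successes, 1 + pulls - successes)
    arm = int(samples.argmax())
    reward = float(rng.random() < p_true[arm])
    pulls[arm] += 1
    successes[arm] += reward
```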

This has an interesting connection to MAML, because a converged memoryless MAML solution on a distribution of bandit tasks will presumably arrange the part of its weight-space that encodes bandit sufficient statistics in a way that makes it easy to traverse via SGD. That would be a neat (and not difficult) experiment to run.

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-13T15:00:38.057Z · LW · GW

> I would propose a third reason, which is just that learning done by the RL algorithm happens after the agent has taken all of its actions in the episode, whereas learning done inside the model can happen during the episode.

This is not true of RL algorithms in general -- if I want, I can make weight updates after every observation. And yet, I suspect that if I meta-train a recurrent policy using such an algorithm on a distribution of bandit tasks, I will get a 'learning-to-learn' style policy.

So I think this is a less fundamental reason, though it is true in off-policy RL.

Comment by Vlad Mikulik (vlad_m) on Mesa-Search vs Mesa-Control · 2020-09-13T14:45:41.773Z · LW · GW

I had a similar confusion when I first read Evan's comment. I think the thing that obscures this discussion is the extent to which the word 'learning' is overloaded -- so I'd vote to taboo the term and use more concrete language.

Comment by Vlad Mikulik (vlad_m) on interpreting GPT: the logit lens · 2020-09-02T15:42:24.082Z · LW · GW

You might want to look into NMF, which, unlike PCA/SVD, doesn't aim to create an orthogonal projection. It works well for interpretability because its components cannot cancel each other out, which makes its features more intuitive to reason about. I think it is essentially what you want, although I don't think it will let you directly find the 'larger set of almost orthogonal vectors' you're looking for.
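
For what it's worth, the sklearn version is only a few lines; the activations here are a random non-negative stand-in (in practice you'd use post-ReLU activations, or otherwise make them non-negative, since NMF requires that):

```python
import numpy as np
from sklearn.decomposition import NMF

activations = np.abs(np.random.randn(1000, 3072))    # (n_samples, n_neurons), must be non-negative

nmf = NMF(n_components=32, init="nndsvd", max_iter=500)
codes = nmf.fit_transform(activations)  # (1000, 32): how strongly each component fires per sample
parts = nmf.components_                 # (32, 3072): non-negative component directions over neurons
```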

Comment by Vlad Mikulik (vlad_m) on Is the term mesa optimizer too narrow? · 2019-12-19T10:13:29.389Z · LW · GW

I think we basically agree. I would also prefer people to think more about the middle case. Indeed, when I use the term mesa-optimiser, I usually intend to talk about the middle picture, though strictly that’s sinful as the term is tied to Optimisers.

Re: inner alignment

I think it’s basically the right term. I guess in my mind I want to say something like, “Inner Alignment is the problem of aligning objectives across the Mesa≠Base gap”, which shows how the two have slightly different shapes. But the difference isn’t really important.

Inner alignment gap? Inner objective gap?

Comment by Vlad Mikulik (vlad_m) on Is the term mesa optimizer too narrow? · 2019-12-18T23:07:22.689Z · LW · GW

I’m not talking about finding an optimiser-less definition of goal-directedness that would support the distinction. As you say, that is easy. I am interested in a term that would just point to the distinction without taking a view on the nature of the underlying goals.

As a side note I think the role of the intentional stance here is more subtle than I see it discussed. The nature of goals and motivation in an agent isn’t just a question of applying the intentional stance. We can study how goals and motivation work in the brain neuroscientifically (or at least, the processes in the brain that resemble the role played by goals in the intentional stance picture), and we experience goals and motivations directly in ourselves. So, there is more to the concepts than just taking an interpretative stance, though of course to the extent that the concepts (even when refined by neuroscience) are pieces of a model being used to understand the world, they will form part of an interpretative stance.

Comment by Vlad Mikulik (vlad_m) on Is the term mesa optimizer too narrow? · 2019-12-18T22:53:54.689Z · LW · GW

I understand that, and I agree with that general principle. My comment was intended to be about where to draw the line between incorrect theory, acceptable theory, and pre-theory.

In particular, I think that while optimisation is too much theory, goal-directedness talk is not, despite being more in theory-land than empirical malign generalisation talk. We should keep thinking of worries on the level of goals, even as we’re still figuring out how to characterise goals precisely. We should also be thinking of worries on the level of what we could observe empirically.

Comment by Vlad Mikulik (vlad_m) on Is the term mesa optimizer too narrow? · 2019-12-18T22:18:58.202Z · LW · GW

We’re probably in agreement, but I’m not sure what exactly you mean by “retreat to malign generalisation”.

For me, mesa-optimisation’s primary claim isn’t (call it Optimisers) that agents are well-described as optimisers, which I’m happy to drop. It is the claim (call it Mesa≠Base) that whatever the right way to describe them is, in general their intrinsic goals are distinct from the reward.

That’s a specific (if informal) claim about a possible source of malign generalisation. Namely, that when intrinsic goals differ arbitrarily from the reward, then systems that competently pursue them may lead to outcomes that are arbitrarily bad according to the reward. Humans don’t pose a counterexample to that, and it seems prima facie conceptually clarifying, so I wouldn’t throw it away. I’m not sure if you propose to do that, but strictly, that’s what “retreating to malign generalisation” could mean, as malign generalisation itself makes no reference to goals.

One might argue that until we have a good model of goal-directedness, Mesa≠Base reifies goals more than is warranted, so we should drop it. But I don’t think so – so long as one accepts goals as meaningful at all, the underlying model need only admit a distinction between the goal of a system and the criterion according to which a system was selected. I find it hard to imagine a model or view that wouldn’t allow this – this makes sense even in the intentional stance, whose metaphysics for goals is pretty minimal.

It’s a shame that Mesa≠Base is so entangled with Optimisers. When I think of mesa-optimisation, I tend to think more about the former than about the latter. I wish there was a term that felt like it pointed directly to Mesa≠Base without pointing to Optimisers. The Inner Alignment Problem might be it, though it feels like it’s not quite specific enough.

Comment by Vlad Mikulik (vlad_m) on Is the term mesa optimizer too narrow? · 2019-12-16T13:00:44.171Z · LW · GW

I’m sympathetic to what I see as the message of this post: that talk of mesa-optimisation is too specific given that the practical worry is something like malign generalisation. I agree that it makes extra assumptions on top of that basic worry, which we might not want to make. I would like to see more focus on inner alignment than on mesa-optimisation as such. I’d also like to see a broader view of possible causes for malign generalisation, which doesn’t stick so closely to the analysis in our paper. (In hindsight our analysis could also have benefitted from taking a broader view, but that wasn’t very visible at the time.)

At the same time, speaking only in terms of malign generalisation (and dropping the extra theoretical assumptions of a more specific framework) is too limiting. I suspect that solutions to inner alignment will come from taking an opinionated view on the structure of agents, clarifying its assumptions and concepts, explaining why it actually applies to real-world agents, and offering concrete ways in which the extra structure of the view can be exploited for alignment. I’m not sure that mesa-optimisation is the right view for that, but I do think that the right view will have something to do with goal-directedness.

Comment by Vlad Mikulik (vlad_m) on A simple environment for showing mesa misalignment · 2019-09-29T00:13:26.221Z · LW · GW

By that I didn’t mean to imply that we care about mesa-optimisation in particular. I think that this demo working “as intended” is a good demo of an inner alignment failure, which is exciting enough as it is. I just also want to flag that the inner alignment failure doesn’t automatically provide an example of a mesa-optimiser.

Comment by Vlad Mikulik (vlad_m) on A simple environment for showing mesa misalignment · 2019-09-26T09:57:02.334Z · LW · GW

I have now seen a few suggestions for environments that demonstrate misaligned mesa-optimisation, and this is one of the best so far. It combines being simple and extensible with being compelling as a demonstration of pseudo-alignment if it works (fails?) as predicted. I think that we will want to explore more sophisticated environments with more possible proxies later, but as a first working demo this seems very promising. Perhaps one could start even without the maze, just a gridworld with keys and boxes.

I don’t know whether observing key-collection behaviour here would be sufficient evidence to count as mesa-optimisation, if the agent has too simple a policy. There is room for philosophical disagreement there. Even with that, a working demo of this environment would in my opinion be a good thing, as we would have a concrete agent to disagree about.

Comment by Vlad Mikulik (vlad_m) on Utility ≠ Reward · 2019-09-13T09:55:02.451Z · LW · GW

Ah; this does seem to be an unfortunate confusion.

I didn’t intend to make ‘utility’ and ‘reward’ into terminology – that’s what ‘mesa-‘ and ‘base’ objectives are for. I wasn’t aware of the terms being used in the technical sense as in your comment, so I wanted to use utility and reward as friendlier and familiar words for this intuition-building post. I am not currently inclined to rewrite the whole thing using different words because of this clash, but could add a footnote to clear this up. If the utility/reward distinction in your sense becomes accepted terminology, I’ll think about rewriting this.

That said, the distinctions we’re drawing appear to be similar. In your terminology, a utility-maximising agent is an agent which has an internal representation of a goal which it pursues. Whereas a reward-maximising agent does not have a rich internal goal representation but instead a kind of pointer to the external reward signal. To me this suggests your utility/reward tracks a very similar, if not the same, distinction between internal/external that I want to track, but with a difference in emphasis. When either of us says ‘utility ≠ reward’, I think we mean the same distinction, but what we want to draw from that distinction is different. Would you disagree?

Comment by Vlad Mikulik (vlad_m) on Utility ≠ Reward · 2019-09-05T23:14:58.263Z · LW · GW

Thanks for raising this. While I basically agree with evhub on this, I think it is unfortunate that the linguistic justification is messed up as it is. I'll try to amend the post to show a bit more sensitivity to the Greek not really working as intended.

Though I also think that "the opposite of meta"-optimiser is basically the right concept, I feel quite dissatisfied with the current terminology, with respect to both the "mesa" and the "optimiser" parts. This is despite us having spent a substantial amount of time and effort on trying to get the terminology right! My takeaway is that it's just hard to pick terms that are both non-confusing and evocative, especially when naming abstract concepts. (And I don't think we did that badly, all things considered.)

If you have ideas on how to improve the terms, I would like to hear them!

Comment by Vlad Mikulik (vlad_m) on Risks from Learned Optimization: Introduction · 2019-06-24T17:58:15.167Z · LW · GW

You’re completely right; I don’t think we meant to have ‘more formally’ there.

Comment by Vlad Mikulik (vlad_m) on Risks from Learned Optimization: Introduction · 2019-06-09T18:53:04.647Z · LW · GW

To some extent, but keep in mind that in another sense, the behavioural objective of maximising paperclips is totally consistent with playing along with the base objective for a while and then defecting. So I’m not sure the behaviour/mesa- distinction alone does the work you want it to do even in that case.

Comment by Vlad Mikulik (vlad_m) on Risks from Learned Optimization: Introduction · 2019-06-09T18:48:30.712Z · LW · GW

I’ve been meaning for a while to read Dennett with reference to this, and actually have a copy of Bacteria to Bach. Can you recommend some choice passages, or is it significantly better to read the entire book?

P.S. I am quite confused about DQN’s status here and don’t wish to suggest that I’m confident it’s an optimiser. Just to point out that it’s plausible we might want to call it one without calling PPO an optimiser.

P.P.S.: I forgot to mention in my previous comment that I enjoyed the objective graph stuff. I think there might be fruitful overlap between that work and the idea we’ve sketched out in our third post on a general way of understanding pseudo-alignment. Our objective graph framework is less developed than yours, so perhaps your machinery could be applied there to get a more precise analysis?

Comment by Vlad Mikulik (vlad_m) on Risks from Learned Optimization: Introduction · 2019-06-08T18:47:18.568Z · LW · GW

Thanks for an insightful comment. I think your points are good to bring up, and though I will offer a rebuttal I’m not convinced that I am correct about this.

What’s at stake here is: describing basically any system as an agent optimising some objective is going to be a leaky abstraction. The question is, how do we define the conditions of calling something an agent with an objective in such a way to minimise the leaks?

Distinguishing the “this system looks like it optimises for X” from “this system internally uses an evaluation of X to make decisions” is useful from the point of view of making the abstraction more robust. The former doesn’t make clear what makes the abstraction “work”, and so when to expect it to fail. The latter will at least tell you what kind of failures to expect in the abstraction: places where the evaluation of X doesn’t connect to the rest of the system like it’s supposed to. In particular, you’re right that if the learned environment model doesn’t generalise, the mesa-objective won’t be predictive of behaviour. But that’s actually a prediction of taking this view. On the other hand, it is unclear if taking the behavioural view would predict that the system will change its behaviour off-distribution (partially, because it’s unclear what exactly grounds the similarities in behaviour on-distribution).

I think it definitely is useful to also think about the behavioural objective in the way you describe, because the later concerns we raise basically do also translate to coherent behavioural objectives. And I welcome more work trying to untangle these concepts from one another, or trying to dissolve any of them as unnecessary. I am just wary of throwing away seemingly relevant assumptions about internal structure before we can show they’re unhelpful.

Re: DQN

You’re also right to point out DQN as an interesting edge case. But I am actually unsure that DQN agents should be considered non-optimisers, in the sense that they do perform rudimentary optimisation: they take an argmax of the Q function. The Q function is regressed to the episode returns. If the learning goes well, the Q function is literally representing the agent’s objective (indeed, it’s not really selected to maximise return; it’s selected to be accurate at predicting return). Contrast this with, e.g., agents trained with policy optimisation, which are not supposed to directly represent an objective, but are supposed to score well on it. (Someone good at running RL experiments should maybe look into comparing the coherence of revealed preferences of DQN agents with PPO agents. I’d read that paper.)
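
To make the contrast explicit, here's a schematic sketch with dummy networks and a single transition (not a full training loop):

```python
import torch
import torch.nn as nn

n_obs, n_actions, gamma = 4, 3, 0.99
obs, next_obs = torch.randn(n_obs), torch.randn(n_obs)    # dummy transition
reward, advantage = torch.tensor(1.0), torch.tensor(0.5)  # dummy learning signals

# DQN-style: the network's output is literally an estimate of return, and acting is a
# rudimentary optimisation step -- an argmax over that explicit estimate.
q_net = nn.Linear(n_obs, n_actions)
q_values = q_net(obs)
action = q_values.argmax()
td_target = reward + gamma * q_net(next_obs).max()
dqn_loss = (q_values[action] - td_target.detach()) ** 2   # regress Q towards the observed return

# Policy-gradient-style (PPO, very schematically): the network outputs a policy directly;
# return never has to be represented inside the model -- it only shapes the gradient from outside.
policy_net = nn.Sequential(nn.Linear(n_obs, n_actions), nn.Softmax(dim=-1))
probs = policy_net(obs)
action = torch.distributions.Categorical(probs).sample()
pg_loss = -torch.log(probs[action]) * advantage
```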

Comment by Vlad Mikulik (vlad_m) on Risks from Learned Optimization: Introduction · 2019-06-08T18:15:14.877Z · LW · GW

I think humans are fairly weird because we were selected for an objective that is unlikely to be what we select for in our AIs.

That said, if we model AI success as driven by model size and compute (with maybe innovations in low-level architecture), then I think that the way humans represent objectives is probably fairly close to what we ought to expect.

If we model AI success as mainly innovative high-level architecture, then I think we will see more explicitly represented objectives.

My tentative sense is that for AI to be interpretable (and safer) we want it to be the latter kind, but given enough compute the former kind of AI will give better results, other things being equal.

Here, what I mean by low-level architecture is something like “we’ll use lots of LSTMs instead of lots of plain RNNs, but keep the model structure simple: plug in the inputs, pass it through some layers, and read out the action probabilities”, and high-level is something like “let’s organise the model using this enormous flowchart with all of these various pieces that each are designed to take a particular role; here’s the observation embedding, here’s the search in latent model space, here’s the ...”

Comment by Vlad Mikulik (vlad_m) on Conditions for Mesa-Optimization · 2019-06-06T07:10:19.252Z · LW · GW

Yes, it probably doesn’t apply to most objectives. Though it seems to me that the closer the task is to something distinctly human, the more probable it is that this kind of consideration can apply. E.g., making judgements in criminal court cases and writing fiction are domains where it’s not implausible to me that this could apply.

I do think this is a pretty speculative argument, even for this sequence.

Comment by Vlad Mikulik (vlad_m) on Conditions for Mesa-Optimization · 2019-06-04T10:51:29.869Z · LW · GW

The main benefit I see of hardcoding optimisation is that, assuming the system's pieces learn as intended (without any mesa-optimisation happening in addition to the hardcoded optimisation) you get more access and control as a programmer over what the learned objective actually is. You could attempt to regress the learned objective directly to a goal you want, or attempt to enforce a certain form on it, etc. When the optimisation itself is learned*, the optimiser is more opaque, and you have fewer ways to affect what goal is learned: which weights of your enormous LSTM-based mesa-optimiser represent the objective?

This doesn't solve the problem completely (you might still learn an objective that is very incorrect off-distribution, etc.), but could offer more control and insight into the system to the programmer.

*Of course, you can have learned optimisation where you keep track of the objective which is being optimised (like in Learning to Learn by Gradient Descent), but I'd class that more under hard-coded optimisation for the purposes of this discussion. Here I mean the kind of learned optimisation that happens where you're not building the architecture explicitly around optimising or learning to optimise.

Comment by Vlad Mikulik (vlad_m) on Conditions for Mesa-Optimization · 2019-06-04T10:41:23.373Z · LW · GW

The section on human modelling annoyingly conflates two senses of human modelling. One is the sense you talk about, the other is seen in the example:

> For example, it might be the case that predicting human behavior requires instantiating a process similar to human judgment, complete with internal motives for making one decision over another.

The idea there isn't that the algorithm simulates human judgement as an external source of information for itself, but that the actual algorithm learns to be a human-like reasoner, with human-like goals (because that's a good way of approximating the output of human-like reasoning). In that case, the agent really is a mesa-optimiser, to the degree that a goal-directed human-like reasoner is an optimiser.

(I'm not sure to what degree it's actually likely that a good way to approximate the behaviour of human-like reasoning is to instantiate human-like reasoning)

Comment by Vlad Mikulik (vlad_m) on Selection vs Control · 2019-06-02T20:03:50.586Z · LW · GW

> to what extent are mesa-controllers with simple behavioural objectives going to be simple?

I’m not sure what “simple behavioural objective” really means. But I’d expect that for tasks requiring very simple policies, controllers would do, whereas the more complicated the policy required to solve a task, the more one would need to do some kind of search. Is this what we observe? I’m not sure. AlphaStar and OpenAI Five seem to do well enough in relatively complex domains without any explicit search built into the architecture. Are they using their recurrence to search internally? Who knows. I doubt it, but it’s not implausible.

> certain kinds of mesa-controllers can be simple: the mesa-controllers which are more like my rocket example (explicit world-model; explicit representation of objective within that world model; but, optimal policy does not use any search).

The rocket example is interesting. I guess the question for me there is, what sorts of tasks admit an optimal policy that can be represented in this way? Here it also seems to me like the more complex an environment, the more implausible it seems that a powerful policy can be successfully represented with straightforward functions. E.g., let’s say we want a rocket not just to get to the target, but to self-identify a good target in an area and pick a trajectory that evades countermeasures. I would be somewhat surprised if we can still represent the best policy as a set of non-searchy functions. So I have this intuition that for complex state spaces, it’s hard to find pure controllers that do the job well.

Comment by Vlad Mikulik (vlad_m) on Selection vs Control · 2019-06-02T15:38:24.555Z · LW · GW

(I am unfortunately currently bogged down with external academic pressures, and so cannot engage with this at the depth I’d like to, but here’s some initial thoughts.)

I endorse this post. The distinction explained here seems interesting and fruitful.

I agree with the idea to treat selection and control as two kinds of analysis, rather than as two kinds of object – I think this loosely maps onto the distinction we make between the mesa-objective and the behavioural objective. The former takes the selection view of the learned algorithm; the latter takes the control view.

At least speaking for myself (the other authors might have different thoughts on this), the decision to talk explicitly in terms of the selection view in the mesa-optimiser post is based on an intuition that selectors, in general, have more coherently defined counterfactual behaviour. That is, given a very different input, a selector will still select an output that scores well on its mesa-objective, because that’s how selectors work. Whereas a controller, to the degree it optimises for an objective, seems more likely to just completely stop working on a different input. I have fairly low confidence in this argument, however: it seems to me that one can plausibly have pretty coherent counterfactual behaviour in a very broad distribution even without doing selection. And since it is ultimately the behaviour that does the damage, it would be good to have a working distinction that is based purely on that. We (the mesa-optimisation authors) haven’t been able to come up with one.

Another reason to be interested in selectors is that in RL, the learned algorithm is supposed to fill a controller role. So, restricting attention to selectors allows us to talk at least somewhat meaningfully about non-optimiser agents, which is otherwise difficult, as any learned agent is in a controller-shaped context.

In any case, I hope that more work happens on this problem, either dissolving the need to talk about optimisation, or at least making all these distinctions more precise. The vagueness of everything is currently my biggest worry about the legitimacy of mesa-optimiser concerns.

Comment by Vlad Mikulik (vlad_m) on What failure looks like · 2019-03-27T00:22:49.513Z · LW · GW

The goal that the agent is selected to score well on is not necessarily the goal that the agent is itself pursuing. So, unless the agent’s internal goal matches the goal for which it’s selected, the agent might still seek influence because its internal goal permits that. I think this is in part what Paul means by “Avoiding end-to-end optimization may help prevent the emergence of influence-seeking behaviors (by improving human understanding of and hence control over the kind of reasoning that emerges)”

Comment by Vlad Mikulik (vlad_m) on Probabilistic decision-making as an anxiety-reduction technique · 2018-07-17T05:27:57.651Z · LW · GW

I also use a simple version of this, with a key extra step at the end:

1) Have a decision you are unsure about.
2) Perform randomisation (I usually just use a coin).
3) Notice how the outcome makes you feel. If you find that you wish the coin had landed the other way, override the decision and do what you secretly wanted to do all along.

You might think the third step defeats the purpose of the exercise, but so long as you actually commit to following the randomisation most of the time, it gives you direct access to very useful information. It also sets up the right incentive, wherein you never really need to work your willpower against your desires (except, I guess, the desire to deliberate more).

I mostly use this for a slightly different use case – inconsequential decisions like where to eat or small purchases, where taking a lot of time to optimise isn’t worth it. Your mileage may vary with more important decisions, but I see no reason in principle this couldn’t work.

Comment by Vlad Mikulik (vlad_m) on Clarifying Consequentialists in the Solomonoff Prior · 2018-07-11T22:16:27.264Z · LW · GW

I agree. That’s what I meant when I wrote there will be TMs that artificially promote S itself. However, this would still mean that most of S’s mass in the prior would be due to these TMs, and not due to the natural generator of the string.

Furthermore, it’s unclear how many TMs would promote S vs S’ or other alternatives. Because of this, I don’t know whether the prior would be higher for S or S’ from this reasoning alone. Whichever is the case, the prior no longer reflects meaningful information about the universe that generates S and whose inhabitants are using the prefix to choose what to do; it’s dominated by these TMs that search for prefixes they can attempt to influence.

Comment by Vlad Mikulik (vlad_m) on Clarifying Consequentialists in the Solomonoff Prior · 2018-07-11T19:55:10.443Z · LW · GW

I agree that this probably happens when you set out to mess with an arbitrary particular S, i.e. try to make some S’ that shares a prefix with S as likely as S.

However, some S are special, in the sense that their prefixes are being used to make very important decisions. If you, as a malicious TM in the prior, perform an exhaustive search of universes, you can narrow down your options to only a few prefixes used to make pivotal decisions; selecting one of those to mess with is then very cheap to specify. I use S to refer to those strings that are the ‘natural’ continuation of those cheap-to-specify prefixes.

There are, it seems to me, a bunch of other equally-complex TMs that want to make other strings that share that prefix more likely, including some that promote S itself. What the resulting balance looks like is unclear to me, but what’s clear is that the prior is malign with respect to that prefix - conditioning on that prefix gives you a distribution almost entirely controlled by these malign TMs. The ‘natural’ complexity of S, or of other strings that share the prefix, play almost no role in their priors.

The above is of course conditional on this exhaustive search being possible, which also relies on there being anyone in any universe that actually uses the prior to make decisions. Otherwise, we can’t select the prefixes that can be messed with.

Comment by Vlad Mikulik (vlad_m) on Clarifying Consequentialists in the Solomonoff Prior · 2018-07-11T04:47:38.194Z · LW · GW

The trigger sequence is a cool idea.

I want to add that the intended generator TM also needs to specify a start-to-read time, so there is symmetry there. Whatever method a TM needs to use to select the camera start time in the intended generator for the real world samples, it can also use in the simulated world with alien life, since for the scheme to work only the difference in complexity between the two matters.

There is additional flex in that unlike the intended generator, the reasoner TM can sample its universe simulation at any cheaply computable interval, giving the civilisation the option of choosing any amount of thinking they can perform between outputs, if they so choose.