Matt Botvinick on the spontaneous emergence of learning algorithms
post by Adam Scholl (adam_scholl) · 2020-08-12T07:47:13.726Z · LW · GW · 87 comments
Matt Botvinick is Director of Neuroscience Research at DeepMind. In this interview, he discusses results from a 2018 paper which describe conditions under which reinforcement learning algorithms will spontaneously give rise to separate full-fledged reinforcement learning algorithms that differ from the original. Here are some notes I gathered from the interview and paper:
Initial Observation
At some point, researchers in Botvinick’s group at DeepMind noticed that when they trained an RNN using RL on a series of related tasks, the RNN itself came to instantiate a separate reinforcement learning algorithm. These researchers weren’t trying to design a meta-learning algorithm—apparently, to their surprise, this just spontaneously happened. As Botvinick describes it, they started “with just one learning algorithm, and then another learning algorithm kind of... emerges, out of, like out of thin air”:
"What happens... it seemed almost magical to us, when we first started realizing what was going on—the slow learning algorithm, which was just kind of adjusting the synaptic weights, those slow synaptic changes give rise to a network dynamics, and the dynamics themselves turn into a learning algorithm.”
Other versions of this basic architecture—e.g., using slot-based memory instead of RNNs—seemed to produce the same basic phenomenon, which they termed "meta-RL." So they concluded that all that’s needed for a system to give rise to meta-RL are three very general properties: the system must 1) have memory, 2) have its weights trained by an RL algorithm, and 3) be trained on a sequence of similar input data.
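To make those three conditions concrete, here is a minimal, hypothetical sketch of the kind of setup involved. The architecture (a GRU), training rule (plain REINFORCE), task (two-armed bandits), and all names and hyperparameters below are illustrative choices of mine, not taken from the paper:

```python
import torch
import torch.nn as nn

class RNNPolicy(nn.Module):
    def __init__(self, n_arms=2, hidden=48):
        super().__init__()
        # Condition 1: memory. Input = one-hot previous action + previous reward.
        self.cell = nn.GRUCell(n_arms + 1, hidden)
        self.head = nn.Linear(hidden, n_arms)

    def forward(self, x, h):
        h = self.cell(x, h)
        return torch.distributions.Categorical(logits=self.head(h)), h

def sample_task(n_arms=2):
    # Condition 3: a sequence of related tasks (bandits with random arm probabilities).
    return torch.rand(n_arms)

HIDDEN = 48
policy = RNNPolicy(hidden=HIDDEN)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(2000):              # Condition 2: weights trained by RL.
    probs = sample_task()
    h = torch.zeros(1, HIDDEN)
    prev = torch.zeros(1, 3)             # [one-hot prev action, prev reward]
    log_probs, rewards = [], []
    for t in range(50):                  # one episode = one fresh bandit problem
        dist, h = policy(prev, h)
        a = dist.sample()
        r = torch.bernoulli(probs[a])
        log_probs.append(dist.log_prob(a))
        rewards.append(r)
        prev = torch.cat([nn.functional.one_hot(a, 2).float(), r.view(1, 1)], dim=1)
    # Slow, weight-based learning: REINFORCE on the episode's total return.
    loss = -(torch.stack(log_probs).sum() * torch.stack(rewards).sum())
    opt.zero_grad(); loss.backward(); opt.step()
```

Nothing in this sketch explicitly asks for a learning algorithm to appear inside the network; the paper's claim is that, because acting well on a fresh bandit requires using within-episode reward history, the trained recurrent dynamics end up implementing one anyway.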
From Botvinick’s description, it sounds to me like he thinks [learning algorithms that find/instantiate other learning algorithms] is a strong attractor in the space of possible learning algorithms:
“...it's something that just happens. In a sense, you can't avoid this happening. If you have a system that has memory, and the function of that memory is shaped by reinforcement learning, and this system is trained on a series of interrelated tasks, this is going to happen. You can't stop it."
Search for Biological Analogue
This system reminded some of the neuroscientists in Botvinick’s group of features observed in brains. For example, like RNNs, the human prefrontal cortex (PFC) is highly recurrent, and the RL and RNN memory systems in their meta-RL model reminded them of “synaptic memory” and “activity-based memory.” They decided to look for evidence of meta-RL occurring in brains, since finding a neural analogue of the technique would provide some evidence they were on the right track, i.e. that the technique might scale to solving highly complex tasks.
They think they found one. In short, they think that part of the dopamine system (DA) is a full-fledged reinforcement learning algorithm, which trains/gives rise to another full-fledged, free-standing reinforcement learning algorithm in PFC, in basically the same way (and for the same reason) the RL-trained RNNs spawned separate learning algorithms in their experiments.
As I understand it, their story goes as follows:
The PFC, along with the bits of basal ganglia and thalamic nuclei it connects to, forms an RNN. Its inputs are sensory percepts, and information about past actions and rewards. Its outputs are actions, and estimates of state value.
DA[1] is an RL algorithm that feeds reward prediction error to PFC. Historically, people assumed the purpose of sending this prediction error was to update PFC’s synaptic weights. Wang et al. agree that this happens, but argue that the principal purpose of sending prediction error is to cause the creation of “a second RL algorithm, implemented entirely in the prefrontal network’s activation dynamics.” That is, they think DA mostly stores its model in synaptic memory, while PFC mostly stores it in activity-based memory (i.e. directly in the dopamine distributions).[2]
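For reference, the reward prediction error that the dopamine system is usually modeled as broadcasting is the standard temporal-difference error. A generic sketch (textbook formula, not the authors' specific model; the function name is mine):

```python
# Textbook temporal-difference reward prediction error:
#   delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
def reward_prediction_error(r, value_next, value_now, gamma=0.99):
    return r + gamma * value_next - value_now

# On the Wang et al. story, this scalar mainly drives slow changes to the
# recurrent network's weights, while the fast, task-specific learning plays
# out in the network's activation dynamics rather than in the weights.
```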
What’s the case for this story? They cite a variety of neuroscience findings as evidence for parts of this hypothesis, many of which involve doing horrible things to monkeys, and some of which they simulate using their meta-RL model to demonstrate that it gives similar results. These points stood out most to me:
Does RL occur in the PFC?
Some scientists implanted neuroimaging devices in the PFCs of monkeys, then sat the monkeys in front of two screens with changing images, and rewarded them with juice when they stared at whichever screen was displaying a particular image. The probabilities of each image leading to juice-delivery periodically changed, causing the monkeys to update their policies. Neurons in their PFCs appeared to exhibit RL-like computation—that is, to use information about the monkey’s past choices (and associated rewards) to calculate the expected value of actions, objects and states.
Wang et al. simulated this task using their meta-RL system. They trained an RNN on the changing-images task using RL; when run, it apparently demonstrated performance similar to the monkeys’, and when they inspected it they found units that similarly seemed to encode expected-value estimates based on prior experience, continually adjust the action policy, etc.
Interestingly, the system continued to improve its performance even once its weights were fixed, which they take to imply that the learning which led to improved performance could only have occurred within the activation patterns of the recurrent network.[3]
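The frozen-weights test is worth spelling out, since it is the main behavioral evidence that the "second" learning algorithm lives in activations. A minimal sketch of that evaluation step, reusing the hypothetical `policy`, `sample_task`, and `HIDDEN` names from the sketch above:

```python
# Weights are frozen; only the hidden state h changes from timestep to timestep.
# If reward late in the episode beats reward early on, the improvement can only
# have come from the recurrent activations.
with torch.no_grad():
    probs = sample_task()
    h = torch.zeros(1, HIDDEN)
    prev = torch.zeros(1, 3)
    rewards = []
    for t in range(100):
        dist, h = policy(prev, h)
        a = dist.sample()
        r = torch.bernoulli(probs[a])
        rewards.append(r.item())
        prev = torch.cat([nn.functional.one_hot(a, 2).float(), r.view(1, 1)], dim=1)
    print("first 20 trials:", sum(rewards[:20]) / 20,
          "last 20 trials:", sum(rewards[-20:]) / 20)
```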
Can the two RL algorithms diverge?
When humans perform two-armed bandit tasks where payoff probabilities oscillate between stable and volatile, they increase their learning rate during volatile periods, and decrease it during stable periods. Wang et al. ran their meta-RL system on the same task, and it varied its learning rate in ways that mimicked human performance. This learning again occurred after weights were fixed, and notably, between the end of training and the end of the task, the learning rates of the two algorithms had diverged dramatically.
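For concreteness, here is a hypothetical sketch of the kind of stable/volatile reward schedule used in this sort of task (block lengths and probabilities are illustrative, not taken from the paper): in stable blocks the better arm stays put, while in volatile blocks it reverses frequently, so a well-calibrated learner should raise its effective learning rate during volatility.

```python
def stable_volatile_schedule(n_trials=200, block=50, p_good=0.75):
    """Return per-trial reward probabilities for a two-armed bandit."""
    good_arm, schedule = 0, []
    for t in range(n_trials):
        volatile = (t // block) % 2 == 1        # alternate stable and volatile blocks
        if volatile and t % 10 == 0:            # frequent reversals when volatile
            good_arm = 1 - good_arm
        schedule.append([p_good if arm == good_arm else 1 - p_good for arm in (0, 1)])
    return schedule
```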
Implications
The account detailed by Botvinick and Wang et al. strikes me as a relatively clear example of mesa-optimization [LW · GW], and I interpret it as tentative evidence that the attractor toward mesa-optimization is strong. [Edit: Note that some commenters, like Rohin Shah [LW(p) · GW(p)] and Evan Hubinger [LW(p) · GW(p)], disagree].
These researchers did not set out to train RNNs in such a way that they would turn into reinforcement learners. It just happened. And the researchers seem to think this phenomenon will occur spontaneously whenever “a very general set of conditions” is met, like the system having memory, being trained via RL, and receiving a related sequence of inputs. Meta-RL, in their view, is just “an emergent effect that results when the three premises are concurrently satisfied... these conditions, when they co-occur, are sufficient to produce a form of ‘meta-learning’, whereby one learning algorithm gives rise to a second, more efficient learning algorithm.”
So on the whole I felt alarmed reading this. That said, if mesa-optimization is a standard feature[4] of brain architecture, it seems notable that humans don’t regularly experience catastrophic inner alignment failures. Maybe this is just because of some non-scalable hack, like that the systems involved aren’t very powerful optimizers.[5] But I wouldn't be surprised if coming to better understand the biological mechanisms involved led to safety-relevant insights.
Thanks to Rafe Kennedy for helpful comments and feedback.
The authors hypothesize that DA is a model-free RL algorithm, and that the spinoff (mesa?) RL algorithm it creates within PFC is model-based, since that’s what happens in their ML model. But they don’t cite biological evidence for this. ↩︎
Depending on what portion of memories are encoded in this way, it may make sense for cryonics standby teams to attempt to reduce the supraphysiological intracellular release of dopamine that occurs after cardiac arrest, e.g. by administering D1-receptor antagonists. Otherwise entropy increases in PFC dopamine distributions may result in information loss. ↩︎
They demonstrated this phenomenon (continued learning after weights were fixed) in a variety of other contexts, too. For example, they cite an experiment in which manipulating DA activity was shown to directly manipulate monkeys’ reward estimations, independent of actual reward—i.e., when their DA activity was blocked/stimulated while they pressed a lever, they exhibited reduced/increased preference for that lever, even if pressing it did/didn’t give them food. They trained their meta-RL system to simulate this; it again demonstrated performance similar to the monkeys’, and again continued learning even after the weights were fixed. ↩︎
The authors seem unsure whether meta-RL also occurs in other brain regions, since for it to occur you need A) inputs carrying information about recent actions/rewards, and B) network dynamics (like recurrence) that support continual activation. Maybe only PFC has this confluence of features. Personally, I doubt it; I would bet that meta-RL (and other sorts of mesa-optimization) occur in a wide variety of brain systems, but it would take more time than I want to allocate here to justify that intuition. ↩︎
Although note that neuroscientists do commonly describe the PFC as disproportionately responsible for the sort of human behavior one might reasonably [LW · GW] wish [LW · GW] to describe as “optimization.” For example, the neuroscience textbook recommended on lukeprog’s textbook recommendation post [LW · GW] describes PFC as “often assumed to be involved in those characteristics that distinguish us from other animals, such as self-awareness and the capacity for complex planning and problem solving.” ↩︎
87 comments
Comments sorted by top scores.
comment by Rohin Shah (rohinmshah) · 2020-08-19T19:50:27.478Z · LW(p) · GW(p)
(EDIT: I'm already seeing downvotes of the post, it was originally at 58 AF karma. This wasn't my intention: I think this is a failure of the community as a whole, not of the author.)
Okay, this has gotten enough karma and has been curated and has influenced another post [AF · GW], so I suppose I should engage, especially since I'm not planning to put this in the Alignment Newsletter.
(A lot copied over from this comment [AF(p) · GW(p)] of mine)
This is extremely basic RL theory.
The linked paper studies bandit problems: each episode of RL is a new bandit problem in which the agent doesn't know which arm gives maximal reward. Unsurprisingly, the agent learns to first explore, and then exploit the best arm. This is a simple consequence of the fact that you have to look at observations to figure out what to do. Basic POMDP theory will tell you that when you have partial observability your policy needs to depend on history, i.e. it needs to learn.
However, because bandit problems have been studied in the AI literature, and "learning algorithms" have been proposed to solve bandit problems, this very normal fact of a policy depending on observation history is now trotted out as "learning algorithms spontaneously emerge". I don't understand why this was surprising to the original researchers; it seems like if you just thought about what the optimal policy would be given the observable information, you would make exactly this prediction. Perhaps it's because it's primarily a neuroscience paper, and they weren't very familiar with AI.
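To make the point concrete, here is an illustrative toy (mine, not from the paper): a fixed, hand-written policy with no trainable weights at all still "learns" within an episode, simply because playing a bandit well requires conditioning on the history of observed rewards. Thompson sampling is one such policy.

```python
import random

def thompson_bandit(arm_probs, n_trials=100):
    # Beta(1,1) prior per arm; the "learning" is just updating these counts
    # from the observed history -- no weights, no gradient descent.
    wins, losses = [1, 1], [1, 1]
    total = 0
    for _ in range(n_trials):
        samples = [random.betavariate(wins[a], losses[a]) for a in range(2)]
        a = samples.index(max(samples))        # act on current beliefs
        r = 1 if random.random() < arm_probs[a] else 0
        wins[a] += r
        losses[a] += 1 - r
        total += r
    return total

print(thompson_bandit([0.2, 0.8]))   # reliably homes in on the better arm
```

Any policy that does well on fresh bandits has to do something of this flavor, which is why a well-trained RNN policy exhibiting it is unsurprising.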
More broadly, I don't understand what people are talking about when they speak of the "likelihood" of mesa optimization. If you mean the chance that the weights of a neural network are going to encode some search algorithm, then this paper should be ~zero evidence in favor of it. If you mean the chance that a policy trained by RL will "learn" without gradient descent, I can't imagine a way that could fail to be true for an intelligent system trained by deep RL -- presumably a system that is intelligent is capable of learning quickly, and when we talk about deep RL leading to an intelligent AI system, presumably we are talking about the policy being intelligent (what else?), therefore the policy must "learn" as it is being executed.
Gwern notes here [AF(p) · GW(p)] that we've seen this elsewhere. This is because it's exactly what you'd expect, just that in the other cases we call conditioning on observations "adaptation" rather than "learning".
----
Meta: I'm disappointed that I had to be the one to point this out. (Though to be fair, Gwern clearly understands this point.) There's clearly been a lot of engagement with this post, and yet this seemingly obvious point hasn't been said. When I saw this post first come up, my immediate reaction was "oh I'm sure this is a typical LW example of a case where the optimal policy is interpreted as learning, I'm not even going to bother clicking on the link". Do we really have so few people who understand machine learning, that of the many, many views this post must have had, not one person could figure this out? It's really no surprise that ML researchers ignore us if this is the level of ML understanding we as a community have.
EDIT: I should give credit to Nevan [AF(p) · GW(p)] for pointing out that this paper is not much evidence in favor of the hypothesis that the neural network weights encode some search algorithm (before I wrote this comment).
Replies from: Vaniver, adam_scholl, Pongo, adamShimi, atlas, sil-ver
↑ comment by Vaniver · 2020-08-19T22:06:55.158Z · LW(p) · GW(p)
This is extremely basic RL theory.
I note that this doesn't feel like a problem to me, mostly because of reasons related to Explainers Shoot High. Aim Low! [LW · GW]. Even among ML experts, many of them haven't touched much RL, because they're focused on another field. Why expect them to know basic RL theory, or to have connected that to all the other things that they know?
More broadly, I don't understand what people are talking about when they speak of the "likelihood" of mesa optimization.
I don't think I have a fully crisp view of this, but here's my frame on it so far:
One view is that we design algorithms to do things, and those algorithms have properties that we can reason about. Another is that we design loss functions, and then search through random options for things that perform well on those loss functions. In the second view, often which options we search through doesn't matter very much, because there's something like the "optimal solution" that all things we actually find will be trying to approximate in one way or another.
Mesa-optimization is something like, "when we search through the options, will we find something that itself searches through a different set of options?". Some of those searches are probably benign--the bandit algorithm updating its internal value function in response to evidence, for example--and some of those searches are probably malign (or, at least, dangerous). In particular, we might think we have restrictions on the behavior of the base-level optimizer that turn out to not apply to any subprocesses it manages to generate, and so those properties don't actually hold overall.
But it seems to me like overall we're somewhat confused about this. For example, the way I normally use the word "search", it doesn't apply to the bandit algorithm updating its internal value function. But does Abram's distinction between mesa-search and mesa-control actually mean much? There's lots of problems that you can solve exactly with calculus, and solve approximately with well-tuned simple linear estimators, and thus saying "oh, it can't do calculus, it can only do linear estimates" won't rule out it having a really good solution; presumably a similar thing could be true with "search" vs. "control," where in fact you might be able to build a pretty good search-approximator out of elements that only do control.
So, what would it mean to talk about the "likelihood" of mesa optimization? Well, I remember a few years back when there was a lot of buzz about hierarchical RL. That is, you would have something like a policy for which 'tactic' (or 'sub-policy' or whatever you want to call it) to deploy, and then each 'tactic' is itself a policy for what action to take. In 2015, it would have been sensible to talk about the 'likelihood' of RL models in 2020 being organized that way. (Even now, we can talk about the likelihood that models in 2025 will be organized that way!) But, empirically, this seems to have mostly not helped (at least as we've tried it so far).
As we imagine deploying more complicated models, it feels like there are two broad classes of things that can happen during runtime:
- 'Task location', where they know what to do in a wide range of environments, and all they're learning is which environment they're in. The multi-armed bandit is definitely in this case; GPT-3 seems like it's mostly doing this.
- 'Task learning', where they are running some sort of online learning process that gives them 'new capabilities' as they encounter new bits of the world.
The two blur into each other; you can imagine training a model to deal with a range of situations, and yet it also performs well on situations not seen in training (that are interpolations between situations it has seen, or where the old abstractions apply correctly, and thus aren't "entirely new" situations). Just like some people argue that anything we know how to do isn't "artificial intelligence", you might get into a situation where anything we know how to do is task 'location' instead of task 'learning.'
But to the extent that our safety guarantees rely on the lack of capability in an AI system, any ability for the AI system to do learning instead of location means that it may gain capabilities we didn't expect it to have. That said, merely restricting it to 'location' may not help us very much, because if we misunderstand the abstractions that govern the system's generalizability, we may underestimate what capabilities it will or won't have.
There's clearly been a lot of engagement with this post, and yet this seemingly obvious point hasn't been said.
I think people often underestimate the degree to which, if they want to see their opinions in a public forum, they will have to be the one to post them. This is both because some points are less widely understood than you might think, and because even if the someone understands the point, that doesn't mean it connects to their interests in a way that would make them say anything about it.
Replies from: rohinmshah, rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-20T18:34:16.838Z · LW(p) · GW(p)
I note that this doesn't feel like a problem to me, mostly because of reasons related to Explainers Shoot High. Aim Low! [LW · GW]. Even among ML experts, many of them haven't touched much RL, because they're focused on another field. Why expect them to know basic RL theory, or to have connected that to all the other things that they know?
I'm perfectly happy with good explanations that don't assume background knowledge. The flaw I am pointing to has nothing to do with explanations. It is that despite this evidence being a clear consequence of basic RL theory, for some reason readers are treating it as important evidence. Clearly I should update negatively on things-AF-considers-important. At a more gears level, presumably I should update towards some combination of:
- AF readers don't know RL.
- AF readers upvote anything that's cheering for their team.
- AF readers automatically believe anything written in a post without checking it
Any of these would be a pretty damning critique of the forum. And the update should be fairly strong, given that this was (prior to my comment) the highest-upvoted post ever by AF karma.
I think people often underestimate the degree to which, if they want to see their opinions in a public forum, they will have to be the one to post them.
If you saw a post that ran an experiment where they put their hand in boiling water, and the conclusion was "boiling water is dangerous", and you saw it get to be the most upvoted post ever on LessWrong, with future posts citing it as evidence for boiling water being dangerous, would your reaction be "huh, I guess I need to state my opinion that this is obvious"?
There's a difference between "I'm surprised no one has made this connection to this other thing" and "I'm surprised that readers are updating on facts that I expected them to already know".
I don't usually expect my opinions to show up on a public forum. For example, I am continually sad but not surprised about the fact that AF focuses on mesa optimizers as separate from capability generalization without objective generalization.
Replies from: evhub, Vaniver
↑ comment by evhub · 2020-08-20T21:21:55.274Z · LW(p) · GW(p)
I guess I should explain why I upvoted this post despite agreeing with you that it's not new evidence in favor of mesa-optimization. I actually had a conversation about this post with Adam Shimi prior to you commenting on it where I explained to him that I thought that not only was none of it new but also that it wasn't evidence about the internal structure of models and therefore wasn't really evidence about mesa-optimization. Nevertheless, I chose to upvote the post and not comment my thoughts on it. Some reasons why I did that:
- I generally upvote most attempts on LW/AF to engage with the academic literature—I think that LW/AF would generally benefit from engaging with academia more and I like to do what I can to encourage that when I see it.
- I didn't feel like any comment I would have made would have anything more to say than things I've said in the past. In fact, in “Risks from Learned Optimization” itself, we talk about both a) why we chose to be agnostic about whether current systems exhibit mesa-optimization due to the difficulty of determining whether a system is actually implementing search or not (link [? · GW]) and b) examples of current work that we thought did seem to come closest to being evidence of mesa-optimization such as RL^2 (and I think RL^2 is a better example than the work linked here) (link [? · GW]).
↑ comment by Raemon · 2020-08-20T21:45:15.640Z · LW(p) · GW(p)
(Flagging that I curated the post, but was mostly relying on Ben and Habryka's judgment, in part since I didn't see much disagreement. Since this discussion I've become more agnostic about how important this post is)
One thing this comment makes me want is more nuanced reacts [LW · GW] that people have affordance to communicate how they feel about a post, in a way that's easier to aggregate.
Though I also notice that with this particular post it's a bit unclear what the react would be appropriate, since it sounds like it's not "disagree" so much as "this post seems confused" or something.
Replies from: Vaniver
↑ comment by Vaniver · 2020-08-21T21:30:36.196Z · LW(p) · GW(p)
in part since I didn't see much disagreement.
FWIW, I appreciated that your curation notice explicitly includes the desire for more commentary on the results, and that curating it seems to have been a contributor to there being more commentary.
↑ comment by ESRogs · 2020-08-22T06:35:33.642Z · LW(p) · GW(p)
I didn't feel like any comment I would have made would have anything more to say than things I've said in the past.
FWIW, I say: don't let that stop you! (Don't be afraid to repeat yourself, especially if there's evidence that the point has not been widely appreciated.)
Replies from: evhub
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T01:44:59.449Z · LW(p) · GW(p)
Fair enough, those are sensible reasons. I don't like the fact that the incentive gradient points away from making intellectual progress, but it's not an obvious choice.
↑ comment by Vaniver · 2020-08-20T22:39:28.908Z · LW(p) · GW(p)
And the update should be fairly strong, given that this was (prior to my comment) the highest-upvoted post ever by AF karma.
Given karma inflation (as users gain more karma, their votes are worth more, but this doesn't propagate backwards to earlier votes they cast, and more people become AF voters than lose AF voter status), I think the karma differences between this post and these other 4 50+ karma posts [1 [AF · GW] 2 [AF · GW] 3 [AF · GW] 4 [AF · GW]] are basically noise. So I think the actual question is "is this post really in that tier?", to which "probably not" seems like a fair answer.
[I am thinking more about other points you've made, but it seemed worth writing a short reply on that point.]
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T01:47:23.420Z · LW(p) · GW(p)
I think the actual question is "is this post really in that tier?"
Agreed. I still think I should update fairly strongly.
↑ comment by Rohin Shah (rohinmshah) · 2020-08-20T18:38:44.806Z · LW(p) · GW(p)
it feels like there are two broad classes of things that can happen during runtime:
I agree this sort of thing is something you could mean by the "likelihood of mesa optimization". As I said in the grandparent:
this paper should be ~zero evidence in favor of [mesa optimization in the task learning sense].
In practice, when people say they "updated in favor of mesa optimization", they refer to evidence that says approximately nothing about what is "happening at runtime"; therefore I infer that they cannot be talking about mesa optimization in the sense you mean.
↑ comment by Adam Scholl (adam_scholl) · 2020-08-21T08:27:21.977Z · LW(p) · GW(p)
I appreciate you writing this, Rohin. I don’t work in ML, or do safety research, and it’s certainly possible I misunderstand how this meta-RL architecture works, or that I misunderstand what’s normal.
That said, I feel confused by a number of your arguments, so I'm working on a reply. Before I post it, I'd be grateful if you could help me make sure I understand your objections, so as to avoid accidentally publishing a long post in response to a position nobody holds.
I currently understand you to be making four main claims:
- The system is just doing the totally normal thing “conditioning on observations,” rather than something it makes sense to describe as "giving rise to a separate learning algorithm."
- It is probably not the case that in this system, “learning is implemented in neural activation changes rather than neural weight changes.”
- The system does not encode a search algorithm, so it provides “~zero evidence” about e.g. the hypothesis that mesa-optimization is convergently useful, or likely to be a common feature of future systems.
- The above facts should be obvious to people familiar with ML.
Does this summary feel like it reasonably characterizes your objections?
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T02:27:06.265Z · LW(p) · GW(p)
I appreciate you writing this, Rohin. I don’t work in ML, or do safety research, and it’s certainly possible I misunderstand how this meta-RL architecture works, or that I misunderstand what’s normal.
Thanks. I know I came off pretty confrontational, sorry about that. I didn't mean to target you specifically; I really do see this as bad at the community level but fine at the individual level.
I don't think you've exactly captured what I meant, some comments below.
The system is just doing the totally normal thing “conditioning on observations,” rather than something it makes sense to describe as "giving rise to a separate learning algorithm."
I think it is reasonable to describe it both as "conditioning on observations" and as "giving rise to a separate learning algorithm".
It is probably not the case that in this system, “learning is implemented in neural activation changes rather than neural weight changes.”
On my interpretation of "learning" in this context, I would agree with that claim (i.e. I agree that learning is implemented in activation changes rather than weight changes via gradient descent). Idk what other people mean by "learning" though.
The system does not encode a search algorithm, so it provides “~zero evidence” about e.g. the hypothesis that mesa-optimization is convergently useful, or likely to be a common feature of future systems.
This sounds roughly right if you use the words as I mean them, but I suspect you aren't using the words as I mean them.
There's this thing where the mesa-optimization paper talks about a neural net that performs "search" via activation changes. When I read the paper, I took this to be an illustrative example, that was meant to stand in for "learning" more broadly, but that made more concrete and easier to reason about. (I didn't think this consciously.) However, whenever I talk to people about this paper, they have different understandings of what is meant by "search", and varying opinions on how much mesa optimization should be tied to "search". But I think the typical opinion is that whether or not mesa optimization is happening depends on what algorithm the neural net weights encode, and you can't deduce whether mesa optimization is happening just by looking at the behavior in the training environment, as it may just have "memorized" what good behavior is rather than "performing search".
If you use this meaning of "search algorithm", then you can't tell whether a good policy is a "search algorithm" or not just by looking at behavior. Since this paper only talked about behavior of a good policy, it can't be evidence in favor of "mesa-optimization-via-search-algorithm".
The above facts should be obvious to people familiar with ML.
Oh definitely not those, most people in ML have never heard of "mesa optimization".
----
I think my response to Vaniver better illustrates my concerns, but let me take a stab at making a simple list of claims.
1. The optimal policy in the bandit environment considered in the paper requires keeping track of the rewards you have gotten in the past, and basing your future decisions on this information.
2. You shouldn't be surprised when applying an RL algorithm to a problem leads to a near-optimal policy for that problem. (This has many caveats, but they aren't particularly relevant.)
3. Therefore, you shouldn't be surprised by the results in this paper.
4. Therefore, you shouldn't be updating based on this paper.
5. Claims 1 and 2 require only basic knowledge about RL.
Replies from: adam_scholl
↑ comment by Adam Scholl (adam_scholl) · 2020-08-22T23:23:36.728Z · LW(p) · GW(p)
I feel confused about why, on this model, the researchers were surprised that this occurred, and seem to think it was a novel finding that it will inevitably occur given the three conditions described. Above, you mentioned the hypothesis that maybe they just weren't very familiar with AI. But looking at the author list, and their publications (e.g. 1, 2, 3, 4, 5, 6, 7, 8), this seems implausible to me. Most of the co-authors are neuroscientists by training, but a few have CS degrees, and all but one have co-authored previous ML papers. It's hard for me to imagine their surprise was due to them lacking basic knowledge about RL?
Also, this OpenAI paper (whose authors seem quite familiar with ML)—which the summary of Wang et al. on DeepMind's website describes as "closely related work," and which appears to me to involve a very similar setup— describes their result similarly:
We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform learning in its own hidden activations. The learned agent thus also acts as the learning algorithm, and can adapt to the task at hand when deployed.
As I understand it, the OpenAI authors also think they can gather evidence about the structure of the algorithm simply by looking at its behavior. Given a similar series of experiments (mostly bandit tasks, but also a maze solver), they conclude:
the dynamics of the recurrent network come to implement a learning algorithm entirely separate from the one used to train the network weights... the procedure the recurrent network implements is itself a full-fledged reinforcement learning algorithm, which negotiates the exploration-exploitation tradeoff and improves the agent’s policy based on reward outcomes... this learned RL procedure can differ starkly from the algorithm used to train the network’s weights.
They then run an experiment designed specifically to distinguish whether meta-RL was giving rise to a model-free system, or “a model-based system which learns an internal model of the environment and evaluates the value of actions at the time of decision-making through look-ahead planning,” and suggest the evidence implies the latter. This sounds like a description of search to me—do you think I'm confused?
I get the impression from your comments that you think it's naive to describe this result as "learning algorithms spontaneously emerging." You describe the lack of LW/AF pushback against that description as "a community-wide failure," and mention updating as a result toward thinking AF members “automatically believe anything written in a post without checking it.”
But my impression is that OpenAI describes their similar result in a similar way. Do you think my impression is wrong? Or that e.g. their description is also misleading?
--
I've been feeling very confused lately about how people talk about "search," and have started joking that I'm a search panpsychist. Lots of interesting phenomena look like piles of thermostats when viewed from the wrong angle, and I worry the conventional lens is deceptively narrow.
That said, when I condition on (what I understand to be) the conventional conception, it's difficult for me to imagine how e.g. the maze-solver described in the OpenAI paper can quickly and reliably locate maze exits, without doing something reasonably describable as searching for them.
And it seems to me that Wang et al. should be taken as evidence that "learning algorithms producing other search-performing learning algorithms" is convergently useful/likely to be a common feature of future systems, even if you don't think that's what happened in their paper, as long as you assign decent credence to their underlying model that this is what's going on in PFC, and that search occurs in PFC.
If the primary difference between the DeepMind and OpenAI meta-RL architectures and the PFC/DA architecture is scale, I think there's good reason to suspect something much like mesa-optimization will emerge in future meta-RL systems, even if it hasn't yet. That is, I interpret this result as evidence for the hypothesis that highly competent general-ish learners might tend to exhibit this feature, since (among other reasons) it increased my credence that it is already exhibited by the only existing member of that reference class.
Evan mentions [LW(p) · GW(p)] agreeing that this result isn't new evidence in favor of mesa-optimization. But he also mentions that Risks from Learned Optimization references [? · GW] these two papers, and describes them as "the closest to producing mesa-optimizers of any existing machine learning research." I feel confused about how to reconcile these two claims. I didn't realize these papers were mentioned in Risks from Learned Optimization, but if I had, I think I would have been even more inclined to post this/try to ensure people knew about the results, since my (perhaps naive, perhaps not understanding ways this is disanalogous) prior is that the closest existing example to this problem might provide evidence about its nature or likelihood.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-23T04:59:29.713Z · LW(p) · GW(p)
I get the impression from your comments that you think it's naive to describe this result as "learning algorithms spontaneously emerge."
I think that's a fine characterization (and I said so in the grandparent comment? Looking back, I said I agreed with the claim that learning is happening via neural net activations, which I guess doesn't necessarily imply that I think it's a fine characterization).
You describe the lack of LW/AF pushback against that description as "a community-wide failure,"
I think my original comment didn't do a great job of phrasing my objection. My actual critique is that the community as a whole seems to be updating strongly on data-that-has-high-probability-if-you-know-basic-RL.
updating as a result toward thinking AF members “automatically believe anything written in a post without checking it.”
That was one of three possible explanations; I don't have a strong view on which explanation is the primary cause (if any of them are). It's more like "I observe clearly-to-me irrational behavior, this seems bad, even if I don't know what's causing it". If I had to guess, I'd guess that the explanation is a combination of readers not bothering to check details and those who are checking details not knowing enough to point out that this is expected.
I feel confused about why, given your model of the situation, the researchers were surprised that this phenomenon occurred, and seem to think it was a novel finding that it will inevitably occur given the three conditions described.
Indeed, I am also confused by this, as I noted in the original comment:
I don't understand why this was surprising to the original researchers
I have a couple of hypotheses, none of which seem particularly likely given that the authors are familiar with AI, so I just won't speculate. I agree this is evidence against my claim that this would be obvious to RL researchers.
And this OpenAI paper [...] describes their result in similar terms:
Again, I don't object to the description of this as learning a learning algorithm. I object to updating strongly on this. Note that the paper does not claim their results are surprising -- it is written in a style of "we figured out how to make this approach work". (The DeepMind paper does claim that the results are novel / surprising, but it is targeted at a neuroscience audience, to whom the results may indeed be surprising.)
I've been feeling very confused lately about how people talk about "search," and have started joking that I'm a search panpsychist.
On the search panpsychist view, my position is that if you use deep RL to train an AGI policy, it is definitionally a mesa optimizer. (Like, anything that is "generally intelligent" has the ability to learn quickly, which on the search panpsychist view means that it is a mesa optimizer.) So in this world, "likelihood of mesa optimization via deep RL" is equivalent to "likelihood of AGI via deep RL", and "likelihood that more general systems trained by deep RL will be mesa optimizers" is ~1 and you ~can't update on it.
↑ comment by Pongo · 2020-08-20T23:18:56.833Z · LW(p) · GW(p)
I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect. I think it's good for people to share when members of DeepMind / OpenAI say something that sounds a lot like "we found evidence of mesaoptimization".
I also think you're right that we should be doing a lot better on pushing back against such claims. I hope LW/AF gets better at being as skeptical of AI researchers' assertions that support risk as they are of those that undermine risk. But I also hope that when those researchers claim something surprising and (to us) plausibly risky is going on, we continue to hear about it.
Replies from: Vaniver, rohinmshah
↑ comment by Vaniver · 2020-08-21T21:28:51.863Z · LW(p) · GW(p)
I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect.
Note that there are desirable chilling effects too. I think it's broadly important to push back on inaccurate claims, or ones that have the wrong level of confidence. (Like, my comment elsewhere [LW(p) · GW(p)] is intended to have a chilling effect.)
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T01:56:13.510Z · LW(p) · GW(p)
I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect
I agree it might have this effect, and that it would be bad if that were to happen (all else equal). But I'd much rather have researchers with good beliefs given the evidence they have rather than researchers with lots of evidence but bad beliefs given that evidence.
(As with everything, this is a tradeoff. I haven't specified exactly how you should weight the tradeoff, because that's hard to do.)
↑ comment by adamShimi · 2020-08-20T21:02:47.844Z · LW(p) · GW(p)
What would be a good resource to level up on RL theory? Is the Sutton and Barto good enough, or do you have something else in mind?
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T01:41:19.601Z · LW(p) · GW(p)
Hmm, I don't know unfortunately. I learned basic MDP theory from an undergrad course, and the rest through osmosis by being an AI PhD student at Berkeley. I haven't read Sutton and Barto, but I would assume that would be good enough (you'd probably know more than me about tabular RL).
Replies from: adamShimi
↑ comment by adamShimi · 2020-08-22T11:30:02.913Z · LW(p) · GW(p)
If you don't have a resource, then do you have a list of pointers to what people should learn? For example the policy gradient theorem and the REINFORCE trick. It will probably not be exhaustive, I'm just trying to make your call to learn more RL theory more actionable to people here.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T17:26:36.292Z · LW(p) · GW(p)
I don't think the takeaway here should be "read these books / watch these lectures / understand these concepts and you'll be fine". My claim is more like, if you want to interact with some community, you should have whatever background knowledge that community expects. Even if I just made a list of concepts, I'd expect that list to be out of date reasonably quickly (a few years), for a field like deep RL.
I think this is pretty important if you want to do any of:
- Convince researchers in the field that their work would be risky if scaled up
- Learn from evidence presented in papers from the field (this post)
- Forecast questions relevant to the field, for questions that don't have obvious base rates (e.g. AGI timelines)
If you don't have the background knowledge, you can rely on someone else who has such background knowledge.
Notably, this is not important if you want to "build basic theory" or something like that, which doesn't require interaction with the AI community. (Though it might be important for guiding your search for basic theory, I'm not sure.)
Also, I forgot to mention this before: normally for deep RL I'd recommend Spinning Up in Deep RL, though in this case that's too focused on deep RL and not enough on RL basics.
----
EDIT: An analogy: if someone asked a handyman for a list of resources on how to fix common house problems, it's not clear that the handyman would have remembered to give the advice "turn clockwise to tighten, and counterclockwise to loosen", because it's so ingrained. Similarly, I think if I had tried to give a list prior to seeing this post, I would not have thought to give the advice "think about what the optimal policy is, and then expect your RL algorithms to find similar policies".
Replies from: DanielFilan, adamShimi, Pongo
↑ comment by DanielFilan · 2020-08-23T00:09:46.359Z · LW(p) · GW(p)
it's not clear that the handyman would have remembered to give the advice "turn clockwise to loosen, and counterclockwise to tighten"
It's the other way around, right?
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-23T04:43:37.066Z · LW(p) · GW(p)
Lol yes fixed
↑ comment by adamShimi · 2020-08-22T20:46:27.572Z · LW(p) · GW(p)
The handyman might not give basic advice, but if he didn't have any advice, I would assume that he doesn't want to help.
I'm really confused by your answers. You have a long comment criticizing the lack of basic RL knowledge of the AF community, and when I ask you for pointers, you say that you don't want to give any, and that people should just learn the background knowledge. So should every member of the AF stop what they're doing right now to spend 5 years doing a PhD in RL before being able to post here?
If the goal of your comment was to push people to learn things you think they should know, pointing towards some stuff (not an exhaustive list) is the bare minimum for that to be effective. If you don't, I can't see many people investing the time to learn enough RL so that by osmosis they can understand a point you're making.
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T22:31:47.587Z · LW(p) · GW(p)
If the goal of your comment was to push people to learn things you think they should know, pointing towards some stuff (not an exhaustive list) is the bare minimum for that to be effective.
Here's an obvious next step for people: google for resources on RL, ask others for recommendations on RL, try out some of the resources and see which one works best for you, and then choose one resource and dive deep into it, potentially repeat until you understand new RL papers by reading. I think people would be better off executing that algorithm than looking at specific resources that I might name.
I wouldn't be surprised if other people have better algorithms for self-learning new fields -- I'm pretty atypical and shouldn't be expected to know what works for people who aren't me. E.g. TurnTrout has done a lot of self-learning from textbooks and probably has better advice.
I would hope most AF readers are capable of coming up with and executing something like this algorithm. If not, there are bigger problems than the lack of RL knowledge.
----
I also don't buy that pointing out a problem is only effective if you have a concrete solution in mind. MIRI argues that it is a problem that we don't know how to align powerful AI systems, but doesn't seem to have any concrete solutions. Do you think this disqualifies MIRI from talking about AI risk and asking people to work on solving it?
Replies from: TurnTrout, adamShimi
↑ comment by TurnTrout · 2020-08-23T22:01:43.285Z · LW(p) · GW(p)
E.g. TurnTrout has done a lot of self-learning from textbooks and probably has better advice [for learning RL]
I have been summoned! I've read a few RL textbooks... unfortunately, they're either a) very boring, b) very old, or c) very superficial. I've read:
- Reinforcement Learning by Sutton & Barto (my book review [? · GW])
- Nice book for learning the basics. Best textbook I've read for RL, but that's not saying much.
- Superficial, not comprehensive, somewhat outdated circa 2018; a good chunk was focused on older techniques I never/rarely read about again, like SARSA and exponential feature decay for credit assignment. The closest I remember them getting to DRL was when they discussed the challenges faced by function approximators.
- AI: A Modern Approach 3e by Russell & Norvig (my book review [? · GW])
- Engaging and clear, but most of the book wasn't about RL. Outdated, but 4e is out now and maybe it's better.
- Markov Decision Processes by Puterman
- Thorough, theoretical, very old, and very boring. Formal and dry. It was written decades ago, so obviously no mention of Deep RL.
- Neuro-Dynamic Programming by Bertsekas & Tsitsiklis
- When I was a wee second-year grad student, I was independently recommended this book by several senior researchers. Apparently it's a classic. It's very dry and was written in 1996. Pass.
OpenAI's several-page web tutorial Spinning Up with Deep RL is somehow the most useful beginning RL material I've seen, outside of actually taking a class. Kinda sad.
So when I ask my brain things like "how do I know about bandits?", the result isn't "because I read it in {textbook #23}", but rather "because I worked on different tree search variants my first summer of grad school" or "because I took a class". I think most of my RL knowledge has come from:
- My own theoretical RL research
- the fastest way for me to figure out a chunk of relevant MDP theory is often just to derive it myself
- Watercooler chats with other grad students
Sorry to say that I don't have clear pointers to good material.
Replies from: adamShimi
↑ comment by adamShimi · 2020-08-24T12:29:42.280Z · LW(p) · GW(p)
Thanks for the in-depth answer!
I do share your opinion on the Sutton and Barto, which is the only book I read from your list (except a bit of the Russell and Norvig, but not the RL chapter). Notably, I took a lot of time to study action-value methods, only to realise later that a lot of recent work focuses instead on policy-gradient methods (even if actor-critics do use action values).
From your answer and Rohin's, I gather that we lack a good resource on deep RL, at least of the kind useful for AI safety researchers. It makes me even more curious about the kind of knowledge that would be covered in such a resource.
↑ comment by adamShimi · 2020-08-23T20:49:51.363Z · LW(p) · GW(p)
Here's an obvious next step for people: google for resources on RL, ask others for recommendations on RL, try out some of the resources and see which one works best for you, and then choose one resource and dive deep into it, potentially repeat until you understand new RL papers by reading.
Agreed. Which is exactly why I asked you for recommendations. I don't think you're the only one someone interested in RL should ask for recommendations (I already asked other people, and knew some resources before all this), but as one of the (apparently few) members of the AF with the relevant skills in RL, it seemed that you might offer good advice on the topic.
About self-learning, I'm pretty sure people around here are good on this count. But knowing how to self-learn doesn't mean knowing what to self-learn. Hence the request for pointers.
I also don't buy that pointing out a problem is only effective if you have a concrete solution in mind. MIRI argues that it is a problem that we don't know how to align powerful AI systems, but doesn't seem to have any concrete solutions. Do you think this disqualifies MIRI from talking about AI risk and asking people to work on solving it?
No, I don't think you should only point to a problem with a concrete solution in hand. But solving a research problem (which is MIRI's situation) is not the same as learning a well-established field of computer science (which is what this discussion is about). In the latter case, you ask people to learn things that already exist, not to invent them. And I do believe that showing some concrete things that might be relevant (as I repeated in each comment, not an exhaustive list) would make the injunction more effective.
That being said, it's perfectly okay if you don't want to propose anything. I'm just confused because it seems low effort for you, net positive, and the kind of "ask people for recommendation" that you preach in the previous comment. Maybe we disagree on one of these points?
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-24T16:53:33.386Z · LW(p) · GW(p)
Which is exactly why I asked you for recommendations.
Yes, I never said you shouldn't ask me for recommendations. I'm saying that I don't have any good recommendations to give, and you should probably ask other people for recommendations.
showing some concrete things that might be relevant (as I repeated in each comment, not an exhaustive list) would make the injunction more effective.
In practice I find that anything I say tends to lose its nuance as it spreads, so I've moved towards saying fewer things that require nuance. If I said "X might be a good resource to learn from but I don't really know", I would only be a little surprised to hear a complaint in the future of the form "I deeply read X for two months because Rohin recommended it, but I still can't understand this deep RL paper".
If I actually were confident in some resource, I agree it would be more effective to mention it.
I'm just confused because it seems low effort for you, net positive, and the kind of "ask people for recommendation" that you preach in the previous comment.
I'm not convinced the low effort version is net positive, for the reasons mentioned above. Note that I've already very weakly endorsed your mention of Sutton and Barto, and very weakly mentioned Spinning Up in Deep RL. (EDIT: TurnTrout doesn't endorse [LW(p) · GW(p)] Sutton and Barto much, so now neither do I.)
Replies from: adamShimi
↑ comment by adamShimi · 2020-08-24T18:12:37.637Z · LW(p) · GW(p)
In practice I find that anything I say tends to lose its nuance as it spreads, so I've moved towards saying fewer things that require nuance. If I said "X might be a good resource to learn from but I don't really know", I would only be a little surprised to hear a complaint in the future of the form "I deeply read X for two months because Rohin recommended it, but I still can't understand this deep RL paper".
Hum, I did not think about that. It makes more sense to me now why you don't want to point people towards specific things. I still believe the result will be net positive if the right caveats are in place (then it's the other's fault for misinterpreting your comment), but that's indeed assuming that the resource/concept is good/important and you're confident in that.
↑ comment by Pongo · 2020-08-22T21:40:29.463Z · LW(p) · GW(p)
This is an aside, but I remain really confused by the claim that RL algorithms will tend to find policies close to the optimal one. Is inductive bias not a thing for RL?
Replies from: rohinmshah
↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T23:02:35.555Z · LW(p) · GW(p)
It's a thing, and is one of the caveats I mentioned.
For tabular RL, algorithms can find optimal policies in the limit of infinite exploration, but without infinite exploration how close you get to the optimal policy will depend on the environment (including reward function).
For deep RL, even with infinite exploration you don't get the guarantee, since the optimization problem is nonconvex, and the optimal policy may not be expressible by your neural net. So it again depends heavily on the environment.
I think the proper version of the claim is more like "if a paper reports results with RL, the policy they find is probably good, as otherwise they wouldn't have published it". In practice RL algorithms often fail and need to be heavily tuned to do well, and researchers have to pull out lots of tricks to get them to work.
But regardless, I claim the first-order approximation to what an RL algorithm will do is "the optimal policy". You can then figure out reasons for deviation, e.g. "this reward is super sparse, so the algorithm won't get learning signal, so it'll have effectively random behavior".
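As a toy illustration of that first-order heuristic (a sketch of mine, not anything from the papers discussed here): on a simple task with plenty of exploration, a basic RL algorithm does end up preferring the optimal action, so "it will find roughly the optimal policy" is a sensible default prediction before enumerating caveats.

```python
import random

def epsilon_greedy_bandit(arm_probs, n_steps=10000, eps=0.1):
    q = [0.0] * len(arm_probs)          # sample-average action-value estimates
    n = [0] * len(arm_probs)
    for _ in range(n_steps):
        a = random.randrange(len(arm_probs)) if random.random() < eps else q.index(max(q))
        r = 1 if random.random() < arm_probs[a] else 0
        n[a] += 1
        q[a] += (r - q[a]) / n[a]       # incremental mean update
    return q.index(max(q))              # the arm the learned values prefer

print(epsilon_greedy_bandit([0.3, 0.6, 0.5]))   # almost always prints 1, the best arm
```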
If someone expected RL algorithms to fail on this bandit task, and then updated because they succeeded, I'd find that reasonable (though I'd find it pretty surprising that they'd expect a failure on bandits -- it's a relatively simple task where you can get tons of data).
↑ comment by atlas · 2020-11-23T16:55:00.411Z · LW(p) · GW(p)
It might well be that 1) people who already know RL shouldn't be much surprised by this result and 2) people who don't know much RL are justified in updating on this info (towards mesa-optimizers arising more easily).
This would be the case if RL intuition correctly implies that proto-mesa-optimizers (like the one in the paper) arise naturally, and that intuition wasn't widely shared outside of RL. Not sure if this is actually the way things are, but it seems plausible to me.
↑ comment by Rohin Shah (rohinmshah) · 2020-11-24T17:21:16.256Z · LW(p) · GW(p)
It might well be that 1) people who already know RL shouldn't be much surprised by this result and 2) people who don't know much RL are justified in updating on this info (towards mesa-optimizers arising more easily).
I agree. It seems pretty bad if the participants of a forum about AI alignment don't know RL.
↑ comment by Rafael Harth (sil-ver) · 2020-08-22T17:30:03.675Z · LW(p) · GW(p)
(EDIT: I'm already seeing downvotes of the post, it was originally at 58 AF karma. This wasn't my intention: I think this is a failure of the community as a whole, not of the author.)
I'm very confused by this edit.
My model of the community's failure is roughly
- This post primarily argues that a phenomenon is evidence for [learned models being likely to encode search algorithms], but in fact it is not.
- We would like the community to be such that this is pointed out quickly, the author edits the post accordingly, and the post does not get super high reception
- Instead, the post has high karma, is curated, this wasn't pointed out until you said it, and the post has not been edited.
If part of the failure is that the post is well-received, why wouldn't you want people to downvote it now that you pointed it out?
I also think the average LW user shouldn't be expected to understand enough RL to see this, so the system should detect this kind of failure for them. (Which it has done now that you've written your comment.) For those people, the proper reaction seems to be to remove their upvote and perhaps downvote.
Separately, I think you can explain part of the failure by laziness rather than a lack of understanding of RL. You could read/skim this post and not quite understand what the setting actually is (even though it's mentioned at the end of the second chapter). Just like I don't think the average LW user should be expected to understand enough ML to realize that the main point is misleading, I also don't think they should be expected to read the post carefully enough before upvoting it, especially not if it's curated or high karma (because that should be an assurance of quality, and at that point it seems fine to upvote purely to signal-boost the point).
I realize your critique was of the AF, not of LW, so I'm not sure how much I'm really disagreeing with you here. But since Evan Hubinger understood the point and upvoted the post anyway, it's unclear how much you can conclude. (EDIT after Rohin's answer: actually, I agree this is most likely not a typical case.)
Replies from: rohinmshah, adam_scholl↑ comment by Rohin Shah (rohinmshah) · 2020-08-22T17:56:59.197Z · LW(p) · GW(p)
If part of the failure is that the post is well-received, why wouldn't you want people to downvote it now that you pointed it out?
It feels like downvotes-as-I-see-them-in-practice are some combination of "you should feel bad about having written this" and "make worse content less visible", and I didn't want the first effect. Idk if that's the right call though, and idk if that's how others (especially the author) interpret it.
I also neglected that people can just remove their upvotes without downvoting, which feels less bad (though from the author's perspective it's the same, so I think I'm just being inconsistent here).
I also think the average LW user shouldn't be expected to understand enough RL to see this
Agreed, which is why I focused on the AF karma rather than the LW karma. (I agree with the rest of that paragraph.)
Separately, I think you can explain part of the failure by laziness rather than a lack of understanding of RL. You could read/skim this post and not quite understand what the setting actually is (even though it's mentioned at the end of the second chapter). Just like I don't think the average LW user should be expected to understand enough ML to realize that the main point is misleading, I also don't think they should be expected to read the post carefully enough before upvoting it, especially not if it's curated or high karma (because that should be an assurance of quality, and at that point it seems fine to upvote purely to signal-boost the point).
Agreed this is likely but still seems pretty bad -- this isn't the first time people would have updated incorrectly had I not made a correction, though this is the most upvoted case. (I perhaps find it more annoying than it really "should" be because of how much shit LW gives academia and peer review.)
I realize your critique was of the AF, not of LW, so I'm not sure how much I'm really disagreeing with you here.
Yeah, I think this would still be a critique of LW, but much less strongly.
But since Evan Hubinger understood the point and upvoted the post anyway, it's unclear how much you can conclude.
I give it 98% chance that the majority of people who upvoted did not understand the point.
Replies from: Pongo↑ comment by Pongo · 2020-08-22T18:06:11.769Z · LW(p) · GW(p)
Agreed, which is why I focused on the AF karma rather than the LW karma
I think it's worth pointing out that I originally saw this just posted to LW, and it must have been manually promoted to AF by a mod. I partly want to point this out because possibly one of the main errors is people updating too much on promotion as a signal of quality.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-08-22T18:20:24.588Z · LW(p) · GW(p)
It's trivially correct to update downward on the de-facto importance of promotion (by however much), but this seems like a bad thing.
Naively, I would like people to make sure they understand the point at
- the curation step
- the promotion-to-AF step
- maybe at the upvote step if you're a professional AI safety researcher
And if the conclusion is that the post is meaningful despite possibly being misinterpreted, I would naively want the person in charge to PM the author and ask to put in a clarification before the post is curated/promoted.
I say 'naively' because I don't know anything about how hard it would be to achieve this and I could be genuinely wrong about this being a reasonable thing to want.
↑ comment by Adam Scholl (adam_scholl) · 2020-08-23T00:08:42.897Z · LW(p) · GW(p)
This post primarily argues that a phenomenon is evidence for [learned models being likely to encode search algorithms]
I do mention interpreting the described results as tentative evidence for mesa-optimization, and this interpretation was why I wrote the post; my impression is still that this interpretation was basically correct. But most of the post is just quotes or paraphrased claims made by DeepMind researchers, rather than my own claims, since I didn't feel sure enough to make the claims myself.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2020-08-12T23:53:57.407Z · LW(p) · GW(p)
What is all of humanity if not a walking catastrophic inner alignment failure? We were optimized for one thing: inclusive genetic fitness. And only a tiny fraction of humanity could correctly define what that is!
Replies from: adam_scholl, None↑ comment by Adam Scholl (adam_scholl) · 2020-08-13T00:50:33.457Z · LW(p) · GW(p)
It could both be the case that there exists catastrophic inner alignment failure between humans and evolution, and also that humans don't regularly experience catastrophic inner alignment failures internally.
In practice I do suspect humans regularly experience internal inner alignment failures, but given that suspicion I feel surprised by how functional humans do manage to be. In other words, I notice expecting that regular inner alignment failures would cause far more mayhem than I observe, which makes me wonder whether brains are implementing some sort of alignment-relevant tech.
Replies from: abramdemski, None↑ comment by abramdemski · 2020-08-17T20:50:09.958Z · LW(p) · GW(p)
In practice I do suspect humans regularly experience internal (within-brain) inner alignment failures, but given that suspicion I feel surprised by how functional humans manage to be. That is, I notice expecting that regular inner alignment failures would cause far more mayhem than I observe, which makes me wonder whether brains are implementing some sort of alignment-relevant tech.
I don't know why you expect an inner alignment failure to look dysfunctional. Instrumental convergence suggests that it would look functional. What the world looks like if there are inner alignment failures inside the human brain is (in part) that humans pursue a greater diversity of terminal goals than can be accounted for by genetics.
↑ comment by [deleted] · 2020-08-13T18:33:20.319Z · LW(p) · GW(p)
What would inner alignment failures even look like? Overdosing on meth sure makes the dopamine system happy. Perhaps human values reside in the prefrontal cortex, and all of humanity is a catastrophic alignment failure of the dopamine system (except a small minority of drug addicts), on top of being a catastrophic alignment failure of natural selection.
↑ comment by [deleted] · 2020-08-13T18:19:04.639Z · LW(p) · GW(p)
Isn't evolution a better analogy for deep learning anyway? All natural selection does is gradient descent (hill climbing technically), with no capacity for lookahead. And we've known this one for 150 years!
Replies from: Vaniver↑ comment by Vaniver · 2020-08-13T21:36:26.574Z · LW(p) · GW(p)
All natural selection does is gradient descent (hill climbing technically), with no capacity for lookahead.
I think if you're interested in the analysis and classification of optimization techniques, there's enough differences between what natural selection is doing and what deep learning is doing that it isn't a very natural analogy. (Like, one is a population-based method and the other isn't, the update rules are different, etc.)
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-08-12T14:14:20.749Z · LW(p) · GW(p)
Thanks for this. It seems important. Learning still happening after weights are frozen? That's crazy. I think it's a big deal because it is evidence for mesa-optimization being likely and hard to avoid.
It also seems like evidence for the Scaling Hypothesis. One major way the scaling hypothesis could be false is if there are further insights needed to get transformative AI, e.g. a new algorithm or architecture. A simple neural network spontaneously learning to do its own, more efficient form of learning? This seems like a data point in favor of the idea that our current architectures and algorithms are fine, and will eventually (if they are big enough) grope their way towards more efficient internal structures on their own.
EDIT: Now I'm less sure of all the above, thanks to Rohin's comment below. I guess this is a case of "Evidence to the people who didn't already understand the theory well enough to make the prediction," which maybe included me? Though I think I would have made the prediction too had I been asked...
Replies from: gwern, romeostevensit↑ comment by gwern · 2020-08-14T20:11:57.418Z · LW(p) · GW(p)
Learning still happening after weights are frozen? That’s crazy. I think it’s a big deal because it is evidence for mesa-optimization being likely and hard to avoid.
Sure. We see that elsewhere too, like Dactyl. And of course, GPT-3.
Replies from: daniel-kokotajlo↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-08-15T10:15:51.091Z · LW(p) · GW(p)
Huh, thanks.
↑ comment by romeostevensit · 2020-08-14T08:12:01.704Z · LW(p) · GW(p)
Two separate size parameters. The size of the search space, and the size the traversal algorithm needs to be to span the same gaps brains did.
comment by Kaj_Sotala · 2020-08-12T14:36:33.404Z · LW(p) · GW(p)
That said, if mesa-optimization is a standard feature[4] [LW · GW] of brain architecture, it seems notable that humans don’t regularly experience catastrophic inner alignment failures.
What would it look like if they did?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-12T19:21:31.597Z · LW(p) · GW(p)
The thing I meant by "catastrophic" is just "leading to death of the organism." I suspect mesa-optimization is common in humans, but I don't feel confident about this, nor that this is a joint-carvey ontology. I can imagine it being the case that many examples of e.g. addiction, goodharting, OCD, and even just "everyday personal misalignment"-type problems of the sort IFS/IDC/multi-agent models of mind sometimes help with, are caused by phenomena which might reasonably be described as inner alignment failures.
But I think these things don't kill people very often? People do sometimes choose to die because of beliefs. And anorexia sometimes kills people, which currently feels to me like the most straightforward candidate example I've considered.
I just feel like things could be a lot worse. For example, it could have been the case that mind-architectures that give rise to mesa-optimization at all simply aren't viable at high levels of optimization power—that it always kills them. Or that it basically always leads to the organism optimizing for a set of goals which is unrecognizably different from the base objective. I don't think you see these things, so I'm curious how evolution prevented them.
Replies from: daniel-kokotajlo, orthonormal, Douglas_Knight, Raemon, Kaj_Sotala↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-08-13T12:59:01.651Z · LW(p) · GW(p)
Governments and corporations experience inner alignment failures all the time, but because of convergent instrumental goals, they are rarely catastrophic. For example, Russia underwent a revolution and a civil war on the inside, followed by purges and coups etc., but from the perspective of other nations, it was more or less still the same sort of thing: A nation, trying to expand its international influence, resist incursions, and conquer more territory. Even its alliances were based as much on expediency as on shared ideology.
Perhaps something similar happens with humans.
↑ comment by orthonormal · 2020-08-14T02:59:02.142Z · LW(p) · GW(p)
The claim that came to my mind is that the conscious mind is the mesa-optimizer here, the original outer optimizer being a riderless elephant.
↑ comment by Douglas_Knight · 2020-08-17T01:49:25.148Z · LW(p) · GW(p)
Why do you single out anorexia? Do you mean people starving themselves to death? My understanding is that is very rare. Anorexics have a high death rate and some of that is long-term damage from starvation. They also (abruptly) kill themselves at a high rate, comparable to schizophrenics, but why single that out? There's a theory that they have practice with internal conflict, which does seem relevant, but I think that's just a theory, not clear cut at all.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-17T03:38:38.729Z · LW(p) · GW(p)
Yeah, I wrote that confusingly, sorry; edited to clarify. I just meant that of the limited set of candidate examples I'd considered, my model of anorexia, which of course may well be wrong, feels most straightforwardly like an example of something capable of causing catastrophic within-brain inner alignment failure. That is, it currently feels natural to me to model anorexia as being caused by an optimizer for thinness arising in brains, which can sometimes gain sufficient power that people begin to optimize for that goal at the expense of essentially all other goals. But I don't feel confident in this model.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2020-08-17T14:59:11.256Z · LW(p) · GW(p)
I'm objecting to the claim that it fits your criterion of "catastrophic." Maybe it's such a clear example, with such a clear goal, that we should sacrifice the criterion of catastrophic, but you keep using that word.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-17T19:49:18.620Z · LW(p) · GW(p)
Ah, I see. The high death rate was what made it seem often-catastrophic to me. Is your objection that the high death rate doesn't reflect something that might reasonably be described as "optimizing for one goal at the expense of all others"? E.g., because many of the deaths are suicides, in which case persistence may have been net negative from the perspective of the rest of their goals too? Or because deaths often result from people calibratedly taking risky but non-insane actions, who just happened to get unlucky with heart muscle integrity or whatever?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2020-08-18T18:26:19.968Z · LW(p) · GW(p)
I asked you if you were talking about starving to death and you didn't answer. Does your abstract claim correspond to a concrete claim, or do you just observe that anorexics seem to have a goal and assume that everything must flow from that and the details don't matter? That's a perfectly reasonable claim, but it's a weak claim so I'd like to know if that's what you mean.
Abrupt suicides by anorexics are just as mysterious as suicides by schizophrenics and don't seem to flow from the apparent goal of thinness. Suicide is a good example of something, but I don't think it's useful to attach it to anorexia rather than schizophrenia or bipolar.
Long-term health damage would be a reasonable claim, which I tried to concede in my original comment. I'm not sure I agree with it. I could pose a lot of complaints about it, but I wouldn't. If it's clear that it is the claim, then I think it's clearly a weak claim and that's OK. (As for the objection you propose, I would rather say: lots of people take badly calibrated risks without being labeled insane.)
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-18T21:20:02.622Z · LW(p) · GW(p)
The scenario I had in mind was one where death occurs as a result of damage caused by low food consumption, rather than by suicide.
↑ comment by Raemon · 2020-08-16T07:39:45.352Z · LW(p) · GW(p)
The thing I meant by "catastrophic" is "leading to the death of the organism."
This doesn't seem like what it should mean here. I'd think catastrophic in the context of "how humans (programmed by evolution) might fail by evolution's standards" should mean "start pursuing strategies that don't result in many children or longterm population success." (where premature death of the organism might be one way to cause that, but not the only way)
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-16T08:00:23.523Z · LW(p) · GW(p)
I agree, in the case of evolution/humans. I meant to highlight what seemed to me like a relative lack of catastrophic within-mind inner alignment failures, e.g. due to conflicts between PFC and DA. Death of the organism feels to me like one reasonable way to operationalize "catastrophic" in these cases, but I can imagine other reasonable ways.
Replies from: abramdemski↑ comment by abramdemski · 2020-08-18T14:54:14.477Z · LW(p) · GW(p)
I think it makes more sense to operationalize "catastrophic" here as "leading to systematically low DA reward", perhaps also including "manipulating the DA system in a clearly misaligned way".
One way catastrophic misalignment in this sense is difficult for humans is that the PFC cannot divorce itself from the DA; I'd expect that a failure mode leading to systematically low DA rewards would usually be corrected gradually, as the DA punishes those patterns.
However, this is not really clear. The misaligned PFC might e.g. put itself in a local maximum, where it creates DA punishment for giving in to temptation. (For example, an ascetic getting social reinforcement from a group of ascetics might be in such a situation.)
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-18T16:11:41.608Z · LW(p) · GW(p)
I think it makes more sense to operationalize "catastrophic" here as "leading to systematically low DA reward"
Thanks—I do think this operationalization makes more sense than the one I proposed.
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-18T17:42:47.263Z · LW(p) · GW(p)
One way catastrophic misalignment in this sense is difficult for humans is that the PFC cannot divorce itself from the DA; I'd expect that a failure mode leading to systematically low DA rewards would usually be corrected
I'm not sure divorce like this is rare. For example, anorexia sometimes causes people to find food anti-rewarding (repulsive/inedible, even when they're dying and don't want to be), and I can imagine that being because the PFC actually somehow alters DA's reward function.
But I do share the hunch that something like a "divorce resistance" trick occurs and is helpful. I took Kaj and Steve to be gesturing at something similar elsewhere in the thread. But I notice feeling confused about how exactly this trick might work. Does it scale...?
I have the intuition that it doesn't—that as the systems increase in power, divorce occurs more easily. That is, I have the intuition that if PFC were trying, so to speak, to divorce itself from DA supervision, that it could probably find some easy-ish way to succeed, e.g. by reconfiguring itself to hide activity from DA, or to send reward-eliciting signals to DA regardless of what goal it was pursuing.
↑ comment by Kaj_Sotala · 2020-08-14T18:49:52.667Z · LW(p) · GW(p)
Or e.g. that it always leads to the organism optimizing for a set of goals which is unrecognizably different from the base objective. I don't think you see these things, and I'm interested in figuring out how evolution prevented them.
As I understand it, Wang et al. found that their experimental setup trained an internal RL algorithm that was more specialized for this particular task, but was still optimizing for the same task that the RNN was being trained on? And it was selected exactly because it achieved that goal better. If the circumstances changed so that the more specialized behavior was no longer appropriate, then (assuming the RNN's weights hadn't been frozen) the feedback to the outer network would gradually end up reconfiguring the internal algorithm as well. So I'm not sure how it could even end up with something that's "unrecognizably different" from the base objective - even after a distributional shift, the learned objective would probably still be recognizable as a special case of the base objective, until it updated to match the new situation.
The thing that I would expect to see from this description is that humans who were e.g. practicing a particular skill might end up becoming overspecialized to the circumstances around that skill, and need to occasionally relearn things to fit a new environment. And that certainly does seem to happen. Likewise for more general/abstract skills, like "knowing how to navigate your culture/technological environment", where older people's strategies are often more adapted to how society used to be rather than how it is now - but still aren't incapable of updating.
Catastrophic misalignment seems more likely to happen in the case of something like evolution, where the two learning algorithms operate on vastly different timescales, and it takes a very long time for evolution to correct after a drastic distributional shift. But the examples in Wang et al. lead me to think that in the brain, even the slower process operates on a timescale that's on the order of days rather than years, allowing for reasonably rapid adjustments in response to distributional shifts. (Though it's plausible that the more structure there is in need of readjustment, the slower the reconfiguration process will be - which would fit the behavioral calcification that we see in e.g. some older people.)
Replies from: abramdemski, adam_scholl↑ comment by abramdemski · 2020-08-18T15:04:23.953Z · LW(p) · GW(p)
It seems possible to me. A common strategy in religious groups is to steer for a wide barrier between them and particular temptations. This could be seen as a strategy for avoiding DA signals which would de-select for the behaviors encouraged by the religious group: no rewards are coming in for alternate behavior, so the best the DA can do is reinforce the types of reward which the PFC has restricted itself to.
This can be supplemented with modest rewards for desired behaviors, which force the DA to reinforce the inner optimizer's desired behaviors.
Although it is easier in a community which supports the behaviors, it's entirely possible to do this to oneself in relative isolation as well.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-08-19T05:01:12.045Z · LW(p) · GW(p)
Good point, I wasn't thinking of social effects changing the incentive landscape.
↑ comment by Adam Scholl (adam_scholl) · 2020-08-17T20:32:02.105Z · LW(p) · GW(p)
Kaj, the point I understand you to be making is: "The inner RL algorithm in this scenario is probably reliably aligned with the outer RL algorithm, since the former was selected specifically on the basis of it being good at accomplishing the latter's objective, and since if the former deviates from pursuing that objective it will receive less reward from the outer, causing it to reconfigure itself to be better aligned. And since the two algorithms operate on similar time scales, we should expect any such misalignment to be noticed/corrected quickly." Does this seem like a reasonable paraphrase?
It doesn't feel obvious to me that the outer layer will be able to reliably steer the inner layer in this sense, especially as systems become more powerful. For example, it seems plausible to me that the inner layer might come to optimize for its proxy estimations of outer reward more than for outer reward itself, and that those two things might become decoupled.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-08-19T04:55:08.261Z · LW(p) · GW(p)
That seems like a reasonable paraphrase, at least if you include the qualification that the "quickly" is relative to the amount of structure that the inner layer has accumulated, so might not actually happen quickly enough to be useful in all cases.
For example, it seems plausible to me that the inner layer might come to optimize for its proxy estimations of outer reward more than for outer reward itself, and that those two things could become decoupled.
Sure, e.g. lots of exotic sexual fetishes look like that to me. Hmm, though actually that example makes me rethink the argument that you just paraphrased, given that those generally emerge early in an individual's life and then generally don't get "corrected".
comment by Michaël Trazzi (mtrazzi) · 2020-08-18T13:18:56.105Z · LW(p) · GW(p)
Funnily enough, I wrote a blog post distilling what I learned from reproducing the experiments of that 2018 Nature paper, adding some animations and diagrams. I especially look at the two-step task, the Harlow task (the one with monkeys looking at a screen), and also try to explain some brain things (e.g. how DA interacts with the PFC) at the end.
comment by gwern · 2020-08-14T21:04:11.803Z · LW(p) · GW(p)
The slot-based NN paper is "Meta-Learning with Memory-Augmented Neural Networks", Santoro et al 2016 (Arxiv).
comment by Nevan Wichers (nevan-wichers) · 2020-08-12T19:29:40.411Z · LW(p) · GW(p)
I don't think that paper is an example of mesa optimization, because the policy could be implementing a very simple heuristic to solve the task, similar to: pick the image that led to the highest reward in the last 10 timesteps with 90% probability, and pick an image at random with 10% probability.
So the policy doesn't have to have any properties of a mesa optimizer, like considering possible actions and evaluating them with a utility function, etc.
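A minimal sketch of the kind of heuristic described above (illustrative only; the class and method names are invented, while the window size and probabilities follow the comment):

```python
# Illustrative "play the recent winner" heuristic (invented names; window and
# probabilities follow the comment above).
import random
from collections import deque

class PlayTheWinnerPolicy:
    def __init__(self, n_arms, window=10, explore_prob=0.1):
        self.n_arms = n_arms
        self.explore_prob = explore_prob
        self.history = deque(maxlen=window)   # (arm, reward) for the last `window` steps

    def act(self):
        if random.random() < self.explore_prob or not self.history:
            return random.randrange(self.n_arms)            # random image 10% of the time
        totals = [0.0] * self.n_arms
        for arm, reward in self.history:
            totals[arm] += reward
        return max(range(self.n_arms), key=lambda a: totals[a])  # recent best image

    def observe(self, arm, reward):
        # The previous reward is just another input the policy conditions on.
        self.history.append((arm, reward))
```

Nothing in this policy searches over actions or evaluates them against an internal objective, which is the point being made.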
Whenever an RL agent is trained in a partially observed environment, the agent has to take actions to learn about parts of its environment that it hasn't observed yet or that may have changed. The difference with this paper is that the observations it gets from the environment happen to include the reward the agent received in the previous timestep. However, as far as the policy is concerned, the reward it gets as input is just another component of the state. So the fact that the policy gets the previous reward as input doesn't make it stand out compared to any other partially observed environment.
Replies from: gwern, abramdemski↑ comment by gwern · 2020-08-20T23:58:40.867Z · LW(p) · GW(p)
The argument that these and other meta-RL researchers usually make is that (as indicated by the various neurons which fluctuate, and I think based on some other parts of their experiments which I would have to reread to list) what these RNNs are learning is not just a simple play-the-winner heuristic (which is suboptimal, and your suggestion would require only 1 neuron to track the winning arm) but amortized Bayesian inference, where the internal dynamics are learning the sufficient statistics of the Bayes-optimal solution to the POMDP (where you're unsure which of a large family of MDPs you're in): "Meta-learning of Sequential Strategies", Ortega et al 2019; "Reinforcement Learning, Fast and Slow", Botvinick et al 2019; "Meta-learners' learning dynamics are unlike learners'", Rabinowitz 2019; "Bayesian Reinforcement Learning: A Survey", Ghavamzadeh et al 2016, are some of the papers that come to mind. Then you can have a fairly simple decision rule using that as the input (e.g. Figure 4 of Ortega on a coin-flipping example, which is a setup near & dear to my heart).
To reuse a quote from my backstop essay: as Duff 2002 puts it,
"One way of thinking about the computational procedures that I later propose is that they perform an offline computation of an online, adaptive machine. One may regard the process of approximating an optimal policy for the Markov decision process defined over hyper-states as 'compiling' an optimal learning strategy, which can then be 'loaded' into an agent."
↑ comment by abramdemski · 2020-08-18T18:59:44.297Z · LW(p) · GW(p)
I made some remarks going partly off of your comment into a post: https://www.alignmentforum.org/posts/WmBukJkEFM72Xr397/mesa-search-vs-mesa-control [AF · GW]
comment by ESRogs · 2020-08-13T02:34:26.369Z · LW(p) · GW(p)
Yang et al.
Wang et al.?
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-13T02:56:32.792Z · LW(p) · GW(p)
Gah, thanks! Fixed.
comment by Steven Byrnes (steve2152) · 2020-08-13T10:22:05.326Z · LW(p) · GW(p)
I dunno, I didn't really like the meta-RL paper. Maybe it has merits I'm not seeing. But I didn't find the main analogy helpful. I also don't think "mesa-optimizer" is a good description of the brain at this level. (i.e., not the level involving evolution). I prefer "steered optimizer" [LW · GW] for what it's worth. :-)
Replies from: adam_scholl↑ comment by Adam Scholl (adam_scholl) · 2020-08-13T21:20:20.364Z · LW(p) · GW(p)
As I understand it, your point about the distinction between "mesa" and "steered" is chiefly that in the latter case, the inner layer is continually receiving reward signal from the outer layer, which in effect heavily restricts the space of possible algorithms the outer layer might give rise to. Does that seem like a decent paraphrase?
One of the aspects of Wang et al.'s paper that most interested me was that the inner layer in their meta-RL model kept learning even once reward signal from the outer layer had ceased. It feels plausible to me that the relationship between PFC and DA is reasonably describable as something like "subcortex-supervised learning," where the PFC's input signals are "labeled" by the DA-supervisor. But it doesn't feel intuitively obvious to me that the portion of PFC input which might be labeled in this way is high—e.g., I feel unconfident about what portion of the concepts currently active in my working memory while writing this paragraph might be labeled by DA—nor that it much restricts the space of possible algorithms that can arise in PFC.
Replies from: steve2152↑ comment by Steven Byrnes (steve2152) · 2020-08-13T22:13:40.129Z · LW(p) · GW(p)
your point about the distinction between "mesa" and "steered" is chiefly that in the latter case, the inner layer is continually receiving reward signal from the outer layer, which in effect heavily restricts the space of possible algorithms the outer layer might give rise to. Does that seem like a decent paraphrase?
Yeah, that's part of it, but also I tend to be a bit skeptical that a performance-competitive optimizer will spontaneously develop, as opposed to being programmed—just as AlphaGo does MCTS because DeepMind programmed it to do MCTS, not because it was running a generic RNN that discovered MCTS. See also this [LW · GW].
I feel confused about what portion of the concepts currently active in my working memory while writing this paragraph might be labeled by DA
Right now I'm kinda close to "More-or-less every thought I think has higher DA-related reward prediction than other potential thoughts I could have thought." But it's a vanishing fraction of cases where there is "ground truth" for that reward prediction that comes from outside of the neocortex. There is "ground truth" for things like pain and fear-of-heights, but not for thinking to yourself "hey, that's a clever turn of phrase" when you're writing. (The neocortex is the only place that understands language, in this example.)
Ultimately I think everything has to come from subcortex-provided "ground truth" on what is or isn't rewarding, but the neocortex can get the idea that Concept X is an appropriate proxy / instrumental goal associated with some subcortex-provided reward, and then it goes and labels Concept X as inherently desirable, and searches for actions / thoughts that will activate Concept X.
There's still usually some sporadic "ground truth", e.g. you have an innate desire for social approval and I think the subcortex has ways to figure out when you do or don't get social approval, so if your "clever turns of phrase" never impress anyone, you might eventually stop trying to come up with them. But if you're a hermit writing a book, the neocortex might be spinning for years treating "come up with clever turns of phrase" as an important goal, without any external subcortex-provided information to ground that goal.
See here [LW · GW] for more on this, if you're not sick of my endless self-citations yet. :-)
Sorry if any of this is wrong, or missing your point.
Also, I'm probably revealing that I never actually read Wang et al. very carefully :-P I think I skimmed it a year ago and liked it, and then re-read it 3 months ago having developed more opinions about the brain, and didn't really like it that time, and then listened to that interview recently and still felt the same way.
comment by Houshalter · 2020-08-20T17:49:10.948Z · LW(p) · GW(p)
The temporal difference learning algorithm is an efficient way to do reinforcement learning, and probably something like it happens in the human brain. If you are playing a game like chess, it may take a long time to get enough examples of wins and losses to train an algorithm to predict good moves. Say you play 128 games: each outcome is a single win/loss bit, so that's only 128 bits of information, which is nothing. You have no way of knowing which moves in a game were good and which were bad; you have to assume all moves made during a losing game were bad, which throws out a lot of information.
Temporal difference learning can learn "capturing pieces is good" and start optimizing for that instead. This implies that "inner alignment failure" is a constant fact of life. There are probably players that get quite far in chess doing nothing more than optimizing for piece capture.
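A minimal sketch of the TD(0) update being described (illustrative only; the states, rewards, and step size are made up): because the update bootstraps from the next state's value, credit flows back to intermediate states like "captured a piece" long before the final win/loss signal arrives.

```python
# Illustrative TD(0) value update on a made-up three-state chess episode.
def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update to the value table V (dict: state -> value)."""
    td_error = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * td_error

V = {}
# (state, reward received on leaving that state, next state)
episode = [("opening", 0.0, "captured_piece"),
           ("captured_piece", 0.0, "winning_position"),
           ("winning_position", 1.0, "terminal")]
for _ in range(200):                       # replay the same episode many times
    for state, reward, next_state in episode:
        td0_update(V, state, reward, next_state)

print(V)  # value propagates back from the win to "captured_piece" and "opening"
```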
I used to have anxiety about the many worlds hypothesis. It just seems kind of terrifying, constantly splitting into hell-worlds, and the implications of quantum immortality. But it didn't take long for it to stop bothering me, and for me to even start suppressing thoughts about it. After all, such thoughts don't lead to any reward and cause problems, so an RL brain should punish them.
But that's kind of terrifying itself, isn't it? I underwent a drastic change to my utility function, and even developed anti-rational heuristics for suppressing thoughts, which a rational Bayesian should never do (at least not for these reasons).
Anyway, gwern has a whole essay on multi-level optimization algorithms like this that I haven't seen linked yet: https://www.gwern.net/Backstop
comment by Raemon · 2020-08-16T07:36:38.532Z · LW(p) · GW(p)
Curated. [Edit: no longer particularly endorsed in light of Rohin's comment, although I also have not yet really vetted Rohin's comment either and currently am agnostic on how important this post is]
When I first started following LessWrong, I thought the sequences made a good theoretical case for the difficulties of AI Alignment. In the past few years we've seen more concrete, empirical examples of how AI progress can take shape [LW · GW] and how that might be alarming. We've also seen more concrete simple examples of AI failure in the form of specification gaming and whatnot.
I haven't been following all of this in depth and don't know how novel the claims here are [fake edit: gwern notes in the comments [LW(p) · GW(p)] that similar phenomena have been observed elsewhere]. But, this seemed noteworthy for getting into the empirical observation of some of the more complex concerns about inner alignment.
I'm interested in seeing more discussion of these results, what they mean and how people think about them.