# Inner Alignment: Explain like I'm 12 Edition

post by Rafael Harth (sil-ver) · 2020-08-01T15:24:33.799Z · LW · GW · 13 comments

## Contents

  What is Inner Alignment?
The Analogy to Evolution
Deceptive Alignment
The concept
Why internalization might be difficult
Why corrigibility might be difficult
Miscellaneous


(This is an unofficial explanation of Inner Alignment based on the Miri paper Risks from Learned Optimization in Advanced Machine Learning Systems (which is almost identical to the LW sequence [? · GW]) and the Future of Life podcast with Evan Hubinger (Miri/LW [LW · GW]). It's meant for anyone who found the sequence too long/challenging/technical to read.)

Note that bold and italics means "this is a new term I'm introducing," whereas underline and italics are used for emphasis.

# What is Inner Alignment?

On a high level, Machine Learning works like this:

1. Choose a problem
2. Decide on a space of possible solutions
3. Find a good solution from that space

If the problem is "find a tool that can look at any image and decide whether or not it contains a cat," then each conceivable set of rules for answering this question (formally, each function from the set of all possible images to the set {yes, no}) defines one solution. We call each such solution a model. The space of possible models is depicted below.

Since that's all possible models, most of them are utter nonsense.

Pick a random one, and you're as likely to end up with a car-recognizer as with a cat-recognizer – but far more likely to end up with an algorithm that does nothing we can interpret. Note that even the examples I annotated aren't typical – most models would be more complex while still doing nothing related to cats. Nonetheless, somewhere in there is a model that would do a decent job on our problem. In the above, that's the one that says, "I look for cats."

How does ML find such a model? One way that does not work is trying out all of them: the space is far too large, containing more candidates than could ever be enumerated. Instead, there's this thing called Stochastic Gradient Descent (SGD). Here's how it works:

SGD begins with some (probably terrible) model and then proceeds in steps. In each step, it switches to another model that is "close" and hopefully a little better. Eventually, it stops and outputs the most recent model. Note that, in the example above, we don't end up with the perfect cat-recognizer (the red box) but with something close to it – perhaps a model that looks for cats but has some unintended quirks. SGD generally does not guarantee optimality.
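The step-by-step picture above can be sketched in a few lines. This is a hypothetical toy, not anything from the post or paper: the "model" is a single number, and the gradient is computed exactly rather than estimated from training data, but the "move to a nearby, slightly better model" loop is the same shape.

```python
# Toy sketch of gradient descent (hypothetical illustration).
# Real SGD adjusts millions of neural-network weights using noisy
# gradient estimates from batches of training data.

def loss(w):
    # How badly the model with parameter w performs (lower is better).
    return (w - 3.0) ** 2  # in this toy space, the best model is w = 3

def gradient(w):
    # Derivative of the loss: points toward a "close" model that is better.
    return 2.0 * (w - 3.0)

def sgd(w, learning_rate=0.1, steps=100):
    for _ in range(steps):
        w = w - learning_rate * gradient(w)  # step to a nearby, better model
    return w

final_model = sgd(w=0.0)  # start from a (probably terrible) model
# final_model ends up close to 3 but not exactly 3: as the post says,
# SGD generally lands near a good model rather than on the perfect one.
```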

The speech bubbles where the models explain what they're doing are annotations for the reader. From the perspective of the programmer, it looks like this:

The programmer has no idea what the models are doing. Each model is just a black box.

A necessary component for SGD is the ability to measure a model's performance, but this happens while treating the models as black boxes. In the cat example, assume the programmer has a bunch of images that are accurately labeled as "contains cat" and "doesn't contain cat." (These images are called the training data and the setting is called supervised learning.) SGD tests how well each model does on these images and, in each step, chooses one that does better. In other settings, performance might be measured in different ways, but the principle remains the same.

Now, suppose that the images we have happen to include only white cats. In this case, SGD might choose a model implementing the rule "output yes if there is something white and with four legs." The programmer would not notice anything strange – all she sees is that the model output by SGD does well on the training data.

In this setting, there is thus only a problem if our way of obtaining feedback is flawed. If it is perfect – if the pictures with cats are perfectly representative of what images-with-cats are like, and the pictures without cats are perfectly representative of what images-without-cats are like, then there isn't an issue. Conversely, if our images-with-cats are non-representative because all cats are white, the model SGD outputs might not be doing precisely what the programmer wanted. In Machine Learning slang, we would say that the training distribution is different from the distribution in deployment.
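A minimal sketch of the white-cat failure, with made-up feature tuples (color, legs, label) standing in for images. The rule below is the spurious one from the example, not a real trained model:

```python
# Hypothetical model that learned the spurious rule
# "white thing with four legs = cat" from unrepresentative data.

def learned_model(color, legs):
    return color == "white" and legs == 4

training_data = [  # every cat photo we happened to have shows a white cat
    ("white", 4, True),   # white cat
    ("brown", 4, False),  # brown dog
    ("white", 4, True),   # another white cat
    ("gray", 0, False),   # gray rock
]

deployment_data = [("black", 4, True), ("white", 4, True)]  # black cats exist

train_acc = sum(learned_model(c, l) == y for c, l, y in training_data) / len(training_data)
deploy_acc = sum(learned_model(c, l) == y for c, l, y in deployment_data) / len(deployment_data)
# train_acc is 1.0 while deploy_acc is only 0.5: the training distribution
# differed from the deployment distribution, so the flaw was invisible
# to the programmer during training.
```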

Is this Inner Alignment? Not quite. This is about a property called distributional robustness, and it's a well-known problem in Machine Learning. But it's close.

To explain Inner Alignment itself, we have to switch to a different setting. Suppose that, instead of trying to classify whether images contain cats, we are trying to train a model that solves mazes. That is, we want an algorithm that, given an arbitrary solvable maze, outputs a route from the Maze Entry to the Maze Exit.

As before, our space of all possible models will consist primarily of nonsense solutions:

(If you don't know what depth-first search means: as far as mazes are concerned, it's simply the "always go left" rule.)

Note that the annotation "I perform depth-first search" is meant to suggest that the model contains a formal algorithm that implements depth-first search, and analogously with the other annotations.

As with the previous example, we might apply SGD to this problem. In this case, the feedback mechanism would come from evaluating the model on test mazes. Now, suppose that all of the test mazes have this form,

where the red areas represent doors. That is, all mazes are such that the shortest path leads through all of the red doors, and the exit is itself a red door.

Looking at this, you might hope that SGD finds the "depth-first" model. However, while that model would find the shortest path, it is not the best model. (Note that it first performs depth-first search and then, once it has found the right path, discards dead ends and outputs the shortest path only). The alternative model with annotation "perform breadth-first search to find the next red door, repeat forever" would perform better. (Breadth-first means exploring all possible paths in parallel.) Both models always find the shortest path, but the red-door model would find it more quickly. In the maze above, it would save time by finding the path from the first to the second door without wasting time exploring the lower-left part of the maze.

Note that breadth-first search only outperforms depth-first search because it can truncate the fruitless paths after having reached the red door. Otherwise, it wouldn't know that the bottom-left part is fruitless until much later in the search.
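To make "explore all possible paths in parallel" concrete, here is a minimal breadth-first search on a toy grid maze. This is my own illustration; an actual learned model would encode something like this implicitly in its weights rather than as readable code.

```python
from collections import deque

# Breadth-first search on a grid maze (0 = open cell, 1 = wall).

def bfs_shortest_path(maze, start, goal):
    rows, cols = len(maze), len(maze[0])
    queue = deque([[start]])      # each queue entry is a partial path
    visited = {start}
    while queue:
        path = queue.popleft()    # shortest partial paths are expanded first
        r, c = path[-1]
        if (r, c) == goal:
            return path           # first path to reach the goal is a shortest one
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None                   # maze is not solvable

maze = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = bfs_shortest_path(maze, start=(0, 0), goal=(2, 0))
```

Because BFS expands all frontier paths level by level, the first path that reaches the goal is guaranteed to be a shortest one – which is why the red-door model can repeatedly apply it door-to-door and still output a shortest overall path.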

As before, all the programmer will see is that the left model performs better on the training data (the test mazes).

The qualitative difference from the cat picture example is that, in this case, we can talk about the model as running an optimization process. That is, the breadth-first search model does itself have an objective (go through red doors), and it tries to optimize for that in the sense that it searches for the shortest path that leads there. Similarly, the depth-first model is an optimization process with the objective "find exit of maze."

This is enough to define Inner Alignment, but to make sure the definition is the same that one reads elsewhere, let's first define two new terms.

• The Base Objective is the objective we use to evaluate models found by SGD. In the first example, it was "classify pictures correctly" (i.e., say "contains cat" if it contains a cat and "doesn't contain cat" otherwise). In the second example, it was "find [a shortest path that solves mazes] as quickly as possible."
• In the cases where the model is running an optimization process, we call the model a Mesa Optimizer, and we call its objective the Mesa Objective (in the maze example, the mesa objective is "find shortest path through maze" for the depth-first model, and "repeatedly find shortest path to the next red door" for the breadth-first model).

With that said,

Inner Alignment is the problem of aligning the Base Objective with the Mesa Objective.

Some clarifying points:

• The red-door example is thoroughly contrived and would not happen in practice. It only aims to explain what Inner Alignment is, not why misalignment might be probable.
• You might wonder what the space of all models looks like. The typical answer is that the possible models are sets of weights for a neural network [LW · GW]. The problem exists insofar as some sets of weights implement specific search algorithms.
• As before, the reason for the inner alignment failure was that our way of obtaining feedback was flawed (in ML language: because there was distributional shift). It is conceivable that misalignment can also arise for other reasons, but those are outside the scope of this post.
• If the Base Objective and Mesa Objective are misaligned, this causes problems as soon as the model is deployed. In the second example, as soon as we take the model output by SGD and apply it to real mazes, it would still search for red doors. If those mazes don't contain red doors, or the red doors aren't always on paths to the exit, the model would perform poorly.

Here is the relevant Venn-Diagram. (Relative sizes don't mean anything.)

Note that {What AI tries to do} = {Mesa Objective} by definition.

Most classical discussion of AI alignment, including most of the book Superintelligence, is about Outer Alignment. The classical example, where an AI optimized to cure cancer kills humans so that no one can have cancer anymore, is about a misalignment of {What Programmers want} and the {Base Objective}. (The Base Objective is {minimize the number of people who have cancer}, and while it's not clear what the programmers want, it's certainly not that.)

# The Analogy to Evolution

Arguments about Inner Alignment often make reference to evolution. The reason is that evolution is an optimization process – it optimizes for inclusive genetic fitness. The space of all models is the space of all possible organisms.

Humans are certainly not the best model in this space – I've added the description on the bottom right to indicate that there are better models that haven't been found yet. However, humans are, undoubtedly, the best model that evolution has found so far.

As with the maze example, humans do themselves run optimization processes. Thus, we can call them/us Mesa Optimizers, and we can compare the Base Objective (the one evolution maximizes for) with the Mesa Objective (the one humans optimize for).

• Base Objective: maximize inclusive genetic fitness
• Mesa Objective: avoid pain, seek pleasure

(This is simplified – some humans optimize for other things, such as the well-being of all possible minds in the universe – but those are no closer to the Base Objective.)

We can see that humans are not aligned with the base objective of evolution. And it is easy to see why – the way Evan Hubinger put it is to imagine the counterfactual world where evolution did select inner-aligned models. In this world, a baby who stubs its toe has to compute how stubbing its toe affects its inclusive genetic fitness before knowing whether or not to repeat this behavior in the future. This would be computationally expensive, whereas the "avoid pain" objective immediately tells the baby not to do it again, which is much cheaper and almost always the correct answer. Thus, an unaligned model outperforms the hypothetical aligned model. Another interesting aspect is that the size of the misalignment (the difference between the Base Objective and the Mesa Objective) has widened over the last few millennia. In the ancestral environment, they were pretty close, but now, they are so far apart that we need to pay people to donate their sperm, which, according to the Base Objective, ought to be the most desirable action imaginable.

Consequently, the analogy might be an argument for why Inner Misalignment is probable since it has occurred "naturally" in the biggest non-human-caused optimization process we know. However, the big caveat here is that evolution does not implement Stochastic Gradient Descent. Evolution navigates the model space by performing random mutations and then evaluating performance, which is fundamentally different (and a billion times less efficient) from modifying the model according to the expected derivative of the loss function, which is what SGD does. Thus, while the analogy works in most ways, it stops working as soon as one makes arguments that rely on properties of SGD other than that it optimizes the Base Objective.

# Deceptive Alignment

This is the abridged version of the fourth part [? · GW] of the sequence. I'm linking to it because this is probably the one where leaving out the technical details is the most problematic.

## The concept

In this section, we make the following assumptions:

• The learning task is hard, and therefore, models are very complex. Think of a question-answering system, rather than an image classifier.
• Instead of having a single learning process, we update a model over time.
• The learning process will select a Mesa Optimizer.
• The Base Objective is complicated, and the model won't get it right immediately (i.e., the model starts out not being inner-aligned).

Since the model is sophisticated, we assume that it builds up a model of the world somehow. Think of GPT-3 (the language model that can write text): it clearly recognizes whether you're prompting it about Lord of the Rings or about politics. This shows that it has an internal model of these things, however flawed or incomplete.

Thus, if we look inside the model (which, again, the programmers cannot do), we have the following two components:

Recall that the model is a Mesa Optimizer by assumption, hence we know it has an objective. This (red blob) is the Mesa Objective.

As its model of the world improves, it might eventually include a model of the Base Objective. Recall that the Base Objective is what SGD optimizes for.

I've drawn the model of the base objective (purple blob) larger than the Mesa Objective since we assume the Base Objective is fairly complex.

SGD tries to make the model better, and if [the thing that the model optimizes for] becomes more similar to the Base Objective, the model does become better. Therefore, we speculate that the model will change such that this happens. We further speculate that there are three different ways this could happen, which I'll illustrate below.

The first is called Internalization. The model adopts the Base Objective outright.

The second is called Corrigibility. The Mesa Objective is no longer represented explicitly – instead, it is a pointer to the part of its world model that includes the Base Objective.

The third is called Deceptive Alignment. Here, the Mesa Objective remains different from the Base Objective, but the model chooses to optimize the Base Objective anyway because doing so makes the learning procedure think it is aligned. Thus, the dotted pointer does not indicate that the Mesa Objective is the Base Objective, nor that it points to the base objective, but that the model will optimize the Base Objective for instrumental reasons, i.e., as part of its strategy for optimizing the Mesa Objective.

For deceptive alignment to be possible, we require that (a) the model knows it is in a training process, and (b) it cares about its objective long-term. For deceptive alignment to become a problem, we further require that (c) the model expects that it eventually leaves the training process permanently. If all of those are given, the paper uses the following example to illustrate why a model might be deceptively aligned (i.e., might optimize the Base Objective even though it's not what it terminally wants).

In this case, the Base Objective is "go to A" and the Mesa Objective "go to B." If the model that wants to go to B is smart, it will go to A in the training process since that allows it to go to B during deployment, and there are three deployment episodes vs. only two training episodes. Thus, optimizing the Base Objective during training yields better results for the Mesa Objective (in the long-term) than optimizing for the Mesa Objective outright. This is deceptive alignment.
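The arithmetic of this example can be written out explicitly. The payoff numbers below are my own simplification of the paper's setup: one unit of mesa-utility per episode spent at B, and a model caught going to B during training is modified by SGD and never reaches B again.

```python
# Toy payoff calculation for the 2-training / 3-deployment episode example.
# (A crude simplification, for arithmetic only.)

TRAINING_EPISODES = 2
DEPLOYMENT_EPISODES = 3

# Honest strategy: go straight to B. The misalignment is revealed in the
# first training episode, so the model is modified and scores nothing after.
honest_payoff = 1

# Deceptive strategy: go to A during all training episodes (scoring zero
# mesa-utility), survive unmodified, then go to B during every deployment
# episode.
deceptive_payoff = 0 * TRAINING_EPISODES + 1 * DEPLOYMENT_EPISODES

# deceptive_payoff (3) > honest_payoff (1): by the Mesa Objective's own
# lights, pretending to be aligned during training is the better plan.
```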

Buck Shlegeris (Miri/LW [LW · GW]) has the following analogy illustrating the three cases. Suppose the Christian God wants to create humans who do what he wants (God's utility function is the Base Objective). The three kinds of alignment have analogs in the kinds of humans who do what God wants.

• For internalization, think of Jesus. He does what god wants because he has the same utility function as god.
• For proxy alignment, think of Martin Luther. He does what God wants because his utility function is "figure out what the bible wants and do that."
• For deceptive alignment, his example was Blaise Pascal; however, I don't need that example because I used to have a friend in middle and high school who explicitly wanted to do what the bible said as he wanted to go to heaven. He openly admitted (to me, anyway) to having terminal goals that contradict what the bible says, but made genuine efforts to adhere to the bible anyway. I'm very confident that this type of person is more common than Martin Luthers or Jesuses.

One purpose of this analogy is to illustrate why deceptive models might be more probable than intuition might suggest. Deceptive alignment sounds like a weird, specific kind of thing, but the analogy suggests that there are, in fact, many more deceptively aligned models out there than proxy aligned models or models aligned via internalization. Only one utility function is the same as that of the Christian god, but many utility functions place very high utility on "don't go to hell." Similarly, there is a range of possible Mesa Objectives that allow for deceptive alignment, and all of those will optimize the Base Objective during training.

The above is just an analogy, but there are other reasons to expect that deception is likely. Specifically, I'll cover two further arguments why the alternatives are "difficult" to achieve and hence might be less likely.

## Why internalization might be difficult

When a large neural network processes input data, it modifies its internal state. If we refer to any modification of that state as "learning," we can distinguish two channels through which learning takes place: one through SGD, and one through processing input data.

Given how much input data is used in modern machine learning, it is a safe bet that far more bits of information are relayed via the second channel (processing input data). Thus, if GPT-3 creates an internal model of "Lord of the Rings," this most likely occurs through input data rather than explicit modifications of SGD.

If we further assume that processing input data doesn't directly modify the model's objective (the Mesa Objective), or that its model of the Base Objective is created first, it follows that representing a complex objective via internalization is harder than via "modelling" (i.e., corrigibility or deception).

In the analogy to evolution (which is still just an analogy, and might not work in this case), it seems intuitive that for evolution, programming a detailed model of "parents" into our DNA is harder than specifying attachment to parents via a reference to our world model.

## Why corrigibility might be difficult

The argument in the previous subsection does not address the difference between deceptive alignment and corrigibility as neither of them needs to represent a complex objective explicitly.

Instead, the argument made by the paper is one of robustness. It goes like so:

• If a model is proxy-aligned, but the pointer is wrong, this can cause the model's performance to degrade.
• Even if the model's world model (gray blob) improves to the point where it recognizes this failure, it will not correct the pointer because the pointer's current target is the Mesa Objective.
• SGD can update the pointer, but this isn't easy to get right.
• Conversely, a deceptive model will immediately adjust its behavior if it learns that the pointer was flawed as doing so is helpful for its objective. (This is so because its strategy is "hide my true objective for now, such that it won't get modified and I can pursue it in deployment.")

Note that this does not exhaust all arguments that have been made in the paper.

# Miscellaneous

It seems to me that one under-appreciated aspect of Inner Alignment is that, even if one had the one-true-utility-function-that-is-all-you-need-to-program-into-AI, this would not, in fact, solve the alignment problem, nor even the intent-alignment part. It would merely solve outer alignment (provided the utility function can be formalized).

Another interesting point is that the plausibility of internalization (i.e., of a model representing the Base Objective explicitly) does not solely depend on the complexity of the objective. For example, evolution's objective of "maximize inclusive genetic fitness" is quite simple, but it is still not represented explicitly because figuring out how actions affect the objective is computationally hard. Thus, {probability of Mesa Optimizer adopting an objective} is at least dependent on {complexity of objective} as well as {difficulty of assessing how actions impact objective}.

[1] In practice, one often runs SGD multiple times with different initializations and uses the best result. Also, the output of SGD may be a linear combination of all models on the way rather than just the final model.

[2] However, there are efforts to create transparency tools to look into models. Such tools might be helpful if they become really good. Some of the proposals for building safe advanced AI [LW · GW] explicitly include transparency tools.

[3] Technically, I believe the space consists of DNA sequences, and human minds are determined by DNA + randomness. I'm not a biologist.

[4] I don't know enough to discuss this assumption.

comment by evhub · 2020-08-02T02:41:28.368Z · LW(p) · GW(p)

This is great—thanks for writing this! I particularly liked your explanation of deceptive alignment with the diagrams to explain the different setups. Some comments, however:

(These models are called the training data and the setting is called supervised learning.)

Should be “these images are.”

Thus, there is only a problem if our way of obtaining feedback is flawed.

I don't think that's right. Even if the feedback mechanism is perfect, if your inductive biases are off, you could still end up with a highly misaligned model. Consider, for example, Paul's argument that the universal prior is malign—that's a setting where the feedback is perfect but you still get malign optimization because the prior is bad.

For proxy alignment, think of Martin Luther King.

The analogy is meant to be to the original Martin Luther, not MLK.

If we further assume that processing input data doesn't directly modify the model's objective, it follows that representing a complex objective via internalization is harder than via "modelling" (i.e., corrigibility or deception).

I'm not exactly sure what you're trying to say here. The way I would describe this is that internalization requires an expensive duplication where the objective is represented separately from the world model despite the world model including information about the objective.

comment by Rafael Harth (sil-ver) · 2020-08-02T07:42:10.389Z · LW(p) · GW(p)

Many thanks for taking the time to find errors.

I've fixed #1-#3. Arguments about the universal prior are definitely not something I want to get into with this post, so for #2 I've just made a vague statement that misalignment can arise for other reasons and linked to Paul's post.

I'm hesitant to change #4 before I fully understand why.

I'm not exactly sure what you're trying to say here. The way I would describe this is that internalization requires an expensive duplication where the objective is represented separately from the world model despite the world model including information about the objective.

So, there are these two channels, input data and SGD. If the model's objective can only be modified by SGD, then (since SGD doesn't want to do super complex modifications), it is easier for SGD to create a pointer rather than duplicate the [model of the base objective] explicitly.

But the bolded part seemed like a necessary condition, and that's what I'm trying to say in the part you quoted. Without this condition, I figured the model could just modify [its objective] and [its model of the Base Objective] in parallel through processing input data. I still don't think I quite understand why this isn't plausible. If the [model of Base objective] and the [Mesa Objective] get modified simultaneously, I don't see any one step where this is harder than creating a pointer. You seem to need an argument for why [the model of the base objective] gets represented in full before the Mesa Objective is modified.

Edit: I slightly rephrased it to say

If we further assume that processing input data doesn't directly modify the model's objective (the Mesa Objective), or that its model of the Base Objective is created first,

comment by MikkW (mikkel-wilson) · 2020-08-02T23:18:14.876Z · LW(p) · GW(p)

The post still contains a misplaced mention of MLK shortly after the first mention of Luther:

I'm very confident that this type of person is more common than Martin Luther Kings or Jesuses.
comment by Rafael Harth (sil-ver) · 2020-08-03T09:45:33.642Z · LW(p) · GW(p)

Ah, shoot. Thanks.

comment by SDM · 2020-08-03T12:10:04.945Z · LW(p) · GW(p)

Inner Alignment / Misalignment is possibly the key specific mechanism which fills a weakness in the 'classic arguments [LW(p) · GW(p)]' for AI safety - the Orthogonality Thesis, Instrumental Convergence and Fast Progress together implying small separations between AI alignment and AI capability can lead to catastrophic outcomes. The question of why there would be such a damaging, hard-to-detect divergence between goals and alignment needs an answer to have a solid, specific reason to expect dangerous misalignment, and Inner Misalignment is just such a reason.

I think that it should be presented in initial introductions to AI risk alongside those classic arguments, as the specific, technical reason why the specific techniques we use are likely to produce such goal/capability divergence - rather than the general a priori reasons given by the classic arguments.

comment by MathiasKirkBonde · 2020-08-01T17:57:44.362Z · LW(p) · GW(p)

For those who, like me, have the attention span and intelligence of a door hinge, the ELI5 edition is:

Outer alignment is trying to find a reward function that is aligned with our values (making it produce good stuff rather than paperclips)

Inner alignment is the act of ensuring our AI actually optimizes the reward function we specify.

An example of poor inner alignment would be us humans in the eyes of evolution. Instead of doing what evolution intended, we use contraceptives so we can have sex without procreation. If evolution had gotten its inner alignment right, we would care as much about spreading our genes as evolution does!

comment by Liron · 2020-08-08T23:22:50.586Z · LW(p) · GW(p)

Thanks for the ELI12, much appreciated.

evolution's objective of "maximize inclusive genetic fitness" is quite simple, but it is still not represented explicitly because figuring out how actions affect the objective is computationally hard

This doesn’t seem like the bottleneck in many situations in practice. For example, a lot of young men feel like they want to have as much sex as possible, but not father as many kids as possible. I’m not sure exactly what the reason is, but I don’t think it’s the computational difficulty of representing having kids vs. having sex, because humans already build a world model containing the concept of “my kids”.

It seems to me that one under-appreciated aspect of Inner Alignment is that, even if one had the one-true-utility-function-that-is-all-you-need-to-program-into-AI, this would not, in fact, solve the alignment problem, nor even the intent-alignment part. It would merely solve outer alignment (provided the utility function can be formalized).

Damn, yep I for one under-appreciated this for the past 12 years.

What else have people said on this subject? Do folks think that scenarios where we solve outer alignment most likely involve us not having to struggle much with inner alignment? Because fully solving outer alignment implies a lot of deep progress in alignment.

comment by Rafael Harth (sil-ver) · 2020-08-09T09:14:09.808Z · LW(p) · GW(p)

This doesn’t seem like the bottleneck in many situations in practice. For example, a lot of young men feel like they want to have as much sex as possible, but not father as many kids as possible. I’m not sure exactly what the reason is, but I don’t think it’s the computational difficulty of representing having kids vs. having sex, because humans already build a world model containing the concept of “my kids”.

In this case, I would speculate that the kids objective wouldn't work that well because the reward is substantially delayed. The sex happens immediately, the kids only after 9 months. Humans tend to discount their future.

Also, how exactly would the kids objective even be implemented?

What else have people said on this subject?

I believe that Miri was aware of this problem for a long time, but that it didn't have the nice, comparatively non-confused and precise handle of "Inner Alignment" until Evan published the 'risks from learned optimizations' paper. But I'm not the right person to say anything else about this.

Do folks think that scenarios where we solve outer alignment most likely involve us not having to struggle much with inner alignment? Because fully solving outer alignment implies a lot of deep progress in alignment.

Probably not. [LW · GW] I think Inner alignment is, if anything, probably the harder problem. It strikes me as reasonably plausible that Debate is a proposal which solves outer alignment, but as very unlikely that it automatically solves Inner Alignment.

comment by Liron · 2020-08-09T11:36:41.436Z · LW(p) · GW(p)

Hm ya I guess the causality between sex and babies (even sex and visible pregnancy) is so far away in time that it’s tough to make a brain want to “make babies”.

But I don’t think the computational intractability of how actions affect inclusive genetic fitness is quite why evolution made such crude heuristics. Because if a brain understood that it was trying to maximize that quantity, I think it could figure out “have a lot of sex” as a heuristic approach without evolution hard-coding it in. And I think humans actually do have some level of in-brain goals to have more descendants beyond just having more sex. So I think these things like sex pleasure are just performance optimizations to a mentally tractable challenge.

E.g. snakes quickly triggering a fear reflex

comment by bmg · 2020-08-18T15:17:51.049Z · LW(p) · GW(p)

Since neural networks are universal function approximators, it is indeed the case that some of them will implement specific search algorithms.

I don't think this specific point is true. It seems to me like the difference between functions and algorithms is important. You can also approximate any function with a sufficiently large look-up table, but simply using a look-up table to choose actions doesn't involve search/planning.* In this regard, something like a feedforward neural network with frozen weights also doesn't seem importantly different than a look-up table to me.

One naive perspective: Systems like AlphaGo and MuZero do search, because they implement Monte-Carlo tree search algorithms, but if you were to remove their MCTS components then they simply wouldn't do search. Search algorithms can be used to update the weights of neural networks, but neural networks don't themselves do search.

I think this naive perspective may be wrong, because it's possible that recurrence is sufficient for search/planning processes to emerge (e.g. see this paper). But then, if that's true, I think that the power of recurrence is the important thing to emphasize, rather than the fact that neural networks are universal function approximators.

*I'm thinking of search algorithms as cognitive processes, rather than input-output behaviors (which could be produced via a wide range of possible algorithms). If you're thinking of them as behaviors, then my point no longer holds. Although I've interpreted the mesa-optimization paper (and most other discussions of mesa-optimization) as talking about cognitive processes.

comment by Rafael Harth (sil-ver) · 2020-08-22T18:53:33.430Z · LW(p) · GW(p)

(I somehow didn't notice your comment until now.) I believe you are correct. The theorem for function approximation I know also uses brute force (i.e., large networks) in the proof, so it doesn't seem like evidence for the existence of [weights that implement algorithms].

(And I am definitely not talking about algorithms in terms of input/output behavior.)

I've changed the paragraph into

You might wonder what the space of all models looks like. The typical answer is that the possible models are sets of weights for a neural network [LW · GW]. The problem exists insofar as some sets of weights implement specific search algorithms.

Anyone who knows of alternative evidence I can point to here is welcome to reply to this comment.

comment by rohinmshah · 2020-08-03T21:03:32.373Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

This post summarizes and makes accessible the paper <@Risks from Learned Optimization in Advanced Machine Learning Systems@>.
comment by hunterglenn · 2020-08-25T15:17:55.506Z · LW(p) · GW(p)

Isn't this just "Humans are adaptation-executors, not utility-maximizers", but applied to AI to say that an AI using heuristics that successfully hit a target in environment X may not continue that target if the environment changes?