Occam's Razor May Be Sufficient to Infer the Preferences of Irrational Agents: A reply to Armstrong & Mindermann

post by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-07T19:52:19.266Z · score: 49 (14 votes) · LW · GW · 25 comments

Contents

  Brief summary of A&M's argument:
  Methinks the argument proves too much:
  Objecting to the three arguments for Step 2
  Conclusion
  Appendix: So, is Occam’s Razor sufficient or not?

[Epistemic Status: My inside view feels confident, but I’ve only discussed this with one other person so far, so I won't be surprised if it turns out to be confused.]

Armstrong and Mindermann (A&M) argue "that even with a reasonable simplicity prior/Occam’s razor on the set of decompositions, we cannot distinguish between the true decomposition and others that lead to high regret. To address this, we need simple ‘normative’ assumptions, which cannot be deduced exclusively from observations."

I explain why I think their argument is faulty, concluding that maybe Occam's Razor is sufficient to do the job after all.

In what follows I assume the reader is familiar with the paper already or at least with the concepts within it.

Brief summary of A&M's argument:

(This is merely a brief sketch of A&M’s argument; I’ll engage with it in more detail below. For the full story, read their paper.)

Take a human policy pi = P(R) that we are trying to represent in the planner-reward formalism. R is the human’s reward function, which encodes their desires/preferences/values/goals. P() is the human’s planner function, which encodes how they take their experiences as input and try to choose outputs that achieve their reward. Pi, then, encodes the overall behavior of the human in question.

Step 1: In any reasonable language, for any plausible policy, you can construct “degenerate” planner-reward pairs that are almost as simple as the simplest possible way to generate the policy, yet yield high regret (i.e. have a reward component which is very different from the "true"/"intended" one.)

It’s easy to see that these examples, being constructed from the policy, are at most slightly more complex than the simplest possible way to generate the policy, since they could make use of that way.
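To make Step 1 concrete, here is a toy sketch (my own construction, not code from the paper) of two classic degenerate pairs: an indifferent planner with an empty reward, and a greedy planner with a reward that declares whatever the policy does to be optimal. Each is built mechanically out of the policy, so each costs only a constant overhead beyond the simplest program for the policy itself.

```python
# Toy sketch (my own, not from A&M): given any policy, we can mechanically
# build degenerate planner-reward pairs whose description length is barely
# more than the policy's own.

def policy(observation):
    # Stand-in for the simplest program generating the human policy pi.
    return f"action-for-{observation}"

# Degenerate pair 1: an "indifferent" planner that ignores the reward
# entirely and just replays pi; the reward function is empty (always 0).
def indifferent_planner(reward):
    return policy          # pi = P(R) holds for *any* reward R

zero_reward = lambda outcome: 0

# Degenerate pair 2: a "greedy" planner paired with a reward that rates
# whatever pi does as optimal, so a rational maximizer reproduces pi.
def pi_is_optimal_reward(observation, action):
    return 1 if action == policy(observation) else 0

def greedy_planner(reward):
    def pi(observation):
        candidates = [f"action-for-{observation}", "something-else"]
        return max(candidates, key=lambda a: reward(observation, a))
    return pi

assert indifferent_planner(zero_reward)("x") == policy("x")
assert greedy_planner(pi_is_optimal_reward)("x") == policy("x")
```

Both pairs reproduce pi exactly, and each adds only a constant number of characters on top of whatever `policy` costs to write down, which is the content of Step 1.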

Step 2: The "intended" planner-reward pair--the one that humans would judge to be a reasonable decomposition of the human policy in question--is likely to be significantly more complex than the simplest possible planner-reward pair.

Conclusion: If we use Occam’s Razor alone to find planner-reward pairs that fit a particular human’s behavior, we’ll settle on one of the degenerate ones (or something else entirely) rather than a reasonable one. This could be very dangerous if we are building an AI to maximize the reward.

Methinks the argument proves too much:

My first point is that A&M’s argument probably works just as well for other uses of Occam’s Razor. In particular it works just as well for the canonical use: finding the Laws and Initial Conditions that describe our universe!

Take a sequence of events we are trying to predict/represent with the lawlike-universe formalism, which posits C (the initial conditions) and then L() the dynamical laws, a function that takes initial conditions and extrapolates everything else from them. L(C) = E, the sequence of events/conditions/world-states we are trying to predict/represent.

Step 1: In any reasonable language, for any plausible sequence of events, we can construct "degenerate" initial condition + laws pairs that are almost as simple as the simplest pair.

It’s easy to see that these examples, being constructed from E, are at most slightly more complex than the simplest possible pair, since they could use the simplest pair to generate E.

Step 2: The "intended" initial condition+law pair is likely to be significantly more complex than the simplest pair.

Conclusion: If we use Occam’s Razor alone to find law-condition pairs that fit all the world’s events, we’ll settle on one of the degenerate ones (or something else entirely) rather than a reasonable one. This could be very dangerous if we are e.g. building an AI to do science for us and answer counterfactual questions like “If we had posted the nuclear launch codes on the Internet, would any nukes have been launched?”

This conclusion may actually be true, but it’s a pretty controversial claim and I predict most philosophers of science wouldn’t be impressed by this argument for it--even the ones who agree with the conclusion.

Objecting to the three arguments for Step 2

Consider the following hypothesis, which is basically equivalent to the claim A&M are trying to disprove:

Occam Sufficiency Hypothesis: The “Intended” pair happens to be the simplest way to generate the policy.

Notice that everything in Step 1 is consistent with this hypothesis. The degenerate pairs are constructed from the policy, so they are slightly more complicated than the simplest way to generate it; hence, if that simplest way is via the intended pair, they are more complicated (albeit only slightly) than the intended pair.

Next, notice that the three arguments in support of Step 2 don’t really hurt this hypothesis:

Re: first argument: The intended pair can be both very complex and the simplest way to generate the policy; no contradiction there. Indeed that’s not even surprising: since the policy is generated by a massive messy neural net in an extremely diverse environment, we should expect it to be complex. What matters for our purposes is not how complex the intended pair is, but rather how complex it is relative to the simplest possible way to generate the policy. A&M need to argue that the simplest possible way to generate the policy is simpler than the intended pair; arguing that the intended pair is complex is at best only half the argument.

Compare to the case of physics: Sure, the laws of physics are complex. They probably take at least a page of code to write up. And that’s aspirational; we haven’t even got to that point yet. But that doesn’t mean Occam’s Razor is insufficient to find the laws of physics.

Re: second argument: The inference from “This pair contains more information than the policy” to “this pair is more complex than the policy” is fallacious. Of course the intended pair contains more information than the policy! All ways of generating the policy contain more information than it. This is because there are many ways (e.g. planner-reward pairs) to get any given policy, and thus specifying any particular way is giving you strictly more information than simply specifying the policy.

Compare to the case of physics: Even once we’ve been given the complete history of the world (or a complete history of some arbitrarily large set of experiment-events) there will still be additional things left to specify about what the laws and initial conditions truly are. Do the laws contain a double negation in them, for example? Do they have some weird clause that creates infinite energy but only when a certain extremely rare interaction occurs that never in fact occurs? What language are the laws written in, anyway? And what about the initial conditions? Lots of things left to specify that aren’t determined by the complete history of the world. Yet this does not mean that the Laws + Initial Conditions are more complex than the complete history of the world, and it certainly doesn’t mean we’ll be led astray if we believe in the Laws+Conditions pair that is simplest.

Re: third argument: Yes, people have been trying to find planner-reward pairs to explain human behavior for many years, and yes, no one has managed to build a simple algorithm to do it yet. Instead we rely on all sorts of implicit and intuitive heuristics, and we still don’t succeed fully. But all of this can be said about Physics too. It’s not like physicists are literally following the Occam’s Razor algorithm--iterating through all possible Law+Condition pairs in order from simplest to most complex and checking each one to see if it outputs a universe consistent with all our observations. And moreover, physicists haven’t succeeded fully either. Nevertheless, many of us are still confident that Occam’s Razor is in principle sufficient: If we were to follow the algorithm exactly, with enough data and compute, we would eventually settle on a Law+Condition pair that accurately describes reality, and it would be the true pair. Again, maybe we are wrong about that, but the arguments A&M have given so far aren’t convincing.

Conclusion

Perhaps Occam’s Razor is insufficient after all. (Indeed I suspect as much, for reasons I’ll sketch in the appendix.) But as far as I can tell, A&M’s arguments are at best very weak evidence against the sufficiency of Occam’s Razor for inferring human preferences, and moreover they work pretty much just as well against the canonical use of Occam’s Razor too.

This is a bold claim, so I won’t be surprised if it turns out I was confused. I look forward to hearing people’s feedback. Thanks in advance! And thanks especially to Armstrong and Mindermann if they take the time to reply.


Many thanks to Ramana Kumar for hearing me out about this a while ago when we read the paper together.


Appendix: So, is Occam’s Razor sufficient or not?

--A priori, we should expect something more like a speed prior to be appropriate for identifying the mechanisms of a finite mind, rather than a pure complexity prior.

--Sure enough, we can think of scenarios in which e.g. a deterministic universe with somewhat simple laws develops consequentialists who run massive simulations including of our universe and then write down Daniel’s policy in flaming letters somewhere, such that the algorithm “Run this deterministic universe until you find big flaming letters, then read out that policy” becomes a very simple way to generate Daniel’s policy. (This is basically just the “Universal Prior is Malign” idea applied in a new way.)

--So yeah, pure complexity prior is probably not good. But maybe a speed prior would work, or something like it. Or maybe not. I don’t know.

--One case that seems useful to me: Suppose we are considering two explanations of someone’s behavior: (A) They desire the well-being of the poor, but [insert epicycles here to explain why they aren’t donating much, are donating conspicuously, are donating ineffectively] and (B) They desire their peers (and themselves) to believe that they desire the well-being of the poor. Thanks to the epicycles in (A), both theories fit the data equally well. But theory (B) is much simpler. Do we conclude that this person really does desire the well-being of the poor, or not? If we think that even though (A) is more complex it is also more accurate, then yeah, it seems like Occam’s Razor is insufficient to infer human preferences. But if we instead think “Yeah, this person just really doesn’t care, and the proof is how much simpler (B) is than (A),” then it seems we really are using something like Occam’s Razor to infer human preferences. Of course, this is just one case, so the only way it could prove anything is as a counterexample. To me it doesn’t seem like a counterexample to Occam’s sufficiency, but I could perhaps be convinced to change my mind about that.

--Also, I'm pretty sure that once we have better theories of the brain and mind, we’ll have new concepts and theoretical posits to explain human behavior. (e.g. something something Karl Friston something something free energy?) Thus, the simplest generator of a given human’s behavior will probably not divide automatically into a planner and a reward; it’ll probably have many components and there will be debates about which components the AI should be faithful to (dub these components the reward) and which components the AI should seek to surpass (dub these components the planner.) These debates may be intractable, turning on subjective and/or philosophical considerations. So this is another sense in which I think yeah, definitely Occam’s Razor isn’t sufficient--for we will also need to have a philosophical debate about what rationality is.
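The contrast between a pure complexity prior and the speed prior mentioned above can be put in toy numbers (all of them made up by me for illustration): a Levin/Schmidhuber-style speed prior charges a program its description length plus log2 of its running time, so whether it penalizes the "simulate a universe and read off the flaming letters" program enough depends entirely on how astronomical that simulation's runtime is.

```python
import math

# Toy illustration with made-up numbers: a pure complexity prior charges
# description length alone; a Levin/Schmidhuber-style speed prior also
# charges log2 of the running time.

def complexity_score(length_bits, runtime_steps):
    return length_bits                              # runtime is free

def speed_score(length_bits, runtime_steps):
    return length_bits + math.log2(runtime_steps)   # runtime is charged

# Hypothetical: "direct" = the simplest honest generator of the policy;
# "flaming_letters" = a shorter program that must first simulate a whole
# universe of consequentialists. The ranking flips only because I chose
# a runtime (2**5000 steps) whose log2 exceeds the length savings.
direct = dict(length_bits=2000, runtime_steps=10**12)
flaming_letters = dict(length_bits=1000, runtime_steps=2**5000)

assert complexity_score(**flaming_letters) < complexity_score(**direct)
assert speed_score(**direct) < speed_score(**flaming_letters)
```

With a merely "large" simulation runtime the log2 penalty would be tiny, which is one reason to stay uncertain about whether the speed prior actually fixes things.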

25 comments

Comments sorted by top scores.

comment by rohinmshah · 2019-10-07T23:24:00.545Z · score: 12 (7 votes) · LW · GW

Some objections:

  • The thing that you can't do is decompose behavior into planner and reward. If you just want to predict behavior, you can totally do that. Similarly, you can predict future events with physics.
  • You do need to do the decomposition to run counterfactuals. And indeed I buy the claim that if you literally try to find some input C and some dynamics L such that L(C) is the world trajectory, selecting only by Kolmogorov complexity and accuracy at predicting data, you probably won't be able to use the resulting (L, C) to run counterfactuals. Even ignoring the malign universal prior argument.
  • If it turns out you can run counterfactuals with (L, C), I would strongly expect that to be because physics "actually" works by some simple L that is "invariant" to the input state. In contrast, I would be astonished if humans "actually" have some reward in their head that they are trying to maximize, and that is what drives behavior.

I don't feel much better about the speed prior than the regular Solomonoff prior.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-08T01:06:56.751Z · score: 7 (4 votes) · LW · GW

Thanks! I'm not sure I follow you. Here's what I think you are saying:

--Occam's Razor will be sufficient for predicting human behavior of course; it just isn't sufficient for finding the intended planner-reward pair. Because (A) the simplest way to predict human behavior has nothing to do with planners and rewards, and so (B) the simplest planner-reward pair will be degenerate or weird as A&M argue.

--You agree that this argument also works for Laws+Initial Conditions; Occam's Razor is generally insufficient, not just insufficient for inferring preferences of irrational agents!

--You think the argument is more likely to work for inferring preferences than for Laws+Initial Conditions though.

If this is what you are saying, then I agree with the second and third points but disagree with the first--or at least, I don't see any argument for it in A&M's paper. It may still be true, but further argument is needed. In particular their arguments for (A) are pretty weak, methinks--that's what my section "Objecting to the three arguments for Step 2" is about.

Edit to clarify: By "I agree with the second point" I mean I agree that if the argument works at all, it probably works for Laws+Initial Conditions as well. I don't think the argument works though. But I do think that Occam's Razor is probably insufficient.



comment by rohinmshah · 2019-10-09T07:11:00.233Z · score: 3 (2 votes) · LW · GW

That's an accurate summary of what I'm saying.

at least, I don't see any argument for it in A&M's paper. It may still be true, but further argument is needed.

If you are picking randomly out of a set of N possibilities, the chance that you pick the "correct" one is 1/N. It seems like in any decomposition (whether planner/reward or initial conditions/dynamics), there will be N decompositions, with N >> 1, where I'd say "yeah, that probably has similar complexity as the correct one". The chance that the correct one is also the simplest one out of all of these seems basically like 1/N, which is ~0.

You could make an argument that we aren't actually choosing randomly, and correctness is basically identical to simplicity. I feel the pull of this argument in the limit of infinite data for laws of physics (but not for finite data), but it just seems flatly false for the reward/planner decomposition.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-09T22:45:39.231Z · score: 1 (1 votes) · LW · GW

I feel like there's a big difference between "similar complexity" and "the same complexity." Like, if we have theory T and then we have theory T* which adds some simple unobtrusive twist to it, we get another theory which is of similar complexity... yet realistically an Occam's-Razor-driven search process is not going to settle on T*, because you only get T* by modifying T. And if I'm wrong about this then it seems like Occam's Razor is broken in general; in any domain there are going to be ways to turn T's into T*'s. But Occam's Razor is not broken in general (I feel).

Maybe this is the argument you anticipate above with "...we aren't actually choosing randomly." Occam's Razor isn't random. Again, I might agree with you that intuitively Occam's Razor seems more useful in physics than in preference-learning. But intuitions are not arguments, and anyhow they aren't arguments that appeared in the text of A&M's paper.



comment by riceissa · 2019-10-09T21:14:56.949Z · score: 8 (4 votes) · LW · GW

I thought about this more and re-read the A&M paper, and I now have a different line of thinking compared to my previous comments [LW · GW].

I still think A&M's No Free Lunch theorem goes through, but now I think A&M are proving the wrong theorem. A&M try to find the simplest (planner, reward) decomposition that is compatible with the human policy, but it seems like we instead additionally want compatibility with all the evidence we have observed, including sensory data of humans saying things like "if I was more rational, I would be exercising right now instead of watching TV" and "no really, my reward function is not empty". The important point is that such sensory data gives us information not just about the human policy, but also about the decomposition. Forcing compatibility with this sensory data seems to rule out degenerate pairs. This makes me feel like Occam's Razor would work for inferring preferences up to a certain point (i.e. as long as the situations are all "in-distribution").

If we are trying to find the (planner, reward) decomposition of non-human minds: I think if we were randomly handed a mind from all of mind design space, then A&M's No Free Lunch theorem would apply, because the simplest explanation really is that the mind has a degenerate decomposition. But if we were randomly handed an alien mind from our universe, then we would be able to use all the facts we have learned about our universe, including how the aliens likely evolved, any statements they seem to be making about what they value, and so on.

Does this line of thinking also apply to the case of science? I think not, because we wouldn't be able to use our observations to get information about the decomposition. Unlike the case of values, the natural world isn't making statements like "actually, the laws are empty and all the complexity is in the initial conditions". I still don't think the No Free Lunch theorem works for science either, because of my previous comments.

comment by Stuart_Armstrong · 2019-10-22T08:15:30.298Z · score: 2 (1 votes) · LW · GW

compatibility with all the evidence we have observed

That is the whole point of my research agenda: https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into [LW · GW]

The problem is that the non-subjective evidence does not map onto facts about the decomposition. A human claims X; well, that's a speech act; are they telling the truth or not, and how do we know? Same for sensory data, which is mainly data about the brain correlated with facts about the outside world; to interpret that, we need to solve human symbol grounding.

All these ideas are in the research agenda (especially section 2). Just as you need something to bridge the is-ought gap, you need some assumptions to make evidence in the world (eg speech acts) correspond to preference-relevant facts.

This video may also illustrate the issues: https://www.youtube.com/watch?v=1M9CvESSeVc&t=1s

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-17T19:10:08.917Z · score: 2 (2 votes) · LW · GW

Hmm, I like that. I wonder what A&M would say in response. And I agree this is an important and relevant difference between the case of preferences and the case of science.

I still don't think A&M show that the simplest explanation is a degenerate decomposition. They show that if it is, then Occam's Razor won't be sufficient, and moreover that there are some degenerate decompositions pretty close to maximally simple. But they don't do much to rule out the possibility that the simplest explanation is the intended one.


comment by riceissa · 2019-10-07T22:00:42.820Z · score: 5 (4 votes) · LW · GW

I'm not confident I've understood this post, but it seems to me that the difference between the values case and the empirical case is that in the values case, we want to do better than humans at achieving human values (this is the "ambitious" in "ambitious value learning") whereas in the empirical case, we are fine with just predicting what the universe does (we aren't trying to predict the universe even better than the universe itself). In the formalism, in π = P(R) we are after R (rather than π), but in E = L(C) we are after E (rather than L or C), so in the latter case it doesn't matter if we get a degenerate pair (because it will still predict the future events well). Similarly, in the values case, if all we wanted was to imitate humans, then it seems like getting a degenerate pair would be fine (it would act just as human as the "intended" pair).

If we use Occam’s Razor alone to find law-condition pairs that fit all the world’s events, we’ll settle on one of the degenerate ones (or something else entirely) rather than a reasonable one. This could be very dangerous if we are e.g. building an AI to do science for us and answer counterfactual questions like “If we had posted the nuclear launch codes on the Internet, would any nukes have been launched?”

I don't understand how this conclusion follows (unless it's about the malign prior, which seems not relevant here). Could you give more details on why answering counterfactual questions like this would be dangerous?

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-08T13:59:45.186Z · score: 5 (4 votes) · LW · GW

Thanks! OK, so I agree that normally in doing science we are fine with just predicting what will happen, there's no need to decompose into Laws and Conditions. Whereas with value learning we are trying to do more than just predict behavior; we are trying to decompose into Planner and Reward so we can maximize Reward.

However the science case can be made analogous in two ways. First, as Eigil says below, realistically we don't have access to ALL behavior or ALL events, so we will have to accept that the predictor which predicted well so far might not predict well in the future. Thus if Occam's Razor settles on weird degenerate predictors, it might also settle on one that predicts well up until time T but then predicts poorly after that.

Second, (this is the way I went, with counterfactuals) science isn't all about prediction. Part of science is about answering counterfactual questions like "what would have happened if..." And typically the way to answer these questions is by decomposing into Laws + Conditions and then doing a surgical intervention on the conditions and then applying the same Laws to the new conditions.

So, for example, if we use Occam's Razor to find Laws+Conditions for our universe, and somehow it settles on the degenerate pair "Conditions := null, Laws := sequence of events E happens" then all our counterfactual queries will give bogus answers--for example, "what would have happened if we had posted the nuclear launch codes on the Internet?" Answer: "Varying the Conditions but holding the Laws fixed... it looks like E would have happened. So yeah, posting launch codes on the Internet would have been fine, wouldn't have changed anything."
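The failure mode can be sketched in a few lines (my own toy example, with stand-in strings for events): counterfactuals are answered by surgically editing the conditions and re-running the laws, and the degenerate laws simply ignore their input.

```python
# Toy sketch (my own example) of why the degenerate "Conditions := null,
# Laws := E just happens" pair gives bogus counterfactuals.

HISTORY = ["codes-kept-secret", "no-launch"]   # stand-in for E

def true_laws(conditions):
    # A law that actually consults its input, like real physics.
    return [conditions,
            "launch!" if conditions == "codes-posted-online" else "no-launch"]

def degenerate_laws(conditions):
    # Ignores the conditions entirely: "E happens, whatever you feed in."
    return HISTORY

# Both pairs reproduce the actual history...
assert true_laws("codes-kept-secret") == HISTORY
assert degenerate_laws("codes-kept-secret") == HISTORY

# ...but under surgical intervention only the true laws notice the change;
# the degenerate pair answers every "what if" with "what actually happened":
assert true_laws("codes-posted-online") == ["codes-posted-online", "launch!"]
assert degenerate_laws("codes-posted-online") == HISTORY   # bogus answer
```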



comment by riceissa · 2019-10-09T04:40:47.423Z · score: 1 (1 votes) · LW · GW

Thanks for the explanation, I think I understand this better now.

My response to your second point: I wasn't sure how the sequence prediction approach to induction (like Solomonoff induction) deals with counterfactuals, so I looked it up, and it looks like we can convert the counterfactual question into a sequence prediction question by appending the counterfactual to all the data we have seen so far. So in the nuclear launch codes example, we would feed the sequence predictor with a video of the launch codes being posted to the internet, and then ask it to predict what sequence it expects to see next. (See the top of page 9 of this PDF and also example 5.2.2 in Li and Vitanyi for more details and further examples.) This doesn't require a decomposition into laws and conditions; rather it seems to require that the events E be a function that can take in bits and print out more bits (or a probability distribution over bits). But this doesn't seem like a problem, since in the values case the policy π is also a function. (Maybe my real point is that I don't understand why you are assuming E has to be a sequence of events?) [ETA: actually, maybe E can be just a sequence of events, but if we're talking about complexity, there would be some program that generates E, so I am suggesting we use that program instead of L and C for counterfactual reasoning.]
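Here is a toy sketch of that conditioning idea (my own construction, nothing like a real Solomonoff inductor): hypotheses inconsistent with the observed data are discarded, survivors are weighted by a crude 2^-length prior, and a hypothetical input is just fed to the same predictor, with no laws/conditions split anywhere. The hypotheses, lengths, and event strings are all invented for illustration, and I've chosen past data that happens to discriminate between the hypotheses.

```python
# Toy sketch (my construction): counterfactual-style queries via sequence
# prediction alone, with no explicit laws/conditions decomposition.

# Hand-written hypotheses mapping an action to the world's response,
# weighted by a crude 2**-length prior.
HYPOTHESES = [
    (lambda a: "launch" if a == "post-codes" else "quiet", 12),  # reactive
    (lambda a: "quiet", 8),                                      # constant
]

# Past (action, outcome) pairs; these happen to falsify "constant".
OBSERVED = [("keep-secret", "quiet"), ("post-codes", "launch")]

def predict(action):
    votes = {}
    for rule, bits in HYPOTHESES:
        # Likelihood step: keep only hypotheses consistent with the data.
        if all(rule(a) == outcome for a, outcome in OBSERVED):
            votes[rule(action)] = votes.get(rule(action), 0) + 2.0 ** -bits
    return max(votes, key=votes.get)

assert predict("post-codes") == "launch"   # the constant rule was falsified
```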

My response to your first point: I am far from an expert here, but my guess is that an Occam's Razor advocate would bite the bullet and say this is fine, since either (1) the degenerate predictors will have high complexity so will be dominated by simpler predictors, or (2) we are just as likely to be living in a "degenerate" world as we are to be living in the kind of "predictable" world that we think we are living in.

comment by TAG · 2019-10-08T14:41:46.994Z · score: 1 (1 votes) · LW · GW

Thanks! OK, so I agree that normally in doing science we are fine with just predicting what will happen, there’s no need to decompose into Laws and Conditions.

Where we can predict, we do so by feeding a set of conditions into laws.

Second, (this is the way I went, with counterfactuals) science isn’t all about prediction. Part of science is about answering counterfactual questions like “what would have happened if...” And typically the way to answer these questions is by decomposing into Laws + Conditions and then doing a surgical intervention on the conditions and then applying the same Laws to the new conditions.

Methodologically, counterfactuals and predictions are almost the same thing. In the case of a prediction, you feed an actual condition into your laws; in the case of a counterfactual, you feed in a non-actual one.

comment by Eigil Rischel (eigil-rischel) · 2019-10-07T22:52:46.855Z · score: 2 (2 votes) · LW · GW

A simple remark: we don't have access to all of E, only the part up until the current time. So we have to make sure that we don't get a degenerate pair which diverges wildly from the actual universe at some point in the future.

Maybe this is similar to the fact that we don't want AIs to diverge from human values once we go off-distribution? But you're definitely right that there's a difference: we do want AIs to diverge from human behaviour (even in common situations).

comment by romeostevensit · 2019-10-07T20:55:39.619Z · score: 5 (3 votes) · LW · GW

This is neat. It makes me realize that thinking in terms of simplicity and complexity priors was serving somewhat as a semantic stop sign for me whereas speed prior vs slow prior doesn't.

comment by Stuart_Armstrong · 2019-10-22T11:07:08.517Z · score: 3 (2 votes) · LW · GW

Hey there!

Thanks for this critique; I have, obviously, a few comments ^_^

In no particular order:

  • First of all, the FHI channel has a video going over the main points of the argument (and of the research agenda); it may help to understand where I'm coming from: https://www.youtube.com/watch?v=1M9CvESSeVc

  • A useful point from that: given human theory of mind, the decomposition of human behaviour into preferences and rationality is simple; without that theory of mind, it is complex. Since it's hard for us to turn off our theory of mind, the decomposition will always feel simple to us. However, the human theory of mind suffers from Moravec's paradox: though the theory of mind seems simple to us, it is very hard to specify, especially into code.

  • You're entirely correct to decompose the argument into Step 1 and Step 2, and to point out that Step 1 has much stronger formal support than Step 2.

  • I'm not too worried about the degenerate pairs specifically; you can rule them all out with two bits of information. But, once you've done that, there will be other almost-as-degenerate pairs that fit with the new information. To rule them out, you need to add more information... but by the time you've added all of that, you've essentially defined the "proper" pair by hand.

  • On speed priors: the standard argument applies for a speed prior, too (see Appendix A of our paper). It applies perfectly for the indifferent planner/zero reward, and applies, given an extra assumption, for the other two degenerate solutions.

  • Onto the physics analogy! First of all, I'm a bit puzzled by your claim that physicists don't know how to do this division. Now, we don't have a full theory of physics; however, all the physical theories I know of have a very clear and known division between laws and initial conditions. So physicists do seem to know how to do this. And when we say that "it's very complex", this doesn't seem to mean the division into laws and initial conditions is complex, just that the initial conditions are complex (and maybe that the laws are not yet known).

  • The indifferent planner contains almost exactly the same amount of information as the policy. The "proper" pair, on the other hand, contains information such as whether the anchoring bias is a bias (it is) compared with whether paying more for better-tasting chocolates is a bias (it isn't). Basically, none of the degenerate pairs contain any bias information at all; so everything to do with human biases is extra information that comes along with the "proper" pair.

  • Even ignoring all that, the fact that (p,R) is of comparable complexity to (-p,-R) shows that Occam's razor cannot distinguish the proper pair from its negative.
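That last point is easy to see in miniature (my own illustration, with an invented two-action toy): a planner that minimizes the negated reward produces exactly the same observable behaviour as the intended maximizer, at the cost of roughly one negation sign.

```python
# Sketch (my illustration) of the (p,R) vs (-p,-R) symmetry: a MINIMIZING
# planner paired with the negated reward -R yields exactly the same policy
# as the "proper" maximizing planner paired with R, so behavioural data
# plus simplicity cannot tell the two pairs apart.

ACTIONS = ["donate", "hoard"]

def R(action):
    return {"donate": 1.0, "hoard": 0.0}[action]

def maximizer(reward):
    return max(ACTIONS, key=reward)

def minimizer(reward):
    return min(ACTIONS, key=reward)

neg_R = lambda action: -R(action)

# Identical observable behaviour; (-p,-R) is only a negation-sign more
# complex than (p,R):
assert maximizer(R) == minimizer(neg_R) == "donate"
```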

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-24T01:40:45.539Z · score: 1 (1 votes) · LW · GW

And thanks for the reply!

FWIW, I like the research agenda. I just don't like the argument in the paper. :)

--Yes, without theory of mind the decomposition is complex. But is it more complex than the simplest way to construct the policy? Maybe, maybe not. For all you said in the paper, it could still be that the simplest way to construct the policy is via the intended pair, complex though it may be. (In my words: The Occam Sufficiency Hypothesis might still be true.)

--If the Occam Sufficiency Hypothesis is true, then not only do we not have to worry about the degenerate pairs, we don't have to worry about anything more complex than them either.

--I agree that your argument, if it works, applies to the speed prior too. I just don't think it works; I think Step 2 in particular might break for the speed prior, because the Speed!Occam Sufficiency Hypothesis might be true.

--If I ever said physicists don't know how to distinguish between laws and initial conditions, I didn't mean it. (Did I?) What I thought I said was that physicists haven't yet found a law+IC pair that can account for the data we've observed. Also that they are in fact using lots of other heuristics and assumptions in their methodology, they aren't just iterating through law+IC pairs and comparing the results to our data. So, in that regard the situation with physics is parallel to the situation with preferences/rationality.

--My point is that they are irrelevant to what is more complex than what. In particular, just because A has more information than B doesn't mean A is more complex than B. Example: The true Laws + Initial Conditions pair contains more information than E, the set of all events in the world. Why? Because from E you cannot conclude anything about counterfactuals, but from the true Laws+IC pair you can. Yet you can deduce E from the true Laws+IC pair. (Assume determinism for simplicity.) But it's not true that the true Laws+IC pair is more complex than E; the complexity of E is the length of the shortest way to generate it, and (let's assume) the true Laws+IC is the shortest way to generate E. So both have the same complexity.

I realize I may be confused here about how complexity or information works; please correct me if so!

But anyhow if I'm right about this then I am skeptical of conclusions drawn from information to complexity... I'd like to see the argument made more explicit and broken down more at least.
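The point above can be made concrete with a toy sketch. All strings below are hypothetical, and character counts stand in for description length, so this is only an illustration of the Kolmogorov-style intuition, not a real complexity measure: if a short "Laws + Initial Conditions" program generates the event-history E, then "run this program" is itself a description of E, so E can't be much more complex than the program.

```python
# Hypothetical toy strings; character counts stand in for description
# length. This sketches the intuition, not a real complexity measure.

# "Laws + Initial Conditions": a short deterministic program that
# generates a long event-history E.
laws_plus_ic = "x=1\nE=[]\nfor _ in range(1000): x=(3*x+7)%101; E.append(x)"

# Running the short program produces E.
scope = {}
exec(laws_plus_ic, scope)
E = scope["E"]

# "Run this program" is itself a description of E, so E's shortest
# description is at most the program's length plus a constant -- and
# here the program is far shorter than the raw listing of E.
assert len(laws_plus_ic) < len(repr(E))
print(len(laws_plus_ic), len(repr(E)))
```

On this toy picture, the Laws+IC pair and E have (up to a constant) the same complexity, even though the pair carries extra counterfactual information.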

For example, the "proper" pair contains all this information about what's a bias and what isn't, because our definition of bias references the planner/reward distinction. But isn't that unfair? Example: We can write 99999999999999999999 or we can write "twenty 9s." The latter is shorter, but it contains more information if we cheat and say it tells us things like "how to spell the word that refers to the parts of a written number."

Anyhow, don't the degenerate pairs also contain information about biases? For example, according to the policy-planner+empty-reward pair, nothing is a bias, because nothing would systematically lead to more reward than what is already being done.

--If it were true that Occam's Razor can't distinguish between P,R and -P,-R, then... isn't that a pretty general argument against Occam's Razor, not just in this domain but in other domains too?


comment by TAG · 2019-10-22T11:41:53.579Z · score: 1 (1 votes) · LW · GW

However, all the physical theories I know of have a very clear and known division between laws and initial conditions.

Physics doesn't work on Occam's razor alone. You need an IC/law division to be able to figure out counterfactuals, but equally you can implement counterfactuals in the form of experiments and use them to figure out the IC/law split.

comment by steve2152 · 2019-10-08T00:11:31.922Z · score: 2 (2 votes) · LW · GW

Take the limit as we observe more and more behavior-- it takes a million bits to specify E, for example, or a billion. Then the utility maximizer and utility minimizer are both much much simpler (can be specified in fewer bits) than the Buddha-like zero utility agent (assuming E is in fact consistent with a simple utility function). Likewise, in that same limit, the true laws of physics plus initial conditions are much much simpler than saying "L=0 and E just happens". Right? Sorry if I'm misunderstanding, I haven't read A&M.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-08T01:16:15.770Z · score: 1 (1 votes) · LW · GW

The trick is that you can use the simplest method for constructing E in your statement "L=0 and E just happens." So e.g. if you have some simple Laws l and Conditions c such that l(c) = E, your statement can be "L=0 and l(c) just happens."
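The trick can be sketched in a couple of lines (hypothetical toy strings, with Python standing in for a description language): the degenerate pair just prepends trivial Laws to the simplest generator, so its description exceeds the simplest pair's by only a small constant, no matter how large E is.

```python
# Hypothetical toy strings. Suppose this is the simplest program that
# generates the event-history E, i.e. l(c) = E.
simplest_pair = "E=[(3*i+7)%101 for i in range(1000)]"

# Degenerate pair: trivial laws "L=0", with the conditions given by
# whatever the simplest program produces ("l(c) just happens").
degenerate_pair = "L=0\n" + simplest_pair

# The degenerate description is longer by a constant overhead that does
# not grow with the size of E.
overhead = len(degenerate_pair) - len(simplest_pair)
print(overhead)  # -> 4
```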

comment by Isnasene · 2019-10-08T03:33:03.135Z · score: 1 (1 votes) · LW · GW

I think the physics analogy here is really cool--the idea of drawing a parallel between the pair "what a person wants and how they behave to get those things" and the pair "how the universe is set up and how it behaves as a result" is an interesting one.

However, I'd argue that many physicists have already settled on a degenerate model of physics: The idea behind the Copenhagen Interpretation is essentially that given an initial condition, some event (partially defined by those conditions) will just randomly happen. It's not exactly one of the degenerate examples you give (because a lot of rules can be extracted from the initial conditions about how those random things happen) but, at the end of the day, lots of people already accept that the initial-conditions to laws-of-physics pairing is best described by saying "sometimes some things happen and sometimes other things happen."




comment by TAG · 2019-10-08T14:15:02.523Z · score: 1 (1 votes) · LW · GW

I think many physicists have already settled on a degenerate model of physics: [..] the Copenhagen Interpretation [..] It’s not exactly one of the degenerate examples you give

I don't see what's degenerate about it at all.

Lots of people already accept that the initial-conditions to laws-of-physics pairing is best described by saying “sometimes some things happen and sometimes other things happen.”

Every interpretation yields the same results. There's no known way of rejigging the initial conditions-to-laws-of-evolution balance that does better.

comment by Isnasene · 2019-10-11T05:59:19.721Z · score: 1 (1 votes) · LW · GW
Every interpretation yields the same results. There's no known way of rejigging the initial conditions-to-laws-of-evolution balance that does better.

Exactly. The fact that multiple conceptually distinct rule-sets yield the same results is what makes it degenerate. In the same way that a single policy can be described exactly by multiple degenerate reward functions of similar complexity, the evolution of the universe can be described exactly by multiple sets of physical laws of similar complexity. Sure, the randomness is the best we can do in terms of prediction, but the underlying way that randomness is produced is degenerate:

1. The next state of the universe is evolved from the current state by a combination of details about the current state and a random fluctuation that just happened

2. The next state of the universe is evolved from the current state by a combination of details about the current state and a set of events by the laws of the universe which only appear random to us

3. The next state of the universe is evolved from the current state by a combination of details about the current state and a set of observationally random events that were chosen to occur in sequence before the beginning of the universe

and so on...

I personally like quantum mechanics though. I'm just picking on it because, while many formulations of deterministic laws exist, people can always make the argument that their "different" interpretations are just different mathematical reformulations of a single concept. In contrast, it's easy to pick conceptually different ways in which observationally random events are produced.

In science, the distinctions between 1, 2, and 3 don't matter, since they all predict the same things. But similar distinctions in terms of reward functions matter greatly because they, intuitively, imply different "subjective" experiences. The upshot is that I don't buy the article's suggestion that "physics being degenerate" is a controversial idea.

comment by TAG · 2019-10-21T10:47:43.168Z · score: 1 (1 votes) · LW · GW

The fact that multiple conceptually distinct rule-sets yield the same results is what makes it degenerate.

What does the singular "it" refer to? You could claim that QM is degenerate because multiple formulations lead to the same result, but you seemed to have a specific beef with Copenhagen.

But similar distinctions in terms of reward functions matter greatly because they, intuitively, imply different "subjective" experiences.

Much more than that. There is a lot of moral concern about whether someone is doing something bad as a result of trying to do something good incompetently, or doing something bad intentionally.

comment by Isnasene · 2019-10-21T23:59:00.494Z · score: 1 (1 votes) · LW · GW
What does the singular "it" refer to? You could claim that QM is degenerate because multiple formulations lead to the same result, but you seemed to have a specific beef with Copenhagen.

I picked Copenhagen because it involves collapsing a wave-function to a random state for a specific universe (i.e., the universe evolves in a way that is partially random). If you're a many worlds theorist, you could plausibly claim that, since the probability distribution describes how frequently different kinds of worlds happen with respect to each other, the universe doesn't evolve randomly at all--what we perceive as randomness describes a deterministic distribution of all possible worlds.

To me, it looks easy to rebut this argument--you just point out that there is still randomness in your subjective perspective of the world. But then someone else might question that because your "subjective perspective" becomes a matter of anthropics and then the whole conversation gets into some confusing weeds that would dramatically lengthen the amount of time I need to think about things. So I picked Copenhagen specifically as a short-cut.

So yeah, I was picking on Copenhagen because it's easier to establish in the context of the point I was trying to make (quantum mechanics is degenerate). But I wasn't picking on it because other interpretations of QM are less problematic than Copenhagen.

Also to clarify:

specific beef with Copenhagen

I don't have a beef with Copenhagen or with QM. I just think it's a degenerate world model, and, by the definition I'm using, degenerate world models of the kind QM is aren't a bad thing.

Much more than that. There is a lot of moral concern about whether someone is doing something bad as a result of trying to do something good incompetently, or doing something bad intentionally.

Even more dramatically than that, we can reverse this to get another important implication! If you're trying to figure out what's good for a person based on the consequences they seem to be seeking out, you can't tell whether that person actually wants the consequences of their behavior (ie the consequences are subjectively good) or whether they want something else but are going about it in an irrational and ineffective way (ie the consequences are subjectively indeterminate). This is really bad for AI alignment.

As a sidenote: One might try to solve this problem by just applying Occam's Razor (doesn't it seem more likely and simpler that someone is acting in ways reflective of their preferences rather than incompetence?). But it seems unlikely to me that this actually works, because

-The paper this article is trying to rebut argues that Occam's Razor will miss people's actual preferences, because the true preferences are unlikely to be the simplest explanation

-This article tries to rebut by pointing out that the paper's argument proves too much by implying that physics models are degenerate

-I think that physics models are pretty obviously degenerate and I'm okay with us having degenerate models of physics. I'm not okay in general with degenerate models of what people prefer

comment by Pattern · 2019-10-08T00:08:46.479Z · score: 1 (1 votes) · LW · GW
It’s easy to see that these examples, being constructed from E, are at most slightly more complex than the simplest possible pair, since they could use the simplest pair to generate E.

Not actually clear. If I had a really long list of factorials (of length n), then perhaps it could be "compressed" as f(1) through f(n) plus a description of f. However, it's not clear how large n would have to be for that description to be shorter. Thus:

Example: The initial conditions are simply E, and L() doesn’t do anything.

is actually simpler, until E is big enough.
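The break-even point can actually be computed in a toy setting (a hypothetical generator string, with character counts standing in for description length): for small n the raw listing wins, and past some n the generator wins.

```python
from math import factorial

# Toy comparison of two "descriptions" of the first n factorials:
# writing the list out raw, versus a fixed generator program.
# Character counts stand in for description length.
generator_src = "E=[factorial(i) for i in range(1,n+1)]"

def listing_length(n):
    # Length of the raw list written out in full.
    return len(repr([factorial(i) for i in range(1, n + 1)]))

# Find the smallest n at which the generator description is shorter;
# below this break-even point, the raw listing is "simpler".
n = 1
while listing_length(n) <= len(generator_src):
    n += 1
print(n, listing_length(n), len(generator_src))
```

So in this toy version the crossover happens quite early, because factorials grow so fast that the raw listing blows up after only a handful of terms.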

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-10-08T01:41:35.691Z · score: 1 (1 votes) · LW · GW

I don't follow?