Preferences without Existence

post by Scott Garrabrant · 2014-02-08T01:34:03.772Z · LW · GW · Legacy · 101 comments

Cross-posted on By Way of Contradiction

My current beliefs say that there is a Tegmark 4 (or larger) multiverse, but there is no meaningful “reality fluid” or “probability” measure on it. We are all in this infinite multiverse, but there is no sense in which some parts of it exist more or are more likely than any other part. I have tried to illustrate these beliefs as an imaginary conversation between two people. My goal is to either share this belief, or more likely to get help from you in understanding why it is completely wrong.

A: Do you know what the game of life is?

B: Yes, of course, it is a cellular automaton. You start with a configuration of cells, and they update following a simple deterministic rule. It is a simple kind of simulated universe.

A: Did you know that when you run the game of life on an initial condition of a 2791 by 2791 square of live cells and let it run long enough, creatures start to evolve? (Not true.)

B: No. That’s amazing!

A: Yeah, these creatures have developed language and civilization. Time step 1,578,891,000,000,000 seems like a very important era for them. They have developed much technology, and someone has developed the theory of a doomsday device that will kill everyone in their universe and replace the entire thing with emptiness, but at the same time, many people are working hard on developing a way to stop him.

B: How do you know all this?

A: We have been simulating them on our computers. We have simulated up to that crucial time.

B: Wow, let me know what happens. I hope they find a way to stop him.

A: Actually, the whole project is top secret now. The simulation will still be run, but nobody will ever know what happens.

B: That's too bad. I was curious, but I still hope the creatures live long, happy, interesting lives.

A: What? Why do you hope that? It will never have any effect on you.

B: My utility function includes preferences between different universes even if I never get to know the result.

A: Oh, wait, I was wrong. It says here the whole project is canceled, and they have stopped simulating.

B: That is too bad, but I still hope they survive.

A: They won't survive; we are not simulating them any more.

B: No, I am not talking about the simulation; I am talking about the simple set of mathematical laws that determine their world. I hope that those mathematical laws, if run long enough, do interesting things.

A: Even though you will never know, and it will never even be run in the real universe?

B: Yeah. It would still be beautiful if it never gets run and no one ever sees it.

A: Oh, wait. I missed something. It is not actually the game of life. It is a different cellular automaton they used. It says here that it is like the game of life, but the actual rules are really complicated, and take millions of bits to describe.

B: That is too bad. I still hope they survive, but not nearly as much.

A: Why not?

B: I think information-theoretically simpler things are more important and more beautiful. It is a personal preference. It is much more desirable to me to have a complex interesting world come from simple initial conditions.

A: What if I told you I lied, and none of these simulations were run at all and never would be run. Would you have a preference over whether the simple configuration or the complex configuration had the life?

B: Yes, I would prefer the simple configuration to have the life.

A: Is this some sort of Solomonoff probability measure thing?

B: No, actually. It is independent of that. If the only existing thing were this universe, I would still want the laws of math to have creatures with long, happy, interesting lives arise from simple initial conditions.

A: Hmm, I guess I want that too. However, that is negligible compared to my preferences about things that really do exist.

B: That statement doesn’t mean much to me, because I don’t think this existence you are talking about is a real thing.

A: What? That doesn’t make any sense.

B: Actually, it all adds up to normality.

A: I see why you can still have preferences without existence, but what about beliefs?

B: What do you mean?

A: Without a concept of existence, you cannot have Solomonoff induction to tell you how likely different worlds are to exist.

B: I do not need it. I said I care more about simple universes than complicated ones, so I already make my decisions to maximize utility weighted by simplicity. It comes out exactly the same: I do not need to believe simple things exist more, because I already believe simple things matter more.

A: But then you don’t actually anticipate that you will observe simple things rather than complicated things.

B: I care about my actions more in the cases where I observe simple things, so I prepare for simple things to happen. What is the difference between that and anticipation?

A: I feel like there is something different, but I can’t quite put my finger on it. Do you care more about this world than that game of life world?

B: Well, I am not sure which one is simpler, so I don't know, but it doesn't matter. It is a lot easier for me to change our world than it is for me to change the game of life world. I therefore will make choices that roughly maximize my preferences about the future of this world under the simplest models.

A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?

B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.

A: Okay, it seems plausible, but kind of depressing to think that we do not exist.

B: Oh, I disagree! I am still a mind with free will, and I have the power to use that will to change my own little piece of mathematics — the output of my decision procedure. To me that feels incredibly beautiful, eternal, and important.

101 comments

Comments sorted by top scores.

comment by cousin_it · 2014-02-08T11:03:01.222Z · LW(p) · GW(p)

I've heard that point of view from several people. It's a natural extension of LW-style beliefs, but I'm not sure I buy it yet. There are several lines of attack; the most obvious one is trying to argue that coinflips still behave as coinflips even when the person betting on them is really stupid and always bets on heads. But we've already explored that line a little bit, so I'm gonna try a different one:

Are you saying that evolution has equipped our minds with a measure of caring about all possible worlds according to simplicity? If yes, can you guess which of our ancestor organisms were already equipped with that measure, and which ones weren't? Monkeys, fishes, bacteria?

(As an alternative, there could be some kind of law of nature saying that all minds must care about possible worlds according to simplicity. But I'm not sure how that could be true, given that you can build a UDT agent with any measure of caring.)

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T18:27:24.429Z · LW(p) · GW(p)

Evolution has equipped the minds in the worlds where thinking in terms of simplicity works to think according to simplicity. (As well as worlds where thinking in terms of simplicity works up to a certain point in time, when the rules become complex.)

In some sense, even bacteria are equipped with that; they work under the assumption that chemistry does not change over time.

I do not see the point of your question yet.

Replies from: Kutta, cousin_it
comment by Kutta · 2014-02-09T16:34:50.775Z · LW(p) · GW(p)

You conflate two very different things here, as I see it.

First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don't doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician's sense of elegance.

Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don't care about aliens because of a direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.

This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn't care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it's just that they can generalize this preference to more abstract domains.

(On a somewhat unrelated note, you mentioned bacteria. I think your point is a red herring; you can build agents with an assumption for the underlying physics, but that doesn't mean that the agent itself necessarily has any conception of the underlying physics, or even that the agent is consequentialist in any sense).

So, what I'm trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seems to logically follow from that.

Replies from: Wei_Dai, Scott Garrabrant
comment by Wei Dai (Wei_Dai) · 2014-02-09T21:58:18.559Z · LW(p) · GW(p)

If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less "measure" to them. But you rejected this kind of reasoning in your post, and I can't exactly come to grips with the "physics racism" that seem to logically follow from that.

Suppose I wanted to be fair to all, i.e., avoid "physics racism" and care about everyone equally, how would I go about that? It seems that I can only care about dynamical processes, since I can't influence static objects, and to a first approximation dynamical processes are equivalent to computations (i.e., ignoring uncomputable things for now). But how do I care about all computations equally, if there's an infinite number of them? The most obvious answer is to use the uniform distribution: take an appropriate universal Turing machine, and divide my "care" in half between programs (input tapes) that start with 0 and those that start with 1, then divide my "care" in half again based on the second bit, and so on. With some further filling in the details (how does one translate this idea into a formal utility function?), it seems plausible it could "add up to normality" (i.e., be roughly equivalent to the continuous version of Solomonoff Induction).
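
One rough way the "further filling in" might go (a toy sketch only; the universal machine is left abstract and the "programs" are just bit-string prefixes):

```python
# Toy sketch of the construction described above: "care" is split in half at
# every bit of the program tape, so the set of infinite tapes extending a
# finite program p receives weight 2**-len(p).
from fractions import Fraction

def care(program: str) -> Fraction:
    """Weight assigned to all infinite tapes that begin with `program`."""
    return Fraction(1, 2 ** len(program))

# Splitting care bit by bit is self-consistent: a prefix's care equals the
# sum of the care of its two one-bit extensions.
assert care("0101") == care("01010") + care("01011")

def total_utility(universes: dict[str, float]) -> float:
    """Utility summed over a toy set of universes, each labeled by the
    shortest program computing it, weighted by the care its tapes receive."""
    return sum(float(care(p)) * u for p, u in universes.items())

# A 3-bit "simple" universe ends up counting four times as much as a 5-bit
# one, which is the sense in which this recovers a Solomonoff-like weighting.
print(total_utility({"010": 1.0, "01101": 1.0}))  # 0.125 + 0.03125 = 0.15625
```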

Replies from: kokotajlod, Kutta
comment by kokotajlod · 2014-02-10T00:53:17.964Z · LW(p) · GW(p)

An appropriate universal Turing machine

It sounds like this solution is (a) a version of Solomonoff Induction, and (b) similarly suffering from the arbitrary language problem--depending on which language you use to code up the programs. Right?

comment by Kutta · 2014-02-10T00:46:24.939Z · LW(p) · GW(p)

To clarify my point, I meant that Solomonoff induction can justify caring less about some agents (and I'm largely aware of the scheme you described), but simultaneously rejecting Solomonoff and caring less about agents running on more complex physics is not justified.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-02-10T01:43:43.943Z · LW(p) · GW(p)

I think I understood your point, but maybe didn't make my own clear. What I'm saying is that to recover "normality" you don't have to care about some agents less, but can instead care about everyone equally, and just consider that there are more copies of some than others. I.e., in the continuous version of Solomonoff Induction, programs are infinite binary strings, and you could say there are more copies of simple/lawful universes because a bigger fraction of all possible infinite binary strings compute them. And this may be more palatable for some than saying that some universes have more magical reality fluid than others or that we should care about some agents more than others.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-10T02:56:21.563Z · LW(p) · GW(p)

I agree with this, but I am not sure if you are trying to make this argument within my hypothesis that existence is meaningless. I use the same justification within my system, but I would not use phrases like "there are more copies," because there is no such measure besides the one that I assign.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2014-02-10T06:10:20.590Z · LW(p) · GW(p)

Yeah, I think what I said isn't strictly within your system. In your system, where does "the measure that I assign" come from? I mean, if I were already a UDT agent, I would already have such a measure, but I'm not already a UDT agent, so I'd have to come up with a measure if I want to become a UDT agent (assuming that's the right thing to do). But what do I base it on, and why? BTW, have you read my post http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/ where option 4 is similar to your system? I wasn't sure option 4 was the right answer back then, and I'm still in the same basic position now.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-10T06:21:06.248Z · LW(p) · GW(p)

Well, in mind space, there will be many agents basing their measures on different things. For me, it is based on my intuition about "caring about everyone equally," and on looking at programs as infinite binary strings as you describe. That does not feel like a satisfactory answer to me, but it seems just as good as any answer I have seen to the question "Where does your utility function come from?"

I have read that post, and of course, I agree with your reasons to prefer 4.

comment by Scott Garrabrant · 2014-02-09T22:15:30.270Z · LW(p) · GW(p)

I address this "physics racism" concern here:

http://lesswrong.com/lw/jn2/preferences_without_existence/aj4w

comment by cousin_it · 2014-02-09T21:11:21.323Z · LW(p) · GW(p)

It seems to me that bacteria are adapted to their environment, not a mix of all possible environments based on simplicity. You can view evolution as a learning process that absorbs knowledge about the world and updates a "prior" to a "posterior". (Shalizi has a nice post connecting Bayesian updating with replicator dynamics; it's only slightly relevant here, but still very interesting.) Even if the prior was simplicity-based at the start, once evolution has observed the first few bits of a sequence, there's no more reason for it to create a mind that starts from the prior all over again. Using the posterior instead would probably make the mind much more efficient.

So if you say your preferences are simplicity-based, I don't understand how you got such preferences.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T21:30:25.251Z · LW(p) · GW(p)

Would you still have this complaint if I instead said that I care about the subset of universes which contain me following simplicity based preferences?

To me, at least as I see it now, there is no difference between saying that I care about all universes weighted by simplicity and saying that I care about all universes containing me weighted by simplicity, since my actions do not change the universes that do not contain me.

Replies from: cousin_it
comment by cousin_it · 2014-02-09T21:41:25.344Z · LW(p) · GW(p)

In the post you described two universes that don't contain you, and said you cared about the simpler one more. Or am I missing something?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T21:46:43.235Z · LW(p) · GW(p)

I am saying that my decision procedure is independent of those preferences, so there is no evolutionary disadvantage to having them. Does that address your issue?

comment by VAuroch · 2014-02-08T09:56:26.830Z · LW(p) · GW(p)

A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?

B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.

I saw a good explanation of this point (in the #lesswrong IRC sometime last month), which I'll try to reproduce here.

Imagine that there is a specific flavor of Universal Turing Machine which runs everything in Tegmark IV; all potential-universe objects are encoded as tapes for this UTM to simulate.

Take a simple set of rules. There is only one tape that corresponds to these precise rules. Now add an incredibly-specific edge case which applies in exactly one situation that could possibly arise, but otherwise have the same simple set of rules hold; this adds another tape that will produce precisely the same observations from the inside as the simple rules, except in one case that is grotesquely uncommon and very likely will never occur (because it happens to require a state unreachable from the initial conditions). As far as the inhabitants of these universes are concerned, they are the same.

Of course, there is an infinite variety of these edge cases, each of which occurs in a vanishingly specific not-necessarily-reachable circumstance, and nearly every combination of which results in a universe that looks precisely like the simple set of rules. And detailing each edge case adds only a finite amount of tape, on top of the tape needed for the description of the simple rules.

So if you take a more complex set of rules which takes twice as much tape to describe, and compare it with the set of all tape-length-equal instantiations of apparently-those-simple-rules, you'll find that, taking into account the many ways that you can make universes which are like-the-simple-rules-except-where-very-specifically-noted, universes where the correct generalization is the simple rules are more common than universes where the correct generalization is the complex rules, by a factor dependent exclusively on the Kolmogorov complexity.
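
Stated quantitatively (roughly, and assuming the simple rules take about K_s bits to describe and the complex rules about K_c bits, with the rest of a length-n tape free for edge-case annotations):

(number of length-n tapes whose correct generalization is the simple rules) / (number whose correct generalization is the complex rules) ≈ 2^(n - K_s) / 2^(n - K_c) = 2^(K_c - K_s),

a ratio that is independent of n and grows exponentially in the complexity difference, which is exactly the Kolmogorov weighting.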

TL;DR: We probably don't live in a simple universe, but we probably live in a universe where the correct model to form anticipations with is simple.

Replies from: private_messaging, kokotajlod, Scott Garrabrant
comment by private_messaging · 2014-02-08T19:44:50.864Z · LW(p) · GW(p)

Take a simple set of rules. There is only one tape that corresponds to these precise rules.

Not quite. There's an infinite number of tapes that correspond to these precise rules exactly. Each "program of length L" is also two programs of length L+1, four programs of length L+2, and so on - the following bits are simply not read.

In general, there are many different ways a program with the same behaviour could be implemented, and among programs up to a given length, the number is larger for programs that describe fundamentally simpler behaviours.
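
A small numeric illustration of the padding point (a toy sketch only; the "programs" here are just bit-string prefixes of a hypothetical UTM tape, not any particular machine's code):

```python
# A length-L program is represented by every length-N tape that begins with
# it, i.e. 2**(N-L) tapes, so its share of all 2**N tapes is 2**-L for any N.
def padded_copies(program_length: int, tape_length: int) -> int:
    """Number of length-`tape_length` tapes whose first bits are one fixed
    length-`program_length` program (the trailing bits are never read)."""
    assert program_length <= tape_length
    return 2 ** (tape_length - program_length)

N = 30
simple_L, complex_L = 10, 20
print(padded_copies(simple_L, N))   # 1048576 copies of the 10-bit program
print(padded_copies(complex_L, N))  # 1024 copies of the 20-bit program
print(padded_copies(simple_L, N) // padded_copies(complex_L, N))  # 1024
# The ratio 2**(complex_L - simple_L) depends only on the length difference,
# which is the sense in which shorter programs are "more numerous".
```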

Replies from: VAuroch
comment by VAuroch · 2014-02-09T04:17:28.652Z · LW(p) · GW(p)

Not quite. There's an infinite number of tapes that correspond to these precise rules exactly. Each "program of length L" is also two programs of length L+1 , four programs of length L+2, and so on - the following bits are simply not read.

That only works under some methods of specification of the model machine, and I don't consider it to be a reasonable generalization to assume. It's at best contested, and at worst a semantic distinction-without-a-difference.

Replies from: private_messaging, abramdemski
comment by private_messaging · 2014-02-09T07:30:43.775Z · LW(p) · GW(p)

That only works under some methods of specification of the model machine

Should not be a problem for you to name one where it doesn't work, right?

Universal TMs all have to permit setting up an interpreter for running the code for another UTM; you really have a lot less leeway for "methods of specification" than you think.

If you are hung up on there being an actual difference in the output, rather than hand-waving about the special circumstances or such, you ought to provide at least a sketch of a setup (so that people can understand clearly how you get more of the simpler programs). For example, for programs that do halt, instead of halting you can make them copy extra bits from the input tape to the output tape - that would be really easy to set up. And the programs in general can be compactly made to halt after something like BusyBeaver(10) steps.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T08:00:19.902Z · LW(p) · GW(p)

Whether those L+1, L+2, etc. count as different programs from the length-L one is, last I checked, contentious, and because theorists feel strongly about this, various widely-used formalisms for defining what makes something a UTM disagree about whether unread tape matters. If someone pokes a hole in the argument in the great-grandparent so that the general conclusion works iff the two L+1 programs, four L+2 programs, etc. count as different universes, then it would become worth addressing. But as long as it works without that, I'm going to stick with the most difficult case.

Replies from: private_messaging
comment by private_messaging · 2014-02-09T08:05:26.368Z · LW(p) · GW(p)

Again, give an example for the assertions being made.

As for your argument, as others have pointed out, you did not prove anything about the extra length for setting up special output in very specific circumstances, nor sketch how that could be accomplished. By "sticking with the most difficult case" you got into the region where you are unable to actually produce an argument. It is far less than obviously true (and may well be false) that the programs which are simple programs but with special output in very specific circumstances are a notable fraction of large programs.

comment by abramdemski · 2014-02-10T08:09:51.123Z · LW(p) · GW(p)

My knowledge of algorithmic information theory marks the approach advocated by private_messaging as "the best way", as established by years of experimenting with ways to specify priors over programs. I admit to little knowledge of the controversy, but agree with private_messaging's insistence that the burden of providing the alternative is on you.

Replies from: VAuroch
comment by VAuroch · 2014-02-10T20:12:51.029Z · LW(p) · GW(p)

the approach advocated by private_messaging as "the best way"

What? There is no mention of this anywhere. I have no idea what you're referring to with this phrase.

I'm not going to provide an alternative because it doesn't matter. Using the alternate formulation, where you count every padded program with the same behavior as unique, you get the same results, with the addition of another underlying assumption. By assuming they do not count as the same, I (am attempting to) force potential objections to confront the actual basis for the argument.

So go ahead, if you like. Treat those as distinct and work out the consequences. You'll reach precisely the same conclusions.

comment by kokotajlod · 2014-02-08T17:40:39.075Z · LW(p) · GW(p)

I'm not sure about this. This sounds like a version of this, which I agree with.

What I'm wary of is this:

each of which occurs in a vanishingly specific not-necessarily-reachable circumstance, and nearly every combination of which results in a universe that looks precisely like the simple set of rules.

This sounds like a mathematical claim that isn't yet proven. My intuitions aren't sharp enough to make an educated guess on it either.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T04:36:03.351Z · LW(p) · GW(p)

It is indeed a version of that.

The vanishingly-small edge cases I refer to would be specific descriptions describing something like "At 5:03:05 PM on Tuesday the 7th of March, 3243, if two photons collide at coordinates X,Y,Z, emit three photons each of equal energy to the incoming two in such-and-such an orientation." Something that has an effect at most once during the entirety of the universe, and possibly not at all. This can be specified as an additional behavior in space proportional to the description of the required conditions + space proportional to the description of the required result (or more likely the diff of that result from the ordinary-rules result), which can be very small. And it's obviously true that you can add a huge number of these before any observer could expect to detect enough of them to make a dent in the long-run confidence they had in the correctness of the simple model.

You can treat any difference more systematic than this as a different model while still having enough ability to 'make comments' to force the effective Kolmogorov weighting.

Replies from: private_messaging, kokotajlod, Scott Garrabrant
comment by private_messaging · 2014-02-09T08:11:59.205Z · LW(p) · GW(p)

Why would that be more common than "create a giant black hole at the same coordinates"? Or an array of black holes, spaced by 10 light years, or the like; you get the idea.

You need to establish that little differences would be more common than giant differences I described.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T19:55:57.631Z · LW(p) · GW(p)

You need to establish that little differences would be more common than giant differences I described.

No, I don't, because they don't have to be more common. They just have to be common. I didn't include black holes, etc. in the simple version, because they're not necessary to get the result. You could include them in the category of variations, and the conclusion would get stronger, not weaker. For most observers in the universe, there was always a giant black hole there and that's all there is to it.

The set of small variations is a multiplier on the abundance of universes which look like Lawful Universe #N. The larger the set of small variations, the bigger that multiplier gets, for everything.

Replies from: private_messaging, abramdemski
comment by private_messaging · 2014-02-12T16:56:46.676Z · LW(p) · GW(p)

No, I don't, because they don't have to be more common.

You've been trying to show that "universes where the correct generalization is the simple rules are more common than universes where the correct generalization is the complex rules", and you've been arguing that this is still true when we are not considering the longer programs that are exactly equivalent to the shorter programs.

That, so far, is an entirely baseless assertion. You only described - rather vaguely - some of the more complex programs that look like simpler programs, without demonstrating that those programs are not grossly outnumbered by the more complex programs that look very obviously more complex. Such as, for example, programs encoding a universe with apparent true randomness - done using the extra bits.

That being said, our universe does look like it has infinite complexity (due to apparent non-determinism), and as such, infinite complexity of that kind is not improbable. E.g. I can set up a short TM tape prefix that will copy all subsequent bits from the program tape to the output tape. If you pick a very long program at random, it's not very improbable that it begins with this short prefix, and thus corresponds to a random universe with no order to it whatsoever. The vast majority of long programs beginning with this prefix will not correspond to any shorter program, as random data is not compressible on average. Perhaps a point could be made that most very long programs correspond to universes with simple probabilistic laws.

Replies from: VAuroch
comment by VAuroch · 2014-02-12T22:00:41.845Z · LW(p) · GW(p)

You only described - rather vaguely - some of the more complex programs that look like simpler programs, without demonstrating that those programs are not grossly outnumbered by the more complex programs that look very obviously more complex. Such as, for example, programs encoding an universe with apparent true randomness - done using the extra bits.

No, that's not it at all. I have described how, for every complex program that looks complex, you can construct a large number of equally complex programs that look simple, and therefore should expect simple models to be much more common than complex ones.

Replies from: private_messaging
comment by private_messaging · 2014-02-12T22:57:26.253Z · LW(p) · GW(p)

You'd need to show that for every complex-looking program, you can make >=n simple-looking programs which do not overlap with the simple-looking programs you're constructing for another complex-looking program. (Because it won't do if for every complex-looking program you're constructing the same, say, 100 simple-looking programs.) I don't even see a vague sketch of an argument for that.

edit: Hell, you haven't even defined what constitutes a complex-looking program. There's a trivial example: all programs beginning with the shortest prefix that copies all subsequent program bits verbatim onto the output tape. These programs are complex-looking in the sense that the vast majority of them do not have any simpler representation than they are. Those programs are also incredibly numerous.

edit2: also, the whole argument completely breaks down at infinity. Observe: for every even integer, I can construct 10 odd integers (10n+1, 10n+3, ...). Does that mean a randomly chosen integer is likely to be odd? No.

Replies from: VAuroch
comment by VAuroch · 2014-02-13T02:12:26.741Z · LW(p) · GW(p)

Because it won't do if for every complex looking program, you're constructing the same, say, 100 simple looking programs.

That is exactly what I've done, and it's sufficient. The whole point is to justify why the Kolmogorov measure for apparent universe probability is justified starting from the assumption that all mathematical-object universes are equally likely. Demonstrating that the number of additional copies that can be made of a simpler universe relative to the more complex one is in direct proportion to the difference in Kolmogorov complexity, which is what I have done, is sufficient.

Replies from: private_messaging
comment by private_messaging · 2014-02-13T17:06:01.125Z · LW(p) · GW(p)

You know, it'd be a lot more helpful if it was anything remotely close to "done" rather than vaguely handwaved, with some sort of fuzzy (mis)understanding of the terms being discussed at its core. What does "difference in Kolmogorov complexity" even mean when your program of length L does not have any equivalents of length <L? If it has no simpler equivalent, its Kolmogorov complexity is L.

Given a program describing some "simple rules" (whatever that means, anyway), one can make a likewise large number of variations where, instead of a single photon being created somewhere obscure or under some hard-to-reach conditions, photons are created on a randomly spaced regular lattice over some space of conditions, for example, with some specific spacing of the points of that lattice. Which is very noticeable, and does not locally look like any "simple rules" to much of anyone.

edit: note that most definitions of a Turing machine do not have pointers, and the head moves one step at a time, which actually makes it very nontrivial to make highly localized, surgical changes to data, especially in the context of a program that's applying the same rules everywhere. So it is not obviously the case that a single point change to the world would take less code than something blatantly obvious to the inhabitants.

comment by abramdemski · 2014-02-10T08:04:26.817Z · LW(p) · GW(p)

This doesn't look remotely like a mathematical proof, though.

Replies from: VAuroch
comment by VAuroch · 2014-02-10T19:58:10.008Z · LW(p) · GW(p)

Who said anything about a mathematical proof? I linked a more formal exposition of the logic in a more abstract model elsewhere in this comment thread; this is an application of that principle.

comment by kokotajlod · 2014-02-09T15:58:26.857Z · LW(p) · GW(p)

private_messaging beat me to the point.

comment by Scott Garrabrant · 2014-02-09T05:03:40.864Z · LW(p) · GW(p)

The amount of space it takes to encode that difference puts a bound on the number of such differences there can be, and they add up to less than, or at least less than a constant times, the weight of the original simple model.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T07:10:48.837Z · LW(p) · GW(p)

The amount of space it takes to encode that difference puts a bound on the number of such differences there can be.

For a given description length, yes, but since there is no necessary bound on description length, there is not a necessary limit to the number of possible differences. As your description length devoted to 'comments' increases, you can make the responses and circumstances ever more specified, multiplying the number of worlds which resemble the simpler world, relative to the more complex one.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T07:37:54.289Z · LW(p) · GW(p)

Yeah, that comment was when I thought you were talking about a simplicity weighting, not the naive weighting.

comment by Scott Garrabrant · 2014-02-08T17:37:41.441Z · LW(p) · GW(p)

I do not think this works. All the complex worlds which look like this simple world put together (with Kolmogorov weight) should sum up to less than the simple alternative.

There are infinitely many ways to perturb a simple set of rules, but they get much more complex very quickly.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T04:38:19.803Z · LW(p) · GW(p)

The point is that this is a derivation for Kolmogorov weighting. These universes are weighted naively; every universe is weighted the same regardless of complexity. From this naive weighting, after you add the nearly-undetectable variations, the Kolmogorov measure falls out naturally.

kokotajlod linked this, which uses the same logic on a more abstract example to demonstrate how the Kolmogorov measure arises. It's significantly better written than my exposition.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T05:08:05.044Z · LW(p) · GW(p)

Ok, I was completely misunderstanding you.

The claim that we are probably in a complex world makes sense in the naive weighting, but I do not see any motivation to talk about that naive weighting at all, especially since it only seems to make sense in a finite multiverse.

Replies from: VAuroch
comment by VAuroch · 2014-02-09T07:36:59.298Z · LW(p) · GW(p)

Now I don't understand you. I'll take a stab at guessing your objection and explaining the relevant parts further:

This logical derivation works perfectly well in an infinite universe; it works for all finite description lengths, and that continues in the limit, which is the infinite universe. Every "lawful universe" is the center of a cluster in universe-space, and the size of the cluster is proportional to the simplicity (by the Kolmogorov criterion) of the central universe. This is most easily illustrated in the finite case, but works just as well in the infinite one.

I do implicitly assume that the naive weighting of 'every mathematical object gets exactly one universe' is inherently correct. I don't have any justification for why this should be true, but until I get some I'm happy to treat it as axiomatic.

The conclusion can be summarized as "Assume the naive weighting of universes is true. Then from the inside, the Kolmogorov weighting will appear to be true, and a Bayesian reasoner should treat it as true."

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T08:06:45.593Z · LW(p) · GW(p)

I think I completely understand your position. Thank you for sharing. I agree with it, modulo your axiom that the naive weighting was correct.

All of my objections before were because I did not realize you were using that naive weighting.

I do have two problems with the naive weighting:

First, it has the problem that description length is a man-made construct, and is dependent on your language.

Second, there are predicates we can say about the infinite descriptions which are not measurable, and I am not sure what to do with that.

comment by abramdemski · 2014-02-10T09:03:35.407Z · LW(p) · GW(p)

My personal objection to this seems to be: "But what if some things DO exist?" I.e., what if existence actually does turn out to be a thing, in limited supply?

Let me try to unfold this gut reaction into something which makes more sense.

I think what I'm "really" saying is that, for me, this view does not add up to normality. If I were to discover that "existence" is meaningless as a global predicate, and (to the extent that it's meaningful) all possibilities exist equally, I would change the way I care about things.

I think I value variety, so I would be happy at this news: I would have a nice, comfortable lower bound on my utility function, contributed by all the other universes. (As far as variety goes, I can now focus my energy on optimizing the local branches containing me.)

I would think about death differently: before learning about the meaninglessness of existence, I placed inherent value on my life, and also placed value on what I might contribute to the world, and (almost as a side note) also would be sad about my family missing me if I was gone. Now it's about my family and friends, who would miss me in a large measure of branches. My unique contributions to society, however, are guaranteed to be preserved in some branches; that part of me survives any accident. (This is similar to the quantum suicide argument.)

I would also be excited about the possibility of interaction between parallel universes, since for any two Turing machines, there is a third which runs them in parallel but with some form of interaction (in fact, every possible form of interaction).

All told, my value function would not be anywhere close to equivalent to a Solomonoff weighting!

Replies from: abramdemski
comment by abramdemski · 2014-02-11T00:51:10.666Z · LW(p) · GW(p)

Additionally, it's interesting to note that I treat the idea (that existence has no referent) as a hypothesis, and it's a hypothesis which I believe I can collect evidence for and against:

The book "theory of nothing" by Russel Standish contains a derivation of the Schrodinger equation from axioms which are supposed to be purely about an ideal observer and the process of observation. If this argument were convincing, I would take it as strong evidence for the hypothesis: if we can derive the (otherwise strange and surprising) workings of quantum mechanics from the assumption that we're an ideal observer sitting inside of Tegmark IV, that's decent evidence for Tegmark IV.

For better or worse, the argument isn't so convincing: my main objection is that it slips complex numbers in as an assumption, rather than deriving them from the assumptions. So, the complex-valued wave function is not explained.

There is a derivation of quantum logic from the logic of self-reference, and I consider this to be very weak evidence for the hypothesis.

Bruno Marchal does work along these lines, and discusses a world-view with some similarities to what Coscott proposes.

comment by Scott Garrabrant · 2014-02-09T22:11:06.372Z · LW(p) · GW(p)

Thank you all so much for all of your comments.

Three separate comment threads simultaneously led to the objection that I seem to be unfairly biased towards agents that happen to be born in simple universes. I think this is a good point, so I am starting a new comment thread to discuss that issue. Here is my counterpoint.

First, notice that our finite intuitions do not carry over nicely here. In saying that beings in universes with 20 fewer bits are a million times as important, I am not saying that the happiness of this one person is more important than the happiness of those million people over there. Instead, I am pointing at two infinite and unmeasurable clusters of universes, and saying that this cluster is a million times as important as that other cluster. Because there is no measure on these clusters, there is no fact of the matter as to whether one cluster is a million times as large as another. In finite collections, you do not have this issue, but with infinite collections, there could be a million-to-one map from one cluster to another, and a million-to-one map in the other direction. To judge me as unfair, you must put a measure on the collection of universes by which to judge.

So, what measure should we put on the collection of universes to judge this fairness? It may look like my measure is unfair because it is not uniform, giving much more weight to simple universes. However, I argue that it is the most fair. If your collection of universes is described in some language on an infinite tape, I am giving a uniform distribution of weight over all infinite tapes. However, this means that universes with simple finite descriptions can ignore most of the tape and show up more often in the uniform distribution over infinite tapes. What looks unfair to you is actually the uniform weighting in a very slightly different (and perhaps more natural) model -- the model that VAuroch argues for in his comments here.

Replies from: kokotajlod
comment by kokotajlod · 2014-02-10T02:02:36.035Z · LW(p) · GW(p)

I think that Solomonoff Induction already gets us to the conclusion we want, with the problem that it is relative to a language. So one way to put your point would be this: "There is no fact of the matter about which language is 'right,' what really exists is an infinite unordered jumble of universes. In order to think about the jumble, much less describe values over it, we must fix a language with which to describe it. Why not pick a language that favors a certain pleasing kind of simplicity? And hey, if we do this, then thanks to SI it all adds up to normality!"

Retreating into the impregnable swamp of infinity may save you here, but it is a dubious move in general. Compare to someone who thinks that they will win the lottery, because they believe in a Big World and thus there are infinitely many copies/futures that will win the lottery.

Edit: Thank you, by the way, for this conversation and discussion. I'm very interested in this topic and I like the way you've been thinking about it. I hope we can continue to make progress!

comment by kokotajlod · 2014-02-08T17:46:54.299Z · LW(p) · GW(p)

As I understand it, your attempted solution to the Problem of Induction is this:

(a) Deny that there is a fact of the matter about what our future experiences will be like

(b) Care about things in inverse proportion to the Kolmogorov complexity of the structure in which they are embedded.

This is why it all adds up to normality. Without (a), people could say: Go ahead, care about whatever you want, but under your belief system you ought to expect the world to dissolve into high-complexity chaos immediately! And without (b), people could say: Go ahead, deny the existence of a future. But the vast majority of your counterparts affected by your actions inhabit complex, chaotic worlds; one implication of this is that you should live life in the moment.

Is this correct?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T17:53:57.517Z · LW(p) · GW(p)

That sounds correct.

Replies from: kokotajlod, diegocaleiro
comment by kokotajlod · 2014-02-09T15:56:20.018Z · LW(p) · GW(p)

Okay. I've considered this view for a while now, but I can't bring myself to hold it.

I'm mostly OK with step (a), though I still have niggling doubts.

Step (b) is the problem. My values/utility function just doesn't work like that. I realize that if I choose my values appropriately, I can make the mathematical multiverse add up to normality.

But that's nothing special--give me any wacky set of beliefs about the world, and I can choose values such that it all adds up to normality.

I'm having trouble seeing why people in complicated, anti-inductive worlds are less valuable than people in simple, inductive worlds. Maybe they are less beautiful in some abstract aesthetic sense, but they aren't less valuable in the relevant moral sense--if I can help them, I should, and I ought to feel bad if I don't.

Replies from: Scott Garrabrant
comment by diegocaleiro · 2014-02-08T19:26:05.743Z · LW(p) · GW(p)

There should be an alarm bell when he said "majority of your counterparts" and you accepted that.

Regardless, I'm curious, because if you truly think all mathematical structures exist, shaped and countable in proportion to how many Kolmogorov descriptions account for each, then you should not care about nearly anything and should live in the moment, since nothing you do will ever change which Kolmogorov descriptions exist or fail to exist, in your deflated conception of existence.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T19:43:56.218Z · LW(p) · GW(p)

I was interpreting his rejection of (b) as being replaced by something else in which "majority of your counterparts" makes sense.

I do not agree with this.

The counterfactual in which I thought I was agreeing that you should live in the moment was a scenario in which there was some other distribution in which most worlds are complex (i.e., uniform over all expressible programs).

comment by Squark · 2014-02-08T19:58:13.548Z · LW(p) · GW(p)

Interesting approach. The way I would put it, the number you want to maximize is expectation of U(X) where U is a utility function and X is a random universe taken from a Solomonoff ensemble. The way you put it, the number is the same but you don't interpret the sum over universes as expectation value, you just take it to be part of your utility function.

What I feel is missing in your approach is that U by its nature is arbitrary / complex, whereas the Solomonoff prior is simple and for some reason has to be there. I.e. something would go wrong on the philosophical level (not just the pragmatic / normality level), if you throw Solomonoff out of the window. But, at the moment I can't say what that would be - it's just a hunch.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T20:19:08.038Z · LW(p) · GW(p)

I have exactly the opposite view.

I think that the Solomonoff is arbitrary and complex, and therefore I doubt it has any real meaning outside of my arbitrary and complex mind.

comment by moridinamael · 2014-02-09T03:28:04.186Z · LW(p) · GW(p)

So why haven't you tried walking into a casino and caring really hard about winning? I'm not just being a prick here, this is the most concise way I could think to frame my core objection to your thesis.

Replies from: Squark, Scott Garrabrant
comment by Squark · 2014-02-09T07:12:56.704Z · LW(p) · GW(p)

This is an incorrect interpretation of Coscott's philosophy. "Caring really hard about winning" = preferring winning to losing. The correct analogy would be "Caring about [whatever] only in case I win". The losing scenarios are not necessarily assigned low utilities: they are assigned similar utilities. This philosophy is not saying: "I will win because I want to win". It is saying: "If I lose, all the stuff I normally care about becomes unimportant, so when I'm optimizing this stuff I might just as well assume I'm going to win". More precisely, it is saying "I will both lose and win but only the winning universe contains stuff that can be optimized".

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T07:40:23.954Z · LW(p) · GW(p)

I agree with this comment. Thanks.

comment by Scott Garrabrant · 2014-02-09T03:42:35.120Z · LW(p) · GW(p)

That question is equivalent to asking "Why have you not tried just changing your values to be satisfied with whatever the truth is?"

I could possibly reprogram myself to have different beliefs, but I do not want to do that. Even if I replace myself with someone else who has different beliefs, I still will not be happy in the future worlds that the present me cares more about. (the simple ones)

Replies from: moridinamael
comment by moridinamael · 2014-02-09T04:21:37.801Z · LW(p) · GW(p)

I guess I'm a little confused. Would you be less happy because your knowledge that you possess reality manipulation magic would make you disillusioned about the simplicity of your universe? You think you would be less happy with millions of dollars because you knew that you won it through reality manipulation magic, plus verification that you have reality manipulation magic, than you would be persisting in a universe indistinguishable from one which obeys causal laws?

Regardless of this misunderstanding, I think you have simply expressed a definition of existence which is meaningless. Or rather, your subjective expectations have nothing at all to do with the quantity that you are describing, but you are conflating them under the same concept-label of "existence."

You can't steer your subjective experience into preferred branches of existence merely by wishing it, because you're embedded in a causal universe that just goes on being causal around you without too much regard for your preferences. If you put in a lot of work changing the universe around you to be more suitable to your needs, only then can you steer your subjective experiences toward desirable outcomes. At no point does this look like the future automatically shaping up to be more the way you prefer.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T05:17:24.002Z · LW(p) · GW(p)

My point is that I do not change my beliefs to believe I will win the lottery for the same reason that you, under your model of the multiverse, do not change your preferences to make yourself happy with the world as is. I was trying to communicate that reason, but instead, let me just say that my model does not add any extra issues that you did not already have with the question "Why don't you just try really hard to not care about the things that bother you?" in everyone else's model.

I am not understanding the rest of your post either.

I am not trying to put anything under the concept-label of "existence." I am trying to say that "existence" is a meaningless concept in the first place.

I can't steer my subjective experience, because there is no one future subjective experience. However I can steer the experience of my future self within the simple universes, which is exactly what I want to do.

comment by mwengler · 2014-02-11T01:15:32.691Z · LW(p) · GW(p)

Ever since learning the Anselmian ontological argument for the existence of god, I have tended towards the Kantian idea that existence is not a predicate. A ball that is not red still bounces; a ball that does not exist... doesn't exist. I'll say right up front I am not a philosopher, but rather a physicist and an engineer. I discard ideas that I can't use to build things that work; I decide that they are not worth spending time on. I don't know what I can build using the idea that a ball that does not exist is still a ball. And if I can build something using that idea, I'm not sure the thing that I can build exists.

So once I've spent enough time discussing something with A or B or anyone else for that matter to determine that they have no particular preference for things that exist over things that don't exist, I lose interest in discussing it with them further.

Actually, this is only ALMOST true. I'll discuss things that don't exist with someone who seems to be doing something clever with it, but I ultimately expect to be disappointed, to conclude, as has happened in the past, that I have ultimately learned nothing of value from such a discussion. Or at least that whatever I have learned does not exist.

comment by torekp · 2014-02-08T18:14:05.190Z · LW(p) · GW(p)

What are the semantics of "exist" on this view? When people say things like "Paris exists but Minas Tirith doesn't", are they saying something meaningful? True? It seems like such statements do convey actual information to someone who knows little about both a French history book and a Tolkien book. Why not just exercise some charitable interpretation upon your fellow language users when it comes to "exist"? We use "existence" concepts to explain our experiences, without any glaring difficulties that I can see (so a charitable interpretation is likely possible). But maybe I've missed some glaring difficulties.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T18:21:15.650Z · LW(p) · GW(p)

I think the definition of "exist" in the statement "Paris exists but Minas Tirith doesn't" is "exists within the visible universe." I think that that sentence still has meaning, just as normal.

I am rejecting that there is some objective global meaning to existence. "Exists within the visible universe" is a useful concept, and makes sense because it is defined relative to me as an observer.

Replies from: torekp
comment by torekp · 2014-02-09T17:29:41.819Z · LW(p) · GW(p)

Excellent. I favor something like "X will be implied by our best explanation of experience" rather than "X is within the visible universe", but I think broadly speaking, we're in agreement here. But note that this definition of "existence" allows me to dismiss most of Tegmark's Level IV objects as nonexistent. (Your version would allow the dismissal of more.) And of course I'm also free not to share the preference pattern of caring less about events/life-stories in proportion to their complexity.

Replies from: Scott Garrabrant
comment by Gunnar_Zarncke · 2014-02-08T03:09:04.997Z · LW(p) · GW(p)

Reminds me of some discussions I had long ago.

The general principle was: Take some human idea or concept and drive it to its logical conclusion. To its extreme. The result: it either became trivial or stopped making sense.

The reason for this is that sometimes you can't separate out components of a complex system and try to optimize them in isolation without losing important details.

You can't just optimize happiness. It leads to wireheading, and that's not what you want.

You can't just optimize religious following. It leads to crusades and witch hunts.

You can't just maximize empathy, harmless as that sounds. Humans can empathize with anything. There are monks that try to avoid hurting bugs (good that they didn't know of bacteria). Empathy driven to the extreme leads to empathy with other beings (here: simulated, even lied-about, life forms) that don't match up with complex values.

Don't hurt your complex values.

Somewhere on the way from

  • your future self

  • your kin

  • your livestock

  • children

  • innocents

  • strangers

  • foreigners

  • enemies

  • the biosphere

  • higher animals

  • lower animals

  • plants

  • bacteria

  • hypothetical life forms

  • imaginary life forms

you should consider that your empathy diverged from being integrated well with your other values.

(yes, it's not linear but a DAG)

Replies from: abramdemski, Squark, Scott Garrabrant
comment by abramdemski · 2014-02-08T07:01:20.063Z · LW(p) · GW(p)

There are monks that try to avoid hurting bugs (good that they didn't know of bacteria)

These monks still exist. They have been informed.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-02-08T10:42:58.891Z · LW(p) · GW(p)

Sure they know now.

But when the monks' belief structure formed, they didn't know. Now that it is a given, they incorporate it in a suitable compartment, as the rest is obviously stable.

Replies from: abramdemski
comment by abramdemski · 2014-02-10T08:29:55.213Z · LW(p) · GW(p)

They (not just the monks-- many of the ordinary practitioners) put filters on water faucets to drink as few bacteria as possible; don't eat root crops because harvesting them disturbs the soil too much; may refuse antibiotics; and in general, do everything they think is within reason to minimize killing.

There's probably some compartmentalization, but not a huge amount in that area.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-02-10T19:12:42.515Z · LW(p) · GW(p)

I take this as a piece of factual information. I do not assume that you want to imply some lesson with it.

Replies from: abramdemski
comment by abramdemski · 2014-02-11T00:55:15.162Z · LW(p) · GW(p)

Yeah, I'm just providing additional information because your model of the Jains seemed incomplete.

Jainism

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-02-11T10:39:39.013Z · LW(p) · GW(p)

Very incomplete. I just relayed the anecdote about the monks to add color.

Interesting. Jain texts should be read by vegetarians to put them into perspective, I guess.

comment by Squark · 2014-02-09T07:15:24.523Z · LW(p) · GW(p)

Coscott's philosophy is more or less orthogonal to the choice of empathy. E.g. a "Coscottist" might only care about her kin, but she will care more about versions of her kin embedded in simple universes.

comment by Scott Garrabrant · 2014-02-08T03:24:27.934Z · LW(p) · GW(p)

Would you mind elaborating on why this reminded you of those discussions?

Is it because in the first half of the dialogue I am following a chain, starting with a belief and tweaking it little by little, and ending up with a similar belief in a different circumstance?

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-02-08T04:16:53.806Z · LW(p) · GW(p)

No. Because you explored the extremes of empathy toward beings.

You don't follow a path going from less to more extreme as I outlined, but you explore an extreme in different aspects nonetheless.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T04:30:04.024Z · LW(p) · GW(p)

Oh, ok, I get the connection now, thanks.

My main point was not that I care about non-existing things in any way that conflicts with caring about existing things. My point was that believing in the concept of existence is not necessary for having preferences. I suppose I would agree with

A: Hmm, I guess I want [happiness of non-existing agents] too. However, that is negligible compared to my preferences about things that really do exist.

If not for the fact that I don't think existence has any meaning.

comment by Brian_Tomasik · 2014-02-08T02:57:24.069Z · LW(p) · GW(p)

Nice dialogue.

It's true that probability and importance are interchangeable in an expected-utility calculation, but if you weight A twice as much as B because you care twice as much about it, that implies equal probabilities between the two. So if you use a Solomonoff-style prior based on how much you care, that implies a uniform prior on the worlds themselves. Or maybe you're saying expected utility is the sum of caring times value, with no probabilities involved. But in that case your probability is just how much you care.
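
(A rough illustration of the interchangeability point, not something stated in the thread; p, c, and V below are just placeholder symbols for probability, caring weight, and value:)

    EU = \sum_w p(w)\, c(w)\, V(w)

Only the product p(w)c(w) enters the sum, so "equal probabilities with a 2:1 caring ratio" and "probabilities 2/3 and 1/3 with equal caring" give exactly the same expected utility; the split into probability and caring is underdetermined by behavior.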

If we were in a complex world, it's plausible you could do more for your values by choosing actions that correlate with actions in the much more important simpler world rather than actions that have good consequences in this world. Computing which those are would take a lot of effort, though, so in practice, you'd be doing the same sorts of things in the short run (i.e., working toward better futures).

What it means to exist is one area of metaphysics that still confuses me, but the Tegmark Level IV picture seems to make sense. In that case, rather than measure being unimportant, measure is all that matters, because our actions help determine which possible worlds have more and less measure.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T03:20:34.555Z · LW(p) · GW(p)

Nice dialogue.

Thanks!

I am saying that expected utility is the sum of caring times value, with no probabilities involved. If there are going to be any probabilities involved at all, they will come from logical uncertainty, which is a separate issue.

This can be thought of as saying that your probability is just how much you care, which is how I think about it. However, this has some philosophical consequences. It means that probabilities really are completely subjective. It also means that trying to talk about tautologies outside of the context of a person's beliefs is completely a mind projection fallacy. This explains some issues that I have with anthropics, by dismissing them as ill formed questions.
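
(A minimal sketch of what "caring times value, with no probabilities" could look like, purely as an illustration; the function names and the complexity-based caring weight are assumptions of the sketch, not Coscott's definitions:)

    # Sketch: total "expected" utility is a sum of caring(world) * value(world),
    # with no probability distribution anywhere. All names are illustrative.
    def caring(description):
        # One possible caring function: simpler (shorter-description) worlds
        # get exponentially more weight. This choice is an assumption.
        return 2.0 ** (-len(description))

    def evaluate(worlds, value):
        # worlds: list of (description, outcome) pairs
        return sum(caring(d) * value(o) for d, o in worlds)

    toy_worlds = [("game of life", 1.0), ("a much more complicated rule set", 1.0)]
    print(evaluate(toy_worlds, lambda v: v))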

If we were in a complex world, it's plausible you could do more for your values by choosing actions that correlate with actions in the much more important simpler world rather than actions that have good consequences in this world. Computing which those are would take a lot of effort, though, so in practice, you'd be doing the same sorts of things in the short run (i.e., working toward better futures).

This is plausible, but I do not think it is likely to be possible, unless there is a simulation of you in the simple universe, in which case you should assign some "probability" to being in that universe. As for just choosing actions which correlate with the actions of simpler agents, purely by the mechanism of you being similar to those agents: I do not think this will work, because your knowledge that your world is the more complex one is enough to make your decision procedures sufficiently different.

What it means to exist is one area of metaphysics that still confuses me, but the Tegmark Level IV picture seems to make sense. In that case, rather than measure being unimportant, measure is all that matters, because our actions help determine which possible worlds have more and less measure.

I might be wrong, but I believe that the Tegmark Level 4 multiverse does not, by his definition, come with a prior. When you say this, you are talking about Tegmark 4 with the Solomonoff prior, right?

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-02-08T05:41:25.460Z · LW(p) · GW(p)

This explains some issues that I have with anthropics, by dismissing them as ill formed questions.

I was going to ask how you handle anthropics, but then you answered it. Trippy stuff.

If probability is just degree of caring, why would we use Bayes' rule to update? Or are you also proposing not to update?

I do not think this will work, because you having the knowledge that your world is the more complex one is enough to make your decision procedures sufficiently different.

It probably works sometimes for outputs that aren't related to knowledge of how complex your world is. For instance: Consider a simulation of you at the neuronal level making some decision. It produces some output. Then there's another simulation down to the molecular level. It produces some slightly more accurate output. Then another at the quantum level; it's yet again slightly more accurate. If you're the quantum-level one, your output correlates highly with the neuronal-level one, which is much simpler, so you care about it vastly more.

When you say this, you are talking about Tegmark 4 with the Solomonoff prior, right?

Not necessarily Solomonoff, especially since the multiverse of mathematical structures doesn't conform to a Solomonoff distribution. The set of finite bitstrings is countably infinite, but the set of mathematically possible universes is uncountably infinite, e.g., universes where some parameter is set to each possible real number. I just meant some measure over the universes. If you restrict Tegmark 4 to bitstring universes, then Solomonoff could work.
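
(For reference, the Solomonoff-style weight under discussion is roughly the following, where U is a universal prefix machine and p ranges over finite programs; this is a standard formulation rather than something stated in the thread:)

    M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

Because the programs p are finite strings, only countably many universes can ever receive weight this way, which is why restricting Tegmark 4 to bitstring universes makes Solomonoff workable.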

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T05:55:36.249Z · LW(p) · GW(p)

If probability is just degree of caring, why would we use Bayes' rule to update? Or are you also proposing not to update?

I do not update, but in the exact same sense that Updateless Decision Theory does not update. It all adds up to Bayes'-rule normality.

It probably works sometimes for outputs that aren't related to knowledge of how complex your world is.

Again, plausible, but I am really not sure. Either way, I feel like we do not understand nearly enough to have this strategy of sacrificing our own world for other worlds be practical right now.

The set of finite bitstrings is countably infinite, but the set of mathematically possible universes is uncountably infinite, e.g., universes where some parameter is set to each possible real number.

This is one of the beauties of my proposal: if we do not have to assign probabilities to possible universes, we don't have to limit ourselves to an uncountable infinity. The collection of universes does not even have to be a set!

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-02-08T07:37:26.632Z · LW(p) · GW(p)

This is one of the beauties of my proposal: if we do not have to assign probabilities to possible universes, we don't have to limit ourselves to an uncountable infinity.

Hmm. Seems like your caring-about measure should still sum to 1. If you're just comparing two universes, all you need to know is their relative importance, but if you want to evaluate policies over the whole set of universes, you're going to want a set of weights whose sum is bounded.
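
(A schematic example of such a bounded set of weights, assuming you can enumerate the universes you actually evaluate; the 2^{-n} choice is purely illustrative:)

    \sum_{n=1}^{\infty} 2^{-n} = 1

If the n-th universe in your enumeration gets caring weight 2^{-n}, the weights sum to 1, so bounded per-universe utilities give a finite evaluation of any policy; a uniform weight over infinitely many universes has no such bounded sum.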

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T07:57:44.080Z · LW(p) · GW(p)

Great point.

However, even if the multiverse is infinite, or not even a set, I as a finite mind can only look at finite pieces of it. My caring function looks at a small piece of the multiverse, because it cannot comprehend the whole thing. This is sad. However, it does not feel arbitrary to me. The caring function has limitations of finiteness, or limitations of set theory, but that is MY limit function. There is a big difference to me between me having a limited caring function, and thinking that the universe has built in a limited probability function.

Replies from: Brian_Tomasik, wedrifid
comment by Brian_Tomasik · 2014-02-08T08:14:47.895Z · LW(p) · GW(p)

I see. :) You can do that, and it's psychologically plausible.

I'm old-school and still believe there's some fact of the matter about what the multiverse is. Presumably this fact of the matter is representable analytically (though not necessarily by human minds). If we found a better mathematical way to capture this, presumably your limitations would expand and you would then care about more than you do now.

Replies from: abramdemski, Scott Garrabrant
comment by abramdemski · 2014-02-10T08:45:55.445Z · LW(p) · GW(p)

I'm old-school and still believe there's some fact of the matter about what the multiverse is.

I enjoyed this sentence.

If we found a better mathematical way to capture this, presumably your limitations would expand and you would then care about more than you do now.

Only to a point! Suppose Coscott has a policy for accepting new math which applies a critical eye and only accepts things based on consistent criteria. That is: if he would accept X upon inspection, then he would not have accepted not-X via his inspection.

Then consider the "completion" of Coscott's views; that is, the (presumably uncomputable) system which is Coscott in the limit of being taught arbitrarily many new math techniques.

Now apply Tarski's Undefinability. We can construct a more powerful math which Coscott could never accept.

Therefore, if Coscott is a sufficiently careful mathematician, then his math powers seem to have ultimate limits already, rather than mere current limits.

On the other hand, if there is no such limit, because Coscott could accept either X or not-X for some X depending on which is presented to him first, then there is hope! A "stronger" teacher could show Coscott the way to the more powerful math. Yet, Coscott is also at risk of being misled.

Here's a riddle: according to Coscott, is there any way Coscott could be misled? Coscott has established that physical existence is meaningless to him; but is there mathematical truth beyond provability? Is there a branch of the multiverse in which the axiom of choice is true, and one where it is false? (It seems clear that there is a subset of the multiverse where the axiom of choice is true, and one where it is false, but I don't think that's what I mean...)

comment by Scott Garrabrant · 2014-02-08T08:26:47.327Z · LW(p) · GW(p)

What do you mean by "psychologically plausible?"

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2014-02-08T10:40:35.084Z · LW(p) · GW(p)

I mean it's a plausible way to describe how people actually feel and make decisions when acting.

comment by wedrifid · 2014-02-08T11:05:58.806Z · LW(p) · GW(p)

However, even if the multiverse is infinite, or not even a set, I as a finite mind can only look at finite pieces of it. My caring function looks at a small piece of the multiverse, because it cannot comprehend the whole thing.

Your limitation does not inherently prohibit you from having a caring function that looks at the entire multiverse. The constraint is on how complex the pattern of evaluation of the features of the multiverse can be.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-08T17:18:32.922Z · LW(p) · GW(p)

Fair enough. I was aware of that, but did not bother to write it out. Sorry.

comment by Squark · 2014-02-16T10:34:27.946Z · LW(p) · GW(p)

I'm now beginning to think I identified a real flaw in this approach.

The usual formulation of UDT assumes the decision algorithm to be known. In reality, the agent doesn't know its own decision algorithm. This means there is another flavor of uncertainty w.r.t. which the values of different choices have to be averaged. I call this "introspective uncertainty". However, introspective uncertainty is not independent: it is strongly correlated with indexical uncertainty. Since introspective uncertainty can't be absorbed into the utility function, indexical uncertainty cannot either.

I have a precise model of this kind of UDT in mind. Planning to write about it soon.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-16T18:56:09.261Z · LW(p) · GW(p)

I guess I should wait for you to write up your UDT model, but I do not see the purpose of introspective uncertainty.

comment by Raiden · 2014-02-09T03:24:52.643Z · LW(p) · GW(p)

Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that contain the simpler universe as a part?

Replies from: Scott Garrabrant, Raiden
comment by Scott Garrabrant · 2014-02-09T03:52:26.948Z · LW(p) · GW(p)

In order to make claims like that, you have to put a measure on your multiverse. I do not like doing that for three reasons:

1) It feels arbitrary. I do not think the essence of reality relies on something chunky like a Turing machine.

2) It limits the multiverse to be some set of worlds that I can put a measure on. The collection of all mathematical structures is not a set, and I think the multiverse should be at least that big.

3) It requires some sort of inherent measure that is outside of any of the individual universes in the multiverse. It is simpler to imagine that there is just every possible universe, with no inherent way to compare them.

However, regardless of those very personal beliefs, I think that the argument that simpler universes show up in more other universes does not actually answer any questions. You are trying to explain why you have a measure which makes simpler universes more likely by starting with a collection of universes in which the simpler ones are more likely, and observing that the simple ones are run more. This just walks you in circles.

Replies from: Raiden
comment by Raiden · 2014-02-09T04:34:54.443Z · LW(p) · GW(p)

I guess what I'm saying is that since the simpler ones are run more, they are more important. That would be true if every simulation were individually important, but I think one point of this philosophy is that the mathematical entity itself is important, regardless of the number of times it's instantiated. But it still intuitively feels as though there would be more "weight" to the ones run more often. Things that happen in such universes would have more "influence" over reality as a whole.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T05:20:39.067Z · LW(p) · GW(p)

I am saying that in order to make the claim "simple universes are run more," you first need the claim that "most universes are more likely to run simple simulations than complex simulations." In order to make that second claim, you need to start with a measure of what "most universes" means, which you do using simplicity. (Most universes run simple simulations more because running simple simulations is simpler.)

I think there is a circular logic there that you cannot get past.

comment by Raiden · 2014-02-09T03:33:54.166Z · LW(p) · GW(p)

Another thought: Wouldn't one of the simplest universes be a universal Turing machine that runs through every possible tape? All other universes would be contained within this universe, making them all "simple."

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T03:53:41.751Z · LW(p) · GW(p)

Simple things can contain more complex things. The reason the contained thing can still be more complex is that it takes extra bits to specify which part of the simple thing to look at.
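
(A rough way to say this in Kolmogorov-complexity terms, not something from the thread: write D for the simple "run every tape" universe, U for a structure embedded inside it, and ℓ for the number of bits needed to say where inside D to look. Then)

    K(U) \le K(D) + \ell + O(1), \quad\text{hence}\quad \ell \ge K(U) - K(D) - O(1).

Since K(D) is small, the pointer into D carries essentially all of U's complexity: being contained in a simple universe does not make U itself simple.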

Replies from: Raiden
comment by Raiden · 2014-02-09T04:28:12.751Z · LW(p) · GW(p)

What I mean, though, is that the more complicated universes can't be less significant, because they are contained within this simple universe. All universes would have to be at least as morally significant as this universe, would they not?

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T05:21:41.003Z · LW(p) · GW(p)

If I have a world containing many people, I can say that the world is more morally significant than any of the individual people.

Replies from: Squark
comment by Squark · 2014-02-09T07:27:13.591Z · LW(p) · GW(p)

I'm not following you here. I think Raiden has a valid point: we should shape the utility function so that Boltzmann brains don't dominate utility computations. The meta-framework for utility you constructed remains perfectly valid; it's just that the "local" utility of each universe has to be constructed with care (which is true of other meta-frameworks as well). E.g. we shouldn't assign a utility of Graham's number of utilons to a universe just because it contains a Graham's number of Boltzmann brains: that's a Pascal's mugging.

Maybe we should start with a bounded utility function...
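
(One illustrative shape such a bounded function could take, purely as an example and not Squark's actual proposal: with W the total welfare in a universe and W_0 a scale constant,)

    U = \tanh(W / W_0)

stays in (-1, 1) no matter how large W gets, so a Graham's number of Boltzmann brains nudges U toward 1 rather than contributing a Graham's number of utilons.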

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2014-02-09T07:53:50.919Z · LW(p) · GW(p)

I am not sure if Raiden's intended point is the same as what you are saying here. If it is, then you can just ignore my other comment; it was arguing with a position nobody held.

I absolutely agree. The local utility of each universe does have to be constructed with care.

I also have strong feelings that all utility functions are bounded.

I was imagining one utility function for the multiverse, but perhaps that does not make sense (since the collection of universes might not be a set).

Perhaps the best way to model the utility function in my philosophy is to have a separate utility function for each universe, and a simplicity exchange rate between them.
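
(A rough sketch of that last structure; every name below is hypothetical, and the 2^{-complexity} exchange rate is only one illustrative choice, not a definition the author commits to:)

    # Sketch: one bounded "local" utility per universe, combined through a
    # simplicity exchange rate. Names and the weighting are illustrative.
    def exchange_rate(complexity_bits):
        # How much a utilon in this universe counts, relative to one in a
        # maximally simple universe. Simpler universes count for more.
        return 2.0 ** (-complexity_bits)

    def total_utility(universes):
        # universes: list of (complexity_bits, local_utility) pairs, where
        # local_utility is already the carefully constructed, bounded utility
        # of that universe (see the Boltzmann-brain caveat above).
        return sum(exchange_rate(k) * u for k, u in universes)

    print(total_utility([(10, 0.9), (40, 0.9)]))  # the simpler world dominates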