Anthropics in a Tegmark Multiverse
post by paulfchristiano · 2011-04-02T18:34:09.781Z · LW · GW · Legacy · 43 comments
I believe that the execution of a certain computation is a necessary and sufficient condition for my conscious experience. Following Tegmark, by "execution" I don't refer to any notion of physical existence--I suspect that the mathematical possibility of my thoughts implies conscious experience. By observing the world and postulating my own representativeness, I conjecture the following measure on different possible experiences: the probability of any particular experience drops off exponentially with the complexity required to specify the corresponding computation.
It is typical to use some complexity prior to select a universe, and then to appeal to some different notion to handle the remaining anthropic reasoning (to ask: how many beings have my experiences within this universe?). What I am suggesting is to instead apply a complexity prior to our experiences directly.
If I believe a brain embodying my thoughts exists in some simple universe, then my thoughts can be described precisely by first describing that universe and then pointing to the network of causal relationships which constitute my thoughts. If I have seen enough of the universe, then this will be the most concise description consistent with my experiences. If there are many "copies" of that brain within the universe, then it becomes that much easier to specify my thoughts. In fact, it is easy to check that you recover essentially intuitive anthropics in this way.
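To make the bookkeeping concrete, here is a toy sketch (not part of the original post; the bit counts and the helper `experience_weight` are made up for illustration) of how the measure decomposes into "bits to specify the universe" plus "bits to point at the experience", with N interchangeable copies saving roughly log2(N) pointer bits:

```python
import math

def experience_weight(universe_program_bits: float,
                      pointer_bits: float,
                      copies: int = 1) -> float:
    """Toy measure on an experience: 2^-(total description length).

    The description is a program for the universe plus a pointer to the
    network of causal relationships embodying the experience.  With
    `copies` interchangeable instantiations, the pointer needs roughly
    log2(copies) fewer bits, since any one of them will do.
    """
    description_bits = universe_program_bits + pointer_bits - math.log2(copies)
    return 2.0 ** (-description_bits)

# Made-up numbers: a simple universe plus a pointer to one brain, versus
# the same universe containing two copies of that brain.
one_copy = experience_weight(universe_program_bits=100, pointer_bits=50, copies=1)
two_copies = experience_weight(universe_program_bits=100, pointer_bits=50, copies=2)
print(two_copies / one_copy)  # 2.0 -- doubling the copies doubles the measure
```

This is just the intuitive anthropic counting described above: measure scales with the number of copies, until pointer costs change to cancel the effect.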
This prior has a significant impact on the status of simulations. In general, making two simulations of a brain puts twice as much probability on the associated experiences. However, we no longer maintain substrate independence (which I now consider a good thing, having discovered that my naive treatment of anthropics for simulations is wildly inconsistent). The significance of a particular simulation depends on how difficult it is to specify (within the simple universe containing that simulation) the causal relationships that represent its thoughts. So if we imagine the process of "splitting" a simulation running on a computer which is two atoms thick, we predict that (at least under certain circumstances) the number of copies doubles but the complexity of specifying each one increases to cancel the effect.
This prior also gives precise answers to anthropic questions in cosmology. Even in an infinite universe, description complexity still answers questions such as "how much of you is there? Why aren't you a Boltzmann brain?" (of course this still supposes that a complexity prior is applicable to the universe).
This prior also, at least in principle, tells you how to handle anthropics across quantum worlds. Either it can account for the Born probabilities (possibly in conjunction with some additional physics, like stray probability mass wandering in from nearby incoherent worlds) or it can't. In that sense, this theory makes a testable "prediction." If it does correctly explain the Born probabilities, then I will feel significantly more confident in my understanding of quantum mechanics and in this version of a mathematical multiverse. If it doesn't, then I tentatively reject this version of a mathematical multiverse (tentatively because there could certainly be more complicated things still happening in quantum mechanics, and I don't yet know of any satisfactory explanation for the Born probabilities).
Edit: this idea is exactly the same as UDASSA as initially articulated by Wei Dai. I think it is a shame that the arguments aren't more widespread, since it very cleanly resolves some of my confusion about simulations and infinite cosmologies. My only contribution appears to be a slightly more concrete plan for calculating (or failing to calculate) the Born probabilities; I will report back later about how the computation goes.
43 comments
Comments sorted by top scores.
comment by jimrandomh · 2011-04-03T01:34:01.334Z · LW(p) · GW(p)
Unfortunately, "description length" does not uniquely identify a value. To get a description length for a laws-of-physics or a position inside them, you need a description language and a metric over it. Much has been made of the fact that the complexity of a laws-of-physics differs by only an additive constant between description languages (since you can always prefix an interpreter), but unfortunately that additive constant is enough to bend the model into consistency with almost any observed laws-of-physics.
Replies from: torekp
comment by Wei Dai (Wei_Dai) · 2011-04-03T01:51:21.326Z · LW(p) · GW(p)
Seems like we have a lot of similar ideas/interests. :) This was my main approach for solving anthropic reasoning, until I gave it up in favor of UDT.
Here are some previous related discussions.
- http://www.finney.org/~hal/udassa/summary1.html
- http://lesswrong.com/lw/1g4/tips_and_tricks_for_answering_hard_questions/1gwq?c=1
- http://lesswrong.com/lw/py/the_born_probabilities/141m
↑ comment by paulfchristiano · 2011-04-03T02:29:37.242Z · LW(p) · GW(p)
I am more optimistic than you about how the Born probabilities could arise from induction. You seem to suggest that any other probability rule would be less likely to produce sentience, so most observer moments will be selected according to the Born rule. This seems initially plausible, but it is quite unsatisfying and I am not very confident that the argument actually goes through.
I am hoping that the number of possible specifications of any observer moment (once the universal wavefunction has already been concisely described) depends on the squared norm of the projection of the universal wavefunction onto the subspace corresponding to that observer moment. For any particular way of picking out observer moments, I think this is essentially a geometrical statement which is either easy to verify or to refute. Intuitively I think it may hold for the most natural way of specifying an observer moment given a state.
Of course, although this would be fairly compelling, you would still have to rule out other efficient ways of specifying an observer moment once you have the universal wavefunction.
comment by Manfred · 2011-04-02T19:13:07.361Z · LW(p) · GW(p)
Let's say you flip a coin ten times. Would you expect ten heads to occur more often than 1 in 2^10 in your experience, because it is simpler to think "ten heads" than it is to think "HTTHTHHHTH"?
Replies from: Plasmon, endoself, paulfchristiano
↑ comment by Plasmon · 2011-04-02T19:48:13.568Z · LW(p) · GW(p)
Does your point remain valid if you take a realistic distribution over coin imperfections into account?
Possibly irrelevant calculation follows (do we have hide tags? Apparently not)
Suppose we have the simplest sort of deviation possible: let alpha be a small bias, so that P(heads) = 1/2 + alpha.
P(10 heads) = (1/2+alpha)^10
P(HTTHTHHHTH) = (1/2+alpha)^6*(1/2-alpha)^4
Remarkably (?)
dP(10 heads)/dalpha = 5/256 at alpha=0
dP(HTTHTHHHTH)/dalpha = 1/256 at alpha=0
It seems that simple coin deviations (which are by hypothesis the most probable) have a stronger influence on simple predictions such as P(10 heads) than on complicated predictions such as P(HTTHTHHHTH).
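The two derivatives above can be checked symbolically; a minimal sketch (sympy is just one convenient choice):

```python
import sympy as sp

a = sp.symbols('alpha')

# Biased coin: P(heads) = 1/2 + alpha
p_ten_heads = (sp.Rational(1, 2) + a) ** 10  # ten heads in a row
p_sequence = (sp.Rational(1, 2) + a) ** 6 * (sp.Rational(1, 2) - a) ** 4  # HTTHTHHHTH: 6 heads, 4 tails

# Sensitivity of each probability to the bias, evaluated at alpha = 0
print(sp.diff(p_ten_heads, a).subs(a, 0))  # 5/256
print(sp.diff(p_sequence, a).subs(a, 0))   # 1/256
```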
↑ comment by endoself · 2011-04-02T20:54:54.979Z · LW(p) · GW(p)
Applying this to the real world, the theory predicts that I should expect myself at my current moment to be Kolmogorov simple. I don't feel particularly simple, but this is different from being simple. There is only strong evidence against the theory if it is probable that simplicity is perceived, conditional on it existing.
I think it would be easy for a conscious being to not perceive its simplicity, because my experience with math and science shows that humans often do not easily notice simplicity beyond a very low threshold. Some beings may be below this threshold, such as the first conscious being or the most massive conscious being, but I find it unlikely that beings of this type have a probability anywhere near that of all other conscious beings, especially considering how hard these concepts are to make precise.
Using your example of beings that observe coin tosses, simple but low-probability events may be the easiest way to specify someone, but there could also easily be a less complex way that is less apparent to the observer. This seems likely enough that not observing high-probability events does not provide exceptionally strong evidence against the theory.
↑ comment by paulfchristiano · 2011-04-02T19:24:18.001Z · LW(p) · GW(p)
No.
The simplest way to describe either phenomenon--when combined with the other experience that leads me to believe there is a universe beyond my brain--is to describe the universe and point to my brain inside it. If I saw enough coins come up heads in some consistent situation (for example, whenever I try and test anthropic principles) then at some point a lawful universe will cease to be the best explanation. The exact same thing is true for Solomonoff induction as well, though the quantitative details may differ very slightly.
Replies from: Manfred
↑ comment by Manfred · 2011-04-02T21:58:15.459Z · LW(p) · GW(p)
But ordering over the complexity of your brain, rather than the universe, is already postulating that a lawful universe isn't the best explanation. You can't have your cake and eat it too.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-02T22:26:55.382Z · LW(p) · GW(p)
A lawful universe is the best explanation for my experiences. My experience is embodied in a particular cognitive process. To describe this process I say:
"Consider the system satisfying the law L. To find Paul within that system, look over here."
In order to describe the version of me that sees 10 heads in a row, I instead have to say:
"Consider the system satisfying the law L, in which these 10 coins came up heads. To find Paul within that universe, look over here."
The probability of seeing 10 heads in a row may be slightly higher: adding additional explanations increases the probability of an experience, and the description of "arbitrary change" is easier if the change is to make all 10 outcomes H rather than to set the outcomes in some more complicated way. However, the same effect is present in Solomonoff induction.
There are many more subtleties here, and there are universes which involve randomness in a way where I would predict that HHHHHHHHHH is the most likely result from looking at 10 coin flips in a row. But the same things happen with Solomonoff induction, so they don't seem worth talking about here.
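To make the comparison with Solomonoff induction explicit, a rough sketch in terms of description lengths (not a calculation from the thread, just the standard 2^-K weighting):

```latex
\Pr(x) \;\propto\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)} \;\approx\; 2^{-K(x)},
\qquad
\frac{\Pr(\text{HHHHHHHHHH})}{\Pr(\text{HTTHTHHHTH})}
\;\approx\; 2^{\,K(\text{HTTHTHHHTH}) - K(\text{HHHHHHHHHH})}.
```

The all-heads experience is favored only by the handful of bits saved in describing "all ten heads" rather than one particular mixed sequence, which is the same modest effect Solomonoff induction gives.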
Replies from: Manfred
↑ comment by Manfred · 2011-04-02T22:41:47.350Z · LW(p) · GW(p)
Best explanation by what standard? By the standard where you rank universes from least complex to most complex! You cannot do two different rankings simultaneously.
So then, are you saying that you do not think that a simplicity prior on your brain is a good idea?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-02T23:10:36.861Z · LW(p) · GW(p)
Shortest explanation for my thoughts. Precisely a simplicity prior on my brain. There is nothing about universe complexity.
I believe that the shortest explanation for my thoughts is the one that says "Here is the universe. Within the universe, here is this dude." This is a valid explanation for my brain, and it gets longer if I have to modify it to make my brain "simpler" in the sense you are using, not shorter.
Replies from: Manfred
↑ comment by Manfred · 2011-04-02T23:29:15.977Z · LW(p) · GW(p)
No, it doesn't. Picking between microstates isn't a "modification" of the universe, it's simply talking about the observed probability of something that already happens all the time.
Although now that I think about it, this argument should apply to more traditional anthropics as well, if a simplicity prior is used. And since I've done this experiment a few times now, I can say with high confidence that a strong simplicity prior is incorrect when flipping coins (especially when anthropically flipping coins [which means I did it myself]), and a maximum entropy prior is very close to correct.
comment by AlephNeil · 2011-04-03T03:21:41.685Z · LW(p) · GW(p)
I think I can see a flaw.
OK, so your central idea is to use the complexity prior for 'centered worlds' rather than 'uncentered worlds'. A 'centered world' here means a "world + pointer to the observer".
Now, if I give you a world + a pointer to the observer then you can tell me exactly what the observer's subjective state is, right? Therefore, the complexity of "world + pointer to observer" is greater than the complexity of "subjective state all by itself + the identity function".
Therefore, your approach entails that we should give massive weight to the possibility that solipsism is correct :-)
ETA: Fixed an important error - had written "less" instead of "greater".
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T03:24:35.091Z · LW(p) · GW(p)
My approach is defined by solipsism. I don't use the complexity prior for 'centered worlds'; I just use the complexity prior for 'subjective state.'
That said, all of the other people in the world probably also have mental states which are just as easily described, so it's an empty sort of solipsism.
(Also note that my idea is exactly the same as Wei Dai's).
Replies from: AlephNeil
↑ comment by AlephNeil · 2011-04-03T03:47:26.178Z · LW(p) · GW(p)
You say:
If I believe a brain embodying my thoughts exists in some simple universe, then my thoughts can be described precisely by first describing that universe and then pointing to the network of causal relationships which constitute my thoughts.
Therefore, the complexity of "universe + pointer to the network of causal relationships constituting your thoughts" is greater than or equal to the complexity of "network of causal relationships constituting your thoughts + the identity function".
Really, you should just talk about the 'network of causal relationships constituting your thoughts' all by itself. So if Jupiter hasn't affected your thoughts yet, Jupiter doesn't exist? But what counts as an 'effect'? And what are the boundaries of your mental state? This gets awfully tricky.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T04:29:11.701Z · LW(p) · GW(p)
The issue is that you mean a different thing by "complexity" than the definition.
How do you describe your thoughts all by themselves? You could describe the whole physical brain and its boundary with the world, but that is spectacularly complex. Simpler is to specify the universe (by giving some simple laws which govern it) and then to describe where to find your thoughts in it. This is the shortest computational recipe which outputs a description of your thoughts.
Replies from: AlephNeil
↑ comment by AlephNeil · 2011-04-03T04:40:21.659Z · LW(p) · GW(p)
How do you describe your thoughts all by themselves?
By describing the abstract structure of that 'network of causal relationships' you were talking about?
Look, there's a Massive Philosophical Problem here which is "what do you take your thoughts to be?" But whatever answer you give, other than just "a universe plus a pointer" I can carry on repeating my trick.
It sounds as though you want to give the answer "an equivalence class of universes-plus-pointers, where (W1, P1) ~ (W2, P2) iff the being at P1 'has the same thoughts' as the being at P2". But this is no good if we don't know what "thoughts" are yet.
ETA: Just wanted to say that the post was very interesting, regardless of whether I think I can refute it, and I hope LW will continue to see discussions like this.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T20:15:34.331Z · LW(p) · GW(p)
By describing the abstract structure of that 'network of causal relationships' you were talking about?
So you can describe your brain by saying explicitly what it contains, but this is not the shortest possible description in the sense of Kolmogorov complexity.
I believe that the shortest way to describe the contents of your brain--not your brain sitting inside a universe or anything--is to describe the universe (which has lower complexity than your brain, in the sense that it is the output of a shorter program) and then to point to your brain. This has lower complexity than trying to describe your brain directly.
Replies from: AlephNeil
↑ comment by AlephNeil · 2011-04-04T08:30:54.539Z · LW(p) · GW(p)
I understand what you were trying to do a little better now.
This has lower complexity than trying to describe your brain directly.
I think that so far you've tended to treat this as if it was obvious whereas I've treated it as if it was obviously false, but neither of us has given much in the way of justification.
Some things to keep in mind:
- Giving a pointer to a position within some 'branch' of the multiverse could cost a hell of a lot of information. (Rather like how specifying the location of a book within the Library of Babel offers zero compression compared to just writing out the book.) I understand that, if there are lots of copies then, ceteris paribus, the length of a pointer should decrease at rate logarithmic in the number of copies. But it's not obvious that this reduces the cost to below that of a more 'direct' description.
- There are many possibilities other than 'direct, naive, literal, explicit description' and description in terms of 'our universe + a pointer'. For instance, one could apply some compression algorithm to an explicit description. And here it's conceivable that the result could be regarded as a description of the mental state as sitting inside some larger, simpler 'universe', but very different from and much smaller than the 'real world'. Is it really true that all of general relativity and quantum mechanics is implicit in the mental state of an ancient thinker like Socrates? I don't want to say the answer is 'obviously not' - actually I find it extremely difficult to justify 'yes' or 'no' here. Some of the difficulty is due to the indeterminacy in the concept of "mental state". (And what if we replace 'Socrates' with 'a cow'?)
- What repertoire of logical and/or physical primitives are permitted when it comes to writing down a pointer? For instance, in a universe containing just a single observer, can we efficiently 'point to' the observer by describing it as "the unique observer"? In our own universe, can we shorten a pointer by describing Jones as "the nearest person to X" where X is some easily-describable landmark (e.g. a supermassive black hole)?
↑ comment by jhuffman · 2011-04-11T17:23:31.970Z · LW(p) · GW(p)
I think a pointer that effectively forces you to compute the entire program in order to find the object it references is still reducing complexity based on the definition used. Computationally expensive != complex.
Replies from: AlephNeil
↑ comment by AlephNeil · 2011-04-11T18:07:57.737Z · LW(p) · GW(p)
Sure, it might be reducing complexity, but it might not be. Consider the Library of Babel example, and bear in mind that a brain-state has a ton of extra information over and above the 'mental state' it supports. (Though strictly speaking this depends on the notion of 'mental state', which is indeterminate.)
Also, we have to ask "reducing complexity relative to what?" (As I said above, there are many possibilities other than "literal description" and "our universe + pointer".)
comment by cousin_it · 2011-04-02T20:27:26.117Z · LW(p) · GW(p)
This is a new idea to me. I like it a lot.
One way to make it eventually explain the Born probabilities is to also throw away continuity of experience and instead use something like ASSA: "your current observer-moment is a random sample from all observer-moments", where the "measure" of each observer-moment is determined by its description complexity as you suggest. Then you could look at the universal wavefunction (without postulating the Born probabilities) and note that pointing out a human observer-moment after a split has occurred requires 1 bit more information than pointing out the version before the split. I have no idea if it's true, but it sounds quite plausible to me.
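A minimal way to see the arithmetic in this proposal (a sketch, assuming the measure of an observer-moment falls off as 2^-(description length)):

```latex
m(\text{pre-split moment}) \;\propto\; 2^{-\ell},
\qquad
m(\text{each post-split moment}) \;\propto\; 2^{-(\ell + 1)} \;=\; \tfrac{1}{2}\, 2^{-\ell},
```

so an equal split divides the original measure evenly between the two successors, matching the Born rule for an equal-amplitude branching. The open question is whether unequal amplitudes correspond to the right non-integer pointer costs.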
comment by Vladimir_Nesov · 2011-04-02T21:42:02.464Z · LW(p) · GW(p)
What are you using these "probabilities" for? Probabilities of what? What is the intended meaning of a probability-definition you are trying to make here? Can you taboo "probability" and describe motivation behind the idea you're describing (as opposed to other connotations evoked by the word)?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-02T22:18:59.725Z · LW(p) · GW(p)
Couldn't you ask the same question of any UDT agent? It seems like messy philosophical territory, but one that people here implicitly wade into frequently.
Here is what I think I mean: I can control logical facts, insofar as they depend on my thoughts, but the question is: what do I want to achieve by that control? I care about the content of conscious experience, but what exactly is it that I value?
I have a preference over the possible observer-moments which I personally may experience--I can consider two probability distributions over observer-moments and decide which one I would rather experience. A probability distribution is a mental construct which I need to give as an input to my preference ordering.
The question is: how do I use this built in notion of preference to discriminate between two resolutions of my uncertainty about logical facts? My plan is to define a probability distribution over observer-moments and subject this distribution to my intuitive preferences. Then I can choose to determine logical facts to make this single fixed probability distribution over observer-moments as desirable as possible.
Replies from: Wei_Dai, Vladimir_Nesov
↑ comment by Wei Dai (Wei_Dai) · 2011-04-03T05:35:48.322Z · LW(p) · GW(p)
I have a preference over the possible observer-moments which I personally may experience--I can consider two probability distributions over observer-moments and decide which one I would rather experience.
I'm curious what your thoughts are on this post.
Also, here are a couple of other problems that I ran into, which I think you might be interested in, or have some ideas about.
- If you use a UTM-based distribution, which UTM do you choose? Is there a notion of complexity that is not relative to something more or less arbitrary?
- Using a UTM-based distribution seems to imply ignoring all the copies of you who are living in uncomputable worlds. It would be nice to have a complexity-based measure over all of math, but that seems impossible.
↑ comment by paulfchristiano · 2011-04-03T20:07:00.545Z · LW(p) · GW(p)
I was very confused about identical independent copies before. Right now the view given here is the best one I have thought of---more independent copies are more significant, just like copies running on more easily specified substrates. In this view copy immortality has no value--there is no difference between 2 copies with probability 1/2 and 1 copy with probability 1.
I have no idea how to choose a notion of complexity, either amongst UTMs or over some broader class of descriptions. I hope that at some point I will encounter a good argument for one choice or another, but I don't yet know of any and it's not clear why there would be a good argument.
↑ comment by Vladimir_Nesov · 2011-04-03T09:29:39.669Z · LW(p) · GW(p)
Couldn't you ask the same question of any UDT agent?
UDT has answers. "Probability" there plays a particular role in the decision algorithm, and you could name that element with a different word while keeping the role. They are probabilities of a given program producing a given execution history, inferred under the assumption that the agent (as a program) performs a given action. An answer of this form would clarify your usage.
I can control logical facts, insofar as they depend on my thoughts
Also physical facts, which is the way you actually think about physical facts, using the observations of otherwise opaque logical specification.
I have a preference over the possible observer-moments which I personally may experience
This is unclear, since you also have preference about what happens with the world, and then why consider some arbitrary boundary that is "you" specially?
I can consider two probability distributions over observer-moments and decide which one it would rather experience. A probability distribution is a mental construct which I need to give as an input to my preference ordering.
Very unclear. Why would you privilege "experiencing" something as a criterion for decision-making? Why do you expect a preference that needs an argument of this particular form?
For example, how do you express preference about other human-like agents in the worlds you can influence? What about situations like absent-minded driver where many usual uses of probabilities break down?
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T18:30:01.045Z · LW(p) · GW(p)
This is unclear, since you also have preference about what happens with the world, and then why consider some arbitrary boundary that is "you" specially?
I believe my only preferences are over experiences. I care nothing about a world without observers, and I care about our world exactly insofar as the world has an effect on the experiences of certain observers.
I am biologically given an intuitive preference function over probability distributions of moments that I would personally experience. The basis of my morality is to use this preference to try and make interpersonal utility comparisons, although of course there are many many problems. Putting probability distributions over observer moments is a critical first step in this program.
Replies from: AlephNeil, Vladimir_Nesov
↑ comment by AlephNeil · 2011-04-03T19:36:36.739Z · LW(p) · GW(p)
Putting probability distributions over observer moments is a critical first step in this program.
There are some problems with the notion of 'observer moments'. I'm inclined to think they are unresolvable, but perhaps you have some ideas for how to tackle them.
I've already mentioned the problem of the 'boundary' of a subjective state. For instance, consider an old memory which it would take you a long time to 'dredge up'. How is that different in principle from having some information written down somewhere in a notebook in front of you? Is that old memory part of your 'observer moment'? (But then how about a slightly fresher memory)? Is the notebook? (But then how about that thick reference book on your shelf)? It seems obvious to me that there's no principled line to be drawn here.
Then there's the problem of whether a given system is an observer or not. For instance, Dennett is notorious for (correctly!) attributing 'intentional states' to a thermostat: you can view it as an agent who believes the temperature is x and wants it to be y. But is a thermostat an observer? Presumably not, but again it seems that there's no principled line to be drawn between thermostats and people.
And then there's the problem of 'how many observers'. E.g. Is a split-brain patient two observers or one? How about an ordinary person? How much of the corpus callosum needs to be severed to get two observers?
Finally, if A is much much cleverer, more alert, and knowledgeable than B then A ought to have a greater density of 'observer moments' than B? But exactly how much greater? The idea that there's a principled way of determining this seems overoptimistic.
(Going a little off topic: I've actually been thinking along vaguely similar lines to you just lately. I've been trying to devise an approach to the mind-body problem in terms of "perspectives". My overall goal was to try to 'do justice to' the common intuition that a subjective point of view must be determinate and cannot only half-exist, while maintaining the substance of Dennett's position which implies that there may be no fact of the matter as to whether a person is conscious and what they're conscious of. My key idea is that (a) there exist genuine facts of the form "From perspective P, such-and-such would be consciously experienced" but (b) the perspectives themselves do not exist - they're not part of the "state of the world". (Instead, they're "perspectives from which the state of the world manifests itself"). An analogy would be how, in mathematical logic, we have this duality between theories and models, or more broadly, between syntax and semantics. The "syntax side" enables you to state and prove things about stuff that exists, but the syntax itself doesn't exist. (In the same way that there's no such set as ZFC.)
My 'perspectives' are essentially the same as your 'pointers-to-observers'. However, I'd want to stress the fact that perspectives are ubiquitous - e.g. you can take the perspective of a rock, or a thermostat. And you can take the perspective of a person in many different ways, with no fact of the matter about which is right, or even whether it's right to take one. (But from any given perspective, all the subjective facts are nice and 'determinate'.)
It never occurred to me to try to consider the Kolmogorov complexities of perspectives. It's an interesting idea, but it's hard to wrap one's head around given that there are an unlimited number of ways of defining the same person's perspective.)
↑ comment by Vladimir_Nesov · 2011-04-03T18:44:29.584Z · LW(p) · GW(p)
I believe my only preferences are over experiences. I care nothing about a world without observers, and I care about our world exactly insofar as the world has an effect on the experiences of certain observers.
How do you know this? (What do you think you know, how do you think you know it?) Your brain might contain this belief. Should you draw a conclusion from this fact that what the belief claims is true? Presumably, if your brain contained a belief that 15+23=36, you'd have a process that would not stop at accepting this claim just because your brain claimed it's so; you'd be able to do better. There is no magical truth-machine in your mind; everything should be suspect, understood and believed more strongly only when further reflection permits.
What "certain observers"? You were talking only about your own experiences. How do you take into account experiences of other agents? (Or even yourself at other moments, or given alternative observations, imperfect copies.)
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T19:12:30.571Z · LW(p) · GW(p)
How do I know what I care about? I don't know.
How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind? I think this has basically wandered into a metaethics discussion. It is a fact that I feel nothing about worlds without observers. Perhaps if you gave me an argument to care I would.
By "certain observers" I only meant to refer to "observers whose experiences are affected by the world".
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-03T19:39:23.913Z · LW(p) · GW(p)
How do I know what I care about? I don't know.
Then you shouldn't be certain. Beliefs of unknown origin tend to be unreliable. They also don't allow figuring out what exactly they mean.
How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind?
Consider the above example of believing that 15+23=36. You can find the facts allowing you to correct the error elsewhere in your mind, they are just beliefs other than the one that claims 15+23 to be 36. You can also consult a calculator, something not normally considered part of your mind. You can even ask me.
By "certain observers" I only meant to refer to "observers whose experiences are affected by the world".
This doesn't help. I don't see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
Replies from: AlephNeil, paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T19:55:52.886Z · LW(p) · GW(p)
Then you shouldn't be certain.
I'm certainly not. Like I said, if you have any arguments I expect they could change my opinion.
I don't see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible. In what other sense would I care about anything?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-03T20:06:27.929Z · LW(p) · GW(p)
Then you shouldn't be certain.
I'm certainly not. Like I said, if you have any arguments I expect they could change my opinion.
No, you are talking about a different property of beliefs, lack of stability to new information. I claim that because of lack of reflective understanding of the origins of the belief, you currently shouldn't be certain, without any additional object-level arguments pointing out specific problems or arguments for an incompatible position.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible.
I see. I think this whole line of investigation is very confused.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-03T20:10:59.435Z · LW(p) · GW(p)
No, you are talking about a different property of beliefs, lack of stability to new information. I claim that because of lack of reflective understanding of the origins of the belief, you currently shouldn't be certain, without any additional object-level arguments pointing out specific problems or arguments for an incompatible position.
I don't quite understand. I am not currently certain, in the way I use the term. The way I think about moral questions is by imagining some extrapolated version of myself, who has thought for long enough to arrive at stable beliefs. My confidence in a moral assertion is synonymous with my confidence that it is also held by this extrapolated version of myself. Then I am certain of a view precisely when my view is stable.
In what other way can I be certain or uncertain?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-04-03T20:21:53.564Z · LW(p) · GW(p)
You can come to different conclusions depending on future observations, for example, in which case further reflection would not move your level of certainty, the belief would be stable, and yet you'd remain uncertain. For example, consider your belief about the outcome of a future coin toss: this belief is stable under reflection, but doesn't claim certainty.
Generally, there are many ways in which you can (or should) make decisions or come to conclusions, your whole decision problem, all heuristics that make up your mind, can have a hand in deciding how any given detail of your mind should be.
(Also, being certain for the reason that you don't expect to change your mind sounds like a bad idea, this could license arbitrary beliefs, since the future process of potentially changing your mind that you're thinking about could be making the same calculation, locking in into a belief with no justification other than itself. This doesn't obviously work this way only because you retain other, healthy reasons for making conclusions, so this particular wrong ritual washes out.)
comment by Vladimir_Nesov · 2011-04-02T21:27:51.085Z · LW(p) · GW(p)
What's "universe" and what's "experience"? What are you using probability of what for?
Supposing you assume some sense of expected utility maximization, what kinds of things have utility and probability? Also, for decision-making you need models of the consequences of possible decisions, all of which except the actual one are not actually made; in particular, models of probability have to be able to depend on your possible decisions. Focusing on "experience" doesn't obviously give a way of addressing that.
comment by endoself · 2011-04-02T20:54:52.817Z · LW(p) · GW(p)
This idea has been discussed before and I have not heard any particularly compelling evidence for or against it (see my reply to Manfred). I believe it was originally due to Wei Dai.
Replies from: paulfchristiano
↑ comment by paulfchristiano · 2011-04-02T21:00:21.010Z · LW(p) · GW(p)
I don't see how this can be agnostic on the Born probabilities. Did this question come up before? Was the conclusion simply "it's too hard to tell"?
Replies from: endoself
↑ comment by endoself · 2011-04-03T00:12:50.775Z · LW(p) · GW(p)
I don't recall the suggestion of using the Born probabilities as a test ever having been discussed.
I agree that this could potentially supply strong evidence one way or another, but I do not see any way of deriving the Born probabilities and it seems unlikely that it would be easy to prove that deducing the Born probabilities is impossible, because the derivation might depend on some unknown aspect of physics or consciousness.