The Conscious Sorites Paradox

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-28T02:58:35.000Z · LW · GW · Legacy · 41 comments


Followup to: On Being Decoherent

Decoherence is implicit in quantum physics, not an extra postulate on top of it, and quantum physics is continuous.  Thus, "decoherence" is not an all-or-nothing phenomenon—there's no sharp cutoff point.  Given two blobs, there's a quantitative amount of amplitude that can flow into identical configurations between them.  This quantum interference diminishes down to an exponentially tiny infinitesimal as the two blobs separate in configuration space.
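
As a rough illustration of how fast that happens: if each blob is idealized as a Gaussian wavepacket of width σ along some configuration-space coordinate, the interference term between two blobs whose centers sit a distance d apart falls off as

\[ \left| \langle \psi_A \mid \psi_B \rangle \right| \;\approx\; \exp\!\left(-\frac{d^2}{8\sigma^2}\right), \]

and in a many-particle configuration space d grows with every particle that differs between the blobs, so the overlap plunges exponentially without ever reaching exactly zero.  (This is only a toy Gaussian model of "two blobs separating," not a full decoherence calculation.)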

Asking exactly when decoherence takes place, in this continuous process, is like asking when, if you keep removing grains of sand from a pile, it stops being a "heap".

The sand-heap dilemma is known as the Sorites Paradox, after the Greek soros, for heap.  It is attributed to Eubulides of Miletus, in the 4th century BCE.  The moral I draw from this very ancient tale:  If you try to draw sharp lines in a continuous process and you end up looking silly, it's your own darn fault.

(Incidentally, I once posed the Sorites Paradox to Marcello Herreshoff, who hadn't previously heard of it; and Marcello answered without the slightest hesitation, "If you remove all the sand, what's left is a 'heap of zero grains'."  Now that's a computer scientist.)

Ah, but what about when people become decoherent?  What of the Conscious Sorites Paradox?

What about the case where two blobs of amplitude containing people are interacting, but only somewhat - so that there is visibly a degree of causal influence, and visibly a degree of causal independence?

Okay, this interval may work out to less than the Planck time for objects the size of a human brain.  But I see that as no excuse to evade the question.  In principle we could build a brain that would make the interval longer.

Shouldn't there be some definite fact of the matter as to when one person becomes two people?

Some folks out there would just say "No".  I suspect Daniel Dennett would just say "No".  Personally, I wish I could just say "No", but I'm not that advanced yet.  I haven't yet devised a way to express my appreciation of the orderliness of the universe, which doesn't involve counting people in orderly states as compared to disorderly states.

Yet if you insist on an objective population count, for whatever reason, you have Soritic problems whether or not you delve into quantum physics.

What about the Ebborians? The Ebborians, you recall, have brains like flat sheets of conducting polymer, and when they reproduce, the brain-sheet splits down its thickness.  In the beginning, there is definitely one brain; in the end, there are definitely two brains; in between, there is a continuous decrease of causal influence and synchronization.  When does one Ebborian become two?

Those who insist on an objective population count in a decoherent universe must confront exactly analogous people-splitting problems in classical physics!

Heck, you could simulate quantum physics the way we currently think it works, and ask exactly the same question!  At the beginning there is one blob, at the end there are two blobs, in this universe we have constructed.  So when does the consciousness split, if you think there's an objective answer to that?

Demanding an objective population count is not a reason to object to decoherence, as such.  Indeed, the last fellow I argued with ended up agreeing that his objection to decoherence was in fact a fully general objection to functionalist theories of consciousness.

You might be tempted to try sweeping the Conscious Sorites Paradox under a rug, by postulating additionally that the Quantum Spaghetti Monster eats certain blobs of amplitude at exactly the right time to avoid a split.

But then (1) you have to explain exactly when the QSM eats the amplitude, so you aren't avoiding any burden of specification.

And (2) you're requiring the Conscious Sorites Paradox to get answered by fundamental physics, rather than being answered or dissolved by a better understanding of consciousness.  It's hard to see why taking this stance advances your position, rather than just closing doors.

In fact (3) if you think you have a definite answer to "When are there two people?", then it's hard to see why you can't just give that same answer within the standard quantum theory instead.  The Quantum Spaghetti Monster isn't really helping here!  For every definite theory with a QSM, there's an equally definite theory with no QSM.  This is one of those occasions you have to pay close attention to see the superfluous element of your theory that doesn't really explain anything—it's harder when the theory as a whole does explain something, as quantum physics certainly does.

Above all, (4) you would still have to explain afterward what happens with the Ebborians, or what happens to decoherent people in a simulation of quantum physics the way we currently think it works.  So you really aren't avoiding any questions!

It's also worth noting that, in any physics that is continuous (or even any physics that has a very fine-grained discrete cellular level underneath), there are further Conscious Sorites Paradoxes for when people are born and when they die.  The bullet plows into your brain, crushing one neuron after another—when exactly are there zero people instead of one?

Does it still seem like the Conscious Sorites Paradox is an objection to decoherent quantum mechanics, in particular?

A reductionist would say that the Conscious Sorites Paradox is not a puzzle for physicists, because it is a puzzle you get even after the physicists have done their duty, and told us the true laws governing every fundamental event.

As previously touched on, this doesn't imply that consciousness is a matter of nonphysical knowledge.  You can know the fundamental laws, and yet lack the computing power to do protein folding.  So, too, you can know the fundamental laws; and yet lack the empirical knowledge of the brain's configuration, or miss the insight into higher levels of organization, which would give you a compressed understanding of consciousness.

Or so a materialist would assume.  A non-epiphenomenal dualist would say, "Ah, but you don't know the true laws of fundamental physics, and when you do know them, that is where you will find the thundering insight that also resolves questions of consciousness and identity."

It's because I actually do acknowledge the possibility that there is some thundering insight in the fundamental physics we don't know yet, that I am not quite willing to say that the Conscious Sorites puzzle is not a puzzle for physicists.  Or to look at it another way, the problem might not be their responsibility, but that doesn't mean they can't help.  The physicists might even swoop in and solve it, you never know.

In one sense, there's a clear gap in our interpretation of decoherence: we don't know exactly how quantum-mechanical states correspond to the experiences that are (from a Cartesian standpoint) our final experimental results.

But this is something you could say about all current scientific theories (at least that I've heard of).  And I, for one, am betting that the puzzle-cracking insight comes from a cognitive scientist.

I'm not just saying tu quoque (i.e., "Your theory has that problem too!").  I'm saying that "But you haven't explained consciousness!" doesn't reasonably seem like the responsibility of physicists, or an objection to a theory of fundamental physics.

An analogy:  When a doctor says, "Hey, I think that virus X97 is causing people to drip green slime," you don't respond:  "Aha, but you haven't explained the exact chain of causality whereby this merely physical virus leads to my experience of dripping green slime... so it's probably not a virus that does it, but a bacterium!"

This is another of those sleights-of-hand that you have to pay close attention to notice.  Why does a non-viral theory do any better than a viral theory at explaining which biological states correspond to which conscious experiences?  There is a puzzle here, but how is it a puzzle that provides evidence for one epidemiological theory over another?

It can reasonably seem that, however consciousness turns out to work, getting infected with virus X97 eventually causes your experience of dripping green slime.  You've solved the medical part of the problem, as it were, and the remaining mystery is a matter for cognitive science.

Likewise, when a physicist has said that two objects attract each other with a force that goes as the product of the masses and the inverse square of the distance between them, that looks pretty much consistent with the experience of an apple falling on your head.  If you have an experience of the apple floating off into space, that's a problem for the physicist.  But that you have any experience at all, is not a problem for that particular theory of gravity.

If two blobs of amplitude are no longer interacting, it seems reasonable to regard this as consistent with there being two different brains that have two different experiences, however consciousness turns out to work.  Decoherence has a pretty reasonable explanation of why you experience a single world rather than an entangled one, given that you experience anything at all.

However the whole debate over consciousness turns out, it seems that we see pretty much what we should expect to see given decoherent physics.  What's left is a puzzle, but it's not a physicist's responsibility to answer.

...is what I would like to say.

But unfortunately there's that whole thing with the squared modulus of the complex amplitude giving the apparent "probability" of "finding ourselves in a particular blob".

That part is a serious puzzle with no obvious answer, which I've discussed already in analogy.  I'll shortly be doing an explanation of how the problem looks from within actual quantum theory.
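
In symbols, the problem: the apparent probability of finding yourself in a particular blob seems to go as that blob's squared-modulus measure,

\[ P(\text{blob } i) \;\propto\; \int_{\text{blob } i} |\psi|^2 , \]

and nothing in the bare unitary dynamics obviously picks out that particular weighting over, say, counting blobs equally.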

Just remember, if someone presents you with an apparent "answer" to this puzzle, don't forget to check whether the phenomenon still seems mysterious, whether the answer really explains anything, and whether every part of the hypothesis is actively helping.

 

Part of The Quantum Physics Sequence

Next post: "Decoherence is Pointless"

Previous post: "On Being Decoherent"

41 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by simon2 · 2008-04-28T05:38:18.000Z · LW(p) · GW(p)

I haven't yet devised a way to express my appreciation of the orderliness of the universe, which doesn't involve counting people in orderly states as compared to disorderly states.

What do you mean by that?

Frankly, I'm not sure what it is that you're complaining about. Even in ordinary life humans have number ambiguity: if you split the connection between the halves of the brain, you get what seems to be two minds, but why should this be some great problem?

But unfortunately there's that whole thing with the squared modulus of the complex amplitude giving the apparent "probability" of "finding ourselves in a particular blob".

I hope you will at least acknowledge the existence of the Wallace/Saunders/Deutsch point of view that the Born rule can be derived from quantum mechanics plus only very reasonable outside assumptions, without postulating it separately, even if you won't agree with it.

comment by simon2 · 2008-04-28T06:21:57.000Z · LW(p) · GW(p)

Sorry for the impulsive unhelpful bit of my previous comment. Of course if you have a number ambiguity between subjectively identical minds, then you might have problems if you apply an indifference principle to determine probabilities. But please explain if you have any other problem with this.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-28T08:31:48.000Z · LW(p) · GW(p)

Simon, the trouble is that bit with the "very reasonable outside assumptions".

If you think in mangled worlds terms, and then ask, "What if the mangled worlds theory were correct, but the cutoff point were not near the median amplitude density?", then we would observe probabilities different from the Born probabilities. And the "very reasonable outside assumptions" would be suddenly revealed as unreasonable.

In the case of the Wallace paper that Robin quoted in "Quantum Orthodoxy": Wallace assumes, roughly, that branching that goes on while you're not looking couldn't possibly be rational to take into consideration, because so much of it happens that no decision theorist could be bothered to keep track of it. Which, because quantum physics is unitary, hands him the Born probabilities on a silver platter - specifically, the decision-theoretic principle that you should care equally about bets with equivalent payoff and equal measure ("measure" = integral over squared modulus).
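
(Roughly, in symbols: give each branch b the measure

\[ \mu(b) \;=\; \int_b |\psi|^2 , \]

and Wallace's axioms force a rational agent to rank bets by the expected utility \( \sum_b \mu(b)\,U(b) \), i.e. utility weighted by the Born measure.)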

But if the mangled worlds cutoff point were different, and the Born probabilities were (perhaps slightly but noticeably) different, then decision agents would indeed be wise to think about the branching that went on while they weren't looking.

If you start pondering theories where the Born probabilities are physically derived and hence physically contingent, then the theories where the Born probabilities are derived as a priori rational considerations begin to look really suspicious.

I mean, you just shouldn't be able to get that sort of thing a priori.

Also I follow in the path of Jaynes in regarding probability theory as more fundamental than decision theory.

I admit that when Wallace talks about the number of observers being unmeasurable because decoherence gives you a continuous tube of amplitude rather than distinct blobs - so that I'm not so much twins, as smeared - he manages to unnerve me even more than I was already unnerved, with regards to my feeble attempts to count observer-moments. But, ultimately, you could construct the same situation with Ebborians, so...

Replies from: Kenny
comment by Kenny · 2010-09-04T03:48:13.266Z · LW(p) · GW(p)

I only have limited experience studying mathematics seriously, but from what I understand of continuity, I can't help but think that there's no reason to expect reality to actually be continuous. It's too easy to get apparent continuity out of (even sorta) large numbers.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-09-04T04:57:27.231Z · LW(p) · GW(p)

I think that particular continuity is an illusion, anyhow. One should count eigenstates. There are countably many eigenstates, none of which is classical. It is in trying to assert a definite position that this particular continuity issue comes up.

comment by steven · 2008-04-28T11:32:37.000Z · LW(p) · GW(p)

Eliezer, I agree that it looks like you shouldn't be able to get the Born rule a priori, but I don't think you're acknowledging that the same is true of the "equal probability for all worlds" rule. You need some way to translate a third-person physical view into subjective anticipations, and I don't see how this could use anything other than a priori reasoning at some point.

Also -- if I'm unsure whether my utility function says I should care about worlds in proportion to their Born measure or equally independent of their Born measure, and the latter possibility doesn't tell me what to do, then for practical purposes it drops out and I can pretend it's just the former.

comment by steven · 2008-04-28T11:35:57.000Z · LW(p) · GW(p)

(even if it could have been, but isn't, the case that the latter possibility did tell me what to do)

comment by Psy-Kosh · 2008-04-28T14:50:47.000Z · LW(p) · GW(p)

Eliezer:

"I admit that when Wallace talks about the number of observers being unmeasurable because decoherence gives you a continuous tube of amplitude rather than distinct blobs - so that I'm not so much twins, as smeared - he manages to unnerve me even more than I was already unnerved, with regards to my feeble attempts to count observer-moments. But, ultimately, you could construct the same situation with Ebborians, so..."

How would one construct a "smeared Ebborian"?

comment by Larry_D'Anna · 2008-04-28T15:35:25.000Z · LW(p) · GW(p)

steven: If we had a full understanding of fundamental physics then the only other a priori assumption we should need to derive the Born rule should be this: We aren't special. Our circumstances are typical. In other words: it is possible that at a fundamental physical level there is no Born rule and no reason one should expect a Born rule. But just by some fantastic coincidence, our little branch has followed the Born rule all this time. In fact, we should expect it to stop following the Born rule immediately, for the same reason someone who's just won the lottery doesn't expect to win again next time. It's not physically impossible for us to be this lucky, but it's not physically impossible for an egg to unscramble itself either.

Fundamental physics + eggs don't unscramble + anthropic principle should give you the Born rule. If it doesn't then physicists aren't done yet.

comment by Unknown · 2008-04-28T16:58:05.000Z · LW(p) · GW(p)

Nick Bostrom argues that there can be a fractional number of experiences; for example, when the Ebborian divides, there might be a process that proceeds from one conscious experience to two through 1.1 experiences, 1.2, 1.3, and so on. This would fit with the fact that physical things are continuous, and also with the idea that there aren't separate blobs of amplitude. It might also be easier to explain the Born probability rule by allowing such a continuum in the number of experiences.

comment by Doug_S. · 2008-04-28T18:15:50.000Z · LW(p) · GW(p)

If all the "worlds" do exist, then there should be some really, really weird worlds out there. For example, one in which eggs really do unscramble themselves all the time...

comment by Ben_Jones · 2008-04-28T18:24:39.000Z · LW(p) · GW(p)

Shouldn't there be some definite fact of the matter as to when one person becomes two people?

And even after you know that the falling tree creates acoustic vibrations but not auditory experience, it feels like there's a leftover question.

Did the same Eliezer really write both of these statements? I say again: if you can have all the information about an Ebborian brain split, but you're still asking 'when does one mind become two?' then you've missed the point.

A conscious mind is not a single, indivisible entity. It is a slippery thingy arising from the interplay of myriad smaller systems and mechanisms. If you want to use the word 'emergent' then knock yourself out. With that in mind, why should there be a definite fact of the matter?

comment by Unknown · 2008-04-28T18:36:08.000Z · LW(p) · GW(p)

Ben, note that in your quotation Eliezer mentioned "auditory experience." Whether or not there is an experience is one of the facts in question; so if you don't know how many experiences are there, you don't yet know all the facts.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-28T19:09:41.000Z · LW(p) · GW(p)

Ben: A conscious mind is not a single, indivisible entity. It is a slippery thingy arising from the interplay of myriad smaller systems and mechanisms. If you want to use the word 'emergent' then knock yourself out. With that in mind, why should there be a definite fact of the matter?

Ben,

Knowing that you're wrong doesn't always help you. You have to know how you're wrong, and what the right answer is / how to dissolve the question, in order to get started on repairs.

Lots of people out there are wise enough to believe that, somehow, consciousness is going to end up being incarnated in mere computing matter - but believing this doesn't make the mystery go away, until you fill in that somehow. Otherwise you're just saying "Atoms did it!"

There's a notion of Boltzmann brains - brains that have spontaneously coalesced in vastly improbable reversals of entropy. If the universe is large enough, then under standard physics, it will contain Boltzmann brains. How do I know I'm not one?

"You don't," you reply. Then why do I expect my future experiences to be ordered? Why do i think that an orderly universe best explains my current existence, rather than a fluctuation of dust?

"Because an orderly universe is far more likely to give rise to you, than a fluctuation of dust would be." Ah, now you're using terms like likely. That requires some measure of probability. After all, if the prior probability of an ordered world were zero, the likelihood ratio wouldn't matter.

In standard physics in a Big World, the Boltzmann Eliezer, and the Earth Eliezer, both exist. One may have chaotic future experiences, or dissolve back into dust; the other has a legitimate expectation of future order.

Given that both kinds of future experiences will occur to subjectively indistinguishable Eliezers, why do I expect an orderly future more than I expect a future where I sprout wings and fly away?

Well... the fact that Boltzmann Eliezers are exponentially less common in the universe, than ordered Eliezers, might have something to do with it -

  • if I were allowed to count people, or measure a weight of their existence, or assign a probability to them that was a probability of something. Somehow.

Do you see why, despite knowing the problems, I can't toss my notion of "number of observer-moments" or "weight of existence" or "measure of probability" out the window, yet? I need some way of saying that, on average, I don't expect to sprout wings and fly away, even if there's some very rare / low-weight / improbable Eliezer who experiences such things. If you toss that out, why believe in the science that you're using to conduct the whole analysis to begin with? Why not just believe that all the order is a chance illusion, once you toss out probability?

That's not an argument for counting people. That's explaining why I go on counting people, for now, even though I know that my current method is wrong, somehow.

PS: This whole problem blends directly over into "Why does (why do you believe that) anything exists in the first place?"

Replies from: momothefiddler
comment by momothefiddler · 2012-05-04T08:10:35.395Z · LW(p) · GW(p)

I don't see why you need to count the proportional number of Eliezers at all. I'm guessing the reason you expect an ordered future isn't because of the relation of {number of Boltzmann Eliezers}/{number of Earth Eliezers} to 1. It seems to me you expect an orderly future because you (all instances of you and thus all instances of anything that is similar enough to you to be considered 'an Eliezer') have memories of an orderly past. These memories could have sprung into being when you did a moment ago, yes, but that doesn't give you any other valid way to consider things. Claiming you're probabilistically not a Boltzmann Eliezer because you can count the Boltzmann Eliezers assumes you have some sort of valid data in the first place, which means you're already assuming you're not a Boltzmann Eliezer.

You anticipate experiencing the future of Earth Eliezer because it's the only future out of unconsiderably-many that has enough definition for 'anticipation' to have any meaning. If sprouting wings and flying away, not sprouting wings but still flying away, sprouting wings and crashing, and not sprouting wings and teleporting to the moon are all options with no evidence to recommend one over another, what does it even mean to expect one of them? Then add to that a very large number of others - I don't know how many different experiences are possible given a human brain (and there's no reason to assume a Boltzmann brain that perceives itself as you do now necessarily has a human-brain number of experiences) - and you have no meaningful choice but to anticipate Earth Eliezer's future.

Unless I'm missing some important part of your argument, it doesn't seem that an absolute count of Eliezers is necessary. Can't you just assume a future consistent with the memories available to the complex set of thought-threads you call you?

I realise I'm getting to (and thus getting through) this stuff a lot later than most commenters. Having looked, though, I can't find any information on post-interval etiquette or any better place to attempt discussion of the ideas each post/comment produces and, as far as I can tell, the posts are still relevant. If I'm flouting site policy or something with my various years-late comments, I'm sorry and please let me know so I know to stop.

comment by Vladimir_Gritsenko · 2008-04-28T19:20:14.000Z · LW(p) · GW(p)

Eliezer,

While I am unable to comment on the quantum physics, you have raised a valid point (albeit too briefly?) by noting that a very similar problem applies to the very young and the dying. When does a human child become conscious? Dennett would indeed argue that there is no such single moment. It appears to me that until this question is solved (and it can be without recourse to QM), a similar scenario in QM isn't going to salvage it. In other words, going from 0 to 1 seems like an easier question than going from 1 to N, but just as fundamental.

comment by Caledonian2 · 2008-04-28T19:29:20.000Z · LW(p) · GW(p)
The moral I draw from this very ancient tale: If you try to draw sharp lines in a continuous process and you end up looking silly, it's your own darn fault.

Removing grains of sand is a discrete process. The real key is recognizing what is implied by the use of the word 'heap' - so the collection of grains remains a heap until it no longer meets the criteria.

The Ship of Theseus is a much better statement of the problem, as there are enough different meanings of 'identity' that resolving the equivocation between them is a real challenge.

comment by Psy-Kosh · 2008-04-28T19:33:12.000Z · LW(p) · GW(p)

Eliezer: in a Big Universe, wouldn't there be more Boltzmann Eliezers than Ordered Eliezers?

comment by Hopefully_Anonymous · 2008-04-28T19:38:40.000Z · LW(p) · GW(p)

At this point all I have to contribute in commentary to your post (the OP) is "fascinating" and "very well written".

comment by Unknown · 2008-04-28T19:39:17.000Z · LW(p) · GW(p)

Eliezer, what if we are in a Big World which will fall into heat death for an infinite time? Then as Psy-kosh pointed out (I think), wouldn't Boltzmann Eliezers be infinitely more common than orderly ones, once you take into consideration the whole of time? In this situation, why shouldn't you expect disorder?

comment by Unknown · 2008-04-28T19:40:41.000Z · LW(p) · GW(p)

I only saw Psy-Kosh's comment just now after posting...

comment by simon2 · 2008-04-29T00:11:04.000Z · LW(p) · GW(p)

Eliezer: OK, so you object to branching indifference.

Here is what I was going to reply until I remembered that you support mangled worlds:

"So, I guess I'll go buy a lottery ticket, and if I win, I'll conduct an experiment that branches the universe 10^100 times (eg. single electron Stern-Gerlach repeated less than 1000 times). That way I'll be virtually certain to win."

Now, I suppose with mangled worlds and a low cutoff you can't quite rule out your point of view experimentally this way. But you're still proposing a rule in which if you have a world which splits into world A and world B, they have probability 1/2 each, and then when world B splits into B1 and B2, it changes the probability of A to 1/3 - until an unobserved physical process turns the probability of A back to 1/2. Seems a little odd, no?
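
In numbers, assuming B splits evenly:

Branch counting:  P(A) = P(B1) = P(B2) = 1/3.
Born weighting:   P(A) = 1/2,  P(B1) = P(B2) = 1/4.

So which probability A gets depends on whether you take the census before or after B's further split.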

comment by mitchell_porter2 · 2008-04-29T06:18:43.000Z · LW(p) · GW(p)

Ben Jones: A conscious mind is not a single, indivisible entity. It is a slippery thingy arising from the interplay of myriad smaller systems and mechanisms. If you want to use the word 'emergent' then knock yourself out. With that in mind, why should there be a definite fact of the matter?

Ben, even without knowing about atoms or brains, you can know this much: Something is happening, and it consists of one thing after another.

Whatever else reality encompasses, it must include this experience sequence or stream of consciousness that you have.

You evidently propose to identify these experiences with states of a physical brain, as do most scientifically educated people. But then, faced with the boundary problems pertaining to physical systems that Eliezer has highlighted, you say, well, there's no "definite fact of the matter" as to (say) how many streams of consciousness there are in a given physical setup.

This is not an option, except in the sense that ignoring a problem is an option. A exists, definitely; we hypothesize that A is actually B, possibly; we have trouble specifying the exact relationship between B and A; so we conclude that A doesn't definitely exist after all?

One aspect of progress in physics is the synthesis of incompatible theories in new theoretical frameworks. Quantum mechanics and relativity give rise to quantum field theory, quantum field theory and gravity give rise to string theory. The present situation is a huge opportunity for discovery, once you accept that A definitely exists; but everyone is clinging to their existing models of B at any price. Basic sensory qualities, the flow of time, the unity of the individual consciousness in appearance and reality - all are to be denied so we can imagine that our existing physics is enough.

But there is truly no need to do this. The fundamental validation of the physics we have is that it makes correct quantitative predictions. All that that implies, in turn, is that there are quantities in nature, somewhere and somehow. Do we have so little imagination that we cannot think up an ontology in which all the manifest aspects of consciousness are actually there, and in which the quantitative relations of our physics are also present?

This really will require deep changes, there is no doubt. Once you accept that these problematic qualities of consciousness are real - that they are there, even if their nature is not totally clear - you also have to address the question of how it is that you know they are there. That implies faculties of awareness and a theory of knowledge which is not just a matter of correct quantitative prediction. As with the numbers of physics, this does not mean that we need some new ontology of knowledge instead of Bayes, Kolmogorov, and the other local favorites. The new qualitative ontology would ground the quantitative aspects of epistemology, just as it must ground the quantitative ontology we call physics.

The split-brain research must be among the most important clues to the truth that we have, because it comes close to producing some of these consciousness-counting paradoxes in reality. But to be is to be something, consciousness is actually there, and so the solution is not to consign it to the realm of fuzzy half-realities. The solution is to start with the premise that there is a definite fact of the matter, and sacrifice everything to retain that premise. But I doubt that we have to sacrifice that much, really.

Replies from: Kenny
comment by Kenny · 2013-03-08T17:10:33.390Z · LW(p) · GW(p)

The evidence for consciousness is of roughly the same kind as for mystical experiences, i.e. verbal self-reports. There is no other evidence for either. Assume we're not conscious; what would you expect to be different? Obviously the part(s) of me that generate verbal reports (or other linguistic correspondence) seem to have access to other parts of my mind so that it seems like I'm an identity inside myself, but so what? What does consciousness itself add to any description of my experience?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-29T06:48:49.000Z · LW(p) · GW(p)

Simon: But you're still proposing a rule in which if you have a world which splits into world A and world B, they have probability 1/2 each, and then when world B splits into B1 and B2, it changes the probability of A to 1/3 - until an unobserved physical process turns the probability of A back to 1/2. Seems a little odd, no?

That's an excellently sharp compression of the problem with observer-moment anthropics the way I'm trying to do it - the conditional probabilities aren't invariant when you compose or decompose events. If you're anesthetized during the split between A and B, and wake up and observe yourself in B, before B splits, it seems the prior probability of B is 1/2. If you're anesthetized until after B splits into B1 and B2, it seems the prior probability of B is 2/3.

Huh. The probability theory I'm trying to use here really isn't giving sensible results at all.

comment by Ben_Jones · 2008-04-29T11:13:03.000Z · LW(p) · GW(p)

Eliezer, thanks for the detailed reply, that's really cleared up a whole load of my confusion over the last few posts.

I certainly haven't thought about this enough yet. I do recognise the fact that we need to put this aside to get any sense out of the world. But I still feel that trying to pin down a mind as a discrete entity, and trying to decide on when one mind becomes two, creates more problems than it solves.

The Ebborians can dream up plenty of brain-splitting (!) scenarios that involve bigger and bigger numbers, and make less and less sense when you try to apply subjective probabilities. I'm sure it wouldn't be hard to dream up scenarios where experience tends to infinity, which doesn't bode well. Also, say an Ebborian is being tortured while his brain splits. Does the amount of disutility in the universe suddenly double? If so, my brain asplode.

Re: Boltzmann Brains: AFAICT, P(our current universe springing into existence) is minuscule. Certainly, P(just my brain springing up) is a lot bigger. But P(stochastic, messy Big Bang, followed by a period of inflation, followed by the coalescence of stars, then planets, then the inception of life, then the evolution of the brain, then me) is enormous by comparison to either. Hence my expectation of being in the world. For every spontaneous floaty brain in infinite space, surely there would be millions of Big Bang-based sub-universes similar to our own.

I've just ordered Consciousness Explained on Amazon, will see where that goes.

comment by Nick_Tarleton · 2008-04-29T13:48:45.000Z · LW(p) · GW(p)

A exists, definitely; we hypothesize that A is actually B, possibly; we have trouble specifying the exact relationship between B and A; so we conclude that A doesn't definitely exist after all?

This is a wonderful statement of the problem. I'll remember it.

comment by Caledonian2 · 2008-04-29T15:10:48.000Z · LW(p) · GW(p)
For every spontaneous floaty brain in infinite space,

There are several problems here.

1) You're only considering a single universe.
2) It's not the formation of brains that is important. (Actual brains that form in the void die before they can get any useful computation done.)
3) The substrate of the computation is irrelevant to the content of the algorithm.

comment by ME3 · 2008-04-29T15:29:56.000Z · LW(p) · GW(p)

mitchell: As the Buddhists pointed out a long time ago, the flow of time is actually an illusion. All that you actually experience at any given moment is your present sensory input, plus the memories of the past. But there are any number of experiences involving loss of consciousness that will show that the flow of time as we perceive it is completely subjective (not to say that there is no time "out there," just that we don't directly perceive it).

So while I agree that "something is happening," it does not necessarily consist of one thing after another. Really it's just another formulation of cogito ergo sum.

This is also relevant in response to Caledonian - the brain does not have to live for any sustained period of time. A Boltzmann brain can pop into existence fully oxygenated with the memories that it is me, typing this response, think about it for a few seconds, and then die of whatever brains die of in interstellar space. From inside the brain, there would be no way to know the difference.

Eliezer: Isn't it sufficient to say that your brain has an expectation of order because that is how it's evolved? And what would a brain with no expectation of order even look like? Is it meaningful to talk about a control system that has no model of the outside world?

comment by Caledonian2 · 2008-04-29T19:26:42.000Z · LW(p) · GW(p)

Depressurization destroys a brain long before oxygen deprivation sets in.

The point is not to consider brains, but computational devices capable of representing a mind algorithm, of which brains are only the most familiar to you.

It's the distribution of the you-defined algorithm across the various universes that needs to be considered, not merely the chance of a brain arising randomly from chaos.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-01T19:06:39.000Z · LW(p) · GW(p)

Upon further reflection, I'm not sure I believe Wallace's argument that, in quantum physics the way we know it, the number of observers is ill-defined.

Yes, decoherence is continuous. But by the time you get all the way up to objects the size of neurons "firing" or "not firing", I'm not sure there's much influence being transmitted between worlds - or that the intermediate states represent conscious entities (non-mangled worlds?). I'm not even sure I could construct a continuous spectrum of conscious Ebborian brains. Operations like Max(A, B) would be prohibited. Sure, our underlying physics is continuous - a coin can always balance on its edge - but to construct an intermediate between two coherent conscious minds, every coin has to land on its edge; in addition to having tiny and perhaps-mangled amplitude, I'm not sure that intermediate still represents a person.

comment by Nick5 · 2008-09-29T23:39:10.000Z · LW(p) · GW(p)

Would I be the only person here occasionally terrified by the idea that they might 'wake up' as a Boltzmann brain at some point, with a brain arranged in such a way as to subject them to terrible agony, for an indeterminate period of time? I would really appreciate a response on this...

comment by Psy-Kosh · 2008-09-29T23:43:10.000Z · LW(p) · GW(p)

Nick: I haven't had that exact thought, but some analogous thoughts. (I don't remember the details) It does seem unlikely though. (Though I still am confused about why I'm not a Boltzmann brain)

But a Boltzmann brain that just happens to be in that particular arrangement would seem quite unlikely compared to all possible Boltzmann brains one might happen to be.

comment by simon2 · 2008-09-30T00:23:46.000Z · LW(p) · GW(p)

Nick and Psy-Kosh: here's a thought on Boltzmann brains.

Let's suppose the universe has vast spaces uninhabited by anything except Boltzmann brains which briefly form and then disappear, and that any given state of mind has vastly more instantiations in the Boltzmann-brain only spaces than in regular civilizations such as ours.

Does it then follow that one should believe one is a Boltzmann brain? In the short run perhaps, but in the long run you'd be more accurate if you simply committed to not believing it. After all, if you are a Boltzmann brain, that commitment will cease to be relevant soon enough as you disintegrate, but if you are not, the commitment will guide you well for a potentially long time.

comment by Psy-Kosh · 2008-09-30T00:26:54.000Z · LW(p) · GW(p)

Simon: Well, yeah, I notice that I seem to, well, not dissolve into nothingness.

But I have to admit that from a certain perspective, I'm surprised about that. Why are more versions of me non Boltzmann brain than Boltzmann brain... that is, why do I have a higher probability of perceiving myself in a universe that has rather more order than is strictly needed for just, well, me at this instant?

comment by simon2 · 2008-09-30T00:56:27.000Z · LW(p) · GW(p)

It may be that most minds with your thoughts do in fact disappear after an instant. Of course if that is the case there will be vastly more with chaotic or jumbled thoughts. But the fact that we observe order is no evidence against the existence of additional minds observing chaos, unless you don't accept self-indication.

So, your experience of order is not good evidence for your belief that more of you are non-Boltzmann than Boltzmann. But as I said, in the long term your expected accuracy will rise if you commit to not believing you are a Boltzmann brain, even if you believe that you most likely are one now.

A somewhat analogous situation may arise in AGI - AI makers can rule out certain things (e.g. the AI is simulated in a way that the simulated makers are non-conscious) that the AI cannot. Thus by having the AI rule such things out a priori, the makers can improve the AI's beliefs in ways that the AI itself, however superintelligent, rationally could not.

comment by Nick5 · 2008-10-06T03:23:48.000Z · LW(p) · GW(p)

It may be my personal ultra-pessimistic spin on what is otherwise a topic full of diverse interpretations, but I've never had a positive view of living forever, and the idea of living for what could theoretically be millions of years with any number of unpleasant stimuli being simulated in this ad-hoc cognitive mechanism is, to say the least, disturbing.

I suppose hypothetically I could tolerate the mere existence of such a strange physical phenomenon so long as it wasn't me waking up in that situation, although if there's a positive spin to be given to it, I suppose I could find myself in some sort of wonderful heaven. Either way, it's more than a little unsettling to me...

comment by simon2 · 2008-10-06T05:01:56.000Z · LW(p) · GW(p)

Nick, do you use the normal definition of a Boltzmann brain?

It's supposed to be a mind which comes into existence by sheer random chance. Additional complexity - such as would be required for some support structure (e.g. an actual brain), or additional thinking without a support structure - comes with an exponential probability penalty. As such, a Boltzmann brain would normally be very short lived.
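
For a rough sense of that penalty: the probability of a thermal fluctuation that locally pushes entropy down by ΔS scales roughly as

\[ P \;\sim\; e^{-\Delta S / k_B} , \]

so each extra piece of structure (a body, a life-support system, additional seconds of coherent thought) costs a further enormous exponential suppression.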

In principle, though, there could be so much space uninhabitable for regular civilizations that even long-lived Boltzmann brains which coincidentally have experiences similar to minds in civilizations outnumber minds in civilizations.

It's not clear whether you are worrying about whether you already are a Boltzmann brain, or if you think you are not one but think that if a Boltzmann brain took on your personality it would be 'you'. If the former, I can only suggest that nothing you do as a Boltzmann brain is likely to have much effect on what happens to you, or on anything else. If the latter, I think you should upgrade your notion of personal identity. While the notion that personality is the essence of identity is a step above the notion that physical continuity is the essence of identity, by granting the notion that there is an essence of identity at all it reifies the concept in a way it doesn't deserve, a sort of pseudosoul for people who don't think they believe in souls.

Ultimately what you choose to think of as your 'self' is up to you, but personally I find it a bit pointless to be concerned about things that have no causal connection with me whatsoever as if they were me, no matter how closely they may coincidentally happen to resemble me.

comment by Psy-Kosh · 2008-10-06T05:14:45.000Z · LW(p) · GW(p)

simon: the anthropic argument (i.e., the "disappears after an instant" bit) doesn't seem to be sufficiently strong to solve the problem. Unless I misunderstood your point, it would seem to fail to address the issue that I still observe far more order than would be necessary for me to exist even for, say, several hours or days.

I observe other people, I observe working websites, I observe that my memories of a chunk of reality seem consistent with my current observations of the previously mentioned chunk of reality, etc etc...

How can I explain this? Clearly, the anthropic "Boltzmann brains instantly go poof" argument is definitively not strong enough for this. Unless we can show that of the worlds that don't quickly kill the versions of me that they contain, the majority tend to be, for lack of a better term, "proper" worlds. But it's not entirely obvious to me why that should be so. Need to think about it some more.

comment by simon2 · 2008-10-06T07:13:45.000Z · LW(p) · GW(p)

Psy-Kosh, my argument that Boltzmann brains go poof is a theoretical argument, not an anthropic one. Also, if we want to maximize our correct beliefs in the long run, we should commit to ignore the possibility that we are a brain with beliefs not causally affected by the decision to make that commitment (such as a brain that randomly pops into existence and goes poof). This also is not an anthropic argument.

With regard to longer-lived brains, if you expect there to be enough of them that even the ones with your experience are more common than minds in a real civilization with your experience, then you really should rationally expect to be one (although as a practical matter since there's nothing much a Boltzmann brain can reasonably expect to do one might as well ignore it*). If you expect there to be more long lived Boltzmann brains than civilization-based minds in general, but not enough for ones with your experience to outnumber civilization-based minds with your experience, then your experience tips the balance in favour of believing you are not a Boltzmann brain after all.

I think your confusion is the result of you not being consistent about whether you accept self-indication, or maybe you being inconsistent about whether you think of the possible space with Boltzmann brains and no civilizations as being additional to or a substitute for space with civilizations. Here's what different choices of those assumptions imply:

(I assume throughout that the probability of Boltzmann brains per volume in any space is always lower than the probability of minds in civilizations where they are allowed by physics)*

Assumptions -> conclusion

self-indication, additional -> our experience is not evidence** for or against the existence of the additional space (or evidence for its existence if we consider the possibility that we may be unusually order-observing entities in that space)

self-indication, substitute -> our experience is evidence against the existence of the substitute space

instead of self-indication, assume the probability of being a given observer is inversely proportional to the number of observers in the possible universe containing that observer (this is the most popular alternative to self-indication) -> our experience is evidence against the existence of the additional or substitute space

*unless the Boltzmann brain, at further exponentially reduced probability, also obtained effective means of manipulating its environment...

** basically, define "allowed" to mean (density of minds with our experience in civ) >> (density of Boltzmann brains with our experience), and not allowed to mean the opposite (<<). One would expect the probability of a space with comparable densities to be low enough not to have a significant quantitative or qualitative effect on the conclusions.

*It seems rather unlikely that a space with our current apparent physical laws allows more long-lived B-brains than civilization-based brains. I am too tired to want to think about and write out what would follow if this is not true.

**I am using "evidence" here to mean shifts of probability relative to the outside view prior (conditional on the existence of any observers at all), which means that any experience is evidence for a larger universe (other things being equal) given self-indication, etc.