Boltzmann Brains and Anthropic Reference Classes (Updated)

post by pragmatist · 2012-06-04T04:04:17.048Z · LW · GW · Legacy · 113 comments

Contents

113 comments

Summary: There are claims that Boltzmann brains pose a significant problem for contemporary cosmology. But this problem relies on assuming that Boltzmann brains would be part of the appropriate reference class for anthropic reasoning. Is there a good reason to accept this assumption?

Nick Bostrom's Self Sampling Assumption (SSA) says that when accounting for indexical information, one should reason as if one were a random sample from the set of all observers in one's reference class. As an example of the scientific usefulness of anthropic reasoning, Bostrom shows how the SSA rules out a particular cosmological model suggested by Boltzmann. Boltzmann was trying to construct a model that is symmetric under time reversal, but still accounts for the pervasive temporal asymmetry we observe. The idea is that the universe is eternal and, at most times and places, at thermodynamic equilibrium. Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.

The objection to this model is that smaller fluctuations from equilibrium will be more common. In particular, fluctuations that produce disembodied brains floating in a high entropy soup with the exact brain state I am in right now (called Boltzmann brains) would be vastly more common than fluctuations that actually produce me and the world around me. If we reason according to SSA, the Boltzmann model predicts I am one of those brains and all my experiences are spurious. Conditionalizing on the model, the probability that my experiences are not spurious is minute. But my experiences are in fact not spurious (or at least, I must operate under the assumption that they are not if I am to meaningfully engage in scientific inquiry). So the Boltzmann model is heavily disconfirmed. [EDIT: As AlexSchell points out, this is not actually Bostrom's argument. The argument has been made by others. Here, for example.]
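
To make the shape of that disconfirmation concrete, here is a minimal Bayesian sketch in Python. Every number in it is an illustrative assumption, not anything computed by Boltzmann or Bostrom; the point is only that an observation to which a model assigns vanishing likelihood drives the model's posterior toward zero.

    # Toy Bayes update behind the disconfirmation argument.
    # All numbers are illustrative assumptions.

    p_boltzmann = 0.5    # prior on Boltzmann's fluctuation cosmology
    p_ordinary = 0.5     # prior on a cosmology without a Boltzmann-brain preponderance

    # Likelihood, under each model plus the SSA, of the observation
    # "my experiences are not spurious (I am an ordinary embedded observer)".
    p_obs_given_boltzmann = 1e-30   # almost all observers in the reference class are Boltzmann brains
    p_obs_given_ordinary = 0.99

    evidence = p_obs_given_boltzmann * p_boltzmann + p_obs_given_ordinary * p_ordinary
    posterior_boltzmann = p_obs_given_boltzmann * p_boltzmann / evidence

    print(posterior_boltzmann)   # ~1e-30: the model is effectively ruled out

The same arithmetic is what generates the worry for modern cosmology below: swap in a de Sitter model for Boltzmann's and the update runs identically.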

Now, no one (not even Boltzmann) actually believed the Boltzmann model, so this might seem like an unproblematic result. Unfortunately, it turns out that our current best cosmological models also predict a preponderance of Boltzmann brains. They predict that the universe is evolving towards an eternally expanding cold de Sitter phase. Once the universe is in this phase, thermal fluctuations of quantum fields will lead to an infinity of Boltzmann brains. So if the argument against the original Boltzmann model is correct, these cosmological models should also be rejected. Some people have drawn this conclusion. For instance, Don Page considers the anthropic argument strong evidence against the claim that the universe will last forever. This seems like the SSA's version of Bostrom's Presumptuous Philosopher objection to the Self Indication Assumption, except here we have a presumptuous physicist. If your intuitions in the Presumptuous Philosopher case lead you to reject SIA, then perhaps the right move in this case is to reject SSA.

But maybe SSA can be salvaged. The rule specifies that one need only consider observers in one's reference class. If Boltzmann brains can be legitimately excluded from the reference class, then the SSA does not threaten cosmology. But Bostrom claims that the reference class must at least contain all observers whose phenomenal state is subjectively indistinguishable from mine. If that's the case, then all Boltzmann brains in brain states similar enough to mine that there is no phenomenal distinction must be in my reference class, and there are going to be a lot of them.

Why accept this subjective indistinguishability criterion though? I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same. If A and B are in the exact same brain state, then, according to this claim, A has no evidence that she is in fact A and not B. And in this case, it is illegitimate for her to exclude B from her anthropic reference class. For all she knows, she might be B!

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn't know. Which means I can have evidence the brain doesn't have. Subjective indistinguishability does not entail evidential indistinguishability.

So at least this argument for including all subjectively indistinguishable observers in one's reference class fails. Is there another good reason for this constraint I haven't considered?

Update: There seems to be a common misconception arising in the comments, so I thought I'd address it up here. A number of commenters are equating the Boltzmann brain problem with radical skepticism. The claim is that the problem shows that we can't really know we are not Boltzmann brains. Now this might be a problem some people are interested in. It is not one that I am interested in, nor is it the problem that exercises cosmologists. The Boltzmann brain hypothesis is not just a physically plausible variant of the Matrix hypothesis.

The purported problem for cosmology is that certain cosmological models, in conjunction with the SSA, predict that I am a Boltzmann brain. This is not a problem because it shows that I am in fact a Boltzmann brain. It is a problem because it is an apparent disconfirmation of the cosmological model. I am not actually a Boltzmann brain, I assure you. So if a model says that it is highly probable I am one, then the observation that I am not stands as strong evidence against the model. This argument explicitly relies on the rejection of radical skepticism.

Are we justified in rejecting radical skepticism? I think the answer is obviously yes, but if you are in fact a skeptic then I guess this won't sway you. Still, if you are a skeptic, your response to the Boltzmann brain problem shouldn't be, "Aha, here's support for my skepticism!" It should be "Well, all of the physics on which this problem is based comes from experimental evidence that doesn't actually exist! So I have no reason to take the problem seriously. Let me move on to another imaginary post."

113 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2012-06-04T14:01:21.377Z · LW(p) · GW(p)

[M]eanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ.

They can differ, in the sense you specified, but they can't be distinguished by the brains themselves, and so the distinction can't be used in reasoning and decision making performed by the brains.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T19:01:00.736Z · LW(p) · GW(p)

Do you really think the assumption that the external world (as we conceive it) is real can't be used in reasoning and decision making performed by us? It is used all the time, to great effect.

Do you think Bostrom is wrong to use anthropic reasoning as a basis for disconfirming Boltzmann's model? After all, the assumption there is that Boltzmann's model goes wrong in predicting that we are Boltzmann brains. We know that we're not, so this is a bad prediction. This piece of reasoning seems to be something you deny we can actually do.

Replies from: Nornagest, Dolores1984
comment by Nornagest · 2012-06-04T19:33:54.930Z · LW(p) · GW(p)

Given any particular instantaneous brain state, later evidence consistent with that brain state is evidence against the Boltzmann model, since random fluctuation is vastly more likely to generate something subjectively incoherent. With a sufficient volume of such evidence I'd feel comfortable concluding that we reside in an environment or simulation consistent with our subjective perceptions. But that doesn't actually work: we only have access to an instantaneous brain state, not a reliable record of past experience, so we can't use this reasoning to discredit the Boltzmann model. In a universe big enough to include Boltzmann brains a record of a causal history appears less complex than an actual causal history, so we should favor it as an interpretation of the anthropic evidence.

I'll admit I find this subjectively hard to buy, but that's not the same thing as finding an actual hole in the reasoning. Starting with "we know we're not Boltzmann brains" amounts to writing your bottom line first.

Replies from: wedrifid, pragmatist
comment by wedrifid · 2012-06-04T19:46:04.050Z · LW(p) · GW(p)

Given any particular instantaneous brain state, later evidence consistent with that brain state is evidence against the Boltzmann model

It is evidence that the model of the past brain state being a Boltzmann brain is incorrect. It unfortunately can't tell you anything about whether you are a Boltzmann brain now who just thinks that he had a past where he thought he might have been a Boltzmann brain.

Replies from: Nornagest
comment by Nornagest · 2012-06-04T19:55:24.923Z · LW(p) · GW(p)

Yeah, that's what I was trying to convey with the second half of that paragraph. I probably could have organized it better.

comment by pragmatist · 2012-06-04T19:38:20.520Z · LW(p) · GW(p)

Starting with "we know we're not Boltzmann brains" amounts to writing your bottom line first.

Only if my argument is intended to refute radical skepticism. It's not. See the update to my post. It's true that the argument, like every other argument in science, assumes that external world skepticism is false. But I guess I don't see that as a problem unless one is trying to argue against external world skepticism in the first place.

Replies from: Nornagest
comment by Nornagest · 2012-06-04T19:54:21.654Z · LW(p) · GW(p)

This seems confused. Boltzmann's model only has any interesting consequences if you at least consider external-world skepticism; if you use a causal history to specify any particular agent and throw out anything where that doesn't line up with experiential history, then of course we can conclude that Boltzmann brains (which generally have a causal history unrelated to their experiential history, although I suppose you could imagine a Boltzmann brain with a correct experiential history as a toy example) aren't in the right reference class. But using that as an axiom in an argument intended to prove that Boltzmann brains don't pose a problem to current cosmological models amounts to defining the problem away.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T20:27:12.586Z · LW(p) · GW(p)

Here's the structure of the purported problem in cosmology:

(1) Model X predicts that most observers with subjective experience identical to mine are Boltzmann brains.

(2) I am not a Boltzmann brain.

(3) The right way to reason anthropically is the SSA.

(4) The appropriate reference class used in the SSA must include all observers with subjective experience identical to mine.

CONCLUSION: Model X wrongly predicts that I'm a Boltzmann brain.

I am not attacking any of the first 3 premises. I am attacking the fourth. Attacking the fourth premise does not require me to establish that I'm not a Boltzmann brain. That's a separate premise in the original argument. It has already been granted by my opponent. So I don't see how assuming it, in an objection to the argument given above, amounts to writing my bottom line first.

Replies from: Nornagest
comment by Nornagest · 2012-06-04T20:48:03.832Z · LW(p) · GW(p)

Your objection assumes that we can distinguish observers by their causal history rather than their subjective experience, and that we can discard agents for whom the two don't approximately correspond. This is quite a bit more potent than simply assuming you're not a Boltzmann brain personally: if extrapolated to all observers, then no (or very few) Boltzmann brains need be considered. The problematic agents effectively don't exist within the parts of the model you've chosen to look at. Merely assuming you're not a Boltzmann brain, on the other hand, does lead to the apparent contradiction in the parent -- but I don't think it's defensible as an axiom in this context.

Truthfully, though, I wouldn't describe the cosmological problem in the terms you've used. It's more that most observers with your subjective experience are Boltzmann brains under this cosmological model, and Boltzmann brains' observations do not reliably reflect causal relationships, so under the SSA this cosmology implies that any observations within it are most likely invalid and the cosmology is therefore unverifiable. This does have some self-reference in it, but it's not personal in the same sense, and including "I am not a Boltzmann brain" in the problem statement is incoherent.

Replies from: pragmatist, pragmatist
comment by pragmatist · 2012-06-04T22:03:48.551Z · LW(p) · GW(p)

This is quite a bit more potent than simply assuming you're not a Boltzmann brain personally: if extrapolated to all observers, then no (or very few) Boltzmann brains need be considered. The problematic agents effectively don't exist within the parts of the model you've chosen to look at.

I'm not sure what you mean by this. I'm claiming we need not consider the possibility that we are Boltzmann brains when we are reasoning anthropically. I'm not claiming that Boltzmann brains are not observers (although they may not be), nor am I claiming that they do not exist. I also think that if a Boltzmann brain were reasoning anthropically (if it could), then it should include Boltzmann brains in its reference class. So I don't think the claims I'm making can be extrapolated to all observers. They can be extrapolated to other observers sufficiently similar to me.

comment by pragmatist · 2012-06-04T21:20:12.087Z · LW(p) · GW(p)

Your objection assumes that observers' subjective experience is generally a more or less reliable record of their causal history.

I hope this is not the case, since I don't believe this. I think it's pretty likely that our universe will contain many Boltzmann brain type observers whose subjective experience is not a reliable record of their causal history (or any sort of record at all, really). Could you clarify where my objection relies on this assumption?

Truthfully, though, I wouldn't describe the cosmological problem in the terms you've used.

The problem is often presented (including by Bostrom) as a straight Bayesian disconfirmation of models like Boltzmann's. That seems like a different argument from the one you present.

including "I am not a Boltzmann brain" in the problem statement is incoherent.

Why? The other three premises do not imply that I am a Boltzmann brain. They only imply that model X predicts I'm a Boltzmann brain. That doesn't conflict with the second premise.

Replies from: Nornagest
comment by Nornagest · 2012-06-04T21:21:03.331Z · LW(p) · GW(p)

I hope this is not the case, since I don't believe this

That was poorly worded. I'd already updated the grandparent before you posted this; hopefully the revised version will be clearer.

Why? The other three premises do not imply that I am a Boltzmann brain. They only imply that model X predicts I'm a Boltzmann brain. That doesn't conflict with the second premise.

I was talking about my formulation of the problem, not yours. Assuming you're not a Boltzmann brain does lead to a contradiction with one of my premises, specifically the one about invalid observations.

comment by Dolores1984 · 2012-06-04T20:05:43.298Z · LW(p) · GW(p)

After all, the assumption there is that Boltzmann's model goes wrong in predicting that we are Boltzmann brains. We know that we're not, so this is a bad prediction. This piece of reasoning seems to be something you deny we can actually do.

That's because it is. We DON'T know that we're not Boltzmann brains. There would be no possible way for us to tell.

comment by DanielLC · 2012-06-04T04:34:07.760Z · LW(p) · GW(p)

My question is why ever exclude a conscious observer from your reference class? Your reference class is basically an assumption you make about who you are. Obviously, you have to be conscious, but why assume you're not a Boltzmann brain? If they exist, you might be one of them. A Boltzmann brain that uses your logic would exclude itself from its reference class, and therefore conclude that it cannot be itself. It would be infinitely wrong. This would indicate that the logic is faulty.

There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him?

That's just how you're defining belief. If the brain can't tell, it's not evidence, and therefore irrelevant.

Replies from: pragmatist, pragmatist
comment by pragmatist · 2012-06-04T05:43:41.350Z · LW(p) · GW(p)

One way to see the difference between my representational states and the Boltzmann brains' is to think counterfactually. If Barack Obama had lost the election in 2008, my current brain state would have been different in (at least partially) predictable ways. I would no longer have the belief that he was President, for instance. The Boltzmann brain's brain states don't possess this counterfactual dependency. Doesn't this suggest an epistemic difference between me and the Boltzmann brain?

comment by pragmatist · 2012-06-04T05:24:36.271Z · LW(p) · GW(p)

That's just how you're defining belief. If the brain can't tell, it's not evidence, and therefore irrelevant.

I don't think this is a mere definitional matter. If I have evidence, it must correspond to some contentful representation I possess. Evidence is about stuff out there in the world, it has content. And it's not just definitional to say that representations don't acquire content magically. The contentfulness of a representation must be attributable to some physical process linking the content of the representation to the physical medium of the representation. If a piece of paper spontaneously congealed out of a high entropy soup bearing the inscription "BARACK OBAMA", would you say it was referring to the President? What if the same inscription were typed by a reporter who had just interviewed the President?

Recognizing that representation depends on physical relationships between the object (or state of affairs) represented and the system doing the representing seems to me to be crucial to fully embracing naturalism. It's not just a semantic issue (well, actually, it is just a semantic issue, in that it's an issue about semantics, but you get what I mean).

And I don't know what you mean when you say "If the brain can't tell...". Not only does the Boltzmann brain lack the information that Barack Obama is President, it cannot even form the judgment that it possesses this information, since that would presuppose that it can represent the content of the belief. So in this case, I guess my brain can tell that I have the relevant evidence, and the Boltzmann brain cannot, even though they are in the same state. Or did you mean something about identical phenomenal experience by "the brain can't tell..."? That just begs the question.

A Boltzmann brain that uses your logic would exclude itself from its reference class, and therefore conclude that it cannot be itself. It would be infinitely wrong. This would indicate that the logic is faulty.

The Boltzmann brain would not be using my logic. In my post, I refer to a number of things to which a Boltzmann brain could not refer, such as Boltzmann. I doubt that one could even call the brain states of a Boltzmann brain genuinely representational, so the claim that it is engaged in reasoning is itself questionable. I am reminded here of arguments against pancomputationalism. A Boltzmann brain isn't reasoning about cosmology for the same sort of reason that a rock isn't playing tic-tac-toe. The existence of an isomorphism between it and some system that is reasoning about cosmology (or playing tic-tac-toe) is insufficient.

Replies from: DanielLC
comment by DanielLC · 2012-06-04T07:21:25.355Z · LW(p) · GW(p)

Do beliefs feel different from the inside if they are internally identical, but don't correspond to the same outside world?

Replies from: pragmatist
comment by pragmatist · 2012-06-04T07:37:48.522Z · LW(p) · GW(p)

I'm pretty sure identical brain states feel the same from the inside. I'm not sure that it feels like anything in particular to have a belief. What do you think about what I say in this comment?

comment by [deleted] · 2012-06-04T13:39:21.943Z · LW(p) · GW(p)

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn't know. Which means I can have evidence the brain doesn't have. Subjective indistinguishability does not entail evidential indistinguishability.

No. A Boltzmann brain can't have correct beliefs about Obama, but it may very well have its neurons (or whatever) arranged in what looks to it like beliefs about Obama.

Replies from: gwern, pragmatist
comment by gwern · 2012-06-04T14:09:59.841Z · LW(p) · GW(p)

I once ran across OP's argument as an illustration of the Twin Earth example applied to the simulation/brain-in-a-vat argument: "you can't be a brain in a vat because your beliefs refer to something outside yourself!" My reaction was, how do you know what beliefs-outside-your-head feel like as compared to the fake vat alternative? If there is no subjective difference, then it does no epistemological work.

Replies from: Alejandro1
comment by Alejandro1 · 2012-06-04T15:32:12.134Z · LW(p) · GW(p)

It was Putnam who started the idea of refuting the brain-in-vat hypothesis, with semantic externalism, in this paper. The money quote:

By what was just said, when the brain in a vat (in the world where every sentient being is and always was a brain in a vat) thinks 'There is a tree in front of me', his thought does not refer to actual trees. On some theories that we shall discuss it might refer to trees in the image, or to the electronic impulses that cause tree experiences, or to the features of the program that are responsible for those electronic impulses. These theories are not ruled out by what was just said, for there is a close causal connection between the use of the word 'tree' in vat-English and the presence of trees in the image, the presence of electronic impulses of a certain kind, and the presence of certain features in the machine's program. On these theories the brain is right, not wrong in thinking 'There is a tree in front of me.' Given what 'tree' refers to in vat-English and what 'in front of' refers to, assuming one of these theories is correct, then the truth conditions for 'There is a tree in front of me' when it occurs in vat-English are simply that a tree in the image be 'in front of' the 'me' in question — in the image — or, perhaps, that the kind of electronic impulse that normally produces this experience be coming from the automatic machinery, or, perhaps, that the feature of the machinery that is supposed to produce the 'tree in front of one' experience be operating. And these truth conditions are certainly fulfilled.

By the same argument, 'vat' refers to vats in the image in vat-English, or something related (electronic impulses or program features), but certainly not to real vats, since the use of 'vat' in vat-English has no causal connection to real vats (apart from the connection that the brains in a vat wouldn't be able to use the word 'vat', if it were not for the presence of one particular vat — the vat they are in; but this connection obtains between the use of every word in vat-English and that one particular vat; it is not a special connection between the use of the particular word 'vat' and vats). Similarly, 'nutrient fluid' refers to a liquid in the image in vat-English, or something related (electronic impulses or program features). It follows that if their 'possible world' is really the actual one, and we are really the brains in a vat, then what we now mean by 'we are brains in a vat' is that we are brains in a vat in the image or something of that kind (if we mean any thing at all). But part of the hypothesis that we are brains in a vat is that we aren't brains in a vat in the image (i.e. what we are 'hallucinating' isn't that we are brains in a vat). So, if we are brains in a vat, then the sentence 'We are brains in a vat' says something false (if it says anything). In short, if we are brains in a vat, then 'We are brains in a vat' is false. So it is (necessarily) false.

And a nice counterargument from Nagel's The View From Nowhere:

If I accept the argument, I must conclude that a brain in a vat can't think truly that it is a brain in a vat, even though others can think this about it. What follows? Only that I can't express my skepticism by saying "Perhaps I'm a brain in a vat." Instead I must say: "Perhaps I can't even think the truth about what I am, because I lack the necessary concepts and my circumstances make it impossible for me to acquire them!" If this doesn't qualify as skepticism, I don't know what does.

Replies from: gwern
comment by gwern · 2012-06-04T15:37:33.313Z · LW(p) · GW(p)

Our teacher always made us read the original papers, so this must be it.

comment by pragmatist · 2012-06-04T19:14:53.634Z · LW(p) · GW(p)

How could it know its beliefs look like they are about Obama? How does it even know who Obama is?

Replies from: None
comment by [deleted] · 2012-06-04T19:22:46.514Z · LW(p) · GW(p)

Why do you think you know who Obama is? Because your neurons are arranged with information that refers to some Obama character. From the inside, you think "Obama" and images of a nice black man in a suit saying things about change play through your mind. The point of the Boltzmann brain is that it is arranged to have the same instantaneous thoughts as you.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T19:30:37.013Z · LW(p) · GW(p)

That's not all there is to my belief that I know who Obama is. The arrangement of neurons in my brain is just syntax. Syntax doesn't come pre-equipped with semantic content. The semantics of my belief -- the fact that it's a belief about Obama, for instance -- comes from causal interactions between my brain and the external world. Causal interactions that the Boltzmann brain has not had. The particular pattern of neuronal activation (or set of such patterns) that instantiates my concept of Obama corresponds to a concept of Obama because it is appropriately correlated with the physical object Barack Obama. The whole point of semantic externalism is that the semantic content of our mental representations isn't just reducible to how they feel from the inside.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-04T20:02:06.710Z · LW(p) · GW(p)

Just to make sure I understand your claim... a question.

My brain has a set of things that, in normal conversation, I would describe as beliefs about the shoes I'm wearing. For convenience, I will call that set of things B. I am NOT claiming that these things are actually beliefs about those shoes, although they might be.

Suppose B contains two things, B1 and B2 (among others).
Suppose B1 derives from causal interactions with, and is correlated with, the shoes I'm wearing. For example, if we suppose my shoes are brown, B1 might be the thing that underlies my sincerely asserting that my shoes are brown. Suppose B2 is not correlated with the shoes I'm wearing. For example, B2 might be the thing that underlies my sincerely asserting that my shoes are made of lithium.

If I'm understanding you correctly, you would say that B1 is a belief about my shoes. I'm moderately confident that you would also say that B2 is a belief about my shoes, albeit a false one. (Confirm/deny?)

Supposing that's right, consider now some other brain that, by utter coincidence, is identical to mine, but has never in fact interacted with any shoes in any way. That brain necessarily has C1 and C2 that correspond to B1 and B2. But if I'm understanding you correctly, you would say that neither C1 nor C2 are beliefs about shoes. (Confirm/deny?)

Supposing I've followed you so far, what would you call C1 and C2?

Replies from: pragmatist
comment by pragmatist · 2012-06-04T20:17:33.687Z · LW(p) · GW(p)

"Correlation" was a somewhat misleading word for me to use. The sense in which I meant it is that there's some sort of causal entanglement (to use Eliezer's preferred term) between the neuronal pattern and an object in the world. That entanglement exists for both B1 and B2. B2 is still a belief about my shoes. It involves the concept of my brown shoes, a concept I developed through causal interaction with those shoes. So both B1 and B2 have semantic content related to my shoes. B2 says false things about my shoes and B1 says true things, but they both say things about my shoes.

C1 and C2 are not beliefs about my shoes. There is no entanglement between those brain states and my shoes. What I would call C1 and C2 depends on the circumstances in which they arose. Say they arose through interaction with extremely compelling virtual reality simulations of shoes that look like mine. Then I'd say they were beliefs about those virtual shoes. Suppose they arose randomly, without any sort of appropriate causal entanglement with macroscopic objects. Then I'd say they were brain states of the sort that could instantiate beliefs, but weren't actually beliefs due to lack of content.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-04T20:49:51.448Z · LW(p) · GW(p)

Cool, thanks for the clarification.
Two things.

First, and somewhat tangentially: are you sure you want to stand by that claim about simulations of shoes? It seems to me that if I create VR simulations of your shoes, those simulations are causally entangled (to use the same term you're using) with your shoes, in which case C1 and C2 are similarly entangled with your shoes. No?

Second, and unrelatedly: OK, let's suppose C1 and C2 arise randomly. I agree that they are brain states, and I agree that they could instantiate beliefs.

Now, consider brain states C3 and C4, which similarly correspond to my actual brain's beliefs B3 and B4, which are about my white socks in the same sense that B1 and B2 are about my brown shoes. C3 and C4 are also, on your model, brain states of the sort that could instantiate beliefs, but aren't in fact beliefs. (Yes?)

Now, we've agreed that B1 and B2 are beliefs about brown shoes. Call that belief B5. Similarly, B6 is the belief that B3 and B4 are beliefs about white socks. And it seems to follow from what we've said so far that brain states C5 and C6 exist, which have similar relationships to C1-C4.

If I understand you, then C5 and C6 are beliefs on your model, since they are causally entangled with their referents (C1-C4). (They are false, since C1 and C2 are not in fact beliefs about brown shoes, but we've already established that this is beside the point; B2 is false as well, but is nevertheless a belief.)

Yes?

If I've followed you correctly so far, my question: should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren't)? For example, would it somehow know that C1-C4 aren't beliefs, but C5-C6 are?

Replies from: pragmatist
comment by pragmatist · 2012-06-04T21:13:13.684Z · LW(p) · GW(p)

I'm not sure I'd call C5 and C6 full-fledged beliefs. There is still content missing. C5, as you characterized it, is the brain state in the BB identical to my B5. B5 says "B1 and B2 are beliefs about brown shoes." Now B5 gets its content partially through entanglement with B1 and B2. That part holds for C5 as well. But part of the content of B5 involves brown shoes (the "... about brown shoes" part), actual objects in the external world. The corresponding entanglement is lacking for C5.

If you change B5 to "B1 and B2 are beliefs", then I think I'd agree that C5 is also a belief, a false belief that says "C1 and C2 are beliefs." Of course this is complicated by the fact that we don't actually have internal access to our brain states. I can refer to my brain states indirectly, as "the brain state instantiating my belief that Obama is President", for instance. But this reference relies on my ability to refer to my beliefs, which in turn relies on the existence of those beliefs. And the lower-order beliefs don't exist for the BB, so it cannot refer to its brain states in this way. Maybe there is some other way one could make sense of the BB having internal referential access to its brain states, but I'm skeptical. Still, let me grant this assumption in order to answer your final questions.

should I expect the brain that instantiates C1-C6 to interact with C5/C6 (which are beliefs) any differently than the way it interacts with C1-C4 (which aren't)?

Not really, apart from the usual distinctions between the way we interact with higher order and lower order belief states.

For example, would it somehow know that C1-C4 aren't beliefs, but C5-C6 are?

No.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-04T21:42:47.000Z · LW(p) · GW(p)

OK, cool. I think I now understand the claim you're making... thanks for taking the time to clarify.

comment by Mitchell_Porter · 2012-06-04T07:39:26.622Z · LW(p) · GW(p)

I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same ... But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

This "content" is not something that you know and therefore it is not available to you as evidence. Are you living in the world where I had a vegemite sandwich for breakfast, or the world where I had to skip the sandwich in order to make it to the train station on time? It seems that according to your theory of mental content, whether or not I had that sandwich today is part of your mental content, because it's part of my ontological identity and I feature in some of your beliefs.

So if we compare you to your subjective duplicate in the world where my breakfast was different, the two of you are subjectively and evidentially indistinguishable.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T08:36:50.534Z · LW(p) · GW(p)

Let's say you have a computer set up to measure the temperature in a particular room to a high precision. It does this using input from sensors placed around the room. The computer is processing information about the room's temperature. Anthropomorphizing a little, one could say it has evidence of the room's temperature; evidence it received from the sensors.

Now suppose there's another identical computer somewhere else running the same software. Instead of receiving inputs from temperature sensors, however, it is receiving inputs from a bored teenager randomly twiddling a dial. By a weird coincidence, the inputs are exactly the same as the ones on your computer, to the point that the physical states of the two computers are identical throughout the processes.

Do you want to say the teenager's computer also has evidence of the room's temperature? I hope not. Would your answer be different if the computers were sophisticated enough to have phenomenal experience?

As for your example, the criterion of ontological identity you offer seems overly strict. I don't think failing to eat the sandwich would have turned you into a different person, such that my duplicate's beliefs would have been about something else. But this does seem like a largely semantic matter. Let's say I accept your criterion of ontological identity. In that case, yes, my duplicate and I will be (slightly) evidentially distinguishable. This doesn't seem like that big of a bullet to bite.

Replies from: Mitchell_Porter, DanielLC
comment by Mitchell_Porter · 2012-06-04T09:04:04.158Z · LW(p) · GW(p)

Let's say I accept your criterion of ontological identity. In that case, yes, me and my duplicate will be (slightly) evidentially distinguishable.

But they have no information about what I actually ate for breakfast! What is the "evidence" that allows them to be distinguished?

This term "evidentially distinguishable" is not the best because it potentially mixes up whether you have evidence now, with whether you could obtain evidence in the future. You and your duplicate might somehow gain evidence, one day, regarding what I had for breakfast; but in the present, you do not possess such evidence.

This whole line of thought arises from a failure to distinguish clearly between a thing, and your concept of the thing, and the different roles they play in belief. Concepts are in the head, things are not, and your knowledge is a lot less than you think it is.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T09:37:54.490Z · LW(p) · GW(p)

But they have no information about what I actually ate for breakfast! What is the "evidence" that allows them to be distinguished?

I have evidence that Mitchell1 thinks there are problems with the MWI. My duplicate has evidence that Mitchell2 thinks there are problems with the MWI. Mitchell1 and Mitchell2 are not identical, so my duplicate and I have different pieces of evidence. Of course, in this case, neither of us knows (or even believes) that we have different pieces of evidence, but that is compatible with us in fact having different evidence. In the Boltzmann brain case, however, I actually know that I have evidence that my Boltzmann brain duplicate does not, so the evidential distinguishability is even more stark.

This whole line of thought arises from a failure to distinguish clearly between a thing, and your concept of the thing, and the different roles they play in belief. Concepts are in the head, things are not, and your knowledge is a lot less than you think it is.

I don't think I'm failing to distinguish between these. Our mental representations involve concepts, but they are not (generally) representations of concepts. My beliefs about Obama involve my concept of Obama, but they are not (in general) about my concept of Obama. They are about Obama, the actual person in the external world. When I talk of the content of a representation, I'm not talking about what the representation is built out of, I'm talking about what the representation is about. Also, I'm pretty sure you are using the word "knowledge" in an extremely non-standard way (see my comment below).

comment by DanielLC · 2012-06-04T20:18:04.910Z · LW(p) · GW(p)

Do you want to say the teenager's computer also has evidence of the room's temperature?

Yes. It has not proven that its input is not connected to sensors in that room. There is a finite prior probability that it is. As such, its output is more likely given that the room is at that temperature.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T20:35:57.225Z · LW(p) · GW(p)

We could set up the thought experiment so that it's extraordinarily unlikely that the teenager's computer is receiving input from the sensors. It could be outside the light cone, say. This might still leave a finite prior probability of this possibility, but it's low enough that even the favorable likelihood ratio of the subsequent evidence is insufficient to raise the hypothesis to serious consideration.

In any case, the analog of your argument in the Boltzmann brain case is that there might be some mechanism by which the brain is actually getting information about Obama, and its belief states are appropriately caused by that information. I agree that if this were the case then the Boltzmann brain would in fact have beliefs about Obama. But the whole point of the Boltzmann brain hypothesis is that its brain state is the product of a random fluctuation, not coherent information from a distant planet. So in this case, the hypothesis itself involves the assumption that the teenager's computer is causally disconnected from the temperature sensors.

Do you agree that if the teenager's computer were not receiving input from the sensors, it would be inaccurate to say it has evidence about the room's temperature?

Replies from: DanielLC
comment by DanielLC · 2012-06-05T01:03:08.311Z · LW(p) · GW(p)

It could be outside the light cone, say.

If the computer doesn't know it's outside of the lightcone, that's irrelevant. The room may not even exist, but as long as the computer doesn't know that, it can't eliminate the possibility that it's in that room.

but it's low enough that even the favorable likelihood ratio of the subsequent evidence is insufficient to raise the hypothesis to serious consideration.

The probability of it being that specific room is far too low to be raised to serious consideration. That said, the utility function of the computer is such that that room or anything even vaguely similar will matter just about as much.

Do you agree that if the teenager's computer were not receiving input from the sensors, it would be inaccurate to say it has evidence about the room's temperature?

Only if the computer knows it's not receiving input from the sensors.

It has no evidence of the temperature of the room given that it's not receiving input from the sensors, but it does have evidence of the temperature of the room given that it is receiving input from the sensors, and the probability that it's receiving input from the sensors is finite (it isn't, but it doesn't know that), so it ends up with evidence of the temperature of the room.
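
For what it's worth, here is a toy Python rendering of that marginalization, with made-up numbers (the probability of a real sensor connection, the prior over the room's temperature, and the likelihoods are all assumptions for illustration): a tiny prior probability of being connected to the sensors yields a correspondingly tiny, but nonzero, update about the room.

    # Toy sketch of the marginalization argument: even a small prior probability
    # that the inputs come from the room's sensors makes the readings carry
    # *some* evidence about the room's temperature. All numbers are assumptions.

    p_connected = 1e-6     # prior that the inputs really come from the sensors
    p_hot = 0.5            # prior that the room is "hot" rather than "cold"

    # Likelihood of the observed input stream (which reads "hot"):
    p_input_given_hot_connected = 0.99
    p_input_given_cold_connected = 0.01
    p_input_given_not_connected = 0.5   # dial-twiddling is equally likely to read either way

    # Marginalize over whether the inputs are connected to the sensors.
    p_input_given_hot_room = (p_connected * p_input_given_hot_connected
                              + (1 - p_connected) * p_input_given_not_connected)
    p_input_given_cold_room = (p_connected * p_input_given_cold_connected
                               + (1 - p_connected) * p_input_given_not_connected)

    posterior_hot = (p_input_given_hot_room * p_hot) / (
        p_input_given_hot_room * p_hot + p_input_given_cold_room * (1 - p_hot))

    print(posterior_hot)   # ~0.5000005: a nonzero but negligible update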

comment by NancyLebovitz · 2012-06-04T15:10:55.484Z · LW(p) · GW(p)

Is it legitimate to hold that the possibility of being a Boltzmann brain doesn't matter because there's no choice a Boltzmann brain can make that makes any difference? Therefore, you might as well assume that you're at least somewhat real.

Boltzmann brains don't seem like the same sort of problem as being in a simulation-- if you're in a simulation, there might be other entities with similar value to yourself, you could still have quality of life (or lack of same), and it might be to your advantage to get a better understanding of the simulation.

At this point, I'm thinking about simulated Boltzmann brains, and in fact, this conversation leads to very sketchy simulated Boltzmann brains in anyone who reads it.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-06-04T17:16:44.527Z · LW(p) · GW(p)

Is it legitimate to hold that the possibility of being a Boltzmann brain doesn't matter because there's no choice a Boltzmann brain can make that makes any difference? Therefore, you might as well assume that you're at least somewhat real.

Well, if you are a Boltzmann brain then your best bet may be to maximize enjoyment for the few fractions of a second you have before dissolving into chaos. So if you assign a high probability that you are a Boltzmann brain, maybe you should spend the time fantasizing about whatever gender or genders you prefer.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-06-04T17:49:51.308Z · LW(p) · GW(p)

How long are Boltzmann brains likely to last? I'd have thought that the vast majority of them flicker in and out of existence too quickly for any sort of thought or choice.

On the other hand, I suppose that if you have an infinite universe, there might be an infinite number of Boltzmann brains which last long enough for a thought, and even an infinite number which last long enough to die in vacuum.

comment by David_Gerard · 2012-06-04T10:15:54.459Z · LW(p) · GW(p)

For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him?

This is playing games with words, not saying anything new or useful. It presumes a meaning of "belief" such that there can be no such thing as an erroneous or unfounded belief, and that's just not how the word "belief" is used in English.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T10:24:47.072Z · LW(p) · GW(p)

You have misunderstood what I am saying. It is definitely not a consequence of my claim that there are no erroneous or unfounded beliefs. One can have a mistaken belief about Obama (such as the belief that he was born in Kenya), but for it to be a belief about Obama, there must be some sort of causal chain linking the belief state to Obama.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-04T13:08:24.007Z · LW(p) · GW(p)

So what you mean is that the Boltzmann brain can have no causally-connected beliefs about Obama, not no beliefs-as-everyone-else-uses-the-word about Obama. Fine, but your original statement and your clarification still gratuitously repurpose a word with a conventional meaning in a manner that will be actively misleading to the reader, and doing this is very bad practice.

comment by Jack · 2012-06-04T08:22:50.346Z · LW(p) · GW(p)

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head.

Whether or not a Boltzmann brain could successfully refer to Barack Obama doesn't change the fact that your Boltzmann brain copy doesn't know it can't have beliefs about Barack Obama. It's a scenario of radical skepticism. We can deny that Boltzmann brains have knowledge but they don't know any better.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T08:28:20.468Z · LW(p) · GW(p)

your Boltzmann brain copy doesn't know it can't have beliefs about Barack Obama

Sure, but I do. I have beliefs about Obama, and I know I can have such beliefs. Surely we're not radical skeptics to the point of denying that I possess this knowledge. And that's my point: I know things my Boltzmann brain copy can't, so we're evidentially distinguishable.

Replies from: Jack, Mitchell_Porter, DanArmak, D2AEFEA1
comment by Jack · 2012-06-04T14:05:23.557Z · LW(p) · GW(p)

Surely we're not radical skeptics to the point of denying that I possess this knowledge.

Of course we are. That's the big scary implication of the Boltzmann brain scenario. If you know a priori that you can't be a Boltzmann brain then it is easy to exclude them from your reference class. Your entire case is just an argument from incredulity, dressed up.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T18:50:32.880Z · LW(p) · GW(p)

No, that is not the big scary implication. At least not the one physicists are interested in. The Boltzmann brain problem is not just a dressed up version of Descartes' evil demon problem. Look, I think there's a certain kind of skepticism that can't be refuted because the standards of evidence it demands are unrealistically high. This form of skepticism can be couched in terms of an evil demon, or the Matrix, or Boltzmann brains. However you do it, I think it's a silly problem. If that was the problem posed by Boltzmann brains, I'd be unconcerned.

The problem I'm interested in is not that the Boltzmann brain hypothesis raises the specter of skepticism; the problem is that it, in combination with the SSA, is claimed to be strong evidence against our cosmological models. This is an entirely different issue from radical skepticism. In fact, this problem explicitly assumes the falsehood of radical skepticism. The hypothesis is supposed to disconfirm cosmological models precisely because we know we're not Boltzmann brains.

Replies from: Jack
comment by Jack · 2012-06-04T19:49:47.210Z · LW(p) · GW(p)

This is an entirely different issue from radical skepticism. In fact, this problem explicitly assumes the falsehood of radical skepticism. The hypothesis is supposed to disconfirm cosmological models precisely because we know we're not Boltzmann brains.

Right. The argument is a modus tollens on the premise that we could possibly be Boltzmann brains. It's: (a) we are not Boltzmann brains, (b) the SSA, (c) a cosmological model that predicts a high preponderance of Boltzmann brains: PICK ONLY TWO. Now it's entirely reasonable to reject the notion that we are Boltzmann brains on pragmatic grounds. It's something we might as well assume because there is little point to anything if we don't. But you can't dissolve the fact that the SSA and the cosmological model imply that we are Boltzmann brains by relying on our pragmatic insistence that we aren't (which is what you're doing with the externalism stuff).

Replies from: pragmatist
comment by pragmatist · 2012-06-04T20:09:20.294Z · LW(p) · GW(p)

My externalism stuff is just intended to establish that Boltzmann brains and actual humans embedded in stable macroscopic worlds have different evidence available to them. At this point, I need make no claim about which of these is me. So I don't think the anti-skeptical assumption plays a role here. My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).

The rejection of skepticism is a separate assumption. As you say, there's good pragmatic reason to reject skepticism. I'm not sure what you mean by "pragmatic reason", but if you mean something like "We don't actually know skepticism is false, but we have to operate under the assumption that it is" then I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).

So now we have two premises, both arrived at through different and independent chains of reasoning. The first is that subjective indistinguishability does not entail evidential indistinguishability. The second is that I am not a Boltzmann brain. The combination of these two premises leads to my conclusion, that one might be justified in excluding Boltzmann brains from one's reference class. Now, a skeptic would attack the second premise. Fair enough, I guess. But realize that is a different premise from the first one. If your objection is skepticism, this objection has nothing to do with semantic externalism. And I think skepticism is a bad (and somewhat pointless) objection.

Replies from: Jack
comment by Jack · 2012-06-04T20:54:04.851Z · LW(p) · GW(p)

My claim at this point is just that these two systems are in different epistemic situations (they have different beliefs, knowledge, evidence).

That's fine. But what matters is that they can't actually tell they are in different epistemic situations. You've identified an objective distinction between Boltzmann brains and causally-embedded people. That difference is essentially: for the latter the external world exists, for the former it does not. But you haven't provided any way for a Boltzmann brain or a regular old-fashioned human being to infer anything different about the external world. You're confusing yourself with word games. A Boltzmann brain and a human being might be evidentially distinguishable in that the former's intentional states don't actually refer to anything. But their subjective situations are evidentially indistinguishable. Taboo 'beliefs' and 'knowledge'. Their information states are identical. They will come to identical conclusions about everything. The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.

I disagree. We do actually know there is an external world. To claim that we do not is to raise the standard of evidence to an artificially high level. Consistent sensory experience of an object in a variety of circumstances is ordinarily sufficient to claim that we know the object exists (despite the possibility that we may be in the Matrix).

This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains. Which is, like, the whole point of the argument and why it's not actually a separate assumption. The Boltzmann brain idea, like the Simulation argument, is much stronger than typical Cartesian skepticism, and they are in no way identical arguments. Both say that most of the things with your subjective experiences are Boltzmann brains or are in a computer simulation. That's very different from saying that there is a possibility an evil demon is tricking you. And the argument you give above for knowing that there is an external world is sufficient to rebut traditional, Cartesian skepticism, but it is not sufficient to rebut the Boltzmann brain idea or the Simulation argument. These are more potent skepticisms.

Look at it this way: You have two premises that point to you being a Boltzmann brain. Your reply is that the SSA doesn't actually suggest you are a Boltzmann brain because your intentional states have referents and the Boltzmann brain's do not. That's exactly what the Boltzmann brain copy of you is thinking. Meanwhile the cosmological model you're working under says that just about everything thinking that thought is wrong.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T23:01:51.697Z · LW(p) · GW(p)

They [Boltzmann brains and human beings] will come to identical conclusions about everything.

Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.

The Boltzmann brain copy of pragmatist is just as confident that he is not a Boltzmann brain as you are.

No. The Boltzmann brain copy of pragmatist doesn't have any beliefs about Boltzmann brains (or brains in general) to be confident about. I know you disagree, but again, that disagreement is what's at issue here. Restating the disagreement in different ways isn't really an argument against my position.

This statement is only true if you reject either the SSA or a cosmological model that predicts most things that are thinking the same thoughts I am are Boltzmann brains.

The cosmological model doesn't predict that there are many Boltzmann brains thinking the same thoughts as me. It predicts that there are many Boltzmann brains in the same brain state as me. Whether the SSA says that I am likely to be one of the Boltzmann brains depends on what the appropriate reference class is. There is good reason to think that the appropriate reference class includes all observers with sufficiently similar evidence as me. I don't disagree with that version of the SSA. So far, no conflict between SSA + cosmology and the epistemology I've described.

What I disagree with is the claim that all subjectively similar observers have to be in the same reference class. The only motivation I can see for this is that subjective similarity entails evidential similarity. But I think there are strong arguments against this. These arguments do not assume anything about whether or not I am a Boltzmann brain. So I don't see why the arguments I give have to be strong enough to rebut the idea that I'm a Boltzmann brain. That's not what I'm trying to do. Maybe this comment gives a better idea of how I see the argument I'm responding to, and the nature of my response.

Replies from: Vladimir_Nesov, Jack
comment by Vladimir_Nesov · 2012-06-04T23:47:39.819Z · LW(p) · GW(p)

I deny that they will come to identical conclusions about everything.

The claim is merely that they will produce identical subsequent brain states and identical nerve impulses.

Replies from: pragmatist
comment by pragmatist · 2012-06-05T00:26:44.434Z · LW(p) · GW(p)

I agree with this claim, but I don't see how it can be leveraged into the kind of objection I was responding to. Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-06-05T00:48:59.466Z · LW(p) · GW(p)

Why should the fact that Boltzmann brains could go through an identical neural process convince me that the reasoning instantiated by me going through the neural process is wrong?

The conclusions of this reasoning, when it's performed by you, are not wrong, but they are wrong when the same reasoning is performed by a Boltzmann brain. In this sense, the process of reasoning is invalid: it doesn't produce correct conclusions in all circumstances, and that makes it somewhat unsatisfactory, but of course it works well for the class of instantiations that doesn't include Boltzmann brains.

As a less loaded model of some of the aspects of the problem, consider two atom-by-atom identical copies of a person who are given identical-looking closed boxes, with one box containing a red glove, and another a green glove. If the green-glove copy for some reason decides that the box it's seeing contains a green glove, then that copy is right. At the same time, if the green-glove copy so decides, then since the copies are identical, the red-glove copy will also decide that its box contains a green glove, and it will be wrong. Since evidence about the content of the boxes is not available to the copies, deciding either way is in some sense incorrect reasoning, even if it happens to produce a correct belief in one of the reasoners, at the cost of producing an incorrect belief in the other.

Replies from: pragmatist
comment by pragmatist · 2012-06-05T00:55:24.647Z · LW(p) · GW(p)

OK, that's a good example. Let's say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones, which send certain signals to the optic nerve and so on. In the case of the red-glove copy, a thermodynamic fluctuation occurs that leads it to go through the exact same physical process. That is, the fluctuation makes the cones react just as if they had interacted with green photons, and the downstream process is exactly the same. In this case, you'd want to say both duplicates have unjustified beliefs? The green-glove duplicate arrived at its belief through a reliable process; the red-glove duplicate didn't. I just don't see why our conclusion about the justification has to be the same across both copies. Even if I bought this constraint, I'd want to say that both of their beliefs are in fact justified. The red-glove one's belief is false, but false beliefs can be justified. The red-glove copy just got really unlucky.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-06-05T01:05:03.517Z · LW(p) · GW(p)

Let's say the green-glove copy comes to the conclusion that its glove is green because of photons bouncing off the glove and interacting with its cones

In my example, the gloves are not observed and the boxes are closed; the states of the brains of both copies, the nerve impulses they generate, and the words they say will all be identical by construction throughout the thought experiment.

(See also the edit to the grandparent comment, it could be the case that we already agree.)

Replies from: pragmatist
comment by pragmatist · 2012-06-05T01:09:41.093Z · LW(p) · GW(p)

Whoops, missed that bit. Of course, if either copy is forming a judgment about the glove's color without actual empirical contact with the glove, then its belief is unjustified. I don't think the identity of the copies is relevant to our judgment in this case. What would you say about the example I gave, where the box is open and the green-glove copy actually sees the glove? By hypothesis, the brains of both copies remain physically identical throughout the process. In this case, do you think we should judge that there is something problematic about the green-glove copy's judgment that the glove is green? This case seems far more analogous to a situation involving a human and a Boltzmann brain.

ETA: OK, I just saw the edit. We're closer to agreement than I thought, but I still don't get the "unsatisfactory" part. In the example I gave, I don't think there's anything unsatisfactory about the green-glove copy's belief formation mechanism. It's a paradigm example of forming a belief through a reliable process.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-06-05T01:19:00.970Z · LW(p) · GW(p)

The sense in which your (correct) belief that you are not a Boltzmann brain is justified (or unjustified) seems to me analogous to the situation with the green-glove copy believing that its unobserved glove is green. Justification is a tricky thing: actually not being a Boltzmann brain, or actually being the green-glove copy could in some sense be said to justify the respective beliefs, without a need to rely on distinguishing evidence, but it's not entirely clear to me how that works.

comment by Jack · 2012-06-04T23:46:36.888Z · LW(p) · GW(p)

Your argument against my view seems to presume that my view is false. I deny that they will come to identical conclusions about everything. When I reason, I come to conclusions about things in my environment. For example, I came to the conclusion that Obama was born in Hawaii, based on evidence about Obama that was available to me. The Boltzmann brain cannot even refer to Obama, so it cannot come to this conclusion.

Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?

If your answer is "Yes, I deny it," then I don't think you understand what it means to have identical brain states, or your view presumes metaphysically spooky features that you haven't unpacked. But what I understand your position to be is that you don't deny it, but you think that the Boltzmann brain can't come to identical conclusions about the world because its representation doesn't successfully refer to anything.

If you want to say that a belief must have a causal connection to the thing it is trying to refer to: fine. We can call what Boltzmann brains have "pseudo-beliefs". Now, how can you tell if you have beliefs or pseudo-beliefs? You can't. Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can't make the distinction.

The reason people are focusing on the viability of the skeptical scenario in their responses to you is that it looks like the reason you think this is a viable evidential distinction is that you are unreasonably confident that your mental states successfully refer to an external world. Moreover, a solution to the argument that doesn't reject the SSA or the cosmological model shouldn't just play with the meaning of words; readers should react with a sense of "Oh, good. The external world exists after all." If they don't, it's a good indication that you haven't really addressed the problem. This is true even though the argument starts by assuming that we are not Boltzmann brains, since obviously the logical structure remains intact.

You should stop assuming that everyone is misunderstanding you. Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.

Replies from: pragmatist
comment by pragmatist · 2012-06-05T00:24:47.947Z · LW(p) · GW(p)

Do you deny that the Boltzmann brain thinks it can refer to Obama? I.e. that it has some mental representation of an external world that is indistinguishable from your own except insofar as it does not successfully refer to anything in an external world?

Yes. I don't think the Boltzmann brain has a representation of the external world at all. Whether or not a system state is a representation of something else is not an intrinsic property of the state. It depends on how the state was produced and how it is used. If you disagree, could you articulate what it is you think makes a state representational?

I could use a salt and pepper shaker to represent cars when I'm at a restaurant telling my friend about a collision I recently experienced. Surely you'd agree that the particular arrangement of those shakers I constructed is not intrinsically representational. If they had ended up in that arrangement by chance they wouldn't be representing anything. Why do you think neural arrangements are different?

Claiming that the subjective situations of the human and the Boltzmann brain are evidentially distinguishable is totally bizarre when they themselves can't make the distinction.

I don't think this is what I'm claiming. I'm not sure what you mean by the "subjective situations" of the human and the Boltzmann brain, but I don't think I'm claiming that the subjective situations themselves are evidentially distinguishable. I don't think I can point to some aspect of my phenomenal experience that proves I'm not a Boltzmann brain.

I'm claiming that the evidence available to me goes beyond my phenomenal experience, that one's evidence isn't fully determined by one's "subjective situation". Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn't seem all that different from the view expressed in the Sequences here.

Further, I think if two observers have vastly different sets of evidence, then it is permissible to place them in separate reference classes when reasoning about certain anthropic problems.

You should stop assuming that everyone is misunderstanding you.

I don't assume this. I think some people are misunderstanding me. Others have expressed a position which I see as actually opposed to my position, so I'm pretty sure they have understood me. I think they're wrong, though.

Your incorrect prediction about how I would respond to your question is an indication that you have at least partially misunderstood me. I suspect the number of people who have misunderstood me on this thread is explicable by a lack of clarity on my part.

Everyone is giving you the same criticism in different words and your argument is not being upvoted. Update on this information.

I have, but it does not shift my credence enough to convince me I'm wrong. Does the fact that a majority of philosophers express agreement with externalism about mental content lead you to update your position somewhat? If you are unconvinced that I am accurately representing externalism, I encourage you to read the SEP article I linked and make up your own mind.

Replies from: Jack
comment by Jack · 2012-06-05T02:20:04.025Z · LW(p) · GW(p)

Yes. I don't think the Boltzmann brain has a representation of the external world at all.

Not "the" external world. "An external world" and when I say external world I don't mean a different, actually existing external world but a symbolic system that purports to represent an external world just as a humans brain contains a symbolic system (or something like that) that actually represents the external world.

If they had ended up in that arrangement by chance they wouldn't be representing anything. Why do you think neural arrangements are different?

The question here isn't the relationship between the symbolic system/neural arrangement (call it S) and the system S purports to represent (call it R). The question is about the relation between S and the rest of the neural arrangement that produces phenomenal experience (call it P). If I understood exactly how that worked I would have solved the problem of consciousness. I have not done so. I'm okay with an externalism that simply says for S to count as an intentional state it must have a causal connection to R. But a position that a subject's phenomenal experience supervenes not just on P and S but also on R is much more radical than typical externalism and would, absent a lot of explanation, imply that physicalism is false.

You're not a phenomenal externalist, correct?

I'm claiming that the evidence available to me goes beyond my phenomenal experience, that one's evidence isn't fully determined by one's "subjective situation". Boltzmann brains and human beings have different evidence. Is this the view you regard as bizarre? It doesn't seem all that different from the view expressed in the Sequences here.

Ah, this might be the issue of contention. What makes phenomenal experience evidence of anything is that we have good reason to think it is causally entangled in an external world. But a Boltzmann brain would have exactly the same reasons. That is, an external, persistent, physical world is the best explanation of our sensory experiences, and so we take our sensory experiences to tell us things about that world, which lets us predict and manipulate it. But the Boltzmann brain has exactly the same sensory experiences (and memories). It will make the same predictions (regarding future sensory data) and run the same experiments (except the Boltzmann brain's will be 'imaginary' in a sense), which will return the same results (in terms of sensory experiences).

I don't really want this to be about the definition of evidence. But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn't return the same updates and credences for both sets!

Replies from: pragmatist
comment by pragmatist · 2012-06-05T02:59:40.571Z · LW(p) · GW(p)

You're not a phenomenal externalist, correct?

No, I'm not, you'll be glad to hear. There are limits even to my lunacy. I was just objecting to your characterization of the BB's brain state as a representation. I'm not even all that happy with calling it a purported representation. If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who's doing the purporting? Is it sufficient that some system could be used as a representation for it to count as a purported representation? In that case, everything is a purported representation.

I think there's a tendency to assume our mental representations somehow have intrinsic representational properties that we wouldn't attribute to other external representations. This is probably because phenomenal representation seems so immediate. If a Boltzmann brain's visual system were in the same state mine is in when I see my mother, then maybe the brain isn't visually representing my mother, but surely it is representing a woman, or at least something. Well, no, I don't think so. If a physical system that is atom-for-atom identical to a photograph of my mother congealed out of a high entropy soup it would not be a representation of my mother. It wouldn't be a representation at all, and not even a purported one.

But surely having different sets of evidence implies that a perfect Bayesian reasoner wouldn't return the same updates and credences for both sets!

First, the Boltzmann brain and I do not return the same updates. The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn't even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.

Second, I disagree with your claim that perfect Bayesian reasoners would return different updates for different sets of evidence. I see no reason to believe this is true. As long as the likelihood ratios (and priors) are the same, the updates will be the same, but likelihood ratios aren't unique to particular pieces of evidence. As an example, suppose a hypothesis H predicts a 30% chance of observing a piece of evidence E1, and the chance of observing that evidence if H had been false is 10%. It seems to me entirely possible that there is a totally different piece of evidence, E2, which H also predicts has a 30% chance of being observed, and ~H predicts has a 10% chance of being observed. A Bayesian reasoner who updated on E1 would return the same credence as one who updated on E2, even though E1 and E2 are different. None of this seems particularly controversial. Am I misunderstanding your claim?
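(A quick numerical sketch of the likelihood-ratio point, for concreteness. The 30%/10% likelihoods are the ones from the example above; the 50% prior is an arbitrary illustrative choice, not anything specified in the discussion.)

```python
# Toy Bayes check: two different pieces of evidence, E1 and E2, with the
# same likelihoods under H and ~H, yield exactly the same posterior.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) for a binary hypothesis via Bayes' theorem."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

prior = 0.5  # arbitrary illustrative prior on H

# E1 and E2 are distinct observations, but H gives each a 30% chance and
# ~H gives each a 10% chance, as in the example above.
after_e1 = posterior(prior, 0.30, 0.10)
after_e2 = posterior(prior, 0.30, 0.10)

print(after_e1, after_e2)  # 0.75 and 0.75: identical updates from different evidence
```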

Replies from: Jack, TheOtherDave
comment by Jack · 2012-06-05T04:14:59.079Z · LW(p) · GW(p)

Am I misunderstanding your claim?

Yes, but that's my fault. Let's put it this way. A set of evidence is indistinguishable from another set of evidence if and only if an ideal Bayesian reasoner can update on either and then not update at all after learning the other set.

First, the Boltzmann brain and I do not return the same updates.

That's not the issue. Neither you nor your Boltzmann brain copy is an ideal Bayesian reasoner. The question is: what happens when you feed your evidence to an ideal Bayesian reasoner and then feed the Boltzmann brain's evidence. Will the ideal Bayesian reasoner find anything new to update on? What if you reverse the process and feed the Boltzmann brain's evidence first? Will the ideal Bayesian reasoner update then?

The only thing identical about our updates is their syntactical instantiation. Their semantics differ. In fact, I wouldn't even say the Boltzmann brain is performing Bayesian reasoning. Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.

The pancomputation issue is tricky but has nothing to do with this. By stipulation Boltzmann brains are physically similar enough to humans to make computations that produce exactly similar brain states. Moreover, you say you are not a phenomenal externalist so the computations made by Boltzmann brains apparently produce exactly similar phenomenal experiences. Pancomputation isn't any more of a problem for me than it is for you.

Perhaps this is just going to end up being a reductio on externalism.

Who's doing the purporting?

The Boltzmann brain, obviously. Are you denying that a Boltzmann brain can have any intentional states? I.e., can it believe things about its phenomenal experience, qualia, or other mental states? Can't it believe it believes something?

comment by TheOtherDave · 2012-06-05T04:04:05.543Z · LW(p) · GW(p)

If the salt and pepper shaker arrangement occurs by chance, does that make it a purported representation without actual representational content? Who's doing the purporting?

Well, the simpler part of this is that representation is a three-place predicate: system A represents system B to observer C1, which does not imply that A represents B to C2, nor does it prevent A from representing B2 to C2. (Nor, indeed, to C1.)

So, yes, a random salt-and-pepper-shaker arrangement might represent any number of things to any number of observers.

A purported representation is presumably some system A about which the claim is made (by anyone capable of making claims) that there exists a (B, C) pair such that A represents B to C.

But there's a deeper disconnect here having to do with what it means for A to represent B to C in the first place, which we've discussed elsethread.

Being able to find an isomorphism between a physical process some system is undergoing and a genuine computational process does not mean that the system is actually performing the computation. Accepting that would entail that every physical system is performing every computation.

Sure. And if I had a brain that could in fact treat all theoretically possible isomorphisms as salient at one time, I would indeed treat every physical system as performing every computation, and also as representing every other physical system. In fact, though, I lack such a brain; what my brain actually does is treat a vanishingly small fraction of theoretically possible isomorphisms as salient, and I am therefore restricted to only treating certain systems as performing certain computations and as representing certain other systems.

comment by Mitchell_Porter · 2012-06-04T09:06:24.462Z · LW(p) · GW(p)

I have beliefs about Obama, and I know I can have such beliefs. Surely we're not radical skeptics to the point of denying that I possess this knowledge.

You know about your concept of Obama. You have memories of sensations which seem to validate parts of this concept. But you do not know that your world contains an object matching the concept.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T09:25:02.226Z · LW(p) · GW(p)

You don't think I know that Obama exists (out there, in the world, not in my head)? It sounds like you're using the word "knowledge" very differently from the way it's ordinarily used. According to you, can we know anything about the world outside our heads?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-06-04T09:56:44.983Z · LW(p) · GW(p)

It sounds like you're using the word "knowledge" very differently from the way it's ordinarily used.

Well, sure, people say they "know" all sorts of things that they don't actually know. It would be formidably difficult to speak and write in a way that constantly acknowledges the layered uncertainty actually present in the situation. Celia Green says the uncertainty is total, which isn't literally true, but it's close to the truth.

Experience consists of an ongoing collision between belief and reality, and reality is that I don't know what will happen even one second from now, I don't know the true causes of my sensations, and so on - though I may have beliefs about these matters. My knowledge is a small island in an ocean of pragmatic belief, and mostly concerns transient superficial sensory facts, matters known by definition and deduction, perhaps some especially vivid memories tying together sensation and concept, and a very slowly growing core of ontological facts obtained by phenomenological reflection, such as the existence of time, thought, sensation, etc. Procedural knowledge also deserves a separate mention, though it is essentially a matter of knowing how to try to do something; success is not assured.

comment by DanArmak · 2012-06-04T14:11:17.617Z · LW(p) · GW(p)

Surely we're not radical skeptics to the point of denying that I possess this knowledge.

We're radical skeptics to the point of seriously considering that you may be a Boltzmann brain. And that's what you think being a Boltzmann brain would mean. (I also disagree with the way you use certain words, but others have said that.)

Replies from: pragmatist
comment by pragmatist · 2012-06-04T19:57:54.195Z · LW(p) · GW(p)

We're radical skeptics to the point of seriously considering that you may be a Boltzmann brain.

Maybe you are, but I'm not. Nothing in my argument (or the Boltzmann brain argument in general) requires one to seriously entertain the possibility that one is a Boltzmann brain.

Replies from: DanArmak
comment by DanArmak · 2012-06-04T21:14:47.282Z · LW(p) · GW(p)

Well, that's what you wanted to convince people of. And your argument in the OP is wrong, and others have explained why (incorrect word usage and word games).

There are other, better arguments: for example I'm as simple an observer as I might have been, which argues strongly that I'm not a chance fluctuation. It's legitimate to take such arguments as evidence against big-universe theories where BBs flourish. But if other data suggests a big universe, then there's still an unanswered question to be resolved.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T21:31:44.376Z · LW(p) · GW(p)

And your argument in the OP is wrong, and others have explained why (incorrect word usage and word games).

Just to clarify: Is the word "belief" one of the words you think I'm using incorrectly? And do you think that the incorrect usage is responsible for me saying things like "Boltzmann brains couldn't have beliefs about Obama"? Relatedly, do you think Boltzmann brains could in fact have beliefs about Obama?

I'm not trying to get into an argument about this here. I just want a sense of what people think about this. I might defend my claims about belief in a separate post later.

Replies from: TheOtherDave, DanArmak
comment by TheOtherDave · 2012-06-04T21:57:31.849Z · LW(p) · GW(p)

I can't speak for Dan, of course, but for my own part: I think this whole discussion has gotten muddled by failing to distinguish clearly enough between claims about the world and claims about language.

I'm not exactly sure what you or Dan mean by "incorrect word usage" here so I can't easily answer your first question, but I think the distinction you draw between beliefs and brain-states-that-could-be-beliefs-if-they-had-intentional-content-but-since-they-don't-aren't is not an important distinction, and using the label "belief" to describe the former but not the latter is not a lexical choice I endorse.

I think that lexical choice is responsible for you saying things like "Boltzmann brains couldn't have beliefs about Obama."

I think Boltzmann brains can enter brain-states which correspond to the brain-states that you would call "beliefs about Obama" were I to enter them, and I consider that correspondence strong enough that I see no justification to not also call the BB's brain-states "beliefs about Obama."

As far as I can tell, you and I agree about all of this except for what things in the world the word "belief" properly labels.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T22:10:36.714Z · LW(p) · GW(p)

Do you feel the same way about the word "evidence"? Do you feel comfortable saying that an observer can have evidence regarding the state of some external system even if its brain state is not appropriately causally entangled with that system?

I obviously agree with you that how we use "belief" and "evidence" is a lexical choice. But I think it is a lexical choice with important consequences. Using these words in an internalist manner generally indicates (perhaps even encourages) a failure to recognize the importance of distinguishing between syntax and semantics, a failure I think has been responsible for a lot of confused philosophical thinking. But this is a subject for another post.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-05T01:14:02.348Z · LW(p) · GW(p)

This gets difficult, because there's a whole set of related terms I suspect we aren't quite using the same way, so there's a lot of underbrush that needs to get cleared to make clear communication possible.

When I'm trying to be precise, I talk about experiences providing evidence which constrains expectations of future experiences. That said, in practice I do also treat clusters of experience that demonstrate persistent patterns of correlation as evidence of the state of external systems, though I mostly think of that sort of talk as kinda sloppy shorthand for an otherwise too-tedious-to-talk-about set of predicted experiences.

So I feel reasonably comfortable saying that an experience E1 can serve as evidence of an external system S1. Even if I don't actually believe that S1 exists, I'm still reasonably comfortable saying that E1 is evidence of S1. (E.g., being told that Santa Claus exists is evidence of the existence of Santa Claus, even if it turns out everyone is lying.)

If I have a whole cluster of experiences E1...En, all of which reinforce one another and reinforce my inference of S1, and I don't have any experiences which serve as evidence that S1 doesn't exist, I start to have compelling evidence of S1 and my confidence in S1 increases. All of this can occur even if it turns out that S1 doesn't actually exist. And, of course, some other system S2 can exist without my having any inkling of it. This is all fairly unproblematic.

So, moving on to the condition you're describing, where E1 causes me to infer the existence of S1, and S1 actually does exist, but S1 is not causally entangled with E1. I find it simpler to think about a similar condition where there exist two external systems, S1 and S2, such that S2 causes E1 and on the basis of E1 I infer the existence of S1, while remaining ignorant of S2. For example, I believe Alice is my birth mother, but in fact Alice (S1) and my birth mother (S2) are separate people. My birth mother sends me an anonymous email (E1) saying "I am your birth mother, and I have cancer." I infer that Alice has cancer. It turns out that Alice does have cancer, but that this had no causal relationship with the email being sent.

I am comfortable in such an arrangement saying that E1 is evidence that S1 has cancer, even though E1 is not causally entangled with S1's cancer.

Further, when discussing such an arrangement, I can say that the brain-states caused by E1 are about S1 or about S2 or about both or neither, and it's not at all clear to me what if anything depends on which of those lexical choices I make. Mostly, I think asking what E1 is really "about" is a wrong question; if it is really about anything it's about the entire conjoined state of the universe, including both S1 and S2 and everything else, but really who cares?

And if instead there is no S2, and E1 just spontaneously comes into existence, the situation is basically the same as the above, it's just harder for me to come up with plausible examples.

Replies from: pragmatist
comment by pragmatist · 2012-06-05T05:17:24.534Z · LW(p) · GW(p)

Perhaps it would help to introduce a distinction here. Let's distinguish internal evidence and external evidence. P1 counts as internal evidence for P2 if it is procedurally rational for me to alter my credence in P2 once I come to accept P1, given my background knowledge. P1 is external evidence for P2 if the truth of P1 genuinely counterfactually depends on the truth of P2. That is, P1 would be false (or less frequently true, if we're dealing with statistical claims) if P2 were false. A proposition can be internal evidence without being external evidence. In your anonymous letter example, the letter is internal evidence but not external evidence.

Which conception of evidence is the right one to use will probably depend on context. When we are attempting to describe an individual's epistemic status -- the amount of reliable information they possess about the world -- then it seems that external evidence is the relevant variety of evidence to consider. And if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations. Going back to an early example of Eliezer's:

I'm going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument. In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like a "pointer" to 70% probability. But at one point I thought: "What should an (AI) superintelligence say in the same situation? Should it treat its 99% probability estimates as 70% probability estimates because so many human beings make the same mistake?" In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that - across the whole of Earth's history - the enormously vast majority of entities who had believed themselves superintelligent would be wrong. The majority of the referents of the pointer "I am a superintelligence" would be schizophrenics who believed they were God.

A superintelligence doesn't just believe the bald statement that it is a superintelligence - it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on. But if you tell this to a mental patient, the mental patient can immediately respond: "Ah, but I too possess a very detailed, very accurate self-model!" The mental patient may even come to sincerely believe this, in the moment of the reply. Does that mean the superintelligence should wonder if it is a mental patient? This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn't know if it's you or the rock.

If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases? If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes. But I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients, a difference attributable to differences in external evidence.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-05T13:19:38.143Z · LW(p) · GW(p)

I accept your working definitions for "internal evidence" and "external evidence."

When we are attempting to describe an individual's epistemic status -- the amount of reliable information they possess about the world -- then it seems that external evidence is the relevant variety of evidence to consider

I want to be a little careful about the words "epistemic status" and "reliable information," because a lot of confusion can be introduced through the use of terms that abstract.

I remember reading once that courtship behavior in robins is triggered by the visual stimulus of a patch of red taller than it is wide. I have no idea if this is actually true, but suppose it is. The idea was that the ancestral robin environment didn't contain other stimuli like that other than female robins in estrus, so it was a reliable piece of evidence to use at the time. Now, of course, there are lots of visual stimuli in that category, so you get robins initiating courtship displays at red socks on clotheslines and at Coke cans.

So, OK. Given that, and using your terms, and assuming it makes any sense to describe what a robin does here as updating on evidence at all, then a vertical red swatch is always internal evidence of a fertile female, and it was external evidence a million years ago (when it "genuinely" counterfactually depended on the presence of such a female) but it is not now. If we put some robins in an environment from which we eliminate all other red things, it would be external evidence again. (Yes?)

If what I am interested in is whether a given robin is correct about whether it's in the presence of a fertile female, external evidence is the relevant variety of information to consider.

If what I am interested in is what conclusions the robin will actually reach about whether it's in the presence of a fertile female, internal evidence is the relevant variety of information to consider.

If that is consistent with your claim about the robin's epistemic status and about the amount of reliable information the robin possesses about the world, then great, I'm with you so far. (If not, this is perhaps a good place to back up and see where we diverged.)

if two observers differ substantially in the external evidence available to them, it seems justifiable to place them in separate reference classes for certain anthropic explanations.

Sure, when available external evidence is particularly relevant to those anthropic explanations.

If the superintelligence were engaging in anthropic reasoning, should it put itself in the same reference class as the mental patients in all cases?

So A and B both believe they're superintelligences. As it happens, A is in fact a SI, and B is in fact a mental patient. And the question is, should A consider itself in the same reference class as B. Yes?

...I think the answer is fairly obviously no, and this is because of the vast difference in the epistemic situations of the superintelligence and the mental patients,

Absolutely agreed. I don't endorse any decision theory that results in A concluding that it's more likely to be a mental patient than a SI in a typical situation like this, and this is precisely because of the nature of the information available to A in such a situation.

If we think identical (or similar) internal evidence requires that they be in the same reference class, then I think the answer may be yes.

Wait, what?

Why in the world would A and B have similar internal evidence?

I mean, in any normal environment, if A is a superintelligence and B is a mental patient, I would expect A to have loads of information on the basis of which it is procedurally rational for A to conclude that A is in a different reference class than B. Which is internal evidence, on your account. No?

But, OK. If I assume that A and B do have similar internal evidence... huh. Well, that implicitly assumes that A is in a pathologically twisted epistemic environment. I have trouble imagining such an environment, but the world is more complex than I can imagine. So, OK, sure, I can assume such an environment, in a suitably hand-waving sort of way.

And sure, I agree with you: in such an environment, A should consider itself in the same reference class as B. A is mistaken, of course, which is no surprise given that it's in such an epistemically tainted environment.

Now, I suppose one might say something like "Sure, A is justified in doing so, but A should not do so, because A should not believe falsehoods." Which would reveal a disconnect relating to the word "should," in addition to everything else. (When I say that A should believe falsehoods in this situation, I mean I endorse the decision procedure that leads to doing so, not that I endorse the result.)

But we at least ought to agree, given your word usage, that it is procedurally rational for A to conclude that it's in the same reference class as B in such a tainted environment, even though that isn't true. Yes?

comment by DanArmak · 2012-06-04T21:46:30.132Z · LW(p) · GW(p)

Yes, yes, and yes - in the usual meaning of "belief". There are different-but-related meanings which are sometimes used, but the way you use it is completely unlike the usual meanings.

More importantly, you state that a BB can't have "beliefs" in your sense, which is a (re)definition - that merely makes your words unclear and misunderstood - but then you conclude that because you have "beliefs" you are not a BB. This is simply wrong, even using your own definition of "belief" - because under your definition, having "real beliefs" is not a measurable fact of someone's brain in reality, and so you can never make conclusions like "I have real beliefs" or "I am not a BB" based on your own brain state. (And all of our conclusions are based on our brain states.)

IOW: a BB similar to yourself would reach the same conclusions as you - that it is not a BB - but it would be wrong. However, it would be reasoning from the exact same evidence as you. Therefore, your reasoning is faulty.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T22:37:22.963Z · LW(p) · GW(p)

IOW: a BB similar to yourself would reach the same conclusions as you - that it is not a BB - but it would be wrong. However, it would be reasoning from the exact same evidence as you.

I disagree that it would be reasoning from the exact same evidence as me. I'm an externalist about evidence too, not just about belief.

Replies from: DanArmak
comment by DanArmak · 2012-06-05T11:06:13.902Z · LW(p) · GW(p)

Again, you're using the word "evidence" differently from everyone else. This only serves to confuse the discussion.

Tabooing "evidence", what I was saying is that a BB would have the same initial brain-state (what I termed "evidence") and therefore would achieve the same final brain-state (what I termed "conclusions"). The laws of physics for its brain-state evolution, and the physical causality between the two states, are the same as for your brain. This is trivially so by the very definition of a BB that is sufficiently similar to your brain.

I don't know what you mean by "externalist evidence" and I don't see how it would matter. The considerations that apply here are exactly the same as in Eliezer's discussion of p-zombies. Imagine a BB which is a slightly larger fluctuation than a mere brain; it is a fluctuation of a whole body, which can live for a few seconds, and can speak and think in that time. It would think and say "I am conscious" for the same reasons as you do; therefore it is not a p-zombie. It would think and say "Barack Obama exists" for the same reasons as you do; therefore what everyone-but-you calls its knowledge and its beliefs about "Barack Obama", are of the same kind as yours.

comment by D2AEFEA1 · 2012-06-04T10:46:03.402Z · LW(p) · GW(p)

Wait, would an equivalent way to put it be evidential as in "as viewed by an outside observer" as opposed to "from the inside" (the perspective of a Boltzmann brain)?

comment by D2AEFEA1 · 2012-06-04T10:56:07.553Z · LW(p) · GW(p)

But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history.

You're assuming that there exists something like our universe, with at least one full human being like you having beliefs causally entwined with Obama existing. What if there is none, and there are only Boltzmann brains or something equivalent?

In a Boltzmann brain scenario, how can you even assume that the universe in which they appear is ruled by the same laws of physics as those we seemingly observe? After all, the observations and beliefs of a Boltzmann brain aren't necessarily causally linked to the universe that generated it.

You could well be a single "brain" lost in a universe whose laws make it impossible for something like our own Hubble volume to exist, where all your beliefs about physics, including beliefs about Boltzmann brains, are just part of the unique, particular beliefs of that one brain.

comment by [deleted] · 2012-06-04T15:00:50.684Z · LW(p) · GW(p)

I found an article that claims to debunk the Boltzmann brain hypothesis, but I can't properly evaluate everything he is saying. http://motls.blogspot.com.es/2008/08/boltzmann-brains-trivial-mistakes.html

Replies from: khafra
comment by khafra · 2012-06-04T18:51:05.491Z · LW(p) · GW(p)

Interesting article; and Motl's a well-known, knowledgeable, and sometimes correct debunker. I didn't disagree substantially with anything up until here:

It was the probability density for a brain creation itself, around exp(-10^{57}), that determined the "importance" of the effect of a spontaneous brain creation. If we multiply this small number by a huge volume, we obtain a huge result that has no physical relevance for any place of the Universe in the present, past, or future (because the factor of the "spacetime volume" transformed the quantity into some "statistics of events in the whole spacetime" i.e. a global quantity which can't possibly influence any observable phenomena in a region of spacetime, by locality and causality).

I'm not exactly sure what he means, but it seems like even if the brains are not local to anything else, they are the observers; so the objection seems moot.

comment by Alejandro1 · 2012-06-04T20:35:39.081Z · LW(p) · GW(p)

Two points, in response to your update:

Firstly, I'd say that the most common point of disagreement between you and the people who have responded in the thread is not that they take skepticism more seriously than you; it is that they disagree with you about the implications of semantic externalism. You say "Subjective indistinguishability does not entail evidential indistinguishability." I think most people here intuitively disagree with this, and assume your "evidence" (in the sense of the word that comes into Bayesian reasoning) includes only what you "know" in the subjective, introspective sense, not the externalist sense. (E.g. I interpret in this way the comments of Jack, Vladimir and ryan_sandwich.) It might have been clearer if you had made two posts, or one divided in two parts, one explaining clearly your take on externalism and its conception of evidence, and one explaining its relevance to the Boltzmann Brains (BB) problem.

Secondly, I think the BB problem is a more serious argument for skepticism than the standard evil demon or brain-in vat scenarios. It allows us (which the other scenarios don't) to make an argument of this form: "Either current physics as I understand it is essentially correct, or it is not. If not, I know nothing about the universe. If it is, it implies that I am very likely to be a BB, in which case I know nothing about the universe. So I know nothing about the universe." I'm not defending this argument as valid, just saying that it seems to be in a different class from saying "my experience is indistinguishable from that of a brain-in-vat, so I don't know if I am one or not".

Replies from: pragmatist
comment by pragmatist · 2012-06-04T20:55:16.706Z · LW(p) · GW(p)

I think most people here intuitively disagree with this, and assume your "evidence" (in the sense of the word that comes into Bayesian reasoning) includes only what you "know" in the subjective, introspective sense, not the externalist sense.

Yeah, I'm getting this now, and I must admit I'm surprised. I had assumed that accepting some form of semantic externalism is obviously crucial to a fully satisfactory naturalistic epistemology. I still think this is true, but perhaps it is less obvious than I thought. I might make a separate post defending this particular claim.

You're right that the BB-based skeptical argument you offer is a different argument for skepticism than brains-in-vats. I'm not sure it's a more serious argument, though. The second premise in your argument ("If current physics is not essentially correct, I know nothing about the universe.") seems obviously false. Also the implication that I am very likely to be a BB does not come just from current physics. It comes from current physics in conjunction with something like SSA. So there's a third horn here, which says SSA is incorrect. And accepting this doesn't seem to have particularly dire consequences for our epistemological status.

comment by shokwave · 2012-06-04T07:13:07.725Z · LW(p) · GW(p)

I feel like I should be able to hyperlink this to something, but I can't find anything as relevant as I remembered. So here goes:

Your reference class is not fixed. Nor is it solely based on phenomenal state, I'd argue, although this second claim is not well-supported.

That is, Boltzmann brains are in your reference class when dealing with something all sentiences deal with; for progressively more situation-specific reasoning, the measure of Boltzmann brains in your reference class shrinks. By dealing with concrete situations one ought to be able to shrink the measure to epsilon.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T07:32:23.862Z · LW(p) · GW(p)

I think Bostrom's claim is that no matter what situation you're dealing with, all observers that are subjectively indistinguishable from you must be part of your reference class. Whether (and which) observers who are subjectively distinguishable get to be part of it will depend on what you're reasoning about.

comment by AlexSchell · 2012-06-05T06:31:19.187Z · LW(p) · GW(p)

I can only skim most of this right now, but you're definitely misconstruing what Bostrom has to say about Boltzmann. He does not rely on our having non-qualitative knowledge that we're not Boltzmann brains. Please re-read his stuff: http://anthropic-principle.com/book/anthropicbias.html#5b

Replies from: pragmatist
comment by pragmatist · 2012-06-05T07:48:52.557Z · LW(p) · GW(p)

Huh, you're right. I totally misremembered Bostrom's argument. Reading it now, it doesn't make much sense to me. He moves from the claim that proportionally very few observers would live in low entropy regions as large as ours to the claim that very few observers would observe low entropy regions as large as ours. The former claim is a consequence of Boltzmann's model, but it's not at all obvious that the latter claim is. It would be if we had reason to think that most observers produced by fluctuations would have veridical observations, but why think that is the case? The veridicality of our observations is the product of eons of natural selection. It seems pretty unlikely that a random fluctuation would produce veridical observers. Once we establish this, there's no longer a straightforward inference from "lives in higher entropy region" to "observes higher entropy region".

Later, he offers this justification for neglecting "freak observers" (observers whose beliefs about their environment are almost entirely spurious):

At the same time, the existence of freak observers would not prevent a theory that is otherwise supported by our evidence from still being supported once the freak observers are taken into account—provided that the freak observers make up a small fraction of all the observers that the theory says exist. In the universe we are actually living in, for example, it seems that there may well be vast numbers of freak observers (if only it is sufficiently big). Yet these freak observers would be in an astronomically small minority compared to the regular observers who trace their origin to life that evolved by normal pathways on some planet. For every observer that pops out of a black hole, there are countless civilizations of regular observers. Freak observers can thus, in the light of our observation selection theory, be ignored for all practical purposes.

But this is just false for our current cosmological models. They predict that freak observers predominate (as does Boltzmann's own model). So it seems like Bostrom isn't even really engaging with the actual Boltzmann brain problem. The argument I attribute to him I probably encountered elsewhere. The argument is not uncommon in the literature.

Replies from: AlexSchell
comment by AlexSchell · 2012-07-03T03:30:18.114Z · LW(p) · GW(p)

You're right of course that Bostrom is not engaging with the problem you're focusing on. But the context for discussing Boltzmann's idea seems different from what he says about "freak observers" -- the former is about arguing that the historically accepted objection to Boltzmann is best construed as relying on SSA, whereas the rationale for the latter is best seen in his J Phil piece (http://www.anthropic-principle.com/preprints/cos/big2.pdf). But I'll grant you that his argument about Boltzmann is suboptimally formulated (turns out I remembered it being better than it actually was).

However, there is a stronger argument (obvious to me, and maybe charitably attributable to Bostrom-sub-2002) that you seem to be ignoring, based on the notion that SSA can apply even if you can (based on your evidence) exclude certain possibilities about who you are. The argument doesn't need the absurd inference from "lives in higher entropy region" to "observes higher entropy region" but rather needs Boltzmann's model to suggest that "very few observers observe large low-entropy regions". Since most Boltzmann brains have random epistemic states and can't be described as observing anything, the latter sentence is of course true conditional on Boltzmann's model. Most observers don't observe anything (and even the lucky ones who do tend to have tiny-sized low-entropy surroundings) so virtually all observers do not observe large low-entropy regions.

Anyhow, if a Boltzmann-brain-swamped scenario is true, a very tiny fraction of observers make our observations, whereas if the world isn't Boltzmann-brain-swamped (e.g. if universes reliably stop existing before entering the de Sitter phase), a much less tiny fraction of observers make our observations. The SSA takeaway from this is that our observations disconfirm Boltzmann-brain-swamped scenarios.
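(To make the shape of that update concrete, here is a toy sketch. The priors and the observer fractions are invented placeholders for "a very tiny fraction" and "a much less tiny fraction"; nothing in the argument depends on the specific numbers.)

```python
# Toy SSA-style update: conditioning on "an observer making our observations"
# shifts credence away from the Boltzmann-brain-swamped model.

prior = {"swamped": 0.5, "not_swamped": 0.5}  # arbitrary illustrative priors

# Invented stand-ins for the fraction of reference-class observers who make
# our observations under each model.
frac_making_our_obs = {"swamped": 1e-20, "not_swamped": 1e-5}

# Under SSA, P(our observations | model) tracks that fraction.
unnormalized = {m: prior[m] * frac_making_our_obs[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: p / total for m, p in unnormalized.items()}

print(posterior)  # "swamped" ends up with credence around 1e-15: heavily disconfirmed
```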

You can of course exclude Boltzmann brains from the reference class, but this doesn't get you very far. Much more rarely but still infinitely often, there will arise observers with some surrounding environments and veridical observations about those surrounding environments. Still, it seems very plausible that the typical observations of such observers are different from the expected observations of evolved beings, so they dominate the reference class and again the SSA argument goes through. For your move against this argument to be successful, you have to have good grounds for excluding all de Sitter phase observers. And based on my rudimentary understanding it seems that the variety of de Sitter phase observers won't allow you to do that with general considerations like externalism. Hope this helps.

A question: why do you think your non-qualitative/external evidence should be taken into account differently from qualitative/internal evidence if SSA holds? At least for observers subjectively indistinguishable from ourselves, your choice is to exclude from the reference class the ones who don't have our presumed external evidence. Why not just include everyone in the reference class and then update on both the qualitative and the external evidence?

comment by Adele_L · 2012-06-04T04:20:35.122Z · LW(p) · GW(p)

Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.

So the idea is that Boltzmann brains would form in smaller fluctuations, while a larger fluctuation would be required to account for us. Since smaller fluctuations are more common, it's more likely that a given brain is a Boltzmann one.

But does this take into account the fact that one large fluctuation can give rise to trillions of brains? Enough so that it would be more likely that an observer would be in one of these larger ones?

Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs.

Under this idea, wouldn't the Boltzmann brain have an equivalent belief about "Barack Obama", which would correspond to some isomorphic thing in its environment? And then, wouldn't this be extremely unlikely, since by definition, the Boltzmann brain is in a higher entropy place (as it would observe a world isomorphic to ours, which has relatively low entropy)?

Replies from: DanArmak, pragmatist
comment by DanArmak · 2012-06-04T14:19:48.202Z · LW(p) · GW(p)

So the idea is that Boltzmann brains would form in smaller fluctuations, while a larger fluctuation would be required to account for us. Since smaller fluctuations are more common, it's more likely that a given brain is a Boltzmann one.

Your wording implicitly assumes you're not a Boltzmann brain. If you are one, the "us" is an illusion and no larger fluctuation is necessary.

Replies from: pragmatist, Adele_L
comment by pragmatist · 2012-06-04T18:43:19.087Z · LW(p) · GW(p)

Your wording implicitly assumes you're not a Boltzmann brain.

That's because I'm not one, and I know this! Look, even in the Bostrom argument, the prevalence of Boltzmann brains is the basis for rejecting the Boltzmann model. The argument's structure is: This model says that it is highly improbable that I am not a Boltzmann brain. I am in fact not a Boltzmann brain. Therefore, this model is disconfirmed.

People seem to be assuming that the problem raised by the possibility of Boltzmann brains is some kind of radical skepticism. But that's not the problem. Maybe some philosophers care about that kind of skepticism, but I don't think it's worth worrying about. The problem is that if a cosmological model predicts that I am a Boltzmann brain, then that model is disconfirmed by the fact that I'm not. And some people claim that our current cosmological models do in fact predict that I am a Boltzmann brain. Everyone in this debate takes it for granted that I am not actually a Boltzmann brain. I'm surprised people here regard this as a controversial premise.

Replies from: DanArmak
comment by DanArmak · 2012-06-04T21:21:09.444Z · LW(p) · GW(p)

That's because I'm not one, and I know this!

See my reply to you elsethread. I also agree with this reply.

comment by Adele_L · 2012-06-04T18:43:10.213Z · LW(p) · GW(p)

Oh, okay.

Is there a good introduction to Boltzmann brains somewhere? I don't seem to understand it very well.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T05:05:07.198Z · LW(p) · GW(p)

But does this take into account the fact that one large fluctuation can give rise to trillions of brains? Enough so that it would be more likely that an observer would be in one of these larger ones?

Yeah, it does. The probabilities involved here are ridiculously unbalanced. The frequency of a fluctuation (assuming ergodicity) is exponentially suppressed by the entropy decrease it requires, so even modest differences in entropy correspond to enormous differences in probability, and the difference in entropy here is itself huge. For comparison, it's been estimated that a fluctuation into our current macroscopic universe would be likelier than a fluctuation into the macroscopic state of the very early universe by a factor of about 10^(10^101).
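Schematically, assuming the standard equilibrium-fluctuation picture (the entropy symbols here are placeholders, not computed values):

```latex
% Schematic form of the standard fluctuation estimate; the S values
% are placeholders, not computed quantities.
P(\text{fluctuation into macrostate } M) \;\propto\; e^{\,[S(M)-S_{\mathrm{eq}}]/k_B},
\qquad
\frac{P(\text{our current universe})}{P(\text{very early universe})}
\;=\; e^{\,[S_{\mathrm{now}}-S_{\mathrm{early}}]/k_B} \;\sim\; 10^{10^{101}}.
```

Since the exponent itself is astronomically large, shaving off even a little of the required entropy dip (a lone brain rather than a whole low-entropy universe) buys an astronomically large gain in probability.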

Under this idea, wouldn't the Boltzmann brain have an equivalent belief about "Barack Obama", which would correspond to some isomorphic thing in its environment? And then, wouldn't this be extremely unlikely, since by definition, the Boltzmann brain is in a higher entropy place (as it would observe a world isomorphic to ours, which has relatively low entropy)?

Not sure what you're getting at here. The belief state in the Boltzmann brain wouldn't be caused by some external stable macroscopic object. It's produced by the chance agglomeration of microscopic collisions (in Boltzmann's model).

Replies from: pragmatist, Adele_L
comment by pragmatist · 2012-06-04T21:56:51.084Z · LW(p) · GW(p)

I get why many of my other comments on this post (and the post itself) have been downvoted, but I can't figure out why the parent of this comment has been downvoted. Everything in it is fairly uncontroversial science, as far as I know. Does someone disagree with the claims I make in that comment? If so, I'd like to know! The possibility that I might be saying false things about the science bothers me.

comment by Adele_L · 2012-06-04T05:28:08.391Z · LW(p) · GW(p)

Oh okay then.

The belief state in the Boltzmann brain wouldn't be caused by some external stable macroscopic object.

I don't think it matters what caused the belief. Just that, if it had the same state as your brain, that state would correspond to a brain that had observed a low-entropy region.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T05:58:40.564Z · LW(p) · GW(p)

I'm still having trouble understanding your point. I think there is good reason to think the Boltzmann brain does not in fact have any beliefs. Beliefs, as we understand them, are produced by certain sorts of interactions between brains and environments. The Boltzmann brain's brain states are not attributable to interactions of that sort, so they are not beliefs. Does that help, or am I totally failing to get what you're saying?

Replies from: Adele_L
comment by Adele_L · 2012-06-04T21:34:41.633Z · LW(p) · GW(p)

Beliefs, as we understand them...

But wouldn't a Boltzmann brain understand its "beliefs" the same way, despite them not corresponding to reality?

comment by Shmi (shminux) · 2012-06-04T05:30:19.448Z · LW(p) · GW(p)

There are claims that Boltzmann brains pose a significant problem for contemporary cosmology.

What I don't get is why people even bother spending time discussing such nonsense seriously, let alone make meaningless claims like this.

Replies from: pragmatist, David_Gerard
comment by pragmatist · 2012-06-04T05:31:31.506Z · LW(p) · GW(p)

Not everyone shares your extreme positivism, I guess.

ETA: Re-reading this comment, I see that it might come across as flippant or hostile. It's not intended to be either. I just think that you have a perspective on what questions are worth addressing (and how they should be addressed) that is unusual. Whether it is the right perspective (I don't think it is) is the subject of another discussion, one that I don't wish to have here.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-04T06:51:59.816Z · LW(p) · GW(p)

Thank you. I suppose I did not state my point well; I was just expressing my frustration. What I meant to comment on is that the odds of a Boltzmann brain are so minuscule, they are way down in the noise level, somewhere below Jesus being an actual son of God or the Galaxy being on Orion's (the cat's) belt. Thus, what is the reason to prefer this particular pseudo-scientific model over others? Every time I hear Don Page talk about it with a straight face, I get a twilight-zone feeling.

Replies from: pragmatist, DanArmak, DanielLC, pragmatist
comment by pragmatist · 2012-06-04T07:34:40.197Z · LW(p) · GW(p)

the odds of a Boltzmann brain are so minuscule

Depends on what odds you're talking about. If you're talking about the odds of a Boltzmann brain popping into existence at any particular instant, then yes, the probability is absurdly low. If you're talking about a Boltzmann brain existing at some point in the universe, then I don't think the odds are all that small. It's the latter probability that's relevant to the kinds of issues Page cares about.
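A toy way to see the difference (the numbers here are made up just to illustrate the point, not taken from any actual cosmological calculation):

```python
import math

# Toy illustration: per-"trial" probability vs. probability over many trials.
# p: made-up chance of a Boltzmann brain forming in one trial (say, one
#    Hubble volume over one unit of time) -- absurdly small.
# N: number of trials available; in an eternal de Sitter phase this grows
#    without bound, so any fixed value understates it.
p = 1e-100
for N in (1e50, 1e100, 1e150):
    # P(at least one BB) = 1 - (1 - p)^N, computed in log space to avoid
    # underflow; log(1 - p) is about -p for tiny p.
    log_none = N * math.log1p(-p)
    p_at_least_one = 1.0 - math.exp(log_none)
    print(f"N = {N:.0e}: expected BBs ~ {p*N:.1e}, "
          f"P(at least one) ~ {p_at_least_one:.3f}")
# Once N is large enough (and in an eternal universe it is unbounded),
# the probability of at least one Boltzmann brain goes to 1.
```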

comment by DanArmak · 2012-06-04T14:17:56.487Z · LW(p) · GW(p)

the odds of a Boltzmann brain are so minuscule

Why do you think that? That's precisely the point of discussion.

Please start by explaining whether you think BBs are very unlikely to exist over the history of the universe, or whether (reading your wording carefully) many BBs (will) exist but you think being one is unlikely for some reason.

comment by DanielLC · 2012-06-04T07:12:28.809Z · LW(p) · GW(p)

I don't see why they'd be unlikely a priori. You'd expect them to pop up in an infinitely long-lasting universe, unless there's something that makes them keep getting less likely over time, rather than their rate settling toward some fixed value as entropy maxes out.

comment by pragmatist · 2012-06-04T08:14:56.248Z · LW(p) · GW(p)

This isn't some fringe interest of Page's though. Other cosmologists who have written and spoken about the issue are Hawking, Hartle, Guth, Susskind, Linde, Carroll, Vilenkin and Bousso (I could go on). Hardly crackpots. Doesn't the fact that a number of eminent scientists consider this an issue worth talking about shift your opinion about it being nonsense a little bit?

Replies from: David_Gerard
comment by David_Gerard · 2012-06-04T10:14:32.929Z · LW(p) · GW(p)

Do you mean, they have written about Boltzmann brains, or that they actually raise concerns similar to those you raise in this post? A string of names does not actually assert the latter.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T10:17:10.639Z · LW(p) · GW(p)

All of the physicists I named, except Hartle, have raised concerns about the Boltzmann brain problem threatening observational cosmology. Hartle has argued against the SSA, so he thinks Boltzmann brains aren't problematic. I probably shouldn't have included his name in the list. Still, the fact that he has published on the issue suggests that he regards it as more than just nonsense.

comment by David_Gerard · 2012-06-04T10:12:28.548Z · LW(p) · GW(p)

What I don't get is why people even bother spending time discussing such nonsense seriously, let alone make meaningless claims like this.

Sure you do: it's the observed phenomenon that many things philosophers consider major problems are artifacts of human cognitive psychology, cf. p-zombies. The bad philosophy in question does not survive because it says anything about the physical universe, but because its proponents refuse to be convinced that their claimed problem is not a problem.

tl;dr when philosophers claim they've come up with something that says something about physics, they've generally failed to realise that the ancient Greeks deciding to concentrate on pure thought and eschew experimentation was an error.

Replies from: pragmatist
comment by pragmatist · 2012-06-04T10:18:21.028Z · LW(p) · GW(p)

Philosophers did not come up with the Boltzmann brain problem. Most discussion of the problem that I am aware of is in the physics literature. Of course, your diagnosis may still be accurate. Many contemporary theoretical physicists also fall into the trap of concentrating on pure thought and eschewing experimentation.