If they can host brains, they're "similar" enough for my original intention - I was just excluding "alien worlds".
I don't see why the total count of brains matters as such; you are not actually sampling your brain (a complex 4-dimensional object), you are sampling an observer-moment of consciousness. A Boltzmann brain has one such moment; an evolved human brain has roughly 88.3 x 10^9 (rough back-of-envelope calculation, based on a ballpark figure of 25 ms for the "quantum" of human conscious experience and a 70-year lifespan). Add in the aforementioned requirement for evolved brains to exist in multiplicity wherever they do occur, and the ratio of human moments to Boltzmann moments in a sufficiently large defined volume of (large-scale homogeneous) multiverse gets higher still.
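For what it's worth, the back-of-envelope figure checks out; here is the arithmetic spelled out, using the same assumed inputs (25 ms per conscious moment, 70-year lifespan — both ballpark assumptions, not established values):

```python
# Back-of-envelope count of observer-moments in one evolved human lifetime.
# Inputs are the assumptions from the comment above, not measured constants.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds
LIFESPAN_YEARS = 70
MOMENT_SECONDS = 0.025  # assumed 25 ms "quantum" of conscious experience

lifespan_seconds = LIFESPAN_YEARS * SECONDS_PER_YEAR
moments = lifespan_seconds / MOMENT_SECONDS

print(f"{moments:.3e}")  # prints 8.836e+10
```

That is about 88.4 x 10^9, agreeing with the quoted 88.3 x 10^9 to rounding; versus a Boltzmann brain's single moment, the per-brain ratio is on the order of 10^11 before the multiplicity argument is even applied.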
This is all assuming that a Boltzmann brain actually experiences consciousness at all. Most descriptions of them seem to be along the lines of "matter spontaneously organises such that for an instant it mimics the structure of a conscious brain". It's not clear to me, though, that an instantaneous time-slice through a consciousness is itself conscious (for much the same reason that an instant extracted from a physical trajectory lacks the property of movement). If you overcome that by requiring them to exist for a certain minimum amount of time, they obviously become astronomically rarer than they already are.
Seems to me that combining those factors gives a reasonably low expectation for being a Boltzmann brain.
... but I'm only an amateur, this is probably nonsense ;-)
I would say: because it seems that (in our universe and those sufficiently similar to count, anyway) the total number of observer-moments experienced by evolved brains should vastly exceed the total number of observer-moments experienced by Boltzmann brains. Evolved brains necessarily exist in large groups, and stick around for aeons as compared to the near-instantaneous conscious moment of a BB.
Further, somewhat more speculative thought:
A totally causal universe has the potential to have an initial state (including the rules of its time-evolution) that is extremely simple (low Kolmogorov complexity), as compared to a causal-but-with-some-exceptions universe. As Eliezer points out, it also requires vastly less computing power to 'run'.
It therefore seems perfectly reasonable that universe-simulators working with non-infinite resources would have a strong preference for simulating absolutely causal universes - and that we should therefore not be terribly surprised to find ourselves in one.
Isn't that "hint" just an observer selection effect?
Is it surprising that the correlation between "universes that are absolutely/highly causal" and "universes in which things as complex as conscious observers can be assembled by evolution and come to contemplate the causal nature of their universe" is very high? (The fitness value of intelligence must be at least somewhat proportional to how predictable reality is...)
I worry about this "what sort of thingies can be real" expression. It might be more useful to ask "what sort of thingies can we observe". The word "real", except as an indexical, seems vacuous.
It's a bit questionable if the relationship is one-way, but it could be designed to be a symmetric "best" for the companion too. Okay, more CPU cycles, but this reeks of hard take-off, which probably means new physics...
Also, a bit more technically but I hope worth adding: if the companion already exists in any possible world, then engineering a situation where you are able to perceive one another isn't creating a pattern ex nihilo, it's discovering one. That takes some of the wind out of the argument, although you still certainly have a point on privacy if the relationship is asymmetric.
The answer seems fairly simple under modal realism (roughly, the thesis that all logically possible worlds exist in the same sense as mathematical facts exist, and thus that the term "actual" in "our actual world" is just an indexical).
If the simulation accurately follows a possible world, and contains a unit of (dis)utility, it doesn't generate that unit of (dis)utility, it just "discovers" it; it proves that for a given world-state an event happens which your utility function assigns a particular value. Repeating the simulation again is also only rediscovering the same fact, not in any sense creating copies of it.
As a Trinitarian who's just come out of the lurker closet, I clearly shouldn't miss this... looking forward to it!