[Link] Physics-based anthropics?
post by Brian_Tomasik · 2014-11-14T07:02:03.307Z · LW · GW · Legacy · 13 comments
Nick Bostrom's self-sampling assumption treats us as a random sample from a set of observers, but this framework raises several paradoxes. Instead, why not treat the stuff we observe as a random sample from the set of all stuff that exists? I elaborate on this proposal in a new essay subsection: "SSA on physics rather than observers?" At first glance, it seems to work better than any of the mainstream schools of anthropics. Comments are welcome.
Has this idea been suggested before? I noticed that Robin Hanson proffered something similar way back in 1998 (four years before Bostrom's Anthropic Bias). I'm surprised Hanson's proposal hasn't received more attention in the academic literature.
Comments sorted by top scores.
comment by drnickbone · 2014-11-14T20:26:21.697Z · LW(p) · GW(p)
If I understand correctly, this approach to anthropics strongly favours a simulation hypothesis: the universe is most likely densely packed with computing material ("computronium"), and much of that computational resource is dedicated to simulating beings like us. Further, it supports a form of Doomsday Hypothesis: simulations mostly get switched off before they start to simulate lots of post-human people (who are not like us), and the resource is then assigned to running new simulations (back at a human level).
Have I misunderstood?
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2014-11-15T06:20:41.938Z · LW(p) · GW(p)
Yes, that's right. Note that SIA also favors sim hypotheses, but it does so less strongly because it doesn't care whether the sims are of Earth-like humans or of weirder creatures.
Here's a note I wrote to myself yesterday:
Like SIA, my PSA anthropics favors the sim arg more strongly than ordinary anthropics does.
The sim arg works regardless of one's anthropic theory because it requires only a principle of indifference over indistinguishable experiences. But it's a trilemma, so it might be that humans go extinct or post-humans don't run early-seeming sims.
Given the existence of aliens and other universes, the ordinary sim arg pushes more strongly for us being a sim because even if humans go extinct or don't run sims, whichever civilization out there runs lots of sims should have lots of sims of minds like ours, so we should be in their sims.
PSA doesn't even need aliens. It directly penalizes hypotheses that predict fewer copies of us in a given region of spacetime. Say we're deciding between
H1: no sims of us, and
H2: 1 billion sims of us.
H1 would take a billion-fold bigger probability penalty than H2. Even if H2 started out millions of times less probable than H1, it would end up hundreds of times more probable.
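A rough numerical sketch of that update (illustrative numbers only, not from the original comment): each hypothesis's prior weight is multiplied by the number of copies of us it predicts, then renormalized.

```python
# Sketch of the PSA update described above: weight each hypothesis's prior by
# the number of copies of "us" it predicts in the region, then renormalize.
# The specific numbers below are illustrative, not from the post.

def psa_posterior(priors, copies_of_us):
    """Return normalized posteriors from (unnormalized) priors and copy counts."""
    weights = [p * n for p, n in zip(priors, copies_of_us)]
    total = sum(weights)
    return [w / total for w in weights]

# H1: no sims of us (just the one physical copy); H2: a billion sims of us.
# Suppose H2 starts out five million times less probable than H1 a priori.
priors = [5e6, 1.0]             # unnormalized prior odds for [H1, H2]
copies = [1, 1_000_000_000]     # copies of us predicted by each hypothesis

p_h1, p_h2 = psa_posterior(priors, copies)
print(p_h2 / p_h1)  # -> 200.0: H2 ends up hundreds of times more probable
```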
Also note that even if we're not in a sim, PSA, like SIA, yields Katja's doomsday argument based on the Great Filter.
Either way it looks very unlikely there will be a far future, ignoring model uncertainty and unknown unknowns.
Replies from: drnickbone
↑ comment by drnickbone · 2014-11-21T20:52:01.787Z · LW(p) · GW(p)
Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".
One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2014-11-22T11:28:01.449Z · LW(p) · GW(p)
One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions.
Yes. :) The first paragraph here identifies at least one problem with every anthropic theory I'm aware of.
Replies from: drnickbone
↑ comment by drnickbone · 2015-03-14T20:24:49.507Z · LW(p) · GW(p)
I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.
I'm not convinced about the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.
Searching for features of human interest (like "leader of a nation") is likely to be pretty complicated, and require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such "observers" encountered while stepping through u, and simply picks out the nth entry on the list, giving the "nth" observer complexity about K(n). Unless George Washington happened to be a very special n (why would he be?), he would be no simpler to find than anyone else.
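A toy sketch of the shape of such a program (the observer test here is just a placeholder for the physical criteria above; nothing in it is meant as a real detector, only the overall structure whose length is roughly K(n) plus a constant):

```python
# Toy sketch of the extraction program described above: step through the
# universe's "regions", keep those passing a crude observer test, and return
# the nth one. The test is a stand-in for the physical criteria in the
# comment (persistent low entropy, matter/energy throughput, structured
# electrical discharges correlated with surrounding events).

def extract_nth_observer(regions, looks_like_observer, n):
    """Return the nth region (0-indexed) that passes the observer test."""
    found = 0
    for region in regions:
        if looks_like_observer(region):
            if found == n:
                return region
            found += 1
    raise ValueError("fewer than n+1 observers in this universe")

# Purely illustrative: a "universe" of integers in which every multiple of 7
# counts as an observer. The program's length is dominated by K(n), not by
# anything specific to one particular observer.
print(extract_nth_observer(range(1000), lambda r: r % 7 == 0, n=3))  # -> 21
```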
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2015-03-17T01:47:57.251Z · LW(p) · GW(p)
Nice point. :)
That said, your example suggests a different difficulty: people who happen to land on special values of n get higher weight for apparently no reason. Maybe one way to address this is to note that the number n someone gets is relative to (1) how the list is enumerated and (2) what universal Turing machine is being used for KC in the first place, and maybe averaging over these arbitrary details would blur the specialness of, say, the 1-billionth observer according to any particular coding scheme. Still, I doubt the KCs of different people would be exactly equal even after such adjustments.
comment by jessicat · 2014-11-14T07:52:12.886Z · LW(p) · GW(p)
I agree with this but I prefer weighting things by computation power instead of physics cells (which may turn out to be somewhat equivalent). It's easy to justify this model by assuming that some percentage of the multiverse's computation power is spent simulating all universes in parallel. See Schmidhuber's paper on this.
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2014-11-14T08:13:32.156Z · LW(p) · GW(p)
Cool -- thanks! Yeah, my proposal is just about how to conceptualize the sample space, but it would be trivial to replace
count(stuff I observe) / count(all stuff)
with
measure(stuff I observe) / measure(all stuff)
for some non-constant measure function.
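In code, the swap might look like this (a minimal sketch; `measure` is whatever weighting one prefers, such as jessicat's computation-power weighting, and is not anything defined in the essay):

```python
# Minimal sketch of the generalization: PSA's weight for a hypothesis is the
# fraction of all stuff that is the stuff I observe, and swapping the raw
# count for an arbitrary (non-constant) measure leaves the structure intact.

def psa_weight(observed_stuff, all_stuff, measure=len):
    """Fraction of all stuff, under the given measure, that I observe."""
    return measure(observed_stuff) / measure(all_stuff)

# measure=len recovers count(stuff I observe) / count(all stuff); passing a
# computation-weighted measure instead recovers the weighting-by-computation
# variant discussed above.
```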
comment by Manfred · 2014-11-14T15:43:55.079Z · LW(p) · GW(p)
If weighting by amount of stuff in the way you mean is a consequence of vanilla anthropics, then you should do it. If it is not, then you need ordinary causal information to justify doing it.
Example of when it's justified by vanilla anthropics: there are a bunch of planets, and each planet has some small chance of generating intelligent life. You're intelligent life - this causes you to think there are more planets than the base rate would suggest.
In general, I would disagree with any attempts to find a simple object-level rule for assigning anthropic probabilities.
Replies from: jessicat
↑ comment by jessicat · 2014-11-14T19:41:47.909Z · LW(p) · GW(p)
I'm not sure what you mean by "vanilla anthropics". Both SSA and SIA are "simple object-level rules for assigning anthropic probabilities". Vanilla anthropics seems to be vague enough that it doesn't give an answer to the doomsday argument or the presumptuous philosopher problem.
On another note, if you assume that a nonzero percentage of the multiverse's computation power is spent simulating arbitrary universes with computation power in proportion to the probabilities of their laws of physics, then both SSA and SIA will end up giving you very similar predictions to Brian_Tomasik's proposal, although I think they might be slightly different.
Replies from: Manfred
↑ comment by Manfred · 2014-11-14T23:01:47.021Z · LW(p) · GW(p)
I'm not sure what you mean by "vanilla anthropics".
Am working on it - as a placeholder, for many problems, one can use Stuart Armstrong's proposed algorithm of finding the best strategy according to a non-anthropic viewpoint that adds the utilities of different copies of you, and then doing what that strategy says.
Both SSA and SIA are "simple object-level rules for assigning anthropic probabilities"
Yup. Don't trust them outside their respective ranges of validity.
if you assume [stuff about the nature of the universe]
You will predict [consequences of those assumptions, including anthropic consequences]. However, before assuming [stuff about the universe], you should have [observational data supporting that stuff].
Replies from: jessicat
↑ comment by jessicat · 2014-11-15T00:03:49.325Z · LW(p) · GW(p)
Am working on it - as a placeholder, for many problems, one can use Stuart Armstrong's proposed algorithm of finding the best strategy according to a non-anthropic viewpoint that adds the utilities of different copies of you, and then doing what that strategy says.
I think this essentially leads to SIA. Since you're adding utilities over different copies of you, it follows that you care more about universes in which there are more copies of you. So your copies should behave as if they anticipate the probability of being in a universe containing lots of copies to be higher.
However, before assuming [stuff about the universe], you should have [observational data supporting that stuff].
It's definitely not a completely justified assumption. But we do have evidence that the universe supports arbitrary computations, that it's extremely large, and that some things are determined randomly, so as a result it will be running many different computations in parallel. This provides some evidence that, if there is a multiverse, it will have similar properties.
Replies from: Brian_Tomasik
↑ comment by Brian_Tomasik · 2014-11-15T06:50:05.015Z · LW(p) · GW(p)
I think this essentially leads to SIA. Since you're adding utilities over different copies of you, it follows that you care more about universes in which there are more copies of you.
Of course, it's slightly different from SIA because SIA wants more copies of anyone, whether they're you or not. If the proportion of individuals who are you remains constant, then SIA is equivalent.
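A quick numerical check of that equivalence (illustrative numbers only): when the fraction of observers who are you is held fixed across hypotheses, weighting by total observers and weighting by copies of you give the same posterior.

```python
# If the fraction of observers who are "you" is the same under every
# hypothesis, weighting by total observers (SIA) and weighting by copies of
# you give the same posterior. Numbers below are purely illustrative.

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

priors = [0.5, 0.5]
total_observers = [1e9, 1e12]
fraction_you = 1e-6                      # held constant across hypotheses
copies_of_you = [n * fraction_you for n in total_observers]

sia_posterior = normalize([p * n for p, n in zip(priors, total_observers)])
you_posterior = normalize([p * n for p, n in zip(priors, copies_of_you)])
print(sia_posterior)  # [~0.000999, ~0.999001]
print(you_posterior)  # same distribution either way
```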
Elsewhere in my essay, I discuss a prudential argument (which I didn't invent) for assuming there are lots of copies of you. Not sure if that's the same as Armstrong's proposal.
PSA is essentially favoring more copies of you per unit of spacetime / physics / computation / etc. -- as long as we understand "copy of you" to mean "instance of perceiving all the data you perceive right now" rather than just a copy of your body/brain but in a different environment.