The table of different sampling assumptions in anthropics
post by avturchin · 2022-06-29T10:41:18.872Z · LW · GW · 5 comments
Contents
1. Classical assumptions based on the difference between possible and real observers.
2. Selection only in some tempo-spatial aspects.
3. The role of the observers’ self-reflection in sampling.
4. Decision-theoretic approaches
5. Computational approach.
6. The difference between subjective and objective probabilities
7. Other similar principles
8. “Measure” as a probability of being a given observer
9. Related ideas
The meta-level problem of choosing the right assumption
The meta-meta problem
“… anthropic reasoning: complicated, error-prone, looks a bit magical but once you start getting real evidence much of it becomes irrelevant.” (Sandberg)
The meta-meta-problem of anthropics is how to find the right way to choose between principles of selecting the correct sampling assumptions, but before we get there, let's first list all known and even unknown sampling assumptions.
The full list, including some related ideas, has 38 items (google doc here, text version below).
I suggest several possibly new assumptions (like “intensity sampling” and “observer-moment-size sampling”) for the sake of completeness of the list; this doesn’t mean that I endorse them, and it doesn’t mean that they were never suggested before.
1. Classical assumptions based on the difference between possible and real observers.
- SSA. Self-sampling assumption. I am randomly selected from all actually existing observers of my reference class.
- The problem with choosing the reference class
- Many other sampling assumptions are just SSA for a specific class
- Different reference classes [LW · GW] imply different types of ends.
- [Doomsday argument-like problems are designated via black box below]
- Classical Doomsday Argument: Civilization will end soon (but this is only problematic if we have overly optimistic expectations about the future).
- Adam and Eve: probability pumping via manipulating the number of future observers (but maybe evolution has already used this?)
- SSSA. Strong self-sampling assumption. The same as SSA, but the selection is between the observer-moments (OM).
- It gives too much weight to Boltzmann Brains and freak dreams, as they generate OM without proper observers.
- SSSSA. Super-strong self-sampling assumption: I am selected from all minds weighted by their intelligence. Variant: I am selected from all observer-moments proportional to their “size”.
- Explains why I am not an ant.
- Superintelligence is impossible, or I will be it.
- Very large OMs are rare.
- Universal SSA. SSA is true for every reference class. [LW · GW] I am randomly selected from each and every reference class to which I belong if they are mutually independent and all their members consist of qualified observers.
- There is a “bad corner” in the space of all possible minds where all types of the Doomsday Argument are true.
- SIA. Self-indication assumption. “We should give higher probability to hypotheses that contain more observers, other things being equal” (Tomasik). Classical definition [LW · GW]: “I am randomly selected from all possible observers”. (A toy SSA-vs-SIA sketch is given at the end of this section.)
- If the universe is ergodically infinite, SIA is uninformative, as all possible observers do exist in it. Therefore, the fact, that I exist, does not give me any new information.
- SIA-doomsday: The Great Filter is ahead (Grace).
- Presumptuous philosopher: the hypothesis which postulates a larger number of observers could be validated without experiments.
- SIA+SSA. “But sometimes the term "SIA" also refers to "SSA+SIA", which favors more copies of you specifically, without needing to define observers in general”, Tomasik.
- SIA explains why I am in the long (tail) branch in Sleeping Beauty, and SSA tells me where exactly I am in that branch.
- Untypical SIA [LW · GW]. My uniqueness is an argument that many attempts to create me were made, e.g. many stars and many people exist. It is similar to SIA, but it is now not an assumption but a theorem, which could be proved. It works only if the given actually existing observer is “untypical”, and thus its creation requires more attempts.
- It is an argument for Rare Earth: a larger number of planets which tried but failed to create life is more probable than a smaller number of such planets. The multiverse is real and serves as an explanation of fine-tuning.
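To make the SSA/SIA contrast concrete, here is a minimal Python sketch (my own illustration, not part of the original table; the two toy hypotheses and their observer counts are assumptions). Under SSA the likelihood of observing a given birth rank in a world of N observers is 1/N; SIA additionally weights each world by N, which drives the Presumptuous philosopher and cancels the Doomsday-style update.

```python
# Toy comparison of SSA and SIA (illustrative sketch; hypotheses and numbers are made up).

def posterior(hypotheses, my_rank, use_sia):
    """hypotheses: dict name -> (prior, total number of observers)."""
    weights = {}
    for name, (prior, n_observers) in hypotheses.items():
        likelihood = 0.0 if my_rank > n_observers else 1.0 / n_observers  # SSA: I am a random observer
        weight = prior * likelihood
        if use_sia:
            weight *= n_observers  # SIA: worlds with more observers get proportionally more weight
        weights[name] = weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

hypotheses = {"small world": (0.5, 10), "big world": (0.5, 1000)}

print("SSA:", posterior(hypotheses, my_rank=5, use_sia=False))  # favours the small world (~0.99)
print("SIA:", posterior(hypotheses, my_rank=5, use_sia=True))   # the N factor cancels; back to 50/50
```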
2. Selection only in some tempo-spatial aspects.
- PSA. Physics sampling assumption by Tomasik: how often a given observation will appear in different universes. The highest spatial density is favoured: I am likely to be in the universes which have a higher density of observers like me. (A toy sketch of this weighting is given at the end of this section.)
- A variant: The same, but also taking into account the size of the universes (the regions with the highest density could be relatively small).
- PSA favors Big Crunch over Heat Death as it has a smaller total volume.
- PSA favors solipsism as it gives the highest density of observers.
- In MWI, most of my copies appear just before the Big Rip.
- Panspermia is almost proved, as it ensures a higher spatial density of civilizations.
- Now-time selection principle. I am randomly selected from all observers who live now, but not in the past or in the future. A variant: I am selected from “now and the past” (depending on the model of time, e.g. the block model of time).
- Guth’s youngness paradox: I am more likely to live in the youngest bubble universe in the world with cosmological inflation. Therefore, no aliens exist in my observable universe.
- It also means that I should be the earliest observer in our civilization who learns something about anthropics (which seems to be false).
- MWI-selection. I am randomly selected from all MWI branches, counting each branch separately (with observers).
- Anti-youngness paradox: I am now in the last moment of the universe’s existence (e.g. just before False Vacuum decay) as the number of branches is maximal at that moment. Quantum immortality seems to balance the problem, as I will constantly miraculously survive the end of the universe.
- Alternatively, if we count not the number of the branches, but the total measure of all branches as constant, then the measure declines extremely [LW · GW] quickly for any given branch and I must be very early.
- Entropy-based selection. The worlds with high complexity are thermodynamically improbable and rare.
- We are the most complex civilization ever; decline will follow (as in the Strugatskys’ novel “Definitely Maybe”).
- Technological progress will experience unexpected obstacles aimed at lowering the complexity of civilization.
- Most Boltzmann brains are simple.
- Civilization-sampling assumption. We are a typical civilization [LW · GW] (Hanson’s Grabby Aliens model).
- Grabby aliens are at the distance of around 1 Gyr.
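A minimal sketch of the PSA-style weighting mentioned above (my own illustration; the priors, densities and volumes are invented). Plain PSA weights each universe by the density of observers like me; the variant additionally multiplies by the universe's size.

```python
# Toy PSA weighting (illustrative only; priors, densities and volumes are made up).

def psa_posterior(universes, include_volume=False):
    """universes: dict name -> (prior, observer density, volume)."""
    weights = {}
    for name, (prior, density, volume) in universes.items():
        weight = prior * density          # PSA: favour a higher density of observers like me
        if include_volume:
            weight *= volume              # variant: effectively the total number of observers
        weights[name] = weight
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

universes = {
    "dense but small": (0.5, 1e-3, 1e3),
    "sparse but huge": (0.5, 1e-6, 1e9),
}

print(psa_posterior(universes))                        # plain PSA favours the dense universe
print(psa_posterior(universes, include_volume=True))   # the size-weighted variant favours the huge one
```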
3. The role of the observers’ self-reflection in sampling.
- Qualified observers. I am randomly selected from the set of qualified observers: the ones who are thinking about anthropics.
- Variant (wider): those who at least can think about anthropics.
- Variant (narrower): those who are thinking exactly like me.
- The end is very soon, as qualified observers appeared only recently (around 1970) in our civilization’s history and their number is growing.
- Minimal reference class by Bostrom: subjectively indistinguishable agents.
- A question defines the answer. Question-defined reasoning: “The room is green because I am asking why it is green”.
- After a right question, the world disappears, like in “The Nine Billion Names of God”.
4. Decision-theoretic approaches
- FDT. Functional decision theory: I am randomly selected from all functionally similar lines of thought. (This is my interpretation of FDT, could be wrong – AT)
- It gives too much weight to non-human computations.
- Since I find myself human, there must be no such computations in stones, computers or AIs.
- ADT. Anthropic decision theory (a variant of Updateless Decision Theory). I should reason in a way which increases global utility, without updating on my local position.
- I should ignore my copies in simulation and BB as they have zero impact, even if they are numerically dominating.
- Utility monsters?
- I should reason as if I am a future creator of FAI: in that case, I would have the biggest possible impact.
- FNC. Full non-indexical conditioning. The chances of my appearance under different theories are: “P(theory of the universe | I exist) ∝ P(I exist | theory of the universe) * P(theory of the universe)” (Tomasik). (A toy numerical sketch follows below.)
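A minimal numerical sketch of the FNC update (my own illustration; the per-observer chance of instantiating exactly my experiences and the observer counts are invented). Each theory's weight is the chance that at least one observer with exactly my experiences appears anywhere under it.

```python
# Toy FNC update (illustrative only; all numbers are hypothetical).

def fnc_posterior(theories):
    """theories: dict name -> (prior, number of observers, per-observer chance of being exactly me)."""
    weights = {}
    for name, (prior, n, p) in theories.items():
        p_i_exist = 1 - (1 - p) ** n       # P(someone with exactly my experiences exists | theory)
        weights[name] = prior * p_i_exist  # P(theory | I exist) ∝ P(I exist | theory) * P(theory)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

theories = {
    "small universe": (0.5, 10**9, 1e-12),
    "huge universe":  (0.5, 10**15, 1e-12),
}
print(fnc_posterior(theories))  # the huge universe wins: my exact experiences almost surely occur there
```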
5. Computational approach.
- KSA. Computations sampling assumption – similar to UDASSA, using Kolmogorov complexity to weight the measure.
- UDASSA. OMs are selected with weight decreasing in their Kolmogorov complexity (roughly, measure ∝ 2^(−K)); see the toy sketch at the end of this section.
- “Simpler” minds are more common.
- “Law without law” article. In this theory, “I” and “others” have different statistics; observer states are connected via the computational complexity of the transition from one to another in the mathematical universe.
- It is similar to UDASSA, but the difference is that UDASSA calculates the probability of each OM, and LwL computes the probability of transition from one OM to another.
- Simulations are unlikely: a person who is often observable is not the one who is often experienced from the inside.
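As a rough illustration of the UDASSA-style weighting referenced above (my own sketch; real Kolmogorov complexity is uncomputable, so an invented description length in bits stands in for it), each observer-moment gets measure proportional to 2^(−K):

```python
# Toy UDASSA-style measure (illustrative only; description lengths are made-up stand-ins
# for the uncomputable Kolmogorov complexity of each observer-moment).

observer_moments = {
    "simple mind, blank experience": 20,   # hypothetical description length in bits
    "ordinary human moment":         40,
    "very complex posthuman moment": 80,
}

weights = {om: 2.0 ** (-k) for om, k in observer_moments.items()}
total = sum(weights.values())
measure = {om: w / total for om, w in weights.items()}

for om, m in measure.items():
    print(f"{om}: {m:.2e}")
# Simpler observer-moments dominate the measure: "simpler minds are more common".
```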
6. The difference between subjective and objective probabilities
- Perspective-based reasoning [LW · GW]. Self-sampling is impossible as it confuses the first-person and third-person perspectives.
- “Because self-locating probability has no underlying process, assumptions treating the first person as the sampling outcome are needed to fill this gap.”
- I am not typical.
- Anthropic trilemma [LW · GW] approach. Subjective probabilities are completely different from objective ones and could be manipulated by the order in which copies are created.
- Anthropic devil [LW(p) · GW(p)]: a measure monster which manipulates subjective probabilities in unobservable ways.
7. Other similar principles
- Copernican mediocrity principle. “In physical cosmology, the Copernican principle states that humans, on the Earth or in the Solar System, are not privileged observers of the universe”.
- Bland indifference principle. “bland indifference principle… prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are…”
- Unification theory of identity (Tomasik). I am represented by all my copies simultaneously; therefore, no selection happens and no sampling assumption is needed.
- No problems like “I am BB” or “I am in a simulation”.
- It is impossible to predict a single unitary future: any possible future outcome has probability 1. Bad outcomes dominate.
8. “Measure” as a probability of being a given observer
- The number of microstates. Each macro state of mind (OM) could be represented by many microstates. The measure is proportional to the number of microstates which represent one macro state (similar to entropy).
- Minds with the largest number of microstates should have the highest measure. These are minds with large sensory fields, but simple interpretation states. We are them, so there are no minds with even larger sensory fields.
- Also “meditating” minds with large but “empty” sensory fields are more likely??
- Qualia-selection. I am randomly selected only from those who:
- have qualia
- have “my type” of qualia (my red)
- have mental qualia about anthropics
- Humans are qualia monsters: we have the most intense qualia.
- Fine-tuning to generate mental qualia is no longer needed: the universe existed only for me to think this exact thought and may disappear after that.
- Qualia world is real: only qualia exist, no other objects.
- Causal connectedness. Selection should be only from a causally-connected region of the universe.
- I am randomly selected only from observers in the same causally-connected region of the universe, but not from other universes.
- Other universes are just possible ones.
- I am in the universe with the largest causal connection between observers (like cosmological inflation)??
- Energy-based selection. I am selected based on the energy used for computations; I am unlikely to be a Boltzmann brain, as they “use” minimal energy.
- Very energy-inefficient, “thick” brains will dominate. Thus simulations are less probable, but “zoos” are still likely.
- I am more likely to be a biological brain than a simulation.
- “Measure monsters” and anthropic devils hunt observers in the multiverse by increasing the energy spent on observers’ copies.
- Intensity-based selection. My experience is in the middle of the set of all minds ordered by the intensity of pain and pleasure.
- S-risks and paradise engineering (wire-heading, hedonium) are very rare.
- Life is generally boring.
9. Related ideas
- Observation selection. It is not observers that are selected, but observations.
- Anthropic principle and fine-tuning.
- Anthropic shadow: future catastrophes are more likely.
- Universe sampling. Fecund universes replicating via black holes, from evo-devo theory.
- Self-replicating universes are the most numerous ones.
- We will destroy ourselves by creating a black hole which will give rise to many universes with the same fine-tuning.
- Panpsychism. All physical processes and computations have qualia and I am randomly selected from all of them.
- Anti-panpsychism cut-off: there should be some cut-off in complexity below which experience is not possible, and it is near the human level. But if we add the “qualified observers” requirement, that requirement itself becomes the cut-off.
- Waterfall argument: I am a Boltzmann brain inside some physical process.
- Gaia hypothesis (quantum immortality in the past favors effective survivors).
- The idea is similar to the Gaia hypothesis, which claims that the Earth has a special ability to survive climate instability.
- In the same way, if I survived many rounds of quantum suicide, I likely have some special survival ability.
- Stronger version: I am more likely to be an observer who will eventually get a “measure burst” in the future.
- This assumes retrocausality of measure, which may be reinforced by loops in time.
- I am more likely to live in the period near the Singularity, AGI and life extension, and to be interested in anthropics.
- Stronger version: “Transcendental advantage”: my personal life story is untypical and evolves in the direction of becoming a “God”: a being of high complexity and measure.
- This claim seems less crazy if we replace “I” with “planet”: The Earth has a special history compared to other planets and will probably evolve into superintelligence.
- If loops in time (similar to inescapable prophecies) are possible, I am likely stuck in one of them, as this significantly increases measure.
- Me-sampling. I am randomly selected from all minds who think that they are the same observer, e.g. Alexey Turchin, but have different OMs (Identity-selection assumption).
- In some sense, it is opposite to SSSA which selects from the same OMs, which could be in different observers.
- Eventually, my personality becomes the most typical one in the universe.
- Uncertainty about a valid sampling method.
- We should take an average of different sampling methods proportional to our credence in them (or based on the prediction market).
- Meta-Doomsday argument: We should multiply our credence in DA by its predictions.
- Superposition of uncertainties. Similar to the above, but assumes that this uncertainty is ontological, not epistemic (similar in some sense to the Heisenberg principle).
- The universe behaves in such a way that I can’t choose the right sampling method, because if I can, the size of my reference class will become extremely small.
The meta-level problem of choosing the right assumption
On the meta-level, there are several possible principles for choosing between the three dozen sampling assumptions in the table:
- Choose just one selection principle and hold it as “correct”. However, how can we prove that it is true? What is the source of the rightness of a selection principle? Experiment? God? Logic? Choice?
- Test it experimentally.
- Find unbeatable mathematical proof.
- Kill all other assumptions as logically inconsistent.
- Ditch selection of assumption. Anthropics doesn’t work at all.
- The difference between selection principles doesn’t matter in most cases, except in situations involving infinities or many copies. Don’t be afraid to update.
- All of the sampling assumptions are valid and we are located in some very peculiar part of the universe where different types of Doomsday argument are all true.
- The right choice of assumption depends on the subtle details of the question asked.
- One should use meta-level uncertainty over different assumptions and assign them different credences (a toy sketch of credence-weighted averaging follows after this list).
- Use the type of assumption which benefits you most.
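A minimal sketch of the credence-weighted-averaging option above (my own illustration; the credences are assumed to be even, and Sleeping Beauty's probability of heads is used as the example question):

```python
# Credence-weighted mixture of sampling assumptions (illustrative; credences are assumed).
# Each assumption answers the same question: P(heads) for an awakened Sleeping Beauty.

assumptions = {
    "SSA (halfer)":  {"credence": 0.5, "p_heads": 1 / 2},
    "SIA (thirder)": {"credence": 0.5, "p_heads": 1 / 3},
}

total_credence = sum(a["credence"] for a in assumptions.values())
mixture = sum(a["credence"] * a["p_heads"] for a in assumptions.values()) / total_credence

print(mixture)  # 5/12 ≈ 0.42 with these assumed even credences
```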
The meta-meta problem
The meta-meta-problem of anthropics is how to find the right way to choose between the principles, listed above, for selecting the correct sampling assumption. This meta-meta problem is more about general epistemic considerations than about ideas specific to anthropics. The possible ways of choosing are based on utility, on averaging, on logical proof, on finding inconsistencies, or on some form of prediction market.
Personally, I think that most of the sampling assumptions listed above work simultaneously, with some adjustments depending on the local situation and the question asked. If I have to choose, I will weight predictions by my credence in the different alternatives, which is roughly even. I will give 2/5 in Sleeping Beauty as an average between halfers and thirders, if the way of measurement is not defined.
5 comments
Comments sorted by top scores.
comment by MSRayne · 2022-06-29T14:52:20.815Z · LW(p) · GW(p)
It seems to me that one ought to reason in such a way as to maximize expected utility. That is my answer to the meta-problem, but I have no idea what to think about sampling assumptions. To be honest, anthropics seems very mysterious and superstitious to me and I struggle to understand it at all.
comment by Yoav Ravid · 2023-02-18T08:04:58.618Z · LW(p) · GW(p)
I only skimmed the text but strongly upvoted cause such a collection seems very useful for anyone who would want to do a deep dive into anthropics.
Replies from: avturchin
comment by Tristan Cook · 2022-06-30T13:05:25.978Z · LW(p) · GW(p)
Thanks for putting this together! Lots of ideas I hadn't seen before.
As for the meta-level problem, I agree with MSRayne to do the thing that maximises EU which leads me to the ADT/UDT approach. This assumes we can have some non-anthropic prior, which seems reasonable to me.
Replies from: avturchin
↑ comment by avturchin · 2022-06-30T14:17:32.408Z · LW(p) · GW(p)
I think that the problem (one of them) here is that my utility function may include some indexical preferences, like "I want to be in a simulation" or "I don't want to be a Boltzmann brain". In that case, I return to the need for updating, as I again have to take into account my indexicals.
Also, it allows the existence of a "utility monster": that I should act as if I will have the biggest possible impact on the future of humanity, even if the prior odds of that are small.