Why am I Me?
post by dadadarren · 2023-06-25T12:07:03.244Z · LW · GW · 46 comments
At 5, I was hospitalized for a month due to pneumonia. Kids of that age have little fear of illness, and the discomfort is soon forgotten. What I still remember, though, is the intense boredom. It was during that dull month that I started asking the question everyone has asked themselves: "Among all the people in this world, why am I this particular one?". I still recall the ineffable yet intense feeling of thinking about it for the first time.
Eventually I realized that it's a question with no answer. Out of the vast number of things in existence, the fact that I am experiencing the world from the perspective of this particular thing - a human being - has no explanation. Logic and reason are unable to ascribe any underlying cause or rationale. "I am me" is a fundamental truth that everyone has to take as a given.
Yet this fundamental truth is different for each person. From each of our distinct perspectives, which physical thing is the "I" differs. Keeping track of such differences is both mentally consuming and often unnecessary. So there is a natural inclination to get rid of the first person and to think "objectively". Instead of basing it on one's own point of view, we organize thoughts and formulate arguments from an imaginary vantage point that is detached and impartial, an immaterial gaze from nowhere.
Though I consider such objectivism merely a shortcut for efficiency that has often been mistaken for an ideal, there is no denying its practical success. We all use it constantly, with great results. Even when we do think about something from our own perspective, we can easily transcode it to the objective. All it seems to take is exchanging the perspective-dependent self - the "I" - for the particular person, so "I'm tall" becomes "Dadadarren is tall". This is required since there is no "I" in objective reasoning; it is a gaze from nowhere, after all.
We have all performed these transcodings so frequently that they hardly require active thought anymore. Each time, it subtly reinforces the idea that such transcoding is always possible, so thinking objectively can, some may even say should, supersede thinking from any single perspective. But in some rare cases that leads to problems. Anthropic reasoning is such a case.
Take the Doomsday Argument (DA) as an example. It proposes that the uninformed prior for one's own birth rank among all human beings ought to be uniformly distributed from the first to the last. Learning our actual birth rank (we are around the 100 billionth) should then shift our belief about the future toward earlier extinction. E.g., I am more likely to be the 100 billionth person if there are only 200 billion humans overall rather than 200 trillion, so the fact that I'm the 100 billionth makes the former more likely.
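To make the arithmetic concrete, here is a minimal sketch of the update the argument proposes; the two hypotheses and the 50/50 prior are illustrative choices of mine, not part of the original argument:

```python
# A minimal sketch of the Doomsday Argument's update.
# The two hypotheses and the 50/50 prior are illustrative assumptions.
hypotheses = {"200 billion total": 200e9, "200 trillion total": 200e12}
prior = {h: 0.5 for h in hypotheses}
my_rank = 100e9  # roughly the 100 billionth human

# Under the argument's uniform birth-rank prior, the likelihood of
# observing this particular rank is 1/N when rank <= N, else 0.
likelihood = {h: (1 / n if my_rank <= n else 0.0) for h, n in hypotheses.items()}

evidence = sum(prior[h] * likelihood[h] for h in hypotheses)
posterior = {h: prior[h] * likelihood[h] / evidence for h in hypotheses}
print(posterior)  # ~0.999 for 200 billion total, ~0.001 for 200 trillion total
```

The thousandfold posterior ratio is just the ratio of the two population sizes; this mechanical step is exactly what the rest of the post disputes.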
The birth rank discussion isn't about whether I am born slightly earlier or later. Nobody can physically be born more than a few months away from their actual birthday. The argument is about scenarios where "I" am altogether a different person. I.e., "if I'm the first human being" does not mean dadadarren is born so prematurely that he predates Adam, but rather that "I" am Adam. There is a decoupling between the "I" and the physical person I am. Such decouplings are integral to anthropics. While normal problems discuss different ways of assigning people to rooms, anthropic problems can have only one fixed assignment while contemplating which one of those people is "me".
By focusing on "my" birth rank, the Doomsday Argument uses the perspective-based "I". The argument is often expressed with more inclusive terms such as "us" or "our generation". That is unsurprising, since explaining the argument to someone requires the recipient to invoke their own perspective too. However, if we reason from our perspectives, then predicting the future simply involves looking at the past and present situations and, to the best of our abilities, making a forecast. That's all. Sure, there are some uncertainties regarding our birth rank, since our knowledge of the past is imperfect, and there may be occasions when we learn more about it. But that won't trigger a probability update of the kind the Doomsday Argument suggests.
The crux of the Doomsday Argument is its attempt to frame the problem objectively. From this detached viewpoint, it is impartial towards human beings of all times. Yet the argument also uses "I", which is nonsensical from the objective viewpoint and has to be transcoded to a particular individual. But due to the decoupling mentioned earlier, the link between the "I" and the particular person is severed: no transcoding is possible. In the end, perspective thinking rejects the supposed impartiality, while objective thinking cannot make sense of the "I". Either way, the proposed prior birth-rank distribution is inconceivable.
But the argument is deceptive because it exploits people's ingrained belief about objectivity. Its past success makes us think any perspective reasoning can also be formulated from the immaterial vantage point. We don't question that, even though it is oxymoronic to forecast "the future" from a viewpoint so detached and neutral that it is fundamentally timeless. When the decoupling makes transcoding impossible, instead of taking a hard look at this ingrained belief, people choose the less effortful alternative: conjuring something up to complete the transcoding, namely treating the "I" as a particular person who is randomly chosen. We are susceptible to such suggestions because they minimally disturb the question while allowing us to hold onto the old routine.
The Doomsday Argument is wrong not because it has left something out, but because it has added something in. It adds the ostensibly plausible but entirely unsubstantiated assertion that "I" can be treated as a random sample. Making different assumptions about the sampling process may dodge the argument's contentious conclusion, but that would create other controversies and, above all, entirely miss the point. People should be more suspicious of anthropic assumptions: they are blindly asserted answers to the age-old question, "Why am I me?".
46 comments
Comments sorted by top scores.
comment by Adam Kaufman (Eccentricity) · 2023-06-25T22:17:24.492Z · LW(p) · GW(p)
Exactly, it has always felt wrong to me to treat being “me” as a random sample of observers. I couldn’t be anyone except me. If the future has trillions of humans or no humans, the person who is me will feel the same way in either case. I find the doomsday argument absurd because it treats my perspective as a random sample, which feels like a type error.
comment by Nox ML · 2023-06-26T04:45:24.887Z · LW(p) · GW(p)
Suppose when you are about to die, time freezes, and Omega shows up and tells you this: "I appear once to every human who has ever lived or will live, right when they are about to die. Answer this question with yes or no: are you in the last 95% of humans who will ever live in this universe? If your answer is correct, I will bring you to this amazing afterlife that I've prepared. If you guess wrong, you get nothing." Do you say yes or no?
Let's look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5% of them get to the afterlife. So it seems better to say yes in this case, unless you have access to more information about the world than is specified in this problem. But if you accept that it's better to say yes here, then you've basically accepted the doomsday argument.
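A quick tally of those two policies (the population size below is an arbitrary stand-in, chosen only for illustration):

```python
# Toy tally of the Omega game: everyone-says-yes vs. everyone-says-no.
# total_humans is an arbitrary illustrative number.
total_humans = 100_000
cutoff = int(total_humans * 0.05)  # ranks below this are the first 5%

def rewarded(rank: int, says_yes: bool) -> bool:
    in_last_95 = rank >= cutoff
    return says_yes == in_last_95  # correct guess -> afterlife

yes_winners = sum(rewarded(r, True) for r in range(total_humans))
no_winners = sum(rewarded(r, False) for r in range(total_humans))
print(yes_winners / total_humans, no_winners / total_humans)  # 0.95 0.05
```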
However, an important thing to note is that when using the doomsday argument, there will always be 5% of people who are wrong. And those 5% will be the first people who ever lived, whose decisions in many ways have the biggest impact on the world. So in most situations, you should still be acting like there will be a lot more people in the future, because that's what you want the first 5% of people to have been doing.
More generally, my procedure for resolving this type of confusion is similar to how this post [LW · GW] handles the Sleeping Beauty problem. Basically, probability is in the mind, so when a thought experiment messes with the concept of "mind", probability can become underspecified. But if you convert it to a decision problem by looking at the actual outcomes and rating them based on your preferences, things start making sense again.
↑ comment by Q Home · 2023-09-08T05:32:21.213Z · LW(p) · GW(p)
Let's look at actual outcomes here. If every human says yes, 95% of them get to the afterlife. If every human says no, 5% of them get to the afterlife. So it seems better to say yes in this case, unless you have access to more information about the world than is specified in this problem. But if you accept that it's better to say yes here, then you've basically accepted the doomsday argument.
There's a chance you're changing the nature of the situation by introducing Omega. Often "beliefs" and "betting strategy" go together, but here it may not be the case. You have to prove that the decision in the Omega game has any relation to any other decisions.
There's a chance this Omega game is only "an additional layer of tautology" which doesn't justify anything. We need to consider more games. I can suggest a couple of examples.
Game 1:
Omega: There are 2 worlds, one is much more populated than another. In the bigger one magic exists, in the smaller one it doesn't. Would you bet that magic exists in your world? Would you actually update your beliefs and keep that update?
One person can argue it becomes beneficial to "lie" about your beliefs/adopt temporal doublethink. Another person can argue for permanently changing your mind about magic.
Game 2:
Omega: I have this protocol. When you stand on top of a cliff, I give you a choice to jump or not. If you jump, you die. If you don't, I create many perfect simulations of this situation. If you jump in a simulation, you get a reward. Wanna jump?
You can argue "jumping means death, the reward is impossible to get". Unless you have access to true randomness which can vary across perfect copies of the situation. IDK. Maybe "making the Doomsday update beneficially" is impossible.
You did touch on exactly that, so I'm not sure how much my comment agrees with your opinions.
↑ comment by red75prime · 2023-06-29T07:09:31.101Z · LW(p) · GW(p)
Suppose when you are about to die [...] Omega shows up
Suppose something pertaining more to the real world: if you think that you are here and now because there will not be significantly more people in the future, then you are more likely to become depressed.
Also, why does Omega use 95% and not 50%, 10%, or 0.000001%?
ETA: Ah, Omega in this case is an embodiment of the litany of Tarski. Still, if there is no catastrophe, we are among the 5% who violate the litany. I'm not saying that the litany comes as close to useless as it can get when we are talking about a belief in an inevitable catastrophe you can do nothing about.
↑ comment by dadadarren · 2023-06-26T13:58:34.265Z · LW(p) · GW(p)
I have actually written about this before. In short, there is no rational answer to Omega's question: to answer Omega, I can only look at the past and present situation and try to predict the future as best I can. There is no rational way to incorporate my birth rank into the answer.
The question is about "me" specifically, and my goal is to maximize my chance of getting a good afterlife. In contrast, the argument you mentioned judges the answer's merit by evaluating the collective outcome of all humans: "If everyone guesses this way, then 95% of all would be correct...". But if everyone is making the same decision, and the objective is the collective outcome of the whole group, then the individual "I" plays no part in it. To assert that the answer which is best for the collective outcome is also the best answer for "me" requires additional assumptions, e.g., considering myself a random sample from all humans. That is why you are right in saying "If you accept that it's better to say yes here, then you've basically accepted the doomsday argument."
In this post [LW · GW] I have used a repeatable experiment to demonstrate this. And the top comment by benjamincosman [LW · GW] and my subsequent replies might be relevant.
↑ comment by Nox ML · 2023-06-28T14:18:54.845Z · LW(p) · GW(p)
By pretty much every objective measure, the people who accept the doomsday argument in my thought experiment do better than those who don't. So I don't think it takes any additional assumptions to conclude that even selfish people should say yes.
From what I can tell, a lot of your arguments seem to be applicable even outside anthropics. Consider the following experiment. An experimenter rolls a fair 100-sided die. Then they ask someone to guess whether they rolled a number >5 or not, giving them some reward if they guess correctly. Then they reroll and ask a different person, and repeat this 100 times. Now suppose I was one of these 100 people. In this situation, I could use reasoning very similar to yours to reject any kind of action based on probability:
I either get the reward or not, according to whether the die landed on a number >5 or not. Giving an answer based on expected value might maximize the total benefit of the 100 people in aggregate, but it doesn't help me, because I can't know if the die is showing >5 or not. It is correct to say that if everyone makes decisions based on expected utility, then they will have more reward combined. But I will only have more reward if the die is >5, and this was already determined at the time of my decision, so there is no fact of the matter about what the best decision is.
And granted, it's true, you can't be sure what the die is showing in my experiment, or which copy you are in anthropic problems. But the whole point of probability is reasoning when you're not sure, so that's not a good reason to reject probabilistic reasoning in either of those situations.
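A minimal simulation of the die example (the trial count is an arbitrary choice of mine):

```python
import random

# Each guesser sees one independent roll of a fair 100-sided die;
# guessing ">5" is correct whenever the die shows 6..100.
random.seed(0)
trials = 100_000
correct = sum(random.randint(1, 100) > 5 for _ in range(trials))
print(correct / trials)  # ~0.95
```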
↑ comment by dadadarren · 2023-06-28T16:27:09.673Z · LW(p) · GW(p)
For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it's a fair die, it would land >5 around 95 times. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the "I" is a random sample, and no forced transcoding.
Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned, with memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Let's label the original No. 1 and the 99 new clones No. 2 to No. 100 by the chronological order of their creation. Whether you are old or new, you can repeat this experiment. Say you take the experiment repeatedly: wake up, fall asleep, and let the cloning happen each time. Every day you wake up, you find your own number. You do this 100 times. Would you say you ought to find your number >5 about 95 times?
My argument says there is no way to say that. Doing so would require assumptions to the effect that your soul has an equal chance of embodying each physical copy, i.e., that "I" am a random sample from the group.
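To be explicit about what that contested assumption would deliver if it were granted, here is a toy sketch. Note that the uniform-sampling line encodes the very premise in dispute, so this illustrates the assumption rather than arguing for it:

```python
import random

# If "I" were a uniformly random copy among the 100 each morning
# (the disputed premise, assumed here for illustration only):
random.seed(1)
mornings = 10_000
times_gt5 = sum(random.randint(1, 100) > 5 for _ in range(mornings))
print(times_gt5 / mornings)  # ~0.95, but only because the sampling was assumed
```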
For the non-anthropic problem, you can use the 100-people version as a justification, because among those people the die-tosser choosing you to answer a question is an actual sampling process. It is reasonable to think that in this process you are treated the same way as everyone else, e.g., the experimenter didn't specifically sample you only for a certain number. But there is no sampling process determining which person you are in the anthropic version, let alone one that treats you indifferently among all souls, or treats each physical body indifferently, in your embodiment process.
Also, that people who believe the Doomsday Argument objectively perform better as a group in your thought experiment is not a particularly strong case. Thirders have also constructed many thought experiments where supporters of the Doomsday Argument (halfers) would objectively perform worse as a group. But that is not my argument. I'm saying the collective performance of a group one belongs to is not a direct substitute for self-interest.
↑ comment by Nox ML · 2023-06-28T18:48:06.311Z · LW(p) · GW(p)
You do this 100 times. Would you say you ought to find your number >5 about 95 times?
I actually agree with you that there is no single answer to the question of "what you ought to anticipate"! Where I disagree is that I don't think this means that there is no best way to make a decision. In your thought experiment, if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
My justification for this is that, objectively, those who make decisions this way will tend to have more reward and outcompete those who don't. This seems to me to be as close as we can get to defining the notion of "doing better when faced with uncertainty", regardless of whether it involves the "I" or not, and regardless of whether you are selfish or not.
Edit to add more (and clarify one previous sentence):
Even in the case where you repeat the die-roll experiment 100 times, there is a chance that you'll lose every time; it's just a smaller chance. So even in that case it's only true that the strategy maximizes your personal interest "in aggregate".
I am also neither a "halfer" nor a "thirder". Whether you should act like a halfer or a thirder depends on how reward is allocated, as explained in the post I originally linked to.
↑ comment by dadadarren · 2023-06-28T22:24:20.634Z · LW(p) · GW(p)
if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
I am a little unsure about your meaning here. Say you get a reward for correctly guessing whether your number is <5; would you then also guess your number is <5 each time?
I'm guessing that is not what you mean; instead, you are thinking that as the experiment is repeated more and more, the relative frequency of you finding your own number >5 would approach 95%. What I am saying is that this belief requires an assumption treating the "I" as a random sample, whereas the non-anthropic problem doesn't.
↑ comment by Ape in the coat · 2023-06-28T16:13:42.287Z · LW(p) · GW(p)
An experimenter rolls a fair 100-sided die
For me this is where the symmetry with the doomsday argument breaks: here the result of the die roll is actually randomly selected from a distribution from 1 to 100.
With the doomsday argument, that's not the case. I'm not selected from among all the humans throughout time to be instantiated in the 21st century. That's not how the causal process that produced me works. Actually, that's not how causality itself works. Future humans causally depend on past humans; it's not an independent random variable at all.
↑ comment by Nox ML · 2023-06-28T19:32:27.562Z · LW(p) · GW(p)
I agree that they are not symmetrical. My point with that thought experiment was to counter one of their arguments, which as I understand it can be paraphrased to:
In your thought experiment, the people who bet that they are in the last 95% of humans only win in aggregate, so there is still no selfish reason to think that taking that bet is the best decision for an individual.
My thought experiment with the dice was meant to show that this reasoning also applies to regular expected utility maximization, so if they use that argument to dismiss all anthropic reasoning, then they have to reject basically all probabilistic decision making. Presumably they will not reject all probabilistic reasoning, and therefore they have to reject this argument. (Assuming that I've correctly understood their argument and the logic I've just laid out holds.)
Edit: Minor changes to improve clarity.
↑ comment by omegastick (isaac-poulton) · 2023-06-28T15:56:51.752Z · LW(p) · GW(p)
How does the logic here work if you change the question to be about human history?
Guessing a 50/50 coin flip is obviously impossible, but if Omega asks whether you are in the last 50% of "human history", the doomsday argument (not that I subscribe to it) is more compelling. The key point of the doomsday argument is that humanity's growth is exponential; therefore, if we're the median birth-rank human and we continue to grow, we don't actually have that long (in wall-time) to live.
comment by Mitchell_Porter · 2023-06-25T19:17:50.112Z · LW(p) · GW(p)
I don't actually see the argument here, just assertions. The assertions are: there's no reason why you are who you are; and, you shouldn't regard yourself as a typical conscious being.
Well, you are who you are, because of the causes that made you; and you're either a typical conscious being or an atypical conscious being. I don't see the problem.
↑ comment by particular · 2023-06-28T10:08:15.821Z · LW(p) · GW(p)
Do you believe you are a typical conscious being or an atypical conscious being? And does that belief follow from an argument or an assertion?
↑ comment by Mitchell_Porter · 2023-06-29T16:42:40.695Z · LW(p) · GW(p)
Do you believe you are a typical conscious being or an atypical conscious being?
Let's try this question out on some other examples of conscious beings first.
Walking this morning, I noticed a small bird on the ground that hopped a few times like a kangaroo before it took off.
I just searched Google News for the words "indian farmer". This was the first article. I ask you to consider the person at the top and center of the picture, standing thigh-deep in water.
OK, I've singled out two quasi-arbitrary examples of specific conscious beings: the bird I saw this morning; the person in the Bloomberg news photo.
We can ask about each of them in turn: is this a typical or an atypical conscious being?
The way we answer the question will depend on a lot of things, such as which beings we think are conscious. We might decide that they are typical in some respects and atypical in others. We might even go meta and ask: is the mix of typicality and atypicality itself typical or atypical?
My point is, these are questions that can be posed and tentatively answered. Is there some reason we can't ask the same questions about ourselves?
↑ comment by dadadarren · 2023-06-28T13:35:43.486Z · LW(p) · GW(p)
Consciousness is a property of the first person: e.g., to me, I am conscious, but I inherently can't know that you are. Whether or not something is conscious is a matter of whether you think from that thing's perspective. So there is no typical or atypical conscious being: from my perspective I am "the" conscious being; if I reason from something else's perspective, then that thing is "the" conscious being instead.
Our usual notion of considering ourselves typical conscious beings exists because we are more used to thinking from the perspectives of things similar to us. E.g., we are more apt to think from the perspective of another person than of a cat, and from the perspective of a cat than of a chair. In other words, we tend to ascribe the property of consciousness to things more like ourselves, rather than the other way around (that we are typical in some sense).
The part where I know I'm conscious while I can't know you are is an assertion. It is not based on reasoning or logic but simply on the fact that it feels so. The rest are arguments which depend on said assertion.
I thought the reply was addressed to me. But nonetheless, it's a good opportunity to delineate and inspect my own argument, so I'm leaving the comment here.
↑ comment by Ape in the coat · 2023-06-28T10:41:36.530Z · LW(p) · GW(p)
Well, you are who you are, because of the causes that made you; and you're either a typical conscious being or an atypical conscious being. I don't see the problem.
The causes that made you didn't randomly select your soul from among all possible souls in a specific reference class. But that's the fundamental assumption of anthropic reasoning.
↑ comment by green_leaf · 2023-06-28T13:39:51.078Z · LW(p) · GW(p)
By definition of probability, we can consider ourselves a random member of some reference class. (Otherwise, we couldn't make probabilistic predictions about ourselves.) The question is picking the right reference class.
↑ comment by Ape in the coat · 2023-06-28T15:11:24.222Z · LW(p) · GW(p)
Definitions are part of a map, and maps can be inapplicable to the territory.
↑ comment by green_leaf · 2023-06-29T09:07:20.622Z · LW(p) · GW(p)
That's true, but the definition of probability isn't inapplicable to everything. From that, in conjunction with our being able to make probabilistic predictions about ourselves, it follows that we are a random member of at least one reference class, which means that our soul has been selected at random from all possible souls in a specific reference class (if that's what you meant by that).
↑ comment by dadadarren · 2023-06-28T13:59:12.822Z · LW(p) · GW(p)
In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they have no operational value, such as for decision-making. In a practical sense, we really shouldn't make such probabilistic predictions. Here in this post I'm trying to explain the theoretical reason against them.
↑ comment by conitzer · 2023-06-30T08:51:34.735Z · LW(p) · GW(p)
Not the Doomsday Argument, but self-locating probabilities can certainly be useful in decision making, as Caspar Oesterheld and I argue for example here: http://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf and show can be done consistently in various ways here: https://www.andrew.cmu.edu/user/coesterh/DeSeVsExAnte.pdf
↑ comment by dadadarren · 2023-07-24T16:24:07.864Z · LW(p) · GW(p)
Let's take the AI driving problem in your paper as an example. The better strategy is regarded as the one that gives the better overall reward across all drivers. Whether the rewards of the two instances of a bad driver should count cumulatively or just once is what divides halfers and thirders. Once that is determined, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances. It doesn't involve taking the AI's perspective in a particular instance and deciding the best decision for that particular instance, which would require self-locating probability. The "right decision" is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now.
Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, rather than the collective effect on a reference class. But no rational strategy exists for this goal [LW · GW].
↑ comment by green_leaf · 2023-06-29T16:15:09.451Z · LW(p) · GW(p)
I found two statements in the article that I think are well-defined enough and go into your argument:
1. "The birth rank discussion isn't about whether I am born slightly earlier or later."
How do you know? I think it's exactly about that. I have probability f of being born within the first fraction f of all humans (assuming all humans are the correct reference class - if they're not, the problem isn't in considering ourselves a random person from a reference class, but in choosing the wrong reference class).
2. "Nobody can be born more than a few months away from their actual birthday."
When reasoning probabilistically, we can imagine other possible worlds. We're not talking about something being the case while at the same time not being the case. We imagine other possible worlds (created by the same sampling process that created our world) and compare them to ours. In some of those possible worlds, we were born sooner or later.
↑ comment by dadadarren · 2023-06-29T20:52:52.006Z · LW(p) · GW(p)
1. If you are born a month earlier as a preemie instead of full-term, it can quite convincingly be said you are still the same person. But if you are born a year earlier, are you still the same person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different gender. If you were among the first few human beings born, there would be few similarities between the physical person that's you in that case and the physical person you are now. So the birth rank discussion is not about whether the physical person you regard as yourself is born slightly earlier or later, but about which one among all the people in the entire human history is you, i.e., from which one of those persons' perspectives do you experience the world?
2. The anthropic problem is not about possible worlds but about centered worlds. Different events in anthropic problems can correspond to the exact same possible world while differing in which perspective you experience it from. This circles back to point 1, and the decoupling between the first-person "I" and the particular physical person.
↑ comment by green_leaf · 2023-07-11T12:26:04.893Z · LW(p) · GW(p)
1. That's seemingly quite a convincing reason why you can't be born too early. But what occurs to me now is that the problem can be about where you are, temporally, in relation to other people. (So you were still born on the same day, but depending on the entire size of the civilization, the probability of a given number of people preceding you changes accordingly.)
2. Depending on how "anthropic problem" is defined, that could potentially be true either for all, or only for some, anthropic problems.
comment by JBlack · 2023-06-26T00:33:10.960Z · LW(p) · GW(p)
You can rewrite the doomsday question into more objective terms: "given evidence that N people have previously come into existence, what update should be made to the credence distribution for the total number of people to ever come into existence?"
↑ comment by Ben (ben-lang) · 2023-06-26T08:46:15.397Z · LW(p) · GW(p)
To me, that version of the doomsday question is extremely unconvincing for a very different reason: it uses only the most basic aspect of the available data (a single number, N). We could go one step more sophisticated and take the number of people born last year, extrapolating that number of annual births out to eternity. Or we could go yet another step and fit an exponential to the births-per-year graph to extrapolate instead. Presumably we could go much further, fitting ever more complex models to a wider set of available data, perhaps even trying to include models of the Earth's calorific budget or the likelihood of nuclear war.
It's not clear to me why we would put any credence in the doomsday argument specifically (take N, approximately double it), out of all the available models.
↑ comment by JBlack · 2023-07-01T01:21:50.553Z · LW(p) · GW(p)
It's not meant to be convincing, since it doesn't make any argument. It's a version of the question.
You can obviously make models of the future, using whatever hypotheses you like. Those models should then be weighted by the complexity of their hypotheses and by the credence that they will accurately reflect the future, based partly on retrodiction of the past, and the results will modify the very broad distribution that you get by taking nothing but birth rank. If you use an SSA evidence model, then this broad distribution looks something like P(T > kN) ~ 1/k.
If you take all the credible future models appropriately weighted and get a relatively low credence of doomsday before another N people come into existence, then the median of the posterior distribution of total people will be greater than that of the doomsday prior distribution.
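To make explicit where that 1/k shape comes from, here is a minimal derivation under a Gott-style reading (my gloss, not necessarily JBlack's exact model): suppose that, given a total of T people ever and your birth rank N, the fraction f = N/T is uniformly distributed on (0, 1). Then T > kN exactly when f < 1/k, so P(T > kN) = P(f < 1/k) = 1/k.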
↑ comment by dadadarren · 2023-06-26T17:30:25.469Z · LW(p) · GW(p)
This rewrite is still perspective-dependent, as it involves the concept of "now" to define who has "previously come into existence"; i.e., it is different for the current generation vs. people in the Axial Age. The Doomsday Argument, by contrast, uses a detached viewpoint that is time-indifferent. So the problem still remains.
↑ comment by Giskard (tiago-macedo) · 2023-06-28T20:17:51.026Z · LW(p) · GW(p)
I don't think the Doomsday argument claims to be time-independent. It seems to me to be specifically time-dependent -- as is any update. And there's nothing inherently wrong with that: we are all trying to be the most right that we can be given the information we have access to, our point of view.
↑ comment by dadadarren · 2023-06-28T22:30:37.672Z · LW(p) · GW(p)
I didn't explicitly claim so. But it involves reasoning from a perspective that is impartial to any moment. This independence is manifested in its core assumption [? · GW]: that one should regard oneself as randomly selected from all observers in one's reference class, from past, present and future.
↑ comment by Giskard (tiago-macedo) · 2023-06-29T17:11:18.054Z · LW(p) · GW(p)
I think I don't understand what makes you say that anthropic reasoning requires "reasoning from a perspective that is impartial to any moment". The way I think about this is the following:
- If I imagine how an omnitemporal, omniscient being would see me, I imagine they would see me as a randomly selected sample from all humans, past, present and future (which don't really exist for that being).
- From my point of view, it does feel weird to say "I'm a randomly selected sample", but I certainly don't feel like there is anything special about the year I was born. This, combined with the fact that I'm obviously human, is just a from-my-point-of-view way of saying the same thing. "I'm a human and I have no reason to believe the year I was born is special" == "I'm a human whose birth year is a sample randomly taken from the population of all possible humans".
What changes when you switch perspectives is just the words, not the point. I guess you're thinking about this differently? Do you think you can state where we're disagreeing?
↑ comment by dadadarren · 2023-06-29T20:37:24.257Z · LW(p) · GW(p)
When you say the time of your birth is not special, you are already trying to judge it objectively. For you personally, the moment of your birth is special. And, more relevantly to the DA, from the first-person perspective the moment "now" is special.
1. From an objective viewpoint, discussing a specific observer or a specific moment requires some explanation, some process pointing to it, e.g. a sampling process. Otherwise, it fails to be objective by inherently focusing on someone/sometime.
2. From a first-person perspective, discussions based on "I" and "now" don't require such an explanation; they are inherently understandable. The future is just the moments after "now". Its prediction ought to be based on my knowledge of the present and past.
What the Doomsday Argument is saying is that the fact "I am this person" (living now) shall be treated the same way as if someone at the objective viewpoint in 1 performed a random sampling and found me (now). The two cases are supposed to be logically equivalent, so the two viewpoints can say the same thing. I'm saying let's not make that assumption. And in this case, the objective viewpoint cannot say the same thing as the first-person perspective. So we can't switch perspectives here.
comment by Vladimir_Nesov · 2023-06-25T21:02:20.403Z · LW(p) · GW(p)
In coherence arguments [LW · GW], we deal with many alternative situations, or thought experiments, asking what is to be done in each of them, considered separately from all the other situations. Presenting any one situation can be taken as a form of updating from some shared prior state on the situation's description, or on an observation of it, or on the newly presented state of knowledge of it. This is different from ordinary updating that follows a worldline and proceeds in order; instead, the updates that put the point of view in one of these situations happen independently of each other.
In this framing, an anthropic update on the birth rank makes sense, even as it would be much less central as an ordinary observation. It can successfully fail to mention anything else that an actual human would otherwise know, so that the resulting state of mind will have large gaps in its knowledge of the world. The point of the procedure is to go from a statement of what the birth rank is to some posterior state of mind that describes the situation, captures the given facts, but no more facts than that.
And clearly this is a strange place to stop, without going further to elicit a utility function, or to find an updateless strategy [LW · GW]. Yet that's what anthropics tends to be about, building up a context that's sensitive to technical details that can go unmentioned, motivated by conflicting desiderata from other pursuits that are considered out of scope, and leaving it at that.
comment by clone of saturn · 2023-06-25T18:43:05.031Z · LW(p) · GW(p)
I feel like I'm still the same person as I was before I learned how many humans were born earlier than me. I think that's all you need for the Doomsday Argument to go through.
↑ comment by RussellThor · 2023-06-26T07:14:03.267Z · LW(p) · GW(p)
What if you consider all of humanity the same "person", i.e. there has been just one entity so far? If you then expect humanity to live for millions of years, then the doomsday hypothesis is just similar to a child asking "why am I me?" and not a sign of imminent doom. Of course, that's begging the question / circular reasoning somewhat.
I probably think the best answer to it is that future humans/trans-humans/WBE/AI are not in the same reference class as us, because of enhancement etc. To me, the question of which reference class to choose mostly undermines the whole argument.
comment by Mateusz Bagiński (mateusz-baginski) · 2023-06-25T14:46:40.369Z · LW(p) · GW(p)
Take the Doomsday Argument (DA) as an example. It proposes that the uninformed prior for one's own birth rank among all human beings ought to be uniformly distributed from the first to the last. Learning our actual birth rank (we are around the 100 billionth) should then shift our belief about the future toward earlier extinction. E.g., I am more likely to be the 100 billionth person if there are only 200 billion humans overall rather than 200 trillion, so the fact that I'm the 100 billionth makes the former more likely.
This is not how the DA goes. It goes like this: if we are just as likely to be any particular observer, then the probability of being born in any particular time period is proportional to how many observers are alive at that time period. So, on the most outside-ish view (i.e., not taking into account what we know about the world, what we can predict about how the future is going to unfold, etc.), we should assign the most probability mass to those distributions of observers over history where the time we were born had the most observers, which means that in the future there are going to be fewer observers than right now, probably because of extinction or something.
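As a rough sketch of that weighting (the era names and observer counts below are invented purely for illustration):

```python
# Observer-weighted sampling: the chance of being born in an era is
# proportional to how many observers live in it. Numbers are invented.
populations = {"era A": 5, "era B": 20, "era C": 75}  # relative observer counts
total = sum(populations.values())
p_born_in = {era: n / total for era, n in populations.items()}
print(p_born_in)  # {'era A': 0.05, 'era B': 0.2, 'era C': 0.75}
```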
Also, if you assume the number of people alive throughout history is "fixed" at 200 billion, "you" are just as likely to be the first person as the 100 billionth or the 200 billionth. If you predict you are the 100 billionth, you can at best minimize the prediction error/deviation score [LW · GW].
comment by Thoth Hermes (thoth-hermes) · 2023-06-25T13:05:04.154Z · LW(p) · GW(p)
The Doomsday Argument presupposes that we are drawn from a probability distribution over time as well as space - which I am not sure that I believe, though it might be true. I think we probably experience time sequentially across "draws" as well as within draws, assuming that there are multiple draws.
(I lean towards there being multiple draws, I don't see why there would only be one, since consciousness seems pretty fundamental to me and likely something that just "is" for any given moment in time. But some might consider this to be too spiritualistic; I'd retort that I consider annihilation to be too spiritualistic as well.)
I do think we are probably drawn from a distribution that weights something like "consciousness mass" proportional to probability mass. So, chances are you are probably going to be one of the smartest things around. This is pretty good news if true - it should mean, among other things, that there probably aren't really large civilizations in the universe with huge populations of much smarter beings.
comment by stochastic_bit · 2023-07-08T13:01:53.966Z · LW(p) · GW(p)
This claim is not really relevant to the doomsday argument: you only need the random sampling of someone's index, and you use yourself because you are a random choice. It doesn't matter that you can't be another you.
The claim can be interesting with regard to SIA, but even there I don't think it holds; that discussion would be more interesting and nuanced.
comment by conitzer · 2023-06-30T08:45:14.909Z · LW(p) · GW(p)
Just to make sure I understand your argument, it seems that you (dadadarren) actually disagree with the statement "I couldn't be anyone except me" (as stated e.g. by Eccentricity in these comments), in the sense that you consider "I am dadadarren" a further, subjective fact. Is that right? (For reference / how I interpret the terms, I've written about such questions e.g. here: https://link.springer.com/article/10.1007/s10670-018-9979-6)
But then I don't understand why you think a birth-rank distribution is inconceivable. I agree any such distribution should be treated as suspect, which probably gives you most of what you need for your argument here, in particular that the Doomsday Argument is suspect. But I don't see why there would necessarily be some kind of impenetrable curtain between objective and subjective reasoning; subjective facts are presumably still closely tied to objective facts. And it even seems to make sense to discuss purely subjective facts between subjects -- e.g., discussions about qualia are presumably meaningful at least to some extent, no?
comment by Giskard (tiago-macedo) · 2023-06-28T20:13:23.482Z · LW(p) · GW(p)
For now, I see no reason to deviate from the simple explanations to the problems OP posited.
Why am I me?
Well, "am" (an individual being someone), "I" and "me" (the self) are tricky concepts. One possible way to bypass (some of) the trickiness is to consider the alternative: "why am I not someone else"?
Well, imagine for a moment that you are someone else. Imagine that you are me. In fact, you've always been me, ever since I was born. You've never thought "huh, so this is what it feels like to be someone else". All you've ever thought is "what would it be like to be someone else?". Then one day you tried to imagine what it would be like to be the person who wrote an article on LessWrong and...
Alakazam, now you're back to being you. My point here is that the universe in which you are not you, but someone else, is exactly like our universe, in every way. Which either means that this is already the case, and you really are me, and everyone else too, or that those pesky concepts of self and identity actually don't work at all.
Regarding anthropic arguments, if I understand correctly (from both OP's post and comments), they don't believe that they are an n=1 sample randomly taken from the population of every human to ever exist. I think they are. Are they an n=1 sample of something? Unless the post was written by more than one person, yes. Are they a sample taken from the population of all humans to ever exist? I do think OP is human, so yes. Are they a randomly selected sample? This is where it gets interesting.
If both your parents were really tall, then you weren't randomly selected from the population of all humans with regard to height. That is because even before measuring your height, we had reason to believe you would grow up to be tall. Your sampling was biased. But with regard to when you were born, we must ask if there is any reason to think OP's birth rank leans one way or another. I can't think of one -- unless we start adding extra information to the argument. If you think the singularity is close and will end Humanity, then we have reason to think OP is one of the last few people to be born. If you think Humanity has a large chance of spreading through the Galaxy and living for eons, then we have reason to think the opposite. But if we want to keep our argument "clean" from outside information, then OP's (and our) birth rank should not be considered special. And it certainly wasn't deliberately selected by anyone beforehand. So yes, OP is an n=1 sample randomly taken from the population of all humans to ever exist, and can therefore do anthropic reasoning.
That doesn't necessarily mean the Doomsday argument is right, though. I feel like there might be hidden oversimplifications in it, but I won't try to look for them now. The larger point is that anthropic reasoning is legitimate, if done right (like every other kind of reasoning).
comment by Boris Kashirin (boris-kashirin) · 2023-06-26T02:09:28.090Z · LW(p) · GW(p)
Today it is raining, and asking "why?" is a mistake because I am already experiencing rain right now, and counterfactual me isn't me?
It seems to me the explanation is confused in such a way as to obscure the decision-making process of which questions are useful to consider.
comment by Ape in the coat · 2023-06-25T17:15:30.461Z · LW(p) · GW(p)
Both "I" and "me" reference the same thing. So the question "Why am I me?" is similar to "Why does 1=1?" It's true by definition. It's just a tautology, but it feels like there is something more to it due to the conflict between different models people use.
I completely agree about anthropic reasoning. It's indeed based on the assumption that we are randomly selected from a distribution. And all the weird cases, as far as I understand, happen when this assumption is wrong.
comment by Tensor White (tensor-white) · 2023-06-26T21:41:37.499Z · LW(p) · GW(p)
https://en.wikipedia.org/wiki/Vertiginous_question
Benj Hellie's vertiginous question asks why, of all the subjects of experience out there, this one—the one corresponding to the human being referred to as Benj Hellie—is the one whose experiences are live? (The reader is supposed to substitute their own case for Hellie's.)
This question has already been answered. Intuitively, you can ask "why am I a human instead of a fish?"