"Who I am" is an axiom.
post by dadadarren · 2021-04-25T21:59:10.566Z
I guess most of us, in moments of existential crisis, have asked the question "Why am I me?". It is not about grammar: "because 'I' and 'me' are both first-person singular pronouns." Nor is it a matter of tautology: "of course Bill Gates is Bill Gates, duh." It's the urge to question why, among all the things that exist, I am experiencing the world from this particular being's perspective. How come I am not a different person, or an animal, or some other physical entity?
Such a question does not have any logical explanation, because it is entirely a matter of subjectivity. I inherently know this human is me because I know the subjective feelings of this person. Pain can be logically reduced to nerve signals, colors to wavelengths. Yet only when this human is experiencing them do I have the subjective feeling. Explaining "who I am" is beyond the realm of logic. It is something intuitively known. An axiom.
So rational thinking should have no answer to questions such as "What were the chances of me being born as a human?". That I am a human is a fact that can only be accepted, not explained. However, when framed in certain ways, similar questions do seem to have obvious answers. For example: "There are 7.7 billion people in the world, about 60% of them Asian. What's the probability of me being Asian?" Many would say 60%.
That's because it is uncomfortable to have no answer, so we often subconsciously change the question. Here "I/me" is no longer taken as the axiomatic, subjectively defined perspective center. Instead it is used as shorthand for some objectively defined entity, e.g. a random human. By using an objective reinterpretation, the question falls back into the scope of logical thinking, so it can be easily answered: since 60% of all people are Asian, a randomly selected person has a 60% chance of being one.
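A minimal sketch of that reinterpretation, treating the 7.7 billion / 60% figures above as given and the sampling as purely illustrative:

import random

population = 7_700_000_000            # people in the world
asian_count = int(0.60 * population)  # ~60% of them

# "Me" reinterpreted objectively: draw one human uniformly at random.
trials = 100_000
hits = sum(random.randrange(population) < asian_count for _ in range(trials))
print(hits / trials)                  # ~0.60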
Mixing the subjectively defined "I" with an objectively defined entity seldom causes any problem. Anthropic reasoning is an exception. Take the Doomsday Argument for example. It is true that a randomly chosen human would be unlikely to be among the very earliest humans, and if that turned out to be the case, the Bayesian update toward doom-soon would be valid too. However, the argument uses words such as "I" or "we" as something primitively understood: the intuitive perspective center. In that case, probabilities such as "me being born among the first 5% of all humans" have no answer at all, because "who I am" is something that can only be accepted, not explained. An axiom.
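For the objectively reinterpreted version, the update the argument relies on is just Bayes' rule. A minimal sketch, where the two total-population hypotheses, the 50/50 prior, and the observed birth rank are illustrative assumptions rather than figures from the argument itself:

birth_rank = 100e9            # roughly how many humans have been born so far (assumed)

hypotheses = {
    "doom soon": 200e9,       # total humans who will ever live, if extinction comes early
    "doom late": 200e12,      # total humans, if humanity lasts much longer
}
prior = {"doom soon": 0.5, "doom late": 0.5}

# For a randomly chosen human, P(birth rank = r | N humans in total) = 1/N when r <= N.
unnormalized = {h: prior[h] * (1.0 / n if birth_rank <= n else 0.0)
                for h, n in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # ~{'doom soon': 0.999, 'doom late': 0.001}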
8 comments
comment by avturchin · 2021-04-26T11:14:44.781Z
All that we know about x-risks tells us that the Doomsday argument should be true, or at least very probable. So we can't use its apparent falsity as an argument that some form of anthropic reasoning is false.
reply by dadadarren · 2021-04-27T19:23:01.862Z
The Doomsday Argument does not depend on any empirical evidence; it is a pure logical deduction. So even if we consider the typical x-risks threatening our existence (climate change, an AI boom, nuclear war, etc.) and therefore think our doom is probably coming soon, that still cannot be used as evidence favoring the Doomsday Argument. Because if the Argument is true, we should expect our extinction to be even more imminent, on top of all the x-risks considered.
reply by avturchin · 2021-04-28T14:42:38.625Z
We could test the Doomsday argument on other things, as Gott tested it on Broadway shows. For example, I can predict with high confidence that your birthday is not the 1st of January. The same is true for my own birthday, which is effectively a random draw from all the dates of the year. So despite my "who I am" axiom, my external properties are distributed randomly.
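A minimal sketch of the Gott-style test being referenced, assuming an arbitrary distribution of show run lengths purely for illustration (the 364/365 confidence that "your birthday is not January 1st" is the same kind of claim):

import random

# Each simulated "show" gets a random total run length; we observe it at a uniformly
# random moment and predict its remaining run with Gott's 95% interval [t_past/39, 39*t_past].
hits, trials = 0, 100_000
for _ in range(trials):
    total_run = random.uniform(1, 1000)     # total run length (arbitrary assumption)
    t_past = random.uniform(0, total_run)   # we happen to look at a random point in the run
    t_future = total_run - t_past
    if t_past / 39 <= t_future <= 39 * t_past:
        hits += 1

print(hits / trials)  # ~0.95: the interval holds for ~95% of randomly timed observations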
comment by Shmi (shminux) · 2021-04-26T04:31:47.340Z
The idea of an "I" is an output of your brain, which has a model of the outside world, of others like you, and of you. In programming terms, the "programming language" of your mind has the reflection and introspection capabilities that provide some limited access to "this" or "self". There is nothing mysterious about it, and there is no need to axiomatize it.
import human.lang.reflect.*;
// me.getClass().getDeclaredMethods()
reply by dadadarren · 2021-04-27T19:35:03.809Z
Explaining "I" as the result of introspection of a "self" seems reasonable. But I think that is a circular definition. Yes, access to "this" defines "I". But why do I have immediate access to this particular "this" instead of other things? That is the part that can only be taken as given, not further reduced by logic.
For example, I would try to maximize the interests of dadadarren, and you would maximize those of shminux. Obviously both are rational. The difference can only be explained by our different reasoning starting points: for me, I am dadadarren; for you, you are shminux. There is no point in further analyzing our difference to determine which is correct in terms of logic.
comment by AprilSR · 2021-04-26T22:10:36.281Z
If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn't ignore.
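A minimal sketch of that calibration property, with the world population sizes and the specific prediction ("I am not among the first 5% of all humans") chosen purely for illustration:

import random

correct, predictions = 0, 0
for _ in range(1000):                          # many possible "humanities"
    population = random.randint(100, 10_000)   # total humans who will ever live in this one
    cutoff = 0.05 * population
    for rank in range(1, population + 1):      # each person knows only their own birth rank
        predictions += 1
        if rank > cutoff:                      # "I am not among the first 5%" turns out true
            correct += 1

print(correct / predictions)  # ~0.95: taken together, the predictions are well-calibrated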
I think the Doomsday problem mostly fails because it's screened off by other evidence. Yeah, we should update towards being around the middle of humanity by population, but we can still observe the world and make predictions about how long humanity is likely to last based on what we actually see.
To re-use the initial framing, think of what strategy would produce the best prediction results for humanity as a whole on the Doomsday problem. Taking a lot of evidence into account other than just "I am probably near the middle of humanity" will produce way better discrimination, if not necessarily much better calibration.
(Of course, most people have historically been terrible at identifying when Doomsday will happen.)
reply by dadadarren · 2021-04-27T19:48:25.129Z
If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn't ignore.
Very true. I think one major problem with anthropic reasoning is that we tend to treat the combined/average outcome of the group (e.g. all humans) as the expected outcome for an individual. So when there is no strategy for making rational conclusions from the individual perspective, we automatically reach for the conclusions that, if followed by everyone in the group, would be most beneficial for the whole group.
But then, what's the appropriate group? Different choices lead to different conclusions. Thus the debates, most notably SSA vs SIA.