Anthropics: Where does Less Wrong lie?
post by Chris_Leong · 2018-06-22T10:27:16.592Z · LW · GW · 7 comments
I'm currently reading some papers on various problems related to anthropics, and I'm planning to write up a post when I'm finished. To this end, it would be useful to know what positions people hold so that I know which arguments I need to address. So what is your position on anthropics?
Various anthropic problems include the Sleeping Beauty problem, the Absent-Minded Driver [LW · GW], the Dr Evil problem [LW · GW], the Doomsday Argument, the Presumptuous Philosopher, the Sailor's Child, the Fermi Paradox and the Argument from Fine-Tuning.
7 comments
Comments sorted by top scores.
comment by Rafael Harth (sil-ver) · 2018-06-22T12:16:24.078Z · LW(p) · GW(p)
It seems that the recent debate has made it fairly clear that LW is totally split on these questions.
Replies from: Chris_Leong
↑ comment by Chris_Leong · 2018-06-22T12:44:39.740Z · LW(p) · GW(p)
Yeah, but I wanted to see where people stand after all these posts.
comment by Paperclip Minimizer · 2018-06-22T10:47:40.930Z · LW(p) · GW(p)
My position on anthropics is that anthropics is grounded in updateless decision theory, which AFAIK leads in practice to full non-indexical conditioning.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2018-06-22T11:16:04.840Z · LW(p) · GW(p)
My position on anthropics is that anthropics is grounded in updateless decision theory,
Agreed.
which AFAIK leads in practice to full non-indexical conditioning.
It doesn't lead to that; what it leads to depends a lot on your utility function and how you value your copies: https://www.youtube.com/watch?v=aiGOGkBiWEo
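To make the dependence on the utility function concrete, here is a minimal sketch of Sleeping Beauty betting under updateless reasoning; the ticket prices and helper names are illustrative assumptions, not taken from the linked talk:

```python
# Illustrative sketch: how an updateless bettor's valuation of a "pays $1 if
# Tails" ticket in Sleeping Beauty depends on how it values its copies.

def expected_utility(price, aggregate):
    """Policy: buy the ticket at every awakening, at the given price.

    Heads (prob 0.5): one awakening, which just loses the price.
    Tails (prob 0.5): two awakenings, each netting (1 - price).
    `aggregate` combines the per-copy payoffs within a branch.
    """
    heads = [-price]                # one copy
    tails = [1 - price, 1 - price]  # two copies
    return 0.5 * aggregate(heads) + 0.5 * aggregate(tails)

total = sum                              # value the sum over copies
average = lambda xs: sum(xs) / len(xs)   # value the average copy

for price in (0.45, 0.55, 0.65, 0.70):
    print(price, round(expected_utility(price, total), 3),
          round(expected_utility(price, average), 3))

# Summing over copies: the policy has positive value up to a price of 2/3,
# i.e. thirder / SIA-style betting odds.
# Averaging over copies: only up to 1/2, i.e. halfer / SSA-style odds.
```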
Replies from: Chris_Leong, Paperclip Minimizer
↑ comment by Chris_Leong · 2018-06-22T12:17:39.055Z · LW(p) · GW(p)
Question: In your paper on Anthropic Decision Theory, you wrote "If the philosopher is selfish, ADT reduces to the SSA and the philosopher will bet at 1:1 odds" in reference to the Presumptuous Philosopher problem. I don't quite see how that follows: it seems to assume that the philosopher automatically exists in both universes. But if we assume there are more philosophers in the larger universe, then there must be at least some philosophers who lack a counterpart in the smaller universe. So it seems that ADT only reduces to SSA when the philosopher identifies with only his physical self in the current universe AND (his physical self in the alternate universe OR, if he doesn't exist in the smaller universe, exactly one other self).
I'll pre-emptively note that obviously he can never observe himself not existing. The point is that in order for the token to be worth $0.50 in the dollar, there must be a version of him to buy a losing token in the counterfactual. We can't get from "If the actual universe is small, I exist" to "If the actual universe is large, I would also have existed in the counterfactual". Probably the easiest way to understand this is to imagine you have a button that does nothing if the universe is small, but, if the universe is large, shrinks it, killing everyone who wouldn't have existed had the universe been small. There's no way to know that pressing the button won't kill you. On the other hand, if you do have this knowledge, then you have more knowledge than everyone else, who don't know whether that is true of them.
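As a rough illustration of the asymmetry above (the numbers and function name are made up for this sketch, not taken from the ADT paper), compare the updateless value of the token to a philosopher who does and doesn't have a small-universe counterpart:

```python
# Illustrative sketch (made-up numbers): the updateless value of a
# "$1 if the universe is large" token to a selfish philosopher, given a
# 50/50 prior over the small and large universes.

def expected_gain(price, exists_in_small):
    """Expected gain from the policy 'buy the token if I exist'.

    Large universe: the philosopher exists and nets (1 - price).
    Small universe: he pays the price for a losing token only if he has a
    counterpart there; otherwise that branch contributes nothing to him.
    """
    large_branch = 1 - price
    small_branch = -price if exists_in_small else 0.0
    return 0.5 * large_branch + 0.5 * small_branch

# With a small-universe counterpart: break-even at $0.50, the 1:1 (SSA-like)
# odds quoted from the paper.
print(expected_gain(0.50, exists_in_small=True))    # 0.0

# Without a counterpart: the small branch never costs him anything, so the
# token looks worth buying at any price below $1.
print(expected_gain(0.90, exists_in_small=False))   # 0.05
```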
↑ comment by Paperclip Minimizer · 2018-06-22T14:39:56.386Z · LW(p) · GW(p)
I agree with this when you are reasoning about scenarios with uncertainty about the number of copies, but FNIC is about scenarios with certainty about the number of copies, e.g. the variation on Sleeping Beauty where you know it's Monday.
comment by Dagon · 2018-06-23T12:11:11.091Z · LW(p) · GW(p)
IMO, it's a pretty minor issue, which shows our confusion about identity more than about probability. The notion of copies (and of resets) breaks a lot of intuition, but careful identification of propositions and payoffs removes the confusion.