Comments sorted by top scores.
comment by gwern · 2019-06-09T23:53:12.327Z · LW(p) · GW(p)
This still doesn't seem to address why one should be risk-averse and prioritize an impoverished survival in as many branches as possible. (Not that I think it does even that, given your examples; by always taking the risky opportunities with a certain probability, wouldn't this drive your total quantum measure down rapidly? You seem to have some sort of min-max principle in mind.)
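To put a rough number on that worry (a toy calculation; the per-decision loss q and the decision count n are illustrative values, not anything from the post): if each risky decision extinguishes the agent in a fraction q of child branches, surviving measure decays geometrically.

```python
# Toy model of measure decay: if each risky decision extinguishes the agent
# in a fraction q of child branches, the surviving measure after n decisions
# is (1 - q) ** n.  (q and n are illustrative numbers, not from the post.)

q = 0.10  # assumed per-decision probability of a branch-ending loss
for n in (10, 50, 100):
    print(f"after {n} decisions: surviving measure = {(1 - q) ** n:.2e}")

# after 10 decisions:  surviving measure ~ 3.49e-01
# after 50 decisions:  surviving measure ~ 5.15e-03
# after 100 decisions: surviving measure ~ 2.66e-05
```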
Nor does it deal with the response to the standard ensemble criticism of expected utility: EU-maximization is entirely consistent with non-greedy-EU-maximization strategies (e.g. the Kelly criterion) being the total-EU-maximizing strategy if the problem, fully modeled, includes considerations like survival or gambler's ruin (e.g. in the Kelly coinflip game, the greedy strategy of betting everything each round is one of the worst possible things to do, but EU-maximizing over the entire game does in fact deliver optimal results). However, these considerations do not apply at the quantum level; they exist only at the macro level, and it's unclear why MWI should make any difference.
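For concreteness, a minimal simulation of the Kelly coinflip game referenced above (the 60% coin, even-money bets, round count, and trial count are assumed parameters in the spirit of the usual version of the game, not quoted from the comment):

```python
import random

# Kelly coinflip game: a coin lands heads with p = 0.6, bets pay even money.
# Compare the greedy strategy (bet everything each round) with the Kelly
# fraction f* = 2p - 1 = 0.2.

def play(bet_fraction, p=0.6, rounds=100, wealth=1.0):
    for _ in range(rounds):
        stake = wealth * bet_fraction
        wealth += stake if random.random() < p else -stake
        if wealth <= 0:
            return 0.0  # gambler's ruin
    return wealth

random.seed(0)
trials = 10_000
greedy = [play(1.0) for _ in range(trials)]
kelly = [play(0.2) for _ in range(trials)]

print("greedy ruin rate:", sum(w == 0 for w in greedy) / trials)  # ~1.0
print("kelly ruin rate: ", sum(w == 0 for w in kelly) / trials)   # 0.0
print("kelly median wealth:", sorted(kelly)[trials // 2])         # ~7.5
```

The greedy strategy is ruined in essentially every trial (it survives 100 rounds with probability 0.6^100), while the Kelly fraction never risks ruin and compounds; that is the sense in which total-EU-maximization over the whole game diverges from greedy per-round EU-maximization.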
Replies from: Evan Ward
comment by Said Achmiz (SaidAchmiz) · 2019-06-10T04:47:19.552Z · LW(p) · GW(p)
Meta: this should probably be a link post. (Also, why not cross-post the whole text?)
Replies from: Evan Ward
comment by Nebu · 2019-06-29T19:06:17.994Z · LW(p) · GW(p)
suppose Bob is trying to decide to go left or right at an intersection. In the moments where he is deciding to go either left or right, many nearly identical copies in nearly identical scenarios are created. They are almost entirely all the same, and if one Bob decides to go left, one can assume that 99%+ of Bobs made the same decision.
I don't think this assumption is true (and thus perhaps you need to put more effort into checking/arguing that it's true, if the rest of your argument relies on it). In the moments where Bob is trying to decide whether to go left or right, there is no a priori reason to believe he would choose one side over the other -- he's still deciding.
Bob is composed of particles with quantum properties. For each property, there is no a priori reason to assume that it contributes (on average) more strongly to causing Bob to decide to go left rather than right.
For each quantum property of each particle, an alternate universe is created where that property takes on some value. In a tiny proportion of these universes (still infinitely many in absolute number), "something weird" happens: Bob spontaneously disappears, or Bob spontaneously becomes Alice, or the left and right paths disappear leaving Bob stranded, etc. We'll ignore these possibilities for now.
In some of the remaining "normal" universes, the properties of the particles have proceeded in such a way as to trigger Bob to think "I should go left", and in other "normal" universes, in such a way as to trigger Bob to think "I should go right". There is no a priori reason to think that the proportion of the first type of universe is higher or lower than the proportion of the second type. That is, being maximally ignorant, you'd expect about 50% of Bobs to go left and 50% to go right.
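A toy simulation of this symmetry argument (the zero-mean "nudge" model and all of its parameters are my own illustration, not a claim about neurophysics): if Bob's decision is the sign of the sum of many micro-contributions with no a priori bias, the branches split roughly 50/50.

```python
import random

# In each branch, Bob's decision is the sign of the sum of many small
# quantum "nudges".  If no nudge has an a priori bias toward left or right
# (zero mean), the branches split approximately 50/50.

random.seed(0)
branches, nudges_per_branch = 20_000, 100

def decide():
    total = sum(random.gauss(0.0, 1.0) for _ in range(nudges_per_branch))
    return "left" if total < 0 else "right"

left = sum(decide() == "left" for _ in range(branches))
print(f"fraction of branches going left: {left / branches:.3f}")  # ~0.500
```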
Going a bit more meta, if MWI is true, then decision theory "doesn't matter" instrumentally to any particular agent. No matter what arguments you (in this universe) provide for one decision theory being better than another, there exists an alternate universe where you argue for a different decision theory instead.
Replies from: Evan Ward
↑ comment by Evan Ward · 2019-06-30T02:56:33.578Z · LW(p) · GW(p)
There are trillions of quantum operations occurring in one's brain all the time. Comparatively, we make very few executive-level decisions. Further, these high-level decisions are often based on a relatively small set of information and are predictable given that set of information. I believe this implies that a person in the majority of recently created worlds makes the same high-level decisions. It's hard to imagine making many different decisions in any given circumstance, given the relatively ingrained decision procedures we seem to walk the Earth with.
I know that your Occam's prior for Bob in a binary decision is 0.5. That is a separate matter from how many Bobs make the same decision, given the same set of external information and the same high-level decision procedures inherited from the Bob in a recent (perhaps just seconds ago) common parent world.
This decision procedure plausibly does affect recently created Everett worlds and allows people in them to coordinate among 'copies' of themselves. I am not saying I can coordinate with far-past sister worlds. I am theorizing that I can coordinate my selves in my soon-to-be-generated child worlds, because there is no reason to think quantum operations would randomly delete this decision procedure from my brain over a period of seconds or minutes.
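For concreteness, a minimal sketch of the kind of branch-diversifying decision procedure being described (the function name, the weighting scheme, and the use of os.urandom as a classical stand-in are my assumptions; the proposal itself calls for a genuinely quantum randomness source, so that different child worlds actually take different options):

```python
import os

# Sketch of a branch-diversifying decision procedure: choose among options
# with probabilities proportional to assumed weights, using a random bit
# source.  os.urandom is a classical stand-in; the proposal requires a
# genuinely quantum source, so that the choice differs across child worlds
# rather than being (almost) identical in all of them.

def quantum_choice(options):
    """options: list of (name, weight) pairs; weights reflect how much
    measure the decider wants to allocate to each option."""
    total = sum(w for _, w in options)
    # Draw a uniform number in [0, 1) from 8 random bytes.
    r = int.from_bytes(os.urandom(8), "big") / 2**64
    cumulative = 0.0
    for name, weight in options:
        cumulative += weight / total
        if r < cumulative:
            return name
    return options[-1][0]  # guard against floating-point rounding

# Example: take the risky option in ~10% of child worlds, the safe one in ~90%.
print(quantum_choice([("safe", 0.9), ("risky", 0.1)]))
```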