Comments

Comment by Evan Ward on How ought I spend time? · 2020-08-07T04:21:57.223Z · LW · GW

It's great to see other people thinking and working on these ideas of efficiently eliciting preferences and very 'subjective' data, and building your own long-term decision support system! I've been pretty frustrated by the seeming lack of tooling for this. Inspired partially by Gwern's Resorter as well, I've started experimenting with my own version, except my goal is to end up with random variables for cardinal utilities (at least across various metrics), and I'm having the inputs for comparisons be quickly-drawn probability distributions.

Comment by Evan Ward on Expected utility and repeated choices · 2019-12-29T23:35:03.299Z · LW · GW

To maximize utility when you can play any number N of games, I believe you just need to calculate the EV (not the EU) of playing every possible strategy. Then you pass all those values through your utility function U and go with the strategy that yields the highest utility.
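The procedure above can be sketched for a toy gamble (the gamble, its payoffs, and all names here are made up for illustration — a strategy is just a choice of which rounds to play):

```python
from itertools import product

# Hypothetical gamble: each play wins +2 or loses -1 with equal probability.
OUTCOMES = [(2.0, 0.5), (-1.0, 0.5)]

def expected_value(strategy):
    """EV of total winnings for a strategy: a tuple of booleans,
    one per round, saying whether to play that round."""
    per_play_ev = sum(x * p for x, p in OUTCOMES)
    return per_play_ev * sum(1 for play in strategy if play)

def best_strategy(n_rounds, utility):
    """Enumerate every strategy, compute its EV, and pick the one
    whose EV scores highest under U -- the procedure described above."""
    strategies = product([False, True], repeat=n_rounds)
    return max(strategies, key=lambda s: utility(expected_value(s)))
```

Since this toy gamble has positive per-play EV, any increasing U picks the play-every-round strategy; the ranking only becomes interesting when strategies trade EV against something U penalizes.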

Comment by Evan Ward on Expected utility and repeated choices · 2019-12-29T23:24:08.572Z · LW · GW

<Tried to retract this comment since I no longer agree with it, but it doesn't seem to be working>

Comment by Evan Ward on [deleted post] 2019-06-30T02:56:33.578Z

There are trillions of quantum operations occurring in one's brain all the time. Comparatively, we make very few executive-level decisions. Further, these high-level decisions are often based on a relatively small set of information and are predictable given that information. I believe this implies that a person in the majority of recently created worlds makes the same high-level decisions. It is hard to imagine us making many different decisions in any given circumstance, given the relatively ingrained decision procedures we seem to walk the Earth with.

I know that your Occam prior for Bob in a binary decision is 0.5. That is a separate matter from how many Bobs make the same decision given the same external information and the same high-level decision procedures, inherited from the Bob in a recent (perhaps just seconds ago) common parent world.

This decision procedure plausibly does affect recently created Everett worlds and allows people in them to coordinate among 'copies' of themselves. I am not saying I can coordinate with far-past sister worlds. I am theorizing that I can coordinate with my selves in my soon-to-be-generated child worlds, because there is no reason to think quantum operations would randomly delete this decision procedure from my brain over a period of seconds or minutes.

Comment by Evan Ward on [deleted post] 2019-06-22T22:50:38.805Z

Do you think making decisions with the aid of quantum generated bits actually does increase the diversification of worlds?

Comment by Evan Ward on [deleted post] 2019-06-10T16:58:26.837Z

You make a good point. I fixed it :)

Comment by Evan Ward on [deleted post] 2019-06-09T19:37:26.718Z

I really appreciate this comment, and my idea definitely might come down to trying to avoid risk rather than maximizing expected utility. However, I still think there is something net positive about diversification. I've written a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and if you could spare the time, I would love your feedback.

Comment by Evan Ward on [deleted post] 2019-06-09T19:34:40.715Z

I think you are right, but my idea applies more when one is uncertain about one's expected-utility estimates. I've written a better version of my idea here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and would love your feedback.

Comment by Evan Ward on [deleted post] 2019-06-09T19:32:33.446Z

I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are right about the term "decision theory", and I have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/

Comment by Evan Ward on [deleted post] 2019-05-04T21:42:56.309Z

I'm sorry but I am not familiar with your notation. I am just interested in the idea: when an agent Amir is fundamentally uncertain about the ethical systems that he evaluates his actions by, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and increase the probability that in a subset of his child worlds, his actions realize value?

It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).

I agree that QM already creates a wide spread of worlds, but I don't think that means it's safe to put all of one's eggs in one basket when one suspects that one's moral system may be fundamentally wrong.

Comment by Evan Ward on Open Thread January 2019 · 2019-01-11T03:56:39.125Z · LW · GW

I too have been lurking for a little while. I have listened to the majority of Rationality: From AI to Zombies by Eliezer and really appreciate the clarity that Bayescraft and similar ideas offer. Hello :)