Comments sorted by top scores.
comment by Donald Hobson (donald-hobson) · 2019-05-04T11:37:05.143Z
I think that your reasoning here is substantially confused. FDT can handle reasoning about many versions of yourself, some of which might be duplicated, just fine. If your utility function is such that U(quantum mix of A and B with measures p and 1−p) = pU(A) + (1−p)U(B), where 0 ≤ p ≤ 1 (and you don't intrinsically value looking at quantum randomness generators), then you won't make any decisions based on one.
If you would prefer the universe to be in a quantum mix of A and B rather than a logical bet between A and B (i.e. you get A if the 3^^^3th digit of π is even, else B), then flipping a quantum coin makes sense.
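To spell out why the linear case never randomizes (a quick added sketch, assuming utility really is linear in branch measure, with p the measure the coin puts on A):

U(quantum mix) = pU(A) + (1−p)U(B) ≤ max(U(A), U(B)),

so consulting the coin can never beat simply picking whichever of A and B you rate higher; it only helps if you value the mixture itself above that average.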
I don't think that randomized behavior is best described as a new decision theory, as opposed to an existing decision theory with odd preferences. I don't think we actually should randomize.
I also think that quantum randomness has a lot of power over reality. There is already a very wide spread of worlds. So your attempts to spread it wider won't help.
Replies from: Alexei, Evan Ward
↑ comment by Alexei · 2019-05-05T01:14:20.206Z
"If you would prefer the universe to be in ..."
If I were to make Evan's argument, that's the point I'd try to make.
My own intuition supporting Evan's line of argument comes from the investing world: it's much better to run a lot of uncorrelated positive-EV strategies than a few really good ones, since the former reduces your volatility and drawdown, even at the expense of EV measured in USD.
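As a rough illustration of that intuition (a minimal simulation sketch; the 1% per-period mean, 10% volatility, strategy counts, and equal weighting are made-up assumptions, not anything from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_stats(n_strategies, n_periods=10_000, mean=0.01, vol=0.10):
    # Each strategy has i.i.d. positive-EV returns, uncorrelated across strategies.
    returns = rng.normal(mean, vol, size=(n_periods, n_strategies))
    portfolio = returns.mean(axis=1)  # equal-weight the strategies
    return portfolio.mean(), portfolio.std()

for n in (1, 4, 25):
    mu, sigma = portfolio_stats(n)
    print(f"{n:>2} strategies: mean {mu:.4f}, volatility {sigma:.4f}")
```

Mean return stays around 1% per period while volatility falls roughly as 1/√N, which is what shrinks the drawdowns.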
↑ comment by Evan Ward · 2019-05-04T21:42:56.309Z
I'm sorry, but I am not familiar with your notation. I am just interested in the idea: when an agent, Amir, is fundamentally uncertain about the ethical systems by which he evaluates his actions, is it better if all of his immediate child worlds make the same decision? Or should he hedge against his moral uncertainty, ensure his immediate child worlds choose courses of action that optimize for irreconcilable moral frameworks, and thereby increase the probability that in at least a subset of his child worlds his actions realize value?
It seems that in a growing market (worlds splitting at an exponential rate), it pays in the long term to diversify your portfolio (optimize locally for irreconcilable moral frameworks).
I agree that QM already creates a wide spread of worlds, but I don't think that means it's safe to put all of one's eggs in one basket when one suspects that their moral system might be fundamentally wrong.
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2019-05-05T08:59:22.092Z
If you think that there is a 51% chance that A is the correct morality and a 49% chance that B is, with no more information available, which is best?
1. Optimize A only.
2. Flip a quantum coin; optimize A in one universe, B in the other.
3. Optimize for a mixture of A and B within the same universe (act as if you had utility U = 0.51A + 0.49B). (I would do this one.)
If A and B are local objects (e.g. paperclips, staples), then flipping a quantum coin makes sense if your utility per object is concave in both of them. If your utility is, say, U = √(paperclips) + √(staples), then, if you are the only potential source of staples or paperclips in the entire quantum multiverse, the quantum-coin and classical-mix approaches are equally good. (Assuming that the resource-to-paperclip conversion rate is uniform.)
However, the assumption that the multiverse contains no other paperclips is probably false. Such an AI will run simulations to see which is rarer in the multiverse, and then make only that.
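A tiny numerical sketch of the paperclips/staples point (the √ utility, the budget of 100 objects, and the assumption that only measure-weighted multiverse totals matter are all illustrative choices, not anything specified above):

```python
import math

# Toy model: utility is a concave function of the measure-weighted,
# multiverse-wide totals of paperclips and staples.
def utility(paperclips, staples):
    return math.sqrt(paperclips) + math.sqrt(staples)

N = 100           # objects our branch can produce
background = 0    # paperclips already existing elsewhere in the multiverse

# Quantum coin: half the branches (by measure) make N paperclips, half make N staples.
totals_coin = (background + 0.5 * N, 0.5 * N)
# Classical mix: every branch makes N/2 paperclips and N/2 staples.
totals_mix = (background + 0.5 * N, 0.5 * N)
# Make only the rarer object (staples, if the background already holds paperclips).
totals_staples_only = (background, N)

for name, (p, s) in [("quantum coin", totals_coin),
                     ("classical mix", totals_mix),
                     ("staples only", totals_staples_only)]:
    print(f"{name:>13}: U = {utility(p, s):.2f}")
```

With background = 0 the coin and the mix tie, since they produce the same multiverse totals; with a large background (say 10,000 paperclips) the staples-only policy wins, because concavity makes the marginal staple worth more than the marginal paperclip.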
The talk about avoiding risk rather than maximizing expected utility, and about your utility function being nonlinear, suggests that this is a hackish attempt to avoid bad outcomes more strongly.
While this isn't a bad attempt at decision theory, I wouldn't want to turn on an ASI that was programmed with it. You are getting into the mathematically well-specified, novel failure modes. Keep up the good work.
Replies from: Evan Ward
↑ comment by Evan Ward · 2019-06-09T19:37:26.718Z
I really appreciate this comment, and my idea definitely might come down to trying to avoid risk rather than maximize expected utility. However, I still think there is something net positive about diversification. I wrote a better version of my post here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and, if you could spare the time, I would love your feedback.
comment by Alexei · 2019-05-05T01:12:28.866Z
I'm actually very glad you wrote this up, because I have had a similar thought for a while now. And my intuition is roughly similar to yours. I wouldn't use terms like "decision theory," though, since around here that has very specific mathematical connotations. And while I do think my intuition on this topic is probably incorrect, it's not yet completely clear to me how.
Replies from: Evan Ward
↑ comment by Evan Ward · 2019-06-09T19:32:33.446Z
I am glad you appreciated this! I'm sorry I didn't respond sooner. I think you are right about the term "decision theory" and have opted for "decision procedure" in my new, refined version of the idea at https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/
comment by Pattern · 2019-05-04T21:40:39.358Z
Intuitively, one does not want to take actions a and b with probabilities of 2/3 and 1/3 whenever the EU of a is twice that of b. Rather, it might be useful not to act entirely on one's utility estimates, given the uncertainty present - but if you are absolutely certain that U(a) = 2*U(b), then it seems obvious one should take action a, if they are mutually exclusive. (If there is a 1/2 chance that U(a) = 1 and U(b) = 2, and a 1/2 chance that U(a) = 1 and U(b) = 1/2, then EU(a) = 1 and EU(b) = 1.25.)
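Spelled out (a small sketch using the same numbers as the parenthetical):

```python
# Two equally likely models of the world, with different utility estimates.
scenarios = [
    (0.5, {"a": 1.0, "b": 2.0}),
    (0.5, {"a": 1.0, "b": 0.5}),
]

EU = {act: sum(p * u[act] for p, u in scenarios) for act in ("a", "b")}
print(EU)  # {'a': 1.0, 'b': 1.25} -- b wins in expectation despite the uncertainty
```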
Replies from: Evan Ward
↑ comment by Evan Ward · 2019-06-09T19:34:40.715Z
I think you are right, but my idea applies more when one is uncertain about their expected utility estimates. I wrote a better version of my idea here: https://www.evanward.org/an-entropic-decision-procedure-for-many-worlds-living/ and would love your feedback.