Posts

Comments

Comment by dan-fitch on [deleted post] 2019-07-10T15:53:10.934Z

Whichever way you fall on the physics, there's no reason the many-worlds hypothesis forces every choice to be taken with an even distribution. Given a choice between A and B, there is a probability distribution over them. If A is the more ethical choice, you should still strive towards A, so that more versions of you across the possible worlds also strive towards A.

If anything, if you think many-worlds could be true, it makes ethics that much more important to think about. You are carving out a corner and making it expand outward into possibility space.

Comment by Dan Fitch (dan-fitch) on Announcing the AI Alignment Prize · 2017-12-09T04:33:50.477Z · LW · GW

I don't know whether this is a useful "soft" submission, since I am still reading and learning in this area.

But I think the current metaphors (paperclips, etc.) are not very persuasive for convincing the world at large that value alignment is a BIG, HARD PROBLEM. Here is my attempt to add a possibly-new metaphor to the mix: https://nilscript.wordpress.com/2017/11/26/parenting-alignment-problem/