Ethics in Many Worlds 2020-11-06T23:21:15.630Z
Review and Summary of 'Moral Uncertainty' 2020-10-07T17:52:42.189Z


Comment by fin on Ethics in Many Worlds · 2021-05-01T15:10:03.680Z · LW · GW

Yes, I'm almost certain it's too 'galaxy brained'! But does the case rely on entities outside our light cone? Aren't there many 'worlds' within our light cone? (I literally have no idea, you may be right, and someone who knows should intervene)

I'm more confident that this needn't relate to the literature on infinite ethics, since I don't think any of this relies on infinities.

Comment by fin on Ethics in Many Worlds · 2021-04-27T14:23:14.663Z · LW · GW

Thanks, this is useful.

Comment by fin on I'm still mystified by the Born rule · 2021-03-04T17:22:04.563Z · LW · GW

There are some interesting and tangentially related comments in the discussion of this post (incidentally, the first time I've been 'ratioed' on LW).

Comment by fin on Inner Alignment in Salt-Starved Rats · 2020-11-29T14:09:32.498Z · LW · GW

Thanks, really appreciate it!

Comment by fin on Embedded Interactive Predictions on LessWrong · 2020-11-21T13:02:04.934Z · LW · GW

Was wondering the same thing — would it be possible to set others' answers as hidden by default on a post until the reader makes a prediction?

Comment by fin on Inner Alignment in Salt-Starved Rats · 2020-11-20T19:14:28.451Z · LW · GW

I interviewed Kent Berridge a while ago about this experiment and others. If folks are interested, I wrote something about it here, mostly trying to explain his work on addiction. You can listen to the audio on the same page.

Comment by fin on Ethics in Many Worlds · 2020-11-08T07:24:46.118Z · LW · GW

Got it, thanks very much for explaining.

Comment by fin on Ethics in Many Worlds · 2020-11-07T18:14:29.946Z · LW · GW

Thanks, that's a nice framing.

Comment by fin on Ethics in Many Worlds · 2020-11-07T10:18:32.900Z · LW · GW

Thanks for the response. I'm bumping up against my lack of technical knowledge here, but a few thoughts about the idea of a 'measure of existence'. I like how UDASSA tries to explain how the Born probabilities drop out of a kind of sampling rule, and why, intuitively, I should give more 'weight' to minds instantiated by brains than by a mug of coffee. But this idea of 'weight' is ambiguous to me. Why should sampling weight (you're more likely to find yourself as a real rather than a Boltzmann brain, or as a 'thick' rather than an 'arbitrary' computation) imply ethical weight (the experiences of Boltzmann brains matter far less than those of real brains)? Here's Lev Vaidman, suggesting it shouldn't: "there is a sense in which some worlds are larger than others", but "note that I do not directly experience the measure of my existence. I feel the same weight, see the same brightness, etc. irrespectively of how tiny my measure of existence might be."

So in order to think that minds matter in proportion to the measure of the world they're in, while recognising that they 'feel' precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally. There's no contradiction, but that seems strange to me: I would have thought that all there is to how much a conscious experience matters is just what it feels like, because that's all I mean by 'conscious experience'. After all, if I'm understanding this right, you're in a 'branch' right now that is many orders of magnitude less real than the larger 'parent' branch you were in yesterday. Does that mean your present welfare now matters orders of magnitude less than it did yesterday? Another approach might be to deny that arbitrary computations are conscious on independent grounds, and to explain the observed Born probabilities without 'diluting' the weight of future experiences over time.

Also, presumably there's some technical way of actually cashing out the idea of something being 'less real'? Literally speaking, I'm guessing it's best not to treat reality as a predicate at all (let alone one that comes in degrees). But that seems like a surmountable issue.
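(For what it's worth, my understanding is that the standard way of cashing this out, as in Vaidman's own presentation, is just the Born weight of a branch: decompose the universal state over decoherent branches and read off the squared amplitudes. A sketch:

```latex
\[
  |\Psi\rangle \;=\; \sum_i c_i \, |\mathrm{world}_i\rangle,
  \qquad
  \mu_i \;=\; |c_i|^2,
  \qquad
  \sum_i \mu_i \;=\; 1 .
\]
```

So 'less real' would mean: sitting in a branch with smaller \(\mu_i\). That avoids treating reality as a predicate, though it leaves the ethical question above untouched.)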

I'm afraid I'm confused by what you mean about including the Hilbert measure as part of the definition of MWI. My understanding was that MWI is something like what you get when you don't add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.

Still don't know what to think about all this!

Comment by fin on Ethics in Many Worlds · 2020-11-06T23:25:21.254Z · LW · GW

More Notes

Something very like the view I'm suggesting can be found in Albert & Loewer (1988) and their so-called 'many minds' interpretation. This is interesting to read about, but the whole idea strikes me as extremely hand-wavey and silly. Here's David Wallace with a dunk: “If it is just a fundamental law that consciousness is associated with some given basis, clearly there is no hope of a functional explanation of how consciousness emerges from basic physics.”

I should also mention that I tried explaining this idea to another philosopher of physics, who took it as a reductio of MWI! I suppose you might equally take it as a reductio of any kind of total consequentialism. One man's modus ponens...

David Lewis briefly discusses the ethical implications of his modal realism (warning: massive pdf), concluding that there aren't any. This may be of interest, but not sufficiently similar to the case at hand to be directly relevant, I think.

Another potential ethical implication: Hal Finney makes the point that MWI should steer you towards maximising good outcomes in expectation if you weren't already doing so (e.g. if you were previously risk-averse, risk-seeking, or just somehow insensitive to very small probabilities of extreme outcomes). The whole thread is a nice slice of LW history and worth reading.
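Finney's point can be illustrated with a toy calculation (my own sketch, not from the thread): a risk-averse agent might reject a gamble with a tiny probability of a huge payoff, but if every outcome genuinely occurs in some branch, weighted by its measure, then the natural way to total up value across branches is just the expectation.

```python
def expected_value(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

# A certain payoff of 1, versus a 0.1% chance of 2000 (else nothing).
safe = [(1.0, 1.0)]
gamble = [(0.001, 2000.0), (0.999, 0.0)]

# A risk-averse or small-probability-insensitive agent might take 'safe';
# summing value across Born-weighted branches favours the gamble,
# since its expectation (2.0) exceeds the sure thing (1.0).
assert expected_value(gamble) > expected_value(safe)
```

This is only meant to show the structure of the argument: on this picture, 'maximise expected value' stops being one risk attitude among many and becomes the literal sum of what happens.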

Comment by fin on What are some beautiful, rationalist artworks? · 2020-10-20T14:23:20.458Z · LW · GW

Thanks, that's far more relevant!

Comment by fin on What are some beautiful, rationalist artworks? · 2020-10-17T14:57:18.112Z · LW · GW

From Wikipedia: An Experiment on a Bird in the Air Pump is a 1768 oil-on-canvas painting by Joseph Wright of Derby, one of a number of candlelit scenes that Wright painted during the 1760s. The painting departed from the conventions of the time by depicting a scientific subject in the reverential manner formerly reserved for scenes of historical or religious significance. Wright was intimately involved in depicting the Industrial Revolution and the scientific advances of the Enlightenment. While his paintings were recognized as exceptional by his contemporaries, his provincial status and choice of subjects meant the style was never widely imitated. The picture has been owned by the National Gallery in London since 1863 and is regarded as a masterpiece of British art.

Comment by fin on What are some beautiful, rationalist artworks? · 2020-10-17T14:56:21.501Z · LW · GW
An Experiment on a Bird in the Air Pump
Comment by fin on What are some beautiful, rationalist artworks? · 2020-10-17T09:19:19.543Z · LW · GW

'Earthrise'. Taken from lunar orbit by astronaut William Anders on December 24, 1968, during the Apollo 8 mission. Nature photographer Galen Rowell declared it "the most influential environmental photograph ever taken".

Comment by fin on What are some beautiful, rationalist artworks? · 2020-10-17T09:17:39.817Z · LW · GW
Comment by fin on Review and Summary of 'Moral Uncertainty' · 2020-10-14T13:26:19.225Z · LW · GW

Understood. I'm not so sure there is such a big difference between uses of 'rational' and 'moral' in terms of implying the existence of norms 'outside of ourselves'. In any case, it sounds to me now like you're saying that everyday moral language assumes cognitivism + realism. Maybe so, but I'm not so sure what this has to do with moral uncertainty specifically.

Comment by fin on Review and Summary of 'Moral Uncertainty' · 2020-10-08T11:37:30.966Z · LW · GW

Got it, thanks. I think the phrase 'non-physical essences' makes moral realism sound way spookier than necessary. I don't think it needs to involve 'essences', in a similar way to how one decision could be objectively more rational than another without there being any rationality 'essences'. But what you're saying sounds basically right. Makes me wonder: it's super unclear what to do if you're also just uncertain between cognitivism and non-cognitivism. Would you need some extra layer of uncertainty and a corresponding decision procedure? I'm really not sure.

Comment by fin on Review and Summary of 'Moral Uncertainty' · 2020-10-08T10:17:39.357Z · LW · GW

Hmm. Moral uncertainty definitely doesn't assume moral realism. You could just have some credence in the possibility that there are no moral facts.

If instead by 'essentialism' you mean moral cognitivism (the view that moral beliefs can take truth values) then you're right that moral uncertainty makes most sense under cognitivism. But non-cognitivist versions (where your moral beliefs are just expressions of preference, approval, or desire) also seem workable. I'm not sure what any of this has to do with 'non-physical essences' though. I think I know what you mean by that, but maybe you could clarify?

Interesting point about moral uncertainty favouring elegant theories. I'm not sure it's necessarily true, however: again, I could just have some credence in the possibility that a messy version of folk morality is true.