post by [deleted]


Comments sorted by top scores.

comment by Dagon · 2019-03-22T17:00:05.809Z · LW(p) · GW(p)

On the "a day in hell cannot be outweighed" question, do you have any analysis of that intuition? Are you assuming that you'll remember that day and be broken by it, or is there some other negative value you're putting on it? Do you evaluate "a day in hell, 1000 years in heaven, then termination" differently from "1000 years in heaven, then a day in hell, then termination"? How about "a day in hell, mind-reset to prior state, then 1000 years in heaven"?

The reason I ask is that I'm trying to understand what's being evaluated. Are you comparing value of instantaneous experience integrated over time, or are you comparing value of effect that experiences have on your identity?

Replies from: Isnasene
comment by Isnasene · 2019-03-25T07:00:25.807Z · LW(p) · GW(p)

I see all of the situations you describe in the first paragraph as being on the same lexical order. I agree that effects of experiences on identity could factor into how I value things in different ways but these effects would be lexically less important than the issues that I'm worried about. For the sake of the thing I'm trying to discuss, I'd evaluate each of those trades as being the same.

As an aside, here's a more explicit summary of where I think my "a day in hell cannot be outweighed" intuition comes from--there are two things going on:

1. First, I have difficulty coming up with any upper bound on how bad the experience of being in hell is.

But this does not preclude me from trading off 1000 years of heaven for a day in hell, so long as heaven is appropriately defined. Just as there are situations that I cannot treat as finitely unpleasant, there are also (in principle) situations that I cannot treat as finitely pleasant [LW · GW], and I am willing to trade those off. As I also say in that section, I generally give more credence to this utilitarian approach than to a strictly negative utilitarian one. However...

2. I have a better object-level understanding of really bad suffering than I do of comparably intense happiness.

This is essentially where my "a day in hell cannot be outweighed" intuition comes from. It's not just the idea that the instantaneous experience integrated over time in hell is an unboundedly large number relative to some less intense suffering; it's also that I grasp (on an intuitive level) how bad hell can be but lack a similar grasp on how good heaven can get. This is just the intuition though--and it's not something I rationally stand by:

Given both that anthropocentric biases explain devaluing arbitrarily good experiences, and that I prefer ethical systems that are not species-specific, I give more credence to general forms of utilitarianism that allow for arbitrarily intense positive or negative experiences than to more limited forms of negative utilitarianism. --me

Replies from: Dagon
comment by Dagon · 2019-03-25T16:13:49.713Z · LW(p) · GW(p)

Interesting. For me, I tend to normalize top and bottom values, so "a day in heaven" and "a day in hell" cancel each other out. I don't know where the upper and lower bounds are for either, so I'm just assuming they're of the same magnitude.

But I also tend to assume, for this kind of extreme situation, that we're talking about no-consequences pain, which doesn't leave lasting false beliefs or terrors that interfere with future joy and goal-accomplishment. So when it's done, it's done. It may be unimaginably bad in the moments, but it ends and is then normalized to tolerable (if unpleasant) levels in my memory. I don't know what the similar heaven stipulation would be, as I tend to imagine it as getting stronger/smarter and knowing I can meet more and bigger goals (joy and optimism, not just pleasure). Which, from outside the sim, makes me devalue both heaven and hell compared to reality. Making the whole comparison pointless.

All this to say, I still don't have a good conception of valuation of joy/pleasure vs pain/despair. I honestly don't know whether they're a stock or a flow, nor whether the slope, the peak, or the integral over time is most important.

comment by shminux · 2019-03-18T03:54:43.204Z · LW(p) · GW(p)

Would a utility function like U=happiness-tan(suffering) describe your intuition? Small amounts of happiness and suffering contribute linearly to the utility, but suffering becomes progressively more important as the amounts increase, until at some point (Pi/2 in this example, but the units are arbitrary, of course), the contribution of suffering to your utility function goes to negative infinity, so no amount of happiness can outweigh it?
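For concreteness, here is a minimal numerical sketch of that shape (the function and variable names, and the convention that suffering is scaled onto [0, pi/2), are purely illustrative assumptions, not anything from the comment itself):

```python
import math

def utility(happiness, suffering):
    # U = happiness - tan(suffering), with suffering scaled so that
    # pi/2 marks the point where its contribution diverges to -infinity.
    return happiness - math.tan(suffering)

print(utility(1.0, 0.1))       # ~0.90: small suffering trades off roughly linearly
print(utility(10.0, 1.0))      # ~8.44: moderate suffering already counts super-linearly
print(utility(1e6, 1.570796))  # hugely negative: this close to pi/2, even a million
                               # units of happiness cannot outweigh the suffering
```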

Replies from: Isnasene
comment by Isnasene · 2019-03-18T17:06:10.413Z · LW(p) · GW(p)

Something like this sounds qualitatively similar to what I have in mind at first, but it isn't really representative of my thought process. Here are some key differences/clarifications that should help convey it:

1. Clarify that U=happiness-tan(suffering) applies to each individual's happiness and suffering (with the global utility function then calculated by summing over all people) rather than to the universe's total suffering and total happiness, as I talk about here [LW · GW]. People often talk about this implicitly but I think being clear about it is useful.

2. I don't want a utility function that ends with something just going to negative infinity, because it breaks down on questions like "Would you prefer this infinitely bad thing to happen for five minutes or ten minutes?"--both options come out as infinitely bad. This is why value-lexicality as shown in figure 1b [LW · GW] is important: many different events can be infinitely worse than other things from the inside view, and it's important that our utility function is capable of comparing between them (see the sketch after this list).

3. Clarify what is meant by "happiness" and "suffering." As I mention here [LW · GW], I agree with Ord's Worse-For-Everyone argument: metrics of happiness and suffering are often literally metrics of how much we should value or disvalue an experience, which tautologically implies that any utility function of them should be a straight line, regardless of intensity. Going by this definition, I would never claim that some finite amount of suffering should be treated as infinitely bad, as tan(suffering) would suggest. Instead, my intuition is that, from the inside view, certain experiences are perceived as involving infinitely bad (or lexically worse) suffering--so if our definition of suffering is based on the inside view (which I think is reasonable), then the amount of suffering experienced can become infinite. I don't value a finite amount of suffering infinitely; I just think that suffering of effectively infinite magnitude might be possible.
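Here is a minimal sketch of the kind of comparison value-lexicality allows and a flat negative infinity does not (the two-level tuple representation and all names are my own illustrative assumptions, not anything from the linked post):

```python
# Represent each outcome as a tuple: (negated lexically-worse suffering, ordinary value).
# Python compares tuples lexicographically, so the first component always dominates
# and the second only breaks ties.

def outcome_value(lexical_suffering_minutes, ordinary_value):
    # Negate suffering so that a "greater" tuple always means a better outcome.
    return (-lexical_suffering_minutes, ordinary_value)

five_minutes_hell = outcome_value(5, 0)
ten_minutes_hell = outcome_value(10, 0)
ordinary_good_life = outcome_value(0, 100)

assert five_minutes_hell > ten_minutes_hell    # shorter hell is strictly preferred,
assert ordinary_good_life > five_minutes_hell  # and no hell beats any hell,
assert float('-inf') == float('-inf')          # whereas -inf utilities cannot distinguish
                                               # five minutes of hell from ten
```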

Alternatively, we could define happiness and suffering in terms of physical experiences rather than something subjective. In that case, my utility function for experience E would look more like asking what (and how much) a person would be willing to experience in order to make E stop. This could be approximated by something like U=happiness-tan(suffering) if the physical experience is defined over an appropriate domain. For example, if suffering represents an above-room-temperature temperature that the person is subjected to for five hours, the disutility might look locally like -tan(suffering) for a temperature range of maybe 100-300 degrees Fahrenheit. But this kind of claim is more of an empirical statement about how I think about suffering than it is the actual way I think about suffering.
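To make that concrete, here is a minimal sketch of what such a local approximation might look like (the 100-300 degree range is the one mentioned above; the linear rescaling onto tan's domain and all names are my own illustrative assumptions):

```python
import math

MIN_TEMP_F, MAX_TEMP_F = 100.0, 300.0  # illustrative range from the paragraph above

def heat_disutility(temp_f):
    # Rescale the temperature linearly onto [0, pi/2) and take -tan of it, so
    # disutility grows roughly linearly at first and diverges near 300 F.
    x = (temp_f - MIN_TEMP_F) / (MAX_TEMP_F - MIN_TEMP_F) * (math.pi / 2)
    return -math.tan(x)

print(heat_disutility(110))  # ~-0.08: mildly unpleasant
print(heat_disutility(250))  # ~-2.4: already worse than a linear extrapolation suggests
print(heat_disutility(299))  # ~-127: near the top of the range, disutility blows up
```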