Utilons vs. Hedons

post by Psychohistorian · 2009-08-10T19:20:20.968Z · LW · GW · Legacy · 119 comments

Related to: Would Your Real Preferences Please Stand Up?

I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did, I would hardly have time to process that much information. Every day, hundreds of millions of incredibly wonderful and terrible things happen to people out there, and I wouldn't know the difference if they didn't.

On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.

This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew, in a place you never heard of, dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least your perception of it), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such things from happening. I'm going to use "utilons" to refer to value utility units and "hedons" to refer to experiential utility units. I'll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons over hedons explains much of the apparent failure of our moral reasoning.

Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal. Do you take it? What about five hundred? Five hundred thousand?

I can't speak for you, so I'll go through my evaluation of this deal and hope it generalizes reasonably well. I don't take it at any of these values. There's no clear hedonistic explanation for this - after all, I forget it happened. It would be absurd to say that the disutility I experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and this is the only way I can see to explain my rejection in terms of hedons. In fact, even if the memory wipe weren't part of the deal, I doubt the act of having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I'd bet more than five people have died in rural China while I've been writing this post, and it hasn't upset me in the slightest.

The reason I don't take the deal is my values; I believe it's wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead, or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. If, in addition, I knew that millions of people in China would be significantly happier as a result, there's a good chance I'd take the deal even if it didn't help me. I seem to be maximizing utilons and not hedons, and I think most people would do the same.

Also, as another example so obvious that I feel like it's cheating: if most people read the headline "100 workers die in Beijing factory fire" or "1000 workers die in Beijing factory fire," they will not feel ten times the hedonic blow from the second, even if they live in Beijing. That the second is ten times worse is measured in our values, not our experiences; those values are correct, since roughly ten times as many people have seriously suffered from the fire, but if we're talking about people's hedons, no individual suffers ten times as much.

In general, people value utilons much more than hedons. The illegality of drugs is one illustration of this. Arguments for (and against) drug legalization are an even better one. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behavior, reducing expenditures on prisons, improving treatment for addicts, and similar values. "Lots of people who want to will get really, really high" is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of Prohibition in the 1920s was clearly massive).

As a practical matter, this is important because many people do things precisely because they are important in their abstract value system, even if they result in little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness; success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, with anchoring, hedons will get a lot more expensive than they are for the less successful). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonistic payoff.

It may be convenient to argue that the hedonistic payoffs must balance out, but this does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about their hedonistic payoff. To say, "If you did X instead of Y because you 'value' X, then the hedonistic cost of breaking your values must exceed Y-X," is to win an argument by definition; you have to actually figure out the values and see if that's true. If it's not, then I'm not a hedon-maximizer. You can't then assert that I'm an "irrational" hedon maximizer unless you can make some very clear distinction between "irrationally maximizing hedons" and "maximizing something other than hedons."

This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, "it feels good" is not recognized as a major positive value in most of our utilon-functions, and second, doing our homework is recognized as a major positive value in our utilon-functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.

Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as this one. Basically, you choose to draw cards, one at a time, that have a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero[2] (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem, of course, is that if you draw one card per second, you will be dead within a minute with probability ≈ 0.9982, and dead within an hour with probability ≈ 1 - 1.88×10^-165.
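A quick numerical check of those figures (a minimal sketch, assuming the one-card-per-second rate above):

```python
# Chance of surviving n card draws, each with a 10% chance of death.
def p_survive(n, p_death=0.1):
    return (1 - p_death) ** n

print(1 - p_survive(60))    # dead within a minute at one card per second: ~0.9982
print(p_survive(3600))      # chance of surviving a full hour: ~1.88e-165
```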

There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong": it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to "not dying," compared with having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading (may) return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.

Any useful utilitarian calculus needs to take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It may also be a good thing that values can be easier to change than hedonic experiences. But assuming people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.

We know that our experiential utility cannot encompass all that really matters to us, so we have a value system that we place above it, precisely to avoid risking destroying the whole world to make ourselves marginally happier, or pursuing any other means of gaining happiness that carries a tremendous potential cost.

[1] Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.

[2] This assumption is rather problematic, though zero seems to be the only correct value of death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and got selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now, imagine you just got captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize that you can say that death in one is negative and in the other is positive relative to expected utility, but still, the value of death does not seem identical, so I'm suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I'd need a separate post to address this issue properly.

119 comments

Comments sorted by top scores.

comment by Nominull · 2009-08-11T17:02:55.347Z · LW(p) · GW(p)

This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, "it feels good" is not recognized as a major positive value in most of our utilon-functions, and second, doing our homework is recognized as a major positive value in our utilon-functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.

I'm sorry, but this cannot possibly explain the akrasia I have experienced. Living a purposefully hedonistic life is widely considered low-status, so most people do not admit to their consciously hedonistic goals. Thus, the goals we hear about akrasia preventing people from pursuing are all noble, selfless goals: "I would like to do this thing that provides me utility but not hedonistic pleasure, but that damned akrasia is stopping me." With that as your only evidence, it is not unreasonable that you should conclude that akrasia occurs because of the divide between utilons and hedons.

Someone has to take the status hit and end this silence, and it might as well be me. I live my life mostly hedonically. I apologize to everyone who wanted me to optimize for their happiness, but that's the truth. (I may write a top level article eventually in defense of this position.) So, my utility and my hedonic pleasure are basically unified. But I still suffer akrasia! I will sometimes have an activity rich with hedons available to me, but I will instead watch TV and settle for the meager trickle of hedons it provides. I procrastinate in taking pleasure! It is a surprising result, one that a non-hedonist would likely not predict, but it's true. This thing we call akrasia has deeper roots than just resistance against self-abnegation.

Replies from: Cyan
comment by Cyan · 2009-08-11T20:07:11.548Z · LW(p) · GW(p)

I will sometimes have an activity rich with hedons available to me, but I will instead watch TV and settle for the meager trickle of hedons it provides.

Is there a time-horizon aspect to this behavior? (That is, can it be explained by saying that highly enjoyable activities with some start-up time are deferred in favor of flopping on the couch and grabbing the remote control?)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-08-11T21:22:56.478Z · LW(p) · GW(p)

Smiling is an example of a hedonistic activity with no start-up time.

comment by DanArmak · 2009-08-10T22:30:55.610Z · LW(p) · GW(p)

This discussion has made me feel I don't understand what "utilon" really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?

  • "Whatever we maximize"? But we're not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn't something we consciously want.

  • "Whatever we self-report as maximizing"? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.

  • "If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility." That's a definition, yes, but it doesn't really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people's preferences for the future?

Also a note on the post:

Akrasia is what happens when we maximize our hedons at the expense of our utilons.

That definition feels too broad to me. Typically akrasia has two further attributes:

  • Improper time discounting: we don't spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if you maximize hedons over a time horizon of a year.

  • Feeling so bad due to not doing the necessary task that we don't really enjoy ourselves no matter what we do instead (and frequently leading to doing nothing for long periods of time). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience - but we just can't get started!

Replies from: conchis, pjeby, Florent_Berthet
comment by conchis · 2009-08-11T16:28:41.576Z · LW(p) · GW(p)

This discussion has made me feel I don't understand what "utilon" really means.

I agree that the OP is somewhat ambiguous on this. For my own part, I distinguish between at least the following four categories of things-that-people-might-call-a-utility-function. Each involves a mapping from world histories into the reals according to:

  1. how the history affects our mind/emotional states;
  2. how we value the history from a self-regarding perspective ("for our own sake");
  3. how we value the history from an impartial (moral) perspective; or
  4. the choices we would actually make between different world histories (or gambles over world histories).

Hedons are clearly the output of the first mapping. My best guess is that the OP is defining utilons as something like the output of 3, but it may be a broader definition that could also encompass the output of 2, or it could be 4 instead.

I guess that part of the point of rationality is to get the output of 4 to correspond more closely to the output of either 2 or 3 (or maybe something in between): that is, to help us act in greater accordance with our values - in either the self-regarding or impartial sense of the term.

"Values" are still a bit of a black box here though, and it's not entirely clear how to cash them out. I don't think we want to reduce them either to actual choices or simply to stated values. Believed values might come closer, but I think we probably still want to allow that we could be mistaken about them.

Replies from: Adventurous
comment by Adventurous · 2009-08-12T18:58:20.829Z · LW(p) · GW(p)

What's the difference between 1 and 2? If we're being selfish, then surely we just want to experience the most pleasurable emotional states. I would read "values" as an individual strategy for achieving this. Then, being unselfish is valuing the emotional states of everyone equally... so long as they are capable of experiencing equally pleasurable emotions, which may be untestable.

Note: just re-read OP, and I'm thinking about integrating over instantaneous hedons/utilons in time and then maximising the integral, which it didn't seem like the OP did.

Replies from: conchis
comment by conchis · 2009-08-12T20:10:04.923Z · LW(p) · GW(p)

We can value more than just our emotional states. The experience machine is the classic thought experiment designed to demonstrate this. Another example that was discussed a lot here recently was the possibility that we could value not being deceived.

comment by pjeby · 2009-08-11T02:18:57.501Z · LW(p) · GW(p)

That definition feels too broad to me. Typically akrasia has two further attributes:

  • Improper time discounting: we don't spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if you maximize hedons over a time horizon of a year.

  • Feeling so bad due to not doing the necessary task that we don't really enjoy ourselves no matter what we do instead (and frequently leading to doing nothing for long periods of time). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience - but we just can't get started!

Which is why it's pretty blatantly obvious that humans aren't utility maximizers on our native hardware. We're not even contextual utility maximizers; we're state-dependent error minimizers, where what errors we're trying to minimize are based heavily on short-term priming and longer-term time-decayed perceptual averages like "how much relaxation time I've had" or "how much I've gotten done lately".

Consciously and rationally, we can argue we ought to maximize utility, but our behavior and emotions are still controlled by the error-minimizing hardware, to the extent that it motivates all sorts of bizarre rationalizations about utility, trying to force the consciously-appealing idea of utility maximization to contort itself enough to not too badly violate our error-minimizing intuitions. (That is, if we weren't error-minimizers, we wouldn't feel the need to reduce the difference between our intuitive notions of morality, etc. and our more "logical" inclinations.)

Replies from: DanArmak
comment by DanArmak · 2009-08-11T02:25:35.436Z · LW(p) · GW(p)

Consciously and rationally, we can argue we ought to maximize utility

Then, can you tell me what utility is? What is it that I ought to maximize? (As I expanded on in my top-level comment)

Replies from: pjeby
comment by pjeby · 2009-08-11T14:54:52.662Z · LW(p) · GW(p)

Then, can you tell me what utility is?

Something that people argue they ought to maximize, but have trouble precisely defining. ;-)

comment by Florent_Berthet · 2009-08-11T15:39:46.686Z · LW(p) · GW(p)

Has anybody ever proposed a way to value utilons?

It would be easier to discuss them if we knew exactly what they can mean - that is, in a more precise way than just the "unit of utility" definition. For example, how should they be handled through time?

So why not define them with something like this:

Suppose we could precisely measure the instantaneous happiness of a person on a linear scale from 1 to 10, with 1 being the worst pain imaginable and 10 the best of climaxes. This level is constantly varying, for everybody. In this context, one utilon could be the value of an action that increases a person's happiness by one point on this scale for one hour.

Then, for example, if you help an old lady cross the road, making her a bit happier during the next hour (let's say she would have been around 6/10 happy, but thanks to you she will be 6.5/10 happy during this hour), then your action has a utility of half a utilon. You just created 0.5 utilons, and that's a perfectly valid statement - isn't that great?

Using that, a hedon is nothing more than a utilon that we create by raising our own happiness.
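A minimal sketch of that bookkeeping, using the hypothetical numbers from the example above (the function name is mine, not part of the original comment):

```python
def utilons(happiness_delta, hours):
    # One utilon = raising someone's happiness by one point
    # on the 1-10 scale for one hour.
    return happiness_delta * hours

# The old-lady example: 6/10 -> 6.5/10 for one hour.
print(utilons(6.5 - 6.0, 1.0))  # 0.5
```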

Replies from: DanArmak
comment by DanArmak · 2009-08-11T16:01:16.338Z · LW(p) · GW(p)

What you describe are hedons. It's misleading to call them utilons. For rational (not human) agents, utilons are the value units of a utility function which they try to maximize. But humans don't try to maximize hedons, so hedons are not human-utilons.

Replies from: Florent_Berthet
comment by Florent_Berthet · 2009-08-11T17:12:09.287Z · LW(p) · GW(p)

Then would you agree that any utility function should, in the end, maximize hedons (if we were rational agents, that is)? If yes, that would mean that hedons are the goal and utilons are a tool, a sub-goal, which doesn't seem to be what the OP was saying.

Replies from: DanArmak
comment by DanArmak · 2009-08-11T17:48:01.301Z · LW(p) · GW(p)

No, of course not. There's nothing that a utility function should maximize, regardless of the agent's rationality. Goal choice is arational; rationality has nothing to do with hedons. First you choose goals, which may or may not be hedons, and then you rationally pursue them.

This is best demonstrated by forcibly separating hedon-maximizing from most other goals. Take a wirehead (someone with a wire into their "pleasure center" controlled by a thumb switch). A wirehead is as happy as possible (barring changes to neurocognitive architecture), but they don't seek any other goals, ever. They just sit there pressing the button until they die. (In experiments with mice, the mice wouldn't take time off from pressing the button even to eat or drink, and died from thirst. IIRC this went on happening even when the system was turned off and the trigger no longer did anything.)

Short of the wireheading state, no one is truly hedon-maximizing. It wouldn't make any sense to say that we "should" be.

Replies from: Alicorn
comment by Alicorn · 2009-08-11T17:57:49.943Z · LW(p) · GW(p)

Wireheads aren't truly hedon-maximizing either. If they were, they'd eat and drink enough to live as long as possible and push the button a greater total number of times.

Replies from: DanArmak
comment by DanArmak · 2009-08-11T18:14:59.076Z · LW(p) · GW(p)

They are hedon-maximizing, but with a very short time horizon of a few seconds.

If we prefer time horizons as long as possible, then we can conclude that hedon-maximizing implies first researching the technology for medical immortality, then building an army of self-maintaining robot caretakers, and only then starting to hit the wirehead switch.

Of course this is all tongue in cheek. I realize that wireheads (at today's level of technology) aren't maximizing hedons; they're broken minds. When the button stops working, they don't stop pushing it. Adaptation executers in an induced failure mode.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2009-08-12T20:07:52.815Z · LW(p) · GW(p)

It depends on your discount function: if its integral over an infinite period of time is finite (e.g., with exponential discounting), then whether you go that route or just dedicate yourself to momentary bliss will depend on the effort required to reach immortality.
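To make that concrete (a sketch; the exponential discount rate $r$, the constant per-moment bliss $h$, and the research time $T$ are illustrative assumptions, not from the comment):

$$\int_0^D h\,e^{-rt}\,dt = \frac{h}{r}\bigl(1 - e^{-rD}\bigr) \quad \text{(wirehead now, die at time } D\text{)}$$

$$\int_T^\infty h\,e^{-rt}\,dt = \frac{h}{r}\,e^{-rT} \quad \text{(spend } T \text{ reaching immortality first, then wirehead forever)}$$

The immortality route wins only if $e^{-rT} > 1 - e^{-rD}$, so with a finite-integral discount function the answer really does hinge on the effort $T$.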

comment by SforSingularity · 2009-08-11T13:45:07.938Z · LW(p) · GW(p)

Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal.

This example is hardly hypothetical. According to GiveWell, you can save the life of one African person for $200 - $1000.

$200-$1,000 per life saved

Almost everyone has spent $5000 on things that they didn't need - for example, a new car as opposed to a second-hand one, a refurbishment of a room in the house, a family holiday. $5000 comes nowhere close to "doubling your hedons" - in fact it probably hardly makes a dent. Furthermore, almost everyone is aware of this fact, but we conveniently don't pay any attention to it, and our subconscious minds don't remind us about it because the deaths in Africa are remote and impersonal.

Since I know of very few people who spend literally all their spare money on saving lives at $1000 per life, and almost everyone would honestly claim that they would pay $200 - 1000 to save someone from a painful death, it is fair to say that people pretty universally don't maximize "utilons".

Replies from: snarles, Vladimir_Nesov, PhilGoetz
comment by snarles · 2009-08-11T16:42:32.103Z · LW(p) · GW(p)

This is intriguing, but what if the main indirect cause of death in Africa is overpopulation? Depending on the method by which the life is saved, you might not actually do much good by saving it. It has been argued, for example, that food aid in Africa has been bad for its inhabitants in the long term. If there is evidence that there are ways to permanently improve conditions to that extent for that cheap, then this would be very compelling.

Replies from: SforSingularity, MichaelBishop
comment by SforSingularity · 2009-08-11T20:30:25.639Z · LW(p) · GW(p)

If there is evidence that there are ways to permanently improve conditions to that extent for that cheap

This is intriguing, but what if the main indirect cause of death in Africa is overpopulation?

I am not an expert on development in Africa, but my guess is that there is no single cause of the overall problem. Africa's population density is 26 people per km^2 (source), whereas the EU's population density is 114 people per km^2 (source). Thus it is probably the case that Africa could easily sustain its current population if it were more economically developed.

Reducing the population artificially, whether by force or by education, wouldn't make the problem magically go away, though it may help as part of an overall strategy.

If one is interested in charitable projects to improve overall African standards of living, take a look at the Copenhagen Consensus. Improvements in infrastructure, peacekeeping, health, and women's education are all needed.

comment by Mike Bishop (MichaelBishop) · 2009-08-15T22:01:58.895Z · LW(p) · GW(p)

I think the main reason food aid has been criticized is that it is often implemented in a way which a) empowers dictators or b) reduces profit opportunities for African farmers and food distributors, which reduces their incentive to invest in improving their farming or other businesses.

IOW, over-population is not the source of the negative externalities.

comment by Vladimir_Nesov · 2009-08-11T13:55:16.145Z · LW(p) · GW(p)

According to GiveWell, you can save the life of one African person for $200 - $1000.

How reliable is this information?

Replies from: SforSingularity, ektimo
comment by SforSingularity · 2009-08-11T15:22:14.478Z · LW(p) · GW(p)

I found a second source

Cost-effectiveness estimates per death averted are $64-294 for a range of countries

comment by ektimo · 2009-08-12T01:13:47.899Z · LW(p) · GW(p)

According to Peter Unger, it is more like one dollar:

First, a little bit about some of the horrors: During the next year, unless they're given oral rehydration therapy, several million children, in the poorest areas of the world, will die from - I kid you not - diarrhea. Indeed, according to the United States Committee for UNICEF, "diarrhea kills more children worldwide than any other cause." Next, a medium bit about some of the means: By sending in a modest sum to the U.S. Committee for UNICEF (or to CARE) and by earmarking the money for ORT, you can help prevent some of these children from dying. For, in very many instances, the net cost of giving this life-saving therapy is less than one dollar*

Even if this is true, I think it is still more important to spend money to reduce existential risks given that one of the factors is 6 billion + a much larger number for successive generations + humanity itself.

Replies from: matt
comment by matt · 2009-08-12T02:45:38.626Z · LW(p) · GW(p)

One dollar is the approximate cost if the right treatment is in the right place at the right time. How much does it cost to get the right treatment to the right place at the right time?

Replies from: ektimo
comment by ektimo · 2009-08-12T03:38:29.669Z · LW(p) · GW(p)

The price of the salt pill itself is only a few pennies. The one dollar figure was meant to include overhead. That said, the Copenhagen report mentioned above ($64 per death averted) looks more credible. But during a particular crisis the number could be less.

Replies from: Douglas_Knight, PhilGoetz
comment by Douglas_Knight · 2009-08-12T05:10:12.108Z · LW(p) · GW(p)

In the footnote, Unger quotes UNICEF's 10 cents and makes up the 40 cents. UNICEF lied to him. Next time UNICEF tells you it can save a life for 10 cents, ask it what percentage of its $1 billion budget it's spending on this particular project.

According to the Copenhagen Consensus cited by SforSingularity, the goal is to provide about 100 pills per childhood, and most children would have survived the diarrhea anyhow. (To get it as effective as $64/life, diarrhea has to be awfully fatal; more fatal than the article seems to say.) They put overhead at about the same as the cost of the pills, which I find hard to believe. But they're not making it up out of thin air: they're looking at actual clinics dispensing ORT and vitamin A. (Actually, they apply to zinc the overhead for vitamin A, which is distributed 2x/year with 80% penetration, while zinc is distributed with ORT as needed at clinics, with much less penetration. I don't know which is cheaper, but that's sloppy.)

CC says that only 1/3 of bouts of diarrhea are reached by ORT, but the death rate has dropped by 2/3. That's weird. My best guess is that multiple bouts cumulatively weaken the child, which suggests that increasing from 1/3 to 100% would have diminishing returns on diarrhea bouts, but might have hard-to-account-for benefits in general mortality. (Actually, my best guess is that they cherry-picked numbers, but the positive theory is also plausible.)
ETA: there's a simple explanation - since the parents seek treatment at the clinics, they can tell which bouts are bad. But I think my first two explanations play a role, too.

I'm very suspicious that all these numbers may be dramatic underestimates, ignoring costs like bribing the clinicians or dictators. (I haven't looked at them carefully, so if they do produce numbers based on actual start-to-finish interventions, please tell me.) It would be interesting to know how much it cost outsiders to lean on India's salt industry and get it to add iodine.

Replies from: ektimo
comment by ektimo · 2009-08-12T17:07:26.124Z · LW(p) · GW(p)

+1 for above.

As a separate question, what would you do if you lived in a world where Peter Unger was correct? And what if it was 1 penny instead of 1 dollar and giving the money wouldn't cause other problems? Would you never have a burger for lunch instead of rice since it would mean 100 children would die who could otherwise be saved?

comment by PhilGoetz · 2009-08-12T17:56:08.842Z · LW(p) · GW(p)

Salt as rehydration therapy?!

Replies from: Cyan
comment by Cyan · 2009-08-12T18:08:41.871Z · LW(p) · GW(p)

People lose electrolytes in their body fluids. If you rehydrate them without replacing the electrolytes, they get hyponatremia.

comment by PhilGoetz · 2009-08-11T17:16:09.749Z · LW(p) · GW(p)

No; it's fair to say that their utilons are not a linear function of human lives saved.

If you think there are too many people in the world, you might be willing to pay to prevent the saving of lives.

Funny thing is, the only people I know who don't agree that there are too many people in the world are objectivists, libertarians, and extropians (there's a high correlation between these categories), who are among the least likely to give money to save people in Africa.

Replies from: SforSingularity
comment by SforSingularity · 2009-08-11T20:31:11.418Z · LW(p) · GW(p)

If you think there are too many people in the world

Africa's population density is 26 people per km^2 (source), whereas the EU's population density is 114 people per km^2 (source). Thus it is probably the case that Africa could easily sustain its current population if it were more economically developed.

Replies from: Nanani
comment by Nanani · 2009-08-12T01:20:26.489Z · LW(p) · GW(p)

That's a huge "if".

Sending money there is not a way to get the local economy to develop. It's been done for decades and the African economy is barely developed.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-08-15T22:10:58.778Z · LW(p) · GW(p)

IMO, the main reason aid has been ineffective is the particular ways it has been given. It often a) empowers dictators or b) reduces profit opportunities for African farmers and food distributors, which reduces their incentive to invest in improving their farming or other businesses.

In my opinion, it would be easy to make sending money somewhat helpful. But even if I'm right, somewhat helpful is far from maximally helpful.

Replies from: Cyan
comment by Cyan · 2009-08-15T22:21:35.566Z · LW(p) · GW(p)

Something like the Grameen Bank would probably be the best bet. If there's room for economic growth but no capital to power it, then making microcredit available seems like the obvious choice.

comment by Jonathan_Graehl · 2009-08-10T20:05:06.265Z · LW(p) · GW(p)

I suspect we already indirectly, incrementally cause the death of unknown persons in order to accumulate personal wealth and pleasure. Consider goods produced in factories causing air and water contamination affecting incumbent farmers. While I'd like to punish those goods' producers by buying alternatives, it's apparently not worth my time*.

Probably, faced with the requirement to directly and completely cause a death, we would feel wrong enough about this (even with a promise of memory-wipe) to desist. But I find it difficult to consider such a situation honestly when I'm so strongly driven to signal pervasively (even to myself) that I am not an evil person. Perhaps a sufficiently anonymous poll could give us a better indication of what people would actually do.

There are certainly scenarios where under average utility maximization, you'd want to kill innocent people - draw lots if you like, but there's only enough air for 3 of us to survive the return trip from Mars.

* And maybe the economic benefit to the producing region is greater than the harm to the backyarders, and they just need to spend more in compensating or protecting them. But I believe there are some unambiguous cases where I ought to avoid consuming said product at the very least.

Replies from: matt
comment by matt · 2009-08-12T02:50:30.862Z · LW(p) · GW(p)

In general, industrialized economies have better health, lifespan, standard of living, etc. You seem to be paying attention only to the negative side effects of your manufactured goods.

(That graph is not proof. Correlation is not causation. This is a short comment that makes a small point. Go easy on me.)

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-08-12T06:22:20.696Z · LW(p) · GW(p)

Yes, but I acknowledged that possibility in my asterisk-turned-bullet-point (thanks, markup).

Replies from: Cyan
comment by Cyan · 2009-08-12T12:25:47.033Z · LW(p) · GW(p)

To get the asterisk back, use "\*" instead of "*".

comment by Alicorn · 2009-08-10T19:41:29.341Z · LW(p) · GW(p)

Nice post! This distinction should clear up several confusions. Incidentally, I don't know if there's a word for the opposite of a utilon, but the antonym of "hedon" is "dolor".

Replies from: conchis
comment by conchis · 2009-08-10T21:57:38.229Z · LW(p) · GW(p)

disutilon?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-11T01:49:37.330Z · LW(p) · GW(p)

2 utilons + 2 disutilons = 2 futilons

Replies from: DanArmak
comment by DanArmak · 2009-08-11T02:15:52.311Z · LW(p) · GW(p)

If we can split the futilon, we'll double everyone's utility function without needing Omega!

Sadly, the project was then used to bombard enemy countries with disutilons...

comment by rwallace · 2009-08-11T12:22:14.809Z · LW(p) · GW(p)

The card drawing paradox is isomorphic to the old paradox of the game where you double your money each time a coin comes up heads (the paradox being that simplistic theory assigns infinite value to both games). The solution is the same in each case: first, the entity underwriting the game cannot pay out infinite resources, and second, your utility function is not infinitely scalable in whatever resource is being paid.

comment by PhilGoetz · 2009-08-11T01:40:56.704Z · LW(p) · GW(p)

I have the sense that much of this was written as a response to this paradox in which maximizing expected utility tells you to draw cards until you die.

Psychohistorian wrote:

There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong": it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy.

The paradox is stated in utilons, not hedons. But if your hedons were measured properly, your inability to imagine them now is not an argument. This is Omega we're talking about. Perhaps it will augment your mind to help you reach each doubling. Whatever. It's stipulated in the problem that Omega will double whatever the proper metric is. Futurists should never accept "but I can't imagine that" as an argument.

As for utilons, most people assign a much greater value to "not dying," compared with having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading (may) return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.

We need to look at it purely in terms of numbers if we are rationalists, or let us say "ratio-ists". Is your argument really that numeric analysis is the wrong thing to do?

Changing the value you assign life vs. death doesn't sidestep the paradox. We can rescale the problem by an affine transformation so that your present utility is 1 and the utility of death is 0. That will not change the results of expected utility maximization.
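For completeness, here is the standard fact being invoked (a sketch, not part of the original comment): if $u'(x) = a\,u(x) + b$ with $a > 0$, then for any gamble $X$,

$$E[u'(X)] = a\,E[u(X)] + b,$$

so every gamble keeps its rank under $u'$, and choosing $a$ and $b$ so that $u'(\text{present}) = 1$ and $u'(\text{death}) = 0$ cannot change which card-drawing policy maximizes expected utility.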

Replies from: Psychohistorian, DanArmak, Psychohistorian, outlawpoet, SforSingularity
comment by Psychohistorian · 2009-08-11T06:01:08.138Z · LW(p) · GW(p)

Let's try a new card game. Losing isn't death, it's 50 years of torture, followed by death in the most horribly painful way imaginable, for you and everyone you know. We'll say that outcome has utility zero, your current utility is one, and a win doubles your current utility. Do you take the bet?

Or, losing isn't death, it's having to listen to a person scratch a chalkboard for 15 seconds. We'll call that 0, your current situation 1, and a win 2. Do you take the bet?

This is the problem with such scaling. You're defining "double your utility" as "the amount of utility that would make you indifferent to an even-odds bet between X and Y" and then proposing a bet between X and Y where the odds are better than even in your favor. No other definition will consistently yield the results you claim (or at least no other definition type - you could define it the same way but with a different odds threshold). It proves nothing useful.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-08-15T22:49:42.839Z · LW(p) · GW(p)

The example may not prove anything useful, but it did something useful for me. It reminded me that 1) we don't have a single perfect-for-all-situations definition of utility, and 2) our intuition often leads us astray.

comment by DanArmak · 2009-08-11T02:10:45.181Z · LW(p) · GW(p)

We need to look at it purely in terms of numbers if we are rationalists, or let us say "ratio-ists". Is your argument really that numeric analysis is the wrong thing to do?

We need to look at it purely in terms of numbers, only if we assume that we're maximizing hedons (or whatever Omega will double). But why should we assume that?

Let's go back to the beginning of this problem. Suppose for simplicity's sake we choose only between playing once, and playing until we die (these two alternatives were the ones discussed the most). In the latter case we die with very high probability, quite soon. Now I, personally, prefer in such a case not to play at all. Why? Well, I just do - it's fundamental to my desires not to want to die in an hour no matter what the gain in happiness during that hour.

This is how I'd actually behave, and I assume many other people as well. I don't have to explain this fact by inventing a utility function that is maximized by not playing. Even if I don't understand myself why I'd choose this, I'm very sure that I would.

Utilons and hedons are models that are supposed to help explain human behavior, but if they don't fit it, it's the models that are wrong. (This is related to the fact that I'm not sure anymore what utilons are exactly, as per my comment above.)

If we were designing a new system to achieve a goal, or even modifying humans towards a given goal, then it might be best to build maximizers of something. But if we're analyzing actual human behavior, which is how the thread about Omega's game got started, there's no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.

Replies from: MichaelBishop, PhilGoetz
comment by Mike Bishop (MichaelBishop) · 2009-08-15T23:00:17.229Z · LW(p) · GW(p)

there's no reason to assume that humans maximize anything. If we insist on defining human behavior as maximizing hedons (and/or utilons), it follows that hedons do not behave numerically, and so are quite confusing.

In theory, any behavior can be described as a maximization of some function. The question is when this is useful and when it isn't.

comment by PhilGoetz · 2009-08-11T04:34:25.477Z · LW(p) · GW(p)

Utilons and hedons are models that are supposed to help explain human behavior, but if they don't fit it, it's the models that are wrong.

We're modeling rational behavior, not human behavior.

Replies from: DanArmak, MichaelBishop
comment by DanArmak · 2009-08-11T09:16:57.561Z · LW(p) · GW(p)

It seems to me that we're talking about both things in this thread. But I'm pretty sure this post is about analyzing human behavior... Why else does it give examples of human behavior as anecdotal proof of certain models?

I understand that utilons arise from discussions of rational goal-seeking behavior. I still think that they don't necessarily apply to human (arational) behavior.

comment by Mike Bishop (MichaelBishop) · 2009-08-15T22:57:54.359Z · LW(p) · GW(p)

I think we're doing both, and for good reason. Modeling rational behavior and actual behavior are both useful. You are right to point out that confusion about what we are modeling is rampant here though.

comment by Psychohistorian · 2009-08-11T06:07:19.883Z · LW(p) · GW(p)

Assume you are indifferent towards buying Chocolate Bar A at $1 per bar. How much would you pay for a chocolate bar that is 3.25186 times as delicious? What about one that is 12.35 times as delicious? 2^60 times as delicious? What if you were really, really hungry, so much so that vaguely edible dirt would be delicious. Would that 3.25186 remain significant to 5 decimal places, or might it change slightly?

But if your hedons were measured properly, your inability to imagine them now is not an argument.

It is not an argument; it is evidence. I cannot measure how many hedons I am experiencing now. I can kind of compare it to how many hedons I've experienced at times in the past, but it would be difficult. I certainly couldn't say I'm experiencing 10% less hedonic pleasure than my average day, 20% more than I did yesterday, and 45% less than my happiest day ever. The fact that hedons do not appear to yield to simple quantification is why I cannot imagine doubling my hedons. This fact also suggests that "double your hedons" is not a meaningful, or even possible operation, much as it seems meaningless to say that a chocolate bar is 3.873 times as tasty as another chocolate bar; at best I could say it's better or worse.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-12T18:01:23.258Z · LW(p) · GW(p)

Expecting a chocolate bar that is "twice as delicious" to be worth twice as many hedons, and then thinking that is a problem with hedons, is the same mistake as expecting 2X dollars to have twice the utility of X dollars. It is a common mistake; but it has been explained many times on LW lately. Hedons, like utilons, are defined in a way that accounts for scaling effects. If you are committed to expectation maximization, then utilons are defined such that you will prefer a 50% chance of 2X utilons + epsilon to X utilons.

EDIT: Folks, if this comment gets a -3, we have a serious problem. You can't participate in a lot of the discussions on LW if you don't understand this point. Apparently, most LW readers don't understand this point. (Unless they are voting it down because they think I am misinterpreting Psychohistorian.)

Please explain your objections.

Replies from: Psychohistorian
comment by Psychohistorian · 2009-08-12T19:20:42.092Z · LW(p) · GW(p)

Expecting a chocolate bar that is "twice as delicious" to be worth twice as many hedons, is the same mistake as expecting 2X dollars to have twice the utility of X dollars.

Wow. I never said this. Not even "I kind of said this, and you took it out of context." I just plain never claimed anything about the hedonic value of deliciousness, and I never said anything about a doubly delicious chocolate bar being worth double hedons, double dollars, double utilons, or double anything. Moreover, this is unrelated to my point.

My point was that deliciousness isn't properly quantifiable. You don't know how many dollars you'd pay to double your experienced deliciousness, because you don't even know what that would mean. Omega can tell me that a chocolate bar will be twice as delicious, but I can't sample chocolate bars and tell myself which one, if any, was twice as delicious as the first. I have absolutely no way of estimating what it would be like to double the deliciousness of my experience, and if I did double the deliciousness of my experience, I wouldn't know it unless Omega told me so.

This is a very, very big problem. That I have never experienced multiplying deliciousness by a scalar and cannot imagine experiencing such is evidence that "twice as delicious" cannot reasonably modify "chocolate bar," or anything else for that matter. The same seems to be true of hedons; you'd need Omega to tell you precisely how many hedons you've gotten today as compared to yesterday. Obviously though, you don't need Omega to tell you if you have 20% more dollars than you did yesterday.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-08-13T04:44:35.914Z · LW(p) · GW(p)

Wow. I never said this. Not even "I kind of said this, and you took it out of context." I just plain never claimed anything about the hedonic value of deliciousness

Except immediately above, in the passage we are both talking about, when you said:

Assume you are indifferent towards buying Chocolate Bar A at $1 per bar. How much would you pay for a chocolate bar that is 3.25186 times as delicious?

Either that was a statement implying that hedons are an invalid concept because it doesn't make sense to talk about being "twice as delicious" without accounting for other factors; or else it had nothing to do with what followed.

Your point still makes the same mistake. You don't have to presently know what twice as many hedons will feel like, or what twice as delicious will taste like. You know that some things are more pleasurable than others. The problem is defined so that Omega can be trusted to double your hedons, or utilons. So stop saying "I can't imagine doubling my hedons" or anything like that. It doesn't matter.

If you meant that you are cognitively incapable of experiencing twice the utility without losing your identity, that may be a valid objection. But AFAIK you're not making that objection.

comment by outlawpoet · 2009-08-11T03:00:30.678Z · LW(p) · GW(p)

I seem to have missed some context for this. I understand that once you've gone down the road of drawing the cards, you have no decision-theoretic reason to stop, but why would I ever draw the first card?

A mere doubling of my current utilons measured against a 10% chance of eliminating all possible future utilons is a sucker's bet. I haven't even hit a third of my expected lifespan given current technology, and my rate of utilon acquisition has been accelerating. Quite aside from the fact that I'm certain my utility function includes terms regarding living a long time, and experiencing certain anticipated future events.

Replies from: DanArmak
comment by DanArmak · 2009-08-11T03:12:16.558Z · LW(p) · GW(p)

If you accept that you're maximizing expected utility, then you should draw the first card, and all future cards. It doesn't matter what terms your utility function includes. The logic for the first step is the same as for any other step.

If you don't accept this, then what precisely do you mean when you talk about your utility function?

Replies from: conchis, conchis, outlawpoet
comment by conchis · 2009-08-13T13:49:30.087Z · LW(p) · GW(p)

The logic for the first step is the same as for any other step.

Actually, on rethinking, this depends entirely on what you mean by "utility". Here's a way of framing the problem such that the logic can change.

Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.

Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars double your value: V(star) = 2c, where c is the value of whatever history is currently slated to play out (so c = q when the deal is first offered, but could be higher than that if you've played and won before). Skulls give you death: V(skull) = d, with d < q.

If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:

Is there a function, f(x), such that, for some values of q and d, we should take a card every time one is offered?

Yes. f(x)=V(x) gives this result for all d<q. This is the standard approach.

Is there a function, f(x), such that, for some values of q and d, we should never take a card?

Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). The card gives expected vNM utility of 0.9ln(2001)~6.8, which is less than ln(1001)~6.9.

Is there a function, f(x), such that, for some values of q and d, we should take some finite number of cards then stop?

Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its expected vNM utility is 0.9ln(3)~1 which is greater than ln(2)~0.7. But at the 10th time you play (assuming you're still alive), c=512, and the expected vNM utility of the offer is now 0.9ln(1025)~6.239, which is less than ln(513)~6.240.

So you take 9 cards, then stop. (You can verify for yourself that the 9th card is still a good bet; a short numerical check follows the footnote below.)

* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let's stick with it for now.
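A small numerical check of this example (a sketch; the code just replays the arithmetic above along the winning branch, and the function names are mine, not conchis's):

```python
import math

def f(v):
    # conchis's example vNM utility: f(x) = ln(V(x) + 1)
    return math.log(v + 1)

def take_card(c, d=0.0):
    # Draw iff the expected vNM utility of drawing beats keeping c for sure.
    return 0.9 * f(2 * c) + 0.1 * f(d) > f(c)

print(take_card(1000.0))   # False: with q = 1000, you never take a card

c, cards_taken = 1.0, 0    # the q = 1 case
while take_card(c):        # follow the winning branch, "assuming you're still alive"
    c *= 2
    cards_taken += 1
print(cards_taken)         # 9: the 9th card is a good bet, the 10th is not
```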

Replies from: DanArmak
comment by DanArmak · 2009-08-13T14:47:11.681Z · LW(p) · GW(p)

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way (f is increasing in V(x)). Of course we can find an f such that a doubling in V translates to adding a constant to f, or if we like, even an infinitesimal increase in f. But all this means is that Omega is offering us the wrong thing, which we don't really value.

Replies from: conchis, conchis
comment by conchis · 2009-08-13T16:59:30.432Z · LW(p) · GW(p)

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

It wasn't intended to help with the the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way

You're assuming the output of V(x) is ordinal. It could be cardinal.

all this means is that Omega is offering us the wrong thing

I'm afraid I don't understand what you mean here. "Wrong" relative to what?

which we don't really value.

Eh? Valutilons were defined to be something we value (ETA: each of us individually, rather than collectively).

comment by conchis · 2009-08-13T15:36:01.481Z · LW(p) · GW(p)

Redefining "utility" like this doesn't help us with the actual problem at hand:

I guess what I'm suggesting, in part, is that the actual problem at hand isn't well-defined, unless you specify what you mean by utility in advance.

what do we do if Omega offers to double the f(x) which we're actually maximizing?

You take cards every time, obviously. But then the result is tautologically true and pretty uninteresting, AFAICT. (The thread beginning here has more on this.) It's also worth noting that there are vNM-rational preferences for which Omega could not possibly make this offer (f(x) bounded above, with the status quo utility already more than half the bound).

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way.

That's only true given a particular assumption about what the output of V(x) means. If I say that V(x) is, say, a cardinally measurable and interpersonally comparable measure of my well-being, then Omega's offer to double means rather more than that.

But all this means is that Omega is offering us the wrong thing,

"Wrong" relative to what? Omega offers whatever Omega offers. We can specify the thought experiment any way we like if it helps us answer questions we are interested in. My point is that you can't learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn't set it up that way.

which we don't really value.

Eh? "Valutilons" are specifically defined to be a measure of what we value.

Replies from: DanArmak
comment by DanArmak · 2009-08-13T16:35:10.873Z · LW(p) · GW(p)

I guess what I'm suggesting, in part, is that the actual problem at hand isn't well-defined, unless you specify what you mean by utility in advance.

Utility means "the function f, whose expectation I am in fact maximizing". The discussion then indeed becomes whether f exists and whether it can be doubled.

My point is that you can't learn anything interesting from the thought experiment if Omega is offering to double f(x), so we shouldn't set it up that way.

That was the original point of the thread where the thought experiment was first discussed, though.

The interesting result is that if you're maximizing something, you may be vulnerable to a failure mode of taking risks that can be considered excessive. This is in view of the original goals you want to achieve, for which maximizing f is a proxy - whether a designed one (in AI) or an evolved strategy (in humans).

"Valutilons" are specifically defined to be a measure of what we value.

If "we" refers to humans, then "what we value" isn't well defined.

Replies from: conchis, conchis
comment by conchis · 2009-08-13T17:23:55.493Z · LW(p) · GW(p)

Utility means "the function f, whose expectation I am in fact maximizing".

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.

Replies from: DanArmak
comment by DanArmak · 2009-08-13T19:03:27.357Z · LW(p) · GW(p)

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

Yes, that was the point :-) On my reading of OP, this is the meaning of utility that was intended.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.

Yes. Here's my current take:

The OP argument demonstrates the danger of using a function-maximizer as a proxy for some other goal. If there is always an available gamble that increases f by an amount proportional to its current value (e.g. doubles it), then the maximizer will fall into the trap of taking ever-increasing risks for ever-increasing payoffs in the value of f, and will lose with probability approaching 1 within a finite (and short) timespan.

This qualifies as losing if the original goal (the goal of the AI's designer, perhaps) does not itself have this quality. This can be the case when the designer sloppily specifies its goal (chooses f poorly), but perhaps more interesting/vivid examples can be found.
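As a rough numerical illustration of that failure mode (my own arithmetic, assuming the 90/10 deck from the thought experiment and treating each draw as independent): the chance of surviving n consecutive draws is 0.9^n, which collapses quickly.

```python
# Probability of still being alive after n draws from a 90%-star / 10%-skull deck
for n in (1, 10, 50, 100):
    print(n, 0.9 ** n)
# 1 -> 0.9, 10 -> ~0.349, 50 -> ~0.0052, 100 -> ~0.0000266: ruin with probability approaching 1
```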

Replies from: conchis
comment by conchis · 2009-08-13T19:35:48.672Z · LW(p) · GW(p)

To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).

You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g) that embodies your risk-preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].

If g(x) is only ordinal, this won't be especially helpful, but if you had a reasonable way of establishing an origin and scale it would seem potentially useful. Note also that f could be unbounded even if g were bounded, and vice versa. In theory, that seems to suggest that taking ever-increasing risks to achieve a bounded goal could be rational, if one were sufficiently risk-loving (though it does seem unlikely that anyone would really be that "crazy"). Also, one could avoid ever taking such risks, even in the pursuit of an unbounded goal, if one were sufficiently risk-averse that one's f function were bounded.
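To make the separation concrete, here is a toy sketch (Python; the square-root/square shapes and the 50/50 gamble are my own choices, not anything specified in the comment) showing how the shape of f alone determines the attitude to a fair gamble over goal attainment g:

```python
import math

def prefers_gamble(f, g_safe=100.0, g_hi=150.0, g_lo=50.0, p=0.5):
    """Does an E[f(g)]-maximizer take a fair gamble over goal attainment g?"""
    return p * f(g_hi) + (1 - p) * f(g_lo) > f(g_safe)

concave = lambda g: math.sqrt(g)   # risk-averse over goal attainment
linear  = lambda g: g              # risk-neutral
convex  = lambda g: g ** 2         # risk-loving

print(prefers_gamble(concave))  # False: declines the fair gamble
print(prefers_gamble(linear))   # False: exactly indifferent, so not strictly preferred
print(prefers_gamble(convex))   # True: takes the fair gamble
```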

P.S.

On my reading of OP, this is the meaning of utility that was intended.

You're probably right.

comment by conchis · 2009-08-13T17:04:06.082Z · LW(p) · GW(p)

Crap. Sorry about the delete. :(

comment by conchis · 2009-08-11T14:23:13.514Z · LW(p) · GW(p)

If you accept that you're maximizing expected utility, then you should draw the first card, and all future cards. It doesn't matter what terms your utility function includes.

Note however, that there is no particular reason that one needs to maximise expected utilons.

The standard axioms for choice under uncertainty imply only that consistent choices over gambles can be represented as maximizing the expectation of some function that maps world histories into the reals. This function is conventionally called a utility function. However, if (as here) you already have another function that maps world histories into the reals, and happen to have called this a utility function as well, this does not imply that your two utility functions (which you've derived in completely different ways and for completely different purposes) need to be the same function. In general (and as I've tried, with varying degrees of success, to point out elsewhere), the utility function describing your choices over gambles can be any positive monotonic transform of the latter, and you will still comply with the Savage-vNM-Marschak axioms.

All of which is to say that you don't actually have to draw the first card if you are sufficiently risk averse over utilons (at least as I understand Psychohistorian to have defined the term).
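Here is a small sketch (Python; the particular functions and numbers are mine, chosen only for illustration) of that point: a positive monotonic transform leaves the ranking of sure outcomes untouched, yet changes the choice over the card gamble:

```python
import math

# Two candidate vNM utility functions over "utilons" u; ln(u+1) is a positive
# monotonic transform of u, so both rank sure outcomes identically...
u_linear = lambda u: u
u_concave = lambda u: math.log(u + 1)

outcomes = [1, 10, 100, 1000]
assert sorted(outcomes, key=u_linear) == sorted(outcomes, key=u_concave)

# ...but they disagree about the 90%-double / 10%-death gamble at u = 1000:
u_now, u_dead = 1000, 0
for f in (u_linear, u_concave):
    print(0.9 * f(2 * u_now) + 0.1 * f(u_dead) > f(u_now))
# True for the linear function (draw the card), False for the concave one (decline)
```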

Replies from: DanArmak
comment by DanArmak · 2009-08-11T14:49:35.360Z · LW(p) · GW(p)

Thanks! You're the first person who's started to explain to me what "utilons" are actually supposed to be under a rigorous definition and incidentally why people sometimes seem to be using slightly different definitions in these discussions.

consistent choices over gambles

How is consistency defined here?

Replies from: Vladimir_Nesov, conchis
comment by Vladimir_Nesov · 2009-08-11T15:01:36.660Z · LW(p) · GW(p)

You can learn more from e.g. the following lecture notes:

B. L. Slantchev (2008). `Game Theory: Preferences and Expected Utility'. (PDF)

comment by conchis · 2009-08-11T15:58:04.737Z · LW(p) · GW(p)

How is consistency defined here?

Briefly, as requiring completeness, transitivity, continuity, and (more controversially) independence. Vladimir's link looks good, so check that for the details.

Replies from: DanArmak
comment by DanArmak · 2009-08-11T16:03:05.168Z · LW(p) · GW(p)

I will when I have time tomorrow, thanks.

comment by outlawpoet · 2009-08-11T03:39:20.915Z · LW(p) · GW(p)

I see - I misparsed the terms of the argument. I thought it was doubling my current utilons; you're positing that I have a 90% chance of doubling my currently expected utility over my entire life.

The reason I bring up the terms in my utility function is that they reference concrete objects, people, time passing, and so on. So, measuring expected utility, for me, involves projecting the course of the world, and my place in it.

So, assuming I follow the suggested course of action, and keep drawing cards until I die, to fulfill the terms, Omega must either give me all the utilons before I die, or somehow compress the things I value into something that can be achieved in between drawing cards as fast as I can. This either involves massive changes to reality, which I can verify instantly, or some sort of orthogonal life I get to lead while simultaneously drawing cards, so I guess that's fine.

Otherwise, given the certainty that I will die essentially immediately, I don't recognize that I'm getting a 90% chance of doubled expected utility, since my expectations certainly include whether or not I will draw a card.

Replies from: Douglas_Knight, DanArmak
comment by Douglas_Knight · 2009-08-11T05:41:44.770Z · LW(p) · GW(p)

current utilons

I don't think "current utilons" makes that much sense. Utilons should be for a utility function, which is equivalent to a decision function, and the purpose of decisions is probably to influence the future. So utility has to be about the whole future course of the world. "Currently expected utilons" means what you expect to happen, averaged over your uncertainty and actual randomness, and this is what the dilemma should be about.

"Current hedons" certainly does make sense, at least because hedons haven't been specified as well.

comment by DanArmak · 2009-08-11T09:32:43.579Z · LW(p) · GW(p)

Like Douglas_Knight, I don't think current utilons are a useful unit.

Suppose your utility function behaves as you describe. If you play once (and win, with 90% probability), Omega will modify the universe in such a way that all the concrete things you derive utility from will bring you twice as much utility, over the course of the infinite future. You'll live out your life with twice as much of all the things you value. So it makes sense to play this once, by the terms of your utility function.

You don't know, when you play your first game, whether or not you'll ever play again; your future includes both options. You can decide, for yourself, that you'll play once but never again. It's a free decision both now and later.

And now a second has passed and Omega is offering a second game. You remember your decision. But what place do decisions have in a utility function? You're free to choose to play again if you wish, and the logic for playing is the same as the first time around...

Now, you could bind yourself to your promise (after the first game). Maybe you have a way to hardwire your own decision procedure to force something like this. But how do you decide (in advance) after how many games to stop? Why one and not, say, ten?

OTOH, if you decide not to play at all - would you really forgo a one-time 90% chance of doubling your lifelong future utility? How about a 99.999% chance? The probability of death in any one round of the game can be made as small as you like, as long as it's finite and fixed for all future rounds. Is there no probability at which you'd take the risk for one round?

Replies from: outlawpoet, conchis
comment by outlawpoet · 2009-08-11T18:58:41.609Z · LW(p) · GW(p)

Why on earth wouldn't I consider whether or not I would play again? Am I barred from doing so?

If I know that the card game will continue to be available, and that Omega can truly double my expected utility with every draw, then either the increase in expected utility over the next few minutes it takes me to die is relatively insignificant compared to my expected utility over the decades I conservatively have left, in which case it's a foolish bet, or Omega can somehow change the whole world in the radical fashion needed for my expected utility over those few minutes to dwarf my expected utility right now.

This paradox seems to depend on the idea that the card game is somehow excepted from the 90% likely doubling of expected utility. As I mentioned before, my expected utility certainly includes the decisions I'm likely to make, and it's easy to see that continuing to draw cards will result in my death. So, it depends on what you mean. If it's just doubling expected utility over my expected life IF I don't die in the card game, then it's a foolish decision to draw the first or any number of cards. If it's doubling expected utility in all cases, then I draw cards until I die, happily forcing Omega to make verifiable changes to the universe and myself.

Now, there are terms on which I would take the one round, IF it's the version of the gamble where you don't die in the card game, but it would probably depend on how it's implemented. I don't have a way of accessing my utility function directly, and my ability to appreciate maximizing it is indirect at best. So I would be very concerned about the way Omega plans to double my expected utility, and how I'm meant to experience it.

In practice, of course, any possible doubt that it's not Omega giving you this gamble far outweighs any possibility of such lofty returns, but the thought experiment has some interesting complexities.

comment by conchis · 2009-08-13T13:46:33.557Z · LW(p) · GW(p)

You're free to choose to play again if you wish, and the logic for playing is the same as the first time around

This, again, depends on what you mean by "utility". Here's a way of framing the problem such that the logic can change.

Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.

Then Omega turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars double your valutilons: V(star)=2c, where c is the value of whatever history is currently slated to play out (so c=q when the deal is first offered, but could be higher than that if you've played and won before). Skulls give you death: V(skull)=d, where d < q.

If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:

Is there a function, f(x), such that, for some values of q and d, we should take cards every time this bet is offered?

Yes. f(x)=V(x) gives this result for all d<q.

Is there a function, f(x), such that, for some values of q and d, we should never take the bet?

Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). Taking the bet gives expected vNM utility of 0.9ln(2001)~6.8, which is less than the sure ln(1001)~6.9 from declining.

Is there a function, f(x), such that, for some values of q and d, we should take cards for some finite number of offers, and then stop?

Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its expected vNM utility is 0.9ln(3)~1.0, which is greater than ln(2)~0.7. But by the time the 10th offer comes around (assuming you're still alive), c=2^9=512, and the expected vNM utility of taking another card is 0.9ln(1025)~6.239, which is less than ln(513)~6.240. So you take the first 9 cards, then stop.

* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let's stick with this for now.

comment by SforSingularity · 2009-08-11T13:39:04.749Z · LW(p) · GW(p)

But if your hedons were measured properly, your inability to imagine them now is not an argument. This is Omega we're talking about. Perhaps it will augment your mind to help you reach each doubling. Whatever. It's stipulated in the problem that Omega will double whatever the proper metric is. Futurists should never accept "but I can't imagine that" as an argument.

In ethical and axiological matters, it is an argument.

If Omega alters your mind so that you can experience "doubled utility", and you choose not to identify with the resultant creature, then Omega has killed you.

Replies from: PhilGoetz, UnholySmoke
comment by PhilGoetz · 2009-08-12T17:30:31.195Z · LW(p) · GW(p)

I can't imagine any situation in which "I can't imagine that" is an acceptable argument. QED.

comment by UnholySmoke · 2009-08-11T22:01:50.903Z · LW(p) · GW(p)

And thus, the alcoholic who wishes to sober up, but is unable, dies with every slug of cheap cider!

It's not an argument at all. Otherwise the concept of utilons as a currency with any...currency, is nonsense.

Replies from: SforSingularity
comment by SforSingularity · 2009-08-11T22:03:35.081Z · LW(p) · GW(p)

And thus, the alcoholic who wishes to sober up, but is unable, dies with every slug of cheap cider!

I don't understand. Can you make this point clearer?

Replies from: UnholySmoke
comment by UnholySmoke · 2009-08-11T22:10:24.292Z · LW(p) · GW(p)

Somewhat off-topic, but: Many people do many things that they have previously wished not to do, through coercion or otherwise. And when asked 'are you still you' most would probably answer in the affirmative.

If Omega doubled your fun-points and asked you if you were still you, you would say yes. Why would you-now be right and you-altered be wrong?

The concept of a currency of utility is very counterintuitive. It's not how we feel utility. However, if we're to shut up and calculate (which we probably should) then 'I can't imagine twice the utility' isn't a smart response.

Replies from: SforSingularity
comment by SforSingularity · 2009-08-11T22:30:17.396Z · LW(p) · GW(p)

If Omega doubled your fun-points and asked you if you were still you, you would say yes. Why would you-now be right and you-altered be wrong?

I don't know. But I do know for sure that if Omega doubled them 60 times, the resultant being wouldn't be me.

Replies from: UnholySmoke
comment by UnholySmoke · 2009-08-12T10:30:19.260Z · LW(p) · GW(p)

At which doubling would you cease being you? Or would it be an incremental process? What function links 'number of doublings' to 'degree of me-ness'?

I don't think we're going anywhere useful with this. But I do know that if you get too tight on continuous personal identity and what that means, you start coming up with all sorts of paradoxes.

Replies from: SforSingularity
comment by SforSingularity · 2009-08-15T14:50:29.581Z · LW(p) · GW(p)

But that doesn't mean that we should just give up on personal identity. The utility function is not up for grabs, as they say: if I consider it integral to my utility function that I don't get significantly altered, then no amount of logical argument ought to persuade me otherwise.

comment by SforSingularity · 2009-08-11T13:20:41.639Z · LW(p) · GW(p)

P=~1-1.88*10^165

I think you need a minus sign in there

Replies from: Cyan
comment by Cyan · 2009-08-11T14:08:34.388Z · LW(p) · GW(p)

It's there -- it's the fifth character.

Replies from: SforSingularity
comment by SforSingularity · 2009-08-11T15:26:22.517Z · LW(p) · GW(p)

I was thinking of putting another one in, to change

10^165

into

10^-165

Replies from: Cyan
comment by Cyan · 2009-08-11T16:39:18.232Z · LW(p) · GW(p)

Right you are.

comment by [deleted] · 2009-08-10T23:58:38.805Z · LW(p) · GW(p)

"Lots of people who want to will get really, really high" is only very rarely touted as a major argument.

In public policy discussions, that's true. In private conversations with individuals, I've heard that reason more than any other.

comment by conchis · 2009-08-10T22:11:36.824Z · LW(p) · GW(p)

Depending on your purpose, I think it's probably useful to distinguish between self-regarding and other-regarding utilons as well. A consequentialist moral theory may want to maximise the (weighted) sum of (some transform of) self-regarding utilons, but to exclude other-regarding utilons from the maximand (to avoid "double-counting").

The other interesting question is: what does it actually mean to "value" something?

comment by Nanani · 2009-08-12T01:26:47.243Z · LW(p) · GW(p)

In what way are hedons anything other than a subset of utilons? Please clarify.

Increasing happiness is a part of human utility; it just isn't all of it. This post doesn't really make sense because it is arguing superset vs. subset.

Replies from: conchis, Nick_Tarleton, Vladimir_Nesov
comment by conchis · 2009-08-12T01:48:11.504Z · LW(p) · GW(p)

Hedons won't be a subset of utilons if we happen not to value all hedons. One might not value hedons that arise out of false beliefs, for example. (From memory, I think L. W. Sumner is a proponent of a view something like this.)

NB: Even if hedons were simply a subset of utilons, I don't quite see how that would mean that this post "doesn't really make sense".

Replies from: Nanani
comment by Nanani · 2009-08-13T00:34:43.250Z · LW(p) · GW(p)

Ah, I see! Thank you, that helps.

RE:NB Reading hedons as a subset of utilons, phrases like "maximize our hedons at the expense of our utilons" didn't make sense to me.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-08-13T01:33:06.396Z · LW(p) · GW(p)

Reading hedons as a subset of utilons, phrases like "maximize our hedons at the expense of our utilons" didn't make sense to me.

The x that maximizes f(x) might not maximize f(x)+g(x).

comment by Nick_Tarleton · 2009-08-12T01:50:12.863Z · LW(p) · GW(p)

In what way are hedons anything other than a subset of utilons?

One need not care about all hedons (or any), or care about them linearly.

comment by Vladimir_Nesov · 2009-08-12T14:11:33.946Z · LW(p) · GW(p)

What sets? What subsets? You can't throw around concepts like this without clarification and expect them to make sense.

comment by timtyler · 2009-08-11T20:55:53.258Z · LW(p) · GW(p)

Re: I'm going to use "utilons" to refer to value utility units and "hedons" to refer to experiential utility units.

This seems contrary to the usage of the LessWrong Wiki:

http://wiki.lesswrong.com/wiki/Utilon

http://wiki.lesswrong.com/wiki/Hedon

The Wiki has the better usage - much better usage.

Replies from: conchis, PhilGoetz
comment by conchis · 2009-08-13T10:09:51.385Z · LW(p) · GW(p)

To avoid confusion, I think I'm going to refer to Psychohistorian's utilons as valutilons from now on.

comment by PhilGoetz · 2009-08-12T17:53:49.246Z · LW(p) · GW(p)

Then what's the difference between "pleasure unit" and "experiential utility unit"?

Replies from: conchis, Psychohistorian, timtyler
comment by conchis · 2009-08-13T10:04:34.847Z · LW(p) · GW(p)

We can experience things other than pleasure.

comment by Psychohistorian · 2009-08-12T18:00:57.717Z · LW(p) · GW(p)

Yeah, I'm pretty sure my usage is entirely consistent with the wiki usage, if not basically identical.

Replies from: conchis, timtyler
comment by conchis · 2009-08-13T11:06:21.335Z · LW(p) · GW(p)

Interesting, I'd assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.

The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.

Your definition focuses on "values": utilons are a measure of the extent to which a given world history measures up according to your values.

These are not necessarily inconsistent, but I'd assumed (perhaps wrongly) that they differed in two respects.

  1. Preferences are simply a binary relation, which does not allow degrees of intensity. (I can rank A>B, but I can't say that I prefer A twice as much as B.) In contrast, the extent to which a world measures up to our values seems capable of degrees. (It could make sense for me to say that I value A twice as much as I value B.)
  2. The preferences in question are over gambles over world histories, whereas I assumed that the values in question were over world histories directly.

I've started calling what-I-thought-you-meant "valutilons", to avoid confusion between that concept and the definition of utilons that seems more common here (and which is reflected in the wiki). We'll see how that goes.

comment by timtyler · 2009-08-13T10:03:22.731Z · LW(p) · GW(p)

Wiki says: hedons are "Utilons generated by fulfilling base desires".

Article says: hedons are "experiential utility units". Seems different to me.

comment by timtyler · 2009-08-13T10:00:57.286Z · LW(p) · GW(p)

If you are still talking about Hedons and Utilons - and if we go by the wiki, then no difference - since Hedons are a subset of Utilons, and are therefore measured in the same units.

Replies from: conchis
comment by conchis · 2009-08-13T10:07:31.689Z · LW(p) · GW(p)

since Hedons are a subset of Utilons

Not true. Even according to the wiki's usage.

Replies from: timtyler
comment by timtyler · 2009-08-13T20:10:56.530Z · LW(p) · GW(p)

What the Wiki says is: "Utilons generated by fulfilling base desires are hedons". I think it follows from that that Utilons and Hedons have the same units.

I don't much like the Wiki on these issues - but I do think it a better take on the definitions than this post.

Replies from: conchis
comment by conchis · 2009-08-13T20:25:19.055Z · LW(p) · GW(p)

I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)

As it happens, I'm also happy to object to the claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to affine transforms of the utilons.)

Replies from: timtyler
comment by timtyler · 2009-08-14T17:38:17.006Z · LW(p) · GW(p)

My original claim was about what the Wiki says. Outside that context we would have to start by stating definitions of Hedons and Utilons before there could be much in the way of sensible conversation.

comment by snarles · 2009-08-11T03:07:20.729Z · LW(p) · GW(p)

I'm not convinced by your examples that people generally value utilons over hedons.

For your first example, you feel like you (and others, by generalization) would reject Omega's deal, but how much can you trust this self-prediction? Especially given that this situation will never occur, you don't have much incentive to predict correctly if the answer isn't flattering.

For the drug use example, I can think of many other possible reasons that people would oppose drugs other than valuing utilons over hedons. Society might be split into two groups: drug-lovers and non-drug-lovers. If non-drug-lovers have more power, then the individually-maximizing non-drug-lovers will make sure that drugs are illegal, even if the net hedonic benefit of legalizing drugs is positive.

Replies from: Psychohistorian, DanArmak
comment by Psychohistorian · 2009-08-11T16:41:27.016Z · LW(p) · GW(p)

I can think of many other possible reasons that people would oppose drugs other than valuing utilons over hedons. Society might be split into two groups: drug-lovers and non-drug-lovers.

That's why my argument focuses on arguments surrounding legalization rather than on the law itself. There are many potential reasons why drugs remain illegal, from your argument to well-intentioned utilitarianism to big pharma. However, when you look at arguments for legalization, you seldom hear a public figure say, "But people really like getting high!" Similarly, if you're hearing an argument for, say, abstinence-only sex ed, you never hear someone say, "But teenagers really like having sex!" Even with more "neutral" topics like a junk food tax, arguments like "I don't want the government telling me what to eat" seem far more common than "But some people really like deep fried lard!" Though I am less sure of that example, and it is certainly less consistent than the other two. In general, though, you don't see people arguing that hedons should be a meaningful factor in any policy, and I think this strongly indicates that our society does not assign a high value to the attainment of hedons in the way it assigns value to, say, being thin or being wealthy.

Replies from: Christian_Szegedy, teageegeepea
comment by Christian_Szegedy · 2009-08-12T17:48:28.184Z · LW(p) · GW(p)

" Even with more "neutral" topics like a junk food tax, arguments like "I don't want the government telling me what to eat" seem far more common than "But some people really like deep fried lard!"

I think this is mostly rationalization:

In a practical sense, we have a very strong drive toward pleasure and enjoyment, but our Judeo-Christian tradition (like most other religions as well, but let's keep it simple) makes a sport of downplaying pleasure as a factor in human happiness, even making it into something dirty, or at least suspicious.

Fortunately, when the Enlightenment came, it did not reestablish pleasure as a desirable goal, but it opened a great back door for rationalization: the very concept of freedom. The long ascetic tradition, going back several thousand years, put up a strong barrier against publicly admitting this significant part of our driving force, so freedom was promoted instead. Of course, "freedom" is a very fuzzy word. It can refer to several more or less disconnected concepts: independence from foreign power, free practice of religion, personal liberties, etc.

Still "Freedom" is also a wildcard for saying: "Don't mess with my hedons!".

Of course, I won't admit that I am a softie who cares about all that nice, convenient, or exciting stuff - but don't you dare dispute my freedom to do whatever I want! (Unless it harms someone else.)

So the concept of freedom is an ideal invention for our (in any case irrational and hypocritical) society: it allows public discussion to covertly recognize the value of individual pleasures by appealing to an established, noble, abstract concept that fortunately made it into the small set of keywords commanding immediate respect and unquestioned reverence.

comment by teageegeepea · 2009-08-12T05:41:36.364Z · LW(p) · GW(p)

I know I've read a number of economists doing utilitarian analyses of drug legalization that take into account the enjoyment people get from drugs. Jacob Sullum's "Saying Yes" is basically a defense of drug use.

I argue in favor of keeping your damn dirty hands off my fatty food on the basis of my enjoyment of it. I also enjoy rock'n'roll, but don't care much about sex'n'drugs (though I think those should be legal too).

Replies from: UnholySmoke
comment by UnholySmoke · 2009-08-12T10:35:35.522Z · LW(p) · GW(p)

How can you enjoy one without the others?

comment by DanArmak · 2009-08-11T03:20:53.579Z · LW(p) · GW(p)

For your first example, you claim that you would reject Omega's deal, but this could be for signalling purposes only. If the situation really occurred, who knows whether you would accept?

This is a good objection. I can see another reason why this is a poor example.

Our morals evolved in a society that (to begin with) has no Omegas. If you have an opportunity to hurt a lot of people and profit from it, it's a very safe bet that someone will find out one day that you did it, and you will be punished proportionally. So our instincts (morals, whatever) tell us very strongly not to do this. The proposed secrecy is an added hint (to our subconscious thinking) that this action is not accepted by society, so it's very dangerous.

Rejecting the proposal is unnecessary, excessive caution. If people were more rational, and more serious about maximizing hedons (rather than, say, concentrating on minimizing risk once a suitable lifelong level of hedons has been reached), then more people would accept Omega's proposal!

comment by Jonathan_Graehl · 2009-08-10T20:48:07.212Z · LW(p) · GW(p)

"dead in an hour with P=~1-1.88*10^165" should probably have 10^(-165) so that P is just less than 1.

comment by PhilGoetz · 2009-08-11T04:36:39.121Z · LW(p) · GW(p)

Why doesn't this post show up under "new" anymore?

[And what possible reason did someone have for down-voting that question?]

Replies from: DanArmak
comment by DanArmak · 2009-08-11T09:59:56.072Z · LW(p) · GW(p)

It shows up for me...

If you downvoted the post, it wouldn't show up for you, depending on your account settings.