Comments

Comment by conchis on Pinpointing Utility · 2013-02-05T22:01:31.052Z · score: 0 (0 votes) · LW · GW

There is no value to a superconcept that crosses that boundary.

This doesn't seem to me to argue in favour of using wording that's associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn't belong there.

Two additional things, FWIW:

(1) There's a lot of existing literature that distinguishes between "decision utility" and "experienced utility" (where "decision utility" corresponds to preference representation) so there is an existing terminology already out there. (Although "experienced utility" doesn't necessarily have anything to do with preference or welfare aggregation either.)

(2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful to moral philosophy), so to the extent that your firewall intends to cut that off, I think it's problematic. (Not sure that's what you intend - but it's one interpretation of your words in this comment.) Even Harsanyi's argument, while flawed, is interesting in this regard (it's much more sophisticated than Phil's post, so I'd recommend checking it out if you haven't already).

Comment by conchis on Pinpointing Utility · 2013-02-04T00:22:09.543Z · score: 1 (1 votes) · LW · GW

I'm hesitant to get into a terminology argument when we're in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)

Yes, it's annoying when people use the word 'fruit' to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I'd suggest that it's not the most useful response to this problem to insist on using the word 'fruit' to refer exclusively to apples, and to proceed to make claims like 'fruit can't be orange coloured' that are false for some types of fruit. (Even more so when people have been using the word 'fruit' to refer to oranges for longer than they've been using it to refer to apples.) Aren't you just making it more difficult for people to get your point that apples and oranges are different?

On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you're really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it's easier to instead phrase your claims as being about apples and oranges directly when they're intended to apply to only one type of fruit?

P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility.

Comment by conchis on Pinpointing Utility · 2013-02-03T12:24:48.171Z · score: -1 (1 votes) · LW · GW

While I'm in broad agreement with you here, I'd nitpick on a few things.

Different utility functions are not commensurable.

Agree that decision-theoretic or VNM utility functions are not commensurable - they're merely mathematical representations of different individuals' preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow's impossibility theorem).

Translate the axioms into statements about people. Do they still seem reasonable?

I'm actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it's Axiom 4 that seems most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.)

That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn't have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen's main point).

Comment by conchis on Pinpointing Utility · 2013-02-03T06:55:31.661Z · score: 0 (0 votes) · LW · GW

Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we've discussed ad nauseam before)? In response to an argument of Harsanyi's that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.

If not, some useful references here.

ETA: I worry that I've unduly maligned Harsanyi by associating his argument too heavily with Phil's post. Although I still think it's wrong, Harsanyi's argument is rather more sophisticated than Phil's, and worth checking out if you're at all interested in this area.

Comment by conchis on Harry Potter and the Methods of Rationality discussion thread, part 18, chapter 87 · 2012-12-28T07:01:19.224Z · score: 0 (0 votes) · LW · GW

It wouldn't necessarily reflect badly on her: if someone has to die to take down Azkaban,* and Harry needs to survive to achieve other important goals, then Hermione taking it down seems like a non-foolish solution to me.

*This is hinted at as being at least a strong possibility.

Comment by conchis on Offense versus harm minimization · 2011-04-18T23:04:43.849Z · score: 5 (5 votes) · LW · GW

Although I agree it's odd, it does in fact seem that people transfer/infer gender information from grammatical gender.

From Lera Boroditsky's Edge piece:

Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key" — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like "hard," "heavy," "jagged," "metal," "serrated," and "useful," whereas Spanish speakers were more likely to say "golden," "intricate," "little," "lovely," "shiny," and "tiny." To describe a "bridge," which is feminine in German and masculine in Spanish, the German speakers said "beautiful," "elegant," "fragile," "peaceful," "pretty," and "slender," and the Spanish speakers said "big," "dangerous," "long," "strong," "sturdy," and "towering." This was true even though all testing was done in English, a language without grammatical gender. The same pattern of results also emerged in entirely nonlinguistic tasks (e.g., rating similarity between pictures). And we can also show that it is aspects of language per se that shape how people think: teaching English speakers new grammatical gender systems influences mental representations of objects in the same way it does with German and Spanish speakers.

Comment by conchis on Offense versus harm minimization · 2011-04-18T22:58:31.439Z · score: 2 (2 votes) · LW · GW

My understanding of the relevant research* is that it's a fairly consistent finding that (a) masculine generics do cause people to imagine men rather than women, and (b) this can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reduced rates of female job applications. (Some of these negative effects have also been established for men exposed to feminine generics, which favours using they/them/their rather than she/her as replacements.)

* There's an overview of some of this here (from p.26).

Comment by conchis on Med Patient Social Networks Are Better Scientific Institutions · 2010-02-23T18:33:01.559Z · score: 0 (0 votes) · LW · GW

Isn't the main difference just that they have a bigger sample (e.g. "4x" in the hardcore group)?

Comment by conchis on The Absent-Minded Driver · 2009-09-16T02:47:54.891Z · score: 0 (0 votes) · LW · GW

Isn't the claim in 6 (that there is a planning-optimal choice, but no action-optimal choice) inconsistent with 4 (a choice that is planning optimal is also action optimal)?

Comment by conchis on Decision theory: Why we need to reduce “could”, “would”, “should” · 2009-09-03T22:50:12.369Z · score: 1 (1 votes) · LW · GW

Laying down rules for what counts as evidence that a body is considering alternatives, is mess[y]

Agreed. But I don't think that means that it's not possible to do so, or that there aren't clear cases on either side of the line. My previous formulation probably wasn't as clear as it should have been, but would the distinction seem more tenable to you if I said "possible in principle to observe physical representations of" instead of "possible in principle to physically extract"? I think the former better captures my intended meaning.

If there were a (potentially) observable physical process going on inside the pebble that contained representations of alternative paths available to it, and the utility assigned to them, then I think you could argue that the pebble is a CSA. But we have no evidence of that whatsoever. Those representations might exist in our minds once we decide to model the pebble in that way, but that isn't the same thing at all.

On the other hand, we do seem to have such evidence for e.g. chess-playing computers, and (while claims about what neuroimaging studies have identified are frequently overstated) we also seem to be gathering it for the human brain.

Comment by conchis on Rationality Quotes - September 2009 · 2009-09-03T22:26:17.012Z · score: 5 (5 votes) · LW · GW

FWIW, the exact quote (from pp.13-14 of this article) is:

Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise. [Emphasis in original]

Your paraphrase is snappier though (as well as being less ambiguous; it's hard to tell in the original whether Tukey intends the adjectives "vague" and "precise" to apply to the questions or the answers).

Comment by conchis on Decision theory: Why we need to reduce “could”, “would”, “should” · 2009-09-03T18:25:19.413Z · score: 0 (0 votes) · LW · GW

all of the above assumes a distinction I'm not convinced you've made

If it is possible in principle to physically extract the alternatives/utility assignments etc., wouldn't that be sufficient to ground the CSA--non-CSA distinction, without running afoul of either current technological limitations, or the pebble-as-CSA problem? (Granted, we might not always know whether a given agent is really a CSA or not, but that doesn't seem to obviate the distinction itself.)

Comment by conchis on Open Thread: September 2009 · 2009-09-01T21:55:02.760Z · score: 1 (1 votes) · LW · GW

The Snoep paper Will linked to measured the correlation for the US, Denmark and the Netherlands (and found no significant correlation in the latter two).

The monopolist religion point is of course a good one. It would be interesting to see what the correlation looked like in relatively secular, yet non-monopolistic countries. (Not really sure what countries would qualify though.)

Comment by conchis on Open Thread: September 2009 · 2009-09-01T19:44:42.992Z · score: 2 (2 votes) · LW · GW

We already have some limited evidence that conventionally religious people are happier

But see Will Wilkinson on this too (arguing that this only really holds in the US, and speculating that it's really about "a good individual fit with prevailing cultural values" rather than religion per se).

Comment by conchis on Working Mantras · 2009-08-25T13:47:28.126Z · score: 3 (3 votes) · LW · GW

Thanks for the explanation.

The idea is that when you are listening to music, you are handicapping yourself by taking some of the attention of the aural modality.

I'd heard something similar from a friend who majored in psychology, but they explained it in terms of verbal processing rather than auditory processing more generally, which is why (they said) music without words wasn't as bad.

I'm not sure whether it's related, but I've also been told by a number of musically-trained friends that they can't work with music at all, because they can't help but analyse it as they listen: for them, listening seems to automatically involve processing work that it doesn't (seem to) for me, precisely because I'm not capable of such processing. (This was part of the reason I was originally wondering about individual variation; the point you make at the end is really interesting in this regard too.)

Comment by conchis on Working Mantras · 2009-08-25T13:13:12.893Z · score: 0 (0 votes) · LW · GW

Sometimes, but it varies quite a lot depending on exactly what I'm doing. The only correlation I've noticed between the effect of music and work-type is that the negative effect of lyrics is more pronounced when I'm trying to write.

Of course, it's entirely possible that I'm just not noticing the right things - which is why I'd be interested in references.

Comment by conchis on Working Mantras · 2009-08-25T09:47:50.603Z · score: 1 (1 votes) · LW · GW

If anyone does have studies to hand I'd be grateful for references.* I personally find it difficult to work without music. That may be habit as much as anything else, though I expect part of the benefit is due to shutting out other, more distracting noise. I've noticed negative effects on my productivity on the rare occasions I've listened to music with lyrics, but that's about it.

* I'd be especially grateful for anything that looks at how much individual variation there is in the effect of music.

Comment by conchis on Happiness is a Heuristic · 2009-08-24T15:17:39.477Z · score: 0 (0 votes) · LW · GW

Fair enough. My impression of the SWB literature is that the relationship is robust, both in a purely correlational sense, and in papers like the Frey and Stutzer one where they try to control for confounding factors like personality and selection. The only major catch is how long it takes individuals to adapt after the initial SWB spike.

Indeed, having now managed to track down the paper behind your first link, it seems like this is actually their main point. From their conclusion:

Our results show that (a) selection effects appear to make happy people more likely to get and stay married, and these selection effects are at least partially [emphasis mine] responsible for the widely documented association between marital status and SWB; (b) on average, people adapt quickly and completely to marriage, and they adapt more slowly to widowhood (though even in this case, adaptation is close to complete after about 8 years); (c) there are substantial individual differences in the extent to which people adapt; and (d) the extent to which people adapt is strongly related to the degree to which they react to the initial event—those individuals who reacted strongly were still far from baseline levels years after the event. These last two findings indicate that marital transitions can be related to changes in satisfaction but that these effects may be overlooked if only average trends are examined.

Comment by conchis on Happiness is a Heuristic · 2009-08-24T09:52:24.946Z · score: 0 (0 votes) · LW · GW

FWIW, this seems inconsistent with the evidence presented in the paper linked here, and most of the other work I've seen. The omitted category in most regression analyses is "never married", so I don't really see how this would fly.

Comment by conchis on Happiness is a Heuristic · 2009-08-24T09:49:38.917Z · score: 1 (1 votes) · LW · GW

Sorry for the delay in getting back to you (in fairness, you didn't get back to me either!). A good paper (though not a meta-analysis) on this is:

Stutzer and Frey (2006) Does Marriage Make People Happy or Do Happy People Get Married? Journal of Socio-Economics 35:326-347. links

The lit review surveys some of the other evidence.

I a priori doubt all the happiness research as based on silly questionnaires and naive statistics

I'm a little puzzled by this comment given that the first link you provided looks (on its face) to be based on exactly this sort of evidence. But in any event, many of the studies mentioned in the Stutzer and Frey paper look at health and other outcomes as well.

Comment by conchis on Experiential Pica · 2009-08-18T12:13:10.887Z · score: 2 (2 votes) · LW · GW

this post infers possible causation based upon a sample size of 1

Eh? Pica is a known disorder. The sample size for the causation claim is clearly more than 1.

[ETA: In case anyone's wondering why this comment no longer makes any sense, it's because most of the original parent was removed after I made it, and replaced with the current second para.]

Comment by conchis on Friendlier AI through politics · 2009-08-18T09:22:30.665Z · score: 5 (5 votes) · LW · GW

I for one comment far more on Phil's posts when I think they're completely misguided than I do otherwise. Not sure what that says about me, but if others did likewise, we would predict precisely the relationship Phil is observing.

Comment by conchis on Happiness is a Heuristic · 2009-08-17T02:40:05.965Z · score: 0 (0 votes) · LW · GW

Interesting. All the other evidence I've seen suggest that committed relationships do make people happier, so I'd be interested to see how these apparently conflicting findings can be resolved.

Part of the difference could just be the focus on marriage vs. stable relationships more generally (whether married or not): I'm not sure there's much reason to think that a marriage certificate is going to make a big difference in and of itself (or that anyone's really claiming that it would). In fact, there's some, albeit limited, evidence that unmarried couples are happier on average than married ones.

I'll try to dig up references when I have a bit more time. Don't suppose you happen to have one for the actual research behind your first link?

Comment by conchis on Calibration fail · 2009-08-16T16:22:20.941Z · score: 0 (0 votes) · LW · GW

Me too. It gets especially embarrassing when you end up telling someone a story about a conversation they themselves were involved in.

Comment by conchis on Deleting paradoxes with fuzzy logic · 2009-08-13T22:52:59.496Z · score: 2 (2 votes) · LW · GW

Warning, nitpicks follow:

The sentence "All good sentences must at least one verb." has at least one verb. (It's an auxiliary verb, but it's still a verb. Obviously this doesn't make it good; but it does detract from the point somewhat.)

"2+2=5" is false, but it's not nonsense.

Comment by conchis on Utilons vs. Hedons · 2009-08-13T20:25:19.055Z · score: 1 (1 votes) · LW · GW

I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)

As it happens, I'm also happy to object to the claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to affine transforms of the utilons.)

Comment by conchis on Utilons vs. Hedons · 2009-08-13T19:35:48.672Z · score: 0 (0 votes) · LW · GW

To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).

You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g), that embodies your risk preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].

If g(x) is only ordinal, this won't be especially helpful, but if you had a reasonable way of establishing an origin and scale it would seem potentially useful. Note also that f could be unbounded even if g were bounded, and vice-versa. In theory, that seems to suggest that taking ever increasing risks to achieve a bounded goal could be rational, if one were sufficiently risk-loving (though it does seem unlikely that anyone would really be that "crazy"). Also, one could avoid ever taking such risks, even in the pursuit of an unbounded goal, if one were sufficiently risk-averse that one's f function were bounded.
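To make the separation concrete, here's a minimal sketch (Python; the goal values, the gamble, and the particular functions are entirely made-up illustrations, not a proposal): the same goal function g is paired with two different risk attitudes f, and only the risk-averse agent turns down the gamble.

```python
# Toy sketch: separate the goal function g from the risk attitude f,
# then maximise E[f(g(x))]. All names and numbers are hypothetical.

import math

def g(x):
    """Degree to which goals are met in world history x (here x is just a number)."""
    return x

def f_neutral(v):   # linear: risk-neutral over goal attainment
    return v

def f_averse(v):    # concave: risk-averse over goal attainment
    return math.log(v + 1)

# A gamble: 50% chance of a history with g=100, 50% chance of g=0,
# versus a sure thing with g=40.
gamble = [(0.5, 100), (0.5, 0)]
sure_thing = [(1.0, 40)]

def expected_f_of_g(f, lottery):
    return sum(p * f(g(x)) for p, x in lottery)

for name, f in [("risk-neutral", f_neutral), ("risk-averse", f_averse)]:
    print(name,
          "gamble:", round(expected_f_of_g(f, gamble), 2),
          "sure thing:", round(expected_f_of_g(f, sure_thing), 2))
# risk-neutral: gamble 50.0 > sure thing 40.0 -> takes the gamble
# risk-averse:  gamble 0.5*ln(101) ~ 2.31 < ln(41) ~ 3.71 -> takes the sure thing
```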

P.S.

On my reading of OP, this is the meaning of utility that was intended.

You're probably right.

Comment by conchis on Utilons vs. Hedons · 2009-08-13T17:23:55.493Z · score: 1 (1 votes) · LW · GW

Utility means "the function f, whose expectation I am in fact maximizing".

There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)

The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.

Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.

Comment by conchis on Utilons vs. Hedons · 2009-08-13T17:04:06.082Z · score: 0 (0 votes) · LW · GW

Crap. Sorry about the delete. :(

Comment by conchis on Utilons vs. Hedons · 2009-08-13T16:59:30.432Z · score: 1 (1 votes) · LW · GW

Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?

It wasn't intended to help with the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.

In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way

You're assuming the output of V(x) is ordinal. It could be cardinal.

all this means is that Omega is offering us the wrong thing

I'm afraid I don't understand what you mean here. "Wrong" relative to what?

which we don't really value.

Eh? Valutilons were defined to be something we value (ETA: each of us individually, rather than collectively).

Comment by conchis on Utilons vs. Hedons · 2009-08-13T13:49:30.087Z · score: 2 (2 votes) · LW · GW

The logic for the first step is the same as for any other step.

Actually, on rethinking, this depends entirely on what you mean by "utility". Here's a way of framing the problem such that the logic can change.

Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.

Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars double your value: V(star) = 2c, where c is the value of whatever history is currently slated to play out (so c = q when the deal is first offered, but could be higher than that if you've played and won before). Skulls give you death: V(skull) = d, and d < q.

If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:

Is there a function, f(x), such that, for some values of q and d, we should take a card every time one is offered?

Yes. f(x)=V(x) gives this result for all d<q. This is the standard approach.

Is there a function, f(x), such that, for some values of q and d, we should never take a card?

Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). The card gives expected vNM utility of 0.9·ln(2001) ≈ 6.8, which is less than ln(1001) ≈ 6.9.

Is there a function, f(x), such that, for some values of q and d, we should take some finite number of cards then stop?

Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its expected vNM utility is 0.9·ln(3) ≈ 1.0, which is greater than ln(2) ≈ 0.7. But at the 10th time you play (assuming you're still alive), c=512, and the expected vNM utility of the offer is now 0.9·ln(1025) ≈ 6.239, which is less than ln(513) ≈ 6.240.

So you take 9 cards, then stop. (You can verify for yourself, that the 9th card is still a good bet.)

* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let's stick with it for now.
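If it helps, here's a minimal sketch (Python, purely illustrative, using the same toy numbers as above) that just iterates the offer and confirms the stopping point:

```python
# Check the "take 9 cards, then stop" claim above, with d=0, q=1 and
# f(x) = ln(V(x)+1). Purely a numerical illustration of the worked example.

import math

def f(v):
    return math.log(v + 1)

c = 1.0      # value of the history currently slated to play out (q = 1)
d = 0.0      # value of death
cards_taken = 0
while 0.9 * f(2 * c) + 0.1 * f(d) > f(c):   # expected f if we draw vs. declining
    cards_taken += 1
    c *= 2                                  # assume we drew a star; offer repeats
print(cards_taken)   # -> 9: the 10th offer is the first one worth declining
```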

Comment by conchis on Utilons vs. Hedons · 2009-08-13T11:06:21.335Z · score: 0 (0 votes) · LW · GW

Interesting, I'd assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.

The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.

Your definition focuses on "values": utilons are a measure of the extent to which a given world history measures up according to your values.

These are not necessarily inconsistent, but I'd assumed (perhaps wrongly) that they differed in two respects.

  1. Preferences are simply a binary relation, which does not allow degrees of intensity. (I can rank A>B, but I can't say that I prefer A twice as much as B.) In contrast, the degree to which a world measures up to our values seems capable of degrees. (It could make sense for me to say that I value A twice as much as I value B.)
  2. The preferences in question are over gambles over world histories, whereas I assumed that the values in question were over world histories directly.

I've started calling what-I-thought-you-meant "valutilons", to avoid confusion between that concept and the definition of utilons that seems more common here (and which is reflected in the wiki). We'll see how that goes.

Comment by conchis on Utilons vs. Hedons · 2009-08-13T10:07:31.689Z · score: 0 (0 votes) · LW · GW

since Hedons are a subset of Utilons

Not true. Even according to the wiki's usage.

Comment by conchis on Utilons vs. Hedons · 2009-08-13T10:04:34.847Z · score: 0 (0 votes) · LW · GW

We can experience things other than pleasure.

Comment by conchis on Exterminating life is rational · 2009-08-13T09:22:08.917Z · score: 0 (0 votes) · LW · GW

Agreed. My point is simply that one particular (tempting) way of resolving the underspecification is non-useful. ;)

Comment by conchis on Exterminating life is rational · 2009-08-13T09:15:03.061Z · score: 1 (1 votes) · LW · GW

if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

Well, I don't disagree with this, but I would still agree with it if you substituted "right" for "wrong", so it doesn't seem like much of a conclusion. ;)

Comment by conchis on Exterminating life is rational · 2009-08-13T09:10:50.094Z · score: 1 (1 votes) · LW · GW

it's just a parameter of the tool.

It's also an entity in the problem set-up. When Omega says "I'll double your utility", what is she offering to double? Without defining this, the problem isn't well-specified.

Comment by conchis on Exterminating life is rational · 2009-08-13T08:52:01.728Z · score: 0 (0 votes) · LW · GW

I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn't solve any problems. For this definition of utility I argue that your example doesn't work. You do not appear to have engaged with this argument, despite repeated requests to point out either where it goes wrong, or where it is unclear. If it goes wrong, I want to know why, but this conversation isn't really helping.

For other definitions of utility, I do not, and have never claimed that the thought experiment is trivial. In fact, I think it is very interesting.

Comment by conchis on Utilons vs. Hedons · 2009-08-12T20:10:04.923Z · score: 1 (1 votes) · LW · GW

We can value more than just our emotional states. The experience machine is the classic thought experiment designed to demonstrate this. Another example that was discussed a lot here recently was the possibility that we could value not being deceived.

Comment by conchis on Exterminating life is rational · 2009-08-12T18:08:16.950Z · score: 1 (1 votes) · LW · GW

As the tool's decision in this thought experiment is made invariant on the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection), proves that there is no way to set up "utility"

My argument is that, if Omega is offering to double vNM utility, the set-up of the thought experiment rules out the possibility that the decision could be wrong according to a person's considered preference (because the claim to be doubling vNM utility embodies an assumption about what a person's considered preference is). AFAICT, the thought experiment then amounts to asking: "If I should maximize expected utility, should I maximize expected utility?" Regardless of whether I should actually maximize expected utility or not, the correct answer to this question is still "yes". But the thought experiment is completely uninformative.

Do you understand my argument for this conclusion? (Fourth para of my previous comment.) If you do, can you point out where you think it goes astray? If you don't, could you tell me what part you don't understand so I can try to clarify my thinking?

On the other hand, if Omega is offering to double something other than vNM utility (hedons/valutilons/whatever) then I don't think we have any disagreement. (Do we? Do you disagree with anything I said in para 5 of my previous comment?)

My point is just that the thought experiment is underspecified unless we're clear about what the doubling applies to, and that people sometimes seem to shift back and forth between different meanings.

Comment by conchis on Exterminating life is rational · 2009-08-12T16:02:27.191Z · score: 0 (0 votes) · LW · GW

I'm struggling to figure out whether we're actually disagreeing about anything here, and if so, what it is. I agree with most of what you've said, but can't quite see how it connects to the point I'm trying to make. It seems like we're somehow managing to talk past each other, but unfortunately I can't tell whether I'm missing your point, you're missing mine, or something else entirely. Let's try again... let me know if/when you think I'm going off the rails here.

If I understand you correctly, you want to evaluate a particular decision procedure "maximize expected utility" (MEU) by seeing whether the results it gives in this situation seem correct. (Is that right?)

My point was that the result given by MEU, and the evidence that this can provide, both depend crucially on what you mean by utility.

One possibility is that by utility, you mean vNM utility. In this case, MEU clearly says you should accept the offer. As a result, it's tempting to say that if you think accepting the offer would be a bad idea, then this provides evidence against MEU (or equivalently, since the vNM axioms imply MEU, that you think it's ok to violate the vNM axioms). The problem is that if you violate the vNM axioms, your choices will have no vNM utility representation, and Omega couldn't possibly promise to double your vNM utility, because there's no such thing. So for the hypothetical to make sense at all, we have to assume that your preferences conform to the vNM axioms. Moreover, because the vNM axioms necessarily imply MEU, the hypothetical also assumes MEU, and it therefore can't provide evidence either for or against it.*

If the hypothetical is going to be useful, then utility needs to mean something other than vNM utility. It could mean hedons, it could mean valutilons,** it could mean something else. I do think that responses to the hypothetical in these cases can provide useful evidence about the value of decision procedures such as "maximize expected hedons" (MEH) or "maximize expected valutilons" (MEV). My point on this score was simply that there is no particular reason to think that either MEH or MEV is likely to be an optimal decision procedure to begin with. They're certainly not implied by the vNM axioms, which require only that you should maximise the expectation of some (positive) monotonic transform of hedons or valutilons or whatever.*** [ETA: As a specific example, if you decide to maximize the expectation of a bounded concave function of hedons/valutilons, then even if hedons/valutilons are unbounded, you'll at some point stop taking bets to double your hedons/valutilons, but still be an expected vNM utility maximizer.]
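To make the bracketed claim concrete, here's a minimal numerical sketch (Python; the particular bounded concave function f(v) = v/(v+1) and the 90%/10% double-or-lose-everything gamble are my own illustrative choices, echoing the card example elsewhere in this thread):

```python
# Sketch of the claim above: with a bounded, concave f, an expected-vNM-utility
# maximizer eventually declines further doubling bets even though the
# underlying quantity (hedons/valutilons) is unbounded. Hypothetical choices
# of f and gamble, purely for illustration.

def f(v):
    return v / (v + 1.0)       # bounded above by 1, concave for v >= 0

def accepts_double(v, p=0.9):
    """Accept iff expected f after the gamble beats keeping v for sure."""
    return p * f(2 * v) + (1 - p) * f(0) > f(v)

for v in [1, 2, 4, 8, 16]:
    print(v, accepts_double(v))
# -> True for v < 4, False from v >= 4 (the algebraic threshold is v = 4),
#    so the doublings stop despite v itself being unbounded.
```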

Does that make sense?

* This also means that if you think MEU gives the "wrong" answer in this case, you've gotten confused somewhere - most likely about what it means to double vNM utility.

** I define these here as the output of a function that maps a specific, certain, world history (no gambles!) into the reals according to how well that particular world history measures up against my values. (Apologies for the proliferation of terminology - I'm trying to guard against the possibility that we're using "utilons" to mean different things without inadvertently ending up in a messy definitional argument. ;))

*** A corollary of this is that rejecting MEH or MEV does not constitute evidence against the vNM axioms.

Comment by conchis on Exterminating life is rational · 2009-08-12T14:22:10.546Z · score: 3 (3 votes) · LW · GW

No one has proposed a form of utilitarianism that is free from paradoxes (e.g., the Repugnant Conclusion).

I'd be interested to know what you think of Critical-Level Utilitarianism and Population-Relative Betterness as ways of avoiding the repugnant conclusion and other problems.

Comment by conchis on Deleting paradoxes with fuzzy logic · 2009-08-12T09:40:11.484Z · score: 0 (0 votes) · LW · GW

Apology likewise accepted! ;)

Comment by conchis on Utilons vs. Hedons · 2009-08-12T01:48:11.504Z · score: 1 (1 votes) · LW · GW

Hedons won't be a subset of utilons if we happen not to value all hedons. One might not value hedons that arise out of false beliefs, for example. (From memory, I think Lawrence Sumner is a proponent of a view something like this.)

NB: Even if hedons were simply a subset of utilons, I don't quite see how that would mean that this post "doesn't really make sense".

Comment by conchis on Welcome to Less Wrong! · 2009-08-11T23:32:28.188Z · score: 4 (4 votes) · LW · GW

Dissidence (i.e. dissent/the state of being a dissident) actually seems to fit the context better than dissonance. I thought it was a nice turn of phrase.

Comment by conchis on Exterminating life is rational · 2009-08-11T23:16:53.354Z · score: 0 (0 votes) · LW · GW

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

The former doesn't force such decisions at all. That's precisely why I said that it's not useful advice: all it says is that you should take the gamble if you prefer to take the gamble.* (Moreover, if you did not prefer to take the gamble, the hypothetical doubling of vNM utility could never happen, so the set up already assumes you prefer the gamble. This seems to make the hypothetical not especially useful either.)

On the other hand "maximize expected utilons" does provide concrete advice. It's just that (AFAIK) there's no reason to listen to that advice unless you're risk-neutral over utilons. If you were sufficiently risk averse over utilons then a 50% chance of doubling them might not induce you to take the gamble, and nothing in the vNM axioms would say that you're behaving irrationally. The really interesting question then becomes whether there are other good reasons to have particular risk preferences with respect to utilons, but it's a question I've never heard a particularly good answer to.

* At least provided doing so would not result in an inconsistency in your preferences. [ETA: Actually, if your preferences are inconsistent, then they won't have a vNM utility representation, and Omega's claim that she will double your vNM utility can't actually mean anything. The set-up therefore seems to imply that you preferences are necessarily consistent. There sure seem to be a lot of surreptitious assumptions built in here!]

Comment by conchis on Deleting paradoxes with fuzzy logic · 2009-08-11T22:54:35.467Z · score: 0 (2 votes) · LW · GW

I'm willing to apologise for publicly calling you out. While I'm still not totally convinced that PMing would have been optimal in this instance, it was a failing on my part not to have considered it at all, and I'm certainly sorry for any hurt I may have caused.

I'm also sorry that you seem to have such a poor impression of me that you can't think of any way to explain my behaviour other than self-promotion and grandstanding. Not really big on argumentative charity are you?

Comment by conchis on Exterminating life is rational · 2009-08-11T22:05:13.562Z · score: 1 (1 votes) · LW · GW

Sorry for coming late to this party. ;)

Much of this discussion seems to me to rest on a similar confusion to that evidenced in "Expectation maximization implies average utilitarianism".

As I just pointed out again, the vNM axioms merely imply that "rational" decisions can be represented as maximising the expectation of some function mapping world histories into the reals. This function is conventionally called a utility function. In this sense of "utility function", your preferences over gambles determine your utility (up to an affine transform), so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer".* Given standard assumptions about Omega, this pretty obviously means that you accept the offer.

The confusion seems to arise because there are other mappings from world histories into the reals that are also conventionally called utility functions, but which have nothing in particular to do with the vNM utility function. When we read "I'll double your utility" I think we intuitively parse the phrase as referring to one of these other utility functions, which is when problems start to ensue.

Maximising expected vNM utility is the right thing to do. But "maximise expected vNM utility" is not especially useful advice, because we have no access to our vNM utility function unless we already know our preferences (or can reasonably extrapolate them from preferences we do have access to). Maximising expected utilons is not necessarily the right thing to do. You can maximize any (potentially bounded!) positive monotonic transform of utilons and you'll still be "rational".

* There are sets of "rational" preferences for which such a statement could never be true (your preferences could be represented by a bounded utility function where doubling would go above the bound). If you had such preferences and Omega possessed the usual Omega-properties, then she would never claim to be able to double your utility: ergo the hypothetical implicitly rules out such preferences.

NB: I'm aware that I'm fudging a couple of things here, but they don't affect the point, and unfudging them seemed likely to be more confusing than helpful.

Comment by conchis on Deleting paradoxes with fuzzy logic · 2009-08-11T19:01:27.357Z · score: 2 (2 votes) · LW · GW

Holding someone's hand through basic explanations is unfair to the people who have to do the work that the initial poster should have done for themselves.

What's obvious to one person is seldom obvious to everybody else. There are things that seem utterly trivial to me that lots of people don't get immediately, and many more things that seem utterly trivial to others that I don't get immediately. That doesn't mean that any of us aren't trying, or deserve to be belittled for "not getting it". (I can't quite tell if your second paragraph is intended as justification or merely explanation; apologies if I've guessed wrongly).

Why stage a degrading, self-congratulatory "intervention"?

It wasn't intended to be self-congratulatory; it was intended to make a point. Oh well. As for being degrading, I was attempting, via irony, to help you to understand the impact of a particular style of comment. It's a style that I would normally try to avoid, and I agree that in general such comments might be better communicated privately, and certainly in a less inflammatory way. (In this case, it honestly didn't occur to me to send a private message. Not sure what I would have done if it had. I think the extent to which others' here agree or disagree with my point is useful information for us both, but information that would be lost if the correspondence were private.)

It's not actually as often as you're trying to imply.

I'm not sure what you think I was trying to imply, but I had two specific instances in mind (other than this one), and honestly wasn't trying to imply anything beyond that.

Comment by conchis on Utilons vs. Hedons · 2009-08-11T16:28:41.576Z · score: 4 (4 votes) · LW · GW

This discussion has made me feel I don't understand what "utilon" really means.

I agree that the OP is somewhat ambiguous on this. For my own part, I distinguish between at least the following four categories of things-that-people-might-call-a-utility-function. Each involves a mapping from world histories into the reals according to:

  1. how the history affects our mind/emotional states;
  2. how we value the history from a self-regarding perspective ("for our own sake");
  3. how we value the history from an impartial (moral) perspective; or
  4. the choices we would actually make between different world histories (or gambles over world histories).

Hedons are clearly the output of the first mapping. My best guess is that the OP is defining utilons as something like the output of 3, but it may be a broader definition that could also encompass the output of 2, or it could be 4 instead.

I guess that part of the point of rationality is to get the output of 4 to correspond more closely to the output of either 2 or 3 (or maybe something in between): that is, to help us act in greater accordance with our values - in either the self-regarding or impartial sense of the term.

"Values" are still a bit of a black box here though, and it's not entirely clear how to cash them out. I don't think we want to reduce them either to actual choices or simply to stated values. Believed values might come closer, but I think we probably still want to allow that we could be mistaken about them.