I can see the appeal, but I worry that a metaphor in which a single person is given a single piece of software, and can rewrite it for their own and/or others’ purposes without grappling with myriad upstream and downstream dependencies, vested interests, and so forth, is probably missing an important part of the dynamics of real-world systems?
(This doesn’t really speak to moral obligations to systems, as much as practical challenges doing anything about them, but my experience is that the latter is a much more binding constraint.)
Additional/complementary argument in favour (and against the “any difference you make is marginal” argument): one’s personal example of viable veganism increases the chances of others becoming vegan (or partially so, which is still a benefit). Under plausible assumptions this effect could be (potentially much) larger than the direct effect of personal consumption decisions.
I have to say that the claimed reductios here strike me as under-argued, particularly when there are literally decades of arguments articulating and defending various versions of moral anti-realism, and setting out a range of ways in which the implications, though decidedly troubling, need not be absurd.
His 2018 lectures are also available on YouTube and seem pretty good so far, if anyone wants a complement to the book. The course website also has lecture notes and exercises.
To me, at least, it seems clear that you should not take the opportunities to reduce your torture sentence. After all, if you repeatedly decide to take them, you will end up with a 0.5 chance of being highly uncomfortable and a 0.5 chance of being tortured for 3^^^^3 years. This seems like a really bad lottery, and worse than the one that lets me have a 0.5 chance of having an okay life.
FWIW, this conclusion is not clear to me. To return to one of my original points: I don't think you can dodge this objection by arguing from potentially idiosyncratic preferences, even perfectly reasonable ones; rather, you need it to be the case that no rational agent could have different preferences. Either that, or you need to be willing to override otherwise rational individual preferences when making interpersonal tradeoffs.
To be honest, I'm actually not entirely averse to the latter option: having interpersonal trade-offs determined by contingent individual risk-preferences has never seemed especially well-justified to me (particularly if probability is in the mind). But I confess it's not clear whether that route is open to you, given the motivation for your system as a whole.
More generally, I think the basic property of non-real-valued consistent preference orderings is that they value some things "infinitely more" than others.
That makes sense, thanks.
So, I don't think your concern about keeping utility functions bounded is unwarranted; I'm just noting that they are part of a broader issue with aggregate consequentialism, not just with my ethical system.
Agreed!
you just need to make it so the supremum of their value is 1 and the infimum is 0.
Fair. Intuitively though, this feels more like a rescaling of an underlying satisfaction measure than a plausible definition of satisfaction to me. That said, if you're a preferentist, I accept this is internally consistent, and likely an improvement on alternative versions of preferentism.
One issue with only having boundedness above is that the expected value of life satisfaction for an arbitrary agent would probably often be undefined or infinite.
Yes, and I am obviously not proposing a solution to this problem! More just suggesting that, if there are infinities in the problem that appear to correspond to actual things we care about, then defining them out of existence seems more like deprioritising the problem than solving it.
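For concreteness, here's a toy sketch of the kind of thing I have in mind (the distribution is entirely made up for illustration): if satisfaction is unbounded below, even a simple heavy-tailed distribution has no finite expectation.

```python
# Toy illustration (my own invented distribution, not anything from the
# parent comment): if life satisfaction S is unbounded below, a distribution
# like P(S = -2**k) = 2**(-k) for k = 1, 2, 3, ... has partial expectation
# sums that diverge, so E[S] is not a finite number at all.
def partial_expectation(n_terms: int) -> float:
    """Sum of (-2**k) * 2**(-k) for k = 1..n_terms; each term equals -1."""
    return sum((-2 ** k) * (2.0 ** -k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expectation(n))  # -10.0, -100.0, -1000.0: diverges to -infinity
```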
The utility monster feels an incredibly strong need to have everyone on Earth be tortured
I think this framing muddies the intuition pump by introducing sadistic preferences, rather than focusing just on unboundedness below. I don't think it's necessary to do this: unboundedness below means there's a sense in which everyone is a potential "negative utility monster" if you torture them long enough. I think the core issue here is whether there's some point at which we just stop caring, or whether that's morally repugnant.
in order to act, you need more than just a consistent preference order over possible universes. In reality, you only get to choose between probability distributions over possible worlds, not specific possible worlds
Sorry, sloppy wording on my part. The question should have been "does this actually prevent us having a consistent preference ordering over gambles over universes" (even if we are not able to represent those preferences as maximising the expectation of a real-valued social welfare function)? We know (from lexicographic preferences) that "no-real-valued-utility-function-we-are-maximising-expectations-of" does not immediately imply "no-consistent-preference-ordering" (if we're willing to accept orderings that violate continuity). So pointing to undefined expectations doesn't seem to immediately rule out consistent choice.
In an infinite universe, there's already infinitely-many people, so I don't think this applies to my infinite ethical system.
YMMV, but FWIW allowing a system of infinite ethics to get finite questions (which should just be a special case) wrong seems a very non-ideal property to me, and suggests something has gone wrong somewhere. Is it really never possible to reach a state where all remaining choices have only finite implications?
I'll clarify the measure of life satisfaction I had in mind. Imagine if you showed an agent finitely-many descriptions of situations they could end up being in, and asked the agent to pick out the worst and the best of all of them. Assign the worst scenario satisfaction 0 and the best scenario satisfaction 1.
Thanks. I've toyed with similar ideas previously myself. The advantage, if this sort of thing works, is that it conveniently avoids a major issue with preference-based measures: that they're not unique and therefore incomparable across individuals. However, this method seems fragile in relying on a finite number of scenarios: doesn't it break if it's possible to imagine something worse than whatever the currently worst scenario is? (E.g. just keep adding 50 more years of torture.) While this might be a reasonable approximation in some circumstances, it doesn't seem like a fully coherent solution to me.
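To make the fragility worry concrete, here's a minimal sketch with scenario values I've invented purely for illustration:

```python
# Hypothetical underlying satisfaction scores for a finite menu of scenarios
# (all numbers invented for illustration only).
scenarios = {"great life": 100.0, "okay life": 10.0, "50y torture": -1000.0}

def normalise(vals):
    """Min-max rescaling: worst listed scenario -> 0, best listed scenario -> 1."""
    lo, hi = min(vals.values()), max(vals.values())
    return {k: (v - lo) / (hi - lo) for k, v in vals.items()}

print(normalise(scenarios))
# Now add a strictly worse scenario (another 50 years of torture)...
scenarios["100y torture"] = -2000.0
print(normalise(scenarios))
# ...and every other scenario's "satisfaction" changes, even though nothing
# about those lives has changed: the scale depends on which scenarios were listed.
```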
This seems pretty horrible to me, so I'm satisfied with keeping the measure of life satisfaction to be bounded.
IMO, the problem highlighted by the utility monster objection is fundamentally a prioritarian one. A transformation that guarantees boundedness above seems capable of resolving this, without requiring boundedness below (and thus avoiding the problematic consequences that boundedness below introduces).
Further, suppose you do decide to have an unbounded measure of life satisfaction
Given issues with the methodology proposed above for constructing bounded satisfaction functions, it's still not entirely clear to me that this is really a decision, as opposed to an empirical question (which we then need to decide how to cope with from a normative perspective). This seems like it may be a key difference in our perspectives here.
So, if you're trying to maximize the expected moral value of the universe, you won't be able to. And, as a moral agent, what else are you supposed to do?
Well, in general terms the answer to this question has to be either (a) bite a bullet, or (b) find another solution that avoids the uncomfortable trade-offs. It seems to me that you'll be willing to bite most bullets here. (Though I confess it's actually a little hard for me to tell whether you're also denying that there's any meaningful tradeoff here; that case still strikes me as less plausible.) If so, that's fine, but I hope you'll understand why to some of us that might feel less like a solution to the issue of infinities, than a decision to just not worry about them on a particular dimension. Perhaps that's ultimately necessary, but it's definitely non-ideal from my perspective.
A final random thought/question: I get that we can't expected utility maximise unless we can take finite expectations, but does this actually prevent us having a consistent preference ordering over universes, or is it potentially just a representation issue? I would have guessed that the vNM axiom we're violating here is continuity, which I tend to think of as a convenience assumption rather than an actual rationality requirement. (E.g. there's not really anything substantively crazy about lexicographic preferences as far as I can tell, they're just mathematically inconvenient to represent with real numbers.) Conflating a lack of real-valued representations with lack of consistent preference orderings is a fairly common mistake in this space. That said, if it really were just a representation issue, I would have expected someone smarter than me to have noticed by now, so (in lieu of actually checking) I'm assigning that low probability for now.
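For what it's worth, here's the sort of thing I have in mind (a bog-standard lexicographic ordering, sketched with toy numbers of my own):

```python
# Outcomes scored on two dimensions, where the first matters "infinitely more"
# than the second (dimensions and numbers are my own toy example).
outcomes = [(1, 0), (0, 10 ** 9), (1, 5), (0, 0)]

# Python tuples compare lexicographically, so sorting gives a complete,
# transitive ranking -- a perfectly consistent preference ordering -- even
# though (famously) no real-valued utility function can represent lexicographic
# preferences once the second dimension ranges over a continuum.
print(sorted(outcomes, reverse=True))
# [(1, 5), (1, 0), (0, 1000000000), (0, 0)]
```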
Re boundedness:
It's important to note that the sufficiently terrible lives need to be really, really, really bad already. So much so that being horribly tortured for fifty years does almost exactly nothing to affect their overall satisfaction. For example, maybe they're already being tortured for more than 3^^^^3 years, so adding fifty more years does almost exactly nothing to their life satisfaction.
I realise now that I may have moved through a critical step of the argument quite quickly above, which may be why this quote doesn't seem to capture the core of the objection I was trying to describe. Let me take another shot.
I am very much not suggesting that 50 years of torture does virtually nothing to [life satisfaction - or whatever other empirical value you want to take as axiologically primitive; happy to stick with life satisfaction as a running example]. I am suggesting that 50 years of torture is terrible for [life satisfaction]. I am then drawing a distinction between [life-satisfaction] and the output of the utility function that you then take expectations of. The reason I am doing this, is because it seems to me that whether [life satisfaction] is bounded is a contingent empirical question, not one that can be settled by normative fiat in order to make it easier to take expectations.
If, as a matter of empirical fact, [life satisfaction] is bounded, then the objection I describe will not bite.
If, on the other hand, [life satisfaction] is not bounded, then requiring the utility function you take expectations of to be bounded forces us to adopt some form of sigmoid mapping from [life satisfaction] to "utility", and this in turn forces us, at some margin, to not care about things that are absolutely awful (from the perspective of [life satisfaction]). (If an extra 50 years of torture isn't sufficiently awful for some reason, then we just need to pick something more awful for the purposes of the argument.)
Perhaps because I didn't explain this very well the first time, what's not totally clear to me from your response, is whether you think:
(a) [life satisfaction] is in fact bounded; or
(b) even if [life satisfaction] is unbounded, it's actually ok to not care about stuff that is absolutely (infinitely?) awful from the perspective of [life-satisfaction] because it lets us take expectations more conveniently. [Intentionally provocative framing, sorry. Intended as an attempt to prompt genuine reflection, rather than to score rhetorical points.]
It's possible that (a) is true, and much of your response seems like it's probably (?) targeted at that claim, but FWIW, I don't think this case can be convincingly made by appealing to contingent personal values: e.g. suggesting that another 50 years of torture wouldn't much matter to you personally won't escape the objection, as long as there's a possible agent who would view their life-satisfaction as being materially reduced in the same circumstances.
Suggesting evolutionary bounds on satisfaction is another potential avenue of argument, but also feels too contingent to do what you really want.
Maybe you could make a case for (a) if you were to substitute a representation of individual preferences for [life satisfaction]? I'm personally disinclined towards preferences as moral primitives, particularly as they're not unique, and consequently can't deal with distributional issues, but YMMV.
ETA: An alternative (more promising?) approach could be to accept that, while it may not cover all possible choices, in practice we're more likely to face choices with an infinite extensive margin than with an infinite intensive margin, and that the proposed method could be a reasonable decision rule for such choices. Practically, this seems like it would be acceptable as long as whatever function we're using to map [life-satisfaction] into utility isn't a sigmoid over the relevant range, and instead has a (weakly) negative second derivative over the (finite) range of [life satisfaction] covered by all relevant options.
(I assume (in)ability-to-take-expectations wasn't intended as an argument for (a), as it doesn't seem up to making such an empirical case?)
On the other hand, if you're actually arguing for (b), then I guess that's a bullet you can bite; though I think I'd still be trying to dodge it if I could. ETA: If there's no alternative but to ignore infinities on either the intensive or extensive margin, I could accept choosing the intensive margin, but I'm inclined to think this choice should be explicitly justified, and recognised as tragic if it really can't be avoided.
Re the repugnant conclusion: apologies for the lazy/incorrect example. Let me try again with better illustrations of the same underlying point. To be clear, I am not suggesting these are knock-down arguments; just that, given widespread (non-infinitarian) rejection of average utilitarianisms, you probably want to think through whether your view suffers from the same issues and whether you are ok with that.
Though there's a huge literature on all of this, a decent starting point is here:
However, the average view has very little support among moral philosophers since it suffers from severe problems.
First, consider a world inhabited by a single person enduring excruciating suffering. The average view entails that we could improve this world by creating a million new people whose lives were also filled with excruciating suffering if the suffering of the new people was ever-so-slightly less bad than the suffering of the original person.
Second, the average view entails the sadistic conclusion: It can sometimes be better to create lives with negative wellbeing than to create lives with positive wellbeing from the same starting point, all else equal.
Adding a small number of tortured, miserable people to a population diminishes the average wellbeing less than adding a sufficiently large number of people whose lives are pretty good, yet below the existing average...
Third, the average view prefers arbitrarily small populations over very large populations, as long as the average wellbeing is higher. For example, a world with a single, extremely happy individual would be favored over a world with ten billion people, all of whom are extremely happy but just ever-so-slightly less happy than that single person.
Fair point re use cases! My familiarity with DSGE models is about a decade out-of-date, so maybe things have improved, but a lot of the wariness then was that typical representative-agent DSGE isn't great where agent heterogeneity and interactions are important to the dynamics of the system, and/or agents fall significantly short of the rational expectations benchmark, and that in those cases you'd plausibly be better off using agent-based models (which has only become easier in the intervening period).
I (weakly) believe this is mainly because econometrists mostly haven't figured out that they can backpropagate through complex models
Plausible. I suspect the suspicion of fitting more complex models is also influenced by the fact that there's just not that much macro data + historical aversion to regularisation approaches that might help mitigate the paucity of data issues + worries that while such approaches might be ok for the sort of prediction tasks that ML is often deployed for, they're more risky for causal identification.
My point was more that, even if you can calculate the expectation, standard versions of average utilitarianism are usually rejected for non-infinitarian reasons (e.g. the repugnant conclusion) that seem like they would plausibly carry over to this proposal as well. I haven't worked through the details though, so perhaps I'm wrong.
Separately, while I understand the technical reasons for imposing boundedness on the utility function, I think you probably also need a substantive argument for why boundedness makes sense, or at least is morally acceptable. Boundedness below risks having some pretty unappealing properties, I think.
Arguments that utility functions are in fact bounded in practice seem highly contingent, and potentially vulnerable e.g. to the creation of utility-monsters, so I assume what you really need is an argument that some form of sigmoid transformation from an underlying real-valued welfare, u = s(w), is justified.
On the one hand, the resulting diminishing marginal utility for high-values of welfare will likely be broadly acceptable to those with prioritarian intuitions. But I don't know that I've ever seen an argument for the sort of anti-prioritarian results you get as a result of increasing marginal utility at very low levels of welfare. Not only would this imply that there's a meaningful range where it's morally required to deprioritise the welfare of the worse off, this deprioritisation is greatest for the very worst off. Because the sigmoid function essentially saturates at very low levels of welfare, at some point you seem to end up in a perverse version of Torture vs. dust specks where you think it's ok (or indeed required) to have 3^^^3 people (whose lives are already sufficiently terrible) horribly tortured for fifty years without hope or rest, to avoid someone in the middle of the welfare distribution getting a dust speck in their eye. This seems, well, problematic.
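To put rough numbers on the saturation problem (using a logistic curve and welfare levels chosen arbitrarily for illustration, not anything from the original proposal):

```python
import math

def s(w: float) -> float:
    """A sigmoid (logistic) map from unbounded welfare w to bounded utility in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-w))

# Arbitrary illustrative welfare levels: someone already at w = -50 who is
# tortured down to w = -100, versus someone at w = 0 who gets a dust speck
# (w = -0.001).
torture_cost = s(-50.0) - s(-100.0)      # ~1.9e-22 per already-miserable person
dust_speck_cost = s(0.0) - s(-0.001)     # ~2.5e-4

# Under this mapping, even a billion billion (1e18) such tortures "cost" less
# bounded utility than a single dust speck:
print(1e18 * torture_cost < dust_speck_cost)  # True
```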
Worth noting that many economists (including e.g. Solow, Romer, Stiglitz among others) are pretty sceptical (to put it mildly) about the value of DSGE models (not without reason, IMHO). I don't want to suggest that the debate is settled one way or the other, but do think that the framing of the DSGE approach as the current state-of-the-art at least warrants a significant caveat emptor. Afraid I am too far from the cutting edge myself to have a more constructive suggestion though.
This sounds essentially like average utilitarianism with bounded utility functions. Is that right? If so, have you considered the usual objections to average utilitarianism (in particular, re rankings over different populations)?
Have you read s1gn1f1cant d1g1t5?
There is no value to a superconcept that crosses that boundary.
This doesn't seem to me to argue in favour of using wording that's associated with the (potentially illegitimate) superconcept to refer to one part of it. Also, the post you were responding to (conf)used both concepts of utility, so by that stage, they were already in the same discussion, even if they didn't belong there.
Two additional things, FWIW:
(1) There's a lot of existing literature that distinguishes between "decision utility" and "experienced utility" (where "decision utility" corresponds to preference representation) so there is an existing terminology already out there. (Although "experienced utility" doesn't necessarily have anything to do with preference or welfare aggregation either.)
(2) I view moral philosophy as a special case of decision theory (and e.g. axiomatic approaches and other tools of decision theory have been quite useful in moral philosophy), so to the extent that your firewall intends to cut that off, I think it's problematic. (Not sure that's what you intend - but it's one interpretation of your words in this comment.) Even Harsanyi's argument, while flawed, is interesting in this regard (it's much more sophisticated than Phil's post, so I'd recommend checking it out if you haven't already).
I'm hesitant to get into a terminology argument when we're in substantive agreement. Nonetheless, I personally find your rhetorical approach here a little confusing. (Perhaps I am alone in that.)
Yes, it's annoying when people use the word 'fruit' to refer to both apples and oranges, and as a result confuse themselves into trying to derive propositions about oranges from the properties of apples. But I'd suggest that it's not the most useful response to this problem to insist on using the word 'fruit' to refer exclusively to apples, and to proceed to make claims like 'fruit can't be orange coloured' that are false for some types of fruit. (Even more so when people have been using the word 'fruit' to refer to oranges for longer than they've been using it to refer to apples.) Aren't you just making it more difficult for people to get your point that apples and oranges are different?
On your current approach, every time you make a claim about fruit, I have to try to figure out from context whether you're really making a claim about all fruit, or just apples, or just oranges. And if I guess wrong, we just end up in a pointless and avoidable argument. Surely it's easier to instead phrase your claims as being about apples and oranges directly when they're intended to apply to only one type of fruit?
P.S. For the avoidance of doubt, and with apologies for obviousness: fruit=utility, apples=decision utility, oranges=substantive utility.
While I'm in broad agreement with you here, I'd nitpick on a few things.
Different utility functions are not commensurable.
Agree that decision-theoretic or VNM utility functions are not commensurable - they're merely mathematical representations of different individuals' preference orderings. But I worry that your language consistently ignores an older, and still entirely valid use of the utility concept. Other types of utility function (hedonic, or welfarist more broadly) may allow for interpersonal comparisons. (And unless you accept the possibility of such comparisons, any social welfare function you try to construct will likely end up running afoul of Arrow's impossibility theorem).
Translate the axioms into statements about people. Do they still seem reasonable?
I'm actually pretty much OK with Axioms 1 through 3 being applied to a population social welfare function. As Wei Dai pointed out in the linked thread (and Sen argues as well), it's 4 that seems the most problematic when translated to a population context. (Dealing with varying populations tends to be a stumbling block for aggregationist consequentialism in general.)
That said, the fact that decision utility != substantive utility also means that even if you accepted that all 4 VNM axioms were applicable, you wouldn't have proven average utilitarianism: the axioms do not, for example, rule out prioritarianism (which I think was Sen's main point).
Are you familiar with the debate between John Harsanyi and Amartya Sen on essentially this topic (which we've discussed ad nauseam before)? In response to an argument of Harsanyi's that purported to use the VNM axioms to justify utilitarianism, Sen reaches a conclusion that broadly aligns with your take on the issue.
If not, some useful references here.
ETA: I worry that I've unduly maligned Harsanyi by associating his argument too heavily with Phil's post. Although I still think it's wrong, Harsanyi's argument is rather more sophisticated than Phil's, and worth checking out if you're at all interested in this area.
It wouldn't necessarily reflect badly on her: if someone has to die to take down Azkaban,* and Harry needs to survive to achieve other important goals, then Hermione taking it down seems like a non-foolish solution to me.
*This is hinted at as being at least a strong possibility.
Although I agree it's odd, it does in fact seem that there is gender information transferred / inferred from grammatical gender.
From Lera Boroditsky's Edge piece:
Does treating chairs as masculine and beds as feminine in the grammar make Russian speakers think of chairs as being more like men and beds as more like women in some way? It turns out that it does. In one study, we asked German and Spanish speakers to describe objects having opposite gender assignment in those two languages. The descriptions they gave differed in a way predicted by grammatical gender. For example, when asked to describe a "key" — a word that is masculine in German and feminine in Spanish — the German speakers were more likely to use words like "hard," "heavy," "jagged," "metal," "serrated," and "useful," whereas Spanish speakers were more likely to say "golden," "intricate," "little," "lovely," "shiny," and "tiny." To describe a "bridge," which is feminine in German and masculine in Spanish, the German speakers said "beautiful," "elegant," "fragile," "peaceful," "pretty," and "slender," and the Spanish speakers said "big," "dangerous," "long," "strong," "sturdy," and "towering." This was true even though all testing was done in English, a language without grammatical gender. The same pattern of results also emerged in entirely nonlinguistic tasks (e.g., rating similarity between pictures). And we can also show that it is aspects of language per se that shape how people think: teaching English speakers new grammatical gender systems influences mental representations of objects in the same way it does with German and Spanish speakers.
My understanding of the relevant research* is that it's a fairly consistent finding that masculine generics (a) do cause people to imagine men rather than women, and (b) that this can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reducing female job applications. (Some of these negative effects have also been established for men from feminine generics as well, which favours using they/them/their rather than she/her as replacements.)
* There's an overview of some of this here (from p.26).
Isn't the main difference just that they have a bigger sample (e.g. "4x" in the hardcore group)?
Isn't the claim in 6 (that there is a planning-optimal choice, but no action-optimal choice) inconsistent with 4 (a choice that is planning optimal is also action optimal)?
Laying down rules for what counts as evidence that a body is considering alternatives, is mess[y]
Agreed. But I don't think that means that it's not possible to do so, or that there aren't clear cases on either side of the line. My previous formulation probably wasn't as clear as it should have been, but would the distinction seem more tenable to you if I said "possible in principle to observe physical representations of" instead of "possible in principle to physically extract"? I think the former better captures my intended meaning.
If there were a (potentially) observable physical process going on inside the pebble that contained representations of alternative paths available to it, and the utility assigned to them, then I think you could argue that the pebble is a CSA. But we have no evidence of that whatsoever. Those representations might exist in our minds once we decide to model the pebble in that way, but that isn't the same thing at all.
On the other hand, we do seem to have such evidence for e.g. chess-playing computers, and (while claims about what neuroimaging studies have identified are frequently overstated) we also seem to be gathering it for the human brain.
FWIW, the exact quote (from pp.13-14 of this article) is:
Far better an approximate answer to the right question, which is often vague, than the exact answer to the wrong question, which can always be made precise. [Emphasis in original]
Your paraphrase is snappier though (as well as being less ambiguous; it's hard to tell in the original whether Tukey intends the adjectives "vague" and "precise" to apply to the questions or the answers).
all of the above assumes a distinction I'm not convinced you've made
If it is possible in principle to physically extract the alternatives/utility assignments etc., wouldn't that be sufficient to ground the CSA/non-CSA distinction, without running afoul of either current technological limitations or the pebble-as-CSA problem? (Granted, we might not always know whether a given agent is really a CSA or not, but that doesn't seem to obviate the distinction itself.)
The Snoep paper Will linked to measured the correlation for the US, Denmark and the Netherlands (and found no significant correlation in the latter two).
The monopolist religion point is of course a good one. It would be interesting to see what the correlation looked like in relatively secular, yet non-monopolistic countries. (Not really sure what countries would qualify though.)
We already have some limited evidence that conventionally religious people are happier
But see Will Wilkinson on this too (arguing that this only really holds in the US, and speculating that it's really about "a good individual fit with prevailing cultural values" rather than religion per se).
Thanks for the explanation.
The idea is that when you are listening to music, you are handicapping yourself by taking some of the attention of the aural modality.
I'd heard something similar from a friend who majored in psychology, but they explained it in terms of verbal processing rather than auditory processing more generally, which is why (they said) music without words wasn't as bad.
I'm not sure whether it's related, but I've also been told by a number of musically-trained friends that they can't work with music at all, because they can't help but analyse it as they listen: for them, listening seems to automatically involve processing work that it doesn't (seem to) for me, precisely because I'm not capable of such processing. (This was part of the reason I was originally wondering about individual variation; the point you make at the end is really interesting in this regard too.)
Sometimes, but it varies quite a lot depending on exactly what I'm doing. The only correlation I've noticed between the effect of music and work-type is that the negative effect of lyrics is more pronounced when I'm trying to write.
Of course, it's entirely possible that I'm just not noticing the right things - which is why I'd be interested in references.
If anyone does have studies to hand I'd be grateful for references.* I personally find it difficult to work without music. That may be habit as much as anything else, though I expect part of the benefit is due to shutting out other, more distracting noise. I've noticed negative effects on my productivity on the rare occasions I've listened to music with lyrics, but that's about it.
* I'd be especially grateful for anything that looks at how much individual variation there is in the effect of music.
Fair enough. My impression of the SWB literature is that the relationship is robust, both in a purely correlational sense, and in papers like the Frey and Stutzer one where they try to control for confounding factors like personality and selection. The only major catch is how long it takes individuals to adapt after the initial SWB spike.
Indeed, having now managed to track down the paper behind your first link, it seems like this is actually their main point. From their conclusion:
Our results show that (a) selection effects appear to make happy people more likely to get and stay married, and these selection effects are at least partially [emphasis mine] responsible for the widely documented association between marital status and SWB; (b) on average, people adapt quickly and completely to marriage, and they adapt more slowly to widowhood (though even in this case, adaptation is close to complete after about 8 years); (c) there are substantial individual differences in the extent to which people adapt; and (d) the extent to which people adapt is strongly related to the degree to which they react to the initial event—those individuals who reacted strongly were still far from baseline levels years after the event. These last two findings indicate that marital transitions can be related to changes in satisfaction but that these effects may be overlooked if only average trends are examined.
FWIW, this seems inconsistent with the evidence presented in the paper linked here, and most of the other work I've seen. The omitted category in most regression analyses is "never married", so I don't really see how this would fly.
Sorry for the delay in getting back to you (in fairness, you didn't get back to me either!). A good paper (though not a meta-analysis) on this is:
Stutzer and Frey (2006), "Does Marriage Make People Happy or Do Happy People Get Married?", Journal of Socio-Economics 35: 326-347.
The lit review surveys some of the other evidence.
I a priori doubt all the happiness research as based on silly questionnaires and naive statistics
I'm a little puzzled by this comment given that the first link you provided looks (on its face) to be based on exactly this sort of evidence. But in any event, many of the studies mentioned in the Stutzer and Frey paper look at health and other outcomes as well.
this post infers possible causation based upon a sample size of 1
Eh? Pica is a known disorder. The sample size for the causation claim is clearly more than 1.
[ETA: In case anyone's wondering why this comment no longer makes any sense, it's because most of the original parent was removed after I made it, and replaced with the current second para.]
I for one comment far more on Phil's posts when I think they're completely misguided than I do otherwise. Not sure what that says about me, but if others did likewise, we would predict precisely the relationship Phil is observing.
Interesting. All the other evidence I've seen suggests that committed relationships do make people happier, so I'd be interested to see how these apparently conflicting findings can be resolved.
Part of the difference could just be the focus on marriage vs. stable relationships more generally (whether married or not): I'm not sure there's much reason to think that a marriage certificate is going to make a big difference in and of itself (or that anyone's really claiming that it would). In fact, there's some, albeit limited, evidence that unmarried couples are happier on average than married ones.
I'll try to dig up references when I have a bit more time. Don't suppose you happen to have one for the actual research behind your first link?
Me too. It gets especially embarrassing when you end up telling someone a story about a conversation they themselves were involved in.
Warning, nitpicks follow:
The sentence "All good sentences must at least one verb." has at least one verb. (It's an auxiliary verb, but it's still a verb. Obviously this doesn't make it good; but it does detract from the point somewhat.)
"2+2=5" is false, but it's not nonsense.
I was objecting to the subset claim, not the claim about unit equivalence. (Mainly because somebody else had just made the same incorrect claim elsewhere in the comments to this post.)
As it happens, I'm also happy to object to claim about unit equivalence, whatever the wiki says. (On what seems to be the most common interpretation of utilons around these parts, they don't even have a fixed origin or scale: the preference orderings they represent are invariant to affine transforms of the utilons.)
To expand on this slightly, it seems like it should be possible to separate goal achievement from risk preference (at least under certain conditions).
You first specify a goal function g(x) designating the degree to which your goals are met in a particular world history, x. You then specify another (monotonic) function, f(g) that embodies your risk-preference with respect to goal attainment (with concavity indicating risk-aversion, convexity risk-tolerance, and linearity risk-neutrality, in the usual way). Then you maximise E[f(g(x))].
If g(x) is only ordinal, this won't be especially helpful, but if you had a reasonable way of establishing an origin and scale it would seem potentially useful. Note also that f could be unbounded even if g were bounded, and vice-versa. In theory, that seems to suggest that taking ever increasing risks to achieve a bounded goal could be rational, if one were sufficiently risk-loving (though it does seem unlikely that anyone would really be that "crazy"). Also, one could avoid ever taking such risks, even in the pursuit of an unbounded goal, if one were sufficiently risk-averse that one's f function were bounded.
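A toy numerical illustration of the separation (gambles and functional forms invented just to show the mechanics):

```python
# Two gambles over goal attainment g(x), as lists of (probability, g) pairs
# (numbers invented for illustration):
# A: g = 4 for certain; B: 50/50 between g = 0 and g = 9 (higher expected g).
gamble_A = [(1.0, 4.0)]
gamble_B = [(0.5, 0.0), (0.5, 9.0)]

def expected_f(gamble, f):
    """Expected value of f(g) over a list of (probability, g) pairs."""
    return sum(p * f(g) for p, g in gamble)

risk_neutral = lambda g: g          # linear f: just maximise expected g
risk_averse = lambda g: g ** 0.5    # concave f
risk_loving = lambda g: g ** 2      # convex f

for name, f in [("neutral", risk_neutral), ("averse", risk_averse), ("loving", risk_loving)]:
    print(name, expected_f(gamble_A, f), expected_f(gamble_B, f))
# neutral: 4.0 vs 4.5   -> prefer B
# averse : 2.0 vs 1.5   -> prefer A (same goal function, different risk preference)
# loving : 16.0 vs 40.5 -> prefer B
```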
P.S.
On my reading of OP, this is the meaning of utility that was intended.
You're probably right.
Utility means "the function f, whose expectation I am in fact maximizing".
There are many definitions of utility, of which that is one. Usage in general is pretty inconsistent. (Wasn't that the point of this post?) Either way, definitional arguments aren't very interesting. ;)
The interesting result is that if you're maximizing something you may be vulnerable to a failure mode of taking risks that can be considered excessive.
Your maximand already embodies a particular view as to what sorts of risk are excessive. I tend to the view that if you consider the risks demanded by your maximand excessive, then you should either change your maximand, or change your view of what constitutes excessive risk.
Crap. Sorry about the delete. :(
Redefining "utility" like this doesn't help us with the actual problem at hand: what do we do if Omega offers to double the f(x) which we're actually maximizing?
It wasn't intended to help with the problem specified in terms of f(x). For the reasons set out in the thread beginning here, I don't find the problem specified in terms of f(x) very interesting.
In your restatement of the problem, the only thing we assume about Omega's offer is that it would change the universe in a desirable way
You're assuming the output of V(x) is ordinal. It could be cardinal.
all this means is that Omega is offering us the wrong thing
I'm afraid I don't understand what you mean here. "Wrong" relative to what?
which we don't really value.
Eh? Valutilons were defined to be something we value (ETA: each of us individually, rather than collectively).
The logic for the first step is the same as for any other step.
Actually, on rethinking, this depends entirely on what you mean by "utility". Here's a way of framing the problem such that the logic can change.
Assume that we have some function V(x) that maps world histories into (non-negative*) real-valued "valutilons", and that, with no intervention from Omega, the world history that will play out is valued at V(status quo) = q.
Omega then turns up and offers you the card deal, with a deck as described above: 90% stars, 10% skulls. Stars double your value: V(star) = 2c, where c is the value of whatever history is currently slated to play out (so c = q when the deal is first offered, but could be higher than that if you've played and won before). Skulls give you death: V(skull) = d, with d < q.
If our choices obey the vNM axioms, there will be some function f(x), such that our choices correspond to maximising E[f(x)]. It seems reasonable to assume that f(x) must be (weakly) increasing in V(x). A few questions present themselves:
Is there a function, f(x), such that, for some values of q and d, we should take a card every time one is offered?
Yes. f(x)=V(x) gives this result for all d<q. This is the standard approach.
Is there a function, f(x), such that, for some values of q and d, we should never take a card?
Yes. Set d=0, q=1000, and f(x) = ln(V(x)+1). The card gives expected vNM utility of 0.9ln(2001)~6.8, which is less than ln(1001)~6.9.
Is there a function, f(x), such that, for some values of q and d, we should take some finite number of cards then stop?
Yes. Set d=0, q=1, and f(x) = ln(V(x)+1). The first time you get the offer, its expected vNM utility is 0.9ln(3)~1 which is greater than ln(2)~0.7. But at the 10th time you play (assuming you're still alive), c=512, and the expected vNM utility of the offer is now 0.9ln(1025)~6.239, which is less than ln(513)~6.240.
So you take 9 cards, then stop. (You can verify for yourself, that the 9th card is still a good bet.)
* This is just to ensure that doubling your valutilons cannot make you worse off, as would happen if they were negative. It should be possible to reframe the problem to avoid this, but let's stick with it for now.
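For anyone who wants to check the arithmetic in the third example, here's a quick sketch of the calculation (same assumptions as above: d = 0, q = 1, f = ln(V + 1)):

```python
import math

# Before the n-th card, the current value is c = 2**(n-1) (with q = 1).
# Taking the card is worth it iff 0.9 * ln(2c + 1) > ln(c + 1).
c = 1.0  # q = 1
n = 0
while 0.9 * math.log(2 * c + 1) > math.log(c + 1):
    n += 1   # take this card...
    c *= 2   # ...and assume you drew a star, doubling your value
print(n)  # 9 -- take nine cards, then decline the tenth offer
```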
Interesting, I'd assumed your definitions of utilon were subtly different, but perhaps I was reading too much into your wording.
The wiki definition focuses on preference: utilons are the output of a set of vNM-consistent preferences over gambles.
Your definition focuses on "values": utilons are a measure of the extent to which a given world history measures up according to your values.
These are not necessarily inconsistent, but I'd assumed (perhaps wrongly) that they differed in two respects.
- Preferences are simply a binary relation that does not allow degrees of intensity. (I can rank A>B, but I can't say that I prefer A twice as much as B.) In contrast, the degree to which a world measures up to our values seems capable of degrees. (It could make sense for me to say that I value A twice as much as I value B.)
- The preferences in question are over gambles over world histories, whereas I assumed that the values in question were over world histories directly.
I've started calling what-I-thought-you-meant "valutilons", to avoid confusion between that concept and the definition of utilons that seems more common here (and which is reflected in the wiki). We'll see how that goes.
since Hedons are a subset of Utilons
Not true. Even according to the wiki's usage.
We can experience things other than pleasure.