Posts

Underappreciated points about utility functions (of both sorts) 2020-01-04T07:27:28.041Z
Goal-thinking vs desire-thinking 2019-11-10T00:31:40.661Z
Three types of "should" 2018-06-02T00:54:49.171Z
Meetup : Ann Arbor Area Amalgam of Rationalist-Adjacent Anthropoids: March 11 2017-03-10T00:41:02.064Z
Link: Simulating C. Elegans 2014-11-20T09:30:03.864Z
Terminology suggestion: Say "degrees utility" instead of "utils" to prompt affine thinking 2013-05-19T08:03:05.421Z
Should correlation coefficients be expressed as angles? 2012-11-28T00:05:50.347Z
Example of neurons usefully communicating without synapses found in flies [LINK] 2012-11-23T23:25:43.019Z
Where to report bugs on the new site? 2011-08-12T08:32:21.766Z
[LINK] Reverse priming effect from awareness of persuasion attempt? 2011-08-05T18:02:07.579Z
A summary of Savage's foundations for probability and utility. 2011-05-22T19:56:27.952Z
Link: Chessboxing could help train automatic emotion regulation 2011-01-22T23:40:17.572Z
Draft/wiki: Infinities and measuring infinite sets: A quick reference 2010-12-24T04:52:53.090Z
Definitions, characterizations, and hard-to-ground variables 2010-12-03T03:18:07.947Z

Comments

Comment by Sniffnoy on Harri Besceli's Shortform · 2024-11-30T22:37:19.015Z · LW · GW

We could also point to sleepwalkers of various sorts: even when executing complex actions (like murdering someone), I've never seen any accounts which mention deeply felt emotions. (WP emphasizes their dullness and apathetic affect.)

Nitpick: Sleepwalking proper apparently happens during non-REM sleep; acting out a dream during REM sleep is different and has its own name. Although it seems like sleepwalkers may also be dreaming somehow even though they aren't in REM sleep? I don't know -- this is definitely not my area -- and arguably none of this is relevant to the original point; but I thought I should point it out.

Comment by Sniffnoy on My PhD thesis: Algorithmic Bayesian Epistemology · 2024-04-11T20:20:47.635Z · LW · GW

Ha! OK, that is indeed nasty. Yeah, I guess CASes can solve this kind of problem these days, can't they? Well -- I say "these days" as if this hasn't been the case for, like, my entire life; I've just never gotten used to making routine use of them...

Comment by Sniffnoy on My PhD thesis: Algorithmic Bayesian Epistemology · 2024-04-04T00:20:18.835Z · LW · GW

One annoying thing in reading Chapter 3: it states that for l=2,4,8, the optimal scoring rules can be written in terms of elementary functions. However, you only actually give the full formula for the case l=8 (for l=2 you give it on half the interval). What are the formulas for the other cases?

(But also, this is really cool, thanks for posting this!)

Comment by Sniffnoy on K-types vs T-types — what priors do you have? · 2022-11-06T02:30:50.841Z · LW · GW

I think some cases of what you're describing as derivation-time penalties may really be can-you-derive-that-at-all penalties. E.g., with MWI and no Born rule assumed, it doesn't seem that there is any way to derive it. I would still expect a "correct" interpretation of QM to be essentially MWI-like, but I still think it's correct to penalize MWI-w/o-Born-assumption, not for the complexity of deriving the Born rule, but for the fact that deriving it doesn't seem to be possible at all. Similarly with attempts to eliminate time, or its distinction from space, from physics; it seems like it simply shouldn't be possible in such a case to get something like Lorentz invariance.

Comment by Sniffnoy on Counter-theses on Sleep · 2022-04-05T01:32:44.099Z · LW · GW

Why do babies need so much sleep then?

Given that at the moment we don't really understand why people need to sleep at all, I don't think this is a strong argument for any particular claimed function.

Comment by Sniffnoy on Impossibility results for unbounded utilities · 2022-03-25T17:40:48.051Z · LW · GW

Oh, that's a good citation, thanks. I've used that rough argument in the past, knowing I'd copied it from someone, but I had no recollection of what specifically or that it had been made more formal. Now I know!

My comment above was largely just intended as "how come nobody listens when I say it?" grumbling. :P

Comment by Sniffnoy on Impossibility results for unbounded utilities · 2022-03-24T07:04:42.635Z · LW · GW

I should note that this is more or less the same thing that Alex Mennen and I have been pointing out for quite some time, even if the exact framework is a little different. You can't both have unbounded utilities, and insist that expected utility works for infinite gambles.

IMO the correct thing to abandon is unbounded utilities, but whatever assumption you choose to abandon, the basic argument is an old one due to Fishburn, and I've discussed it in previous posts! (Even if the framework is a little different here, this seems essentially similar.)

I'm glad to see other people are finally taking the issue seriously, at least...

Comment by Sniffnoy on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-26T06:52:13.012Z · LW · GW

Yeah, that sounds about right to me. I'm not saying that you should assume such people are harmless or anything! Just that, like, you might want to try giving them a kick first -- "hey, constant vigilance, remember?" :P -- and see how they respond before giving up and treating them as hostile.

Comment by Sniffnoy on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T18:44:10.624Z · LW · GW

This seems exactly backwards: if someone makes uncorrelated errors, they are probably unintentional mistakes. If someone makes correlated errors, they are better explained as part of a strategy.

I mean, there is a word for correlated errors, and that word is "bias"; so you seem to be essentially claiming that people are unbiased? I'm guessing that's probably not what you're trying to claim, but it is what I'm concluding from what you wrote. Regardless, I'm saying people are biased towards this mistake.

Or really, what I'm saying is that it's the same sort of phenomenon that Eliezer discusses here. So it could indeed be construed as a strategy, as you say; but it would not be a strategy on the part of the conscious agent, but rather a strategy on the part of the "corrupted hardware" itself. Or something like that -- sorry, that's not a great way of putting it, but I don't really have a better one, and I hope that conveys what I'm getting at.

Like, I think you're assuming too much awareness/agency on the part of people. A person who makes correlated errors, and is aware of what they are doing, is executing a deliberate strategy. But lots of people who make correlated errors are just biased, or the errors are part of a built-in strategy they're executing, not deliberately, but by default, without thinking about it, one that requires effort not to execute.

We should expect someone calling themself a rationalist to be better, obviously, but, IDK, sometimes things go bad?

I can imagine, after reading the sequences, continuing to have this bias in my own thoughts, but I don't see how I could have been so confused as to refer to it in conversation as a valid principle of epistemology.

I mean, people don't necessarily fully internalize everything they read, and in some people the "hold on, what am I doing?" reflex can be weak? <shrug>

I mean I certainly don't want to rule out deliberate malice like you're talking about, but neither do I think this one snippet is enough to strongly conclude it.

Comment by Sniffnoy on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T08:50:50.169Z · LW · GW

I don't think this follows. I do not see how degree of wrongness implies intent. Eliezer's comment rhetorically suggests intent ("trolling") as a way of highlighting how wrong the person is; he is free to correct me if I am wrong, but I am pretty sure that is not an actual suggestion of intent, only a rhetorical one.

I would say, moreover, that this is the sort of mistake that occurs, over and over, by default, with no intent necessary. I might even say that it is avoiding, not committing, this sort of mistake that requires intent. Because this sort of mistake is just sort of what people fall into by default, and avoiding it requires active effort.

Is it contrary to everything Eliezer's ever written? Sure! But reading the entirety of the Sequences and calling yourself a "rationalist" does not in any way obviate the need to do the actual work of better group epistemology, of noticing such mistakes (and the path to them) and correcting/avoiding them.

I think we can only infer intent like you're talking about if the person in question is, actually, y'know, thinking about what they're doing. But I think people are really, like, acting on autopilot a pretty big fraction of the time; not autopiloting takes effort, and while doing that work may be what a "rationalist" is supposed to do, it's still not the default. All I think we can infer from this is a failure to do the work to shift out of autopilot and think. Bad group epistemology via laziness rather than via intent strikes me as the more likely explanation.

Comment by Sniffnoy on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-19T08:35:38.372Z · LW · GW

I want to more or less second what River said. Mostly I wouldn't have bothered replying to this... but your line of "today around <30" struck me as particularly wrong.

So, first of all, as River already noted, your claim about "in loco parentis" isn't accurate. People 18 or over are legally adults; yes, there used to be a notion of "in loco parentis" applied to college students, but that hasn't been current law since about the 60s.

But also, under 30? Like, you're talking about grad students? That is not my experience at all. Undergrads are still treated as kids to a substantial extent, yes, even if they're legally adults and there's no longer any such thing as "in loco parentis". But in my experience grad students are, absolutely, treated as adults, nor have I heard of things being otherwise. Perhaps this varies by field (I'm in math) or location or something, I don't know, but I at least have never heard of that before.

Comment by Sniffnoy on Common knowledge about Leverage Research 1.0 · 2021-09-26T07:05:21.276Z · LW · GW

I'm not involved with the Bay Area crowd but I remember seeing things about how Leverage is a scam/cult years ago; I was surprised to learn it's still around...? I expected most everyone would have deserted it after that...

Comment by Sniffnoy on Common knowledge about Leverage Research 1.0 · 2021-09-26T07:02:22.003Z · LW · GW

I do worry about "ends justify the means" reasoning when evaluating whether a person or project was or wasn't "good for the world" or "worth supporting". This seems especially likely when using an effective-altruism-flavored lens according to which only a few people/organizations/interventions will matter orders of magnitude more than others. If one believes that a project is one of very few projects that could possibly matter, and the future of humanity is at stake - and also believes the project is doing something new/experimental that current civilization is inadequate for - there is a risk of using that belief to extend unwarranted tolerance of structurally-unsound organizational decisions, including those typical of "high-demand groups" (such as use of psychological techniques to increase member loyalty, living and working in the same place, non-platonic relationships with subordinates, secrecy, and so on) without proportionate concern for the risks of structuring an organization in that way.

There is (roughly) a sequences post for that. :P

Comment by Sniffnoy on In support of Yak Shaving · 2021-09-01T00:25:04.100Z · LW · GW

Seems to me the story in the original yak-shaving story falls into case 2 -- the thing to do is to forget about borrowing the EZPass and just pay the toll!

Comment by Sniffnoy on Founding a rationalist group at the University of Michigan · 2021-08-12T07:23:48.936Z · LW · GW

There used to be an Ann Arbor LW meetup group, actually, back when I lived there -- it seems to be pretty dead now, as best I can tell, but the mailing list still exists. It's A4R-A2@googlegroups.com; I don't know how relevant this is to you, since you're trying to start a UM group and many of the people on that list will likely not be UM-affiliated, but you can at least try recruiting from there (or just restarting it, if you're not necessarily trying to specifically start a UM group). It also used to have a website, though I can't find it at the moment, and I doubt it would be that helpful anyway.

According to the meetup group list on this website, there also is or was a UM EA group, but there's not really any information about it? And there's this SSC meetup group listed there too, which possibly has more recent activity? No idea who's in that, I don't know this Sam Rossini, but possibly also worth recruiting from?

So, uh, yeah, that's my attempt (as someone who hasn't lived in Ann Arbor for two years) to survey the prior work in this area. :P Someone who's actually still there could likely say more...

Comment by Sniffnoy on A Contamination Theory of the Obesity Epidemic · 2021-07-25T18:03:13.199Z · LW · GW

Oh, huh -- looks like this paper is the summary of the blog series that "Slime Mold Time Mold" has been writing? Guess I can read this paper to skip to the end, since not all of it is posted yet. :P

Comment by Sniffnoy on Can crimes be discussed literally? · 2021-04-09T18:45:37.375Z · LW · GW

Yeah. You can use language that is unambiguously not attack language, it just takes more effort to avoid common words. In this respect it's not unlike how discussing lots of other things seriously requires avoiding common but confused words!

Comment by Sniffnoy on Classifying games like the Prisoner's Dilemma · 2021-04-09T03:51:51.979Z · LW · GW

I'm reminded of this paper, which discusses a smaller set of two-player games. What you call "Cake Eating" they call the "Harmony Game". They also use the more suggestive variable names -- which I believe come from existing literature -- R (reward), S (sucker's payoff), T (temptation), P (punishment) instead of (W, X, Y, Z). Note that in addition to R > P (W > Z) they also added the restrictions T > P (Y > Z) and R > S (W > X) so that the two options could be meaningfully labeled "cooperate" and "defect" instead of "Krump" and "Flitz" (the cooperate option is always better for the other player, regardless of whether it's better or worse for you). (I'm ignoring cases of things being equal, just like you are.)

(Of course, the paper isn't actually about classifying games, it's an empirical study of how people actually play these games! But I remember it for being the first place I saw such a classification...)

With these additional restrictions, there are only four games: Harmony Game (Cake Eating), Chicken (Hawk-Dove/Snowdrift/Farmer's Dilemma), Stag Hunt, and Prisoner's Dilemma (Too Many Cooks).
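To make the classification concrete, here's a minimal sketch (my own illustration, not code from the paper) that sorts a symmetric 2x2 game into those four types from its payoffs, using the R/S/T/P names and the restrictions above:

```python
def classify_game(R, S, T, P):
    """Classify a symmetric 2x2 game by its payoffs R (reward), S (sucker's payoff),
    T (temptation), P (punishment), assuming R > P, T > P, R > S, and no ties."""
    assert R > P and T > P and R > S
    if T > R and P > S:
        return "Prisoner's Dilemma"  # defecting dominates, yet mutual cooperation beats mutual defection
    if T > R and S > P:
        return "Chicken (Hawk-Dove / Snowdrift)"  # best reply is the opposite of the other player's move
    if R > T and P > S:
        return "Stag Hunt"  # both-cooperate and both-defect are each stable
    return "Harmony Game (Cake Eating)"  # R > T and S > P: cooperating dominates

print(classify_game(R=3, S=0, T=5, P=1))  # classic payoffs: Prisoner's Dilemma
```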

I'd basically been using that as my way of thinking about two-player games, but this broader set might be useful. Thanks for taking the time to do this and assign names to these.

I do have to wonder about that result that Zack_M_Davis mentions... as you mentioned, where's the Harmony Game in it? Also, isn't Battle of the Sexes more like Chicken than like Stag Hunt? I would expect to see Chicken and Stag Hunt, not Battle of the Sexes and Chicken, which sounds like the same thing twice and seems to leave out Stag Hunt. But maybe Battle of the Sexes is actually equivalent, in the sense described, to Stag Hunt rather than Chicken? That would be surprising, but I didn't sit down to check whether the definition is satisfied or not...

Comment by Sniffnoy on Thirty-three randomly selected bioethics papers · 2021-03-24T17:53:05.093Z · LW · GW

I suppose so. It is at least a different problem than I was worried about...

Comment by Sniffnoy on Thirty-three randomly selected bioethics papers · 2021-03-23T19:44:28.463Z · LW · GW

Huh. Given the negative reputation of bioethics around here -- one I hadn't much questioned, TBH -- most of these are surprisingly reasonable. Only #10, #16, and #24 really seemed like the LW stereotype of the bioethics paper that I would roll my eyes at. Arguably also #31, but I'd argue that one is instead alarming in a different way.

Some others seemed like bureaucratic junk (so, neither good nor bad), and for others I think the quoted sections didn't really give enough information to judge; it is quite possible that a few more of these would fall under the stereotype list if I read these papers further.

#1 is... man, why does it have to be so hostile? The argument it's making is basically a counter-stereotypical bioethics argument, but it's written in such a hostile manner. That's not the way to have a good discussion!

Also, I'm quite amused to see that #3 basically argues that we need what I've previously referred to here as a "theory of legitimate influence", for what appear likely to be similar reasons (although again I didn't read the full thing to inspect this in more detail).

Comment by Sniffnoy on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T19:10:06.693Z · LW · GW

Consider a modified version of the prisoner's dilemma. This time, the prisoners are allowed to communicate, but they also have to solve an additional technical problem, say, how to split the loot. They may start with agreeing on not betraying each other to the prosecutors, but later one of them may say: "I've done most of the work. I want 70% of the loot, otherwise I am going to rat on you." It's easy to see how the problem would escalate and end up in the prisoners betraying each other.

Minor note, but I think you could just talk about a [bargaining game](https://en.wikipedia.org/wiki/Cooperative_bargaining), rather than the Prisoner's Dilemma, which appears to be unrelated. There are other basic game theory examples beyond the Prisoner's Dilemma!

Comment by Sniffnoy on Dark Matters · 2021-03-17T04:50:03.604Z · LW · GW

I just explained why (without more specific theories of exactly how the gravity would become delocalized from the visible mass) the bullet cluster is not evidence one way or the other.

Now, you compare the extra fields of modified gravity to epicycles -- as in, post-hoc complications grafted on to a theory to explain a particular phenomenon. But these extra fields are, to the best of my understanding, not grafted on to explain such delocalization; they're the actual basic content of the modified gravity theories and necessary to obtain a workable theory at all. MOND by itself, after all, is not a theory of gravity; the problem then is making one compatible with it, and every actual attempt at that that I'm aware of involves these extra fields, again, not as an epicycle for the bullet cluster, but as a way of constructing a workable theory at all. So, I don't think that comparison is apt here.

One could perhaps say that such theories are epicycles upon MOND -- since the timeline may go MOND, then bullet cluster, then proper modified gravity theories -- but for the reasons above I don't think that makes a lot of sense either.

If this was some post-hoc epicycle then your comment would make some sense; but as it is, I don't think it does. Is there some reason that I'm missing that it should be regarded as a post-hoc epicycle?

Note that Hossenfelder herself says modified gravity is probably not correct! It's still important to understand what is or is not a valid argument against it. The other arguments for dark matter sure seem pretty compelling!

(Also, uh, I don't think "people who think X are just closed-minded and clearly not open to persuasion" is the sort of charity we try to go for here on LW...? I didn't downvote you, but, like, accusing people of being closed-minded rather than actually arguing is on the path to becoming similarly closed-minded oneself, you know?)

Comment by Sniffnoy on Defending the non-central fallacy · 2021-03-17T03:35:22.661Z · LW · GW

I feel like this really misses the point of the whole "non-central fallacy" idea. I would say, categories are heuristics and those heuristics have limits. When the category gets strained, the thing to do is to stop arguing using the category and start arguing the particular facts without relation to the category ("taboo your words").

You're saying that this sort of arguing-via-category is useful because it's actually arguing-via-similarity; but I see the point of Scott/Yvain's original article as being that such arguing via similarity simply isn't useful in such cases, and has to be replaced with a direct assessment of the facts.

Like, one might say, similar in what way, and how do we know that this particular similarity is relevant in this case? But any answer to why the similarity is relevant, could be translated into an argument that doesn't rely on the similarity in the first place. Similarity can thus be a useful guide to finding arguments, but it shouldn't, in contentious cases, be considered compelling as an argument itself.

Yes, as you say, the argument is common because it is useful as a quick shorthand most of the time. But in contentious cases, in edge cases -- the cases that people are likely to be arguing about -- it breaks down. That is to say, it's an argument whose validity is largely limited to those cases where people aren't arguing to begin with!

Comment by Sniffnoy on Dark Matters · 2021-03-15T08:36:20.808Z · LW · GW

Good post. Makes a good case. I wasn't aware of the evidence from galactic cluster lensing; that's pretty impressive. (I guess not as much as the CMB power spectrum, but that I'd heard about before. :P )

But, my understanding is that the Bullet Cluster is actually not the strong evidence it's claimed to be? My understanding of modified gravity theories is that, since they all work by adding extra fields, it's also possible for those to have gravity separated from visible matter, even if no dark matter is present. (See e.g. here... of course in this post Hossenfelder claims that the Bullet Cluster in particular is actually evidence against dark matter due to simulation reasons, but I don't know how much to believe that.)

Of course this means that modified gravity theories also aren't quite as different from dark matter as they're commonly said to be -- with either dark matter or modified gravity you're adding an additional field, the difference is just (OK, this is maybe a big just!) the nature of that field. But since this new field would presumably not act like matter in all the other ways you describe, my understanding is that it is still definitely distinct from "dark matter" for the purposes of this post.

Apparently these days even modified gravity proponents admit you still need dark matter to make things work out, which rather kills the whole motivation behind modified gravity, so I'm not sure if that's really an idea that makes sense anymore! Still, had to point out the thing about the Bullet Cluster, because based on what I know I don't think that part is actually correct.

Comment by Sniffnoy on Blue is Arbitrary · 2021-03-14T19:44:36.317Z · LW · GW

"Cyan" isn't a basic color term in English; English speakers ordinarily consider cyan to be a variant of blue, not something basically separate. Something that is cyan could also be described in English as "blue". As opposed to say, red and pink -- these are both basic color terms in English; an English speaker would not ordinarily refer to something pink as "red", or vice versa.

Or in other words: Color words don't refer to points in color space, they refer to regions, which means that you can look at how those regions overlap -- some may be subsets of others, some may be disjoint (well -- not disjoint per se, but thought of as disjoint, since obviously you can find things near the boundary that won't be judged consistently), etc. Having words "blue" and "cyan" that refer to two thought-of-as-disjoint regions is pretty different from having words "blue" and "cyan" where the latter refers to a subset of the former.

So, it's not as simple as saying "English also has a word cyan" -- yes, it does, but the meaning of that word, and the relation of its meaning to that of "blue", is pretty different. These translated words don't quite correspond; we're taking regions in color space, and translating them to words that refer to similar regions, regions that contain a number of the same points, but not the same ones.

The bit in the comic about "Eurocentric paint" obviously doesn't quite make sense as stated -- the division of the rainbow doesn't come from paint! -- but a paint set that focused on the central examples of basic color terms of a particular language could reasonably be called a that-language-centric paint set. In any case the basic point is just that dividing up color space into basic color terms has a large cultural component to it.

Comment by Sniffnoy on Making Vaccine · 2021-02-04T06:12:25.675Z · LW · GW

Wow!

I guess a thing that still bugs me after reading the rest of the comments is, if it turns out that this vaccine only offers protection against inhaling the virus through the nose, how much does that help when one considers that one could also inhale it through the mouth? Like, I worry that after taking this I'd still need to avoid indoor spaces with other people, etc., which would defeat a lot of the benefit of it.

But, if it turns out that it does yield antibodies in the blood, then... this sounds very much worth trying!

Comment by Sniffnoy on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-09-15T17:51:23.616Z · LW · GW

So, why do we perceive so many situations to be Prisoner's Dilemma-like rather than Stag Hunt-like?

I don't think that we do, exactly. I think that most people only know the term "prisoners' dilemma" and haven't learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners' Dilemma as a result. :-/

Comment by Sniffnoy on Toolbox-thinking and Law-thinking · 2020-09-06T20:51:21.041Z · LW · GW

That sounds like it might have been it?

Comment by Sniffnoy on Swiss Political System: More than You ever Wanted to Know (III.) · 2020-08-11T20:29:25.611Z · LW · GW

Sorry, but after reading this I'm not very clear on just what exactly the "Magic Formula" refers to. Could you state it explicitly?

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-02-28T22:58:16.505Z · LW · GW

Oops, turns out I did misremember -- Savage does not in fact put the proof in his book. You have to go to Fishburn's book.

I've been reviewing all this recently and yeah -- for anyone else who wants to get into this, I'd recommend getting Fishburn's book ("Utility Theory for Decision Making") in addition to Savage's "Foundations of Statistics". Because in addition to the above, what I'd also forgotten is that Savage leaves out a bunch of the proofs. It's really annoying. Thankfully in Fishburn's treatment he went and actually elaborated all the proofs that Savage thought it OK to skip over...

(Also, stating the obvious, but get the second edition of "Foundations of Statistics", as it fixes some mistakes. You probably don't want just Fishburn's book, it's fairly hard to read by itself.)

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-06T20:24:51.447Z · LW · GW

Oh, I see. I misread your comment then. Yes, I am assuming one already has the ability to discern the structure of an argument and doesn't need to hire someone else to do that for them...

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-05T18:53:19.983Z · LW · GW

What I said above. Sorry, to be clear here, by "argument structure" I don't mean the structure of the individual arguments but rather the overall argument -- what rebuts what.

(Edit: Looks like I misread the parent comment and this fails to respond to it; see below.)

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-03T20:55:24.801Z · LW · GW

This is a good point (the redemption movement comes to mind as an example), but I think the cases I'm thinking of and the cases you're describing look quite different in other details. Like, the bored/annoyed expert tired of having to correct basic mistakes, vs. the salesman who wants to initiate you into a new, exciting secret. But yeah, this is only a quick-and-dirty heuristic, and even then only good for distinguishing snake oil; it might not be a good idea to put too much weight on it, and it definitely won't help you in a real dispute ("Wait, both sides are annoyed that the other is getting basic points wrong!"). As Eliezer put it -- you can't learn physics by studying psychology!

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-01T22:16:28.738Z · LW · GW

Given a bunch of people who disagree, some of whom are actual experts and some of whom are selling snake oil, and lacking expertise yourself, there are some further quick-and-dirty heuristics you can use to tell which of the two groups is which. I think basically my suggestion can best be summarized as "look at argument structure".

The real experts will likely spend a bunch of time correcting popular misconceptions, which the fakers may subscribe to. By contrast, the fakers will generally not bother "correcting" the truth to their fakery, because why would they? They're trying to sell to unreflective people who just believe the obvious-seeming thing; someone who actually bothered to read corrections to misconceptions at any point is likely too savvy to be their target audience.

Sometimes though you do get actual arguments. Fortunately, it's easier to evaluate arguments than to determine truth oneself -- of course, this is only any good if at least one of the parties is right! If everyone is wrong, heuristics like this will likely be no help. But in an experts-and-fakers situation, where one of the groups is right and the other pretty definitely wrong, you can often just use heuristics like "which side has arguments (that make some degree of sense) that the other side has no answer to (that makes any sense)?". If we grant the assumption that one of the two sides is right, then it's likely to be that one.

When you actually have a lot of back-and-forth arguing -- as you might get in politics, or, as you might get in disputes between actual experts -- the usefulness of this sort of thing can drop quickly, but if you're just trying to sort out fakers from those with actual knowledge, I think it can work pretty well. (Although honestly, in a dispute between experts, I think the "left a key argument unanswered" is still a pretty big red flag.)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-28T07:25:59.782Z · LW · GW

Well, it's worth noting that P7 is introduced to address gambles with infinitely many possible outcomes, regardless of whether those outcomes are bounded or not (which is the reason I argue above you can't just get rid of it). But yeah. Glad that's cleared up now! :)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-17T04:58:12.530Z · LW · GW

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]

That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I'm confused by your claim that you're asking about conditions, when you haven't been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.

Something seems to be backwards here.

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

I'm confused here; it sounds like you're just describing, in the VNM framework, the strong continuity requirement, or in Savage's framework, P7? Of course Savage's P7 doesn't directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I'm less familiar with that.
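(For reference, stating it from memory so treat the exact phrasing as approximate, Savage's P7 is the dominance-style postulate:

\[
\text{if } f \preceq g(s) \text{ given } B \text{ for every } s \in B, \text{ then } f \preceq g \text{ given } B,
\]

and dually with \(\succeq\). It's this postulate that forces expected utility to keep giving correct comparisons for infinite gambles.)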

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.

That doesn't make sense. If you add axioms, you'll only be able to conclude more things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!

Comment by Sniffnoy on A summary of Savage's foundations for probability and utility. · 2020-01-16T02:03:52.980Z · LW · GW

Here's a quick issue I only just noticed but which fortunately is easily fixed:

Above I mentioned you probably want to restrict to a sigma-algebra of events and only allow measurable functions as actions. But, what does measurable mean here? Fortunately, the ordering on outcomes (even without utility) makes measurability meaningful. Except this puts a circularity in the setup, because the ordering on outcomes is induced from the ordering on actions.

Fortunately this is easily patched. You can start with the assumption of a total preorder on outcomes (considering the case of decisions without uncertainty), to make measurability meaningful and restrict actions to measurable functions (once we start considering decisions under uncertainty); then, for P3, instead of the current P3, you would strengthen the current P3 by saying that (on non-null sets) the induced ordering on outcomes actually matches the original ordering on outcomes. Then this should all be fine.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:40:43.816Z · LW · GW

(This is more properly a followup to my sibling comment, but posting it here so you'll see it.)

I already said that I think that thinking in terms of infinitary convex combinations, as you're doing, is the wrong way to go about it; but it took me a bit to put together why that's definitely the wrong way.

Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he's able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he's not necessarily allowing arbitrary actions).

Savage doesn't assume probability! So if you want to disallow certain actions... how do you specify them? Or if you want to talk about convex combinations of actions -- not just infinitary ones, any ones -- how do you even define these?

In Savage's framework, you have to prove that if two actions can be described by the same probabilities and outcomes, then they're equivalent. E.g., suppose action A results in outcome X with probability 1/2 and outcome Y with probability 1/2, and suppose action B meets that same description. Are A and B equivalent? Well, yes, but that requires proof, because maybe A and B take outcome X on different sets of probability 1/2. (OK, in the two-outcome case it doesn't really require "proof", rather it's basically just his definition of probability; but the more general case requires proof.)

So, until you've established that theorem, that it's meaningful to combine gambles like that, and that the particular events yielding the probabilities aren't relevant, one can't really meaningfully define convex combinations at all. This makes it pretty hard to incorporate them into the setup or axioms!

More generally this should apply not only to Savage's particular formalism, but any formalism that attempts to ground probability as well as utility.

Anyway yeah. As I think I already said, I think we should think of this in terms not of what combinations of actions yield permitted actions, but rather whether there should be forbidden actions at all. (Note btw that in the usual VNM setup there aren't any forbidden actions either! Although there, infinite gambles are, while not forbidden, just kind of ignored.) But this is in particular why trying to put it in terms of convex combinations as you've done doesn't really work from a fundamentals point of view, where there is no probability yet, only preferences.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:23:05.982Z · LW · GW

Apologies, but it sounds like you've gotten some things mixed up here? The issue is boundedness of utility functions, not whether they can take on infinity as a value. I don't think anyone here is arguing that utility functions don't need to be finite-valued. All the things you're saying seem to be related to the latter question rather than the former, or you seem to be possibly conflating them?

In the second paragraph perhaps this is just an issue of language -- when you say "infinitely high", do you actually mean "arbitrarily high"? -- but in the first paragraph this does not seem to be the case.

I'm also not sure you understood the point of my question, so let me make it more explicit. Taking the idea of a utility function and modifying it as you describe is what I called "backwards reasoning" above -- starting from the idea of a utility function, rather than starting from preferences. Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T09:56:32.529Z · LW · GW

Oh, so that's what you're referring to. Well, if you look at the theorem statements, you'll see that P=P_d is an axiom that is explicitly called out in the theorems where it's assumed; it's not implicitly part of Axiom 0 like you asserted, nor is it more generally left implicit at all.

but the important part is that last infinite sum: this is where all infinitary convex combinations are asserted to exist. Whether that is assigned to "background setup" or "axioms" does not matter. It has to be present, to allow the construction of St. Petersburg gambles.

I really think that thinking in terms of infinitary convex combinations is the wrong way to go about this here. As I said above: You don't get a St. Petersburg gamble by taking some fancy convex combination, you do it by just constructing the function. (Or, in Fishburn's framework, you do it by just constructing the distribution; same effect.) I guess without P=P_d you do end up relying on closure properties in Fishburn's framework, but Savage's framework just doesn't work that way at all; and Fishburn with P=P_d, well, that's not a closure property. Rather, what Savage's setup and P=P_d have in common is that they're, like, arbitrary-construction properties: If you can make a thing, you can compare it.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T08:11:20.562Z · LW · GW

Savage does not actually prove bounded utility. Fishburn did this later, as Savage footnotes in the edition I'm looking at, so Fishburn must be tackled.

Yes, it was actually Fishburn that did that. Apologies if I carelessly implied it was Savage.

IIRC, Fishburn's proof, formulated in Savage's terms, is in Savage's book, at least if you have the second edition. Which I think you must, because otherwise that footnote wouldn't be there at all. But maybe I'm misremembering? I think it has to be though...

In Savage's formulation, from P1-P6 he derives Theorem 4 of section 2 of chapter 5 of his book, which is linear interpolation in any interval.

I don't have the book in front of me, but I don't recall any discussion of anything that could be called linear interpolation, other than the conclusion that expected utility works for finite gambles. Could you explain what you mean? I also don't see the relevance of intervals here? Having read (and written a summary of) that part of the book I simply don't know what you're talking about.

Clearly, linear interpolation does not work on an interval such as [17,Inf], therefore there cannot be any infinitely valuable gambles. St. Petersburg-type gambles are therefore excluded from his formulation.

I still don't know what you're talking about here, but I'm familiar enough with Savage's formalism to say that you seem to have gotten quite lost somewhere, because this all sounds like nonsense.

From what you're saying, the impression that I'm getting is that you're treating Savage's formalism like Fishburn's, where there's some a-prior set of actions under consideration, and so we need to know closure properties about that set. But, that's not how Savage's formalism works. Rather the way it works is that actions are just functions (possibly with a measurability condition -- he doesn't discuss this but you probably want it) from world-states to outcomes. If you can construct the action as a function, there's no way to exclude it.

I shall have to examine further how his construction works, to discern what in Savage's axioms allows the construction, when P1-P6 have already excluded infinitely valuable gambles.

Well, I've already described the construction above, but I'll describe it again. Once again though, you're simply wrong about that last part; that last statement is not only incorrect, but fundamentally incompatible with Savage's whole approach.

Anyway. To restate the construction of how to make a St. Petersburg gamble. (This time with a little more detail.) An action is simply a function from world-states to outcomes.

By assumption, we have a sequence of outcomes a_i such that U(a_i) >= 2^i and such that U(a_i) is strictly increasing.

We can use P6 (which allows us to "flip coins", so to speak) to construct events E_i (sets of world-states) with probability 1/2^i.

Then, the action G that takes on the value a_i on the set E_i is a St. Petersburg gamble.
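(Quick check that G really is a St. Petersburg gamble, glossing over the exact indexing: its expected utility diverges, since

\[
\mathbb{E}[U(G)] = \sum_i P(E_i)\, U(a_i) \ge \sum_i 2^{-i} \cdot 2^{i} = \sum_i 1 = \infty,
\]

even though each individual outcome a_i has finite utility.)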

For the particular construction, you take G as above, and also G', which is the same except that G' takes the value a_1 on E_0, instead of the value a_0.

Savage proves in the book (although I think the proof is due to Fishburn? I'm going by memory) that given two gambles, both of which are preferred to any essentially bounded gamble, the agent must be indifferent between them. (The proof uses P7, obviously -- the same thing that proves that expected utility works for infinite gambles at all. I don't recall the actual proof offhand and don't feel like trying to reconstruct it right now, but anyway I think you have it in front of you from the sounds of it.) And we can show both these gambles are preferred to any essentially bounded gamble by comparing to truncated versions of themselves (using sure-thing principle) and using the fact that expected utility works for essentially bounded gambles. Thus the agent must be indifferent between G and G'. But also, by the sure-thing principle (P2 and P3), the agent must prefer G' to G. That's the contradiction.

Edit: Earlier version of this comment misstated how the proof goes

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:51:56.475Z · LW · GW

Fishburn (op. cit., following Blackwell and Girschick, an inaccessible source) requires that the set of gambles be closed under infinitary convex combinations.

Again, I'm simply not seeing this in the paper you linked? As I said above, I simply do not see anything like that outside of section 9, which is irrelevant. Can you point to where you're seeing this condition?

I shall take a look at Savage's axioms and see what in them is responsible for the same thing.

In the case of Savage, it's not any particular axiom, but rather the setup. An action is a function from world-states to outcomes. If you can construct the function, the action (gamble) exists. That's all there is to it. And the relevant functions are easy enough to construct, as I described above; you use P6 (the Archimedean condition, which also allows flipping coins, basically) to construct the events, and we have the outcomes by assumption. You assign the one to the other and there you go.

(If you don't want to go getting the book out, you may want to read the summary of Savage I wrote earlier!)

A short answer to this (something longer later) is that an agent need not have preferences between things that it is impossible to encounter. The standard dissolution of the St. Petersburg paradox is that nobody can offer that gamble. Even though each possible outcome is finite, the offerer must be able to cover every possible outcome, requiring that they have infinite resources. Since the gamble cannot be offered, no preferences between that gamble and any other need exist.

So, would it be fair to sum this up as "it is not necessary to have preferences between two gambles if one of them takes on unbounded utility values"? Interesting. That doesn't strike me as wholly unworkable, but I'm skeptical. In particular:

  1. Can we phrase this without reference to utility functions? It would say a lot more for the possibility if we can.
  2. What if you're playing against Nature? A gamble can be any action; and in a world of unbounded utility functions, why should one believe that any action must have some bound on how much utility it can get you? Sure, sure, second law of thermodynamics and all that, but that's just a feature of the particular universe we happen to live in, not something that reshapes your preferences. (And if we were taking account of that sort of thing, we'd probably just say, oh, utility is bounded after all, in a kind of stupid way.) Notionally, it could be discovered to be wrong! It won't happen, but it's not probability literally 0.

Or are you trying to cut out a more limited class of gambles as impossible? I'm not clear on this, although I'm not certain it affects the results.

Anyway, yeah, as I said, my main objection is that I see no reason to believe that, if you have an unbounded utility function, Nature cannot offer you a St. Petersburg game. Or I mean, to the extent I do see reasons to believe that, they're facts about the particular universe we happen to live in, that notionally could be discovered to be wrong.

Looking at the argument from the other end, at what point in valuing numbers of intelligent lives does one approach an asymptote, bearing in mind the possibility of expansion to the accessible universe? What if we discover that the habitable universe is vastly larger than we currently believe? How would one discover the limits, if there are any, to one's valuing?

This is exactly the sort of argument that I called "flimsy" above. My answer to these questions is that none of this is relevant.

Both of us are trying to extend our ideas about preferences from ordinary situations to extraordinary ones. (Like, I agree that some sort of total utilitarianism is a good heuristic for value under the conditions we're familiar with.) This sort of extrapolation, to an unfamiliar realm, is always potentially dangerous. The question then becomes, what sort of tools can we expect to continue to work, without needing any sort of adjustment to the new conditions?

I do not expect speculation about the particular form preferences our would take under these unusual conditions to be trustworthy. Whereas basic coherence conditions had damn well better continue to hold, or else we're barely even talking about sensible preferences anymore.

Or, to put it differently, my answer is, I don't know, but the answer must satisfy basic coherence conditions. There's simply no way that the idea that decision-theoretic utility has to increase linearly with the number of intelligent lives is on anywhere near as solid ground as that! The mere fact that it's stated in terms of a utility function in the first place, rather than in terms of something more basic, is something of a smell. Complicated statements we're not even entirely sure how to formulate can easily break in a new context. Short simple statements that have to be true for reasons of simple coherence don't break.

(Also, some of your questions don't seem to actually appreciate what a bounded utility function would actually mean. It wouldn't mean taking an unbounded utility function and then applying a cap to it. It would just mean something that naturally approaches 1 as things get better and 0 as things get worse. There is no point at which it approaches an asymptote; that's not how asymptotes work. There is no limit to one's valuing; presumably utility 1 does not actually occur. Or, at least, that's how I infer it would have to work.)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:08:09.006Z · LW · GW

Huh. This would need some elaboration, but this is definitely the most plausible way around the problem I've seen.

Now (in Savage's formalism) actions are just functions from world-states to outcomes (maybe with a measurability condition), so regardless of your prior it's easy to construct the relevant St. Petersburg gambles if the utility function is unbounded. But it seems like what you're saying is, if we don't allow arbitrary actions, then the prior could be such that, not only are none of the permitted actions St. Petersburg gambles, but also this remains the case even after future updates. Interesting! Yeah, that just might be workable...

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T06:54:24.108Z · LW · GW

OK, so going by that you're suggesting, like, introducing varying caps and then taking limits as the cap goes to infinity? It's an interesting idea, but I don't see why one would expect it to have anything to do with preferences.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:37:43.913Z · LW · GW

You should check out Abram's post on complete class theorems. He specifically addresses some of the concerns you mentioned in the comments of Yudkowsky's posts.

So, it looks to me like what Abram is doing -- once he gets past the original complete class theorem -- is basically just inventing some new formalism along the lines of Savage. I think it is very misleading to refer to this as "the complete class theorem" -- how on earth was I supposed to know that this was what was being referred to when "the complete class theorem" was mentioned, when it resembles the original theorem so little (and it's the original theorem that was linked to)? -- and I don't see why it was necessary to invent this anew, but sure, I can accept that it presumably works, even if the details aren't spelled out.

But I must note that he starts out by saying that he's only considering the case when there's only a finite set of states of the world! I realize you weren't making a point about bounded utility here; but from that point of view, it is quite significant...

Also, my inner model of Jaynes says that the right way to handle infinities is not to outlaw them, but to be explicit and consistent about what limits we're taking.

I don't really understand what that means in this context. It is already quite explicit what limits we're taking: Given an action (a measurable function from states of the world to outcomes), take its expected utility, with regard to the [finitely-additive] probability on states of the world. (Which is implicitly a limit of sorts.)

I think this is another one of those comments that makes sense if you're reasoning backward, starting from utility functions, but not if you're reasoning forward, from preferences. If you look at things from a utility-functions-first point of view, then it looks like you're outlawing infinities (well, unboundedness that leads to infinities). But from a preferences-first point of view, you're not outlawing anything. You haven't outlawed unbounded utility functions, rather they've just failed to satisfy fundamental assumptions about decision-making (remember, if you don't have P7 your utility function is not guaranteed to return correct results about infinite gambles at all!) and so clearly do not reflect your idealized preferences. You didn't get rid of the infinity, it was simply never there in the first place; the idea that it might have been turned out to be mistaken.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:20:25.895Z · LW · GW

I think you've misunderstood a fair bit. I hope you don't mind if I address this slightly out of order.

Or if infinite utilities are not immediately a problem, then by a more complicated argument, involving constructing multiple St. Petersburg-type combinations and demonstrating that the axioms imply that there both should and should not be a preference between them.

This is exactly what Fishburn does, as I mentioned above. (Well, OK, I didn't attribute it to Fishburn, I kind of implicitly misattributed it to Savage, but it was actually Fishburn; I didn't think that was worth going into.)

I haven't studied the proof of boundedness in detail, but it seems to be that unbounded utilities allow St. Petersburg-type combinations of them with infinite utilities, but since each thing is supposed to have finite utility, that is a contradiction.

He does not give details, but the argument that I conjecture from his text is that if there are unbounded utilities then one can construct a convex combination of infinitely many of them that has infinite utility (and indeed one can), contradicting the proof from his axioms that the utility function is a total function to the real numbers.

What you describe in these two parts I'm quoting is, well, not how decision-theoretic utility functions work. A decision-theoretic utility function is a function on outcomes, not on gambles over outcomes. You take expected utility of a gamble; you don't take utility of a gamble.
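In symbols, just to restate the distinction: the utility function is defined on outcomes, \(U : O \to \mathbb{R}\), and for a gamble \(g\) that yields outcome \(o_i\) with probability \(p_i\), what one computes is

\[
\mathbb{E}[U(g)] = \sum_i p_i\, U(o_i),
\]

which may diverge even though every individual \(U(o_i)\) is finite.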

So, yes, if you have an unbounded decision-theoretic utility function, you can set up a St. Petersburg-style situation that will have infinite expected utility. But that is not by itself a problem! The gamble has infinite expected utility; no individual outcome has infinite utility. There's no contradiction yet.

Of course, you then do get a contradiction when you attempt to compare two of these that have been appropriately set up, but...

But by a similar argument, one might establish that the real numbers must be bounded, when instead one actually concludes that not all series converge

What? I don't know what one might plausibly assume that might imply the boundedness of the real numbers.

...oh, I think I see the analogy you're going for here. But, it seems to rest on the misunderstanding of utility functions discussed above.

and that one cannot meaningfully compare the magnitudes of divergent infinite series.

Well, so, one must remember the goal here. So, let's start with divergent series, per your analogy. (I'm assuming you're discussing series of nonnegative numbers here, that diverge to infinity.)

So, well, there's any number of ways we could compare divergent series. We could just say that they sum to infinity, and so are equal in magnitude. Or we could try to do a more detailed comparison of their growth rates. That might not always yield a well-defined result though. So yeah. There's not any one universal way to compare magnitudes of divergent series, as you say; if someone asks, which of these two series is bigger, you might just have to say, that's a meaningless question. All this is as you say.

But that's not at all the situation we find ourselves in choosing between two gambles! If you reason backward, from the idea of utility functions, it might seem reasonable to say, oh, these two gambles are both divergent, so comparison is meaningless. But if you reason forward, from the idea of preferences... well, you have to pick one (or be indifferent). You can't just leave it undefined. Or if you have some formalism where preferences can be undefined (in a way that is distinct from indifference), by all means explain it... (but what happens when you program these preferences into an FAI and it encounters this situation? It has to pick. Does it pick arbitrarily? How is that distinct from indifference?)

That we have preferences between gambles is the whole thing we're starting from.

> I note that in order to construct convex combinations of infinitely many states, Fishburn extends his axiom 0 to allow this. He does not label this extension separately as e.g. "Axiom 0*". So if you were to ask which of his axioms to reject in order to retain unbounded utility, it could be none of those labelled as such, but the one that he does not name, at the end of the first paragraph on p.1055. Notice that the real numbers satisfy Axiom 0 but not Axiom 0*. It is that requirement that all infinite convex combinations exist that surfaces later as the boundedness of the range of the utility function.

Sorry, but looking through Fishburn's paper I can't see anything like this. The only place where any sort of infinite combination seems to be mentioned is section 9, which is not relevant. Axiom 0 means one thing throughout and allows only finite convex combinations. I simply don't see where you're getting this at all.

(Would you mind sticking to Savage's formalism for simplicity? I can take the time to properly read Fishburn if for some reason you insist things have to be done this way, but otherwise for now I'm just going to put things in Savage's terms.)

In any case, in Savage's formalism there's no trouble in proving that the necessary actions exist -- you don't have to go taking convex combinations of anything, you simply directly construct the functions. You just need an appropriate partition of the set of world-states (provided by the Archimedean axiom he assumes, P6) and an appropriate set of outcomes (which comes from the assumption of unbounded utility). You don't have to go constructing other things and then doing some fancy infinite convex combination of them.
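Roughly the construction I have in mind (my notation; the specific probabilities are just for illustration): P6, together with the rest of the setup, gives a partition of the world-states into events $E_1, E_2, \ldots$ with $P(E_n) = 2^{-n}$, and unbounded utility gives outcomes $o_n$ with $U(o_n) \ge 2^n$. Then the act defined by

$$f(s) = o_n \quad \text{for } s \in E_n$$

is a perfectly ordinary function from states to outcomes, and it realizes the kind of infinite-expected-utility gamble sketched earlier -- no infinite convex combinations required.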

If you don't mind, I'd like to ask: could you just tell me what in particular in Savage's setup or axioms you find to be the probable weak point? If it's P7 you object to, well, I already discussed that in the post; if you get rid of that, the utility function may be unbounded but it's no longer guaranteed to give correct results when comparing infinite gambles.

> While searching out the original sources, I found a paper indicating that at least in 1993, bounded utility theorems were seen as indicating a problem with Savage's axioms: "Unbounded utility for Savage's 'Foundations of Statistics' and Other Models", by Peter Wakker. There is another such paper from 2014. I haven't read them, but they indicate that proofs of boundedness of utility are seen as problems for the axioms, not discoveries that utility must be bounded.

I realize a number of people see this as a problem. Evidently they have some intuition or argument that disagrees with the boundedness of utility. Whatever this intuition or argument is, I would be very surprised if it were as strong as the argument that utility must be bounded. There's no question that assumptions can be bad. I just think the reasons that have been offered for thinking these assumptions are bad are seriously flimsy compared to the reasons to think that they're good. So I see this as basically a sort of refusal to take the math seriously. (Again: Which axiom should we throw out, or what part of the setup should we rework?)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:31:23.979Z · LW · GW

> Is there a reason we can't just solve this by proposing arbitrarily large bounds on utility instead of infinite bounds? For instance, if we posit that utility is bounded by some arbitrarily high value X, then the wager can only pay out values of X for probabilities below 1/X.

I'm not sure what you're asking here. An individual decision-theoretic utility function can be bounded or it can be unbounded. Since decision-theoretic utility functions can be rescaled arbitrarily, naming a precise value for the bounds is meaningless; so like we could just assume the bounds are 0 below and 1 above.
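A minimal sketch of what I mean by rescaling (assuming preferences are nontrivial, so $\sup U > \inf U$): if $U$ is bounded, then

$$U'(o) = \frac{U(o) - \inf U}{\sup U - \inf U}$$

takes values in $[0, 1]$ and, being a positive affine transformation of $U$, ranks every gamble exactly as $U$ does. So "bounded by some huge $X$" and "bounded by 1" describe the same situation; the only real distinction is bounded vs. unbounded.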

So, I mean, yeah, you can make the problem go away by assuming bounded utility, but if you were trying to say something more than that, a bounded utility that is somehow "closer" to unbounded utility, then no such notion is meaningful.

Apologies if I've misunderstood what you're trying to do.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:27:22.242Z · LW · GW

Yes, thanks, I didn't bother including it in the body of the post but that's basically how it goes. Worth noting that this:

> Both of these wagers have infinite expected utility, so we must be indifferent between them.

...is kind of shortcutting a bit (at least as Savage/Fishburn[0] does it; he proves indifference between things of infinite expected utility separately after proving that expected utility works when it's finite), but that is the essence of it, yes.

(As for the actual argument... eh, I don't have it in front of me and don't feel like rederiving it...)

[0]I initially wrote Savage here, but I think this part is actually due to Fishburn. Don't have the book in front of me right now though.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:23:39.526Z · LW · GW

By "a specific gamble" do you mean "a specific pair of gambles"? Remember, preferences are between two things! And you hardly need a utility function to express a preference between a single pair of gambles.

I don't understand how to make sense of what you're saying. An agent's preferences are the starting point -- preferences as in, given a choice between the two, which do you pick? It's not clear to me how you have a notion of preference that allows for this to be undefined (the agent can be indifferent, but that's distinct).

I mean, you could try to come up with such a thing, but I'd be pretty skeptical of its meaningfulness. (What happens if you program these preferences into an FAI and then it hits a choice for which its preference is undefined? Does it act arbitrarily? How does this differ from indifference, then? By lack of transitivity, maybe? But then that's effectively just nontransitive indifference, which seems like it would be a problem...)

I think your comment is the sort of thing that sounds reasonable if you reason backward, starting from the idea of expected utility, but will fall apart if you reason forward, starting from the idea of preferences. But if you have some way of making it work, I'd be interested to hear...

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:14:34.737Z · LW · GW

> If you're not making a prioritarian aggregate utility function by summing functions of individual utility functions, the mapping of a prioritarian function to a utility function doesn't always work. Prioritarian utility functions, for instance, can do things like rank-order everyone's utility functions and then sum each individual utility raised to the negative-power of the rank-order ... or something*. They allow interactions between individual utility functions in the aggregate function that are not facilitated by the direct summing permitted in utilitarianism.

This is a good point. I might want to go back and edit the original post to account for this.
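For concreteness, here's a rough sketch in code of the sort of rank-ordered aggregation you describe (the $2^{-\text{rank}}$ weights are just one way of cashing out the idea, not anyone's canonical proposal):

```python
def utilitarian(utilities):
    # Plain utilitarian aggregation: sum everyone's utility independently.
    return sum(utilities)

def rank_weighted_prioritarian(utilities):
    # Sort from worst-off to best-off, then weight each person's utility by a
    # factor that shrinks with rank, so the worst-off count for the most.
    # (Illustrative weights only -- the point is the dependence on rank.)
    ranked = sorted(utilities)
    return sum(u * 2.0 ** (-rank) for rank, u in enumerate(ranked))

# The weight attached to any one person's utility depends on how everyone else
# is doing, so this is not expressible as a sum of per-person terms f_i(u_i).
print(rank_weighted_prioritarian([3.0, 1.0, 2.0]))  # 1*1.0 + 0.5*2.0 + 0.25*3.0 = 2.75
```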

> So from a mathematical perspective, it is possible to represent many prioritarian utility functions as conventional utilitarian utility functions. However, from an intuitive perspective, they mean different things:

> This doesn't practically affect the decision-making of a moral agent, but it does reflect different underlying philosophies -- which affects the kinds of utility functions people might propose.

Sure, I'll agree that they're different in terms of ways of thinking about things, but I thought it was worth pointing out that in terms of what they actually propose they are largely indistinguishable without further constraints.