Posts

Underappreciated points about utility functions (of both sorts) 2020-01-04T07:27:28.041Z
Goal-thinking vs desire-thinking 2019-11-10T00:31:40.661Z
Three types of "should" 2018-06-02T00:54:49.171Z
Meetup : Ann Arbor Area Amalgam of Rationalist-Adjacent Anthropoids: March 11 2017-03-10T00:41:02.064Z
Link: Simulating C. Elegans 2014-11-20T09:30:03.864Z
Terminology suggestion: Say "degrees utility" instead of "utils" to prompt affine thinking 2013-05-19T08:03:05.421Z
Should correlation coefficients be expressed as angles? 2012-11-28T00:05:50.347Z
Example of neurons usefully communicating without synapses found in flies [LINK] 2012-11-23T23:25:43.019Z
Where to report bugs on the new site? 2011-08-12T08:32:21.766Z
[LINK] Reverse priming effect from awareness of persuasion attempt? 2011-08-05T18:02:07.579Z
A summary of Savage's foundations for probability and utility. 2011-05-22T19:56:27.952Z
Link: Chessboxing could help train automatic emotion regulation 2011-01-22T23:40:17.572Z
Draft/wiki: Infinities and measuring infinite sets: A quick reference 2010-12-24T04:52:53.090Z
Definitions, characterizations, and hard-to-ground variables 2010-12-03T03:18:07.947Z

Comments

Comment by Sniffnoy on In support of Yak Shaving · 2021-09-01T00:25:04.100Z · LW · GW

Seems to me the situation in the original yak-shaving story falls into case 2 -- the thing to do is to forget about borrowing the EZPass and just pay the toll!

Comment by Sniffnoy on Founding a rationalist group at the University of Michigan · 2021-08-12T07:23:48.936Z · LW · GW

There used to be an Ann Arbor LW meetup group, actually, back when I lived there -- it seems to be pretty dead now as best I can tell, but the mailing list still exists. It's A4R-A2@googlegroups.com; I don't know how relevant this is to you, since you're trying to start a UM group and many of the people on that list will likely not be UM-affiliated, but you can at least try recruiting from there (or just restarting it if you're not necessarily trying to specifically start a UM group). It also used to have a website, though I can't find it at the moment, and I doubt it would be that helpful anyway.

According to the meetup group list on this website, there also is or was a UM EA group, but there's not really any information about it? And there's this SSC meetup group listed there too, which has more recent activity possibly? No idea who's in that, I don't know this Sam Rossini, but possibly also worth recruiting from?

So, uh, yeah, that's my attempt (as someone who hasn't lived in Ann Arbor for two years) to survey the prior work in this area. :P Someone who's actually still there could likely say more...

Comment by Sniffnoy on A Contamination Theory of the Obesity Epidemic · 2021-07-25T18:03:13.199Z · LW · GW

Oh, huh -- looks like this paper is the summary of the blog series that "Slime Mold Time Mold" has been writing? Guess I can read this paper to skip to the end, since not all of it is posted yet. :P

Comment by Sniffnoy on Can crimes be discussed literally? · 2021-04-09T18:45:37.375Z · LW · GW

Yeah. You can use language that is unambiguously not attack language, it just takes more effort to avoid common words. In this respect it's not unlike how discussing lots of other things seriously requires avoiding common but confused words!

Comment by Sniffnoy on Classifying games like the Prisoner's Dilemma · 2021-04-09T03:51:51.979Z · LW · GW

I'm reminded of this paper, which discusses a smaller set of two-player games. What you call "Cake Eating" they call the "Harmony Game". They also use the more suggestive variable names -- which I believe come from existing literature -- R (reward), S (sucker's payoff), T (temptation), P (punishment) instead of (W, X, Y, Z). Note that in addition to R > P (W > Z) they also added the restrictions T > P (Y > Z) and R > S (W > X) so that the two options could be meaningfully labeled "cooperate" and "defect" instead of "Krump" and "Flitz" (the cooperate option is always better for the other player, regardless of whether it's better or worse for you). (I'm ignoring cases of things being equal, just like you are.)

(Of course, the paper isn't actually about classifying games, it's an empirical study of how people actually play these games! But I remember it for being the first place I saw such a classification...)

With these additional restrictions, there are only four games: Harmony Game (Cake Eating), Chicken (Hawk-Dove/Snowdrift/Farmer's Dilemma), Stag Hunt, and Prisoner's Dilemma (Too Many Cooks).
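
(If it helps to see the classification spelled out: with R > P, T > P, and R > S fixed, the four games are distinguished by the two remaining comparisons, T vs. R and S vs. P. Here's a quick sketch in Python -- the payoff numbers are made up purely for illustration:)

```python
# Classify a symmetric 2x2 game by the ordering of
# R (reward), S (sucker's payoff), T (temptation), P (punishment).
# Assumes R > P, T > P, R > S, and ignores ties, as in the discussion above.
def classify(R, S, T, P):
    assert R > P and T > P and R > S, "outside the restricted class"
    if T > R and P > S:
        return "Prisoner's Dilemma"  # T > R > P > S: defection strictly dominates
    if T > R and S > P:
        return "Chicken"             # T > R > S > P
    if R > T and P > S:
        return "Stag Hunt"           # R > T > P > S
    if R > T and S > P:
        return "Harmony Game"        # cooperation strictly dominates
    return "tie somewhere (ignored here)"

print(classify(R=3, S=0, T=5, P=1))  # Prisoner's Dilemma
print(classify(R=3, S=1, T=5, P=0))  # Chicken
print(classify(R=5, S=0, T=3, P=1))  # Stag Hunt
print(classify(R=5, S=3, T=1, P=0))  # Harmony Game
```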

I'd basically been using that as my way of thinking about two-player games, but this broader set might be useful. Thanks for taking the time to do this and assign names to these.

I do have to wonder about that result that Zack_M_Davis mentions... as you mentioned, where's the Harmony Game in it? Also, isn't Battle of the Sexes more like Chicken than like Stag Hunt? I would expect to see Chicken and Stag Hunt, not Battle of the Sexes and Chicken, which sounds like the same thing twice and seems to leave out Stag Hunt. But maybe Battle of the Sexes is actually equivalent, in the sense described, to Stag Hunt rather than Chicken? That would be surprising, but I didn't sit down to check whether the definition is satisfied or not...

Comment by Sniffnoy on Thirty-three randomly selected bioethics papers · 2021-03-24T17:53:05.093Z · LW · GW

I suppose so. It is at least a different problem than I was worried about...

Comment by Sniffnoy on Thirty-three randomly selected bioethics papers · 2021-03-23T19:44:28.463Z · LW · GW

Huh. Given the negative reputation of bioethics around here -- one I hadn't much questioned, TBH -- most of these are surprisingly reasonable. Only #10, #16, and #24 really seemed like the LW stereotype of the bioethics paper that I would roll my eyes at. Arguably also #31, but I'd argue that one is instead alarming in a different way.

Some others seemed like bureaucratic junk (so, neither good nor bad), and for others I think the quoted sections didn't really give enough information to judge; it is quite possible that a few more of these would go under the stereotype list if I read these papers further.

#1 is... man, why does it have to be so hostile? The argument it's making is basically a counter-stereotypical bioethics argument, but it's written in such a hostile manner. That's not the way to have a good discussion!

Also, I'm quite amused to see that #3 basically argues that we need what I've previously referred to here as a "theory of legitimate influence", for what appear likely to be similar reasons (although again I didn't read the full thing to inspect this in more detail).

Comment by Sniffnoy on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T19:10:06.693Z · LW · GW

Consider a modified version of the prisoner's dilemma. This time, the prisoners are allowed to communicate, but they also have to solve an additional technical problem, say, how to split the loot. They may start with agreeing on not betraying each other to the prosecutors, but later one of them may say: "I've done most of the work. I want 70% of the loot, otherwise I am going to rat on you." It's easy to see how the problem would escalate and end up in the prisoners betraying each other.

Minor note, but I think you could just talk about a [bargaining game](https://en.wikipedia.org/wiki/Cooperative_bargaining), rather than the Prisoner's Dilemma, which appears to be unrelated. There are other basic game theory examples beyond the Prisoner's Dilemma!

Comment by Sniffnoy on Dark Matters · 2021-03-17T04:50:03.604Z · LW · GW

I just explained why (without more specific theories of exactly how the gravity would become delocalized from the visible mass) the bullet cluster is not evidence one way or the other.

Now, you compare the extra fields of modified gravity to epicycles -- as in, post-hoc complications grafted on to a theory to explain a particular phenomenon. But these extra fields are, to the best of my understanding, not grafted on to explain such delocalization; they're the actual basic content of the modified gravity theories and necessary to obtain a workable theory at all. MOND by itself, after all, is not a theory of gravity; the problem then is making a theory of gravity compatible with it, and every actual attempt at that which I'm aware of involves these extra fields -- again, not as an epicycle for the bullet cluster, but as a way of constructing a workable theory at all. So, I don't think that comparison is apt here.

One could perhaps say that such theories are epicycles upon MOND -- since the timeline may go MOND, then bullet cluster, then proper modified gravity theories -- but for the reasons above I don't think that makes a lot of sense either.

If this was some post-hoc epicycle then your comment would make some sense; but as it is, I don't think it does. Is there some reason that I'm missing that it should be regarded as a post-hoc epicycle?

Note that Hossenfelder herself says modified gravity is probably not correct! It's still important to understand what is or is not a valid argument against it. The other arguments for dark matter sure seem pretty compelling!

(Also, uh, "people who think X are just closed-minded and clearly not open to persuasion" is generally not the sort of charity we try to go for here on LW...? I didn't downvote you but, like, accusing people of being closed-minded rather than actually arguing is on the path to becoming similarly closed-minded oneself, you know?)

Comment by Sniffnoy on Defending the non-central fallacy · 2021-03-17T03:35:22.661Z · LW · GW

I feel like this really misses the point of the whole "non-central fallacy" idea. I would say, categories are heuristics and those heuristics have limits. When the category gets strained, the thing to do is to stop arguing using the category and start arguing the particular facts without relation to the category ("taboo your words").

You're saying that this sort of arguing-via-category is useful because it's actually arguing-via-similarity; but I see the point of Scott/Yvain's original article as being that such arguing via similarity simply isn't useful in such cases, and has to be replaced with a direct assessment of the facts.

Like, one might say, similar in what way, and how do we know that this particular similarity is relevant in this case? But any answer to why the similarity is relevant could be translated into an argument that doesn't rely on the similarity in the first place. Similarity can thus be a useful guide to finding arguments, but it shouldn't, in contentious cases, be considered compelling as an argument itself.

Yes, as you say, the argument is common because it is useful as a quick shorthand most of the time. But in contentious cases, in edge cases -- the cases that people are likely to be arguing about -- it breaks down. That is to say, it's an argument whose validity is largely limited to those cases where people aren't arguing to begin with!

Comment by Sniffnoy on Dark Matters · 2021-03-15T08:36:20.808Z · LW · GW

Good post. Makes a good case. I wasn't aware of the evidence from galactic cluster lensing; that's pretty impressive. (I guess not as much as the CMB power spectrum, but that I'd heard about before. :P )

But, my understanding is that the Bullet Cluster is actually not the strong evidence it's claimed to be? My understanding of modified gravity theories is that, since they all work by adding extra fields, it's also possible for those to have gravity separated from visible matter, even if no dark matter is present. (See e.g. here... of course in this post Hossenfelder claims that the Bullet Cluster in particular is actually evidence against dark matter due to simulation reasons, but I don't know how much to believe that.)

Of course this means that modified gravity theories also aren't quite as different from dark matter as they're commonly said to be -- with either dark matter or modified gravity you're adding an additional field, the difference is just (OK, this is maybe a big just!) the nature of that field. But since this new field would presumably not act like matter in all the other ways you describe, my understanding is that it is still definitely distinct from "dark matter" for the purposes of this post.

Apparently these days even modified gravity proponents admit you still need dark matter to make things work out, which rather kills the whole motivation behind modified gravity, so I'm not sure if that's really an idea that makes sense anymore! Still, had to point out the thing about the Bullet Cluster, because based on what I know I don't think that part is actually correct.

Comment by Sniffnoy on Blue is Arbitrary · 2021-03-14T19:44:36.317Z · LW · GW

"Cyan" isn't a basic color term in English; English speakers ordinarily consider cyan to be a variant of blue, not something basically separate. Something that is cyan could also be described in English as "blue". As opposed to say, red and pink -- these are both basic color terms in English; an English speaker would not ordinarily refer to something pink as "red", or vice versa.

Or in other words: Color words don't refer to points in color space, they refer to regions, which means that you can look at how those regions overlap -- some may be subsets of others, some may be disjoint (well -- not disjoint per se, but thought of as disjoint, since obviously you can find things near the boundary that won't be judged consistently), etc. Having words "blue" and "cyan" that refer to two thought-of-as-disjoint regions is pretty different from having words "blue" and "cyan" where the latter refers to a subset of the former.

So, it's not as simple as saying "English also has a word cyan" -- yes, it does, but the meaning of that word, and the relation of its meaning to that of "blue", is pretty different. These translated words don't quite correspond; we're taking regions in color space, and translating them to words that refer to similar regions, regions that contain a number of the same points, but not the same ones.

The bit in the comic about "Eurocentric paint" obviously doesn't quite make sense as stated -- the division of the rainbow doesn't come from paint! -- but a paint set that focused on the central examples of basic color terms of a particular language could reasonably be called a that-language-centric paint set. In any case the basic point is just that dividing up color space into basic color terms has a large cultural component to it.

Comment by Sniffnoy on Making Vaccine · 2021-02-04T06:12:25.675Z · LW · GW

Wow!

I guess a thing that still bugs me after reading the rest of the comments is, if it turns out that this vaccine only offers protection against inhaling the virus through the nose, how much does that help when one considers that one could also inhale it through the mouth? Like, I worry that after taking this I'd still need to avoid indoor spaces with other people, etc., which would defeat a lot of the benefit of it.

But, if it turns out that it does yield antibodies in the blood, then... this sounds very much worth trying!

Comment by Sniffnoy on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-09-15T17:51:23.616Z · LW · GW

So, why do we perceive so many situations to be Prisoner's Dilemma-like rather than Stag Hunt-like?

I don't think that we do, exactly. I think that most people only know the term "prisoners' dilemma" and haven't learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners' Dilemma as a result. :-/

Comment by Sniffnoy on Toolbox-thinking and Law-thinking · 2020-09-06T20:51:21.041Z · LW · GW

That sounds like it might have been it?

Comment by Sniffnoy on Swiss Political System: More than You ever Wanted to Know (III.) · 2020-08-11T20:29:25.611Z · LW · GW

Sorry, but after reading this I'm not very clear on just what exactly the "Magic Formula" refers to. Could you state it explicitly?

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-02-28T22:58:16.505Z · LW · GW

Oops, turns out I did misremember -- Savage does not in fact put the proof in his book. You have to go to Fishburn's book.

I've been reviewing all this recently and yeah -- for anyone else who wants to get into this, I'd recommend getting Fishburn's book ("Utility Theory for Decision Making") in addition to Savage's "Foundations of Statistics". Because in addition to the above, what I'd also forgotten is that Savage leaves out a bunch of the proofs. It's really annoying. Thankfully in Fishburn's treatment he went and actually elaborated all the proofs that Savage thought it OK to skip over...

(Also, stating the obvious, but get the second edition of "Foundations of Statistics", as it fixes some mistakes. You probably don't want just Fishburn's book, it's fairly hard to read by itself.)

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-06T20:24:51.447Z · LW · GW

Oh, I see. I misread your comment then. Yes, I am assuming one already has the ability to discern the structure of an argument and doesn't need to hire someone else to do that for you...

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-05T18:53:19.983Z · LW · GW

What I said above. Sorry, to be clear here, by "argument structure" I don't mean the structure of the individual arguments but rather the overall argument -- what rebuts what.

(Edit: Looks like I misread the parent comment and this fails to respond to it; see below.)

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-03T20:55:24.801Z · LW · GW

This is a good point (the redemption movement comes to mind as an example), but I think the cases I'm thinking of and the cases you're describing look quite different in other details. Like, the bored/annoyed expert tired of having to correct basic mistakes, vs. the salesman who wants to initiate you into a new, exciting secret. But yeah, this is only a quick-and-dirty heuristic, and even then only good for distinguishing snake oil; it might not be a good idea to put too much weight on it, and it definitely won't help you in a real dispute ("Wait, both sides are annoyed that the other is getting basic points wrong!"). As Eliezer put it -- you can't learn physics by studying psychology!

Comment by Sniffnoy on What Money Cannot Buy · 2020-02-01T22:16:28.738Z · LW · GW

Given a bunch of people who disagree, some of whom are actual experts and some of whom are selling snake oil, and lacking the relevant expertise yourself, there are some further quick-and-dirty heuristics you can use to tell which of the two groups is which. I think basically my suggestion can be best summarized as "look at argument structure".

The real experts will likely spend a bunch of time correcting popular misconceptions, which the fakers may subscribe to. By contrast, the fakers will generally not bother "correcting" the truth to their fakery, because why would they? They're trying to sell to unreflective people who just believe the obvious-seeming thing; someone who actually bothered to read corrections to misconceptions at any point is likely too savvy to be their target audience.

Sometimes though you do get actual arguments. Fortunately, it's easier to evaluate arguments than to determine truth oneself -- of course, this is only any good if at least one of the parties is right! If everyone is wrong, heuristics like this will likely be no help. But in an experts-and-fakers situation, where one of the groups is right and the other pretty definitely wrong, you can often just use heuristics like "which side has arguments (that make some degree of sense) that the other side has no answer to (that makes any sense)?". If we grant the assumption that one of the two sides is right, then it's likely to be that one.

When you actually have a lot of back-and-forth arguing -- as you might get in politics, or in disputes between actual experts -- the usefulness of this sort of thing can drop quickly, but if you're just trying to sort out fakers from those with actual knowledge, I think it can work pretty well. (Although honestly, in a dispute between experts, I think "left a key argument unanswered" is still a pretty big red flag.)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-28T07:25:59.782Z · LW · GW

Well, it's worth noting that P7 is introduced to address gambles with infinitely many possible outcomes, regardless of whether those outcomes are bounded or not (which is the reason I argue above you can't just get rid of it). But yeah. Glad that's cleared up now! :)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-17T04:58:12.530Z · LW · GW

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]

That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I'm confused by your claim that you're asking about conditions, when you haven't been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.

Something seems to be backwards here.

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality condition. We'd need

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

I'm confused here; it sounds like you're just describing, in the VNM framework, the strong continuity requirement, or in Savage's framework, P7? Of course Savage's P7 doesn't directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I'm less familiar with that.

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.

That doesn't make sense. If you add axioms, you'll only be able to conclude more things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!

Comment by Sniffnoy on A summary of Savage's foundations for probability and utility. · 2020-01-16T02:03:52.980Z · LW · GW

Here's a quick issue I only just noticed but which fortunately is easily fixed:

Above I mentioned you probably want to restrict to a sigma-algebra of events and only allow measurable functions as actions. But, what does measurable mean here? Fortunately, the ordering on outcomes (even without utility) makes measurability meaningful. Except this puts a circularity in the setup, because the ordering on outcomes is induced from the ordering on actions.

Fortunately this is easily patched. You can start with the assumption of a total preorder on outcomes (considering the case of decisions without uncertainty), to make measurability meaningful and restrict actions to measurable functions (once we start considering decisions under uncertainty); then, instead of the current P3, you would use a strengthened P3 saying that (on non-null sets) the induced ordering on outcomes actually matches the original ordering on outcomes. Then this should all be fine.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:40:43.816Z · LW · GW

(This is more properly a followup to my sibling comment, but posting it here so you'll see it.)

I already said that I think that thinking in terms of infinitary convex combinations, as you're doing, is the wrong way to go about it; but it took me a bit to put together why that's definitely the wrong way.

Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he's able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he's not necessarily allowing arbitrary actions).

Savage doesn't assume probability! So if you want to disallow certain actions... how do you specify them? Or if you want to talk about convex combinations of actions -- not just infinitary ones, any ones -- how do you even define these?

In Savage's framework, you have to prove that if two actions can be described by the same probabilities and outcomes, then they're equivalent. E.g., suppose action A results in outcome X with probability 1/2 and outcome Y with probability 1/2, and suppose action B meets that same description. Are A and B equivalent? Well, yes, but that requires proof, because maybe A and B take outcome X on different sets of probability 1/2. (OK, in the two-outcome case it doesn't really require "proof", rather it's basically just his definition of probability; but the more general case requires proof.)
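
(Stated a bit more formally -- my phrasing, not Savage's -- for actions f, g from world-states to outcomes taking only finitely many values, with P the derived probability and ∼ indifference:)

$$\bigl(\forall x:\ P(f^{-1}(x)) = P(g^{-1}(x))\bigr) \;\Longrightarrow\; f \sim g.$$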

So, until you've established that theorem, that it's meaningful to combine gambles like that, and that the particular events yielding the probabilities aren't relevant, one can't really meaningfully define convex combinations at all. This makes it pretty hard to incorporate them into the setup or axioms!

More generally this should apply not only to Savage's particular formalism, but any formalism that attempts to ground probability as well as utility.

Anyway yeah. As I think I already said, I think we should think of this in terms not of what combinations of actions yield permitted actions, but rather whether there should be forbidden actions at all. (Note btw in the usual VNM setup there aren't any forbidden actions either! Although there, infinite gambles are, while not forbidden, just kind of ignored.) But this is in particular why trying to put it in terms of convex combinations as you've done doesn't really work from a fundamentals point of view, where there is no probability yet, only preferences.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:23:05.982Z · LW · GW

Apologies, but it sounds like you've gotten some things mixed up here? The issue is boundedness of utility functions, not whether they can take on infinity as a value. I don't think anyone here is arguing that utility functions don't need to be finite-valued. All the things you're saying seem to be related to the latter question rather than the former, or you seem to be possibly conflating them?

In the second paragraph perhaps this is just an issue of language -- when you say "infinitely high", do you actually mean "arbitrarily high"? -- but in the first paragraph this does not seem to be the case.

I'm also not sure you understood the point of my question, so let me make it more explicit. Taking the idea of a utility function and modifying it as you describe is what I called "backwards reasoning" above -- starting from the idea of a utility function, rather than starting from preferences. Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T09:56:32.529Z · LW · GW

Oh, so that's what you're referring to. Well, if you look at the theorem statements, you'll see that P=P_d is an axiom that is explicitly called out in the theorems where it's assumed; it's not implicitly part of Axiom 0 like you asserted, nor is it more generally left implicit at all.

but the important part is that last infinite sum: this is where all infinitary convex combinations are asserted to exist. Whether that is assigned to "background setup" or "axioms" does not matter. It has to be present, to allow the construction of St. Petersburg gambles.

I really think that thinking in terms of infinitary convex combinations is the wrong way to go about this here. As I said above: You don't get a St. Petersburg gamble by taking some fancy convex combination, you do it by just constructing the function. (Or, in Fishburn's framework, you do it by just constructing the distribution; same effect.) I guess without P=P_d you do end up relying on closure properties in Fishburn's framework, but Savage's framework just doesn't work that way at all; and Fishburn with P=P_d, well, that's not a closure property. Rather what Savage's setup, and P=P_d have in common, is that they're, like, arbitrary-construction properties: If you can make a thing, you can compare it.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T08:11:20.562Z · LW · GW

Savage does not actually prove bounded utility. Fishburn did this later, as Savage footnotes in the edition I'm looking at, so Fishburn must be tackled.

Yes, it was actually Fishburn that did that. Apologies if I carelessly implied it was Savage.

IIRC, Fishburn's proof, formulated in Savage's terms, is in Savage's book, at least if you have the second edition. Which I think you must, because otherwise that footnote wouldn't be there at all. But maybe I'm misremembering? I think it has to be though...

In Savage's formulation, from P1-P6 he derives Theorem 4 of section 2 of chapter 5 of his book, which is linear interpolation in any interval.

I don't have the book in front of me, but I don't recall any discussion of anything that could be called linear interpolation, other than the conclusion that expected utility works for finite gambles. Could you explain what you mean? I also don't see the relevance of intervals here? Having read (and written a summary of) that part of the book I simply don't know what you're talking about.

Clearly, linear interpolation does not work on an interval such as [17,Inf], therefore there cannot be any infinitely valuable gambles. St. Petersburg-type gambles are therefore excluded from his formulation.

I still don't know what you're talking about here, but I'm familiar enough with Savage's formalism to say that you seem to have gotten quite lost somewhere, because this all sounds like nonsense.

From what you're saying, the impression that I'm getting is that you're treating Savage's formalism like Fishburn's, where there's some a priori set of actions under consideration, and so we need to know closure properties about that set. But, that's not how Savage's formalism works. Rather the way it works is that actions are just functions (possibly with a measurability condition -- he doesn't discuss this but you probably want it) from world-states to outcomes. If you can construct the action as a function, there's no way to exclude it.

I shall have to examine further how his construction works, to discern what in Savage's axioms allows the construction, when P1-P6 have already excluded infinitely valuable gambles.

Well, I've already described the construction above, but I'll describe it again. Once again though, you're simply wrong about that last part; that last statement is not only incorrect, but fundamentally incompatible with Savage's whole approach.

Anyway. To restate the construction of how to make a St. Petersburg gamble. (This time with a little more detail.) An action is simply a function from world-states to outcomes.

By assumption, we have a sequence of outcomes a_i such that U(a_i) >= 2^i and such that U(a_i) is strictly increasing.

We can use P6 (which allows us to "flip coins", so to speak) to construct events E_i (sets of world-states) with probability 1/2^i.

Then, the action G that takes on the value a_i on the set E_i is a St. Petersburg gamble.
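
(Spelling out the arithmetic -- with the indexing fudged slightly so that the E_i can actually partition the state space, say P(E_i) = 2^{-(i+1)} for i >= 0 -- the expected utility diverges:)

$$\mathbb{E}[U(G)] \;=\; \sum_{i \ge 0} P(E_i)\, U(a_i) \;\ge\; \sum_{i \ge 0} 2^{-(i+1)} \cdot 2^{i} \;=\; \sum_{i \ge 0} \tfrac{1}{2} \;=\; \infty.$$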

For the particular construction, you take G as above, and also G', which is the same except that G' takes the value a_1 on E_0, instead of the value a_0.

Savage proves in the book (although I think the proof is due to Fishburn? I'm going by memory) that given two gambles, both of which are preferred to any essentially bounded gamble, the agent must be indifferent between them. (The proof uses P7, obviously -- the same thing that proves that expected utility works for infinite gambles at all. I don't recall the actual proof offhand and don't feel like trying to reconstruct it right now, but anyway I think you have it in front of you from the sounds of it.) And we can show both these gambles are preferred to any essentially bounded gamble by comparing to truncated versions of themselves (using the sure-thing principle) and using the fact that expected utility works for essentially bounded gambles. Thus the agent must be indifferent between G and G'. But also, by the sure-thing principle (P2 and P3), the agent must prefer G' to G. That's the contradiction.

Edit: Earlier version of this comment misstated how the proof goes

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:51:56.475Z · LW · GW

Fishburn (op. cit., following Blackwell and Girschick, an inaccessible source) requires that the set of gambles be closed under infinitary convex combinations.

Again, I'm simply not seeing this in the paper you linked? As I said above, I simply do not see anything like that outside of section 9, which is irrelevant. Can you point to where you're seeing this condition?

I shall take a look at Savage's axioms and see what in them is responsible for the same thing.

In the case of Savage, it's not any particular axiom, but rather the setup. An action is a function from world-states to outcomes. If you can construct the function, the action (gamble) exists. That's all there is to it. And the relevant functions are easy enough to construct, as I described above; you use P6 (the Archimedean condition, which also allows flipping coins, basically) to construct the events, and we have the outcomes by assumption. You assign the one to the other and there you go.

(If you don't want to go getting the book out, you may want to read the summary of Savage I wrote earlier!)

A short answer to this (something longer later) is that an agent need not have preferences between things that it is impossible to encounter. The standard dissolution of the St. Petersberg paradox is that nobody can offer that gamble. Even though each possible outcome is finite, the offerer must be able to cover every possible outcome, requiring that they have infinite resources. Since the gamble cannot be offered, no preferences between that gamble and any other need exist.

So, would it be fair to sum this up as "it is not necessary to have preferences between two gambles if one of them takes on unbounded utility values"? Interesting. That doesn't strike me as wholly unworkable, but I'm skeptical. In particular:

  1. Can we phrase this without reference to utility functions? It would say a lot more for the possibility if we can.
  2. What if you're playing against Nature? A gamble can be any action; and in a world of unbounded utility functions, why should one believe that any action must have some bound on how much utility it can get you? Sure, sure, second law of thermodynamics and all that, but that's just a feature of the particular universe we happen to live in, not something that reshapes your preferences. (And if we were taking account of that sort of thing, we'd probably just say, oh, utility is bounded after all, in a kind of stupid way.) Notionally, it could be discovered to be wrong! It won't happen, but it's not probability literally 0.

Or are you trying to cut out a more limited class of gambles as impossible? I'm not clear on this, although I'm not certain it affects the results.

Anyway, yeah, as I said, my main objection is that I see no reason to believe that, if you have an unbounded utility function, Nature cannot offer you a St. Petersburg game. Or I mean, to the extent I do see reasons to believe that, they're facts about the particular universe we happen to live in, that notionally could be discovered to be wrong.

Looking at the argument from the other end, at what point in valuing numbers of intelligent lives does one approach an asymptote, bearing in mind the possibility of expansion to the accessible universe? What if we discover that the habitable universe is vastly larger than we currently believe? How would one discover the limits, if there are any, to one's valuing?

This is exactly the sort of argument that I called "flimsy" above. My answer to these questions is that none of this is relevant.

Both of us are trying to extend our ideas about preferences from ordinary situations to extraordinary ones. (Like, I agree that some sort of total utilitarianism is a good heuristic for value under the conditions we're familiar with.) This sort of extrapolation, to an unfamiliar realm, is always potentially dangerous. The question then becomes, what sort of tools can we expect to continue to work, without needing any sort of adjustment to the new conditions?

I do not expect speculation about the particular form preferences our would take under these unusual conditions to be trustworthy. Whereas basic coherence conditions had damn well better continue to hold, or else we're barely even talking about sensible preferences anymore.

Or, to put it differently, my answer is, I don't know, but the answer must satisfy basic coherence conditions. There's simply no way that the idea that decision-theoretic utility has to increase linearly with the number of intelligent lives is on anywhere near as solid ground as that! The mere fact that it's stated in terms of a utility function in the first place, rather than in terms of something more basic, is something of a smell. Complicated statements we're not even entirely sure how to formulate can easily break in a new context. Short simple statements that have to be true for reasons of simple coherence don't break.

(Also, some of your questions don't seem to appreciate what a bounded utility function would actually mean. It wouldn't mean taking an unbounded utility function and then applying a cap to it. It would just mean something that naturally approaches 1 as things get better and 0 as things get worse. There is no point at which it approaches an asymptote; that's not how asymptotes work. There is no limit to one's valuing; presumably utility 1 does not actually occur. Or, at least, that's how I infer it would have to work.)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:08:09.006Z · LW · GW

Huh. This would need some elaboration, but this is definitely the most plausible way around the problem I've seen.

Now (in Savage's formalism) actions are just functions from world-states to outcomes (maybe with a measurability condition), so regardless of your prior it's easy to construct the relevant St. Petersburg gambles if the utility function is unbounded. But seems like what you're saying is, if we don't allow arbitrary actions, then the prior could be such that, not only are none of the permitted actions St. Petersburg gambles, but also this remains the case even after future updates. Interesting! Yeah, that just might be workable...

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-09T06:54:24.108Z · LW · GW

OK, so going by that you're suggesting, like, introducing varying caps and then taking limits as the cap goes to infinity? It's an interesting idea, but I don't see why one would expect it to have anything to do with preferences.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:37:43.913Z · LW · GW

You should check out Abram's post on complete class theorems. He specifically addresses some of the concerns you mentioned in the comments of Yudkowsky's posts.

So, it looks to me like what Abram is doing -- once he gets past the original complete class theorem -- is basically just inventing some new formalism along the lines of Savage. I think it is very misleading to refer to this as "the complete class theorem" -- how on earth was I supposed to know that this was what was being referred to when "the complete class theorem" was mentioned, when it resembles the original theorem so little (and it's the original theorem that was linked to)? -- and I don't see why it was necessary to invent this anew, but sure, I can accept that it presumably works, even if the details aren't spelled out.

But I must note that he starts out by saying that he's only considering the case when there's only a finite set of states of the world! I realize you weren't making a point about bounded utility here; but from that point of view, it is quite significant...

Also, my inner model of Jaynes says that the right way to handle infinities is not to outlaw them, but to be explicit and consistent about what limits we're taking.

I don't really understand what that means in this context. It is already quite explicit what limits we're taking: Given an action (a measurable function from states of the world to outcomes), take its expected utility, with regard to the [finitely-additive] probability on states of the world. (Which is implicitly a limit of sorts.)

I think this is another one of those comments that makes sense if you're reasoning backward, starting from utility functions, but not if you're reasoning forward, from preferences. If you look at things from a utility-functions-first point of view, then it looks like you're outlawing infinities (well, unboundedness that leads to infinities). But from a preferences-first point of view, you're not outlawing anything. You haven't outlawed unbounded utility functions, rather they've just failed to satisfy fundamental assumptions about decision-making (remember, if you don't have P7 your utility function is not guaranteed to return correct results about infinite gambles at all!) and so clearly do not reflect your idealized preferences. You didn't get rid of the infinity, it was simply never there in the first place; the idea that it might have been turned out to be mistaken.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:20:25.895Z · LW · GW

I think you've misunderstood a fair bit. I hope you don't mind if I address this slightly out of order.

Or if infinite utilities are not immediately a problem, then by a more complicated argument, involving constructing multiple St. Petersburg-type combinations and demonstrating that the axioms imply that there both should and should not be a preference between them.

This is exactly what Fishburn does, as I mentioned above. (Well, OK, I didn't attribute it to Fishburn, I kind of implicitly misattributed it to Savage, but it was actually Fishburn; I didn't think that was worth going into.)

I haven't studied the proof of boundedness in detail, but it seems to be that unbounded utilities allow St. Petersburg-type combinations of them with infinite utilities, but since each thing is supposed to have finite utility, that is a contradiction.

He does not give details, but the argument that I conjecture from his text is that if there are unbounded utilities then one can construct a convex combination of infinitely many of them that has infinite utility (and indeed one can), contradicting the proof from his axioms that the utility function is a total function to the real numbers.

What you describe in these two parts I'm quoting is, well, not how decision-theoretic utility functions work. A decision-theoretic utility function is a function on outcomes, not on gambles over outcomes. You take expected utility of a gamble; you don't take utility of a gamble.

So, yes, if you have an unbounded decision-theoretic utility function, you can set up a St. Petersburg-style situation that will have infinite expected utility. But that is not by itself a problem! The gamble has infinite expected utility; no individual outcome has infinite utility. There's no contradiction yet.

Of course, you then do get a contradiction when you attempt to compare two of these that have been appropriately set up, but...

But by a similar argument, one might establish that the real numbers must be bounded, when instead one actually concludes that not all series converge

What? I don't know what one might plausibly assume that might imply the boundedness of the real numbers.

...oh, I think I see the analogy you're going for here. But, it seems to rest on the misunderstanding of utility functions discussed above.

and that one cannot meaningfully compare the magnitudes of divergent infinite series.

Well, so, one must remember the goal here. So, let's start with divergent series, per your analogy. (I'm assuming you're discussing series of nonnegative numbers here, that diverge to infinity.)

So, well, there's any number of ways we could compare divergent series. We could just say that they sum to infinity, and so are equal in magnitude. Or we could try to do a more detailed comparison of their growth rates. That might not always yield a well-defined result though. So yeah. There's not any one universal way to compare magnitudes of divergent series, as you say; if someone asks, which of these two series is bigger, you might just have to say, that's a meaningless question. All this is as you say.

But that's not at all the situation we find ourselves in choosing between two gambles! If you reason backward, from the idea of utility functions, it might seem reasonable to say, oh, these two gambles are both divergent, so comparison is meaningless. But if you reason forward, from the idea of preferences... well, you have to pick one (or be indifferent). You can't just leave it undefined. Or if you have some formalism where preferences can be undefined (in a way that is distinct from indifference), by all means explain it... (but what happens when you program these preferences into an FAI and it encounters this situation? It has to pick. Does it pick arbitrarily? How is that distinct from indifference?)

That we have preferences between gambles is the whole thing we're starting from.

I note that in order to construct convex combinations of infinitely many states, Fishburn extends his axiom 0 to allow this. He does not label this extension separately as e.g. "Axiom 0*". So if you were to ask which of his axioms to reject in order to retain unbounded utility, it could be none of those labelled as such, but the one that he does not name, at the end of the first paragraph on p.1055. Notice that the real numbers satisfy Axiom 0 but not Axiom 0*. It is that requirement that all infinite convex combinations exist that surfaces later as the boundedness of the range of the utility function.

Sorry, but looking through Fishburn's paper I can't see anything like this. The only place where any sort of infinite combination seems to be mentioned is section 9, which is not relevant. Axiom 0 means one thing throughout and allows only finite convex combinations. I simply don't see where you're getting this at all.

(Would you mind sticking to Savage's formalism for simplicity? I can take the time to properly read Fishburn if for some reason you insist things have to be done this way, but otherwise for now I'm just going to put things in Savage's terms.)

In any case, in Savage's formalism there's no trouble in proving that the necessary actions exist -- you don't have to go taking convex combinations of anything, you simply directly construct the functions. You just need an appropriate partition of the set of world-states (provided by the Archimedean axiom he assumes, P6) and an appropriate set of outcomes (which comes from the assumption of unbounded utility). You don't have to go constructing other things and then doing some fancy infinite convex combination of them.

If you don't mind, I'd like to ask: could you just tell me what in particular in Savage's setup or axioms you find to be the probable weak point? If it's P7 you object to, well, I already discussed that in the post; if you get rid of that, the utility function may be unbounded but it's no longer guaranteed to give correct results when comparing infinite gambles.

While searching out the original sources, I found a paper indicating that at least in 1993, bounded utility theorems were seen as indicating a problem with Savage's axioms: "Unbounded utility for Savage's "Foundations of Statistics" and Other Models", by Peter Wakker. There is another such paper from 2014. I haven't read them, but they indicate that proofs of boundedness of utility are seen as problems for the axioms, not discoveries that utility must be bounded.

I realize a number of people see this as a problem. Evidently they have some intuition or argument that disagrees with the boundedness of utility. Whatever this intuition or argument is, I would be very surprised if it were as strong as the argument that utility must be bounded. There's no question that assumptions can be bad. I just think the reasons to think these are bad that have been offered, are seriously flimsy compared to the reasons to think that they're good. So I see this as basically a sort of refusal to take the math seriously. (Again: Which axiom should we throw out, or what part of the setup should we rework?)

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:31:23.979Z · LW · GW

Is there a reason we can't just solve this by proposing arbitrarily large bounds on utility instead of infinite bounds? For instance, if we posit that utility is bounded by some arbitrarily high value X, then the wager can only payout values X for probabilities below 1/X.

I'm not sure what you're asking here. An individual decision-theoretic utility function can be bounded or it can be unbounded. Since decision-theoretic utility functions can be rescaled arbitrarily, naming a precise value for the bounds is meaningless; so like we could just assume the bounds are 0 below and 1 above.
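
(Concretely: replacing U by U'(x) = aU(x) + b with a > 0 gives

$$\mathbb{E}[U'(G)] \;=\; a\,\mathbb{E}[U(G)] + b,$$

which orders gambles exactly the same way; so any bounded utility function can be rescaled to have bounds 0 and 1, and the particular numbers carry no meaning.)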

So, I mean, yeah, you can make the problem go away by assuming bounded utility, but if you were trying to say something more than that -- a bounded utility function that is somehow "closer" to unbounded utility -- then no such notion is meaningful.

Apologies if I've misunderstood what you're trying to do.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:27:22.242Z · LW · GW

Yes, thanks, I didn't bother including it in the body of the post but that's basically how it goes. Worth noting that this:

Both of these wagers have infinite expected utility, so we must be indifferent between them.

...is kind of shortcutting a bit (at least as Savage/Fishburn[0] does it; he proves indifference between things of infinite expected utility separately after proving that expected utility works when it's finite), but that is the essence of it, yes.

(As for the actual argument... eh, I don't have it in front of me and don't feel like rederiving it...)

[0]I initially wrote Savage here, but I think this part is actually due to Fishburn. Don't have the book in front of me right now though.

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:23:39.526Z · LW · GW

By "a specific gamble" do you mean "a specific pair of gambles"? Remember, preferences are between two things! And you hardly need a utility function to express a preference between a single pair of gambles.

I don't understand how to make sense of what you're saying. Agent's preferences are the starting point -- preferences as in, given a choice between the two, which do you pick? It's not clear to me how you have a notion of preference that allows for this to be undefined (the agent can be indifferent, but that's distinct).

I mean, you could try to come up with such a thing, but I'd be pretty skeptical of its meaningfulness. (What happens if you program these preferences into an FAI and then it hits a choice for which its preference is undefined? Does it act arbitrarily? How does this differ from indifference, then? By lack of transitivity, maybe? But then that's effectively just nontransitive indifference, which seems like it would be a problem...)

I think your comment is the sort of thing that sounds reasonable if you reason backward, starting from the idea of expected utility, but will fall apart if you reason forward, starting from the idea of preferences. But if you have some way of making it work, I'd be interested to hear...

Comment by Sniffnoy on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:14:34.737Z · LW · GW

If you're not making a prioritarian aggregate utility function by summing functions of individual utility functions, the mapping of a prioritarian function to a utility function doesn't always work. Prioritarian utility functions, for instance, can do things like rank-order everyone's utility functions and then sum each individual utility raised to the negative-power of the rank-order ... or something*. They allow interactions between individual utility functions in the aggregate function that are not facilitated by the direct summing permitted in utilitarianism.

This is a good point. I might want to go back and edit the original post to account for this.

So from a mathematical perspective, it is possible to represent many prioritarian utility function as a conventional utilitarian utility function. However, from an intuitive perspective, they mean different things:

This doesn't practically affect decision-making of a moral agents but it does reflect different underlying philosophies -- which affects the kinds of utility functions people might propose.

Sure, I'll agree that they're different in terms of ways of thinking about things, but I thought it was worth pointing out that in terms of what they actually propose they are largely indistinguishable without further constraints.

Comment by Sniffnoy on Misconceptions about continuous takeoff · 2019-12-25T20:11:43.939Z · LW · GW

I don't really want to go trying to defend here a position I don't necessarily hold, but I do have to nitpick and point out that there's quite a bit of room in between exponential and hyperbolic.

Comment by Sniffnoy on Misconceptions about continuous takeoff · 2019-12-24T08:39:22.931Z · LW · GW

To be clear, intelligence explosion via recursive self-improvement has been distinguished from merely exponential growth at least as far back as Yudkowsky's "Three Major Singularity Schools". I couldn't remember the particular link when I wrote the comment above, but, well, now I remember it.

Anyway, I don't have a particular argument one way or the other; I'm just registering my surprise that you encountered people here arguing for merely exponential growth based on intelligence explosion arguments.

Comment by Sniffnoy on Bayesian examination · 2019-12-11T18:17:16.334Z · LW · GW

Yeah, proper scoring rules (and in particular both the quadratic/Brier and the logarithmic examples) have been discussed here a bunch, I think that's worth acknowledging in the post...

Comment by Sniffnoy on Bayesian examination · 2019-12-10T00:36:11.454Z · LW · GW

Kind of well-known here, but worth repeating I guess...

Comment by Sniffnoy on Misconceptions about continuous takeoff · 2019-12-02T22:53:07.014Z · LW · GW

It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.

I'm not sure this is an accurate characterization of the point; my understanding is that the concern largely comes from the possibility that the growth will be faster than exponential, rather than merely exponential.

Comment by Sniffnoy on Goal-thinking vs desire-thinking · 2019-11-17T04:09:22.526Z · LW · GW

I mean, are you actually disagreeing with me here? I think you're just describing an intermediate position.

Comment by Sniffnoy on Goal-thinking vs desire-thinking · 2019-11-16T22:03:21.453Z · LW · GW

OK. I think I didn't think through my reply sufficiently. Something seemed off with what you were saying, but I failed to think through what, and instead made a reply that didn't really make sense. Thinking things through a bit more now, though, I think I can lay out my actual objection a bit more clearly.

I definitely think that if you're taking the point of view that suicide is preferable to suffering, you're not applying what I'm calling goal-thinking. (Remember here that the description I laid out above is not intended as some sort of intensional definition, just my attempt to explicate this distinction I've noticed.) I don't think goal-thinking would consider nonexistence as some sort of neutral point the way many do.

I think the best way of explaining this is maybe that goal-thinking -- or at least the extreme version, which nobody actually uses -- simply doesn't consider happiness or suffering or whatever as separate objects worth considering at all, that can be good or bad, or that should be acted on directly, but purely as indicators of whether one is achieving one's goals -- intermediates to be eliminated. In this point of view, suffering isn't some separate thing to be gotten rid of by whatever means, but simply the internal experience of not achieving one's goals, the only proper response to which is to go out and do so. You see?

And if we continue in this direction, one can also apply this to others; so you wouldn't have "not have other people suffer horribly" as a goal in the first place. You would always phrase things in terms of others' goals, and whether they're being thwarted, rather than in terms of their experiences.

Again, none of what I'm saying here necessarily follows from what I wrote in the OP, but as I said, that was never intended as an intensional definition. I think the distinction I'm drawing makes sense regardless of whether I described it sufficiently clearly initially.

Comment by Sniffnoy on Goal-thinking vs desire-thinking · 2019-11-11T03:08:59.238Z · LW · GW

This is perhaps an intermediate example, but I do think that once you're talking about internal experiences to be avoided, it's definitely not all the way at the goal-thinking end.

Comment by Sniffnoy on Goal-thinking vs desire-thinking · 2019-11-11T00:16:09.488Z · LW · GW

Hm, I suppose that's true. But I think the overall point still stands? It's illustrating a type of thinking that doesn't make sense to one thinking in terms of concrete, unmodifiable goals in the external world.

Comment by Sniffnoy on Coherent decisions imply consistent utilities · 2019-10-21T01:06:15.455Z · LW · GW

So this post basically just collects together a bunch of things you previously wrote in the Sequences, but I guess it's useful to have them gathered in one place.

I must, however, take objection to one part. The proper non-circular foundation you want for probability and utility is not the complete class theorem, but rather Savage's theorem, which I previously wrote about on this website. It's not short, but I don't think it's too inaccessible.

Note, in particular, that Savage's theorem does not start with any assumption baked in that R is the correct system of numbers to use for probabilities[0], instead deriving that as a conclusion. The complete class theorem, by contrast, has real numbers in the assumptions.

In fact -- and it's possible I'm misunderstanding -- it's not even clear to me that the complete class theorem does what you claim it does at all. It seems to assume probability at the outset, and therefore cannot provide a grounding for probability. Unlike Savage's theorem, which does. Again, it's possible I'm misunderstanding, but that sure seems to be the case.

Now this has come up here before (in this comment I'm basically just restating things I've previously written), and your reply when I previously pointed out some of these issues (your reply, my reply) was, frankly, nonsensical: you claimed that the statement that one's preferences form a partial preorder is a stronger assumption than "one prefers more apples to less apples", when, in fact, the exact reverse is the case.

(To restate it for those who don't want to click through: If one is talking solely about one's preferences over number of apples, then the statement that more is better immediately yields a total preorder. And if one is talking about preferences not just over number of apples but in general, then... well, it's not clear how what you're saying applies directly; and taken less literally, it just in general seems to me that the complete class theorem is making some very strong assumptions, much stronger than that of merely a total preorder (e.g., real numbers!).)

In short, the use of the complete class theorem here in place of Savage's theorem would appear to be an error, and I think you should correct it.

[0]Yes, it includes an Archimedean assumption, which you could argue is the same thing as baking in R; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking R as your number system means as a statement about the agent's preferences.
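
(For reference, here's my paraphrase of the relevant axiom; check Savage or the linked post if the details matter. His P6 says roughly that if you strictly prefer act f to act g, then for any consequence x there is some finite partition of the states such that modifying f, or modifying g, to yield x on any single cell of that partition leaves the strict preference intact. That's a condition stated purely in terms of the agent's preferences over acts; no real numbers appear in it.)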

Comment by Sniffnoy on Noticing Frame Differences · 2019-10-19T17:47:49.078Z · LW · GW

Thirding what the others said, but I wanted to also add that rather than actual game theory, what you may be looking for here may instead be the anthropological notion of limited good?

Comment by Sniffnoy on The Forces of Blandness and the Disagreeable Majority · 2019-05-02T16:30:04.671Z · LW · GW

Sorry, but: The thing at the top says this was crossposted from Otium, but I see no such post there. Was this meant to go up there as well? Because it seems to be missing.

Comment by Sniffnoy on An Extensive Categorisation of Infinite Paradoxes · 2019-03-18T05:55:32.104Z · LW · GW

OK, time to actually now get into what's wrong with the ones I skipped initially. Already wrote the intro above so not repeating that. Time to just go.

Infinitarian paralysis: So, philosophical problems to start: As an actual decision theory problem this is all moot, since you can't actually have an infinite number of people. I.e., it's not clear why this is a problem at all. Secondly, there's the naive assumption of utilitarian aggregation, as mentioned above, etc.; I'm not going over this again. Enough of this, let's move on.

So what are the mathematical problems here? Well, you haven't said a lot here, but here's what it looks like to me. I think you've written one thing here that is essentially correct, which is that, if you did have some system of surreal-valued utilities, it would indeed likely make the distinction you want.

But, once again, that's a big "if", and not just for philosophical reasons but for the mathematical reasons I've already brought up so many times -- you can't do infinite sums in the surreals like you want, for reasons I've already covered. So there's a reason I included the word "likely" above, because even if you did find an appropriate way of doing such a sum, I can't necessarily guarantee that it would behave like you want (yes, finite sums should, but infinite sums require definition, and who knows if they'll actually be compatible with finite sums like they should be?).

But the really jarring thing here, the thing that really exposes a serious error in your thought (well, OK, that does so to a greater extent), is not in your proposed solution -- it's in what you contrast it with. Cardinal-valued utilities? Nothing about that makes sense! That's not a remotely well-defined alternative you can contrast with! And the thing that bugs me about this error is that it's just so unforced -- I mean, man, you could have said "extended reals" rather than cardinals, and made essentially the same point while making at least some sense! This is just demonstrating once again that not only do you not understand surreals, you do not understand cardinals or ordinals either.

(Well, I suppose technically there's the possibility that you do but expect your audience doesn't and are talking down to them, but since you're writing here on Less Wrong, I'm going to assume that's not the case.)

Seriously, cardinals and utilities do not go together. I mean, cardinals and real numbers do not go together. Like surreals and utilities don't go together either, but at least the surreals include the reals! At least you can attempt to treat it naively in special cases, as you've done in a number of these examples, even if the result probably isn't meaningful! Cardinals you can't even do that.

And once again, there's no reason anyone who understood cardinals would even want cardinal-valued utilities. That's just not what cardinals are for! Cardinals are for counting how many there are of something. Utility calculations are not a "how many" problem.
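
(To spell out one concrete way this breaks: cardinal arithmetic gives ℵ₀ + 1 = ℵ₀ and ℵ₀ · 2 = ℵ₀, there are no negative or fractional cardinals, and subtraction isn't even well-defined, so essentially none of the arithmetic that expected-utility calculations rely on survives.)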

Sphere of suffering: Once again we have infinitely many people (so this whole problem is again a non-problem) and once again we have some sort of naive utility aggregation over those infinitely many people with all the mathematical problems that brings (only now it's over time-slices as well?). Enough of this, moving on.

Honestly I don't have much new to say about the bad mathematics here, much of it is the same sort of mistakes as you made in the ones I covered in my initial comment. To cover those ones briefly:

  1. Surreal numbers do not measure how far a grid extends (similar to examples I've already covered)
  2. There's not a question of how far the grid extends; allowing it to be a transfinite variable l is just changing the problem (similar to examples I've already covered)
  3. Surreal numbers also do not measure number of time steps, you want ordinals for that (similar to examples I've already covered)
  4. Repeat #2 but for the time steps (similar to examples I've already covered)

But OK. The one new thing here, I guess, is that now you're talking about a "majority" of the time slices? Yeah, that is once again not well-defined at all. Cardinality won't help you here, obviously; are you putting a measure on this somehow? I think you're going to have some problems there.

Trumped: Same problems I've discussed before. Surreal numbers do not count time steps, you're changing the problem by introducing a variable, utility aggregation over an infinite set (this time of time-slices rather than people), you know the drill.

But actually here you're changing the problem in a different way, by supposing that Trump knows in advance the number of time steps? The original problem just had this as a repeated offer. Maybe that's a philosophical rather than mathematical problem. Whatever. It's changing the problem, is the point.

And then on top of that your solution doesn't even make any sense. Let's suppose you meant an ordinal number of days rather than a surreal number of days, since that is what you'd actually use in this context. OK. Suppose for example then that the number of days is ω (which is, after all, the original problem before you changed it). So your solution says that Trump should accept the deal so long as the day number is less than the surreal number ω/3. Except, oops! Every ordinal less than ω is also less than ω/3. Trump always accepts the deal, we're back at the original problem.
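
(Spelling that last step out: ω/3 is an infinite surreal; for every natural number n we have 3n < ω and hence n < ω/3, so every finite day number, i.e. every ordinal below ω, falls below the supposed threshold.)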

I.e., even granting that you can somehow make all the formalism work, this is still just wrong.

St. Petersburg paradox: OK, so, there's a lot wrong here. Let me get the philosophical problem out of the way first -- the real solution to the St. Petersburg paradox is that you must look not at expected money, but at expected utility, and utility functions must be bounded, so this problem can't arise. But let's get to the math, because, like I said, there's a lot wrong here.

Let's get the easy-to-describe problems out of the way first: You are once again using surreals where you should be using ordinals; you are once again assuming some sort of theory of infinite sums of surreals; getting infinitely many heads has zero probability, not infinitesimal (probabilities are real-valued; you could try to introduce a theory of surreal probabilities, but that will have the problems already discussed), so what happens in that case is irrelevant; you are once again changing the problem by allowing things to go on beyond ω steps; and, minor point, but where on earth did the function n |-> n come from? Don't you mean n |-> 2^n?
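
For concreteness, here's a quick sanity check of the standard setup as I understand it (my own sketch; the particular bounded utility function is an arbitrary choice): if the first heads on flip n pays 2^n, which happens with probability (1/2)^n, then expected money is a sum of 1's and diverges, while expected utility under a bounded utility function converges.

```
import math

def expected_money(terms):
    # Partial sums of (1/2)^n * 2^n = 1 for each n: one unit per term,
    # so this grows without bound as the number of terms increases.
    return sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))

def expected_bounded_utility(terms, cap=1000.0):
    # The same lottery under a bounded utility u(x) = cap * (1 - exp(-x/cap))
    # (an illustrative choice, nothing canonical): the terms are eventually
    # dominated by cap * (1/2)^n, so the partial sums converge.
    return sum((0.5 ** n) * cap * (1.0 - math.exp(-(2 ** n) / cap))
               for n in range(1, terms + 1))

print(expected_money(50))             # 50.0
print(expected_bounded_utility(50))   # a finite value well below the cap
```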

OK, that's largely stuff I've said before. But the thing that puzzled me the most in your claimed solution is the first sentence:

> If we model this with surreals, then simply stating that there is potentially an infinite number of tosses is undefined.

What? I mean, yeah, sure, the surreals have multiple infinities while, say, the extended nonnegative reals have only one, no question there. But that sentence still makes no sense! It, like, seems to reveal a fundamental misunderstanding so great I'm having trouble comprehending it. But I will give it my best shot.

So the thing is that -- ignoring the issue of unbounded utility and what's the correct decision -- the original setup has no ambiguities. You can't choose to make it different by changing what system of numbers you describe it with. Now, I don't know if you're making the mistake I think you're making, because who knows what mistake you might be making, but it looks to me like you are confusing numbers that are part of the actual problem specification with auxiliary numbers just used to describe the problem.

Like, what's actually going on here is that there is a set of coin flips, right? The elements of that set will be indexed by the natural numbers, and will form a (possibly improper, though with probability 0) initial segment of it -- those numbers are part of the actual problem specification. The idea though that there might be infinitely many coin flips... that's just a description. When I say "With probability 0, the set of flips will be infinite", that's just another way of saying, "With probability 0, the set of flips will be N." It doesn't make sense to ask "Ah, but what system of numbers are you using to measure its infinitude?" It doesn't matter! The set I'm describing is N! (And in any case I just said it was an infinite set, although I suppose you could say I was implicitly using cardinals.)

This is, I suppose, an idea that's shown up over and over in your claimed solutions, but since I skipped over this particular one before, I guess I never got it so explicitly before. Again, I'm having to guess what you think, but it looks to me like you think that the numbers are what's primary, rather than the actual objects the problems are about, and so you can just change the numbers system and get a different version of the same problem. I mean, OK, often the numbers are primary and you can do that! But sometimes they're just descriptive.

Oy. I have no idea whether I've correctly described what your misunderstanding is, but whatever it is, it's pretty big. Let's just move on.

Trouble in St. Petersburg: Can I first just complain that your numbers don't seem to match up with your text? 13 is not 9*2+3. I'm just going to assume you meant 21 rather than 13, because none of the other interpretations I can come up with make sense.

Also this problem once again relies on unbounded utilities, but I don't need to go on about that. (Although if you were to somehow reformulate it without those -- though that doesn't seem possible in this coin-flip formulation -- then the problem would be basically similar to Satan's Apple. I have my own thoughts on that problem, but, well, I'm not going to go into it here because that's not the point.)

Anyway, let's get to the surreal abuse! Well, OK, again I don't have much new to say here, it's the same sort of surreal abuse as you've made before. Namely: Using surreals where they don't make sense (time steps should be counted by ordinals); changing the problem by introducing a transfinite variable; thinking that all ordinals are successor ordinals (sorry, but with n=ω, i.e. the original problem, there's still no last step).

Ultimately you don't offer any solution? Whatever. The errors above still stand.

The headache: More naive aggregation and thinking you can do infinite sums, etc. Or at least so I'm gathering from your claimed solution. Anyway, that's boring.

The surreal abuse here though is also boring, same types as we've seen before -- using surreals where they make no sense but where ordinals would; ignoring the existence of limit ordinals; and of course the aforementioned infinite sums and such.

OK. That's all of them. I'm stopping there. I think the first comment was really enough to demonstrate my point, but now I can honestly claim to have addressed every one of your examples. Time to go sleep now.