## Comments

**Sniffnoy** on Can crimes be discussed literally? · 2021-04-09T18:45:37.375Z · LW · GW

Yeah. You *can* use language that is unambiguously not attack language; it just takes more effort to avoid common words. In this respect it's much like how discussing lots of other things seriously requires avoiding common but confused words!

**Sniffnoy** on Classifying games like the Prisoner's Dilemma · 2021-04-09T03:51:51.979Z · LW · GW

I'm reminded of this paper, which discusses a smaller set of two-player games. What you call "Cake Eating" they call the "Harmony Game". They also use the more suggestive variable names -- which I believe come from existing literature -- R (reward), S (sucker's payoff), T (temptation), P (punishment) instead of (W, X, Y, Z). Note that in addition to R > P (W > Z) they also added the restrictions T > P (Y > Z) and R > S (W > X) so that the two options could be meaningfully labeled "cooperate" and "defect" instead of "Krump" and "Flitz" (the cooperate option is always better for the other player, regardless of whether it's better or worse for you). (I'm ignoring cases of things being equal, just like you are.)

(Of course, the paper isn't actually about classifying games, it's an empirical study of how people actually play these games! But I remember it for being the first place I saw such a classification...)

With these additional restrictions, there are only four games: Harmony Game (Cake Eating), Chicken (Hawk-Dove/Snowdrift/Farmer's Dilemma), Stag Hunt, and Prisoner's Dilemma (Too Many Cooks).
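The four-game classification under these restrictions is mechanical enough to sketch in code (the function and the example payoffs are my own illustration, using the R/S/T/P names from the paper):

```python
def classify_symmetric_2x2(R, S, T, P):
    """Classify a symmetric 2x2 game with payoffs:
    R = reward (both cooperate), S = sucker's payoff (cooperate vs. defect),
    T = temptation (defect vs. cooperate), P = punishment (both defect).
    Assumes the restrictions above (R > P, T > P, R > S) and, like the
    discussion above, ignores ties."""
    assert R > P and T > P and R > S, "needed for meaningful cooperate/defect labels"
    if T > R and P > S:
        return "Prisoner's Dilemma (Too Many Cooks)"
    if T > R and S > P:
        return "Chicken (Hawk-Dove / Snowdrift / Farmer's Dilemma)"
    if R > T and P > S:
        return "Stag Hunt"
    if R > T and S > P:
        return "Harmony Game (Cake Eating)"
    raise ValueError("tied payoffs fall outside this classification")

# Classic Prisoner's Dilemma payoffs:
print(classify_symmetric_2x2(R=3, S=0, T=5, P=1))  # Prisoner's Dilemma (Too Many Cooks)
```

Once the labeling restrictions hold, the two remaining sign comparisons (T vs. R, and S vs. P) are exactly what distinguish the four games.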

I'd basically been using that as my way of thinking about two-player games, but this broader set might be useful. Thanks for taking the time to do this and assign names to these.

I do have to wonder about that result that Zack_M_Davis mentions... as you mentioned, where's the Harmony Game in it? Also, isn't Battle of the Sexes more like Chicken than like Stag Hunt? I would expect to see Chicken and Stag Hunt, not Battle of the Sexes and Chicken, which sounds like the same thing twice and seems to leave out Stag Hunt. But maybe Battle of the Sexes is actually equivalent, in the sense described, to Stag Hunt rather than Chicken? That would be surprising, but I didn't sit down to check whether the definition is satisfied or not...

**Sniffnoy** on Thirty-three randomly selected bioethics papers · 2021-03-24T17:53:05.093Z · LW · GW

I suppose so. It is at least a *different* problem than I was worried about...

**Sniffnoy** on Thirty-three randomly selected bioethics papers · 2021-03-23T19:44:28.463Z · LW · GW

Huh. Given the negative reputation of bioethics around here -- one I hadn't much questioned, TBH -- most of these are surprisingly reasonable. Only #10, #16, and #24 really seemed like the LW stereotype of the bioethics paper that I would roll my eyes at. Arguably also #31, but I'd argue that one is instead alarming in a *different* way.

Some others seemed like bureaucratic junk (so, neither good nor bad), and others I think the quoted sections didn't really give enough information to judge; it is quite possible that a few more of these would go under the stereotype list if I read these papers further.

#1 is... man, why does it have to be so hostile? The argument it's making is basically a *counter*-stereotypical bioethics argument, but it's written in such a hostile manner. That's not the way to have a good discussion!

Also, I'm quite amused to see that #3 basically argues that we need what I've previously referred to here as a "theory of legitimate influence", for what appear likely to be similar reasons (although again I didn't read the full thing to inspect this in more detail).

**Sniffnoy** on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T19:10:06.693Z · LW · GW

> Consider a modified version of the prisoner's dilemma. This time, the prisoners are allowed to communicate, but they also have to solve an additional technical problem, say, how to split the loot. They may start with agreeing on not betraying each other to the prosecutors, but later one of them may say: "I've done most of the work. I want 70% of the loot, otherwise I am going to rat on you." It's easy to see how the problem would escalate and end up in the prisoners betraying each other.

Minor note, but I think you could just talk about a [bargaining game](https://en.wikipedia.org/wiki/Cooperative_bargaining), rather than the Prisoner's Dilemma, which appears to be unrelated. There are other basic game theory examples beyond the Prisoner's Dilemma!
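For what it's worth, the loot-splitting subproblem has a standard cooperative-bargaining answer: the Nash bargaining solution picks the split maximizing the product of each player's gain over their disagreement payoff. A minimal brute-force sketch (the function and numbers are my own illustration, assuming linear utility in money):

```python
def nash_bargaining_split(loot, d1=0.0, d2=0.0, steps=1000):
    """Grid-search the split of `loot` maximizing (u1 - d1) * (u2 - d2),
    where d1, d2 are the players' disagreement payoffs.
    Assumes each player's utility is linear in their share."""
    best_share, best_product = None, float("-inf")
    for i in range(steps + 1):
        share1 = loot * i / steps              # player 1's share
        u1, u2 = share1 - d1, (loot - share1) - d2
        if u1 >= 0 and u2 >= 0 and u1 * u2 > best_product:
            best_product, best_share = u1 * u2, share1
    return best_share

# With equal (zero) disagreement points and linear utility, the split is even:
print(nash_bargaining_split(100.0))  # 50.0
```

Raising one player's disagreement payoff (their threat point, e.g. "I'll rat on you") shifts the solution in their favor, which is exactly the escalation dynamic the quoted passage describes.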

**Sniffnoy** on Dark Matters · 2021-03-17T04:50:03.604Z · LW · GW

I just explained why (without a more specific theory of exactly how the gravity would become delocalized from the visible mass) the bullet cluster is *not* evidence one way or the other.

Now, you compare the extra fields of modified gravity to epicycles -- as in, post-hoc complications grafted on to a theory to explain a particular phenomenon. But these extra fields are, to the best of my understanding, not grafted on to explain such delocalization; they're the actual basic content of the modified gravity theories and necessary to obtain a workable theory at all. MOND by itself, after all, is not a theory of gravity; the problem then is making one compatible with it, and every actual attempt at that that I'm aware of involves these extra fields, again, not as an epicycle for the bullet cluster, but as a way of constructing a workable theory at all. So, I don't think that comparison is apt here.

One could perhaps say that such theories are epicycles upon MOND -- since the timeline may go MOND, then bullet cluster, then proper modified gravity theories -- but for the reasons above I don't think that makes a lot of sense either.

If this was some post-hoc epicycle then your comment would make some sense; but as it is, I don't think it does. Is there some reason that I'm missing that it should be regarded as a post-hoc epicycle?

Note that Hossenfelder herself says modified gravity is probably not correct! It's still important to understand what is or is not a valid argument against it. The other arguments for dark matter sure seem pretty compelling!

(Also, uh, I think "people who think X are just closed-minded and clearly not open to persuasion" is generally not the sort of charity we try to go for here on LW...? I didn't downvote you but, like, accusing people of being closed-minded rather than actually arguing is on the path to becoming similarly closed-minded oneself, you know?)

**Sniffnoy** on Defending the non-central fallacy · 2021-03-17T03:35:22.661Z · LW · GW

I feel like this really misses the point of the whole "non-central fallacy" idea. I would say, categories are heuristics and those heuristics have limits. When the category gets strained, the thing to do is to stop arguing using the category and start arguing the particular facts without relation to the category ("taboo your words").

You're saying that this sort of arguing-via-category is useful because it's actually arguing-via-similarity; but I see the point of Scott/Yvain's original article being that such arguing via similarity simply isn't useful in such cases, and has to be replaced with a direct assessment of the facts.

Like, one might say, similar in what way, and how do we know that this particular similarity is relevant in this case? But any answer to why the similarity is relevant, could be translated into an argument that doesn't rely on the similarity in the first place. Similarity can thus be a useful guide to finding arguments, but it shouldn't, in contentious cases, be considered compelling as an argument itself.

Yes, as you say, the argument is common because it *is* useful as a quick shorthand most of the time. But in contentious cases, in edge cases -- the cases that people are likely to be arguing about -- it breaks down. That is to say, it's an argument whose validity is largely limited to those cases where people aren't arguing to begin with!

**Sniffnoy** on Dark Matters · 2021-03-15T08:36:20.808Z · LW · GW

Good post. Makes a good case. I wasn't aware of the evidence from galactic cluster lensing; that's pretty impressive. (I guess not as much as the CMB power spectrum, but that I'd heard about before. :P )

But, my understanding is that the Bullet Cluster is actually not the strong evidence it's claimed to be? My understanding of modified gravity theories is that, since they all work by adding extra fields, it's also possible for those to have gravity separated from visible matter, even if no dark matter is present. (See e.g. here... of course in this post Hossenfelder claims that the Bullet Cluster in particular is actually evidence *against* dark matter due to simulation reasons, but I don't know how much to believe that.)

Of course this means that modified gravity theories also aren't quite as different from dark matter as they're commonly said to be -- with either dark matter or modified gravity you're adding an additional field, the difference is just (OK, this is maybe a big just!) the nature of that field. But since this new field would presumably not act like matter in all the other ways you describe, my understanding is that it is still definitely distinct from "dark matter" for the purposes of this post.

Apparently these days even modified gravity proponents admit you still need dark matter to make things work out, which rather kills the whole motivation behind modified gravity, so I'm not sure if that's really an idea that makes sense anymore! Still, had to point out the thing about the Bullet Cluster, because based on what I know I don't think that part is actually correct.

**Sniffnoy** on Blue is Arbitrary · 2021-03-14T19:44:36.317Z · LW · GW

"Cyan" isn't a basic color term in English; English speakers ordinarily consider cyan to be a variant of blue, not something basically separate. Something that is cyan could also be described in English as "blue". As opposed to say, red and pink -- these are both basic color terms in English; an English speaker would not ordinarily refer to something pink as "red", or vice versa.

Or in other words: Color words don't refer to *points* in color space, they refer to *regions*, which means that you can look at how those regions overlap -- some may be subsets of others, some may be disjoint (well -- not disjoint per se, but *thought* of as disjoint, since obviously you can find things near the boundary that won't be judged consistently), etc. Having words "blue" and "cyan" that refer to two thought-of-as-disjoint regions is pretty different from having words "blue" and "cyan" where the latter refers to a subset of the former.

So, it's not as simple as saying "English also has a word cyan" -- yes, it does, but the meaning of that word, and the relation of its meaning to that of "blue", is pretty different. These translated words don't *quite* correspond; we're taking regions in color space, and translating them to words that refer to similar regions, regions that contain a number of the same points, but not the same ones.
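To make the regions-not-points idea concrete, here's a toy sketch (the hue ranges are made-up stand-ins rather than measured data, and the second language is purely hypothetical):

```python
# Toy model: color terms as regions of hue space (degrees), not points.
# The ranges below are illustrative assumptions, not measured data.
english = {"blue": range(170, 260), "cyan": range(170, 200)}  # cyan read as a kind of blue
hypothetical = {"term_a": range(170, 200), "term_b": range(200, 260)}  # two basic terms

def relation(a, b):
    """How do two color-term regions relate: subset, disjoint, or overlapping?"""
    a, b = set(a), set(b)
    if a <= b:
        return "subset"
    if not (a & b):
        return "disjoint"
    return "overlapping"

print(relation(english["cyan"], english["blue"]))                # subset
print(relation(hypothetical["term_a"], hypothetical["term_b"]))  # disjoint
```

Two vocabularies can cover the same stretch of color space and still differ in this structural way: one term nested inside another versus two terms thought of as disjoint.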

The bit in the comic about "Eurocentric paint" obviously doesn't quite make sense as stated -- the division of the rainbow doesn't come from paint! -- but a paint set that focused on the central examples of basic color terms of a particular language could reasonably be called a that-language-centric paint set. In any case the basic point is just that dividing up color space into basic color terms has a large cultural component to it.

**Sniffnoy** on Making Vaccine · 2021-02-04T06:12:25.675Z · LW · GW

Wow!

I guess a thing that still bugs me after reading the rest of the comments is, if it turns out that this vaccine only offers protection against inhaling the virus through the nose, how much does that help when one considers that one could also inhale it through the mouth? Like, I worry that after taking this I'd still need to avoid indoor spaces with other people, etc., which would defeat a lot of the benefit of it.

But, if it turns out that it does yield antibodies in the blood, then... this sounds very much worth trying!

**Sniffnoy** on Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems · 2020-09-15T17:51:23.616Z · LW · GW

> So, why do we perceive so many situations to be Prisoner's Dilemma -like rather than Stag Hunt -like?

I don't think that we do, exactly. I think that most people only know the term "prisoners' dilemma" and haven't learned any more game theory than that; and then occasionally they go and actually attempt to map things onto the Prisoners' Dilemma as a result. :-/

**Sniffnoy** on Toolbox-thinking and Law-thinking · 2020-09-06T20:51:21.041Z · LW · GW

That sounds like it might have been it?

**Sniffnoy** on Swiss Political System: More than You ever Wanted to Know (III.) · 2020-08-11T20:29:25.611Z · LW · GW

Sorry, but after reading this I'm not very clear on just what exactly the "Magic Formula" refers to. Could you state it explicitly?

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-02-28T22:58:16.505Z · LW · GW

Oops, turns out I *did* misremember -- Savage does not in fact put the proof in his book. You have to go to Fishburn's book.

I've been reviewing all this recently and yeah -- for anyone else who wants to get into this, I'd recommend getting Fishburn's book ("Utility Theory for Decision Making") in addition to Savage's "Foundations of Statistics". Because in addition to the above, what I'd also forgotten is that *Savage leaves out a bunch of the proofs*. It's really annoying. Thankfully in Fishburn's treatment he went and actually elaborated all the proofs that Savage thought it OK to skip over...

(Also, stating the obvious, but get the second edition of "Foundations of Statistics", as it fixes some mistakes. You probably don't want *just* Fishburn's book, it's fairly hard to read by itself.)

**Sniffnoy** on What Money Cannot Buy · 2020-02-06T20:24:51.447Z · LW · GW

Oh, I see. I misread your comment then. Yes, I am assuming one already has the ability to discern the structure of an argument and doesn't need to hire someone else to do that for you...

**Sniffnoy** on What Money Cannot Buy · 2020-02-05T18:53:19.983Z · LW · GW

What I said above. Sorry, to be clear here, by "argument structure" I don't mean the structure of the individual arguments but rather the overall argument -- what rebuts what.

(Edit: Looks like I misread the parent comment and this fails to respond to it; see below.)

**Sniffnoy** on What Money Cannot Buy · 2020-02-03T20:55:24.801Z · LW · GW

This is a good point (the redemption movement comes to mind as an example), but I think the cases I'm thinking of and the cases you're describing look quite different in other details. Like, the bored/annoyed expert tired of having to correct basic mistakes, vs. the salesman who wants to initiate you into a new, exciting secret. But yeah, this is only a quick-and-dirty heuristic, and even then only good for distinguishing snake oil; it might not be a good idea to put too much weight on it, and it definitely won't help you in a real dispute ("Wait, *both* sides are annoyed that the other is getting basic points wrong!"). As Eliezer put it -- you can't learn physics by studying psychology!

**Sniffnoy** on What Money Cannot Buy · 2020-02-01T22:16:28.738Z · LW · GW

Given a bunch of people who disagree, some of whom are actual experts and some of whom are selling snake oil, and without having expertise yourself, there are some further quick-and-dirty heuristics you can use to tell which of the two groups is which. I think basically my suggestion can be best summarized as "look at argument structure".

The real experts will likely spend a bunch of time correcting popular misconceptions, which the fakers may subscribe to. By contrast, the fakers will generally not bother "correcting" the truth to their fakery, because why would they? They're trying to sell to unreflective people who just believe the obvious-seeming thing; someone who actually bothered to read corrections to misconceptions at any point is likely too savvy to be their target audience.

Sometimes though you do get actual arguments. Fortunately, it's easier to evaluate arguments than to determine truth oneself -- of course, this is only any good if at least one of the parties is *right*! If everyone is wrong, heuristics like this will likely be no help. But in an experts-and-fakers situation, where one of the groups is right and the other pretty definitely wrong, you can often just use heuristics like "which side has arguments (that make some degree of sense) that the other side has no answer to (that makes any sense)?". If we grant the assumption that one of the two sides is right, then it's likely to be that one.

When you actually have a *lot* of back-and-forth arguing -- as you might get in politics, or, as you might get in disputes between actual experts -- the usefulness of this sort of thing can drop quickly, but if you're just trying to sort out fakers from those with actual knowledge, I think it can work pretty well. (Although honestly, in a dispute between experts, I think the "left a key argument unanswered" is still a pretty big red flag.)

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-28T07:25:59.782Z · LW · GW

Well, it's worth noting that P7 is introduced to address gambles with infinitely many possible outcomes, regardless of whether those outcomes are bounded or not (which is the reason I argue above you can't just get rid of it). But yeah. Glad that's cleared up now! :)

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-17T04:58:12.530Z · LW · GW

> Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]

That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I'm confused by your claim that you're asking about conditions, when you haven't been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.

Something seems to be backwards here.

> I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need
>
> - an axiom describing what it means for one infinite wager to be "strictly better" than another.
> - an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

I'm confused here; it sounds like you're just describing, in the VNM framework, the strong continuity requirement, or in Savage's framework, P7? Of course Savage's P7 doesn't directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I'm less familiar with that.

> Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.

That doesn't make sense. If you *add* axioms, you'll only be able to conclude *more* things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!

**Sniffnoy** on A summary of Savage's foundations for probability and utility. · 2020-01-16T02:03:52.980Z · LW · GW

Here's a quick issue I only just noticed but which fortunately is easily fixed:

Above I mentioned you probably want to restrict to a sigma-algebra of events and only allow measurable functions as actions. But, what does measurable mean here? Fortunately, the ordering on outcomes (even without utility) makes measurability meaningful. Except this puts a circularity in the setup, because the ordering on outcomes is induced from the ordering on actions.

Fortunately this is easily patched. You can start with the assumption of a total preorder on outcomes (considering the case of decisions without uncertainty), to make measurability meaningful and restrict actions to measurable functions (once we start considering decisions under uncertainty); then, for P3, instead of the current P3, you would strengthen the current P3 by saying that (on non-null sets) the induced ordering on outcomes actually matches the original ordering on outcomes. Then this should all be fine.

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:40:43.816Z · LW · GW

(This is more properly a followup to my sibling comment, but posting it here so you'll see it.)

I already said that I think that thinking in terms of infinitary convex combinations, as you're doing, is the wrong way to go about it; but it took me a bit to put together why that's definitely the wrong way.

Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he's able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he's not necessarily considering arbitrary actions).

Savage doesn't assume probability! So if you want to disallow certain actions... how do you specify them? Or if you want to talk about convex combinations of actions -- not just infinitary ones, *any* ones -- how do you even define these?

In Savage's framework, you have to *prove* that if two actions can be described by the same probabilities and outcomes, then they're equivalent. E.g., suppose action A results in outcome X with probability 1/2 and outcome Y with probability 1/2, and suppose action B meets that same description. Are A and B equivalent? Well, yes, but that requires proof, because maybe A and B take outcome X on *different* sets of probability 1/2. (OK, in the two-outcome case it doesn't really require "proof", rather it's basically just his definition of probability; but the more general case requires proof.)

So, until you've established that theorem, that it's meaningful to combine gambles like that, and that the particular events yielding the probabilities aren't relevant, one can't really meaningfully define convex combinations at all. This makes it pretty hard to incorporate them into the setup or axioms!

More generally this should apply not only to Savage's particular formalism, but any formalism that attempts to ground probability as well as utility.

Anyway yeah. As I think I already said, I think we should think of this in terms not of what combinations of actions yield permitted actions, but rather whether there should be forbidden actions at all. (Note btw in the usual VNM setup there aren't any forbidden actions either! Although there infinite gambles are, while not forbidden, just kind of ignored.) But this is in particular why trying to put it in terms of convex combinations as you've done doesn't really work from a fundamentals point of view, where there is no probability yet, only preferences.

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-16T01:23:05.982Z · LW · GW

Apologies, but it sounds like you've gotten some things mixed up here? The issue is boundedness of utility functions, not whether they can take on infinity as a value. I don't think anyone here is arguing that utility functions don't need to be finite-valued. All the things you're saying seem to be related to the latter question rather than the former, or you seem to be possibly conflating them?

In the second paragraph perhaps this is just an issue of language -- when you say "infinitely high", do you actually mean "arbitrarily high"? -- but in the first paragraph this does not seem to be the case.

I'm also not sure you understood the point of my question, so let me make it more explicit. Taking the idea of a utility function and modifying it as you describe is what I called "backwards reasoning" above -- starting from the idea of a utility function, rather than starting from preferences. Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-09T09:56:32.529Z · LW · GW

Oh, so that's what you're referring to. Well, if you look at the theorem statements, you'll see that P=P_d is an axiom that is explicitly called out in the theorems where it's assumed; it's *not* implicitly part of Axiom 0 like you asserted, nor is it more generally left implicit at all.

> but the important part is that last infinite sum: this is where all infinitary convex combinations are asserted to exist. Whether that is assigned to "background setup" or "axioms" does not matter. It has to be present, to allow the construction of St. Petersburg gambles.

I really think that thinking in terms of infinitary convex combinations is the wrong way to go about this here. As I said above: You don't get a St. Petersburg gamble by taking some fancy convex combination, you do it by just constructing the function. (Or, in Fishburn's framework, you do it by just constructing the distribution; same effect.) I guess without P=P_d you do end up relying on closure properties in Fishburn's framework, but Savage's framework just doesn't work that way at all; and Fishburn with P=P_d, well, that's not a closure property. Rather what Savage's setup, and P=P_d have in common, is that they're, like, arbitrary-construction properties: If you can make a thing, you can compare it.

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-09T08:11:20.562Z · LW · GW

> Savage does not actually prove bounded utility. Fishburn did this later, as Savage footnotes in the edition I'm looking at, so Fishburn must be tackled.

Yes, it was actually Fishburn that did that. Apologies if I carelessly implied it was Savage.

IIRC, Fishburn's proof, formulated in Savage's terms, is in Savage's book, at least if you have the second edition. Which I think you must, because otherwise that footnote wouldn't be there at all. But maybe I'm misremembering? I think it has to be though...

> In Savage's formulation, from P1-P6 he derives Theorem 4 of section 2 of chapter 5 of his book, which is linear interpolation in any interval.

I don't have the book in front of me, but I don't recall any discussion of anything that could be called linear interpolation, other than the conclusion that expected utility works for finite gambles. Could you explain what you mean? I also don't see the relevance of intervals here? Having read (and written a summary of) that part of the book I simply don't know what you're talking about.

> Clearly, linear interpolation does not work on an interval such as [17,Inf], therefore there cannot be any infinitely valuable gambles. St. Petersburg-type gambles are therefore excluded from his formulation.

I still don't know what you're talking about here, but I'm familiar enough with Savage's formalism to say that you seem to have gotten quite lost somewhere, because this all sounds like nonsense.

From what you're saying, the impression that I'm getting is that you're treating Savage's formalism like Fishburn's, where there's some a priori set of actions under consideration, and so we need to know closure properties about that set. But, that's not how Savage's formalism works. Rather the way it works is that actions are just functions (possibly with a measurability condition -- he doesn't discuss this but you probably want it) from world-states to outcomes. If you can construct the action as a function, there's no way to exclude it.

> I shall have to examine further how his construction works, to discern what in Savage's axioms allows the construction, when P1-P6 have already excluded infinitely valuable gambles.

Well, I've already described the construction above, but I'll describe it again. Once again though, you're simply wrong about that last part; that last statement is not only incorrect, but fundamentally incompatible with Savage's whole approach.

Anyway. To restate the construction of how to make a St. Petersburg gamble. (This time with a little more detail.) An action is simply a function from world-states to outcomes.

By assumption, we have a sequence of outcomes a_i such that U(a_i) >= 2^i and such that U(a_i) is strictly increasing.

We can use P6 (which allows us to "flip coins", so to speak) to construct events E_i (sets of world-states) with probability 1/2^i.

Then, the action G that takes on the value a_i on the set E_i is a St. Petersburg gamble.

For the particular construction, you take G as above, and also G', which is the same except that G' takes the value a_1 on E_0, instead of the value a_0.

Savage proves in the book (although I think the proof is due to Fishburn? I'm going by memory) that given two gambles, both of which are preferred to any essentially bounded gamble, the agent must be indifferent between them. (The proof uses P7, obviously -- the same thing that proves that expected utility works for infinite gambles at all. I don't recall the actual proof offhand and don't feel like trying to reconstruct it right now, but anyway I think you have it in front of you from the sounds of it.) And we can show both these gambles are preferred to any essentially bounded gamble by comparing to truncated versions of themselves (using sure-thing principle) and using the fact that expected utility works for essentially bounded gambles. Thus the agent must be indifferent between G and G'. But also, by the sure-thing principle (P2 and P3), the agent must prefer G' to G. That's the contradiction.
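As a toy numerical check on why G really is a St. Petersburg gamble (taking U(a_i) = 2^i, the smallest utilities the assumption allows, and P(E_i) = 1/2^i as constructed; the code is just my illustration):

```python
# Each term of the expected-utility sum contributes P(E_i) * U(a_i) >= (1/2^i) * 2^i = 1,
# so the truncated expected utilities grow without bound -- the full sum diverges.
def truncated_expected_utility(n):
    """Expected utility of G restricted to the first n events E_1, ..., E_n."""
    return sum((1 / 2**i) * 2**i for i in range(1, n + 1))

print(truncated_expected_utility(10))   # 10.0
print(truncated_expected_utility(100))  # 100.0
```

Since every truncation of G is an essentially bounded gamble with ever-larger expected utility, G must be preferred to all of them, which is what feeds into the indifference-between-G-and-G' step above.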

Edit: Earlier version of this comment misstated how the proof goes

**Sniffnoy** on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:51:56.475Z · LW · GW

> Fishburn (op. cit., following Blackwell and Girschick, an inaccessible source) requires that the set of gambles be closed under infinitary convex combinations.

Again, I'm simply not seeing this in the paper you linked? As I said above, I simply do not see anything like that outside of section 9, which is irrelevant. Can you point to where you're seeing this condition?

> I shall take a look at Savage's axioms and see what in them is responsible for the same thing.

In the case of Savage, it's not any particular axiom, but rather the setup. An action is a function from world-states to outcomes. If you can construct the function, the action (gamble) exists. That's all there is to it. And the relevant functions are easy enough to construct, as I described above; you use P6 (the Archimedean condition, which also allows flipping coins, basically) to construct the events, and we have the outcomes by assumption. You assign the one to the other and there you go.

(If you don't want to go getting the book out, you may want to read the summary of Savage I wrote earlier!)

> A short answer to this (something longer later) is that an agent need not have preferences between things that it is impossible to encounter. The standard dissolution of the St. Petersburg paradox is that nobody can offer that gamble. Even though each possible outcome is finite, the offerer must be able to cover every possible outcome, requiring that they have infinite resources. Since the gamble cannot be offered, no preferences between that gamble and any other need exist.

So, would it be fair to sum this up as "it is not necessary to have preferences between two gambles if one of them takes on unbounded utility values"? Interesting. That doesn't strike me as wholly unworkable, but I'm skeptical. In particular:

- Can we phrase this without reference to utility functions? It would say a lot more for the possibility if we can.
- What if you're playing against Nature? A gamble can be any action; and in a world of unbounded utility functions, why should one believe that any action must have some bound on how much utility it can get you? Sure, sure, second law of thermodynamics and all that, but that's just a feature of the particular universe we happen to live in, not something that reshapes your preferences. (And if we were taking account of that sort of thing, we'd probably just say, oh, utility is bounded after all, in a kind of stupid way.) Notionally, it could be discovered to be wrong! It won't happen, but it's not probability literally 0.

Or are you trying to cut out a more limited class of gambles as impossible? I'm not clear on this, although I'm not certain it affects the results.

Anyway, yeah, as I said, my main objection is that I see no reason to believe that, if you have an unbounded utility function, Nature cannot offer you a St. Petersburg game. Or I mean, to the extent I do see reasons to believe that, they're facts about the particular universe we happen to live in, that notionally could be discovered to be wrong.

> Looking at the argument from the other end, at what point in valuing numbers of intelligent lives does one approach an asymptote, bearing in mind the possibility of expansion to the accessible universe? What if we discover that the habitable universe is vastly larger than we currently believe? How would one discover the limits, if there are any, to one's valuing?

This is exactly the sort of argument that I called "flimsy" above. My answer to these questions is that none of this is relevant.

Both of us are trying to extend our ideas about preferences from ordinary situations to extraordinary ones. (Like, I agree that some sort of total utilitarianism is a good heuristic for value under the conditions we're familiar with.) This sort of extrapolation, to an unfamiliar realm, is always potentially dangerous. The question then becomes, what sort of tools can we expect to continue to work, without needing any sort of adjustment to the new conditions?

I do not expect speculation about the particular form our preferences would take under these unusual conditions to be trustworthy. Whereas basic coherence conditions had damn well better continue to hold, or else we're barely even talking about sensible preferences anymore.

Or, to put it differently, my answer is, *I don't know, but the answer must satisfy basic coherence conditions*. There's simply no way that the idea that decision-theoretic utility has to increase linearly with the number of intelligent lives is on anywhere near as solid ground as that! The mere fact that it's stated in terms of a utility function in the first place, rather than in terms of something more basic, is something of a smell. Complicated statements we're not even entirely sure how to formulate can easily break in a new context. Short simple statements that *have* to be true for reasons of simple coherence don't break.

(Also, some of your questions don't seem to actually appreciate what a bounded utility function would actually mean. It wouldn't mean taking an unbounded utility function and then applying a cap to it. It would just mean something that naturally approaches 1 as things get better and 0 as things get worse. There is no *point* at which it approaches an asymptote; that's not how asymptotes work. There *is* no limit to one's valuing; presumably utility 1 does not actually occur. Or, at least, that's how I infer it would have to work.)
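A one-liner illustrates the point (the specific function is my own assumption, purely for illustration): a bounded utility function just approaches its bound, with no "cap" kicking in anywhere.

```python
# Illustrative bounded utility function (assumed form, not from the discussion):
# u(x) = x / (1 + x) is strictly increasing everywhere, approaches 1,
# and never attains it -- there is no point where a "cap" kicks in.
def u(x):
    return x / (1 + x)

for x in [1, 10, 100, 1000]:
    assert u(10 * x) > u(x)   # more is always strictly better...
    assert u(x) < 1           # ...yet utility never reaches the bound
```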

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-09T07:08:09.006Z · LW · GW

Huh. This would need some elaboration, but this is definitely the most plausible way around the problem I've seen.

Now (in Savage's formalism) actions are just functions from world-states to outcomes (maybe with a measurability condition), so regardless of your prior it's easy to construct the relevant St. Petersburg gambles if the utility function is unbounded. But it seems like what you're saying is: if we *don't* allow arbitrary actions, then the prior could be such that, not only are none of the permitted actions St. Petersburg gambles, but also this remains the case even after future updates. Interesting! Yeah, that just might be workable...

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-09T06:54:24.108Z · LW · GW

OK, so going by that you're suggesting, like, introducing varying caps and then taking limits as the cap goes to infinity? It's an interesting idea, but I don't see why one would expect it to have anything to do with preferences.

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:37:43.913Z · LW · GW

> You should check out Abram's post on complete class theorems. He specifically addresses some of the concerns you mentioned in the comments of Yudkowsky's posts.

So, it looks to me like what Abram is doing -- once he gets past the original complete class theorem -- is basically just inventing some new formalism along the lines of Savage. I think it is very misleading to refer to this as "the complete class theorem" -- how on earth was I supposed to know that *this* was what was being referred to when "the complete class theorem" was mentioned, when it resembles the original theorem so little (and it's the original theorem that was linked to)? -- and I don't see why it was necessary to invent this anew, but sure, I can accept that it presumably works, even if the details aren't spelled out.

But I must note that he starts out by saying that he's only considering the case when there's only a finite set of states of the world! I realize you weren't making a point about bounded utility here; but from that point of view, it is quite significant...

> Also, my inner model of Jaynes says that the right way to handle infinities is not to outlaw them, but to be explicit and consistent about what limits we're taking.

I don't really understand what that means in this context. It is already quite explicit what limits we're taking: Given an action (a measurable function from states of the world to outcomes), take its expected utility, with regard to the [finitely-additive] probability on states of the world. (Which is implicitly a limit of sorts.)

I think this is another one of those comments that makes sense if you're reasoning backward, starting from utility functions, but not if you're reasoning forward, from preferences. If you look at things from a utility-functions-first point of view, then it looks like you're outlawing infinities (well, unboundedness that leads to infinities). But from a preferences-first point of view, you're not outlawing anything. You haven't outlawed unbounded utility functions, rather they've just failed to satisfy fundamental assumptions about decision-making (remember, if you don't have P7 your utility function is not guaranteed to return correct results about infinite gambles at all!) and so clearly do not reflect your idealized preferences. You didn't get rid of the infinity, it was simply never there in the first place; the idea that it might have been turned out to be mistaken.

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T07:20:25.895Z · LW · GW

I think you've misunderstood a fair bit. I hope you don't mind if I address this slightly out of order.

> Or if infinite utilities are not immediately a problem, then by a more complicated argument, involving constructing multiple St. Petersburg-type combinations and demonstrating that the axioms imply that there both should and should not be a preference between them.

This is exactly what Fishburn does, as I mentioned above. (Well, OK, I didn't attribute it to Fishburn, I kind of implicitly misattributed it to Savage, but it was actually Fishburn; I didn't think that was worth going into.)

> I haven't studied the proof of boundedness in detail, but it seems to be that unbounded utilities allow St. Petersburg-type combinations of them with infinite utilities, but since each thing is supposed to have finite utility, that is a contradiction.

> He does not give details, but the argument that I conjecture from his text is that if there are unbounded utilities then one can construct a convex combination of infinitely many of them that has infinite utility (and indeed one can), contradicting the proof from his axioms that the utility function is a total function to the real numbers.

What you describe in these two parts I'm quoting is, well, not how decision-theoretic utility functions work. A decision-theoretic utility function is a function on outcomes, not on gambles over outcomes. You take expected utility of a gamble; you don't take utility of a gamble.

So, yes, if you have an unbounded decision-theoretic utility function, you can set up a St. Petersburg-style situation that will have infinite expected utility. But that is not by itself a problem! The gamble has infinite *expected* utility; no individual outcome has infinite utility. There's no contradiction yet.
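To make that concrete, here is a quick numerical sketch (assuming the standard construction, payoff 2^n with probability 2^-n): every outcome has finite utility, yet the partial expected utilities grow without bound.

```python
# Each outcome n has probability 2^-n and finite utility 2^n, so every term
# of the expectation equals 1 and the partial sums diverge.
def partial_expected_utility(n_terms):
    return sum((2 ** -n) * (2 ** n) for n in range(1, n_terms + 1))

print(partial_expected_utility(10))    # 10.0
print(partial_expected_utility(1000))  # 1000.0
```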

Of course, you then do get a contradiction when you attempt to compare two of these that have been appropriately set up, but...

> But by a similar argument, one might establish that the real numbers must be bounded, when instead one actually concludes that not all series converge

What? I don't know what one might plausibly assume that might imply the boundedness of the real numbers.

...oh, I think I see the analogy you're going for here. But, it seems to rest on the misunderstanding of utility functions discussed above.

> and that one cannot meaningfully compare the magnitudes of divergent infinite series.

Well, so, one must remember the goal here. So, let's start with divergent series, per your analogy. (I'm assuming you're discussing series of nonnegative numbers here, that diverge to infinity.)

So, well, there's any number of ways we could compare divergent series. We could just say that they sum to infinity, and so are equal in magnitude. Or we could try to do a more detailed comparison of their growth rates. That might not always yield a well-defined result though. So yeah. There's not any one universal way to compare magnitudes of divergent series, as you say; if someone asks, which of these two series is bigger, you might just have to say, that's a meaningless question. All this is as you say.
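As a toy illustration of that last point (the particular series are my own construction, not from the discussion): two series can both diverge to infinity while neither partial-sum sequence stays ahead of the other, so no comparison of magnitudes is forced on us.

```python
from itertools import accumulate

# Two divergent series of nonnegative terms: both partial-sum sequences go to
# infinity, but the lead between them flips forever, so "which series is
# bigger" has no well-defined answer.
a = [2, 0] * 50
b = [0, 3] * 50
A = list(accumulate(a))   # partial sums: 2, 2, 4, 4, 6, ...
B = list(accumulate(b))   # partial sums: 0, 3, 3, 6, 6, ...
leads = [A[i] > B[i] for i in range(len(A))]
assert True in leads and False in leads   # neither series stays ahead
```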

But that's not at all the situation we find ourselves in choosing between two gambles! If you reason backward, from the idea of utility functions, it might seem reasonable to say, oh, these two gambles are both divergent, so comparison is meaningless. But if you reason forward, from the idea of preferences... well, you have to pick one (or be indifferent). You can't just leave it undefined. Or if you have some formalism where preferences can be undefined (in a way that is distinct from indifference), by all means explain it... (but what happens when you program these preferences into an FAI and it encounters this situation? It has to pick. Does it pick arbitrarily? How is that distinct from indifference?)

That we have preferences between gambles is the whole thing we're starting from.

> I note that in order to construct convex combinations of infinitely many states, Fishburn extends his axiom 0 to allow this. He does not label this extension separately as e.g. "Axiom 0*". So if you were to ask which of his axioms to reject in order to retain unbounded utility, it could be none of those labelled as such, but the one that he does not name, at the end of the first paragraph on p.1055. Notice that the real numbers satisfy Axiom 0 but not Axiom 0*. It is that requirement that all infinite convex combinations exist that surfaces later as the boundedness of the range of the utility function.

Sorry, but looking through Fishburn's paper I can't see anything like this. The only place where any sort of infinite combination seems to be mentioned is section 9, which is not relevant. Axiom 0 means one thing throughout and allows only finite convex combinations. I simply don't see where you're getting this at all.

(Would you mind sticking to Savage's formalism for simplicity? I can take the time to properly read Fishburn if for some reason you *insist* things have to be done this way, but otherwise for now I'm just going to put things in Savage's terms.)

In any case, in Savage's formalism there's no trouble in proving that the necessary actions exist -- you don't have to go taking convex combinations of anything, you simply directly construct the functions. You just need an appropriate partition of the set of world-states (provided by the Archimedean axiom he assumes, P6) and an appropriate set of outcomes (which comes from the assumption of unbounded utility). You don't have to go constructing other things and then doing some fancy infinite convex combination of them.

If you don't mind, I'd like to ask: could you just tell me *what in particular* in Savage's setup or axioms you find to be the probable weak point? If it's P7 you object to, well, I already discussed that in the post; if you get rid of that, the utility function may be unbounded but it's no longer guaranteed to give correct results when comparing infinite gambles.

> While searching out the original sources, I found a paper indicating that at least in 1993, bounded utility theorems were seen as indicating a problem with Savage's axioms: "Unbounded utility for Savage's "Foundations of Statistics" and Other Models", by Peter Wakker. There is another such paper from 2014. I haven't read them, but they indicate that proofs of boundedness of utility are seen as problems for the axioms, not discoveries that utility must be bounded.

I realize a number of people see this as a problem. Evidently they have some intuition or argument that disagrees with the boundedness of utility. Whatever this intuition or argument is, I would be very surprised if it were as strong as the argument that utility must be bounded. There's no question that assumptions *can* be bad. I just think the reasons to think these *are* bad that have been offered, are seriously flimsy compared to the reasons to think that they're good. So I see this as basically a sort of refusal to take the math seriously. (Again: Which axiom should we throw out, or what part of the setup should we rework?)

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:31:23.979Z · LW · GW

> Is there a reason we can't just solve this by proposing arbitrarily large bounds on utility instead of infinite bounds? For instance, if we posit that utility is bounded by some arbitrarily high value X, then the wager can only payout values X for probabilities below 1/X.

I'm not sure what you're asking here. An individual decision-theoretic utility function can be bounded or it can be unbounded. Since decision-theoretic utility functions can be rescaled arbitrarily, naming a precise value for the bounds is meaningless; so like we could just assume the bounds are 0 below and 1 above.

So, I mean, yeah, you can make the problem go away by assuming bounded utility, but if you were trying to say something more than that, a bounded utility that is somehow "closer" to unbounded utility, then no such notion is meaningful.
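A sketch of why naming a specific bound X is meaningless (illustrative code; the gambles and functions are my assumptions): any positive affine rescaling of a utility function ranks every gamble identically, so the numerical value of the bound carries no information.

```python
# A gamble is a list of (probability, outcome) pairs. Utility functions that
# differ by a positive affine rescaling induce identical preferences.
def expected(u, gamble):
    return sum(p * u(x) for p, x in gamble)

u = lambda x: x
v = lambda x: 3 * x + 7   # arbitrary positive affine rescaling of u

g1 = [(0.5, 0), (0.5, 10)]
g2 = [(1.0, 4)]
assert (expected(u, g1) > expected(u, g2)) == (expected(v, g1) > expected(v, g2))
```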

Apologies if I've misunderstood what you're trying to do.

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:27:22.242Z · LW · GW

Yes, thanks, I didn't bother including it in the body of the post but that's basically how it goes. Worth noting that this:

> Both of these wagers have infinite expected utility, so we must be indifferent between them.

...is kind of shortcutting a bit (at least as Savage/Fishburn[0] does it; he proves indifference between things of infinite expected utility separately after proving that expected utility works when it's finite), but that is the essence of it, yes.

(As for the actual argument... eh, I don't have it in front of me and don't feel like rederiving it...)

[0]I initially wrote Savage here, but I think this part is actually due to Fishburn. Don't have the book in front of me right now though.

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:23:39.526Z · LW · GW

By "a specific gamble" do you mean "a specific pair of gambles"? Remember, preferences are between two things! And you hardly need a utility function to express a preference between a single pair of gambles.

I don't understand how to make sense of what you're saying. Agent's preferences are the starting point -- preferences as in, given a choice between the two, which do you pick? It's not clear to me how you have a notion of preference that allows for this to be undefined (the agent can be *indifferent*, but that's distinct).

I mean, you could try to come up with such a thing, but I'd be pretty skeptical of its meaningfulness. (What happens if you program these preferences into an FAI and then it hits a choice for which its preference is undefined? Does it act arbitrarily? How does this differ from indifference, then? By lack of transitivity, maybe? But then that's effectively just nontransitive indifference, which seems like it would be a problem...)

I think your comment is the sort of thing that sounds reasonable if you reason backward, starting from the idea of expected utility, but will fall apart if you reason forward, starting from the idea of preferences. But if you have some way of making it work, I'd be interested to hear...

**Sniffnoy**on Underappreciated points about utility functions (of both sorts) · 2020-01-08T06:14:34.737Z · LW · GW

> If you're not making a prioritarian aggregate utility function by summing functions of individual utility functions, the mapping of a prioritarian function to a utility function doesn't always work. Prioritarian utility functions, for instance, can do things like rank-order everyone's utility functions and then sum each individual utility raised to the negative-power of the rank-order ... or something*. They allow interactions between individual utility functions in the aggregate function that are not facilitated by the direct summing permitted in utilitarianism.

This is a good point. I might want to go back and edit the original post to account for this.

> So from a mathematical perspective, it is possible to represent many prioritarian utility function as a conventional utilitarian utility function. However, from an intuitive perspective, they mean different things:

> This doesn't practically affect decision-making of a moral agents but it does reflect different underlying philosophies -- which affects the kinds of utility functions people might propose.

Sure, I'll agree that they're different in terms of ways of thinking about things, but I thought it was worth pointing out that in terms of what they actually propose they are largely indistinguishable without further constraints.

**Sniffnoy**on Misconceptions about continuous takeoff · 2019-12-25T20:11:43.939Z · LW · GW

I don't really want to go trying to defend here a position I don't necessarily hold, but I do have to nitpick and point out that there's quite a bit of room in between exponential and hyperbolic.

**Sniffnoy**on Misconceptions about continuous takeoff · 2019-12-24T08:39:22.931Z · LW · GW

To be clear, intelligence explosion via recursive self-improvement has been distinguished from merely exponential growth at least as far back as Yudkowsky's "Three Major Singularity Schools". I couldn't remember the particular link when I wrote the comment above, but, well, now I remember it.

Anyway, I don't have a particular argument one way or the other; I'm just registering my surprise that you encountered people here arguing for merely *exponential* growth based on intelligence explosion arguments.

**Sniffnoy**on Bayesian examination · 2019-12-11T18:17:16.334Z · LW · GW

Yeah, proper scoring rules (and in particular both the quadratic/Brier and the logarithmic examples) have been discussed here a bunch, I think that's worth acknowledging in the post...
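For anyone landing here without that background, a minimal numerical check (with an assumed two-outcome belief, purely illustrative) that both the quadratic/Brier and logarithmic rules are proper -- honest reporting maximizes expected score:

```python
import math

def brier(report, outcome):      # negated Brier score: higher is better
    return -sum((report[i] - (1 if i == outcome else 0)) ** 2
                for i in range(len(report)))

def log_score(report, outcome):  # logarithmic score: higher is better
    return math.log(report[outcome])

def expected_score(score, belief, report):
    return sum(belief[i] * score(report, i) for i in range(len(belief)))

belief = [0.7, 0.3]
for score in (brier, log_score):
    honest = expected_score(score, belief, belief)
    for lie in ([0.5, 0.5], [0.9, 0.1], [0.6, 0.4]):
        # Reporting your true belief never does worse in expectation.
        assert honest >= expected_score(score, belief, lie)
```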

**Sniffnoy**on Bayesian examination · 2019-12-10T00:36:11.454Z · LW · GW

Kind of well-known here, but worth repeating I guess...

**Sniffnoy**on Misconceptions about continuous takeoff · 2019-12-02T22:53:07.014Z · LW · GW

> It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.

I'm not sure this is an accurate characterization of the point; my understanding is that the concern largely comes from the possibility that the growth will be *faster* than exponential, rather than merely exponential.

**Sniffnoy**on Goal-thinking vs desire-thinking · 2019-11-17T04:09:22.526Z · LW · GW

I mean, are you actually disagreeing with me here? I think you're just describing an intermediate position.

**Sniffnoy**on Goal-thinking vs desire-thinking · 2019-11-16T22:03:21.453Z · LW · GW

OK. I think I didn't think through my reply sufficiently. *Something* seemed off with what you were saying, but I failed to think through what and made a reply that didn't really make sense instead. But thinking things through a bit more now I think I can lay out my actual objection a bit more clearly.

I *definitely* think that if you're taking the point of view that suicide is preferable to suffering you're not applying what I'm calling goal-thinking. (Remember here that the description I laid out above is not intended as some sort of intensional definition, just my attempt to explicate this distinction I've noticed.) I don't think goal-thinking would consider nonexistence as some sort of neutral point as many do.

I think the best way of explaining this maybe is that goal-thinking -- or at least the extreme version which nobody actually uses -- is to simply not consider happiness or suffering or whatever as separate objects worth considering at all, that can be good or bad, or that should be acted on directly; but purely as *indicators* of whether one is achieving one's goals -- intermediates to be eliminated. In this point of view, suffering *isn't* some separate thing to be gotten rid of by whatever means, but simply the internal experience of not achieving one's goals, the only proper response to which is to go out and do so. You see?

And if we continue in this direction, one can also apply this to others; so you wouldn't have "not have other people suffer horribly" as a goal in the first place. You would always phrase things in terms of other's goals, and whether they're being thwarted, rather than in terms of their experiences.

Again, none of what I'm saying here necessarily follows from what I wrote in the OP, but as I said, that was never intended as an intensional definition. I think the distinction I'm drawing makes sense regardless of whether I described it sufficiently clearly initially.

**Sniffnoy**on Goal-thinking vs desire-thinking · 2019-11-11T03:08:59.238Z · LW · GW

This is perhaps an intermediate example, but I do think that once you're talking about internal experiences to be avoided, it's definitely not all the way at the goal-thinking end.

**Sniffnoy**on Goal-thinking vs desire-thinking · 2019-11-11T00:16:09.488Z · LW · GW

Hm, I suppose that's true. But I think the overall point still stands? It's illustrating a type of thinking that doesn't make sense to one thinking in terms of concrete, unmodifiable goals in the external world.

**Sniffnoy**on Coherent decisions imply consistent utilities · 2019-10-21T01:06:15.455Z · LW · GW

So this post is basically just collecting together a bunch of things you previously wrote in the Sequences, but I guess it's useful to have them collected together.

I must, however, take objection to one part. The proper non-circular foundation you want for probability and utility is not the complete class theorem, but rather Savage's theorem, which I previously wrote about on this website. It's not short, but I don't think it's too inaccessible.

Note, in particular, that Savage's theorem does not start with any assumption baked in that **R** is the correct system of numbers to use for probabilities[0], instead deriving that as a conclusion. The complete class theorem, by contrast, has real numbers in the assumptions.

In fact -- and it's possible I'm misunderstanding -- but it's not even clear to me that the complete class theorem does what you claim it does, at all. It seems to assume probability at the outset, and therefore cannot provide a grounding for probability. Unlike Savage's theorem, which does. Again, it's possible I'm misunderstanding, but that sure seems to be the case.

Now this has come up here before (I'm basically in this comment just restating things I've previously written) and your reply when I previously pointed out some of these issues was, frankly, nonsensical (your reply, my reply), in which you claimed that the statement that one's preferences form a partial preorder is a *stronger* assumption than "one prefers more apples to less apples", when, in fact, the exact reverse is the case.

(To restate it for those who don't want to click through: If one is talking solely about one's preferences over number of apples, then the statement that more is better immediately yields a total preorder. And if one is talking about preferences not just over number of apples but in general, then... well, it's not clear how what you're saying applies directly; and taken less literally, it just in general seems to me that the complete class theorem is making some very strong assumptions, *much* stronger than that of merely a total preorder (e.g., real numbers!).)

In short, the use of the complete class theorem here in place of Savage's theorem would appear to be an error, and I think you should correct it.

[0]Yes, it includes an Archimedean assumption, which you could argue is the same thing as baking in **R**; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking **R** as your number system means as a statement about the agent's preferences.

**Sniffnoy**on Noticing Frame Differences · 2019-10-19T17:47:49.078Z · LW · GW

Thirding what the others said, but I wanted to also add that rather than actual game theory, what you may be looking for here may instead be the anthropological notion of limited good?

**Sniffnoy**on The Forces of Blandness and the Disagreeable Majority · 2019-05-02T16:30:04.671Z · LW · GW

Sorry, but: The thing at the top says this was crossposted from Otium, but I see no such post there. Was this meant to go up there as well? Because it seems to be missing.

**Sniffnoy**on An Extensive Categorisation of Infinite Paradoxes · 2019-03-18T05:55:32.104Z · LW · GW

OK, time to actually now get into what's wrong with the ones I skipped initially. Already wrote the intro above so not repeating that. Time to just go.

**Infinitarian paralysis**: So, philosophical problems to start: As an actual decision theory problem this is all moot since you can't actually have an infinite number of people. I.e. it's not clear why this is a problem at all. Secondly, naive assumption of utilitarian aggregation as mentioned above, etc, not going over this again. Enough of this, let's move on.

So what are the mathematical problems here? Well, you haven't said a lot here, but here's what it looks like to me. I think you've written one thing here that is essentially correct, which is that, *if* you did have some system of surreal-valued utilities, it would indeed likely make the distinction you want.

But, once again, that's a big "if", and not just for philosophical reasons but for the mathematical reasons I've already brought up so many times right now -- you can't do infinite sums in the surreals like you want, for reasons I've already covered. So there's a reason I included the word "likely" above, because if you did find an appropriate way of doing such a sum, I can't even necessarily guarantee that it would behave like you want (yes, finite sums should, but infinite sums require definition, and who knows if they'll actually be compatible with finite sums like they should be?).

But the really jarring thing here, the thing that really exposes a serious error in your thought (well, OK, one that does so to a greater extent), is not in your proposed solution -- it's in what you contrast it with. *Cardinal*-valued utilities? Nothing about that makes sense! That's not a remotely well-defined alternative you can contrast with! And the thing that bugs me about this error is that it's just so *unforced* -- I mean, man, you could have said "extended reals" rather than cardinals, and made essentially the same point while making at least some sense! This is just demonstrating once again that not only do you not understand surreals, you do not understand cardinals or ordinals either.

(Well, I suppose technically there's the possibility that you do but expect your audience doesn't and are talking down to them, but since you're writing here on Less Wrong, I'm going to assume that's not the case.)

Seriously, cardinals and utilities do not go together. I mean, cardinals and *real numbers* do not go together. Like surreals and utilities don't go together either, but at least the surreals include the reals! At least you can *attempt* to treat it naively in special cases, as you've done in a number of these examples, even if the result probably isn't meaningful! Cardinals you can't even do that.

And once again, there's no reason anyone who understood cardinals would even *want* cardinal-valued utilities. That's just not what cardinals are for! Cardinals are for counting how many there are of something. Utility calculations are not a "how many" problem.

**Sphere of suffering**: Once again we have infinitely many people (so this whole problem is again a non-problem) and once again we have some sort of naive utility aggregation over those infinitely many people with all the mathematical problems that brings (only now it's over time-slices as well?). Enough of this, moving on.

Honestly I don't have much new to say about the bad mathematics here, much of it is the same sort of mistakes as you made in the ones I covered in my initial comment. To cover those ones briefly:

- Surreal numbers do not measure how far a grid extends (similar to examples I've already covered)
- There's not a *question* of how far the grid extends; allowing it to be a transfinite variable *l* is just changing the problem (similar to examples I've already covered)
- Surreal numbers also do not measure number of time steps; you want ordinals for that (similar to examples I've already covered)
- Repeat #2 but for the time steps (similar to examples I've already covered)

But OK. The one new thing here, I guess, is that now you're talking about a "majority" of the time slices? Yeah, that is once again not well-defined at all. Cardinality won't help you here, obviously; are you putting a measure on this somehow? I think you're going to have some problems there.

**Trumped**: Same problems I've discussed before. Surreal numbers do not count time steps, you're changing the problem by introducing a variable, utility aggregation over an infinite set (this time of time-slices rather than people), you know the drill.

But actually here you're changing the problem in a different way, by supposing that Trump knows in advance the number of time steps? The original problem just had this as a repeated offer. Maybe that's a philosophical rather than mathematical problem. Whatever. It's changing the problem, is the point.

And then on top of that your solution doesn't even make any sense. Let's suppose you meant an ordinal number of days rather than a surreal number of days, since that is what you'd actually use in this context. OK. Suppose for example then that the number of days is ω (which is, after all, the original problem before you changed it). So your solution says that Trump should accept the deal so long as the day number is less than the surreal number ω/3. Except, oops! Every ordinal less than ω is also less than ω/3. Trump always accepts the deal, we're back at the original problem.
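The degenerate comparison can be sketched directly. This is a toy model only: surreals don't fit in machine arithmetic, so the single relevant fact (every natural number is less than ω/3) is hard-coded.

```python
# Toy model of the proposed rule "accept while the current day number is
# below omega/3", in the original omega-day version of the problem.
def finite_day_below_omega_third(day):
    # n < omega/3 holds for *every* natural number n
    return True

def trump_accepts(day):
    return finite_day_below_omega_third(day)

# The rule never triggers a rejection on any actual day:
assert all(trump_accepts(d) for d in range(10**6))
```

So the "solution" reduces to the original problem: accept every day.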

I.e., even granting that you can somehow make all the formalism work, this is still just wrong.

**St. Petersburg paradox**: OK, so, there's a lot wrong here. Let me get the philosophical problem out of the way first -- the real solution to the St. Petersburg paradox is that you must look not at expected money, but at expected utility, and utility functions must be bounded, so this problem can't arise. But let's get to the math, because, like I said, there's a lot wrong here.

Let's get the easy-to-describe problems out of the way first: You are once again using surreals where you should be using ordinals; you are once again assuming some sort of theory of infinite sums of surreals; getting infinitely many heads has zero probability, not infinitesimal (probabilities are real-valued; you could try to introduce a theory of surreal probabilities, but that will have problems already discussed), so what happens in that case is irrelevant; you are once again changing the problem by allowing things to go on beyond ω steps; and, minor point, but where on earth did the function n |-> n come from? Don't you mean n |-> 2^n?
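To make the two quantitative points concrete, here's a quick sketch of the standard setup (taking the payoff on the n-th flip to be 2^n, as presumably intended):

```python
from fractions import Fraction

# Standard St. Petersburg: keep flipping until the first tails; if the
# game ends on flip n, the payoff is 2**n.
def p_still_going(n):
    # probability that the first n flips were all heads
    return Fraction(1, 2) ** n

def truncated_expected_payoff(n):
    # the game ends on flip k with probability (1/2)**k, paying 2**k
    return sum(Fraction(1, 2) ** k * 2 ** k for k in range(1, n + 1))

# "Still going" drops below every positive real probability, so the
# event "infinitely many heads" has probability exactly 0:
assert p_still_going(200) < Fraction(1, 10) ** 60
# ...while each flip contributes exactly 1 to the expectation, which
# therefore diverges:
assert truncated_expected_payoff(100) == 100
```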

OK, that's largely stuff I've said before. But the thing that puzzled me the most in your claimed solution is the first sentence:

If we model this with surreals, then simply stating that there is potentially an infinite number of tosses is undefined.

*What*? I mean, yeah, sure, the surreals have multiple infinities while, say, the extended nonnegative reals have only one, no question there. But that sentence still makes no sense! It, like, seems to reveal a fundamental misunderstanding so great I'm having trouble comprehending it. But I will give it my best shot.

So the thing is that -- ignoring the issue of unbounded utility and what's the correct decision -- the original setup has no ambiguities. You can't choose to make it different by changing what system of numbers you *describe* it with. Now, I don't know if you're making the mistake I think you're making, because who knows what mistake you might be making, but it looks to me like you're confusing numbers that are part of the actual problem specification with auxiliary numbers just used to *describe* the problem.

Like, what's actually going on here is that there is a set of coin flips, right? The elements of that set will be indexed by the natural numbers, and will form a (possibly improper, though with probability 0) initial segment of it -- those numbers are part of the actual problem specification. The idea though that there might be infinitely many coin flips... that's just a description. When I say "With probability 0, the set of flips will be infinite", that's just another way of saying, "With probability 0, the set of flips will be **N**." It doesn't make sense to ask "Ah, but what system of numbers are you using to measure its infinitude?" It doesn't matter! The set I'm describing is **N**! (And in any case I just said it was an infinite *set*, although I suppose you could say I was implicitly using cardinals.)

This is, I suppose, an idea that's shown up over and over in your claimed solutions, but since I skipped over this particular one before, I guess I never got it so explicitly before. Again, I'm having to guess what you think, but it looks to me like you think that the *numbers* are what's primary, rather than the actual *objects* the problems are about, and so you can just change the numbers system and get a different version of the same problem. I mean, OK, often the numbers *are* primary and you can do that! But sometimes they're just descriptive.

Oy. I have no idea whether I've correctly described your misunderstanding, but whatever it is, it's pretty big. Let's just move on.

**Trouble in St. Petersburg**: Can I first just complain that your numbers don't seem to match up with your text? 13 is not 9*2+3. I'm just going to assume you meant 21 rather than 13, because none of the other interpretations I can come up with make sense.

Also this problem once again relies on unbounded utilities, but I don't need to go on about that. (Although if you were to somehow reformulate it without those -- though that doesn't seem possible in this coin-flip formulation -- then the problem would be basically similar to Satan's Apple. I have my own thoughts on that problem, but, well, I'm not going to go into it here because that's not the point.)

Anyway, let's get to the surreal abuse! Well, OK, again I don't have much new to say here, it's the same sort of surreal abuse as you've made before. Namely: Using surreals where they don't make sense (time steps should be counted by ordinals); changing the problem by introducing a transfinite variable; thinking that all ordinals are successor ordinals (sorry, but with n=ω, i.e. the original problem, there's still no last step).

Ultimately you don't offer any solution? Whatever. The errors above still stand.

**The headache**: More naive aggregation and thinking you can do infinite sums and etc. Or at least so I'm gathering from your claimed solution. Anyway that's boring.

The surreal abuse here though is also boring, same types as we've seen before -- using surreals where they make no sense but where ordinals would; ignoring the existence of limit ordinals; and of course the aforementioned infinite sums and such.

OK. That's all of them. I'm stopping there. I think the first comment was really enough to demonstrate my point, but now I can honestly claim to have addressed every one of your examples. Time to go sleep now.

**Sniffnoy**on An Extensive Categorisation of Infinite Paradoxes · 2019-03-18T03:42:15.451Z · LW · GW

OK, time for the second half, where I get to the errors in the ones I initially skipped. And yes, I'm going to assert some philosophical positions which (for whatever reason) aren't well-accepted on this site, but there's still plenty of mathematical errors to go around even once you ignore any philosophical problems. And yeah, I'm still going to point out missing formalism, but I will try to focus on the more substantive errors, of which there are plenty.

So, let's get those philosophical problems out of the way first, and quickly review utility functions and utilitarianism, because this applies to a bunch of what you discuss here. Like, this whole post takes a very naive view of the idea of "utility", and this needs some breaking down. Apologies if you already know all of what I'm about to say, but I think given the context it bears repeating.

So: There are two different things meant by "utility function". The first is decision-theoretic; an agent's utility function is a function whose expected value it attempts to maximize. The second is the one used by utilitarianism, which involves (at present, poorly-defined) "E-utility" functions, which are *not* utility functions in the decision-theoretic sense, that are then somehow aggregated (maybe by addition? who knows?) into a decision-theoretic utility function. Yes, this terminology is terribly confusing. But these are two separate things and need to be kept separate.

Basically, any agent that satisfies appropriate rationality conditions has a utility function in the decision-theoretic sense (obviously such idealized agents don't actually exist, but it's still a useful abstraction). So you could say, roughly speaking, any rational consequentialist has a decision-theoretic utility function. Whereas E-utility is specifically a utilitarian notion, rather than a general consequentialist or purely descriptive notion like decision-theoretic utility (it's also not at all clear how to define it).

Anyway, if you want surreal E-utility functions... well, I think that's still probably pretty dumb for reasons I'll get to, but since E-utility is so poorly defined that's not *obviously* wrong. But let's talk about decision-theoretic utility functions. These need to be real-valued for very good reasons.

Because, well, why use utility functions at all? What makes us think that a rational agent's preferences can be described in terms of a utility function in the first place? Well, there's an answer to that: Savage's theorem. I've already described this above -- it gives rationality conditions, phrased directly in terms of an agent's preferences, that together suffice to guarantee that said preferences can be described by a utility function. And yes, it's real-valued.

(And, OK, it's real-valued because Savage includes an Archimedean assumption, but, well -- do you think that's a bad assumption? Let me repeat here a naive argument against infinite and infinitesimal utilities I've seen before on this site (I forget who it's due to; Eliezer, maybe?). Suppose we go with a naive treatment of infinitesimal utilities, and A has infinitesimal utility compared to B. Then since any action you take at *all* has *some* positive (real, non-infinitesimal) probability of bringing about B, even sitting in your room waving your hand back and forth in the air, A simply has no effect on your decision making; all considerations of B, even stupid ones, completely wash it out. Which means that A's infinitesimal utility does not, in fact, have any place in a decision-theoretic utility function. Do you really want to throw out that Archimedean assumption? Also if you do throw it out, I don't think that actually gets you non-real-valued utilities, I think it just, y'know, doesn't get you utilities. The agent's preferences can't necessarily be described with a utility function of any sort. Admittedly I could be wrong about that last part; I haven't checked.)
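That washing-out argument can be caricatured numerically. This is an illustration only: a tiny float stands in for the infinitesimal, whereas a genuine infinitesimal sits below *every* positive real.

```python
# If A's utility is infinitesimal next to B's, then any action with any
# real probability of producing B dominates securing A outright.
EPS = 1e-300        # stand-in for the "infinitesimal" utility of A
P_B_WAVE = 1e-12    # tiny but real chance that hand-waving brings about B
U_B = 1.0

eu_secure_A = EPS               # act that guarantees A, with no chance of B
eu_wave = P_B_WAVE * U_B        # act that merely waves a hand around

assert eu_wave > eu_secure_A    # A never affects the decision
```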

In short, your philosophical mistake here is of a kind with your mathematical mistakes -- in both cases, you're starting from a system of numbers (surreals) and trying awkwardly to fit it to the problem, even when it blatantly does *not* fit, does not have the properties that are required; rather than seeing what requirements the problem actually *calls* for and finding something that meets those needs. As I've pointed out multiple times by now, you're trying to make use of properties that the surreal numbers just don't have. Work forward from the requirements, don't try to force into them things that don't meet them!

By the way, Savage's theorem also shows that utility functions must be *bounded*. That utility functions must be bounded does not, for whatever reason, seem to be a well-accepted position on this site, but, well, it's correct so I'm going to continue asserting it, including here. :P Now it's true that the VNM theorem doesn't prove this, but that's due to a deficiency in the VNM theorem's assumptions, and with that gap fixed it does. I don't want to belabor this point here, so I'll just refer you to this previous discussion.

(Also the VNM theorem is just a worse foundation generally because it assumes real-valued probabilities to begin with, but that's a separate matter. Though maybe here it's not -- since you can't claim to avoid the boundedness requirement by saying you're justifying the use of utilities with VNM rather than Savage, since you seem to want to allow surreal-valued probabilities!)

Anyway, so, yes, utilities should be real-valued (and bounded) or else you have no good reason to use them -- to use surreal-valued utilities is to start from the *assumption* that you should use utilities (a big assumption! why would one ever assume such a thing?) when it should be a conclusion (a conclusion of theorems that say it must be real-valued).

Ah, but could infinities or infinitesimals appear in an E-utility function, that the utilitarians use? I've been ignoring those, after all. But, since they're getting aggregated into a decision-theoretic utility function, which is real-valued (or maybe it's not quite a decision-theoretic utility function, but it should still be real-valued by the naive argument above), unless this aggregation function can magnify an infinitesimal into a non-infinitesimal, the same problem will arise, the infinitesimals will still have no relevance, and thus should never have been included.

(Yeah, I suppose in what you write you consider "summing over an infinite number of people". But: 1. such infinite sums with infinitesimals don't actually work mathematically, for reasons I've already covered, and 2. you can't actually have an infinite number of people, so it's all moot anyway.)

Yikes, all that and I haven't even gotten to examining in detail the particular mathematical problems in the remaining ones! You know what, I'll end this here and split that comment out into a third post. Point is, now in these remaining ones, when I want to point out philosophical problems, I can just point back to this comment rather than repeating all this again.

**Sniffnoy**on An Extensive Categorisation of Infinite Paradoxes · 2019-02-27T20:24:03.335Z · LW · GW

My primary response to this comment will take the form of a post, but I should add that I wrote: "I will provide informal hints on how surreal numbers could help us solve some of these paradoxes, although the focus on this post is primarily categorisation, so please don't mistake these for formal proofs".

You're right; I did miss that, thanks. It was perhaps unfair of me then to pick on such gaps in formalism. Unfortunately, this is only enough to rescue a small portion in the post. Ignoring the ones I skipped -- maybe it would be worth my time to get back to those after all -- I think the only one potentially rescued that way is the envelope problem. (I'm still skeptical that it *is* -- I haven't looked at it in enough detail to say -- but I'll grant you that it could be.)

(Edit: After rechecking, I guess I'd count Grandi's series and Thomson's lamp here too, but only *barely*, in the sense that -- after giving you quite a bit of benefit of the doubt -- yeah I *guess* you could define things that way but I see absolutely no reason why one would want to and I seriously doubt you gain anything from doing so. (I was about to include god picking a random integer here, too, but on rechecking again, no, that one still has serious other problems even if I give you more leeway than I initially did. Like, if you try to identify ∞ with a specific surreal, say ω, there's no surreal you can identify it with that will make your conclusion correct.))

The rest of the ones I pointed out as wrong (involving surreals, anyway) all contain more substantial errors. In some cases this becomes evident after doing the work and attempting to formalize your hints; in other cases they're evident immediately, and clearly do not work even informally.

The magic dartboard is a good example of the latter -- you've simply given an incorrect proof of why the magic dartboard construction works. In it you talk about ω_1 having a first half and a second half. You don't need to do any deep thinking about surreals to see the problem here -- that's just not what ω_1 looks like, at all. If you *do* follow the hint, and compare the elements of ω_1 to (ω_1)/2 in the surreals, then, as already noted, you find everything falls in the first half, which is not very helpful. (Again: This is the sort of thing that causes me to say, I suspect you need to relearn ordinals and probably other things, not just surreals. If you actually understand ordinals, you should not have any trouble proving that the magic dartboard acts as claimed, without any need to go into the surreals and perform division.)

Meanwhile the paradox of the gods is, as I've already laid out in detail, an example of the former. It sounds like a nice informal answer that could possibly be formalized, sure; but if you try to actually follow the hint and do that -- switching to surreal time and space as needed, of course -- it still makes no sense for the reasons I've described above. Because, e.g., ω is a limit ordinal and not a successor ordinal (this is a repeated mistake throughout the post, ignoring the existence of limit ordinals), because in the surreals there are no infima of sets (that aren't minima), because the fact that a surreal exponential *exists* doesn't mean that it acts like you want it to (*algebraically* it does everything you might want, but this problem isn't about algebraic properties) or that there's anything special about the points it picks out.

In addition, some of the things one is expected to just go with would require more explanation not just to formalize (like surreal integration) but even to make informal sense of (like what structure you are putting on a set, or what you are embedding it in, that would make a surreal an appropriate measure of its size).

In short, your hints are not hints towards an already-existing solution (or at least, not one that anyone other than you would accept); they're analogy-driven speculation as to what a solution could look like. Obviously there's nothing wrong with analogy-driven speculation! I could definitely go on about some analogy-driven speculation of mine involving surreals! But, firstly, that's not what you presented it as; secondly, in most of your cases it's actually fairly easy (with a bit of relevant knowledge) to follow the breadcrumb trail and see that in fact it goes nowhere, as I did in my reply; and, thirdly, you're purporting to "solve" things that aren't actually problems in the first place. The second being the most important here, to be clear.

(And I think the ones I skipped demonstrate even more mathematical problems that I didn't get to, but, well, I haven't gotten to those.)

FWIW, I'd say surreal decision theory is a bad idea, because, well, Savage's theorem -- that's a lot of my philosophical objections right there. But I should get to the actual mathematical problems sometime; the philosophical objections, while important, are, I expect, not as interesting to you.

Basically, the post treats the surreals as a sort of device for automatically making the infinite behave like the finite. They're not. Yes, their structure as an ordered field (ordered exponential field, even) means that their *algebraic* behavior resembles such familiar finite settings as the real numbers, in contrast to the quite different arithmetic of (say) the ordinal or cardinal numbers (one might even include here the extended real line, with its mostly-all-absorbing ∞). But the things you're trying to do here often involve more than arithmetic or algebra, and then the analogies quickly fall apart. (Again, I'd see our previous exchange here for examples.)

**Sniffnoy**on An Extensive Categorisation of Infinite Paradoxes · 2019-02-26T09:55:14.948Z · LW · GW

Almost nothing in this post is correct. This post displays not just a misuse of and failure to understand surreal numbers, but a failure to understand cardinals, ordinals, free groups, lots of other things, and just how to think about such matters generally, much as in our last exchange. The fact that (as I write this) this is sitting at +32 is an *embarrassment* to this website. You really, really, need to go back and relearn all of this from scratch, because going by this post you don't have the slightest clue what you're talking about. I would encourage everyone else to stop upvoting this crap.

This whole post is just throwing words around and making assertions that assume things generalize in a particular naïve way that you expect. Well, they don't, and certainly not obviously.

Really, the whole idea here is wrong. The fact that something *does not extend* to infinities or infinitesimals is not somehow a paradox. Many things don't extend. There's nothing wrong with that. Some things, of course, do extend if you do things properly. Some things extend in more than one way, with none of them being more natural than the others! But if something doesn't extend, it doesn't extend. That's not a paradox.

Similarly, the fact that something has unexpected results is not a paradox. The right solution for some of these is just to actually formalize them and accept the results. No further "resolution" is required.

In the hopes of making my point absolutely clear, I am going to take these one by one. ~~(As per the bullshit asymmetry principle, I'm afraid my response will be much longer than the original post.)~~ (OK, I guess that turned out not to be true.) Those that involve philosophical problems in addition to just mathematical problems I will skip on my first pass, if you don't mind (well, some of them, anyway; and I may have slightly misjudged some of the ones I skipped, because, well, I skipped them -- point is I'm skipping some, it hardly matters, the rest are enough to demonstrate the point, but *maybe* I will get back to the skipped ones later). Note that I'm going to focus on problems involving infinities somehow; if there are problems not involving infinities I'll likely miss them.

**Infinitarian paralysis**: Skipping for now due to philosophical problems in addition to mathematical ones.

**Paradox of the gods**: You haven't stated your setup here formally, but if I try to formalize it (using real numbers as is probably appropriate here) I come to the conclusion that yes, the man cannot leave the starting point. Is this a "paradox"? No, it's just what you get if you actually formalize this. The continuum is counterintuitive! It doesn't quite fit our usual notions of causality! Think about differential equations for a moment -- is it a "paradox" that some differential equations have nonunique solutions, even though it seems that a particle's position, velocity, and relation between the two ought to "cause" its future trajectory? No! This is the same sort of thing; continuous time and continuous space do not work like discrete time and discrete space.
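For instance (a standard textbook example, not one from the original post): the initial value problem x'(t) = 2·sqrt(|x|), x(0) = 0, is solved both by x(t) = 0 and by x(t) = t^2, so the present state fails to determine the future.

```python
import math

def f(x):
    # right-hand side of the ODE x'(t) = 2 * sqrt(|x|)
    return 2 * math.sqrt(abs(x))

def residual(x, dx):
    # how far a candidate (position, velocity) pair is from satisfying the ODE
    return abs(dx - f(x))

for t in [0.0, 0.5, 1.0, 2.0]:
    assert residual(0.0, 0.0) == 0.0        # x(t) = 0 is a solution
    assert residual(t * t, 2 * t) < 1e-12   # x(t) = t**2 is another
```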

But in addition to your "resolution" being unnecessary, it's also nonsensical. You're taking the *number* of gods as a surreal number. That's nonsense. Surreal numbers are not for counting how many of something there are. Are you trying to map cardinals to surreals? I mean, yeah, you *could* define such a map, it's easy to do with AC, but is it meaningful? Not really. You do not count numbers of things with surreals, as you seem to be suggesting.

Of course, there's more than one way to measure the size of an infinite set, not just cardinality. Since you translate the number into a surreal, perhaps you meant the set of gods to be reverse well-ordered, so that you can talk about its reverse order type, as an ordinal, and take that as a surreal? That would go a *little* way to making this less nonsensical, but, well, you never said any such thing.

Of course, your solution seems to involve implicitly changing the setting to have surreal-valued time and space, but that makes sense -- it does make sense to try to make such "paradoxes" make more sense by extending the domain you're talking about. You might want to make more of an explicit note of it, though. Anyway, let's get back to nonsense.

So let's say we accept this reverse-well-ordering hypothesis. Does your "resolution" follow? Does it even make sense? No to both! First, your "resolution" isn't so much a deduction as a new assumption -- that these reverse-well-ordered gods are placed at positions 1/2^α for ordinals α. I mean, I guess that's a sensible extension of the setup, but... let's note here that you actually *are* changing the setup significantly at this point; the original setup pretty clearly had ω gods, not more. But, OK, that's fine -- you're generalizing, from the case of ω to the case of more. You should be more explicit that you're doing that, but I guess that's not wrong.

But your conclusion still *is* wrong. Why? Several reasons. Let's focus on the case of ω-many gods, that the original setup describes. You say that the man is stopped at 1/2^ω. Question: Why? Is 1/2^ω the minimum of the set {1/2^n : n ∈ **N** } inside the surreals? Well, obviously not, because that set obviously has no smallest element.

But is it the infimum (or equivalently limit), then, inside the surreals, if not the minimum? Actually, let's put that question aside for now and note that the answer to this question is actually irrelevant! Because if you accept the logic that the infimum (or equivalently limit) controls, then, guess what, you already *have* your resolution to the paradox back in the real numbers, where there's an infimum and it's 0. So all the rest of this is irrelevant.

But let's go on -- *is* it the infimum (or equivalently limit)? No! It's not! Because there is no infimum! A subset of the surreals with no minimum also has no infimum, always, unconditionally! The surreal numbers are not at all like the real numbers. You basically can't do limits there, as we've already discussed. So there's nothing particularly distinguishing about the point 1/2^ω, no particular reason why that's where the man would stop. (There's no god there! We're talking about the case of ω gods, not ω+1 gods.)

We haven't even asked the question of what you mean by 2^s for a surreal s. I'm going to assume, since you're talking about surreals and didn't specify otherwise, that you mean exp(s log 2), using the usual surreal exponential. But, since you're only concerned with the case where s is an ordinal, maybe you actually meant taking 2^s using ordinal exponentiation, and then taking the reciprocal as a surreal. These are different, I hope you realize that!

What about if we use {left set|right set} instead of limits and infima? Well, there's now even less reason to believe that such a point has any relevance to this problem, but let's make a note of what we get. What is {|1, 1/2, 1/4, ...}? Well, it's 0, duh. OK, what if we exclude that by asking for {0|1, 1/2, 1/4, ...} instead? That's 1/ω. This isn't 1/2^ω; it's larger -- well, unless you meant "use ordinal exponentiation and then invert", in which case it is indeed equal and you need to be a hell of a lot clearer but it's all still irrelevant to anything. (Using ordinal exponentiation, 2^ω = ω; while using the surreal exponential, 2^ω = ω^(ω log 2) > ω.)
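The finite version of the {left|right} rule is small enough to compute. This is a sketch for finite sets of dyadic rationals only; the genuinely infinite cases, where values like 1/ω appear, are of course out of its reach.

```python
from fractions import Fraction

def simplest_between(left, right):
    """Finite {L | R}: the earliest-born dyadic rational strictly above
    everything in `left` and strictly below everything in `right`."""
    lo = max(left, default=None)
    hi = min(right, default=None)
    fits = lambda x: (lo is None or x > lo) and (hi is None or x < hi)
    born = [Fraction(0)]        # every number born so far
    gen = [Fraction(0)]         # the newest generation (day)
    while True:
        hits = [x for x in gen if fits(x)]
        if hits:
            # numbers born on the same day are separated by earlier-born
            # numbers, so at most one same-day number can fit the gap
            return hits[0]
        s = sorted(born)
        gen = [s[0] - 1, s[-1] + 1] + [(a + b) / 2 for a, b in zip(s, s[1:])]
        gen = [x for x in gen if x not in set(born)]
        born.extend(gen)

# { | 1, 1/2, 1/4 } = 0, as claimed:
assert simplest_between([], [Fraction(1, 2**k) for k in range(3)]) == 0
# Truncations of {0 | 1, 1/2, 1/4, ...} just give ever-smaller dyadics
# (here 1/8); only the full infinite right set closes the gap, at 1/omega:
assert simplest_between([Fraction(0)],
                        [Fraction(1, 2**k) for k in range(3)]) == Fraction(1, 8)
```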

(What if we use sign-sequence limits, FWIW? That'll still get us 1/ω. You really shouldn't use those though.)

Anyway, in short, your resolution makes no sense. Moving on...

**Two envelopes paradox**: OK, I'm ignoring all the parts that don't have to do with surreals, including the use of an improper prior (aka *not a probability distribution*); I'm just going to examine the use of surreals.

Please. Explain. How, on earth, does one put a uniform distribution on an interval of *surreal numbers*?

So, if we look at the interval from 0 to 1, say, then the probability of picking a number between a and b, for a<b, is b-a? For *surreal* a and b?

So, first off, that's not a probability. Probabilities are real, for very good reason. This is explicitly a decision-theory context, so don't tell me that doesn't apply!

But OK. Let's accept the premise that you're using a surreal-valued probability measure instead of a real one. Except, wait, how is that going to work? How is countable additivity going to work, for instance? We've already established that infinite sums do not (in general) work in the surreals! (See earlier discussion.) But OK, we can ignore that -- hell, Savage's theorem doesn't guarantee countable additivity, so let's just accept finite additivity. There is the question of just how you're going to define this in generality -- it takes quite a bit of work to extend Jordan "measure" into Lebesgue measure, you know -- but you're basically just using intervals so I'll accept we can just treat that part naïvely.

But now you're taking expected values! Of a surreal-valued probability distribution over the surreals! So basically you're having to integrate a surreal-valued function over the surreals. As I've mentioned before, there is no known theory of this, no known general way to define this. I *suppose* since you're just dealing with step functions we can treat this naïvely, but *ugh*. Nothing you're doing is really defined. This is pure "just go with it, OK?" This one is less bad than the previous one, this one contains things one *can* potentially just go with, but you don't seem to realize that the things you're doing aren't *actually* defined, that this is naïve heuristic reasoning rather than actual properly-founded mathematics.

**Sphere of suffering**: Skipping for now due to philosophical problems in addition to mathematical ones.

**Hilbert Hotel**: So, first off, there's no paradox here. This sort of basic cardinal arithmetic of countable sets is well-understood. Yes, it's counterintuitive. That's not a paradox.

But let's examine your resolution, because, again, it makes no sense. First, you talk about there being n rooms, where n is a surreal number. Again: You cannot measure sizes of sets with surreal numbers! That is meaningless!

But let's be generous and suppose you're talking about well-ordered sets, and you're measuring their size with ordinals, since those embed in the surreals. As you note, this is changing the problem, but let's go with it anyway. Guess what -- you've still described it wrong! If you have ω rooms, there is no last room. The last room isn't room ω, that'd be if you had ω+1 rooms. Having ω rooms is the original Hilbert Hotel with no modification.

I'm assuming when you say n/2 you mean that in the surreal sense. OK. Let's go back to the original problem and say n=ω. Then n/2 is ω/2, which is still bigger than any natural number, so there's still nobody in the "last half" of rooms! What if n=ω+1, instead? Then ω/2+1/2 is still bigger than any natural number, so your "last half" consists only of ω+1 -- it's not of the same cardinality as your "first half". Is that what you intended?

But ultimately... even ignoring all these problems... I don't understand how any of this is supposed to "resolve" any paradoxes. It resolves it by making it impossible to add more people? Um, OK. I don't see why we should want that.

But it doesn't even succeed at that! Because if you have [Dedekind-]infinitely many, then for adding finitely many, you have that initial ω, so you can just perform your alterations on that and leave the rest alone. You haven't prevented the Hilbert Hotel "paradox" at all! And for doubling, well, assuming well-ordering (because you're measuring sizes with ordinals, maybe?? or because we're assuming choice) well, you can partition things into copies of ω and go from there.
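The two classic hotel maneuvers are just injections on the naturals, which a finite window can illustrate (with the caveat that the window's top edge is an artifact of the sketch; ω has no top edge, which is the whole point).

```python
N = 10**5                                # finite window standing in for omega
rooms = range(N)

shift = {n: n + 1 for n in rooms}        # frees room 0 for one new guest
double = {n: 2 * n for n in rooms}       # frees every odd-numbered room

assert len(set(shift.values())) == N     # injective: no two guests collide
assert len(set(double.values())) == N
assert 0 not in shift.values()           # room 0 is now empty
assert not any(v % 2 for v in double.values())   # odd rooms all empty
```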

**Galileo's paradox**: Skipping this one as I have nothing more to add on this subject, really.

**Bacon's puzzle**: This one, having nothing to do with surreals, is completely correct! It's not new, but it's correct, and it's neat to know about, so that's good. (Although I have to wonder: Why is it on this one you accept conventional mathematics of the infinite, instead of objecting that it's a "paradox" and trying to shoehorn in surreals?)

**Trumped** and the **St. Petersburg** ones: Skipping for now due to philosophical problems in addition to mathematical ones

**Dice-room murders**: An infinitesimal chance the die never comes up 10? No, there's a 0 chance. That's how probability theory works. Again, probability is real-valued for very good reasons, and reals don't have infinitesimals. If you want to introduce probabilities valued over some other codomain, you're going to have to specify what and explain how it's going to work. "Infinitesimal" is not very specific.
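In real-valued probability the computation is immediate (sketching with a fair ten-sided die, as the "10" suggests):

```python
def p_no_ten_by(n):
    # probability a fair d10 avoids 10 on each of the first n rolls
    return (9 / 10) ** n

# The survival probability decays geometrically toward 0 -- an honest
# zero in the limit, not an infinitesimal:
assert p_no_ten_by(1000) < 1e-40
```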

The rest as you say has nothing to do with infinities and seems correct so I'll ignore it.

**Ross-Littlewood paradox**: Er... you haven't resolved this one at all? The conventional answer, FWIW, is that you should take the limit of the sets, not the limit of the cardinalities, so that none are left, and this demonstrates the discontinuity of cardinality. But, um, you just haven't answered this one? I mean I guess that's not *wrong* as such...
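A sketch of the set-limit view, using the standard parameters (at step n, balls 10(n−1)+1 through 10n go in and ball n comes out; these numbers are the usual statement of the paradox, assumed rather than taken from the post):

```python
# Ross-Littlewood: the cardinality grows without bound, yet every individual
# ball k is removed at step k, so the limit of the *sets* is empty.
def vase_after(steps):
    vase = set()
    for n in range(1, steps + 1):
        vase |= set(range(10 * (n - 1) + 1, 10 * n + 1))  # add ten balls
        vase.remove(n)                                    # remove ball n
    return vase

v = vase_after(1000)
print(len(v), min(v))  # 9000 balls remain, but none numbered <= 1000
```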

**Soccer teams**: Your resolution bears little resemblance to the original problem. You initially postulated that the set of abilities was **Z**, then in your resolution you said it was an interval in the surreals. **Z** is not an interval in the surreals. In fact, no set is an interval in the surreals; between any two given surreals there is a whole proper class of surreals. Perhaps you meant in the omnific integers? Sorry, **Z** isn't an interval in there either. Perhaps you meant in something of your own invention? Well, you didn't describe it. Ultimately it's irrelevant -- because the fact is that, yes, if you add 1 to each element of **Z**, you get **Z**. No alternate way of describing it will change that.

**Positive soccer teams**: You, uh, once again didn't supply a resolution? In any case this whole problem is ill-defined since you didn't actually specify any way to measure which of two teams is better. Although, if we just assume there *is* some way, then presumably we want it to be a preorder (since teams can be tied), and then it seems pretty clear that the two teams should be tied (because each should be no greater than the other for the two reasons you gave). (Actually it's not too hard to come up with an actual preorder here that does what you want, and then you can verify that, yup, the two teams are tied in it.) This happens a lot with infinities -- things that are orders in the finite case become preorders. Just something you have to learn to live with, once again.

**Can God pick an integer at random?**: This is... not how probability works. There is no uniform probability distribution on the natural numbers, by countable additivity. Or, in short, no, God cannot pick an integer at random. You then go on to talk about nonsensical 1/∞ chances. In short, the only paradox here is due to a nonsensical setup.
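To spell out the countable-additivity obstruction: a uniform distribution would assign each integer the same probability c, and then

```latex
1 = P(\mathbb{N})
  = P\Bigl(\,\bigcup_{n\in\mathbb{N}} \{n\}\Bigr)
  = \sum_{n\in\mathbb{N}} P(\{n\})
  = \sum_{n\in\mathbb{N}} c,
```

which is 0 if c = 0 and infinite if c > 0, so no value of c works.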

But then you go and give it a nonsensical resolution, too. So, first off, once again, you can't count things with surreals. I will once again generously assume that you intended there to be a well-ordered set of planets and are counting with ordinals rather than surreals.

It doesn't matter. Not only do you then fail to reject the nonsensical setup, you do the most nonsensical thing yet: You *explicitly* mix surreal numbers with extended real numbers, and attempt to compare the two. What. Are you implicitly thinking of ∞ as ω here? Because you sure didn't say anything like that! Seriously, these don't go together.

I am tempted to do the formal manipulations to see if there is any way one might come to your conclusions by such meaningless formal manipulation, but I'll just give you the benefit of the doubt there, because I don't want to give myself a headache doing meaningless formal manipulations involving two different number systems that can't be meaningfully combined.

**Banach-Tarski paradox**: This starts out as a decent explanation of Banach-Tarski; it's missing some important details, but whatever. But then you start talking about sequences of infinite length. (Something that wasn't there before -- you act as if this was already there, but it wasn't.) Which once again you meaninglessly assign a surreal length. I'll once again assume you meant an ordinal length instead. Except that doesn't help much because this whole thing is meaningless -- you can't take infinite products in groups.

Or maybe you can, in this case, since we're really working in F_2 embedded in SO(3), rather than just in F_2? So you could take the limit in SO(3), if it exists. (SO(3) is compact, so there will certainly be at least one limit point, but I don't see any obvious reason it'd be unique.)

Except the way you talk about it, you talk as if these infinite sequences are still in our free group. Which, no. That is not how free groups work. They contain finite words only.

Maybe you're intending this to be in some sort of "free topological group", which does contain infinite and transfinite words? Yeah, there's no such thing in any nontrivial manner. Because if you have any element g, then you can observe that g(ggg...) = ggg..., and therefore (because this is a group) ggg...=1. Well, OK, that's not a full argument, I'll admit. But, that's just a quick example of how this doesn't work, I hope you don't mind. Point is: You haven't defined this new setting you're working in, and if you try, you'll find it makes no sense. But it sure as hell ain't the free group F_2.
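To flesh out the quick example a bit: if the infinite word w = ggg⋯ were an element of some group, then concatenating it with itself would give it back, so

```latex
w \cdot w = w \implies w = 1,
\qquad
g \cdot w = w \implies g = 1,
```

cancelling w each time. So every generator would be trivial, and the only such "free topological group" is the trivial one.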

I also have no idea what you're saying this does to the Banach-Tarski paradox. Honestly, it doesn't matter, because the logic behind Banach-Tarski remains the same regardless.

**The headache**: Skipping for now

**The magic dartboard**: No, a bijection between the countable ordinals and [0,1] is not known to exist. That's only true if you assume the continuum hypothesis. Are you assuming the continuum hypothesis? You didn't mention any such thing.
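For reference, the cardinal arithmetic behind that:

```latex
|\omega_1| = \aleph_1,
\qquad
\bigl|[0,1]\bigr| = 2^{\aleph_0},
```

so a bijection between the countable ordinals and [0,1] exists iff ℵ₁ = 2^ℵ₀, which is exactly the continuum hypothesis.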

You then give a completely wrong and nonsensical argument as to why this construction has the desired "magic dartboard" property, in which you talk about certain ordinals being in the "first 1/n" of the countable ordinals, or the "last half" of the countable ordinals. This is completely meaningless. There is no first 1/n, or last half, of the countable ordinals. If you had some meaning in mind, you're going to have to explain it. And if you mean going into the surreals and comparing them against ω_1/n, then, unsurprisingly, the entire countable ordinals will always fall in your first 1/n. The construction does yield a magic dartboard, but you're completely wrong as to why.
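To see why, under that surreal reading, every countable ordinal lands in the "first 1/n": for countable α and natural n ≥ 1, the (surreal) product nα is still a countable ordinal, so

```latex
n\alpha < \omega_1 \implies \alpha < \omega_1/n,
```

dividing by n being legitimate since the surreals form an ordered field.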

**Thomson's lamp**: Your resolution here is nonsense. Now, our presses are occurring in a well-ordered sequence, so it's most appropriate to regard the number of presses as an ordinal. In which case, the number of presses is ω. It's not a question -- that's what it is. It doesn't depend on how we define the reals, WTF? The reals are the reals (unless you're going to start doing constructive mathematics, in which case the things you wrote will presumably be wrong in many *more* ways). It might depend on how you define the *problem*, but you were pretty explicit about what the press timings are. Anyway, ω is even as an omnific integer, but does that mean we should consider the lamp to be on? I see no reason to conclude this. The lamp's state has no well-defined limit, after all. This is once again naïvely extending something from the finite to the infinite without checking whether it actually extends (it doesn't).

Really, the basic mistake here is assuming there must be an answer. As I said, the lamp's state has no limit, so there really just isn't any well-defined answer to this problem.

**Grandi's series**: You once again assign a variable surreal length (which still makes no sense) to something which has a very definite length, namely ω. In any case, Grandi's series has no limit. You say it depends on whether the length is even or odd. Suppose we interpret that as "even or odd as an omnific integer" (i.e. having even or odd finite part). OK. So you're saying that Grandi's series sums to 0, then, since ω is even as an omnific integer? It doesn't matter; the series has no limit, and if you tried to extend it transfinitely, you'd get stuck at ω when there's already no limit there.
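For what it's worth, the divergence at ω is already visible in the partial sums:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
# They alternate 1, 0, 1, 0, ... forever, so the sequence has no limit,
# and there is nothing for a transfinite extension to pick up at omega.
partial = []
s = 0
for k in range(10):
    s += (-1) ** k
    partial.append(s)
print(partial)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```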

I mean, I suppose you could define a new notion of what it means to sum a divergent (possibly transfinite) series, and apply it to Grandi's series (possibly extended transfinitely) as an example, but you haven't done that. You've just said what the limit "is". It isn't. More naïve extension and formal manipulation in place of actual mathematical reasoning.

**Satan's apple**: Skipping, you didn't mention surreals and the paradox is entirely philosophical rather than mathematical (you also admitted confusion on this one rather than giving a fake resolution, so good for you)

**Gabriel's horn**: Yup, you described this one correctly at least!

**Bertrand paradox**: You almost had this, but still snuck in an incorrect statement revealing a serious conceptual error. There aren't multiple sets of chords; there are multiple probability distributions on the set of chords. Really, it's not that all the probabilities are valid, it's just that it depends on how you pick, but I was giving you the benefit of the doubt on that one until you added that bit about multiple sets of chords.
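For concreteness, here is a Monte Carlo sketch of the three classic chord distributions (unit circle; a chord counts as "long" if it beats √3, the side of the inscribed equilateral triangle). The exact answers are 1/3, 1/2, and 1/4:

```python
import math
import random

# Bertrand's paradox: three different probability distributions on the *same*
# set of chords give three different answers to "what fraction of chords
# are longer than sqrt(3)?".
random.seed(0)
N = 100_000
LONG = math.sqrt(3)

def endpoints():        # method 1: two uniform random endpoints on the circle
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def radial_midpoint():  # method 2: midpoint uniform along a random radius
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def disk_midpoint():    # method 3: midpoint uniform over the disk
    d = math.sqrt(random.uniform(0, 1))  # radius of a uniform point in the disk
    return 2 * math.sqrt(1 - d * d)

def frac_long(sampler):
    return sum(sampler() > LONG for _ in range(N)) / N

p1, p2, p3 = frac_long(endpoints), frac_long(radial_midpoint), frac_long(disk_midpoint)
print(p1, p2, p3)  # near 1/3, 1/2, 1/4 respectively
```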

**Zeno's paradoxes**: We can argue all we like about the "real" resolution here philosophically but whatever, you seem to grasp the mathematics of it at least, so let's move on

**Skolem's paradox**: You've mostly summed this one up correctly. I must nitpick and point out that membership in the model is not necessarily the same as membership outside the model even for those sets that are in the model -- something which you might realize but your explanation doesn't make clear -- but this is a small error compared to the giant conceptual errors that fill most of what you've written here.

Whew. OK. I will *maybe* get back to the ones I skipped, but probably not because this is enough to demonstrate my point. This post is horribly wrong nearly in its entirety, shot through with serious conceptual errors. You really need to relearn this stuff from scratch, because almost nothing you're saying makes sense. I urge everyone else to ignore this post and not take anything it says as reliable.