Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens, so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]
That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or otherwise incoherent, and then it follows as a conclusion that they have to correspond to a [bounded] utility function. I'm confused by your claim that you're asking about conditions, when you haven't been talking about conditions, but rather ways of modifying the idea of decision-theoretic utility.
Something seems to be backwards here.
I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need:
- an axiom describing what it means for one infinite wager to be "strictly better" than another;
- an axiom describing what kinds of infinite wagers it is rational to be indifferent towards.
I'm confused here; it sounds like you're just describing, in the VNM framework, the strong continuity requirement, or in Savage's framework, P7? Of course Savage's P7 doesn't directly talk about these things, it just implies them as a consequence. I believe the VNM case is similar although I'm less familiar with that.
Then, I would try to find a decision-making system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function.
That doesn't make sense. If you add axioms, you'll only be able to conclude more things, not fewer. Such a thing will necessarily be representable by a utility function (that is valid for finite gambles), since we have the VNM theorem; and then additional axioms will just add restrictions. Which is what P7 or strong continuity do!
Here's a quick issue I only just noticed but which fortunately is easily fixed:
Above I mentioned you probably want to restrict to a sigma-algebra of events and only allow measurable functions as actions. But what does measurable mean here? Fortunately, the ordering on outcomes (even without utility) makes measurability meaningful. Except this puts a circularity in the setup, because the ordering on outcomes is induced from the ordering on actions.
Fortunately this is easily patched. You can start with the assumption of a total preorder on outcomes (considering the case of decisions without uncertainty), to make measurability meaningful, and restrict actions to measurable functions (once we start considering decisions under uncertainty); then you would strengthen P3 to say that (on non-null sets) the induced ordering on outcomes actually matches the original ordering on outcomes. Then this should all be fine.
(This is more properly a follow-up to my sibling comment, but posting it here so you'll see it.)
I already said that I think that thinking in terms of infinitary convex combinations, as you're doing, is the wrong way to go about it; but it took me a bit to put together why that's definitely the wrong way.
Specifically, it assumes probability! Fishburn, in the paper you link, assumes probability, which is why he's able to talk about why infinitary convex combinations are or are not allowed (I mean, that and the fact that he's not necessarily considering arbitrary actions).
Savage doesn't assume probability! So if you want to disallow certain actions... how do you specify them? Or if you want to talk about convex combinations of actions (not just infinitary ones, any ones), how do you even define these?
In Savage's framework, you have to prove that if two actions can be described by the same probabilities and outcomes, then they're equivalent. E.g., suppose action A results in outcome X with probability 1/2 and outcome Y with probability 1/2, and suppose action B meets that same description. Are A and B equivalent? Well, yes, but that requires proof, because maybe A and B take outcome X on different sets of probability 1/2. (OK, in the two-outcome case it doesn't really require "proof", rather it's basically just his definition of probability; but the more general case requires proof.)
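A toy numerical version of this point, under assumptions made purely for illustration (a concrete state space [0, 1) with a uniform measure, which is not Savage's qualitative setup): A and B are different functions of the state, yet induce the same distribution over outcomes.

```python
# Two actions that disagree on every state but have the same description
# in terms of probabilities and outcomes.

def action_A(s):
    return "X" if s < 0.5 else "Y"

def action_B(s):
    return "Y" if s < 0.5 else "X"

# Discretize the state space to check the induced distributions.
states = [i / 1000 for i in range(1000)]
dist_A = sum(action_A(s) == "X" for s in states) / len(states)
dist_B = sum(action_B(s) == "X" for s in states) / len(states)
print(dist_A, dist_B)  # both give X with probability 1/2
```

Here the equivalence is obvious because we baked a probability measure into the setup; Savage's point is that without that, the equivalence is a theorem, not a definition.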
So, until you've established that theorem, that it's meaningful to combine gambles like that, and that the particular events yielding the probabilities aren't relevant, one can't really meaningfully define convex combinations at all. This makes it pretty hard to incorporate them into the setup or axioms!
More generally this should apply not only to Savage's particular formalism, but any formalism that attempts to ground probability as well as utility.
Anyway yeah. As I think I already said, I think we should think of this in terms not of what combinations of actions yield permitted actions, but rather of whether there should be forbidden actions at all. (Note btw in the usual VNM setup there aren't any forbidden actions either! Although there, infinite gambles are, while not forbidden, just kind of ignored.) But this is in particular why trying to put it in terms of convex combinations as you've done doesn't really work from a fundamentals point of view, where there is no probability yet, only preferences.
Apologies, but it sounds like you've gotten some things mixed up here? The issue is boundedness of utility functions, not whether they can take on infinity as a value. I don't think anyone here is arguing that utility functions don't need to be finite-valued. All the things you're saying seem to be related to the latter question rather than the former, or you seem to be possibly conflating them?
In the second paragraph perhaps this is just an issue of language (when you say "infinitely high", do you actually mean "arbitrarily high"?), but in the first paragraph this does not seem to be the case.
I'm also not sure you understood the point of my question, so let me make it more explicit. Taking the idea of a utility function and modifying it as you describe is what I called "backwards reasoning" above: starting from the idea of a utility function, rather than starting from preferences. Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?
Oh, so that's what you're referring to. Well, if you look at the theorem statements, you'll see that P=P_d is an axiom that is explicitly called out in the theorems where it's assumed; it's not implicitly part of Axiom 0 like you asserted, nor is it more generally left implicit at all.
but the important part is that last infinite sum: this is where all infinitary convex combinations are asserted to exist. Whether that is assigned to "background setup" or "axioms" does not matter. It has to be present, to allow the construction of St. Petersburg gambles.
I really think that thinking in terms of infinitary convex combinations is the wrong way to go about this here. As I said above: You don't get a St. Petersburg gamble by taking some fancy convex combination, you do it by just constructing the function. (Or, in Fishburn's framework, you do it by just constructing the distribution; same effect.) I guess without P=P_d you do end up relying on closure properties in Fishburn's framework, but Savage's framework just doesn't work that way at all; and Fishburn with P=P_d, well, that's not a closure property. Rather what Savage's setup and P=P_d have in common is that they're, like, arbitrary-construction properties: If you can make a thing, you can compare it.
Savage does not actually prove bounded utility. Fishburn did this later, as Savage footnotes in the edition I'm looking at, so Fishburn must be tackled.
Yes, it was actually Fishburn that did that. Apologies if I carelessly implied it was Savage.
IIRC, Fishburn's proof, formulated in Savage's terms, is in Savage's book, at least if you have the second edition. Which I think you must, because otherwise that footnote wouldn't be there at all. But maybe I'm misremembering? I think it has to be though...
In Savage's formulation, from P1-P6 he derives Theorem 4 of section 2 of chapter 5 of his book, which is linear interpolation in any interval.
I don't have the book in front of me, but I don't recall any discussion of anything that could be called linear interpolation, other than the conclusion that expected utility works for finite gambles. Could you explain what you mean? I also don't see the relevance of intervals here? Having read (and written a summary of) that part of the book I simply don't know what you're talking about.
Clearly, linear interpolation does not work on an interval such as [17,Inf], therefore there cannot be any infinitely valuable gambles. St. Petersburg-type gambles are therefore excluded from his formulation.
I still don't know what you're talking about here, but I'm familiar enough with Savage's formalism to say that you seem to have gotten quite lost somewhere, because this all sounds like nonsense.
From what you're saying, the impression that I'm getting is that you're treating Savage's formalism like Fishburn's, where there's some a priori set of actions under consideration, and so we need to know closure properties of that set. But that's not how Savage's formalism works. Rather, the way it works is that actions are just functions (possibly with a measurability condition; he doesn't discuss this, but you probably want it) from world-states to outcomes. If you can construct the action as a function, there's no way to exclude it.
I shall have to examine further how his construction works, to discern what in Savage's axioms allows the construction, when P1-P6 have already excluded infinitely valuable gambles.
Well, I've already described the construction above, but I'll describe it again. Once again though, you're simply wrong about that last part; that last statement is not only incorrect, but fundamentally incompatible with Savage's whole approach.
Anyway. To restate the construction of how to make a St. Petersburg gamble. (This time with a little more detail.) An action is simply a function from world-states to outcomes.
By assumption, we have a sequence of outcomes a_i such that U(a_i) >= 2^i and such that U(a_i) is strictly increasing.
We can use P6 (which allows us to "flip coins", so to speak) to construct events E_i (sets of world-states) with probability 1/2^i.
Then, the action G that takes on the value a_i on the set E_i is a St. Petersburg gamble.
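A quick numerical sketch of why G has infinite expected utility, taking the assumptions above at the boundary case (P(E_i) = 1/2^i and U(a_i) = 2^i exactly, indexing from i = 1; these specific values are just for illustration):

```python
# Every individual outcome a_i has finite utility 2**i, but each term
# P(E_i) * U(a_i) of the expected-utility sum equals 1, so the partial
# sums grow without bound.

def partial_expected_utility(n):
    """Sum of P(E_i) * U(a_i) over the first n events (i = 1..n)."""
    return sum((1 / 2**i) * (2**i) for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_utility(n))  # grows linearly in n
```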
For the particular construction, you take G as above, and also G', which is the same except that G' takes the value a_1 on E_0, instead of the value a_0.
Savage proves in the book (although I think the proof is due to Fishburn? I'm going by memory) that given two gambles, both of which are preferred to any essentially bounded gamble, the agent must be indifferent between them. (The proof uses P7, obviously, the same thing that proves that expected utility works for infinite gambles at all. I don't recall the actual proof offhand and don't feel like trying to reconstruct it right now, but anyway I think you have it in front of you from the sounds of it.) And we can show both these gambles are preferred to any essentially bounded gamble by comparing to truncated versions of themselves (using the sure-thing principle) and using the fact that expected utility works for essentially bounded gambles. Thus the agent must be indifferent between G and G'. But also, by the sure-thing principle (P2 and P3), the agent must prefer G' to G. That's the contradiction.
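The two facts the contradiction combines can be checked numerically. Here I assume one consistent assignment (P(E_i) = 2^-(i+1) for i = 0, 1, 2, ... and U(a_i) = 2^i; these numbers are illustrative, not from the text): G' agrees with G everywhere except on E_0, where it is strictly better, yet the truncated expected utilities of both grow without bound.

```python
# G takes outcome a_i on event E_i; G' takes a_1 on E_0 instead of a_0.
# So G' statewise-dominates G on a positive-probability event, while both
# have divergent expected utility.

def truncated_eu(is_g_prime, n):
    """Expected utility restricted to the first n events E_0..E_{n-1}."""
    total = 0.0
    for i in range(n):
        outcome = 1 if (is_g_prime and i == 0) else i  # G' swaps a_0 -> a_1
        total += (2.0 ** -(i + 1)) * (2.0 ** outcome)
    return total

print(truncated_eu(False, 30))  # G : each term is 1/2, so this is 15
print(truncated_eu(True, 30))   # G': strictly larger, but also diverging
```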
Edit: Earlier version of this comment misstated how the proof goes
Fishburn (op. cit., following Blackwell and Girschick, an inaccessible source) requires that the set of gambles be closed under infinitary convex combinations.
Again, I'm simply not seeing this in the paper you linked? As I said above, I simply do not see anything like that outside of section 9, which is irrelevant. Can you point to where you're seeing this condition?
I shall take a look at Savage's axioms and see what in them is responsible for the same thing.
In the case of Savage, it's not any particular axiom, but rather the setup. An action is a function from world-states to outcomes. If you can construct the function, the action (gamble) exists. That's all there is to it. And the relevant functions are easy enough to construct, as I described above; you use P6 (the Archimedean condition, which also allows flipping coins, basically) to construct the events, and we have the outcomes by assumption. You assign the one to the other and there you go.
(If you don't want to go getting the book out, you may want to read the summary of Savage I wrote earlier!)
A short answer to this (something longer later) is that an agent need not have preferences between things that it is impossible to encounter. The standard dissolution of the St. Petersburg paradox is that nobody can offer that gamble. Even though each possible outcome is finite, the offerer must be able to cover every possible outcome, requiring that they have infinite resources. Since the gamble cannot be offered, no preferences between that gamble and any other need exist.
So, would it be fair to sum this up as "it is not necessary to have preferences between two gambles if one of them takes on unbounded utility values"? Interesting. That doesn't strike me as wholly unworkable, but I'm skeptical. In particular:
- Can we phrase this without reference to utility functions? It would say a lot more for the possibility if we can.
- What if you're playing against Nature? A gamble can be any action; and in a world of unbounded utility functions, why should one believe that any action must have some bound on how much utility it can get you? Sure, sure, second law of thermodynamics and all that, but that's just a feature of the particular universe we happen to live in, not something that reshapes your preferences. (And if we were taking account of that sort of thing, we'd probably just say, oh, utility is bounded after all, in a kind of stupid way.) Notionally, it could be discovered to be wrong! It won't happen, but it's not probability literally 0.
Or are you trying to cut out a more limited class of gambles as impossible? I'm not clear on this, although I'm not certain it affects the results.
Anyway, yeah, as I said, my main objection is that I see no reason to believe that, if you have an unbounded utility function, Nature cannot offer you a St. Petersburg game. Or I mean, to the extent I do see reasons to believe that, they're facts about the particular universe we happen to live in, that notionally could be discovered to be wrong.
Looking at the argument from the other end, at what point in valuing numbers of intelligent lives does one approach an asymptote, bearing in mind the possibility of expansion to the accessible universe? What if we discover that the habitable universe is vastly larger than we currently believe? How would one discover the limits, if there are any, to one's valuing?
This is exactly the sort of argument that I called "flimsy" above. My answer to these questions is that none of this is relevant.
Both of us are trying to extend our ideas about preferences from ordinary situations to extraordinary ones. (Like, I agree that some sort of total utilitarianism is a good heuristic for value under the conditions we're familiar with.) This sort of extrapolation, to an unfamiliar realm, is always potentially dangerous. The question then becomes, what sort of tools can we expect to continue to work, without needing any sort of adjustment to the new conditions?
I do not expect speculation about the particular form our preferences would take under these unusual conditions to be trustworthy. Whereas basic coherence conditions had damn well better continue to hold, or else we're barely even talking about sensible preferences anymore.
Or, to put it differently, my answer is, I don't know, but the answer must satisfy basic coherence conditions. There's simply no way that the idea that decision-theoretic utility has to increase linearly with the number of intelligent lives is on anywhere near as solid ground as that! The mere fact that it's stated in terms of a utility function in the first place, rather than in terms of something more basic, is something of a smell. Complicated statements we're not even entirely sure how to formulate can easily break in a new context. Short simple statements that have to be true for reasons of simple coherence don't break.
(Also, some of your questions don't seem to actually appreciate what a bounded utility function would actually mean. It wouldn't mean taking an unbounded utility function and then applying a cap to it. It would just mean something that naturally approaches 1 as things get better and 0 as things get worse. There is no point at which it approaches an asymptote; that's not how asymptotes work. There is no limit to one's valuing; presumably utility 1 does not actually occur. Or, at least, that's how I infer it would have to work.)
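To make that last point concrete, here is what such a function might look like. The logistic curve below is purely illustrative (not anyone's proposed utility function): it is strictly increasing everywhere, bounded in (0, 1), and there is no point where a "cap" kicks in.

```python
import math

def u(x):
    """Strictly increasing, bounded in (0, 1), never attaining either bound."""
    return 1 / (1 + math.exp(-x))  # logistic; illustrative only

for x in (0, 2, 5, 10):
    print(x, u(x))  # keeps increasing, yet stays below 1 at every x
```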
Huh. This would need some elaboration, but this is definitely the most plausible way around the problem I've seen.
Now (in Savage's formalism) actions are just functions from world-states to outcomes (maybe with a measurability condition), so regardless of your prior it's easy to construct the relevant St. Petersburg gambles if the utility function is unbounded. But it seems like what you're saying is: if we don't allow arbitrary actions, then the prior could be such that, not only are none of the permitted actions St. Petersburg gambles, but also this remains the case even after future updates. Interesting! Yeah, that just might be workable...
OK, so going by that, you're suggesting, like, introducing varying caps and then taking limits as the cap goes to infinity? It's an interesting idea, but I don't see why one would expect it to have anything to do with preferences.
You should check out Abram's post on complete class theorems. He specifically addresses some of the concerns you mentioned in the comments of Yudkowsky's posts.
So, it looks to me like what Abram is doing, once he gets past the original complete class theorem, is basically just inventing some new formalism along the lines of Savage. I think it is very misleading to refer to this as "the complete class theorem" (how on earth was I supposed to know that this was what was being referred to when "the complete class theorem" was mentioned, when it resembles the original theorem so little, and it's the original theorem that was linked to?), and I don't see why it was necessary to invent this anew, but sure, I can accept that it presumably works, even if the details aren't spelled out.
But I must note that he starts out by saying that he's only considering the case when there's only a finite set of states of the world! I realize you weren't making a point about bounded utility here; but from that point of view, it is quite significant...
Also, my inner model of Jaynes says that the right way to handle infinities is not to outlaw them, but to be explicit and consistent about what limits we're taking.
I don't really understand what that means in this context. It is already quite explicit what limits we're taking: given an action (a measurable function from states of the world to outcomes), take its expected utility with regard to the [finitely-additive] probability on states of the world. (Which is implicitly a limit of sorts.)
I think this is another one of those comments that makes sense if you're reasoning backward, starting from utility functions, but not if you're reasoning forward, from preferences. If you look at things from a utility-functions-first point of view, then it looks like you're outlawing infinities (well, unboundedness that leads to infinities). But from a preferences-first point of view, you're not outlawing anything. You haven't outlawed unbounded utility functions; rather, they've just failed to satisfy fundamental assumptions about decision-making (remember, if you don't have P7, your utility function is not guaranteed to return correct results about infinite gambles at all!) and so clearly do not reflect your idealized preferences. You didn't get rid of the infinity, it was simply never there in the first place; the idea that it might have been turned out to be mistaken.
I think you've misunderstood a fair bit. I hope you don't mind if I address this slightly out of order.
Or if infinite utilities are not immediately a problem, then by a more complicated argument, involving constructing multiple St. Petersburg-type combinations and demonstrating that the axioms imply that there both should and should not be a preference between them.
This is exactly what Fishburn does, as I mentioned above. (Well, OK, I didn't attribute it to Fishburn, I kind of implicitly misattributed it to Savage, but it was actually Fishburn; I didn't think that was worth going into.)
I haven't studied the proof of boundedness in detail, but it seems to me that unbounded utilities allow St. Petersburg-type combinations of them with infinite utilities, but since each thing is supposed to have finite utility, that is a contradiction.
He does not give details, but the argument that I conjecture from his text is that if there are unbounded utilities then one can construct a convex combination of infinitely many of them that has infinite utility (and indeed one can), contradicting the proof from his axioms that the utility function is a total function to the real numbers.
What you describe in these two parts I'm quoting is, well, not how decision-theoretic utility functions work. A decision-theoretic utility function is a function on outcomes, not on gambles over outcomes. You take the expected utility of a gamble; you don't take the utility of a gamble.
So, yes, if you have an unbounded decision-theoretic utility function, you can set up a St. Petersburg-style situation that will have infinite expected utility. But that is not by itself a problem! The gamble has infinite expected utility; no individual outcome has infinite utility. There's no contradiction yet.
Of course, you then do get a contradiction when you attempt to compare two of these that have been appropriately set up, but...
But by a similar argument, one might establish that the real numbers must be bounded, when instead one actually concludes that not all series converge.
What? I don't know what one might plausibly assume that might imply the boundedness of the real numbers.
...oh, I think I see the analogy you're going for here. But, it seems to rest on the misunderstanding of utility functions discussed above.
and that one cannot meaningfully compare the magnitudes of divergent infinite series.
Well, so, one must remember the goal here. So, let's start with divergent series, per your analogy. (I'm assuming you're discussing series of nonnegative numbers here, that diverge to infinity.)
So, well, there's any number of ways we could compare divergent series. We could just say that they sum to infinity, and so are equal in magnitude. Or we could try to do a more detailed comparison of their growth rates. That might not always yield a welldefined result though. So yeah. There's not any one universal way to compare magnitudes of divergent series, as you say; if someone asks, which of these two series is bigger, you might just have to say, that's a meaningless question. All this is as you say.
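For instance, comparing the partial sums of two divergent series of non-negative terms: both "sum to infinity", but at very different rates, and no single comparison rule is canonical.

```python
# Both series diverge, so "they're equal (both infinite)" and "the constant
# series is bigger (it grows faster)" are each defensible answers; there is
# no one universal way to compare them.

def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

harmonic = lambda k: 1.0 / k   # partial sums grow like log(n)
constant = lambda k: 1.0       # partial sums grow like n

for n in (10, 1000):
    print(n, partial_sum(harmonic, n), partial_sum(constant, n))
```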
But that's not at all the situation we find ourselves in choosing between two gambles! If you reason backward, from the idea of utility functions, it might seem reasonable to say, oh, these two gambles are both divergent, so comparison is meaningless. But if you reason forward, from the idea of preferences... well, you have to pick one (or be indifferent). You can't just leave it undefined. Or if you have some formalism where preferences can be undefined (in a way that is distinct from indifference), by all means explain it... (but what happens when you program these preferences into an FAI and it encounters this situation? It has to pick. Does it pick arbitrarily? How is that distinct from indifference?)
That we have preferences between gambles is the whole thing we're starting from.
I note that in order to construct convex combinations of infinitely many states, Fishburn extends his axiom 0 to allow this. He does not label this extension separately as e.g. "Axiom 0*". So if you were to ask which of his axioms to reject in order to retain unbounded utility, it could be none of those labelled as such, but the one that he does not name, at the end of the first paragraph on p.1055. Notice that the real numbers satisfy Axiom 0 but not Axiom 0*. It is that requirement that all infinite convex combinations exist that surfaces later as the boundedness of the range of the utility function.
Sorry, but looking through Fishburn's paper I can't see anything like this. The only place where any sort of infinite combination seems to be mentioned is section 9, which is not relevant. Axiom 0 means one thing throughout and allows only finite convex combinations. I simply don't see where you're getting this at all.
(Would you mind sticking to Savage's formalism for simplicity? I can take the time to properly read Fishburn if for some reason you insist things have to be done this way, but otherwise for now I'm just going to put things in Savage's terms.)
In any case, in Savage's formalism there's no trouble in proving that the necessary actions exist: you don't have to go taking convex combinations of anything, you simply directly construct the functions. You just need an appropriate partition of the set of world-states (provided by the Archimedean axiom he assumes, P6) and an appropriate set of outcomes (which comes from the assumption of unbounded utility). You don't have to go constructing other things and then doing some fancy infinite convex combination of them.
If you don't mind, I'd like to ask: could you just tell me what in particular in Savage's setup or axioms you find to be the probable weak point? If it's P7 you object to, well, I already discussed that in the post; if you get rid of that, the utility function may be unbounded, but it's no longer guaranteed to give correct results when comparing infinite gambles.
While searching out the original sources, I found a paper indicating that at least in 1993, bounded utility theorems were seen as indicating a problem with Savage's axioms: "Unbounded utility for Savage's "Foundations of Statistics" and Other Models", by Peter Wakker. There is another such paper from 2014. I haven't read them, but they indicate that proofs of boundedness of utility are seen as problems for the axioms, not discoveries that utility must be bounded.
I realize a number of people see this as a problem. Evidently they have some intuition or argument that disagrees with the boundedness of utility. Whatever this intuition or argument is, I would be very surprised if it were as strong as the argument that utility must be bounded. There's no question that assumptions can be bad. I just think the reasons to think these are bad that have been offered, are seriously flimsy compared to the reasons to think that they're good. So I see this as basically a sort of refusal to take the math seriously. (Again: Which axiom should we throw out, or what part of the setup should we rework?)
Is there a reason we can't just solve this by proposing arbitrarily large bounds on utility instead of infinite bounds? For instance, if we posit that utility is bounded by some arbitrarily high value X, then the wager can only pay out values up to X, for probabilities below 1/X.
I'm not sure what you're asking here. An individual decision-theoretic utility function can be bounded or it can be unbounded. Since decision-theoretic utility functions can be rescaled arbitrarily, naming a precise value for the bounds is meaningless; so like we could just assume the bounds are 0 below and 1 above.
So, I mean, yeah, you can make the problem go away by assuming bounded utility, but if you were trying to say something more than that, a bounded utility that is somehow "closer" to unbounded utility, then no such notion is meaningful.
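A sketch of why naming a bound is meaningless: any positive affine rescaling of a utility function represents exactly the same preferences over gambles, so "bounded by X" and "bounded by 1" describe the same agent. The specific utility function and gambles below are illustrative assumptions.

```python
# u and v = 0.01*u represent the same preferences: they rank every pair
# of gambles identically, even though one is "bounded by 100" and the
# other "bounded by 1".

def expected_utility(util, gamble):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * util(x) for p, x in gamble)

def rescale(util, a, b):
    return lambda x: a * util(x) + b  # positive affine transform (a > 0)

u = lambda x: float(x)   # outcomes in 0..100, so u is bounded by 100
v = rescale(u, 0.01, 0)  # same preferences, bounded by 1

g1 = [(0.5, 0), (0.5, 100)]  # fair coin between worst and best
g2 = [(1.0, 60)]             # a sure thing

print(expected_utility(u, g1) < expected_utility(u, g2))  # u prefers g2
print(expected_utility(v, g1) < expected_utility(v, g2))  # so does v
```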
Apologies if I've misunderstood what you're trying to do.
Yes, thanks, I didn't bother including it in the body of the post but that's basically how it goes. Worth noting that this:
Both of these wagers have infinite expected utility, so we must be indifferent between them.
...is kind of shortcutting a bit (at least as Savage/Fishburn[0] does it; he proves indifference between things of infinite expected utility separately after proving that expected utility works when it's finite), but that is the essence of it, yes.
(As for the actual argument... eh, I don't have it in front of me and don't feel like rederiving it...)
[0]I initially wrote Savage here, but I think this part is actually due to Fishburn. Don't have the book in front of me right now though.
By "a specific gamble" do you mean "a specific pair of gambles"? Remember, preferences are between two things! And you hardly need a utility function to express a preference between a single pair of gambles.
I don't understand how to make sense of what you're saying. The agent's preferences are the starting point: preferences as in, given a choice between the two, which do you pick? It's not clear to me how you have a notion of preference that allows for this to be undefined (the agent can be indifferent, but that's distinct).
I mean, you could try to come up with such a thing, but I'd be pretty skeptical of its meaningfulness. (What happens if you program these preferences into an FAI and then it hits a choice for which its preference is undefined? Does it act arbitrarily? How does this differ from indifference, then? By lack of transitivity, maybe? But then that's effectively just non-transitive indifference, which seems like it would be a problem...)
I think your comment is the sort of thing that sounds reasonable if you reason backward, starting from the idea of expected utility, but will fall apart if you reason forward, starting from the idea of preferences. But if you have some way of making it work, I'd be interested to hear...
If you're not making a prioritarian aggregate utility function by summing functions of individual utility functions, the mapping of a prioritarian function to a utility function doesn't always work. Prioritarian utility functions, for instance, can do things like rank-order everyone's utility functions and then sum each individual utility raised to the negative power of its rank-order... or something*. They allow interactions between individual utility functions in the aggregate function that are not facilitated by the direct summing permitted in utilitarianism.
This is a good point. I might want to go back and edit the original post to account for this.
So from a mathematical perspective, it is possible to represent many prioritarian utility functions as conventional utilitarian utility functions. However, from an intuitive perspective, they mean different things:
This doesn't practically affect the decision-making of a moral agent, but it does reflect different underlying philosophies, which affects the kinds of utility functions people might propose.
Sure, I'll agree that they're different in terms of ways of thinking about things, but I thought it was worth pointing out that in terms of what they actually propose they are largely indistinguishable without further constraints.
I don't really want to go trying to defend here a position I don't necessarily hold, but I do have to nitpick and point out that there's quite a bit of room in between exponential and hyperbolic.
To be clear, intelligence explosion via recursive self-improvement has been distinguished from merely exponential growth at least as far back as Yudkowsky's "Three Major Singularity Schools". I couldn't remember the particular link when I wrote the comment above, but, well, now I remember it.
Anyway, I don't have a particular argument one way or the other; I'm just registering my surprise that you encountered people here arguing for merely exponential growth based on intelligence explosion arguments.
Yeah, proper scoring rules (and in particular both the quadratic/Brier and the logarithmic examples) have been discussed here a bunch, I think that's worth acknowledging in the post...
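For anyone who hasn't seen it, here's a quick numerical sketch of what makes these rules "proper" (my own illustration, with a hypothetical true probability of 0.7): under both the Brier and logarithmic rules, your expected score is maximized by reporting the true probability.

```python
import math

def brier(report, outcome):
    # negative squared error, so higher is better
    return -(report - outcome) ** 2

def log_score(report, outcome):
    # log of the probability assigned to what actually happened
    p = report if outcome == 1 else 1 - report
    return math.log(p)

def expected(rule, report, true_p):
    # expected score of a report when the event has true probability true_p
    return true_p * rule(report, 1) + (1 - true_p) * rule(report, 0)

true_p = 0.7
reports = [i / 100 for i in range(1, 100)]
best_brier = max(reports, key=lambda r: expected(brier, r, true_p))
best_log = max(reports, key=lambda r: expected(log_score, r, true_p))
print(best_brier, best_log)  # 0.7 0.7: honesty maximizes both
```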
Kind of well-known here, but worth repeating, I guess...
It is sometimes argued that even if this advantage is modest, the growth curves will be exponential, and therefore a slight advantage right now will compound to become a large advantage over a long enough period of time. However, this argument by itself is not an argument against a continuous takeoff.
I'm not sure this is an accurate characterization of the point; my understanding is that the concern largely comes from the possibility that the growth will be faster than exponential, rather than merely exponential.
I mean, are you actually disagreeing with me here? I think you're just describing an intermediate position.
OK. I think I didn't think through my reply sufficiently. Something seemed off with what you were saying, but I failed to think through what and made a reply that didn't really make sense instead. But thinking things through a bit more now I think I can lay out my actual objection a bit more clearly.
I definitely think that if you're taking the point of view that suicide is preferable to suffering, you're not applying what I'm calling goal-thinking. (Remember here that the description I laid out above is not intended as some sort of intensional definition, just my attempt to explicate this distinction I've noticed.) I don't think goal-thinking would consider nonexistence as some sort of neutral point as many do.
I think the best way of explaining this, maybe, is that goal-thinking - or at least the extreme version which nobody actually uses - is to simply not consider happiness or suffering or whatever as separate objects worth considering at all, that can be good or bad, or that should be acted on directly; but purely as indicators of whether one is achieving one's goals - intermediates to be eliminated. In this point of view, suffering isn't some separate thing to be gotten rid of by whatever means, but simply the internal experience of not achieving one's goals, the only proper response to which is to go out and do so. You see?
And if we continue in this direction, one can also apply this to others; so you wouldn't have "not have other people suffer horribly" as a goal in the first place. You would always phrase things in terms of others' goals, and whether they're being thwarted, rather than in terms of their experiences.
Again, none of what I'm saying here necessarily follows from what I wrote in the OP, but as I said, that was never intended as an intensional definition. I think the distinction I'm drawing makes sense regardless of whether I described it sufficiently clearly initially.
This is perhaps an intermediate example, but I do think that once you're talking about internal experiences to be avoided, it's definitely not all the way at the goal-thinking end.
Hm, I suppose that's true. But I think the overall point still stands? It's illustrating a type of thinking that doesn't make sense to one thinking in terms of concrete, unmodifiable goals in the external world.
So this post is basically just collecting together a bunch of things you previously wrote in the Sequences, but I guess it's useful to have them collected together.
I must, however, object to one part. The proper non-circular foundation you want for probability and utility is not the complete class theorem, but rather Savage's theorem, which I previously wrote about on this website. It's not short, but I don't think it's too inaccessible.
Note, in particular, that Savage's theorem does not start with any assumption baked in that R is the correct system of numbers to use for probabilities[0], instead deriving that as a conclusion. The complete class theorem, by contrast, has real numbers in the assumptions.
In fact - and it's possible I'm misunderstanding - it's not even clear to me that the complete class theorem does what you claim it does, at all. It seems to assume probability at the outset, and therefore cannot provide a grounding for probability. Unlike Savage's theorem, which does. Again, it's possible I'm misunderstanding, but that sure seems to be the case.
Now this has come up here before (I'm basically in this comment just restating things I've previously written) and your reply when I previously pointed out some of these issues was, frankly, nonsensical (your reply, my reply), in which you claimed that the statement that one's preferences form a partial preorder is a stronger assumption than "one prefers more apples to less apples", when, in fact, the exact reverse is the case.
(To restate it for those who don't want to click through: If one is talking solely about one's preferences over number of apples, then the statement that more is better immediately yields a total preorder. And if one is talking about preferences not just over number of apples but in general, then... well, it's not clear how what you're saying applies directly; and taken less literally, it just in general seems to me that the complete class theorem is making some very strong assumptions, much stronger than that of merely a total preorder (e.g., real numbers!).)
In short, the use of the complete class theorem here in place of Savage's theorem would appear to be an error, and I think you should correct it.
[0] Yes, it includes an Archimedean assumption, which you could argue is the same thing as baking in R; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking R as your number system means as a statement about the agent's preferences.
Thirding what the others said, but I wanted to also add that rather than actual game theory, what you may be looking for here may instead be the anthropological notion of limited good?
Sorry, but: The thing at the top says this was crossposted from Otium, but I see no such post there. Was this meant to go up there as well? Because it seems to be missing.
OK, time to actually now get into what's wrong with the ones I skipped initially. Already wrote the intro above so not repeating that. Time to just go.
Infinitarian paralysis: So, philosophical problems to start. As an actual decision theory problem, this is all moot, since you can't actually have an infinite number of people; i.e., it's not clear why this is a problem at all. Secondly, there's the naive assumption of utilitarian aggregation, as mentioned above, etc.; not going over this again. Enough of this, let's move on.
So what are the mathematical problems here? Well, you haven't said a lot here, but here's what it looks like to me. I think you've written one thing here that is essentially correct, which is that, if you did have some system of surreal-valued utilities, it would indeed likely make the distinction you want.
But, once again, that's a big "if", and not just for philosophical reasons but for the mathematical reasons I've already brought up so many times by now - you can't do infinite sums in the surreals like you want, for reasons I've already covered. So there's a reason I included the word "likely" above: because if you did find an appropriate way of doing such a sum, I can't even necessarily guarantee that it would behave like you want (yes, finite sums should, but infinite sums require definition, and who knows if they'll actually be compatible with finite sums like they should be?).
But the really jarring thing here, the thing that really exposes a serious error in your thought (well, OK, that does so to a greater extent), is not in your proposed solution - it's in what you contrast it with. Cardinal-valued utilities? Nothing about that makes sense! That's not a remotely well-defined alternative you can contrast with! And the thing that bugs me about this error is that it's just so unforced - I mean, man, you could have said "extended reals" rather than cardinals, and made essentially the same point while making at least some sense! This is just demonstrating once again that not only do you not understand surreals, you do not understand cardinals or ordinals either.
(Well, I suppose technically there's the possibility that you do but expect your audience doesn't and are talking down to them, but since you're writing here on Less Wrong, I'm going to assume that's not the case.)
Seriously, cardinals and utilities do not go together. I mean, cardinals and real numbers do not go together. Like surreals and utilities don't go together either, but at least the surreals include the reals! At least you can attempt to treat it naively in special cases, as you've done in a number of these examples, even if the result probably isn't meaningful! Cardinals you can't even do that.
And once again, there's no reason anyone who understood cardinals would even want cardinal-valued utilities. That's just not what cardinals are for! Cardinals are for counting how many there are of something. Utility calculations are not a "how many" problem.
Sphere of suffering: Once again we have infinitely many people (so this whole problem is again a non-problem) and once again we have some sort of naive utility aggregation over those infinitely many people, with all the mathematical problems that brings (only now it's over time slices as well?). Enough of this, moving on.
Honestly I don't have much new to say about the bad mathematics here, much of it is the same sort of mistakes as you made in the ones I covered in my initial comment. To cover those ones briefly:
- Surreal numbers do not measure how far a grid extends (similar to examples I've already covered)
- There's not a question of how far the grid extends; allowing it to be a transfinite variable l is just changing the problem (similar to examples I've already covered)
- Surreal numbers also do not measure the number of time steps; you want ordinals for that (similar to examples I've already covered)
- Repeat #2 but for the time steps (similar to examples I've already covered)
But OK. The one new thing here, I guess, is that now you're talking about a "majority" of the time slices? Yeah, that is once again not well-defined at all. Cardinality won't help you here, obviously; are you putting a measure on this somehow? I think you're going to have some problems there.
Trumped: Same problems I've discussed before. Surreal numbers do not count time steps, you're changing the problem by introducing a variable, utility aggregation over an infinite set (this time of time slices rather than people), you know the drill.
But actually here you're changing the problem in a different way, by supposing that Trump knows in advance the number of time steps? The original problem just had this as a repeated offer. Maybe that's a philosophical rather than mathematical problem. Whatever. It's changing the problem, is the point.
And then on top of that your solution doesn't even make any sense. Let's suppose you meant an ordinal number of days rather than a surreal number of days, since that is what you'd actually use in this context. OK. Suppose for example then that the number of days is ω (which is, after all, the original problem before you changed it). So your solution says that Trump should accept the deal so long as the day number is less than the surreal number ω/3. Except, oops! Every ordinal less than ω is also less than ω/3. Trump always accepts the deal, we're back at the original problem.
I.e., even granting that you can somehow make all the formalism work, this is still just wrong.
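To spell out that last step explicitly (a quick check of my own, using only the fact that surreal division by a positive number preserves order):

```latex
% For any natural number $n$: since $3n$ is finite, $3n < \omega$.
% Dividing by the positive surreal $3$ preserves the order, so
\[
  3n < \omega \quad\Longrightarrow\quad n < \frac{\omega}{3},
\]
% i.e.\ every day numbered below $\omega$ is also below $\omega/3$,
% so the rule ``accept iff the day number is less than $\omega/3$''
% accepts on every day of the original problem.
```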
St. Petersburg paradox: OK, so, there's a lot wrong here. Let me get the philosophical problem out of the way first  the real solution to the St. Petersburg paradox is that you must look not at expected money, but at expected utility, and utility functions must be bounded, so this problem can't arise. But let's get to the math, because, like I said, there's a lot wrong here.
Let's get the easy-to-describe problems out of the way first: You are once again using surreals where you should be using ordinals; you are once again assuming some sort of theory of infinite sums of surreals; getting infinitely many heads has zero probability, not infinitesimal (probabilities are real-valued; you could try to introduce a theory of surreal probabilities, but that will have problems already discussed), so what happens in that case is irrelevant; you are once again changing the problem by allowing things to go on beyond ω steps; and, minor point, but where on earth did the function n -> n come from? Don't you mean n -> 2^n?
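On the probability point specifically, a quick numerical aside (my own sketch, standard fair-coin model): the chance of an all-heads run is bounded above by 2^-n for every n, hence exactly zero in real-valued probability, while the expected payout of the usual 2^n-payoff game diverges.

```python
# P(first n flips are all heads) = 2**-n, which is eventually below any
# positive real -- so P(infinitely many heads) is 0, not "infinitesimal".
p_all_heads = [2.0 ** -n for n in (10, 20, 50)]
print(p_all_heads)

# Each term of the St. Petersburg expectation (probability 2**-n of
# payout 2**n) contributes exactly 1, so partial sums grow without bound.
partial_expectations = [sum((2.0 ** -k) * (2 ** k) for k in range(1, n + 1))
                        for n in (10, 100, 1000)]
print(partial_expectations)  # [10.0, 100.0, 1000.0]
```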
OK, that's largely stuff I've said before. But the thing that puzzled me the most in your claimed solution is the first sentence:
If we model this with surreals, then simply stating that there is potentially an infinite number of tosses is undefined.
What? I mean, yeah, sure, the surreals have multiple infinities while, say, the extended nonnegative reals have only one, no question there. But that sentence still makes no sense! It, like, seems to reveal a fundamental misunderstanding so great I'm having trouble comprehending it. But I will give it my best shot.
So the thing is that - ignoring the issue of unbounded utility and what the correct decision is - the original setup has no ambiguities. You can't choose to make it different by changing what system of numbers you describe it with. Now, I don't know if you're making the mistake I think you're making, because who knows what mistake you might be making, but it looks to me that you are confusing numbers that are part of the actual problem specification with auxiliary numbers just used to describe the problem.
Like, what's actually going on here is that there is a set of coin flips, right? The elements of that set will be indexed by the natural numbers, and will form a (possibly improper, though with probability 0) initial segment of it  those numbers are part of the actual problem specification. The idea though that there might be infinitely many coin flips... that's just a description. When I say "With probability 0, the set of flips will be infinite", that's just another way of saying, "With probability 0, the set of flips will be N." It doesn't make sense to ask "Ah, but what system of numbers are you using to measure its infinitude?" It doesn't matter! The set I'm describing is N! (And in any case I just said it was an infinite set, although I suppose you could say I was implicitly using cardinals.)
This is, I suppose, an idea that's shown up over and over in your claimed solutions, but since I skipped over this particular one before, I guess I never got it so explicitly before. Again, I'm having to guess what you think, but it looks to me like you think that the numbers are what's primary, rather than the actual objects the problems are about, and so you can just change the numbers system and get a different version of the same problem. I mean, OK, often the numbers are primary and you can do that! But sometimes they're just descriptive.
Oy. I have no idea whether I've correctly described your misunderstanding, but whatever it is, it's pretty big. Let's just move on.
Trouble in St. Petersburg: Can I first just complain that your numbers don't seem to match up with your text? 13 is not 9*2+3. I'm just going to assume you meant 21 rather than 13, because none of the other interpretations I can come up with make sense.
Also this problem once again relies on unbounded utilities, but I don't need to go on about that. (Although if you were to somehow reformulate it without those  though that doesn't seem possible in this coinflip formulation  then the problem would be basically similar to Satan's Apple. I have my own thoughts on that problem, but, well, I'm not going to go into it here because that's not the point.)
Anyway, let's get to the surreal abuse! Well, OK, again I don't have much new to say here, it's the same sort of surreal abuse as you've made before. Namely: Using surreals where they don't make sense (time steps should be counted by ordinals); changing the problem by introducing a transfinite variable; thinking that all ordinals are successor ordinals (sorry, but with n=ω, i.e. the original problem, there's still no last step).
Ultimately you don't offer any solution? Whatever. The errors above still stand.
The headache: More naive aggregation and thinking you can do infinite sums and etc. Or at least so I'm gathering from your claimed solution. Anyway that's boring.
The surreal abuse here though is also boring, same types as we've seen before  using surreals where they make no sense but where ordinals would; ignoring the existence of limit ordinals; and of course the aforementioned infinite sums and such.
OK. That's all of them. I'm stopping there. I think the first comment was really enough to demonstrate my point, but now I can honestly claim to have addressed every one of your examples. Time to go sleep now.
OK, time for the second half, where I get to the errors in the ones I initially skipped. And yes, I'm going to assert some philosophical positions which (for whatever reason) aren't well-accepted on this site, but there's still plenty of mathematical errors to go around even once you ignore any philosophical problems. And yeah, I'm still going to point out missing formalism, but I will try to focus on the more substantive errors, of which there are plenty.
So, let's get those philosophical problems out of the way first, and quickly review utility functions and utilitarianism, because this applies to a bunch of what you discuss here. Like, this whole post takes a very naive view of the idea of "utility", and this needs some breaking down. Apologies if you already know all of what I'm about to say, but I think given the context it bears repeating.
So: There are two different things meant by "utility function". The first is decision-theoretic; an agent's utility function is a function whose expected value it attempts to maximize. The second is the one used by utilitarianism, which involves (at present, poorly-defined) "E-utility" functions, which are not utility functions in the decision-theoretic sense, that are then somehow aggregated (maybe by addition? who knows?) into a decision-theoretic utility function. Yes, this terminology is terribly confusing. But these are two separate things and need to be kept separate.
Basically, any agent that satisfies appropriate rationality conditions has a utility function in the decision-theoretic sense (obviously such idealized agents don't actually exist, but it's still a useful abstraction). So you could say, roughly speaking, that any rational consequentialist has a decision-theoretic utility function. Whereas E-utility is specifically a utilitarian notion, rather than a general consequentialist or purely descriptive notion like decision-theoretic utility (it's also not at all clear how to define it).
Anyway, if you want surreal E-utility functions... well, I think that's still probably pretty dumb for reasons I'll get to, but since E-utility is so poorly defined, that's not obviously wrong. But let's talk about decision-theoretic utility functions. These need to be real-valued for very good reasons.
Because, well, why use utility functions at all? What makes us think that a rational agent's preferences can be described in terms of a utility function in the first place? Well, there's an answer to that: Savage's theorem. I've already described this above - it gives rationality conditions, phrased directly in terms of an agent's preferences, that together suffice to guarantee that said preferences can be described by a utility function. And yes, it's real-valued.
(And, OK, it's real-valued because Savage includes an Archimedean assumption, but, well - do you think that's a bad assumption? Let me repeat here a naive argument against infinite and infinitesimal utilities I've seen before on this site (I forget due to whom; I think Eliezer, maybe?). Suppose we go with a naive treatment of infinitesimal utilities, and A has infinitesimal utility compared to B. Then since any action you take at all has some positive (real, non-infinitesimal) probability of bringing about B, even sitting in your room waving your hand back and forth in the air, A simply has no effect on your decision-making; all considerations of B, even stupid ones, completely wash it out. Which means that A's infinitesimal utility does not, in fact, have any place in a decision-theoretic utility function. Do you really want to throw out that Archimedean assumption? Also, if you do throw it out, I don't think that actually gets you non-real-valued utilities; I think it just, y'know, doesn't get you utilities. The agent's preferences can't necessarily be described with a utility function of any sort. Admittedly I could be wrong about that last part; I haven't checked.)
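The wash-out argument can be made concrete with a toy lexicographic model (entirely my own construction, not anything from Savage): represent a "real plus infinitesimal" utility as a pair compared lexicographically, and note that any action with a nonzero real probability of affecting B beats a certainty of the infinitesimally-valued A.

```python
from fractions import Fraction

def expected(lottery):
    # lottery: list of (probability, (real_part, infinitesimal_part)) pairs;
    # the resulting pair is compared lexicographically, so any difference
    # in the real part dominates the infinitesimal part entirely
    real = sum(p * u[0] for p, u in lottery)
    inf = sum(p * u[1] for p, u in lottery)
    return (real, inf)

B = (Fraction(1), Fraction(0))   # an ordinary outcome, utility 1
A = (Fraction(0), Fraction(1))   # an "infinitesimally good" outcome
nothing = (Fraction(0), Fraction(0))

act1 = [(Fraction(1), A)]  # guarantees A
act2 = [(Fraction(1, 10**9), B),            # one-in-a-billion shot at B
        (1 - Fraction(1, 10**9), nothing)]  # otherwise nothing

# Even an absurdly small real chance of B outweighs a certainty of A,
# so A's infinitesimal utility never influences any choice.
print(expected(act1) < expected(act2))  # True
```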
In short, your philosophical mistake here is of a kind with your mathematical mistakes - in both cases, you're starting from a system of numbers (surreals) and trying awkwardly to fit it to the problem, even when it blatantly does not fit, does not have the properties that are required; rather than seeing what requirements the problem actually calls for and finding something that meets those needs. As I've pointed out multiple times by now, you're trying to make use of properties that the surreal numbers just don't have. Work forward from the requirements; don't try to force into them things that don't meet them!
By the way, Savage's theorem also shows that utility functions must be bounded. That utility functions must be bounded does not, for whatever reason, seem to be a well-accepted position on this site, but, well, it's correct, so I'm going to continue asserting it, including here. :P Now, it's true that the VNM theorem doesn't prove this, but that's due to a deficiency in the VNM theorem's assumptions, and with that gap fixed it does. I don't want to belabor this point here, so I'll just refer you to this previous discussion.
(Also, the VNM theorem is just a worse foundation generally because it assumes real-valued probabilities to begin with, but that's a separate matter. Though maybe here it's not - since you can't claim to avoid the boundedness requirement by saying you're justifying the use of utilities with VNM rather than Savage, given that you seem to want to allow surreal-valued probabilities!)
Anyway, so, yes, utilities should be real-valued (and bounded) or else you have no good reason to use them - to use surreal-valued utilities is to start from the assumption that you should use utilities (a big assumption! why would one ever assume such a thing?) when it should be a conclusion (a conclusion of theorems that say it must be real-valued).
Ah, but could infinities or infinitesimals appear in an E-utility function, that the utilitarians use? I've been ignoring those, after all. But, since they're getting aggregated into a decision-theoretic utility function, which is real-valued (or maybe it's not quite a decision-theoretic utility function, but it should still be real-valued by the naive argument above), then unless this aggregation function can magnify an infinitesimal into a non-infinitesimal, the same problem will arise; the infinitesimals will still have no relevance, and thus should never have been included.
(Yeah, I suppose in what you write you consider "summing over an infinite number of people". But: 1. such infinite sums with infinitesimals don't actually work mathematically, for reasons I've already covered, and 2. you can't actually have an infinite number of people, so it's all moot anyway.)
Yikes, all that and I haven't even gotten to examining in detail the particular mathematical problems in the remaining ones! You know what, I'll end this here and split that comment out into a third post. Point is, now in these remaining ones, when I want to point out philosophical problems, I can just point back to this comment rather than repeating all this again.
My primary response to this comment will take the form of a post, but I should add that I wrote: "I will provide informal hints on how surreal numbers could help us solve some of these paradoxes, although the focus on this post is primarily categorisation, so please don't mistake these for formal proofs".
You're right; I did miss that, thanks. It was perhaps unfair of me, then, to pick on such gaps in formalism. Unfortunately, this is only enough to rescue a small portion of the post. Ignoring the ones I skipped - maybe it would be worth my time to get back to those after all - I think the only one potentially rescued that way is the envelope problem. (I'm still skeptical that it is - I haven't looked at it in enough detail to say - but I'll grant you that it could be.)
(Edit: After rechecking, I guess I'd count Grandi's series and Thomson's lamp here too, but only barely, in the sense that - after giving you quite a bit of benefit of the doubt - yeah, I guess you could define things that way, but I see absolutely no reason why one would want to, and I seriously doubt you gain anything from doing so. (I was about to include god picking a random integer here, too, but on rechecking again, no, that one still has serious other problems even if I give you more leeway than I initially did. Like, if you try to identify ∞ with a specific surreal, say ω, there's no surreal you can identify it with that will make your conclusion correct.))
The rest of the ones I pointed out as wrong (involving surreals, anyway) all contain more substantial errors. In some cases this becomes evident after doing the work and attempting to formalize your hints; in other cases they're evident immediately, and clearly do not work even informally.
The magic dartboard is a good example of the latter - you've simply given an incorrect proof of why the magic dartboard construction works. In it you talk about ω_1 having a first half and a second half. You don't need to do any deep thinking about surreals to see the problem here - that's just not what ω_1 looks like, at all. If you do follow the hint, and compare the elements of ω_1 to (ω_1)/2 in the surreals, then, as already noted, you find everything falls in the first half, which is not very helpful. (Again: This is the sort of thing that causes me to say, I suspect you need to relearn ordinals and probably other things, not just surreals. If you actually understand ordinals, you should not have any trouble proving that the magic dartboard acts as claimed, without any need to go into the surreals and perform division.)
Meanwhile, the paradox of the gods is, as I've already laid out in detail, an example of the former. It sounds like a nice informal answer that could possibly be formalized, sure; but if you try to actually follow the hint and do that - switching to surreal time and space as needed, of course - it still makes no sense, for the reasons I've described above. Because, e.g., ω is a limit ordinal and not a successor ordinal (this is a repeated mistake throughout the post: ignoring the existence of limit ordinals); because in the surreals there are no infima of sets (that aren't minima); because the fact that a surreal exponential exists doesn't mean that it acts like you want it to (algebraically it does everything you might want, but this problem isn't about algebraic properties) or that there's anything special about the points it picks out.
In addition, some of the things one is expected to just go with would require more explanation not just to formalize (like surreal integration) but to even make informal sense of (like what structure you are putting on a set, or what you are embedding it in, that would make a surreal an appropriate measure of its size).
In short, your hints are not hints towards an alreadyexisting solution (or at least, not one that anyone other than you would accept); they're analogydriven speculation as to what a solution could look like. Obviously there's nothing wrong with analogydriven speculation! I could definitely go on about some analogydriven speculation of mine involving surreals! But, firstly, that's not what you presented it as; secondly, in most of your cases it's actually fairly easy (with a bit of relevant knowledge) to follow the breadcrumb trail and see that in fact it goes nowhere, as I did in my reply; and, thirdly, you're purporting to "solve" things that aren't actually problems in the first place. The second being the most important here, to be clear.
(And I think the ones I skipped demonstrate even more mathematical problems that I didn't get to, but, well, I haven't gotten to those.)
FWIW, I'd say surreal decision theory is a bad idea, because, well, Savage's theorem - that's a lot of my philosophical objections right there. But I should get to the actual mathematical problems sometime; the philosophical objections, while important, are, I expect, not as interesting to you.
Basically, the post treats the surreals as a sort of device for automatically making the infinite behave like the finite. They're not. Yes, their structure as an ordered field (ordered exponential field, even) means that their algebraic behavior resembles such familiar finite settings as the real numbers, in contrast to the quite different arithmetic of (say) the ordinal or cardinal numbers (one might even include here the extended real line, with its mostly-all-absorbing ∞). But the things you're trying to do here often involve more than arithmetic or algebra, and then the analogies quickly fall apart. (Again, I'd see our previous exchange here for examples.)
Almost nothing in this post is correct. This post displays not just a misuse of and failure to understand surreal numbers, but a failure to understand cardinals, ordinals, free groups, lots of other things, and just how to think about such matters generally, much as in our last exchange. The fact that (as I write this) this is sitting at +32 is an embarrassment to this website. You really, really, need to go back and relearn all of this from scratch, because going by this post you don't have the slightest clue what you're talking about. I would encourage everyone else to stop upvoting this crap.
This whole post is just throwing words around and making assertions that assume things generalize in a particular naïve way that you expect. Well, they don't, and certainly not obviously.
Really, the whole idea here is wrong. The fact that something does not extend to infinities or infinitesimals is not somehow a paradox. Many things don't extend. There's nothing wrong with that. Some things, of course, do extend if you do things properly. Some things extend in more than one way, with none of them being more natural than the others! But if something doesn't extend, it doesn't extend. That's not a paradox.
Similarly, the fact that something has unexpected results is not a paradox. The right solution for some of these is just to actually formalize them and accept the results. No further "resolution" is required.
In the hopes of making my point absolutely clear, I am going to take these one by one. ~(As per the bullshit asymmetry principle, I'm afraid my response will be much longer than the original post.)~ (OK, I guess that turned out not to be true.) Those that involve philosophical problems in addition to just mathematical problems I will skip on my first pass, if you don't mind (well, some of them, anyway; and I may have slightly misjudged some of the ones I skipped, because, well, I skipped them - point is, I'm skipping some; it hardly matters; the rest are enough to demonstrate the point, but maybe I will get back to the skipped ones later). Note that I'm going to focus on problems involving infinities somehow; if there are problems not involving infinities, I'll likely miss them.
Infinitarian paralysis: Skipping for now due to philosophical problems in addition to mathematical ones.
Paradox of the gods: You haven't stated your setup here formally, but if I try to formalize it (using real numbers as is probably appropriate here) I come to the conclusion that yes, the man cannot leave the starting point. Is this a "paradox"? No, it's just what you get if you actually formalize this. The continuum is counterintuitive! It doesn't quite fit our usual notions of causality! Think about differential equations for a moment: is it a "paradox" that some differential equations have nonunique solutions, even though it seems that a particle's position, velocity, and relation between the two ought to "cause" its future trajectory? No! This is the same sort of thing; continuous time and continuous space do not work like discrete time and discrete space.
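To make the ODE point concrete (this is my own stock example, not anything from the post): x'(t) = 3·x(t)^(2/3) with x(0) = 0 is satisfied by both x(t) = 0 and x(t) = t³, so the initial data genuinely fails to "cause" a unique trajectory. A quick numerical sanity check:

```python
# Non-uniqueness for the ODE x'(t) = 3 * x(t)**(2/3), x(0) = 0:
# both x(t) = 0 and x(t) = t**3 satisfy it for t >= 0.
def rhs(x):
    return 3 * x ** (2 / 3)

def satisfies_ode(f, ts, tol=1e-6):
    """Check f'(t) == rhs(f(t)) at sample points via central differences."""
    h = 1e-7
    return all(abs((f(t + h) - f(t - h)) / (2 * h) - rhs(f(t))) < tol
               for t in ts)

ts = [0.5, 1.0, 2.0]
print(satisfies_ode(lambda t: 0.0, ts))      # True: the trivial solution
print(satisfies_ode(lambda t: t ** 3, ts))   # True: a second, distinct solution
```

Two distinct solutions through the same initial point, no contradiction anywhere: non-uniqueness is just a fact about the continuum, not a paradox.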
But in addition to your "resolution" being unnecessary, it's also nonsensical. You're taking the number of gods as a surreal number. That's nonsense. Surreal numbers are not for counting how many of something there are. Are you trying to map cardinals to surreals? I mean, yeah, you could define such a map, it's easy to do with AC, but is it meaningful? Not really. You do not count numbers of things with surreals, as you seem to be suggesting.
Of course, there's more than one way to measure the size of an infinite set, not just cardinality. Since you translate the number into a surreal, perhaps you meant the set of gods to be reverse well-ordered, so that you can talk about its reverse order type, as an ordinal, and take that as a surreal? That would go a little way to making this less nonsensical, but, well, you never said any such thing.
Of course, your solution seems to involve implicitly changing the setting to have surreal-valued time and space, but that makes sense; it does make sense to try to make such "paradoxes" make more sense by extending the domain you're talking about. You might want to make more of an explicit note of it, though. Anyway, let's get back to nonsense.
So let's say we accept this reverse-well-ordering hypothesis. Does your "resolution" follow? Does it even make sense? No to both! First, your "resolution" isn't so much a deduction as a new assumption: that these reverse-well-ordered gods are placed at positions 1/2^α for ordinals α. I mean, I guess that's a sensible extension of the setup, but... let's note here that you actually are changing the setup significantly at this point; the original setup pretty clearly had ω gods, not more. But, OK, that's fine; you're generalizing, from the case of ω to the case of more. You should be more explicit that you're doing that, but I guess that's not wrong.
But your conclusion still is wrong. Why? Several reasons. Let's focus on the case of ω-many gods, which the original setup describes. You say that the man is stopped at 1/2^ω. Question: Why? Is 1/2^ω the minimum of the set {1/2^n : n ∈ N} inside the surreals? Well, obviously not, because that set obviously has no smallest element.
But is it the infimum (or equivalently limit), then, inside the surreals, if not the minimum? Actually, let's put that question aside for now and note that the answer to this question is actually irrelevant! Because if you accept the logic that the infimum (or equivalently limit) controls, then, guess what, you already have your resolution to the paradox back in the real numbers, where there's an infimum and it's 0. So all the rest of this is irrelevant.
But let's go on: is it the infimum (or equivalently limit)? No! It's not! Because there is no infimum! A subset of the surreals with no minimum also has no infimum, always, unconditionally! The surreal numbers are not at all like the real numbers. You basically can't do limits there, as we've already discussed. So there's nothing particularly distinguishing about the point 1/2^ω, no particular reason why that's where the man would stop. (There's no god there! We're talking about the case of ω gods, not ω+1 gods.)
We haven't even asked the question of what you mean by 2^s for a surreal s. I'm going to assume, since you're talking about surreals and didn't specify otherwise, that you mean exp(s log 2), using the usual surreal exponential. But, since you're only concerned with the case where s is an ordinal, maybe you actually meant taking 2^s using ordinal exponentiation, and then taking the reciprocal as a surreal. These are different, I hope you realize that!
What about if we use {left set | right set} instead of limits and infima? Well, there's now even less reason to believe that such a point has any relevance to this problem, but let's make a note of what we get. What is { | 1, 1/2, 1/4, ...}? Well, it's 0, duh. OK, what if we exclude that by asking for {0 | 1, 1/2, 1/4, ...} instead? That's 1/ω. This isn't 1/2^ω; it's larger (well, unless you meant "use ordinal exponentiation and then invert", in which case it is indeed equal, and you need to be a hell of a lot clearer, but it's all still irrelevant to anything). (Using ordinal exponentiation, 2^ω = ω; while using the surreal exponential, 2^ω = ω^(ω log 2) > ω.)
(What if we use sign-sequence limits, FWIW? That'll still get us 1/ω. You really shouldn't use those though.)
Anyway, in short, your resolution makes no sense. Moving on...
Two envelopes paradox: OK, I'm ignoring all the parts that don't have to do with surreals, including the use of an improper prior (aka not a probability distribution); I'm just going to examine the use of surreals.
Please. Explain. How, on earth, does one put a uniform distribution on an interval of surreal numbers?
So, if we look at the interval from 0 to 1, say, then the probability of picking a number between a and b, for a<b, is b-a? For surreal a and b?
So, first off, that's not a probability. Probabilities are real, for very good reason. This is explicitly a decision-theory context, so don't tell me that doesn't apply!
But OK. Let's accept the premise that you're using a surreal-valued probability measure instead of a real one. Except, wait, how is that going to work? How is countable additivity going to work, for instance? We've already established that infinite sums do not (in general) work in the surreals! (See earlier discussion.) But OK, we can ignore that; hell, Savage's theorem doesn't guarantee countable additivity, so let's just accept finite additivity. There is the question of just how you're going to define this in generality (it takes quite a bit of work to extend Jordan "measure" into Lebesgue measure, you know), but you're basically just using intervals so I'll accept we can just treat that part naïvely.
But now you're taking expected values! Of a surreal-valued probability distribution over the surreals! So basically you're having to integrate a surreal-valued function over the surreals. As I've mentioned before, there is no known theory of this, no known general way to define this. I suppose since you're just dealing with step functions we can treat this naïvely, but ugh. Nothing you're doing is really defined. This is pure "just go with it, OK?" This one is less bad than the previous one, this one contains things one can potentially just go with, but you don't seem to realize that the things you're doing aren't actually defined, that this is naïve heuristic reasoning rather than actual properly-founded mathematics.
Sphere of suffering: Skipping for now due to philosophical problems in addition to mathematical ones.
Hilbert Hotel: So, first off, there's no paradox here. This sort of basic cardinal arithmetic of countable sets is wellunderstood. Yes, it's counterintuitive. That's not a paradox.
But let's examine your resolution, because, again, it makes no sense. First, you talk about there being n rooms, where n is a surreal number. Again: You cannot measure sizes of sets with surreal numbers! That is meaningless!
But let's be generous and suppose you're talking about well-ordered sets, and you're measuring their size with ordinals, since those embed in the surreals. As you note, this is changing the problem, but let's go with it anyway. Guess what: you've still described it wrong! If you have ω rooms, there is no last room. The last room isn't room ω, that'd be if you had ω+1 rooms. Having ω rooms is the original Hilbert Hotel with no modification.
I'm assuming when you say n/2 you mean that in the surreal sense. OK. Let's go back to the original problem and say n=ω. Then n/2 is ω/2, which is still bigger than any natural number, so there's still nobody in the "last half" of rooms! What if n=ω+1, instead? Then ω/2+1/2 is still bigger than any natural number, so your "last half" consists only of ω+1; it's not of the same cardinality as your "first half". Is that what you intended?
But ultimately... even ignoring all these problems... I don't understand how any of this is supposed to "resolve" any paradoxes. It resolves it by making it impossible to add more people? Um, OK. I don't see why we should want that.
But it doesn't even succeed at that! Because if you have [Dedekind-]infinitely many, then for adding finitely many, you have that initial ω, so you can just perform your alterations on that and leave the rest alone. You haven't prevented the Hilbert Hotel "paradox" at all! And for doubling, well, assuming well-ordering (because you're measuring sizes with ordinals, maybe?? or because we're assuming choice), you can partition things into copies of ω and go from there.
Galileo's paradox: Skipping this one as I have nothing more to add on this subject, really.
Bacon's puzzle: This one, having nothing to do with surreals, is completely correct! It's not new, but it's correct, and it's neat to know about, so that's good. (Although I have to wonder: Why is it on this one you accept conventional mathematics of the infinite, instead of objecting that it's a "paradox" and trying to shoehorn in surreals?)
Trumped and the St. Petersburg ones: Skipping for now due to philosophical problems in addition to mathematical ones.
Dice-room murders: An infinitesimal chance the die never comes up 10? No, there's a 0 chance. That's how probability theory works. Again, probability is real-valued for very good reasons, and reals don't have infinitesimals. If you want to introduce probabilities valued over some other codomain, you're going to have to specify what and explain how it's going to work. "Infinitesimal" is not very specific.
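To spell out why the answer is exactly 0 and not "infinitesimal" (my own illustration; the per-roll probability p = 1/10 is a stand-in, the argument is identical for any fixed p > 0): the event "the die never comes up" is the decreasing intersection of the events "no hit in the first n rolls", so its probability is the limit of (1-p)^n, which is 0 in the reals.

```python
# P(no hit in the first n rolls) with per-roll hit probability p.
# "Never" is the intersection over all n, so its probability is the
# limit of these values -- exactly 0, since the reals have no infinitesimals.
p = 1 / 10  # hypothetical per-roll probability; any p > 0 gives the same limit
for n in [10, 100, 1000, 10000]:
    print(n, (1 - p) ** n)
# geometric decay toward 0; by n = 10000 the value is below the smallest
# positive float and prints as 0.0
```

Nothing here requires any exotic number system; ordinary real-valued probability already answers the question.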
The rest as you say has nothing to do with infinities and seems correct so I'll ignore it.
Ross-Littlewood paradox: Er... you haven't resolved this one at all? The conventional answer, FWIW, is that you should take the limit of the sets, not the limit of the cardinalities, so that none are left, and this demonstrates the discontinuity of cardinality. But, um, you just haven't answered this one? I mean I guess that's not wrong as such...
Soccer teams: Your resolution bears little resemblance to the original problem. You initially postulated that the set of abilities was Z, then in your resolution you said it was an interval in the surreals. Z is not an interval in the surreals. In fact, no set is an interval in the surreals; between any two given surreals there is a whole proper class of surreals. Perhaps you meant in the omnific integers? Sorry, Z isn't an interval in there either. Perhaps you meant in something of your own invention? Well, you didn't describe it. Ultimately it's irrelevant  because the fact is that, yes, if you add 1 to each element of Z, you get Z. No alternate way of describing it will change that.
Positive soccer teams: You, uh, once again didn't supply a resolution? In any case this whole problem is ill-defined since you didn't actually specify any way to measure which of two teams is better. Although, if we just assume there is some way, then presumably we want it to be a preorder (since teams can be tied), and then it seems pretty clear that the two teams should be tied (because each should be no greater than the other for the two reasons you gave). (Actually it's not too hard to come up with an actual preorder here that does what you want, and then you can verify that, yup, the two teams are tied in it.) This happens a lot with infinities: things that are orders in the finite case become preorders. Just something you have to learn to live with, once again.
Can God pick an integer at random?: This is... not how probability works. There is no uniform probability distribution on the natural numbers, by countable additivity. Or, in short, no, God cannot pick an integer at random. You then go on to talk about nonsensical 1/∞ chances. In short, the only paradox here is due to a nonsensical setup.
But then you go and give it a nonsensical resolution, too. So, first off, once again, you can't count things with surreals. I will once again generously assume that you intended there to be a well-ordered set of planets and are counting with ordinals rather than surreals.
It doesn't matter. Not only do you then fail to reject the nonsensical setup, you do the most nonsensical thing yet: You explicitly mix surreal numbers with extended real numbers, and attempt to compare the two. What. Are you implicitly thinking of ∞ as ω here? Because you sure didn't say anything like that! Seriously, these don't go together.
I am tempted to do the formal manipulations to see if there is any way one might come to your conclusions by such meaningless formal manipulation, but I'll just give you the benefit of the doubt there, because I don't want to give myself a headache doing meaningless formal manipulations involving two different number systems that can't be meaningfully combined.
Banach-Tarski paradox: This starts out as a decent explanation of Banach-Tarski; it's missing some important details, but whatever. But then you start talking about sequences of infinite length. (Something that wasn't there before; you act as if this was already there, but it wasn't.) Which once again you meaninglessly assign a surreal length. I'll once again assume you meant an ordinal length instead. Except that doesn't help much because this whole thing is meaningless: you can't take infinite products in groups.
Or maybe you can, in this case, since we're really working in F_2 embedded in SO(3), rather than just in F_2? So you could take the limit in SO(3), if it exists. (SO(3) is compact, so there will certainly be at least one limit point, but I don't see any obvious reason it'd be unique.)
Except the way you talk about it, you talk as if these infinite sequences are still in our free group. Which, no. That is not how free groups work. They contain finite words only.
Maybe you're intending this to be in some sort of "free topological group", which does contain infinite and transfinite words? Yeah, there's no such thing in any nontrivial manner. Because if you have any element g, then you can observe that g(ggg...) = ggg..., and therefore (because this is a group) ggg...=1. Well, OK, that's not a full argument, I'll admit. But, that's just a quick example of how this doesn't work, I hope you don't mind. Point is: You haven't defined this new setting you're working in, and if you try, you'll find it makes no sense. But it sure as hell ain't the free group F_2.
I also have no idea what you're saying this does to the Banach-Tarski paradox. Honestly, it doesn't matter, because the logic behind Banach-Tarski remains the same regardless.
The headache: Skipping for now
The magic dartboard: No, a bijection between the countable ordinals and [0,1] is not known to exist. That's only true if you assume the continuum hypothesis. Are you assuming the continuum hypothesis? You didn't mention any such thing.
You then give a completely wrong and nonsensical argument as to why this construction has the desired "magic dartboard" property, in which you talk about certain ordinals being in the "first 1/n" of the countable ordinals, or the "last half" of the countable ordinals. This is completely meaningless. There is no first 1/n, or last half, of the countable ordinals. If you had some meaning in mind, you're going to have to explain it. And if you mean going into the surreals and comparing them against ω_1/n, then, unsurprisingly, the entire countable ordinals will always fall in your first 1/n. The construction does yield a magic dartboard, but you're completely wrong as to why.
Thomson's lamp: Your resolution here is nonsense. Now, our presses are occurring in a well-ordered sequence, so it's most appropriate to regard the number of presses as an ordinal. In which case, the number of presses is ω. It's not a question; that's what it is. It doesn't depend on how we define the reals, WTF? The reals are the reals (unless you're going to start doing constructive mathematics, in which case the things you wrote will presumably be wrong in many more ways). It might depend on how you define the problem, but you were pretty explicit about what the press timings are. Anyway, ω is even as an omnific integer, but does that mean we should consider the lamp to be on? I see no reason to conclude this. The lamp's state has no well-defined limit, after all. This is once again naïvely extending something from the finite to the infinite without checking whether it actually extends (it doesn't).
Really, the basic mistake here is assuming there must be an answer. As I said, the lamp's state has no limit, so there really just isn't any welldefined answer to this problem.
Grandi's series: You once again assign a variable surreal length (which still makes no sense) to something which has a very definite length, namely ω. In any case, Grandi's series has no limit. You say it depends on whether the length is even or odd. Suppose we interpret that as "even or odd as an omnific integer" (i.e. having even or odd finite part). OK. So you're saying that Grandi's series sums to 0, then, since ω is even as an omnific integer? It doesn't matter; the series has no limit, and if you tried to extend it transfinitely, you'd get stuck at ω when there's already no limit there.
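"Has no limit" here means nothing more exotic than that the partial sums oscillate forever without settling, which takes two lines to see (my own illustration):

```python
from itertools import accumulate

# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
terms = ((-1) ** k for k in range(10))
print(list(accumulate(terms)))  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

The sums bounce between 1 and 0, so no definition of "limit" that agrees with the ordinary one on convergent sequences can assign this series a sum, and extending the index set transfinitely doesn't change the fact that the sequence already fails to converge at ω.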
I mean, I suppose you could define a new notion of what it means to sum a divergent (possibly transfinite series), and apply it to Grandi's series (possibly extended transfinitely) as an example, but you haven't done that. You've just said what the limit "is". It isn't. More naïve extension and formal manipulation in place of actual mathematical reasoning.
Satan's apple: Skipping, you didn't mention surreals and the paradox is entirely philosophical rather than mathematical (you also admitted confusion on this one rather than giving a fake resolution, so good for you)
Gabriel's horn: Yup, you described this one correctly at least!
Bertrand paradox: You almost had this, but still snuck in an incorrect statement revealing a serious conceptual error. There aren't multiple sets of chords; there are multiple probability distributions on the set of chords. Really, it's not that all the probabilities are valid, it's just that it depends on how you pick, but I was giving you the benefit of the doubt on that one until you added that bit about multiple sets of chords.
Zeno's paradoxes: We can argue all we like about the "real" resolution here philosophically, but whatever; you seem to grasp the mathematics of it at least, so let's move on.
Skolem's paradox: You've mostly summed this one up correctly. I must nitpick and point out that membership in the model is not necessarily the same as membership outside the model even for those sets that are in the model (something which you might realize but your explanation doesn't make clear), but this is a small error compared to the giant conceptual errors that fill most of what you've written here.
Whew. OK. I will maybe get back to the ones I skipped, but probably not because this is enough to demonstrate my point. This post is horribly wrong nearly in its entirety, shot through with serious conceptual errors. You really need to relearn this stuff from scratch, because almost nothing you're saying makes sense. I urge everyone else to ignore this post and not take anything it says as reliable.
I'm pretty sure this point has been made here before, but, hey, it's worth repeating, no? :)
I think you're going to need to be more explicit. My best understanding of what you're saying is this: Each participant has two options (to attempt to actually understand the other, or to attempt to vilify them for disagreeing), and we can lay these out in a payoff matrix and turn this into a game.
I don't see offhand why this would be a Prisoner's Dilemma, though I guess that seems plausible if you actually do this. It certainly doesn't seem like a Stag Hunt or Chicken, which I guess are the other classic cooperate-or-don't games.
My biggest problem here is the question of how you're constructing the payoff matrices. The reward for defecting is greater ingroup acceptance, at the cost of understanding; the reward for both cooperating is increased understanding, but likely at the cost of ingroup acceptance. And the penalty for cooperating and being defected on seems to be in the form of decreased outgroup acceptance. I'm not sure how you make all these commensurable to come up with a single payoff matrix. I guess you have to somehow, but that the result would be a Prisoner's Dilemma isn't obvious. Indeed it's actually not obvious to me here that cooperating and being defected on is worse than what you get if both players defect, depending on one's priorities, which would definitely not make it a Prisoner's Dilemma. I think that part of what's going on here is that different people's weighting of these things may substantially affect the resulting game.
This makes sense, but what you call "dialectical moral argumentation" seems to me like it can just be considered as what you call "logical moral argumentation" but with the "ought" premises left implicit, you know? From this point of view, you could say that they're two different ways of framing the same argument. Basically, dialectical moral argumentation is the hypothetical syllogism to logical moral argumentation's repeated modus ponens. Because if you want to prove C, where C is "You should take action X", starting from A, where A is "You want to accomplish Y", then logical moral argumentation makes the premise A explicit, and so, supplied with the facts A => B and B => C, can first make B and then make C (although obviously that's not the only way to do it but let's just go with this); whereas dialectical moral argumentation doesn't actually have the premise A to hand and so instead can only apply hypothetical syllogism to get A => C, and then has to hand this to the other party who then has A and can make C with it.
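The contrast can be caricatured in a few lines of code (a toy sketch of my own; the propositions A, B, C and both function names are invented for illustration). Implications are (premise, conclusion) pairs; the "logical" arguer has A as a fact and forward-chains with modus ponens, while the "dialectical" arguer lacks A and can only compose implications into A => C to hand over:

```python
# Toy contrast: modus ponens chaining vs. hypothetical syllogism.
rules = [("A", "B"), ("B", "C")]

def modus_ponens_closure(facts, rules):
    """'Logical' style: with premise A in hand, derive everything reachable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

def hypothetical_syllogism(rules):
    """'Dialectical' style: no premise A, so compose implications instead."""
    rules = set(rules)
    changed = True
    while changed:
        changed = False
        for p, q in set(rules):
            for r, s in set(rules):
                if q == r and (p, s) not in rules:
                    rules.add((p, s))
                    changed = True
    return rules

print(modus_ponens_closure({"A"}, rules))           # derives B and then C
print(("A", "C") in hypothetical_syllogism(rules))  # True: A => C, ready to hand over
```

Both routes end at C once A is supplied; the only difference is who holds A when the chaining happens, which is the sense in which they're the same argument framed two ways.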
So, like, this is a good way of making sense of Sam Harris, as you say, but I'm not sure this new point of view actually adds anything new. It sounds like a fairly trivial rephrasing, and to me at least seems like a less clear one, hiding some of the premises.
(Btw thank you for the comment below with the interview quotes; that really seems to demonstrate that yes, your explanation really is what Harris means, not the ridiculous thing it sounds like he's saying!)
What does this have to do with the Prisoners' Dilemma?
Oh, I see! So the 2017 one is even more not the first one, then. :)
I feel like I should point out that last year was not the first East Coast Rationalist Megameetup in NYC; that was in 2014. Last year may however be the first one that gets repeated...
[This suggests a Magic format where you have some 'base decks' on offer, maybe shuffled, maybe not, and your 'actual deck' is your starting hand, that you get to choose entirely. If the base decks only contain the equivalent of forests and Grizzly Bears, then the question is something like "can you fit a game-ender into 7 cards, with enough disruption and counter-disruption that yours goes off first?"]
Having restricted choices for groups of cards, and then only picking a few of them, seems to be moving almost somewhat Codexward... (although I gather from the other comments that wasn't really your intention).
(Hey, a note, you should probably learn to use the blockquote feature. I dunno where it is in the rich text editor if you're using that, but if you're using the Markdown editor you just precede the paragraph you're quoting with a ">". It will make your posts substantially more readable.)
Are you sure this chain of reasoning is correct?
Yes.
Consider 1/2x. For any finite number of terms it will be greater than ε, but as x approaches ω, it should approach 1/2ω.
What "terms"? What are you talking about? This isn't a sequence or a sum; there are no "terms" here. Yes, even in the surreals, as x goes to ω, 1/(2x) will approach 1/(2ω), as you say; as I mentioned above, limits of functions of a surreal variable will indeed still work. But that has no relevance to the case under discussion.
(And, while it's not necessary to see what's going on here, it may be helpful to remember that if we interpret this as occurring in the surreals, then in the case of 1/2x as x→ω, your domain has proper-class cofinality, while in the case of this infinite sum, the domain has cofinality ω. So the former can work, and the latter cannot. Again, one doesn't need this to see that (the partial sum can't get within 1/ω of e even when the cofinality is countable), but it may be worth remembering.)
Why can't the partial sum get within 1/ω of e?
Because the partial sum is always a rational number. A rational number  more generally, a real number  cannot be infinitesimally close to e without being e. (By contrast, for surreal x, 1/(2x) certainly does not need to be a real number, and so can get infinitesimally close to 1/(2ω) without being equal to it.)
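The "partial sums are exact rationals" point can be checked with exact arithmetic (a sketch of my own, using Python's `Fraction`): each partial sum of Σ 1/k! is a rational number, and the remainder e - s exceeds the next term, a positive rational, so the distance from e is always a positive real and never infinitesimal.

```python
from fractions import Fraction
from math import factorial

# Each partial sum of e = 1/0! + 1/1! + 1/2! + ... is an exact rational.
s = sum(Fraction(1, factorial(k)) for k in range(11))
print(s)  # 9864101/3628800 -- a rational, hence a real number
# The remainder e - s is at least the next term, which is a positive rational:
print(Fraction(1, factorial(11)))  # 1/39916800 > 0
# A number at positive *real* distance from e cannot be within 1/omega of e.
```

By contrast, nothing forces a surreal value like 1/(2x) to be real, which is why it can approach 1/(2ω) to within an infinitesimal while no partial sum can do the same for e.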
You're right that it won't be a nice neat quotient group. But here's an example. ℵ_0 - ℵ_0 can equal any integer where ℵ_0 is a cardinal, or even +/- ℵ_0, but in surreal numbers it works as follows. Suppose X and Y are countable infinities. Then X - Y has a unique value that we can sometimes identify. For example, if X represents the length of a sequence and Y is all the elements in the sequences except one, then X - Y = 1. We can perform the calculation in the surreals, or we can perform it in the cardinals and receive a broad range of possible answers. But for every possible answer in the cardinals, we can find pairs of surreal numbers that would provide that answer.
What??
OK. Look. I could spend my time attempting to pick this apart. But, let me be blunt, the point I am trying to get across here is that you are talking nonsense. This is babble. You are way out of your depth, dude. You don't know what you are talking about. You need to go back and relearn this from the beginning. I don't even know what mistake you're making, because it's not a common one I recognize.
Just in the hopes it might be somewhat helpful, I will quickly go over the things I can maybe address quickly:
ℵ_0 - ℵ_0 can equal any integer where ℵ_0 is a cardinal, or even +/- ℵ_0, but in surreal numbers it works as follows.
I have no idea what this sentence is talking about.
Suppose X and Y are countable infinities.
What's an "infinity"? An ordinal? A cardinal? (There's only one countably infinite cardinal...) A surreal or something else entirely? You said "countable", so it has to be something to which the notion of countability applies!
This mistake, at least, I think I can identify. Maybe you should, in fact, look over that "quick guide to the infinite" I wrote, because this is myth #0 I discussed there. There's no such thing as a unified notion of "infinities". There are different systems of numbers, some of them contain numbers/objects that are infinite (i.e.: larger in magnitude than any whole number), there is not some greater unified system they are all a part of.
Then X - Y has a unique value that we can sometimes identify.
What is X - Y? I don't even know what system of numbers you're using, so I don't know what this means.
If X and Y are surreals, then, sure, there's quite definitely a unique surreal XY. This is true more generally if you're thinking of X and Y as living in some sort of ordered field or ring.
If X and Y are cardinals, then X - Y may not be well-defined. Trivially so if Y>X (no possible values), but let's ignore that case. Even ignoring that, if X and Y are infinite, X - Y may fail to be well-defined due to having multiple possible values.
If X and Y are ordinals, we have to ask what sort of addition we're using. If we're using natural addition, then X - Y certainly has a unique value in the surreals, but it may or may not be an ordinal, so it's not necessarily well-defined within the ordinals.
If we're using ordinary addition, we have to distinguish between X - Y and -Y + X. (The latter just being a way of denoting "subtracting on the left"; it should not be interpreted as actually negating Y and adding to X.) -Y + X will have a unique value so long as Y≤X, but X - Y is a different story; even restricting to Y≤X, if X is infinite, then X - Y may have multiple possible values or none.
For example, if X represents the length of a sequence and Y is all the elements in the sequences except one, then X - Y = 1.
Yeah, not going to try to pick this apart, in short though this is nonsense.
I'm starting to think though that maybe you meant that X and Y were infinite sets, rather than some sort of numbers? With X - Y being the set difference? But that is not what you said. Simply put, you seem very confused about all this.
We can perform the calculation in the surreals, or we can perform it in the cardinals and receive a broad range of possible answers.

Are X and Y surreals or are they cardinals? Surreals and cardinals don't mix, dude! It can't be both, not unless they're just whole numbers! You are performing the calculation in whatever number system these things live in.

You just said above you get a welldefined answer, and, moreover, that it's 1! Now you're telling me that you can get a broad range of possible answers??

If X is representing the length of a sequence, it should probably be an ordinal. As for Y... yeah, OK, not going to try to make sense of the thing I already said I wouldn't attempt to pick through.

And if X and Y are sets rather than numbers... oh, to hell with it, I'm just going to move on.
But for every possible answer in the cardinals, we can find pairs of surreal numbers that would provide that answer.
There is, I think, a correct idea here that is rescuable. It also seems pretty clear you don't know enough to perform that rescue yourself and rephrase this as something that makes sense. (A hint, though: The fixed version probably should not involve surreals.)
(Do surreal numbers even have cardinalities, in a meaningful sense? Yes obviously if you pick a particular way of representing surreals as sets, e.g. by representing them as sign sequences, the resulting representations will have cardinalities; obviously, that's not what I'm talking about. Although, who knows, maybe that's a workable notion  define the cardinality of a surreal to be the cardinality of its birthday. No idea if that's actually relevant to anything, though.)
Even charitably interpreted, none of this matches up with your comments above about equivalence classes. It relates, sure, but it doesn't match. What you said above was that you could solve more equations by passing to equivalence classes. What you're saying now seems to be... not that.
Long story short: I really, really, do not think you have much idea what you are talking about. You really need to relearn this from scratch, and not starting with surreals. I definitely do not think you are prepared to go instructing others on their uses; at this point I'm not convinced you could clearly articulate what ordinals and cardinals are for, you've gotten everything so mixed up in your comment above. I wouldn't recommend trying to expand this into a post.
I think I should probably stop arguing here. If you reply to this with more babble I'm not going to waste my time replying to it further.
(Note: I've edited some things in to be clearer on some points.)
Do you know where I could find proofs of the following?
"Normally we define exp(x) to be the limit of 1, 1+x, 1+x+x^2/2, it'll never get within 1/ω of e."
"If you make the novice mistake in fixing it of instead trying to define exp(x) as {1,1+x,1+x+x^2/2,...}, you will get not exp(1)=e but rather exp(1)=3."
These are both pretty straightforward. For the first: say we're working in a non-Archimedean ordered field which contains the reals, and we take the partial sums of the series 1+1+1/2+1/6+...; these are rational numbers, in particular they're real numbers. So if we have one of these partial sums, call it s, then e-s is a positive real number. So if you have some infinitesimal ε, then e-s is larger than ε; that's what an infinitesimal is. The sequence will never get within ε of e.
For the second, note that 3={2|}, i.e., it's the simplest number larger than 2. So if you have {1,2,5/2,8/3,...|}, well, the simplest number larger than all of those is still 3, because you did nothing to exclude 3. 3 is a very simple number! By definition, if you want to not get 3, either your interval has to not contain 3, or it has to contain something even simpler than 3 (i.e., 2, 1, or 0). (This is easy to see if you use the sign-sequence representation -- remember that x is simpler than y iff the sign sequence of x is a proper prefix of the sign sequence of y.) The interval of surreals greater than those partial sums does contain 3, and does not contain 2, 1, or 0. So you get 3. That's all there is to it.
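To make the first argument concrete, here's a quick sketch in Python (purely illustrative; it only checks the real-number fact the argument rests on): each gap e - s_n is bounded below by the next term 1/(n+1)!, which is a positive rational, and any positive rational exceeds every infinitesimal by definition.

```python
from fractions import Fraction
from math import factorial

def partial_sum(n):
    # s_n = 1 + 1 + 1/2 + ... + 1/n!, a rational (hence real) number
    return sum(Fraction(1, factorial(k)) for k in range(n + 1))

# e - s_n >= s_{n+1} - s_n = 1/(n+1)!, a *positive rational*;
# an infinitesimal ε is by definition below every positive rational,
# so the partial sums never get within ε of e.
for n in range(10):
    assert partial_sum(n + 1) - partial_sum(n) == Fraction(1, factorial(n + 1))
    assert Fraction(1, factorial(n + 1)) > 0
```

(Exact `Fraction` arithmetic is used so the gaps are literally positive rationals, not floats.)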
As for the rest of the comment... let me address this out of order, if you don't mind:
In some ways I view them as the ultimate reality
See, this is exactly the sort of thinking I'm trying to head off. How is that relevant to anything? You need to use something that actually fulfills the requirements of the problem.
On top of that, this seems... well, I don't know if you actually are making this error, but it seems rather reminiscent of the high school student's error of imagining that there's a single notion of "number" -- where every notion of "number" they know fits in C, so "number" and "complex number" become identified. And this is false not just because you can go beyond C, but because there are systems of numbers that can't be fit together with C at all. (How does Q_p fit into this? Answer: It doesn't!)
(Actually, by that standard, shouldn't the surcomplexes be the "ultimate reality"? :) )
(...I actually have some thoughts on that sort of thing, but since I'm trying to point out right now that that sort of thing is not what you should be thinking about when determining what sort of space to use, I won't go into them. "Ultimate reality" is, in addition to not being correct, probably not on the list of requirements!)
Also, y'know, you don't necessarily need something that could be considered "numbers" at all, as I keep emphasizing.
Anyway, as to the mathematical part of what you were saying...
I still need to read more about surreal numbers, but the thing I like about them is that you can always reduce the resolution if you can't solve the equation in the surreals. In some ways I view them as the ultimate reality, and if we don't know the answer to something or only know the answer to a certain fineness, I think it's better to be honest about it, rather than just fall back to an equivalence class over the surreals where we do know the answer. Actually, maybe that wasn't quite clear: I'm fine with falling back, but only after it's clear that we can't solve it to the finest degree.
I have no idea what you're talking about here. Like, what? First off, what sort of equations are you talking about? Algebraic ones? Over the surreals, I guess? The surreals are a real closed field, the surcomplexes are algebraically closed. That will suffice for algebraic equations. Maybe you mean some more general sort, I don't know.
But most of this is just baffling. I have no idea what you're talking about when you speak of passing to a quotient of the surreals to solve any equation. Where is that coming from? And like -- what sort of quotient are we talking about here? "Quotient of the surreals" is already suspect because, well, it can't be a ring-theoretic quotient at all, as fields don't have nontrivial ideals. So I guess you mean a purely additive quotient? But that's not going to mix very well with solving any equations that involve more than addition, now, is it? Meanwhile, what the surreals are known for is that any ordered field embeds in them -- not anything about quotients!
Anyway, if you want to solve algebraic equations, you want an algebraically closed field. If you want to solve algebraic equations to the greatest extent possible while still keeping things ordered, you want a real closed field. The surreals are a real closed field, but you certainly don't need them just for solving equations. If you want to be able to do limits and calculus and such, you want something with a nice topology (just how nice probably depends on just what you want), but note that you don't necessarily want a field at all! None of these things favor the surreals, and the fact that we almost certainly need integration here is a huge strike against them.
Btw, you know what's great for solving equations in, even if they aren't just algebraic equations? The real numbers. Because they're connected, so you have the intermediate value theorem. And they're the only connected ordered field. Again, you might be able to emulate that sort of thing to some extent in the surreals for sufficiently nice functions (mere continuity won't be enough) (certainly you can for polynomials, since as I said they're real closed, but I'm guessing you can probably get more than that) -- I'm not super-familiar with just what's possible there, but it'll take more work. In the reals it's just: make some comparisons, they come out opposite one another, R is connected, boom, there's a solution somewhere in between.
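That reals-only argument is exactly bisection. A minimal sketch (the function and tolerance here are just illustrative):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find a root of a continuous f on [lo, hi], given that f(lo) and f(hi)
    have opposite signs. This leans on the intermediate value theorem, i.e.
    on the connectedness of R; mere continuity gives you no analogue of this
    in the surreals."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid          # the sign change is in the left half
        else:
            lo = mid          # the sign change is in the right half
    return (lo + hi) / 2

root = bisect_root(lambda x: x**3 - 2, 1.0, 2.0)  # ~ cube root of 2
```

The two endpoint comparisons coming out opposite is the entire input; connectedness does the rest.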
But mostly I'm just wondering where like any of this is coming from. It neither seems to make much sense nor to resemble anything I know.
(Edit: And, once again, it's not at all clear that being able to solve equations is at all relevant! That just doesn't seem to be something that's required. Whereas integration is.)
Later edits: various edits for clarity; also the "transfinite sequences suffice" thing is easy to verify, it doesn't require some exotic theorem
Yet later edit: Added another example
Two weeks later edit: Added the part about sign-sequence limits
So, to a large extent this is a problem with nonArchimedean ordered fields in general; the surreals just exacerbate it. So let's go through this in stages.
===Stage 1: Infinitesimals break limits===
Let's start with an example. In the real numbers, the limit as n goes to infinity of 1/n is 0. (Here n is a natural number, to be clear.)
If we introduce infinitesimals -- even just as minimally as, say, passing to R(ω) -- that's not so, because if you have some infinitesimal ε, the sequence will never get within ε of 0.
Of course, that's not necessarily a problem; I mean, that's just restating that our ordered field is no longer Archimedean, right? Of course 1/n is no longer going to go to 0, but is 1/n really the right thing to be looking at? How about, say, 1/x, as x goes to infinity, where x takes values in this field of ours? That still goes to 0. So it may seem like things are fine, like we just need to get these sequences out of our head and make sure we're always taking limits of functions, not sequences.
But that's not always so easy to do. What if we look at x^n, where x<1? If x isn't infinitesimal, that's no longer going to go to 0. It may still go to 0 in some cases -- like, in R(ω), certainly 1/ω^n will still go to 0 -- but 1/2^n sure won't. And what do we replace that with? 1/2^x? How do we define that? In certain settings we may be able to -- hell, there's a theory of the surreal exponential, so in the surreals we can -- but not in general. And doing that requires first inventing the surreal exponential, which -- well, I'll talk more about that later, but, hey, let's talk about it a bit right now. How are we going to define the exponential? Normally we define exp(x) to be the limit of 1, 1+x, 1+x+x^2/2... but that's not going to work anymore. If we try to take exp(1), expecting an answer of e, what we get is that the sequence doesn't converge, due to the cloud of infinitesimals surrounding it; it'll never get within 1/ω of e. For some values maybe it'll converge, but not enough to do what we want.
Now the exponential is nice, so maybe we can find another definition (and, as mentioned, in the case of the surreals indeed we can, while obviously in the case of the hyperreals we can do it componentwise). But other cases can be much worse. Introducing infinitesimals doesn't break limits entirely -- but it likely breaks the limits that you're counting on, and that can be fatal on its own.
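As a toy illustration of Stage 1 (the representation and all the names here are made up for this sketch): model an element of R(ω) as a ratio of polynomials in ω, ordered by its sign "at ω large", and check that the real sequence 1/2^n stays above the infinitesimal 1/ω forever, so it cannot converge to 0.

```python
from fractions import Fraction

# Toy model of R(ω): an element is a pair (num, den) of polynomial
# coefficient tuples (lowest degree first); order is determined by the
# sign of the leading coefficient, i.e. the sign for large ω.

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def leading_sign(p):
    for c in reversed(p):
        if c != 0:
            return 1 if c > 0 else -1
    return 0

def greater(x, y):
    """x > y in R(ω); assumes denominators have positive leading coefficient."""
    (p1, q1), (p2, q2) = x, y
    return leading_sign(poly_sub(poly_mul(p1, q2), poly_mul(p2, q1))) == 1

eps = ((Fraction(1),), (Fraction(0), Fraction(1)))  # 1/ω, an infinitesimal
for n in range(1, 50):
    half_n = ((Fraction(1, 2 ** n),), (Fraction(1),))  # the real number 1/2^n
    assert greater(half_n, eps)  # 1/2^n never drops below 1/ω
```

So every tail of the sequence stays at distance at least 1/ω from 0, which is the Stage 1 failure in miniature.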
===Stage 2: Uncountable cofinality breaks limits harder===
Stage 2 is really just a slight elaboration of stage 1. Once your field is large enough to have uncountable cofinality -- like, say, the hyperreals -- no sequence (with domain the natural numbers) will converge (unless it's eventually constant). If you want to take limits, you'll need transfinite sequences of uncountable length, or you simply will not get convergence.
Again, when you can rephrase things from sequences (with domain the natural numbers) to functions (with domain your field), things are fine. Because obviously your field's cofinality is equal to itself. But you can't always do that, or at least not so easily. Again: It would be nice if, for x<1, we had x^n approaching 0, and once we hit uncountable cofinality, that is simply not going to happen for any nonzero x.
(A note: In general in topology, not even transfinite sequences are good enough for general limits, and you need nets/filters. But for ordered fields, transfinite sequences (of length equal to the field's cofinality) are sufficient. Hence the focus on transfinite sequences rather than being ultrageneral and using nets.)
Note that of course the hyperreals are used for nonstandard analysis, but nonstandard analysis doesn't involve taking limits in the hyperreals -- that's the point; limits in the reals correspond to non-limit-based things in the hyperreals.
===Stage 3: The surreals break limits as hard as is possible===
So now we have the surreals, which take uncountable cofinality to the extreme. Our cofinality is no longer merely uncountable; it's not even an actual ordinal! The "cofinality" of the surreals is the "ordinal" represented by the class of all ordinals (or the "cardinal" of the class of all sets, if you prefer to think of cofinalities as cardinals). We have proper-class cofinality.
Limits of sequences are gone. Limits of ordinary transfinite sequences are gone. All that remains working are limits of sequences whose domain is the entire class of all ordinals. Or, again, other things with proper-class cofinality; 1/x still goes to 0 as x goes to infinity (again, letting x range over all surreals -- note that that's a very strong notion of "goes to infinity"!). You still have limits of surreal functions of a surreal variable. But as I keep pointing out, that's not always good enough.
I mean, really -- in terms of ordered fields, the real numbers are the best possible setting for limits, because of the existence of suprema. Every set that's bounded above has a least upper bound. By contrast, in the surreals, no set that's bounded above has a least upper bound! That's kind of their defining property; if you have a set S and an upper bound b then, oops, {S|b} sneaks right in between. Proper classes can have suprema, yes, but, as I keep pointing out, you don't always have a proper class to work with; oftentimes you just have a plain old countably infinite set. As such, in contrast to the reals, the surreal numbers are the worst possible setting for limits.
The result is that doing things with surreals beyond addition and multiplication typically requires basically reinventing those things. Now, of course, the surreal numbers have something that vaguely resembles a limit, namely {left stuff | right stuff} -- the "simplest in an interval" construction. I mean, if you want, say, √2, you can just put {x∈Q, x>0, x^2<2 | x∈Q, x>0, x^2>2}, and, hey, you've got √2! Looks almost like a limit, doesn't it? Or a Dedekind cut? Sure, there's a huge cloud of infinitesimals surrounding √2 that will thwart attempts at limits, but the simplest-in-an-interval construction cuts right through that and snaps to the simplest thing there, which is of course √2 itself, not √2+1/ω or something.
Added later: Similarly, if you want, say, ω^ω, you just take {ω,ω^2,ω^3,...|}, and you get ω^ω. Once again, it gets you what a limit "ought" to get you -- what it would get you in the ordinals -- even though an actual limit wouldn't work in this setting.
But the problem is, despite these suggestive examples showing that snapping-to-the-simplest looks like a limit in some cases, it's obviously the wrong thing in others; it's not some general drop-in substitute. For instance, in the real numbers you define exp(x) as the limit of the sequence 1, 1+x, 1+x+x^2/2, etc. In the surreals we already know that won't work, but if you make the novice mistake in fixing it of instead trying to define exp(x) as {1,1+x,1+x+x^2/2,...|}, you will get not exp(1)=e but rather exp(1)=3. Oops. We didn't want to snap to something quite that simple. And that's hard to prevent.
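The snap-to-3 failure is easy to reproduce for intervals with dyadic-rational data. Here's a sketch of Conway's simplicity rule restricted to dyadic rationals (the function name and the restriction to rational endpoints are mine, purely for illustration):

```python
from fractions import Fraction
import math

def simplest_in(lo, hi):
    """Simplest dyadic rational in the open interval (lo, hi); hi may be
    math.inf. "Simplest" = earliest surreal birthday: 0 first, then integers
    of small magnitude, then successive dyadic bisections. Assumes lo < hi."""
    if lo < 0 < hi:
        return Fraction(0)
    if hi <= 0:
        return -simplest_in(-hi, -lo)  # mirror the positive case
    n = math.floor(lo) + 1             # smallest integer strictly above lo
    if n < hi:
        return Fraction(n)             # an integer fits; the smallest is simplest
    # no integer inside: bisect dyadics within (n-1, n)
    low, high = Fraction(n - 1), Fraction(n)
    while True:
        mid = (low + high) / 2
        if mid <= lo:
            low = mid
        elif mid >= hi:
            high = mid
        else:
            return mid

# Partial sums of e: 1, 2, 5/2, 8/3, 65/24, ...  The simplest number
# exceeding all of them is just 3, since nothing excludes 3:
print(simplest_in(Fraction(65, 24), math.inf))      # -> 3
# An interval that *does* exclude the very simple numbers snaps finer,
# e.g. this window around √2:
print(simplest_in(Fraction(7, 5), Fraction(3, 2)))  # -> 23/16
```

Exactly as the argument above says: to avoid getting 3, the interval either must not contain 3 or must contain something even simpler.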
You can do it -- there is a theory of the surreal exponential -- but it requires care. And it requires basically reinventing whatever theory it is that you're trying to port over to the surreal numbers; it's not a nice straight port like so many other things in mathematics. It's been done for a number of things! But not, I think, for the things you need here.
Martin Kruskal tried to develop a theory of surreal integration back in the 70s; he ultimately failed, and I'm pretty sure nobody has succeeded since. And note that this was for surreal functions of a single surreal variable. For surreal utilities and real probabilities you'd need surreal functions on a measure space, which I imagine would be harder, basically for cofinality reasons. And for this thing, where I guess we'd have something like surreal probabilities... well, maybe the cofinality issue gets easier -- I don't want to say that it does -- but it raises so many others. Like, if you can do that, you should at least be able to do surreal functions of a single surreal variable, right? But at the moment, as I said, nobody knows how (I'm pretty sure).
In short, while you say that the surreals solve a lot more problems than people realize, my point of view is basically the opposite: From the point of view of applications, the surreal numbers are basically an attractive nuisance. People are drawn to them for obvious reasons -- surreals are cool! Surreals are fun! They include, informally speaking, all the infinities and infinitesimals! But they can be a huge pain to work with, and -- much more importantly -- whatever it is you need them to do, they probably don't do it. "Includes all the infinities and infinitesimals" is probably not actually on your list of requirements; while if you're trying to do any sort of decision theory, some sort of theory of integration is.
You have basically no idea how many times I've had to write the same "no, you really don't want to use surreal utilities" comment here on LW. In fact, years ago -- basically due to constant abuse of surreals (or cardinals, if people really didn't know what they were talking about) -- I wrote this article here on LW, and (while it's not like people are likely to happen across that anyway) I wish I'd included more of a warning against using the surreals.
Basically, I would say, go where the math tells you to; build your system to the requirements, and don't just go pulling something off the shelf unless it meets those requirements. And note that what you build might not be a system of numbers at all. I think people are often too quick to jump to the use of numbers in the first place. Real numbers get a lot of this, because people are familiar with them. I suspect that's the real historical reason why utility functions were initially defined as real-valued; we're lucky that they turned out to actually be appropriate!
(Added later: There is one other thing you can do in the surreals that kind of resembles a limit, and that is to take a limit of sign sequences. This at least doesn't have the cofinality problem; you can take a sign-sequence limit of a sequence. But this is not any sort of drop-in replacement for usual limits either, and my impression (not an expert here) is that it doesn't really work very well at all in the first place. My impression is that, while {left|right} can be a bit too oblivious to the details of the inputs (if you're not careful), limits of sign sequences are a bit too finicky. For instance, defining e to be the sign-sequence limit of the partial sums 1, 2, 5/2, 8/3, 65/24... will work, but defining exp(x) analogously won't, because what if x is (as a real number) the logarithm of a dyadic rational? Instead of getting exp(log(2))=2, you'll get exp(log(2))=2-1/ω. (I'm pretty sure that's right.) There goes multiplicativity! Or worse yet, exp(log(2)) won't "converge" at all. Again, I can't rule out that, like {left|right}, it can be made to work with some care, but it's definitely not a drop-in replacement, and my non-expert impression is that it's overall worse than {left|right}. In any case, once again, the better choice is almost certainly not to use surreals.)
I've already mentioned this in a separate comment, but surreals come with a lot of problems of their own (basically, limits don't work). I don't like to say this, but your comment gives off the same "oh well, we need infinitesimals and this is what I've heard of" impression as above. Pick systems of numbers based on what they do. Surreals probably don't do whatever's necessary here -- how are you going to do any sort of integration?
(Also, you mean a free ultrafilter, not a principal one.)
So, I haven't really read this in any detail, but -- I am very, very wary of the use of hyperreal and/or surreal numbers here. While, as I said, I haven't taken a thorough look at this, to me these look like "well, we need infinitesimals and this is what I've heard of" rather than there being any real reason to pick one of these two. I seriously doubt that either is a good choice.
Hyperreals require picking a free ultrafilter; they're not even uniquely defined. Surreal numbers (pretty much) completely break limits. (Hyperreals kind of break limits too, due to being of uncountable cofinality, but not nearly as extensively as surreal numbers, which are of proper-class cofinality.) If you're picking a number system, you need to consider what you're actually going to do with it. If you're going to do any sort of limits or integration with it -- and what else is probability for, if not integration? -- you probably don't want surreal numbers, because limits are not going to work there. (Some things that are normally done with limits can be recovered for surreals by other means, e.g. there's a surreal exponential, but you don't define it as a limit of partial sums, because that doesn't work. So, maybe you can develop the necessary theory based on something other than limits, but I'm pretty sure it's not something that already exists which you can just pick up and use.)
Again: Pick number systems for what they do. Hyperreals have a specific point, which is the transfer principle. If you're not going to be using the transfer principle, you probably don't want hyperreals. And as I already said, if you're going to be taking any sort of limit, you probably don't want surreals.
Consider asking whether you need a system of numbers at all. You mention sequences of real numbers; perhaps that's simply what you want? Sequences of real numbers, not modulo a free ultrafilter? You don't need to use an existing system of numbers; you can purpose-build one. And you don't need to use a system of numbers at all; you can just use appropriate objects, whatever they may be. (Oftentimes it makes more sense to represent "orders of infinity" by functions of different growth rates -- or, I guess here, sequences of different growth rates.)
(Honestly, if infinitesimal probabilities or utilities are coming up, I'd consider that a flag that something has likely gone wrong -- we have good reasons to use real numbers for these, which I'm sure you're already familiar with (but here's a link for everyone else :P ) -- but I'll admit that I haven't read this thing in any detail, and you are going beyond that sort of classical context, so, hey, who knows.)
Also, there may be a common sentiment that altruism is only ever intended as signaling (of virtue, of wealth, of whatever), and is thus a status-enhancing move. In my experience, people from such societies will often not comprehend (or be very skeptical of, even when they do comprehend) the idea of acting altruistically for purely… altruistic reasons.
This is a total armchair reply, but -- I'm wondering if that ascription of ulterior intent is actually necessary. Like, rather than "this act of altruism is actually just intended as a status move and so should be punished", perhaps just "this act of altruism will increase their status and so should be punished".
Thanks, that's a good way of putting it.
So, basically, historical explanations. These are frequently a good idea for exactly the reason you say -- a lot of things are just a lot more confusing without their historical context; they developed as the answer to a series of questions and answers, and things make more sense once you know that series.
However it's worth noting that there are times where you do want to skip over a bunch of the history, because the modern way of thinking about things is so much cleaner, and you can develop a different, better series of questions and answers than the one that actually happened historically.
Indeed, each has a mean of 1.5, so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally, we have this iff X and Y are uncorrelated, because, well, that's just how "uncorrelated" in the technical sense is defined. I mean, if you really want to get into fundamentals, E[XY]-E[X]E[Y] is not really the most fundamental definition of covariance, I'd say, but it's easily seen to be equivalent. And then of course either way you have to show that independent implies uncorrelated. (And then I guess you have to do the analogues for more than two, but...)
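For concreteness (the two variables here are my guess at the setup being discussed: independent and uniform on {1, 2}, so each has mean 1.5):

```python
from fractions import Fraction
from itertools import product

vals = [1, 2]                        # each variable uniform on {1, 2}
EX = Fraction(sum(vals), len(vals))  # E[X] = E[Y] = 3/2
# E[XY], enumerating the joint distribution of the independent pair:
EXY = Fraction(sum(x * y for x, y in product(vals, vals)), len(vals) ** 2)

assert EX == Fraction(3, 2)
assert EXY == EX * EX == Fraction(9, 4)  # E[XY] = E[X]E[Y] = 2.25
```

So the covariance E[XY]-E[X]E[Y] is 0, i.e. the two are uncorrelated, as independence guarantees.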
So, um, I was writing a post and I left the tab open for a few hours and the post seems to have just disappeared? While it's not impossible I accidentally clicked refresh or something, as best I can tell it was just gone when I got back, with the tab not having been touched in over an hour.
I'm pretty sure that's not the particular one, but thank you all the same!