I'm pretty sure this point has been made here before, but, hey, it's worth repeating, no? :)
I think you're going to need to be more explicit. My best understanding of what you're saying is this: Each participant has two options - to attempt to actually understand the other, or to attempt to vilify them for disagreeing - and we can lay these out in a payoff matrix and turn this into a game.
I don't see offhand why this would be a Prisoner's Dilemma, though I guess that seems plausible if you actually do this. It certainly doesn't seem like a Stag Hunt or Chicken, which I guess are the other classic cooperate-or-don't games.
My biggest problem here is the question of how you're constructing the payoff matrices. The reward for defecting is greater ingroup acceptance, at the cost of understanding; the reward for both cooperating is increased understanding, but likely at the cost of ingroup acceptance. And the penalty for cooperating and being defected on seems to be in the form of decreased outgroup acceptance. I'm not sure how you make all these commensurable to come up with a single payoff matrix. I guess you have to somehow, but that the result would be a Prisoner's Dilemma isn't obvious. Indeed, it's actually not obvious to me here that cooperating and being defected on is worse than what you get if both players defect, depending on one's priorities, which would definitely not make it a Prisoner's Dilemma. I think that part of what's going on here is that different people's weightings of these things may substantially affect the resulting game.
This makes sense, but what you call "dialectical moral argumentation" seems to me like it can just be considered as what you call "logical moral argumentation" but with the "ought" premises left implicit, you know? From this point of view, you could say that they're two different ways of framing the same argument. Basically, dialectical moral argumentation is the hypothetical syllogism to logical moral argumentation's repeated modus ponens. Suppose you want to prove C, where C is "You should take action X", starting from A, where A is "You want to accomplish Y". Logical moral argumentation makes the premise A explicit, and so, supplied with the facts A => B and B => C, can first derive B and then C (obviously that's not the only way to do it, but let's just go with this). Dialectical moral argumentation, by contrast, doesn't actually have the premise A to hand, and so can only apply hypothetical syllogism to get A => C; it then has to hand this to the other party, who does have A and can derive C with it.
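If it helps, here's a toy sketch of the distinction (my own framing, not anything from the original post) - "logical" argumentation holds the ought-premise A and chains modus ponens; "dialectical" argumentation only produces the conditional A => C and hands it over:

```python
def modus_ponens(premises, rules):
    """Close a set of atomic premises under rules of the form (antecedent, consequent)."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# Shared facts: A => B and B => C.
rules = [("A", "B"), ("B", "C")]

# Logical moral argumentation: the arguer assumes A ("you want to accomplish Y")
# and derives C ("you should take action X") directly.
assert "C" in modus_ponens({"A"}, rules)

# Dialectical moral argumentation: the arguer lacks A, so all they can produce
# is the composed conditional A => C (hypothetical syllogism)...
composed_rule = ("A", "C")
# ...which the *other* party, who does hold A, then discharges themselves.
assert "C" in modus_ponens({"A"}, [composed_rule])
```

Same conclusion either way; the only difference is who supplies premise A.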
So, like, this is a good way of making sense of Sam Harris, as you say, but I'm not sure this new point of view actually adds anything new. It sounds like a fairly trivial rephrasing, and to me at least seems like a less clear one, hiding some of the premises.
(Btw thank you for the comment below with the interview quotes; that really seems to demonstrate that yes, your explanation really is what Harris means, not the ridiculous thing it sounds like he's saying!)
What does this have to do with the Prisoners' Dilemma?
Oh, I see! So the 2017 one is even more not the first one, then. :)
I feel like I should point out that last year was not the first East Coast Rationalist Megameetup in NYC; that was in 2014. Last year may however be the first one that gets repeated...
[This suggests a Magic format where you have some 'base decks' on offer, maybe shuffled, maybe not, and your 'actual deck' is your starting hand, that you get to choose entirely. If the base decks only contain the equivalent of forests and Grizzly Bears, then the question is something like "can you fit a game-ender into 7 cards, with enough disruption and counter-disruption that yours goes off first?"]
Having restricted choices for groups of cards, and then only picking a few of them, seems to be moving almost somewhat Codexward... (although I gather from the other comments that wasn't really your intention).
(Hey, a note, you should probably learn to use the blockquote feature. I dunno where it is in the rich text editor if you're using that, but if you're using the Markdown editor you just precede the paragraph you're quoting with a ">". It will make your posts substantially more readable.)
Are you sure this chain of reasoning is correct?
Yes.
Consider 1/2x. For any finite number of terms it will be greater than ε, but as x approaches ω, it should approach 1/2ω.
What "terms"? What are you talking about? This isn't a sequence or a sum; there are no "terms" here. Yes, even in the surreals, as x goes to ω, 1/(2x) will approach 1/(2ω), as you say; as I mentioned above, limits of functions of a surreal variable will indeed still work. But that has no relevance to the case under discussion.
(And, while it's not necessary to see what's going on here, it may be helpful to remember that if we interpret this as occurring in the surreals, then in the case of 1/(2x) as x→ω, your domain has proper-class cofinality, while in the case of this infinite sum, the domain has cofinality ω. So the former can work, and the latter cannot. Again, one doesn't need this to see that the partial sum can't get within 1/ω of e even when the cofinality is countable - but it may be worth remembering.)
Why can't the partial sum get within 1/ω of e?
Because the partial sum is always a rational number. A rational number - more generally, a real number - cannot be infinitesimally close to e without being e. (By contrast, for surreal x, 1/(2x) certainly does not need to be a real number, and so can get infinitesimally close to 1/(2ω) without being equal to it.)
You're right that it won't be a nice neat quotient group. But here's an example. ℵ_0 − ℵ_0 can equal any integer, where ℵ_0 is a cardinal, or even ±ℵ_0, but in surreal numbers it works as follows. Suppose X and Y are countable infinities. Then X − Y has a unique value that we can sometimes identify. For example, if X represents the length of a sequence and Y is all the elements in the sequence except one, then X − Y = 1. We can perform the calculation in the surreals, or we can perform it in the cardinals and receive a broad range of possible answers. But for every possible answer in the cardinals, we can find pairs of surreal numbers that would provide that answer.
What??
OK. Look. I could spend my time attempting to pick this apart. But, let me be blunt, the point I am trying to get across here is that you are talking nonsense. This is babble. You are way out of your depth, dude. You don't know what you are talking about. You need to go back and relearn this from the beginning. I don't even know what mistake you're making, because it's not a common one I recognize.
Just in the hopes it might be somewhat helpful, I will quickly go over the things I can maybe address quickly:
ℵ_0 − ℵ_0 can equal any integer, where ℵ_0 is a cardinal, or even ±ℵ_0, but in surreal numbers it works as follows.
I have no idea what this sentence is talking about.
Suppose X and Y are countable infinities.
What's an "infinity"? An ordinal? A cardinal? (There's only one countably infinite cardinal...) A surreal or something else entirely? You said "countable", so it has to be something to which the notion of countability applies!
This mistake, at least, I think I can identify. Maybe you should, in fact, look over that "quick guide to the infinite" I wrote, because this is myth #0 I discussed there. There's no such thing as a unified notion of "infinities". There are different systems of numbers, and some of them contain numbers/objects that are infinite (i.e., larger in magnitude than any whole number); there is no greater unified system they are all a part of.
Then X − Y has a unique value that we can sometimes identify.
What is X − Y? I don't even know what system of numbers you're using, so I don't know what this means.
If X and Y are surreals, then, sure, there's quite definitely a unique surreal X − Y. This is true more generally if you're thinking of X and Y as living in some sort of ordered field or ring.
If X and Y are cardinals, then X − Y may not be well-defined. Trivially so if Y > X (no possible values), but let's ignore that case. Even ignoring that, if X and Y are infinite, X − Y may fail to be well-defined due to having multiple possible values.
If X and Y are ordinals, we have to ask what sort of addition we're using. If we're using natural addition, then X − Y certainly has a unique value in the surreals, but it may or may not be an ordinal, so it's not necessarily well-defined within the ordinals.
If we're using ordinary addition, we have to distinguish between X − Y and −Y + X. (The latter just being a way of denoting "subtracting on the left"; it should not be interpreted as actually negating Y and adding to X.) −Y + X will have a unique value so long as Y ≤ X, but X − Y is a different story; even restricting to Y ≤ X, if X is infinite, then X − Y may have multiple possible values or none.
For example, if X represents the length of a sequence and Y is all the elements in the sequence except one, then X − Y = 1.
Yeah, I'm not going to try to pick this apart; in short, though, this is nonsense.
I'm starting to think, though, that maybe you meant that X and Y were infinite sets, rather than some sort of numbers? With X − Y being the set difference? But that is not what you said. Simply put, you seem very confused about all this.
We can perform the calculation in the surreals, or we can perform it in the cardinals and receive a broad range of possible answers.

Are X and Y surreals or are they cardinals? Surreals and cardinals don't mix, dude! It can't be both, not unless they're just whole numbers! You are performing the calculation in whatever number system these things live in.

You just said above you get a well-defined answer, and, moreover, that it's 1! Now you're telling me that you can get a broad range of possible answers??

If X is representing the length of a sequence, it should probably be an ordinal. As for Y... yeah, OK, not going to try to make sense of the thing I already said I wouldn't attempt to pick through.

And if X and Y are sets rather than numbers... oh, to hell with it, I'm just going to move on.
But for every possible answer in the cardinals, we can find pairs of surreal numbers that would provide that answer.
There is, I think, a correct idea here that is rescuable. It also seems pretty clear you don't know enough to perform that rescue yourself and rephrase this as something that makes sense. (A hint, though: The fixed version probably should not involve surreals.)
(Do surreal numbers even have cardinalities, in a meaningful sense? Yes, obviously, if you pick a particular way of representing surreals as sets, e.g. by representing them as sign sequences, the resulting representations will have cardinalities; obviously, that's not what I'm talking about. Although, who knows, maybe that's a workable notion - define the cardinality of a surreal to be the cardinality of its birthday. No idea if that's actually relevant to anything, though.)
Even charitably interpreted, none of this matches up with your comments above about equivalence classes. It relates, sure, but it doesn't match. What you said above was that you could solve more equations by passing to equivalence classes. What you're saying now seems to be... not that.
Long story short: I really, really, do not think you have much idea what you are talking about. You really need to relearn this from scratch, and not starting with surreals. I definitely do not think you are prepared to go instructing others on their uses; at this point I'm not convinced you could clearly articulate what ordinals and cardinals are for, you've gotten everything so mixed up in your comment above. I wouldn't recommend trying to expand this into a post.
I think I should probably stop arguing here. If you reply to this with more babble I'm not going to waste my time replying to it further.
(Note: I've edited some things in to be clearer on some points.)
Do you know where I could find proofs of the following?
"Normally we define exp(x) to be the limit of 1, 1+x, 1+x+x^2/2, it'll never get within 1/ω of e."
"If you make the novice mistake in fixing it of instead trying to define exp(x) as {1, 1+x, 1+x+x^2/2, ...|}, you will get not exp(1)=e but rather exp(1)=3."
These are both pretty straightforward. For the first, say we're working in a non-Archimedean ordered field which contains the reals, and we take the partial sums of the series 1 + 1 + 1/2 + 1/6 + ...; these are rational numbers, and in particular they're real numbers. So if we have one of these partial sums, call it s, then e − s is a positive real number. So if you have some infinitesimal ε, e − s is larger than ε; that's what it means for ε to be an infinitesimal. The sequence will not get within ε of e.
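To make the first argument concrete, here's a quick sanity check (a sketch; the float comparison is only there to eyeball the real-valued gap) that the partial sums are exact rationals sitting strictly below e:

```python
from fractions import Fraction
import math

def partial_sums(n_terms):
    # Partial sums 1, 2, 5/2, 8/3, ... of the series 1 + 1 + 1/2 + 1/6 + ...
    s, term = Fraction(0), Fraction(1)
    for k in range(n_terms):
        s += term
        term /= k + 1
        yield s

for s in partial_sums(15):
    assert isinstance(s, Fraction)   # each partial sum is exactly rational...
    assert float(s) < math.e         # ...and sits strictly below e,
    # so e - s is a positive real number, hence larger than every infinitesimal ε.
```

Since each gap e − s is a positive real, no infinitesimal ε can ever contain the tail: the sequence simply does not converge to e in the extended field.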
For the second, note that 3 = {2|}, i.e., it's the simplest number larger than 2. So if you have {1, 2, 5/2, 8/3, ...|}, well, the simplest number larger than all of those is still 3, because you did nothing to exclude 3. 3 is a very simple number! By definition, if you want to not get 3, either your interval has to not contain 3, or it has to contain something even simpler than 3 (i.e., 2, 1, or 0). (This is easy to see if you use the sign-sequence representation - remember that x is simpler than y iff the sign sequence of x is a proper prefix of the sign sequence of y.) The interval of surreals greater than those partial sums does contain 3, and does not contain 2, 1, or 0. So you get 3. That's all there is to it.
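For concreteness, here's a toy sketch of the simplicity rule for the easy case of rational bounds (the function name and the restriction to integer/dyadic answers are my own simplification; this is nowhere near a full surreal implementation, and only the largest left element matters in this toy):

```python
from fractions import Fraction
from math import floor, ceil

def simplest_between(lo, hi):
    # Simplest (earliest-born) number strictly between lo and hi, where lo/hi
    # are rationals or None (meaning no bound on that side), and lo < hi.
    if lo is None and hi is None:
        return Fraction(0)
    if hi is None:                      # simplest number greater than lo
        return Fraction(0) if lo < 0 else Fraction(floor(lo) + 1)
    if lo is None:                      # simplest number less than hi
        return Fraction(0) if hi > 0 else Fraction(ceil(hi) - 1)
    if lo < 0 < hi:
        return Fraction(0)
    if floor(lo) + 1 < hi:              # some integer fits; take the one nearest 0
        return Fraction(floor(lo) + 1) if lo >= 0 else Fraction(ceil(hi) - 1)
    a, b = Fraction(floor(lo)), Fraction(floor(lo) + 1)
    while True:                         # bisect: first dyadic midpoint inside wins
        m = (a + b) / 2
        if m <= lo:
            a = m
        elif m >= hi:
            b = m
        else:
            return m

# The left set {1, 2, 5/2, 8/3} (partial sums of the series for e), with empty
# right set, snaps to 3: nothing excludes 3, and 3 is very simple.
assert simplest_between(Fraction(8, 3), None) == 3
# A two-sided interval that excludes all integers picks out a dyadic instead:
assert simplest_between(Fraction(1, 3), Fraction(2, 3)) == Fraction(1, 2)
```

This is exactly the "did nothing to exclude 3" failure mode: unless the right set rules 3 out, the construction happily returns it.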
As for the rest of the comment... let me address this out of order, if you don't mind:
In some ways I view them as the ultimate reality
See, this is exactly the sort of thinking I'm trying to head off. How is that relevant to anything? You need to use something that actually fulfills the requirements of the problem.
On top of that, this seems... well, I don't know if you actually are making this error, but it seems rather reminiscent of the high school student's error of imagining that there's a single notion of "number" - where every notion of "number" they know fits in C, so "number" and "complex number" become identified. And this is false not just because you can go beyond C, but because there are systems of numbers that can't be fit together with C at all. (How does Q_p fit into this? Answer: It doesn't!)
(Actually, by that standard, shouldn't the surcomplexes be the "ultimate reality"? :) )
(...I actually have some thoughts on that sort of thing, but since I'm trying to point out right now that that sort of thing is not what you should be thinking about when determining what sort of space to use, I won't go into them. "Ultimate reality" is, in addition to not being correct, probably not on the list of requirements!)
Also, y'know, you don't necessarily need something that could be considered "numbers" at all, as I keep emphasizing.
Anyway, as to the mathematical part of what you were saying...
I still need to read more about surreal numbers, but the thing I like about them is that you can always reduce the resolution if you can't solve the equation in the surreals. In some ways I view them as the ultimate reality, and if we don't know the answer to something, or only know the answer to a certain fineness, I think it's better to be honest about it, rather than just fall back to an equivalence class over the surreals where we do know the answer. Actually, maybe that wasn't quite clear: I'm fine with falling back, but only after it's clear that we can't solve it to the finest degree.
I have no idea what you're talking about here. Like, what? First off, what sort of equations are you talking about? Algebraic ones? Over the surreals, I guess? The surreals are a real closed field, the surcomplexes are algebraically closed. That will suffice for algebraic equations. Maybe you mean some more general sort, I don't know.
But most of this is just baffling. I have no idea what you're talking about when you speak of passing to a quotient of the surreals to solve any equation. Where is that coming from? And like - what sort of quotient are we talking about here? "Quotient of the surreals" is already suspect because, well, it can't be a ring-theoretic quotient, as fields don't have nontrivial ideals, at all. So I guess you mean purely an additive quotient? But that's not going to mix very well with solving any equations that involve more than addition, now, is it? Meanwhile, what the surreals are known for is that any ordered field embeds in them - not something about quotients!
Anyway, if you want to solve algebraic equations, you want an algebraically closed field. If you want to solve algebraic equations to the greatest extent possible while still keeping things ordered, you want a real closed field. The surreals are a real closed field, but you certainly don't need them just for solving equations. If you want to be able to do limits and calculus and such, you want something with a nice topology (just how nice probably depends on just what you want), but note that you don't necessarily want a field at all! None of these things favor the surreals, and the fact that we almost certainly need integration here is a huge strike against them.
Btw, you know what's great for solving equations in, even if they aren't just algebraic equations? The real numbers. Because they're connected, so you have the intermediate value theorem. And they're the only ordered field that's connected. Again, you might be able to emulate that sort of thing to some extent in the surreals for sufficiently nice functions (mere continuity won't be enough) (certainly you can for polynomials - like I said, they're real closed - but I'm guessing you can probably get more than that); I'm not super-familiar with just what's possible there, but it'll take more work. In the reals it's just: make some comparisons, they come out opposite one another, R is connected, boom, there's a solution somewhere in between.
But mostly I'm just wondering where like any of this is coming from. It neither seems to make much sense nor to resemble anything I know.
(Edit: And, once again, it's not at all clear that being able to solve equations is at all relevant! That just doesn't seem to be something that's required. Whereas integration is.)
Later edits: various edits for clarity; also the "transfinite sequences suffice" thing is easy to verify, it doesn't require some exotic theorem
Yet later edit: Added another example
Two weeks later edit: Added the part about sign-sequence limits
So, to a large extent this is a problem with non-Archimedean ordered fields in general; the surreals just exacerbate it. So let's go through this in stages.
===Stage 1: Infinitesimals break limits===
Let's start with an example. In the real numbers, the limit as n goes to infinity of 1/n is 0. (Here n is a natural number, to be clear.)
If we introduce infinitesimals - even just as minimally as, say, passing to R(ω) - that's not so, because if you have some infinitesimal ε, the sequence will not get within ε of 0.
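To make this concrete, here's a tiny sketch (my own toy model, capturing only the a + b/ω fragment of R(ω), ordered lexicographically - not all of R(ω)):

```python
from fractions import Fraction

class RwElem:
    """Toy fragment of R(ω): numbers of the form a + b/ω with a, b rational."""
    def __init__(self, a, b=0):
        self.a = Fraction(a)   # real part
        self.b = Fraction(b)   # coefficient of the infinitesimal 1/ω

    def __sub__(self, other):
        return RwElem(self.a - other.a, self.b - other.b)

    def is_positive(self):
        # Lexicographic order: the real part dominates; 1/ω sits below every positive real.
        return self.a > 0 or (self.a == 0 and self.b > 0)

    def __gt__(self, other):
        return (self - other).is_positive()

epsilon = RwElem(0, 1)   # ε = 1/ω
for n in range(1, 10_000):
    # 1/n - ε has positive real part 1/n, so 1/n > ε for every finite n:
    # the sequence 1/n never gets within ε of 0 once the infinitesimal exists.
    assert RwElem(Fraction(1, n)) > epsilon
```

No finite tail of the sequence ever enters the ε-neighborhood of 0, which is exactly the failure of convergence described above.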
Of course, that's not necessarily a problem; I mean, that's just restating that our ordered field is no longer Archimedean, right? Of course 1/n is no longer going to go to 0, but is 1/n really the right thing to be looking at? How about, say, 1/x, as x goes to infinity, where x takes values in this field of ours? That still goes to 0. So it may seem like things are fine, like we just need to get these sequences out of our head and make sure we're always taking limits of functions, not sequences.
But that's not always so easy to do. What if we look at x^n, where x<1? If x isn't infinitesimal, that's no longer going to go to 0. It may still go to 0 in some cases - like, in R(ω), certainly 1/ω^n will still go to 0 - but 1/2^n sure won't. And what do we replace that with? 1/2^x? How do we define that? In certain settings we may be able to - hell, there's a theory of the surreal exponential, so in the surreals we can - but not in general. And doing that requires first inventing the surreal exponential, which - well, I'll talk more about that later, but, hey, let's talk about that a bit right now. How are we going to define the exponential? Normally we define exp(x) to be the limit of 1, 1+x, 1+x+x^2/2... but that's not going to work anymore. If we try to take exp(1), expecting an answer of e, what we get is that the sequence doesn't converge, due to the cloud of infinitesimals surrounding e; it'll never get within 1/ω of e. For some values maybe it'll converge, but not enough to do what we want.
Now the exponential is nice, so maybe we can find another definition (and, as mentioned, in the case of the surreals indeed we can, while obviously in the case of the hyperreals we can do it componentwise). But other cases can be much worse. Introducing infinitesimals doesn't break limits entirely - but it likely breaks the limits that you're counting on, and that can be fatal on its own.
===Stage 2: Uncountable cofinality breaks limits harder===
Stage 2 is really just a slight elaboration of stage 1. Once your field is large enough to have uncountable cofinality - like, say, the hyperreals - no sequence (with domain the whole numbers) will converge (unless it's eventually constant). If you want to take limits, you'll need transfinite sequences of uncountable length, or you simply will not get convergence.
Again, when you can rephrase things from sequences (with domain the natural numbers) to functions (with domain your field), things are fine. Because obviously your field's cofinality is equal to itself. But you can't always do that, or at least not so easily. Again: It would be nice if, for x<1, we had x^n approaching 0, and once we hit uncountable cofinality, that is simply not going to happen for any nonzero x.
(A note: In general in topology, not even transfinite sequences are good enough for general limits, and you need nets/filters. But for ordered fields, transfinite sequences (of length equal to the field's cofinality) are sufficient. Hence the focus on transfinite sequences rather than being ultrageneral and using nets.)
Note that of course the hyperreals are used for nonstandard analysis, but nonstandard analysis doesn't involve taking limits in the hyperreals - that's the point; limits in the reals correspond to non-limit-based things in the hyperreals.
===Stage 3: The surreals break limits as hard as is possible===
So now we have the surreals, which take uncountable cofinality to the extreme. Our cofinality is no longer merely uncountable; it's not even an actual ordinal! The "cofinality" of the surreals is the "ordinal" represented by the class of all ordinals (or the "cardinal" of the class of all sets, if you prefer to think of cofinalities as cardinals). We have proper-class cofinality.
Limits of sequences are gone. Limits of ordinary transfinite sequences are gone. All that remains working are limits of sequences whose domain consists of the entire class of all ordinals. Or, again, other things with proper-class cofinality; 1/x still goes to 0 as x goes to infinity (again, letting x range over all surreals - note that that's a very strong notion of "goes to infinity"!) You still have limits of surreal functions of a surreal variable. But as I keep pointing out, that's not always good enough.
I mean, really - in terms of ordered fields, the real numbers are the best possible setting for limits, because of the existence of suprema. Every set that's bounded above has a least upper bound. By contrast, in the surreals, no set that's bounded above has a least upper bound! That's kind of their defining property; if you have a set S and an upper bound b then, oops, {S|b} sneaks right in between. Proper classes can have suprema, yes, but, as I keep pointing out, you don't always have a proper class to work with; oftentimes you just have a plain old countably infinite set. As such, in contrast to the reals, the surreal numbers are the worst possible setting for limits.
The result is that doing things with surreals beyond addition and multiplication typically requires basically reinventing those things. Now, of course, the surreal numbers have something that vaguely resembles limits, namely, {left stuff | right stuff} - the "simplest in an interval" construction. I mean, if you want, say, √2, you can just put {x∈Q, x>0, x^2<2 | x∈Q, x>0, x^2>2}, and, hey, you've got √2! Looks almost like a limit, doesn't it? Or a Dedekind cut? Sure, there's a huge cloud of infinitesimals surrounding √2 that will thwart attempts at limits, but the simplest-in-an-interval construction cuts right through that and snaps to the simplest thing there, which is of course √2 itself, not √2+1/ω or something.
Added later: Similarly, if you want, say, ω^ω, you just take {ω, ω^2, ω^3, ...|}, and you get ω^ω. Once again, it gets you what a limit "ought" to get you - what it would get you in the ordinals - even though an actual limit wouldn't work in this setting.
But the problem is, despite these suggestive examples showing that snapping-to-the-simplest looks like a limit in some cases, it's obviously the wrong thing in others; it's not some general drop-in substitute. For instance, in the real numbers you define exp(x) as the limit of the sequence 1, 1+x, 1+x+x^2/2, etc. In the surreals we already know that won't work, but if you make the novice mistake in fixing it of instead trying to define exp(x) as {1, 1+x, 1+x+x^2/2, ...|}, you will get not exp(1)=e but rather exp(1)=3. Oops. We didn't want to snap to something quite that simple. And that's hard to prevent.
You can do it - there is a theory of the surreal exponential - but it requires care. And it requires basically reinventing whatever theory it is that you're trying to port over to the surreal numbers; it's not a nice straight port like so many other things in mathematics. It's been done for a number of things! But not, I think, for the things you need here.
Martin Kruskal tried to develop a theory of surreal integration back in the 70s; he ultimately failed, and I'm pretty sure nobody has succeeded since. And note that this was for surreal functions of a single surreal variable. For surreal utilities and real probabilities you'd need surreal functions on a measure space, which I imagine would be harder, basically for cofinality reasons. And for this thing, where I guess we'd have something like surreal probabilities... well, I guess the cofinality issue gets easier - or maybe gets easier, I don't want to say that it does - but it raises so many others. Like, if you can do that, you should at least be able to do surreal functions of a single surreal variable, right? But at the moment, as I said, nobody knows how (I'm pretty sure).
In short, while you say that the surreals solve a lot more problems than people realize, my point of view is basically the opposite: From the point of view of applications, the surreal numbers are basically an attractive nuisance. People are drawn to them for obvious reasons - surreals are cool! Surreals are fun! They include, informally speaking, all the infinities and infinitesimals! But they can be a huge pain to work with, and - much more importantly - whatever it is you need them to do, they probably don't do it. "Includes all the infinities and infinitesimals" is probably not actually on your list of requirements; while if you're trying to do any sort of decision theory, some sort of theory of integration is.
You have basically no idea how many times I've had to write the same "no, you really don't want to use surreal utilities" comment here on LW. In fact, years ago - basically due to constant abuse of surreals (or cardinals, if people really didn't know what they were talking about) - I wrote this article here on LW, and (while it's not like people are likely to happen across that anyway) I wish I'd included more of a warning against using the surreals.
Basically, I would say, go where the math tells you to; build your system to the requirements, don't just go pulling something off the shelf unless it meets those requirements. And note that what you build might not be a system of numbers at all. I think people are often too quick to jump to the use of numbers in the first place. Real numbers get a lot of this, because people are familiar with them. I suspect that's the real historical reason why utility functions were initially defined as real-valued; we're lucky that they turned out to actually be appropriate!
(Added later: There is one other thing you can do in the surreals that kind of resembles a limit, and that is to take a limit of sign sequences. This at least doesn't have the cofinality problem; you can take a sign-sequence limit of a sequence. But this is not any sort of drop-in replacement for usual limits either, and my impression (not an expert here) is that it doesn't really work very well at all in the first place. My impression is that, while {left|right} can be a bit too oblivious to the details of the inputs (if you're not careful), limits of sign sequences are a bit too finicky. For instance, defining e to be the sign-sequence limit of the partial sums 1, 2, 5/2, 8/3, 65/24... will work, but defining exp(x) analogously won't, because what if x is (as a real number) the logarithm of a dyadic rational? Instead of getting exp(log(2))=2, you'll get exp(log(2))=2−1/ω. (I'm pretty sure that's right.) There goes multiplicativity! Worse yet, exp(log(2)) won't "converge" at all. Again, I can't rule out that, like {left|right}, it can be made to work with some care, but it's definitely not a drop-in replacement, and my non-expert impression is that it's overall worse than {left|right}. In any case, once again, the better choice is almost certainly not to use surreals.)
I've already mentioned this in a separate comment, but surreals come with a lot of problems of their own (basically, limits don't work). I don't like to say this, but your comment gives off the same "oh well, we need infinitesimals and this is what I've heard of" impression as above. Pick systems of numbers based on what they do. Surreals probably don't do whatever's necessary here - how are you going to do any sort of integration?
(Also, you mean a free ultrafilter, not a principal one.)
So, I haven't really read this in any detail, but - I am very, very wary of the use of hyperreal and/or surreal numbers here. While as I said I haven't taken a thorough look at this, to me these look like "well, we need infinitesimals and this is what I've heard of" rather than there being any real reason to pick one of these two. I seriously doubt that either is a good choice.
Hyperreals require picking a free ultrafilter; they're not even uniquely defined. Surreal numbers (pretty much) completely break limits. (Hyperreals kind of break limits too, due to being of uncountable cofinality, but not nearly as extensively as surreal numbers do, which are of proper-class cofinality.) If you're picking a number system, you need to consider what you're actually going to do with it. If you're going to do any sort of limits or integration with it - and what else is probability for, if not integration? - you probably don't want surreal numbers, because limits are not going to work there. (Some things that are normally done with limits can be recovered for surreals by other means, e.g. there's a surreal exponential, but you don't define it as a limit of partial sums, because that doesn't work. So, maybe you can develop the necessary theory based on something other than limits, but I'm pretty sure it's not something that already exists which you can just pick up and use.)
Again: Pick number systems for what they do. Hyperreals have a specific point, which is the transfer principle. If you're not going to be using the transfer principle, you probably don't want hyperreals. And as I already said, if you're going to be taking any sort of limit, you probably don't want surreals.
Consider asking whether you need a system of numbers at all. You mention sequences of real numbers; perhaps that's simply what you want? Sequences of real numbers, not modulo a free ultrafilter? You don't need to use an existing system of numbers; you can purpose-build one. And you don't need to use a system of numbers at all; you can just use appropriate objects, whatever they may be. (Oftentimes it makes more sense to represent "orders of infinity" by functions of different growth rates - or, I guess here, sequences of different growth rates.)
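As a sketch of that last suggestion (my own illustration, with a finite test horizon standing in for "for all sufficiently large n"), you can compare "orders of infinity" directly as sequences ordered by eventual dominance, with no ambient number system at all:

```python
# Represent an "order of infinity" as a function of n, and compare two of them
# by eventual dominance rather than by embedding them in a number system.

def eventually_dominates(f, g, burn_in=100, horizon=1000):
    """Check f(n) > g(n) for all tested n past burn_in (a finite stand-in
    for 'for all sufficiently large n')."""
    return all(f(n) > g(n) for n in range(burn_in, horizon))

assert eventually_dominates(lambda n: n * n, lambda n: 10 * n)    # n^2 beats 10n
assert eventually_dominates(lambda n: 2 ** n, lambda n: n ** 3)   # 2^n beats n^3
assert not eventually_dominates(lambda n: n, lambda n: n + 1)     # n never beats n+1
```

The point is that the comparison is defined by what the sequences do, not by coercing them into some off-the-shelf field.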
(Honestly, if infinitesimal probabilities or utilities are coming up, I'd consider that a flag that something has likely gone wrong -- we have good reasons to use real numbers for these, which I'm sure you're already familiar with (but here's a link for everyone else :P ) -- but I'll admit that I haven't read this thing in any detail, and you are going beyond that sort of classical context, so, hey, who knows.)
Also, there may be a common sentiment that altruism is only ever intended as signaling (of virtue, of wealth, of whatever), and is thus a status-enhancing move. In my experience, people from such societies will often not comprehend (or be very skeptical of, even when they do comprehend) the idea of acting altruistically for purely… altruistic reasons.
This is a total armchair reply, but -- I'm wondering if that ascription of ulterior intent is actually necessary. Like, rather than "this act of altruism is actually just intended as a status move and so should be punished", perhaps just, "this act of altruism will increase their status and so should be punished".
Thanks, that's a good way of putting it.
So basically, historical explanations. These are frequently a good idea for exactly the reason you say -- a lot of things are just a lot more confusing without their historical context; they developed as the answer to a series of questions and answers, and things make more sense once you know that series.
However, it's worth noting that there are times where you do want to skip over a bunch of the history, because the modern way of thinking about things is so much cleaner, and you can develop a different, better series of questions and answers than the one that actually happened historically.
Indeed, each has a mean of 1.5; so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally, we have this iff X and Y are uncorrelated, because, well, that's just how "uncorrelated" in the technical sense is defined. I mean, if you really want to get into fundamentals, E[XY] - E[X]E[Y] is not really the most fundamental definition of covariance, I'd say, but it's easily seen to be equivalent. And then of course either way you have to show that independent implies uncorrelated. (And then I guess you have to do the analogues for more than two, but...)
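To make the check concrete (assuming, for instance, that X and Y are independent and each uniform on {1, 2} -- which is one way to get the mean of 1.5):

```python
from itertools import product

vals = [1, 2]                  # X and Y each uniform on {1, 2}
E_X = sum(vals) / len(vals)    # 1.5
E_Y = E_X                      # same distribution, so also 1.5

# Independence: the joint distribution is uniform over all four pairs.
E_XY = sum(x * y for x, y in product(vals, vals)) / 4

assert E_X == E_Y == 1.5
assert E_XY == E_X * E_Y == 2.25   # E[XY] = E[X]E[Y] for independent X, Y
```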
So, um, I was writing a post and I left the tab open for a few hours and the post seems to have just disappeared? While it's not impossible I accidentally clicked refresh or something, as best I can tell it was just gone when I got back, with the tab not having been touched in over an hour.
I'm pretty sure that's not the particular one, but thank you all the same!
Huh, interesting. I have to admit I'm not really familiar with the literature on this; I just inferred this from the use of point estimates. So you're saying people recognized that the quantity to focus on was P(N>0) but used point estimates anyway? I guess what I'm saying is, if you ask "why would they do that", I would imagine the answer to be, "because they were still thinking of the Drake equation, even though it was developed for a different purpose". But I guess that's not necessarily so; it could just have been out of mathematical convenience...
Made what mistake, exactly?
The authors grant Drake's assumption that everything is uncorrelated, though.
I think the real point here (as I've commented elsewhere) isn't that using point estimates is inherently a mistake; it's that the expected value is not what we care about. Point estimates are valid for computing the expected value, but not for the thing we actually care about, which is P(N=0).
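To illustrate the distinction (with a toy model and made-up factor ranges -- this is just a sketch of the phenomenon, not the actual Drake-equation analysis): take N to be a product of several log-uniformly uncertain factors. Then the mean of N, which is roughly what multiplying point estimates recovers, can look respectable even while almost all of the probability mass has N far below 1:

```python
import random

random.seed(0)  # reproducible toy run

# Toy model: N is a product of five uncertain factors, each log-uniform
# over [1e-3, 1e1] (ranges invented for illustration). The resulting
# distribution of N is very heavy-tailed.
def sample_N():
    n = 1.0
    for _ in range(5):
        n *= 10 ** random.uniform(-3, 1)
    return n

samples = [sample_N() for _ in range(100_000)]
mean_N = sum(samples) / len(samples)
frac_below_one = sum(s < 1 for s in samples) / len(samples)

# The mean can sit well above the bulk of the distribution, while the
# overwhelming majority of the probability mass has N far below 1.
print(f"E[N] estimate: {mean_N:.2f}")
print(f"fraction of samples with N < 1: {frac_below_one:.2f}")
```

Here "N < 1" stands in loosely for "probably no other civilizations"; the point is just that E[N] and P(N small) can tell very different stories.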
Ah, makes sense now, thanks!
I guess it's not so much that I am opposed to such a post being frontpaged, as it is, like... that post was just sort of a quick note written without a lot of thought, right? Not something I am necessarily really endorsing or prepared to argue about or anything, you know? I just feel like, if a mod is going to move it to frontpage, they should put their name on it as having done so! :P I didn't put it on the front page, I just wrote the thing...
Oh I see. That explains it, thanks.
I can't agree with that, for a number of reasons. Note that the thing I'm claiming Chapman does is really a number of things, which I've summed up as "you have to prepend 'human' to everything", but the meaning of the prefix I'm summing things up with is actually context-dependent. Here are a few examples of what it can mean (if I'm correct -- again, if Chapman himself wants to correct me, great!) and why it's not a good way of talking.
- Sometimes this means talking about... certain human patterns that a particular notion tends to invoke. E.g. "rationality" above -- it does indeed frequently happen that those who go in for "rationality" or similar notions end up falling into the Straw Vulcan pattern. And it's important to be able to discuss these patterns. But it's a mistake to conflate the pattern itself with the idea that invokes it -- especially as there may be multiple of the latter, that are distinct from one another; this is a lossy operation. Better to say "rationality" when you mean rationality, and say "the pattern invoked by rationality" (or in this case "Straw Vulcanism", since we have a name for it here) when you mean that. Because otherwise how will you tell apart the different ideas that can invoke the Straw Vulcan pattern?
Like, let's diagram this. The usual approach is that "rationality" (the word) points to rationality (the concept) which then itself has an arrow (via the "invokes in humans" operator) to Straw Vulcanism. If we take the initial arrow from "rationality" to rationality, and alter it instead to point to Straw Vulcanism, how do we refer to rationality? "Idealized Straw Vulcanism?" I don't think so! Especially because once again which idealization?
The alternative, I suppose, is that we don't reroute any arrows, but instead just take it as implicit that we're always supposed to apply "human" afterward. And, like, use some sort of quotation thingy (e.g. the "idealized" prefix) when we want to stop that application (like how we use quote marks to indicate that we are mentioning rather than using a word). But even though we're using "rationality" to talk about Straw Vulcanism, under this way of talking, we have to keep in mind that rationality doesn't actually mean Straw Vulcanism (even though that's what we're using it to mean!) so that when we say "idealized rationality" we know what that means. This... this does not sound like a good way of handling things. I would recommend having words directly point to the thing they refer to.
- Sometimes this means talking about the map rather than the territory: taking "X" not to mean X but to mean "X", people's idea of X.
The problem is that, well, most of the time we want to talk about the territory, not people's maps. If I say "there was no Kuiper belt in 1700", you should say "that is false", not "that is true, because the idea of a Kuiper belt had not yet been hypothesized". If I want to say "there was no concept of a 'Kuiper belt' in 1700", I can say that explicitly. Basically, this way of talking is in a sense saying that you can't actually use words, you can only mention them. But most of the time I do in fact want to use words, not mention them!
And again this ends up with similar problems to the above, which I won't detail in full once again. In this case they seem a bit more handleable, because there's not the lossiness issue -- the usual way of speaking is to say X in order to use the word "X" and to say "X" in order to mention the word "X", but one could notionally come up with some bizarre reverse convention here. (Which, to be clear, I haven't seen Chapman use -- what he says when he actually wants to use a word rather than mention it, I don't know. "The real, actual Kuiper belt"? IDK.) I still don't think this is a good idea.
- The most defensible one, I think, is where it effectively means "humanly realizable", like with the "system" example above. This one is substantially less bad than the others, because while it's still a bad idea, it's at least workable. It's usably bad rather than unusably bad. But I do still think it's a bad idea. Once again this is a lossy operation -- the distinction between "nondeterministic" and "chaotic", both of which can get collapsed to "unpredictable in practice", is worth preserving. And once again, to adopt this systematically would require similar contortions to the above, even if not as bad; once again I'll skip the full argument. But yeah, I don't think this is a good way of talking.
Ohg vs nf svefg2 ntnvafg frpbaq1 lbh purpx, frpbaq1 zvtug org naq (fvapr lbh qba'g xabj vg'f 1 engure guna 3) lbh zvtug sbyq va erfcbafr (1 sbe lbh), zrnavat lbh raq hc qbvat jbefr guna lbh jbhyq unir unq lbh org (+1 sbe lbh). Fb V qba'g frr ubj vg qbzvangrf?
OK, but how does one actually control where a given post goes? Where is the control? I don't see it. And despite what you say, the one post I made went to the front page!
I can't make sense of this comment.
If one is talking about one's preferences over number of apples, then the statement that it is a total preorder is a weaker statement than the statement that more is better. (Also, you know, real-number assumptions all over the place.) If one is talking about preferences not just over number of apples but in general, then even so it seems to me that the complete class theorem is making some very strong assumptions, much stronger than the assumption of a total preorder! (Again, look at all those real-number assumptions.)
Moreover, it's not even clear to me that the complete class theorem does what you claim it does, like, at all. Like, it starts out assuming the notion of probability. How can it ground probability when it starts out assuming it? And perhaps I'm misunderstanding, but are the "risk functions" it discusses not in utility? It sure looks like expected values of them are being taken with the intent that smaller is better (this seems to be implicit in the definition of r(θ) -- that r(θ) is measured by expected value when T isn't a pure strategy). Is that mistaken?
(Possible source of error here: I can't seem to find a statement of the complete class theorem that fits neatly into Savage/VNM/Cox/etc.-style formalism, and I'm having some trouble translating it into such, so I may be misunderstanding. The most sense I'm making of it at the moment is that it's something like your examples for why probabilities must sum to one -- i.e., it's saying that if you already believe in utility, and something almost like probability, it must actually be probability. Is that accurate, or am I off?)
(Edit: Also if you're taking issue with the preorder assumption, does this mean that you no longer consider VNM to be a good grounding of the notion of utility for those who already accept the idea of probability?)
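For reference, here's my reading of the objects involved -- hedged, since notation varies between treatments and this may not match the source exactly: a loss function L, a decision rule δ, and the risk of δ at parameter θ, with the Bayes risk against a prior π obtained by averaging:

```latex
R(\theta, \delta) = \mathbb{E}_{X \sim P_\theta}\!\left[ L(\theta, \delta(X)) \right],
\qquad
r(\pi, \delta) = \int R(\theta, \delta)\, d\pi(\theta)
```

On this reading, L is already playing the role of a (negated) utility and P_θ that of a probability, which is why it looks to me like both notions are assumed rather than derived.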
So, um, not exactly on topic but I have no idea where else to ask this  how does one post to one's personal blog page anyway? And how does one move one's post between frontpage/personal? This isn't at all clear. I recently tried to post to the personal blog and it ended up on frontpage and I have no idea how to move it, or how I would have posted to personal in the first place.
(Like, where is this documented? Right now to me it looks like the answer is, not somewhere findable enough.)
Thanks!
Heh, I tried to game-theory it all out and got nonsensical answers (my set of equilibria wasn't convex!), and I wasn't about to redo it, so I'm glad someone managed to determine [what I'm presuming is] the correct Nash equilibrium...
It is a little surprising to me that nyjnlf purpxvat jura fgnegvat jvgu n 2 vf gur bayl sbeprq pubvpr jvgubhg na boivbhf ernfba sbe vg. Gur bgure 6 sbeprq pubvprf ner rnfl gb qrgrezvar jvgubhg univat gb qb zhpu zngu, ohg abg gung bar. Naq gur bgure 5 pubvprf gung qb erdhver qbvat zngu gb qrgrezvar qba'g raq hc nf 1f be 0f.
But yeah, as Zvi says, I guess part of the whole question is how far into donkeyspace you want to go. I would've stuck with Nash, if I could figure it out, but...
Why is it irrelevant when you assume a world where the agent who has to make the decision knows more than they actually know? Decision theory is about making decisions based on certain information that is known.
I think you've lost the chain a bit here. We're just discussing to what extent probability theory does or does not extend various forms of logic. The actual conditions in the real world do not affect that. Now obviously if it only extends it in conditions that do not hold in the real world, then that is important to know; but if that were the case then "probability theory extends logic" would be a way too general statement anyhow and I hope nobody would be claiming that!
(And actually if you read the argument with Chapman that I linked, I agree that "probability theory extends logic" is a misleading claim, and that it indeed mostly does not extend logic. The question isn't whether it extends logic, the question is whether propositional and predicate logic behave differently here.)
But again, all of this is irrelevant, because nobody is claiming anything like that! I mentioned a finite universe, where predicate logic essentially becomes propositional logic, to illustrate a particular point -- that probability theory does not extend propositional logic in the sense Chapman claims it does. I didn't bring it up to say "Oho, well, in a finite universe it does extend predicate logic, therefore it's correct to say that probability theory extends predicate logic"; I did the opposite of that! At no point did I make any actual-rather-than-illustrative assumption to the effect that the real world is or is like a finite universe. So objecting that it isn't has no relevance.
I haven't studied the surrounding math, but as far as I understand, according to Cox's Theorem probability theory does extend propositional calculus without having to make additional assumptions about a finite universe or certain things being known.
Cox's theorem actually requires a "big world" assumption, which IINM is incompatible with a finite universe!
I think this is getting off-track a little. To review: Chapman claimed that, in a certain sense, probability theory extends propositional but not predicate logic. I claimed that, in that particular sense, it actually extends both of them equally well. (Which is not to say that it truly does extend both of them, to be clear -- if you read the argument with Chapman that I linked, I actually agree that "probability theory extends logic" is a misleading claim, and that it mostly doesn't.)
So now the question here is, what are you arguing for? If you're arguing for Chapman's original claim, the relevance of your statement of Cox's theorem is unclear, as it's not clear that this relates to the particular sense he was talking about.
If you're arguing for a broader version of Chapman's claim -- broadening the scope to allow any sense rather than the particular one he claimed -- then you need to exhibit a sense in which probability theory extends propositional logic but not predicate logic. I can buy the claim that Cox's theorem provides a certain sense in which probability theory extends propositional logic. And, though you haven't argued for it, I can even buy the claim that this is a sense in which it does not extend predicate logic [edit: at least, in an uncountable universe]. But, well, the problem is that regardless of whether it's true, this broader claim -- or this particular version of it, anyway -- just doesn't seem to have much to do with his original one.
So, I must point out that a finite universe with known elements isn't actually one where everything is known, although it certainly is one where we know way more than we ever do in the real world. But this is irrelevant. I don't see how anything you're saying relates to the claim that probability theory extends propositional logic but not predicate logic.
Edit: oops, wrote "point" instead of "world"
But the question wasn't about whether it's usable. The question was about whether there is some sense in which probability extends propositional logic but not predicate logic.
Later Chapman wrote the more technical "Probability theory does not extend logic", in which Chapman, who has an MIT AI PhD, shows how the core claim made in the Sequences -- that probability theory is an extension of logic -- is wrong.
As far as I can tell, his piece is mistaken. I'm going to copy-paste what I've written about it elsewhere:
So I looked at Chapman’s “Probability theory does not extend logic” and some things aren’t making sense. He claims that probability theory does extend propositional logic, but not predicate logic.
But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions. Even without that assumption, well, a universal is essentially an infinite conjunction, and an existential statement is essentially an infinite disjunction. It would be strange that this case should fail.
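To spell out the countable case (assuming countable additivity, and enumerating the universe as x_1, x_2, …), a universal is literally a countable intersection, so continuity of measure gives:

```latex
P\big(\forall x\, \varphi(x)\big)
= P\Big(\bigcap_{i=1}^{\infty} \varphi(x_i)\Big)
= \lim_{n \to \infty} P\big(\varphi(x_1) \wedge \cdots \wedge \varphi(x_n)\big)
```

and dually for existentials, as countable unions / infinite disjunctions.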
His more specific example is: Say, for some x, we gain evidence for “There exist distinct y and y’ with R(x,y)”, and update its probability accordingly; how should we update our probability for “For all x, there exists a unique y with R(x,y)”? Probability theory doesn’t say, he says. But OK — let’s take this to a finite universe with known elements. Now all those universals and existentials can be rewritten as finite conjunctions and disjunctions. And probability theory does handle this case?
I mean… I don’t think it does. If you have events A and B and you learn C, well, you update P(A) to P(A|C), and you update P(A∩B) to P(A∩B|C)… but the magnitude of the first update doesn’t determine the magnitude of the second. Why should it when the conjunction becomes infinite? I think that Chapman’s claim about a way in which probability theory does not extend predicate logic is equally a claim about a way in which it does not extend propositional logic. As best I can tell, it extends both equally well.
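A finite example of that non-determination (entirely made-up numbers): here are two joint distributions over three events that agree on P(A) and on P(A|C) -- so learning C moves P(A) identically, in fact not at all -- yet move P(A∩B) completely differently.

```python
# Keys are (A, B, C) truth values; values are probabilities summing to 1.
def summarize(dist):
    pC    = sum(p for (a, b, c), p in dist.items() if c)
    pA    = sum(p for (a, b, c), p in dist.items() if a)
    pAB   = sum(p for (a, b, c), p in dist.items() if a and b)
    pA_C  = sum(p for (a, b, c), p in dist.items() if a and c) / pC
    pAB_C = sum(p for (a, b, c), p in dist.items() if a and b and c) / pC
    return pA, pA_C, pAB, pAB_C

# In dist1, B coincides with A; in dist2, B only partially overlaps A.
dist1 = {(True, True, True): 0.2, (True, True, False): 0.2,
         (False, False, True): 0.3, (False, False, False): 0.3}
dist2 = {(True, True, True): 0.05, (True, False, True): 0.15,
         (True, True, False): 0.2,
         (False, False, True): 0.3, (False, False, False): 0.3}

pA1, pA_C1, pAB1, pAB_C1 = summarize(dist1)
pA2, pA_C2, pAB2, pAB_C2 = summarize(dist2)

# P(A) = P(A|C) = 0.4 in both distributions: learning C doesn't move P(A)...
assert abs(pA1 - 0.4) < 1e-9 and abs(pA_C1 - 0.4) < 1e-9
assert abs(pA2 - 0.4) < 1e-9 and abs(pA_C2 - 0.4) < 1e-9
# ...but P(A∩B|C) is 0.4 in the first and 0.1 in the second.
assert abs(pAB_C1 - 0.4) < 1e-9
assert abs(pAB_C2 - 0.1) < 1e-9
```

So the update to the conjunct simply doesn't pin down the update to the conjunction, already in the finite case.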
(Also, here is a link to a place where I posted this and got into an argument with Chapman about it, which people might find helpful?)
I think this is a good summary; see also my comment below.
Three types of "should"
2018-06-02T00:54:49.171Z · score: 26 (5 votes)

Some images seem to have gone missing from the "It's Not (Just) Regulation" section?
BTW, this is off-topic, but since you link to that Arbital page, and I don't know where else to comment on it -- the theorem you're looking for, that grounds both utility and probability simultaneously, in a non-circular fashion, and without any baked-in assumption that R is the correct system of numbers to use[0], is not the complete class theorem. It is Savage's theorem.
[0] Yes, Savage's theorem includes an Archimedean assumption, which you could argue is the same thing as baking in R; but I'd say it's not, because this Archimedean assumption is a direct statement about the agent's preferences, whereas it's not immediately clear what picking R as your number system means about the agent's preferences (and I suspect that most people have used R more on the basis of convenience/familiarity than because they recognized the necessity of an Archimedean condition).
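For concreteness, here is the VNM-style form of the Archimedean (continuity) condition -- Savage's actual postulate P6 is stated differently, in terms of partitions of the state space, but plays an analogous role:

```latex
A \succ B \succ C
\;\Longrightarrow\;
\exists\, p, q \in (0, 1):\quad
p A + (1 - p)\, C \;\succ\; B \;\succ\; q A + (1 - q)\, C
```

Note that this is stated purely as a constraint on the preference relation; no real numbers appear in it.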
So, my understanding of Chapman -- and this is based on other things he's written, which I unfortunately can't find right now; he can of course correct me if I'm wrong here -- is that he's often just not saying what it sounds like he's saying, because he's implicitly prefixing everything with "human". The article that I can't find at the moment, the one that made this clear, was where he said there's no system to do X, and then anticipated the counterargument "but the human brain does X", and replied: yes, but I'm talking about systems a human could execute, so "the human brain does X" is not relevant to what I'm talking about. But this is the only place he explicitly said that! So I think when reading him you just have to do that everywhere -- prefix "human" to everything (although the exact meaning of that prefix seems to vary). When he says "system", he actually means "system a human could execute". When he says "rationality", he actually means "how people usually construe rationality". When he seems to confuse systems of facts and systems of norms, that's not him getting mixed up; it's that he's actually talking about other people's maps -- and in other people's maps these are often conflated -- rather than talking about the territory. Now, personally I think this sort of terminology obfuscates rather than clarifies -- you could just, you know, explicitly mark when you're talking about human-X rather than X, or when you're talking about people's maps rather than the territory directly -- but I think you have to understand it if you want to read Chapman's writing.
I'm very confused how the categorical imperative is supposed to be relevant here. I don't see how the bit you've highlighted relates to it at all.
I think you've misread what I'm saying. I am not trying to define that as a norm. I am pointing it out as an important consideration, not a definition.
More generally, I'm not trying to define anything as a norm. As I stated above, what I'm trying to do is not define new norms -- certainly not from any sort of first principles -- but to make some tiny initial progress towards making explicit the norms that already exist. Which, as you say, vary, but I can at least speak to what I've seen. The numbered points above are, as I said, considerations that I think need to be accounted for, and I think failing to account for those points is a big reason previous attempts have failed and ended up somewhere near "classical liberal" or "nerd", neither of which is at all close to the actual norms anywhere.
So, I don't actually understand most of this comment. So, one thing at a time here...
This is really good and important, but I think you're making the problem too hard by thinking about universal rather than local norms.
Well, I'd just say I failed to specify that these might just be local norms, but sure, that's a good point -- local norms vary. E.g., I've noticed people in the LW-sphere writing about how asking twice about something might be considered pressuring, whereas to my mind asking twice is completely ordinary and it's asking three times that's over the line. But yes, we have to account for the fact that there's not necessarily going to be one universally applicable "theory of legitimate influence", except possibly at a high level that's not directly applicable.
Institutions that can produce interesting longrun technological improvements have to be optimizing for building shared maps, not exploiting existing maps for local efficiencies in ways that erode those maps.
OK, I don't understand what you're saying here, or at least not how it's relevant. Could you give an example?
A norm that this is the legitimate incentive gradient to follow within such institutions  and that more generally creating shared capacity is more interesting than reallocating existing capacity  is the generator of the different legitimate influence ideologies you mentioned.
I don't really understand what you're saying here and to the extent that I do I find the claim confusing. Again, could you give examples of how this might occur?
For an example of why -- like, I'd say that the "nerd" theory here arises from bad observation. It's not something people actually follow, because that's impossible, though they might sometimes try. Basically, the question of legitimate influence is one of those social micro-things that ordinary people just can't really talk about, because their common sense gets in the way; theories of legitimate influence are mostly left implicit. Attempts to make them explicit get filtered through the lens of common sense, yielding instructions that are untenable if taken literally... though nerds will try all the same. (E.g., a common thing I've seen recently is explicitly stating #1, but implicitly redefining "coercion" as needed to mean whatever you need it to mean. Common sense allows statements to diverge from practice heavily.)
In short #1 and #2 above were meant to be examples of theories that people state, not that people follow.
If you have closed systems for having these nice things, you don't have to remake norms everywhere to have nice things in your community.
Indeed! But I think the important thing to recognize here is that I'm (mostly) not talking about remaking norms at all. When I say "we need a theory of legitimate influence", I (mostly) mean "we need to learn how to make explicit the norms that we're already following". Or perhaps I should say, the norms that normal people are already following. :P Once we understand that, then perhaps we can begin to adjust them, if adjustments are needed. Trying to do things the other way around -- starting from reasoned-out theories, then trying to practice them -- just leads to untenable theories like the nerd theory.
You definitely don't have to make war on people who don't want these nice things and demand they adopt your standards.
I... never suggested that?
I mean, it's kind of implicit in lots of the stuff on Less Wrong, isn't it?
Tangential -- in that this is going to do absolutely zero more to justify the completeness assumption than anything else you've read -- but this seems like a good place to point out that utility functions can also be grounded in (and, IMO, are better grounded in) Savage's theorem, in addition to the VNM theorem.
Huh, I only just saw this for some reason. Anyway, if you're not familiar with Savage's theorem, that's why I wrote the linked article here about it! :)
Huh, I only just saw this for some reason.
Anyway yes AlexMennen has the right of it.
I don't have an example of Eliezer's remarks to hand; by which I mean, I remember seeing them on old LW, but I can't find them at the moment. (Note that I'm interpreting what he said... very charitably. What he actually said made considerably less sense, but we can perhaps steelman it as a strong commitment to total utilitarianism.)
I definitely think Nerst has things the right way round, but I'm having trouble making explicit why. One reason, though, that I can make explicit is that, well, tangling everything together is the default. Decoupling -- orthogonality, unbundling, separation of concerns, hugging the query -- is rarer, takes work, and is worth pointing out.
Decoupling, orthogonality, unbundling, separation of concerns, relevance, the belief that the genetic fallacy is in fact a fallacy, hugging the query.... :)
Not a new idea, but an important one, and worth writing explicitly about!
Note also that you could easily have seen your initial comment to be wrong just by computing the truth tables! Equivalence in classical propositional logic is pretty easy -- you don't need to think about proofs, just write down the truth tables!
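For instance (a minimal sketch in Python -- any way of enumerating the four assignments works):

```python
from itertools import product

def truth_table(formula):
    # Evaluate a two-variable formula on all four assignments (p, q),
    # in the fixed order FF, FT, TF, TT.
    return [formula(p, q) for p, q in product([False, True], repeat=2)]

# "p implies q" and "not (p and not q)" have identical tables: equivalent.
assert truth_table(lambda p, q: (not p) or q) == \
       truth_table(lambda p, q: not (p and not q))

# But the converse "q implies p" has a different table: not equivalent.
assert truth_table(lambda p, q: (not p) or q) != \
       truth_table(lambda p, q: (not q) or p)
```

Two formulas in two variables are classically equivalent exactly when these four-entry lists match, so this check settles the question mechanically.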