On infinite ethics

post by Joe Carlsmith (joekc) · 2022-01-31T07:04:44.244Z

Contents

  I. The importance of the infinite
  II. On “locations” of value
  III. Problems for totalism
  IV. Infinite fanatics
  V. The impossibility of what we want
  VI. Ordinal rankings aren’t enough
  VII. Something about averages? 
  VIII. New ways of representing infinite quantities?
  IX. Something about expanding regions of space-time?
  X. Weight people by simplicity?
  XI. What’s the most bullet-biting hedonistic utilitarian response we can think of?
  XII. Bigger infinities and other exotica
  XIII. Maybe infinities are just not a thing? 
  XIV. The death of a utilitarian dream
  XV. Everyone’s problem
  XVI. Nihilism and responsibility
  XVII. Infinities in practice

(Cross-posted from Hands and Cities)

And for all this, nature is never spent…

Gerard Manley Hopkins

Summary:

Thanks to Leopold Aschenbrenner, Amanda Askell, Paul Christiano, Katja Grace, Cate Hall, Evan Hubinger, Ketan Ramakrishnan, Carl Shulman, and Hayden Wilkinson for discussion. And thanks to Cate Hall for some poetry suggestions. 

I. The importance of the infinite

Most of ethics ignores infinities. They’re confusing. They break stuff. Hopefully, they’re irrelevant. And anyway, finite ethics is hard enough.

Infinite ethics is just ethics without these blinders. And ditching the blinders is good. We have to deal with infinities in practice. And they are deeply revealing in theory.

Why do we have to deal with infinities in practice? Because maybe we can do infinite things. 

More specifically, we might be able to influence what happens to an infinite number of “value-bearing locations” – for example, people. This could happen in two ways: causal, or acausal. 

The causal way requires funkier science. It’s not that infinite universes are funky: to the contrary, the hypothesis that we share the universe with an infinite number of observers is very live, and various people seem to think it’s the leading cosmology on offer (see footnote).[1] But current science suggests that our causal influence is made finite by things like lightspeed and entropy (though see footnote for some subtlety).[2] So causing infinite stuff probably needs new science. Maybe we learn to make hypercomputers, or baby universes with infinite space-times.[3] Maybe we’re in a simulation housed in a more infinite-causal-influence-friendly universe. Maybe something about wormholes? You know, sci-fi stuff. 

The acausal way can get away with more mainstream science. But it requires funkier decision theory. Suppose you’re deciding whether to make a $5000 donation that will save a life, or to spend the money on a vacation with your family. And suppose, per various respectable cosmologies, that the universe is filled with an infinite number of people very much like you, faced with choices very much like yours. If you donate, this is strong evidence that they all donate, too. So evidential decision theory treats your donation as saving an infinite number of lives, and as sacrificing an infinite number of family vacations (does one outweigh the other? on what grounds?). Other non-causal decision theories, like FDT, will do the same. The stakes are high. 

Perhaps you say: Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough. 

And whatever our credences here, we should be clear-eyed about the fact that helping or harming an infinite number of people would be an extremely big deal. Saving a hundred lives, for example, is a deeply significant act. But saving a thousand lives is even more so; a million, even more so; and so on. For any finite number of lives, though, saving an infinite number would save more than that. So saving an infinite number of lives matters at least as much as saving any finite number – and very plausibly, it matters more (see Beckstead and Thomas (2021) for more). 

And the point generalizes: for any way of helping/harming some finite set of people, doing that to an infinite number of people matters at least as much, and plausibly more. And if you’re the type of person who thinks that e.g. saving 10x the lives is 10x as important, it will be quite natural and tempting to say that the infinite version matters infinitely more.

Of course, accepting these sorts of stakes can lead to “fanaticism” about infinities, and neglect of merely finite concerns. I’ll touch on this below. For now, I mostly want to note that, just as you can recognize that humanity’s long-term future matters a lot, without becoming indifferent to the present, so too can you recognize that helping or harming an infinite number of people would matter a lot, without becoming indifferent to the merely finite. Perhaps you do not yet have a theory that justifies this practice; perhaps you’ll never find one. But in the meantime, you need not distort the stakes of infinite benefits and harms, and pretend that infinity is actively smaller than e.g. a trillion.  

I emphasize these stakes partly because I’m going to be using the word “infinite” a lot, and casually, with reference to both wonderful and horrifying things. My examples will be math-y and cartoonish. Faced with such a discourse, it can be easy to start numbing out, or treating the topic like a joke, or a puzzle, or a wash of weirdness. But ultimately, we’re talking about situations that would involve actual, live human beings – the same human beings whose lives are at stake in genocides, mental hospitals, slums; human beings who fall in love, who feel the wind on their skin, who care for dying parents as they fade. In infinite ethics, the stakes are just: what they always are. Only: unendingly more. 

Here I’m reminded of people who realize, after engaging with the terror and sublimity of very large finite numbers (e.g., Graham’s number), that “infinity,” in their heads, was actually quite small, such that e.g. living for eternity sounds good, but living a Graham’s number of years sounds horrifying (see Tim Urban’s “PS” at the bottom of this post). So it’s worth taking a second to remember just how non-small infinity really is. The stakes it implies are hard to fathom. But they’re crucial to remember – especially given that, in practice, they may be the stakes we face.

Even if you insist on ignoring infinities in practice, though, they still matter in theory. In particular: whatever our actual finitude, ethics shouldn’t fall silent in the face of the infinite. Nor does it. Suppose you were God, choosing whether to create an infinite heaven, or an infinite hell. Flip a coin? Definitely not. Ok then: that’s a data point. Let’s find others. Let’s get some principles. It’s a familiar game – and one we often use merely possible worlds to play.

Except: the infinite version is harder. Instructively so. In particular: it breaks tons of stuff developed for the finite version. Indeed, it can feel like staring into a void that swallows all sense-making. It’s painful. But it’s also good. In science, one often hopes to get new data that ruins an established theory. It’s a route to progress: breaking the breakable is often key to fixing it.

Let’s look into the void.

II. On “locations” of value

Forever – is composed of Nows –

 — Emily Dickinson

A quick note on set-up. The standard game in infinite ethics is to put finite utilities on an infinite set (specifically, a countably infinite set) of value-bearing “locations.” But it can make an important difference what sort of “locations” you have in mind. 

Here’s a classic example (adapted from Cain (1995); see also here). Consider two worlds: 

Zone of suffering: An infinite line of immortal people, numbered starting at 1, who all start out happy (+1). On day 1, person 1 becomes sad (-1), and stays that way forever. On day 2, person 2 becomes sad, and stays that way forever. And so on. 

Person   1   2   3   4   5
day 1: <-1,  1,  1,  1,  1, …>
day 2: <-1, -1,  1,  1,  1, …>
day 3: <-1, -1, -1,  1,  1, …>
etc…

Zone of happiness: Same world, but the happiness and sadness are reversed: everyone starts out sad, and on day 1, person 1 becomes happy; day 2, person 2, and so on. 

Person   1   2   3   4   5
day 1: < 1, -1, -1, -1, -1, …>
day 2: < 1,  1, -1, -1, -1, …>
day 3: < 1,  1,  1, -1, -1, …>
etc…

In zone of suffering, at any given time, the world has finite sadness, and infinite happiness. But any given person is finitely happy, and infinitely sad. In zone of happiness, it’s reversed. Which is better? 

My take is that the zone of happiness is better. It’s where I’d rather live, and choosing it fits with principles like “if you can save everyone from infinite suffering and give them infinite happiness instead, do it,” which sound pretty solid. We can talk about analogous principles for “times,” but from a moral perspective, agents seem to me more fundamental. 

My broader point, though, is that the choice of “location” matters. I’ll generally focus on “agents.”

III. Problems for totalism

Friend,
the hours will hardly pardon you their loss,
those brilliant hours that wear away the days,
those days that eat away eternity. 

Robert Lowell, A Roman Sonnet

OK, let’s start with easy stuff: namely, problems for a simple, total utilitarian principle that directs you to maximize the total welfare in the universe. 

First off: “total welfare in the universe” gets weird in infinite worlds. Consider a world with an infinite number of people at +2 welfare, and an infinite number at -1. What’s the total welfare? It depends on the order you add. If you go: +2, -1, -1, +2, -1, -1, then the total oscillates forever between 0 and 2 (if you prefer to hang out near a different number, just add or subtract the relevant amount at the beginning, then start oscillating). If you go: +2, -1, +2, -1, you get ∞. If you go: +2, -1, -1, -1, +2, -1, -1, -1, you get –∞. So which is it? If you’re God, and you can create this world, should you?
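
If you want to see the bookkeeping, here’s a minimal sketch in Python (the orderings are the ones just described; the helper function is mine):

```python
def partial_sums(pattern, n=12):
    """Running totals when the world's welfare levels are enumerated by
    repeating `pattern` forever (only the first n totals shown)."""
    total, out = 0, []
    for i in range(n):
        total += pattern[i % len(pattern)]
        out.append(total)
    return out

# One and the same world -- infinitely many people at +2, infinitely many at -1 --
# "totaled" in three different enumeration orders:
print(partial_sums([2, -1, -1]))      # oscillates between 0 and 2 forever
print(partial_sums([2, -1]))          # partial sums drift toward +infinity
print(partial_sums([2, -1, -1, -1]))  # partial sums drift toward -infinity
```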

Or consider a world where the welfare levels are: 1, -1/2, 1/3, -1/4, 1/5, and so on. Depending on the order you use, these can sum to any welfare level you want (see the Riemann Rearrangement Theorem; and see the Pasadena Game for decision-theory problems this creates). Isn’t that messed up? Not the type of situation the totalist is used to. (Maybe you don’t like infinitely precise welfare levels. Fine, stick with the previous example.)
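
And here’s a sketch of the rearrangement trick for that series – the standard greedy construction, with illustrative names of my own:

```python
def rearranged_partial_sum(target, n_terms=200_000):
    """Greedily reorder 1 - 1/2 + 1/3 - 1/4 + ... so its partial sums home in
    on `target`: add unused positive terms until you overshoot, then unused
    negative terms until you undershoot (the Riemann rearrangement recipe)."""
    next_pos, next_neg = 1, 2   # denominators of the next unused +/- terms
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / next_pos
            next_pos += 2
        else:
            total -= 1.0 / next_neg
            next_neg += 2
    return total

# In its natural order the series sums to ln(2), roughly 0.693...
print(sum((-1) ** (k + 1) / k for k in range(1, 200_001)))
# ...but reordered, the same terms can be made to sum to whatever you like:
print(rearranged_partial_sum(0.0))   # ~0.0
print(rearranged_partial_sum(3.7))   # ~3.7
```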

Maybe we demand enough structure to fix a definite order (this already involves giving up some cherished principles – more below). But now consider an infinite world where everyone’s at 1. Suppose you can bump everyone up to 2. Shouldn’t you do it? But the “total welfare” is the same: ∞.

So “totals” get funky. But there’s also another problem: namely, that if the total is infinite (whether positive or negative), then finite changes won’t make a difference. So the totalist in an infinite world starts shrugging at genocides. And if they can only ever do finite stuff, they start treating all their possible actions as ethically indifferent. Very bad. As Bostrom puts it: 

“This should count as a reductio by everyone’s standards. Infinitarian paralysis is not one of those moderately counterintuitive implications that all known moral theories have, but which are arguably forgivable in light of the theory’s compensating virtues. The problem of infinitarian paralysis must be solved, or else aggregative consequentialism must be rejected.” (p. 45).

Strong words. Worrying. 

But actually, even if I put a totalist hat on, I’m not too worried. If “how can finite changes matter in infinite worlds?” were the only problem we faced, I’d be inclined to ditch talk about maximizing total welfare, and to focus instead on maximizing the amount of welfare that you add on net. Thus, in a world of infinite 1s, bumping ten people up to 2 adds 10. Nice. Worth it. Size of drop, not size of bucket.[4]

But “for totalists in infinite worlds, are finite genocides still bad?” really, really isn’t the only problem that infinities create. 

IV. Infinite fanatics

In the finite no happiness can ever breathe.
The Infinite alone is the fulfilling happiness.

The Upanishads

Another problem I want to note, but then mostly set aside, is fanaticism. Fanaticism, in ethics, means paying extreme costs with certainty, for the sake of tiny probabilities of sufficiently big-deal outcomes.

Thus, to take an infinite case: suppose that you live in a finite world, and everyone is miserable. You are given a one-time opportunity to choose between two buttons. The blue button is guaranteed to transform your world into a giant (but still finite) utopia that will last for trillions of years. The red button has a one-in-a-graham’s-number chance of creating a utopia that will last infinitely long. Which should you press?

Here the fanatic says: red. And naively, if an infinite utopia is infinitely valuable, then expected utility theory agrees: the EV of red is infinite (and positive), and the EV of blue, merely finite. But one might wonder. In particular: red seems like a loser’s game. You can press red over and over for a trillion^trillion years, and you just won’t win. And wasn’t rationality about winning?

This isn’t a purely infinity problem. Verdicts like “red” are surprisingly hard to avoid, even for merely finite outcomes, without saying other very unattractive things (see Beckstead and Thomas (2021) and Wilkinson (2021) for discussion).

Plausibly, though, the infinite version is worse. The finite fanatic, at least, cares about how tiny the probability is, and about the finite costs of rolling the dice. But the infinite fanatic has no need for such details: she pays any finite cost for any probability of an infinite payoff. Suppose that: oops, we overestimated the probability of red paying out by a factor of a graham’s number. Oops: we forgot that red also tortures a zillion kittens with certainty. The infinite fanatic doesn’t even blink. The moment you said “infinity,” she tuned all that stuff out. 

Note that varying the “quality” of the infinity (while keeping its sign the same) doesn’t matter either. Suppose that oops: actually, red’s payout is just a single, barely-conscious, slightly-happy lizard, floating for eternity in space. For a sufficiently utilitarian-ish infinite fanatic, it makes no difference. Burn the Utopia. Torture the kittens. I know the probability of creating that lizard is unthinkably negligible. But we have to try.

What’s more, the finite fanatic can reach for excuses that the infinite fanatic cannot. In particular, the finite fanatic can argue that, in her actual situation, she faces no choices with the relevantly problematic combination of payoffs and probabilities. Whether this argument works is another question (I’m skeptical). But the infinite fanatic can’t even voice it. After all, any non-zero credence on an infinite payoff is enough to bite her. And since it is always possible to get evidence that infinite payoffs are available (God could always appear before you with various multi-colored buttons), non-zero-credences seem mandatory. Thus, no matter where she is, no matter what she has seen, the infinite fanatic never gives finite things any intrinsic attention. When she kisses her children, or prevents a genocide, she does it for the lizard, or for something at least as large.

(This “non-zero credences on infinities” issue is also a problem for assigning expected sizes to empirical quantities. What’s your expected lifespan? Oops: it’s infinite. How long will humanity survive, in expectation? Oops: eternity. How tall, in expectation, is that tree? Oops: infinity tall. I guess we’ll just ignore this? Yep, I guess we will.)

But infinite fanaticism isn’t our biggest infinity problem either. Notably, for example, it seems structurally similar to finite fanaticism, and one expects a similar diagnosis. But also: it’s a type of bullet a certain sort of person has gotten used to biting (more below). And biting has a familiar logic: as I noted above, infinities really are quite a big-deal thing. Maybe we can live with obsession? There’s a grand tradition, for example, of treating God, heaven, hell, etc as lexically more important than the ephemera of this fallen world. And what is heaven but a gussied-up lizard? (Well, one hopes for distinctions.)

No, the biggest infinity problems are harder. They break our familiar logic. They serve up bullets no one dreamed of biting. They leave the “I’ll just be hardcore about it” train without tracks.[5]

V. The impossibility of what we want

From this – experienced Here –
Remove the Dates – to These –
Let Months dissolve in further Months –
And Years – exhale in Years

Emily Dickinson

In particular: whether you’re obsessed with infinities or not, you need to be able to choose between them. Notably, for example, you might (non-zero credences!) run into a situation where you need to create one infinite baby universe (hypercomputer, etc), vs. another. And as I noted above, we have views about this. Heaven > hell. Infinite utopia > infinite lizard (at least according to me). 

And even absent baby-universe stuff, EDT-ish folks (and people with non-trivial credence on EDT-ish decision-theories) with mainstream credences on infinite cosmologies are already choosing between infinite worlds – and even, infinite differences between worlds – all the time. Whenever an EDT-ish person moves their arm, they see (with very substantive probability) an infinite number of arms, all across the universe, moving too. Every donation is an infinite donation. Every papercut is an infinity of pain. Yet: whatever your cosmology and decision theory, isn’t a life-saving donation worth a papercut? Aren’t two life-saving donations better than one?

Ok, then, let’s figure out the principles at work. And let’s start easy, with what’s called an “ordinal” ranking of infinite worlds: that is, a ranking that says which worlds are better than which others, but which doesn’t say how much better.

Suppose we want to endorse the following extremely plausible principle: 

Pareto: If two worlds (w1 and w2) contain the same people, and w1 is better for an infinite number of them, and at least as good for all of them, then w1 is better than w2. 

Pareto looks super solid. Basically it just says: if you can help an infinite number of people, without hurting anyone, do it. Sign me up. 

But now we hit problems. Consider another very attractive principle: 

Agent-neutrality: If there is a welfare-preserving bijection from the agents in w1 to the agents in w2, then w1 and w2 are equally good.  

By “welfare-preserving bijection,” I mean a mapping that pairs each agent in w1 with a single agent in w2, and each agent in w2 with a single agent in w1, such that both members of each pair have the same welfare level. The intuitive idea here is that we don’t have weird biases that make us care more about some agents than others for no good reason. A world with a hundred Alices, each at 1, has the same value as a world of a hundred Bobs, each at 1. And a world where Alice has 1, and Bob has 2, has the same value as a world where Alice has 2, and Bob has 1. We want the agents in a world to flourish; but we don’t care extra about e.g. Bob flourishing in particular. Once you’ve told me the welfare levels in a given world, I don’t need to check the names.

(Maybe you say: what if Alice and Bob differ in some intuitively relevant respect? Like maybe Bob has been a bad boy and deserves to suffer? Following common practice, I’m ignoring stuff like this. If you like, feel free to add further conditions like “provided that everyone is similar in XYZ respects.”)

The problem is that in infinite worlds, Pareto and Agent-Neutrality contradict each other. Consider the following example (adapted from Van Liedekerke (1995)). In w1, every fourth agent has a good life. In w2, every second agent has a good life. And the same agents exist in both worlds.

Agents   a1   a2   a3   a4   a5   a6   a7
w1        1    0    0    0    1    0    0 …
w2        1    0    1    0    1    0    1 …

By Pareto, w2 is better than w1 (it’s better for a3, a7, and so on, and just as good for everyone else). But there is also a welfare-preserving bijection from w1 to w2: you just map the 1s in w1 to the 1s in w2, in order, and the same for the 0s. Thus: a1 goes to a1, a2 goes to a2, a3 goes to a4, a4 goes to a6, a5 goes to a3, and so on. So by Agent-Neutrality, w1 and w2 are equally good. Contradiction. 
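
To make both verdicts concrete, here’s a toy check on a finite prefix of the two worlds (the variable names are mine, and of course no finite prefix captures the full bijection):

```python
N = 10_000  # examine a finite prefix of the two infinite worlds

# Welfare of agent k (1-indexed) in each world:
w1 = {k: 1 if k % 4 == 1 else 0 for k in range(1, N + 1)}  # every fourth agent at 1
w2 = {k: 1 if k % 2 == 1 else 0 for k in range(1, N + 1)}  # every second agent at 1

# Pareto's verdict: w2 is at least as good for every agent, and better for some.
assert all(w2[k] >= w1[k] for k in range(1, N + 1))
assert any(w2[k] > w1[k] for k in range(1, N + 1))

# Agent-Neutrality's pairing: match the nth agent at 1 in w1 with the nth agent
# at 1 in w2, and likewise for the agents at 0. (Over the full infinite worlds
# this pairing is a welfare-preserving bijection; here we just spell out its
# first few entries.)
ones_w1 = [k for k in range(1, N + 1) if w1[k] == 1]
ones_w2 = [k for k in range(1, N + 1) if w2[k] == 1]
zeros_w1 = [k for k in range(1, N + 1) if w1[k] == 0]
zeros_w2 = [k for k in range(1, N + 1) if w2[k] == 0]
pairing = {**dict(zip(ones_w1, ones_w2)), **dict(zip(zeros_w1, zeros_w2))}

assert all(w1[a] == w2[b] for a, b in pairing.items())  # welfare-preserving
print(sorted(pairing.items())[:5])  # [(1, 1), (2, 2), (3, 4), (4, 6), (5, 3)]
```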

Here’s another example (adapted from Hamkins and Montero (1999)). Consider an infinite world where each agent is assigned to an integer, which determines their well-being, such that each agent i is at i welfare. And now suppose you could give each agent in this world +1 welfare. Should you do it? By Pareto, yes. But wait: have you actually improved anything? By Agent-Neutrality: no. There’s a welfare preserving bijection from each agent i in the first world to agent i-1 in the second: 

Agents   …   a-3   a-2   a-1   a0   a1   a2   a3   …
w3       …    -3    -2    -1    0    1    2    3   …
w4       …    -2    -1     0    1    2    3    4   …

Indeed, Agent-neutrality mandates indifference to the addition or subtraction of any uniform level of well-being in w3. You could harm each agent by a million, or help them by a zillion, and Agent-neutrality will shrug: it’s the same distribution, dude. 
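
The same sort of check for the uniform +1 case (again, just a finite window of a doubly-infinite world; names mine):

```python
N = 1_000  # agents a_{-N} through a_N

w3 = {i: i for i in range(-N, N + 1)}      # agent i at welfare i
w4 = {i: i + 1 for i in range(-N, N + 1)}  # everyone bumped up by 1

# Pareto: w4 is better for every single agent.
assert all(w4[i] > w3[i] for i in w3)

# Agent-Neutrality: agent i in w3 has exactly the welfare of agent i - 1 in w4,
# so (over the full doubly-infinite world) i -> i - 1 is a welfare-preserving
# bijection, and the two worlds come out "equally good."
assert all(w3[i] == w4[i - 1] for i in range(-N + 1, N + 1))
```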

Clearly, then, either Pareto or Agent-Neutrality has got to go. Which is it?

My impression is that ditching Agent-Neutrality is the more popular option. One argument for this is that Pareto just seems so right. If we’re not in favor of helping an infinite number of agents, or against harming an infinite number, then where on earth has our ethics landed us? 

Plus: Agent-Neutrality causes problems for other attractive, not-quite-Pareto principles as well. Consider: 

Anti-infinite-sadism: It’s bad to add infinitely many suffering agents to a world. 

Seems right. Very right, in fact. But now consider an infinite world where everyone is at -1. And suppose you can add another infinity of people at -1. 

Agents   a1   a2   a3   a4   a5   a6   a7
w5       -1        -1        -1        -1 …
w6       -1   -1   -1   -1   -1   -1   -1 …

Agent-neutrality is like: shrug, it’s the same distribution. But I feel like: tell that to the infinity of distinct suffering people you just created, dude. If there is a button on the wall that says “create an extra infinity of suffering people, once per second,” one does not lean casually against it, regardless of whether it’s already been pressed. 

On the other hand, when I step back and look at these cases, my agent-neutrality intuitions kick in pretty hard. That is, pairs like w3 and w4, and w5 and w6, really start to look like the same distribution.

Here’s a way of pumping the intuition. Consider a world just like w3/w4, except with an entirely different set of people (call them the “b-people”). 

Agents   …   b-3   b-2   b-1   b0   b1   b2   b3   …
w7       …    -3    -2    -1    0    1    2    3   …

Compared to w3, w7 really looks equally good: switching from a-people to b-people doesn’t change the value. But so, too, does w7 look equally good when compared to w4 (it doesn’t matter which b-person we call b0). But by Pareto, it can’t be both.

We can pump the same sort of intuition with w5, w6, and another infinite b-people world consisting of all -1s (call this w8). I feel disinclined to pay to move from w5 to w8: it’s just another infinite line of -1s. But I feel the same about w6 and w8. Yet I am very into paying to prevent the addition of an extra infinity of suffering people to a world. What gives?

What’s more, my understanding is that the default way to hold onto Pareto, in this sort of case, is to say that w7 is “incomparable” to w3 and w4 (e.g., it’s neither better, nor worse, nor equally good), even though w3 and w4 are comparable to each other. There’s a big literature on incomparability in philosophy, which I haven’t really engaged with. One immediate problem, though, has to do with money-pumps.  

Suppose that I’m God, about to create w3. Someone offers me w4 instead, for $1, and I’m like: hell yeah, +1 to an infinite number of people. Now someone offers me w7 in exchange for w4. They’re incomparable, so I’m like … um, I think the thing people say here is that I’m “rationally permitted” to either trade or not? Ok, f*** it, let’s trade. Now someone else says: wait, how about w3 for w7? Another “whatever” choice: so again I shrug, and trade. But now I’m back to where I started, except with $1 less. Not good. Money-pumped.

Fans of incomparability will presumably have a lot to say about this kind of case. For now I’ll simply register a certain kind of “bleh, whatever we end up saying here is going to kind of suck” feeling. (For example: if in order to avoid money-pumping, the incomparabilist forces me to “complete” my preferences in a particular way once I make certain trades, such that I end up treating w7 as equal either to w3 or w4, but not both, I feel like: which one? Either choice seems arbitrary, and I don’t actually think that w7 is better/worse than one of w3 or w4. Why am I acting like I do?)

Overall, this looks like a bad situation to me. We have to start shrugging at infinities of benefit or harm, or we have to start being opinionated/weird about worlds that really look the same. I don’t like it at all.

And note that we can run analogous arguments for basic locations of value other than agents. Suppose, for example, that we replace each of the “agents” in the worlds above with spatio-temporal regions. We can then derive similar contradictions between e.g. “spatio-temporal Pareto” (if you make some spatio-temporal regions better, and none worse, that’s an improvement), and “spatio-temporal-neutrality” (e.g., it doesn’t matter in which spatio-temporal region a given unit of value occurs, as long as there’s a value-preserving bijection between them). And the same goes for person-moments, generations, and so forth. 

This contradiction between something-Pareto and something-Neutrality is one relatively simple impossibility result in infinite ethics. The literature, though, contains a variety of others (see e.g. Zame (2007), Lauwers (2010), and Askell (2018)). I haven’t dug in on these much, but at a glance, they seem broadly similar in flavor. 

And note that we can get contradictions between something-Pareto and something-else-Pareto as well: for example, Pareto over agents and Pareto over spatio-temporal locations. Thus, consider a single room where Alice will live, then Bob, then Cindy, and so forth, onwards for eternity. In w9, each of them lives for 100 happy years. In w10, each lives for 1000 slightly less happy years, such that each life is better overall. w10 is better for every agent. But w9 is better at every time (this example is adapted from Arntzenius (2014)). So which is better overall? Here, following my verdict about the zone of happiness, I’m inclined to go with w10: agents, I think, are the more fundamental unit of ethical concern. But one might’ve thought that making an infinite number of spatio-temporal locations worse would make the world worse, not better.  

Pretty clearly, some stuff we liked from finite land is going to have to go. 

VI. Ordinal rankings aren’t enough

Suppose we bite the bullet and ditch Pareto or Agent-Neutrality. We’re still nowhere close to generating an ordinal ranking over infinite worlds. Pareto, after all, is an extremely weak principle: it stops applying as soon as a given world is better for one agent, and worse for another (for example, donations vs. papercuts). And Agent-Neutrality stops applying without a welfare-preserving bijection. So even with a nasty bullet fresh in our teeth, a lot more work is in store.

Worse, though, ordinal rankings aren’t enough. They tell you how to choose between certainties of one outcome vs. another. But real choices afford no such certainty. Rather, we need to choose between probabilities of creating one outcome vs. another. Suppose, for example, that God offers you the following lotteries: 

l1:    40% on a line of people at <1, 1, 1, 0, 1, 1, 1, 0 …>
        60% on zone of suffering, plus an infinite lizard (always at 1) on the side.

l2:    80% on <1, -2, 3, -4, 5 … >
        20% on zone of happiness, plus four infinite lizards (always at -6.2) on the side.

Which should you choose? Umm… 

The classic thing to want here is some kind of “score” for each world, such that you can multiply this score by the probabilities at stake to get an expected value. But we’ll settle for principles that will just tell us how to choose between lotteries more generally. 

Here I’ll look at a few candidates for principles like this. This isn’t an exhaustive survey; but my hope is that it can give a flavor for the challenge.

VII. Something about averages? 

Could we say something about averages? Like <2, 2, 2, 2, …> is better than <1, 1, 1, 1, …>, right? So maybe we could base the value of an infinite world on something like the limit of (total welfare of the agents counted so far)/(number of agents counted so far). Thus, the 2s have a limiting average of 2; and the 1s, a limiting average of 1; etc.

This approach suffers from a myriad of problems. Most obviously, it’s order-dependent: just as with totals, the limiting average depends on the order in which you count the agents.
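
For instance, reusing the earlier world with infinitely many agents at +2 and infinitely many at -1 (a toy sketch; the helper is mine):

```python
def limiting_average(pattern, n=100_000):
    """Average welfare of the first n agents when the world is enumerated by
    repeating `pattern` (a finite proxy for the limit as n grows)."""
    total = sum(pattern[i % len(pattern)] for i in range(n))
    return total / n

# One and the same world -- infinitely many agents at +2, infinitely many at -1 --
# but the "limiting average" depends entirely on how you line them up:
print(limiting_average([2, -1]))        # ~0.5
print(limiting_average([2, -1, -1]))    # ~0.0
print(limiting_average([2, 2, 2, -1]))  # ~1.25
```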

One solution to order-dependence is to appeal to the limit of the utility per unit space-time volume, as you expand outward from some (all?) points. I cover principles with this flavor below. For now I’ll just note that many of the approach’s other problems will persist.

VIII. New ways of representing infinite quantities?

Could we look for new ways of representing infinite quantities?

Bostrom (2011) suggests mapping infinite worlds (or more specifically: the sums of the utilities in an infinite sequence of value-bearing things) to “hyperreal numbers.” I won’t try to explain this proposal in full here (and I haven’t tried to understand it fully), but I’ll note one of the major problems: namely, that it’s sensitive to an arbitrary choice of “ultra-filter,” such that different choices of ultra-filter can yield different verdicts about which worlds are better, with no apparent principled way to pick between them.

And once you’ve arbitrarily chosen your ultra-filter, Bostrom’s proposal is order-dependent as well. E.g., once you’ve decided that <1, -2, 1, 1, -2, 1, 1 …> is e.g. better than (or worse than, or equal to) an empty world, we can just re-arrange the terms to change your mind. 

(Arntzenius also complains that Bostrom’s proposal gets him Dutch booked. At a glance, though, this looks to me like an instance of the broader set of worries about “Satan’s Apple” type cases (see Arntzenius, Elga and Hawthorne (2004)), which I don’t feel very worried about.)

IX. Something about expanding regions of space-time?

Let’s turn to a more popular approach (i.e., an approach that has multiple adherents): one focused on the utility contained inside expanding bubbles of space-time.

Vallentyne and Kagan (1997) suggest that if we have two worlds with the same locations, and these locations have an “essential natural order,” we look at the differences between the utility contained in a “bounded uniform expansion” from any given location. In particular: if there is some positive number k such that, for any bounded uniform expansion, the utility inside the expansion eventually stays larger by more than k in world i vs. world j, then world i is better.

Thus, for example, in a comparison of <1, 1, 1, 1, …> vs. <2, 2, 2, 2, …>, the utility inside any expansion is bigger in the 2 world. And similarly, in <1, 2, 3, 4 …> vs. <2, 3, 4, 5, …>, expansions in the latter will always be greater by at least 1.

“Essential natural order” is a bit tricky to define, but the key upshot, as I understand it, is that things like agents and person-moments don’t have it (agents can be listed by their height, by their passion for Voltaire, etc), but space-timey-stuff plausibly does (there is a well-defined notion of a “bounded region of space-time,” and we can make sense of the idea that in order to get from a to b, you have to “go through” c). Exactly what counts as a “uniform expansion” also gets a bit tricky (see Arntzenius (2014) for discussion), but one gets the broad vibe: e.g., if I’ve got a growing bubble of space-time, it should be growing at the same rate in all directions (some of the trickiness comes from comparing “directions,” I think).

A major problem for Vallentyne and Kagan (1997) is that their principle only provides an ordinal ranking. But Arntzenius suggests a modification that generalizes to choices amongst lotteries: instead of looking at the actual value at each location, look at the expected value. Thus, if you’re choosing between: 

l3:  50% on <1, 1, 1, 1…>
      50% on <1, 2, 3, 4…>

l4: 50% on <-1, 0, -1, 0…>
     50% on <1, 4, 9, 16…>

Then you’d use the expected values of the locations to “make these lotteries into worlds.” E.g., l3 is equivalent to <1, 1.5, 2, 2.5 …>, and l4 is equivalent to <0, 2, 4, 8 …>; and the latter is better according to Vallentyne-Kagan, so Arntzenius says to choose it. Granted, this approach doesn’t give worlds cardinal scores to use in EV maximization; but hey, at least we can say something about lotteries.
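
Here’s a rough sketch of that recipe, under simplifying assumptions: locations indexed 0, 1, 2, …, and “expansions” that just grow outward from the first location – a stand-in for bounded uniform expansions in this one-ended, one-dimensional setting (the function names are mine):

```python
def ev_world(lottery, n=60):
    """Pointwise expected value over the first n locations, for a lottery given
    as [(probability, welfare_at_location), ...] with locations k = 0, 1, 2, ..."""
    return [sum(p * w(k) for p, w in lottery) for k in range(n)]

def expansion_totals(world):
    """Utility inside expansions of size 1, 2, 3, ... starting from location 0
    (a stand-in for bounded uniform expansions in this one-ended setting)."""
    totals, running = [], 0.0
    for x in world:
        running += x
        totals.append(running)
    return totals

# l3: 50% on <1, 1, 1, 1, ...>, 50% on <1, 2, 3, 4, ...>
l3 = [(0.5, lambda k: 1), (0.5, lambda k: k + 1)]
# l4: 50% on <-1, 0, -1, 0, ...>, 50% on <1, 4, 9, 16, ...>
l4 = [(0.5, lambda k: -1 if k % 2 == 0 else 0), (0.5, lambda k: (k + 1) ** 2)]

ev3, ev4 = ev_world(l3), ev_world(l4)
print(ev3[:4])  # [1.0, 1.5, 2.0, 2.5]
print(ev4[:4])  # [0.0, 2.0, 4.0, 8.0]

# l4's expansions fall behind at first, then pull ahead and stay ahead by an
# ever-growing margin -- so the expansionist comparison favors l4.
print([round(b - a, 1) for a, b in zip(expansion_totals(ev3), expansion_totals(ev4))][:6])
# [-1.0, -0.5, 1.5, 7.0, 16.0, 30.5]
```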

The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff. 

Consider an infinite line of planets, each of which houses a Utopia, and none of which will ever interact with any of the others. On expansionism, it is extremely good to pull all these planets an inch closer together: so good, indeed, as to justify any finite addition of dystopias to the world (thanks to Amanda Askell, Hayden Wilkinson, and Ketan Ramakrishnan for discussion). After all, pulling on the planets so that there’s an extra Utopia every x inches will be enough for the eventual betterness of the uniform expansions to compensate for any finite number of hellscapes. But this looks pretty wrong to me. No one’s thanking you for pulling those planets closer together. In fact, no one noticed. But a lot of people are pissed about the whole “adding arbitrarily large (finite) numbers of hellscapes” thing: in particular, the people living there.

For closely related reasons, expansionism violates both Pareto over agents and Agent-neutrality. Consider the following example from Askell (2018), p. 83, in which three infinite sets of people (x-people, y-people, and z-people) live on an infinite sequence of islands, which are either “Balmy” (such that three out of four agents are happy) or “Blustery” (such that three out of four agents are sad). Happy agents are represented in black, and sad agents in white. 

From Askell (2018), p. 83; reprinted with permission

Here, expansionism likes Balmy more than Blustery – and intuitively, we might agree. But Blustery is better for the y-people, and worse for no one: hence, goodbye Pareto. And there is a welfare-preserving bijection from Balmy to Blustery as well. So goodbye Agent-Neutrality, too. Can’t we at least have one? 

The basic issue, here, is that expansionism’s moral focus is on space-time points (regions, whatever), rather than people, person-moments, and so on. In some cases (e.g. Balmy vs. Blustery), this actually does fit with our intuitions: we like it if the universe seems “dense” with value. But abstractly, it’s pretty alien; and when I reflect on questions like “how much do I want to pay to pull these planets closer together?”, the appeal from intuition starts to wane.

My other big issue with expansionism, at present, is that it fails to provide guidance in lots of cases. Some milder problems are sort of exotic and specific, and they arise even when the worlds being compared have the exact same locations.

I expect bigger problems, though, with worlds that aren’t like that. Consider, for example, the choice between creating a spatially-finite world with an immortal dude trudging from hell to heaven, whose days look like <… -2, -1, 0, 1, 2 …>, and a spatially-infinite universe that only lasts a day, with an infinite line of people whose welfare levels are <… -2, -1, 0, 1, 2 …>. How shall we match up the locations in these worlds? Depending on how we do it, we’ll get different expansionist verdicts. And we’ll hit even worse arbitrariness if we try to e.g. match up locations for worlds with different numbers of dimensions (e.g., pairing locations in a 2-d world with locations in a 4-d one), let alone worlds whose differences reflect the full range of logically-possible space-times.

Maybe you say: whatever, we’ll just go incomparable there. But note that this incomparability infects our lotteries as well. Thus, for example, suppose that we get some space-times, A and B, that just can’t be matched up with each other in any reasonable and/or non-arbitrary way. And now suppose that I’m choosing between lotteries like:

l5: 99% on an A-world of -1s
     1% on a B-world of 2s.

l6: 99% on an A-world of 2s
     1% on a B-world of -1s.

The problem is that because these worlds can’t be matched up, we can’t turn these lotteries into single worlds that we can compare using our expansionist paradigm. So even though it looks kind of plausible that we want l6 here, we can’t actually run the argument.

Maybe you say: Joe, this won’t happen often in practice (this is the vibe one gets from Arntzenius (2014) and Wilkinson (2021)). But I feel like: yes it will? We should already spread non-zero credence across different possible space-times that can’t be matched up with each other, and it doesn’t matter how small the probability on the B-world is in the case above. What’s more, we should have non-zero credence that later, we’ll be able to create all sorts of crazy infinite baby-universes – including ones whose causal relationship to our universe doesn’t support a privileged mapping between their locations and ours.

There are other possible expansionist-ish approaches to lotteries (see e.g. Wilkinson (2020)). But I expect them – and indeed, any approach that requires counterpart relations between spatio-temporal locations — to run into similar problems. 

X. Weight people by simplicity?

Here’s an approach I’ve heard floating around amongst Bay Area folks, but which I can’t find written up anywhere (see here, though, for some similar vibes; and the literature on UDASSA for a closely-related anthropic view that I think some people use, perhaps together with updateless-ish decision theory, to reach similar conclusions). Let’s call it “simplicity weighted utilitarianism” (I’ve also heard “k-weighted,” for “Kolmogorov Complexity”). The basic idea, as I understand it, is to be a total utilitarian, but to weight locations in a world by how easily they can be specified by an arbitrarily-chosen Universal Turing Machine (see my post on the Universal Distribution for more on moves in this vicinity). The hope here is to do for people’s moral weight what UDASSA does for your prior over being a given person in an infinite world: namely, give an infinite set of people weights that sum to 1 (or less).

Thus, for example, suppose that I have an infinite line of rooms, each with numbers written in binary on the door, starting at 0. And let’s say we use simplicity-discounts that go in proportion to 1/(2^(number of bits for the door number+1)). Room 0 gets a 1/4 weighting, room 1 gets 1/4, room 10 gets 1/8, room 11 gets 1/8, room 100 gets 1/16th, and so on. (See here for more on this sort of set-up.) The hope here is that if you fill the rooms with e.g. infinite 1s, you still get a finite total (in this case, 1). So you’ve got a nice cardinal score for infinite worlds, and you’re not obsessing about them.

Except, you are anyway? After all, the utilities can grow as fast as or faster than the discounts shrink. Thus, if the pattern of utilities is just 2^(number of bits for the door number+1), the discounted total is infinite (1+1+1+1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better. Thus, we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.
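
To see the problem concretely, here’s a sketch using a deliberately simple, normalized weighting – room k gets weight 2^-(k+1), a stand-in for whatever UTM-based discount you prefer rather than the bit-length scheme above:

```python
def weighted_total(utility, weight, n=60):
    """Partial sums of weight(k) * utility(k) over the first n rooms."""
    totals, running = [], 0.0
    for k in range(n):
        running += weight(k) * utility(k)
        totals.append(running)
    return totals

# A toy, normalized discount (not the bit-length scheme above, just something
# that sums to 1 across all rooms): room k gets weight 2^-(k+1).
weight = lambda k: 2.0 ** -(k + 1)

# Everyone at 1: the weighted total converges (here, to 1). So far, so good.
print(weighted_total(lambda k: 1, weight)[-1])               # ~1.0

# But let utilities grow as fast as the discount shrinks -- 2^(k+1) in room k --
# and every room contributes exactly 1: the weighted total diverges.
print(weighted_total(lambda k: 2.0 ** (k + 1), weight)[:5])  # [1.0, 2.0, 3.0, 4.0, 5.0]

# A world where everyone is a million times better off diverges in just the same
# way, so the discounted view has no grip on the difference between the two.
print(weighted_total(lambda k: 1e6 * 2.0 ** (k + 1), weight)[:5])
```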

Maybe one wants to say: the utility at a given location isn’t allowed to take on any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t get arbitrarily pleasurable (or whatever you care about experiences being) – or perhaps, to the extent they can, they get correspondingly harder to specify.

Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It feels like denying the hypothetical, rather than handling it. And are we really so confident about how much of what can be fit inside an “experience”?

Regardless, though, this view has other problems as well. Notably: like expansionism, this approach will also pay lots to re-arrange people, pull them closer together, etc (for example, moving from a “one person every million rooms” world to a “one person every room” world). But worse than expansionism, it will do this even in finite worlds. Thus, for example, it cares a lot about moving the happy people in rooms 100-103 to rooms 0-3, even if only four people exist. 

Indeed, it’s willing to create infinite suffering for the sake of this trade. Thus, a world where the first four rooms are at 1 is worth 1/4 + 1/4 + 1/8 + 1/8 = 3/4. But if we fill the rest of the rooms with an infinite line of -1, we only take a -1/4 hit. Indeed, on this view, just the first room at 1 offsets an infinity of suffering in rooms four and up. 

Maybe you say: “Joe, my discounts aren’t going to be so steep.” But it’s not clear to me how to tell which discounts are at stake, for a given UTM. And anyway, regardless of your discounts, the same arguments will hold, but with a different quantitative gloss.

Looks bad to me. 

XI. What’s the most bullet-biting hedonistic utilitarian response we can think of?

As a final sample from the space of possible views, let’s consider the view that seems to me most continuous with the spirit of hardcore, bullet-biting hedonistic utilitarianism. (I’m not aware of anyone who endorses the view I’ll lay out, but Bostrom (2011, p. 29)’s “Extended Decision Rule” is in a similar ballpark). This view doesn’t care about people, or space-time points, or densities of utility per unit volume, or Pareto, or whatever. All it cares about is the amount of pleasure vs. pain in the universe. Pursuant to this single-minded focus, it groups worlds into four types: 

  1. Positive infinities. Worlds with infinite pleasure, and finite pain. Value: ∞.
  2. Negative infinities. Worlds with infinite pain, and finite pleasure. Value: –∞.
  3. Mixed infinities. Worlds with infinite pleasure and infinite pain. Value: worse than positive infinities, better than negative infinities, incomparable to each other and to finite worlds.
  4. Finite worlds. Worlds with finite pleasure and finite pain. Value: ~0, but ranked according to total utilitarianism. Worse than positive infinities, better than negative infinities, incomparable to mixed infinities.

This view’s decision procedure is just: maximize the probability of positive infinity minus the probability of negative infinity (call this quantity “the diff”). Maybe it allows finite worlds to serve as tie-breakers, but this doesn’t really come up in practice: in practice, it’s obsessed with maximizing the diff (see Bostrom (2011), p. 30-31). And it doesn’t have anything to say about comparisons between different mixed infinity worlds, or about trade-offs between mixed infinities and finite worlds. 
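
Here’s a minimal sketch of the “maximize the diff” procedure, representing each world just by its total pleasure and total pain, and ignoring the finite tie-breakers (the names are mine):

```python
from math import inf

def world_type(pleasure, pain):
    """Classify a world by whether its total pleasure and total pain are infinite."""
    if pleasure == inf and pain < inf:
        return "positive infinity"
    if pain == inf and pleasure < inf:
        return "negative infinity"
    if pleasure == inf and pain == inf:
        return "mixed infinity"
    return "finite"

def diff(lottery):
    """P(positive infinity) minus P(negative infinity), for a lottery given as
    [(probability, (total_pleasure, total_pain)), ...]."""
    pos = sum(p for p, w in lottery if world_type(*w) == "positive infinity")
    neg = sum(p for p, w in lottery if world_type(*w) == "negative infinity")
    return pos - neg

# A guaranteed mixed world (infinite pleasure and infinite pain)...
guaranteed_mixed = [(1.0, (inf, inf))]
# ...vs. a near-certainty of a different mixed world, plus a vanishing chance of
# a purely positive infinity (the lizard).
tiny = 1e-30
gamble = [(1.0 - tiny, (inf, inf)), (tiny, (inf, 0))]

print(diff(guaranteed_mixed))  # 0.0
print(diff(gamble))            # 1e-30 -- so the view takes the gamble
```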

Alternatively, if we don’t like all this faff about incomparability (my model of a bullet-biting utilitarian doesn’t), we can set the value of all mixed infinity worlds to 0 (i.e., the positive and negative infinities “cancel out”). Then we’d have a nice ranking with positive infinity infinitely far on the top, finite worlds in between (with mixed infinities sitting at zero), and negative infinities infinitely far at the bottom. 

Call this the “four types” view. To get a sense of its verdicts, it helps to have a few worlds on hand: Infinite Lizard (a single, immortal, barely-conscious, slightly happy lizard, and nothing else); Infinite Speck (an unending stream of barely-perceptible, dust-speck-level pains, and nothing else); Heaven (an infinite number of people living wonderfully happy lives); Hell (an infinite number of people in eternal torment); Heaven + Speck (Heaven, plus an infinite stream of dust-speck-level pains on the side); and Hell + Lollypop (Hell, plus an infinite stream of lollypop-level pleasures on the side).

On the four types view, Infinite Lizard and Heaven are positive infinities; Infinite Speck and Hell are negative infinities; and Heaven + Speck and Hell + Lollypop are both mixed infinities. So, on the view’s decision procedure, any chance of Infinite Lizard outweighs anything that merely shuffles probability among mixed and finite worlds.

We can see the four types view as continuous with a certain kind of “pleasure/pain-neutrality” principle. That is, if we assume that pleasure/pain come in units you can either “swap around” or render equivalent to each other (e.g., there is some amount of lizard time that outweighs a moment in heaven; some number of dust specks that outweigh a moment in hell, etc – a classic utilitarian thought), then in some sense you can build every positive infinity world (or the equivalent) by re-arranging Infinite Lizard, every negative infinity world by re-arranging Infinite Speck, and every type 3 world by re-arranging both in combination. It’s the same (quality-weighted) amount of pleasure and pain regardless, says this view, and amounts of pleasure and pain (as opposed to “densities,” or placements in different people’s lives, or whatever) were what utilitarianism was supposed to be all about. 

There is, I think, a certain logic to it. But also: it’s horrifying. Trading a world where an infinite number of people have infinitely good lives, for a ~guarantee of a world where infinitely many people are eternally tortured, to get a one-in-a-graham’s-number chance of creating a single immortal, barely-conscious lizard? Fuuuuhck that. That’s way worse than paying to pull planets together, or not knowing what to say about worlds with non-matching space-times. It’s worse than the repugnant conclusion; worse than fanaticism; worse than … basically every bullet some philosopher has ever bitten? If this is where “bullet-biting utilitarianism” leads, it has entered a whole new phase of crazy. Just say no, people. Just say no. 

But also: such a choice doesn’t really make sense on its own terms. Infinite Lizard is getting treated as lexically better than Heaven + Speck, because it’s possible to map all of Infinite Lizard’s barely conscious happiness onto something equivalent to all the happiness in Heaven+Speck, with the negative infinity of the dust specks left over. But so, equally, is it possible to map all of Infinite Lizard’s barely-conscious happiness onto everyone’s first nano-seconds in heaven, to map those nano-seconds onto each of their dust specks in a way that would more than outweigh the dust-specks in finite contexts, and to leave everyone with an infinity of fully-conscious happiness left over. That is, the “Infinite Lizard Has All of Heaven’s Happiness” and “No Amount Of Time In Heaven Can Outweigh The Dust Specks” mappings aren’t, actually, privileged here: one can just as easily interpret Heaven + Speck as ridiculously better than Infinite Lizard (indeed, this is my default stance). But the four types view has fixated on these particular mappings anyway, and condemned an infinity of people to eternal torture for their sake.

(Alternatively, on yet a third version of the four-types view, we can try to take the arbitrariness of these mappings more seriously, and say that all mixed worlds are incomparable to everything, including positive and negative infinities. This avoids mandating trades from Heaven + Speck to Hell + Lollypop for a tiny chance of the lizard (such a choice is now merely “permissible”), but it also makes an even larger set of choices rationally permissible: for example, choosing Hell + Lollypop over pure Heaven. And it permits money-pumps that lead you from Heaven, to Hell + Lollypop, and then to Hell.)

XII. Bigger infinities and other exotica

OK, we’ve now touched on five possible approaches to infinite ethics: averages, hyperreals, expansionism, simplicity weightings, and the four types view. There are others in the literature, too (see e.g. Wilkinson (2020) and Easwaran (2021) – though I believe that both of these proposals require that the two worlds have exactly the same locations (maybe Wilkinson’s can be rejiggered to avoid this?) – and Jonsson and Voorneveld (2018), which I haven’t really looked at). I also want to note, though, ways in which the discussion of all of these has been focused on a very narrow range of cases. 

In particular: we’ve only ever been talking about the smallest possible infinities – i.e., “countable infinities.” This is the size of the set of the natural numbers (and the rationals, and the odd numbers, and so on), and it makes it possible to do things like list all the locations in some order. But there is an unending hierarchy of larger infinities, too, create-able by taking power-sets over and over forever (see Cantor’s theorem). Indeed, according to this video, some people even want to posit a size of infinity inaccessible via power-setting – an infinity whose role, with respect to taking power-sets, is analogous to the role of countable infinities, with respect to counting (i.e., you never get there). And some go beyond that, too: the video also contains the following diagram (see also here), which starts with the “can’t get there via power-setting” infinity at the bottom (“inaccessible”), and goes from there (centrally, according to the video, by just adding axioms declaring that you can). 

(Diagram of the large-cardinal hierarchy, from here.)

I’m not a mathematician (as I expect this post has already made clear in various places), but at a glance, this looks pretty wild. “Almost huge?” “Superhuge?” Also, not sure where this fits with respect to the diagram, but Cantor was apparently into the idea of the “Absolute Infinite,” which I think is supposed to be just straight up bigger than everything period, and which Cantor “linked to the idea of God.” 

Now, relative to countably infinite worlds, it’s quite a bit harder to imagine worlds with e.g. one person for every real number. And imagining worlds with a “strongly Ramsey” number of people seems likely to be a total non-starter, even if one knew what “strongly Ramsey” meant, which I don’t. Still, it seems like the infinite fanatic should be freaking out (drooling?). After all, what’s the use obsessing about the smallest possible infinities? What happened to scope-sensitivity? Maybe you can’t imagine bigger-infinity worlds; maybe the stuff on that chart is totally confused – but remember that thing about non-zero credences? The lizards could be so much larger, man. We have to try for an n-huge lizard at least. And really (wasn’t it obvious the whole time?), we should be trying to create God. (A friend comments, something like: “God seems too comprehensible, here. N-huge lizards seem bigger.”)

More importantly, though: whether we’re obsessing about infinities or not, it seems very likely that trying to incorporate merely uncountable infinities (let alone “supercompact” ones, or whatever) into our lotteries is going to break whatever ethical principles we worked so hard to construct for the countably infinite case. In this sense, focusing purely on countable infinities seems like a recipe for the same kind of rude awakening that countable infinities give to finite ethics. Perhaps we should try early to get hip to the pattern. 

And we can imagine other exotica breaking our theories as well. Thus, for example, very few theories are equipped to handle worlds with infinite value at a single “location.” And expansionism relies on all the worlds we’re considering having something like a “space-time” (or at least, a “natural ordering” of locations). But do space-timey worlds, or worlds with any natural orderings of “locations,” exhaust the worlds of moral concern? I’m not sure. Admittedly, I have a tough time imagining persons, experience-like things, or other valuable stuff existing without something akin to space-time; but I haven’t spent much time on the project, and I have non-zero credence that if I spent more, I’d come up with something. 

XIII. Maybe infinities are just not a thing? 

When we wake up brushed by panic in the dark
our pupils grope for the shape of things we know.

Sarah Howe

But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a 30% chance that probabilities need not add up to 100%? What percent of your net worth should you pay for such a lottery, vs. a guaranteed avocado sandwich? Must you learn to answer, lest your ethics break, both in theory and in practice?

One feels like: no. Indeed, one senses that a certain type of plot has been lost, and that we should look for less demanding standards for our lottery-choosing – ones that need not accommodate literally every wacked-out, probably-non-sensical possibility we haven’t thought of yet. 

With this in mind, though, perhaps one is tempted to give a similar response to countable infinities as well. “Look, dude, just like my ethics doesn’t need to be able to handle ‘the universe is a square circle,’ it doesn’t need to be able to handle infinite worlds, either.” 

But this dismissal seems too quick. Infinite worlds seem eminently possible. Indeed, we have very credible scientific theories that say that our actual universe contains a countably infinite number of people, credible decision theories that say that we can have infinite influence on that universe, widely-accepted religions that posit infinite rewards and punishments, and a possibly very intense future ahead of us where baby-universes/wormholes/hyper-computers etc appear much more credible, at least, than “consciousness = cheesy-bread.” What’s more, we have standard ethical theories that break quickly on encounter with readily-imaginable cases that we continue to have strong ethical intuitions about (e.g., Heaven + Speck vs. Hell + Lollypop). For these reasons, it seems to me that we have much more substantive need to deal with countable infinities in our ethics than we do with square-circle universes.  

Still, my impression is that a relatively common response to infinite ethics is just: “maybe somehow infinities actually aren’t a thing? For example: they’re confusing, and they lead to weird paradoxes, like building the sun out of a pea (video), and messed up stuff with balls in boxes (video). Also: I don’t like some of these infinite ethics problems you’re talking about” (see here for some more considerations). And indeed, despite their role in e.g. cosmology (let alone the rest of math), some philosophers of math (e.g., “ultrafinitists”) deny the existence of infinities. Naively, this sort of position gets into trouble with claims like “there is a largest natural number” (a friend’s reaction: “what about that number plus one?”), but apparently there is ultra-finitist work trying to address this (something about “indefinitely large numbers”? hmm…). 

My own take, though, is that resting the viability of your ethics on something like “infinities aren’t a thing” is a dicey game indeed, especially given that modern cosmology says that our actual concrete universe is very plausibly infinite. And as Bostrom (2011, p. 38) notes, conditioning on the non-thing-ness of infinities (or ignoring infinity-involving possibilities) leads to weird behavior in other contexts – e.g., refusing to fund scientific projects premised on infinity-involving hypotheses, insisting that the universe is actually finite even as more evidence comes in, etc. And more broadly, it just looks like denial. It looks like covering your ears and saying “la-la-la.”

XIV. The death of a utilitarian dream

I bite all the bullets.

— A friend of mine, pre-empting an objection to his utilitarianism.

The broad vibe I’m trying to convey, here, is that infinite ethics is a rough time. Even beyond “torturing any finite number of people for any probability of an infinite lizard,” we’ve got bad impossibility results even just for ordinal rankings; we’ve got a smattering of theories that are variously incomplete, order-dependent, Pareto-violating, and otherwise unattractive/horrifying; and we’ve got an infinite hierarchy of further infinities, waiting in the wings to break whatever theory we happen to settle on. It’s early days (there isn’t that much work on this topic, at least in analytic ethics), but things are looking bleak. 

OK, but: why does this matter? I’ll mention a few reasons. 

The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go. 

Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground. 

Indeed, even people who reject this dream can feel its allure. If you’re a deontologist, scrambling to add yet another epicycle to your already-complex and non-exhaustive principles, to handle yet another counter-example (e.g. the fat man lives in a heavy metal crate, such that his body itself won’t stop the trolley, but he’ll die if the crate moves), you might hear, sometimes, a still, small voice saying: “You know, the utilitarians don’t have this kind of problem. They’ve got a nice, simple, coherent theory that takes care of this case and a zillion others in one fell swoop, including all possible lotteries (something my deontologist friends barely ever talk about). And they always get more expected net pleasure in return. They sure have it easy…”[6] In this sense, “maximize expected net pleasure” can hover in the background as a kind of “default.” Maybe you don’t go for it. But it’s there, beckoning, and making a certain kind of sense. You could always fall back on it. Perhaps, indeed, you can feel it relentlessly pulling on you. Perhaps a part of you fears the force of its simplicity and coherence. Perhaps a part of you suspects that ultimately (horribly?), it’s the way to go.  

But I think infinite ethics changes this picture. As I mentioned above: in the land of the infinite, the bullet-biting utilitarian train runs out of track. You have to get out and wander blindly. The issue isn’t that you’ve become fanatical about infinities: that’s a bullet, like the others, that you’re willing to bite. The issue is that once you’ve resolved to be 100% obsessed with infinities, you don’t know how to do it. Your old thing (e.g., “just sum up the pleasure vs. pain”) doesn’t make sense in infinite contexts, so your old trick – just biting whatever bullets your old thing says to bite – doesn’t work (or it leads to horrific bullets, like trading Heaven + Speck for Hell + Lollypop, plus a tiny chance of the lizard). And when you start trying to craft a new version of your old thing, you run headlong into Pareto-violations, incompleteness, order-dependence, spatio-temporal sensitivities, appeals to persons as fundamental units of concern, and the rest. In this sense, you start having problems you thought you transcended – problems like the problems the other people had. You start having to rebuild yourself on new and jankier foundations. You start writing whole papers about a few counterexamples, using principles that you know don’t cover all the choices you might need to make, even as you sense the presence of further problems and counterexamples just offscreen. Your world starts looking stranger, “patchier,” more complicated. You start to feel, for the first time, genuinely lost.

To be clear: I’m not saying that infinite ethics is hopeless. To the contrary, I think some theories are better than others (expansionism is probably my current favorite), and that further work on the topic is likely to lead to further clarity about the best overall response. My point is just that this response isn’t going to look like the simple, complete, neutrality-respecting, totalist, hedonistic, EV-maximizing utilitarianism that some hoped, back in the day, would answer every ethical question – and which it is possible to treat as a certain kind of “fallback” or “default.” Maybe the best view will look a lot like such a utilitarianism in finite contexts – or maybe it won’t. But regardless, a certain type of dream will have died. And the fact that it dies eventually should make it less appealing now. 

XV. Everyone’s problem

That said, infinite ethics is a problem for everyone, not just utilitarians. Everyone (even the virtue ethicists) needs to know how to choose between Heaven + Speck vs. Hell + Lollypop, given the opportunity. Everyone needs decision procedures that can handle some probability of doing infinite things. Faced with impossibility results, everyone has to give something up. And sometimes that stuff you give up matters in finite contexts, too.

A salient example to me, here, is spatio-temporal neutrality. Utilitarian or no, most philosophers want to deny that a person’s location in space and time has intrinsic ethical significance. Indeed, claims in this vicinity play an important role in standard arguments against discounting the welfare of future people, and in support of “longtermism” more broadly (e.g., “location in time doesn’t matter, there could be a lot of people in the future, so the future matters a ton”). But notably, various prominent views in infinite ethics (notably, expansionist views; but also “simplicity-weightings”) reject spatio-temporal neutrality. On these views, locations in space and time matter a lot – enough, indeed, to make e.g. pulling infinite happy planets an inch closer together worth any finite amount of additional suffering. On its own, this isn’t enough to get conclusions like “people matter more if they’re nearer to me in space and time” (the thing that longtermism most needs to reject) – but it’s an interesting departure from “location in spacetime is nothing to me,” and one that, if accepted, might make us question other neutrality-flavored intuitions as well.

And the logic that leads to non-neutrality about space-time is understandable. In particular: infinite worlds look and behave very differently depending on how you order their “value-bearing locations,” so if your view focuses on a type of location that lacks a natural order (e.g., agents, experiences, etc), it often ends up indeterminate, incomplete, and/or in violation of Pareto for the locations in question. Space-time, by contrast, comes with a natural order, so focusing on it cuts down on arbitrariness, and gives us more structure to work with. 
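
As a toy illustration of that order-dependence (a sketch, not an example from the post): take a world containing countably many locations of value +1 and countably many of value −1; the "running total" of value depends entirely on the order in which you enumerate them.

\[
1 - 1 + 1 - 1 + \cdots \quad\text{has partial sums}\quad 1,\, 0,\, 1,\, 0,\, \ldots
\]
\[
1 + 1 - 1 + 1 + 1 - 1 + \cdots \quad\text{has partial sums}\quad 1,\, 2,\, 1,\, 2,\, 3,\, 2,\, 3,\, 4,\, \ldots \;\to\; \infty,
\]

even though both enumerations run through exactly the same collection of locations.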

Something somewhat analogous happens, I think, with “persons” vs. “experiences” as units of concern. Some people (especially, in my experience, utilitarian-types) are tempted, in finite contexts, to treat experiences (or “person-moments”) as more fundamental, since persons can give rise to various Parfitian problems. But in infinite contexts, refusing to talk about persons makes it much harder to do things like distinguish between worlds like Heaven + Speck vs. Hell + Lollypop, where our intuition is centrally driven, I think, by thoughts like “In Heaven + Speck, everyone’s life is infinitely good; in Hell + Lollypop, everyone’s life is infinitely bad.” So it becomes tempting to bring persons back into the picture (see Askell (2018), p. 198, for more on this).

We can see the outlines of a broader pattern. Finite ethics (or at least, a certain reductionist kind) often tries to ignore structure. It calls more and more things (e.g., the location of people in space-time, the locations of experiences in lives) irrelevant, so that it can hone in on the true, fundamental unit of ethical concern. But infinite ethics needs structure, or else everything dissolves into re-arrangeable nonsense. So it often starts adding back in what finite ethics threw out. One is left with a sense that perhaps, there is even more structure to be not-ignored. Perhaps, indeed, the game of deriving the value of the whole from the value of some privileged type of part is worse than one might've thought (see Chappell (2011) for some considerations, h/t Carl Shulman). Perhaps the whole is primary. 

These are a few examples of finite-ethical impulses that infinities put pressure on. I expect there to be many others. Indeed, I think it’s good practice, in finite ethics, to make a habit of checking whether a given proposal breaks immediately upon encounter with the infinite. That doesn’t necessarily mean you need to throw it out. But it’s a clue about its scope and fundamentality.

XVI. Nihilism and responsibility

Vain are the thousand creeds
That move men’s hearts: unutterably vain…

Emily Brontë

Perhaps one looks at infinite ethics and says: this is an argument for nihilism. In particular: perhaps one was up for some sort of meta-ethical realism, if the objectively true ethics was going to have certain properties that infinite ethics threatens to deny – properties like making a certain sort of intuitively resonant sense. Perhaps, indeed, one had (consciously or unconsciously) tied one’s meta-ethical realism to the viability of a certain specific normative ethical theory – for example, total hedonistic utilitarianism – which seemed sufficiently simple, natural, and coherent that you could (just barely) believe that it was written into the fabric of an otherwise inhuman universe. And perhaps that theory breaks on the rocks of the infinite.

Or perhaps, more generally, infinite ethics reminds us too hard of our cognitive limitations; of the ways in which our everyday morality, for all its pretension to objectivity, emerges from the needs and social dynamics of fleshy creatures on a finite planet; of how few possibilities we are in the habit of actually considering; of how big and strange the world can be. And perhaps this leaves us, if not with nihilism, then with some vague sense of confusion and despair (or perhaps, more concretely, it makes us think we’d have to learn more math to dig into this stuff properly, and we don’t like math).

I don’t think there’s a clean argument from “infinite ethics breaks lots of stuff I like” to “meta-ethical realism is false,” or to some vaguer sense that Cosmos of value hath been reduced to Chaos. But I feel some sympathy for the vibe. 

I was already pretty off-board with meta-ethical realism, though (see here and here). And for anti-realists, despairing or giving up in the face of the infinite is less of an option. Anti-realists, after all, are much less prone to nihilism: they were never aiming to approximate, in their action, some ethereal standard that might or might not exist, and which infinities could refute. Rather, anti-realists (or at least, my favored variety) were always choosing [LW · GW] how to respond to the world as it is (or might be), and they were turning to ethics centrally as a means of becoming more intentional, clear-eyed, and coherent in their choice-making. That project persists in its urgency, whatever the unboundedness of the world, and of our influence on it. We still need to take responsibility for what we do, and for what it creates. We still harm, or help – only, on larger scales. If we act incoherently, we still step on our own feet, burning what we care about for nothing – only, this time, the losses can be infinite. Perhaps coherence is harder to ensure. But the stakes are higher, too. 

The realists might object: for the anti-realist, "we need to take responsibility for how we respond to infinite worlds" is too strong. And fair enough: at the deepest level, the anti-realist doesn't "need" or "have" to do anything. We can ignore infinities if we want, in the same sense that we can let our muscles go limp, or stay home on election day. What we lose, when we do this, is simply the ability to intentionally steer the world, including the infinite world, in the directions we care about – and we do, I think, care about some infinite things, whatever the challenges this poses. That is: if, in response to the infinite, we simply shrug, or tune out, or wail that all is lost, then we become "passive" about infinite stuff. And to be passive with respect to X is just: to let what happens with X be determined by some set of factors other than our agency. Maybe that'll work out fine with infinities; but maybe, actually, it won't. Maybe, if we thought about it more, we'd see that infinities are actually, from our perspective, quite a big deal indeed – a sufficiently big deal that "whatever, this is hard, I'll ignore it" no longer looks so appealing.

I’m hoping to write more about this distinction between “agency” and “passivity” at some point (see here [LW · GW] for some vaguely similar themes). For now I’ll mostly leave it as a gesture. I want to add, though, that given how far away we are (in my opinion) from a satisfying and coherent theory of infinite ethics, I expect that a good amount of the agency we aim at the infinite will remain, for some time, pretty weak-sauce in terms of “steering stuff in consistent directions I’d endorse if I thought about it more.” That is, while I don’t think that we should give up on approaching infinities with intentional agency, I think we should acknowledge that for a while, we’re probably going to suck at it.  

XVII. Infinities in practice

If we can think
this far, might not our eyes adjust to the dark?

Sarah Howe

What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: “This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it … even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust!”

Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: “You are a god and never have I heard anything more divine.”

Friedrich Nietzsche

Heaven lies about us in our infancy!

William Wordsworth

I’ll close with a few thoughts on practical implications. 

Perhaps we suck at infinite ethics now, both in theory and in practice. Someday, though, we might get better. In particular: if humanity can survive long enough to grow profoundly in wisdom and power, we will be able to understand the ethics here fully – or at least, much more deeply. We’ll also know much more about what sort of infinite things we are able to do, and we’ll be much better able to execute on infinite projects we deem worthwhile (building hyper-computers, creating baby-universes, etc). Or, to the extent we were always doing infinite things (for example, acausally), we’ll be wiser, more skillful, and more empowered on that front, too.

And to be clear: I don't think that understanding the ethics, here, is going to look like "patching a few counterexamples to expansionism" or "figuring out how to deal with lotteries involving incomparable outcomes." I'm imagining something closer to: "understanding ~all the math you might ever need, including everything related to all the infinities on the completed version of that crazy chart above; solving all of cosmology, physics, metaphysics, epistemology, and so on, too; probably reconceptualizing everything in fundamentally new and more sophisticated terms — terms that creatures at our current level of cognitive capacity can't grok; then building up a comprehensive ethics and decision theory (assuming those terms still make sense), informed by this understanding, and encompassing of all the infinities that this understanding makes relevant." It may well make sense to get started on this project now (or it might not); but we're not, as it were, a few papers away. 

I don’t, though, expect the output of such a completed understanding to be something like: “eh, infinities are tricky, we decided to ignore them,” which as far as I can tell is our current default. To the contrary, I can readily imagine future people being horrified at the casual-ness of our orientation towards the possibility of infinite benefits and harms. “They knew that an infinite number of people is more than any finite number, right? Did they even stop to think about it?” This isn’t to say that future people will be fanatical about infinities (as I noted above, I expect that the right thing to say about fanaticism will emerge even just from considering the finite case). But the argument for taking infinite benefits and harms very seriously isn’t especially complex. It’s the type of thing you can imagine future people being pretty adamant about.

On the other hand, if someone comes to me now and says: “I’m doing X crazy-sounding thing (e.g., quitting my bio-risk job to help break us out of the simulation; converting to Catholicism because it seemed to me slightly more likely than all the other religions; following up on that one drug experience with those infinite spaghetti elves), because of something about infinite ethics,” I’m definitely feeling nervous and bad. As ever with the wackier stuff on this blog (and indeed, even with the less-wacky stuff), my default attitude is: OK (though not risk-free) to incorporate into your worldview in grounded and suitably humble ways; bad to do brittle and stupid stuff for the sake of. I trust a wise and empowered humanity to handle the wacky stuff well (or at least, much better). I trust present-day humans who’ve thought about it for a few hours/weeks/years (including myself) much less. So as a first pass, I think that what it looks like, now, to take infinite ethics seriously is: to help our species make it to a wise and empowered future, and to let our successors take it from there. 

That said, I do think that reflection on infinite ethics can (very hazily) inform our backdrop sense of how strange and different a wise future’s priorities might be. In particular: of the options I’ve considered (and setting aside simulation shenanigans), to my mind the most plausible way of doing infinitely good stuff is via exerting optimally wise acausal influence on an infinitely large cosmology. That is, my current attitude towards things like baby-universes and hyper-computers is something like: “hard to totally rule out.” (And I’d say the same thing, in a more skeptical tone, about various religions.) But I’m told that my attitude towards infinitely large cosmologies should be somewhere between: “plausible” and “probably,” and my current attitude towards some sort of acausal decision theory is something like: “best guess view.” So this leaves me, already, with very macroscopic credences on all of my actions exerting infinite amounts of (acausal) influence. It’s hard to really absorb — and I haven’t, partly because I haven’t actually looked into the relevant cosmology. But if I had to guess about where the attention of future infinity-oriented ethical projects would turn, I’d start with this type of thing, rather than with hypercomputers, or Catholicism. 

Does this sort of infinite influence, maybe, just add up to normality [LW · GW]? Maybe, for example, we use some sort of expansionism to say that you should just make your local environment as good as possible, thereby acausally making an infinite number of other places in the universe better too, thereby improving the whole thing by expansionist lights? If so, then maybe we can just live our finite lives as usual, but in an infinite number of places at once? Our lives would simply carry, on this view, the weight of Nietzsche’s eternal return – only spread out across space-time, rather than in an endless loop. We’d have a chance to confront a version of Nietzsche’s demon in the real world – to find out if we rejoice, or if we gnash our teeth.

I do think we’d confront this demon in some form. But I’m skeptical it would leave our substantive priorities untouched (and anyway, we’d need to settle on a theory of infinite ethics to get this result). In particular, I expect this sort of “acausal influence across the universe” perspective to expand beyond very close copies of you, to include acausal interaction with other inhabitants of the universe (including, perhaps, ones very different from you) whose decisions are nevertheless correlated with yours (see e.g. Oesterheld (2017) for some discussion). And naively, I expect this sort of interaction to get pretty weird. 

Even beyond this particular form of weirdness, though, I think visions of future civilizations that put substantive weight on infinity-focused projects are just different in flavor from the ones that emerge from naively extrapolating your favorite finite-ethical views (though even with infinities to the side, I expect such extrapolations to mislead). Thus, for example, total utilitarian types often think that the main game for a wise future is going to be “tiling the accessible universe” with some kind of intrinsically optimal value-structure (e.g., paperclips; oh wait, no…), the marginal value of which stays constant no matter how much you’ve already got. So this sort of view sees e.g. a one-in-a-billion chance of controlling a billion galaxies as equivalent in expected value to a guarantee of one galaxy. But even as infinities cause theoretical problems for total utilitarianism, they also complicate this sort of voracious appetite for resources: relative to “hedonium per unit galaxy,” it is less clear that the success and value of infinity-oriented projects scales linearly with the resources involved (h/t Nick Beckstead for suggesting this consideration) – though obviously, resources are still useful for tons of things (including, e.g., building hypercomputers, acausal bargaining with the aliens – you know, the usual).

All in all, I currently think of infinite ethics as a lesson in humility: humility about how far standard ethical theory extends; humility about what priorities a wise future might bring; humility about just how big the world (both the abstract world, and the concrete world) can be, and how little we might have seen or understood. We need not be pious about such humility. Nor need we preserve or sanctify the ignorance it reflects: to the contrary, we should strive to see further, and more clearly. Still, the puzzles and problems of the infinite can be evidence about brittleness, dogmatism, over-confidence, myopia. If infinities break our ethics, we should pause, and notice our confusion, rather than pushing it under the rug. Confusion, as ever, is a clue.  

  1. ^

    From Sean Carroll (13:01 here): “Yeah, I’ll just say very quickly, I think that, just so everyone knows, this is an open question in cosmology. … The possibility’s on the table, the universe is infinite, there’s an infinite number of observers of all different kinds, and there’s a possibility on the table that the universe is finite, and there’s not that many observers, we just don’t know right now.” 

    Bostrom (2011): “Recent cosmological evidence suggests that the world is probably infinite. [continued in footnote] In the standard Big Bang model, assuming the simplest topology (i.e., that space is singly connected), there are three basic possibilities: the universe can be open, flat, or closed. Current data suggests a flat or open universe, although the final verdict is pending. If the universe is either open or flat, then it is spatially infinite at every point in time and the model entails that it contains an infinite number of galaxies, stars, and planets. There exists a common misconception which confuses the universe with the (finite) “observable universe”. But the observable part—the part that could causally affect us— would be just an infinitesimal fraction of the whole. Statements about the “mass of the universe” or the “number of protons in the universe” generally refer to the content of this observable part; see e.g. [1]. Many cosmologists believe that our universe is just one in an infinite ensemble of universes (a multiverse), and this adds to the probability that the world is canonically infinite; for a popular review, see [2].”

    Wilkinson (2021): “you might be disappointed to find that the world around you is infinite in the relevant sense. I am sorry to disappoint you, but contemporary physics suggests just that. The widely accepted flat-lambda model predicts that our universe will tend towards a stable state and will then remain in that state for infinite duration (Wald 1983; Carroll 2017). Also widely accepted, the inflationary view posits that our world is spatially infinite, containing infinitely many other ‘bubble’ universes beyond our cosmic horizon (Guth 2007). But that’s not all they predict. Take any small-scale phenomenon which is morally valuable e.g., perhaps a human brain experiencing the thrill of reading philosophy for a given duration. Each of the above physical views predicts that our universe, in its infinite volume, will contain infinitely many such thrills (Garriga and Vilenkin 2001; Linde 2007; de Simone 2010; Carroll 2017).”

  2. ^

    I’m ignoring situations where e.g. if I eat a sandwich today, then this changes what happens to an infinite number of Boltzmann brains later, but in a manner I can’t ever predict. That said, this sort of scenario does raise problems: see e.g. Wilkinson (2021) for some discussion.

  3. ^

    See also Dyson (1979, p. 455-456) for more on possibilities for infinite computation.

  4. ^

    See MacAskill: “It’s not the size of the bucket that matters, but the size of the drop” (p. 25).

  5. ^

    This image is partly inspired by Ajeya Cotra’s discussion of the “crazy train” here.

  6. ^

    An example from an unpublished paper by Ketan Ramakrishnan: "If this is correct, some other account of suboptimal supererogatory harming is called for. But I have been unable to figure out how such an account would work. And our exhausting casuistical gymnastics suggest that, whatever the best such account turns out to be, its mechanics are likely to prove extremely intricate. Perhaps a satisfying account will eventually be found, of course. But an alternative diagnosis of our predicament is also available. The foundational elements of ordinary, deontological moral thought – stringent duties against harming and using other people without their consent, wide prerogatives to refrain from harming ourselves in order to aid other people – are highly compelling on first inspection. But they prove, on closer view, to be composed of byzantine causal structures whose moral significance is open to serious doubt. Our present difficulties may thus be symptomatic of wider instabilities in the deontological architecture. Perhaps we should renounce any moral view that is built on such intricate causal structures. Perhaps we should just accept, with consequentialists, that "well-being comes first. The weal and woe of human beings comes first."'

70 comments

comment by Scott Garrabrant · 2022-02-03T02:22:27.637Z · LW(p) · GW(p)

I didn't really read much of the post, but I think you are rejecting weighting people by simplicity unfairly here.

Imagine you flip a fair coin until it comes up tails, and either A) you suffer if you flip >100 times, or B) you suffer if you flip <100 times. I think you should prefer action A. 

However, if you think about there being a countable collection of possible outcomes, one for each possible number of flips, you are creating "infinite" suffering (with A) rather than "finite" suffering (with B), so you should prefer B.

I think the above argument for B is wrong and similar to the argument you are giving.

Note that the choice of where we draw the boundary between outcomes mattered, and similarly the choice of where we draw the boundary between people in your reasoning matters. You need to make choices about what counts as different people vs same people for this reasoning to even make sense, and even if it does make sense, you are still not taking seriously the proposal that we care about the total simplicity of good/bad experience rather than the total count of good/bad experience. 

Indeed, I think the lesson of the whole infinite ethics thing is mostly just grappling with the fact that we don't understand how to talk about total count in the infinite case. But I don't see the argument for wanting to talk about count in the first place. It feels like a property of where you are drawing the boundaries, rather than what is actually there. In the simple cases, we can just draw boundaries between people and declare that our measure is the uniform measure on this finite set, but then once we declare that to be our measure, we interact with it as a measure.
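
To make the two readings of the coin example concrete, here is a minimal sketch in Python (assuming "suffer if you flip >100 times" means the first tail arrives after flip 100, and "<100 times" means it arrives before flip 100):

from fractions import Fraction

# Flip a fair coin until the first tail; n is the flip on which that tail lands.
# P(first tail on flip n) = (1/2)^n.
def p_first_tail_on(n):
    return Fraction(1, 2 ** n)

# Option A: suffer if you flip more than 100 times (n > 100).
p_suffer_A = 1 - sum(p_first_tail_on(n) for n in range(1, 101))    # = 2^-100

# Option B: suffer if you flip fewer than 100 times (n < 100).
p_suffer_B = sum(p_first_tail_on(n) for n in range(1, 100))        # = 1 - 2^-99

print(p_suffer_A == Fraction(1, 2 ** 100))      # True: suffering under A is vanishingly unlikely
print(p_suffer_B == 1 - Fraction(1, 2 ** 99))   # True: suffering under B is nearly certain

# Counting suffering *outcomes* instead of weighting them by probability flips the verdict:
# A has infinitely many suffering outcomes (n = 101, 102, ...), B only finitely many (n = 1, ..., 99).

On the measure-based reading A is clearly better; only the outcome-counting reading favors B.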

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2022-02-03T03:05:40.036Z · LW(p) · GW(p)

Sure, we should put more weight on the suffering from flipping a single tail in B than the suffering from flipping a thousand heads followed by a tail in A (by a factor of 2^1000 times). But (at least intuitively) that's because the former is more probable; there's (roughly speaking) 2^1000 universes in which we flip a single tail for every one where we flip a thousand heads followed by a tail. This doesn't generally seem relevant to the scenarios described in the post, where we're specifying possibilities to compare, but of course it's worth tracking in general, where simple phenomena are more likely.

Replies from: interstice
comment by interstice · 2022-02-03T19:06:03.698Z · LW(p) · GW(p)

there’s (roughly speaking) universes in which we flip a single tail for every one where we flip a thousand heads followed by a tail

This follows only if you assume that all probability measures must derive from some underlying uniform measure over a finite set, but there's no reason that this has to be the case. In quantum mechanics, for instance, there's no obvious underlying set on which the uniform measure gives the Born probabilities. Or if we're considering an infinite set of possibilities like in this post, there is no uniform probability measure we can use. That's arguably the source of the paradoxes, and one possible resolution is to allow non-uniform measures such as the simplicity prior.

comment by Connor Leahy (NPCollapse) · 2022-02-01T11:35:49.029Z · LW(p) · GW(p)

This was an excellent post, thanks for writing it!

But, I think you unfairly dismiss the obvious solution to this madness, and I completely understand why, because it's not at all intuitive where the problem in the setup of infinite ethics is. It's in your choice of proof system and interpretation of mathematics! (Don't use non-constructive proof systems!) 

This is a bit of an esoteric point and I've been planning to write a post or even sequence about this for a while, so I won't be able to lay out the full arguments in one comment, but let me try to convey the gist (apologies to any mathematicians reading this and spotting stupid mistakes I made):

Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough. 

This is where things go wrong. The actual credence of seeing a hypercomputer is zero, because a computationally bounded observer can never observe such an object in such a way that differentiates it from a finite approximation. As such, you should indeed have a zero percent probability of ever moving into a state in which you have performed such a verification; it is a logical impossibility. Think about what it would mean for you, a computationally bounded approximate Bayesian, to come into a state of belief that you are in possession of a hypercomputer (and not a finite approximation of a hypercomputer, which is just a normal computer. Remember, arbitrarily large numbers are still infinitely far away from infinity!). What evidence would you have to observe for this belief? You would need to observe literally infinite bits, and your credence in observing infinite bits should be zero, because you are computationally bounded! If you yourself are not a hypercomputer, you can never move into the state of believing a hypercomputer exists.

This is somewhat analogous to how Solomonoff inductors cannot model a universe containing themselves. Solomonoff inductors are "one step up in the halting hierarchy" from us and cannot model universes that have "super-infinite objects" like themselves in them. Similarly, we cannot model universes that contain "merely infinite" objects (and, by transitivity, any super-infinite objects either); our Bayesian reasoning does not allow it!

I think the core of the problem is that, unfortunately, modern mathematics implicitly accepts classical logic as its basis of formalization, which is a problem because the Law of Excluded Middle is an implicit halting oracle. The LEM says that every logical statement is either true or false. This makes intuitive sense, but is wrong. If you think of logical statements as programs whose truth value we want to evaluate by executing a proof search, there are, in fact, three "truth values": true, false, and uncomputable! This is a necessity because any axiom system worth its salt is Turing complete (this is basically what Gödel showed in his incompleteness theorems; he used Gödel numbers because Turing machines didn't exist yet to formalize the same idea) and therefore has programs that don't halt. Intuitionistic Logic (the logic we tend to formalize type theory and computer science with) doesn't have this problem of an implicit halting oracle, and in my humble opinion should be used for the formalization of mathematics, on pain of trading infinite universes for an avocado sandwich and a big lizard if we use classical logic. 

My own take, though, is that resting the viability of your ethics on something like “infinities aren’t a thing” is a dicey game indeed, especially given that modern cosmology says that our actual concrete universe is very plausibly infinite

Note that us using constructivist/intuitionistic logic does not mean that "infinities aren't a thing", it's a bit more subtle than that (and something I have admittedly not fully deconfused for myself yet). But basically, the kind of "infinities" that cosmologists talk about are (in my ontology) very different from the "super-infinities" that you get in the limit of hypercomputation. Intuitively, it's important to differentiate "inductive infinities" ("you need arbitrarily many steps to complete this computation") and "real infinities" ("the solution only pops out after infinity steps have been complete" i.e. a halting oracle).

The difference makes the most sense from the perspective of computational complexity theory. The universe is a "program" of complexity class PTIME/BQP (BQP is basically just the quantum version of PTIME), which means that you can evaluate the "next state" of the universe with at most PTIME/BQP computation. Importantly, this means that even if the universe is inflationary and "infinite", you could evaluate the state of any part of it in (arbitrarily large) finite time. There are no "effects that emerge only at infinity". The (evaluation of a given arbitrary state of the) universe halts. This is very different to the kinds of computations a hypercomputer is capable of (and less paradoxical). Which is why I found the following very amusing:

baby-universes/wormholes/hyper-computers etc appear much more credible, at least, than “consciousness = cheesy-bread.” 

Quite the opposite! Or rather, one of those three things is not like the other. Baby-universes are in P/BQP, wormholes are in PSPACE (assuming by wormholes you mean closed timelike curves, which is afaik the default interpretation), and hyper-computers are halting-complete, which is ludicrously insanely not even remotely like the other two things. So in that regard, yes, I think consciousness being equal to cheesy-bread is more likely than finding a hypercomputer!

 

To be clear, when I talk about "non-constructive logic is Bad™" I don't mean that the actual literal symbolic mathematics is somehow malign (of course); it's the interpretation we assign to it. We think we're reasoning about infinite objects, but we're really reasoning about computable weaker versions of the objects, and these are not the same thing. If one is maximally careful with one's interpretations, this is (theoretically) not a problem, but this is such a subtle difference of interpretation that it is very difficult to disentangle in our mere human minds. I think this is at the heart of the problems with infinite ethics: because understanding what the correct mathematical interpretations are is so damn subtle and confusing, we find ourselves in bizarre scenarios that seem contradictory and insane because we accidentally naively extrapolate interpretations to objects they don't belong to.

I didn't do the best of jobs formally arguing for my point, and I'm honestly still 20% confused about this all (at least), but I hope I at least gave some interesting intuitions about why the problem might be in our philosophy of mathematics, not our philosophy of ethics.

P.S. I'm sure you've heard of it before, but on the off chance you haven't, I cannot recommend this wonderful paper by Scott Aaronson highly enough for a crash course in many of these kinds of topics relevant to philosophers.

Replies from: Slider, interstice, Daphne_W
comment by Slider · 2022-02-01T13:26:58.063Z · LW(p) · GW(p)

The case for observing a hypercomputer might rather be that a claim that has infinitesimal credence requires infinite amounts of proof to get to a finite credence level. So a being that can only entertain finite evidence would treat that credence as effectively zero, but it might technically be distinct from zero.

I could imagine programming a hypertask into an object, finding some exotic trajectory with more than a finite amount of proper time, and receiving the object back from such a trajectory having completed the task. The hypothesis that it was actually a very potent classical computer is ruled out by the structure of the task. I am not convinced that the main or only method of checking the nature of a computation is to check its output bit by bit.

comment by interstice · 2022-02-02T02:43:34.464Z · LW(p) · GW(p)

This is where things go wrong. The actual credence of seeing a hypercomputer is zero, because a computationally bounded observer can never observe such an object in such a way that differentiates it from a finite approximation

This seems dubious. Compare: "the actual credence that the universe contains more computing power than my brain is zero, because an observer with the computing power of my brain can never observe such an object in such a way that differentiates it from a brain-sized approximation". It's true that a bounded approximation to Solomonoff induction would think this way, but that seems like a problem with Solomonoff induction [AF · GW], not a guide for the way we should reason ourselves. See also the discussion here [LW · GW] on forms of hypercomputation that could be falsified in principle.

comment by Daphne_W · 2022-04-14T07:40:08.110Z · LW(p) · GW(p)

This is where things go wrong. The actual credence of seeing a hypercomputer is zero, because a computationally bounded observer can never observe such an object in such a way that differentiates it from a finite approximation. As such, you should indeed have a zero percent probability of ever moving into a state in which you have performed such a verification, it is a logical impossibility. Think about what it would mean for you, a computationally bounded approximate bayesian, to come into a state of belief that you are in possession of a hypercomputer (and not a finite approximation of a hypercomputer, which is just a normal computer. Remember arbitrarily large numbers are still infinitely far away from infinity!). What evidence would you have to observe for this belief? You would need to observe literally infinite bits, and your credence to observing infinite bits should be zero, because you are computationally bounded! If you yourself are not a hypercomputer, you can never move into the state of believing a hypercomputer exists.

 

Sorry, I previously assigned hypercomputers a non-zero credence, and you're asking me to assign them zero credence. This requires an infinite number of bits to update, which is impossible to collect in my computationally bounded state. Your case sounds sensible, but I literally can't receive enough evidence over the course of a lifetime to be convinced by it.

Like, intuitively, it doesn't feel literally impossible that humanity discovers a computationally unbounded process in our universe. If a convincing story is fed into my brain, with scientific consensus, personally verifying the math proof, concrete experiments indicating positive results, etc., I expect I would believe it. In my state of ignorance, I would not be surprised to find out there's a calculation which requires a computationally unbounded process to calculate but a bounded process to verify.

To actually intuitively give something 0 (or 1) credence, though, to be so confident in a thesis that you literally can't change your mind, that at the very least seems very weird. Self-referentially, I won't actually assign that situation 0 credence, but even if I'm very confident that 0 credence is correct, my actual credence will be bounded by my uncertainty in my method of calculating credence.

comment by Jan_Kulveit · 2022-02-01T12:20:14.525Z · LW(p) · GW(p)

Rough take on this: to me, a lot of this reasoning seems not to pay close enough attention to the relation of maths and reality; in practice the problems of infinite ethics are more likely to be solved at the level of maths, as opposed to on the level of ethics and thinking about what this means for actual decisions.

Why:

The general problem with how "infinities" are used in this text is that it seems to import the assumption that something like ZFC tells us something fundamental and true about reality. Subsequently, a lot of the problems with infinities seem to be basically "imported from math" (how to sum infinite series?). 

I'm happy to bite this bullet:
- our default math being based on ZFC axioms is to a large extent a random historical fact
- how ZFC deals with infinities tells us very little about real infinities 
- default infinite ethics tells us something about ethical problems in ZFC-based mathematical universes; as I don't assume ZFC is some fundamental base of my reality, its problems, questions and answers about infinities do not seem particularly relevant.  
Ad absurdum: if we postulated as an axiom that reality is based on the wiggling of big elephants standing on the back of an enormous turtle, we would likely arrive at weird ethical problems depending on obscure details of turtleology. 

I still do agree with the overall conclusion that infinite ethics is in a way a lesson in humility, and there is a lot that we don't know. 

Sidenote/reasoning transparency: A few weeks ago, I attended a seminar about the alternative set theory proposed by Petr Vopenka (of the "Vopenka's principle" in the diagram). 

Part of what I got from this was:  

Vopenka was someone who understood some of the problems with infinities in standard set theory, and actually took them seriously. At least what I got from some of his collaborators is that, due to his research on ultrafilters, he became worried that the whole of math based on ZFC, as we are using it, puts a lot of what we assume to "know" on more shaky grounds / more in tension with reality than people ordinarily think. (Actually, this post was really helpful for me in understanding Vopenka's deep philosophical horror.) He decided to fix it, spending decades trying to develop the mentioned alternative set theory, which would be somehow closer to reality. In my impression, from a math perspective, his program was not really successful. From a more meta perspective, in my view this seems like an approach to "infinite ethics" more likely to lead to progress.

Replies from: MichaelStJules
comment by MichaelStJules · 2022-02-01T15:27:34.987Z · LW(p) · GW(p)

I think a lot of these problems don't depend on the axiom of choice in particular. I think you can still construct incomparable options under most or all approaches, with explicit bijections. Maybe there are still problems with ZF, but I'm more skeptical.

Replies from: itaibn0
comment by itaibn0 · 2022-04-14T18:03:01.330Z · LW(p) · GW(p)

If the only alternative you can conceive of for ZFC is removing the axiom of choice then you are proving Jan_Kulveit's point.

Replies from: MichaelStJules
comment by MichaelStJules · 2022-04-15T07:39:06.237Z · LW(p) · GW(p)

I don't think ZF(C) is the problem. If whatever alternative you come up with doesn't have equivalent results, then I think it isn't expressive enough (or you've possibly even assumed away infinities, which would be empirically questionable). And whatever solution you might come up with can probably be expressed in ZF, and with less work than trying to build new foundations for math. I think it's better to work within ZF, but with additional structure on sets or using different ethical axioms.

Replies from: itaibn0
comment by itaibn0 · 2022-04-15T15:06:26.255Z · LW(p) · GW(p)

On further thought I want to walk back a bit:

  1. I confess my comment was motivated by seeing something where it looked like I could make a quick "gotcha" point, which is a bad way to converse.
  2. Reading the original comment more carefully, I'm seeing how I disagree with it. It says (emphasis mine)

in practice the problems of infinite ethics are more likely to be solved at the level of maths, as opposed to on the level of ethics and thinking about what this means for actual decisions.

I highly doubt this problem will be solved purely on the level of math, and expect it will involve more work on the level of ethics than on the level of foundations of mathematics. However, I think taking an overly realist view on the conventions mathematicians have chosen for dealing with infinities is an impediment to thinking about these issues, and studying alternative foundations is helpful to ward against that. The problems of infinite ethics, especially for uncountable infinities, seem to especially rely on such realism. I do expect that a solution to such issues, to the extent it is mathematical at all, could be formalized in ZFC. The central thing I liked about the comment is the call to rethink the relationship of math and mathematical infinity to reality, and that doesn't necessarily require changing our foundations, just changing our attitude towards them.

comment by paulfchristiano · 2022-02-01T04:08:35.386Z · LW(p) · GW(p)

Except, you are anyway? After all, the utilities can grow as fast or faster than the discounts shrink. Thus, if the pattern of utilities is just 2^(numbers of bits for the door number+1), the discounted total is infinite (1+1+1+1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better. Thus, we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.

Maybe one wants to say: the utility at a given location isn’t allowed to take on any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t get any finite amount of pleasure-able (or whatever you care about experiences being) – or perhaps, to the extent they can, they get correspondingly harder to specify.

Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It feels like denying the hypothetical, rather than handling it. And are we really so confident about how much of what can be fit inside an “experience”?

I think these are real problems, but it's worth pointing out that they occur even if you are certain that the world contains only one observer living for a bounded amount of time, whose welfare is merely uncertain (e.g. in a St. Petersburg case). So I don't think it's fair to characterize these as problems with infinite worlds, rather than more fundamental problems with common intuitions about unbounded utilities.
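
For reference, a minimal way to write down the kind of single-observer St. Petersburg case being pointed at here (a sketch, with the payoffs chosen for simplicity): one observer with a bounded lifespan, whose welfare is 2^n with probability 2^{-n}.

\[
\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty,
\qquad
\mathbb{E}[U'] \;=\; \sum_{n=1}^{\infty} 2^{-n}\cdot \bigl(10^{6}\cdot 2^{n}\bigr) \;=\; \infty,
\]

so a lottery that is a million times better in every outcome gets the same infinite expectation, reproducing the Pareto-style trouble from the quoted passage with only one observer and only finite welfare in each individual outcome.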

Replies from: davidad
comment by davidad · 2022-02-01T10:00:09.142Z · LW(p) · GW(p)

The St. Petersburg case does involve infinitely many possible worlds, in the sense that its probability distribution is not finitely supported, or in the Kripke-semantics sense. But I agree that infinitely many possible worlds is an extremely common and everyday assumption (say when modeling variables with Gaussians), and that similar issues come up that perhaps could be handled by similar solutions.

Replies from: paulfchristiano
comment by paulfchristiano · 2022-02-01T16:57:17.255Z · LW(p) · GW(p)

I do think it's reasonable to call those cases "infinite ethics" given that they involve infinitely many possible worlds. But I definitely think it's a distraction to frame them as being about infinite populations, and a mistake to expect them to be handled by ideas about aggregation across people.

(The main counterargument I can imagine is that you might think of probability distributions as a special case of aggregation across people, in which case you might think of "infinite populations" as a simpler case than "infinitely many possible worlds." But this is still a bit funky given that infinitely many possible worlds is kind of the everyday default whereas infinite populations feel exotic.)

comment by Charlie Steiner · 2022-01-31T19:49:44.455Z · LW(p) · GW(p)

If we accept that there are nonzero probabilities that our actions impact diverse infinite possibilities, and yet we still want to make decisions, it seems the entire infinitarian project is sunk already. Where then do you part ways with the completely bog-standard derivations (e.g. Savage) of expected utility theory that tell you you will act as if you assign regular old numbers to states of the universe?

In other words, great post, but I don't see why you're so convinced that better understanding of infinite ethics will require far-out math rather than prosaic, already-discovered math.

Replies from: MichaelStJules
comment by MichaelStJules · 2022-02-01T16:11:25.112Z · LW(p) · GW(p)

The problem is deciding how to order certain outcomes in the first place, even in deterministic cases. You can declare orderings some way, but they will probably end up either violating transitivity or at least being pretty counterintuitive to you when combined.

Also, infinities usually end up requiring the rejection of the continuity/Archimedean axiom, which the vNM theorem uses to get finite numbers to represent utilities. If you want to force it to hold, I think you'll need to reject scope sensitivity.
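
To spell out the continuity point (a standard observation, not specific to any particular theory in the post): vNM continuity requires that whenever A ≻ B ≻ C there is some mixture of A and C indifferent to B, and an "infinitely good" A rules that out.

\[
A \succ B \succ C, \qquad u(A) = \infty,\ \ u(B),\, u(C)\ \text{finite};
\]
\[
p > 0 \;\Rightarrow\; p\,u(A) + (1-p)\,u(C) = \infty \;>\; u(B), \qquad p = 0 \;\Rightarrow\; u(C) < u(B),
\]

so no mixture of A and C is indifferent to B, and continuity (the Archimedean condition) has to go.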

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-02-01T18:51:28.372Z · LW(p) · GW(p)

Yup, I agree. Basically once you say you're going to use aggregated utilities (by which I don't just mean summed, I mean according to some potentially-complicated aggregation function) to make VNM-style decisions, this requires you to abandon a lot of other plausible desiderata.

Like scope sensitivity. Because there's "always a bigger infinity" no matter which you choose, any aggregation function you can use to make decisions is going to have to saturate at some infinite cardinality, beyond which it just gives some constant answer.

And this is pretty weird. But at some point you just learn to let desiderata go when they're bad. One experience I remember from college is learning that there's no uniform distribution over the real numbers. In some circumstances you can "fight reality" and use an improper prior as a bit of mathematical sleight of hand. But for all cases where you're going to actually use the answer, you just have to accept that infinity is too big to care about all of it equally.
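
For what it's worth, the standard argument behind "no uniform distribution over the real numbers" (assuming "uniform" means a countably additive, translation-invariant probability measure) is short:

\[
\mathbb{R} \;=\; \bigcup_{n \in \mathbb{Z}} [n, n+1), \qquad \mu\bigl([n, n+1)\bigr) = c \ \text{for every } n \ \text{(translation invariance)},
\]
\[
1 \;=\; \mu(\mathbb{R}) \;=\; \sum_{n \in \mathbb{Z}} c \;=\;
\begin{cases}
0 & \text{if } c = 0, \\
\infty & \text{if } c > 0,
\end{cases}
\]

a contradiction either way; the "uniform density" one writes down anyway is the improper prior mentioned above, a useful fiction rather than a probability measure.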

Replies from: Caspar42
comment by Caspar Oesterheld (Caspar42) · 2022-04-15T20:33:41.258Z · LW(p) · GW(p)

>Because there's "always a bigger infinity" no matter which you choose, any aggregation function you can use to make decisions is going to have to saturate at some infinite cardinality, beyond which it just gives some constant answer.

Couldn't one use a lexicographic utility function that has infinitely many levels? I don't know exactly how this works out technically. I know that maximizing the expectation of a lexicographic utility function is equivalent to the vNM axioms without continuity, see Blume et al. (1989). But they only mention the case of infinitely many levels in passing.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2022-04-16T05:29:12.700Z · LW(p) · GW(p)

I'm not sure what sort of decision procedure you would use that actually has outputs, if you assign ever-tinier probabilities to theories ever-higher in the lexicographic ordering.

Like, infinite levels going down is no problem, but going up seems like you need all but a finite number of levels to be indifferent to your actions before you can make a decision - but maybe I just don't see a trick.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2022-04-17T09:19:50.953Z · LW(p) · GW(p)

"Anyone got the gods' exact wording, there?  For comparison, my home plane doesn't have a spatial boundary in any direction and doesn't spatially loop, but on the lower level of reality underneath that, there were limits on the size of structures that could exist and any two identical structures of entanglement were the same structure at that underlying level of reality.  It didn't actually contain an infinite amount of stuff; it was repeating at a lower level than space looping around.  If Elysium is infinite, nonrepeating, contains arbitrarily large entangled structures, and everywhere comprises a similar positive density of realityfluid, that literally breaks the Law of Probability I know."

" - oh dear. Uh, I don't know that, but I guess we can have someone look it up. ....why does it break the Law of Probability if there are planes that go on forever?" What a good thing for Keltham to be extremely worried about.

"Well, let's say you have an infinite number of INT 16 wizard students, of whom an infinite number become 5th-circle wizards, what's your chance of becoming a 5th-circle wizard given that you're an INT 16 wizard student?"

"I don't think all infinities are the same size though maybe they are in the relevant sense - wait, yes, okay, they are. Huh."

"Right, so, the whole requirement of being able to say, here I am now, what happens to me next, the required premise of being able to ask if it is more likely that you see the coinspin landing Queen or see the coinspin landing Text, is that there's entangled structures of realityfluid where your possible futures are entangled with your present and the amount of realityfluid making up your futures is such that you can have ratios between the amounts of realityfluid in entangled futures.  Like, you want to say that there's infinities you can draw ratios between, sure, but then you can just talk about the fraction that anything is of everything.  Like, maybe for a really really large infinite universe somewhere that was among the simplest possible ones, that one infinite universe would be 0.00001% of the infinite Everything, but then you could just leave out the talk of infinity and talk about the fractions."

"Anyways, not an urgent issue.  I just note that it implies, in descending order of probability, that you misunderstood your gods, your gods lied, your gods have no idea how the ass some parts of Law work, or Civilization managed to get some parts of the Law truly incredibly wrong."

"It seems unfair to your Civilization if they could be wrong based off a - factual thing about how a different universe than theirs works."


"The crap I'm talking about is part of the border between universal truth and local truth, like, the universal truth about why there are any local truths or people inside those local truths to experience them.  I mean, it's not like math and local reality are two totally distinct realms, there's a border where they meet.  Knowledge about realityfluids always being ratioable sorts of things, since it's knowledge about that border, falls on the universal side of 'is that true everywhere'."

"Like, there's a difference between saying that this table is real - where it's just being real here, and not being real over there or in dath ilan - and saying that since the table is real it's necessarily ultimately made out of some stuff that is how real it is, and this stuff comes in ratioable quantities."

"Oh, and if you're wondering why everything I'm saying sounds like gibberish, it's because we're currently talking about 'anthropics'."

https://www.glowfic.com/replies/1753115#reply-1753115

comment by Davidmanheim · 2023-12-21T11:41:48.401Z · LW(p) · GW(p)

I did not see this post when it was first put on the forum, but reading it now, my personal view of this post is that it continues a trend of wasting time on a topic that is already a focus of too much effort, with little relevance to actual decisions, and no real new claim that the problems were relevant or worth addressing.

I was even more frustrated that it didn't address most of the specific arguments put forward in our paper from a year earlier on why value for decisionmaking was finite, and then put forward several arguments we explicitly gave reasons to dismiss - including dismissing ordinal preferences, and positing situations which assume infinities instead of showing they could be relevant to any decision. I think that it doesn't actually engage with reality in ways that are falsifiable, and it ignores that even assuming infinite universes based on physics, the theories which posit infinities in physics strongly favor the view that there is a finite *affectable* universe, making the relevance to ethics hard to justify.

Replies from: joekc
comment by Joe Carlsmith (joekc) · 2023-12-21T17:57:53.994Z · LW(p) · GW(p)

Hi David -- it's true that I don't engage your paper (there's a large literature on infinite ethics, and the piece leaves out a lot of it -- and I'm also not sure I had seen your paper at the time I was writing), but re: your comments here on the ethical relevance of infinities: I discuss the fact that the affectable universe is probably finite -- "current science suggests that our causal influence is made finite by things like lightspeed and entropy" -- in section 1 of the essay (paragraph 5), and argue that infinities are still 

  1. relevant in practice due to (i) the remaining probability that current physical theories are wrong about our causal influence (and note also related possibilities like having causal influence on whether you go to an infinite heaven/hell, à la Pascal) and (ii) the possibility of having infinite acausal influence conditional on various plausible-in-my-view decision theories, and
  2. relevant to ethical theory, even if not to day-to-day decision-making, due to (a) ethical theory generally aspiring to cover various physically impossible cases, and (b) the existence of intuitions about infinite cases (e.g., heaven > hell, Pareto, etc.) that seem prima facie amenable to standard attempts at systematization. 
Replies from: Davidmanheim
comment by Davidmanheim · 2023-12-24T07:33:58.031Z · LW(p) · GW(p)

First, I said I was frustrated that you didn't address the paper, which I intended as a personal frustration, not blame for not engaging, given the vast number of things you could have focused on. I brought it up only because I wanted those reading my comment to appreciate that this was a personal motive, not a dispassionate evaluation.


However, to defend my criticism: for decisionmakers with finite computational power / bounded time and limited ability to consider issues, I think that there's a strong case to dismiss arguments based on plausible relevance. There are, obviously, a huge (but, to be fair, weighted by a simplicity prior, effectively finite) number of potential philosophies or objections, and a smaller amount of time to make decisions than would be required to evaluate each. So I think we need a case for relevance, and I have two reasons / partial responses to the above that I think explain why I don't think there is such a case.

  1. There are (to simplify greatly) two competing reasons for a theory to have come to our attention enough to be considered: plausibility, or interestingness. If a possibility is very cool seeming, and leads to lots of academic papers and cool sounding ideas, the burden of proof for plausibility is, ceteris paribus, higher.

    This is not to say that we should strongly dismiss these questions, but it is a reason to ask for more than just non-zero possibility that physics is wrong. (And in the paper, we make an argument that "physics is wrong" still doesn't imply that bounds we know of are likely to be revoked - most changes to physics which have occurred have constrained things more, not less.)
     
  2. I'm unsure why I should care that I have intuitions that can be expanded to implausible cases. Justifying this via intuitions built on constructed cases which work seems exactly backwards.

    As an explanation for why I think this is confused: Stuart Armstrong made a case that people fall prey to a failure mode in reasoning that parallels one we see in ML, which I'll refer to as premature rulemaking. In ML, that's seen when a model sees a small sample, tries to build a classification rule based on it, and applies it out of sample: all the small black fuzzy objects it has seen are cats, and it has seen no cats which are large or other colors, so it calls large grey housecats non-cats, and small black dogs cats. Even moving on from that point, it is harder to change away from that mode; we can convince it that dogs are a different category, but the base rule gets expanded by default to other cases, and tigers are not cats, and black mice are, etc. Once we set up the problem as a classifier, trying to find rules, we spend time building systems, not judging cases on their merits. (The alternative he proposes in this context, IIRC, is something like trying to do grouping rather than build rules, and evaluating distance from the cluster rather than classification.)

    The parallel here is that people find utilitarianism / deontology / maximizing complexity plausible in a set of cases, and jump to using it as a rule. This is the premature rulemaking. People then try to modify the theory to fit a growing number of cases, ignoring that it's way out of sample for their intuitions. Intuitions then get reified, and people self-justify their new reified intuitions as obvious. (Some evidence for this: people have far more strongly contrasting intuitions in less plausible constructed cases.)

This has gone somewhat off track, I think, but in short: I'm deeply unsure why we should spend time on infinite ethics, I have a theory for why people do so, and I would want to see strong evidence for focusing on the topic before considering it useful, as opposed to fun.

comment by Slider · 2022-01-31T18:54:33.424Z · LW(p) · GW(p)

Some of the logic assumes that because a chance is positive it must be finite. In the infinite context, infinitesimal chances might be relevant and can break this principle. For expected value calculations it helps that for any transfinite payoff there is a corresponding infinitesimal chance which would make that option ambivalent with a certain finite payout. And, for example, with 4 times the lizards this threshold would be 4 times lower. Mere possibility giving a finite (even if small) chance seems overgenerous, although I would expect the theory of what kind of evidentiary or reasoning processes can legitimately output infinitesimals to be hard and beyond me at the moment. But it leaves the door open for me to say "just because it's possible doesn't mean it's possible enough".

In general, thinking with surreal numbers has really hammered home to me that "infinite" is an adjective like "finite", and that "infinity - infinity" is undefined for similar reasons that "finite - finite" is undefined. A lot of this analysis would go differently if we kept the notion that we need to be asking "how much" even for the transfinite amounts.

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2022-02-01T01:31:29.286Z · LW(p) · GW(p)

(For hyperreals/surreals, I think "finite" means "smaller magnitude than some real," and so where you say "finite" in your first paragraph I think you mean positive-standard-part, viz., positive and not infinitesimal.)

I'm sympathetic to this point, and I think it's deep, but I'm pretty sure that it ultimately doesn't make a difference. We should have positive-standard-part credence in the possibility of infinite utility. We should also have positive-standard-part credence in various subpossibilities. And so they can dominate our expected value calculations. It doesn't all cancel out (unless you stipulate that it does, which seems to be an incredibly strong empirical assumption that I can't imagine justifying). I strongly disagree with "for any transfinite payoff there is a corresponding infinitesimal chance which would make that option ambivalent"; this is an empirical claim that we have no reason to believe.

Depending on definitions, you could say that there's an infinitesimal chance of any particular possible future, but you can't say that there's an infinitesimal chance of a class of possible futures such as (some precisification of) the possible futures with exactly one immortal happy lizard.

Replies from: Slider
comment by Slider · 2022-02-01T10:51:36.841Z · LW(p) · GW(p)

yes, I mean the primary archimedean field with "finite". The fields go also "inside" in that 1/ is in one that 1/ is not in.

I mean the ambivalence point to be a mathematical statement, rather than an empirical one, about how expected value and infinities work. That is, equations of the form p·U = c, with U a transfinite payoff and c a finite payout, have a solution with p infinitesimal, so that equality can be made true.

The empirical part would involve a situation where the correct credence about it was truly infinitesimal. Currently we just designate the nearest real and call it a day. I have a suspicion that events that can happen but have probability 0 would benefit from going beyond real precision.

One can imagine a dart competition. $20 for hitting the upper half of the board. $40 for hitting both the upper half and the left half of the board. These two "challenges" or bets would be comparable in expected money if the dart lands uniformly on the board. How much money would be fair for landing squarely on the bullseye? For any small radius around the bullseye such a value is somewhat straightforward to get, yet any finite amount of money is not enough for the center point itself. Yet presumably the center is not special and is as likely as any other point. So if we are restricted to a finite money payoff we can't use an arbitrary point as the target, but must include infinitely many points to make up the target area. Even the vertical symmetry axis of the board is too narrow: a dart will almost surely, that is to say with real probability 1, land to its left or right. It is not true that an infinite multiple of an infinitesimal needs to be finite. The vertical axis has more than finitely many multiples of the points in the bullseye, but it still fails to make up an area for which a finite money payoff would be appropriate.

I don't see how probabilities for classes of futures necessitate non-negligible real probabilities. Usually we want to be at least real-smooth, but it is not clear to me that real smoothness is good enough for all cases. This might be connected to the question of whether, when you integrate the probability density, you are guaranteed to get a real or might get something less than a real.

comment by Zach Stein-Perlman · 2022-01-31T15:00:30.106Z · LW(p) · GW(p)

Good post.

In some places, you seem to assume that infinity looks like ∞ rather than ω. (∞ is not a number and just means roughly bigger than all real numbers, while ω [in the hyperreal sense] is a particular number bigger than all real numbers.) For example:

consider an infinite world where everyone’s at 1. Suppose you can bump everyone up to 2. Shouldn’t you do it? But the “total welfare” is the same: ∞.

and

if the total is infinite (whether positive or negative), then finite changes won’t make a difference. So the totalist in an infinite world starts shrugging at genocides. And if they can only ever do finite stuff, they start treating all their possible actions as ethically indifferent.

Your response works:

If “how can finite changes matter in infinite worlds?” were the only problem we faced, I’d be inclined to ditch talk about maximizing total welfare, and to focus instead on maximizing the amount of welfare that you add on net. Thus, in a world of infinite 1s, bumping ten people up to 2 adds 10. Nice. Worth it. Size of drop, not size of bucket.

But this response is unnecessary when we can use infinite numbers like ω. If you tell me about "an infinite world where everyone’s at 1", I can call its value ω. If you add another person, its value is ω+1; if you double its size, its value is 2ω. This is intuitive and useful (and a generalization of the "size of drop" approach.)


Edit: davidad and gjm seem to think of ω as an ordinal; as I mentioned above, I just mean it as an infinite hyperreal. It certainly seems more appealing to use hyperreals than ordinals for anything like utility. I don't know nonstandard analysis, but I think of ω (in nonstandard analysis) as an arbitrary infinite number, not (as Slider suggests) the smallest transfinite surreal number. If it's common to use ω to mean a specific number, then replace all instances of ω above with, say, x, where x is an infinite hyperreal defined as the utility of the "infinite world where everyone's at 1".
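As a concrete, and very partial, illustration of the arithmetic being asked for here, the sketch below (my own simplification, not a construction of the hyperreals) represents a value as a finite polynomial in a single infinite element w, stored as a dict from power of w to coefficient, and compares values by their highest differing power. It reproduces the intuitions w + 1 > w and 2w > w.

```python
from itertools import chain

def add(a: dict[int, float], b: dict[int, float]) -> dict[int, float]:
    # Add two "polynomials in w" coefficient-wise, dropping zero terms.
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0.0) + c
    return {k: c for k, c in out.items() if c != 0.0}

def greater(a: dict[int, float], b: dict[int, float]) -> bool:
    # Compare by the highest power of w on which the two values differ.
    for k in sorted(set(chain(a, b)), reverse=True):
        diff = a.get(k, 0.0) - b.get(k, 0.0)
        if diff != 0:
            return diff > 0
    return False

w = {1: 1.0}                         # the infinite unit: the everyone-at-1 world
print(greater(add(w, {0: 1.0}), w))  # w + 1 > w  -> True
print(greater(add(w, w), w))         # 2w > w     -> True
```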

Replies from: davidad, MichaelStJules
comment by davidad · 2022-01-31T18:25:29.439Z · LW(p) · GW(p)

Just to spell things out a bit more:

  • ℵ₀ is a cardinal number, i.e. an isomorphism-class of sets
  • ω is an ordinal number, i.e. an isomorphism-class of well-ordered sets
  • ∞ is an extended real number, i.e. a Dedekind cut of the rationals which fails the usual condition that the upper side is inhabited.
  • [Edited to add: ω is also the name of a hyperreal number, and also the name of a surreal number. That makes five different kinds of infinities on the table! Surreals include all ordinals and cardinals, and hyperreals include at least all countable ordinals.]

For the avoidance of confusion, the words "cardinal" and "ordinal" in this context are also used to mean:

  • Ordinal rankings can say pairwise which of two options is better
  • Cardinal rankings can put a number on any option, which enables assessment of lotteries in addition to ordinal ranking (by comparing two options' numbers). There is no straightforward relationship between these uses of "ordinal" and "cardinal".

The suggestion in the parent comment might be summarized with the slogan "for cardinal rankings, use ordinal numbers", if that weren't comically confusing. [Edit: actually it looks like the parent comment meant “for cardinal rankings, use hyperreals”; I’m currently leaning towards “use extended reals, unless those really can’t make the distinctions you need, in which case, use surreals.”]

However, I am not at all sure that ordinal numbers help us much here!

  • Although ω and ω+1 are distinct, that distinction comes from the order structure placed on their elements; as plain sets they are isomorphic. Where do we get this order structure on axiological "locations"?
  • Also, if each location has an ℝ-valued amount of ethical value, how could those numbers be aggregated into an overall ordinal number, when the topology of ordinals is totally disconnected?
Replies from: Zach Stein-Perlman, davidad, Slider
comment by Zach Stein-Perlman · 2022-02-01T01:15:38.448Z · LW(p) · GW(p)

(Disclaimer: I'm no mathematician, but I think I'm sufficiently familiar with the math in this post and comment.)

I'm quite confused about why you would use extended reals when you could use hyperreals (or surreals, which are just an extension of hyperreals). For example, my intuitions about infinite utility say we should have

  • ∞+1 > ∞
  • 2*∞ > ∞
  • ∞*∞ > ∞

where "∞" is the amount of utility in some canonical infinite-utility universe and ">" means "is better than".

Each of these works if "∞" represents an infinite hyperreal, but not if it's just the infinite extended-real, right? There's not just one positive and one negative possible-universe-value off the real line; it feels much more like there's such a value for each hyperreal.

comment by davidad · 2022-01-31T18:33:34.125Z · LW(p) · GW(p)

I do want to add as a separate point that I think Joe's invocation of large cardinal numbers as potential amounts of ethical value is probably confused. I don't see any reason to use cardinal numbers for cardinal rankings (although they both involve the word "cardinal"). Perhaps cardinal numbers will someday allow us to make meaningful and useful distinctions between different levels of infinite quantities of value, but I doubt it, even without knowing quite how something more grounded in extended reals (like stochastic dominance) can resolve the paradoxes here. Fundamentally, cardinal numbers are not quantities, they are names for bijection-classes of sets (except when they are finite, in which case by definition they happen to correspond to ℕ, which then includes into ℝ).

Replies from: gjm, Zach Stein-Perlman
comment by gjm · 2022-01-31T19:36:38.321Z · LW(p) · GW(p)

I agree that it's unlikely that we'd ever want infinite cardinal numbers to play a utility-like role. But it's worth noting Conway's "surreal numbers", which extend the familiar real numbers with a rich variety of infinities and infinitesimals, and the infinities get (handwave handwave) "as big as the infinite ordinals". (They are definitely more ordinal-like than cardinal-like, though they're also definitely not the exact same thing as the ordinals either.)

So in the right framework these set-theoretic exotica do function as quantities, at least if being part of an ordered field that extends the real numbers counts as "functioning as quantities", which I think it should.

Replies from: davidad
comment by davidad · 2022-01-31T21:26:49.751Z · LW(p) · GW(p)

That makes sense. I wasn’t really familiar with surreal numbers, but they indeed seem to be an ordered field extending the reals, and yet also including whatever large cardinals one cares to postulate in one’s foundations.

It looks like there’s a little bit of work already on using surreal numbers for probabilities and utility values, proving a surreal version of the vNM theorem and applying it to various Pascal’s-Wager-type scenarios. This seems like a solid direction if one really does want maximal expressive power in one’s degrees of infinitude.

comment by Zach Stein-Perlman · 2022-01-31T20:10:07.736Z · LW(p) · GW(p)

I agree, although I'm not sure Joe himself invokes cardinals; relevant quote from the post (I think?):

imagining worlds with a “strongly Ramsey” number of people seems likely to be a total non-starter, even if one knew what “strongly Ramsey” meant, which I don’t. Still, it seems like the infinite fanatic should be freaking out (drooling?). After all, what’s the use obsessing about the smallest possible infinities?

While cardinals might not make sense here, I strongly agree with Joe that we might need to care a lot about really big infinities. If we call the utility of a finite-size positive-value system that exists forever ω utility, we can imagine ω² or ω³ without too much trouble (infinite copies, or infinite value per copy per finite-time-unit, or both), and this has implications for our decisions.

comment by Slider · 2022-01-31T19:10:36.421Z · LW(p) · GW(p)

I know ω as the smallest transfinite surreal number, and at least that is a ready source of transfinite arithmetic. Incidentally, as {0,1,2,3,4,5...|} it also sounds an awful lot like the third definition. But the successor of that, {ω|}, is hard to translate to isomorphism-classes.

For me ∞ is associated with "grows without limit". In a system that allows only a handful of transfinites, that might be tempting to use as a number's name. However, in non-standard analysis, integrating to ω integrates to a limit, while "integrating to ∞" still exists but doesn't refer to any number.

Replies from: gjm
comment by gjm · 2022-01-31T19:32:18.490Z · LW(p) · GW(p)

The successor of omega is (the equivalence class / order-type of) well-orderings that look like an infinite upward chain and then one more element above those. In general, if A is a set of ordinals that has a largest element then (A|) is the order-type of well-orderings that look like that largest element plus one more thing above it; if A is a set of ordinals that doesn't have a largest element then (A|) is the order-type of the union of the things in A, with the understanding that if a<b with a,b in A then "corresponding" elements of a and b are identified.

(As soon as you start considering (A|B) with B nonempty, of course you are at risk of getting things that aren't ordinals any more. But as long as you're working with ordinals and keeping B empty, the surreal-number structure is closely related to the ordinal structure.)

It's a while since I looked at this stuff in detail, but my recollection is that surreal numbers and nonstandard analysis don't make very good playmates even though they are both extensions of our "usual" number systems to allow infinitesimals and infinities. One thing that makes nonstandard analysis interesting is the transfer principle (i.e., things you can say without referring to the special notions of nonstandard analysis are true about its extended number system iff they're true about the ordinary real numbers) and so far as I know there is nothing at all like a transfer principle for the surreal numbers.

comment by MichaelStJules · 2022-02-04T05:32:25.594Z · LW(p) · GW(p)

Expansionism is basically a special case of hyperreals. See my comment here [LW(p) · GW(p)] outlining Expansionism over possible persons instead of spacetime locations, to satisfy Pareto over persons instead of Pareto over spacetime locations.

comment by Raemon · 2022-04-13T20:59:54.687Z · LW(p) · GW(p)

Curated. 

I think the topic of infinite ethics is pretty confusing, and important. 

I haven't actually read the entirety of this post (it sure is long). I've read large chunks of the beginning and end, and spot-checked some of the arguments in the middle. The broad structure of Joe's arguments makes sense to me, and seems important for bullet-biting utilitarians to engage with. I'm interested in seeing more engagement with individual points here.

comment by Zach Stein-Perlman · 2022-02-01T14:25:43.465Z · LW(p) · GW(p)

Here and on the EA Forum, some commenters have suggested that (some of) our problems with infinities arise from ZF set theory (and others have expressed skepticism, which I tentatively share). I would love to see a post (high-effort or low-effort, long or short) on how an alternative mathematical foundation would avoid some of the issues Joe discusses.

comment by kyleherndon · 2022-01-31T10:10:00.259Z · LW(p) · GW(p)

I am confused how you got to the point of writing such a thoroughly detailed analysis of the application of the math of infinities to ethics while (from my perspective) strawmanning finitism by addressing only ultrafinitism. "Infinities aren't a thing" is only a "dicey game" if the probability of finitism is less than 100% :). In particular, there's an important distinction between being able to write down the "largest number + 1" versus merely referencing it as a symbol, as we do; referenced as a symbol, in the original frame, it can be encoded as a small number.

Another easy way to just dismiss the question of infinite ethics that I feel you overlooked is that you can assign zero probability to the claim that our choice of mathematical axioms is exactly correct about the nature of infinities (or even probabilities).

You'll notice that both of these examples correspond to absolute certainty, and that one may object that I am "not being open minded" or something like that for having infinitely trapped priors. However, I would remind readers that you cannot choose all of your beliefs and that, practically, understanding your own beliefs can be more important than changing (or being able to change) them. We can play word games regarding infinities, but will you put your life at stake? Or will your body reject the attempts of your confused mind when the situation and threats at hand are visible?

I would also like to directly claim, regardless of the truth of the aforementioned claims, that entities and actions beyond the cosmic horizon of our observable universe are forfeit for consideration in ethics (and only once they are past that horizon). In particular, I dislike that your argument relies on the notion that cosmologists believe the universe is infinite, while cosmologists will also definitely tell you that things beyond the cosmological horizon are outside of causal influence. Appealing to logos only to later reject it in your favor is inconsistent and unpalatable to me.

I also am generally under the impression that a post like this should be classified as a cognitohazard, as I am under the impression that the post will cause net harm under the premise that it attempts to update people in the direction of susceptibility to arguments of the nature of Pascal's Wager.

I'm sorry if I'm coming off as harsh. I know from reading your posts that you generally contribute positively, and I have enjoyed much of your content. However, I am under the impression that this post is likely a net negative, directly conflicting with the proposition that we "help our species make it to a wise and empowered future", because I think it contributes towards misleading our species. I have found (though obviously others may find otherwise) that, as far as I can tell, something ingrained in my experience of consciousness itself assigns zero probability to our choice of axioms being literally, entirely correct (the map is not the territory). I also claim that, regardless of the supposed "actual truth" of the existence of infinities in ethics, a practical standpoint suggests that you should definitely reject the idea: I believe that having any modicum of belief in it is more likely to lead you astray, and likely to perform worse even in the exceptional case that our range of causal influence is "actually infinite", though clearly this is not something I can prove.

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2022-01-31T14:31:40.965Z · LW(p) · GW(p)

I don't understand or disagree with a lot in your comment, but I don't think I'd say much different from Joe. However, my meta-level principles say I should respond to

I also am generally under the impression that a post like this should be classified as a cognitohazard, as I am under the impression that the post will cause net harm under the premise that it attempts to update people in the direction of susceptibility to arguments of the nature of Pascal's Wager.

I disagree for two independently dispositive reasons:

  • I think that if anything, readers would update in the opposite direction, realizing that the bullets to utility-maximizing are harder to swallow than they thought. Joe shows that infinity makes decision theory really weird, making it more appealing (on the margin) to just do prosaic stuff. (I'm relatively fanatical, but I would be much more fanatical and much more comfortable in my fanaticism if not for issues Joe mentions.) And Joe doesn't advocate, e.g., pressing the red button here [LW · GW], and his last section has good nuanced discussion of this in practice.
  • Discussions related to this topic have expected benefits, in figuring out how to deal with Pascal's Wager/Mugging, that outweigh the expected costs (if they existed) from individuals making worse decisions (and this "expectation" is robust to non-fanaticism, or something).
comment by James_Miller · 2022-04-20T13:53:11.717Z · LW(p) · GW(p)

"An infinite line of immortal people, numbered starting at 1, who all start out happy (+1). "  Are you allowed to do this?  Say I am one of these people, how long is my number likely to be?  Can my number be described with a finite number of symbols?  Isn't my position determined by a draw from a uniform distribution over the positive integers, which I think isn't allowed?   https://math.stackexchange.com/questions/14777/why-isnt-there-a-uniform-probability-distribution-over-the-positive-real-number

comment by AnthonyC · 2022-02-08T00:05:55.308Z · LW(p) · GW(p)

But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a 30% chance that probabilities need not add up to 100%? What percent of your net worth should you pay for such a lottery, vs. a guaranteed avocado sandwich? Must you learn to answer, lest your ethics break, both in theory and in practice?

 

You just made my week. The fact that this paragraph manages to actually mean something and not just constitute Insane Troll Logic makes me (finitely) very happy.

comment by Donald Hobson (donald-hobson) · 2022-01-31T12:34:42.910Z · LW(p) · GW(p)

I think part of the problem here comes when you consider an infinite number of people. Let's say that anything involving too many bits of memory is not a person. Yes, this implies we don't care about 3^^^^3-sized minds, but we can care about more reasonably sized subsections of those minds. So we have some finite list of computations we consider people. So what you care about is the amount of magic reality fluid that gets applied to each computation. (The total amount of magic reality fluid must add up to 1.)

In this view, there is no difference between a single person in cyclic space and an infinite row of identical people. They are both just one computation. 

comment by NicholasKross · 2023-07-21T22:02:56.611Z · LW(p) · GW(p)

Thoughts while reading this, especially as they relate to realityfluid and diminishing-matteringness [LW · GW] in the same vein as "weight by simplicity":

  • If we start at a set "0" time, and only the future is infinite, then we break some of the mappings discussed in the "Agent [or any other value-location] Neutrality" section [LW · GW]. Intuitively, some of the "same distributions" would, if both bounded on the right side, be different again. This does not hold for e.g. w5 and w6 (since there's a 1-to-1 mapping between n and 2n for all n), but it does hold for w3 and w4 (since the first w3 agent to the left of the cutoff-line won't have a corresponding w4 agent).
  • Diminishing realityfluid sure looks superficially similar to the weight-by-simplicity idea [LW · GW], which is unusual for being one of the least-tempting not-quite-solutions-to-infinities in the entire post. This makes me update (weakly!) away from "diminishing realityfluid".
  • What makes Heaven+Speck intuitively better-than Hell+Lollipop? I think the answer might end up as something like "realityfluid", where (normalized or not) there's more of it on the Heaven/Hell than on the Speck/Lollipop, even though both Heaven&Speck are infinite and so are both Hell&Lollipop.
  • This whole post and what Tamsin Leake [LW · GW] has built off of it, have updated me away from "backchain learning math [LW · GW] as you need it". Maybe not all-the-way away, but a good bit away.
comment by MichaelStJules · 2022-02-04T08:06:45.223Z · LW(p) · GW(p)

To extend Expansionism to worlds with less structure, you could try to come up with a very general kind of distance metric between locations of value (whatever they may be), and use that instead of distance in spacetime to define your "spheres". I'm not sure this can cover all possibilities we'd want to consider while also ensuring the spheres only contain finitely many locations of value with nonzero value at a time, for finite partial sums over each sphere.

Here's one idea, although I'm skeptical that it works. If there are only countably many binary predicates about locations of value (or you only consider countably many of them), you can order them P₁, P₂, P₃, ..., and define a metric by something like

d(x, y) = ∑ᵢ |Pᵢ(x) − Pᵢ(y)| / 3ⁱ,

or just map x ↦ ∑ᵢ Pᵢ(x) / 3ⁱ and use the absolute difference. (I use 3 instead of 2 to avoid 0.01111...=0.1000... in binary.)

With this, you can define the ball of radius r around x as {y : d(x, y) ≤ r}, and expand using these nested balls instead of the spacetime spheres. This basically represents the location of value x as the binary sequence (P₁(x), P₂(x), P₃(x), ...) (or a real number), so you can only uniquely represent a number of locations of value with at most the cardinality of the real numbers this way. That's still a lot of possible locations, though: we could potentially represent a continuum of different universes each with a continuum of possible locations, as long as only countably many locations bear any value at all in a given outcome, or we can take well-defined integrals over locations.
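To make the suggestion slightly more concrete, here is a minimal sketch (the names, the truncation, and the toy "locations" are mine) that approximates the metric by its first N terms; the discarded tail contributes at most 1/(2·3^N), so the approximation error is tiny for modest N.

```python
from typing import Callable

def predicate_distance(x: Callable[[int], bool],
                       y: Callable[[int], bool],
                       n_terms: int = 40) -> float:
    # Truncated version of d(x, y) = sum_i |P_i(x) - P_i(y)| / 3^i.
    return sum(abs(int(x(i)) - int(y(i))) / 3 ** i for i in range(1, n_terms + 1))

def in_ball(center, candidate, radius: float, n_terms: int = 40) -> bool:
    # Membership in the ball of the given radius around `center`.
    return predicate_distance(center, candidate, n_terms) <= radius

# Two toy "locations" that differ only on the 3rd predicate:
loc_a = lambda i: i % 2 == 0
loc_b = lambda i: (i % 2 == 0) ^ (i == 3)
print(predicate_distance(loc_a, loc_b))  # 1/27 ≈ 0.037
```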

comment by MichaelStJules · 2022-02-04T05:11:33.676Z · LW(p) · GW(p)

For the hyperreal approaches, ultrafilters are basically just orders over the locations of value to take limits over.

It's worth noting that the better-behaved "finite-sum" version of the hyperreal approach outlined in Bostrom (2011) is just choosing an order to take partial sums and then limits over, and he describes Expansionism (over spacetime locations) as a special case. This is on pages 21 and 22.

You could do Expansionism over possible persons instead of spacetime locations to satisfy Pareto over persons/agents and avoid the weird issues with pulling worlds together.

The way Expansionism (over spacetime locations) works is by having a set of nested subsets of possible spacetime locations, the expanding spacetime spheres, such that:

  1. Each location is in at least one of the subsets, and then of course in all the following subsets that contain that subset.
  2. In any world, each subset has only finitely many locations with value in it (all but finitely many have value 0).

In math notation, the set of spacetime locations is L, and you have a set S of subsets of L, such that S is totally ordered with respect to ⊆ (nested) and

  1. ⋃S = L, i.e. for every x ∈ L, there exists some A ∈ S such that x ∈ A.
  2. For any world w (the assignment of spacetime locations to their utilities as real values) and any A ∈ S, the set {x ∈ A : w(x) ≠ 0} is finite.

1 means we cover all of spacetime in the limit and 2 allows us to take finite partial sums before taking limits (or looking at some other behaviour of these partial sums in the limit).

Strictly speaking, you don't need the nested sets to grow uniformly in each direction ("uniform expansion") for things to be well-defined, but it'll probably give more intuitive answers that way.

When comparing two worlds w₁ and w₂, you're comparing the "sequence" of partial sums in the "limit" as the set A ∈ S covers all of L:

∑_{x∈A} w₁(x)  vs  ∑_{x∈A} w₂(x)

For possible persons instead of spacetime locations, just use a set of nested subsets of possible persons (instead of spacetime locations) that satisfies 1 and 2.

This seems okay and doable, as long as you can come up with such a set of nested subsets of the possible persons that you find sufficiently satisfying, and you're able to track possible persons across worlds, over time and through space in a way you find sufficiently satisfying. You could identify persons based on characteristics when they first come into existence and then track them based on some kind of (psychological) continuity and causal connections (and make some kinds of fixes for duplication cases when there's no fact of the matter who's the original and who's the duplicate). If you want to distinguish identical people who are in totally different regions of spacetime, one characteristic to use would be their initial spacetime location relative to you, maybe with substantial tolerance so that when a person first becomes conscious (if they aren't already), it's not too important precisely when and where that happens for identifying them across worlds, but that might lead to weird discontinuities.
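A rough sketch of the partial-sum comparison described above (my own toy construction, truncated to finitely many locations and spheres, so the "limit" is replaced by the tail of a short sequence):

```python
def partial_sums(world: dict, spheres: list) -> list:
    # Sum of utilities over each sphere; worlds map locations to real utilities.
    return [sum(world.get(loc, 0.0) for loc in sphere) for sphere in spheres]

def expansion_compare(a: dict, b: dict, spheres: list, settle_after: int = 3) -> str:
    """Compare two worlds by the tail behaviour of their partial-sum differences."""
    diffs = [x - y for x, y in zip(partial_sums(a, spheres), partial_sums(b, spheres))]
    tail = diffs[-settle_after:]
    if all(d > 0 for d in tail):
        return "first is better"
    if all(d < 0 for d in tail):
        return "second is better"
    return "undetermined"

# Locations are integers 0, 1, 2, ...; "spheres" are nested initial segments.
spheres = [set(range(n)) for n in (1, 2, 4, 8, 16)]
base = {i: 1.0 for i in range(16)}       # everyone at 1
bumped = dict(base)
bumped[3] = 2.0                          # one person bumped up to 2
print(expansion_compare(bumped, base, spheres))  # -> 'first is better'
```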

comment by Dagon · 2022-01-31T21:39:53.716Z · LW(p) · GW(p)

[ epistemic status: feeling ignorant - unsure if I'm misinterpreting the claims, or disagreeing, or just surprised that this is presented as more settled and agreed than I expected. ]

First point of confusion: literally infinite, in the sense of moral-patient experiences? This implies that everything anyone can imagine (in our finite brains) happens, and an uncountable number of unimaginable things also happens. Or do you mean more figuratively infinite - a very (VERY) large number of possible experiences, and a smaller-but-still-large number of actual experiences.  

Second point (or maybe it should have been first): what's your unit of measure for ethically relevant "things"? I generally take it as distinct conscious experiences, but there are many other justifiable views. How do you weight duplicates (in all relevant dimensions) in your counting of "infinite"?

And thirdly, even if there IS an infinite number of relevant experiences that one cares about, doesn't any given being still make a finite number of decisions?  And thus every moral actor is purely finite.

Replies from: davidad, davidad, davidad
comment by davidad · 2022-01-31T22:56:28.788Z · LW(p) · GW(p)

Regarding the first point: there's a common confusion, which I'll call the Infinite Monkeys Fallacy, which says that if X is a truly infinite set, then it must eventually/somewhere contain x, for any x you can think of. This is confusing because, while there is an Infinite Monkey Theorem (saying that, as the number of independent samples from a uniform distribution goes to infinity, the probability that any given string of samples appears goes to 1), its independence and support assumptions don't generalize to infinite situations in general. A simple counterexample is that the set of all even numbers is infinite, yet never contains a single odd number. Infinite sets of parallel universes need not contain a single universe in which the Brooklyn Bridge is made of cheese. Nor does an infinite set of moral-patient experiences necessarily contain the experience of eating the Brooklyn Bridge. Even the pigeonhole principle doesn't necessarily apply here, because identical moral-patient experiences can count separately—so an infinite(ly indexed) set of moral-patient experiences could even contain just one actual experience, copied infinitely many times.

comment by davidad · 2022-01-31T23:04:07.069Z · LW(p) · GW(p)

Regarding the third point: even if every moral agent (and their option space and their mind) is always finite, it’s still conceivable that a single binary decision—made over the course of, say, a week—could involve sophisticated and perhaps even “correct” reasoning that balances the interests of an infinite set of moral patients. There seem to be at least some clear-cut cases, like choosing to create an infinite heaven instead of an infinite hell. The question here is then what formal normative principles we could apply to judge (and guide) such reasoning in general.

comment by davidad · 2022-01-31T22:59:09.546Z · LW(p) · GW(p)

Regarding the second point, I think the OP is agnostic on this point, referring to that unit of measure as “locations”, but letting it vary between individuals, observer-moments, spacetime locations, or possibly other notions. All you need to accept for the purpose of facing problems of infinite ethics is that it’s plausible that the set of morally relevant locations is infinite, whatever form they take.

Replies from: Dagon
comment by Dagon · 2022-01-31T23:52:11.857Z · LW(p) · GW(p)

I think "for the purpose of facing problems of infinite ethics" is my sticking point - I don't seem to have that purpose.  I don't think infinities apply in reality, and certainly not in ethics of any real agent.  It's NOT plausible to me that morally relevant locations are infinite.  And I'm unsure if that's because I'm confused about what you're talking about, or I've missed some argument about why I'm wrong on that topic.

Replies from: davidad
comment by davidad · 2022-02-01T09:56:16.143Z · LW(p) · GW(p)

I think it’s more the latter, although I’m not terribly convinced myself that you’re wrong. But the arguments you may or may not have missed are in the middle of OP’s section 1, with paragraphs starting “The causal way,” “The acausal way,” and “Perhaps you say.”

Personally, I am more open than most rationalists to the idea that there may be a way to escape heat death without actually violating Liouville’s theorem or other justifications for the Second Law. If so, that would make the set of future temporal locations infinite. Even if the amount of energy in the accessible universe is finite, and even if that implies the state-space of the universe is finite, and so its state trajectory must either terminate or repeat itself, maybe it would then repeat itself for 𝜔 many cycles, and so the question of which cycle of state space is entered first matters infinitely much.

comment by Zach Stein-Perlman · 2022-01-31T15:00:52.972Z · LW(p) · GW(p)

You mention that there are finite fanaticism problems (in addition to infinite ones), but I don't think you illustrated this. So just in case a reader is inclined to think they can solve fanaticism by somehow ignoring infinity—which would make ignoring infinity more appealing—here's an example of how you're still left with fanaticism:

We should have at least some small positive credence that long-term value is not linear in resources but exponential, and then this possibility dominates our expected utility, so that rather than maximizing expected resources we do something much closer to maximizing the maximum possible resources (even if extremely unlikely), with implications including building superintelligence as fast as possible (as long as you think it's more likely to optimize for value than to optimize for disvalue).

(Also versions of Pascal's Mugging that are truly finite, and acausal trade in some cases.)
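A toy calculation of the point above, with every number invented purely for illustration: even a one-in-a-million credence that value is exponential in resources makes a near-hopeless long shot at vast resources dominate a sure thing in expectation.

```python
# Toy numbers, purely illustrative.
credence_linear = 1 - 1e-6
credence_exponential = 1e-6     # tiny credence that value scales exponentially with resources

def expected_long_term_value(resources: float) -> float:
    # Expected value over the two hypotheses about how value scales with resources.
    return credence_linear * resources + credence_exponential * 2.0 ** resources

safe_bet = expected_long_term_value(100)           # guaranteed modest resources
long_shot = 1e-9 * expected_long_term_value(1000)  # near-hopeless shot at vastly more
print(long_shot > safe_bet)                        # -> True: the exponential hypothesis dominates
```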

comment by Ben (ben-lang) · 2022-08-17T17:08:46.535Z · LW(p) · GW(p)

Slightly orthogonal, but I think you are rating some of your infinite worlds really badly.

You say that you like the one you call "Zone of happiness" (you would rather live there than in some of the others). This strikes me as insane. To re-iterate: in the "Zone of happiness" there are an infinite number of miserable people, sharing the world with a finite number of happy people (that finite number is, however, growing). If this doesn't already sound bad, we can put some flesh onto the example to make it less abstract. Let's assume a world where there is a single happy person (the Tyrant) who rules over an infinite number of miserable subjects who attend to their wishes. Just to make the example more aggressive, we can assume that the people in this world are somewhat different from normal people and they can in fact only be happy if they have an infinite number of people serving them. One day the Tyrant falls in love or makes friends or whatever and uses the standard hotel-room trick (even-numbered slaves mine, odd yours) to make one other person a ruler over infinite subjects too (one more happy person). Then assume that this sort of thing happens at a steady rate averaging out to the 1-per-day you discuss. This is the "Zone of happiness".

Or, another "Zone of happiness". You live on a densely populated planet in a Galaxy scale empire spanning a million planets. You are sad and in pain. So is everyone you know, all the time. One day, millions of years into your life, you hear a rumour about a person in a distant galaxy who claims to actually be happy. On the same day you hear this God turns up and tells you that you are in fact in the "Zone of happiness" and that, of the infinite people in the universe, 1-per-day are becoming forever-happy. You suppose that the happy person in that far off galaxy you heard of is some lucky guy who finally hit there jackpot. Then you realise, No!, its infinitely more likely that the rumour is false, or that the person is lying. What are the chances that one of the happy people would just happen to be in your past light cone? Zero, that's what. (And that will still be true in a million more years).

comment by FireStormOOO · 2022-05-08T18:40:09.331Z · LW(p) · GW(p)

I find myself wanting to reach for an asymptotic function and mapping most of these infinities back to finite values.  I can't quite swallow assigning a non-finite value to infinite lizard.  At some point, I'm not paying any more for more lizard no matter how infinite it gets (which probably means I'd need some super-asymptote that continues working even as infinities get progressively more insane).

I'm largely on board with more good things happening to more people is always better, but I think I'd give up the notion of computing utilions by simple addition before accepting any of the above.

I also reject Pascal's wager, which is a (comparatively simple) instance of these infinite problems, for reasons that seem like they should generalize, but are hard to articulate.  My first stab would be something along the lines of my prior for any given version of heaven existing shrinks at least as fast as the values increase.  I think this follows from finite examples, e.g., if someone offers you a wager with a billion-dollar payout, the chances they're good for it are much less than for a million-dollar payout.  Large swaths of the insane results here stem from accepting bizarre wagers at face value; while that's a useful simplifying assumption for much of philosophy, I think it's one this topic has outgrown.  Absurdity heuristic is a keeper.
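A quick numeric sketch of the "prior shrinks at least as fast as the payout grows" idea (all numbers made up, and the credences here are unnormalized weights rather than a proper distribution): if credence in a payout of size 2^n falls off like 4^-n, the expected value of chasing ever-bigger promised payouts stays bounded; if it only falls off like 2^-n, the partial sums grow without bound.

```python
def expected_value(payout_growth: float, credence_decay: float, n_terms: int = 60) -> float:
    # Sum over promised payouts of size payout_growth**n, each weighted by credence_decay**n.
    return sum((payout_growth ** n) * (credence_decay ** n) for n in range(1, n_terms + 1))

print(expected_value(2.0, 1 / 4))  # credences shrink faster than payouts grow: converges toward 1
print(expected_value(2.0, 1 / 2))  # credences shrink exactly as fast: equals n_terms, unbounded
```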

comment by JonasMoss · 2022-04-19T12:25:08.608Z · LW(p) · GW(p)

Pareto: If two worlds (w1 and w2) contain the same people, and w1 is better for an infinite number of them, and at least as good for all of them, then w1 is better than w2.

As far as I can see, the Pareto principle is not just incompatible with the agent-neutrality principle, it's incompatible with set theory itself. (Unless we add an arbitrary ordering relation on the utilities or some other kind of structure.)

Let's take a look at, for instance,  vs , where  is the multiset containing  and  is the disjoint union. Now consider the following scenarios:

(a) Start out with  and multiply every utility by  to get . Since infinitely many people are better off and no one is worse off, .

(b) Start out with  and take every other of the -utilities from  and change them to . Since a copy of  is still left over, this operation leaves us with . Again, since infinitely many are better off and no one worse off, .

In conclusion, both  and , a contradiction.

Replies from: Slider, Zach Stein-Perlman, MichaelStJules
comment by Slider · 2022-04-19T13:48:59.128Z · LW(p) · GW(p)

In (b) the remaining copy is specifically missing those "upgraded individuals". It might contain the same number of people, but it is not clear to me that it contains the same people. Thus (b) is not an instance of applying Pareto.

Replies from: JonasMoss
comment by JonasMoss · 2022-04-19T13:59:09.874Z · LW(p) · GW(p)

I don't understand what you mean. The upgraded individuals are better off than the non-upgraded individuals, with everything else staying the same, so it is an application of Pareto.

Now, I can understand the intuition that (a) and (b) aren't directly comparable due to identity of individuals. That's what I mean with the caveat "(Unless we add an arbitrary ordering relation on the utilities or some other kind of structure.)"

Replies from: Slider
comment by Slider · 2022-04-19T17:41:47.348Z · LW(p) · GW(p)

Okay, the Pareto thing applies, but the formal contradiction has a problem in the (b) prong. Consider the multiset 1,2,3,4,5,6,7...: if you took every other element out of that you would get 1,3,5,7... There is no 2 in there, so there is no copy of the original remaining in there. Sure, if you have the multiset 0,0,0,0,0... you can have it as a multiset, but multisets track amounts. It is not sufficient that the members are the same object; the amounts need to match too. And in that dropping, the amounts (at least ought to) change. So the thinned-out multiset is not the same as the original, and you get something which is not an exact mirror of the (a) prong.

Replies from: JonasMoss
comment by JonasMoss · 2022-04-19T18:35:57.197Z · LW(p) · GW(p)

The number of elements in the all-zeros multiset won't change when removing every other element from it. Its cardinality is countable. And when you remove every other element, it is still countable, and indistinguishable from the original. If you're unconvinced, ask yourself how many elements the multiset with every other element removed contains. It is certainly not larger than the original, so it's at most countable. But it's certainly not finite either. Thus you're dealing with a set of countably many 0s. As there is only one such multiset, the original equals the original with every other element removed.

That there is only one such multiset follows from the definition of a multiset: a set of pairs (x, n_x), where x is an element and n_x is its cardinality. It would also be true if we defined multisets using sets containing all the individual pairs -- provided we ignore the identity of each pair. I believe this is where our disagreement lies. I ignore identities, working only with sets. I think you want to keep the identities intact. If we keep the identities, the two multisets are not equal, and my argument (as it stands) fails. 

Replies from: Slider, None
comment by Slider · 2022-04-19T23:07:14.023Z · LW(p) · GW(p)

To my mind the reduced set has ω/2 elements, which is less than ω. But yeah, it's part of a bigger pattern where I don't think cardinality is a very exhaustive concept when it comes to infinite set sizes. But I don't have that much knowledge, so I don't have a good alternative working conception around "ordinalities".

comment by [deleted] · 2022-04-30T21:40:42.750Z · LW(p) · GW(p)

Pareto explicitly says that you have to keep identities intact, because the definition stipulates that w1 and w2 "contain the same people." If you don't preserve identities, you can't verify that that condition is met, in which case Pareto isn't applicable.

comment by Zach Stein-Perlman · 2022-04-19T12:37:02.624Z · LW(p) · GW(p)

Yeah, so Pareto seems to require that we don't just think about the people in the universe in terms of set theory as you do, but instead maybe have something like a canonical order in which to compare people between universes... that seems to work for comparing worlds where (roughly) the people are the same but their utilities change; I'm not sure how to compare universes with people arranged differently using something like this set theory. Ideally we could think of infinite utilities as hyperreal numbers rather than in terms of sets; then there's no contradiction of this form.

Replies from: MichaelStJules
comment by MichaelStJules · 2022-04-19T19:59:31.648Z · LW(p) · GW(p)

I think full agent-neutrality/multisets will make it basically impossible for anything to matter in practice assuming the universe is infinite (maybe you can only care about finite universes, though). You'd need to change the number of individuals at some utility level to make any difference, but if the universe is infinite, the number of individuals at any given utility level is probably infinite, and you probably won't be able to change its cardinality predictably through normal acts that don't predictably affect weird possibilities of jumping between different infinite cardinals.

If a hyperreal approach gets past this, then it probably assumes additional structure, effectively an order.

comment by MichaelStJules · 2022-04-19T19:47:50.044Z · LW(p) · GW(p)

Multisets don't track identity and effectively bake agent-neutrality into them, so they don't have enough structure to express the Pareto principle properly. For Pareto, it's better to represent your worlds/universes/outcomes as vectors with a component for each individual, or functions mappings identities (or labels) to real numbers. Your set of identities can just be the natural numbers, or integers or whatever.

comment by London L. (london-l) · 2022-04-14T04:49:53.165Z · LW(p) · GW(p)

I have not read through this in its entirety, but it strikes me that an article I wrote about how the mathematical definition of infinity doesn't match human intuitions might be useful for people to read who are also interested in this material. I'm also fairly new here, so if cross posting this isn't okay, please let me know.

https://london-lowmanstone.medium.com/comparing-infinities-e4a3d66c2b07

comment by MichaelStJules · 2022-02-04T05:44:05.553Z · LW(p) · GW(p)

The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff. 

 

You can reduce the sensitivity to the spatio-temporal distribution of value, at the cost of ranking less of the stuff that's intuitively ambiguous anyway, by doing the comparison between two worlds over multiple expansions from a set of possible expansions (e.g. multiple starting points with uniformly expanding spacetime spheres), and using some rule to combine the comparisons that's impartial with respect to the different expansions. For example, if w₁ > w₂ under at least one expansion and w₂ > w₁ under no expansions (or w₁ > w₂ under all expansions, which will be transitive as long as the ranking over any fixed expansion is transitive), then w₁ > w₂ overall. Basically, this is Pareto over possible expansions.
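A compact toy version of that combination rule (my own construction, truncated to finitely many locations and just two expansion orderings): declare one world better overall only if it wins under at least one expansion and loses under none.

```python
def compare_under(order: list, w1: dict, w2: dict, prefix_sizes=(1, 2, 4, 8, 16)) -> int:
    """Sign of the partial-sum difference on the tail of one (truncated) expansion order."""
    diffs = [sum(w1.get(x, 0.0) - w2.get(x, 0.0) for x in order[:n]) for n in prefix_sizes]
    tail = diffs[-3:]
    if all(d > 0 for d in tail):
        return 1
    if all(d < 0 for d in tail):
        return -1
    return 0

def pareto_over_expansions(w1: dict, w2: dict, orders: list) -> str:
    signs = [compare_under(order, w1, w2) for order in orders]
    if any(s > 0 for s in signs) and not any(s < 0 for s in signs):
        return "w1"
    if any(s < 0 for s in signs) and not any(s > 0 for s in signs):
        return "w2"
    return "incomparable"

locations = list(range(16))
orders = [locations, list(reversed(locations))]  # two different expansion orderings
w1 = {i: 1.0 for i in locations}
w2 = dict(w1)
w2[0] = 0.5                                      # one person slightly worse off in w2
print(pareto_over_expansions(w1, w2, orders))    # -> 'w1'
```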

comment by Joe_Collman · 2022-02-02T08:48:08.619Z · LW(p) · GW(p)

Excellent post. Thanks for writing this.

"...expansionism violates ... Pareto over agents..."

I don't think this statement makes sense:
Expansionism specifies no mapping between equivalent agents.
Pareto must specify a mapping identifying equivalent pairs of agents.
For a given pair of worlds, expansionism will usually violate Pareto for some mappings and not others - because it must: Pareto gives different answers with different mappings.

[I believe I'm disagreeing with Askell 2018 here; I'm genuinely confused that she seems to be making a simple error - so it's entirely possible that I'm just genuinely confused :)]

E.g. in the Balmy/Blustery case, Pareto tells us that with the mapping taking X/Y/Z Balmy to X/Y/Z Blustery, we should prefer Blustery (call this the XYZ mapping).

However, with the mirror mapping (draw a line between Balmy and Blustery on the page, map each person to the person in their reflected position), Pareto tells us we should prefer Balmy: the left-siders are better off, and the right-siders are no worse off.
(with many mappings Pareto doesn't tell us to prefer either world)

Now one can say that obviously we should use the XYZ mapping (otherwise the letters don't match!) - and another can say that obviously we should use the mirror mapping (otherwise the mirrored positions don't match!). We can't say "Just look for the person that's the same in the other world": equivalent persons won't be the same in all respects, since in general they'll have different welfare levels.

It's true that the XYZ mapping was chosen as part of the example setup. However, this doesn't make it any less arbitrary - and importantly this mapping is not an argument that expansionism can 'see': expansionism takes two worlds and a space-time mapping.

When making claims like "...we can see that Blustery is better than Balmy by Pareto..." (Askell p84), we ought to specify the mapping. Expansionism must disagree here with one of Pareto_XYZ or Pareto_mirror.
This disagreement can be reasonably be called a violation of Pareto_XYZ, but not of Pareto.

None of this is to say that we need to throw out Pareto - only that we need to be clear on what it is/isn't saying. Being guided by Pareto means bringing in a kind of path-dependence.

To take another example, consider the trade T = [all-planets-closer-together] for [any finite suffering] situation. Pareto_Obvious says that the original planet people are no better off, while the new suffering people are clearly worse off, so T is a bad trade-off.
However, expansionism doesn't see the Obvious agent mapping for T. From expansionism's point of view, it's as if we left the planet people at approximately their original world position, and created an infinite number of new happy people to increase the population density as necessary. (and note that we could have done something like this, to get to a final world similar in all important respects)

For this reason, it seems a mistake to say "No one's thanking you for pulling those planets closer together. In fact, no one noticed.".
First, people's noticing isn't the point. You could create an infinite number of happy people who believed they'd always existed and none of them need notice. In either case, what matters is the moral significance of creating them / moving them.

Second, moving an infinite number of people can get you to the same state as creating an infinite number of people. It's not clear that moving infinite numbers of people should have little moral significance. (the [moving-shouldn't-matter-much] intuition is based on moving finite numbers of things; in the infinite case, various important invariants cease to hold (e.g. invariant total density; invariant number of occupied (/empty) positions))
[I'm not at all sure what I think here; intuitively, it seems that creating one happy person in exchange for dividing happy-person-density by Graham's number is a terrible trade (so I'm picking expansionism over Pareto); however, I worry I might be focusing on what it's like for an observer to examine some region of that world, rather than on the inherent welfare of the world itself]

That said, I think it's fine to take the stance that it does matter whether we [moved people] or [created people] to reach a given world state. Expansionism doesn't have to care, but for Pareto it can be important.

Replies from: None
comment by [deleted] · 2022-04-30T21:46:13.201Z · LW(p) · GW(p)

>We can't say "Just look for the person that's the same in the other world": equivalent persons won't be the same in all respects, since in general they'll have different welfare levels.

Askell extensively argues for why you should be able to do that in the first part of her thesis. For one thing, it's highly implausible to say that differing welfare levels alone necessarily imply an alteration in personal identity. It seems obvious that my life could have been happier or sadder,  at least to some extent, without my being a different person. For another, your condition for transworld identity more broadly is way too strong: no philosopher that I know of thinks that I need to be the same in every respect in some other world in order for the person in that other world to be "me."

comment by JonasMoss · 2022-04-19T12:24:33.098Z · LW(p) · GW(p)