The Upper Limit of Value
post by Davidmanheim · 2021-01-27T14:13:09.510Z · LW · GW · 25 comments
I am happy to announce a new paper I co-wrote with Anders Sandberg, which is now a public preprint (Note: PDF). The abstract is below, followed by a brief sketch of some of what we said in the paper.
Abstract: How much value can our decisions create? We argue that unless our current understanding of physics is wrong in fairly fundamental ways, there exists an upper limit of value relevant to our decisions. First, due to the speed of light and the definition and conception of economic growth, the limit to economic growth is a restrictive one. Additionally, a related far larger but still finite limit exists for value in a much broader sense due to the physics of information and the ability of physical beings to place value on outcomes. We discuss how this argument can handle lexicographic preferences, probabilities, and the implications for infinite ethics and ethical uncertainty.
Physics is Finite and the Near-Term
First, there is a claim underlying our argument: that our current understanding of physics is sufficient to conclude that the accessible universe is finite in volume, in time, and in the amount of information which can be stored. (The specific arguments for this are in the appendix of the paper.) We also assume humans are physical beings, without access to value unconnected to the physical world. Anything valued in their minds is part of a physical process.
Given those two claims, we start out with a discussion of purely economic value, and the short-term future, specifically the next 100,000 years. During that time, the speed of light means that humanity will only have access to the Milky Way Galaxy. In the optimistic case that we colonize the galaxy, the rate of growth in economic value is limited to the polynomial increase in accessible matter and volume of space. This implies that indefinite exponential economic growth is impossible. In fact, as we suggest in the paper, the limit to exponential growth is almost certainly well below 1% over that time frame.
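For a sense of scale, here is a rough back-of-the-envelope check in Python (my own round numbers, not figures from the paper):

```python
import math

# Rough back-of-the-envelope check (my round numbers, not the paper's):
# compare cumulative 1% annual growth over 100,000 years with a rough
# order-of-magnitude estimate of ~10^69 atoms in the Milky Way.
years = 100_000
orders_of_growth = years * math.log10(1.01)  # ~432 orders of magnitude

print(f"1% annual growth for {years:,} years multiplies value by ~10^{orders_of_growth:.0f}")
print("...while the Milky Way holds only ~10^69 atoms, so value per atom")
print("would have to grow by hundreds of orders of magnitude.")
```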
This has some interesting implications for economic discussions about the proper discount rate for the far-future, for the hinge-of-history hypothesis [EA · GW], and the argument that humanity will reach an economic singularity - or at least one where growth will continue indefinitely at an accelerating pace.
Value-in-General is Finite, Even When it Isn't
The second half of our paper discusses value more generally, in the philosophical sense. Humans often remark that some things, like human life, are "infinitely valuable." Despite economic evidence that this is not literally true, even taking the claim at face value, we argue that value is still limited.
In philosophy, preferences involving infinities are referred to as "lexicographic," in the sense used in computer science to refer to sorting. Any amount of a "lexicographically inferior" good, like blueberries, is less useful than a single "lexicographically superior" good, say, human lives. Still, in a finite universe, no infinities are needed to represent this "infinite preference." To quote from the paper:
We can consider a finite universe with three goods and lexicographic preferences $A \succ B \succ C$. We denote the number of each good $a, b, c$, and the maximum possible of each in the finite universe as $a_{\max}, b_{\max}, c_{\max}$. Set $U(a,b,c) = a \cdot (b_{\max}+1)(c_{\max}+1) + b \cdot (c_{\max}+1) + c$. We can now assign utility $U(a,b,c)$ for a bundle of goods $(a,b,c)$. This assignment captures the lexicographic preferences exactly. This can obviously be extended to any finite number of goods $x_1, \dots, x_n$, with a total of $n$ different goods, with any finite maximum of each.
(You should read the paper for a fuller account of the argument, and for the footnotes that I left out of this quote.)
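For concreteness, here is a minimal sketch of the construction in Python (my own illustration with toy numbers, not code from the paper):

```python
# Minimal sketch (illustrative, not from the paper): embed lexicographic
# preferences A > B > C into a single real number, given finite maxima.
def lex_utility(a, b, c, b_max, c_max):
    """Order bundles lexicographically: one extra unit of A outweighs
    every possible combination of B and C, and likewise B over C."""
    return a * (b_max + 1) * (c_max + 1) + b * (c_max + 1) + c

# A single unit of the superior good beats the maximal bundle of inferior goods:
assert lex_utility(1, 0, 0, b_max=10, c_max=10) > lex_utility(0, 10, 10, b_max=10, c_max=10)
```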
The above argument does not deal with expected utility, but in the paper we claim that not only are zero and one not probabilities [LW · GW], but neither are $\epsilon$ nor $1-\epsilon$ for infinitesimal $\epsilon$. That is, we argue that it would be effectively incoherent to assign an infinitesimal probability in order to reach an infinite expected value. We also discuss why Boltzmann brains and non-causal decision theories don't refute this claim - but for all of those, you'll need to read the paper.
Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc. Finally, I'll quote the paper a final time for the acknowledgements - not only was it awesome for me to co-write a paper with Anders, but we got feedback from a variety of really incredible people.
We are grateful to the Global Priorities Institute for highlighting these issues and hosting the conference where this paper was conceived, and to Will MacAskill for the presentation that prompted the paper. Thanks to Hilary Greaves, Toby Ord, and Anthony DiGiovanni, as well as to Adam Brown, Evan Ryan Gunter, and Scott Aaronson, for feedback on the philosophy and the physics, respectively. David Manheim also thanks the late George Koleszarik for initially pointing out Wei Dai's related work in 2015, and an early discussion of related issues with Scott Garrabrant and others on asymptotic logical uncertainty, both of which informed much of his thinking in conceiving the paper. Thanks to Roman Yampolskiy for providing a quote for the paper. Finally, thanks to Selina Schlechter-Komparativ and Eli G. for proofreading and editing assistance.
25 comments
comment by CarlShulman · 2021-01-27T22:36:22.642Z · LW(p) · GW(p)
Thanks David, this looks like a handy paper!
Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc.
I don't agree with the argument that infinite impacts of our choices are of Pascalian improbability; in fact, I think we probably face them as a consequence of one-boxing decision theory, and some of the more plausible routes to local infinite impact are missing from the paper:
- The decision theory section misses the simplest argument for infinite value: in an infinite inflationary universe with infinite copies of me, my choices are multiplied infinitely. If I would one-box on Newcomb's Problem, then I would take the difference between eating the sandwich and not eating it to be scaled out infinitely. I think this argument is in fact correct and follows from our current cosmological models combined with one-boxing decision theories.
- Under 'rejecting physics' I didn't see any mention of baby universes, e.g. Lee Smolin's cosmological natural selection. If that picture were right, or anything else in which we can affect the occurrence of new universes/inflationary bubbles forming, then that would permit infinite impacts.
- The simulation hypothesis is a plausible way for our physics models to be quite wrong about the world in which the simulation is conducted, and further, there would be reason to think simulations would be disproportionately conducted under physical laws that are especially conducive to abundant computation.
↑ comment by Davidmanheim · 2021-01-28T07:35:52.591Z · LW(p) · GW(p)
Thanks for this!
If I understand your initial point, I agree that the route to infinite value wouldn't be through infinitesimal probabilities, as we say in the paper. I'm less sure what you mean by "one-boxing decision theory" - we do discuss alternative decision theories briefly, but find only a limited impact of even non-causal decision theories without also accepting multiverses, and not renormalizing value.
Regarding "in an infinite inflationary universe with infinite copies of me," we point out in the paper that the universe cannot support infinite copies of anything, since it's bounded in mass, space, and time - see A.2.1 and A.4. You suggest that there may be ways around this in your next two claims.
Regarding baby universes, perhaps we should have addressed them - as we noted in the introduction, we limited the discussion to a fairly prosaic setting. However, assuming Smolin's model, we still have no influence on the contents of the baby universes. If we determined that those universes were of positive value, despite having no in-principle way of determining their content or accessing them, then I could imagine that tiling the universe with black holes to maximize the number of such universes is a possible optimal strategy - and the only impact of our actions with infinite values is the number of black holes we create.
Finally, if we accept the simulation hypothesis, we again have no necessary access to the simulators' universe. Only if we both accept the hypothesis and believe we can influence the parent universe in determinable ways can we make decisions that have an infinite impact. In that case, infinite value is again only accessible via this route.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-28T09:46:29.821Z · LW(p) · GW(p)
Finally, if we accept the simulation hypothesis, we again have no necessary access to the simulators' universe. Only if we both accept the hypothesis and believe we can influence the parent universe in determinable ways can we make decisions that have an infinite impact. In that case, infinite value is again only accessible via this route.
This seems like an isolated demand for... something. If we accept the simulation hypothesis, we still have a credence distribution over what the simulators' universe might be like, including what the simulators are like, what their purpose in creating the simulation is, etc. We don't need to believe we can influence the parent universe "in determinable ways" to make decisions that take into account possible effects on the parent universe. We certainly don't need "necessary access." We don't have "necessary access" to pretty much anything. Or maybe I just don't know what you mean by these quoted phrases?
↑ comment by Davidmanheim · 2021-01-28T10:52:03.293Z · LW(p) · GW(p)
That's fair. What I meant by necessary access, but said unclearly, was that for there to be infinite value, we need to not only accept the simulation hypothesis, but also require that there be some possible influence / access - it's necessary to assume both. And yes, if we have some finite probability that both the simulation hypothesis is true and that our actions could affect the simulators, I agree that we can have some credence over how we could influence their universe, which means that we could have access to infinite value. But as noted, the entire access to infinite value is still conditional on whatever probability we assign to this joint condition. And in that case, if we care about total value, 100% of all expected value is riding on that single possibility.
↑ comment by CarlShulman · 2021-01-28T14:31:27.047Z · LW(p) · GW(p)
I suppose by 'the universe' I meant what you would call the inflationary multiverse, that is including distant regions we are now out of contact with. I personally tend not to call regions separated by mere distance separate universes.
"and the only impact of our actions with infinite values is the number of black holes we create."
Yes, that would be the infinite impact I had in mind: doubling the number of black holes would double the number of infinite branching trees of descendant universes.
Re simulations, yes, there is indeed a possibility of influencing other levels, although we would be more clueless, and it is a way for us to be in a causally connected patch with an infinite future.
↑ comment by Davidmanheim · 2021-01-28T16:31:26.529Z · LW(p) · GW(p)
We tried to be clear that we were discussing influenceable value, i.e. value relevant for decisions. Unreachable parts of our universe, which are uninfluenceable, may not be finite, but not in a way that changes any decision we would make. I agree that they are part of the universe, but I think that if we assume standard theories of physics, i.e. without child universes and without assuming simulation, the questions in infinite ethics don't make them relevant. But we should probably qualify these points more clearly in the paper.
↑ comment by CarlShulman · 2021-01-28T21:06:09.275Z · LW(p) · GW(p)
As I said, the story was in combination with one-boxing decision theories and our duplicate counterparts.
comment by denkenberger · 2021-01-30T18:03:16.859Z · LW(p) · GW(p)
this gives a paltry annual return on investment of 0.075%
which seems large until we note that it implies an annualized rate of return of 0.08%; far more than our estimate above, but a tiny rate of return.
Am I comparing the right numbers? It doesn’t seem like far more to me.
↑ comment by Davidmanheim · 2021-01-31T06:52:33.673Z · LW(p) · GW(p)
It's a huge difference in actual value, and a small difference in rate of return. (But we should edit the text to clarify this.)
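To make that concrete (my arithmetic, using the two quoted rates): a 0.005-percentage-point gap in annual rate compounds into a large gap in final value.

```python
# My arithmetic, not from the paper: 0.08% vs 0.075% annual growth is a tiny
# rate difference, but compounded over 100,000 years it yields roughly a
# 150x difference in final value.
years = 100_000
ratio = (1.0008 / 1.00075) ** years
print(f"final-value ratio after {years:,} years: ~{ratio:.0f}x")  # ~150x
```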
comment by Slider · 2021-01-28T12:41:11.708Z · LW(p) · GW(p)
The proof that lexicographic preferences can be embedded in a real function can also be walked backwards: a decider who doesn't know the upper limits of the goods they opine on can't collapse their choices onto a single Archimedean class, but must keep them separate, essentially having a necessity for infinite values. A system that tries to collapse anyway will have to decide on a "margin" between good classes, and risks encountering a multiple of one class that crosses over the margin. That is, someone getting themselves killed over 1 million bananas might have as the reason that their reasoning capabilities are not designed to work on over 1000 bananas.
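A minimal sketch of that failure mode (illustrative weights, not from the comment or the paper):

```python
# Illustrative sketch: collapsing lexicographic preferences into fixed real
# weights forces a "margin", and a large enough pile of the inferior good
# crosses it. The weights here are made up for the example.
LIFE_WEIGHT = 1000.0   # margin chosen assuming nobody offers more than 1000 bananas
BANANA_WEIGHT = 1.0

def utility(lives, bananas):
    return lives * LIFE_WEIGHT + bananas * BANANA_WEIGHT

# Within the assumed range, the collapsed function behaves lexicographically...
assert utility(1, 0) > utility(0, 999)
# ...but a million bananas crosses the margin, "buying" a life:
assert utility(0, 1_000_000) > utility(1, 0)
```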
The arguments about Boltzmann brains seem a little strange. If I close my eyes and can't tell a good state of the world from a bad state of the world, then yes, I can't systematically use my sense data to get a good outcome. But this seems more a statement about my epistemics than about the outside world. If a butterfly can't expect a hurricane, does that mean that hurricanes are ethically irrelevant? Any given actor probably has a horizon on how far they can predict the future. But trying to get a result that the universe would have a limit where nobody could predict what happens is tantamount to saying that causality will break down.
↑ comment by Davidmanheim · 2021-01-28T16:35:15.790Z · LW(p) · GW(p)
The first argument is correct, and if we take lexicographic preferences to be more than exaggerations, that implies that finding a bound is important.
The ethical relevance argument was not that we can't tell, but that we cannot influence the end state in a meaningful way. Prediction is different from influenceability. And yes, post heat-death, I would think that causality has broken down in any meaningful sense.
↑ comment by Slider · 2021-01-28T16:59:55.537Z · LW(p) · GW(p)
If something is determined by a pseudorandom generator that is initialised with a seed, and I have control over what the seed is, then I can "influence" the result in that if I switch the seed the outcome will be something different; but in another sense I can't "influence" it, in that I can't force it into a goal state. That I believe my actions will have the same effect doesn't mean they will, and there is a difference between not knowing and something being unable to be known.
I guess I am missing the detail on what part of their construction makes them uninfluenceable. To my understanding, after different "orderly phases" of the universe, the resulting Boltzmann soup is different, i.e. what happens before heat death is correlated with what happens after heat death.
↑ comment by Davidmanheim · 2021-01-31T06:48:32.188Z · LW(p) · GW(p)
It's true that the actual evolution post-heat death will depend on the state now, but 1) the distribution of states is not dependent on the seed, and 2) the result isn't pseudorandom, it's truly random.
↑ comment by Slider · 2021-02-09T21:19:48.061Z · LW(p) · GW(p)
I might be a bit out of my depth, but if there is a distinction between an "actual evolution" and a "potential evolution", the "representativeness" of the potential evolution has aspects of epistemology in it. If I have a large macrostate and let a thermodynamic simulation run, then I collapse more quickly into a single mess where the start-condition delineations don't allow me to make useful distinctions. If I define my macrostates more narrowly, i.e. have more resolution in the simulation, this will take longer. For any finite horizon, there should be a narrow enough accuracy on the detailedness of the start state that it retains usefulness - if an absolute-zero simulation is possible (as, at least on paper with assumptions, it can be).
If I just know that there is a door A and a door B, then I can't make any meaningful distinction about which door is better (I guess I could arbitrarily prefer one over the other). If I know that behind one of the doors is a donkey and the other has a car, I can make much more informed decisions. In a given situation, how detailed a model I apply depends on my knowledge and sensory organs. However, me not being able to guess the right door doesn't mean that cars cease to be valuable. In Monty Hall, switching is preferable. The point about the distributions being the same would be akin to saying that the decision procedure used to pick the door doesn't matter, as any door is as good as any other. But if there are different states behind different doors - i.e. it is not an identical superposition of car and donkey behind each door, but some doors have cars and some have donkeys - then door choice does matter.
I kinda maybe know that quantum mechanics has elements which are more properly random than pseudorandom. However, quantum computing is reversible, and the black hole information paradox would suggest that physicists don't treat quantum effects as making states an indistinct mess; it is a distinct mess, where entanglements and other things make it tricky to keep track of stuff, but it doesn't come at the sacrifice of clockworkiness.
In particular, quantum mechanics has entanglement, which means that even if a classical mechanism is "fuzzed" by exposure to true quantum spread, that spread is often correlated; that is, entangled states are produced which have the potential to keep choices distinct. For example, if Monty chooses the valid door to reveal via a true quantum coin, the situation can still be benefitted from by switching. Even if the car is in an equal superposition behind any of the doors, if Monty opens correct doors (i.e. Monty's reveal is entangled to never reveal a car), then the puzzle remains solvable. Just the involvement of actual randomness isn't sufficient to say that distinctions are impossible, but I lack the skill to distinguish what the requirements for that would be.
However, if there were true "washing out", then the correlation between the orderly and the random should be broken. If a coin is conditional on what happens before the flip, then it is not a fair coin.
↑ comment by Davidmanheim · 2021-02-10T09:42:46.113Z · LW(p) · GW(p)
This seems confused in a bunch of ways, but I'm not enough of an expert in quantum mechanics, chaos theory, or teaching to figure out where you're confused. Anders might be able to help - but I think we'd need a far longer discussion to respond and explain this.
But to appeal to authority, when Scott Aaronson looked at the earlier draft, he didn't bring up any issues with quantum uncertainty as a concern, and when I check back in with him, I'll double check that he doesn't have any issues with the physics.
↑ comment by Slider · 2021-02-10T12:59:31.092Z · LW(p) · GW(p)
To the extent Boltzmann brains can be understood as a classical process, I think they are, or can be viewed as, pseudorandom phenomena. For quantum, I do not really know. I do not know whether the paper intends to invoke quantum mechanics to get them that property.
The claim in the paper that they are "inaccessible by construction" is very implicit, requires a lot of accompanying assumptions, and does a lot of work for the argument's turn.
Numerology analog:
Say that some strange utility function wants to find the number that contains the maximum number of codings of the string "LOL", as a kind of smiley-face maximiser. Any natural number, when turned into binary and then into a string, can only contain a finite number of such codings, because there are only a finite number of 1s in the binary representation. For any rational number turned into a binary decimal, there is going to be a period in the representation, and the period can only contain finitely many copies. The optimal rational number would be one where the period is exactly "lol". However, for transcendental numbers there is no period. Also, most transcendental numbers are "fair" in the sense that each digit appears approximately as often as any other, and additionally fair in that bigger combinations converge to even statistics. When the lol-maximiser tries to determine whether it likes pi or phi more as numbers, it is going to find infinitely many lols in both. However, it would be astonishing if they contained the exact same amount of lols. The difference in lols is likely to be vanishingly small, i.e. infinitesimal. But even if we can't computationally check the matter, the difference exists before it is made apparent to us. The utility function of the lol-maximiser over the reals probably can't be expressed as a real function.
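A toy version of this counting argument (an illustrative sketch: an arbitrary 6-bit string stands in for a coding of "lol", and sqrt(2) stands in for a number with a "fair"-looking expansion):

```python
from math import isqrt

# Toy sketch of the "lol-maximiser" (illustrative; the 6-bit pattern is an
# arbitrary stand-in coding, and sqrt(2) stands in for a "fair" expansion).
PATTERN = "101100"

def count(bits):
    return sum(bits[i:i + len(PATTERN)] == PATTERN for i in range(len(bits)))

n = 6000
rational_bits = PATTERN * (n // len(PATTERN))  # periodic expansion whose period is the coding
sqrt2_bits = bin(isqrt(2 * 4**n))[3:]          # first n fractional bits of sqrt(2)

print("periodic rational:", count(rational_bits))  # ~n/6 hits: the densest possible
print("sqrt(2):          ", count(sqrt2_bits))     # ~n/64 hits if digits look "fair"
```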
While the difference between Boltzmann histories might be small, if we want to be exact about preference preservation then the differences need to cancel exactly. Otherwise we are discarding lexicographic differences (it is common to treat a positive amount less than any real as exactly 0). There is a difference between vanishingly different and indifferent, and distributional sameness only gets you to vanishingly different.
comment by Davidmanheim · 2022-12-15T14:28:17.154Z · LW(p) · GW(p)
I think that the work we did on the question of finite or infinite value settles an important practical question about whether, in the real world, we need to think about infinite value. While there are remaining objections, I think it is clear that the possibility of infinite value is conditional on specific factual and improbable claims, and because of the conditionals involved, this has minimal to no impact on decision-making more generally, since most choices do not involve those infinities - and so finite decision theories should suffice, and attempts to address the more general problem are unneeded.
My hope for the post, and the paper, is mostly to suggest that while further work on the topic might be interesting, it is low value.
comment by ForensicOceanography · 2021-01-30T18:35:28.859Z · LW(p) · GW(p)
Hello, thank you for sharing the paper. This is an interesting philosophical point; however, I have the impression that your conclusion is not true for all possible value functions.
All the examples in your paper assume that the value of a commodity is linear in the amount of it (for example, plutonium costs $5/mg). But what happens if the value of a commodity increases with time? For example, a never-ending plutonium speculation bubble could make 1 mg of plutonium cost $1 × exp(kt), for some k.
I feel that you have a point, but I think that you should axiomatize what properties the "value" function fulfills. If I do not want to sell my stuffed toy for any price, does it mean that it has an infinite price? Does your argument still hold if we replace "value" with "beauty" or any other undefined concept, or is this an argument specific for economic value? If yes, what is your definition of an "economic value" functional?
Maybe you have already answered this kind of objection and I did not notice it.
↑ comment by Davidmanheim · 2021-01-31T06:50:28.774Z · LW(p) · GW(p)
"All the examples in your paper assume that the value of a commodity is linear in the amount of it" No, this is only assumed for the economic value, and does not change the finitude of the value. Also see the discussion about exponential versus polynomial growth.
"If I do not want to sell my stuffed toy for any price" See the discussion of lexicographic utility.
↑ comment by ForensicOceanography · 2021-02-04T07:03:28.896Z · LW(p) · GW(p)
You are saying that you can always redefine the value function to be finite, while maintaining the lexicographic order.
Fair enough, but then your "value" is no longer a measurement of the amount of effort/money/resources you would be willing to pay for something. It is just a real function with the same order relationship on the set of objects.
It certainly is possible to construct a "value" function which is finite over all the possible states of the universe, I totally agree. But is this class of functions the only logically possible choice?
↑ comment by Davidmanheim · 2021-02-08T08:05:20.266Z · LW(p) · GW(p)
>then your "value" is no longer a measurement of the amount of effort/money/resources you would be willing to pay for something
No, that's exactly what I'm saying isn't true. If the preference order for bundles of goods (which include effort/money/etc.) doesn't change, no decision - including tradeoffs between effort/money/resources - will change.
↑ comment by ForensicOceanography · 2021-02-08T15:36:09.818Z · LW(p) · GW(p)
Ok, now I get it.
comment by MichaelStJules · 2021-01-28T04:24:05.870Z · LW(p) · GW(p)
Cool!
On your lexicographic utility function, I think it's pretty ad hoc that it depends on explicit upper bounds on the quantities, which will depend on the specifics of our universe, but you can manage without them and allow unbounded quantities (and countably infinitely many, but I would be careful going further), unfortunately at the cost of additivity. I wrote about this here [EA(p) · GW(p)].
↑ comment by Davidmanheim · 2021-01-28T07:43:55.746Z · LW(p) · GW(p)
Thanks. I agree that the lexicographic utility we defined is ad hoc, and agree that it is not unique. Of course, there are infinitely many utility functions that represent the same preferences, since they are defined only up to an affine transformation - and once we are representing "infinite" preferences in a finite universe, the class is slightly larger, since we can place as (finitely) large a space between lexically different goods as we want and still represent the same preferences. I'm unsure if your formulation has any difference in terms of ability to represent preferences.
↑ comment by MichaelStJules · 2021-01-28T23:53:54.522Z · LW(p) · GW(p)
My formulation can handle lexicality according to which any amount of A (or anything greater than a certain increment in A) outweighs any (countable amount) of B, not just finite amounts up to some bound. The approach you take is more specific to empirical facts about the universe; if you want it to give a bounded utility function, you need a different utility function for different possible universes. If you learn that your bounds were too low (e.g. that you can in fact affect much more than you thought before), in order to preserve lexicality, you'd need to change your utility function, which is something we'd normally not want to do.
Of course, my approach doesn't solve infinite ethics in general; if you're adding goods and bads that are commensurable, you can get divergent series, etc. And, as I mentioned, you sacrifice additivity, which is a big loss.