Is the potential astronomical waste in our universe too small to care about?
post by Wei_Dai
In the not-too-distant past, people thought that our universe might be capable of supporting an unlimited amount of computation. Today our best guess at cosmology is that our universe stops being able to support any kind of life or deliberate computation after a finite amount of time, during which only a finite amount of computation can be done (on the order of 10^120 operations).
Consider two hypothetical people a few years ago: Tom, a total utilitarian with a near-zero discount rate, and Eve, an egoist with a relatively high discount rate. At the time, they thought there was .5 probability the universe could support doing at least 3^^^3 ops and .5 probability the universe could only support 10^120 ops. (These numbers are obviously made up for convenience and illustration.) It would have been mutually beneficial for these two people to make a deal: if it turns out that the universe can only support 10^120 ops, then Tom will give everything he owns to Eve, which happens to be $1 million, but if it turns out the universe can support 3^^^3 ops, then Eve will give $100,000 to Tom. (This may seem like a lopsided deal, but Tom is happy to take it, since the potential utility of a universe that can do 3^^^3 ops is so great for him that he really wants any additional resources he can get in order to help increase the probability of a positive Singularity in that universe.)
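To make the made-up numbers concrete, here is a toy expected-utility check of the deal. The utility-per-dollar figures are invented for illustration, and 3^^^3 is replaced by a large stand-in, since the real number is far too large to represent:

```python
# Illustrative expected-value check of the Tom/Eve deal.
# All utility figures are made up; the "big universe" payoff is a
# stand-in for something vastly larger (3^^^3-scale).

P_BIG = 0.5    # P(universe supports at least 3^^^3 ops)
P_SMALL = 0.5  # P(universe supports only ~10^120 ops)

# Tom (total utilitarian, near-zero discount rate): a marginal dollar
# is worth vastly more to him in the big universe, where it can help
# secure an astronomically valuable future.
TOM_UTIL_PER_DOLLAR_BIG = 1e9    # stand-in for an enormous value
TOM_UTIL_PER_DOLLAR_SMALL = 1.0

# Eve (egoist, high discount rate): a dollar is worth about the same
# to her in either universe.
EVE_UTIL_PER_DOLLAR = 1.0

def tom_ev(deal: bool) -> float:
    """Tom's expected utility from the deal (0 if no deal)."""
    if not deal:
        return 0.0
    gain_big = 100_000 * TOM_UTIL_PER_DOLLAR_BIG        # Eve pays him
    loss_small = -1_000_000 * TOM_UTIL_PER_DOLLAR_SMALL  # he pays Eve
    return P_BIG * gain_big + P_SMALL * loss_small

def eve_ev(deal: bool) -> float:
    """Eve's expected utility from the deal (0 if no deal)."""
    if not deal:
        return 0.0
    gain_small = 1_000_000 * EVE_UTIL_PER_DOLLAR  # Tom pays her
    loss_big = -100_000 * EVE_UTIL_PER_DOLLAR     # she pays Tom
    return P_SMALL * gain_small + P_BIG * loss_big

# Both parties gain in expectation, so the trade goes through.
```

Under these (invented) numbers, both `tom_ev(True)` and `eve_ev(True)` exceed the no-deal baseline of zero, which is all the argument needs.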
You and I are not total utilitarians or egoists, but instead are people with moral uncertainty. Nick Bostrom and Toby Ord proposed the Parliamentary Model for dealing with moral uncertainty, which works as follows:
Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability. Now imagine that each of these theories gets to send some number of delegates to The Parliament. The number of delegates each theory gets to send is proportional to the probability of the theory. Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting. What you should do is act according to the decisions of this imaginary Parliament.
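As a rough sketch of the mechanism (the seat total, rounding rule, and plurality voting below are my own assumptions — Bostrom and Ord don't specify them):

```python
from collections import Counter

def allocate_delegates(credences: dict[str, float], seats: int = 100) -> Counter:
    """Give each moral theory seats in proportion to its probability.

    Uses largest-remainder rounding; the Parliamentary Model doesn't
    specify a rounding rule, so this is just one possible choice.
    """
    quotas = {t: p * seats for t, p in credences.items()}
    out = Counter({t: int(q) for t, q in quotas.items()})
    leftover = seats - sum(out.values())
    for t in sorted(quotas, key=lambda t: quotas[t] - int(quotas[t]),
                    reverse=True)[:leftover]:
        out[t] += 1
    return out

def vote(delegates: Counter, preferences: dict[str, str]) -> str:
    """Each delegate votes for its theory's preferred option; plurality wins."""
    tally = Counter()
    for theory, n in delegates.items():
        tally[preferences[theory]] += n
    return tally.most_common(1)[0][0]

delegates = allocate_delegates({"theory_T": 0.6, "theory_E": 0.4})
# theory_T gets 60 seats, theory_E gets 40
decision = vote(delegates, {"theory_T": "A", "theory_E": "B"})
```

This leaves out the interesting part — the bargaining among delegates before the vote — which is exactly what the deals discussed below are about.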
It occurred to me recently that in such a Parliament, the delegates would make deals similar to the one between Tom and Eve above, trading their votes/support in one kind of universe for votes/support in another kind of universe. If I had had a Moral Parliament active back when I thought there was a good chance the universe could support unlimited computation, all the delegates that really care about astronomical waste would have traded away their votes in the kind of universe where we actually seem to live for votes in universes with a lot more potential astronomical waste. So today my Moral Parliament would be effectively controlled by delegates that care little about astronomical waste.
I actually still seem to care about astronomical waste (even if I pretend that I was certain the universe could do at most 10^120 operations). (Either my Moral Parliament wasn't active back then, or my delegates weren't smart enough to make the appropriate deals.) Should I nevertheless follow UDT-like reasoning and conclude that I should act as if they had made such deals, and therefore stop caring about the relatively small amount of astronomical waste that could occur in our universe? If the answer is "no", what about the future going forward, given that there is still uncertainty about cosmology and the nature of physical computation? Should the delegates to my Moral Parliament be making these kinds of deals from now on?
Comments sorted by top scores.
comment by gjm ·
2014-10-21T14:18:44.316Z
Another possible conclusion is that the "moral parliament" model either doesn't match how you actually think, or doesn't match how you "should" think.
Replies from: Wei_Dai
↑ comment by Wei_Dai ·
2014-10-22T19:46:53.163Z
Realizing the implication here has definitely made me more skeptical of the moral parliament idea, but if it's an argument against the moral parliament, then it's also a potential argument against other ideas for handling moral uncertainty. The problem is that trading is closely related to Pareto optimality. If you don't allow trading between your moral theories, then you likely end up in situations where each of your moral theories says that option A is better or at least no worse than option B, but you choose option B anyway. But if you do allow trading, then you end up with the kind of conclusion described in my post.
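To illustrate the Pareto point with a toy example (the theories, seat counts, and utilities are all invented): take two theories with non-separable preferences over two binary issues, where voting issue by issue without any trading leaves a Pareto improvement on the table:

```python
from itertools import product

# Two moral theories with made-up, non-separable preferences over two
# binary issues. Theory X wants the two issues decided the same way;
# theory Y wants issue 1 decided "no" and is indifferent about issue 2.
UTILITY = {
    "X": lambda a, b: 1.0 if a == b else 0.0,
    "Y": lambda a, b: 1.0 if a == "no" else 0.0,
}
SEATS = {"X": 60, "Y": 40}

def tally(votes: dict[str, str]) -> str:
    yes = sum(SEATS[t] for t, v in votes.items() if v == "yes")
    return "yes" if yes > sum(SEATS.values()) / 2 else "no"

def myopic_outcome() -> tuple[str, str]:
    """Vote issue by issue with no trading between delegations.
    Suppose X's delegates aim for ('yes', 'yes') and vote accordingly;
    Y's delegates vote 'no' on both (indifferent on issue 2)."""
    issue1 = tally({"X": "yes", "Y": "no"})
    issue2 = tally({"X": "yes", "Y": "no"})
    return (issue1, issue2)

def pareto_dominates(x, y) -> bool:
    ux = [UTILITY[t](*x) for t in UTILITY]
    uy = [UTILITY[t](*y) for t in UTILITY]
    return all(a >= b for a, b in zip(ux, uy)) and \
           any(a > b for a, b in zip(ux, uy))

outcome = myopic_outcome()  # X's majority wins both issues: ('yes', 'yes')
dominating = [b for b in product(["yes", "no"], repeat=2)
              if pareto_dominates(b, outcome)]
# ('no', 'no') gives X the same utility and Y strictly more, so
# no-trade, issue-by-issue voting picked a Pareto-dominated outcome.
```

With trading allowed, X's delegates would happily coordinate on ('no', 'no') — which is exactly the kind of deal that leads to the conclusion in the post.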
Another way out of this may be to say that there is no such thing as how one "should" handle moral uncertainty, that the question simply doesn't have an answer, and that it would be like asking "how should I make decisions if I can't understand basic decision theory?" It's actually hard to think of a way to define "should" such that the question does have an answer. For example, suppose we define "should" as what an ideal version of you would tell you to do; then presumably they would already have resolved their moral uncertainty, and would tell you what the correct morality is (or what your actual values are, whichever makes more sense) and to follow that.
Replies from: torekp
↑ comment by torekp ·
2014-10-30T10:27:18.884Z
it would be like asking "how should I make decisions if I can't understand basic decision theory?"
But that seems to have an answer, specifically along the lines of "follow those heuristics recommended by those who are on your side and do understand decision theory."
comment by Toby_Ord ·
2014-10-27T12:48:22.673Z
Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them etc). I'm not sure which decision theory to apply to them though.
comment by Vladimir_Nesov ·
2014-10-21T21:52:30.950Z
If the Moral Parliament can make deals, it could just as well decide on a single goal to be followed thereafter, at which point moral uncertainty is resolved (at least formally). For this to be a good idea, the resulting goal has to be sensitive to facts discovered in the future. This should also hold for other deals, so it seems to me that unconditional redistribution of resources is not the kind of deal that a Moral Parliament should make. Some unconditional redistributions of resources are better than others, but even better are conditional deals that say where the resources will go depending on what is discovered in the future. And while resources could be wasted, so that at a future point you won't be able to direct as much in a new direction, seats in the Moral Parliament can't be.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister ·
2014-11-01T18:45:30.721Z
If Moral Parliament can make deals, it could as well decide on a single goal to be followed thereafter, at which point moral uncertainty is resolved (at least formally). For this to be a good idea, the resulting goal has to be sensitive to facts discovered in the future.
The "Eve" delegates want the "Tom" delegates to have less power no matter what, so they will support a deal that gives the "Tom" delegates less expected power in the near term. The "Tom" delegates give greater value to open-ended futures, so they will trade away power in the near term in exchange for more power if the future turns out to be open ended.
So this seems to be a case where both parties support a deal that takes away sensitivity if the future turns out to be short. Both parties support a deal that gives the "Eve" delegates more power in that case.
comment by Toby_Ord ·
2014-10-27T12:46:13.324Z
The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be getting Pareto-dominated by another method. However, if it is making one decision for all time, over all policies for resolving future decisions, then (1) it is even harder to apply in real life than it looked, and (2) it doesn't seem to be able to deal with cases where you learn more about ethics (i.e. update your credence function over moral theories) -- at least not without quite a bit of extra explanation about how that works. I suppose the best answer may well be that the policies over which the representatives are arguing include branches dealing with all ways the credences could change, weighted by their probabilities. This is even more messy.
My guess is that of these two broad options (decide one bounded decision vs decide everything all at once) the latter is better. But either way it is a bit less intuitive than it first appears.
comment by ESRogs ·
2014-10-23T05:41:38.788Z
What's the argument against doing the UDT thing here?
Replies from: Wei_Dai
↑ comment by Wei_Dai ·
2014-10-23T18:07:23.302Z
I'm not aware of a specific argument against doing the UDT thing here. It's just a combination of the UDT-like conclusion being counterintuitive, UDT being possibly wrong in general, and the fact that we don't really know what the UDT math says if we apply it to humans or human-like agents (and actually we don't even know what the UDT math is, since logical uncertainty isn't solved yet and we need that to plug into UDT).
comment by James_Miller ·
2014-10-21T14:30:14.275Z
Although this is fighting the hypothetical, I think that the universe is almost certainly infinite, because observers such as myself will be much more common in infinite than in finite universes. Plus, as I'm sure you realize, the non-zero probability that the universe can support an infinite number of computations means that the expected number of computations performed in our universe is infinite.
As Bostrom has written, if the universe is infinite then it might be that nothing we do matters so perhaps your argument is correct but with the wrong sign.
Replies from: None, RichardKennaway
↑ comment by [deleted] ·
2014-10-22T05:28:51.225Z
Forget the erroneous probabilistic argument: it doesn't matter if the universe is infinite. What we see of it will always be finite, due to inflation.
↑ comment by RichardKennaway ·
2014-10-21T15:50:10.640Z
the non-zero probability that the universe can support an infinite number of computations means that the expected number of computations we expect to be performed in our universe is infinite.
Where do you get the non-zero probability from? If it's from the general idea that nothing has zero probability, this proves too much. On the same principle, every action has non-zero probability of infinite positive utility and of infinite negative utility. This makes expected utility calculations impossible, because Inf - Inf = NaN.
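The arithmetic failure is easy to demonstrate with IEEE 754 floats, which follow the same extended-real conventions (the probabilities below are made up for illustration):

```python
import math

inf = float("inf")

# A nonzero probability of an infinitely good outcome makes the whole
# expectation infinite, swamping every finite term:
assert 0.001 * inf + 0.999 * 100.0 == inf

# But once both an infinitely good and an infinitely bad outcome have
# nonzero probability, the expectation is undefined:
assert math.isnan(inf - inf)  # Inf - Inf = NaN
```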
I consider this a strong argument against the principle, often cited on LW, that "0 and 1 are not probabilities". It makes sense as a slogan for a certain idea, but not as mathematics.
Replies from: James_Miller