Is the far future inevitably zero sum?
post by Srdjan Miletic (srdjan-miletic) · 2023-12-19T01:45:44.626Z · LW · GW · 2 comments
This is a link post for https://dissent.blog/2023/12/07/is-the-far-future-inevitably-zero-sum/
In this Twitter post, Richard argues that the distant future will hold far more zero- or negative-sum interactions, because once all possible value has been wrung out of the physical world there's nothing left to compete over other than the distribution of that value. I wonder if this is true?
Imagine a far future where the following hold true:
- the physical universe is divided among N agents
- each agent has fully exploited the physical resources in its territory to best satisfy its utility function (some agents turn everything into computronium and simulate endless minds experiencing pleasure, others turn everything into tessellating atom-scale paperclip replicas)
- we've reached the limits of scientific progress. Each agents ability to convert the matter/energy in their territory to utility is capped out and cannot be further improved
(Optimistic Srdjan): Even if agents know the same things, they can still plausibly gain from trade in a few ways.
- Different kinds of agents may be innately better at different kinds of things; different minds have different strengths, after all. In that case trade can be mutually beneficial, as it allows for more specialisation and more stuff for all parties. Basically, as long as agents differ and those differences translate into some difference in effectiveness in the material world, trade can make sense (see the sketch after this list).
- Maybe there are returns to scale. Maybe a computronium matrix the size of 10 galaxies is marginally more efficient per unit of entropy than one the size of 1 galaxy. In that case trade can again be win-win.
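Here's a minimal sketch of the first point, with made-up productivity numbers (the agents, goods, and all figures are hypothetical): two agents with fixed resource budgets, each innately better at producing a different good.

```python
# Toy comparative-advantage calculation with hypothetical numbers.
# Agent A turns one resource unit into 3 compute or 1 widget;
# Agent B turns one resource unit into 1 compute or 2 widgets.
PRODUCTIVITY = {
    "A": {"compute": 3.0, "widgets": 1.0},
    "B": {"compute": 1.0, "widgets": 2.0},
}
BUDGET = 10.0  # resource units per agent

def output(agent: str, compute_share: float) -> tuple[float, float]:
    """(compute, widgets) produced if `agent` spends `compute_share` of its budget on compute."""
    p = PRODUCTIVITY[agent]
    return (compute_share * BUDGET * p["compute"],
            (1.0 - compute_share) * BUDGET * p["widgets"])

# Autarky: each agent splits its budget 50/50 to get some of both goods.
autarky_total = tuple(map(sum, zip(output("A", 0.5), output("B", 0.5))))

# Specialisation + trade: A makes only compute, B makes only widgets.
trade_total = tuple(map(sum, zip(output("A", 1.0), output("B", 0.0))))

print("autarky total (compute, widgets):", autarky_total)  # (20.0, 15.0)
print("trade total   (compute, widgets):", trade_total)    # (30.0, 20.0)
```

Since the specialised totals dominate the autarky totals in both goods, some split of the output leaves both agents strictly better off: the standard gains-from-trade result, carried over to post-scarcity agents.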
(Pessimistic Srdjan) Hmmmm. So the argument here is basically that trade can still make sense even if knowledge is capped out, provided either that agents' abilities differ or that scale matters. Okay, let's grant that both of those conditions hold. I'm still not sure the argument works.
Sure, in the initial state the agents will trade. They'll collaborate to build larger computers and split the compute-time gains from doing so. They'll specialise in producing the things they're better at and also split the gains from trade. Fine. But what happens once the agents have done that for long enough to eat all the gains? The final equilibrium is:
- The agents have built a shared computronium matrix that spans the entire universe or hits scaling limits
- They've produced all the widgets they need in an optimal way and have stockpiles/futures contracts that will last until the heat-death of the universe. Surely at some point you exhaust all the gains from trade, right?
I guess the meta point here is something like this:
- You start with a pool of possible trades. Some, e.g. war, are negative-sum. Some are neutral. Some are positive-sum.
- You take the positive sum trades.
- Eventually you've taken all the positive-sum trades and have only negative- or zero-sum ones left (a toy simulation of this follows below).
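A toy simulation of that dynamic (the pool size and surplus distribution are invented purely for illustration): draw a fixed pool of one-shot trades with random surpluses, execute positive-sum trades from best to worst, and watch the pool of worthwhile trades run dry.

```python
# Toy model of a fixed pool of one-shot trades, with invented surpluses.
# surplus > 0: positive-sum; surplus == 0: neutral; surplus < 0: negative-sum (e.g. war).
import random

random.seed(0)
pool = sorted((random.uniform(-1.0, 1.0) for _ in range(1000)), reverse=True)

executed = 0
total_surplus = 0.0
for surplus in pool:
    if surplus <= 0:        # only zero- or negative-sum trades remain
        break
    total_surplus += surplus
    executed += 1

print(f"positive-sum trades executed: {executed}")
print(f"total surplus captured:       {total_surplus:.2f}")
print(f"trades left in pool:          {len(pool) - executed} (all zero- or negative-sum)")
```

Under the stated assumptions (a fixed pool, one-shot trades), the process necessarily halts with only zero- and negative-sum options left, which is exactly the pessimist's claim.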
(Optimistic Srdjan) Hmmmmmm.
I'm tempted to start arguing the object-level point here, but I think that would be a mistake. Maybe it is the case that if you go sufficiently far into the future, all possible trades are made and all positive-sum potential is tapped. Still, that seems somewhat different from the question we were initially asking. The question was something akin to "Is there any need for trade in a world where material resources are all fixed and fully owned and all agents have full knowledge?" I think the answer to that is: possibly, yes. The questions of "Will there eventually be no need for trade once we finish creating the optimal pan-galactic compute cluster?" or "Will AIs, 3 days after birth, make all possible trades with precommitments spanning to the heat death of the universe, meaning that there is no real new trade from that point on?" are harder to think about but also meaningfully different.
2 comments
comment by Dagon · 2023-12-19T05:15:40.561Z · LW(p) · GW(p)
We don't really have a good theory of agent sizing and aggregation. In what cases is it "one agent with many semi-independent components" vs "many fully independent agents"? Which is more efficient, which is more morally desirable? Why not "one gigantic agent"?
In situations of competition for limited resources, of course there will be zero-sum elements. It's not obvious that there won't ALSO be positive-sum cooperation (to exploit non-agent areas, or to find higher sums in aggregate utility functions), until there are literally no un-optimized resources left and everything is a local maximum for various morally-relevant (to us) agents.
I argue that that's a great problem to have!
comment by TheCookieLab · 2023-12-19T02:18:11.968Z · LW(p) · GW(p)
A questionable assumption undergirding this entire line of thought is that the universe can be finitely partitioned. Another assumption that could be considered a coin toss is that agents occupy and compete within a common size, space, and/or time scale. That there is no upper bound on need/“greed” or that there will still be multiple agents may seem a given within the current zeitgeist but again are far from guarantees. There are many other such assumptions but these are a few of the most readily apparent.