Is the far future inevitably zero sum?

post by Srdjan Miletic (srdjan-miletic) · 2023-12-19T01:45:44.626Z · LW · GW · 2 comments

This is a link post for https://dissent.blog/2023/12/07/is-the-far-future-inevitably-zero-sum/

In this Twitter post, Richard argues that in the distant future there will be far more zero- or negative-sum interactions, because once all possible value has been wrung out of the physical world there's nothing left to compete over other than the distribution of that value. I wonder whether this is true.

Imagine a far future where the following hold true:

- All material resources are fixed and fully owned.
- All agents have full knowledge; learning has capped out.

(Optimistic Srdjan): Even if agents know the same things, they can still plausibly gain from trade in a few ways: their abilities can differ, so specialisation pays, and some projects have returns to scale, so pooling resources pays.
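As a toy illustration of the first mechanism, here is a minimal sketch with invented numbers (mine, not anything from Richard's thread): two agents with identical knowledge but different production abilities both end up better off by specialising and trading.

```python
# Toy comparative-advantage sketch. All numbers are illustrative.
# Output per unit of time each agent can produce of each good.
ability = {
    "A": {"compute": 3.0, "energy": 1.0},  # A is relatively better at compute
    "B": {"compute": 1.0, "energy": 3.0},  # B is relatively better at energy
}

def autarky_output(agent):
    """No trade: the agent splits its time 50/50 and consumes its own output."""
    return {good: 0.5 * rate for good, rate in ability[agent].items()}

def trade_output():
    """Each agent specialises fully in its comparative advantage;
    the pooled output is then split evenly between them."""
    pooled = {"compute": ability["A"]["compute"], "energy": ability["B"]["energy"]}
    return {good: amount / 2 for good, amount in pooled.items()}

print("autarky A:", autarky_output("A"))  # {'compute': 1.5, 'energy': 0.5}
print("autarky B:", autarky_output("B"))  # {'compute': 0.5, 'energy': 1.5}
print("with trade:", trade_output())      # {'compute': 1.5, 'energy': 1.5} each
# Both agents end up with weakly more of every good and strictly more in
# total, even though they know exactly the same things.
```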

(Pessimistic Srdjan) Hmmmm. So the argument here is basically that trade can still make sense even once knowledge is capped out, provided either agents' abilities differ or scale matters. Okay, let's grant that both of those conditions hold. I'm still not sure the argument works.

Sure, in the initial state the agents will trade. They'll collaborate to build larger computers and split the compute-time gains from doing so. They'll specialise in producing the things they're better at and split the gains from trade. Fine. But what happens once the agents have done that for long enough to eat all the gains? Eventually the final equilibrium is one in which every positive-sum trade has already been made: the largest worthwhile computer has been built, every agent is fully specialised, and the only thing left to compete over is the distribution of the existing value.
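To make the "eat all the gains" dynamic concrete, here is a minimal sketch under an assumed model (my invention, not anything from the thread): if each successive joint project yields geometrically shrinking surplus, then for any minimum gain worth transacting over, positive-sum trade stops after finitely many rounds.

```python
# Toy model of gains from trade drying up. The geometric decay is an
# assumption chosen purely for illustration.
def rounds_of_trade(initial_gain=100.0, decay=0.5, min_worthwhile=1e-6):
    """Make the best remaining trade each round until the next trade's
    surplus falls below the smallest gain worth transacting over."""
    gain, rounds, total = initial_gain, 0, 0.0
    while gain >= min_worthwhile:
        total += gain      # capture this round's surplus
        gain *= decay      # diminishing returns: each trade adds less
        rounds += 1
    return rounds, total

rounds, total = rounds_of_trade()
# With these numbers trade stops after 27 rounds, having captured
# essentially all of the bounded total surplus of 200.
print(f"trade stops after {rounds} rounds; total surplus {total:.4f}")
```

Under any model like this, the stream of worthwhile trades is finite, which is exactly the equilibrium described above.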

I guess the meta point here is something like this: positive-sum interaction is a transitional phenomenon. Once the gains are eaten, only the zero-sum question of distribution remains.

(Optimistic Srdjan) Hmmmmmm.

I'm tempted to start arguing the object-level point here, but I think that would be a mistake. Maybe it is the case that if you go sufficiently far into the future, all possible trades are made and all positive-sum potential is tapped. Still, that seems somewhat different from the question we were initially asking. That question was something akin to "Is there any need for trade in a world where material resources are all fixed and fully owned and all agents have full knowledge?" I think the answer to that is plausibly yes. The questions "Will there eventually be no need for trade once we finish building the optimal pan-galactic compute cluster?" and "Will AIs, three days after birth, make all possible trades with precommitments spanning to the heat death of the universe, meaning that there is no real new trade from that point on?" are harder to think about but also meaningfully different.

2 comments


comment by Dagon · 2023-12-19T05:15:40.561Z · LW(p) · GW(p)

We don't really have a good theory of agent sizing and aggregation.  In what cases is it "one agent with many semi-independent components" vs "many fully independent agents"?  Which is more efficient, which is more morally desirable?  Why not "one gigantic agent"?

In situations of competition for limited resources, of course there will be zero-sum elements.  It's not obvious that there won't ALSO be positive-sum cooperation (to exploit non-agent areas, or to find higher sums in aggregate utility functions), until there are literally no un-optimized resources left and everything is at a local maximum for the various morally-relevant (to us) agents.

I argue that that's a great problem to have!

comment by TheCookieLab · 2023-12-19T02:18:11.968Z · LW(p) · GW(p)

A questionable assumption undergirding this entire line of thought is that the universe can be finitely partitioned. Another assumption that could be considered a coin toss is that agents occupy and compete within a common size, space, and/or time scale. That there is no upper bound on need/“greed”, or that there will still be multiple agents, may seem a given within the current zeitgeist, but again these are far from guaranteed. There are many other such assumptions, but these are a few of the most readily apparent.