Nina Panickssery's Shortform
post by Nina Panickssery (NinaR) · 2025-01-07T02:06:38.759Z · LW · GW · 10 comments
Comments sorted by top scores.
comment by Nina Panickssery (NinaR) · 2025-01-07T02:06:39.344Z · LW(p) · GW(p)
Inspired by a number of posts discussing owning capital + AI, I'll share my own simplistic prediction on this topic:
Unless there is a hostile AI takeover, humans will be able to continue having and enforcing laws, including the law that only humans can own and collect rent from resources. Things like energy sources, raw materials, and land have inherent limits on their availability - no matter how fast AI progresses we won't be able to create more square feet of land area on earth. By owning these resources, you'll be able to profit from AI-enabled economic growth as this growth will only increase demand for the physical goods that are key bottlenecks for basically all productive endeavors.
To elaborate further/rephrase: sure, you can replace human programmers with vastly more efficient AI programmers, decreasing the human programmers' value. In a similar fashion you can replace a lot of human labor. But an equivalent replacement for physical space or raw materials for manufacturing does not exist. With an increase in demand for goods caused by a growing economy, these things will become key bottlenecks and scarcity will increase their price. Whoever owns them (some humans) will be collecting a lot of rent.
An even simpler version of the above: economics traditionally divides the factors of production into land, labor, capital, and entrepreneurship. If labor costs go toward zero, you can still hodl some land.
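One minimal way to sketch the fixed-supply point (an illustration, not part of the original post; it assumes that buyers collectively spend a roughly constant fraction of total output on land, and all symbols below are hypothetical):

% Illustrative sketch (assumed symbols): theta = share of output spent on land,
% Y = total output, T = fixed land supply, r = market-clearing rent per unit of land.
% Total spending on land is theta*Y; dividing by the fixed supply T gives the rent:
\[
  r \;=\; \frac{\theta Y}{T}.
\]
% If AI-driven growth multiplies output by a factor k while T stays fixed,
% rent per unit of land scales by the same factor:
\[
  r' \;=\; \frac{\theta\,(kY)}{T} \;=\; k\,r.
\]

Under that (strong) constant-share assumption, landowners' total rental income grows in proportion to AI-driven output; the assumption can of course fail if demand shifts away from land-intensive goods.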
Besides the hostile AI takeover scenario, why could this be wrong (/missing the point)?
↑ comment by the gears to ascension (lahwran) · 2025-01-07T13:05:42.568Z · LW(p) · GW(p)
Ownership is enforced by physical interactions, and only exists to the degree the interactions which enforce it do. Those interactions can change.
As Lucius said, resources in space are unprotected.
Organizations which hand more of their decision-making to sufficiently strong AIs "win" by making technically-legal moves, at the cost of probably also attacking their owners. Money is a general power coupon accepted by many interactions; ownership deeds are a more specific, narrow one. If the AI systems which enforce these mechanisms don't systemically reinforce towards outcomes where the things available to buy actually satisfy the preferences of the remaining humans who own AI stock or land, then the owners can end up with no non-deadly food and a lot of money, while datacenters grow and grow, taking up energy and land with (semi?-)autonomously self-replicating factories or the like. If money-like exchange continues to be how the physical economy is managed in AI-to-AI interactions, these self-replicating factories might end up adapted to make products that the market will buy; but if the majority of the buying power is AI-controlled corporations, then figuring out how best to manipulate those AIs into buying becomes the priority. If it isn't, then manipulating humans into buying is the priority.
It seems to me that the economic alignment problem of guaranteeing that everyone can reliably spend money only on things that actually match their own preferences, so that sellers can't gain economic power through customer manipulation, is a serious ongoing problem, and it ends up being the weak link in scenarios where AIs manage an economy that uses the same numeric abstractions and contracts (money, ownership, rent) as the current one.
↑ comment by Lucius Bushnaq (Lblack) · 2025-01-07T09:11:14.309Z · LW(p) · GW(p)
Space has resources people don't own. The earth's mantle a couple thousand feet down potentially has resources people don't own. More to the point, maybe: I don't think humans will be able to keep enforcing laws in the way you seem to think, even barring a hostile takeover.
Imagine we find out that aliens are headed for earth and will arrive in a few years. Just from the light emissions their probes and expanding civilisation give off, we can infer that they're obviously more technologically mature than us, probably already engineered themselves to be much smarter than us, and can basically do whatever they want with the atoms that make up our solar system and there's nothing we can do about it. We don't know what they want yet though. Maybe they're friendly?
I think guessing that the aliens will be friendly and share human morality to an extent is a pretty specific guess about their minds to be making, and is more likely false than not. But guessing that they don't care about human preferences or well-being but do care about human legal structures, that they won't at all help you or gift you things, also won't disassemble you and your property for its atoms[1], but will try to buy atoms from those to whom the atoms belong according to human legal records, now that strikes me as a really really really specific guess to be making that is very likely false.
Superintelligent AGIs don't start out having giant space infrastructure, but qualitatively, I think they'd very quickly overshadow the collective power of humanity in a similar manner. They can see paths through the future to accomplish their goals much better than we can, routing around attempts by us to oppose them. The force that backs up our laws does not bind them. If you somehow managed to align them, they might want to follow some of our laws, because they care about them. But if someone managed to make them care about the legal system, they probably also managed to make them care about your well-being. Few humans, I think, would not at all care about other humans' welfare, but would care about the rule of law, when choosing what to align their AGI with. That's not a kind of value system that shows up in humans much.
So in that scenario, you don't need a legal claim to part of the pre-existing economy to benefit from the superintelligences' labours. They will gift some of their labour to you. Say the world economy today has some total value, owned by humans roughly in proportion to how much money they have, and two years after superintelligence the economy is worth a vastly larger amount, with the great majority of the new surplus owned by aligned superintelligences[2] because they created most of that value, and only a comparatively small share owned by rich humans who sold the superintelligence valuable resources and infrastructure to get the new industrial base started faster[3]. The superintelligence will then probably distribute its gains among humans according to some system that either treats conscious minds pretty equally, or follows the idiosyncratic preferences of the faction that aligned it, not according to how large a fraction of the total economy they used to own two years ago. So someone who started out with much more money than you two years ago doesn't have much more money in expectation now than you do.
1. ^ For its conserved quantum numbers, really.
2. ^ Or owned by whomever the superintelligences take orders from.
3. ^ You can't just demand super high share percentages from the superintelligence in return for that startup capital. It's got all the resource owners in the world as potential bargaining partners to compete with you. And really, the only reason it wouldn't be steering the future into a deal where you get almost nothing, or just stealing all your stuff, is to be nice to you. Decision-theoretically, this is a handout with extra steps, not a negotiation between equals.
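A toy numeric version of the dilution point in the comment above (the figures 1000 and 1 and the symbol S are made up purely for illustration): if Alice starts with 1000 units of wealth and Bob with 1, and an aligned superintelligence later gifts each person the same amount S out of the new surplus, their wealth ratio collapses toward 1 once S dwarfs all pre-existing holdings:

% Illustrative only: pre-existing wealth (1000 vs. 1) plus an equal per-person gift S.
\[
  \frac{W_{\text{Alice}}}{W_{\text{Bob}}}
  \;=\; \frac{1000 + S}{1 + S}
  \;\longrightarrow\; 1
  \quad \text{as } S \to \infty .
\]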
↑ comment by the gears to ascension (lahwran) · 2025-01-07T13:13:43.075Z · LW(p) · GW(p)
A question in my head is what range of fixed points are possible in terms of different numeric ("monetary") economic mechanisms and contracts. Seems to me those are a kind of AI component that has been in use since before computers.
↑ comment by Vladimir_Nesov · 2025-01-07T02:37:57.456Z · LW(p) · GW(p)
you can replace a lot of human labor. But an equivalent replacement for physical space or raw materials for manufacturing does not exist.
There is a lot of space and raw materials in the universe. AI thinks faster, so technological progress happens faster, which opens up access to new resources shortly after takeoff. Months to years, not decades to centuries.
↑ comment by Nina Panickssery (NinaR) · 2025-01-07T03:27:40.653Z · LW(p) · GW(p)
If, for the sake of argument, we suppose that goods that provide no benefit to humans have no value, then land in space will be less valuable than land on earth until humans settle outside of earth (which I don't believe will happen in the next few decades).
Mining raw materials from space and using them to create value on earth is feasible, but again I'm less confident that this will happen (in an efficient-enough manner that it eliminates scarcity) in as short a timeframe as you predict.
However, I am sympathetic to the general argument here that smart-enough AI is able to find more efficient ways of manufacturing or better approaches to obtaining plentiful energy/materials. How extreme this is will depend on "takeoff speed" which you seem to think will be faster than I do.
↑ comment by Joseph Miller (Josephm) · 2025-01-07T12:10:22.205Z · LW(p) · GW(p)
land in space will be less valuable than land on earth until humans settle outside of earth (which I don't believe will happen in the next few decades).
Why would it take so long? Is this assuming no ASI?
↑ comment by Noosphere89 (sharmake-farah) · 2025-01-07T19:23:32.885Z · LW(p) · GW(p)
This is actually true, at least in the short term, with the important caveat of the gears to ascension's comment here:
https://www.lesswrong.com/posts/4hCca952hGKH8Bynt/nina-panickssery-s-shortform#quPNTp46CRMMJoamB [LW(p) · GW(p)]
Longer-term, if Adam Brown is correct about how advanced civilizations can change the laws of physics, then effectively no constraints remain on the economy, and the reason we can't collect almost all of the rent is that prices can be driven arbitrarily low:
↑ comment by quetzal_rainbow · 2025-01-07T10:50:42.719Z · LW(p) · GW(p)
I don't think "hostile takeover" is a meaningful distinction in the case of AGI. What exactly prevents an AGI from pulling off a plan consisting of 50 absolutely legal moves which ends with it as US dictator?
↑ comment by Nina Panickssery (NinaR) · 2025-01-07T12:22:44.935Z · LW(p) · GW(p)
Perhaps the term “hostile takeover” was poorly chosen, but this is an example of something I’d call a “hostile takeover”, as I doubt we would want, and continue to endorse, an AI dictator.
Perhaps “total loss of control” would have been better.