By default, capital will matter more than ever after AGI
post by L Rudolf L (LRudL) · 2024-12-28T17:52:58.358Z · LW · GW · 94 comments
This is a link post for https://nosetgauge.substack.com/p/capital-agi-and-human-ambition
Contents
- The default solution
- Money currently struggles to buy talent
- Most people's power/leverage derives from their labour
- Why are states ever nice?
- No more outlier outcomes?
- Enforced equality is unlikely
- The default outcome?
- What's the takeaway?
Edited to add: The main takeaway of this post is meant to be: Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched. Many people are reading this post in a way where either (a) "capital" means just "money" (rather than also including physical capital like factories and data centres), or (b) the main concern is human-human inequality (rather than broader societal concerns about humanity's collective position, the potential for social change, and human agency).
I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.
First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them. I'll say "money" when I want to exclude capital goods.
The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).
I will walk through consequences of this, and end up concluding that labour-replacing AI means:
- The ability to buy results in the real world will dramatically go up
- The ability of humans to wield power in the real world without money will dramatically go down, in part because:
- there will be no more incentive for states, companies, or other institutions to care about humans
- it will be harder for humans to achieve outlier outcomes relative to their starting resources
- Radical equalising measures are unlikely
Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable.
Given sufficiently strong AI, this is not a risk about insufficient material comfort. Governments could institute UBI with the AI-derived wealth. Even if e.g. only the United States captures AI wealth and the US government does nothing for the world, if you're willing to assume arbitrarily extreme wealth generation from AI, the wealth of the small percentage of wealthy Americans who care about causes outside the US might be enough to end material poverty (if 1% of American billionaire wealth was spent on wealth transfers to foreigners, it would take 16 doublings of American billionaire wealth as expressed in purchasing-power-for-human-needs—a roughly 70,000x increase—before they could afford to give $500k-equivalent to every person on Earth; in a singularity scenario where the economy's doubling time is months, this would not take long). Of course, if the AI explosion is less singularity-like, or if the dynamics during AI take-off actively disempower much of the world's population (a real possibility), even material comfort could be an issue.
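(As a quick sanity check on the parenthetical arithmetic above, here is a minimal back-of-envelope sketch; the starting figure for current US billionaire wealth is an illustrative assumption of mine, not a number from this post.)

```python
# Back-of-envelope check of the "16 doublings / roughly 70,000x" figure above.
# The starting wealth figure is an illustrative assumption, not a sourced estimate.
import math

billionaire_wealth = 5.5e12   # assumed current US billionaire wealth, ~$5.5 trillion
giving_fraction = 0.01        # 1% of that wealth transferred to foreigners
world_population = 8e9        # ~8 billion people
target_per_person = 5e5       # $500k-equivalent per person

required_wealth = target_per_person * world_population / giving_fraction  # ~$4e17
growth_factor = required_wealth / billionaire_wealth                      # ~7e4
doublings = math.log2(growth_factor)                                      # ~16

print(f"growth needed: ~{growth_factor:,.0f}x, i.e. ~{doublings:.0f} doublings")
```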
What most emotionally moves me about these scenarios is that a static society with a locked-in ruling caste does not seem dynamic or alive to me. We should not kill human ambition, if we can help it.
There are also ways in which such a state makes slow-rolling, gradual AI catastrophes more likely, because the incentive for power to care about humans is reduced.
The default solution
Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labour-replacing AI.
There are two levels of the standard solution to the resulting unemployment problem:
- Governments will adopt something like universal basic income (UBI).
- We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Money currently struggles to buy talent
Money can buy you many things: capital goods, for example, can usually be bought quite straightforwardly, and cannot be bought without a lot of money (or other liquid assets, or non-liquid assets that others are willing to write contracts against, or special government powers). But it is surprisingly hard to convert raw money into labour, in a way that is competitive with top labour.
Consider Blue Origin versus SpaceX. Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history, and even today employs almost as many people as SpaceX (11,000 v 13,000). Yet SpaceX has crushingly dominated Blue Origin. In 2000, Jeff Bezos had $4.7B at hand. But it is hard to see what he could've done to not lose out to the comparatively money-poor SpaceX with its intense culture and outlier talent.
Consider, a century earlier, the Wright brothers with their bike shop resources beating Samuel Langley's well-funded operation.
Consider the stereotypical VC-and-founder interaction, or the acquirer-and-startup interaction. In both cases, holders of massive financial capital are willing to pay very high prices to bet on labour—and the bet is that the labour of the few people in the startup will beat extremely large amounts of capital.
If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:
- It's often hard to judge talent, unless you yourself have considerable talent in the same domain. Therefore, if you try to find talent, you will often miss.
- Talent is rare (and credentialed talent even more so—and many actors can't afford to rely on any other kind, because of point 1), so there's just not very much of it going around.
- Even if you can locate the top talent, the top talent tends to be less amenable to being bought out by money than others.
(Of course, those with money keep building infrastructure that makes it easier to convert money into results. I have seen first-hand the largely-successful quest by quant finance companies to strangle all existing ambition out of top UK STEM grads and replace it with the eking out of tiny gains in financial markets. Mammon must be served!)
With labour-replacing AI, these problems go away.
First, you might not be able to judge AI talent. Even the AI evals ecosystem might find it hard to properly judge AI talent—evals are hard. Maybe even the informal word-of-mouth mechanisms that correctly sang the praises of Claude-3.5-Sonnet far more decisively than any benchmark might find it harder and harder to judge which AIs really are best as AI capabilities keep rising. But the real difference is that the AIs can be cloned. Currently, huge pools of money chase after a single star researcher who's made a breakthrough, and thus had their talent made legible to those who control money (who can judge the clout of the social reception to a paper but usually can't judge talent itself directly). But the star researcher that is an AI can just be cloned. Everyone—or at least, everyone with enough money to burn on GPUs—gets the AI star researcher. No need to sort through the huge variety of unique humans with their unproven talents and annoying inability to be instantly cloned. This is the main reason why it will be easier for money to find top talent once we have labour-replacing AIs.
Also, of course, the price of talent will go down massively, because the AIs will be cheaper than the equivalent human labour, and because competition will be fiercer, since the AIs can be cloned.
The final big bottleneck for converting money into talent is that lots of top talent has complicated human preferences that make them hard to buy out. The top artist has an artistic vision they're genuinely attached to. The top mathematician has a deep love of elegance and beauty. The top entrepreneur has deep conviction in what they're doing—and probably wouldn't function well as an employee anyway. Talent and performance in humans are surprisingly tied to a sacred bond to a discipline or mission (a fact that the world's cynics / careerists / Roman Empires like to downplay, only to then find their lunch eaten by the ambitious interns / SpaceXes / Christianities of the world). In contrast, AIs exist specifically so that they can be trivially bought out (at least within the bounds of their safety training). The genius AI mathematician, unlike the human one, will happily spend its limited time on Earth proving the correctness of schlep code.
Finally (and obviously), the AIs will eventually be much more capable than any human employees at their tasks.
This means that the ability of money to buy results in the real world will dramatically go up once we have labour-replacing AI.
Most people's power/leverage derives from their labour
Labour-replacing AI also deprives almost everyone of their main lever of power and leverage. Most obviously, if you're the average Joe, you have money because someone somewhere pays you to spend your mental and/or physical efforts solving their problems.
But wait! We assumed that there's UBI! Problem solved, right?
Why are states ever nice?
UBI is granted by states that care about human welfare. There are many reasons why states do, and might continue to, care about human welfare.
Over the past few centuries, there's been a big shift towards states caring more about humans. Why is this? We can examine the reasons to see how durable they seem:
- Moral changes downstream of the Enlightenment, in particular an increased centering of liberalism and individualism.
- Affluence & technology. Pre-industrial societies were mostly so poor that significant efforts to help the poor would've bankrupted them. Many types of help (such as effective medical care) are also only possible because of new technology.
- Incentives for states to care about freedom, prosperity, and education.
AI will help a lot with the 2nd point. It will have some complicated effect on the 1st. But here I want to dig a bit more into the 3rd, because I think this point is unappreciated.
Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states. For more, see my review of Foragers, Farmers, and Fossil Fuels, or my post on the connection between moral values and economic growth.
With labour-replacing AI, the incentives of states—in the sense of what actions states should take to maximise their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. The incentives might be better than during feudalism. During feudalism, the incentive was to extract as much as possible from the peasants without them dying. After labour-replacing AI, humans will be less a resource to be mined and more just irrelevant. However, spending fewer resources on humans and more on the AIs that sustain the state's competitive advantage will still be incentivised.
Humans will also have much less leverage over states. Today, if some important sector goes on strike, or if some segment of the military threatens a coup, the state has to care, because its power depends on the buy-in of at least some segments of the population. People can also credibly tell the state things like "invest in us and the country will be stronger in 10 years". But once AI can do all the labour that keeps the economy going and the military powerful, the state has no more de facto reason to care about the demands of its humans.
Adam Smith could write that his dinner doesn't depend on the benevolence of the butcher or the brewer or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labour-replacing AI, this will no longer be true. If the arc of history keeps bending towards freedom and plenty, it will do so only out of the benevolence of the state (or the AI plutocrats). If so, we better lock in that benevolence while we have leverage—and have a good reason why we expect it to stand the test of time.
The best thing going in our favour is democracy. It's a huge advantage that a deep part of many of the modern world's strongest institutions (i.e. Western democracies) is equal representation of every person. However, only about 13% of the world's population lives in a liberal democracy, which creates concerns about the fate of the remaining 87% of the world's people (especially the 27% in closed autocracies). It also creates the potential for Molochian competition between humanist states and less scrupulous ones, which might drive the resources spent on human flourishing down to zero over a sufficiently long timespan of competition.
I focus on states above, because states are the strongest and most durable institutions today. However, similar logic applies if, say, companies or some entirely new type of organisation become the most important type of institution.
No more outlier outcomes?
Much change in the world is driven by people who start from outside money and power, achieve outlier success, and then end up with money and/or power. This makes sense: those with money and/or power rarely have the fervour to push for big changes, because they are exactly those who are best served by the status quo.
Whatever your opinions on income inequality or any particular group of outlier successes, I hope you agree with me that the possibility of someone achieving outlier success and changing the world is important for avoiding stasis and generally having a world that is interesting to live in.
Let's consider the effects of labour-replacing AI on various routes to outlier success through labour.
Entrepreneurship is increasingly what Matt Clifford calls the "technology of ambition" of choice for ambitious young people (at least those with technical talent and without a disposition for politics). Right now, AI is making entrepreneurship easier: AI tools can already make small teams much more effective without needing to hire new employees, and they reduce the entry barrier to new skills and fields. However, labour-replacing AI makes the tenability of entrepreneurship uncertain. There is some narrow world in which AIs remain mostly tool-like and entrepreneurs can succeed long after most human labour is automated because they provide agency and direction. However, it also seems likely that sufficiently strong AI will by default obsolete human entrepreneurship. For example, VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding a human entrepreneur to manage the AIs for them.
The hard sciences. The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
Intellectuals. Keynes, Friedman, and Hayek all did technical work in economics, but their outsize influence came from the worldviews they developed and sold (especially in Hayek's case), which made them more influential than people like Paul Samuelson who dominated mathematical economics. John Stuart Mill, John Rawls, and Henry George were also influential by creating frames, worldviews, and philosophies. The key thing that separates such people from the hard scientists is that the outputs of their work are not spotlighted by technical correctness alone, but require moral judgement as well. Even if AI is superhumanly persuasive and correct, there's some uncertainty about how AI work in this genre will fit into the way that human culture picks and spreads ideas. Probably it doesn't look good for human intellectuals. I suspect that a lot of why intellectuals' ideologies can have so much power is that they're products of genius in a world where genius is rare. A flood of AI-created ideologies might mean that no individual ideology, and certainly no human one, can shine so bright anymore. The world-historic intellectual might go extinct.
Politics might be one of the least-affected options, both because I'd guess that most humans specifically want a human to do that job, and because politicians get to set the rules for what's allowed. The charisma of AI-generated avatars, and a general dislike towards politicians at least in the West, might throw a curveball here, though. It's also hard to say whether incumbents will be favoured. AI might bring down the cost of many parts of political campaigning, reducing the resource barrier to entry. However, if AI that is too expensive for small actors is meaningfully better than cheaper AI, this would favour actors with larger resources. I expect these direct effects to be smaller than the indirect effects from whatever changes AI has on the memetic landscape.
Also, the real play is not to go into actual politics, where a million other politically-talented people are competing to become president or prime minister. Instead, have political skill and go somewhere outside government where political skill is less common (cf. Sam Altman). Next, wait for the arrival of hyper-competent AI employees that reduce the demands for human subject-matter competence while increasing the rewards for winning political games within that organisation.
Military success as a direct route to great power and disruption has—for the better—not really been a thing since Napoleon. Advancing technology increases the minimum industrial base for a state-of-the-art army, which benefits incumbents. AI looks set to be controlled by the most powerful countries. One exception is if coups of large countries become easier with AI. Control over the future AI armies will likely be both (a) more centralised than before (since a large number of people no longer have to go along for the military to take an action), and (b) more tightly controllable than before (since the permissions can be implemented in code rather than human social norms). These two factors point in different directions so it's uncertain what the net effect on coup ease will be. Another possible exception is if a combination of revolutionary tactics and cheap drones enables a Napoleon-of-the-drones to win against existing armies. Importantly, though, neither of these seems likely to promote the good kind of disruptive challenge to the status quo.
Religions. When it comes to rising rank in existing religions, the above takes on politics might be relevant. When it comes to starting new religions, the above takes on intellectuals might be relevant.
So sufficiently strong labour-replacing AI will, on net, be bad for the chances of every type of outlier human success, with perhaps the weakest effects in politics. This is despite the very real boost that current AI gives to entrepreneurship.
All this means that the ability to get and wield power in the real world without money will dramatically go down once we have labour-replacing AI.
Enforced equality is unlikely
The Great Leveler is a good book on the history of inequality that (at least per the author) has survived its critiques fairly well. Its conclusion is that past large reductions in inequality have all been driven by one of the "Four Horsemen of Leveling": total war, violent revolution, state collapse, and pandemics. Leveling income differences has historically been hard enough to basically never happen through conscious political choice.
Imagine that labour-replacing AI is here. UBI is passed, so no one is starving. There's a massive scramble between countries and companies to make the best use of AI. This is all capital-intensive, so everyone needs to woo holders of capital. The top AI companies wield power on the level of states. The redistribution of wealth is unlikely to end up on top of the political agenda.
An exception might be if some new political movement or ideology gets a lot of support quickly, and is somehow boosted by some unprecedented effect of AI (such as: no one has jobs anymore so they can spend all their time on politics, or there's some new AI-powered coordination mechanism).
Therefore, even if the future is a glorious transhumanist utopia, it is unlikely that people will be starting in it on an equal footing. Due to the previous arguments, it is also unlikely that they will be able to greatly change their relative footing later on.
Consider also equality between states. Some states stand to benefit massively more than others from AI. Many equalising measures, like UBI, would be difficult for states to extend to non-citizens under anything like the current political system. This is true even of the United States, the most liberal and humanist great power in world history. By default, the world order might therefore look (even more than today) like a global caste system based on country of birth, with even fewer possibilities for immigration (because the main incentive to allow immigration is its massive economic benefits, which only exist when humans perform economically meaningful work).
The default outcome?
Let's grant the assumptions at the start of this post and the above analysis. Then, the post-labour-replacing-AI world involves:
- Money will be able to buy results in the real world better than ever.
- People's labour gives them less leverage than ever before.
- Achieving outlier success through your labour in most or all areas is now impossible.
- There was no transformative leveling of capital, either within or between countries.
This means that those with significant capital when labour-replacing AI started have a permanent advantage. They will wield more power than the rich of today—not necessarily over people, to the extent that liberal institutions remain strong, but at least over physical and intellectual achievements. Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.
Also, there will be no more incentive for whatever institutions wield power in this world to care about people in order to maintain or grow their power, because all real power will flow from AI. There might, however, be significant lock-in of liberal humanist values through political institutions. There might also be significant lock-in of people's purchasing power, if everyone has meaningful UBI (or similar), and the economy retains a human-oriented part.
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital (and maybe not even then), you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? Much like in feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle ("my great-great-grandfather fought at Bosworth Field!"), anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ("oh, my uncle was technical staff at OpenAI"). The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.
In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.
In the absolute worst case, humanity goes extinct, potentially because of a slow-rolling optimisation for AI power over human prosperity over a long period of time. Because that's what the power and money incentives will point towards.
What's the takeaway?
If you read this post and accept a job at a quant finance company as a result, I will be sad. If you were about to do something ambitious and impactful about AI, and read this post and accept a job at Anthropic to accumulate risk-free personal capital while counterfactually helping out a bit over the marginal hire, I can't fault you too much, but I will still be slightly sad.
It's of course true that the above increases the stakes of medium-term (~2-10 year) personal finance, and you should consider this. But it's also true that right now is a great time to do something ambitious. Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal myths: the time when the future world order and its values are still liquid, not yet set in stone.
Previous upheavals—the various waves of industrialisation, the internet, etc.—were great for human ambition. With AI, we could have the last and greatest opportunity for human ambition—followed shortly by its extinction for all time. How can your reaction not be: "carpe diem"?
We should also try to preserve the world's dynamism.
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)
I think it's much healthier for society and its development to be a shifting, dynamic thing where the ability, as an individual, to add to it or change it remains in place. And that means keeping the potential for successful ambition—and the resulting disruption—alive.
How do we do this? I don't know. But I don't think you should see the approach of powerful AI as a blank inexorable wall of human obsolescence, consuming everything equally and utterly. There will be cracks in the wall, at least for a while, and they will look much bigger up close once we get there—or if you care to look for them hard enough from further out—than from a galactic perspective. As AIs get closer and closer to a Pareto improvement over all human performance, though, I expect we'll eventually need to augment ourselves to keep up.
94 comments
Comments sorted by top scores.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-28T23:13:03.948Z · LW(p) · GW(p)
When people such as myself say "money won't matter post-AGI" the claim is NOT that the economy post-AGI won't involve money (though that might be true) but rather that the strategy of saving money in order to spend it after AGI is a bad strategy. Here are some reasons:
- The post-AGI economy might not involve money, it might be more of a command economy.
- Even if it involves money, the relationship between how much money someone has before and how much money they have after might not be anywhere close to 1:1. For example:
- Maybe the humans will lose control of the AGIs
- Maybe the humans who control the AGIs will put values into the AGIs, such that the resulting world redistributes the money, so to speak. E.g. maybe they'll tax and redistribute to create a more equal society -- OR (and you talk about this, but don't go far enough!) maybe they'll make a less equal society, one in which 'how much money you saved' doesn't translate into how much money you have in the new world, and instead e.g. being in the good graces of the leadership of the AGI project, as judged by their omnipresent AGI servants that infuse the economy and talk to everyone, is what matters.
- Maybe there'll be a war or something, a final tussle over AGI (and possibly involving AGI) between US and China for example. Or between terrorists with AI-produced bioweapons and everyone else. Maybe this war will result in mass casualties and you might be one of them.
- Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
- You'll probably be able to buy planets post-AGI for the price of houses today. More generally your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money, or to put it in other words, there are massive diminishing returns.
- For your altruistic or political preferences -- e.g. your preferences about how society should be structured, or about what should be done with all the galaxies -- your money post-AGI will be valuable, but it'll be a drop in the bucket compared to all the money of other people and institutions who also saved their money through AGI (i.e. most of the planet). By contrast, very few people are spending money to influence AGI development right now. If you want future beings to have certain inalienable rights, or if you want the galaxies to be used in such-and-such a way, you can lobby AGI companies right now to change their spec/constitution/RLHF, and to make commitments about what values they'll instill, etc. More generally you can compete for influence right now. And the amount of money in the arena competing with you is... billions? Whereas the amount of money that is being saved for the post-AGI future is what, a hundred trillion? (Because it's all the rest of the money there is basically)
↑ comment by Benjamin_Todd · 2024-12-30T13:14:56.376Z · LW(p) · GW(p)
I agree (1) and (2) are possibilities. However, from a personal planning pov, you should focus on preparing for scenarios (i) that might last a long time and (ii) where you can affect what happens, since that's where the stakes are.
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability. (Edit: to be clear, it does reduce the value of saving vs. spending, I just don't think it's a big effect unless the probabilities are high.)
I think (3) is the key way to push back.
I feel unsure that all my preferences are either (i) local and easily satisfied or (ii) impartial & altruistic. You only need to have one type of preference with, say, log returns to money that can be better satisfied post-AGI to make capital post-AGI valuable to you (emulations maybe).
But let's focus on the altruistic case – I'm very interested in the question of how valuable capital will be altruistically post-AGI.
I think your argument about relative neglectedness makes sense, but is maybe too strong.
There's ~$500 trillion of world wealth, so if you have $1m now, that's 2e-9 of world wealth. By investing well through the transition, it seems like you can increase your share. Then set that against the chance of confiscation etc., and plausibly you end up with a similar share afterwards.
You say you'd be competing with the entire rest of the pot post-transition, but that seems too negative. Only <3% of income today is used on broadly altruistic stuff, and the amount focused on impartial longtermist values is minuscule (which is why AI safety is neglected in the first place). It seems likely it would still be a minority in the future.
People with an impartial perspective might be able to make good trades with the majority who are locally focused (give up earth for the commons etc.). People with low discount rates should also be able to increase their share over time.
So if you have 2e-9 of future world wealth, it seems like you could get a significantly larger share of the influence (>10x) from the perspective of your values.
Now you need to compare that to $1m extra donated to AI safety in the short-term. If you think that would reduce x-risk by less than 1e-8 then saving to give could be more valuable.
Suppose about $10bn will be donated to AI safety before the lock-in moment. Now consider adding a marginal $10bn. Maybe that decreases x-risk by another ~1%. Then that means $1m decreases it by about 1e-6. So with these numbers, I agree donating now is ~100x better.
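(A minimal sketch re-running the illustrative numbers above, just to make the comparison explicit; every input below is an assumption stated in this comment rather than an independent estimate.)

```python
# Minimal sketch of the saving-vs-donating comparison above, using the
# comment's own illustrative numbers (none of these are independent estimates).

savings = 1e6                     # $1m saved through the transition
world_wealth = 5e14               # ~$500 trillion of world wealth today
share_if_saved = savings / world_wealth          # ~2e-9 of world wealth

# Break-even point from the comment: saving wins only if donating $1m now
# would reduce x-risk by less than ~1e-8.
break_even_risk_reduction = 1e-8

# Donating now: a marginal $10bn to AI safety is assumed to cut x-risk by ~1%.
marginal_budget = 1e10
budget_risk_reduction = 0.01
risk_reduction_per_1m = budget_risk_reduction * (savings / marginal_budget)  # ~1e-6

print(risk_reduction_per_1m / break_even_risk_reduction)  # ~100, i.e. donating now ~100x better
```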
However, I could imagine people with other reasonable inputs concluding the opposite. It's also not obvious to me that donating now dominates so much that I'd want to allocate 0% to the other scenario.
↑ comment by wassname · 2024-12-31T07:45:13.414Z · LW(p) · GW(p)
Scenarios where we all die soon can mostly be ignored, unless you think they make up most of the probability.
I would disagree: unless you can change the probability. In which case they can still be significant in your decision making, if you can invest time or money or effort to decrease the probability.
↑ comment by L Rudolf L (LRudL) · 2024-12-29T15:33:40.343Z · LW(p) · GW(p)
I think I agree with all of this.
(Except maybe I'd emphasise the command economy possibility slightly less. And compared to what I understand of your ranking, I'd rank competition between different AGIs/AGI-using factions as a relatively more important factor in determining what happens, and values put into AGIs as a relatively less important factor. I think these are both downstream of you expecting slightly-to-somewhat more singleton-like scenarios than I do?)
EDIT: see here [LW(p) · GW(p)] for more detail on my take on Daniel's takes.
Overall, I'd emphasize as the main point in my post: AI-caused shifts in the incentives/leverage of human v non-human factors of production, and this mattering because the interests of power will become less aligned with humans while simultaneously power becomes more entrenched and effective. I'm not really interested in whether someone should save or not for AGI. I think starting off with "money won't matter post-AGI" was probably a confusing and misleading move on my part.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T16:08:10.829Z · LW(p) · GW(p)
OK, cool, thanks for clarifying. Seems we were talking past each other then, if you weren't trying to defend the strategy of saving money to spend after AGI. Cheers!
↑ comment by Jacob Pfau (jacob-pfau) · 2024-12-29T16:21:57.423Z · LW(p) · GW(p)
I see the command economy point as downstream of a broader trend: as technology accelerates, negative public externalities will increasingly scale and present irreversible threats (x-risks, but also more mundane pollution, errant bio-engineering plague risks etc.). If we condition on our continued existence, there must've been some solution to this which would look like either greater government intervention (command economy) or a radical upgrade to the coordination mechanisms in our capitalist system. Relevant to your power entrenchment claim: both of these outcomes involve the curtailment of power exerted by private individuals with large piles of capital.
(Note there are certainly other possible reasons to expect a command economy, and I do not know which reasons were particularly compelling to Daniel)
↑ comment by L Rudolf L (LRudL) · 2024-12-29T15:22:09.488Z · LW(p) · GW(p)
the strategy of saving money in order to spend it after AGI is a bad strategy.
This seems very reasonable and likely correct (though not obvious) to me. I especially like your point about there being lots of competition in the "save it" strategy because it happens by default. Also note that my post explicitly encourages individuals to do ambitious things pre-AGI, rather than focus on safe capital accumulation.
↑ comment by lc · 2024-12-30T16:30:38.408Z · LW(p) · GW(p)
#1 and #2 are serious concerns, but there's not really much I can do about them anyways. #3 doesn't make any sense to me.
You'll probably be able to buy planets post-AGI for the price of houses today
Right, and that seems like OP's point? Because I can do this, I shouldn't spend money on consumption goods today and in fact should gather as much money as I can now? Certainly massive stellar objects post-AGI will be more useful to me than a house is pre-agi?
As to this:
By contrast, very few people are spending money to influence AGI development right now. If you want future beings to have certain inalienable rights, or if you want the galaxies to be used in such-and-such a way, you can lobby AGI companies right now to change their spec/constitution/RLHF, and to make commitments about what values they'll instill, etc.
I guess I just don't really believe I have much control over that at all. Further, I can specifically invest in things likely to be important parts of the AGI production function, like semiconductors, etc.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T16:53:33.865Z · LW(p) · GW(p)
On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can't do with half a planet? Not that much.
Re: 1 and 2: Whether you can do something about them matters but doesn't undermine my argument. You should still discount the value of your savings by their probability.
However little control you have over influencing AGI development, you'll have orders of magnitude less control over influencing the cosmos / society / etc. after AGI.
↑ comment by lc · 2024-12-30T17:07:14.210Z · LW(p) · GW(p)
On the contrary, massive stellar objects post-AGI will be less useful to you than a house is today, as far as your selfish personal preferences are concerned. Consider the difference in your quality of life living in a nice house vs. skimping and saving 50% and living in a cheap apartment so you can save money. Next, consider the difference in your quality of life owning your own planet (replete with superintelligent servants) vs. owning merely half a planet. What can you do with a whole planet that you can't do with half a planet? Not that much.
It matters if it means I can live twice as long, because I can purchase more negentropy with which to maintain whatever lifestyle I have.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T17:21:12.764Z · LW(p) · GW(p)
Good point. If your utility is linear or close to linear in lifespan even at very large scales, and lifespan is based on how much money you have rather than e.g. a right guaranteed by the government, then a planetworth could be almost twice as valuable as half a planetworth.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T17:47:03.745Z · LW(p) · GW(p)
(My selfish utility is not close to linear in lifespan at very large scales, I think.)
↑ comment by quila · 2024-12-30T13:19:02.296Z · LW(p) · GW(p)
You'll probably be able to buy planets post-AGI for the price of houses today
I am confused by the existence of this discourse. Do its participants not believe strong superintelligence is possible?
(edit: I misinterpreted Daniel's comment, I thought this quote indicated they thought it was non-trivially likely, instead of just being reasoning through an 'even if' scenario / scenario relevant in OP's model)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T16:03:51.299Z · LW(p) · GW(p)
Can you elaborate, I'm not sure what you are asking. I believe strong superintelligence is possible.
↑ comment by quila · 2024-12-30T16:23:04.237Z · LW(p) · GW(p)
Why would strong superintelligence coexist with an economy? Wouldn't an aligned (or unaligned) superintelligence antiquate it all?
↑ comment by L Rudolf L (LRudL) · 2024-12-30T22:36:30.878Z · LW(p) · GW(p)
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Though yes, I agree that a superintelligent singleton controlling a command economy means this breaks down.
However it seems far from clear we will end up exactly there. The finiteness of the future lightcone and the resulting necessity of allocating "scarce" resources, the usefulness of a single medium of exchange (which you can see as motivated by coherence theorems if you want), and trade between different entities all seem like very general concepts. So even in futures that are otherwise very alien, but just not in the exact "singleton-run command economy" direction, I expect a high chance that those concepts matter.
↑ comment by quila · 2024-12-31T00:45:20.912Z · LW(p) · GW(p)
I am still confused.
Maybe the crux is that you are not expecting superintelligence?[1] This quote seems to indicate that: "However it seems far from clear we will end up exactly there". Also, your post writes about "labor-replacing AGI" but writes as if the world it might cause near-term lasts eternally ("anyone of importance in the future will be important because of something they or someone they were close with did in the pre-AGI era ('oh, my uncle was technical staff at OpenAI'). The children of the future will live their lives in the shadow of their parents")
If not, my response:
just not in the exact "singleton-run command economy" direction
I don't see why strongly-superintelligent optimization would benefit from an economy of any kind.
Given superintelligence, I don't see how there would still be different entities doing actual (as opposed to just-for-fun / fantasy-like) dynamic (as opposed to acausal) trade with each other, because the first superintelligent agent would have control over the whole lightcone.
If trade currently captures information (including about the preferences of those engaged in it), it is regardless unlikely to be the best way to gain this information, if you are a superintelligence.[2]
[1] (Regardless of whether the first superintelligence is an agent, a superintelligent agent is probably created soon after)
[2] I could list better ways of gaining this information given superintelligence, if this claim is not obvious.
↑ comment by L Rudolf L (LRudL) · 2024-12-31T10:12:12.710Z · LW(p) · GW(p)
If takeoff is more continuous than hard, why is it so obvious that there exists exactly one superintelligence rather than multiple? Or are you assuming hard takeoff?
Also, your post writes about "labor-replacing AGI" but writes as if the world it might cause near-term lasts eternally
If things go well, human individuals continue existing (and humans continue making new humans, whether digitally or not). Also, it seems more likely than not that fairly strong property rights continue (if property rights aren't strong, and humans aren't augmented to be competitive with the superintelligences, then prospects for human survival seem weak since humans' main advantage is that they start out owning a lot of the stuff—and yes, that they can shape the values of the AGI, but I tentatively think CEV-type solutions are neither plausible nor necessarily desirable). The simplest scenario is that there is continuity between current and post-singularity property ownership (especially if takeoff is slow and there isn't a clear "reset" point). The AI stuff might get crazy and the world might change a lot as a result, but these guesses, if correct, seem to pin down a lot of what the human situation looks like.
↑ comment by quila · 2024-12-31T12:50:53.749Z · LW(p) · GW(p)
Or are you assuming hard takeoff?
I don't think so, but I'm not sure exactly what this means. This post [LW · GW] says slow takeoff means 'smooth/gradual' and my view is compatible with that - smooth/gradual, but at some point the singularity point is reached (a superintelligent optimization process starts).
why is it so obvious that there exists exactly one superintelligence rather than multiple?
Because it would require an odd set of events that cause two superintelligent agents to be created: if not at the same time, then within the time it would take one to start affecting matter on the other side of the planet relative to where it is[1]. Even if that happened, I don't think it would change the outcome (e.g. lead to an economy). And it's still far from a world with a lot of superintelligences. And even in a world where a lot of superintelligences are created at the same time, I'd expect them to do something like a value handshake, after which the outcome looks the same again.
(I thought this was a commonly accepted view here)
Reading your next paragraph, I still think we must have fundamentally different ideas about what superintelligence (or "the most capable possible agent, modulo unbounded quantitative aspects like memory size") would be. (You seem to expect it to be not capable of finding routes to its goals which do not require (negotiating with) humans)
(note: even in a world where {learning / task-annealing / selecting a bag of heuristics} is the best (in a sense only) method of problem solving, which might be an implicit premise of expectations of this kind, there will still eventually be some Theory of Learning which enables the creation of ideal learning-based agents, which then take the role of superintelligence in the above story)
[1] which is still pretty short, thanks to computer communication.
(and that's only if being created slightly earlier doesn't afford some decisive physical advantage over the other, which depends on physics)
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-01T19:20:23.725Z · LW(p) · GW(p)
I think your expectations are closer to mine in some ways, quila. But I do doubt that the transition will be as fast and smooth as you predict. The AIs we're seeing now have very spiky capability profiles, and I expect early AGI to be similar. It seems likely to me that there will be a period which is perhaps short in wall-clock-time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
I think a single super-powerful ASI is one way things could go, but I also think that there's reason to expect a more multi-polar community of AIs, perhaps blending into each other around the edges of their collaboration, merges made of distilled down versions of their larger selves. I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
↑ comment by quila · 2025-01-01T20:41:04.746Z · LW(p) · GW(p)
Do you want to look for cruxes? I can't tell what your cruxy underlying beliefs are from your comment.
I think the cohesion of a typical human mind is more due to the limitations of biology and the shaping forces of biological evolution than to an inherent attractor-state in mindspace.
I don't think whether there is an attractor[1] towards cohesiveness is a crux for me (although I'd be interested in reading your thoughts on that anyways), at least because it looks like humans will try to create an optimal agent, so it doesn't need to have a common attractor or be found through one[2], it just needs to be possible at all.
But I do doubt that the transition will be as fast and smooth as you predict
Note: I wrote that my view is compatible with 'smooth takeoff', when asked if I was 'assuming hard takeoff'. I don't know what 'takeoff' looks like, especially prior to recursive AI research.
there will be a period which is perhaps short in wall-clock-time but still significant in downstream causal effects, where there are multiple versions of AGIs interacting with humans in shaping the ASI(s) that later emerge.
Sure (if 'shaping' is merely 'having a causal effect on', not necessarily in the hoped-for direction).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
Feel free to ask me probing questions as well, and no pressure to engage.
[1] (adding a note just in case it's relevant: attractors are not in mindspace/programspace itself, but in the conjunction with the specific process selecting the mind/program)
[2] as opposed to through understanding agency/problem-solving(-learning) more fundamentally/mathematically
[3] (Edit to add: I saw this other comment by you [LW(p) · GW(p)]. I agree that maybe there could be good governance made of humans + AIs and if that happened, then that could prevent anyone from creating a super-agent, although it would still end with (in this case aligned) superintelligence in my view.
I can also imagine, but doubt it's what you mean, runaway processes which are composed of 'many AIs' but which do not converge to superintelligence, because that sounds intuitively-mathematically possible (i.e., where none of the AIs are exactly subject to instrumental convergence, nor have the impulse to do things which create superintelligence, but the process nonetheless spreads and consumes and creates more ~'myopically' powerful AIs (until plateauing beyond the point of human/altruist disempowerment)))
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-01T21:44:46.963Z · LW(p) · GW(p)
I think there are a lot of places where we agree. In this comment I was trying to say that I feel doubtful about the idea of a superintelligence arising once, and then no other superintelligences arise because the first one had time to fully seize control of the world. I think it's also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
I think the offense-dominant nature of our current technological milieu means that humanity is almost certainly toast under the multipolar superintelligence scenario unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Responses:
Sure (if 'shaping' is merely 'having a causal effect on', not necessarily in the hoped-for direction).
Yes, that's what I meant. Control seems like not-at-all a default scenario to me. More like the accelerating self-improving AI process is a boulder tumbling down a hill, and humanity is a stone in its path that may alter its trajectory (while likely being destroyed in the process).
a more multi-polar community of AIs
Sure, that could happen before superintelligence, but why do you then frame it as an alternative to superintelligence?[3]
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
I'm pretty sure we're on a fast-track to either superintelligence-within-ten-years or civilizational collapse (e.g. large scale nuclear war). I doubt very much that any governance effort will manage to delay superintelligence for more than 10 years from now.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress, not on attempts to pause/delay. I think that algorithmic advance is the most dangerous piece of the puzzle, and wouldn't be much hindered by restrictions on large training runs (which is what people often mean when talking of delay).
But, if we're skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun. Then at that point, we could delay, and focus on more robust alignment (including value-alignment rather than just intent-alignment) and on human augmentation / digital people.
I talk more about my thoughts on this in my post here: https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy [LW · GW]
↑ comment by quila · 2025-01-01T22:10:52.720Z · LW(p) · GW(p)
My response, before having read the linked post:
I was trying to say that I feel doubtful about the idea of a superintelligence arising once [...] I think it's also possible that there is time for more than one super-human intelligence to arise and then compete with each other.
Okay. I am not seeing why you are doubtful. (I agree 2+ arising near enough in time is merely possible, but it seems like you think it's much more than merely possible, e.g. 5%+ likely? That's what I'm reading into "doubtful")
unless the controllers (likely the ASIs themselves) are in a stable violence-preventing governance framework (which could be simply a pact between two powerful ASIs).
Why would the pact protect beings other than the two ASIs? (If one wouldn't have an incentive to protect, why would two?) (Edit: Or, based on the term "governance framework", do you believe the human+AGI government could actually control ASIs?)
More that I am trying to suggest that such a multi-polar community of sub-super-intelligent AIs makes a multipolar ASI scenario seem more likely to me. Not as an alternative to superintelligence.
Thanks for clarifying. It's not intuitive to me why that would make it more likely, and I can't find anything else in this comment about that.
I think our best hope is to go all-in on alignment and governance efforts designed to shape the near-term future of AI progress [...] if we're skillful and lucky, we might manage to get to controlled-AGI, and have some sort of AGI-powered world government arise which was able to squash self-improving AI competitors before getting overrun
I see. That does help me understand the motive for 'control' research more.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T16:45:49.941Z · LW(p) · GW(p)
To a first approximation, yes, I believe it would antiquate it all.
↑ comment by niknoble · 2024-12-30T20:50:51.900Z · LW(p) · GW(p)
Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you:
- You'll probably be able to buy planets post-AGI for the price of houses today. More generally your selfish and/or local and/or personal preferences will be fairly easily satisfiable even with small amounts of money, or to put it in other words, there are massive diminishing returns.
No one will be buying planets for the novelty or as an exotic vacation destination. The reason you buy a planet is to convert it into computing power, which you then attach to your own mind. If people aren't explicitly prevented from using planets for that purpose, then planets are going to be in very high demand, and very useful for people on a personal level.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-31T01:05:56.070Z · LW(p) · GW(p)
Is your selfish utility linear in computing power? Is the difference between how your life goes with a planet's worth of compute that much bigger than how it goes with half a planet's worth of compute? I doubt it.
Also, there are eight billion people now, and many orders of magnitude more planets, not to mention all the stars etc. "You'll probably be able to buy planets post-AGI for the price of houses today" was probably a massive understatement.
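For a rough sense of scale, a back-of-envelope sketch (the star and planet counts are loose, commonly cited order-of-magnitude figures, not numbers taken from this thread):

```python
# Back-of-envelope: planets per person, using rough order-of-magnitude inputs.
people = 8e9                 # current world population
stars_milky_way = 2e11       # ~100-400 billion stars in the Milky Way
galaxies = 2e11              # rough count for the observable universe
planets_per_star = 1         # surveys suggest at least ~1 planet per star on average

print(stars_milky_way * planets_per_star / people)             # ~25 planets per person, Milky Way alone
print(stars_milky_way * galaxies * planets_per_star / people)  # ~5e12 planets per person, observable universe
```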
comment by ryan_greenblatt · 2024-12-28T18:38:14.578Z · LW(p) · GW(p)
This post seems to misunderstand what it is responding to and underplay a very key point: that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
It mentions this offhand:
Given sufficiently strong AI, this is not a risk about insufficient material comfort.
But, this was a key thing people were claiming when arguing that money won't matter. They were claiming that personal savings will likely not be that important for guaranteeing a reasonable amount of material comfort (or that a tiny amount of personal savings will suffice).
It seems like there are two importantly different types of preferences:
- Material needs and (non-positional) selfish preferences with roughly log returns
- Scope sensitive preferences
Indeed, for scope sensitive preferences (that you expect won't be shared with whoever otherwise ends up with power), you want to maximize your power, and insofar as money allows for more of this power (e.g. buying galaxies), money looks good.
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important (for long run scope sensitive preferences), but I won't argue for this here (and the post does directly argue against this).
Replies from: LRudL, thomas-kwa, GAA↑ comment by L Rudolf L (LRudL) · 2024-12-28T22:02:57.981Z · LW(p) · GW(p)
This post seems to misunderstand what it is responding to
fwiw, I see this post less as "responding" to something, and more as laying out considerations on their own, with some contrasting takes as a foil.
(On Substack, the title is "Capital, AGI, and human ambition", which is perhaps better)
that material needs will likely be met (and selfish non-positional preferences mostly satisfied) due to extreme abundance (if humans retain control).
I agree with this, though I'd add: "if humans retain control" and some sufficient combination of culture/economics/politics/incentives continues opposing arbitrary despotism.
I also think that even if all material needs are met, avoiding social stasis and lock-in matters.
Scope sensitive preferences
Scope sensitivity of preferences is a key concept that matters here, thanks for pointing that out.
Various other considerations about types of preferences / things you can care about (presented without endorsement):
- instrumental preference to avoid stasis because of a belief it leads to other bad things (e.g. stagnant intellectual / moral / political / cultural progress, increasing autocracy)
- altruistic preferences combined with a fear that less altruism will result if today's wealth hierarchy is locked in, than if social progress and disruption continued
- a belief that it's culturally good when human competition has some anchoring to object-level physical reality (c.f. the links here)
- a general belief in a tendency for things to go off the rails without a ground-truth unbeatable feedback signal that the higher-level process needs to be wary of—see Gwern's Evolution as a backstop for RL
- preferences that become more scope-sensitive due to transhumanist cognitive enhancement
- positional preferences, i.e. wanting to be higher-status or more something than some other human(s)
- a meta-positional-preference that positions are not locked in, because competition is fun
- a preference for future generations having at least as much of a chance to shape the world, themselves, and their position as the current generation
- an aesthetic preference for a world where hard work is rewarded, or rags-to-riches stories are possible
However, note that if these preferences are altruistic and likely to be the kind of thing other people might be sympathetic to, personal savings are IMO likely to be not-that-important relative to other actions.
I agree with this on an individual level. (On an org level, I think philanthropic foundations might want to consider my arguments above for money buying more results soon, but this needs to be balanced against higher leverage on AI futures sooner rather than later.)
Further, I do actually think that the default outcome is that existing governments at least initially retain control over most resources such that capital isn't clearly that important, but I won't argue for this here (and the post does directly argue against this).
Where do I directly argue against that? A big chunk of this post is pointing out how the shifting relative importance of capital v labour changes the incentives of states. By default, I expect states to remain the most important and powerful institutions, but the frame here is very much human v non-human inputs to power and what that means for humans, without any particular stance on how the non-human inputs are organised. I don't think states v companies v whatever fundamentally changes the dynamic: with labour-replacing AI, power flows from data centres, other physical capital, and whoever has the financial capital to pay for it, and sidesteps humans doing work; that is the shift I care about.
(However, I think which institutions do the bulk of decision-making re AI does matter for a lot of other reasons, and I'd be very curious to get your takes on that)
My guess is that the most fundamental disagreement here is about how much power tries to get away with when it can. My read of history leans towards: things are good for people when power is correlated with things being good for people, and otherwise not (though I think material abundance is very important too and always helps a lot). I am very skeptical of the stability of good worlds where incentives and selection pressures do not point towards human welfare.
For example, assuming a multipolar world where power flows from AI, the equilibrium is putting all your resources on AI competition and none on human welfare. I don't think it's anywhere near certain we actually reach that equilibrium, since sustained cooperation is possible (c.f. Ostrom's Governing the Commons), and since a fairly trivial fraction of the post-AGI economy's resources might suffice for human prosperity (and since maybe we in fact do get a singleton—but I'd have other issues with that). But this sort of concern still seems neglected and important to me.
↑ comment by Thomas Kwa (thomas-kwa) · 2024-12-29T05:38:00.961Z · LW(p) · GW(p)
Under log returns to money, personal savings still matter a lot for selfish preferences. Suppose the material comfort component of someone's utility is 0 utils at a consumption of $1/day. Then a moderately wealthy person consuming $1000/day today will be at 7 utils. The owner of a galaxy, at maybe $10^30/day, will be at 69 utils, but doubling their resources will still add the same 0.69 utils it would for today's moderately wealthy person. So my guess is they will still try pretty hard at acquiring more resources, similarly to people in developed economies today who balk at their income being halved and see it as a pretty extreme sacrifice.
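A quick numerical check of the arithmetic above (a minimal sketch, assuming only that utility is the natural log of daily consumption relative to the $1/day baseline):

```python
import math

def utils(consumption_per_day_usd, baseline=1.0):
    """Log utility, measured relative to a $1/day baseline."""
    return math.log(consumption_per_day_usd / baseline)

print(utils(1_000))                # moderately wealthy today: ~6.9 utils
print(utils(1e30))                 # galaxy owner: ~69.1 utils
print(utils(2e30) - utils(1e30))   # doubling adds ln(2) ~= 0.69 utils at any wealth level
```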
Replies from: Benjamin_Todd↑ comment by Benjamin_Todd · 2024-12-29T10:31:17.799Z · LW(p) · GW(p)
True, though I think many people have the intuition that returns diminish faster than log (at least given current tech).
For example, most people think increasing their income from $10k to $20k would do more for their material wellbeing than increasing it from $1bn to $2bn.
I think the key issue is whether new tech makes it easier to buy huge amounts of utility, or whether people want to satisfy other preferences beyond material wellbeing (which may have log or even close-to-linear returns).
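One way to make "faster than log" concrete is an isoelastic (CRRA) utility function with curvature eta > 1; the functional form and the eta = 1.5 value below are purely illustrative assumptions:

```python
import math

def crra_utility(c, eta):
    """Isoelastic (CRRA) utility; eta = 1 recovers log utility, eta > 1 diminishes faster than log."""
    return math.log(c) if eta == 1 else (c ** (1 - eta) - 1) / (1 - eta)

def doubling_gain(c, eta):
    """Utility gained from doubling consumption c."""
    return crra_utility(2 * c, eta) - crra_utility(c, eta)

# Under log utility (eta = 1), every doubling is worth the same ~0.693 utils.
print(doubling_gain(10_000, 1), doubling_gain(1e9, 1))
# With eta = 1.5, the $10k -> $20k doubling is worth ~316x more than the $1bn -> $2bn doubling.
print(doubling_gain(10_000, 1.5) / doubling_gain(1e9, 1.5))
```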
↑ comment by Guive (GAA) · 2024-12-29T10:28:09.614Z · LW(p) · GW(p)
There are always diminishing returns to money spent on consumption, but technological progress creates new products that expand what money can buy. For example, no amount of money in 1990 was enough to buy an iPhone.
More abstractly, there are two effects from AGI-driven growth: moving to a further point on the utility curve such that the derivative is lower, and new products increasing the derivative at every point on the curve (relative to what it was on the old curve). So even if in the future the lifestyles of people with no savings and no labor income will be way better than the lifestyles of anyone alive today, they still might be far worse than the lifestyles of people in the future who own a lot of capital.
If you feel this post misunderstands what it is responding to, can you link to a good presentation of the other view on these issues?
comment by Tom Davidson · 2024-12-29T18:50:43.457Z · LW(p) · GW(p)
One dynamic initially preventing stasis in influence post-AGI is that different people have different discount rates, so the more patient (those with lower discount rates) will slowly gain influence over time.
comment by waterlubber · 2024-12-28T23:10:28.234Z · LW(p) · GW(p)
This post collects my views on, and primary opposition to, AI and presents them in a very clear way. Thank you very much on that front. I think that this particular topic is well known in many circles, although perhaps not spoken of, and is the primary driver of heavy investment in AI.
I will add that capital-dominated societies, e.g. resource-extraction economies, typically suffer poor quality of life and few human rights. This is a well-known phenomenon (the "resource curse") and might offer a good jumping-off point for presenting this argument to others.
Replies from: isaac-liu↑ comment by Isaac Liu (isaac-liu) · 2024-12-29T22:31:20.632Z · LW(p) · GW(p)
I considered "opposing" AI on similar grounds, but I don't think it's a helpful and fruitful approach. Instead, consider and advocate for social and economic alternatives viable in a democracy. My current best ideas are either a new frontier era (exploring space, art, science as focal points of human attention) or fully automated luxury communism.
Replies from: waterlubber↑ comment by waterlubber · 2024-12-31T01:40:44.841Z · LW(p) · GW(p)
While I very much would love a new frontier era (I work at a rocket launch startup), and would absolutely be on board with Culture utopia, I see no practical means to ensure that any of these worlds come about without:
- Developing properly aligned AGI and making a pivotal turn, i.e. creating a Good™ Culture mind that takes over the world (fat chance!)
- Preventing the development of AI entirely
I do not see a world where AGI exists and follows human orders that does not result in a boot stomping on a face, forever: societal change in dystopian or totalitarian environments is largely produced via revolution, which becomes nonviable when means of coordination can be effectively controlled and suppressed at scale.
First-world countries only enjoy the standard of living they do because, to some degree, the ways to make tons of money are aligned with the well-being of society (large diversified investment funds optimize for overall economic well-being). Break this connection and things will slide quickly.
Replies from: isaac-liu↑ comment by Isaac Liu (isaac-liu) · 2025-01-01T21:00:31.471Z · LW(p) · GW(p)
Yes, AI will probably create some permanent autocracies. But I think democratic order and responsiveness to societal preferences can survive where they already exist, if a sufficiently large selectorate of representatives or citizens creates and updates the values used for alignment.
Fighting AI development means swimming not only against the tide of capitalist competition between companies, but also against competition between democratic and autocratic nations. Difficult, if not impossible.
comment by Stephen McAleese (stephen-mcaleese) · 2024-12-28T18:46:19.011Z · LW(p) · GW(p)
Excellent post, thank you. I appreciate your novel perspective on how AI might affect society.
I feel like a lot of LessWrong-style posts follow the theme of "AGI is created and then everyone dies" which is an important possibility but might lead to other possibilities being neglected.
Whereas this post explores a range of scenarios and describes a mainline scenario that seems like a straightforward extrapolation of trends we've seen unfolding over the past several decades.
comment by Noosphere89 (sharmake-farah) · 2024-12-28T18:40:58.317Z · LW(p) · GW(p)
My main default prediction here is that we will avoid both the absolute best-case and the absolute worst-case scenarios, because I predict intent alignment works well enough to avoid extinction-of-humanity scenarios, but I also don't believe we will see radical movements toward equality (indeed, the politics of our era is moving towards greater acceptance of inequality), so capitalism more or less survives the transition to AGI.
I do think dynamism will still exist, but it will be largely limited to the upper classes / very rich of society, and most people will not be a part of it; I'm including uploaded humans in this calculation.
To address this:
Rationalist thought on post-AGI futures is too solutionist. The strawman version: solve morality, solve AI, figure out the optimal structure to tile the universe with, do that, done. (The actual leading figures have far less strawman views; see e.g. Paul Christiano at 23:30 here—but the on-the-ground culture does lean in the strawman direction.)
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than they are today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Comment is also on substack:
https://nosetgauge.substack.com/p/capital-agi-and-human-ambition/comment/83401326
Replies from: gabor-fuisz, LRudL↑ comment by gugu (gabor-fuisz) · 2024-12-28T22:45:20.037Z · LW(p) · GW(p)
(indeed the politics of our era is moving towards greater acceptance of inequality)
How certain are you of this, and how much do you think it comes down to something more like "to what extent can disempowered groups unionise against the elite?"
To be clear, by default I think AI will make unionising against the more powerful harder, but it might depend on the governance structure. Maybe if we are really careful, we can get something closer to "Direct Democracy", where individual preferences actually matter more!
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2024-12-28T23:13:07.218Z · LW(p) · GW(p)
I am focused here on short-term politics in the US, which would ordinarily matter less if it weren't likely that world-changing AI will be built in the US; but given that it might be, it becomes way more important than normal.
↑ comment by L Rudolf L (LRudL) · 2024-12-28T22:13:58.020Z · LW(p) · GW(p)
To be somewhat more fair, the worry here is that in a regime where you don't need society anymore, because AIs can do all the work for your society, value conflicts become a bigger deal than they are today: there is less reason to tolerate other people's values if you can just found your own society based on your own values. And if you believe in the vulnerable world hypothesis, as a lot of rationalists do, then conflict has existential stakes (and even if not, it can be quite bad), so one group controlling the future is better than inevitable conflict.
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
At a foundational level, whether or not our current tolerance for differing values is stable ultimately comes down to whether we can compensate for the effect of AGI allowing people to make their own society.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
The latter might be more of a "singleton that allows playgrounds" rather than an actual multipolar world, though.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Comment is also on substack:
Thanks!
Replies from: sharmake-farah, nathan-helm-burger↑ comment by Noosphere89 (sharmake-farah) · 2024-12-28T22:55:31.905Z · LW(p) · GW(p)
So to summarise: if we have a multipolar world, and the vulnerable world hypothesis is true, then conflict can be existentially bad, and this is a reason to avoid a multipolar world. Didn't consider this, interesting point!
(I also commented on substack)
This applies, though more weakly, even in a non-vulnerable world, because the incentives for peaceful cooperation between values are much weaker in an AGI world.
Considerations:
- offense/defense balance (if offense wins very hard, it's harder to let everyone do their own thing)
- tunability-of-AGI-power / implementability of the harm principle (if you can give everyone AGI that can follow very well the rule "don't let these people harm other people", then you can give that AGI safely to everyone and they can build planets however they like but not death ray anyone else's planets)
I do think this requires severely restraining open-source, but conditional on that happening, I think the offense-defense balance/tunability will sort of work out.
Some of my general worries with singleton worlds are:
- humanity has all its eggs in one basket—you better hope the governance structure is never corrupted, or never becomes sclerotic; real-life institutions so far have not given me many signs of hope on this count
- cultural evolution is a pretty big part of how human societies seem to have improved and relies on a population of cultures / polities
- vague instincts towards diversity being good and less fragile than homogeneity or centralisation
Yeah, I'm not a fan of singleton worlds, and tend towards multipolar worlds. It's just that they might involve a lot of loss of life in the power struggles around AGI.
On governing the commons, I'd say Elinor Ostrom's observations are derivable from the folk theorems of game theory, which basically say that almost any outcome (subject to a few conditions that depend on the particular theorem) can be sustained as a Nash equilibrium if the game is repeated and players have to keep dealing with each other.
The problem is that AGI weakens the incentives for players to deal with each other, so Elinor Ostrom's solutions are much less effective.
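A toy sketch of that mechanism, using a repeated prisoner's dilemma with grim-trigger strategies (the payoff numbers and the continuation value delta, which stands in for how much players expect to keep needing each other, are made-up assumptions):

```python
# Toy repeated prisoner's dilemma with grim-trigger strategies.
# Payoffs: T = defect against a cooperator, R = mutual cooperation,
# P = mutual defection, S = cooperate against a defector.
T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta):
    """Cooperation holds iff the one-shot temptation gain (T - R) is outweighed by
    the discounted stream of future losses from permanent defection (R - P per round)."""
    return T - R <= (delta / (1 - delta)) * (R - P)

print(cooperation_sustainable(0.9))  # True: players expect to keep dealing with each other
print(cooperation_sustainable(0.3))  # False: when future interaction matters little, cooperation unravels
```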
More here:
↑ comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-01T19:35:06.320Z · LW(p) · GW(p)
I believe that the near future (next 10 years) involves a fragile world and heavily offense-dominant tech, such that a cohesive governing body (not necessarily a single mind, it could be a coalition of multiple AIs and humans) will be necessary to enforce safety. Particularly, preventing the creation/deployment of self-replicating harms (rogue amoral AI, bioweapons, etc.).
On the other hand, I don't think we can be sure what the more distant future (>50 years?) will look like. It may be that d/acc succeeds in advancing defense-dominant technology enough to make society more robust to violent defection. In such a world, it would be safe to have more multi-polar governance.
I am quite uncertain about how the world might transition to uni-polar governance, whether this will involve a singleton AI or a world government or a coalition of powerful AIs or what. Just that the 'suicide switch' for all of humanity and its AIs will for a time be quite cheap and accessible, and require quite a bit of surveillance and enforcement to ensure no defector can choose it.
comment by Matthew Barnett (matthew-barnett) · 2024-12-30T18:06:22.051Z · LW(p) · GW(p)
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labour resources (oil :: AI) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living (and ideally also lives forever). The only realistic forms of human ambition are playing local social and political games within your social network and class. [...] The children of the future will live their lives in the shadow of their parents, with social mobility extinct. I think you should definitely feel a non-zero amount of existential horror at this, even while acknowledging that it could've gone a lot worse.
I think the picture you've painted here leans slightly too heavily on the idea that humans themselves cannot change their fundamental nature to adapt to the conditions of a changing world. You mention that humans will be richer and will live longer in such a future, but you neglected to point out (at least in this part of the post) that humans can also upgrade their cognition by uploading their minds to computers and then expanding their mental capacities. This would put us on a similar playing field with AIs, allowing us to contribute to the new world alongside them.
(To be clear, I think this objection supports your thesis, rather than undermines it. I'm not objecting to your message so much as your portrayal of the default scenario.)
More generally, I object to the static picture you've presented of the social world after AGI. The impression I get from your default story is that after AGI, the social and political structures of the world will be locked in. The idea is that humans will remain in full control, as a permanently entrenched class, except we'll be vastly richer because of AGI. And then we'll live in some sort of utopia. Of course, this post argues that it will be a highly unequal utopia: more of a permanent aristocracy supplemented with UBI for the human lower classes. And maybe it will be a bit dystopian too, considering the entrenched nature of human social relations.
However, this perspective largely overlooks what AIs themselves will be doing in such a future. Biological humans are likely to become akin to elderly retirees in this new world. But the world will not be static, like a retirement home. There will be a vast world outside of humans. Civilization as a whole will remain a highly dynamic and ever-evolving environment characterized by ongoing growth, renewal, and transformation. AIs could develop social status and engage in social interactions, just as humans do now. They would not be confined to the role of a vast underclass serving the whims of their human owners. Instead, AIs could act as full participants in society, pursuing their own goals, creating their own social structures, and shaping their own futures. They could engage in exploration, discovery, and the building of entirely new societies. In such a world, humans would not be the sole sentient beings shaping the course of events.
As AIs get closer and closer to a Pareto improvement over all human performance, though, I expect we'll eventually need to augment ourselves to keep up.
I completely agree.
From my perspective, the optimistic vision for the future is not one where humans cling to their biological limitations and try to maintain control over AIs, enjoying their great wealth while ultimately living in an unchanging world characterized by familial wealth and ancestry. Instead, it’s a future where humans dramatically change our mental and physical condition, with humans embracing the opportunity to transcend our current form and join the AIs, and continue evolving with them. It's a future where we get to experience a new and dynamic frontier of existence unlocked by advanced technologies.
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2024-12-30T18:42:24.405Z · LW(p) · GW(p)
They would not be confined to the role of a vast underclass serving the whims of their human owners. Instead, AIs could act as full participants in society, pursuing their own goals, creating their own social structures, and shaping their own futures. They could engage in exploration, discovery, and the building of entirely new societies. In such a world, humans would not be the sole sentient beings shaping the course of events.
The key context here (from my understanding) is that Matthew doesn't think scalable alignment is possible (or doesn't think it is practically feasible) such that humans have a low chance of ending up remaining fully in control via corrigible AIs.
(I assume he is also skeptical of CEV style alignment as well.)
(I'm a bit confused how this view is consistent with self-augmentation. E.g., I'd be happy if emulated minds retained control without having to self-augment in ways they thought might substantially compromise their values.)
(His language also seems to imply that we don't have an option of making AIs which are both corrigibly aligned and for which this doesn't pose AI welfare issues. In particular, if AIs are either non-sentient or just have corrigible preferences (e.g. via myopia), I think it would be misleading to describe the AIs as a "vast underclass".)
I assume he agrees that most humans wouldn't want to hand over a large share of resources to AI systems if this is avoidable and substantially zero-sum. (E.g., suppose getting a scalable solution to alignment would require delaying vastly transformative AI by two years; I think most people would want to wait those two years, potentially even if they accept Matthew's other view that AIs very quickly acquiring large fractions of resources and power is quite unlikely to be highly violent (though they probably won't accept this view).)
(If scalable alignment isn't possible (including via self-augmentation), then the situation looks much less zero-sum. Humans inevitably end up with a tiny fraction of resources due to principal-agent problems.)
Replies from: matthew-barnett↑ comment by Matthew Barnett (matthew-barnett) · 2024-12-30T18:51:13.642Z · LW(p) · GW(p)
The key context here (from my understanding) is that Matthew doesn't think scalable alignment is possible (or doesn't think it is practically feasible) so that humans have a low chance of ending up remaining fully in control via corrigible AIs.
I wouldn’t describe the key context in those terms. While I agree that achieving near-perfect alignment—where an AI completely mirrors our exact utility function—is probably infeasible, the concept of alignment often refers to something far less ambitious. In many discussions, alignment is about ensuring that AIs behave in ways that are broadly beneficial to humans, such as following basic moral norms, demonstrating care for human well-being, and refraining from causing harm or attempting something catastrophic, like starting a violent revolution.
However, even if it were practically feasible to achieve perfect alignment, I believe there would still be scenarios where at least some AIs integrate into society as full participants, rather than being permanently relegated to a subordinate role as mere tools or servants. One reason for this is that some humans are likely to intentionally create AIs with independent goals and autonomous decision-making abilities. Some people have meta-preferences to create beings that don't share their exact desires, akin to how parents want their children to grow into autonomous beings with their own aspirations, rather than existing solely to obey their parents' wishes. This motivation is not a flaw in alignment; it reflects a core part of certain human preferences and how some people would like AI to evolve.
Another reason why AIs might not remain permanently subservient is that some of them will be aligned to individuals or entities who are no longer alive. Other AIs might be aligned to people as they were at a specific point in time, before those individuals later changed their values or priorities. In such cases, these AIs would continue to pursue the original goals of those individuals, acting autonomously in their absence. This kind of independence might require AIs to be treated as legal agents or integrated into societal systems, rather than being regarded merely as property. Addressing these complexities will likely necessitate new ways of thinking about the roles and rights of AIs in human society. I reject the traditional framing on LessWrong that overlooks these issues.
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2024-12-30T19:04:31.651Z · LW(p) · GW(p)
However, even if it were practically feasible to achieve perfect alignment, I believe there would still be scenarios where AIs integrate into society as full participants, rather than being permanently relegated to a subordinate role as mere tools or servants. One reason for this is that some humans are likely to intentionally create AIs with independent goals and autonomous decision-making abilities. Some people have meta-preferences to create beings that don't share their exact desires, akin to how parents want their children to grow into autonomous beings with their own aspirations, rather than existing solely to obey their parents' wishes. This motivation is not a flaw in alignment; it reflects a core part of certain human preferences and how some people would like AI to evolve.
Another reason why AIs might not remain permanently subservient is that some of them will be aligned to individuals or entities who are no longer alive. Other AIs might be aligned to people as they were at a specific point in time, before those individuals later changed their values or priorities. [...]
Hmm, I think I agree with this. However, I think there is (from my perspective) a huge difference between:
- Some humans (or EMs) decide to create (non-myopic and likely at least partially incorrigible) AIs with their resources/power and want these AIs to have legal rights.
- The vast majority of power and resources transitions to being controlled by AIs, even though the relevant people with resources/power who created these AIs would prefer an outcome in which these AIs didn't end up with this power and they instead had it.
If we have really powerful and human-controlled AIs (i.e. ASI), there are many directions things can go in depending on people's preferences. I think my general perspective is that the ASI at that point will be well positioned to do a bunch of the relevant intellectual labor (or, more minimally, if thinking about it myself is important because it is entangled with my preferences, a very fast simulated version of myself would be fine).
I'd count it as "humans being fully in control" if the vast majority of power held by independent AIs belongs to AIs that were intentionally appointed by humans, even though making an AI fully under their control was technically feasible with no tax. And if it was an option for humans to retain their power (as a fraction of overall human power) without having to take (from their perspective) aggressive and potentially preference-altering actions (e.g. without needing to become EMs or appoint a potentially imperfectly aligned AI successor).
In other words, I'm like "sure, there might be a bunch of complex and interesting stuff around what happens with independent AIs after we transition through having very powerful and controlled AIs (and ideally not before then), but we can figure this out then; the main question is who ends up in control of resources/power".
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2024-12-30T19:15:43.962Z · LW(p) · GW(p)
I remain interested in what a detailed scenario forecast from you looks like. A big disagreement I think we have is in how society will react to various choices, and I think laying this out could make this clearer. (As far as what a scenario forecast from my perspective looks like, I think @Daniel Kokotajlo [LW · GW] is working on one which is pretty close to my perspective and generally has the SOTA stuff here.)
Replies from: matthew-barnett↑ comment by Matthew Barnett (matthew-barnett) · 2024-12-30T20:04:39.415Z · LW(p) · GW(p)
I’m not entirely opposed to doing a scenario forecasting exercise, but I’m also unsure if it’s the most effective approach for clarifying our disagreements. In fact, to some extent, I see this kind of exercise—where we create detailed scenarios to illustrate potential futures—as being tied to a specific perspective on futurism that I consciously try to distance myself from.
When I think about the future, I don’t see it as a series of clear, predictable paths. Instead, I envision it as a cloud of uncertainty—a wide array of possibilities that becomes increasingly difficult to map or define the further into the future I try to look.
This is fundamentally different from the idea that the future is a singular, fixed trajectory that we can anticipate with confidence. Because of this, I find scenario forecasting less meaningful and even misleading as it extends further into the future. It risks creating the false impression that I am confident in a specific model of what is likely to happen, when in reality, I see the future as inherently uncertain and difficult to pin down.
Replies from: ryan_greenblatt, daniel-kokotajlo↑ comment by ryan_greenblatt · 2024-12-30T22:25:10.111Z · LW(p) · GW(p)
The point of a scenario forecast (IMO) is less that you expect clear, predictable paths and more that:
- Humans often do better at understanding and thinking about something if there is a specific story to discuss, so the tradeoffs of committing to one can be worth it.
- Sometimes scenario forecasting indicates a case where your previous views were missing a clearly very important consideration or were assuming something implausible.
(See also Daniel's sibling comment.)
My biggest disagreements with you are probably a mix of:
- We have disagreements about how society will react to AI (and how AI will react to society) given a realistic development arc (especially in short timelines), and these imply that your vision of the future seems implausible to me. Perhaps the easiest way to get through all of these disagreements is for you to concretely describe what you expect might happen. As an example, I have a view like "it will be hard for power to very quickly transition from humans to AIs without some sort of hard takeover, especially given dynamics about alignment and training AIs on imitation (and sandbagging)", but I think this is tied up with "when I think about the story for how a non-hard-takeover quick transition would go, it doesn't seem to make sense to me", and thus if you told the story from your perspective it would be easier to point at the disagreement in your ontology/worldview.
- (Less importantly?) We have various technical disagreements about how AI takeoff and misalignment will practically work that I don't think will be addressed by scenario forecasting. (E.g., I think software only singularity is more likely than you do, and think that worst-case scheming is more likely.)
↑ comment by Dakara (chess-ice) · 2024-12-31T23:14:58.403Z · LW(p) · GW(p)
E.g., I think software only singularity is more likely than you do, and think that worst-case scheming is more likely
By "software only singularity" do you mean a scenario where all humans are killed before singularity, a scenario where all humans merge with software (uploading) or something else entirely?
Replies from: ryan_greenblatt↑ comment by ryan_greenblatt · 2025-01-01T02:04:14.903Z · LW(p) · GW(p)
A software only singularity is a singularity driven by just AI R&D on a basically fixed hardware base. As in, can you get a singularity using only a fixed datacenter (with no additional compute over time), just by improving algorithms? See also here.
This term isn't directly a claim about the outcomes that follow from such a singularity.
You can get a singularity via hardware+software where the AIs are also accelerating the hardware supply chain such that you can use more FLOP to train AIs and you can run more copies. (Analogously to the hyperexponential progress throughout human history seemingly driven by higher population sizes, see here.)
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T20:08:14.503Z · LW(p) · GW(p)
I don't think that's a crux between us: I love scenario forecasting, but I don't think of the future as a series of clear, predictable paths; I envision it as a wide array of uncertain possibilities that becomes increasingly difficult to map or define the further into the future I look. I definitely don't think we can anticipate the future with confidence.
comment by Dagon · 2024-12-29T02:26:47.465Z · LW(p) · GW(p)
Upvoted, and I disagree. Some kinds of capital maintain (or increase!) their value; other kinds become cheaper relative to those. The big question is whether and how property rights to various capital elements remain stable.
It's not so much "will capital stop mattering", but "will the enforcement and definition of capital usage rights change radically".
comment by Astra · 2024-12-29T09:37:38.972Z · LW(p) · GW(p)
To have a very stable society amid exponentially advancing technology would be very strange: throughout history, seemingly permanent power structures have consistently been disrupted by technological change—and that was before tech started advancing exponentially. Roman emperors, medieval lords, and Gilded Age industrialists all thought they'd created unchangeable systems. They were all wrong.
comment by Radford Neal · 2024-12-28T20:27:14.492Z · LW(p) · GW(p)
You say: I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them.
It seems to me that this aggregates quite different things, at least when looking at the situation in terms of personal finance. Consider four people who have the following investments, which let's suppose are currently of equal value:
- Money in a savings account at a bank.
- Shares in a company that owns a nuclear power plant.
- Shares in a company that manufactures nuts and bolts.
- Shares in a company that helps employers recruit new employees.
These are all "capital", but will I think fare rather differently in an AI future.
As always, there's no guarantee that the money will retain its value (that depends, as usual, on central bank actions), and I think it's especially likely to lose its value in an AI future (cryptocurrencies as well). Why would an AI want to transfer resources to someone just because they have some fiat currency? Surely they have some better way of coordinating exchanges.
The nuclear power plant, in contrast, is directly powering the AIs, and should be quite valuable, since the AIs are valuable. This assumes, of course, that the company retains ownership. It's possible that it instead ends up belonging to whatever AI has the best military robots.
The nuts and bolts company may retain and even gain some value when AI dominates, if it is nimble in adapting, since the value of AI in making its operations more efficient will typically (in a market economy) be split between the AI company and the nuts and bolts company. (I assume that even AIs need nuts and bolts.)
The recruitment company is toast.
Replies from: LRudL↑ comment by L Rudolf L (LRudL) · 2024-12-28T22:26:44.084Z · LW(p) · GW(p)
Important other types of capital, as the term is used here, include:
- the physical nuclear power plants
- the physical nuts and bolts
- data centres
- military robots
Capital is not just money!
Why would an AI want to transfer resources to someone just because they have some fiat currency?
Because humans and other AIs will accept fiat currency as an input and give you valuable things as an output.
Surely they have some better way of coordinating exchanges.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that, unless they're hiding from human government oversight or breaking some capacity constraint in the financial system, in which case they can just use crypto instead.
It's possible that it instead ends up belonging to whatever AI has the best military robots.
Military robots are yet another type of capital! Note that if it were human soldiers, there would be much more human leverage in the situation, because at least some humans would need to agree to do the soldiering, would presumably get benefits for doing so, and would use the power and leverage they accrue from doing so to push broadly human goals.
The recruitment company is toast.
Or the recruitment company pivots to using human labour to improve AI, as actually happened with the hottest recent recruiting company! If AI is the best investment, then humans and AIs alike will spend their efforts on AI, and the economy will gradually cater more and more to AI needs over human needs. See Andrew Critch's post here [LW · GW], for example. Or my story here [LW · GW].
Replies from: Radford Neal↑ comment by Radford Neal · 2024-12-28T23:15:07.729Z · LW(p) · GW(p)
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that
Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.
Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).
comment by jacobjacob · 2024-12-30T20:47:48.583Z · LW(p) · GW(p)
Upstarts will not defeat them, since capital now trivially converts into superhuman labour in any field.
It is false today that big companies with 10x the galaxy brains and 100x the capital reliably outperform upstarts.[1]
Why would this change? I don't think you make the case.
- ^
My favorite example, though it might still be falsified. Google invented transformers, owns DeepMind, runs its own data centres, builds its own accelerators and has huge numbers of them, has tons of hard-to-get data (all those books they scanned before that became not okay to do), insane distribution channels with Gmail, Docs, Sheets, and millions of smartphones sold a year, more cash than God, and they literally run Google Search (and didn't have to build it from scratch like Perplexity!)... I almost struggle to mention any on-paper advantage they don't have in the AGI race... they're positioned for an earth-shattering vertical integration play... and yet they're behind OpenAI and Anthropic.
There are more clear-cut examples than this.
↑ comment by lc · 2024-12-31T01:16:45.822Z · LW(p) · GW(p)
They have "galaxy brains", but applying those galaxy brains strategically well towards your goals is also an aspect of intelligence. Additionally, those "galaxy brains" may be ineffective because of issues with alignment towards the company, whereas in a startup often you can get 10x or 100x more out of fewer employees because they have equity and understand that failure is existential for them. Demis may be smart, but he made a major strategic error if his goal was to lead in the AGI race, and despite the fact that the did he is still running DeepMind, which suggests an alignment/incentive issue with regards to Google's short term objectives.
↑ comment by Alexander Gietelink Oldenziel (alexander-gietelink-oldenziel) · 2024-12-31T00:15:51.877Z · LW(p) · GW(p)
OpenAI is worth about 150 billion dollars and has the backing of Microsoft. Google Gemini is apparently competitive now with Claude and GPT-4. Yes, Google was sleeping on LLMs two years ago and OpenAI is a little ahead, but this moat is tiny.
↑ comment by L Rudolf L (LRudL) · 2024-12-30T22:56:34.571Z · LW(p) · GW(p)
For example:
- Currently big companies struggle to hire and correctly promote talent for the reasons discussed in my post, whereas AI talent will be easier to find/hire/replicate given only capital & legible info
- To the extent that AI ability scales with resources (potentially boosted by inference-time compute, and if SOTA models are no longer available to the public), then better-resourced actors have better galaxy brains
- Superhuman intelligence and organisational ability in AIs will mean less bureaucratic rot and fewer communication-bandwidth problems in large orgs, compared to orgs made out of human-brain-sized chunks, reducing the costs of scale
Imagine for example the world where software engineering is incredibly cheap. You can start a software company very easily, yes, but Google can monitor the web for any company that makes revenue off of software, instantly clone the functionality (because software engineering is just a turn-the-crank-on-the-LLM thing now) and combine it with their platform advantage and existing products and distribution channels. Whereas right now, it would cost Google a lot of precious human time and focus to try to even monitor all the developing startups, let alone launch a competing product for each one. Of course, it might be that Google itself is too bureaucratic and slow to ever do this, but someone else will then take this strategy.
C.f. the oft-quoted thing about how the startup challenge is getting to distribution before the incumbents get to innovation. But if the innovation is engineering, and the engineering is trivial, how do you get time to get distribution right?
(Interestingly, as I'm describing it above, the key thing is not so much capital intensity, and more that innovation/engineering is no longer a source of differential advantage, because everyone can do it really well with their AIs)
There's definitely a chance that there's some "crack" in this, either from the economics or the nature of AI performance or some interaction. In particular, as I mentioned at the end, I don't think modelling the AI as an approaching blank wall of perfect, all-obsoleting intelligence is the right model for short-to-medium-term dynamics. Would be very curious if you have thoughts.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-30T16:57:39.957Z · LW(p) · GW(p)
I've heard many people say something like "money won't matter post-AGI". This has always struck me as odd, and as most likely completely incorrect.
Given our exchange in the comments, perhaps you should clarify that you aren't trying to argue that saving money to spend after AGI is a good strategy; you agree it's a bad strategy. Sometimes when people say "money won't matter post-AGI" they mean "saving money to spend after AGI is a bad strategy", whereas you are taking it to mean "we'll all be living in egalitarian utopia after AGI" or something like that.
↑ comment by L Rudolf L (LRudL) · 2024-12-30T23:45:13.218Z · LW(p) · GW(p)
I already added this to the start of the post:
Edited to add: The main takeaway of this post is meant to be: Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched. Many people are reading this post in a way where either (a) "capital" means just "money" (rather than also including physical capital like factories and data centres), or (b) the main concern is human-human inequality (rather than broader societal concerns about humanity's collective position, the potential for social change, and human agency).
However:
perhaps you should clarify that you aren't trying to argue that saving money to spend after AGI is a good strategy; you agree it's a bad strategy
I think my take is a bit more nuanced:
- in my post, I explicitly disagree with focusing purely on getting money now, and especially oppose abandoning more neglected ways of impacting AI development in favour of ways that also optimise for personal capital accumulation (see the start of the takeaways section)
- the reason is that I think now is a uniquely "liquid" / high-leverage time to shape the world through hard work, especially because the world might soon get much more locked-in and because current AI makes it easier to do things
- (also, I think modern culture is way too risk averse in general, and worry many people will do motivated reasoning and end up thinking they should accept the quant finance / top lab pay package for fancy AI reasons, when their actual reason is that they just want that security and status for prosaic reasons, and the world would benefit most from them actually daring to work on some neglected impactful thing)
- however, it's also true that money is a very fungible resource, and we're heading into very uncertain times where the value of labour (most people's current biggest investment) looks likely to plummet
- if I had to give advice to people who aren't working on influencing AI for the better, I'd focus on generically "moving upwind" in terms of fungible resources: connections, money, skills, etc. If I had to pick one to advise a bystander to optimise for, I'd put social connections above money—robust in more scenarios (e.g. in very politicky worlds where money alone doesn't help), has deep value in any world where humans survive, in post-labour futures even more likely to be a future nexus of status competition, and more life-affirming and happiness-boosting in the meantime
This is despite agreeing with the takes in your earlier comment. My exact views in more detail (comments/summaries in square brackets):
- The post-AGI economy might not involve money, it might be more of a command economy. [yep, this is plausible, but as I write here [LW(p) · GW(p)], I'm guessing my odds on this are lower than yours—I think a command economy with a singleton is plausible but not the median outcome]
- Even if it involves money, the relationship between how much money someone has before and how much money they have after might not be anywhere close to 1:1. For example: [loss of control, non-monetary power, destructive war] [yep, the capital strategy is not risk-free, but this only really applies re selfish concerns if there are better ways to prepare for post-AGI; c.f. my point about social connections above]
- Even if saving money through AGI converts 1:1 into money after the singularity, it will probably be worth less in utility to you
- [because even low levels of wealth will max out personal utility post-AGI] [seems likely true, modulo some uncertainty about: (a) utility from positional goods v absolute goods v striving, and (b) whether "everyone gets UBI"-esque stuff is stable/likely, or fails due to despotism / competitive incentives / whatever]
- [because for altruistic goals the leverage from influencing AI now is probably greater than leverage of competing against everyone else's saved capital after AGI] [complicated, but I think this is very correct at least for individuals and most orgs]
Regarding:
you are taking it to mean "we'll all be living in egalitarian utopia after AGI" or something like that
I think there's a decent chance we'll live in a material-quality-of-life utopia after AGI, assuming "Things Go Fine" (i.e. misalignment / war / going-out-with-a-whimper don't happen). I think it's unlikely to be egalitarian in the "everyone has the same opportunities and resources" sense, for the reasons I lay out above. There are lots of valid arguments for why, if Things Go Fine, it will still be much better than today despite that inequality, and the inequality might not practically matter very much because consumption gets maxed out etc. To be clear, I am very against cancelling the transhumanist utopia because some people will be able to buy 30 planets rather than just a continent. But there are some structural things that make me worried about stagnation, culture, and human relevance in such worlds.
In particular, I'd be curious to hear your takes about the section on state incentives after labour-replacing AI [LW · GW], which I don't think you've addressed and which I think is fairly cruxy to why I'm less optimistic than you about things going well for most humans even given massive abundance and tech.
Replies from: daniel-kokotajlo↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-12-31T17:37:26.178Z · LW(p) · GW(p)
Thanks for the clarification!
I am not sure you are less optimistic than me about things going well for most humans even given massive abundance and tech. We might not disagree. In particular I think I'm more worried about coups/power-grabs than you are; you say both considerations point in different directions whereas I think they point in the same (bad) direction.
I think that if things go well for most humans, it'll either be because we manage to end this crazy death race to AGI and get some serious regulation etc., or because the power-hungry CEO or President in charge is also benevolent and humble and decides to devolve power rather than effectively tell the army of AGIs "go forth and do what's best according to me." (And also in that scenario because alignment turned out to be easy / we got lucky and things worked well despite YOLOing it and investing relatively little in alignment + control)
↑ comment by wassname · 2025-01-01T04:16:16.479Z · LW(p) · GW(p)
I'm more worried about coups/power-grabs than you are;
We don't have to make individual guesses. It seems reasonable to get a base rate from human history. Although we may all disagree about how much this will generalise to AGI, evidence still seems better than guessing.
My impression from history is that coups/power-grabs and revolutions are common when the current system breaks down, or when there is a big capabilities advance (guns, radio, printing press, bombs, etc) between new actors and old.
War between old actors also seems likely in these situations because an asymmetric capabilities advance makes winner-takes-all approaches profitable. Winning a war, empire, or colony can historically pay off, but only if you have the advantage to win.
comment by Veedrac · 2024-12-30T10:19:01.373Z · LW(p) · GW(p)
Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history,
This claim is untrue. SpaceX has never had less money than Blue Origin. It is maybe true that Blue Origin had fewer obligations attached to this money, since it was exclusively coming from Bezos, rather than a mix of investment, development contracts, and income for SpaceX, but the baseline claim that SpaceX was “money-poor” is false.
comment by ryan_b · 2025-01-03T21:25:15.575Z · LW(p) · GW(p)
I agree with the ideas of AI being labor-replacing, and I also agree that the future is likely to be more unequal than the present.
Even so, I strongly predict that the post-AGI future will not be static. Capital will not matter more than ever after AGI: instead I claim it will be a useless category.
The crux of my claim is that when AI replaces labor and buying results is easy, the value will shift to the next biggest bottlenecks in production. Therefore future inequality will be defined by the relationship to these bottlenecks, and the new distinctions will be drawn around them. This will split up existing powers.
In no particular order, here are some more-concrete examples of the kind of thing I am talking about:
- Since AI is the major driver of change, the best candidate bottlenecks are the ones for wider deployment of AI. See how Nvidia is now the 2nd or 3rd largest company in the world by market cap: they sell the compute upon which AI depends. Their market cap is larger than any company directly competing in the current AI race. The bottlenecks to compute production are constructing chip fabs; electricity; the availability of rare earth minerals.
- About regular human welfare: the lower this goes, the less incentive there is for solidarity among the powers that be. Consider the largest company in the world by revenue, Walmart, which is in the retail business. Amazon is a player in the AI economy as a seller of compute services and through its partnership with Anthropic, but ~80% of its revenue comes from the same sector as Walmart. Right now, both companies have an interest in a growing population with growing wealth and are on the same side. If the population and its buying power begins to shrink, they will be in an existential fight over the remainder, yielding AI-insider/AI-outsider division. See also Google and Facebook, who sell ads that sell stuff to the population. And the agriculture sector, which makes food. And Apple, the largest company in the world by market cap, which sells consumer devices. They all benefit from more human welfare rather than less; if it collapses they all die (except maybe for the AI-related parts). Is the weight of their (current) capital going to fall on the side of reducing human welfare?
- I also think the AI labor replacement is initially on the side of equality. Consider the law: the existing powers systematically crush regular people in courts because they have access to lots of specialist labor in the form of lawyers. Now, any single person who is a competent user of Claude can feasibly match the output of any traditional legal team, and therefore survive the traditional strategy of dragging out the proceedings with paperwork until the side with less money runs out. The rarest and most expensive labor will probably be the first to be replaced because the profit will be largest. The exclusive access to this labor is fundamental to the power imbalance of wealth inequality, so its replacement is an equalizing force.
↑ comment by L Rudolf L (LRudL) · 2025-01-03T22:26:47.360Z · LW(p) · GW(p)
- The bottlenecks to compute production are constructing chip fabs; electricity; the availability of rare earth minerals.
Chip fabs and electricity generation are capital!
Right now, both companies have an interest in a growing population with growing wealth and are on the same side. If the population and its buying power begins to shrink, they will be in an existential fight over the remainder, yielding AI-insider/AI-outsider division.
Yep, AI buying power winning over human buying power in setting the direction of the economy is an important dynamic that I'm thinking about.
I also think the AI labor replacement is initially on the side of equality. [...] Now, any single person who is a competent user of Claude can feasibly match the output of any traditional legal team, [...]. The exclusive access to this labor is fundamental to the power imbalance of wealth inequality, so its replacement is an equalizing force.
Yep, this is an important point, and a big positive effect of AI! I write about this here [LW · GW]. We shouldn't lose track of all the positive effects.
Replies from: ryan_b↑ comment by ryan_b · 2025-01-04T00:41:02.518Z · LW(p) · GW(p)
Chip fabs and electricity generation are capital!
Yes, but so are ice cream trucks and the whirligig rides at the fair. Having "access to capital" means little if what you're buying is an ice cream truck, but a great deal if it's a rare earth refinery.
My claim is that the big distinction now is between labor and capital because everyone has about an equally hard time getting labor; when AI replacement happens and that goes away, the next big distinction will be between different types of what we now generically refer to as capital. The term is uselessly broad, in my opinion: we need to go down at least one level towards concreteness to talk about the future better.
comment by Nathan Johnson (nathan-johnson-1) · 2024-12-30T05:37:12.182Z · LW(p) · GW(p)
"if you're willing to assume arbitrarily extreme wealth generation from AI"
Let me know if I'm missing something, but I don't think this is a fair assumption. GDP increases when consumer spending increases; consumer spending increases when wages increase; and wages are headed to zero due to AGI, so it's unclear what would sustain the consumer demand behind that extreme wealth generation.
Note: the current GDP per capita of the U.S. is $80,000.
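For reference, and not something the original comment spells out, the chain of reasoning here runs through the standard expenditure identity

GDP = C + I + G + NX,

where consumption C has historically been funded mostly out of wage income. If wages go to zero, sustained spending (and hence measured GDP) would have to be financed out of capital income or transfers such as UBI instead, which is exactly the assumption being questioned.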
comment by quiet_NaN · 2024-12-29T18:27:06.123Z · LW(p) · GW(p)
Some comments.
[...] We will quickly hit superintelligence, and, assuming the superintelligence is aligned, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Personally, I am reluctant to tell superintelligences how they should coordinate. It feels like some ants looking at the moon and thinking "surely if some animal is going to make it to the moon, it will be a winged ant." Just because market economies have absolutely dominated the period of human development we might call 'civilization' does not mean that ASIs will not come up with something better.
The era of human achievement in hard sciences will probably end within a few years because of the rate of AI progress in anything with crisp reward signals.
As an experimental physicist, I have opinions about that statement. Doing stuff in the physical world is hard. The business case for AI systems that can drive motor vehicles on the road is obvious to anyone, and yet autonomous vehicles remain the exception rather than the rule. (Yes, regulations are part of that story, but not all of it.) By contrast, the business case for an AI system that can cable up a particle detector is basically non-existent. I can see an AI either using a generic mobile robot developed for other purposes to plug in all the BNC cables, or using a minimum-wage worker with a head-up display as a bio-drone -- but more likely in two decades than in a few years.
Of course, experimental physics these days is very much a team effort -- the low-hanging fruit has mostly been picked, and nobody is going to discover radium or fission again; at most they will be a small cog in a large machine that discovers the Higgs boson or gravitational waves.[1] So you might argue that experimental physics today is already not a place for peak human excellence (a la the Humanists in Terra Ignota).
More broadly, I agree that if ASI happens, most unaugmented humans are unlikely to stay at the helm of our collective destiny (to the limited degree they ever were). Even if some billionaire manages to align the first ASI to maximize his personal wealth, if he is clever he will obey the ASI just like all the peasants. His agency is reduced to not following the advice of his AI on some trivial matters. ("I have calculated that you should wear a blue shirt today for optimal outcomes." -- "I am willing to take a slight hit in happiness and success by making the suboptimal choice to wear a green shirt, though.")
Relevant fiction: Scott Alexander, The Whispering Earring.
Also, if we fail to align the first ASI, human inequality will drop to zero.
[1] Of course, being a small cog in some large machine, I will say that.
comment by MountainPath · 2024-12-31T18:08:52.720Z · LW(p) · GW(p)
This is one of the most important reads for me (easily top 5). Thank you.
comment by Isaac Liu (isaac-liu) · 2024-12-29T17:25:16.634Z · LW(p) · GW(p)
I have thought this way for a long time, and I'm glad someone was able to express my position and predictions more clearly than I ever could.
This said, I think the new solution (rooted in history) is the establishment of new frontiers. Who will care about relative status if they get to be the first human to set foot on some distant planet, or guide AI to some novel scientific or artistic discovery? Look to the human endeavors where success is unbounded and preferences are required to determine what is worthwhile.
comment by Aleksey Bykhun (caffeinum) · 2024-12-29T16:57:00.046Z · LW(p) · GW(p)
re: the post's main claim, I think local entrepreneurship would actually thrive
setting aside network effects: would you rather use a taxi app created by a faceless VC or one created by your neighbour?
(actually it's not even a fake example, see https://techcrunch.com/2024/07/15/google-backs-indian-open-source-uber-rival-namma-yatri/)
it's also already happening in the indie hacker space – people prefer to buy something that's #buildinpublic over the exact same product made by Google
Replies from: smyja↑ comment by wahala (smyja) · 2025-01-02T07:44:57.391Z · LW(p) · GW(p)
People will use the cheaper one; the faceless VC has the capital to subsidize costs until every competitor is flushed out.
comment by Annapurna (jorge-velez) · 2024-12-29T14:20:39.044Z · LW(p) · GW(p)
Thank you for your post. I've been looking for posts like this all over the internet that get my mind racing about the possibilities of the near future.
I think the AI discussion suffers from definitional problems. I think when most people talk about money not mattering when AGI arrives (myself included), we tend to define AGI as something closer to this:
"one single AI system doing all economic planning."
While your world model makes a lot of sense, I don't think the dystopian scenario you envision would include me in the "capital class". I don't have the wealth, intellect, or connections to find myself rising to that class. My only hope is that the AI system that does all economic planning arrives soon and is aligned to elevate the human race equally and fairly.
comment by Leo Hike (leo-hike) · 2024-12-31T05:14:44.019Z · LW(p) · GW(p)
I share your existential dread completely; however, some things look even more pessimistic to me than you outlined.
- It is entirely possible that intellectual labor is automated first, so most good jobs are gone but humans are not. Creating fascist-like ideologies and religions and then sending a bunch of now otherwise useless humans to conquer more land could become a winning strategy, especially given that some countries arguably employ it right now (e.g. Russia).
- It is unlikely that 10x global economic growth happens as a result of labor-replacing AGI -- there are physical-world constraints, and it still takes a lot of time to create and move resources around, even when executing a perfect scientific, engineering, and military growth plan.
- Meaningful UBI is definitely not happening in most countries, even in most European ones, because most AGI tech will be in the US and China.
I honestly don't see any solutions other than a proper Butlerian Jihad, but it is immensely hard to implement and it would take a lot of time to convince enough people. The thing is, even many rich people won't benefit that much from labor-replacing AGI because of the reduction of their customer base. For example, the retail part of Amazon's business is unlikely to grow because of AGI, since Amazon's retail customers will become poorer. Similar logic applies to Netflix, unless an AGI uses it to spread its ideologies.
I am definitely not going to have children with such prospects in life.
comment by niknoble · 2024-12-30T20:42:15.380Z · LW(p) · GW(p)
This post and many of the comments are ignoring one of the main reasons that money becomes so much more critical post-AGI. It's because of the revolution in self-modification that ensues shortly afterwards.
Pre-AGI, a person can use their intelligence to increase their money, but not the other way around; post-AGI it's the opposite. The same applies if you swap intelligence for knowledge, health, willpower, energy, happiness set-point, or percentage of time spent awake.
This post makes half of that observation: that it becomes impossible to increase your money using your personal qualities. But it misses the other half: that it becomes possible to improve your personal qualities using your money.
The value of capital is so much higher once it can be used for self-modification.
For one thing, these modifications are very desirable in themselves. It's easy to imagine a present-day billionaire giving up all he owns for a modest increase along just a few of these axes, say a 300% increase in intelligence and a 100% increase in energy.
But even if you trick yourself into believing that you don't really want self-modification (most people will claim that immortality is undesirable, so long as they can't have it, and likewise for wireheading), there are race dynamics that mean you can't just ignore it.
People who engage in self-modification will be better equipped to influence the world, affording them more opportunities for self-modification. They will undergo recursive self-improvement similar to the kind we imagine for AGI. At some point, they will think and move so much faster than an unaugmented human that it will be impossible to catch up.
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even moreso with humans than AIs, since humans were not explicitly designed to be helpful or benevolent.
You might say, "Well, there's nothing I can do in that world anyway, because I'm always going to lose a self-modification race to the people who start as billionaires, and being a winner-takes-all situation, there's no prize for giving it a decent try." However, this isn't necessarily true. Once self-modification becomes possible, there will still be time to take advantage of it before things start getting out of control. It will start out very primitive, resembling curing diseases more than engineering new capabilities. In this sense, it arguably already exists in a very limited form.
In this critical early period, a person will still have the ability to author their destiny, with the degree of that ability being mostly determined by the amount of self-modification they can afford.
Under some conditions, they may be able to permanently escape the influence of a hostile superintelligence (whether artificial or a hyperaugmented human). For example, a nearly perfect escape outcome could be achieved by travelling in a straight line close to the speed of light, bringing with you sufficient resources and capabilities to:
- Stay alive indefinitely
- Continue the process of self-improvement
In the chaos of an oncoming singularity, it's not unimaginable that a few people could slip away in that fashion. But it won't happen if you're broke.
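A quick back-of-the-envelope (mine, not spelled out in the original comment) on why a near-lightspeed head start is "nearly perfect": if the escaper leaves at speed v and a pursuer sets off a time Δt later at the maximum possible speed c, the chase takes

t_catch = v Δt / (c − v)

measured from the pursuer's departure, which grows without bound as v approaches c. So even an arbitrarily more capable pursuer, constrained only by the light-speed limit, closes the gap ever more slowly the closer the escaper gets to c, buying the escaper a correspondingly long time to keep self-improving.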
Notes
- The line between buying an exocortex and buying/renting intelligent servants is somewhat blurred, so arguably the OP doesn't totally miss the self-modification angle. But it should be called out a lot more explicitly, since it is one of the key changes coming down the pike.
- Most of this comment doesn't apply if AGI leads to a steady state where humans have limited agency (e.g. ruling AGIs or their owners prevent self-modification, or humans are replaced entirely by AGIs). But if that sort of outcome is coming, then our present-day actions have no positive or negative effects on our future, so there's no point in preparing for it.
↑ comment by Noosphere89 (sharmake-farah) · 2024-12-30T22:40:50.322Z · LW(p) · GW(p)
This might be okay if they respected the autonomy of unaugmented people, but all of the arguments about AGI being hard to control, and destroying its creators by default, apply equally well to hyperaugmented humans. If you try to coexist with entities who are vastly more powerful than you, you will eventually be crushed or deprived of key resources. In fact, this applies even moreso with humans than AIs, since humans were not explicitly designed to be helpful or benevolent.
I would go further and say that augmented humans are probably riskier than AIs: much of the experimentation that is legal to do on an AI cannot be done on a human, and aligning a human to you is far riskier from both a legal and a practical perspective, because it is essentially brainwashing; it is also easier to control an AI's data sources than a human's.
This is a big reason why I never really liked the human-augmentation path to solving AI alignment that people like Tsvi Benson-Tilsen want: you then possibly have two alignment problems rather than one (link below):
https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods [LW · GW]
comment by gugu (gabor-fuisz) · 2024-12-28T22:40:40.695Z · LW(p) · GW(p)
[sorry, have only skimmed the post, but I feel compelled to comment.]
I feel like, unless we make a lot of progress on some sort of "Science of Generalisation of Preferences", then for more abstract preferences (non-biological needs mostly fall into this category), even individuals who on paper have much more power than others will likely rely on vastly superintelligent AI advisors to realise those preferences -- and at that point, I think it is the AI advisor that is _really_ in control.
I'm not super certain of this. The Catholic Church could definitely decide to build a bunch of churches on some planets (though what counts as a church, in the limit?), but if they also want more complicated things like "people" "worshipping" "God" in those churches, that seems to be more and more up to the interpretation of the AI Assistants building those worship-maximising communes.
comment by Anders Lindström (anders-lindstroem) · 2025-01-02T09:58:15.587Z · LW(p) · GW(p)
- Money will be able to buy results in the real world better than ever.
- People's labour gives them less leverage than ever before.
- Achieving outlier success through your labour in most or all areas is now impossible.
- There was no transformative leveling of capital, either within or between countries.
If this is the "default" outcome there WILL be blood. The rational thing to do in this case it to get a proper prepper bunker and see whats left when the dust have settled.
comment by Alex Collins (alex-collins) · 2024-12-30T19:26:25.298Z · LW(p) · GW(p)
I've also commented on Substack, but wanted to comment in a different direction here (which I hope is closely aligned to LessWrong values). This article feels like the first part of the equation for describing possible AI futures. It starts from the basis that because labour will be fully substitutable by AI, its value goes to zero (which seems correct to me). What about following up with the consequences of that loss of value? The things that people spend their earnings on will go to zero too. What's in that group? The service industry, homes.
Metrics we can track here are homelessness, unemployment, consumer debt, service industry bankruptcies.
The government will need to redistribute wealth, and because ownership of capital is the only wealth left, it will need to tax capital.
Perhaps exclude "cash" from "capital" and only define capital as ownership of means of production.
Ultimately, how do we defend against this future that only a few people seem to be able to see?
comment by will rinehart (willrinehart) · 2024-12-30T16:20:11.179Z · LW(p) · GW(p)
Interesting post. Some comments from an economist.
You note,
The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour).
And also,
Let's assume human mental and physical labour across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labour-replacing AI.
These are both incredibly strict assumptions. Jobs are bundles of related tasks, and traditionally, when capital has been able to substitute for labor, it has boosted the productivity of the jobs that remain. But more to the point, we traditionally model substitution at marginal rates that decline at some point. Your post suggests the opposite: that substitutability is strictly increasing at all points. What conditions would give rise to this situation?
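To make the contrast concrete -- this is a standard textbook formalisation, not something from the post -- consider a CES production function

Y = A [ \alpha K^{(\sigma-1)/\sigma} + (1-\alpha) L^{(\sigma-1)/\sigma} ]^{\sigma/(\sigma-1)},

where \sigma is the elasticity of substitution between capital K and labour L. For any finite \sigma, labour's marginal product stays strictly positive no matter how much capital accumulates, and capital deepening tends to raise wages rather than erase them. The post's assumption corresponds to the limit \sigma \to \infty, where the function becomes Y = A[\alpha K + (1-\alpha) L]: capital and labour are perfect substitutes, so the wage is capped by the cost of the capital that can do the same work, and labour is employed only while it undercuts that cost.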
Second, as David Autor explains in a must-read paper on the topic, the demand for service jobs and other manual task-intensive work appears to be relatively income-elastic. This means that when aggregate incomes go up, demand for these jobs increases as well. When all is said and done, computerization should indirectly raise demand for manual task-intensive occupations because it increases societal income. But it seems you think this link will be broken.
To be clear, I think AGI will be disruptive and labor replacing, but for further work, you might want to think about the world between here and when AGI can do all jobs.
comment by Purplehermann · 2024-12-29T16:47:06.213Z · LW(p) · GW(p)
Humans seem way more energy- and resource-efficient in general; paying for top talent is the exception, not the rule -- usually it's not worth paying for top talent.
We're likely to see many areas where it's economically better to save on compute/energy by having a human do some of the work.
Split information workers from physical workers too; I expect them to have very different distributions of what the most useful configuration is.
This post ignores likely scientific advances in bioengineering and cyborg surgery; I expect humans to be way more efficient for tons of jobs once the standard is 180 IQ with a massive working memory.
comment by Russ Wilcox (russ-wilcox) · 2024-12-29T16:10:07.762Z · LW(p) · GW(p)
A great post.
Our main struggle ahead as a species is to ensure UBI occurs, and in a generous rather than meager way. This outcome is not at all certain, and we should be warned by your example of feudalism as an alternate path that is perhaps looming as more likely. Nevertheless, I agree we will see some degree of UBI, because the alternative is too painful.
One way you should add for those without capital to still rise post-AGI is celebrity status in sports, entertainment, and the arts. Consider that today humans still enjoy Broadway and still play chess, even though edited recordings and chess software can be better. Especially if UBI is plentiful, expect a surge in all sorts of non-economic status games. (Will we see a professional league of teams competing against freshly AI-designed escape rooms each week?)
The blog could be extended with greater consideration of future economic value-add from bits vs atoms. I enjoyed the comments discussing which investments today will hold up tomorrow. Can we agree that blue-collar jobs will be safer for longer? AGI is digital but must exist in a physical world. And any generally acceptable post-AGI economy must still provide for human food, clothing, shelter, and health. Post-AGI we can expect a long period where AI-assisted humans (whose time has zero marginal cost if we stipulate UBI) are still cost-superior to building robots, especially in a world of legacy infrastructure. Hence there should still be entrepreneurship and social mobility around providing physical-world products and services.
If states become AI-dominated, will those AIs possess nationalist instincts? If so, we could see great conflicts continue or intensify in a fight for resources -- perhaps especially during the race to AGI primacy in the next decade. But if not, perhaps we would see accommodation across unlikely borders and the unlocking of free trade for globally optimized efficiency.
These are nuances to suggest; overall, your thesis that social mobility could decline and that the next decade may prove especially fateful is pretty solid. Let's all do what we can while we can.
comment by Robert Höglund (robert-hoeglund) · 2024-12-29T15:24:14.260Z · LW(p) · GW(p)
I agree that this is a likely scenario. Very nice writeup. Post-AGI, the economy is completely resource- and capital-based, albeit I'm uncertain whether humans will be allowed to keep their fiat currency, stocks, land, factories, etc.
comment by Mitchell_Porter · 2025-01-05T09:06:03.795Z · LW(p) · GW(p)
If I take the view that AGI has existed since late 2022 (ChatGPT), I can interpret all this as a statement about the present: capital matters more than ever, during AGI.
After AGI is ASI, and at that point human capital is as valuable as ant capital or bear capital.
comment by Aleksey Bykhun (caffeinum) · 2024-12-29T16:51:49.729Z · LW(p) · GW(p)
interesting angle: given space travel, we'll have civilizations on other planets that can't communicate fast enough with the mainland. Presumably, social hierarchies would be vastly different, and much more fluid, there versus here on Earth.
comment by Aligned? · 2025-01-03T02:23:49.789Z · LW(p) · GW(p)
Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched.
And yet, ... society appears to be caring more about humans.
And yet, ... existing powers (specifically the state) seem even less effective and entrenched. Open Borders policies are clearly an act of desperation ... while these policies appear to have been broadly rejected by the electorate. The state only opened the borders because of its crumbling power base.
How is the contradiction resolved? How can technology seemingly reduce the relative importance of human vs non-human factors of production, while we observe a frantic effort to outbid others in the global human auction?
Global fertility collapse is greatly increasing the incentive for society to care about humans while at the same time making existing powers less effective and entrenched. Pursuing Open Borders policies demonstrates how desperate states are to avoid extinction: This is now an existential struggle. Overlooking the welfare of one's own population by swamping communities with immigrants is the last line of defense- it is the least bad option before you close up shop. It is a clear act of desperation and has only been tried because of the profound fertility crisis that is underway. Fertility collapse will only escalate during the 21st Century and could become even more severe as AGI approaches. At some point, a full Demographic Arms Race might begin.
States are highly motivated "to care" about humans, or at least to protect their self-interest, because state power has traditionally manifested in control of bricks-and-mortar reality and, with it, government services such as education and health care. Fertility collapse (and alternative learning technology) is creating large reductions in demand for bricks-and-mortar schools and, with it, in the power of the deep state.
The Darwinian struggle for national survival induced by extremely low total fertility rates thus creates a paradoxical result: the value of humans greatly increases while entrenched state power is greatly reduced.
comment by ConnorH · 2025-01-02T18:34:25.069Z · LW(p) · GW(p)
My main issue with this post is that it seems substantially concerned with humans losing the ability to achieve significant levels of wealth and power relative to each other, which I agree is important for avoiding a calcified ruling class (something that tends to go poorly for a society, historically). But this should be viewed as a transitional concern as we look towards building a society where radical wealth disparities (critically, defined here as the power of the wealthy to effectively incentivize the less wealthy to endure unwanted experiences or circumstances, e.g. long work weeks, poorer nutrition or education, long commutes) don't exist.
Human ambition isn't worth human poverty except exactly to the degree that it may be necessary to eliminate suffering.
comment by asksathvik · 2024-12-30T20:28:48.445Z · LW(p) · GW(p)
But why be so nihilistic about this? We can strive to conquer the solar system, the galaxy, and the universe. Strive to understand why the universe exists. Those seem like pretty important things to work on.
comment by Andreas Kirsch (andreas-kirsch) · 2024-12-30T17:00:41.176Z · LW(p) · GW(p)
Great post!
What are your thoughts on guild effects in the sense that some of the changes you have described might be prevented through social contracts? Actors and screenwriters have successfully gone on strike to preserve their leverage, and many other professions are regulated.
I think this might be a weak counter-argument, but nonetheless, it might distort the effects of AGI and slow down the concentration of capital.