Keeping Capital is the Challenge

post by LTM · 2025-02-03T02:04:27.142Z · LW · GW · 1 comments

This is a link post for https://routecause.substack.com/p/keeping-capital-is-the-challenge

Contents

  What needs to go right
  Your labour will always have value, but that might not matter
  A humanless economy
  Capital investment as our way out
  Reality collapse
  Institutions and truth
  Separating the fool from his money
  What to do about it

Capital will likely matter more post-AGI. However, the capital you have been able to raise up to this point and may be able to acquire throughout the maturation of AGI will only retain its value if reliable information remains available.

Historically, investing has been grounded in the idea that stashing capital away in some enterprise, and accepting the risk associated, will let you grow your money over time. By accepting that you will not be able to access that capital for a period, and that it could wind up worthless, you will (hopefully) be left with more than you started with.

This idea has enjoyed immense success for centuries across Europe, North America, and other regions with developed financial systems. Famously, the past few decades have given extremely positive outcomes to just about anybody who engages sensibly and invests long-term. The practice of patiently enduring risk in hopes of a payout has become a core feature of many people’s lives, and is the way in which they (somewhat dubiously) expect to keep themselves afloat in their old age.

The slow upwards march of equity prices is one of the most robust observations in finance. The general public’s ability to capitalise on this by investing has traditionally been maintained by three fundamental features of public markets.

1 - The value of capital tends to rise: If you make an investment in a business, its value will probably drift upwards with time.

2 - You will have sufficient information to make good decisions: Knowing what to buy and when to buy it, as well as when to sell, is necessary to capture the increase in value.

3 - Your stake will be easy to defend: If someone steals your shares, denies you voting rights, transfers the assets of a company under their own name etc. the law will step in and stop them. Better yet - if you are investing in a sufficiently large company, an immense financial institution will even come and do this for you!

And so, we spend much of our lives selling our labour for capital, which we can then exchange for what we need when we can no longer sell our labour.

The emergence of powerful AI, the kind which could enormously accelerate human productivity or supplant it entirely, threatens this lifelong economic planning. Our economy rests on a foundation of businesses selling products to humans who in turn sell their labour to businesses. But with powerful AI, the number of humans in that loop can be expected to fall drastically.

What does this tell us about the effectiveness of investments? In the near-term future, it seems very likely that the value of capital will explode. As powerful AI allows more and more economic activity with declining need for human intellectual or physical labour, the total size of the economy can be expected to grow alongside the capital share. Feature 1 holds strong.

As powerful AI seeps into activity on equity markets, (profitable) mispricings to trade off will become even rarer than they are now. However, trading firms are already so much more sophisticated than your average retail investor that trying to trade against them is almost always pointless. The only things retail investors actually need to know to achieve good returns on average are that they should diversify (which ETFs such as SPY and VUKE make easy) and hold for the long term.

If anything, powerful AI will make these facts even clearer. The incredible returns on capital, as well as the impossibility of effectively trading against supercharged financial institutions, should encourage this buy-and-hold attitude. Feature 2 holds strong in spirit.
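The arithmetic behind buy-and-hold is simple compounding. The sketch below uses invented return figures purely for illustration, not forecasts:

```python
# Toy compound-growth sketch of buy-and-hold investing.
# The annual return figures are invented assumptions, not forecasts.

def final_value(principal: float, annual_return: float, years: int) -> float:
    """Value of a lump sum compounded once per year."""
    return principal * (1 + annual_return) ** years

principal = 10_000.0
for rate in (0.05, 0.07, 0.10):
    print(f"{rate:.0%}/year for 30 years: ${final_value(principal, rate, 30):,.0f}")
```

Even modest assumed rates multiply the stake several times over three decades, which is why "diversify and wait" has been all the information a retail investor needed.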

And then you have feature 3 - your ability to defend and monitor your capital stake. The degradation of our information environment is one of the few features of the post-AGI world which we are already feeling in earnest. Text has always been an easy medium to lie through, but creating responsive and well-articulated talking points on the fly has until now been the exclusive domain of expensive humans. Now that this is no longer the case, and mass, cheap, customised propaganda is possible, the way in which societies interact with information is being fundamentally challenged.

As the inability to verify information worsens, so too will your ability to prove ownership or verify that you are receiving all of the benefits your ownership entitles you to. The continued value of your capital stake in a world with powerful AI depends on the continuing quality of institutions and their ability to find and distribute accurate, trusted information.

While I think that this high quality information environment is possible, I do not think it is a certain feature of our world even a few years from now. The possibility that you will completely lose control of your investments should impact your investing decisions now, as well as any plans you may have for keeping capital for a post-AGI world.

What needs to go right

Of course, all of this is irrelevant if unaligned AGI is developed and given access to the nuclear codes. However unlikely this may be, I will assume that we are heading into a future with AI powerful enough to foundationally disrupt our economic relations with the state, companies, and one another, but which does not seek large-scale human depopulation.

This covers a pretty broad base of possible futures. We may have merely powerful AI which can automate the average call centre worker or paralegal with enough recordings of their work. We may have a Transformative AI scenario, where the most productive among us are accelerated enough to automate away everyone outside the top 1% (or 0.1%, or 0.01%) in a matter of years. Or we may experience full-blown aligned AGI turned ASI which quickly dominates the planet.

Either way, so long as humans have desires which we want to satisfy and labour which is too worthless to sell, we have a problem.

Your labour will always have value, but that might not matter

In this post I also assume a high level of automation that replaces human labour rather than augmenting it. The idea that AGI will make individual workers more productive (and thus increase their salaries as the economy grows) has some sympathisers (AGI Will Not Make Labor Worthless - by Maxwell Tabarrok, Artificial Intelligence, Automation and Work). The economic arguments behind this are strong, and I think we will see a rise in the value of labour in the short term. However, in the long term I think it misses the possibility of an economy based entirely on rents (and of the value of human labour falling below the cost of keeping someone alive).

Comparative advantage looms large here - the idea that even if someone or something can do everything you can do better, that still doesn’t mean there is no value in you doing it. This idea isn’t ridiculous, and I understand why economists find it convincing. Let’s say you have an AGI which can do all intellectual labour better than you can. It has also built a robot that can do everything physical better than you can, though by a smaller margin than the intellectual grand canyon you could never plausibly cross. There is clearly some value in you performing simple physical tasks which would otherwise require the robot to drop the more valuable thing it was doing and replace you there as well. However, there does come a point where robotics is sufficiently advanced that, while there is still a cost to the robot dropping what it was doing to sort out your problems, that cost is less than the cost of keeping you alive.

Humans require food, they require water, they require space. None of these things are free, and they may well become more valuable than your labour contribution.

To clarify, I think that economic theory does indicate that there is always value in your labour no matter how advanced the available alternatives, simply because you can have both as opposed to only one. But once the cost of keeping you alive exceeds that value, you become an economic drain.

Let’s call this the redundancy point - the point at which the value you add by freeing up more sophisticated systems no longer exceeds the cost of physically maintaining you. The point at which, not only can a system always do your job better, but at which you are an intrinsic value drain, and (likely) will be forever. Wages you might earn from your labour will not be enough to keep you alive without supplement from the proceeds of capital, or some payment from the state or other benefactor. If this point is passed, there is nothing about the most efficient allocation of resources for production of valuable goods that requires your survival.[1]
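The redundancy point lends itself to a toy calculation. The sketch below uses entirely invented numbers (the subsistence cost, output rate, and unit prices are assumptions for illustration): a human’s wage is bounded by the market value of what they produce, and as cheap automation collapses that price, the wage falls through the subsistence floor.

```python
# Toy model, with invented numbers, of the "redundancy point":
# the moment the market value of a human's labour falls below
# the cost of keeping that human alive.

def human_wage(units_per_hour: float, price_per_unit: float) -> float:
    """Wage is bounded by the market value of what the human produces."""
    return units_per_hour * price_per_unit

SUBSISTENCE = 3.0      # $/hour for food, water, space (assumed)
UNITS_PER_HOUR = 10.0  # human physical output, roughly fixed

# As automation floods the market, the price of those units collapses.
for price in (1.0, 0.5, 0.25, 0.1):
    wage = human_wage(UNITS_PER_HOUR, price)
    verdict = "self-sustaining" if wage >= SUBSISTENCE else "past the redundancy point"
    print(f"price ${price:.2f}/unit -> wage ${wage:.2f}/h ({verdict})")
```

Note that the wage never reaches zero, which is the comparative-advantage point; it just stops mattering once it dips below the maintenance cost.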

A humanless economy

But is all this even plausible? There is an argument that this entire economic transition fails around the redundancy point and we will enter a kind of steady state for technological progress, simply because productivity has to go somewhere. The creation of valuable goods requires that they are valuable to someone, and if humans are removed entirely, so too is the incentive for production.

To see why this might be the case (and ultimately why I still think humans are out), let’s go ahead and follow the money. Cash flows from person to person through businesses in the following way.

  1. Businesses paying humans for labor
  2. Humans spending wages on goods and services
  3. Those purchases funding other businesses and their workers

The business gives the cash to the human for their labour, and the human gives that cash to another business for goods and services, and that business in turn gives it to other humans for their labour, ad infinitum. The perfect abstraction, allowing us all to ignore the fact that people work for other people, with a little skimmed off at every step for rent seekers.

Now, imagine a world where human labour is worthless. What does the above even look like? If there are no humans pushing cash around, the profit motive no longer drives businesses to produce in the same way it did before. So, the argument goes, when we hit the redundancy point where humans can no longer exchange their labour for enough cash to sustain themselves, the economy would grind to a halt. We might imagine an economic stable point near the redundancy point: if capital tries to push past it, businesses wind up hoarding cash, and without cash-holding consumers to keep things moving, operations slow down until humans get enough cash to start them back up again.

However, there is another motivation for productivity among rent seeking owners of capital. To understand it, let’s follow the cash once more.

  1. Capital owners, equipped with AGI and physical infrastructure, hold the means of production
  2. Humans attempt to sell their labor, but receive sub-subsistence wages despite retaining some comparative advantage
  3. As humans become unable to sustain themselves, the economy transitions to a closed loop where businesses and their owners trade exclusively with each other[2]

This represents a fundamental break from traditional economic cycles, as human labor becomes fully decoupled from the flow of capital.

In this model, there is no stopping at the redundancy point. Businesses don’t need humans to have something to sell. They only ever needed humans to have money, regardless of where it came from. We typically imagine the flow of cash from businesses to humans in the form of humans selling their labour, but dividends or a UBI or charity or pretty much anything else would also work in the above model.
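As a minimal sketch of that closed loop (all quantities invented), the toy simulation below routes a fixed pool of cash around the economy. Whether humans take a wage share or not, the cash, and with it the incentive to produce, stays in circulation:

```python
# Toy cash-flow loop with invented numbers. Businesses pay some share
# of their cash to humans as wages; humans spend it all back on goods.
# If the wage share falls to zero, cash simply circulates among
# businesses trading with each other instead of draining away.

def simulate(wage_share: float, steps: int = 5) -> tuple[float, float]:
    """Return (business_cash, human_cash) after the given number of steps."""
    business_cash, human_cash = 100.0, 0.0
    for _ in range(steps):
        wages = business_cash * wage_share
        business_cash -= wages
        human_cash += wages
        # Humans spend everything on goods, and businesses also trade
        # with one another, so all cash ends the step in business hands.
        business_cash += human_cash
        human_cash = 0.0
    return business_cash, human_cash

print(simulate(wage_share=0.6))  # humans in the loop
print(simulate(wage_share=0.0))  # humans fully decoupled; cash still flows
```

The total pool is conserved either way; what changes is only whether humans ever touch it, which is the sense in which businesses never needed humans specifically, just holders of money.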

Capital investment as our way out

So, as a human, you find yourself with expenses in the form of food, water, internet, rent (imagine post-AGI rent - that won’t be fun!). But you can no longer sell your labour at a high enough price to cover those expenses. How do you economically sustain yourself in a world where you aren’t worth sustaining?

Fortunately, we don’t have to speculate all that much about how to handle this situation: there are already hundreds of millions of people navigating it quite successfully. The unwell, the elderly, and anyone else in a similar labour market position also have expenses. But due to circumstances outside their control they can no longer sell their labour for enough to be able to meet them.

There are two main ways such people keep the books balanced:

  1. Direct payments from governments
  2. Rents from capital investment, either directly or from investment vehicles such as pension funds

Implementing a UBI is a perfectly reasonable approach, and the one I expect is most likely to work. But creating a set of direct payments from the state is by no means guaranteed to succeed even if AI stays under human control - corporate power may outstrip government power, your government may not capture the gains from AI successfully enough to fund the UBI, we remain exposed to arbitrarily high wealth inequality, and so on. There is even the prospect that the mere presence of AI in the economy will remove the state’s concern for its citizens’ welfare, removing the incentive [LW · GW] to institute a UBI at all.

But if the world’s governments don’t step in to save you, there is still a way for you to save yourself. Pile up capital, and hope that your investments pay enough to keep you alive indefinitely. I think it is extremely likely that the value of capital will explode. Nvidia, Microsoft, Google, and any other publicly traded proxy for AI progress has shot up in these past couple of years. While the easy availability of intelligence post-AGI may lead to a highly competitive market reducing returns for owners of AGI labs, it is unlikely to have the same impact on chip manufacturers, rare earth metal miners, and power plant operators. Anyone who owns anything can expect that thing to get more valuable as better ways to utilise it economically are found.

Sure, maybe you won’t be able to day trade profitably while all other market actors are working with billions of dollars of highly optimised compute. But a weaker version of that situation is already the reality, and people buy anyway. Some hedge funds are known to use satellite photography to guess customer footfall, others to phone polling stations obsessively throughout an election to get that all-important 5-minute head start on the competition. You are already trading against people who get it right enough of the time to build highly lucrative careers out of it, but that doesn’t matter. Ultimately, the only information you need is that you should buy. You already have that, and have for a very long time. Market makers and hedge funds having better information will probably make your returns better on average in the form of tighter spreads and more accurate pricing.

(There is a pedantic but important point to be made here about the opportunities available on public versus private markets. Private equity has historically seen higher returns than publicly listed securities for reasons that are aggressively debated to this day. Reasonable arguments have been made that it is due to the illiquidity premium (rich people can afford to tie up money for longer), higher risk tolerance, and the high costs associated with listing a security publicly. Either way, it is plausible that the best opportunities may not be available to ordinary investors.)

Let’s assume that you take this message to heart. You join a quant firm or big law, or even found a successful start up. For your efforts, you are rewarded with a large capital stake, large enough that its rents alone can sustain you indefinitely while you enjoy the fruits of dazzling intelligence. We assume that the stake itself rockets up in value, and that the world looks enough like it does today for all this to make sense (no AI apocalypse, or collapse of the financial system, or global communism). I claim that if the information environment degrades sufficiently, we may be able to maintain states, companies, and a general international order while individuals nevertheless lose access to and ownership of any capital they might have acquired.

Reality collapse

For most of human history, it has been difficult to know whether someone is telling you a lie or the truth. It is about as easy to tell a story which accurately reflects the past as it is to tell a preposterous lie and make it sound convincing.

We have some social and psychological innovations for handling this. For example, if a person gives you information that they would clearly benefit from you believing (e.g. a story proving their immense physical strength, or a guarantee that they did not have an affair), we have cause for scepticism. But fundamentally, it is no harder to tell someone it is raining outside when it is than when it isn’t. Or that the Earth is flat when it’s actually round. Or just about anything else which they themselves do not have direct experience of or strong priors about.

Reputation was our first line of defence. You trust a media outlet until it can be shown to have lied, and you never trust it thereafter. This mostly works pretty well (although there might be other complex social dynamics going on - The Media Very Rarely Lies - by Scott Alexander).

Then, we got the photograph. Suddenly, it was a lot harder to tell a lie than tell the truth. Getting an image representative of the real world only required sitting with a photosensitive plate for a while - creating a realistic fiction required vast amounts of careful, skilled effort. Nevertheless, for companies, governments, or a dedicated con-artist it was sometimes worthwhile to doctor photos.

Audio recordings of high enough quality were also immensely difficult to fake. To the point that banks would accept it as verification to access your account (and some still might: Voice Verification Security Feature | CIBC).

And then video, the real kicker. An information source so hard to fake that courts accept it as evidence near universally; creating reliable fakes in the form of fictional video requires an entire specialised industry and can cost millions or tens of millions of dollars per hour of footage. If you see a video on Twitter/X, then it is most likely real. The text that goes along with it providing context is still trivial to fake, but the video itself is probably real.

Then came deepfakes. First, faking an image dropped in cost from hundreds of dollars of editor time to a few cents, at a significant cost in quality. Then, the quality improved and improved, as AI does. The ability to edit a video to put words in someone’s mouth was no longer the purview of governments and media empires, but could be achieved on a nice enough laptop.

Video is still mostly reliable. Perhaps not what we once had, now that political speeches and interviews can have their content changed in a way which is difficult to detect at first glance. Nevertheless, if I showed you a video of an important political figure being assassinated, or a celebrity arrested on the street, or political demonstrations in your hometown, you would have good cause to believe them. But how much longer will we have this?

Sora isn’t that good yet, but remember - this is the worst it will ever be. Image generation has crept up quietly, and while not quite photorealistic, I see no reason why such capabilities won’t wind up in the hands of the public. There are ways to detect AI-generated images and video, but fake text and audio are getting harder and harder to detect.[3]

Even if fakes can be detected, if you are making an API call to a detection service and your connection has a man in the middle, you may never know. Similarly, satellite photography is hard to fake, but only to the satellite’s operator. With sufficient disruption to the way we transfer information - a combination of AGI-enabled cyberattacks and the ability to fake an entire world’s worth of truth - ordinary people could lose their grip on reality beyond what they can see.

Institutions and truth

With every connection compromised, every digital artifact under question, it becomes difficult to imagine anything digital being reliable ever again.

But what about institutions?! In this kind of environment of advanced reality collapse, accurate and consistent news could be immensely valuable. It may be easy to fake digital records, but it will get no easier to carve messages directly into stone and deliver them to your loyal subscribers.

For centuries, we endured an information environment where speech and writing, the principal mediums of communication, permitted lies as easily as they permitted truth. There were problems - myths, religions, fickle lies tearing through continents where they could. But ultimately we did alright, largely because of the establishment of institutions whose chief pride and reason for existing was the faith their audience had in them for their reliability and commitment to the truth.

It is entirely possible for every digital record to become completely falsifiable and still have reliable information spread with enough effort. Imagine a new media organisation entirely based around satellite photography on a mass scale - not enough to tell you which celebrities are getting married, but probably enough to tell you if there is an active war on the other side of the world or not. This organisation collates their findings, carves the headlines onto stone blocks, and ships them to designated locations around the world for their subscribers to enter and view. We would likely never have to fall back on something so extreme, but as a proof of concept I suspect that this kind of system would continue to function even if internet-based communication collapsed entirely.

(There is a failure mode where a foreign adversary sets up multiple identical stores in your small sleepy town, and ships similar but not identical blocks between them. Which is the real one? The people working there probably don’t know, let alone the customers.)

In my mind, any plan to survive the development of powerful AI through capital investment inherently hinges on three assumptions - the continued existence of humanity, powerful AI remaining under human control, and robust institutions for conveying information. If any of these assumptions falters, your on-paper investments quickly become meaningless.

Estimating the likelihood of such reality collapse hinges on a lot of factors. It matters who deploys the AI, as governments are more likely to directly profit from mass propaganda inside and outside their borders than massive international corporate entities. But equally, if open sourcing of model weights is a major part of AI proliferation, millions of people all hurling semi-plausible information into an undifferentiated internet would likely have a similar effect. The offence-defence balance between fake creation and detection matters greatly to whether these fakes impact all of society, or only the least technically sophisticated who are unaware of whatever verification techniques are developed. I have heard it proposed that we may even wind up in a MAD situation, with governments agreeing not to tamper with one another’s information environments, backed up by the threat of irreversible retaliation.

(MAD preventing disruption of information environments seems to suffer from the lack of bright lines. Unlike the clear divide between a nuclear explosion and a lack of one, it is unclear how much propaganda above the current level would be ‘too much’. I don’t find this stable outcome very likely, but I am aware of enough people who think it is plausible for it to be worth including.)

But why do I think the failure of reliable communication threatens investments so significantly?

Separating the fool from his money

(To be clear, the point of this section is not that more sophisticated actors will swoop in and take your stake at a bad price. The solution to that kind of issue is simply not to trade. I claim that without reliable information, you may have your stake taken legally or otherwise and be completely denied recourse without an impact on the monopoly of power.)

Imagine that you are a very online person. You interact with your colleagues sometimes, your family even less, but mostly you spend your time with funny people from the internet, and your attitudes to the world are largely formed by them. One day you click on a phishing email, or download a dodgy app, or otherwise get your local network infected in any of the thousands of ways people do. This particular infection does not take your data, nor does it make your smart fridge start mining bitcoin. Instead, it redirects every search, every scroll, every depraved scan of the world’s great archives to a server running a range of advanced generative models creating the world as you seek it.

You click on a YouTube video, and it generates it. You click on the news, and it appears before your eyes plausibly. You scroll through Instagram and it makes what you expect to see. You head over to Robinhood or Trading212 or whatever you chose to use, and it tells you that your stake in Nvidia has gone to zero.

This seems … odd? You head over to the news app of your choice, and lo and behold Nvidia manufacturing has experienced a major technical fault. The giant is finished! You go to YouTube and you find video after video of your most trusted parasocial friends explaining exactly what went wrong, and being wildly incorrect in their own characteristic styles. That’s a big chunk of your net worth gone seemingly overnight, with no warning or cause. Ouch!

You mention this strange happening to your colleagues the next day. Some agree - it does look like Nvidia has crashed. Others seem confused - don’t you know their earnings were better than ever and they’re up 3000%? You search on your work computer and find that seemingly nobody knows what happened with Nvidia. You phone the police and someone picks up who seems as clueless as you are (though you suspect they are not who they say they are). Do you hold? Do you sell? Can you even sell?

While what I have just described is a pretty blunt failure of our financial system, I think it communicates the real difficulties in defending your stake without reliable information. It seems more likely that the separation of the uninformed from their capital stake would take the form of subtle legal wrangling which relevant parties were unable to properly assess.

For example, your shares in the newly-listed OpenAI get acquired at a surprisingly low price by a company which the previous majority shareholders are invested in. Or Google hires a large number of important investors and starts paying them large salaries instead of dividends.

This kind of misbehaviour is covered by extensive minority shareholder protection legislation, so we can confidently expect that the companies with the most immediate and advanced access to intelligence will find cleverer ways of disenfranchising the public if at all possible. We have a solution to this problem of institutions harming individuals in the form of forcing all entities to engage with securities on the same terms. The hope is that the might of financial institutions is wielded not just on their behalf but on the behalf of the smaller investors they have to stand beside. However, without any ability to accurately discern what your stake is and if you still have it, how exactly are you going to sue or know if someone else is suing on your behalf?[4]

As it is the majority of many people’s investment strategy, I have focused on tradable securities. However, I think these arguments also apply to other forms of wealth storage and accumulation. Outside equities markets, I think there are few investment opportunities which can be expected to reliably grow in the coming years. Holding onto cash suffers from the same issues as equity stakes (without communication, how can you keep track of your cash?) but will also likely suffer intense inflationary pressure under the monetary conditions of powerful AI. Commodities, while they may see a short-term price spike from increased economic activity, will likely see prices drop in the longer term as detection, extraction, and economic application all become more efficient with the application of limitless intelligence. The list goes on in alternative asset classes, but everywhere the point stands - you cannot defend your stake without reliable communication.

What to do about it

You should still invest! In the short-ish term, there is likely to be an explosion in the value of capital investment, including in public equity markets. The corresponding decline in the value of your labour means that you may need to ride this wave to keep yourself healthy in the coming years, if not alive. But I think most people reading this should not plan on holding much capital after the creation of AGI; if your long-term plan involves acquiring and holding large amounts of capital, I strongly recommend you rethink that strategy.

You should not rely on equity investments in the very long term. Power in the post-AGI world, where it is wielded by humans, seems very likely to look like AGI listening to you, not a number in a bank account. I know this will sound astoundingly unhelpful, but I think it is true - there is no path other than alignment.[5]

The inability of the wealthy to let their capital holdings carry them to safety might actually be good for the development of safe AI in the short term. If people with power today believe that they will be disenfranchised in the same way as everyone else by the rise of powerful AI, they may be more inclined to use their resources to steer this progression in humanity’s favour. 

If it all goes well and we make it to the other side of the singularity, you won’t make a down payment on a star with Nvidia stock. You will make it with the affection of whatever entities we have raised, and I doubt our creations will care much about the wealth you gave a life to build.

  1. ^

    Some humans may only want to pay for things done by other humans. An industry in handmade goods exists perfectly well in our world of machined excess, and it is plausible that this continues. But if at each time step every human gives some of their money to immense corporate entities and some to human artisans, there will come a time when they have no money left to pay those artisans unless they are subsidised by those corporate entities.

  2. ^

    The owners of these enterprises may not be able to use one another’s owned productivity fully. But if automation is cheaper than human employees, the economic incentive to remove humans from the loop would kill any competitor which keeps returning cash to the hands of consumers.

    Even if taking humans out of the loop does remove the incentive to produce, the economic incentive to be the one who produces seems overpowering.

  3. ^

    The detection of these fakes is also advancing, but there is little sign yet of whether generation or detection will scale more favourably. As these models get larger and more advanced, will it get progressively easier to generate fakes or progressively easier to detect them? I have no strong impression about the direction of the offence-defence balance in this technology.

  4. ^

    The argument that you will be financially disenfranchised by a worsening ability to access information extends to more of your stake in the world than your capital one. Along with your capital stake, your political and social stake in the world will be difficult to defend without reliable communication. Without knowing who the candidates are and what professional economists and political analysts think of their policies, the informed decisions which democracy relies upon seem unlikely.

  5. ^

    Difficulty retaining capital has the slight upside that arbitrarily unequal wealth distribution seems more unlikely. The power is shifted from those who have capital today to those who control AGI labs, but that somehow feels more likely to yield very long term equitable outcomes. It would feel like a great cosmic injustice if the boomers, of all people, were the only ones who got to live forever.

1 comment

Comments sorted by top scores.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-02-03T18:45:42.309Z · LW(p) · GW(p)

How much of their original capital did the French nobility retain at the end of the French revolution?

How much capital (value of territorial extent) do chimpanzees retain now as compared to 20k years ago?