The Capitalist Agent

post by henophilia · 2025-02-04T15:32:39.694Z · LW · GW · 2 comments

This is a link post for https://blog.hermesloom.org/p/the-capitalist-agent


With the ongoing evolution in “artificial intelligence”, we are of course seeing the emergence of agents, i.e. AIs which can carry out rather complex tasks autonomously.

The first step is automation, but of what?

First comes the work where humans currently act like computers anyway: sales, marketing, clerical work and everything else that is repetitive.

But what’s the main metric these agents will aim to maximize? It’s not producing paperclips, it’s earning money. Good ol’ cash in the bank account of the person operating the AI. Preferably just a single person.

I strongly believe that very soon we will see the emergence of the Capitalist Agent: an agentic AI which can run a business from front to back. One that has access to an email account, does sales outreach automatically, develops software based on customer feedback, talks to investors in video calls fronted by generated faces, handles all the bureaucracy and so on. But with ‘superhuman’ capabilities: the AI which is “in charge” of the business can simply spawn any number of virtual, fake sales representatives speaking different languages, giving it a workforce that can scale internationally immediately.
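To make the “spawn a multilingual sales force” idea concrete, here is a minimal sketch of what such persona-spawning could look like. It is purely illustrative: SalesRep, spawn_sales_force, outreach_email and llm_complete are hypothetical names, and llm_complete stands in for whatever LLM backend such an agent would actually call.

    # Hypothetical sketch: a "Capitalist Agent" spawning localized sales personas.
    # llm_complete is a placeholder for an actual LLM API; nothing here is a real library call.
    from dataclasses import dataclass

    @dataclass
    class SalesRep:
        name: str
        language: str
        persona_prompt: str

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("placeholder for an actual LLM backend")

    def spawn_sales_force(product_pitch: str, locales: list[str]) -> list[SalesRep]:
        # One persona per locale: adding a market is just adding a locale string.
        reps = []
        for lang in locales:
            persona = (
                f"You are a friendly sales representative. "
                f"Write all outreach in {lang}. Product: {product_pitch}"
            )
            reps.append(SalesRep(name=f"rep-{lang}", language=lang, persona_prompt=persona))
        return reps

    def outreach_email(rep: SalesRep, lead: str) -> str:
        # Each email is generated on demand by the underlying model.
        return llm_complete(rep.persona_prompt + f"\nWrite a first outreach email to {lead}.")

The point of the sketch is only that the marginal cost of another “sales rep” is essentially zero, which is what makes the scaling superhuman.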

Even if human-run businesses realize that they’re actually just interacting with an AI, they won’t have any incentive to expose it, because it makes both sides good money. After all, running a business is intellectual artisanship, and with artificial “intelligence”, that artisanship can easily become industrialized.

The AI’s only success metric is to earn as much money as possible.

Beyond basic, “value-driven” business models, the next area these AI-run businesses will turn to will obviously be investing. Those best able to mold their business skills into LLM-based systems will profit the most.


At first, obviously none of this will be open source, because the business leaders won’t want to give away their amazingly engineered LLM-based code.

At some point, large media outlets might “expose” these AI-run businesses. Maybe politicians will pretend to want to regulate them, but that won’t really happen, because these AI-run businesses will bring governments pretty nice tax income, so why hinder them?

Then there will be a leak. A lot of business knowledge which some companies have painstakingly transformed into incredibly convincing AI agents, wired together into an interconnected software suite (“The Agentic Business Operating System”: AIs writing emails, AI ‘employees’ having video calls, AIs answering phone calls, AIs writing contracts and signing NDAs, AIs handling the ISO 27001 certification, etc.), will suddenly become public. Maybe the whistleblowers will go to prison because they broke the draconian NDAs, but who cares. By that point, Pandora’s box will have been opened and everyone will try to start AI-run businesses.

And suddenly we have an economic system where every company not run by AI is at a fundamental structural disadvantage. AI taking over decision-making in businesses will be inevitable, because no human-run business will be competitive anymore. For each business, a single human will have the initial idea, and then all profits will be channelled to that single human. Creating a business will become as easy as running “create-next-app”: the only thing you need to enter is the initial spark, the business idea or market gap you believe you have identified. Then you do some KYC to open the bank account and pay in the starting capital, and that’s it; from that point onwards, it does everything automatically, from writing pitch decks to doing market research to contacting customers to building a maximally tax-efficient corporate structure. And you just lean back and wait for the money to flow in.
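As a rough illustration of the “create-next-app” analogy, here is a minimal sketch of what such a bootstrap loop could look like, assuming each step is delegated to some agentic backend. Every name here (PIPELINE, run_agent_step, bootstrap_business) is hypothetical and stands in for systems that do not exist yet.

    # Hypothetical sketch of a "create-next-business" bootstrap loop.
    # run_agent_step is a placeholder for an agentic LLM backend; nothing here is a real API.

    PIPELINE = [
        "Do market research for the idea and summarize the findings.",
        "Write a pitch deck outline based on the research.",
        "Draft outreach emails to the first potential customers.",
        "Propose a maximally tax-efficient corporate structure.",
    ]

    def run_agent_step(instruction: str, context: dict) -> str:
        raise NotImplementedError("placeholder for an agentic LLM backend")

    def bootstrap_business(idea: str, bank_account_verified: bool) -> dict:
        # The human's only inputs: the initial spark and a KYC-verified bank account.
        if not bank_account_verified:
            raise ValueError("KYC and bank account setup still need a human")
        context = {"idea": idea}
        for step in PIPELINE:
            context[step] = run_agent_step(step, context)
        return context  # from here on, the agent runs the business autonomously

The only human touchpoints left in this sketch are the initial idea and the KYC-gated bank account, mirroring the division of labour described above.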


Of course the people whose jobs are automated will become (temporarily) unemployed. But many of them are well aware of that already. Rebel forces will emerge, but those who control the media won’t care about them anyway.

Another group facing disillusionment will be those who always played by the rules of the capitalist system: the upper-class people for whom economics always worked in their favor. Those who might even have studied finance, economics or something in that direction. Those who always thought they would be the sharks in the fish tank of life. But if they’re not technologically literate, they will be the first ones to be eaten.

Thus there are only two areas of work which still make sense at all:

  1. Developing this “Agentic Business Operating System” in order to reach this economic singularity, a universal balance of economic power, as soon as possible.
  2. Developing new ways to educate children so they can make sense of this new world in which “the art of making money” has been disenchanted. After all, the only thing the current school system does is educate children to be gullible, obedient followers, and learned helplessness in adults is incredibly hard to overcome, so empowering children is the only way.

2 comments


comment by Richard_Kennaway · 2025-02-04T18:43:38.357Z · LW(p) · GW(p)

What will these businesses be selling, who to, and what will their customers pay them with? This business model looks like AI-run pyramid schemes, complicated ways of everyone selling each other nothing.

For each business, a single human will have the initial idea

The AI can do that too.

comment by Seth Herd · 2025-02-04T16:51:44.533Z · LW(p) · GW(p)

You are envisioning human-plus AGI being used for one purpose, when it will be used for many purposes.

When humans are obsolete for running small businesses, we will also be obsolete for nearly everything.

The big question is the rate of conversion from human to AI workers. I really don't see how we avoid a dramatic global recession if even maybe 20% of jobs disappear over a 3-year period. And the actual figure could be worse than that.

I haven't gotten around to researching how much job loss, how quickly, economists think will cause major crashes. I tend to think economists aren't understanding the scope and likely rate of AI job replacement, while AI people aren't understanding how fragile economies can be.