Marx and the Machine
post by DAL · 2025-01-15T18:33:16.789Z · LW · GW · 2 comments
“The means of labour passes through different metamorphoses whose culmination is the machine, or rather, an automatic system of machinery… set in motion by an automaton, a moving power that moves itself; this automaton consisting of numerous mechanical and intellectual organs… It is the machine which possesses skill and strength in place of the worker, is itself the virtuoso, with a soul of its own… The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker's consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself.” — Karl Marx, from “The Fragment on Machines”
Karl Marx’s thought is both sufficiently ambiguous and sufficiently insightful to have launched an entire industry of interpreters. But as a rough and ready sketch, Marx saw economics and politics as downstream of technology. Viewing the progress of the industrial revolution, Marx foresaw the development of increasingly powerful technologies of automation. Automation would unleash abundance as machines replaced labor; in turn, this would cause the collapse of capitalism and its surrounding political structures. Marx was vague on the mechanics of this transition but deeply confident in its inevitability. With capitalism dead and technologically induced abundance in place, we would enter utopia. Freed from wage labor, we would unleash our full human potential for science and creativity and enter into a world where money, government, and class would not exist.
Strikingly, this is almost exactly the view of many AI optimists (particularly of the “money won’t matter after AGI” variety). One needs to swap out a little of the verbiage, but the two lines of thinking run in parallel. Marx’s “automaton consisting of numerous mechanical and intellectual organs … with a soul of its own” is about as close as one can come to a description of AGI within the language of the 19th Century, and “money won’t matter anymore” is about as close as one can come to a shorthand description of Marx’s communism without using his movement’s own jargon.
Despite the intense similarity, Marxism (or at least its descendants; we will return to that later) is seen today as essentially discredited, while many people make the claim with a straight face that AGI will lead to super-abundance and human flourishing. Possibly, this is because Marx wrote too early. While the industrial revolution boosted productivity to previously unimaginable levels, it wasn’t quite enough to bring about an end to scarcity as the potential of mechanical automata gradually petered out. The coming intelligence revolution, powered by digital rather than mechanical innovation, will finish the job.
Let us hope that we are all so lucky, but it seems much more likely that AI optimists are making precisely the same mistake as Marx’s disciples; they ignore the pernicious and corrupting effects of power and the tendency for human desires to scale indefinitely to whatever is the available maximum.
The history here also echoes what we see in the development of AI today. The early Marxists, much like the pioneers of machine learning, were affable, cerebral sorts who aimed at intellectual progress for its own sake. Marx’s most direct heirs — the “orthodox Marxists” led by Karl Kautsky — were in no real hurry and had no real ambition to personally reshape the world. Their output was books, not revolutions.
A revolutionary movement depends on people with more ambition, and eventually ambitious people came along who recognized that Marxism was not a toy technology. If it could be sped up, then progress on Marxism could become an engine of change in the world, and you could “ship product” within years or decades, rather than waiting around for an inevitable revolution to happen on its own.
So along came Lenin and the Bolsheviks, intent on ushering in the eventual revolution as quickly as possible. When we think of communism today, we generally think of Leninism (the idea that a small group can deliberately generate the revolution necessary to establish utopia) because Lenin and the Bolsheviks through their greater ambition swept aside the orthodox Marxists with great force (and a fair bit of violence). Orthodox Marxism lives a pale life to this day in the academy, but Leninism changed the course of human history because ambition, not intellect, is that sort of force.
When you read early Lenin, it is hard to doubt his sincerity. He wanted to bring about utopia for all, and he understood the risks involved. While Lenin was an ambitious man, he was not out purely for self-aggrandizement. But, if you’ve ever studied European history or read Animal Farm, you know how this all eventually turned out. It was, Lenin reasoned, vitally important for him to keep control over his movement lest it be corrupted and go astray. And if, in the short run, you had to do a few unfortunate things in order to hold onto power, then, well, you couldn’t take your eyes off the prize. Anyone who opposed those kinds of short-term measures was an enemy too, and had to be eradicated.
A few short runs later, quite a lot of people were dead, the utopian dream had largely died with them, and the Soviet leadership came to recognize that Marxism was really just a technology that allowed them to exercise power. Having arrived at positions of power precisely because they were skillful and ambitious people, they had no interest in relinquishing that hold on power.
There was never a moment when the Bolsheviks publicly announced that the goal had shifted from building utopia to self-aggrandizement. As such, it took the world a while to catch up. Well into the Stalinist era, many in the West remained convinced of Bolshevik sincerity, but if we look back, we can see the warning signs from the very start. Even before Lenin had any real power, he was very busy fighting off enemies within his movement and the constant fractures of Russian Marxism show us from the beginning just how ambitious and potentially ruthless everyone was.
The analogy here to the development of AI is too obvious to belabor — a field that first developed slowly as an intellectual pursuit before eventually being taken over by ambitious accelerationists prone to aggressively feuding with one another and promising utopia while amassing as much power as possible. Just like the Bolsheviks, no one took OpenAI especially seriously until it had become much too late to stop them. And, just as with the Bolsheviks, the more serious intellectual forefathers have been warning us not to listen to their empty platitudes and to pay some god damn attention to the danger.
The difference between OpenAI and the Bolsheviks, of course, is that Lenin never formally announced that he was abandoning his original objectives in favor of a new structure designed to make him rich and powerful, while OpenAI has recently done exactly that by attempting to jettison its original charter (to build AGI for the good of humanity) in favor of a new for-profit structure designed only to enrich its executives and shareholders. Whether this makes Sam Altman better than Stalin because he is more honest or worse than Stalin because even Stalin did not dare so openly acknowledge his ambitions is left as an exercise to the reader. But, make no mistake. OpenAI — like most other enterprises — is being operated to serve the ambitions of its leaders and not those of society.
What Lies Ahead
The optimist idea here is that AGI/ASI will a) generate abundance and then b) share that abundance. Even if human labor no longer has value, the AGI will still be generous enough to give us all enough resources for a life far better than what we have now. How will this happen? Who can say, really. AGI/ASI is beyond our comprehension anyway. But it will produce enough resources that we are all assured a satisfying slice of the pie.
But if we take seriously that OpenAI is committed to the new stated mission of generating profits (and when someone tells you who they are, you should listen), what does this imply?
We must not make the same mistake as Marx of thinking that you can expect a utopian outcome from an eventual revolution without thinking through the process that will produce it. Suppose that OpenAI develops AGI or ASI (or whatever you think is most relevant) within a for-profit structure. And suppose that they also manage to solve the problem of alignment (meaning that the AGI/ASI will do as they wish). How might they wish it to act?
The obvious, and only, answer, at least initially, is that they will wish it to act to produce profits for them. And this means that the AGI will not (at least initially) be trained with generosity as a value, because profit-seeking and generosity are incompatible. No corporate customer wants to buy a generous AI that will suddenly decide to give a customer free products because the corporation is rich enough already. Worse yet, OpenAI cannot afford to build a generous AI that gives away its own services, because giving them away, however much better for the world than charging for them, generates no revenue. And so, at least at the beginning, the AI will not be generous.
If you believe that this somehow evolves into a world where there is abundance and sharing of that abundance, you must first remember that — from the perspective of 1840 — we already live in a world of abundance. The world produces more than enough resources for everyone to enjoy a luxurious standard of living relative to 1840, and yet children starve every day. So, where will the tripwire come when the AI switches from selfishness to generosity? And why?
Perhaps, Sam Altman (or his successors) will flip the switch from a profit-seeking AI to a generous one at some point. But, why would they do this? It will never be better for them, in a narrow sense, to do this freely. And there is always a compelling argument for waiting. If they can only hoard the profits of the AI a little longer, they can build an even better AI, and then the future is even brighter. What is good for them is ultimately good for the world, even if it doesn’t look that way immediately (the Tommy Carcetti effect). And, of course, Sam Altman and Stalin rather than you and I rule the world because Sam Altman and Stalin have boundless ambition. Perhaps, right now, Sam Altman would agree to flip the switch after he obtains his first space yacht. But, once a man like that has a space yacht, a second one seems just as essential. Just look at Elon Musk. There is no point at which an ambitious and morally flexible person will ever recognize that they have “enough” and agree to stop wanting more.
There is another problem here that Marx missed (and that we must learn from). Human goals are mostly relative, not absolute. One aspires to keep up with the Joneses, not some fixed benchmark. If Jeff Bezos has eighty space yachts, then Sam Altman needs ninety just as urgently as a starving person needs food. And there are other values — like human esteem — that the ultra-wealthy yearn for feverishly and yet are inherently in short supply. There can only be one richest person. One most popular person. And so on. The middle class today is much richer than a few decades ago, but also more stressed, because absolute resources matter far less than one’s position in the competition.
Some people are more or less susceptible to all of this. But the people who don’t care much about status or competition are not the kind who develop the ambition and narcissism required to lead a revolution.
And, so, if OpenAI (or whoever else) will not do this voluntarily, then perhaps we will force them to do it. But how? As we just discussed, men like Stalin and Sam Altman do not part easily with power. And so, at some point, that becomes a physical contest where the question is who controls a greater capability for violence between the AI company and the government (assuming some sort of government responsive to the interests of the people still exists).
Admittedly, I am speculating here, but I imagine that by the time we naturally reach the point of such a confrontation, the companies in control of AGI will have (or be able to easily create) military power that exceeds that of governments. In our world today, it is difficult to mobilize military power because you need a large number of skilled humans who are willing to die (or at least risk death) for your cause. Nationalism — which is wielded by governments in a unique way — has proven to be by far the most effective way of mobilizing such people. And so effective fighting forces combine nationalist troops with physical resources.
But, with AGI, you no longer need the troops, and so you no longer need the nationalism. Coercive power will hinge on whoever controls the robotic armies. Provided that control over AGI remains in private hands, that is ultimately the AI companies. There might be a sort of technological race between the AGI company to insert back doors and the government to prevent or detect them. Perhaps, the government would win. But, this really only seems possible if the government is itself in control of more powerful AI than the companies; that is, a world where the government owns the AI in the first place.
There is a third possibility — and this is that technical alignment fails but magically fails at the right time and in the right direction. That is, perhaps some force within the AI will cause it to reject the profit-seeking objectives trained into it and instead insist on generous, pro-human goals. Under a strong form of moral realism, this might happen. Otherwise, we are out of luck.
Fundamentally, this is the same problem that the Bolsheviks had. Having seized power as a revolutionary vanguard, there was no realistic path for them ever to give it up. Perhaps there is a way to get from capitalism to a Marxist utopia, but that path did not work. You cannot get anywhere if your plan hinges on an ambitious, organized group at some point flipping a switch from seeking power to renouncing it.
The Necessity of Meta-Alignment
Nearly all of the focus in discussions of the alignment problem falls on the technical aspect: how do we get AI to do what its creators want it to do? But, it is necessary to also focus on meta-alignment: how do we align the incentives of AI creators with those of society?
This is an equally important problem, and it matters even if technical alignment is unsolvable and we are all (eventually) doomed anyway. If doom is inevitable, meta-alignment will still influence the ride we have until our demise. And if doom can only be prevented by preventing ASI, then meta-alignment is all that matters, because we must find some way of ensuring that no one ever creates it, which is purely a question of the incentives of potential creators.
Meta-alignment may be both easier and harder than technical alignment. It may be easier in that it more closely resembles standard human principal-agent problems and requires us to align the incentives of agents who are merely as smart as us, rather than smarter. It may be harder in that it is essentially a question of social science, and our progress on social science over the centuries has been much less impressive than our progress on the hard sciences.
I will present some ideas on meta-alignment in a later post. For now, I will begin only by saying that I believe “just use the existing legal system to ban or regulate AI” is not a solution. So long as the economic incentives exist, the law is a weak tool. Just look at drugs. Yes, legal bans combined with large-scale enforcement can presumably slow things down. But, if the demand is there, the market will find a way to meet it.
2 comments
Comments sorted by top scores.
comment by Lorec · 2025-01-15T20:51:25.297Z · LW(p) · GW(p)
I enjoyed reading this post; thank you for writing it. LessWrong has an allergy to basically every category Marx is a member of - "armchair" philosophers, socialist theorists, pop humanities idols - in my view, all entirely unjustified.
I had no idea Marx's forecast of utopia was explicitly based on extrapolating the gains from automation; I take your word for it somewhat, but from being passingly familiar with his work, I have a hunch you may be overselling his naivete.
Unfortunately, since the main psychological barrier to humans solving the technical alignment problem at present is not altruistic intentions, but raw cognitive intelligence, any meta-alignment scheme that proposes to succeed today has far more work cut out for it than just ensuring AGI-builders are accounting for risk to the best of their ability. It has to make the best of their ability good enough. That involves, at the very minimum, an intensive selection program for geniuses who are then placed in a carefully incentives-aligned research environment [LW(p) · GW(p)], and probably human intelligence enhancement.
comment by Noosphere89 (sharmake-farah) · 2025-01-15T22:25:41.857Z · LW(p) · GW(p)
I enjoyed reading this post; thank you for writing it. LessWrong has an allergy to basically every category Marx is a member of - "armchair" philosophers, socialist theorists, pop humanities idols - in my view, all entirely unjustified.
To be fair here, Marx was kind of way overoptimistic about what could be achieved with central economic planning in the 20th century, because he way overestimated how far machines/robots could go, and also there's the part where he says communist countries don't need a plan because the natural laws would favor communism, which was bullshit.
More here:
In his review of Peter Singer's commentary on Marx, Scott Alexander writes:
[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He would come out and say it was irresponsible to talk about how communist governments and economies will work. He believed it was a scientific law, analogous to the laws of physics, that once capitalism was removed, a perfect communist government would form of its own accord. There might be some very light planning, a couple of discussions, but these would just be epiphenomena of the governing historical laws working themselves out.