The two-tiered society
post by Roman Leventov · 2024-05-13T07:53:25.438Z
On AI and Jobs: How to Make AI Work With Us, Not Against Us With Daron Acemoglu
Here is Claude.ai's summary of Daron Acemoglu's main ideas from the podcast:
- Historically, major productivity improvements from new technologies haven't always translated into benefits for workers. It depends on how the technologies are used and who controls them.
- There are concerns that AI could further exacerbate inequality and create a "two-tiered society" if the benefits accrue mainly to a small group of capital owners and highly skilled workers. Widespread prosperity is not automatic.
- We should aim for "machine usefulness" - AI that augments and complements human capabilities - rather than just "machine intelligence" focused on automating human tasks. But the latter is easier to monetize.
- Achieving an AI future that benefits workers broadly will require changing incentives - through the tax system, giving workers more voice, government funding for human-complementary AI research, reforming business models, and effective regulation.
- Some amount of "steering" of AI development through policy is needed to avoid suboptimal social outcomes, but this needs to be balanced against maintaining innovation and progress. Regulation should be a "soft touch."
- An "AI disruption reduction act," akin to climate legislation, may be needed to massively shift incentives in a more pro-worker, pro-social direction before AI further entrenches a problematic trajectory. But some temporary slowdown in AI progress as a result may be an acceptable tradeoff.
The prospect of a two-tiered socioeconomic order looks very realistic to me, and it is scary.
On the one hand, this order won't be as static as feudal or caste systems: politicians and technologists will surely create (at least formal) systems for vertical mobility between the lower tier (people who just live off UBI) and the higher tier (politicians, business leaders, chief scientists, and owners of capital and land).
On the other hand, in feudal and caste systems people in all tiers had a role in the societal division of labour from which they could derive a sense of usefulness, purpose, and self-respect. It will be more challenging for the "have-nots" in the future AI world. Not only will their labour not be valued by the economy, their family roles will also be eroded: teacher for their own kids (why would kids respect them when AI is vastly more intelligent, empathetic, ethical, etc.?), lover for their spouse (cf. VR sex), bread-winner (everyone is on UBI, including their spouse and kids). And this assumes they will have a family at all, which is increasingly rare, whereas in feudal and caste societies most people were married and had kids.
Vertical mobility institutions will likely grow dysfunctional as well, akin to the education systems in East Asia, where the youth are deprived of childhood and early adulthood in the cutthroat competition for a limited number of cushy positions at corporations, or to academic tenure in the US. If the first 30 years of people's lives are a battle for a spot in the "higher tier" of society, it will be very challenging for them to switch to a totally different mindset of meditative, non-competitive living: arts, crafts, gardening, etc.
Although many people point out the dysfunctionality of positional-power institutions like current academia, governments, and corporations, the alternative "libertarian" spin on social mobility in the age of AI is not obviously better: if AI enables very high leverage in business, social, or media entrepreneurship, the resulting frenzy may be too intense for the entrepreneurs, their customers, or both.
Response approaches
I'm not aware of anything that looks to me like a comprehensive and feasible alternative vision to the two-tiered society (if you know of one, please let me know).
Daron Acemoglu proposes five economic and political responses that sound like they could at least help steer the economy and society toward some alternative place, without knowing what that place is (which in itself is not a problem: on the contrary, treating any particular alternative vision as a likely target would be a gross mistake and a disregard for unknown unknowns):
- Tax reforms to favour employment rather than automation
- Foster labour voice for a better power balance within companies
- A federal agency that provides seed funding and subsidies for human-complementary AI technologies and business models. Subsidies are needed because "machine usefulness" is not as competitive as "machine intelligence/automation", at least within the current financial system and economic fabric.
- Reforming business models, e.g., a "digital ad tax" that should change the incentives of media platforms such as Meta or TikTok and improve mental health
- Effective but soft-touch regulation to steer AI development (cf. the proposed "AI disruption reduction act")
This all sounds good to me, but it is not enough. We also need other political responses (cf. The Collective Intelligence Project), and, at a minimum, new design ideas in the methodology of human-AI cooperation, social engineering (cf. Game B), and psychology.
If you know of interesting research in any of these directions, or of other directions I missed that could help reach a non-tiered society, please comment.
9 comments
comment by Viliam · 2024-05-13T14:35:58.459Z
Tax reforms to favour employment sound like the creation of bullshit jobs. It would depend on the exact details, but if a machine can do something as well as or better than a human, then the machine should do it.
What does "foster labour voice" even mean? Especially in companies where everything is automated. You can give more power to current employees of current companies, but soon there will be new startups with zero employees (or where, for tax reasons, owners will formally employ their friends or family members). Giving more power to labour will not matter when the new zero-labour companies outcompete the current ones.
Human-complementary AI technologies again sound like a bullshit job, only mostly done by a machine, with a human involved somewhere in the loop even though the machine could do their part better, too.
Tax on media platforms -- solves a completely different problem. Yes, it is important to care about public mental health. But that is separate from the problem of technological unemployment. (You could have technological unemployment even in a universe where all social media are banned.)
↑ comment by Roman Leventov · 2024-05-13T17:09:37.448Z
> It would depend on the exact details, but if a machine can do something as well as or better than a human, then the machine should do it.
It's a question of how to design work. A machine can cultivate a monoculture mega-farm better than a human can, but not (yet) a small permaculture garden. Is a monoculture mega-farm more "effective"? Maybe at the pre-AI opportunity cost of human labour, but maybe not at the post-AI opportunity cost of human labour. And this is before factoring in the "economic value" of the better psychological and physical health of people who work on small farms vs. those who sit on their couches doing nothing and eat processed food made from crops grown on monoculture mega-farms.
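To make the opportunity-cost point concrete, here is a toy back-of-the-envelope calculation; every number in it (the per-hectare yield values, the labour hours, and the wages) is invented purely for illustration:

$$
\begin{aligned}
\text{net value per hectare} &= \text{yield value} - w \cdot \text{labour hours} \\
\text{pre-AI } (w = \$20/\text{hr}): &\quad \text{mega-farm: } 500 - 20 \cdot 1 = \$480; \quad \text{garden: } 600 - 20 \cdot 40 = -\$200 \\
\text{post-AI } (w = \$2/\text{hr}): &\quad \text{mega-farm: } 500 - 2 \cdot 1 = \$498; \quad \text{garden: } 600 - 2 \cdot 40 = \$520
\end{aligned}
$$

If AI automation collapses the market wage $w$ that a gardener could otherwise earn, labour-intensive arrangements stop being obviously "ineffective", even before the health externalities enter the ledger.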
As I understand it, Acemoglu roughly suggests looking for ways to apply this logic in other domains of the economy, including the knowledge economy. Yes, it's not guaranteed that such arrangements will stay economical for long (though it's also not beyond my imagination, especially if we factor in the economic value of physical and psychological health), but they may set the economy and society on a different trajectory, with higher chances of eventualities that we would consider "not doom".
What does "foster labour voice" even mean?
Unions 2.0, or something like holacracy?
> Especially in companies where everything is automated.
Not yet. Clearly, what he suggests could only remain effective for a limited time.
> You can give more power to current employees of current companies, but soon there will be new startups with zero employees (or where, for tax reasons, owners will formally employ their friends or family members).
Not that soon at all, if we're speaking about the real economy. In the IT sector, I suspect Big Tech will win big in the AI race because only they have deep enough pockets (you can already see Inflection AI quasi-acquired by Microsoft, Stability AI essentially bust, etc.). And Big Tech still has huge workforces; it won't be just Nadella or just Pichai anytime soon. Many other knowledge sectors (banking, law) are regulated and also won't shed employees that fast.
> Human-complementary AI technologies again sound like a bullshit job, only mostly done by a machine, with a human involved somewhere in the loop even though the machine could do their part better, too.
In my gardening example, a human may wear AI goggles that tell them which plant or animal species they are looking at or what disease a plant has.
> Tax on media platforms -- solves a completely different problem. Yes, it is important to care about public mental health. But that is separate from the problem of technological unemployment. (You could have technological unemployment even in a universe where all social media are banned.)
Tax on media platforms is just a concrete example of how "reforming business models" could be done in practice, maybe not the best one (but it's not my example). To carry on with my gardening example, I'd suggest a "tax on fertiliser": make it so huge that mega-farms (which require a lot of fertiliser) become less economical than permaculture gardens. Without such a push, permaculture gardens won't magically materialise. Acemoglu underscores this point multiple times: switching to a different socioeconomic trajectory is not just a matter of inventing a technology and applying it in a laissez-faire market. Inventing AI goggles for gardening (or any other technology that makes permaculture gardening arbitrarily convenient) won't make the economy switch away from monoculture mega-farms without an extra push.
Perhaps Acemoglu also has in mind the attention/creator economy and the automation that may happen to it (AI influencers replacing human influencers) when he talks about the "digital ad tax", but I don't see it.
↑ comment by Viliam · 2024-05-14T12:39:07.636Z
Thank you for the answers; they are generally nice, but this one part rubbed me the wrong way:
> And this is before factoring in the "economic value" of the better psychological and physical health of people who work on small farms vs. those who sit on their couches doing nothing and eat processed food made from crops grown on monoculture mega-farms.
If I live to see a post-scarcity society, I sincerely hope that I will be allowed to organize my remaining free time as I want to, instead of being sent to work on a small farm for psychological and physical health benefits. I would rather get the same benefits from taking a walk with my friends, or something like that.
I do not want to dismiss the health concerns, but again these are two different problems -- how to solve technological unemployment, and how to take care of one's health in the modern era -- which can be solved separately.
comment by Gordon Seidoh Worley (gworley) · 2024-05-13T16:48:16.005Z
Much of this depends on what kind of AI we get and how long we live under relatively unchanged conditions with that AI.
The unstated assumptions here seem to me to be something like:
- AI provides relatively fixed levels of automation, getting gradually better over time
- AI doesn't accelerate us towards some kind of singularity, so society has time to adapt to tiering
I'm pretty suspicious of accepting the second assumption here, as I think just the opposite is more likely. But, given the assumptions Acemoglu seems to be making, a two-tiered society split seems a likely outcome to me.
↑ comment by Roman Leventov · 2024-05-13T17:20:58.158Z
What level of automation AI provides, and at what rate, is exactly what he suggests influencing directly (specifically, slowing it down) through economic and political measures. So it's not fair to list that as an assumption.
comment by FlorianH (florian-habermacher) · 2024-05-13T20:59:07.743Z
From what you write, Acemoglu's suggestions seem unlikely to be very successful, in particular given international competition. I paint things a bit black-and-white, but I think the following logic remains salient in the messy real world as well:
1. If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country's egoistic perspective, it might as well develop the inventions domestically and at least benefit from being the inventor rather than the adopter.
2. If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt it and swamp you with their sales.
3. If you really were able to coordinate globally to enforce 1. or 2. (extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak in enforcement), then it seems you might as well directly impose the economic first-best solution w.r.t. robots vs. labour: high global tax rates and redistribution.
Separately, I at least spontaneously wonder: how would one even go about differentiating the "bad automation" to be discouraged from legitimate automation without which no modern economy could competitively run anyway? For a random example: if Excel didn't yet exist (or, for its next update...), would we have to say, "Sorry, cannot build such software, as any given spreadsheet risks removing thousands of hours of work"?! Or at least, "Please, Excel, ask the human to manually confirm each cell's calculation"?? So I don't know how we'd enforce non-automation in practice. Just "it uses a large LLM" feels like a weirdly arbitrary condition - though, OK, I can see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. and 2. mean this is unrealistic or unsuccessful anyway.
↑ comment by Roman Leventov · 2024-05-14T05:41:33.009Z
> Separately, I at least spontaneously wonder: how would one even go about differentiating the "bad automation" to be discouraged from legitimate automation without which no modern economy could competitively run anyway? For a random example: if Excel didn't yet exist (or, for its next update...), would we have to say, "Sorry, cannot build such software, as any given spreadsheet risks removing thousands of hours of work"?! Or at least, "Please, Excel, ask the human to manually confirm each cell's calculation"?? So I don't know how we'd enforce non-automation in practice. Just "it uses a large LLM" feels like a weirdly arbitrary condition - though, OK, I can see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. and 2. mean this is unrealistic or unsuccessful anyway.
Clearly, specific rule-based regulation is a dumb strategy. Acemoglu's suggestions are tax incentives to keep employment, plus "labour voice" to let people decide, in the context of a specific company and job, how they want to work with AI. I like this self-governing strategy. Basically, the idea is that people will want to keep influencing things and will resist "job bullshittification" done to them, if they have the political power ("labour voice"). But they should also have an alternative choice of technology and work arrangement that doesn't turn their work into rubber-stamping bullshit yet alleviates the burden ("machine usefulness"). Because if their only choice is between a rubber-stamping bullshit job and a burdensome job without AI, they may choose rubber-stamping.
↑ comment by Roman Leventov · 2024-05-14T05:33:07.862Z
> If you really were able to coordinate globally to enforce 1. or 2. (extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak in enforcement), then it seems you might as well directly impose the economic first-best solution w.r.t. robots vs. labour: high global tax rates and redistribution.
If anything, this problem seems more pernicious w.r.t. climate change mitigation and environmental damage, where the leverage is much more distributed: not only the US and China but also Russia and India are big emitters; there is big leverage in Brazil, Congo, and Indonesia with their forests; overfishing and ocean pollution are everywhere; etc.
With AI, it's basically a question of regulating US and UK companies: the EU is always eager to over-regulate relative to the US, and China is already successfully and closely regulating its AI for a variety of reasons (which Acemoglu points out). A big problem of the Chinese economy is weak internal demand, and automating jobs, thereby increasing inequality and decreasing local purchasing power, is the last thing China wants.
↑ comment by Roman Leventov · 2024-05-14T05:50:54.298Z
But I should add that I agree points 1-3 pose challenging political and coordination problems. Nobody assumes it will be easy, including Acemoglu. It's just another in the series of hard political challenges posed by AI, along with the question "aligned with whom?", accounting for people's voices past dysfunctional governments and political elites in general, etc.