Surviving Automation In The 21st Century - Part 1

post by George3d6 · 2022-05-15T19:16:16.966Z · LW · GW · 16 comments

This is a link post for https://www.epistem.ink/p/surviving-automation-in-the-21st?s=w

Contents

    i - Framing Automation
    ii - Automation Timelines
    iii - Stories
  iv - The Sociopolitical
16 comments

This is another career article (series?). I’m writing it in part for myself, but I may as well publish it.

I want to think through the kinds of jobs and career choices I should take in order to hedge against, and benefit from, increased automation, especially the kind driven by better and more widely applied ML. Not just for monetary reasons, but also to optimize for deriving meaning from an automated world, and to avoid crushing existential angst when a fundamental part of your “self” is tied to a task that’s now better performed by a microscopic piece of silicon.

i - Framing Automation

There are many ways to frame and think about automation.

The “conservative” take is to point out that it’s happened before and never led to that much rapid change in the job market. The only noticeable trend was that the people hardest hit were the already unprivileged, doing tedious manual labor, and that new jobs were created, usually office work.

The “futurist” take is that *this time is different, and that we’re on the cusp of a technological singularity where machines will surpass human intelligence, leading to a future where most jobs are automated and we have to find new ways to occupy our time*. And while that does sound cliché, keep in mind all the bits in italics were generated by a two-year-outdated text model, first take, with no editing, in a few milliseconds, using less energy than is required for my finger to type a single letter. If that isn’t a sign of exponential change, I don’t know what is.

So where does that leave us?

*The answer, as is often the case, is somewhere in the middle. We are on the cusp of a major change, but it’s not going to be as sudden or all-encompassing as the “futurists” would have you believe. And it’s not going to be as slow or imperceptible as the “conservatives” would have you believe.*

*The truth is that automation is going to change the world, but not in the way that we expect.*

... I disagree with the language model here. I think it leaves us in a spot where there’s a significant probability of changes so vast that, 5 years from now, this piece and all of its advice will be irrelevant; maybe “we” will no longer be “a thing” 5 years from now. But that is not worth worrying about: we should hedge our bets within the kind of bounds where concepts like “money”, “job”, and “human” still make sense; anything outside that is so far beyond our comprehension that it’s not worth bothering about.

Nor am I saying the conservative take is wrong; maybe automation will principally affect professional drivers, construction workers, cashiers, baristas, and farmhands. In that case the answer to thriving with automation remains the obvious “learn to code” style snarky reply: pray that you’re the kind of person who can do “intellectual labor” and pivot your career to that.

What if the conservative position is correct? If the current takes on automation are too “alarmist” and you end up preparing for nothing? ... Then you’ll still be in a good spot for having put in the work.

ii - Automation Timelines

First, I want to look at what can and can’t be easily automated.

Abstract thinking tasks are some way away from being automated. This paper provides some decent timelines based on expert estimations.

Most thinking-heavy jobs will be out of reach for ML for at least the next 20 to 30 years, as will most jobs requiring very specific fine motor control or environment observation.

The things we’re certain ML will be good at are the things we already do pretty well algorithmically:

Answering questions with well-defined answers in persuasive ways (sales, legal, search engines, scoring well on an SAT)

Solving well-bounded optimization problems over a gigantic search space (video games, casino games, navigating a warehouse or highway)

Subtext and broad-context unaware symbolic manipulation (grammar checks, style imitation, summarization, style alteration, translation from images to symbols, translations between languages)

Of course, there are other players, in the form of hardware, regulations, adoption, and old-style automation.

I’ve come to believe hardware is mainly a function of energy production and energy storage. Thus far the future seems headed toward no exponential increase in energy production (e.g. fusion- or fission-based), while energy storage seems to improve in all directions but refuses to budge past a point: an AAA+ battery that holds 5x the energy seems feasible, but one that’s 5x smaller does not.

So I think an increase in Roomba-sized to train-sized high-autonomy robots is plausible, and some of this is being experimented with: last-mile deliveries and long-distance shipments handled automatically.

Drones are, of course, the most important bit when it comes to robotics, since humans are garbage at controlling them: flying requires years of specialized training to get right, and we’re just not very good at moving with that many degrees of freedom. But the problem itself isn’t more challenging for ML than self-driving; indeed, it’s quite a lot easier.

This is where regulation comes in and plays a big role, the reason food isn’t (always) delivered by drone as of yet is part-technical part-regulation. But unlike the last big pass in automation, when missing out meant getting conquered, this time the penalty for missing out seems insubstantial.

You could imagine a country deciding to ban self-driving, autonomous drones, automated checkouts, and such, resulting in a massive loss to GDP and cost to consumers. But that cost is expressed in... what? restaurant orders? Starbucks lattes? Having to take the bus or, god forbid, bike or scooter? slower and more expensive amazon deliveries? There’s real value somewhere in there, sure, where “real” needs could be met by this increasing automation, but they don’t seem to be its main target.

Assuming automation will be inconsequential goes against GDP-equals-wealth economy 101. Still, I'll wait and see how economies that ban convenience automation fare against those that don't, I think the result may be surprising. The “important” bits of automation will happen in both. Germany and the US will both want their bombardier drone swarms. It’s just that the latter will have the engineers projecting them be driven to his co-working space by a self-driving car, getting their “Mediterranean wrap” from the claws of a drone; While the former will have said engineers take a subway to the office, then get this shawarma delivered by a guy on a bike. How much that difference will count for is anybody’s guess, I suspect it’s not that much.


Finally, there’s adoption, which counts for a lot.

Working from home “automates away” a lot of stuff by virtue of just using technology to bypass it. But it doesn’t really work unless everybody’s doing it, a half-remote team is not a functional working unit.

Now that we’ve reached a tipping point for remote work, a lot more tools and companies are going to adapt to it, leading to more skilled managers and engineers investing their time, and more investors investing their money... iterate a few times, and 10 years from now either half of the tech world is remote, or all of it is.

The same rule applies to many other forms of automation: adoption drives adoption, and tipping points will exist, though I can’t predict where or when they will come.

This brings us to the final point of “old school” automation becoming better once things are automated “the hard way”.

Self-driving cars powered by trillion-parameter neural networks, able to operate better than the best human drivers, will be “a thing” just until they replace every single human driver. Once every human is off the road the self-driving cars can be replaced by a few conditional statements.

This is how I like to conceptualize automation, and out of that loads of conclusions can arise, but I’ve been talking abstractly for a while now, so let me go through some concrete examples.

iii - Stories

The truck driver; he lives in MAGA-county Texas and makes $110k/year driving for Walmart. It’s hard work but requires little qualification, and he’s in his early 20s; his monthly cost of living can be less than $1k, rent included, and his taxes are low. He’s probably going to be automated away in a dozen years, but in the meanwhile increasing labor shortages drive his wages up. If he’s half-decent at saving he might retire well before 10% of trucks are automated, with enough money to support a large family.

The radiologist; she lives in San Francisco and makes $200k/year. Problem is, she has roughly $600k of cumulative debt from her college, med school, and residency years, and could only start working in her mid-30s. 50% of her job involves pressing a few buttons, selecting the best slices for surgeons to look at, and putting five-letter-acronym labels on MRIs; the rest is paperwork. She’s joined a union and is trying to ignore the recent publications making waves by showing neural networks outperform humans on such tasks. She’s saving very little, with 40% taxes, debt, and $60k/year going toward rent and the luxurious lifestyle of her friends (who had it a lot easier than her, their parents being medics and all). Her plan for retirement was to skip kids and save more once the seniority pay-rises start kicking in... hope nobody acts on those publications, or that the union keeps its promises.

The remote programmer; they/them/zem/zer contract on Upwork for $120/hr, mainly Python and Rust (loads of interest in crypto and in productionizing PyTorch models recently). Keeps hearing about GPT-5 top-pilot being the “hot new thing”, but it seems like no matter how much extra automation is added, the need for programmers keeps rising. Needs to work about 90 minutes a month to afford living expenses in Pokhara and Dharamshala, but is trying to save for retirement in a country with top-notch healthcare and maybe cryonics-friendly life insurance.

The London Landlord; its 5-bedroom Victorian house turned 20-tenant apartment building is losing value at an alarming rate. It seems that London has become a realm for the rich, or at least the “not struggling dirt poor”. A modest UBI and the full automation of most low-end jobs kickstarted a wave of people moving to the countryside. Antennas curl and scales change color; sharp teeth break through the membrane and form a pain-filled grin: “No matter, soon the last phase will be in motion and the surface world will be ours!”

iv - The Sociopolitical

The one thing that neither I nor anyone else can predict is the sociopolitical element of automation.

Is universal basic income going to become a thing? If so, what will the imperfect “real world” implementation look like?

Can current micromanaged social welfare programs expand to fill the gap instead of UBI? If so, how dystopian will they get?

Will we see more pronounced class divisions and less social mobility? More? Is the concept becoming blurrier or being erased?

Might value shift away from earning money and working for others? Will “having a job” start being seen as taboo, in bad taste, or obsolete?

Might governments in the “1st world” world lose control of the tokens used to make transactions, much like what happened to those in the “3rd”? Will that significantly affect their ability to influence the economy?

Will people value a service just because it’s done by a human instead of a machine? Will shopping at a man-managed supermarket or coffee shop become something desirable, a mark of privilege and status?

... The above questions might sound like bullshit, because they are. Society is too hard to predict; there are no proven experts, and no markets to shed considerable light on the subject.


But if a massive increase in automation happens, sociopolitical change will follow, and it’s good to adapt to that. Either by moving to a society that fits your preferences or by adapting them to the direction your society is moving in.

Getting a high-paying job, working very hard, investing, and quitting in order to find meaning in expensive hobbies might be a poor choice if you live in a place that will start de-consolidating wealth and property ownership in order to deal with the financial divides caused by automation.

Leading a frugal bohemian lifestyle and hoping covid-era reliefs stick and expand into something-like-UBI might be an amazing gamble that makes proper use of the best years of one’s life, if progressive economics take over; or it might leave you homeless, if winner-takes-all capitalism is accelerated by automation.


The best strategy for dealing with sociopolitical change right now seems to be maintaining flexibility: keep a strong passport with long-term visas for major countries, avoid citizenship in countries with too much international overreach (e.g. the USA), seek it in countries with loads of international goodwill (e.g. New Zealand), don’t invest too heavily in a single property, don’t limit your friends to a single country, and learn several major-circulation languages.

Keeping an eye on realpolitik from the individual perspective also helps. Run cost-benefit analyses of what your government is giving you and how much you’re paying it. Could you get a better deal? Do you know how good the insurance you’re getting (unemployment, healthcare, welfare) actually is? Did you ever put it to the test?

I, for one, have noticed a large divide between what governments advertise and their actual practice. Asking lawyers and accountants about the taxes you’d pay “if so and so”, or the benefits you could get “if such and such”, will reveal a lot of relevant information. Though maybe this is just common sense that everyone gains in their early 20s, and not a useful insight.

Of course, if you aren’t a weirdo you might find some of this rather hard; people tend to “settle down” after a while. In that case all of this is moot, and your best bet is to smell the winds of social change in your corner of the world and adapt as best you can.


At any rate, this post has gone on long enough, and I think it offers a solid foundation for the next two. Next up, I’m going to try to give some more concrete advice on how machine learning and regulations around digital transactions will affect various sectors, which I hope are topics I have some “real” expertise in. After that, I might try to talk about meaning in an increasingly automated world. I fed the text model this article and it came up with “The Future of Work: Disillusionment, Digitization and Disrupted Futures” as the title for the follow-up; a bit cliché, but with a bit of polish it might work.

16 comments

Comments sorted by top scores.

comment by Logan Zoellner (logan-zoellner) · 2022-05-15T21:24:45.714Z · LW(p) · GW(p)

But unlike the last big pass in automation, when missing out meant getting conquered, this time the penalty for missing out seems insubstantial.

 

This claim has been empirically refuted in Armenia and Ukraine.  Missing out on drones DOES mean getting conquered.

Replies from: George3d6
comment by George3d6 · 2022-05-16T10:39:34.851Z · LW(p) · GW(p)

Hence why I make mention of it in the article:

>The “important” bits of automation will happen in both. Germany and the US will both want their bombardier drone swarms. It’s just that the latter will have the engineers projecting them be driven to his co-working space by a self-driving car, getting their “Mediterranean wrap” from the claws of a drone; While the former will have said engineers take a subway to the office, then get this shawarma delivered by a guy on a bike

But the vast majority of automation doesn't seem to be militarily relevant. Even if you assume some sort of feedback loop where military insubstantial automation leads to better military automation, world powers already have the trump card in terms of nukes for wars of aggression against them.

Replies from: logan-zoellner
comment by Logan Zoellner (logan-zoellner) · 2022-05-16T13:22:01.552Z · LW(p) · GW(p)

But the vast majority of automation doesn't seem to be militarily relevant. Even if you assume some sort of feedback loop where military insubstantial automation leads to better military automation, world powers already have the trump card in terms of nukes for wars of aggression against them.

 

I think you're underestimating the use of non-military tech for military purposes.  As a point of comparison, the US pre-WWII had a massive economy (with very little of it dedicated to the military), but this still proved to be a decisive advantage.

Or, as Admiral Yamamoto said:

Anyone who has seen the auto factories in Detroit and the oil fields in Texas knows that Japan lacks the national power for a naval race with America.

A country that has 100 million drones delivering "Mediterranean wraps" is also going to have a huge advantage when it comes to building drones for other purposes.

Nuclear weapons are also only a trump card as long as their use remains unthinkable.  In a war with actual use of tactical nuclear weapons, you're going to want to be on the side that has the advantage in terms of missile defense, precision strikes, dominating the infosphere, etc.

comment by mukashi (adrian-arellano-davin) · 2022-05-16T08:40:25.472Z · LW(p) · GW(p)

Good post. I truly appreciate the refreshing take on AI. I have the impression that lately, most posts around are: we develop AGI in 5 years, it creates nanotechnology and suddenly we all die. This seems to be a more realistic take on the things we can expect and the things we should be worried about

Replies from: angmoh, George3d6, George3d6
comment by angmoh · 2022-05-17T03:54:04.316Z · LW(p) · GW(p)

Seconded - I'd like to see more of this angle of analysis too. I assume the reason why the 'soft take-off' is underdiscussed is that tech people a) pay more attention to the news on AI, and b) see the future of this stuff viscerally and the endgame is what looms. I think that's not wrong, because the endgame is indeed transformative. But how we get there and how quickly it happens is a completely open question.

I work in the AEC industry (Architecture, Engineering, Construction) - 90%+ of people have zero idea about recent advances in AI. But on the other hand, should they be personally worried about their employment prospects in the next decade? I feel like lots of LW-type people would say "Yes!" - I can only speak personally, but it's really hard for me to see it happening. If only for the fact that doing anything in meatspace takes a long time. There are plenty of great 'digital' solutions to problems in this industry that have been around for 10+ years and have still made no headway. I know AGI is different, but it's worth mentioning how slow things can be, and how much of a grinding bureaucracy many industries are.

The other way I think about it is that ultimately human concerns (politics, agency etc) underpin all economic activity, and there will be massive political and bureaucratic opposition to extreme levels of economic 'disruption' (in the negative sense of mass unemployment etc). I foresee active responses from the populace and governments shaping the path this takes to a significant degree (eg. increases in labour force protectionism). Not so much that capitalism just takes to AI like a duck to water, governments let it happen, and 99% of people end up in a terrafoam box in a few short years.

comment by George3d6 · 2022-05-16T10:47:29.956Z · LW(p) · GW(p)

I think that's very relegated to LW and AIAF, but I think you don't have to dismiss some probability of that happening in order to prepare for the probability where it doesn't or where it does in a "hundreds of years soft take-off" way.

comment by Lost Futures (aeviternity1) · 2022-05-15T22:01:49.088Z · LW(p) · GW(p)

Good post George. But I'm surprised by this assertion:

You could imagine a country deciding to ban self-driving, autonomous drones, automated checkouts, and such, resulting in a massive loss to GDP and cost to consumers. But that cost is expressed in... what? restaurant orders? Starbucks lattes? Having to take the bus or, god forbid, bike or scooter? slower and more expensive amazon deliveries? There’s real value somewhere in there, sure, where “real” needs could be met by this increasing automation, but they don’t seem to be its main target.

That's hard for me to fathom. Self-driving cars, autonomous drones, and automated checkouts will have a far greater impact on society and the economy than just GDP stats and slightly lower-cost consumer goods. Mass adoption of self-driving cars alone would make big waves. The option of having cheap transportation anywhere on demand is a major boon for people in poverty. As autonomous vehicles become more commonplace, consider how many parking spaces in America could be replaced with more stores, housing, etc...

Replies from: George3d6
comment by George3d6 · 2022-05-16T10:44:05.616Z · LW(p) · GW(p)

This is why I said I'm not certain it will have an impact, rather than that I'm sure it will or won't.

I can see self-driving having a big impact, but won't that impact be outweighed by the kind of jobs people have to drive to being mainly automated?

comment by Malmesbury (Elmer of Malmesbury) · 2022-05-15T22:36:41.264Z · LW(p) · GW(p)

I like to imagine a future where people get bored of automation and use transhumanism to start working again. Instead of drones, delivery is done by humans in exoskeletons running through the country at the speed that maximizes fun. Intellectual jobs are done by humans with augmented intelligence that feels just like normal consciousness. Even if it doesn't generalize, I'd expect at least a few people to choose this way of life. Isn't it tempting? (I, for one, will be the running delivery guy)

Replies from: George3d6
comment by George3d6 · 2022-05-16T10:50:23.258Z · LW(p) · GW(p)

I can see that happening, and that's part of "how do you find meaning once work is automated" which is the question I struggle with most. Props on brain extensions that you are conscious of/as, it does seem intuitive to me that something like that will soon be available.

comment by Malmesbury (Elmer of Malmesbury) · 2022-05-15T22:26:00.457Z · LW(p) · GW(p)

Could you give more details on how you used the language model? (Great post btw)

Replies from: George3d6
comment by George3d6 · 2022-05-16T10:48:32.624Z · LW(p) · GW(p)

Create an account with Goose AI or OpenAI, go to "completion", paste in the text you've written thus far, and take the completion.
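[For readers who prefer the API over the web playground, the same workflow looks roughly like this. This is a sketch against OpenAI's HTTP completions endpoint; the model name, parameters, and placeholder key are illustrative assumptions, so check the current API docs before relying on them.]

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"


def build_completion_request(prompt, api_key, model="text-davinci-002", max_tokens=256):
    """Build a request asking the model to continue `prompt` (not sent here)."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# With a real key, sending the request returns the model's continuation:
# req = build_completion_request("The futurist take on automation is", api_key="sk-...")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```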

comment by gbear605 · 2022-05-15T20:51:09.854Z · LW(p) · GW(p)

This is a minor point, but everyone always forgets it - for the radiologist who's making 200k in California, she's paying closer to 33% in taxes, 25% going to the federal government and 8% going to California, or about $67,000. That's because the structure of tax brackets means that the marginal tax rate is not the total tax rate. It's still a lot of course.

Source: https://www.irscalculators.com/tax-calculator

Based on some common med school loan interest rates and terms (see https://www.bankrate.com/loans/student-loans/medical-school-loans/ and https://www.calculator.net/loan-calculator.html?cloanamount=600000&cloanterm=15&cloantermmonth=0&cinterestrate=3.5&ccompound=monthly&cpayback=month&x=65&y=35#amortized-result), she's probably paying about $50,000 to her loans.

So she's still saving about $23,000 per year, which is an amount of savings that most people would love to have. Of course, it assumes that automation won't come for her job, but if it does, a lot of the medical school debt could be discharged in bankruptcy, so at least she wouldn't be in a horrible position.
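[The marginal-vs-total distinction and the loan figure above can be sanity-checked in a few lines. The brackets below are simplified, made-up stand-ins, not the actual federal or California schedules:]

```python
def progressive_tax(income, brackets):
    """Tax owed under marginal brackets, given as [(lower_bound, rate), ...]
    in ascending order. Each rate applies only to the slice of income
    between its bound and the next one, which is why the total (effective)
    rate is always below the top marginal rate."""
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax


def annual_loan_payment(principal, annual_rate, years):
    """Standard amortized-loan payment, annualized."""
    r = annual_rate / 12
    n = years * 12
    return 12 * principal * r / (1 - (1 + r) ** -n)


# Illustrative brackets only:
brackets = [(0, 0.10), (40_000, 0.22), (90_000, 0.24), (170_000, 0.32)]
tax = progressive_tax(200_000, brackets)
print(tax, tax / 200_000)  # $43,800 owed: a 21.9% effective rate, well below the 32% top bracket
print(annual_loan_payment(600_000, 0.035, 15))  # ~$51,500/year, matching the ~$50k figure above
```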

 

Also for USA citizenship, while you nominally have to pay taxes to the US government even if you're abroad, the first (currently) $108,000 is excluded. That could change in the future, but any government could theoretically institute a policy of international taxation for its citizens, so the US doesn't seem particularly worse.

Replies from: George3d6
comment by George3d6 · 2022-05-15T21:21:07.316Z · LW(p) · GW(p)

That could change in the future, but any government could theoretically institute a policy of international taxation for its citizens, so the US doesn't seem particularly worse.

Would be rather difficult or impossible for most, since many intl trade treaties forbid double taxation among members. The US is an outlier because it can afford special treatment, afaict.

Replies from: gbear605
comment by gbear605 · 2022-05-15T21:33:57.054Z · LW(p) · GW(p)

I suspect that the set of universes where America drops the taxation exemption to zero is roughly equal to the set of universes where most countries withdraw from those intl trade treaties. That is, it would only happen when crazy things are already happening with international trade.

Replies from: George3d6
comment by George3d6 · 2022-05-16T10:55:27.892Z · LW(p) · GW(p)

Agree on that point, though I think the "crazy things are happening" scenario is likely one where money is less legible so countries bother about taxing such things less.