How much white collar work could be automated using existing ML models?
post by AM · 2022-05-26T08:09:18.032Z
This is a question post.
Assume that progress in AI halts right now and only the currently published breakthroughs are available: GPT-3, PaLM, and Flamingo for text generation, and Imagen and DALL-E 2 for image generation.
Let's say these models all become available via paid APIs (which also allow fine-tuning on custom data), and we let a few years pass for startups to create nice plugins and interfaces so that non-technical people can use these models in their work. This tooling becomes abundantly available, with competition pushing prices down to a small multiple of the compute cost of running the models.
A copywriter has a paid subscription to a GPT-3/PaLM plugin. A customer support agent has access to a fine-tuned version of a question-answering PaLM. A designer has a subscription to Imagen/DALL-E 2 built into their software, for generating ideas to build on or for complex in-painting.
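To make the copywriter scenario concrete, here is a minimal sketch of what such a plugin might do under the hood. It assumes an OpenAI-style completion endpoint via the pre-1.0 `openai` Python client; the prompt template, model choice, and sampling parameters are illustrative guesses, not the settings of any real product.

```python
import openai

openai.api_key = "sk-..."  # the copywriter's paid API key (placeholder)

def draft_copy(product: str, audience: str, n_variants: int = 3) -> list[str]:
    """Ask a GPT-3-class model for several short ad-copy drafts.

    The prompt template and sampling parameters here are illustrative
    assumptions, not taken from any actual plugin.
    """
    prompt = (
        f"Write a punchy two-sentence ad for {product}, "
        f"aimed at {audience}:\n"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # GPT-3-class completion model
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,   # higher temperature -> more varied drafts
        n=n_variants,      # several candidates for the human to curate
    )
    return [choice["text"].strip() for choice in response["choices"]]

if __name__ == "__main__":
    for draft in draft_copy("a reusable coffee cup", "commuters"):
        print("-", draft)
```

The point is that the human stays in the loop: the model produces cheap candidates and the copywriter curates and edits, which is the kind of partial automation the question is asking about.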
How much should we expect global white-collar productivity to increase just from letting these existing models spread their impact across the economy? I'm looking for a high-level estimate: could we potentially automate 1% of all office hours (or, equivalently, increase productivity by ~1%), or 10%, or 30%?
My intuition
To me it looks like ML has recently gone through a step change, where the economic impact has gone from small but meaningful to potentially very large. Image and text models a few years ago were impressive, but the relevant use cases were mostly in improving backend-type workflows like search, recommendations, or targeted ads. Now, ML model capabilities seem powerful enough to start being visible in GDP growth rates, albeit with a fairly long time delay to allow for building tooling.
I suspect the progress that a few teams of ~1,000 people have made over the last ~2 years may, by itself and without further fundamental tech improvements, have increased global GDP by multiple percentage points (although this will take a few years to fully materialise).
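As a rough illustration of how a "multiple percentage points" figure could be reached, here is a back-of-envelope sketch. Every input number below is an assumption picked for illustration, not a measured quantity.

```python
# Back-of-envelope sketch of the GDP effect of partially automating office work.
# All inputs are illustrative assumptions, not data.

white_collar_share_of_gdp = 0.40   # assumed share of global output from office-type work
share_of_tasks_assisted   = 0.30   # assumed fraction of that work the tools can touch
time_saved_on_those_tasks = 0.25   # assumed speed-up where the tools apply

productivity_gain = (
    white_collar_share_of_gdp
    * share_of_tasks_assisted
    * time_saved_on_those_tasks
)

print(f"Implied boost to global GDP: {productivity_gain:.1%}")
# With these particular assumptions the answer is ~3%, i.e. "multiple
# percentage points" -- but the result is linear in each guess, so halving
# any input halves the estimate.
```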
Answers
Answer by Dagon
I suspect it'll be a few percent over a few years. Basically, one of the contributors to economic growth, but not a step-change.
My reasoning is that the VAST majority of the value in office work is the funny mix of judgement and trust/provenance that comes with human workers. Knowing WHICH individual is standing behind a given piece of work is important in valuing the work. These tools are making many people faster at their jobs, and in a lot of cases more precise in their work, but not fully replacing people.
Also, the biggest improvements will be in the less-valuable outputs, and the work will shift from generating the outputs to curating the outputs. This will limit the overall impact for quite some time.
↑ comment by AM · 2022-05-29T15:48:52.631Z
Thank you for your response!
I think I disagree with your take, but with high uncertainty. The counterfactual world I'm describing will only exist in parallel with the real world, where we get increased spread of the technology + improvements in it.
I agree with your assessment of where the value comes from, but I think a lot of office workers' time is spent on the stuff that could be automated. Since time is fungible, workers can use the saved time on the stuff you say drives value.
Answer by Daniel Kokotajlo
I currently suspect you don't go far enough! If people are allowed to collect more data and fine-tune on it, the sky's the limit... or at least, it's unclear how far above our heads the limit is.
Consider a hypothetical world like ours except where for some reason the government paid a million people to predict random internet text all day.
In that world, a million people just got their jobs automated away by GPT-3, which I hear* is superhuman at predicting random internet text, sufficiently so that it would probably be able to beat trained professionals (it's unclear since there are no such people in our world). And PaLM, Chinchilla, etc. are much better still.
OK, so why hasn't the singularity happened yet? Because our models are good at predicting internet text but only mediocre at the downstream tasks that are actually economically valuable.
But why are they only mediocre at those tasks?
You could say: Because those tasks are inherently, objectively harder! We'll need much bigger models or new architectures before we can do them!
But maybe instead the answer is mostly that our big models haven't been trained on those tasks. Or maybe they have been trained a little bit (fine-tuned) but not very much. Train for a trillion data points directly on an important economic task, and then we'll see what happens...
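As a concrete (and heavily simplified) illustration of that last step, here is a sketch of fine-tuning a small open causal language model directly on task-specific text with the Hugging Face `transformers` library. `gpt2` stands in for the much larger models discussed above, `support_tickets.txt` is a hypothetical corpus of resolved support tickets, and the hyperparameters are placeholders; this is not anyone's actual training setup.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # small stand-in for a GPT-3/PaLM-scale model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of task-specific records, e.g. resolved support tickets.
dataset = load_dataset("text", data_files={"train": "support_tickets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="ticket-model",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the model now predicts the task's text, not generic internet text
```

The open question in the answer above is what happens when this recipe is scaled from a toy corpus to a trillion tokens of genuinely valuable task data.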
I'm especially interested to hear counterarguments to this take.
*I heard this from Buck, Ryan, and Tao, who I think work at Redwood Research. They pointed me to this nifty tool which can show you what random samples of internet text look like.