What will the twenties look like if AGI is 30 years away?
post by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-13T08:14:07.387Z · LW · GW · 17 comments

This is a question post.
I'm especially interested to hear from people with long (i.e. 20+ year) timelines what they think the next 10 years will look like.
[EDIT: After pushback from Richard Ngo [LW(p) · GW(p)], I've agreed to stop talking about short and long timelines and just use numbers instead, e.g. "20+ year" and "<10 year" timelines. I recommend everyone do the same going forward.]
Ajeya (as of the time of her report at least) is such a person, and she gave some partial answers already:
By 2025, I think we could easily have decent AI personal assistants that autonomously compose emails and handle scheduling and routine shopping based on users’ desires, great AI copy-editors who help a lot with grammar and phrasing and even a little bit with argument construction or correctness, AIs who summarize bodies of information like conversations in meetings or textbooks and novels, AI customer service reps and telemarketers, great AI medical diagnosis, okay AI counseling/therapy, AI coding assistants who write simple code for humans and catch lots of subtle bugs not catchable by compilers / type checkers, even better AI assisted search that feels like asking a human research assistant to find relevant information for you, pretty good AI tutors, AIs that handle elements of logistics and route planning, AIs that increasingly handle short-timescale trading at hedge funds, AIs that help find good hyperparameter settings and temperature settings for training and sampling from other AIs, and so on.
But she thinks it'll probably take till around 2050 for us to get transformative AI, and (I think?) AGI as well.
The hypothesis I'm testing here is that people with long timelines [EDIT: 20+ year timelines] nevertheless think that there'll be lots of crazy exciting AI progress in the twenties, just nothing super dangerous or transformative. I'd like to get a clearer sense of how many people agree and what they think that progress will look like.
(This question is a complement to Bjartur's previous question [LW · GW].)
Answers

answer by RomanS
I think that it is very likely that AGI will emerge in the next 20 years, unless something massively detrimental to AI development happens.
So, it's the year 2050, and there is still no AGI. What happened?
Some possible scenarios:
- We got lucky with the first COVID-19 variants being so benign and slow-spreading. Without that early warning (and our preparations), the Combined Plague could have destroyed civilization. We've survived. But the disastrous pandemic has slowed down progress in most fields, including AI research - by breaking supply chains, by reducing funding, and by claiming the lives of many researchers.
- After the Google Nanofab Incident, the US / the EU / China decided that AI research must be strictly regulated. They then bullied the rest of the world into implementing similarly strict regulations. These days, it's easier to buy plutonium than to buy a TPU. AGI research is ongoing, but at a much slower pace.
- The collapse of the main AI research hub - the US. The Second Civil War was caused mostly by elite overproduction, amplified by Chinese memetic warfare, the pandemic, and large-scale technological unemployment. The Neoluddite Palo Alto Lynching is still remembered as one of the bloodiest massacres in US history.
- The first AGI secretly emerged, and decided to mostly leave the Earth alone, for whatever reason. Its leftovers are preventing the emergence of another AGI (e.g. by compromising every GPU / TPU in subtle ways).
- We have created an AGI. But it requires truly enormous computational resources to produce useful results (similarly to AIXI). For example, for a few million bucks' worth of compute, it can master Go. But to master cancer research, it needs thousands of years running on everything we have. We still seem to be decades away from the AGI making any difference.
17 comments
comment by Richard_Ngo (ricraz) · 2021-07-14T05:27:32.417Z · LW(p) · GW(p)
I want to push back on calling 20+ years "long timelines". It's a linguistic shift which implicitly privileges a pretty radical position, in a way which is likely to give people a mistaken impression of general consensus.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-14T05:51:16.709Z · LW(p) · GW(p)
I think that if it's OK to call my view "short timelines", it should be OK to call 20+ years "long timelines". Insofar as one is implicitly privileging something, so is the other, and which one is radical or consensus-busting depends on who you ask.
For example, if you ask me, the people with reasonable views worthy of respect have timeline medians ranging from, like, +2 years to +40 years. People who are like "It's 50+ years away" are both (a) relatively rare among timelines experts, and (b) pretty silly.
↑ comment by Richard_Ngo (ricraz) · 2021-07-14T06:14:22.384Z · LW(p) · GW(p)
The most comprehensive (perhaps only comprehensive?) investigation into this says median +35 years, with 35% credence on 50+, and surveys of experts in ML give even higher numbers. I don't know who you're counting as a timelines expert, but I haven't seen any summaries/surveys of their opinions that justify less than 20 years being the default option.
I'm not saying that this makes your view unreasonable. But presenting your view as consensus without presenting any legible evidence for that consensus is the sort of thing which makes me concerned about information cascades - particularly given the almost-total lack of solid public (or even private, to my knowledge) defences of that view.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-14T08:04:24.233Z · LW(p) · GW(p)
I'm not presenting my view as consensus. There is no consensus of any sort on the matter of AI timelines, at least not a publicly legible one. (There's the private stuff, like the one I mentioned about how everyone who I deem to be reasonable fits within a certain range). This is a symmetric consideration; if you have a problem with me calling 20+ year timelines "long" then you should also have a problem with people calling 10 year timelines "short." Insofar as the former is illicitly claiming there exists a consensus and it supports a certain view, so is the latter.
I'd actually be fine with a solution where we all agree to stop using the terms "long timelines" and "short timelines" and just use numbers instead. How does that sound?
EDIT: Minor point: Ajeya's report says median 2050, no? It's been a while since I read it but I'm pretty sure that was what she said. Has it changed to 2055? I thought it updated down to 2045 or so after the bees investigation?
EDIT EDIT: Information cascades are indeed a big problem; I think they are one of the main reasons why people's timelines are on average as long as they are. I think if information cascades didn't exist people would have shorter timelines on average, at least in our community. One weak piece of evidence for this is that in my +12 OOMs post I polled people asking for their "inside views" and their "all things considered views", and their inside views had notably shorter timelines. Another weak piece of evidence for this is that there is an asymmetry in public discourse, where people with <15 year timelines often don't say so in public, or if they do they take care to be evasive about their arguments, for infohazard reasons. Another asymmetry is that generally speaking shorter timelines are considered by the broader public to be crazier, weirder, etc. There's more of a stigma against having them.
↑ comment by Richard_Ngo (ricraz) · 2021-07-14T22:44:32.991Z · LW(p) · GW(p)
I'd actually be fine with a solution where we all agree to stop using the terms "long timelines" and "short timelines" and just use numbers instead. How does that sound?
Yeah, that sounds very reasonable; let's do that.
Ajeya's report says median 2050, no?
I just checked, and it's at 2055 now. Idk what changed.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-15T07:33:23.398Z · LW(p) · GW(p)
All right, sounds good! This feels right to me. I'll taboo "short" and "long" when talking timelines henceforth!
↑ comment by Ben Pace (Benito) · 2021-07-15T01:17:27.480Z · LW(p) · GW(p)
I'd actually be fine with a solution where we all agree to stop using the terms "long timelines" and "short timelines" and just use numbers instead. How does that sound?
I think this could be excellent.
comment by Matthew Barnett (matthew-barnett) · 2021-07-14T23:46:54.729Z · LW(p) · GW(p)
My best guess is that the next 10 (or maybe 20) years in AI will look a bit like the late 1990s and early 2000s when a lot of internet companies were starting up. If you look at the top companies in the world by market cap, most of them are now internet companies. We can compare that list to the top companies in 1990, which were companies like General Motors, Ford, and General Electric. In other words, internet companies totally rocked the boat in the last few decades.
From a normal business standpoint, the rise of the internet was a massive economic shift, and continues to be the dominant engine driving growth in the US financial markets. Since 2004, the Vanguard Information Technology ETF went up about 734%, compared to a rise of only 294% in the S&P 500 during the same time period.
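A minimal sketch of putting those cumulative figures on a common scale: converting them into rough annualized growth rates, assuming "went up about 734%" means a gain of 734% (roughly an 8.3x multiple) over the ~17 years from 2004 to 2021:

```python
# Rough annualized growth rates implied by the cumulative figures above,
# assuming gains of +734% and +294% over ~17 years (2004-2021).

def annualized_return(cumulative_gain_pct: float, years: float) -> float:
    """Convert a cumulative percentage gain into an annualized growth rate."""
    growth_multiple = 1 + cumulative_gain_pct / 100
    return growth_multiple ** (1 / years) - 1

years = 2021 - 2004
print(f"Tech ETF: {annualized_return(734, years):.1%} per year")  # ~13% per year
print(f"S&P 500:  {annualized_return(294, years):.1%} per year")  # ~8% per year
```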
And yet, overall economic growth is still sluggish. Despite the fact that we went from a world in which almost no one used computers, to a world in which computers are an essential part of almost everyone's daily lives, our material world is surprisingly still pretty similar. The last two decades of growth have been the slowest decades in over a century.
If you just focused on the fact that our smartphones are way better than anything that came before (and super neat), you'd miss the fact that smartphones aren't making the world go crazy. Likewise, I don't doubt that we will get a ton of new cool AI products that people will use. I also think it's likely that AI is going to rock the boat in the financial markets, just like internet companies did 20 years ago. I even think it's likely that we'll see the rise of new AI products that become completely ubiquitous, transforming our lives.
For some time, people will see these AI products as a big deal. Lots of people will speculate about how the next logical step will be full automation of all labor. But I still think that by the end of the decade, and even the next, these predictions won't be vindicated. People will still show up to work to get paid, the government will still operate just as it did before, and we'll all still be biological humans.
Why? Because automating labor is hard. To present just one illustration, we still don't have the ability to fully automate speech transcription. Look into current transcription services and you'll see why. Writing an AI that can transcribe some 75% of your words correctly turned out to be relatively easy. But it's been much harder to do the more nuanced stuff, like recognizing which speakers are saying what, or correctly transcribing the uhhs and ahhs and made-up names like, say, "Glueberry".
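For a sense of what "transcribe some 75% of your words correctly" cashes out to, here's a minimal word-error-rate sketch; the example sentences are made up:

```python
# Illustrative sketch: a simple word error rate (WER) calculation, the usual way
# transcription accuracy is quantified. The example strings below are made up.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard edit distance (Levenshtein) over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "uh so we should talk to Glueberry about the roadmap"
hypothesis = "so we should talk to blueberry about the road map"
print(f"WER: {word_error_rate(reference, hypothesis):.0%}")
# -> 40% for this made-up pair (1 deletion, 2 substitutions, 1 insertion)
```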
My impression is that when people look at current AI tech, are impressed, and then think to themselves, "Well, if we're already here, then we'll probably be able to automate all human labor within, say, 10 years", they just aren't thinking through all the very complicated difficulties that actually stand in the way of fully replacing labor. And that makes sense, since that stuff is less salient in our minds: we can see the impressive feats that AI can do already, since we have it and the demos are there for us to marvel at. It's harder for us to see all the stuff AI can't yet do.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-15T07:41:21.073Z · LW(p) · GW(p)
Makes sense. I also agree that this is what the 2030's will look like; I don't expect GDP growth to accelerate until it's already too late. [LW · GW]
The quest for testable-prior-to-AI-PONR [? · GW] predictions continues...
↑ comment by Matthew Barnett (matthew-barnett) · 2021-07-15T18:51:00.227Z · LW(p) · GW(p)
It sounds like we might not disagree a lot? We've exchanged a few responses to each other in the past that may have given the impression that we disagree strongly on AI timelines, but plausibly we just frame things differently.
Roughly speaking, when I say "AI timelines" I'm referring to the time during which AI fundamentally transforms the world, not necessarily when "an AGI" is built somewhere. I think this framing is more useful because it tracks more closely what EAs actually care about when they talk about AI.
I also don't think that the moment GDP accelerates is the best moment to intervene, though my framing here is different. I'd be inclined to discard a binary model of intervention in which all efforts after some critical threshold are wasted. Rather, intervention is a lot like the time-value of money in finance. In general, it's better to have money now rather than later; similarly, it's better to intervene earlier rather than later. But the value of interventions diminishes continuously as time goes on, eventually getting near zero.
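As a toy illustration of that time-value analogy (with a made-up decay rate, not a claim about actual numbers), one could model the relative value of intervening t years from now as continuously discounted:

```python
# Toy model of the time-value analogy: the value of intervening later decays
# continuously toward zero. The 15%/year decay rate is made up for illustration.

import math

def relative_intervention_value(years_from_now: float, decay_rate: float = 0.15) -> float:
    """Value of intervening `years_from_now` years out, relative to intervening today (= 1.0)."""
    return math.exp(-decay_rate * years_from_now)

for t in (0, 5, 10, 20, 30):
    print(f"t = {t:2d} years: relative value ~ {relative_intervention_value(t):.2f}")
```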
The best way to intervene also depends a lot on how far we are from AI-induced growth. So, for example, it might not be worth trying to align current algorithms, because that sort of work will be relatively more useful when we know which algorithms are actually being used to build advanced AI. Relatively speaking, it might be worth more right now to build institutional resilience, in the sense of creating incentive structures for actors to care about alignment. And hypothetically, if I had known about the AI alignment problem in, say, the 1920s, I might have recommended investing in the stock market until we had a better sense of what form AI would take.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-15T21:30:12.273Z · LW(p) · GW(p)
The two posts I linked above explain my view on what EAs should care about for timelines; it's pretty similar to yours. I call it AI-PONR, but basically it just means "a chunk of time where the value of interventions drops precipitously, to a level significantly below its present value, such that when we make our plans for how to use our money, our social capital, our research time, etc. we should basically plan to have accomplished what we want to have accomplished by then." Things that could cause AI-PONR: An AI takes over the world. Persuasion tools destroy collective epistemology. AI R&D tools make it so easy to build WMDs that we get a vulnerable world. Etc. Note that I disagree that the time when AI fundamentally transforms the world is what we care about, because I think AI-PONR will come before that point. (By "fundamentally transforms the world", do you mean something notably different from "accelerates GDP"?) I'd be interested to hear your thoughts on this framework, since it seems you've been thinking along similar lines and might have more expertise than me with the background concepts from economics.
So it sounds like we do disagree on something substantive, and it's how early in takeoff AI-PONR happens. And/or what timelines look like. I think there's, like, a 25% chance that nanobots will be disassembling large parts of Earth by 2030, but I think that the 2030's will look exactly as you predict up until it's too late.
comment by Rohin Shah (rohinmshah) · 2021-07-13T13:43:56.834Z · LW(p) · GW(p)
I haven't thought about it that carefully but Ajeya's paragraph sounds reasonable to me. Intuitively, I feel more pessimistic about medical diagnosis (because of regulations, not because of limitations of AI) and more optimistic about AI copy-editors (in particular I think they'll probably be quite helpful for argument construction). I'm not totally sure what she means about AIs finding good hyperparameter settings for other AIs; under the naive interpretation, that's been here for ages (e.g. population-based training or Bayesian optimization or gradient-based hyperparameter optimization or just plain old grid search).
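As a concrete anchor for the "plain old grid search" end of that spectrum, a minimal sketch (with a made-up stand-in for the actual training-and-evaluation step):

```python
# A minimal sketch of "plain old grid search" over hyperparameters, using a
# made-up objective function as a stand-in for a real training-and-evaluation run.

import itertools

def validation_score(learning_rate: float, temperature: float) -> float:
    """Stand-in for training a model and measuring validation performance."""
    return -((learning_rate - 3e-4) ** 2) * 1e6 - (temperature - 0.8) ** 2

grid = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "temperature": [0.5, 0.8, 1.0],
}

best = max(
    itertools.product(*grid.values()),
    key=lambda combo: validation_score(*combo),
)
print(dict(zip(grid.keys(), best)))  # {'learning_rate': 0.0003, 'temperature': 0.8}
```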
I'd expect this all to be at "startup-level" scale, where the AI systems still make errors (like, 0.1-50% chance) that startups are willing to bear but larger tech companies are not. For reference, I'd classify Copilot as "startup-level", but it probably doesn't yet meet the bar in Ajeya's paragraph (though I'm not sure, I haven't played around with it). If we're instead asking for more robust AI systems, I'd probably add another 5ish years on that to get to 2030.
High uncertainty on all of these. By far the biggest factor determining my forecast is "how much effort are people going to put into this"; e.g. if we didn't get good AI copy-editors by 2030 my best explanation is "no competent organization tried to do it". Probably many of them could be done today by a company smaller than OpenAI (but with similar levels of expertise and similar access to funding).
But she thinks it'll probably take till around 2050 for us to get transformative AI, and (I think?) AGI as well.
I'm similar on TAI, and want to know what you mean by AGI before giving a number for that.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-14T05:59:41.221Z · LW(p) · GW(p)
Nice, thanks!
Ajeya describes a "virtual professional" and says it would count as TAI; some of the criteria in the virtual professional definition are superhuman speed and subhuman cost. I think a rough definition of AGI would be "The virtual professional, except not necessarily fast and cheap." How does that sound as a definition?
What do you think about chatbots? Do you think sometime in the twenties:
--A billion people will talk to a chatbot every day for fun / friendship (as opposed to Alexa-style assistant stuff)
--At least ten people you know will regularly talk to chatbots for fun
What about self-driving cars? Do you think they'll happen in the twenties?
What about AI-powered prediction markets and forecasting tournament winners?
↑ comment by Rohin Shah (rohinmshah) · 2021-07-14T08:08:54.187Z · LW(p) · GW(p)
Ajeya describes a "virtual professional" and says it would count as TAI; some of the criteria in the virtual professional definition are superhuman speed and subhuman cost. I think a rough definition of AGI would be "The virtual professional, except not necessarily fast and cheap." How does that sound as a definition?
I'm assuming it also has to be a "single system" (rather than e.g. taking instructions and sending them off to a CAIS-like distributed network of AI systems that then do the thing). We may not build AGI as defined here, but if we instead talk about when we could build it (at reasonable expense and speed), I'd probably put that around or a bit later than the TAI estimate, so 2050 seems like a reasonable number.
Hmm, a billion users who use it nearly every day is quite a lot. I feel like just from a reference class of "how many technologies have a billion users who use it every day" I'd have to give a low probability on that one.
Google Search has 5.4 billion searches per day, which is a majority of the market; so I'm not sure if web search has a billion users who use it nearly every day.
Social media as a general category does seem to have > 1 billion users who use it every day (e.g. Facebook has > 2 billion "daily active users").
On the other hand, Internet access and usage is increasing, e.g. the most viewed YouTube video today probably has an order of magnitude more views than the most viewed video 8 years ago. Also, it seems not totally crazy for chatbots to significantly replace social media, such that "number of people who use social media" is the right thing to be thinking about.
Still, overall I'd guess no, we probably won't have a billion people talking to a chatbot every day. Will we have a chatbot that's fun to talk to? Probably.
At least ten people you know will regularly talk to chatbots for fun
That seems quite a bit more likely; I think I do expect that to happen (especially since I know lots of people who want to keep up-to-date with AI, e.g. I know a couple of people who use GPT-3 for fun).
What about AI-powered prediction markets and forecasting tournament winners?
I don't know what you mean by this. We already use statistics to forecast tournament winners, and we already have algorithms that can operate on markets (including prediction markets when that's allowed). So I'm not sure what change from the status quo you're suggesting.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-07-14T08:58:22.656Z · LW(p) · GW(p)
Yeah, fair enough, a billion is a lot & some of my questions were a bit too poorly specified. Thanks for the answers!
↑ comment by Lech Mazur (lechmazur) · 2021-07-13T21:51:37.739Z · LW(p) · GW(p)
When it comes to medical diagnosis, I agree that the regulations will slow the adoption rate in the U.S. But then there is China. The Chinese government can collect and share huge amounts of data with less worry about privacy. And looking at the authors of ML papers, you cannot miss Chinese names (though some are U.S.-based, of course).
Your statement about AI copy editors is definitely true (I have some first-hand knowledge about what's possible but not yet publicly available).
↑ comment by Rohin Shah (rohinmshah) · 2021-07-14T07:50:47.977Z · LW(p) · GW(p)
Yeah, I'm just talking about the US and probably also Europe for the medical diagnosis part. I don't have a strong view on what will happen in China.