Why is Toby Ord's likelihood of human extinction due to AI so low?
post by ChristianKl · 2022-04-05T12:16:00.278Z · 3 comments
This is a question post.
Contents
- Answers
  - 19 Buck
  - 15 Davidmanheim
  - 5 fin
  - 3 ACrackedPot
- 3 comments
In The Precipice: Existential Risk and the Future of Humanity, Toby Ord gives 1/10 as the likelihood of humanity dying this century due to AI risk. This seems to be a very different view from that of MIRI. Is FHI in general much more optimistic, or is it just Toby Ord who is this optimistic about AI risk?
Answers
answer by Buck
FWIW, if you look at Rob Bensinger's survey of people who work on long-term AI risk, the average P(AI doom) is closer to Ord's than to MIRI's. So I'd say that Ord isn't that different from most people he talks to.
You might enjoy these posts where people argue for particular values of P(AI doom), all of which are much lower than Eliezer's:
- Paul Christiano interviewed by AI Impacts
- Rohin Shah and me on the FLI podcast (Ctrl-F "probability of AI-induced existential risk")
"There is significant uncertainty remaining in these estimates and they should be treated as representing the right order of magnitude—each could easily be a factor of 3 higher or lower."
"In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century."
"Some of my colleagues give higher chances than me, and some lower. But for many purposes our numbers are similar. Suppose you were more skeptical of the risk and thought it to be one in 100. From an informational perspective, that is actually not so far apart: it doesn’t take all that much evidence to shift someone from one to the other. And it might not be that far apart in terms of practical action either—an existential risk of either probability would be a key global priority."
- Toby Ord, The Precipice
answer by fin
As Buck points out, Toby's estimate of P(AI doom) is closer to the 'mainstream' than MIRI's, and close enough that "so low" doesn't seem like a good description.
I can't really speak on behalf of others at FHI, of course, but I don't think there is some 'FHI consensus' that is markedly higher or lower than Toby's estimate.
Also, I just want to point out that Toby's 1/10 figure is not for human extinction; it is for existential catastrophe caused by AI, which includes scenarios that don't involve extinction (forms of 'lock-in'). His estimate for extinction caused by AI is therefore lower than 1/10.
answer by ACrackedPot
Suppose astronomers detect an asteroid and suggest a 10% chance of it hitting the Earth on a near pass in 2082. Would you regard this assessment of risk as optimistic or pessimistic? How many resources would you dedicate to solving the problem?
My understanding is that 10% isn't actually that far removed from what many people who are deeply concerned about AI think (or, for that matter, from what people who aren't that concerned think; it's quite remarkable how differently people can see that same 10%). They just happen to think that a 10% chance of total extinction is a pretty bleak thing and ought to get our full attention. Indeed, I'd bet there's somebody around here who is deeply concerned about AI risk and assesses the risk at 1%. Remember that a 10% risk of total human annihilation is greater than the risk COVID posed to any one individual, and our society went through massive upheaval to limit that risk.
Which is to say: I don't think FHI or Toby Ord is significantly more optimistic than people who are deeply concerned about AI risk.
↑ comment by ChristianKl · 2022-04-05T15:35:45.499Z
I didn't speak about absolute optimism; I said "more optimistic than".
> How many resources would you dedicate to solving the problem?
That's an argument you can make for spending much more money on alignment research.
It is, however, not an argument against stronger measures, such as the kind of government regulation that would make it impossible to develop AGI.
↑ comment by ACrackedPot · 2022-04-05T16:25:08.279Z
The topic question is "Why is Toby Ord's likelihood of human extinction due to AI so low?"
My response is that it isn't low; for a human-extinction event, a 10% likelihood is very high.
You ask for a comparison to MIRI, but link to EY's commentary; EY implies a likelihood of human extinction of, basically, 100%. From a Bayesian updating perspective (i.e., in log-odds terms), 10% is closer to 50% than 100% is to 99%. Ord is basically in line with everybody else; it is EY who is entirely off the charts. So the question of why Ord's number is so low is being raised in the context of a number which is genuinely, unusually high; the meaningful question isn't what differentiates Ord from EY, but what distinguishes EY from everybody else. And honestly, it wouldn't surprise me if EY also thought the risk was 10%, and thought that a risk of 10% justifies lying and saying the risk is 100%, and that's the entirety of the discrepancy.
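To make the log-odds framing concrete, here is a minimal sketch (the 99%/99.9% comparison is illustrative only, not anyone's actual estimate, and `log_odds_bits` is just a helper name for this example):

```python
import math

def log_odds_bits(p):
    """Log-odds of probability p, measured in bits (the natural scale for Bayesian updates)."""
    return math.log2(p / (1 - p))

# Evidence needed to move between estimates, measured in bits.
# 50% -> 10% is a modest update; 99% -> 100% would require infinitely many bits
# (the log-odds of 1.0 is undefined), and even 99% -> 99.9% is a slightly
# larger update than 50% -> 10%.
print(log_odds_bits(0.5) - log_odds_bits(0.1))      # ~3.17 bits
print(log_odds_bits(0.999) - log_odds_bits(0.99))   # ~3.33 bits
```

On that scale, a 10% estimate sits within a few bits of the crowd, while a near-100% estimate is arbitrarily far from everyone else's.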
As for potential reasons, there are any number. For example: maybe superintelligence of the sort that could reliably wipe humanity out just isn't possible, and what we could create would only succeed 10% of the time; maybe it isn't possible on the hardware we'll have available in the next century; maybe there's a lot more convergence in what we might think of as morality-space than we currently have reason to expect; or maybe there is a threshold of intelligence above which acausal negotiation is standard and any given superintelligence will limit particular kinds of actions. That's not to mention the many possibilities where the people developing the superintelligence get it horribly wrong, but in a way that doesn't lead to human extinction. We're basically guessing at unknown unknowns.
From my perspective, intelligence is a lot more complicated than most people think. The current batch of people are doing the intelligence-construction equivalent of trying to build a house by randomly nailing boards to other boards, and thinking they're onto something when they manage to create something that behaves like a roof, in the sense that it keeps the rain off your head if you hold it just right. I think even a 0.01% risk of human extinction is giving AI development a lot of credit.
(Also, I think people greatly underestimate how difficult it will be, once they get the right framework to enable intelligence, to get that framework to produce anything useful, as opposed to a superstitious idiot / internet troll.)
3 comments
comment by tailcalled · 2022-04-05T14:16:35.967Z
How does he estimate this probability? Does he say?
↑ comment by Davidmanheim · 2022-04-05T15:52:54.917Z
Kind of; see my answer quoting the book, above.
↑ comment by ChristianKl · 2022-04-05T15:34:18.531Z
I'm not aware of an explicit probability estimate by EY, but in the post EY did speak about doubling the chance from a baseline of 0.x%.