Geoffrey Hinton - Full "not inconceivable" quote
post by WilliamKiely · 2023-03-28T00:22:01.626Z · LW · GW · 2 comments
CBS interview with Geoffrey Hinton published March 25th, 2023:
In the last two days I've seen several people share news articles about Hinton saying it is "not inconceivable" that AI will wipe out humanity in this interview.
The articles often don't give the full context of the quote, so I made this post to do that, even though it might be overkill:
Quote
Q:[1] "This is like the most pointed version of the question, and you can just laugh it off or not answer it if you want, but what do you think the chances are of AI just wiping out humanity? Can we put a number on that?"
[27:40]
Hinton: "It's somewhere between, um, naught percent and 100 percent. [Hinton *laughs*] I mean, I think, I think it's not inconceivable. That's all I'll say. I think if we're sensible, we'll try and develop it so that it doesn't. But what worries me is the political system we're in, where it needs everybody to be sensible."
Note that just before this, Hinton was talking about how he thinks "if Putin had autonomous lethal weapons he'd use them right away," which may have led him to say that what worries him is the political system we're in.
Transcript leading up to the quote
[25:52]
Q: "Some people are worried that this could take off very quickly and we just might not be ready for that. Does that concern you?"
Hinton: "It does a bit. Until quite recently I thought it was going to be like 20-50 years before we had general purpose AI and now I think it may be 20 years or less, so."
Q: "Some people think it could be like 5 [years]. Is that silly?"
Hinton: "I wouldn't completely rule that possibility out now, whereas previously, a few years ago, I would have said 'no way.'"
Q: "And then, some people say, AGI could be massively dangerous to humanity because we just don't know what a system that's so much smarter than us will do. Do you share that concern?"
Hinton: "I do a bit. Um, I mean obviously what we need to do is make this synergistic, have it so it helps people. And I think the main issue here, well one of the main issues, is the political systems we have. So, I'm not confident that President Putin is going to use AI in ways that help people."
[...]
Hinton: "I think if Putin had autonomous lethal weapons he'd use them right away."
Q: "This is like the most pointed version of the question, and you can just laugh it off or not answer it if you want, but what do you think the chances are of AI just wiping out humanity? Can we put a number on that?"
[27:40]
Hinton: "It's somewhere between, um, naught percent and 100 percent. [Hinton *laughs*] I mean, I think, I think it's not inconceivable. That's all I'll say. I think if we're sensible, we'll try and develop it so that it doesn't. But what worries me is the political system we're in, where it needs everybody to be sensible."
My understanding of Hinton's views
The interview doesn't seem to provide very strong evidence of what Hinton believes on this topic, and I'm not aware of whether he has stated his views elsewhere, but based on this interview I update in the direction of believing the following:
- Hinton seems more concerned about bad actors misusing AI than about good actors accidentally creating unaligned AI that destroys humanity.
- Hinton may or may not believe that the chance of AI wiping out humanity is >1%, or >10%.
- He may put significant weight on it and simply not have wanted to state a specific number on the record.
If anyone else has more information on Hinton's credence that AI will wipe out humanity, please share so I can update this to make it more informative.
1. ^ I abbreviate the interviewer, Brook Silva-Braga, as "Q."
2 comments
comment by dsj · 2023-03-28T07:59:07.858Z · LW(p) · GW(p)
Another interesting section:
Silva-Braga: Are we close to the computers coming up with their own ideas for improving themselves?
Hinton: Uhm, yes, we might be.
Silva-Braga: And then it could just go fast?
Hinton: That's an issue, right. We have to think hard about how to control that.
Silva-Braga: Yeah. Can we?
Hinton: We don't know. We haven't been there yet, but we can try.
Silva-Braga: Okay. That seems kind of concerning.
Hinton: Uhm, yes.