What Peter Thiel thinks about AI risk
post by Dr_Manhattan
score: 12 (13 votes)
This is probably the clearest statement from him on the issue:
The relevant discussion starts 25:30 into the video.
TL;DR: he thinks it's an issue, but he also feels AGI is very distant and is hence less worried about it than Musk.
I recommend the rest of the lecture as well; it's a good summary of "Zero to One", with a good Q&A afterwards.
Comments sorted by top scores.
comment by jessicat
score: 38 (38 votes)
Question: Are you as afraid of artificial intelligence as your Paypal colleague Elon Musk?
Thiel: I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky)
score: 7 (7 votes)
Context: Elon Musk thinks there's an issue in the 5-7 year timeframe (probably due to talking to Demis Hassabis at Deepmind, I would guess). By that standard I'm also less afraid of AI than Elon Musk, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very very common conflation).
comment by Vulture
score: 0 (0 votes)
Musk thinks there's an issue in the 5-7 year timeframe
Hopefully his (financial) enthusiasm isn't too dampened when that prediction fails to be vindicated.
comment by examachine
score: -12 (10 votes)
I'm sorry to say that even a chatbot might refute this line of reasoning. Of course, economic impact is more important than such unfounded concerns. That might be the greatest danger of AI software: it might end up refuting a lot of pseudo-science about ethics.
Countries are starting wars over oil. High technology is a good thing; it might make us wealthier, more capable, and more peaceful, if employed wisely, of course. What we must concern ourselves with is how wise and how ethical we ourselves are in our own actions and plans.