What Peter Thiel thinks about AI risk

post by Dr_Manhattan · 2014-12-11T21:22:27.167Z · LW · GW · Legacy · 6 comments


This is probably the clearest statement from him on the issue:

http://betaboston.com/news/2014/12/10/audio-peter-thiel-visits-boston-university-to-talk-entrepreneurship-and-backing-zuck/

The relevant exchange starts at about 25:30.


TL;DR: he thinks it's an issue, but he also believes AGI is very distant, and hence he is less worried about it than Musk is.


I recommend the rest of the lecture as well; it's a good summary of "Zero to One", and there is a good Q&A afterwards.

6 comments


comment by jessicat · 2014-12-11T22:37:03.725Z · LW(p) · GW(p)

Transcript:

Question: Are you as afraid of artificial intelligence as your PayPal colleague Elon Musk?

Thiel: I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.

Replies from: Eliezer_Yudkowsky, examachine
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-12-14T19:00:33.992Z · LW(p) · GW(p)

Context: Elon Musk thinks there's an issue in the 5-7 year timeframe (probably due to talking to Demis Hassabis at DeepMind, I would guess). By that standard I'm also less afraid of AI than Elon Musk, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very very common conflation).

Replies from: lukeprog, Vulture
comment by lukeprog · 2015-01-12T01:10:01.633Z · LW(p) · GW(p)

The Rob Bensinger post on this is now here.

comment by Vulture · 2014-12-14T23:33:38.359Z · LW(p) · GW(p)

Musk thinks there's an issue in the 5-7 year timeframe

Hopefully his financial enthusiasm isn't too dampened when that prediction fails to be vindicated.

comment by examachine · 2014-12-12T19:10:15.901Z · LW(p) · GW(p)

I'm sorry to say that even a chatbot might refute this line of reasoning. Of course, the economic impact is more important than such unfounded concerns. That might be the greatest danger of AI software: it might end up refuting a lot of pseudo-science about ethics.

Countries are starting wars over oil. High technology is a good thing; it might make us wealthier, more capable, and more peaceful, if employed wisely, of course. What we must concern ourselves with is how wise and how ethical we ourselves are in our own actions and plans.

comment by ESRogs · 2014-12-12T08:32:00.696Z · LW(p) · GW(p)

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.