Jason Silva on AI safety

post by curiousepic · 2012-05-09T18:07:54.070Z · 24 comments

Just an FYI that Jason Silva, a "performance philosopher" who is quickly gaining popularity and audience, seems to have given little thought to the existential threat of AGI, or not to have been exposed to the proper arguments for it, or to be unconvinced by them. But of course, perhaps this optimism is what has allowed him to become so engagingly exuberant.

"And I think if they're truly trillions of times more intelligent than us, they're not going to be less empathetic than us---they're probably going to be more empathetic. For them it might not be that big of deal to give us some big universe to play around in, like an ant farm or something like that. We could already be living in such a world for all we know. But either way, I don't think they're going to tie us down and enslave us and send us to death camps; I don't think they're going to be fascist A.I.'s. "

Anyone have the connections to change his mind and help the X-risk meme piggyback on his voice? Perhaps by inviting him to the Singularity Summit?

24 comments

comment by Paul · 2012-05-09T20:33:32.557Z

Jason Silva spoke at last year's Singularity Summit.

Replies from: curiousepic
comment by curiousepic · 2012-05-09T20:49:48.674Z

Ah, thanks, and apologies for not researching further. Perhaps that ship has sailed, then.

comment by [deleted] · 2012-05-09T23:33:04.629Z

Why should we care what one minor TV presenter thinks? Assuming that all of Eliezer's direst predictions were true (and I don't have anything like the confidence in them that he does, but I presume from the post that you do), I could name, off the top of my head and without even thinking very hard, a thousand people whom it would be a better idea to convince of the risk.

I think what you're doing here is quite close to privileging the hypothesis: "why don't we try convincing... this guy?" The amount of effort it would take to target one z-list celebrity for 'conversion', compared to the expected reward, suggests that almost anything would be a better idea.

comment by [deleted] · 2012-05-09T18:56:06.307Z

I would prefer to see fewer of these "Here's someone saying something vaguely relevant to transhumanism/FAI"-type posts. I don't mind when it's a well-known thinker whose opinion is actually worth updating on, or if the person in question can reach an extremely large audience, but I don't think someone like Jason Silva is worth a discussion post.

Replies from: shminux, curiousepic
comment by shminux · 2012-05-09T19:20:02.350Z

You missed OP's point, which is crowdsourcing:

Anyone have the connections to change his mind and help the X-risk meme piggyback on his voice?

Replies from: None
comment by [deleted] · 2012-05-09T19:48:56.524Z

Sorry, my phrasing wasn't very clear. I understood OP's intent; I just don't want to see outreach posts about lesser-known people.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-10T07:14:17.861Z

Using them for practice might be useful and instructive.

comment by curiousepic · 2012-05-09T19:17:09.215Z

It's my hope that Jason can update on our ideas, and he certainly does seem to have the potential to reach a large, if not extremely large, audience. And it's my intuition that he will be more inclined to update while he is still gaining popularity than after it peaks.

In my mind, this falls under rationality and X-risk outreach, and lies squarely within Discussion territory.

Replies from: None
comment by [deleted] · 2012-05-09T19:47:17.921Z

I suspect we have differing expectations about how well-known Silva will become. This isn't a precise prediction, but I believe with ~65% confidence that he is at roughly the peak of his popularity (measured in terms of how frequently his name appears in blogs and major news outlets).

Personally, I would prefer to see fewer posts on x-risk outreach, especially those focused on contacting specific people, and especially when said person isn't very well-known.

Replies from: curiousepic, faul_sname
comment by curiousepic · 2012-05-09T20:08:18.701Z

I agree that resources are well spent reaching out to those with a larger audience. But I would assert that it takes fewer resources to influence those with a smaller current audience: the larger one's audience, the more strongly one's current opinions are mentally reinforced.

And he seems a prime target for an invitation to the Singularity Summit, where he'd hopefully have influential social contact with SI folks.

Can I ask why you prefer to see fewer posts on outreach?

comment by faul_sname · 2012-05-10T02:22:41.371Z

That estimate sounds, if anything, a bit low. However, if there is a significant chance he will become very popular in the future, it may still be worthwhile to introduce him to those topics now.

comment by Luke_A_Somers · 2012-05-09T19:22:11.465Z

For starters, I'd ask him to audit those beliefs. Why does he think that?

Replies from: curiousepic
comment by curiousepic · 2012-05-09T19:35:03.324Z

Superficially, it seems like he's assuming intelligence implies benevolence.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-05-09T19:51:20.767Z

Well, yeah, but why does he think THAT?

Replies from: XiXiDu, earthwormchuck163
comment by XiXiDu · 2012-05-10T11:10:52.578Z

Superficially, it seems like he's assuming intelligence implies benevolence.

Well, yeah, but why does he think THAT?

One way to think about it.

comment by earthwormchuck163 · 2012-05-09T20:25:15.589Z

Because it's so obvious that it doesn't require further examination. (Of course this is wrong and it does, but he hasn't figured that out yet.)

Replies from: shminux
comment by shminux · 2012-05-09T21:25:22.446Z

Of course this is wrong and it does, but he hasn't figured that out yet

That's quite condescending. How do you know which one of you is wrong?

Replies from: David_Gerard, earthwormchuck163
comment by David_Gerard · 2012-05-10T07:13:24.715Z

The one-line answer is "'Superintelligence implies supermorality!' thought the cow as the bolt went through its brain."

comment by earthwormchuck163 · 2012-05-09T21:56:09.558Z

I'm not saying the apparent object-level claim (i.e., intelligence implies benevolence) is wrong. I'm just saying that it does in fact require further examination, whereas here it looks like an invisible background assumption.

Did my phrasing not make it clear that this is what I meant, or did you interpret me as I intended and still think it sounds condescending?

Replies from: shminux, timtyler
comment by shminux · 2012-05-09T22:54:27.860Z

Just that it does in fact require further examination.

Ah, that makes more sense. I did indeed misinterpret it; sorry.

Replies from: earthwormchuck163
comment by earthwormchuck163 · 2012-05-09T23:02:59.210Z

No need to apologize. It's clear in hindsight that I made a poor choice of words.

comment by timtyler · 2012-05-09T23:14:50.725Z

I'm not saying the apparent object-level claim (i.e., intelligence implies benevolence) is wrong.

I think few would claim that. We can point to smart-but-evil folk to demonstrate otherwise. The more defensible idea is that there's a correlation.

comment by timtyler · 2012-05-09T23:10:42.348Z

Just an FYI that Jason Silva, a "performance philosopher" who is quickly gaining popularity and audience, seems to have given little thought to the existential threat of AGI, or not to have been exposed to the proper arguments for it, or to be unconvinced by them. But of course, perhaps this optimism is what has allowed him to become so engagingly exuberant.

Do you disagree with the material you quoted? If so, can you say why?

comment by Alerus · 2012-05-09T18:17:26.571Z

I'm also wildly optimistic. Not because I don't think there are challenges we need to overcome, but because by the time we're able to make an AI as smart as us, I think we'll almost surely have those problems worked out.