CalvinCash's Shortform

post by CalvinCash · 2023-12-06T01:08:15.494Z · LW · GW · 2 comments


comment by CalvinCash · 2023-12-05T19:05:09.102Z · LW(p) · GW(p)

Something I've wondered in relation to AI takeover is whether it's actually useful to treat takeover as certain. Consider that the only future in which humans continue to exist, and thus the only future in which human action is meaningful and our predictions carry any practical weight, is the one where AI doesn't take over, or at least doesn't cause extinction.

Sure, speculating about AI takeover is an excellent debate topic and good intellectual engagement, but for all practical purposes, shouldn't we (that is to say, people at large) simply assume that AI will not cause extinction?

That isn't to say I don't think AI is a threat at all; I just think it would be good for people in general to be more optimistic about artificial intelligence: not believing doom is certain, but seeing it as an avoidable possibility that a lot of very smart people are working hard to prevent.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-12-06T06:20:21.017Z · LW(p) · GW(p)

Dodging questions like this and living in the world where they go well is something you can do approximately once in your life before you stop living in reality and are in an entirely-imaginary dream world. Twice if you're lucky and neither of the hypotheticals were particularly certain.