Why focus on AI?
post by nomore · 2018-04-08T22:58:47.833Z · 6 comments
Why does this community focus on AI over all other possible apocalypses? What about pandemics, runaway climate change, etc.?
comment by Ben Pace (Benito) · 2018-04-08T23:08:23.208Z
Here's the Superintelligence FAQ.
comment by TedSanders · 2018-04-09T00:37:43.850Z
The Elephant in the Brain convinced me that many things humans say are not meant to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:
- Less knowledge about AGI. Because there is less knowledge about AGI than about pandemics or climate change, it's easier to share opinions before feeling ignorant and withdrawing from the conversation. This results in more conversations.
- A disbelieving public. Implicit in arguments 'for' a position is the presumption that many people are 'against' it. That is, believing 'X is true' is by itself insufficient to motivate someone to argue for X; someone will only argue for X if they additionally believe that others don't believe X. In the case of AI, perhaps arguments for AI risk are more likely to encounter disagreement than arguments for pandemic risk. This encountered disagreement spurs more conversations.
- Positive feedback. The more a community reads, thinks, and talks about an issue, the more things they find to say and the more sophisticated their thinking becomes. This begets more conversations on the topic, in a reinforcing feedback loop.
(Disclaimer: I personally don't worry about AI; I am skeptical that AGI will happen in the next 100 years, and skeptical that AGI will take over Earth in under 100 years, but I nonetheless recognize that these outcomes are more than 0% probable. I don't have a great mental model of why others disagree, but I believe it can be partly explained by software people being more optimistic than hardware people, since software people have experienced more amazing successes in the past couple of decades.)
↑ comment by ChristianKl · 2018-04-09T07:43:16.661Z
If you think there's good information about bioengineered pandemics out there, what sources would you recommend?
Multiple LW surveys considered those to be a more likely X-risk, and if there were a good way to spend X-risk EA dollars on them, I think the topic would likely get funding, but currently there don't seem to be good targets.
comment by gbear605 · 2018-04-09T00:01:07.405Z
Basically, because many of those other things already have a large number of people working on them, while a significant portion of all the AI risk researchers in the world are part of this community (this was even more the case when Yudkowsky started LessWrong). Also, a lot of people in this community have interests and skills in computer science, so applying that to AI is much less of a stretch than, say, learning biology so they can help stop pandemics.
A similar question (though just asking about climate change) was answered here.
comment by ESRogs · 2018-04-09T08:36:23.457Z
What about pandemics, runaway climate change, etc.?
None of those other problems fights back. That makes AI scarier to me.
The other problems are worth thinking about, but AI seems most significant.
↑ comment by ESRogs · 2018-04-09T08:41:11.566Z
Let's say you're hiking in the mountains, and you find yourself crossing a sloped meadow. You look uphill and see a large form. And it's moving towards you!
Are you more scared if the form turns out to be a boulder or a bear? Why?
The boulder could roll over you and crush you. But if you get out of its path, it won't change course. Can't say the same for the bear.