irving's Shortform

post by irving (judith) · 2023-04-28T13:57:10.589Z · LW · GW · 3 comments

comment by irving (judith) · 2023-04-28T13:57:11.144Z · LW(p) · GW(p)

Most common anti-safety arguments I see in the wild, not steel-manned but also not straw-manned:

  • There’s currently no evidence of a malign superintelligence existing, therefore the risk can be dismissed without evidence
  • We’re only pretending to be worried, because if we truly were, we would resort to violence
  • Yudkowsky is calling for violence
  • Claiming that something as momentous as the end of the world could happen might influence people to commit violence, therefore warning about the end of the world is bad
  • Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity
  • When the time comes, we’ll just figure it out
  • There were other new technologies that people warned would cause bad outcomes, and those warnings didn’t pan out
  • We didn’t know whether nuclear experimentation would end the world, but we went ahead with it anyway and it didn’t (omitting that careful effort was first put into ensuring this risk was minuscule)
  • My personal favorite: AI doom would happen in the future, and anything happening in the future is unfalsifiable, therefore it is not a scientific claim and should not be taken seriously.
Replies from: TAG
comment by TAG · 2023-05-01T16:46:52.038Z · LW(p) · GW(p)

“Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity”

Currently, they seem to have a lot of trouble explaining the motivation; the “how” steps are much easier.

Replies from: judith
comment by irving (judith) · 2023-05-02T03:37:47.500Z · LW(p) · GW(p)

This post was meant as a summary of common rebuttals. I haven’t actually heard much questioning of motivation, since instrumental convergence seems fairly intuitive. The more common question is how an AI could actually, physically carry out the destruction.