[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

post by wallowinmaya · 2016-07-21T20:22:06.062Z · score: 9 (14 votes) · LW · GW · Legacy · 5 comments

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try to get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective on AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

5 comments

Comments sorted by top scores.

comment by Manfred · 2016-07-21T21:48:54.159Z · score: 12 (15 votes) · LW(p) · GW(p)

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

comment by Wei_Dai · 2016-07-22T15:49:22.152Z · score: 7 (8 votes) · LW(p) · GW(p)

That's funny. :) But these people actually sound remarkably sane. See here and here for example.

comment by The_Jaded_One · 2016-07-23T12:58:00.691Z · score: 6 (7 votes) · LW(p) · GW(p)

Just commenting to point out that I'm having a fabulous day, and have a very painless, enjoyable life. I struggle to even understand what suffering is, to be honest, so make a note of that, any negative utilitarians who may be listening!

comment by [deleted] · 2016-07-22T13:41:50.302Z · score: 6 (7 votes) · LW(p) · GW(p)

Foundational Research Institute promotes compromise with other value systems. See their work here, here, and here, and the quoted section in the OP.

Rest easy, negative utilitarians aren't coming for you.

comment by RomeoStevens · 2016-07-22T20:34:33.180Z · score: 2 (2 votes) · LW(p) · GW(p)

If we get only one thing right, I think a plausible candidate is the right to exit. (If you have limited optimization power, narrow the scope of your ambition, blah blah.)