The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

post by Gordon Seidoh Worley (gworley) · 2018-02-23T21:42:20.604Z · LW · GW · 8 comments

This is a link post for https://arxiv.org/abs/1802.07228

Comments sorted by top scores.

comment by tristanm · 2018-02-25T19:20:42.238Z · LW(p) · GW(p)

I could probably write a lot more about this somewhere else, but I'm wondering if anyone else felt that this paper was kind of shallow. This comment is too brief to really do that feeling justice, but I'll decompose it into the two things I found disappointing:

  1. "Intelligence" is defined in such a way that leaves a lot to be desired. It doesn't really define it in a way that makes it qualitatively different than technology in general ("tasks thought to require intelligence" is probably much less useful than "narrowing the set of possible futures into one that match an agent's preference ordering."). For this reason, the paper imagines a lot of scenarios that amount to basically one party being able to do one narrow task much better than another party. This is not specific enough to really narrow us down to any approaches that deal with AI more generally.
  2. Because the authors leave their framework fuzzy, their suggestions for how to respond to the problem inherit the same fuzziness. For example, their first suggestion is that policy leaders should consult with AI researchers. This reads a bit like an applause light: it offers little about how to make such consultation more likely, or how to ensure that policy leaders are well informed enough to choose the right advisers and take their advice seriously.

Overall I'm happy that these kinds of things can be discussed by a large group of various organizations. But I think any public effort to mitigate AI risk needs to be very careful that it doesn't lose something extremely important by trying to accommodate too many uncertainties and disagreements at once.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-02-25T19:58:41.492Z · LW(p) · GW(p)

My cynical take is that the point of writing papers like this is for them to be cited, not read.

comment by Qiaochu_Yuan · 2018-02-23T22:36:09.087Z · LW(p) · GW(p)

Meta: It feels weird to give people karma for link posts to things they didn't write. StackExchange has a "community wiki" flag for posts, which awards no reputation (and lets anyone edit them); should we have something like that?

Replies from: habryka4, Raemon
comment by habryka (habryka4) · 2018-02-24T03:31:21.656Z · LW(p) · GW(p)

Something like "giving half karma, capped at a certain number" seems more reasonable to me, since someone who posts a link is still providing a valuable service to the community.
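
A minimal sketch of what that rule might look like, in Python (the cap value and function name are hypothetical; no specific number is proposed above):

```python
HALF_KARMA_CAP = 25  # hypothetical cap; the comment doesn't name a number

def linkpost_karma(raw_votes: int) -> int:
    """Award half the usual karma for a link post, clipped at a fixed cap."""
    return min(raw_votes // 2, HALF_KARMA_CAP)

# e.g. 30 upvotes -> min(15, 25) = 15 karma; 80 upvotes -> min(40, 25) = 25
```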

comment by Raemon · 2018-02-24T03:48:00.556Z · LW(p) · GW(p)

So far, I've never seen this sort of link post get that much karma, so I'm not too worried about it.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-02-25T18:56:18.602Z · LW(p) · GW(p)

Well, that's part of the problem, though. I think it's very good that this paper exists, but I didn't upvote this post because it felt weird to me. So the low karma total doesn't reflect the goodness of the link.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2018-02-25T22:25:30.646Z · LW(p) · GW(p)

I'm obviously a bit biased here, but I generally think worrying too much about who gets karma is an antipattern. I posted this link because I came across the paper, it had been out for a few days, and none of the authors had shared it here, so it seemed worth sharing with the LW community. And since karma has a logarithmic impact on capability on the site, it would take a lot of link gaming (which I assume is the main adversarial, free-riding behavior we don't want to encourage) for someone to accumulate much useful karma by doing nothing other than being first to post links to things that will get upvotes.

Put another way: there's enough noise in karma that you don't need to worry about it; it would take a lot of karma "misattribution" to have a serious impact on the quality of the site.
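
To illustrate the shape of that claim (LW's actual formula isn't given here; this is just a sketch assuming capability scales with log2 of karma):

```python
import math

def site_capability(karma: int) -> float:
    """Hypothetical model: site capability grows with log2 of karma,
    so each doubling of karma buys the same small increment."""
    return math.log2(max(karma, 1))

# Gaming your way from 100 to 1,000 karma adds about as much
# capability as going from 10 to 100: roughly 3.3 units each.
print(site_capability(1000) - site_capability(100))  # ~3.32
print(site_capability(100) - site_capability(10))    # ~3.32
```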

Also, just out of curiosity, would you have felt differently about voting if I had, say, provided an executive summary rather than just giving the link?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2018-02-26T03:04:26.950Z · LW(p) · GW(p)

That's fair. And yes, I would have been happy to vote if you'd provided a one-paragraph summary or something.