A simple presentation of AI risk arguments

post by Seth Herd · 2023-04-26T02:19:19.164Z

Here is a draft of an accordion-style AGI risk FAQ.

I'm interested in three types of feedback:

  1. Suggestions for where best to host this kind of format
  2. Feedback on the content
  3. Thoughts on the general approach

The goal here is something that's very easy to read. One major point of the accordion-style presentation is to keep the article from feeling overwhelming. I also wanted to let people pursue their own biggest questions without having to read content they're less interested in. The whole idea is a low effort threshold: draw the reader in from minimal interest to a little more. The purpose is to inform, although I'm also hoping people leave agreeing with me that this is something society at large should be taking a bit more seriously. I'm going for something you could point your mother to.

The challenge is that the AGI risk arguments aren't actually simple for someone who hasn't thought about them. There are major sticking points for many people, and they differ from person to person. That's why I've been thinking about a FAQ that follows different lines of questioning. The accordion style is an attempt to do that in a way that's quick and smooth enough to hold people's attention. The intent is that all bullet points start out collapsed (a minimal sketch of that mechanism follows below).
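For concreteness, here is a minimal sketch of the collapsed-by-default mechanics, assuming the eventual host allows raw HTML: the native `<details>`/`<summary>` elements render collapsed unless given an `open` attribute, and follow-up questions can simply be nested inside their parent. The `FaqNode` and `renderFaq` names and the sample questions are placeholders for illustration, not anything taken from the draft itself.

```typescript
// Sketch: render a nested question tree as collapsed-by-default HTML
// using the native <details>/<summary> elements.

interface FaqNode {
  question: string;   // the line the reader sees before expanding
  answer: string;     // short answer revealed on expansion
  followups?: FaqNode[]; // deeper questions, themselves collapsed
}

function renderFaq(nodes: FaqNode[]): string {
  return nodes
    .map((node) => {
      // <details> without the "open" attribute starts collapsed,
      // matching the "all bullet points start out collapsed" intent.
      const nested = node.followups ? renderFaq(node.followups) : "";
      return `<details><summary>${node.question}</summary><p>${node.answer}</p>${nested}</details>`;
    })
    .join("\n");
}

// Example: one top-level question with one follow-up.
const faq: FaqNode[] = [
  {
    question: "Why would an AI want anything at all?",
    answer: "Goals are useful, so we are likely to build AI that pursues them.",
    followups: [
      {
        question: "Couldn't we just give it safe goals?",
        answer: "Specifying safe goals precisely turns out to be hard.",
      },
    ],
  },
];

console.log(renderFaq(faq));
```

Any host that accepts raw HTML should display output like this with no JavaScript, since the collapsing behavior is built into the element; that keeps the format portable if the content gets transplanted later.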

If this seems worthwhile, I will expand it with deeper content. It is currently very much an incomplete draft, because I strongly suspect I'll get better suggestions for hosting and format, and moving to a new format would require a lot of transplanting work.

It also needs links to other similar articles that go into more depth. My first would be to the stampy.ai project from Rob Miles and collaborators. It has a similar structure of letting the user choose questions, plus the huge advantage of letting users type in their own questions and running a semantic search for answers to similar ones. My attempt differs in that it's aimed at a more general audience than the tech types who have, to date, become interested in AI safety.

I think we're likely to see repeating waves of new public interest in AI safety from here on out. I'm looking forward to the opportunities presented by AI scares and changing public beliefs, but only if we can avoid creating polarization, as I discuss in that article. I think we are very likely to get more scares, and I agree with Leopold's point (made briefly here) that the response to COVID suggests we may see a very rapid wave of intense public concern. I think we'd better be prepared to surf that wave rather than just watch it sweep past. (To that end, we should also figure out which public policies would actually improve our odds, but that is a separate issue.)

As such, I'm going to put at least a little effort into refining an introductory message for the newly concerned, and I'd love any suggestions that people want to offer.

Edit: While I don't have ideas about specific policies, I think that raising public awareness of AI x-risk is probably a net good. Contributing to a public panic could easily be bad, so I don't want to present the strongest arguments without also presenting grounds for optimism and counterarguments. In addition, I think convincing others of what you see as the truth is easier than convincing them to believe whatever you want; humans have decent lie-detection and resistance to propaganda. I do think that having the average human believe something like "these things are dangerous, and if you're developing them without being really careful, you're a bad human" would be a net benefit. I'd expect such social pressure to diffuse upward, through friends, relatives, coworkers, and underlings, toward the people actually making decisions.
