AI safety should be made more accessible using non-text-based media
post by Massimog · 2022-05-10T03:14:13.259Z · LW · GW
I've been doing some thinking on AI safety's awareness problem, and after a quick search I found that this post [LW · GW] summarizes my thoughts pretty well. In short, AI safety has an awareness problem in a way that other major crises do not (I'll draw parallels specifically with climate change in my analysis). Most ordinary people have not even heard of the problem. Of those that have, most do not understand the potential risks; they cannot concretely imagine the ways that things could go horribly wrong. I'll outline a few reasons I think this is an undesirable state of affairs, but on a surface level it should be obvious to most people convinced of the severity of the issue why the alignment problem should be garnering at least as much attention as climate change, if not more.
The reason I'm writing this up in the first place, though, is to point out a critical obstacle to raising awareness that I feel is somewhat overlooked: virtually all of the best entry-level material on the alignment problem is text-based. It is sadly the case that many, many people are simply unwilling to read anything longer than a short blog post or article, ever. Even among those who are, getting them to consume lengthy non-fiction on what appears at first glance to be a fairly dry technical topic is an almost insurmountable challenge. For people like that, there's currently no easy way to get invested in the problem, but it really doesn't have to be that way.
Climate change's media landscape
In sharp contrast with AI safety, a Google search brings up plenty of non-text-based material on climate change aimed at a general audience:
- Dozens of movies (including, ironically, one titled "Artificial Intelligence"). I will point out that this includes both non-fiction documentaries and, crucially, fiction that features climate change as a major worldbuilding or plot element.
- An endless pile of short, highly produced YouTube videos providing good explainers supported by engaging animated content.
- Illustrated children's books
- A few obscure videogames, though climate change was recently worked into a major mechanic in the very popular Civilization series.
- Captain Planet. If someone figures out how to make a Captain Planet for AI safety, that would be pretty incredible.
AI safety's media landscape
AI safety does, at least, have a small body of decent non-text-based material. However, each piece is flawed to a greater or lesser extent when it comes to reaching a wide audience:
- Robert Miles has done some fantastic work making video explainers on AI safety, but these mostly consist of him talking into a camera with the occasional medium-quality graphic or animation. This is quite impressive for someone who (I assume) spends most of their time doing actual work in the field, but a far cry from the condensed, high-quality climate change explainers produced by experienced animation teams mentioned above. Plus, these are often fairly academic explainers that fail to provide a strong narrative to latch onto.
- Universal Paperclips is a short browser-based idle game where you play as a paperclip maximizer attempting to... do its thing. The game provides a pretty good depiction of a relatively realistic fast takeoff scenario, but without context it gives no strong indication that the events of the game are something people should be worried could actually happen. I know I wouldn't be particularly convinced if this were the only thing someone had shown me on the topic. Its reach is also probably not helped by the fact that it looks mostly like a colorless, bare-bones HTML page where you just watch numbers go up.
- Sentient AI is a trope in a wide variety of sci-fi media, but even remotely realistic depictions of an unfriendly superintelligence rising to power are sorely lacking, and the actual state of things has probably done more harm than good for AI safety's credibility as a field.
Further suggestions for media to include in this section are welcome.
Example project idea
While the examples I've listed for climate change might give anyone reading this ideas for potential projects worth pursuing or funding, I do have one concrete idea for a project that might grab the attention of demographics who could be valuable to the field but wouldn't find out about it otherwise: I think someone should adapt Gwern's recent short story, It Looks Like You’re Trying to Take Over the World, into a graphic novel. Disclaimer: I am not an illustrator, I know next to nothing about that industry, and I don't even know what the copyright situation would be; this is just an idea that someone equipped to pursue it might not have thought of.
The main reason I've picked this story in particular is that it does its best to "imagine a hard takeoff scenario using solely known sorts of NN & scaling effects…" I envision a main storyline punctuated by short, illustrated explainers for the concepts involved (this might be harder than I'm imagining; I don't have the expertise to judge), meant to give the layperson a basic understanding or, failing that, to at least confirm that at every step of the way to doom there are concrete, known, and potentially intractable failure modes. I don't expect a project like this to bring AI safety into the mainstream, but I feel it would be a massive help in allowing someone worried about the issue to express to someone unfamiliar with the problem why they're concerned, in an entertaining and accessible way. And it might even be profitable!
Why bother?
In closing, I want to verbalize some of my intuitions about why it's even worth bringing these issues to more people's attention (this is by no means a comprehensive list):
- Policy is hard enough to push for these days, but I imagine it would be even harder if almost none of the officials or voters had even heard of, let alone understood, the issues underlying a given proposal.
- Working on a well-known crisis generally confers more status than working on a relatively obscure academic topic. Climate science as a career path has a status incentive that AI safety does not, whether you're directly working on technical solutions to the problems or not.
- Can't choose a career in a field you've never even heard of.
- While I'm under the impression that AI safety as a field is not doing *too* poorly in terms of funding, I still wonder how much more could be done with the resources governments can bring to bear on a major issue with broad popular support.
4 comments
comment by ChristianKl · 2022-05-10T14:37:13.280Z · LW(p) · GW(p)
Most people don't have a good idea of the risks of climate change. Instead of understanding the risks, they treat it as a partisan issue where it's about signaling tribal affiliation. The Obama administration did good work on regulating mercury pollution, largely outside of public debate, and poor work on CO2 pollution.
Climate science as a career path has a status incentive that AI safety does not, whether you're directly working on technical solutions to the problems or not.
Getting people who care more about status competition into AI safety might harm the field's ability to stay focused on object-level issues instead of status competition.
Can't choose a career in a field you've never even heard of.
I think there's a good chance that people whose intellectual life is such that they won't hear about AI safety would be net harmful if they got involved in AI safety.
It is sadly the case that many, many people are simply unwilling to read anything longer than a short blog post or article, ever.
While that's true, why do you believe that those people have something useful to contribute to AI safety on net?
↑ comment by Massimog · 2022-05-14T06:57:19.051Z · LW(p) · GW(p)
Yeah, I think this gets at a crux for me: I feel intuitively that it would be beneficial for the field if the problem were widely understood to be important. Maybe climate change was a bad example due to being so politically fraught, but then again maybe not; I don't feel equipped to make a strong empirical argument for whether all that political attention has been net beneficial for the problem. I would predict that issues that get vastly more attention tend to receive many more resources (money, talent, political capital) in a way that's net positive for efforts to solve them, but I admit I am not extremely certain about this and would very much like to see more data pertaining to it.
To respond to your individual points:
The Obama administration did get work on regulating mercury pollution largely outside of public debate and poor work on CO2 pollution.
Good point, though I'd argue there's much less of a technical hurdle to understanding the risks of mercury pollution than to understanding those of future AI.
Getting people who care more about status competition into AI safety might harm the field's ability to stay focused on object-level issues instead of status competition.
Certainly there may be some undesirable people who would be 100% focused on status and would contribute nothing to the object-level problem, but consider also those for whom status is only a partial consideration (maybe they're under pressure from family, are politicians, or are researchers using prestige as a heuristic to decide which fields to even pay attention to before judging them on their object-level merits). I'd argue that not every valuable researcher or policy advocate has the luxury or strength of character to completely ignore status, and that being a field that offers some slack in that regard might serve AI safety well.
I think there's a good chance that people whose intellectual life is such that they won't hear about AI safety would be net harmful if they got involved in AI safety.
You're probably right about this; I think the one exception might be children, who tend to have a much narrower view of available fields despite their future potential as researchers. Though I still think there may be people of value among those who have heard of AI safety but didn't bother taking a closer look due to its relative obscurity.
While that's true, why do you believe that those people have something useful to contribute to AI safety on net?
Directly? I don't. To me, getting them to understand is more about casting a wider net of awareness to get the attention of those that could make useful contributions, as well as alleviating the status concerns mentioned above.
comment by Bezzi · 2022-05-10T15:52:45.973Z · LW(p) · GW(p)
Sentient AI is a trope in a wide variety of sci-fi media, but even remotely realistic depictions of an unfriendly superintelligence rising to power are sorely lacking, and the actual state of things has probably done more harm than good for AI safety's credibility as a field.
Well, I'd like to see an example of a remotely realistic depiction of climate change, then. An awful lot of "climate change" movies are just post-apocalyptic bullshit with "climate change!" used to justify the apocalypse (and of course, actual climate change will be bad, but not at full-fledged-apocalypse level). I've yet to see a movie where Venice or Miami is sinking but there's no immediate danger everywhere else.
Movies like Transcendence or Superintelligence, or the Next and The Fear Index TV series, are far from being realistic depictions of AI risk, but I still think they are more or less pointing in the right direction. Even Hollywood has moved away from killer robots marching in the streets; give them a tiny little bit of credit.
↑ comment by Massimog · 2022-05-14T07:03:41.049Z · LW(p) · GW(p)
Yeah, I'll admit I'm more iffy on the fiction side of this argument; Hollywood isn't really kind to the reality of anything. I was actually not aware of any of these movies or shows (except Superintelligence, which I completely forgot about, whoops), so it does seem things are getting better in this regard. Good! I hold that climate change still has a much stronger non-fiction presence, though.