Should AI safety be a mass movement?
post by mhampton · 2025-03-13T20:36:59.284Z · LW · GW
When communicating about existential risks from AI misalignment, is it more important to focus on policymakers/experts/other influential decisionmakers or to try to get the public at large to care about this issue?[1] I lean towards it being overall more important to communicate to policymakers/experts rather than the public. However, it may be valuable for certain individuals/groups to focus on the latter, if that is their comparative advantage.
Epistemic status
The following is a rough outline of my thoughts and is not intended to be comprehensive. I'm uncertain on some points, as noted, and I am interested in counterarguments.
Reasons for x-risk to be a technocratic issue rather than a public conversation
- Communicating to a narrower audience makes it more likely that the issue can remain non-partisan and non-divisive. Conversely, if the public becomes divided into "pro-safety" and "anti-safety" camps, potentially along partisan lines, then:
- It will be harder to cooperate with the "anti-safety" party and the voters/groups aligned with it to reduce risk.
- It will also be more likely that AI policy and strategy will take place within the broader ideological paradigm of the pro-safety party; any legitimate concerns that don't fit within this paradigm will be less likely to be addressed than if AI safety were apolitical.
- The debate will become less rational.[2]
- There will be negative epistemic consequences from persuading policymakers as well ("Politics is the mind-killer" [? · GW]), but my sense is that it would be much harder to speak honestly and avoid demagoguery when trying to convince large masses of people. There are all kinds of misconceptions and false memes that spread in popular political debates, and it seems easier to have a more informed conversation if you're talking to a smaller number of people.
- It's hard to persuade people to believe in and care about a risk that feels remote / hard to understand / weird. Most people tend to focus on things that affect their day-to-day lives, so they are only likely to care about x-risk once harms from AI have become concrete and severe. This may not happen before it is too late.[3] Given this uncertainty, it seems better not to rely on a strategy that will mostly only work if we are in a soft-takeoff scenario.
- Voters' opinions will influence policy to some degree, but it is not obvious that persuading voters is a more effective method of change than lobbying policymakers directly, even if many voters can be persuaded in spite of point 2.[4] Lobbying policymakers also seems quicker than changing the opinion of the public at large, which matters if timelines are short.
Counterpoints
- "Even if persuading voters is more difficult/riskier than persuading policymakers, people have been trying to persuade policymakers, and it hasn't gotten us far enough. Therefore, we’ll need to persuade voters. It's true that sometimes the government does good things without it being politically necessary, but we shouldn't expect this. Instead, we should persuade voters to give the government an incentive to do something."
- It may be the case that persuading policymakers is insufficiently tractable. Policymakers, like voters, are more likely to react to present dangers than to future ones. But there are at least some instances where they act on risks that most people don’t care about. All the best-handled risks (e.g., biological weapons, asteroid impacts[5]) are quietly taken care of by bureaucrats without any popular demand for it. Most of the tangible AI policy actions that have happened so far (e.g., the executive order, export controls) seem to be independent of voter opinion. Arguably, policymakers do have incentives to address future risks even when voters don't immediately care: avoiding future controversies, maintaining national power and stability, and ensuring economic prosperity.
- "X-risk is likely to be politicized regardless of whether we talk to the public about it."
- "Because politicians will intentionally polarize the public."
- This isn't my model of polarization. I view polarization as largely driven by actors outside of government who can make money off of making people angry. Donald Trump is an example of a politician who has gotten particularly involved in stirring people up, but this is not the default. My model is that politicians want to get things done, and that often involves working with the other side. There are plenty of issues on which politicians work together without the public caring one way or the other: corn subsidies, copyright extensions, etc.[6]
- Even if politicians want to stir people up, how much of people’s information consumption is driven by politicians directly (without being filtered by media/social media)? Do politicians create most of the conspiracy theories/toxic memes floating around, or do people on the internet?
- "Because AI will affect people's lives, so it will inherently be political."
- AI will affect people's lives, but it's not clear that regulations aimed at preventing catastrophic risks would affect your life all that much if you don't have millions of dollars' worth of compute.
- Then again, we may be close to a point where voters might get mad if a policy stops GPT-N from coming out, so this may be a fair point.
- "For whatever reason, the topic has already become politicized[7] even though we haven't done much to politicize it."
- This seems accurate. Maybe we could have done things differently, but maybe politics is just very Online right now, and all it takes for politicians to take sides on something is for a few nerds to argue about it on X, even if we don't try to market it to the wider public.
- It may still be the case that public awareness pushes could make the problem worse.
- "Because politicians will intentionally polarize the public."
- "Talking to the public does not necessarily have to cause polarization. We could talk about our concerns in a neutral way."
- In principle, yes, but rightly or wrongly, it's hard to avoid injecting one's own ideas that may be seen as biased. If you believe that such-and-such is a very important issue to the future of humanity, but Party P views it as Very Bad to care about it, are you really going to keep it off of the AI Safety platform? Are you then going to make sure everyone else advocating for this issue does the same? Good luck!
- "Climate change activism has arguably been a successful popular movement despite being (mainly) a future risk."
- Good point; this updates me on point 2.
- "Public awareness will allow for transparency around AI policy."
- I agree that transparency and accountability are valuable, but it's not clear to me that transparency will lead to much accountability due to points 2 and 3.
- "But what about [Mass Movement M] that achieved [positive thing]?"
- Such examples exist, but there are also plenty of examples of mass movements that have had negative consequences and/or influenced public opinion in an unintended direction.[8] I don't have a robust opinion yet regarding whether this reference class is positive or negative.
1. ^ By "the public," I mean average voters, not people on LessWrong.
2. ^ Regardless of whether the division aligns with partisan lines.
3. ^ E.g. Toby Ord, "The Precipice: Existential Risk and the Future of Humanity" (2020), p. 183: "Pandemics can kill thousands, millions, or billions; and asteroids range from meters to kilometers in size. [...] This means that we are more likely to get hit by a pandemic or asteroid killing a hundredth of all people before one killing a tenth, and more likely to be hit by one killing a tenth of all people before one killing almost everyone. In contrast, other risks, such as unaligned artificial intelligence, may well be all or nothing."
4. ^ See e.g. here.
5. ^ Regarding asteroids, see Ord (2020), p. 72.
6. ^ I don't have a formal source for this, just my observations of politics and others' analysis of it.
7. ^
8. ^ Backlash against protests in 1968 has been said to have led to the election of Richard Nixon. See also here.
1 comment
comment by the gears to ascension (lahwran) · 2025-03-13T21:48:36.877Z · LW(p) · GW(p)
Re convo with Raemon yesterday, this might change my view.