Should AI safety be a mass movement?

post by mhampton · 2025-03-13T20:36:59.284Z

When communicating about existential risks from AI misalignment, is it more important to focus on policymakers, experts, and other influential decisionmakers, or to try to get the public at large to care about this issue?[1] I lean towards communicating to policymakers and experts being more important overall than communicating to the public. However, it may be valuable for certain individuals or groups to focus on the latter, if that is their comparative advantage.

Epistemic status 

The following is a rough outline of my thoughts and is not intended to be comprehensive. I'm uncertain about some points, as noted, and I am interested in counterarguments.

Reasons for x-risk to be a technocratic issue rather than a public conversation

  1. Communicating to a narrower audience makes it more likely that the issue can remain non-partisan and non-divisive. Conversely, if the public becomes divided into "pro-safety" and "anti-safety" camps, potentially along partisan lines, then:
    1. It will be harder to cooperate with the "anti-safety" party, and with voters and groups aligned with it, to reduce risk.
    2. It will also be more likely that AI policy and strategy will take shape within the broader ideological paradigm of the "pro-safety" party; legitimate concerns that don't fit within that paradigm are less likely to be addressed than they would be if AI safety remained apolitical.
    3. The debate will become less rational.[2]
      1. There will be negative epistemic consequences from persuading policymakers as well ("Politics is the mind-killer"), but my sense is that it would be much harder to speak honestly and avoid demagoguery when trying to convince large masses of people. There are all kinds of misconceptions and false memes that spread in popular political debates, and it seems easier to have a more informed conversation if you're talking to a smaller number of people.
  2. It's hard to persuade people to believe in and care about a risk that feels remote, hard to understand, or weird. Most people tend to focus on things that affect their day-to-day lives, so they are only likely to care about x-risk once harms from AI have become concrete and severe. This may not happen before it is too late.[3] Given this uncertainty, it seems better not to rely on a strategy that will mostly work only in a soft-takeoff scenario.
  3. Voters' opinions will influence policy to some degree, but it is not obvious that persuading voters is a more effective method of change than lobbying policymakers directly (even if many voters can be persuaded, in spite of point 2).[4] Lobbying policymakers also seems quicker than changing the opinions of the public at large, which matters if timelines are short.

Counterpoints

  1. ^

    By "the public," I mean average voters, not people on LessWrong.

  2. ^

    Regardless of whether the division aligns with partisan lines.

  3. ^

    E.g. Toby Ord, "The Precipice: Existential Risk and the Future of Humanity" (2020), p. 183: "Pandemics can kill thousands, millions, or billions; and asteroids range from meters to kilometers in size. [...] This means that we are more likely to get hit by a pandemic or asteroid killing a hundredth of all people before one killing a tenth, and more likely to be hit by one killing a tenth of all people before one killing almost everyone. In contrast, other risks, such as unaligned artificial intelligence, may well be all or nothing."

  4. ^

    See e.g. here.

  5. ^

    Regarding asteroids, see Ord (2020), p. 72.

  6. ^

    I don't have a formal source for this, just my observations of politics and others' analysis of it.

  7. ^

    See, e.g., here and here.

  8. ^

    Backlash against protests in 1968 has been said to have led to the election of Richard Nixon. See also here.

1 comment


comment by the gears to ascension (lahwran) · 2025-03-13T21:48:36.877Z

Re convo with Raemon yesterday, this might change my view.