2028 Should Not Be AI Safety's First Foray Into Politics
post by Jesse Richardson (SharkoRubio) · 2025-03-04
I liked the idea in this comment that it could be impactful to have someone run for President in 2028 on an AI notkilleveryoneism platform. Even better would be for them to run on a shared platform with numerous candidates for Congress, ideally from both parties. I don't think it's particularly likely to work, or even get off the ground, but it seems worthwhile to explore, given that we don't know what the state of play will be by then. In my view, either the 2024 or the 2028 US Presidential election is probably the most important election in human history, and it's too late to affect the former.
My suggestion is that, if you're at all on board with this idea, it is not sufficient, as of March 2025, to wait until the Presidential race gets going sometime in 2027 before doing anything. In particular, the first major test of political support for AI safety should not be the one time it has to work. (Note that I have pretty short timelines, so I'm implicitly assuming that transformative AI will arrive before 2033.)
Think about all the relevant knowledge that a practitioner of ordinary left-right politics has, which someone interested in AI notkilleveryoneism politics mostly lacks today:
- Who is our base?
- Who are the swing voters?
- Who are the most relevant groups & institutions to have on our side?
- What level of support can we expect, i.e. how high or low should we aim?
- Which kinds of messages work, and which don't?
- On which issues do voters trust us?
- Which of our people are the most well-liked, and why?
- What are the reliable failure modes for our brand of politics?
- How would our opponents answer the above questions, and how can we exploit our knowledge of their answers?
If you lack clear answers to these questions, you're not doomed to fail, but you're not exactly setting yourself up for success. I also think these questions are unlikely to be satisfactorily answered by armchair theorizing. The best way to answer them is to actually run an electoral campaign, a public-facing lobbying effort, or something of that nature. SB 1047 was a good start; let's do more.
Maybe you think the idea of AI safety having any kind of impact on the 2028 Presidential election is a pipe dream, in which case this post is not for you. But if you do want to leave the door open to this possibility, the time to start laying groundwork is now. The 2026 midterm elections or upcoming international elections might be good places to start.
I don't claim to have a particular answer for what should be done right this second, nor am I very confident that this kind of approach should be prioritized over other AI safety efforts. But if there is support among the AI safety community for pursuing it as a political movement, it's worth doing right, and that means not waiting until a Presidential election. All in all, I am very open to suggestions about concrete actions we could be taking in this direction.