A better “Statement on AI Risk”?
post by Knight Lee (Max Lee) · 2024-11-25T04:50:29.399Z · LW · GW
Remember the “Statement on AI Risk,” which was signed by many experts and influenced governments? Let's write a new, stronger statement for experts to sign:
Statement on AI Inconsistency (v1.0us):
1: The US spends $800 billion/year on its military. ASI is as likely to invade the US (or NATO) as all other countries combined are. Why does the US spend less than $0.1 billion/year on AI alignment/safety?
2: ASI being equally dangerous isn't an extreme opinion: the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%-12%, other experts see 6%, and the general public sees 5%.
3: “The military isn't just for protecting us; it protects other countries.” US foreign aid (including Ukrainian aid) is only $100 billion/year, so this can't be the real reason. The average voter/taxpayer would be shocked to learn that this $800 billion/year is also foreign aid. (Even if it were foreign aid, other countries are not 8000 times less likely to be invaded by ASI; see the arithmetic note after the statement.)
4: The real reason is habit, habit, and habit. The foreign invasion probability has decreased decade by decade, and the ASI invasion probability has increased year by year, but budgets have stayed at the status quo, causing a massive inconsistency between belief and behaviour.
5: Do not let humanity's story be so heartbreaking.
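For transparency, the “8000 times” in point 3 is simply the ratio of the two budget figures in point 1, a back-of-the-envelope calculation using the rounded estimates cited in the references below:

$$\frac{\$800\text{ billion/year (military)}}{\$0.1\text{ billion/year (AI alignment/safety)}} = 8000$$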
We are one or two anonymous guys with zero connections, zero resources, zero experience.
We need an organization to publish it on its website and contact the AI experts and others who might sign it. We would really prefer an organization like the Future of Life Institute (which wrote the pause AI letter) or the Center for AI Safety (which wrote the Statement on AI Risk).
Help
We've sent an email to the Future of Life Institute, but our gut feeling is that they won't reply to such an anonymous email. Does anyone here have contacts at one of these organizations? Would you be willing to help?
Of course, we'd also like to hear other critiques, advice, and suggested edits to the statement.
Why
We feel the Statement on AI Inconsistency might accomplish more than the Statement on AI Risk, while being almost as easy to sign.
The reason it might accomplish more is that people in government cannot acknowledge the statement (and the experts who signed it), concede that it makes a decent point, and then still do very little about it.
So long as the government spends a token amount on a small AI Safety Institute (AISI), they can feel they have done “enough,” and that the Statement on AI Risk is out of the way. The Statement on AI Inconsistency is more “stubborn”: they cannot claim to have addressed it until they spend a nontrivial amount relative to the military budget.
On the other hand, the Statement on AI Inconsistency is almost as easy to sign, because the main difficulty of signing it is how crazy it sounds. But once people acknowledge the Statement on AI Risk—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—the Overton window has moved so much that signing the Statement on AI Inconsistency requires only a little craziness beyond the normal position. It is a small step on top of a big step.
References
“The US spends $800 billion/year on its military”
- [1] says it's $820 billion in 2024. $800 billion is an approximate number.
“the US spend less than $0.1 billion/year on AI alignment/safety”
- The AISI is the most notable US-government-funded AI safety organization. It does not focus on ASI takeover risk, though it may partially focus on other catastrophic AI risks. AISI's budget is $10 million according to [2]. Worldwide AI safety funding is between $0.1 billion and $0.2 billion according to [3].
“the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%-12%, other experts see 6%, and the general public sees 5%”
- [4] says: median superforecaster: 2.13%; median “domain experts” (i.e. AI experts): 12%; median “non-domain experts”: 6.16%; public survey: 5%. These are predictions for 2100. However, these predictions were made before ChatGPT was released, so respondents may now see the same risk arriving sooner than 2100.
- [5] says the median AI expert sees a 5% chance of “future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species” and a 10% chance of “human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species.”
“US foreign aid (including Ukrainian aid) is only $100 billion/year”
- [6] says the 2023 foreign aid was $62 billion, but this only includes $16 billion to Ukraine. [7] puts “Ukraine aid bills for FY 2023” at $60 billion. It's unclear exactly how these numbers fit together or overlap (a rough reconciliation is sketched below), but we feel $100 billion is a good rough estimate.
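One way the numbers could fit together, under the unverified assumption that the $16 billion of Ukraine aid counted in [6] is a subset of the $60 billion in Ukraine aid bills from [7]:

$$\$62\text{ billion} + \$60\text{ billion} - \$16\text{ billion} = \$106\text{ billion/year}$$

which is reasonably close to the $100 billion/year rough estimate used in the statement.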
1. USAFacts Team. (August 1, 2024). “How much does the US spend on the military?” USAFacts. https://usafacts.org/articles/how-much-does-the-us-spend-on-the-military/
2. Wiggers, Kyle. (October 22, 2024). “The US AI Safety Institute stands on shaky ground.” TechCrunch. https://techcrunch.com/2024/10/22/the-u-s-ai-safety-institute-stands-on-shaky-ground/
3. McAleese, Stephen, and NunoSempere. (July 12, 2023). “An Overview of the AI Safety Funding Situation.” LessWrong. https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation/
4. Karger, Ezra, Josh Rosenberg, Zachary Jacobs, Molly Hickman, Rose Hadshar, Kayla Gamin, and P. E. Tetlock. (August 8, 2023). “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament.” Forecasting Research Institute. p. 259. https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64f0a7838ccbf43b6b5ee40c/1693493128111/XPT.pdf#page=260
5. Stein-Perlman, Zach, Benjamin Weinstein-Raun, and Katja Grace. (August 3, 2022). “2022 Expert Survey on Progress in AI.” AI Impacts. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/
6. USAID. (September 26, 2024). “ForeignAssistance.gov.” https://www.foreignassistance.gov/aid-trends
7. Masters, Jonathan, and Will Merrow. (September 27, 2024). “How Much U.S. Aid Is Going to Ukraine?” Council on Foreign Relations. https://www.cfr.org/article/how-much-us-aid-going-ukraine
1 comment
comment by Richard_Kennaway · 2024-11-25T09:34:31.389Z · LW(p) · GW(p)
Why does the US spend less than $0.1 billion/year on AI alignment/safety?
Because no-one knows how to spend any more? What has come out of $0.1 billion a year?
I am not connected to work on AI alignment, but I do notice that every chatbot gets jailbroken immediately, and that I do not notice any success stories.