A better “Statement on AI Risk”?

post by Knight Lee (Max Lee) · 2024-11-25T04:50:29.399Z · LW · GW · 4 comments

Contents

  Help
  Why
  References
4 comments

Remember the “Statement on AI Risk,” which was signed by many experts and influenced governments? Let's write a new, stronger statement for experts to sign:

Statement on AI Inconsistency (v1.0us):

1: ASI threatens the US (and NATO) as much as all military threats combined. Why does the US spend $800 billion/year on its military but less than $0.1 billion/year on AI alignment/safety?

2: ASI being equally dangerous isn't an extreme opinion: the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%-12%, other experts see 5%, and the general public sees 5%. To justify 8000 times less spending, you must be 99.999% sure there will be no AI catastrophe, and thus 99.95% sure that, even if you studied the disagreement further, you would never realize you were wrong and the majority of experts were right.

3: “But military spending isn't just for protecting NATO; it protects other countries far more likely to be invaded.” Even those countries are not 8000 times less likely to be attacked by ASI than to be invaded. And US foreign aid—including aid to Ukraine—is only $100 billion/year, so protecting them cannot account for the bulk of military spending.

4: The real reason for the 8000-fold difference is habit, habit, and habit. Foreign-invasion concerns have decreased decade by decade, and ASI concerns have increased year by year, but budgets have stayed at the status quo, leaving a massive inconsistency between belief and behaviour.

5: Do not let humanity's story be so heartbreaking.
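For readers who want the arithmetic behind points 1, 2, and 4 spelled out, here is a rough back-of-the-envelope reading. The proportionality assumption (budgets should scale with catastrophe probability when the stakes are held equal) and the ~8% baseline are our interpretation, chosen to reproduce the statement's 99.999% figure; they are not part of the statement itself:

$$\frac{\$800\text{ billion/yr}}{\$0.1\text{ billion/yr}} = 8000, \qquad \frac{8\%}{8000} = 0.001\%, \qquad \frac{0.001\%}{2.1\%} \approx 0.05\%$$

The first ratio is the spending gap. If budgets scaled with catastrophe probability, and the threat level the military budget guards against were on the order of 8%, then spending 8000 times less on AI safety corresponds to putting the chance of an AI catastrophe at roughly 0.001%, i.e. being 99.999% sure it won't happen. And if your own estimate is 0.001% while the superforecaster median is 2.1%, you can assign at most a 0.001% / 2.1% ≈ 0.05% chance that studying the disagreement would move you to their number, which is where the 99.95% figure comes from.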

We are one or two anonymous guys with zero connections, zero resources, zero experience.

We need an organization to publish it on their website and contact the AI experts and others who might sign it. We would really prefer an organization like the Future of Life Institute (which wrote the “Pause Giant AI Experiments” letter) or the Center for AI Safety (which wrote the Statement on AI Risk).

Help

We've sent an email to the Future of Life Institute, but our gut feeling is that they won't reply to such an anonymous email. Does anyone here have contacts at one of these organizations? Would you be willing to help?

Of course, we'd also like to hear critiques, advice, and suggested edits to the statement.

Why

We feel the Statement on AI Inconsistency might accomplish more than the Statement on AI Risk, while being almost as easy to sign.

The reason it might accomplish more is that people in government cannot acknowledge the statement (and the experts who signed it), concede that it makes a decent point, and then do very little about it.

So long as the government spends a token amount on a small AI Safety Institute (AISI), they can feel they have done “enough” and that the Statement on AI Risk is out of the way. The Statement on AI Inconsistency is more “stubborn”: they cannot claim to have addressed it until they spend a nontrivial amount relative to the military budget.

On the other hand, the Statement on AI Inconsistency is almost as easy to sign. The main difficulty of signing any such statement is how crazy it sounds, but once people have acknowledged the Statement on AI Risk—“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”—the Overton window has already moved so far that signing the Statement on AI Inconsistency requires only a little craziness beyond that now-normal position. It is a small step on top of a big step.

References

“the US spend $800 billion/year on its military”[1]

“less than $0.1 billion/year on AI alignment/safety”[2][3]

“the median superforecaster sees a 2.1% chance of an AI catastrophe (killing 1 in 10 people), the median AI expert sees 5%-12%, other experts see 5%, and the general public sees 5%”[4][5]

“US foreign aid—including Ukrainian aid—is only $100 billion/year”[6][7][8]

  1. USAFacts Team. (August 1, 2024). “How much does the US spend on the military?” USAFacts. https://usafacts.org/articles/how-much-does-the-us-spend-on-the-military/

  2. Wiggers, Kyle. (October 22, 2024). “The US AI Safety Institute stands on shaky ground.” TechCrunch. https://techcrunch.com/2024/10/22/the-u-s-ai-safety-institute-stands-on-shaky-ground/

  3. McAleese, Stephen, and NunoSempere. (July 12, 2023). “An Overview of the AI Safety Funding Situation.” LessWrong. https://www.lesswrong.com/posts/WGpFFJo2uFe5ssgEb/an-overview-of-the-ai-safety-funding-situation/

  4. Karger, Ezra, Josh Rosenberg, Zachary Jacobs, Molly Hickman, Rose Hadshar, Kayla Gamin, and P. E. Tetlock. (August 8, 2023). “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament.” Forecasting Research Institute. p. 259. https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64f0a7838ccbf43b6b5ee40c/1693493128111/XPT.pdf#page=260

  5. Stein-Perlman, Zach, Benjamin Weinstein-Raun, and Katja Grace. (August 3, 2022). “2022 Expert Survey on Progress in AI.” AI Impacts. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

  6. USAID. (September 26, 2024). “ForeignAssistance.gov.” https://www.foreignassistance.gov/aid-trends

  7. Masters, Jonathan, and Will Merrow. (September 27, 2024). “How Much U.S. Aid Is Going to Ukraine?” Council on Foreign Relations. https://www.cfr.org/article/how-much-us-aid-going-ukraine

  8. Cancian, Mark, and Chris Park. (May 1, 2024). “What Is in the Ukraine Aid Package, and What Does it Mean for the Future of the War?” Center for Strategic & International Studies. https://www.csis.org/analysis/what-ukraine-aid-package-and-what-does-it-mean-future-war

4 comments


comment by Richard_Kennaway · 2024-11-25T09:34:31.389Z · LW(p) · GW(p)

Why does the US spend less than $0.1 billion/year on AI alignment/safety?

Because no-one knows how to spend any more? What has come out of $0.1 billion a year?

I am not connected to work on AI alignment, but I do notice that every chatbot gets jailbroken immediately, and that I do not notice any success stories.

comment by Knight Lee (Max Lee) · 2024-11-25T10:51:31.618Z · LW(p) · GW(p)

This is an important point. AI alignment/safety organizations take money as input and write very abstract papers as their output, which usually have no immediate applications. I agree it may appear very unproductive.

However, if we think from first principles, a lot of other things are like that. For instance, when you go to school, you study the works of Shakespeare, you learn to play the guitar, and you learn how Spanish pronouns work. These things appear to be a complete waste of time. If 50 million students in the US spend 1 hour a day on these kinds of activities, and each hour is valued at only $10, that's $180 billion/year.
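Spelling out the multiplication behind that figure, assuming the hour is counted every day of the year (our reading, not something stated above):

$$5\times 10^{7}\ \text{students}\times 1\ \tfrac{\text{hour}}{\text{day}}\times 365\ \tfrac{\text{days}}{\text{year}}\times \$10/\text{hour}\approx \$182.5\ \text{billion/year},$$

which rounds to the $180 billion/year above.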

But we know these things are not a waste of time, because in hindsight, when you look at how students turn out, this seemingly useless study somehow helps them later in life.

Lots of things appear useless, but are valuable in hindsight for reasons beyond the intuitive set of reasons we evolved to understand.

Studying the nucleus of the atom might have appeared to be a useless curiosity if you didn't know it would lead to nuclear energy. There were no real-world applications for a long time, and then suddenly there were enormous applications.

Pasteur's studies on fermentation might have appeared limited to modest winemaking improvements, but they led to the discovery of germ theory, which saved countless lives.

Stone Age people studying weird rocks may have discovered obsidian and copper. Those who studied the strange seeds that plants produce may have discovered agriculture.

We don't know how valuable this alignment work is. We should cope with this uncertainty probabilistically: if there is a 50% chance it will help us, the expected benefit per dollar is halved, but that doesn't reduce the ideal level of spending to zero.

comment by Odd anon · 2024-11-25T11:51:11.182Z · LW(p) · GW(p)

I don't think this would be a good letter. The military comparison is unhelpful; risk alone isn't a good way to decide budgets. Yet, half the statement is talking about the military. Additionally, call-to-action statements that involve "Spend money on this! If you don't, it'll be catastrophic!" are something that politicians hear on a constant basis, and they ignore most of them out of necessity.

In my opinion, a better statement would be something like: "Apocalyptic AI is being developed. This should be stopped, as soon as possible."

comment by Knight Lee (Max Lee) · 2024-11-25T12:29:31.625Z · LW(p) · GW(p)

It's true that risk alone isn't a good way to decide budgets. You're even more correct that convincing demands to spend money are something politicians learn to ignore out of necessity.

But while risk alone isn't a good way to decide budgets, you have to admit that lots of budget items have the purpose of addressing risk. For example, flood barriers address hurricane/typhoon risk, structural upgrades address earthquake risk, and pandemic preparedness addresses pandemic risk.

If you accept that some budget items are meant to address risk, shouldn't you also accept that the amount of spending should be somewhat proportional to the amount of risk? In that case, if the risk of NATO being invaded is comparable to the risk from rogue ASI, then the military spending to protect against invasion should be comparable to the spending to protect against rogue ASI.

I admit that politicians might not be rational enough to understand this, and there is a substantial probability this statement will fail. But it is still worth trying. The cost is a mere signature and the benefit may be avoiding a massive miscalculation.

Making this statement doesn't prevent others from making an even better statement. Many AI experts have signed multiple statements, e.g. the “Statement on AI Risk” and “Pause Giant AI Experiments.” Some politicians and members of the public are more convinced by one argument, while others are more convinced by another, so it helps to have different kinds of arguments backed by many signatories. Encouraging AI safety spending doesn't conflict with encouraging AI regulation. I think the competition between different arguments isn't actually that bad.