List of requests for an AI slowdown/halt.

post by Cleo Nardo (strawberry calm) · 2023-04-14T23:55:09.544Z · LW · GW · 6 comments

Last updated: April 14th 2023.

  1. Pause Giant AI Experiments: An Open Letter
    by Future of Life Institute
  2. Pausing AI Developments Isn't Enough. We Need to Shut it All Down
    by Eliezer Yudkowsky [LW · GW]
  3. We must slow down the race to God-like AI
    by Ian Hogarth
  4. The A.I. Dilemma
    by the Center for Humane Technology
  5. The case for slowing down AI [1]
    by Sigal Samuel
  6. The Case for Halting AI Development
    by Max Tegmark, Lex Fridman
  7. Lennart Heim on Compute Governance
    by Lennart Heim, Future of Life Institute
  8. Let’s think about slowing down AI [AF · GW]
    by KatjaGrace [AF · GW]
  9. The 0.2 OOMs/year target [AF · GW]
    by Cleo Nardo [AF · GW]
  10. AI Summer Harvest [LW · GW]
    by Cleo Nardo [LW · GW]
  11. Instead of technical research, more people should focus on buying time [LW · GW]
    by Akash [LW · GW], Olivia Jimenez [LW · GW], Thomas Larsen [LW · GW]
  12. Slowing down AI progress is an underexplored alignment strategy [EA · GW]
    by Michael Huang [EA · GW]
  13. Slowing Down AI: Rationales, Proposals, and Difficulties [1]
    by Simeon Campos, Henry Papadatos, Charles M
  14. What an actually pessimistic containment strategy looks like [LW · GW]
    by lc [LW · GW]
  15. In the Matter of OpenAI (FTC 2023) [1]
    by Center for AI and Digital Policy
  16. We need a Butlerian Jihad against AI [2]
    by Erik Hoel
  17. Dangers of AI and the End of Human Civilization
    by Eliezer Yudkowsky [LW · GW], Lex Fridman
  18. We’re All Gonna Die with Eliezer Yudkowsky
    by Eliezer Yudkowsky [LW · GW], Bankless
  19. The public supports regulating AI for safety [LW · GW]
    by Zach Stein-Perlman [LW · GW]
  20. New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development [LW · GW]
    by Akash [LW · GW]

About this document

There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of colossal AI experiments via (e.g.) regulation or coordination. This document aspires to collect all such examples into a single list. I'm undecided on how best to order and subdivide the examples, but I'm open to suggestions. Note that I'm also including surveys.

Please mention in the comments any examples I've missed so I can add them.

  1. ^

    Credit to Zach Stein-Perlman [LW · GW].

  2. ^

    Credit to MM Maas [LW · GW].

6 comments

comment by Connor Williams · 2023-04-15T01:05:29.406Z · LW(p) · GW(p)

Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.

Replies from: strawberry calm
comment by Cleo Nardo (strawberry calm) · 2023-04-15T01:09:09.747Z · LW(p) · GW(p)

well-spotted😳

comment by MMMaas (matthijs-maas) · 2023-04-18T07:39:30.749Z · LW(p) · GW(p)

Nice, thanks for collating these!

Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological [EA · GW]

and somewhat older: 
lc. ‘What an Actually Pessimistic Containment Strategy Looks Like’. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like [LW · GW]
Hoel, Erik. ‘We Need a Butlerian Jihad against AI’. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.

Replies from: strawberry calm
comment by Cleo Nardo (strawberry calm) · 2023-04-18T08:16:31.372Z · LW(p) · GW(p)

Thanks! I've included Erik Hoel's and lc's essays.

Your article doesn't actually call for AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation:

This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies.

But if you've written anything that explicitly endorses AI restraint, I'll include it in the list.