List of requests for an AI slowdown/halt.

post by Cleo Nardo (strawberry calm) · 2023-04-14T23:55:09.544Z

About this document

There has been a recent flurry of letters, articles, statements, and videos endorsing a slowdown or halt of colossal AI experiments via regulation, coordination, or other means.

This document aspires to collect all examples into a single centralised list. I'm undecided on how best to order and subdivide the examples, and I'm open to suggestions.

As a disclaimer, this list is...

Please mention in the comments any examples I've missed so I can add them!

List of AI slowdown/halt requests

Last updated: April 14th 2023.

(Note that I'm also including surveys.)

  1. ^

    Credit to Zach Stein-Perlman.

  2. ^

    Credit to MM Maas.

6 comments

comment by Connor Williams · 2023-04-15T01:05:29.406Z

Nitpick: I believe you meant to say last updated Apr 14, not Mar 14.

comment by Cleo Nardo (strawberry calm) · 2023-04-15T01:09:09.747Z

well-spotted 😳

comment by MMMaas (matthijs-maas) · 2023-04-18T07:39:30.749Z

Nice, thanks for collating these!

Also perhaps relevant: https://forum.effectivealtruism.org/posts/pJuS5iGbazDDzXwJN/the-history-epistemology-and-strategy-of-technological

and somewhat older:
lc. 'What an Actually Pessimistic Containment Strategy Looks Like'. LessWrong, 5 April 2022. https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like.
Hoel, Erik. 'We Need a Butlerian Jihad against AI'. The Intrinsic Perspective (blog), 30 June 2021. https://erikhoel.substack.com/p/we-need-a-butlerian-jihad-against.

comment by Cleo Nardo (strawberry calm) · 2023-04-18T08:16:31.372Z

Thanks! I've included Erik Hoel's and lc's essays.

Your article doesn't actually call for an AI slowdown/pause/restraint, as far as I can tell, and explicitly guards against that interpretation:

This analysis does not show that restraint for AGI is currently desirable; that it would be easy; that it would be a wise strategy (given its consequences); or that it is an optimal or competitive approach relative to other available AI governance strategies.

But if you've written anything that explicitly endorses AI restraint, I'll include it in the list.