FLI report: Policymaking in the Pause

post by Zach Stein-Perlman · 2023-04-15T17:01:06.727Z · LW · GW · 3 comments

This is a link post for https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf


This policy brief provides policymakers with concrete recommendations for how governments can manage AI risks.

Policy recommendations:
1. Mandate robust third-party auditing and certification.
2. Regulate access to computational power.
3. Establish capable AI agencies at the national level.
4. Establish liability for AI-caused harms.
5. Introduce measures to prevent and track AI model leaks.
6. Expand technical AI safety research funding.
7. Develop standards for identifying and managing AI-generated content and recommendations.

3 comments


comment by Zach Stein-Perlman · 2023-04-15T17:01:11.891Z · LW(p) · GW(p)

From an x-risk perspective, I think this report is good. It's far from offering shovel-ready policy proposals, but it points in some reasonable directions and might advance the debate.

I think it's wrong on #5 (watermarking): what could Meta have done if LLaMA had been watermarked? And #7 seems to have little x-risk relevance.

comment by Noosphere89 (sharmake-farah) · 2023-04-15T18:00:37.842Z · LW(p) · GW(p)

There's a tweet thread by Jason Crawford on how liability law could be used to make AI safer:

https://twitter.com/jasoncrawford/status/1646894709032247296

In essence: use liability law and liability insurance to make the market price in externalities, so that market incentives push toward solving the AI alignment problem.
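
A stylized way to see that mechanism (my illustration, not from the thread): under strict liability, or an actuarially fair insurance premium, a firm's expected damages enter its own cost function, so spending on safety becomes privately optimal exactly when it reduces expected harm by more than it costs. The notation below (s, c, p, D) is assumed for illustration.

% Illustrative model, not from the source:
% a firm chooses safety effort s >= 0 at cost c(s); a harm of size D
% occurs with probability p(s), where more effort lowers risk: p'(s) < 0.
% Without liability the firm minimizes c(s) alone and picks s = 0.
% Under strict liability it internalizes the externality and solves
\[
  \min_{s \ge 0} \; c(s) + p(s)\,D
\]
% The first-order condition c'(s) = -p'(s)\,D says: invest in safety
% until the marginal cost of effort equals the marginal reduction in
% expected damages, which is the social optimum in this simple setting.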

comment by _Mark_Atwood (fallenpegasus) · 2023-04-16T04:13:41.077Z · LW(p) · GW(p)

Carefully arranged to bring all motion to a fully regulated stop forever.  Yeahno.