Many AI governance proposals have a tradeoff between usefulness and feasibility

post by Akash (akash-wasil), Carson Ezell (carson-ezell) · 2023-02-03T18:49:44.431Z · LW · GW · 2 comments


Epistemic status: We think this is a simple and common idea in discussions about AI governance proposals. We don’t expect this to be controversial, but we think it might be useful to put a label on it and think about it explicitly.

Suppose an AGI lab could choose between two safety standards:

Standard 1: A demanding standard that imposes substantial costs on the lab but meaningfully reduces x-risk.

Standard 2: A lighter standard that imposes fewer costs on the lab but does less to reduce x-risk.

All else equal, people prefer less costly restrictions to more costly ones. Standard 2 is less demanding on labs, but it’s also less helpful in reducing x-risk.

Core point: There is often a tradeoff between the feasibility of a proposal (its likelihood of being implemented) and its usefulness (its expected impact on reducing x-risk, conditional on it being implemented).

We commonly observe this tradeoff in various conversations about AI governance. We don’t think this is a new idea, but sometimes having a term for something helps us track the concept more clearly.

We refer to this as the usefulness-feasibility tradeoff.

Additional points

2 comments


comment by Seth Herd · 2023-02-03T23:27:01.229Z · LW(p) · GW(p)

It sounds like you're thinking mostly about voluntary standards. I think legislated standards are a real possibility (as the public gets more freaked out by both powerful non-agentic systems like ChatGPT, and less powerful but clearly self-directed systems). I think legislated standards adhere to this tradeoff a bit less. Legislators have much less reason to care how difficult standards are to adhere to. Therefore, standards that sound good to the public are going to be a bigger criterion, and that has only an indirect relationship to both ease of implementation and actual usefulness.

Replies from: trevor
comment by tlevin (trevor) · 2023-02-12T21:51:25.644Z · LW(p) · GW(p)

It seems to me like government-enforced standards are just another case of this tradeoff - they are quite a bit more useful, in the sense of carrying the force of law and applying to all players on a non-voluntary basis, and harder to implement, due to the attention of legislators being elsewhere, the likelihood that a good proposal gets turned into something bad during the legislative process, and the opportunity cost of the political capital.