Safe Stasis Fallacy

post by Davidmanheim · 2024-02-05T10:54:44.061Z · LW · GW · 2 comments

Contents

2 comments

Comments sorted by top scores.

comment by Roko · 2024-02-05T15:47:24.375Z · LW(p) · GW(p)

I think the risk with AI safety is that, whilst it is not an explicitly pro-tyranny ideology, there is an increasing need for governance and control, and so by blocking off governance by Friendly AGI we will get not less AI risk but more tyranny from human government structures (which are inherently corrupt and tyrannical, because humans are untrustworthy and traditional human government structures are riddled with brokenness).

Replies from: Davidmanheim
comment by Davidmanheim · 2024-02-11T19:29:32.807Z · LW(p) · GW(p)

You talk about "governance by Friendly AGI" as if it's a solved problem we're just waiting to deploy, rather than speculation that might simply not be feasible even if we solve AGI alignment, which itself is plausibly unsolvable in the near term. You also conflate AI safety research with AI governance regimes. And note that the problems with governance generally aren't a lack of intelligence on the part of those in charge; they're largely conflicting values and requirements. That said, you talk about modern liberal governments as if they are the worst thing we've experienced, "riddled with brokenness," as if that's the fault of the people in charge rather than the deeply conflicting mandates that the populace gives them. And to the extent that systemic failure does stem from the untrustworthy incentives of those in charge, why would controllable or aligned AGI fix that?

Yes, stasis isn't safe by default, but undirected progress isn't a panacea either, and governance certainly isn't any closer to being solved just because we have AI progress.