Why empiricists should believe in AI risk

post by Knight Lee (Max Lee) · 2024-12-11T03:51:17.979Z · LW · GW · 0 comments


Empiricists are people who believe that empirical information (from experiments and observational studies) is far more useful, and carries far more weight, than speculation about possibilities using pure reasoning.

Why should they believe in AI risk?

I present the Empiricist's Paradox:

Actual empiricists should support AI safety, because the median superforecaster gives a 2.1% chance of an AI catastrophe (one killing at least 1 in 10 people).[1]

There is empirical evidence that superforecasters are well calibrated: when they assign a 2% probability to events, roughly 2% of those events occur.

A 2% chance of AI catastrophe justifies spending that is large relative to military spending (see our Statement on AI Inconsistency [LW · GW]).
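The expected-value reasoning above can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: the 2.1% probability and the "1 in 10 people" catastrophe definition come from the post, but the world population figure and the dollar valuation per expected death averted are hypothetical assumptions chosen for the example.

```python
# Back-of-the-envelope expected-value sketch of the argument above.
# The 2.1% figure and "1 in 10 people" come from the post; the other
# inputs are illustrative assumptions, not claims from the source.

p_catastrophe = 0.021                 # median superforecaster probability
world_population = 8_000_000_000      # rough current world population
deaths_if_catastrophe = world_population // 10  # "killing 1 in 10 people"

expected_deaths = p_catastrophe * deaths_if_catastrophe
print(f"Expected deaths: {expected_deaths:,.0f}")  # 16,800,000

# If society valued averting each expected death at even $1M
# (a hypothetical, conservative figure), fully eliminating the risk
# would be worth spending on the order of:
value_per_death_averted = 1_000_000   # USD, illustrative assumption
justified_spending = expected_deaths * value_per_death_averted
print(f"Justified spending: ${justified_spending:,.0f}")  # $16,800,000,000,000
```

Even under these rough assumptions, the implied budget (trillions of dollars) dwarfs current AI safety spending, which is the inconsistency the post points at.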
 

  1. ^

    The predictions were for 2100, but the predictions were made before ChatGPT was released.
