Posts
Riga, Latvia – ACX Meetups Everywhere 2023
2023-08-23T20:07:20.967Z
Comments
Comment by Artūrs Kaņepājs (arturs-kanepajs) on Predictable updating about AI risk · 2023-05-19T11:42:38.298Z
This seems like a stark contrast between Bayesianism and the way a frequentist might approach things: do not reject the null hypothesis of negligible probability until convinced by evidence, whether formal arguments or real-life mishaps. Labeling something as having P(x) ≈ 0 probably helps to compartmentalize and focus on other tasks at hand, but it can lead to huge risks being neglected, as in this case of AI alignment.
Edit: a "premortem" seems like a useful exercise to align mind & gut.
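The contrast above can be made concrete with a toy calculation. A minimal sketch, with illustrative numbers not taken from the comment: a Bayesian starts from a small but nonzero prior on the risk and updates it with each piece of evidence, while the frequentist stance keeps the "P(x) ≈ 0" null in place until some rejection threshold is crossed.

```python
def bayesian_update(prior, likelihood_ratio):
    """Update P(risk) given evidence with the stated likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Bayesian: a 1% prior, updated on three pieces of moderately
# supportive evidence (hypothetical likelihood ratios).
p = 0.01
for lr in [2.0, 3.0, 4.0]:
    p = bayesian_update(p, lr)
print(round(p, 3))  # the 1% prior has grown to ~0.195

# Frequentist-style compartmentalization: the null of "no significant
# probability" stays accepted throughout, since no single test has
# rejected it -- the estimate remains ~0 while support accumulates.
```

The point of the sketch is that the same evidence that moves a Bayesian from 1% to roughly 20% leaves the "null of zero risk" formally unrejected, which is how huge risks can stay neglected.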