A case for donating to AI risk reduction (including if you work in AI)

post by tlevin (trevor) · 2024-12-02T19:05:06.658Z · LW · GW · 2 comments

Contents

2 comments

Comments sorted by top scores.

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-12-03T02:34:53.422Z · LW(p) · GW(p)

IMO "major donors won't fund this kind of thing" is a pretty compelling reason to look into it, since great opportunities which are illegible or structurally hard to fund definitely exist (as do illegible or hard-to-fund terrible options; do your due diligence). On the other hand, I'm pretty nervous about the community dynamics that emerge when you're granting money while also being socially engaged with, and working in, the field. Caveat donor!

Replies from: trevor
comment by tlevin (trevor) · 2024-12-03T16:31:54.304Z · LW(p) · GW(p)

Agreed, I think people should apply a pretty strong penalty when evaluating a potential donation that creates or worsens these dynamics. There are still some donation opportunities that have the "major donors won't [fully] fund it" and "I'm advantaged to evaluate it as an AIS professional" features without the "I'm personal friends with the recipient" weirdness, though -- e.g. alignment approaches or policy research/advocacy directions that you find promising, that Open Phil isn't currently funding, and that would be executed thousands of miles away.