If I have some money, whom should I donate it to in order to reduce expected P(doom) the most?

post by KvmanThinking (avery-liu) · 2024-10-03T11:31:19.974Z · LW · GW · 1 comment

This is a question post.


I just want to make sure that when I donate money to AI alignment stuff, it's actually going to be used efficiently.

Answers

answer by TsviBT · 2024-10-03T11:57:59.143Z · LW(p) · GW(p)

You probably shouldn't donate to alignment research. There's too much useless stuff with too good PR for you to tell what, if anything, is hopeworthy. If you know any young supergenius people who could dedicate their lifeforce to thinking about alignment FROM SCRATCH given some money, consider giving to them.

If there's some way to fund research that will lead to strong human intelligence amplification, you should do that. I can give some context for that, though not concrete recommendations.

comment by RHollerith (rhollerith_dot_com) · 2024-10-03T13:39:02.673Z · LW(p) · GW(p)

TsviBT didn't recommend MIRI, probably because he receives a paycheck from MIRI and does not want to appear self-serving. I, on the other hand, have never worked for MIRI and am unlikely ever to do so (being of the age at which people usually retire), so I feel free to recommend MIRI without hesitation or reservation.

MIRI has abandoned hope of anyone's solving alignment before humanity runs out of time: they continue to employ people with deep expertise in AI alignment, but those employees spend their time explaining why the alignment plans of others will not work.

Most technical alignment researchers are increasing P(doom): they openly publish results that help both the capability research program and the alignment research program, but the alignment program is very unlikely to reach a successful conclusion before the capability program "succeeds". Publishing those results therefore only shortens the time we have to luck into an effective response or resolution to the AI danger (a response which, again, if one appears, is very unlikely to involve figuring out how to align an AI so that it stays aligned as it becomes an ASI).

There are two other (not-for-profit) organizations in the sector that, as far as I can tell, are probably doing more good than harm, but I don't know enough about them for it to be a good idea for me to name them here.

Replies from: TsviBT
comment by TsviBT · 2024-10-03T13:52:06.685Z · LW(p) · GW(p)

I'm no longer employed by MIRI. I think Yudkowsky is by far the best source of technical alignment research insight, but MIRI's research program was, in retrospect, probably pretty doomed even before I got there. I can see ways to improve it, but I'm not that confident in them, and I can somewhat directly see that I'm probably not capable of carrying out my suggested improvements. And AFAIK, as you say, they're not currently doing very much alignment research. I'm also fine with appearing self-serving; if I were actively doing alignment research, I might recommend myself, though I don't really think it's appropriate to do so to a random person who can't evaluate arguments about alignment research and doesn't know who to trust. I guess if someone pays me enough I'll do some alignment research. I recommend myself as one authority among others on strategy regarding strong human intelligence amplification.

answer by Jeremy Gillen · 2024-10-03T14:29:36.219Z · LW(p) · GW(p)

The non-spicy answer is probably the LTFF, if you're happy deferring to the fund managers there. I don't know what your risk tolerance for wasting money is, but you can check whether they meet it by looking at their track record.

If you have a lot of time, you might be able to find better ways to spend money than the LTFF can. (Like if you can find a good way to fund intelligence amplification, as Tsvi said.)

1 comment


comment by Tamsin Leake (carado-1) · 2024-10-03T13:10:44.192Z · LW(p) · GW(p)

In my opinion the hard part would not be figuring out where to donate to {decrease P(doom) a lot} rather than {decrease P(doom) a little}, but figuring out where to donate to {decrease P(doom)} rather than {increase P(doom)}.