Rank the following based on likelihood to nullify AI risk

post by Aorou (Adnll) · 2022-09-30T11:15:58.171Z · LW · GW · No comments

This is a question post.


Rank the following based on their likelihood of nullifying AI risk (whether by achieving alignment, stopping AI development, or some other way).
If you think you have better solutions to AI risk than the ones I came up with, please add them to your ranking.

[1]


Final Notes:

  1. ^

    I've put line breaks between each category, but in your final ranking please rank them all together and against each other
    (e.g. 1. Give $10B, 2. Convince Major Personalities by 2030, 3. Achieve widespread agreement by 2030...).
    If you’re feeling lazy, just ignore the solutions you think are worthless.

  2. ^

    I use EY (Eliezer Yudkowsky) as a catch-all for [someone competent AND ultra-motivated to nullify AI risk / P(Doom)].
    That being said, there is a reason I’m using Eliezer specifically. I believe he’d be more willing than others to be creative and unconventional, even at the cost of looking foolish or unreasonable. I trust EY is able to navigate outside the Overton Window, and, in his writing, I like his moral code. 

    Feel free to substitute the name of whoever you’d give money to no-questions-asked (and say why them).

    Giving EY money is not the same as funding MIRI or another org. Organizations have to justify themselves to funders in exchange for money. Organizations have to look reasonable. I think that [Organization with an extra $100M] looks different than [EY with an extra $100M].
    (Let’s not have a debate over whether it’s a good or bad idea to give anyone money with zero conditions. Obviously, you’d at least make sure the person is sane.)
    Crux-solving: Would organizations still be constrained by the ‘need to look reasonable’ to the outside world, if one gave them money no-questions-asked? Could the work they do with that money be done in secret?

  3. ^

    Organization... that seeks to nullify P(Doom).
    My list doesn’t contain ‘give Organization $10M’, because I’ve gotten the impression from reading about EA and AI Alignment that money is not a bottleneck right now, but that talent is.
    That said, I do include ‘give organization $1B+’ because maybe, at those amounts, organizations are no longer bottlenecked on talent.

  4. ^

    eg. ~“97% of AI scientists agree that AI Risk is real and that achieving aligned AI is way harder than achieving AI”

  5. ^

    eg. Mark Zuckerberg, Yann LeCun, Sergey Brin, Larry Page, Bill Gates, Jack Ma, Donald Trump, Joe Biden, Barack Obama, Xi Jinping…

  6. ^

    May or may not involve convincing everyone else

Answers

answer by Not Relevant · 2022-09-30T11:30:54.775Z · LW(p) · GW(p)

Other than giving organizations/individuals $1T, which gets into the range of actions like “buy NVIDIA”, IMO the only genuinely relevant thing here is “Achieve widespread agreement[4] on AI risk, by 2025”. All our time pressure problems are downstream of this not being true, and 90% of our problems are time pressure problems. The value of the “key personalities” stuff is just an instrumental step towards the former, unless we are talking about every key personality that sets organizational priorities for every competitive company/government in the West and China.

Re the difference between persuading people that “AI risk is real” and that “doom is more likely than not”, I do not think they are practically very different. The level of societal investment we’d get from the former is roughly the same as from the latter, if people genuinely believed it. Lastly, the total # of talented young folks we’re trying to persuade obviously matters, but unless it’s something like a monopoly, we still have the same time pressure problems. I’d guess it’s useful on the margin (especially for persuading other social actors), but I would take political consensus over this any day of the week.

No comments