Jaan Tallinn's 2023 Philanthropy Overview

post by jaan · 2024-05-20T12:11:39.416Z · LW · GW · 5 comments

This is a link post for https://jaan.info/philanthropy/#2023-results-success

to follow up on my philanthropic pledge [LW · GW] from 2020, i've updated my philanthropy page with 2023 results.

in 2023 my donations funded $44M worth of endpoint grants ($43.2M excluding software development and admin costs), exceeding my commitment of $23.8M (20k ETH times $1190.03, the minimum price of ETH in 2023).
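
for readers who want to check the arithmetic, here's a minimal python sketch (figures are taken from this post; the variable names are illustrative):

```python
# pledge arithmetic from the post above (variable names are illustrative)
eth_pledged = 20_000          # ETH pledged in the 2020 commitment
eth_min_price_2023 = 1190.03  # minimum ETH price in 2023, in USD
grants_2023 = 44_000_000      # endpoint grants funded in 2023, in USD

commitment = eth_pledged * eth_min_price_2023
print(f"commitment: ${commitment:,.0f}")         # commitment: $23,800,600
print(f"exceeded: {grants_2023 >= commitment}")  # exceeded: True
```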

5 comments

Comments sorted by top scores.

comment by ektimo · 2024-05-21T15:57:29.809Z · LW(p) · GW(p)

On behalf of humanity, thank you.

comment by Diziet · 2024-05-21T00:32:15.013Z · LW(p) · GW(p)

Top 10 donations in 2023, since the HTML page offers no sorting and lists the grants by date:

$2,800,000  Cooperative AI Foundation                  General support
$1,846,000  Alignment Research Center                  General support for ARC Evals Team
$1,733,000  Center for Applied Rationality             General support for Lightcone Infrastructure
$1,327,000  Center on Long-Term Risk                   General support
$1,241,000  Manifold for Charity                       General support for Manifold Markets
$1,159,000  Alliance to Feed the Earth in Disasters    General support
$1,000,000  Carnegie Mellon University                 Foundations of Cooperative AI Lab
$1,000,000  Massachusetts Institute of Technology      Gift to the Tegmark research group at MIT for General Support
$1,000,000  Meridian Prime                             General support
  $909,000  Center for Artificial Intelligence Safety  General support
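
A minimal sketch of how such a ranking can be produced, assuming the grants were scraped from the page as (amount, recipient, purpose) tuples; only a few illustrative rows are shown:

```python
# sort scraped grants by amount, descending, and print the top 10
# (assumes each row was extracted as (amount_usd, recipient, purpose))
grants = [
    (2_800_000, "Cooperative AI Foundation", "General support"),
    (1_846_000, "Alignment Research Center", "General support for ARC Evals Team"),
    (1_000_000, "Carnegie Mellon University", "Foundations of Cooperative AI Lab"),
    # ... remaining rows from the philanthropy page
]

top_10 = sorted(grants, key=lambda g: g[0], reverse=True)[:10]
for amount, recipient, purpose in top_10:
    print(f"${amount:>10,}  {recipient}: {purpose}")
```
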
comment by trevor (TrevorWiesinger) · 2024-05-22T20:23:34.481Z · LW(p) · GW(p)

Thank you for making so much possible.

I was just wondering: what are some of the branches of rationality you're aware of that you're currently most optimistic about, and/or would be glad to see more people spending time on, if any? Now that people are rapidly shifting effort to policymaking in DC and the UK (including through EA), which is largely uncharted territory, what texts/posts/branches do you think might be a good fit for them?

I've been thinking that recommending that more people read ratfic [LW · GW] would be unusually good for policy efforts: it's something very socially acceptable for high-minded people to do in their free time, it should have a big impact through extant orgs without costing any additional money, and it's not weird or awkward in the slightest to point to the original source if a conversation gets anyone interested in going deeper into where they got the idea from.

Plus, it gets and keeps people in the right headspace for the curveballs that DC hits people with, which tend to be largely human-generated and therefore simple enough for humans to easily understand, just like the cartoonish simplifications of reality in ratfic (unusually low levels of math/abstraction/complexity, but unusually high levels of linguistic intelligence, creative intelligence, and quick reactions, e.g. in social situations).

But unlike you, I don't have much of a track record making judgments about big decisions like this and then seeing how they play out over years in complicated systems.

Replies from: jaan
comment by jaan · 2024-06-03T06:19:27.240Z · LW(p) · GW(p)

thanks! basically, i think that the top priority should be to (quickly!) slow down the extinction race. if that’s successful, we’ll have time for more deliberate interventions — and the one you propose sounds confidently net positive to me! (with sign uncertainties being so common, confident net positive interventions are surprisingly rare).

comment by Review Bot · 2024-05-21T05:49:04.015Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?