Jaan Tallinn's 2023 Philanthropy Overview
post by jaan · 2024-05-20T12:11:39.416Z · LW · GW · 5 comments
This is a link post for https://jaan.info/philanthropy/#2023-results-success
to follow up my philanthropic pledge [LW · GW] from 2020, i've updated my philanthropy page with 2023 results.
in 2023 my donations funded $44M worth of endpoint grants ($43.2M excluding software development and admin costs) — exceeding my commitment of $23.8M (20k times $1190.03 — the minimum price of ETH in 2023).
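as a quick sanity check, here is a minimal sketch of the commitment arithmetic above (all figures are copied from the post; the variable names are just for illustration):

```python
# commitment arithmetic, using the figures stated in the post
pledge_eth = 20_000            # pledge denominated in ETH ("20k")
eth_min_price_2023 = 1190.03   # minimum price of ETH in 2023, in USD
commitment = pledge_eth * eth_min_price_2023
print(f"commitment: ${commitment:,.0f}")   # commitment: $23,800,600 (~$23.8M)

endpoint_grants = 44_000_000   # endpoint grants funded in 2023
print(endpoint_grants >= commitment)       # True: the commitment was exceeded
```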
5 comments
Comments sorted by top scores.
comment by Diziet · 2024-05-21T00:32:15.013Z · LW(p) · GW(p)
Top 10 donations in 2023, since the HTML page offers no sorting and lists grants by date:
| Amount | Recipient | Purpose |
|---|---|---|
| $2,800,000 | Cooperative AI Foundation | General support |
| $1,846,000 | Alignment Research Center | General support for ARC Evals Team |
| $1,733,000 | Center for Applied Rationality | General support for Lightcone Infrastructure |
| $1,327,000 | Center on Long-Term Risk | General support |
| $1,241,000 | Manifold for Charity | General support for Manifold Markets |
| $1,159,000 | Alliance to Feed the Earth in Disasters | General support |
| $1,000,000 | Carnegie Mellon University | Foundations of Cooperative AI Lab |
| $1,000,000 | Massachusetts Institute of Technology | Gift to the Tegmark research group at MIT for general support |
| $1,000,000 | Meridian Prime | General support |
| $909,000 | Center for Artificial Intelligence Safety | General support |
comment by trevor (TrevorWiesinger) · 2024-05-22T20:23:34.481Z · LW(p) · GW(p)
Thank you for making so much possible.
I was just wondering: what are some branches of rationality that you're currently most optimistic about, and/or would be glad to see more people spending time on, if any? Now that people are rapidly shifting effort to policymaking in DC and the UK (including through EA), which is largely uncharted territory, what texts/posts/branches do you think might be a good fit for them?
I've been thinking that recommending ratfic [LW · GW] to more people would be unusually good for policy efforts: it's something very socially acceptable for high-minded people to do in their free time, it should have a big impact through extant orgs without costing any additional money, and it's not weird or awkward in the slightest to mention the original source if a conversation gets anyone interested in going deeper into where they got an idea from.
Plus, it gets (and keeps) people in the right headspace for the curveballs that DC throws at people, which tend to be largely human-generated and therefore simple enough for humans to easily understand, much like the cartoonish simplifications of reality in ratfic (unusually low levels of math/abstraction/complexity, but unusually high levels of linguistic intelligence, creative intelligence, and quick reactions, e.g. in social situations).
But unlike you, I don't have much of a track record making judgments about big decisions like this and then seeing how they play out over years in complicated systems.
Replies from: jaan
↑ comment by jaan · 2024-06-03T06:19:27.240Z · LW(p) · GW(p)
thanks! basically, i think that the top priority should be to (quickly!) slow down the extinction race. if that’s successful, we’ll have time for more deliberate interventions — and the one you propose sounds confidently net positive to me! (with sign uncertainties being so common, confident net positive interventions are surprisingly rare).
comment by Review Bot · 2024-05-21T05:49:04.015Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?