Why Hasn't Effective Altruism Grown Since 2015?

post by AppliedDivinityStudies (kohaku-none) · 2021-03-09T01:43:03.647Z · LW · GW · 28 comments

Contents

    1. Alienation
    2. Decline is the Baseline
    3. The Fall of LessWrong and Rise of SlateStarCodex
    4. Community Stagnation was Caused by Funding Stagnation
    5. EA Didn't Stop Growing, Google Trends is Wrong
    To sum up:
  A Speculative Alternative: Effective Altruism is Innate

Edit: There's now a follow-up post here. LW Discussion [LW · GW]. EA Forum [EA · GW] and r/ssc.

Here's a chart of GiveWell's annual money moved. It rose dramatically from 2014 to 2015, then more or less plateaued:

Open Philanthropy doesn't provide an equivalent chart, but they do have a grants database, so I was able to compile the data myself. It peaks in 2017, then falls a bit and plateaus:
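For anyone who wants to replicate this, here's a minimal sketch of the aggregation in Python, assuming a CSV export of the grants database (the filename and column names are hypothetical placeholders; the actual export may differ):

```python
import pandas as pd

# Load a CSV export of the Open Philanthropy grants database.
# "grants.csv", "Date", and "Amount" are hypothetical placeholders.
grants = pd.read_csv("grants.csv", parse_dates=["Date"])

# Sum grant dollars by award year to get annual totals.
by_year = grants.assign(year=grants["Date"].dt.year).groupby("year")["Amount"].sum()
print(by_year)
```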

(Note that GiveWell and Open Philanthropy didn't formally split until 2017. GiveWell records $70.4m from Open Philanthropy in 2015, which isn't included in Open Philanthropy's own records. I've emailed them for clarification, but in the meantime, the overall story is the same: a rapid rise followed by several years of stagnation. Edit: I got a reply explaining that years are sometimes off by one, see footnote [0])

Finally, here's the Google Trends result for "Effective Altruism". It grows quickly starting in 2013, peaks in 2017, then falls back down to around 2015 levels. Broadly speaking, interest has been about flat since 2015.

If this data isn't surprising to you, it should be.

Several EA organizations have worked on actively growing the community for years and view it as a priority.

So if EA community growth is stagnating despite these efforts, it should strike you as very odd, or even somewhat troubling. Open Philanthropy decided to start funding EA community growth in 2015/2016 [1]. It's not as if this is only a very recent effort.

As long as money continues to pour into the space, we ought to understand precisely why growth has stalled.

Here are some possible explanations.

1. Alienation

Effective Altruism makes large moral demands, and frames things in a detached quantitative manner. Utilitarianism is already alienating, and EA is only more so.

This is an okay explanation, but it doesn't explain why growth started strong and then tapered off.

2. Decline is the Baseline

Perhaps EA would have otherwise declined, and it is only thanks to the funding that it has even succeeded in remaining flat.

I'm not sure how to disambiguate between these cases, but it might be worth spending more time on. If the goal is merely community maintenance, different projects may be appropriate.

3. The Fall of LessWrong and Rise of SlateStarCodex

Several folk sources indicate that LessWrong went through a decline in 2015. A brief history of LessWrong [LW · GW] says "In 2015-2016 the site underwent a steady decline of activity leading some to declare the site dead." The History of Less Wrong [? · GW] writes:

Around 2013, many core members of the community stopped posting on Less Wrong, because of both increased growth of the Bay Area physical community and increased demands and opportunities from other projects. MIRI's support base grew to the point where Eliezer could focus on AI research instead of community-building, Center for Applied Rationality worked on development of new rationality techniques and rationality education mostly offline, and prominent writers left to their own blogs where they could develop their own voice without asking if it was within the bounds of Less Wrong.

Specifically, some [LW · GW] blame the decline on SlateStarCodex:

With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there.

In other words, SlateStarCodex and LessWrong catered to similar audiences, and SlateStarCodex won out. [2]

This view is somewhat supported by Google Trends, which shows a subtle decline in mentions of "Less Wrong" after 2015, until a possible rebirth in 2020.

Except SlateStarCodex also hasn't been growing since 2015:

The recent data is distorted by the NYT incident, but the story is basically the same: a rapid rise to prominence in 2015, followed by a long plateau. So maybe some users left for Slate Star Codex in 2015, but that doesn't explain why neither community saw much growth from 2015 to 2020.

And here's the same chart, omitting the last 12 months of NYT-induced frenzy:

4. Community Stagnation was Caused by Funding Stagnation

One possibility is that there was not a strange hidden cause behind widespread stagnation. It's just that funding slowed down, and so everything else slowed down with it. I'm not sure what the precise mechanism is, but this seems plausible.

Of course, now the question becomes: why did Open Philanthropy's giving slow? This isn't as mysterious, since it's not an organic process: almost all the money comes from Good Ventures, which is the vehicle for Dustin Moskovitz's giving.

Did Dustin find another pet cause to pursue instead? It seems unlikely. In 2019, Good Ventures provided $274 million total, nearly all of which ($245 million) went to Open Philanthropy recommendations.

Let's go a level deeper and take a look at the Good Ventures grant database aggregated by year:

It looks a lot like the Open Philanthropy chart! They also peaked in 2017, and have been in decline ever since.

So this theory boils down to:

  • EA finances stopped growing because Good Ventures stopped growing
  • Good Ventures stopped growing because the wills and whims of billionaires are inscrutable?

To be clear, the causal mechanism and direction for the first piece of this argument remains speculative. The causality could also run the other way: community stagnation slowed the funding, because there weren't enough new people and projects to absorb more money.

This is plausible, but seems unlikely. Even if you can't give money to AI Safety, you can always give more money to bed nets.

5. EA Didn't Stop Growing, Google Trends is Wrong

Google Trends is an okay proxy for actual interest, but it's not perfect. Basically, it measures the popularity of search queries, but not the popularity of the websites themselves. So maybe instead of searching "effective altruism", people just went directly to forum.effectivealtruism.org and Google never logged a query.

Are there other datasets we can look at?

Giving What We Can doesn't release historical results, but I was able to use archive.org to see their past numbers, and compiled this dataset of money pledged [3] and member count:
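For reference, here's a minimal sketch of that approach using the Wayback Machine's CDX API (the endpoint and parameters are real; extracting the pledge figures from each archived page is left as a placeholder):

```python
import requests

# Ask the Wayback Machine's CDX API for snapshots of the GWWC homepage,
# collapsed to roughly one snapshot per year (first 4 timestamp digits).
resp = requests.get(
    "http://web.archive.org/cdx/search/cdx",
    params={
        "url": "givingwhatwecan.org",
        "output": "json",
        "from": "2010",
        "to": "2020",
        "collapse": "timestamp:4",
    },
)
rows = resp.json()
header, snapshots = rows[0], rows[1:]  # first row lists the field names

for snap in snapshots:
    timestamp, original_url = snap[1], snap[2]
    # Each archived page can be fetched at this URL and scraped for the
    # member count and money-pledged figures shown on the homepage.
    print(f"https://web.archive.org/web/{timestamp}/{original_url}")
```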

So is the entire stagnation hypothesis disproved? I don't think so. Google Trends tracks active interest, whereas Giving What We Can tracks cumulative interest. So a stagnant rate of active interest is compatible with increasing cumulative totals. Computing the annual growth rate for Giving What We Can, we see that it also peaks in 2015, and has been in decline ever since:
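Concretely, the conversion from cumulative totals to annual growth rates looks like this (the figures below are placeholders, not the actual GWWC data):

```python
# Year-end cumulative totals (placeholder numbers, not real GWWC data).
cumulative = {2013: 400, 2014: 700, 2015: 1400, 2016: 1900, 2017: 2400}

years = sorted(cumulative)
for prev, curr in zip(years, years[1:]):
    # Annual growth rate of the cumulative total.
    growth = cumulative[curr] / cumulative[prev] - 1
    print(f"{curr}: {growth:.0%}")
```

A flat level of active interest adds a roughly constant number of new members each year, so the cumulative total keeps rising even as this growth rate steadily falls.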

To sum up:

A Speculative Alternative: Effective Altruism is Innate

You occasionally hear stories about people discovering LessWrong or "converting" to Effective Altruism, so it's natural to think that with more investment we could grow faster. But maybe that's all wrong.

Thing of Things once wrote:

I think a formative moment for any rationalist-- our "Uncle Ben shot by the mugger" moment, if you will-- is the moment you go "holy shit, everyone in the world is fucking insane." [4]

That's not exactly scalable. There will be no Open Philanthropy grant for providing experiences of epistemic horror to would-be effective altruists.

Similarly, from John Nerst's Origin Story:

My favored means of procrastination has often been lurking on discussion forums. I can't get enough of that stuff... Reading forums gradually became a kind of disaster tourism for me. The same stories played out again and again, arguers butting heads with only a vague idea about what the other was saying but tragically unable to understand this.

... While surfing Reddit, minding my own business, I came upon a link to Slate Star Codex. Before long, this led me to LessWrong. It turned out I was far from alone in wanting to understand everything in the world, form a coherent philosophy that successfully integrates results from the sciences, arts and humanities, and understand the psychological mechanisms that underlie the way we think, argue and disagree.

It's not that John discovered LessWrong and "became" a rationalist. It's more like he always had this underlying compulsion, and eventually found a community where it could be shared and used productively.

In this model, Effective Altruism initially grows quickly as proto-EAs discover the community, then hits a wall as it saturates the relevant population. By 2015, everyone who might be interested in Effective Altruism has already heard about it, and there's not much more room for growth no matter how hard you push.

One last piece of anecdotal evidence: despite repeated attempts, I have never been able to "convert" anyone to effective altruism. Not even close. I've gotten friends to agree with me on every subpoint, but still failed to sell them on the concept as a whole. These are precisely the kinds of nerdy and compassionate people you might expect to be interested, but they just aren't. [5]

In comparison, I remember my own experience taking to effective altruism the way a fish takes to water. When I first read Peter Singer, I thought "yes, obviously we should save the drowning child." When I heard about existential risk, I thought "yes, obviously we should be concerned about the far future." This didn't take slogging through hours of blog posts or books; it just made sense. [6]

Some people don't seem to have that reaction at all, and I don't think it's a failure of empathy or cognitive ability. Somehow it just doesn't take.

While there does seem to be something missing, I can't express what it is. When I say "innate", I don't mean it's true from birth. It could be the result of a specific formative moment, or an eclectic series of life experiences. Or some combination of all of the above.

Fortunately, we can at least start to figure this out through recollection and introspection. If you consider yourself an effective altruist, a rationalist or anything adjacent, please email me about your own experience. Did Yudkowsky convert you? Was reading LessWrong a grand revelation? Was the real rationalism deep inside of you all along? I want to know.


I'm at applieddivinitystudies@gmail.com, or if you read the newsletter, you can reply to the email directly. I might quote some of these publicly, but am happy to omit yours or share it anonymously if you ask.

Data for Open Philanthropy and Good Ventures is available here. Data for Giving What We Can is here. If you know how Open Philanthropy's grant database accounts for funding before it formally split off from GiveWell in 2017, please let me know.

Disclosure: I applied for funding from the EA Infrastructure Fund last week for an unrelated project.


Footnotes

[0] Open Philanthropy writes:

Hi, thanks for reaching out.

Our database's date field denotes a given grant's "award date," which we define as the date when payment was distributed (or, in the case of grants paid out over multiple years, when the first payment was distributed). Particularly in the case of grants to organizations based overseas, there can be a short delay between when a grant is recommended/approved and when it is paid/awarded. (For more detail on this process, including average payment timelines, see our Grantmaking Stages page.) In 2015/2016, these payment delays resulted in top charity grants to AMF, DtWI, SCI, and GiveDirectly totaling ~$44M being paid in January 2016 and falling under 2016 in your analysis even as GiveWell presumably counted those grants in its 2015 "money moved" analysis.

Payment delays and "award date" effects also cause some artificial lumpiness in other years. For example, some of the largest top charity grants from the 2016 giving season were paid in January 2017 (SCI, AMF, DtWI) but many of the largest 2017 giving season grants were paid in December 2017 (Malaria Consortium, No Lean Season, DtWI). This has the effect of artificially inflating apparent 2017 giving relative to 2018. Other multi-year grants are counted as awarded entirely in the month/year the first payment was made -- for example, our CSET grant covering 2019-2023 first paid in January 2019. So I wouldn't read too much into individual year-to-year variation without more investigation.

Hope this helps.

[1] For more on OpenPhil's stance on EA growth, see this note from their 2015 progress report:

Effective altruism. There is a strong possibility that we will make grants aimed at helping grow the effective altruist community in 2016. Nick Beckstead, who has strong connections and context in this community, would lead this work. This would be a change from our previous position on effective altruism funding [EA · GW], and a future post will lay out what has changed. [emphasis mine]

[2] For what it's worth, the vast majority of SlateStarCodex readers don't actually identify as rationalist or effective altruists.

[3] My Giving What We Can dataset also has a column for money actually donated, though the data only goes back to 2015.

[4] I'm conflating effective altruism with rationalism in this section, but I don't think it matters for the sake of this argument.

[5] For what it's worth, I'm typically pretty good at convincing people to do things outside of effective altruism. In every other domain of life, I've been fairly successful at getting friends to join clubs, attend events, and so on, even when it's not something they were initially interested in. I'm not claiming to be exceptionally good, but I'm definitely not exceptionally bad.

But maybe this shouldn't be too surprising. Effective Altruism makes a much larger demand than pretty much every other cause. Spending an afternoon at a protest is very different from giving 10% of your income.

Analogously, I know a lot of people who intellectually agree with veganism, but won't actually do it. And even that is (arguably) easier than what effective altruism demands.

[6] In one of my first posts, I wrote:

Before reading A Human's Guide to Words [? · GW] and The Categories Were Made For Man, I went around thinking "oh god, no one is using language coherently, and I seem to be the only one seeing it, but I cannot even express my horror in a comprehensible way." This felt like a hellish combination of being trapped in an illusion, questioning my own sanity, and simultaneously being unable to scream. For years, I wondered if I was just uniquely broken, and living in a reality that no one else seemed to see or understand.

It's not like I was radicalized or converted. When I started reading LessWrong, I didn't feel like I was learning anything new or changing my mind about anything really fundamental. It was more like "thank god someone else gets it."

When did I start thinking this way? I honestly have no idea. There were some formative moments, but as far back as I can remember, there was at least some sense that either I was crazy, or everyone else was.

28 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2021-03-09T13:30:43.347Z · LW(p) · GW(p)

I think EA made a strategic choice not to rapidly grow, for various reasons:

  • "Eternal September" worries: rapid growth makes it harder to filter for fit, makes it harder to bring new members up to speed, makes it harder to for high-engagement EAs to find each other, etc.
  • A large movement is harder to "steer". Much of EA's future impact likely depends on our ability to make unusually wise prioritization decisions, and rapidly update and change strategy as we learn more. Fast growth makes it less likely we'll be able to do this, and more likely we'll either lock in our current ideas as "the truth" (lest the movement/community refuse when it comes time for us to change course), or end up drifting toward the wrong ideas as emotional appeal and virality comes to be a larger factor in the community's thinking than detailed argument.
  • As EA became less bottlenecked on "things anyone can do" (including donating) and more bottlenecked on rare talents, it became less valuable to do broad "grow the movement" outreach and more valuable to do more targeted outreach to plug specific gaps.
  • This period also overlapped with a shift in EA toward longtermism and x-risk. It's easier to imagine a big nation-wide movement that helps end malaria or factory farming, whereas it's much harder to imagine a big nation-wide movement that does reasonable things about biorisk or x-risk, since those are much more complicated problems requiring more specialized knowledge. So a shift toward x-risk implies less interest in growth.
  • Rapid growth is an irreversible decision, so it lost some favor just for the sake of maintaining option value. If you choose not to grow, you can always take the brakes off later should you change your mind. If you choose to grow, you probably can't later decide to (painlessly) contract.

There was a fair bit of discussion in 2014-2015 about the dangers of growing EA. Anna Salamon gave a talk to EA leaders in 2014 outlining pros and cons of growth, and in 2015 I think I remember "growth is plausibly a bad idea" becoming a more popular view.

That's one story about what happened, anyway. I wouldn't be shocked if some EA leaders saw things differently.

Note that GiveWell and Open Philanthropy didn't formally split until 2017.

Also note that Open Philanthropy officially launched in August 2014.

Some other events that happened around this time:

  • SSC started in Feb 2013.
  • Peter Singer's "effective altruism" TED talk was in May 2013.
  • 2014-2015 is also when AI x-risk "went mainstream": Stephen Hawking made waves talking about it in May 2014, Superintelligence came out in July 2014, Elon Musk made more waves in August 2014, MIRI introduced the first alignment research agenda in December 2014, FLI's Puerto Rico conference and open letter was January 2015, and OpenAI launched in December 2015.

I could imagine those causing step changes in EA's size.

"Some people don't seem to have that reaction at all, and I don't think it's a failure of empathy or cognitive ability. Somehow it just doesn't take.

While there does seem to be something missing, I can't express what it is."

Failure of taking ideas seriously [? · GW]?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-03-09T14:13:03.066Z · LW(p) · GW(p)

One reason I might have expected at least somewhat more growth recently: Vox launched an effective altruism vertical in October 2018.

comment by Peter Wildeford (peter_hurford) · 2021-03-09T06:53:48.524Z · LW(p) · GW(p)

1.) I think the core problem is that honestly no one (except 80K) has actually invested significant effort in growing the EA community since 2015 (especially compared to the pre-2015 effort, and especially as a percentage of total EA resources).

2.) Some of these examples are suspect. The GiveWell numbers definitely look to be increasing beyond 2015, especially when OpenPhil's understandably constant fundraising is removed - and this increase in GiveWell seems to line up with GiveWell's increased investment in their outreach. The OpenPhil numbers also look just to be sensitive to a few dominant eight figure grants, which understandably are not annual events. (Also my understanding is that Open Phil is starting off slowly intentionally but will aim to ramp up significantly in the near future.)

3.) As I capture in "Is EA Growing? EA Growth Metrics for 2018" [EA · GW], many relevant EA growth statistics have peaked after 2015 or haven't peaked yet.

4.) There are still a lot of ways EA is growing other than what is captured in these graphs. For example, I bet something like total budget of EA orgs has been growing a lot even since 2015.

5.) Contrary to the "EA is inert" hypothesis, EA Survey data has shown that many people have been "convinced" of EA. Furthermore, our general population surveys show that the vast majority of people (>95% of US university students) have still not heard of EA.

comment by Peter Wildeford (peter_hurford) · 2021-03-09T06:44:13.027Z · LW(p) · GW(p)

FWIW I put together "Is EA Growing? EA Growth Metrics for 2018" [EA · GW] and I'm looking forward to doing 2019+2020 soon

Replies from: kohaku-none
comment by AppliedDivinityStudies (kohaku-none) · 2021-03-09T16:40:50.868Z · LW(p) · GW(p)

This is great, thanks! Wish I had seen this earlier.

comment by Vaniver · 2021-03-09T18:29:50.347Z · LW(p) · GW(p)

That's not exactly scalable. There will be no Open Philanthropy grant for providing experiences of epistemic horror to would-be effective altruists.

I will be interested to see what happens with the aftermath of the pandemic; I think a lot of people took EA/rationality more seriously after we seemed to be ahead of the curve / much more reasonable than the 'official experts'. But I don't think this has shown up in the 2020 numbers, in part because the pandemic has shut down a lot of the events that would capitalize on that increased seriousness. But maybe now with Biden elected, people will deradicalize / forget the epistemic horror? Unclear.

comment by mingyuan · 2021-03-09T20:41:10.219Z · LW(p) · GW(p)

Other people have already replied well to the central point of this post, so I'll say something different: I think you misunderstand the relationship between Good Ventures and Open Phil. You frame it as:

  • EA finances stopped growing because Good Ventures stopped growing
  • Good Ventures stopped growing because the wills and whims of billionaires are inscrutable?

This isn't how it works. Disclaimer: I have worked for both GiveWell and Open Philanthropy in the past, but it's been more than two years since I was involved at all and also I was mostly, like, an intern-level person the whole time. But to be safe with confidentiality stuff I'll just draw on public information. From Wikipedia:

Good Ventures plans to spend out the majority of its money before the death of Moskovitz and Tuna, rather than be a foundation in perpetuity. Most of the money for the foundation comes from the stock Moskovitz obtained as a Facebook co-founder. They are working closely with charity evaluator GiveWell to determine how to spend their money wisely. At GiveWell's recommendation, Good Ventures is not currently spending a significant share of the couple's wealth, but they plan to up their spending to 5% of the foundation's wealth every year once GiveWell has built sufficient capacity to help allocate that level of money.

The key points here being:

  • Good Ventures is spending at way less than full capacity because GiveWell/Open Phil* told them to. They could clearly be spending more.
  • GiveWell/Open Phil told them not to spend more because they didn't know how to usefully spend that money.
  • Good Ventures is a foundation. This means** it has an endowment funded by Moskovitz's personal fortune, analogous to the Gates Foundation or the Chan-Zuckerberg Initiative. So it doesn't really make sense to talk about a foundation 'growing'. I guess the endowment can grow if it's accumulating interest faster than it's being spent down, but that's different from what I think you meant.

Bottom line being, funding from Good Ventures has never been the bottleneck when it comes to money moved. The bottleneck is knowing how to usefully allocate that money. It's not as simple as "you can always give more money to bednets", because Open Phil / GiveWell / Good Ventures doesn't like to provide more than 50% of the funding for any organization. (The reason being that there are a bunch of bad things that happen if an organization becomes primarily dependent on any one funder; I didn't find a specific GW/OP blog post on this but I can elaborate if someone asks.)

ETA: Two relevant blog posts by Holden here and here.

--

*Yes I know GiveWell and Open Phil are separate now but I don't think that's relevant to my point

**I'm not an expert on what exactly a foundation is so this is just my sense; people can correct me if I'm wrong

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2021-03-09T22:20:29.797Z · LW(p) · GW(p)

It's not as simple as "you can always give more money to bednets", because Open Phil / GiveWell / Good Ventures doesn't like to provide more than 50% of the funding for any organization.

They also think they can find more valuable things to spend the money on than bednets and GiveDirectly. (Or at least, they think this is likely enough to justify not spending it all now.)

Replies from: mingyuan
comment by mingyuan · 2021-03-09T22:54:49.848Z · LW(p) · GW(p)

Oh yeah that too. Rob with the assist!

comment by Vaniver · 2021-03-09T18:23:43.944Z · LW(p) · GW(p)

One possibility is that there was not a strange hidden cause behind widespread stagnation. It's just that funding slowed down, and so everything else slowed down with it. I'm not sure what the precise mechanism is, but this seems plausible.

FWIW, my sense is the opposite seems more likely: if growth in the number of EAs (and, particularly, EAs who founded organizations that seemed good) kept up, then funding would have matched the growth. Most orgs / funders that I'm familiar with are looking for more good ways to spend money (while not just embezzling it or w/e), rather than allocating limited funds among too many promising options.

[Related to other "would more growth be bad?" questions, one common way to 'hack' growth metrics is mergers and acquisitions. You could imagine the world where people went around to various other charities and groups, trying to get them to join the EA umbrella; the charts would look more 'up and to the right', but the underlying 'innate' variable might still be the primary factor in how many 'real EAs' there are.]

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-03-09T06:51:29.130Z · LW(p) · GW(p)

EA (and rationality) might be an ouroboros.

If you act on it by donating, you can be done with it. It's a conversation-stopper.

Alternatively, if you act on it through direct work, it becomes pressing to focus on the specific details of that work. The ideas, institutions, and community surrounding EA/LW can't follow you into the weird specialized niche you'll inevitably grow into. To navigate that area, EA/LW offer very little value, even though those were the motivating forces that directed you there in the first place.

Cultural forms seem to last and grow when the main way you participate is by talking, identifying, and showing up to meetings; and when they appeal to people who have a strong need for that.

What I think might help would be if EAs shifted their focus away from the ideology and towards building relationships with each other. If EA was less about "help the world" and more about "help each other help the world," I think we'd get farther faster.

Replies from: PeterMcCluskey
comment by PeterMcCluskey · 2021-03-09T19:15:36.617Z · LW(p) · GW(p)

If you act on it by donating, you can be done with it. It’s a conversation-stopper.

In 2014, it felt like donations were a good conversation topic. There were enough new charities to evaluate that it was worthwhile to get other people's opinions. The EA community and the number of new charities were small enough that we could come close to knowing most of the people involved in starting the charities, and expect most EAs to know something about those new charities.

Then the EA movement became much larger than the Dunbar number, it became harder to keep track of all the charities, and the value of additional funding declined a bit. At least some of those factors made it harder for EA to be a good community.

comment by interstice · 2021-03-09T03:54:20.259Z · LW(p) · GW(p)

Data point: I was definitely in the "rationalism seemed innately obvious" camp. re: the broader reach of EA, can't confirm or deny either way myself, but here's an alternative perspective.

Maybe the "1% of the 1%" he mentions are the people who naturally take to EA? I also suspect that the undergrads he talks to are far from a random sample of elite-college undergrads. I think the analogy with communism is important -- like any intellectual movement, most of the expected impact of EA probably comes from in its potential to influence key decision-makers at some future pivotal moment.

comment by Ben Pace (Benito) · 2021-03-09T01:54:27.798Z · LW(p) · GW(p)

Solid post.

You may also wish to crosspost to the EA Forum (a site that looks notably like this one) where I imagine people will feel defensive about EA not growing and then generate lots of alternative explanations for you! :)

I like the point that OpenPhil giving leveled out at the same time. I'd be interested in seeing the OpenPhil chart for net giving disaggregated into the different areas, especially "Other Areas" (which is what they bucket lots of the EA stuff into) and "Potential Risks from Advanced Artificial Intelligence" (which I think has seen a lot of growth in funding rationalist/EA orgs but perhaps also leveled out).

Replies from: tylermaule, kohaku-none
comment by tylermaule · 2021-03-10T00:22:35.194Z · LW(p) · GW(p)

I'd be interested in seeing the OpenPhil chart for net giving disaggregated into the different areas

see here and here [EA · GW]

comment by AppliedDivinityStudies (kohaku-none) · 2021-03-09T02:18:03.343Z · LW(p) · GW(p)

Thanks! I have an x-post here pending moderator approval https://forum.effectivealtruism.org/posts/dRkGXHxKGWwWY6AqP/why-hasn-t-effective-altruism-grown-since-2015-1 [EA · GW]

Replies from: Benito
comment by Ben Pace (Benito) · 2021-03-09T02:26:59.348Z · LW(p) · GW(p)

Ah, pending moderator approval. It seems that x-posts come with a certain amount of x-risk...

comment by Charlie Steiner · 2021-03-09T06:30:41.197Z · LW(p) · GW(p)

It sounds like you've made a good case for high noise in the data. I was around on the internet in 2010, when arguments about the "global warming pause" were everywhere. And this is triggering the same sort of detectors. Not in the sense of "I have an inside view of comparable strength to global warming," I mean in the sense of "my model tells me to expect noise at least this big sometimes, so the shape of this graph isn't as informative as it first appears, and we kinda have to wait and see."

comment by tylermaule · 2021-03-09T21:08:08.833Z · LW(p) · GW(p)
  1. As Katja's response alludes to, the non-Open-Phil chunk of GiveWell has more than doubled since 2015 (plus EA funds has gone from zero to $9M, etc.)
  2. Although Open Phil's contribution to GiveWell has remained roughly similar since 2015, the amount they direct to EA as a whole has grown substantially [EA · GW]
comment by Viliam · 2021-03-11T22:00:29.535Z · LW(p) · GW(p)

In other words, SlateStarCodex and LessWrong catered to similar audiences, and SlateStarCodex won out.

SSC kept the (online) rationalist community alive when it was most needed: when LW "died" for a few months. Also, SSC spread the idea of effective altruism to a new audience: the subset of SSC readers who are not LW readers. I don't see how users switching from LW to SSC could have a negative impact on effective altruism.

Among the options provided in the article, 2 and 6 (the speculative alternative) felt plausible to me. However... what other people already wrote here.

Explanation 2 because "people getting enthusiastic about something, and abandoning it a few months or years later" sounds like typical human behavior, the null hypothesis for "people trying new things".

Explanation 6 because, if I may generalize from 1 example [LW · GW], that's how it was for me and rationality. It didn't feel like "Eliezer converted me", but rather like "for decades I felt like a weird person for caring about things literally no one else seemed to care about... and then I found a blog from a guy on the other side of the planet, who cared about similar things, came to similar conclusions, and actually took it even further but in a direction that felt obviously correct to me". Also, when I tried to popularize LW in my social circles, the reactions I got were similar to reactions I previously got for my own thoughts: most people ignored it, some people were amused for 5 minutes, then tried to integrate it into some bullshit they already believed and essentially use "rationality" as just another applause light.

From that I conclude that LW-style rationalists are... well, "born" is perhaps too strong a word, but I would not be surprised if there was a test we could give to 13-year-old kids that would quite reliably predict whether 20 years later they will or will not like LessWrong. Because I am pretty sure my 13-year-old self would have the same "gods, I'm not the only sane person on this planet" reaction on reading the Sequences. -- But as I said, 1 example.

And similarly, I believe it works the same with effective altruism. There is something almost innate that makes you either agree or disagree emotionally with the proposition that we should care about how much good we do (as opposed to just doing random things and declaring that all non-zero values are alike and anyone who suggests otherwise is a horrible person), which is a thing I have; and then another thing which makes you react to this knowledge by actually donating because you care about other people so much that you are willing to make a non-trivial personal sacrifice, which I admit I have not (though my PR module would prefer to say I do).

Though I am not opposed to further advertising effective altruism (or rationality), because there are probably still many people who haven't noticed that it exists yet.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-03-11T22:04:37.120Z · LW(p) · GW(p)

"when LW "died" for a few months" <- more like a few years.

comment by Unnamed · 2021-03-10T06:05:40.779Z · LW(p) · GW(p)

The "all other money moved" bars on the first GiveWell graph (which I think represent donations from individual donors) do look a lot like exponential growth. Except 2015 was way above the trend line (and 2014 & 2016 a bit above too).

If you take the first and last data points (4.1 in 2011 & 83.3 in 2019), it's a 46% annual growth rate.

If you break it down into four two-year periods (which conveniently matches the various little sub-trends), it's:

2011-13: 46% annual growth (4.1 to 8.7)
2013-15: 123% annual growth (8.7 to 43.4)
2015-17: 3% annual growth (43.4 to 45.7)
2017-19: 35% annual growth (45.7 to 83.3)

2019 "all other money moved" is exactly where you'd project if you extrapolated the 2011-13 trend, although it does look like the trend has slowed a bit (even aside from the 2015 outlier) since 35% < 46%.

If GiveWell shares the "number of donors" count for each year that trend might be smoother (less influenced by a few very large donations), and more relevant for this question of how much EA has been growing.

Funding from Open Phil / Good Ventures looks more like a step function, with massive ramping up in 2013-16 and then a plateau (with year-to-year noise). Which is what you might expect from a big foundation - they can ramp up spending much faster than what you'd see with organic growth, but that doesn't represent a sustainable exponential trend (if Good Ventures had kept ramping up at the same rate then they would have run out of money by now).

The GWWC pledge data look like linear growth since 2014, rather than exponential growth or a plateau.

On the whole it looks like there has been growth over the past few years, though the growth rate is lower than it was in 2012-16 and the amount & shape of the growth differs between metrics.

comment by Nebulus · 2021-03-09T17:43:50.048Z · LW(p) · GW(p)

Thank you for the post. I joined Less Wrong less than a year ago, so personally I appreciated getting more context around it.

I'd like to respond to your last point, on whether EA is innate. I agree that those who join EA share at least a strong common denominator. From my own experience, I'd say that EA should easily catch the interest of anyone curious enough. When I mentioned the movement to my professors or friends, they were very much intrigued. As you mentioned, however, it wasn't enough for them to actually join the movement. They see the usefulness but do not act on it. I was (and still am) very confused by that. If I have to put it into words, I would say they do not identify with the movement. They have their life plans, their own visions of the world, and adhering to EA would change too much. It would change them. And that's too steep a price.

So I think that if there's any answer to be found in my experience, it's that EA requires the willingness to change yourself. 

comment by ChristianKl · 2021-03-09T09:49:14.677Z · LW(p) · GW(p)

Given that we observe EA growth stalling, a natural question is to ask where people went instead. I would suspect that a lot of people who went to university and wanted to create positive change in the world went into social justice activism. That activism is based on a different epistemology that focuses on the experience of various groups that are perceived as unprivileged instead of focusing on empirical evidence and utility calculations.