Does the rationalist community have a membership funnel?

post by Alex_Altair · 2022-04-12T18:44:48.795Z · LW · GW · 4 comments

This is a question post.


I was talking to someone the other day about the ways in which I've noticed the [Berkeley] rationalist community changing. One of the main ways was that group houses seemed to be disappearing. People were getting older, moving away, or just moving into their own houses to have kids. It then occurred to me that this doesn't seem to be happening on the EA side of the community. Thinking about it more, it seems to me that EA has quite a strong funnel in the form of student groups. I semi-regularly hear about events, companies, projects, or just impressive people coming from EA student groups. Meanwhile, I'm not even aware of a single rationalist student group (although I'm sure there are some).

When I think about where rationalists came from, my answer is 1) EY writing the original sequences, and 2) EY writing HPMOR. It feels like those things happened, tons of people joined, and then they stopped happening, and people stopped joining. Then people got older, and now we have a population pyramid problem.

I think this is something of a problem for the mission of preventing AI x-risk. It is of course great to have lots of EAs around, but I think the people the rationalist community would differentially appeal to would provide a lot of value that EA-leaning people would be much less likely to provide (a focus on AI, obsessive investigation of confusing yet important subjects, etc.).

Do others agree with the pattern? Do you also see it as a problem? Any suggestions for what we could do about it? Why aren't there many rationalist student groups?

Answers

answer by dkirmani · 2022-04-12T20:12:12.453Z · LW(p) · GW(p)

When I think about where rationalists came from, my answer is 1) EY writing the original sequences, and 2) EY writing HPMOR. It feels like those things happened, tons of people joined, and then they stopped happening, and people stopped joining.

Is this really the case? I'm eighteen, and I know a few people here in my age group.

I also didn't take either of the membership routes you listed. When I was 16.75 (±0.33) years old, I happened upon a Slate Star Codex post (might've been this one). I thought "Wow, this blog is great!" and proceeded to read all of the SSC back catalog[1]. Once I ran out of posts, I saw "LessWrong" under SSC's "blogroll" header, and the rest is history. I didn't systematically read the sequences[2], but instead just read whatever looked interesting on the frontpage. I had previously read Superintelligence, Thinking, Fast and Slow, and Surely You're Joking, Mr. Feynman, so I already had some background exposure to the ideas discussed here.


  1. I was trying to curb my Reddit addiction, so I used SSC, Hacker News (and later, LW) as substitutes. Still do. ↩︎

  2. And I never have. Did read HPMoR, though. ↩︎

comment by Alex_Altair · 2022-04-12T20:24:01.634Z · LW(p) · GW(p)

I'll be thrilled to find out that my premise is wrong!

Replies from: Big Tony
comment by Big Tony · 2022-04-13T00:13:09.849Z · LW(p) · GW(p)

Don't over-index on this particular answer being a refutation of your hypothesis!

I came to LessWrong via HPMOR, and I've thought in the same vein myself (if HPMOR/equivalent = more incoming rationalists, no HPMOR/equivalent = ...less incoming rationalists?).

comment by Ben (ben-lang) · 2022-10-21T13:46:00.382Z · LW(p) · GW(p)

My experience was similar. I am a little older (early 30s). I stumbled into a random LessWrong article when looking for something specific online, then got really into the site just from "homepage shooting". HPMOR came quite late in the day for me. Like you, I only half read "the sequences" (not anything as sensible as "the first half"; I just turned them into Swiss cheese from my homepage shooting).

comment by lc · 2022-06-06T06:26:50.794Z · LW(p) · GW(p)

I'm 22, and I came from SSC as well, but my intuition is that most people here are older than me.

comment by Kayden (kunvar-thaman) · 2022-06-06T07:56:31.601Z · LW(p) · GW(p)

I'm 22 (±0.35) years old and have been seriously getting involved with AI safety over the last few months. However, I chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19.

The generational shift is a concern to me because, as we start losing people who've accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time could be wasted re-developing ideas via routes that have already been explored. Of course, there's a lot of utility in coming up with ideas from the ground up, but there comes a time when you accept and build upon an existing framework based on true statements. Regardless of whether the timelines are shorter than we expect, this is a cause for concern.

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2022-04-12T19:27:15.461Z · LW(p) · GW(p)

Do others agree with the pattern? Do you also see it as a problem?

Yes. No.

I don't think it's a problem for a couple reasons:

  • my AI timelines are short enough that it's not going to become very pressing
  • if it does become a pressing problem, it will be solved by a new generation of folks who will solve it themselves, better than we did, because they'll live in the culture we affected (cf. the Reformation -> the Enlightenment -> Victorian-era science -> General Semantics -> LessWrong pipeline)

We could try to do something about it but I think it's quite likely we'd end up solving the wrong problem because we'd be trying too much to recreate what we needed when we were 20 rather than what new people coming up need. Each of us has to rediscover how to live for ourselves, so our duty is mainly to leave behind lots of clues about things we've already figured out to speed them along their way.

answer by ChristianKl · 2022-04-13T13:13:50.581Z · LW(p) · GW(p)

Right now, I'm partly afraid that the ~30-person room where I'm holding an ACX meetup next Monday will be too small. I already know from emails that people who haven't attended any past meetups will be coming. Scott's blog is similar to what EY was doing when he wrote the sequences: it's a membership funnel.

On the LessWrong side, we are now in a situation where there are a lot more resources for community development than there have been in the past. While CEA didn't want to fund rationality community development, we now have funders [LW · GW] who have expressed willingness to fund it.

Later this year, there's a retreat for community organisers [LW · GW] that includes funding for attendees' travel costs.

If you want to do something to grow the rationality community, the times are great and I want to encourage you to take action. 

comment by Chris_Leong · 2022-06-06T04:59:07.301Z · LW(p) · GW(p)

What city are you in?

Replies from: ChristianKl
comment by ChristianKl · 2022-06-06T05:56:24.363Z · LW(p) · GW(p)

I'm in Berlin.

Replies from: Chris_Leong
comment by Chris_Leong · 2022-06-06T07:47:59.149Z · LW(p) · GW(p)

Looks like Berlin is becoming a real hub.

Replies from: ChristianKl
comment by ChristianKl · 2022-06-06T11:01:23.720Z · LW(p) · GW(p)

That's currently the intention of myself and a few other people. 

answer by Big Tony · 2022-04-13T01:27:17.668Z · LW(p) · GW(p)

Do others agree with the pattern? Do you also see it as a problem?

Yes. Somewhat, yes.

Any suggestions for what we could do about it?

In the ideal world, EY and others would launch into writing fun and interactive fiction!

That's probably not going to happen, so in the real world: be the change you want to see.

If you think it's a good idea, and you have the time and the inclination to do it — do it :)

4 comments

Comments sorted by top scores.

comment by Aay17ush · 2022-04-12T20:13:38.992Z · LW(p) · GW(p)

I assume EA student groups have a decent number of rationalists in them (30%?), so the two categories aren't so easily separable, and thus it's not as bad as it sounds for rationalists.

comment by Jalex Stark (jalex-stark-1) · 2022-04-13T01:07:56.034Z · LW(p) · GW(p)

I think the most important claim you make here is that trying to fit into a cultural niche called "rationality" makes you a more effective researcher than trying to fit into a cultural niche called "EA". I think this is a plausible claim (e.g. I feel this way about doing a math or philosophy undergrad degree over an economics or computer science one), but I don't intuitively agree with it. Do you have any arguments in favor?

Replies from: Alex_Altair
comment by Alex_Altair · 2022-04-13T01:12:07.171Z · LW(p) · GW(p)

Hm, so maybe a highly distilled version of my model here is that EAs tend to come from a worldview of trying to do the most good, whereas rationalists tend to come from a worldview of Getting the Right Answer. I think the latter is more useful for preventing AI x-risk. (Though to be very clear, the former is also hugely laudable, and we need orders of magnitude more of both types of people active in the world; I'm just wondering if we're leaving value on the table by not having a rationalist funnel specifically.)

Replies from: jalex-stark-1
comment by Jalex Stark (jalex-stark-1) · 2022-04-14T05:16:55.209Z · LW(p) · GW(p)

I think I get what you're saying now; let me try to rephrase. We want to grow the "think good and do good" community. We have a lot of, let's say, "recruitment material" that appeals to people's sense of do-gooding, so unaligned people who vaguely want to do good might trip over the material and get recruited. But we have less of that on the think-gooding side, so there's a larger untapped pool of unaligned people who want to think good that we could recruit.

Does that seem right? 

Where does the Atlas fellowship fall on your scale of "recruits do-gooders" versus "recruits think-gooders"?