What exactly is the "Rationality Community?"
post by Raemon · 2017-04-09T00:11:44.364Z · LW · GW · Legacy · 10 comments
[This is the second post in the Project Hufflepuff sequence [? · GW], which I no longer quite endorse in its original form. But this post is particularly standalone]
I used to use the phrase "Rationality Community" to mean three different things. Now I only use it to mean two different things, which is... well, a mild improvement at least. In practice, I was lumping a lot of people together, many of whom neither wanted to get lumped together nor had much in common.
As Project Hufflepuff took shape, I thought a lot about who I was trying to help and why. And I decided the relevant part of the world looks something like this:
I. The Rationalsphere
The Rationalsphere is defined in the broadest possible sense - a loose cluster of overlapping interest groups, communities and individuals. It includes people who disagree wildly with each other - some who are radically opposed to one another. It includes people who don’t identify as “rationalist” or even as especially interested in “rationality” - but who interact with each other on a semi-regular basis. I think it's useful to be able to look at that ecosystem as a whole, and talk about it without bringing in implications of community.
There is no single feature defining people in the rationalsphere, but there are overlapping habits and patterns of thought. I'd guess that any two people in the cluster share at least one of the following features:
- Being attentive to ways your mind is unreliable
- Desire to understand objective reality.
- Willingness to change your mind about ideas that are important to you.
- Having goals, which you care about achieving badly enough to decide “if my current habits of thought are an obstacle to achieving those goals, I want to prioritize changing those habits.”
- Ambitious goals that require a higher quality of decision-making than most humans have access to.
People invested in the rationalsphere seem to have three major motivations:
- Truthseeking - How do we improve our thinking? How do we use that improved-thinking to better understand the world?
- Impact - A lot of things in the world could be a lot better. Some see this in moral terms - there are suffering people and unrealized potential and we have an obligation to help. Others see this purely in terms of opportunity and excitement, and find the concept of “altruism” off-putting or harmful. But they share this: an interest in having a big impact, while understanding that ‘having a big, intentional impact’ is very hard. And confusing. Lots of people have tried, and failed. If we’re to succeed, we will need better understanding and resources than we have now.
- Human/Personal - Your individual life and the people you know could also be a lot better. How can you and the people you love have as much fulfillment as possible?
For some people in the rationalsphere, “Doing a good job at being human” is a thing they’re already doing and don’t feel a need to approach from an especially “rationality”-flavored perspective, but they still use principles (such as goal factoring) gleaned from the overall rationality project.
Others specifically do want to be part of a culture that lets them succeed at Project Human in a way that is uniquely “rationalist” - either because they want rationality principles percolating through their entire life, or because they like various cultural artifacts.
II. The Broader "Rationality Community"
Within the Rationalsphere, there is a subset of people that specifically want a community. They also disagree on a lot, but often want some combination of the following:
- Social structures that make it easy to find friends, colleagues, and perhaps romantic partners who also care about one or more of the three focus areas.
- Social atmosphere that inspires and helps one to improve at one or more of the three focus areas.
- Institutions that actively pursue one of the three in a serious fashion, and that collaborate when appropriate.
- Sharing memes/culture/history. Feeling like “these are my people.”
The overlapping social structures for each focus benefit each other. Here are some examples. (I want to note that I don’t think all of these are unambiguously good. Some might trigger alarm bells, for good reason)
- The Center for Applied Rationality (CFAR) is able to develop techniques that help people communicate better, think more clearly, be more effective, and choose to work on more high-impact projects (I think even with their more explicit shift [LW · GW] towards "help with AI", they will continue to have this effect in areas non-adjacent to AI).
- In addition to helping their alumni progress on their own truthseeking, impact and human-ing, CFAR leaves in its wake a community more energetic about trying additional experiments of their own.
- Giving What We Can encourages people to fund various projects (“EA” and non-EA) more seriously than they otherwise would - in a cluster of people who might otherwise fail to do so at all.
- Startup culture helps encourage people to launch ambitious projects of various stripes, which build people’s individual skills in addition to hopefully having a direct impact on the world of some sort.
- There are spaces where the Human and Truthseeking foci overlap, that create an environment friendly for people who like to think and talk deeply about complex concepts, for whom this is an important part of what they need to thrive as individuals. It’s really hard to find environments like this elsewhere.
- GiveWell and the Open Philanthropy Project help people concerned with Impact in obvious ways, but this in turn plays an important role for the Human focus - it gives people who don’t intrinsically care that much about effective altruism a way to contribute to it *without* taking too much of their attention. This is good for the world *and* their own sense of meaning and purpose. (Although I want to note that getting too attached to a particular source of meaning can be harmful, if it makes it harder to change your mind)
- Various other EA orgs that need volunteer work done provide an outlet for people who are not ready to jump head-first into a major project, also providing a sense of purpose.
- Parties, meetups, etc (whether themed around Human-ing, Rationality, or EA) provide value to all three projects. They’re fun and mostly satisfy human-ing needs in the moment, but they let people bump into each other and swap ideas, network, etc, which is valuable for Impact and Understanding.
- A community need not be fully unified. Literal villages include people who disagree on religion, morality or policy - but they still come together to build marketplaces and community-centers and devise laws and guidelines on how to interact.
In addition to the “broader rationality community”, there are local groups that have more specific cultures and needs. (For example, NYC has Less Wrong and Effective Altruism meetup groups, which have different cultures both from each other and from similar groups in Berkeley and Seattle.)
This can include both physical-meet-space communities and some of the tighter knit online groups.
Where does Project Hufflepuff fit into this?
I think each focus area has seen credible progress in the past 10 years - both by developing new insights and by getting better at combining useful existing tools. I think we've gotten more and more glimpses of the Something More That's Possible [LW · GW].
At our best, there's a culture forming that is clever, innovative, compassionate, and most of all - takes ideas seriously.
But we're often not at our best. We make progress in fits and starts. And there's a particular cluster of skills, surrounding interpersonal dynamics, that we seem to be systematically bad at. I think this is crippling the potential of all three focus areas.
We've made progress over the past 10 years [LW · GW] - in the form of people writing individual blogposts, facebook conversations, dramatic speeches and just plain in-person-effort. This has helped shift the community - I think it's laid the groundwork for something like Project Hufflepuff being possible. But the thing about interpersonal dynamics is that they require common knowledge and trust. We need to believe (accurately) that we can rely on each other to have each other's back.
This doesn't mean sacrificing yourself for the good of the community. But it means accurately understanding what costs you are imposing on other people - and therefore the costs they are imposing on you, and what those norms mean when you extrapolate them community-wide. And making a reflective decision about what kind of community we want to live in, so we can actually achieve our goals.
Since we don't all share the same goals and values, I don't expect this to mean that the community overall shifts towards some new set of norms. I'm hoping for individual people to think about the tradeoffs they want to make, and how those will affect others. And I suspect that this will result in a few different clusters forming, with different needs, solving different problems.
In the next post, I will start diving into the grittier details of what I think this requires, and why this is an especially difficult challenge.
10 comments
Comments sorted by top scores.
comment by SquirrelInHell · 2017-04-09T21:00:43.383Z · LW(p) · GW(p)
Funny. This is exactly the same classification as we developed for the Accelerator Project... also with the same conclusion that we want to build up the interpersonal/social dynamics part, and make it at least as important as the other ones and also the first stop of the journey for newcomers.
Our classification is roughly:
social & psychology stuff (branches: personal strength vs losing)
effectiveness & applied rationality (branches: choosing direction vs creativity & play)
cognitive skill & epistemic rationality (branches: reality vs patterns)
By the way, we're setting up in Europe, so anyone in the EU who is interested in the Hufflepuff Project might do well to join us. We are not going full public just yet, but anyone interested - ping me via private message.
comment by Zvi · 2017-04-09T01:48:57.867Z · LW(p) · GW(p)
[To avoid our traditional failure mode of appearing overly critical of mostly good ideas/projects, I will state up front that I think Project Hufflepuff is a good idea worth trying, other than the part where it means Raemon moves to The Bay]
I am getting quite the Ultima principles/virtues vibe (I generally put Rationality in the center instead of Spirituality) from that chart to the extent that my brain tells me the colors are wrong: You've got Truth (Truthseeking, previously blue), Love (Human, previously red), and Courage (Impact, previously yellow).
Thing is, whether or not we should do so, I don't think we tend to consider these principles equals. I know I don't.
I also don't think it is a coincidence that all three 'local community' circles are within Truth. If there was a Rationalist Community that was working primarily on Impact and Human Focus, and was big on Valor, Compassion and Sacrifice, but did not care about truth all that much and did not seem to value Honesty (and perhaps Humility, Justice or Honor), I am not saying such a thing is inherently bad, but it would no longer feel to me like 'our people'. I also wonder what the Humility-space would look like that isn't focused on any of the three things, but is still in the Community circle. What are they up to?
In that graph, I find my instinctive drawing of the "Rationalist Community" circle to be around most/all of the Truthseeking Focus circle, rather than around most or all combinations of the three circles! How much of the ab-reaction we have to certain adjacent groups (some of which are mentioned above), and our concerns about them, is due to this difference? Is this concern good? Are a lot of the worries people have about this project that too many resources/norms may shift into a relatively non-critical principle, and that if we are not careful we will lose our unique focus?
Replies from: Raemon, Raemon
↑ comment by Raemon · 2017-04-09T16:14:51.370Z · LW(p) · GW(p)
Some other random thoughts as they come to me (epistemic status: musing).
1. All the elements in the Rationalsphere are related to some form of rationality; the question is which aspect, and why.
Impact and Human are basically the two focuses on instrumental rationality, split loosely along whether they are outward or inward facing. Truthseeking is about epistemic rationality for its own sake.
2. All three focus areas can agree that things are important, but disagree on whether they're terminally or instrumentally valuable.
If you literally had Impact/Human focused group that didn't care about honesty and truth at all, I'd say "that's not a rationality community, no matter how they frame things." But I think we can (and do) have Impact/Human focused groups that see Truthseeking as instrumentally important for those goals.
Whereas pure Truthseeking tends to be like "reality is neat and forming deep models of things is neat, and we were going to do that anyway, and if it so happens that this actually helps someone I guess that's cool."
3. The three focuses are not precisely about values, but about orientation.
The "human" cluster essentially means "I want to focus on me and the people around me", which sometimes means "form good relationships" and sometimes means "be personally successful at my career or craft." The Impact cluster could be about EA type stuff, AI type stuff, or even truthseeking-related stuff like "reform academia" or "improve discourse norms in general society."
I think there's a lot of tension (in particular in EA spaces), between Impact and Truthseeking oriented people. Both sides (in this context) agree that both action and truth are important. But you have something like...
Impact people, who notice that truthseekers a) tend to spend a lot of time talking and not enough doing, and b) believe that actually_doing is the limiting reagent to effective change, and excessive truthseeking-oriented-norms tend to distract from or penalize doing. (For example, encouraging criticism tends to result in people not wanting to try to do things).
vs
Truthseeking people, who notice that lots of people have tried to change things but consistently get things really wrong, and consistently get their epistemics corrupted as organizations mature and get taken over by Exploiter/Parasite/Vaosociopath-types. And the truthseekers see the Rationality/EA alliance as this really rare precious thing that's still young enough not to have been corrupted in the usual way things get intellectually corrupted.
And I think the thing I've been gradually orienting towards over the past 6 months is something like
"Truthseeking and Agency are BOTH incredibly rare and precious and we don't have nearly enough of both of them. If we're fighting over the mindshare of which types of norms are winning out, we're already lost because the current size of the mindshare-pie and associated Truth and Agency skills are not sufficient to accomplish The Things."
(I think the same principle ends up applying to Truthseeker/Human conflicts and Human/Impact conflicts)
Replies from: Zvi, Viliam
↑ comment by Zvi · 2017-09-21T13:20:14.960Z · LW(p) · GW(p)
Reading this again several months later, after having developed related thoughts more, and seeing Viliam's comment below, caused a strong negative reaction to the line "If we're fighting over the mindshare of which types of norms are winning out, we're already lost."
I have the instinctive sense that when people say "We can't be fighting over this" it's often because they are fighting over it and don't want the other side fighting BACK, and are using the implicit argument that they've already pre-committed to fighting so if you fight back we're gonna have to fight for real, so why not simply let me win? I'm already winning. We're actively trying to recruit your people and promote our message over your message. We can't afford to then have you try to recruit our people and have you trying to promote your message over ours. What we do is good and right, what you do is causing conflict.
Thus, you have a project about moving more into the human/impact areas, arguing that they deserve a larger mindshare. Fair enough! There's certainly a case to be made there, but making that case while also arguing we can't afford to be arguing over various cases sets off my alarm bells. Especially since 'arguing over what should get more attention' is itself a truth-seeking mindshare activity, and there are human/impact activities that can be negative to truth-seeking rather than simply neutral, and that we have to do to some extent.
So I'd be more in a 'you can't afford not to' camp rather than a 'you can't afford to' camp, and I think that if we view such an activity as fighting and negative rather than a positive thing, that's itself a sign of further problems.
↑ comment by Viliam · 2017-04-19T11:42:03.904Z · LW(p) · GW(p)
If we're fighting over the mindshare of which types of norms are winning out, we're already lost
Yep. And most people will continue doing what fits them better anyway... so the whole debate would mostly contribute to making one group feel less welcome.
Also, I suspect that healthy communities are not homogeneous. Meanwhile, the debates about whether X is better than Y silently assume that homogeneity is the desired outcome - that we only need to choose the right template for everyone to copy.
↑ comment by Raemon · 2017-04-09T03:51:55.112Z · LW(p) · GW(p)
Yeah, the positioning/size of the circles has nothing whatsoever to do with their actual size/degree of overlap, and everything to do with how to make the graph clearly legible at a glance (including the fact that the local communities are in Truth - that's where they happen to fit while being easy to label).
(I edited the image to explicitly say that so if anyone uses it elsewhere in the future hopefully people can avoid getting caught up on that particular nitpick. :P)
I have more thoughts about the rest that are taking longer (which could basically be summed up as "politics, however that plays out"). Basically, what matters (for any given community) is not which circle is most important, but rather:
- what degree of overlap you have between values
- insofar as people don't share values, do they have reason to stick together anyway? If not, maybe a schism is fine.
- if they're sticking together, how can we resolve differences in such a way as to get everyone the most of what they want, look for Pareto improvements, etc.
(epistemic status: thought about it 5 minutes, and am aware that if I thought about it a lot longer would probably have phrased some of that different)
comment by [deleted] · 2017-04-09T14:50:14.344Z · LW(p) · GW(p)
I think the categorization you give is useful, and it does a good job of summarizing different facets of the community. Thank you for writing this up.
comment by Viliam · 2017-04-19T11:32:27.045Z · LW(p) · GW(p)
The division into 3 areas reminds me of an old article at SSC:
You get a bunch of hippies throwing love into the pot, computer programmers adding brainpower, and entrepreneurs adding competence. Mix and stir and you get people who want to make the world better, know how to do it, and sometimes even get up off their armchair and do.
...which is more or less like you divided it.
comment by Deku-shrub · 2017-04-17T20:03:21.844Z · LW(p) · GW(p)
Could you integrate some of your ideas with the wiki definition I've been working on? :)