Should rationality be a movement?
post by Chris_Leong · 2019-06-20T23:09:10.555Z · LW · GW · 13 comments
This post is a quick write-up of a discussion I recently had with two members of the rationality community. For simplicity, I'll present them as holding a single viewpoint that merges both their arguments. All parties seemed to agree that the long-term future is an overwhelming consideration, so apologies in advance to anyone with a different opinion.
In a recent discussion, I noted that the rationality community didn't have an organisation like CEA engaging in movement building, and suggested this might be at least part of why EA seemed to be much more successful than the rationality community. While the rationality community founded MIRI and CFAR, I pointed out that there were now so many EA-aligned organisations that it's impossible to keep track. EA runs conferences that hundreds of people attend, with more on the waitlist, while LW doesn't even have a conference in its hometown. EA has groups at the most prominent universities, while LW has almost none. Further, EA now has its own university department at Oxford and the support of OpenPhil, a multi-billion-dollar organisation. Admittedly, Scott Alexander grew out of the rationality community, but EA has 80,000 Hours. I also noted that EA had created a large number of people who wanted to become AI safety researchers; indeed, at some EA conferences it felt like half the people there were interested in pursuing that path.
Based on this comparison, EA seems to have been far more successful. However, the other two suggested that appearances could be misleading and that it therefore wasn't so obvious that rationality should be a movement at all. In particular, they argued that most of the progress made so far in terms of AI safety didn't come from anything "mass-movement-y".
For example, they claimed:
- Slatestarcodex has received enthusiastic praise from many leading intellectuals who may go on to influence how others think. This is the work of just one man, who has intentionally tried to limit the growth of the community around it.
- Eliezer Yudkowsky was more influential than EA on Nick Bostrom's Superintelligence. This book seems to have played a large role in convincing more academic types to take this viewpoint more seriously. Neither Yudkowsky's work on Less Wrong nor Superintelligence is designed for a casual audience.
- They argued that CFAR played a crucial role in developing an individual who helped found the Future of Life Institute. This institute ran the Asilomar Conference, which kicked off a wave of AI safety research.
- They claimed that even though 80,000 Hours had access to a large pool of EAs, it hadn't provided any researchers to OpenPhil, only people filling other roles such as operations. In contrast, they argued that CFAR mentors and alumni made up around 50% of OpenPhil's recent hires, and that CFAR likely deserved some credit for this.
Part of their argument was that quality is more important than quantity for research problems like AI safety. In particular, they asked whether a small team of the most elite researchers would be more likely to succeed at revolutionising science or building a nuclear bomb than a much larger group of science enthusiasts.
My (partially articulated) position was that it was too early to expect too much. I argued that even though most EAs interested in AI were just enthusiasts, some percentage of this very large number of EAs would go on to become successful researchers. Further, I argued that we should expect this impact to be significantly positive unless there was a good reason to believe that a large proportion of EAs would act in strongly net-negative ways.
The counterargument was that I had underestimated the difficulty of contributing usefully to AI safety research, and that the percentage who could do so would be much smaller than I anticipated. If this were the case, then engaging in more targeted outreach would be more useful than building up a mass movement.
I argued that more EAs had a chance of becoming highly skilled researchers than they thought. I said that this was not just because EAs tended to be reasonably intelligent, but also because they tended to be much better than average at engaging in good-faith discussion, to be more exposed to content around strategy and prioritisation, and to benefit from network effects.
The first part of their response was to argue that, by being a movement, EA had ended up compromising on its commitment to truth, as follows:
i) EA's focus on having an impact entails growing the movement, which entails protecting EA's reputation and attempting to gain social status.
ii) This causes EA to prioritise building relationships with high-status people, such as by offering them major speaking slots at EA conferences, even when they aren't particularly rigorous thinkers.
iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to. In particular, they argued that this had a chilling effect on EA and caused people to speak in a much more guarded way.
iv) By acquiring resources and status, EA had drawn the attention of people who were interested in those resources rather than in EA's mission. These people would damage its epistemic norms by attempting to shift truth-finding processes towards outcomes that benefited them.
They then argued that, despite the reasons I had given for believing that EAs could become successful AI safety researchers, most lacked a crucial component: a deep commitment to actually fixing the issue, as opposed to merely seeming like they were attempting to fix it. They believed that EA wasn't the right kind of environment for developing such people, and that without this attribute most of the work people engaged in would end up being essentially pointless.
Originally I listed another point here, but I've removed it since it wasn't relevant to this particular debate; it belonged instead to a second, simultaneous debate about whether CEA was an effective organisation. I believe the discussion of this topic ended here. I hope I have represented the positions of the people I was talking to fairly, and I apologise in advance if I've made any mistakes.
13 comments
comment by Davidmanheim · 2019-06-21T13:05:37.473Z · LW(p) · GW(p)
It seems strange to try to draw sharp boundaries around communities for the purposes of this argument, and given the obvious overlap and fuzzy boundaries, I don't really understand what the claim that the "rationality community didn't have an organisation like CEA" even means. This is doubly true given that as far as I have seen, all of the central EA organizations are full of people who read/used to read Lesswrong.
comment by Zack_M_Davis · 2019-06-21T15:39:19.428Z · LW(p) · GW(p)
My response was that even if all of this were true, EA still provided a pool of people from which those who are strategic could draw and recruit.
This ... doesn't seem to be responding to your interlocutor's argument?
The "anti-movement" argument is that solving alignment will require the development of a new [LW · GW] 'mental martial art' of systematically correct reasoning, and that the social forces of growing a community impair our collective sanity and degrade the signal the core "rationalists" were originally trying to send.
Now, you might think that this story is false—that the growth of EA hasn't made "rationality" worse, that we're succeeding in raising the sanity waterline [LW · GW] rather than selling out and being corrupted. But if so, you need to, like, argue that?
If I say, "Popularity is destroying our culture", and you say, "No, it isn't," then that's a crisp disagreement that we can potentially have a productive discussion about. If instead you say, "But being popular gives you a bigger pool of potential converts to your culture," that would seem to be missing the point. What culture?
↑ comment by steven0461 · 2019-06-21T19:24:57.750Z · LW(p) · GW(p)
the development of a new [LW · GW] 'mental martial art' of systematically correct reasoning
Unpopular opinion: Rationality is less about martial arts moves than about adopting an attitude of intellectual good faith and consistently valuing impartial truth-seeking above everything else that usually influences belief selection. Motivating people (including oneself) to adopt such an attitude can be tricky, but the attitude itself is simple. Inventing new techniques is good but not necessary.
↑ comment by the gears to ascension (lahwran) · 2019-06-21T20:35:11.665Z · LW(p) · GW(p)
I agree with this in some ways! I think the rationality community as it is isn't what the world needs most, since putting effort into being friendly and caring for each other in ways that try to increase people's ability to discuss without social risk is IMO the core thing that's needed for humans to become more rational right now.
IMO, the techniques themselves are relatively easy to share once you have the trust to talk about them, and merely require a lot of practice. But convincing large numbers of people that it's safe to think things through in public without weirding out their friends seems likely to require actually making it safe to think things through in public without weirding out their friends. I think that scaling a technical, crafted-culture solution to creating the emotional safety to discuss what's true, one that results in many people putting regular effort into communicating friendliness toward strangers when disagreeing, would do a lot more for humanity's rationality than scaling discussion of specific techniques.
The problem as I see it right now is that this only works if it is seriously, massively scaled. I feel like I now see why CFAR got excited about circling: it seems like you probably need emotional safety to discuss things usefully. But I think circling was an interesting thing to learn from, not a general solution. I think we need to design an internet that creates emotional safety for most of its users.
Thoughts on this balance, other folks?
↑ comment by ExCeph · 2019-06-22T06:32:53.086Z · LW(p) · GW(p)
With finesse, it's possible to combine the techniques of truth-seeking with friendliness and empathy so that the techniques work even when the person you're talking to doesn't know them. That's a good way to demonstrate the effectiveness of truth-seeking techniques.
It's easiest to use such finesse on the individual level, but if you can identify general concepts that help you understand and create emotional safety for larger groups of people, you can scale it up. Values conversations require at least one of the parties involved to have an understanding of value-space, so they can recognize and show respect for how other people prioritize different values even as they introduce alternative priority orderings. Building a vocabulary for understanding value-space to enable productive values conversations on a global scale is one of my latest projects.
↑ comment by Chris_Leong · 2019-06-21T16:52:22.684Z · LW(p) · GW(p)
To be honest, I'm not happy with my response here. There was also a second, simultaneous discussion about whether CEA was net positive, and even though I tried to simplify things into a single discussion, it seems that I accidentally mixed in part of the other one (the original title of this post in draft was "EA vs. rationality").
Update: I've now edited this response out.
↑ comment by Matt Goldenberg (mr-hire) · 2019-06-21T20:31:41.321Z · LW(p) · GW(p)
The "anti-movement" argument is that solving alignment will require the development of a new [LW · GW] 'mental martial art' of systematically correct reasoning, and that the social forces of growing a community impair our collective sanity and degrade the signal the core "rationalists" were originally trying to send.
This may be an argument, but the one I've heard (and saw above) is something more like "It's better to focus our efforts on a small group of great people and give them large gains than on a larger group of people, each of whom we give small gains"... or something.
It's possible that both of these points are cruxes for many people who hold opposing views, but it does seem worth separating.
comment by Jan Kulveit (jan-kulveit) · 2019-06-22T10:21:09.269Z · LW(p) · GW(p)
I've had similar discussions, but I'm worried this is not a good way to think about the situation. IMO the best part of both 'rationality' and 'effective altruism' is often the overlap - people who to a large extent belong to both communities and don't see the labels as something really important for their identity.
Systematic reasons for that may be... [LW(p) · GW(p)]
Rationality asks the question "How can we think clearly?". For many people who start to think more clearly, this leads to an update of their goals toward the question "How can we do as much good as possible (thinking rationally)?", and toward acting on the answer, which is effective altruism.
Effective altruism asks the question "How can we do as much good as possible, thinking rationally and based on data?". For many people who actually start thinking about this question, it leads to the update "the ability to think clearly is critical when trying to answer this". Which is rationality.
This is also to some extent predictive of failure modes. "Rationality without the EA part" can deteriorate into something like a discussion club for high-IQ people and can have trouble taking action. "EA without the rationality part" can turn into something like a group of high-scrupulosity people who are personally very nice and donate to effective charities, but who look away from the things that are actually important.
This is not to say that organizations identified with either of the brands are flawless.
Also - we now have several geographically separated experiments in what the EA / LW / rationality / long-termist communities can look like outside of the Bay Area, and my feeling is that places where the core of the communities is shared are healthier and produce more good things than places where the overlap is small, and that this is better than having a lot of distrust.
comment by PeterMcCluskey · 2019-06-21T15:21:22.694Z · LW(p) · GW(p)
Note that the rationality community used to have the Singularity Summit, which was fairly similar to EAGlobal in its attitude toward high-status speakers.
comment by MichaelBowlby · 2021-10-13T11:32:21.951Z · LW(p) · GW(p)
The claim that most EAs want to look like they're contributing to AI safety rather than having a deep commitment is just deeply at odds with my personal experience. The EAs I meet are in general the people most committed to solving problems that I've ever met. I might try to come up with a more systematic argument for this, but my gut says that that's crazy.
This may be random or something, but in my experience there is a higher probability that rationality people aren't committed to solving problems and instead want to use rationality to improve their own lives. But the outside view is that this shouldn't be surprising, given that the core of the rationality movement is trying to have true beliefs.