The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo
post by Evan_Gaensbauer · 2018-05-20T07:19:12.924Z · LW · GW · 70 comments
Background Context, And How to Read This Post
This post is inspired by and a continuation of comments I made on the post 'What is the Rationalist Berkeley Community's Culture?' by Zvi on his blog Don't Worry About the Vase. As a community organizer both online and in-person in Vancouver, Canada, my goal was to fill in what appeared to be some gaps in the conversation among rationalists mostly focused on the Berkeley community. Zvi's post was part of a broader conversation pertaining to rationalist community dynamics within Berkeley.
My commentary pertains to the dynamics between the Bay Area and other local rationality communities, informed by my own experience in Vancouver, Canada, and those of rationalists elsewhere. The below should not be taken as comment on rationalist community dynamics within the Bay Area. This post should be considered an offshoot of the original conversation Zvi was contributing to. For full context, please read Zvi's original post.
I. The Rationality Community: Berkeley vs. The World
While I didn't respond to them at the time, several community members commented on Zvi's post that they had similar experiences. Some local rationality communities and their members perceive themselves to be in a zero-sum game with Berkeley that they didn't sign up for (and, to be fair, that the Berkeley community didn't consciously initiate as though it were a single agency), and some don't, but a sense of what Zvi was trying to point at appears ubiquitous. An example:
In my experience, the recruitment to Berkeley was very aggressive. Sometimes it felt like: “if you don’t want to move to Berkeley as soon as possible, you are not *really* rational, and then it is a waste of our time to even talk to you.” I totally understand why having more rationalists around you is awesome, but trying to move everyone into one city feels like an overkill.
Similar anecdata from local rationality communities around the world:
Melbourne. When I met several rationalists originally from Melbourne in Berkeley a few years ago, their assessment of the exodus of the core of the Melbourne rationality community to the Bay Area was mixed. Melbourne is an example of a very successful local rationality community outside the Bay Area, with the usual milestones like successful EA non-profits, for-profit start-ups, and rationalist sharehouses. For many of the rationalists who left Melbourne for the Bay Area, the move passed a cost-benefit analysis: as high-impact individuals, it seemed obvious to them they should be reducing existential risks on the other side of the world.
In conversation, Helen Toner expressed some unease that a local rationality community which had successfully become a rationality hub second only to the Bay Area had seen a whole generation of rationalists leave at once. This left open the possibility that a sustainable system for rationalist development, built up over years, had been gutted. My impression since then is that around this time the independent organization of the Melbourne EA community began to pick up, and between that and the remaining rationalists, the Melbourne community is doing well. If past or present members of the Melbourne rationality community would like to add their two cents, it would be greatly appreciated.
The rationality community growth strategy out of Berkeley became, by default, to recruit the best rationalists from local communities around the world at a rate faster than local organizers could replenish the strength of those communities. Given that the stories I've heard from outside Melbourne are more lopsided, with the organization of local rationality communities utterly collapsing and only recovering after multiple years, if ever, I'd consider the case of the Melbourne rationality community surviving the exit of its leadership for Berkeley to have been a lucky outlier.
Seattle. The Seattle rationality community has experienced a bad case of exodus to Berkeley over the last few years. My understanding of this story is as follows:
- As happened with rationalists around the world, effective altruism came along and said "hey, while our communities have significant differences, we care about existential risk reduction and other common goals; we've got several billion dollars; and we have a worldwide network of thousands rising through every kind of institution to coordinate the globe". At the time, the whole strategy for AI alignment wasn't much more than "read the Sequences and then donate to MIRI...?", so EA's value proposition couldn't be beat. In Seattle, the organizers of the rationality community took off their rationalist hats and switched them for effective altruist ones, albeit while prominently placing a rationalist button on them. The same thing started happening in Vancouver circa 2013. The Seattle rationalists also started a successful Rationality Reading Group in 2015 which got through the whole of the LessWrong Sequences.
- Things went swimmingly in Seattle until AI safety 'went mainstream'. As financial resources flowed into the institutions of the Berkeley rationality community, the demand and pressure to acquire the resource that distant rationalists and their skill-sets represented intensified. Over a period of more than several months but less than two years, the Seattle rationality community lost at least a half-dozen members, including some local organizers and other veteran community members. The Rationality Reading Groups ceased as regular meetups for over a year, and local community organization was at best intermittent.
- The excitement of EA brought many more Seattleites into the world of x-risk reduction, and the EA and rationality communities of Seattle effectively merged to survive. Since then they're thriving again, but Seattle is still gradually losing community members to Berkeley. Because of its proximity to the Bay Area, and the excellence of the Seattle rationality community, I expect it may have experienced more absolute loss from leaking members to Berkeley than any other community. Due to its size, the Seattle community has sustained itself, so the relative loss of local rationality communities which totally collapsed may be greater than has been the case in Seattle. As with Melbourne, if any community members who have lived or are living in Seattle wish to provide feedback, that is encouraged.
Vancouver. The experience in Vancouver has in the past certainly felt like "if you don't want to move to Berkeley as soon as possible, you are not *really* rational". The biggest reason Vancouver may not have lost as many rationalists to the Bay Area as cities in the United States is the difficulty being Canadian poses to gaining permanent residence in the United States, and hence moving to the Bay Area. A couple of friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with a sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to carry back or reignite the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.
But the apparent consensus was it just wasn't possible. Maybe the rationality community a few years ago lacked the language to talk about it, but rationalists who'd lived in Berkeley for a time only to return felt the rationality-shaped hole in their heart could only be filled in Berkeley. A malaise had fallen over the Vancouver rationality community. All of us were still around, but with a couple of local EA organizations active, many of us were drawn to that crowd. Those of us who weren't were alienated from any personal connection to the rationality community. I saw in my friends a bunch of individual heroes who together were strangely less than, not greater than, the sum of their parts.
Things have been better lately, and a friend remarked they're certainly better than a few years ago, when everyone was depressed about the fact it was too difficult for us all to move to the Bay Area. In the last several months, the local rationality community has taken on our own development as our mission, and we've not so much rebounded as flourished like never before. But it took the sorts of conversations Zvi and others had last year about the Berkeley rationalist community to break the spell we had cast on ourselves: that Berkeley, like a well-oiled machine, had running a rationalist community down to an art and a science.
II. The Berkeley Community and the Mission of Rationality
Benquo commented on Zvi's post:
This is a good description of why I feel like I need to leave Berkeley whether or not there’s a community somewhere else to participate in. This thing is scary and I don’t want to be part of it.
I think this is some evidence that the Rationalist project was never or only very briefly real and almost immediately overrun by MOPs, and largely functions as a way for people to find mates. Maybe that’s OK in a lot of cases, but when your branding is centered around “no really, we are actually trying to do the thing, literally all we are about is not lying to ourselves and instead openly talking about the thing we’re trying to do, if you take things literally saving the world really literally is the most important thing and so of course you do it,” it’s pretty disappointing to find it’s just another flavor.
Since he wrote this comment, Benquo has continued to participate in the rationality community. The conversation was mired in enough tension that it must have been difficult to think about impersonally, so a charitable interpretation would be that while these problems exist, Benquo and others are generally not as fatalistic about the rationality community as they were at the time they wrote their comments. While I and others in the thread saw grains of truth in Benquo's statement, precision nonetheless remains a virtue of rationality [LW · GW], and I felt compelled to clarify. I commented:
I’d say the rationality community started whenever Eliezer forked LessWrong off from Overcoming Bias, which was around 2008 or 2009. That’s certainly not when it peaked. Even in a way MIRI never was, CFAR started out as a project built by the rationality community. That was happening in 2012 or 2013. Above, Sarah is also quoted as saying she thinks the Berkeley rationality community hit the right balance of focusing on being a welcoming community qua community, and aspiring to whatever the core mission(s) of the aspiring rationalist project are.
Unless you’re arguing there was a latency effect where the MOPs overran the community in 2009, but the consequences of such were buried for several years, the period between 2008/09 and 2012/13 doesn’t constitute being “immediately overrun”.
I get you’re pessimistic, but I think you’re overshooting. Matching the map to the territory of what went wrong in the Berkeley rationality community is key to undoing it, or making sure similar failures don’t occur in the future.
FWIW, I’m sorry you’ve had to experience so directly what feels like a decline in an aspect of your local rationality community. As someone who connects with rationalists primarily online, I can tell you they’re everywhere, and even if there isn’t a meatspace community as developed as the one in Berkeley, there are rationalists everywhere who won’t let the Craft disappear, and they want meatspace communities of their own built up outside of Berkeley as much as anyone.
Other comments in-thread from community members who had been around longer than Benquo or I confirmed my impression from their own personal experiences, so unless Benquo would further dispute these accounts, this thread seems put to rest. However, Zvi then replied to me:
I think we need to realize the extent to which Berkeley is actively preventing the formation of, and destroying, these other communities. The majority of high-level rationalists who started in the New York community are in the Berkeley community, which caused New York to outright collapse for years before recovering, and they just now once again caused a crisis by taking away a pair of vital community members and almost wiping out the only rationalist group space in the process. From meeting other community leaders in other cities, I hear similar stories A LOT.
I do agree that Plan A for most members can and should be Fix It, not walking away, and that pointing out it needs fixing is the requirement for perhaps fixing it.
To respond to Zvi here: indeed, it appears to be an uncannily ubiquitous problem. I've collected a few stories and described them in some detail above. Between that and several comments from independent rationalists on Zvi's original post giving the impression members of their local communities were being sucked to Berkeley as though through a pneumatic tube, leaving a vacuum of community and organization in their wake, it appears these many local stories could be a single global one.
The original mission of the rationality community was to raise the sanity waterline to ensure human values get carried to the stars, but we're still godshatter [LW · GW], so doing so can and should take different forms than just ensuring superintelligence is aligned with human values. If ever the goal was to seed successful, stable rationalist communities outside Berkeley to coordinate projects beyond the Bay Area, it's been two steps forward, one step back, at best. Even if we assume for the sake of argument it's a good idea for rationalists worldwide to view Berkeley as a nucleus and their own rationalist communities as recruitment centres driving promising individuals to Berkeley for the mission of AI alignment or whatever, the plan isn't working super well. That's because local rationalist communities appear to be sending their highest-level rationalists to Berkeley at a much faster rate than those communities can level up more rationalists to replenish their leadership and sustain themselves at all.
The state of affairs could be worse than it is now. But it creates the possibility that if enough local rationalist communities around the world outside the Bay Area simultaneously collapsed, the Berkeley rationalist community (BRC) could lose sufficient channels of recruitment to sustain itself. Given the tendency of communities, like all things, toward entropy, communities decay over time. Even if the BRC weren't rubbing any of its members the wrong way, we would probably still observe some naturally occurring attrition. In a scenario where the decay rate of the BRC was greater than its rate of replenishment, which has historically depended largely on rationalists from outside communities, the BRC would start decaying. If we were to assume the BRC acts as a single agency, it's in the BRC's self-interest as the nucleus of the worldwide rationality movement to sustain communities-as-recruitment-centres at least to the extent they can sustainably send their highest-level rationalists to Berkeley over the long term.
While this worst-case scenario could apply to any large-scale rationalist project, with regard to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome that rationalists everywhere should pay more attention to decreasing the chances of it happening.
So whether a rationalist sees local communities acting primarily as recruitment centres for the Berkeley rationalist community as an excellent plan or an awful failure mode, there's a significant chance it's unsustainable either way. It appears to be a high-risk strategy that's far from foolproof, and as far as I know virtually nobody is consciously monitoring the situation to prevent further failure.
III. Effective Altruism and the Rationalist Community
In another thread, I responded directly to Zvi. I commented:
While rationalists are internally trying to figure out how their community has changed, and they’re lamenting how it’s not as focused on world-saving, there’s a giant factor nobody has talked about yet. The only community which is more focused on the rationality community’s way of world-saving than the rationality community is effective altruism. To what extent is the rationalist community less world-save-y than it used to be because the rationalists whose primary rationalist role was “world saver” just switched to EA as their primary world-saving identity? I think as things have gotten less focused since LessWrong 1.0 died, and the rationalist diaspora made entryism much easier as standards fell, what you’re saying is all true. You might be overestimating the impact of entryism, though, and underestimating people who exited not because they had no voice, but for sensible reasons. If at any point a rationalist felt they could better save the world within EA rather than through the rationality community, it’d internally make sense to dedicate one’s time and energy to that community instead.
The EA community doesn’t seem able to build bonds as well as the rationality community. However, the EA community seems better at making progress on outward-facing goals. In that case, I for one wouldn’t blame anyone who found themselves more at home as a world-saver in EA than they did in the rationalist community.
Zvi replied:
Definitely an elephant in the room and a reasonable suspect! Certainly partially responsible. I haven’t mentioned it yet, but that doesn’t mean I’ve missed that it is in the picture. I wanted to get this much out there now, and avoid trying to cover as many bases as possible all at once.
There have been many (Sarah [Constantin] and Benquo among them) who have been trying to talk for a long time, with many many words, about the problems with EA. I will consider that question beyond scope here, but rest assured I Have Thoughts.
Since then, Zvi and others have made good on their intentions to point out said problems with effective altruism. I intend to engage these thoughts at length in the future, but suffice it to say for now that local rationalist communities outside the Bay Area appear to have experienced being 'eaten' by EA even worse than Berkeley has.
I never bothered to tie up the loose ends I saw in the comments on Zvi's post last year, but something recently spurred me to do so. From Benquo's recent post 'Humans need places [LW · GW]':
I am not arguing that it would merely be a nice thing for Bay Arean EAs and Rationalists to support projects like this; I am arguing that if you have supported recruiting more people into your community, it is morally obligatory to offer a corresponding level of support for taking care of them once you are in community with them. If you can’t afford to help take care of people, you can’t afford to recruit them.
If you don’t have enough for yourself, take care of that first. But if you have more than enough to take care of your private needs, and you are thinking of allocating your surplus to some combination of (a) people far away in space or time, and (b) recruiting others to do the same, I implore you, please first assess - even approximately - the correct share of resources devoted to direct impact, recruiting more people into your community, and taking care of the community’s needs, and give accordingly.
[...]
The Berkeley EA / Rationalist community stands between two alternatives:
1. Pull people in, use them up, and burn them out.
2. Building the local infrastructure to support its global ambitions, enabling sustainable commitments that replenish and improve the capacity of the people making them.
It's important for rationalists in Berkeley to know that, from where rationalists around the world are standing, these statements could ring hollow. The perception of the Centre for Effective Altruism slighting the Berkeley REACH is mirrored many times over in rationalists feeling like Berkeley pulled in, used up, and burned out whole rationalist communities. The capital of a nation receives resources from everyone across the land. If the capital city recruits more citizens to the nation, is it not morally obligatory for the capital city to offer a corresponding level of support for taking care of them once they've joined? Is it not the case that if the rationality community cannot afford to take care of our people, then we can't afford to recruit them?
The worldwide rationalist project stands between two alternatives:
- Seed new local communities, use them up, and burn them out.
- Build the global infrastructure to support its global ambitions, enabling sustainable commitments that replenish and improve the capacity of the local communities making them.
This isn't about the Berkeley rationalist community alone, but rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere. The good news in the bad news is that if all rationalist communities face common problems, we can all benefit from working towards common solutions. So global coordination may not be as difficult as one might think. I wrote above that the Vancouver rationality community has recently taken on our own development as our mission, and we're not so much recovering from years of past failures as flourishing like never before. We haven't solved all the problems a rationalist community might face, but we've been solving a lot. As a local community organizer, I developed tactics such that, if they worked in Vancouver, they should work for any rationalist community. And they worked in Vancouver. I think they're some of the pieces of the puzzle of building global infrastructure to match the rationality community's global ambitions. Laying that out will be the subject of my next post.
Comments sorted by top scores.
comment by stardust · 2018-05-20T17:47:08.107Z · LW(p) · GW(p)
A couple of friends of mine who were early attendees of a CFAR workshop lived in the Bay Area for several months in 2013, and returned home with stories of how wondrous the Bay Area was. They convinced several of us to attend CFAR workshops as well, and we too returned home with a sense of wonderment after our brief immersion in the Berkeley rationality community. But when my friends and I each returned, somehow our ambition transformed into depression. I tried rallying my friends to carry back or reignite the spark that made the Berkeley rationalist community thrive, to really spread the rationalist project beyond the Bay Area.
You seem to be conflating "CFAR workshop atmosphere" with "Berkeley Rationalist Community" in this section, which makes me wonder if you are conflating those things more generally.
The depressive slump post-CFAR happens *in Berkeley* too. The thriving community you envision Berkeley as having *does not exist,* except at CFAR workshops. The problem you're identifying isn't a Bay-Area-vs-the-world issue, it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.
↑ comment by Qiaochu_Yuan · 2018-05-20T22:03:30.046Z · LW(p) · GW(p)
it's a general issue with the way CFAR operates, building up intense social connections over the course of a weekend, then dropping them suddenly.
So, this is definitely a thing that happens, and I'm aware of and sad about it, but it's worth pointing out that this is a generic property of all sufficiently good workshops and things like workshops (e.g. summer camps) everywhere (the ones that aren't sufficiently good don't build the intense social connections in the first place), and to the extent that it's a problem CFAR runs into, 1) I think it's a little unfair to characterize it as the result of something CFAR is particularly doing that other similar organizations aren't doing, and 2) as far as I know nobody else knows what to do about this either.
Or are you suggesting that the workshops shouldn't be trying to build intense social connections?
↑ comment by clone of saturn · 2018-05-21T01:14:42.553Z · LW(p) · GW(p)
I don't think he was criticizing CFAR workshops, but people who implicitly expect their own communities to automatically produce the same intense social connections.
↑ comment by Evan_Gaensbauer · 2018-05-26T03:07:47.836Z · LW(p) · GW(p)
Yes, this is what I was getting at. Thanks.
↑ comment by Evan_Gaensbauer · 2018-05-26T03:07:32.244Z · LW(p) · GW(p)
I agree with these statements, and clone of saturn is correct: I was talking about an implicit expectation that other rationalist communities will produce the same intense social connections found at CFAR workshops (connections also attributed to the Berkeley community generally, though as stardust points out, it isn't as amazing as I and others had built it up to be).
↑ comment by Zvi · 2018-05-20T19:53:27.109Z · LW(p) · GW(p)
Is this suggesting that top-tier Berkeley is even eating the seed corn of Berkeley and making everyone but its own top-tier depressed in its wake?
↑ comment by Raemon · 2018-05-20T21:24:11.594Z · LW(p) · GW(p)
I think there is specifically a "work on x-risk" subgroup, which yes recruits from within Berkeley, and yes has some debilitating effects. I wouldn't quite characterize it the way Zvi does but will say it's not obviously wrong.
[Edit: I have mixed feelings about whether or how bad the current dynamics are. I think it actually is the case that x-risk desperately needs agents, and yes this competes with non-x-risk community building which also needs agents. I think it's possible to make pareto-optimal improvements to the situation but there will probably be at least some tradeoffs that need to get made and I think reasonable people can disagree about where to draw those tradeoffs]
↑ comment by Zvi · 2018-05-21T13:23:24.346Z · LW(p) · GW(p)
We can all agree that x-risk prevention is a Worthy Cause, or even the most worthy cause. And at some point, you need to divert increasing parts of your resources to that rather than to building resources to be spent, and this time is, as one otherwise awful teacher of mine called it, immediately if not sooner.
The key question, in terms of implications/VOI, is: Is 'work on x-risk' the kind of all-consuming task (a la SSC's scholars who must use every waking moment to get to those last few minutes where they can make progress, or other all-consuming jobs like start-up founder in a cash crunch) where you must/should let everything else burn, because you have power law returns to investment and the timeline is short enough that you'll burn out now and fix it later? Or is it where you can and should do both, especially given there isn't really a cash crunch and the timeline distribution is highly uncertain and so is what would be helpful?
I want vastly more resources into x-risk, but some (very well meaning) actors have taken the attitude of 'if it's not directly about x-risk I have no interest', otherwise making everything fit into one of the 'proven effective' boxes, which starves community of resources since it doesn't count as an end goal. It's a big problem.
Anyway, whole additional huge topic and all that. And I'm currently debating how to divide my own resources between these goals!
↑ comment by Evan_Gaensbauer · 2018-05-26T03:23:11.945Z · LW(p) · GW(p)
I've got a lot of thoughts on this myself that I haven't finished writing up yet either, but it appears many effective altruists and rationalists share your perspective on a common problem disrupting other community projects. See this comment [LW(p) · GW(p)].
↑ comment by Evan_Gaensbauer · 2018-05-26T03:17:29.821Z · LW(p) · GW(p)
This ties into an underrated factor I talked about in this comment: much of the recruitment to Berkeley is driven by efforts which are decidedly more 'effective altruist' than they are 'rationalist', and the most prioritized projects in effective altruism are driving rapid changes the grassroots elements of both movements aren't able to adapt to. The full passage is quoted in my reply to stardust below, so I won't duplicate it here.
↑ comment by Evan_Gaensbauer · 2018-05-26T02:17:45.061Z · LW(p) · GW(p)
This was the experience in Vancouver after CFAR workshops, and the atmosphere persisted for a long time. It wasn't only me conflating "[big event] atmosphere" with "Berkeley Rationalist Community": a lot of other people in Vancouver did too, and in how a lot of rationalists from elsewhere talk about the Berkeley Rationalist Community (I'm going to call it the Bayesian Area), it's often depicted as super awesome.
The first thing that comes to mind is a lot of rationalists from outside of Berkeley only visit town for events like CFAR workshops, CFAR alumni reunions, EA Global, Burning Man, etc. So if one rationalist visits Berkeley a few times a year and always returns to their home base talking about their experiences right after these exciting events, it makes the Berkeley community itself seem constantly exciting. I'm guessing the reality is the Berkeley community isn't always buzzing with conferences and workshops, and organizing all those things is actually very stressful.
There definitely is a halo around the Berkeley Rationalist Community for other reasons:
- It's often touted that 'leveling up' to the point one can get hired at an x-risk reduction organization, or work on another important project like a startup in Berkeley, is an important and desirable thing for rationalists to do.
- There's often a perception that resources are only invested in projects based in the Bay Area, so trying to start projects with rationalists elsewhere and expecting to sustain them long-term is futile.
- Moving to Berkeley is still so inaccessible or impractical for a lot of rationalists scattered everywhere that (especially if their friends leave) it breeds a sense of alienation and of being left behind/stranded as one watches everyone else who *can* flock to Berkeley talk about it. Combined with the rest of the above, this can also unfortunately breed feelings of resentment.
- Rationalists from outside Berkeley often report feeling as though the benefits or incentives of moving to the Berkeley community are exaggerated relative to the trade-offs or costs of moving there.
It would not surprise me if this worldwide halo effect around the Berkeley rationalist community is just a case of confirmation bias writ large among rationalists everywhere. It could be there is a sense the Bayesian Area is doing all this deliberately, when almost no rationalists in Berkeley intended any of it. The accounts of what has happened to the NYC community are pretty startling, especially since, as one of the healthier communities, I thought it would persist. The most I can say is there is wide variance in accounts of how much pressure, if any, a local rationalist community feels exerted from Berkeley to send as many people as possible their way.
But then I also read stuff like this post by Alyssa, who is from the Berkeley rationalist community, and Zvi's comment about Berkeley itself eating the seed corn of Berkeley sounds plausible. Sarah C also wrote this post about how the Bayesian Area has changed over the years. The posts are quite different but the theme of both is the Bayesian Area in reality defies many rationalists' expectations of what the community is or should be about.
Another thing is much of the recruitment is driven by efforts which are decidedly more 'effective altruist' than they are 'rationalist'. With the Open Philanthropy Project and the effective altruism movement enabling the growth of so many community projects based in the Bay Area, it both i) draws people from outside the Bay Area; and ii) draws attention to the sorts of projects EA incentivizes at the expense of other rationalist projects in Berkeley. As far as I can tell, much of the rationality community who don't consider themselves effective altruists aren't happy EA eats up such a huge part of the community's time, attention, and money. It's not that they don't like EA. The major complaint is that projects in the community with the EA stamp of approval are treated as magically more deserving of support than other rationalist projects, regardless of arguments weighing the projects against each other.
To me a funny thing is that, from the other side, I'm aware a lot of effective altruists long focused on global poverty alleviation or other causes are unhappy with a disproportionate diversion of time, attention, money, and talent toward AI alignment, and moreover toward EA movement-building and other meta-level activities. Both rationalists and effective altruists find projects also receive funding on the basis of fitting frameworks which are ultimately too narrow and limited to account for all the best projects (e.g., the Important/Neglected/Tractable framework). So it appears the most prioritized projects in effective altruism are driving rapid changes that the grassroots elements of both the rationality and EA movements aren't able to adapt to. A lot of effective altruists and rationalists from outside the Bay Area perceive it as a monolith eating their communities, and a lot of rationalists in Berkeley see the same happening to local friends whose attention used to not be so singularly focused on EA.
comment by Zvi · 2018-05-20T19:57:42.403Z · LW(p) · GW(p)
Thank you for writing this. I think your statement of the fundamental puzzle is basically accurate. I don't know what to do about it. If I felt that by investing in NYC (or some other place) I could build up a community I'd want to be a part of in the long term, I'd devote effort to that, but I don't know how to prevent my work from being raided and destroyed by Berkeley, so I don't do the work. Hell, I don't even know how to get those people to stop recruiting me, or my wife, every chance they get. Mentioning 'the fire of a thousand suns' and writing many articles about this does not seem to prevent it causing direct stress and serious damage to my life, on an ongoing basis, even after the posts this references.
Hell, the latest such attempt was yesterday.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-05-21T01:38:20.618Z · LW(p) · GW(p)
[Brainstorming]
One idea is to try to differentiate the NYC 'product' from the Berkeley 'product'. For example, the advantage of Vancouver over the Bay Area is that you can live in Vancouver if you're Canadian. The kernel project attempted to differentiate itself through e.g. a manifesto. In the same way, you could try to create an identity that contrasts with the Bay Area's somehow (for example, figure out the top complaints people have about the Bay Area, then figure out which ones you are best positioned to solve--what keeps you in NYC?) Academic departments at different universities are known for different things; I could imagine a world where rationalist communities in different cities are known for different things too.
↑ comment by Zvi · 2018-05-21T13:28:03.134Z · LW(p) · GW(p)
It's a good idea if there's something we can come up with that's a sufficient draw and is actually raid-proof. The other issue is that trying and failing is a disaster - e.g. MetaMed was an attempt to do many things, this was one of them (even if that wasn't the intent), and its failure cost us several key community members like Sarah+Andrew.
↑ comment by Chris_Leong · 2018-05-21T08:14:59.846Z · LW(p) · GW(p)
That's interesting. I would expect that New York would be a large enough city that it should be possible to build up a strong community there.
comment by Benquo · 2018-05-20T11:16:45.748Z · LW(p) · GW(p)
The perception of the Centre for Effective Altruism slighting the Berkeley REACH
I had hoped this was clear in my original post, but apparently it wasn't - I'm not saying CEA owes Berkeley REACH anything. I'm just saying we shouldn't conflate CEA with the sort of organization that would support the Berkeley REACH, and that Bay Area locals should fund the neglected cause of themselves having nice things locally.
↑ comment by stardust · 2018-05-20T17:54:21.503Z · LW(p) · GW(p)
CEA turned down my proposal because there were other, more established groups than REACH with clearer track records of success and better thought out metrics for success/failure applying for the same round of grants. I am working on building up a track record and metrics/data capture so that I can reapply later.
↑ comment by Zvi · 2018-05-20T19:51:56.252Z · LW(p) · GW(p)
I read this as "CEA cares more about procedures that appear objective and fair and that can be defended, and not making mistakes, than doing the right/best thing." That may or may not be fair to them.
I do know that someone who recently said they'd been brought in to work for CEA (and raided by SF from NYC, and who proceeded to raid additional people from NYC) claimed that CEA is explicitly looking to do exactly this sort of thing, and was enthusiastic about supporting an NYC-based version of this (this was before either of us knew about REACH, I believe), despite my obvious lack of track record on such matters, or any source of good metrics.
If they'll only support REACH after it has a proven track record that can point to observable metrics to demonstrate an impact lower bound, it's the proverbial bank that only gives you a loan you don't need.
I do think Benquo was clear he wasn't calling on CEA to do anything, just observing that they'd told us who they were. And we were free to not like who they were, but the onus remained on us. That sounds right.
↑ comment by stardust · 2018-05-20T20:53:24.105Z · LW(p) · GW(p)
I think funding REACH before there was a track record would've been financially risky. I chose to take that risk personally because I didn't see how it would happen without someone doing something risky. It certainly would have been nice to have gotten support from CEA right away, but I don't think they were wrong to choose to focus resources on people who'd been working on community building for longer, and likely had fewer resources to spare.
↑ comment by Zvi · 2018-05-21T13:33:41.427Z · LW(p) · GW(p)
I can appreciate that. If CEA is budget constrained, and used all its resources on proven community builders doing valuable projects, I can't really argue with that too hard. However...
If CEA did it because you had personal resources available to sacrifice in their place, knowing you would, that seems like a really bad principle to follow.
If CEA feels it can't take 'risk' on this scale, in the sense that they might fund something that isn't effective or doesn't work out, that implies curiously high risk aversion where there shouldn't be any - this would be a very small percent of their budget, so there isn't much effective risk even if CEA's effectiveness was something to be risk averse about, which given its role in the overall ecosystem is itself questionable. It's a much smaller risk for them to take than for you to take!
↑ comment by Evan_Gaensbauer · 2018-05-26T04:30:56.463Z · LW(p) · GW(p)
Peter Hurford wrote last year on the Effective Altruism Forum about the 'hits-based giving' approach the Open Philanthropy Project takes toward funding projects, inspired by YCombinator:
How do you find the best non-profits to donate to? This is an important question that is critical to effective altruism.
One suggestion comes from Holden Karnofsky at the Open Philanthropy Project, who describes a strategy called “hits-based giving”. In this framework, you make a number of investments, some of which are very counter-intuitive and against expert consensus, with the understanding that many will not amount to much but those that work will generate excess returns to make the overall portfolio have a high altruistic return on philanthropic investment.
This strategy originates from YCombinator. In the essay “Black Swan Farming”, Paul Graham argues that funding for-profit startups is the art of hunting for the one deal that will make it big. You have a lot of “misses” when you invest, but the one time you make a “hit”, it will hit big and repay all your losses and then some. In order to guess right, you have to make many gambles. YCombinator has been working on this problem since 2005, and has since invested over $170M into over 1400 different start-ups. The combined valuation of their current start-up batch is stated to now be over $80B.
“Black swan farming” seems to work well for YCombinator. But does it apply well when donating to non-profits? Does hits-based giving work? Since writing that post on April 2016, OpenPhil has already allocated over $197M according to this philosophy. YCombinator is also applying hits-based giving to their own batch of non-profits, to which they have donated $3M.
Peter also summarizes 80,000 Hours' application of start-up principles to evaluating projects. (For those unfamiliar, 80,000 Hours is the careers advising organization that is part of the Centre for Effective Altruism, and both 80,000 Hours and CEA were incubated by YCombinator.)
The “Start Up” Approach
Separately, Ben Todd outlines that many donors concerned with effectiveness judge organizations based on their short-term marginal impact. For example, as Todd mentions, GiveWell had returns lower than its costs for the first four years, but then quickly exploded in its fundraising ratio, doubling money moved from 2012 to 2013, again from 2013 to 2014, and moving more money in 2015 than twice as much raised in 2013 and 2014 combined. An impact assessment focused solely on short-run fundraising ratios in 2011 would have missed GiveWell as an incredibly valuable investment.
In contrast, Todd argues for evaluating early “start-up” non-profits with standard start-up metrics, such as making sure they have a high-quality product, a large addressable market, and the ability to “sell” to this market at scale. Similarly, the organization should have a good growth rate and the team should ideally demonstrate competence and have a track record. For example, GiveWell had a superior research product with the ability to scale to millions of small donors plus dozens of interested large-scale foundations. While the team did not have much of a prior track record, they showed their competence through their early research and early traction with donors.
Lastly, Todd implies that upfront, early investments in rigorous cost-effectiveness analyses are premature, as they draw attention away from growing the core product in quality and scale, and they likely focus too much on the short-run impact, ignoring long-run opportunities.
He makes a couple of points relevant to your and Benquo's observation that non-profit investing should be more risk-neutral than appears to be the case between CEA and the Berkeley REACH.
Non-profit investing affords you the opportunity to be far more risk-neutral than you can in for-profit investing, which changes your options. Index funds are typically chosen less because the diversification increases average returns, but rather because the diversification decreases the variance of the investment, exposing you to less risk. A risk-neutral for-profit investor might be pursuing variance increasing strategies instead, like leverage. However, altruistic investments are not used with the intention of saving for one’s own future, which allows the altruist to be more risk-neutral to chase higher expected returns.
[...]
The returns for for-profit funds are relatively clear, but non-profit returns require a lot of work to understand. While there might be issues of applying the correct methodology, you can generally look at how much cash you get back for how much cash you put in. With non-profit investing, there is no clear measure of your return on investment. Instead, you have to use complex analysis to assess your return and some investments will never be able to show a conclusive return even if they do have one.
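To make the risk-neutrality point concrete, here is a toy example of my own (the numbers are hypothetical, not from Peter's post). Compare a safe grant that reliably produces 10 units of impact with a 'hit' grant that produces 100 units with probability 0.1 and nothing otherwise:
E[hit] = 0.1 × 100 = 10 = E[safe], while Var[hit] = 0.1 × 100² − 10² = 900 and Var[safe] = 0.
A risk-neutral altruist is indifferent between the two, since only expected impact matters; a risk-averse investor saving for their own future is not, since the variance falls on them. That asymmetry is why hits-based giving can make sense for a funder even when the same bet would be imprudent as a personal investment.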
↑ comment by Evan_Gaensbauer · 2018-05-26T03:29:21.430Z · LW(p) · GW(p)
I had difficulty finding a word to get across what I meant, so I went with 'slighted', but I didn't think that's what you meant. My interpretation of your original post was not that you think CEA owes Berkeley REACH in particular, but that you think CEA ought to be the kind of organization more willing to consider community projects like the Berkeley REACH. Of course it's apparent now that isn't what you meant either. Thanks for clarifying.
One point I was getting across is there is a perception in the rationality community that not the Berkeley rationality community itself, but maybe leaders or key organizations there, should play for rationalist communities around the world the role you're hoping Berkeley rationalists play as patrons of the Berkeley REACH, since Berkeley has received so much from the rest of the rationality community. Whether this sense of owed reciprocity is fair is another question entirely. (I personally don't think it's the right question to be asking if we want to find solutions to the problems the community faces. I'm still working out my thoughts on that, though.)
comment by Jacob Falkovich (Jacobian) · 2018-05-30T20:37:35.075Z · LW(p) · GW(p)
I can understand the frustrations of people like Zvi who don't want to invest in local rationality communities, but I don't think that reaction is inevitable.
I went to a CFAR mentor's workshop in March and it didn't make me sad that the average Tuesday NYC rationality meetup isn't as awesome. It gave me the agency-inspiration to make Tuesdays in NYC more awesome, at least by my own selfish metrics. Since March we've connected with several new people, established a secondary location for meetups in a beautiful penthouse (and have a possible tertiary location), hosted a famous writer, and even forced Zvi to sit through another circle. The personal payoff for investing in the local community isn't just in decades-long friendships, it's also in how cool next Tuesday will be. It pays off fast.
And besides, on a scale of decades people will move in and out of NYC/Berkeley/anywhere else several times anyway as jobs, schools, and residential zoning laws come and go. Several of my best friends, including my wife, came to NYC from the Bay Area. Should the Areans complain that NYC is draining them of wonderful people?
One of my favorite things about this community is that we're all geographically diverse rootless cosmopolitans. I could move to a shack in Montana next year and probably find a couple of people I met at NYC/CFAR/Solstice/Putanumonit to start a meetup with. Losing friends sucks, but it doesn't mean that investing in the local rationality community is pointless.
↑ comment by Evan_Gaensbauer · 2018-05-30T22:11:04.095Z · LW(p) · GW(p)
Thank you for making this comment. Of all the reactions to this post, this one best captures how I want rationalists outside the Bay Area to relate to it going forward. Of course it doesn't go as far as I'd like, though I'm unsure how far I want to take it. I've been reading some of Zvi's posts from last year, which are wrongly pessimistic not because they're a self-fulfilling prophecy preventing non-Berkeley rationality communities from achieving their values, but because they're a map of how rationality communities develop that doesn't match the territory. (I'm aware things were more tense between NYC and Berkeley a year ago, and while I don't know all the details, I imagine Zvi had sufficient reason for how he felt, and may not endorse as strongly now everything he said then.) At the same time, regarding not inter-community dynamics but the whole rationality movement, I feel like the Community has failed to uphold the Craft. This isn't the same as not devoting enough resources, or devoting them in the wrong way, toward AI alignment or another mission. It's about the sense I got from reading posts like this one from last year, and my sense that other rationality communities are now like Berkeley: rationalists have an aversion to the changes trying to level up might bring to their communities, because it would disturb the local state of affairs too much.
In Vancouver, we never blamed the Bay Area for our woes. I think it partially induced them, but I don't think anyone, from the scale of the individual up to the whole Berkeley rationality community or any subset in between, should be blamed for what's happened. We depressed ourselves with how inadequate we seemed relative to Berkeley, and to the extent the Berkeley rationality community perpetuates that mindset, they're preventing the expansion of the Craft. That nobody in Berkeley talks about this, and barely anybody who complains about Berkeley mentions it, leads me to think it's a huge blind spot for all of us. In the past there have been attitudes from Berkeley rationalists toward other rationality communities I've found upsetting because of the harm I think they cause, but to act as though there is an agency at the heart of the Berkeley rationality community conspiring to plunder others is pointless.
If rationality communities outside Berkeley are tired of losing people because it prevents them from launching projects which would advance the rationality happening in Berkeley, I want to help solve that problem. If rationality communities outside Berkeley are tired of losing people to Berkeley because they feel like there should be a more equitable distribution of bonobo rationalist cuddle puddles between Berkeley and everywhere else, that's not a problem I'm interested in putting much effort to solve. I've got nothing against bonobo rationalists, and if I had a magic button which would optimize cuddle puddles everywhere for any rationalist who'd want them, I'd press it.
I think the different concerns about the relationship between Berkeley and other rationality communities are all bundled up, and it's hard to tease them out, so for now I'm treating them as though they're all equally valid. But to the extent complaints are less of the form "we're tired of Berkeley taking up too much of the bandwidth of the awesome projects in the worldwide community" and more "we demand a redistribution of warm fuzzies", the less they move me. Local communities need to give individual rationalists reasons to stay, or reasons to come, and they need to take more self-responsibility for that. That's the attitude I tried bringing to turning things around in Vancouver; it inspired others, and we got a lot done. It sounds like some of what NYC experienced is an outlier in that regard, but based on the other comments here, my impression is local rationality communities are getting organized around solving the problem of getting hollowed out. My experience has been it's 99% perspiration, 1% inspiration. Based on my experience in Vancouver and accounts from elsewhere, I'm guessing we can identify some heuristic 'best practices' for rationality community organization/development. So if rationalists, wherever they are, are willing to perspire, I'm confident other, more developed rationality communities can provide them with the tools they need. But to get people to the point they're willing to perspire, I (we?) have to get that 1% inspiration right. And how to do that is what I'm stuck on now.
comment by Gordon Seidoh Worley (gworley) · 2018-05-21T01:30:55.101Z · LW(p) · GW(p)
Reading this I was reminded of something. Now, not to say rationality or EA are exactly religions, but the two function in a lot of the same ways, especially with respect to providing shared meaning and building community. And if you look at new, not-state-sponsored religions, they typically go through an early period where they are small and geographically colocated, and only later have a chance to grow, after sufficient time with everyone together, if they are to avoid fracturing such that we would no longer consider the growth "growth" per se and would instead call it dispersion. Consider for example Jews in the desert, English Puritans moving to North America, and Mormons settling in Utah. Counterexamples that perhaps prove the rule (because they produced different sorts of communities) include early Christians spread through the Roman empire and various missionaries in the Americas.
To me this suggests that much of the conflict people feel today about Berkeley comes from unhappiness at being rationalists who aren't living in Berkeley while the rationality movement is getting itself together in preparation for later growth, because, importantly for what I think many people are concerned about, this is a necessary period that comes prior to growth, not to the exclusion of growth (not that anyone is intentionally doing this; it's more that this is a natural strategy communities take up under certain conditions because it seems most likely to succeed). Being a rationalist not in Berkeley right now probably feels a lot like being a Mormon not in Utah a century ago, or a Puritan who decided to stay behind in England.
Now, if you care about existential risk you might think we don't have time to wait for the rationality community to coalesce in this way (or to wait to see if it even does!), and that's fair but that's a different argument than what I've mostly heard. And anyway none of this is necessarily what's actually going on, but it is an interesting parallel I noticed reading this.
↑ comment by Evan_Gaensbauer · 2018-05-26T00:19:04.166Z · LW(p) · GW(p)
I agree with all of this, except that existential risk reduction and other potential goals of the rationality community don't fit with waiting for rationality to coalesce into a world religion, which you've already acknowledged. Also, I feel like, just because it's the rationality community, we should find a way to create tighter feedback loops and coalesce into a worldwide community in a shorter period than religions typically take. Personally I'm more motivated by the Craft than the Community, but I figure to rally the whole community both are necessary (and interdependent?), so I'm still trying to hack together a way to balance both while accelerating the sustainable development of local rationality communities.
comment by Jan_Kulveit · 2018-05-22T07:29:26.918Z · LW(p) · GW(p)
On a sufficiently meta level, the cause of the problem may be that both rationality and EA thought leaders have roots in disciplines like game theory, microeconomics, and similar fields. These styles of analysis usually disregard topology (the structure of interactions).
For better or worse, rationalists and effective altruists actually orient themselves based on such models.
On a less meta level:
Possibly I'm overconfident, but from a network science inspired perspective, the problem with the current global movement structure seems quite easily visible, and the solutions are also kind of obvious (but possibly hard to see if people are looking mainly through models like "comparative advantage"?).
So what is the solution? A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should also be true for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of groups two orders of magnitude smaller fighting for mere existence), the movement should try to re-balance, supporting the growth of medium-tier hubs.
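(To make "approximately power-law" concrete, here is a minimal sketch in Python with entirely hypothetical hub sizes. A rank-size plot that looks roughly linear on log-log axes is weak, eyeball-level evidence of power-law-like structure, not a rigorous test.)

```python
import matplotlib.pyplot as plt

# Hypothetical member counts per hub, largest first -- illustrative only.
hub_sizes = sorted([300, 120, 90, 60, 45, 30, 22, 15, 12, 8, 6, 4], reverse=True)
ranks = range(1, len(hub_sizes) + 1)

# On log-log axes, a power law shows up as a roughly straight line.
plt.loglog(ranks, hub_sizes, marker="o", linestyle="none")
plt.xlabel("hub rank")
plt.ylabel("hub size (members)")
plt.title("Rank-size plot of hypothetical hub sizes")
plt.show()
```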
It seems this view is now gradually spreading, at least in the European effective altruism community, so the structure will get better.
(Possible caveat: if people have very short AGI timelines and high risk estimates, they may want to burn whatever is available, sacrificing future options.)
Replies from: Thrasymachus, Chris_Leong↑ comment by Thrasymachus · 2018-05-22T09:23:19.781Z · LW(p) · GW(p)
A healthy topology of the field should have an approximately power-law distribution of hub sizes. This should also be true for related research fields we are trying to advance, like AI alignment or x-risk. If the structure is very far from that (e.g. one or two very big hubs, then nothing, then a lot of groups two orders of magnitude smaller fighting for mere existence), the movement should try to re-balance, supporting the growth of medium-tier hubs.
Although my understanding of network science is abecedarian, I'm unsure both of whether this feature is diagnostic (i.e. divergence from power-law distributions should be a warning sign) and of whether we in fact observe overdispersion even relative to a power law. The latter first.
1) 'One or two big hubs, then lots of very small groups' is close to what a power law distribution should look like. If anything, it's plausible the current topology doesn't look power-lawy enough. The EA community overlaps with the rationalist community, and it has somewhat better data on topology: if anything, the hub sizes of the EA community are pretty even. This also agrees with my impression: although the Bay Area can be identified as the biggest EA hub, there are similar or at least middle-sized hubs elsewhere (Oxford, Cambridge (UK), London, Seattle, Berlin, Geneva, etc.). If we really thought a power law topology was desirable, there's a plausible case to push for centralisation.
The closest I could find to a 'rationalist survey' was the SSC survey, which again has a pretty 'full middle', not one or two groups ascendant. That said, I'd probably defer to others' impressions here, as I'm not really a rationalist and most of the rationalist online activity I see does originate from the Bay. But even if so, this impression wouldn't worry us if we wanted to see a power law here.
2) My understanding is there are a few generators of power law distributions. One is increasing returns to scale (e.g. cities being more attractive to live in the larger they are, ceteris paribus), another is imperfect substitution (why listen to an okay pianist when I can have a recording of the world's best?), a third could be positive feedback loops or Matthew effects (maybe 'getting lucky' with a breakout single increases my chance of getting noticed again, even when controlling for musical ability versus the hitless).
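(As an illustration of the third generator, a minimal, hypothetical sketch in Python: each newcomer joins a hub with probability proportional to its current size, and that rich-get-richer dynamic alone produces a heavy-tailed size distribution, with no hub being "better" on any other axis. The parameters are made up.)

```python
import random

def simulate_hubs(newcomers, seed_hubs=5, p_new=0.05):
    """Rich-get-richer growth: each newcomer founds a brand-new hub with
    probability p_new; otherwise they join an existing hub chosen with
    probability proportional to its current size (a Matthew effect)."""
    sizes = [1] * seed_hubs
    for _ in range(newcomers):
        if random.random() < p_new:
            sizes.append(1)
        else:
            i = random.choices(range(len(sizes)), weights=sizes)[0]
            sizes[i] += 1
    return sorted(sizes, reverse=True)

# A few hubs end up orders of magnitude larger than the long tail,
# even though no hub is intrinsically more attractive than any other.
print(simulate_hubs(5000)[:10])
```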
There are others, but many of these generators are neutral, and some should be welcomed. If there's increasing marginal returns to rationalist density, inward migration to central hubs seems desirable. Certain 'jobs' seem to have this property: a technical AI researcher in (say) Japan probably can have greater EV working in an existing group (most of which are in the bay) rather than trying to seed a new AI safety group in Japan. Ditto if the best people in a smaller hub migrate to contribute to a larger one (although emotions run high, I don't think calling this 'raiding' is helpful - the people who migrate have agency).
[3) My hunch is what might be going on is that the 'returns' are sigmoid, and so are diminishing with new entrants to the Bay Area. 'Jobs'-wise, it is not clear the Bay Area is the best place to go if you aren't going to work on AI research (and even if so, this is a skill set that is rare in absolute terms amongst rationalists). Social-wise, there's limited interaction bandwidth, especially among higher-status folks, and so the typical rationalist who goes to the Bay won't get the upside of the most desirable bits of Bay Area social interaction; weighed against the transaction costs, staying put and fostering another hub might look better.]
(I echo Chris's exhortation)
Replies from: Jan_Kulveit↑ comment by Jan_Kulveit · 2018-05-22T17:58:06.199Z · LW(p) · GW(p)
1) Thanks for the pointer to the data. I have to agree that if the surveys are representative of the EA/rationalist community, then there actually are enough medium-sized hubs. When plotted, the data seem to look reasonably power-lawy (an argument for greater centralization could take the form of arguing for a different exponent).
I'm unsure about what the data actually show; at least, my intuitive impression is that much more activity is going on in the Bay Area than the surveys suggest. A possible reason may be that the surveys count equally everybody above some relatively low level of engagement (willingness to fill out a survey), and if we had data weighted by engagement/work effort/... it would look very different.
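(A minimal sketch, with made-up survey rows, of how engagement weighting could change the picture relative to raw headcounts; the hubs and hours here are entirely hypothetical.)

```python
from collections import Counter

# (hub, hours of community work per week) -- made-up numbers for illustration.
respondents = [
    ("Bay Area", 20), ("Bay Area", 15), ("Bay Area", 10),
    ("Oxford", 5), ("Oxford", 2),
    ("Berlin", 1), ("Berlin", 1), ("Berlin", 1),
]

headcount = Counter(hub for hub, _ in respondents)
weighted = Counter()
for hub, hours in respondents:
    weighted[hub] += hours

print("headcount:", dict(headcount))           # Berlin matches the Bay Area
print("engagement-weighted:", dict(weighted))  # the Bay Area dominates
```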
If the complaints are right that big hubs are "sucking in" the most active people from smaller hubs, then big differences between "population size" and "results produced" can be a consequence (effectively wasting the potential of some medium-sized hubs, because some key core people left, damaging the hub's local social structure).
2) Yes, there are many effects leading to power laws (and influencing their exponents). In my opinion, rather than trying to argue from first principles which of these effects are good and bad, it may be more useful to find comparable examples (e.g. of young research fields, or successful social movements) and compare their structures. My feeling is the rationality/EA/AI safety communities are getting it somewhat wrong.
Certain 'jobs' seem to have this property: a technical AI researcher in (say) Japan probably can have greater EV working in an existing group (most of which are in the bay) rather than trying to seed a new AI safety group in Japan.
This certainly seems to be the prevalent intuition in the field, based on EV guesstimates, etc., and IMO it could be wrong. Or, speculating, possibly it isn't wrong _per se_, but it does not take into account that people want to be in the most prestigious places and groups anyway and already include this at an S1 level, so this model/meme pushes them away from good decisions.
↑ comment by Chris_Leong · 2018-05-22T07:49:27.425Z · LW(p) · GW(p)
I don't suppose I could persuade you to write up a post with what you consider to be some of the most important insights from network theory? I've started to think myself that some of the models we tend to use within the rationality community are overly simplistic.
comment by ChristianKl · 2018-05-20T09:35:20.804Z · LW(p) · GW(p)
This isn't about the Berkeley rationalist community, but rationalist communities everywhere. In reading about the experiences of rationalists in Berkeley and elsewhere, I've learned their internal coordination problems are paralleled in rationalist communities everywhere.
I'm not sure to what extent that's true. It seems to me like Berkeley has problems of status competition that come with scale, which I don't see in my local LessWrong community but which I hear described when I talk with people about the Bay Area.
If more people are interested in going to an event than there are spaces for it, you need to restrict entry, and thus people have to compete over entry.
Replies from: stardust↑ comment by stardust · 2018-05-20T17:49:24.867Z · LW(p) · GW(p)
I don't think I've ever seen an event with more people interested than able to attend in Berkeley. If anything, it's difficult to get people to come out for events.
Replies from: Raemon↑ comment by Raemon · 2018-05-20T19:32:36.443Z · LW(p) · GW(p)
I actually think this happens fairly frequently, although may be happening sort of invisibly:
- I think it most concretely happened at the last Winter and Summer Solstice – in this case it was explicitly due to event insurance concerns and explicit attendee caps.
- More often and more generally: especially for medium-sized parties (basically any time it's a private FB event and the room ends up pretty full), I think it's often the case that, before you get to the point where people notice and feel excluded, there's a pre-emptive pass where only a smaller subset of people get invited in the first place. The competition is happening quietly in the social network.
↑ comment by stardust · 2018-05-20T21:18:29.174Z · LW(p) · GW(p)
Ah, yeah, it did happen at last summer's solstice, I had forgotten. I was not involved with the winter solstice and didn't know about similar problems there.
I do agree that house parties are often selective, but I have never seen an event with a topic (as opposed to a purely social party) have more interest than the space allowed, which was the category of thing that was in my head when I said "event" above. I consider house parties to be more about hanging out with friends than about "the community" or whatnot.
Replies from: Raemon, gwillen↑ comment by Raemon · 2018-05-20T21:31:00.233Z · LW(p) · GW(p)
Yeah, agreed that events that are "expecting effort" on the part of participants don't usually have this problem.
The place where it seems most relevant is events that are sort of on the border between "hanging out with friends" and "hanging out with community" – house parties that play a large role in determining the overall social scene for Berkeley, where, say, 50-100 people get invited, but there are 200 people in the area.
(This is not me saying anyone is doing anything wrong, just, it's a thing to be aware of)
Replies from: stardust↑ comment by stardust · 2018-05-20T21:41:19.725Z · LW(p) · GW(p)
Yeah. For me, events at REACH are a good way to get to know new people and decide if I trust them enough to invite them to more private events. I think a lot of folks in the community are already at capacity for how many social connections they can keep up and so don't end up wanting to get to know new people.
I think some of this stems from the fact that many people seem to prefer talking to folks one on one which makes it hard to parallelize social time. My personal preference is for groups of 5-10, sometimes within a larger social setting, and have been sorta trying to impose this preference on others through doing things at REACH :P
Replies from: ChristianKl, Raemon↑ comment by ChristianKl · 2018-05-21T19:43:13.427Z · LW(p) · GW(p)
I think a lot of folks in the community are already at capacity for how many social connections they can keep up and so don't end up wanting to get to know new people.
That's basically the dynamic I was referring to. You don't have that to the same extent with fewer people in a community.
↑ comment by gwillen · 2018-05-22T00:39:19.305Z · LW(p) · GW(p)
The winter solstice last year used the same venue it had used the previous year, but the venue imposed a new, lower restriction on the maximum number of attendees, due to some new interpretation of the fire code or something. As a result, tickets did sell out. (I wasn't close enough to organization last year to know how last-minute the change was, but my impression was that there was some scrambling in response.)
This year a new venue is being sought that can better accommodate the number of people who want to attend.
comment by Dagon · 2018-05-21T02:54:15.466Z · LW(p) · GW(p)
Two somewhat independent thoughts:
1) If you think tech money is important, you need to be in the bay area. Just accept that. There's money elsewhere, but not with the same concentration and openness.
2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? "bring them in, use them up" _may_ be the way to get the most value from volunteer sacrifices. It may not - I haven't seen a growth plan for any org that explicitly has many orders of magnitude of increase while still being an infinitesimal fraction of the end-goal.
Both of these highlight the fact that although I'm a long-time reader and commenter, and consider myself a little-r rationalist, I find the community and organizational groupings to be opaque and alien. I'm glad people are experimenting with these things, but I'm happy to be far away from it.
Replies from: Zvi, Evan_Gaensbauer↑ comment by Zvi · 2018-05-21T13:38:08.393Z · LW(p) · GW(p)
The money in the Bay uses 'if you're not in the Bay you're not serious, and even if you are other Bay money won't take you seriously so I can't afford to' as a coercive strategy to draw people there. Parallel with the community issues. Giving in to such tactics makes the problem that much worse and it snowballs.
Yes, Bay tech money is bigger and more our flavor there, but there's lots in many other places, and we'd get more out of what money exists if we were spread out than if we all chased the biggest pile, even with that pile playing hostile negative-sum games on us.
Replies from: Dagon, Evan_Gaensbauer↑ comment by Dagon · 2018-05-21T17:36:02.509Z · LW(p) · GW(p)
The money in the Bay uses 'if you're not in the Bay you're not serious, and even if you are other Bay money won't take you seriously so I can't afford to'
Right. That's my "just accept it" point. If you want that money, you (currently) have to play by those rules. If you don't want to play that way, you need to stand up and say that your plan isn't based on bay-area money/support levels.
as a coercive strategy to draw people there.
It's hard for me to understand the use of "coercive" here. Other than choosing not to give you money/attention, what coercion is being applied?
Even so, I think that strategy (to draw the serious people who have the capability to contribute) is a small part of it. It's mostly just a simple acknowledgement that distance matters. It's just a bit more hassle to coordinate with distant partners, and that's enough to make many want to invest time/effort/money more locally, all else equal. This is compounded by the (weak but real) signals about your seriousness if you won't find a way to be in the center of things.
↑ comment by Evan_Gaensbauer · 2018-05-26T04:55:19.246Z · LW(p) · GW(p)
This dovetails with my experience and with what I've heard at other points in the community, as I described in this comment [LW(p) · GW(p)]:
There's often a perception resources are only invested in projects based in the Bay Area, so trying to start projects with rationalists elsewhere and expect to sustain them long-term is futile.
Moving to Berkeley is still so inaccessible or impractical for a lot of rationalists scattered everywhere that (especially if their friends leave) it breeds a sense of alienation and of being left behind/stranded as one watches everyone else talk about how they *can* flock to Berkeley. Combined with the rest of the above, this can also unfortunately breed feelings of resentment.
Rationalists from outside Berkeley often report feeling as though the benefits or incentives of moving to the Berkeley community are exaggerated relative to the trade-offs or costs of moving there.
↑ comment by Evan_Gaensbauer · 2018-05-26T04:50:36.183Z · LW(p) · GW(p)
1) If you think tech money is important, you need to be in the bay area. Just accept that. There's money elsewhere, but not with the same concentration and openness.
This is true. There are reasons other than community-building to not be concentrated in one place. I don't think trying to reverse the relatively high concentration of rationalists in the Bay Area is at this time a solution to common community problems.
2) Are you focused on saving the world, or on building community/ies who are satisfied with their identity as world-savers? "bring them in, use them up" _may_ be the way to get the most value from volunteer sacrifices. It may not - I haven't seen a growth plan for any org that explicitly has many orders of magnitude of increase while still being an infinitesimal fraction of the end-goal.
This strikes me as pretty unlikely. World-saving operations which try this strategy, often even more so among EA organizations than ones in the rationality community, appear to have a higher turnover rate, and they don't appear to have improved enough to compensate for that. The Centre for Effective Altruism and the Open Philanthropy Project are two closely tied organizations which are the two biggest funders in effective altruism, which also covers x-risk/world-saving rationalist projects. They're taking more of a precision approach, building community and ties in a way they think will maximize the world-saving-ness of the community. Not everyone agrees with the strategy (see this thread [LW(p) · GW(p)]), but it's definitely a more hands-on approach, moving away from the "bring them in, use them up" model that was closer to what EA organizations tended to do a few years ago.
Many of the other comments on this post point to a trade-off between a world-saving focus and rationality community-building as an issue of concern, but my sense is the tension exists because both are considered important, so the way forward is to find better ways not to lose community-building to world-saving.
comment by sapphire (deluks917) · 2018-05-20T23:17:32.950Z · LW(p) · GW(p)
I am sort of agnostic about whether the Berkeley community is a good idea or not. On one hand, it certainly feels pointless to try to build up any non-Berkeley community. If someone is a committed rationalist they are pretty likely to move to Berkeley in the near future. In addition, it is very hard to constantly lose friends. This post probably best captures the emotional reality:
"I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than myself, and I have no desire to be a ‘den mother’ to nubes who will just move to Berkley if they actually develop agency… I worry that I have wasted the last decade of my life putting emotional effort into relationships that I have been unable to keep and I would have been better off finding other communities that are not so prone to having its members disappear."
If you base your social life around the rationality community, and do not live in Berkeley, you are in for a lot of heartache. For this reason I cannot really recommend people invest too heavily in the rationalist community unless they want to move to Berkeley.
===
On the other hand, concentration has benefits. Living close to your friends has huge social benefits. As Sarah says, very few adults live on a street with their friends, and many Berkeley rationalists do. It looks likely there will be rationalist group parenting/unschooling. The Berkeley REACH looks awesome (I am a patron despite living on the other side of the country). The question is whether the Berkeley community is worth the severe toll it places on other rationalist communities. In the past I thought Berkeley had some pretty severe social problems. A lot of people (who were neither unusually well connected nor high status) who moved there reported surprising levels of social isolation. However, things seem to have improved a ton. There are now a ton of group houses near each other, and the online community (discord/tumblr) is pretty inclusive and lets 'not high status' people make connections pretty easily.
Also, arguably 'Moloch already won', so it's hard to tell people to refrain from moving to Berkeley.
===
(I am currently one of the more active people in NYC. The meetup currently occurs in my apartment, etc. )
Replies from: Evan_Gaensbauer, moridinamael, Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2018-05-24T08:17:26.560Z · LW(p) · GW(p)
One pattern I'm noticing: because of the comparative advantages of citizenship in other countries, and the relative difficulty of attaining permanent residency in the United States, communities of rationalists abroad are more stable over time, given the practical difficulty of convincing people to move to the United States. For example, post-secondary education being more subsidized, not just in undergrad but in graduate studies as well, keeps non-American rationalists in their home countries until their mid-to-late twenties. That's young enough that I know rationalists who muse about moving to Berkeley someday to work on AI alignment or another community project, but I also know a lot of rationalists who have set down roots where they are by then and aren't inclined to move. Another thing is that if a rationalist doesn't have a university degree or skills in high demand at big corporations (e.g., STEM), getting health insurance and visas to emigrate to the United States is difficult enough that it doesn't make sense to try.

This first post is intended to be part of a sequence focused on finding solutions to problems apparently common to community organization both in Berkeley and elsewhere. I tried to end this post on a positive tone, intending to build up to optimism in the next one, with marked examples of recent success among rationalists around the world. Vancouver has a couple of local EA organizations, and strategies we've implemented locally have dramatically increased our rate of meetups and doubled the number of rationalist houses (from 2 to 4, but in 6 months that is still significant). The same has happened in Montreal over the last few months. Jan Kulveit, who has also commented on this post, has reported a lot of success with local community organization in the Czech Republic, as has Toon Alfrink in the Netherlands. If we can integrate what worked for us into a single strategy for mobilizing resources in local rationality communities, it could be excellent.
The good news is I think the possible global failure mode I pointed out, of the rationality community being too heavily concentrated in a single geographic hub which may then collapse, appears quite unlikely to come about for the foreseeable future. So while the experience of the NYC rationalist community may be similar to that of a lot of rationality communities, it's not universal. I don't know if that means much given the NYC community has lost so many people, but hopefully, if something comes out of people sharing solutions, we can find a way to help the NYC community as well.
↑ comment by moridinamael · 2018-05-21T12:27:17.038Z · LW(p) · GW(p)
So it's hard to tell people to refrain from moving to Berkeley
I apologize for possibly/probably twisting your words a bit here, but I never have trouble telling people to refrain from moving to the Bay/Berkeley. I tell them I lived there for a few years and it's a pretty unpleasant place, objectively, along any of ten different metrics relevant to comfort and peace of mind. I tell them I never actually developed any sense of belonging with the local Rationalist Community, so it's not guaranteed that that will happen. I tell them I make a pretty good amount of money in many cities, but since I'm not a Comp Sci grad that doesn't translate to a decent living in Berkeley. I tell them, on top of that, Berkeley is one of the most expensive places to live in the world, and if there were some kind of objective ratio of cost of living divided by objective comfort/quality/value-of-a-dollar, Berkeley would be near the top worldwide.
I also don’t find the proposition that you have to literally move to an expensive unpleasant overcrowded dystopian city in order to be rational to be particularly, uh, rational.
Replies from: Zvi, deluks917↑ comment by Zvi · 2018-05-21T13:42:18.080Z · LW(p) · GW(p)
If you could turn that warning into a post, I think it might be helpful, especially if you can be explicit about things. Having it come from someone with experience living there helps make the message credible, and helps you craft a better message. I worry my words ring hollow, and I can't make clear much of what I see.
↑ comment by sapphire (deluks917) · 2018-05-21T14:07:21.751Z · LW(p) · GW(p)
I don't tell everyone to move to Berkeley. But if you are heavily invested socially in the rationalist community, you are passing up a lot of personal utility by not moving to Berkeley. Other considerations apply, of course. But I think the typical highly invested rationalist would be personally better off if they moved to Berkeley. Whether this dynamic is good for the community long-term or not is unclear.
Replies from: Elo↑ comment by Elo · 2018-05-21T20:57:32.254Z · LW(p) · GW(p)
Or you could start a new branch.
Replies from: Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2018-05-24T00:05:47.013Z · LW(p) · GW(p)
What do you mean by a new branch of the rationality community? John Maxwell suggested in another thread that local rationality communities aside from Berkeley could have comparative advantages in specializing in offering rationalists the sorts of things they might want but typically can't find in Berkeley. This has been the intention of other projects to build up local rationality communities, like Project Kernel [LW · GW] (which is currently experiencing significant problems).
Replies from: Elo↑ comment by Elo · 2018-05-26T05:19:13.834Z · LW(p) · GW(p)
I meant "a new local meetup".
Replies from: Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2018-05-26T05:56:10.783Z · LW(p) · GW(p)
Alright, that makes sense. I was reading some of Zvi's other posts on his blog about the rationality community, and I think there are significant advantages to starting a new local meetup that he was missing. Some of them applied to me up until the last few months, when we had success in starting a new local meetup after organization had fallen through for almost a year.
Replies from: Elo↑ comment by Evan_Gaensbauer · 2018-05-25T22:27:11.772Z · LW(p) · GW(p)
Thanks for your response.
I am sort of agnostic about whether the Berkeley community is a good idea or not. On one hand, it certainly feels pointless to try to build up any non-Berkeley community. If someone is a committed rationalist they are pretty likely to move to Berkeley in the near future. In addition, it is very hard to constantly lose friends.
So there is the 'Craft' and the 'Community', or at least that is sometimes how rationality is modeled. And the Community could be broken down into Berkeley and the other communities elsewhere. But if rationality is also about a mission to ensure human values are carried to the stars, and right now that hinges on AI alignment, it makes sense to me that the rationality community is significantly concentrated in the Bay Area. This and other mindsets of singular focus in the name of the Craft appear as though they might come at some expense to the Community in Berkeley as well. The last year has seen some people in the Berkeley community ask whether the Berkeley community is good for the community as a whole. I think this might be part of a worldwide problem in rationality, which I only have half an idea of how to tackle. I might need to get a lot of thoughts down before I figure out where I'm going with them.
This post probably best captures the emotional reality:
"I have lost motivation to put any effort into preserving the local community – my friends have moved away and left me behind – new members are about a decade younger than myself, and I have no desire to be a ‘den mother’ to nubes who will just move to Berkley if they actually develop agency… I worry that I have wasted the last decade of my life putting emotional effort into relationships that I have been unable to keep and I would have been better off finding other communities that are not so prone to having its members disappear."
If you base your social life around the rationality community, and do not live in Berkeley, you are in for a lot of heartache. For this reason I cannot really recommend people invest too heavily in the rationalist community unless they want to move to Berkeley.
There are stories of mixed success throughout the community in building rationalist communities outside of Berkeley, and they give me some hope, but then I read about experiences like these and I feel ambivalent. I'm afraid anything other local rationality community organizers might recommend is something NYC or another once-flourishing rationalist community has already tried before without success. I'm also afraid that if a new community takes advice on building up while retaining membership over time that worked somewhere else, and then fails, it will greatly discourage the new community someone tried launching. Ultimately I consider the struggles the community faces to be hard optimization problems, and right now I'm holding off on proposing solutions until I've discussed them more.
On the other hand, concentration has benefits. Living close to your friends has huge social benefits. As Sarah says, very few adults live on a street with their friends, and many Berkeley rationalists do. It looks likely there will be rationalist group parenting/unschooling. The Berkeley REACH looks awesome (I am a patron despite living on the other side of the country). The question is whether the Berkeley community is worth the severe toll it places on other rationalist communities. In the past I thought Berkeley had some pretty severe social problems. A lot of people (who were neither unusually well connected nor high status) who moved there reported surprising levels of social isolation. However, things seem to have improved a ton. There are now a ton of group houses near each other, and the online community (discord/tumblr) is pretty inclusive and lets 'not high status' people make connections pretty easily.
Also, arguably 'Moloch already won', so it's hard to tell people to refrain from moving to Berkeley.
Ideally we would find ways to create similar outcomes for rationalists in lots of different places, which I see as a hard optimization problem I'm holding off on proposing solutions to until I've looked at it from more angles.
comment by Unreal · 2018-05-26T08:03:07.766Z · LW(p) · GW(p)
I'm confused by either your Seattle timeline or your use of the term "Rationality Reading Group."
As far as I know, I started the Rationality Reading Group in 2015, after my Jan CFAR Workshop. We read through a bunch of the Sequences.
I left Seattle in late 2016 and left RRG in other capable hands. To this day, RRG (afaik) is still going and hasn't had any significant breaks, unless it did and I just didn't know about it.
In any case, I'd appreciate some kind of update to your post such that it is either more accurate or less confusing...
Replies from: Unreal, Evan_Gaensbauer↑ comment by Unreal · 2018-05-26T08:07:19.893Z · LW(p) · GW(p)
Also, the story is basically: for a while there was a LessWrong meetup, but then this got dropped and transformed into an EA Meetup. Then there were only EA meetups for a while. Then I started RRG and brought rationality back as its own hub, creating the Seattle Rationality FB group as well. The rationality community grew. Now there are multiple rationalist group houses including a new hub. People did leave for Berkeley, but weekly RRG is still going afaik, and there is still an active community, although its composition is perhaps quite different now.
↑ comment by Evan_Gaensbauer · 2018-05-26T16:29:27.112Z · LW(p) · GW(p)
I've edited my post. Thanks for clarifying.
comment by ChristianKl · 2018-05-20T09:36:18.249Z · LW(p) · GW(p)
As a local community organizer, I developed tactics for doing so, such that if they worked in Vancouver, they should work for any rationalist community.
As a fellow community organizer (Berlin), I would be happy to read about them.
Replies from: stardust, Evan_Gaensbauer↑ comment by stardust · 2018-05-20T17:51:49.529Z · LW(p) · GW(p)
I'm working on building up a similar reproducible set of operating guidelines for REACH and would be very interested in comparing notes.
Replies from: ChristianKl, Evan_Gaensbauer↑ comment by ChristianKl · 2018-05-20T19:22:20.307Z · LW(p) · GW(p)
I just ran my third open LessWrong meetup in Berlin, about gratitude. Before that I ran one in Hamburg a while ago, and Christian Kamm was responsible for running the monthly LessWrong meetup in Berlin.
After running the first meetup I wrote up the idea for the meetup under How do we change our minds? A meetup blueprint [LW · GW].
I did organize Quantified Self meetups in Berlin from 2011 to 2013 and have a bit of other community leading experience.
I'm happy to talk more.
↑ comment by Evan_Gaensbauer · 2018-05-26T00:10:25.828Z · LW(p) · GW(p)
I've got a bunch of different ideas, some of which are about creating a local rationalist culture, which depending on how they pan out might be a sequence of blog posts. I also have some tips for what worked well in Vancouver, and they might work even better at a community center.
- In Vancouver, having a Facebook group has helped. Not everyone is into FB, for any number of reasons, so mailing lists or a Discord server also work. Keeping in touch online as well as offline keeps local people who are invested in the community but can't make it out in person very often in the loop, and it helps promote bonds between people.
- The biggest thing might be posting housing opportunities, even pinning a post in our Facebook group for requests for housemates/housing, which has helped several local community members find new roommates. It's helped increase the number of rationalist sharehouses in Vancouver from 1 to 3 in 8 months. Using an online group as a digital bulletin board for housing opportunities helps local rationalists get more involved in the community, creates more rationalist spaces, and helps the houses in question retain a community culture over time. We haven't tried it for things like rationalists sharing employment opportunities with each other, but presumably a virtual bulletin board could also create more opportunities for material mutual support/exchange between rationalists. Of course, at REACH or another rationality community center (hi Seattle!), you can also do this with a literal bulletin board.
- In the Vancouver rationality/LW FB group I made two polls: one for what times of day and days of the week people were most available for doing things, and another for what kinds of activities different people wanted to do with other people. This worked well with Facebook groups because FB group polls visually and immediately tell you what the most popular choices are, and show you the names and profiles of the people who picked the same poll options. In both polls everyone could choose multiple options. I then cross-referenced the two polls and grouped together people who wanted to do the same thing with other rationalists at the same time as each other. That way I was able to organize several meetups online in parallel, for lots of different people, dramatically increasing the number of local community members who were able to attend a meetup which suited their interests and availability. Using this polling system created more diversity and quantity in the things rationalists did together locally. It also tended to produce more frequent, smaller gatherings which were more focused, instead of big, generic social events, which I expect don't gel well with a significant minority of rationalists.
Someone doesn't have to use FB group polls like I did to create a system for matching different kinds of rationalists to the kinds of meetups they would want to attend; someone could do it with SurveyMonkey or a Google Form. The key part is sending individual rationalists invitations to events organized based on their expressed preferences, instead of only organizing generic events intended to accommodate everyone's interests at the same time. I was the person who ran the survey, but I didn't organize all the meetups. I mean, I set them up online, but the hosts or facilitators were other community members who either volunteered or to whom I delegated some event organization. This created a suite of available events/activities for rationalists to choose from. The final part I never got around to was incorporating this into a calendar for everyone to browse, so people who didn't fill out the survey, or who wanted to try something new, could figure out what to try. But I think I should've done that, and I'd suggest the same to anyone else who tries. Again, this fits well into a rationality community center, because everyone knows where the things on the calendar happen (the community center), which simplifies things.
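(For anyone who wants to replicate the cross-referencing step, here is a minimal sketch in Python with made-up poll responses; the names, time slots, and activities are all hypothetical. Grouping people by (time slot, activity) pairs yields candidate meetups an organizer can then hand off to volunteer hosts.)

```python
from collections import defaultdict

# Hypothetical poll data -- names and options are illustrative only.
availability = {
    "alice": {"weekday evening", "sunday afternoon"},
    "bob":   {"weekday evening"},
    "carol": {"sunday afternoon"},
    "dave":  {"weekday evening", "sunday afternoon"},
}
interests = {
    "alice": {"board games", "sequences discussion"},
    "bob":   {"sequences discussion"},
    "carol": {"board games"},
    "dave":  {"board games", "sequences discussion"},
}

# Cross-reference the two polls: every (slot, activity) pair a person
# chose in both polls makes them a candidate attendee for that meetup.
candidate_meetups = defaultdict(list)
for person, slots in availability.items():
    for slot in slots:
        for activity in interests.get(person, set()):
            candidate_meetups[(slot, activity)].append(person)

# Keep only groupings with enough people to be worth scheduling.
for (slot, activity), people in sorted(candidate_meetups.items()):
    if len(people) >= 3:
        print(f"{activity} on {slot}: {', '.join(sorted(people))}")
```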
↑ comment by Evan_Gaensbauer · 2018-05-26T00:13:18.457Z · LW(p) · GW(p)
It's taking me longer than expected to get everything written down, but I explained the tactics that have generated the most value for the Vancouver rationality community in the last 6 months in this reply to stardust's comment [LW(p) · GW(p)].
comment by Chris_Leong · 2018-05-20T09:59:16.486Z · LW(p) · GW(p)
Thanks for writing this post, this is a worry that I have as well.
I also believe that more could be done to build the global rationality community. I mean, I'm certainly keen to see the progress with LW2.0 and the new community section [? · GW], but if we really want rationality to grow as a movement, we at least need some kind of volunteer organisation responsible for bringing this about. I think the community would be much more likely to grow if there was a group doing things like advising newly started groups, producing materials that groups could use or creating better material for beginners.
"While this worst-case scenario could apply to any large-scale rationalist project, with regards to AI alignment, if the locus of control for the field falls out of the hands of the rationality community, someone else might notice and decide to pick up that slack. This could be a sufficiently bad outcome rationalists everywhere should pay more attention to decreasing the chances of it happening." - what would be wrong with this?
Replies from: Evan_Gaensbauer↑ comment by Evan_Gaensbauer · 2018-05-25T21:33:46.223Z · LW(p) · GW(p)
Creating some kind of volunteer organization like that is an end-goal I have in mind, and I've started talking to other people about this project. I've volunteered with, and been friends for a long time with, a local EA organization, Rethink Charity, which runs the Local Effective Altruism Network (LEAN). LEAN does exactly that for EA: advising newly started groups, producing materials the groups can use, and innovating ways to help groups get organized. So as part of a volunteer organization I could get advice from them on how to optimize it for the rationality community.
what would be wrong with this?
Conceivably, a community other than rationality steering the trajectory of AI alignment as a field might increase existential risk directly if they were abysmal at it, or counterfactually increase x-risk relative to what would be achieved by the rationality community. By 'rationality community', I also mean organizations that were started from within the rationality community or have significantly benefited from it, such as CFAR, MIRI, BERI, and FLI. So my statement is based on 2 assumptions:
1. AI alignment is a crucial component of x-risk reduction, which is in turn a worthwhile endeavour.
2. The rationality community, including the listed organizations, forms a coalition which has the best track record of advancing AI alignment with epistemic hygiene relative to any other, and so on priors the loss of relative influence on AI alignment by the rationality community to other agencies would mean x-risk is reduced less than it otherwise would be.
If someone doesn't share those assumptions, my statement doesn't apply.