The Story of CFAR
post by Zvi · 2017-12-25T15:10:00.320Z · LW · GW
In addition to my donation to MIRI, I am giving $4000 to CFAR, the Center for Applied Rationality, as part of their annual fundraiser. I believe that CFAR does excellent and important work, and that this fundraiser comes at a key point where an investment now can pay large returns in increased capacity.
I am splitting my donation and giving to both organizations for three reasons. I want to meaningfully share my private information and endorse both causes. I want to highlight this time as especially high leverage, due to the opportunity to purchase a permanent home. And importantly, CFAR and its principals have provided, and will in the future provide, direct personal benefits, so it’s good and right to give my share of support to the enterprise.
As with MIRI, you should do your own work and make your own decision on whether a donation is a good idea. You need to decide if the cause of teaching rationality is worthy, either in the name of AI safety or for its own sake, and whether CFAR is an effective way to advance that goal. I will share my private information and experiences, to better aid others in deciding whether to donate and whether to consider attending a workshop, which I also encourage.
Here are links to CFAR’s 2017 retrospective, impact estimate, and plans for 2018.
I
My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.
I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others who had been selected to attend for that same purpose, was wonderful and valuable as well. Such benefits should not be underestimated.
In the years since then, many of my friends in the community have attended workshops, reporting that things have improved steadily over time. A large number of rationality concepts have emerged directly from CFAR’s work, the most central being double crux. They’ve also taken outside concepts that work and adapted them to the context of rationalist outlooks, an example being trigger action plans. I had the opportunity recently to look at the current CFAR workbook, and I was impressed.
In February, CFAR president and co-founder Anna Salamon organized an unconference I attended. It was an intense three days that left me and many other participants better informed and also invigorated and excited. As a direct result of that unconference, I restarted this blog and stepped back into the fray and the discourse. I have her to thank for that. She was also a force behind the launch of the new Less Wrong, as were multiple other top CFAR people, including but far from limited to Less Wrong’s benevolent dictator for life Michael “Valentine” Smith and CFAR instructor Oliver Habryka.
I wanted to attend a new workshop this year at Anna’s suggestion, as I think this would be valuable on many levels, but my schedule and available vacation days did not permit it. I hope to fix this in the coming year, perhaps as early as mid-January.
As with MIRI, I have known many of the principals at CFAR for many years, including Anna Salamon, Michael Smith and Lauren Lee, along with several alumni and several instructors. They are all smart, trustworthy and dedicated people who believe in doing their best to help their students, and to help those students have an impact in AI Safety and other places that matter.
In my endorsement of MIRI, I mentioned that the link between AI and rationality cuts both ways. Thinking about AI has helped teach me how to think. That effect does not get the respect it deserves. But there’s no substitute for studying the art of thinking directly. That’s where CFAR comes in.
II
CFAR is at a unique stage of its development. If the fundraiser goes well, CFAR will be able to purchase a permanent home. Last year CFAR spent about $500,000 on renting space; renting the kind of spaces CFAR needs is expensive. Almost all of these needs would be covered by CFAR’s new home, with a mortgage plus maintenance that they estimate would cost at most $10,000 a month ($120,000 a year), saving roughly 75% on space costs and a whopping 25% of CFAR’s annual budget. The marginal cost of running additional workshops would fall even more than that.
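For those who want to check the arithmetic, here is a quick back-of-the-envelope sketch (assuming the roughly $500,000/year rent figure and the $10,000/month estimate above are taken at face value; the implied ~$1.5 million annual budget is my own inference from the 25% figure, not a number CFAR has published):

```python
# Back-of-the-envelope check of the space-cost figures quoted above.
current_rent = 500_000        # dollars per year, current rented space (approximate)
owned_cost = 10_000 * 12      # dollars per year, estimated mortgage plus maintenance

savings = current_rent - owned_cost
print(f"Annual savings: ${savings:,}")                              # $380,000
print(f"Share of space costs saved: {savings / current_rent:.0%}")  # 76%

# If that saving is ~25% of the annual budget, the budget is roughly:
implied_budget = savings / 0.25
print(f"Implied annual budget: ${implied_budget:,.0f}")             # $1,520,000
```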
In addition to that, the ability to keep and optimize a permanent home, set up for their purposes, will make things run a lot more smoothly. I expect a lot of gains from this.
Whether CFAR gets to do that depends on the results of their current fundraiser, and on what they can raise before the end of the year. The leverage available here is quite high – we can move to a world in which the default is that a workshop is likely running in any given week.
III
As with MIRI, it is important that I also state my concerns and my biases. The dangers of bias are obvious. I am highly invested in exactly the types of thinking CFAR promotes. That means I can verify that they are offering ‘the real thing’ in an important sense, and that they have advanced not only the teaching of the art but also the art itself. It also means that I am especially inclined to think such things are valuable. Again as with MIRI, I know many of the principals, which means good information but also might be clouding my judgment.
In addition, I have concerns about the philosophy behind CFAR’s impact report.
In the report, impact is measured in terms of students who had an ‘increase in expected impact (IEI)’ as a result of CFAR. Impact is defined as doing effective altruist (EA) type things: donating to EA-style organizations, working with such organizations (including MIRI and CFAR), pursuing a career path towards EA-aligned work (including AI safety), or leading rationalist/EA events. 151 of the 159 alumni with such impact fall into one of those categories, with only 8 contributing in other ways.
I sympathize with this framework. Not measuring at all is far worse than measuring. Measurement requires objective endpoints one can measure.
I don’t have a great alternative. But the framework remains inherently dangerous. Since CFAR is all about learning how to think about the most important things, knowing how CFAR is handling such concerns becomes an important test case.
The good news is that CFAR is thinking hard and well about these problems, both in my private conversations with them and in their listed public concerns. I’m going to copy over the ‘limitations’ section of the impact statement here:
- The profiles contain detailed information about particular people’s lives, and our method of looking at them involved sensitive considerations of the sort that are typically discussed in places like hiring committees rather than in public. As a result, our analysis can’t be as transparent as we’d like and it is more difficult for people outside of CFAR to evaluate it or provide feedback.
- We might overestimate or underestimate the impact that a particular alum is having on the world. Risk of overestimation seems especially high if we expect the person’s impact to occur in the future. Risk of underestimation seems especially high if the person’s worldview is different from ours, in a way that is relevant to how they are attempting to have an impact.
- We might overestimate or underestimate the size of CFAR’s role in the alum’s impact. We found it relatively easier to estimate the size of CFAR’s role when people reported career changes, and harder when they reported increased effectiveness or skill development. For example, the September 2016 CFAR for Machine Learning researchers (CML) program was primarily intended to help machine learning researchers develop skills that would lead them to be more thoughtful and epistemically careful when thinking about the effects of AI, but we have found it difficult to assess how well it achieved this aim.
- We only talked with a small fraction of alumni. Focusing only on these 22 alumni would presumably undercount CFAR’s positive effects. It could also cause us to miss potential negative effects: there may be some alums who counterfactually would have been doing high-impact work, but instead are doing something less impactful because of CFAR’s influence, and this methodology would tend to leave them out of the sample.
- This methodology is not designed to capture broad, community-wide effects which could influence people who are not CFAR alums. For example, one alum that we interviewed mentioned that, before attending CFAR, they benefited from people in the EA/rationality community encouraging them to think more strategically about their problems. If CFAR is contributing to the broader community’s culture in a way that is helpful even to people who haven’t attended a workshop, then that wouldn’t show up in these analyses or the IEI count.
- When attempting to shape the future of CFAR in response to these data, we risk overfitting to a small number of data points, or failing to adjust for changes in the world over the past few years which could affect what is most impactful for us to do.
These are very good concerns to have. Many of the most important effects of CFAR are essentially impossible to objectively measure, and certainly can’t be quantified in an impact report of this type.
My concern is that measuring in this way will be distortionary. If success is measured and reported, to EAs and rationalists, as alumni who orient towards and work on EA and rationalist groups and causes, the Goodhart’s Law dangers are obvious. Workshops could become increasingly devoted to selling students on such causes, rather than improving student effectiveness in general and counting on effectiveness to lead them to the right conclusions.
Avoiding this means keeping instructors focused on helping the students, and far away from the impact measurements. I have been assured this is the case. Since our community is unusually scrupulous about such dangers, I believe we would be quick to notice and highlight the behaviors I am concerned about, if they started happening. This will always be an ongoing struggle.
As I said earlier, I have no great alternative. The initial plan was to use capitalism to keep such things in check, but selling to the public is if anything more distortionary. Other groups that offer vaguely ‘self-help’ style workshops end up devoting large percentages of their time to propaganda and to giving the impression of effectiveness rather than actual effectiveness. They also cut off many would-be students from the workshops due to lack of available funds. So one has to pick one’s poison. After seeing how big a distortion market concerns were to MetaMed, I am ready to believe that the market route is mostly not worth it.
IV
I believe that both MIRI and CFAR are worthy places to donate, based on both public information and my private information and analysis. Again I want to emphasize that you should do your own work and draw your own conclusions. In particular, the case for CFAR relies on believing in the case for rationality, the same way that the case for MIRI relies on believing in the need for work in AI Safety. There might be other causes and other organizations that are more worthy; statistically speaking, there probably are. These are the ones I know about.
Merry Christmas to all.
4 comments
comment by [deleted] · 2017-12-28T00:40:31.323Z · LW(p) · GW(p)
Quick point of clarification (I'm currently heading up CFAR's venue search): the relative success of the fundraiser does affect the likelihood that we'll be able to buy a venue, but the most relevant factor is whether or not we receive a hoped-for ~$800k institutional grant to cover the down payment. If we don't, there's some chance we won't be able to purchase the venue even if we hit our optimistic fundraising target, forcing us to lease rather than buy (much more expensive per unit of goodness, but still preferable to the status quo). The most relevant effect of the fundraiser will be to determine the proportion of higher-impact special workshops we can run relative to intro mainline workshops.
comment by Ben Pace (Benito) · 2017-12-26T00:45:14.053Z · LW(p) · GW(p)
Btw, LessWrong’s BDFL is Matthew Graves aka Vaniver, not Michael “Valentine” Smith.
Added: Vaniver is the lord, our god, and hereafter we shall praise him.
comment by Raemon · 2017-12-27T03:56:17.891Z · LW(p) · GW(p)
For someone generally familiar with CFAR, who thinks they do good work, but wasn't sure how they rank compared to x-risk opportunities, I think the most salient takeaways from this were:
- Now that we have a lot of people relying on each other for information, and in particular we have OpenPhil, which doesn't fund any organization for more than 50% of its budget (I think with good reason), there's some reason to donate roughly in proportion to what you think OpenPhil should be funding things at.
- CFAR, in addition to doing generally good work, is trying to purchase a building that would allow them to run workshops at almost zero marginal cost, allowing them to try much more experimental workshops. This seems high value in general, and in particular if you think experimental workshops are where most of CFAR's value lies.