Why CFAR? The view from 2015
post by PeteMichaud · 2015-12-23T22:46:55.113Z
In this post, we:
- Revisit CFAR’s mission, and why that mission matters today;
- Review our progress to date;
- Offer a look at our financial overview;
- Share our ambitions for 2016; and
- Ask for your help, via donations and other means.
We are in the middle of our matching fundraiser, so if you’ve been considering donating to CFAR this year, now is an unusually good time.
CFAR’s mission, and why that mission matters today
CFAR’s mission is to help people develop the abilities that let them meaningfully assist with the world’s most important problems, by improving their ability to arrive at accurate beliefs, act effectively in the real world, and sustainably care about that world.
We know this is an audacious thing to try—especially the “ability to form accurate beliefs” part—but it seems to us that such attempts work sometimes anyhow. Eliezer’s Sequences seem to offer principled improvements to some aspects of some people's world-modeling skill (via synthesizing much recent cognitive science, probability theory, etc.); this seems to us to be a useful point from which to build.
The fact remains that we do not yet have the talent necessary to win—to see the world’s problems clearly, plot strategies that have a shot at working, update when those strategies don’t work, and plan effectively around unknowns; to avoid any great filters that may be lurking, solve global and even astronomical challenges, and create a flourishing world for all.
Arguably, people of the caliber we’re shooting for don’t exist yet. But even if they do, it seems clear that we don’t have nearly enough of them to make success likely.
So, audacious or not, this is a task that needs to be done, and CFAR is our attempt to do it. If we can widen the bottleneck on thinking better and doing more, we’re increasing the odds of a better future regardless of what the important problems turn out to be.
Our progress to date
By the end of 2014, CFAR had created workshops that participants liked a lot and which evidence suggests had concrete benefits for them. However, our mission remains to impact the world. The question became whether we could adapt our workshops into something that had the potential for large impact.
Our central goal for 2015 was therefore to create what we called a "minimum strategic product" -- a product that, as we put it last year, would "more directly justify CFAR's claim to be an effective altruist project" by demonstrating that we could sometimes improve people's thinking skill, competence, and/or do-gooding to the point where they were able to engage in direct work on a key talent-limited task.
Running the MIRI Summer Fellows Program gave us the opportunity we'd sought to try our hand at creating such direct impact. Our plan was to test and develop our curriculum and training methods by running a training program that would not only improve people’s ability to think about some of the big questions, but also do so in a fashion that could lead to immediate progress.
How did we do? Here’s what Nate Soares, MIRI’s Executive Director, had to say:
“MSFP was a resounding success: many participants gained new skills relevant to alignment research, and the program led directly to multiple MIRI hires. The world needs more talented people focusing on big important problems, and CFAR has figured out how to develop those sorts of talents in practice.”
While working to help create AI alignment researchers, we also found that this focus on how to become a better scientist led us into more fruitful territory for improving our understanding of the art. (If you're curious, you can see a highly incoherent version of some of the skills we tried to get across in this working document. Read below for more details about art creation, and our plans to expand on more targeted training programs.)
Last year's "goals for 2015"
We hit some of our concrete goals for 2015 and got distracted from others (partly, perils of unanticipated opportunities :-/).
We created a provisional metric for participants' before-and-after strategic usefulness, hitting the first goal; we started tracking that metric, hitting the second goal. Then we found that the metric was too unwieldy and too interpersonally tricky to regularly use on participants, making this "hitting" of our "goals" somewhat less useful than we had hoped. (On the upside, we learned something about how not to build metrics. :-/)
We then got the opportunity to run MIRI Summer Fellows, as noted above... and mostly dropped our previously declared goals to pull off the program, partly because our goals had been meant as a concretization of "can we train people who matter for the world", and the Summer Fellows program seemed like a better concretization of the same. (The program required a lot of new curriculum beyond what we had before, and a lot of skill development on the part of our teaching staff; and even so, and despite Nate's calling it a "resounding success", we had a feeling of leaving a lot of opportunity on the table; opportunity we intend to pick up in our second MIRI Summer Fellows program this coming summer).
From the original "concrete goals" list: goal three was a bit wishy-washy, but was probably done. Goals four and five we did not measure at all; we should and will, and will let you know when we do. It seems good that we opportunistically put our all into the Summer Fellows program (and okay to de-emphasize old goals in pursuit of that), but good also to then follow up for the sake of feedback loops and honesty.
Organizational capital
2015 was the year in which we finally managed to stop wearing all the hats, thanks to a huge increase in organizational capital. At the start of 2015, workshops were stressful for staff. Between workshops, our workdays were cluttered with a disproportionate amount of attention spent on logistics, alumni followups, and tasks like accounting.
This stress and clutter was part of what was preventing us from seeing what we were doing, and figuring out how to actually contribute to the world; smoothing out the wrinkles in our day-to-day workflow was (we think) a major stepping stone toward discovering our minimum strategic product.
That’s why we spent a lot of time and effort this year on streamlining operations and increasing specialization, so that we could both free the capacity to focus on developing the art and create the capacity to scale our workshops. We systematized tasks like accounting and venue searches, and began using alumni volunteers as follow-up mentors to supplement our newly-created post-workshop email exercises and online hangouts. These efforts culminated in two new hires—Pete Michaud and Duncan Sabien—and a reorganization of CFAR into two subteams, Core (focused on operations) and Labs (focused on research).
For a complete overview of what we intend to accomplish in 2016, see Ambitions for 2016 below.
Some snapshots from our rationality development
There is the process by which we improve a workshop, and there is the process by which we improve our understanding of how rationality works at its core. The two processes don’t always help one another, but this year they did.
How we got there:
- As it turns out, attempting to create AI risk scientists (as opposed to boosting the scientist-nature of everyday people) put a subtle but very different spin on the teaching of Sequences-style epistemic rationality. It helped that the researchers were themselves trying to model mind-like processes and that they stubbornly insisted on building related models of what the heck we were trying to convey.
- MIRI Summer Fellows was also a project we could just actually see mattered, and there's nothing quite like actual stakes when it comes to creating a sense of drive and purpose, and being willing to update.
- Improving organizational capital created a positive feedback loop. Working to make our workshops “crisp”—to clean up the methods and metaphors that weren’t pulling their weight—helped make more of what we knew more visible.
Here are some brief highlights of the new Art of Rationality that we’re currently seeing:
- One pillar, not three. CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we’ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of “applied rationality” (for instance, a deep understanding of reductionism seems to help with all three). Recently, we’ve gotten better at articulating how this link works. For example:
- Double Crux is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement—the root uncertainty that has the potential to affect both of their beliefs. We introduced this as an epistemic rationality technique, and used it in this way at e.g. EA Global, where people argued about cause prioritization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. (See the next two bullet points.)
- Competence as “deep/internal epistemic rationality.” If I am frequently late to appointments and “don’t want to be,” one can frame this as stemming from an inaccurate anticipation somewhere in my mind—perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful (in our experience) to “internally double crux” the apparent disagreement (i.e., to play the double crux game between two different models within my own head, working until I have both a better model and a better actual outcome). More generally, we are increasingly making headway on “competence” or “instrumental rationality” problems via techniques aimed at integrating accurate beliefs into all parts of one’s psyche.
- Do-gooding and epistemic rationality. “Do-gooding” would seem to be a goal that some have and others don’t, and it would seem odd to try to shift goals by learning epistemic rationality. But it seems to many of us (informally, anecdotally) that there is a kind of “deep epistemic rationality” that doesn’t change one’s goals, but does help one make actual contact with what is at stake in the world, and with the parts of one's psyche that already care about those stakes... and this can sometimes help in practice to build deep, sustainable caring. The idea is again to e.g. notice a part of you that thinks the world matters, and a part of you that is afraid to look in that direction, and help these parts trade model-pieces and update back and forth (double crux, again). For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val’s recent post on grieving.
- Teaching the synthesis. Our pre-2015 workshops were made of techniques, which was like sounding out words a letter at a time (C-A-T…C…Ca…Cat!). After years of trying to use these techniques to point at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we’ve finally found framings and explanations (like this one) that actually bridge the gap. Those framings, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art. (The techniques are now stuffed into the first two days; the synthesis, and the rhythms of using applied rationality in practice, now occupy the second half of the workshop and give people a better sense of the lived feeling of the art. We think.)
This is the beginning of work that we’re poised to expand and improve in the coming year via our new Labs group.
Financial Retrospective for 2015
General overview
Our net cashflow for the year is about $14k positive so far, though without any further revenue we expect to be around $30k negative by the end of December 2015, as most of our large expenses (rent, payroll, etc.) occur at the end of the month. Note that this includes donation revenue from last year’s winter fundraiser.
Our basic monthly operating costs for 2015 have averaged $40k, although the average after September went up to $44k due to changing and slightly expanding our team. This is the number we use to determine burn rate.
$30k of this was payroll in the last quarter, and the rest was split amongst rent and utilities, parking, office supplies, meals, and miscellaneous. Many of these resources are used for in-office events like test sessions, Less Wrong meetups, and rationality training sessions; each staff member has a different and often changing split of percentage time working on operations, curriculum design, teaching, data analysis, etc. That’s why giving a good number for monthly overhead is tricky and unreliable. But to give it a go, it looks like roughly a third of monthly expenses is for organization maintenance.
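As a rough consistency check, here is the arithmetic implied by the figures above (our own back-of-the-envelope sketch; all inputs are the rounded numbers already quoted):

```latex
\begin{align*}
% Projected year-end cashflow: current position minus one more
% month of month-end expenses at the post-September rate
\$14\text{k} - \$44\text{k} &\approx -\$30\text{k}\\
% Non-payroll remainder of monthly costs at the Q4 rate
% (rent, utilities, parking, supplies, meals, miscellaneous)
\$44\text{k} - \$30\text{k} &= \$14\text{k}
\end{align*}
```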
A bit over half of the revenue covering this came from donations. The rest came from net revenue from our standard introductory workshops plus MIRI’s payment for our running MSFP. (More details below.)
Main workshops
Our standard introductory workshops serve several important purposes for us. One of them is that we hope to develop useful products that simultaneously support our mission and also make CFAR less fiscally dependent on donations.
We ran four of these workshops (three in the Bay Area and one in Boston). They varied widely in both cost and revenue due to travel, testing out new venues, changing the number of participants per workshop, and several other factors. All told, ignoring costs of staff time (as that’s factored into the above burn rate), CFAR main workshops took in a total of ~$123k net revenue (i.e., revenue exceeding cost), or an average of ~$31k net revenue per workshop. Compared to last year, this is down ~$107k total, but up ~$6k per workshop (see the rough reconstruction after this list). This is because we chose to run fewer than half as many workshops, so as to focus on:
- Making the workshops more efficient
- Running other programs equally well
- Setting up better systems both for workshops and for research
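For readers following the arithmetic, here is the rough reconstruction of last year's figures that those deltas imply (our back-of-the-envelope sketch; approximate, since the quoted figures are rounded):

```latex
\begin{align*}
% Implied 2014 totals from the year-over-year changes above
\text{2014 total net revenue} &\approx \$123\text{k} + \$107\text{k} = \$230\text{k}\\
\text{2014 net revenue per workshop} &\approx \$31\text{k} - \$6\text{k} = \$25\text{k}\\
% Implied 2014 workshop count, versus 4 workshops in 2015 --
% consistent with "fewer than half as many"
\text{implied 2014 workshop count} &\approx \$230\text{k} / \$25\text{k} \approx 9
\end{align*}
```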
In addition, we’ve continued a trend from last year: we’ve decreased the per-workshop cost in staff time, partly through streamlined curriculum and improved systems and partly through training volunteers to conduct follow-ups, freeing up our core staff to build new programs and spend more time developing advanced rationality theory and instruction. (The volunteer training also does double-duty: the original impetus for doing it was wanting to help alumni benefit from the “learn by teaching” phenomenon, so we are both freeing up staff time and also using this to help deepen alums’ skill with rationality.)
Alumni events
CFAR typically goes into alumni events (workshops and the annual reunion) with the assumption that we’re taking on a cost. We view these as opportunities to explore potentially new areas of rationality and also as ways of encouraging and supporting the CFAR alumni community in their development as rationalists and as a community. It has generally been our policy that we don’t charge for alumni events, but instead we let our alumni know what the per capita cost comes to and ask them to consider donating to compensate.
We track the donations that are in support of these events separately from our standard general donations. As a result, we can pretty clearly see how much each event cost us beyond the associated donations. That is, we can see net cost. In that spirit, here is what we “paid” on net for each of our alumni programs, ignoring staff time:
- For net zero cost (participants covered meals), we ran a one-day workshop out of the CFAR office on applying Sequences-style thinking to one’s daily life and to hard problems like x-risk, as part of our preparation for MSFP.
- For net zero cost (again, participants covered meals), we ran a two-day workshop out of the CFAR office on applying Sequences-style thinking to AI risk analysis, also as part of our preparation for MSFP.
- For net zero cost (participants donated enough to cover venue and meals), we ran a “Hamming” workshop in Boston, to explore what techniques are needed to identify and dive into the most important problems one is currently facing (at work, in one’s personal life, as an altruist, or in whatever other domain).
- For ~$2k, we ran a mentoring workshop out of Tiburon, to train volunteers to help us run large-scale workshops and also to do follow-up conversations with participants to help them benefit from the workshop in the weeks & months afterwards.
- For ~$15k, we ran our annual alumni reunion. This year we had ~130 participants, with presentations and exercises on some angles on rationality that we think are promising. These also seem to be a lot of fun and help to energize the alumni community and keep us in touch with fresh ideas from the community that haven’t yet been put in writing.
- For net zero cost, we have continued to run a weekly “rationality dojo” out of the CFAR office, where alumni work to deepen their skills with rationality and experiment with possible refinements or additions to the art.
Special programs
This year we ran two main summer programs:
- SPARC ran for its fourth year in a row. Cisco and MIRI covered the costs of this program, so the non-time cost to CFAR was nil.
- MIRI hired CFAR to run a three-week intensive Summer Fellows Program (MSFP), aimed at identifying and developing promising math research talent potentially related to AI safety research. MIRI covered the costs of running MSFP and paid CFAR $85k to cover both curriculum development time and time running the program itself.
In addition, an unnamed company hired CFAR to run a small training for them. The net financial effect on CFAR was zero: we charged enough to cover costs, viewing this workshop as an opportunity to continue exploring how CFAR might tailor its material for particular workplaces or specific needs.
Financial Summary
Our financial focus this last year was less on making money now and more on establishing internal infrastructure and strategies for developing solid income going forward.
We’re now in an excellent position to make CFAR much less dependent on donations going forward while simultaneously putting more focused effort on development, testing, and sharing of rationality tools than we’ve been able to do in the past.
This has made 2016 look very promising — but it has also put us in a difficult position right now.
We’re farther behind right now than we were this time last year, and we need some capital to implement the plans we have in mind. Predicting markets is always hard, but we think that with one more financial push this winter, we can both improve our contribution to the development of rationality and also make CFAR largely or maybe even entirely financially self-sustaining in 2016.
Ambitions for 2016
Hitting Scale
CFAR’s mission cashes out when people we’ve equipped to think better and do more are actually in positions where they are changing the future of our world for the better.
With our external brand and our positioning within the community, we are perhaps uniquely well positioned to attract bright people, orient them to the values of systematically truer beliefs and world-scale impact, and then make sure they get into the highest-leverage positions they can fill.
We’ve spent the last three years leveling up our own ability to transmit a skillset and culture that we believe will move the needle in the right direction, and now is the time to execute at scale.
Core and Labs
To make scaling possible and still be able to competently tackle the pedagogical challenges we face, CFAR has arranged itself into two divisions: CFAR Core and CFAR Labs.
Pete Michaud (that’s me!) was hired to manage Core operations, including workshop and curriculum production and logistics. Anna Salamon will take the helm of CFAR Labs, which will be principally responsible for answering the questions:
- What are the highest impact skillsets?
- How can we detect them?
- How can we train them?
- Is our training actually affecting the important dimensions at the high end?
The Plan
Broadly, in order to attract more people, level them up reliably, and make sure they land in the highest impact positions they can, our plan is to:
- Substantially increase workshop volume
- Expand our community and continued training opportunities
- Directly address talent gaps by working with other organizations
- Continue increasing the quality of our instruction
Increase Workshop Volume
We intend to substantially increase the number of intake workshops we run and the number of participants we can serve per workshop.
“Intake workshops” here means workshops for people who haven’t necessarily been exposed to our material or community; said another way, these are workshops that will bring new people into our alumni network.
We are actively seeking a direct sales manager who can not only generate leads but also close workshop sales. An alternative is to hire a two-person marketing and sales team who together can generate leads and place prospects into workshops.
With the help of that new outreach team, we hope to add on the order of 1,000 new alumni in 2016, increasing our total throughput by nearly an order of magnitude.
Handling that new volume of alumni will require increasing attention to streamlining operations, which CFAR Core is handling partially by adding new team members and clarifying roles. In addition to me as the new Managing Director, we’ve already hired Duncan Sabien, an experienced educator and robustly capable operations generalist. Aside from the outreach team already mentioned, we also intend to hire a community manager (see below for details) and office assistant to fill in the inevitable gaps of an organization moving as fast as we intend to.
Community and Continued Training Opportunities
Bringing more talented people into the alumni network is only half the battle. Once participants have gone from “Zero to One,” only a community of practice can help ensure continued growth for most people.
We believe that one of the primary benefits of CFAR training is ongoing participation in the alumni community, both local to the Bay Area and throughout the world in local meetups and online. That’s why we’re going to invest in making the community stronger, with even more alumni events, experimental workshops, and deep-dive classes into specific aspects of our curriculum.
Perhaps the crown jewel of our community program is our Mentorship Training Program (MTP), which began its life as our TA Workshop. We intend to develop that seed into a robust pipeline capable of transforming workshop participants into trained rationality instructors.
One major benefit of the MTP will be that we’ll have more mentors and instructors to handle the increased load of all these workshops, classes, and other events.
But the MTP is a major growth opportunity even for people who aren’t necessarily interested in spreading the art of rationality themselves; we believe from our experience over the past three years that the best way to fully grok the art is to be immersed in a field of peers striving for the same, and ultimately to be able to teach it yourself.
This is what we intend to create with the MTP and new focus on community events.
To plan and manage all these alumni events, we’re looking for a capable community manager.
Directly Addressing Talent Gaps
In addition to our classic workshops and general education alumni programs, we’ll also be attempting to ramp up our targeted workshops meant to fill talent gaps for specific organizations.
For example, we’ll run our second MIRI Summer Fellows Program, as well as a program funded by a grant from the Future of Life Institute to help promising upcoming AI researchers think about AI safety. We’re in conversation with other organizations, and it’s our intention to run an increasing number of these workshops, focused on the thinking skills needed for particular tasks, in order to help fill critical gaps in important organizations on very short time horizons.
If funding permits and our experiments in this area go well, we intend to make these types of workshops more frequent, and perhaps expand on past success with programs like a European SPARC, and possible “summer camp” style events where we try to identify particularly talented high school students for training and recruitment into existential risk research.
Labs: Informal experimentation toward a better "Applied Rationality"
The split between Core and Labs doesn't only allow focus on operations--it also allows our Labs folk to invest in the informal experiments, arguments, data-gathering, etc. that seem, over time, to conduce to a better applied rationality.
(This process is messy. Rationality today is not at the level of Newton. It isn't even at the level of Ptolemy, who, despite the mockability of the nested-epicycles method, could predict the motions of the planets with great precision. Rationality is more at the level of a toddler running around, putting everything in its mouth, and ending up thereby with a more integrated informal world-model by having examined many example-objects through several senses each. Our aim this year in Labs is basically to put many many things in our mouths rapidly, and to argue about models in between, and to especially expose ourselves to people who are working on issues that matter in already-very-competent ways who we can nevertheless try to make better, and to try in this way to get a better sense of the higher-end parts of "rationality".)
Toward this end, Labs is currently:
- Offering one-on-one coaching to quite a few individuals who seem to be contributing to the world in a high-end way; and trying to figure out how they're doing what they're doing, and what pieces may help them contribute more;
- Working toward more robust and explicit models of the underlying mechanisms that create drive, scientific and epistemic skill, and relevant real-world competence (and how to intervene upon them);
- Creating new written rationality sequences meant to expand upon, augment, and improve the original sequences that brought so many people into the culture of being “less wrong,” and oriented them around audacious goals that actually make a difference;
- Planning experimental workshops of varied sorts, aiming to boost people further toward "actually useful skill-levels in applied rationality".
The primary limiting factor in these plans is our ability to attract a truly excellent salesperson or sales team. With sufficient workshop participation, cashflow bottlenecks are broken and we’ll achieve economies of scale that will fundamentally transform our operations.
Failing that recruitment, the next best alternative is to grow organically through the MTP and other community programs. That is a much slower process, but pushes us in the same fundamental direction.
And as always, our plans coming into contact with the reality of 2016 will correctly cause us to update, iterate, and potentially pivot given new evidence and insight.
The path forward, and how you can help
CFAR’s mission is to gather together people with the potential for real and meaningful impact, and to cause them to come closer to meeting that potential. It doesn’t much matter whether you think we’re under a ticking clock of existential risk, or you’re concerned about a million humans dying every week, or you’re simply grumpy that we haven’t gotten a human past low Earth orbit since 1972—our individual and collective thinking skill is a key bottleneck on our future.
Applied rationality, more than almost anything else, has a shot at being a truly all-purpose tool in humanity’s toolkit, and the bigger the problems on the horizon, the more vital that tool becomes.
2016 will be a particularly critical year in CFAR’s history. We’re restructuring our team in pretty major ways, and finding the right team members (or not) will determine our ability to get the right character and culture from this new beginning; and we've had at least three good people in the last eight months who we wanted to hire, and who wanted to work for us, but who required salaries we couldn't afford. Beginnings are far easier times in which to make change, and this is the closest we've come to a fresh beginning -- and the time we've most expected differential impact from marginal donation -- since our inaugural fundraiser of late 2012.
The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field -- it would be well to get relevant training programs going now, and not to wait for some later additional hard-won new beginning for CFAR in 2018 or something. The strategic competence we will have going into the spring is likely to be the difference between a CFAR that actually matters, and one that sounds good but is ultimately irrelevant.
There are at least four major ways to help:
- Donate directly to our winter fundraising drive. This is the most straightforward way to help, and makes a categorical difference in our ability to execute the mission. (A large majority of our funding comes from small donors.)
- If you’re interested in rationality, or in the larger questions of humanity’s future and existential risk, consider reading the Sequences, or otherwise working to improve your thinking and world-modeling skill. (Strong community epistemology is extremely helpful.)
- We’re always looking for new alumni, particularly those who care about both rationality and the world. If you haven’t been, consider applying to a CFAR workshop; and if you have been, consider mentioning it to people who fit said description.
- If you’re interested in joining us for the long haul, we’re currently looking to hire a sales manager, a community manager, and an office assistant (funding permitting). We’ve identified these three roles as the highest-impact additions to the CFAR staff, and are eager to hear from enthusiastic and qualified candidates.
This is the mission; these are the steps. CFAR has made substantial progress on building a talent pipeline for clear thinkers and world changers, in large part thanks to generous contributions of time, money, energy, and insight from people like you. We’d like to see a world where this goal has been achieved, and your support is what gets us there. Thanks for reading; do send us any thoughts; and do please consider donating now.
Comments sorted by top scores.
comment by Academian · 2015-12-19T04:12:34.164Z
Just donated $500 and pledged $6500 more in matching funds (10% of my salary).
↑ comment by AnnaSalamon · 2015-12-21T11:02:24.366Z
Thank you! We appreciate this enormously.
comment by alyssavance · 2015-12-18T03:19:17.768Z
Hey! Thanks for writing all of this up. A few questions, in no particular order:
The CFAR fundraiser page says that CFAR "search[es] through hundreds of hours of potential curricula, and test[s] them on smart, caring, motivated individuals to find the techniques that people actually end up finding useful in the weeks, months and years after our workshops." Could you give a few examples of curricula that worked well, and curricula that worked less well? What kind of testing methodology was used to evaluate the results, and in what ways is that methodology better (or worse) than methods used by academic psychologists?
One can imagine a scale for the effectiveness of training programs. Say, 0 points is a program where you play Minesweeper all day; and 100 points is a program that could take randomly chosen people, and make them as skilled as Einstein, Bismarck, or von Neumann. Where would CFAR rank its workshops on this scale, and how much improvement does CFAR feel like there has been from year to year? Where on this scale would CFAR place other training programs, such as MIT grad school, Landmark Forum, or popular self-help/productivity books like Getting Things Done or How to Win Friends and Influence People? (One could also choose different scale endpoints, if mine are too suboptimal.)
While discussing goals for 2015, you note that "We created a metric for strategic usefulness, solidly hitting the first goal; we started tracking that metric, solidly hitting the second goal." What does the metric for strategic usefulness look like, and how has CFAR's score on the metric changed from 2012 through now? What would a failure scenario (ie. where CFAR did not achieve this goal) have looked like, and how likely do you think that failure scenario was?
CFAR places a lot of emphasis on "epistemic rationality", or the process of discovering truth. What important truths have been discovered by CFAR staff or alumni, which would probably not have been discovered without CFAR, and which were not previously known by any of the staff/alumni (or by popular media outlets)? (If the truths discovered are sensitive, I can post a GPG public key, although I think it would be better to openly publish them if that's practical.)
You say that "As our understanding of the art grew, it became clear to us that “figure out true things”, “be effective”, and “do-gooding” weren’t separate things per se, but aspects of a core thing." Could you be more specific about what this cashes out to in concrete terms; ie. what the world would look like if this were true, and what the world would look like if this were false? How strong is the empirical evidence that we live in the first world, and not the second? Historically, adjusted for things we probably can't change (like eg. IQ and genetics), how strong have the correlations been between truth-seeking people like Einstein, effective people like Deng Xiaoping, and do-gooding people like Norman Borlaug?
How many CFAR alumni have been accepted into Y Combinator, either as part of a for-profit or a non-profit team, after attending a CFAR workshop?
↑ comment by ChristianKl · 2015-12-18T16:28:26.322Z
Where on this scale would CFAR place other training programs, such as MIT grad school, Landmark Forum, or popular self-help/productivity books like Getting Things Done or How to Win Friends and Influence People?
I would suspect that the data about the effectiveness of Landmark that you would need to make such an assessment isn't public. Do you disagree? If so, what would you take as a basis?
↑ comment by taygetea · 2015-12-20T19:37:26.686Z
There are a few people who could respond who are both heavily involved in CFAR and have been to Landmark. I don't think Alyssa was intending for a response to be well-justified data, just an estimate. Which there is enough information for.
comment by folkTheory · 2015-12-24T23:18:56.858Z
Just donated $1100
↑ comment by PeteMichaud · 2015-12-28T22:27:05.220Z
Thank you so much!
comment by lukeprog · 2015-12-20T19:57:54.441Z
Just donated!
↑ comment by AnnaSalamon · 2015-12-21T11:01:55.603Z
Thanks!
comment by philh · 2015-12-24T19:11:43.282Z
Donated £220.
↑ comment by PeteMichaud · 2015-12-28T22:27:21.003Z
Thanks a lot!
comment by AnnaSalamon · 2015-12-20T02:48:29.293Z
I would be extremely glad to talk to anyone about CFAR, the impact of marginal CFAR donations on the world's talent bottlenecks, or any related things. (If you like, we can try the double crux game.) You can book time with me here: http://www.meetme.so/cfar-anna
↑ comment by PeteMichaud · 2015-12-29T00:26:16.170Z
If anyone has any questions they think might best be directed at me (or you just want to connect with the new guy!), I've also made a lot of room in my schedule for connecting with people: http://www.meetme.so/cfar-pete
Looking forward to connecting with many of you!
comment by EStokes · 2015-12-24T17:02:50.481Z
Donated $200.
↑ comment by PeteMichaud · 2015-12-28T22:27:28.793Z
Thank you!
comment by RomeoStevens · 2015-12-19T00:16:25.114Z
Has CFAR considered applying for a grant from the John Templeton Foundation? Fits two of their core funding areas and they've made a grant to FQXi indicating some meme compatibility.
comment by Gleb_Tsipursky · 2015-12-19T17:53:36.376Z
Great progress, and I just donated! As a nonprofit director myself, I am especially happy to see your progress on systematization going forward. That's what will help pave the path to long-term success. Great job!
↑ comment by AnnaSalamon · 2015-12-21T11:01:59.890Z
Thanks!
comment by Qiaochu_Yuan · 2015-12-18T02:18:42.016Z
Thanks for writing this up!
As a participant, I think the claim that MSFP was a resounding success is a little strong. It's not at all clear to me that anyone gained new skills by attending (at least, I don't feel like I did), as distinct from learning about new ideas, using their existing skills, becoming convinced of various positions, and making social connections (which are more than enough to explain the new hires). To me it was an interesting experiment whose results I find hard to evaluate.
↑ comment by So8res · 2015-12-18T02:44:40.765Z
I don't claim that it developed skill and talent in all participants, nor even in the median participant. I do stand by my claim that it appears to have had drastic good effects on a few people though, and that it led directly to MIRI hires, at least one of which would not have happened otherwise :-)
↑ comment by Lumifer · 2015-12-18T15:40:43.704Z
I don't claim that it developed skill and talent in all participants, nor even in the median participant.
And yet you called it "a resounding success". Does that mean that you're focusing on the crème de la crème, the top tier of the participants, while being less concerned with what's happening in lower quantiles?
↑ comment by So8res · 2015-12-18T17:37:15.501Z
Yes, precisely. (Transparency illusion strikes again! I had considered it obvious that the default outcome was "a few people are nudged slightly more towards becoming AI alignment researchers someday", and that the outcome of "actually cause at least one very talented person to become AI alignment researcher who otherwise would not have, over the course of three weeks" was clearly in "resounding success" territory, whereas "turn half the attendees into AI alignment researchers" is in I'll-eat-my-hat territory.)
↑ comment by Paul Crowley (ciphergoth) · 2015-12-19T16:19:39.953Z
For this unusual, MIRI-commissioned workshop, yes.
↑ comment by IlyaShpitser · 2015-12-18T20:33:26.863Z
Is CFAR going to market themselves like this?
[at the workshop]:
"Look to the left of you, now to the right of you, now in 12 other directions. Only one of you will have a strong positive effect from this workshop."
↑ comment by Academian · 2015-12-19T04:09:51.201Z
I would expect not for a paid workshop! Unlike CFAR's core workshops, which are highly polished and get median 9/10 and 10/10 "are you glad you came" ratings, MSFP:
- was free and experimental,
- produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
- produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.
↑ comment by IlyaShpitser · 2015-12-19T17:56:24.434Z
I am not saying it wasn't a worthwhile effort (and I agreed to help look into this data, right?). I am just saying that if your definition of "resounding success" is one that cannot be used to market this workshop in the future, that definition is a little peculiar...
In general, it's hard to find effects of anything in the data.
↑ comment by Ben Pace (Benito) · 2015-12-20T06:56:18.268Z
The value of running a workshop and the things you can use to market a workshop are distinct, and that seems to explain it.
The fact that a workshop is in a lovely venue is a good thing for marketing, and irrelevant to the value of running it. That is not confusing.
↑ comment by IlyaShpitser · 2015-12-20T16:59:26.332Z
Sure, but for example things used to market a charity and effectiveness of charity are distinct.
People worry about "effectiveness." Is that going out the window in this case?
↑ comment by Academian · 2015-12-20T23:05:06.028Z
See Nate's comment above:
http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99
And, FWIW, I would also consider anything that spends less than $100k causing a small number of top-caliber researchers to become full-time AI safety researchers to be extremely "effective".
[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (yay!) that they are unable to hire appropriately skilled people to work full-time on safety.]
↑ comment by alyssavance · 2015-12-18T03:27:27.536Z
That seems a little surprising to me. Even if CFAR weren't involved at all, I'd naively have expected that eg. having people practice formal logic problems from a textbook would cause skill gains in formal logic. Could you talk a bit about what kinds of skills you think MSFP was attempting to teach?
comment by ChristianKl · 2015-12-18T10:11:49.183Z
New written rationality sequences meant to expand upon, augment, and improve the original sequences that brought so many people into the culture of being “less wrong,” and oriented them around audacious goals that actually make a difference;
That's great to hear. I also think that it will increase the number of people who are interested in CFAR.
comment by Jacob Falkovich (Jacobian) · 2015-12-23T18:01:03.006Z
A question about donating:
AFAIK about half of the payment for attending a workshop (~$2,000) is considered a charitable donation, is tax-deductible etc. Would it be possible for me to donate to the winter fundraiser and have the donation amount deducted from my workshop participation fee if and when I choose to attend in the future?
I think it works out great for CFAR to allow this: either you get a pre-committed attendee or a free donation.
↑ comment by PeteMichaud · 2015-12-28T23:33:28.070Z
Yes, I agree, I think the incentives are aligned here.
(Make sure that you note the donation when you apply for the workshop--we will likely notice without you saying anything, but depending on a couple factors, our system may not automatically connect you-the-donor to you-the-participant.)
comment by gjm · 2015-12-20T22:13:58.450Z
I find this bit quite alarming:
We hit some of our concrete goals for 2015, and pivoted away from others.
We created a metric for strategic usefulness, hitting the first goal; we started tracking that metric, hitting the second goal.
We chose to change focus from boosting alumni scores on these components, however. [...] Focusing on boosting those components no longer made sense, and we transitioned away from that target.
because it seems to me to amount to this: "Our goals for the year included putting in place metrics by which we could tell whether we were actually achieving what we want. So we did that. And then we decided we didn't want to track those, so we threw them away again."
... which is, to be sure, a reasonable course of action if you discover that you were measuring the wrong thing -- but is also exactly what you'd see if CFAR had found (or guessed) that it wasn't making progress according to those metrics, and didn't want that fact to be too visible.
Combined with an apparent shift from "hold workshops that enhance people's instrumental rationality" in the direction of "hold workshops that funnel people into MIRI", and the discovery that real rationality apparently necessarily involves "deep caring" ... I dunno, maybe it's all absolutely fine, but it looks just a little too much like a transition from "rationality enhancer" to "cult recruitment vehicle".
↑ comment by AnnaSalamon · 2015-12-21T08:51:25.084Z
Sorry. Original phrasing around how we were now going to measure was pretty bad, I agree. I just edited it. I had been bothered by the very text you quoted, and we had an internal thread where we all discussed that and agreed that the phrases were wrong... but we were slow about that, and you commented while we were discussing! The new text more closely reflects the actual structure of how we've been thinking about it all.
It's a bit tricky to publish a long post with many co-editors without letting something inaccurate through (at least in a sleep-deprived marathon like we very rationally used before publishing this one...; there were a bunch of us working collaboratively on the text...); but we should probably in fact have edited a bit more before posting; anyhow, my apologies for editing this text on you after you commented.
comment by Squark · 2015-12-18T19:31:00.923Z
Thank you for writing this. Several questions.
How do you see CFAR in the long term? Are workshops going to remain in the center? Are you planning some entirely new approaches to promoting rationality?
How much do you plan to upscale? Are the workshops intended to produce a rationality elite or eventually become more of a mass phenomenon?
It seems possible that revolutionizing the school system would have a much higher impact on rationality than providing workshops for adults. SPARC might be one step in this direction. What are your thoughts / plans regarding this approach?
comment by blob · 2016-01-04T06:15:23.035Z
Here's a concrete anecdote related to the "Do-gooding and epistemic rationality" part.
One of the key benefits I got from the workshop I attended in 2014 was clearer perception and acceptance of my goals.
"I don't know what's important to me beyond myself, family, friends" and "It doesn't seem like I really care about the world" (donating to EA charities seemed like a should) got changed. I do care, and already did before the workshop. It seems like the goals hadn't propagated fully, I hadn't accepted them - possibly because of the scope, the stakes and the implications of taking them seriously.
I have a clear memory of this shift happening because the question "Given these goals, is what you're currently doing correct?" popped up for real the first time. It was great to be able to talk about it directly.
comment by Lumifer · 2015-12-18T15:49:11.395Z
Couple of notes...
We created a metric for strategic usefulness
What is that metric?
But it seems to many of us that there is a kind of “deep epistemic rationality” that doesn’t change one’s goals, but does help one make actual contact with the deep caring that already exists within a person.
I think this is a dangerous path to take. If you stay on it, I suspect that soon enough you'll come to the conclusion that absence of appropriate "caring" is irrational and should be fixed. And from there it's only a short jump and a hop to declaring that just those people who share your value system are rational. That would be an... unfortunate position for you to find yourselves in.
↑ comment by taygetea · 2015-12-20T19:40:44.795Z
I could very well be in the grip of the same problem (and I'd think the same if I was), but it looks like CFAR's methods are antifragile to this sort of failure. Especially considering the metaethical generality and well-executed distancing from LW in CFAR's content.
↑ comment by Lumifer · 2015-12-20T22:05:47.392Z
CFAR's methods are antifragile
What does that mean?
↑ comment by FeepingCreature · 2015-12-25T00:55:13.243Z
Basically, systems that can improve from damage.
↑ comment by ChristianKl · 2015-12-25T19:39:07.346Z
Basically, systems that can improve from damage.
The question isn't about what the word means in general but in what way CFAR's methods are supposedly antifragile.
↑ comment by MalcolmOcean (malcolmocean) · 2015-12-21T11:15:25.952Z
We created a metric for strategic usefulness
What is that metric?
It wouldn't surprise me if they didn't want to publish it because some of the aspects of the measure might be gameable, allowing people to pretend to be super useful by guessing the teacher's password.
↑ comment by ChristianKl · 2015-12-21T12:18:48.456Z
Given that they use it to justify the claim that CFAR made progress in the last year, it seems that the relevant people already know the metric.
↑ comment by ChristianKl · 2015-12-20T21:02:40.213Z
If I understand the position correctly, it's that people who don't care about what they are working on won't work effectively and will procrastinate.
In HPMOR, Harry beats Voldemort because he has access to the superpower of caring and having something to protect.
I don't think that there's a push to declare people who feel that they have the "wrong" things as something to protect irrational. Even if there were such a push, the goal of research in rationality isn't to label people as rational or irrational.
↑ comment by Lumifer · 2015-12-20T22:07:53.554Z
If I understand the position correctly, it's that people who don't care about what they are working on won't work effectively and will procrastinate.
No, I don't think so. I think the "deep caring" CFAR talks about is only a particular kind of caring.
Let's get some people who very deeply care about money and large diamonds and arriving to parties in Monaco on their own private jet (or a superyacht, at least). They do care. Just not about the right thing.
↑ comment by ChristianKl · 2015-12-21T11:01:06.720Z
Let's get some people who very deeply care about money and large diamonds and arriving to parties in Monaco on their own private jet (or a superyacht, at least). They do care.
Recently I had a conversation with a person from the LW sphere who felt a bit empty. At the end of that conversation, the solution was that the person committed to making a plan to increase their professional skills and earn more money in the future.
I think that kind of caring is completely fine and I wouldn't expect anybody in CFAR to object to that solution.
As for the people who go to Monaco in their private jets, they do care. With the "rationality is winning" frame, you would also call people who earn enough money to have a private jet rational.
A person who doesn't care deeply about money won't work 80 hours per week at an investment bank.
there is a kind of “deep epistemic rationality” that doesn’t change one’s goals, but does help one make actual contact with the deep caring that already exists within a person
That's not about saying that people's goals are wrong, but about actually getting them to work towards their goals instead of suffering from akrasia.
comment by AnnaSalamon · 2015-12-21T08:52:43.782Z
We revised the text some after posting; apologies to anyone who replied to original text that has now been changed.
comment by MaximumLiberty · 2015-12-23T23:42:22.302Z
Can you explain more about your Mentorship Training Program?
↑ comment by PeteMichaud · 2015-12-28T23:34:12.990Z
Sure, I'd be happy to--I can share a summary of the plan and what we hope to achieve with it, but before I do that, are there specific questions you'd like answered about it?
↑ comment by MaximumLiberty · 2015-12-30T16:43:07.985Z
I doubt I know enough to ask good questions. The article has a very bare-bones reference to it, so here are some basic questions:
- What is the high level objective?
- Describe the training from the outside: when, where, who, how much?
- Describe the training from the inside: what gets taught, what gets learned?
- What role do you expect mentors to play?
- How do you support the mentors in playing that role?
↑ comment by ThoughtSpeed · 2016-08-21T08:11:58.030Z
Did this ever get answered?
comment by [deleted] · 2015-12-24T23:05:29.903Z
To make scaling possible and still be able to competently tackle the pedagogical challenges we face, CFAR has arranged itself into two divisions: CFAR Core and CFAR Labs.
Simon Wardley has made an excellent case that a three-tiered structure is better than two tiers; it may be worth looking into his logic: http://blog.gardeviance.org/2015/04/the-only-structure-youll-ever-need.html
↑ comment by chaosmage · 2016-01-15T15:08:57.887Z
The (IMHO) relevant bit from that link:
What we realised back then is we needed brilliant people in all three areas. We needed three cultures and three groups and each one has to excel at what it does.
Pioneers are brilliant people. They are able to explore never before discovered concepts, the uncharted land. They show you wonder but they fail a lot. Half the time the thing doesn't work properly. You wouldn't trust what they build. They create 'crazy' ideas. Their type of innovation is what we call core research. They make future success possible. Most of the time we look at them and go "what?", "I don't understand?" and "is that magic?". In the past, we often burnt them at the stake. They built the first ever electric source (the Parthian Battery, 400AD) and the first ever digital computer (Z3, 1943).
Settlers are brilliant people. They can turn the half baked thing into something useful for a larger audience. They build trust. They build understanding. They learn and refine the concept. They make the possible future actually happen. They turn the prototype into a product, make it manufacturable, listen to customers and turn it profitable. Their innovation is what we tend to think of as applied research and differentiation. They built the first ever computer products (e.g. IBM 650 and onwards), the first generators (Hippolyte Pixii, Siemens Generators).
Town Planners are brilliant people. They are able to take something and industrialise it taking advantage of economies of scale. They build the platforms of the future and this requires immense skill. You trust what they build. They find ways to make things faster, better, smaller, more efficient, more economic and good enough. They build the services that pioneers build upon. Their type of innovation is industrial research. They take something that exists and turn it into a commodity or a utility (e.g. with Electricity, then Edison, Tesla and Westinghouse). They are the industrial giants we depend upon.
comment by oge · 2015-12-20T16:36:52.092Z
Hi Pete, could you please give some examples of what you mean by "the world’s most important problems"?
I don't have money to give now, but perhaps I could just work on a problem directly.
↑ comment by ChristianKl · 2015-12-20T20:44:41.305Z
GiveWell's list of causes might give you some idea of causes considered to be important: http://www.givewell.org/labs/causes
80000hours has a good list of various causes for which talent can be useful at https://80000hours.org/2015/11/why-you-should-focus-more-on-talent-gaps-not-funding-gaps/
↑ comment by oge · 2015-12-21T00:51:32.499Z
Hi ChristianKl, I was trying to find out from Pete what winning would look like for the specific problems CFAR has in mind.
The causes in your links are very diverse, from biosecurity to AI risk. I'd assumed that CFAR focused only on a couple of the most pressing problems, but I haven't heard officially what problems CFAR wants to solve the most.
↑ comment by ChristianKl · 2015-12-21T09:42:30.050Z
As far as I understand, CFAR doesn't focus on individual problems but on building the art of rationality. On the other hand, GiveWell and 80000hours do focus on which areas make sense for people to invest effort in.
The causes in your links are very diverse, from biosecurity to AI risk.
CFAR did focus specifically on AI risk with the summer program it ran for MIRI, but that in no way implies that biosafety isn't important. Biosafety was multiple times rated the higher-probability risk in the LW census, and there was also a poll at the Singularity Summit a while ago that came to the same conclusion.
I would be surprised if CFAR doesn't consider both of those pressing issues.