CFAR 2017 Retrospective

post by Valentine · 2017-12-19T19:38:35.516Z

As we approach the end of the year, we’d like to offer everyone a summary of what CFAR has been up to in 2017.

Sprint & Instructor Training

The first half of the year focused on improving and stress-testing our main intro workshops. We ran a “sprint” of five workshops from February through May. This had two main aims: to iterate rapidly on the workshop itself, and to train a new crop of instructors.

The crop of new instructors then ran a “leaving the nest” workshop in Seattle in June. They ran everything: outreach, logistics, curriculum design, teaching, operations, and anything else that came up. Anna Salamon and I attended as participants, both because we wanted to see how the group ran the workshop and because, at that point, we were the only people at CFAR who hadn’t experienced the workshop as participants. We can now honestly (if a little tongue-in-cheek) say that CFAR is staffed entirely by alumni!

Overall we were pretty pleased with the results. We learned a lot from the sprint, and we now have a crew of alumni instructors we can call on to lend both support and fresh perspective at future workshops.

(Pete Michaud, our Executive Director, gives a more detailed breakdown of what we did during the first half of the year in a pair of CFAR newsletters. You can find the second one here, which links to the first.)

Curriculum Development

Shortly after the Seattle workshop, CFAR and some of the alumni instructors spent a few days at a “curriculum reforging”. The goal was to refactor the curriculum and possibly rebuild the main CFAR workshop content from the ground up. We were responding to an opportunity rather than a problem: we could tell we had more insights and promising ideas than we’d managed to fit in the main workshops, so we wanted to see if we could do better.

Very little about the main workshops changed as a result, but we did start running a “Tier II” workshop. The idea was to pick up where the intro workshop ended, working with alumni attendees to both express and refine new approaches to applying rationality. At this point we’ve run Tier II twice: once in mid-July and once in early December. It’s clearly still a young workshop, and it hasn’t quite come together in a way that feels as reliably coherent as the intro workshop; we’re still working out how to both clearly set and effectively meet participants’ expectations. But I found both runs fun, and participants seem to be getting value from them in a way that reminds me of CFAR’s 2013 workshops. I expect we’ll turn this into something good with time.

In early November we also held a several-day workshop with a few researchers from the Machine Intelligence Research Institute (MIRI). Several people on both sides had their own reasons for thinking this was a good idea; I’ll just offer mine here. MIRI and CFAR are both trying to understand what kind of thing rationality should be and how it works when applied in the real world. CFAR is doing this by looking at how people make decisions and encounter challenges, and watching what happens when we offer them new thinking tools. MIRI is approaching the question from the mathematical side, starting from formal models of agents and working out what kinds of designs or approaches can result in a superintelligent agent being fully competent while also preserving human values. We thought the two groups might be able to teach each other something.

I have little sense of how much MIRI gained from the interaction, but CFAR did develop some questions and promising insights. For instance, Duncan Sabien (our Curriculum Director) noticed that several of CFAR’s techniques were meant to address a kind of “control problem” analogous to a challenge in MIRI’s research agenda: how much, and in what ways, you might want to constrain a better-informed agent to do what you want. For MIRI, this involves reasoning about how much to let a superintelligent AI do when it disagrees with its creators about what should be done. For CFAR, this comes up most often when looking at whether or not to keep commitments we’ve made to ourselves. This and other threads fed into the curriculum for the December Tier II workshop.

Metrics

A big push this year has been refining metrics to evaluate CFAR’s impact and producing write-ups of some of our findings. I won’t go into that in detail; our research analyst, Dan Keys, has posted a more thorough description here.

Other Programs & Projects

Our time from August through October spanned a much wider range of programs and projects.

In early August I ran a three-day “Pre-EAG” workshop in the days right before Effective Altruism Global (EAG). We wanted to offer a few EAs conversational and thinking tools they might find helpful at the conference. It was much lighter in content than our usual intro workshops and had a smaller attendee list (14 people). We didn’t gather much follow-up data, since we were mostly focusing our metrics-gathering on the intro workshops, so it’s hard to say with much objectivity what impact we had on the attendees. Several of them did report having a “cohort” feeling at the conference: going in already knowing a few people, and knowing that they’d help one another with networking, made their experience at EAG more fun and engaging.

As has happened every year since 2012, CFAR helped run the Summer Program on Applied Rationality and Cognition (SPARC). Last year SPARC inspired the Oxford-based Euro-SPARC, which had a broadly similar structure and target demographic to the California-based SPARC but slightly more CFAR-like content. (SPARC has two rough clusters of classes: math-like classes and CFAR-like classes.) Since the European team and the American SPARC team have little overlap, Euro-SPARC was rebranded as ESPR and run in London this past August.

We also ran our annual CFAR Alumni Reunion in mid-August. This was our second year at Westminster Woods, and aside from the Wi-Fi being shaky we were pleased with the venue. I always love seeing a large crew of alumni coming together to share thoughts and play games. I think it does something enriching for the alumni community to have this kind of touchstone. I also had a lot of fun, from talking about literary analysis and living meaningfully to fencing in the large field outside.

We focused on the Artificial Intelligence Summer Fellows Program (AISFP) during pretty much the whole month of September. We’d run a similar program at MIRI’s request in 2015 and 2016. This year MIRI didn’t need us to run the program for them, but we still wanted to apply and refine our skills at helping people tackle AI risk effectively. We also keep finding that we get promising avenues of inquiry when we work with mathematically- and programming-oriented people. It’s a bit early to say what the impact of AISFP was; metrics results are still incoming, and I personally was only involved with a few small parts of it.

We ran three more standard workshops in October, the first two in Prague and the third at our usual venue in California. The Prague EA group handled the venue search, logistics, and operations for us in the Czech Republic, and I found working with them incredibly smooth. Normally far-away workshops are a challenge for us, but I (and, I think, the rest of the CFAR team who were there) had a really easy time of it. My thanks go out to the Prague EAs for opening such an easy pathway for CFAR to run two effective workshops in Central Europe!

We closed the year out with a festive open house held at our office in Berkeley. This was a casual event where senior CFAR staff (me, Duncan Sabien, Pete Michaud, and Anna Salamon) held a Q&A about CFAR.

---

Afternote from Pete Michaud, CFAR Executive Director:

CFAR had the fullest year we've ever had, running a major program on average more than once every three weeks for the whole year, plus smaller alumni workshops and events throughout. As a result we increased throughput by about 65%, from 167 new alumni in 2016 to 275 new alumni in 2017.

Coming into this year we were hopeful that we could streamline operations and expand our scale. It's clear that we have.

To find out about the new bottlenecks and opportunities for 2018, and how you might help with them, read more about CFAR's 2017 Winter Fundraiser.

5 comments

comment by AnnaSalamon · 2017-12-19T20:59:34.864Z

I continue to think CFAR is among the best places to donate re: turning money into existential risk reduction (including this year -- basically because our good done seems almost linear in the number of free-to-participants programs we can run (because those can target high-impact AI stuff), and because the number of free-to-participants programs we can run is more or less linear in donations within the range in which donations might plausibly take us). If anyone wants my take on how this works, or on our last year or our upcoming year or anything like that, I'd be glad to talk: anna at rationality dot org.

comment by ChristianKl · 2017-12-20T11:52:50.487Z

"This was a casual event where senior CFAR staff (me, Duncan Sabien, Pete Michaud, and Anna Salamon) held a Q&A about CFAR."

I do understand why you don't put your normal workshops on YouTube, but when you do an event with a Q&A format, I think there would be a lot of value to be gained by publishing recordings for relatively little cost.

comment by ChristianKl · 2017-12-20T11:51:24.079Z

I found the post about the 2018 budget surprising on two fronts.

1) I wouldn't have thought that CFAR would buy a venue, given what I know about CFAR, but I think it's a very good decision, especially given the presented economics. In addition to what's already written, having your own venue means that when it isn't being used for official CFAR purposes it can be rented cheaply to other people running events for rationalists. Especially in California, where real estate is expensive, I would expect that capability to be very valuable for the broader movement.

2) The decision to run fewer mainline workshops feels strange to me. I would expect it to be good for CFAR to sell as many mainline workshops as it can while it can get participants who pay $4,000 for them.

When I try to put my intuitions into words, I think the biggest reason is that I believe scaling up general rationality education is very valuable, even if the impact per participant is higher with workshops for AI safety.

A general expectation for startups is that most of the value that gets created won't be created next year but years down the line. The road of running workshops for money seems to scale much better than the revenue plan that depends on donations.

In the document about your impact measurement, you don't list under limitations the fact that self-reports are notoriously unreliable. I think there's a good chance you put too much stock in the numbers.

Replies from: Raemon
comment by Raemon · 2017-12-21T17:15:57.242Z

"The road of running workshops for money seems to scale much better than the revenue plan that depends on donations."

I used to think this. What I think now is that being forced to do things that are profit-driven would inevitably warp CFAR into doing things that are less valuable, in sort of the way the publish-or-perish dynamic warps scientists' incentives. If CFAR focused on "self-sustain via workshops", it would face all the pressures of the self-help industry, which push towards marketing and towards finding rich clients (e.g. corporate seminars). That pushes towards finding people with money rather than the most promising people to teach rationality to.

I do think most of CFAR's value is in... well, medium-term payoffs (my AI timelines are short, which changes how long I think it's reasonable for payoffs to take to pay out). But rather than the value coming from scaling up and becoming sustainable (which could easily become a lost purpose), I think the value comes from actually figuring out how to teach the most important rationality techniques and frameworks to the people who need them most.

This is easier to do if they are financially independent.

Replies from: ChristianKl
comment by ChristianKl · 2017-12-22T00:48:11.504Z

I thought that the mission of CFAR was to teach these skills to as many people as possible, not only to a select few.

If you have AGI timelines of 10-20 years, I could understand moving all available resources to AGI, but are you really operating under such short timelines? If that's the case, I would like to see more presentation of those timelines in written form on LW. If those timelines drive the strategy, I see no reason not to have the arguments out in the open.

As far as financial independence goes, needing to raise money from donations is likely to limit the amount of money that's raised. Full independence comes from having a lot of money.