Why CFAR?

post by AnnaSalamon · 2013-12-28T23:25:10.296Z · LW · GW · Legacy · 117 comments

Contents

  Our long-term goal
  Our plan, and our progress to date
    Curriculum design
      Progress to date
      Next steps
    Forging community
      Progress to date
      Next steps
  Financials
    Expenses
    Revenue
    Donations
    Savings and debt
    Summary
  How you can help
    Footnotes

Summary:  We outline the case for CFAR: our long-term goal, our plan and progress to date, our financials, and how you can help.

CFAR is in the middle of our annual matching fundraiser right now.  If you've been thinking of donating to CFAR, now is the best time you'll have for probably at least half a year.  Donations up to $150,000 will be matched until January 31st; and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate unless matched.[1]

Our workshops are cash-flow positive, and subsidize our basic operations (you are not subsidizing workshop attendees).  But we can't yet run workshops often enough to fully cover our core operations.  We also need to do more formal experiments, and we want to create free and low-cost curriculum with far broader reach than the current workshops.  Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).[2]

Our long-term goal

CFAR's long-term goal is to create people who can and will solve important problems -- whatever the important problems turn out to be.[3]  

We therefore aim to create a community with three key properties:

  1. Competence -- The ability to get things done in the real world.  For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.
  2. Epistemic rationality -- The ability to form relatively accurate beliefs.  Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect. 
  3. Do-gooding -- A desire to make the world better for all its people; the tendency to jump in and start/assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.  
Why competence, epistemic rationality, and do-gooding?

To change the world, we'll need to be able to take effective action (competence).  We'll need to be able to form a good implicit and explicit understanding of the human world and how to shift it. We'll need to have the best shot we can get at modeling situations yet unseen.  We'll need to solve problems outside the realms where competent business people already find traction (all of which require competence plus epistemic rationality). And we'll need to blend these abilities with a burning ambition to leave the world far better than we found it (competence plus epistemic rationality plus do-gooding).

And we'll need a community, not just a set of individuals.  It is hard for an isolated individual to figure out what the most important problems are, let alone how to effectively solve them.  This is still harder for individuals who have interesting day jobs, and who are busy amassing real-world competence of varied sorts.  Communities can assemble a complex world-model piece by piece.  Communities can build and sustain motivation, as well, and facilitate the practice and transfer of useful skills.  The aim is thus to create a community that, collectively, can figure out what needs doing and can then do it -- even when this requires multiple simultaneous competencies (e.g., locating a particular existential risk, and having good scientific connections, and knowing good folks in policy, and knowing how to do good technical research).

We intend to build that sort of community.

Our plan, and our progress to date

How can we create a community with high levels of competence, epistemic rationality, and do-gooding?  By creating curricula that teach (or enhance) these properties; by seeding the community with diverse competencies and diverse perspectives on how to do good; and by linking people together into the right kind of community.


We've now had two years to execute on this vision.[4]  It's not a lot of time, but it's enough to get started; and it's enough that folks should already be able to update as to our ability to execute.

Here's our current working plan, the progress we've made so far, and the pieces we still need to hit.

Curriculum design

In October 2012, we had no money and little visible means of obtaining more.[5] We needed runway; and we needed a way to use that runway to rapidly iterate curriculum.  

We therefore focused our initial efforts on making a workshop that could pay its own bills, and at the same time give us data -- a workshop that would give us the opportunity to run (and learn from) many further workshops.  Our applied rationality workshops have filled this role.

Progress to date

Reported benefits
After about a dozen workshops (and over 100 classes that we’ve designed and tested), we’ve settled on a workshop model that runs smoothly, and seems to provide value to our participants, who give a mean rating of 9.3 out of 10 in response to the question “Are you glad you came?”. In the process we’ve substantially improved our skill at curriculum design: it used to take us about 40 hours to design a unit we regarded as decent (design; test on volunteers; redesign; test again; etc.). It now takes us about 8 hours to design a unit of the same quality.[6]

Anecdotally, we have many, many stories from alumni about how our workshop increased their competence (both generally and for altruistic ends). For example, alum Ben Toner, CEO of Draftable, recounts that after the July 2012 workshop, “At work, I realized I wasn’t doing anywhere near enough planning. My employees were spending time on the wrong things because I hadn’t planned things out in enough detail to make it clear what was the most important thing to do next. I fixed this immediately after the camp.” Alum Ben Kuhn has described how the CFAR workshop helped his effective altruism group “vastly increase our campus presence--everything from making uncomfortable cold calls to powering through bureaucracy, and from running complex events to quickly updating on feedback.” (Check out our testimonials page for more examples.)  

Measurement
Anecdata notwithstanding, the jury is still out regarding the workshops' usefulness to those who come.  During the very first minicamps (the current workshops are agreed to be better) we randomized admission of 15 applicants, with 17 controls.  Our study was low-powered, and effects on e.g. income would have needed to be very large for us to expect to detect them.  Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later.  We detected statistically significant positive impacts on the standard (BFI-10) survey pair for emotional stability, "I see myself as someone who is relaxed, handles stress well" / "I get nervous easily" (p=.002).  Also significant were effects on an abridged General Self-Efficacy Scale (sample item: "I can solve most problems if I invest the necessary effort") (p=.007).  The details will be available soon on our blog (including a much larger number of negative results).  We'll run another RCT soon, funding permitting.
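For a sense of just how large an effect a study this size can detect, here is a rough two-sample power calculation (a sketch under illustrative assumptions -- the Cohen's d benchmarks and alpha = .05 are ours, not CFAR's actual analysis):

```python
# Approximate power of a two-sample t-test with 15 treated participants
# and 17 controls, at conventional benchmark effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):  # Cohen's d: "small", "medium", "large"
    power = analysis.power(effect_size=d, nobs1=15, ratio=17 / 15, alpha=0.05)
    print(f"d = {d}: power ~ {power:.2f}")

# Roughly: d = 0.2 -> 0.08, d = 0.5 -> 0.27, d = 0.8 -> 0.58.
# Even a conventionally "large" effect would be missed about 40% of the
# time, which is why only very large effects (e.g., on income) could have
# been expected to show up.
```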

Like many participants, we at CFAR have the subjective impression that the workshops boost strategicness; and, like most who have observed two workshops, we have the impression that today's workshops are much better than those in the initial RCT.  We'll need to find ways to actually test those impressions, and to create stronger feedback loops from measurement into curriculum development.

Epistemic rationality curricula
After a rocky start, our epistemic rationality curriculum has seen a number of recent victories.  Our “Building Bayesian Habits” class began performing much better after we figured out how to help people notice their intuitive, “System 1” expectations of probabilities.[7]  Our “inner simulator” class conveys the distinction between profession and anticipation while aiming at immediate, practical benefits; it isn't about religion and politics, it's about whether your mother will actually enjoy the potted plant you’re thinking of giving her.  More generally, the epistemic rationality curriculum appears to be integrating deeply with the competence curriculum, and appears to be becoming more appealing to participants as it does so.  Strengthening this curriculum, and building in real tests of its efficacy, will be a major focus in 2014.

Integrating with academic research
We made preliminary efforts in this direction -- for example, by taking standard questionnaires from the academic literature, including Stanovich's indicators of the traits he calls “rationality”, and administering them to attendees at a Less Wrong meetup.  (We found that meetup attendees scored near the ceiling, so we'll probably need new questionnaires with better discrimination.)  Our research fellow, Dan Keys (whose master's thesis was on heuristics and biases), spends a majority of his time keeping up with the literature and integrating it with CFAR workshops, as well as designing tests for our ongoing forays into randomized controlled trials.  We're particularly excited by Tetlock's Good Judgment Project, and we'll be piggybacking on it a bit to see if we can get decent ratings.

Accessibility
Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality -- such as a local politician, a police officer, a Spanish teacher, and others -- are by and large quite happy with the workshop and feel it is valuable.

Nevertheless, the total set of people who can travel to a 4.5-day immersive workshop, and who can spend $3900 to do so, is limited.  We want to eventually give a substantial skill-boost in a less expensive, more accessible format; we are slowly bootstrapping toward this.  

Specifically:
  • Shorter workshops:  We’re working on shorter versions of our workshops (including three-hour and one-day courses) that can be given to larger sets of people at lower cost. 
  • College courses:  We helped develop a course on rational thinking for UC Berkeley undergraduates, in partnership with Nobel laureate Saul Perlmutter.  We also brought several high school and university instructors to our workshop, to help seed early experimentation in their curricula.
  • Increasing visibility: We’ve been working on increasing our visibility among the general public, with alumni James Miller and Tim Czech both working on non-fiction books that feature CFAR, and several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal.

Next steps

In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.  

Forging community

The most powerful interventions are not one-off experiences; rather, they are the start of an ongoing practice.  Changing one's social environment is one of the highest impact ways to create personal change.  Alum Paul Crowley writes that “The most valuable lasting thing I got out of attending, I think, is a renewed determination to continually up my game. A big part of that is that the minicamp creates a lasting community of fellow alumni who are also trying for the biggest bite of increased utility they can get, and that’s no accident.” 

The goal is to create a community that is directly helpful for its members, and that simultaneously improves its members' impact on the world.

Progress to date

A strong set of seed alumni
We have roughly 350 alumni so far, including scientists from MIT and Berkeley, college students, engineers from Google and Facebook, founders of Y Combinator startups, teachers, professional writers, and the exceptionally gifted high-school students who participated in SPARC 2012 and 2013. (Not counted in that tally are the 50-some attendees of the 2013 Effective Altruism Summit, for whom we ran a free, abridged version of our workshop.)

Alumni contact/community
There is an active alumni Google group, which gets daily traffic. Alumni use it to share useful life hacks they’ve discovered, help each other troubleshoot, and notify each other of upcoming events and opportunities. We’ve also been using our post-workshop parties as reunions for nearby alumni (in the San Francisco Bay Area, the New York City area, and -- in two months -- Melbourne, Australia).

In large part thanks to our alumni forum and the post-workshop party networking, there have already been numerous cases of alumni helping each other find jobs and collaborating on startups or other projects.  Several alumni have also been recruited to do-gooding projects (e.g., MIRI and Leverage Research have engaged multiple alumni), and others have improved their “earn to give” ability or shifted their own do-gooding strategies.

Many alumni also take CFAR skills back to Less Wrong meet-ups or other local communities (for example, the effective-altruism meetup in Melbourne, a homeless youth shelter in Oregon, and a self-improvement group in NYC); many have also practiced them in their start-ups and with co-workers (for example, at Beeminder, MetaMed, and Aquahug).

Do-gooding diversity
We’d like the alumni community to have an accurate picture of how to effectively improve the world, and we don’t want to try to figure that out all from scratch.  A number of groups have already done a lot of good thinking on the subject, including some who call themselves "effective altruists", but also people who call themselves "social entrepreneurs", "x-risk minimizers", and "philanthropic foundations".

We aim to bring in the best thinkers and doers from all of these groups to seed the community with diverse good ideas on the subject.  The goal is to create a culture rich enough that the alumni, as a community, can overcome any errors in CFAR’s founders’ perspectives.  The goal is also to create a community that is defined by its pursuit of true beliefs, and that is not defined by any particular preconceptions as to what those beliefs are.

We use applicants’ inclination to do good as a major criterion for financial aid. Recipients of our informally-dubbed “altruism scholarships” have included members of the Future of Humanity Institute, CEA, Giving What We Can, MIRI, and Leverage Research.  They also include many college or graduate students who have no official EA affiliation, but who are passionate about devoting their careers to world-saving (and who hope the workshops can help them figure out how to do so).  And they include folks working full-time on varied do-gooding projects of broader origin, such as social entrepreneurs, someone working on community policing, and folks working at a major philanthropic foundation.

International outreach
We'll be running our first international workshop in Australia, in February 2014, thanks to alumni Matt and Andrew Fallshaw.  

Also, starting in 2014, we'll be bringing about 20 Estonian math and science award-winners per year to CFAR workshops, thanks to a 5-year pledge from Jaan Tallinn to sponsor workshop spots for leading students from his home country.  Estonia is an EU member country with a population of about 1.3 million and a high-technology economy, and going forward this may be our first opportunity to check whether there are network effects when a relatively large fraction of a single stratum attends.

Next steps

Over 2014, a major focus will be improving opportunities for ongoing alumni involvement.  If funding allows, we’ll also try our hand at pilot activities for meet-ups.

Specific plans include:
  • A two-day "Epistemic Rationality and EA" mini-workshop in January, targeted at alumni;
  • An alumni reunion this summer (a multi-day event drawing folks from our entire worldwide alumni community, unlike the alumni parties at each workshop); and
  • An alumni directory, as an attempt to increase business and philanthropic partnerships among alumni.

Financials

Expenses

Our fixed expenses come to about $40k per month. In some detail:
  • About $7k for our office space
  • About $3k for miscellaneous expenses
  • About $30k for salary & wages, going forward
    • We have five full-time people on salary, each getting $3.5k per month gross. The employer portion of taxes adds roughly an additional $1k/month per employee.
    • The remaining $7k or so goes to hourly employees and contractors.  We have two roughly full-time hourly employees, and a few contractors who do website adjustment and maintenance, workbook compilation for a workshop, and similarly targeted tasks.
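As a back-of-the-envelope check, the rounded figures above add up (a quick sketch using only the numbers quoted in this post):

```python
# Sanity-check the rounded monthly budget figures quoted above.
office, misc = 7_000, 3_000
per_employee = 3_500 + 1_000             # gross salary + employer payroll taxes
salaried = 5 * per_employee              # five full-time salaried staff
hourly_and_contract = 30_000 - salaried  # remainder of the salary & wages line

print(salaried)                          # 22500
print(hourly_and_contract)               # 7500  (the "$7k or so" above)
print(office + misc + 30_000)            # 40000 (the ~$40k/month fixed total)
```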

In addition to our fixed expenses, we chose to run SPARC 2013, even though it would cause us to run out of money right around the end-of-year fundraising drive. We did so because we judged SPARC to be potentially very important[8], enough to justify the risk of leaning on this winter fundraiser to continue. All told, SPARC cost approximately $50k in direct costs (not counting staff time).

(We also chose to e.g. teach at the EA Summit, do rationality research, put some effort into curricula that can be delivered cheaply to a larger crowd, etc.  These did not incur much direct expense, but did require staff time which could otherwise have been directed towards revenue-producing projects.)

Revenue

Workshops are our primary source of non-donation income.  We ran 7 of them in 2013, and they became increasingly cash-positive through the year.  We now expect a full 4-day workshop held in the Bay Area to give us a profit of about $25k (ignoring fixed costs, such as staff time and office rent), which is just under 3 weeks of CFAR runway.  Demand isn't yet reliable enough to let us run workshops that often, though. We've gained significant traction building interest outside the Less Wrong community, but there's still work to be done here, and that work will take time.  In the meantime, workshops can subsidize some of our non-workshop activities, but not all of them.  (Your donations do not go to subsidize workshops!)

We're also actively exploring revenue models other than the four-day workshop. Several of them look promising, but they need time to come to fruition before the income they offer becomes significant.

Donations

CFAR received $166k in our previous fundraising drive at the start of 2013, and a smaller amount of donations spread across the rest of the year.  SPARC was partially sponsored with $15k from Dropbox and $5k from Quixey.  These donations subsidized SPARC, the rationality workshop at the EA summit, research and development, and core expenses and salary.

Savings and debt

Right now CFAR has essentially no savings. The savings we accumulated by the end of 2012 went to (a) feeding the gap between income and expenses and (b) funding SPARC.

A $30k loan, which helped us cover core 2013 expenses, comes due in March 2014.

Summary

If this winter fundraiser goes well, it will give us time to let some of our current experimental products mature. We think we have an excellent shot at making major strides forward in CFAR's mission, as well as becoming much more self-sustaining, during 2014.

If this winter fundraiser goes poorly, CFAR will not yet have sufficient funding to continue core operations.

How you can help

Our main goals in 2014:  

  1. Building a scalable revenue base, including by ramping up our workshop quality, workshop variety, and marketing reach.
  2. Community-building, including an alumni reunion. 
  3. Creating more connections with the effective altruism community, and other opportunities for our alumni to get involved in do-gooding.
  4. Research to feed back into our curriculum -- on the effectiveness of particular rationality techniques, as well as the long-term impact of rationality training on meaningful life outcomes.
  5. Developing more classes on epistemic rationality.

The three most important ways you can help:

1.  Donations
If you’re considering donating but want to learn more about how CFAR uses money, or you have other questions or hesitations, let us know -- we’d be more than happy to chat with you via Skype. You can sign up for a one-on-one call with Anna here.

2.  Talent
We’re actively seeking a new director of operations to organize our workshops; good operations can be a great multiplier on CFAR’s total ability to get things done.  We are continuing to try out exceptional candidates for a curriculum designer.[9]  And we always need more volunteers to help out with alpha-testing new classes in Berkeley, and to participate in online experiments.

3.  Participants
We're continually searching for additional awesome people for our workshops. This really is a high-impact way people can help us; and we do have a large amount of data suggesting that you (or your friends) will be glad to have come.  You can apply here -- it takes 1 minute, and leads to a conversation with Anna or Kenzi, which you'll probably find interesting whether or not you choose to come.

Like the open-source movement, applied rationality will be the product of thousands of individuals’ contributions. The ideas we've come up with so far are only a beginning. If you have other suggestions for people we should meet, other workshops we should attend, ways to branch out from our current business model, or anything else -- get in touch, we’d love to Skype with you. 

You can also be a part of open-source applied rationality by creating good content for Less Wrong. Some of our best workshop participants, volunteers, hires, ideas for rationality techniques, use cases, and general inspiration have come from Less Wrong.  Help keep the LW community vibrant and growing.

And, if you’re willing -- do consider donating now.

Footnotes

[1]  That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars.  Many thanks to Matt Wage, Peter McCluskey, Benjamin Hoffman, Janos Kramar & Victoria Krakovna, Liron Shapira, Satvik Beri, Kevin Harrington, Jonathan Weissman, and Ted Suzman for together putting up $150k in matching funds.  (Matt Wage, as mentioned, promises not only that he will donate if the pledge is matched, but also that he won't donate the $50k of matching funds to CFAR if the pledge isn't filled -- so your donation probably really does cause matching at the margin.)

[2]  This post was the result of a collaborative effort among Anna Salamon, Kenzi Amodei, Julia Galef, and “Valentine” Michael Smith -- like many of our endeavors at CFAR, it went through many iterations, in many hands, to create an overall whole where the credit due is difficult to tease apart.

[3]  In the broadest sense, CFAR can be seen as a cognitive branch of effective altruism -- making a marginal improvement to thinking where thinking matters a lot.  MIRI did not gain traction until it began to include explicit rationality in its message -- maybe because thinking about AI puts heavy loads on particular cognitive skills, though there are other hypotheses.  Other branches of effective altruism may encounter their own problems with a heavy cognitive load.  Effective altruism is limited in its growth by the supply of competent people who want to quantify the amount of good they do.

It has been true over the course of human history that improvements in world welfare have often been tied to improvements in explicit thinking skills, most notably with the invention of science.  Even for someone who doesn't think that existential risk is the right place to look, trying to invest more in good reasoning, qua good reasoning -- doubling down on the huge benefits which explicit cognitive skills have already brought humanity -- is a plausible candidate for the highest-impact marginal altruism.

[4]  That is, we’ve had two years since our barest beginnings, when Anna, Julia, and Val began working together under the auspices of MIRI; and just over a year as a financially and legally independent organization.

[5]  Our pilot minicamps, prior to that October, gave us valuable data/iteration; but they did not pay for their own direct (room and board) costs, let alone for the staff time required. 

[6]  I’m estimating quality by workshop participants’ feedback, here; it takes many fewer hours now for our instructors to create units that receive the same participant ratings as some older unit that hasn’t been revised (we did this accidental experiment several times).  Unsurprisingly, large quantities of unit-design practice, with rapid iteration and feedback, were key to improving our curriculum design skills.

[7]  Interestingly, we threw away over a dozen versions of the Bayes class before we developed this one.  It has proven somewhat easier to create curricula around strategicness, and around productivity/effectiveness more generally, than around epistemic rationality.  The reason for the relative difficulty appears to be two-fold.  First, it is somewhat harder to create a felt need for epistemic rationality skills, at least among those who aren’t working on gnarly, data-sparse problems such as existential risk.  Second, there is more existing material on strategicness than on epistemic rationality, and it is in general harder to create from scratch than to create by borrowing.  Nevertheless, we have, via much iteration, had some significant successes, including the Bayes class, separating professed beliefs from anticipated ones, and certain subskills of avoiding motivated cognition (e.g., noticing curiosity; noticing and tuning in to mental flinches).  Better yet, there seems to be a pattern to these successes which we are gradually getting the hang of.

We’re excited that Ben Hoffman has pledged $23k of funding specifically to enable us to improve our epistemic rationality curriculum and our research plan.

[8]  From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons.  For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements.  An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.

More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.

[9]  To those who’ve already applied: Thanks very much for applying; and our apologies for not getting back to you so far.  If the funding drive is filled (so that we can afford to possibly hire someone new), we’ll be looking through the applications shortly after the drive completes and will get back to you then.

117 comments

Comments sorted by top scores.

comment by pengvado · 2014-01-07T08:54:39.300Z · LW(p) · GW(p)

I donated $40,000.00

Replies from: ciphergoth, gjm, AnnaSalamon, Eliezer_Yudkowsky
comment by Paul Crowley (ciphergoth) · 2014-01-07T12:44:00.758Z · LW(p) · GW(p)

Holy crap, dude. Thanks for helping to save the world.

comment by gjm · 2014-01-07T14:19:18.146Z · LW(p) · GW(p)

I think it's unlikely that pengvado is lying -- but if anyone from CFAR is reading this and can confirm this donation, I think that would be a Good Thing.

Replies from: AnnaSalamon, Kawoomba
comment by AnnaSalamon · 2014-01-12T08:34:45.957Z · LW(p) · GW(p)

Confirmed. (The delay replying was because checks take time to get places.)

comment by Kawoomba · 2014-01-07T14:32:16.516Z · LW(p) · GW(p)

Probably a European ("," = ".").

But really, quite impressive, not only as a donation but also as a demonstration of how highly selected the LW readership is.

Replies from: gjm
comment by gjm · 2014-01-07T15:42:20.391Z · LW(p) · GW(p)

Who would give three decimal places on a number of dollars?

[EDITED to add: Er, I suppose Kawoomba's second sentence indicates that the first was only a joke. My apologies for not getting it.]

Replies from: Kawoomba
comment by Kawoomba · 2014-01-07T16:00:38.038Z · LW(p) · GW(p)

Who would give three decimal places on a number of dollars?

Who would donate 40k dollars? :)

Not that it's a bad thing, just comparing the rarity of 3-decimal-givers vs. 40k-$-givers. The latter is not something I've ever encountered announced "casually" in a forum, other than on LW, that is.

Replies from: gjm
comment by gjm · 2014-01-07T16:51:56.737Z · LW(p) · GW(p)

Fair point. I've seen quite substantial donations announced in LW threads before, but never as big as $40k.

Replies from: lukeprog
comment by lukeprog · 2014-01-09T22:36:38.394Z · LW(p) · GW(p)

Pengvado previously commented on a MIRI fundraising post that he "donated 20,000$ now, in addition to 110,000$ earlier this year," which was true.

comment by AnnaSalamon · 2014-01-07T21:20:14.154Z · LW(p) · GW(p)

Thank you!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-08T02:19:00.645Z · LW(p) · GW(p)

!

Color me impressed.

comment by Benquo · 2013-12-28T16:09:04.186Z · LW(p) · GW(p)

To positively reinforce CFAR for finally posting this, I'm going to give $750 before the end of 2013. This is separate from my matching funds pledge - treat it like any other donation.

In addition my employer should match that, for a total of $1,500, or $3,000 when you count the fundraiser's match of both.

UPDATE: Donation made. I'll request the employer match in the next few days.

UPDATE2: Employer match requested

comment by Zack_M_Davis · 2013-12-28T23:42:34.988Z · LW(p) · GW(p)

(donated $1,500)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:16:06.785Z · LW(p) · GW(p)

Thanks very much. We really appreciate it.

comment by katydee · 2013-12-28T22:37:51.503Z · LW(p) · GW(p)

Excellent post, I've just sent in $200.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:14:19.578Z · LW(p) · GW(p)

Thanks so much!

comment by shokwave · 2013-12-29T04:48:04.683Z · LW(p) · GW(p)

I made a $150 donation. I particularly like that effort has gone into making the workshops more accessible. I'm suggesting to my father that he should apply for the February workshop (I am very surprised to have ended up believing it will be worthwhile for him).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:16:25.907Z · LW(p) · GW(p)

Thank you!

comment by DeevGrape · 2013-12-29T04:19:18.696Z · LW(p) · GW(p)

Eliezer posted a Facebook status about the fundraiser needing more support, so I was going to donate $1000... but then I saw I would get a PrettyRational print if I donated $1500, so here we are :)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:16:38.124Z · LW(p) · GW(p)

Awesome :). Thanks!

comment by Kutta · 2013-12-29T11:17:49.256Z · LW(p) · GW(p)

Donated $500.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:17:15.733Z · LW(p) · GW(p)

Thanks!

comment by [deleted] · 2013-12-29T15:46:36.772Z · LW(p) · GW(p)

I've pledged $600 ($50/month) towards the fundraiser, with an okay from Anna.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:18:29.737Z · LW(p) · GW(p)

Yes; thank you; we really appreciate it. Monthly contributions are a very good way to help, if anyone's thinking about it; and if you pledge a year's worth of monthly contributions, that whole year counts toward this match.

comment by BraydenM · 2013-12-29T05:11:56.500Z · LW(p) · GW(p)

Great post. I've made it a personal goal to attempt to find 5 high value participants for the Melbourne workshop, and I'll also provide support in the form of accommodation for CFAR instructors and volunteers before/after the February workshop.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:17:29.618Z · LW(p) · GW(p)

Thanks so much!

comment by taryneast · 2013-12-30T01:54:20.513Z · LW(p) · GW(p)

Great post - lots of useful information about the program, where it's headed and how it's been going the last few years. Thanks. $150

comment by Raemon · 2013-12-29T21:35:27.510Z · LW(p) · GW(p)

I donated $100. I'd have donated more, but I had put somewhere over $3000 towards attending, and helping someone else attend, the effective altruism conference earlier this year.

Also, am about to quit my job and am not sure about my future cash flow situation.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:38:01.429Z · LW(p) · GW(p)

Thanks so much! And thanks for helping with the effective altruism conference last year; I really enjoyed the opportunity to teach and attend there; it made a real difference for me.

comment by MondSemmel · 2013-12-29T16:31:42.819Z · LW(p) · GW(p)

Donated 40€. I was going to donate to MIRI or CFAR, and chose CFAR due to this Facebook discussion.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:18:40.190Z · LW(p) · GW(p)

Thanks!

Replies from: ialdabaoth
comment by ialdabaoth · 2013-12-29T23:23:53.048Z · LW(p) · GW(p)

Quick feedback: Thanking people for their contributions is awesome, but with this many people contributing, your thank-yous are completely stomping the "recent comments" section, which makes it harder to keep up with site flow. If you want to publicly thank everyone, a top-level reply to the article twice per day that thanks each of that day's contributors by name will keep your article in LessWrong's "front-of-mind presence" and give everyone their deserved recognition without lowering the signal-to-noise ratio.

This is not to disparage your excellent organization or your dedication to it; I will be donating myself ASAP.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-02T22:35:22.403Z · LW(p) · GW(p)

The main problem with this is that it makes it cumbersome to send notifications to the people you're thanking. I also feel like your method would come off as more impersonal, distant, and artificial.

I haven't gotten the sense that thanking donors is a huge problem, since funding drives only occur once a year. Perhaps if we had hundreds of donors rather than a few dozen leaving comments. I may be undervaluing the cleanness of the Recent Comments section because I don't use it regularly enough, but my current feeling is that a few minutes of annoyance for Recent Comments browsers is worth it for making an important comment section feel slightly more warm and personable to a much larger and less LW-savvy audience. And for giving Anna and Luke a bit less work.

comment by philh · 2013-12-29T23:37:34.196Z · LW(p) · GW(p)

I've donated £420 since the start of the fundraiser, and intend to donate 10% of my next paycheque too if the goal hasn't been reached by then.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:39:48.552Z · LW(p) · GW(p)

Thanks so much!

comment by edanm · 2013-12-30T11:56:18.551Z · LW(p) · GW(p)

I just donated $100, in large part because of the detailed writeup and because of the many people writing here how much they donated. So thanks everyone!

comment by somervta · 2013-12-29T23:23:40.490Z · LW(p) · GW(p)

Donated $100.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:39:33.770Z · LW(p) · GW(p)

Thank you!

comment by Decius · 2013-12-31T01:17:51.233Z · LW(p) · GW(p)

I'm working the three holidays this season, and will donate the incentive pay from that.

comment by tanagrabeast · 2014-01-05T06:42:01.226Z · LW(p) · GW(p)

Donated $105, making my contribution the true baseball bat in the infamous $110 question.

May we get these things right more often.

comment by Vaniver · 2014-01-02T00:37:51.615Z · LW(p) · GW(p)

Donated $100 a month.

comment by MalcolmOcean (malcolmocean) · 2013-12-31T18:32:35.612Z · LW(p) · GW(p)

Donated $100!

comment by Julia_Galef · 2014-01-02T19:01:06.326Z · LW(p) · GW(p)

several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal

That article's up now -- it was on the cover of the Personal Journal section of the WSJ, on December 31st. Here's the online version: More Rational Resolutions

comment by Peter Wildeford (peter_hurford) · 2013-12-28T21:24:04.905Z · LW(p) · GW(p)

I think this is a very well written and useful picture of what CFAR is up to. I applaud CFAR for writing this, and it definitely puts me many steps closer to being willing to fund CFAR.

However, one concern of mine is that the altruistic value of CFAR does not seem to me to compare much to the value of other organizations expressly focused on do-gooding, like GiveWell or the Centre for Effective Altruism. It seems like CFAR would be a nice thing to fund once these organizations are already more secure in their own funding, but that's not true yet. Any thoughts on this? (As a disclaimer, I think I have more detailed reservations about funding CFAR that I may discuss if this becomes a conversation, so don't see me doing this in the future as moving the goalposts.)

Replies from: Benquo, AnnaSalamon, tog
comment by Benquo · 2013-12-28T21:35:47.624Z · LW(p) · GW(p)

I can give you a proof of concept, actual numbers and examples omitted.

Consider a simplified model where there are only two efficient charities, a direct one and CFAR, and no other helping is possible. If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR, they transform two inefficient givers into efficient givers (or double the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.

In addition CFAR is explicitly trying to build a network of competent rational do-gooders, with the expectation that the gains will be more than linear, because of division of labor.

Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.

Replies from: CarlShulman, peter_hurford
comment by CarlShulman · 2013-12-29T00:37:44.296Z · LW(p) · GW(p)

CEA and GiveWell are both building communities, GiveWell to the point of more than doubling its community (by measures such as number of donors and money moved, with web traffic growing slightly slower) every year, year after year. Giving What We Can's growth has been more linear, but 80,000 Hours has also had good growth (albeit somewhat less and over a shorter time).

That makes the bar for something like CFAR much, much higher than your model suggests, although there is merit in experimenting with a number of different models (and the Effective Altruism movement needs to cultivate the "E" element as well as the "A", which something along the lines of CFAR may be especially helpful for).

ETA: I went through more GiveWell growth numbers in this post. Absolute growth excluding Good Ventures (a big foundation that has firmly backed GiveWell) was fairly steady for the 2010-2011 and 2011-2012 comparisons, although growth has looked more exponential in other years.

Replies from: Benquo, Eliezer_Yudkowsky, Benquo, private_messaging
comment by Benquo · 2013-12-29T02:44:20.601Z · LW(p) · GW(p)

On reflection, this is an opportunity for me to be curious. The relevant community-builders I'm aware of are:

  • CFAR
  • 80,000 Hours / CEA
  • GiveWell
  • Leverage Research

Whom am I leaving out?

My model for what they're doing is this:

GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.

80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality, but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.

Leverage has tried to directly approach the problem of creating a hero-level community, but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness.

Do any of these descriptions seem off? If so, how?

PS I don't think I would have stuck my neck out & made these guesses in order to figure out whether I was right, before the recent CFAR workshop I attended.

Replies from: CarlShulman, Alex_Altair
comment by CarlShulman · 2013-12-29T03:40:07.433Z · LW(p) · GW(p)

Do any of these descriptions seem off? If so, how?

Some comments below.

GiveWell isn't trying to change much about people at all directly, except by helping them find efficient charities to give to. It's selecting people by whether they're already interested in this exact thing.

And publishing detailed analysis and reasons that get it massive media attention and draw in and convince people who may have been persuadable but had not in fact been persuaded. Also in sharing a lot of epistemic and methodological points on their blogs and site. Many GiveWell readers and users are in touch with each other and with GiveWell, and GiveWell has played an important role in the growth of EA as a whole, including people making other decisions (such as founding organizations and changing their career or research plans, in addition to their donations).

80,000 Hours is trying to intervene in certain specific high-impact life decisions like career choice as well as charity choice, effectively by administering a temporary "rationality infusion," but isn't trying to alter anyone's underlying character in a lasting way beyond that.

I would add that counseled folk and extensive web traffic also get exposed to ideas like prioritization, cause-neutrality, wide variation in effectiveness, etc., and ways to follow up. They built a membership/social-networking functionality, but I think they are making it less prominent on the website to focus on the research and counseling, in response to their experience so far.

Separately, how much of a difference is there between a three-day CFAR workshop and a temporary "rationality infusion"?

CFAR has the very ambitious goal of creating guardians of humanity with hero-level competence, altruism, and epistemic rationality,

The post describes a combination of selection for existing capacities, connection, and training, not creation (which would be harder).

but has so far mainly succeeded in some improvements in personal effectiveness for solving one's own life problems.

As the post mentions, there isn't clear evidence that this happened, and there is room for negative effects. But I do see a lot of value in developing rationality training that works, as measured in randomized trials using life outcomes, Tetlock-type predictive accuracy, or similar endpoints. I would say that the value of CFAR training today is more about testing/R&D and creating a commercial platform that can enable further R&D than any educational value of their current offerings.

Leverage has tried to directly approach the problem of creating a hero-level community but doesn't seem to have a track record of concrete specific successes, replicable methods for making people awesome, or a measure of effectiveness

I don't know much about what they have been doing lately, but they have had at least a couple of specific achievements. They held an effective altruist conference that was well-received by several people I spoke with, and a small percentage of people donating or joining other EA organizations report that they found out about effective altruism ideas through Leverage's THINK.

They may have had other more substantial achievements, but they are not easily discernible from the Leverage website. Their team seems very energetic, but much of it is focused on developing and applying a homegrown amateur psychological theory that contradicts established physics, biology, and psychology (previous LW discussion here and here). That remains a significant worry for me about Leverage.

Replies from: Benquo
comment by Benquo · 2013-12-29T03:41:35.537Z · LW(p) · GW(p)

Thank you, that's helpful.

comment by Alex_Altair · 2013-12-29T03:31:18.220Z · LW(p) · GW(p)

MIRI has been a huge community-builder, through LessWrong, HPMOR, et cetera.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-12-29T08:59:37.885Z · LW(p) · GW(p)

Those predate the founding of CFAR; at that time MIRI (then SI) was doing double duty as a rationality organisation. It's explicitly pivoted away from that and community building since.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-12-29T01:52:52.228Z · LW(p) · GW(p)

It would be nice if all that doubling helped save the world somehow, after all.

comment by Benquo · 2013-12-29T02:09:33.706Z · LW(p) · GW(p)

That makes sense. It depends on whether the bar is much higher than what there already is for "competent, rational" etc. AND how much better (if at all) CFAR is at making people so and finding those people. I think the first is pretty likely, but at this point the second is merely at the level of plausibility. (Which is still really impressive!)

comment by private_messaging · 2013-12-31T17:23:49.163Z · LW(p) · GW(p)

The main problem with teaching generic success skills is already "those who can't, teach". Donations only exacerbate this problem by lowering the barrier to entry.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-12-31T18:16:15.848Z · LW(p) · GW(p)

Only when there isn't a secondary goal in mind. For example, apprenticeship is a process where someone who clearly can do, teaches, because the master recognizes that some of their tasks are better performed by novice apprentices than by themselves - and the only way to guarantee quality novice apprentices is to create them.

For CFAR, the magnum opus seems to be human uplift - a process where the doing and the teaching are simply different levels of the same process.

Replies from: private_messaging, V_V
comment by private_messaging · 2013-12-31T20:22:21.135Z · LW(p) · GW(p)

The point is that there are many people who want to spread their message on how to effectively attain your goals. Generally, the quality of message is going to positively correlate with success and thus negatively correlate with being short on money or depending on charitable contributions.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-01T17:54:41.513Z · LW(p) · GW(p)

I am not sure what your definition of "success" is, but why exactly should getting money through contributions be worse than getting money by any other means?

If "success" is just a black box for doing what you wanted to do, then CFAR asking for money, getting donations, and using them to teach their curriculum is, by definition, a success.

If "success" is something else, then... please be more specific.

Replies from: private_messaging
comment by private_messaging · 2014-01-01T18:29:54.712Z · LW(p) · GW(p)

If "success" is just a black box for doing what you wanted to do, then CFAR asking for money, getting donations, and using them to teach their curriculum

Wait. The success at extracting from you this specific piece of money (the utility of donating which you ponder), is not yet decided. Furthermore, the prior success at finding actions that produce a lot of money, must have been quite low.

edit: besides, the end goal is wealth creation.

comment by V_V · 2013-12-31T22:15:35.709Z · LW(p) · GW(p)

Artisan masters (or, to some extent, college professors, at least in scientific and technical fields) generally have a track record of being good at doing what they teach.

Self-help instructors usually only have a track record of being good at making a living from being self-help instructors (which includes being good at self promotion to the relevant audience).
As far as I know, CFAR staff are no different in that regard.

EDIT:

And if you give them donations, they don't even have to be good at it!

Replies from: katydee
comment by katydee · 2014-01-01T00:17:37.527Z · LW(p) · GW(p)

Self-help instructors usually only have a track record of being good at making a living from being self-help instructors (which includes being good at self promotion to the relevant audience). As far as I know, CFAR staff are no different in that regard.

While to some extent I think this criticism may be valid, especially given the fact that it was a known factor prior to the foundation of CFAR, I think it's not entirely fair. Given that CFAR is more or less attempting to create a new curriculum and area of study, it isn't entirely clear what it would look like to have a proven track record in the field.

Now obviously CFAR would be more impressive if it was being run by Daniel Kahneman. But given that that isn't going to happen, I think the organization that we have is doing a fairly good job, especially given that many of their staff members have impressive accomplishments in other domains.

Replies from: V_V, private_messaging
comment by V_V · 2014-01-01T12:24:44.702Z · LW(p) · GW(p)

it isn't entirely clear what it would look like to have a proven track record in the field.

They want to teach people how to be rational, professionally successful, and altruistic, hence it would be desirable if the staff had strong credentials in those areas, such as being successful scientists, inventors, entrepreneurs, having done something that unquestionably helped many other people, etc.

especially given that many of their staff members have impressive accomplishments in other domains.

Such as?

According to the OP, CFAR has five full-time employees. I suppose they are the first five people listed on the website (Galef, Salamon, Smith, Critch and Amodei).
Galef is a blogger and podcaster, Amodei was a theatre stage manager, and the others are mathematicians:
Critch is the only one of them with a PhD and has done some research in abstract computer science and applied math. I don't have the expertise to evaluate his work, does it count as an impressive accomplishment?
Salamon mostly worked at SIAI/SI/MIRI and didn't publish much outside MIRI's own venues and philosophical conferences.
Smith, I don't know, because I can't find much information online.

EDIT:

Actually, according to the profile, Smith has a PhD in math education.

Replies from: katydee
comment by katydee · 2014-01-01T21:00:32.991Z · LW(p) · GW(p)

I don't have the expertise to evaluate his work, does it count as an impressive accomplishment?

Impressiveness exists in the map, not the territory-- but I certainly think so.

Replies from: V_V
comment by V_V · 2014-01-01T23:57:31.297Z · LW(p) · GW(p)

Impressiveness exists in the map, not the territory

Kinda. Science is inter-subjective. Whether or not somebody's contributions are considered breakthroughs by domain experts is an empirical question.

comment by private_messaging · 2014-01-01T08:37:06.399Z · LW(p) · GW(p)

it isn't entirely clear what it would look like to have a proven track record in the field.

Having a track record of creating something else that's unambiguously useful would be a start.

Mostly, people attempt to do grand and exceptional things either due to having evidence (prior high performance, for example), or due to having delusions of grandeur (prior history of such delusions). Those are two very distinct categories.

Replies from: katydee
comment by katydee · 2014-01-01T09:40:35.256Z · LW(p) · GW(p)

Having a track record of creating something else that's unambiguously useful would be a start.

Certainly-- that's what I was discussing when I wrote "many of their staff members have impressive accomplishments in other domains."

Replies from: private_messaging
comment by private_messaging · 2014-01-01T13:09:31.652Z · LW(p) · GW(p)

On the other hand, the reason said enterprise is seeking donations is largely that the most involved members' prior endeavours failed to monetize despite, in some cases, the presence of some innate talent -- a situation suggestive not of exceptionally superior but rather of inferior rationality.

comment by Peter Wildeford (peter_hurford) · 2013-12-28T23:48:41.355Z · LW(p) · GW(p)

If you give your charity budget to the direct charity, you help n people. If instead you give that money to CFAR they transform two inefficient givers to efficient givers (or doubles the money an efficient giver like you can afford to give), helping 2n people. The second option gives you more value for money.

I agree with you on this, but I think CEA is that meta-charity you're talking about, not CFAR. The reason for this is that CFAR and CEA (via Giving What We Can and 80,000 Hours) are both focused on building a community of do-gooders, but only CEA is doing it explicitly.

My understanding from current CFAR workshops is that CFAR doesn't have much content about effectively donating or effective altruism per se, though I could be missing something.

Is there any before / after analysis of CFAR attendees on metrics like amount of money donated or donation targets?

~

Finally, neither CEA nor GiveWell is working (AFAIK) on the problem of creating a group of people who can identify new, nonobvious problems and solutions in domains where we should expect untrained human minds to fail.

I agree this is the key benefit of CFAR, though I think it's hard to know at the moment whether CFAR is going to adequately accomplish this (though I do agree that current CFAR material is high-quality and getting better).

Replies from: Benquo
comment by Benquo · 2013-12-29T00:27:33.823Z · LW(p) · GW(p)

That's pretty much why I wanted a commitment to certain epistemic rationality projects: to show that it's possible to train that better (which has high VOI) and to make sure CFAR gets some momentum in that direction.

comment by AnnaSalamon · 2013-12-29T23:36:25.964Z · LW(p) · GW(p)

It's a complicated subject, of course, but my own impression is that CFAR is indeed a good place to donate on the present margin, from the perspective of long-term world-improvement, even bearing in mind that there are other organizations one could donate to that are focused on community building around effective altruism.

My reason for this is two-fold:

  • (1) Both epistemic rationality and strategicness really do seem to have high yield in an effective altruism context -- and so it's worth making a serious effort to see if we can increase these (I expect we can); and
  • (2) It's worth having a portfolio that includes multiple strong efforts at creating high-impact people. CEA is awesome, and if I thought that it was about to falter and that CFAR was strong, I would be seeking to direct money to CEA. But the two organizations are non-redundant -- CEA appeals largely to those who are already interested in altruism; CFAR appeals also to many potentially high-impact people who are interested in entrepreneurship, or in increasing their own powers, or in rationality, and who have not yet thought seriously about do-gooding. (Who then may.)

The SPARC program (for highly math-talented high school students) seems particularly key to me as a potential influencer of future technology, and it would, I think, be much harder for other organizations in this space to run such a program.

I'd be glad to engage more directly with your concerns, if you want to fill them in a bit more -- either here or by Skype. I suspect I'll learn from the conversation regardless. Maybe CFAR's strategy will also improve.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2014-01-09T12:21:31.812Z · LW(p) · GW(p)

Sorry for the delayed response, but I'd be interested in hearing more. I think it would be easiest to just Skype, so I've scheduled a time slot for the 21st. I look forward to it.

comment by tog · 2014-01-03T09:33:27.116Z · LW(p) · GW(p)

It'd be great if someone from CFAR could spell out the case for its having a large positive impact (on the things we ultimately care about, such as human welfare). If I understand it correctly, Anna's post suggests that CFAR will do good by creating a highly effective community of do-gooders, but this would benefit from a bit more substantiation. For example, could CFAR give some specific cases in which their training has increased the ultimate good done by its recipients? And could someone fully describe a typical or representative story by which CFAR training increases human welfare?

comment by So8res · 2014-01-07T01:23:41.970Z · LW(p) · GW(p)

I've just sent a check for $3,000, scheduled for delivery on Jan 13. CFAR is pending approval for my employer's donation matching program. Once that goes through, my donation will be matched by my employer.

comment by ChrisHallquist · 2014-01-11T18:03:01.831Z · LW(p) · GW(p)

Donated $1,500.

(In part because I realized that while I'm currently as income-deficient as I was last year, I expect that to change soon and anything I donate now counts for this year's taxes, so may as well get an early start.)

comment by Paul Crowley (ciphergoth) · 2013-12-29T09:04:30.923Z · LW(p) · GW(p)

In CFAR, MIRI has the ultimate hedge. If the whole MIRI mission is misdirected or wrongheaded, CFAR is designed to create the people who will notice that and do whatever most needs to be done.

Replies from: Eliezer_Yudkowsky, TheTerribleTrivium
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-12-30T00:50:39.176Z · LW(p) · GW(p)

I would phrase this more along the lines of "If nothing MIRI does works, or for that matter if everything works but it's still not enough, CFAR tries to get a fully generic bonus on paths unseen in advance."

Replies from: Discredited
comment by Discredited · 2014-01-01T03:26:52.863Z · LW(p) · GW(p)

Do you choose that rephrasing because you don't see how MIRI's work could be harmful or because there is nothing CFAR can do in that case?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-02T03:31:37.639Z · LW(p) · GW(p)

Switch out 'harmful' for 'aiming at the wrong goals', since that's the possibility cipher raised and Eliezer didn't. (Wrong goals might make MIRI merely useless; harmful isn't the only possibility.)

I'd guess that Eliezer's rephrasing reflects (1) his vagueness about the means by which CFAR would act as game-changer, and (2) his being much more worried that MIRI lacks the ingenuity and intellectual firepower to achieve its goals than worried that MIRI's deepest values and concerns are misplaced. CFAR might also help in some low-probability scenarios, but it's the likelier scenarios that make Eliezer a CFAR supporter.

comment by TheTerribleTrivium · 2013-12-29T11:49:09.742Z · LW(p) · GW(p)

Only if you have an extremely high opinion of the work CFAR does -- high enough that it can overcome the extremely strong signalling and group-affiliation effects that MIRI is as vulnerable to as anyone else. (Anyone who has been reading LW for more than an hour can think of the obvious examples.)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:20:44.324Z · LW(p) · GW(p)

I mean, the main way CFAR might be able to overcome this isn't by being super extremely unbiased, but by bringing a wide diversity of good thinkers into the network (with diverse starting views, diverse group affiliations, and diverse basic thinking styles). This is totally a priority for us.

comment by ArisKatsaris · 2014-01-08T13:17:40.585Z · LW(p) · GW(p)

A small note/improvement request: Just as I asked last time for MIRI's donation bar (and that one was fixed), it's a minor annoyance for me when the donation bar doesn't indicate when it was last updated. If I look at it on e.g. January 4 and again on January 7 and it hasn't moved, I'd like to know whether that's because it simply hasn't been updated in the last few days, or because people haven't been donating in the last few days.

Please try to have this minor fix implemented, at least in time for the next donation drive. Many thanks in advance. (As I've already mentioned in another thread, I have donated $1000 to CFAR's current donation drive.)

Replies from: Julia_Galef, Benquo
comment by Julia_Galef · 2014-01-09T19:25:56.821Z · LW(p) · GW(p)

Yes, that makes a lot of sense!

Since we don't have any programmers on staff at the moment, we went with the less-than-ideal solution of a manual thermometer, which we update about once a day -- but it certainly would be better to have it happen automatically.

For now, I've gone with the kluge-y solution of an "Updated January XXth" note directly above the menu bar. Thanks for the comment.

comment by Benquo · 2014-01-08T15:00:16.223Z · LW(p) · GW(p)

Seconded

comment by Alex_Altair · 2013-12-29T04:56:48.509Z · LW(p) · GW(p)

In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.

I'd love to hear more detailed plans or ideas for achieving these.

we’ll be devoting more resources to epistemic curriculum development

This is really exciting! I think people tend to have a lot more epistemic rationality than instrumental rationality, but that they still don't have enough epistemic rationality to care about x-risk or other EA goals.

comment by Alex_Altair · 2013-12-28T23:18:25.599Z · LW(p) · GW(p)

Excellent post! I wish my donation didn't have to wait a few months.

comment by Morendil · 2014-01-03T07:38:22.334Z · LW(p) · GW(p)

Donated $100. Happy New Year!

comment by Peter Wildeford (peter_hurford) · 2014-01-09T12:18:11.543Z · LW(p) · GW(p)

Another important comment occurred to me -- sorry it's late.

During the very first minicamps (the current workshops are agreed to be better) we randomized admission of 15 applicants, with 17 controls. Our study was low-powered and effects on e.g. income would have needed to be very large for us to expect to detect them. Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later. [...] The details will be available soon on our blog (including a much larger number of negative results). We'll run another RCT soon, funding permitting.

This is really exciting. Seeing CFAR run an RCT was one of the cool things that really made me feel like CFAR "gets it" -- that it is committed to measuring its own impact and to caring about whether it's impactful in a way that goes beyond mere speculation, which is good (warning: lots of nuance missing from this sentence).

However, I'm a bit disappointed to see little in the way of CFAR explicitly reacting to this negative evidence. It seems to me that the evidence is stated (which is really good!) but then ignored (which could be bad!). What are CFAR's plans in response to this RCT? If the plan is just to fund another/better RCT, what is the status of that funding, and how high a priority is it? What long-run effects will RCTs and measurement have on CFAR? Would there ever be a situation where CFAR would shut down, or admit it isn't an equally compelling donation opportunity, based on RCT or other evidence?

Replies from: Vaniver
comment by Vaniver · 2014-01-09T18:22:00.759Z · LW(p) · GW(p)

I think this conversation is a time when numerical hypotheses are helpful; I personally did not expect the CFAR minicamp to increase income, happiness, or exercise over the next year, but I thought that if there was a discernible effect, it was more likely to be positive than negative. A year is a short time as far as income is concerned; happiness is very hard to adjust; and a weekend motivational retreat is unlikely to be effective at altering exercise relative to other interventions. (I exercise more now than I did before, primarily thanks to Beeminder, which shows up a lot in CFAR circles and some on LW, and I think I started that more than a year after going to CFAR the first time.)

Now, if the CFAR staff had put high probability on having success on one of those three fronts, then I think that logic is worth discussing.
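For a rough sense of scale, here's a back-of-the-envelope sketch in Python. The framing as a two-sample t-test and the conventional alpha = .05 / 80% power thresholds are assumptions for illustration, not taken from CFAR's actual analysis:

```python
# Minimum detectable effect for a two-sample comparison with the
# minicamp RCT's group sizes (15 treated, 17 controls).
# Illustrative only; alpha and power conventions are assumed.
from statsmodels.stats.power import TTestIndPower

d = TTestIndPower().solve_power(
    effect_size=None,   # solve for the detectable effect size
    nobs1=15,           # treated group
    ratio=17 / 15,      # controls per treated participant
    alpha=0.05,
    power=0.8,
)
print(f"minimum detectable effect: Cohen's d ~ {d:.2f}")  # ~1.0
```

An effect on the order of a full standard deviation on income or happiness from a single weekend would be enormous, so null results at this sample size say little either way.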

Replies from: peter_hurford, Drayin
comment by Peter Wildeford (peter_hurford) · 2014-01-09T19:14:36.048Z · LW(p) · GW(p)

a weekend motivational retreat is unlikely to be effective at altering exercise

I agree about income and happiness, but I would expect CFAR to at least boost exercise, as (a) it doesn't seem hard and (b) it seems to be exactly the kind of thing CFAR is trying to do. I don't know much about the specifics of the RCT with regard to statistical power, etc., however.

However, a lot of the questions in my previous comment weren't aimed specifically at the current RCT, but at the bigger picture. For example, if CFAR wasn't putting high probability on having success on these three fronts, then why were they the dependent variables for the RCT? And what does CFAR put high probability on having success at? How do they plan on measuring that?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-01-09T20:41:39.392Z · LW(p) · GW(p)

For example, if CFAR wasn't putting high probability on having success with these three fronts, then why were they the dependent variables for the RCT?

We were not putting high probability on it -- the RCT had few participants but a large number of questions, which we launched knowing full well that it was unlikely to tell us much and that most results would likely be negative (and that any results with e.g. p=.05 would probably be statistical flukes, given the number of comparisons), specifically so we could figure out which hypotheses to test more carefully later.
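To illustrate the flukes point with a toy simulation (the group sizes match the RCT; the number of outcome measures here is invented for illustration):

```python
# Toy simulation: run many independent t-tests on pure noise and
# count how many come out "significant" at p < .05. Group sizes
# match the minicamp RCT; the 40 outcome measures are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_treated, n_control, n_measures = 15, 17, 40

false_positives = sum(
    stats.ttest_ind(rng.normal(size=n_treated),
                    rng.normal(size=n_control)).pvalue < 0.05
    for _ in range(n_measures)
)
print(f"{false_positives} of {n_measures} null comparisons hit p < .05")
# In expectation ~2 of 40 truly-null comparisons clear p < .05, so
# a few "significant" results are exactly what noise alone predicts.
```

Hence treating this round as hypothesis generation rather than confirmation.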

We'll be continuing with small, not-bankruptingly-expensive tests this year. If a large targeted donation could be found, we could of course do more of this faster; if anyone's interested they should talk to me. We'll also be continuing to rapidly shift the curriculum as we get informal impressions/feedback from our workshops and from the continuing stream of new units that we try on volunteers, in response mostly to our intuitive impressions but also to more formal tests.

(The RCT is not an attempt to conform to an effective altruism ritual -- if such a ritual were imposed on CFAR's structure without thinking carefully about what we're actually trying to do, it would probably do more harm than good to our mission, in the manner of Feynman's "Cargo cult science". The RCT is just a part of a much larger set of attempts to figure out how to create an effective, clear-thinking, do-gooding community -- and to avoid deluding ourselves while we do this.)

I’m looking forward to talking with you on Skype -- thanks for signing up for a timeslot -- this’ll probably be easier to discuss in person.

comment by Drayin · 2014-01-09T20:55:49.274Z · LW(p) · GW(p)

"if the CFAR staff had put high probability on having success on one of those three fronts, then I think that logic is worth discussing."

It would seem somewhat strange for CFAR to test three variables they did not expect to increase...

Also, I do not think happiness is very hard to adjust. There is research showing that some simple interventions can improve happiness, and they have been tested with RCTs; e.g., meditation and gratitude lists had a measurable effect.

comment by Zian · 2014-01-07T07:25:08.907Z · LW(p) · GW(p)

It was a bit troublesome to figure out whether the donation would be tax-deductible, because the word "deductible" isn't used anywhere on the page you linked to (http://rationality.org/fundraiser2013/). In fact, I almost gave up.

Fortunately, if you go to http://rationality.org/donate/, CFAR says they're a 501(c)(3) organization, although I'm not sure how I'd verify that... And since the IRS has very big teeth, maybe I should figure that out first.

In addition, for this sort of minor question, a full-blown Skype conversation probably isn't appropriate, but I don't see any alternative ways to get in touch with CFAR on either http://rationality.org/fundraiser2013/ or http://rationality.org/donate/ (except for sending a letter).

Update:

I found the form 990 at http://990finder.foundationcenter.org/990results.aspx?990_type=&fn=&st=&zp=&ei=453100226&fy=&action=Find but now I'm really worried because it looks like CFAR lost all its key staff. I don't see the secretary, treasurer, or president from the 2012 filing listed at http://rationality.org/about/.

I would like to think that CFAR will do a terrific job, but confirmation bias is already tilting my opinion, so it seems unwise to donate money without seriously thinking about the perils of an organization that can't retain key staff.

Replies from: gjm
comment by gjm · 2014-01-07T14:30:14.257Z · LW(p) · GW(p)

I don't see any secretary or treasurer listed on the CFAR website. I suspect that these are purely administrative (or even largely ceremonial) posts, and may be filled by people with little or no role in CFAR's actual work.

I agree that it seems a bad sign that early-2012's president seems to be out of the picture unannounced. Perhaps he was always intended as president only pro tem, e.g. until Julia Galef (a founder and now the president of the organization) was sure she could handle the work?

Replies from: AlexMennen
comment by AlexMennen · 2014-01-07T20:30:56.753Z · LW(p) · GW(p)

I attended a CFAR rationality workshop in 2012, and the way I remember it, Anna and Julia were running things from the beginning, so I'm surprised to see that Julia was not listed as President on the Form 990 for 2012. My guess would be that the people listed on that form only nominally filled their listed roles. This is supported by the observation that, according to the form, they devoted no time to their roles and were not paid.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-01-07T21:14:20.905Z · LW(p) · GW(p)

CFAR is a 501(c)(3) tax-exempt organization. The current team has indeed been running things from the beginning; it is simply that, prior to the beginning (prior to any paid staff; prior to me meeting Julia or Val or anyone; prior to deciding that there would be a CFAR), some folk filed for a non-profit "just in case" a CFAR ended up being launched, since the processing time required for getting 501(c)(3) status is large. We have not lost key staff.

Replies from: Zian
comment by Zian · 2014-01-13T06:24:18.093Z · LW(p) · GW(p)

Wow, speaking as someone who tried to start the papers for a non profit org, you really have dedicated people!

I'm going to take it on faith, then, that CFAR is more or less a legit nonprofit, so I have two questions:

  1. I read above about someone doing a monthly recurring thing and the entire amount being matched. What if I do $X (say, X = 100) now (where now = this week) and $Y (say, $200) at a later time, for a total of $Z? I ask because my next paycheck (and possibly the one after that) is already accounted for, but I want to make sure you get the most matching possible out of things. If necessary, I can put this in writing/sign/etc. I'd even be happy to provide some sort of small $Y that simply recurs monthly, but I suspect that CFAR would be happier getting $Z by, say, March instead of December 2014. :)

  2. When will the next Form 990 be filed? I'd like to lose my faith as quickly as possible. :)

comment by Robin · 2013-12-29T00:26:40.858Z · LW(p) · GW(p)

Does CFAR feel developed enough that it would prefer money to feedback?

I.e., I presume there are many people out there who could help CFAR either by dedicating a few hours of their time to thinking about how to improve CFAR, or by earning money to donate to CFAR.

Replies from: Benquo, KatieHartman, lukeprog
comment by Benquo · 2013-12-29T00:33:47.820Z · LW(p) · GW(p)

I think CFAR feels poor enough to prefer money to feedback.

Also, they've tried a lot of the obvious things -- I had a conversation with Anna where I suggested about 10 things for CFAR to try; they'd already tried about 9, and the 10th wasn't obviously better than the stuff already on their list. Maybe you're smarter than me, though :)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:12:30.993Z · LW(p) · GW(p)

That preference seems mostly right to me... but I did just get quite a good suggestion by email that I hadn't thought of. If you feel like you know important things, do share.

comment by KatieHartman · 2013-12-29T12:49:03.634Z · LW(p) · GW(p)

Having spent a fair amount of time around CFAR staff, in the office and out, I can testify to their almost unbelievable level of self-reflection and creativity. (I recall, several months ago, Julia joking about how much time in meetings was spent discussing the meetings themselves at various levels of meta.) For what it's worth, I can't think of an organization I'd trust to have a greater grasp on its own needs and resources. If they're pushing fundraising, I'd estimate with high confidence that it's because that's where the bottleneck is.

I think donating x hours' worth of income is, with few exceptions, a better route than trying to donate x hours of personal time, especially when you consider that managing external volunteers and having discussions (a perhaps-unpredictable percentage of which will be unproductive) is itself more costly than accepting money.

I'd be willing to guess that the next best thing to donating money would be to pitch CFAR to/offer to set up introductions with high-leverage individuals who might be receptive, but only if that's the sort of thing (you have evidence for believing) you're good at.

Also, sharing information about the fundraising drive via email/Facebook/Twitter/etc. is probably worth the minimal time and effort.

Replies from: GuySrinivasan
comment by SarahSrinivasan (GuySrinivasan) · 2013-12-29T21:48:41.280Z · LW(p) · GW(p)

Do you know why CFAR's probability experiment reports stopped after exactly one? Did they stop performing experiments? Were the results uninteresting, so they decided not to write them up despite their claim that they would? I'd also love to see their underlying data for even the first experiment, but no one's sharing. Should I offer them money to release the data instead?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2013-12-29T23:11:28.527Z · LW(p) · GW(p)

We did one more experiment and have another in the works. Second experiment will be written up, I think, but hasn't been yet. I suspect we'd also love to share the data with you (and possibly more widely if there aren't anonymization issues; I wasn't closely involved in the experiments and don't know if there are); I see your unanswered comment back in the thread; I suspect it's just a matter of a small team of somewhat overbooked people dropping a thing.

Replies from: GuySrinivasan
comment by SarahSrinivasan (GuySrinivasan) · 2013-12-30T01:06:56.381Z · LW(p) · GW(p)

Thanks, that's what I suspected too given no responses.

comment by lukeprog · 2013-12-30T16:28:54.616Z · LW(p) · GW(p)

I helped create CFAR, and work every day in the same office as they do, and I still need to talk with the co-founders for several hours before I understand enough detail about CFAR's challenges and opportunities to have advice that I'm decently confident will be useful rather than something they've already tried, or something they have a good reason for not doing, etc.

comment by katydee · 2014-02-05T18:46:23.745Z · LW(p) · GW(p)

Update: this fundraiser has been completed successfully. :)

comment by ChrisHallquist · 2014-01-05T07:23:37.988Z · LW(p) · GW(p)

Question: what exactly is CFAR doing to encourage do-gooding? Of the three listed goals, my impressions of what CFAR does seem mostly focused on the first two.

Replies from: somervta
comment by somervta · 2014-01-05T09:23:02.287Z · LW(p) · GW(p)

(Just one thing that came to mind; I'm sure there are others that Anna et al. can talk about.) People who are looking to do good can get -- I guess they're called scholarships? -- towards the workshop price. Not only does this hopefully make those looking to do good more effective, it also brings people who aren't thinking about do-gooding as a life choice or career into an environment surrounded by people who are passionate about doing good. The conversations that go on around them are extremely skewed towards that kind of thing, and I think that's likely to be very valuable (and not just to those unfamiliar with EA -- I know several people who were inspired by some of those conversations, and some of them came out with ideas that they're collaborating on).

comment by amcknight · 2013-12-31T06:43:24.967Z · LW(p) · GW(p)

From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math tests in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.

More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.

I'm extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?

comment by brazil84 · 2013-12-31T06:47:02.985Z · LW(p) · GW(p)

If CFAR's curriculum is good at creating people who are effective, rational do-gooders, then such people will (1) correctly ascertain the value of CFAR; (2) have the means to support CFAR; and (3) act by supporting CFAR. So arguably there is no need to charge money up front for CFAR training -- just tell participants to evaluate the training after the fact and pay whatever they think is appropriate. Kind of like a tip in a restaurant.

Replies from: JGWeissman, KatieHartman
comment by JGWeissman · 2013-12-31T14:28:52.922Z · LW(p) · GW(p)

CFAR does offer to refund the workshop fee if after the fact participants evaluate that it wasn't worth it. They also solicit donations from alumni. So they are kind of telling participants to evaluate the value provided by CFAR and pay what they think is appropriate, while providing an anchor point and default which covers the cost of providing the workshop. That anchor point and default are especially important for the many workshop participants who are not selected for altruism, who probably will learn a lot of competence and epistemic rationality but not much altruism, and whose workshop fees subsidize CFAR's other activities.

Replies from: brazil84
comment by brazil84 · 2013-12-31T19:25:13.094Z · LW(p) · GW(p)

CFAR does offer to refund the workshop fee if after the fact participants evaluate that it wasn't worth it.

Yes, I noticed that on CFAR's web site. I do think it's a step in the right direction, but arguably it should be unnecessary. When you've already paid for services, it's psychologically more difficult to ask for a refund than to simply not pay for services you have already received; CFAR shouldn't need to rely on this principle. Besides, CFAR doesn't seem to have deep pockets, and if enough people asked for refunds, I suspect that such refunds would not be forthcoming.

That anchor point and default are especially important for the many workshop participants who are not selected for altruism,

Well, how much of what CFAR does is a selection process? If CFAR isn't competent at making people more altruistic, then probably the goals need to be re-written, e.g. to find do-gooders and make them more effective/rational.

Replies from: Benja, philh
comment by Benya (Benja) · 2013-12-31T20:18:30.522Z · LW(p) · GW(p)

I would agree with your reasoning if CFAR claimed that they can reliably turn people into altruists free of cognitive biases within the span of their four-day workshop. If they claimed that and were correct, then it shouldn't matter whether they (a) require up-front payment and offer a refund or (b) have people decide what to pay after the workshop, since a bias-free altruist would end up paying the same in either case. There would only be a difference if CFAR didn't achieve what, in this counterfactual scenario, it claimed to achieve, so they should be willing to choose option (b), which would be better for their participants if they don't achieve these claims.

But of course CFAR doesn't actually claim that they can make you bias-free in four days, or even that they can make themselves bias-free with years of training. Much of CFAR's curriculum is aimed at taking the brain we actually have and tweaking the way we use it in order to achieve better (not perfect, but better) results -- for example, using tricks that seem to engage our brain's mechanisms for habit formation, in order to bypass using willpower to stick with a habit, rather than somehow acquiring all the willpower that would be useful to have (since there's no known way to just do that). Or consider precommitment devices like Beeminder -- a perfectly bias-free agent wouldn't have any use for these, but many CFAR alumni (and, I believe, CFAR instructors) have found them useful.

CFAR doesn't pretend to be able to turn people into bias-free rationalists who don't need such devices. So I see nothing inconsistent about them believing both that they can deliver useful training that makes people on average more effective and more altruistic (though I would expect the latter to hold only in the long run, through contact with the CFAR community, and only for a subset of people, rather than for the vast majority of attendees right after the 4-day workshop), and that if they didn't charge up-front and instead asked people to pay afterwards whatever they thought it was worth, they wouldn't make enough money to stay afloat.

Replies from: brazil84
comment by brazil84 · 2013-12-31T21:02:18.645Z · LW(p) · GW(p)

I would agree with your reasoning if CFAR claimed that they can reliably turn people into altruists free of cognitive biases within the span of their four-day workshop. If they claimed that and were correct, then it shouldn't matter whether they (a) require up-front payment and offer a refund or (b) have people decide what to pay after the workshop, since a bias-free altruist would end up paying the same in either case.

It's not so much what CFAR is claiming as what their goals are and which outcomes they prefer.

The goal is to create people who are effective, rational do-gooders. I see four main possibilities here:

First, that they succeed in doing so.

Second, that they fail and go out of business.

Third, that they become a sort of self-help cult like the Landmark Forum, i.e. they charge people money without delivering much benefit.

Fourth, they become a sort of fraternal organization, i.e. membership does bring benefits mainly from being able to network with other members.

Obviously (1) is the top choice. But if (1) does not occur, which would they prefer -- (2), or some combination of (3) and (4)? By charging money up front, they are on the path to (3) or (4) as a second choice, which goes against their stated goal.

So let's assume that they do not claim to be able to turn people into effective rational do-gooders. The fact remains that they hope to do so. And one needs to ask, what do they hope for as a second choice?

Replies from: JGWeissman
comment by JGWeissman · 2014-01-01T02:17:11.819Z · LW(p) · GW(p)

CFAR can achieve its goal of creating effective, rational do-gooders by taking existing do-gooders and making them more effective and rational. This is why they offer scholarships to existing do-gooders. Their goal is not to create effective, rational do-gooders out of blank slates but to make valuable marginal increases in this combination of traits, often by making people who already rank highly in these areas even better.

They also use the same workshops to make people in general more effective and rational, which they can charge money for to fund the workshops, and gives them more data to test their training methods on. That they don't turn people in general into do-gooders does not constitute a failure of the whole mission. These activities support the mission without directly fulfilling it.

Fourth, they become a sort of fraternal organization, i.e. membership does bring benefits mainly from being able to network with other members.

CFAR is creating an alumni network to create benefits on top of increased effectiveness and rationality.

Replies from: brazil84
comment by brazil84 · 2014-01-01T12:57:51.006Z · LW(p) · GW(p)

CFAR can achieve its goal of creating effective, rational do-gooders by taking existing do-gooders and making them more effective and rational.

I wasn't aware that this was the strategy; perhaps I read the original post too quickly.

This is why they offer scholarships to existing do-gooders.

Well, are they attempting to turn non-do-gooders into do-gooders?

That they don't turn people in general into do-gooders does not constitute a failure of the whole mission. These activities support the mission without directly fulfilling it.

Perhaps, but that strikes me as a dangerous first step towards a kind of mission creep -- towards scenario (3) or (4).

CFAR is creating an alumni network to create benefits on top of increased effectiveness and rationality.

Same problem.

comment by philh · 2014-01-01T21:23:09.677Z · LW(p) · GW(p)

When you've already paid for services, it's psychologically more difficult to ask for a refund than to simply not pay for services you have already received.

I recall I-think-it-was-Anna telling me that CFAR has given a refund to someone who didn't ask for a refund but who seemed unhappy with the service received.

(I don't claim that this fact makes that psychological trait entirely irrelevant here.)

comment by KatieHartman · 2014-01-01T13:46:48.188Z · LW(p) · GW(p)

This seems irresponsible and unwise when you have substantial fixed costs, all necessary for core activities, and not much in the way of back-up resources. I can see it feasibly leading to a bunch of problems, including (a) the incentive to save up financial resources rather than put them to use toward high-EV activities and (b) difficulty hiring staff smart enough to realize that the resources from which their salaries are paid out will be highly variable month-to-month.

Replies from: brazil84
comment by brazil84 · 2014-01-01T15:06:15.392Z · LW(p) · GW(p)

This seems irresponsible and unwise

Well again, it depends on what the organization's preferences are. How important is it to keep the doors open if the organization is not really accomplishing what it set out to do?