CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype

post by AnnaSalamon · 2014-12-26T15:33:08.388Z · LW · GW · Legacy · 61 comments

Contents

  Highlights from 2014
  Improving operations
  Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015
    Epistemic rationality curriculum
    Goals for 2015
    Nuts, Bolts, and Financial Details
    The big picture and how you can help

Summary:  We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.

One of the reasons we’re publishing this review now is that we’ve just launched our annual matching fundraiser, and we want to provide the information prospective donors need to make their decisions. This is the best time of year to donate to CFAR: donations up to $120k will be matched until January 31.[1]

To briefly preview: For the first three years of our existence, CFAR mostly focused on getting going. We followed the standard recommendation to build a ‘minimum viable product’ -- the CFAR workshops -- that could test our ideas and generate some revenue. Coming into 2013, we had a workshop that people liked (9.3 average rating on “Are you glad you came?”; a more recent random survey showed a 9.6 average rating on the same question 6-24 months later), which helped keep the lights on and gave us articulate, skeptical, serious learners to iterate on. At the same time, the workshops are not everything we would want in a CFAR prototype; it feels like the current core workshop does not stress-test most of our hopes for what CFAR can eventually do. The premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. We have dreams of scaling up some particular kinds of sanity.  Our next goal is to build the minimum strategic product that more directly justifies CFAR’s claim to be an effective altruist project.[2]

Highlights from 2014

Our brand perception improved significantly in 2014, which matters because it makes companies more willing to pay for workshop attendance.  We were covered in Fast Company -- twice -- the Wall Street Journal, and The Reasoner.  Other mentions include Forbes, Big Think, Boing Boing, and Lifehacker.  We’ve also had some interest from tech companies in potential training.

Our curriculum is gaining a second tier in the form of alumni workshops.  We tried 4 experimental alumni workshops, 3 of which went well enough to be worth iterating:
  • Hamming Workshop (on the Hamming Question: one’s most important problems);
  • Assisting Thinking (run under the title “TA training”[3]);
  • Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.
  • Epistemic Rationality for Effective Altruists (see section 3).

Our alumni community continues to grow.  There are now 550 CFAR alumni, counting 90 from SPARC.  It’s a high-initiative group. Startups by CFAR alumni include: Apptimize; Bellroy; Beeminder; Complice; Code Combat; Draftable; MealSquares; OhmData; Praxamed; Vesparum; Teleport; Watu; Wave; ZeroCater.[4] There is a highly active mailing list with over 400 members and over 600 conversation threads, over 30 of which were active in the last month.  We also ran our first-ever alumni reunion, and started a weekly alumni dojo.  These programs enabled further curricular experimentation, and allowed alumni ideas and experiences to feed into curricular design.

SPARC happened again, with more-honed curriculum and nearly twice as many students.

Basic operations improved substantially.  We’ll say more on this in section 2.

Iteration on the flagship workshop continues.  We’ll say more on this (including details of what we learned, and what remains puzzling) in section 3.

Improving operations

The two driving themes of CFAR during 2014 were (a) making our operations more stable and sustainable, and (b) a successful struggle to pull our introductory workshop out of a local optimum and get back on track toward something more like a ‘full prototype’ of the CFAR concept.

At the end of 2013, we had negative $30,000 and had borrowed money to make payroll, placing us in the ‘very early stage, struggling startup’ phase. Almost all of our regular operations, such as scheduling interviews for workshop admissions, were being done by hand. Much of our real progress in 2014 consisted of making things run smoothly and getting past the phase where treading water requires so many weekly hours that nobody has time for anything else. Organizational capital is real, and we had to learn the habit of setting aside time and effort for accumulating it. (In retrospect, we were around a year too slow to enter this phase, although in the very early days it was probably correct to be building everything to throw away.)

A few of the less completely standard lessons we think we learned are as follows:

We also learned a large number of other standard lessons. As of the end of 2014, we think that basic processes at CFAR have improved substantially. We have several months of runway in the bank account; our finances are still precarious, but at least not negative, and we think they’re on an improving path. Our workshop interviews and follow-up sessions are now scheduled through an online interface instead of by hand (which frees a rather surprising amount of energy). Workshop instructors now do almost no workshop ops. Accounting has been streamlined. The office keeps nutritious food easily available, so no one needs to quit working when they get hungry.

CFAR feels like it is out of the very-early-startup stage, and able to start focusing on things other than just staying afloat.  We feel sufficiently non-overwhelmed that we can take the highest-value opportunities we run into, rather than having all staff members overcommitted at all times. We have a clearer sense of what CFAR is trying to do; of what our internal decision-making structure is; of what each of our roles is; of the value of building good institutions for recording our heuristic updates; etc. And we have will, momentum, and knowledge with which to continue improving our organizational capital over 2015.

Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015

Where are we spending the dividends from that organizational capital?  On more ambitious curriculum -- specifically, a “full prototype” of the CFAR concept.

Recall that the premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. By a “prototype”, or “minimum strategic product”, we mean a product that actually demonstrates that the above goal is viable (and, thus, that more directly justifies CFAR's claim to be an effective altruist project). For CFAR, this will probably require meaningfully boosting some fraction of participants along all three axes (epistemic rationality; real-world competence; and tendency to do good in the world). [5]

So that's our target for 2015.  In the rest of this section, we’ll talk about what CFAR did during 2014, go into greater detail on our attempt to build a curriculum for epistemic rationality, and describe our 2015 goals in more detail.

---

One of the long-term premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.) -- we aim to eventually be an evidence-based organization.  In our present state this continues to be a lot harder than we would like; our 2014 workshop iteration, for example, was evaluated via crude "what do you feel you learnt?" surveys and our own gut impressions. The sort of randomized trial we ran in 2012 is extremely expensive for us because it requires randomly not admitting workshop applicants, and we don’t presently have good-enough outcome metrics to justify that expense.  Life outcomes, which we see as the gold standard, are big noisy variables with many contributing factors - there’s a lot that adds to or subtracts from your salary besides having attended a CFAR workshop, which means that the randomized tests we can afford to run on life outcomes are underpowered.  Testing later ability to perform specific skills doesn’t seem to stress-test the core premise in the same way.  In 2014 we continued to track correlational data and did more detailed random follow-up surveys, but this is just enough to keep such analyses in the set of things we regularly do, and to remind ourselves that we are supposed to be doing better science later.

At the start of 2014, we thought our workshops had reached a point of decent order, and we were continuing to tweak them.  Partway through 2014 we realized we had reached a local optimum and become stuck (well short of a full prototype / minimum strategic product).  So then we smashed everything with a hammer and tried a variety of experiments, including the alumni workshops described above and a substantially restructured workshop in the UK.

These experiments ended up feeding back into the flagship workshop, and we think we're now out of the local optimum and making progress again.

Epistemic rationality curriculum

In CFAR’s earliest days, we thought epistemic rationality (figuring out the answers to factual questions) was the main thing we were supposed to teach, and we took some long-suffering volunteers and started testing units on them.  Then it turned out that while all of our material was pretty terrible, the epistemic rationality parts were even more terrible than the rest.

At first our model was that epistemic rationality was hard and we needed to be better teachers, so we set out to learn general teaching skills.  People began to visibly enjoy many of our units.  But not the units we thought of as "epistemic rationality".  They still visibly suffered through those.

We started to talk about "the curse of epistemic rationality", and it made us worry about whether it would be worth having a CFAR if we couldn't resolve it somehow.  Figuring out the answers to factual questions, the sort of subject matter that appears in the Sequences, the kind of work that we think of scientists as carrying out, felt to us like it was central to the spirit of rationality.  We had a sense (and still do) that if all we could do was teach people how to set up trigger-action systems for remembering to lock their house doors, or even turn an ugh-y feeling of needing to do a job search into a series of concrete actions, this still wouldn't be making much progress on sanity-requiring challenges over the next decades.  We were worried it wouldn't contribute strategic potential to effective altruism.

So we kept the most essential-feeling epistemic rationality units in the workshop despite participants' lowish unit-ratings, and despite our own feeling that those units weren't "clicking", and we thought: “Maybe, if we have workshops full of units that people like, we can just make them sit through some units that they don’t like as much, and get people to learn epistemic rationality that way”.  The “didn’t like” part was painful no matter what story we stuck on it.  We rewrote the Bayes unit from scratch more or less every workshop.  All of our “epistemic rationality” units changed radically every month.

One ray of light appeared in mid-2013 with the Inner Simulator unit, which included techniques for imagining future situations to see how surprised you felt by them, and using this to determine whether your Inner Simulator really strongly expected a new hire to work out, or whether you were in fact certain that your project would be done by Thursday.  This was something we considered to be an "epistemic rationality" unit at the time, and it worked, in the sense that it (a) set up concepts that fed into our other units, (b) seemed to actually convey some useful skills that people noticed they were learning, and (c) people didn't hate it.

(And it didn't feel like we were just trying to smuggle it in from ulterior motives about skills we thought effective altruists ought to have, but that we were actually patching concrete problems.)

A miracle had appeared!  We ignored it and kept rewriting all the other "epistemic rationality" units every month.

But a lesson that we only understood later started to seep in.  We started thinking of some of our other units as having epistemic rationality components in them -- and this in turn changed the way we practiced, and taught, the other techniques. 

The sea change in our thinking might be summarized as a shift from "epistemic rationality lives in whole units about answering factual questions" to "there is a truth element that appears in many skills -- a point where you would like your System 1 or System 2 to see some particular fact as true, or figure out what is true, or resolve an argument about what will happen next".

When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization, "Hey, maybe almost all of our curriculum is secretly epistemic rationality and we can organize it into 'Epistemic Rationality for the Planning Brain' on day 1 and 'Epistemic Rationality for the Affective Brain' on day 2, and this makes our curriculum so much denser that we'll have room for the Hamming Question on day 3."  This didn't work as well in practice as it did in our heads (though it still went over okay) but we think this just means that the process of our digesting this insight is ongoing.

We have hopes of making a lot of progress here in 2015.  It feels like we're back on track to teaching epistemic rationality - in ways where it's forced by the need to usefully tackle life problems, not tacked on.  And this in turn feels like we're back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR's expected future value.

(And the units we think of as "epistemic" no longer get rated lower than all our other units; and our alumni workshop on Epistemic Rationality for Effective Altruists went over very well and does seem to have helped validate the propositions that "People who care strongly about EA's factual questions are good audiences for what we think of as relevant epistemic skills" and "Having learned CFAR basics actually does help for learning more abstract epistemic rationality later".)

Goals for 2015

In 2015, we intend to keep building organizational capital, and to use those dividends to keep pushing on the epistemic rationality curriculum and toward the minimum strategic product that stress-tests CFAR's core value propositions.  We've also set the following concrete goals[7]:

Nuts, Bolts, and Financial Details

Total expenditures
Our total expenditures in 2014 came to about $840k.  This number includes about $330k of non-staff direct workshop costs (housing, food, etc.), which is offset by the associated workshop revenue; if one excludes this number, our total expenditures in 2014 came to about $510k.

Basic operating expenses
Our basic operating expenses in 2014 were fairly similar to 2013’s, totaling roughly $42k/month:
  • $5.3k/month for office rent;
  • $30k/month for salaries (includes tax, health insurance, and contractors; our full-time people are still paid $3.5k/month);
  • $7k/month for total other non-workshop costs (flights and fees to attend others' trainings; office groceries; storage unit, software subscriptions; ...)

Flagship Workshops
We ran 9 workshops in 2014, which generated about $435k in revenue against about $210k in non-staff costs (mostly food and housing for workshop participants), for a net of about $225k (roughly $25k per workshop), ignoring staff costs.

Per-workshop staff time-cost is significantly lower than it was (counting sales, pre-workshop prep, instruction, and follow-ups) -- perhaps 100 person-days per workshop going forward, compared against perhaps 180 person-days per workshop in 2013.  (We aim to decrease this further in 2015 while maintaining or increasing quality.)

Per-workshop net revenue, on the other hand, is roughly similar to 2013’s; this reflects an intentional shift of staff time away from short-term sales and toward longer-term investments: the press funnel, curriculum development (e.g., the alumni events), and other work aimed at our longer-term significance.

Alumni reunion, alumni workshops, alumni dojo...
We ran an alumni reunion, 4 alumni workshops, and a continuing alumni dojo.  We intentionally kept participant costs low and sliding-scale, so as to help build the community that can take the art forward.
Detail:
  • Alumni reunion: $34k income; $38k non-staff costs (for ~100 participants)
  • Hamming: $3.6k revenue; $3k non-staff costs
  • Assisting thinking: $2.1k revenue; $3.2k non-staff costs
  • Attention: $3.3k revenue; $2.7k non-staff costs
  • Epistemic Rationality for Effective Altruists: $5k revenue; $3k costs
  • Dojo: free.
We also ran a 1.5-day beta workshop for beginners:
  • “A taste of rationality”: $5k revenue; $2.6k non-staff costs. 

SPARC
SPARC 2014’s non-staff costs came to $62k, and were covered by Dropbox, Quixey, and MIRI (although, as with our other programs, considerable CFAR staff time also went into SPARC).

Balance sheet
CFAR has about $130k in the bank going into 2015.  (The $30k short-term loan we took last year was repaid as scheduled, following last year's fundraising drive.)

Summary
CFAR is more financially stable than it was a year ago, but remains dependent on donations to make ends meet -- and will be still more dependent on donations if it is to, e.g., outsource the accounting, further streamline per-workshop staff time-costs, and put real, focused effort into developing the epistemic rationality and do-gooding impacts.

The big picture and how you can help

CFAR seems to many of us to be among the efforts most worth investing in.  This isn’t because our present workshops are all that great.  Rather, it is because, in terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.  And we suspect that CFAR is a promising kernel from which to help with that effort.

As noted, we aim in 2015 to get all the way to a “full prototype” -- a point from which we are actually, visibly helping in the aimed-for way.  This will be a tricky spot to get to. Our experience slowly coming to grips with epistemic rationality is probably more the rule than the exception, and I suspect we’ll run into a number of curve balls on the path to the prototype.

But with your help -- donations are at this stage critical to our being able to put serious, focused effort into building the prototype, instead of being terribly distracted by staying alive -- I suspect that we can put in the requisite focus, and can have the prototype in hand by the end of 2015.

...

Besides donations, we are now actually in a good position to use your advice, your experience, and your thoughts on how to navigate CFAR's remaining gaps; we have enough space to take a breath and think strategically.

We're hoping 2015 will also be a year when CFAR alumni and supporters scale up their connections and their ambitions, launching more startups and other projects.  Please keep in touch if you do this; we’d like our curriculum-generation process to continue to connect to live problems.

A very strong way to help, also, is to come to a workshop, and to send your friends there.  It keeps CFAR going, we always want there to be more CFAR alumni, and it might even help with that quest.  (The data strongly indicates that your friends will thank you for getting them to come… and will do so even more 6 months later!)

And do please donate to the Winter 2014 fundraising drive!



[1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Many thanks to Peter McCluskey, Jesse Liptrap, Nick Tarleton, Stephanie Zolayvar, Arram Sabeti, Liron Shapira, Ben Hoskin, Eric Rogstad, Matt Graves, Alyssa Vance, Topher Hallquist, and John Clasby for together putting up $120k in matching funds.

[2] This post is a collaborative effort by many at CFAR.

[3] The title we ran it under was "TA training", but the name desperately needs revision.

[4] This is missing several I can almost-recall and probably several others I can’t; please PM me if you remember one I missed.  Many of the startups on this list have multiple founders who are CFAR alumni.  Omitted from this list are startups that were completed before the alumni met us, e.g. Skype; we included, however, startups that were founded before folks met us and carried on after they became alumni (even when we had no causal impact on the startups).  Also of note is that many CFAR alumni are in founding or executive positions at EA-associated non-profits, including CEA, CSER, FLI, Leverage, and MIRI.  One reason we're happy about this is that it means that the curriculum we're developing is being developed in concert with people who are trying to really actually accomplish hard goals, and who therefore want more from techniques than just "does this sound cool".

[5] Ideally, such a prototype might accomplish increases in (1), (2), and (3) in a manner that felt like facets of a single art, or that all drew upon a common base of simpler cognitive skills (such as subskills for getting accurate beliefs into system 1, for navigating internal disagreement, or for overcoming learned helplessness).  A “prototype” would thus also be a product that, when we apply local optimization on it, takes us to curricula that are strategically important to the world -- rather than, say, taking us to well-honed “feel inspired about your life” workshops, or something.

Relative to this ideal, the current curriculum seems to in fact accomplish some of (2), for all that we don't have RCTs yet; but it is less successful at (1) and (3).  (We'd like, eventually, to scale up (2) as well.)  However, we suspect the curriculum contains seeds toward an art that can succeed at (1) and (3); and we aim to demonstrate this in 2015.

[6] Apologies for the jargon.  It is probably about time we wrote up a glossary; but we don't have one yet.  If you care, you can pick up some of the vocabulary from our sample workshop schedule.

[7] This isn’t the detailed tactical plan; we’ll need one of those separately, and we have a partial version that this margin was too small to contain; it’s meant to be a listing of how you and we can tell whether we won, at the end of 2015.

[8] The Apgar score for assessing newborn health is inspiring, here; if you've not seen it before, and you're wondering how one could possibly come up with a metric, you might glance at its Wikipedia page.  Basically, instead of coming up with a single 0 to 10 newborn health scale, Dr. Apgar chose 5 simpler components (newborn color; newborn heart rate; etc.), came up with very simple "0 to 2" measures for each, and then added them.
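For concreteness, here is a minimal sketch of that recipe in code.  The subscore names follow Apgar's actual components; this is an illustration of the "several simple subscores, summed" pattern, not an actual CFAR (or clinical) scoring tool:

```python
# Apgar-style composite metric: five simple 0-2 subscores, summed to 0-10.
# Component names follow Apgar's mnemonic; this is illustrative only.

COMPONENTS = ["appearance", "pulse", "grimace", "activity", "respiration"]

def composite_score(subscores: dict) -> int:
    """Sum five 0-2 subscores into a single 0-10 score."""
    for name in COMPONENTS:
        if not 0 <= subscores[name] <= 2:
            raise ValueError(f"{name} must be scored 0, 1, or 2")
    return sum(subscores[name] for name in COMPONENTS)

print(composite_score({"appearance": 2, "pulse": 2, "grimace": 1,
                       "activity": 2, "respiration": 2}))  # -> 9
```

The point of the design is that each component is easy to judge quickly and reliably, so the sum can be computed at the bedside; an analogous rationality metric would trade a single fuzzy "how rational is this person, 0 to 10?" judgment for several crisp sub-judgments.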

61 comments

Comments sorted by top scores.

comment by Zack_M_Davis · 2014-12-26T18:12:13.016Z · LW(p) · GW(p)

I donated $4,000 the other week (or I will have once the check clears).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-26T19:57:39.389Z · LW(p) · GW(p)

Thank you so much for this.

comment by Qiaochu_Yuan · 2014-12-26T22:18:46.892Z · LW(p) · GW(p)

Thanks for the detailed update! Donated $1,500.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-26T22:25:26.064Z · LW(p) · GW(p)

Thank you! It helps our morale, as well as our budget.

comment by beoShaffer · 2015-01-01T04:04:49.785Z · LW(p) · GW(p)

Gave $8000

comment by Morendil · 2014-12-26T19:36:48.381Z · LW(p) · GW(p)

Donated $300. Happy New Year!

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-26T19:56:52.339Z · LW(p) · GW(p)

Thanks! We appreciate it a lot; and happy new year to you!

comment by Error · 2014-12-26T15:54:02.301Z · LW(p) · GW(p)

we realized we had reached a local optimum and become stuck...So then we smashed everything with a hammer...and we think we're now out of the local optimum

Suggestion: A unit on identifying and escaping bad local optima, if you don't have one already. It seems to me that an awful lot of people-years are lost to situations that are sub-par but painful to get out of (e.g. crappy jobs).

Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.

I'd be curious to see a post-mortem on this and other failed efforts. I like that CFAR is willing to acknowledge when it's screwed up. That I don't find this willingness terribly surprising says some nice things about the LW-sphere it pulls from.

Replies from: Raemon, lirene, ColonelMustard
comment by Raemon · 2014-12-26T22:38:35.124Z · LW(p) · GW(p)

Generally upvoted, but I think there's a significant difference between "tried something that didn't work" and "screwed up" - the former is executing on a correct decision algorithm (which includes explore as well as exploit patterns), the latter means actually making a bad decision given the available information.

comment by lirene · 2014-12-29T14:51:39.251Z · LW(p) · GW(p)

I'd also be curious to see an elaboration on the Attention workshop. The concept of attention as a limited and important resource was one of my main takeaways from the 4-day workshop (+discussions on the alumni list), leading me to the tools I needed to gain better focus and not feel overwhelmed all the time. Now and then I try to explain the concepts in conversations with people who I think might benefit from it, so I'd be interested in how not to do it.

comment by ColonelMustard · 2014-12-26T22:54:14.199Z · LW(p) · GW(p)

Strongly agree with the last two sentences here.

comment by aarongertler · 2014-12-28T03:51:54.389Z · LW(p) · GW(p)

I gave $50, and plan to give substantially more within a year of graduation. That was one hell of a "big picture" section, Anna.

comment by lukeprog · 2014-12-26T15:46:48.530Z · LW(p) · GW(p)

In case LWers are wondering why MIRI didn't post to LW about its own fundraising drive, that's because we already finished it.

Also, if your employer does corporate matching (check here) and you haven't used it all up yet and you'd like to donate to CFAR, remember to do so before January 1st so that your corporate matching for 2014 doesn't go unused!

Replies from: beoShaffer, Metus
comment by beoShaffer · 2014-12-26T22:56:23.010Z · LW(p) · GW(p)

Is it currently better to donate to CfAR or MIRI?

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2014-12-28T03:22:29.955Z · LW(p) · GW(p)

Based on the fact that MIRI has finished its fundraising drive and CFAR has not, I'm gonna guess CFAR. Especially because of the matching.

comment by Metus · 2014-12-26T19:37:19.713Z · LW(p) · GW(p)

Any other fundraisers interesting to LW going on?

Replies from: amcknight, homunq, Gleb_Tsipursky
comment by amcknight · 2015-01-28T00:53:53.129Z · LW(p) · GW(p)

Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the Skeptics community and lots of other folks, and kind of work as a pretty front-end for GiveWell. More here. (Full disclosure, I'm on their Board of Directors.)

comment by homunq · 2015-02-23T00:27:04.868Z · LW(p) · GW(p)

Electology is an organization dedicated to improving collective decision making — that is, voting. We run on a shoestring budget, somewhere in the low five figures of dollars per year. We've helped get organizations such as the German Pirate Party and the various US state Libertarian Parties to use approval voting, and gotten bills brought up in several states (no major victories so far, but we're just starting).

Is a better voting system worth it, even if most people still vote irrationally? I'd say emphatically yes. Plurality voting is just a disaster as a system, filled with pathological results, perverse incentives, and pernicious equilibria. Credible numerical estimates (utility-based simulations) suggest that better systems such as approval voting offer as much improvement again as the move from dictatorship to democracy.

Replies from: Lumifer
comment by Lumifer · 2015-02-23T18:36:26.443Z · LW(p) · GW(p)

Credible numerical estimates (utility-based simulations)

The first three words here are in contradiction with the last three words... :-/

Replies from: homunq
comment by homunq · 2015-02-24T12:09:27.833Z · LW(p) · GW(p)

I presume you're saying that utility-based simulations are not credible. I don't think you're actually trying to say that they're not numerical estimates. So let me explain what I'm talking about, then say what parts I'm claiming are "credible".

I'm talking about Monte Carlo simulations of voter satisfaction efficiency. You use some statistical model to generate thousands of electorates (that is, voters with numeric utilities for candidates); a media model to give the voters information about each other; and a strategy model to turn information, utilities, and choice of voting system into valid ballots for that voting system. Then you see who wins each time, and calculate the average overall utility of the winners. Clearly, there are a lot of questionable assumptions in the statistical, media, and strategy models, but the interesting thing is that exploring various assumptions in all of those cases shows that the (plurality − dictatorship) ≈ (good system − plurality) relationship is pretty robust, with various systems such as approval, Condorcet, majority judgment, score, or SODA in place of "good system".
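As a rough sketch of that pipeline (with a uniform random-utility electorate standing in for the statistical model, honest voters in place of the media and strategy models, and all names invented for illustration — this is not the actual simulation code), the core loop looks something like:

```python
import random

def simulate_vse(n_elections=2000, n_voters=100, n_candidates=5):
    """Toy Monte Carlo estimate of voter satisfaction efficiency (VSE).

    Voters draw i.i.d. uniform utilities for candidates and vote honestly.
    VSE scale: 0 = random winner, 1 = utility-maximizing winner.
    """
    sums = {"plurality": 0.0, "approval": 0.0, "random": 0.0, "ideal": 0.0}
    for _ in range(n_elections):
        utils = [[random.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        social = [sum(v[c] for v in utils) for c in range(n_candidates)]

        # Plurality: each voter names only their single favorite.
        plur = [0] * n_candidates
        for v in utils:
            plur[v.index(max(v))] += 1

        # Approval: each voter approves every candidate above their own mean.
        appr = [0] * n_candidates
        for v in utils:
            cutoff = sum(v) / n_candidates
            for c, u in enumerate(v):
                if u > cutoff:
                    appr[c] += 1

        sums["plurality"] += social[plur.index(max(plur))]
        sums["approval"] += social[appr.index(max(appr))]
        sums["random"] += sum(social) / n_candidates  # random-winner baseline
        sums["ideal"] += max(social)                  # best possible winner

    return {m: (sums[m] - sums["random"]) / (sums["ideal"] - sums["random"])
            for m in ("plurality", "approval")}

print(simulate_vse())
```

Even this stripped-down toy typically reproduces the qualitative result: approval's winner tracks total utility much more closely than plurality's, which suffers badly from vote-splitting once there are several candidates.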

There are certainly various ways to criticize the above.

  • "Don't believe it": If you think that I've messed up my math or not done a good job with the sensitivity analysis, of course you'd question my conclusions. But if you want to play with my code to check it, it's here.

  • "Utilitarianism is a bad metric": It may not be perfect, but as far as I can tell it's the only rational way to put numbers on things.

  • "Democracy is a bad idea": In other words, if you think that the average voter's estimate of their utility for a candidate has 0 or negative correlation with their true utility of that candidate winning, then this simulation is garbage. I'd respond with the old saying about democracy being the worst system except all the others.

  • "The advantages of democracy over dictatorship aren't in terms of who's in charge": if you think that democracy's clear superiority to dictatorship in terms of human welfare comes from something other than choosing better leaders (such as, for instance, reducing the prevalence of civil wars), then improving the voting system might not be able to have comparable payoff as instituting a voting system to begin with. I'd respond that this critique is probably partially right, but on the other hand, better leadership could credibly have better responses to crises (financial, environmental, and/or existential-risk) which could indeed be on the same order as the democracy dividend.

All in all, taking a more outside view, I see how the combination of the above objections would reduce your estimate of the expected "voting system dividend". Still, when I "shut up and multiply" I get: $80 trillion world GDP × plausible (conservative) effect size in a good year of 2% × .1 plausible portion of good years over time × .5 plausible portion of good years over space (some countries' economies might already be immune to the kind of harm this could prevent) × .5 chance you trust my simulations × .1 correlation of voter preference with utility × .5 probability leadership makes any difference = about $2 billion/year potential payoff in expected value, even without compounding. That seems to me like (a) quite a conservative choice of factors, (b) not a totally implausible end result, and (c) still big enough to care about. Of course, it's incredibly back-of-the-envelope, but I invite you to try doing the estimation yourself.
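Written out as a bare product (using only the factors just listed), the arithmetic checks out:

$$\$8\times10^{13} \times 0.02 \times 0.1 \times 0.5 \times 0.5 \times 0.1 \times 0.5 = \$2\times10^{9}\ \text{per year}.$$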

Replies from: Lumifer
comment by Lumifer · 2015-02-24T16:21:47.971Z · LW(p) · GW(p)

I presume you're saying that utility-based simulations are not credible, because they're clearly numerical estimates.

Actually, no, that's not what I mean. I have no problems with numerical estimates in general.

What I mean by "credible", in this context, is "shown to be relevant to real-life situations" and "supported by empirical data".

You've constructed a model. You've played with this model and have an idea of how it behaves in different regimes. That's all fine. But then you imply that this model reflects the real world and it's at this point that I start to get sceptical and ask for evidence. Not evidence of how your model works, but evidence that the map matches the territory.

Replies from: homunq, homunq
comment by homunq · 2015-02-24T16:44:51.362Z · LW(p) · GW(p)

The model is not easy to subject to full, end-to-end testing. It seems reasonable to test it one part at a time. I'm doing the best I can to do so:

  • I've run an experiment on Amazon Mechanical Turk involving hundreds of experimental subjects voting in dozens of simulated elections to probe my strategy model.

  • I'm working on getting survey data and developing statistical tools to refine my statistical model (mostly, posterior predictive checks; but it's not easy, given that this is a deeper hierarchical model than most).

  • In terms of the utilitarian assumptions of my model, I'm not sure how those are testable rather than just philosophical / assumed axioms. Not that I regard these assumptions as truly axiomatic, but that I think they're pretty necessary to get anywhere at all, and in practice unlikely to be violated severely enough to invalidate the work.

  • I haven't started work on testing / refining my media model (other than some head-scratching), but I can imagine how to do at least a few spot checks with posterior predictive checks too.

  • The assumptions that preference and utility correlate positively, even in an environment where candidates are strategic about exploiting voter irrationality, are certainly questionable. But insofar as these are violated, it would just make democracy a bad idea in general, not invalidate the fact that plurality is still a worse idea than other voting systems such as approval. Also, I think it would be basically impossible to test these assumptions without implausibly accurate and unbiased measurements of true utility. Finally, call me a hopeless optimist, but I do actually have faith that democracy is a good idea because "you can't fool all the people all the time".

tl;dr: I'm working on this.

Replies from: Lumifer
comment by Lumifer · 2015-02-24T16:58:49.150Z · LW(p) · GW(p)

I do actually have faith that democracy is a good idea

Democracy is complicated. For a simple example, consider full direct democracy: instant whole-population referendums on every issue. I am not sure anyone considers this a good idea -- successful real-life democratic systems (e.g. the US) are built on limited amounts of democracy which is constrained in many ways. Given this, democracy looks to be a Goldilocks-type phenomenon where you don't want too little, but you don't want too much either.

And, of course, democracy involves much more than just voting -- there are heavily... entangled concepts like the rule of law, human rights, civil society, etc.

Replies from: homunq
comment by homunq · 2015-02-24T17:10:32.685Z · LW(p) · GW(p)

Full direct democracy is a bad idea because it's incredibly inefficient (and thus also boring/annoying, and also subject to manipulation by people willing to exploit others' boredom/annoyance). This has little or nothing to do with whether people's preferences correlate with their utilities, which is the question I was focused on. In essence, this isn't a true Goldilocks situation ("you want just the right amount of heat") but rather a simple tradeoff ("you want good decisions, but don't want to spend all your time making them").

As to the other related concepts... I think this is getting a bit off-topic. The question is, is energy (money) spent on pursuing better voting systems more of a valid "saving throw" than when spent on pursuing better individual rationality. That's connected to the question of the preference/utility correlation of current-day, imperfectly-rational voters. I'm not seeing the connection to rule of law &c.

Replies from: Lumifer
comment by Lumifer · 2015-02-24T17:24:21.259Z · LW(p) · GW(p)

Full direct democracy is a bad idea because it's incredibly inefficient

No, I don't think so. It is a bad idea even in a society technologically advanced to make it efficient and even if it's invoked not frequently enough to make it annoying.

whether people's preferences correlate with their utilities

People's preferences are many, multidimensional, internally inconsistent, and dynamic. I am not quite sure what do you want to correlate to a single numerical value of "utility".

The question is, is energy (money) spent on pursuing better voting systems more of a valid "saving throw" than when spent on pursuing better individual rationality.

Why are you considering only these two options?

I'm not seeing the connection to rule of law &c.

The connection is that what is a "better" voting system depends on the context, context that includes things like rule of law, etc.

Replies from: homunq
comment by homunq · 2015-02-24T17:53:47.622Z · LW(p) · GW(p)

You're raising some valid questions, but I can't respond to all of them. Or rather, I could respond (granting some of your arguments, refining some, and disputing some), but I don't know if it's worth it. Do you have an underlying point to make, or are you just looking for quibbles? If it's the latter, I still thank you for responding (it's always gratifying to see people care about issues that I think are important, even if they disagree); but I think I'll disengage, because I expect that whatever response I give would have its own blemishes for you to find.

In other words: OK, so what?

Replies from: Lumifer
comment by Lumifer · 2015-02-24T18:16:24.831Z · LW(p) · GW(p)

Some people find blemish-finding services valuable, some don't :-)

Replies from: homunq
comment by homunq · 2015-02-24T18:22:53.325Z · LW(p) · GW(p)

Fair enough. Thanks. Again, I agree with some of your points. I like blemish-picking as long as it doesn't require open-ended back-and-forth.

comment by homunq · 2015-02-24T16:48:34.215Z · LW(p) · GW(p)

(small note: the sentence you quote from me was unclear. "because" related to "presume", not "saying". But your response to what I accidentally said is still largely cogent in relation to what I meant to say, so the miscommunication isn't important. Still, I've corrected the original. Future readers: lumifer quoted me correctly.)

comment by Gleb_Tsipursky · 2014-12-27T17:43:51.758Z · LW(p) · GW(p)

Well, Intentional Insights is a Rationality-themed nonprofit dedicated to spreading rationality to a broad audience and thus raising the sanity waterline. We have recently received our official nonprofit designation so haven't had time to plan out and run a fundraiser as such, but we are accepting donations, and they are tax-deductible: anything you give, whether in time/skills/money, would be super-helpful. We especially appreciate those who become monthly donors, as that allows us to plan ahead and also show other potential donors and granting agencies that we have a good base of support and can bring our mission into the world well. We would be happy to talk more to you on the phone/Skype about this matter if you wish, and/or you can donate on the website itself directly. The donation button is on the top left of the website home page, and the monthly recurring donation indication is just below the donation button itself.

Replies from: beoShaffer
comment by beoShaffer · 2014-12-28T16:29:53.220Z · LW(p) · GW(p)

This sounds like a good idea, but I had a look at the website and it is unclear to me exactly how you plan to raise the sanity waterline.

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2014-12-28T18:12:21.754Z · LW(p) · GW(p)

Here's a description of what we plan to do and how we plan to do it. Let me know any questions you might have!

comment by dthunt · 2014-12-31T19:35:40.920Z · LW(p) · GW(p)

Donated!

comment by Gleb_Tsipursky · 2014-12-27T02:35:01.900Z · LW(p) · GW(p)

My wife and I are monthly donors, and here's to CFAR having a great 2015! I'd also love to talk about potential collaborations between CFAR and Intentional Insights as we get our own infrastructure and internal operations set up well in the next month or two.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-27T21:11:44.402Z · LW(p) · GW(p)

Thanks! Discussing collaborations sounds good; easiest way to do this is to schedule an appointment with me here.

(Others are also very welcome to do this.)

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2014-12-28T18:10:53.625Z · LW(p) · GW(p)

Anna, will do as we get our plans and infrastructure more clear!

comment by [deleted] · 2014-12-28T13:05:46.899Z · LW(p) · GW(p)

CFAR seems to many of us to be among the efforts most worth investing in. This isn’t because our present workshops are all that great. Rather, it is because, in terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.

Why? You tend to be marketing your workshops to people who've already got significant training in much of Traditional Rationality. In my view, much of the world's irrationality comes from people who have not even heard of the basics or people whose resource constraints do not allow them to apply what they know, or both. In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world through prevention of Dumb Moves than giving semi-advanced cognitive self-improvement workshops to the Silicon Valley elite.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

Replies from: Vaniver, Arran_Stirton, homunq, dthunt
comment by Vaniver · 2014-12-28T18:00:17.629Z · LW(p) · GW(p)

In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world through prevention of Dumb Moves than giving semi-advanced cognitive self-improvement workshops to the Silicon Valley elite.

So, I recently started training in the Alexander Technique, which is a well-developed school of thought and practice on how to use bodies well. It's been taught for about a century, and during the 1940s there was a brief attempt to teach it in schools to children.

My impression is that the children didn't get all that much out of it- yes, they had better posture, and the students who might have been klutzier were more coordinated. But the people that keep Alexander alive are mostly the performers and musicians and people with painful movement problems- that is, the sort of people that get enough value out of it that it makes sense for them to take special lessons and think about it in their off time and so on.

Similarly, it might be true that while there is a great mass of irrationality out there, cognitive labor, like any other labor, can be specialized- and so focusing your rationality training on people who specialize in thinking makes sense just as focusing your movement training on people who specialize in movement makes sense. (Here I'm including speaking as movement for reasons that are anatomically obvious.)

But supposing your model is correct--that a broad rationality education would do the most good--I seem to recall hearing about an undergraduate-level rationality curriculum being developed by Keith Stanovich, a CFAR advisor, and I suspect Anna or others may know more details. Once we've got an undergraduate curriculum being taught, that should teach us enough to develop high-school level curriculum, and so on down to songs that can be sung in kindergarten.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

Why? It seems to me that training people to think well is better, because if they end up disagreeing that gives you valuable information to update on.

Replies from: None
comment by [deleted] · 2014-12-28T21:00:33.829Z · LW(p) · GW(p)

Similarly, it might be true that while there is a great mass of irrationality out there, cognitive labor, like any other labor, can be specialized- and so focusing your rationality training on people who specialize in thinking makes sense just as focusing your movement training on people who specialize in movement makes sense. (Here I'm including speaking as movement for reasons that are anatomically obvious.)

This would imply that CFAR should be pitching its workshops to academics and government policymakers. Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world. Actual scientific research and public policy-making are (or, at least, scientific research is fairly intensive cognitive labor... I wouldn't necessarily say it has a high mean impact on any per-unit basis).

Why? It seems to me that training people to think well is better, because if they end up disagreeing that gives you valuable information to update on.

I would hope so! But what information indicates CFAR does this?

But supposing your model is correct--that a broad rationality education would do the most good--I seem to recall hearing about an undergraduate-level rationality curriculum being developed by Keith Stanovich, a CFAR advisor, and I suspect Anna or others may know more details. Once we've got an undergraduate curriculum being taught, that should teach us enough to develop high-school level curriculum, and so on down to songs that can be sung in kindergarten.

That's good, but I worry that it doesn't go far enough. The issue is not that we're failing to teach probability theory to kindergartners -- they don't need it and don't want it. The issue is that our society allows people to walk around thinking that there isn't actually an external world to which their actions will be held accountable at all, and that subjective feeling both governs reality and normatively dictates correct actions.

To make an offensive political quip: there is the assertion-based community, and the reality-based community; too many people belong to the former and not nearly enough to the latter. The biggest impact we can have on "raising the sanity waterline" is to move people from the group who believe in a Fideist Theory of Truth ("Things are true by virtue of how I feel about them") to people who believe in the Correspondence Theory of Truth ("Things are true when they match the world outside my head!"), which also thus inspires people to listen to educated domain experts at all.

To give a flagrantly stupid example, we really really really don't want society's way of dealing with the Friendly AI problem determined by people who believe that AIs have souls and would never harm anyone because they don't have original sin. Giving Silicon Valley executives effectiveness workshops will not avert this problem, while teaching the broad public the very basic worldview that the universe is lawful, rather than consciously optimizing for recognizably humanoid goals, is likely to affect this problem.

Replies from: Vaniver
comment by Vaniver · 2014-12-29T08:35:42.555Z · LW(p) · GW(p)

This would imply that CFAR should be pitching its workshops to academics and government policymakers.

My understanding is that CFAR is attended by both present and likely future academics; I don't know about government policymakers. (I've met people on national advisory boards from at least two countries at CFAR workshops, but I don't pretend to know how much influence they have on those boards, or how much influence those boards have on actual policy.)

Not to be a dick, but the latest local-mobile-social app-kerjigger is not intensive cognitive labor with a high impact on the world.

At time of writing this comment, there are 14 startups listed in the post. What number of them would you consider local-mobile-social apps? (This seems to be an example of "not to be X" signifying "I am aware this is being an X but would like to avoid paying the relevant penalty.")

I would hope so! But what information indicates CFAR does this?

I have always gotten the impression from them that they want to be as cause agnostic as is reasonable, but I can't speak to their probability estimates over time and thus how they've updated.

The biggest impact we can have on "raising the sanity waterline" is to move people from the group who believe in a Fideist Theory of Truth ("Things are true by virtue of how I feel about them") to people who believe in the Correspondence Theory of Truth ("Things are true when they match the world outside my head!"), which also thus inspires people to listen to educated domain experts at all.

Are there people working on a reproducible system to help people make this move? It's not at all obvious to me that this would be the comparative advantage of the people at CFAR. (Though it seems to me that much of the CFAR material is helping people finish making that transition, or, at least, get further along it.)

comment by Arran_Stirton · 2014-12-29T14:34:44.912Z · LW(p) · GW(p)

As far as I understand it, CFAR's current focus is research and developing their rationality curriculum. The workshops exist to facilitate their research, they're a good way to test which bits of rationality work and determine the best way to teach them.

In this model, broad improvements in very fundamental, schoolchild-level rationality education and the alleviation of poverty and time poverty are much stronger prospects for improving the world

In response to the question "Are you trying to make rationality part of primary and secondary school curricula?" the CFAR FAQ notes that:

We’d love to include decisionmaking training in early school curricula. It would be more high-impact than most other core pieces of the curriculum, both in terms of helping students’ own futures, and making them responsible citizens of the USA and the world.

So I'm fairly sure they agree with you on the importance of making broad improvements to education. It's also worth noting that effective altruists are among their list of clients, so you could count that as an effort toward alleviating poverty if you're feeling charitable.

However they go on to say:

At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans.

Additionally, for them to change public-school curricula they have to first develop a rationality curriculum, precisely what they're doing at the moment - building a 'minimum strategic product'. Giving "semi-advanced cognitive self-improvement workshops to the Silicon Valley elite" is just a convenient way to test this stuff.

You might argue for giving the rationality workshops to "people who have not even heard of the basics", but there are a few problems with that. Firstly, the number of people CFAR can teach in the short term is a tiny percentage of the population, nowhere near enough to have a significant impact on society (unless those people are high-impact people, but then they've probably already heard of the basics). Then there's the fact that rationality just isn't viewed as useful in the eyes of the general public, so most people won't care to learn it. Also, teaching the basics of rationality in a way that sticks is quite difficult.

Mind, if what you're really trying to do is propagandize the kind of worldview that leads to taking MIRI seriously, you rather ought to come out and say that.

I don't think CFAR is aiming to propagandize any worldview; they're about developing rationality education, not getting people to believe any particular set of beliefs (other than perhaps those directly related to understanding how the brain works). I'm curious about why you think they might be (intentionally or unintentionally) doing so.

Replies from: shullak7
comment by shullak7 · 2015-01-03T05:32:02.581Z · LW(p) · GW(p)

I truly wish that I was in a position to help make rationality training part of the public school curriculum, because I think that would be of tremendous value to our society. I do work at a library, and people hold workshops there... libraries could be a good place to "spread the word" to people who might be interested in rationality education but may not have heard about it. The workshop would have to be free of charge, though, and CFAR isn't there yet.

comment by homunq · 2015-02-23T00:07:34.173Z · LW(p) · GW(p)

In terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.

Improvements to collective decision making seem to be potentially an even bigger win. I mean, voting reform; the kind of thing advocated by Electology. Disclaimer: I'm a board member.

Why do I think that? Individual human decisionmaking has already been optimized by evolution. Sure, that optimization doesn't fit perfectly with a modern need for rationality, but it's pretty darn good. However, democratic decisionmaking is basically still using the first system that anybody ever thought of, and Monte Carlo utility simulations show that we can probably make it at least twice as good (using a random dictator as a baseline).

On the other hand, achieving voting reform requires a critical mass, while individual rationality only requires individuals. And Electology is not as far along in organizational growth as CFAR. But it seems to me that it's a complementary idea, and that it would be reasonable for an effective altruist to diversify their "saving throw" contributions. (We would also welcome rationalist board members or volunteers.)

Replies from: None
comment by [deleted] · 2015-02-24T12:06:32.987Z · LW(p) · GW(p)

Improvements to collective decision making seem to be potentially an even bigger win. I mean, voting reform; the kind of thing advocated by Electology. Disclaimer: I'm a board member.

Disclaimer: I now support you. What do you need done, what's your vision, and where do you work? Making democracy work better has been a pet drive of mine for an extremely long time.

EDIT: Upon your website loading and my finding that you push Approval Voting, I am now writing in about volunteering.

comment by dthunt · 2014-12-28T16:12:54.074Z · LW(p) · GW(p)

I'm kind of curious; what do you think CFAR's objective is 5 years from now (assuming they get the data they want and it strongly supports the value of the workshops)?

Replies from: None
comment by [deleted] · 2014-12-28T21:08:17.137Z · LW(p) · GW(p)

what do you think CFAR's objective is 5 years from now (assuming they get the data they want and it strongly supports the value of the workshops)?

In all sincerity, I don't actually know, and am very open to developing an opinion when I get actual information. I reread TFA, and it doesn't seem to say. It does come out and state that "CFAR is one of the efforts most worth investing in", but it doesn't say how that worth will manifest itself within any bounded time period at all.

comment by Evan_Gaensbauer · 2015-01-30T22:04:24.053Z · LW(p) · GW(p)

These are my thoughts as a CFAR workshop alumnus. I don't have funds to donate right now, so my perspective isn't backed up either by a donation or by a conscious choice not to donate. Feel free to put as much weight on my opinion as (any of) you like. I figure I would comment because providing more data is better than less. I don't claim that my perspective is typical of CFAR workshop alumni.

  • After I attended a workshop, realizing its cost for participants as revenue for the CFAR, I did a Fermi estimate of how much revenue CFAR actually achieves. It included an estimate of the revenue and cost of each participant, multiplied by the number of participants, minus the CFAR's operating costs. I concluded that at best the CFAR would only be making ends meet if their only source of revenue was its workshops. As expensive as the workshops may seem, reading about the CFAR's finances in this post made me realize how seriously the CFAR takes their own goal of providing and testing their minimal viable product. Regarding their finances and operations, they're not goofing around.

  • The CFAR workshop I attended was a great experience for me. I mention to some friends that they seem like the sort who would get a lot out of it. However, I don't give them a full recommendation, because the cost is often prohibitively expensive for those in or just out of university. My friends tell me this, and I'm well aware of it. Grand hopes for the future aside, I hope that if CFAR received enough donations it could offer its workshops at a lower cost. I hope this not only for my friends, but also for all the others who aren't attending because of cost, yet whose attendance would benefit themselves, CFAR, and its alumni community. This is why I personally respect their fundraising efforts.

  • Hooray to CFAR for being one of the few (non-profit) organizations who admit "we tried some stuff that didn't work well; we'll be rejigging and testing and improving our efforts in the future!" Kudos! This earnestness is refreshing.

  • CFAR is taking being part of effective altruism quite seriously. It didn't seem to me they were treating this association as seriously one year ago. They might have felt as serious, but I wasn't receiving the signal; I am now. I also like their honesty in expressing that they're not just identifying with effective altruism, but trying to reach the standard of what it ought to be.
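A minimal version of the Fermi estimate from the first bullet, where every number is a placeholder assumption for illustration rather than an actual CFAR figure:

```python
# Back-of-the-envelope workshop economics. All numbers are hypothetical
# placeholders chosen for illustration, not actual CFAR figures.
price_per_participant = 3900           # assumed workshop price, USD
variable_cost_per_participant = 1500   # assumed food, venue, materials
participants_per_workshop = 25         # assumed
workshops_per_year = 10                # assumed
annual_operating_costs = 600_000       # assumed salaries, office, etc.

gross_margin = ((price_per_participant - variable_cost_per_participant)
                * participants_per_workshop * workshops_per_year)
net = gross_margin - annual_operating_costs
print(f"gross margin: ${gross_margin:,}; net: ${net:,}")
# With placeholders like these, workshops alone roughly break even at best,
# which is the conclusion the estimate in the first bullet reaches.
```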

comment by dho · 2014-12-27T22:33:13.196Z · LW(p) · GW(p)

Love hearing about how much CFAR has learned in 2014 and your aggressive 2015 goals. Thanks for the look into your operations and the reminder to donate!

comment by bentarm · 2015-01-04T15:04:02.171Z · LW(p) · GW(p)

Serious question: why do you (either CFAR as an organisation or Anna in particular) think in-person workshops are more effective than, e.g., writing a book or making a MOOC-style series of online lessons for teaching this stuff? Is it actually more about network building than about the content of the workshops themselves? Do you not yet understand how to teach this well enough to do it in video format? Are videos inherently less profitable?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2015-01-07T14:15:10.266Z · LW(p) · GW(p)

I don't speak for CFAR, but I believe they wish to develop their product further before taking the time to write extensively about it: the techniques are still under active development, and there's no point in writing a lot about something that may change drastically the next day.

It's also true that a large part of the benefit of the workshops comes from interacting with other participants and instructors and getting immediate feedback, as well as from becoming a part of the community.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-01-08T18:34:45.260Z · LW(p) · GW(p)

I think the tighter feedback loop is a big point. Being there in person really helps you assess what works and what doesn't.

Of course, a change of format can get in the way when it eventually comes, but I think the workshop format will help anyway.

On the specific suggestion of a book, there's already a lot of written material on this.

comment by ColonelMustard · 2014-12-26T22:35:41.339Z · LW(p) · GW(p)

How does CFAR rank other thinking-skills organisations outside the EA/MIRI cluster? For instance, is Ember Associates plausibly one of the most important organisations currently existing?

Replies from: Curiouskid
comment by Curiouskid · 2014-12-28T03:27:38.670Z · LW(p) · GW(p)

What is Ember Associates? I did a quick Google search, and when I clicked on their site, I got a page that said "Website Expired". What other groups do you have in mind?

Replies from: ColonelMustard, malcolmocean
comment by ColonelMustard · 2014-12-28T22:37:17.134Z · LW(p) · GW(p)

It is, or was, an organisation that taught thinking skills. Please don't focus on the example; it was the first one that came to mind, and I didn't realise the website had expired. The point is that a lot of groups claim to teach thinking skills. Do you consider all of them to count as EA? If not, what distinguishes CFAR from those that don't?

comment by MalcolmOcean (malcolmocean) · 2014-12-28T03:31:14.414Z · LW(p) · GW(p)

Was just about to post the same thing. Letting your website expire is definitely evidence against effectiveness.

comment by badschema · 2015-01-30T13:20:33.566Z · LW(p) · GW(p)

Donated! Hooray for matching!

comment by [deleted] · 2014-12-31T00:02:34.923Z · LW(p) · GW(p)

Hi Anna - during last year's fundraiser you said you were allowed to match recurring monthly donations (up to one year's worth) pledged to CFAR. Do you know if that policy is still in effect?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-31T05:52:04.939Z · LW(p) · GW(p)

Yep! If you start a monthly pledge and tell us you intend to keep it up for the 2015 calendar year (by messaging me here, emailing me, or commenting in the public thread), it is matched at its yearly amount: if you pledge $n/month, it is matched at $12n.

Thanks for bringing this up.

comment by evand · 2014-12-31T03:57:23.923Z · LW(p) · GW(p)

Thank you for posting this. An excellent writeup all around, and it gives me lots of hope that CFAR will continue improving.

Is your definition of "do good in the world" approximately equivalent to "donating to effective charity"? It sounds from this post like it is, and I find that odd. Your startup list is impressive, and personally I would credit the founders of several of those startups (specifically, all the ones I know anything about) with doing good in the world, regardless of their charitable activities or lack thereof.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2014-12-31T05:50:17.390Z · LW(p) · GW(p)

Is your definition of "do good in the world" approximately equivalent to "donating to effective charity"?

No, not at all. Donating to effective charity can be highly important; but I'll be sad, and think something has gone badly wrong, if e.g. CFAR's altruistic impact occurs exclusively or even mainly through causing such donation. It is important to increase the number of generators of knowledge about what is actually worth doing (rather than, e.g., creating copies of CFAR's founders' initial beliefs on that subject), to increase the number of people capable of finding important gaps in the world and then filling them, and so on.

At the same time, donating to effective charity is both high-impact enough and simple enough that I suspect something will have gone badly wrong if we don't also see a lot of giving of that sort -- it'll suggest an unwillingness to take risks, or to trust others, or to pool together into common efforts, or something similar. I actually have a lot of thoughts on how the above point and this one can both be true, but the subject is a bit unwieldy; I may write a post. In any case, I agree with your nonequivalence.

comment by homunq · 2015-02-22T23:53:47.191Z · LW(p) · GW(p)

One idea for measurement in a randomized trial:

In order to apply, you have to list 4 people who would definitely know how awesome you're being a year from now, and give their contact info. Then choose 1 of those people 6 months later and 1 person a year later, and ask them how awesome the person is being. When you ask, include a "rubric" of sample stories at various awesomeness levels, in which the highest levels are not always just $$$ but sometimes are. Ask the contacts to please not reach out to the person specifically to check awesomeness, because that could introduce bias ("this person is checking, that makes me remember the workshop I did, and feel awesome").

The 4 people should probably include no couples. Your family, long-term friends...

The one way this breaks down is Facebook. If your interaction with each person is separate, and the workshop makes you seem more awesome to each of the 4 people, it is working. But if it just makes you post more upbeat things on Facebook, that might not translate to actual awesomeness. I think that's a really minor factor, though.

Sure, it's going to be a noisy and imperfect measurement. You will have to look at standard deviations and calculate power (including burning all 4 contacts for some people to estimate the within-subject variance). Also, correct for demographic info on the contacts, and use various other tricks to increase power. But one way or another, you'll get a posterior distribution of the causal impact.
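As a rough illustration of that power calculation, here is a minimal simulation sketch. Everything in it is an assumption chosen for illustration: ratings are modeled as normal on a 1-10 scale, the true effect size and rating noise are made up, and a plain two-sample t-test stands in for the fuller analysis (demographic corrections, within-subject variance) described above.

```python
import numpy as np
from scipy import stats

def power_estimate(n_per_arm, effect=0.5, rating_sd=1.5,
                   alpha=0.05, n_sims=5000, seed=0):
    """Fraction of simulated trials in which a two-sample t-test detects
    the effect, where each participant contributes one noisy 'awesomeness'
    rating from a randomly chosen contact."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(5.0, rating_sd, n_per_arm)           # no workshop
        treated = rng.normal(5.0 + effect, rating_sd, n_per_arm)  # workshop
        _, p = stats.ttest_ind(treated, control)
        hits += p < alpha
    return hits / n_sims

for n in (30, 60, 120):
    print(f"n per arm = {n:3d}: power ~ {power_estimate(n):.2f}")
```

With assumptions like these, the design needs on the order of a hundred participants per arm before the signal reliably separates from contact-rating noise, which is exactly why the within-subject variance estimate and the power-boosting tricks matter.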