Why CFAR's Mission?

post by AnnaSalamon · 2016-01-02T23:23:30.935Z · LW · GW · Legacy · 56 comments

Contents

  Q:  Why not focus exclusively on spreading altruism?  Or else on "raising awareness" for some particular known cause?
  Q: Why suppose “sanity skill” can be increased?
  Q.  Even if you can train skills: Why go through all the trouble and complications of trying to do this, rather than trying to find and recruit people who already have the skills?
  Q.  Can a small organization realistically do all that without losing Pomodoro virtue?  (By "Pomodoro virtue", I mean the ability to focus on one thing at a time and so to actually make progress, instead of losing oneself amidst the distraction of 20 goals.)
  Q.  What is CFAR's relationship to existential risk?  And what should it be?
  Q.  Should I do “Earning to Give”?  Also: I heard that there are big funders around now and so “earning to give” is no longer a sensible thing for most people to do; is that true?  And what does all this have to do with CFAR?

Briefly put, CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.

I'd like to explain what this mission means to me, and why I think a high-quality effort of this sort is essential, possible, and urgent.

I used a Q&A format (with imaginary Q's) to keep things readable; I would also be very glad to Skype 1-on-1 if you'd like something about CFAR to make sense, as would Pete Michaud.  You can schedule a conversation automatically with me or Pete.

---

Q:  Why not focus exclusively on spreading altruism?  Or else on "raising awareness" for some particular known cause?

Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) more than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.

Q:  Even given the above -- why focus extra on sanity, or true beliefs?  Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have?  (Also, have you ever met a Less Wronger?  I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)

This is an interesting one, IMO.

Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.

For example:

If I have one floor to sweep, it would be best to hire a person who has pre-existing skill at sweeping floors.

If I have 250 floors to sweep, it would be best to have someone energetic and perceptive, who will stick to the task, notice whether they are succeeding, and improve their efficiency over time.  An "all round competent human being", maybe.

If I have 10^25 floors to sweep, it would... be rather difficult to win at all, actually.  But if I can win, it probably isn't by harnessing my pre-existing skill at floor-sweeping, nor even (I'd claim) my pre-existing skill at "general human competence".  It's probably by using the foundations of science and/or politics to (somehow) create some totally crazy method of getting the floors swept (a process that would probably require actually accurate beliefs, and thus epistemic rationality). 

The world's most important problems look to me more like that third example.  And, again, it seems to me that to solve problems of that sort -- to iterate through many wrong guesses and somehow piece together an accurate model until one finds a workable pathway for doing what originally looked impossible -- without getting stuck in dead ends or wrong turns or inborn or societal prejudices -- it is damn helpful to have something like epistemic rationality.  (Competence is pretty darn helpful too -- it's good to e.g. be able to go out there and get data; to be able to form networking relations with folks who already know things; etc. -- but epistemic rationality is necessary in a more fundamental way.)

For the sake of concreteness, I will claim that AI-related existential risk is among humanity's most important problems, and that it is damn confusing, damn hard, and really really needs something like epistemic rationality and not just something like altruism and competence if one is to impact it positively, rather than just, say, randomly impacting it.  I'd be glad to discuss in the comments.

Q: Why suppose “sanity skill” can be increased?

Let’s start with an easier question: why suppose thinking skills (of any sort) can be increased?

The answer to that one is easy: Because we see it done all the time.  

The math student who arrives at college and does math for the first time with others is absorbing a kind of thinking skill; thus mathematicians discuss a person’s “mathematical maturity”, as a property distinct from (although related to) their learning of this and that math theorem.  

Similarly, the coder who hacks her way through a bunch of software projects and learns several programming languages will have a much easier time learning her 8th language than she did her first; basically because, somewhere along the line, she learned to “think like a computer scientist”...

The claim that “sanity skill” is a type of thinking skill and that it can be increased is somewhat less obvious.  I am personally convinced that the LW Sequences / AI to Zombies gave me something, and gave something similar to others I know, and that hanging out in person with Eliezer Yudkowsky, Michael Vassar, Carl Shulman, Nick Bostrom, and others gave me more of that same thing; a “same thing” that included e.g. actually trying to figure things out; making beliefs pay rent in anticipated experience; using arithmetic to entangle different pieces of my beliefs; and so on.

I similarly have the strong impression that e.g. Feynman’s and Munger’s popular writings often pass on pieces of this same thing; that the convergence between the LW Sequences and Tetlock’s Superforecasting training is non-coincidental; that the convergence between CFAR’s workshop contents and a typical MBA program’s contents is non-coincidental (though we were unaware of it when creating our initial draft); and more generally that there are many types of thinking skill that are routinely learned/taught and that non-trivially aid the process of coming to accurate beliefs in tricky domains.  I update toward this partly from the above convergences; from the fact that Tetlock's training seems to work; from the fact that e.g. Feynman and Munger (and for that matter Thiel, Ray Dalio, Francis Bacon, and a number of others) were shockingly conventionally successful and advocated similar things; and from the fact that there is quite a bit of "sanity" advice that is obviously correct once stated, but that we don't automatically do (advice like "bother to look at the data; and try to update if the data doesn't match your predictions").
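(One formal version of "try to update if the data doesn't match your predictions" is Bayes' rule, the standard the Sequences appeal to.  Writing H for a hypothesis and D for the observed data,

    P(H | D) = P(D | H) * P(H) / P(D)

so a hypothesis that assigned low probability P(D | H) to the data actually observed should lose credence accordingly.)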

So, yes, I suspect that there is some portion of sanity that can sometimes be learned and taught.  And I suspect this portion can be increased further with work.

Q.  Even if you can train skills: Why go through all the trouble and complications of trying to do this, rather than trying to find and recruit people who already have the skills?

The short version: because there don't seem to be enough people out there with the relevant skills (yet), and because it does seem to be possible to help people increase their skills with training.

How can I tell there aren't enough people out there, instead of supposing that we haven't yet figured out how to find and recruit them?

Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems.  Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.

I realize this is a somewhat odd thing to say, and may not seem true to most readers.  It didn't used to seem true to me.  I myself found the idea of existential risk partly through luck, and so, being unable to picture thinkers who had skills I lacked, I felt that anyone would need luck to find out about existential risk.  Recruiters, I figured, could fix that luck-gap by finding people and telling them about the risks.

However, there are folks who can reason their own way to the concept.  And there are folks who, having noticed how much these issues matter, can locate novel paths that may plausibly impact human outcomes.  And even if there weren't folks of this sort, there would be hypothetical people of this sort; much of what is luck at some skill-levels can in principle be made into skill.  So, finding that only a limited number of people are working effectively on these problems (provided one is right about that assessment) does offer an upper bound on how many people today can exceed a certain combined level of epistemic rationality, competence, and caring about the world.

And of course, we should also work to recruit competence and epistemic skill and caring wherever we can find them.  (To help such folk find their highest impact path, whatever that turns out to be.)  We just shouldn't expect such recruiting to be enough, and shouldn't stop there.  Especially given the stakes.  And given the things that may happen if e.g. political competence, "raised awareness", and recruited good intentions are allowed to outpace insight and good epistemic norms on e.g. AI-related existential risk, or other similarly confusing issues.  Andrew Critch makes this point well with respect to AI safety in a recent post; I agree with basically everything he says there.

Q.  Why do you sometimes talk of CFAR's mission as "improving thinking skill", and other times talk of improving all of epistemic rationality, competence, and do-gooding?  Which one are you after?

The main goal is thinking skill.  Specifically, thinking skill among those most likely to successfully use it to positively impact the world.

Competence and caring are relevant secondary goals: some of us have a conjecture that deep epistemic rationality can be useful for creating competence and caring, and of course competence and caring about the world are also directly useful for impacting the world's problems.  But CFAR wants to increase competence and caring via teaching relevant pieces of thinking skill, and not via special-case hacks.  For example, we want to help people stay tuned into what they care about even when this is painful, and to help people notice their aversions and sort through which of their aversions are and aren't based in accurate implicit models.  We do not want to use random emotional appeals to boost specific cause areas, nor to use other special-case hacks that happen to boost efficacy in a manner opaque to participants.

Why focus primarily on thinking skill?  Partly so that we, as an organization, can have enough focus to actually get anything done.  (Organizations that try to accomplish several things at once risk accomplishing none -- and "epistemic rationality" is more of a single thing.)  Partly so our workshop participants and other learners can similarly have focus as learners.  And partly because, as discussed above, it is very very hard to intervene in global affairs in such a way as to actually have positive outcomes, and not merely outcomes one pretends will be positive; and focusing on actual thinking skill seems like a better bet for problems as confusing as e.g. existential risk.

Why include competence and caring at all, then?  Because high-performing humans make use of large portions of their minds (I think), and if we focus only on "accurate beliefs" in a narrow sense (e.g., doing analogs of Tetlock's forecasting training and nothing else), we are apt to generate "straw lesswrongers" whose "rationality" applies mainly to their explicit beliefs... people who can nitpick incorrect statements and can in this way attempt accurate verbal statements, but who are not creatively generative, do not have the energy/competence/rapid iteration required to launch a startup, and cannot exercise good, fast, real-time social skills.  We aim to do better.  And we suspect that working to hit competence and caring via what one might call "deep epistemic rationality" is a route in.

Q.  Can a small organization realistically do all that without losing Pomodoro virtue?  (By "Pomodoro virtue", I mean the ability to focus on one thing at a time and so to actually make progress, instead of losing oneself amidst the distraction of 20 goals.)

We think so, and we think the new core/labs division within CFAR will help.  Basically, Core will be working to scale up the workshops and related infrastructure, which should give a nice trackable set of numbers to optimize -- numbers that, if grown, will enable better financial health for CFAR and will also enable a much larger set of people to train in rationality.

Labs will be focusing on impacting smaller numbers of people who are poised to impact existential risk (mainly), and on seeing whether our "impacts" on these folk do in fact seem to help with their impact on the world.

We will continue to work together on many projects and to trade ideas frequently, but I suspect that this separation into two goals will give more "Pomodoro virtue" to the whole organization.

Q.  What is CFAR's relationship to existential risk?  And what should it be?

CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world -- via whatever cause areas may be most important.  Many of us suspect that AI-related existential risk is an area with huge potential for useful impact; and so we are focusing partly on helping meet talent gaps in that field.  This focus also gives us more "pomodoro virtue" -- it is easier to track whether e.g. the MIRI Summer Fellows Program helped boost research on AI safety, than it is to track whether a workshop had "good impacts on the world" in some more general sense.

It is important to us that the focus remain on "high impact pathways, whatever those turn out to be"; that we do not propagandize for particular pre-set answers (rather, that we assist folks in thinking things through in an unhindered way); and that we work toward a kind of thinking skill that may let people better assess which paths actually have high impact for positive effects in the world, and that may help us overcome flaws in our current thinking.

Q.  Should I do “Earning to Give”?  Also: I heard that there are big funders around now and so “earning to give” is no longer a sensible thing for most people to do; is that true?  And what does all this have to do with CFAR?

IMO, earning to give remains a pathway for creating very large positive impacts on the world.

For example, CFAR, MIRI, and many of the organizations under the CEA umbrella seem to me to be both high in potential impact and substantially funding-limited right now, such that further donation is likely to cause a substantial increase in how much good these organizations accomplish.

This is an interesting claim to make in a world where e.g. Good Ventures, Elon Musk, and others are already putting very large sums of money into making the world better; if you expect to make a large difference via individual donation, you must implicitly expect that you can pick donation targets for your own smallish sum better than they could, at least at the margin.  (One factor that makes this more plausible is that you can afford to put in far more time/attention/thought per dollar than they can.)

Alternately stated: the world seems to me to be short, not so much on money, as on understanding of what to do with money; and an "Earning to Give" career seems to me to potentially make sense insofar as it is a decision to really actually try to figure out how to get humanity to a win-state (or how to otherwise do whatever it is that is actually worthwhile), especially insofar as you seem to see low-hanging fruit for high-impact donation.  Arguing with varied others who have thought about it is perhaps the fastest route toward a better-developed inside view, together with setting a 5 minute timer and attempting to solve it yourself.  "Earning to give"ers, IMO, contribute in proportion to the amount of non-standard epistemic skill they develop, and not just in proportion to their giving amount; and, much as with Giving Games, knowing that you matter today can provide a reason to develop a worldview for real.

This has to do with CFAR both because I expect us to accomplish far more good if we have money to do it, and because folks actually trying to figure out how to impact the world are building the kind of thinking skill CFAR is all about.

56 comments

Comments sorted by top scores.

comment by alyssavance · 2015-12-31T13:11:21.240Z · LW(p) · GW(p)

I mostly agree with the post, but I think it'd be very helpful to add specific examples of epistemic problems that CFAR students have solved, both "practice" problems and "real" problems. E.g., we know that math skills are trainable. If Bob learns to do math, along the way he'll solve lots of specific math problems, like "x^2 + 3x - 2 = 0, solve for x". When he's built up some skill, he'll start helping professors solve real math problems, ones where the answers aren't known yet. Eventually, if he's dedicated enough, Bob might solve really important problems and become a math professor himself.

Training epistemic skills (or "world-modeling skills", "reaching true beliefs skills", "sanity skills", etc.) should go the same way. At the beginning, a student solves practice epistemic problems, like the ones Tetlock uses in the Good Judgement Project. When they get skilled enough, they can start trying to solve real epistemic problems. Eventually, after enough practice, they might have big new insights about the global economy, and make billions at a global macro fund (or some such, lots of possibilities of course).

To use another analogy, suppose Carol teaches people how to build bridges. Carol knows a lot about why bridges are important, what the parts of a bridge are, why iron bridges are stronger than wood bridges, and so on. But we'd also expect that Carol's students have built models of bridges with sticks and stuff, and (ideally) that some students became civil engineers and built real bridges. Similarly, if one teaches how to model the world and find truth, it's very good to have examples of specific models built and truths found - both "practice" ones (that are already known, or not that important) and ideally "real" ones (important and haven't been discovered before).

Replies from: AnnaSalamon, elharo, RomeoStevens
comment by AnnaSalamon · 2016-01-11T03:45:25.473Z · LW(p) · GW(p)

Example practice problems and small real problems:

  • Fermi estimation of everyday quantities (e.g., "how many minutes will I spend commuting over the next year? What's the expected savings if I set a 5-minute timer to try to optimize that?") -- a toy worked version appears just after this list;
  • Figuring out why I'm averse to work/social task X and how to modify that;
  • Finding ways to optimize recurring task X;
  • Locating the "crux" of a disagreement about a trivia problem ("How many barrels of oil were sold worldwide in 1970?" pursued with two players and no internet) or a harder-to-check problem ("What are the most effective charities today?"), such that trading evidence for the crux produces shifts in one's own and/or the other player's views.
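As promised in the first bullet, here is a toy worked version of the commute estimate, sketched in Python; every number in it (commute length, workdays, and the chance and size of an improvement) is an illustrative guess rather than CFAR data:

    # Fermi estimate: minutes spent commuting over the next year,
    # and the expected value of a 5-minute attempt to optimize it.
    # All inputs are illustrative guesses -- the point is the arithmetic.

    commute_minutes_one_way = 30
    trips_per_day = 2
    commute_days_per_year = 230        # roughly 48 work weeks

    minutes_per_year = commute_minutes_one_way * trips_per_day * commute_days_per_year
    print(f"Commute next year: ~{minutes_per_year:,} minutes")          # ~13,800

    # Guess: a 5-minute brainstorm has a 20% chance of finding a tweak
    # (better route, shifted hours, batching errands) saving 10% of that time.
    p_find_improvement = 0.20
    fraction_saved = 0.10
    expected_minutes_saved = p_find_improvement * fraction_saved * minutes_per_year

    print(f"Expected savings: ~{expected_minutes_saved:,.0f} minutes")  # ~276
    print(f"Return on 5 minutes of thought: ~{expected_minutes_saved / 5:.0f}x")  # ~55x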

Larger real problems: Not much to point to as yet. Some CFAR alums are running start-ups, doing scientific research for MIRI or elsewhere, etc. and I imagine make estimates of various quantities in real life, but I don't know of any discoveries of note. Yet.

comment by elharo · 2016-01-11T12:02:00.852Z · LW(p) · GW(p)

I've learned useful things from the sequences and CFAR training, but it's almost all instrumental, not epistemic. I suppose I am somewhat more likely to ask for an example when I don't understand what someone is telling me, and the answers have occasionally taught me things I didn't know; but that feels more like an instrumental technique than an epistemic one.

comment by RomeoStevens · 2016-01-01T03:02:23.158Z · LW(p) · GW(p)

Before-and-after prediction-market performance jumps to mind and is easy to measure, though it doesn't cover the breadth of short-feedback topics that would be ideal.

comment by ChristianKl · 2016-01-02T12:30:42.654Z · LW(p) · GW(p)

Why is CFAR's main venue for teaching those skills a 4-day workshop?

Why not weekly classes of 2 to 3 hours?
Why not a focus on written material as the original sequences had?
Why not a focus on creating videos that teach rationality skills?
Why not focus on creating software that trains the skills?

Replies from: AnnaSalamon, Good_Burning_Plastic, Xachariah
comment by AnnaSalamon · 2016-01-11T02:59:41.506Z · LW(p) · GW(p)

The short answer: because we're trying to teach a kind of thinking rather than a pile of information, and this kind of thinking seems to be more easily acquired in an immersive multi-day context -- especially a context in which participants have set aside their ordinary commitments, and are free to question their normal modes of working/socializing/etc. without needing to answer their emails meanwhile.

Why I think this: CFAR experimented quite a bit with short classes (1 hour, 3 hours, etc.), daylong commuter events, multi-day commuter events, and workshops of varying numbers of days. We ran our first immersive workshop 6 months into our existence, after much experimentation with short formats; and we continued to experiment extensively with varied formats thereafter.

We found that participants were far more likely to fill in high scores to "0 to 10, are you glad you came?" at multi-day residential events. We found also that they seemed to us to engage with the material more fully and acquire the "mindset" of applied rationality more easily and more deeply, and that conversations relaxed, opened up, and became more honest/engaged as each workshop progressed, with participants feeling free to e.g. question whether their apparently insoluble problems were in fact insoluble, whether they in fact wanted to stay in the careers they felt "already stuck" in, whether they could "become a math person after all" or "learn social skills after all" or come to care about the world even if they hadn't been born that way, etc.

We also find we learn more from participants with whom we have more extensive contact, and the residential setting provides that well per unit staff time -- we can really get in the mode of hanging out with a given set of participants, trying to understand where they're at, forming hypotheses that might help, trying those hypotheses real-time in a really data-rich setting, seeing why that didn't quite work, and trying again... And developing better curricula is perhaps CFAR's main focus.

That said, as discussed in our year-end review & fundraiser post, we are planning to attempt more writing, both for the sake of scalable reading and for the sake of more explicitly formulating some of what we think we know.  It'll be interesting to see how that goes.

(You might also check out Critch's recent post on why CFAR has focused so much on residential workshops.)

Replies from: ChristianKl
comment by ChristianKl · 2016-01-11T18:58:06.930Z · LW(p) · GW(p)

Thanks for your insightful answer about CFAR's strategic choices -- especially the fact that it's based on data from participants' evaluation scores.

comment by Good_Burning_Plastic · 2016-01-06T09:57:30.278Z · LW(p) · GW(p)

Why not weekly classes of 2 to 3 hours?

Those would be very impractical for people not within a reasonable commuting distance of where the classes are held.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-06T10:25:15.225Z · LW(p) · GW(p)

It's easier to build a tight community if the people aren't far apart. I got my Salsa skills via weekly classes. A lot of people get Yoga skills through weekly classes.

If you get a curriculum to work in one location you can franchise it out.

comment by Xachariah · 2016-01-06T08:13:12.965Z · LW(p) · GW(p)

This is my main question. I've never seen anything to imply that multi-day workshops are effective methods of learning. Going further, I'm not sure how Less Wrong supports Spaced Repetition and Distributed Practice on one hand, while also supporting an organization whose primary outreach seems to be crash courses. It's like Less Wrong is showing a forum-wide cognitive dissonance that nobody notices.

That leaves a few options:

  • I'm wrong (though I consider it highly unlikely)
  • CFAR never bothered to look it up, or uses self-selection to convince themselves it's effective
  • CFAR is trying to optimize for something aside from spreading rationality, but they aren't actually saying what.
Replies from: AnnaSalamon
comment by AnnaSalamon · 2016-01-11T03:11:54.517Z · LW(p) · GW(p)

See my reply above. It is worth noting also that there is follow-up after the workshop (emails, group Skype calls, 1-on-1 follow-up sessions, and accountability buddies), and that the workshops are for many an entry-point into the alumni community and a longer-term community of practice (with many participating in the google group; attending our weekly alumni dojo; attending yearly alumni reunions and occasional advanced workshops, etc.).

(Even so, our methodology is not what I would pick if our goal were to help participants memorize rote facts. But for ways of thinking, it seems to work better than anything else we've found. So far.)

comment by Gleb_Tsipursky · 2015-12-31T20:57:14.963Z · LW(p) · GW(p)

First, on a meta-note, since Anna was too humble to mention it herself, I want to highlight that the CFAR 2015 Winter Fundraiser will last through January 31, 2016, with every $2 donated matched by $1 from CFAR supporters. Just to be clear, for those who don't know me, I'm not a staff person or Board member at CFAR; I'm in fact the President of another organization spreading rationality and effective altruism to a broad audience, and so have a somewhat distinct mission from CFAR's, which targets, as Anna said, those elites who are in the strongest position to impact the world. However, I'm also a monthly donor to CFAR, and very much support the mission, and encourage you to donate to CFAR during this fundraiser, since your dollars will do a lot of good there.

Second, let me come down from meta and speak from my CFAR donor hat. I'm curious to learn more about the target group of elites that you talk about, Anna, namely those "who are most likely to actually usefully impact the world." When I think of MIRI Summer Fellows, I totally get your point regarding AI research. But what about offering training to others, such as aspiring politicians/bureaucrats who are likely to be in a position to make AI-relevant policies, and also policies that address short- and medium-term existential risk in the next several decades, before the possibility of FAI becomes more tangible -- existential risks like cyberwarfare, nuclear war, climate change, etc.? If we can get politicians to be more sane about short, medium, and long-term existential risk, it seems like that would be a win-win scenario. What are CFAR's thoughts on that?

Replies from: AnnaSalamon
comment by AnnaSalamon · 2016-01-11T03:19:35.050Z · LW(p) · GW(p)

If we can get politicians to be more sane about short, medium, and long-term existential risk, it seems like that would be a win-win scenario. What are CFAR's thoughts on that?

Getting politicians to be more sane sounds awesome, but somewhat harder for us, and more outside our immediate reach, than getting STEM-heavy students to be more sane. I realize I said "who are most likely to actually usefully impact the world", but I should perhaps instead have said "who have high values for the product of [likely to usefully impact the world if they think well] * [comparatively easy for us to assist in acquiring good thinking skills]"; and STEM seems to help with both of these.

Still, we are keen to have aspiring politicians, civil servants, etc. at our workshops; we've found financial aid for several such in the past, and we'd love it if you or others would recommend our workshops to aspiring rationalists who are interested in this path (as well as in other paths).

Replies from: Gleb_Tsipursky
comment by Gleb_Tsipursky · 2016-01-11T20:12:27.679Z · LW(p) · GW(p)

Anna, thanks for clarifying about impacting the world; I think it's much clearer (and more epistemically accurate) the way you rephrased it.

I will keep in mind about recommending aspiring politicians and civil servants for your workshops, as well as financial aid opportunities for them.

comment by coyotespike · 2016-01-01T15:53:16.442Z · LW(p) · GW(p)

This is an excellent post, which I'll return to in future. I particularly like the note about the convergence between Superforecasting, Feynman, Munger, LW-style rationality, and CFAR - here's a long list of Munger quotations (collected by someone else) which exemplifies some of this convergence. http://25iq.com/quotations/charlie-munger/

Replies from: RomeoStevens, katydee
comment by RomeoStevens · 2016-01-02T02:41:34.972Z · LW(p) · GW(p)

There's also a pretty big overlap with the intelligence community, which is briefly discussed in Superforecasting (the Good Judgement Project was funded by IARPA).

Replies from: XFrequentist
comment by XFrequentist · 2016-01-12T17:25:15.717Z · LW(p) · GW(p)

(The alignment of both goals and methods between CFAR and the IC is, I think, under-exploited by both.)

comment by katydee · 2016-01-01T19:13:16.951Z · LW(p) · GW(p)

Excellent link.

comment by tangled_y · 2016-01-04T00:06:33.983Z · LW(p) · GW(p)

I initially wrote quite a dismissive post after getting a negative impression of CFAR from the massive price tag attached to the workshop.

However after watching a talk by Anna Salamon explaining what CFAR was I could see a genuine passion behind the vision, so I changed my mind and deleted the post.

I would be interested in learning more, if you guys have any public lectures hosted online.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-05T19:03:12.268Z · LW(p) · GW(p)

Julia Galef of CFAR gave a good talk titled How to Change Your Mind.

Replies from: tangled_y
comment by tangled_y · 2016-01-07T22:23:46.909Z · LW(p) · GW(p)

Thanks! Really good one. Does CFAR have a forum or mailing list of its own anywhere?

Replies from: ChristianKl
comment by ChristianKl · 2016-01-07T22:53:33.366Z · LW(p) · GW(p)

CFAR has a mailing list for its alumni. But that mailing list isn't public.

comment by ZoltanBerrigomo · 2016-01-01T21:10:11.041Z · LW(p) · GW(p)

A very interesting and thought-provoking post -- I especially like the Q & A format.

I want to quibble with one bit:

How can I tell there aren't enough people out there, instead of supposing that we haven't yet figured out how to find and recruit them?

Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.

There is an empirical claim about the world that is implicit in that statement, and it is this claim I want to disagree with. Namely: I think having a high impact on the world is really, really hard. I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck.

It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.

Replies from: ChristianKl, Gleb_Tsipursky
comment by ChristianKl · 2016-01-02T21:25:21.978Z · LW(p) · GW(p)

I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck

To the extent that it does require luck, that simply means it's important to have more people with rationality + competence + caring. If you have many people, some will get lucky.

Many such people respond to unreasonable confidence

I think the term "unreasonable confidence" can be misleading. It's possible to very confidently say "I don't know".

At the LW Community Camp in Berlin, I consider Valentine of CFAR to have been the most charismatic person in attendance. When speaking with Valentine, he said things like: "I think it's likely that what you are saying is true, but I don't see a reason why it has to be true." He also very often told people that he might be wrong and that people shouldn't trust his judgements as strongly as they do.

may be more difficult to produce the more you are used to thinking things through rationally.

I think you might be pattern matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.

Replies from: AlexZ, ZoltanBerrigomo
comment by AlexZ · 2016-01-03T16:00:27.402Z · LW(p) · GW(p)

I think you might be pattern matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.

I don't think that's true. In my experience spending time with rationalists and studying aspects of it myself, I have found that rationalists separate themselves from the general population in many ways which would make it hard to convince non-rationalists. Those aspects are things that rationalists cultivate partially in an effort to improve their thinking, but also, in order to signal membership in the rationalist tribe. (Rationalists are humans after all) Those are not things that rationalists can easily turn on and off. I can identify 3 general groups of aspects that many rationalists seem to have:

1) The use of esoteric language. Rationalists tend to use a lot of language that is unfamiliar to others. Rationalists "update" their beliefs. They fight "akrasia". They "install" new habits. If you spend any time in rationalist circles, you will have heard those terms used in those ways very frequently. This is of course not bad in and of itself. But it marks one as a member of the rationalist tribe and even someone who does not know about rationalists will be able to identify the speaker who uses this terminology as alien and "weird". My first encounter with rationalists was indeed of this type. All I knew was that they seemed to speak in a very strange manner.

2) Rationalists, at least the ones in this community, hold a variety of unusual beliefs. I actually find it hard to identify those beliefs because I hold many of them. Nonetheless, a chat with many other human beings regarding the theory of the mind, metaphysics, morality, etc. will reveal gaps the size of the Grand Canyon between the average rationalist and the average person. Maybe at some level, there is agreement, but when it comes to object-level issues, the disagreement is immense.

3) Rationalists think very differently from the way most other people think. That is after all the point. However, it means that arguments that convince rationalists will frequently fail to convince an average person. For instance, arguing that the effects of brain damage show that there is no soul in the conventional sense will get you nowhere with an average person while many rationalists see this as a very persuasive if not conclusive argument.

I claim that to convince another human being, you must be able to model their cognitive processes. As many rationalists realize, humans have a tendency to model other humans as similar to themselves. Doing otherwise is incredibly difficult and increases in difficulty exponentially with your difference from that other human. This is after all unsurprising. If modeling an identical copy of yourself, you need only fake sensory inputs and see what the output would be. If you model someone different from yourself, you need to basically replicate their brain within your brain. This is obviously very effortful and error-prone. This is actually hard enough that it is difficult for you to replicate the processes that led you to believe something you no longer believe. And you had access to the brain which held those now-discarded beliefs!

I do not claim it is an impossible task. But I do claim that the better you are at rationality, the worse you will be at understanding non-rationalists and how to convince them of anything. If anything, as a good rationalist, you will have learned to flinch away from lines of reasoning that are the result of common cognitive errors. But of course, cognitive errors are an integral part of the way most people live their lives. So if you flinch away from such things, you will miss lines of reasoning that would be very fruitful to convince others of the correctness of your beliefs.

Let me provide an example. I recently discussed abortion with a non-rationalist but very intelligent friend. I pointed out that within the context of fetuses being humans deserving of rights, abortion is obviously murder, and that he was missing the point of his opponents. The responses I got were riddled with fallacies. Most interestingly, the idea that science has determined that fetuses are not humans. I tried to explain that science can certainly tell us what is going on at various stages of development, but that it cannot tell us what is a "human deserving of rights", as that is a purely moral category. This was to no avail. People (even very intelligent people) hang their beliefs and actions on such fallacy-riddled lines of reasoning all the time. If you train yourself to avoid such lines of reasoning, you will have great difficulty in convincing others without first turning them into yourself.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-04T12:34:03.281Z · LW(p) · GW(p)

My first encounter with rationalists was indeed of this type.

If I'm chatting with other rationalists I will use a term like akrasia, but in other contexts I will say procrastination. I'm perfectly able to use different words in different social contexts.

In my experience spending time with rationalists and studying aspects of it myself

There are ways of studying rationality that do have those effects. I don't think going to a CFAR workshop is going to make a person less likely to convince the average person.

I tried to explain that science can certainly tell us what is going on at various stages of development, but that it cannot tell us what is a "human deserving of right" as that is a purely moral category. This was to no avail.

Convincing a person who believes in scientism that science doesn't work that way is similar to trying to convince a theist that there's no god. Both are hard problems that you can't easily solve by making emotional appeals, even if you are good at making emotional appeals.

I claim that to convince another human being, you must be able to model their cognitive processes.

I don't believe that to be true. In many cases it's possible to convince other people by making generalized statements that different human beings will interpret differently and where it's not important that you know which interpretation the other person chooses.

In NLP that principle is called the Milton model.

As many rationalists realize, humans have a tendency to model other humans as similar to themselves.

I think it would be more accurate to say humans have a tendency to model other humans as they believe themselves to be.

I pointed out that within the context of fetuses being humans deserving or rights, abortion is obviously murder and that he was missing the point of his opponents.

From an LW perspective, I think "abortion is obviously murder" is an argument with little substance because it's about the definitions of words. My reflex from rationality training would be to taboo "murder".

I actually find it hard to identify those beliefs because I hold many of them. Nonetheless, a chat with many other human beings regarding the theory of the mind, metaphysics, morality, etc. will reveal gaps the size of the Grand Canyon between the average rationalist and the average person. Maybe at some level, there is agreement, but when it comes to object-level issues, the disagreement is immense.

I don't think that the average rationalist has the same opinion on any of those subjects. There's a sizeable portion of EA people in this community, but not everybody agrees with the EA frame.

I had a Hamming circle at our local LW meetup where pride was very important to the circled person. He chose actions because he wanted to achieve results that make him feel pride. For myself, pride is not an important concept or emotion. It's not an emotion that I seek.

The Hamming circle allowed me to have a perspective into the workings of a mind that is, in that regard, significantly different from my own. Hamming circles are a good way to learn to model people different from yourself.

There are people in this community who focus on analytical reasoning and as a result are bad at modeling normal people. I think those people would get both more rational and better at modeling normal people if they frequently engaged in Hamming circles.

I think the same is true for practicing techniques like goal factoring and urge propagation.

If you train Focusing, you can speak from that place to make stronger emotional appeals than you could otherwise.

comment by ZoltanBerrigomo · 2016-01-03T17:42:10.119Z · LW(p) · GW(p)

To the extent that it does require luck, that simply means it's important to have more people with rationality + competence + caring. If you have many people, some will get lucky.

The "little bit of luck" in my post above was something of an understatement; actually, I'd suggest it requires a lot of luck (among many other things) to successfully change the world.

I think you might be pattern matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.

Not sure if I am, but I believe I am making a correct claim about human psychology here.

Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.

This does not mean you don't have emotions.

You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.

Now if, in the real world, the way you persuade people is by emotional appeals (and this is at least partially true), this will be more difficult the more you get in the habit of rational thinking, even if you have an accurate model about what it takes to persuade someone -- emotions are not easy to fake and humans have strong intuitions about whether someone's expressed feelings are genuine.

Replies from: ChristianKl
comment by ChristianKl · 2016-01-04T12:24:53.744Z · LW(p) · GW(p)

Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.

No. CFAR rationality is about aligning system I and system II. It's not about declaring system I outputs to be worthy of being ignored in favor of system II outputs.

You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.

The alternative is working towards feeling more strongly for the fundamental principles than caring about the fights.

emotions are not easy to fake and humans have strong intuitions about whether someone's expressed feelings are genuine.

A person who cares strongly for his cause doesn't need to fake emotions.

Replies from: ZoltanBerrigomo, ZoltanBerrigomo
comment by ZoltanBerrigomo · 2016-01-05T05:51:22.454Z · LW(p) · GW(p)

Sure, you can work towards feeling more strongly about something, but I don't believe you'll ever be able to match the emotional fervor the partisans feel -- I mean here the people who stew in their anger and embrace their emotions without reservations.

As a (rather extreme) example, consider Hitler. He was able to sway a great many people with what were appeals to anger and emotion (though I acknowledge there is much more to the phenomenon of Hitler than this). Hypothetically, if you were a politician from the same era, say a rational one, and you understood that the way to persuade people is to tap into the public's sense of anger, I'm not sure you'd be able to match him.

Replies from: gjm, ChristianKl
comment by gjm · 2016-01-05T23:53:14.401Z · LW(p) · GW(p)

"The best lack all conviction, and the worst / Are full of passionate intensity" -- W B Yeats

"The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt" -- Bertrand Russell

comment by ChristianKl · 2016-01-05T10:25:58.895Z · LW(p) · GW(p)

Julian Assange was one of the first people to bring tears to my eyes when he spoke and I saw him live. At the same time, Julian's manifesto is rational to the extent that it makes its case with graph theory.

Interestingly, the "We Lost The War" speech that articulated the doctrine that we need to make life easier for whistleblowers, by providing a central venue to which they can post their documents, was 10 years ago. A week ago there was a "Ten years after 'We Lost The War'" talk at this CCC congress.

Rop Gonggrijp closes by describing the new doctrine as:

Know that there is probably not a revolution magically manifesting itself next Friday, and probably also no zombie apocalypse, but still we need to be ready for rapid and sizable changes of all sorts and kinds. The only way to be effective in this -- and probably our mission as a community -- is to play for the long term, develop a culture that is more fun and attractive to more people, develop infrastructure, and turn around and offer that infrastructure to people that need it. That is not a thing we do as a hobby anymore. That's also something we do for people that need this infrastructure. Create a culture that is capable of putting up a fight, that gives its inhabitants a sense of purpose, self-worth, usefulness, and then launch that culture over time till it becomes a viable alternative to the status quo.

I think that's the core strategy. We don't want eternal September, so it's no problem if the core community uses language that's not understood by outsiders. We can have our cuddle piles and feel good with each other. Cuddle piles produce different emotions than anger, but they also create emotions that produce strong bonds.

If we really need strong charismatic speakers that are world-class at persuasion, I think that Valentine currently is at that level (as is Julian Assange in the hacker community). It's not CFAR's mission to maximize for charisma, but nothing that CFAR does prevents people from maximizing charisma. If someone wants to develop themselves into that role, Valentine wrote down his body language secrets in http://lesswrong.com/lw/mp3/proper_posture_for_mental_arts/ .

A great thing about the prospects of our community is that there's money seeking Effective Altruistic uses. As EA grows there might be an EA person running for office in a few years. If other EA people consider his run to have prospects for making a large positive impact he can raise money from them. But as Rop says in the speech, we should play for the long-term. We don't need a rationalist to run for office next year.

comment by ZoltanBerrigomo · 2016-01-08T23:55:50.533Z · LW(p) · GW(p)

No. CFAR rationality is about aligning system I and system II. It's not about declaring system I outputs to be worthy of being ignored in favor of system II outputs.

I believe you are nitpicking here.

If your reason tells you 1+1=2 but your emotions tell you that 1+1=3, being rational means going with your reason. If your reason tells you that ghosts do not exist, you should believe this to be the case even if you really, really want there to be evidence of an afterlife.

CFAR may teach you techniques to align your emotions and reason, but this does not change the fundamental fact that being rational involves evaluating claims like "is 1+1=2?" or empirical facts about the world such as "is there evidence for the existence of ghosts?" based on reason alone.

Just to forestall the inevitable objections (which always come in droves whenever I argue with anyone on this site): this does not mean you don't have emotions; it does not mean that your emotions don't play a role in determining your values; it does not mean that you shouldn't train your emotions to be an aid in your decision-making, etc etc etc.

Replies from: Kaj_Sotala, ChristianKl
comment by Kaj_Sotala · 2016-01-09T13:06:37.610Z · LW(p) · GW(p)

Being rational involves evaluating various claims and empirical facts, using the best evidence that you happen to have available. Sometimes you're dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence. Both are information-processing systems that have evolved to make sense of the world and orient your behavior appropriately; they're just evolved for dealing with different tasks.

This means that in some domains explicit reasoning will provide better evidence, and in some domains emotions will provide better evidence. Rationality involves figuring out which is which, and going with the system that happens to provide better evidence for the specific situation that you happen to be in.

Replies from: ZoltanBerrigomo
comment by ZoltanBerrigomo · 2016-01-11T03:28:40.100Z · LW(p) · GW(p)

Sometimes you're dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence.

And how should you (rationally) decide which kind of domain you are in?

Answer: using reason, not emotions.

Example: if you notice that your emotions have been a good guide in understanding what other people are thinking in the past, you should trust them in the future. The decision to do this, however, is an application of inductive reasoning.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2016-01-11T17:29:08.242Z · LW(p) · GW(p)

Sure.

comment by ChristianKl · 2016-01-09T11:13:05.298Z · LW(p) · GW(p)

but this does not change the fundamental fact that being rational involves evaluating claims like "is 1+1=2?" or empirical facts about the world such as "is there evidence for the existence of ghosts?" based on reason alone.

One of the claims is analytic. 1+1=2 is true by definition of what 2 means. There's little emotion involved.

When it comes to an issue such as "is there evidence for the existence of ghosts?", neither rationality as presented in Eliezer's sequences nor CFAR argues that emotions play no role. Noticing when you feel the emotion of confusion because your map doesn't really fit is important.

The beauty of mathematical theories is a guiding star for mathematicians.

Basically any task that doesn't need emotions or intuitions is better done by computers than by humans. To the extent that humans outcompete computers, there's intuition involved.

Replies from: ZoltanBerrigomo
comment by ZoltanBerrigomo · 2016-01-11T03:34:17.878Z · LW(p) · GW(p)

1+1=2 is true by definition of what 2 means

Russell and Whitehead would beg to differ.

Replies from: gjm, ChristianKl
comment by gjm · 2016-01-11T09:24:16.948Z · LW(p) · GW(p)

"True by definition" is not at all the same as "trivial" or "easy". In PM the fact that 1+1=2 does in fact follow from R&W's definition of the terms involved.

comment by ChristianKl · 2016-01-11T13:19:54.354Z · LW(p) · GW(p)

I learned math with the Peano axioms, and we considered the symbol 2 to refer to 1+1, 3 to (1+1)+1, and so on. However, even if you consider it to be more complicated, it still stays an analytic statement and isn't a synthetic one.

If you define 2 differently what's the definition of 2?
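For concreteness, here is a minimal sketch of that "analytic" point in Lean syntax (an illustration of the Peano-style definition, not of PM's):

    -- With natural numbers built from zero and succ (Peano-style),
    -- the literal 2 unfolds to succ (succ zero), and 1 + 1 computes
    -- to the same term; both sides are definitionally equal, so the
    -- proof is just reflexivity.
    theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl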

Replies from: gjm, Richard_Kennaway, mcallisterjp
comment by gjm · 2016-01-11T18:06:34.625Z · LW(p) · GW(p)

When you write "1+1" you may mean two things: "the result of doing the adding operation to 1 and 1", and "the successor of 1". It just happens that we use "+1" to denote both of those. The fact that successor(1) = add(1,1) isn't completely trivial.

Principia Mathematica, though, takes a different line. IIRC, in PM "2" means something like "the property a set has when it has exactly two elements" (i.e., when it has an element a and an element b, and a=b is false, and for any element x we have either x=a or x=b), and similarly for "1" (with all sorts of complications because of the hierarchy of kinda-sorta-types PM uses to try to avoid Russell-style paradoxes). And "m+n" means something like "the property a set has when it is the union of two disjoint subsets, one of which has m and the other of which has n". Proving 1+1=2 is more cumbersome then. And PM begins from a very early point, devoting quite a lot of space to introducing propositional calculus and predicate calculus (in an early, somewhat clunky form).

comment by Richard_Kennaway · 2016-01-12T01:26:14.532Z · LW(p) · GW(p)

If you define 2 differently what's the definition of 2?

One popular definition (at least, among that small class of people who need to define 2) is { { }, { { } } }.

Another, less used nowadays, is { z : ∃x,y. x∈z ∧ y∈z ∧ x ≠ y ∧ ∀w∈z.(w=x ∨ w=y) }.

In surreal numbers, 2 is { { { | } | } | }.

comment by mcallisterjp · 2016-01-11T18:52:10.179Z · LW(p) · GW(p)

In type theory and some fields of logic, 2 is usually defined as (λf.λx.f (f x)); essentially, the concept of doing something twice.
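Spelled out concretely, here is a minimal sketch of those Church numerals in Python lambdas (purely illustrative):

    # Church numerals: the number n is encoded as "apply f, n times".
    zero = lambda f: lambda x: x          # apply f zero times
    one  = lambda f: lambda x: f(x)       # apply f once
    two  = lambda f: lambda x: f(f(x))    # i.e. (lambda f. lambda x. f (f x))

    # Addition: m + n means "apply f m times on top of n applications".
    add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    # Decode to an ordinary int by choosing f = "add 1" and x = 0.
    to_int = lambda n: n(lambda k: k + 1)(0)

    assert to_int(add(one)(one)) == to_int(two) == 2   # 1 + 1 = 2, by construction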

comment by Gleb_Tsipursky · 2016-01-02T19:55:52.674Z · LW(p) · GW(p)

It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.

Really good point! In fact, there is a specific challenge in that the rationality community itself lashes back against rationalists using such tactics, as I experienced myself. So this is a particularly challenging area for impacting the world.

comment by AnnaSalamon · 2016-02-01T01:26:08.798Z · LW(p) · GW(p)

The fundraiser closes today at midnight Pacific time; if you've been planning to donate, now is the moment. Marginal funds seem to me to be extremely impactful this year; I'd be happy to discuss. http://rationality.org/donate-2015/

comment by elharo · 2016-01-11T11:56:56.107Z · LW(p) · GW(p)

Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.

I agree with this, but I strongly disagree that epistemic rationality is the limiting factor in this equation. Looking at the world, I see massive lack of caring. I see innumerable people who care only about their own group, or their own interests, to the exclusion of others.

For example, many people give to ineffective local charities instead of more effective charities that invest their money in the developing world because they care more about the park down the street than they do about differently colored refugees in the developing world. People care more about other people who are closer to them and more like them than they do about different people further away. Change that, and epistemic rationality will take care of itself.

Solutions for the problems that exist in the world today are not limited by competence or epistemic rationality. (Climate change denial is a really good example: it's pretty obvious that denial is politically and personally motivated and that the deniers are performing motivated reasoning, not seriously misinformed. Better epistemic rationality will not change their actions because they are acting rationally in their own self-interests. They're simply willing to damage future generations and poorer people to protect their interests over those of people they don't care about.)

Anna's argument here is a classic example of the fallacy of assuming your opponents are stupid or misinformed, that they simply need to be properly educated and everyone will agree. This is rarely true. People disagree and cause the problems that exist in the world today because they have different values, not because they see the world incorrectly.

To the extent that people do see the world incorrectly, it is because epistemic rationality interferes with their values and goals, not because poor epistemic rationality causes them to have the wrong values and goals. That is, a lack of caring leads to poor epistemic rationality, not the other way around.

This is why I find CFAR to be a very low-effectiveness charity. It is attacking the wrong problem.

Replies from: IlyaShpitser, ChristianKl
comment by IlyaShpitser · 2016-01-11T21:27:57.391Z · LW(p) · GW(p)

When it comes to helping folks, I am sure "the causal effect is 0," because that's just how it generally is.

But then they think it's a success if they find one person at these workshops to work on super theoretical decision theory. Which I think is super weird and slightly misleading as far as what CFAR is really about.

comment by ChristianKl · 2016-01-11T21:04:32.864Z · LW(p) · GW(p)

it's pretty obvious that denial is politically and personally motivated and that the deniers are performing motivated reasoning, not seriously misinformed.

CFAR's mission isn't informing people but teaching them to reason well. Not holding wrong beliefs because of motivated reasoning is part of that.

Caring doesn't fix the problem of motivated reasoning. A person who believes that AI risk doesn't exist because they think an FAI would be really awesome isn't suffering from a lack of caring for humanity.

Even on the issue of climate change you have problems of motivated reasoning on both sides of the aisle. A lot of the money invested in green-energy companies was lost because people didn't think carefully about the sector and about where the money would have the biggest impact.

comment by Gunslinger (LessWrong1) · 2016-01-11T10:13:22.411Z · LW(p) · GW(p)

Briefly put, CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.

How do you decide who is more likely to impact the world?

comment by Squark · 2016-01-17T06:50:19.072Z · LW(p) · GW(p)

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen within the next decade or two. Otherwise it would make sense to place more emphasis on creating educational programs for children, where the long-term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short-term assumption?

Replies from: AnnaSalamon, pcm
comment by AnnaSalamon · 2016-01-26T09:02:01.613Z · LW(p) · GW(p)

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen within the next decade or two.

I don't think this; it seems to me that the next decade or two may be pivotal, but they may well not be, and the rest of the century matters quite a bit as well in expectation.

There are three main reasons we've focused mainly on adults:

  1. Adults can contribute more rapidly, and so can be part of a process of compounding careful-thinking resources in the shorter term. E.g., if adults are hired now by MIRI, they improve the proportion of careful thinkers among those working on AI safety, and this can in turn impact the culture of the field, the quality of future years’ research, etc.

  2. For reasons resembling (1), adults provide a faster “grounded feedback cycle”. E.g., adults who come in with business or scientific experience can tell us right away whether the curricula feel promising to them; students and teens are more likely to be indiscriminately enthusiastic.

  3. Adults can often pay their own way at the workshops; children can’t; we therefore cannot afford to run very many workshops for kids until we acquire either more donations or other financial resources.

Nevertheless, I agree with you that programs targeting children can be higher impact per person and are extremely worthwhile in the medium to long run. This is indeed part of the motivation for SPARC, and expanding such programs is key to our long-term aims; marginal donations are key to our ability to do this quickly, and not just eventually.

comment by pcm · 2016-01-22T19:56:59.648Z · LW(p) · GW(p)

I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.

Replies from: Squark
comment by Squark · 2016-01-25T09:38:39.930Z · LW(p) · GW(p)

Hi Peter! I am Vadim; we met at a LW meetup in CFAR's office last May.

You might be right that SPARC is important, but I really want to hear from the horse's mouth what their strategy is in this regard. I'm inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess establishing a continuous education programme would have much more impact than a two-week summer workshop. It's not obvious what the optimal distribution of resources is (many two-week workshops for many people, or one long programme for fewer people), but I haven't seen such an analysis by CFAR.

Replies from: pcm
comment by pcm · 2016-01-26T19:41:58.885Z · LW(p) · GW(p)

Peer pressure matters, and younger people are less able to select rationalist-compatible peers (due to less control over who their peers are).

I suspect younger people have short enough time horizons that they're less able to appreciate some of CFAR's ideas that take time to show benefits. I suspect I have more intuitions along these lines that I haven't figured out how to articulate.

Maybe CFAR needs better follow-ups to their workshops, but I get the impression that the people for whom the workshops are most effective learn (without much follow-up) to generalize CFAR's ideas in ways that make additional advice from CFAR unimportant.

comment by Ben Pace (Benito) · 2016-01-10T20:29:49.960Z · LW(p) · GW(p)

In your floor-sweeping example, I agree that you cannot scale previous solutions to 10^25. And I agree that you need very novel thoughts to solve that larger problem, that involve the ability to come up with new ideas and build things from them.

Could you give a concrete example of how 'rationality skills' fit into that picture, and point at a mechanism by which getting better at these skills lets you solve the 10^25-level problem?

comment by LawrenceC (LawChan) · 2016-01-09T14:02:12.931Z · LW(p) · GW(p)

CFAR’s workshop contents and a typical MBA program’s contents is non-coincidental

I was really surprised, after going to a couple of more traditional business workshops, to learn that many of CFAR's techniques, like focused grit, trigger-action planning, value of information, and murphyjitsu, are taught in business contexts as well (though sometimes under different names, like the swiss-cheese method for focused grit). It seems the main value of CFAR over these programs is its focus on helping those with high impact or those working on existential risk -- but what specific skills/methods does CFAR offer that MBA programs and other workshops do not?

comment by tangled_y · 2016-01-03T22:59:50.892Z · LW(p) · GW(p)

OK. So, I'm not sure how this came up in my newsfeed, but since I'm here I'll take the bait and reply.

So. I do not mean to offend, but aren't you guys, like, full of it?

You're charging 4 grand for a "rationality workshop"? Now, considering that it'll cost me $250 to spend 2 weeks at a graduate-level summer school on logic - the same summer school that has made all of its lecture videos from the past six years free to view online - what secret and hidden gems of wisdom am I to gain from paying you fifteen times as much for an extended weekend of unknown value?

You're claiming to be "improving the world", but all I see your mission boiling down to is scamming a bunch of rich kids with more money than brains out of their cash. I don't need to be the one to tell you that if you actually wanted to "improve the world" you'd be making all of your hidden gems of wisdom free for the people who would benefit from them most (if any of them actually provide any benefit).

Of course you "can't", because that would deprive you of funding for your "mission". So instead let's sprinkle some buzzwords like "cognitive science" all over your webpage and see who takes the bait! (Conveniently, there's no mention anywhere that no one on your team actually has any qualifications in cognitive science. But degrees are for hacks anyway, amirite?)

The saddest part is that despite your enormously prohibitive prices of attendance, you STILL hunger for more money and try to get it for free with your "donation runs" - and there are gullible people willing to give it to you for nothing in return. (This is precisely why people think you lot are a cult.)

And if my words sound harsh, well, why not do what every LWer does when confronted with the fact that their "rationality" is a sham? Cover your ears and keep repeating "Newcomb's paradox" until the cognitive dissonance goes away!