CFAR Handbook: Introduction

post by CFAR!Duncan (CFAR 2017) · 2022-06-28T16:53:53.312Z · LW · GW · 12 comments

The Center for Applied Rationality is a Bay Area non-profit that, among other things, ran many workshops offering people tools and techniques for solving problems and improving their thinking. Those workshops were accompanied by a reference handbook, which has been available as a PDF since 2020.

The handbook hasn't been substantially updated since it was written in 2016, but it remains a fairly straightforward primer on a lot of core rationality content. The LW team, working with the handbook's author Duncan Sabien [LW · GW], has decided to republish it as a lightly edited sequence, so that each section can be linked on its own.

In the workshop context, the handbook was a supplement to lectures, activities, and conversations taking place between participants and staff. Care was taken to emphasize that each tool, technique, or perspective was only as good as its effective application to one's actual problems, plans, and goals. The workshop was intentionally structured to get participants to actually try things (including iterating on or developing their own versions of what they were being shown), rather than passively absorb content. Keep this in mind as you read—mere knowledge of how to exercise does not confer the benefits of exercise!

Discussion is strongly encouraged, and disagreement and debate are explicitly welcomed. Many LWers (including the staff of CFAR itself) have been tinkering with these concepts for years, and will have developed new perspectives on them, or interesting objections to them, or thoughts about how they work or break in practice. What follows is a historical artifact—the rough state of the art at the time the handbook was written, circa 2017. That's an excellent jumping-off point, especially for newcomers, but there's been a lot of scattered progress since then, and we hope some of it will make its way into the comments.

12 comments

comment by FiftyTwo · 2022-07-01T10:05:11.467Z · LW(p) · GW(p)

Thanks. This is the kind of content I originally came to LW for a decade ago, but which seems to have become less popular.

comment by maia · 2022-07-26T15:03:51.309Z · LW(p) · GW(p)

A couple of questions about this sequence:

Is there any plan to write down and post more of the surrounding content like activities/lectures/etc.?

How does CFAR feel about "off-brand"/"knockoff" versions of these workshops being run at meetups? If OK with it, how should those be announced/disclaimed to make it clear that they're not affiliated with CFAR?

I'm interested in this as an organizer, and based on conversations at the meetup organizers' retreat this weekend, I think a number of other organizers would be interested as well.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-07-28T18:54:34.250Z · LW(p) · GW(p)

There aren't currently plans to write up e.g. descriptions of the classes and activities, but there are lots of people who have been to CFAR workshops who can offer their anecdotes, and you may be able to reach out to CFAR directly for descriptions of what a workshop is like.

(Also, there are going to be workshops in Europe this fall [LW · GW] that you could attend if you want.)

As for spreading off-brand versions of the content: CFAR is enthusiastically pro the idea! Their main request is just that you clearly headline:

  • That CFAR originated the content you're attempting to convey (e.g. credit them for terms like "TAPs")
  • That you are teaching your version of CFAR's TAPs (or whatever); that this is "what I, Maia, got out of attempting to learn the CFAR technique called TAPs."

As long as you're crediting the creators and not claiming to speak with authority about the thing you're teaching, CFAR is (very) happy to have other people spreading the content.

comment by Stephen Bennett (GWS) · 2022-06-28T17:20:08.425Z · LW(p) · GW(p)

Nitpick (about a line that I otherwise quite liked):

> Keep this in mind as you read—mere knowledge of how to exercise does not convey the benefits of exercise!

Should "convey" be "confer" here? "Convey" implies that the thing changing is typically information (i.e. knowledge), whereas "confer" implies that someone has been granted possession of something (i.e. health).

Replies from: CFAR 2017
comment by CFAR!Duncan (CFAR 2017) · 2022-06-28T17:30:04.092Z · LW(p) · GW(p)

Seems right. =)

comment by Khachik Gobalyan (khachik-gobalyan) · 2023-09-06T22:56:53.193Z · LW(p) · GW(p)

Keep this in mind as you read—mere knowledge of how to exercise does not confer the benefits of exercise!

comment by s-video · 2023-01-27T19:56:10.625Z · LW(p) · GW(p)

>What follows is a historical artifact—the rough state-of-the-art at the time the handbook was written, circa 2017.

The PDF hosted on CFAR's website is dated January 2021. Is that version of the handbook more up-to-date than this one?

Replies from: Raemon
comment by Raemon · 2023-01-27T20:01:42.233Z · LW(p) · GW(p)

This was written in mid-2022, so it's probably at least somewhat more up-to-date (but it also might be slightly tweaked in the direction of 'stuff Duncan endorses'; Duncan no longer works at CFAR, and if CFAR makes further updates to the handbook, I could imagine the two versions diverging).

comment by Phil Tanny · 2022-09-01T23:51:36.788Z · LW(p) · GW(p)

Here's a simple test which can be used to evaluate the qualifications of all individuals and groups claiming to be qualified to teach rational thinking.

How much have they written or otherwise contributed on the subject of nuclear weapons?

As an example, imagine as a thought experiment that I walk around all day with a loaded gun in my mouth, but typically don't find the gun interesting enough to discuss. In such a case, would you consider me an authority on rational thinking? In this example, the gun in one person's mouth represents the massive hydrogen bombs in all of our mouths.

Almost all intellectual elites will fail this test. Once this is seen, one's relationship with intellectual elites can change substantially.

Replies from: Duncan_Sabien
comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2022-09-02T00:25:48.594Z · LW(p) · GW(p)

Note: despite the different username, I'm the author of the handbook and a former CFAR staff member.

I disagree with this take as specifically outlined, even though I do think there's a kernel of truth to it.

Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!

I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence.

Phil may disagree with the claim that nuclear weapons are something like third on the list, rather than the top item, but that doesn't mean he's right. And CFAR staff certainly clear the bar of "spending a lot of time focusing on what seems to them to be the actually most salient threat."

I agree that if somebody seems to be willfully ignoring a salient threat, they have gaps in their rationality that should give you pause.

Replies from: Phil Tanny
comment by Phil Tanny · 2022-09-02T09:10:36.867Z · LW(p) · GW(p)

Hi again Duncan, 

> Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!

Can AI destroy modern civilization in the next 30 minutes?   Can a single human being unilaterally decide to make that happen, right now, today?

I feel that nuclear weapons are a very useful tool for analysis because, unlike emerging technologies such as AI and genetic engineering, they are very easily understood by almost the entire population. So if we're not talking about nukes, which we overwhelmingly are not across the culture at every level of society, it's not because we don't understand. It's because we are in deep denial, similar to how we relate to our own personal mortality. To debunk my own posts, puncturing such deep denial with mere logic is not very promising, but one does what one knows how to do.

> I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence.

Except that is not the test I proposed. That's ignoring the most pressing threat in order to engage with a threat that's more fun to talk about. That said, any discussion of X risk must be applauded, and I do so applaud.

The challenge I've presented is not to the EA community in particular, who seem far ahead of other intellectual elites on the subject of X risk generally. I'm really challenging the entire leadership of our society. I tend to focus the challenge mostly on intellectual elites of all types, because that's who I have the highest expectations of. You know, it's probably pointless to challenge politicians and the media on such subjects.

Replies from: gilch
comment by gilch · 2022-09-10T06:08:05.536Z · LW(p) · GW(p)

> Can AI destroy modern civilization in the next 30 minutes?

Doubt it, but it might depend on how much of an overhang [? · GW] we have. My timelines aren't that short, but if there were an overhang and we were just a few breakthroughs away from recursive self-improvement, would the world look any different than it does now?

> Can a single human being unilaterally decide to make that happen, right now, today?

Oh, good point. Pilots have intentionally crashed planes full of passengers. Kids have shot up schools, not expecting to come out alive. Murder-suicide is a thing humans have been known to do. There have been a number of well-documented close calls in the Cold War. As nuclear powers proliferate, MAD becomes more complicated.

It's still about #3 on my catastrophic risk list depending on how you count things. But the number of humans who could plausibly do this remains relatively small. How many human beings could plausibly bioengineer a pandemic? I think the number is greater, and increasing as biotech advances. Time is not the only factor in risk calculations.

And likely neither of these results in human extinction, but the pandemic scares me more. No, nuclear war wouldn't do it [LW · GW]. That would require salted bombs, which have been theorized, but never deployed. Can't happen in the next 30 minutes. Fallout becomes survivable (if unhealthy) in a few days. Nobody is really interested in bombing New Zealand. They're too far away from everybody else to matter. Nuclear winter risk has been greatly exaggerated, and humans are more omnivorous than you'd think, especially with even simple technology helping to process food sources. Not to say that a nuclear war wouldn't be catastrophic, but there would be survivors. A lot of them.

A communicable disease that's too deadly (like SARS-1) tends to burn itself out before spreading much, but an engineered (or natural!) pandemic could plausibly thread the needle and become something at least as bad as smallpox. A highly contagious disease that doesn't kill outright but causes brain damage or sterility might be similarly devastating to civilization, without being so self-limiting. Even New Zealand might not be safe. A nuclear war ends. A pandemic festers. Outcomes could be worse, and it's more likely to happen, and becoming more likely to happen. It's #2 for me.

And #1 is an intelligence explosion. This is not just a catastrophic risk, but an existential one. An unaligned AI destroys all value, by default. It's not going to have a conscience unless we put one in. Nobody knows how to do that. And short of a collapse of civilization, an AI takeover seems inevitable in short order. We either figure out how to build one that's aligned before that happens, and it solves all the other solvable risks, or everybody dies.