Thoughts on tackling blindspots

post by Hazard · 2018-09-27T01:06:53.283Z · LW · GW · 7 comments

I went to my first CFAR workshop the other week, and it was quite intense. The biggest change by far is that I came face to face with some huge blindspots and can now see more clearly many of the ways I've been fooling myself, not allowing myself to care about things, and pushing people away. Since blindspots pose the prototypical problem of "How the fuck are you supposed to find a thing that you aren't capable of finding?", I wanted to share the things I think helped me spot some of mine.

This is rough draft mode, and I think I'm going to just settle for bullet points in this pass-through.

To quote Draco from HPMOR:
To figure out a strange plot, look at what happens, then ask who benefits

i.e. look at all of the first impressions I make of people, notice that they all add up to "People aren't worth talking to," and get suspicious.

7 comments

comment by Shmi (shminux) · 2018-09-27T06:25:23.635Z · LW(p) · GW(p)

I wish the CFAR materials were freely available, or at least were for sale without a $4000 workshop. I don't understand the business model. Textbooks are no substitute for lectures, labs and seminars, but more of an advertisement for them, since doing all the work on one's own is virtually impossible. The current model also gives a vibe of cultish exclusivity, where only those committed, inducted and indoctrinated can be trusted with the sacred knowledge. Odds are, I am missing something.

Replies from: None, PeterMcCluskey
comment by [deleted] · 2018-09-27T14:50:37.150Z · LW(p) · GW(p)

You might already be aware, but there's the Unofficial CFAR Canon List [LW · GW], which someone put together a while back; it compiles a lot of their earlier material (though some things have since changed).

If you're looking for more derivative content written by people who have gone to CFAR workshops, there's also the Hammertime [? · GW] sequence alkjash wrote and the Instrumental Rationality [? · GW] sequence I wrote.

Replies from: Hazard
comment by Hazard · 2018-09-28T12:19:32.037Z · LW(p) · GW(p)

Shminux, here's some useful framing from my experience. I had read the Hammertime sequence and the Instrumental Rationality sequence beforehand, and only a handful of the techniques or ideas introduced at CFAR were ones I'd never heard of before. Yet I got useful stuff out of almost every class; I really felt that the people + context were the main forces driving my learning/growth. All that to say: the things Owen linked to cover a huge swath of the CFAR content, though I'd still expect going to CFAR to be a fruitful experience for someone who's read it all.

comment by PeterMcCluskey · 2018-09-28T20:44:06.052Z · LW(p) · GW(p)

CFAR doesn't have anything resembling a textbook that would help advertise a lecture or seminar.

Some better analogies for what they have are notes that would supplement an improv class, or a yoga class, or a meditation retreat. Unlike textbooks / lectures, this category of teaching involves a fair amount of influencing system 1, in ways that are poorly captured by materials that are directed mainly at system 2. Another analogy for what they provide is group psychotherapy - in that example, something textbook-like seems somewhat valuable, but I think there are still good reasons not to expect a close connection between a specific textbook and a specific instance of group psychotherapy.

And calling CFAR's strategy a business model is a bit misleading - a good deal of their strategy involves focusing on free or very low cost workshops for people who show AI-related promise. They seem to get enough ordinary rationalists who pay $4000 via word of mouth that they can afford to give low priority to attracting more participants who will pay full price.

comment by jmh · 2018-09-28T11:49:50.019Z · LW(p) · GW(p)

Interesting for me.

Reading this got me thinking about seeing blind spots and the way we find black holes. We (well, *they*, but I'll include myself for fun) infer a black hole's presence from the distortions in the surrounding area rather than observing it directly. But in the case of personal blind spots, I think we ultimately have the ability to examine them closely, once we are able to identify them and have the strength and discipline to confront them.

This made me wonder whether a more important shift than the one from "reasonable/plausible" to "true" might be asking, "What am I protecting myself from?" Once we have that correctly identified, we can then ask why we are doing so.

comment by binary_doge · 2018-10-01T17:44:11.320Z · LW(p) · GW(p)

This might be trivial, but in the most basic sense, noticing where one has blind spots can start with noticing where one's behavior differs from how one predicted one would behave, or from how the people around one behave. If you thought some task was going to be easy and it's not, or you expected mixed results when predicting something and don't get them (even if you think you might be more accurate than average, what's important here is the difference), you might be neglecting something important.

It's kind of similar to the way some expert AI systems try to notice blind spots: they "view" either demonstrations of proper behavior or recordings of plenty of other agents (probably humans) performing the relevant tasks, and if there's some difference between that and what they would do themselves, it raises the probability of a blind spot in the model.
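To make that concrete, here's a minimal sketch of such a discrepancy check. Everything in it is an illustrative assumption (the state encoding, the `policy` function, the 0.5 disagreement threshold), not a detail of any particular system:

```python
# A minimal sketch of discrepancy-based blind spot detection: compare what
# a model would do in each observed situation against what demonstrators
# actually did, and flag situations with a high disagreement rate.
# The states, actions, and 0.5 threshold are illustrative assumptions.
from collections import defaultdict

def flag_blind_spots(demonstrations, policy, threshold=0.5):
    """Return states where the model disagrees with demonstrators
    more often than `threshold`.

    demonstrations: iterable of (state, demonstrated_action) pairs
    policy: function mapping a state to the model's chosen action
    """
    visits = defaultdict(int)
    disagreements = defaultdict(int)
    for state, demo_action in demonstrations:
        visits[state] += 1
        if policy(state) != demo_action:
            disagreements[state] += 1
    return {s for s in visits if disagreements[s] / visits[s] > threshold}

# Toy usage: a policy that always brakes gets flagged on the state where
# demonstrators consistently accelerate, but not where they also brake.
demos = [("clear_road", "accelerate")] * 4 + [("obstacle", "brake")] * 4
print(flag_blind_spots(demos, lambda s: "brake"))  # -> {'clear_road'}
```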

Once you find something like that, if asking yourself "why am I doing this differently?" rouses a strong emotional response, that's a non-negligible red flag for a blind spot, IMO.

comment by Pattern · 2018-09-27T17:21:46.574Z · LW(p) · GW(p)
There is a need or want hiding there which unless you address it, the machinery in place that was trying to server that want will fight back.

server or serve?