Blind Goaltenders: Unproductive Disagreements
post by PDV · 2017-09-28T16:19:07.241Z · score: 24 (16 votes)
If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you'll be discussing it with people who disagree, at least in part.
The question that inspired this post was "Why are some forms of disagreement so much more frustrating than others?" Why do some disagreements feel like talking to a brick wall, while others are far more productive?
My answer is that some interlocutors are 'blind goaltenders'. They not only disagree about the importance of your problem, they don't seem to understand what it is you're worried about. For example, take AI Safety. I believe that it's a serious problem, most likely the Most Important Problem, and likely to be catastrophic. I can argue about it with someone who's read a fair chunk of LessWrong or Bostrom, and they may disagree, but they will understand. Their disagreement will probably have gears. This argument may not be productive, but it won't be frustrating.
Or I could talk to someone who doesn't understand the complexity-of-value thesis or the orthogonality thesis. Their position may have plenty of nuances, but they are missing a key concept underlying our disagreement. This argument may be just as civil - or, given my friends in the rationalsphere, more civil - but it will be much more frustrating, because they are a blind goaltender with respect to AI safety. If I'm trying to convince them, for example, not to support an effort to create an AI via a massive RL model trained on a whole datacenter, they may take specific criticisms into account, but they will not be blocking the thing I care about. They can't see the problem I'm worried about, and so they'll be about as effective at forestalling it as a blind goalie.
Things this does not mean
Blind goaltenders are not always wrong. Lifelong atheists are often blind goaltenders with respect to questions of sin, faith, or other religiously-motivated behavior.
Blind goaltenders are not impossible to educate. Most people who understand your pet issue now were blind about it in the past, including you.
Blind goaltenders are not stupid. Much of the problem in AI safety is that there are a great many smart people working in ML who are nonetheless blind goaltenders.
Goaltenders who cease to be blind will not always agree with you.
Things this does mean
Part of why AI safety is such a messy fight is that, given the massive impact if the premises are true, it's rare to understand the premises, see all the metaphorical soccer balls flying at you, and still disagree. Or at least, that's how it seems from the perspective of someone who believes that AI safety is critical. (Certainly most people who disagree are missing critical premises.) This makes it very tempting to characterize people who are well-informed but disagree, such as non-AI EAs, as being blind to some aspect. (Tangentially, a shout-out to Paul Christiano, who I have strong disagreements with in this area but who definitely sees the problems.)
This idea can reconcile two contrasting narratives of the LessWrong community. The first is that it's founded on one guy's ideas and everyone believes his weird ideas. The second is that anyone you ask has a long list of their points of disagreement with Eliezer. I would replace them with the idea that LessWrong established a community which understood and could see some core premises; that AI is hard, that the world is mad, that nihil supernum. People in our community disagree, or draw different conclusions, but they understand enough of the implications of those premises to share a foundation.
This relates strongly to the Intellectual Turing Test, and its differences from steelmanning. Someone who can pass the ITT for your position has demonstrated that they understand your position and why you hold it, and is therefore not blind to your premises. Someone who is a blind goaltender can do their best to steelman you, even with honest intentions, but they will not succeed at interpreting you charitably. The ITT is both a diagnostic for blindness and an attempt to cure it; steelmanning is merely a lossier diagnostic.