Blind Goaltenders: Unproductive Disagreements

post by PDV · 2017-09-28T16:19:07.241Z · 8 comments

Contents

  Things this does not mean
  Things this does mean
8 comments

If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you'll be discussing it with people who disagree, at least in part.

The question that inspired this post was "Why are some forms of disagreement so much more frustrating than others?" Why do some disagreements feel like talking to a brick wall, while others are far more productive?

My answer is that some interlocutors are 'blind goaltenders'. They not only disagree about the importance of your problem; they don't seem to understand what it is you're worried about. For example, take AI safety. I believe that it's a serious problem, most likely the Most Important Problem, and likely to be catastrophic. I can argue about it with someone who's read a fair chunk of LessWrong or Bostrom, and they may disagree, but they will understand. Their disagreement will probably have gears. This argument may not be productive, but it won't be frustrating.

Or I could talk to someone who doesn't understand the complexity of value thesis or the orthogonality thesis. Their position may have plenty of nuances, but they are missing a key concept underlying our disagreement. This argument may be just as civil - or, given my friends in the rationalsphere, more civil - but it will be much more frustrating, because they are a blind goaltender with respect to AI safety. If I'm trying to convince them, for example, not to support an effort to create an AI via a massive RL model trained on a whole datacenter, they may take specific criticisms into account, but they will not be blocking the thing I care about. They can't see the problem I'm worried about, and so they'll be about as effective at forestalling it as a blind goalie.

Things this does not mean

Blind goaltenders are not always wrong. Lifelong atheists are often blind goaltenders with respect to questions of sin, faith, or other religiously-motivated behavior.

Blind goaltenders are not impossible to educate. Most people who understand your pet issue now were blind about it in the past, including you.

Blind goaltenders are not stupid. Much of the problem in AI safety is that there are a great many smart people working in ML who are nonetheless blind goaltenders.

Goaltenders who cease to be blind will not always agree with you.

Things this does mean

Part of why AI safety is such a messy fight is that, given the massive impact if the premises are true, it's rare to understand the premises, see all the metaphorical soccer balls flying at you, and still disagree. Or at least, that's how it seems from the perspective of someone who believes that AI safety is critical. (Certainly most people who disagree are missing critical premises.) This makes it very tempting to characterize people who are well-informed but disagree, such as non-AI EAs, as being blind to some aspect. (Tangentially, a shout-out to Paul Christiano, with whom I have strong disagreements in this area but who definitely sees the problems.)

This idea can reconcile two contrasting narratives of the LessWrong community. The first is that it's founded on one guy's ideas, and everyone believes his weird ideas. The second is that anyone you ask has a long list of their points of disagreement with Eliezer. I would replace both with the idea that LessWrong established a community which understood and could see some core premises: that AI is hard, that the world is mad, that nihil supernum. People in our community disagree, or draw different conclusions, but they understand enough of the implications of those premises to share a foundation.

This relates strongly to the intellectual Turing test (ITT), and its differences from steelmanning. Someone who can pass the ITT for your position has demonstrated that they understand your position and why you hold it, and therefore is not blind to your premises. Someone who is a blind goaltender can do their best to steelman you, even with honest intentions, but they will not succeed at interpreting you charitably. The ITT is both a diagnostic for blindness and an attempt to cure it; steelmanning is merely a lossier diagnostic.

8 comments

comment by Chris_Leong · 2017-09-28T21:45:48.378Z

Can you give a one- or two-sentence definition of what you mean by "blind goal-keepers"? This isn't explicitly stated anywhere.

Replies from: spiralingintocontrol, PDV
comment by spiralingintocontrol · 2017-09-28T22:37:35.777Z

+1, I feel like this post is getting at something useful, but I'm too confused by the use of terminology to understand it.

comment by PDV · 2017-09-28T23:09:57.323Z

I can try.
It's someone who doesn't understand your objection, and doesn't seem to understand why you think it's important that they understand it. (In stronger cases, they don't even understand that they don't understand it.) This generally feels like they are dodging the point of disagreement no matter how you bring it up, as if it's foreign to their entire worldview.

Replies from: Chris_Leong
comment by Chris_Leong · 2017-09-29T00:57:16.192Z

Thanks, that helps, but I don't suppose you could break down "doesn't seem to understand why you think it's important that they understand it"?

How are they acting? Are they going, "Man, you're worried about AI? Clearly you are crazy!" or are they like, "Clearly, a super-intelligent AI would also be much more morally developed than any human, so there's no issue whatsoever".

If it is the second, what is the issue? Is it that they are completely convinced of their own perspective, or is it that they don't understand that you might want to challenge that claim, or is it something else?

Replies from: PDV
comment by PDV · 2017-09-29T02:26:58.433Z

Any of those could count.

comment by Said Achmiz (SaidAchmiz) · 2017-09-28T17:11:42.568Z

(Upvoted, but also: this is an excellent post, and I find it encouraging w.r.t. the quality of content I expect to see on The New LessWrong.)

Question: I wonder how the following fits into your paradigm - whether you consider it covered by the cases you listed, or whether it is something else:

Suppose that a goaltender is blind, but then his eyes are opened; having learned to see, however, he chooses quite deliberately to ignore the ball, just as before. (That is, an interlocutor who truly understands, but nonetheless fundamentally disagrees.)

You say:

Goaltenders who cease to be blind will not always agree with you.

… which seems related to what I describe, but doesn’t (I think) quite account for everything. I have in mind this distinction:

  1. A blind goaltender may learn to see; and thereafter may agree with you about the importance of blocking the ball from getting into the net; but may nonetheless comprehensively disagree with you about how the ball should be blocked, how much effort should be expended on it, etc.

  2. Conversely, a blind goaltender may learn to see; but, seeing the ball, he may nonetheless disdain it as irrelevant, and may watch the ball fly right into the net with equanimity and even approval.

Does that seem like a reasonable breakdown? (And if so, which do you think is more common? I’m not sure, myself…)

comment by Gunnar_Zarncke · 2017-09-29T08:02:49.384Z

You write

If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger.

and I'd like to add the disclaimer '...if you want to focus on the problem'. Which you might want to do, as in your main example of AI risk. It might not be the best approach in general (and you do explicitly say "generally" there). It might not be the best approach if the pro and con positions are better known or more evenly distributed in the general population (or at least in the part of the population that is educated about such things).

Replies from: PDV
comment by PDV · 2017-10-03T18:24:04.386Z

If you're worried about an oncoming problem and discussing it with others to plan