Choosing battles (on the Internet)

post by PatrickDFarley · 2022-01-20T15:38:17.953Z · LW · GW · 2 comments

Contents

  The warm fuzzy hope of good-faith conversations
  Failure modes
  Purposeful disengagement
  Burnout risk

A couple of years ago, I noticed that I'd been holding onto a counterproductive mindset, one I've often seen in some of the online communities I visit. I like talking with people who believe in the truth-seeking power of good-faith disagreements, but some of these people tend to overestimate the value of argument, which leads them into conversations they shouldn't have. I think I have good heuristics for avoiding that without giving up on the project of rational discourse entirely.

The warm fuzzy hope of good-faith conversations

There's this optimistic belief that careful, rational arguments, even between strongly opposed people, will always lead to truth. How could they not? There is only one system of logic, and only one reality from which to gather evidence.

And let me be clear that I've benefited tremendously from engaging with people who think like this. They're basically right—good-faith arguments help you weed out your false beliefs and sharpen your correct ones, and having correct beliefs is good and advantageous.

But often with that optimism comes a fear that if you ever ignore an argument, you're missing an opportunity to learn the truth. If I say reality is X, and you say it's Y, and I refuse to argue with you for whatever reason, then I almost feel like I'm daring reality to be Y, leaving me helplessly ignorant about it for the rest of my life—or worse, until I get into a situation where knowing X vs. Y really matters, and I make the wrong decision. And because that notion scares me, I stay engaged in our argument no matter what.

Failure modes

This is why I sometimes see well-meaning writers/commenters repeatedly typing out carefully worded paragraphs to their opponents, who are obvious trolls, or who will ignore most points and twist what remains, or who don't stay on topic, or who can't seem to make their point more than once without changing it, or who just aren't demonstrating enough intelligence to warrant any hope of mutual understanding. I see this a lot online and it saddens me, because the person who's being more careful to offer value in the argument is getting a worse deal for it.

It also worries me because it can eventually lead those kinds of people to snap and give up on the value of argument altogether. More on that later.

The well-meaning commenters think, "Sure my efforts don't seem to be working, but what if my opponent is still right?" They tell themselves things like the following:

And there's truth in all of the above. If you're persistent, you can learn new facts and new counterarguments from people who aren't smart, or who are bad at communication, or who aren't acting in 100% good faith. And likewise, you are at risk of getting too comfortable dismissing disagreements for trivial reasons—we all know people who do this (and whose epistemologies suffer for it).

Purposeful disengagement

But here's the part I hadn't considered for a time: my time is a limited resource, and therefore my arguing is a limited resource. Time in one conversation is time away from another. If I want to argue with others to improve my beliefs, I need to spend my arguing on situations that promise the greatest ratio of benefit to cost, where "benefit" is something like "the propagation or refinement or correction of truth-claims that I care about."

Still, screening for certain kinds of people to argue with can be a very dangerous habit. That's basically the recipe for an "echo chamber." Yet part of being human is accepting that you can't be perfectly open-minded, if for no other reason than you don't have the time.

The less dangerous move is to screen for certain topics (that are most important to you).

Which brings me to the other part I hadn't considered for a time: Badly conducted arguments shift in topic. When you engage in the sloppy kinds of conversations above, your own arguing will tend to drill down on your opponent's apparent errors, turning the conversation away from the object-level disagreement and toward a meta-level disagreement about how arguments ought to be conducted. For example, "Why don't you believe in global warming?" might become, "What evidence would convince you of global warming?" which might then become, "Here's why your standards of evidence are inconsistent." And then the entire argument is about one person's standards of evidence. Or about the importance of good faith. Or about why certain communication styles are misleading.

There's nothing necessarily wrong with that, but—is that what you wanted to argue about? Are your [rules of proper argument] beliefs the ones you most wanted to spread and/or challenge? Or was it your [global warming] beliefs? Oh yeah, the original topic.

When the topic shift happens, you're perfectly justified in passing up the opportunity to continue, because it just wasn't the opportunity you originally thought you were getting. Arguments are like other kinds of transactions, in that you need to pay attention and do some shopping in order to find the valuable ones. 

Burnout risk

It's not just that I feel bad for all the effortful arguers who trap themselves in bad arguments. It's also that if they get too fed up with their situation, they tend to snap in the other direction and take on some of the attitudes that I most worry about. I mean they give up entirely on the project of productive disagreement, and they start looking for ways to silence their opponents instead. I could write for days on why censorship is a convenient short-term solution with terrible long-term consequences, but that's not what this post is about.

I recently saw one little community that I follow calling for more censorship/bans specifically because, "It's exhausting to keep debunking the same things over and over." And the part that struck me was that I hold almost all the same beliefs they do, and I'd like to debunk the things they want to debunk, yet I'm never "exhausted." And that's simply because I only pick arguments that look like they'll benefit me in some way. I don't go on grand crusades to debunk everybody who's wrong on the Internet. I'm perfectly happy letting people be wrong on the Internet, and I think that's an attitude that needs to go along with the warm fuzzy hope, or else pursuing that hope will become unsustainable.

If you liked this post, consider subscribing to my blog at patrickdfarley.com.

2 comments


comment by Dagon · 2022-01-20T18:03:36.952Z · LW(p) · GW(p)

obXKCD already linked, so I don't need to do that, good.  I like that you're coming to the same conclusion from a different direction: you don't want to improve their models or "fix" the wrongness on behalf of someone else, you just want to learn and improve your own model (ok, probably all of the above, but focusing on internal knowledge).

> my arguing is a limited resource.

This generalizes.  Your thinking is a limited resource.  Some discussions on the internet (or in person, for that matter) are more valuable than the next-best thing you could do.  Many are not.  

Of course, there's a search cost, too - the debate in front of you may "waste" more time than reading a good book or finding a better forum/topic to join, or building a new toy on your rPi or whatever else you could be doing.  But it doesn't "waste" time in figuring out what to do, or where/what the better topics are.  I don't have a good solution for that problem, other than to notice when your current activity is not satisfying and consider the alternatives.

comment by philh · 2022-01-22T19:46:39.814Z · LW(p) · GW(p)

Tooting my own horn, but tapping out in two [LW · GW] is related. I'm not great at not having these arguments, but I've learned to limit their imposition on me.