Varieties Of Argumentative Experience

post by Scott Alexander (Yvain) · 2018-05-08T08:20:02.913Z · LW · GW · 11 comments

In 2008, Paul Graham wrote How To Disagree, ranking arguments on a scale from name-calling to explicitly refuting the other person’s central point.

And that’s why, ever since 2008, Internet arguments have generally been civil and productive.

Graham’s hierarchy is useful for its intended purpose, but it isn’t really a hierarchy of disagreements. It’s a hierarchy of types of response within a disagreement. Sometimes a response is a perfectly good refutation of the other person’s point, but the point should never have been made at all, and refuting it doesn’t help. Sometimes it’s unclear how the argument even connects to the sorts of things that could in principle be proven or refuted.

If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this:

Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability.

If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them.

If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect.

I’m not saying everything fits into this model, or even that most things do. It’s just a way of thinking that I’ve found helpful. More detail on what I mean by each level:

Meta-debate is discussion of the debate itself rather than the ideas being debated. Is one side being hypocritical? Are some of the arguments involved offensive? Is someone being silenced? What biases motivate either side? Is someone ignorant? Is someone a “fanatic”? Are their beliefs a “religion”? Is someone defying a consensus? Who is the underdog? I’ve placed it in a sphinx outside the pyramid to emphasize that it isn’t a (bad) argument about the issue itself; it’s an argument about something completely different.

“Gun control proponents are just terrified of guns, and if they had more experience with them their fear would go away.”

“It was wrong for gun control opponents to prevent the CDC from researching gun statistics more thoroughly.”

“Senators who oppose gun control are in the pocket of the NRA.”

“It’s insensitive to start bringing up gun control hours after a mass shooting.”

Sometimes meta-debate can be good, productive, or necessary. For example, I think discussing “the origins of the Trump phenomenon” is interesting and important, and not just an attempt to bulverize the question of whether Trump is a good president or not. And if you want to maintain discussion norms, sometimes you do have to have discussions about who’s violating them. I even think it can sometimes be helpful to argue about which side is the underdog.

But it’s not the debate, and it’s also much more fun than the debate. It’s an inherently social question, the sort of who’s-high-status and who’s-defecting-against-group-norms question that we like a little too much. If people have to choose between this and some boring scientific question about when fetuses gain brain function, they’ll choose this every time; given the chance, meta-debate will crowd out everything else.

The other reason it’s in the sphinx is because its proper function is to guard the debate. Sure, you could spend your time writing a long essay about why creationists’ objections to radiocarbon dating are wrong. But the meta-debate is what tells you creationists generally aren’t good debate partners and you shouldn’t get involved.

Social shaming also isn’t an argument. It’s a demand for listeners to place someone outside the boundary of people who deserve to be heard; to classify them as so repugnant that arguing with them is only dignifying them. If it works, supporting one side of an argument imposes so much reputational cost that only a few weirdos dare to do it, it sinks outside the Overton Window, and the other side wins by default.

“I can’t believe it’s 2018 and we’re still letting transphobes on this forum.”

“Just another purple-haired SJW snowflake who thinks all disagreement is oppression.”

“Really, do conservatives have any consistent beliefs other than hating black people and wanting the poor to starve?”

“I see we’ve got a Silicon Valley techbro STEMlord autist here.”

Nobody expects this to convince anyone. That’s why I don’t like the term “ad hominem”, which implies that shamers are idiots who are too stupid to realize that calling someone names doesn’t refute their point. That’s not the problem. People who use this strategy know exactly what they’re doing and are often quite successful. The goal is not to convince their opponents, or even to hurt their opponent’s feelings, but to demonstrate social norms to bystanders. “Ad hominem” has the wrong implications. “Social shaming” gets it right.

Sometimes this works on a society-wide level. More often, it’s an attempt to claim a certain space, kind of like the intellectual equivalent of a gang sign. If the Jets can graffiti “FUCK THE SHARKS” on a certain bridge, but the Sharks can’t get away with graffiting “NO ACTUALLY FUCK THE JETS” on the same bridge, then almost by definition that bridge is in the Jets’ territory. This is part of the process that creates polarization and echo chambers. If you see an attempt at social shaming and feel triggered, that’s the second-best result from the perspective of the person who put it up. The best result is that you never went into that space at all. This isn’t just about keeping conservatives out of socialist spaces. It’s also about defining what kind of socialist the socialist space is for, and what kind of ideas good socialists are or aren’t allowed to hold.

I think easily 90% of online discussion is of this form right now, including some long and carefully-written thinkpieces with lots of citations. The point isn’t that it literally uses the word “fuck”; the point is that the active ingredient isn’t persuasiveness, it’s the ability to make some people feel like they’re suffering social costs for their opinion. Even genuinely persuasive arguments can be used this way if someone links them on Facebook with “This is why I keep saying Democrats are dumb” underneath.

This is similar to meta-debate, except that meta-debate can sometimes be cooperative and productive – both Trump supporters and Trump opponents could in theory work together trying to figure out the origins of the “Trump phenomenon” – and that shaming is at least sort of an attempt to resolve the argument, in a sense.

Gotchas are short claims that purport to be devastating proof that one side can’t possibly be right.

“If you like big government so much, why don’t you move to Cuba?”

“Isn’t it ironic that most pro-lifers are also against welfare and free health care? Guess they only care about babies until they’re born.”

“When guns are outlawed, only outlaws will have guns.”

These are snappy but almost always stupid. People may not move to Cuba because they don’t want government that big, because governments can be big in many ways some of which are bad, because governments can vary along dimensions other than how big they are, because countries can vary along dimensions other than what their governments are, or just because moving is hard and disruptive.

They may sometimes suggest what might, with a lot more work, be a good point. For example, the last one could be transformed into an argument like “Since it’s possible to get guns illegally with some effort, and criminals need guns to commit their crimes and are comfortable with breaking laws, it might only slightly decrease the number of guns available to criminals. And it might greatly decrease the number of guns available to law-abiding people hoping to defend themselves. So the cost of people not being able to defend themselves might be greater than the benefit of fewer criminals being able to commit crimes.” I don’t think I agree with this argument, and I might challenge assumptions like “criminals aren’t that much less likely to have guns if they’re illegal” or “law-abiding gun owners using guns in self-defense is common and an important factor to include in our calculations”. But this would be a reasonable argument and not just a gotcha. The original is a gotcha precisely because it doesn’t invite this level of analysis or even seem aware that it’s possible. It’s not saying “calculate the value of these parameters, because I think they work out in a way where this is a pretty strong argument against controlling guns”. It’s saying “gotcha!”.

Single facts are when someone presents one fact, which admittedly does support their argument, as if it solves the debate in and of itself. It’s the same sort of situation as one of the better gotchas – it could be changed into a decent argument, with work. But presenting it as if it’s supposed to change someone’s mind in and of itself is naive and sort of an aggressive act.

“The UK has gun control, and the murder rate there is only a quarter of ours.”

“The fetus has a working brain as early as the first trimester.”

“Donald Trump is known to have cheated his employees and subcontractors.”

“Hillary Clinton handled her emails in a scandalously incompetent manner and tried to cover it up.”

These are all potentially good points, with at least two caveats. First, correlation isn’t causation – the UK’s low murder rate might not be caused by its gun control, and maybe not all communist countries inevitably end up like the USSR. Second, even things with some bad features can be net good overall. Trump could be a dishonest businessman, but still have other good qualities. Hillary Clinton may be crap at email security, but skilled at other things. Even if these facts are true and causal, they only prove that a plan has at least one bad quality. At best they ought to be followed up by an argument for why that quality is really important.

I think the move from shaming to good argument is kind of a continuum. This level is around the middle. At some point, saying “I can’t believe you would support someone who could do that with her emails!” is just trying to bait Hillary supporters. And any Hillary supporter who thinks it’s really important to argue specifics of why the emails aren’t that bad, instead of focusing on the bigger picture, is taking the bait, or getting stuck in this mindset where they feel threatened if they admit there’s anything bad about Hillary, or just feeling too defensive.

Single studies are better than scattered facts since they at least prove some competent person looked into the issue formally.

“This paper from Gary Kleck shows that more guns actually cause less crime.”

“These people looked at the evidence and proved that support for Trump is motivated by authoritarianism.”

“I think you’ll find economists have already investigated this and that the minimum wage doesn’t cost jobs.”

“There’s actually a lot of proof by people analyzing many different elections that money doesn’t influence politics.”

We’ve already discussed this here before. Scientific studies are much less reliable guides to truth than most people think. On any controversial issue, there are usually many peer-reviewed studies supporting each side. Sometimes these studies are just wrong. Other times they investigate a much weaker subproblem but get billed as solving the larger problem.

There are dozens of studies proving the minimum wage does destroy jobs, and dozens of studies proving it doesn’t. Probably it depends a lot on the particular job, the size of the minimum wage, how the economy is doing otherwise, etc, etc, etc. Gary Kleck does have a lot of studies showing that more guns decrease crime, but a lot of other criminologists disagree with him. Both sides will have plausible-sounding reasons for why the other’s studies have been conclusively debunked on account of all sorts of bias and confounders, but you will actually have to look through those reasons and see if they’re right.

Usually the scientific consensus on subjects like these will be as good as you can get, but don’t trust that you know the scientific consensus unless you have read actual well-conducted surveys of scientists in the field. Your echo chamber telling you “the scientific consensus agrees with us” is definitely not sufficient.

A good-faith survey of evidence is what you get when you take all of the above into account, stop trying to devastate the other person with a mountain of facts that can’t possibly be wrong, and start looking at the studies and arguments on both sides to figure out what kind of complex picture they paint.

“Of the meta-analyses on the minimum wage, three seem to suggest it doesn’t cost jobs, and two seem to suggest it does. Looking at the potential confounders in each, I trust the ones saying it doesn’t cost jobs more.”

“The latest surveys say more than 97% of climate scientists think the earth is warming, so even though I’ve looked at your arguments for why it might not be, I think we have to go with the consensus on this one.”

“The justice system seems racially biased at the sentencing stage, but not at the arrest or verdict stages.”

“It looks like this level of gun control would cause 500 fewer murders a year, but also prevent 50 law-abiding gun owners from defending themselves. Overall I think that would be worth it.”

Isolated demands for rigor are attempts to demand that an opposing argument be held to such strict invented-on-the-spot standards that nothing (including common-sense statements everyone agrees with) could possibly clear the bar.

“You can’t be an atheist if you can’t prove God doesn’t exist.”

“Since you benefit from capitalism and all the wealth it’s made available to you, it’s hypocritical for you to oppose it.”

“Capital punishment is just state-sanctioned murder.”

“When people still criticize Trump even though the economy is doing so well, it proves they never cared about prosperity and are just blindly loyal to their party.”

The first is wrong because you can disbelieve in Bigfoot without being able to prove Bigfoot doesn’t exist – “you can never doubt something unless you can prove it doesn’t exist” is a fake rule we never apply to anything else. The second is wrong because you can be against racism even if you are a white person who presumably benefits from it; “you can never oppose something that benefits you” is a fake rule we never apply to anything else. The third is wrong because eg prison is just state-sanctioned kidnapping; “it is exactly as wrong for the state to do something as for a random criminal to do it” is a fake rule we never apply to anything else. The fourth is wrong because Republicans have also been against leaders who presided over good economies and presumably thought this was a reasonable thing to do; “it’s impossible to honestly oppose someone even when there’s a good economy” is a fake rule we never apply to anything else.

I don’t think these are necessarily badly-intentioned. We don’t have a good explicit understanding of what high-level principles we use, and we tend to make them up on the spot to fit object-level cases. But here they act to derail the argument into a stupid debate over whether it’s okay to even discuss the issue without 100% perfect impossible rigor. The solution is exactly the sort of “proving too much” argument used in the last paragraph. Then you can agree to use normal standards of rigor for the argument and move on to your real disagreements.

These are related to fully general counterarguments like “sorry, you can’t solve every problem with X”, though usually these are more meta-debate than debate.

Disputing definitions is when an argument hinges on the meaning of words, or whether something counts as a member of a category or not.

“Transgender is a mental illness.”

“The Soviet Union wasn’t really communist.”

“Wanting English as the official language is racist.”

“Abortion is murder.”

“Nobody in the US is really poor, by global standards.”

It might be important on a social basis what we call these things; for example, the social perception of transgender might shift based on whether it was commonly thought of as a mental illness or not. But if a specific argument between two people starts hinging on one of these questions, chances are something has gone wrong; neither factual nor moral questions should depend on a dispute over the way we use words. This Guide To Words is a long and comprehensive resource about these situations and how to get past them into whatever the real disagreement is.

Clarifying is when people try to figure out exactly what their opponent’s position is.

“So communists think there shouldn’t be private ownership of factories, but there might still be private ownership of things like houses and furniture?”

“Are you opposed to laws saying that convicted felons can’t get guns? What about laws saying that there has to be a waiting period?”

“Do you think there can ever be such a thing as a just war?”

This can sometimes be hostile and counterproductive. I’ve seen too many arguments degenerate into some form of “So you’re saying that rape is good and we should have more of it, are you?” No. Nobody is ever saying that. If someone thinks the other side is saying that, they’ve stopped doing honest clarification and gotten more into the performative shaming side.

But there are a lot of misunderstandings about people’s positions. Some of this is because the space of things people can believe is very wide and it’s hard to understand exactly what someone is saying. More of it is because partisan echo chambers can deliberately spread misrepresentations or cliched versions of an opponent’s arguments in order to make them look stupid, and it takes some time to realize that real opponents don’t always match the stereotype. And sometimes it’s because people don’t always have their positions down in detail themselves (eg communists’ uncertainty about what exactly a communist state would look like). At its best, clarification can help the other person notice holes in their own opinions and reveal leaps in logic that might legitimately deserve to be questioned.

Operationalizing is where both parties understand they’re in a cooperative effort to fix exactly what they’re arguing about, where the goalposts are, and what all of their terms mean.

“When I say the Soviet Union was communist, I mean that the state controlled basically all of the economy. Do you agree that’s what we’re debating here?”

“I mean that a gun buyback program similar to the one in Australia would probably lead to less gun crime in the United States and hundreds of lives saved per year.”

“If the US were to raise the national minimum wage to $15, the average poor person would be better off.”

“I’m not interested in debating whether the IPCC estimates of global warming might be too high, I’m interested in whether the real estimate is still bad enough that millions of people could die.”

An argument is operationalized when every part of it has either been reduced to a factual question with a real answer (even if we don’t know what it is), or when it’s obvious exactly what kind of non-factual disagreement is going on (for example, a difference in moral systems, or a difference in intuitions about what’s important).

The Center for Applied Rationality promotes double-cruxing, a specific technique that helps people operationalize arguments. A double-crux is a single subquestion where both sides admit that if they were wrong about the subquestion, they would change their mind. For example, if Alice (gun control opponent) would support gun control if she knew it lowered crime, and Bob (gun control supporter) would oppose gun control if he knew it would make crime worse – then the only thing they have to talk about is crime. They can ignore whether guns are important for resisting tyranny. They can ignore the role of mass shootings. They can ignore whether the NRA spokesman made an offensive comment one time. They just have to focus on crime – and that’s the sort of thing which at least in principle is tractable to studies and statistics and scientific consensus.
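
As a toy illustration (my own invention, not CFAR’s actual procedure), you can think of crux-finding as set intersection: each side lists the subquestions that would change their mind, and a double-crux is any subquestion on both lists. A minimal sketch, with the scenario and all names made up:

```python
# Toy sketch of crux-finding as set intersection. The scenario, names,
# and data structure are invented for illustration; this is not CFAR's
# actual procedure. Each side lists the subquestions where, if they
# turned out to be wrong, they would change their mind about the
# top-level claim "this gun control policy is good".

alice_cruxes = {"it lowers crime"}                           # gun control opponent
bob_cruxes = {"it lowers crime", "it stops mass shootings"}  # supporter

# A double-crux is any subquestion that is a crux for both parties.
double_cruxes = alice_cruxes & bob_cruxes
print(double_cruxes)  # {'it lowers crime'} -- the one thing worth debating
```

Everything outside the intersection (here, mass shootings, which only Bob considers a crux) drops out of the debate.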

Not every argument will have double-cruxes. Alice might still oppose gun control if it only lowered crime a little, but also vastly increased the risk of the government becoming authoritarian. A lot of things – like a decision to vote for Hillary instead of Trump – might be based on a hundred little considerations rather than a single debatable point.

But at the very least, you might be able to find a bunch of more limited cruxes. For example, a Trump supporter might admit he would probably vote Hillary if he learned that Trump was more likely to start a war than Hillary was. This isn’t quite as likely to end the whole disagreement in one fell swoop – but it still gives a more fruitful avenue for debate than the usual fact-scattering.

High-level generators of disagreement are what remains when everyone understands exactly what’s being argued and agrees on what all the evidence says, but people still have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them and serving as kind of made-up justifications. These are almost impossible to resolve even in principle.

“I feel like a populace that owns guns is free and has some level of control over its own destiny, but that if they take away our guns we’re pretty much just subjects and have to hope the government treats us well.”

“Yes, there are some arguments for why this war might be just, and how it might liberate people who are suffering terribly. But I feel like we always hear this kind of thing and it never pans out. And every time we declare war, that reinforces a culture where things can be solved by force. I think we need to take an unconditional stance against aggressive war, always and forever.”

“Even though I can’t tell you how this regulation would go wrong, in past experience a lot of well-intentioned regulations have ended up backfiring horribly. I just think we should have a bias against solving all problems by regulating them.”

“Capital punishment might decrease crime, but I draw the line at intentionally killing people. I don’t want to live in a society that does that, no matter what its reasons.”

Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like if it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.

You can always try debating these points further. But these sorts of high-level generators are usually formed from hundreds of different cases and can’t easily be simplified or disproven. Maybe the best you can do is share the situations that led to you having the generators you do. Sometimes good art can help.

The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you, might be your intellectual equal, and is equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.

This doesn’t mean everyone is equally right and okay. You’ve reached this level when you agree that the situation is complicated enough that a reasonable person with reasonable high-level generators could disagree with you. If 100% of the evidence supports your side, and there’s no reasonable way that any set of sane heuristics or caveats could make someone disagree, then (unless you’re missing something) your opponent might just be an idiot.

Some thoughts on the overall arrangement:

1. If anybody in an argument is operating on a low level, the entire argument is now on that low level. First, because people will feel compelled to refute the low-level point before continuing. Second, because we’re only human, and if someone tries to shame/gotcha you, the natural response is to try to shame/gotcha them back.

2. The blue column on the left is factual disagreements; the red column on the right is philosophical disagreements. The highest level you’ll be able to reach is the lower of where you stand on the two columns.

3. Higher levels require more vulnerability. If you admit that the data are mixed but seem to slightly favor your side, and your opponent says that every good study ever has always favored his side plus also you are a racist communist – well, you kind of walked into that one. In particular, exploring high-level generators of disagreement requires a lot of trust, since someone who is at all hostile can easily frame this as “See! He admits that he’s biased and just going off his intuitions!”

4. If you hold the conversation in private, you’re almost guaranteed to avoid everything below the lower dotted line. Everything below that is a show put on for spectators.

5. If you’re intelligent, decent, and philosophically sophisticated, you can avoid everything below the higher dotted line. Everything below that is either a show or some form of mistake; everything above it is impossible to avoid no matter how great you are.

6. The shorter and more public the medium, the more pressure there is to stick to the lower levels. Twitter is great for shaming, but it’s almost impossible to have a good-faith survey of evidence there, or use it to operationalize a tricky definitional question.

7. Sometimes the high-level generators of disagreement are other, even more complicated questions. For example, a lot of people’s views come from their religion. Now you’ve got a whole different debate.

8. And a lot of the facts you have to agree on in a survey of the evidence are also complicated. I once saw a communism vs. capitalism argument degenerate into a discussion of whether government works better than private industry, then whether NASA was better than SpaceX, then whether some particular NASA rocket engine design was better than a corresponding SpaceX design. I never did learn whether they figured out whose rocket engine was better, or whether that helped them solve the communism vs. capitalism question. But it seems pretty clear that the degeneration into subquestions and discovery of superquestions can go on forever. This is the stage a lot of discussions get bogged down in, and one reason why pruning techniques like double-cruxes are so important.

9. Try to classify arguments you see in the wild on this system, and you’ll find that some fit and others don’t. But the main thing you’ll find is how few real arguments there are. This is something I tried to hammer in during the last election, when people were complaining “Well, we tried to debate Trump supporters, they didn’t change their mind, guess reason and democracy don’t work”. Arguments above the first dotted line are rare; arguments above the second are basically nonexistent in public unless you look really hard.

But what’s the point? If you’re just going to end up at the high-level generators of disagreement, why do all the work?

First, because if you do it right you’ll end up respecting the other person. Going through all the motions might not produce agreement, but it should produce the feeling that the other person came to their belief honestly, isn’t just stupid and evil, and can be reasoned with on other subjects. The natural tendency is to assume that people on the other side just don’t know (or deliberately avoid knowing) the facts, or are using weird perverse rules of reasoning to ensure they get the conclusions they want. Go through the whole process, and you will find some ignorance, and you will find some bias, but they’ll probably be on both sides, and the exact way they work might surprise you.

Second, because – and this is total conjecture – this deals a tiny bit of damage to the high-level generators of disagreement. I think of these as Bayesian priors; you’ve looked at a hundred cases, all of them have been X, so when you see something that looks like not-X, you can assume you’re wrong – see the example above where the libertarian admits there is no clear argument against this particular regulation, but is wary enough of regulations to suspect there’s something they’re missing. But in this kind of math, the prior shifts the perception of the evidence, but the evidence also shifts the perception of the prior.

Imagine that, throughout your life, you’ve learned that UFO stories are fakes and hoaxes. Some friend of yours sees a UFO, and you assume (based on your priors) that it’s probably fake. They try to convince you. They show you the spot in their backyard where it landed and singed the grass. They show you the mysterious metal object they took as a souvenir. It seems plausible, but you still have too much of a prior on UFOs being fake, and so you assume they made it up.

Now imagine another friend has the same experience, and also shows you good evidence. And you hear about someone the next town over who says the same thing. After ten or twenty of these, maybe you start wondering if there’s something to all these UFO stories. Your overall skepticism of UFOs has made you dismiss each particular story, but each story has also dealt a little damage to your overall skepticism.

I think the high-level generators might work the same way. The libertarian says “Everything I’ve learned thus far makes me think government regulations fail.” You demonstrate what looks like a successful government regulation. The libertarian doubts, but also becomes slightly more receptive to the possibility of those regulations occasionally being useful. Do this a hundred times, and they might be more willing to accept regulations in general.
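
This “damage to the prior” mechanism is just iterated Bayesian updating. Here’s a minimal sketch of the UFO example in Python, with every probability invented purely for illustration:

```python
# Toy Bayesian model of the UFO example above. Every number here is
# invented purely for illustration. Each report is weak evidence: twice
# as likely if UFOs are real as if they aren't, so no single report
# moves the needle much on its own.

prior_real = 0.001        # starting credence that UFOs are real
p_report_if_real = 0.10   # chance a given friend reports one, if real
p_report_if_fake = 0.05   # chance of a convincing hoax or mistake, if not

credence = prior_real
for n in range(1, 21):
    # Bayes' rule: P(real | report) = P(report | real) * P(real) / P(report)
    joint_real = p_report_if_real * credence
    joint_fake = p_report_if_fake * (1 - credence)
    credence = joint_real / (joint_real + joint_fake)
    if n in (1, 5, 10, 20):
        print(f"after {n:2d} reports: P(real) = {credence:.3f}")

# after  1 reports: P(real) = 0.002   <- each story individually dismissed
# after  5 reports: P(real) = 0.031
# after 10 reports: P(real) = 0.506   <- "maybe you start wondering"
# after 20 reports: P(real) = 0.999
```

The prior swamps each individual update (after one report you’re still at 0.2%), but the updates compound, and by the tenth report the skeptical prior has mostly been eaten away.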

As the old saying goes, “First they ignore you, then they laugh at you, then they fight you, then they fight you half-heartedly, then they’re neutral, then they grudgingly say you might have a point even though you’re annoying, then they say on balance you’re mostly right although you ignore some of the most important facets of the issue, then you win.”

I notice SSC commenter John Nerst is talking about a science of disagreement and has set up a subreddit for discussing it. I only learned about it after mostly finishing this post, so I haven’t looked into it as much as I should, but it might make good followup reading.

11 comments

comment by Rob Bensinger (RobbBB) · 2018-05-11T22:34:23.704Z · LW(p) · GW(p)

I like this post, though I wish it were explicit about the fact that the subject matter is really "relatively intractable disagreements between smart, well-informed people about Big Issues", not "arguments" in general.

If everyone in the discussion is smart and well-informed, and the subject is a Big Issue, then trying to resolve the issue by bringing up an isolated fact tends to be a worse use of time, or is further away from people's cruxes, than trying to survey all the evidence, which tends to be worse / less cruxy than delving into high-level generators. But:

  • A lot of arguments aren't about Big Issues. One example I've seen: Alice and Bob disagreed about whether a politician had made an inflammatory gesture, based on contradictory news reports. Alice tracked down a recording and showed it to Bob, while noting a more plausible explanation for the gesture; this convinced Bob, even though it was a mere fact and not a literature review or philosophical treatise.
  • A lot of Big-Issue-adjacent arguments aren't that sophisticated. If you read Scott's post and then go argue with someone who says "evolution is just a theory", it will often be the case that the disagreement is best resolved by just clarifying definitions, not by going hunting for deep generators.

An obvious reply is "well, those arguments are bad in their own right; select arguments and people-to-argue-with such that Scott's pyramid is true, and you'll be much better off". I tentatively think that's not the right approach, even though I agree that the examples I cited aren't good topics for rationalists to spend time on. Mostly I just think it's not true that smart people never believe really consequential, large-scale things for trivial-to-refute reasons. Top rationalists don't know everything, so some of their beliefs will be persistently wrong just because they misunderstood a certain term, never happened to encounter a certain isolated fact, are misremembering the results from a certain study, etc. That can lead to long arguments when the blind spot is hard to spot.

If people neglect mentioning isolated facts or studies (or clarifying definitions) because they feel like it would be lowly or disrespectful, they may just end up wasting time. And I worry that people's response to losing an argument is often to rationalize some other grounds for their original belief, in which case Scott's taxonomy can encourage people to mis-identify their own cruxes as being deeper and more intractable than they really are. This is already a temptation because it's embarrassing to admit that a policy or belief you were leaning on hard was so simple to refute.

(Possibly I don't have a substantive disagreement with Scott and I just don't like how many different dimensions of value the pyramid is collapsing. There's a sense in which arguments toward the top can be particularly valuable, but people who like the pyramid shouldn't skip over the necessary legwork at the lower levels.)

comment by Scott Alexander (Yvain) · 2019-12-20T20:49:01.114Z · LW(p) · GW(p)

I still generally endorse this post, though I agree with everyone else's caveats that many arguments aren't like this. The biggest change is that I feel like I have a slightly better understanding of "high-level generators of disagreement" now, as differences in priors, contexts, and categorizations - see my post "Mental Mountains" for more.

comment by Raemon · 2019-12-01T23:14:40.019Z · LW(p) · GW(p)

There have been several posts that classify argumentation [LW · GW] from different [LW · GW] angles, each with a somewhat different lens. Each lens seems useful in somewhat different contexts, and I'm still mulling over how to fit it all together into a comprehensive schema.

But I got a few concrete things from this post, including:

  • The distinction between disagreement vs social shaming. I think I'd seen this before but this article made the distinction more crisp.
    • the related, important point that by default, public debate often has a strong social component which is different from "honest disagreement." (Which doesn't necessarily point to any particular strategy for improving intellectual discourse, but which seems at least like an important model to have in whatever strategy you're pursuing)
  • The reminder that the meta-debate is not the debate, while often being easier/more-fun, and that you should at least be noticing when you're doing one or the other
  • A variety of examples of types of disagreements, which were individually useful to help notice what's going on in a given conversation.

comment by Raemon · 2018-05-12T07:57:56.403Z · LW(p) · GW(p)

I'm curating this post, because it lays out a useful framework that I expect people to be referring back to for a while, when sorting out why arguments aren't going anywhere productive.

The post gets some bonus points for being funny, and loses some points for being long. (An issue the mods have chatted about a bit is that we actually generally want to incentivize people to write shorter stuff, except that some of our senior writers tend to write longer pieces, and have enough skill to get away with it. The naive outcome is that people imitate the senior writers, resulting in a net tendency towards too-long posts.)

comment by Jameson Quinn (jameson-quinn) · 2020-01-10T23:29:23.858Z · LW(p) · GW(p)

As I recall, this is a solid, well-written post. Skimming it over again prior to reviewing it, nothing stands out to me as something worth mentioning here. Overall, I probably wouldn't put it on my all-time best list, or re-read it too often, but I'm certainly glad I read it once; it's better than "most" IMO, even among posts with (say) over 100 karma.

comment by Evan_Gaensbauer · 2018-05-10T00:23:38.299Z · LW(p) · GW(p)
Nobody expects this to convince anyone. That’s why I don’t like the term “ad hominem”, which implies that shamers are idiots who are too stupid to realize that calling someone names doesn’t refute their point. That’s not the problem. People who use this strategy know exactly what they’re doing and are often quite successful. The goal is not to convince their opponents, or even to hurt their opponent’s feelings, but to demonstrate social norms to bystanders. “Ad hominem” has the wrong implications. “Social shaming” gets it right.

Anecdata: my own experience is that, depending on who the audience is, looking past the shaming, ignoring the signals the shamer is trying to send, and engaging them as if social shaming were just part of the debate can work. Social shaming is meant to send a signal, but standing up to the shaming and showing it's not going to work jams the signal. That Jordan B. Peterson was able to do this to social justice activists in very sensational ways, like nobody else seemed able to, is in large part why he became so popular. I find rebuking an attempt at social shaming works better if the space it's taking place in isn't polarized to one side of the debate, and it values epistemic hygiene. If it's a space where everyone is shaming each other all the time, and they're already on the same side of a meme war anyway, going in there and standing up to their shaming to prove some kind of point will almost certainly accomplish nothing.

comment by Jan_Kulveit · 2018-05-08T11:12:32.383Z · LW(p) · GW(p)

My experience is that, in a sense, a layer of discussion "above generators" that mixes with "meta" is possible. But it's rare.

Practically, I met a person who had a very similar style of thinking, very similar cognitive skills, very similar education, and more. But we were born in different places, went through different life trajectories, and came to somewhat different generators.

In a way, they were quite close to what I imagine the bunch of computation that is "me" would think if it had started with different initial conditions.

The experience was quite moving - I decided I can kind of "import" their lifetime of experience wholesale, and assigned a lot of uncertainty to generators where we did not agree, in one step.

comment by Ben Pace (Benito) · 2019-12-01T22:40:21.621Z · LW(p) · GW(p)

This post sums over a lot of argumentative experiences and condenses them into an image to remember them by, which is a great way to help people understand communication.

Many of Scott's posts provide a glimpse of this model, where he, say, shows why a particular sociology or medical study doesn't actually end a big debate, or responds to someone lower down the triangle by moving up a level and doing a literature review; but those are all in the context of very specific arguments, and aren't supposed to be about helping the reader look at this bigger picture. This post takes the lessons from all of those experiences and produces some high-level, general insights about how argument, communication and rationality works.

I think if you’d asked me to classify types of argument, I would’ve dived straight into the grey triangle at the top, and come back with some bad first-principles models (maybe looking a bit like my post with drawings about good communication [LW · GW]), and I really appreciate that someone with such a varied experience of arguing on the internet did this more empirical version, with so many examples.

comment by zulupineapple · 2018-05-08T13:43:51.596Z · LW(p) · GW(p)

About the structure of the pyramid:

I think that "Social shaming" should be moved to meta. It is an argument about what sort of topics are allowed in what space. In theory, the Jets could participate in rational debate on whether the bridge belongs to the Sharks. The fact that in practice this "debate" is not at all constructive, is another issue.

Then the pyramid is missing its lowest rung, the one that would correspond to this social shaming. I would call it "Screaming". It's when you make a claim with nothing even remotely resembling a justification, and you are "correct" because your claim was louder and you said it more confidently than your opponent. E.g. "Hillary is a crook!", "Trump is a racist!", etc. This might work best with an audience, but it's also effective in person.

Also, if you accept a lowest rung which is not inherently social, then it looks even more like "high level generators of disagreement" (only for stupid people). Which hints at a sort of iterated approach to arguments, where intuitive disagreements are broken down into factual disagreements, which are then solved, revealing some deeper intuitive disagreements. It's cute, although I don't think it's possible to go more than a couple of iterations deep, even in the best conditions.

comment by TheWakalix · 2018-05-08T15:09:53.369Z · LW(p) · GW(p)
I might challenge assumptions like “criminals aren’t that much likely to have guns if they’re illegal”

Typo? "that much less likely" would make more sense in context.

This isn’t quite as likely to end the whole disagreemnt in a fell swoop

Typo: disagreement

comment by zulupineapple · 2018-05-08T10:27:07.345Z · LW(p) · GW(p)

About the high level generators of disagreement.

First, there can be a lot of factual truth hidden behind them. E.g. if "everything I've learned thus far makes me think government regulations fail", then we can talk about past regulations and whether they really "failed".

Second, even if we have black-box Bayesian priors with no debatable structure, we can do with them what priors are meant to be used for: predictions, bets. Does that count as a form of debate?