Burdensome Details

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-20T23:46:06.000Z · LW · GW · Legacy · 34 comments

Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative . . .

—Pooh-Bah, in Gilbert and Sullivan’s The Mikado

The conjunction fallacy is when humans assign a higher probability to a proposition of the form “A and B” than to one of the propositions “A” or “B” in isolation, even though it is a theorem that conjunctions are never likelier than their conjuncts. For example, in one experiment, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.”1

A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that the conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”2 By adding extra details, you can make an outcome seem more characteristic of the process that generates it. You can make it sound more plausible that Reagan will support unwed mothers, by adding the claim that Reagan will also cut support to local governments. The implausibility of one claim is compensated by the plausibility of the other; they “average out.”

Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.
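In code, the conjunction rule is a one-liner. The numbers below are purely illustrative (nothing in the experiment pins down these values); the point is that the inequality holds no matter what you plug in:

```python
# Hypothetical probabilities for the Reagan example -- illustrative only.
p_support = 0.1            # P(Reagan supports unwed mothers)
p_cut_given_support = 0.8  # P(cuts local support | supports unwed mothers)

# Conjunction rule: P(A and B) = P(A) * P(B|A), so it can never
# exceed P(A), however plausible the added detail sounds.
p_conjunction = p_support * p_cut_given_support

assert p_conjunction <= p_support
print(round(p_conjunction, 2))  # 0.08 -- less probable than 0.1
```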

If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.

If you are presented with the conjunction fallacy in a naked, direct comparison, then you may succeed on that particular problem by consciously correcting yourself. But this is only slapping a band-aid on the problem, not fixing it in general.

In the 1982 experiment where professional forecasters assigned systematically higher probabilities to “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR” than to “Suspension of diplomatic relations between the USA and the USSR,” each experimental group was only presented with one proposition.3 What strategy could these forecasters have followed, as a group, that would have eliminated the conjunction fallacy, when no individual knew directly about the comparison? When no individual even knew that the experiment was about the conjunction fallacy? How could they have done better on their probability judgments?

Patching one gotcha as a special case doesn’t fix the general problem. The gotcha is the symptom, not the disease.

What could the forecasters have done to avoid the conjunction fallacy, without seeing the direct comparison, or even knowing that anyone was going to test them on the conjunction fallacy? It seems to me that they would need to notice the word “and.” They would need to be wary of it—not just wary, but leap back from it. Even without knowing that researchers were afterward going to test them on the conjunction fallacy particularly. They would need to notice the conjunction of two entire details, and be shocked by the audacity of anyone asking them to endorse such an insanely complicated prediction. And they would need to penalize the probability substantially—a factor of four, at least, according to the experimental details.

It might also have helped the forecasters to think about possible reasons why the US and Soviet Union would suspend diplomatic relations. The scenario is not “The US and Soviet Union suddenly suspend diplomatic relations for no reason,” but “The US and Soviet Union suspend diplomatic relations for any reason.”

And the subjects who rated “Reagan will provide federal support for unwed mothers and cut federal support to local governments”? Again, they would need to be shocked by the word “and.” Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them. They would need to think, “Reagan might or might not cut support to local governments (1 bit), but it seems very unlikely that he will support unwed mothers (4 bits). Total absurdity: 5 bits.” Or maybe, “Reagan won’t support unwed mothers. One strike and it’s out. The other proposition just makes it even worse.”
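The bit arithmetic above can be checked directly. The probabilities here are the hypothetical ones from the paragraph (1 bit ↔ 1/2, 4 bits ↔ 1/16), not measured values:

```python
import math

def absurdity(p):
    """Absurdity of a claim in bits: the negative log2 of its probability."""
    return -math.log2(p)

p_cut = 0.5       # Reagan might or might not cut local support (1 bit)
p_support = 1/16  # very unlikely he supports unwed mothers (4 bits)

# Multiplying probabilities corresponds to *adding* absurdities:
total_bits = absurdity(p_cut) + absurdity(p_support)

print(total_bits)                          # 5.0
print(absurdity(p_cut * p_support))        # 5.0 -- same answer
```

Averaging instead of adding would give 2.5 bits, i.e., treat the conjunction as more probable than its least probable conjunct, which is exactly the error.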

Similarly, consider Tversky and Kahneman’s (1983) experiment based around a six-sided die with four green faces and two red faces.4 The subjects had to bet on the sequence (1) RGRRR, (2) GRGRRR, or (3) GRRRRR appearing anywhere in twenty rolls of the dice. Sixty-five percent of the subjects chose GRGRRR, which is strictly dominated by RGRRR, since any sequence containing GRGRRR also pays off for RGRRR. How could the subjects have done better? By noticing the inclusion? Perhaps; but that is only a band-aid, it does not fix the fundamental problem. By explicitly calculating the probabilities? That would certainly fix the fundamental problem, but you can’t always calculate an exact probability.
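When you can’t calculate exactly, you can still simulate. This is a rough Monte Carlo sketch of the bet (my own illustrative simulation, not the original experiment), which also makes the strict dominance visible:

```python
import random

# Die with four green faces and two red; a sequence pays off if it
# appears anywhere in a trial of 20 rolls.
rng = random.Random(0)

def one_trial():
    return "".join(rng.choice("GGGGRR") for _ in range(20))

trials = [one_trial() for _ in range(100_000)]
p1 = sum("RGRRR" in t for t in trials) / len(trials)   # sequence 1
p2 = sum("GRGRRR" in t for t in trials) / len(trials)  # sequence 2

# Strict dominance: any trial containing GRGRRR necessarily contains
# RGRRR, so sequence 1 can never do worse than sequence 2.
assert all(("RGRRR" in t) or ("GRGRRR" not in t) for t in trials)
print(p1 >= p2)  # True
```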

The subjects lost heuristically by thinking: “Aha! Sequence 2 has the highest proportion of green to red! I should bet on Sequence 2!” To win heuristically, the subjects would need to think: “Aha! Sequence 1 is short! I should go with Sequence 1!”

They would need to feel a stronger emotional impact from Occam’s Razor—feel every added detail as a burden, even a single extra roll of the dice.

Once upon a time, I was speaking to someone who had been mesmerized by an incautious futurist (one who adds on lots of details that sound neat). I was trying to explain why I was not likewise mesmerized by these amazing, incredible theories. So I explained about the conjunction fallacy, specifically the “suspending relations ± invading Poland” experiment. And he said, “Okay, but what does this have to do with—” And I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.” And he said, “Oh.”

Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative. Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.

You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?” Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China? Is that a crystal ball in your pocket or are you just happy to be a futurist? Where are all these details coming from? Where did that specific detail come from?

For it is written:

If you can lighten your burden you must do so.

There is no straw that lacks the power to break your back.

1 Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness: Heuristics and Biases,” in Judgment Under Uncertainty, ed. Daniel Kahneman, Paul Slovic, and Amos Tversky (New York: Cambridge University Press, 1982), 84–98.

2 See Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315 and Daniel Kahneman and Shane Frederick, “Representativeness Revisited: Attribute Substitution in Intuitive Judgment,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002) for more information.

3 Tversky and Kahneman, “Extensional Versus Intuitive Reasoning.”

4 Ibid.


Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by joe5 · 2007-09-21T03:47:40.000Z · LW(p) · GW(p)

In some situations, part of the problem may simply be that instead of calculating a joint probability, they are calculating a conditional probability. The effect of mistakenly calculating a conditional probability is that one event may seem highly probable given the other event occurred; I believe this could be a mathematical reason explaining the plausibility effect in some situations.

For example: the probability that USA and USSR suspend diplomatic relations given that Russia invades Poland is probably more likely than the marginal event, USA and USSR suspend diplomatic relations.
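The distinction can be made concrete with hypothetical numbers (nothing here is from the actual 1982 forecasts): the conditional probability can be high while the joint probability stays small.

```python
p_invasion = 0.05       # P(Russia invades Poland) -- hypothetical
p_suspend_given = 0.80  # P(suspend relations | invasion) -- hypothetical

# Joint: P(invasion AND suspension) = P(invasion) * P(suspension | invasion)
p_joint = p_invasion * p_suspend_given

# Confusing the two makes the detailed scenario look likely:
print(p_suspend_given)    # 0.8  -- high conditional probability
print(round(p_joint, 2))  # 0.04 -- small joint probability
```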

comment by Felix2 · 2007-09-21T06:23:01.000Z · LW(p) · GW(p)

Here's a candidate for a question to illustrate a couple of related biases:

Given the following two dice roll records:



Which of the following is true:

A) 1 is more probable than 2.

B) 2 is more probable than 1.

C) Both are equally probable.

Now, I predict that there will be at least 1 "normal" person who answers C.

"Unbelievable," you say?

Stay tuned!

I will make a stronger prediction: If this question were posed to 1000 randomly selected, well-dressed, Nordic-looking people found purposely walking the downtown sidewalks during daytime in a large American city (with luck, eliminating the possibility that I cheat by selecting 1000 people from insane asylums or from people who know no English), I predict that there will be at least 1 person who answers C.

Why? Because it is a well known fact that there exist, in much larger numbers than 1 in 1000, people capable, willing, and even eager to use the "toilet paper tube fallacy". Any of such people combined with any of those who are susceptible to the "literalist fallacy" will answer C.

Let me make a stronger prediction. Even given a 4th choice, so slyly left out:

D) Beats me.

I predict that, still, at least one person will select C.

Now, list the following in order of probability:

a) That one person is a moron.

b) That one person is a computer programmer.

c) That one person is a card shark.

d) That one person believed that choice B was to be taken literally. That is, that B really (really!) means that the very first coin flip came out tails - NOT HEADS! - tails, the second heads, the third tails, and so on.

e) That one person ignored as much context around the dice roll question as he could. That is, that person pretended he was similar to a computer in seeing the world through what amounts to a toilet paper tube. Just the facts, Ma'am.

f) That one person is a card shark and a computer programmer.

g) b and c

h) d and e and f

i) All of the above.

"h", anyone? :)

But, a thought on this question: How to avoid the conjunction fallacy?

Perhaps a better way to do so than keying on the word "and", (which, as we all know, means "OR", but not "OR and not AND") is to key on the word "probability". That is, when you see that word (or sense its meaning) as a goal, hand the question to the modern equivalent of a four-function calculator and let it grind out the numbers. To do so otherwise would be like multiplying 10821 by 11409 in your head, wouldn't it?

Replies from: faul_sname, AstraSequi, ToveLyck
comment by faul_sname · 2012-01-08T21:52:59.317Z · LW(p) · GW(p)

A. There is significantly greater than a 1 in 2^31 chance that the coin is significantly biased towards heads. This sequence overwhelms almost all priors of fairness, and thus we can conclude that the coin is almost certainly biased towards heads.

Replies from: Kingreaper
comment by Kingreaper · 2012-01-11T00:11:12.018Z · LW(p) · GW(p)

He's rolling a die. As such, both "possibilities" are overwhelmingly improbable, as I have never seen a die labeled with heads and tails, and I spend a lot of time around dice.

Replies from: Bugmaster
comment by Bugmaster · 2012-01-11T00:22:30.534Z · LW(p) · GW(p)

Tabletop RPGs often use the term "roll M N-sided dice", or "MdN" for short, to mean, "generate M high-quality random numbers between 1 and N". The dice themselves are merely an implementation detail; they could be physical dice, or some random-number generator built into a collaborative RPG software program, etc. It's common to refer to coins as "d2"s, because that's the function that they serve.

Another interesting die roll that comes up quite often is "Md3"; the 3-sided die is usually implemented by taking the more familiar 6-sided die and replacing 4,5,6 with 1,2,3 on its faces.

The percentile die, which is a golf-ball sized polyhedron with 100 faces, is also quite iconic, though rarely used in practice due to being ridiculous. Most people just roll two 10-sided dice, instead.

Replies from: dlthomas, Kingreaper
comment by dlthomas · 2012-01-11T00:24:52.940Z · LW(p) · GW(p)

When I hear "high-quality random numbers" I think "crypto-quality random numbers" - which certainly suffice, but are clearly overkill...

Replies from: Bugmaster
comment by Bugmaster · 2012-01-11T00:25:43.141Z · LW(p) · GW(p)

You would be amazed at what tabletop gamers do and do not consider "overkill" :-)

EDIT: In the interests of full disclosure, I am a tabletop gamer, and yet I do consider crypto-quality random numbers to be overkill, but I may be in the minority on this.

Replies from: dlthomas
comment by dlthomas · 2012-01-11T00:28:14.181Z · LW(p) · GW(p)

Yes, but those are typically the same people who have rituals around their dice. Which, on reflection, seems kinda contradictory...

comment by Kingreaper · 2012-01-14T16:31:14.185Z · LW(p) · GW(p)

Yes, but a d2 has the values 1, and 2, not heads and tails.

comment by AstraSequi · 2012-02-12T20:16:02.200Z · LW(p) · GW(p)

d) That one person believed that choice B was to be taken literally. That is, that B really (really!) means that the very first coin flip came out tails - NOT HEADS! - tails, the second heads, the third tails, and so on.

I'm sorry - I suppose I'm probably missing something, but I can't think of any other possible way to interpret this question. I agree that it is far more probable to see a sequence equally containing both heads and tails than one containing only heads, but it seems like you are asking for the relative probabilities of two highly specific sequences of the same length. Could someone please explain?

comment by ToveLyck · 2012-06-27T11:26:23.290Z · LW(p) · GW(p)

Okay, Felix, I have read your painfully detailed description of a hypothetical situation. Now I wanna know what your point is.

comment by Felix2 · 2007-09-21T07:13:55.000Z · LW(p) · GW(p)

Ooooo! "Dice roll?" By, God, my good fellow, you mean, "coin flips!"

comment by Richard3 · 2007-09-21T18:50:01.000Z · LW(p) · GW(p)

Most of the time, detailed futuristic scenarios are not presented with the intent to say, "exactly this will happen." Rather, they are intended to say, "something like this will happen." Many people have trouble with purely abstract thinking, and benefit from the clarity of specific examples. If you make a general statement about the dangers of reckless scientific experiments (for example), it is likely that many of your listeners either won't be able to connect that to specifics, or will come up with examples in their minds that are very different from what you meant. If you write a novel about it called "Frankenstein," those people will get the point very vividly. The novel has approximately zero chance of coming to pass exactly as written, but it is easier to understand. Unfortunately, the detailed approach carries with it the very real danger that some people will take the fictional details to be the substance of the claim, rather than the abstract principle that they illustrate.

comment by Doug_S. · 2007-09-21T20:02:34.000Z · LW(p) · GW(p)

The variable-width typeface my browser is using makes option 1 look longer than option 2; I had to copy/paste it into Notepad to see that both sequences were equally long. If I hadn't double-checked, I would have said sequence 2 is more probable than sequence 1 because it is shorter.

comment by Nick_Tarleton · 2007-09-21T20:58:56.000Z · LW(p) · GW(p)

Echoing Richard, I can see another good reason (and yes, I read your last post) why a more complicated scenario could be assigned a higher probability. Take the USSR example. Suppose that a USSR invasion of Poland is the only reasonably-likely event that could cause a suspension of diplomatic relations. Suppose also that no one would have thought of that as a possibility until it was suggested. Suppose further that once it was suggested as a possibility, the forecasters would realize quickly that it was actually reasonably likely. (I know I've twisted the example beyond all plausibility, but there probably are real situations fitting this form.) Effectively, the forecasters who got the question including Poland have information the others don't - the realization that there is a probable incident that could cause suspension of diplomatic relations - and can rationally assign a higher probability. The forecasters are still at fault for a massive failure of imagination, but not for anything as simple and stupid as the Conjunction Fallacy.

Felix, are you saying that someone shouldn't answer C, because they should consider the context and consider a biased coin? If I knew the coin might be biased, I would answer A, but I don't see what that has to do with any of Eliezer's examples.

comment by g · 2007-09-21T22:03:26.000Z · LW(p) · GW(p)

Nick: Kahneman and Tversky actually mention the mechanism you describe as one cause of the conjunction fallacy, in their 1983 paper that Eliezer linked to. I agree that in the case where the people who see "X" and the people who see "X and Y" are different, this makes it rather unfair to call it a fallacy; K&T don't seem to think so, perhaps because it's clear that people in this situation are either underestimating the probability of X or overestimating that of X&Y or both, so they're making a mistake even if the explanation for that mistake is reasonably non-embarrassing.

I think that Felix is mostly making fun of people who try to think mathematically and who try to answer the question they're asked rather than some different question that they think might make more sense, rather than trying to make a serious point about the nature of biased coins.

comment by Felix2 · 2007-09-22T08:58:50.000Z · LW(p) · GW(p)

Nick: Nice spin! :) Context would be important if Eliezer had not asserted as a given that many, many experiments have been done to preclude any influence of context. My extremely limited experience and knowledge of psychological experiments says that there is a 100% chance that such is not a valid assertion. Imagine a QA engineer trying to skate by with the setups of psych experiments you have run in to. But, personal, anecdotal experience aside, it's real easy to believe Eliezer's assertion is true. Most people might have a hard time tuning out context, though, and therefore might have a harder time, both with conjunction fallacy questionnaires and accepting Eliezer's assertion.

g: Yes, keeping in mind that I would be first in line to answer C, myself!

Choice (B) seems a poster boy for "representation". So, that a normal person would choose B is yet another example of this, "probability" question not being a question about probability, but about "representation". Which is the point. Why is it hard to imagine that the word, "probable" does not mean, in such questions' contexts, or even, perhaps, in normal human communication, "probable" as a gambler or statistician would think of its meaning? Or, put another way, g, "who try to answer the question they're asked rather..." is an assumptive close. I don't buy it. They were not asked the question you, me, Eliezer, the logician or the autistic thought. They were asked the question that they understood. And, they have the votes to prove it. :)

So far as people making simple logical errors in computing probabilities, as is implied by the word, "fallacy", well, yeah. Your computer can beat you in both logic and probabilities. Just as your calculator can multiply better than you.

Anyway, I believe that the functional equivalent of visual illusions are inherent in anything one might call a mind. I'm just not convinced that this conjunction fallacy is such a case. The experiments mentioned seem more to identify and wonderfully clarify an interesting communications issue - one that probably stands out simply because there are, in these times, many people who make a living answering C.

comment by simplicio · 2010-03-10T20:21:10.918Z · LW(p) · GW(p)

Interesting post!

But: "Similarly, consider the six-sided die with four green faces and one red face."

I seem to be good at catching trivial mistakes.

Replies from: thomblake
comment by thomblake · 2010-03-10T20:34:29.740Z · LW(p) · GW(p)

I was going to reply that this is not obviously a mistake, since we might just be ignorant of what the other side is. Then I realized the guesses listed after suggest that the die has four red faces and one green face. Nice catch.

comment by JoshSN · 2010-05-30T17:31:20.308Z · LW(p) · GW(p)

Reagan would be unlikely to provide support to unwed mothers, but maybe as part of a deal in which he got what he wanted, a reduction in expenditures.

Replies from: JDM
comment by JDM · 2013-06-04T17:42:14.752Z · LW(p) · GW(p)

Irrelevant. If there is any possible explanation where he provides the support without that specific deal, it is automatically less likely that both happen, even if the most likely scenario (90%+) of supporting unwed mothers is given said deal. If it is the only possibility, the scenarios would be equally likely; the conjunction could still not possibly be more likely.

comment by simplicio · 2010-05-30T17:39:18.310Z · LW(p) · GW(p)

Similarly, consider the six-sided die with four green faces and one red face. The subjects had to bet on the sequence (1) "RGRRR", (2) "GRGRRR", or "GRRRRR" appearing anywhere in 20 rolls of the dice.

Did you mean to put a (3) in there?

comment by TobyBartels · 2010-07-27T23:03:06.929Z · LW(p) · GW(p)

add absurdities - where the absurdity is the log probability, so you can add it - rather than averaging them.

This is a very nice measure (which I've seen before) and term for it (which I have not seen).

Eliezer, did you develop this yourself? Should I say to other people ‘Artificial-intelligence researcher Eliezer Yudkowsky defines the absurdity of a proposition to be the opposite of the logarithm of its probability, A = –log P.’ as an introduction before I start to use it? (I threw in a minus sign so that higher probability would be lower absurdity; maybe you were taking the logarithm base 1/2 so you didn't have to do that.)

comment by FiftyTwo · 2011-09-04T13:37:30.506Z · LW(p) · GW(p)

One possibility is that our intuitive sense of 'is this statement likely to be true' is developed to detect lies by other human beings, rather than to simulate the external world.

For example, if someone is trying to convince you of a tribe member's bad behaviour, the ability to produce extra details (time/location/etc.) makes it more plausible that they are truthful rather than lying. However, in probability terms each extra detail makes the claim less likely (e.g. 'probability of bad behaviour' × 'probability of doing it in location x', etc.).

Cross-posted in the sequence re-run

comment by po8crg · 2012-09-21T17:53:25.563Z · LW(p) · GW(p)

Depends on people's definition of truth, surely?

If your scoring system for a conjunction statement where one part is true and the other is untrue is to score that as half-true, then the probabilities for the Reagan case are wholly reasonable.

(ie for "Reagan will provide federal support for unwed mothers and cut federal support to local governments", you score 1 for both parts true, 0.5 for one part true and 0 for neither part true, while for "Reagan will provide federal support for unwed mothers" you can only score 1 for true and 0 for false).

If - and it seems reasonable - the intuitive scoring system for a conjunctive statement is similar to this, then the predictions are wholly reasonable.

This means that when there is a real conjunction, we tend to misinterpret it. It seems reasonable then to guess that we don't have an intuitive approach to a true conjunction. If that's the case, then the approach to overcoming the bias is to analyse joint statements to see if a partial truth scores any points - if it does, then our intuition can be trusted more than when it does not.
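This scoring idea can be worked through with hypothetical, independent probabilities (my numbers, chosen only for illustration): under half-credit scoring, the conjunction really can have a higher expected score than the single statement.

```python
# Hypothetical probabilities, assumed independent for this sketch:
p_support = 0.2  # P(Reagan supports unwed mothers)
p_cut = 0.5      # P(Reagan cuts support to local governments)

# Expected score of the single statement (1 if true, 0 if false):
score_single = p_support

# Expected score of the conjunction under half-credit scoring
# (1 if both parts true, 0.5 if exactly one true, 0 if neither):
p_both = p_support * p_cut
p_exactly_one = p_support * (1 - p_cut) + (1 - p_support) * p_cut
score_conjunction = 1.0 * p_both + 0.5 * p_exactly_one

print(round(score_single, 2))       # 0.2
print(round(score_conjunction, 2))  # 0.35 -- the conjunction "scores" higher
```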

comment by [deleted] · 2013-01-02T20:44:34.598Z · LW(p) · GW(p)

Logically, if conjunctions made something more likely, disjunctions would make them less likely, which is surprisingly another similar fallacy people succumb to. The more times the word "or" appears in an explanation or theory, the more likely an onlooker will say "Now you're just guessing" and lose confidence in the claim, even though it necessarily becomes more likely.

comment by Miciah · 2015-08-05T02:59:52.509Z · LW(p) · GW(p)

Moreover, they would need to add absurdities - where the absurdity is the log probability, so you can add it - rather than averaging them.

This is confusing for three reasons: (1) one takes the product of probabilities, not the average, to compute conjunct probabilities; (2) the sum of two logs is the log of a product, not an average; and (3) "absurdity" is not defined in this article beyond this inline define-it-as-you-go definition, the briefness of which is incommensurate with the profundity of the concept behind the term.

Did you mean "product" rather than "average", or am I missing something?
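For what it's worth, the two operations are consistent: multiplying probabilities is the same as adding their negative logs, which is presumably what the post meant by adding absurdities (the numbers below are the post's hypothetical 1-bit and 4-bit examples):

```python
import math

p_a, p_b = 0.5, 1/16  # hypothetical conjunct probabilities

# Correct combination: multiply the probabilities...
p_joint = p_a * p_b

# ...which is exactly *adding* the log-absurdities, not averaging them:
absurdity = lambda p: -math.log2(p)
assert absurdity(p_joint) == absurdity(p_a) + absurdity(p_b)

print(absurdity(p_a), absurdity(p_b), absurdity(p_joint))  # 1.0 4.0 5.0
```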

comment by [deleted] · 2016-01-22T05:11:59.830Z · LW(p) · GW(p)

tl;dr: internal validity doesn't imply external validity

comment by Nikolaus Hansen (nikolaus-hansen) · 2019-11-22T15:16:38.563Z · LW(p) · GW(p)

I would be interested to see whether modeling the error as (falsely) computing the average of P(A) and P(B) would fit the data well. In that model, any detail that fits well with the very unlikely primary event increases its perceived likelihood.

comment by Starr Gaia St James (starr-gaia-st-james) · 2020-03-12T00:21:30.416Z · LW(p) · GW(p)

This got me thinking about making complicated plans in life. When you have fifteen steps laid out to get what you want, it sounds like a PLAN! It sounds like you covered your bases - look at all that detail! In reality a fifteen step plan has fifteen chances to go wrong, fifteen turning points where things can change. Every time you add a step the probability of your plan going awry increases. If you try to get 15 friends to the same brunch, some will be late and some won’t show up at all. Obviously complex plans can be necessary in life, but where they’re not, I’d like to avoid them.
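The arithmetic behind this intuition is stark. With a hypothetical (and optimistic) 95% success rate per step, assumed independent:

```python
# Fifteen independent steps, each succeeding with probability 0.95
# (hypothetical numbers -- the point is how fast conjunctions decay):
p_plan = 0.95 ** 15

print(round(p_plan, 2))  # 0.46 -- worse than a coin flip
```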

Replies from: motkrrh
comment by motkrrh · 2020-07-25T17:30:16.176Z · LW(p) · GW(p)

I'm reminded of Eisenhower's quote "Plans are worthless, but planning is everything." You can't convince a million men to row across the channel by just saying "we're going to kill Hitler." You have to provide them with a string of proposals that are not likely to succeed by themselves, like jumping out of airplanes, sending thousands of bombers to Germany, splitting the atom in a bomb, in order for the mountainous proposition of invading Germany to end the war seem not just possible but probable.

comment by motkrrh · 2020-07-25T17:25:28.351Z · LW(p) · GW(p)

This seems to me to help explain three closely related, essentially fictional phenomena: futurism, conspiracy theories, and suspension of disbelief in narrative fiction.

Futurism, as mentioned, becomes more palatable as details are added. AI overlords seem dubious, but spin a tale of white coated researchers, misguided investments, out of control corporations, and it suddenly becomes imaginable.

Conspiracy theories need a rabbit hole to guide victims down. Carefully constructed stories nab readers by beginning with dubious but plausible claims ("this is a first person account of Epstein's limo driver"), followed by possible but improbable ("I was then transferred to Utah to dispose of bodies for the Romney family's out of control nephew"), followed by the ludicrous ("I personally witnessed Roger Stone raising bodies from the dead on Epstein's private island"). The final claim made in isolation would be rejected by anyone still standing at the rabbit hole's edge. We need to add many improbable details for the final point to stick.

Finally, fiction in general is constructed of unlikely or impossible scenarios that we somehow are able to integrate into our identity or understanding of the outside world. We can think of every line and paragraph as a conjunction to make the final theme, like love always triumphs or good overcomes evil, not only palatable but absorbing the reader with a feeling of conviction.

comment by toothpaste · 2021-01-25T22:21:48.829Z · LW(p) · GW(p)

One of the reasons I was having trouble with the Reagan example when I was reading this for the first time was that I was interpreting it as

“Reagan will provide federal support for unwed mothers AND cut federal support to local governments” is more probable than “Reagan will provide federal support for unwed mothers AND NOT cut federal support to local governments”.

The fact that one of the sentences was present in one option and absent in the other made me think its absence implied that it would NOT happen, when that wasn't the case.

I wonder how common is that line of reasoning.

comment by Josh Smith-Brennan (josh-smith-brennan) · 2021-04-25T18:53:35.376Z · LW(p) · GW(p)

How about the statement "Either Trump is responsible for the US response to Covid-19 or Biden is responsible for the US response to Covid-19"? To say Trump is responsible means ignoring the logistical miracle Biden was able to bring about, giving the former president credit for it, and ignoring the year-long disaster that was the former president's response. But to say Biden was responsible ignores Trump's positive contributions, and also places his failures at the feet of Biden. In reality both are responsible for the response. Given the amount of time the former president was in office, it seems reasonable to give him more responsibility for the response, but the amount of action taken during Biden's term has had a huge positive effect.