The Flow-Through Fallacy

post by Chris_Leong · 2023-09-13T04:28:28.390Z · LW · GW · 7 comments


There is an informal reasoning fallacy where you promote something that seems obviously important, something we should "surely do", but where we haven't thought through all of the steps involved, and so we aren't actually justified in assuming that the impact will flow through.

Here are some examples:

In each of these cases, the decision-maker might not have chosen the same option if they'd taken the time to think it through and ask themselves whether the benefits were actually likely to accrue.

Domain experience helps you anticipate that these kinds of issues are likely to crop up, but so does the habit of Murphyjitsu [? · GW]'ing your plans.

To be clear, I'm not intending to refer to the following ways in which things could go wrong:

(Please let me know if this fallacy already has a name or if you think you've thought of a better one.)

7 comments

Comments sorted by top scores.

comment by noggin-scratcher · 2023-09-13T08:30:26.675Z · LW(p) · GW(p)

There's the old syllogism,

  • Something must be done
  • This is something
  • Therefore: this must be done

Not sure if there's a snappy name for it

Replies from: Richard Horvath
comment by Richard Horvath · 2023-09-13T18:31:40.238Z · LW(p) · GW(p)

"Politician's logic"

Wiki: https://en.wikipedia.org/wiki/Politician%27s_syllogism

Snappy British sitcom clip:

comment by romeostevensit · 2023-09-13T05:10:13.037Z · LW(p) · GW(p)

Related effects are referred to under the headings of lost purposes and principal-agent problems.

comment by Nathaniel Monson (nathaniel-monson) · 2023-09-13T05:05:51.593Z · LW(p) · GW(p)

I think lots of people would say that all three examples you gave are more about signalling than about genuinely attempting to accomplish a goal.

Replies from: zrkrlc
comment by junk heap homotopy (zrkrlc) · 2023-09-13T15:34:48.492Z · LW(p) · GW(p)

I wouldn’t say that. Signalling, in the way you seem to have used it, implies deception on their part, but each of these instances could just be a skill issue on their end: an inability to construct the right causal graph with sufficient resolution.

For what it’s worth, whatever this pattern is pointing at also applies to how wrong most of us were about the AI box problem, i.e., that some humans by default would just let the damn thing out without needing to be persuaded.

Replies from: Viliam
comment by Viliam · 2023-09-13T19:43:35.835Z · LW(p) · GW(p)

How would one even distinguish between those who don't actually care about solving the problem and only want to signal that they care, and those who care but are too stupid to realize that intent is not magic? I believe that both do exist in the real world.

I would probably start by charitably assuming stupidity, and try to explain. If the explanations keep failing mysteriously, I would gradually update towards them not actually wanting to achieve the declared goal.

comment by Dweomite · 2023-09-14T02:14:26.897Z · LW(p) · GW(p)

Sounds similar to fabricated options [LW · GW].