"But It Doesn't Matter"
post by Zack_M_Davis · 2019-06-01T02:06:30.624Z · 17 comments
If you ever find yourself saying, "Even if Hypothesis H is true, it doesn't have any decision-relevant implications," you are rationalizing! The fact that H is interesting enough for you to be considering the question at all (it's not some arbitrary trivium like the 1923rd binary digit of π, or the low temperature in São Paulo on September 17, 1978) means that it must have some relevance to the things you care about. It is vanishingly improbable that your optimal decisions are going to be the same in worlds where H is true and worlds where H is false. The fact that you're tempted to say they're the same is probably because some part of you is afraid of some of the imagined consequences of H being true. But H is already true or already false! If you happen to live in a world where H is true, and you make decisions as if you lived in a world where H is false, you are thereby missing out on all the extra utility you would get if you made the H-optimal decisions instead! If you can figure out exactly what you're afraid of, maybe that will help you work out what the H-optimal decisions are. Then you'll be in a better position to successfully notice which world you actually live in.
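One way to make "decision-relevant" precise is the expected value of settling H before acting: it is zero exactly when one action is optimal in both worlds, and the post's claim is that this almost never happens for an interesting H. A minimal sketch in Python (the credence and payoff numbers are illustrative assumptions, not from the post):

```python
# Value of settling H before acting: a minimal sketch.
# The credence and payoffs below are illustrative assumptions.

p_h = 0.6  # credence that Hypothesis H is true

# utility[action][world]: payoff of each action in each world
utility = {
    "act_as_if_h":     {"h_true": 10.0, "h_false": 2.0},
    "act_as_if_not_h": {"h_true": 3.0,  "h_false": 8.0},
}

def expected_utility(action: str) -> float:
    u = utility[action]
    return p_h * u["h_true"] + (1 - p_h) * u["h_false"]

# Without settling H, you must pick one action for both worlds.
eu_ignorant = max(expected_utility(a) for a in utility)  # 6.8

# After settling H, you pick the H-optimal action in each world.
eu_informed = (p_h * max(u["h_true"] for u in utility.values())
               + (1 - p_h) * max(u["h_false"] for u in utility.values()))  # 9.2

# Positive whenever the H-optimal actions differ across worlds.
print(eu_informed - eu_ignorant)  # 2.4
```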
17 comments
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-01T05:44:51.969Z
Sounds like an argument for reading more celebrity gossip :)
↑ comment by steven0461 · 2019-06-01T17:52:21.127Z
This is a valid criticism of the second sentence as it stands, but I think Zack is pointing at a real pattern, where the same person will alternate between suggesting it matters that H is true, and, when confronted with evidence against H, suggesting it doesn't matter whether or not H is true, as an excuse not to change the habit of saying or thinking H.
comment by Benquo · 2019-06-02T18:03:31.581Z
The beginning is important in a way I think a lot of commenters missed:
> If you ever find yourself saying, "Even if Hypothesis H is true,
Ignoring questions with no decision-relevant implications, choosing to ignore them, or even actively trying to write them off (as GuySrinivasan describes) are all quite reasonable. But the specific wording here is a tell that you're motivated to dismiss Hypothesis H because there's evidence that it's true, which is in tension with the claim that it's not important whether it's true.
It would be helpful for such points to be made more explicitly, ideally contrasted with the good version of such behavior.
comment by SarahNibs (GuySrinivasan) · 2019-06-02T17:51:19.237Z
On the contrary, one of my go-to techniques for decision-making is to examine questions that seem relevant and check whether the magnitude of their possible answers is large enough for me to care which answer is right. If my choice boils down to "yes if X > 100, otherwise no," I'm pretty confident that X is somewhere around 90-110, and it turns out any answer to the question sways X by at most a tenth of a point, then I will dismiss that question and look for more important questions.
It is a flag to check for rationalization, sure.
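A minimal sketch of that pruning heuristic in Python (the threshold, uncertainty, and cutoff fraction are illustrative assumptions, not from the comment):

```python
# Dismiss questions whose largest possible effect on X is negligible
# next to the uncertainty we already have about X.
# All numbers here are illustrative assumptions.

THRESHOLD = 100.0  # decision rule: yes if X > 100, otherwise no

def decide(x_estimate: float) -> str:
    """The choice boils down to: yes if X > 100, otherwise no."""
    return "yes" if x_estimate > THRESHOLD else "no"

def worth_investigating(x_uncertainty: float, max_sway: float,
                        negligible_fraction: float = 0.05) -> bool:
    """A question is worth pursuing only if its largest possible
    effect on X isn't trivial relative to the existing error bars."""
    return max_sway >= negligible_fraction * x_uncertainty

# X is believed to be somewhere around 90-110 (uncertainty ~20),
# and any answer to this question sways X by at most 0.1:
print(decide(95.0))                                           # "no"
print(worth_investigating(x_uncertainty=20.0, max_sway=0.1))  # False -> dismiss
```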
comment by Gordon Seidoh Worley (gworley) · 2019-06-06T00:07:49.311Z
First, let me start by saying this comment is ultimately a nitpick. I agree with the thrust of your position and think in most cases your point stands. However, there's no fun and nothing to say if I leave it at that, so grab your tweezers and let's get that nit.
> Even if Hypothesis H is true, it doesn't have any decision-relevant implications,
So to me there seems to be a special case of this that is not rationalization: cases where one fact dominates another.
By "dominates" I mean that, for the purpose for which the fact is being considered (i.e., the decision about which the truth value of H may have relevant implications), there may be another hypothesis, H', such that once you know whether H' is true or false, whether H is true or false has no impact on the outcome, because H' is relatively so much more important than H.
To make this concrete, consider the case of the single-issue voter. They will vote for a candidate primarily based on whether or not that candidate supports their favored position on the single issue they care about. So let's say Candidate Brain Slug is running for President of the World on a platform whose main plank is implanting brain slugs on all people. You argue to your single-issue-voter friend that they should not vote for Brain Slug because it will put a brain slug on them, but they say that even if that's true, it's not relevant to their decision, because Brain Slug also supports a ban on trolley switches, which is your friend's single issue.
Now maybe you think your friend is being stupid, but in this case they're arguably not rationalizing. Instead they're making a decision based on their values that place such a premium on the issue of trolley switch bans that they reasonably don't care about anything else, even if it means voting for President Brain Slug and its brain slug implanting agenda.
comment by Celer · 2019-06-01T11:37:47.581Z
This seems very questionable: "does X matter?" is comparable to "is X vs not-X worth the cost of investigation?" If I'm constrained by resource limitations, and trying to acquire as much knowledge as I can given that, the ability to dismiss some answers as unimportant is critical.
comment by Vladimir_Nesov · 2019-06-01T08:22:31.076Z
With bounded attention, spending more of it noticing less relevant things puts you in a worse position to be aware of the worlds you actually live in.
Edit: Steven's comment gives a more central interpretation of the post, namely as a warning against the bailey/motte pair of effectively disbelieving something and defending that state of mind by claiming it's irrelevant. (I think the motte is also invalid: it's fine to be agnostic about things, even relevant ones. If, hypothetically, irrelevance argues against belief, it argues against disbelief as well.)
↑ comment by Thelo · 2019-06-01T18:55:19.869Z
Agreed.
All information has a cost (time is a finite resource), the value of any arbitrary bit of information is incredibly variable, and there is essentially infinite information out there, including tremendous amounts of "junk".
Therefore, if you spend time on low-value information, claiming that it has non-zero positive value, then you have that much less time to spend on the high-value information that matters. You'll spend your conversational energy on trivialities and dead ends rather than on the central principle. You'll scoop up grains of sand while ignoring the pearls next to you, so to speak. And that's bad.
↑ comment by Vladimir_Nesov · 2019-06-01T19:11:20.025Z
(See the edit in the grandparent, my initial interpretation of the post was wrong.)
comment by philh · 2019-06-01T19:33:52.741Z
I feel it's important here to distinguish between "H doesn't matter [like, at all]" and "H is not a crux [on this particular question]". The second is a much weaker claim. And I suspect a more common one, if not in words then in intent, though I can't back that up.
(I'm thinking of conversations like: "you support X because you believe H, but H isn't true" / "no, H actually doesn't matter. I support X because [unrelated reason]".)
comment by RHollerith (rhollerith_dot_com) · 2019-06-01T13:20:44.056Z
> The fact that H is interesting enough for you to be considering the question at all means that it must have some relevance to the things you care about.
Even if H came to my attention because I read it in a comment on the internet?
Even if I live in medieval Europe and am surrounded by people who like to argue about how many angels can fit on the head of a pin?
↑ comment by Mary Chernyshenko (mary-chernyshenko) · 2019-06-01T14:47:16.215Z
If you lived in medieval Europe and people argued about such things, then I'd wager you would find it pretty much relevant.
Also, principles of classification (which later gave birth to biological systematics) went through the angels & archangels stage, so - yes. Please don't neglect the discourse :)
comment by Zack_M_Davis · 2019-06-01T02:06:51.641Z
(Publication history note: this post is lightly adapted from a 14 January 2017 Facebook status update, but Facebook isn't a good permanent home for content, for various reasons.)
comment by tailcalled · 2024-07-05T09:04:08.701Z
> The fact that H is interesting enough for you to be considering the question at all (it's not some arbitrary trivium like the 1923rd binary digit of π, or the low temperature in São Paulo on September 17, 1978) means that it must have some relevance to the things you care about.
If there's not much information about H directly, then H is highly reflective of one's general priors. In domains where people care about estimating each other's priors (e.g. controversial political domains), they might jump onto H as a strong signal of those priors, but the very fact that there's not much evidence about H also puts bounds on how much effect H could have (because huge effects propagate somewhat further and thus provide more evidence, yet we know by assumption there isn't much evidence about H). When H finally gets settled, it likely becomes some annoying milquetoast thing that shouldn't really validate either prior (but often can be cast as validating one side or the other).
comment by mako yass (MakoYass) · 2019-06-01T23:53:23.150Z
One of the things that contributes to this effect: if you've believed something to be false for a long time, your culture is going to have lots of plans and extrinsic values that it drew up on that basis. If you've just found out about this thing, you're still going to have most of those plans, and you're going to say, "Well, look at these (extrinsic) values we still need to satisfy; this won't change our actions; we are still going to do exactly the same things as before, so it doesn't matter."
Only once you start going back and questioning why you have those values, and whether they still make sense in light of the new observation, will you start to see that your actions need to change. And since there isn't a straight connection between habit/action and beliefs in humans, that update still might not happen.
comment by mako yass (MakoYass) · 2019-06-01T22:44:27.510Z
I think this is really important: if an idea is new to you, if you're still in the stage of deciding whether to consider it more, then you haven't explored its implications much at all, and you cannot at that stage say "it isn't going to have important implications." So if you find yourself saying that about a claim you've just encountered, you're probably bullshitting.
If you find yourself saying "but it definitely doesn't matter" about a claim of the same magnitude as "there are gods," you're almost certainly bullshitting. You might be right, but you can't be justified.
One example of a claim that's not being explored enough for anyone to understand why it matters (because it's not being explored, because nobody understands why it matters) is simulationism. As far as I'm aware, most of us are still saying "but it doesn't matter." We'll say things like "but we can't know the simulator well enough to bargain with it or make any predictions about ways it might intervene," but I'm fairly sure I've found some really heavy implications by crossing it with AGI philosophy and physical eschatology.
I'm going to do a post about that, maybe soon, maybe this week, but I don't think you really need to see the fruits of the investigation to question the validity of whatever heuristic convinced you there wouldn't be any. You won't fall for that anymore, will you? The next time someone who hasn't looked in the box says "but there's nothing in the box; can you see anything inside the box? Don't bother to open it; you can't see anything, so there must not be anything there," you will laugh at them, won't you?
comment by Richard_Kennaway · 2019-06-01T13:40:50.361Z
This is a series of non sequiturs backed up only by italics and exclamation marks. What is the post really about?