Hypotheticals: The Direct Application Fallacy
post by Chris_Leong · 2018-05-09T14:23:14.808Z · LW · GW · 19 comments
A few years ago, I tried convincing some commenters that hypotheticals were important even when they weren't realistic. That failed, but I think I've spent enough time reflecting to give this another go. This time, my focus will be on challenging the following common assumption:
The Direct Application Fallacy: If a hypothetical situation can't conceivably occur, then the hypothetical situation doesn't matter.
I chose this name because the fallacy assumes that the only purpose of discussing a hypothetical is to know what would happen, or what we should do, in such a situation. It ignores the other lessons that such a discussion may teach us and the logical consequences it might have for situations that actually do occur.
(Note: This post was renamed from "Unrealistic Hypotheticals Still Contain Lessons".)
Exploiting Opportunities for Learning
In The Least Convenient Possible World [LW · GW], Scott Alexander considers the classic objection to utilitarianism that it implies a surgeon should be prepared to harvest the organs of a random traveller if it would allow them to save five other patients. Scott argues that pointing out that the random traveller's organs would probably be genetic mismatches, while "technically correct", also "completely misses the point and loses a valuable opportunity to examine the nature of morality". He also notes that responding in this manner leaves too much "wiggle room". Even if we aren't consciously aware of it, we often construct arguments to avoid believing things that we don't want to, so we can improve our rationality by limiting our ability to avoid understanding the other person's perspective. While Scott is referring to people who completely miss the point of the hypothetical, I think that dismissing a hypothetical as unrealistic often sacrifices opportunities for learning too, as we'll see below.
Practice Exercises Don't Need to be Real
Imagine that you are an instructor setting problems for your students so that they can learn an area like economics, physics or applied maths. How strongly do you care about these exercises being realistic? I would argue that this isn't very important, and that the same applies to philosophy:
1. Simplification: Students may be at a point where a realistic exercise would be quite beyond their abilities. Imagine that you are trying to teach your students how to calculate the motion of falling objects. One student complains that you are ignoring air resistance. You try to explain that you can talk about air resistance after you've covered the basics, but they insist that any discussion without air resistance is utterly pointless. Eventually you concede, but most of the class ends up failing the quiz the next week because they weren't ready for the harder problems. Similarly, philosophical problems often assume "no-one will ever know" so that you can discuss moral principles without spending 90% of the time arguing about human psychology and sociology, which have nothing to do with the point you were trying to illustrate.
2. Testing for Understanding: Students are often assigned questions as a way to gauge their understanding of a concept. Maybe no object has zero mass, but if someone can't tell you that a zero-mass object should create no gravitational force, they must have a misunderstanding somewhere (see the worked example after this list). Maybe you could ask about an object that weighs 0.01 grams instead, but then they'd have to pull out a calculator. Similarly, even if utility monsters don't exist, they provide a useful tool for clarifying utilitarianism, as they explain why "greatest good for the greatest number" isn't a completely accurate characterisation. And indeed, there's no reason why some people or organisms might not generally experience more utility than others.
3. Realism Trades off Against Other Factors: Perhaps you could find simple exercises that are realistic, or test for understanding with more realistic scenarios. However, your goal is to help your students learn, and this depends on a whole host of factors. If you insist on questions always being realistic, then this trades off against other dimensions, such as engagement, memorability and the time required to construct a situation. This last dimension is particularly important for conversations, where people have to be able to construct these situations on the fly.
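To make the zero-mass test from point 2 concrete, here's a quick sketch using the standard Newtonian formula (the zero-mass case is purely illustrative, not something any real object satisfies):

```latex
% Newton's law of gravitation
F = \frac{G m_1 m_2}{r^2}
% With an (unrealistic) zero-mass object, m_1 = 0:
F = \frac{G \cdot 0 \cdot m_2}{r^2} = 0
```

A student who can't immediately produce F = 0 here has a gap in their understanding of the formula, even though the scenario can never occur.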
This is taken for granted when talking about maths and physics, but if you want to learn to deeply understand philosophy, you'll have to accept unrealistic practice questions as well.
Applying the Unrealistic to the Real
In maths, it is very common to take the limit of a formula as some variable approaches a value, such as the limit as x approaches infinity. This technique is very useful for approximations. For example, it's easier to consider the limit as x approaches infinity of (2x^2-x+10)/(x^2+79) than to substitute in a specific value like a million. This is applied constantly throughout programming with Big-O notation. Even though an infinite dataset is completely unrealistic, this heuristic is still incredibly useful for designing algorithms.
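For completeness, here's that limit worked out by dividing the numerator and denominator by x^2 (a standard calculation):

```latex
\lim_{x \to \infty} \frac{2x^2 - x + 10}{x^2 + 79}
= \lim_{x \to \infty} \frac{2 - \frac{1}{x} + \frac{10}{x^2}}{1 + \frac{79}{x^2}}
= \frac{2 - 0 + 0}{1 + 0}
= 2
```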
Similarly, when a utilitarian points out that strict versions of deontology will always allow us to construct situations where following the rules costs us infinite utility, the unrealism of the situation doesn't make it irrelevant. Just as with Big-O notation, step 2 of the argument could very well be to scale it down to a more realistic situation. Unfortunately, many people will assume that step 2 isn't coming and judge the argument as flawed at this stage. They may even interrupt the speaker with the objection that the argument isn't realistic. This often negatively affects the conversation, as it pushes the speaker to address step 2 before they've had the opportunity to ensure that everyone has understood step 1.
Being Aware of Limitations
Consider the formula y=10/(x(x-5)). This has two discontinuities, at x=0 and x=5. (I really want a more practical example, so if you have one, please list it in the comments.) Let's pretend x represents the number of people and we know there'll always be at least one person in practice. So someone could easily wave away the discontinuity via the x=0 case and completely miss the second one. But if rationality is winning, this isn't it. If they had instead looked into it, they'd have realised that the more general issue is division by zero, and they wouldn't have overlooked the second discontinuity. Yet it is very easy for the person getting it wrong to convince themselves that it is the person trying to figure out the situation with x=0 who is irrational.
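As a minimal sketch of this (in Python; the values tested are arbitrary), ruling out x=0 because "there's always at least one person" still leaves the x=5 failure:

```python
# y = 10 / (x * (x - 5)): undefined wherever the denominator is zero.
def y(x: float) -> float:
    return 10 / (x * (x - 5))

# "There's always at least one person" rules out x = 0,
# but the underlying division-by-zero issue recurs at x = 5.
for x in [1, 4, 5]:
    try:
        print(f"y({x}) = {y(x)}")
    except ZeroDivisionError:
        print(f"y({x}) is undefined (division by zero)")
```

Running this prints finite values for x=1 and x=4, then hits the division by zero at x=5: dismissing the x=0 case as unrealistic would have hidden the general failure mode.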
Let's suppose that someone is promoting deontology and they aren't worried about theoretical situations. They just want a practical model or heuristic to help them act morally. If they are proposing a heuristic, they should fully expect it to have limitations and situations where it just completely breaks, and it would probably be useful to know what these limitations are. Some of these limitations might not be obvious, and the heuristic may even break if some of these situations occur more often than expected. Discussions of how the model behaves as the utility cost of a principle approaches infinity shouldn't be met with dismissal, but by either biting the bullet or acknowledging that the model seems to break down in those circumstances. It can still be defended as a heuristic, or you can assert that this kind of situation tends to break our intuitions (see epistemic learned helplessness), but either way it needs to be acknowledged as a limitation that can be weighed up against other limitations. After all, there could be a better model that resolves these issues.
Conclusion
One of the key threads of this post has been to not assume that you know where an argument is going. Just because someone is talking about an unrealistic situation, it doesn't follow that they aren't going to tie it back to reality. Further, you shouldn't assume that there's only a single path for this to occur. At the very least, I would suggest replacing "This is unrealistic" with "How are you going to tie it back to reality?". The latter is far superior, as it doesn't make the unwarranted assumption that the only purpose of constructing a model is to attempt to directly apply it to reality.
19 comments
comment by Matt Goldenberg (mr-hire) · 2018-05-06T21:59:17.766Z · LW(p) · GW(p)
See also: Please Don't Fight The Hypothetical [LW · GW], plus the excellent comment from David Gerard [LW(p) · GW(p)] that explains why people exhibit this behavior, and why explaining to them really nicely that this will help them learn might be seen as disingenuous.
comment by zulupineapple · 2018-05-08T08:05:48.947Z · LW(p) · GW(p)
Does someone really believe that all unrealistic hypotheticals are useless? It seems likely to me that the people you're talking about have very specific issues with very specific hypotheticals, or that they have past experiences that lead them to believe that people who create unrealistic hypotheticals tend to be full of shit, and you're just strawmanning them.
> philosophical problems often assume "no-one will ever know" so that you can discuss moral principles without having to have a detailed understanding of human psychology and sociology.
This one bothers me. What if I believe that there is literally nothing to morality besides human psychology and sociology?
↑ comment by Dagon · 2018-05-09T20:43:10.923Z · LW(p) · GW(p)
In fact, some hypotheticals are interesting _mostly_ in the way that they contradict reality. Pointing out that "nobody will ever know" produces a different intuition than "you'll know, and others probably won't automatically know, but there's a possibility with sufficient advances in forensics" is interesting, and I'd be happy to play. Simply taking a result from "nobody will ever know, and you know that with certainty" and trying to apply it to the real world is something I'd object to.
↑ comment by zulupineapple · 2018-05-10T08:02:40.352Z · LW(p) · GW(p)
You can be interested in whatever you want, but that's different from that thing being important or useful (except for its use in entertaining you).
↑ comment by Dagon · 2018-05-11T22:52:34.578Z · LW(p) · GW(p)
I think we could have an interesting and/or useful discussion about the distinction between interesting and useful for known-to-be-impossible (which is what "unrealistic" means, right?) topics.
My claim: For purposes of this conversation (trying to determine whether all, some, or no abstract hypotheticals can be validly rejected by showing that they're unreal), we're ALREADY talking about something other than reality. And for unreal things, "interesting" and "useful" are really hard to distinguish.
↑ comment by zulupineapple · 2018-05-12T06:12:36.935Z · LW(p) · GW(p)
"Useful" is a prediction. "X is useful" means that you expect to derive some utility from X in the future (or right now). And then if you have an explanation, in what way exactly X is going to give you some non-trivial utility, you may be able to convince someone that X really is useful.
"Interesting" is the feeling of curiosity, it is entirely subjective and requires no explanations. You are free to be interested in whatever you want. You can even be interested, exclusively, in useless things (to the extent that anything is really useless).
↑ comment by Dagon · 2018-05-12T14:23:54.130Z · LW(p) · GW(p)
My internal evaluation of internet discussions is much less binary than this. My curiosity tends toward things that I may get some utility from, and much of the utility is in the form of enjoyment of exploration and discussion.
There are some cases where there's more direct utility in terms of behavioral changes I can apply, but almost never on the topic at hand (abstract unrealistic hypotheticals).
↑ comment by Chris_Leong · 2018-05-09T02:24:38.121Z · LW(p) · GW(p)
"Does someone really believe that all unrealistic hypotheticals are useless?" - I don't claim an explicit belief, just that many people have a strong bias in this direction and that this often causes them to miss things that would have been obvious if they'd spent even a small amount of time thinking about it.
"This one bothers me. What if I believe that there is literally nothing to morality besides human psychology and sociology?" - Well, you can talk about psychology and sociology as it relates to the nature of morality; the point is to avoid complicating the discussion by forcing people to model the effect people finding out about a particular event would have on society and how they act.
↑ comment by zulupineapple · 2018-05-09T06:20:25.076Z · LW(p) · GW(p)
> many people have a strong bias in this direction
And I'm suggesting that the bias might be justified. Though it's hard to talk about that without specific examples.
> the point is to avoid complicating the discussion by forcing people to model the effect people finding out about a particular event would have on society and how they act.
What if this modeling explains 99% of moral choices, and when you remove it you're left with nothing but noise? Or, what if this modeling is hard-coded into my brain, and is literally impossible to turn off? I'm not trying to start an argument about whether this is true. I'm trying to show that even the simplest and most innocent-looking unrealistic problems could be hiding faulty assumptions.
↑ comment by Chris_Leong · 2018-05-09T13:56:13.305Z · LW(p) · GW(p)
"What if this modeling explains 99% of moral choices, and when you remove it you're left with nothing but noise?" - Even if it only applies to 1% of situations, it shouldn't be rounded off to zero. After all, there's a decent chance you'll encounter at least one of these situations within your lifetime. But more importantly, this is addressed by the section on Practise Exercise Don't Need to Be Real.
"Or, what if this modeling is hard coded into my brain, and is literally impossible to turn off?" - I view this similarly to showing someone a really complicated maths proof and them saying, "Given my brain, it's literally impossible for me to understand a proof this complicated". In this case you'll just have to trust other people then. However if, like philosophy, experts disagree, well I suppose you'll just have to figure out which experts to trust. But that said, I'm skeptical that this is the kind of thing hardcoded into anyone's brain.
"I'm trying to show that even the simplest an most innocent looking unrealistic problems could be hiding faulty assumptions." - The floating abstract model doesn't contain these assumptions. You've made the assumption that the model is supposed to be directly applied, which is unwarranted.
↑ comment by zulupineapple · 2018-05-10T07:57:46.327Z · LW(p) · GW(p)
The core issue, I think, is that for you "usefulness" is an extremely low bar. Indeed, it might be possible to take any question and show that the utility of having an answer to that question is > 0 (it would be quite hard to find an answer of negative utility, and it would be even harder to show that the utility is exactly 0). So, if you believe that all questions are useful, then there is no way I'll convince you that some hypotheticals are useless.
And if you don't believe this at all, then please give a few examples of useless questions, because, clearly, I don't understand your metric of usefulness/importance.
By the way, why do you use "" quotes instead of > blockquotes? The latter are much more readable.
↑ comment by Chris_Leong · 2018-05-10T14:29:10.433Z · LW(p) · GW(p)
"So, if you believe that all questions are useful, then there is no way I'll convince you that some hypotheticals are useless" - that's purely a function of proving a negative being difficult in general. Why do you expect this to be easy?
↑ comment by zulupineapple · 2018-05-10T18:20:42.883Z · LW(p) · GW(p)
There are two distinct claims:
1. That discussing unrealistic hypotheticals is usually a valuable way to spend my time (or that I tend to underestimate the value of discussing them).
2. That discussing unrealistic hypotheticals usually, eventually, produces some non-zero value.
(1) is what we disagree on, but (2) is what you seem to be proving. If I wanted to convince you that (2) is false, then I would really have to prove negatives. But it's ok, I don't actually disagree with (2), that claim is trivially true. If (2) is how you understand "usefulness", then your post is correct, but also basically void of meaning.
(1) is the claim that some real, living, non-straw humans disagree with, and it is not a claim that you defend well. And to disagree with (1) I don't need to prove negatives; I only need to pick one hypothetical I find rather useless, and ask you to show me that it is really useful. And then, if you're successful in convincing me, you will have proven that I do sometimes underestimate the value of such hypotheticals.
I tried to do this with the "no-one will ever know" hypotheticals, and I found your replies unconvincing. For example, you said:
> Even if it only applies to 1% of situations, it shouldn't be rounded off to zero.
When you say that something is not zero, you are talking about (2). If you wanted to talk about (1), you could try to explain why this 1% is either very important or a reasonable starting point, but then I could change the initial assumption to 0.1% and so on (in fact, I initially wanted to say that it applies to 0% of situations, but hesitated). At some point you have to agree that my beliefs about brains make the whole "no-one will ever know" class of hypotheticals near useless to me, which sort of contradicts your initial point.
comment by Dagon · 2018-05-09T17:41:55.081Z · LW(p) · GW(p)
It's interesting that this discussion about abstract hypotheticals mirrors pretty closely the disputes that occur for specific unrealistic hypotheticals. My disagreement is with the abstraction and universality of application, not necessarily with the thesis itself.
Those hypotheticals which I choose not to engage with without lots of specificity are the ones where I think the details matter, and the unreal-ness is making assumptions about those details or asserting that they don't matter. I disagree.
This post is ALSO saying that the specifics of the hypothetical under consideration don't matter, nor do the ways in which reality is assumed or ignored. I disagree.
↑ comment by Chris_Leong · 2018-05-09T21:45:14.366Z · LW(p) · GW(p)
"My disagreement is with the abstraction and universality of application, not necessarily with the thesis itself" - I don't quite follow this issue. I don't claim that a non-direct application always exists, just that they often do. And trying to figure out when they do or don't is comparable to trying to figure out if a random bit of maths has any real world applications. You could try to just check a bunch of possibilities, but there could always be one that you just didn't think of.
"Those hypotheticals which I choose not to engage without lots of specificity are the ones where I think the details matter, and the unreal-ness is making assumptions about those details or asserting that they don't matter." - I have no issue with someone pointing out that analysis of a hypothetical shouldn't be directly applied. However, many people seem to insist that a hypothetical includes factor X, including when factor X would massively complicate the model and distract from the purpose of the exercise.
↑ comment by Dagon · 2018-05-09T22:26:47.011Z · LW(p) · GW(p)
Hmm. I'm now even more of the opinion that you're complaining in the abstract about something that is sometimes justified and sometimes not, and you should instead complain specifically about the times when it's not. Making the abstract argument is suspicious to me - it feels like you're asking for something that I already give mixed with something that I don't intend to give.
In other words, now I'm not sure if I'm in the group of people you're complaining about / trying to change. If you'd give some specific examples, it'd be way easier to even know if we disagree :)
Note that I agree that this applies to random bits of maths too - it requires effort to distinguish between "so what?", "that's interesting", and "that has implications that matter". That effort is not automatically due from an audience - it needs to be done (or at least started) by the person presenting the theorem/result.
comment by Dagon · 2018-05-07T23:26:17.320Z · LW(p) · GW(p)
This is very context-dependent. Some hypotheticals are far enough from the real that it would be incorrect to extrapolate any results. Some map pretty well.
For the latter case, do the mapping and there can be no objection. When you say
> One of the key threads of this post has been to not assume that you know where an argument is going. Just because someone is talking about an unrealistic situation, it doesn't follow that they aren't going to tie it back to reality.
I react with "great, this forum allows you to pre-emptively make your real argument. Tie it back to reality before I even have the chance to wonder whether this edge case has any utility."
You can internally translate my "this is unrealistic" to "please tie this back to reality". To the extent that you can do so, problem solved.
↑ comment by Chris_Leong · 2018-05-08T05:55:48.200Z · LW(p) · GW(p)
I guess I'm trying to point out how a particular mindset limits people. If someone has already pre-judged something a particular way, then it'll always be harder to communicate with them and they'll need someone to explicitly spell out things they could have figured out for themselves.
↑ comment by Dagon · 2018-05-08T16:01:09.406Z · LW(p) · GW(p)
Hmm. This seems a lot like strategizing on how to convince someone (get them into a mode where they can figure something out for themselves), rather than simply sharing information or insights (by showing what you think you have learned).
Probably appropriate sometimes, but if I notice it, it'll bug me and I'll be less inclined to listen.