The Art and Science of Intuition Pumping
post by adamShimi · 2022-02-22T00:18:21.535Z
Epistemic status: exploratory
What is “intuition pumping”? I had seen the term used on LW, and probably used it myself, without ever wondering much about what “intuition pumps” even are. Pure pedagogical illustrations? Mere rhetorical devices? Subtle epistemic tools? My intuition about intuitions favored the latter, but I also knew well how intuition can mislead.
To make sense of all of this, I went back to the source, the inventor and master wielder of intuition pumps: Daniel Dennett. Not only did he coin the term and use it abundantly, but his “Intuition Pumps and Other Tools for Thinking” teaches how to use intuition pumps his way.
Spoiler alert: they can and should be used as thinking tools for finding the essential parts of a problem. But that requires a meta-level analysis that Dennett calls “turning the knobs”: checking the robustness of the intuition to various changes in the story.
Intuition pumps’ origin story
Everyone I’ve read on intuition pumps, Dennett included, points to his response to Searle’s Chinese Room as the birth of the term.
(The Milk of Human Intentionality, Dennett, 1980)
Searle's form of argument is a familiar one to philosophers: he has constructed what one might call an intuition pump, a device for provoking a family of intuitions by producing variations on a basic thought experiment. An intuition pump is not, typically, an engine of discovery, but a persuader or pedagogical tool — a way of getting people to see things your way once you've seen the truth, as Searle thinks he has. I would be the last to disparage the use of intuition pumps — I love to use them myself — but they can be abused. In this instance I think Searle relies almost entirely on ill-gotten gains: favorable intuitions generated by misleadingly presented thought experiments.
What’s interesting here is that Dennett seems to have expanded his views on the usefulness of intuition pumps: here he explicitly denies that they are engines of discovery or clarification, but in Intuition Pumps and Other Tools for Thinking he clearly counts them as thinking tools with positive and research-relevant uses. He also gives a bit more of a definition there.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
Other thought experiments are less rigorous but often just as effective: little stories designed to provoke a heartfelt, table-thumping intuition —”Yes, of course, it has to be so!”— about whatever thesis is being defended. I have called these intuition pumps. I coined the term in the first of my public critiques of philosopher John Searle’s famous Chinese Room thought experiment, and some thinkers concluded I meant the term to be disparaging or dismissive. On the contrary, I love intuition pumps! That is, some intuition pumps are excellent, some are dubious, and only a few are downright deceptive.
One last point on origins, before I dig into the uses of intuition pumps: Dennett clearly says that he didn’t invent intuition pumps; he just coined the term. He actually sees them as the core and legacy of philosophy since its beginning (Plato’s Cave is one of his examples).
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
These are the enduring melodies of philosophy, with the staying power that ensures that students will remember them, quite vividly and accurately, years after they have forgotten the intricate surrounding arguments and analysis. A good intuition pump is more robust than any one version of it.
Choosing what to focus on
I can’t find the quote I have in mind, but Dennett clearly describes intuition pumps at multiple points as tools for simplification of problems. They gloss over the technical details and the subtleties, homing in on some part of the question.
Yet how do we know these are the relevant parts? In principle nothing forbids us from choosing any aspect of the question and discarding everything else. That’s what Dennett means when reminding us to be careful with intuition pumps: they don’t have to pump the correct intuitions.
Still, even when the choice of details is wrong, it can teach us something.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
Notice that there are two ways an intuition pump may prove valuable. If it’s well made, then either the intuitions it pumps are reliable and convincing, in which case it nicely blocks some otherwise tempting path of error, or the intuitions still seem dubious, in which case the intuition pump may help focus attention on what is wrong with its own presuppositions.
To summarize:
- If the simplification actually works and focuses on (at least some of) the relevant details, the intuition pump redirects our thoughts away from the confusion coming from irrelevant details.
- If the simplification doesn’t work, then we get some feedback on which details are missing or more relevant by examining how the intuition pump fails.
I see an additional subtlety: Dennett distinguishes between well-made intuition pumps that focus on the wrong parts and badly made intuition pumps. And my impression is that he proposes ways of verifying “well-madeness”, not so much ways of choosing which parts to focus on. I want to discuss that after presenting Dennett’s main way of checking the structural integrity of an intuition pump: turning the knobs.
Robustness to knob turning
The tool Dennett leverages for checking intuition pumps is called “turning the knobs”, following Hofstadter.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
When Doug Hofstadter and I composed The Mind’s I back in 1982, he came up with just the right advice on this score: consider the intuition pump to be a tool with many settings, and “turn all the knobs” to see if the same intuitions still get pumped when you consider variations.
By this he means changing elements of the story in many different ways, to see whether the intuition stays the same, or shifts with the changes. The claim here is that a good intuition pump (in the sense of well-made) should not be too sensitive to changes that look irrelevant.
Searle’s Chinese Room is Dennett’s favorite example of oversensitivity to turning the knobs. In “Intuition Pumps and Other Tools for Thinking”, Dennett focuses on turning the knob that controls the level of description of the whole system: if you look at the system as a whole, not just the man in the room, and think about what is needed in terms of computation to hold a conversation in Chinese, then it’s no longer obvious that the whole system doesn’t “understand” Chinese.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
Look at what we’ve just done. We’ve turned the knob on Searle’s intuition pump that controls the level of description of the program being followed. There are always many levels. At the highest level, the comprehending powers of the system are not unimaginable; we even get insights into just how the system comes to understand what it does. The system’s reply no longer looks embarrassing; it looks obviously correct. That doesn’t mean that AI of the sort Searle was criticizing actually achieves a level of competence worth calling understanding, nor that those methods, extended in the ways then imagined by those AI researchers, would likely have led to such high competences, but just that Searle’s thought experiment doesn’t succeed in what it claims to accomplish: demonstrating the flat-out impossibility of Strong AI.
There are other knobs to turn, but that task has been carried out extensively in the huge literature the Chinese Room has provoked. Here I am concentrating on the thinking tool itself, not the theories and propositions it was aimed at, and showing that it is a defective tool: it persuades by clouding our imagination, not exploiting it well.
In The Mind’s I, Hofstadter and Dennett describe at least five knobs for the Chinese Room (what sort of matter the system is made of, how accurate the computation is, how big the system is, what sort of being is inside the room, and how fast the being can work), and argue that turning them independently leads to variations in the pumped intuitions.
(The Mind’s I, Dennett and Hofstadter, 1981)
Each setting of the dials on our intuition pump yields a slightly different narrative, with different problems receding into the background and different morals drawn. Which version or versions should be trusted is a matter to settle by examining them carefully, to see which features of the narrative are doing the work. If the oversimplifications are the source of the intuitions, rather than just devices for suppressing irrelevant complications, we should mistrust the conclusions we are invited to draw. These are matters of delicate judgment, so it is no wonder that a generalized and quite justified suspicion surrounds such exercises of imagination and speculation.
Why is sensitivity to turning the knobs so bad? As Dennett and Hofstadter write above, if the intuition depends so much on the details, then it is probably created by the details to a large extent. So it’s evidence of a made intuition instead of a revealed intuition. Every intuition is pumped and made to some extent, but we prefer it when the bias depends on actual bits of evidence.
The way my theoretical computer scientist’s brain thinks about it is through smoothed analysis. Without getting into the weeds, smoothed analysis is a form of complexity analysis (of time complexity, for example) which discards isolated costly inputs. Isolated means that whenever you change the input even a little, its cost drops completely. The intuition for not counting those inputs is that you need perfect (often infinite) precision to hit them, and any noise in the input will remove the costly part.
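For readers who want the formal picture, here is a minimal sketch of the standard definition (roughly Spielman and Teng’s formulation; I’m glossing over how inputs are normalized). The smoothed complexity of an algorithm $A$ at input size $n$ and noise level $\sigma$ is

$$ C_{A}(n, \sigma) \;=\; \max_{\lVert x \rVert \le 1} \; \mathbb{E}_{g \sim \mathcal{N}(0, \sigma^2 I)} \big[\, T_A(x + g) \,\big], $$

where $T_A$ is the running time and $g$ is small random noise added to the worst-case input $x$. In words: you still take the worst case, but only after averaging the cost over tiny perturbations, so an input whose cost collapses under any small nudge barely contributes.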
The analogy I see here is with the robustness to change: Dennett wants to discard the intuitions pumped only in isolated or quasi-isolated cases. Or at the very least he’s particularly suspicious of such intuition pumps, because they don’t seem to capture the underlying structure well.
To close this section, it’s hard to resist including this witty footnote of Dennett’s on exactly this point:
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
Doug zeroed in on the phrase “bits of paper” in Searle’s essay, and showed how it encouraged people to underestimate the size and complexity of the software involved by many orders of magnitude. His commentary on Searle in our book featured this criticism, and led to a ferocious response from Searle (1982) in the pages of the New York Review of Books, because, although we had reprinted his article correctly, in his commentary Doug slipped and wrote “a few slips” where Searle had said “bits,” and this, Searle claimed, completely misrepresented his argument! If Searle is right about this, if that small inadvertent mutation transformed the machinery, this actually proved our point, in a way: if such a tiny adjustment disables or enables a thought experiment, that is something that should be drawn to the attention of all whose intuitions are up for pumping.
What to do with a robust intuition pump?
Let’s assume that we have a decently robust intuition pump; what next? Here I find Dennett harder to follow, not necessarily because he doesn’t have an answer, but because there is no core idea as clarifying as turning the knobs.
The easiest case is probably when the intuition pump works for almost all settings of the knobs. In such a situation, it sounds reasonable to consider the intuition as good and use the pump to avoid errors and mistakes.
One such example is “Daddy’s a doctor”:
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
A young child is asked what her father does, and she answers, “Daddy is a doctor.” Does she believe what she says? In one sense, of course, but what would she have to know to really believe it? (What if she’d said, “Daddy is an arbitrager” or “Daddy is an actuary”?) Suppose we suspected that she was speaking without understanding, and decided to test her. Must she be able to produce paraphrases or to expand on her claim by saying her father cures sick people? Is it enough if she knows that Daddy’s being a doctor precludes his being a butcher, a baker, a candlestick maker? Does she know what a doctor is if she lacks the concept of a fake doctor, a quack, an unlicensed practitioner? For that matter, how much does she need to understand to know that Daddy is her father? (Her adoptive father? Her “biological” father?) Clearly her understanding of what it is to be a doctor, as well as of what it is to be a father, will grow over the years, and hence her understanding of her own sentence, “Daddy is a doctor,” will grow. Can we specify — in a nonarbitrary way — how much she must know in order to understand this proposition “completely”?
The intuition pumped here is that understanding, and thus belief, come in degrees (in part due to logical non-omniscience). I find that strongly intuitive, but more relevant is the great robustness of this story. It doesn’t have to be about “doctor” (Dennett himself switches to “father” in the middle), and it doesn’t have to be a child (just someone who can still learn something about the topic at hand, which includes almost everyone if the topic is chosen accordingly). And indeed, I expect that breaking the intuition requires isolated cases like choosing a person who knows everything (literally everything) about topic A.
What happens when it’s not as clear cut?
One way to think about it is in terms of the relative area occupied by different intuitions in the space of knob settings. If two or more contradictory intuitions share that space in comparably sized chunks, that means it’s possible to pump contradictory intuitions with well-made pumps.
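To make this “relative area” picture concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the knobs, their settings, and the pumped_intuition function are hypothetical stand-ins, not anything Dennett or Hofstadter formalized. The sketch just sweeps a small grid of knob settings and reports what fraction of the grid pumps each intuition.

```python
from itertools import product

# Hypothetical knobs for a Chinese-Room-like thought experiment.
knobs = {
    "substrate": ["person", "silicon", "paper"],
    "speed": ["real_time", "glacial"],
    "scale": ["pocket_sized", "warehouse", "planet_sized"],
}

def pumped_intuition(setting):
    """Toy stand-in for the intuition a reader reports at a given knob setting."""
    # Pretend the "no understanding" intuition only survives in one isolated
    # corner of knob space: a slow, pocket-sized system made of paper.
    if setting == ("paper", "glacial", "pocket_sized"):
        return "no understanding"
    return "maybe understanding"

# Sweep the full grid of knob settings and tally which intuition each one pumps.
settings = list(product(*knobs.values()))
counts = {}
for s in settings:
    counts[pumped_intuition(s)] = counts.get(pumped_intuition(s), 0) + 1

for intuition, n in counts.items():
    print(f"{intuition}: {n}/{len(settings)} settings ({n / len(settings):.0%} of knob space)")
```

Under these made-up numbers, “no understanding” occupies 1 of 18 settings, exactly the kind of isolated region that smoothed analysis (and Dennett) would discount; if instead two intuitions each covered a sizeable chunk of the grid, the pump alone couldn’t settle the question.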
My impression is that Dennett thinks that such instances are ones where intuition pumps shouldn’t be used as arguments. When he criticizes a fallacious intuition pump, he either argues it’s not robust (like the Chinese Room) or that it’s possible to robustly pump the opposite intuition. His arguments against philosophical zombies and the Hard Problem of consciousness are in this latter category.
And yet Dennett is rarely content to cancel an intuition pump with another; he usually believes one of those intuitions is a mistake. One argument he uses is the lack of explanatory power.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
I cannot prove that there is no Hard Problem, and Chalmers can’t prove that there is one. He has one potent intuition going for him, and if it generated some striking new predictions, or promised to explain something otherwise baffling, we might join him in trying to construct a new theory of consciousness around it, but it stands alone, hard to deny but otherwise theoretically inert.
Basically, if a robustly pumped intuition leads nowhere, then be wary of it.
In any case though, I expect Dennett to say that even a problematic intuition pump can teach us things. It can teach us how not to think, and what mistakes we risk falling for.
There’s one sort of intuition pump he doesn’t have any use for, though.
Nonsense intuition pumps
Dennett has no time for intuition pumps that completely break the laws of physics or biology.
(Intuition Pumps and Other Tools for Thinking, Dennett, 2013)
But there is also a deeper problem with such experiments. It is child’s play to dream up examples to “prove” further conceptual points. Suppose a cow gave birth to something that was atom-for-atom indiscernible from a shark. Would it be a shark? If you posed that question to a biologist, the charitable reaction would be that you were making a labored attempt at a joke. Or suppose an evil demon could make water turn solid at room temperature by smiling at it; would the demon-water be ice? This is too silly a hypothesis to deserve a response. Smiling demons, cow-sharks, zombies, and Swampmen are all, some philosophers think, logically possible, even if they are not nomologically (causally) possible, and these philosophers think this is important. I do not.
[...]
“No,” says the philosopher. “It’s not a false dichotomy! For the sake of argument we’re suspending the laws of physics. Didn’t Galileo do the same when he banished friction from his thought experiments?” Yes, but a general rule of thumb emerges from the comparison: the utility of a thought experiment is inversely proportional to the size of its departures from reality.
Just like isolated intuition pumps, these wildly non-causal ones look like they’re missing the actually important stuff, and so shouldn’t be treated as arguments. But Dennett sounds like he considers the intuitions pumped by them to be just noise.
Conclusion
Dennett’s view on intuition is the only one I know of that offers both productive models and tools (turning the knobs) without falling for the false dichotomy of either rejecting or venerating intuition. More than anything, he gives tools for thinking about intuition pumps, tools that still work even if one disagrees with his more object-level points.
Still, once the robustness to knob turning and the relative area of knob-parameter space are taken into consideration, I don't feel like I have a perfect grasp on the next step. I expect that this will require rereading a bunch of intuition pumps, maybe from other people too.
3 comments
comment by Mateusz Bagiński (mateusz-baginski) · 2023-01-19T18:38:22.213Z
In case you don't know it, Machery's Philosophy Within Its Proper Bounds is a book-length criticism of using (what could IMO be seen as) badly construed intuition pumps as philosophical arguments. A part of their badness is that the judgments they produce are not robust to turning the knobs. Relatedly, his earlier book may serve as an example of (at least attempted) deconfusion regarding which abstractions are most useful in cognitive psychology.
comment by TAG · 2022-02-23T01:58:45.076Z
He has one potent intuition going for him, and if it generated some striking new predictions, or promised to explain something otherwise baffling, we might join him in trying to construct a new theory of consciousness around it, but it stands alone, hard to deny but otherwise theoretically inert.
Basically, if a robustly pumped intuition leads nowhere, then be wary of it.
So an intuition pump can't serve any skeptical purpose? It can't remind you that you don't know as much as you think you know, that you are overconfident, that there is something that has been left unexplained?
Why not?