Explanation vs Rationalization
post by abramdemski · 2018-02-22T23:46:48.377Z · LW · GW · 11 comments
Follow-up to: Toward a New Technical Explanation of Technical Explanation, The Bottom Line.
In The Bottom Line, Eliezer argues that arguments should only provide evidence to the extent that their conclusions were determined in a way which correlated them with reality. If you write down your conclusion at the bottom of the page, and then construct your argument, your argument does nothing to make the conclusion more entangled with reality.
This isn't precisely true. If you know that someone tried really hard to put together all the evidence for their side, and you still find the argument underwhelming, you should probably update against what they're arguing. Similarly, if a motivated arguer finds a surprisingly compelling argument with much less effort than you expected, this should update you toward what they claim. So, you can still get evidence from the arguments of motivated reasoners, as long as you adjust for the quality of argument you expected them to produce.
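To make that adjustment concrete, here is a toy calculation of my own (the numbers are made up, not from the post): treat "how compelling an argument the motivated arguer managed to produce" as the observation, and update on that, rather than on the bare fact that an argument exists.

```python
# Toy illustration with assumed numbers: updating on a motivated arguer's output.
# Suppose a motivated arguer finds a compelling case 80% of the time when their
# claim is true, but still manages it 30% of the time when the claim is false.
p_compelling_if_true = 0.8
p_compelling_if_false = 0.3

prior_odds = 1.0  # 1:1 prior on the claim

# They produced a compelling argument: likelihood ratio 0.8/0.3, update toward the claim.
odds_after_compelling = prior_odds * (p_compelling_if_true / p_compelling_if_false)

# They tried hard and the argument is still underwhelming: ratio 0.2/0.7, update against.
odds_after_underwhelming = prior_odds * ((1 - p_compelling_if_true) / (1 - p_compelling_if_false))

print(f"odds after a compelling argument:     {odds_after_compelling:.2f}")    # ~2.67
print(f"odds after an underwhelming argument: {odds_after_underwhelming:.2f}")  # ~0.29
```

The evidence lives in the gap between the argument you got and the argument you expected, which is why filtered arguments are weak evidence rather than no evidence.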
Still, motivated reasoning is bad for discourse, and aspiring rationalists seek to minimize it.
Yet, I think everyone has had the experience of trying to explain something and looking for arguments which will help the other person to get it. This is different than trying to convince / win an argument, right? I have been uneasy about this for a long time. Trying to find a good explanation is a lot like motivated cognition. Yet, trying to explain something to someone doesn't seem like it is wrong in the same way, does it?
A possible view which occurred to me is that you should only give the line of reasoning which originally convinced you. That way, you're sure you aren't selecting evidence; the evidence is selecting what you argue.
I think this captures some of the right attitude, but is certainly too strict. Teachers couldn't use this rule, since it is prudent to select good explanations rather than whichever explanation you heard first. I think the rule would also be bad for math research: looking for a proof is, mostly, a better use of your time than trying to articulate the mathematical intuitions which lead to a conjecture.
A second attempt to resolve the conflict: you must adopt different conversational modes for efficiently conveying information vs collaboratively exploring the truth. It's fine to make motivated arguments when you're trying to explain things well, but you should avoid them like the plague if you're trying to find out what's true in the first place.
I also think this isn't quite right, partly because I think good teaching is more like collaborative truth exploration, and partly because of the math research example I already mentioned.
I think this is what's going on: you're OK if you're looking for a gears-level explanation. Since gears-level explanations are more objective, it is harder to bend them with motivated cognition. They're also a handier form of knowledge to pass around from person to person, since they tend to be small and easily understood.
In the case of a mathematician who has a conjecture, a proof is a rigorous explanation which is quite unlikely to be wrong. You can think of looking for a proof as a way of checking the conjecture, sure; in that respect it might not seem like motivated cognition at all. However, that's if you doubt your conjecture and are looking for the proof as a test. I think there's also a case where you don't doubt your conjecture, and are looking for a proof to convince others. You might still change your mind if you can't find one, but the point is you weren't wrong to search for a proof with the motive to convince -- because of the rigorous nature of proofs, there is no selection-of-evidence problem.
If you are a physicist, and I ask what would happen if I do a certain thing with gyroscopes, you might give a quick answer without needing to think much. If I'm not convinced, you might proceed to try and convince me by explaining which physical principles are in play. You're doing something which looks like motivated cognition, but it isn't much of a problem because it isn't so easy to argue wrong conclusions from physical principles (if both of us are engaging with the arguments at a gears level). If I ask you to tell me what reasoning actually produced your quick answer rather than coming up with arguments, you might have nothing better to say than "intuition from long experience playing with gyroscopes and thinking about the physics".
If you are an expert in interior design and tell me where I should put my couch, I might believe you, but still ask for an argument. Your initial statement may have been intuitive, but it isn't wrong for you to try and come up with more explicit reasons. Maybe you'll just come up with motivated arguments -- and you should watch out for that -- but maybe you'll articulate a model, not too far from your implicit reasoning, in which the couch just obviously does belong in that spot.
There's a lot of difference between math, physics, and interior design in terms of the amount of wiggle room gears-level arguments might have. There's almost no room for motivated arguments in formal proofs. There's lots of room in interior design. Physics is somewhere in between. I don't know how to cleanly distinguish in practice, so that we can have a nice social norm against motivated cognition while allowing explanations. (People seem to mostly manage on their own; I don't actually see so many people shutting down attempted explanations by labeling them motivated cognition.) Perhaps being aware of the distinction is enough.
The distinction is also helpful for explaining why you might want more information when you already believe someone. It's easy for me to speak from my gears-level model and sound like I don't believe you yet, when really I'm just asking for an explanation. "Agents should maximize expected utility!" you say. "Convince me!" I say. "VNM Theorem!" you say. "What's the proof?" I say. You can't necessarily tell if I'm being skeptical or curious. We can convey more nuanced epistemics by saying things like "I trust you on things like this, but I don't have your models" or "OK, can you explain why?"
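For concreteness, the compressed claim behind "VNM Theorem!" is roughly this (my gloss; the post doesn't spell it out): any preference relation $\succeq$ over lotteries satisfying completeness, transitivity, continuity, and independence is represented by a utility function $u$, unique up to positive affine transformation, in the sense that

$$A \succeq B \iff \mathbb{E}_{A}[u] \ge \mathbb{E}_{B}[u].$$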
Probabilistic evidence provides nudges in one direction or another (sometimes strong, sometimes weak). These can be filtered by a clever arguer, collecting nudges in one direction and discarding the rest, to justify what they want you to believe. However, if this kind of probabilistic reasoning is like floating in a raft on the sea, a gears-level explanation is like finding firm land to stand on. Mathematics is bedrock; physics is firm soil; other subjects may be like shifting sand (it's all fake frameworks to greater/lesser extent) -- but it's more steady than water!
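As a rough sketch of how filtered nudges go wrong (my own toy simulation, with assumed numbers, not from the post): the same stream of evidence gives a sensible posterior when taken whole, and a wildly overconfident one when a clever arguer reports only the favorable pieces.

```python
import math
import random

random.seed(0)

# Toy setup (assumed): each piece of evidence is a log-odds "nudge" for or against H.
# In this world H is false, so the nudges drift slightly negative on average.
nudges = [random.gauss(-0.2, 1.0) for _ in range(200)]

def posterior(log_odds_nudges, prior_log_odds=0.0):
    """Combine a prior with independent log-odds contributions and return P(H)."""
    total = prior_log_odds + sum(log_odds_nudges)
    return 1.0 / (1.0 + math.exp(-total))

honest = posterior(nudges)                          # all the evidence
filtered = posterior([n for n in nudges if n > 0])  # clever arguer keeps only favorable nudges

print(f"P(H) from all the evidence:      {honest:.3f}")    # close to 0
print(f"P(H) from the filtered evidence: {filtered:.3f}")  # close to 1
```

Gears-level arguments resist this kind of filtering because there are few independent nudges to cherry-pick from: the argument either goes through or it doesn't.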
11 comments
comment by Qiaochu_Yuan · 2018-02-23T00:24:20.340Z · LW(p) · GW(p)
Often it feels like the thing I need to do in order to explain something to somebody is load a new ontology into them, which doesn't feel like it has much to do with either motivated reasoning or Bayesian evidence; I'm giving them the mental tools they need to understand the actual explanation which will only make sense once they have the ontology. (This is separate from the task of justifying why the ontology is useful.)
The cleanest example I can think of off the top of my head is teaching somebody the basic definitions of a field of math that's unfamiliar to them so you can give a proof using those definitions. That's a gears example, but I think I can also do this with fake frameworks in a way that's useful but not gearsy.
Replies from: abramdemski
↑ comment by abramdemski · 2018-02-23T01:24:00.679Z · LW(p) · GW(p)
I like the frame of explaining the ontology in which your claim is true separately from arguing for it. I agree that this can happen with very non-gears-y models, but I imagine that's because the models are still sufficiently gears-like...
For example, the MTG color wheel isn't very gears-y, because it taps into subjective conceptual clusters which differ from person to person. But the extent to which I'm using it as an explanation rather than a rationalization seems like it has to do with the extent to which I'm relying on stuff that definitely follows from the framework vs stuff that's more subjective (and it depends on the extent to which it's a canonical application of the MTG color wheel vs a case where you usually wouldn't invoke it).
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2018-02-23T04:40:40.250Z · LW(p) · GW(p)
Well, so one thing I'm sometimes trying to do is not justify a claim but justify paying attention to the claim, so the kind of thing I'm doing is not presenting evidence that it's true but just evidence that something sufficiently interesting is happening near the claim that it's worth paying attention to. I think this can get pretty non-gearsy in some sense; I'm often relying on mostly nonverbal intuitions in myself and hoping to trigger analogous nonverbal intuitions in someone else.
comment by johnswentworth · 2018-02-23T01:46:21.384Z · LW(p) · GW(p)
Tangential, but may generalize: I strongly disagree that, in math, looking for a proof is a better use of time than trying to articulate the intuitions behind the conjecture. Those intuitions are the reason the conjecture exists in the first place! They are the only conjecture-specific clue you have about how to go about a proof. Ignore the intuitions, and you're stuck searching through proof-space with generic heuristics.
This is a trap I've fallen into many times in my own research: coming up with an intuitive conjecture, and then looking for a proof without paying attention to the intuitions which led to the conjecture in the first place. Every time it's happened, I spent anywhere from days to months crunching symbols before stepping back and saying "why do I believe this in the first place?" And once I ask that question, I often go "oh, this suggests a whole different approach", and things get much easier.
Or as Wheeler put it, "never make a calculation until you know the answer". Same with proofs. It's a lesson which has been burned into me by repeated failure.
comment by Gordon Seidoh Worley (gworley) · 2018-02-26T00:36:42.341Z · LW(p) · GW(p)
I think the rule would also be bad for math research: looking for a proof is, mostly, a better use of your time than trying to articulate the mathematical intuitions which lead to a conjecture.
Having been a mathematician, I want to push back on this a bit. Often the hard part of coming up with a proof is understanding a problem in a way that permits a solution, and that's all about intuitions. Further, I often find the limiting factor for people wanting to do math is not the ability to read or write proofs but to manipulate mathematical concepts in ways that suggest proofs.
Doesn't have much bearing on your broader point, but this is an idea I feel is overly popular, especially among mathematicians, among whom lots of knowledge gets transferred only by talking to folks, because of norms that favor proofs over written descriptions of mathematical intuitions.
Replies from: abramdemski
↑ comment by abramdemski · 2018-03-16T19:56:31.891Z · LW(p) · GW(p)
Yeah, I overstated this due to the point I was trying to make. I'm not sure quite what I should say instead, but...
- Clearly it would be bad for math to monomaniacally focus on original reasons for believing things; likely worse than the current monomaniacal focus on proofs.
- There's a use-case where it is better to look for proofs than to look for your true original reasons. The use case has to do with communicating. Your point about the problematic nature of putting only proofs in papers is well-taken, but there's also a good reason why publications have focused on that. Proofs are the extreme end of a spectrum of gears-y-ness.
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-02-24T20:44:34.587Z · LW(p) · GW(p)
Two comments.
First, I'm not sure whether the right category here is "gears-level explanations". It's just that there is evidence which is so strong that, even when the evidence comes from a biased source, you are still compelled to believe it. In other words, this is the sort of evidence that you expect to be hard to find if the claim is wrong, even if you intentionally go looking for it. In theoretical computer science, this is exactly what a "proof" is: something which can be believed even when it comes from an adversarial agent.
Second, I think that there is an important difference between convincing yourself and convincing other people, namely, that a lot of your internal reasoning is non-verbal intuition. When you explain to another person, either you are lucky and both of you share the same intuition, or (as is often the case) you don't. In the latter case you need to introspect harder and find a way to articulate the reasons for this intuition (which is not an easy task: your brain can store a particular intuition without attaching the whole list of examples that generated it).
Replies from: abramdemski, abramdemski
↑ comment by abramdemski · 2018-03-16T20:32:44.977Z · LW(p) · GW(p)
I agree that there's a thing going on with "the evidence is so strong that I update significantly even if it is coming from someone's motivated cognition", but I think there's also something more general going on which has to do with gears-level.
If we were perfect Bayesians, then there would be no distinction between "the evidence that made us believe" and "all the evidence we have". However, we are not perfect Bayesians, and logical induction captures some of what's going on with our bounded rationality.
According to my analysis, gears are parts of our model which are Bayesian in that way; we can put weight on them based on all the evidence for and against them, because the models are "precise" in a way which allows us to objectively judge how the evidence bears on them. (Other parts of our beliefs can't be judged in this way due to the difficulty of overcoming hindsight bias.)
Therefore, filtering our state of evidence through gears-level models allows us to convey evidence which would have moved us if we were more perfectly Bayesian. We are speaking from a model, describing an epistemic state which isn't actually our own but which is more communicable.
This is all deliciously meta because this in itself is an example of me having some strong intuitions and attempting to put them into a gears-level model to communicate them well. I think there's a bigger picture which I'm not completely conveying, which has to do with logical induction, bounded rationality, Aumann agreement, justification structures, hindsight bias, motivated cognition, and intuitions from the belief propagation algorithm.
↑ comment by abramdemski · 2018-03-16T20:00:45.111Z · LW(p) · GW(p)
I could be wrong here, but, isn't "intuition" basically "non-gears"? Isn't "introspect harder" basically "try to turn intuition into gears"?
Replies from: vanessa-kosoy
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-17T20:21:47.418Z · LW(p) · GW(p)
Maybe? Is the converse also true? Maybe a "gears" model = a model that resides fully in the conscious, linguistic part of the mind and can be communicated to another person with sufficient precision for em to reproduce its predictions, whereas a "non-gears" model = a model that relies on "opaque" intuition modules?
comment by niplav · 2021-06-03T12:44:43.929Z · LW(p) · GW(p)
This is nicely symmetric with Socratic Grilling on the other side (how can I explain without looking like I want to force the conclusion ←→ how can I ask questions without seeming confrontational/focused on rejecting the conclusion).
Also, "There's lots of room in interior design", lol. Thank you.