Against the Bottom Line
post by gRR · 2012-04-21T10:20:04.861Z · LW · GW · Legacy · 30 comments
In the spirit of contrarianism, I'd like to argue against The Bottom Line.
As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".
It sounds neat, but I think it is not psychologically feasible. I find that whenever I actually argue, I always have the conclusion already written. Without it, it is impossible to have any direction, and an argument without any direction does not go anywhere.
What actually happens is:
1. I arrive at a conclusion, intuitively, as a result of a process which is usually closed to introspection.
2. I write the bottom line, and look for a chain of reasoning that supports it.
3. I check the argument and modify/discard it or parts of it if any are found defective.
It is at step 3 that the biases really strike. Motivated Stopping makes me stop checking too early, and Motivated Continuation makes me look for better arguments when defective ones are found for the conclusion I seek, but not for alternatives, resulting in Straw Men.
30 comments
Comments sorted by top scores.
comment by fubarobfusco · 2012-04-21T16:57:30.344Z · LW(p) · GW(p)
As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".
Ooh! I see a "should" statement. Let's open it up and see what's inside!
*gets out the consequentialist box-cutter*
... its idea is that we will get worse consequences if we "start with a bottom line and then fill out the arguments."
Hmm. Is that what "The Bottom Line" says?
Let's take a look at what it says about some actual consequences:
If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is "Never repair anything expensive." If this is a good algorithm, fine; if this is a bad algorithm, oh well.
In other words, it's not like you're sinning or cheating at the rationality game if you write the bottom line first.
Rather, you ran some algorithm to generate that bottom line. You selected that bottom line out of hypothesis-space somehow. Perhaps you used the availability heuristic. Perhaps you used some ugh fields. Perhaps you used physics. Perhaps you used Tristan Tzara's cut-up technique. Or a Ouija board. Or whatever your car's driver's manual said to do. Or your mom's caring advice.
Well, how good was that algorithm?
Do people who use that algorithm tend to get good consequences, or not?
Once your bottom line is written — once you have made the decision whether or not to fix your brakes — the consequences you experience don't depend on any clever arguments you made up to justify that decision retrospectively.
If you come up with a candidate "bottom line" and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn't really a bottom line — your algorithm hadn't actually terminated. We can then ask, still, how good is your algorithm, including the exploring and maybe-rejecting? This is where questions about motivated stopping and continuation come in.
↑ comment by orthonormal · 2012-04-21T18:26:31.673Z · LW(p) · GW(p)
I like your comment-generating algorithm.
↑ comment by gRR · 2012-04-21T22:08:28.516Z · LW(p) · GW(p)
If you come up with a candidate "bottom line" and then explore arguments for and against it, and sometimes end up rejecting it, then it wasn't really a bottom line — your algorithm hadn't actually terminated.
Oh. That makes sense. So it's the bottom line only if I write it and refuse to change it forever after. Or, it's the belief on which I actually act in the end, if it was all part of a decision-making process.
Guess that's what everybody was telling me... feeling stupid now.
↑ comment by fubarobfusco · 2012-04-21T23:22:38.230Z · LW(p) · GW(p)
's all good.
Your real decision is the one you act on. Decision theory, after all, isn't about what the agent believes it has decided; it's about actions the agent chooses.
Edited to add:
Also, you recognized where "the biases really strike", as you put it — that's a pretty important part. It seems to me that one reason to resist writing even a tentative bottom line too early is to avoid motivated stopping. And if you're working in a group, this is a reason to hold off on proposing solutions.
Edited again to add:
In retrospect I'm not sure, but I think what I triggered on, and what led me to respond to your post, was the phrase "a rationalist should". This fits the same grammatical pattern as "a libertarian should", "a Muslim should", and so on ... as if rationality were another ideological identity; as if one identifies with Rationalist-ism first and then follows the rationality social rules, having faith that by being a good rationalist one gets to go to rationalist heaven and receive 3^^^3 utilons (and no dust specks), or some such.
I expect that's not what you actually meant. But I think I sometimes pounce on that kind of thing. Gotta fight the cult attractor! I figure David Gerard has the "keeping LW from becoming Scientology" angle; I'll try for the "keeping LW from becoming Objectivism" angle. :)
comment by lincolnquirk · 2012-04-21T10:40:09.922Z · LW(p) · GW(p)
Try this: when you find yourself ready to accept a conclusion based on your step 1, notice this before mentally committing to the conclusion ("writing the bottom line"). Consider the opposite / alternative conclusions, and write them, too. Then come up with lines of reasoning which support each conclusion. I think that's the five-second skill that I've learned, and it is very handy.
Example:
In my startup recently I was trying to decide whether the best strategy going forward was to change the product drastically, or keep pushing on the same idea trying to figure out how to make it work. I recognized this argument internally, and explicitly said to my cofounder, "so the argument for keeping our idea the same is that we've been working on this thing for months and we have lots of experience in this space. But the argument for changing it is: notice sunk costs, then think whether anything else in the world would be better to work on than this idea, considering we have no actual momentum, only ideas."
comment by byrnema · 2012-04-22T19:37:03.329Z · LW(p) · GW(p)
It sounds neat, but I think it is not psychologically feasible.
I agree with this. At least in a lot of contexts, you pick a side, write your bottom line, and then see how it goes. I think being rational, for people who think like me anyway, means being open to needing to toggle one's views frequently, in case you're wrong.
A little while ago, I needed to decide something important and complex, and it was unfortunately unlikely that events would ever give me feedback on whether I had made the 'right' choice, given the complexity and the risk and incompletely disclosed values involved.
Nevertheless, the decision was really important and it was weighing on me. I observed over a couple of days how I made the decision. First, I chose one of the two options, rather randomly, just because I needed to choose already. And then I imagined defending it. "Owning" the decision and feeling responsible for it was an important step in the motivation to come up with the best arguments I could. I imagined my family, my coworkers and my friends second-guessing me, and I kept arguing and arguing with them in my head. After a few hours of dedicated mental role play, I felt exhausted. This decision was way too difficult to defend. It would be easier, I thought, to just go the other way. So I (more half-heartedly) considered the other decision, and imagined defending that one. It turned out to be easier to defend, and some of the arguments struck me as especially compelling, so that in the end my mock-trial arguments convinced me. I felt (and feel) good about my decision, which was different from the one I began with.
It struck me, at the time, how similar this was to the court system. Certainly, they knew what they were doing with that. It also struck me that imagining the counter-arguments of friends and family was somewhat less effective, because it tended to direct me towards the decision that felt least 'shameful' rather than the one most likely to 'win'. But given how entangled the problem was with values, this seemed necessary.
Sometimes the decision is easier. I can just look at the facts and decide. But I don't know how often this is the case with decisions that are both really important and somewhat 'messy', as they tend to be in real life when they involve people and values and not just, for example, optimizing something with respect to a single factor.
↑ comment by FeepingCreature · 2012-04-24T21:14:05.311Z · LW(p) · GW(p)
The small-scale version of this is the "coin test".
You have a choice with two outcomes. Flip a coin. If it comes up heads, pick the first outcome; tails, the second outcome.
And if you don't like your pick, just switch to the other one.
Often, it's not so much about the actual decision as it is about avoiding responsibility for error. The coin test forces you to choose.
comment by Shmi (shminux) · 2012-04-21T21:09:06.024Z · LW(p) · GW(p)
How often do you find that your intuitive conclusion had been faulty? What do you do in this case with your intuition?
↑ comment by gRR · 2012-04-21T22:15:05.169Z · LW(p) · GW(p)
Too frequently for comfort... I update down my estimate of its reliability.
↑ comment by Shmi (shminux) · 2012-04-22T04:34:17.104Z · LW(p) · GW(p)
Then why do you still "always have the conclusion already written"?
comment by DanArmak · 2012-04-21T11:47:50.629Z · LW(p) · GW(p)
It's a good point: having a target in mind, and then searching for the best arguments for it, is a familiar experience.
The "bottom line" might be a math theorem I'm trying to prove. Or a claim in a discussion with friends that I'm trying to argue for. Or the hope that I can solve a programming problem within certain constraints. In all these cases, my thinking is directed to a goal. And I can't currently imagine (which is weak evidence) a form of thinking that isn't, yet produces good results reliably.
The rationalist ideal isn't "don't argue towards a goal". It's "always remember that until you have a rock-solid argument, the goal is desired but unproven, and to be treated as an assumption at best. Always be ready to discard or modify the goal, as evidence comes in. And when you do have a proof, have it checked by counter-motivated others."
The Bottom Line doesn't tell you to argue without a goal. It just tells you to weight all evidence correctly, for and against your goal. It may help if your goal is explicitly formulated as a question rather than a statement which might be false. But in the end, it's just about fighting biases - remembering what we've assumed without proof to explore its consequences if true, and so on.
From that post, a description of wrong arguing:
First, he writes, "And therefore, box B contains the diamond!" at the bottom of his sheet of paper. Then, at the top of the paper, he writes, "Box B shows a blue stamp," [.... and so on]; yet the clever arguer neglects all those signs which might argue in favor of box A.
(my emphasis). And a description of right arguing:
she first writes down all the distinguishing signs of both boxes on a sheet of paper, and then applies her knowledge and the laws of probability and writes down at the bottom: "Therefore, I estimate an 85% probability that box B contains the diamond."
One difference is indeed that the second arguer did not write a bottom line before writing out all the evidence. But that's misleading. The second arguer already knew that she was arguing about which box contained the diamond. She had mentally written down at the bottom "and therefore box B contains the diamond with ___ probability", and filled in the actual probability later.
Bayes-wise that is identical to writing, first, "box B contains the diamond with 50% probability" (your prior); and then modifying that number as each piece of evidence is considered. The real difference is that you must not ignore or mistreat part of the evidence, as the first arguer did.
And if you trust your lack of biases enough, you may even write down as the bottom line, "box B contains the diamond with 78% probability" (or whatever your preconceived belief is). And then, as you evaluate the evidence, instead of modifying that number, you append to the end: "I currently give this proposition such and such a probability (belief) distribution." And you modify that last statement as the evidence comes in.
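A minimal sketch of this kind of running update, in Python and with invented likelihood ratios for the signs on the boxes (only the 50% starting point comes from the post): the prior is written down first, and the number at the bottom is whatever is left after every sign, favourable or not, has been multiplied in.

```python
def update_odds(prior_prob, likelihood_ratios):
    """Update P(box B has the diamond) by multiplying the prior odds
    by the likelihood ratio P(sign | B) / P(sign | A) for each sign."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Illustrative, made-up likelihood ratios for signs such as the blue stamp
# or the shiny lid: ratios > 1 favour box B, ratios < 1 favour box A.
# The point is that no sign is dropped, whichever way it points.
signs = [2.0, 1.5, 0.6, 3.0]

print(update_odds(0.5, signs))  # start at 50%, end wherever the evidence says
```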
The whole issue may be summarized as follows:
- We must update on all evidence fairly, including evidence that lowers the probability of hypotheses we hold dear, which are exactly the hypotheses human bias leads us to privilege.
- To avoid this bias, it can help not to mentally give any idea such a privileged status to begin with.
Do not hold ideas dear. Hold reality dear.
Of course that should not be taken to mean you shouldn't have goals that you try to argue for! Just remember that your attempts to argue can fail, and update accordingly.
↑ comment by gRR · 2012-04-21T12:39:24.823Z · LW(p) · GW(p)
In the spirit of further contrarianism, I'll note that although your points are all valid, they do not really save the message of "The Bottom Line" post, unless you start interpreting the message in a rather liberal way instead of taking it literally, and this is undesirable under commonly held LW values.
[For example, atheists usually balk when people start interpreting the bible left and right, keeping the desirable conclusions and throwing away the rest, etc.]
↑ comment by orthonormal · 2012-04-21T18:25:16.216Z · LW(p) · GW(p)
No, it's functionally identical to the original analogy. Rationalists make it easy to change their bottom line as new evidence comes in, so their bottom line isn't fixed forever at the start.
For example, I recently scrapped a post because I found out that the anecdote I was going to start with wasn't what I thought it was at all, which raised my estimate that I was oversimplifying the rest of it. Yeah, I started with an idea of what I wanted to write, but when I learned new things I changed my confidence in that idea.
↑ comment by DanArmak · 2012-04-21T14:15:00.261Z · LW(p) · GW(p)
I agree. "The Bottom Line" is not formulated as well as it might have been. It is possible to come away with a literal understanding like yours, which is wrong in important respects.
(Edited: There's no point in discussing what the post "really" means. Its only function is to transmit ideas to readers. People's understanding of it may be a map, but it's the map we care about here, more than the territory.)
comment by MinibearRex · 2012-04-27T04:39:03.280Z · LW(p) · GW(p)
I find that whenever I actually argue, I always have the conclusion already written. Without it, it is impossible to have any direction, and an argument without any direction does not go anywhere.
Maybe that's true for arguments. But it shouldn't be true for all of your thinking. Taking a simple example, if I'm working on a math problem (one hard enough that I can't just intuit my way to the answer), I do not pick an answer at the beginning and then try to justify it. I work at the problem, and eventually I arrive at an answer. Only then will I get into actual arguments with people, if they disagree with my answer and their disagreement is improperly founded.
comment by Manfred · 2012-04-21T20:24:34.804Z · LW(p) · GW(p)
I think it is not psychologically feasible. I find that whenever I actually argue, I always have the conclusion already written.
So, whenever you argue, you have never ever changed your beliefs? It is beliefs, after all, that are "the bottom line" that should be mutable by the evidence, yet are immutable if you write them down first and then go looking for justification. If so, I suggest you work on that - it is feasible, I promise.
comment by prase · 2012-04-21T13:21:58.261Z · LW(p) · GW(p)
I understand the idea of the "bottom line" post a bit differently. In my understanding it doesn't address the process of arguing (i.e. constructing verbal expressions capable of persuading others). Building an effective argument obviously requires knowing the goal as a prerequisite. But the situation is different in private, when deciding what to believe. Quite commonly one selects one's belief on a totally inadequate basis (affect heuristics, political sympathies) and then reinforces this belief by arguments constructed with the belief in mind. This is what the post was warning against.
In the analogy with mathematical proofs: if a mathematician is reasonably certain that a theorem holds, he can go and try to find a proof. The proof is an argument presented to the public (here, other mathematicians) and should be clear, elegant and polished. But before that the mathematician must decide which theorem he should try to prove, and it would be a mistake to skip this phase, formulate a "random" theorem, and jump directly to constructing a proof. In mathematics it would be hard to succeed this way, since deciding whether a proof is correct is relatively easy and straightforward; but outside mathematics the bottom-line approach is usually feasible, and costly.
Your division of reasoning into three steps (guessing the conclusion, justifying it, checking the justification) may be inevitable for small irreducible ideas where you can go through the whole process in a few minutes. But most arguments are about complex hypotheses whose justification could be (and usually is) reduced to a chain of elementary inductive steps. For such hypotheses it is certainly feasible (psychologically or otherwise) to arrive at them gradually - guessing and rationalising the irreducible bits which can be easily checked, but not the hypothesis as a whole.
↑ comment by DanArmak · 2012-04-21T14:28:59.384Z · LW(p) · GW(p)
Quite commonly one selects one's belief on a totally inadequate basis (affect heuristics, political sympathies) and then reinforces this belief by arguments constructed with the belief in mind. This is what the post was warning against.
That's an application of the post's argument, true. But as gRR notes, the literal meaning of the post discusses how we judge information presented to us by other people, which we receive complete with arguments and conclusions.
Once an argument is given in favor of a belief, and that argument has no logical faults, we must update our beliefs accordingly. We don't have a choice to ignore a valid argument, if we are Bayesians. Even if the argument was deliberately built by someone trying to convince us, who is prone to biases, etc.
Yes, filtered evidence can in the extreme convince us of anything. Someone who controls all our incoming (true) information, and can filter but not modify it, can sometimes influence us to believe anything they want. But the answer is not to discard information selected by non-objective partisans of beliefs. That would make us discard almost all information we receive at second hand. Instead, the answer is to try to collect information from partisans of different conflicting ideas, and to do confirmations ourselves or via trusted associates.
Eliezer's followup post discusses this.
↑ comment by prase · 2012-04-21T14:59:06.930Z · LW(p) · GW(p)
But as gRR notes, the literal meaning of the post discusses how we judge information presented to us by other people, which we receive complete with arguments and conclusions.
The bottom line of EY's post says:
This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don't like. For it is indeed a clever argument to say "My opponent is a clever arguer", if you are paying yourself to retain whatever beliefs you had at the start.
So I don't think the post literally means what you think it means.
↑ comment by DanArmak · 2012-04-21T15:07:44.910Z · LW(p) · GW(p)
That part was apparently added a bit later, when he posted What Evidence Filtered Evidence.
It cautions people against interpreting the entire preceding post in this literal way. Presumably it was added because people did interpret it so, and gRR's reading is not novel or unique.
Of course this reading is wrong - as it applies to reality, and as a description of Eliezer's beliefs. But it's right - as it applies to the post: it is a plausible literal meaning. It wasn't the intention of the writer, but if some people understand it this way, then it's the text's fault (so to speak), not the readers'. There is no "true" literal meaning to a text other than what people understand from it.
↑ comment by gRR · 2012-04-21T14:04:37.623Z · LW(p) · GW(p)
I understand the idea of the "bottom line" post a bit differently. In my understanding it doesn't address the process of arguing (i.e. constructing verbal expressions capable of persuading others).
I have a general objection against this interpretation - it throws away the literal meaning of EY's post.
But there is also a pragmatic difference, about where to direct the focus of attention when one tries to de-bias one's reasoning. With the three steps as I stated them, I know that I cannot really fix step 1, beyond trying to catch myself before I commit, as lincolnquirk suggested. Step 2 is comparatively harmless, so it's at step 3 that I must mount the real defense.
But most arguments are about complex hypotheses whose justification could be (and usually is) reduced to a chain of elementary inductive steps. For such hypotheses it is certainly feasible (psychologically or otherwise) to arrive at them gradually - guessing and rationalising the irreducible bits which can be easily checked, but not the hypothesis as a whole.
Could you mention specific examples of such complex hypotheses? I mean, where it would make sense to know the conclusion in advance, and yet the conclusion would not be reachable in a single intuitive leap. It seems contradictory.
↑ comment by prase · 2012-04-21T14:46:59.884Z · LW(p) · GW(p)
I have a general objection against this interpretation - it throws away the literal meaning of EY's post.
The literal meaning of the post, if any, is: no amount of carefully crafted post-hoc justification is going to make your conclusion correct. I don't think your interpretation is closer to it than mine.
Could you mention specific examples of such complex hypotheses? I mean, where it would make sense to know the conclusion in advance, and yet the conclusion would not be reachable in a single intuitive leap.
I am not sure what you mean by "making sense to know the conclusion in advance" and "reachable in a single intuitive leap". I am thinking of questions whose valid justification is not irreducible - either it is a chain of reasoning or it consists of independent pieces of evidence - such as:
Does God exist? Does global warming happen? Why did the non-avian dinosaurs become extinct? Is the millionth decimal digit of pi 8? Who is the best candidate for the upcoming presidential elections in Nicaragua?
Most questions I can think of now are like that, so there is probably some misunderstanding.
comment by alex_zag_al · 2015-11-01T18:06:07.360Z · LW(p) · GW(p)
As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".
I disagree. The idea, rather, is that your beliefs are as good as the algorithm that fills out the bottom line. Doesn't mean you shouldn't start by filling out the bottom line; just that you shouldn't do it by thinking of what feels good or what will win you an argument or by any other algorithm only weakly correlated with truth.
Also, note that if what you write above the bottom line can change the bottom line, that's part of the algorithm too. So, actually, I do agree that a rationalist should not write the bottom line, look for a chain of reasoning that supports it, and refuse to change the bottom line if the reasoning doesn't.
comment by Vladimir_Nesov · 2012-04-21T12:03:00.208Z · LW(p) · GW(p)
Related: A Rational Argument.
↑ comment by cousin_it · 2012-04-22T12:41:59.984Z · LW(p) · GW(p)
Can someone try to make that argument more precise? It seems to me that the claim "Sorry. It can't be done" sounds plausible but fails in the most obvious limit case: a proof of a mathematical theorem doesn't become less correct if I found it by deliberately trying to prove the theorem. Since Bayesian reasoning approaches classical logic in the limit, the claim might be wrong for Bayesian reasoning too.
↑ comment by pengvado · 2012-04-22T22:01:59.968Z · LW(p) · GW(p)
It is possible to gain evidence in favor of hypothesis X from Bob who you know has X as his bottom line. However, Bob can't force this outcome, because it's also possible that his attempt to convince you of X will backfire. For any fixed strategy on Bob's part, the effect on your beliefs tends to be towards the true value of X, not towards the value that Bob wants; with mixed strategies (or just silence) he can prevent you from gaining but can't reduce your net accuracy.
Applied to the finite likelihood case: Initially you assign some probability to X, and conditioned on any given value of X you have some probability distribution over the possible observations about X. Suppose Bob looks at those observations, filters out the ones that would be evidence against X if you had seen them directly, and gives you the remainder. But now what you're actually observing is "number of observations that pass Bob's filtering algorithm", which is another variable that you assign different distributions to given different values of X, and if it takes a value that's more likely given ~X than given X then you update downwards.
Applied to the deductive proof case: Initially you assign some probability to X. Bob goes looking for a mathematical proof of X. If there is one, then Bob tells you the proof, and you update to certainty of X. But if X is false, then there won't be a proof, you know that Bob looked and didn't find one, so you update downwards.
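A small simulation of the finite-likelihood case above, offered only as a sketch (Python, with invented probabilities and sample sizes): Bob passes along only the observations that favour X, but because you update on how many observations passed his filter, his filtering cannot, on average, push you away from the truth.

```python
import random
from math import comb

def posterior_after_filtering(k, n, prior=0.5, p_true=0.8, p_false=0.3):
    """Bob shows only the k (out of n) observations that favour X.
    Knowing his filter, you treat k itself as the evidence: it is
    Binomial(n, p_true) if X is true and Binomial(n, p_false) if not."""
    like_true = comb(n, k) * p_true**k * (1 - p_true) ** (n - k)
    like_false = comb(n, k) * p_false**k * (1 - p_false) ** (n - k)
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

def average_posterior(x_is_true, n=20, trials=10_000):
    """Average belief in X after seeing Bob's filtered reports."""
    p = 0.8 if x_is_true else 0.3  # chance each raw observation favours X
    total = 0.0
    for _ in range(trials):
        k = sum(random.random() < p for _ in range(n))  # what survives the filter
        total += posterior_after_filtering(k, n)
    return total / trials

print("average posterior when X is true: ", average_posterior(True))   # close to 1
print("average posterior when X is false:", average_posterior(False))  # close to 0
```

The particular numbers are made up; the only point is the asymmetry pengvado describes: the filter changes what you see, but the filter itself is part of what you condition on.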
↑ comment by b1shop · 2012-04-21T14:55:49.447Z · LW(p) · GW(p)
An alternative takeaway from these posts is that we should segment our personality. In the same way I can only have emotionally honest conversations with close associates, maybe I can only have intellectually honest conversations with people I can trust. There's no sense trying to cooperate if the other side always defects.
I don't have the luxury of living in an ivory tower and the opponents in my particular quest will always push the bounds of reasonability.
comment by chaosmosis · 2012-04-24T16:20:32.415Z · LW(p) · GW(p)
Mentally, start with multiple bottom lines, even if they're contradictory, and look for evidence that supports any of them.
In discussions, do exactly what you said.
Although, judging from the comments below, "bottom line" means something different from what you thought, and from what I thought as well.