Making your explicit reasoning trustworthy
post by AnnaSalamon · 2010-10-29T00:00:25.408Z · LW · GW · Legacy · 95 comments
Or: “I don’t want to think about that! I might be left with mistaken beliefs!”
Related to: Rationality as memetic immune disorder; Incremental progress and the valley; Egan's Law.
tl;dr: Many of us hesitate to trust explicit reasoning because... we haven’t built the skills that make such reasoning trustworthy. Some simple strategies can help.
Most of us are afraid to think fully about certain subjects.
Sometimes, we avert our eyes for fear of unpleasant conclusions. (“What if it’s my fault? What if I’m not good enough?”)
But other times, oddly enough, we avert our eyes for fear of inaccurate conclusions.[1] People fear questioning their religion, lest they disbelieve and become damned. People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger. And I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things.
Ostrich Theory, one might call it. Or I’m Already Right theory. The theory that we’re more likely to act sensibly if we don’t think further, than if we do. Sometimes Ostrich Theories are unconsciously held; one just wordlessly backs away from certain thoughts. Other times full or partial Ostrich Theories are put forth explicitly, as in Phil Goetz’s post, this LW comment, discussions of Tetlock's "foxes vs hedgehogs" research, enjoinders to use "outside views", enjoinders not to second-guess expert systems, and cautions for Christians against “clever arguments”.
Explicit reasoning is often nuts
Ostrich Theories sound implausible: why would not thinking through an issue make our actions better? And yet examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse. Examples include, among many others:
- Most early Communists;
- Ted Kaczynski (The Unabomber; an IQ 160 math PhD who wrote an interesting treatise about the human impacts of technology, and also murdered innocent people while accomplishing nothing);
- Mitchell Heisman;
- Folks who go to great lengths to keep kosher;
- Friends of mine who’ve gone to great lengths to be meticulously denotationally honest, including refusing jobs that required a government loyalty oath, and refusing to click on user agreements for videogames; and
- Many who’ve gone to war for the sake of religion, national identity, or other far-mode ideals.
In fact, the examples of religion and war suggest that the trouble with, say, Kaczynski wasn’t that his beliefs were unusually crazy. The trouble was that his beliefs were an ordinary amount of crazy, and he was unusually prone to acting on his beliefs. If the average person started to actually act on their nominal, verbal, explicit beliefs, they, too, would in many cases look plumb nuts. For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority. Someone else might risk their life-savings betting on an election outcome or business about which they were “99% confident”.
That is: many people's abstract reasoning is not up to the task of day-to-day decision-making. This doesn't impair their actions all that much, because abstract reasoning has little bearing on what we actually do. Mostly we just find ourselves doing things (out of habit, emotional inclination, or social copying) and make up the reasons post-hoc. But when we do try to choose actions from theory, the results are far from reliably helpful -- and so many folks' early steps toward rationality go unrewarded.
We are left with two linked barriers to rationality: (1) nutty abstract reasoning; and (2) fears of reasoned nuttiness, and other failures to believe that thinking things through is actually helpful.[2]
Reasoning can be made less risky
Much of this nuttiness is unnecessary. There are learnable skills that can both make our abstract reasoning more trustworthy and also make it easier for us to trust it.
Here's the basic idea:
If you know the limitations of a pattern of reasoning, learning better what it says won’t hurt you. It’s like having a friend who’s often wrong. If you don’t know your friend’s limitations, his advice might harm you. But once you do know, you don’t have to gag him; you can listen to what he says, and then take it with a grain of salt.[3]
Reasoning is the meta-tool that lets us figure out what methods of inference are trustworthy where. Reason lets us look over the track records of our own explicit theorizing, outside experts' views, our near-mode intuitions, etc., and figure out how trustworthy each is in a given situation.
If we learn to use this meta-tool, we can walk into rationality without fear.
Skills for safer reasoning
1. Recognize implicit knowledge.
Recognize when your habits, or outside customs, are likely to work better than your reasoned-from-scratch best guesses. Notice how different groups act and what results they get. Take pains to stay aware of your own anticipations, especially in cases where you have explicit verbal models that might block your anticipations from view. And, by studying track records, get a sense of which prediction methods are trustworthy where.
Use track records; don't assume that just because folks' justifications are incoherent, the actions they are justifying are foolish. But also don't assume that tradition is better than your models. Be empirical.
2. Plan for errors in your best-guess models.
We tend to be overconfident in our own beliefs, to overestimate the probability of conjunctions (such as multi-part reasoning chains), and to search preferentially for evidence that we’re right. Put these facts together, and theories folks are "almost certain" of turn out to be wrong pretty often (a small numeric sketch follows the list below). Therefore:
- Make predictions from as many angles as possible, to build redundancy. Use multiple theoretical frameworks, multiple datasets, multiple experts, multiple disciplines.
- When some lines of argument point one way and some another, don't give up or take a vote. Instead, notice that you're confused, and (while guarding against confirmation bias!) seek follow-up information.
- Use your memories of past error to bring up honest curiosity and fear of error. Then, really search for evidence that you’re wrong, the same way you'd search if your life were being bet on someone else's theory.
- Build safeguards, alternatives, and repurposable resources into your plans.
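For concreteness, here's a minimal numeric sketch (my own illustration, not part of the original post) of why long chains deserve this kind of redundancy: even when each link in an argument feels solid, the conjunction can be much shakier than it feels.

```python
# Toy illustration: the reliability of a multi-step argument is the product
# of the reliabilities of its steps, so it decays faster than intuition suggests.
def chain_reliability(step_confidences):
    total = 1.0
    for p in step_confidences:
        total *= p  # every link must hold for the conclusion to hold
    return total

print(chain_reliability([0.9] * 5))  # five "90% sure" steps -> ~0.59
print(chain_reliability([0.8] * 5))  # if each step is really only 80% -> ~0.33
```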
3. Beware rapid belief changes.
Some people find their beliefs changing rapidly back and forth, based for example on the particular lines of argument they're currently pondering, or the beliefs of those they've recently read or talked to. Such fluctuations are generally bad news for both the accuracy of your beliefs and the usefulness of your actions. If this is your situation:
- Remember that accurate beliefs come from an even, long-term collection of all the available evidence, with no extra weight for arguments presently in front of one. Thus, they shouldn't fluctuate dramatically back and forth; you should never be able to predict which way your future probabilities will move.
- If you can predict what you'll believe a few years from now, consider believing that already.
- Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally (a toy numeric check follows this list).
- Consider what emotions are driving the rapid fluctuations. If you’re uncomfortable ever disagreeing with your interlocutors, build comfort with disagreement. If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.
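A toy numeric check of the Conservation of Expected Evidence point above (my own sketch, with made-up numbers): whatever you expect to believe after seeing the evidence, averaged over the ways the evidence could come out, is what you should believe now.

```python
# Hypothetical numbers: X is some claim, E is "I encounter a persuasive X-ist book".
prior_x = 0.3                 # current P(X)
p_e_given_x = 0.9             # P(E | X)
p_e_given_not_x = 0.6         # P(E | not X)

p_e = prior_x * p_e_given_x + (1 - prior_x) * p_e_given_not_x
posterior_if_e = prior_x * p_e_given_x / p_e                    # Bayes, if E is observed
posterior_if_not_e = prior_x * (1 - p_e_given_x) / (1 - p_e)    # Bayes, if E is not observed

expected_future_belief = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(prior_x, round(expected_future_belief, 10))  # 0.3 and 0.3: no predictable drift
```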
4. Update your near-mode anticipations, not just your far-mode beliefs.
Sometimes your far-mode is smart and your near-mode is stupid. For example, Yvain's rationalist knows abstractly that there aren’t ghosts, but nevertheless fears them. Other times, though, your near-mode is smart and your far-mode is stupid. You might “believe” in an afterlife but retain a concrete, near-mode fear of death. You might advocate Communism but have a sinking feeling in your stomach as you conduct your tour of Stalin’s Russia.
Thus: trust abstract reasoning or concrete anticipations in different situations, according to their strengths. But, whichever one you bet your actions on, keep the other one in view. Ask it what it expects and why it expects it. Show it why you disagree (visualizing your evidence concretely, if you’re trying to talk to your wordless anticipations), and see if it finds your evidence convincing. Try to grow all your cognitive subsystems, so as to form a whole mind.
5. Use raw motivation, emotion, and behavior to determine at least part of your priorities.
One of the commonest routes to theory-driven nuttiness is to take a “goal” that isn’t your goal. Thus, folks claim to care “above all else” about their selfish well-being, the abolition of suffering, an objective Morality discoverable by superintelligence, or average utilitarian happiness-sums. They then find themselves either without motivation to pursue “their goals”, or else pulled into chains of actions that they dread and do not want.
Concrete local motivations are often embarrassing. For example, I find myself concretely motivated to “win” arguments, even though I'd think better of myself if I were driven by curiosity. But, like near-mode beliefs, concrete local motivations can act as a safeguard and an anchor. For example, if you become abstractly confused about meta-ethics, you'll still have a concrete desire to pull babies off train tracks. And so dialoguing with your near-mode wants and motives, like your near-mode anticipations, can help build a robust, trustworthy mind.
Why it matters (again)
Safety skills such as the above are worth learning for three reasons.
- They help us avoid nutty actions.
- They help us reason unhesitatingly, instead of flinching away out of fear.
- They help us build a rationality for the whole mind, with the strengths of near-mode as well as of abstract reasoning.
[1] These are not the only reasons people fear thinking. At minimum, there is also:
- Fear of social censure for the new beliefs (e.g., for changing your politics, or failing to believe your friend was justified in his divorce);
- Fear that part of you will use those new beliefs to justify actions that you as a whole do not want (e.g., you may fear to read a study about upsides of nicotine, lest you use it as a rationalization to start smoking again; you may similarly fear to read a study about how easily you can save African lives, lest it end up prompting you to donate money).
[2] Many points in this article, and especially in the "explicit reasoning is often nuts" section, are stolen from Michael Vassar. Give him the credit, and me the blame and the upvotes.
[3] Carl points out that Eliezer points out that studies show we can't. But it seems like explicitly modeling when your friend is and isn't accurate, and when explicit models have and haven't led you to good actions, should at least help.
95 comments
Comments sorted by top scores.
comment by cousin_it · 2010-10-29T19:19:17.208Z · LW(p) · GW(p)
Reading your post felt very weird to me, as if you were deliberately avoiding the obvious conclusion from your own examples! Do you really believe that people keep kosher or die in religious wars due to using abnormally explicit reasoning? The common thread in your examples is putting ideals over personal gain, not reasoning over instinct. Too much acting on explicitly stated values, not explicitly stated beliefs. In truth, using rationality for personal gain isn't nearly as dangerous as idealism/altruism and doesn't seem to require the precautions you go on to describe. If any of the crazy things I do failed to help me, I'd just stop doing them.
Which prompts a question to everyone: what crazy things do you do that help you? (Rather than help save the light cone or something.)
Replies from: MichaelVassar, byrnema, jsalvatier
↑ comment by MichaelVassar · 2010-11-03T22:42:02.344Z · LW(p) · GW(p)
I strongly disagree. I specifically think people DO die in religious wars due to using abnormally explicit reasoning.
Replies from: multifoliaterose, multifoliaterose, NancyLebovitz, Dr_Manhattan
↑ comment by multifoliaterose · 2010-11-03T22:50:44.875Z · LW(p) · GW(p)
Can you elaborate here? My initial reaction is one of skepticism, if only because abnormally explicit reasoning seems uncommon.
Replies from: byrnema, MichaelVassar
↑ comment by byrnema · 2011-05-09T15:38:19.649Z · LW(p) · GW(p)
I also agree with MichaelVassar: I think much religious harm comes from using abnormally explicit reasoning.
This is because (I hypothesize that) great moral failures come about when a group of people (often, a religion, but any ideological group) think they've hit upon an absolute "truth" and then expect they can apply this truth to wholly develop an ethical code. The evil comes in when they mistakenly think that morality can be described by some set of universal and self-consistent principles, and they apply a principle valid in one context to another with disastrous results. When they apply the principle to the inappropriate domain, they should feel a twinge of conscience, but they override this twinge with their reason -- they believe in this original principle, and it deduces this thing here, which is correct, so that thing over there that it also deduces must also be correct. In the end, they use reason to override their natural human morality.
The Nazis are the main example I have in mind, but to look at a less painful example, the Catholic church is another example of over-extending principles due to reasoning. Valuing human life and general societal openness to procreation are good values, but insisting that women not use condoms amidst an AIDS epidemic is requiring too much consistency of moral principles.
(Though apparently, I agree even more with user:cousin_it that it is the result of putting ideals of any kind over instinct. It's just that in some cases, the ideal is insisting on consistent, universal moral principles, which religions are fond of doing.)
Replies from: multifoliaterose
↑ comment by multifoliaterose · 2011-05-09T18:44:20.578Z · LW(p) · GW(p)
Thanks for your feedback.
Here I would guess that you're underestimating the influence of (evolutionarily conditioned) straightforwardly base motivations: cf. the Milgram and Stanford Prison Experiments. I recently ran across this fascinating essay by Ron Jones on his experience running an experiment called "The Third Wave" in his high school class. I would guess that the motivation that he describes (of feeling superior to others) played a significantly larger role than abnormally explicit reasoning in the case of the Nazi regime; that (the appearance of?) abnormally explicit reasoning was a result of this underlying motivation rather than the cause.
There may be an issue generalizing from one example here; what you're describing sounds to me closer to why a LW poster might have become a Nazi during Nazi times than why a typical person might have become a Nazi during Nazi times. On the other hand, I find it likely that the originators of the underlying ideas ("Aryan" nationalism, communism, Catholic doctrines) used explicit reasoning more often than the typical person does in coming to their conclusions.
Replies from: byrnema, byrnema
↑ comment by byrnema · 2011-05-09T19:17:22.482Z · LW(p) · GW(p)
I recently ran across this fascinating essay by Ron Jones
It really is fascinating. But I don't believe him. I don't believe it was 'kept secret' and this is most likely some kind of delusion he experienced. (A very small experiment of this kind might make him feel so guilty that the size of the project grew in his mind.) For example, I believe I would have felt the same way as his students, but I'm certain I would not have kept it secret.
Also, I'm confused about his statement
You are no better or worse than the German Nazis we have been studying.
That seems rather ridiculous. Being sent to the library for not wanting to participate in an assignment isn't beyond the pale.
However, something just clicked in my mind and I realized an evil that we do as a society that we allow, because we sanction it as a community. So, yes, I see now how people can go along with something that their conscience should naturally fight against.
Replies from: multifoliaterose
↑ comment by multifoliaterose · 2011-05-09T22:23:06.182Z · LW(p) · GW(p)
It really is fascinating. But I don't believe him.
I agree that there are reasons to question the accuracy of Ron Jones' account.
Also, I'm confused about his statement
You are no better or worse than the German Nazis we have been studying.
Being sent to the library for not wanting to participate in an assignment isn't beyond the pale.
I think that Jones was not suggesting that the consequences of the students' actions are comparable to the consequences of Nazis' actions but rather was claiming that the same tendencies that led the Germans to behave as they did were present in his own students.
This may not literally be true; it's possible that the early childhood development environment in 1950s Palo Alto was sufficiently different from the environmental factors in the early 1900s that the students did not have the same underlying tendencies that the Nazi Germans did, but it's difficult to tell one way or the other.
However, something just clicked in my mind and I realized an evil that we do as a society that we allow, because we sanction it as a community. So, yes, I see now how people can go along with something that their conscience should naturally fight against.
Right, this is what I was getting at. I think that there are several interrelated things going on here:
•High self-esteem coming from feeling that one is on the right side.
•Desire for acceptance / fear of rejection by one's peers.
•Desire to reap material & other goods from the oppressed party.
with each point being experienced only on a semi-conscious level.
In the case of the Catholic Church presumably only the first two points are operative.
Of course empathy is mixed in there as well; but it may play a negligible role relative to the other factors on the table.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2013-01-03T15:27:23.677Z · LW(p) · GW(p)
Add in desire for something more interesting than school usually is.
↑ comment by byrnema · 2011-05-09T19:28:41.481Z · LW(p) · GW(p)
I have a question regarding the Milgram experiment. Were the teachers under the impression that the learners were continuing to supply answers voluntarily?
Replies from: Alicorn, multifoliaterose
↑ comment by Alicorn · 2011-05-09T23:20:56.999Z · LW(p) · GW(p)
The learner was perceived to initially agree to the experiment, but among the recordings in the programmed resistance was one demanding to be let out.
Replies from: byrnema
↑ comment by byrnema · 2011-05-10T02:13:23.579Z · LW(p) · GW(p)
Ah, also this sentence helped my understanding:
Teachers were instructed to treat silence as an incorrect answer and apply the next shock level to the student.
I imagine -- perhaps erroneously -- that I would have tried to obtain the verbal agreement of the learner before continuing. But, for example, this is because I know that continuous subject consent is required whereas this might not have been generally known or true in the early 60s.
Of course, I do see the pattern that this is probably such a case where everyone wants to rate themselves as above average (but they couldn't possibly all be). Still, I will humor my hero-bone by checking out the book and reading about the heroic exceptions, since those must be interesting.
↑ comment by multifoliaterose · 2011-05-09T22:29:11.513Z · LW(p) · GW(p)
Don't know the answer to your question; now that I look at the Wikipedia page I realize that I should only have referred to the Zimbardo Stanford Prison Experiment (the phenomenon in the Milgram experiment is not what I had in mind).
↑ comment by MichaelVassar · 2010-11-13T03:13:17.702Z · LW(p) · GW(p)
Abnormally much still doesn't have to be much.
↑ comment by multifoliaterose · 2010-11-03T23:15:06.442Z · LW(p) · GW(p)
In line with your comment:
Clarifying concepts is the most difficult to automate part of the rationalist's art.
(which I upvoted), I'm not really sure what you (or Anna, or cousin_it) mean by "abnormally explicit reasoning" and I can't tell whether the disagreement here is semantic or more substantive.
↑ comment by NancyLebovitz · 2013-01-03T15:25:48.167Z · LW(p) · GW(p)
My assumption has been that religious wars mostly use religion as a surrogate for territorial, ethnic, and economic interest groups. On the other hand, religion somewhat shapes ethnic groups. Still, I think those wars are driven (at the top -- everyone else is stuck with the war whether they want it or not, and is likely to be influenced by propaganda) by "Because we're us!" much more than by "Because God wills it! (see elaborate argument for what God wants)".
↑ comment by Dr_Manhattan · 2011-05-09T15:13:16.294Z · LW(p) · GW(p)
Upvoted.
"With or without religion, good people can behave well and bad people can do evil; but for good people to do evil—that takes religion. " - Steven Winberg
↑ comment by byrnema · 2010-10-29T20:00:31.688Z · LW(p) · GW(p)
I had a similar impression and response. Humanity seems to get in trouble when it tries to make its values too explicitly consistent. The examples that come immediately to mind are when individuals or groups decide to become too strict, black-and-white, or exacting about upholding a value that they have. They forget about or deny a larger context for that value.
I think that to avoid this, a person needs to learn to be comfortable with some inconsistency in their values. Even as they learn not to be comfortable with inconsistencies in their beliefs about reality. Our values don't represent truths about reality in the same way our beliefs about external reality do, and this seems to be a deeper source of the epistemological conflicts we have.
↑ comment by jsalvatier · 2010-10-29T19:59:04.655Z · LW(p) · GW(p)
I too noticed that some of the examples did not necessarily involve abnormally explicit reasoning.
comment by PhilGoetz · 2010-10-29T03:44:30.866Z · LW(p) · GW(p)
A quote from the linked-to "cautions for Christians against clever arguments”, to save others the pain of wading through it to figure out what it's talking about:
It always begins the same way. They swallow first the rather subtle line that it is necessary for each to think for himself, to judge everything by the light of whether it appears reasonable to him. There is never any examination of that basic premise, though what it is really saying is that the mind of man becomes the ultimate test, the ultimate authority of all life. It is necessary for man to reason and it is necessary for him to think for himself and to examine things. But we are creatures under God, and we never can examine accurately or rightly until we begin with the basic recognition that all of man's thinking, blinded and shadowed as it is with the confusion of sin, must be measured by the Word of God. There is the ultimate authority.
comment by Alex Flint (alexflint) · 2010-10-29T07:41:58.595Z · LW(p) · GW(p)
Thanks for a ton of great tips, Anna; I just wanted to nitpick one:
Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already. Remember the Conservation of Expected Evidence more generally.
I suspect that reading enough X-ist books will affect my beliefs for any X (well, nearly any). The key word is enough -- I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.
Replies from: David_Gerard, None, Emile, Hschell
↑ comment by David_Gerard · 2010-10-31T13:06:18.211Z · LW(p) · GW(p)
It wouldn't necessarily make you a believer. Worked example: I joined in the battle of Scientology vs. the Net in 1995 and proceeded to learn a huge amount about Scientology and everything to do with it. I slung the jargon so well that some ex-Scientologists refused to believe I'd never been a member (though I never was). I checked my understanding with ex-Scientologists to see if my understanding was correct, and it largely was.
None of this put me an inch toward joining up. Not even slightly.
To understand something is not to believe it.
That said, it'll provide a large and detailed pattern in your head for you to form analogies with, good or bad.
Replies from: Kingreaper, alexflint
↑ comment by Kingreaper · 2010-11-01T09:50:23.871Z · LW(p) · GW(p)
Alexflint said:
I suspect that fully immersing myself in just about any subject, and surrounding myself entirely with people who advocate it, would significantly alter my beliefs, regardless of the validity of X.
It seems that your experience was learning about anti-Scientology facts while surrounded by people who advocated anti-Scientology.
So it's completely unsurprising that you remained anti-Scientology.
Had you been learning about Scientology from friends of yours who were Scientologists, you might have had a much harder time maintaining your viewpoint.
Similarly, learning about Christianity through the Skeptic's Annotated Bible is very different from learning about Christianity through a Christian youth group.
Replies from: David_Gerard
↑ comment by David_Gerard · 2010-11-01T11:53:23.272Z · LW(p) · GW(p)
I actually first started reading alt.religion.scientology because I was interested in the substance of Scientology (SPOILER: there isn't any) from being a big William S. Burroughs fan. The lunacy is pretty shallow below the surface, which is why the Church was so desperately keen to keep the more esoteric portions from the public eye as long as possible.
But, um, yeah. Point.
OTOH, all the Scientologists I knew personally before that emitted weirdness signals. Thinking back, they behaved like they were trying to live life by a manual rather than by understanding. Memetic cold ahoy!
↑ comment by Alex Flint (alexflint) · 2010-10-31T21:10:06.969Z · LW(p) · GW(p)
Interesting! But I do think it's harder than we imagine to maintain that perfect firewall between arguments you read and arguments you believe (or at least absorb into your decisions). Cases where you're genuinely uncertain about the truth are probably more salient than cases like Scientology on this front.
Replies from: David_Gerard
↑ comment by David_Gerard · 2010-11-01T00:37:16.595Z · LW(p) · GW(p)
Well, yeah. Scientology is sort of the Godwin example of dangerous infectious memes. But I've found the lessons most useful in dealing with lesser ones, and it taught me superlative skills in how to inspect memes and logical results in a sandbox.
Perhaps these have gone to the point where I've recompartmentalised and need to aggressively decompartmentalise again. Anna Salamon's original post is IMO entirely too dismissive of the dangers of decompartmentalisation in the Phil Goetz post, which is about people who accidentally decompartmentalise memetic toxic waste and come to the startling realisation they need to bomb academics or kill the infidel or whatever. But you always think it'll never happen to you. And this is false, because you're running on unreliable hardware with all manner of exploits and biases, and being able to enumerate them doesn't grant you immunity. And there are predators out there, evolved to eat people who think it'll never happen to them.
My own example: I signed up for a multi-level marketing company, which only cost me a year of my life and most of my friends. I should detail precisely how I reasoned myself into it. It was all very logical. The process of reasoning oneself into the mouth of a highly evolved predator tends to be. The cautions my friends and family gave me were all heuristic. This was before I studied Scientology in detail, which would I suspect have given me some immunity.
I should write a post on the subject (see my recent comments) except Anna's post covers quite a lot of it.
Replies from: NancyLebovitz, CarlShulman
↑ comment by NancyLebovitz · 2010-11-01T05:17:17.546Z · LW(p) · GW(p)
I hope you'll also post about how you reasoned yourself out of it.
Replies from: David_Gerard
↑ comment by David_Gerard · 2010-11-01T11:49:11.855Z · LW(p) · GW(p)
Reading the sucker shoot analogy in a Florence Littauer book (CAUTION: Littauer is memetic toxic waste with some potentially useful bits). That was the last straw after months of doubts, the bit where it went "click! Oh, this is actually really bad for me, isn't it?" Had my social life been on the internet then (this was 1993) this would have been followed with a "gosh, that was stupid, wasn't it?" post. I hope.
It may be relevant that I was reading the Littauer book because Littauer's books and personality theories were officially advocated in the MLM in question (Omegatrend, a schism of Amway) - so it seemed to be coming from inside. I worry slightly that I might have paid insufficient attention had it been from outside.
I'd be interested to know how others (a) suffered a memetic cold (b) got out of it. Possible post material.
Replies from: CarlShulman
↑ comment by CarlShulman · 2011-04-26T05:19:03.967Z · LW(p) · GW(p)
Just re-read this thread, and I'm still keen to hear how you reasoned yourself into it.
↑ comment by CarlShulman · 2010-11-02T14:19:12.027Z · LW(p) · GW(p)
I would be interested in reading this, and especially about what caused the initial vulnerability.
↑ comment by [deleted] · 2010-10-31T16:30:48.160Z · LW(p) · GW(p)
Seems to me that, for most questions where there is any real uncertainty, many books are written advocating multiple points of view. If I were to read any one of these books, I would probably move closer to the author's point of view (since the author will select evidence to support his/her belief), but to know what I would believe after reading all of the books, I would have to actually read them to compare the strength of their arguments.
Replies from: alexflint
↑ comment by Alex Flint (alexflint) · 2010-10-31T21:07:04.530Z · LW(p) · GW(p)
Yes, I think you're mostly right, but I just don't think I'm quite good enough to weigh the evidence just right, even when I'm explicitly trying to. Especially in cases where there is real uncertainty.
↑ comment by Emile · 2010-10-29T10:13:09.408Z · LW(p) · GW(p)
If you're built anything like me, the size of the effect does depend pretty strongly on X; some may require a simple book, some may require a full-fledged immersive indoctrination with a lot of social pressure. So I should move my belief towards any X that sounds like it could convince me with a simple book, which would cover a lot of (conflicting) theories on economics and history, but not a lot of religion or conspiracy theories or nationalist ideologies.
Another belief this would lead me away from is the idea that "people who believe in X are evil/crazy" for a lot of values of X.
↑ comment by HS2021 (Hschell) · 2022-06-14T22:35:33.102Z · LW(p) · GW(p)
comment by multifoliaterose · 2010-10-29T19:25:24.247Z · LW(p) · GW(p)
Anna - I'm favorably impressed by this posting! Thanks for making it. It makes me feel a lot better about what SIAI staff mean by rationality.
In the past I've had concerns that SIAI's focus on a future intelligence explosion may be born of explicit reasoning that's nuts (in the sense of your article), and the present posting does a fair amount to assuage my concerns - I see it as a strong indicator that you and some of the other SIAI staff are vigilant against the dangers of untrustworthy explicit reasoning.
Give Michael my regards.
comment by sixes_and_sevens · 2010-10-29T09:01:27.820Z · LW(p) · GW(p)
If you can predict what you'll believe a few years from now, consider believing that already.
I've been thinking about this lately. Specifically, I've been considering the following question:
If you were somehow obliged to pick which of your current beliefs you'd disagree with in eight years time, with real and serious consequences for picking correctly or incorrectly, what criteria would you use to pick them?
I'm pretty sure that difficulty in answering this question is a good sign.
Replies from: NancyLebovitz, Vladimir_Nesov
↑ comment by NancyLebovitz · 2010-10-29T13:14:17.908Z · LW(p) · GW(p)
It seems to me that the problem splits into two parts-- changes in belief that you have no way of predicting (they're based on information and/or thinking that you don't have yet), and changes in belief that are happening slowly because you don't like the implications.
Replies from: anonym
↑ comment by anonym · 2010-10-31T00:10:44.050Z · LW(p) · GW(p)
Like Nancy said for the second class of problems, but a little more generally, I'd preferentially pick the ones that I have rational reasons to suspect at the moment and that seem to be persisting for reasons that aren't obvious to me (or aren't rational), and ones that feel like they're surviving because they exploit my cognitive biases and other undesirable habits like akrasia.
↑ comment by Vladimir_Nesov · 2010-10-29T10:09:28.984Z · LW(p) · GW(p)
You can predict that your belief will change, just not in what direction.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2010-10-29T10:12:58.679Z · LW(p) · GW(p)
I think the question has implied acceptance of this.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-10-29T10:52:41.892Z · LW(p) · GW(p)
Then, could you describe your idea in more detail?
Replies from: sixes_and_sevens, shokwave
↑ comment by sixes_and_sevens · 2010-10-29T11:18:05.160Z · LW(p) · GW(p)
Well, how would you answer the question?
To apply it to a more manageable example, my beliefs about psychological sex differences in humans have changed considerably over both long and short timescales, to the point where I actively anticipate having different beliefs about them in the near future. In spite of this, I have no way of knowing which of those beliefs I'm going to demote or reject in future, because if I had such information it would be factored into the beliefs themselves.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-10-29T16:08:07.955Z · LW(p) · GW(p)
Well, how would you answer the question?
Beliefs about facts that were extensively studied probably won't change, unless I expect new observations to be made that resolve some significant uncertainty. For example, special relativity and population of USA in 2007 will stay about the same, while my belief about USD:EUR ratio in 2011 will change in 2011, updating with actual observations. I don't see any problem with being able to distinguish such cases, it always comes down to whether I expect new observations/inferences to be made.
Your second paragraph still sounds to me as if you continue to make the mistake I pointed out. You can't know how your beliefs will change (become stronger or become weaker), but you can know that certain beliefs will probably change (in one of these directions). So, you can't know which belief you'll accept in the future, but you can know that the level of certainty in a given belief will probably shift.
Replies from: sixes_and_sevens
↑ comment by sixes_and_sevens · 2010-10-29T16:49:34.838Z · LW(p) · GW(p)
I don't think I'm making a mistake. I think we're agreeing.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-10-29T18:41:13.794Z · LW(p) · GW(p)
I don't have an understanding of that, but don't think it's worth pursuing further.
↑ comment by shokwave · 2010-10-29T11:15:47.365Z · LW(p) · GW(p)
I got the sense that the question is asking you to look for beliefs you predict will change for the worse. So, you can't predict which direction your beliefs will change in, but if you have an inkling that one will go in the direction of "false", then that is some sort of warning sign:
- You haven't thought the belief through fully, so you are semi-aware there might be contradictions down the line you haven't encountered yet, or
- You haven't considered all the evidence fully, so you are semi-aware that there might be a small amount of very strong evidence against the belief, or
- You have privileged your hypothesis, and you are semi-aware there might be explanations that fit the evidence better, or
- You are semi-aware that you have done one of these things, but don't know which because you haven't thought about it.
In any case, your motivated cognition has let you believe the belief, but motivated cognition doesn't feel precisely like exhaustive double-checking, and a question like this tries to find that feeling.
Replies from: AnnaSalamon, sixes_and_sevens
↑ comment by AnnaSalamon · 2010-10-29T12:14:08.113Z · LW(p) · GW(p)
I got the sense that the question is asking you to look for beliefs you predict will change for the worse.
Er, no, I more meant beliefs that you'll change for the better. For example, some people find themselves flip-flopping from one fad or intellectual community to the next, each time being very enthusiastic about the new set of ideas. In such cases, their friends can often predict that later on their beliefs will move back toward their normal beliefs, and so the individual probably can too.
↑ comment by sixes_and_sevens · 2010-10-29T12:23:06.367Z · LW(p) · GW(p)
This was sort of what I was aiming for. Evidence saying you're going to change your mind about something should be the same as evidence for changing your mind about something.
comment by RobinHanson · 2010-10-30T18:44:50.169Z · LW(p) · GW(p)
I think it is a bit unfair to frame arguments to trust outside views or established experts as arguments to not think about things. Rather, they are arguments about how much one should trust inside views or your own thoughts relative to other sources.
comment by Johnicholas · 2010-10-29T13:24:00.159Z · LW(p) · GW(p)
Thanks for posting this, it's awesome.
I particularly endorse trying to build things out of your abstract reasoning, as a way of moving knowledge from "head-knowledge" to "fingers-knowledge".
Regarding this sentence: "Remember that if reading X-ist books will predictably move your beliefs toward X, and you know there are X-ist books out there, you should move your beliefs toward X already."
Since I'm irrational (memetically insecure) and persuasive deceptions (memetic rootkits) exist, the sentence needs some qualifier. Maybe: "If you believe that the balance of the unknown arguments favors believing X, then you have reason to believe X."
Replies from: jsalvatier
↑ comment by jsalvatier · 2010-10-29T20:02:30.924Z · LW(p) · GW(p)
"fingers-knowledge" is a great phrase.
comment by billswift · 2010-10-29T04:41:51.577Z · LW(p) · GW(p)
Make every link in a chain of argument explicit. Most of the weirder conclusions I have seen in my own and others' beliefs have come about because they conflated several different lines of reasoning, or jumped over steps that appeared "obvious" but included a mistaken assumption that went unnoticed because it was never spelled out explicitly.
Also, be very careful not to confuse different meanings of a word; sometimes the differences are subtle, so you need to be watchful.
For actually reasoning with an argument, keep it schematic. One of the reasons reading philosophy is so hard is that it is written in prose. For any but the simplest arguments, though, you need to convert the argument to schematic form before you can actually reason about it effectively. Like trying to do mathematics or play music from a written description (though not quite that extreme), it just doesn't work well.
Replies from: NancyLebovitz, AnnaSalamon
↑ comment by NancyLebovitz · 2010-10-29T13:06:53.933Z · LW(p) · GW(p)
I'm interested in examples for the sort of mistakes you're describing.
Replies from: billswift
↑ comment by billswift · 2010-10-29T14:28:56.526Z · LW(p) · GW(p)
In general, when you see two people arguing past each other, these kinds of problems are often involved at the root. Two examples that I can give are the problem of "natural rights" and the problem of "authority". The natural rights issue needs a pretty long and involved discussion even to understand but it amounts to a long, convoluted sequence of conflations and assumptions.
The problem of authority is easier to describe, since it amounts to a single major error - authority conflates two distinct ideas - knowledge or expertise and justifiable or legitimate force. The two are necessarily linked in parental authority, but they are distinct ideas that tend to cause misunderstandings and resentment when conflated in institutional academic or state interactions.
A good source for understanding the root idea in a political context is Thomas Sowell's A Conflict of Visions, where he points out that people tend to use the same word to mean different things - his main examples are "fairness" and "equality". Those distinct meanings rest on the fact that those words conflate those two (and more) meanings into their definitions - neither side is "misusing" the words - the words themselves, and the fact that most people don't notice the conflation, are the problem.
Replies from: jsalvatier
↑ comment by jsalvatier · 2010-10-29T20:09:56.234Z · LW(p) · GW(p)
Basically: A human's guide to words
↑ comment by AnnaSalamon · 2010-10-29T10:19:05.323Z · LW(p) · GW(p)
Could you give examples?
comment by Roko · 2010-11-21T20:01:03.602Z · LW(p) · GW(p)
There is a much simpler way of winning than carefully building up your abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces.
The simpler way is just to recognize that, as a human in a western society, you won't lose much more or win much more than the other humans around you. So you may as well dump the abstract reasoning and rationality, and pick some humans who seem to live relatively non-awful lives (e.g. your colleagues/classmates) and take whatever actions they take. Believe what they believe, even if it seems irrational. Do what they do.
Careful probability estimation, and action taken based upon anticipations of consequences, is the kind of cognitive algorithm befitting a lone agent who actually reaps what (s)he sows. For a human, herd-mentality seems to be the more elegant solution: elegant in the sense that even though the epistemology is hard to get right, there is a robust argument about consequences and utilities: almost all of the relatively-average-strategy humans in the herd will get roughly the same deal out of life.
Research from hedonic psychology on the "Hedonic Treadmill" effect backs this up further: even if you make more (or less) money than average, you probably won't actually be happier or better (worse) off.
Of course there are details and complications: which subgroup of humans do you join? How do you make the tradeoff between different subcultures etc. But still, you don't even need a general solution to that problem, you only need to decide which of the handful of specific subcultures available to you seems best for you.
And, of course, it goes without saying that this strategy is useless for someone who is determined to invest emotionally in a nonstandard life-narrative, like utilitarian charity or life-extension. From this point of view, one might object that joining the herd is selfish in the sense that it isn't the action which maximizes utility across the herd; but then again most people don't have a utilitarian concept of selfishness and don't count benefit to random strangers as part of their actual near-mode, actionable goal set, so from their axiological point of view, herding is an acceptable solution.
Replies from: komponisto, pnrjulius
↑ comment by komponisto · 2010-11-21T20:29:15.423Z · LW(p) · GW(p)
The simpler way is just to recognize that, as a human in a western society, you won't lose much more or win much more than the other humans around you
Well, unless you actually take specific steps to win more....which is kind of what this is about.
which subgroup of humans do you join? How do you make the tradeoff between different subcultures etc. But still, you don't even need a general solution to that problem, you only need to decide which of the handful of specific subcultures available to you seems best for you.
Note that people probably tend to end up here by this very process. That is, of all the subcultures available to them, the subculture of people who are interested in
carefully building up [their] abstract-reasoning ability to the point where it produces usefully accurate, unbiased, well-calibrated probability distributions over relevant outcome spaces
is the most attractive.
Replies from: Roko, Roko
↑ comment by Roko · 2010-11-21T20:47:14.554Z · LW(p) · GW(p)
Note that people probably tend to end up here by this very process. That is, of all the subcultures available to them, the subculture of people who are interested in
True ... but I suspect that people who end up here do so because they basically take the verbally endorsed beliefs of the herd more literally than average. Rationality as memetic immune disorder, failure to compartmentalize, etc.
Perhaps I should amend my original comment to say that if you are cognitively very different from the herd, you may want to use a bit of rationality/self-development like a corrective lens. You'll have to run compartmentalization in software.
Maybe I should try to start a new trend: use {compartmentalization} when you want to invalidate an inference which most people would not make because of compartmentalization?
E.g. "I think all human lives are equally valuable"
"Then why did you spend $1000 on an ipad rather than giving it to Givewell?"
"I refute it thus: {compartmentalization: nearmode/farmode}"
↑ comment by Roko · 2010-11-21T20:42:55.677Z · LW(p) · GW(p)
What steps can a person actually take to really, genuinely win more, in the sense of "win" which most people take as their near-mode optimization target?
I suspect that happiness set-points mean that there isn't really much you can do.
In fact probably one of the few ways to genuinely affect the total of well-being over your lifetime is to take seriously the notion that you have so little control over it: you'll get depressed about it.
I recently read a book called 59 seconds which said that 50% of the variance in life satisfaction/happiness is directly genetically determined via your happiness set-point.
In fact the advice that the book gave was to just chill out about life, that by far the easiest way to improve your life is to frame it more positively.
Replies from: Vaniver
↑ comment by Vaniver · 2010-11-21T20:46:22.052Z · LW(p) · GW(p)
I suspect that happiness set-points mean that there isn't really much you can do.
Happiness is a sham; focus on satisfaction. There don't seem to be satisfaction set points.
That said, I agree with what you seem to be saying- that optimization is a procedure that is itself subject to optimization.
↑ comment by pnrjulius · 2012-04-05T23:48:41.085Z · LW(p) · GW(p)
There's at least one very big problem with this sort of majoritarian herding: If everyone did it, it wouldn't work in the least. You need a substantial proportion of people actually trying to get the right answer in order for "going with the herd" to get you anywhere. And even then, it will only get you the average; you'll never beat the average by going with the average. (And don't you think that, say, Einstein beat the average?)
And in fact there are independent reasons from evolutionary psychology and memetics to suspect that everyone IS doing it, or at least a lot of people are doing it a lot of the time. Ask most Christians why they are Christian, and they won't give you detailed theological reasons; they'll shrug and say "It's how I was raised".
This is sort of analogous to the efficient market hypothesis, and the famous argument that you should never try to bet against the market because on average the market always wins. Well... if you actually look at the data, no it doesn't, and people who bet against the market can in some cases become spectacularly rich. Moreover, the reason the market is as efficient as it is relies upon the fact that millions of people buy their stocks NOT in a Keynesian beauty contest, but instead based on the fundamental value of underlying assets. With enough value investors, people who just buy market-wide ETFs can do very well. But if there were no value investors (or worse, no underlying assets! A casino is an example of a market with options that have no underlying assets), buying ETFs would get you nowhere.
comment by Vladimir_M · 2010-10-29T21:15:16.180Z · LW(p) · GW(p)
The main problem I see with this post is that it assumes that it's always advantageous to find out the truth and update one's beliefs towards greater factual and logical accuracy. Supposedly, the only danger of questioning things too much is that attempts to do so might malfunction and instead move one towards potentially dangerous false beliefs (which I assume is meant by the epithets such as "nutty" and "crazy").
Yet I find this assumption entirely unwarranted. The benefits of holding false beliefs can be greater than the costs. This typically happens when certain false beliefs have high positive signaling value, but don't imply any highly costly or dangerous behavior. Questioning and correcting such beliefs can incur far more cost than benefit; one can try to continue feigning them, but for most people it will be at least somewhat difficult and unpleasant. There are also many situations where the discovery of truth can make one's life miserable for purely personal reasons, and it's in the best interest of one's happiness to avoid snooping and questioning things too much.
It seems to me that the problem for uncompromising truth-seekers is not just how to avoid invalid reasoning leading to crazy false beliefs, but also how to avoid forming true beliefs that will have negative signaling consequences or drastically reduce one's happiness. Now, maybe you would argue that one should always strive for truth no matter what, but this requires a separate argument in addition to what's presented in the above post -- which is by itself insufficient to address the reasons for why people are "afraid to think fully about certain subjects."
Replies from: NancyLebovitz, simplicio, MichaelVassar
↑ comment by NancyLebovitz · 2010-10-30T11:37:44.184Z · LW(p) · GW(p)
Speaking from experience, avoiding too much thought about true beliefs that negatively impact one's happiness without giving any value is done by monitoring one's happiness. Or possibly by working on depression.
For quite some time, my thoughts would keep going back to the idea that your government can kill you at any time (the Holocaust), and your neighbors can kill you at any time (Rwanda).
Eventually, I noticed that such thoughts were driven by an emotional pull rather than their relevance to anything I wanted or needed.
There's still some residue-- after all, it's a true thought, and I don't think I'm just spreading depression to occasionally point out that governments could build UFAI or be a danger to people working on FAI.
Unfortunately, while I remember the process of prying myself loose from that obsession, I don't remember what might have led to the inspiration to look at those thoughts from the outside.
More generally, I believe there's an emotional immune system, and it works better for some people than others, at some times than others, and probably (for an individual) about some subjects than others.
↑ comment by simplicio · 2010-10-30T05:10:09.234Z · LW(p) · GW(p)
...true beliefs that will have negative signaling consequences or drastically reduce one's happiness.
Do you have some examples of such beliefs?
Replies from: Vladimir_M, christopherj
↑ comment by Vladimir_M · 2010-10-30T06:12:54.902Z · LW(p) · GW(p)
The problem with the most poignant examples is that it's impossible to find beliefs that signal low status and/or disreputability in the modern mainstream society, and are also uncontroversially true. The mention of any concrete belief that is, to the best of my knowledge, both true and disreputable will likely lead to a dispute over whether it's really true. Yet, claiming that there are no such beliefs at all is a very strong assertion, especially considering that nobody could deny that this would constitute a historically unprecedented state of affairs.
To avoid getting into such disputes, I'll give only two weaker and (hopefully) uncontroversial examples.
As one example, many people have unrealistic idealized views of some important persons in their lives -- their parents, for example, or significant others. If they subject these views to rational scrutiny, and perhaps also embark on fact-finding missions about these persons' embarrassing past mistakes and personal failings, their new opinions will likely be more accurate, but it may make them much unhappier, and possibly also shatter their relationships, with all sorts of potential awful consequences. This seems like a clear and realistic example where less accurate beliefs are in the best interest of everyone involved.
Or, to take another example, the post mentions people who expend some effort to follow certain forms of religious observance. For many people in various religious and ethnic groups, such behavior produces pleasant feelings of one's own virtuousness, as well as positive signals to others that one is a committed, virtuous, and respectable member of the community, with all sorts of advantages that follow from that. Now, if such a person scrutinizes the beliefs on which this behavior is based, and concludes that they're just superstitious nonsense, they will be forced to choose between the onerous and depressing burden of maintaining a dishonest facade or abandoning their observance and facing awful social consequences. I don't see how this can be possibly seen as beneficial, even though it would mean that their beliefs would become closer to reality.
Replies from: HughRistik, simplicio, Strange7
↑ comment by HughRistik · 2010-11-03T00:57:45.746Z · LW(p) · GW(p)
The problem with the most poignant examples is that it's impossible to find beliefs that signal low status and/or disreputability in the modern mainstream society, and are also uncontroversially true.
This is a good point. Most ideas that are mistreated by modern mainstream society are not obviously true. Rather, they are treated as much less probable than a less-biased assessment would estimate. This tendency leads to many ideas being given a probability of 0%, when they really deserve a probability of 40-60% based on the current evidence. This is consistent with your experience (and mine) of examining various controversies and being unable to tell which positions are actually correct, based on the current evidence.
The psychology seems to combine a binary view of truth with a raised burden of proof for low-status beliefs: people are allowed to "round down" or even floor their subjective probabilities for undesirable beliefs. Any probability less than 50% (or 90%, in some discussions) can be treated the same.
Unfortunately, the English language (and probably others, too) is horribly bad for communication about probability, allowing such forms of sophistry to flourish. And the real world is often insufficient to punish educated middle-class people for rounding or flooring probabilities in the socially desirable direction, even though people making such abuses of probability would get destroyed in many practical endeavours (e.g. betting).
One method for avoiding bias is to identify when one is tempted to engage in such rounding and flooring of probabilities.
↑ comment by simplicio · 2010-10-30T06:56:36.236Z · LW(p) · GW(p)
I see your point. I agree that these people are moving away from a local optimum of happiness by gaining true beliefs.
As to the global optimum, it's hard to say. I guess it's plausible that the best of all possible happinesses involves false beliefs. Does it make sense that I have a strong ethical intuition to reject that kind of happiness?
(Anecdotally, I find the more I know about my loved ones' foibles, the more I look on them fondly as fellow creatures.)
↑ comment by Strange7 · 2012-03-07T03:59:55.853Z · LW(p) · GW(p)
As one example, many people have unrealistic idealized views of some important persons in their lives -- their parents, for example, or significant others. If they subject these views to rational scrutiny, and perhaps also embark on fact-finding missions about these persons' embarrassing past mistakes and personal failings, their new opinions will likely be more accurate, but it may make them much unhappier, and possibly also shatter their relationships, with all sorts of potential awful consequences.
Consequences like... getting out of a relationship founded on horror and lies? I agree that could be painful, but I have a hard time seeing it as a net loss.
↑ comment by christopherj · 2013-11-02T04:54:29.366Z · LW(p) · GW(p)
...true beliefs that will have negative signaling consequences or drastically reduce one's happiness.
Do you have some examples of such beliefs?
Here's a good example: "The paper that supports the conventional wisdom is Jensen, A. R., & Reynolds, C. R. (1983). It finds that females have a 101.41 mean IQ with a 13.55 standard deviation versus males that have a 103.08 mean IQ with a 14.54 standard deviation."
Now, people will lynch you for that difference of 1.67 IQ points (1.63 %), unless you make excuses for some kind of bias or experimental error. For one thing, the overall average IQ is supposed to be 100. Also, some studies find females with the higher IQ.
But what about that other bit, the 7% difference in standard deviation? Stated like this, it is largely inoffensive, because people who know enough math to understand what it means usually know to disregard slight statistical variations in the face of specific evidence. But what if you take that to its logical conclusions concerning the male/female ratio of the top 0.1% smartest people, and then tell other people your calculated ratio? (To make sure it is a true belief, state it as "this study, plus this calculation, results in...") If you state such a belief, people will take it as a signal that you consider maleness to be evidence of being qualified. And since people are bad at math and will gladly follow a good cause regardless of truth, almost no one will care that looking at actual qualifications will necessarily swamp any effect from the statistics, nor will they care whether the belief is supported by a scientific study (weren't those authors both male?). And the good-cause people aren't even wrong -- given that people are bad at math and there is discrimination against women, knowledge of that study will likely increase discrimination, whether through ignorance or intentional abuse -- regardless of whether the study was accurate.
If you accept the above belief, but decide that letting others know about it is a bad idea, then you still have to spend some effort guarding lest you let your secret slip in your speech or actions. And odds are, such a belief would provide you zero benefits while exposing you to a small but constant drain on mental resources and a risk of social catastrophe.
Replies from: Vaniver, nshepperd, army1987, army1987↑ comment by Vaniver · 2013-11-02T22:46:50.282Z · LW(p) · GW(p)
But what if you take that to its logical conclusions concerning the male/female ratio of the top 0.1% smartest people, and then tell other people your calculated ratio?
This actually checks out.
↑ comment by nshepperd · 2013-11-02T07:06:31.073Z · LW(p) · GW(p)
But what if you take that to its logical conclusions concerning the male/female ratio of the top 0.1% smartest people, and then tell other people your calculated ratio?
You might be able to inoculate yourself against that by also calculating and quoting the conjugate male/female ratio of the lowest 0.1% of the population. Which is really something you should be doing anyway any time you look at a highest or lowest X% of anything, lest people take your information as advice to build smaller schools, or move to the country to prevent cancer.
Replies from: Vaniver, army1987↑ comment by Vaniver · 2013-11-02T22:48:12.249Z · LW(p) · GW(p)
You might be able to inoculate yourself against that by also calculating and quoting the conjugate male/female ratio of the lowest 0.1% of the population.
Why would that "inoculate" you? Yeah, it makes it obvious that you're not talking about a mean difference (except for, you know, the real mean difference found in the study), but saying "there are more men than women in prisons and more men than women that are math professors at Harvard" is still not gender egalitarian.
↑ comment by A1987dM (army1987) · 2013-11-03T09:04:29.982Z · LW(p) · GW(p)
Using those figures, 0.117% of males and 0.083% of females have IQs below 58.814, so if the sex ratio in whatever-you're-thinking-of is much greater than 1.4 males per female, something else is going on.
↑ comment by A1987dM (army1987) · 2013-11-03T08:58:43.400Z · LW(p) · GW(p)
Using those figures, 0.152% of males and 0.048% of females have IQs over 146.17, so if the sex ratio in whatever-you're-thinking-of is much greater than 3.2 males per female, something else is going on.
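[Editor's note: for readers who want to check the arithmetic in the two tail-ratio comments above, here is a minimal sketch, not part of the original thread, that reproduces army1987's figures. It assumes a 50/50 male/female population and the Jensen & Reynolds means and standard deviations quoted by christopherj (females 101.41 ± 13.55, males 103.08 ± 14.54); the variable names and the use of scipy are illustrative choices, not anything from the original comments.]

```python
# A minimal sketch reproducing the tail arithmetic discussed above, assuming a
# 50/50 male/female population and the quoted Jensen & Reynolds (1983) figures:
# females N(101.41, 13.55), males N(103.08, 14.54).
from scipy.optimize import brentq
from scipy.stats import norm

male = norm(103.08, 14.54)
female = norm(101.41, 13.55)

def upper_tail(t):
    """Fraction of the 50/50 mixed population with IQ above t."""
    return 0.5 * (male.sf(t) + female.sf(t))

def lower_tail(t):
    """Fraction of the 50/50 mixed population with IQ below t."""
    return 0.5 * (male.cdf(t) + female.cdf(t))

# IQ cutoffs for the top and bottom 0.1% of the mixed population
# (roughly 146.2 and 58.8, matching the thresholds quoted above).
top_cut = brentq(lambda t: upper_tail(t) - 0.001, 100, 200)
bottom_cut = brentq(lambda t: lower_tail(t) - 0.001, 0, 100)

print(f"top 0.1% cutoff: {top_cut:.2f}")
print(f"  males above {male.sf(top_cut):.3%}, females above {female.sf(top_cut):.3%}, "
      f"ratio {male.sf(top_cut) / female.sf(top_cut):.1f}")      # ~3.2 males per female

print(f"bottom 0.1% cutoff: {bottom_cut:.2f}")
print(f"  males below {male.cdf(bottom_cut):.3%}, females below {female.cdf(bottom_cut):.3%}, "
      f"ratio {male.cdf(bottom_cut) / female.cdf(bottom_cut):.1f}")  # ~1.4 males per female
```

Under those assumptions, the script gives cutoffs of about 146.2 and 58.8 and ratios of about 3.2 and 1.4, matching the figures in the comments above.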
↑ comment by A1987dM (army1987) · 2013-11-02T22:22:35.260Z · LW(p) · GW(p)
1.67 IQ points (1.63 %)
The zero of the scale is arbitrary, so the "1.63%" is meaningless.
↑ comment by MichaelVassar · 2010-11-03T22:43:40.669Z · LW(p) · GW(p)
In my experience, practically speaking though not theoretically, true beliefs are literally always beneficial relative to false ones, though not always worth the cost of acquiring them.
comment by AlanCrowe · 2010-10-29T20:37:10.132Z · LW(p) · GW(p)
Propositional calculus is brittle. A contradiction implies everything.
In Set theory, logic and their limitations, Machover calls this the Inconsistency Effect. I'm surprised to find that this doesn't work well as a search term. Hunting around, I find:
In classical logic, a contradiction is always absurd: a contradiction implies everything.
Another trouble is that the logical conditional is such that P AND ¬P ⇒ Q, regardless of what Q is taken to mean. That is, a contradiction implies that absolutely everything is true.
Any false fact that you believe acts as a logic bomb. Once you come across the true fact, the combination permits you to construct a logical argument to reach any conclusion. This is just what the monsters from the Id have been waiting for.
The inconsistency effect implies that fallible creatures dare not rely on pure logic.
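[Editor's note: to make the inconsistency effect concrete, here is a minimal sketch in Lean 4, not from the original comment, showing both the bare principle of explosion and the "logic bomb" case of a false belief meeting the true fact. The arithmetic example (2 + 2 = 5) is an illustrative stand-in, not anything cited in the thread.]

```lean
-- Ex falso quodlibet / the "inconsistency effect":
-- from P and ¬P, any proposition Q whatsoever follows.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp

-- The "logic bomb": hold one false belief (here, 2 + 2 = 5), later encounter
-- the true fact, and the contradiction lets you "prove" any conclusion at all.
example (Q : Prop) (wrong : 2 + 2 = 5) : Q :=
  absurd wrong (by decide)  -- `decide` verifies that 2 + 2 = 5 is in fact false
```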
Replies from: Johnicholas, Vladimir_Nesov↑ comment by Johnicholas · 2010-10-29T20:42:42.382Z · LW(p) · GW(p)
"Ex falso quodlibet" or "principle of explosion" might be the search term you are looking for. Relevance logic and other nonclassical logics are not explosive in the same way.
↑ comment by Vladimir_Nesov · 2010-10-29T20:40:40.067Z · LW(p) · GW(p)
The inconsistency effect implies that fallible creatures dare not rely on pure logic.
They can't consider themselves manifestations of logic, but since they are reasoning about the infallible logic, not about themselves, there is no problem.
comment by Peter_Lambert-Cole · 2010-10-29T13:32:21.201Z · LW(p) · GW(p)
I wouldn't say that this is a fear of an "inaccurate conclusion," as you say. Instead, it's a fear of losing control and becoming disoriented: "losing your bearings," as you said. You're afraid that your most trustworthy asset - your ability to reason through a problem and come out safe on the other side, an asset that should never fail you - will fail you and lead you down a path you don't want to go. In fact, it could mean Game Over if it leads you to kill or be killed, as you highlight in your examples of the Unabomber, Mitchell Heisman, and zealot soldiers.
I especially like the orientation metaphor here. And I think that your piece addresses this. First, you need to know where you are. Recognize when you are in far mode and thinking abstractly and when you are in near mode and thinking concretely. Then you can think about where you should be, near or far. Learn to recognize which one is better for your current situation and be able to switch between them. This is also part of being oriented. Finally, have a kill switch if you feel yourself losing control.
comment by TraderJoe · 2012-03-06T15:21:14.970Z · LW(p) · GW(p)
For example, a Christian might give away all their possessions, rejoice at the death of their children in circumstances where they seem likely to have gone to heaven, and generally treat their chances of Heaven vs Hell as their top priority.
Steven Landsburg used this reasoning, combined with the fact that Christians don't generally do this, to conclude not that Christians don't act on their beliefs, but that Christians don't generally believe what they claim to believe. I think the different conclusion is reached because he assigns a lot more rationality to people than you do. But certainly there are, for some people, very strong incentives against admitting that you've stopped believing in God.
Replies from: pnrjulius↑ comment by pnrjulius · 2012-04-05T23:41:47.417Z · LW(p) · GW(p)
What does it mean, actually, to "believe" something? If it implies that you integrate it into your worldview and act accordingly, then these people clearly don't "believe" in that sense. But this may be an altogether too strong notion of what it is to "believe" something, since most people have things they'd say they "believe" that aren't applied in this way.
comment by mwaser · 2010-10-29T19:36:57.629Z · LW(p) · GW(p)
I, too, really appreciated this post.
Unfortunately, though, I think that you missed one of the most important skills for safer reasoning -- recognizing and acknowledging assumptions (and double-checking that they are still valid). Many of the most dangerous failures of reasoning occur when a normally safe assumption is carried over to conditions where it is incorrect. Diving three feet into water that is unobstructed and at least five feet deep won't lead to a broken neck -- unless the temperature is below zero centigrade.
comment by Relsqui · 2010-10-29T00:37:55.288Z · LW(p) · GW(p)
I like this; it seems practical and realistic. As a point of housekeeping, double-check the spaces around your links--some of them got lost somewhere. :)
Replies from: None↑ comment by [deleted] · 2010-10-29T02:09:14.956Z · LW(p) · GW(p)
More specifically, from "4. Update your near-mode anticipations, not just your far-mode beliefs" downwards, the links began lacking spaces with two exceptions.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-10-29T12:21:54.126Z · LW(p) · GW(p)
Are they still lacking spaces on your browser? I'm puzzled, because they were lacking spaces for me last night (confusingly, even though there were spaces in my text in the edit window), and then they disappeared this morning without my having changed the text meanwhile.
Replies from: None↑ comment by [deleted] · 2010-10-29T14:32:15.798Z · LW(p) · GW(p)
That's quite strange, but yes, they're fixed now.
For reference: What browser are you using? I'm currently running Google Chrome.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-10-29T16:28:31.379Z · LW(p) · GW(p)
Chrome as well. I've had this problem before with other posts and non-confidently suspect I tried other browsers at that time.
comment by komponisto · 2010-10-30T19:52:01.656Z · LW(p) · GW(p)
Posted on behalf of someone else who had the following comment:
I would have liked for [this post] to contain details about how to actually do this:
If you're uncomfortable not knowing, so that you find yourself grasping for one framework after another, build your tolerance for ambiguity, complexity, and unknowns.
comment by wedrifid · 2010-11-02T08:22:12.104Z · LW(p) · GW(p)
People fear questioning their “don't walk alone at night” safety strategy, lest they venture into danger.
I routinely walk (and run) alone at night. Indeed, I plan on going for a 40k run/walk alone tonight. Yet I observe that walking alone at night does really seem like it involves danger - particularly if you are an attractive female.
I actually know people (ok, so I am using my sisters as anecdotes) who are more likely to fear considering a "don't walk alone at night" strategy because it may mean they would have to sacrifice their exercise routine. Fortunately Melbourne is a relatively safe city as far as 'cities in the world' go.
comment by jsalvatier · 2010-10-29T20:01:20.530Z · LW(p) · GW(p)
I love this post, but I think "we can walk into rationality without fear" is too strong.
comment by jsalvatier · 2010-10-29T19:56:44.568Z · LW(p) · GW(p)
I'd just like to point out that 5 looks like a specific application of 1. Recognizing that your "goal" is just what you think is your goal, and you can be mistaken about it in many ways.
comment by JoshuaZ · 2010-10-29T15:13:18.815Z · LW(p) · GW(p)
Minor typo -"denotationally honest, including refusing to jobs that required a government loyalty oath" - no need for "to" before "jobs".
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-10-29T16:27:37.078Z · LW(p) · GW(p)
Thanks. Fixed.
Replies from: AlanCrowe↑ comment by AlanCrowe · 2010-10-29T20:07:43.539Z · LW(p) · GW(p)
Deontological?
I'm really confused now. I felt sure that the typo was "denotational" when it should have been "deontological".
Replies from: JGWeissman↑ comment by JGWeissman · 2010-10-29T20:40:32.976Z · LW(p) · GW(p)
"denotationally honest" means speak the literal truth, though presumably your conotations and non-verbal communication may be misleading.
Commitment to this principle certainly seems deontological, as opposed to a consequentialist concern for others having accurate beliefs. One might claim that it is based on the consequentialist goal of having a reputation for making literally honest statements, but I would suspect that to be a rationalization.