Why a New Rationalization Sequence?
post by dspeyer · 2020-01-13T06:46:33.813Z · LW · GW · 8 comments
This is the first in a five-post mini-sequence about rationalization, which I intend to post one per day. And you may ask: why should we have such a sequence?
What is Rationalization and Why is it Bad?
For those of you just tuning in, rationalization is when you take a conclusion you want to reach and try to come up with an argument that concludes it. The argument looks very similar to one in which you started from data, evaluated it as well as you could, and reached this conclusion naturally. Almost always similar enough to fool the casual observer, and often similar enough to fool yourself.
If you're deliberately rationalizing for an outside audience, that's out-of-scope for this sequence. All the usual ethics and game theory apply.
But if you're involuntarily rationalizing and fooling yourself, then you've failed at epistemics. And your arts have turned against you. Know a lot about scientific failures? Now you can find them in all the studies you didn't like!
Didn't Eliezer Already Do This?
Eliezer wrote the against rationalization sequence back in 2007/8. If you haven't read it, you probably should. It does a good job of describing what rationalization is, how it can happen, and how bad it can be. It does not provide a lot of tools for you to use in protecting yourself from rationalization. That's what I'll be focusing on here.
And, besides, if we don't revisit a topic this important every decade or so with new developments, then what is this community for?
Is There Hope?
Periodically, I hear someone give up on logical argument completely. "You can find an argument for anything," they say. "Forget logic. Trust [ your gut / tradition / me ] instead." This brushes over the question of whether the proposed alternative is any better. There is no royal road to knowledge.
Still, the question needs answering. If rationalization looks just like logic, can we ever escape Cartesian Doubt?
The Psychiatrist Paradox
A common delusion among grandiose schizophrenics in institutions is that they are themselves psychiatrists. Consider a particularly underfunded mental hospital, in which the majority of people who "know" themselves to be psychiatrists are wrong. No examination of the evidence will convince them otherwise. No matter how overwhelming, some reason to disbelieve will be found.
Given this, should any amount of evidence suffice to convince you that you are such a psychiatrist?
I am not aware of any resolution to this paradox.
The Dreaming Paradox
But the Psychiatrist Paradox is based on an absolute fixed belief and total rationalization as seen in theoretically ideal schizophrenics. (How closely do real-world schizophrenics approximate this ideal? That question is beyond the scope of this document.) Let's consider people a little more reality-affiliated: the dreaming.
Given that any evidence of awakeness is a thing that can be dreamed, should you ever be more than 90% confident you're awake? (Assuming 16 hours awake and 2 dreaming in a typical 24-hour period.)
(Boring answer: forget confidence, always act on the assumption that you're awake because it's erring on the side of safety. We'll come back to this thought.)
(Also boring: most lucid dreaming enthusiasts report they do find evidence of wakefulness or dreaminess which dreams never forge. Assume you haven't found any for yourself.)
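A minimal sketch of the arithmetic, for whatever reality check you run. Both reliability numbers here are assumptions for illustration, not claims about actual dreams:

```python
# Prior from hours alone: 16 waking hours vs. 2 dreaming hours.
p_awake = 16 / 18  # ~0.889, hence the "more than 90%?" question

# Assumed reliabilities of some reality check:
p_pass_awake = 0.99     # the check almost always passes while awake
p_pass_dreaming = 0.50  # assumed: dreams forge a pass half the time

# Bayes: P(awake | check passed)
posterior = (p_pass_awake * p_awake) / (
    p_pass_awake * p_awake + p_pass_dreaming * (1 - p_awake))
print(f"{posterior:.3f}")  # ~0.941
```

On these made-up numbers, a check dreams can forge half the time barely clears 90%; the harder the check is to forge, the higher you can go. Which motivates looking for a check dreams can't forge at all: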
Here's my test: I ask my computer to prime-factor a large number (around ten digits) and check it by hand. I can dream many things, but I'm not going to dream that my computer doesn't have the factoring program, nor will I forget how to multiply. And I can't dream that it factored correctly, because I can't factor numbers that big.
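For what it's worth, a minimal sketch of that test in Python. The particular ten-digit number is an arbitrary choice; any product of two largish primes works:

```python
def factor(n: int) -> list[int]:
    """Trial division: slow in general, instant for ~10-digit numbers."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 9_998_000_099  # arbitrary ten-digit example
fs = factor(n)
print(fs)  # [99989, 99991]

# The check-by-hand step: multiplying the factors back together is
# paper-and-pencil work, even though finding them in your head isn't.
product = 1
for f in fs:
    product *= f
assert product == n
```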
You can't outsmart an absolute tendency to rationalize, but you can outsmart a finite one. Which, I suspect, is what we mostly have.
A Disclaimer Regarding Authorship
Before I start on the meat of the sequence (in the next post) I should make clear that not all these ideas are mine. Unfortunately, I've lost track of which ones are and which aren't, and of who proposed the ones which aren't. And the ones that aren't original to me have still gone through me enough to not be entirely as their original authors portrayed them.
If I tried to untangle this mess and credit properly, I'd never get this written. So onward. If you wish to fix some bit of crediting, leave a comment and I'll try to do something sensible.
Beyond Rationalization
Much of what appears here also applies to ordinary mistakes of logic. I'll try to tag such as they go.
The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan. Hopefully this sequence will provide some alternatives.
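One crude alternative, as a minimal sketch on made-up numbers: explicitly discount a reasoned-out conclusion by your estimate that the reasoning itself is sound, falling back toward your prior to the extent it isn't.

```python
def distrust_adjusted(prior: float, reasoned: float, p_sound: float) -> float:
    """Blend a reasoned-out posterior back toward the prior, weighted by
    how much you trust the reasoning that produced it."""
    return p_sound * reasoned + (1 - p_sound) * prior

# An argument moved you from 0.50 to 0.95, but you give the argument
# itself only an 80% chance of being reasoning rather than rationalization:
print(distrust_adjusted(prior=0.50, reasoned=0.95, p_sound=0.80))  # 0.86
```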
Next: Red Flags for Rationalization
8 comments
comment by ChristianKl · 2020-01-13T08:57:20.809Z · LW(p) · GW(p)
It seems to me like there's a high chance that mental models like the dreaming paradox are based on wrong assumptions about how cognition works.
Being in feedback loops with the external world is fundamentally different from dreaming. Schizophrenics similarly get lost in internal abstractions and lose touch with reality.
Building up sanity by untangling abstractions the General Semantics way, and building up self-awareness by meditating, does help with being in touch with the truth in a way that a standard approach of logic doesn't.
comment by DavidA · 2020-01-14T22:21:15.107Z · LW(p) · GW(p)
It has crossed my mind that Bayesian thinking is especially susceptible to rationalization since, when applied to real-world problems, prior probabilities are usually very difficult to establish and very easy to rationalize. I am concerned that this may represent a sufficiently severe flaw in "Bayesianism" that it invalidates the entire concept.
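To make the worry concrete, a minimal sketch of how much the posterior moves with the choice of prior. The likelihood ratio here is arbitrary:

```python
likelihood_ratio = 4.0  # assumed: the evidence favors the hypothesis 4:1

for prior in (0.05, 0.25, 0.50, 0.75):
    odds = (prior / (1 - prior)) * likelihood_ratio
    posterior = odds / (1 + odds)
    print(f"prior {prior:.2f} -> posterior {posterior:.2f}")
# Same evidence, posteriors from ~0.17 to ~0.92 depending on the prior.
```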
I wonder how many readers and commenters here have changed truly deeply held and emotional (e.g. political) views as a result of a process of rational thinking. And if this has not happened, is it evidence of anything interesting?
Replies from: Pattern
comment by Pattern · 2020-01-15T07:15:04.906Z · LW(p) · GW(p)
> and very easy to rationalize.
If you adjust (the results of calculations) away from a conclusion 'because you're biased', then the direction you adjust them in is the way you're biased.
Replies from: DavidA
comment by DavidA · 2020-07-26T01:21:49.928Z · LW(p) · GW(p)
Well, that works if you wait until you get an answer and then make adjustments. But I think most people (hopefully, at least) manage to avoid that degree of bias. There's still the issue of how you determine the prior probabilities in the first place: after all, it's not as if you can't make a good estimate of what sort of numbers are likely to produce the result you want before you ever do any calculations at all.
comment by TurnTrout · 2020-01-13T16:04:43.583Z · LW(p) · GW(p)
> A common delusion among grandiose schizophrenics in institutions is that they are themselves psychiatrists. Consider a particularly underfunded mental hospital, in which the majority of people who "know" themselves to be psychiatrists are wrong. No examination of the evidence will convince them otherwise. No matter how overwhelming, some reason to disbelieve will be found.
> Given this, should any amount of evidence suffice to convince you that you are such a psychiatrist?
> I am not aware of any resolution to this paradox.
I don't think there's a resolution to this kind of thing. "What algorithm produces good epistemics, even though your memories and beliefs may be arbitrarily corrupted?". In the general case, you can't do better than approximating normatively correct reasoning as best you can, given the information you have.
Of course, as actual people, we quickly run into the following issue:
> The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan. Hopefully this sequence will provide some alternatives.
comment by kithpendragon · 2020-01-13T16:01:29.002Z · LW(p) · GW(p)
Re: The Psychiatrist Paradox
This example isn't sitting comfortably in my mind. My thinking is this: First of all (boring answer, see the Dreaming Paradox), you should always assume reality is as you directly observe it to be, for safety reasons.
[Objection] "But in this thought experiment, your observations and your inferences based on those observations are likely to be incorrect due to delusion."
[Response] In that case, there's no hope for being aligned with reality in the first place and you should abandon the project and continue to assume reality is as you observe it just in case you're right. I claim that it would be equally delusional for a real psychiatrist to believe themselves to be a patient.
Moving on:
- Some people are psychiatrists
- Some psychiatrists do work at mental hospitals
[Conclusion] Finding yourself in a mental hospital should increase your probability of being a psychiatrist vs. finding yourself in, say, a grocery store or a gym.
I can think of at least a few pieces of evidence that should be very strong evidence that a person has a profession. e.g.:
- Being on the payroll (physical evidence: paycheck)
- Living somewhere else (physical evidence: driving home)
- Generally being able to leave the facility without being strongly challenged, despite obvious contact with security. [If you have to resort to violence to leave, you are escaping and should rethink your employment status.]
- Most other employees, particularly high-ranking employees, agreeing that they are your co-workers. Particularly, nobody challenging your profession. [Admittedly, this could still be sketchy in a delusional situation, but still not for nothing.]
It's not that experiences like these cannot be imagined by a sufficiently deluded mind, but if we're arguing on those grounds there should be no standard of evidence in general to prove that you are not a mental patient (or a brain in a jar, or simulated for that matter).
[Conclusion] Observing good hard evidence (like being allowed to drive home and cash your paycheck) should increase, not decrease, your confidence that you are a psychiatrist rather than a patient when you find yourself at a mental hospital. In the case that you're still wrong, you're presumably in the best available place to get the help you obviously need and shouldn't worry too much about it.
[Objection] "Shut up and actually do the math. You can assume some sufficiently extreme numbers from the scenario to be able to do some calculations. Maybe you're the only psychiatrist in a ten-thousand patient facility dedicated to delusional disorders."
[Response] Since the prior probability that you are a psychiatrist and not a patient should actually be quite high in either case, doing a Bayesian update on proportions consistent with the scenario should render only a small decrease in confidence. This is still a step in the wrong direction, though. My intuition is that, like e.g. the two-envelope paradox, this will actually end up being the wrong math somehow.
[Objection] "You can't just decide the math is wrong because you don't like the answer."
[Response] Agreed, but an actual psychiatrist who does the math without error and updates on the results to become more confident that they are a patient has updated away from being correctly aligned with reality. The math is wrong, just like Newtonian physics is wrong. It works under a wide variety of circumstances, but when you plug in some extreme numbers of the right kind it just fails to make good predictions.
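For concreteness, a minimal sketch of the naive base-rate math the objection invites. All the numbers here are assumptions in the spirit of the ten-thousand-patient scenario:

```python
real_psychiatrists = 1   # assumed: one actual psychiatrist on site
patients = 10_000
p_delusion = 0.10        # assumed fraction of patients who "know"
                         # themselves to be psychiatrists

wrong_believers = patients * p_delusion   # 1,000
right_believers = real_psychiatrists      # the real one also believes it

p_right = right_believers / (right_believers + wrong_believers)
print(f"{p_right:.4f}")  # ~0.0010: the belief alone is nearly worthless
```

On those assumed numbers the belief itself is almost no evidence at all; the dispute above is over whether paychecks and drives home can swamp that base rate.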
comment by Chris_Leong · 2020-01-14T17:46:32.548Z · LW(p) · GW(p)
Maybe you can't dream the actual process of factoring a large number, but you can dream of having just finished such a factoring, with the result having come out correct.
comment by Pattern · 2020-01-15T07:11:40.454Z · LW(p) · GW(p)
> What is Rationalization and Why is it Bad?
What is Rationalization and What are the benefits of identifying it?
> It does not provide a lot of tools for you to use in protecting yourself from rationalization.
I look forward to this. Wait, my rationalization, or others'? Both?
> If rationalization looks just like logic, can we ever escape Cartesian Doubt?
What is Cartesian Doubt? (If rationalization looks just like logic, how can we tell the difference?)
> Given that any evidence of awakeness is a thing that can be dreamed,
Sort of. Have you ever read a book in a dream? (There's evidence of dreaming that isn't produced by reality; in practice this may be a matter of rates.)
> The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan.
Been wondering about how this could be programmed (aside from 'run the calculation 3 times').
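For reference, even that dismissed baseline can be written down. A minimal sketch of "run it 3 times and take the majority", with a made-up unreliable step standing in for fallible reasoning:

```python
import random
from collections import Counter

def unreliable_step(a: int, b: int, p_error: float = 0.1) -> int:
    """Stand-in for fallible reasoning: usually right, sometimes off by one."""
    result = a + b
    if random.random() < p_error:
        result += random.choice((-1, 1))
    return result

def majority_of(runs: int, a: int, b: int) -> int:
    """Rerun the unreliable step and trust the most common answer."""
    counts = Counter(unreliable_step(a, b) for _ in range(runs))
    return counts.most_common(1)[0][0]

print(majority_of(3, 2, 2))  # almost always 4; a 10% per-run error rate
                             # drops below ~3% for the majority answer
```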
> Hopefully this sequence will provide some alternatives.
Thanks for making this!