A List of Nuances
post by abramdemski · 2014-11-10T05:02:20.712Z · LW · GW · Legacy · 24 comments
Abram Demski and Grognor
Much of rationality is pattern-matching. An article on lesswrong might point out a thing to look for. Noticing this thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of yin/yang correlated mush, in reality they have a more complicated structure; some things may be similar, but if possible, try to focus on the complex interrelationships.
- Map vs. Territory
- Eliezer’s sequences use this as a jumping-off point for discussion of rationality.
- Many thinking mistakes are map vs. territory confusions.
- A map and territory mistake is a mix-up of seeming vs being.
- Humans need frequent reminders that we are not omniscient.
- Cached Thoughts vs. Thinking
- This document is a list of cached thoughts.
- Clusters vs. Properties
- These words could be used in different ways, but the distinction I want to point at is that of labels we put on things vs actual differences in things.
- The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.
- If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)
- Categories are inference applicability heuristics; ruling X an instance of Y without expecting novel inferences is cargo cult classification.
- Syntax vs. Semantics
- The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.
- Semantics vs. Pragmatics
- The semantics is the literal contents of a message, whereas the pragmatics is the intended result of conveying the message.
- An example of a message with no semantics and only pragmatics is a command, such as “Stop!”.
- Almost no messages lack pragmatics, and for good reason. However, if you seek truth in a discussion, it is important to foster a willingness to say things with less pragmatic baggage.
- Usually when we say things, we do so with some “point” which is beyond the semantics of our statement. The point is usually to build up or knock down some larger item of discussion. This is not inherently a bad thing, but has a failure mode where arguments are battles and statements are weapons, and the cleverer arguer wins.
- The meaning of a thing is the way you should be influenced by it.
- Object-level vs. Meta-level
- The difference between making a map and writing a book about map-making.
- A good meta-level theory helps get things right at the object level, but it is usually impossible to get things right at the meta level before you’ve made significant progress at the object level.
- Seeming vs. Being
- We can only deal with how things seem, not how they are. Yet, we must strive to deal with things as they are, not as they seem.
- This is yet another reminder that we are not omniscient.
- If we optimize too hard for things which seem good rather than things which are good, we will get things which seem very good but which may only be somewhat good, or even bad.
- The dangerous cases are the cases where you do not notice there is a distinction.
- This is why humans need constant reminders that we are not omniscient.
- We must take care to notice the difference between how things seem to seem, and how they actually seem.
- Signal vs. Noise
- Not all information is equal. It is often the case that we desire certain sorts of information and desire to ignore other sorts.
- In a technical setting, this has to do with the error rate present in a communication channel; imperfections in the channel will corrupt some bits, making a need for redundancy in the message being sent.
- In a social setting, this is often used to refer to the amount of good information vs irrelevant information in a discussion. For example, letting a mediocre writer add material to a group blog might increase the absolute amount of good information, yet worsen the signal-to-noise ratio (a toy calculation follows this list).
- Attention is a scarce resource; yes, everyone has something to teach you, but some people are much more efficient sources of wisdom than others.
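A toy calculation of the group-blog case above, with invented post counts purely for illustration: adding a mediocre contributor can raise the absolute amount of good material while still lowering the signal-to-noise ratio.

```python
# Toy numbers (invented): a group blog before and after adding a mediocre writer.
good_before, noise_before = 40, 10   # 40 good posts, 10 filler posts
good_added, noise_added = 5, 20      # the new writer adds a little signal and a lot of noise

snr_before = good_before / noise_before                                 # 4.0
snr_after = (good_before + good_added) / (noise_before + noise_added)   # 45 / 30 = 1.5

print(f"good posts: {good_before} -> {good_before + good_added}")  # absolute signal goes up
print(f"signal-to-noise: {snr_before:.1f} -> {snr_after:.1f}")     # but the ratio drops
```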
- Selection Effects
- Filtered evidence.
- In many situations, if we can present evidence to a Bayesian agent without the agent knowing that we are being selective, we can convince the agent of anything we like. For example, if I want to convince you that smoking causes obesity, I could find many people who became obese after they started smoking.
- The solution to this is for the Bayesian agent to model where the information is coming from. If you know I am selecting people based on this criterion, then you will not take my examples as evidence of anything, because the evidence has been cherry-picked. (A minimal numerical sketch follows this list.)
- Most of the information you receive is intensely filtered. Nothing comes to your attention with a good conscience.
- The silent evidence problem.
- Selection bias need not be the result of purposeful interference as in cherry-picking. Often, an unrelated process may hide some of the evidence needed. For example, we hear far more about successful people than unsuccessful ones. It is tempting to look at successful people and attempt to draw conclusions about what it takes to be successful. This approach suffers from the silent evidence problem: we also need to look at the unsuccessful people and examine what is different about the two groups.
- Observer selection effects.
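A minimal sketch of the cherry-picking point, with invented likelihoods: a Bayesian who treats each reported case as a random observation updates strongly, while one who models the selection process (the reporter can always find such cases) barely updates at all.

```python
# Minimal sketch (invented likelihoods). H = "smoking causes obesity".
# A motivated reporter shows you 10 smokers who became obese.

prior_odds = 1.0  # start at even odds on H, purely for illustration

# Naive update: treat each shown case as an independent random observation,
# each somewhat more likely under H than under not-H.
lr_per_case = 0.3 / 0.2                       # P(case | H) / P(case | not H) = 1.5
naive_odds = prior_odds * lr_per_case ** 10   # ~57.7 : 1 -- wildly overconfident

# Filter-aware update: the reporter searches a huge population and can produce
# 10 such cases whether or not H is true, so the report itself is uninformative.
lr_report = 1.0                               # P(report | H) ~= P(report | not H)
aware_odds = prior_odds * lr_report           # 1.0 : 1 -- essentially no update

print(f"naive posterior odds: {naive_odds:.1f} : 1")
print(f"filter-aware odds:    {aware_odds:.1f} : 1")
```

The point is that what needs a likelihood ratio is the report itself, not each reported case; once the filter is modeled, the report stops being evidence.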
- What You Mean vs. What You Think You Mean
- Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.
- We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.
- What You Mean vs. What the Others Think You Mean
- What You Optimize vs. What You Think You Optimize
- Evolution optimizes for reproduction but in doing so creates animals with a variety of goals which are correlated with reproduction.
- Extrinsic motivation is weaker than intrinsic motivation.
- The people who value practice for its own sake do better than the people who only value being good at what they’re practicing.
- “Consequentialism is true, but virtue ethics is what works.”
- Stated Preferences vs. Revealed Preferences
- Revealed preferences are the preferences we can infer from your actions. These are usually different from your stated preferences.
- X is not about Y:
- Food isn’t about nutrition.
- Clothes aren’t about comfort.
- Bedrooms aren’t about sleep.
- Marriage isn’t about love.
- Talk isn’t about information.
- Laughter isn’t about humour.
- Charity isn’t about helping.
- Church isn’t about God.
- Art isn’t about insight.
- Medicine isn’t about health.
- Consulting isn’t about advice.
- School isn’t about learning.
- Research isn’t about progress.
- Politics isn’t about policy.
- Going meta isn’t about the object level.
- Language isn’t about communication.
- The rationality movement isn’t about epistemology.
- Everything is actually about signalling.
- Humans Are Not Automatically Strategic
- Never attribute to malice that which can be adequately explained by stupidity. The difference between stated preferences and revealed preferences does not indicate dishonest intent. We should expect the two to differ in the absence of a mechanism to align them.
- Hidden Motives vs. Innocent Failure
- People, ideas, and organizations respond to incentives.
- Evolution selects humans who have reproductively selfish behavioral tendencies, but prosocial and idealistic stated preferences.
- Social forces select ideas for virality and comprehensibility as opposed to truth or even usefulness.
- Organizations are by default bad at being strategic about their own survival, but the ones that survive are the ones you see.
- What You Achieve vs. What You Think You Achieve
- Most of the consequences of our actions are totally unknown to us.
- It is impossible to optimize without proper feedback.
- What You Optimize vs. What You Actually Achieve
- Consequentialism is more about expected consequences than actual consequences.
- What You Seem Like vs. What You Are
- You can try to imagine yourself from the outside, but no one has the full picture.
- What Other People Seem Like vs. What They Are
- When people assume that they understand others, they are wrong.
- What People Look Like vs. What They Think They Look Like
- People underestimate the gap between stated preferences and revealed preferences.
- What Your Brain Does vs. What You Think It Does
- You are running on corrupted hardware.
- The brain’s machinations are fundamentally social; it automatically does things like signal, save face, etc., which distort the truth.
- The reverse of stupidity is not intelligence.
- Knowing that you are running on corrupted hardware should cause skepticism about the outputs of your thought-processes. Yet, too much skepticism will cause you to stumble, particularly when fast thinking is needed.
- Producing a correct result plus justification is harder than producing only the correct result.
- Justifications are important, but the correct result is more important.
- Much of our apparent self-reflection is confabulation, generating plausible explanations after the brain spits out an answer.
- Example: doing quick mental math. If you are good at this, attempting to explicitly justify every step as you go would likely slow you down.
- Example: impressions formed over a long period of time. Wrong or right, it is unlikely that you can explicitly give all your reasons for the impression. Requiring your own beliefs to be justifiable would preempt impressions that require lots of experience and/or many non-obvious chains of subconscious inference.
- Impressions are not beliefs, but they are always useful data.
- Clever Argument vs. Truth-seeking; The Bottom Line
- People believe what they want to believe.
- Believing X for some reason unrelated to X being true is referred to as motivated cognition.
- Giving a smart person more information and more methods of argument may actually make their beliefs less accurate, because you are giving them more tools to construct clever arguments for what they want to believe.
- Your actual reason for believing X determines how well your belief correlates with the truth.
- If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker. (A small sketch after this list illustrates why.)
- If you believe true things when doing so improves your life, that is no credit to you at all. Everyone does that.
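A small sketch of why a motivated arguer's words carry so little information, using made-up likelihoods: if an impressive-sounding case for X would have been produced whether or not X is true, hearing it barely moves the odds.

```python
# Sketch with assumed numbers: how much should a strong-sounding argument for X move you?

def posterior_odds(prior_odds, p_arg_if_true, p_arg_if_false):
    """Odds on X after hearing a strong-sounding argument for X."""
    return prior_odds * (p_arg_if_true / p_arg_if_false)

prior = 1.0  # even odds on X

# A truth-seeking arguer: a strong case is much easier to build when X is actually true.
print(posterior_odds(prior, 0.8, 0.1))   # 8.0  -- substantial update

# A motivated arguer: they would have produced an equally strong-sounding case either way,
# so the likelihood ratio is close to 1 and the bottom line tells you almost nothing.
print(posterior_odds(prior, 0.8, 0.7))   # ~1.1 -- barely any update
```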
- Lumpers vs. Splitters
- A lumper is a thinker who attempts to fit things into overarching patterns. A splitter is a thinker who makes as many distinctions as possible, recognizing the importance of being specific and getting the details right.
- Specifically, some people want big Wikipedia and TVTropes articles that discuss many things, and others want smaller articles that discuss fewer things.
- This list of nuances is a lumper attempting to think more like a splitter.
- Fox vs. Hedgehog
- “A fox knows many things, but a hedgehog knows One Big Thing.” Closely related to a splitter, a fox is a thinker whose strength is in a broad array of knowledge. A hedgehog is a thinker who, in contrast, has one big idea and applies it everywhere.
- The fox mindset is better for making accurate judgements, according to Tetlock.
- Traps vs. Gardens
- Well-kept gardens die by pacifism.
- Conversations tend to slide toward contentious and useless topics.
- Societies tend to decay.
- Systems in general work poorly or not at all.
- Thermodynamic equilibrium is entropic.
- Without proper institutions being already in place, it takes large amounts of constant effort and vigilance to stay out of traps.
- From the outside of a broken Molochian system it is easy to see how to fix it, but it cannot be fixed from the inside.
24 comments
comment by GMHowe · 2014-11-14T03:25:57.478Z · LW(p) · GW(p)
> Everything is actually about signalling.
Counterclaim: Not everything is actually about signalling.
Almost everything can be pressed into use as a signal in some way. You can conspicuously overpay for things to signal affluence or good taste or whatever. Or you can put excessive amounts of effort into something to signal commitment or the right stuff or whatever. That almost everything can be used as a signal does not mean that almost everything is being used primarily as a signal all of the time.
Signalling only makes sense in a social environment, so things that you would do or benefit from even if you were in a nonsocial environment are good candidates for things that are not primarily about signalling. Things like eating, wearing clothes, sleeping areas, medical attention and learning.
Some of the items from the list of X is not about Y:
"Food isn’t about nutrition. Clothes aren’t about comfort. Bedrooms aren’t about sleep. Laughter isn’t about humour. Charity isn’t about helping. Medicine isn’t about health. Consulting isn’t about advice. School isn’t about learning. Research isn’t about progress. Language isn’t about communication."
All these are primarily about something other than signalling. Yes they can be "about" signalling some of the time to varying degrees but not as their primary purpose. (At least not without becoming dysfunctional.)
comment by Error · 2014-11-10T17:13:09.437Z · LW(p) · GW(p)
> People underestimate the gap between stated preferences and revealed preferences.

> Everything is actually about signalling.
These two put together invite in me a sort of dysfunction. I have a stated preference for my stated preferences matching my revealed ones, i.e. genuine honesty over stated-preference-as-signalling. Yet it is highly likely that this stated preference itself is 1. inaccurate, and 2. signalling. And I treat both consistency and honesty as something like terminal values, so I find this situation unacceptable. That seems to leave me four options:
- Adjust my stated preferences to match my revealed ones. Abandon my ideas of what's good and right in favor of whatever the monkey brain likes.
- Rigidly adhere to my stated preferences, even when that leaves me unhappy due to not satisfying what (would have been) my revealed ones.
- Stop valuing intellectual integrity; accept hypocrisy and doublethink. Be happy.
- Morbidly reflect on how fucked I am.
All of these alternatives seem horrible to me!
> The brain fills in a false memory of what you meant without asking for permission.
Reference? This terrifies me if true.
↑ comment by [deleted] · 2014-11-12T11:31:47.580Z · LW(p) · GW(p)
(2) and (4) are the correct approaches. "Revealed preferences" are, by and large, just the balance of the monkey-brain's incentives, and scarcely yield any useful information or ordering about the choice you were originally trying to make anyway. Throw them out. You're allowed to be stressed-out about how "inhuman" it feels to throw them out, but throw them the hell out! Your conscious self will thank you later.
You are also allowed to optimize your life for taking care of the monkey-brain's wants and needs without impacting the goals of the conscious self.
You are also allowed to deliberately choose which desires and goals get classified as "monkey brain" and which ones as "the real me". After all, in truth, everything comes at least partially from the monkey-brain and everything goes, at least at the last step before action, through the conscious self. Any apparent "division" into "several people" is just your model of what your brain is doing. The real you can eat cookies, wear leather jackets, and have sex sometimes -- oy gevalt, being a good person does not mean being a robot.
↑ comment by abramdemski · 2014-11-11T02:43:54.855Z · LW(p) · GW(p)
I advise something between path 1 and path 2. You fool yourself, saying one thing and doing another; but you legitimately want to be consistent (because it is more convincing if you are). So, once you observe the inconsistency, you react to it. In the objectivist crowd, this has resulted in honesty about selfish behavior. In the lesswrong crowd, this has more often resulted in the dominance of the idealistic goals which previously served only as signalling.
Actually, in practice, 2 is fairly good signalling! It's a costly signal of commitment to altruism. This is basically the only reason the rationalist community can socially survive, I guess. :p
3 is also perfectly valid in some sense, although it's much further from the lesswrong aesthetic. But, see A Dialog On Doublethink. And remember the Litany of Gendlin.
4 is also a necessary step I think, to see the magnitude of the problem. :)
> The brain fills in a false memory of what you meant without asking for permission.

> Reference? This terrifies me if true.
Again: good terror, justified terror.
I don't have a reference, just an observation. I think if you observe you will see that this is true. It also fits with what we hear from stuff like The Apologist and the Revolutionary and prettyrational memes. It makes social sense that we would do this: the best way to fool others into thinking we meant X is to believe it ourselves. This helps us appear to win arguments (or at least save face with a less severe loss) and even more importantly helps us to appear to have the best of intentions behind our actions. So, it makes a whole lot of sense that we would do this.
People who seem not to do it are mostly just more clever about it. However, the more everyone is aware of this, the less people can get away with it. If you want to climb out of the gutter, you have to get your friends interested in climbing out too -- or find friends who already are trying.
(Once you've convinced yourself it's worth doing!)
↑ comment by abramdemski · 2014-11-11T03:38:58.045Z · LW(p) · GW(p)
> People who seem not to do it are mostly just more clever about it.
Hmm. This statement is troublesome because it falls into the category of "I expect you not to see evidence for X in case Y, so here's an excuse ahead of time!" type arguments.
And the rest of the paragraph is an argument that you should not only believe my claim, but convince your friends, too!
How convenient. :p
↑ comment by William_Quixote · 2014-11-12T13:59:48.090Z · LW(p) · GW(p)
I would expect a witch to deny that they were signaling "not-witchness"
↑ comment by abramdemski · 2014-11-14T08:48:03.271Z · LW(p) · GW(p)
I would expect a witch to preemptively accuse herself so that no one else can gain status by doing so.
↑ comment by Richard_Kennaway · 2014-11-17T17:37:12.497Z · LW(p) · GW(p)
> All of these alternatives seem horrible to me!
The good news is that there are others. Stated and "revealed" preferences don't come out of nowhere, take it or leave it, choose one or the other. I use the scare quotes because the very name "revealed preference" embeds into the vocabulary an assumption, a whole story, that the "revealed" preference is in fact a revelation of a deeper truth. Cue another riff on this.
No, call revealed preferences merely what they visibly are: your actions. When there is a conflict between what you (this is the impersonal "you") want to do and what you do, the thing to do is to find the roots of the conflict. What is actually happening when you do the thing you would not, and not the thing that you would?
Some will answer with this again, but real answers to questions about specific instances are not to be found in any story. Something happened when you acted the way you did not want to. There are techniques for getting at real answers to such questions, involving various processes of introspection and questioning ... which I'm not going to try to expound, as I don't think I can do the subject justice.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-11-12T07:08:35.246Z · LW(p) · GW(p)
Motte-and-bailey seems like it should be under What You Mean vs. What You Think You Mean
↑ comment by OpenThreadGuy · 2014-11-13T13:05:00.180Z · LW(p) · GW(p)
I agree that it makes sense there.
The reason I put it where I did is that belief-edifice-memeplex-paradigm-framework-system-movement-whatevers have members who say different things. Some members say things that are more like a motte and others say things that are more like a bailey. Even if the individual members consistently claim one or the other, this looks suspiciously like a group responding to incentives by committing the fallacy.
comment by fortyeridania · 2014-11-10T07:35:48.405Z · LW(p) · GW(p)
I find this outline helpful. I do however have a quibble.
> If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.
This seems slightly inaccurate. It would imply that a truth-seeking judge would decide cases just as well (or better) without hearing from the lawyers as with, because lawyers are paid to advocate for their clients. More accurate would be:
> If you believe X because you want to, your belief in X is devoid of informational content about X and should properly be ignored by a truth-seeker.
If you believe X for reasons unrelated to X being true, your testimony becomes worthless because your belief in X is not correlated with X. But arguments for X are another matter.
Example: Alice says, "There is no largest prime number," and backs it up with an argument. You are now in possession of two pieces of evidence for Alice's claim C:
(1) Alice's argument. Call this "Argument." It is evidence in the sense that p(C|argument) > p(C).
(2) Alice's own apparent belief that C. Call this "Alice." It is evidence in the sense that p(C|Alice) > p(C).
Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to. If the claim in the post is correct, then both items of evidence are zeroed out, such that:
(3) p(C) = p(C|Argument) = p(C|Alice)
Whereas the correct thing to do is to zero out "Alice" but not "Argument", thus (a numerical sketch follows below):
(4) p(C|Alice) = p(C)
(5) p(C|Argument) > p(C)
*Edited for formatting
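A numerical rendering of (4) and (5) above, with invented likelihoods chosen only for illustration: learning that Alice is a paid advocate zeroes out the evidential weight of her testimony, but a checkable argument keeps its force.

```python
# Invented numbers illustrating (4) and (5). C = "there is no largest prime".

def bayes(prior, p_e_given_c, p_e_given_not_c):
    """Posterior P(C | E) from the prior P(C) and the two likelihoods of evidence E."""
    return p_e_given_c * prior / (p_e_given_c * prior + p_e_given_not_c * (1 - prior))

prior = 0.5

# "Alice": once we know she is paid to assert C regardless of its truth,
# P(asserts C | C) == P(asserts C | not C), so the posterior equals the prior.
print(bayes(prior, 0.99, 0.99))   # 0.5   -> p(C | Alice) = p(C)

# "Argument": a proof we can check ourselves. A valid-looking proof is far more
# likely to exist when C is true than when it is false.
print(bayes(prior, 0.90, 0.05))   # ~0.95 -> p(C | Argument) > p(C)
```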
↑ comment by abramdemski · 2014-11-11T03:28:54.252Z · LW(p) · GW(p)
I think this is an interesting question. If the arguer is cherry-picking evidence, we should ignore that to a large degree. We are often even justified in updating in the opposite direction of a motivated argument. In the pure mathematical case, it doesn't matter anymore, so long as we are prepared to check the proof thoroughly. It seems to break down very quickly for any other situation, though.
In principle, the Bayesian answer is that we need to account for the filtering process when updating on filtered evidence. This collides with logical uncertainty when "evidence" includes logical/mathematical arguments. But, there is a largely separate question of what we should do in practice when we encounter motivated arguments. It would be nice to have more tools for dealing with this!
↑ comment by fortyeridania · 2014-11-12T05:50:06.362Z · LW(p) · GW(p)
Yes, this is an interesting issue. One unusual perspective (at least, I have not seen anyone advocate it seriously elsewhere) is that mentioned by Tyler Cowen here. The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.
↑ comment by Richard_Kennaway · 2014-11-12T21:10:01.030Z · LW(p) · GW(p)
> The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.
Or their position on the issue could be motivated by some other issue you don't even know is on their agenda.
Or...pretty much anything.
↑ comment by CCC · 2014-11-12T09:24:48.727Z · LW(p) · GW(p)
Hmmm. It's better evidence that they want you to believe the claim is correct.
For example, I might cherry-pick evidence to suggest that anyone who gives me $1 is significantly less likely to be killed by a crocodile. I don't believe that myself, but it is to my advantage that you believe it, because then I am likely to get $1.
↑ comment by dxu · 2014-11-10T21:17:54.844Z · LW(p) · GW(p)
> Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to.
Are we to assume that Alice would have presented an equally convincing-sounding argument for the opposite side had that been her boss' demand, or would she just have asserted the statement "There is a largest prime number" without an accompanying argument?
↑ comment by fortyeridania · 2014-11-10T22:20:41.254Z · LW(p) · GW(p)
Hmm... I am not sure. Because the value of her testimony (as distinguished from her argument) is null whichever side she supports, I am not sure the answer matters. But I could be wrong. Does it matter?
↑ comment by dxu · 2014-11-11T02:05:14.247Z · LW(p) · GW(p)
Well, I agree that the value of Alice's testimony is null. However, depending on the answer to my original question, the value of her argument may also become null. More specifically, if we assume that Alice would have made an argument of similar quality for the opposing side had it been requested of her by her boss, then her argument, like her testimony, is not dependent upon the truth condition of the statement "There is no largest prime number", but rather upon her boss' request. Assuming that Alice is a skilled enough arguer that you cannot easily detect any flaws in her argument, you would be wise to disregard her argument the moment you figure out that it was motivated by something other than truth.
Note that for a statement like "There is no largest prime number", Alice probably would not be able to construct convincing arguments both for and against, simply because it is a fairly easy claim to prove as far as claims go. However, for a more ambiguous claim like "The education system in America is less effective than the education system in China", it is very possible for Alice's argument to sound convincing and yet be motivated by something other than truth, e.g. perhaps Alice harbors strong anti-American sentiments. In this case, Alice's argument can and should be ignored because it is entangled not with reality, but rather with Alice's own disposition.
This advice does not apply to those who happen to be logically omniscient.