Why is frame control central to this post? While it explains frame control well, the focus seems to be on people consciously or unconsciously manipulating one another in harmful ways. How to avoid being manipulated, gaslighted, deceived, etc., is an important topic to discuss and a valuable skill to have. And this post offers good advice on it (whether or not it intended to). But it could've done so without bringing up the concept of frame control.
LW 2.0 is a good example of trying to fix something that isn't broken and ending up breaking it further.
Thanks for trying it out. Hermes is still a work in progress and one of our top priorities now is improving responsiveness.
Looking forward to helping you out!
I recently launched a new service called Hermes. It connects users with dating experts for live texting advice. It runs on a unique platform designed to greatly simplify sharing and discussing text conversations. Since modern dating is changing so rapidly, especially with the rise of online dating apps and a growing population of young people glued to their phones, helping people improve their texting can greatly improve their dating life. I've been a software developer and dating coach for over 10 years so this is sort of my passion project.
I'd be happy to get some trial users. General feedback is greatly appreciated too.
In the case of voting for Trump and writing the note in the Wailing Wall, I think there's little to no risk of having it change your prior beliefs or weaken your self-deception defense mechanisms. They both require you to be dishonest about something that clashes with so many other strong beliefs that it's highly unlikely to contaminate your belief system. The more dangerous lies are the ones that don't clash as much with your other beliefs.
How's that related?
Inter alia, yes. But the step from "rationality is supposed to reduce X" to "I will act as if X has been reduced to negligibility" is not a valid one.
Well, isn't that a good technique to reduce X? Obviously not in all cases, but I think it's a valid technique in the cases we're talking about.
If you value your belief that there are no ghosts, then it's irrational to be scared of ghosts.
Are you talking about "real" ghosts? You shouldn't be afraid of real ghosts because they don't exist, not because you value your belief that there are no ghosts. Why should beliefs have any value for you beyond their accuracy?
Funny you mention that anecdote, because I actually wrote it: http://lesswrong.com/lw/1l/the_mystery_of_the_haunted_rationalist/w9
Human brains aren't very good at detaching themselves from their actions
Isn't that what rationality is supposed to reduce?
The government picks arbitrary ages for when an individual has the mental capacity to make certain decisions, like drinking alcohol or having sex. But not everyone mentally matures at the same rate. It'd be nice to have an institution that allows minors with good backgrounds and who pass certain intelligence/rationality tests to be exempt from these laws.
observe the features common to the intuitions in different domains, and abstract the common features out.
Have you explicitly factored these out? If so, what are some examples?
I agree
I think it's because system 1 and system 2 update differently. System 1 often needs experiential evidence in order to update, while system 2 can update using logical deduction alone. Doing a bunch of research is effective in updating system 2, but less so system 1. I'd guess that if you continue being positive and don't experience any downside to it, then eventually your system 1 will update.
I think interviewers rely more on their intuition to evaluate candidates for managerial positions. For purely engineering positions, a longer, more systematic evaluation is needed.
Yes, the way I wrote the scenario makes it seem like he deliberately got himself into an awkward situation for little benefit in return. And I see how this weakens the scenario as an illustration of the problem. So let me try improving the scenario:
Imagine he determined that refraining from disclosing the information to his mother was ethical. A week later, he finds himself in a similar situation. He wants to drink a couple of beers, but knows that by the time he finishes, he'll need to drive his mother. This time he has no qualms about drinking, making the beer-drinking pleasure worth the consequences.
He might then profitably spend those two hours examining the underlying problem: why he chose to have those beers.
Why would this be a problem?
BTW, his mother already knows he's been drinking.
I didn't make it clear, but in the scenario she doesn't know.
So the question is, when your goals conflict with another's, when is it right to use force or subterfuge to get your way?
In the scenarios with the 5-year-old and the mother, the protagonist's goal conflicts with what he deems to be an irrational goal. From his perspective, if they were more rational, their goals wouldn't be conflicting in the first place. So two questions arise: 1) can he make that judgement call on their rationality, and 2) can he remove their ability to act as agents because of his assessment?
you're treating them as an agent, but an adversarial one.
But if you thought of them as having agency, you'd want to respect their desires and therefore disclose the information, possibly hoping you'd come to some sort of compromise.
Reliable/predictable isn't high status.
The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents.
I've noticed that people are angrier at behaviors they can't explain. The anger subsides when they learn about the motives and circumstances that led to the behavior. If non-agents are supposed to be less predictable, I'd guess we're more inclined to judge/blame them.
After reading the article, it seems like their conclusion is still debated. I'm also not convinced, although I have updated toward the general-purpose mechanism hypothesis being less likely correct. There needs to be an experiment with a context that is non-social but occurs frequently in people's lives. For instance, "if you arrive at the airport less than 30 minutes before your departure, you are not able to check in." Then compare results with those from people who have never been on a plane before.
Edit: I realized my example can also be explained by the "cheater detector module". In fact, any question whose context is a human-imposed rule can be explained the same way. A better question would be "if your car runs out of fuel, your car cannot be driven."
Ah, I didn't know about holistic/analytical reasoning before. With the intuition/logical thinking styles I had in mind, I wouldn't have predicted that intuition thinkers would ignore situational information in favor of personality information. This may be more of a cultural difference.
Yes, those are synonymous. I should clarify that.
Just curious, did you have any explicit beliefs that made you ignore your intuition?
Good observations.
As an intuition-dominant thinker, how did you improve your logical side?
I first discovered these recurring tendencies in myself and in others. Then I used inferences from what's scientifically known about intuition to explain how the nature of intuition might cause these tendencies in intuition thinkers.
I recall seeing research showing that intuitive thinkers performed better at math/logic problems if they were word problems involving social settings, e.g. the amount of soda to buy for a party or people sitting next to each other.
I would explain this study's result using the following inferential steps:
1) People (some more than others) have a lot of experience being in social situations
2) It's not uncommon for people in social situations to face problems that can be easily formalized as math problems, e.g. how to split the bill at a restaurant or the examples mentioned in the study.
3) Intuition uses past experiences to solve problems
4) Intuition thinkers have probably trained their intuition to be able to solve problems found in social situations.
5) Intuition thinkers are more likely to be better at solving math/logic problems found in social situations than math/logic problems found in settings they don't have much experience with, yet have enough background knowledge to solve.
In the post, I also inferred that intuition thinkers have a hard time corresponding words in math problems with formulas they know. When the words involved are words they've corresponded to formulas in the past, they're more likely to make the right correspondence again.
If I read about the experiment before knowing the results, I wouldn't be too surprised if intuition thinkers beat out logical thinkers.
--
This is a good example of how I explained the tendencies in the rest of the post. I think the step that demands the most evidence is (3), but I felt there was enough scientific backing for it that lesswrongers know about. I believe the other inferences are plausible enough that leaving out additional evidence doesn't greatly weaken the argument.
That said, I may well have made inferences in the post that, without additional evidence, do greatly weaken my argument. I'd appreciate having any such instances pointed out.
A nice scientific approach!
Done
Agreed. I should take it out.
I think you're right. I was using prior knowledge to interpret the argument correctly. The ambiguity in the language definitely makes my example weaker. I tried empathizing with the commenter as an intuition thinker to figure out which mistake most likely caused the confusion. I still think the commenter most likely didn't pay attention to those words, but it's also quite likely he understood the technically correct alternative interpretation.
...picking up of Russian Norms task
Intuition thinkers probably wouldn't have the foresight to learn Russian norms. However, they also wouldn't make a strict rule like "always smile". Even if they normally smiled, their intuition would be thrown off in Russia and would probably shift to a more optimal strategy. Without a strict rule, they'd also be more attuned to the immediate environment and intuit that smiling isn't customary.
Agreed and added a link with a resource I found with a few minutes of googling.
I think the answer may change depending on age. Older intuition thinkers probably have deeper ingrained habits and less motivation.
I am not convinced that it's easy, or even really possible, to change from one thinking style to the other. Everything else I've read suggests this sort of cognitive leaning is largely innate.
I too think it's uncommon to completely change thinking styles, but I do believe it's possible to improve the weaker one. I also suspect one thinking style struggles more to develop the weaker thinking style, but don't know which one.
Do you have anything other than your own experience to suggest otherwise?
Being around many people who are into self-development, I often see logical thinkers being more intuitive and vice versa. No one makes a complete 180, but incremental improvements are common.
I am having some difficulty understanding the "Ignoring your emotions" section, much less seeing the use of "fixing" this "failing".
The idea is that feeding emotional data to your intuition can help you better understand your own preferences, understand why you experience certain emotions, and learn how to elicit certain emotions in yourself and others. If you're not an emotional person, this is probably not a big concern.
I will argue that some biases are the consequence of structural properties of the brain, which 'cannot' be affected by evolution
The biases are indirectly affected by evolution. The brain evolved "faulty thinking" because natural constraints put a premium on speed and efficiency over accuracy - especially when sometimes-accurate beliefs are sufficient.
http://blogs.discovermagazine.com/badastronomy/2011/07/15/disturbing-face-distortion-illusion/
Glad you liked the post.
This is one of the very few places where I'm not sure we agree. I agree, someone who is really different from others will have a harder time getting the empathy ball rolling. But I still think self-understanding is utterly critical. It's the only way you can control for projection.
I agree, I should've emphasized that finding a proxy is supplementary to self-understanding, not an alternative.
There's also the fact that some people identify with being unusual or different, but such people usually exaggerate their differences more than is justified.
Very much agree. This issue is especially prominent in societies that idealize individualism. Looking back, I think I should've edited out the caveat, not because I disagree with my past self, but because it may inhibit some readers from questioning their self-proclaimed differences.
I think that category of people is considered low status on average, and thus not met with much sympathy. Maybe they have a small circle of people enabling their bad habits, but I suspect the strongest force is rationalization.
Sure, it may have had a small (overstatement) effect, but it was worth it.
Right, but more specifically, the annoying parts are their denial of the problem and reluctance to improve. We'd all be a lot more sympathetic otherwise.
Running around the block is a good start :)
I might write a follow-up post with the kind of advice you're looking for.
There are "empathy challenges" all around you. Whenever you observe or interact with someone, really try to understand why they behaved the way they did - feel it on a gut level. Feeling confident about your conclusions is key. Keeping a checklist similar to the one in the post is helpful to keep in mind when confronted with these challenges.
However, without actually interacting with people, entering relationships or reading about social dynamics, your models of people won't be entangled with reality. My advice is more about how to be an active learner given you are doing these things.
You can do that with a lot of topics on LW...
Explaining her flaws in such a scientific, matter-of-fact way shows how emotionally distant he was. She probably felt like the guy she loved just dropped off an eviction notice.
It's a good exercise in finding your true objections.
Unless the rejection is accompanied by occasional successes, this may be a good way to lower your self-esteem. The trick is learning to accept rejection - using each opportunity to succeed and learning from each failed attempt.
I find that intelligence is positively correlated with the amount one spends thinking about intelligence.
You'll feel more uneasy when someone's flirting.
Dating is for people who have trouble hooking up without making their intentions explicit.
Some thoughts regarding the difference between level 2 and 3:
Seems like a level 3 understanding necessitates an insight-producing ability (i.e. ability to improve existing models) -- otherwise your models wouldn't regenerate if destroyed. The question is why your insights with a level 2 understanding aren't evidence of a level 3 understanding. Or whether it's even possible to have insights with a level 2 understanding.
If we're able to regenerate a model, we obviously have model-making abilities. But isn't the same happening when you draw connections between your models? The moment you realize two or more models are connected, you've added to your model of reality. Neither model predicted their relationship with the other, your insight connected the two, improving your older model.
How's level 3 different from level 2?