Comments

Comment by Peter_Lambert-Cole on Avoid misinterpreting your emotions · 2012-02-15T03:20:38.247Z · LW · GW

I try to treat my emotions in the following way: emotions just *are*, and as such they carry information only about emotions themselves. They have meaning only in relation to other emotions, both mine and those of others. I've found that the most effective approach is to consistently take the outside view. Once I made that leap, it became much easier to apply rationality in mastering my emotions for my own benefit. I can collect empirical data about them and make predictions about them. I can devise strategies to change them and then assess whether those strategies work. If you feel sad and it's raining today, you might infer that rain leads to an increased probability of sadness. If you feel excited about a job opportunity, you might infer that you will generally be happy on a day-to-day basis. If I meet someone and feel comfortable talking to them, that's only an indication that I will feel comfortable talking to them in the future. And if you pay attention for long enough, you realize that many emotions are ultimately harmless. If you stop feeding them, they drift away; they pass.

It is partly a dissociative approach, being a spectator to your own emotions (as mentioned by EE43026F). But at the same time, it's like treating your emotions as you treat your toes. They are a part of you, but they're only mildly informative about whether you should change careers.

Looking back on what I just wrote, I should also say that dealing with emotions is a skill. I don't mean to suggest that one little insight outweighs practice. About two and a half years ago I made a commitment to stop being completely oblivious to my emotions, and it has taken a while to develop the skills. The simplest skill is just identifying emotions. At various points of the day, ask yourself how you are feeling. When I started, I literally could not give a verbal response; I could not produce a word describing how I felt.

Comment by Peter_Lambert-Cole on Rationality Outreach: A Parable · 2011-03-18T15:51:49.770Z · LW · GW

Of course, Vox is not a Catholic, so there is no "we" in his argument.

Moreover, this post is one in a series responding to New Atheists and others who explicitly argue that religious institutions, people and motivations are worse than the secular alternatives. He doesn't introduce the comparison between religious and secular as a counterattack. He is responding to people who have already made that moral comparison and is showing that the calculus doesn't work out as they claimed.

Comment by Peter_Lambert-Cole on Making your explicit reasoning trustworthy · 2010-10-29T13:32:21.201Z · LW · GW

I wouldn't say that this is a fear of an "inaccurate conclusion," as you say. Instead, it's a fear of losing control and becoming disoriented: "losing your bearings," as you said. You're afraid that your most trustworthy asset - your ability to reason through a problem and come out safe on the other side, an asset that should never fail you - will fail you and lead you somewhere you don't want to go. In fact, it could mean Game Over if it drives you to kill or be killed, as you highlight in your examples of the Unabomber, Mitchell Heisman and zealot soldiers.

I especially like the orientation metaphor here, and I think your piece addresses it. First, you need to know where you are: recognize when you are in far mode, thinking abstractly, and when you are in near mode, thinking concretely. Then you can think about where you should be, near or far. Learn to recognize which mode is better for your current situation and be able to switch between them; this is also part of being oriented. Finally, have a kill switch for when you feel yourself losing control.

Comment by Peter_Lambert-Cole on Five-minute rationality techniques · 2010-08-19T04:45:51.333Z · LW · GW

I think skeptical people are too quick to cry "Forer effect" when they first take the Myers-Briggs. They notice that their type only partially describes them and assume that something fishy is going on. But if you switch all the letters and read the description of the exact opposite type, there is almost nothing in it that could apply to you. That in itself means that there is some non-trivial classification going on. San Francisco may not be LA, but it sure isn't Moscow.

Comment by Peter_Lambert-Cole on Five-minute rationality techniques · 2010-08-11T14:20:54.323Z · LW · GW

Fixed.

Does it make sense to think of yourself as crazy to the same extent that people of other psychetypes are?

I don't think so. The term captures how radically different the other types are from your own. It's about the relative distance between you and others, not an absolute quality.

Comment by Peter_Lambert-Cole on Five-minute rationality techniques · 2010-08-11T02:54:11.850Z · LW · GW

You mentioned Myers-Briggs types and "the idea that either I was crazy, or everyone else was." I think I had a similar experience, but with a different analysis of the MBTI classifications. The book was Personality Type: An Owner's Manual by Lenore Thomson, and there is a wiki discussion here.

I found the scientific basis fairly flimsy. She connects the 8 cognitive functions to various regions of the brain - left and right, anterior and posterior - but it seems like a just-so story to me. However, I have found the book immensely useful as a tool for self-improvement.

The main insight I got from it is that while other people are crazy, they are crazy in a fairly well-defined, reproducible way. Other people see things completely differently from you, but their view is fairly internally consistent, and so you can simulate it on your own hardware.

There are two ways I think about this:

One: your brain is constantly trying to make sense of all the sensory data that comes in. So it decides that one part is the signal and the rest is the noise, and it tries to minimize the noise and focus on the signal. But then you realize there is a whole other signal in what you thought was noise, and there are people tuned into that signal who think yours is actually the noise. If you then tune into their signal, you can understand what those people have been listening to the whole time.

The other: we are all playing eight board games simultaneously, and when we roll the dice our piece moves that amount in each of the games. In order to make sense of this, we each focus on one of the games, trying to forget about the others, and try to win it. But other people are focused on winning a different game. So when two people talk about who is winning, they completely talk past each other. But once you realize that someone thinks he is playing a different game, and you figure out which game it is, you can have a much more productive conversation/relationship.

Comment by Peter_Lambert-Cole on Alien parasite technical guy · 2010-08-02T15:04:26.081Z · LW · GW

This sounds like a "Yes, Minister" interpretation. In that series, the British politicians are nominally in charge of the various ministries, as representatives of the governing party, but in actuality the civil service bureaucracy runs the show. The minister, Jim Hacker, and the permanent secretary (top civil servant), Sir Humphrey Appleby, are constantly in conflict over some little policy or bureaucratic issue, and the latter almost always wins while letting his "superior" feel like he actually got his way.

So consciousness lets us think we are in charge - we are convinced we are in charge - when in reality we are constantly thwarted by the parts of our brain operating outside conscious awareness.

Comment by Peter_Lambert-Cole on Open Thread: July 2010, Part 2 · 2010-07-23T19:50:50.106Z · LW · GW

That's why it can be such an effective tactic when persuading normal people. You can get them to commit to your side, and then they rationalize themselves into believing it's true (which it is) because they don't want to admit they were conned.

Comment by Peter_Lambert-Cole on Open Thread: July 2010, Part 2 · 2010-07-23T18:06:33.090Z · LW · GW

There is something that bothers me, and I would like to know if it bothers anyone else. I call it "Argument by Silliness."

Consider this quote from the Allais Malaise post: "If satisfying your intuitions is more important to you than money, do whatever the heck you want. Drop the money over Niagara Falls. Blow it all on expensive champagne. Set fire to your hair. Whatever."

I find this to be a common end point when demonstrating what it means to be rational. Someone will advance a good argument that correctly computes or deduces how you should act, given a certain goal. In the post quoted above, that goal is maximizing your money. And in order to get the point across, they cite all the obviously silly things you could otherwise do. To a certain extent, this can be more blackmail than argument, because your audience does not want to seem a fool, and so they dutifully agree that yes, it would be silly to throw your money off Niagara Falls, and they are certainly reasonable people who would never do that, so of course they agree with you.

Now, none of the intelligent readers on LW need to be blackmailed this way, because we all understand what rationality demands of us and we respond to solid arguments, not rhetoric. And Eliezer is just using that bit of trickery to get a basic point across to the uninitiated.

But the argument does little to help those who already grasp the concept improve their understanding. Arriving at an absurdity does not mean you have correctly executed a reductio ad absurdum. You have to be careful: the argument appeals to something that is self-evidently absurd, and you should be wary of anything considered self-evident. Actually, I think it is more a case of being commonly accepted as absurd, but you should be just as wary of anything commonly accepted as silly. And you should be careful about cases where you think it is the former but it's actually the latter.

The biggest problem, however, is that silly is a class in which we put things that can be disregarded. Silly is not a truth statement. It is a value statement. It says things are unimportant, not that they are untrue. It says that according to a given standard, this thing is ranked very low, so low in fact that it is essentially worthless.

Now, disregarding things is important for thinking. It is often impossible to think through the whole problem, so we at first concern ourselves with just a part and put the troublesome cases aside for later. In the Allais Malaise post, Eliezer was concerned just with the minor problem of "How do we maximize money under these particular constraints?" and separating out intuitions was part of having a well-defined, solvable problem to discuss.

But the silliness he cites only proves that the two standards - maximizing money and satisfying your intuitions - conflict in a particular case. It tells you little about any other case or the standards themselves.

The point I most want to make is "Embrace what you find silly," but this comment has gone on very long, so I am going to break it up into several postings.

Comment by Peter_Lambert-Cole on Open Thread June 2010, Part 3 · 2010-06-20T17:49:40.030Z · LW · GW

I think one place to look for this phenomenon is when, in a debate, you seize upon someone's hidden assumptions. When this happens, it usually feels like a triumph: you have successfully uncovered an error in their thinking that invalidates a lot of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is embarrassing and it means you have to redo a lot of your thinking.

But hidden assumptions aren't bad. You have to make some assumptions to think through a problem anyway. You can only reason from somewhere to somewhere else; it's a transitive operation, and there has to be a starting point. Moreover, assumptions make thinking and computation easier. They decrease the complexity of the problem, which means you can figure out at least part of it. Assuming pi is 3.14 is good if you want an estimate of the volume of the Earth, but useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
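
To put a rough number on the pi example, here is a minimal sketch (the 6,371 km mean radius is the standard figure; the specific names and output format are just mine, chosen for illustration):

```python
import math

# How much does assuming pi = 3.14 cost when estimating the
# volume of the Earth? Mean radius is roughly 6,371 km.
RADIUS_KM = 6371

def sphere_volume(pi_value, radius):
    """Volume of a sphere: V = (4/3) * pi * r^3."""
    return (4 / 3) * pi_value * radius ** 3

rough = sphere_volume(3.14, RADIUS_KM)
better = sphere_volume(math.pi, RADIUS_KM)

# The volume is linear in pi, so the error is about 0.05%:
# fine for an estimate, fatal for a proof.
print(f"rough estimate:  {rough:.3e} km^3")
print(f"better estimate: {better:.3e} km^3")
print(f"relative error:  {(better - rough) / better:.4%}")
```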

When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others' assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.

This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side's assumptions to see how they fit.

Comment by Peter_Lambert-Cole on Open Thread June 2010, Part 3 · 2010-06-18T00:12:26.949Z · LW · GW

I have an idea that I would like to float. It's a rough metaphor that I'm applying from my mathematical background.

Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map, as opposed to an atlas of pretty good maps. You might think this is a silly distinction, but there are a few reasons why it may not be.

First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each very accurately describe a small area but become more and more distorted the farther out we go. Each ancient city-state might have accurate maps of the surrounding farms for tax purposes but wildly guess at what lies beyond a mountain range or desert. A map might also accurately describe the territory at one scale but simplify much smaller ones. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat, fertile farmland.

Or take another example. Suppose you have a virtual reality machine - a portable helmet with a screen and speakers - in a large warehouse, so that you can walk around the giant floor as if you were walking around a virtual world. Now suppose two people are inserted into this virtual world, but at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse, and when their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world. The virtual map and the physical map of the warehouse disagree about where the two people are, yet each map is perfectly reliable for its own purpose.

Thus, when we as rationalists are evaluating our maps and those of others, an argument by contradiction does not always work. That two maps disagree does not invalidate them. Instead, it should cause us to see where our maps are reliable and where they are not: where they overlap or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether and where they are still good maps. Moreover, it might be more useful to add an entirely new map to our atlas than to improve the resolution on one we already have, moving the lines ever so slightly as we bring it asymptotically closer to truth.

My lesson for the rationality dojo would thus be: be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps, and figure out how to incorporate those maps into your atlas, even if they seem inconsistent with what you already have.

As you may have noticed, this idea comes from differential geometry, where you use a collection ("atlas") of overlapping charts/local homeomorphisms to R^n ("maps") as a suitable structure for discussing manifolds.
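
For anyone without the background, here is a rough sketch of the formal structure the metaphor borrows (standard textbook material, summarized from memory, with notation chosen for illustration). A manifold $M$ is covered by open sets $U_i$, each equipped with a chart $\varphi_i \colon U_i \to \mathbb{R}^n$, and wherever two charts overlap, the transition map between them must be well-behaved (a homeomorphism, or a diffeomorphism for smooth manifolds):

$$\varphi_j \circ \varphi_i^{-1} \colon \varphi_i(U_i \cap U_j) \to \varphi_j(U_i \cap U_j)$$

No single chart needs to cover all of $M$; the charts only need to be individually accurate and mutually compatible on their overlaps. The "atlas of pretty good maps" above keeps that local structure while relaxing the compatibility requirement.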

Comment by Peter_Lambert-Cole on How to always have interesting conversations · 2010-06-15T02:08:40.188Z · LW · GW

Writing out a list of topics and connections is good, but it's only one part of a conversation. You should also consider the various reasons for having a conversation: passing the time, relieving anxiety, developing a relationship, maintaining a relationship, exchanging information, keeping updated on important information, debating a substantive point, getting someone to relax before asking them for something, being polite, making someone feel welcome, resolving a conflict.

When people have different goals for a conversation, it can be uncomfortable. If someone starts talking because they are nervous and you want to discuss the finer points of evolution, both people will get annoyed. When you are nervous, you want to talk about inane things because they are simple and an easy distraction; talking about science might be too complicated and compound your anxiety. Similarly, if you are really in the mood to talk about complex subjects, you don't want to discuss irrelevant, silly things, and you can get annoyed because the other person has nothing to offer. (Of course, some people might find talking about science comforting, even if you find it boring. There is no fixed relationship between the inane/serious topic scale and the frivolous/deep conversation scale.)

So you should develop your ability to recognize why you and the other person each want to have a conversation. Moreover, you should improve your ability to engage in the various types of conversation. Oftentimes, if you start a conversation on the other person's terms, they will get comfortable with you and later have the conversation you want.

You also have to think of conversation as a bargain between two people. You have a set of topics and conversation types you like, are strong at, or want to engage in, and the other person has hers. As with any negotiation, you have to work towards a mutually acceptable compromise. Of course, expanding your list of topics is helpful, because it increases the odds you will find common ground with someone, but your concept map does not necessarily help you quickly find something to talk about with a particular person.

Comment by Peter_Lambert-Cole on Less Wrong Book Club and Study Group · 2010-06-10T14:06:03.791Z · LW · GW

I'd like to participate as well.