Make your training useful
post by AnnaSalamon · 2011-02-12T02:14:03.597Z · LW · GW · Legacy · 46 comments
As Tom slips on the ice puddle, his arm automatically pulls back to slap the ground. He’s been taking Jiu-Jitsu for only a month, but, already, he’s practiced falling hundreds of times. Tom’s training keeps him from getting hurt.
By contrast, Sandra is in her second year of university mathematics. She got an “A” in calculus and in several more advanced courses, and she can easily recite that “derivatives” are “rates of change”. But when she goes on her afternoon walk and stares at the local businesses, she doesn’t see derivatives.
For many of us, rationality is more like Sandra’s calculus than Tom’s martial arts. You may think “overconfidence” when you hear an explicit probability (“It’s 99% likely I’ll make it to Boston on Tuesday”). But when no probability is mentioned -- or, worse, when you act on a belief without noticing that belief at all -- your training has little impact.
Learn error patterns ahead of time
If you want to notice errors while you’re making them, think ahead of time about what your errors might look like. List the circumstances in which to watch out and the alternative action to try then.
Here's an example of what your lists might look like. A bunch of visiting fellows generated this list at one of our rationality trainings last summer; I’m including their list here (with some edits) because I found the specific suggestions useful, and because you may be able to use it as a model for your own lists.
Action ideas, for three related biases:
A. How does it help to know about overconfidence[1]? What can you do differently, once you know your impressions are unreliable?
Action ideas:
- Try many things, including things you “know” won’t work. Try cheap ones.
- Don’t be so sure you can’t do things.
- Don’t be so sure that the things you are doing are working:
- If a given “necessary” task is using a large portion of your week, test what happens if you skip that task.
- Ask others whether your efforts are working, and what you might try instead. Test their suggestions.
- Ask how you’ll know if you hit your goal: what specific observables will be different? (Not “I’ll know calculus” but “I’ll be able to solve all the problems on the AP calculus test”. Not “I’ll be happier” but “I’ll improve my score on the Beck Depression Inventory”). Track these observables.
- Be suspicious of received wisdom, since others are also overconfident. But don’t just ignore that wisdom in favor of your own error-prone impressions -- look for empirical tests.[2]
- Your friends and family are weirder (more unlike your models) than you think they are. Try to notice how.
B. How does it help to know about the conjunction fallacy? What can you do differently, once you know specific stories are less likely than we generally expect?
Action ideas:
- Use simple or disjunctive plans:
- Choose a (city/college/etc.) in which there are many promising possibilities, not one with a single, highly promising scenario.[3]
- Apply for many jobs, in many sectors of the economy.
- Gather re-purposable resources, such as money, rationality, sanity, capable friends, math skill, reading speed, mental and physical fitness. Focus on fundamentals more than on situation-specific techniques.
- Tell detailed stories when you want to convince someone:
- Describe specific scenarios to angel investors, potential customers, etc.
- Visualize specific scenarios when you want to convince the less verbal parts of yourself that your new (exercise plan / whatever) is worth the effort.
- Don’t put all your caution into safeguarding one particular step. For example, don’t “ensure your start-up will succeed” by focusing only on the programming step, or only on the “where to sell it” step. Brainstorm many ways your plans can go wrong.
- Realize that conjunction-ridden theories (e.g. the Church-Turing thesis[4], or "I will live out my career as a mathematician") are more likely to be mistaken than you might naively think. (A short worked example follows this list.)
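As a quick illustration of how conjunctive stories leak probability, here is a sketch with made-up numbers (the step probabilities and independence are assumptions for illustration, not estimates from the post):

```python
# Illustrative only: a five-step plan whose steps are assumed
# independent and each 90% likely to succeed.
step_probabilities = [0.9, 0.9, 0.9, 0.9, 0.9]

p_whole_story = 1.0
for p in step_probabilities:
    p_whole_story *= p  # each added conjunct can only shrink the total

print(f"P(every step succeeds) = {p_whole_story:.2f}")  # ~0.59
```

Five individually likely steps leave the whole story barely better than a coin flip, which is why the disjunctive plans above are safer.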
C. How does it help to know about confabulation? (I.e., how does it help to know that you are often mistaken about your motives, and that situational factors affect you far more than most people expect?)
Action ideas:
- It’s not just that your beliefs about how to (make money / enjoy your Saturday / learn math / whatever) are probably overconfident. It’s also that they probably weren’t arrived at by asking “How can I do X?” *at all*. So get out a sheet of paper and a ten-minute timer; you may find better ideas immediately.
- Realize you (in a narrow verbal sense) don’t choose most of your actions. Even when you think you do. It’s therefore silly to expect your past choices to be the best choices you could have made, or to make up stories about why your actions were optimal.[5]
- Instead of asking “Why did I do that?”, ask “Why would someone else think I did that, if they were watching only my actions?”[6].
- Since your actions depend greatly on both habits and context:
- Train the actions you want until they’re automatic. Train the thinking habits you want, too. Don’t just verbally acknowledge their desirability.
- If you want robust change, train your new behavior *across contexts*, or tie your new actions to a “portable” context that can remain stable even when you move, change jobs, etc. (For example, build a habit of looking at your goals and mission statement every morning, or using a life coach.)
- Consider aiming for a high-status job, or a job that demands more of you, since others’ expectations may affect you more than you naively think.
- Don’t mistake background knowledge for unchangeable ability.
Do try this at home.
Many of the above examples are not well-tested. So don’t rely on them. But do try them. And, when you do, tell us about it; add your data to the common LW store.
Also, practice this sort of example-generation for any rationality content that you hope to master. Now that you know about Bayes’ theorem, outside view prediction methods, confirmation bias, or any of the others -- what can you do differently at work? in your relationship? while cooking dinner tonight?
The more specific your brainstorm is, the easier it will be to actually try things.
[1] By “overconfidence”, I mean the well-documented bias whereby people think they know more than they do -- I do not mean the bias of over-estimating one’s own abilities.
[2] “Empirical tests” here can include your own direct observations, friends’ anecdotes, published controlled studies, and anything else in the world that should look different, if [received wisdom / your own impression] is true. Many people just throw up their hands or take a vote when they see folks who disagree with one another; but sorting out the evidence is a learnable skill. It’s worth doing this for medical treatments, job search strategy, driving safety, learning methods, and ... anything else that has much impact on your life.
[3] For example, prefer “I’ll go to college X, where there are many smart people and connections” to “I’ll go to college Y, which is renowned for bioinformatics in particular, since bioinformatics is my lifelong destiny and will let me work for Craig Venter”.
[4] The Church-Turing thesis may not sound like a conjunction. But for it to hold, physics needs to be as we expect along many different dimensions, which is a conjunction, and is the sort of possibility we tend to overestimate. Similarly, there are many different events that could interrupt your planned career, and we tend to overestimate the chances that all of these events, at once, will not occur.
[5] But it isn’t silly to try to make your future actions more (useful/moral/whatever). Even if most actions occur by habit, you can, little by little, change your habits, and increase your self-awareness and your deliberative self-control.
[6] Or: “What would I believe about someone else, if they acted as I’ve been acting?”
Edited to add: Do please comment with your own attempts to turn LW rationality content into the kinds of specifics one can easily act on.
46 comments
comment by Johnicholas · 2011-02-12T07:00:59.374Z · LW(p) · GW(p)
This article made me think of a list I've been informally trying to make, of what stupidity feels like on the inside. The point is to identify when I'm writing code poorly - as the output will probably be even more bug-ridden than normal, and possibly appropriate to debug by starting over (though starting over violates my normal policy).
Stupidity feels like being bored, being in pain, being distracted, wanting to do anything else than this. Stupidity feels like being unworthy of these divine (external) ideas. Stupidity feels like blind plodding obedience. Stupidity feels like lovely and/or grotesque baroque clevernesses.
Trying to stop working and recover when I notice myself being stupid might be the right move, but I think pushing through it (aside from staying up late, which is a mistake) is a better policy. You have to learn to be productive on demand rather than when you're in the mood for it.
↑ comment by TheOtherDave · 2011-02-12T07:09:22.426Z · LW(p) · GW(p)
This is something I thought about a lot while I was recovering from brain damage and thus transiently a lot stupider than I normally am.
A few of mine:
Stupidity feels like not having enough fingers to hold all of my thoughts in place.
It feels like being tired all the time, even when I'm not.
It feels like merging onto the highway when I can't see all the oncoming traffic.
It feels like someone's playing loud distracting music that I can't hear.
It feels like riding on a train with square wheels.
↑ comment by Jayson_Virissimo · 2011-02-12T17:16:16.905Z · LW(p) · GW(p)
It feels like merging onto the highway when I can't see all the oncoming traffic.
This metaphor is excellent. That is exactly how I feel when I undersleep/oversleep.
↑ comment by JGWeissman · 2011-02-12T07:21:33.511Z · LW(p) · GW(p)
You have to learn to be productive on demand rather than when you're in the mood for it.
That sounds like a way to end up spending your “in the mood for productivity” time undoing the damage you do when you try to be productive on demand.
↑ comment by Jordan · 2011-02-12T10:45:51.840Z · LW(p) · GW(p)
There are some pieces of code I can implement just as well whether I'm motivated or not (whether I'm sharp or not). I think knowing the personal difference between what I can do while motivated and what I can do otherwise is the real key to consistent productivity. I try to mentally break down my to-do lists into those categories, so that when I'm feeling less sharp I can fall back on a cache of problems that I know I can solve well even given a temporary decrease in ability.
↑ comment by JGWeissman · 2011-02-12T18:39:20.784Z · LW(p) · GW(p)
Somewhat agreed, with the caveat that there is code I can implement when not sharp that, when I am sharp, I realize I shouldn't implement, because I can see a more elegant approach. This makes it important to set up the to-do lists you mention during sharp time.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-13T01:06:37.915Z · LW(p) · GW(p)
"That sounds like a way to end up spending your in the mood for productivity time undoing the damage you do when you try to be productive on demand."
I find that once I've pushed myself to finish something I've been procrastinating on, or even to do a small amount of work on it, my mood and general sense of my own competence improve enough to make the minor pain worth it. This is true of any work that has a deadline, self-imposed or not, and yes, it's usually true when I'm sleep-deprived.
However, I'm not a programmer, so this applies more to school assignments or to extracurricular projects like composing music. Maybe coding is different. (My experience of programming is limited enough that there isn't much creativity involved, and the limit is my knowledge, not how sharp I am.)
↑ comment by JGWeissman · 2011-02-13T01:48:00.733Z · LW(p) · GW(p)
Do any of these projects involve anything as hard as designing and implementing novel software that you may have to debug or expand a year later? Or designing an API that you will have to expand later, after lots of other programmers have written programs that use it, which you are not allowed to break?
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-13T23:04:51.383Z · LW(p) · GW(p)
I have no idea how to do any of those things. I would assume that they involve applying the rules of a given domain more rigorously than when you are, say, working on a novel or composing a piece of music. Which for sure would make them harder.
Also, I think I misunderstood your comment. You mean "damage" as in bad coding that you'll just have to redo later? As opposed to damage to your own motivational framework that will make it harder for you to motivate yourself in future? The latter is the way I understood it.
Regardless, is it not true that anyone who works as a programmer has deadlines, and can't afford to leave a project for later because they're not in the mood to be productive?
↑ comment by JGWeissman · 2011-02-14T00:24:29.242Z · LW(p) · GW(p)
You mean "damage" as in bad coding that you'll just have to redo later?
Yes, though it is worse than that. Bad code can contaminate otherwise good code that interacts with it, if the interface is not right.
Regardless, is it not true that anyone who works as a programmer has deadlines, and can't afford to leave a project for later because they're not in the mood to be productive?
No, not necessarily. Usually the closest I have to a deadline is my own declared estimate of when I will be done. Sometimes it is an effort just to sort out the relative priorities of my concurrent projects.
If you are working full time programming, you should manage to get in the productive zone at least once a day. "Not being in the mood" is not an excuse to put a project off indefinitely.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-14T00:48:24.382Z · LW(p) · GW(p)
"Bad code can contaminate otherwise good code that interacts with it, if the interface is not right."
That's kind of fascinating, but I can see that it would be really irritating as well having to deal with it every day.
"Usually the closest I have to a deadline is my own declared estimate of when I will be done."
Really? If I wasn't already halfway through my undergrad, I would consider programming as a career solely on that basis!
↑ comment by Alicorn · 2011-02-14T00:57:22.841Z · LW(p) · GW(p)
If I wasn't already halfway through my undergrad, I would consider programming as a career solely on that basis!
Your existing undergrad experience is a sunk cost. Do you want to be a programmer, or a whatever-you've-already-started-learning-to-be? (For that matter, do you have spare time? You could learn to program therein.)
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-14T12:48:26.498Z · LW(p) · GW(p)
I am studying nursing and for various reasons, that's where I want to stay. (There's actually a lot of appeal for me in a field where I'll never be out of work, can travel and work pretty much anywhere, and can easily branch out into many, many related fields.)
I took programming as one of my electives and I've tried to continue the learning process by giving myself extracurricular projects, but spare time is a limiting factor. Thanks for the advice though. If you have any advice for good books/online tutorials to read, or challenging projects I could assign myself, I would really appreciate that.
↑ comment by Strangeattractor · 2011-03-10T12:28:13.314Z · LW(p) · GW(p)
Here are a few websites:
Software Carpentry: Getting Scientists to Write Better Code by Making Them More Productive http://software-carpentry.org/
Invent Your Own Computer Games With Python http://inventwithpython.com/
↑ comment by Strangeattractor · 2011-03-10T12:24:08.868Z · LW(p) · GW(p)
"Bad code can contaminate otherwise good code that interacts with it, if the interface is not right."
Coding is dealing with abstractions and the way that things relate to one another. You're not just constructing the pieces, you are constructing how they will be put together. If you set it up wrong, it can be tricky to change later, if other people are relying on the parts that you'd like to change to keep working the same way they have been working from the beginning. And creating new software is usually a process of discovering requirements and uses for the software as you go along, so it is often difficult to know how to approach something at the beginning. Also, working with other people on the code means working with people who set up abstractions differently and have a slightly different, or wildly different, understanding of what the code is meant to do, and the assumptions that underlie it.
Working on a software project is a good way to see how people think differently, and how errors and assumptions affect the project and how people work together.
I like Frederick P. Brooks' books on software design, and design in general. They are classics: "The Mythical Man-Month" and "The Design of Design".
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-03-10T13:29:21.347Z · LW(p) · GW(p)
Thank you, those look interesting.
↑ comment by Barry_Cotter · 2011-02-14T09:05:01.471Z · LW(p) · GW(p)
If I wasn't already halfway through my undergrad, I would consider programming as a career solely on that basis!
Then learn to program and see if you like it enough to do it as a job, or if it could be helpful in the field you're doing your degree in. Being an X who can program can be a powerful force multiplier of your effectiveness in quite a few fields. An intro to programming that assumes very little is Learn Python the Hard Way.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-14T12:44:26.937Z · LW(p) · GW(p)
I did take a class in programming last semester as one of my electives. My major is in nursing, and my father made the comment that "you'll be the only nurse in Canada who can program." I learned it pretty effortlessly (that class was the easiest A+ I've had in years) but it was a huge time sink, and I think I drove the TA insane by starting projects at home and then sending him emails at 2 am asking why my program wasn't working.
Also I'm sure you're right and it could be very helpful just to know programming as a nurse. At one of my part time jobs, the software we use to keep track of dialysis patients was actually written BY a dialysis patient, who I guess worked as a programmer and saw a need that wasn't being filled. (I'm not QUITE at the level where I can write big, complex, useful programs.)
↑ comment by Just_existing · 2012-07-27T21:39:43.533Z · LW(p) · GW(p)
A while ago I started to motivate myself with good old conditioning. Every time I started to work on a project I ate an M&M. (I actually got the idea from the very first lecture from what will soon be the "Center of Modern Rationality", concerning the sunk cost fallacy.) This "technique" worked surprisingly well. After a while my subconscious seemed to become a little less reluctant to start working. My "mood" was better. Could be a placebo effect of course, and might only work for me, but it costs nothing to try. Please let me know if someone actually does.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-13T00:56:01.185Z · LW(p) · GW(p)
"You have to learn to be productive on demand rather than when you're in the mood for it."
Very true, and it's part of the reason I like to arrange structured activities that force me, on a day-to-day basis, to do the things I enjoy. I could have taught myself programming (maybe) but taking a course as my elective forced me to actually write the code when the assignment was due the next morning, as opposed to when I felt like it. It's a good feeling, getting something done and knowing you did a good job, but it's depressing how bad I am at motivating myself to push through and get it done without a deadline. (I don't know if this is true of other people, but actually I've been told I'm unusually productive.)
↑ comment by NancyLebovitz · 2011-02-13T04:39:25.954Z · LW(p) · GW(p)
As far as I can tell, making good use of unstructured time is very rare.
It's possible that conventional schooling makes people worse at it.
↑ comment by DaFranker · 2012-07-26T18:02:26.553Z · LW(p) · GW(p)
The best way to make a good use of unstructured time that I've ever found is to optimize towards highest utility. In plain terms, you have an ultimate goal, a be-all-end-all reason to be doing something, and you're doing your best to achieve this goal as efficiently as possible because each time-unit of delay is negative utility.
This takes a goal, of course, which is pretty much comparable to Step 7 of 11 For Building a Goedel Machine, if you've seen Eliezer's Summit presentation that touched on that subject.
comment by Vladimir_Nesov · 2011-02-12T14:22:10.648Z · LW(p) · GW(p)
The basic fall-back routine that should come before what's described in this post is noticing your own errors or errors of others as they actually occur, developing heuristics that would prevent you from making these errors, and training yourself to follow these heuristics. It's important to apply this method even to errors that cost you nothing (in which case noticing them might be non-trivial), because the heuristics can save you in the more rare cases where they do cost you something, and following this routine can help you develop skill that helps in developing more important heuristics.
You can also try predicting some of the errors in advance without actually making them, and treat failure to predict a predictable error as an error in reasoning and preparedness, but the basic method is the same.
↑ comment by AnnaSalamon · 2011-02-12T15:05:05.575Z · LW(p) · GW(p)
Nice point. Can you give examples of such errors, and of what it looks like to notice them?
↑ comment by Vladimir_Nesov · 2011-02-12T23:16:15.312Z · LW(p) · GW(p)
I wait at a bus stop. A car passes by at high speed, and a woman who was standing too close to the road has just enough time to jump clear of a shower of slush. Note to self: be aware of this danger, never stand too close, as there is no benefit but potential for ruining your clothing. Next time I notice myself standing too close to puddle/slush on the road, I move away and reinforce the heuristic.
A person from my department at work admonishes me for breaking the standard procedure for connecting to the Internet, which resulted in me being able to work that evening while causing no harm. I attempt to reason with the man, relying on my usual analytic ability to clearly explain the situation to everyone's satisfaction. Since the argument matches the template some of my psychological adaptations recognize as confrontational, emotions start to interfere with my normal cognition, and as a result I'm unable to think carefully and my argument is much less persuasive than expected. Note to self: when expected to enter a situation that can evoke strong emotions, plan what to do and what to say in advance, before emotions start interfering with ability to think, rehearse the plan in your mind, and only then allow the exposure. Next time I notice that I started to argue with emotions rising up, I cut myself short and regroup. Later, I reflect on the signs that could allow me to notice the situation approaching in advance (such as an unusual social interaction, something I wouldn't already have the heuristic associated with), and rehearse the response of recognizing the situation when exposed to appropriate cues.
I slip on an icy street, but recover without falling. I look around and realize that a low fence running along the road has sharp spikes on top, and the adjoining building a sharp stone border, so that an unlucky fall onto either would injure me. There is potential for harm in falling close to them, and no benefit in choosing to walk close to them as opposed to giving enough room to fall clear. So I adopt a heuristic of not walking close to dangerous structures on slippery surfaces, or going much slower where necessary. Next time I notice that I'm unnecessarily close to a dangerous structure while there's room to walk clear of it with no additional inconvenience, I correct my trajectory, thus reinforcing the heuristic.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-13T01:02:12.005Z · LW(p) · GW(p)
"Since the argument matches the template some of my psychological adaptations recognize as confrontational, emotions start to interfere with my normal cognition, and as a result I'm unable to think carefully and my argument is much less persuasive than expected."
I've trained myself to notice when I start to get emotionally involved in a confrontation, to make a conscious effort to take "one step back", and to deliberately apologize to the person for something, though not necessarily what we're arguing about. That one step will a) snap me out of confrontational mode, and b) convince whoever I'm talking to that I'm reasonable and open to their point of view. This method has helped me a lot at work and with family, and I wish I could remember to use it all the time.
↑ comment by sketerpot · 2011-02-13T03:14:10.635Z · LW(p) · GW(p)
This method has helped me a lot at work and with family, and I wish I could remember to use it all the time.
Just keep on using it as much as you can, perhaps periodically reminding yourself, and the habit will reinforce itself.
If you want to improve faster, you could try something like Benjamin Franklin's incredibly nerdy method: he would pick some good habit to reinforce (or bad one to avoid) and he would remind himself of this daily. Every time he fell short of his goal, he would make a check mark on a spreadsheet. When he'd gone a week without a single check mark, he would proceed on to the next habit on his list.
(Irrelevant story: Back in high school English class, we were assigned two essays about morality from the same time period. One was Sinners in the Hands of an Angry God, which was every bit as creepy as the title suggests. The other was Franklin's description of his spreadsheet-of-virtue experiment, which would not have been out of place as a top-level post on Less Wrong. Reading these together produced some of the most severe mood whiplash that is physically possible.)
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-27T23:13:59.737Z · LW(p) · GW(p)
Note to self: when expected to enter a situation that can evoke strong emotions, plan what to do and what to say in advance, before emotions start interfering with ability to think, rehearse the plan in your mind, and only then allow the exposure. Next time I notice that I started to argue with emotions rising up, I cut myself short and regroup.
Addition a year later: I'm currently doing a study group with a friend based on a book called 'Crucial Conversations', which is entirely about being more effective at communicating in emotion-laden situations. Highly recommended.
↑ comment by Zaine · 2013-05-11T04:20:41.623Z · LW(p) · GW(p)
Still recommended?
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2013-05-11T20:23:14.427Z · LW(p) · GW(p)
Still recommended.
comment by JGWeissman · 2011-02-12T04:09:22.138Z · LW(p) · GW(p)
The actionable idea I take from the experiments demonstrating the conjunction fallacy is that when trying to figure out how likely some event is, like a breakdown of relations between America and Russia or major flooding in California, I should try to come up with ways that event could happen, like Russia invades Poland, or an earthquake causes flooding, and remember that the probability of the event must be at least as great as the probability of it occurring for the reasons I imagined.
This idea can be extended by applying it recursively to the sub events that could lead to the event of interest, like Russia invades Poland, and given Russia invades Poland, diplomatic relations break down between America and Russia. Also, consider other ways it could happen, like America develops Star Wars missile defense, and generalize the sub events, like Russia invades another country.
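A minimal sketch of that bookkeeping in Python (the scenarios and all the numbers below are invented for illustration, and the scenarios are assumed mutually exclusive):

```python
# Invented, mutually exclusive routes to one event of interest.
scenario_probabilities = {
    "Russia invades Poland, relations then break down": 0.010,
    "missile-defense dispute escalates": 0.005,
    "some other imagined route": 0.002,
}

# The event's probability is at least the sum over disjoint routes,
# since any routes we failed to imagine only add probability mass.
lower_bound = sum(scenario_probabilities.values())
print(f"P(event) >= {lower_bound:.3f}")  # 0.017
```

The point is the direction of the inequality: imagined scenarios give a floor on the event's probability, never a ceiling.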
↑ comment by orthonormal · 2011-02-12T19:21:55.274Z · LW(p) · GW(p)
Be careful, though: some events (e.g. a terrorist attack) can be broken down into a disjunction of very many things (e.g. a specific scenario), each of which taken by itself is highly improbable. If you only focus on the list of causes you can imagine, you'll miss a lot of fat tails (or end up focusing your efforts on a really non-exhaustive list).
↑ comment by Nisan · 2011-02-13T07:38:21.614Z · LW(p) · GW(p)
Indeed. This is the message I took home from the Amanda Knox test.
comment by Aurini · 2011-02-13T00:04:40.303Z · LW(p) · GW(p)
I spend a lot of time thinking about politics, and I find it hugely beneficial to force myself into assigning probabilities to my predictions. It's silly, on the one hand, to intone "There is a 90% probability that this behaviour will continue" with the surety of a Vulcan - my numbers are very, very poorly calibrated - but when I actually sit back to consider "How certain am I of this?" it helps remind me that I don't know most of the time. This can motivate me to search for further evidence - and since I'm explicitly researching from a position of ignorance, I'm less prone to confirmation bias.
A second technique I employ is the Drake Equation to avoid the conjunction fallacy - I'll always try to boil the elements down to the individual events, and multiply the probabilities. An interesting side effect of this is that it destroys almost every ideological movement - I'm thinking of environmentalism in particular. [e.g. P(anthropogenic global warming) × P(catastrophic global warming) × P(reversibility) × P(Prius + carbon capture are net positives)] There are no easy solutions.
↑ comment by k3nt · 2011-02-15T01:05:08.030Z · LW(p) · GW(p)
Let me break this down and see if I understand you.
Every ideological movement makes specific factual predictions. I think I agree with that. Conservatives will tell you that if we don't do X, disaster will result. Liberals ditto. Marxists ditto. Gun control fanatics and gun nuts ditto. OK.
Those predictions are less likely to be correct than we tend to believe (conjunction fallacy). Agreed.
So I want to agree with you here.
But I don't see how the conclusion can be correct, because being moderate (avoiding the ideologues) is also a form of political ideology that makes specific predictions. "If we continue to muddle through and ignore the ideologues on all sides, things will be more or less ok" is also a prediction, isn't it?
↑ comment by Aurini · 2011-02-16T17:49:35.350Z · LW(p) · GW(p)
Great point.
I think that Moderatism - by my sloppy definition - would also qualify as one of these ideological movements, the difference being that its core premise is "Polite dinner conversation is the be-all-end-all to politics: don't get controversial!"
Ideological movements tend to start off with One Great Idea, which happily explains about 70% of reality by its heuristic, and then covers up the other 30% with 'Just So' explanations. Regardless of their roots, they become creatures designed for mass appeal - rather than rationalistic theories to explain reality.
Ignoring the party-specific heuristics, and looking at the tenets themselves, I come across some ideas which I'm extremely certain of, and which are often the most radical within the movement - crazy ideas which have trouble flowering without the supporting manure of the rest of the ideology.
For instance, on the fringe Right:
- Gay sex is a major health hazard, given the infection rates of HIV (roughly 700 times as dangerous as straight sex)
- There are almost certainly intellectual and behavioural differences between the races
- Patriarchal forces are far less damaging than Feminist ideology
On the fringe Left:
- Institutional violence is a distinct reality, with many specific examples which can be pointed to
- There is no line in the sand between legal drugs and illegal drugs; you cannot differentiate between the two; heroin should be legal
- The primacy of individual liberty is the only sane way to organize a society
Every camp has a few things that it's right about - Marxists, Anarchists, Statists, et cetera - but the individual tenets are extremely uncomfortable to hold or argue without (for example) buying into all of the comfortable delusions that Sarah Palin exemplifies (if you're a Conservative). It's easy to shout "I am an anti-Statist! Racism is bad! Borders and immigration laws are wrong!" It's far more difficult to say "I'm an anti-Statist, and racism is a poor heuristic, but importing people with no skills, and no cultural history of Classical Liberalism is a danger to our society."
I guess what I'm saying is that while this approach necessarily dissolves radical ideologies it doesn't necessarily affect radical ideas.
↑ comment by Manfred · 2011-02-15T01:39:57.565Z · LW(p) · GW(p)
Small note: If you want P(Prius is net positive), you should try P(AGW)×P(prius|AGW) + P(not AGW)×P(prius|not AGW). I.e. use the sum rule too, otherwise you end up calculating a big conditional probability instead of the total probability.
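Spelled out as a sketch (the numbers below are placeholders for illustration, not anyone's actual estimates):

```python
# Law of total probability, with made-up inputs for illustration.
p_agw = 0.8                  # P(anthropogenic global warming)
p_net_given_agw = 0.6        # P(Prius is net positive | AGW)
p_net_given_not_agw = 0.1    # P(Prius is net positive | not AGW)

p_net = p_agw * p_net_given_agw + (1 - p_agw) * p_net_given_not_agw
print(f"P(Prius is net positive) = {p_net:.2f}")  # 0.50
```

Multiplying only along the single conjunctive chain would drop the second term and understate the total.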
comment by Multiheaded · 2011-07-09T19:28:51.650Z · LW(p) · GW(p)
Frankly, I got more use - some of it immediate, private stuff though - out of this simple and clear post than out of the entire Luminosity sequence, which I skimmed through (not that I have anything bad to say about the Luminosity sequence itself). I've yet to reflect on why that is the case, as there's a huge aversion to Luminosity sitting there for some reason. Anyway, this one is practical as hell.
comment by Mass_Driver · 2011-02-12T08:52:51.481Z · LW(p) · GW(p)
This is a wonderful outline; thank you for sharing.
comment by Jonathan_Graehl · 2011-02-16T09:26:22.635Z · LW(p) · GW(p)
I like these suggestions.
I'm confused about why C.4 is under C at all. It seems entirely unrelated to confabulation.
comment by Alex Flint (alexflint) · 2011-02-13T13:34:56.256Z · LW(p) · GW(p)
Great outline, will raise this at the Oxford meetup this afternoon as we're planning to discuss practical rationality. Nice reference to Jitsu!
comment by steven0461 · 2011-02-12T22:57:42.340Z · LW(p) · GW(p)
Your friends and family are weirder (more unlike your models) than you think they are.
This seems like a weird thing to be confidently asserting in a discussion of overconfidence.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-13T01:07:46.950Z · LW(p) · GW(p)
I wasn't reading it as a confident statement of fact, more as a warning to consider the possibility that other minds are more unlike you than you've imagined.
comment by Algernoq · 2014-07-10T04:59:29.235Z · LW(p) · GW(p)
A useful question is how much rationality training is optimal. The brain can make intuitive guesses very quickly, and these guesses are fairly accurate most of the time, while meta-cognitive rationality checks slow decision-making and don't guarantee correctness. A Rationalist wants to make optimal decisions, and this often means going as meta as possible and striving to consider all known information: cognitive biases, general science, etc.
It is currently impossible to derive morality using logic and science (i.e. to derive "should" from "is"), and has been since the era of Wittgenstein or possibly Hume. So, I can't make any general statements about what anyone should do.
Assuming you want to be right, happy, and powerful, I recommend learning domain knowledge that helps build or do useful things (e.g. engineering or something else technical for a career; identifying what succeeds and practicing relevant techniques to succeed at routine tasks such as car-buying), and only studying enough psychology/logic/meta-cognition to usually avoid costly errors. The amount of valuable knowledge/skill I accumulate tracks closely with my success.
Practically, in engineering work, guarding against overconfidence is a huge part of the job. It's easy to be excited about a new and untested idea, but engineers typically learn humility after a few embarrassing and expensive failures. Experienced engineers are careful not to deliver an expensive or complicated product to customers until it's gone through extensive review and testing, and even then there is a budget/insurance for unexpected problems. This is for products using established methods and practices, that can be rigorously tested under known operating conditions. Meta-cognition is inherently more difficult to test (e.g. it's unwise to do destructive testing). LW rationality content generally describes well-validated theories, but prescribing actions based on these theories requires subjective value judgments.
tl;dr: Rationality helps but data/experience is what's critical for making effective decisions. If you haven't validated your theory with experiments, it's probably wrong.
↑ comment by ChristianKl · 2014-07-13T19:38:51.145Z · LW(p) · GW(p)
A Rationalist wants to make optimal decisions, and this often means going as meta as possible and striving to consider all known information: cognitive biases, general science, etc. [...] Rationality helps but data/experience is what's critical for making effective decisions. If you haven't validated your theory with experiments, it's probably wrong.
I don't think that's a fair criticism. This community values doing Fermi estimates and checking whether the estimates are near the correct number. We have PredictionBook for calibrating our prediction abilities against the real world.