Checklist of Rationality Habits

post by AnnaSalamon · 2012-11-07T21:19:19.244Z · LW · GW

As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical and broken down into fine-grained habits.

Below is the checklist of rationality habits we have been using in the minicamps' opening session.  It was co-written by Eliezer, myself, and a number of others at CFAR.  As mentioned below, the goal is not to assess how "rational" you are, but, rather, to build a personal shopping list of habits to consider developing.  We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.

I hope you find it useful; I certainly have.  Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.) 

---

This checklist is meant for your personal use, so that you can have a wish-list of rationality habits and can see whether you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop.  For each item, you might ask yourself: when did you last use this habit...
  • Never
  • Today/yesterday
  • Last week
  • Last month
  • Last year
  • Before the last year

  1. Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.
    1. When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice it, promote it to conscious attention, and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? Based on the experience of an actual LWer who failed to notice confusion at this point and missed their flight.)

    2. When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)


    3. I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)


    4. I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)


    5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, "This point doesn't change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.")


  2. Questioning and analyzing beliefs (after they come to your attention).
    1. I notice when I'm not being curious. (Recent example from Anna: Whenever someone criticizes me, I usually find myself thinking defensively at first, and have to visualize the world in which the criticism is true, and the world in which it's false, to convince myself that I actually want to know. For example, someone criticized us for providing inadequate prior info on what statistics we'd gather for the Rationality Minicamp; and I had to visualize the consequences of [explaining to myself, internally, why I couldn’t have done any better given everything else I had to do], vs. the possible consequences of [visualizing how it might've been done better, so as to update my action-patterns for next time], to snap my brain out of defensive-mode and into should-we-do-that-differently mode.)


    2. I look for the actual, historical causes of my beliefs, emotions, and habits; and when doing so, I can suppress my mind's search for justifications, or set aside justifications that weren't the actual, historical causes of my thoughts. (Recent example from Anna: When it turned out that we couldn't rent the Minicamp location I thought I was going to get, I found lots and lots of reasons to blame the person who was supposed to get it; but realized that most of my emotion came from the fear of being blamed myself for a cost overrun.)


    3. I try to think of a concrete example that I can use to follow abstract arguments or proof steps. (Classic example: Richard Feynman being disturbed that Brazilian physics students didn't know that a "material with an index" meant a material such as water. If someone talks about a proof over all integers, do you try it with the number 17? If your thoughts are circling around your roommate being messy, do you try checking your reasoning against the specifics of a particular occasion when they were messy?)


    4. When I'm trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds and try to consider the prior probability I'd have assigned to the evidence in that world; then I visualize the world where hypothesis #2 holds; and I see if the evidence seems more likely, or more specifically predicted, in one world than the other. (Historical example: During the Amanda Knox murder case, after many hours of police interrogation, Amanda Knox turned some cartwheels in her cell. The prosecutor argued that she was celebrating the murder. Would you, confronted with this argument, try to come up with a way to make the same evidence fit her innocence? Or would you first try visualizing an innocent detainee, then a guilty detainee, to ask with what frequency you think such people turn cartwheels during detention, to see if the likelihoods were skewed in one direction or the other? A worked numerical sketch of this likelihood-ratio arithmetic appears just after the checklist.)


    5. I try to consciously assess prior probabilities and compare them to the apparent strength of evidence. (Recent example from Eliezer: Used it in a conversation about apparent evidence for parapsychology, saying that for this I wanted p < 0.0001, like they use in physics, rather than p < 0.05, before I started paying attention at all.)


    6. When I encounter evidence that's insufficient to make me "change my mind" (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad-driver parameter is higher.)


  3. Handling inner conflicts: when different parts of you are pulling in different directions, or you want different things that seem incompatible; responses to stress.
    1. I notice when I and my brain seem to believe different things (a belief-vs-anticipation divergence), and when this happens I pause and ask which of us is right. (Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.)


    2. When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))


    3. When facing a difficult decision, I check which considerations are consequentialist - which considerations are actually about future consequences. (Recent example from Eliezer: I bought a $1400 mattress in my quest for sleep, over the Internet, hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn't seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn't change the importance and scope of the future better sleep at stake (occurring once per day, with a large effect size each day).)


  4. What you do when you find your thoughts, or an argument, going in circles or not getting anywhere.
    1. I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical. (Recent example from Michael Smith: Someone was worried that rationality training might be "fake", and I asked if they could think of a particular prediction they'd make about the results of running the rationality units, that was different from mine, given that it was "fake".)


    2. I try to come up with an experimental test, whose possible results would either satisfy me (if it's an internal argument) or that my friends can agree on (if it's a group discussion). (This is how we settled the running argument over what to call the Center for Applied Rationality—Julia went out and tested alternate names on around 120 people.)


    3. If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".) (Recent example from Anna: Advised someone to stop spending so much time wondering if they or other people were justified; was told that they were trying to do the right thing; and asked them to taboo the word 'trying' and talk about how their thought-patterns were actually behaving.)


  5. Noticing and flagging behaviors (habits, strategies) for review and revision.
    1. I consciously think about information-value when deciding whether to try something new, or investigate something that I'm doubtful about. (Recent example from Eliezer: Ordering a $20 exercise ball to see if sitting on it would improve my alertness and/or back muscle strain.) (Non-recent example from Eliezer: After several months of procrastination, and due to Anna nagging me about the value of information, finally trying out what happens when I write with a paired partner; and finding that my writing productivity went up by a factor of four, literally, measured in words per day.)


    2. I quantify consequences—how often, how long, how intense. (Recent example from Anna: When we had Julia take on the task of figuring out the Center's name, I worried that a certain person would be offended by not being in control of the loop, and had to consciously evaluate how improbable this was, how little he'd probably be offended, and how short the offense would probably last, to get my brain to stop worrying.) (Plus 3 real cases we've observed in the last year: Someone switching careers is afraid of what a parent will think, and has to consciously evaluate how much emotional pain the parent will experience, for how long before they acclimate, to realize that this shouldn't be a dominant consideration.)


  6. Revising strategies, forming new habits, implementing new behavior patterns.
    1. I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that, and (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)


    2. I talk to my friends or deliberately use other social commitment mechanisms on myself. (Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had some juice left over when work was done. I looked at Michael Smith and jokingly said, "But if I don't drink this now, it will have been wasted!" to prevent the sunk cost fallacy.) (Example from Eliezer: When I was having trouble getting to sleep, I (a) talked to Anna about the dumb reasoning my brain was using for staying up later, and (b) set up a system with Luke where I put a '+' in my daily work log every night I showered by my target time for getting to sleep on schedule, and a '−' every time I didn't.)


    3. To establish a new habit, I reward my inner pigeon for executing the habit. (Example from Eliezer: Multiple observers reported a long-term increase in my warmth / niceness several months after... 3 repeats of 4-hour writing sessions during which, in passing, I was rewarded with an M&M (and smiles) each time I complimented someone, i.e., remembered to say out loud a nice thing I thought.) (Recent example from Anna: Yesterday I rewarded myself using a smile and happy gesture for noticing that I was doing a string of low-priority tasks without doing the metacognition for putting the top priorities on top. Noticing a mistake is a good habit, which I’ve been training myself to reward, instead of just feeling bad.)


    4. I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)


    5. I use the outside view on myself. (Recent example from Anna: I like to call my parents once per week, but hadn't done it in a couple of weeks. My brain said, "I shouldn't call now because I'm busy today." My other brain replied, "Outside view, is this really an unusually busy day and will we actually be less busy tomorrow?")
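
For the Bayesian habits in items 2.4-2.6 (including the likelihood-ratio sketch promised in 2.4), here is a minimal worked example in Python. Every probability below is invented purely for illustration:

```python
def posterior_odds(prior_odds, p_evidence_given_h1, p_evidence_given_h2):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_evidence_given_h1 / p_evidence_given_h2
    return prior_odds * likelihood_ratio

# Item 2.4, the cartwheel example (numbers made up): suppose you judge that
# 5% of guilty detainees and 4% of innocent detainees would turn cartwheels.
# The likelihood ratio is only 1.25, so the evidence is very weak either way.
print(posterior_odds(prior_odds=0.2,            # hypothetical prior odds of guilt
                     p_evidence_given_h1=0.05,  # P(cartwheels | guilty)
                     p_evidence_given_h2=0.04)) # P(cartwheels | innocent)
# -> 0.25: barely moved, though item 2.6 says to update "at least a little"

# Item 2.5, the parapsychology example: with prior odds around 1e-6, even a
# likelihood ratio of 20 (the naive 1/p reading of a p < 0.05 result) leaves
# posterior odds of only 2e-5 -- hence demanding p < 0.0001 before paying
# attention at all.
print(posterior_odds(1e-6, 20, 1))
```

The point the code makes explicit is that the evidence enters only through the ratio of the two conditional probabilities, which is why item 2.4 asks you to visualize both worlds rather than explain the evidence under a single hypothesis.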

---

Comments sorted by top scores.

comment by lucidian · 2012-11-07T17:21:44.690Z · LW(p) · GW(p)

This may be the single most useful thing I've ever read on LessWrong. Thank you very, very much for posting it.

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don't know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I'm about to apply it to the task of writing an NSF fellowship application. =)
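
A minimal sketch of that recursive decomposition in Python; the task names, the decomposition table, and the "manageable" test are all hypothetical stand-ins for the human judgment involved:

```python
HYPOTHETICAL_SUBTASKS = {
    "write NSF application": ["draft research statement",
                              "draft personal statement",
                              "request reference letters"],
    "draft research statement": ["outline aims",
                                 "write one paragraph per aim"],
}

def plan(task):
    """Recursively break a task into a flat to-do list of manageable steps."""
    if task not in HYPOTHETICAL_SUBTASKS:   # base case: small enough to just do
        return [task]
    steps = []
    for subtask in HYPOTHETICAL_SUBTASKS[task]:
        steps.extend(plan(subtask))         # "apply this solution recursively"
    return steps

print(plan("write NSF application"))
# -> ['outline aims', 'write one paragraph per aim',
#     'draft personal statement', 'request reference letters']
```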

Replies from: gwern, jooyous, amcknight, sketerpot, Swimmer963, lukeprog
comment by gwern · 2012-11-07T19:11:12.120Z · LW(p) · GW(p)

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

It's a classic self-help technique (especially in 'Getting Things Done') for a reason: it works.

comment by jooyous · 2012-11-08T03:37:23.150Z · LW(p) · GW(p)

Hello! I am procrastinating on writing the NSF fellowship! High five!

My current subproblem consists of filling in all the instances of "INSPIRATIONAL STUFF" with actual inspirational stuff, so this particular subproblem is looking pretty difficult. :(

Replies from: JulianMorrison
comment by JulianMorrison · 2012-11-09T02:54:16.918Z · LW(p) · GW(p)

Well, your task spec is broken, so no wonder your brain won't be whipped into doing it.

"inspirational stuff" is a trigger for thinking in terms of things like advertising or religious revivals that are emotional grabs which are intended to disengage (or even flimflam) the reasoning faculties. Any rationalist would flinch away.

Re-frame: visualize your audience. You are looking to simply and clearly convey whatever part of their far mode utility function is advanced by the thing you are pushing.

comment by amcknight · 2012-11-08T22:43:19.233Z · LW(p) · GW(p)

For the slightly more advanced procrastinator who also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don't.

comment by sketerpot · 2012-11-17T00:43:27.957Z · LW(p) · GW(p)

Here's one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

This article would probably benefit from being re-read in smaller chunks over the course of several days. There are a lot of things in it that need to be thought about seriously in order to be effective, and I agree with you about its usefulness.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-09T03:27:02.591Z · LW(p) · GW(p)

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I think the most important aspect of this, for me anyway, is being able to dump most of what you're working on out of your working memory, trusting yourself that it's organized on paper, so that you can free up more brain space to do each of the sub-parts.

comment by Kaj_Sotala · 2012-11-07T13:26:39.478Z · LW(p) · GW(p)

Very nice list! I feel like this one in particular is one of the most important ones:

I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place - which was something that I typically did on each working day - there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

Replies from: Swimmer963, RomeoStevens, incariol, aelephant
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-07T15:51:42.201Z · LW(p) · GW(p)

Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I'll actually go. There's a pool a 5 minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I'm probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it's a 45 minute bike ride from home and about a 15 minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim–if I do, there's nearly 100% likelihood that I'll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.

comment by RomeoStevens · 2012-11-07T22:58:58.692Z · LW(p) · GW(p)

For me this was the biggest insight that dramatically improved my ability to form habits. I don't actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.

Cliche example: not having junk food in the house improves my diet by making it take additional work to go out and get it.

comment by incariol · 2012-11-11T00:55:37.568Z · LW(p) · GW(p)

Another example: as I don't feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don't expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can't do a spell or two and talk to dragons, she won't do ;-)).

It also helps being focused on math, programming and abstract philosophy - and spending time on LW, it seems. :)

Replies from: army1987, inblankets
comment by A1987dM (army1987) · 2012-11-15T17:24:17.082Z · LW(p) · GW(p)

I don't think you'd be likely to find yourself in a relationship despite not wanting to by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.

Replies from: DaFranker, incariol
comment by DaFranker · 2012-11-15T17:33:38.241Z · LW(p) · GW(p)

Aha, but the clever arguer could respond that you could be likely to find yourself wanting to despite not wanting to want to be in a relationship, and thus that avoidance is a twice-effective method of willpower conservation!

Of course, it's unlikely that the above is true and applicable to this case. If you're going to end up wanting it, and wanting it enough to compensate for the opportunity costs regarding other things you might want (costs incurred by eventual willpower expenses, or by time spent "succumbing" and attempting to get into a relationship), then I think it trivially follows that you should already have updated towards the more reflectively coherent behavior that seems to give higher expected utility. After all, we want to win.

Replies from: apotheon
comment by apotheon · 2012-11-15T17:54:33.793Z · LW(p) · GW(p)

It's the "Lead me not into temptation, but deliver me from weevils!" tactic. Well . . . maybe not weevils, but not evil either, in this case.

Your objection to the ultimate utility of avoidance doesn't seem to take into account the desire to avoid distraction and wasted time even when successfully resisting the biological urges toward relationship-establishing behavior. Even if you (for some nonspecific definition of "you") simply find yourself waylaid for a few minutes by a pretty girl, but ultimately ready to move on, the time spent not only in those few moments but also in thinking about it later on may prove a distraction from other things, regardless of whether you allow yourself to get caught up enough to actively pursue a relationship with her.

Replies from: DaFranker
comment by DaFranker · 2012-11-15T18:13:28.719Z · LW(p) · GW(p)

Well, yeah, my objection does take it into account, but I was being unfair in my implicit assumptions because I didn't think it likely that anyone here would object.

If you're going to end up wanting it, and wanting it enough to compensate for the opportunity costs regarding other things (...)

Basically, this is where I lumped an implicit: "For most humans, the desire and expected benefits of successfully entering a relationship are much greater in terms of evolved values than the opportunity costs incurred, and it is reasonable to expect that the gains obtained from this would free up enough mental resources to actually make faster, rather than slower, progress on other goals of interest in the case of well-motivated individuals with above-average instrumental rationality."

However, estimating the costs you mentioned for humans-on-average is difficult for me, due to lack of data. Picture me as wearing a "typical mind fallacy warning!" badge on this particular issue.

comment by incariol · 2012-11-16T12:35:38.338Z · LW(p) · GW(p)

Well, it has happened to me before - girls really can be pretty insistent. :) But this is not actually what concerns me - it's the distraction/wasted time induced by a pretty-girl-contact event, like apotheon explained below.

comment by inblankets · 2013-02-24T16:26:33.032Z · LW(p) · GW(p)

I disagree with the commenters below-- I think you're fairly likely to find yourself wanting to be in a relationship if you're not careful. I'm a female, and I don't want to get married or have kids. Unfortunately, I'm 24, and some part of me/the body is really trying to marry me off and give me baybehs. So I try not to take in too much media that normalizes this vs. normalizing my goals, I don't babysit, and I am open about my intent so as not to attract invitations.

comment by aelephant · 2012-11-09T23:31:53.555Z · LW(p) · GW(p)

Set Future You up for success, rather than failure.

Edit: Thought of a personal example. I know that if I scratch my head, my head will become more itchy. It is a vicious cycle. If I cut my nails short, it seems to help. In the moment, I might not want to cut my nails because there is no immediate value. But it is, in a sense, "modifying my environment" so that in the future I'll be less likely to fall into the itchy-head trap.

comment by JenniferRM · 2012-11-07T17:14:03.489Z · LW(p) · GW(p)

Awesome list. I'm interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)

comment by Shmidley · 2012-11-08T20:00:12.335Z · LW(p) · GW(p)

I'd like to add "noticing when you don't know something." When someone asks you a question, it's surprisingly tempting to try to be helpful and offer them an answer even when you don't have the necessary knowledge to provide an accurate one. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you're just guessing and don't actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn't actually know what I was talking about and instead tell him, "I don't know.")

This is essentially the problem of confabulation mentioned here; in this case it's a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it's blurry. Don't treat a blurry map as if it were clear!

Replies from: John_Maxwell_IV, aelephant
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-13T06:21:18.699Z · LW(p) · GW(p)

I like your comment, but one problem is that telling people you don't know stuff projects low status. I think most people, including me, really know very little, but if you're honest about this all the time then this can contribute to persistent low status. (I tried the "don't care about status" thing for a while, but being near the bottom of the social totem pole just doesn't seem to work for me psychologically. So lately I've decided to optimize for status everywhere at least somewhat.)

Replies from: army1987, handoflixue
comment by A1987dM (army1987) · 2012-11-13T13:15:15.026Z · LW(p) · GW(p)

I like your comment, but one problem is that telling people you don't know stuff projects low status.

That only happens if it's credible, otherwise it's taken as counter-signalling. When I say I don't know much about something, people generally realize I'm just holding myself to a high standard and don't genuinely believe I know less than the typical person; the problem is that they also think that when I actually don't know shit about something (in the sense the typical person would use that phrase). Conversely, showing off knowledge can come across as arrogant in certain situations.

I tried the "don't care about status" thing for a while

Even if you don't care about status, I'd say that what X (e.g. “I don't know”) actually means in English is what English speakers actually mean when they say X, regardless of etymology (huh, it sounds tautological when put this way, doesn't it?), and if you're aware of this and use X to mean something else you're lying (unless your interlocutor knows you mean something else).

comment by handoflixue · 2012-11-14T01:09:02.880Z · LW(p) · GW(p)

"telling people you don't know stuff projects low status"

If it's a random stranger, I don't care about status. If it's a friend or a fellow "geek", it's probably a high status signal to send. That pretty much leaves work as the only area I'd potentially run in to this, and I've found "I don't know; but I can find out!" works wonders (part of this is that at work, I'm presumably expected to actually know these things)

I've found "I don't know, but isn't it fun to find out!" is a fairly successful tactic, but I'm also deliberately aiming to attract geeks and people who like that answer in my life :)

Replies from: army1987, wedrifid, CAE_Jones
comment by A1987dM (army1987) · 2012-11-14T14:08:35.878Z · LW(p) · GW(p)

“A physicist is someone who answers all questions with ‘I don't know, but I can find out.’” -- Someone (possibly Nicola Cabibbo, quoting from my memory)

comment by wedrifid · 2012-11-14T01:22:38.738Z · LW(p) · GW(p)

If it's a friend or a fellow "geek", it's probably a high status signal to send.

Rarely. It is often a useful signal to send but seldom high status.

Replies from: handoflixue, DaFranker
comment by handoflixue · 2012-11-14T21:29:22.972Z · LW(p) · GW(p)

I don't really understand the reply. Are you saying it's rarely high status even within my social circles? Or are you saying that my social circles are unusual? To the former, all I can say is that we apparently have very different experiences. To the latter... well, duh, that's WHY I specified that it was specific to THOSE groups...

Replies from: wedrifid
comment by wedrifid · 2012-11-14T22:15:04.454Z · LW(p) · GW(p)

Are you saying it's rarely high status even within my social circles?

I am saying that it is more likely that you are inflating the phrase "high status" to include things that are somewhat low status but overall socially rewarding than that your subculture is stretched quite that far in that (unsustainable) direction.

Replies from: handoflixue
comment by handoflixue · 2012-11-14T22:41:54.995Z · LW(p) · GW(p)

How would "I don't know" being high status be unsustainable?

For that matter, what distinction are you drawing between high status and socially rewarding?

Replies from: wedrifid
comment by wedrifid · 2012-11-15T00:16:32.454Z · LW(p) · GW(p)

For that matter, what distinction are you drawing between high status and socially rewarding?

Yes, "high status" being the inflated does seem to be the crux of the matter.

Socially rewarding behaviors that, ceritus paribus are low status.

  • Saying "please" or "thankyou".
  • Listening to what someone is saying. Even more if you deign to comprehend and accept their point.
  • Saluting.
  • Doing what someone asks.
  • Using careful expression to ensure you don't offend people.
Replies from: handoflixue
comment by handoflixue · 2012-11-16T19:33:17.148Z · LW(p) · GW(p)

My general experience has been that "I don't know, but I'll find out", said to someone currently equal or lower status than me, clearly but mildly correlates with most of the low status behavior you mentioned. I'm not as sure how it affects people higher status than me, since I don't have as many of those relationships / data points.

So I continue my assertion that, yes, it's high status, not merely socially rewarding. I still suspect this is a weird and unusual set of experiences, and probably has to do with how I position "I don't know" relative to others.

comment by DaFranker · 2012-11-14T22:05:39.015Z · LW(p) · GW(p)

In some circles, perceived signal usefulness is a causal factor towards the signal's status-level.

To unbox the above: In some groups I've been with, sending compressed signals that everyone in the group understands is a high-status signal, regardless of whether it's a "low-status" or "high-status" signal in other environments.

"Hey, I have an idea but I'm not quite sure how to go about putting it in practice" is a very low status signal in meatspace for all meatspaces I've been in except one, but a very high status signal in e.g. certain online hacking communities.

Likewise for the case at hand, there are places where "I don't know" can even be the highest status signal. For the most memorable example, I once visited a church where the people at the top answered "I don't know" to most questions, signaling their closeness to divinity implicitly, while the "simpletons" at the bottom of the ladder had an opinion on everything, and thus would never "not know".

comment by CAE_Jones · 2012-11-14T01:27:56.394Z · LW(p) · GW(p)

I've had people tell me to taboo "I don't know" because I use it so much. These being fairly average or slightly above average people who are annoyed that I don't have a strong opinion about things like "what do you want to eat tonight?" Some have made jokes about putting "I don't know" on my tombstone. Assuming that I die and am later resurrected and discover this was actually done, I will be most displeased.

Replies from: handoflixue, TheAncientGeek, btoblake
comment by handoflixue · 2012-11-14T21:30:45.303Z · LW(p) · GW(p)

I usually interpret that context as "I don't have a preference", which I would readily agree is useful to taboo. If you genuinely don't know what you want (despite having an apparent hidden but strong preference) then ... that's a new one on me ^^;

comment by TheAncientGeek · 2014-03-19T20:34:27.152Z · LW(p) · GW(p)

Toss a mental coin and pretend to enthuse about the result?

comment by btoblake · 2014-03-19T18:46:47.349Z · LW(p) · GW(p)

Before declining to offer an opinion, it's worth considering whether you'd benefit from the decision being made. (For instance, you could get a prompt dinner.) If so, why not offer a little help? Decision making can be tiring work, and any input can make it easier.

You could:

  • Mention any limiting factors (e.g., "I have $20" or "1 hour")
  • Mention options that are convenient
  • Offer support to the person who makes the decision (particularly if you can avoid critiquing their choice)
comment by aelephant · 2012-11-09T23:21:48.493Z · LW(p) · GW(p)

Good one. I try to be very conservative with my language & preface everything I say with something that implies an amount of uncertainty.

There might be cultural differences. In China people will give you directions on the street even if they have no idea. I have yet to have someone reply to a request for help with "I don't know".

It seems like an ego-protection thing to me & it isn't helpful.

comment by SPLH · 2012-11-08T09:02:40.576Z · LW(p) · GW(p)

The example about stacks in 1.2 has a certain irony in context. This requires a small mathematical parenthesis:

A stack is a certain sophisticated type of geometric structure which is increasingly used in algebraic geometry, algebraic topology (and spreading to some corners of differential geometry) to make sense of geometric intuitions and notions on "spaces" which occur "naturally" but are squarely out of the traditional geometric categories (like manifolds, schemes, etc.).

See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.

The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.

Even if you do not care for stacks (and I wouldn't hold it against you), if you are interested in open source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative fully hyperlinked textbook on the topic, which is steadily growing towards the 3500-page mark.

comment by JoshuaFox · 2012-11-07T08:15:25.333Z · LW(p) · GW(p)

he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

[Edit] But his utility function would predictably change under those circumstances.

I know that I have a status quo bias, hedonic treadmill, and strongly decreasing marginal utility of money (particularly when progressive taxation is factored in).

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I'd also be pretty much as happy as I am now, and want more money.

The logical conclusion is that we should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

Replies from: gwern, JoshuaFox, NancyLebovitz, katydee
comment by gwern · 2012-11-07T19:08:35.141Z · LW(p) · GW(p)

If I made 2/3 of what I do now, I'd be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now, I'd also be pretty much as happy as I am now, and want more money.

You're burying your argument in the constants 'pretty much' there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: "Well, if I made 2/3 what I do now, I'd still be 'pretty much as happy' as I am now" and so on and so forth until you have hit sub-poverty wages.

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7; do you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?

(particularly when progressive taxation is factored in).

Here again more work is necessary. One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.

The logical conclusion is that I should lower the weight of salary increases in decisions, the opposite of the conclusion proposed here.

Of course there are people who are surely making the mistake of over-valuing salaries; but you're going to need to do more work to show you're one of them.

Replies from: Kindly, Kawoomba, army1987, CarlShulman, Vaniver, JoshuaFox
comment by Kindly · 2012-11-08T04:34:05.998Z · LW(p) · GW(p)

To keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Comparing these numbers tells you pretty much nothing. First of all, taking log($50k) is not a valid operation; you should only ever take logs of a dimensionless quantity. The standard solution is to pick an arbitrary dollar value $X, and compare log($50k/$X), log($120k/$X), and log($10^9/$X). This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.

This shouldn't be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as "is U1 better than U2?" or "is U1 better than a 50/50 chance of U2 and U3?" The answer to this question doesn't change if we add a constant to U1, U2, and U3.

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

It would make sense to say, if your utility for money is logarithmic and you currently have $50k, that you're indifferent between a 100% chance of an extra $70k and an 8.8% chance of an extra $10^9 -- that being the probability for which the expected utilities are the same. If you think logarithmic utilities are bad, this is the claim you should be refuting.
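
A quick numeric check of the figures in this thread, assuming natural logarithms and a baseline wealth of $50k (a sketch; the arbitrary constant C drops out of every comparison):

```python
import math

wealth = 50_000
print(math.log(wealth))           # 10.82  ("log 50k is 10.8" upthread)
print(math.log(wealth + 70_000))  # 11.70  (log of 120k)
print(math.log(1e9))              # 20.72  (log of $1 billion)

# Indifference point: a sure extra $70k vs. probability p of an extra $1e9.
# Solve  log(120k) = p*log(50k + 1e9) + (1 - p)*log(50k)  for p:
p = ((math.log(wealth + 70_000) - math.log(wealth))
     / (math.log(wealth + 1e9) - math.log(wealth)))
print(p)                          # ~0.088, the 8.8% chance quoted above
```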

Replies from: jmmcd, gwern, army1987
comment by jmmcd · 2012-11-08T23:24:02.834Z · LW(p) · GW(p)

you should only ever take logs of a dimensionless quantity

Goddammit I have a degree in mathematics and no-one ever told me that and I never figured it out for myself.

I see the beginnings of an explanation here [http://physics.stackexchange.com/questions/7668/fundamental-question-about-dimensional-analysis]. Any pointer to a better explanation?

Replies from: aronwall, Kindly, shminux
comment by aronwall · 2012-12-17T02:19:58.970Z · LW(p) · GW(p)

Taking logs of a dimensionful quantity is possible, if you know what you're doing. (In math, we make up our own rules: no one is allowed to tell us what we can and cannot do. Whether or not it's useful is another question.) Here's the real scoop:

In physics, we only really and truly care about dimensionless quantities. These are the quantities which do not change when we change the system of units, i.e. they are "invariant". Anything which is not invariant is a purely arbitrary human convention, which doesn't really tell me anything about the world. For example, if I want to know if I fit through a door, I'm only interested in the ratio between my height and the height of the door. I don't really care about how the door compares to some standard meter somewhere, except as an intermediate step in some calculation.

Nevertheless, for practical purposes it is convenient to also consider quantities which transform in a particularly simple way under a change of units systems. Borrowing some terminology from general relativity, we can say that a quantity X is "covariant" if it transforms like X --> (unit1 / unit2 )^p X when we change from unit1 to unit2. Here p is a real number which indicates the dimension of the unit. These things aren't invariant under a change of units, so we don't care about them in a fundamental way. But they're extremely useful nevertheless, because you can construct invariant quantities out of covariant ones by multiplying or dividing them in such a way that the units cancel out. (In the concrete example above, this allows us to measure the door and me separately, and wait until later to combine the results.)

Once you're willing to accept numbers which depend on arbitrary human convention, nothing prevents you from taking logs or sines or whatever of these quantities (in the naive way, by just punching the number sans units into your calculator). What you end up with is a number which depends in a particularly complicated way on your system of units. Conceptually, that's not really any worse. But remember, we only care if we can find a way to construct invariant quantities out of them. Practically speaking, our experience as physicists is that quantities like this are rarely useful.

But there may be exceptions. And logs aren't really that bad, since as Kindly points out, you can still extract invariant quantities by adding them together. As a working physicist I've done calculations where it was useful to think about logs of dimensionful quantities (keywords: "entanglement entropy", "conformal field theory"). Sines are a lot worse since they aren't even monotonic functions: I can't imagine any application where taking the sine of a dimensionful quantity would be useful.

Replies from: Eliezer_Yudkowsky, None, Richard_Kennaway, shminux
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-17T06:00:53.758Z · LW(p) · GW(p)

I think it'd be obvious how to take the log of a dimensional quantity.

e^(log apple) = apple

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-17T06:46:56.066Z · LW(p) · GW(p)

Right, but then log (2 apple) = log 2 + log apple and so forth. This is a perfectly sensible way to think about things as long as you (not you specifically, but the general you) remember that "log apple" transforms additively instead of multiplicatively under a change of coordinates.

comment by [deleted] · 2012-12-17T13:00:31.234Z · LW(p) · GW(p)

Isn't the argument to a sine by default a quantity of angle, that is, radians in SI? (I know radians are epiphenomenal/w/e, but still)

comment by Richard_Kennaway · 2012-12-17T12:01:51.260Z · LW(p) · GW(p)

I can't imagine any application where taking the sine of a dimensionful quantity would be useful.

Machine learning methods will go right ahead and apply whatever collection of functions they're given in whatever way works to get empirically accurate predictions from the data. E.g. add the patient's temperature to their pulse rate and divide by the cotangent of their age in decades, or whatever.

So it can certainly be useful. Whether it is meaningful is another matter, and touches on this conundrum again. What and whence is "understanding" in an AGI?

Eliezer wrote somewhere about hypothetically being able to deduce special relativity from seeing an apple fall. What sort of mechanism could do that? Where might it get the idea that adding temperature to pulse may be useful for making empirical predictions, but useless for "understanding what is happening", and what does that quoted phrase mean, in terms that one could program into an AGI?

comment by shminux · 2012-12-17T08:15:07.373Z · LW(p) · GW(p)

"units are a useful error-checking homomorphism"

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-17T09:42:51.716Z · LW(p) · GW(p)

I don't think "homomorphism" is quite the right word here. Keeping track of units means keeping track of various scaling actions on the things you're interested in; in other words, it means keeping track of certain symmetries. The reason you can use this for error-checking is that if two things are equal, then any relevant symmetries have to act on them in the same way. But the units themselves aren't a homomorphism, they're just a shorthand to indicate that you're working with things that transform in some nontrivial way under some symmetry.

Replies from: shminux
comment by shminux · 2012-12-17T17:39:15.352Z · LW(p) · GW(p)

I don't think "homomorphism" is quite the right word here.

The map from dimensional quantities to units is structure-preserving, so yes, it is a homomorphism between something like rings. For example, all distances in SI are mapped into the element "meter", and all time intervals into the element "second". Addition and subtraction is trivial under the map (e.g. m+m=m), and so is multiplication by a dimensionless quantity, while multiplication and division by a dimensional quantity generates new elements (e.g. meter per second).

Converting between different measurement systems (e.g. SI and CGS) adds various scale factors, thus enlarging the codomain of the map.
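
A toy version of this map as error-checking code (a sketch, not a real units library; two base units, with dimensions tracked as exponent tuples):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # (1, 0) = meter, (0, 1) = second, (1, -1) = meter/second

    def __add__(self, other):
        # addition is only defined on like quantities: m + m = m, but m + s errors
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims} to {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # multiplication generates new elements, e.g. meter per second
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

meter, second = Quantity(1.0, (1, 0)), Quantity(1.0, (0, 1))
print(meter + meter)    # fine: dims stay (1, 0)
print(meter * second)   # new element: dims (1, 1)
try:
    meter + second      # the homomorphism catches the dimensional error
except TypeError as e:
    print("caught:", e)
```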

comment by Kindly · 2012-11-09T00:07:02.505Z · LW(p) · GW(p)

I don't know of any good explanations; this seems relevant but requires a subscription to access. Unfortunately, no-one's ever explained this to me either, so I've had to figure it out by myself.

What I'd add to the discussion you linked to is that in actual practice, logarithms appear in equations with units in them when you solve differential equations, and ultimately when you take integrals. In the simplest case, when we're integrating 1/x, x can have any units whatsoever. However, if you have bounds A and B, you'll get log(B) - log(A), which can be rewritten as log(B/A). There's no way A and B can have different units, so B/A will be dimensionless.

Of course, often people are sloppy and will just keep doing things with log(B) and log(A), even though these don't make sense by themselves. This is perfectly all right because the logs will have to cancel eventually. In fact, at this point, it's even okay to drop the units on A and B, because log(10 ft) - log(5 ft) and log(10 m) - log(5 m) represent the same quantity.
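
In display form (a restatement of the paragraph above):

```latex
\int_A^B \frac{dx}{x} \;=\; \log B - \log A \;=\; \log\frac{B}{A}
% B/A is dimensionless whenever A and B carry the same units, so the
% combined expression is well-defined even where log(B) and log(A)
% alone are not.
```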

Replies from: satt
comment by satt · 2012-11-10T14:41:05.086Z · LW(p) · GW(p)

I don't know of any good explanations; this seems relevant but requires a subscription to access.

Most of that paper is the authors rebutting what other people have said about the issue, but there are two bits that try to explain why one can't take logs of dimensional things.

Page 68 notes that y = log_b(x) is equivalent to b^y = x, which "precludes the association of any physical dimension to any of the three variables b, x, and y".

And on pages 69-70:

The reason for the necessity of including only dimensionless real numbers in the arguments of transcendental function is not due to the [alleged] dimensional nonhomogeneity of the Taylor expansion, but rather to the lack of physical meaning of including dimensions and units in the arguments of these function. This distinction must be clearly made to students of physical sciences early in their undergraduate education.

That second snippet is too vague for me. But I'm still thinking about the first one.

[Edited to fix the LaTeX.]

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2012-11-13T03:02:34.845Z · LW(p) · GW(p)

The (say) real sine function is defined such that its domain and codomain are (subsets of) the reals. The reals are usually characterized as the complete ordered field. I have never come across units that--taken alone--satisfy the axioms of a complete ordered field, and having several units introduces problems such as how we would impose a meaningful order. So a sine function over unit-ed quantities is sufficiently non-obvious as to require a clarification of what would be meant by sin($1).

For example--switching over now to logarithms--if we treat $1 as the real multiplicative identity (i.e. the real number, unity) unit-multiplied by the unit $, and extrapolate one of the fundamental properties of logarithms--that log(ab) = log a + log b--we find that log($1) = log($) + log(1) = log($) (assuming we keep that log(1) = 0). How are we to interpret log($)? Moreover, log($^2) = 2log($). So if I log the square of a dollar, I obtain twice the log of a dollar. How are we to interpret this in the above context of utility?

Or an example from trigonometric functions: One characterization of the cosine and sine stipulates that cos^2+sin^2=1, so we would have that cos^2($1)+sin^2($1)=1. If this is the real unity, does this mean that the cosine function on dollars outputs a real number? Or if the RHS is $1, does this mean that the cosine function on dollars outputs a dollar^(1/2) value? Then consider that double, triple, etc. angles in the standard cosine function can be written as polynomials in the single-angle cosine. How would this translate?

So this is a case where the 'burden of meaningfulness' lies with proposing a meaningful interpretation (which now seems rather difficult), even though at first it seems obvious that there is a single reasonable way forward. The context of the functions needs to be considered; the sine function originated with plane geometry and was extended to the reals and then the complex numbers. Each of these was motivated by an (analytic) continuation into a bigger 'domain' that fit perfectly with existing understanding of that bigger domain; this doesn't seem to be the case here.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-13T08:40:24.580Z · LW(p) · GW(p)

How are we to interpret [the logarithm of one dollar] in the above context of utility?

You pick an arbitrary constant A of dimension "amount of money", and use log(x/A) as a utility function. Changing A amounts to adding a constant to the utility (and changing the base of the logarithms amounts to multiplying it by a constant), which doesn't affect expected utility maximization. EDIT: And once it's clear that the choice of A is immaterial, you can abuse notation and just write “log(x)”, as Kindly says.

comment by shminux · 2012-11-08T23:57:11.593Z · LW(p) · GW(p)

You can only add, subtract and compare like quantities, but log(50000 * $1) = log(50000) + log($1), which is a meaningless expression. What's the logarithm of a dollar?

Replies from: army1987, Thomas, jmmcd
comment by A1987dM (army1987) · 2012-11-10T15:42:37.769Z · LW(p) · GW(p)

What's the logarithm of a dollar?

An arbitrary additive constant. See the last paragraph of Kindly's comment.

comment by Thomas · 2012-11-10T15:53:08.941Z · LW(p) · GW(p)

What's the logarithm of a dollar?

What do you need to "exponate" to get a dollar?

That, whatever that might be, is the logarithm of a dollar.

comment by jmmcd · 2012-11-09T08:25:48.504Z · LW(p) · GW(p)

Well, we could choose to factorise it as log(50000 dollars) = log(50000 dollar^0.5 * 1 dollar^0.5) = log(50000 dollar^0.5) + log(1 dollar^0.5). That does keep the units of the addition operands the same. Now we only have to figure out what the log of a root-dollar is...

the logarithm of a dollar

It's really just the same question again -- why can't I write log(1 dollar) = 0 (or maybe 0 dollar^0.5), the same as I would write log(1) = 0.

Replies from: satt
comment by satt · 2012-11-10T13:17:06.737Z · LW(p) · GW(p)

It's really just the same question again -- why can't I write log(1 dollar) = 0 (or maybe 0 dollar^0.5), the same as I would write log(1) = 0.

$1 = 100¢. Now try logging both sides by stripping off the currency units first!
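
Spelled out, assuming one naively strips the units (this is the contradiction being pointed at):

```latex
\$1 = 100\text{ cents}
  \;\overset{?}{\implies}\; \log(1) = \log(100)
  \;\implies\; 0 \approx 4.6
% "log of a dollar" shifts by log(100) under a mere change of unit,
% so it is defined at best up to an additive constant.
```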

comment by gwern · 2012-11-20T21:41:48.473Z · LW(p) · GW(p)

This is equivalent to comparing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an arbitrary constant.

This is what I did, without the pedantry of the C.

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

I don't follow at all. How can utilities not be comparable in terms of multiplication? This falls out pretty much exactly from your classic cardinal utility function! You seem to be assuming ordinal utilities, but I don't see why you would assume something I neither drew on nor would accept.

Replies from: Kindly, The_Duck
comment by Kindly · 2012-11-20T22:34:29.142Z · LW(p) · GW(p)

This is what I did, without the pedantry of the C.

The point is that because the constant is there, saying that utility grows logarithmically in money underspecifies the actual function. By ignoring C, you are implicitly using $1 as a point of comparison.

A generous interpretation of your claim would be to say that to someone who currently only has $1, having a billion dollars is twice as good as having $50000 -- in the sense, for example, that a 50% chance of the former is just as good as a 100% chance of the latter. This doesn't seem outright implausible (having $50000 means you jump from "starving in the street" to "being more financially secure than I currently am", which solves a lot of the problems that the $1 person has). However, it's also irrelevant to someone who is guaranteed $50000 in all outcomes under consideration.
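Putting rough numbers on that generous interpretation (a sketch; the natural log and the $1 baseline, so that U($1) = 0, are my assumptions):

```python
import math

u = math.log  # utility of a dollar amount, normalized so U($1) = 0

coin_flip = 0.5 * u(10**9)  # 50% chance of $1 billion, otherwise stay at $1
sure_thing = u(50000)       # 100% chance of $50,000

print(round(coin_flip, 2), round(sure_thing, 2))  # 10.36 vs 10.82
# Close but not equal: "twice as good" holds only roughly, as noted above.
```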

Replies from: gwern
comment by gwern · 2012-11-20T23:00:14.462Z · LW(p) · GW(p)

However, it's also irrelevant to someone who is guaranteed $50000 in all outcomes under consideration.

Then how do you suggest the person under discussion evaluate their working patterns if log utilities are only useful for expected values?

Replies from: Kindly
comment by Kindly · 2012-11-20T23:14:11.800Z · LW(p) · GW(p)

By comparing changes in utility as opposed to absolute values.

To the person with $50000, a change to $70000 would have a log utility of 0.336, and a change to $1 billion would have a log utility of 9.903. A change to $1 would have a log utility of -10.819.
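Those deltas are just the log of the ratio to the $50,000 baseline; a one-loop check (assuming natural logarithms, which the figures above imply):

```python
import math

baseline = 50000
for target in (70000, 10**9, 1):
    # Change in log utility when wealth moves from the baseline to target
    print(target, round(math.log(target / baseline), 3))
# 70000 -> 0.336, 1000000000 -> 9.903, 1 -> -10.82
```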

Replies from: gwern
comment by gwern · 2012-11-20T23:33:43.994Z · LW(p) · GW(p)

I see, thanks.

comment by The_Duck · 2012-11-20T22:19:36.344Z · LW(p) · GW(p)

How can utilities not be comparable in terms of multiplication?

"The utility of A is twice the utility of B" is not a statement that remains true if we add the same constant to both utilities, so it's not an obviously meaningful statement. We can make the ratio come out however we want by performing an overall shift of the utility function. The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities. But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.

Replies from: gwern
comment by gwern · 2012-11-20T23:05:53.316Z · LW(p) · GW(p)

The fact that we think of utilities as cardinal numbers doesn't mean we assign any meaning to ratios of utilities.

Kindly says the ratios do have relevance to considering bets or risks.

But it seemed that you were trying to say that a person with a logarithmic utility function assesses $10^9 as having twice the utility of $50k.

Yes, I think I see my error now, but I think the force of the numbers is clear: log utility in money may be more extreme than most people would intuitively expect.

comment by A1987dM (army1987) · 2012-11-08T17:12:50.708Z · LW(p) · GW(p)

In particular, it's invalid to say "U1 is twice as good as U2". For that matter, even if you don't like utility functions, this is suspicious in general: what does it mean to say "I would be twice as happy if I had a million dollars"?

This is what I immediately thought when I first read about the Repugnant Conclusion on Wikipedia, years ago, before I had ever heard of the VNM axioms or anything like that.

comment by Kawoomba · 2012-11-07T21:56:59.086Z · LW(p) · GW(p)

[D]o you really think if someone handed you a billion dollars and you filled your world-famous days competing with Musk to reach Mars or something insanely awesome like that, you would only be twice as happy as when you were a low-status scrub-monkey making 50k?

Only twice as?

Adaptation level theory suggests that both contrast and habituation will operate to prevent the winning of a fortune from elevating happiness as much as might be expected. ... As predicted, lottery winners were not happier than controls

It's a well replicated phenomenon.

Replies from: gwern
comment by gwern · 2012-11-20T22:26:23.668Z · LW(p) · GW(p)

Lottery winners are self-selected for a number of things, including innumeracy or foolishness and not having grand projects that would be materially advanced by winnings. And the famous lottery-winner examples involve relatively small sums as far as I know - most of the winners in that paper won $400k or less, at a time of higher tax rates, and with a serious selection issue there as well (fewer than half of the winners were interviewed).

comment by A1987dM (army1987) · 2012-11-08T08:59:56.165Z · LW(p) · GW(p)

One of the chief suggestions of positive psychology is donating more and buying more fuzzies... and guess what is favored by progressive taxation? Donating.

You don't get to decide where most of your tax money goes, which I guess means that for a large fraction of people taxes don't count as fuzzy-buying donations.

Replies from: scav, gwern
comment by scav · 2012-11-08T18:24:45.501Z · LW(p) · GW(p)

Which is a failure mode of most people's thinking about taxes. Most of your tax money goes to boring things you don't want to concern yourself with and which you don't have any expertise in, such that you deciding exactly where the money went would be disastrous. Someone with the required expertise is doing their best to make sure the limited available money is spent carefully on those things, in most cases.

I like to think that in general, taxes are my subscription fee for living in a civilisation rather than a feudal plutocracy.

There are some specific things my taxes are spent on that I actively resent, but the response to that is to oppose those specific things, and I accept democracy and debate as the means to (slowly and unreliably) improve the situation.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-09T13:17:01.441Z · LW(p) · GW(p)

I think of taxes as a “subscription fee for living in a civilisation”, too, but I think you're overestimating how useful what most of the tax money is spent on is to most of the population and underestimating the extent to which present-day First World countries are plutocracies.

Replies from: scav
comment by scav · 2012-11-09T22:14:49.514Z · LW(p) · GW(p)

Well, neither of us has quantified our estimates for the usefulness of government spending, or broken it down by sector or demographics. So, how much am I overestimating it, and in what specific ways? :)

I live in Scotland. I consider it to be a civilised country mostly. It has good free education and health care, and businesses are regulated as to employment law, health and safety, and environmental impact. I don't claim more expertise in how all that gets arranged than the people who arrange it, and I would be sceptical if you did, without seeing evidence.

The civilisation of the USA has some existential risk of feudal plutocracy, but I think it narrowly avoided one of the risk factors this week and I hold out some hope for steady improvement if it can stop shitting its pants over imaginary terrorist threats and start taking human rights seriously again. But even if I'm wrong about that, I never said that taxes were sufficient to prevent social breakdown. Just necessary.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-10T11:25:34.300Z · LW(p) · GW(p)

I don't claim more expertise in how all that gets arranged than the people who arrange it, and I would be sceptical if you did, without seeing evidence.

I'm not questioning their expertise; I'm questioning their goals. I usually try to apply Hanlon's razor to single individuals, but I'm reluctant to apply it to entire governments. I'm pretty sure that spending an amount on defence comparable to (or, in certain countries, even greater than) what is spent on research has a point, I just don't think it's to benefit most of the population.

The civilisation of the USA has some existential risk of feudal plutocracy, but I think it narrowly avoided one of the risk factors this week

In terms of what he's actually done, as opposed to what he says, Obama's economic policy isn't that different to Republicans'. Or do “issues like peace, immigration, gay and women's rights, prayers in school”¹ (to quote the article linked) suffice to make a government not count as a plutocracy?

Anyway, how much have you heard about lobbying, associations such as the Bilderberg Group or the Trilateral Commission, etc.? (Unfortunately, the people who talk about those things also tend to spew out lots of nonsense about Reptilians and whatnot, but I have my own hypothesis about why they do that.)


  1. When I posted that article on Facebook, the only comment was from a gay friend of mine pointing out that with one president gay rights would go back to the 1800s and with the other they might be allowed to marry.
Replies from: scav
comment by scav · 2012-11-12T11:12:09.800Z · LW(p) · GW(p)

This is wandering away from the topic a bit. I doubt anyone could make a good case for any of:

  • taxes are inherently harmful and always misspent
  • taxes are always spent wisely
  • there exists any political system under which immensely rich people couldn't wield a lot of political power to try to further enrich themselves.
  • the immensely rich bother to conspire for any other purpose or actually care about politics much beyond what it can get them personally
  • there is literally nothing a democratically elected government can or will do to limit the political power of the immensely rich in any way.
Replies from: MugaSofer, army1987
comment by MugaSofer · 2012-11-12T17:00:26.570Z · LW(p) · GW(p)

there exists any political system under which immensely rich people couldn't wield a lot of political power to try to further enrich themselves.

Sure there does. A military dictatorship, for one.

Replies from: scav, FAWS
comment by scav · 2012-11-12T19:10:31.059Z · LW(p) · GW(p)

Name one where the dictator and his cronies were not also embezzling the wealth of the country and living it up with their rich buddies. That's what they grab power for.

Even if the guy at the top has ideological principles that forbid such behaviour (rare) and isn't a hypocrite about them (super rare), there is always someone high up in the hierarchy who is in the market for favours, and due to the nature of a dictatorial hierarchy, essentially untouchable.

Replies from: thomblake
comment by thomblake · 2012-11-12T19:29:35.408Z · LW(p) · GW(p)

You're describing a situation in which politically powerful people become rich, not one in which rich people become politically powerful.

Replies from: scav
comment by scav · 2012-11-13T16:36:54.444Z · LW(p) · GW(p)

That's a distinction with no significance. Those who grab political power to enrich themselves will peddle influence as one way of so doing. Or have you got a real-life counter-example?

I find the offered hypothetical and unprecedented military dictatorship where political power is kept separate from economic power ... unpersuasive.

comment by FAWS · 2012-11-12T17:47:35.226Z · LW(p) · GW(p)

Do you have an example of a military dictatorship where the immensely rich were allowed to keep their wealth, but couldn't use it to exert political influence?

Replies from: MugaSofer
comment by MugaSofer · 2012-11-13T08:50:44.737Z · LW(p) · GW(p)

Well, no. Not offhand, anyway. But people can become rich after the revolution, and I can't think of any examples of people gaining "a lot of political power to try to further enrich themselves" this way. Of course, those who already have such power (due to corruption or whatever) do tend to use it to acquire wealth...

EDIT: Put much better here.

comment by A1987dM (army1987) · 2012-11-12T11:16:55.867Z · LW(p) · GW(p)

I ADBOC with the negation of those statements (provided “there exists” in the third one means “there has existed so far” rather than “there could ever exist in principle”).

comment by gwern · 2012-11-08T14:06:31.858Z · LW(p) · GW(p)

That wasn't what I meant to imply.

comment by CarlShulman · 2012-11-08T01:29:03.020Z · LW(p) · GW(p)

to keep the limits of the log argument in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Ln $100 is 4.6, at which point it's doubtful that you can survive.

Replies from: gwern
comment by gwern · 2012-11-08T03:17:40.217Z · LW(p) · GW(p)

Ah, but suppose subsistence wages plummeted as in Hanson's em hell scenario? Ln $100 merely shows that 'the poor also smile' and the utility-maximizing thing is quadrillions of impoverished minds!

Replies from: CarlShulman
comment by CarlShulman · 2012-11-08T05:11:29.276Z · LW(p) · GW(p)

If we continue to use Utility=ln($) then utilities go infinitely negative as you approach zero :).

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-11-09T15:06:20.177Z · LW(p) · GW(p)

Allowing us to refute the repugnant conclusion. Quadrillions of minds with $(1+ε). We should start a campaign to use very large currency units in preparation for the Singularity.
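For concreteness, a minimal sketch (ε = 0.01 is an assumed value): under U = ln($), each such life is barely worth living, the total still grows without bound with population, and utility dives toward minus infinity as wealth approaches zero.

```python
import math

epsilon = 0.01
per_capita = math.log(1 + epsilon)  # ~0.00995: a life barely worth living
print(10**15 * per_capita)          # quadrillions of such minds: ~9.95e12 in total
print(math.log(10**-6))             # wealth near zero: ~ -13.8, heading to -inf
```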

comment by Vaniver · 2012-11-07T21:45:22.059Z · LW(p) · GW(p)

guess what is favored by progressive taxation? Donating.

Sort of? I mean, the primary work here is being done by the deductibility of charitable donations from taxable income. Progressive taxation helps in that charitable donations are cheaper the richer you are (each dollar given away only costs 70 cents, instead of 100 if there were no deduction / you were paying no income taxes), but that's shaping the incentive, not making it.
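The incentive arithmetic in one place (the 30% marginal rate is an assumption matching the 70-cent figure above):

```python
def out_of_pocket(donation, marginal_rate):
    # With a deductible donation, the giver's true cost is reduced by the tax saved.
    return donation * (1 - marginal_rate)

print(out_of_pocket(1.00, 0.30))  # 0.70: "each dollar given away only costs 70 cents"
print(out_of_pocket(1.00, 0.00))  # 1.00: no deduction / no income tax
```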

comment by JoshuaFox · 2012-11-07T21:16:41.942Z · LW(p) · GW(p)

... 50k ... a billion dollars...

Sure, that's why I said 2/3 and 3/2 rather than more significant multipliers.

Also: Sometimes you settle yourself into a local maximum, and even if it is not a global maximum, not switching may be OK if the local is not too much lower than the global maximum.

favored by progressive taxation? Donating

Yes, I agree that using your tax deduction gives an extra boost to donating.

comment by JoshuaFox · 2012-11-08T08:26:20.438Z · LW(p) · GW(p)

I realized that what bothers me is the neglect of utility-function differences in the counterfactual world.

Should you start using heroin? Let's try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing your decision. If you were a heroin addict, and had lost everything, and heroin were your only friend and consolation, would you want to stop? Maybe not. So go ahead, shoot up.

If, despite your deep desire to go into classical music as a career (which in real life you did, to your great satisfaction), you had followed the money into the financial sector, and after years of 80-hour weeks, had sunk into cynicism and no longer cared for anything but making more money to support your extravagant spending habits, would you then want to leave the financial industry for a life of music and a modest income? Probably not, so go ahead, follow the money, burn out your soul, and buy yourself a Porsche.

Replies from: handoflixue, Omegaile
comment by handoflixue · 2012-11-14T01:51:09.070Z · LW(p) · GW(p)

I have trouble believing that in those situations, I'd actually prefer to be that sort of rock-bottom, burnt-out person rather than thinking "I wish I'd made different choices when I was 20, oh foolish foolish me."

Having been in some rather bad situations, I've never once thought "Gosh, this is so much better than if I'd had a successful, high-paying, yet enjoyable career!"

comment by Omegaile · 2012-11-08T10:40:02.599Z · LW(p) · GW(p)

This method of reducing bias only works for rational decisions using your current utility. Otherwise you will be prone to circular decisions like those you describe (decisions that feed themselves).

comment by NancyLebovitz · 2012-11-08T03:28:26.741Z · LW(p) · GW(p)

Shouldn't we include the costs of moving? Even if the social costs are held as negligible (they probably shouldn't be), there's the time spent and the monetary costs of moving.

comment by katydee · 2012-11-07T09:11:08.678Z · LW(p) · GW(p)

Yes, but money isn't just about being happy.

Replies from: JoshuaFox
comment by JoshuaFox · 2012-11-07T10:06:50.476Z · LW(p) · GW(p)

Sure, one of the things I most like about having more money is being able to donate more. However, the main consideration of her brother and others in these circumstances is, I strongly suspect, not maximizing their donation capacity, but rather a more generic personal utility calculation.

comment by roland · 2012-11-07T16:30:25.848Z · LW(p) · GW(p)

Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had

The idea that willpower or thinking depletes brain glucose has been debunked:

http://www.psychologytoday.com/blog/ulterior-motives/201211/is-willpower-energy-or-motivation http://lesswrong.com/r/discussion/lw/ej7/link_motivational_versus_metabolic_effects_of/

Replies from: gwern, aelephant
comment by gwern · 2012-11-07T17:22:23.499Z · LW(p) · GW(p)

But nevertheless, the suggestion of sweets will still work per your own links. A nice example of how revised theories remain consistent with old observations...

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-13T06:22:53.264Z · LW(p) · GW(p)

Supposedly gargling sugary lemonade works: http://www.forbes.com/sites/daviddisalvo/2012/11/08/need-a-self-control-boost-gargle-with-sugar-water/

Edit: sorry, this is redundant w/ roland's links.

comment by aelephant · 2012-11-09T23:27:46.692Z · LW(p) · GW(p)

I missed this somehow. Thanks for posting the links.

comment by Qiaochu_Yuan · 2013-02-04T07:42:25.095Z · LW(p) · GW(p)

I put the checklist into an Anki deck a week or two ago that I've been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn't talk about the checklist then and some of the ideas in the checklist, like social commitment mechanisms, weren't otherwise explicitly mentioned).

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-04-26T15:02:16.914Z · LW(p) · GW(p)

Would you mind sharing this deck? It would be a nice addition to the Anki decks by LW users.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-27T20:00:40.689Z · LW(p) · GW(p)

I admit I'm not entirely sure how to share a deck.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-04-28T01:36:09.381Z · LW(p) · GW(p)

Ah, you are not the first! This comment by tgb taught me how to do it. (I'm assuming you are using Anki 2.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-04-28T05:06:49.672Z · LW(p) · GW(p)

Cool. Here it is!

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-04-28T14:11:13.792Z · LW(p) · GW(p)

Thanks. The deck is now listed.

comment by A1987dM (army1987) · 2012-11-07T16:16:57.987Z · LW(p) · GW(p)

This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:

Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.

For some reason my brain is more comfortable working with numbers than with visualizations. That can be bad for signalling: a few years ago there was a terrorist attack in London which affected IIRC about 300 people; my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.
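The back-of-the-envelope calculation in that anecdote, made explicit (the inputs are the figures recalled in the comment, not verified):

```python
affected = 300          # people affected, as recalled ("IIRC about 300")
population = 10000000   # rough population of London used in the comment
p = affected / population
print(p, "about 1 in", round(1 / p))  # 3e-05, about 1 in 33333
```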

Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.)

There's a huge difference: someone living in Silicon Valley on $70K + x and considering whether to stay there or move to Santa Barbara and earn x would be used to living on $70K + x; whereas someone living in Santa Barbara on x and considering whether to move to Silicon Valley and earn x + $70K or stay there would be used to living on x. This would affect how much each of them would enjoy a given amount of money. Also, the former would already have a social circle in Silicon Valley, and the latter wouldn't.

Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice and re-read the email and think about ways to make it less likely for it to be interpreted in an unintended way before hitting Send.

Replies from: Vaniver, DaFranker, apophenia, BrassLion
comment by Vaniver · 2012-11-08T04:08:44.582Z · LW(p) · GW(p)

my mother told me “you should call [your friend who's there] and ask him if he's all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.

Your math is right but your mother has the right interpretation of the situation. If your friend is dead, calling him does neither of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.

Replies from: DaFranker
comment by DaFranker · 2012-11-08T15:49:30.363Z · LW(p) · GW(p)

A different approach might be to do the math on how likely it is that someone the friend knows was involved in the incident. Or maybe just call to discuss the possible repercussions and the probable overreactions that the local government will have.

However, for most of my own friends, if I did call them in exactly such a situation, they'd tell me almost exactly what army1987 said to their mother. Unless they happened to be dead or lost a friend to the event or something.

comment by DaFranker · 2012-11-07T16:26:26.881Z · LW(p) · GW(p)

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice and re-read the email and think about ways to make it less likely for it to be interpreted in an unintended way before hitting Send.

The thing is, it seems quite clear that the problem wasn't about how likely they are to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand based on no evidence that they would respond poorly, simply as a programmed mental habit. This would end up creating a vicious circle where the accumulated negatives from past times make it even more likely that sending feels bad this time, regardless of the actual reactions.

The tactic of smiling reinforces the action of sending emails instead of terrorizing yourself into never sending emails anymore (which I infer from context would be a bad thing), and once you're rid of the looming vicious circle you can then base your predictions of the reaction on the content of the email, rather than have it be predetermined by your own feelings.

(Obligatory nitpicker's note: I agree with pretty much everything you said, I just didn't think that the real event in that example had a bad decision as you seemed to imply.)

comment by apophenia · 2012-11-07T22:05:51.637Z · LW(p) · GW(p)

This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep.

Interesting you should say that. About a week ago I simplified this into a more literal checklist designed to be used as part of a nightly wind-down, to see if it could maintain or instill habits. I designed the checklist based largely on empirical results from NASA's review of the factors for effectiveness of pre-flight safety checklists used by pilots, although I chased down a number of other checklist-related resources. I'm currently actively testing effects on myself and others, both trying to test to make sure it would actually be used, and getting the time down to the minimum possible (it's hovering around two minutes).

P.S. I'm not associated with CFAR but the checklist is an experiment on their request.

If you were to test your suggestion for two weeks, I would be interested to hear the results. My prediction (with 80% certainty) is: Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf. (Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)

Replies from: Metus, army1987, army1987
comment by Metus · 2012-11-08T00:17:11.193Z · LW(p) · GW(p)

Can you point us to the more interesting checklist resources?

Replies from: apophenia
comment by apophenia · 2012-11-18T00:45:23.099Z · LW(p) · GW(p)

Absolutely. I can give better resources if you can be more specific as to what you're looking for.

I recommend The Checklist Manifesto first as an overview, as well as a basic understanding of akrasia, and trying and failing to make and use some checklists yourself.

The resources I spent most of my time with were very specific to what I was working on, and so I wouldn't recommend them. However, just in case someone finds it useful, Human Factors of Flight-Deck Checklists: The Normal Checklist draws attention to some common failure modes of checklists outside the checklist itself.

comment by A1987dM (army1987) · 2012-12-01T01:56:30.752Z · LW(p) · GW(p)

Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf.

That's indeed what happened.

(Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)

That's just a hypocorism for my first name. I have never been in the armed forces. (I regret picking this nickname because it has generated confusion several times, but I've used it on the Internet ever since I was 12 and I'm kind of used to it.)

comment by A1987dM (army1987) · 2012-11-08T15:22:28.958Z · LW(p) · GW(p)

This sounds interesting. I wasn't entirely serious, but I'm going to do this for real now. (I haven't decoded the rot13ed part, of course.)

comment by BrassLion · 2012-11-12T20:48:37.937Z · LW(p) · GW(p)

You have the right conclusion but the wrong reason. Most people would appreciate being thought of in a disaster, so calling him if he's alive would be good - except that the phone networks, particularly cell networks, tend to be crippled by overuse in sudden disasters. Staying off the phones if you don't need to make a call helps with this.

comment by therufs · 2012-11-28T19:15:25.614Z · LW(p) · GW(p)

It's much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.

comment by aceofspades · 2012-11-09T02:00:33.612Z · LW(p) · GW(p)

I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-09T02:09:15.510Z · LW(p) · GW(p)

Longer? Probably not. Happier? Possible, depending on that person's baseline, since we don't know our own desires and acquiring these skills might help, but given the hedonic treadmill effect, unlikely. Achieving more of their interim goals? Possible if not probable. There are a lot of possible goals aside from living longer and being happier.

Replies from: aceofspades
comment by aceofspades · 2012-11-09T02:48:55.600Z · LW(p) · GW(p)

I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I probably will make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components.
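Read literally, the two supergoals can be written as follows, where h is instantaneous happiness, p ranges over people, and w is the still-undecided weight (all three symbols are mine, not the commenter's):

```latex
U_{\text{selfish}}    = \int h_{\text{self}}(t)\,\mathrm{d}t \qquad
U_{\text{altruistic}} = \iint h(p, t)\,\mathrm{d}p\,\mathrm{d}t \qquad
U = w\,U_{\text{selfish}} + (1 - w)\,U_{\text{altruistic}}
```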

But despite my finding such an abstract way of characterizing my actions interesting, the actual determining of the weights and the actual function I'm maximizing are just determined by what I actually end up doing. In fact constructing this abstract system does not seem to convincingly help me further its purported goal, and I therefore cease all serious conversation about it.

Replies from: Swimmer963, chaosmosis
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-09T03:24:35.718Z · LW(p) · GW(p)

In fact constructing this abstract system does not seem to convincingly help me further its purported goal

I think this is a common problem. That doesn't mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it's definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the "default.")

As for me, I've decided that happiness is too elusive of a goal–I'm bad at predicting what will make me happier-than-baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I did, I would have to keep working at it constantly to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too wacked out.

Replies from: aceofspades
comment by aceofspades · 2012-11-11T00:35:26.340Z · LW(p) · GW(p)

So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness and we would go back and forth about that. But this is just arguing by definition, so I won't continue along that line.

To the extent I understand the first paragraph in terms of what it actually says at the level of real-world experience, I have never seen evidence supporting its truth. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So really it doesn't seem that we disagree about anything important.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-11T01:40:37.572Z · LW(p) · GW(p)

But this is just arguing by definition, so I won't continue along that line.

Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it's easy to evaluate my subgoals and measure how well I'm achieving them. But maybe you find it simpler to have only one mental construct, "happiness", instead of lots.

The second paragraph seems to say what I intended the second paragraph of my previous comment to mean.

I guess I explicitly don't allow myself to have abstract systems with no measurable components and/or clear practical implications–my concrete goals take up enough mental space. So my automatic reaction was "you're doing it wrong," but it's possible that having an unconnected mental system doesn't sabotage your motivation the same way it does mine. Also, "what I actually end up doing" doesn't, to me, have the connotation of "choosing and achieving subgoals"; it has the connotation of not having goals. But it sounds like that's not what it means to you.

comment by chaosmosis · 2012-11-10T01:54:56.587Z · LW(p) · GW(p)

I would argue that the altruism should be part of the selfish utility function. The reason you care about other people is that you value other people. If you did not value other people, there is no reason they should be in your utility function.

Replies from: wedrifid, aceofspades
comment by wedrifid · 2012-11-10T02:10:34.405Z · LW(p) · GW(p)

I would argue that the altruism should be part of the selfish utility function.

Excellent! This nuance of what "selfish" means is something I find myself reiterating all too frequently. (Where the latter means I've done it at least three times that I can recall.)

comment by aceofspades · 2012-11-11T00:28:10.523Z · LW(p) · GW(p)

This is reaching the point of just arguing about definitions, so I reject this line of discussion as well.

Replies from: chaosmosis
comment by chaosmosis · 2012-11-11T00:36:44.380Z · LW(p) · GW(p)

It's not an argument about definitions; it's an argument about logical priority. Altruistic impulses are logically a subset of selfish ones, since all impulses are selfish because they're only experienced internally. (I'm using "impulse" as roughly synonymous with an action taken because of values.) Altruism is only relevant to your morality insofar as you value altruistic actions. Altruism can only be justified on somewhat selfish grounds. (To clarify, it can be justified on other grounds but I don't think those grounds make sense.)

Replies from: Swimmer963, aceofspades
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-25T02:40:56.434Z · LW(p) · GW(p)

all impulses are selfish because they're only experienced internally.

I think defining "selfish" as "anything experienced internally" is very limiting definition that makes it a pretty useless word. The concept of 'selfishness' can only be applied to human behaviour/motivations–physical-world phenomena like storms can't be selfish or unselfish, it's a mind-level concept. Thus, if you pre-define all human behaviour/motivations as selfish, you're ruling out the opposite of selfishness existing at all. Which means you might as well not bother with using the word "selfish" at all, since there's nothing that isn't selfish.

There's also the argument of common usage–it doesn't matter how you define a word in your head, communication is with other people, who have their own definition of that word in their heads, and most people's definitions are likely to be the common usage of the word, since how else would they learn what the word means? Most people define "selfishness" such that some impulses are selfish (i.e. Sally taking the last piece of cake because she likes cake) and some are not selfish (Sally giving Jack the last piece of cake, even though she wants it, because Jack hasn't had any cake yet and she already had a piece.) Obviously both of those reactions are the result of impulses bouncing around between neurons, but since we don't have introspective access to our neurons firing, it's meaningful for most people to use selfishness or unselfishness as labels.

Replies from: TorqueDrifter, chaosmosis
comment by TorqueDrifter · 2012-11-26T20:25:23.685Z · LW(p) · GW(p)

To comment on the linguistic issue, yes this particular argument is silly, but I do think it is legitimate to define a word and then later discover it points out something trivial or nonexistent. Like if we discovered that everyone would wirehead rather than actually help other people in every case, then we might say "welp, guess all drives are selfish" or something.

comment by chaosmosis · 2012-11-25T06:34:05.622Z · LW(p) · GW(p)

Sally doesn't give Jack the cake because Jack hasn't had any, rather, Sally gives Jack the cake because she wants to. That's why explicitly calling the motivation selfish is useful, because it clarifies that obligations are still subjective and rooted in individual values (it also clarifies that obligations don't mandate sacrifice or asceticism or any other similar nonsense). You say that it's obvious that all actions occur from internally motivated states as a result of neurons firing, but it's not obvious to most people, which is why pointing out that the action stems from the internal desires of Sally is still useful.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-11-25T07:29:06.631Z · LW(p) · GW(p)

Why not just specify to people that motivations or obligations are "subjective and rooted in individual values"? Then you don't have to bring in the word "selfish", with all its common-usage connotations.

Replies from: chaosmosis
comment by chaosmosis · 2012-11-26T19:59:40.422Z · LW(p) · GW(p)

I want those common-usage connotations brought in because I want to eradicate the taboo around those common-usage connotations, I guess. I think that people are vilified for being selfish in lots of situations where being selfish is a good thing, at least from that person's perspective. I don't think that people should ever get mad at defectors in Prisoner's Dilemmas, for example, and I think that saying that all of morality is selfish is a good way to fix this kind of problem.

comment by aceofspades · 2012-11-14T01:13:36.832Z · LW(p) · GW(p)

This line of discussion says nothing on the object level. The words "altruistic" and "selfish" in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real world behavior.

Replies from: chaosmosis
comment by chaosmosis · 2012-11-14T05:23:25.569Z · LW(p) · GW(p)

Altruistic behavior is usually thought of as motivated by compassion or caring for others, so I think you are wrong. You are the one arguing about definitions in order to trivialize my point, if anything.

Replies from: aceofspades
comment by aceofspades · 2012-11-19T20:08:36.011Z · LW(p) · GW(p)

The reason I rejected the utility function and why I rejected this argument is that I judged them useless.

What would you recommend people do, in general? I think this is a question that is actually valuable. At the least I would benefit from considering other people's answers to this question.

Replies from: chaosmosis
comment by chaosmosis · 2012-11-20T22:10:18.509Z · LW(p) · GW(p)

I don't understand how your reply is responsive.

I recommend that people act in accordance with their (selfish) values because no other values are situated so as to be motivational. Motivation and values are brute facts, chemical processes that happen in individual brains, but that actually gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics - it's not the aggregate amount of well being that matters, it's only mine.

Because I believe this, recognizing that altruism is a subset of egoism is important for my system of ethics. I still believe in altruistic behavior, but only that which is motivated by empathy as opposed to some abstract sense of duty or fear of God's wrath or something.

Does my position make more sense now?

Replies from: aceofspades
comment by aceofspades · 2012-11-25T02:26:13.452Z · LW(p) · GW(p)

Do you disagree with any matters of fact that I have asserted or implied? When you try to have a discussion like you are trying to have, about "logical necessity" and so on, you are just arguing about words. What do you predict about the world that is different from what I predict?

Replies from: chaosmosis
comment by chaosmosis · 2012-11-26T19:56:59.770Z · LW(p) · GW(p)

I think that it is important to recognize the relationships between thought processes, because having a well-organized mind allows us to change our minds more efficiently, which improves the quality of our predictions. So long as you recognize that all moral behavior is motivated by internal experiences and values, I don't really care what you call it.

comment by Jakeness · 2012-11-08T05:04:10.328Z · LW(p) · GW(p)

Thanks for posting this. I always enjoy these "in-practice" oriented posts, as I feel they help me check if I truly understand the concepts I learn here, in a similar way that example problems in textbooks check if I know how to correctly apply the material I just read.

comment by Selquist · 2018-05-29T22:44:59.913Z · LW(p) · GW(p)

I would be interested in an updated checklist. This seems potentially quite useful for a single post.

Replies from: Raemon
comment by Raemon · 2018-05-29T22:55:37.403Z · LW(p) · GW(p)

I'm not 100% sure how different it is, but CFAR's website has what is presumably the most up to date version.

comment by MaoShan · 2012-11-14T03:17:34.341Z · LW(p) · GW(p)

There are some good ideas here that I can pick up on. Among the things that I already successfully implement, it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, "Oh..kay, I'm talking to myself?" That makes it easier to remember that I'm not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.)

Example: This morning, I kept getting the feeling that something was not quite right; I felt lighter for some reason. I recognized that feeling as Jeffery trying to tell me something, so I had to stop and evaluate what I had done that morning so far. I realized that I was still wearing my slippers, and probably would not have realized it until I retracted my kickstand to leave for work. I gave credit where credit is due, and thought (without speaking) "Good catch, Jeffery!"

(Jeffery [spelled that way because I "mistyped" it both times just now, before deciding that that's how he wants to spell it] is the one who handles the autopilot functions of my daily life, and while he does his best in unfamiliar situations, usually does not consult and does foolish things unless I have programmed him with routines. He is named after the anthropomorphic half chicken/half goat/half man protector of the "Deadly Maze" in Chowder. I interpreted the Deadly Maze as an allegory for the subconscious mind.)

Replies from: aleksiL, Michelle_Z, Kenny
comment by aleksiL · 2012-11-18T12:04:21.303Z · LW(p) · GW(p)

Interesting, I've occasionally experimented with something similar but never thought of contacting Autopilot this way. Yeah, that's what I'll call him.

I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I'd forget about writing this reply.

Replies from: MaoShan
comment by MaoShan · 2012-11-19T03:41:42.029Z · LW(p) · GW(p)

It's as if your own body is a guy that does his job if you train him right, but makes stupid decisions when something unexpected happens. I just take a more literal approach with the interaction. I also refer to him as "my answering machine" when I am woken up in the middle of the night. It took my wife a while to realize that the person she was talking to was "not me". My answering machine can make perfectly normal-sounding replies to normal questions, but is unable to come up with creative answers to unusual questions, and I have no memory of the events. Another unnamed, possibly separate module runs when my body is alarmed, but I am not yet conscious. It constantly asks for data, verbally questioning other humans nearby, "What is happening? What is going on? What time is it?" Unlike situations with the answering machine, I retain conscious memory of the occurrence, but not from a first-person perspective, more like I remember somebody telling me about what happened, but in this case that person was (allegedly) me.

comment by Michelle_Z · 2012-12-24T00:51:17.055Z · LW(p) · GW(p)

Funny. I do something similar, except I call mine "Planner," "Want," "Bum," and "Cynic." I never really considered my autopilot mode anything particular. Usually I just do this when I am struggling with motivation, and usually those four concepts are the main issue: planning to do something, then wanting to do something else, feeling like not doing anything, and realizing I'm not going to do it so why bother anyway... and reminding myself that they're learned habits and I can get rid of them if I bring in new habits.

comment by Kenny · 2015-02-21T17:47:51.048Z · LW(p) · GW(p)

This is basically the Internal Family Systems Model, though its focus is therapy, i.e. improving dysfunctional behavior.

But your point of regularly communicating with your various 'parts' seems like a really good idea. How well have you maintained this as a habit since your comment?

comment by ialdabaoth · 2012-11-11T07:46:12.714Z · LW(p) · GW(p)

I'm currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:

If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you're "smart enough", whether your partner is "inconsiderate", or if you're "trying to do the right thing".)

Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously. If I continue to try to force the taboo, this eventually develops into self-harming behavior.

Another example is 5.2:

I quantify consequences—how often, how long, how intense.

Whenever I attempt to quantify consequences, I have to push through absurd imaginings - if I believe someone is angry at me, even if they're a good friend, my imagination tends to produce vivid imagery of them dismembering, raping and torturing me while simultaneously performing actions to keep me alive longer, even if I know they don't even possess the skills necessary to perform the acts I'm imagining. It takes an extraordinary amount of mental effort and energy to push through that to actually quantify consequences.

Another example is 6.2:

I talk to my friends or deliberately use other social commitment mechanisms on myself.

I tend to not have very many friends that I can commit to, and when I do, I tend to only use commitment to perform a self-shaming and self-punishment cycle, rather than to actually goad me to perform the desired behavior.

Replies from: aelephant
comment by aelephant · 2012-11-12T11:30:31.951Z · LW(p) · GW(p)

Is your mental illness being treated? Are you seeing someone trained & experienced in managing mental illness? I would put much, much more emphasis on getting to a place where you aren't self-harming than on trying to develop rationality habits, especially if the latter seems to be interfering with the former.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T00:52:18.631Z · LW(p) · GW(p)

No, because I'm currently not good at keeping a job, and equally not good at navigating the bureaucracies necessary to suckle on the government's teat. "Getting to a place where I'm not self-harming" is a nice pipe dream, but as it is, we optimise for those goals which we can actually stand a reasonable chance of accomplishing.

Put another way: let P_t(n) be my probability of getting into therapy after expending n units of resource on getting into therapy, and U_t(n) the utility of getting therapy after spending those n units; likewise, let P_r(n) be the probability of becoming more rational after spending n units trying to become rational, and U_r(n) the utility of becoming more rational after spending those n units. If I only have n resource units available, and P_t(n)·U_t(n) < P_r(n)·U_r(n), then I know what to spend those n resource units on, no matter how much P_t(n+δ)·U_t(n+δ) > P_r(n+δ)·U_r(n+δ), because I don't have that extra δ worth of resource units.
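A toy rendering of that decision rule (every number and function below is a placeholder of mine, not the commenter's situation):

```python
def best_use(n, P_t, U_t, P_r, U_r):
    # With only n units available, fund whichever option has the higher P(n)*U(n).
    return "therapy" if P_t(n) * U_t(n) > P_r(n) * U_r(n) else "rationality"

print(best_use(
    3,
    P_t=lambda n: 0.05 * n, U_t=lambda n: 10.0,  # hard to access, big payoff
    P_r=lambda n: 0.30 * n, U_r=lambda n: 2.0,   # easy to start, modest payoff
))  # -> "rationality": 0.15 * 10 = 1.5 < 0.9 * 2 = 1.8
```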

Sometimes poor people make what looks like bad choices from the outside because it's the best choice they have.

Replies from: aelephant
comment by aelephant · 2012-11-13T10:30:10.245Z · LW(p) · GW(p)

I'm not much for suckling on the government's teat either. How much of a chance do you think you'd have of keeping a job if you put your mind to it?

There could be other options aside from therapy. A lot of people that I respect have recommended Nathaniel Branden's books. I have heard a little about Internal Family Systems (IFS) as well, which as far as I know can be done by yourself. I'm by no means an expert, but maybe these can act as leads for you to get started on your own (presuming you haven't already looked into them).

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T19:50:36.244Z · LW(p) · GW(p)

How much of a chance do you think you'd have of keeping a job if you put your mind to it?

Empirically, a very poor one. Or rather, more accurately: I either have a very poor chance of keeping a job if I put my mind to it, OR I have a very poor chance of putting my mind to it. I'm not sure how to tell which is actually the case, right now, but maybe I could tell if I actually put my mind to it (heh).

Unfortunately, since "putting my mind to things" is a big part of what's actually broken, I'm not sure where to proceed - or even whether I should proceed. Often times, my strongest impulse leans towards slapping a big "DEFECTIVE" label on my forehead and tossing myself in the recycle bin.

Replies from: TimS
comment by TimS · 2012-11-13T20:00:10.992Z · LW(p) · GW(p)

I urge you to strongly consider the possibility that your mind is telling you that you don't like this kind of work. At best, "defective" is a circular label, not an analytical result of your personality.

That may not be the most useful information, economically speaking. But it may help you avoid generalizing your experiences at the current job on to future jobs. In short, you aren't lazy, you just haven't found situations that put you in a position to succeed (by ensuring sufficient appropriate motivation).

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T20:32:47.681Z · LW(p) · GW(p)

I used to think that way. The frustrating thing is, I used to LOVE work of all kind. What I hated was people with arbitrary power over me deliberately sabotaging my work, mostly (it seemed) because they were angry that I enjoyed it so much. One of the most powerful lessons I ever learned was that people at my socioeconomic level don't GET to "enjoy" their work. Even by accident.

I never really learned diplomacy and power politics, primarily due to being taught a form of "learned helplessness" about it when I was very young (I was not in a socioeconomic class where it was appropriate to display the amount of enthusiasm, talent and intelligence that I had, and I didn't know how to hide it).

Unfortunately, this led to making a lot of really, really bad political mistakes, each of which slowly eroded my enthusiasm at doing... well, at this point, at doing anything.

After a few years of being out of practice, I now find that I can't even bring myself to get out of bed in the morning and work on something interesting, because "what's the point?"

To me, there is NO difference between "lazy" and "haven't found situations that put you in a position to succeed". They are IDENTICAL. If society doesn't put you in positions to succeed, it has decided that you are lazy, and that means you ARE lazy. Agency has nothing to do with culpability, only blame.

Replies from: TimS, TimS
comment by TimS · 2012-11-13T21:11:22.174Z · LW(p) · GW(p)

Your rules seemed designed to sabotage you by making you feel miserable. The impulse to create scripts of how interactions are supposed to go is a good one, but the point of these scripts is to prepare you to succeed.

You need a new social environment. If none of the people you hang out with is really your friend, stop spending time with them. Particularly if they aren't emotionally safe.

We talked about boardgaming as one possible new environment. What about charitable volunteering? If you find the right charity, the organizations are desperate for your help.

Regardless of what specific thing you do, find something to succeed at. Don't set the bar ridiculously high - if what you can do is show up, then find something where showing up is success. You are absolutely worth it. Your negative feelings are a habit that you can break.

Where do you live? Maybe I can help? (Private message if you prefer).

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T21:19:58.444Z · LW(p) · GW(p)

This post is being made while repressing a massive array of scripted responses, so if it bounces around or seems incoherent, it's because only a VERY small portion of my brainpower is currently available for rational analysis.

  1. I tend to sabotage friendships, due to being inherently distrustful / untrustworthy (my cynical disposition has led me to believe that these are ultimately the same thing). Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?

  2. I've done some charitable volunteering, but over the past few years I've had very little energy for anything. I tend to have less than half an hour's worth of useful energy per day for anything that involves leaving my little hovel, and by the end of that half an hour I tend to start socially self-destructing.

  3. It's not as much a problem that friends aren't emotionally safe for me, as that I am not emotionally safe for me. Actual friends tend to actually empathize, which means that they quickly become freaked out and leave when they realize how helpless they are to do anything but watch me self-harm. This provides a filter that ensures that when I DO absolutely need emotional interaction with other human beings, the only ones who are left are the ones who don't care as much about the waves of misery I'm exuding.

Replies from: TimS
comment by TimS · 2012-11-13T21:27:00.245Z · LW(p) · GW(p)

Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?

Makes sense. Whether you believe it or not, I'm not doing this for my benefit. I care about you, and so does everyone else who is offering you advice.

This post is being made while repressing a massive array of scripted responses.

Do you think these scripts make you happier? Are there changes to the scripts that you can imagine that would cause them to make you happier?

More generally, is there any change you could make in your life that you think you would really make that would lead to any increased happiness? If there are reasons to not make that change, do you think the reasons are realistic in likelihood and in magnitude?

My experience with anxiety is that the feelings never went away, I just got better at doing what I thought needed doing, even with the anxious feelings.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T21:42:17.693Z · LW(p) · GW(p)

Do you think these scripts make you happier?

No, but I have spent almost 30 years doing script-modification, and I be sore tired.

Are there changes to the scripts that you can imagine that would cause them to make you happier?

Possibly, but the effort involved in doing more script-modification is no longer something I have the energy for.

My experience with anxiety is that the feelings never went away, I just got better at doing what I thought needed doing, even with the anxious feelings.

Absolutely. That's how I describe most of what people call my "super-powers". I tend to be amazingly competent in crisis situations, simply because I don't panic: I immediately assess the best plan of action, identify everyone who is panicking, and give them short commands that are clearly identifiable as helping the situation, so they feel like they can actually do something about whatever's terrifying them. People have asked me how I manage to be completely unafraid of life-or-death situations, and I've simply explained, "Of course I'm completely terrified. I just do it anyway." (And then I usually go throw up, because if the situation has calmed enough that people can ask me how I pulled it off, then it has calmed enough that I can go throw up.)

More generally, is there any change you could make in your life that you think you would actually make, and that would lead to increased happiness? If there are reasons not to make that change, do you think the reasons are realistic in likelihood and in magnitude?

The problem is, I've already tried to solve this problem by editing out "personal happiness" as a goal to seek. I spent about 5 years on this, and in the process have managed to edit out a good amount of personal identity, self-preservation, and so on. It turns out there are biological safeguards in place that keep me from going all the way with it, so what I've got is a collection of extraordinarily buggy and non-adaptive scripts, usually running in direct competition with each other and tying up all my system resources without actually accomplishing anything whatsoever.

Of course, since they're using up all my system resources, I no longer have enough free processor or swap space to further modify my scripts. I'm kinda stuck without outside resources, and I'm no longer capable of generating those.

But ultimately, neurological and biological systems are incredibly complex, and they all (so far as we know) break down eventually. I don't think this breakdown process is particularly extraordinary or noteworthy, compared to any other possible way that I could degrade into non-functionality.

Replies from: TimS, Strange7
comment by TimS · 2012-11-14T15:35:19.012Z · LW(p) · GW(p)

The problem is, I've already tried to solve this problem by editing out "personal happiness" as a goal to seek.

Do you think that removing personal happiness as one of your goals has helped you be more productive? What steps could you take to add some amount of personal happiness back as one of your goals? Would that be worthwhile?

Do you think it is likely that you would take those steps? If there are reasons not to take them, do you think the reasons are realistic in likelihood and in magnitude?

(I'm asking questions because I hope this will help more than other types of interactions. There's no reason that you should feel obligated to be emotionally vulnerable towards me. Without emotional vulnerability - from taking apart your personality - specific suggestions / instructions about what to change can easily be taken the wrong way. But if questions like this are coming off as passive-aggressive, I want to stop.)

comment by Strange7 · 2012-11-13T21:50:27.176Z · LW(p) · GW(p)

Have you tried being a volunteer firefighter?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T21:55:43.426Z · LW(p) · GW(p)

Actually, yes! Two years ago. I spent about 2 years beforehand getting into the best shape of my life - took Capoeira, spent an hour a day in the gym, ran 3 miles every morning - and I set a goal that as soon as I broke 150 lbs (starting from 110), I'd go in and apply.

Still didn't pass the physical.

comment by TimS · 2012-11-13T21:18:01.997Z · LW(p) · GW(p)

Also, this (warning, quite emotionally raw).

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-13T21:23:38.020Z · LW(p) · GW(p)

Heh. Believe it or not, that's not as much of a problem. I've lived with constant suicidal ideation for almost 27 years now, since I was 12. I've become almost completely inured to it, and I've performed enough unsuccessful attempts that my mid-brain has learned very well not to bother. It's amusing to think that learned helplessness can be turned into a tool to combat suicidal ideation, but there it is. (I imagine this is why so many anti-depressants increase the risk of suicide - the learned helplessness is a tighter cycle, so it gets lifted faster, at which point the ideation hasn't faded yet and suddenly you imagine the possibility of something actually working, and it all finally being over for real.)

comment by incariol · 2012-11-11T23:42:09.668Z · LW(p) · GW(p)

What about "when faced with a hard problem, close your eyes, clear your mind and focus your attention for a few minutes to the issue at hand"?

It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem or another and don't see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.

comment by jooyous · 2012-11-08T03:33:50.350Z · LW(p) · GW(p)

At some point I started feeling like my bf is more interested in telling me things than in having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn't, and it kinda felt like it matched my feeling, since I had several more examples of one than the other. But I didn't document them carefully or anything, so how do I know I'm not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a ... feeling?

Replies from: TheOtherDave, Decius, Manfred
comment by TheOtherDave · 2012-11-08T14:55:35.973Z · LW(p) · GW(p)

In your position, I would do a few different things.

One is what you describe: actually count instances and see if the pattern conforms to my expectations.

But also, I would try to articulate more clearly what the choices are. That is, what do I look for when I want to see if he is interested in having a conversation? Am I looking for him to listen to what I have to say? To ask questions about it? To not challenge it when he disagrees? To look directly at me and not do other things while I'm talking? To allow me to pause in the middle of what I'm saying without treating that as an opportunity to change the subject? Something else? All of the above?

Also, I would ask myself what would follow if it turned out that I was overcounting confirmations. That is, let's say I conclude that one thing that makes me feel like my boyfriend isn't interested in having a conversation with me is when he interrupts me. I might ask myself: suppose I start actually counting instances and conclude that he only interrupts me one conversation out of ten, when I had estimated it was nine conversations out of ten. It would be likely, then, that I'd succumbed to confirmation bias.

But... what follows from that?

One possibility is "Oh... well, 10% interruptions isn't that big a deal. I should get over it."
Another possibility is "Clearly, 10% interruptions is enough to upset me. We should try for a lower rate."

Knowing how I would go about making that choice for a measured probability once I have it is, IME, an important part of actually improving the system. Otherwise I'm just making measurements.
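
To make the "actually count instances" step concrete, here is a minimal sketch in Python. Everything in it is illustrative - the log entries, the 10% threshold, and the 90% gut estimate are just the hypothetical figures from the paragraphs above, not real data:

    # Illustrative only: a tiny log with one entry per conversation.
    conversations = [
        {"interrupted": True},
        {"interrupted": False},
        {"interrupted": False},
        # ... one entry per conversation you remembered to log
    ]

    measured = sum(c["interrupted"] for c in conversations) / len(conversations)
    gut_estimate = 0.9  # "nine conversations out of ten", as it felt

    print(f"Measured rate: {measured:.0%} vs. gut estimate: {gut_estimate:.0%}")

    # Decide *in advance* what each outcome means, so the measurement
    # feeds an actual choice rather than just sitting there:
    threshold = 0.10  # the rate you would still consider worth raising
    if measured > threshold:
        print("Worth raising, even if lower than it felt.")
    else:
        print("Maybe the feeling has another source worth exploring.")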

Replies from: chaosmosis, jooyous
comment by chaosmosis · 2012-11-10T01:57:59.535Z · LW(p) · GW(p)

Clearly, 10% interruptions is enough to upset me. We should try for a lower rate.

I'm confused why she should measure it at all. This line of reasoning seems to preclude the need for measurement.

comment by jooyous · 2012-11-08T18:27:44.246Z · LW(p) · GW(p)

Yeah, I think this is the hardest part because in some cases, examining the actual facts does make me feel better. But in this case, if it does turn out to be 10% but the bad feeling doesn't go away, I'm going to feel like a jerk. Also, it's impossible to compare to the past at this point, which is when it felt like we had more real conversations, but I have no data from it because back then I didn't have any reason to track it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-11-08T18:45:52.366Z · LW(p) · GW(p)

if it does turn out to be 10% but the bad feeling doesn't go away, I'm going to feel like a jerk

Why?

comment by Decius · 2012-11-08T07:00:53.264Z · LW(p) · GW(p)

To break confirmation bias, you need an objective log. Write down every time you recognize a confirming event, as well as every time you recognize an event which is nonconfirming. Then, estimate the likelihood that you would recognize and write down a confirming event, and the likelihood that you would recognize and write down a nonconfirming event. Use your surprise that a nonconfirming event just occurred, as well as your surprise that you noticed it and made a note of it, to form that estimate.

If you find yourself more surprised that you made a note of a nonconfirming event than that it happened, it probably happens much more often than you note it.
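
A minimal sketch of that correction in Python (all of the numbers here are hypothetical placeholders, not anything from this thread): once you have estimated how likely you are to notice and record each kind of event, you can divide the raw tallies by those probabilities to back out a bias-corrected rate:

    # Hypothetical tallies from the objective log.
    confirming_logged = 18     # confirming events you wrote down
    nonconfirming_logged = 2   # nonconfirming events you wrote down

    # Honest estimates of how often you would notice *and* record each
    # kind of event when it occurs (formed from the surprise test above).
    p_record_confirming = 0.9
    p_record_nonconfirming = 0.3

    # Dividing by the recording probability estimates true occurrence counts.
    confirming_true = confirming_logged / p_record_confirming           # 20.0
    nonconfirming_true = nonconfirming_logged / p_record_nonconfirming  # ~6.7

    naive = confirming_logged / (confirming_logged + nonconfirming_logged)
    corrected = confirming_true / (confirming_true + nonconfirming_true)
    print(f"Naive confirming rate: {naive:.0%}")          # 90%
    print(f"Corrected confirming rate: {corrected:.0%}")  # 75%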

comment by Manfred · 2012-11-08T09:59:42.001Z · LW(p) · GW(p)

This seems tricky. What is (I would guess) important about your situation is that you want to have more conversations with him. So hey, if you want to have more conversations, do things that will result in that happening.

If your number of conversations changes noticeably and that feeling doesn't go away, or you get the same feeling about something else instead, then yeah, maybe the root cause is something else. (It's like when I'm procrastinating and I feel like I really want to visit website X, and then I feel I really want to read book Y, but the feeling is really just "procrastination-feeling" from not wanting to start chore Z.)

comment by curiousepic · 2015-06-10T14:43:21.899Z · LW(p) · GW(p)

Has the checklist been revisited or optimized in any way since its original formulation? (By CFAR or otherwise?)

comment by Iabalka · 2012-11-14T14:56:19.644Z · LW(p) · GW(p)

Why are these rationality habits? Based on what? All the examples are personal. Isn't it possible to (also) give scientific examples for each habit: study ..... shows that ....., hence 1) the habit is useful for dealing with this bias, and 2) it doesn't create or reinforce other biases.

comment by Steven_Bukal · 2012-11-10T19:34:30.383Z · LW(p) · GW(p)

Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.

comment by MarkL · 2012-11-08T18:34:17.202Z · LW(p) · GW(p)

Something to add: allocating attention in the correct order:

  1. emotions
  2. felt meaning
  3. verbal thoughts

Otherwise you have the failure mode of avoiding painful emotions (even if they're being triggered erroneously) and then all sorts of bad things happen. So check in with (1) before (2) and (3). And check in with (2) before applying (3), because otherwise you're using cached thoughts.

comment by FiftyTwo · 2012-11-08T00:19:09.019Z · LW(p) · GW(p)

The PDF version is very nice-looking and very readable, thanks for making it. I think people on here often underestimate the benefits of low-hanging aesthetic fruit.

comment by goonthen · 2016-07-07T04:57:31.848Z · LW(p) · GW(p)

I just joined the community. How can I save or mark this article so it is available for me to read at any time?

Replies from: root
comment by root · 2016-07-07T05:01:36.914Z · LW(p) · GW(p)

Bookmarks in your browser. There's also the diskette icon between the two horizontal bars that separate the article and the comment section.

Replies from: gjm
comment by gjm · 2016-07-07T15:13:29.662Z · LW(p) · GW(p)

I think the "liked" tab on your user page displays precisely those articles that you've upvoted. So upvoting an article will make it available there in the future.

Replies from: Good_Burning_Plastic
comment by Good_Burning_Plastic · 2016-07-07T22:26:27.447Z · LW(p) · GW(p)

And downvoting an article will add it to the "disliked" tab. But please don't vote on articles solely for this purpose.

comment by 3p1cd3m0n · 2015-01-31T02:51:51.947Z · LW(p) · GW(p)

I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.

comment by jtmedley21 · 2012-11-19T22:36:12.390Z · LW(p) · GW(p)

Great list. My guidepost for rationality and related issues has been the work of Carl Sagan, as he wrote many books with good advice for thinking critically. His works are an absolute must-read (or must-watch) for anybody wanting to wade through the mass of misdirection that exists in the world.

comment by PaulingL · 2012-11-17T01:50:31.378Z · LW(p) · GW(p)

This all sounds quite groovy, but are there any suggestions on how I could go about incorporating these habits into my daily pattern of thought? I wonder if perhaps an Anki deck would have any merit whatsoever in accomplishing this...

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-13T06:27:49.124Z · LW(p) · GW(p)

Another one: You see a way to do things that in theory might work better than what everyone else is doing, but that in practice no one seems to use. Do you investigate it and consider exploiting it?

Example: You're trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think "hm, that's weird" and keep looking for a subreddit to submit your link to, or do you think "oh wow, karma feast!"?

Replies from: Transfuturist, Larks
comment by Transfuturist · 2013-09-21T19:39:36.187Z · LW(p) · GW(p)

Third option: turn the subreddit's style off (if you have RES), or subscribe yourself and see what happens to the subscriber count, to discover what they've been doing.

comment by Larks · 2015-04-12T05:16:11.453Z · LW(p) · GW(p)

http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months.

Apparently that subreddit just lies about how many subscribers it has.

comment by shminux · 2012-11-07T20:00:22.508Z · LW(p) · GW(p)

For each item, you might ask yourself: did you last use this habit...

Maybe it's worth a poll, if someone feels like creating one. I'm not sure how to make a multi-level poll and it probably would be too presumptuous of me to create 24 replies with one poll in each.

Replies from: AnnaSalamon, Hawisher
comment by AnnaSalamon · 2012-11-08T00:23:42.490Z · LW(p) · GW(p)

It's easy to make a checklist by going to Google Docs / Google Drive, clicking "create", and choosing "form".

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-11-08T03:47:56.834Z · LW(p) · GW(p)

The Checklist Manifesto is very interesting on what goes into an excellent checklist, as opposed to a casually constructed one. It's about institutional checklists rather than personal checklists, though.

comment by Hawisher · 2012-11-07T20:03:59.746Z · LW(p) · GW(p)

You can't do multi-response polls? As in, check all that apply?

Replies from: shminux
comment by shminux · 2012-11-07T20:15:28.893Z · LW(p) · GW(p)

There are 24 separate subquestions with 6 answer options each.