Urges vs. Goals: The analogy to anticipation and belief
post by AnnaSalamon · 2012-01-24T23:57:04.122Z · LW · GW · Legacy · 71 comments
Partially in response to: The curse of identity
Related to: Humans are not automatically strategic, That other kind of status, Approving reinforces low-effort behaviors.
Joe studies long hours, and often prides himself on how driven he is to make something of himself. But in the actual moments of his studying, Joe often looks out the window, doodles, or drags his eyes over the text while his mind wanders. Someone sent him a link to which college majors lead to the greatest lifetime earnings, and he didn't get around to reading that either. Shall we say that Joe doesn't really care about making something of himself?
The Inuit may not have 47 words for snow, but Less Wrongers do have at least two words for belief. We find it necessary to distinguish between:
- Anticipations, what we actually expect to see happen;
- Professed beliefs, the set of things we tell ourselves we “believe”, based partly on deliberate/verbal thought.
This distinction helps explain how an atheistic rationalist can still get spooked in a haunted house; how someone can “believe” they’re good at chess while avoiding games that might threaten that belief [1]; and why Eliezer had to actually crash a car before he viscerally understood what his physics books tried to tell him about stopping distance going up with the square of driving speed. (I helped Anna revise this - EY.)
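A quick sanity check on that square law, as a minimal sketch assuming constant braking deceleration $a$: the standard kinematics identity gives

$$v^2 = 2ad \quad\Longrightarrow\quad d = \frac{v^2}{2a},$$

so doubling your speed quadruples the distance needed to stop.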
A lot of our community technique goes into either (1) dealing with "beliefs" being an evolutionarily recent system, such that our "beliefs" often end up far screwier than our actual anticipations; or (2) trying to get our anticipations to align with more evidence-informed beliefs.
And analogously - this analogy is arguably obvious, but it's deep, useful, and easy to overlook in its implications - there seem to be two major kinds of wanting:
- Urges: concrete emotional pulls, produced in System 1's perceptual / autonomic processes (my urge to drink the steaming hot cocoa in front of me; my urge to avoid embarrassment by having something to add to my accomplishments log)
- Goals: things we tell ourselves we’re aiming at, within deliberate/verbal thought and planning (I have a goal to exercise three times a week; I have a goal to reduce existential risk)
Implication 1: You can import a lot of technique for "checking for screwy beliefs" into "checking for screwy goals".
Urges, like anticipations, are relatively perceptual-level and automatic. They're harder to reshape and they're also harder to completely screw up. In contrast, the flexible, recent "goals" system can easily acquire goals that are wildly detached from what we actually do, wildly detached from any positive consequences, or both. Some techniques you can port straight over from "checking for screwy beliefs" to "checking for screwy goals" include:
The fundamental:
- "What's the positive consequence?" This is the equivalent of "What's the evidence?" for beliefs. All the other cases involve not asking it, or not asking hard enough.
The Hansonian:
- Goals as clothes / goals as tribal affiliation: “We are people who have free software (/ communism / rationality / whatever) as our goal”. Before you install Linux, do you think "What's the positive consequence of installing Linux?" or does it just seem like the sort of thing a free-software-supporter would do? (EY says: What positive consequence is achieved by marching in an Occupy Wall Street march? Can you remember anyone stating one, throughout the whole affair - "if we march, X will happen because of Y"?)
- Goals as a signal of one’s value as an ally: Sheila insists that she wants to get a job. We inspect her situation and she's not trying very hard to get a job. But she's in debt to a lot of her friends and is borrowing more to live on a month-to-month basis. It's not hard to see why Sheila would internally profess strongly that she has a goal of getting a job.
- Goals as personal fashion statements: A T-Shirt that says “Give me coffee and no one gets hurt” seems to state a very strong desire for coffee. This is clearly a goal professed directly to affect how others see you, and it's more a question of affecting a 'style' than anything directly tribal or status-y.
The satiating:
- Having goals as optimism: "I intend to lose weight" can be created by much the same sort of internal processes that would make you believe "I will lose weight", in cases where the goal (belief) would not yet seem very plausible to an outside view.
- Having goals as apparent progress: My current to-do list has "write thank-you notes for wedding gifts". This makes me feel like I've appeased the demand for internal attention by having a goal. (EY: I have "send Anna and Carl their wedding gift" on my todo list. This was very effective at appeasing the need to send them a wedding gift.)
Implication 2: "Status" / "prestige" / "signaling" / "people don't really care about" is way overused to explain goal-urge delinkages that can be more simply explained by "humans are not agents".
This post was written partially in response to The Curse of Identity, wherein Kaj recounts some suboptimal goal-action linkages - wanting to contribute to the Singularity, then teaching himself to feel guilty whenever not working; founding the Finnish Pirate Party, then becoming its spokesperson, which involved tasks he wasn't good at; helping Eliezer write his book, and feeling demotivated because it seemed like work "anyone could do" (which is just the sort of work that almost nobody is motivated to do).
Kaj forms the generalization "as soon as my brain adopted a cause, my subconscious reinterpreted it as the goal of giving the impression of doing prestigious work for the cause". I worry that our community has a tendency to explain as e.g. status signaling or "people really don't care about X", observations that can also be explained by less malice/selfishness and more "our brains have known malfunctions at linking goals to urges". People are as bad at looking into hospitals for their own health as for the sake of their parents' health; Kaj didn't actually gain much prestige from feeling guilty about his relaxation time.
We do have a status urge. It does affect a lot of things. People do tend to massively systematically understate it in much the same way that Victorians pretended that sex wasn't everywhere. But that's not the same cognitive problem as "Our brain is pretty bad at linking effective behaviors to goals, and will sometimes reward us for just doing things that seem roughly associated with the goal, instead of actions that cause the consequence of the goal being achieved." And our brains not being coherent agents is something that's even more massive than status.
Implication 3: Humans cannot live by urges alone
Like beliefs, goals often get much wackier than urges. I've seen a number of people react to this realization by concluding that they should give up on having goals, and lead an authentic life of pure desire. This wouldn't work any more than giving up on having beliefs. To precisely anticipate how long it takes a ball to fall off a tower, you have to manipulate abstract beliefs about gravitational acceleration. I have an urge to drive a car that runs smoothly, but if I didn't also have a goal of having a well-maintained car, I would never get around to having it serviced - I have no innate urge to do that.
I really have seen multiple people (some of whom I significantly cared about) malfunctioning as a result of misinterpreting this point. As a stand-alone system for pulling your actions, urges have all kinds of problems. Urges can pull you to stare at an attractive stranger, to walk to the fridge, and even to sprint hard for first base when playing baseball. But unless coupled with goals and far-mode reasoning, urges will not pull you to the component tasks required for any longer-term goods. When I get into my car I have a definite urge for it not to be broken. But absent planning, there would never be a moment when the activity I most desired was to take my car for an oil change. To find and keep a job (let alone a good job), live in a non-pigsty, or learn any skills that are not immediately rewarding, you will probably need goals. Even though human goals can easily turn into fashion statements and wishful thinking.
Implication 4: Your agency failures do not imply that your ideals are fake.
Obvious but it needs to be said: People are as bad at looking into hospitals for their own health as for the sake of their parents' health. It doesn't mean that they don't really care about their parents, and it doesn't mean that they don't really care about survival. They would probably run away pretty fast from a tiger, where the goal connected to the urge in an ancestrally more reliable way and hence made them more 'agenty'; and they might fight hard to defend their parents from a tiger too.
There's a very real sense in which our agency failures imply that human beings don't have goals, but this doesn't mean that our ungoaly ideals are any more ungoaly than anything else. Ideals can be more ungoaly because they're sometimes about faraway things or less ancestral things - it's probably easier to improve your agency on less idealy goals that link more quickly to urges - but as entities which can look over our own urges and goals and try to improve our agentiness, there's no rule which says that we can't try to solve some hard problems in this area as well as some easy ones.[2]
Implication 5: You can align urges and goals using the same sort of effort and training that it takes to align anticipations and beliefs.
Although I've heard people saying that we discuss willpower-failure too much on Less Wrong, most of the best stuff I've read has been outside Less Wrong and hasn't made contact with us. For a starting guide to many such skills, see Eat That Frog by Brian Tracy [3]. Some basic alignment techniques include:
- Get in the habit of asking "What is the positive consequence?" (Probably more needs to be written about this so that your brain doesn't just answer "I'll be a free software supporter!" which is not what we mean to ask.)
- Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"
- Whenever you sit down to work, naming a single, high-priority accomplishment for that session. Visualizing that accomplishment, and its positive rewarding consequences, until you have an urge for it to happen (instead of just having an urge to log today's hours).
And much the same way that a lot of craziness stems, not so much from "having a wrong model of the world", as "not bothering to have a model of the world", a lot of personal effectiveness isn't so much about "having the right goals" as "bothering to have goals at all" - where unpacking this somewhat Vassarian statement would lead us to ideas like "bothering to have something that I check my actions' consequences against, never mind whether or not it's the right thing" or "bothering to have some communication-related urge that animates my writing when I write, instead of just sitting down to log a certain number of writing hours during which I feel rewarded from rearranging shiny words".
Conclusion:
Besides an aspiring rationalist, these days I call myself an "aspiring consequentialist".
[1] IMO the case of somebody who has the belief "I am good at chess", but instinctively knows to avoid strong chess opponents that would potentially test the belief, ought to be a more central example in our literature than the person who believes they have a dragon in their garage (but instinctively knows that they need to specify that it's invisible, inaudible and generates no carbon dioxide, when we show up with the testing equipment).
[2] See also Ch. 20 of Methods of Rationality:
Professor Quirrell: "Mr. Potter, in the end people all do what they want to do. Sometimes people give names like 'right' to things they want to do, but how could we possibly act on anything but our own desires?"
Harry: "Well, obviously I couldn't act on moral considerations if they lacked the power to move me. But that doesn't mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!"
[3] Thanks to Patri for recommending this book to me in response to an earlier post. It is perhaps not written in the most LW-friendly language -- but, given the value of these skills, I’d recommend wading in and doing your best to pull useful techniques from the somewhat salesy prose. I found much of value there.
71 comments
Comments sorted by top scores.
comment by Cosmos · 2012-01-24T08:32:45.843Z · LW(p) · GW(p)
I have also found Eat That Frog to be an unusually good collection of the major productivity techniques. Incidentally, I also heard about the book from Patri via Divia.
For a shorter and more rationality-friendly version of the book, I summarized it here:
EDIT: http://becomingeden.com/summary-of-eat-that-frog/
Replies from: AnnaSalamon, witzvo, quentin
↑ comment by AnnaSalamon · 2012-01-24T08:41:27.586Z · LW(p) · GW(p)
Great summary; just read it and bookmarked it. Much thanks for writing this. I had thought I needed to reread Eat That Frog but had been reluctant to take the hours required; now I don't have to.
Replies from: Cosmos
comment by Grognor · 2012-01-24T08:52:10.781Z · LW(p) · GW(p)
this analogy is arguably obvious, but it's deep, useful, and easy to overlook in its implications - there seem to be two major kinds of wanting:
and
Obvious but it needs to be said: People are as bad at looking into hospitals for their own health as for the sake of their parents' health.
I found neither of these things the least bit obvious. I hadn't realized Implication 4 until I had been reading Less Wrong for many months and it was not obvious in retrospect. I hadn't even considered the distinction between urges and goals at all, though it did seem obvious in retrospect - only in retrospect.
I say this because I have had a ton of trouble grasping the concept that things that are obvious to me aren't necessarily obvious to other people.
(Though I don't want to make the same mistake and assume that other people also have this problem.)
comment by CronoDAS · 2012-01-24T06:02:54.144Z · LW(p) · GW(p)
I really have seen multiple people (some of whom I significantly cared about) malfunctioning as a result of misinterpreting this point. As a stand-alone system for pulling your actions, urges have all kinds of problems. Urges can pull you to stare at an attractive stranger, to walk to the fridge, and even to sprint hard for first base when playing baseball. But unless coupled with goals and far-mode reasoning, urges will not pull you to the component tasks required for any longer-term goods. When I get into my car I have a definite urge for it not to be broken. But absent planning, there would never be a moment when the activity I most desired was to take my car for an oil change. To find and keep a job (let alone a good job), live in a non-pigsty, or learn any skills that are not immediately rewarding, you will probably need goals. Even though human goals can easily turn into fashion statements and wishful thinking.
I sort of run this way. Contrary to the description, though, I sometimes do get urges to clean, do laundry, etc. This usually occurs when I happen to be annoyed by the feel of dirt on my bare feet, or find my clothes hamper full, or some other stimulus triggers the behavior. Incidentally, I also am the one in my family who takes the cars for oil changes.
On the other hand, I also have no job. I have a hard time acting on anything that I don't have an urge to do; fortunately or unfortunately, I also have parents to provide me with reasons to have urges to do things I wouldn't otherwise have an urge to do. (This might also be why I once said I didn't have an understanding of how people did things they didn't feel like doing, because the process I use to decide what activity to do at any given moment seems to consist of weighing my various urges in order to figure out what it is that I "feel like" doing, and then doing it, which is a process that almost entirely relies on emotional/unconscious processes rather than conscious verbal reasoning.)
Replies from: AnnaSalamon, Swimmer963
↑ comment by AnnaSalamon · 2012-01-24T07:59:54.176Z · LW(p) · GW(p)
fortunately or unfortunately, I also have parents to provide me with reasons to have urges to do things I wouldn't otherwise have an urge to do.
A good point.
Social incentives that directly incentivize the immediate steps toward long-term goals seem to be key to a surprisingly large portion of functional human behavior.
People acquire the habit of wearing seatbelts in part because parents'/friends' approval incentivizes it; I don't want to be the sort of person my mother would think reckless. (People are much worse at taking safety measures that are not thus backed up by social approval; e.g. driving white or light-colored cars reduces one's total driving-related death risk by on the order of 20%, but this statistic does not spread, and many buy dark cars.)
People similarly bathe lest folks smell them, keep their houses clean lest company be horrified, stick to exercise plans and study and degree plans and retirement savings plans partly via friends' approval, etc.; and are much worse at similar goals for which there are no societally cached social incentives for goal-steps. The key role social incentives play in much of this apparently long-term action is one reason people sometimes say "people do not really care about charity, their own health, their own jobs, etc.; all they care about is status".
But contra Robin, the implication is not "humans only care about status, and so we pretend hypocritically to care about our own survival while really basically just caring about status"; the implication is "humans are pretty inept at acquiring urges to do the steps that will fulfill our later urges. We are also pretty inept at doing any steps we do not have a direct urge for. Thus, urges to e.g. survive, or live in a clean and pleasant house, or do anything else that requires many substeps… are often pretty powerless, unless accompanied by some kind of structure that can create immediate rewards for individual steps."
(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)
Replies from: multifoliaterose, None
↑ comment by multifoliaterose · 2012-01-25T22:33:55.745Z · LW(p) · GW(p)
(People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.)
Is lack of social skills typically the factor that prevents unhappily single folk from finding relationships? Surely this is true in some cases but I would be surprised to learn that it's generic.
↑ comment by [deleted] · 2012-01-29T22:23:42.040Z · LW(p) · GW(p)
People rarely exhibit long-term planning to acquire social status any more than we/they exhibit long-term planning to acquire health. E.g., most unhappily single folk do not systematically practice their social skills unless this is encouraged by their local social environment.
Long-term planning for status: Long-term education plans (e.g., law school or medical school)
For health: Controlling weight; regular medical check-ups
[I omit the last because I don't understand what it means to "practice social skills."]
You overstate the degree of goal-urge disconnect. Usually, when people ignore their professed goals, it's a case of "approving of approving." If goals were truly so disconnected from conduct as you imply (and have apparently convinced yourself is the case), they would serve little real function (except Hansonian signaling). You report that your friends came to grief by living by their urges alone, but if goals have minimal inherent power to guide conduct (that is, if they don't tend spontaneously to recruit urges in their support), then we would all (or most of us) be living like your unfortunate friends, since most people don't go through the self-help exercises of conscientiously attaching urges to goals.
A hypothesis better accounting for the facts is that we often don't pursue our goals because our limited supply of will-power produces decision fatigue. We have to carefully focus our efforts and only pursue the goals most valuable at the margin. But that doesn't mean we practically ignore our paramount goals.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-24T20:54:23.276Z · LW(p) · GW(p)
This might also be why I once said I didn't have an understanding of how people did things they didn't feel like doing.
Do you still feel this way, or do you feel that you understand what I meant in Action and Habit? Have you changed any of your decision-making methods?
Replies from: CronoDAS
↑ comment by CronoDAS · 2012-01-26T06:26:59.227Z · LW(p) · GW(p)
I think I understand, sort of, but I haven't actually changed my decision-making methods. I don't even know how I would begin to go about doing that. Also, would changing my decision-making methods tend to increase or reduce urge-satisfaction?
comment by irrationalist · 2012-01-26T05:42:32.280Z · LW(p) · GW(p)
I think I might be living by urges alone. Whenever I see something about "goals" or "self-discipline" or "self-improvement" I immediately shut down and get miserable. My brain says "I don't want to, dammit!" Of course, people tell me I am self-disciplined, but I see that as merely being practical; if it makes any sense, I'm willing to be practical but severely freaked out by aspirational or normative thinking.
comment by [deleted] · 2012-01-24T06:00:49.174Z · LW(p) · GW(p)
Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"
I have been doing this deliberately for a few months, because I was starting to get fed up with fighting my instincts every time I chose to program for an hour while wanting to spend that hour reading science fiction. So I started standing and exercising whenever I watched anime or read books I did not really need to, while rewarding myself for getting something relatively high on my priority list done by sitting down and eating food I liked.
It seems like no matter how I try to rebalance the incentives and consequences, negative stimuli always seem to have a much stronger effect on my behaviour. So although I get more studying done now, I think it comes more from the significant increase in my perceived free time than from reward mechanisms.
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-01-24T07:37:24.129Z · LW(p) · GW(p)
It seems like no matter how I try to rebalance the incentives and consequences, negative stimuli always seem to have a much stronger effect on my behaviour
What are the negative stimuli? Have you looked into simple behaviorist methods for making studying less painful, instead of just making it more rewarding?
Replies from: Giles, None
↑ comment by Giles · 2012-01-26T00:59:19.001Z · LW(p) · GW(p)
Or making the alternatives more painful?
Replies from: AnnaSalamon, army1987, None, AspiringKnitter
↑ comment by AnnaSalamon · 2012-01-26T20:51:59.231Z · LW(p) · GW(p)
Nope.
↑ comment by A1987dM (army1987) · 2012-02-22T00:47:21.051Z · LW(p) · GW(p)
beeminder.com works for me...
↑ comment by AspiringKnitter · 2012-01-27T02:52:13.482Z · LW(p) · GW(p)
Assuming klfwip wants to maximize xyr own happiness, changing the situation by adding more pain wouldn't help. It might increase the amount of studying, but klfwip could also do that by enjoying studying more (possibly by altering xyr study habits), which would have a greater expected utility because it also makes klfwip happier.
Replies from: None
↑ comment by [deleted] · 2012-01-28T02:10:44.027Z · LW(p) · GW(p)
I study for perceived benefits that include happiness but are broad enough that I am willing to suffer in the short term for greater motivation. If someone put a gun to my head and ordered me to study, I would have to cooperate and probably be very productive, but I am just paranoid enough and value my current existence too much to let this happen.
However, after months of forcing myself to act in ways that violate my natural hyperbolic discounting, it seems to have sunk in a bit: even minor penalties for behavior I do not want to encourage are enough to change most of my habits, if I am consistent enough.
I would not argue that everyone should place themselves in a self-defined bootcamp to try to improve their abilities, but it has been an interesting experiment at least. Many organizations use similar tactics to brainwash members because it works, and it seems to be at least somewhat effective even when self-administered.
↑ comment by [deleted] · 2012-01-28T02:19:37.637Z · LW(p) · GW(p)
The greatest negative stimuli of studying are hard for me to address: actual failure to comprehend a difficult set of problems for weeks is itself enough to make me want to give up completely at times, and the simplest ways to eliminate this would be either to stop caring about results or to actually succeed at everything I do. The first would remove most of my incentive to learn in the first place; the second I would love to do, but I don't expect it to be feasible any time in the near future.
There are probably effective ways to make studying more fun that I have not really explored, though. Studying in groups with other people dealing with the same problems seems to be effective but can be hard to do practically. Nicotine can be used to artificially associate actions with pleasant feelings, along with other drugs. If nicotine actually works as well as Gwern and some others suggest I may try it, but age and then financial constraints have been too limiting.
comment by Spurlock · 2012-01-25T16:51:56.621Z · LW(p) · GW(p)
- Anticipations, what we actually expect to see happen;
- Professed beliefs, the set of things we tell ourselves we “believe”, based partly on deliberate/verbal thought.
This distinction helps explain how an atheistic rationalist can still get spooked in a haunted house;
I apologize if this seems nitpicky, but the implication seems to be that in Yvain's post he is merely "professing" to not believe in ghosts, but "anticipating" that they exist. I believe the actual point of the post was that Yvain both professes and anticipates the nonexistence of ghosts (hence his willingness to place a bet with the bookie as he flees the mansion), he simply hasn't internalized this anticipation/belief on a gut-level.
Porting this back to Urges vs. Goals, perhaps the analogy shouldn't be "Urges : Anticipations", but instead "Urges : Gut-level Internalizations".
Replies from: roystgnr
↑ comment by roystgnr · 2012-01-25T19:32:51.255Z · LW(p) · GW(p)
Adding a third category, (Urges, Feelings, Goals), we get a rewording of (Things I "want", Things I "like", Things I "want to want" or "approve of"), IIRC also from previous LessWrong discussions. So (Internalizations, Anticipations, Professed beliefs) seems like a close enough analogy. Your gut-level internalization/urge tells you to jump at the scary noise or to eat lots of the junk food, but that doesn't mean you wouldn't actually be surprised if you saw a real ghost or felt really contently satiated afterwards, and throughout both actions that voice in the back of your mind is telling you what an idiot you're being.
This is starting to sound like (Id, Ego, Superego), as well, which is a little worrisome. It's a better model for human behavior than a unified mind, but reinventing pop psychology is probably not something to be proud of, and I'm sure any binary/trinary dichotomy is still an over-simplification. I'm not just a triumvirate; I contain multitudes.
Replies from: Spurlock
↑ comment by Spurlock · 2012-01-25T20:04:15.134Z · LW(p) · GW(p)
I can't deny feeling a wave of "Uh oh" when you mention the similarity to Freud... but let's keep in mind "The world's greatest fool may say the Sun is shining..." etc. The idea that there is a difference between our conscious and unconscious selves is hardly a novel observation on this site (Type 1 vs. Type 2 reasoning, the whole nature of cognitive biases, etc.), and the same is true of the difference between our actual current selves and our aspirations/goals ("I want to become stronger"). It does seem like a realistic and useful trichotomy, Freud or no Freud.
And if we need additional levels to describe ourselves more accurately, I certainly have no problem including them as they become necessary :-)
Edit: For anyone who may be interested, I believe the prior discussion roystgnr is referring to is also Yvain.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2012-01-26T09:26:47.563Z · LW(p) · GW(p)
Why exactly is it a taboo to say that Freud made a good approximation of something?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2012-01-26T12:22:32.147Z · LW(p) · GW(p)
It's no more taboo than it is to say the Sun goes round the Earth. We just know better than to take Freud seriously about anything. (Or so I generally understand without having looked closely. If you want to justify the claim that Freud made a good approximation of something, go ahead, but the argument won't be with me.)
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2012-01-26T22:58:03.970Z · LW(p) · GW(p)
We just know better than to take Freud seriously about anything.
Do you agree or disagree with the following things?
- People sometimes do things which are not fully conscious, though if we think about these actions, we might find some hidden motive. Seems like reason is only one of the forces that move our mind; desire and group values are other significant forces.
- Healing psychological problems by hypnosis is not safe. The "healed" problems usually reappear later.
- People often think about sex (surely much more often than is polite to admit in Victorian society).
- Our dreams are related to our emotions.
Because for me, this is the historical contribution of Freud to psychology. It does not mean he invented it all, but at least he popularized it, and I guess it was pretty controversial at that time.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2012-01-30T13:30:40.148Z · LW(p) · GW(p)
The second point relates to the Victorian fad for Mesmerism [1], the fourth is wisdom of the ages, and the other two are Freud lite. Where are his id, superego, and ego now? One might as well credit medieval alchemists with modern chemistry. What do you think of the well-known claims by various critics that he "set psychiatry back one hundred years", or that psychoanalysis is the "most stupendous intellectual confidence trick of the twentieth century"? (Quotes from here.)
[1] Hypnotherapy still exists, but it's curious that there has never been a single substantial mention of it on LessWrong. The Google box brings up just two mentions-in-passing. I guess the idea of getting into a verge-of-falling-asleep state while listening to a voice droning suggestions into one's ear isn't going to appeal much here, for all the magical powers attributed to it in fiction and by NLP practitioners (do I repeat myself?). Searching for "hypnosis" gives a lot more hits, but from a quick glance, little discussion.
Replies from: juliawise, Viliam_Bur
↑ comment by juliawise · 2012-02-10T13:59:24.014Z · LW(p) · GW(p)
the fourth is wisdom of the ages
I'm not sure that's true. In the pre-Freud examples I can think of, dreams were interpreted as predicting actual future events. (Think Joseph interpreting Pharaoh's dream, or the portentous dreams in Shakespeare's Julius Caesar, or lots of folk methods for dreaming about a future spouse.) Freud's claim that dreaming about a crop failure meant something about your fears or emotions, rather than actual future weather conditions, was a new idea.
↑ comment by Viliam_Bur · 2012-01-30T17:10:56.135Z · LW(p) · GW(p)
The second point relates to the Victorian fad for Mesmerism
At the time Freud worked, Mesmerism was a popular topic; today it is not. Of course, criticizing Mesmerism today would be a waste of time. (Hopefully a hundred years from now people will consider criticizing homeopathy or creationism a waste of time. But that does not mean that people who criticize them today are wasting their time.) I do not know enough about the history of medicine to estimate how popular Mesmerism was among physicians in that era. By the way, at the beginning Freud also used and advocated the hypnotic cure, but later he said "Oops". He completely reworked his theories at least twice.
the fourth is wisdom of the ages
Sure, but how did people use this wisdom? There were many attempts to explain dreams, but it seems to me they either required some irreproducible personal talent or a dictionary saying "X means Y" without any explanation of the relationship between X and Y, or of how to interpret things you don't find in the dictionary. Saying that dreams are censored metaphorical scenarios of our suppressed wishes coming true, and actually using this framework to explain some specific dreams, seems like an improvement to me.
Where are his id, superego, and ego now?
Used by psychoanalysts; briefly revived and popularized by Eric Berne in the 1960s.
One might as well credit medieval alchemists with modern chemistry.
Yes.
Freud was not a scientist. Scientists make hypotheses, construct experiments, evaluate them statistically, etc. Freud was a physician -- he tried to cure his patients when the general state of knowledge in his area was pathetic: mostly useless, often harmful. So he made up some heuristics, they seemed to work (though it could also be a placebo effect), compiled them into theories, and published books. He trained a few followers, and some people found his theories (with some updates) useful for a few decades. I would classify his teachings as an "expert opinion", not "science". And if you'd prefer the word "pseudoscience", I wouldn't say you are wrong. This is how psychology was done at that time.
Unfortunately, much criticism of Freud comes from people preferring their own, ahem, pseudosciences. If you compare Freud's theories with contemporary experiments that sometimes use computers, brain scanners, analyzers of chemicals in blood, or drugs produced by multi-billion-dollar pharmaceutical research... but sometimes only a clever experimental design and statistical processing of the results, then of course Freud is on a similar level to medieval alchemists. But in my experience, most people don't waste their time stating the obvious. Most criticism of Freud is merely signalling a preference for some other school of psychology, typically Jungian or behaviorist.
There are circa five traditional branches of psychology, all of them pseudoscientific, though some believed themselves to be more scientific than others because they did some simple experiments with animals and then widely generalized their results. Doing experiments is correct, of course. The wrong part is concluding that "my experiment discovered X, therefore everything in the animal or human mind is X", then selectively gathering evidence that supports it, and inventing rationalizations when contrary evidence is unavoidable. All the traditional branches were guilty of this. If they at least contradicted each other, their debates could be resolved by an experiment. But all of them claimed to understand everything and refute the competition, while carefully making testable predictions (if any) only in the very small area where their teachings originated, and where their maps probably matched the territory best.
It's like saying "Newton was wrong" when some people mean "Newton was wrong, because space-time is curved, as Einstein has shown", while other people mean "Newton was wrong, apples fall down because they are made of the earth element mixed with the water element, and these elements always want to travel downwards", or even worse "Newton was wrong, apples don't fall down, the power of the zodiac levitates them; only when a bad person radiates negative energy do they fall down" (that was supposed to be an analogy to "only sick people think about sex all the time" and similar).
What do you think of the well-known claims by various critics that he "set psychiatry back one hundred years", or that psychoanalysis is the "most stupendous intellectual confidence trick of the twentieth century"?
The second quote comes from an immunologist, not a psychologist. The first quote comes from a competing Jungian school, but it comes from a guy who actually made a questionnaire to measure some of Jung's concepts. Too bad for the Freudians that they didn't have a guy who would make an equivalent questionnaire to measure something Freud described; and I guess it means something that they didn't. But for me, these quotes are simply signalling the superiority of one tribe over another.
(For me the conflict between Freud, Jung, Adler and others is already over. We don't need to argue about whether people truly care about sex, about spiritual wisdom, or about power, or love, or self-actualization, or whatever. First, humans can have many values, not only one terminal goal. Second, it can all be different aspects of the same underlying evolutionary process: we need to survive and reproduce; for that we seek power and love, and for that we signal our skills and wisdom. Though I admit my sympathies lie with the Freudian tribe.)
Hypnotherapy still exists, but it's curious that there has never been a single substantial mention of it on LessWrong.
I remember a short discussion about how to administer a "placebo hypnosis" for a double-blind test. :D
If you can recommend some easily reproducible test and write an article, I think some readers will participate. Problem is to design the test to prevent a placebo effect and other methodological problems.
Replies from: None
↑ comment by [deleted] · 2012-01-30T18:39:30.193Z · LW(p) · GW(p)
Scientists make hypotheses, construct experiments, evaluate them statistically, etc.
This seems too narrow a conception of science: Did Darwin do science that way?
What Freud didn't succeed in doing is elevating psychology from a pre-paradigm state (in Thomas Kuhn's sense). But Freud's main concern was mental conflict, and I don't think its study has yet reached the stage of genuine science. Cognitive-behavioral approaches to treatment largely ignore mental conflict, and the result is that they are more a collection of tricks than a theory. Because students typically set the bar too high for psychoanalysis, Freud's own principal trick, free association, is vastly under-utilized.
comment by Kaj_Sotala · 2012-01-24T09:42:33.964Z · LW(p) · GW(p)
Kaj forms the generalization "as soon as my brain adopted a cause, my subconscious reinterpreted it as the goal of giving the impression of doing prestigious work for the cause". I worry that our community has a tendency to explain as e.g. status signaling or "people really don't care about X", observations that can also be explained by less malice/selfishness and more "our brains have known malfunctions at linking goals to urges".
I actually agree with this, and have somewhat changed my mind about the explanation in my original post, but I just haven't had the time to write about it. Hopefully this week.
EDIT: Done.
comment by Vladimir_Golovin · 2012-01-25T06:07:58.041Z · LW(p) · GW(p)
Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"
I'm adopting this. Could someone point me to the source? I tried to google for Andrew Critch's "greedy algorithm" but haven't found anything except this LW post. Update: Sent a PM to Andrew, asked for more details.
Update 2:
I tried this for a while but alas, it didn't stick - I couldn't find a trigger for this habit. To use programming terms, my brain never fires an event "you're wanting to do something you want to want", so I don't have a place to put the rewarding "habit-code" into.
On the other hand, I do notice when I actually do what I want to want to do, so I started rewarding myself for this, both mentally and physically (e.g. by grabbing some nuts / dried berries from the bowl, taking a good break, bragging to coworkers etc.)
Replies from: AnnaSalamon
↑ comment by AnnaSalamon · 2012-01-25T08:08:46.750Z · LW(p) · GW(p)
Critch, aka Academian, taught it in minicamp and unfortunately has yet to write it up anywhere. I wish he would. pm academian and ask him to :)
comment by lukeprog · 2012-01-24T04:48:03.981Z · LW(p) · GW(p)
Great post! "Aspiring consequentialist" has a nice ring to it.
I just realized I also have "Send Carl & Anna their wedding gift" on my to-do list.
We know a thing or two about the neurobiology behind the divide between urges and goals. Those interested can read about it here.
Replies from: Solvent
comment by Vladimir_Nesov · 2012-01-25T20:00:53.008Z · LW(p) · GW(p)
More generally, for the basic decision-making tools we have a collection (automatic application, automatic correction, deliberative application, deliberative correction). For goals, that's (wanting, liking, approving, approving of approving); for beliefs, (anticipation, learning/surprise, professed belief, correspondence with referent (Tarskian truth)).
For example, correcting wrong belief in belief (professed belief) that doesn't reflect more accurate anticipation then corresponds to getting rid of fake professed utility functions that don't reflect the actual detailed values. Both are errors of mishandling the tools for deliberative reasoning (about beliefs and goals), and fixing both of these errors should in theory improve the quality of one's decisions (or of theorizing about decision-making).
comment by torekp · 2012-01-25T01:47:46.028Z · LW(p) · GW(p)
Andrew Critch's "greedy algorithm": Whenever you catch yourself really wanting to do something you want to want, immediately reward yourself - by feeding yourself an M&M, or if that's too difficult, immediately pumping your fist and saying "Yes!"
Closely related to the parenting advice of "Catch them being good" - which works wonders on kids. I expect it will generalize well to adults.
comment by Giles · 2012-01-26T01:36:29.634Z · LW(p) · GW(p)
A lot of our community technique goes into either (1) dealing with "beliefs" being an evolutionarily recent system, such that our "beliefs" often end up far screwier than our actual anticipations; or (2) trying to get our anticipations to align with more evidence-informed beliefs.
Wow. I hadn't heard this expressed quite like this before... We have one territory and two maps, and we can help get both maps in sync with reality by getting them both in sync with each other.
Is (2) related to taking ideas seriously?
To me there seems to be more going on here than in Belief in Belief where Yudkowsky encourages people to do (1) - to update vacuous or nonsensical professed beliefs. From Yudkowsky's article I didn't pick up that (2) needs to happen as well.
comment by Jonathan_Graehl · 2012-01-24T23:16:02.952Z · LW(p) · GW(p)
And much the same way that a lot of craziness stems, not so much from "having a wrong model of the world", as "not bothering to have a model of the world", a lot of personal effectiveness isn't so much about "having the right goals" as "bothering to have goals at all" - where unpacking this somewhat Vassarian statement would lead us to ideas like "bothering to have something that I check my actions' consequences against, never mind whether or not it's the right thing" or "bothering to have some communication-related urge that animates my writing when I write, instead of just sitting down to log a certain number of writing hours during which I feel rewarded from rearranging shiny words".
Simple, useful preaching.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2012-01-26T10:24:27.450Z · LW(p) · GW(p)
I'd say it's useful, but that is not a simple explanation.
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2012-01-26T20:32:01.577Z · LW(p) · GW(p)
Perhaps I really meant "persuasive", not "simple".
Simple would be: Try to have a model of the world. Try to have goals. Don't worry yet about mistakes. By trying at all, you'll probably do better than most people.
comment by Solvent · 2012-01-24T05:05:47.635Z · LW(p) · GW(p)
Thank you for this post. The central insight that we should consider instrumental rationality by analogy to epistemic rationality is something that had never occurred to me before. I wish I had thought of it.
Besides an aspiring rationalist, these days I call myself an "aspiring consequentialist".
I think I'll do that too.
comment by Kallisti · 2012-02-14T19:07:57.021Z · LW(p) · GW(p)
I think "goals" are the wrong way to look at it.
Very few people have a complete, coherent system of terminal values. The few who do usually seem to suffer from their excessive rigidity. I can't commit to an exhaustive set of goals, all the way to the end of my life. I've had to discard and change my plans too many times. What looks like a great idea today may turn out to be fruitless on inspection.
Instead of goals I think about resources. I don't know specifically what I'm going to want to do, but whatever it is, money will be helpful. As will health. And human capital. And a track record of accomplishments.
When a goal isn't a resource, it usually turns out to be a fake goal. "I want to learn Latin." Well, if learning Latin isn't going to help me with anything, then guess what? I'll never get around to learning Latin. It's just an empty aspiration. Learning awk, however, isn't an empty aspiration -- I learned the hard way how badly I need it. Awk is a resource.
It's not a perfect distinction. Sometimes a resource looks like a mere aspiration because its usefulness is too far away in time. Asking "is this a resource?" will tend to bias you towards what you need most now. But it's not a bad algorithm. (I think of it as gradient ascent.) You get done what you need to do, and you worry a lot less about what you vaguely think you should be doing.
comment by ChristianKl · 2012-01-25T14:03:20.207Z · LW(p) · GW(p)
I personally had the experience of believing "If the last day on which I remember having gone to bed was a Tuesday, then today should be Wednesday, not Monday." Before the belief got challenged by hard reality I had never paid any conscious attention to it. Getting it challenged, on the other hand, produced one of the three strongest feelings of cognitive dissonance that I have felt in my life.
We all have a bunch of beliefs which are very reasonable but for which there are edge cases where the beliefs don't hold.
I think the common term for those beliefs is "common sense".
Then there is another little area of beliefs that we commonly call "perception". If I see a red carpet, I believe that the carpet is red. It doesn't necessarily have to be, and there are psychological tricks one can use to give people false perceptions.
comment by multifoliaterose · 2012-01-25T05:31:49.432Z · LW(p) · GW(p)
I strongly endorse your second and fourth points; thanks for posting this. They're related to Yvain's post Would Your Real Preferences Please Stand Up?.
comment by JoachimSchipper · 2012-01-24T11:57:39.609Z · LW(p) · GW(p)
This should be on the front page.
Replies from: Swimmer963
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-24T21:19:26.647Z · LW(p) · GW(p)
I expect it will be, fairly soon.
comment by atucker · 2012-01-24T11:11:35.971Z · LW(p) · GW(p)
When I was in San Francisco, I recall the phrase "goals not roles" popping up a lot.
I find that it's a fairly easy way to remember that it's even a question whether I'm trying to accomplish something, or just do some things that make it look like I'm trying to accomplish it.
comment by james_edwards · 2012-01-24T05:42:18.011Z · LW(p) · GW(p)
Important and timely (the next Melbourne LW meetup will focus on setting good goals, an exercise which has always confounded me).
I find particularly interesting the "wedding gift todo" example, where imagined achievement of the goal stands in for actually achieving the stated goal (giving a wedding gift). We want to have and act on "goals" rather than "urges". But setting goals is the kind of activity where "urges" can dominate. To me this looks like the analogue of belief-in-belief. We want our reasoning processes to be reflexively consistent, but in practice they often fail to work that way.
Edit: And when I go back and look at "Belief in Belief", that's where Eliezer outlines the "invisible dragon" example, so my main point is already implicit in this post!
comment by Dmytry · 2012-01-26T21:25:57.564Z · LW(p) · GW(p)
Well it seems to me that a rational goal professed by a rationalist should correspond to a few anticipations (that goal is achievable, that achieving the goal will achieve some rational super goal, serve an urge, or otherwise be positive). Not an analogy but a straightforward correspondence.
Unless of course one adopts goals of the form - suppose I am the leader of the tribe and you are a regular member and I tell you to defend this hill. Or vice versa. And we adopt defending the hill as a goal without any knowledge as to why we are defending it or how that is linked to the urge to live or other urges. Such knowledge may be hard to convey, and a requirement to communicate it would easily be a handicap (especially if the language is not very well developed).
Likewise for the passed down generations goal-set of the tribe, which itself is an entity that can evolve.
I imagine that sort of thing happened a lot, all the way back a couple of million years.
Detachment of goals allows us to have all sorts of screwy goals, ranging from more noble varieties like pursuing higher education (which dramatically decreases the reproduction rate) to really bad ones (suicidal attacks). [Note that from an evolutionary point of view those goals are all rather screwy, insofar as they don't improve the reproduction rate of any particular gene.]
I imagine that if biological evolution were allowed to continue for a million years or two, we would have a more intelligent human with precisely one strong urge - to reproduce - with everything else derived rationally as goals. But as of now we have the urge system of a monkey, which won't be able to arrive at any action or decision starting from such a high-level goal as reproduction.
Replies from: None
↑ comment by [deleted] · 2012-03-27T21:31:31.568Z · LW(p) · GW(p)
I imagine that if biological evolution would be allowed to continue for a million years or two we would have more intelligent human with precisely 1 strong urge - to reproduce
This would seem true if the only force involved in evolution were natural selection. But sexual selection seems to have played a considerable role in human evolution. A horizon limited to reproduction doesn't seem very sexy to my intuitions.
Replies from: Dmytry
comment by Giles · 2012-01-26T01:51:12.866Z · LW(p) · GW(p)
Visualizing that accomplishment, and its positive rewarding consequences, until you have an urge for it to happen
I so have to try this hack. No agency without urgency?
This fits in reasonably well with an anti-akrasia framework I've been thinking over: Rephrase goal X as "I honestly believe that I will achieve X", and then carry on thinking until you actually have a reasonably solid case for believing that. This particular trick translates to breaking down the statement into "I will force myself to develop an urge to do X. And once I have an urge to do X I will surely do X because I tend to follow my urges".
"Status" / "prestige" / "signaling" / "people don't really care about" is way overused to explain goal-urge delinkages that can be more simply explained by "humans are not agents".
I think that those are sort of describing the same thing but at different levels of abstraction. Thinking in terms of evolutionary adaptations, our professed goals and actual behavior may differ for status/signaling reasons. But thinking about cognition we just come to the conclusion that we're not very agenty and aren't so bothered about the evolutionary reason why.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2012-01-28T21:42:02.909Z · LW(p) · GW(p)
Rephrase goal X as "I honestly believe that I will achieve X", and then carry on thinking until you actually have a reasonably solid case for believing that.
As a rationalist, you can frame that as "I prefer to reward future versions of me that have achieved this by having correctly predicted their behavior. "
comment by john2219 · 2012-01-30T16:58:25.099Z · LW(p) · GW(p)
I agree with your article. I think that this example doesn't quite illustrate it:
Before you install Linux, do you think "What's the positive consequence of installing Linux?" or does it just seem like the sort of thing a free-software-supporter would do?
The first few times I did this, it was the second motivation. After a while, it became the first, namely that I got a system I had better control over, incorporating high-quality software. However, the first motivation was very good for the second. Without (lots of) people doing the sort of thing a free-software supporter would do, most of the good software I use out of work would not be available to me.
Your sentence has a strong/weak dichotomy whereas I think either motivation is acceptable.
comment by [deleted] · 2012-01-29T00:52:19.364Z · LW(p) · GW(p)
Urges vary in strength, but it isn't usual to speak of one goal being stronger than another—except in the sense that it's powered by more urges. But goals, too, would seem to vary in strength. A goal's strength would bear some relationship to the expected value of striving to attain it.
You overstate the disconnection between urges and goals because you don't consider the consequences of goals having intrinsic strength, apart from their extrinsic association with urges. A stronger goal exerts a stronger pull to recruit urges to its service. Unless we're neurotic, we don't typically ignore our strongest goals because of a dearth of supporting urges.
Thought provoking post.
comment by ahartell · 2012-01-27T02:52:17.163Z · LW(p) · GW(p)
I got a lot out of this post, and it's obviously very high quality, but I have one humble gripe.
"and why Eliezer had to actually crash a car before he viscerally understood what his physics books tried to tell him about stopping distance going up with the square of driving speed. (I helped Anna revise this - EY.)"
I feel as if the parenthetical statement at the end of the quoted text would be unnecessarily alienating to an outside reader. Maybe it's that it feels unprofessional (I'm not really sure), but it seems like the kind of thing that would seem weird to me if I were reading this after having it sent to me by a friend. And this is definitely the kind of post I would send to a friend.
In other news, I agree vehemently about your point in the first footnote. I'll definitely start using the "chess player" example when I'm explaining belief in belief to others.
Replies from: Prismattic
↑ comment by Prismattic · 2012-01-27T03:40:32.290Z · LW(p) · GW(p)
The parenthetical is clearly there to show that she is not using this anecdote without EY's permission, since it might be taken as status-reducing.
Replies from: ahartell
↑ comment by ahartell · 2012-01-27T03:46:51.141Z · LW(p) · GW(p)
Yeah, but maybe it would have been better as a footnote. And would newer readers know what "EY" meant?
Replies from: Ben_Welchner
↑ comment by Ben_Welchner · 2012-01-27T06:52:00.935Z · LW(p) · GW(p)
And would newer readers know what "EY" meant?
Given it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.
Replies from: ahartell
comment by fburnaby · 2012-01-26T05:35:04.557Z · LW(p) · GW(p)
My workplace seems, at times, to be well-designed to align my urges and goals for me.
(also: Congratulations, Anna and Carl on your wedding!)
Replies from: None
↑ comment by [deleted] · 2012-01-27T05:16:24.137Z · LW(p) · GW(p)
How so?
Replies from: fburnaby
↑ comment by fburnaby · 2012-01-27T23:49:08.214Z · LW(p) · GW(p)
When I'm there, I feel like working and when I'm anywhere else, I don't. I haven't ever stopped to try and figure out what it is about the place, but I've assumed that someone must be thinking about it.
If you'd like to have some guesses:
- it's a very sterile environment with no distractions
- I feel pressure to demonstrate that I'm working right this second, which may help me stay in near-mode
(One necessity for all this to work is, of course, that my goals be related to furthering my career and to accomplishing and learning stuff that's positively correlated to my employer's goals.)
comment by Jonathan_Graehl · 2012-01-24T22:56:02.135Z · LW(p) · GW(p)
I love this post's anticipations : professed beliefs :: urges : professed goals. Planning seems more necessary (although I guess it's actually rare) than talking about your beliefs (which is easy to do to excess).
All the other cases involve not asking it, or not asking hard enough.
"the cases" is unclear. I assume you mean the rest of the "ways to screw up in choosing goals" yet to be listed.
Ideals can be more ungoaly because they're sometimes about faraway things or less ancestral things - it's probably easier to improve your agency on less idealy goals that link more quickly to urges -
Redundant and opaque. "idealy" and "ungoaly" aren't words. The entire paragraph states a simple point (that I agree with): we shouldn't automatically give up on the goals where agency is hardest - it may be worth making progress toward those goals, even if it's expensive, or incomplete.
comment by AnlamK · 2012-01-25T02:53:52.683Z · LW(p) · GW(p)
The Inuit may not have 47 words for snow
The Inuit does not have 47 words for snow! Please, don't propagate this falsehood, especially on a 'rationality' blog.
Edit: Sorry I read incorrectly. My apologies! It says 'may not'...