The enemy within
post by Roko · 2009-07-05T15:08:05.874Z · LW · GW · Legacy · 18 comments
I read an article from The Economist subtitled "The evolutionary origin of depression", which puts forward the following hypothesis:
As pain stops you doing damaging physical things, so low mood stops you doing damaging mental ones—in particular, pursuing unreachable goals. Pursuing such goals is a waste of energy and resources. Therefore, he argues, there is likely to be an evolved mechanism that identifies certain goals as unattainable and inhibits their pursuit—and he believes that low mood is at least part of that mechanism. ...
This ties in with Kaj and PJ Eby's idea that our brain has a collection of primitive, evolved mechanisms that control us via our mood. Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do.
Eliezer has already told us about Adaptation-Executers, not Fitness-Maximizers; evolution mostly created animals which executed certain adaptations without really understanding how or why they worked - such as mating at a certain time or eating certain foods over others.
But, in humans, evolution didn't create the perfect consequentialist straight off. It seems that evolution combined an explicit goal-driven propositional system with a dumb pattern recognition algorithm for identifying the pattern of "pursuing an unreachable goal". It then played with a parameter for the balance of power between the goal-driven propositional system and the dumb pattern recognition algorithm until it found a level which was optimal in the human EEA. So the blind idiot god bequeathed us a legacy of depression and akrasia - it gave us an enemy within.
Nowadays, it turns out that that parameter is best tuned by giving all the power to the goal-driven propositional system, because the modern environment is far more complex than the EEA and requires long-term plans like founding a high-technology startup in order to achieve extreme success. These long-term plans do not immediately return a reward signal, so they trip the "unreachable goal" sensor inside most people's heads, causing them to completely lose motivation.
However, some people seem to be naturally very determined; perhaps their parameter is set slightly more towards the goal-driven propositional system than average. These people rise up from council flats to billionaire-dom and celebrity status. People like Alan Sugar. Of course this is mere hypothesis; I cannot find good data to back up the claim that certain people succeed for this reason, but I think we all have a lot of personal evidence suggesting that if we could just work harder, we could do much better. It is now well accepted that getting into a positive mood counteracts ego depletion; see, for example, this paper1. One might ask why on earth evolution designed the power-balance parameter to vary with your mood; but suppose that the mechanism is that the "unreachable goal" sensor works as follows:
{pursuing goal} + {sad} = {current goal is unachievable} ==> decrease motivation
{pursuing goal} + {happy} = {current goal is being achieved} ==> increase motivation
And the "mood" setting takes a number of inputs to determine whether to go into the "happy" state or the "sad" state, such as whether you have recently laughed, whether you received praise or a gift recently, and whether your conscious, deliberative mind has registered the "subgoal achieved" signal.
In our EEA, all of the above probably correlated well with being in pursuit of a goal that you are succeeding at: since the EEA seems to be mostly about getting food and status in the tribe, receiving a gift, laughing or getting more food probably all correlated with doing something that was good - such as making allies who would praise you and laugh and socialize with you. Conversely, being hungry and lonely and frustrated indicates that you are trying something that isn't working, and that the best course of action for your genes is to hit you with a big dose of depression so that you stop doing whatever you were doing.
Following PJ Eby's idea of the brain as a lot of PID feedback controller circuits, we can see what might happen in the case of someone who "makes it": they try something which works, and people praise them and give them gifts (e.g. money, business competition prizes, corporate hospitality gifts, attention, status), which increases their motivation because it sets their "goal attainability" sensor to "attainable". This creates a positive feedback loop. Conversely, if someone does badly and then gets criticism for that bad performance, their "unreachable goal" sensor will trip out and remove their will to continue, creating a downward spiral of ever diminishing motivation. This downward spiral failure mode wouldn't have happened in the EEA, because the long-term planning aspect of our cognition was probably useful much more occasionally in the EEA than it is today, hence it was no bad thing for your brain to be quite eager to switch it off.
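To make the spiral concrete, here is a minimal toy simulation of the feedback loop just described. The numbers and the linear praise/criticism rule are invented purely for illustration, not fitted to anything:

```python
# Toy positive-feedback loop: performance earns praise (above threshold) or
# criticism (below), which feeds back into motivation. Illustrative only.

def simulate(motivation, steps=10, gain=0.3):
    history = [round(motivation, 2)]
    for _ in range(steps):
        performance = motivation            # more motivation -> better work
        feedback = performance - 0.5        # praise above 0.5, criticism below
        motivation = min(1.0, max(0.0, motivation + gain * feedback))
        history.append(round(motivation, 2))
    return history

print(simulate(0.6))  # starts above threshold: climbs toward 1.0
print(simulate(0.4))  # starts below: spirals down toward 0.0
```

The same update rule produces a runaway climb or a collapse depending only on which side of the praise threshold you start - which is the asymmetry between the person who "makes it" and the one whose sensor trips out.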
So what are we to do? Powerful antidepressants would seem to be your friend here, as they might "fool" your unreachable goal sensor into not tripping out. In a comments thread on Sentient Developments, David Pearce and another commenter claimed that there are some highly motivating antidepressants which could help. Laughing and socializing in a positive, fun way also seem like good ideas, or even just watching a funny video on YouTube. But we should definitely think about developing much more effective ways to defeat that enemy within; I have my eye on hypnosis, meditation and antidepressants as big potential contributors, as well as spending time with a mutually praising community.
1. Restoring the self: Positive affect helps improve self-regulation following ego depletion, Tice et al., Journal of Experimental Social Psychology
18 comments
Comments sorted by top scores.
comment by Richard_Kennaway · 2009-07-07T07:01:22.024Z · LW(p) · GW(p)
Following PJ Eby's idea of the brain as a lot of PID feedback controller circuits
Ahem. Bill Powers' idea, which I introduced to LW.
comment by Vladimir_Nesov · 2009-07-05T20:49:11.541Z · LW(p) · GW(p)
This article starts by introducing and developing an interesting hypothesis, but then gives a highly unrepresentative anecdote and concludes with a suggestion to take psychiatric medication that failed to get regulatory approval.
The first part would be much better off without the second.
ETA: The updated version is much better, thank you.
comment by Sideways · 2009-07-05T15:47:32.197Z · LW(p) · GW(p)
All animals except for humans had no explicit notion of maximizing the number of children they had, or looking after their own long-term health. In humans, it seems evolution got close to building a consequentialist agent...
Clarification: evolution did not build human brains from scratch. Humans, like all known life on earth, are adaptation executers. The key difference is that thanks to highly developed frontal lobes, humans can predict the future more powerfully than other animals. Those predictions are handled by adaptation-executing parts of the brain in the same way as immediate sense input.
For example, consider the act of eating bacon. A human can extrapolate from the bacon to a pattern of bacon-eating to a future of obesity, health risks, and reduced social status (including greater difficulty finding a mate). This explains why humans can dither over whether to eat bacon, while a dog just scarfs it down--dogs can't predict the future that way. (The frontal lobes also distinguish between bad/good/better/best actions--hence the vegetarian's decision to abstain from bacon on moral grounds.)
Eliezer's body of writing on evolutionary psychology and P.J. Eby's writing on PCT and personal effectiveness seem to be regarded as incompatible by some commenters here (and I don't want to hijack this thread into yet another PCT debate), but they both support the proposition that akrasia and other "sub-optimal" mental states result from a brain processing future-predictions with systems that evolved to handle data from proximate environmental inputs and memory.
Replies from: Roko, thomblake
↑ comment by Roko · 2009-07-05T16:16:27.369Z · LW(p) · GW(p)
Humans, like all known life on earth, are adaptation executers.
Well, being a consequentialist is a particular adaptation you can execute. "Consequentialist" is a subset of "Adaptation Executer".
Humans certainly come much closer to pure consequentialism - of explicitly representing a goal and calculating optimal actions based upon the environment you observe to achieve that goal - than any other creature does.
Replies from: Sideways
↑ comment by Sideways · 2009-07-05T18:47:25.279Z · LW(p) · GW(p)
I agree. My comment was meant as a clarification, not a correction, because the paragraph I quoted and the subsequent one could be misinterpreted to suggest that humans and animals use entirely different methods of cognition--"execut[ing] certain adaptations without really understanding how or why they worked" versus an "explicit goal-driven propositional system with a dumb pattern recognition algorithm." I expect we both agree that human cognition is a subsequent modification of animal cognition rather than a different system evolved in parallel.
I'm not sure I agree that humans are closer to pure consequentialism than animals; if anything, the imperfect match between prediction and decision faculties makes us less consequentialist. Eating or not eating one strip of bacon won't have an appreciable impact on your social status! Rather, I would say that future-prediction allows us to have more complicated and (to us) interesting goals, and to form more complicated action paths.
Replies from: Roko
comment by gjm · 2009-07-05T19:54:13.052Z · LW(p) · GW(p)
The Wikipedia article about amineptine to which Roko linked doesn't seem to support the claim he's quoting; it says that its regulatory troubles were because of an immediate stimulant effect -- which is not the same thing as its antidepressant effect, or at least so says the article and it seems plausible to me. (Depression is not the same thing as unhappiness; stimulants are not the same thing as antidepressants.)
Replies from: Roko
↑ comment by Roko · 2009-07-05T20:16:52.718Z · LW(p) · GW(p)
I was rather hoping that someone who knew what they were talking about would clarify this; amineptine is of course one possible antidepressant to consider, but there are hundreds of them. What is the situation? Do there exist antidepressants that can help people to motivate themselves beyond the normal level of human motivation?
Considering that there exists a tradition within the medical community to draw a fairly arbitrary distinction between "treatment" and "enhancement", and to only spend time researching and administering "treatment" but to shun "enhancement", I would not be surprised if there were some low hanging fruit here.
Regrettably, I know a lot about general relativity and category theory, but virtually nothing about psychopharmacology so I cannot really add much.
Replies from: MineCanary
↑ comment by MineCanary · 2009-07-07T01:18:32.360Z · LW(p) · GW(p)
If you can find any antidepressants that actually reliably cure depression without making a lot of people have unacceptable side-effects, ranging from suicide to short-term memory loss, please tell me.
Most of the people I know who've been in the mental healthcare system (including me) have had to try several medications before one (if any) actually helped their symptoms, which were/are often debilitating. A very good reason for drawing the line between "treatment" and "enhancement" is that a lot of the time you'd only put up with psychopharmacology if you were utterly miserable and unable to function without it. And even then it's only a bet.
The effect you're looking for does seem to be one that people often go to stimulants looking for, so perhaps you should be looking for the best stimulant for you.
And I do not think the distinction between "treatment" and "enhancement", as you say, is arbitrary. A focus on enhancement may be profitable--and is for the regrettably mostly pseudoscientific or downright dishonest supplement and self-help industries--but mental illness is friggin' terrible. I know: I've had it for 16 years and I'm only 18 years old. There's a huge difference between being disappointed with yourself because you procrastinate a little, and hearing voices and having the impression that they're surrounding you and stabbing you as you twitch and shudder and try to focus on responding to someone speaking to you with at least grunts or flat, simple answers--which can also be caused by procrastination, or rather be part of a vicious cycle of depression and anxiety that centers around procrastination and disbelief in one's own ability to accomplish anything. I know: I've been there.
comment by MineCanary · 2009-07-07T01:41:24.106Z · LW(p) · GW(p)
Mmm, am I the only one not thinking right, or does the article debunk its own suggestion?
Their conclusion was that those who experienced mild depressive symptoms could, indeed, disengage more easily from unreachable goals. That supports Dr Nesse’s hypothesis. But the new study also found a remarkable corollary: those women who could disengage from the unattainable proved less likely to suffer more serious depression in the long run.
I'm not sure how they define "mild depressive symptoms", but it looks like depression in the sense of the word I expect--the serious illness that is among the top 10 causes of disability worldwide--is not necessarily linked to the mechanism of low mood => give up unattainable goals. The article also suggests that this giving-up mechanism is adaptive because it allows one to focus on new goals--or at least to rationally appraise the situation and see if you want to keep going or if there's a better alternative.
Additionally, in what looks to me suspiciously like an example of bad science reporting, the article devotes a considerable part of itself to this:
The importance of giving up inappropriate goals has already been demonstrated by Dr Wrosch. Two years ago he and his colleagues published a study in which they showed that those teenagers who were better at doing so had a lower concentration of C-reactive protein, a substance made in response to inflammation and associated with an elevated risk of diabetes and cardiovascular disease. Dr Wrosch thus concludes that it is healthy to give up overly ambitious goals. Persistence, though necessary for success and considered a virtue by many, can also have a negative impact on health.
Okay, first, no mention of how they measured ability to give up "inappropriate goals". That seems methodologically difficult to me. Second, they used a proxy measure (C-reactive protein) for total health, which puts one more link in the cause-effect chain to potentially be a weak link. Third, correlation does not prove causation, even if it seems plausible. Fourth, a lower concentration of C-reactive protein does not equal better overall health; a lot more study would have to be done to tell whether the measured variable has an overall effect on health, so the conclusion seems premature.
So I can see why you might discount the article's main argument that the low mood => give up goal mechanism is adaptive even today, but why not accept its challenge? Perhaps your persistence in pursuing the task of heightened motivation is maladaptive--putting yourself at greater risk for psychological problems as you continually fail yourself, taking up more energy than is worthwhile, and keeping you from noticing other opportunities that work more in harmony with your abilities rather than against them. I don't see that there's anything in this line of speculation to point in one direction or another--and I do know from experience that if you're working on working on your goals, you're not working.
comment by pjeby · 2009-07-06T03:27:56.028Z · LW(p) · GW(p)
Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do.
I don't think the anthropomorphic frame is helpful, though. It's like saying that a thermostat is "trying" to prevent the sun from warming up the room as it "wants" to.
It'd be a bit more accurate to simply say that we can have control circuits that require mutually incompatible states or actions.
It seems that evolution combined an explicit goal-driven propositional system with a dumb pattern recognition algorithm for identifying the pattern of "pursuing an unreachable goal".
PCT has a more general form of this idea that doesn't require a specialized notion of an unreachable goal: it proposes a reorganization system that responds to chronic or intrinsic error signals by reorganizing the control circuits involved until the error goes away. An unreachable goal would just be one example of what could trigger this system.
PCT considers this "reorganization" system to be the basis of trial-and-error learning as well, that is, continued failure to reach a goal prompts a series of variations in behavior until the error signal goes away. In the case of an unreachable goal, the reorganization will simply continue until the organism "gives up" - that is, reorganizes the control system so that the current condition is no longer considered an error, and thus experiences no motivation to pursue it.
This isn't to say that we don't have any ability to handle unreachable goals specially, just that the PCT model doesn't need one.
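For concreteness, here is a minimal sketch of reorganization acting on an unreachable goal. It treats the world as fixed (no action can move it) and lets blind variation drift the reference until the error disappears; the structure and numbers are my own illustration, not Powers' formal model:

```python
# Toy PCT-style reorganization on an unreachable goal (illustrative only).
import random

perception = 2.0   # the actual state of the world; the goal is unreachable,
                   # so nothing the organism does moves this value
reference = 10.0   # the goal: where the controller wants perception to be

steps = 0
while abs(reference - perception) >= 0.5:      # chronic error signal persists
    # Reorganization: blindly vary the control circuit, reference included.
    # Drifts downward on average, so the goal eventually gets "given up".
    reference += random.uniform(-0.6, 0.4)
    steps += 1

print(f'"gave up" after {steps} reorganizations: reference={reference:.1f}')
```

The loop stops not because the goal is reached, but because the reference has drifted until the current condition no longer counts as an error - which is exactly the "giving up" described above.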
These long-term plans do not immediately return a reward signal, so they trip the "unreachable goal" sensor inside most people's heads, causing them to completely lose motivation.
This isn't true. I can get immediate reward signals thinking about my goals over the next year or two. The real issues have more to do with whether your thoughts also trigger any error signals due to predictions of difficulty, stress, having to give up other things, etc. When those things are present, any positive reward is going to get drowned out by shorter-term negatives.
One might ask why on earth evolution designed the power-balance parameter to vary with your mood;
It didn't, because the model you're describing here is unnecessarily complicated. Ego depletion is simply a function of conflict between controllers -- PCT predicts that two systems in conflict will trigger maximal activation of the neural pathways involved in the conflict, like two competing thermostats simultaneously running the heat and A/C at maximum. And this would naturally be expected to result in overuse of brain fuel (e.g. glucose).
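The thermostat analogy is easy to simulate. In this toy version (my own, purely illustrative) two controllers with incompatible references hold the shared variable in deadlock while both run flat-out:

```python
# Two controllers fighting over one variable: deadlock at maximum output.
temp = 20.0
heat_ref, cool_ref = 25.0, 15.0     # incompatible goals for the same variable
energy = 0.0

for _ in range(50):
    heater = max(0.0, min(1.0, heat_ref - temp))   # pushes temp up
    cooler = max(0.0, min(1.0, temp - cool_ref))   # pushes temp down
    temp += 0.5 * (heater - cooler)                # outputs cancel out
    energy += heater + cooler                      # but both burn fuel

print(f"temp={temp:.1f}, energy spent={energy:.1f}")
# temp stays stuck at 20.0 while ~2 units of energy are burned every step
```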
Conversely, if someone does badly and then gets criticism for that bad performance, their "unreachable goal" sensor will trip out and remove their will to continue, creating a downward spiral of ever diminishing motivation.
This isn't really how it works, either. The difference between success and struggling is in one's interpretation of events, not the events themselves. A successful person responds to negative events with, "Ah! I love a challenge!", or at worst, "Well, I guess I learned one more way that doesn't work."
Conversely, a natural struggler's interpretation of good things is that they won't last, aren't "real", or "don't count".
Dweck's research into mindsets also shows that it's ridiculously easy to get people to think in this mindset, even without exposing them to any actual adversity whatsoever! So, it's not a matter of exposure to adversity or success; a person who has nothing but success in their early years may spend the rest of their life handicapped by it. (The downward spiral of many child stars being an all-too-obvious example.)
But we should definitely think about developing much more effective ways to defeat that enemy within
You can't defeat it, and you don't need to. What you need is to resolve your actual priorities. I'm still integrating the results of PCT into my own work, but the effects have been mindblowing at times.
This weekend, for example, I realized that every time I ever tried to "define my priorities" in the past (as requested by virtually every self-help book in existence), I was actually trying to make a list of what I thought my priorities should be... rather than what they truly were. And just realizing that tiny difference in perspective has completely changed my outlook on planning, and what I'll be doing in the next few weeks.
The point? While I was trying to do what I thought I "should", I was essentially ensuring that I would always be in conflict, always fighting this so-called "enemy" within. But by instead doing what I really want in the first place...
There's no longer any "enemy" to "fight".
Replies from: orthonormal, orthonormal
↑ comment by orthonormal · 2009-07-06T06:52:36.343Z · LW(p) · GW(p)
Eby's theory is that many of us have circuits that try to prevent us from doing the things we want to do.
I don't think the anthropomorphic frame is helpful, though. It's like saying that a thermostat is "trying" to prevent the sun from warming up the room as it "wants" to.
It's not the language of "wanting" that seems to be the problem; after all, you yourself talk about "what I really want in the first place". I think, rather, you're suggesting Roko amend that quote from "the things we want to do" to "the things we think we want to do" or "the things we think we should want to do".
If that's the case, I agree with you; the parts of my mind of which I'm not directly conscious are not something external to me, and fighting them as an "enemy within" has been (in my experience) a recipe for disaster. In particular, I'm often unconsciously aware of many factors that it's hard to consciously acknowledge at the time; when I've felt the strain of forcing a conscious decision against unspecified internal resistance, that decision usually turns out badly for factors I should have seen at the time.
And yet, there is an important counterpoint: I would in fact choose to self-modify into an agent that values utilons more and fuzzies less, were it possible to ask Omega to do that. (Yes, I know, it's easy to signal that I would do that, but Bayes damn it, I really would.) This limits the types of compromises that I am willing to consider between my various wants.
Replies from: pjeby
↑ comment by pjeby · 2009-07-06T17:18:31.277Z · LW(p) · GW(p)
It's not the language of "wanting" that seems to be the problem; after all, you yourself talk about "what I really want in the first place". I think, rather, you're suggesting Roko amends that quote from "the things we want to do" to "the things we think we want to do" or "the things we think we should want to do".
Yes, but that wasn't my main objection. My main objection is that "trying to prevent us from doing the things we want to do" implies an opponent whose goal is to frustrate you, rather than a blind controller simply trying to restore a variable to its programmed range.
It's not "out to get you" in some fashion, and far too much self-help material creates that kind of paranoia already. Certainly, I don't want anybody getting the impression that I promote such irrational paranoia myself.
Replies from: orthonormal
↑ comment by orthonormal · 2009-07-06T17:41:19.103Z · LW(p) · GW(p)
Now it's my turn to be puzzled about whether we're disagreeing. Isn't this quite compatible with what I wrote in the second paragraph?
In any case, IAWYC, but I haven't yet seen evidence that would lead me to conclude all of my unconscious mental processes are best represented as control circuits of that sort; there could be some relatively sophisticated modeling there as well, just hidden from my conscious ratiocination.
Replies from: pjeby
↑ comment by pjeby · 2009-07-07T02:42:01.332Z · LW(p) · GW(p)
Now it's my turn to be puzzled about whether we're disagreeing. Isn't this quite compatible with what I wrote in the second paragraph?
I'm not disagreeing with what you said, I'm only disagreeing with what you said I said. Clearer now? ;-)
I haven't yet seen evidence that would lead me to conclude all of my unconscious mental processes are best represented as control circuits of that sort; there could be some relatively sophisticated modeling there as well, just hidden from my conscious ratiocination.
It's true that PCT (at least as described in the 1973 book) doesn't take adequate account of predictive modeling. The model that I was working with (even before I found out about the "memory-prediction" framework) was that people's feelings are readouts of predictions the brain makes, based on simple pattern recognition of relevant memories... aka, the "somatic marker hypothesis".
What I've realized since finding out about PCT, is that these predictions can be viewed as memory-based linkages between controllers - they predict, "if this perception goes to this level, then that perception will go to that level", e.g. "IF I have to work hard, THEN I'm not smart enough".
I already had this sort of IF-THEN rule formulation in my model (described in the first draft of TTD), but what I was missing then is that in order for a predictive rule like this to be meaningful, the target of the "then" has to be some quantity under control -- like "self-esteem" or "smartness" in the previous example.
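A bare-bones sketch of what such a linkage might look like - the dictionary structure and the numbers are mine, purely to illustrate how firing an IF-THEN rule creates error in a controlled quantity:

```python
# Illustrative IF-THEN predictive link between controllers: the "then" side
# targets a controlled quantity, so firing the rule manufactures error.

# A hypothetical controlled quantity with perceived and reference levels:
controlled = {"smartness": {"perceived": 0.8, "reference": 0.8}}

def apply_rule(perceptions):
    """'IF I have to work hard, THEN I'm not smart enough' - the rule's
    output is a predicted perception, not a change in reality."""
    if perceptions.get("working_hard"):
        controlled["smartness"]["perceived"] = 0.3

apply_rule({"working_hard": True})
error = (controlled["smartness"]["reference"]
         - controlled["smartness"]["perceived"])
print(f"error in 'smartness' controller: {error:.1f}")  # 0.5, felt as distress
```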
In the past, I considered these sort of simple predictive rules to be the primary drivers of human behavior (including rationalizations and other forms of verbal thinking), and they were the primary targets of my mindhacking work, because changing them changed people's automatic responses and behavior, and quite often changed them permanently. (Presumably, in cases where we found a critical point or points in the controller network.)
This seemed like a sufficient model to me, pre-PCT, because it was easy to find these System 1 rules just underneath System 2's thinking, whenever a belief or behavior pattern wasn't working for someone.
Post-PCT, however, I realized that these rules are purely transitional -- merely a subset of the edges of the control hierarchy graph. Where before I assumed that they were passive data, subject to arbitrary manipulation (i.e. mind-hacking), it's become clear now that the system as a whole can add or drop these rules on the basis of their effects on the controllers.
Anyway, I'm probably getting into too much detail, now, but the point is that I agree with you: merely having controllers is not enough to model human behavior; you also need the memory-predictive links and somatic markers (that were already in my model), and you need PCT's idea of the "reorganization system" -- something that might be compared to an AI's ability to rewrite its source code, only much much dumber. More like a simple genetic-programming optimizer, I would guess.
↑ comment by orthonormal · 2009-07-06T17:48:42.128Z · LW(p) · GW(p)
Ego depletion is simply a function of conflict between controllers -- PCT predicts that two systems in conflict will trigger maximal activation of the neural pathways involved in the conflict, like two competing thermostats simultaneously running the heat and A/C at maximum. And this would naturally be expected to result in overuse of brain fuel (e.g. glucose).
OK, this is very good; this is an area in which PCT seems to make relatively clear testable predictions, one of which correctly predicts already known data on ego depletion, brain activity and glucose level. Why didn't you bring this up earlier? This is exactly the sort of thing we've been asking for. Clever fMRI studies showing a wide variety of mental distress as conflicts between different systems, escalating in activity and glucose use until one can't keep up, would be strong evidence in favor of your account.
As I remarked elsewhere in the thread, it looks quite reasonable to me that we have some control circuits at various levels of our mental architecture; what I balk at is the assertion that these control circuits comprise all (or nearly all) of the architecture. But if evidence of this sort were found, I could be convinced.
Replies from: pjeby
↑ comment by pjeby · 2009-07-07T02:20:48.803Z · LW(p) · GW(p)
OK, this is very good; this is an area in which PCT seems to make relatively clear testable predictions, one of which correctly predicts already known data on ego depletion, brain activity and glucose level. Why didn't you bring this up earlier?
Probably because it seemed way too obvious to me. In the first draft of Thinking Things Done, I predicted we'd eventually find ego depletion to be an energy drain due to muscles fighting each other (rather than nerves as predicted by PCT), because that was an expected outcome from my model of conflicting impulses.
I thus viewed PCT as merely a minor enhancement over my own model (in this specific area), since it showed how you could get the effect even without any muscle movement. (My hypothesis was that emotion-suppression tasks in ego-depletion research were physically draining because they required you to override somatic markers.)
I actually think it's pretty likely that both are the case, though -- i.e., PCT's maximum neural outputs would in some cases also cause conflicting muscle contractions, in addition to the neurally-based energy depletion. (Also, when I made my prediction, the research showing widespread brain activity for ego-depleting tasks hadn't been done yet, or at least hadn't made its way to me yet.)
Anyway, I have a tendency to forget that most people don't know what I know; things like this seem obvious to me, as there are far fewer inferential steps between my (old) model and PCT, than there are between naive anthropomorphic psychology and PCT.