Motivation: You Have to Win in the Moment
post by Ruby · 2019-03-01T00:26:07.323Z · LW · GW · 20 comments
Epistemic status: Confidence that the models expressed are true to reality: 80%; Confidence that the inferred advice isn’t bad advice: 80%; Confidence that the models presented are complete: less than 50%. I'd bet I'm still missing an important piece or two from the picture.
It often seems that if only we could get our motivation in order, we could work wonders. Wouldn’t it be good if we always had the motivation to do what we really wanted or ought to do? Our homework, our taxes, our jobs, our diets and exercise, our artistic and creative hobbies, our study plans, on and on. There are so many things we feel like we theoretically could do and yet we don’t. There are so many things we do, but only by fighting ourselves each step of the way [1].
Wrestling with a few novel motivational challenges of my own, I’ve been revisiting my models and skills. It feels like a number of pieces have clicked into place recently and I have some insights worth sharing. Hopefully, you will benefit from this post and/or tell me how it is wrong or incomplete.
Your Motivation System
We can reasonably believe that your brain has a motivation system (or an overall motivation system comprised of a great many motivation subsystems). One reason to believe this is the extensive scientific literature captured in tens of thousands of papers and built from probably at least as many researcher-years. Compared to that, the models described in this post are the crudest simplification. However, the models in this post have the advantage that they’re much quicker to read and much easier to apply to your life.
The second reason to believe that your brain has a motivation system is a mix of your first-person experience and first-principles reasoning. Consider that you have only one body and that in any given moment you can execute approximately only a single activity. You are eating or you are sleeping, but not both. Something in you must make a decision at every given moment about which activity you will execute.
Now, you might think that it is you who makes the decision about what to do. “It is me, I think with my free will and weigh my options and then decide what to do!” Ignoring for the moment that we so often struggle to do what we “decide” to do, I will assert that there are far, far too many considerations when deciding what to do for your conscious, explicit mind to handle it all on its own. You’d need to be tracking all your biological needs (hunger, thirst, sleep, temperature), all your opportunity costs, all possibly relevant pieces of experience and heuristics that feed into decisions, the costs, benefits, risks, and on and on. Maybe you could build a conscious mind to handle all that, but I don’t think we humans are built that way.
Instead, we have these handy, dandy subconscious mental systems which aggregate all the relevant info and ultimately produce experienced felt urges or drives to do particular things. Some scientists might like to define motivation as the energizing of behavior in pursuit of a goal [2], but I think for our practical purposes it’s better to define motivation as follows:
Motivation towards a behavior X is the experience of an urge to do X.
Via urges (which sometimes promise reward if acted upon, but not necessarily), our mind tells us what to do. If I am thirsty, I experience an urge to go find a drink. If I am tired, I experience an urge to sleep. If I see a tasty donut, I experience an urge to eat it. If I am anxious about failing my calculus class, I experience an urge to go study.
Note that although they are not the same thing as urges, emotions are deeply connected to them. Major theories of emotion acknowledge five components of emotion, one of which is action tendency [3]. This is close to saying that emotions usually come with urges. Anger with the urge to lash out, shame with the urge to hide, love with the urge to express caring or affection.
Motivation is a Competition
In any moment, there are a great many appealing actions we might take. Taking a nap might be pretty good, or listening to music, or having a snack, or going on FB, or finishing off that report for work. We can’t experience compelling urges to do all of them, and definitely not equally strong urges. That would defeat the point. The point of motivation, given the rivalrous nature of actions, is that we experience an urge to do the one thing we most want/ought to do.
Admittedly, we do often experience multiple competing urges in the moment. You might be very hungry but also really want to finish the level of the video game you’re playing - you’re experiencing an urge for both. This isn’t uncommon, but I think it is uncommon to have ten strong urges towards ten different things at once. It might be reasonable to have a couple of competing urges at once: the conscious mind experiencing those urges can then arbitrate, perhaps using some abstract S2 knowledge not available to the mostly S1 motivation system.
The overall point here is that for you to have motivation towards doing X (an urge/drive towards doing it), X can’t be outcompeted by Y within your motivation system. If X is outcompeted by Y, you are very likely to find yourself doing Y instead.
You Have to Win in the Moment
Not only is motivation a competition, but that competition is real-time. In each moment, you must choose your actions from amongst many possibilities and, in each moment, your mind generates guiding urges towards some options but not others.
You might suppose that you can make a decision at 11:00am about what you will do three hours later at 2:00pm -- this doesn’t free you from making a decision later on nor abrogate the need for motivation later on. When 2:00pm comes round, you will either feel motivation to execute your 11:00am plan or you won’t (as is so common), and you will have to decide whether to stick to your plan or not. You can make plans in advance, you can form intentions in advance, you can attempt to set up commitment mechanisms in advance -- yet what you ultimately do in the moment will be decided in the moment by the conditions that obtain in the moment.
The consequence of this is that any successful plan of action requires that you have sufficient motivation/urges (or willpower, see Appendix) to overcome all other competing motivation/urges throughout all the moments you need to be executing that plan.
It is not enough that you feel really, really motivated towards your plan when you are making it if that motivation will later be outcompeted by other things once it’s execution time. Alternatively stated, people make bad plans because they fail to account for the competing urges which will predictably arise when they’re trying to actually execute. People further fail to correct their plans because, even when they’re not succeeding at executing them, they don’t notice that the reason is that they’re losing the motivation competition continually at the moment-to-moment scale. It’s easy to focus on your motivation to do X while forgetting about your competing motivations to do Y and Z.
Consider different meanings we might assign to “I am motivated to do X.” This might mean that:
A. I have a preference to be in a world where I have done X. Perhaps then I will experience more pleasure or less pain.
B. I believe X will instrumentally result in a desired outcome A such that X is an action I would benefit from taking.
C. Sometimes I experience an urge to do X.
D. I experience sufficient urge to do X that it outcompetes urges for other things I might do enough of the time that I actually make meaningful progress towards X.
I contend that for any action X, you are only truly motivated to do X if you have achieved state D. Anything else amounts to “I am motivated to do X, but not, like, actually motivated enough to do it instead of other things.”
How to Win
Saying that motivation is a matter of winning in the moment is all very good, but how does one actually do that?
Unfortunately, a proper treatment of this not-so-small topic would make this post far too long and instead requires its own post (Motivation Part 2: How to Win, coming soon to a screen near you!). Nonetheless, I can offer a high-level summary here:
- Understand how your motivation system performs its moment-to-moment prioritization and design around that. This will lead you to things like prospect theory, hyperbolic discounting, and trivial inconveniences [LW · GW]. Turns out that we humans overvalue things that are immediately in front of us and undervalue things that require even small amounts of effort. Good planning around motivation accounts for this. Also, understand how your motivational proclivities are predictably different in different circumstances.
- You win a competition by either getting stronger than your opponents or making them weaker than you. I conjecture that almost all good motivation advice and techniques are clearly one or the other.
- Making the competition weaker: removing competitors, making them less appealing, or subjecting them to delay. No cookies in the house, delays or blocks on distracting websites such as Facebook, making your phone grayscale.
- Making your champion stronger: envisioning the value you will attain, making good plans that you actually believe in, altering the plan to have fewer costs or greater benefits, propagating urges to S1, layering on greater rewards to actions such as social rewards, increasing the cost of not taking the desired action, e.g. social penalty, financial penalty (think Beeminder).
Good handling of motivation requires recognizing whether you need to make your champion stronger or make the competition weaker. Removing distractions will only help a little if the X you’re trying to do feels disconnected from anything you value or just unlikely to work. In that case, better to increase your positive motivation towards X before anything else. Conversely, if X seems important and appealing, but it’s really hard to stop bingeing on good TV, then meditating on just how much you like X might be an inferior method to a TV-reducing commitment mechanism. Of course, usually you’ll want a dose of both.
Though they require a tad more complexity to model, I’d say that ingraining habits, setting intentions, and generally making commitments to yourself all fit within this paradigm too. Consider that a habit of doing X 1) lowers the cost of X because there’s less overhead in deciding when and how to do it, and 2) creates a cost of not doing X because breaking your habit feels bad. These are effects which are felt in the moment, which is why they can be effective. A motivation-boosting mechanism like “if you run 90 out of the next 100 days, then you get cake” might struggle because the reward is distant, and distant rewards struggle in the face of how the motivation system discounts hyperbolically.
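As a toy sketch of that last point (the reward magnitudes and the discounting parameter here are my own illustrative assumptions, not anything from the research literature or this post), the standard hyperbolic form V = A / (1 + kD) shows why a big reward 100 days away loses the in-the-moment competition to a tiny immediate one, even though it wins when both are viewed from far in advance:

```python
def hyperbolic_value(amount: float, delay: float, k: float = 1.0) -> float:
    """Subjective value of a reward of size `amount` arriving after `delay` days,
    under hyperbolic discounting with parameter k."""
    return amount / (1 + k * delay)

# In the moment of temptation: cake (worth 100) is 100 days away,
# an hour of TV (worth, say, 5) is available right now.
cake_now = hyperbolic_value(amount=100, delay=100)  # ~0.99
tv_now = hyperbolic_value(amount=5, delay=0)        # 5.0
assert tv_now > cake_now  # the tiny immediate reward outcompetes the big distant one

# Viewed 95 days in advance (cake 195 days away, TV 95 days away),
# the big reward wins -- the classic preference reversal.
cake_far = hyperbolic_value(amount=100, delay=195)  # ~0.51
tv_far = hyperbolic_value(amount=5, delay=95)       # ~0.05
assert cake_far > tv_far
```

The reversal is the point: a plan that looks obviously worth it at planning time can still lose every individual moment of execution.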
That’s the high-level picture. I hope to provide more concrete advice in Part 2.
Appendix: Willpower
I’ve been talking as though all the things we do derive from urges, yet we all recognize the act of doing things despite strong urges to the contrary. All those times you get out of bed despite it feeling so good beneath the covers. The common term for this ability to go against our urges is, of course, willpower. My stance is that willpower is crucial and yet frequently used as a substitute for handling one’s motivation well.
People overly rely on willpower to make themselves do things even “when they don’t want to.” They imagine that if only they had more willpower - more ability to make themselves do things in the face of their urges not to - then they could do anything. This is a bit sad. Instead of wishing you wanted to do things, you’re wishing you could make yourself do things despite having urges not to do them.
The model I like of willpower is that it is a limited override capability over the default motivation system: it lets you decide to do X even when your motivation system has created a greater urge behind Y. This is useful because we often find that the motivation system is making a mistake. That is why it is not inherently a bad thing to want to reflectively shift the motivational urges you experience.
Default motivational urges can tend to be a little short-sighted, impulsive, simplistic, and generally subject to a host of reasoning errors. Consider how you might use willpower to not eat an extremely delicious but poisoned cake, not have a fling with someone who is known trouble, or actually write your term essay for your long-term benefit instead of bingeing Netflix.
But just as it is important to be able to overrule the motivation system sometimes, it’s important that you can’t overrule it all the time. Humans who had the unlimited ability to override their basic urges would probably kill themselves [LW · GW] through lack of sleep, food, or drink. Or they might get stuck in really bad loops which they fallaciously decided were a good idea even though their urges are screaming at them to stop. It’s good that we have a system of checks and balances where neither S1 nor S2 has complete power.
At the end of the day, one set of biases and heuristics such as hyperbolic discounting, scope insensitivity, and loss aversion means we can’t trust S1 all the time; and another set of biases such as motivated reasoning, commitment effects, and just plain bad reasoning means we can’t always trust S2. A limited pool of willpower to be used wisely isn’t the worst solution given this situation.
Much of handling motivation well is using willpower in the right places and not in the wrong ones. The natural heuristic for this is sustainability. If your plans seem to keep failing because you run out of willpower, it might be best to alter your plans to not need so much.
The role of willpower reinforces the importance of viewing motivation as being decided in the moment. This is because your ability to execute doesn’t depend on how much willpower you have while making your plans: if your plan to do X depends on having willpower to do X despite contrary urges, then it only matters how much willpower you have in the moment of attempting to do X. If you’re going to rely on willpower, then you’ll at least need to get good at predicting how much willpower you will have at times other than when you’re making the plan.
Recommended Reading
- The Upstream Effects section in Sebastian Marshall’s book PROGRESSION. Though it might sound like this is about advance planning, in actuality Marshall is exemplifying what it looks like to take the right actions in advance so that you can win in the moment.
- LessWrong Wiki: Akrasia. Contains a list of about a dozen relevant and useful posts; note the post on Picoeconomics.
- Blue-Minimizing Robot Sequence [? · GW] by Scott Alexander. Relevant to how parts of our motivation system are executing relatively dumb behaviors rather than maximizing utility in any meaningful way. Will be relevant for Part 2: How to Win
- Ugh fields [LW · GW] describes an instance of failure in the moment.
- Productivity through self-loyalty by Nate Soares is not immediately connected to this, but it does describe approaches to weakening your enemy, or more accurately, how to turn your enemy into your ally.
- A Crash Course in the Neuroscience of Human Motivation [LW · GW] by lukeprog. Long and thorough, but written as a theoretical/academic text rather than a practical guide. Recommended for those with a strong academic curiosity. Last updated in 2011, so parts may be outdated.
Endnotes
[1] The lack of motivation is approximately synonymous with the concept of akrasia. LessWrong had an akrasia phase about a decade ago when many rationalists were seeking to fully understand and overcome the phenomenon. I think it’s a shame people don’t talk so much about motivation and akrasia anymore; they’re no less important topics now.
[2] Simpson & Balsam (2015), The Behavioral Neuroscience of Motivation: An Overview of Concepts, Measures, and Translational Applications in Behavioral Neuroscience of Motivation, p.3
[3] Sander (2013), “Models of Emotion” in Cambridge Handbook of Affective Neuroscience, p.17
Taken together, the major theories of emotion acknowledge the existence of five components: (1) appraisal, (2) expression, (3) autonomic reaction, (4) action tendency, and (5) feeling. These components are discussed in detail in the section, “Theories of Emotion and Emotion Components.”
20 comments
Comments sorted by top scores.
comment by Gordon Seidoh Worley (gworley) · 2019-03-03T04:24:55.542Z · LW(p) · GW(p)
I think this is the best, most clearly and accurately written explanation of this insight to appear within the rationalsphere so far. Most of us, myself [LW · GW] definitely included, have focused our explanations largely on narrow ways to approach this point without doing justice to the breadth of it, and I'm not really sure why we've all done that, though my guess is we focus too much on our own entry points, and possibly you've done the same but your way into the insight happened to be one that naturally admits a general explanation. Either way, kudos.
That said, this wouldn't be a very LessWrongy comment if I didn't have a few, possibly antithetical, things to say about it.
First, I agree that you get the model right, but it's a model that is only very compelling from a certain stage of development, my strongest evidence being it was once very compelling to me and now it's more like the kind of understanding I would have if I was asked to manifest my understanding without explaining below a certain level of detail, and the other being I think I've seen a similar pattern of discovering this and then focusing on other things in the writing of others. That doesn't make any of it wrong or not useful, but it does suggest it's rather limited, as I think fellow commenter Romeo also points out [LW(p) · GW(p)]. That is, what's going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, "where do the beliefs that power motivations come from?" and "why do you even prefer one thing to another?") you'll see it explode apart in a way that will make you go "oh, I had it right, but I didn't really understand it before" the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say "oh, I'm amazed I even had such good understanding before given how little I really understood".
I say this not because I want to show off how great I am, even if it seems that way, but because I think you're on the path and want to make it absolutely clear to you that you made progress and that there's much, much deeper to go, whether you pursue that now or later. I say this too because I wish someone had said it to me sooner, as I might have wasted less time being complacent.
Second, just to set expectations, it's unfortunately unlikely that having this model will actually help many people. Yes, it will definitely help some who are ready to see it, but years of trying to explain my insights has taught me that one of the great frustrations is that fundamental insights come in a particular order, they build on each other, and the deeper you go the smaller the audience of people explaining your insights to will help. This doesn't mean we shouldn't do it, as I think anyone who figures these thing out can attest because we've all had both the experience of reading or hearing something of someone else's insight that helped us along and of figuring something out and then helping others see it through our explanations, but it also means we're going to spend a lot of time writing things that people just won't be ready to appreciate yet when they read it. Again, this is a pattern it took me a long while to accept, and once I understood what was going on I overcame much of my previous feelings that I was misunderstanding things despite clear evidence to the contrary because when I tried to explain my understanding it often was met with confusion, misunderstanding, or hostility (my Hegelian writing style not withstanding).
I very much look forward to seeing part 2, and hope it ends up helping many people towards gaining better understanding of how motivations work!
Replies from: Ruby
↑ comment by Ruby · 2019-03-03T18:09:57.343Z · LW(p) · GW(p)
Thanks! It’s encouraging words like those which keep me writing.
I’d say one of the things I attempt to do with my writing (and in my thinking too) is clarify the foundations sufficiently clearly that you can never forget them and therefore always apply them appropriately. I find that points that initially feel obvious to me are actually still a bit murky, haven’t been fully absorbed, and therefore I don’t actually use them or appreciate their many applications. Getting clear and accurate explanations (even just for myself) makes them accessible enough to my mind that they become a lot more useful.
A further benefit, and the original reason I found myself doing this style of writing, is that clear foundations allow you then to express more complicated, profound, nuanced pieces within a solid context. Being clear on the foundations, at least for me, makes all the more advanced pieces fall into place and seem much more necessary than if I had them on their own. It gives you a framework to hang things on. The “make your champion stronger” vs “make the competition weaker” distinction is an example of this.
The overall result is a bunch of writing that doesn’t register as particularly profound, but is very clear. Or so I’ve been told.
My entry point for this is actually thinking and writing about planning. I’ve been writing something of a sequence/book on planning and any practical planning book for humans would benefit greatly from a decent treatment of motivation (and planning around it). The insights/models in this post arose naturally from that context.
With that background, I think I can explain why I disagree with many of your points.
> First, I agree that you get the model right, but it's a model that is only very compelling from a certain stage of development, my strongest evidence being it was once very compelling to me and now it's more like the kind of understanding I would have if I was asked to manifest my understanding without explaining below a certain level of detail, and the other being I think I've seen a similar pattern of discovering this and then focusing on other thing in the writing of others. That doesn't make any of it wrong or not useful, but it does suggest it's rather limited, as I think fellow commenter Romeo also points out [LW(p) · GW(p)]. That is, what's going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, "where do the beliefs that power motivations come from?" and "why do you even prefer one thing to another?") you'll see it explode apart in a way that will make you go "oh, I had it right, but I didn't really understand it before" the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say "oh, I'm amazed I even had such good understanding before given how little I really understood".
The insights in the post exist at a certain level of abstraction, as you say, it doesn’t manifest understanding below a certain level of detail. That’s quite intentional. I disagree that it is only compelling from a certain stage of development or is rather limited. Far from it.
I’ve been aware of underlying details (just see the references) for a lot longer than I’ve appreciated the high-level general points here because I think the lower-level points easily obscure the higher-level picture. This is perhaps related to your assertion that other writers haven’t done justice to the breadth of the ideas here. If you can’t stop thinking about transistors, you will find it hard to focus on and fully appreciate the boolean algebra you’re executing on your logic gates made out of transistors. It’d be even harder to teach someone to understand boolean algebra (let’s say minimizing digital circuits) if you want them to keep transistor operation in mind at all times. And if your abstractions are tight (not leaky) enough, you actually don’t need to understand the underlying complexity for them to be useful. Transistors and logic gates are human designs, though. A better example might be understanding evolutionary selection processes. If you can't abstract away from the biological implementation of sexual reproduction for a few minutes, you're likely to miss the higher-level picture of why sexual reproduction is even a thing. What problem was it trying to solve and what does it imply for the implementation?
In this context though, hmm. I think the insights/models here aren’t sufficient on their own to help you manage your motivation well, and perhaps that’s your point. The original post had to split into two parts because adding in the further models needed was going to balloon the post out to six thousand words easily. If your point is “there isn’t enough detail here to be practically useful”, that’s kind of true.
A major hope for this piece is that if you appreciate the abstraction at this level, you will understand why all the lower-level pieces look how they look. Many people have created lists of anti-akrasia/motivation-enhancing techniques and also highly-detailed reviews of how motivation works (just see Luke’s review [LW · GW]). A goal with this post is that you see enough of the general picture that it is clear why various motivation-techniques work and when they're needed. If I launched into talking about Hebbian learning and prospect theory, I assert you'd probably miss the very design problem that, at its heart, the mind/motivation system is trying to solve. Hence holding off on that lower level for other posts.
> Second, just to set expectations, it's unfortunately unlikely that having this model will actually help many people. Yes, it will definitely help some who are ready to see it, but years of trying to explain my insights has taught me that one of the great frustrations is that fundamental insights come in a particular order, they build on each other, and the deeper you go the smaller the audience of people explaining your insights to will help.
I think I’m more optimistic than you about communicating ideas, though perhaps I’m just sufficiently early in my writing “career” to be naive. I’m working on the premise that sufficiently clear explanations delivered systematically in the right order can recreate in the minds of others much of the understanding you have in your own mind. That does require that people are willing to invest the time, but I think people do invest in reading writing that is sufficiently enjoyable and valuable-seeming.
Replies from: mr-hire, gworley
↑ comment by Matt Goldenberg (mr-hire) · 2019-03-04T14:35:30.133Z · LW(p) · GW(p)
> If you can’t stop thinking about transistors, you will find it hard to focus on and fully appreciate the boolean algebra you’re executing on your logic gates made out of transistors.
I think the point Gordon was making was the opposite: you've described a leaky abstraction of logic gates that works at a base level, but that doesn't pass muster when you actually look at the transistors.
For me for instance, a basic strategy of "make the alternatives I endorse really easy and highly rewarding, and the alternatives I don't really hard and highly punishing" worked really really well for me for a long time, and was sufficient to overcome some of my most obvious bottlenecks.
However, that kind of thinking actively became harmful at a certain point in my development, when I hit diminishing returns on brute forcing my motivation system (I encountered problems that couldn't be brute forced that way, and these problems were my bottlenecks) and had to take a step back to understand what was actually going on, understanding my internal parts, belief orientations and awareness, etc.
↑ comment by Gordon Seidoh Worley (gworley) · 2019-03-04T21:18:08.625Z · LW(p) · GW(p)
> And if your abstractions are tight (not leaky) enough, you actually don’t need to understand the underlying complexity for them to be useful.
This sounds like the crux of the disagreement: I think no abstraction is sufficiently non-leaky that you don't (eventually) need to understand more of the underlying complexity within the context I see this post sitting in, which is the context of what we might call cognitive, personal, psychological, or personal development (or to put it in non-standard term, the skill of being human). Unless your purpose is only to unlock a little of what you can potentially do as a human and not all of it, every abstraction is eventually a hindrance to progress, even if it is a skillful hindrance during certain phases along the path that helps you progress until it doesn't.
For what it's worth, I also suspect the biggest hurdle we have to overcome to make progress on being better at being humans is gaining enough cognitive capacity to handle more complex, multi-layered abstractions at once, i.e. to see both the machine and the gears at the same time. Put another way, it's gaining the ability to not simply abstract "away" details but to see the details and the abstraction all at once, and then do this again and again with more layers of abstractions and more fine-grained details.
Replies from: Ruby
↑ comment by Ruby · 2019-03-05T18:38:38.727Z · LW(p) · GW(p)
Hmm, I'd like to step back and tally the different claims that have been surfaced so we can at least be clear where we disagree, even if we don't end up agreeing. Among the claims:
A: Abstractions are sometimes useful.
B: Abstractions eventually break down and the underlying complexity needs to be understood for further usefulness.
C: The abstraction in my post is only compelling from a certain stage of development / it's limited (but this assertion was accompanied by the statement that this doesn't mean it was wrong or not useful).
D: The abstraction in my post is unlikely to help many people.
E: The particular abstraction in my post is leaky, is limited, and can become harmful after a certain point in development.
F: That abstractions are indispensable and are needed to guide understanding even when you dip to lower layers.
G: That it's harmful to always be trying to look at lower levels of abstraction without higher levels.
H: That not understanding lower levels means that you don't understand very much at all.
That's not every assertion, but it's maybe enough to start getting some clarity. I think that Gordon, mr-hire, and myself all agree on A and B broadly, though we might disagree on where the line is drawn for each. Gordon, you write:
> This sounds like the crux of the disagreement: I think no abstraction is sufficiently non-leaky that you don't (eventually) need to understand more of the underlying complexity within the context I see this post sitting in, which is the context of what we might call cognitive, personal, psychological, or personal development (or to put it in non-standard term, the skill of being human). Unless your purpose is only to unlock a little of what you can potentially do as a human and not all of it, every abstraction is eventually a hindrance to progress, even if it is a skillful hinderance during certain phases along the path that helps you progress until it doesn't.
I mean, I wouldn't disagree with that in general, though I think a lot of work is being done by the words "eventually" and "sufficiently non-leaky". I think there are contexts where you get away without needing to go all the way deep. Most (I would think the overwhelming majority) of people who code don't know assembly and certainly don't understand how logic gates are implemented - and this is pretty fine 99.99% of the time.
It is fair to say that the abstraction/model in my post is not nearly as good as the abstraction of high-level computer languages. That's true - actually, it breaks pretty quickly. Part 2 of this post will dive deeper. Nonetheless, I do think it's quite useful even if one doesn't read further. Gordon, I'm unclear what your stance is - you first state that it's useful and then state that it's unlikely to help many people, so I'm not sure of your actual position.
I do disagree with C (compelling only from a certain stage of development) in that I think even once you have much deeper understanding, the higher levels of abstraction remain crucially important. Just because you understand electromagnetism really well and know the limits of conventional circuit theory (e.g. designing super duper tiny transistors), doesn't mean you want to throw out circuit theory and just solve Maxwell's equations everywhere - even if eventually sometimes you have to.
I don't disagree that you need more detail for a lot of applications. As mentioned, this unfortunately couldn't fit into a single first post. As I wrote:
Saying that motivation is a matter of winning in the moment is all very good, but how does one actually do that?
Unfortunately, a proper treatment of this not-so-small topic would make this post far too long and instead requires its own post (Motivation Part 2: How to Win, coming soon to a screen near you!). Nonetheless, I can offer a high-level summary here:
But again, I don't think what was presented here stops being compelling later on.
I also think D (unlikely to help many people) is somewhat false, depending on what counts as "many people". Another commenter [LW(p) · GW(p)] felt this post was quite useful, someone else on FB found it rather revelatory, and I'd infer that several more people benefited even if I don't know of it directly. That's beyond the inside view that the abstraction/model presented can be applied already. mr-hire also states [LW(p) · GW(p)] that simpler ideas worked well for a really long time (though I'm not sure which simpler ideas, or what counts as "brute force").
Back to B (abstractions break down, eventually become a hindrance). Definitely agree here.
I think if your initial comment, Gordon, had been something like:
I think this model/abstraction is correct and useful to an extent, but I want to flag that it is just a very high-level abstraction which is missing a lot of the very messy detail that is relevant (and quite necessary) for this domain. It'll help you on its own at a certain stage, but after that you'll need more.
Then I wouldn't have disagreed at all. I think the disagreement might mostly be around a) how quickly abstractions break down, b) how much you still need them even once you understand.
To be honest, I did bristle at the way some things were phrased, but that's on me. It felt like there was some kind of implication that I personally didn't have any deeper understanding, and that felt bad.
That is, what's going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, "where do the beliefs that power motivations come from?" and "why do you even prefer one thing to another?") you'll see it explode apart in a way that will make you go "oh, I had it right, but I didn't really understand it before" the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say "oh, I'm amazed I even had such good understanding before given how little I really understood".
This felt like a slight, since I think the post references much more detailed resources and even flags in the opening lines that what's presented is the "crudest simplification". Yet quite possibly you were addressing the hypothetical reader rather than me. But even if not, I still shouldn't let that influence my response too much. The additional words of caution about the limitations of abstractions (especially here) are worthwhile. I regret that, because of these very long comments, readers might not see this point and our overall agreement on it.
Hmm, I do think the thing I haven't addressed here is my stance that better abstractions, and better understanding of abstractions (something I think is neglected in the domain of self-understanding and self-improvement), are actually key to using lower-level understanding more systematically and in a less ad-hoc way. Perhaps I'll save that for another very long comment :P
By the way, thanks for engaging so much. I don't think I've ever dived into such a detailed discussion before.
↑ comment by Matt Goldenberg (mr-hire) · 2019-03-07T14:11:21.943Z · LW(p) · GW(p)
mr-hire also states [LW(p) · GW(p)] that simpler ideas worked well for a really long time (though I'm not sure which simpler ideas, or what counts as "brute force").
I'm very much interested in the object level of this post, and want to return to that.
To be more explicit about the levels of development here:
At some point, I was all about pragmatics. Every single change I could make that made me more likely to take my endorsed actions and less likely to take my unendorsed actions was used. I had a Pavlok. I used Beeminder. I had blocking software. I used social pressure when it helped and avoided it when it didn't. I reframed my beliefs to be more powerful. Comfort zone expansion was my default - when something scared me, I felt the fear and did it anyway. I even used techniques that would become central in the next stage of development - looking at beliefs, using introspection, using mindfulness and being in the moment - but the framing of it all was a big pragmatic "use the things that make me more likely to take my intended actions."
At some point, this type of thinking just hit a brick wall. It led me to crashes, where I would follow my endorsed actions for months, and then crash, unable to force myself to go forward even with all of the pragmatic motivation tools I had set up. It also caused me to get myself into trouble one too many times - one too many subconscious Chesterton fences that I ignored in the pursuit of taking the action that was "obviously correct."
It became clear that there was something being missed in the simple piling on of pragmatic motivational tools. At this point, it became necessary to delve deeper into the relation between subconscious beliefs and actions taken. Introspection became very important, as did understanding how tools like mindfulness related to how I oriented to my internal beliefs. Tools like the parts model became much more useful, and understanding the good that came from situations became important. I started seeing the previous motivational tools as "brute forcing", trying to go against the grain of the more fundamental influences of beliefs, parts, and belief orientations. I used them more sparingly, surgically, here and there as tools to shape beliefs and get things done pragmatically, while being aware of the pitfalls.
Hopefully that gives a clearer picture of where I (and, I suspect, Gordon) am coming from.
Edit: This post gives some more explicit pointers towards my current model, although it's obviously a bit behind: https://www.lesswrong.com/posts/mFvuQTzHQiBCDEKw6/a-framework-for-internal-debugging
Replies from: Ruby↑ comment by Ruby · 2019-03-08T03:43:30.380Z · LW(p) · GW(p)
Thanks for the elaboration. Yes, I see what you mean by brute force, and I also see how my post might be read to be advising an approach similar to what you described. I don't know whether a pragmatic approach like that is a good developmental stage to go through? Maybe for a bit, but I'm not sure.
If the post didn't shed any light on how a brute force approach is not the only option and not necessarily the best, I think it's because I forgot that someone might approach motivation in that way. Only reading your description brought it back into my mind.
Going back five to six years, I did have a phase when I was very big on "discipline" and I certainly tried to muster willpower to make myself do things - but it was never that successful or systematized. Around the time I began making more serious efforts to be productive, I was already engaged with CFAR, reading mindingourway.com, and generally being coached into an approach of non-willpower-reliance and non-self-coercion. Yet it must have been long enough ago that I'd forgotten there's a very natural approach to motivation where you pile on productivity tricks in a not-quite-sustainable/healthy way.
So, thanks for pointing that all out. That's a good reminder.
For the public record, I think ideal motivation is attained when you have something resembling a state of harmony in your mind and with yourself. You might do things to make certain actions seem more attractive and/or to decrease temptation, but it isn't coercive or depleting. This is difficult to achieve and requires a lot of introspection, self-awareness, resolving inner conflicts, etc., etc. If you're doing it right, you're not suffering. You don't crash. It doesn't feel like you're coercing yourself.
It's possible I should have stated something like that in the post itself.
Replies from: mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2019-03-08T16:55:41.473Z · LW(p) · GW(p)
I still think there are cruxes there that you're not seeing. My approach just accentuated the problems of looking at things at the level of a motivation system; they're still there even if you have the idea of harmony... they stick around until you realize that the harmony is the thing, and the motivation system analogy is just crudely approximating it. (Of course, I'm sure the harmony is just crudely approximating something even more fundamental.) Note that this is the same thing that stuck out to me during your ACT presentation - missing that the harmony was the thing, not the ability to take actions.
I don't think there's much more of a gap that can be bridged here, at least not with my skills. I won't be replying anymore, but I appreciate you engaging :).
Replies from: Ruby↑ comment by Gordon Seidoh Worley (gworley) · 2019-03-05T23:06:58.264Z · LW(p) · GW(p)
I do disagree with C (compelling only from a certain stage of development) in that I think even once you have much deeper understanding, the higher levels of abstraction remain crucially important. Just because you understand electromagnetism really well and know the limits of conventional circuit theory (e.g. designing super duper tiny transistors), doesn't mean you want to throw out circuit theory and just solve Maxwell's equations everywhere - even if eventually sometimes you have to.
So maybe it would help if I was a little more specific about this point. When I said "compelling" here, I meant to point to something that is both intellectually interesting and useful because it feels new and like it's engaging with the edge of development. Stuff like this becomes uncompelling as one gains mastery. So I think I was trying to pass on the wisdom of my accumulated experience in this area - from building, learning, using, and presenting models like this one and then, upon reconsideration, finding them limiting, though they had been useful at one point when I didn't have access to any deeper details to help me along.
My objective in pointing this out is tied in with the next bit, so we'll just go ahead and segue to that.
To be honest, I did bristle at the way some things were phrased, but that's on me. It felt like there was some kind of implication that I personally didn't have any deeper understanding, and that felt bad.
To be honest, there is an implication like that, based on what I've read here. I could maybe believe you intentionally didn't address some of the deeper points you might understand about the details that I think are relevant, but if that were the case I would expect your footnotes and asides to address topics more about beliefs, preferences, and especially perception and less about those things munged together and rounded off to "motivation". Instead I read this as your honest best effort to explain what's going on with motivation, and I'm telling you I think there's much more going on in directions much more fine-grained than those you seem to have explored, even in the references.
"Motivation" and "intention" are huge, confounded concepts that I believe can be broken apart, thinking of yourself as having a "motivation system" is another confusion, but unfortunately I've not worked out all the details well enough for myself that I'm happy to share my current state of partial knowledge in this area. Unfair, I admit, but it's where I stand. All I can point to is there's a bunch of stuff going on that can be reified into the concept of "motivation" and working with motivation as a concept will be helpful for a while but ultimately "motivation" doesn't cut reality at the joints so thinking in those terms has to be largely abandoned to go further.
Should I have publicly passed judgement on you in the comments section? Probably not, but for some reason I already did so we'll just have to deal with it now. Sorry about that.
My goal here is to be encouraging, however it might come across, and to make clear there is a way forward. As I said to another person recently when I responded in a similar way to something they said, I've been realizing a lot recently the ways in which I limited myself by thinking I understood things. I see in this work clues that you have an understanding similar to how I thought about motivation maybe 3 years ago, and maybe I would already have a ready-at-hand alternative if I hadn't spent so much time thinking I had it right. So I want you to explain what you've figured out, I think your way of explaining what you have is going to be useful for others, I don't want to say anything that might put you off either of those goals, and I also want to push you along so you don't suffer the worst of all calamities: thinking you understand something!
I also think D (unlikely to help many people) is somewhat false, depending on what counts as "many people". Another commenter [LW(p) · GW(p)] felt this post was quite useful, someone else on FB found it rather revelatory, and I'd infer that several more people benefited even if I don't know of it directly. That's beyond the inside view that the abstraction/model presented can be applied already. mr-hire also states [LW(p) · GW(p)] that simpler ideas worked well for a really long time (though I'm not sure which simpler ideas, or what counts as "brute force").
Sure, I guess I was hoping to set expectations appropriately, since I know I've been let down many times broaching these topics with folks. Yes, there will always be some people who you manage to connect with in part because of what you write and in part because of where they are, i.e. they are ready to listen to what you have to say and have it click. They are the cherished folks with little enough dust in their eyes that you write for. But for every person you help, there are probably 20 more who will read this and for one reason or another it won't connect the way you'd hope it would. They might not hate it, and might say they get it, but then they'll just keep on doing what they were doing, not changing anything really, not really having gained any understanding. I was demoralized a lot by this, thinking it must have been me, until I figured out the base rate of success for this kind of thing is pretty low unless you're tackling stuff way down at the bottom of the developmental ladder. I suspect, based on the quality of your explanation, that this post will perform better than average, but that to me probably means something like connecting with 7% of the people who read it instead of 5%.
If you don't know that going in, and depending on what your expectations are, that can be pretty brutal when you realize it (especially if, unlike how it sounds for you, you focus more on the people it doesn't work for than the people it does). I feel like you did well enough on this post that you might do more, and you deserve to know this in case it affects your self-esteem and your likelihood of writing more things like this. Again, this is in the category of "things I wish someone had told me 5 years ago because then I wouldn't have had to figure it out the hard way for myself".
comment by Yoav Ravid · 2019-03-01T06:35:55.279Z · LW(p) · GW(p)
It was weird not to hear of temporal motivation theory in a post about modeling motivation. It's the most successful model in the literature to date (as far as I know). It was developed by Piers Steel, who wrote about it in his book The Procrastination Equation, which Luke made a summary of in this post [LW · GW].
Replies from: Ruby↑ comment by Ruby · 2019-03-01T06:49:20.041Z · LW(p) · GW(p)
You're right. Thanks for pointing that out.
Unfortunately, I couldn't include them in a single post of reasonable length. Temporal motivation theory / the procrastination equation will feature in the eventual Part 2 to this post.
Replies from: Yoav Ravid↑ comment by Yoav Ravid · 2019-03-07T09:29:13.627Z · LW(p) · GW(p)
It seems Piers Steel has even gone beyond TMT in motivation modeling, and also has something he calls the "Goal Phase System (GPS)". It is talked about in this paper - The Building Blocks of Motivation.
comment by romeostevensit · 2019-03-01T02:23:39.358Z · LW(p) · GW(p)
It isn't all the moments you're doing the action but the moments of another candidate winner being surfaced in consciousness giving you the affordance to do something else instead. In that moment you can reaffirm your commitment to the thing or choose the distraction. Poor modeling of the self who has to make that microdecision is behind much confusion.
Replies from: Ruby↑ comment by Ruby · 2019-03-01T03:15:05.418Z · LW(p) · GW(p)
This is a good clarification. Technically you're right: you can maintain motivation so long as you reaffirm commitment in each moment you are tempted, which admittedly might not be every moment (consider flow states).
Though I'd still argue that you should still be thinking about causing yourself to win in every moment. You might maintain motivation through to the completion of a task because either: a) you successfully reaffirmed commitment, or b) no alternative candidate winner was surfaced in a given moment to begin with, yet it matters that one of those is true for every moment of necessary execution.
Though not covered properly in this post, the eventual goal here is to explore how to engineer circumstances, both internal and external, so that you win in each moment whether it be because of a) or b). And that applies to every moment.
comment by ErickBall · 2019-03-01T19:45:25.985Z · LW(p) · GW(p)
Thank you, I'd been thinking about some related issues recently (especially with regard to the blue-minimizing robot) and this post helped clarify things quite a bit. In particular, it highlights the distinction between urges that arise out of fear of long-term consequences and overrides accomplished by willpower, which I have often tended to confuse. I look forward to the second post.
comment by Groudon466 · 2024-02-27T18:00:57.240Z · LW(p) · GW(p)
Did you ever end up making a Part 2?
comment by ArthurLidia · 2019-03-01T01:22:07.162Z · LW(p) · GW(p)
Akrasia can be overcome. I won't try to sell you snake oil, but The 5 Second Rule was a good book to skim too, for the explosive phase (jump on a chair and do homework).
So homework until 3 pm is the struggle phase.