Fundamental Uncertainty: Chapter 4 - Why don't we do what we think we should?
post by Gordon Seidoh Worley (gworley) · 2022-08-29T19:25:16.917Z · LW · GW
N.B. This is a chapter in a planned book about epistemology [LW · GW]. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid to high level feedback about the content. When I publish I'll have an editor help me clean up the text further.
If you're like me, you've probably experienced the following scenario. You need to wake up at a certain time tomorrow, say 6 am. You also want to be well-rested for the day, so you want to get at least 7 hours of sleep. If you wake up late or don't get enough sleep, your day will be noticeably worse. You strongly believe that the right thing to do is to go to bed no later than 11 pm. Ten o'clock rolls around and you think "alright, I'll finish up what I'm doing and then go to bed". Next thing you know you look at the clock and it's midnight.
What happened? You knew what you wanted to do. You had a plan. And then you totally ignored it, stayed up too late, and were tired the next day. "What's wrong with me?" you might ask yourself. "I knew what to do and then just didn't do it!" Why did you stay up too late when you knew it wasn't the best thing to do?
Sometimes it's due to something simple. For example, maybe you were watching a show or reading a book and got caught up in it and lost track of time. It's not that you didn't want to go to bed on time, it's just that you didn't realize what time it was because your attention was elsewhere. No worries, that's a problem that can be easily overcome by setting an alarm to remind yourself to go to bed. Bedtime solved!
Only, one night you hear the alarm and think "oh, I can stay up a little later before going to bed". And then it's 1 am and you're tired the next day. So you try increasingly aggressive solutions: setting multiple alarms, getting your roommate or spouse to charge you $5 if you stay up late, rigging the lights in your home to turn off at 11. Each of these works for a while, but eventually you stay up too late anyway, and even if you do manage to find a solution that works, every night feels like a battle to make yourself go to bed on time.
And this problem is not restricted to sleep. There are all manner of things that people believe they should do but struggle to make themselves do. Here are some examples:
- drink alcohol in moderation
- eat a healthy diet
- get regular exercise
- not smoke
- leave early to show up on time to appointments
- drive without distractions
- always tell the truth
- do homework, taxes, bill paying, etc. on time
In each case many people—and perhaps you in particular—believe these are the right things to do. And yet at times we find ourselves having one drink too many at the bar, eating a greasy cheeseburger when we could have had a salad, neglecting exercise, smoking a cigarette, showing up late to an appointment, looking at our phones while driving, telling lies to smooth over an awkward social situation, or putting off important tasks. Which is strange, because we knew the right thing to do and had the ability to do it, yet did the opposite. Why?
On the one hand this is an easy question to answer because the phenomenon is so widespread: we're irrational, hypocritical beings. We say one thing and do another. It's so common that we just expect most everyone to fail to live up to their own ideals. Even the people we most expect to live up to their ideals, like priests who take religious vows and public servants who take oaths to work for the benefit of their country, sometimes break their vows and oaths. It happens often enough that the fallen priest and corrupt politician are stereotypical characters in stories.
Yet on the other hand, if we really dig into the question of why we would act in irrational and hypocritical ways, it's not so clear. There seemingly isn't any advantage to not doing what one believes is best. For example, every year people die in car accidents because a driver took their eyes off the road to look at their phone. All or nearly all of these people knew that distracted driving was risky and could get them or someone else killed. And yet they looked at their phones anyway! Why didn't they have the willpower to keep their eyes on the road rather than give in to the temptation to check their texts?
Perhaps they, and everyone else who has ever believed they should do one thing but then did another, suffered from a weakness of willpower, a condition known by its ancient Greek name: akrasia. The basic idea behind akrasia is that willpower is like a mental muscle. If a person's willpower muscle isn't strong enough, they won't be able to make themselves do things, much like a person can't lift a heavy weight if their muscles aren't strong enough or are worn out from overuse. We could then say that people who don't do the right thing suffer from akrasia: their willpower simply isn't strong enough.
Akrasia certainly does a good job of explaining my own experience with not doing what I think is best. For example, consider my personal attempts to eat healthy. I believe that the healthiest breakfast for me to eat each day is a bowl of high-fiber cereal with soy milk. I eat it most days, but some days I eat donuts. Donut days start out innocently enough: I go through my morning routine and get ready to pour myself a bowl of cereal. But then... I... just... can't. I have no motivation to make or eat the cereal. I'd rather skip breakfast than suffer one more bowl of high-fiber flakes. Donuts become the only palatable food in the world. Sometimes I can force myself to eat cereal because I know it's good for me, but if I do that I'll feel fatigued for most of the morning, like I need to recover my strength before I can make myself do things again. If I eat a donut instead I'll feel renewed and ready to tackle the day.
I'm not alone. We know from modern psychology that many people report suffering from akrasia, although in scientific studies it goes by the more modern name ego depletion. Unfortunately, those same studies suggest that akrasia probably doesn't exist, at least not as a depletable resource: there's no reliable, measurable difference in how much people who claim to be out of willpower and people who don't manage to get done. Which is strange, because akrasia is an intuitive explanation for our hypocrisy that fits lived experience. How can something that feels so obviously true fail to show up when we measure it?
To find the answer, let's consider what akrasia doesn't explain. Running out of willpower only accounts for why we don't do some things; it doesn't account for why we do other things. For example, akrasia explains why I don't always eat cereal and drivers don't always keep their eyes on the road, but it doesn't explain why I eat donuts or why distracted drivers check their phones. If the world worked the way akrasia naively explains it, then running out of willpower should result in a person doing nothing rather than some other thing since they'd have no willpower to make themselves do anything. So there must be some reason why I eat donuts, distracted drivers check their phones, and other people do the things they do when it feels like they've run short on willpower.
As it happens, I know exactly why I eat donuts instead of cereal: it's because donuts are tasty! After a few days of cereal I yearn for something more. Sometimes I can make myself eat cereal when I really want donuts. I try to convince myself to do the right thing. My inner voice says something like "you don't really want to eat donuts, what you actually want is a healthy bowl of cereal". Somewhere from deep in the recesses of my mind another voice cries out "DONUTS!" Sometimes one voice wins, sometimes the other one does. If I were a cartoon character you might imagine me with a cereal-angel sitting on one shoulder and a donut-devil on the other, both vying for my breakfast soul.
I'm not alone. The existence of the shoulder angels and devils trope is proof enough. Many people experience this sense that they want contradictory things and that their minds are a battleground of conflicting desires. As luck would have it, there's a whole theory of how the mind works that explains this phenomenon: dual process theory.
Of Two Minds
Let me tell you a story about our friends from last chapter who disagreed about the color of phoobs, Alice and Bob.
Alice was hanging out with Bob one day. By accident, she said something that hurt Bob's feelings. Bob didn't make a big deal of it at the time, but later confided to their mutual friend Carol that he was really hurt by what Alice said. Carol let Alice know that she had hurt Bob's feelings. Now Alice feels guilty and wants to apologize to Bob, but every time she tries to call him she freezes. She knows it's going to feel bad to admit she hurt Bob, and she just can't bring herself to call him.
What's going on with Alice? She is rationally convinced that she should call Bob and apologize, but fear overwhelms her when she tries to pick up the phone. She wants to apologize, but she also wants to hide from the shame she feels about hurting Bob's feelings. What's the source of these conflicting motivations?
Dual process theory offers an answer. The simplified version of dual process theory says that Alice's conflicting motivations are the result of the conscious part of her mind wanting something different from the unconscious part. The conscious part is rational and thinks deliberately. It knows that the right thing to do is to call Bob and apologize. The unconscious part is emotional and reactive. It doesn't care what the "right" thing to do is; all it knows is that it feels bad to even think about hurting Bob's feelings, let alone admitting that Alice was the one who did the hurting. So the unconscious part does everything it can to stop the conscious part from getting its way, because the unconscious part believes it is protecting Alice from doing something that is going to hurt.
Because the unconscious part can wield powerful emotions to get its way, the conscious and unconscious parts of the mind are sometimes compared to a rider atop an elephant. The rider does its best to control and direct the elephant, but the elephant is bigger and ultimately in control of where the rider goes. The elephant listens to the rider sometimes—especially if the rider tells it what it wants to hear—but most of the time the rider has only limited capacity to get the elephant to do something it doesn't want to do. With a lot of work the conscious rider can "train" the unconscious elephant, but even then the elephant might get out of control if put under enough stress.
The rider and elephant metaphor for dual process theory helps explain why akrasia seems to account for why we don't always do what's best. The elephant has some limited tolerance for taking commands from the rider. When the elephant's tolerance runs out, the rider feels like they've run out of power to will the elephant to do what they want. When the elephant recovers some tolerance for taking commands, the rider feels like they've regained their willpower to exert some control over the elephant.
This explanation accounts for why I sometimes eat donuts for breakfast: the metaphorical rider of my conscious mind has lost control of the elephant of my unconscious mind, and the elephant wants to snack on donuts, so I snack thusly. It similarly accounts for why Alice finds it hard to call Bob, why drivers get distracted by their phones, and for all the other cases of people not doing what they think is best.
Talking in terms of a conscious rider and unconscious elephant is intuitive, but it's also misleading because our brains aren't neatly divided between conscious and unconscious parts. Daniel Kahneman provides a more nuanced and accurate presentation of dual process theory in his book Thinking, Fast and Slow. He describes the theory not in terms of conscious and unconscious parts of the mind but in terms of two systems, System 1 and System 2. System 1 works quickly and intuitively and takes action before it enters our conscious awareness; System 2 works slowly and deliberately and leaves time for self-reflection. These two systems are best thought of not as physically distinct parts in our brains but rather as two different modes of operation our brains can switch between when needed.
Kahneman explains that we depend on both systems working together because they serve different purposes. System 1 is fast, so it's great for things like keeping us alive in dangerous situations, but in order to be fast it takes shortcuts and sometimes makes mistakes. System 2 thinks carefully and can avoid many of the mistakes System 1 makes, but it achieves this only by thinking too slowly to be of use in many daily activities. As a result we need to engage both systems at the appropriate times to make the right choices to survive and thrive.
Many, if not most, people identify with System 2 more than System 1. They feel like System 2 is their "true self" and System 1, along with their body, is the clunky shell in which their "soul" is trapped. This makes sense because System 2 is responsible for most of our reflective, introspective thinking. System 1, by contrast, is more like a mysterious void from which ideas spring fully formed with no explanation. That's why the rider and elephant metaphor is so intuitive: we feel like we are System 2 riding around on the System 1 elephant. And as a result we tend to tell ourselves that the things System 2 wants and thinks are better than the things System 1 would have us do.
But is that really giving System 1 a fair shake? Our brains have both these systems because we rely on both of them working with each other to bring "us" into existence. Consider Alice and the fear System 1 is creating in her to prevent her from calling Bob to apologize. From System 2's perspective this is just getting in the way of her doing the right thing. But from System 1's perspective, System 2 is trying to get Alice to do a thing that has a high probability of leading to tears, sadness, and a few days of depressed mood. System 1 might be a bit self-centered, but it's doing what it thinks is best. As outside observers, we think the better thing is for Alice to listen to System 2 and apologize. But that's an opinion we form by listening to our own System 2s, and if our System 2s knew what Alice's System 1 knew we might agree with its course of action.
To wit, suppose I left out a key detail about Alice's situation: Bob has a history of bottling up his emotions and letting them out days later. By the time he admits how he's feeling, Bob is like a pressure cooker of anger ready to explode. But his seething won't go on forever: after a few more days he'll vent to a friend and his wrath will dissipate. Alice understands this intuitively: she couldn't provide an explicit model of Bob's behavior like I have, but her unconscious, System 1 mind knows that if she just waits a couple weeks she can apologize without enduring Bob's yelling. Now her fear seems justified, even rational, because it's guiding her to a better outcome than what System 2 would have her do.
That listening to System 1 instead of System 2 sometimes achieves better outcomes casts akrasia in a new light. We have a tendency to think that System 2 knows best and to dismiss System 1 as irrational and emotional. That may often be true, but perhaps equally often System 2 would have us do something that sounds good in theory but would be bad for us in practice. System 1 protects us from making mistakes of that kind. Rather than thinking of akrasia as simply a failure to will ourselves to do what's best, we should consider what System 1 is telling us. Maybe System 1 really is wrong, like when it tries to get us to check our text messages while driving, but sometimes it's right, like when it tries to protect us from an emotionally painful situation. That we don't always understand why System 1 motivates us to do the things we do doesn't automatically make it wrong.
This puts us in a tricky spot, though, because we have to decide if we're going to listen to System 1 or System 2 in any given situation. Do we trust System 1 and its gut feelings or System 2 and its careful thinking? It's tempting to say we should always listen to one or the other, but we've all had times where our intuition steered us truer than our reason and vice versa. Is there a way we can still trust our own judgment about what's best when we have more than one way of determining what is best?
Distrusting Ourselves
You know who only has one idea about what's best? Our friends from last chapter, the Bayesians.
Recall that Bayesians are ideal reasoners who use Bayes' Theorem to ensure that they have the most accurate beliefs possible given the evidence available to them. Consequently, when they update their beliefs based on new evidence, they don't just update one or a few of their beliefs, but all of them at once in light of the new evidence. This includes updating beliefs that are only indirectly affected by the implications of the new evidence, even if those updates are minuscule. For example, if our hypothetical Bayesian from last chapter saw that the sky was red, they would not only increase the probability they assign to the sky being red, but also decrease the probability they assign to it being blue, green, or any other color. They would also update their beliefs about things like the likelihood that they are on Mars, Earth, or another planet, and the likelihood that their senses are being fooled, say because they took a drug and are hallucinating. Because their beliefs work this way, it's helpful to think of Bayesians as having a complex network of beliefs, where updating one belief causes a cascade of updates to shoot out through the network until every belief is updated, however minutely, to accurately reflect everything the Bayesian has learned.
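If it helps to see the cascade concretely, here is a minimal sketch in Python with made-up numbers (the hypotheses, priors, and likelihoods are mine, purely for illustration): a Bayesian who is fairly sure they're on Earth under an ordinary sky observes that the sky looks red, and a single application of Bayes' theorem over a handful of mutually exclusive hypotheses shifts every belief at once, including the seemingly unrelated belief about which planet they're on.

```python
# A minimal sketch (toy numbers) of a Bayesian updating a whole set of
# mutually exclusive hypotheses after one observation: "the sky looks red".

# hypothesis -> (prior probability, P("sky looks red" | hypothesis))
hypotheses = {
    "on Earth, ordinary blue sky": (0.90, 0.01),   # red skies are rare here
    "on Earth, dusty sunset":      (0.09, 0.60),
    "on Mars":                     (0.005, 0.90),  # the Martian sky often looks reddish
    "hallucinating the color":     (0.005, 0.50),
}

# Bayes' theorem: the posterior is proportional to prior * likelihood,
# then normalize so the posteriors sum to 1. Updating any one belief
# necessarily shifts all of the others.
unnormalized = {h: prior * lik for h, (prior, lik) in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({h} | sky looks red) = {p:.3f}")

# Even the seemingly unrelated belief cascades: the probability of being on
# Mars rises from its prior of 0.005 to roughly 0.064 after normalization.
print("P(on Mars) after update:", round(posterior["on Mars"], 3))
```

Nothing in the sketch is special to sky colors; the point is only that a full Bayesian update touches every hypothesis in the network at once, however slightly.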
As a result, a Bayesian's beliefs are guaranteed to be self-consistent. Thus, when they're deciding what would be the best thing to do in any situation, there can only be one best action to take because if there were more than one those multiple "bests" would be inconsistent with each other. This is how they ensure they always act in an optimal, rational way given their prior beliefs and the evidence they've encountered.
We're not so lucky. Like Bayesians we also maintain a network of beliefs, though unlike them we don't update all our beliefs every time we learn something new. We do our best, but our brains lack the capacity to update all of our beliefs at once, so they instead prioritize the beliefs that seem most relevant to update given whatever new information we've just learned. Later, if we have time, we can think through the implications to our broader belief network. This ensures we update the most important beliefs to update first, but since there's rarely enough time for us to update our entire belief network, we end up holding contradictory beliefs, or at least holding contradictory justifications for our beliefs.
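As a crude illustration of what that kind of bounded, prioritized updating might look like (my own toy construction, reusing the illustrative numbers from the sketch above, not a claim about how brains actually compute), imagine only re-dividing probability among the hypotheses that feel most relevant and leaving everything else untouched:

```python
# A toy illustration (not a model of real cognition) of bounded updating:
# only the two hypotheses that feel most relevant get revised; the rest
# keep their stale prior values.

beliefs = {
    "on Earth, ordinary blue sky": 0.90,
    "on Earth, dusty sunset":      0.09,
    "on Mars":                     0.005,
    "hallucinating the color":     0.005,
}
likelihood = {                      # P("sky looks red" | hypothesis), as before
    "on Earth, ordinary blue sky": 0.01,
    "on Earth, dusty sunset":      0.60,
    "on Mars":                     0.90,
    "hallucinating the color":     0.50,
}

updated = ["on Earth, ordinary blue sky", "on Earth, dusty sunset"]

# Re-divide only the probability mass already held by the "relevant" hypotheses.
mass = sum(beliefs[h] for h in updated)
unnormalized = {h: beliefs[h] * likelihood[h] for h in updated}
scale = mass / sum(unnormalized.values())
for h in updated:
    beliefs[h] = unnormalized[h] * scale

print(beliefs)
print("total probability:", sum(beliefs.values()))  # still sums to 1.0
# But "on Mars" and "hallucinating the color" were never revised upward in
# light of the red sky, so those beliefs now quietly contradict the evidence.
```

Compared with the full update in the earlier sketch, the neglected hypotheses are left far too low, and nothing flags the inconsistency until we happen to revisit them.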
The same is true of our wants and desires. For myself, I might know that high-fiber cereal is the healthiest thing I can eat for breakfast, but I also know that donuts are really tasty. If I eat donuts, my breakfast is no longer healthy, but if I don't eat them it's not very tasty. Thus I'm in self-conflict about what to do because my beliefs and desires point in contradictory directions. Psychologists refer to this state of self-conflict as cognitive dissonance, and it's a phenomenon that affects us all.
How can we respond? One option is to give up and abandon all hope of addressing inconsistency. Another option is to reject contradiction and fight for every scrap of consistency that can be achieved. There's some appeal to both approaches, but both also force us to give something up, whether that be consistency or our time. Is there some middle path where we can live with contradiction without giving up all hope of resolving it?
The Bayesians offer us a clue about what we can do. Bayesians are uncertain about every one of their beliefs: the probability they assign to a belief being true also tells them exactly how likely it is to be false (one minus that probability). Taken in aggregate over all their beliefs, we can say they have some degree of uncertainty about every belief in their network. And while we can't hope to achieve their degree of mathematically precise uncertainty, we can at least hold our beliefs with a healthy level of uncertainty and skepticism.
In essence, this means we can accept that we are permanently uncertain about the world and what is best to do. This should not be surprising: we have limited information, we only know what we've had a chance to know, and there's more world out there than we can ever learn about. There will always be things we are uncertain about, and as long as we are uncertain about one thing we must be a little bit uncertain about everything. This is what it means to say that we suffer from fundamental uncertainty.
And this fundamental uncertainty is not just about what words mean, what is right and wrong, or what is best to do. It runs far deeper than that. In the next chapter we'll see that fundamental uncertainty exists down at the very root of how we know anything at all.
Comments
comment by Shmi (shminux) · 2022-08-30T02:42:30.272Z · LW(p) · GW(p)
That's not a bad writeup, a bit simplistic, but accessible to most aspiring rationalists. I like the reminder that we are not to trust ourselves unconditionally, the way we are prone to do when not thinking about it; "I know/feel I'm right!" is an all too common argument. I would like to add that the System 1/System 2 model, or another "two minds" model, is very much simplified, though it is an understandable first step toward modeling human inconsistency.
Now, a bit of an aside: the general principle is that to understand how something works, you don't study it in equilibrium, because you miss all the essential stuff. You don't see the hidden gears because they are nearly exactly compensated by a different set of gears. I cannot emphasize enough how much this error is made when trying to understand the human mind. What we see is "normal", which is basically tiny deviations from equilibrium. To learn anything interesting about the mind you study minds out of equilibrium, and, because the IRB will never authorize "taking a mind out of equilibrium" with psychoactive substances, torture, and emotional abuse (can't blame them!), one has to study the subjects in vivo.
Back to the topic at hand. "Two minds" is what we glean from "normal" people. Those whose minds fell apart expose many more (broken) parts to the outside world, full as well as fragmented, and one can observe extreme versions of akrasia in those living with multiple personalities. Similarly, for the case of Bayesian updating, the interesting situations are, for example, where some updating is explicitly punished, like in a cult-like setting, which forces self-inconsistency and cognitive dissonance by preventing people from "updating the most important beliefs to update".
↑ comment by Gordon Seidoh Worley (gworley) · 2022-08-30T14:10:24.915Z · LW(p) · GW(p)
Thanks for your comments. Yes, this chapter, like the previous two, glosses over a lot of details because my purpose isn't to explain these topics in detail, but to say just enough to sow the seeds of doubt about the certainty many people have. It's a fine line to walk between oversimplifying and giving too much detail. I probably still haven't got it right! For example, I would kind of love it if I didn't have to talk about Bayesians, but it just seemed the most straightforward way to contrast their ideal with our reality as humans. Maybe in later drafts I'll find a better way to approach it.
↑ comment by abramdemski · 2023-03-08T20:36:56.090Z · LW(p) · GW(p)
In fact I think it is a bit misleading to talk about Bayesians this way. Bayesianism isn't necessarily fully self-endorsing [LW · GW], so Bayesians can have self-trust issues too, and can get stuck in bad equilibria with themselves which resemble Akrasia. Indeed, the account of akrasia in Breakdown of Will still uses Bayesian rationality, although with a temporally inconsistent utility function.
It would seem (to me) less misleading to make the case that self-trust is a very general problem for rational agents, EG by sketching the Lobian obstacle, although I know you said you're not super familiar with that stuff. But the general point is that using some epistemics or decision theory doesn't imply endorsing it reflectively, similar to Godel's point about the limits of logic. So "by default" you expect some disconnect; it doesn't actually require a dual-process theory where there are two different systems conflicting. What a system reflectively endorses is already formally distinct from what it does.
↑ comment by Gordon Seidoh Worley (gworley) · 2023-03-10T18:11:11.415Z · LW(p) · GW(p)
Okay, I'll try to look into it again. Thanks for the suggestion.
↑ comment by Shmi (shminux) · 2022-08-30T17:54:19.589Z · LW(p) · GW(p)
Yeah, that makes sense, I think that chapter accomplishes its purpose then.
comment by Gunnar_Zarncke · 2022-08-30T09:46:59.915Z · LW(p) · GW(p)
> Later, if we have time, we can think through the implications to our broader belief network. This ensures we update the most important beliefs to update first, but since there's rarely enough time for us to update our entire belief network, we end up holding contradictory beliefs, or at least holding contradictory justifications for our beliefs.
This is an argument for more slack - esp. slack that is used to reflect and feel out things you learned. And this is probably even more true when you are young.