Fundamental Uncertainty: Chapter 5 - How do we know what we know?
post by Gordon Seidoh Worley (gworley) · 2022-12-28T01:28:50.605Z
Alice is walking with her precocious young son, Dave. Dave's looking around, holding Alice's hand, when suddenly he asks a question.
Dave: Can plants think?
Alice: I don't think so. They don't have brains.
Dave: Why do they need brains to think?
Alice: Because brains are where thinking happens.
Dave: How do we know that?
Alice: I guess because when people get hurt in their brains they can't think anymore, so that must be where the thinking happens.
Dave: But couldn't plants think some other way, like with their roots?
Alice: Maybe? But how would we know if plants think? They don't talk to us.
Dave: I guess. But what's thinking?
Alice: It's that thing you do with your brain to come up with ideas.
Dave: Yeah, but plants don't have brains, so that's not fair.
Alice: Why not?
Dave: Because you're saying brains do thinking and thinking is what brains do, but that doesn't tell me what thinking really is. I need to know what thinking is without brains.
Alice: Thinking is having thoughts and ideas. Do plants have those?
Dave: Maybe, and we just don't know how to talk to them to find out what they are. This still seems unfair. How do we know if plants are thinking if we can't talk to them?
Alice: I guess maybe we can't. I don't know.
Dave's line of questioning seems reasonable. He wants his mom to help him pin down a definition of thinking so he can work out for himself whether plants think, but she can't offer him much more than some observations on how the word "thinking" is used. That's fine for everyday conversation, but if Dave wants to grow up to advance the field of botanical cognition, he's going to need a more formal definition of "thinking" in order to say for sure whether or not plants think.
In Chapter 2 we saw that words are grounded in our direct experience. But how can we ground the definition of "thinking" in our direct experience? We know what it's like to be ourselves and to have ideas pop into our head. Other people tell us to call this thinking, so we guess that this means they have something similar happening in their heads. Everyone seems to agree on what "thinking" means, so it feels like it's something objective, but in reality our conceptions of thinking are grounded in personal experience. This works fine for the most part, but then we run into edge cases.
For example, people often say that computers and machines think in everyday conversation, like when someone says they're waiting on the computer to finish thinking while it runs some calculations. But if you press them on a philosophical question like "can computers really think?" they're likely to say "no" because this violates some intuition about what constitutes thinking.
Similarly, most people don't think plants can think. Since plants don't have brains and don't talk, we feel pretty confident saying they don't think. But how can we be sure they aren't doing something that we could reasonably call thinking? Maybe chemical processes in their roots, stems, and leaves perform equivalent actions to what our brains do when we think, but because plants have no way to talk—or at least no way to talk that we yet know how to listen to—we don't recognize what they do as thinking. The problem seems to be that we don't have a firm grasp on what the essence of thinking is. If we could figure out what that essence is then maybe we'd know if plants, computers, and other things really think.
But such an essence is hard to find, as Chapter 3 showed us. Recall the city divided between Food Stealers and Never Stealers, split over whether it is right or wrong to steal food to eat. It would have been nice if the Food Stealers and Never Stealers could have found some essential property that determines what is good and bad and used that to resolve their disagreement about stealing to eat, but our investigation of the foundations of good and bad didn't turn up anything particularly essential.
Instead, good and bad proved to be grounded in our most deeply held beliefs. From a mix of innate tendencies, learning from our parents, and influences from our culture, we develop our sense of morality. Because our individual tendencies, learning, and influences are not identical, we each end up with different foundations for what we consider moral, and this makes it hard to agree with each other about what's good and bad if our foundations differ too much. But if we all had the same moral foundations then maybe we'd all agree on what's right and wrong. How could we achieve that? The most straightforward solution, given that we develop our moral intuitions on a foundation of deeply held beliefs, would be if we could find a way to agree on which moral foundations are the right ones. Then all we'd need to do is convince ourselves and each other of their rightness.
But convincing ourselves of what is right or best is not so easy, as we saw in Chapter 4. It would be nice if we could trust ourselves to know what's best, but this trust is hard to establish. Because our minds have two modes of thinking—the unconscious, emotional System 1 and the conscious, rational System 2—we readily have at least two ideas at all times about what's best to do. And although it's tempting to say we should always listen to System 2 at the expense of System 1, sometimes System 1 knows better than System 2, so that would be a mistake. Further, our network of beliefs is fractured and self-inconsistent, so even if we could get Systems 1 and 2 to agree we'd still be able to come up with more than one reasonable idea about what's best to do. Thus we seem to be resigned to live with uncertainty about what is best.
And that resignation to uncertainty is not restricted to mundane questions like what to eat for breakfast or what clothes to wear. The fractured nature of our beliefs means we are also uncertain about what words mean, what's right and wrong, and ultimately what is true. Even if we manage to be totally certain of our own beliefs, others are not, and this uncertainty prevents us from collectively agreeing on what is best and true. This is a real source of strife in the world because disagreements over the meaning of words and what's right and wrong power our most contentious political and cultural debates. Our inability to collectively resolve uncertainty stands in our way of agreement and, ultimately, peace and harmony.
Over this and the next two chapters we'll see what, if any, certainty can be found. The driving question of our investigation will be: how can I be certain I know the truth? We'll begin by breaking this complex question apart and exploring just one part of it—the part where we say "I know".
The Many Ways of Knowing
What does it mean to say "I know"?
This might seem like a strange question to ask since knowing is such a fundamental and intuitive activity. It's hard to imagine being a person and not knowing things. In fact, the only time in our lives when we aren't swimming in a sea of knowledge is when we're newborns, and we quickly wade in by learning the voice, face, and touch of our mother, father, and other caregivers. Within days we begin to tell our family apart from strangers, and our long relationship with knowledge begins. So if we want to understand what it means to know, we're going to have to spend some time exploring how we use this almost invisible word.
When we say to each other that we know something, we generally mean that we're able to recall an idea, hold it in our mind, reason about it, and say things based on understanding it. If pressed for a formal definition, I'd say that knowing is the ability to give form to the thought that tells one thing apart from another and from the indivisible wholeness of the world. Or, more briefly, knowing is the ability to tell this from that. But that's rather abstract, and there's more nuance to knowing than can be captured in a single sentence. Consider these examples of what I claim to know:
- I know my friend Eric.
- I know how to tie my shoes.
- I know how to speak English.
- I know that Paris is the capital of France.
- I know what it's like to ride a rollercoaster.
- I know that if I eat too much food I'll get a stomach ache.
Although all of these sentences begin "I know," the knowledge expressed in each is not the same. Knowing that Paris is the capital of France is a fact. When I say that I know my friend Eric, though, I'm not claiming to know a fact, but rather that I can recognize him by sight and sound and am familiar with his patterns of behavior. There's also no fact of what it is like to experience the thrill of riding a roller coaster: it's something lived that simply is. Rather than using "know" to mean many things, perhaps it would be useful to have different words for these different forms of knowledge.
The ancient Greeks offer us a solution. They had more than one word for knowledge. They broke down knowing into several different types:
- episteme: things you know because you reasoned them from evidence, like knowing that water boils at 100 degrees Celsius because you boiled a pot of water and used a thermometer to see when it started to boil
- doxa: things you know because others told you, like knowing what happened at the party you didn't attend because your friend tells you
- mathema: things you know because you were educated in them, like knowing how to spell words because you were taught them in school
- gnosis: things you know through direct experience, like how it feels to jump into a lake
- metis: practical wisdom, usually collectively constructed from many people's experiences, and shared with others, often starting at a young age, like how you know to look both ways before crossing the street
- techne: procedural knowledge that comes from doing, like the "muscle memory" of how to ride a bike
These types of knowledge are not necessarily exhaustive of all ways of knowing, and the same fact or idea can be known in different ways at different times by different people. For example, most doxa starts out as someone else's gnosis. Suppose Alice sees Bob wearing a banana taped to his head. She has gnosis of his appearance because she saw him for herself. Later she tells Carol about Bob and the banana. Carol gains doxa of Bob's headwear because she heard about the banana secondhand. Later, Carol sees Bob and the banana for herself and develops gnosis.
Interestingly, we might say Carol knows more than Alice now, even though Carol first learned about Bob from Alice, because while Alice only has gnosis, Carol has both doxastic knowledge from Alice and gnostic knowledge from her own experience. There aren't always clean lines between these different types of knowledge, and the same information can be known multiple ways by the same person. Perhaps these complications are why in many languages we've collapsed all these distinctions and use a single word to describe the act of knowing.
We might also use a single word because, while each kind of knowledge is important, one isn't really better than another. Instead, they serve different purposes. We need episteme and gnosis to expand our understanding of the world. We need doxa, mathema, and metis so that we don't have to individually rediscover the knowledge and wisdom that was already hard-won by others. And we need techne to navigate the daily activities of life. We need each of these ways of knowing to make the most of the information the world offers us.
Yet making distinctions between different types of knowledge is useful at times, because it helps us see when we've mistaken one kind for another. Political science professor James C. Scott explores the risks of making such mistakes in his book Seeing Like a State. In it he examines how modern and historical governments differ in their treatment of knowledge. In the past, states placed a high value on metis by upholding tradition, often to the point of killing those who dared to challenge the established order. Modern states, characterized by the adoption of Enlightenment values and industrialization, throw out tradition in favor of rational, scientifically-grounded episteme. This prioritization of episteme over metis has yielded many benefits for the people living in modern states—more reliable food supplies, better medicine that treats more diseases, and increased access to a wider variety of goods and services, to name a few—but, as Scott explains, these benefits are not guaranteed. Sometimes overturning tradition leads to disastrously worse outcomes.
The middle half of the 20th century saw numerous attempts to modernize agriculture that ended in failure. Between the 1920s and the 1970s, the Soviet Union, China, and other communist countries saw the worst famines in recorded history when they adopted the misguided theories of Soviet scientist Trofim Lysenko. Lysenko didn't trust the work of other scientists and relied heavily on episteme. To some extent this is what all scientists must do, but he took his commitment to episteme farther than most. When other scientists presented evidence that his ideas were mistaken, he dismissed them as ideologically impure. Lysenko thought he was the only one who could be trusted to know what was true. Consequently, when he got key details about how to maximize crop yields exactly wrong—his ideas would produce far worse harvests than traditional methods—no one could convince him otherwise. Lysenko was firmly trusted by communist leaders, so when his ideas were combined with forced implementation by central planning and collectivization, the result was crop failures that killed tens of millions of people.
Over-reliance on episteme was not a problem unique to communist countries. In the United States, the Dust Bowl and the associated food scarcity of the 1930s were the direct result of careless industrialization of farming in the 1920s. Early agricultural science thought it could achieve higher crop yields simply by working the land harder, and for about a decade this was true. Using mechanized plows and harvesting equipment, farmers converted a hundred million acres of prairie from low-productivity cattle pasture to fields of wheat and corn. This was a boon to the people of the Great Plains and the United States, right up until drought hit in the 1930s. The intensive agriculture of the previous decade had damaged the natural ability of the land to protect itself, and the fields crumbled to dust as they baked under cloudless skies. It took another decade for the land to recover and for farmers and scientists to learn from their mistakes.
Do we know better now? We like to think so. Certainly we're less naive because we tell each other about the failures of the past (metis) and learn about them in school (mathema). Politicians no longer propose that we solve all our problems with science, and we know that episteme doesn't have a monopoly on the truth. Yet even as we've learned from our mistakes, we risk overcorrecting and forgetting the power of episteme. In fact, many would say we still don't value episteme highly enough and too often ignore scientific results that point to clear ways of improving the world.
The issue may not be that we have the wrong balance between different forms of knowledge. Instead, we may be confused about how to correctly incorporate everything we know in our search for the truth. Let's see if our friends the Bayesians can offer any insights into how we can better utilize all the evidence at hand, no matter the source or type.
Rational Belief
What does it mean for a Bayesian to know something? Since they have rational beliefs that have been optimally calculated based on their prior beliefs and the evidence they've observed, a Bayesian's knowledge is nothing more or less than the sum total of their beliefs about the world, carefully expressed in terms of the probability that each belief is true. This makes them quite different from us humans, who tend to think of knowledge as an all or nothing affair.
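For readers who want the formula behind the prose, the rule a Bayesian uses to revise a belief in light of evidence is the standard statement of Bayes' Theorem (the notation here is mine; the chapter itself doesn't use these symbols):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Here $P(H)$ is the prior probability assigned to a hypothesis $H$, $P(E \mid H)$ is how likely the observed evidence $E$ would be if $H$ were true, and $P(H \mid E)$ is the updated (posterior) probability after seeing $E$.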
To return to the earlier example of Bob and his banana hat, Alice and Carol would both likely say that they know it's true that Bob is wearing a banana on his head. A Bayesian would have to qualify any such statement with a probability, like by saying that they're 80% sure Bob is wearing a banana on his head. They might go on to say that they think there's a 15% chance he's wearing a wax facsimile of a banana, a 4.5% chance the so-called banana is actually a plantain, and a 0.5% chance they're mistaken and Bob is not wearing a banana-like thing on his head at all. This is all quite a bit more nuanced than simply claiming to know whether or not Bob is wearing a banana on his head.
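A minimal sketch of what such a belief state might look like as a data structure, using the illustrative numbers above (the hypothesis labels are my own):

```python
# One Bayesian's belief state about Bob's headwear: a probability for each
# mutually exclusive hypothesis, using the illustrative numbers from the text.
beliefs = {
    "real banana": 0.80,
    "wax facsimile of a banana": 0.15,
    "actually a plantain": 0.045,
    "no banana-like thing at all": 0.005,
}

# Degrees of belief over an exhaustive set of mutually exclusive hypotheses
# must sum to 1.
assert abs(sum(beliefs.values()) - 1.0) < 1e-9

print(max(beliefs, key=beliefs.get))  # the single most probable hypothesis
```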
Bayesians perform a similar analysis when trying to figure out if trickier things are true, like whether or not ghosts exist. For example, a Bayesian doesn't do something naive like survey everyone in the world about whether or not they think ghosts are real and then set their probability on the claim to whatever proportion said they exist. Instead, they consider how much new information each piece of evidence they see gives them. So if they survey 100 people and the first 99 tell them ghosts are real, but all 99 only offer personal stories about seeing ghosts, they make few to no updates past the first person who told them a ghost story. Each additional story is evidence, but it's weakly confirming evidence that may not change their probability for the existence of ghosts much.
Meanwhile, the last person to talk to the Bayesian claims that ghosts are fake and offers scientific evidence that most ghost sightings can be explained as misinterpretations of natural phenomena or hallucinations. Whereas our Bayesian made very small updates after hearing from the 98th and 99th people, they now make a large update upon hearing from the 100th because they've been provided with new information. Exactly where they land on the question of ghosts will depend on the quality of the evidence presented.
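To make that shape of reasoning concrete, here is a minimal sketch of the 100-person ghost survey in odds form. Every number below is an assumption chosen purely for illustration; nothing in the text pins down the actual likelihood ratios a Bayesian would use.

```python
# A sketch of the ghost survey (all numbers are illustrative assumptions).
# Beliefs are tracked as odds in favor of "ghosts are real"; each piece of
# evidence multiplies the odds by its likelihood ratio (Bayes' rule in odds form).

def update(odds, likelihood_ratio):
    """posterior odds = prior odds * P(evidence | ghosts) / P(evidence | no ghosts)"""
    return odds * likelihood_ratio

odds = 0.1  # prior odds: our Bayesian starts out fairly skeptical

# The first personal ghost story carries a little information...
odds = update(odds, 1.5)

# ...but the next 98 similar stories are highly correlated with the first,
# so each one adds almost no new information (likelihood ratio close to 1).
for _ in range(98):
    odds = update(odds, 1.01)

# The 100th person offers scientific evidence that most sightings are
# misinterpretations or hallucinations: a large update in the other direction.
odds = update(odds, 0.2)

probability = odds / (1 + odds)
print(f"P(ghosts are real) is roughly {probability:.2f}")
```

The 98 correlated anecdotes barely move the needle, while the one piece of genuinely new information does most of the work, which is exactly the asymmetry described above.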
Bayesians reason about truth this way because they are designed to treat knowledge as justified, true belief, reflecting a popular theory in analytic philosophy of what knowledge is. Their beliefs are justified insofar as their reasoning process is optimal, and their beliefs are true insofar as they assign probabilities of truth that can be justified by the evidence they've observed. Since they are optimal reasoners by definition, their rational beliefs would seem to be the closest we can come to knowing what's true.
Do Bayesians then point to the limits of our certainty? If we learned to apply Bayes' Theorem consistently like a Bayesian, would we always come to true beliefs, with any remaining uncertainty being only the fundamental uncertainty we ran into in the previous three chapters? Some people think so, but I think our uncertainty runs much deeper.
The challenge is that Bayesians, superhuman and optimal reasoners though they are, take something for granted. The thing they take for granted is so fundamental that it may well have evaded your notice thus far. So allow me to point it out.
Bayesians were designed by humans to be ideal mathematical reasoners who calculate the probability that statements are true by correctly weighing evidence in proportion to its quality. But how was the definition of "correctly" chosen? Some of the smartest humans of the 20th century decided that Bayesians should update their beliefs according to rationality norms: rules of mathematical reasoning that, when followed, lead to logically consistent beliefs. Since logical consistency is the longstanding method by which the correctness of mathematical proofs is validated, it seemed self-evident this is what Bayesians should do.
The claim that logical consistency is self-evidently best should draw our curiosity. If we want to know how to be certain we know the truth, we must also be certain that our method of knowing the truth is itself true. We'll tackle this recursive question about how to know the way to know the truth next.
2 comments
comment by Dacyn · 2022-12-28T22:34:05.417Z
-"Bayesians reason about truth this way because they are designed to treat knowledge as justified, true belief, reflecting a popular theory in analytic philosophy of what knowledge is."
Hasn't that theory been discredited (by Gettier)? I don't think it is popular anymore.
reply by Gordon Seidoh Worley (gworley) · 2022-12-29T05:40:04.579Z
Yes, but the point stands that, to the best of my understanding, this is the sort of knowledge folks like von Neumann had in mind when they laid the groundwork for what would become our modern model of Bayesian reasoners.