Fundamental Uncertainty: Chapter 7 - Why is truth useful?
post by Gordon Seidoh Worley (gworley) · 2023-04-30T16:48:58.312Z · LW · GW · 3 comments
N.B. This is a chapter in a planned book about epistemology [LW · GW]. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid to high level feedback about the content. When I publish I'll have an editor help me clean up the text further.
Last chapter we proved that our knowledge of the truth is fundamentally uncertain. But this didn't mean there was no truth to be found. Instead, it simply meant that we could not guarantee that our beliefs about what's true are always 100% accurate. We saw that there is always some irreducible error in our models of the world, because even if we manage to justify all of our beliefs upon a handful of assumptions, those assumptions cannot themselves be fully verified.
Despite the impossibility of certainty about the truth, we are able to get on with our lives anyway and accomplish many things. We're even able to do science, engineering, medicine, and more that requires understanding deep truths about the workings of the world. We do this via pragmatism, by intuitively basing our knowledge of relative truth upon that which is useful.
But what's so special about usefulness? Why base our knowledge on what's useful rather than something else? To begin our investigation, let's check back in on Dave.
Several more years have passed, and Dave has spent nearly a decade in college. As an undergrad he majored in biology with a specialization in botany and a minor in philosophy. He then went on to earn a PhD in neurobiology and has done research on plant cells that behave, in some limited ways, like neurons. Alas, after much research, he's been forced to conclude that, although plants exhibit many complex behaviors, it's very unlikely that plants can meaningfully be said to think.
Even though plants can't think, Dave learned that they do work towards goals. He was surprised by this notion at first. Since plants don't think, it seemed impossible for them to have goals. But then he realized he had already proved that plants are goal-oriented; he just hadn't recognized it at the time.
Recall Dave's science experiment. He showed that plants grow to get into the light, not just along a predefined plan, but by twisting and turning to get access to as much light as possible. They don't think about doing this or make plans, but instead carry out simple behaviors, like growing stems and leaves on the parts of themselves that are receiving the most light, that serve to fulfill the goal of growing towards the light.
But is this really goal-oriented behavior? Plants, it could be argued, are just carrying out their genetic program and so have no real goals. Fair enough, but then why do plants have a particular genetic program and express their particular behaviors?
Stepping back, plants need light, among other things, to survive. Further, they are the product of hundreds of millions of years of evolution. In each generation, only those plants that behaved in ways that allowed them to survive long enough to reproduce became the ancestors of modern plants. So even though plants know nothing about survival or reproduction, they have been molded by evolution to achieve the twin goals of survival and reproduction. Every bit of plant behavior exists because it either enables them to survive and reproduce, or it is an experiment, fueled by genetic and environmental variation, to find better ways of surviving and reproducing.
We, and all living things, similarly share the goals of survival and reproduction. As modern humans we can sometimes forget this because our lives are full of relative luxury. We've made a world where the struggle to survive is more a metaphor about getting the things we want out of life than an actual struggle to find enough food to stay alive, and we've co-opted our desire to reproduce to form romantic relationships that bear no children but bring us great happiness. Yet, beneath all the layers of civilization and convenience, we still ultimately care about survival—otherwise we wouldn't invest so much collective effort in insulating ourselves from deadly risks. We've not outgrown the goals evolution gave us; we've just gotten so good at fulfilling them that we can focus on other things.
What other things do we focus on? Writing books for one. Each time I sat down to work on this book, I generally wasn't thinking much about survival or reproduction. Instead I was thinking about sharing some ideas that I think are important for others to know. But why do I think these ideas are important? Because I believe we face existential threats in the near future, be it from biologically engineered viruses or advanced artificial intelligence or increasingly powerful weapons, that we will struggle to address if we don't understand fundamental uncertainty. Looking deeper, why do we need to address these existential threats? Because, among other things, I want both my own life and the lives of my friends and family to continue. So peeling back the layers, I'm ultimately writing this book because I care about survival even if that rarely enters my conscious awareness. And if you look closely enough at nearly everything humans do, you'll similarly find either survival or reproduction as the ultimate motivating force, no matter how deeply buried.
What does this have to do with truth and usefulness? We seek the truth because truth is useful. Why is truth useful? Because it helps us achieve our goals. One of our most fundamental goals is survival, and our survival depends on having an accurate enough understanding of the world to navigate it safely. We're not hardy lifeforms like tardigrades or lichens that can survive in extreme conditions. We're soft and squishy and easily killed…except insofar as we are able to use our brains to keep ourselves alive. So for us, relative truth is essential to our continued existence, and we create it because we would die without it.
This is quite a bit different from how we usually conceptualize truth. When we learn facts, debate claims, and study philosophy, we tend to think of truth as abstract, eternal, and fixed knowledge that is either handed down to us by an authority or uncovered as if we were explorers finding hidden treasures in forgotten places. But that's not what truth is nor where it ultimately comes from. Truth—specifically relative truth—comes from trying to understand the world well enough to get things done, so it is practical knowledge we create to serve our purposes, and thus it is contingent on those purposes. It only seems otherwise because we've lost sight of our ultimate motivations; in fact there would be no relative truth if it were not useful to our goals, because we wouldn't have bothered to create any if it didn't matter to us.
And it's not just us humans who find truth useful. The world is full of things trying to achieve goals, and since all those goals must be achieved in the same physical reality, having a more accurate picture of that reality is generally useful to achieving any goal. But does this imply that plants, non-human animals, and even machines can know the relative truth? Yes, at least up to their capacity for knowledge. To see why, we need to examine how goal-oriented things are alike, and the organizational patterns that allow them to achieve their goals.
Cybernetics
The universe is made up of stuff interacting with other stuff. Most of these interactions are fairly simple, like gravity pulling planets into orbit around a star or magnets attracting and repelling each other. But sometimes stuff is organized in complex ways such that one thing controls the behavior of another. When that happens the things involved are said to form a control system.
Control systems are everywhere when you learn to look for them. Classic examples include:
- toasters, which apply heat to bread until a timer turns them off
- thermostats, which turn on and off a heating or cooling system to maintain a desired temperature
- flush toilets, which refill their tanks after flushing without overflowing
- steam engine governors, which keep steam engines running at constant speed
Let's look at one control system in detail to see how they work. Consider the electric kettle that heats the water for my tea each morning. When I turn it on, it sends power to a resistive heating element to increase the temperature of the water. It keeps doing that until a little temperature sensor in the kettle detects that the water is boiling and then shuts itself off. The switch controls the heating element, and the switch is controlled by a thermometer that is sensing the water temperature.
In the technical jargon of control systems, we say that the switch is an actuator, the inside of the kettle with the heating element and the water is a controlled process, the thermometer is a sensor, and the mechanism that decides to turn the switch off when the desired temperature is reached is a controller. Control systems with all these parts are called closed-loop systems because the actuator, controlled process, sensor, and controller create a feedback loop that allows the system to control itself. Control systems that are missing one or more of these parts don't form a feedback loop because the would-be loop is broken; we call them open-loop or feedforward systems, since control passes out through the system rather than back into itself.
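To make the four roles concrete, here is a minimal sketch of the kettle as a closed-loop system in Python. The class and function names and the simplified heating physics are my own illustration, not anything resembling real kettle firmware.

```python
# A toy model of the kettle as a closed-loop control system.
# Names and the simplified heating physics are illustrative only.

class Kettle:
    """Controlled process: water that warms while the heating element is powered."""
    def __init__(self, water_temp=20.0):
        self.water_temp = water_temp
        self.element_on = False

    def step(self):
        # Simplified physics: the water heats while the element is on.
        if self.element_on:
            self.water_temp += 2.0


def thermometer(kettle):
    """Sensor: converts the water's state into a signal for the controller."""
    return kettle.water_temp


def switch(kettle, power_on):
    """Actuator: the only way the controller can affect the process."""
    kettle.element_on = power_on


def keep_heating(signal, target=100.0):
    """Controller: decides what the actuator should do, given the sensor signal."""
    return signal < target


# The feedback loop: process -> sensor -> controller -> actuator -> process ...
kettle = Kettle()
while keep_heating(thermometer(kettle)):   # sense and decide
    switch(kettle, True)                   # act on the process
    kettle.step()                          # the process responds
switch(kettle, False)                      # target reached: shut off, like the real kettle

print(f"Water temperature: {kettle.water_temp:.1f} C")
```

Nothing outside the loop tells the kettle when to stop; the decision emerges from the cycle of sensing, deciding, and acting described above.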
Whether a control system is open- or closed-loop, its parts are typically arranged to carry out goals. One kind of goal they have is purpose, which is the thing the system is designed to achieve. In the case of my electric kettle, its purpose is making water hot. A toaster has the purpose of toasting bread, a thermostat the purpose of maintaining room temperature, and a toilet the purpose of disposing of excrement. Purpose is often imposed on a control system by some outside force, like the engineers who designed my kettle, and can be achieved whether or not the control system actively responds to changing conditions, like the way a toaster can produce toast despite never checking if the bread is getting warm and crispy.
Closed-loop control systems, like my kettle checking the temperature of the water and shutting itself off when the desired temperature is reached, have an additional kind of "internal" goal that they actively try to achieve. We call it telos, and it refers to the goal of the feedback loop created by the cyclical interaction of the actuator, controlled process, sensor, and controller. My kettle's telos is to make its thermometer read 100 degrees Celsius, and by achieving it the kettle is able to fulfill its purpose of making hot water. By contrast a toaster doesn't have telos because its components don't form a feedback loop, but a thermostat does because its thermometer will read below or above the target temperature depending on whether the heating or cooling system has turned on or off.
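The purpose/telos distinction is easy to see in code. Below is a hedged sketch in the same toy style as the kettle example: the toaster controller has a purpose imposed from outside (make toast) but no telos, because nothing it does feeds back into what it senses, while the thermostat controller's telos is to drive its own sensor reading toward a setpoint. The function names, setpoint, and deadband are assumptions for illustration.

```python
def toaster_controller(elapsed_seconds, cook_seconds=120):
    """Open loop: a purpose (toast the bread) but no telos.
    Nothing about the bread is sensed; control flows out, never back."""
    return elapsed_seconds < cook_seconds  # heat until the timer runs out


def thermostat_controller(sensed_temp, setpoint=21.0, deadband=0.5):
    """Closed loop: the telos is to bring the sensed temperature to the setpoint.
    What this controller does changes the very reading it will see next."""
    if sensed_temp < setpoint - deadband:
        return "heat"
    if sensed_temp > setpoint + deadband:
        return "cool"
    return "off"


print(toaster_controller(45))        # True: still toasting, crispy or not
print(thermostat_controller(19.0))   # "heat": the reading is below the setpoint
```

The toaster's return value depends only on the clock, while the thermostat's depends on a reading its own actions will change, which is exactly what makes the latter's goal a telos.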
Purpose and telos make control systems a lot like living things, which have survival and reproduction as their purposes, and, like us, have many proximate, teleological goals. This is not mere coincidence: living things are full of control systems. The plants in Dave's science experiment are a perfect example. His plants were exhibiting tropism—specifically phototropism—which is the means by which plant cells elongate in the direction of light. And that's just one small example. Homeostasis—the class of closed-loop control systems that keep biological subsystems in balance—is extremely common. When you manage to breathe enough air, drink enough water, or eat enough food, that's the result of multiple homeostatic systems working together to regulate the conditions necessary to sustain life.
We also find control systems beyond the realms of manufactured and living things. Many seemingly abstract processes either form or are made up of control systems. A great example is financial markets, which create a feedback loop between buyers and sellers with every transaction. Another is evolution, where the differential reproduction of genes determines which species survive and which die out. In fact, we can model nearly everything with control systems, up to and including the entire universe—if we relax the expectation that control systems have a purpose.
The idea that we can model most things with control systems is called cybernetics, a term that comes from the ancient Greek word for steersman, so chosen because the pilot of a ship forms a control system with the rudder and the movement of the vessel. The field was pioneered by a group of MIT researchers in the 1940s, and one of them, Norbert Wiener, wrote the field's founding text: Cybernetics: Or Control and Communication in the Animal and the Machine. From there, many others explored cybernetics in depth, and they so thoroughly integrated its ideas into various fields that today few people find need to study cybernetics directly. Except for us: outside a few pockets of the field, cybernetics was never thoroughly integrated into philosophy, so we are left to explore for ourselves the impact of modeling epistemological processes with control systems.
Thus we must ask, in terms of control systems, how is relative truth created? It starts with feedback loops, and specifically sensors. When any closed-loop control system observes the world through its sensors, like my kettle measuring the temperature of the water with its thermometer, it converts those observations into a signal that it sends to the controller. That signal is like a tiny map of the world. In my kettle, it's a map to whether the water in the kettle is boiling, as observed by the thermometer. In other systems, it may be a map to air temperature or hormone levels or the shapes and colors of the things before our eyes.
But the sensor cannot create the map by itself. To make the signal a map it must be part of a feedback loop. Without the controller, actuator, and controlled process, the signal is just meaningless noise. Taken in isolation, my kettle's thermometer does nothing more than change the voltage on a wire. What gives that voltage change meaning is the feedback loop because it causes the signal to serve the control system's telos, and in so doing creates knowledge by drawing a map of reality that serves a goal.
Humans are quite a bit more complex than kettles, but the principle is the same. Humans aren't a single control system, but many layers of control systems using different mechanisms to feed information to themselves and each other. At a high level, though, our neurons create a feedback loop between our sensory organs (sensors), our brain (controller), our muscles and other organ tissue (actuators), and our full bodies living in the world (controlled process). Our sensory organs send signals to the brain that tell it about the world, these signals are interpreted by the brain and given meaning by their relevance to our goals, and thus we draw a map of the reality we inhabit. In this way we, and everything composed of closed-loop control systems, are continually engaged in creating relative truth out of simple observations intended to help us meet our needs.
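As a very rough cartoon of that layering (not a claim about actual neural wiring), here is a sketch in which a higher-level controller adjusts the setpoint that a lower-level homeostatic loop defends, loosely inspired by the way a fever shifts the body temperature being regulated. All names and numbers are illustrative assumptions.

```python
# A cartoon of layered control: an outer controller adjusts the setpoint that
# an inner loop defends. Names and numbers are illustrative, not physiology.

def inner_controller(sensed_temp, setpoint):
    """Low-level loop, e.g. shivering or sweating to hold body temperature."""
    if sensed_temp < setpoint:
        return "generate heat"
    return "shed heat"


def outer_controller(immune_state):
    """Higher-level loop that changes the inner loop's goal, the way a fever
    raises the temperature the body defends while fighting an infection."""
    return 39.0 if immune_state == "fighting infection" else 37.0


sensed_temp = 37.2
for immune_state in ["healthy", "fighting infection"]:
    setpoint = outer_controller(immune_state)
    action = inner_controller(sensed_temp, setpoint)
    print(f"{immune_state}: defending {setpoint} C -> {action}")
```

In humans the stack is far deeper, with reflexes, drives, and deliberate plans each setting goals for the loops beneath them, but the pattern of loops nested inside loops is the same one described above.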
We've only scratched the surface of what cybernetics has to teach us, and you'd be well served by spending some additional time understanding the world in terms of control systems. But for the purposes of this book, we've explored it just enough to finally be able to give a full account of fundamental uncertainty.
An Answer
Putting the pieces together from this and the previous two chapters, we can at last answer our motivating question: how can I be certain I know the truth?
In short, we can't, because truth is not accessible with certainty. Any certainty we feel we have exists only because we've taken our assumptions for granted. Instead, truth is known via relative truth, which is a map that points to the absolute truth, just as it is. This relative truth cannot be grounded in the certainty of logic because that would leave us unable to justify logic itself, but it can be grounded in the pragmatic usefulness of truth for fulfilling our goals. These goals are fundamental to how we and all knowing things understand reality.
This is why truth seems fixed and eternal. Everywhere we look we see things creating truth in exactly the same way, and since relative truth points to the same shared absolute truth of reality, we expect each thing to create nearly the same relative truth, because it's most useful to have the most accurate map possible. Unfortunately, making sense of the world is complex and we don't all share the same goals, so we sometimes disagree on the details of our maps, and thus we argue over what's relatively true even as absolute truth stares us in the face. We can become blinded by our individual goals and fail to notice that our search for truth is fundamentally subjective and uncertain because, at its base, truth depends on what we care about rather than on an unobtainable, abstract criterion of truth.
Having found this answer, we can use it to reconsider how we think about a variety of topics. We'll explore a few of those in the next, penultimate chapter.
3 comments
comment by Gordon Seidoh Worley (gworley) · 2023-08-23T23:03:51.189Z · LW(p) · GW(p)
Note to self: work in a reference to a book on cybernetic models of mind, e.g. something like Surfing Uncertainty.
Or maybe this book: https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness
comment by jmh · 2023-05-01T04:48:00.763Z · LW(p) · GW(p)
small omission:
that we will struggle to address if don't understand fundamental uncertainty.
Also, I was initially confused by your shift from "truth" to "relative truth" and started to wonder if you were going to slip in a concept that was not really truth but continue as if you were still talking about truth as I suspect most understand the word. That is, something of an absolute and unrelated to usefulness or practicality. If that was intentional that's fine. If not you might consider a bit more of an introduction to that shift, as your following text does clarify the difference and why you used the term. Just might be less jarring for other readers -- assuming you were not intentionally attempting to "jar" the reader's mind at that point.
I'm not sure if this will be a good comment, but if you've never heard of an old counter-culture Christmas-time story, Hogfather, you might find it interesting. In a sense it's a mirror image of your position. Basically we need to believe little lies in order to believe the big lies (like morality, ethics, truth, right/wrong).
Replies from: gworley
↑ comment by Gordon Seidoh Worley (gworley) · 2023-05-01T17:53:58.613Z · LW(p) · GW(p)
Thanks for your comment. I introduce the relative/absolute split in notions of truth in a previous chapter, so I expect readers of this chapter, as they progress through the book, to understand what it means.