Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth?

post by Gordon Seidoh Worley (gworley) · 2023-03-06T13:52:09.333Z · LW · GW · 18 comments

Contents

  The Problem of the Criterion
  Facing Fundamental Uncertainty

N.B. This is a chapter in a planned book about epistemology [LW · GW]. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid to high level feedback about the content. When I publish I'll have an editor help me clean up the text further.

Several years have passed since Dave asked his mother, Alice, if plants can think. He's a teenager now and presenting a science fair project with the tantalizing title "Can Plants Think?". A judge for the fair comes by to ask about his project.

Judge: So, tell me about your project.

Dave: Sure! Ever since I was a kid I've wondered: can plants think? Obviously they don't have brains, so they can't possibly think the same way we do. But maybe they can do something worthy of the name "thinking". So I did a study to see if plants are able to think.

Judge: Okay. Tell me about your hypothesis. How did you test if plants think?

Dave: First I had to define thinking in a way that could be tested. I thought about what makes things that we know think, like humans, different from things that we know don't think, like rocks. I also needed a definition that I could observe. What I came up with was that thinking happens when a thing does something inside itself to change its behavior in response to external stimuli. So humans and animals clearly think, while rocks obviously don't.

Judge: Alright, so what experiment did you run?

Dave: I wanted to see if plants would respond to external stimuli. So I created a setup with two plants covered by boxes. Each box had a hole at one end to let in light and no hole at the other end. I placed one plant directly under the hole in its box and the other one as far as possible from the hole. The null hypothesis is that plants can't think, and if it's true plants will only grow straight up, so I would expect only the plant directly under the hole to grow. The other one won't grow because it's not getting enough light. If plants can respond to stimuli and think, I'd expect the plant not under the hole to bend itself to reach the light.

Judge: And what happened when you ran the experiment?

Dave: The plant not under the hole bent and twisted in order to get closer to the hole! So I disproved the null hypothesis and proved that plants can think!

Judge: I see. But how do you know that plants can really think and aren't just behaving mechanically? If I put a paperclip at one end of a box and then put a powerful magnet at the other end, is the paperclip thinking when it moves across the box to get close to the magnet?

Dave: No, the paperclip is just responding to the physical force of the magnet. It doesn't change or move on its own. It's pulled by the magnet.

Judge: Right, but how do you know the plant isn't being "pulled" by the light? What makes you think the plant is thinking rather than simply responding to physical forces?

Dave: Because plants are alive and grow and this one grew to get into the light. The light didn't immediately pull the plant towards it. Instead it took days for the plant to grow itself into the light.

Judge: Yes, but maybe the light pulled the plant very slowly.

Dave: I guess that's possible.

Judge: Your experiment is interesting, but I think you're trying to prove too much. You showed that plants will grow to get into the light, but I don't see why that would prove that plants think.

Dave: Ah, but remember I said thinking is what a thing does inside itself to change its behavior in response to external stimuli. And my experiment showed that plants do this in the specific case of light.

Judge: Yes, but you didn't really prove that your definition of thinking is right.

Dave: What do you mean?

Judge: Well, how do you know this definition of thinking is what thinking really is? Maybe thinking means having self awareness, and you didn't prove that plants have that. What makes you believe your definition of thinking is true?

Dave: I needed a definition of thinking that could be tested. I don't think I could test your definition.

Judge: Sure, but whether or not a definition can be tested easily doesn't change whether or not it's the true meaning of a word.

Dave: I guess, but then how do I know what's true if I can't test it?

Judge: You still need some way to run a test, but testing doesn't have to be easy. Maybe you can only check if it's true by seeing if it meets some abstract criterion for truth, like being consistent with mathematical logic.

Dave: Hmm, I guess I have to think about this some more. Before you go, can you tell me how I did on my project?

Judge: It was fine. You understand the basics of the scientific method. If it's what you want to do, I think you could make a fine scientist one day, though you need to study a bit more philosophy if you want to tackle deep questions about the nature of thinking.

Dave had hoped to use a straightforward scientific experiment to prove that plants can think, but as the science fair judge noticed, showing that plants do certain things under certain conditions is not enough to prove so bold a claim as "plants can think". Science gives us methods to prove direct claims about observable evidence, like whether or not a plant will grow towards the nearest source of light, but asks us to use reason and inference to draw further conclusions about the meaning of experimental results. Dave ran into trouble because he tried to prove too much by sneaking in an assumption about what it means to think. Rather than taking for granted that he already knows what thinking is, he needs to step back and first work out what it means to think. Then he might be able to come up with an experiment that could show whether or not plants can really think.

How can he figure out what it means to think? Recall that in Chapter 2 we explored the question of where words ultimately get their meaning and found that they come from a combination of our lived experience noticing patterns in the world around us and other people pointing at those same patterns and telling us their names. But if this is the case, any meaning we give the word "thinking" is unavoidably constructed by us humans and may be subject to our whims and misunderstandings. So however we define "thinking" or any other word, that doesn't automatically mean it's the true definition. If anything, it calls into question what we really mean by "true" and how we could ever find the true meaning of a word.

So let's take another step back and consider the admittedly recursive question of what we mean by truth. Like knowing, it's something we intuitively understand because we live with it every day, yet can struggle to explain it crisply in words. One way of putting it is that truth is that which simply is regardless of what you say or think about it. To paraphrase science fiction author Philip K. Dick, truth is that which doesn't go away when you stop believing in it.

Unfortunately, truth is surprisingly hard to put our hands on. It's one thing to look at the world and see how it is. It's another to try to put our observations into words and understanding. The task seems simple enough at first. If you see some deli meat and cheese between two slices of bread and learn that this is called a "sandwich", it seems pretty obvious what a sandwich is. But what if the meat was replaced by vegetables or hummus: is it still a sandwich? What if the bread was replaced by a hot dog bun or a tortilla? Where do sandwiches end and subs and wraps and burgers begin? Is there any truth about what's really a sandwich and what isn't?

Ask a hundred people and you'll get several different answers as to what does and doesn't count as a sandwich. That's because the meanings of words depend in part on our individual experiences, so we're forced to introduce something of ourselves when we observe the world around us, interpret what we observe, and then cram what we find into labeled boxes based on how we interpret our observations. Thus when someone, like Dave, tries to define what "thinking" means, they run headlong into the messy, subjective experience of what different people think counts as thinking.

Despite this mess, we seem to know the truth anyway—or at least, the truth of some things. How do we do it? By discovering the truth we know rather than the truth that is. I'll explain.

Rather than thinking of truth as one thing, let's think of it as two: absolute truth and relative truth. So far we've been talking as if the only truth that exists is the absolute truth—the way things are, independent of anything we have to say about them. But we can also talk about relative truth—the way words, images, and symbols can be true if they accurately point to how the world is. This might seem like a subtle distinction, but the gulf between the relative and absolute is huge. We might say that absolute truth is like the moon hanging in the sky and that relative truth is like a finger pointing at the moon. There's truth in pointing at the moon, saying "moon", and having that mean something in the world to you and other people. But that's different from the moon simply existing on its own regardless of what anyone says or thinks about it.

A common mistake is to see the finger pointing at the moon and think the finger is the moon itself. This seems silly when stated this way, but it's an easier mistake to make than you might think. For example, remember in Chapter 3 when we talked about good and bad? "Good" and "bad" are words that point to concepts that are grounded by our moral foundations, which can be thought of as preferences that we share in common with many other people. There's some ground truth about what we and other people prefer, and then there's the words "good" and "bad". In abstract we might know that the truth of what's called "good" and "bad" is a relative truth, as opposed to the absolute truth of what things people prefer and disprefer, but in conversation it's easy to get lost and start to think of good and bad as objective qualities of things. If you've ever tried to debate morality with someone you've probably experienced this phenomenon yourself, and if you haven't you need only listen to two people with very different worldviews debate what's right and wrong to see them lose track of the grounding of good and bad, assuming they ever had hold of it to start.

For this reason it can be helpful to think of relative and absolute truth like the difference between a map and the territory it depicts. Maps describe the territory, and good maps tell you enough about the territory that you can find your way around, but you'd be mistaken if you thought that looking at a map was the same thing as going to the place it depicts. After all, looking at a map of Paris and seeing the location of the Eiffel Tower is quite a different experience from standing on the Champ de Mars and seeing it for yourself.

You might now reasonably ask, how can we know if our map is (relatively) true? That is, how can we know if our words and beliefs accurately point towards the absolute truth of reality?

The usual way to know if something has a property is to test it. For example, if I want to know if a sheet of metal is magnetic, I grab a magnet and see if the magnet sticks to the metal. Can we find such a test for truth? Is there some means by which we can be certain if a statement is relatively true and points to absolute truth?

Alas, the answer is no.

This is an even bolder claim than Dave's assertion that plants can think, so it deserves an extended explanation.

The Problem of the Criterion

Pyrrho of Elis was a Greek philosopher born around 365 BCE. When Alexander the Great campaigned in India, Pyrrho tagged along. There he came in contact with Indian philosophy and, upon his return to Greece, founded the Western philosophical tradition of skepticism. Later sources credit him with the first known formulation of the problem of the criterion.

The problem of the criterion is simple enough to state. If we want to know if a statement or claim is true, then we need to know some criterion—a test or method—by which we can tell if the statement or claim is true. But this criterion of truth is itself a statement or claim, so how can we know if it is true unless we already know that the criterion of truth is true? This creates a cycle of infinite recursion that must be broken if we're to trust that anything is true.

Since the problem of the criterion is concerned with statements and claims, this means it's dealing with words and thus relative rather than absolute truth. Perhaps we can ground the criterion of truth by appealing to absolute truth? If so, how can we make that happen?

Modern philosopher Roderick Chisholm has studied the problem of the criterion in depth in a book simply titled The Problem of the Criterion. In it he explores three possible solutions to the problem of the criterion: particularism, methodism, and skepticism. Let's see if one of them can ground relative truth to our satisfaction.

Particularism tries to solve the problem of the criterion by picking particular things and declaring them to be true by assumption, usually because they are claims that seem obviously true. If you've ever dealt with axioms in a formal system, like Euclid's geometry or Peano's arithmetic or even standard logic, you've seen particularism in action. The nice thing about particularism is that it makes explicit what unjustified assumptions we are making so we can notice if they don't hold. Unfortunately it doesn't put any limits on what we assume, so whatever truth we justify based on our assumptions can only be trusted so far as we trust that the assumptions themselves are true.

Methodism aims to build trust in our knowledge of the truth by doing the inverse of particularism by assuming the criterion of truth is known rather than assuming particular statements or claims are known. If you're familiar with René Descartes and his famous starting principle "cogito, ergo sum" ("I think, therefore I am"), that's an example of methodism where the measure of truth is personal experience. But is methodism actually different from particularism? Chisholm says "no" because the criterion of truth is itself a statement that is assumed to be true, so methodism turns out to just be a special case of particularism.

Skepticism takes a different route to solving the problem of the criterion by giving up on knowing the truth or anything with any degree of certainty. This was Pyrrho's solution, which he likely learned from Indian ascetics who practiced in traditions that people continue to observe to this day. Yet Chisholm is not content to let the skeptics be. He points out that skeptics choose skepticism over particularism, so whatever means they use to make that choice is implicitly being assumed true in order to justify skepticism, thus they are really methodists, and thus particularists, in disguise who happen to have chosen a criterion of truth that claims nothing is true.

The result of Chisholm's analysis is that particularism is the only real solution to the problem of the criterion. That's a bummer because it implies that all relative truth, and thus everything we know and believe to be true, is ultimately justified by one or more assumptions that have to be taken on faith and cannot themselves be proven true, and thus all our ideas about what's true stand on shaky ground. Are there any alternatives to Chisholm's finding?

No. Try as they might, alternatives still fall victim to the problem of the criterion.

There are two broad classes of attempts to route around the problem of the criterion. The first is to claim that direct access to the absolute truth is possible in some cases so that at least some relative truths can be known with certainty. For example, some religions claim that relative truth can be grounded by faith in a deity. Alternatively, some people claim we are born with innate knowledge of certain truths, or that some truths can be known through introspection.

Unfortunately, even if we believe we have direct access to the absolute, we still can't escape the problem of the criterion because we need some method by which to verify our claim that we have direct access to ground truth. We might try to appeal again to our direct access to the truth, but how can we be sure we're not deluding ourselves? From inside our own experience, there's no way to tell if we really have direct access to absolute truth or are hallucinating it. After all, psychiatric wards are filled with people convinced they have uniquely privileged access to hidden truths. Thus relying on direct access to truth is just a special kind of methodism, and thus is particularism, and therefore the problem of the criterion is not escaped. 

The other class of attempts try to dodge the problem of the criterion by reframing the relationship between relative and absolute truth. This can be done a few ways. One, known as relativism, is to disconnect the relative and absolute and say that relative truths aren't bound by absolute truths so that relative truth is totally subjective and not influenced by our interactions with reality. Another, called coherentism, is to say that relative truths are true so long as they are consistent with other relative truths, but relative truth need not be grounded because consistency is enough to prove truth. And then there's positivism, the view that relative truth can be derived through reason and sensory experience alone.

Unfortunately, each of these views fails at its objective and merely obscures the problem of the criterion. Relativism has the same problem as skepticism—it must somehow be known that the relative and absolute are not connected. Coherentism is a special kind of methodism and thus is also actually particularism. And positivism combines methodism and particularism by both assuming reason is the right method to know what is true and assuming that all sensory experiences are particular facts known to be true.

So if the problem of the criterion and its solution, particularism, are inescapable, what's there to do about it? Are we doomed to never know the truth?

Remember what I said back in Chapter 1: it all adds up to normality. Somehow, despite the problem of the criterion and the limitations of particularism, we overcome our lack of certainty about what's true to live our lives and uncover some facts about the world while we do it. So the question is less "what do we do about the problem of the criterion?" and more "what is it that we're already doing that lets us get on with knowing at least some of the truth despite the theoretical impossibility of knowing any of it?".

Facing Fundamental Uncertainty

Chisholm presents an airtight argument: all attempts to resolve the problem of the criterion ultimately come back to particularism. If we want to know the truth, our only option is to pick some starting assumptions that we can't be sure are true themselves. We are thus condemned to be fundamentally uncertain about the truth.

Despite this fundamental uncertainty, we need not give up all hope of knowing the truth. We can pick our starting assumptions pragmatically in ways that help us discover relative truths that are as close to the absolute truth as possible. If relative truth is a map of the absolute truth territory, then this approach, simply called "pragmatism", says to pick the assumptions that cause the map to be a useful guide to the territory.

Lucky for us, humans are natural pragmatists. We don't need any special method to determine which assumptions to choose; indeed, we don't even consciously think of ourselves as choosing assumptions—we just try things and see what produces a good map. Partly this is biological: we are the product of millions of years of evolution from creatures who needed to understand the world well enough to survive and reproduce, and our brains bear the mark of this struggle for understanding. But it's also cultural: hundreds of generations of our ancestors have worked to accumulate metis about how to understand the world accurately that they've passed down to us. So between our brains being oriented towards perceiving the world in truthful ways and our cultures teaching us tools like logic to better reason about the truth, we're able to find relative truths that reasonably accurately point towards reality.

And we can get better at finding relative truth by observing the quality of our maps and learning to produce better ones. Think about people living in the past: compared to us, it seems like they believed all kinds of false things. They were able to do well enough to become our ancestors, but also lived lives full of confusion about what causes diseases, why fire burns, and how the stars move across the sky. In the last few centuries we've gotten a lot better at uncovering truth through the application of science, philosophy, engineering, mathematics, and more. And as long as more accurate maps are useful, it seems like we should be able to invest additional effort to generate marginally more relative truth.

But if we're already so good at pragmatically resolving the problem of the criterion and finding relative truth, why do we need to worry about the fundamental uncertainty of relative truth at all? Because truth is a bit like the air we breathe: we can get away with not noticing it until it's missing or we're trying to understand breathing.

In our everyday lives we can act as if we know the truth with certainty because the feedback loop is tight. For example, if I believe there's a cake sitting on my kitchen counter, I'll very rarely walk into the kitchen, look at the counter, see a cake, and then be wrong to claim that there's a cake in my kitchen. I might even be tempted to say that I'm 100% sure there's a cake in my kitchen, because I'll be wrong so rarely that I can ignore the error. But sometimes the cake is a lie, like when it's not a cake but a cardboard cutout of a cake. And if I try to make more ambitious claims, like trying to claim that the cake in my kitchen can think, I'll quickly discover I need a more nuanced view on truth because I don't even know exactly what thinking is, let alone whether or not cakes—or plants—can do it.

We might think we can just ignore this uncertainty if we stay away from uncertain topics, but our exploration of the problem of the criterion already shows us this is not possible. We know that the relative truth is fundamentally uncertain. Even if the level of uncertainty in any particular claim is very small, it is always non-zero. Trying to ignore this fact won't change the situation. So the only thing to do is figure out how to make the best of our uncertainty.

We can look to the Bayesians for inspiration. First, they quantify their uncertainty when they make claims like "I'm 70% sure a hot dog is a sandwich" or "I'm 95% sure that plants can't think". The probabilities that they assign to their beliefs are a measure of how confident they are that the map drawn by their beliefs predicts the territory they will find.

Second, they adopt assumptions that have demonstrated themselves to reliably produce beliefs that lead to better outcomes. The big assumption Bayesians make is rationality—they assume that some rules of logic and probability are infallibly correct and are always true. Why? Well, Bayesians themselves didn't make the choice, but the mathematicians who worked out the idea of Bayesian reasoning did, and they were trying to create a mathematical model of optimal reasoners who are theoretically best at believing true things and making decisions that lead to the best outcomes. So they picked what they believed to be the best reasoning norms for Bayesians to start from, and did so in part because when they made Bayesians assume them, Bayesians seemed to behave optimally in a wide variety of situations.
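The Bayesian habit of quantified belief can be sketched with a direct application of Bayes' rule. The numbers below are made up purely for illustration: a low prior that plants can think, updated on the observation from Dave's experiment that the plant bent toward the light.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis H after observing evidence E.

    prior:              P(H), belief before seeing the evidence
    likelihood_if_true: P(E | H)
    likelihood_if_false: P(E | not H)
    """
    # Total probability of seeing the evidence under either hypothesis
    p_evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    return prior * likelihood_if_true / p_evidence

# Illustrative, made-up numbers: start 5% sure plants can think.
prior = 0.05
# The plant bent toward the light. Suppose that's near-certain if plants
# think (0.99), but also quite likely (0.90) under a purely mechanical
# explanation like phototropism.
posterior = bayes_update(prior, 0.99, 0.90)
print(round(posterior, 3))
```

Because the observation is almost as likely under the mechanical story as under the thinking story, the posterior barely moves above the prior—which is a quantitative version of the judge's objection that bending toward light proves very little about thinking.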

So if the Bayesians can't be certain about the truth, do we have any hope? Alas, no. Uncertainty is fundamental to the project of knowing about the world. Any certainty we might think we've attained is ultimately hollow and unjustified. Given that you may just now be realizing the loss of certainty you previously thought was possible, is there anything you can do to cope?

First, recall once again that everything adds up to normality. You've already been living with fundamental uncertainty and getting on with your life; you just didn't know about it. Any feelings of existential crisis or disillusionment will settle in time as you integrate your new understanding of fundamental uncertainty with everything else you already understand about the world. Yes, you might now change your mind about some things, but that's because you understand something new that you didn't before.

Second, there are two aphorisms that I find helpful for making sense of fundamental uncertainty. One is popular among statisticians: all models are wrong, but some are useful. Statisticians say it to mean that all statistical models make assumptions that gloss over important details and thus contain errors but are nevertheless useful for understanding the world. I take this idea as a more general claim about the truth.

Truth, and in particular relative truth, is not some perfect, metaphysical thing. Instead, relative truth is made up of mental models of reality, and all these models are fallible human attempts to make sense of absolute truth. We can experience absolute truth, but we can only understand it through the lenses of perception and reasoning. We can polish these lenses, but they necessarily introduce distortions that make our models contain errors. Thus there's no point in worrying that our models are wrong—we know they are! Instead, we can turn our focus to a more attainable task: making our models less wrong.

Another aphorism I like captures this same idea: the map is not the territory. Our relative truth map is not the same as the absolute truth territory. Our maps naturally contain errors. What matters is if the map is a useful guide to the territory. The project of seeking truth is the project of drawing maps that better serve our need to navigate the territory.

Speaking of map and territory, that metaphor has one other lesson to teach us. Maps are useful for navigating territory, yes, but it's also true that maps exist only because we care about navigating the territory, since if we weren't trying to get anywhere we wouldn't need any maps. Thus, as we'll explore in the next chapter, the very existence of relative truth depends on our need to understand the world well enough to get along in it.

18 comments


comment by Portia (Making_Philosophy_Better) · 2023-03-06T16:38:15.409Z · LW(p) · GW(p)

I know the plant thing is just an example, not the point of the text, but it is very much a pet peeve of mine. 

Yes, plants can react to stimuli, they can, to a degree, show anticipation and learning, synthesise stimuli, even give warnings to, trade nutrients with and in that sense engage in cooperation with other plants. It is seriously fucking cool.

The field, and especially its popular reporting, is currently still littered with claims on plant sentience and plant cognition and plant social structure that is so utterly beyond what the evidence actually supports, and in fact, in strong contrast to what the evidence supports, that it drives me nuts. 

I managed to get what was essentially an angry rant with citations into an academic competition on this, if you are interested, though I would meanwhile update it a lot; I feel it engaged too much with straw-men at the time. There is seriously cool stuff on plants coming out, I just see calling it "thinking" "talking" "feeling" etc. as a massive disservice to the philosophical concepts there, insofar as plants are not conscious, and the difference between what they and even quite simple animals do is so vast. Every time I see someone speaking of plant neuroscience, I want to hit them about the head with pictures of individual bee neurons for comparison. And every time someone speaks of plant sentience, I am deeply worried that this will make the situation for animal rights worse by diluting the sentience term beyond reason. I also fear this will come back to bite us in the butt when we try to  ascertain AI capabilities.

comment by Yoav Ravid · 2023-03-06T16:20:13.379Z · LW(p) · GW(p)

Dave definitely seems to make a mistake in defining thinking in a nonstandard way, but the judge seems to make some mistakes of his own when pointing that out:

  1. "proving" that his definition of thinking is right
  2. talking about the "true meaning" of a word.

It's similar to #16 in 37 Ways That Words Can Be Wrong [LW · GW].

Instead I would tell Dave he's using a nonstandard definition and is possibly fooling himself (and others) into thinking something else was tested as they don't use the same definition of thinking, and even he probably doesn't think that way of thinking most of the time. I would suggest he taboos [? · GW] "thinking" and try to figure out exactly what the plant is doing, and exactly what the human is doing, and later he can compare the two.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-06T23:04:14.625Z · LW(p) · GW(p)

This is great. Alas, the poor judge is a rhetorical device doomed to never read the sequences. He exists purely to advocate a different sort of wrong view that is commonly held by many folks, though thankfully not by a sizable chunk of folks on this site.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-03-07T05:10:22.206Z · LW(p) · GW(p)

Yeah, I suspected that to be the case. In that case it's fine (I haven't yet read further to see if his position is criticized as well)

comment by tailcalled · 2023-03-06T15:05:46.714Z · LW(p) · GW(p)

If you wanted to know how the word "thinking" is usually interpreted, you could go out to look at the contexts where the distinction between thinking and not thinking is used in practice, and study what dynamics it refers to there.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-06T23:05:48.429Z · LW(p) · GW(p)

Sounds like you're talking about extensional definitions. This is useful to know about, but not really the point here. I deal with words more directly in Chapter 2, and the extensional/intensional distinction is something that didn't make the original cut, but after writing this chapter I already made a note to myself to go back and work it in because it's quite relevant to the discussion in this chapter.

Replies from: tailcalled
comment by tailcalled · 2023-03-07T09:02:17.904Z · LW(p) · GW(p)

This technique isn't necessarily limited to extensional stuff. Once you know what extension a community of people use a word for, you can analyze what key intensional properties are needed in order to use the word for that thing, and then you can apply those intensional properties to other things outside the domain that the community usually works in.

comment by Victor Novikov (ZT5) · 2023-03-07T06:45:50.794Z · LW(p) · GW(p)

Finally, someone who gets it. 
Are you familiar with David Chapman's writing on nebulosity?
I think this is a very fundamental and important idea that can be hard to grasp. You certainly seem to understand it.

As for the question of relative vs absolute truth:
I see logic as a formless sea, containing all possible statements in the superposition of true and false.
So the absolute truth is: there is no absolute truth.
But, as you say: "All models are wrong, but some are useful"
Some models, some paradigms are very useful, and allow you to establish a "relative truth" to be used in the context of the paradigm. But there (probably) are always truths that are outside your paradigm, and which cannot be integrated into it. Being able to evaluate the usefulness and validity of specific paradigms, and to shift fluidly between paradigms as needed, is a meta-rationality skill (also written on by David Chapman).

Mostly, I find it more valuable to develop these ideas on my own, as this leads to their being better integrated. But I enjoyed reading these words, and seeing how the high-level patterns of your thoughts mirror mine.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-08T19:14:25.307Z · LW(p) · GW(p)

Yes, very familiar with Chapman's work. In fact, I helped popularize his ideas in some rationalist circles (and I have him in a group chat somewhere). :-)

comment by Gordon Seidoh Worley (gworley) · 2023-09-19T17:59:02.685Z · LW(p) · GW(p)

note to self: add in some text here or in the next chapter that's more explicit about how we can have abstractions that agree by glossing over enough details to enable robust agreement, at the cost of being able to talk precisely about fine-grained details of the world.

comment by Shmi (shminux) · 2023-03-07T02:17:44.791Z · LW(p) · GW(p)

The trick is to come up with a new name for whatever definition you want to adopt. For example, Scott Aaronson replaced Chalmers's unsolvable Hard Problem of Consciousness with a more testable "Pretty Hard Problem of consciousness" https://www.scottaaronson.com/talks/iitfail.ppt

comment by TAG · 2023-03-08T13:07:39.535Z · LW(p) · GW(p)

The usual way to know if something has a property is to test it. For example, if I want to know if a sheet of metal is magnetic, I grab a magnet and see if the magnet sticks to the metal. Can we find such a test for truth? Is there some means by which we can be certain if a statement is relatively true and points to absolute truth?

Alas, the answer is no.

That depends on how you are defining truth. The problem is particularly acute for correspondence-truth, because there is no direct test for correspondence. In science it is hoped that some combination of predictiveness and simplicity adds up to correspondence... but it is hard to see how that works, hence philosophy of science is still an open problem.

If truth is lack of contradiction, as coherentism claims, then it can be tested... but other problems ensue. It is obvious that coherence doesn't pick out a singular truth, because multiple systems of mutually consistent propositions are possible. (Indeed, they are actual -- ideological systems such as communism and Christianity are examples.) So uniqueness, absolute truth, is a desideratum coherentists have to give up, however reluctantly.

Full-strength relativism about truth doesn't just admit that there can be relative approximations to absolute truth, it rejects the very idea of a unique absolute truth, in favour of the idea that there are multiple truths, based on no criterion beyond the fact that people consider true whatever they happen to believe. Whereas circular justification and coherentism are unable to sustain the idea of a unique truth as a (possibly unintentional and unwanted) implication of the criterion they are using, alethic relativism embraces it directly and enthusiastically.

comment by TAG · 2023-03-08T01:58:18.027Z · LW(p) · GW(p)

Yet Chisholm is not content to let the skeptics be. He points out that skeptics choose skepticism over particularism, so whatever means they use to make that choice is implicitly being assumed true in order to justify skepticism, thus they are really methodists, and thus particularists, in disguise who happen to have chosen a criterion of truth that claims nothing is true.

Scepticism doesn't have to be a positive claim... it can be a set of negative claims, along the lines of falsificationism, whereby you can disprove things but not prove them.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-08T19:12:30.271Z · LW(p) · GW(p)

That a negative claim can be made (whatever it is that such a skeptic means by a negative claim) must itself be assumed, hence it's not a special case.

Replies from: TAG
comment by TAG · 2023-03-08T19:22:43.973Z · LW(p) · GW(p)

You can show that disproof is possible by offering disproofs: that's evidence, not assumption.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-08T19:26:36.509Z · LW(p) · GW(p)

But what does "disproof" mean here? How does the evidence evaluation process work? Is that itself evidence? There's some embodied process of generating disproofs that must itself be assumed to be right. If it were itself disproved, how could such a skeptic be sure the disproof was correct, since they're depending on the process by which they disprove things to disprove disproof?

My answer is that they end up right back stuck in the problem of the criterion, or they end up as approximately a Platonic idealist.

Replies from: TAG
comment by TAG · 2023-03-08T19:43:20.042Z · LW(p) · GW(p)

You still need some assumptions, like contradictions indicating falsehood, but that's not very contentious.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-03-10T17:59:04.386Z · LW(p) · GW(p)

Contentiousness is irrelevant to the line of argumentation I'm making (based on Chisholm). No matter how obvious something is, it's still an assumption if not justified.