Why I Reject the Correspondence Theory of Truth
post by pragmatist · 2015-03-24T11:00:28.828Z · LW · GW · Legacy · 30 comments
This post began life as a comment responding to Peer Gynt's request for a steelman of non-correspondence views of truth. It ended up being far too long for a comment, so I've decided to make it a separate post. However, it might have the rambly quality of a long comment rather than a fully planned-out post.
Evaluating Models
Let's say I'm presented with a model and I'm wondering whether I should incorporate it into my belief-set. There are several different ways I could go about evaluating the model, but for now let's focus on two. The first is pragmatic. I could ask how useful the model would be for achieving my goals. Of course, this criterion of evaluation depends crucially on what my goals actually are. It must also take into account several other factors, including my cognitive abilities (perhaps I am better at working with visual rather than verbal models) and the effectiveness of alternative models available to me. So if my job is designing cannons, perhaps Newtonian mechanics is a better model than relativity, since the calculations are easier and there is no significant difference in the efficacy of the technology I would create using either model correctly. On the other hand, if my job is designing GPS systems, relativity might be a better model, with the increased difficulty of the calculations compensated for by a significant improvement in effectiveness. If I design both cannons and GPS systems, then which model is better will vary with context.
Another mode of evaluation is correspondence with reality, the extent to which the model accurately represents its domain. In this case, you don't have much of the context-sensitivity that's associated with pragmatic evaluation. Newtonian mechanics may be more effective than the theory of relativity at achieving certain goals, but (conventional wisdom says) relativity is nonetheless a more accurate representation of the world. If the cannon maker believes in Newtonian mechanics, his beliefs don't correspond with the world as well as they should. According to correspondence theorists, it is this mode of evaluation that is relevant when we're interested in truth. We want to know how well a model mimics reality, not how useful it is.
I'm sure most correspondence theorists would say that the usefulness of a model is linked to its truth. One major reason why certain models work better than others is that they are better representations of the territory. But these two motivations can come apart. It may be the case that in certain contexts a less accurate theory is more useful or effective for achieving certain goals than a more accurate theory. So, according to a correspondence theorist, figuring out which model is most effective in a given context is not the same thing as figuring out which model is true.
How do we go about these two modes of evaluation? Well, evaluation of the pragmatic success of a model is pretty easy. Say I want to figure out which of several models will best serve the purpose of keeping me alive for the next 30 days. I can randomly divide my army of graduate students into several groups, force each group to behave according to the dictates of a separate model, and then check which group has the highest number of survivors after 30 days. Something like that, at least.
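The randomized comparison described above can be put as a toy simulation. This is purely illustrative: the survival-probability functions standing in for whole models and the `pragmatic_score` helper are invented for this sketch, not anything from the post.

```python
import random

def pragmatic_score(model, n_subjects=100, n_days=30, seed=0):
    """Estimate a model's pragmatic success: the fraction of subjects
    still alive after n_days when acting on the model's dictates.
    `model` is a hypothetical function mapping a day number to that
    day's survival probability for a subject following the model."""
    rng = random.Random(seed)  # fixed seed for a reproducible comparison
    survivors = 0
    for _ in range(n_subjects):
        alive = True
        for day in range(n_days):
            # A subject dies on any day the random draw exceeds
            # the model's survival probability for that day.
            if rng.random() > model(day):
                alive = False
                break
        if alive:
            survivors += 1
    return survivors / n_subjects

# Two toy "models", reduced to daily survival probabilities.
cautious = lambda day: 0.999
reckless = lambda day: 0.99

# The model whose followers fare better earns the higher score.
print(pragmatic_score(cautious), pragmatic_score(reckless))
```

The point of the sketch is only that pragmatic evaluation never consults what the models *say about the world*, just how their followers fare.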
But how do I evaluate whether a model corresponds with reality? The first step would presumably involve establishing correspondences between parts of my model and parts of the world. For example, I could say "Let m_S in my model represent the mass of the Sun." Then I check to see if the structural relations between the bits of my model match the structural relations between the corresponding bits of the world. Sounds simple enough, right? Not so fast! The procedure described above relies on being able to establish (either by stipulation or discovery) relations between the model and reality. That presupposes that we have access to both the model and to reality, in order to correlate the two. In what sense do we have "access" to reality, though? How do I directly correlate a piece of reality with a piece of my model?
Models and Reality
Our access to the external world is entirely mediated by models, either models that we consciously construct (like quantum field theory) or models that our brains build unconsciously (like the model of my immediate environment produced in my visual cortex). There is no such thing as pure, unmediated, model-free access to reality. But we often do talk about comparing our models to reality. What's going on here? Wouldn't such a comparison require us to have access to reality independent of the models? Well, if you think about it, whenever we claim to be comparing a model to reality, we're really comparing one model to another model. It's just that we're treating the second model as transparent, as an uncontroversial proxy for reality in that context. Those last three words matter: A model that is used as a criterion for reality in one investigative context might be regarded as controversial -- as explicitly a model of reality rather than reality itself -- in another context.
Let's say I'm comparing a drawing of a person to the actual person. When I say things like "The drawing has a scar on the left side of the face, but in reality the scar is on the right side", I'm using the deliverances of visual perception as my criterion for "reality". But in another context, say if I'm talking about the psychology of perception, I'd talk about my perceptual model as compared (and, therefore, contrasted) to reality. In this case my criterion for reality will be something other than perception, say the readings from some sort of scientific instrument. So we could say things like, "Subjects perceive these two colors as the same, but in reality they are not." But by "reality" here we mean something like "the model of the system generated by instruments that measure surface reflectance properties, which in turn are built based on widely accepted scientific models of optical phenomena".
When we ordinarily talk about correspondence between models and reality, we're really talking about the correspondence between bits of one model and bits of another model. The correspondence theory of truth, however, describes truth as a correspondence relation between a model and the world itself. Not another model of the world, the world. And that, I contend, is impossible. We do not have direct access to the world. When I say "Let m_S represent the mass of the Sun", what I'm really doing is correlating a mathematical model with a verbal model, not with immediate reality. Even if someone asks me "What's the Sun?", and I point at the big light in the sky, all I'm doing is correlating a verbal model with my visual model (a visual model which I'm fairly confident is extremely similar to, though not exactly the same as, the visual model of my interlocutor). Describing correspondence as a relationship between models and the world, rather than a relationship between models and other models, is a category error.
So I can go about the procedure of establishing correspondences all I want, correlating one model with another. All this will ultimately get me is coherence. If all my models correspond with one another, then I know that there is no conflict between my different models. My theoretical model coheres with my visual model, which coheres with my auditory model, and so on. Some philosophers have been content to rest here, deciding that coherence is all there is to truth. If the deliverances of my scientific models match up with the deliverances of my perceptual models perfectly, I can say they are true. But there is something very unsatisfactory about this stance. The world has just disappeared. Truth, if it is anything at all, involves both our models and the world. However, the world doesn't feature in the coherence conception of truth. I could be floating in a void, hallucinating various models that happen to cohere with one another perfectly, and I would have attained the truth. That can't be right.
Correspondence Can't Be Causal
The correspondence theorist may object that I've stacked the deck by requiring that one consciously establish correlations between models and the world. The correspondence isn't a product of stipulation or discovery, it's a product of basic causal connections between the world and my brain. This seems to be Eliezer's view. Correspondence relations are causal relations. My model of the Sun corresponds with the behavior of the actual Sun, out there in the real world, because my model was produced by causal interactions between the actual Sun and my brain.
But I don't think this maneuver can save the correspondence theory. The correspondence theory bases truth on a representational relationship between models/beliefs and the world. A model is true if it accurately represents its domain. Representation is a normative relationship. Causation is not. What I mean by this is that representation has correctness conditions. You can meaningfully say "That's a good representation" or "That's a bad representation". There is no analog with causation. There's no sense in which some particular putatively causal relation ends up being a "bad" causal relation. Ptolemy's beliefs about the Sun's motion were causally entangled with the Sun, yet we don't want to say that those beliefs are accurate. It seems mere causal entanglement is insufficient. We need to distinguish between the right sort of causal entanglement (the sort that gets you an accurate picture of the world) and the wrong sort. But figuring out this distinction takes us back to the original problem. If we only have immediate access to models, on what basis can we decide whether our models are caused by the world in a manner that produces an accurate picture? To determine this, it seems we again need unmediated access to the world.
Back to Pragmatism
Ultimately, it seems to me the only clear criterion the correspondence theorist can establish for correlating the model with the world is actual empirical success. Use the model and see if it works for you, if it helps you attain your goals. But this is exactly the same as the pragmatic mode of evaluation which I described above. And the representational mode of evaluation is supposed to differ from this.
The correspondence theorist could say that pragmatic success is a proxy for representational success. Not a perfect proxy, but good enough. The response is, "How do you know?" If you have no independent means of determining representational success, if you have no means of calibration, how can you possibly determine whether or not pragmatic success is a good proxy for representational success? I mean, I guess you can just assert that a model that is extremely pragmatically successful for a wide range of goals also corresponds well with reality, but how does that assertion help your theory of truth? It seems otiose. Better to just associate truth with pragmatic success itself, rather than adding the unjustifiable assertion to rescue the correspondence theory.
So yeah, ultimately I think the second of the two means of evaluating models I described at the beginning (correspondence) can only really establish coherence between your various models, not coherence between your models and the world. Since that sort of evaluation is not world-involving, it is not the correct account of truth. Pragmatic evaluation, on the other hand, *is* world-involving. You're testing your models against the world, seeing how effective they are at helping you accomplish your goal. That is the appropriate normative relationship between your beliefs and the world, so if anything deserves to be called "truth", it's pragmatic success, not correspondence.
This has consequences for our conception of what "reality" is. If you're a correspondence theorist, you think reality must have some form of structural similarity to our beliefs. Without some similarity in structure (or at least potential similarity) it's hard to say how one could meaningfully talk about beliefs representing reality or corresponding to reality. Pragmatism, on the other hand, has a much thinner conception of reality. The real world, on the pragmatic conception, is just an external constraint on the efficacy of our models. We try to achieve certain goals using our models and something pushes back, stymieing our efforts. Then we need to build improved models in order to counteract this resistance. Bare unconceptualized reality, on this view, is not a highly structured field whose structure we are trying to grasp. It is a brute, basic constraint on effective action.
It turns out that working around this constraint requires us to build complex models -- scientific models, perceptual models, and more. These models become proxies for reality, and we treat various models as "transparent", as giving us a direct view of reality, in various contexts. This is a useful tool for dealing with the constraints offered by reality. The models are highly structured, so in many contexts it makes sense to talk about reality as highly structured, and to talk about our other models matching reality. But it is also important to realize that when we say "reality" in those contexts, we are really talking about some model, and in other contexts that model need not be treated as transparent. Not realizing this is an instance of the mind projection fallacy. If you want a context-independent, model-independent notion of reality, I think you can say no more about it than "a constraint on our models' efficacy".
That sort of reality is not something you represent (since representation assumes structural similarity), it's something you work around. Our models don't mimic that reality, they are tools we use to facilitate effective action under the constraints posed by reality. All of this, as I said at the beginning, is goal and context dependent, unlike the purported correspondence theory mode of evaluating models. That may not be satisfactory, but I think it's the best we have. Pragmatist theory of truth for the win.
Comments sorted by top scores.
comment by MockTurtle · 2015-03-24T16:48:51.407Z · LW(p) · GW(p)
I think I may be a little confused about your exact reason to reject the correspondence theory of truth. From my reading, it seems to me that you reject it because it cannot justify any truth claim, since any attempt to do so is simply comparing one model to another - since we have no unmediated access to 'reality'. Instead, you seem to claim that pragmatism is more justified when claiming that something is true, using something along the lines of "it's true if it works in helping me achieve my goals".
There are two things that confuse me:

1) I don't see why the inability to justify a truth statement based on the correspondence theory would cause you to reject that theory as a valid definition of truth. In your post, you seem to accept that there IS a world which exists independently of us, in some way or other. If I say, "I believe that 'this snow is white' is true, which is to say, I believe that there exists a reality independent from my model of it where such objects in some way exist and are behaving in a manner corresponding to my statement"... That is what I understand by the correspondence theory of truth, so even if I cannot ever prove it (this could all be hallucinations for all I know), it still is a meaningful statement, surely? At least philosophically speaking? To me, there is a difference between 'if the statement that snow is white is true, it is because I am successful in my actions if I act as if snow is white' and 'if the statement that snow is white is true, it is because there exists an actual reality (which I have no unmediated access to), independent of my thoughts and senses, which has corresponding objects acting in corresponding ways to the statement, which somehow affect my observations'. When people argue about what truth really means, I don't see how it is only meaningful to advocate for the former definition over the latter, even if the latter is admittedly not particularly useful in a non-philosophical way.

2) Isn't acting on the world to achieve your goals a type of experiment, establishing correspondence between one model (if I do this, I will achieve my goal) and another model (my model of reality as relayed by the success or failure of my actions)? I don't see how, just because there is a goal other than finding out about an underlying reality, it would be any more correct or meaningful to say that this experiment reveals more truth than experiments whose only goal is to try to get the least mediated view of reality possible.
As far as I can see, if we assume even the smallest probability that our actions (whether they be pragmatic-goal-achieving or pure observation) are affected by some underlying, unmediated reality which we have no direct access to, then the more such actions we take, the more is revealed about this thing which actually affects our model.
Replies from: eternal_neophyte
↑ comment by eternal_neophyte · 2015-03-24T20:32:19.289Z · LW(p) · GW(p)
Rather than saying "I believe snow is white" we should be saying "that whiteness is snow", since we layer models around percepts rather than vice versa. If discovering the truth means fitting percepts to a model, it seems obvious that you need access to a complete model to begin with, from which follow the OP's complaints about needing unmediated access to reality. This, I think, is responsible for the confusion surrounding this topic.
comment by Shmi (shminux) · 2015-03-24T15:50:44.177Z · LW(p) · GW(p)
Huh, I did not realize how close our views are. Thank you for posting this. Did you have problems reading the Sequences where the Correspondence Dualism (reality vs models) is implicit and relied upon?
I would add a couple of points.
One is AIXI-like: the "naive" Correspondence Theory of Truth presumes (an?) external reality which we strive to understand and therefore affect, like a perturbation theory. Sort of like in Special Relativity we postulate that spacetime is flat and unchanging. However, in many situations "reality" is determined by how we model it, like the market sentiment determines market behavior, what Eliezer called the anti-inductive property. This reminds me of General Relativity (or at least its Initial Value formulation), where there is no spacetime until you build it out of matter. Sometimes, when the interaction between spacetime and matter is weak, you can use the background calculations, such as the parametrized post-Newtonian expansion, or Quantum Field Theory on a curved spacetime background. But this approach breaks down when the interactions are strong and the perturbations do not converge, resulting in nonsensical results, such as the Hawking radiation blowing up at the black hole horizon. But, as long as you stay away from the highly nonlinear cases, it makes sense to use the concept of External Reality and the Correspondence Theory of Truth.
Another is the question "what are you predicting, if there is no reality?" Where do the inputs you use to build your models and estimate their success come from? You call it
a brute, basic constraint on effective action.
But this is just a poetic way of saying the same thing: there are stimuli coming from outside of our hierarchy of models, at least down at the bottom level. These are definitely affected by our actions, which are mostly interactions within the hierarchy of models, but at some point leak outside of it and steer future stimuli in the desired direction (if the model hierarchy is successful in its goals). So, either you assume this invisible feedback loop and call it reality, or you bite another bullet and assume that there are models all the way down. Given that we have not yet found any barrier to building our hierarchy of models, I tend toward the latter, but I am not super-convinced that this is a better meta-model than the former.
comment by eternal_neophyte · 2015-03-24T20:12:50.763Z · LW(p) · GW(p)
Your position seems to entail that comparing one model of the world with another cannot produce a model that corresponds to reality. I don't see why that should be the case. If you have a transitive correspondence relationship of the form R => M(R) => M'(M) with R standing for reality and M, M' for models, that may well establish a limit on the accuracy of M'(R) but it doesn't make it impossible. Indeed if it did there could be no pragmatic basis for a model since you could never expect your model to correspond with reality in any context.
Moreover I don't think your characterization of sense data in the visual field as a "visual model" is warranted. Sense data is primitive, which is to say yellow is nothing but yellow. It models nothing else. The relationships posited to exist between different bits of yellow, red, orange, etc., and the postulated range of possible changes those relationships could undergo constitute the model. If there were no such thing as primitive sense data, then what could the relation between the position of the corner of a square and its center be but a relation between yet more relations? How would you be able to form any models at all without becoming trapped in an infinitely descending hierarchy?
If the deliverances of my scientific models match up with the deliverances of my perceptual models perfectly, I can say they are true. But there is something very unsatisfactory about this stance. The world has just disappeared. Truth, if it is anything at all, involves both our models and the world. However, the world doesn't feature in the coherence conception of truth. I could be floating in a void, hallucinating various models that happen to cohere with one another perfectly, and I would have attained the truth. That can't be right.
You don't have to subscribe to the idea that coherence is truth if you just treat this as a description of how human beings practically function. The world goes nowhere; nothing is true just because a coherent model exists that shows it to be true. We just assume it is in the absence of better evidence (or the absence of any need for a better model). All we've done is removed the ability to know that we know something; the ability to know it remains intact. In other words, M'(R) may hold even if we can't know that M(R) does, and hence can't know that M'(R) does.
You can meaningfully say "That's a good representation" or "That's a bad representation". There is no analog with causation. There's no sense in which some particular putatively causal relation ends up being a "bad" causal relation.
Again I perceive an insistence on knowing that you know something which is in my view unwarranted. To know anything at all, by such reasoning, you have to know it, and know that you know it, and know that you know that you know it, ad infinitum.
Replies from: torekp
↑ comment by torekp · 2015-04-03T19:43:17.334Z · LW(p) · GW(p)
Sense data is primitive, which is to say yellow is nothing but yellow
"This soup tastes like chicken." "No, you're wrong - it tastes like turkey." "Gosh, you're right, it does taste like turkey."
If sense data are primitive, either we can never talk about them, or the above conversation is impossible. But it's thoroughly possible. The minute you describe anything - even sense experience - the possibility of error creeps in. I.e., you are making a model.
I think the correspondence theory might work none the less, though. You're onto something in your last paragraph. Perfect epistemic access is not required for a semantic (correspondence) relation to make sense.
Replies from: eternal_neophyte
↑ comment by eternal_neophyte · 2015-04-04T00:56:38.269Z · LW(p) · GW(p)
Are the two parties disagreeing about what their sense data actually are, or about what the sense data match in some way? The same sense data can be matched against different things (think of the sketch which seems to be a young woman when you look at it one way, and an old one at second glance).
Replies from: torekp
comment by Vladimir_Nesov · 2015-03-24T15:00:42.281Z · LW(p) · GW(p)
The correspondence theory of truth, however, describes truth as a correspondence relation between a model and the world itself. Not another model of the world, the world. And that, I contend, is impossible. We do not have direct access to the world.
What do you mean by "direct access to the world"? It seems natural to describe looking at something in the world as "access": a reading of the world's state, communicating information about its state to my mind. Just as information can be read from a hard drive and communicated to the state of a register in a CPU, it can likewise be read from the state of a tree outside my window and communicated to the state of my brain, or that of a camera taking a photo.
We may form a model of the correspondence between the state of the tree and the state of a camera (between the tree and its photo). Such a correspondence may result from the camera's observation of the tree, the act of accessing the tree that communicated information about the tree into the state of the camera. The model that describes the correspondence between the tree and the camera is not the same thing as the photo itself; perhaps a person may form that model.
The same situation occurs when instead of a photo in a camera there is knowledge in a mind. I may consider whether your knowledge of a tree corresponds to the tree. Or I may consider whether my own knowledge of a tree corresponds to the tree. The two models, the model of the tree and the model of the first model's correspondence to the tree, don't need to be represented by the same mind, but nothing changes when they are.
We may then further consider whether the second model (of the correspondence between the photo and the tree) corresponds to the actual correspondence between them, by considering how it formed in the mind etc. This third model may also be located in someone else's mind. For example, I may have looked at a photo and thought that it's accurate, but you may consider my judgement of photo's accuracy and decide that the judgement is wrong, that the photo does not represent what I thought it represented, that instead it represents something else.
There are just these strange artifacts of knowledge in people's heads that may be understood as relating all kinds of things in the world, including representations of knowledge, and the act of understanding them as relating things constitutes production of another artifact of that kind. Given that the parts of the world that hold these artifacts are in principle understood very well (building blocks like atoms etc.), pragmatically it doesn't matter whether "models are fundamental" or "reality is fundamental", in the sense that the structure of how representations of knowledge relate to things in the world (including other representations of knowledge) would be the same even if all mentions of reality are rewritten in a different language.
Replies from: gedymin↑ comment by gedymin · 2015-03-24T18:01:40.323Z · LW(p) · GW(p)
What do you mean by "direct access to the world"?
Are you familiar with Kant? http://en.wikipedia.org/wiki/Noumenon
comment by Rob Bensinger (RobbBB) · 2015-03-24T19:12:55.645Z · LW(p) · GW(p)
My only objection is to your use of the word "we". Who is this "we"?
I exist; and my models of other minds exist; but I never have direct, unmediated access to another mind. What would it even mean to say that there are 'other people', as opposed to models of other people inside my mind? How could I possibly be justified in thinking there are any experiences had by someone, that aren't had by me? Or any models employed by agents, other than the models I myself am employing?
Replies from: pragmatist, Vladimir_Nesov, Vladimir_Nesov
↑ comment by pragmatist · 2015-03-24T19:16:00.733Z · LW(p) · GW(p)
Judgements of existence are model-relative. I believe electrons exist because I have an excellent (highly useful) model that involves ontological commitment to electrons. Same for other minds. Same for my own mind, for that matter.
I don't first determine what exists and only then build models of those things. My models tell me what exists.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2015-03-24T19:53:52.205Z · LW(p) · GW(p)
My comment was in response to:
Our access to the external world is entirely mediated by models, either models that we consciously construct (like quantum field theory) or models that our brains build unconsciously (like the model of my immediate environment produced in my visual cortex). There is no such thing as pure, unmediated, model-free access to reality. But we often do talk about comparing our models to reality. What's going on here? Wouldn't such a comparison require us to have access to reality independent of the models?
You made the point that there is no pure, unmediated access to mind-independent reality; and you think this is an important insight that calls for some sort of reform to naive views of truth. That may be true relative to very naive views, of the sort that correspondence theorists also reject; in which case correspondence theory and the-view-that-we-lack-unmediated-access-to-mind-independent-reality can both win. But who are the flesh-and-blood correspondence theorists who think that their theory gives us a practical way to directly compare Ultimate Reality to our models, as opposed to just giving us a pragmatically useful desideratum? Who are the correspondence theorists who deny "I don't first determine what exists and only then build models of those things. My models tell me what exists."?
Skepticism about the existence of Ultimate Model-Independent Reality is analogous to skepticism about the existence of Other Minds; and thinking that 'truth' is a dubious concept if it relates Ultimate Model-Independent Reality to a model, is analogous to thinking that relationships between my mind and Other Minds are dubious.
... Which makes sense, since minds are just a special case of Ultimate Model-Independent Reality, albeit at a much higher level of complexity than a quark. Anger and models-of-anger are two different things; if they weren't, then there would likewise be no distinction between models-of-anger and models-of-models-of-anger.
Truth is just a resemblance relationship between (assertion-like) mental maps and stuff. This includes maps of maps in my head, maps of maps outside my head, maps of non-maps outside my head, and maps of non-maps inside my head. It works like other resemblance relationships, like 'being the same color as' or 'occurring on the same continent as'. Some concepts are hard or impossible to verify (e.g., 'existing exactly 100,000 years apart in time'), but there's no deep philosophical puzzle about the meaning of those concepts.
Judgements of existence are model-relative. I believe electrons exist because I have an excellent (highly useful) model that involves ontological commitment to electrons.
Reliable judgments of existence are evidence-dependent. We agree about that, I think: my model's success is likelier if electrons exist, so it constitutes evidence that they do exist. What evidence did you acquire that convinced you of the further claim 'judgments of existence are model-relative', if that claim is distinct from 'reliable judgments of existence are evidence-dependent'?
↑ comment by Vladimir_Nesov · 2015-03-25T01:33:20.427Z · LW(p) · GW(p)
Do you have "direct, unmediated access" to your own mind? Its state in the present is separate from its state in the past and has to be communicated in time, in a way similar to observation of others' actions taken in the past. What makes accessing something "direct, unmediated", and why is it an interesting concept? (Being identical seems more interesting, for example.)
↑ comment by Vladimir_Nesov · 2015-03-25T01:33:25.955Z · LW(p) · GW(p)
How can you know that you exist (and in what sense)? Knowing you don't exist seems easier. If you are in a hypothetical taking an action that makes your existence impossible (e.g. acting on a threat that proves unnecessary), then you don't exist, and yet your thinking (including thinking about your own thinking) should be reasoned about in the same way as if you exist, to make accurate predictions. You don't necessarily know if your past (hypothetical) existence is proven impossible by your (hypothetical) future actions, so asserting your own existence at present can be invalid with respect to logical uncertainty.
Relative claims of existence may be more interesting, such as "Assuming I exist, then this particular future scenario I intend to enact also exists, while that other one doesn't". But the conclusion of one's own conditional existence would be trivial: "Assuming I exist, then I exist".
comment by [deleted] · 2015-03-24T18:34:56.533Z · LW(p) · GW(p)
The "Correspondence Can't Be Causal" section was unconvincing. The Ptolemaic model made advance predictions about astronomy that were experimentally falsified, leading to general acceptance of the Copernican system a millennium later. Models correspond to reality when the predictions they make are experimentally verified.
Regarding multi-model pragmatism, I get the feeling that you are not really arguing in the spirit of the original request for steel-manning. Due to my physics- and chemistry-laboratory training I have direct personally observed evidence that quantum mechanics and Einsteinian relativity are better descriptions of reality -- correspond more accurately -- than Newtonian mechanics and Galilean relativity. Yet I pragmatically choose to use classical models on a day-to-day basis. If one model corresponds better than another, it does not simply replace the old model. The old model still corresponds with reality, just perhaps not as precisely as once thought. The decision of which model to use when making predictions depends on how accurately those models reflect reality within the problem domain, how efficient the model is to compute, and how much error can be tolerated. Pragmatism and correspondence are related but different issues.
Replies from: TAG↑ comment by TAG · 2023-12-12T11:44:55.398Z · LW(p) · GW(p)
Models correspond to reality when the predictions they make are experimentally verified.
It's a lot more complex than that. For one thing, verification of predicted observations is never final -- there is always room for extra accuracy.
Worse than that, ontologically wrong theories can be very accurate. For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles ... although it is false, in the sense of lacking ontological accuracy, since epicycles don't exist.
Worse than that, ontological revolutions can make merely modest changes to predictive abilities -- one can't assume one is incrementally approaching a realistic model just on the basis that a series of models has incrementally improving predictive power. Relativity inverted the absolute space and time of Newtonian physics, but its predictions were so close that subtle experiments were required to distinguish the two, and so close that Newtonian physics is acceptable for many purposes. Moreover, we can't rule out a further revolution, replacing current scientific ontology.
Since we don't know how close we are to the ultimately accurate ontology, even probabilistic reasoning can't tell us how likely our theories are in absolute terms. We only know that better theories are more probably correct than worse ones, but we don't really know whether current theories are 90% correct or 10% correct, from a God's eye point of view.
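TAG's epicycle point can be made concrete with a toy sketch (my own illustration, not anything from the thread): circles-stacked-on-circles are just discrete Fourier terms, so they can approximate an elliptical orbit with a wobble to arbitrary precision, even though the circles "don't exist". Predictive error shrinks as epicycles are added, with no gain in ontological accuracy.

```python
import cmath
import math

def sample_orbit(n=64, a=1.0, b=0.6):
    # The "true" trajectory: an ellipse with a small wobble, as complex points.
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        pts.append(complex(a * math.cos(t) + 0.1 * math.cos(3 * t),
                           b * math.sin(t)))
    return pts

def dft(z):
    # Discrete Fourier transform: each coefficient is one epicycle (a circle).
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * math.pi * f * k / n) for k in range(n)) / n
            for f in range(n)]

def reconstruct(coeffs, num_epicycles):
    # Predict the orbit using only the `num_epicycles` largest circles.
    n = len(coeffs)
    top = sorted(range(n), key=lambda f: -abs(coeffs[f]))[:num_epicycles]
    return [sum(coeffs[f] * cmath.exp(2j * math.pi * f * k / n) for f in top)
            for k in range(n)]

orbit = sample_orbit()
coeffs = dft(orbit)
errors = [max(abs(z - w) for z, w in zip(orbit, reconstruct(coeffs, m)))
          for m in (1, 2, 4)]
print(errors)  # prediction error shrinks as epicycles are added
```

With four epicycles the reconstruction is essentially exact, yet "the planet rides on four circles" is no closer to the true ontology than it was with one.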
comment by TAG · 2023-12-10T20:21:42.214Z · LW(p) · GW(p)
The correspondence theory of truth is mainly a claim about what truth means, not about what you should use. There are various arguments to the effect that you can't confirm correspondence, but that doesn't directly affect the semantic argument. The inaccessibility of correspondence would mean there is no justification for correspondence truth, but truth can exist without justification.
Pragmatism, similarly, is the claim that truth means usefulness. It's not a claim about reality.
Ultimately, it seems to me the only clear criterion the correspondence theorist can establish for correlating the model with the world is actual empirical success.
Again, the correspondence theorist could have reasons to think that truth is correspondence, whilst still using success as a justificational criterion.
But I don’t think this maneuver can save the correspondence theory. The correspondence theory bases truth on a representational relationship between models/beliefs and the world. A model is true if it accurately represents its domain. Representation is a normative relationship. Causation is not.
Causation can be. A causal relation isn't necessarily normative, for some version of normativity, but it can be; it just takes extra information to determine that it is. This can easily be seen from computing: computers can perform normatively correct computations, but can have bugs.
comment by sediment · 2015-03-26T21:01:21.822Z · LW(p) · GW(p)
Always glad to see pragmatism represented on LW. Rationalist types instinctively lean towards a correspondence theory of truth, but as a group they are actually (or at least could be) more sympathetic to the pragmatist view of truth than they realize.
This post follows pretty closely the argument I was going to make in a LW-targeted defence of pragmatism of my own which I had been half-heartedly planning to post for a long time. Thanks for doing a good job of it.
comment by torekp · 2015-04-03T19:53:58.110Z · LW(p) · GW(p)
Representation is a normative relationship. Causation is not. What I mean by this is that representation has correctness conditions. You can meaningfully say "That's a good representation" or "That's a bad representation". There is no analog with causation. There's no sense in which some particular putatively causal relation ends up being a "bad" causal relation.
This is an important point, and if I agreed, I think I would join you in rejecting the correspondence theory of truth. (In my case, I'd just go to quietism - a polite refusal to have a "theory of truth" beyond Tarski's formula - rather than pragmatism.)
But here's the thing. There can be a bad causal relation, with semantic significance. If "horse" utterances are sometimes caused by cows, but usually by horses, and the cow-to-"horse" causation relation is asymmetrically dependent on the horse-to-"horse" causation relation, then the cow-to-"horse" causal route is a bad one. The asymmetric dependence is a sign that "horse" means horse, and not cow. (H/t Jerry Fodor.)
Replies from: TAG
comment by [deleted] · 2015-03-24T12:02:19.555Z · LW(p) · GW(p)
comment by Slider · 2015-03-24T15:40:45.060Z · LW(p) · GW(p)
If this were true we ought to be blind to what is not of potential use to us. We are also sometimes unclear on what we want, but our understanding doesn't collapse under its own weight in this situation.
Pure sense data could be argued to be model-free. Sense data is also the model that is simultaneously in the territory; therefore anything that is coherent with sense data has at least something to do with the actual world. Granted, we don't usually process it as such, but instead make goal-oriented low-level representations. But if we did not have any level to fall back on when our needs change, we would be held prisoner by our early needs, as it would be impossible to see the world as anything else.
This reads like the more correct title would be "Why I Reject the Correspondence Theory of Utility". You could argue that we don't test beliefs for truth but for utility, but all that could achieve is to say that seeking a theory of truth is useless.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-03-24T20:25:19.502Z · LW(p) · GW(p)
The key distinction between sense-data and models is our inability to will or predict changes in sense data with complete certainty. Anything which cannot be perfectly and completely changed by force of will must be treated as external to the agent. In that sense, whether the yellow disc in the visual field truly represents a ball of hydrogen undergoing nuclear fusion, or is placed in the visual field by a daemon, is of no consequence. The fact is that you cannot make it disappear by force of will, and it is therefore external to your mind, whatever the most accurate model of how that disc came to occupy your visual field may be. That is what sense data being "primitive" seems to mean.
Replies from: Slider↑ comment by Slider · 2015-03-25T00:31:07.048Z · LW(p) · GW(p)
Doesn't the advanced technology of closing your eyes make it go away?
A demon would not be required to be consistent, but it appears we can detect patterns in sense data. I could even argue or assume that we can detect these patterns before any pattern-recognition rule is invented (i.e., that a theory need not be so sophisticated as to use big wordy concepts such as time or space to get the patterns recognised). Sure, we have a deterministic function from sense data to higher-level abstractions, but that doesn't mean that "being the case that (high-level object)" would be somehow free to mean anything. Once you fix the meaning of your words you are not free to use them as you wish.
Replies from: eternal_neophyte↑ comment by eternal_neophyte · 2015-03-25T13:59:54.017Z · LW(p) · GW(p)
Closing your eyes isn't an act of "pure" will, it's always attended by other sensations. If you could will away the yellow disc without seeing black there instead and feeling your eyelids compress, then you could will it away by pure force of will.
A demon would not be required to be consistent, but it appears we can detect patterns in sensedata.
It's possible that reality is just an infinite flux of random events, with rare islands where coincidence gives rise to the illusion of consistency, and we'd still be able to detect patterns if we were in fact dwelling on such an island.
comment by gedymin · 2015-03-24T18:19:53.436Z · LW(p) · GW(p)
I've got a feeling that the implicit LessWrong'ish rationalist theory of truth is, in fact, some kind of epistemic (Bayesian) pragmatism, i.e. "truth is what is knowable using probability theory". We may also throw in "...for a perfect computational agent".
My speculation is that LW's declared sympathy towards the correspondence theory of truth stems from political / social reasons. We don't want to be confused with the uncritically thinking masses - the apologists of homoeopathy or astrology justifying their views with "yeah, I don't know how it works either, but it's useful!"; the middle-school teachers who are ready to treat scientific results as epistemological equals of their favourite theories coming from folk psychology, religious dogmas, or "common sense knowledge", because, you know, "they are all true in some sense". Pragmatic theories of truth are dangerous if they come into the wrong hands.
Replies from: Bugmaster↑ comment by Bugmaster · 2015-03-25T21:48:28.017Z · LW(p) · GW(p)
We don't want to be confused with the uncritically thinking masses - the apologists of homoeopathy or astrology justifying their views by "yeah, I don't know how it works either, but it's useful!";
I think this statement underscores the problem with rejecting the correspondence theory of truth. Yes, one can say "homeopathy works", but what does that mean? How do you evaluate whether any given model is useful or not? If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard. All you've got left are your internal thoughts and feelings, and, as it turns out, certain goals (such as "eradicate polio" or "talk to people very far away") cannot be achieved based on your feelings alone.
Replies from: gedymin↑ comment by gedymin · 2015-03-25T22:18:00.564Z · LW(p) · GW(p)
How do you evaluate whether any given model is useful or not?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of a common standard.
Solomonoff induction provides a universal standard for "perfect" inductive inference, that is, learning from observations. It is not entirely parameter-free, so it's "a standard", not "the standard". I doubt that there is the standard, for the same reasons I doubt that Platonic Truth exists.
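The Solomonoff-style standard gedymin appeals to can be sketched with a drastically simplified toy (my own illustration; a finite class of "repeat this pattern forever" hypotheses stands in for the space of all programs): weight every hypothesis consistent with the observations by the simplicity prior 2^-length, then let the weighted vote predict the next bit.

```python
from itertools import product

def hypotheses(max_len=8):
    # Toy hypothesis class: "the data repeats this finite bit-pattern forever".
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predict_next(observed, max_len=8):
    # Weight each hypothesis consistent with the observations by the
    # simplicity prior 2^-len(pattern), then sum the weights according to
    # what each hypothesis predicts next (a crude Solomonoff analogue).
    weights = {"0": 0.0, "1": 0.0}
    for pat in hypotheses(max_len):
        stream = pat * (len(observed) // len(pat) + 2)
        if stream.startswith(observed):
            weights[stream[len(observed)]] += 2.0 ** (-len(pat))
    total = weights["0"] + weights["1"]
    return {b: w / total for b, w in weights.items()}

p = predict_next("010101")
print(p)  # heavily favors "0" as the next bit
```

The "not entirely parameter-free" caveat shows up even here: the prediction depends on the choice of hypothesis language and the cutoff `max_len`, just as real Solomonoff induction depends on the choice of universal machine.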
All you've got left are your internal thoughts and feelings
Umm, no, this is a false dichotomy. There is a large area in between "relying on one's intuition" and "relying on an objective external world". For example, how about "relying on the accumulated knowledge of others"?
See also my comment in the other thread.
Replies from: Bugmaster↑ comment by Bugmaster · 2015-03-25T23:05:09.331Z · LW(p) · GW(p)
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
Right, but I meant, in practice.
that is, learning from observations.
Observations of what? Since you do not have access to infinite computation or perfect observations in practice, you end up observing the outputs of models, as suggested in the original post.
For example, how about "relying on the accumulated knowledge of others"?
What is it that makes their accumulated knowledge worthy of being relied upon?
Replies from: gedymin↑ comment by gedymin · 2015-03-26T08:58:27.064Z · LW(p) · GW(p)
you end up observing the outputs of models, as suggested in the original post.
I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.
What is it that makes their accumulated knowledge worthy of being relied upon?
Usefulness? Just don't say "experimental evidence". Don't oversimplify epistemic justification. There are many aspects: how well knowledge fits with existing models and with observations, what its predictive power is, what its instrumental value is (does it help to achieve one's goals), etc. For example, we don't have any experimental evidence that smoking causes cancer in humans, but we nevertheless believe that it does. The power of the Bayesian approach is in the mechanism to fuse together all these different forms of evidence and to arrive at a single posterior probability.
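The fusing mechanism gedymin describes can be sketched in a few lines (my own toy, with entirely hypothetical numbers): each independent line of evidence contributes a likelihood ratio, and multiplying them into the prior odds yields a single posterior probability.

```python
def fuse(prior, likelihood_ratios):
    # Combine independent lines of evidence by multiplying their likelihood
    # ratios into the prior odds (naive-Bayes style), then convert the
    # resulting odds back into a probability.
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical illustration for "smoking causes cancer": a 50% prior, then
# three different kinds of evidence (epidemiological correlation, animal
# studies, mechanistic fit), each expressed as the made-up likelihood ratio
# P(evidence | hypothesis true) / P(evidence | hypothesis false).
posterior = fuse(0.5, [10.0, 4.0, 3.0])
print(round(posterior, 3))  # 0.992
```

No single line of evidence here is "experimental proof", yet the fused posterior is high, which is the point: diverse weak evidence can jointly justify strong belief.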