My Kind of Pragmatism
post by Nora Belrose (nora-belrose) · 2023-05-20T18:58:48.574Z · LW · GW · 11 comments
Recently I've been thinking about pragmatism, the school of philosophy which says that beliefs and concepts are justified based on their usefulness. In LessWrong jargon, it's the idea that "rationality is systematized winning" taken to its logical conclusion— we should only pursue "true beliefs" insofar as these truths help us "win" at the endeavors we've set for ourselves.
I'm inclined to identify as some sort of pragmatist, but there are a lot of different varieties of pragmatism, so I've been trying to piece together a "Belrosian pragmatism" that makes the most sense to me.
In particular, some pragmatisms are a lot more "postmodernist-sounding" (see e.g. Richard Rorty) than others (e.g. Susan Haack). Pragmatism leads you to say relativist-sounding things because usefulness seems to be relative to a particular person, so stuff like "truth is relative" often comes out as a logical entailment of pragmatist theories.
A lot of people think relativism about truth is just a reductio of any philosophical theory, but I don't think so. Respectable non-relativists, like Robert Nozick in Invariances, have pointed out that relativism can be a perfectly coherent position. Furthermore, I think much of the initial implausibility of relativism is due to confusing it with skepticism about the external world. But relativism doesn't imply there's no mind-independent reality: there can be one objective world, but many valid descriptions of that world, with each description useful for a different purpose. Once you make this distinction, relativism seems a lot more plausible. It's not totally clear to me that every pragmatist has made this distinction historically, but I'm going to make it.
There's one other hurdle that any pragmatist theory needs to overcome. Pragmatism says that we should believe things that are useful, but to determine if a belief is useful we need some background world model where we can imagine the counterfactual consequences of different beliefs. Is this world model a totally separate mental module that's justified on non-pragmatist grounds? Most pragmatists would say no, and adopt some form of coherentism: we assess the utility of a belief or concept with respect to background beliefs that we aren't questioning at the moment. Those background beliefs can come into the foreground and be questioned later. The hope is that this procedure will lead to an approximate fixed point, at least for a little while until new evidence comes in. Notably, this basic view is pretty popular in non-pragmatist circles and was popularized by Quine in the 1960s. I think something like this is right, although I want to say something more on this issue (see point 3 below).
Here are some possibly-distinctive aspects of my personal variety of pragmatism, as it stands right now:
- Objective reality exists, but objective truth does not. This is because truth presupposes a language for describing reality, and different languages are useful for different purposes.
- Values are utterly subjective. This is a pretty important issue for pragmatists. We reduce truth to utility, so if utility is objective, then truth would be objective, and the whole position becomes a lot more "realist." Historically, there's some evidence that C.S. Peirce, the founder of pragmatism, shifted to a moral realist position in reaction to the more "relativist" pragmatists like William James, who he vehemently disagreed with. But for reasons I won't go into here, I don't think values could possibly be objective— the phrase "objective morality" is like "square circle." See some of Lance Bush's YouTube videos (e.g. this one) to get a taste of my view on this.
- Not all beliefs are means to an end. There does seem to be a clear distinction between beliefs about ordinary objects and direct sensory experiences— stuff like "this chair exists" or "I'm happy right now"— and beliefs about scientific or philosophical theories. I don't consciously think of my beliefs about ordinary objects in terms of their utility, I just take them for granted. It's also hard for me to imagine a scenario in which I would start to question the utility of these beliefs. Importantly, I do question the metaphysical nature of ordinary objects and experiences; I often wonder if I'm in a simulation, for example. But I take that to be a secondary question, since even if I'm in a simulation, these objects and experiences are "real" for my purposes. On the other hand, I do consciously think about the practical utility of scientific and philosophical theories. I still feel a bit confused as to why this distinction exists, but my best guess is that my values latch onto ordinary objects and direct experiences, so I can't question those without throwing out my values, whereas other stuff is more "instrumental."
11 comments
comment by Shmi (shminux) · 2023-05-20T19:45:51.336Z · LW(p) · GW(p)
I think it might be useful to consider the framing of being an embedded agent in a deterministic world (in Laplace's demon sense). There is no primitive "should", only an emergent one. The question to ask in that setup is "what kind of embedded agents succeed, according to their internal definition of success?" For example, it is perfectly rational to believe in God in a situation where this belief improves your odds of success, for some internal definition of success. If one's internal definition of success is different, fighting religious dogma might count as a path to success, or as success in itself. This gets complicated in a hurry, since an agent's internal definition of success changes over time and is not necessarily unique or coherent for a single agent. The whole concept of agency is an emergent one as well, and is not always applicable. But the framework of embedded agency is a good start and useful grounding when one gets lost in complications.
comment by TAG · 2023-05-21T13:26:25.424Z · LW(p) · GW(p)
Recently I’ve been thinking about pragmatism, the school of philosophy which says that beliefs and concepts are justified based on their usefulness. In LessWrong jargon, it’s the idea that “rationality is systematized winning” taken to its logical conclusion— we should only pursue “true beliefs” insofar as these truths help us “win” at the endeavors we’ve set for ourselves.
What motivates it? Rationalism has never shown that all forms of truth amount to winning, or indeed that there is just one form of rationality, with instrumental rationality subsuming epistemic rationality. "Rationalists should win" is a much weaker claim than "all truth is usefulness".
One can accept that usefulness, or "winning", has value, without accepting it as the only value. Prima facie, there are useful lies and useless truths. So why not accept that truth and usefulness are just distinct things that don't reduce to each other?
↑ comment by Nora Belrose (nora-belrose) · 2023-05-21T15:41:48.708Z · LW(p) · GW(p)
Say you have two agents, Rorty and Russell, who have ~the same values except that Rorty only optimizes for winning, and Russell optimizes for both winning and having “true beliefs” in some correspondence theory sense. Then Rorty should just win more on average than Russell, because he’ll have the winning actions/beliefs in cases where they conflict with the truth maximization objective, while Russell will have to make some tradeoff between the two.
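To make the intuition concrete, here's a toy simulation. Everything in it is made up for illustration (the random payoffs, the truth scores, and the weight Russell puts on truth); it just shows that an agent optimizing payoff alone can never average less payoff than an agent trading payoff off against a second objective:

```python
import random

random.seed(0)
TRUTH_WEIGHT = 0.5  # hypothetical weight Russell places on truth

def random_environment(n_beliefs=5):
    """Each candidate belief gets an independent (payoff, truth) score."""
    return [(random.random(), random.random()) for _ in range(n_beliefs)]

def rorty_choice(env):
    # Rorty optimizes payoff ("winning") alone.
    return max(env, key=lambda b: b[0])

def russell_choice(env):
    # Russell optimizes a weighted sum of payoff and truth.
    return max(env, key=lambda b: b[0] + TRUTH_WEIGHT * b[1])

rorty_total = russell_total = 0.0
for _ in range(10_000):
    env = random_environment()
    rorty_total += rorty_choice(env)[0]
    russell_total += russell_choice(env)[0]

print(f"Rorty mean payoff:   {rorty_total / 10_000:.3f}")
print(f"Russell mean payoff: {russell_total / 10_000:.3f}")
# Rorty's payoff is >= Russell's in every environment: Russell sacrifices
# payoff exactly when the truest and the most useful belief come apart.
```

The gap vanishes only when the highest-payoff belief and the truest belief always coincide, i.e. when the two objectives never actually conflict.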
Now maybe your values just happen to contain something like “having true beliefs in the correspondence theory sense is good.” I’m not super opposed to those kinds of values, although I would caution that truth-as-correspondence is actually hard to operationalize (because you can’t actually tell from sense experience whether a belief is true or not) and you definitely need to prioritize some types of truths over others (the number of hairs on my arm is a truth, but it’s probably not interesting to you). So you might want to reframe your truth-values in terms of “curiosity” or something like that.
↑ comment by TAG · 2023-05-21T17:02:51.986Z · LW(p) · GW(p)
Say you have two agents, Rorty and Russell, who have ~the same values except that Rorty only optimizes for winning, and Russell optimizes for both winning and having “true beliefs” in some correspondence theory sense. Then Rorty should just win more on average than Russell, because he’ll have the winning actions/beliefs in cases where they conflict with the truth maximization objective, while Russell will have to make some tradeoff between the two.
I'm primarily making the point that truth and usefulness are different concepts in theory, rather than offering practical advice. They are still different concepts, even if usefulness is more useful!
It's not obvious that winning is what you should be doing, because there are many definitions of "should". It's what you should be doing according to instrumental rationality... but not according to epistemic rationality.
Even if winning is what you should be doing... that doesn't make truth the same concept as usefulness.
I’m not super opposed to those kinds of values, although I would caution that truth-as-correspondence is actually hard to operationalize
I'll say! That's one of my favourite themes.
↑ comment by Nora Belrose (nora-belrose) · 2023-05-21T18:11:17.234Z · LW(p) · GW(p)
I'm sort of fine with keeping the concepts of truth and usefulness distinct. While some pragmatists have tried to define truth in terms of usefulness (e.g. William James), others have said it's better to keep truth as a primitive, and instead say that a belief is justified just in case it's useful (Richard Rorty; see esp. here).
It's not obvious that winning is what you should be doing, because there are many definitions of "should". It's what you should be doing according to instrumental rationality... but not according to epistemic rationality.
Well, part of what pragmatism is saying is that we should only care about instrumental rationality and not epistemic rationality. Insofar as epistemic rationality is actually useful, instrumental rationality will tell you to be epistemically rational.
It also seems that epistemic rationality is pretty strongly underdetermined. Of course the prior is a free parameter, but you also have to decide which parts of the world you want to be most correct about. Not to mention anthropics, where it seems the probabilities are just indeterminate and you have to bring in values to determine what betting odds you should use. And finally, once you drop the assumption that the true hypothesis is realizable (contained in your hypothesis space) and move to something like infra-Bayesianism, now you need to bring in a distance function to measure how "close" two hypotheses are. That distance function is presumably going to be informed by your values.
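To make the "free parameter" point concrete, here's a minimal sketch (the coin-flip setup and the prior parameters are my own made-up illustration): two agents update on identical evidence, both follow Bayes' rule exactly, and still end up far apart, because the update rule itself doesn't adjudicate between priors.

```python
def beta_posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of P(heads) under a Beta(alpha, beta) prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 7, 3  # the evidence both agents observe

# Agent A starts from a near-uniform prior; Agent B from a strong
# prior toward tails. Both prior choices are arbitrary assumptions.
print(beta_posterior_mean(1, 1, heads, tails))    # ~0.667
print(beta_posterior_mean(2, 20, heads, tails))   # ~0.281
# Both updates are perfectly Bayesian; the disagreement traces back
# entirely to the prior, which epistemic rationality leaves open.
```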
↑ comment by TAG · 2023-05-21T19:18:15.813Z · LW(p) · GW(p)
others have said it’s better to keep truth as a primitive, and instead say that a belief is justified just in case it’s useful (Richard Rorty; see esp. here).
But usefulness doesn't particularly justify correspondence-truth.
Well, part of what pragmatism is saying is that we should only care about instrumental rationality and not epistemic rationality
Using which definition of "should"? Obviously by the pragmatic definition...
It also seems that epistemic rationality is pretty strongly underdetermined
Yes, which means it can't be usefully implemented, which means it's something you shouldn't pursue according to pragmatism.
Of course, the fact that pragmatic arguments are somewhat circular doesn't mean that non-pragmatic ones aren't. Circularities are to be expected, because it takes an epistemology to decide an epistemology.
But even if you can't do anything directly useful with unattainable truth, you can at least get a realistic idea of your limitations.
↑ comment by Nora Belrose (nora-belrose) · 2023-05-21T22:10:24.459Z · LW(p) · GW(p)
But usefulness doesn't particularly justify correspondence-truth.
Neither I nor Rorty is saying that it does.
Using which definition of "should"? Obviously by the pragmatic definition...
No, I mean it in the primitive, unqualified sense of "should." Otherwise it would be a tautology. I personally approve of people solely caring about instrumental rationality.
Yes, which means it can't be usefully implemented, which means it's something you shouldn't pursue according to pragmatism.
I don't think it can be implemented at all; people just imagine that they are implementing it, but on further inspection they're adding in further non-epistemic assumptions.
↑ comment by TAG · 2023-05-21T22:57:04.150Z · LW(p) · GW(p)
I personally approve of people solely caring about instrumental rationality
Are you saying "I personally approve of.." is the primitive, unqualified meaning of "should"?
I don’t think it can be implemented at all
I've agreed with that.
"Then why care about it".
Replies from: nora-belroseeven if you can’t do anything directly useful with unattainable truth , you can at least get a realistic idea of your limitations.
↑ comment by Nora Belrose (nora-belrose) · 2023-05-22T00:56:33.107Z · LW(p) · GW(p)
Are you saying "I personally approve of.." is the primitive, unqualified meaning of "should"?
At the very least it's part of the unqualified meaning. Moral realists mean something more by it, or at least claim to do so.
even if you can’t do anything directly useful with unattainable truth , you can at least get a realistic idea of your limitations.
Okay. I think it's probably not the most effective way to do this in most cases.
comment by Luis Enrique Urtubey De Cesaris (luis-enrique-urtubey-de-cesaris) · 2024-05-10T14:41:23.666Z · LW(p) · GW(p)
On pragmatism, including about how to put it to work in the real world, I find the combo by Bernstein quite good:
1. Better than the book it is supposed to be about, IMO.
2. Some social science directly inspired by pragmatism, also trying to find how its principles might be operative in the real world, is good too, IMO. First two things that come to mind:
https://www.amazon.com/Pragmatist-Democracy-Evolutionary-Learning-Philosophy/dp/0199772444
https://brill.com/view/journals/copr/9/2/copr.9.issue-2.xml