Truth seeking is motivated cognition

post by Gordon Seidoh Worley (gworley) · 2022-10-07T19:19:27.456Z · LW · GW · 39 comments

Seeking the truth is a form of motivated cognition. Put another way, truth is a teleological [LW · GW] concept. If that's intuitive, you can stop reading this post. Otherwise, press on.

What is truth? Like, for real, actually, please define it. Try this for yourself. Here's some spoiler space before we go on.













Got a definition? Good.

There are a few common ways of defining truth. Yours probably either is or fits within one or more of these paradigms. The following theories are not strictly exclusive of one another (there's overlap):

  • Correspondence: a statement is true if it corresponds to how the world actually is.
  • Coherence: a statement is true if it coheres with some specified body of other accepted statements.
  • Consensus: a statement is true if the relevant community agrees that it is.
  • Pragmatic/predictive: a statement is true if believing it works, i.e. it pays rent in accurate predictions of experience.
  • Deflationary: "true" adds nothing; to assert that a statement is true is just to assert the statement itself.

There are more, each with some subtle distinctions.

Now, look over those theories. What unifies them? Think about it. I'll give you some more spoiler space to think.













Got an idea? Good. If we were speaking in person I'd ask you what you came up with and we'd engage in some dialogue so I could get you to see the point I'd like to make. But we're not, so I'll just jump to the conclusion.

What they all have in common is that they provide some criterion [? · GW] by which we can assess which things are true or not. Even the theories that suppose truth isn't meaningful must make a case that there's no criterion by which any statement can be meaningfully true or, equivalently, choose a criterion which no statement satisfies.

Why does this matter? Because the existence of a criterion of truth means that it had to be picked over other possible criteria. If this weren't so, there would be no question as to what truth is. How is this choice made? It's made by humans. Even if you're religious and believe truth is handed down by a deity, humans had to choose to believe the deity (this is observably so, since even if you think you believe in the real deity, other people believe in other deities and made the choice to believe the wrong thing).

This means any criterion of truth is a norm that reflects what we care about and prefer. Both individually and collectively, we choose how to use the concept of truth. And that means whatever we want to claim to be true is ultimately motivated by whatever it is we care about that led us to choose the definition of truth we use.

Why do I need to explain any of this? Everything I've said is straightforward. I think it's because humans struggle to deal with more than one level of abstraction at once. It's easy to evaluate whether something is true against some fixed notion of truth. It's easy to think about truth as a concept and argue about which notion of truth is best. It's much harder to evaluate whether something is true while remembering, at the same time, that your method of evaluation is contingent on which notion of truth you think is best.

This difficulty leads to confusion and mistakes, like thinking that truth is something objective and independent of any human process. It's easy to get so caught up in a particular notion of truth that we forget we had to choose that notion and thus that truth is contingent upon our motivations and preferences.

Why argue all this? Because I think people suffer and cause unnecessary harm because they get wrapped up in their own ideas about truth (and many other things!). By learning to look up [LW · GW], even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is. And if we can do that, maybe we can make a little progress on some of the hard problems we face instead of spinning our wheels trying to solve the problems we make up in our heads.

39 comments

Comments sorted by top scores.

comment by Viliam · 2022-10-08T21:50:16.064Z · LW(p) · GW(p)

Either I completely don't get it, or this is some kind of sophistry that seems deep. Humans choose what they mean when they pronounce the sound "truth". Therefore, truth is arbitrary.

The same argument can be made about anything else. Humans choose what they mean when they pronounce the sound "circle". Therefore, circles are arbitrary.

By learning to look up, even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is.

Okay, so instead of using the word "truth" we should be saying "world as it really is"? Other than getting an extra point for poetry, I think this is what many people already mean by truth.

Similarly, the problem with the criterion -- I have no idea whether I agree or disagree with you here -- is that we gradually learn about the world, and the criteria we used in the past may turn out to be less good than we assumed. For example, one might start with "it is true if I can see it" and then realize that sometimes things make sense even if we cannot directly observe them by our senses (because they happened in the past, happen far away, require x-ray vision, etc.), so the criterion would change to... something else, which again might require an update in the future. That does not make it arbitrary, it just makes it... learning.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-09T02:01:00.502Z · LW(p) · GW(p)

To my reading you violently agree with me but are framing it a different way. I never said anything about arbitrariness. I said truth (and all concepts) is contingent. That you think contingency implies arbitrariness is a related but different kind of confusion I hope to address, but not in this post.

comment by gjm · 2022-10-07T20:13:44.747Z · LW(p) · GW(p)

I think your argument here equivocates between two different claims.

  1. "When we use the word 'truth' or 'true' we may mean different things by it, so the meaning of a sentence with 'truth' or 'true' in it is dependent on somewhat-arbitrary choices made by humans."
  2. "That specific thing you (or I) mean by 'truth' is dependent on somewhat arbitrary choices made by humans."

The first is hard to disagree with. (And the same applies for literally any other term as well as "truth"/"true".) The second, not so much.

An analogy: Suppose something is vibrating and I say "The fundamental frequency of that vibration is approximately 256 Hz". Just as we can all propose subtly (or not so subtly) different ideas of what it means to say that something is "true", so we can all propose different definitions for "hertz".[1] Or for that matter for "fundamental" or "frequency". So two people making that statement might mean different things. But I don't think it's helpful to say that this means that the fundamental frequency of an oscillation is subjective. Once you decide what you would like "fundamental frequency" to mean and what units you'd like it to be in, any two competently done measurements will give the same value.

[1] If you think this is silly, you might want to suppose that instead I had said "... is approximately that of middle C". You could measure frequency in "octaves relative to middle C" exactly as well as in hertz, but different groups of people at different times really have called different frequencies "middle C".
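
To make the footnote concrete, here is a minimal sketch (my illustration; the 261.63 Hz figure assumes the modern A440 convention, while the 256 Hz figure above corresponds to the older "scientific pitch" convention):

```python
import math

# Two human conventions for "middle C"; which to use is a choice,
# but once the convention is fixed, any two competent measurements agree.
MIDDLE_C_A440_HZ = 261.63       # modern concert pitch (A = 440 Hz)
MIDDLE_C_SCIENTIFIC_HZ = 256.0  # older "scientific pitch"

def hz_to_octaves_above_middle_c(f_hz, middle_c_hz=MIDDLE_C_A440_HZ):
    """Express a frequency in "octaves relative to middle C" instead of hertz."""
    return math.log2(f_hz / middle_c_hz)

print(hz_to_octaves_above_middle_c(523.25))  # ~1.0: an octave above middle C
print(hz_to_octaves_above_middle_c(256.0, MIDDLE_C_SCIENTIFIC_HZ))  # 0.0
```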

Similarly, at least prima facie it's possible that (a) everything you say about the existence of different criteria-for-truth is correct but none the less (b) there is a fact of the matter, not dependent on anyone's kinda-arbitrary decisions, about e.g. what things remain when you stop believing in them, or what beliefs will reliably lead a given class of agent to more accurate predictions about the future, or what sets of beliefs and inference rules constitute consistent formal systems.

Perhaps it turns out that for some or many or all plausible notions of truth (b) is not, er, true, so that claim 2 above is, er, true. That would be an interesting, er, truth -- to me, much more interesting than the less controversial claim 1. But if you've given any reason here for believing it, I haven't seen it.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-07T20:38:30.676Z · LW(p) · GW(p)

But I don't think it's helpful to say that this means that the fundamental frequency of an oscillation is subjective.

I think you might be imagining I'm saying more than I am, because as I see it this statement of yours contains exactly the point I'm making in this post. The very fact that a claim about truth can be "helpful" is a manifestation of the point that I'm making.

I'm not saying the choice of what truth means is arbitrary. I'm saying it's contingent on what matters to humans. Another way to make my point: can you define truth in a way that is sensible to rocks?

Replies from: gjm
comment by gjm · 2022-10-08T01:09:31.438Z · LW(p) · GW(p)

Let me try to restate what I think you're saying your point is, to see whether I have it right. "When we say something is 'true', there are any number of different things we could conceivably mean. The specific meaning we have in mind, to whatever extent there is one, will depend on what we are interested in and what we want. So it is a mistake to think of 'truth' as some sort of objective thing not dependent on human interests and preferences."

If my paraphrase is correct or near to it, then I think my point stands. The last sentence in that paraphrase, which if I've got it right expresses your main conclusion, is importantly ambiguous, and the version of it that follows from what's gone before is (it seems to me) not actually interesting or important.

The version that follows from what's gone before is just observing that the way we define our words, and the questions we find it worth asking, depend on our interests and preferences. Yup, they do, but that doesn't conflict with what I think people (at least otherwise sensible and clever people) generally mean when they say things like "I believe in objective truth".

No, I can't define truth, or anything, in a way that is sensible to rocks, because nothing is sensible to rocks. And because nothing is sensible to rocks, the fact that I can't define truth to be sensible to rocks tells us nothing about truth that would distinguish it from beauty, or rest mass, or anything else.

Perhaps I am all wrong in thinking that the "weak" version of the final claim is not interesting or important. Could you maybe give an example of a concrete error you think someone generally sensible and clever has made as a result of not seeing the truth of the "weak" version, and which they would plausibly not have made if they had seen it?

(I think what you're saying by "contingent on what matters to humans" is much the same as what I was saying by "somewhat arbitrary", just with different emphasis. I would not disagree, e.g., with "somewhat arbitrary, with the particular choices we tend to make being shaped by what matters to us". It is not coincidence that my choice of the word "helpful" is consonant with the point you're making; it was deliberately chosen to be.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-08T02:13:18.858Z · LW(p) · GW(p)

That you don't think it's interesting or important suggests you probably already grasp the point of this post and are just framing it differently than I would. For some readers what I'm saying here is sort of mind-blowing because they're walking about thinking that truth is like an objective, hard, real thing that exists totally independent of humans, hence my choice of emphasis. Sounds to me like you may already grasp my fundamental point and are seeing that it all adds back up to normality.

That said, I wrote a post a while ago [LW · GW] with several examples of how understanding the "weak" version of the final claim matters.

Replies from: philh
comment by philh · 2022-10-16T11:30:04.699Z · LW(p) · GW(p)

For some readers what I’m saying here is sort of mind-blowing because they’re walking about thinking that truth is like an objective, hard, real thing that exists totally independent of humans, hence my choice of emphasis.

Another hypothesis here is that some readers misunderstand your point and think you're saying something different than you intend to say.

If I follow the discussion so far (and I confess I've just skimmed it), then the meaning I take from the words "truth doesn't exist independent of humans" is not a meaning you intend to convey. To convey the meaning I think you intend to convey, I would say something like: ""truth" doesn't exist independent of humans, in that we can define the word in many ways; but truth itself, for most definitions of the word in common use, does exist independent of humans".

And I agree with what I take gjm to be saying: that this is trite. It may indeed be that some people find it mind-blowing.

But, it seems to me that most commenters on this post took you to be saying the same thing that I took you as saying; roughly, the thing that the words "truth doesn't exist independent of humans" conveys to me.

So I consider it a decent guess that if someone thinks the thing you're saying is deep, it's not because they think the-thing-I-think-is-trite is deep. It may be that they misunderstood you in the same way that most commenters on this post misunderstood you.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-16T20:26:55.229Z · LW(p) · GW(p)

Nothing exists independently. Everything is causally connected. So although I'm making a point about truth here because I think it's a case where failing to understand this interconnectedness matters, it's a fully general point.

Perhaps the real problem is that I didn't try to convince folks of this in this post, rather than focusing on a specific consequence that I think is rather important for folks who read Less Wrong.

Replies from: philh
comment by philh · 2022-10-17T09:25:30.402Z · LW(p) · GW(p)

It's not clear to me how this was intended as a response to my comment. Was it "I reject that hypothesis because..." or "no, you're misunderstanding what's being said" or...?

But it seems to me that the biggest problem with the post is likely one of two things:

  1. You're not yourself confusing the quotation with the referent, but you write in a way that doesn't clearly distinguish them. This makes some readers think you're confusing them. Perhaps it makes other readers think you're saying something deep.

    If this is the problem, then explaining why you're making the point you're making might be helpful. But I suggest it would be more helpful to make the point you're making clearer, and that explicitly distinguishing quotation from referent would help with that.

  2. You are confusing the quotation with the referent. For example, when you say "I’m making a point about truth here", you think you are indeed making a point about truth; whereas I (and I believe gjm) claim you are making a point about the word "truth". I read you as saying to gjm "yeah you understand what I'm saying, you just don't think it's very interesting, that's fine, other people do". Perhaps so, but another possibility I have to consider is that you yourself misunderstand what you're making a point about, and misunderstand gjm when he tries to explain.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-17T16:43:07.538Z · LW(p) · GW(p)

All I can do is point; you have to look for yourself.

My previous comment reflects the fact that I think there's a big inferential gap here caused by having not tackled another topic.

comment by Alex Flint (alexflint) · 2022-10-17T18:52:36.060Z · LW(p) · GW(p)

I have the sense that it's ferociously difficult to get at the kind of thing you're pointing at here in purely conceptual terms. I wonder if it might help to give some examples of where people have made the kind of mistake you're pointing at here, and perhaps the solution that they've been missing. I have the sense that the proof-based agents line of research ran right into this issue via the limits to the coherence operationalization of truth in your list. I also have the sense that we are at the moment running into the limits of the predictive operationalization of truth when we try to locate human values within physical human brains. The most interesting one to me, though, is: what is the truth status of practice as a path to the end of suffering?!

comment by tailcalled · 2022-10-08T13:50:04.177Z · LW(p) · GW(p)

On a meta level, you've written a bunch about the problem of the criterion, and I don't really feel like much productive has come of it. The problem with the problem of the criterion is that in general the appropriate criterion depends on the case you're dealing with. So the universal things you can say about the problem of the criterion are extremely limited.

Less cryptically, let's take an example. What is "truth"? Well, concretely, my girlfriend recently lost her phone, what defines the truth of "where her phone was"? Well, it's the place where her phone was; the place she could go to and reach for to get the phone. That's fairly simple, and it's a resolution that is straightforward and intrinsic to the sentence of "where her phone was", rather than general.

Now you could say that "where her phone was" is defined based on what she cares about. But the sentence makes sense to you too, even though you don't know her and aren't really affected by the sentence. So clearly language is much broader than just what you care about.

You claim:

[...] It's easy to think about truth as a concept and argue about which notion of truth is best. It's much harder to evaluate whether something is true while remembering, at the same time, that your method of evaluation is contingent on which notion of truth you think is best.

[...]

Why argue all this? Because I think people suffer and cause unnecessary harm because they get wrapped up in their own ideas about truth (and many other things!). By learning to look up [LW · GW], even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is. And if we can do that, maybe we can make a little progress on some of the hard problems we face instead of spinning our wheels trying to solve the problems we make up in our heads.

But none of this philosophizing actually helped find the phone. Breaking free of our self-constructed dreams of "where her phone was" wouldn't help at all; what did help was calling her phone and following the sound, which revealed that it was hidden behind a bag on the table. This only really worked because we had a good map-territory correspondence that allowed us to understand what would happen in various cases, which requires a very solid grasp on truth, and it's probably also more available because we've learned this heuristic from other cases where a lost phone was a problem.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-09T01:58:18.897Z · LW(p) · GW(p)

That she and you aren't trying to resolve problems where the contingency of facts matters is a blessing. Please enjoy not having to deal with these problems.

This is not sarcastic or a joke. Really, you're lucky!

comment by Q Home · 2022-10-24T10:58:49.908Z · LW(p) · GW(p)

Would you like to discuss a stronger claim, that motivated cognition may be a good epistemology?

Usually people use "logical reasoning + facts". Maybe we can use "motivated reasoning + facts". I.e. seek a balance between desirability and plausibility of a hypothesis. 
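
One toy way to formalize this (my sketch, not something Q Home specifies; the weighted-sum scheme and the `weight` parameter are assumptions for illustration):

```python
def blended_score(log_likelihood, desirability, weight=0.8):
    """Score a hypothesis by a weighted blend of plausibility and desirability.

    weight=1.0 recovers ordinary evidence-only scoring;
    weight=0.0 is pure wishful thinking.
    """
    return weight * log_likelihood + (1 - weight) * desirability

# A well-supported but unpleasant hypothesis vs. a flattering long shot:
print(blended_score(log_likelihood=-1.0, desirability=-2.0))  # -1.2
print(blended_score(log_likelihood=-4.0, desirability=3.0))   # -2.6
```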

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-24T15:41:47.754Z · LW(p) · GW(p)

I would say that of course motivated reasoning can lead to good epistemology since my claim is that all epistemology is done at the behest of some motivation, good being relative here to some motivation. :-)

For example, it's quite reasonable to pick a norm like logic or Bayesian rationality and expect reasoning to conform to it in order to produce knowledge of a type that is useful, say to the purpose of predicting what the world will be like in future moments.

Replies from: Q Home, Q Home
comment by Q Home · 2022-10-24T22:15:05.334Z · LW(p) · GW(p)

Sorry, I meant using motivated cognition as a norm itself. Using motivated cognition for evaluating hypotheses. I.e. I mean what people usually mean by motivated cognition: "you believe in this (hypothesis) because it sounds nice".

Here's why I think that motivated cognition (MC) is more epistemically interesting/plausible than people think:

  • When you're solving a problem A, it may be useful to imagine the perfect solution. But in order to imagine the perfect solution for the problem A you may need to imagine such solutions for the problems B, C, D etc. ... if you never evaluate facts and hypotheses emotionally, you may not even be able to imagine what the "perfect solution" is.
  • MC may be a challenge: often it's not obvious what the best possibility is. And the best possibilities may look weird.
  • Usual arguments against MC (e.g. "the universe doesn't care about your feelings", "you should base your opinions on your knowledge about the universe") may be wrong, because feelings may be based on knowledge about reality.
  • Modeling people (even rationalists) as using different types of MC may simplify their arguments and opinions.
  • MC in the form of ideological reasoning is, in a way, the only epistemology known to us. Bayesianism is cool, but on some important level of reality it's not really an epistemology (in my opinion), i.e. it's hard/impossible to use and it doesn't actually model thinking and argumentation.

If you want we can discuss those or other points in more detail.

comment by Q Home · 2022-11-23T03:33:07.521Z · LW(p) · GW(p)

I wrote a post [LW · GW] about motivated cognition in epistemology, a version [LW · GW] of "the problem of the criterion" and (a bit) about different theories of truth. If you want, I would be happy to discuss some of it with you.

comment by JacobW38 (JacobW) · 2022-10-08T02:09:58.557Z · LW(p) · GW(p)

Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as "that which pays the most rent in anticipated experience"; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I'm constantly looking really hard at the evidence I examine and asking myself: am I convinced of this for the right reasons? What would have to happen to unconvince me? How can I take a detached stance toward this belief, if ever there comes a time when I may no longer want it? So insofar as my truth-seeking could be called motivated, I aim to constrain it to being motivated solely by adherence to the scientific method, which is something I am unashamed to simply acknowledge.

comment by Donald Hobson (donald-hobson) · 2022-12-01T01:08:53.641Z · LW(p) · GW(p)

And that means whatever we want to claim to be true is ultimately motivated by whatever it is we care about that led us to choose the definition of truth we use.

People who speak different languages don't use the symbol "truth". To what extent are people using different definitions of "truth" just choosing to define a word in different ways and talking about different things?

In an idealized agent, like AIXI, the world modeling procedure, the part that produces hypotheses and assigns probabilities, doesn't depend on its utility function. And it can't be motivated, because motivation only works once you have some link from actions to consequences, and that needs a world model.

If the world model is seriously broken, the agent is just non-functional. The workings of the world model aren't a choice for the agent; they're a choice for whatever made the agent.
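
A minimal sketch of this factoring (my illustration, not the AIXI formalism; hypotheses are modeled as toy callables that return the probability they assigned to an observation):

```python
def update_model(model, observation):
    """World modeling: reweight each hypothesis by its predictive success.
    The utility function appears nowhere in this step."""
    weights = {h: w * h(observation) for h, w in model.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def choose_action(model, utility, actions):
    """Action selection: only here does motivation enter, via the utility
    function, and only by consulting the already-formed world model."""
    return max(actions, key=lambda a: sum(w * utility(a, h) for h, w in model.items()))
```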

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-12-01T17:56:05.822Z · LW(p) · GW(p)

In an idealized agent, like AIXI, the world modeling procedure, the part that produces hypotheses and assigns probabilities, doesn't depend on its utility function. And it can't be motivated, because motivation only works once you have some link from actions to consequences, and that needs a world model.

AIXI doesn't exist in a vacuum. Even if AIXI itself can't be said to have self-generated motivations, it is built in a way that reflects the motivations of its creators, so it is still infused with motivations. Choices had to be made to build AIXI one way rather than another (or not at all). The generators of those choices are where the motivations behind what AIXI does lie.

If the world model is seriously broken, the agent is just non-functional. The workings of the world model aren't a choice for the agent; they're a choice for whatever made the agent.

Yes, although some agents seem to have some amount of self-reflective ability to change their motivations.

comment by Shmi (shminux) · 2022-10-08T00:31:36.815Z · LW(p) · GW(p)

It pays to taboo the term, as I've been advocating for years here, with little success. 

Say what you really mean instead of this nebulous misleading concept! Sometimes it is the truth value of a provable or disprovable mathematical statement (e.g. the Pythagorean theorem), sometimes it is someone's best guess at the truth value of a mathematical statement (P is very likely provably not equal to NP), sometimes it is a statement about the accuracy of some model of the physical world (e.g. Quantum Mechanics is "true" in its domain of applicability), sometimes it is a statement of faith ("my truth" vs "your truth"), etc.

Tabooing "truth" avoids pointless arguments over statements of the form "unprovable/untestable but true", like MWI is obviously true", or "Genghis Khan liked horse milk" or "BB(10)'th digit of Pi has 10% probability of being 0". Alternatives to the term "true" are "testably accurate", "holds in all but measure zero possible worlds, given a certain set of assumptions", "something I fervently believe in" and other items from your list. 

"Fact" is another term that is worth tabooing, I call it yet another four-letter f-word.

Replies from: JacobW
comment by JacobW38 (JacobW) · 2022-10-08T02:18:58.126Z · LW(p) · GW(p)

I like this proposal. In light of the issues raised in this post, it's important for people to come into the custom of explaining their own criteria for "truth" instead of leaving what they are talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I am interested in arriving at. Basically, we shouldn't be talking about the world as though we have actual means of knowing things about it with probability 1.

comment by tailcalled · 2022-10-07T22:26:59.792Z · LW(p) · GW(p)

My notion of truth doesn't fit with any of the theories you listed. Truth is a relationship between propositions and the world, e.g. the proposition "this comment contains 1 or more y's" is true because this comment contains 1 or more y's.

This doesn't technically invalidate your point that truth is human-chosen. But specifically, the human-chosen element is the language we use. If we spoke a different language where the meanings of the words "more" and "fewer" were swapped, the statement would become false.

Though my counterargument here unfairly skews things to my advantage. AFAIK, there is a lot of shared structure between different human languages. Usually human languages can be translated near-losslessly into each other, but a combinatorial argument shows that this is not the case for the overwhelming majority of mathematically conceivable languages.
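
One way to cash out that counting intuition (my gloss; the description-length framing is an assumption, not necessarily the argument tailcalled has in mind):

```python
# There are 2**(2**n) distinct mappings from n-bit situations to a truth
# value, but fewer than 2**(L + 2) programs of length <= L bits, so for even
# modest n almost no mapping has a short description -- and hence no concise
# translation procedure into a language we actually use.
n, L = 6, 21
num_mappings = 2 ** (2 ** n)       # 2**64 conceivable mappings
num_short_programs = 2 ** (L + 2)  # upper bound on short descriptions
print(num_short_programs / num_mappings)  # ~4.5e-13: a vanishing fraction
```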

However, I think similar combinatorial arguments show that it is not possible to obtain information about the truth in the overwhelming majority of mathematically conceivable languages. Conceptually, if the truth of statements depends mainly on things that you do not observe (which it will for concepts that depend on random stuff, since there's a lot of unseen stuff they could potentially depend on), then you cannot learn anything about the truth.

You're saying that our languages are based on our motivations and preferences. But almost any set of motivations and preferences would favor a language that can express concepts that are observable, as well as concepts linked to observables. I bet there's an instrumental convergence argument that could be made here; do you disagree?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-07T23:15:32.260Z · LW(p) · GW(p)

This doesn't technically invalidate your point that truth is human-chosen.

I'm not sure what you're trying to argue here. I give a bunch of examples of theories of truth, but there can of course be more; my list is not exhaustive. Your theory still has the property of depending on a criterion that distinguishes that which is true from that which is not, so it doesn't change the remainder of my arguments.

You're saying that our languages are based on our motivations and preferences. But almost any set of motivations and preferences would favor a language that can express concepts that are observable, as well as concepts linked to observables. I bet there's an instrumental convergence argument that could be made here; do you disagree?

The sort of thing we find it useful to label "truth" reflects what's useful to us, which includes saying things about what we observe. If you had a language where that wasn't possible, you'd probably invent a way to do it. Because many humans care about the same things, we converge on finding the same sort of things useful, so they become fixed concepts we teach each other and build into our languages.

I'm not sure if we can truly make a case that this is instrumental convergence, because I don't think any of this is happening independently enough for that to be meaningful, but my point could be phrased as: we care about truth for instrumental reasons, and many people have the same instrumental reasons because they care about the same things.

Replies from: tailcalled
comment by tailcalled · 2022-10-08T06:57:16.788Z · LW(p) · GW(p)

Truth isn't necessarily about what's useful to us, though. There's a truth to the matter about whether Russell's teapot exists, but that doesn't mean it is useful.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-08T13:07:02.564Z · LW(p) · GW(p)

That someone cares what the answer is is a kind of usefulness.

Replies from: tailcalled
comment by tailcalled · 2022-10-08T13:31:03.622Z · LW(p) · GW(p)

I think there are lots of propositions that can be phrased in English and that nobody cares about.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-09T01:54:25.167Z · LW(p) · GW(p)

I think maybe you're meaning something different by "care" than I am. You seem to mean something like "important". I mean something like "care enough to even ever bother thinking about it". That there are infinitely many statements no one cares about by my definition doesn't seem a problem; in fact it's an important thing to know.

comment by Victor Novikov (ZT5) · 2022-10-10T12:47:18.608Z · LW(p) · GW(p)

I would say that if you hold the correct understanding of what truth is, then truth seeking is cognition motivated by seeking truth.

So yes, it is motivated cognition. But the motivation is correct.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-10T12:54:25.115Z · LW(p) · GW(p)

This seems like circular reasoning that doesn't ground out to anything. How do you know if you have the correct ("true") understanding of truth?

Replies from: ZT5
comment by Victor Novikov (ZT5) · 2022-10-20T05:45:29.219Z · LW(p) · GW(p)

There exists an objective reality. A true statement correctly describes that objective reality. A false statement incorrectly describes that objective reality.

It really is quite simple (though people manage to get very confused about that anyway, somehow).

Yes, there is "circularity" to it, in that the mind uses itself to validate itself.

But it's not just validating the definition of truth against itself (if it did, "truth" would just be a floating concept not connected to anything, so it could mean anything and still validate).

It is validating my definition of truth against all my sensory input, against all my knowledge, against all my memories. Does this definition of truth add up to a coherent reality?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-20T15:04:27.544Z · LW(p) · GW(p)

How do you know that this objective reality exists? What about the world is explained by the existence of objective reality that can't also be explained as an illusion of your own mind?

Replies from: ZT5
comment by Victor Novikov (ZT5) · 2022-10-20T15:33:36.759Z · LW(p) · GW(p)

This isn't news to me. Nor do I feel this is an interesting topic to discuss.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-20T16:17:11.103Z · LW(p) · GW(p)

That you think this comic captures this discussion means I've missed the mark with you, because you've failed to grasp the intended meaning. I suspect, like many other commenters here, you've interpreted my words to say more than they do.

Replies from: ZT5
comment by Victor Novikov (ZT5) · 2022-10-20T16:54:35.803Z · LW(p) · GW(p)

Then I'm not sure what you are trying to say. Perhaps it would be easier if you explain your beliefs instead of trying to get me to question mine?

It seems like you are trying to break people out of an over-reliance on concepts and trying to point them at the fundamental thing behind the concepts? 

My beliefs validate; I don't see it being worth my time to explain the validation process in full detail.

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-21T01:35:34.196Z · LW(p) · GW(p)

Indeed, this post is more focused on breaking people out of one set of concepts rather than fully explaining another, because it's a long process to explain the thing I'm pointing at, and this post was a way for me to play with writing about a couple of ideas I had for a larger writing project.

If it's not worth your time, that's well worth knowing!

Replies from: ZT5
comment by Victor Novikov (ZT5) · 2022-10-21T04:58:00.410Z · LW(p) · GW(p)

Thanks, that clarifies your position somewhat.

It is not worth my time because I already understand the thing you are trying to communicate. Or so I believe.

If you are trying to get me to "look up" and "look away from my phone", but we are communicating over phones, so how do I demonstrate I already know to do this?

If you are trying to get me to see the truth beyond words and concepts, but we are communicating in words, how do I demonstrate I already don't see words as the truth?

I also feel that maybe you have gone too far in on that one, and from realizing that words are not, in themselves, the truth, decided to assume that words cannot meaningfully connect to the truth at all. And that the only way to get people to see the truth is to "crash their program", to force them to "look up"?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-10-21T17:42:04.805Z · LW(p) · GW(p)

What does it matter if you've demonstrated you know something to me? I'm just some guy posting things on Less Wrong.

I never said that words cannot meaningfully connect to truth or any other thing. Words are clearly quite useful for pointing at stuff about the world! I only claimed that this connection is not independent of our motivations.