Truth: It's Not That Great
post by ChrisHallquist · 2014-05-04T22:07:54.354Z
Rationality is pretty great. Just not quite as great as everyone here seems to think it is.
The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...
The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.
- Robin Hanson, "Who Loves Truth Most?"
A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited it to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."
I saw this, and commented:
<puts rubber Robin Hanson mask on>
What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.
</rubber Robin Hanson mask>
In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."
By this standard, many statements about the value of truth get a stamp of approval. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.
This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize that information is valuable, but avoid analysis paralysis.
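One way to make that tradeoff concrete is to treat it as a value-of-information calculation. Below is a minimal sketch in Python (not from the original post; all numbers are made up for illustration) comparing the expected payoff of acting on your current beliefs with the expected payoff of first paying for information that resolves the uncertainty:

```python
# Illustrative sketch: should we gather more information before acting?
# Compares "act now" with "learn which world we're in, then act" (EVPI).

def expected_value(probabilities, payoffs):
    """Expected payoff of an action with uncertain outcomes."""
    return sum(p * v for p, v in zip(probabilities, payoffs))

p_world = [0.6, 0.4]      # hypothetical beliefs: P(world A), P(world B)
payoff_a = [100, 20]      # intervention A's payoff in each world
payoff_b = [50, 80]       # intervention B's payoff in each world

# Act now: pick the intervention with the best expected payoff.
act_now = max(expected_value(p_world, payoff_a),
              expected_value(p_world, payoff_b))

# Perfect information: learn the true world first, then pick the best
# intervention for it. The difference is the most the information is worth.
with_info = sum(p * max(a, b) for p, a, b in zip(p_world, payoff_a, payoff_b))
evpi = with_info - act_now

cost_of_study = 15        # hypothetical cost of gathering the information
print(f"act now: {act_now:.1f}, with info: {with_info:.1f}, EVPI: {evpi:.1f}")
print("gather the info first" if evpi > cost_of_study else "just act")
```

With these made-up numbers the information is worth 24 units and costs 15, so gathering it wins; with a pricier study or less divergent payoffs, acting immediately would.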
Or, consider statements like:
- Some truths don't matter much.
- People often have legitimate reasons for not wanting others to have certain truths.
- The value of truth often has to be weighed against other goals.
Do these statements sound heretical to you? But what about:
- Information can be perfectly accurate and also worthless.
- People often have legitimate reasons for not wanting other people to gain access to their private information.
- A desire for more information often has to be weighed against other goals.
I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.
There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but if we can't say there's some limit to how great truth is, hello affective death spiral.
Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.
Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare it to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?
Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information.
"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.
Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.
58 comments, sorted by top scores.
comment by Louie · 2014-05-05T11:14:44.545Z · LW(p) · GW(p)
2009: "Extreme Rationality: It's Not That Great"
2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"
2013: "How about testing our ideas?"
2014: "Truth: It's Not That Great"
2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"
2016: "In Defense Of Putting Babies In Wood Chippers"
Replies from: SaidAchmiz
↑ comment by Said Achmiz (SaidAchmiz) · 2014-05-05T21:27:42.012Z · LW(p) · GW(p)
2016: "In Defense Of Putting Babies In Wood Chippers"
Heck, I could write that post right now. But what's it got to do with truth and such?
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-05-05T22:26:58.554Z · LW(p) · GW(p)
I think it has something to do with countersignaling and being contrarian.
comment by 110phil · 2014-05-05T04:10:26.445Z · LW(p) · GW(p)
I read the "heretical" statements as talking about truth replacing falsehood. I read the non-heretical statements as talking about truth replacing ignorance. If you reword the "truth" statements to make it clear that the alternative is not falsehood, they would sound much less heretical to me.
comment by Lumifer · 2014-05-05T15:39:55.977Z · LW(p) · GW(p)
This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism".
Huh? What? Wait a moment....
These two are entirely different things. Under the local definitions, rationalism is making sure the map looks like the territory and doing stuff which will actually advance your goals. Notably, rationalism is silent about values -- it's perfectly possible to be a rational Nazi. You can crudely define rationalism as "being grounded in reality".
Altruism, on the other hand, is all about values. A very specific set of values.
You can't "rebrand" a movement that way -- what you imply is a wholesale substitution of one movement with another.
Replies from: ChristianKl, Arran_Stirton
↑ comment by ChristianKl · 2014-05-05T16:06:38.705Z · LW(p) · GW(p)
Altruism, on the other hand, is all about values.
We are speaking about effective altruism, not altruism in general.
In practice there seems to be quite an overlap between the EA and the LW crowd and there are people deciding whether to hold EA or LW meetups.
Replies from: blacktrance, Lumifer
↑ comment by blacktrance · 2014-05-05T22:50:11.325Z · LW(p) · GW(p)
Just because there's an overlap doesn't mean that LW should be rebranded as EA.
↑ comment by Lumifer · 2014-05-05T16:13:24.595Z · LW(p) · GW(p)
We are speaking about effective altruism, not altruism in general.
Effective altruism is a subtype of altruism.
there seems to be quite an overlap between the EA and the LW crowd
There is also an overlap between neoreactionaries and the LW crowd. So?
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-05-05T16:24:17.342Z · LW(p) · GW(p)
There is also an overlap between neoreactionaries and the LW crowd. So?
There are only a few percent neoreactionaries, and I have yet to hear of people seriously considering whether to run a neoreactionary or an LW meetup.
Replies from: Nornagest
↑ comment by Nornagest · 2014-05-05T16:35:01.675Z · LW(p) · GW(p)
Specifically, according to the 2013 survey, 2.4% of LW identifies as neoreactionary, while 28.6% identifies as effective altruist. The "reactionary" option is buried in a second-tier politics question, so I suspect it's underrepresenting LWers with neoreactionary sympathies, but I'd still be surprised if we were looking at more than single digits.
Replies from: None, Lumifer
↑ comment by [deleted] · 2014-05-07T21:15:44.475Z · LW(p) · GW(p)
Specifically, according to the 2013 survey, 2.4% of LW identifies as neoreactionary,
Admittedly, I'd bet this is higher than the rate among the general population, if only because LW-ers are more likely to have heard of obscure ideologies at all.
Replies from: Nornagest
↑ comment by Nornagest · 2014-05-07T21:17:11.757Z · LW(p) · GW(p)
Probably. LW wasn't where I met my first neoreactionary, but it was where I met my second through my fifth.
It also draws on a similar demographic: disaffected mostly-young mostly-nerds with a distrust of conventional academia and a willingness to try unusual things to solve problems.
Replies from: None
↑ comment by [deleted] · 2014-05-07T21:26:28.396Z · LW(p) · GW(p)
In defense of distrusting conventional academia, I currently work in conventional academia, and it has plenty of genuine problems above and beyond the mere fact that someone on the internet might have some hurt feelings about not fitting in at graduate school (or some secret long-held resentment about taking a lucrative industry job instead of martyring themselves to the idol of Intellect by... going to grad-school).
I still trust a replicated scientific study more than most other things, but I don't necessarily trust academia anymore to have done the right studies in the first place, and I have to remind myself that studies can only allocate belief-mass between currently salient hypotheses.
Replies from: Nornagest, TheAncientGeek
↑ comment by TheAncientGeek · 2014-05-07T21:41:17.981Z · LW(p) · GW(p)
You seemed to be saying that conventional academia doesn't do well by absolute standards, but that doesn't mean anyone else is doing better relatively.
Replies from: None
↑ comment by Lumifer · 2014-05-05T16:36:54.689Z · LW(p) · GW(p)
while 28.6% identifies as effective altruist.
So, getting back to the original issue, does it look reasonable to "rebrand a movement" if somewhat less than a third of it identifies itself with a new brand?
Replies from: Nornagest, ChristianKl
↑ comment by Nornagest · 2014-05-05T16:44:53.233Z · LW(p) · GW(p)
Wasn't trying to stake out a claim there. Since you ask, though, I'd expect under half of LW contributors to identify as rationalists in the sense of belonging to a movement, and I wouldn't be surprised if those people were also more likely to identify as effective altruists.
The survey unfortunately doesn't give us the tools to prove this directly, but we could probably correlate meetup attendance with EA identification.
Replies from: Lumifer
↑ comment by Lumifer · 2014-05-05T16:49:55.921Z · LW(p) · GW(p)
I'd expect under half of LW contributors to identify as rationalists in the sense of belonging to a movement
A good point. And speaking of, why did this whole idea of LW being a "movement" pop up?
LW is a movement like Something Awful is a movement. At least the Goonies used to be able to whistle up large fleets in Eve... X-D
Replies from: Nornagest
↑ comment by Nornagest · 2014-05-05T16:51:07.437Z · LW(p) · GW(p)
And speaking of, why did this whole idea of LW being a "movement" pop up?
I imagine the Craft and the Community sequence has something to do with it.
↑ comment by ChristianKl · 2014-05-05T21:33:26.118Z · LW(p) · GW(p)
The whole point of rebranding is that normally before you rebrand nobody identifies with the new brand.
↑ comment by Arran_Stirton · 2014-05-11T17:22:48.203Z · LW(p) · GW(p)
This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism".
I'm fairly certain ChrisHallquist isn't suggesting we re-brand rationality 'effective altruism', otherwise I'd agree with you.
As far as I can tell he was talking about the kinds of virtues people associate with those brands (notably 'being effective' for EA and 'truth-seeking' for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things, whereas truth-seeking leads to (as he says) analysis paralysis.
Replies from: Lumifer
↑ comment by Lumifer · 2014-05-12T02:28:17.669Z · LW(p) · GW(p)
the kinds of virtues people associate with those brands (notably 'being effective' for EA and 'truth-seeking' for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things,
The virtue of "being effective" is not always virtuous unless you're willing to see virtue in constructing effective baby-mulching machines...
Replies from: Arran_Stirton
↑ comment by Arran_Stirton · 2014-05-14T17:36:33.240Z · LW(p) · GW(p)
I think we’re using different definitions of virtue. Whereas I’m using the definition of virtue as a good or useful quality of a thing, you’re taking it to mean a behavior showing high moral standards. I don’t think anyone would argue that the 12 virtues of rationality are moral, but it is still a reasonable use of English to describe them as virtues.
Just to be clear: The argument I am asserting is that ChrisHallquist is not in any way suggesting that we should rename rationality as effective altruism.
I hope this makes my previous comment clearer :)
comment by LoganStrohl (BrienneYudkowsky) · 2014-05-05T04:03:47.537Z · LW(p) · GW(p)
I was not signaling. Making it a footnote instead of just editing it outright was signaling. Revering truth, and stating that I do so, was not.
Now that I've introspected some more, I notice that my inclination to prioritize the accuracy of information I attend to above its competing features comes from the slow accumulation of evidence that excellent practical epistemology is the strongest possible foundation for instrumental success. To be perfectly honest, deep down, my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".
I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques that I double-thought myself into believing accuracy was not so great.
But I was wrong, you see. Having accurate beliefs is a ridiculously convergent incentive, so whatever my goal structure, it was only a matter of time before I'd recognize that. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.
Recognizing only abstractly that map-territory correspondence is useful does not produce the same results. Cultivating a deep dedication to ensuring every motion precisely engages reality with unfailing authenticity prevents real-world mistakes that noting the utility of information, just sort of in passing, will miss.
For some people, dedication to epistemic rationality may most effectively manifest as excitement or simply diligence. For me, it is reverence. Reverence works in my psychology better than anything else. So I revere the truth. Not for the sake of the people watching me do so, but for the sake of accomplishing whatever it is I happen to want to accomplish.
"Being truth-seeking" does not mean "wanting to know ALL THE THINGS". It means exhibiting patters of thought and behavior that consistently increase calibration. I daresay that is, in fact, necessary for being well-calibrated.
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2014-05-06T21:31:32.719Z · LW(p) · GW(p)
...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".
Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?
Replies from: BrienneYudkowsky
↑ comment by LoganStrohl (BrienneYudkowsky) · 2014-05-06T23:40:29.407Z · LW(p) · GW(p)
It is indeed a cue to look for motivated reasoning. I am not neglecting to do that. I have scrutinized extensively. It is possible to be motivated by very simple emotions while constraining the actions you take to the set endorsed by deliberative reasoning.
The observation that something fits the status-seeking patterns you've cached is not strong evidence that nothing else is going on. If you can write off everything anybody does by saying "status" and "signaling" without making predictions about their future behavior--or even looking into their past behavior to see whether they usually fit the patterns--then you're trapped in a paradigm that's only good for protecting your current set of beliefs.
Yes, I do have good reasons to think the things I'm imitating are causes of their success. Yes, they're more successful on average than people who don't do the things, and indeed I think they're probably more successful with respect to my values than literally everybody who doesn't do the things. And I don't "happen" to be in close proximity to them; I sought them out and became close to them specifically so I could learn from them more efficiently.
I am annoyed by vague, fully general criticisms that don't engage meaningfully with any of my arguments or musings, let alone steel man them.
comment by Jack · 2014-05-06T07:15:55.814Z · LW(p) · GW(p)
Truth-telling seems clearly overrated (by people on Less Wrong but also pretty much everyone else). Truth-telling (by which I mean not just not-lying but going out of your way and sacrificing your mood, reputation or pleasant socializing just to say something true) is largely indistinguishable from "repeating things you heard once to signal how smart or brave or good you are."
Truth-seeking as in observing and doing experiments to discover the structure of the universe and our society still seems incredibly important (modulo the fact that obviously there are all sorts of truths that aren't actually significant). And I actually think that is true even if you call it information gathering, though 'information gathering' is certainly vastly less poetic and lacks the affective valence of Truth.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-05-06T07:45:02.989Z · LW(p) · GW(p)
'information gathering' is certainly vastly less poetic and lacks the affective valence of Truth.
"Information gathering" also suggests a stroll in the park, gathering up the information that is just lying around. Getting at the truth is generally harder than that.
comment by NancyLebovitz · 2014-05-06T12:57:33.815Z · LW(p) · GW(p)
There's simply more territory than our maps can encompass. If you want to get anything done, there comes a point when you have to act on information that's not ideally complete. What's more, you aren't going to have complete information about when to stop searching.
Any recommendations for discussions of this problem?
"Effective altruism" is at risk of turning into a signal, though perhaps not quite as quickly are "rationality" or "truth-seeking".
comment by Eugine_Nier · 2014-05-06T01:18:04.582Z · LW(p) · GW(p)
"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others.
You should even more doubt your motivations for deceiving yourself.
comment by ChristianKl · 2014-05-05T00:42:43.914Z · LW(p) · GW(p)
Edit: For the following, clicking "agree" is supposed to mean that you consider the statement heretical.
"Some truths don't matter much." sounds heretical [pollid:691]
"People often have legitimate reasons for not wanting others to have certain truths." sounds heretical [pollid:692]
"The value of truth often has to be weighed against other goals." sounds heretical [pollid:693]
"Information can be perfectly accurate and also worthless." sounds heretical [pollid:694]
"People often have legitimate reasons for not wanting other people to gain access to their private information. " sounds heretical [pollid:695]
"A desire for more information often has to be weighed against other goals." sounds heretical [pollid:696]
Replies from: Jack, alex_zag_al
↑ comment by Jack · 2014-05-06T07:00:14.387Z · LW(p) · GW(p)
What is meant by heretical?
Replies from: ChristianKl, Richard_Kennaway
↑ comment by ChristianKl · 2014-05-06T19:25:56.171Z · LW(p) · GW(p)
I personally simply copied the wording in the article above and wanted to test whether the claim is true. It seems indeed to be the case that there are a bunch of people who consider the statement about "truth" more heretical than "information".
↑ comment by Richard_Kennaway · 2014-05-06T07:41:11.606Z · LW(p) · GW(p)
I don't know how ChristianKl meant it, but in general it appears to mean either (1) "this idea is so utterly false that it must be strenuously opposed every time it rears its head", or (2) "the crowd say that this idea is so utterly false that it must be opposed every time it rears its head, therefore I shall defiantly proclaim it to demonstrate my superior intellect".
The very concept of "heresy" presupposes that arguments are soldiers and disagreement is strife. "Heresy" is a call to war, not a call to truth.
Replies from: Fronken
↑ comment by alex_zag_al · 2014-05-21T19:46:40.404Z · LW(p) · GW(p)
The first and third ones, about info sometimes being worthless, just made me think of Vaniver's article on value of information calculations. So, I mean, it sounded very LessWrongy to me, very much the kind of thing you'd hear here.
The second one made me think of nuclear secrets, which made me think of HPMOR. Again, it seems like the kind of thing that this community would recognize the value of.
I think my reactions to these were biased, though, by being told how I was expected to feel about them. I always like to subvert that, and feel a little proud of myself when what I'm reading fails to describe me.
comment by alicey · 2014-05-04T23:05:16.632Z · LW(p) · GW(p)
i'm into epistemic rationality, but this all seems pretty much accurate and stuff
not sure what to conclude from having that reaction to this post.
Replies from: satt
comment by Dagon · 2014-05-05T09:36:50.026Z · LW(p) · GW(p)
Seems fairly uncontroversial to me, but that's likely because it stays far-mode. If you get specific and near-mode, I suspect you'll stir up some disagreement. Leave aside which beliefs you'd rather other people have or not have - that's a separate dark arts topic. For your own goal-achievement-ability, which true beliefs are you better off not having?
I completely agree that I have limited resources and need to prioritize which beliefs are important enough to spend resources on. I far less agree that true beliefs (in the paying rent sense of the word, those which have correct conditional probability assignments to your potential actions) ever have negative value.
Replies from: satt
↑ comment by satt · 2014-05-07T03:14:51.785Z · LW(p) · GW(p)
For your own goal-achievement-ability, which true beliefs are you better off not having?
Nick Bostrom suggests some examples in his "Information Hazards" paper.
Replies from: None
comment by fowlertm · 2014-05-04T23:05:41.134Z · LW(p) · GW(p)
Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.
Example: your boss is good at doing something, but their theoretical explanation for why it works is nonsense. Most of the time, questioning the theory is only likely to piss them off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.
Relevant post:
http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2014-05-06T10:42:32.500Z · LW(p) · GW(p)
What happens when you try to replicate what your boss is doing? For example when you decide to start your own competing company.
Then I suspect it would be useful to know the truths like "my boss always says X, but really does Y when this situation happens", so that when the situation happens, you remember to do Y instead of X. Even if for an employee, saying "you always say X, but you actually do Y" to your boss would be dangerous.
So, some truths may be good to know, while dangerous to talk about in front of people who have a negative reaction to hearing them. You may remember that "X" is the proper thing to say to your boss, and silently remember that "Y" is the thing that probably contributes to the success in the position of your boss.
Replacing your boss is not the only situation where knowing the true boss-algorithm is useful. For example, knowing the true mechanism by which your boss decides who will get a bonus and who will get fired.
comment by alex_zag_al · 2014-05-21T19:51:55.693Z · LW(p) · GW(p)
Truth is really important sometimes, but so far I've been bad about identifying when.
I know a fair bit about cognitive biases and ideal probabilistic reasoning, and I'm pretty good at applying it to scientific papers that I read or that people link through Facebook. But these applications are usually not important.
But, when it comes to my schoolwork and personal relationships, I commit the planning fallacy routinely, and make bad predictions against base rates. And I spend no time analyzing these kinds of mistakes or applying what I know about biases and probability theory.
If I really operationalized my belief that only some truths are important, I'd prioritize truths and apply my rationality knowledge to the top priorities. That would be awesome.
comment by Kurros · 2014-05-12T00:45:46.575Z · LW(p) · GW(p)
I am curious; what is the general LessWrong philosophy about what truth "is"? Personally I so far lean towards accepting an operational subjective Bayesian definition, i.e. the truth of a statement is defined only so far as we agree on some (in principle) operational procedure for determining its truth; that is we have to agree on what observations make it true or false.
For example "it will rain in Melbourne tomorrow" is true if we see it raining in Melbourne tomorrow (trivial, but also means that the truth of the statement doesn't depend on rain being "real", or just a construction of Descartes' evil demon or the matrix, or a dream, or even a hallucination). It is also a bit disturbing because the truth of "the local speed of light is a constant in all reference frames" can never be determined in such a way. We could go to something like Popper's truthlikeness, but then standard Bayesianism gets very confusing, since we then have to worry about the probability that a statement has a certain level of "truthlikeness", which is a little mysterious. Truthlikeness is nice in how it relates to the map-territory analogy though.
I am inclined to think that standard Bayesian style statements about operationally-defined things based on our "maps" makes sense, i.e. "If I go and measure how long it takes light to travel from the Earth to Mars, the result will be proportional to c" (with this being influenced by the abstraction that is general relativity), but it still remains unclear to me precisely what this means, in terms of Bayes theorem that is: i.e. the probability P("measure c" | "general relativity") implies that P("general relativity") makes sense somehow, though the operational criteria cannot be where its meaning comes from. In addition we must somehow account for that fact "general relativity" is strictly False, in the "all models are wrong" sense, so we need to somehow rejig that proposition into something that might actually be true, since it makes no sense to condition our beliefs on things we know to be false.
I suppose we might be able to imagine some kind of super-representation theorem, in the style of de-Finetti, in which we show that degrees of belief in operational statements can be represented as the model average of the predictions from all computable theories, hoping to provide an operational basis for Solomonoff induction, but actually I am still not 100% sure what de-Finetti's usual representation theorem really means. We can behave "as if" we had degrees of belief in these models weighted by some prior? Huh? Does this mean we don't really have such degrees of belief in models but they are a convenient fiction? I am very unclear on the interpretation here.
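For reference, the version of de Finetti's representation theorem being gestured at here (for an infinite exchangeable sequence of binary observations $x_1, x_2, \dots$) can be written as:

$$P(x_1, \dots, x_n) \;=\; \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1-\theta)^{1-x_i} \, d\mu(\theta) \quad \text{for every } n,$$

for a unique probability measure $\mu$ on $[0,1]$. The usual reading is exactly the "as if" one: an agent whose beliefs about the observable sequence are exchangeable behaves as if it had a prior $\mu$ over an unknown chance parameter $\theta$; whether $\theta$, or the "models" it indexes, is anything more than a convenient representation is left open, which is the interpretive puzzle raised above.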
The map-territory analogy does seem correct to me, but I find it hard to reconstruct ordinary Bayesian-style statements via this kind of thinking...
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-05-12T01:20:33.452Z · LW(p) · GW(p)
I am curious; what is the general LessWrong philosophy about what truth "is"?
To the extent that there is a general philosophy, it's http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/ but individual people might differ slightly.
Replies from: Kurros
comment by HungryHippo · 2014-05-09T10:18:59.082Z · LW(p) · GW(p)
Truth, having a one-to-one correspondence between the map and the territory[1], is only useful if you're able to accurately navigate an accurate map.
However, if, when navigating an accurate map, you still veer to the left when trying to reach your destination, you're faced with two choices: 1) Un-value truth, and use whatever map gets you to your destination no matter the relation between the "map" and the territory. 2) Terminally value truth, damn the disutility of doing so!
[1] For convenience, I assume that the territory exists. (For some definitions of existence.)
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2014-05-09T12:03:46.367Z · LW(p) · GW(p)
What would cause you to veer? Bias? Akrasia?
And what would bring you back on track? Wholesale disdain for truth? Or a careful selection of useful lies?
Replies from: HungryHippo
↑ comment by HungryHippo · 2014-05-09T19:52:24.009Z · LW(p) · GW(p)
Suppose person A is always 5 minutes late to every appointment. Someone secretly adjusts A's watch to compensate for this, and now person A is always on-time.
Now, A is being fed misinformation continuously (the watch is never correct!), and yet, judging by behavior, A is extremely competent in navigating the world.
(Since "every truth is connected", there is a problem with person B asking A for the time and so on, but suppose A uses the clock on the cellphone to synchronize the time against everyone else.)
comment by keen · 2014-05-08T14:46:00.556Z · LW(p) · GW(p)
Human brains do experience in-group reinforcement, so we ought to aim that reinforcement at something like truth-seeking, which tends to encourage meta-level discussions like this one, thus helping to immunize us against death spirals. Note that this requires additional techniques of rationality to be effective. Consider that some truths--like knowing about biases--will hurt most people.
comment by Ben Pace (Benito) · 2014-05-07T07:44:47.329Z · LW(p) · GW(p)
But that's just your opinion.
comment by TheAncientGeek · 2014-05-05T14:58:24.165Z · LW(p) · GW(p)
The claims about truth mostly looked ambiguous to me. There are differences between truth-telling, truth-preferring, and truth-seeking... which can pull in different directions. "Know all, say nowt."
comment by [deleted] · 2014-05-07T00:38:19.817Z · LW(p) · GW(p)
Truth is completely and utterly worthless, except for its being instrumentally useful for every single thing ever.
THAT IS THE TRUTH OF THIS WORLD! SUBMIT TO THAT TRUTH, YOU PIGS IN HUMAN CLOTHING!
(There, threw in some pointless signalling.)
comment by private_messaging · 2014-05-05T12:20:49.237Z · LW(p) · GW(p)
Your post seems to still assume that "rationalism" actually has something to do with seeking truth, rather than with seeking, for example, self-congratulation about truth.
To give a concrete example, there's a lot of truths to learn in physics.
A lot of those truths are in the physics textbooks and the rest require very hard work to figure out. So, when someone's interested in knowing truths that have to do with physics, they learn the math, they study physics, they end up able to answer questions from the homework section of a textbook, and sometimes, even able to answer the unanswered questions. You get a fairly reasonable island of knowledge, not a protruding rock of "MWI is true" in the vast sea of near total ignorance.