Rationality: Appreciating Cognitive Algorithms
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T09:59:38.292Z · LW · GW · Legacy · 135 comments
Followup to: The Useful Idea of Truth
It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the "Truth", especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.
Thus it is a worthwhile exercise to practice deflating the word 'true' out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, "I believe that the sky is blue, and that's true!" you can just say, "The sky is blue", which conveys essentially the same information about what color you think the sky is. Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.
Try it with these:
- I believe Jess just wants to win arguments.
- It’s true that you weren’t paying attention.
- I believe I will get better.
- In reality, teachers care a lot about students.
If 'truth' is defined by an infinite family of sentences like 'The sentence "the sky is blue" is true if and only if the sky is blue', then why would we ever need to talk about 'truth' at all?
Well, you can't deflate 'truth' out of the sentence "True beliefs are more likely to make successful experimental predictions" because it states a property of map-territory correspondences in general. You could say 'accurate maps' instead of 'true beliefs', but you would still be invoking the same concept.
It's only because most sentences containing the word 'true' are not talking about map-territory correspondences in general, that most such sentences can be deflated.
Now consider - when are you forced to use the word 'rational'?
As with the word 'true', there are very few sentences that truly need to contain the word 'rational' in them. Consider the following deflations, all of which convey essentially the same information about your own opinions:
- "It's rational to believe the sky is blue"
  -> "I think the sky is blue"
  -> "The sky is blue"
- "Rational Dieting: Why To Choose Paleo"
  -> "Why you should think the paleo diet has the best consequences for health"
  -> "I like the paleo diet"
Generally, when people bless something as 'rational', you could directly substitute the word 'optimal' with no loss of content - or in some cases the phrases 'true' or 'believed-by-me', if we're talking about a belief rather than a strategy.
Try it with these:
- "It’s rational to teach your children calculus."
- "I think this is the most rational book ever."
- "It's rational to believe in gravity."
Meditation: Under what rare circumstances can you not deflate the word 'rational' out of a sentence?
...
...
...
Reply: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).
E.g.:
"It's (epistemically) rational to believe more in hypotheses that make successful experimental predictions."
or
"Chasing sunk costs is (instrumentally) irrational."
You can't deflate the concept of rationality out of the intended meaning of those sentences. You could find some way to rephrase it without the word 'rational'; but then you'd have to use other words describing the same concept, e.g:
"If you believe more in hypotheses that make successful predictions, your map will better correspond to reality over time."
or
"If you chase sunk costs, you won't achieve your goals as well as you could otherwise."
The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.
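To make "systematically promote map-territory correspondence" a bit more concrete, here is a minimal sketch (the coin, the hypothesis set, and all the numbers are invented for illustration, not taken from the post) of an agent that believes more in hypotheses which make successful predictions, and whose map therefore comes to match the territory over repeated observations:

```python
import random

random.seed(0)
true_bias = 0.7                            # the territory: a coin that lands heads 70% of the time
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]     # candidate maps of that coin
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior over the candidate maps

for _ in range(100):
    heads = random.random() < true_bias
    # Believe more in each hypothesis in proportion to how well it predicted this flip.
    for h in hypotheses:
        belief[h] *= h if heads else (1.0 - h)
    total = sum(belief.values())
    for h in hypotheses:
        belief[h] /= total

best = max(belief, key=belief.get)
print(f"most credence on bias={best} (posterior {belief[best]:.3f}); true bias is {true_bias}")
```

The updating rule, not any particular conclusion, is the thing being called 'rational' here: run it on almost any such territory and the map improves, which is the work the word 'systematically' is doing in the definition.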
Similarly, a rationalist isn't just somebody who respects the Truth.
All too many people respect the Truth.
They respect the Truth that the U.S. government planted explosives in the World Trade Center, the Truth that the stars control human destiny (ironically, the exact reverse will be true if everything goes right), the Truth that global warming is a lie... and so it goes.
A rationalist is somebody who respects the processes of finding truth. They respect somebody who seems to be showing genuine curiosity, even if that curiosity is about a should-already-be-settled issue like whether the World Trade Center was brought down by explosives, because genuine curiosity is part of a lovable algorithm and respectable process. They respect Stuart Hameroff for trying to test whether neurons have properties conducive to quantum computing, even if this idea seems exceedingly unlikely a priori and was suggested by awful Gödelian arguments about why brains can't be mechanisms, because Hameroff was trying to test his wacky beliefs experimentally, and humanity would still be living on the savanna if 'wacky' beliefs never got tested experimentally.
Or consider the controversy over the way CSICOP (Committee for the Scientific Investigation of Claims of the Paranormal) handled the so-called Mars effect, the controversy which led founder Dennis Rawlins to leave CSICOP. Does the position of the planet Mars in the sky during your hour of birth actually have an effect on whether you'll become a famous athlete? I'll go out on a limb and say no. And if you only respect the Truth, then it doesn't matter very much whether CSICOP raised the goalposts on the astrologer Gauquelin - i.e., stated a test and then made up new reasons to reject the results after Gauquelin's result came out positive. The astrological conclusion is almost certainly un-true... and that conclusion was indeed derogated, the Truth upheld.
But a rationalist is disturbed by the claim that there were rational process violations. As a Bayesian, in a case like this you do update to a very small degree in favor of astrology, just not enough to overcome the prior odds; and you update to a larger degree that Gauquelin has inadvertently uncovered some other phenomenon that might be worth tracking down. One definitely shouldn't state a test and then ignore the results, or find new reasons the test is invalid, when the results don't come out your way. That process has bad systematic properties for finding truth - and a rationalist doesn't just appreciate the beauty of the Truth, but the beauty of the processes and cognitive algorithms that get us there.[1]
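As a rough illustration of that Bayesian bookkeeping (the prior and the likelihood ratio below are invented for the example, not taken from the Gauquelin data), a positive result that is even modestly more likely under astrology than under chance does move the posterior, just nowhere near enough to overcome the prior odds:

```python
# Hypothetical numbers, for illustration only.
prior_odds = 1 / 1_000_000    # odds of "Mars position affects athletic fame" before the study
likelihood_ratio = 20         # suppose the positive result is 20x more likely if astrology is true

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior probability of the Mars effect: {posterior_prob:.6%}")
# ~0.002% - a genuine update in favor, still overwhelmingly against astrology;
# most of the evidential weight instead goes toward "some other phenomenon or
# artifact produced this result", which is the thing worth tracking down.
```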
The reason why rationalists can have unusually productive and friendly conversations, at least when everything goes right, is not that everyone involved has a great and abiding respect for whatever they think is the True or the Optimal in any given moment. Under most everyday conditions, people who argue heatedly aren't doing so because they know the truth but disrespect it. Rationalist conversations are (potentially) more productive to the degree that everyone respects the process, and is on mostly the same page about what the process should be, thanks to all that explicit study of things like cognitive psychology and probability theory. When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
Is rationality-use necessarily tied to rationality-appreciation? I can imagine a world filled with hordes of rationality-users who were taught in school to use the Art competently, even though only very few people love the Art enough to try to advance it further; and everyone else has no particular love or interest in the Art apart from the practical results it brings. Similarly, I can imagine a competent applied mathematician who only worked at a hedge fund for the money, and had never loved math or programming or optimization in the first place - who'd been in it for the money from day one. I can imagine a competent musician who had no particular love of composition or joy in music, and who only cared for the album sales and groupies. Just because something is imaginable doesn't make it probable in real life... but then there are many children who learn to play the piano despite having no love for it; "musicians" are those who are unusually good at it, not the adequately-competent.
But for now, in this world where the Art is not yet forcibly impressed on schoolchildren nor yet explicitly rewarded in a standard way on standard career tracks, almost everyone who has any skill at rationality is the sort of person who finds the Art intriguing for its own sake. Which - perhaps unfortunately - explains quite a bit, both about rationalist communities and about the world.
[1] RationalWiki really needs to rename itself to SkepticWiki. They're very interested in kicking hell out of homeopathy, but not as a group interested in the abstract beauty of questions like "What trials should a strange new hypothesis undergo, which it will not fail if the hypothesis is true?" You can go to them and be like, "You're criticizing theory X because some people who believe in it are stupid; but many true theories have stupid believers, like how Deepak Chopra claims to be talking about quantum physics; so this is not a useful method in general for discriminating true and false theories" and they'll be like, "Ha! So what? Who cares? X is crazy!" I think it was actually RationalWiki which first observed that it and Less Wrong ought to swap names.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Firewalling the Optimal from the Rational"
Previous post: "Skill: The Map is Not the Territory"
135 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T09:47:40.476Z · LW(p) · GW(p)
The distinction between epistemic and instrumental rationality is standard.
Postmodernists/relativists have emphasized use of the word 'true' as a mere emphasis, which I admit is a common use.
I haven't particularly seen anyone pioneering a rationalist technique of trying to eliminate the word 'true' to avoid use as a mere emphasis. The deflationary theory of truth says that all uses of "truth" are deflatable - which this sequence denies; but the idea of deflating "true" out of sentences is clearly precedented, as is the Tarski-inspired algorithm for doing so.
I haven't particularly seen anything which emphasizes that the residue of trying to eliminate the word "truth" is abstraction and generalization over the behavior of map-territory correspondences.
Similarly, I haven't previously seen anyone advocate, as a rationality technique, trying to eliminate the word 'rational'; or emphasizing that the non-eliminable residue will be about cognitive algorithms. I wouldn't particularly expect to see that; this is me trying to narrow the definition a particular way that I think is useful.
Replies from: David_Gerard, TimS
↑ comment by David_Gerard · 2012-10-06T22:20:44.680Z · LW(p) · GW(p)
A "mainstream status" on LW philosophy posts is an excellent idea. Nice one.
↑ comment by TimS · 2012-10-10T19:39:16.368Z · LW(p) · GW(p)
Postmodernists/relativists have emphasized use of the word 'true' as a mere emphasis, which I admit is a common use.
More boo lights.
Postmodernists assert that object-level morality abuses the concept of truth in order to reinforce the acceptance of normative claims. You noted that some thinkers call that "the halo effect."
Kuhn and Feyerabend assert that the interpretations of evidence, and what evidence is available to interpret, are affected by social factors that a naive philosophy of science wouldn't suspect.
Common use is not better evidence of postmodern thought than folk psychology is evidence of what Kahneman thinks.
What postmodern position are you actually attacking here?
Replies from: thomblake
↑ comment by thomblake · 2012-10-10T19:50:51.036Z · LW(p) · GW(p)
What postmodern position are you actually attacking here?
You seem to have misread. Eliezer's comment was intended to point out connections between what he's talking about and "mainstream" ideas / writing. He noted in the article that "true" is sometimes used as mere emphasis, and noted here that postmodernists have made the same observation. I don't see why that would be characterized as an "attack".
Replies from: TimS
↑ comment by TimS · 2012-10-10T20:16:56.119Z · LW(p) · GW(p)
Eliezer is attacking a particular usage of the word "true." That point is well taken. Further, I appreciate his explicit linking of his thoughts into the larger philosophical debate.
But I am unaware of any philosophical movement that uses "true" the way Eliezer attacks. The sentence I quote could have made the same point (and been more accurate) if postmodernist/relativism was omitted entirely. What purpose do you think including the label had? In particular, why was the label (inaccurately) applied to a position that Eliezer just demonstrated was false?
Replies from: DaFranker, thomblake
↑ comment by DaFranker · 2012-10-10T20:27:03.999Z · LW(p) · GW(p)
From Wikipedia:
In essence, postmodernism is based on the position that reality is not mirrored in human understanding of it, but is rather constructed as the mind tries to understand its own personal reality. Postmodernism is therefore skeptical of explanations that claim to be valid for all groups, cultures, traditions, or races, and instead focuses on the relative truths of each person. In the postmodern understanding, interpretation is everything; reality only comes into being through our interpretations of what the world means to us individually.
If "postmodernists" have this opinion as stated, I suspect that when they aren't using the word "true" to attack or criticize other philosophical ideas, they would be using it as a form of emphasis on a particular interpretation, or to assert the dominance of a particular interpretation, as this interpretation then literally becomes more "true" (in their model, according to my model of their model).
Replies from: TimS
↑ comment by TimS · 2012-10-10T20:32:38.599Z · LW(p) · GW(p)
I think the next paragraph is a bit more accurate:
Postmodernism postulates that many, if not all, apparent realities are only social constructs and are therefore subject to change. It claims that there is no absolute truth and that the way people perceive the world is subjective and emphasises the role of language, power relations, and motivations in the formation of ideas and beliefs. In particular it attacks the use of sharp binary classifications such as male versus female, straight versus gay, white versus black, and imperial versus colonial; it holds realities to be plural and relative, and to be dependent on who the interested parties are and the nature of these interests. Postmodernist approaches therefore often consider the ways in which social dynamics, such as power and hierarchy, affect human conceptualizations of the world to have important effects on the way knowledge is constructed and used. Postmodernist thought often emphasizes constructivism, idealism, pluralism, relativism, and scepticism in its approaches to knowledge and understanding.
The key point of post-modernist political theory is that certain social norms are claimed to be true or universal when that is not the case. Further, binary distinctions (black/white, capitalist/proletariat) are inherently misleading, organizing the world in particular ways in order to advance particular moral agendas.
Replies from: DaFranker, Eugine_Nier
↑ comment by DaFranker · 2012-10-10T21:06:31.123Z · LW(p) · GW(p)
Thanks, I shall update towards most postmodernists being less of the extreme philosophical kind and more about practical matters like those.
Most self-titled "postmodernists" I've encountered and discussed with were more of the extreme philosophical kind - the kind that would claim ontologically basic mental entities or some other really weird postulate if asked "But where did the first 'reality' come from if there never was any objective reality for us to base our own ones on?"
Replies from: TimS
↑ comment by TimS · 2012-10-10T21:21:04.190Z · LW(p) · GW(p)
As a discipline, postmodernism seems unusually terrible at producing competent practitioners. The average academic chemist is a better scientist than the average postmodernist is as a philosopher.
That said, a lot of conventional wisdom in fields like sociology or Legal Realism has very strong postmodern flavors. Honestly, a lot of the meta-type analysis of norms is using scientific data to show what various humanities thinkers had been saying all along.
↑ comment by Eugine_Nier · 2012-10-10T23:10:03.315Z · LW(p) · GW(p)
Further, binary distinctions (black/white, capitalist/proletariat) are inherently misleading,
Some are, some aren't. Furthermore, it's impossible to say anything without using distinctions.
Replies from: TimS
↑ comment by TimS · 2012-10-11T01:49:59.156Z · LW(p) · GW(p)
Not all moral distinctions are on-off buttons. Some (most?) are sliding scales.
I don't expect king-of-postmodernism-is-nonsense and mister-I-think-postmodernism-makes-good-points to come to agreement, but I'm interested in where exactly we disagree.
Do you think some agents could gain advantage by treating a sliding-scale moral quality as discrete?
Do you think some agents could gain advantage by treating a discrete moral quality as sliding-scale?
What sort of evidence is useful in deciding whether a particular moral quality is discrete or sliding scale?
↑ comment by Eugine_Nier · 2012-10-11T04:31:35.868Z · LW(p) · GW(p)
First, binary distinctions aren't just for moral systems.
If we restrict to moral distinctions, most moral distinctions are Schelling points.
↑ comment by thomblake · 2012-10-10T20:34:40.733Z · LW(p) · GW(p)
the sentence I quote could have made the same point (and been more accurate) if postmodernist/relativism was omitted entirely.
The point of that sentence was that postmodernists/relativists have emphasized something. Removing "postmodernists/relativists" from that sentence removes the entire point of the sentence. The comment was about what mainstream folks have talked about.
In particular, why was the label (inaccurately) applied to a position that Eliezer just demonstrated was false?
It was not. The label was applied to people who noticed something that Eliezer also noticed. He did not even say that postmodernists/relativists think that is the correct use of the word "true". If anything, it was praise for postmodernists/relativists for having already covered something that Eliezer wanted to talk about.
Replies from: TimS
↑ comment by TimS · 2012-10-10T20:41:56.289Z · LW(p) · GW(p)
My mental model of Eliezer Yudkowsky is that he thinks all postmodernism is nonsense - as others have noted. If he intended to say something equivalent to "Postmodernist got this point right" then what he wrote is not how I expect he would say it. Further, the attack that I am reading into his words is a standard understanding of postmodernism in this community.
But the community seems to agree with you more than I - so I'm adjusting slightly in favor of me misreading Eliezer's intent.
Replies from: Pudlovich
comment by CronoDAS · 2012-10-06T15:24:18.638Z · LW(p) · GW(p)
Saying "I believe X" does seem to have different connotations than simply stating X; I'd be more likely to say "I believe X" when X is controversial, for example.
Replies from: GDC3, Xachariah, roystgnr, yli
↑ comment by GDC3 · 2012-10-06T18:52:28.881Z · LW(p) · GW(p)
Specifically they're different because of the pragmatic conversation rule that direct statements should be something your conversation partner will accept, in most normal conversations. You say "X" when you expect your conversation partner to say something like "oh cool, I didn't know that." You say "I believe X" when they may disagree and your arguments will come later or not at all. "It's true that X" is more complicated; one example of use would be after the proposition X has already come up in conversation as a belief and you want to state it as a fact.
A: "I hear that lots of people are saying the sky is blue." B: "The sky is blue."
The above sounds weird. (Unless you are imagining it with emphasis on "is" which is another way to put emphasis on the truth of the proposition.) "The sky is blue" is being stated without signaling its relationship to the previous conversation so it sounds like new information; A will expect some new proposition and be briefly confused; it sounds like echolalia rather than an answer.
B: "The sky really is blue.
or
B: "It's actually true that the sky is blue."
sounds better in this context.
Replies from: CronoDAS, Bruno_Coelho
↑ comment by CronoDAS · 2012-10-06T20:23:20.494Z · LW(p) · GW(p)
That's a better explanation than I could come up with.
On a completely irrelevant note, why is "the sky is blue" the standard for "obviously true fact"? The sky is black about half the time, and it's pretty common for it to be white, too.
Replies from: army1987, GDC3, BlazeOrangeDeer
↑ comment by A1987dM (army1987) · 2012-10-07T08:53:49.691Z · LW(p) · GW(p)
The sky is black about half the time
If you count navy as blue rather than as black, that happens more rarely than “half the time”. (I'd say “10% of the time” as I have that number cached in my mind as the duty cycle of fluorescence detectors for ultra-high-energy cosmic rays.) You know, the moon.
and it's pretty common for it to be white, too.
And when that happens, in places where electric lighting is widely used, it tends to become orange (not quite -- does that colour have a name?) during the night!
Replies from: pure-awesome
↑ comment by pure-awesome · 2013-04-10T20:17:45.284Z · LW(p) · GW(p)
I believe CronoDAS was referring to overcast days when they said the sky is sometimes white.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-04-10T20:29:31.963Z · LW(p) · GW(p)
Yes, I was talking about his claim that “the sky is black about half the time”; I didn't touch his claim that “it's pretty common for it to be white”.
EDIT: Okay, failed reading comprehension of my own comment.
↑ comment by GDC3 · 2012-10-07T05:13:54.836Z · LW(p) · GW(p)
When the sky is white, it's not the sky; it's clouds blocking the sky. When the sky is black it's just too dark to see the sky. At least that was my intuition before I knew that the sky wasn't some conventionally blue object. I guess it's a question of word usage whether the projective meaning of "blue", which is something like "looks blue under good lighting conditions", should still be applied when it's not caused by reflectance. Though it's not blue from all directions, is it?
Replies from: DanielLC, army1987
↑ comment by DanielLC · 2012-10-07T05:30:12.961Z · LW(p) · GW(p)
I would consider the clouds part of the sky, like the air, or the stars.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-10-07T08:58:56.999Z · LW(p) · GW(p)
I'd say “sky” is a relative concept and depends on where you are. If I was on the mountainside and had clouds below me, I still wouldn't say I'm in the sky. (But I would if I was on a plane, so it's not as simple as “anything that's above me”.)
Replies from: DanielLC, CCC
↑ comment by CCC · 2012-10-07T10:36:39.847Z · LW(p) · GW(p)
I consider anything that is contiguously attached to the planet (or moon) which I am currently on (e.g. a man on a mountaintop), or less than about two metres from the ground (e.g. a man jumping up and down) to not be in the sky. Anything further than that from ground surface, and either currently ascending or able to maintain that altitude, counts as 'in the sky'; anything further than that from ground surface and not able to maintain that altitude, counts as 'falling from the sky'.
Replies from: Kindly, DanielLC
↑ comment by Kindly · 2012-10-07T16:27:35.408Z · LW(p) · GW(p)
If I jump out of a second-floor window, I'm certainly falling, but I'm hardly falling from the sky.
Replies from: CCC
↑ comment by CCC · 2012-10-08T07:13:54.221Z · LW(p) · GW(p)
The building is contiguously attached to the ground (unless it's some sort of flying building). You need to be more than two metres away from it and falling to count as 'falling from the sky'.
For safety reasons, it's probably also better to throw an object - I'd suggest a tennis ball - if you actually want to perform an experiment. You could get it to the state 'falling from the sky' by throwing it hard enough horizontally from a fourth- or fifth-floor window, or dropping it off a bridge.
Hmmm... I may need to update my definition to consider the 'dropped-from-a-bridge' case.
↑ comment by DanielLC · 2012-10-07T17:56:19.089Z · LW(p) · GW(p)
I'd say that it has to be far enough from the ground that you wouldn't notice the parallax effect if you walked around below it, and it has to be above the horizon. Also, it can't be an airplane or something. I'm not sure why exactly that last rule is there, given that meteors and such count. Maybe most people would consider it part of the sky. I'd say it's in the sky, but not part of it.
↑ comment by A1987dM (army1987) · 2012-10-07T09:01:33.381Z · LW(p) · GW(p)
I guess it's a question of word usage whether the projective meaning of "blue" which is something like "looks blue under good lighting conditions" should still be applied when it's not caused by reflectance.
What would you call a glass absorbing red/orange/yellow light and letting the rest through?
Replies from: GDC3
↑ comment by GDC3 · 2012-10-07T19:11:34.800Z · LW(p) · GW(p)
As I understand it, the sky does let red-yellow light through. It scatters blue light and lets red light through relatively unchanged. So it looks red-yellow near the light source and blue everywhere else.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-10-07T21:17:49.900Z · LW(p) · GW(p)
Yes.
↑ comment by BlazeOrangeDeer · 2012-10-07T07:55:00.842Z · LW(p) · GW(p)
It's something that everybody has quick access to. Another version would be "things fall", which is better but also only works on a planet and with objects denser than air, for example. It would be ideal to have some unchanging reference object that we can make statements about; instead we have something that everyone has seen, so they can say "I have seen that, it was pretty much blue".
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-10-07T18:04:23.958Z · LW(p) · GW(p)
That it's hard to come up with an "obviously true fact" that is in fact true without qualifications is itself interesting.
↑ comment by Bruno_Coelho · 2012-10-07T13:53:25.493Z · LW(p) · GW(p)
With close friends this works: saying "I believe X" signals uncertainty where someone could help with available information. But in public debates, if you say "X" instead of "I believe X", people will find you more confident and secure.
Replies from: GDC3
↑ comment by GDC3 · 2012-10-07T19:14:20.023Z · LW(p) · GW(p)
You're right. I think the lesson we should take from all this complexity is to remember that the wording of a sentence is relevant to more than just its truth conditions. Language does a lot more than state facts and ask questions.
Replies from: Bruno_Coelho
↑ comment by Bruno_Coelho · 2012-10-09T02:27:45.377Z · LW(p) · GW(p)
But this brings a tradeoff: how much do you sacrifice to show security and confidence? I suppose there are people who tell the truth even in situations where this attitude will cause complications.
↑ comment by Xachariah · 2012-10-10T00:03:33.518Z · LW(p) · GW(p)
"God exists" - "I've had conversations with God; he's a good fellow."
"I believe that God exists" - "A lot of people say that God exists and I agree with them."
"I believe that I believe that God exists" - "I do see some inconsistencies about God but I go to church and I pray. Plus all my friends are Christian, that means I'm a believer, right?"
"I believe that I believe that I believe that God exists" - "I think that what it means to believe in something is an aggregate of the actions you take and the anticipations you feel. So I can have doubt at the object level but still count as believing if I respond similarly to other believers..."
"I believe that I believe that I believe that I believe that God exists" - "Okay, I need to talk to fewer rationalists."
Each 'I believe' implies a different meta level at which you're analyzing things. Kind of like confidence levels inside and outside an argument.
Replies from: alex_zag_al
↑ comment by alex_zag_al · 2012-10-11T14:57:32.838Z · LW(p) · GW(p)
Doesn't seem to me like the first "believe" you append implies a different meta level, just a different reason for believing. After all, the one who asserts "God exists" also believes God exists.
Or, maybe the way you've set it out, "I believe that God exists" is belief in belief, in which case in the next one, the extra "I believe" just indicates uncertainty.
I think that the general trend that you observed, that you tend to get more meta as you add more "I believes", may be making you miss when the words "I believe" add nothing, or just mean "probably".
Replies from: afeller08
↑ comment by afeller08 · 2012-10-12T12:26:26.742Z · LW(p) · GW(p)
I agree with Xachariah's view of semantics. I think that the first 'I believe' does imply a different meta level of belief (often associated with a different reason for believing). His example does a good job of showing how someone can drill down many levels, but the distinction in the first level might be made more clear by considering a more concretely defined belief:
"We're lost" -- "I'm you're jungle leader, and I don't have a clue where we are any more."
"I believe we're lost" -- "I'm not leading this expedition. I didn't expect to have a clue where we were going, but it doesn't seem to me like anyone else knows where we are going either."
--
"Sarah won state science fair her senior year of high school" -- "I attended the fair and witnessed her win it."
"I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and she's the best experimentalist I've ever met."
"I believe that I believe that Sarah won state science fair her senior year of high school" -- "She says she did, and I don't believe for one second that she'd make that sort of thing up. That said, she's not, so far as I can tell, particularly good at science, and it shocks me that she might somehow have been able to win."
--
"Parachuting isn't all it's cracked up to be." -- "I've gone parachuting, and frankly, I've gotten bigger adrenaline rushes playing poker."
"I don't believe parachuting's all it's cracked up to be." -- "I haven't gone parachuting. There's no way I would spend $600 for a 4 minute experience when I can't imagine that it's enough fun to justify that."
Without the 'I believe,' what I tend to be saying is, I trust the map because I drew it and I drew it carefully. With the 'I believe,' I tend to be saying I trust this map because I trust its source even though I didn't actually create it myself. In the case of the parachuting, I don't know where the map comes from, it's just the one I have.
Placing additional "I believe"s in front of a statement changes what part of the statement you have confidence in.
The statement 'I believe God exists' usually does mean that someone places confidence in eir community's ability to determine if God exists or not rather than placing confidence in the statement itself. Most of the religious people I know would say 'God exists' rather than 'I believe God exists' and most of them believe that they have directly experienced God in some way. However, most of them would say 'I believe the Bible is true' rather than 'the Bible is true' -- and when pressed for why they believe that, they tend to say something along the lines of "I cannot believe that God would allow his people to be generally wrong about something that important" or something else that asserts that their confidence is in their community's ability to determine that 'the Bible is true' rather than their confidence being in the Bible itself. I don't know if this is a very localized phenomenon or not since all of the people I've had this conversation with belong to the same community. It's how I would tend to use the word 'believe' too, but I grew up in this community, so I probably tend to use a lot of words the same way as the people in this community do.
In Xachariah's example the certainty/uncertainty is being placed on the definition of 'believe' at each step past the first one, so the way that the statement is changing is significantly different in the second and third application of 'I believe' than it is in the first. The science fair example applies the 'I believe' pretty much the same way twice.
When I say "Sarah won science fair," I'm claiming that all of the uncertainty lies in my ability to measure and accurately record the event. Her older sister is really good at science too; it's possible that I'm getting the two confused but I very strongly remember it being Sarah who won. On the other hand, I'm extremely confident that I wouldn't give myself the wrong map intentionally -- I have no reason to want to convince myself that Sarah is better at science than she actually is.
That source of uncertainty essentially vanishes when the source of my information becomes Sarah herself. I now have a new source of uncertainty though because she does have a reason to convince me that she is better at science than she actually is. However, I trust the map because it agrees with what I'd expect it to be. I'd still think she was telling the truth about this if she lied to me about other things.
In the third case, I'm once again extremely confident that Sarah won science fair. She told me she did, and she tells the truth. What she's told me does not at all agree with my expectations; I don't really place confidence in the map, I place a great deal of confidence in Sarah's ability to create an accurate map, and I place a great deal of confidence in her having given me an accurate map. The map seems preposterous to me, but I still think it's accurate, so when someone asks me if I believe that Sarah won science fair, I wince and I say "I believe that I believe that Sarah won science fair" and everyone knows what I mean. My statement isn't really "Sarah won science fair." It's "Sarah doesn't lie. Sarah says she won science fair. Therefore, Sarah won science fair." If I later find out that Sarah isn't quite as honest as I think she is, this is the first thing she's told me that I'll stop believing. Unless that happens, I'll continue to believe that she won.
↑ comment by roystgnr · 2012-10-08T15:11:17.074Z · LW(p) · GW(p)
Precisely: for some reason you're not allowed to say "I assign a 70% probability to X being true" without people looking at you funny, and even "I think X is more likely than not-X, but you shouldn't be as confident of this as you are of most things I tell you" is kind of awkward, but "I believe so" is a pretty standard idiom for expressing high-probability-which-is-still-non-negligibly-different-from-1.
If you're stuck trying to communicate in an innumerate language then you use whatever phrasing you have available.
Replies from: CronoDAS
↑ comment by yli · 2012-10-07T08:27:55.611Z · LW(p) · GW(p)
If you're willing to say "X" whenever you believe X, then if you say "I believe X" but aren't willing to say "X", your statement that you believe X is actually false. But in conversations, the rule that you're willing to say everything you believe doesn't hold.
Replies from: Pudlovich
comment by Vaniver · 2012-10-06T17:10:48.489Z · LW(p) · GW(p)
"Why you should think the paleo diet has the best consequences for health"
"I like the paleo diet"
Those look significantly different to me - someone who likes the paleo diet because it lets them eat bacon all day and is indifferent to the health consequences is very different from someone who believes the health consequences of paleo are best, but doesn't like it because they enjoy bread and beer too much.
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-10-06T22:16:31.318Z · LW(p) · GW(p)
With diets, liking it is pretty much essential to staying on it, as far as I can tell from myself and people I know. e.g. I've been on Tim Ferriss' slow-carb diet for a year and a half, and it's great, but only because I like all the food on it already and it suits me. If it didn't I'd have quit in a week. So I laughed at the bit you quote, but I'd say in practice it's not far off the mark and I laughed because it implies my first sentence.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-07T05:20:27.576Z · LW(p) · GW(p)
Liking a diet may be necessary for it to have positive health consequences, as you say, but for most people it's not sufficient. So it's probably a mistake to treat "X diet has positive health consequences" as equivalent to "I like X diet" when uttered by most people. (For example, I might be able to experimentally demonstrate the latter and demonstrate the opposite of the former for the same diet and speaker.)
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-10-07T08:30:12.114Z · LW(p) · GW(p)
I took it as literary allusion in a place where such may not have been suited to something in a literalist genre. Which may count as a mistake.
comment by A1987dM (army1987) · 2012-10-06T12:03:52.760Z · LW(p) · GW(p)
Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.
To me, the first means that I assign a probability > 50%, the latter that I assign a probability close to 1.
Replies from: ThrustVectoring, BlazeOrangeDeer, None
↑ comment by ThrustVectoring · 2012-10-06T17:13:22.165Z · LW(p) · GW(p)
That's because the "I believe" part of "I believe X" acts as a kind of socially acceptable way to back off from a statement if it turns out to be wrong. People tend to say "I believe X" when they want to be able to later admit that they were wrong about X, so that's why it's less of a probabilistic commitment.
↑ comment by BlazeOrangeDeer · 2012-10-07T07:57:42.004Z · LW(p) · GW(p)
I wouldn't say that for something at just over 50%; I'd say "will probably". An unqualified statement implies confidence.
↑ comment by [deleted] · 2012-10-06T13:09:16.928Z · LW(p) · GW(p)
Why not say "I assign 'The Democrats will win the election.' probability greater than 50%." instead?
Replies from: army1987, wuncidunci
↑ comment by A1987dM (army1987) · 2012-10-06T14:37:11.026Z · LW(p) · GW(p)
Because that may sound weird to certain people. (What about “I think Democrats are more likely than not to win the next election”?)
Replies from: ThrustVectoring
↑ comment by ThrustVectoring · 2012-10-06T14:44:09.651Z · LW(p) · GW(p)
Why not just say "The democrats are more likely than not to win the next election?"
Replies from: jimrandomh, faul_sname, army1987, None
↑ comment by jimrandomh · 2012-10-06T18:47:21.239Z · LW(p) · GW(p)
Why not just say "The democrats are more likely than not to win the next election?"
Because that's four extra syllables, and it shifts emphasis from the statement to the meta-statement about probability.
(These aren't good reasons to speak imprecisely, but it's a broadly observed fact of linguistics that people favor shorter ways of saying things when their meaning is sufficiently similar.)
↑ comment by faul_sname · 2012-10-06T17:49:38.424Z · LW(p) · GW(p)
Because next-election-winningness is not an attribute of the democrats, it's an attribute of your mental model of the democrats.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-07T05:15:02.247Z · LW(p) · GW(p)
I'm not quite sure what you even mean by this comment.
I have a mental model in which the Democrats won the 2008 U.S. Presidential election.
Is that election-winningness, on your view, an attribute of the Democrats? Or of my mental model of Democrats? Or both?
↑ comment by faul_sname · 2012-10-07T06:02:51.750Z · LW(p) · GW(p)
I phrased badly: that should have been "likelihood-of-democrats-winning-next-election" corresponds to your mental model of the democrats, not the democrats themselves. The democrats will either win or they won't, but if you don't know which you'll say "I think/believe the democrats will win the next election". Since the democrats actually did win the 2008 election, your mental model does correspond to the real world, so it doesn't matter whether you're referring to your mental model or the real world. Since you have less confidence in your mental model of future democratic performance, it makes sense to use different phrases for each ("I believe the democrats will win the next election" feels different than "The democrats won the last election").
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-07T06:18:36.109Z · LW(p) · GW(p)
Well, I certainly agree that it makes sense to use different phrases to indicate different levels of confidence in an assertion, and I agree that the distinction between "X" and "I believe X" is often used this way.
↑ comment by A1987dM (army1987) · 2012-10-07T09:10:51.724Z · LW(p) · GW(p)
I'd say “I think” if I only have poor knowledge of the facts, have to heavily rely on my priors and my intuition, and hence I could easily shift my probability assignment (and narrow what E.T. Jaynes calls my Ap distribution) by (e.g.) looking stuff up, if I could be bothered to. I'd omit it if I already had as much relevant information as I could reasonably gather, and so I don't expect my probability assignment to shift or my Ap distribution to narrow in the near future.
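A rough sketch of that narrowing, using a Beta distribution as a stand-in for Jaynes's Ap distribution (the counts below are invented for illustration): before looking anything up you might hold a broad distribution over what probability you'd assign; after gathering evidence the distribution narrows even if its mean barely moves.

```python
from math import sqrt

def beta_mean_sd(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Relying mostly on priors and intuition: a wide distribution over the probability.
print(beta_mean_sd(2, 2))    # mean 0.5, sd ~0.22 - "I think"
# After looking the facts up: same mean, much narrower distribution.
print(beta_mean_sd(40, 40))  # mean 0.5, sd ~0.06 - drop the "I think"
```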
↑ comment by wuncidunci · 2012-10-06T13:48:22.841Z · LW(p) · GW(p)
I think the main issue about language is the question of who you're talking to. If you're speaking to a friend with a very weak grasp of Rationality and Probability, that sentence will not make sense, and will be needlessly convoluted.
To me it looks like Eliezer is trying to set up a new standard (perhaps just for this sequence) about when and how we are allowed to use the loaded words 'truth' and 'rationality'. So it doesn't make sense to try to apply this to every single conversation (especially outside of Less Wrong).
comment by Wei Dai (Wei_Dai) · 2012-10-06T11:50:26.434Z · LW(p) · GW(p)
"It's rational to believe the sky is blue" -> "I think the sky is blue" -> "The sky is blue"
These sentences have different truth conditions (sets of possible worlds in which they are true). For example, in a possible world where aliens have changed the color of the sky to green and installed filters into everyone's optic nerves, "I think the sky is blue" and "It's rational to believe (in the sense of assigning the most probability to) the sky is blue" are true but "the sky is blue" is false. In a possible world where I irrationally believe the sky is blue, "I think the sky is blue" is true but "It's rational to believe the sky is blue" is false.
I think I should pick the sentence that best matches the probability distribution over possible worlds that I have. For example if I'm pretty sure that it's rational to believe the sky is blue but not highly certain the sky really is blue, I might want to say "It's rational to believe the sky is blue". If I'm not sure about either, "I think the sky is blue" would be best, etc.
Replies from: philh, thomblake
↑ comment by philh · 2012-10-06T15:26:35.486Z · LW(p) · GW(p)
Denotatively: in your two hypothetical worlds, one of the statements may be false but all three are presenting essentially the same information, which is "I think the sky is blue". You're unlikely to say "I think the sky is blue but the sky is green", or "it is rational to believe the sky is blue but I think the sky is green".
Connotatively: I do think there's a connotative difference between the statements. "I think the sky is blue" assigns less probability to a blue sky than "the sky is blue" does; and "it is rational to believe X" could mean something like "I ought to disbelieve in ghosts, but I'll still run screaming from a supposedly-haunted building", or "it is rational (for children) to believe in God (because they don't have any other explanation for religion)", or "the current best hypothesis is that the Higgs boson exists, but we've got an LHC to run before we can collect actual data".
Replies from: TheOtherDave, Eliezer_Yudkowsky
↑ comment by TheOtherDave · 2012-10-06T15:34:45.030Z · LW(p) · GW(p)
You're unlikely to say "I think the sky is blue but the sky is green", or "it is rational to believe the sky is blue but I think the sky is green".
And yet, I am not-infrequently in a state where saying "I think I'm going to do really badly on this project, but the truth is I probably won't" seems to make perfect sense to me, as does "it is rational to believe I'll do well on this project but the truth is I think I won't."
This does not surprise me too much, as I don't expect my brain to be internally consistent, and I consider all thoughts it thinks my thoughts because the alternative seems more dissociative than necessary. Depression and anxiety frequently cause me to think things that contradict my rationally endorsed beliefs.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-10-06T20:17:45.718Z · LW(p) · GW(p)
I don't expect my brain to be internally consistent, and I consider all thoughts it thinks my thoughts because the alternative seems more dissociative than necessary
What does it mean to consider these thoughts "your" thoughts, what does this ownership signify? Your brain produced them; what else is there to say? Endorsement of correctness doesn't need to relate to personal identity.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-06T22:09:57.826Z · LW(p) · GW(p)
I'm puzzled by the question.
What it means to consider these thoughts my thoughts is more or less the same thing that it means to consider these fingers I'm typing with my fingers, or the words I've typed my words. I assume you're not asking me to taboo the general concept of ownership here, though I'll try to if you are, and I don't think I'm using it in an unusual way.
But I'm not quite sure what you are asking.
Can you clarify?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-10-06T22:23:30.261Z · LW(p) · GW(p)
You said, "I consider all thoughts [my brain] thinks my thoughts because the alternative seems more dissociative than necessary". In this statement, you seem to be comparing the position where you consider all thoughts "your thoughts" with the position where you only consider some of the thoughts "your thoughts". In the latter case, you might, for example, declare incorrect aliefs "not part of you".
My point is that I'm not sure there should be much of a distinction between the concept of endorsing certain thoughts (e.g. for correctness, or for expressing certain values), and the concept of ownership over them. More specifically I'm suggesting that it might be a good idea to get rid of the concept of ownership over thoughts (where it's selective, so that not every thought your brain thinks is seen as "your own"), and only use the concept of endorsement, so that the question of relating endorsed thoughts with owned thoughts would become trivial/meaningless.
(The idea of endorsement generalizes better to weird situations, as you may endorse something an algorithm running on your computer suggested, something other people think, implementation of a social norm, or something an AI does. It seems that it's more accurate to treat such things as "part of you" than not, in considerations that would normally make use of the concept of "part of you", but the concept of "part of you" as it's normally used fits them worse than the concept of "endorsement", thus the latter is more useful, and the former is potentially misleading, drawing attention away from such generalizations.)
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-06T22:45:34.660Z · LW(p) · GW(p)
Ah, I see. Thanks for the clarification.
So, in that context, what it signifies to consider as mine thoughts I don't endorse is that I consider myself more than just the subset of my brain that thinks thoughts I endorse.
So, why do I do that, rather than only model myself as the subset? Hm. No particularly good reason, I suppose... I mean, I could model myself as just the subset, and treat the thoughts thought by the brain in which I reside as belonging to someone else, or to noone at all. It would take some training, but I expect it's possible. It's not something I've done, but neither is it something I've explicitly rejected.
Do you recommend it?
What benefits ought I expect from doing that?
Edit: I should say explicitly that if your answer is "the same benefits EY lists in the posts I linked to," that's fine; I just didn't want to treat his thoughts as yours.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2012-10-06T22:59:05.331Z · LW(p) · GW(p)
I mean, I could model myself as just the subset, and treat the thoughts thought by the brain in which I reside as belonging to someone else, or to noone at all.
I'm not sure the grandparent clarified the argument then.
Does it mean anything to declare that certain thoughts are "part of you", apart from your endorsement of those thoughts? In what way is modeling a thought as "your own" different from modeling it as "someone else's"? One should model what's possible about one's whole psychology, and in characterizing that activity I don't understand in what way "modeling as myself" is distinct from just "modeling".
You say that you consider more than those thoughts that are endorsed as "part of you". I don't understand what this is intended to mean, what is the difference between drawing the boundary of the concept "part of me" in one way vs. the other, and what would it mean to retrain yourself to change this boundary. I expect the valid use of the concept of "part of me" derives mostly from the concept of endorsement, and I'm not sure what the useful distinction might be (there are actual distinctions in connotations, the question is whether they have any role to play).
(I guess I am asking to taboo the concept of ownership, as applied to thinking.)
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-07T02:22:08.368Z · LW(p) · GW(p)
I guess I am asking to taboo the concept of ownership, as applied to thinking.
Well... OK.
This is tricky to do in abstract terms without becoming entirely meaningless, so let's take a step back here and see if a more concrete example helps establish a shared useful framework.
In that vein: what does it mean to say that these fingers I'm typing with are mine, rather than to say they aren't mine?
Well of course there are lots of ways I own my fingers, but in the sense I think we mean here: roughly speaking, it means that when I form impulses to perform certain tasks with my fingers, those are the fingers that perform the tasks, not some other fingers. When those fingers interact with the external world, my mind receives tactile input, not some other mind. And various other facts along those lines. More broadly, it means that these fingers interact with my intentions and my perceptions in various specific ways.
If that stopped being true, I might still refer to them as my fingers, but I wouldn't mean quite the same thing by doing so. And if it started being true of other fingers that it currently isn't true of (e.g., fingers on a prosthetic arm connected to my nervous system) I would probably start referring to those fingers as mine in the same sense. (And if it started being true of arbitrary fingers in unpredictable ways, I would probably eventually discard the concept as useless... no fingers would be especially mine, and all fingers might be mine, and it would just be a silly thing to talk about.)
All of which is so banal as to not be worth saying, but perhaps dropping down to the incredibly banal is a useful place to start, since we seem to be missing each other when we get too abstract.
So, OK, does that align with your understanding of ownership as it applies to fingers in this context? (Of course, it is possible to own fingers in many other ways, but that's what I usually mean when I talk about my fingers.)
Assuming it does... I would say that when I describe certain thoughts as mine, I mean something similar. When I experience the physical symptoms of anxiety, those are the associated anxious thoughts I experience -- not anxious thoughts in some other brain. When I form the desire to remember my grandmother's first name, the subsequent thought of my grandmother's first name is my thought, not someone else's. And so forth.
And, much as with fingers, this seems utterly banal and uninteresting. They are my fingers/thoughts, which labels a certain way of interacting causally with those fingers/thoughts as opposed to other fingers/thoughts.
Is that thing which I just described what you understand "my thought" to denote?
Is it something you expect derives from the concept of endorsement?
If so, can you explain how it does so in your view?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T18:45:49.123Z · LW(p) · GW(p)
I shall concede that the sentence, "It's rational for X to believe Y, but really Z" can sometimes make sense - it says that you have different evidence from X. In most cases, though, this will underestimate the power of rationality and ask too little of X. (The last time I can remember saying anything like this was in a strictly fictional context, Chapter 20.)
Replies from: evand
↑ comment by evand · 2012-10-07T04:00:28.085Z · LW(p) · GW(p)
Isn't that exactly the question we often ask juries to consider in, for example, liability lawsuits? "It was rational for the defendant to assume (or conclude under the circumstances) X, but in fact not-X, because of Y or because they got really unlucky, and therefore we find them not liable."
I will happily concede that juries do a poor job accounting for hindsight bias, and hold defendants to low standards of rationality, but it seems to me that the usual question is something like "even though not-X, was it rational to believe X?"
Whether "reasonable" and "rational" really mean the same thing in this case is open, but I submit that it is the same question as whether juries hold defendants to reasonable standards of rationality.
↑ comment by thomblake · 2012-10-08T18:34:46.862Z · LW(p) · GW(p)
In a possible world where I irrationally believe the sky is blue, "I think the sky is blue" is true but "It's rational to believe the sky is blue" is false.
But in that possible world, you could just as well say "I think the sky is blue" and "It's rational to believe the sky is blue" and "The sky is blue". You shouldn't find yourself in the situation of saying "I believe that the sky is blue, but the sky is not actually blue".
comment by Wei Dai (Wei_Dai) · 2012-10-07T00:16:04.481Z · LW(p) · GW(p)
I use "I think X" to indicate more uncertainty than just "X" all the time, and so does Eliezer. I just checked his recent comments, and on the first page, he used "I think" two times to indicate uncertainty, and one time to indicate that others may not share his belief. The statement "Consider the following deflations, all of which convey essentially the same information about your own opinions" just seems plain wrong.
Replies from: robert-miles
↑ comment by Robert Miles (robert-miles) · 2012-10-08T11:57:42.296Z · LW(p) · GW(p)
Agreed. The use of "I think" relies on its connotations, which are different from its denotation. When you say "I think X", you're not actually expressing the same sentiment as a direct literal reading of the text suggests.
comment by David_Gerard · 2012-10-06T22:13:22.205Z · LW(p) · GW(p)
SkepticWiki or something like it would be a much better name for RationalWiki, but it's probably too late. The name is a bit of a historical accident. SkepticWiki was already taken ... though they've now given up, and skepticwiki.org redirects to RW. Ah well.
(RW has now reached the stage where its popularity is sending it broke. Also, they need a new sysadmin. And I just volunteered, Dawkins help me.)
Of course, most of RW is shit. But the good bits, they're lovely. (The LW article is only a bit shit.)
comment by Vladimir_Nesov · 2012-10-06T20:35:30.877Z · LW(p) · GW(p)
In an argument, a good rule of thumb is to assert only statements which your interlocutor is expected to mostly accept (as GDC3 reminds us in another comment). If the statement X won't be accepted, the statement "I believe X" may well be accepted, provided you are not expected to lie (or be significantly mistaken) about your beliefs. It's a statement different from X that provides weak evidence about X, and draws attention to X, prompting the listener to think about the possibility of X in greater detail, perhaps raising its probability from obscure to plausible as a result.
Thus, stating "I believe X" is similar to stating "X is somewhat plausible", in that both communicate weak evidence about X and draw attention to X, allowing the listener to notice its greater plausibility through inference. But stating "X is somewhat plausible" would be a misrepresentation of your understanding of the world if you in fact believe that X is likely (you don't believe that it's only "somewhat plausible"), and stating "X is likely" breaks the rule of only asserting statements that will be accepted. Therefore, in this case the best choice is to state "I believe X", and not "X" or "X is somewhat plausible".
comment by [deleted] · 2012-10-07T00:01:56.410Z · LW(p) · GW(p)
We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality). [emphasis added]
Taboo "systematically."
Replies from: Eliezer_Yudkowsky, thomblake↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-07T02:27:57.414Z · LW(p) · GW(p)
Rather than talking only about a single case where you might be tempted to conclude that X is rational because it leads to your desired conclusion that global warming is false, or true, or whatever, one can explain why this works (tends probabilistically to work) in the general case, given the general sort of universe we live in - that's "systematically".
Replies from: None↑ comment by [deleted] · 2012-10-07T23:10:24.651Z · LW(p) · GW(p)
Yes, when we talk about something having systematic effects, we mean it tends probabilistically to have those effects. But this is cheating on the taboo: you are merely substituting another term of art. "Tends probabilistically" is no clearer than "tends systematically."
Probabilities pertain to degrees of belief, not to states of the world. To think otherwise is to commit a mind projection fallacy. On this we agree. So, how can a probability describe a systematic tendency in the universe?
Replies from: Vaniver, None, curiousepic↑ comment by Vaniver · 2012-10-07T23:26:43.409Z · LW(p) · GW(p)
Yes, when we talk about something having systematic effects, we mean it tends probabilistically to have those effects.
Typically, when I want to discuss probabilistic relations I'll use a probabilistic word, like "probably" or "tends to" or "correlates with." When I use the word "systematically," I typically want to imply a causal relationship. Taking Eliezer's old example, if I put a pebble in the bucket when a sheep leaves the fold and take a pebble out of the bucket when a sheep returns to the fold, I've created a causal system, which will have the systematic effect of letting me know how many sheep are outside the fold by checking the level of the bucket. Whether the system is deterministic or stochastic doesn't make much difference for how I think about the graph connecting the nodes, though it will change the underlying mathematics.
Now, I'll note my answer is very different from Eliezer's, and I suspect that's because "tends probabilistically to" is a simpler concept than a causal system; I might be trying to explain addition using multiplication.
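For readers who like to see the moving parts, here is a minimal Python sketch of the pebble-and-bucket system Vaniver describes; the class and method names are my own illustrative choices, not anything from the comment:

```python
# A toy version of the pebble-and-bucket system (illustrative sketch only).
# The bucket's pebble count is causally coupled to the number of sheep outside
# the fold, so reading the bucket level tells you how many sheep are out.

class PebbleBucket:
    def __init__(self):
        self.pebbles = 0

    def sheep_leaves(self):
        # a sheep leaves the fold -> put a pebble in the bucket
        self.pebbles += 1

    def sheep_returns(self):
        # a sheep returns to the fold -> take a pebble out
        self.pebbles -= 1

    def sheep_outside(self):
        # checking the bucket level systematically tracks the sheep outside
        return self.pebbles

bucket = PebbleBucket()
bucket.sheep_leaves()
bucket.sheep_leaves()
bucket.sheep_returns()
print(bucket.sheep_outside())  # 1 sheep still outside the fold
```

Whether the sheep movements are deterministic or stochastic, the causal graph connecting sheep to pebbles stays the same; only the mathematics layered on top changes.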
↑ comment by [deleted] · 2012-10-08T16:29:10.835Z · LW(p) · GW(p)
How about "if you try this many times, it will usually work." I'm not sure you can taboo 'usually' (or 'systematically'). It seems to be one way to invoke a rather fundamental-seeming process of abstracting from specific cases to general categories about which you can then form summarizing beliefs. If you ask someone to taboo something too basic, the best they can do is to rephrase and hope you'll get what basic thing they were referring to.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-10-08T21:42:38.704Z · LW(p) · GW(p)
What does it mean to try the same thing many times?
Replies from: khafra↑ comment by khafra · 2012-10-15T17:20:02.587Z · LW(p) · GW(p)
It's a little tautological: by whatever method of counting things together you've worked out, you count certain things together, and that number is the denominator of your probability; then you count a subset of those things together, and that's the numerator. It's so tautological, given the definition of probability, that it might not count as "tabooing probability." But it seems worth pointing out anyway.
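To make that counting picture concrete, here is a minimal sketch (the grouping and the numbers are mine, purely illustrative); note that dividing raw counts like this implicitly treats every counted thing as equally weighted, which is the objection raised below:

```python
# The counting picture: group outcomes however you've decided to group them,
# count the whole group (denominator), then count the subset of interest (numerator).
things = ["heads", "heads", "tails", "tails", "tails", "edge"]  # an illustrative grouping

def counted_probability(label, things):
    # ratio of things counted under `label` to all things counted
    return things.count(label) / len(things)

print(counted_probability("tails", things))  # 0.5 under this particular grouping
```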
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-10-16T03:24:51.583Z · LW(p) · GW(p)
First, I assume you meant to reply to some other comment.
Furthermore, your description doesn't really work as a definition of probability, since it implicitly assumes all the things are equally probable.
Replies from: khafra↑ comment by khafra · 2012-10-16T11:17:25.553Z · LW(p) · GW(p)
I'm confused about your assumption.
You're right that I didn't clearly describe probability, though; I needed to make it clear that in the denominator you must count everything, however you group it.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-10-17T03:57:55.906Z · LW(p) · GW(p)
When I flip a coin, it can land on heads, tails, or edge; however, the probability that it lands on edge is not 1/3.
Replies from: khafra↑ comment by khafra · 2012-10-17T12:09:41.537Z · LW(p) · GW(p)
Yes; to count everything that can occur when you flip an actual, physical coin, you must first invent the universe. It could also be swallowed by a passing bird, which then blunders into a metal foundry and is built into a new space probe, never landing at all. As a human, you just happen to count a huge number of outcomes together under "heads," a huge number of outcomes together under "tails," and a somewhat smaller number of outcomes together under "edge."
Replies from: wedrifid, Kindly, Eugine_Nier↑ comment by wedrifid · 2012-10-17T12:26:39.251Z · LW(p) · GW(p)
Yes; to count everything that can occur when you flip an actual, physical coin, you must first invent the universe.
In fact, it may be more than merely our universe. The probability assignment actually incorporates doubt about what the precise details of the physics of our universe are. So you may need to invent Kolmogorov complexity and Tegmark's Ultimate Ensemble before you get to the serious counting.
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2012-10-18T05:33:26.260Z · LW(p) · GW(p)
Even that isn't enough since it doesn't incorporate our uncertainty about mathematics.
↑ comment by Eugine_Nier · 2012-10-18T05:32:23.611Z · LW(p) · GW(p)
The problem is that "everything" contains infinitely many possibilities, so putting the number of possibilities in the denominator to calculate the probability doesn't work.
↑ comment by curiousepic · 2012-10-10T19:22:52.258Z · LW(p) · GW(p)
Whenever I see semantic dissection in major posts, I always worry that language is just too messy, just a towering stack of cards, and wonder why there isn't more discussion of why we use English when the language doesn't seem optimal for science and seeking Truth. Obviously it's rational for those who already speak English to continue using it for lack of an immediately available, preferable alternative, but I don't see much discussion of this fact, or argument about whether we should start over, etc. Admittedly, I haven't yet grokked the "ways words can be wrong" sequence.
My previous post on the topic.
Help me get over my linguistic-existential dread by refuting (or accepting) the statement "To develop optimal rationality skills, the first step should be to redesign our linguistic operating system."
Replies from: Richard_Kennaway, Alejandro1, TheOtherDave↑ comment by Richard_Kennaway · 2012-10-10T23:56:37.035Z · LW(p) · GW(p)
Back in the 17th century, several people had a go at redesigning the whole thing, motivated by the flood of new knowledge coming from scientific investigations and the great exploratory voyages, and a felt inadequacy of the language of the time for expressing it. The languages they designed never came into use, although a direct intellectual line can be traced from there down to the formalisation of mathematical logic and the development of the first computers.
But introducing a whole new language is much harder than introducing a new keyboard layout, and how far has Dvorak got? Nobody will bother except for a few geeks. Qapla'!
Instead, there have been useful suggestions in various sources for small, local tools that one can simply pick up and use. Here's a list of the ones that occur to me. The first three are from Korzybski; 4 is also from General Semantics (on which there's a thread here), though invented by David Bourland; 6 and 7 are my own observations; and 5 and 8 are easily googleable for more information.
1. Subscripting, to draw attention to the fact that Fred(2010) is not Fred(2012), Freda(@home) is not Freda(@work), and Genghis(drunk) is not Genghis(sober).
2. Liberal use of the word "etc." to remind one that one never knows all about an object.
3. Avoidance of elementalistic divisions between things that are not divisible ("mind" vs "body", "reason" vs "emotion", "space" vs "time", etc.), and their replacement by non-elementalistic language.
4. Try writing in E-Prime: English with all forms of the verb "to be" excluded. (A toy checker appears in the sketch after this list.) And one must do a proper job of tabooing the verb, not merely making a rote replacement by other words without re-examining the thought. One need not write exclusively in E-Prime, but the exercise will train one to look carefully at this troublesome word.
5. Learn Loglan or Lojban, not necessarily to use as a language, but as a linguistic exercise, for the different structure it has, based on mathematical logic. (Learn mathematical logic at the same time, if you don't know it already.) I think Korzybski would approve, had he lived to see it. The originator, James Cooke Brown, saw Loglan as also a descendant of the old philosophical languages.
6. Avoid the word "really" and all its synonyms: "actually", "fundamentally", "essentially", etc. Often, they're an attempt to push reality away by saying "X is really Y" when the simple fact is that X is not Y. ("Fundamentally, there's a dragon in my garage.")
7. Take a hard look at the word "the" now and then. Make sure that you are not attempting a magic spell, trying to conjure something into existence by prefixing a noun phrase with "the".
8. Write short stories in 100 words. (Google the word "drabble".) I've been doing one of these a week for nearly a year now. It's quite fascinating to find how a first draft twice as long can be shrunk without losing anything. Ordinary language begins to seem absurdly padded. You don't necessarily want to perform liposuction on everything you write, but knowing it is possible keeps one asking, "Are these words doing any work?" If you're going to write at length, with a lot of repetition, circling around and around the subject, saying everything several times over in different ways, and with a lot of repetition, better to do it deliberately, for some definite reason.
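As a companion to item 4, here is a toy E-Prime checker; this is purely my own illustrative sketch, not something from the comment, and the word list is deliberately minimal:

```python
import re

# Flag explicit forms of the verb "to be" so the writer can re-examine each one.
# Contractions such as "there's" or "it's" would need extra handling.
TO_BE = re.compile(r"\b(am|is|are|was|were|be|been|being)\b", re.IGNORECASE)

def flag_to_be(text):
    # return every flagged form, in order of appearance
    return TO_BE.findall(text)

print(flag_to_be("The map is not the territory, but it was drawn from it."))  # ['is', 'was']
```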
↑ comment by Alejandro1 · 2012-10-11T00:07:21.409Z · LW(p) · GW(p)
As RichardKennaway says, it has been tried before, and never worked. You might be interested in Umberto Eco's book "Search for the Perfect Language".
↑ comment by TheOtherDave · 2012-10-10T19:48:51.610Z · LW(p) · GW(p)
Well, one place to start is to stop conflating "our linguistic operating system" with the languages we speak.
The former is a cognitive structure which all languages intelligible by humans have in common. Redesigning that might very well be a valuable step, but it's way outside our current capabilities, and is unlikely to be a first step (or even a tenth or a hundredth step).
But, OK, fine then, should we redesign the languages we speak?
I'm inclined to doubt it. What I expect happens once a large number of people speak the language is that the actual spoken language gets creolized and that it's just as easy to express fallacies in it as in any other human language.
That said, speaking a particular language might be valuable in a sort of ritual sense... as a way of reminding ourselves that we are "speaking as rationalists," and should therefore strive for more precision and clarity and truth-preservation than we do in our ordinary lives.
That said, there's already a lot of site jargon that serves that purpose quite well, building on an English frame.
So on balance, I'm inclined to reject the statement.
↑ comment by thomblake · 2012-10-08T18:42:17.464Z · LW(p) · GW(p)
Taboo is useful when you notice two people arguing over an equivocation, to get them to stop doing that. I'm not sure what the use is when there wasn't already confusion about the word "systematically".
Replies from: None↑ comment by [deleted] · 2012-10-08T19:09:14.890Z · LW(p) · GW(p)
If it's useful when people argue about an equivocation, it should be useful when there simply is an equivocation. Here, it would be easier to expose the equivocation if someone tried to spell out what "systematic" means in this context; the underlying problem is what the concept of probability means when you try to apply it to the usefulness of an algorithm.
The equivocation in question is between recognizing that an algorithm's effectiveness depends on the concrete particulars of a given problem and recognizing that an algorithm must be reliable if it is to be used to support knowledge claims. This would have been easier to show if someone had made a serious attempt to unpack "systematically" (or "tends probabilistically"), which really does all the work in this account.
Replies from: thomblake↑ comment by thomblake · 2012-10-08T19:34:54.585Z · LW(p) · GW(p)
if someone made a serious attempt to unpack "systematically"
Since you seem to understand that there's an equivocation, wouldn't it be easier to just state up front what the two different meanings are supposed to be?
I'm still not sure what you're trying to point out here. Can you be more explicit/specific?
comment by David_Gerard · 2012-10-06T22:21:45.183Z · LW(p) · GW(p)
Other possibly useful reference: the Wikipedia saying "verifiability not truth." The idea being that Wikipedia is written by mere humans without direct access to cosmic truth, so what's verifiable is all we have to go on. (The details of Wikipedia's epistemology can get a bit stupid at the edges, but the point stands.)
comment by TraderJoe · 2012-10-09T06:55:16.841Z · LW(p) · GW(p)
I often add "I believe" to sentences to clarify that I am not certain.
"Did you feed the dog?" "Yes"
and
"Did you feed the dog?" "I believe so"
have different meanings to me. I parse the first as "I am highly confident that I fed the dog" and the second as "I am unable to remember for sure whether I fed the dog, but I am >50% confident I did so."
Replies from: graviton↑ comment by graviton · 2012-10-16T21:14:15.130Z · LW(p) · GW(p)
It always seems to me that any little disclaimer about my degree of certainty seems to disproportionately skew the way others interpret my statements.
For instance, if I'm 90% sure of something, and carefully state it in a way that illustrates my level of confidence (as distinct from 100%), people seem to react as if I'm substantially less than 90% confident. In other words, any acknowledgement of less-than-100%-confidence seems to be interpreted as not-very-confident-at-all.
Replies from: buybuydandavis, shminux↑ comment by buybuydandavis · 2013-01-07T06:48:40.014Z · LW(p) · GW(p)
I find a similar effect. It looks to me like most people state probabilities even higher than their own (already overconfident) internal estimates.
So when they say P = C, their internal estimate is more like C(1-delta), while the long-run frequency of being right when they say P = C is more like C(1-delta)(1-gamma).
So when you state a probability, listeners downgrade what you say by (1-delta).
Kind of a Gresham's law for probabilistic predictions - overconfident predictions drive out appropriately confident predictions.
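For concreteness, a hedged numerical reading of that chain (the specific values are hypothetical, not anything stated above):

```python
# Illustrative numbers for the stated-vs-internal-vs-actual discounting chain.
C = 0.9        # probability the typical speaker states out loud
delta = 0.1    # overstatement: stated probability vs. internal estimate
gamma = 0.1    # overconfidence: internal estimate vs. long-run frequency

internal_estimate = C * (1 - delta)                   # 0.81
long_run_frequency = C * (1 - delta) * (1 - gamma)    # roughly 0.73

# A listener calibrated to ordinary speakers discounts a stated 0.9 to about 0.81,
# even when this particular speaker stated exactly the probability they meant.
print(internal_estimate, long_run_frequency)
```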
↑ comment by Shmi (shminux) · 2012-10-16T21:29:44.158Z · LW(p) · GW(p)
It always seems to me that any little disclaimer about my degree of certainty seems to disproportionately skew the way others interpret my statements.
Evolution is just a theory!
comment by Wei Dai (Wei_Dai) · 2012-10-07T05:03:30.063Z · LW(p) · GW(p)
When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
Having someone who would occasionally point out deficiencies in one's rational processes sounds awesome. Do you think it is possible for LWers to perform this service for each other on this forum, or does it require much closer interactions and/or intimate knowledge?
(It seems like the real meat of this post is in the second half, but a lot of people, including myself, got distracted by problems in the first half.)
Replies from: Vaniver, TheOtherDave, gwern, fortyeridania, Peterdjones↑ comment by Vaniver · 2012-10-07T23:28:54.954Z · LW(p) · GW(p)
Do you think it is possible for LWers to perform this service for each other on this forum, or does it require much closer interactions and/or intimate knowledge?
So, one of the easiest ways to detect curiosity is to notice things like posture and demeanor- which seems difficult to do over a text-based channel! I have noticed that online comments telling me "I think you're suffering from bias X" have seemed more like arguments than observations, whereas similar statements in person can be more like observations than arguments.
↑ comment by TheOtherDave · 2012-10-07T05:28:10.984Z · LW(p) · GW(p)
There are people on this site whose thinking I respect enough that, were they to say something of this sort to me, I would at least acknowledge that I ought to re-evaluate the process that got me to where I am, despite their having not much intimate knowledge about me. (There are also people in my real life who have that property.)
Whether I would actually do it is a much more complicated and contingent question.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2012-10-07T06:52:24.381Z · LW(p) · GW(p)
I was asking more about the sending side of the advice rather than the receiving side. How do I debug someone else's rationality processes using just the information I can get from their posts and comments on LW? (Assuming they are not a newbie with really obvious flaws, but closer to Eliezer's level.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-10-07T06:58:38.847Z · LW(p) · GW(p)
Hm.
Are you asking "how could I tell that someone else isn't being rational?" or "how could I communicate to someone else that they aren't being rational in such a way that they'd benefit from it?" or something else?
↑ comment by Wei Dai (Wei_Dai) · 2012-10-07T20:40:56.717Z · LW(p) · GW(p)
Are you asking "how could I tell that someone else isn't being rational?" or "how could I communicate to someone else that they aren't being rational in such a way that they'd benefit from it?" or something else?
Something else: I can sometimes tell that someone else on LW isn't being rational but can't see which part of their rationality process is broken, or not sufficiently activated. (Communicating this to them may also be a problem but wasn't the one I specifically had in mind.) I'm wondering if Eliezer thinks it is possible to do this over LW. Perhaps others have better skills for this than I do, or we should just try harder?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-10-08T01:52:34.879Z · LW(p) · GW(p)
Ah, gotcha. Yes, that makes sense; thanks for clarifying.
↑ comment by gwern · 2012-10-08T00:37:09.527Z · LW(p) · GW(p)
It's been suggested: http://lesswrong.com/lw/6j1/find_yourself_a_worthy_opponent_a_chavruta/
↑ comment by fortyeridania · 2012-11-01T16:26:49.465Z · LW(p) · GW(p)
Do you think it is possible for LWers to perform this service for each other on this forum, or does it require much closer interactions and/or intimate knowledge?
This could have perverse consequences, because "You don't seem very curious about this" seems like a criticism.
In my case anyway, having an irrationality-cop would have two effects. First, it would motivate me to avoid the criticism by being more rational (like it does for Eliezer). Second, it would motivate me to avoid the criticism by hiding my irrationality better. The latter effect would be bad, because then both the cop and I would overestimate my level of rationality. (Why would I, too, overestimate it? Because I'd hide my failures from myself as well, in an unconscious effort to hide them from others more effectively.)
I think the fundamental issue here is that I dread criticism. (Solutions to this problem include exposure therapy and CBT.) People for whom this is less of a hurdle are likely to benefit more from having an irrationality-cop.
↑ comment by Peterdjones · 2012-10-08T12:38:50.406Z · LW(p) · GW(p)
It is certainly possible, it happens, and it generally results in the point-er losing karma. At least when the point-er suggests all the answers might not be found in the Sequences.
Replies from: fortyeridania↑ comment by fortyeridania · 2012-11-01T16:33:45.754Z · LW(p) · GW(p)
I assume you are employing hyperbole. Nevertheless, I think your comment is unfair. Even just on LW, lots of great stuff isn't included in the Sequences. Moreover, people here regularly recommend materials (e.g., books) other than the Sequences.
Replies from: Peterdjones↑ comment by Peterdjones · 2012-11-01T16:49:41.346Z · LW(p) · GW(p)
comment by chaosmosis · 2012-10-06T22:48:43.024Z · LW(p) · GW(p)
I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
How does one increase their curiosity levels?
Replies from: robert-miles, TheOtherDave↑ comment by Robert Miles (robert-miles) · 2012-10-08T11:59:42.825Z · LW(p) · GW(p)
@Eliezer Perhaps it's worth making "try to increase them" a link to lukeprog's "Get Curious" article?
↑ comment by TheOtherDave · 2012-10-07T05:25:12.234Z · LW(p) · GW(p)
I increase my curiosity about a topic by attending closely to what specific questions related to that topic I'm not confident I know the answer to, what predictions I would make differently given higher confidence in various different answers to those questions, and what the consequences might be of being right about those predictions.
Also, the longer I spend trying to think of such questions/predictions and failing, the more confident I become that increasing my curiosity about the topic is not a productive use of my time.
comment by Vladimir_Nesov · 2012-10-06T10:41:52.365Z · LW(p) · GW(p)
The reference to footnote 1 is missing from the post.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T10:54:24.766Z · LW(p) · GW(p)
Heh, looks like the referring text got revised out. I've deleted the footnote.
Replies from: DSimon↑ comment by DSimon · 2012-10-06T21:22:23.849Z · LW(p) · GW(p)
The remaining footnote, the one about RationalWiki, should probably also be removed. It doesn't add anything to the article's point, and it's somewhat rude (in the social sense as well as the logical sense, i.e. you are presenting a strawman response to a question they were never asked).
Replies from: David_Gerard↑ comment by David_Gerard · 2012-10-06T22:19:27.854Z · LW(p) · GW(p)
As a long-time RW regular, I thought it was pretty accurate. Everyone thinks they're rational, particularly the infuriatingly hard-of-thinking. RW is Internet television and an enjoyable waste of your time at best. With useful bits. (This is approximately how I treat LW as well, of course.)
Replies from: Legolan, Luke_A_Somers↑ comment by Legolan · 2012-10-06T22:32:57.334Z · LW(p) · GW(p)
I agree. For those familiar with RationalWiki, I actually thought that it provided a nice contrasting example, honestly. Eliezer's definition of rationality is (regrettably, in my opinion) rare in a general sense (insofar as I encounter people using the term), and I think the example is worthwhile for illustrative purposes.
↑ comment by Luke_A_Somers · 2012-10-07T00:50:06.784Z · LW(p) · GW(p)
Internet television
nice phrase!
comment by Ian Televan · 2021-04-22T22:32:36.412Z · LW(p) · GW(p)
I thought of a slightly different exception for the use of "rational": when we talk about conclusions that someone else would draw from their experiences, which are different from ours. "It's rational for Truman Burbank to believe that he has a normal life."
Or if I had an extraordinary experience which I couldn't communicate with enough fidelity to you, then it might be rational for you not to believe me. Conversely, if you had the experience and tried to tell me, I might answer with "Based only on the information that I received from you, which is possibly different from what you meant to communicate, it's rational for me not to believe the conclusion." There I might want to highlight the issue with fidelity of communication as a possible explanation for the discrepancy (the alternative being, for example, that the conclusion is unwarranted even if the account of the event is true and complete).
comment by [deleted] · 2012-12-14T07:02:01.527Z · LW(p) · GW(p)
From the common usage of the word "I believe" referred to in this context, I think you could generate an interpretation as follows:
When a person says "I believe that something is going to happen":
- They're communicating their degree of belief that it will happen, and are not especially confident.
- They're expecting that particular scenario to be more likely than other scenarios, but not certain.
- They're concentrating on that particular anticipation and expect it to happen regardless of other plausible scenarios.
- They're acknowledging their current state of evidence and, if presented with further evidence, are ready to change their opinion.
As opposed to:
When a person says "it's true that something is going to happen":
- They're communicating their degree of belief that it will happen, and are comparatively confident.
- They're expecting that particular scenario to occur and are confident that other scenarios are excluded.
- They're concentrating on several possibilities, but deductively find only one plausible scenario.
- They're acknowledging their current state of evidence, but expect further evidence merely to confirm their current belief.
I don't intend these as epistemological descriptions, but rather as social descriptions of how people communicate their beliefs and stances by choosing to say "true" instead of "believe". To create an example:
A: I believe the asteroid F4-K3 will miss the earth at a distance of approximately 750,000 km on December 25th, 2012.
B: The asteroid F4-K3 will miss the earth at a distance of approximately 750,000 km on December 25th, 2012.
C: It's true that the asteroid F4-K3 will miss the earth at a distance of approximately 750,000 km on December 25th, 2012.
In this case, it seems to me that the common way to interpret A as opposed to B is that A is uncertain and B is confident, while C's use of the word "true" doesn't really add anything except extra emphasis on that confidence. So, for example, if someone says A, C seems like a more natural response than B: "You think that's gonna happen?" "Yeah, it will happen; I'm confident about it."
Saying "I think", by contrast, communicates a different type of uncertainty: to say "I believe", in contrast to "I think", seems to be like picking favourites, while "I think" seems to communicate "I'm personally reasoning that way".
This, though, was more about the commonplace social use of language than about epistemology; and anyway, that's just what I think.
comment by Reality_Check · 2012-10-09T21:38:00.055Z · LW(p) · GW(p)
.
Replies from: TimS, ArisKatsaris, Eugine_Nier↑ comment by ArisKatsaris · 2012-10-10T09:47:08.402Z · LW(p) · GW(p)
You said "truth=opinion", but to defend that you ask people not to do something true to you that isn't a matter of opinion, but to "give you a statement that does not resolve to opinion".
That's false reasoning. You didn't originally say "all true statements are produced by people's opinions" which is trivially true according to some definition of "opinions", as all statements people can make are by necessity produced by their minds.
But if e.g. you get in an accident and you lose your leg, nobody will have offered you an opinion, but nonetheless it'll be true that you'll be missing a leg. If you then say it's only a matter of opinion that you'll have lost your leg, I direct you to the well-known Monty Python sketch....
Your failure seems to arise from a very basic confusion between map and territory, where you think that because statements about reality derive from opinion, reality itself must derive from opinion. That doesn't follow at all. In truth, F(x) -> y, and Mind(Reality) -> "Statements about Reality"; you haven't disproved the existence of x just by illustrating that every y can be mapped from some x through a function F.
Replies from: Amanojack↑ comment by Amanojack · 2012-10-11T08:42:25.072Z · LW(p) · GW(p)
truth=opinion
I'd phrase it as "truth is subjective," but I agree in principle. Truth is a word for everyday talk, not for precise discourse. This may sound pretty off-the-wall, but stepping back for a second it should be no surprise that holding to everyday English phrasing would interfere with our efforts to speak precisely. I'll put this more specifically below.
But if e.g. you get in an accident and you lose your leg, nobody will have offered you an opinion, but nonetheless it'll be true that you'll be missing a leg.
This is actually begging the question in that you tacitly assume objective truth by using the standard English phrasing. That there is such a thing as an objective truth is precisely the conclusion you hope to establish. Unfortunately English all but forces you to start by assuming it. Again, carrying over the habits of everyday talk into a precise discussion is a recipe for confusion. We'll have to be a little more careful with phrasing to get at what's going on.
I'd first point out that when you say, "you lose your leg," you are speaking as if there is some omniscient narrator who knows "the objective facts of reality." Parent's point is exactly that there is no such omniscience. There are only individuals, including you and I, who have [subjective] experiences.
To get specific, we would have to identify who it is that witnesses the loss of Parent's leg. If you had said, "e.g. you find that you get in an accident and that you lose your leg," it would not be convincing to follow up with, "but nonetheless it'll be true that you'll be missing a leg."
We could all have witnessed (what we experience as) Parent losing a leg. It will be "true" for us (everyday talk), but none among us is an omniscient narrator qualified to state any more than what we experienced. Nowhere is any objective truth to be found. If we were to call it an "objective truth," we would simply be referencing the fact that all three of our experiences seem to match up. It would be at best an inter-subjective "truth," but this "truth" is a lie to someone else who thinks they see Parent with both legs still attached. To avoid confusion, we had best call it a subjective report or something. Hence, while perhaps not ideal, "truth=opinion" is not too bad a way to put it after all.
↑ comment by Eugine_Nier · 2012-10-10T06:51:29.462Z · LW(p) · GW(p)
I would like to challenge everyone to give me a statement that does not resolve to opinion.
If I hit you with this stick it will hurt. (If you insist that's false, I will continue hitting you.)
comment by Reality_Check · 2012-10-08T14:25:48.844Z · LW(p) · GW(p)
truth=opinion