The Fallacy of Gray

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-07T06:24:55.000Z · LW · GW · Legacy · 81 comments

The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”

The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”

—Marc Stiegler, David’s Sling

I don’t know if the Sophisticate’s mistake has an official name, but I call it the Fallacy of Gray. We saw it manifested in the previous essay—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant “there was still a chance.” All probabilities, to him, were simply “uncertain” and that meant he was licensed to ignore them if he pleased.

“The Moon is made of green cheese” and “the Sun is made mostly of hydrogen and helium” are both uncertainties, but they are not the same uncertainty.

Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black. Or even if not, we can still compare shades, and say “it is darker” or “it is lighter.”

Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from The Player of Games by Iain M. Banks, especially the sentence in bold:

A guilty system recognizes no innocents. As with any power apparatus which thinks everybody’s either for it or against it, we’re against it. You would be too, if you thought about it. The very way you think places you amongst its enemies. This might not be your fault, because every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it. You come from one of the latter and you’re being asked to explain yourself to one of the former. Prevarication will be more difficult than you might imagine; neutrality is probably impossible. You cannot choose not to have the politics you do; they are not some separate set of entities somehow detachable from the rest of your being; they are a function of your existence. I know that and they know that; you had better accept it.

Now, don’t write angry comments saying that, if societies impose fewer of their values, then each succeeding generation has more work to start over from scratch. That’s not what I got out of the paragraph.

What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me.

It was the whole notion of the Quantitative Way applied to life-problems like moral judgments and the quest for personal self-improvement. That, even if you couldn’t switch something from on to off, you could still tend to increase it or decrease it.

Is this too obvious to be worth mentioning? I say it is not too obvious, for many bloggers have said of Overcoming Bias: “It is impossible, no one can completely eliminate bias.” I don’t care if the one is a professional economist, it is clear that they have not yet grokked the Quantitative Way as it applies to everyday life and matters like personal self-improvement. That which I cannot eliminate may be well worth reducing.

Or consider an exchange between Robin Hanson and Tyler Cowen.[1] Robin Hanson said that he preferred to put at least 75% weight on the prescriptions of economic theory versus his intuitions: “I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.” Tyler Cowen replied:

In my view there is no such thing as “straightforwardly applying economic theory” . . . theories are always applied through our personal and cultural filters and there is no other way it can be.

Yes, but you can try to minimize that effect, or you can do things that are bound to increase it. And if you try to minimize it, then in many cases I don’t think it’s unreasonable to call the output “straightforward”—even in economics.

“Everyone is imperfect.” Mohandas Gandhi was imperfect and Joseph Stalin was imperfect, but they were not the same shade of imperfection. “Everyone is imperfect” is an excellent example of replacing a two-color view with a one-color view. If you say, “No one is perfect, but some people are less imperfect than others,” you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all.

(Whenever someone says to me, “Perfectionism is bad for you,” I reply: “I think it’s okay to be imperfect, but not so imperfect that other people notice.”)

Likewise the folly of those who say, “Every scientific paradigm imposes some of its assumptions on how it interprets experiments,” and then act like they’d proven science to occupy the same level with witchdoctoring. Every worldview imposes some of its structure on its observations, but the point is that there are worldviews which try to minimize that imposition, and worldviews which glory in it. There is no white, but there are shades of gray that are far lighter than others, and it is folly to treat them as if they were all on the same level.

If the Moon has orbited the Earth these past few billion years, if you have seen it in the sky these last years, and you expect to see it in its appointed place and phase tomorrow, then that is not a certainty. And if you expect an invisible dragon to heal your daughter of cancer, that too is not a certainty. But they are rather different degrees of uncertainty—this business of expecting things to happen yet again in the same way you have previously predicted to twelve decimal places, versus expecting something to happen that violates the order previously observed. Calling them both “faith” seems a little too un-narrow.

It’s a most peculiar psychology—this business of “Science is based on faith too, so there!” Typically this is said by people who claim that faith is a good thing. Then why do they say “Science is based on faith too!” in that angry-triumphal tone, rather than as a compliment? And a rather dangerous compliment to give, one would think, from their perspective. If science is based on “faith,” then science is of the same kind as religion—directly comparable. If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars. It would make sense to say, “The priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests’ faith can’t do the same.” Are you sure you wish to go there, oh faithist? Perhaps, on further reflection, you would prefer to retract this whole business of “Science is a religion too!”

There’s a strange dynamic here: You try to purify your shade of gray, and you get it to a point where it’s pretty light-toned, and someone stands up and says in a deeply offended tone, “But it’s not white! It’s gray!” It’s one thing when someone says, “This isn’t as light as you think, because of specific problems X, Y, and Z.” It’s a different matter when someone says angrily “It’s not white! It’s gray!” without pointing out any specific dark spots.

In this case, I begin to suspect psychology that is more imperfect than usual—that someone may have made a devil’s bargain with their own mistakes, and now refuses to hear of any possibility of improvement. When someone finds an excuse not to try to do better, they often refuse to concede that anyone else can try to do better, and every mode of improvement is thereafter their enemy, and every claim that it is possible to move forward is an offense against them. And so they say in one breath proudly, “I’m glad to be gray,” and in the next breath angrily, “And you’re gray too!”

If there is no black and white, there is yet lighter and darker, and not all grays are the same.

The commenter G2 points us to Asimov’s “The Relativity of Wrong”:

When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

[1] Hanson (2007), “Economist Judgment,” http://www.overcomingbias.com/2007/12/economist-judgm.html. Cowen (2007), “Can Theory Override Intuition?”, http://marginalrevolution.com/marginalrevolution/2007/12/how-my-views-di.html.

81 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Tiiba2 · 2008-01-07T07:40:38.000Z · LW(p) · GW(p)

I suggest this post for the "start here" list. It's unusually close to perfection.

comment by James_Bach · 2008-01-07T08:31:31.000Z · LW(p) · GW(p)

It sounds like you are trying to rescue induction from Hume's argument that it has no basis in logic. "The future will be like the past because in the past the future was like the past" is a circular argument. He was the first to really make that point. Immanuel Kant spent years spinning elaborate philosophy to try to defeat that argument. Immanuel Kant, like lots of people, had a deep need for universal closure.

An easier way to go is to overcome your need for universal closure.

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quick. Induction is a behavior that seems to help us stay alive. That's pretty good. That's why people can't just wave their hands and claim reality is whatever anyone believes-- if they do that, they will discover that acting on that belief won't necessarily, say, win them the New York lottery.

My concern with your argument is, again, structural. You are talking about "gray", and then you link that to probability. Wait a minute, that oversimplifies the metaphor. You present the idea of gray as a one-dimensional quantity, similar to probability. But when people invoke "gray" in rhetoric they are simply trying to say that there are potentially many ways to see something, many ways to understand and analyze it. It's not a one-dimensional gray, it's a many dimensional gray. You can't reduce that to probability, in any actionable way, without specifying your model.

Here's the tactic I use when I'm trying to stand up for a distinction that I want other people to accept (notice that I don't need to invoke "reality" when I say that, since only theories of reality are available to me). I ask them to specify in what way the issue is gray. Let's distinguish between "my spider senses are telling me to be cautious" and "I can think of five specific factors that must be included in a competent analysis. Here they are..."

In other words, don't deny the gray, explore it.

A second tactic I use is to talk about the practical implications of acting-as-if a fact is certain: "I know that nothing can be known for sure, but if we can agree, for the moment, that X, Y, and Z are 'true' then look what we can do... Doesn't that seem nice?"

I think you can get what you want without ridiculing people who don't share your precise worldview, if that sort of thing matters to you.

Replies from: robert-miles, omalleyt
comment by Robert Miles (robert-miles) · 2012-11-16T13:40:03.827Z · LW(p) · GW(p)

Induction is a behavior that seems to help us stay alive.

Well, it has helped us to stay alive in the past, though there's no reason to expect that to continue...

comment by omalleyt · 2016-09-06T20:20:25.197Z · LW(p) · GW(p)

But let's really look at the statement "The future will be like the past because in the past the future was like the past."

If by "like the past," do we mean obey the same physical laws?

If we do, then I think what we're trying to estimate is the chance, over a specified time frame, that the physical laws will change.

The problem then reduces to the problem of drawing red and blue marbles out of a hat. We can look at all the available time frames that we have "drawn" up to this point and get a confidence estimate of how likely it is that the physical laws will change over the next "draw" of the time frame.
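
As a toy illustration of the marble analogy (my sketch, not part of the original comment): under Laplace's rule of succession, after n "draws" (time periods) in which the physical laws never changed, one rough estimate for the chance they change on the next draw is 1/(n + 2). A minimal Python sketch, with all names and numbers hypothetical:

```python
def chance_laws_change(periods_without_change: int) -> float:
    """Laplace's rule of succession: after n observed periods in which the
    physical laws never changed, estimate the probability of a change
    during the next period as 1 / (n + 2)."""
    n = periods_without_change
    return 1.0 / (n + 2)

# Example: treat ~13 billion years of history as thirteen billion-year "draws".
print(chance_laws_change(13))  # ~0.067 per billion-year period, under this toy model
```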

comment by Elver · 2008-01-07T08:35:11.000Z · LW(p) · GW(p)

This post is unusually white. The two arguments -- all shades of gray being seen as the same shade and science being a demonstrably better "religion" -- have seriously expanded my mind. Thank you!

comment by Dan_Burfoot · 2008-01-07T09:56:30.000Z · LW(p) · GW(p)

That which I cannot eliminate may be well worth reducing.

I wish this basically obvious point was more widely appreciated. I've participated in dozens of conversations which go like this:

Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc." Me: " $%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"

Great post. I encourage you to expand on the idea of the Quantitative Way as applied to areas such as self improvement and everyday life.

Replies from: ChristroperRobin, Roxton
comment by ChristroperRobin · 2012-07-19T10:33:58.911Z · LW(p) · GW(p)

Seeing Dan_Burfoot's comment from four years ago, I felt compelled to join the discussion.

I've participated in dozens of conversations which go like this:

Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc." Me: " $%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"

I would put it like this

Libertarian: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad."

Me: "Coercive violence is dissuading me from killing you. So maybe coercive violence is not so bad, after all."

Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests. "Government" is not "coercive violence", it is the agreement between rational people that they will allow their

Replies from: wedrifid
comment by wedrifid · 2012-07-19T11:01:43.330Z · LW(p) · GW(p)

Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests.

I was nodding along until: "The ground upon which all rationality rests".

You seem to have fallen into the same trap of self-defeating hyperbole that the quoted straw-libertarian has fallen into. It is enough to make your point that government, and the implied threat of violence, is not all bad and is even useful. Don't try to make ridiculous claims about "all rationality". Apart from being a distasteful abuse of 'rational' as an applause light, it is also false. With actual rational agents all sorts of alternative arrangements not fitting the label "government" would be just as good---it is the particular quirks of humans that make government more practical for us right now.

Replies from: ChristroperRobin
comment by ChristroperRobin · 2012-07-19T12:28:30.220Z · LW(p) · GW(p)

I am embarrassed that I accidentally clicked "close" before I was done writing my comment. While I was off composing it in the sandbox, you saw the first draft and commented on it. And you are correct, I think. Is my face red, or what? I have retracted my original comment. My browser shows it as struck out, anyway.

So, yeah, saying that government is "coercive violence" is a straw argument. I think we agree.

I think we agree. What are "actual rational agents"? I am new here, so maybe I should do some more reading. I'm sure Eliezer has published extensively on defining that term. My prejudice would be that "actual rational agents" are entities which "rationally" would want to protect their own existence. I mean, they may be "rational", but they still have self-interest.

So what I'm saying is that "government" is a system for settling claims between competing rational agents. It's a set of game rules. Game rules enshrined by rational agents, for the purpose of protecting their own rational self-interests, are rational.

Rational debate, without the existence of these game rules, which is what government is, is impossible. That's what I'm saying.

Here's another way to look at it. The Laws of Logic (A is A, etc.) are also game rules. We don't think of them that way because we don't accept the Laws of Logic voluntarily. We are forced to accept them because they are necessarily true. Additional rules, which we call government, are also necessary. We write our own Constitution, but we still need to have one.

Replies from: wedrifid, TheLooniBomber
comment by wedrifid · 2012-07-19T13:43:45.664Z · LW(p) · GW(p)

I think we agree. What are "actual rational agents"? I am new here, so maybe I should do some more reading. I'm sure Eliezer has published extensively on defining that term. My prejudice would be that "actual rational agents" are entities which "rationally" would want to protect their own existence. I mean, they may be "rational", but they still have self-interest.

We are using approximately the same meaning. (I would only insist that they value something, it doesn't necessarily have to be their own existence but that'll do as an example.)

So what I'm saying is that "government" is a system for settling claims between competing rational agents. It's a set of game rules. Game rules enshrined by rational agents, for the purpose of protecting their own rational self-interests, are rational.

Rational debate, without the existence of these game rules, which is what government is, is impossible. That's what I'm saying.

I'm disagreeing that government is actually necessary. It is a solution to cooperation problems but not the only one. It just happens to be the one most practical for humans.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-19T14:35:50.711Z · LW(p) · GW(p)

Well, for sufficiently large groups of humans.

comment by TheLooniBomber · 2013-01-26T23:29:57.255Z · LW(p) · GW(p)

Bringing party politics into a discussion about rationality makes you the straw man, my friend. Attacking a philosophy of limited government would imply that every government action is the same shade of grey and all must be necessary, because a group of people voted on a policy and therefore it must be thought out. Politics in itself is not the product of careful examination and rational thinking about public issues, but rather a way of conveying one's interests in a manner that appears to benefit the target audience and gain support. Not all rules are necessary, or of the same necessity, simply because they are written.

I would also add that we do, in fact, accept the Laws of Logic voluntarily, but only if we are not indoctrinated to do otherwise. To believe that we don't would suggest that the first philosophers had to have been taught, perhaps by some supernatural or extraterrestrial deity, or perhaps that the first logical thought was triggered by a concussion.

comment by Roxton · 2013-05-29T15:26:54.532Z · LW(p) · GW(p)

Doesn't "coercive violence is bad" beg the question in a way that would only be deemed natural if one were implicitly invoking the noncentral fallacy?

Replies from: Larks
comment by Larks · 2013-05-29T17:27:48.178Z · LW(p) · GW(p)

No, many people think coercion qua coercion is wrong - for example, philosophers of a Kantian bent, which is very common in political philosophy.

Replies from: Roxton, Juno_Watt
comment by Roxton · 2013-05-29T17:57:58.416Z · LW(p) · GW(p)

Point taken, but I would advance the view that the popularity of such a categorical point stems from the fallacy. It seems to be the backbone that makes deontological ethics intuitive.

In any event, it's still clearly an instance of begging the question.

But my goal was to cast a shadow on the off-topic point, not to derail the thread.

Replies from: Larks
comment by Larks · 2013-05-29T22:05:24.221Z · LW(p) · GW(p)

it's still clearly an instance of begging the question.

I'm not sure it is; that government involves coercion is a substantive premise.

But my goal was to cast a shadow on the off-topic point, not to derail the thread.

Unfortunately, people who agree with the off-topic point can hardly accept such behaviour without response.

comment by Juno_Watt · 2013-05-29T18:59:15.381Z · LW(p) · GW(p)

Many libertarians think that. Kant himself, I'm not so sure about; I don't think he would have wished "no criminals should be captured" or "Everyone should dodge taxes" to be the Universal Law.

Replies from: Larks
comment by Larks · 2013-05-29T22:02:05.689Z · LW(p) · GW(p)

I'm not referring to Kant, I mean contemporary philosophers, like Michael Blake, who is not a libertarian.

comment by Ben_Jones · 2008-01-07T12:39:36.000Z · LW(p) · GW(p)

Agreed - best post in ages, many thanks. That is all.

comment by RobinHanson · 2008-01-07T14:23:47.000Z · LW(p) · GW(p)

All who love this post, do you love it because it told you something you didn't know before, or because you think it would be great to show others who you don't think understand this point? I worry when our readers' favorite posts are based on how much they agree with the post, instead of how much they learned from it.

Replies from: eyelidlessness, redlizard
comment by eyelidlessness · 2012-07-29T06:32:33.781Z · LW(p) · GW(p)

It's possible both are true: that the reader understood the point already, but learned a better way to articulate it in an effort to advance another conversation.

comment by redlizard · 2014-05-01T18:09:45.858Z · LW(p) · GW(p)

I already knew it, but this post made me understand it.

comment by Mike_Kenny · 2008-01-07T14:42:17.000Z · LW(p) · GW(p)

For me, the main point is that incremental advancement towards perfection means expending resources and creating other consequences. The questions ultimately have to be "How much is it worth to move closer to perfection? What other consequences will probably follow?" This obviously depends on your context. Some kinds of perfectionism, as far as I can tell, have negative effects on the holder of perfectionistic standards (in the view of psychologists, the relevant experts on the matter), and those costs have to be considered when moving in the direction of perfection--it might even be worthwhile to move away from perfection in one context if the costs are too great and the benefits too small.

That said, I think the ethos of this blog seems to be "We're too comfortable with our imperfections in thinking," which I think is true enough. On the other hand, emphasizing how bad or dopey we are is depressing or off-putting, true though it may be in many cases, and focusing on how we'd be happier and more powerful with less bias is exciting, and it can be fun (lots of people like betting, which can help us see our biases, for example).

comment by LG · 2008-01-07T14:54:02.000Z · LW(p) · GW(p)

Robin, I think people tend to be enthusiastic when an idea they've known on a more or less intuitive level for a long time is laid out eloquently, and in a way they could see relaying to their particular audience. It's a form of relief, maybe.

So it's not so much "I like it because I agree with it," it's more "I like it because I knew it before but I could never explain it that well."

/unscientific guessing

comment by Ben_Jones · 2008-01-07T15:32:10.000Z · LW(p) · GW(p)

Robin,

I'm with LG; the answer to your question is 'neither'. I also enjoy posts which reinforce my way of thinking, but a straight account of what I already think myself wouldn't draw praise. Crystallization of a hitherto-unclear concept can be invaluable - I quote:

"What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me."

Mike, any action or updating of beliefs will have a net effect on 'whiteness' (or 'blackness'). If you're worried that improving in manner x will lead to worsening in manner y, weigh one against the other and take action. 'Perfectionism' holds negative connotations that Tsuyoku Naritai seems not to.

comment by Utilitarian2 · 2008-01-07T15:59:32.000Z · LW(p) · GW(p)

Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?

When used appropriately, the "science is based on faith too" point is meant to cast doubt upon specific non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer). Scientific evidence doesn't distinguish between these hypotheses; it's taken on faith that the first of these is "simpler" and deserves higher prior probability. Maybe these priors are derived from Kolmogorov complexity or something similar, but it still must be taken on faith that those measures are meaningful. (This is, of course, what you recognized when you said, "Every worldview imposes some of its structure on its observations [...].")

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quick. Induction is a behavior that seems to help us stay alive.

Isn't this argument premised on induction, i.e., things that helped organisms stay alive in the past will help them stay alive in the future?

comment by Peter_de_Blanc · 2008-01-07T16:22:17.000Z · LW(p) · GW(p)

Utilitarian, you said:

non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer).

How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

comment by Utilitarian2 · 2008-01-07T17:24:08.000Z · LW(p) · GW(p)

How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

Not much; it's possible that these hypotheses are falsifiable (in the sense of having a likelihood ratio < 1 compared against the other corresponding hypothesis). I was assuming this wasn't true given only the evidence currently available, but I'd be glad to hear if you think otherwise.

comment by Nick_Tarleton · 2008-01-07T18:26:30.000Z · LW(p) · GW(p)

It's easy to think of potential observations that would very strongly favor dualism or intelligent design, and the absence of those observations counts as falsifying evidence.

comment by steven · 2008-01-07T18:39:40.000Z · LW(p) · GW(p)

I think it's worth keeping the distinction between falsification (a likelihood ratio of 0) and disconfirmation (a likelihood ratio < 1). Usually when people say "unfalsifiable" they really mean "undisconfirmable" or "unstronglydisconfirmable".
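
To spell out the likelihood-ratio framing (my gloss, not part of steven's comment, with H standing for the hypothesis and E for the evidence): in odds form, Bayes's theorem reads

$$
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)} .
$$

A likelihood ratio of exactly zero (the evidence is impossible if H is true) drives the posterior odds to zero, which is falsification; a ratio strictly between zero and one merely lowers the odds, which is disconfirmation.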

comment by Peter_Kim · 2008-01-07T18:42:40.000Z · LW(p) · GW(p)

Dan Burfoot, permit me to join in those conversations:

Me: "No, coercive violence is merely a shade of gray. Another harm of the status quo, like sick children, may be a darker shade of gray, in which case I'm willing to become a little darker so I can gain more lightness overall. For example, I don't think there's much opposition to using coercive violence to protect the life of infants (criminalizing infanticide, taxation to support wards of state, etc.). Of course, opinions on the relative light/darkness of coercive violence vs. other 'bad' differ, and therein lies the popular contention between 'big govt' vs. 'small govt,' not whether government based on coercive violence, or that coercive violence is bad."

comment by G2 · 2008-01-07T19:12:31.000Z · LW(p) · GW(p)

This post reminds me of Isaac Asimov's "The Relativity of Wrong", which is excellent. (Wikipedia page)

Replies from: Hul-Gil
comment by Hul-Gil · 2012-04-20T18:24:41.605Z · LW(p) · GW(p)

It reminded me of that as well. Here is the full article; I'm glad it's online, because the errors he (and Yudkowsky, above) clears up are astonishingly prevalent. I've had cause to link to it many times.

comment by josh · 2008-01-07T21:50:42.000Z · LW(p) · GW(p)

LG, doesn't that mean you like the post specifically because it appeals to confirmation bias, one of the known biases we should be seeking to overcome?

comment by Nathan_Myers · 2008-01-08T23:29:21.000Z · LW(p) · GW(p)

In other words, "numbers matter". But I suppose mentioning numbers eliminates most of your audience.

comment by Zander · 2008-01-09T17:23:31.000Z · LW(p) · GW(p)

Ah, I love the way the cheap shots just keep on coming...

comment by david_foster · 2008-01-11T16:16:32.000Z · LW(p) · GW(p)

Arthur Koestler has some thoughts that are relevant here.

comment by ksvanhorn · 2011-01-21T19:41:27.678Z · LW(p) · GW(p)

Thanks, Eliezer, for an excellent article. Some of my favorite quotables:

  • the Quantitative Way

  • Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black.

  • If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars.

  • "Everyone is imperfect" is an excellent example of replacing a two-color view with a one-color view.

comment by dspeyer · 2011-11-17T03:38:19.442Z · LW(p) · GW(p)

Then there's the fallacy of shades of gray: that every space can be reasonably modeled as 1-dimensional.

Replies from: Hul-Gil
comment by Hul-Gil · 2012-04-20T18:30:24.504Z · LW(p) · GW(p)

I'm trying to imagine the other dimension we could add to this. If we have "more right" and "less right" along one axis, what's orthogonal to it?

I initially felt this comment was silly (the post isn't saying every space can be reasonably modeled as one-dimensional, is it?), but my brain is telling me we actually could come up with a more precise way to represent the article's concept with a Cartesian plane... but I'm not actually able to think of one. False intuition based on my experience with the "Political Compass" graph, perhaps.

Replies from: dlthomas, dspeyer
comment by dlthomas · 2012-04-20T18:50:42.344Z · LW(p) · GW(p)

Direction of divergence?

Neither (1, 5) nor (5, 1) may be "more wrong" when the answer is (2, 2), but may still be quite meaningfully distinct for some purposes.

Replies from: Hul-Gil
comment by Hul-Gil · 2012-04-20T19:39:15.025Z · LW(p) · GW(p)

That's true. They could be wrong in different ways (or "different directions", in our example), which could be important for some purposes. But as you say, that depends on said purposes; I'm still uncertain as to the fallacy that dspeyer refers to. If our only purpose is determining some belief's level of correctness, absent other considerations (like in which way it's incorrect), isn't the one dimension of the "shades of grey" model sufficient?

Although -- come to think of it, I could be misunderstanding his criticism. I took it to mean he had an issue with the original post, but he could just be providing an example of how the shades-of-grey model could be used fallaciously, rather than saying it is fallacious, as I initially interpreted.

comment by dspeyer · 2012-04-26T05:25:55.798Z · LW(p) · GW(p)

I meant my comment more as a warning to readers than as a criticism of the article. When you've upgraded your mental model, don't stop and be satisfied -- see if there are more low-hanging upgrades. This is especially important if having recently improved your model biases you toward overconfidence (which I suspect is common).

To address your actual challenge...

Probability of correctness may actually be one dimensional. Though in practice it's worth keeping around what the big hunks of uncertainty are so you can update them easily if needed (i.e. P(my_understanding) = P(I_understood_what_I_read) × P(the_author_was_honest) × ... is easier to update if you later learn the author was a troll).
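
A minimal sketch of that factored bookkeeping (my illustration, treating the factors as independent the way the comment's shorthand does, with made-up numbers and hypothetical names):

```python
# Keep the big hunks of uncertainty as separate factors so one of them can
# be updated without recomputing everything else. Numbers are made up.
factors = {
    "I_understood_what_I_read": 0.9,
    "the_author_was_honest": 0.95,
    "the_author_was_well_informed": 0.8,
}

def p_my_understanding(factors):
    """Multiply the independent factors to get P(my_understanding)."""
    p = 1.0
    for value in factors.values():
        p *= value
    return p

print(p_my_understanding(factors))   # ~0.68

# Later evidence: the author turns out to be a troll. Update one factor only.
factors["the_author_was_honest"] = 0.05
print(p_my_understanding(factors))   # ~0.036
```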

Degrees of correctness are more complex. "The geography of the Earth is as shown on a Mercator map" and "The geography of the Earth is as shown on a Peters map" are both false. They are both useful approximations. Is one more useful than the other? That depends on what you want to do with it.

There were other examples in the article besides correctness. "Every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it" and some maximize it with regard to their perspective on murder and minimize it with regard to their perspective on shellfish. "No one is perfect, but some people are less imperfect than others" and some people are imperfect in different ways from others, which are more or less harmful in different circumstances.

comment by [deleted] · 2012-02-19T17:14:49.893Z · LW(p) · GW(p)

This was a very useful post and one I will be adding into my daily dossier I know. I agree this is a good "start post" because it is lucid, clear, and useful. There's little I feel to add at the moment as doing so would simply be glorifying the item itself rather than using the knowledge gained, so thank you for the post.

comment by wobster109 · 2012-02-26T19:27:46.615Z · LW(p) · GW(p)

I'm glad this post is here! Today, I came across this lovely little statement on Xanga: "Richard Dawkins admitted recently that he can't be sure that God does not exist. He is generally considered the World's most famous Atheist. So this question is for Atheists. Can you be sure that God does not exist?"

It made me cranky right away (I promise, I was more patient many many instances of this sentiment ago), and my first response was to link here in a comment. Well, I'm glad this post is here to link to. Grr.

comment by David_Gerard · 2012-12-02T14:36:45.680Z · LW(p) · GW(p)

Surprised no-one's yet noted that the proper name for this is the continuum fallacy or sorites fallacy.

comment by non-expert · 2013-01-08T08:52:20.508Z · LW(p) · GW(p)

i don't follow the relevance of article, as it seems quite obvious. the real problem with the black and white in the world of rationality is the assumption there is a universal answer to all questions. the idea of "grey" helps highlight that many answers have no one correct universal answer. what i dont understand about rationalists (LW rationalists) is that the live in a world in which everything is either right or wrong. this simplifies a world that is not so simple. what am i missing?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T10:58:58.721Z · LW(p) · GW(p)

Offtopic: Have you considered running your comments through a spell- and grammar-checker? It might help with legibility and signalling competence.

Ontopic:

what i dont understand about rationalists (LW rationalists) is that the live in a world in which everything is either right or wrong.

Rationalists, or at least Bayesians, use probabilities, not binary right-or-wrong judgments. There is, mathematically, only one "correct" probability given the data; is that what you mean?

Replies from: non-expert
comment by non-expert · 2013-01-09T04:15:20.904Z · LW(p) · GW(p)

Ok, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgements may still hold in the abstract, they are often not practical to use (or so I would argue). Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world? You don't have to sign on completely to Nietzsche's theory to see its potential application, even if limited in scope. For example, a number of communication techniques employ a type of perspectivism because different people view issues through an "individual lens". In either case, seeing the world as constructed of shades of grey seems more practical and accurate relative to using probabilities. This seems at odds with Bayesian judgments that assume that probabilities yield one correct answer AND that a person can and should be able to derive that correct answer.

The point i raise about communication techniques relates to your "offtopic" point. I assume you are a rationalist, and thus believe yourself to have superior decision making skills (at least relative to those that are not students (or masters) of rationality). If so, what is the value of your "off topic" point -- you clearly were able to answer my question despite its shortcomings -- why belittle someone that is trying to understand an article that is well-received by LW? Is the petty victory of pointing out my mistakes, from your perspective, the most rational way to answer my comment? I'm not insulted personally (this type of pettiness always makes me smile), but I'm interested in understanding the logic of your comments. From my perspective, rationality failed you in communicating in an effective way. It seems your arrogance could keep many from following and learning from LW -- unless of course the goal is to limit the ranks of those that employ rationality. What am I missing? (and the answer is no, i haven't considered using a spell or grammar checker other than the one provided by this site).

Replies from: MugaSofer
comment by MugaSofer · 2013-01-09T10:34:31.572Z · LW(p) · GW(p)

Ok, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgements may still hold in the abstract, they are often not practical to use (or so I would argue).

Unreliable evidence, biased estimates etc. can, in fact, be taken into account.

Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world?

This.

You don't have to sign on completely to Nietzsche's theory to see its potential application, even if limited in scope. For example, a number of communication techniques employ a type of perspectivism because different people view issues through an "individual lens". In either case, seeing the world as constructed of shades of grey seems more practical and accurate relative to using probabilities. This seems at odds with Bayesian judgments that assume that probabilities yield one correct answer AND that a person can and should be able to derive that correct answer.

Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

The point i raise about communication techniques relates to your "offtopic" point. I assume you are a rationalist, and thus believe yourself to have superior decision making skills (at least relative to those that are not students (or masters) of rationality). If so, what is the value of your "off topic" point -- you clearly were able to answer my question despite its shortcomings -- why belittle someone that is trying to understand an article that is well-received by LW? Is the petty victory of pointing out my mistakes, from your perspective, the most rational way to answer my comment? I'm not insulted personally (this type of pettiness always makes me smile), but I'm interested in understanding the logic of your comments. From my perspective, rationality failed you in communicating in an effective way. It seems your arrogance could keep many from following and learning from LW -- unless of course the goal is to limit the ranks of those that employ rationality. What am I missing? (and the answer is no, i haven't considered using a spell or grammar checker other than the one provided by this site).

Oh, I'm not going to downvote your comments or anything. I just thought you might prefer your comments to be easier to read and avoid signalling ... well, disrespect, ignorance, crazy-ranting-on-the-internet-ness, and all the other low status and undesirable signals given off. Of course, I'm giving you the benefit of the doubt, but people are simply less likely to do so when you give off signals like that. This isn't necessarily irrational, since these signals are, indeed, correlated with trolls and idiots. Not perfectly, but enough to be worth avoiding (IMHO.)

Replies from: non-expert
comment by non-expert · 2013-01-09T14:22:36.518Z · LW(p) · GW(p)

Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

If all you're looking for is confidence, why must you assign probabilities? I'm pushing you in hopes of understanding, not necessarily disagreeing. If I'm very religious and use that as my life-guide, I could be extremely confident in a given answer. In other words, the value of using probabilities must extend beyond confidence in my own answer -- confidence is just a personal feeling. Being "right" in a normative sense is also relevant, but as you point out, we often don't actually know what answer is correct. If your point instead is that probabilities will result in the right answer more often then not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance -- this is simply not practical in many situations precisely because the world is so complex. I guess it boils down to this -- what is the value of being "right" if what is "right" cannot be determined? I think there are decisions where what is right can be determined -- and rationality and the bayesian model works quite well. I think far more decisions (social relationships, politics, economics -- particularly decisions that do not directly affect the decision maker) are too subjective to know what is "right" or accurately model inputs. In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.

I think I'm the only one on LessWrong that finds EY's writing maddening -- mostly the style -- I keep screaming to myself, "get to the point!" -- as noted, perhaps its just me. His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.

Replies from: TheOtherDave, Peterdjones, MugaSofer
comment by TheOtherDave · 2013-01-09T14:56:47.838Z · LW(p) · GW(p)

I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

Yes, this community is generally concerned with methods for, as you say, getting "the right answer more often than not."

And, sure, sometimes a marginal increase in my chance of getting the right answer isn't worth the cost of securing that increase -- as you say, sometimes "accurately identifying the proper inputs and valuing them correctly [...] is simply not practical" -- so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get.

To say that "rationality falls short" in these cases suggests that it's being compared to something. If you're saying it falls short compared to perfect knowledge, I absolutely agree. If you're saying it falls short compared to something humans have access to, I'm interested in what that something is.

I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it's "the best answer" has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C.

I have no idea what you mean by the truth being "relative".

Replies from: non-expert
comment by non-expert · 2013-01-10T07:49:24.979Z · LW(p) · GW(p)

I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

i referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right? If it gives confidence/ego/comfort that you've derived the right answer, being "right" in actuality is not necessary to have those feelings.

To say that "rationality falls short" in these cases suggests that it's being compared to something.

Fair. The use of rationality and the belief in its merits generally biases the decision maker to form a belief that rationality will yield a correct answer, even if it does not -- it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don't know they are accurate). To say it differently, to the extent a question has no clear answer (for example, because we don't have enough information or it isn't worth the cost), I think we'd be better off withholding judgment altogether than forming a judgment for the sake of having an opinion. Rumsfeld had this great quote -- "we dont know what we don't know" -- we also don't know the importance of what we don't know relative to what we do know when forming judgments. From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know. Rationality cannot take into account information that is not known to be relevant -- what is the value of forming a judgment in this case? To be clear, I'm not "throwing my hands up" for all of life's questions and saying we don't know anything -- I'm trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).

Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue. This is because our notions of truth are man-made, even if we account for the possibility that there are certain universal truths (what relevance do those truths have if only you know them?). Despite the logic underlying probability theory/science in general, truths derived therefrom are accepted as such only because people value and trust probability theory and science. All other matters of truth are even more subjective -- this does not mean that contradicting beliefs are equally true or equally valid, instead, truth is subjective precisely because we cannot even attempt prove anything as true outside of human comprehension. We're stuck debating and determining truth only amongst ourselves. Its the human paradox of freedom of expression/reasoning trapped within an animal form that is fallible and will die. From my perspective, determining universal truth, if it exists, requires transcending the limitations of man -- which of course i cannot do.

Replies from: MugaSofer, TheOtherDave
comment by MugaSofer · 2013-01-10T10:36:08.589Z · LW(p) · GW(p)

i referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right?

Because it helps us make decisions.

Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful 'round here for producing fruitful discussions - there's no point arguing about whether the tree in the forest made a sound if I mean "auditory experience" and you mean "vibrations in the air". This is known as "Rationalist's Taboo", after a game with similar rules, and replacing a word with (your) definition is known as "tabooing" it.

Replies from: non-expert
comment by non-expert · 2013-01-14T07:55:45.180Z · LW(p) · GW(p)

I actually don't think we're using the word differently -- the issue was premised solely for issues where the answer cannot be known after the fact. In that case, our use of "confidence" is the same -- it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not ultimate "correctness" of the decision (because it cannot be known), rationality is important only if you believe it to be correct way to make decisions.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:31:30.060Z · LW(p) · GW(p)

Indeed. And probability is confidence, and Bayesian probability is the correct amount of confidence.

comment by TheOtherDave · 2013-01-10T18:44:52.370Z · LW(p) · GW(p)

What was your understanding of how Mugasofer used "confident as we should be"?

Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

what the value of being "right" is if we can't determine what is in fact right?

I'm not quite sure what "right" means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there's no value in knowing whether A or B is true.

it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don't know they are accurate).

Yes, pretty much. I wouldn't say "errs", but semantics aside, we're always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem.

to the extent a question has no clear answer (for example, because we don't have enough information or it isn't worth the cost), I think we'd be better off withholding judgment altogether than forming a judgment for the sake of having an opinion.

There are many decisions I'm obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question "is the world A or B?" has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective.

But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)

From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know.

Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.

Rationality cannot take into account information that is not known to be relevant -- what is the value of forming a judgment in this case?

As above, there are many situations where I'm obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.

I'm trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).

I'd say LW is willing to push rationality as the best "theory" in all cases short of perfect knowledge right up until the point that a better one comes along, where "better" and "best" refer to their ability to reliably obtain benefits.

That's why I asked you what you're comparing it to; what it falls short relative to.

Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue.

So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice.
You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.

Did I understand that correctly?

Replies from: non-expert
comment by non-expert · 2013-01-14T07:48:20.918Z · LW(p) · GW(p)

Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

How is this different than being "comfortable" on a personal level? If it isn't, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer -- the "truth" is irrelevant. For example, as previously noted in the thread, if I'm super religious, I could use scripture to guide a decision and have the same confidence (on a subjective, personal way). Once the correctness of the belief cannot be determined as right or wrong, the manner in which the belief is created becomes irrelevant, EXCEPT to the extent laws/norms change because other people agree. I've taken the idea of absolute truth and simply converted it social truth because I think its a more appropriate term (more below).

You are suggesting that rationality provides the "best way" to get answers short of perfect knowledge. Reflecting on your request for a comparatively better system, I realized you are framing the issue differently than I am. You are presupposing the world has certainty, and only are concerned with our ability to derive that certainty (or answers). In that model, looking for the "best system" to find answers makes sense. In other words, you assume answers exist, and only the manner in which to derive them is unknown. I am proposing that there are issues for which answers do not necessarily exist, or at least do not exist within world of human comprehension. In those cases, any model by which someone derives an answer is equally ridiculous. That is why I cannot give you a comparison. Again, this is not to throw up my hands, its a different way of looking at things. Rationality is important, but a smaller part of the bigger picture in my mind. Is my characterization of your position fair? If so, what is your basis for your position that all issues have answers?

So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice. You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.

I am only talking about the relevance of truth, not the absolute truth, because the absolute truth cannot necessarily be known beforehand (as in your example!). Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent. Related to the point I made above, if you presuppose Truth exists, it is easy to question or point out how people could be wrong about what it is. I don't think we have the luxury of knowing the Truth in most cases. Until future events prove otherwise, truth is just what we humans make of it, whether or not it conforms with the Truth -- thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.

In your example, immediately after the vial is taken -- we find out we're right or wrong -- and our subjective truths may change. They remain subjective truths so long as future facts could further change our conclusions.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-14T14:10:00.258Z · LW(p) · GW(p)

You are presupposing the world has certainty, and are only concerned with our ability to derive that certainty (or answers).

Yes. The vial is either poisoned or it isn't, and my task is to decide whether to drink it or not. Do you deny that?

In that model, looking for the "best system" to find answers makes sense.

Yes, I agree. Indeed, looking for systems to find answers that are better than the one I'm using makes sense, even if they aren't best, even if I can't ever know whether they are best or not.

I am proposing that there are issues for which answers do not necessarily exist,

Sure. But "which vial is poisoned?" isn't one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.

Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent.

This is where we disagree.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.

And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.

thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.

Again, this is where we disagree. The relevance of "Truth" (as you're referring to it... I would say "reality") is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.

In your example, immediately after the vial is taken -- we find out we're right or wrong -- and our subjective truths may change.

Sure, that's true.

But it's far more useful to better entangle our decisions (our "subjective truths," as you put it) with reality ("Truth") before we make those decisions.
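To make "entangle" concrete with a toy sketch (the numbers here are invented, not part of the example above): even a weak test that merely correlates with which vial is poisoned shifts the odds, in a way that polling the other 999 observers does not.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * likelihood_ratio

# Assumed starting point: even odds that the red vial is the poisoned one.
prior_odds = 1.0

# Suppose a sniff test reports "smells off" for 80% of poisoned vials and
# 30% of harmless ones. Both figures are made up for illustration.
likelihood_ratio = 0.8 / 0.3

odds = posterior_odds(prior_odds, likelihood_ratio)
p = odds / (1 + odds)
print(f"P(red vial is poisoned | red vial smells off) = {p:.2f}")  # roughly 0.73
```

The sniff test tells me something about the vial itself; a head count only tells me what the room believes.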

Replies from: non-expert
comment by non-expert · 2013-01-14T18:42:50.438Z · LW(p) · GW(p)

With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.

But as noted above, if we cannot derive the truth, it is just as good as not existing. If the "vial picker" knows the truth beforehand, or is able to derive it, so be it, but immediately before he picks the vial, the Truth, as the vial picker knows it, is of limited value -- he is unsure and everyone around him thinks he's an idiot. After the fact, everyone's opinion will change in accordance with the results. By creating your own example, you're presupposing (i) that an answer exists to your question AND (ii) that we can derive it -- we don't have that luxury in real life, and even if we know an "answer" exists, we don't know whether the vial picker can accurately pick the appropriate vial based on the information available.

The idea of subjective truth (or subjective reality) doesn't rely solely on the claim that reality doesn't exist; more generally, it is based on the idea that there may be cases where a human cannot derive what is real even where there is some answer. If we cannot derive that reality, the existence of that reality must also be questioned. We of course don't have to worry about these subtleties if the examples we use assume an answer to the issue exists.

The meaning of this is that rationality in my mind is helpful only to the extent (i) an answer exists and (ii) it can be derived. If the answers to (i) and (ii) are yes, rationality sounds great. If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we're going about things in the best way. In such a world, there will be great uncertainty as to the appropriate human course of action.

This is why I'm asking why you are confident the answer to (i) is yes for all issues. You're describing a world that provides a level of certainty such that the rationality model works in all cases -- I'm asking why you know that amount of certainty exists in the world -- its convenience is precisely what makes its universal application suspect. As noted in my answer to MugaSofer, perhaps your position is based on assumption/faith without substantiation, which I'm comfortable with as a plausible answer, but I'm not sure that is the basis you are using for the conclusion. (For the record, my personal belief is that any sort of theory or basis for going about our lives requires some type of faith/assumptions, because we cannot have 100% certainty.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-14T19:26:29.736Z · LW(p) · GW(p)

rationality in my mind is helpful only to the extent (i) an answer exists and (ii) it can be derived.

Or at least approximated. Yes.

If the answers to (i) and (ii) are yes, rationality sounds great.

Lovely.

If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we're going about things in the best way

I would say, rather, that it has no purpose at all in the context of that question. Having a false belief is not a useful purpose.

This is why I'm asking why you are confident the answer to (i) is yes for all issues.

And, as I've said before, I agree that there exist questions without answers, and questions whose answers are necessarily beyond the scope of human knowledge, and I agree that rationality doesn't provide much value in engaging with those questions... though it's no worse than any approach I know of, either.

You're describing a world that provides a level of certainty such that the rationality model works in all cases

As above, I submit that in all cases the approach I describe either works better than (if there are answers, which there often are) or as well (if not) as any other approach I know of.
And, as I've said before, if you have a better approach to propose, propose it!

I'm asking why you know that amount of certainty exists in the world

I don't know that. But I have to make decisions anyway, so I make them using the best approach I know.
If you think I should do something different, tell me what you think I should do.

OTOH, if all you're saying is that my approach might be wrong, then I agree with you completely, but so what? My choice is still between using the best approach I know of, or using some other approach, and given that choice I should still use the best approach I know of. And so should you.

for the record, my personal belief is that [..] we cannot have 100% certainty

For the record, that's also the consensus position here.

The interesting question is, given that we don't have 100% certainty, what do I do now?

comment by Peterdjones · 2013-01-09T14:57:48.011Z · LW(p) · GW(p)

Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world?

Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases

Inasmuch as subjectivism is a form of relativism, those comments seem to contradict each other.

Replies from: non-expert
comment by non-expert · 2013-01-10T06:45:44.585Z · LW(p) · GW(p)

Perspectivism provides that all truth is subjective, but in practice this characterization has no relevance to the extent there is agreement on any particular truth. For example, "Murder is wrong," even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.

Replies from: MugaSofer, Peterdjones
comment by MugaSofer · 2013-01-10T10:37:35.289Z · LW(p) · GW(p)

Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there's a sizable minority here who won't.

Replies from: non-expert
comment by non-expert · 2013-01-14T08:01:43.161Z · LW(p) · GW(p)

What do you disagree with? That "truth is relative" applies only to moral questions? Or that it applies to more than moral questions?

If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read...even EY :)

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:38:56.447Z · LW(p) · GW(p)

My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective".

Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise - see e.g. Nazis.

EDIT: Regarding EY writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".

Replies from: non-expert, brianmts
comment by non-expert · 2013-01-14T18:04:03.349Z · LW(p) · GW(p)

We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective"

I don't dispute the possibility that your conclusion may be correct; I'm wondering about the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation -- I'm comfortable with that as the reason, but I'm wondering if you are, or if you have a different basis for the position.

In my view, moral truths may NOT be relative, but I have no basis on which to know that, so I've chosen to operate as if they are relative because (i) if moral truths exist but I don't know what they are, I'm in the same position as if they don't exist/are relative, and (ii) moral truths may not exist. This doesn't mean you don't use morality in your life; it's just that you need to have a belief, without substantiation, that the morals you subscribe to conform with universal morals, if they exist.

OK, I'll try to search for those EY writings, thanks.

comment by brianmts · 2013-05-28T18:37:57.679Z · LW(p) · GW(p)
Replies from: MugaSofer
comment by MugaSofer · 2013-05-29T10:53:55.401Z · LW(p) · GW(p)

I, ah ... I'm not seeing anything here. Have you accidentally posted just a space or something?

comment by Peterdjones · 2013-01-10T12:50:27.493Z · LW(p) · GW(p)

Thanks for the clarification.

comment by MugaSofer · 2013-01-10T10:26:53.780Z · LW(p) · GW(p)

If your point instead is that probabilities will result in the right answer more often than not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance -- this is simply not practical in many situations precisely because the world is so complex.

Indeed. One of the purposes of this site is to help people become more rational - closer to a mathematically perfect reasoner - in everyday life. In math problems, however - and every real problem can, eventually, be reduced to a math problem - we can always make the right choice (unless we make a mistake with the math, which does happen).

I think I'm the only one on LessWrong who finds EY's writing maddening -- mostly the style -- I keep screaming to myself, "get to the point!" -- as noted, perhaps it's just me.

Unfortunately for you, most of the basic introductory-level stuff - and much of the really good stuff generally - is by him. So I'm guessing there's a certain selection effect for people who enjoy/tolerate his style of writing.

His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.

I'm still not sure how truth could be "relative" - could you perhaps expand on what you mean by that? - although obviously it can be obscured by biases and simple lack of data. In addition, some questions may actually have no answer, because people are using different meanings for the same word or because the question itself is contradictory (how many sides does a square triangle have?).

EDIT:

In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.

A lot of people here - myself included - practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
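One low-tech version of that practice, as a rough sketch (the predictions below are invented for illustration): log each prediction with a probability, then compare stated confidence with observed frequency in each bucket.

```python
from collections import defaultdict

# (stated probability, did it actually happen?) -- made-up records for illustration.
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True),
    (0.2, False), (0.2, False), (0.2, True),
]

# Group predictions by stated probability and compare with observed frequency.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[round(p, 1)].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {observed:.0%} over {len(outcomes)} predictions")
```

A well-calibrated predictor's 90% bucket should come true about 90% of the time, and so on down the list.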

comment by alanf · 2013-12-13T08:52:46.487Z · LW(p) · GW(p)

Science is not based on faith, nor on anything else. Scientific knowledge is created by conjecture and criticism. See Chapter I of "Realism and the Aim of Science" by Karl Popper.

comment by Adam Zerner (adamzerner) · 2015-01-17T21:45:21.204Z · LW(p) · GW(p)

I came across a good example of this. I recently graduated from a coding bootcamp and am looking for jobs. I applied to a selective company and was declined. They said, "unfortunately we won't be able to move forward with your candidacy at this time". They didn't say anything about the actual reason why I was rejected.

(paraphrased conversation with my friend)

  • Me: I hate when people sugarcoat. I wish they just said, "you don't seem as smart as the other candidates".
  • Him: It isn't necessarily true that they don't think you're as smart. Maybe it's for some other reason. Like maybe it's because you're in NY and they're looking for people in SF.
  • Me: They asked if I was able to relocate to SF, and I said "yes, I want to relocate to SF".
  • Him: Maybe they thought that you were smart, but just that it wasn't the right fit.
  • Me: The position is for a software developer intern. I just graduated from a coding bootcamp. They use JavaScript-based technologies. I learned the same/similar technologies. They're an education company. I'm very interested in education. They want unconventional and ambitious people. I'm definitely unconventional and ambitious.
  • Him: ...
  • Me: So what do you think the reason is for why they rejected me?
  • Him: I don't know, they didn't tell you so I can't say.
Replies from: Morendil
comment by Morendil · 2015-01-17T22:08:30.410Z · LW(p) · GW(p)

Is there any reason you couldn't email back saying something along the lines of "I'd appreciate your pointing out what specific weaknesses made you rule out my application, so that I can improve to become a stronger candidate for later or for other similar companies, and possibly so that I can send candidates your way that better fit the profile?"

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-01-17T22:21:32.126Z · LW(p) · GW(p)

I figured that they're really busy and don't have time to address that. Like if they did have time, I figure that they would have addressed it in the rejection email. Plus, I feel pretty confident that it's because they don't think I'm as smart as the other candidates.

But you're the second person to recommend this, so perhaps I'm wrong in my assumptions. So I'm going to send them an email doing what you say.

comment by Bound_up · 2015-03-16T17:35:47.140Z · LW(p) · GW(p)

My favorite part of this post, directly after reading it, was the highlighting of the apparent contradiction between the faithist's pride in their own faith and the condemnation implicit in their accusation that science relies on faith.

But I noticed I didn't feel I totally understood the dynamics in play in such a mind, and decided to think about it over pasta.

My tentative conclusion:

This is not, I think, a case of bare-faced irrationality per se, like asking "What would you do with immortality?" while also holding "I have an immortal soul."

The condemnation in the faithist's accusation that science uses faith is not made in a vacuum, as in "Because your system employs faith, your system is bad," but rather in the context of a presumed and anticipated denial of any utility of faith, or of any employment of it in the accused system. They anticipate that scientists will deny their use of faith, and are implicitly accusing them of hypocrisy.

I feel there's still something missing, but I think this is whiter, at least.

EDIT: This was apparently nothing new, as demonstrated by the following, retrieved from "No one can exempt you from rationality's laws."

"For example, one finds religious people defending their beliefs by saying, "Well, you can't justify your belief in science!" In other words, "How dare you criticize me for having unjustified beliefs, you hypocrite! You're doing it too!" - EY

comment by ahbwramc · 2015-06-12T22:08:30.669Z · LW(p) · GW(p)

When I first read this post back in ~2011, I remember being reminded of a specific scene in a book I had read that talked about this error and even gave it the same name. I intended to find the quote and post it here, but never bothered. Anyway, seeing this post on the front page again prompted me to finally pull out the book and look up the quote (mostly to test whether my memory of the scene actually matched what was written).

So, from Star Wars X-Wing: Isard's Revenge, by Michael A Stackpole (page 149 of the paperback edition):

Tycho stood. "It's called the gray fallacy. One person says white, another says black, and outside observers assume gray is the truth. The assumption of gray is sloppy, lazy thinking. The fact that one person takes a position that is diametrically opposed to the truth does not then skew reality so the truth is no longer the truth. The truth is still the truth."

So maybe not exactly the same sentiment as this post, but not a bad rationality lesson for a Star Wars book, really.

(for those interested: my memory of the scene was pretty much accurate, although it occurred much later in the book than I had thought)

comment by benwhalley · 2021-05-04T08:02:22.326Z · LW(p) · GW(p)

I think one important problem, elided here, is that when problems are highly multidimensional, shades of grey become harder to distinguish. At the extremes, yes, we can say that Gandhi and Stalin are imperfect in quantitatively different amounts. But most of the important life decisions we make can be evaluated on so many different dimensions of value that discriminating and integrating across them feels intractable. Even 3 or 4 dimensions make the problem so effortful (and perhaps impossible if the dimensions are not commensurable) that falling back on intuition becomes the only pragmatic solution.

comment by MichaelDickens · 2022-12-02T16:40:21.686Z · LW(p) · GW(p)

A related pattern I noticed recently:

  • Alice asks, "What effect does X have on Y?"
  • Bob, an expert in Y, replies, "There are many variables that impact Y, and you can't reduce it to simply X."

Alice asked for a one-variable model with limited but positive predictive power, and Bob replied with a zero-variable model with no predictive power whatsoever.
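To put rough numbers on that (the data below are invented purely for illustration): a one-variable model of Y can capture a real share of the variance even when many other factors matter, whereas the zero-variable reply predicts nothing beyond the average.

```python
import random

random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
# Y depends on X plus a lot of noise standing in for "the many other variables."
ys = [2.0 * x + random.gauss(0, 8) for x in xs]

mean_y = sum(ys) / len(ys)

# Zero-variable model ("it's complicated"): always predict the mean of Y.
sse_zero = sum((y - mean_y) ** 2 for y in ys)

# One-variable model: ordinary least-squares fit of Y on X.
mean_x = sum(xs) / len(xs)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
sse_one = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

print(f"R^2 of the one-variable model: {1 - sse_one / sse_zero:.2f}")
```

Limited predictive power, but far better than none, which is all Alice was asking for.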

comment by Pacificmaelstrom · 2023-09-08T18:04:38.781Z · LW(p) · GW(p)

Necro but maybe I can add something to the debate....

A problem I see is that there are common cases where it is rational to be irrational, for example if being rational causes you emotional distress due to circumstances beyond your control.

And this is a big problem if one's will to be "rational" is at root based on an emotional will to be "less wrong" for the purpose of improving internal feelings of one's own value.

Because if that is the naked honest goal, then that rationalism is Hedonism by yet another name.

But realizing that might be destabilizing to the rationalist since, rationally, pure maximization of a social utilitarian value function is not a rational way to maximize a personal hedonistic value function no matter how hard one may try to contrive it....

So, armed with intrepid rationality, they may come to see the shades-of-grey rhetoric in which Stalin is darker than Gandhi on supposed utilitarian grounds as a bad joke... and holding to utilitarian morality as an irrational way to cope with the fact that power is everything and they have little of it.

To avoid this, one would need to find a reason to be a rationalist other than one's hedonistic value function. But the hedonistic value function is biological and innate, so the task is as impossible as winning the lottery.

But people do still win the lottery.

Would that suggest the difference between Stalin and Gandhi was little more than Gandhi's bad luck? Because who really wouldn't rather be in Stalin's circumstances? (while of course believing they would avoid his evils and do good instead).

An uncomfortable thought... but then we're always free to be irrational and just ignore what makes us uncomfortable...

comment by MacroMint · 2023-09-18T19:07:50.700Z · LW(p) · GW(p)

“I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.”

Another problem with this is that "economic theory" is not monolithic. There are different schools of thought within economics, and applying economic theory No. 1 from X school might imply completely different things than applying it from Y school. Economics is a fractured, competitive field of concepts, to say the least. Go listen to an argument between Neoclassical economists and Post-Keynesian economists and see what they agree on.