Incremental Progress and the Valley

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T16:42:38.405Z · LW · GW · Legacy · 113 comments

Yesterday I said:  "Rationality is systematized winning [LW · GW]"

"But," you protest, "the reasonable person doesn't always win!"

What do you mean by this?  Do you mean that every week or two, someone who bought a lottery ticket with negative expected value, wins the lottery and becomes much richer than you?  That is not a systematic loss; it is selective reporting by the media.  From a statistical standpoint, lottery winners don't exist—you would never encounter one in your lifetime, if it weren't for the selective reporting.

Even perfectly rational agents can lose.  They just can't know in advance that they'll lose.  They can't expect to underperform any other performable strategy, or they would simply perform it.

"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would.  I'm talking about how religious people are happier—"

Ah.  Well, here's the thing:  An incremental step in the direction of rationality, if the result is still irrational in other ways, does not have to yield incrementally more winning.

The optimality theorems that we have for probability theory and decision theory, are for perfect probability theory and decision theory.  There is no companion theorem which says that, starting from some flawed initial form, every incremental modification of the algorithm that takes the structure closer to the ideal, must yield an incremental improvement in performance.  This has not yet been proven, because it is not, in fact, true.
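
As a toy illustration of how this can happen (a sketch only; the payoffs, the biases, and the decision rule below are arbitrary assumptions chosen to make the point concrete): an agent with two roughly offsetting flaws, overconfidence about its chances and an overcautious decision threshold, can end up worse off when only one of the two is repaired.

```python
# Toy simulation: agents decide whether to accept gambles (win +100 with
# probability p, lose -50 otherwise).  One bias inflates the estimated p,
# the other demands an extra margin of expected value before acting.
import random

random.seed(0)

WIN, LOSS = 100, -50          # payoffs for winning / losing a gamble
OVERCONFIDENCE = 0.20         # how much the overconfident agent inflates p
RISK_THRESHOLD = 20           # extra margin the risk-averse agent demands

def realized_value(p):
    """Sample the outcome of taking a gamble that wins with probability p."""
    return WIN if random.random() < p else LOSS

def plays(p, overconfident, risk_averse):
    """Take the gamble if the (possibly inflated) expected value beats the (possibly inflated) threshold."""
    est_p = min(1.0, p + OVERCONFIDENCE) if overconfident else p
    threshold = RISK_THRESHOLD if risk_averse else 0.0
    return est_p * WIN + (1 - est_p) * LOSS > threshold

agents = {
    "both biases":          dict(overconfident=True,  risk_averse=True),
    "fixed overconfidence": dict(overconfident=False, risk_averse=True),
    "fixed risk aversion":  dict(overconfident=True,  risk_averse=False),
    "fully rational":       dict(overconfident=False, risk_averse=False),
}

totals = {name: 0.0 for name in agents}
N = 200_000
for _ in range(N):
    p = random.random()              # a random gamble is offered
    outcome = realized_value(p)      # the same outcome is shown to every agent
    for name, biases in agents.items():
        if plays(p, **biases):
            totals[name] += outcome

for name, total in totals.items():
    print(f"{name:22s} average winnings per offer: {total / N:6.2f}")

# Typical result: "fully rational" does best, "both biases" comes second, and
# each half-fixed agent does worse than the agent that kept both biases.
```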

"So," you say, "what point is there then in striving to be more rational?  We won't reach the perfect ideal.  So we have no guarantee that our steps forward are helping."

You have no guarantee that a step backward will help you win, either.  Guarantees don't exist in the world of flesh; but contrary to popular misconceptions, judgment under uncertainty is what rationality is all about.

"But we have several cases where, based on either vaguely plausible-sounding reasoning, or survey data, it looks like an incremental step forward in rationality is going to make us worse off.  If it's really all about winning—if you have something to protect more important than any ritual of cognition—then why take that step?"

Ah, and now we come to the meat of it.

I can't necessarily answer for everyone, but...

My first reason is that, on a professional basis, I deal with deeply confused problems that make huge demands on precision of thought.  One small mistake can lead you astray for years, and there are worse penalties waiting in the wings.  An unimproved level of performance isn't enough; my choice is to try to do better, or give up and go home.

"But that's just you.  Not all of us lead that kind of life.  What if you're just trying some ordinary human task like an Internet startup?"

My second reason is that I am trying to push some aspects of my art further than I have seen done.  I don't know where these improvements lead.  The loss of failing to take a step forward is not that one step, it is all the other steps forward you could have taken, beyond that point.  Robin Hanson has a saying:  The problem with slipping on the stairs is not falling the height of the first step, it is that falling one step leads to falling another step.  In the same way, refusing to climb one step up forfeits not the height of that step but the height of the staircase.

"But again—that's just you.  Not all of us are trying to push the art into uncharted territory."

My third reason is that once I realize I have been deceived, I can't just shut my eyes and pretend I haven't seen it.  I have already taken that step forward; what use to deny it to myself?  I couldn't believe in God if I tried, any more than I could believe the sky above me was green while looking straight at it.  If you know everything you need to know in order to know that you are better off deceiving yourself, it's much too late to deceive yourself.

"But that realization is unusual; other people have an easier time of doublethink because they don't realize it's impossibleYou go around trying to actively sponsor the collapse of doublethink.  You, from a higher vantage point, may know enough to expect that this will make them unhappier.  So is this out of a sadistic desire to hurt your readers, or what?"

Then I finally reply that my experience so far—even in this realm of merely human possibility—does seem to indicate that, once you sort yourself out a bit and you aren't doing quite so many other things wrong, striving for more rationality actually will make you better off.  The long road leads out of the valley and higher than before, even in the human lands.

The more I know about some particular facet of the Art, the more I can see this is so.  As I've previously remarked, my essays may be unreflective of what a true martial art of rationality would be like, because I have only focused on answering confusing questions—not fighting akrasia, coordinating groups, or being happy.  In the field of answering confusing questions—the area where I have most intensely practiced the Art—it now seems massively obvious that anyone who thought they were better off "staying optimistic about solving the problem" would get stomped into the ground.  By a casual student.

When it comes to keeping motivated, or being happy, I can't guarantee that someone who loses their illusions will be better off—because my knowledge of these facets of rationality is still crude.  If these parts of the Art have been developed systematically, I do not know of it.  But even here I have gone to some considerable pains to dispel half-rational half-mistaken ideas that could get in a beginner's way, like the idea that rationality opposes feeling, or the idea that rationality opposes value, or the idea that sophisticated thinkers should be angsty and cynical.

And if, as I hope, someone goes on to develop the art of fighting akrasia or achieving mental well-being as thoroughly as I have developed the art of answering impossible questions, I do fully expect that those who wrap themselves in their illusions will not begin to compete.  Meanwhile—others may do better than I, if happiness is their dearest desire, for I myself have invested little effort here.

I find it hard to believe that the optimally motivated individual, the strongest entrepreneur a human being can become, is still wrapped up in a blanket of comforting overconfidence.  I think they've probably thrown that blanket out the window and organized their mind a little differently.  I find it hard to believe that the happiest we can possibly live, even in the realms of human possibility, involves a tiny awareness lurking in the corner of your mind that it's all a lie.  I'd rather stake my hopes on neurofeedback or Zen meditation, though I've tried neither.

But it cannot be denied that this is a very real issue in very real life.  Consider this pair of comments from Less Wrong:

I'll be honest—my life has taken a sharp downturn since I deconverted. My theist girlfriend, with whom I was very much in love, couldn't deal with this change in me, and after six months of painful vacillation, she left me for a co-worker. That was another six months ago, and I have been heartbroken, miserable, unfocused, and extremely ineffective since.

Perhaps this is an example of the valley of bad rationality of which PhilGoetz spoke, but I still hold my current situation higher in my preference ranking than happiness with false beliefs.

And:

My empathies: that happened to me about 6 years ago (though thankfully without as much visible vacillation).

My sister, who had some Cognitive Behaviour Therapy training, reminded me that relationships are forming and breaking all the time, and given I wasn't unattractive and hadn't retreated into monastic seclusion, it wasn't rational to think I'd be alone for the rest of my life (she turned out to be right). That was helpful at the times when my feelings hadn't completely got the better of me.

So—in practice, in real life, in sober fact—those first steps can, in fact, be painful.  And then things can, in fact, get better.  And there is, in fact, no guarantee that you'll end up higher than before.  Even if in principle the path must go further, there is no guarantee that any given person will get that far.

If you don't prefer truth to happiness with false beliefs...

Well... and if you are not doing anything especially precarious or confusing... and if you are not buying lottery tickets... and if you're already signed up for cryonics, a sudden ultra-high-stakes confusing acid test of rationality that illustrates the Black Swan quality of trying to bet on ignorance in ignorance...

Then it's not guaranteed that taking all the incremental steps toward rationality that you can find, will leave you better off.  But the vaguely plausible-sounding arguments against losing your illusions, generally do consider just one single step, without postulating any further steps, without suggesting any attempt to regain everything that was lost and go it one better.  Even the surveys are comparing the average religious person to the average atheist, not the most advanced theologians to the most advanced rationalists.

But if you don't care about the truth—and you have nothing to protect—and you're not attracted to the thought of pushing your art as far as it can go—and your current life seems to be going fine—and you have a sense that your mental well-being depends on illusions you'd rather not think about—

Then you're probably not reading this.  But if you are, then, I guess... well... (a) sign up for cryonics, and then (b) stop reading Less Wrong before your illusions collapse!  RUN AWAY!

113 comments

Comments sorted by top scores.

comment by e_j · 2009-04-04T18:20:51.173Z · LW(p) · GW(p)

oh, shi*.. RUNS AWAY

comment by ChrisHibbert · 2009-04-04T19:53:14.154Z · LW(p) · GW(p)

"No," you say, "I'm talking about how startup founders strike it rich by believing in themselves and their ideas more strongly than any reasonable person would. ..."

It's important to realize that this is another myth perpetuated by the media and our ignorance of the statistics. Most startups fail; I think the statistics are that 80% die in the first 5 years. But the ones that get written up in glowing articles are the ones that succeeded. Of course all those founders who struck it rich believed strongly in their ideas, but so did many of those that failed. That irrational belief may be a crucial ingredient for success, but it doesn't supply a guarantee. Most of the people who held that irrational belief worked for businesses that failed--but they didn't get their name in the paper, so they're relatively invisible.
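
To put rough numbers on that selection effect (the figures below are illustrative assumptions, not real statistics):

```python
# Back-of-the-envelope Bayes: even if nearly every successful founder believed
# strongly in their idea, that observation is weak evidence, because nearly
# every failed founder believed strongly too.  All numbers are assumed.
p_success = 0.20                  # assumed base rate of startup success
p_belief_given_success = 0.95     # assumed: believers among successes
p_belief_given_failure = 0.90     # assumed: believers among failures

p_belief = (p_belief_given_success * p_success
            + p_belief_given_failure * (1 - p_success))
p_success_given_belief = p_belief_given_success * p_success / p_belief

print(f"P(success | strong belief) = {p_success_given_belief:.3f}")
# ~0.21, barely above the 0.20 base rate.  Glowing articles sample only from
# the "success" column, which is what makes the belief look like a cause.
```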

Replies from: AlexU, dclayh, infotropism
comment by AlexU · 2009-04-04T20:54:08.060Z · LW(p) · GW(p)

Still, if everyone who does succeed has an irrational belief in their own success, then it's not wrong to conclude that such a belief is probably a prerequisite (though certainly not a "guarantee") for success.

Replies from: pwno, rwallace, MBlume
comment by pwno · 2009-04-04T21:53:46.001Z · LW(p) · GW(p)

Maybe the reason why so many startups fail is that people are prone to have irrational beliefs about business ideas. This causes many entrepreneurs to pursue bad investments or irrational business practices.

More relevant to the discussion topic, consider these questions:

Some beliefs have the tendency to be self-fulfilling prophecies, but is it irrational to have these beliefs? Is self-deception necessary for the "self-fulfilling" property to work? Can we, say, have a positive outlook on life while having rational expectations at the same time?

Replies from: loqi, Liron, AlexU
comment by loqi · 2009-04-04T22:04:40.529Z · LW(p) · GW(p)

Surely having a positive outlook on life doesn't require any specific belief.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-04-04T22:09:51.022Z · LW(p) · GW(p)

Except that life is good, that is.

Replies from: SoullessAutomaton, loqi
comment by SoullessAutomaton · 2009-04-04T23:25:05.041Z · LW(p) · GW(p)

No. That life can be better is sufficient.

Replies from: anonym
comment by anonym · 2009-04-05T00:03:05.989Z · LW(p) · GW(p)

To think that your individual life can be better is a way of thinking that life in general is good.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-04-05T00:45:30.822Z · LW(p) · GW(p)

By life in general do you mean the lives of humans in general, or just your own life, extended in time?

Replies from: anonym
comment by anonym · 2009-04-05T01:37:01.585Z · LW(p) · GW(p)

The former. I think that "life [in general] is good" is just a way of explaining what is meant by "having a positive outlook on life", while "[my] life can be better" is a particular belief that influences whether or not you have the positive outlook.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-04-05T02:19:31.232Z · LW(p) · GW(p)

"Influences" is vague, but I take it you mean:

[my life can be better] produces [positive outlook] and [positive outlook] is another way of saying [life (ie the lives of humans in general) is good]

or: "If I believe that my life can be better, then I believe that life-of-humans-in-general is good."

I'm not implying that you actually believe this, just that this is what you were saying "positive outlook" meant. Am I right? From this perspective a positive outlook seems like a non-sequitur, since the future quality of my life may not provide much information about the lives of other people. Not to mention the fact that some people have good lives with bright futures and some have bad, hopeless ones, so the notion of life-in-general seems meaningless. From this I conclude that I do not have a positive outlook.

Replies from: anonym
comment by anonym · 2009-04-06T04:35:25.532Z · LW(p) · GW(p)

I agree with the first interpretation if you replace "produces" with "presupposes" or "is the sort of thing you believe if you have the feeling of".

I also didn't mean that [positive outlook] involves people at all: I think it's more of a feeling about existence in general. It's true like you say that there is so much variety from one person to another and over time, and that [life in general] as a concept doesn't make much sense when you really think about it, but that doesn't stop us from having a feeling about it. We know that it's silly to talk about whether chocolate ice cream tastes good in general, and yet if you have always loved chocolate ice cream, there is a strong feeling that the goodness is an attribute of the ice cream itself rather than a description of your preferences, which is what you believe when you stop to think about it. The feeling for chocolate ice cream is to the feeling of [positive outlook] as the thought of "I love chocolate ice cream" is to the thought of "my life can be better" (can be better as in "has no upper bound" rather than "has nowhere to go but up").

I feel like I'm expressing myself so poorly that I should just stop before I confuse even more.

comment by loqi · 2009-04-04T22:17:35.021Z · LW(p) · GW(p)

Hmm, good point. But is that a specific belief, or a family of beliefs parameterized over values of "good"? Still, it's a subjective belief constraint, if nothing else.

Replies from: anonym
comment by anonym · 2009-04-05T00:25:36.282Z · LW(p) · GW(p)

I think it's an attitude, which is a set of dispositions to think and believe (and thus act) in certain ways. The disposition can be represented internally as a belief, but it's actually something more fundamental. The belief corresponding to an attitude is a representation rather than the thing itself.

To illustrate what I mean, consider people suffering from depression. Their primary problem in cognitive terms is not that they have particular dysfunctional beliefs (my life sucks, I'm a failure, etc.), but that they have a dysfunctional attitude that predisposes them to act in self-defeating ways and adopt particular self-defeating beliefs. They have an attitude that manifests as a strong predisposition to filter the positive, blow the negative out of proportion, and interpret every event in life in a way that would actually be cause for unhappiness if the interpretation were accurate.

Replies from: NancyLebovitz, loqi
comment by NancyLebovitz · 2010-04-07T11:57:37.057Z · LW(p) · GW(p)

Julian Simon's Good Mood is a counterexample. He was seriously considering suicide once his children were grown-- he had no pleasure in life and a high background level of emotional pain.

Still, he was running his life quite well, and got over his depression when he finally had everything squared away enough that he could spend a little time thinking about it. He concluded that depression is caused by making negative comparisons about one's situation, and found a bunch of strategies (lower standards, improve situation, find something more important than making comparisons, etc.) for not making them.

The link is to the whole text of the book.

comment by loqi · 2009-04-05T00:42:19.068Z · LW(p) · GW(p)

The belief corresponding to an attitude is a representation rather than the thing itself.

Oh, indeed. Well put.

comment by Liron · 2009-04-05T00:39:32.846Z · LW(p) · GW(p)

Regarding self-fulfilling beliefs:

Yes, having a belief can have the side effect of changing your behavior independently of how you would consciously change your behavior in light of your beliefs.

When you have an accurate belief, and the side effects of believing it affect your behavior in a way you consciously believe is positive, then take advantage of it! If you can get a boost toward your goals without making a conscious effort, then by all means cut out conscious effort as the middleman in the causal chain between your beliefs and your goal state.

But if you spy a shortcut between an inaccurate belief state and your current goal, don't follow the causal chain from the beginning, but meet it in the middle. Strive to shape your behavior according to your prediction of its effect, but leave your innermost beliefs to entangle with reality. They are shaped too much by non-entanglement processes as it is.

comment by AlexU · 2009-04-04T22:53:25.507Z · LW(p) · GW(p)

Good point. It might be that there are very few business ideas that actually are rational to have confidence in -- otherwise, someone probably would have implemented them already. In other words, most business ideas, even the ones that turn out to be good ones, might be inherently bad gambles a priori.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-04T23:28:00.559Z · LW(p) · GW(p)

It's also possible that business ideas aren't actually all that important, and that other factors dominate the success of a start-up. I believe this is essentially Paul Graham's position, for what that matters.

comment by rwallace · 2009-04-05T08:41:21.384Z · LW(p) · GW(p)

I think the best approach is a slightly more sophisticated one: commit to the belief that there is a way to succeed and you will find it - but not necessarily that you have already found it.

comment by MBlume · 2009-04-05T03:42:43.710Z · LW(p) · GW(p)

were it a guarantee, it would not be irrational

comment by dclayh · 2009-04-04T21:38:13.116Z · LW(p) · GW(p)

But is it also true that 80% of entrepreneurs fail? I was under the impression that yes, 80% of startup companies fail, but the average entrepreneur might start five or more companies over his career, so that average success rate per person is much higher than 20%.
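
A quick calculation under those assumptions (the 80% per-startup failure rate is taken as given, and the five attempts are treated as independent, which is surely too generous):

```python
# Hypothetical figures from the comment above: 80% of startups fail and an
# entrepreneur makes five independent attempts over a career.
p_fail_per_startup = 0.80
attempts = 5
p_at_least_one_success = 1 - p_fail_per_startup ** attempts
print(f"P(at least one success in {attempts} tries) = {p_at_least_one_success:.2f}")  # 0.67
```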

Replies from: ChrisHibbert
comment by ChrisHibbert · 2009-04-05T17:57:26.614Z · LW(p) · GW(p)

Interesting question. It would be useful to know what the real statistics are. We certainly know that some successful entrepreneurs start up several companies, and that some of them have multiple successes. But it's harder to find reports about serial failures. I think I've read that (successful) VCs don't count prior failures against CEOs they're considering funding, and sometimes the people they place in funded companies are experienced with start-ups, but haven't had any big hits yet.

But for most people, I think the real question is what type of start-up to join, not whether to start one of your own. The vast majority of people at every successful company weren't founders, and weren't even early joiners. I think for relatively young workers, joining a succession of promising start-ups may be the right way to spread your chips around. Stay for 3-5 years (and work at it with all your heart and mind) and after that much time, if it doesn't look like blistering success is just around the corner, move on to another opportunity. As you get older (and don't have a big win under your belt) it probably makes more sense to take a lower chance on the big win in order to get more current income.

comment by infotropism · 2009-04-04T21:37:06.052Z · LW(p) · GW(p)

Agreed. History is written by the victors; just as evolution's path is paved with the untold number of those which died for lack of fitness or simply luck.

That which is successful and remains eventually isn't representative of all that has been attempted. Especially when those attempts have been made with little planning, knowledge, method, or even, when they've simply been made at random or based on beliefs that weren't entangled with reality. Holds true of any optimization process.

The less intelligent and "rational" the process, the more trashed byproducts to be quickly forgotten.

comment by PhilGoetz · 2009-04-05T05:37:15.901Z · LW(p) · GW(p)

Some historical context:

16th through 19th-century rationalists advocated views something like the views Eliezer is advocating. This view was eventually reflected in the art of the day, as exemplified by Bach and, later, by the strict formalisms of classical music.

In the 19th century, romanticism was an artistic reaction against rationalism. We're talking Goethe, Beethoven, Byron, and Blake. In painting, it was also a reaction against photography, searching for a justification for continuing to paint.

During the romantic period, Nietzsche used romantic artistic ideas to criticize rationality, by saying that life is worth living when we commit to values, and rationality undermines our commitment to our values. He offered as an alternative the culture/value creator, who leads his culture to greatness. This greatness, he says, can only be attained if we reject rationalism. There is some happiness theory in there as well, including the idea that war isn't justified by values, war justifies values. This seems to be a riff on the idea that the striving and drama is itself what we value.

In the 20th century, Max Weber rephrased it this way: Societies are legitimized by tradition, reason, or charisma. Religious societies are legitimized by tradition. The Enlightenment introduced legitimization by reason. Nietzsche argued for legitimization by charisma.

By then, most intellectuals the world over sided with Nietzsche. (I use "intellectuals" in the standard way, which marginalizes the physicists, mathematicians, and other hard scientists whom many of us consider to be the world's true intellectuals.)

Then Hitler and Lenin-Stalin played out legitimization by charisma. Intellectuals the world over were repulsed. It didn't seem so noble in real life. They rejected Nietzsche's conclusions, but without finding any problems with Nietzsche's arguments.

Philosophy since then has been boring, probably because philosophers can't get worked up about any position anymore. Today, most intellectuals reject tradition, reason, and charisma for legitimizing society; and no one has come up with anything better.

To push the true rationalist agenda, someone needs to find the errors in Nietzsche.

This is not what I see happening. When I hear people defending the preference for truth over Happiness or utility, it sounds like they're trying to create a monstrous hybrid of the Enlightenment and Nietzsche by rooting the entire structure of rationalism in an act of charismatic Nietzschean value-creation. That's not true utilitarian rationality. It looks like rationality above the surface; but the roots are Nietzschean.

Replies from: Eliezer_Yudkowsky, Multiheaded
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:19:06.385Z · LW(p) · GW(p)

This should, perhaps, have been its own post, because I see no relation whatsoever to the original post.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T15:36:27.543Z · LW(p) · GW(p)

The initial point of contact is when you said

If you don't prefer truth to happiness with false beliefs...

followed by a number of people in the comments disagreeing with me when I said this didn't make sense to me.

That, as well as a lot of things said on LW by various people including you, sounds to me like elevating truth above values.

But I'll delete it and make it its own post if you like. I also thought that maybe I should've made it a separate post. It is a side issue.

Replies from: HughRistik
comment by HughRistik · 2009-04-05T19:13:17.106Z · LW(p) · GW(p)

I'd like to see it as its own post, illustrated with quotes from Nietzsche or quotes from those interpreting Nietzsche.

comment by Multiheaded · 2012-05-08T13:42:19.555Z · LW(p) · GW(p)

:applause:

comment by cousin_it · 2009-04-05T12:47:03.103Z · LW(p) · GW(p)

Oh no, more grandeur.

A rationalist can take a small concrete problem, reduce it to essentials, figure out a good strategy and follow it. No need to brainf*ck yourself and reevaluate your whole life - people have built bridges and discovered physical laws without it. For examples of what I want see Thomas Schelling's "Strategy of Conflict": no mystique, just clear mathematical analysis of many real-life problems. Starts out from toys, e.g. bargaining games and PD, and culminates in lots of useful tactics for nuclear deterrence that were actually adopted by the US military after the book's publication. How's that for "something to protect"?

I for one would be happy if you just wrote up, mathematically, your solution concept for Newcomb's and PD. Is it an extension of superrationality for asymmetric games, or something else entirely? If we slowly modify one player's payoffs in PD, at what precise moment do you stop cooperating?
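
One toy way to make that last question concrete (a sketch only, not a proposed solution concept and not necessarily the formalization intended here): slide the row player's reward for mutual cooperation downward and track where the game even stops satisfying the standard Prisoner's Dilemma ordering.

```python
# Parameterize an asymmetric PD by the row player's reward R for mutual
# cooperation, holding the other payoffs fixed.  Any decision theory has to
# say what it does at every point along this path.
def is_prisoners_dilemma(T, R, P, S):
    """Standard one-shot PD ordering for one player: temptation > reward > punishment > sucker."""
    return T > R > P > S

T, P, S = 5, 1, 0                            # temptation, punishment, sucker (fixed)
for R in [4.0, 3.0, 2.0, 1.5, 1.0, 0.5]:     # row player's reward for mutual cooperation
    label = "PD ordering holds" if is_prisoners_dilemma(T, R, P, S) else "no longer a PD for this player"
    print(f"R = {R}: {label}")

# Once R falls to P or below, mutual cooperation is not even jointly attractive
# for this player; the open question is what to do everywhere above that line.
```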

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:08:54.538Z · LW(p) · GW(p)

If you know what you want so clearly, why not write it and post it? Less Wrong is what you make it.

Replies from: cousin_it, PhilGoetz, Annoyance
comment by cousin_it · 2009-04-05T15:54:26.885Z · LW(p) · GW(p)

Done. Let's see what you make of it.

comment by PhilGoetz · 2009-04-05T15:46:31.542Z · LW(p) · GW(p)

He doesn't have your solution for Newcomb's and PD.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T16:07:36.293Z · LW(p) · GW(p)

Stop speaking for me.

Replies from: cousin_it, PhilGoetz
comment by cousin_it · 2009-04-05T16:13:12.244Z · LW(p) · GW(p)

He was speaking to you, his "he" referred to me.

Edit: no, I didn't downvote anyone, but sorry for causing the mess anyway. Who's going around here downvoting stuff without explanation?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-05T19:35:06.908Z · LW(p) · GW(p)

Someone with a lot of accounts.

comment by PhilGoetz · 2009-04-05T17:04:38.634Z · LW(p) · GW(p)

I'm not speaking for you. I'm speaking for him. He doesn't have your solution for Newcomb's and PD.

comment by Annoyance · 2009-04-05T17:42:51.088Z · LW(p) · GW(p)

It's what a lot of people make it... and some people have more power over it than others.

Part of rationality is recognizing that there are things we can control, and things we can't. Another part is learning to tell the difference.

comment by RobinHanson · 2009-04-05T03:34:06.045Z · LW(p) · GW(p)

When there is a conventional wisdom it usually pays for most people to become more rational just so they can better anticipate, assimilate, remember and use that conventional wisdom. But once your rationality becomes so strong that it leads you to often reject conventional wisdom, then you face a tougher tradeoff; there can be serious social costs from rejecting conventional wisdom.

comment by Wei Dai (Wei_Dai) · 2009-04-05T09:58:56.072Z · LW(p) · GW(p)

Things are actually a bit worse than this, because there is also no theorem that says there is only one valley, so there's no guarantee that even after you climb out of this valley, your next step won't cause you to go off a precipice.

BTW, there's a very similar issue in economics, which goes under the name of the Theory of the Second Best. Markets will allocate resources efficiently if they are perfectly competitive and complete, but there is no guarantee that any incremental progress towards that state, such as creating some markets that were previously missing, or making some markets more competitive, will improve social welfare.

Replies from: Eliezer_Yudkowsky, robzahra
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:19:58.824Z · LW(p) · GW(p)

I agree there's no guarantee in principle, but I can't recall ever running into a second valley in practice.

comment by robzahra · 2009-04-07T20:02:59.982Z · LW(p) · GW(p)

I can see at least one second valley instance in my own experience--someone born religious initially thinks a creator probably exists, then learns evidence against, and assigns fairly high probability of no creator. Later on, he becomes more rational and considers simulation arguments, and needs to re-adjust his estimate upward. (Bostrom I think was at ~1/3 probability that we are simulated, based on his Simulation Argument paper.) Am I interpreting second valley the same way you are, Wei?

comment by JulianMorrison · 2009-04-04T23:10:29.403Z · LW(p) · GW(p)

An incremental step can be a loss where you have two errors reversing each other. You have error A that causes suffering a and error B that causes anti-a. You cure B, and suddenly you experience a. The anti-rationalist says "quick, reinstate B". I say "no, work back from a to A and cure A".

Example: pessimists make better calibrated estimates but are worse off for happiness and health. IMO the pessimists are probably not accepting the reality they predict, they are railing against it, which is a variety of magical thinking.

Replies from: DanielLC
comment by DanielLC · 2013-04-11T06:41:55.742Z · LW(p) · GW(p)

Another example: getting rid of risk aversion without getting rid of overconfidence bias, or vice versa.

comment by HughRistik · 2009-04-05T04:08:38.277Z · LW(p) · GW(p)

Even perfectly rational agents can lose. They just can't know in advance that they'll lose. They can't expect to underperform any other performable strategy, or they would simply perform it.

I think your formulation in this post is the clearest, and I agree with it. In previous posts, you may have said things which confused your point, such as this:

Said I: "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

The strong interpretation of this quote is that if you lose, you weren't being rational. This may explain why so many people felt the urge to point out that rational people can lose. The weak interpretation is that if you lose, rather than protesting that you were rational, you should more closely scrutinize your thinking and whether it is really rational. Now it seems that the weak interpretation is what you intend.

Replies from: Kenny
comment by Kenny · 2009-04-12T17:40:45.793Z · LW(p) · GW(p)

Or – if you lose, you should learn why, if it's important to not lose again.

comment by kluge · 2009-04-04T19:04:28.543Z · LW(p) · GW(p)

But if you don't care about the truth - and you have nothing to protect - and you're not attracted to the thought of pushing your art as far as it can go - and your current life seems to be going fine - and you have a sense that your mental well-being depends on illusions you'd rather not think about -

..then it may already be too late, since the seed of doubt is already planted.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T23:48:39.974Z · LW(p) · GW(p)

I wish. People seem capable of sustaining themselves in this state for indefinite periods.

comment by Roko · 2009-04-04T19:58:12.695Z · LW(p) · GW(p)

Most people are not signed up for cryonics, so if you postulate that the "benefit" to an individual of cryonics is massive compared to the increment in quality of life that being irrationally comforted brings, then almost everyone ought to be epistemically rational.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T16:48:02.823Z · LW(p) · GW(p)

I don't know what's up with the italics here. It doesn't show like that in the editor or in the raw HTML. Copying to another application and repasting doesn't fix it, etc.

comment by Pascal Morimacil (pascal-morimacil) · 2020-07-27T18:07:00.983Z · LW(p) · GW(p)

Rationality does not guarantee results at the single human scale.

Making a decision that is statistically correct only works out in the long run, over a number of such decisions.

You can make a decision that was the correct decision given the information you had, and then it doesn't work out.

comment by Loren · 2009-04-05T18:41:25.907Z · LW(p) · GW(p)

This is an experiment with quoted text. Now is the time for all good men to come to the aid of their country, don't ya think?

  • This is item 1.
  • This is item 2.
  • This is item 3.

This is really important.

comment by CronoDAS · 2009-04-05T04:24:55.381Z · LW(p) · GW(p)

From a statistical standpoint, lottery winners don't exist - you would never encounter one in your lifetime, if it weren't for the selective reporting.

Well... one of my grandmothers' neighbors, whose son I played with as a child, did indeed win the lottery. (AFAIK, it was a relatively modest jackpot, but he did win!)

Also, re: cryonics: My current understanding is that being an organ donor is incompatible with cryonic preservation. Is this correct? (Myself, I think I'd rather be an organ donor...)

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov, rwallace
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:21:48.498Z · LW(p) · GW(p)

Well, yes, some of the modest jackpots are statistically almost possible, in the sense that on a large enough web forum, someone else's grandmother's neighbor will have won it. Just not your own grandmother's neighbor.

Sorry about your statistical anomalatude, CronoDAS - it had to happen to someone, just not me.

comment by Vladimir_Nesov · 2009-04-05T10:58:06.151Z · LW(p) · GW(p)

And now that you have selectively reported this fact, I know of CronoDAS the web forum buddy, whose grandmother's neighbor has won a modest jackpot!

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-05T11:07:45.150Z · LW(p) · GW(p)

What is this, six degrees of a lottery winner?

comment by rwallace · 2009-04-05T09:47:47.438Z · LW(p) · GW(p)

I would imagine it should be possible to freeze your brain and donate the rest of your organs?

Replies from: ChrisHibbert
comment by ChrisHibbert · 2009-04-05T18:02:07.484Z · LW(p) · GW(p)

Mostly not. The process of preparing the body for cryonics (even for neuro- or head-only patients) requires pumping preservation chemicals through the bloodstream that are incompatible with donation.

comment by PhilGoetz · 2009-04-05T03:06:34.086Z · LW(p) · GW(p)

IAWYC, but am confused by the phrase

If you don't prefer truth to happiness with false beliefs...

Does it make sense to talk about preferring something over happiness? I know what you mean if we take a folk definition of happiness as something like "bubbly feelings". But I don't think you mean folk happiness; for this statement to have impact, it has to mean Happiness, defined to include all of your values.

I think what I'm trying to ask is: Isn't it by definition irrational (failing to maximize your happiness) to prefer truth to happiness?

Replies from: MBlume, Eliezer_Yudkowsky
comment by MBlume · 2009-04-05T03:46:38.147Z · LW(p) · GW(p)

My happiness is something you can measure just by observing the state of my brain. To measure the accuracy of my beliefs, you must measure my brain and the rest of the universe, and compare the two. I place value on the accuracy of my beliefs, which means I do value something beyond my happiness.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T05:16:41.615Z · LW(p) · GW(p)

Sure. But that is, by the definition of rationality that I think most of us have been using, irrational.

Replies from: Nick_Tarleton, Eliezer_Yudkowsky, Nick_Tarleton
comment by Nick_Tarleton · 2009-04-05T07:24:39.909Z · LW(p) · GW(p)

Exactly where have you gotten the idea that any of us have been using a definition of "rationality" that includes a requirement that utility supervene on brain states?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T15:41:00.451Z · LW(p) · GW(p)

Here's the quote from EY that I started this comment thread with:

If you don't prefer truth to happiness with false beliefs...

Both the alternatives here are talking about brain states. EY's 'truth' doesn't mean 'truth in the world'. The world is true by definition. He means having truth in your brain. He is trying to maximize the truth/falsehood ratio of the states within his own brain.

That's a definition of "rationality" that includes a requirement that utility supervene on brain states.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-04-05T17:00:32.216Z · LW(p) · GW(p)

No, as MBlume said, truth, and utility of truth, supervene on brain states and the things those brain states are about. Holding my belief about the color of the sky fixed, it is true if the sky is blue and false if the sky is green.

Also, truth and happiness are just the values being weighed in this particular case; nobody ever said they're the only things rationalists might care about.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T09:10:08.702Z · LW(p) · GW(p)

Most of us? Anyone besides Phil Goetz, vote this comment down if you think that it is by definition irrational to value something beyond your own experienced happiness.

(We really need a simple way to include small polls into blog comments!)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T14:48:13.063Z · LW(p) · GW(p)

I've already said, in this very thread, that I'm talking about

Happiness, defined to include all of your values

Now you just wrote

to value something beyond your own experienced happiness.

I realize that I introduced confusion with my unclear definitions.

My terms, to review a recent discussion:

  • Utility = a function mapping your state into its desirability, based on your values.
  • Happiness = a time-varying function mapping utility over time into your satisfaction with your utility.
  • Rationality = maximizing your expected Happiness

So I think you're saying that you want to define rationality as maximizing your expected utility, not your expected happiness. It's a significant difference, and I would like to know which people prefer (or if they have some other definition). But it doesn't matter WRT the comment I made here. You're still being a Nietzschean if you elevate Truth beyond your utility function.

Replies from: loqi, conchis
comment by loqi · 2009-04-05T19:12:23.202Z · LW(p) · GW(p)

a time-varying function mapping utility over time into your satisfaction with your utility

I can't make any sense of this. I value happiness-the-brain-state, which means I value satisfaction with my situation in life. That is part of my utility function. The "life-states" are mere inputs, they don't exhaust the definition of "utility". If I can predict that a year after winning the lottery I won't be any happier than I am now, that bears directly on the expected utility of winning.

You say you're talking about "Happiness, defined to include all of your values", but the original mention of preferring truth to happiness had this for context: "I have been heartbroken, miserable, unfocused, and extremely ineffective since". This is surely talking about psychological happiness, not overall "value". Why such confusing terminology?

comment by conchis · 2009-04-05T16:26:43.358Z · LW(p) · GW(p)

I'm afraid I'm still utterly confused by your usage. It seems to me that you're trying to draw two separate distinctions when you contrast happiness and utility. One is a distinction between brain states and other things we might choose to value; the other is a distinction between an instantaneous measure and a measure aggregated in some way over time.

Does this seem right to you, or am I completely missing the point? (If it does seem right, do you see how trying to do both of these with a single shift in terminology might not be the best way of proceeding? In particular, it manages to leave us with no words for the aggregate-of-value-over-time; or for the instantaneous-experience-of-particular-brain-states.)

I am also somewhat confused by your viewing the brain states (Happiness) as functions of utility. We can clearly value more than just states of our brain, so it seems far more natural to me to view value as a function of brain states + other stuff, rather than the other way around.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T17:12:42.449Z · LW(p) · GW(p)

I'm afraid I'm still utterly confused by your usage. It seems to me that you're trying to draw two separate distinctions when you contrast happiness and utility. One is a distinction between brain states and other things we might choose to value; the other is a distinction between an instantaneous measure and a measure aggregated in some way over time.

Yes. I don't think introducing these distinctions one at a time would give you any additional useful concepts. The integral of utility over time serves as the aggregate of value over time. It only fails to do so when we talk about happiness because happiness is more sensitive to changes in utility than to utility.

Happiness does give you an instantaneous measure; it just depends on the history. When I talk about maximizing happiness, I mean maximizing the integral of happiness over time. This works out to be the same as maximizing the increase in utility over time, for reasonable definitions of happiness; see my comment above in response to EY.

We can clearly value more than just states of our brain

I think the distinction is

  • 'maximize utility' = non-hedonic rationalism
  • 'maximize happiness' = hedonic rationalism

I understand that there's a lot of sympathy for non-hedonic rationalism. But, in the long run, it probably relies on irrational, Nietzschean value-creation.

Hedonic rationalism is in danger of being circular once we can re-write our happiness functions. But this is probably completely isomorphic to the symbol-grounding problem, so we have to address this problem anyway.

Replies from: conchis
comment by conchis · 2009-04-05T17:19:11.023Z · LW(p) · GW(p)

I don't think introducing these distinctions one at a time would give you any additional useful concepts.

Then there are a lot of economists (and psychologists) who disagree with you, and routinely use these concepts you don't think are useful for apparently useful purposes.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-05-17T18:19:19.301Z · LW(p) · GW(p)

I was a little careless; and you are taking my statement out of context and overgeneralizing it. These two distinctions are both needed to find the answer I am proposing. They can be used successfully in other contexts, or probably within the same context to address different questions.

comment by Nick_Tarleton · 2009-04-05T07:21:55.471Z · LW(p) · GW(p)

Why do you think any of us have been using a definition of rationality that includes a requirement that utility supervene on brain states?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:30:08.483Z · LW(p) · GW(p)

While we're on the subject: I, and I think MBlume, meant simply happiness, not what you're calling Happiness.

Replies from: PhilGoetz, MBlume, PhilGoetz
comment by PhilGoetz · 2009-04-05T15:12:04.728Z · LW(p) · GW(p)

Yes and no. I meant Happiness to include your values. But I meant it to mean your brain states in response to the time-varying level of satisficing of your values.

Here's two possible definitions of rationality:

  • maximizing your expected utility, expressed as a static function mapping circumstances into a measure according to your values
  • maximizing your expected Happiness, where Happiness expresses your current brain state as a function of the history of your utility

The Happiness definition of rationality has evolutionary requirements: It should always motivate a creature to increase its utility, and so it should resemble the first derivative of utility.

With this definition, maximizing utility over time means maximizing the area under your utility curve. Maximizing Happiness over a time period means maximizing the amount by which your final utility is greater than your initial utility.
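
In symbols (one way to cash out the definitions above): if Happiness tracks the rate of change of utility, $h(t) \approx dU/dt$, then

$$\int_0^T h(t)\,dt \;\approx\; U(T) - U(0),$$

so maximizing accumulated Happiness over the period amounts to maximizing the net gain in utility, rather than the area $\int_0^T U(t)\,dt$ under the utility curve.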

So utility rationality focuses on the effects during the interval under consideration. Making a series of decisions, each of which maximizes your utility over some time period, is not guaranteed to maximize your utility over the union of those time periods. (In fact, in real life, it's pretty much guaranteed not to.)

Happiness rationality is a heuristic that gives you nearly the same effect as evaluating your total utility from now to infinity, even if you only ever evaluate your utility over a finite time-period.

My initial reaction is that Happiness rationality is more practical for maximizing your utility in the long-term.

Which do people prefer? Or do they have some other definition of rationality?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T15:15:27.340Z · LW(p) · GW(p)

Everyone here except you is using 1.

Replies from: PhilGoetz, Annoyance, PhilGoetz
comment by PhilGoetz · 2009-04-05T15:55:16.436Z · LW(p) · GW(p)

Do you see how using 2 can better accomplish 1?

We think we can best maximize our utility by trying to maximize our utility. Evolution is a better reasoner than us, and designed us to { maximize our utility by trying to maximize our happiness }.

Replies from: None
comment by [deleted] · 2009-04-18T03:24:16.400Z · LW(p) · GW(p)

That nature is (always) a better reasoner than man isn't a credible premise, particularly these days, when the analogous unconditional superiority of the market over central planning is no longer touted uncritically.

Do you assume individual rationality's justification is utility maximization, even if we settle for second-tier happiness as a proxy? Programmed to try to maximize happiness, we act rationally when we succeed, making maximizing utility irrational or at least less rational. Utility has nothing more to recommend it when happiness is what we want.

Another way of saying this is that happiness is utility if utility is to play its role in decision theory, and what we've been calling utilities are biased versions of the real things.

comment by Annoyance · 2009-04-05T17:51:19.313Z · LW(p) · GW(p)

I would be more sympathetic towards your complaints about people speaking for you if you didn't frequently speak for others. All others.

Even if you were right, such behavior would be intolerable. And you frequently aren't. You aren't even rhetorically accurate, letting 'everyone' represent an overwhelming majority.

From now on I will downvote any comment or post of yours that puts words in my mouth, whether directly or through reference to us collectively, regardless of the remainder of the content.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-05T19:24:29.068Z · LW(p) · GW(p)

I would be fascinated to know how many of us I speak for when I say: why don't you just fuck off.

Replies from: conchis, Kenny
comment by conchis · 2009-04-05T20:16:06.552Z · LW(p) · GW(p)

Not me. Please can we not descend into this sort of thing? If you think Annoyance is trolling, then don't feed. Vote down and move on.

comment by Kenny · 2009-04-12T17:50:20.554Z · LW(p) · GW(p)

I bet he drew the red card.

comment by PhilGoetz · 2015-02-13T17:05:49.678Z · LW(p) · GW(p)

When I said "which do people prefer", I meant "Which do you prefer after considering my explanation?" Most people are using 1 because they've never realized that the brain is using 2. I'd be more interested in hearing what you think people should use than what they do use, and why they should use it.

comment by MBlume · 2009-04-05T20:51:20.593Z · LW(p) · GW(p)

and I think MBlume

Indeed

comment by PhilGoetz · 2009-04-05T17:32:01.335Z · LW(p) · GW(p)

I could also call Happiness rationality "hedonic rationality". Maximizing utility leaves you with the problem of selecting the utility function. Hedonic rationality links your utility function to your evolved biological qualia.

Perhaps the most important question in philosophy is whether it makes sense to pursue non-hedonistic rationality. How do you ground your values, if not in your feelings?

I think that maybe all we are really doing, when we say we are rationally maximizing utility, is taking the integral of our happiness function and calling it our utility function. We have a built-in happiness function; we don't have a built-in utility function. It seems too much of a coincidence to believe that we rationally came to a set of values that give us utility functions that just happen to pretty nearly be the same as we would get by deriving them from our evolved happiness functions.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-05T18:21:13.486Z · LW(p) · GW(p)

Then this "hedonic rationality" is a non-reflective variety, caring for what your current values are, but not for what you'll do or have done with your future and past values?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-06T13:53:51.345Z · LW(p) · GW(p)

Do you mean, do you place a value on your future values? I don't think you can do anything but place negative value on a change in your values. What's an example of a rationality model that does what you're asking?

Replies from: ciphergoth, conchis, thomblake, christopherj
comment by Paul Crowley (ciphergoth) · 2009-04-06T16:00:34.014Z · LW(p) · GW(p)

This is true in theory, but in practice, what we think are our terminal values we can later discover are instrumental values that we abandon when they turn out not to serve what turns out to be our even-more-terminal values. Thus lots of people who used to think that homosexuality was inherently wrong feel differently when they discover that their stereotypes about gay people were mistaken.

comment by conchis · 2009-04-06T15:33:02.481Z · LW(p) · GW(p)

I don't think you can do anything but place negative value on a change in your values.

At the very least, this would seem to hold only in the extreme case that you were absolutely certain that your current values are both exhaustive and correct. I for one, am not; and I'm not sure it's reasonable for anyone to be so certain.*

I would generally like to value more things than I currently do. Provided they aren't harming anybody, having more things I can find value, meaning, and fulfillment in seems like a good thing.

One of the things I want from my values is internal consistency. I'm pretty sure my current values are not internally consistent in ways I haven't yet realized. I place positive value on changing to more consistent values.

* Unless values are supposed to be exhaustive and correct merely because you hold them - in which case, why should you care if they change? They'll still be exhaustive and correct.

comment by thomblake · 2009-04-07T16:13:49.320Z · LW(p) · GW(p)

I don't think you can do anything but place negative value on a change in your values.

Jim Moor makes a similar case in Should We Let Computers Get Under Our Skin - I dispute it in a paper (abstract here)

The gist is, if we have self-improvement as a value, then yes, changing our values can be a positive thing even considered ahead of time.

comment by christopherj · 2013-11-02T14:29:11.618Z · LW(p) · GW(p)

I don't think you can do anything but place negative value on a change in your values.

My assumption is that I would not choose to change my values, unless I saw the change as an improvement. If my change in values is both voluntary and intentional, I'm certain my current self would approve, given the relevant new information.

comment by Emiya (andrea-mulazzani) · 2020-12-16T10:25:51.984Z · LW(p) · GW(p)

And only now do I finally get why some of the people I know kept telling me, again and again, "okay, but rationality is not enough for everyone to get through their lives, people need something to believe in..." They were just picturing the single step of becoming "realistic".

It has dawned on me that nearly all the illusions I was wrapped in were making my life considerably unhappier. 

I guess that's why I've never experienced anything close as finding myself worse off because of studying rationality, not even after the first steps. 

comment by Loren · 2009-04-04T21:24:01.740Z · LW(p) · GW(p)

Eliezer said: "Even the surveys are comparing the average religious person to the average atheist, not the most advanced theologians to the most advanced rationalists."

Very true. Wouldn't it be a kicker if that were done and we found out that the most advanced theologians ARE the most advanced rationalists? I suspect the chances of something like this being true are higher than most of us think.

Replies from: AlexU, gjm, Loren, Roko, loqi
comment by AlexU · 2009-04-05T00:16:52.077Z · LW(p) · GW(p)

There are some brilliant theists out there. The best theologians are largely indistinguishable from the best philosophers, who are typically quite rational people, to say the least.

Still, the chances that the most advanced theologians are the most advanced rationalists -- more advanced than the best philosophers, physicists, computer scientists, etc., rather than merely comparable -- seem slim.

Replies from: Tom_Talbot
comment by Tom_Talbot · 2009-04-05T02:51:36.134Z · LW(p) · GW(p)

We really need to have a discussion about the polite way to downvote people. I say that the top-level comment shows the right way to moderate, with discussion about the decision to downvote, while this post above mine has been moderated badly. The comment above seems to have undergone some drive-by moderation, with no one saying what he did wrong. One line would do, "This comment downvoted because it is vapid/nonsensical/mistaken" or something. What would be really nice would be if you, anonymous moderators, would set people straight when they made a mistake (as has been done at the top-level) so that we can discuss it in public and avoid it in future. I'm not saying you should explain every downvote, but if you're hammering someone into the negatives, at least have the guts to say why. Was the post above downvoted because it was bad or because he agreed with the bad post of the top-level commenter? If so, a simple "Your post downvoted for reasons I gave above" would have sufficed.

Downvoting without explanation smacks of laziness or vindictiveness, and degrades the quality of the discussion. If you cannot be bothered to provide an explanation for your downvote, I do not think you should be moderating at all.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-05T13:27:20.005Z · LW(p) · GW(p)

I favor drive-by downvoting because otherwise we don't really have a downvoting system. Downvotes simply shouldn't be that awful. They're just info about how others think you did, and in extreme cases (-4 or below) a way to get comments that newcomers shouldn't see off the immediately visible page (but still visible if you want to probe further).

Replies from: AlanCrowe
comment by AlanCrowe · 2009-04-05T14:31:02.492Z · LW(p) · GW(p)

I think that it is very important to look at how much work the commenter put into their comment.

One thing that kills discussion boards is that the conversations become too cliched. Mr. A makes the standard comment. Mr. B makes the standard rebuttal. Mr. A makes the standard defence. Mr. B makes the traditional follow-up.

When Mr. A makes the standard comment, is that for real, or is it just trolling? Tough question. I think there comes a point at which one has to get tough and do drive-by downvoting on valid, on-topic comments, because they are commonplace and threaten to destroy the discussion by making it too familiar, swamping it with the banal.

The other side to this is if Mr. A makes a three-paragraph comment: 1) his point, 2) the standard rebuttal, 3) why he thinks his point survives the standard rebuttal. At this point we know that Mr. A is not a troll. He has put in too much work to be counting coup on getting a bite. He is making an effort to move the discussion on briskly so that it can reach unbroken ground. He has earned an explanation of why his comment is crap, and I would say he has earned the right to an actual typed-in criticism instead of a downvote.

There are other kinds of work worthy of respect. It is easy to make a long general response, either by being a fast typist and rattling it off, or by use of cut and paste. A comment is worthy of respect if the commenter has taken the time to tailor it so that it is clear how the general point applies to the particular case under discussion. Gathering up and checking relevant links eats time. If someone has gone to the trouble of decorating his comment with relevant links, that should earn him immunity from drive-by downvoting.

On the other hand, there is discussion in the blogosphere of turning off comments altogether. Some people say that if the comments are there they feel obliged to read them, but that they are mostly the same-old-same-old and a waste of time, which ends up with the reader feeling that they are wasting their time reading the blog and giving up altogether. Short, mildly entertaining, chitchatty comments that fill the fleeting hour with work not done will eventually kill LessWrong. I think readers should be very free with downvotes for lightweight comments.

Replies from: Eliezer_Yudkowsky
comment by gjm · 2009-04-04T21:43:31.230Z · LW(p) · GW(p)

Any reasons for that suspicion?

comment by Loren · 2009-04-04T23:41:41.242Z · LW(p) · GW(p)

Roko said: "do you have any reason or evidence pointing to your conclusion?"

First of all, I wasn't concluding anything. As I said, it's just a suspicion. Is there a rule that all speculation on this web site is downvoted?

My suspicion comes from being impressed by the work of Ken Wilber. He is a case in point that I am thinking of. Here is a brief introduction to his work:

http://www.kenwilber.com/writings/read_pdf/91

Replies from: GuySrinivasan, gjm
comment by GuySrinivasan · 2009-04-05T00:29:55.557Z · LW(p) · GW(p)

I read the brief introduction, and was thoroughly unimpressed. Maybe there's a kernel of truth somewhere but you'd think a brief introduction would make it more visible... saying "scientism" over and over, dismissing reductionism as calling things "nothing but" their components over and over... apparently he has split things we can know up into 2x2=4 parts, and "Yet in erasing left-hand interiors, modernity also erased meaning, purpose, and significance from our view of the universe, life, and ourselves. For meaning, purpose, and significance, subjective value, and all other qualitative distinctions are interior left-hand events. Gone was any sense of value or purpose for life. Instead humans began to see themselves merely as meaningless blobs of protoplasm, adrift on a tiny speck of dust in a remote unchartered corner of one of countless billions of galaxies."

It seems science stole Ken Wilber's rainbows. Bad scientists! Or wait, I mean:

"scientists (or better, scientismists)"

In fairness, maybe it's just Roger Walsh (the author of the introduction) that failed to impress me enough to get me to read Wilber.

comment by gjm · 2009-04-05T00:27:57.474Z · LW(p) · GW(p)

I didn't downvote you, but I think such downvoting as you've received has been not just because you were speculating but because you were making what on the face of it is a very implausible suggestion without any indication of why it might be true. That's kinda rude: if you have some reason for thinking it's likely to be true, why aren't you at least hinting at it? and if you haven't, what's the value in telling us?

Ken Wilber's site is annoying. The link you gave, rather than just serving up the damn PDF file, embeds it in the page, which means that on my (admittedly slightly weird) system I can't read it. And his front page is Flash-only, ditto. However, I grabbed the file at http://www.kenwilber.com/Writings/PDF/SS-Walsh.pdf and also looked at his Wikipedia entry; from these, my own estimate of his likelihood of being one of "the most advanced rationalists" is extremely low. (Not that you need care what my estimate of that likelihood is.)

comment by Roko · 2009-04-04T23:08:03.859Z · LW(p) · GW(p)

Downvoted for unjustified sensationalism. Sure, people in mental asylums might be good rationalists, but do you have any reason or evidence pointing to your conclusion?

Replies from: loqi
comment by loqi · 2009-04-04T23:41:12.320Z · LW(p) · GW(p)

I think this comparison is a bit unfair. Do you really think the rationality of the average mental patient is remotely comparable to that of the average theologian?

Replies from: steven0461
comment by steven0461 · 2009-04-05T00:28:48.433Z · LW(p) · GW(p)

No, but the difference between a statement that's 99% silly and a statement that's 99.9% silly is only a negligible .9 silly points.

comment by loqi · 2009-04-04T22:11:09.699Z · LW(p) · GW(p)

Apparently informing others of an estimate you find unusual gets you downvoted. How unfortunate. I found it an interesting bit of speculation.

Replies from: anonym, HughRistik
comment by anonym · 2009-04-05T00:00:51.242Z · LW(p) · GW(p)

I think the downvotes are because you gave no rationale. Speculation without even saying why you think it is plausible is worthless.

Replies from: loqi
comment by loqi · 2009-04-05T00:15:45.789Z · LW(p) · GW(p)

Speculation without even saying why you think it is plausible is worthless.

I disagree with this, and note that you did not provide a rationale.

Replies from: anonym
comment by anonym · 2009-04-05T00:48:13.236Z · LW(p) · GW(p)

The proposition you quote doesn't need a rationale to the same degree that "advanced theologians might be the best rationalists" does, just as "typing random gibberish for comments is a waste of everybody's time" is even less in need of justification.

The difference between them is the degree to which the justifications are likely to be obvious to other readers and the degree to which other readers are likely to agree or disagree.

Replies from: loqi
comment by loqi · 2009-04-05T01:31:25.826Z · LW(p) · GW(p)

The proposition you quote doesn't need a rationale to the same degree that "advanced theologians might be the best rationalists" does

All else being equal, an assertion requires more justification than a speculation. I also disagree with Loren's estimate, but given that I think your statement is just plain wrong, I'd sooner ask you for a justification than Loren.

The difference between them is the degree to which the justifications are likely to be obvious to other readers and the degree to which other readers are likely to agree or disagree.

As a general rule I disagree with this: I don't think I should be expected to know how likely others are to agree with me or find my reasoning obvious. That said, Loren did anticipate such a disagreement, so you have a point.

Replies from: anonym
comment by anonym · 2009-04-05T01:54:11.112Z · LW(p) · GW(p)

You aren't expected to know how likely others are to agree or whether they will find your reasoning obvious. However, I would argue that you should try to estimate how likely others are to disagree and to give some form of explanation if you think they're likely not to agree and not to see what your explanation would be. Most of us, most of the time, are reasonably good at making such estimates, so following this guideline makes discussion more efficient and results in better communication.

Replies from: loqi
comment by loqi · 2009-04-05T02:17:54.391Z · LW(p) · GW(p)

I'm skeptical of the claim of reasonable goodness if you mean it to apply to estimates of obviousness, but I do find myself agreeing that we should try to anticipate disagreement for the sake of efficient communication.

Replies from: anonym
comment by anonym · 2009-04-05T19:45:13.475Z · LW(p) · GW(p)

I meant it to apply to both. I agree that estimating obviousness depends very much on the individuals and topics involved, and factors like inferential distance, but we still have a huge common store of knowledge and thought processes by virtue of the psychological unity of humankind... On a site like LW, we can also all be expected to be somewhat familiar with the many topics that are discussed again and again. I'm not saying we can get anywhere near perfect, but I think we do pretty well. Most of the time that somebody says something for reasons that others will find non-obvious, they correctly anticipate this and give justification. This whole thread started because somebody didn't anticipate and didn't give justifications, which is somewhat unusual.

comment by HughRistik · 2009-04-05T01:29:40.280Z · LW(p) · GW(p)

I found it to be sensationalism, so I'm not surprised it got downvoted. I would have found it interesting, like you, if Loren had included his reasoning for that remark, which he instead posted in his follow-up comment.