Dissenting Views

post by byrnema · 2009-05-26T18:55:17.205Z · LW · GW · Legacy · 212 comments

Contents

  Maintaining a High Signal to Noise Ratio
  Rationality is not a religion – Or is it?
  A Solution

Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. Certainly the observation of homogeneity is true to the extent that the community shares common views that are minority views in the general population.

Maintaining a High Signal to Noise Ratio

The Less Wrong community shares an ideology that it is calling ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true to itself. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal to noise ratio.

For the most part, Less Wrong is remarkably successful at cultivating this high signal to noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, that align reasonably well, or that align with one of the directions in which the ideology is reasonably evolving.

Rationality is not a religion – Or is it?

Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with down-votes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views is: stomping on flowers, showing disrespect, not playing along, being inconsiderate.

So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.

But Less Wrong isn't a religion or a cult. It wants to identify and uproot illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions, and see how they hold up against any and all evidence. You have to allow brave dissent.

Shouldn’t there be a place where people who think they are more rational (or better than rational), can say, “hey, this is wrong!”?

A Solution

I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would move extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at the “rationality” ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent for those in the community who are interested in not being a cult, challenging first assumptions, and suggesting ideas for improving Less Wrong without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)

Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?

212 comments

Comments sorted by top scores.

comment by conchis · 2009-05-26T21:33:18.707Z · LW(p) · GW(p)

Byrnema, you talk extensively in this post about the LW community having a (dominant) ideology, without ever really explicitly stating what you think this ideology consists of.

I'd be interested to know what, from your perspective, are the key aspects of this ideology. I think this would have two benefits:

  1. the assumptions underlying our own ideologies aren't always clear to us, and having them pointed out could be a useful learning experience; and
  2. the assumptions underlying others' ideology aren't always clear to us, and making your impressions explicit would allow others the chance to clarify if necessary, and make sure we're all on the same page.

(More generally, I think this is a great idea.)

Replies from: byrnema
comment by byrnema · 2009-11-02T22:00:42.188Z · LW(p) · GW(p)

Long overdue:

In May when I composed this post, I saw the LW community as having a dominant ideology, which I have since learned to label 'physical materialism'. I refrained from publicly defining this ideology because of some kind of reluctance.

I didn't expect the community to change over time, but it seems to me there has been drift in the type of discussions that occur on Less Wrong away from epistemological foundations. So I feel more comfortable now outlining the tenets of average LW epistemology, as I perceived it, as a ‘historical’ observation.

The first and fundamental tenet of this epistemology is that there is a real, objective reality X that we observe and interact with. In contrast, persons with a metaphysical bent are less definitive about the permanent existence of an objective reality, and believe that reality alters depending on your thoughts and interactions with it. On the other extreme are skeptics that believe it is meaningless to consider any objective reality, because we cannot consider it objectively. (There are only models of reality, etc.)

For formalism and precision, I will here introduce some definitions. Define objective reality as a universe X = the set of everything that we could ever potentially observe or interact with physically. (This is what we consider “real”.) We cannot know if X is a subset of a larger universe X-prime. Suppose that it is: The component of X-prime that is outside X (X-complement) may ‘exist’ in some sense but is not real to us.
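
The same definitions can be put in set notation (just a sketch restating the terms above):

\[
X \;=\; \{\, e \mid e \text{ could ever potentially be observed or physically interacted with by us} \,\}, \qquad X \subseteq X', \qquad X\text{-complement} \;=\; X' \setminus X .
\]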

The second tenet is that anything we observe or interact with is a subset of X, the real physical world. While this trivially follows from the definition of X, what is being argued with physical materialism is not the tautology itself but the value of seeing things from this point of view. Trivially, there is nothing metaphysical in X; we either interact with something or we don’t.

In contrast, the metaphysical view is to consider reality = X-prime, and consider that everything we interact with physically/scientifically/objectively may only be a subset of our total experience of reality.

Comparing the views: Physical Materialism versus the Metaphysical View

Consider the hypothetical, real sighting of a ghost: a white floating image is observed in front of two observers. The physical materialist observes the ghost and knows that either (a) the ghost exists outside subjective experience, in which case the ghost must be reflecting light in such a way as to appear white and hazy, and the interaction of the light with the ghost could be studied and reproduced, or (b) the ghost exists as a subjective experience, in which case it is still physically manifested as a hallucination that may be equal to certain neural patterns, etc. The metaphysicist, in contrast, considers a third possibility as potentially reasonable: the ghost has a physical component (same cases a, b) AND ALSO a metaphysical component that explains the ‘existence’ of the ghost in some deeper way. For the metaphysicist, the physical materialist’s ghost is a subset of the “whole” ghost that actually straddles X and X-complement.

In my view, the physical materialist view is more coherent. We cannot know if the ghost straddles X and X-complement, but if it does, in no sense is the part of the ghost contained within X-complement “real” to us. It is not real because we can never observe or interact with this component in any way.

The epistemological question all along, the debate over the ages, is whether holding any X-complement component (imaginary component) in our theory of the ghost will give us a better understanding of the X component (real part) of the ghost.

Personally, I see no evidence that the physical world X should not be informationally/theoretically complete. The labor of science rests on the belief that X can be understood within X itself. On the other hand, there is no proof that X is not dependent upon or manipulated in (scientifically) unfathomable ways by a larger X-prime, and it is conceivable that interactions occur between X and X-complement in ways that cannot be understood within X. Physical materialism really is just a matter of ideological preference, not fact. But it is certainly the direction modern culture is going; metaphysical religious views seem increasingly anachronistic and ‘separate’.

Another point of view typically held on Less Wrong is that since reality is 'just' physical, it must be coarse, or simple, or stupid. I think this is just a backlash against metaphysical accounts describing reality as divine and inspired. We can look around and see that reality is structured, patterned, and organized/directed*. Physical materialism is not the belief that these observations are nonsense, but that we can explain them without resorting to the supernatural.

*Please allow materialistic interpretations of these anthropomorphic words… I'm not aware of adequate alternatives and suspect language is evolving too slowly.

Replies from: AllanCrossman, Jack
comment by AllanCrossman · 2009-11-02T22:26:41.881Z · LW(p) · GW(p)

On the other hand, there is no proof that X is not dependent upon or manipulated in (scientifically) unfathomable ways by a larger X-prime

But is there any reason to favour this more complex hypothesis?

Replies from: byrnema
comment by byrnema · 2009-11-03T04:26:47.716Z · LW(p) · GW(p)

I feel at home with physical materialism and I like the way it's simultaneously simple, self-consistent and powerful as a theory for generating explanation (immediately: all of science). Yet there are some interesting issues that come up when I think about the justification of this world view.

The more complex hypothesis that there is 'more' than X would be favored by any evidence whatsoever that X is not completely self-contained. So then it becomes an argument about what counts as evidence, and "real" experience. The catch-22 is that any evidence that would argue for the metaphysical would either be rejected within X as NOT REAL or, if it were actually real -- in other words, observable, reproducible, explainable -- then it would just be incorporated as part of X. So it is impossible to refute the completeness of X from within X. (For example, even while QM observations are challenging causality, locality, counterfactual definiteness, etc., physicists are looking to understand X better, and modify X as needed, not rejecting the possibility of a coherent theory of X. But at what point are we going to recover the world that the metaphysicists meant all along?)

So the irrefutability of physical materialism is alarming, and the obstinate stance for 'something else' from the majority of my species leaves me interested in the question. I have nothing to lose from a refutation of either hypothesis; I'm just curious. Also despairing to some extent -- I believe such questions are actually outside definitive epistemology.

Replies from: DanArmak, AllanCrossman
comment by DanArmak · 2009-11-03T15:58:11.099Z · LW(p) · GW(p)

This is completely backwards. It's non-materialism that's irrefutable, pretty much by definition.

Suppose we allow non-materialistic, non-evidence-based theories. There is an infinite number of theories that describe X plus some non-evidential Y, for all different imaginable Ys. By construction, we can never tell which of these theories is more likely to be wrong than another.

So we can never say anything about the other-than-X stuff that may be out there. Not "a benevolent god". Not "Y is pretty big". Not "Y exists". Not "I feel transcendental and mystical and believe in a future life of the soul". Not "if counterfactually the universe was that way instead of this way, we would observe Y and then we would see a teacup." Nothing at all can be said about Y because every X+Y theory that can be stated is equally valid, forever.

Whatever description you give of Y, with your completely untestable religious-mental-psychic-magical-quantum powers of the mind that must not be questioned, I can give the precise opposite description. What reason could you have for preferring your description to mine? If your reason is in X, it can't give us information about Y. And if your reason is in Y, I can claim an opposite-reason for my opposite-theory which is also in Y, and we'll degenerate to a competition of divinely inspired religions that must not be questioned.
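
Put a bit more formally (a sketch only; O stands for any possible observation, and Y1, Y2 for any two of the non-evidential add-ons):

\[
P(O \mid X \wedge Y_1) \;=\; P(O \mid X \wedge Y_2) \;=\; P(O \mid X) \qquad \text{for every } O,\, Y_1,\, Y_2,
\]

so no observation can ever shift the balance between one Y and another.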

Bottom line: if the majority of the species believes in "something else", that is a fact about the majority of the species, not about what's out there. If I develop the technology for making almost all humans stop believing in "something else", could that possibly satisfy your private wonderings?

Replies from: byrnema
comment by byrnema · 2009-11-03T21:31:13.323Z · LW(p) · GW(p)

This is completely backwards. It's non-materialism that's irrefutable, pretty much by definition.

Non-materialism is irrefutable within its own framework, agreed. So then we are left with two irrefutable theories, but one is epistemologically useful within X and one is not. Materialism wins.

Nevertheless, just to echo your argument across the canyon: reality doesn't care what theories we “allow”, it is what it is. We might deduce that such-and-such-theory is the best theory for various epistemological reasons, but that wouldn't make the nature of the universe accessible if it isn't in the first place. Just a reminder that ascetic materialism doesn't allow conviction about materialism.

Replies from: DanArmak
comment by DanArmak · 2009-11-03T21:47:08.130Z · LW(p) · GW(p)

reality doesn't care what theories we “allow”, it is what it is.

It is what X is. That's the definition of X. Whatever is outside X is outside Reality. Materialists don't think that "something outside reality" is a meaningful description, but that is what you claim when you talk about things being beyond X.

We might deduce that such-and-such-theory is the best theory for various epistemological reasons

No. We deduce that it's the best theory because it's the only uniquely identifiable theory, as I said before.

If you're going to pick any one theory, the only theory you can pick is a materialistic one. If you allow non-materialistic theories, you have to have every possible theory all at once.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-11-03T21:57:20.408Z · LW(p) · GW(p)

Well, dunno. To be fair, for the sake of argument, I guess one could maybe propose Idealistic theories. That is, that all that exists is made up of a "basic physics of consciousness", and everything else that we see is just an emergent phenomenon of that. One would still keep reductionism, simply that one might have the ultimate reduction be to some sort of "elementary qualia" plus simple rules (as strict and precise and simple as any basic physics theory) for how those behave.

(Note, I'm not advocating this position at this time, I'm just saying that potentially one could have a non-materialist reductionism. If I ever actually saw a reduction like that that could really successfully predict/model/explain stuff we observe, I'd be kinda shocked and impressed.)

Replies from: byrnema, DanArmak
comment by byrnema · 2009-11-03T22:19:46.840Z · LW(p) · GW(p)

For the sake of argument, thank you. Yet I would guess that the theory you propose is still isomorphic to physical materialism, because physical materialism doesn't say anything about the nature of the elementary material of the universe. Calling it an elementary particle or calling it elementary qualia is just a difference in syllables, since we have no restrictions on what either might be like.

Yet you remind me that we can arrive at other unique theories, within different epistemological frameworks. What I thought you were going to say is that a metaphysicist might propose a universe X-prime that is the idealization of X. As in, if we consider X to be an incomplete, imperfect structure, X-prime is the completion of X that makes it ideal and perfect. Then people can speculate about what is ideal and perfect, and we get all the different religions. But it is unique in theory.

By the way, the epistemology used there would seem backwards to us. While we use logic to deduce the nature of the universe from what we observe, in this theory, what they observe is measured against what they predict should logically be. That is, IF they believe that "ideal and perfect" logically follows. (This 'epistemology' clearly fails in X, which is why I personally would reject it; but of course, that rejection is itself based on a theory that ordinates X above all, even logic.)

comment by DanArmak · 2009-11-03T22:14:07.409Z · LW(p) · GW(p)

I don't see how that contradicts what I said.

Suppose you believe a theory such as you described. Then I propose a new theory, with different elementary qualia that have different properties and behaviors, but otherwise obey the meta-rules of your theory - like proposing a different value for physical constants, or a new particle.

If the two theories can be distinguished in any kind of test, if we can follow any conceivable process to decide which theory to believe, then this is materialism, just done with needlessly complicated theories. On the other hand, if we can't distinguish these theories, then you have to believe an infinite number of different theories equally, as I said.

comment by AllanCrossman · 2009-11-03T09:49:51.693Z · LW(p) · GW(p)

I'm perfectly happy with the idea that there could be stuff that we can't know about simply because it's too "distant" in some sense for us to experience it; it sends no signals or information our way. I'm not sure anyone here would deny this possibility.

But if that stuff interacts with our stuff then we certainly can know about it.

comment by Jack · 2009-11-03T07:22:55.329Z · LW(p) · GW(p)

Edit: My comment was way too long, but not sure if this justifies a full post.

Replies from: Jack
comment by Jack · 2009-11-03T07:23:36.876Z · LW(p) · GW(p)

Now (finally) to the comparison.

The epistemological question all along, the debate over the ages, is whether holding any X-complement component (imaginary component) in our theory of the ghost will give us a better understanding of the X component (real part) of the ghost.

If a particular ontological commitment gives us a better understanding of something, then it is no longer in the X-complement. We are officially observing/interacting with it. Neptune, for example, before it was observed by telescope, was merely a theoretical entity needed to explain perturbations in the orbit of Uranus. There was a mysterious feature of the solar system and we explained it by positing an astronomical entity. There was nothing unscientific about this.

there is no proof that X is not dependent upon or manipulated in (scientifically) unfathomable ways by a larger X-prime, and it is conceivable that interactions occur between X and X-complement in ways that cannot be understood within X.

See, if there are interactions between X and X-Complement then there are interactions between us and X-Complement. X and X-Complement, by definition, cannot be causally related. The question then is whether physical entities and physical causes are sufficient for accounting for all our experiences. If they weren't, we would have a reason to favor a Spiritual or X-Skeptical view. But, in fact, we've been really good about explaining and predicting experiences using just physical and scientific-theoretical entities.

To conclude: I see three positions where you see two. There is the Scientific-physicalism of Less Wrong; the Spiritual view, which holds that there are things that are not physical and that we can (only or chiefly) observe and interact with those things through means other than science; and finally the Extreme Skeptic view, which considers all our experiences as being structured by our brain or mind rather than as the effects of entities that are not part of our mind/brain. Moreover, the possibility you see, of our inability to make sense of the physical universe we have access to because of interactions between that universe and one we do not have access to, does not exist. This is because the boundaries of what we have access to are the universe's boundaries of interaction. Anything that influences the reality we have access to, we can include in our model of reality. And it turns out that a scientific-physicalist view is more or less successful at explaining and predicting experiences.

comment by nazgulnarsil · 2009-05-27T00:12:15.931Z · LW(p) · GW(p)

that we seem more interested in esoteric situations than in the obvious improvements that would have the biggest impact if adopted on a wide scale.

Replies from: patrissimo, loqi
comment by patrissimo · 2009-05-28T23:39:22.027Z · LW(p) · GW(p)

I concur. We seem more interested in phenomena which are interesting psychologically than in those which are useful. This should not be surprising - interesting phenomena are fun to read about. Implementing a new cognitive habit takes hard work and repetition. Perhaps it is like divorcing warm fuzzies from utilons - we should differentiate between "biases that are fun to read/think about" and "practices which will help you become less wrong."

As a metaphor, consider flashy spinning kicks vs. pushups in martial arts. The former are much more fun to watch and think about, but boring exercises to build strength and coordination are much more basic and important.

comment by loqi · 2009-05-27T02:24:55.545Z · LW(p) · GW(p)

This is pretty vague for a heresy. Can you link to a comment or post that explains what you're referring to, or why we should condition on wide-scale adoption?

Replies from: nazgulnarsil
comment by nazgulnarsil · 2009-05-27T03:25:55.140Z · LW(p) · GW(p)

aren't we supposed to be pulling sideways on issues that aren't in popular contention?

comment by knb · 2009-05-27T01:07:22.108Z · LW(p) · GW(p)

Overall I think my views are pretty orthodox for LW/OB. But (and this is just my own impression) it seems like the LW/OB community generally considers utilitarian values to be fundamentally rational. My own view is that our goal values are truly subjective, so there isn't a set of objectively rational goal values, although I personally prefer utilitarianism myself.

Replies from: pwno, AndrewKemendo
comment by pwno · 2009-05-27T01:47:58.867Z · LW(p) · GW(p)

so there isn't a set of objectively rational goal values

There probably is for each individual, but none that are universal.

Replies from: knb
comment by knb · 2009-05-27T02:02:22.800Z · LW(p) · GW(p)

True, there are rational goals for each individual, but those depend on their own personal values. My point was there doesn't seem to be one set of objective goal values that every mind can agree on.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T09:31:50.464Z · LW(p) · GW(p)

Not all minds can have common goals, but every human, and the minds we choose to give life to, can.

Values aren't objective, but can well be said to be subjectively objective.

Replies from: timtyler
comment by timtyler · 2009-05-27T16:35:40.848Z · LW(p) · GW(p)

Um, the referenced The Psychological Unity of Humankind article isn't right. Humans vary considerably - from total vegetables up to Einstein. There are many ways for the human brain to malfunction as a result of developmental problems or pathologies.

Similarly, humans have many different goals - from catholic priests to suicide bombers. That is partly as a result of the influence of memetic brain infections. Humans may share similar genes, but their memes vary considerably - and both contribute a lot to the adult phenotype.

That brings me to a LessWrong problem. Sure, this is Eliezer's blog - but there seems to be much more uncritical parroting of his views among the commentators than is healthy.

Replies from: AdeleneDawner, Vladimir_Nesov, MichaelBishop, MichaelBishop
comment by AdeleneDawner · 2009-05-27T17:02:18.295Z · LW(p) · GW(p)

There are many ways for the human brain to malfunction as a result of developmental problems or pathologies.

And also many ways for human brains to develop differently, says the autistic woman who seems to be doing about as well at handling life as most people do.

Didn't we even have a post about this recently? Really, once you get past "maintain homeostasis", I'm pretty sure there's not a lot that can be said to be universal among all humans, if we each did what we personally most wanted to do. It just looks like there's more agreement than there is because of societal pressure on a large scale, and selection bias on an individual scale.

Replies from: thomblake
comment by thomblake · 2009-05-27T17:13:42.350Z · LW(p) · GW(p)

AdeleneDawner, I'm being off-topic for this thread, but have you posted on the intro thread?

Replies from: AdeleneDawner
comment by Vladimir_Nesov · 2009-05-27T16:46:38.997Z · LW(p) · GW(p)

You don't take into account that people can be wrong about their own values, with randomness in their activities not reflecting the unity of their real values.

Replies from: timtyler, Z_M_Davis, MichaelBishop
comment by timtyler · 2009-05-27T17:29:35.509Z · LW(p) · GW(p)

Are you suggesting that you still think that the cited material is correct?!?

The supporting genetic argument is wrong as well. I explain in more detail here:

http://alife.co.uk/essays/species_unity/

As far as I can tell, it is based on a whole bunch of wishful thinking intended to make the idea of Extrapolated Volition seem more plausible, by minimising claims that there will be goal conflicts between living humans. With a healthy dose of "everyone's equal" political correctness mixed in for the associated warm fuzzy feelings.

All fun stuff - but marketing, not science.

Replies from: MichaelBishop, Vladimir_Nesov
comment by Mike Bishop (MichaelBishop) · 2009-05-27T17:49:17.764Z · LW(p) · GW(p)

I recommend making this a top level post, but expand a little more on the implications of your view versus Eliezer's and C&T's. This could be done in a follow-up post.

comment by Vladimir_Nesov · 2009-05-27T17:54:24.726Z · LW(p) · GW(p)

Simply stating your opinion is of little value, only a good argument turns it into useful knowledge (making authority cease to matter in the same movement).

You are not making your case, Tim. You've been here for a long time, but persist in not understanding certain ideas, at the same time arguing unconvincingly for your own views.

You should either work on better presentation of your views, if you are convinced they have some merit, or on trying to understand the standard position; but repeating your position indignantly, over and over, is not constructive behavior. It's called trolling.

Replies from: timtyler
comment by timtyler · 2009-05-27T19:25:02.469Z · LW(p) · GW(p)

I cited a detailed argument explaining one of the problems. You offer no counter-argument, and instead just rubbish my position, saying I am trolling. You then advise me to clean up my presentation. Such unsolicited advice simply seems patronising and insulting. I recommend either making proper counter-arguments - or remaining silent.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T19:46:43.416Z · LW(p) · GW(p)

Remaining silent if you don't have an argument that's likely to convince, educate or at least interest your opponent is generally a good policy. I'm not arguing with you, because I don't think I'll be able to change your mind (without extraordinary effort that I'm not inclined to make).

Trolling consists in writing text that falls deaf on the ears of the intended audience. Professing advanced calculus on a cooking forum or to 6-year-olds is trolling, even though you are not wrong. When people don't want to hear you, or are incapable of understanding you, or can't stand the way you present your material, that's trolling on your part.

Replies from: timtyler
comment by timtyler · 2009-05-27T20:07:19.439Z · LW(p) · GW(p)

OK, then. Regarding trolling, see: http://en.wikipedia.org/wiki/Internet_troll

It does not say that trolling consists in writing text that falls deaf on the ears of the intended audience. What it says is that trolls have the primary intent of provoking other users into an emotional response or to generally disrupt normal on-topic discussion.

This is a whole thread where we are supposed to be expressing "dissenting views". I do have some dissenting views - what better place for them than here?

I deny trolling activities. I am here to learn, to debate, to make friends, to help others, to get feedback - and so on - my motives are probably not terribly different from those of most other participants.

One thing that I am is critical. However, critics are an amazingly valuable and under-appreciated section of the population! About the only people I have met who seem to understand that are cryptographers.

comment by Z_M_Davis · 2009-05-27T17:15:24.880Z · LW(p) · GW(p)

Yes, but why expect unity? Clearly there is psychological variation amongst humans, and I should think it a vastly improbable coincidence that none of it has anything to do with real values.

Replies from: Vladimir_Nesov, timtyler
comment by Vladimir_Nesov · 2009-05-27T17:27:20.777Z · LW(p) · GW(p)

Well, of course I don't mean literal unity, but the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.

As for the thesis above, its motivation can be stated thusly: If you can't be wrong, you can never get better.

Replies from: Z_M_Davis, Nick_Tarleton
comment by Z_M_Davis · 2009-05-27T19:11:16.918Z · LW(p) · GW(p)

the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.

How do you know what their real values are? Even after everyone's professed values get destroyed by the truth, it's not at all clear to me that we end up in roughly the same place. Intellectuals like you or I might aspire to growing up to be a superintelligence, while others seem to care more about pleasure. By what standard are we right and they wrong?

Configuration space is vast: however much humans might agree with each other on questions of value compared to an arbitrary mind (clustered as we are into a tiny dot of the space of all possible minds), we still disagree widely on all sorts of narrower questions (if you zoom in on the tiny dot, it becomes a vast globe, throughout which we are widely dispersed). And this applies on multiple scales: I might agree with you or Eliezer far more than I would with an arbitrary human (clustered as we are into a tiny dot of the space of human beliefs and values), but ask a still yet narrower question, and you'll see disagreement again. I just don't see how the granting of veridical knowledge is going to wipe away all this difference into triviality.

Some might argue that while we can want all sorts of different things for ourselves, we might be able to agree on some meta-level principles on what we want to do: we could agree to have a diverse society. But this doesn't seem likely to me either; that kind of type distinction doesn't seem to be built into human values. What could possibly force that kind of convergence?

If you can't be wrong, you can never get better.

Okay, I'm writing this one down.

Replies from: steven0461
comment by steven0461 · 2009-05-27T19:48:25.133Z · LW(p) · GW(p)

Even after everyone's professed values get destroyed by the truth, it's not at all clear to me that we end up in roughly the same place. Intellectuals like you or I might aspire to growing up to be a superintelligence, while others seem to care more about pleasure.

Your conclusion may be right, but the HedWeb isn't strong evidence -- as far as I recall, David Pearce holds a philosophically flawed belief called "psychological hedonism" that says all humans are motivated by pleasure and pain and therefore nothing else matters, or some such. So I would say that his moral system has not yet had to withstand a razing attempt from all the truth hordes that are out there roaming the Steppes of Fact.

comment by Nick_Tarleton · 2009-05-27T19:20:00.252Z · LW(p) · GW(p)

As for the thesis above, its motivation can be stated thusly: If you can't be wrong, you can never get better.

If "the thesis above" is the unity of values, this is not an argument. (I agree with ZM.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T19:36:36.853Z · LW(p) · GW(p)

It's an argument for its being possible that behavior isn't representative of the actual values. That actual values are more united than the behaviors is a separate issue.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-27T19:38:23.915Z · LW(p) · GW(p)

It seems to me that it's an appeal to the good consequences of believing that you can be wrong.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T19:54:30.686Z · LW(p) · GW(p)

Well, obviously. So I'm now curious: what do you read in the discussion, such that you see this remark as worth making?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-27T20:04:59.923Z · LW(p) · GW(p)

That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change; that this is very surprising to me, since you seem elsewhere to favor epistemic over instrumental rationality.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T20:50:47.700Z · LW(p) · GW(p)

That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change

I'm uncertain as to how to parse this, a little redundancy please! My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.

Let's review the structure of the argument. First, there is psychological unity of humankind, in-born similarity of preferences. Second, there is behavioral diversity, with people apparently caring about very different things. I state that the ethical diversity is less than the currently observed behavioral diversity. Next, I anticipate the common belief of people not trusting in the possibility of being morally wrong; simplifying:

If a person likes watching TV, and spends much time watching TV, he must really care about TV, and saying that he's wrong and actually watching TV is a mistake is just meaningless.

To this I reply with "If you can't be wrong, you can never get better." This is not an endorsement to self-deceivingly "believe" that you can be wrong, but an argument for it being a mistake to believe that you can never be morally wrong, if it's possible to get better.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-27T21:41:18.221Z · LW(p) · GW(p)

My best guess is that you are saying that I moved the discussion from the question of the fact of ethical unity of humankind, to the question of whether we should adopt a belief in the ethical unity of humankind.

Correct.

I state that the ethical diversity is less than the currently observed behavioral diversity.

I agree, and agree that the argument form you paraphrase is fallacious.

To this I reply with "If you can't be wrong, you can never get better." This is not an endorsement to self-deceivingly "believe" that you can be wrong, but an argument for it being a mistake to believe that you can never be morally wrong, if it's possible to get better.

Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn't clear, especially since you agreed that it's an appeal to consequences.
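
Schematically, with W = "you can be wrong" and B = "you can get better":

\[
\neg W \rightarrow \neg B, \qquad B, \qquad \therefore\; W .
\]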

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T21:44:32.048Z · LW(p) · GW(p)

Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn't clear, especially since you agreed that it's an appeal to consequences.

Right. Since I consider epistemic rationality, like any other tool, an arrangement that brings about what I prefer, in itself or instrumentally, I didn't see the "appeal to consequences" of a belief as sufficiently distinct from the desire to ensure the truth of the belief.

comment by timtyler · 2009-05-27T19:52:05.762Z · LW(p) · GW(p)

Human values are frequently in conflict with each other - which is the main explanation for all the fighting and wars in human history.

The explanation for this is pretty obvious: humans are close relatives of animals whose main role in life has typically been ensuring the survival and reproduction of their genes.

Unfortunately, everyone behaves as though they want to maximise the representation of their own genome - and such values conflict with the values of practically every other human on the planet, except perhaps for a few close relatives - which explains cooperation within families.

This doesn't seem particularly complicated to me. What exactly is the problem?

comment by Mike Bishop (MichaelBishop) · 2009-05-27T17:16:23.964Z · LW(p) · GW(p)

It would be great if you could expand on this.

comment by Mike Bishop (MichaelBishop) · 2009-05-27T17:28:35.825Z · LW(p) · GW(p)

That brings me to a LessWrong problem. Sure, this is Eliezer's blog - but there seems to be much more uncritical parroting of his views among the commentators than is healthy.

You may be right. If so, fixing it requires greater specificity. If you have time to write top-level posts that would be great. Regardless, I value the contributions you make in the comments.

comment by Mike Bishop (MichaelBishop) · 2009-05-27T17:25:19.470Z · LW(p) · GW(p)

Some people tend to value things that people happen to have in common; others are more likely to value things which people have less in common.

comment by AndrewKemendo · 2009-05-27T18:53:20.521Z · LW(p) · GW(p)

I contend otherwise. The utilitarian model comes down to a subjective utility calculation which is impossible (I use the word impossible realizing the extremity of the word) to do currently. This can be further explicated somewhere else, but without an unbiased consciousness - one which does not fall prey to random changes of desires, misinterpretations, or miscalculations (in other words, the AI we wish to build) - there cannot be a reasonable calculation of utility such that it would accurately model a basket of preferences. As a result it is neither a reasonable nor a reliable method for determining outcomes or understanding individual goals.

True, there may be instances in which a crude utilitarian metric can be devised which accurately represents reality at one point in time; however, the consequential argument seems to divine that the accumulated outcome of any specific action taken through consequential thought will align reasonably, if not perfectly, with the predicted outcome. This is how utilitarianism epistemologically fails - the outcomes are impossible to predict. Exogeny, anyone?

In fact, what seems to hold truest to form in terms of long-term goal and short-term action setting is the virtue ethics which Aristotle so eloquently explicated. This is how, in my view, people come to their correct conclusions while falsely attributing their positive outcomes to other forms such as utilitarianism. E.g., someone thinking "I think that the outcomes of this particular decision will be to my net benefit in the long run because this will lead to that, etc." To be sure, it is possible that a utilitarian calculation could be in agreement with the virtue of the decision if the known variables are finite and the exogenous variables are by and large irrelevant; however, it would seem to me that when the variables are complicated past currently available calculations, understanding the virtue behind an action or behavior, or the virtues indigenous to the actor, will yield better long-term results.

It is odd because objective Bayesian probability is rooted in Aristotelian logic which is predicated on virtue ethics, and since Eliezer seems to be very focused on Bayesian probability that would seem to conflict with consequential utilitarianism.

However, I may be reading the whole thing wrong.

Edit: If there is significant disagreement, please explicate so I can see how my reasoning is not clear or believed to be flawed.

Replies from: pengvado
comment by pengvado · 2009-05-28T00:01:30.851Z · LW(p) · GW(p)

Whether a given process is computationally feasible or not has no bearing on whether it's morally right. If you can't do the right thing (whether due to computational constraints or any other reason), that's no excuse to go pursue a completely different goal instead. Rather, you just have to find the closest approximation of right that you can.

If it turns out that e.g. virtue ethics produce consistently better consequences than direct attempts at expected utility maximization, then that very fact is a consequentialist reason to use virtue ethics for your object-level decisions. But a consequentialist would do so knowing that it's just an approximation, and be willing to switch if a superior heuristic ever shows up.

See Two-Tier Rationalism for more discussion, and Ethical Injunctions for why you might want to do a little of this even if you can directly compute expected utility.

It is odd because objective Bayesian probability is rooted in Aristotelian logic which is predicated on virtue ethics

Just because Aristotle founded formal logic doesn't mean he was right about ethics too, any more than about physics.

Replies from: AndrewKemendo
comment by AndrewKemendo · 2009-05-28T05:27:06.578Z · LW(p) · GW(p)

Rather, you just have to find the closest approximation of right that you can.

This assumes that we know on which track the right thing to do is. You cannot approximate if you do not even know what it is you are trying to approximate.

You can infer, or state, that maximizing happiness is what you are trying to approximate; however, that may not in fact be the right thing.

I am familiar with two-tier rationalism and all other consequentialist philosophies. All must boil down eventually to a utility calculation or an appeal to virtue - as the second tier does. One problem with the Two Tier solution as it is presented is that its solutions to the consequentialist problems are based on vague terms:

Must be moral principles that identify a situation or class of situations and call for an action in that/those situation(s).

Ok, WHICH moral principles, and based on what? How are we to know the right action in any particular situation?

Or on virtue:

Must guide you in actions that are consistent with the expressions of virtue and integrity.

I do take issue with Alicorn's definition of virtue-busting, as it relegates virtue to simply patterns of behavior.

Therefore in order to be a consequentialist you must first answer "What consequence is right/correct/just?" The answer then is the correct philosophy, not simply how you got to it.

Consequentialism then may be the best guide to virtue but it cannot stand on its own without an ideal. That ideal in my mind is best represented as virtue. Virtue ethics then are the values to which there may be many routes - and consequentialism may be the best.

Edit: Seriously, people, if you are going to downvote my reply then explain why.

comment by HalFinney · 2009-05-29T21:14:42.564Z · LW(p) · GW(p)

I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.

The second is that many participants here seem to see LW as being about more than helping each other eliminate errors in our thinking. Rather, they see a material probability that LW could become the core of a world-changing rationalist movement. This then motivates a higher degree of participation than would be justified without the prospect of such influence.

To the extent that this (perhaps false) hope may be underlying the motivations of community members, it would be good if we discussed it openly and tried to realistically assess its probability.

comment by pjeby · 2009-05-26T20:24:53.890Z · LW(p) · GW(p)

Where do you think Less Wrong is most wrong?

That it's not aimed at being "more right" -- which is not at all the same as being less wrong.

To be more right often requires you to first be more wrong. Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.

In contrast, you can be "less wrong" just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG -- and it's often hard to win at all -- if you never place any bets.

Replies from: JamesCole, timtyler, Peter_de_Blanc, timtyler, JamesCole, HughRistik, JGWeissman, conchis
comment by JamesCole · 2009-05-27T07:25:54.364Z · LW(p) · GW(p)

This is perhaps a useful distinction:

When it comes to knowledge of the world you want to be more right.

But when it comes to reasoning I do think it is more about being less wrong... there are so many traps you can fall into, and learning how to avoid them is so much of being able to reason effectively.

Replies from: Eliezer_Yudkowsky
comment by timtyler · 2009-05-26T21:38:56.539Z · LW(p) · GW(p)

The group title is attempting to be modest - which is cool.

comment by Peter_de_Blanc · 2009-05-26T21:52:44.906Z · LW(p) · GW(p)

Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.

Disagree. You don't have to believe your new model or hypothesis.

Replies from: steven0461
comment by steven0461 · 2009-05-26T22:19:16.891Z · LW(p) · GW(p)

Indeed. It seems that PJEby is using a definition of "wrong" according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He's right that we should be prepared to sometimes be wrong in that sense. But I'm not convinced anyone else is interpreting "less wrong" in that way.

Replies from: pjeby
comment by pjeby · 2009-05-26T23:06:53.695Z · LW(p) · GW(p)

It seems that PJEby is using a definition of "wrong" according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition.

No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren't "true", no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better. (A la deBono's notion of "proto-truth" - i.e., a truth you accept as provisional, rather than absolute.)

(DeBono's notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that's knowingly more wrong. His "provocative operator" (later renamed "green hat thinking") is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)

Replies from: Eliezer_Yudkowsky, loqi, pwno
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-27T06:50:26.534Z · LW(p) · GW(p)

No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren't "true"

Irrational?

If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.

If you decide that some false beliefs are useful, you don't get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.

Perhaps you find that a false belief on this subject is more convenient, though...?

(I need to write up a canonical article on "No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it", since it's a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)

Replies from: Sideways, pjeby, conchis, timtyler
comment by Sideways · 2009-05-27T09:44:27.700Z · LW(p) · GW(p)

For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:

"I'm terrible at math."

"I hate math class."

"I'm just dumb."

That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".

Now, the "I'm terrible at math" attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A's on their homework papers--when they started to believe that maybe they were good at math, after all--the difference in their confidence and effort was night and day. It was the false belief that enabled them to "take the first steps."

Replies from: Daniel_Burfoot, Vladimir_Nesov
comment by Daniel_Burfoot · 2009-05-27T14:36:24.042Z · LW(p) · GW(p)

"I'm terrible at math"

"I'm just dumb."

I think this phenomenon illustrates a very widespread misunderstanding of what math is and how ones becomes good at it. Consider the following two anecdotes:

1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at Greek" and "just dumb".

2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is "terrible at math" and "just dumb".

Anecdote 1) just seems ridiculous. Of course if you walk into a language class that's out of your depth, you're going to be lost, everyone knows that. Every normal person can learn every natural language; there's no such thing as someone who's intrinsically "terrible at Greek". The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can't. This idea seems absurd to me: there is no "math gene"; there are no other examples of skills that some people can get and others not.

Replies from: Apprentice, Sideways
comment by Apprentice · 2009-05-27T15:18:02.709Z · LW(p) · GW(p)

It's actually anecdote 1 that seems plausible to me and anecdote 2 that does not.

I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they'd never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they'd found so hard/boring when learning Spanish. (Of course it's also possible that my classes just sucked.)

By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they're terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they're bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way - maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I'm sure that holds for the special case of mathematical aptitude.

I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it - but there's a far simpler theory which has more explanatory power: Some people are "just dumb". I personally find that a liberating theory to operate under. A lot of my "aha moments" seem to involve either the realization that "yes, people really are that stupid" or the realization that "yes, I really am that stupid".

comment by Sideways · 2009-05-27T16:17:18.317Z · LW(p) · GW(p)

Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous. So there are two skills required:

1) The discipline to study and practice a technique until you understand it and can apply it easily.
2) The ability to close the inferential gap between one technique and the next.

The second is the source of trouble. I can sit in (and have sat in) on a single day's instruction in a language class and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they're "terrible at math" after an experience like that!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-27T16:23:17.449Z · LW(p) · GW(p)

Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they're taught a second technique that builds on the previous.

How is that unlike other subjects? Seems pretty universal.

comment by Vladimir_Nesov · 2009-05-27T10:26:33.272Z · LW(p) · GW(p)

An example of dark arts used for a good cause. The problem is that the children weren't strong enough to grasp the concept of potentially becoming better at math, of it being true that enthusiasm would improve their results.

They can't feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.

Replies from: Sideways
comment by Sideways · 2009-05-27T16:36:56.052Z · LW(p) · GW(p)

Vladimir, the problem has nothing to do with strength--some of these students did very well in other classes. Nor is it about effort--some students had already given up and weren't bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn't solve the problem.

The problem was simply that they believed "math" was impossible for them. The best way to get rid of that belief--maybe the only effective way--was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn't suffice.

If your definition of "the dark arts" is so general that it includes giving an easy homework assignment, especially when it's the best solution to a problem, I think you've diluted the term beyond usefulness.

comment by pjeby · 2009-05-27T17:17:19.063Z · LW(p) · GW(p)

If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.

Ah, and where's your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?

If you decide that some false beliefs are useful, you don't get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.

I'm not clear if you're being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are "wrong".

Meanwhile, I think that you're also still assuming that "believe" and "think true" are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.

In the instrumental sense, belief is merely acting as if something is true -- which is not the same thing as thinking it's actually true.

The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it's permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-27T19:49:07.188Z · LW(p) · GW(p)

Ah, and where's your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?

Intermediate level: Rational evidence. I've learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.

Replies from: pjeby
comment by pjeby · 2009-05-28T00:37:08.569Z · LW(p) · GW(p)

I've learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain.

Me too. Which is why I find it astounding that you appear to be arguing against testing things.

The difference between my "bare minimum" and yours is that I've learned not to consider mental techniques as being tested unless I have personally tested them using a "shut up and do the impossible" attitude. A statistically-validated study is too LOW a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.

If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases... particularly the biases that lead them to avoid testing things in the first place.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-28T02:48:08.915Z · LW(p) · GW(p)

Okay... first, "shut up and do the impossible" may sound like it has a nice ring to you, but there's something specific I mean by it - a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You're talking enthusiasm or tsuyoku naritai. "Shut up and do the impossible" is for "reduce qualia to atoms" or "build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together". It is not for testing P. J. Eby's theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.

Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I'm interested in making another try at these things, I'll go looking at such papers as have been written in the field. You, I'm sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven't caught up on your reading. I can understand why you don't understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. "Just try my things!" you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort.

It is your failure to understand why the achievements of others are important - why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients, that is your failure to understand the different standards this community lives by - and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there's a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for whom, really is necessary. This is not generally appreciated by human beings, and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it's one of those things so basic that I tend to take it for granted instead of writing about it.

As for your idea that others' failure to pay attention to you in particular indicates a willpower failure on their part... that's what we call "egocentric biases in availability", namely, you think you are a much larger part of others' mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven't earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.

Replies from: pjeby
comment by pjeby · 2009-05-28T04:49:12.455Z · LW(p) · GW(p)

But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary.

Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.

"Just try my things!" you say,

I've been saying to try something. Anything. Just test something. Yes, I've suggested some ways for testing things, and some things to test. But most of them are not MY things, as I've said over and over and over.

At this point I've pretty much come to the conclusion that it's impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to "try my things"... when in fact I've bent over backwards to point as much as possible to other people's things. Believe it or not, I didn't come here to promote my work or business.

I don't care if you test my things. They're not "my" things anyway. I'm annoyed that you think I don't understand science, because it shows you're rounding to the nearest cliche.

I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the "somatic marker hypothesis", if you haven't already).

Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement -- we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.

True, I do not have a quantified scale for these markers, but they are nonetheless quantifiable -- and the approach is a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.

If someone wanted to turn that into controlled science, they'd have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you'd need to validate a somatic marker scale against some other, more subjective scale that's already accepted, possibly in the context of some therapy that's also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can't be done in principle.

When I advocate that people try things, I mean that they should employ more-objective means of measurement -- and on far-shorter timescales -- than are traditionally used in the self-help field.

When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn't take more than 30 minutes after learning the technique to know if it's any good or not, because I have a way of measuring it that doesn't depend on me doing any guessing. Either I still flinch or I don't. Either I get a sinking feeling in my gut or I don't. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer's theory of animal magnetism.)

I don't know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I've consistently advocated here, it's that it's possible to test self-help techniques by way of empirical observation of somatic marker responses both "before" and "after". But even this is not "my" idea.

The somatic marker hypothesis is cutting-edge neuroscience -- it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.

But for individual use, it has the advantage of being eminently testable.

Regarding the rest of your comment, I don't see how I can respond, since as far as I can tell, you're attacking things I never said... and if I had said them, I would agree with your impeccable critique of them. But since I didn't say them... I don't see what else I can possibly say.

comment by conchis · 2009-05-27T08:36:21.501Z · LW(p) · GW(p)

Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?

I, for one, am interested in whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.

comment by timtyler · 2009-05-27T07:46:03.053Z · LW(p) · GW(p)

Believing things that aren't true can be instrumentally rational for humans - because their belief systems are "leaky" - lying convincingly is difficult - and thus beliefs can come to do double duty by serving signalling purposes.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-27T08:02:32.814Z · LW(p) · GW(p)

Yes, this is indeed the sort of argument that I'm not at all interested in, and naming this site "Less Wrong" instead of "More Wrong" reflects this. I'm going to find where the truth takes me; let me know how that lies thing works out - though I reserve the right not to believe you, of course.

Replies from: Psychohistorian, timtyler, aluchko, jimmy
comment by Psychohistorian · 2009-05-27T12:22:40.207Z · LW(p) · GW(p)

Hypothetical (and I may expand on this in another post):

You've been shot. Fortunately, there's a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he's got everything he needs except a painkiller of any kind. The only effect of a painkiller would be on your (subjective) experience of pain.

He can either:

A. Say, "Look, I don't have any painkiller, but I'm going to have to operate anyhow."

B. Take some opaque, saline (or otherwise totally inert) IV, tell you it's morphine, and administer it to you.

Which do you prefer he does? Knowing what I know about the placebo effect, I'd have to admit I'd rather be deceived. Is this unwise? Why?

Admittedly, I haven't attained a false conclusion via my epistemology. It's probably wise to generally trust doctors when they tell you what they're administering. So it seems possible to want to have a false belief, even while wanting to maintain an efficient epistemology. This might not generalize to Pjeby's various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. "Lie to me about what's in that IV!" might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)

On the other hand, I'm not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.

Replies from: Eliezer_Yudkowsky, JamesAndrix
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-27T19:32:17.326Z · LW(p) · GW(p)

The doctor should say "This is the best painkiller I have" and administer it. If the patient confronts the question, it's already too late.

Replies from: Liron, Psychohistorian, pjeby
comment by Liron · 2009-05-27T19:43:38.749Z · LW(p) · GW(p)

Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.

comment by Psychohistorian · 2009-05-27T23:25:16.842Z · LW(p) · GW(p)

Erased my previous comment. It missed the real point.

If you think the doctor should say, "This is the best painkiller I have," that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to believing that it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact that you want to believe something that is false.

If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.

(Yes, I realize the irony in using the expression "sure as Hell.")

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-28T08:46:38.199Z · LW(p) · GW(p)

"Pain will go away" is a true belief for this situation.

comment by pjeby · 2009-05-28T01:24:30.191Z · LW(p) · GW(p)

The doctor should say "This is the best painkiller I have" and administer it.

The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves making a lot of artfully vague statements like "you may notice some sensation happening now" and then amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.

(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that's the only painkiller you have, it'd be silly not to use it.)

comment by JamesAndrix · 2009-05-27T16:58:25.765Z · LW(p) · GW(p)

Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.

The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.

I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.

comment by timtyler · 2009-05-27T12:02:40.195Z · LW(p) · GW(p)

I was trying to explain why it can be instrumentally rational for humans to believe things that aren't true.

For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.

Lying is often dangerous for humans - because the other humans have built-in lie detectors.

I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin - or indeed, freely expressing your opinions - thereby getting ostracised, excommunicated - or whatever.

Believing the truth is not my main goal - nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.

What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives - probably for signalling purposes. Not necessarily lying - they might actually believe themselves to be truth-seekers - but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes - rather like what happens to priests.

In the first case, they are behaving hypocritically - and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy - though there's a fair chance that this will be rejected.

Replies from: pjeby
comment by pjeby · 2009-05-27T17:38:32.695Z · LW(p) · GW(p)

What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives - probably for signalling purposes. Not necessarily lying - they might actually believe themselves to be truth-seekers - but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes - rather like what happens to priests.

Having been in this circumstance in the past -- i.e., for most of my life believing myself to be such a truth-seeker -- I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.

It's what Robert Fritz, in his books, calls an "ideal-belief-reality conflict" -- a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideas to others as well as striving to live up to them yourself.

Of course, you can have such a conflict about tons of things, but pretty much, with anybody who has an Ideal In Capital Letters -- something that they defend with zeal -- you know this mechanism is at work.

The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.

The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I've squashed dozens in myself, including several relating to truth and rightness and fairness and such. They're also a major driving force in chronic procrastination, at least in my clients.

In the first case, they are behaving hypocritically - and I would prefer it if they stopped deceiving me about their motives.

They're not consciously deceiving anyone; they're sincere in their belief, despite the fact that this sincerity is a deception mechanism.

In the second case, I am inclined to offer therapy - though there's a fair chance that this will be rejected.

Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our "Ideal" is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don't want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)

So someone who's not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They're just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)

Replies from: MichaelBishop, timtyler
comment by Mike Bishop (MichaelBishop) · 2009-09-22T22:20:47.888Z · LW(p) · GW(p)

I agree that people can take "really good ideas" too far, but I'm not satisfied by the distinction you draw.

The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.

ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.

Replies from: pjeby
comment by pjeby · 2009-09-23T17:55:03.092Z · LW(p) · GW(p)

ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.

Only if you're speaking in an abstract way that's divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something "bad" is different from labeling it "not very good", in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)

Some people have a hard time grokking this, because intellectually, it's easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others' attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.

However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you've got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.

comment by timtyler · 2009-05-27T20:38:57.067Z · LW(p) · GW(p)

Thanks for sharing.

It all makes me think of the beauty queens - and their wishes for world peace.

comment by aluchko · 2009-05-27T17:44:32.556Z · LW(p) · GW(p)

One of the concepts I've been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them but in identifying and circumventing them.

Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and that trying to do so tempts akrasia.

Consider a scenario where you can spend $1000 for a 50% shot at winning $2500. It's a definite win in expectation, but turning over the $1000 is tough because of how we weigh loss (if I recall correctly, losses are weighted about twice as heavily as gains). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you're getting back $1250 for sure. It's an incorrect belief, but one I'd probably use, as I wouldn't have to expend willpower overcoming my faulty loss-aversion circuitry.
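To make the arithmetic concrete, here is a minimal sketch of that scenario; the 2x loss weight is a round figure often cited for loss aversion, used here as an assumption rather than a measured constant:

    # Sketch of the $1000-for-a-50%-shot-at-$2500 bet, comparing raw expected
    # profit with a loss-averse evaluation (loss weight of 2 is an assumption).
    stake = 1000        # amount handed over
    prize = 2500        # gross payout if the bet wins
    p_win = 0.5

    # Risk-neutral expected profit: positive, so the bet is a "win" in expectation.
    expected_profit = p_win * (prize - stake) + (1 - p_win) * (-stake)

    # Loss-averse "felt" value: the losing branch is weighted twice as heavily.
    loss_aversion = 2.0
    felt_value = p_win * (prize - stake) + (1 - p_win) * (-loss_aversion * stake)

    print(f"expected profit: {expected_profit:+.0f}")  # +250
    print(f"felt value:      {felt_value:+.0f}")       # -250

Under these numbers the bet has a positive expected profit but a negative felt value, which is the gap the "$1250 for sure" reframing is meant to paper over.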

Which approach would you use?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-27T18:05:23.277Z · LW(p) · GW(p)

Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. It's a definite win

Not true; $2500 is not necessarily 2.5 times as useful as $1000.

http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility
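For concreteness, a small sketch of how diminishing marginal utility can flip the decision; the log-utility curve and the $2000 starting wealth are illustrative assumptions, not figures from the comment:

    # Sketch: the same bet evaluated under log utility (assumed curve) with a
    # hypothetical $2000 bankroll. Neither figure comes from the discussion.
    import math

    wealth, stake, prize, p_win = 2000.0, 1000.0, 2500.0, 0.5
    u = math.log  # diminishing marginal utility: each extra dollar is worth less

    eu_bet = p_win * u(wealth - stake + prize) + (1 - p_win) * u(wealth - stake)
    eu_decline = u(wealth)

    print(f"expected utility if you bet:     {eu_bet:.3f}")      # ~7.534
    print(f"expected utility if you decline: {eu_decline:.3f}")  # ~7.601

With these assumptions, declining actually comes out ahead despite the bet's +$250 expected profit, which is the sense in which $2500 need not be 2.5 times as useful as $1000.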

People overcome innate but undesired drives all the time, like the urge to commit violence out of anger. Your former approach actually doesn't sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there's no single thing in the mind that is a "belief" (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.

comment by jimmy · 2009-05-27T08:34:10.470Z · LW(p) · GW(p)

Well, what you want to do (just about by definition) is be rational in the instrumental sense.

I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self-deceive in general, because you can't even know what you're missing and there is reason to suspect that it is a lot.

For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....

Maybe you just meant "I'm not interested in that kind of argument because it is so clearly wrong as not to be worth my time", but it seems to come across as "I don't care even if it's true", and that's probably where the downvote came from.

Replies from: pjeby
comment by pjeby · 2009-05-27T17:52:49.620Z · LW(p) · GW(p)

For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....

This is a confusion based on multiple meanings of "belief", along the lines of the "does the tree make a sound?" debate. Depending on your definition of belief, the above is either trivial or impossible.

For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it's really blue afterward, has nothing to do with your "believing" in that moment. "Believe" is a verb -- something that you DO, not something that you have.

In common parlance, we think that belief is unified and static -- which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it's still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.

In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone "believes" they're not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.

Notice that these two parts are separate: there is a thought that occurs, and then it is believed... i.e., passively accepted, without dispute.

Thus, there is really no such thing as "belief" - only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it's the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.

comment by loqi · 2009-05-27T02:38:05.106Z · LW(p) · GW(p)

No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren't "true", no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better.

You might well be right that there are loads of "useful falsehoods"; you might even know them personally. But you're wrong to claim that fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.

His "provocative operator" (later renamed "green hat thinking") is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.

This sounds like a good creativity hack, but I don't see what it has to do with accepting false beliefs.

Replies from: pjeby
comment by pjeby · 2009-05-27T05:56:15.928Z · LW(p) · GW(p)

This sounds like a good creativity hack, but I don't see what it has to do with accepting false beliefs.

It's an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear "more wrong" than where you started from.

You might well be right that there are loads of "useful falsehoods", you might even know them personally, but you're wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.

[boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there's tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively.

Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one's self or others, that's tied to reasonably timely feedback.

To the extent that you find the teachings of these arts to be less than "true", and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true.

However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.

And the truth is not a substitute for that, however much blind faith you put into it.

The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why... and the why is only useful if it helps you to believe in something enough to make you actually DO something.

Replies from: loqi
comment by loqi · 2009-05-27T17:52:02.558Z · LW(p) · GW(p)

It's an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear "more wrong" than where you started from.

You've really hedged your language here. Are we talking about beliefs, or "perspectives"? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing "known-good" techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don't change betting strategies just because I'm in the middle of an experiment.

Why do you think this has anything to do with me?

Because it seems that you've had more experience with LW'ers rejecting your useful falsehoods than with them rejecting useful falsehoods in general, and I guessed this was the motive behind your original complaint. I could be mistaken. If I am, I'm curious as to which "terror" you're referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.

However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.

Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just "went well" and who are now out to capitalize on it? How can we tell the difference?

To the extent that you find the teachings of these arts to be less than "true", and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true. [...] The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning.

I think the above exemplifies the mismatch between your philosophy and mine. Yes, it's incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) is the only set of tools we have when trying to figure out what wins and what doesn't. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don't see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.

Replies from: pjeby
comment by pjeby · 2009-05-27T18:34:29.432Z · LW(p) · GW(p)

Really? All self-described teachers of practical arts have the comfort of systematized winning?

I said "for all that" is not "for all of". Very different meaning. "For all that" means something like "despite the fact that", or "Although". I.e., "although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning." What's more, it's you who said "self-described" -- I referred only to people who have some systematized winning.

There are no snake-oil charlatans for whom things just "went well" and are now out to capitalize on it? How can we tell the difference?

See, that's the sort of connotation I find interesting. How is "snake oil charlatan" connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn't have things go well for them? And if they didn't want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)

If you break down what you've just said, it should be easy to see why I think this sort of "thinking" is just an irrationally-motivated reaction - the firing off of "boo" lights in response to certain buttons being pushed.

To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness.

No - I'm saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren't really testing U, you're testing U+X, where X is whatever else it is you believe about U, like, "I'm going to see if this works", or "I think this is stupid".

Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.

Now, that may sound like a defense of psychic phenomena. But it isn't. You don't need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.

And the measurement of the performer's results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective -- i.e., to make you feel better about life or be more motivated -- only the subjective experiencer can measure that bit, of course.)

Sometimes, I get clients who will say something like, "Well, I felt better, but how do I know I wasn't just imagining it?", and I have to separate out the confusion. Because what they're really saying is, "At time X I felt good, but now at time Y I'm thinking that maybe it wasn't real".

However, if the experiment was "perform procedure Z at time X-1" with a prediction that this would result in a positive experience at time X, and the positive experience did in fact occur, then procedure Z worked. And retroactively questioning it is only making you feel bad now -- it can't change how you felt at time X, although it can reconsolidate your memory so it seems like you felt worse at time X.

In other words, it's questioning yourself afterwards that's poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)

This "success at time X, questioning at time Y" pattern is really common among naturally-struggling people. It's basically the way people prevent themselves from advancing. And it doesn't matter what procedure Z is - it could be something like making a plan for their day. I'll ask, "well, did you make a plan?" And they'll be like, "well, yeah, but what if I didn't do it right?"

It's this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can't tinker with your success criteria after the fact, to make it so that you failed anyway.

Skepticism is useful before you do something, to set up the criteria for measuring something. But it's not useful while you're doing the thing, nor after you've actually done it.

The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I'm describing it's the questioning that creates the delusion, redefining the past to suit the whims of the present.

Replies from: loqi
comment by loqi · 2009-05-28T07:03:53.871Z · LW(p) · GW(p)

I referred only to people who have some systematized winning.

I did assume you held the position that these people are somehow identifiable. If your point was merely "there exist some people out there who are systematic winners"... then I'm not sure I get your point.

How is "snake oil charlatan" connected to having things go well and wanting to capitalize on it?

Because "I figured out the key to success, I succeeded, and now I want to share my secrets with you" is the story that sells, regardless of actual prior circumstance or method.

Would you want to be taught by someone who didn't have things go well for them? And if they didn't want to capitalize on it in some fashion, why would they be teaching it?

I don't think you understand why I bring up charlatans. This is a signaling problem. You're right... I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there's going to be a lot of faking going on.

If you break down what you've just said, it should be easy to see why I think this sort of "thinking" is just irrationally-motivated reaction - the firing off "boo" lights in response to certain buttons being pushed.

My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), "teachers of the practical arts [...] have the comfort of systematized winning". It seems to me that this "comfort" is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing "useful" signal from noise. If you do have legitimate insights, you're certainly not making yourself any more accessible by pointing to others in the field. If your point was merely "some deluded people win"... then I'm not sure I get your point.

No - I'm saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren't really testing U, you're testing U+X, where X is whatever else it is you believe about U, like, "I'm going to see if this works", or "I think this is stupid".

This response isn't really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I'll try to explain what I mean by "T". You say "skepticism is useful before you do something", and it's precisely this sort of skepticism that T represents. You leapt straight into explaining how I've just got to embrace U in order to make it work, but that doesn't address why I'm even considering U in the first place. Hence "I first need a true belief T that U is useful". Pardon me for a moment while I look into how useful it is to believe I'm a goat.

The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, that they will suddenly become delusional theists or something.

Again, I think you're overstating this fear, but now that you mention theism, I can't help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you've already decided to experimentally swallow... work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?

Replies from: pjeby
comment by pjeby · 2009-05-28T15:05:53.331Z · LW(p) · GW(p)

You're right... I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there's going to be a lot of faking going on.

Well, in the case of marketing and pickup at least, you can generally observe the teacher's own results, as long as you're being taught directly. For acting, you could observe the ability of the teacher's students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established "controls". (Btw, in the direct marketing industry, the "control" is just whatever ad you're currently using; it's not a control condition where you don't advertise or run a placebo ad!)

IOW, the practical arts of persuasion and belief do have at least some empirical basis. One might quibble about what counts as great or excellent acting or pickup, but anybody can tell bad acting or a failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don't always understand math or how to use it, but they're motivated to use statistical tools for split-testing.
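As a rough sketch of the kind of split test being alluded to, here is a standard two-proportion comparison; the conversion counts and the equal traffic split are invented for illustration:

    # Hypothetical split test: the currently running ad (the industry's "control")
    # vs. a challenger ad, with made-up conversion counts.
    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test: is the challenger's conversion rate really
        different from the control's, or is the gap plausibly noise?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
        return z, p_value

    z, p = two_proportion_z(conv_a=180, n_a=10000, conv_b=230, n_b=10000)
    print(f"control: {180/10000:.2%}  challenger: {230/10000:.2%}")
    print(f"z = {z:.2f}, p = {p:.3f}")  # small p: the lift is unlikely to be noise

The point is only that "did the new ad beat the control?" has a measurable, dollars-and-conversions answer, which is what gives these arts their empirical footing.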

If your point was merely "some deluded people win"... then I'm not sure I get your point.

The ancient Greeks thought fire was an element, but that didn't stop them from using fire. Developing a practical model and a "true" theory are quite often independent things. My point is that you don't need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to "act as if" certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.

For example, Seth Roberts' theory of calorie-flavor association is probably not entirely true -- but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.

"I first need a true belief T that U is useful".

Right -- and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I've already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that's sufficient T probability for me to engage in a test.

I can't help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you've already decided to experimentally swallow... work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?

Religions don't claim repeatable empirical benefits -- in fact they pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g. meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.

So, for me at least, your "T" consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times -- preferably suitable for immediate evaluation of whether something worked or it didn't.

I do have two things that most people evaluating such things don't. At first, I tried a lot of these same techniques before I understood monoidealism and somatic markers, and couldn't get them to work. But once I had even the rudiments of those ideas -- not as theory but as experience -- I got many of the same things to work quite well.

That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner than with whether or not they've learned to distinguish conscious and unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences, and the experiences themselves. (Sort of like in "Drawing On The Right Side Of The Brain", where people confuse their symbols or abstractions for faces and hair with what they're actually seeing.)

These separations are the primary skills I teach (as a vehicle for making other self-help techniques accessible), and many people require some sort of live feedback in order to learn them. There is some mild anecdotal evidence that prior experience with meditation helps -- i.e. the students who pick them up faster seem somewhat more likely to report prior meditation experience. But I haven't even tried to be rigorous about investigating that, since even non-meditators can learn the skills.

(Hm, now that I've written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I'll have to look into that.)

My, you are confident in your theories of human motivation.

If you look closely at what I said, I was explaining why I had the impression of your response that I did, not claiming that the impression was correct or trying to justify it. That's a subtlety that's hard to convey in text, I suppose.

comment by pwno · 2009-05-27T02:04:10.054Z · LW(p) · GW(p)

believing things that aren't "true", no matter how useful it may be to believe them

Why should a belief be true just because it's useful? Or are you saying that people claim a belief's usefulness isn't real, despite the evidence that it's useful?

Replies from: pjeby
comment by pjeby · 2009-05-27T02:08:18.161Z · LW(p) · GW(p)

Why should a belief be true just because it's useful? Or are you saying people are claiming a belief's usefulness is not true despite the evidence that it's useful?

Neither. I'm saying that a popular attitude of LW culture is to prefer not to "believe" the thing it's useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.

For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can't statistically validate... some even if the belief has a high probability of making itself come true in the future.

Replies from: Nick_Tarleton, pwno
comment by Nick_Tarleton · 2009-05-27T07:23:02.478Z · LW(p) · GW(p)

Neither. I'm saying that a popular attitude of LW culture is to prefer not to "believe" the thing it's useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.

My immediate reaction to this paragraph is skepticism that I can believe something, if I don't believe the evidence weighs in its favor; other people might be able to choose what they believe, but I've internalized proper epistemology well enough that it's beyond me. On reflection, though, while I think there is some truth to this, it's also a cached oversimplification that derives its strength from being part of my identity as a rationalist.

comment by pwno · 2009-05-27T02:17:07.203Z · LW(p) · GW(p)

Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse off accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that's hard to prove.

Replies from: pjeby
comment by pjeby · 2009-05-27T05:31:35.763Z · LW(p) · GW(p)

It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that's hard to prove.

Thank you for eloquently demonstrating precisely what I'm talking about.

comment by timtyler · 2009-05-26T21:36:18.742Z · LW(p) · GW(p)

Results are neither right nor wrong - they just are.

comment by JamesCole · 2009-05-27T07:23:01.882Z · LW(p) · GW(p)

In contrast, you can be "less wrong" just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG -- and it's often hard to win at all -- if you never place any bets.

To expand a little on what timtyler said, I think you're mixing up beliefs and actions.

Doing nothing doesn't make your beliefs less wrong, and placing bets doesn't make your beliefs more right (or wrong).

Wanting to be 'less wrong' doesn't mean you should be conservative in your actions.

comment by HughRistik · 2009-05-26T21:55:58.737Z · LW(p) · GW(p)

That it's not aimed at being "more right" -- which is not at all the same as being less wrong.

I've also had mixed feelings about the concept of being "less wrong." Anyone else?

Of course, it is easier to identify and articulate what is wrong than what is right: we know many ways of thinking that lead away from truth, but it is harder to know which ways of thinking lead toward it. So the phrase "less wrong" might merely be an acknowledgment of fallibilism. All our ideas are riddled with mistakes, but it's possible to make fewer mistakes, or less egregious ones.

Yet "less wrong" and "overcoming bias" sound kind of like "playing to not lose," rather than "playing to win." There is much more material on these projects about how to avoid cognitive and epistemological errors, rather than about how to achieve cognitive and epistemological successes. Eliezer's excellent post on underconfidence might help us protect an epistemological success once we somehow find one, and protect it even from our own great knowledge of biases, yet the debiasing program of LessWrong and Overcoming Bias is not optimal for showing us how to achieve such successes in the first place.

The idea might be that if we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth. Yet without any basis for moving towards the truth, we will probably just run into even more falsehood, because there are exponentially more possible crazy thoughts than sane thoughts. Process of elimination is really only good for solving certain types of problems, where the right answer is among our options and the number of false options to eliminate is finite and manageable.

If we are in search of a Holy Grail, we need a better plan than being able to identify all the things that are not the Holy Grail. Knowing that an African swallow is not the Holy Grail will certainly keep us from erroneously mistaking a bird for it, but it tells us absolutely nothing about where to actually look for the Holy Grail.

The ultimate way to be "less wrong" is radical skepticism. As a fallibilist, I am fully aware that we may never know when or if we are finding the truth, but I do think we can use heuristics to move towards it, rather than merely trying to move away from falsehood and hoping we bump into the truth backwards. That's why I've been writing about heuristics here and here, and why I am glad to see Alicorn writing about heuristics to achieve procedural knowledge.

For certain real-world projects that shall-not-be-named to succeed, we will need to have some great cognitive and epistemological successes, not merely avoid failures.

Replies from: pjeby, HughRistik
comment by pjeby · 2009-05-26T23:00:25.885Z · LW(p) · GW(p)

If we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth.

And if you play the lottery long enough, you'll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning -- I remember reading a book where a seminar presenter described an experiment he did in his seminars, of sending a volunteer out of the room while the group picked an object in the room.

After the volunteer returned, their job was to find the object; a second volunteer would ring a bell either when they got closer to it or when they moved further away, depending on the condition. Most of the time, a volunteer receiving only negative feedback gives up in disgust after several minutes of frustration, while those receiving positive feedback usually identify the right object in a fraction of the time.

In effect, learning what something is NOT only negligibly decreases the search space, despite it still being "less wrong".
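
To put a rough number on that intuition, here is a minimal sketch (the object count and the "halving" idealization are my assumptions, not part of the seminar story) comparing how fast pure elimination shrinks a search versus graded "warmer/colder" feedback:

    N = 1024  # hypothetical number of candidate objects in the room

    def expected_guesses_by_elimination(n):
        # Each wrong guess only tells you "not that one", removing a single
        # candidate, so on average you check about half of them.
        return n / 2

    def guesses_by_warmer_colder(n):
        # Graded feedback is idealized here as letting the searcher discard
        # roughly half the remaining candidates per probe, like binary search.
        guesses = 0
        while n > 1:
            n //= 2
            guesses += 1
        return guesses

    print(expected_guesses_by_elimination(N))  # ~512 guesses
    print(guesses_by_warmer_colder(N))         # 10 guesses

Under those assumptions the "that's not it" searcher needs hundreds of guesses while the "warmer/colder" searcher needs about ten, which is the gap the seminar story dramatizes.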

(Btw, I suspect you were downvoted because it's hard to tell exactly what position you're putting forth -- some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I'm also not clear how you get from the other points to "the ultimate way to be less wrong is radical skepticism", unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn't downvote you for it.)

Replies from: HughRistik
comment by HughRistik · 2009-05-27T01:03:35.167Z · LW(p) · GW(p)

Thanks, pjeby; I can see how it might be unclear what I am advocating. I've edited the sentence you quote to show that it is a view I am arguing against, and which seems implicit in an approach focused on debiasing.

In effect, learning what something is NOT only negligibly decreases the search space, despite it still being "less wrong".

Yes, this is exactly the point I was making.

Btw, I suspect you were downvoted because it's hard to tell exactly what position you're putting forth -- some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way.

Rather than trying to explain my previous post, I think I'll try to summarize my view from scratch.

The project of "less wrong" seem to be more about how to avoid cognitive and epistemological errors, than about how to achieve cognitive and epistemological successes.

Now, in a sense, both an error and a success are "wrong," because even what seems like a success is unlikely to be completely true. Take, for instance, the success of Newton's physics, even though it was later corrected by Einstein's physics.

Yet even though Einstein's physics is "less wrong" than Newton's, I think this is a trivial sense of "wrong" which might mislead us. Cognitively focusing on being "less wrong" without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists.

Being "less wrong" is a trivially correct description of what occurs in rationality, but I argue that focusing on being "less wrong" is not a complete way to actually practice rationality from the inside, at least, not a rationality that hopes to discover any novel or important things.

Of course, nobody on Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for one reason or another, there is an imbalance: much more material focuses on avoiding failure modes than on seeking success modes.

comment by HughRistik · 2009-05-26T22:39:16.411Z · LW(p) · GW(p)

At least one person seems to think that this post is in error, and I would very much like to hear what might be wrong with it.

comment by JGWeissman · 2009-05-27T01:50:07.991Z · LW(p) · GW(p)

Perhaps there are intuitive notions of "less wrong" that are different from "more right", but in a technical sense, they seem to be the same:

At this point it may occur to some readers that there's an obvious way to achieve perfect calibration - just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you're right half the time. Isn't that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is discrimination.

Suppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between "Yes" and "No". Congratulations, you're perfectly calibrated - answers for which you said "50% probability" were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. In ten experiments you said "50%" on twenty occasions - you said "50%" to Yes-1, No-1; Yes-2, No-2; .... On ten of those occasions the answer was correct, the occasions: Yes-1; No-2; No-3; .... And on ten of those occasions the answer was incorrect: No-1; Yes-2; Yes-3; ...

Now I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I'm more poorly calibrated than you. I said "90%" on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate "90%" into 80%, knowing that when I'm 90% sure I'm right about 80% of the time. But the probability you assigned to the final outcome is 1/2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn't, but my better discrimination between right and wrong answers more than makes up for it. My final score is higher - I assigned a greater joint probability to the final outcome of the entire experiment. If I'd been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 * 0.2^2, 0.006.

Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
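
The arithmetic in the quoted passage is easy to reproduce. Here is a minimal Python sketch (the function and variable names are mine; the numbers are the ones from the quote) of the joint-probability score that rewards discrimination even at some cost in calibration:

    from math import prod

    def joint_probability(p_assigned_to_truth):
        """Probability assigned to the full sequence of correct answers,
        given the probability assigned to the true answer on each question."""
        return prod(p_assigned_to_truth)

    # Coin-flipper: 50% to the true answer on each of ten questions.
    print(joint_probability([0.5] * 10))             # ~0.00098, the quote's 1/1024

    # Overconfident discriminator: 90% on each favored answer, wrong twice,
    # so the true answer got 0.9 eight times and 0.1 twice.
    print(joint_probability([0.9] * 8 + [0.1] * 2))  # ~0.0043, the quote's 0.004

    # Same discrimination, better calibrated at 80%.
    print(joint_probability([0.8] * 8 + [0.2] * 2))  # ~0.0067, the quote's 0.006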

Replies from: Nick_Tarleton, pjeby
comment by Nick_Tarleton · 2009-05-27T07:52:31.187Z · LW(p) · GW(p)

Perhaps there are intuitive notions of "less wrong" that are different from "more right", but in a technical sense, they seem to be the same:

True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there's a difference between affirming fewer false beliefs and more true ones.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-27T22:04:58.471Z · LW(p) · GW(p)

The fact that psychology can explain how the phrase "less wrong" can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.

Replies from: pjeby
comment by pjeby · 2009-05-28T01:09:38.652Z · LW(p) · GW(p)

It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.

That's what I've been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it's well established that humans can have both utility and disutility, that they're not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff?

It's not rational to expect human beings to treat "less wrong" as meaning the same thing (in behavioral terms) as "more right". Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think "avoiding a predator" versus "hunting for food".

The idea that we can simultaneously have approach and avoidance behaviors and they're differently-motivating is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc.

Heck, I'd be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms "less wrong" and "more right", no matter how "rational" they rate themselves to be.

comment by pjeby · 2009-05-27T02:04:28.128Z · LW(p) · GW(p)

Perhaps there are intuitive notions of "less wrong" that are different from "more right"

You bet: there's the one where you can be "less wrong" by never believing anything, because there are more possible false beliefs than true ones. You have now achieved perfect less-wrongness, at the cost of never having any more-rightness.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-27T03:49:33.061Z · LW(p) · GW(p)

You missed the point. The intuitive meaning of "less wrong" you describe is a caricature of the ideal of this community.

If by "never believing anything", you mean "don't assign any probability to any event", well then we give a person who does that a score of negative infinity, as wrong as it gets.

If you mean they evenly distribute the probability mass amongst all possibilities, that is what we consider maximum entropy, a standard so low that anything worse might be considered "reversed intelligence". As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.

Replies from: pjeby
comment by pjeby · 2009-05-27T06:17:25.125Z · LW(p) · GW(p)

The intuitive meaning of "less wrong" you describe is a caricature of the ideal of this community.

It's not a caricature of the actual behavior of many of its members.... which notably does not live up to that ideal.

If by "never believing anything", you mean "don't assign any probability to any event", well then we give a person who does that a score of negative infinity, as wrong as it gets.

No, I mean choosing to never consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true... especially with respect to the things we would prefer to believe are true about ourselves and others.

A segment of LW culture applauds the detection and management of superficial biases while being ludicrously blind to the massive bias of the very framework it operates in: the one where truth and reason must prevail at all costs, and where the idea of believing something false -- even for a moment, even in a higher cause -- is unthinkable.

Is that a caricature of the Bayesian ideal? No kidding. But I'm not the one who's drawing it.

As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.

What I'm specifically referring to here is the brigade whose favorite argument is that something or other isn't yet proven "true", and that they should therefore not try it... especially if they spend more time writing about why they shouldn't try something, than it would take them to try it.

Heck, not just why they shouldn't try something, but why no one should ever try anything that isn't proven. Why, thinking a new thought might be dangerous!

And yes, someone actually argued that, in the context of a thread talking about purely-mental experiments that basically amounted to thinking. (Sure, they left themselves weasel room to argue that they weren't saying thoughts were dangerous, and yet they still used it as a fully general argument, applied to the specific case of experimenting with a thought process.)

What's that saying about how, if given a choice between changing their mind and trying to prove they don't need to, most people get busy on the proof?

Replies from: JGWeissman
comment by JGWeissman · 2009-05-27T07:37:39.596Z · LW(p) · GW(p)

So, "never believing anything" means having unwavering certainty?

What I'm specifically referring to here is the brigade whose favorite argument is that something or other isn't yet proven "true", and that they should therefore not try it... especially if they spend more time writing about why they shouldn't try something, than it would take them to try it.

Without knowing what "brigade" or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful. Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose. They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work. They might not actually have the particular problem the technique is supposed to solve, and are seeking evidence about if it works for people who do have the problem.

Replies from: Nick_Tarleton, pjeby
comment by Nick_Tarleton · 2009-05-28T00:30:51.263Z · LW(p) · GW(p)

I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.

Good point, but a priori I wouldn't expect a self-help technique to be harmful in a way that's either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?

Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose.

Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn't matter. A greater number of options has been shown to lead to less willingness to choose anything (e.g.); beware. (FWIW, I suspect this has to do with a general heuristic to do the most defensible thing instead of the best thing.)

They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work.

Strongly agreed. Generally, though, I agree with pjeby's conclusion (tentatively, but only because so many others here disagree).

Replies from: JGWeissman
comment by JGWeissman · 2009-05-28T03:13:39.764Z · LW(p) · GW(p)

Good point, but a priori I wouldn't expect a self-help technique to be harmful in a way that's either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?

So, you want an example of a technique that I can argue is harmful, but where it is difficult to predict that harm? You want a known unknown unknown? I don't think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate whether the risk is worth the potential benefit.

Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn't matter.

Suppose you have 10 tricks that you might try to solve a particular problem, and that it might take a day to try one trick and evaluate whether it worked for you. Would it be a good idea to spend some time figuring out if one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. And if, in discussing one aspect of a trick that you think would never work, it turns out you were right that what you understood would not work, but the actual trick is something different, you have not just saved a lot of time; you have kept yourself from losing the opportunity to try the real trick.
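
To make that arithmetic concrete, here is a minimal sketch (the per-trick success probabilities are made up purely for illustration) of the expected number of days until one trick works, trying one per day, in a most-promising-first order versus the reverse:

    def expected_days(success_probs):
        """Expected days until the first trick works, trying one per day in
        the given order; if none works, all len(success_probs) days are spent."""
        expected, p_all_failed = 0.0, 1.0
        for day, p in enumerate(success_probs, start=1):
            expected += day * p_all_failed * p
            p_all_failed *= (1 - p)
        return expected + len(success_probs) * p_all_failed

    # Hypothetical per-trick success probabilities for one person.
    probs = [0.6, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1, 0.05, 0.05, 0.05]

    print(expected_days(probs))        # most-promising-first: ~2.4 days
    print(expected_days(probs[::-1]))  # least-promising-first: ~6.7 days

Even a rough prior ranking changes the expected cost of experimentation by a factor of a few, which is the value of spending some time figuring out which trick stands out.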

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-28T03:54:20.607Z · LW(p) · GW(p)

So, you want an example a technique that I can argue is harmful, but it is difficult to predict that harm? You want a known unknown unknown? I don't think I can provide that. But if you look at my assessment of the the keeping cookies available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate if the risk is worth the potential benifet.

No, an example of a technique that is harmful, but whose harm would have been difficult for a reasonable person to predict in advance. The potential downside of the cookie trick is easy to notice and easy to reverse (well, I guess you can't easily reverse gaining epsilon weight, but you can limit it to epsilon), so as a reason not to try it, it's very weak.

Would it be a good idea to spend some time to figure out if one of the tricks stands out from the other as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or, if in discussing one aspect of the trick that you think would never work, and it turns out you were right, what you understood would not work, and the actually trick is something different, you have not just saved a lot of time, you have prevented yourself from losing the opportunity to try the real trick.

I take my point back. If you can only try one thing, it makes sense to just act if there is only one option, but to demand a good reason before wasting your chance if there are multiple options. (Formally, this is because the opportunity cost of failure is greater in the latter case.) Realistically, "willpower to engage in psychological modification" seems like it would often be a limiting factor producing this effect; still, I would expect irrational choice avoidance to be a factor in many cases of people demanding a reason to favor one option.

comment by pjeby · 2009-05-27T18:01:03.390Z · LW(p) · GW(p)

Without knowing what "brigade" or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.

My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where it is more often evidence that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole "rationality" business to make sure it's more helpful than harmful?

It's really ironic that optimism is as much a mind-killer here as politics and religion. Hell, the fact that religion can be shown to have empirically positive effects on people's lives is often viewed here as a depressing problem, rather than an opportunity to learn something about how brains work. The problem of understanding the god-shaped hole is something people talk a lot about, but very few people are actually doing anything about it.

Replies from: JGWeissman, Vladimir_Nesov
comment by JGWeissman · 2009-05-27T22:36:52.524Z · LW(p) · GW(p)

What is the evidence that empirical rationality is more likely to be helpful than harmful?

Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, and powerful computers with lots of useful software, because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.

Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don't.

And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right. You might discover the greatest anti-akrasia trick ever, but if you can't explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping. Of course, you could take the opportunity to figure out how to explain it better, though it would require you to "consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true".

Replies from: pjeby, pjeby
comment by pjeby · 2009-05-28T00:58:16.941Z · LW(p) · GW(p)

And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right.

Two things I forgot in my other reply: first, testing on yourself is a higher standard than peer review, if your purpose is to find something that works for you.

Second, if this actually were about "my" ideas (and it isn't), I've certainly effectively communicated many of them to the extent of verifiability, since many people have reported here and elsewhere about their experiments with them.

But very few of "my" ideas are new in any event -- I have a few new approaches to presentation or learning, sure, maybe some new connections between fields (ev psych + priming + somatic markers + memory-prediction framework + memory reconsolidation, etc.), and a relatively-new emphasis on real-time, personal empirical testing. (I say relatively new because Bandler was advocating extreme testing of this sort 20+ years ago, but for some reason it never caught on in the field at large.)

And I'm not aware that any of these ideas is particularly controversial in the scientific community. Nobody's pushing for more individual empirical testing per se, but the "brief therapy" movement that resulted in things like CBT is certainly more focused in that direction than before.

(The reason I stopped even bothering to write about any of that, though, is simply that I ended up in some sort of weird loop where people insist on references, and then ignore the ones I supply, even when they're online papers or Wikipedia. Is it any wonder that I would then conclude they didn't really want the references?)

comment by pjeby · 2009-05-27T23:39:03.798Z · LW(p) · GW(p)

Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.

Those are the products of rationalism. I'm asking about evidence that the practice of (extreme) rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.

Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don't.

It is if you also apply the status quo bias to choose which evidence to count.

You might discover the greatest anti-akrasia trick ever, but if you can't explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping

I really wish people wouldn't conflate the discussion of learning and attitude in general with the issue of specific techniques. There is plenty of evidence for how attitudes (of both student and teacher) affect learning, yet somehow the subject remains quite controversial here.

(Edited to say "extreme rationalism", as suggested by Nick Tarleton.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-28T00:13:49.817Z · LW(p) · GW(p)

I'm asking about evidence that the practice of rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.

You should probably be asking about extreme rationality.

comment by Vladimir_Nesov · 2009-05-27T18:15:21.707Z · LW(p) · GW(p)

My point is that this is argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where more evidence is presented more often that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole "rationality" business to make sure it's more helpful than harmful?

Evidence is demanded for communicating the change in preferred decision.

If I like eating cookies, and so choose to eat cookies, it takes at least a deliberative thought to change my mind. I may have all the data, but changing a decision requires considering it. I may realize that I'm getting overweight, and that most of my calories come from cookies, so I change my mind and start preferring the decision of not eating cookies.

If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won't be able to change my mind as a result of hearing his assertion. I consider what it'd take to change my mind, and present him with a constructive request: find a few good studies supporting your claims, and show them to me. That's what it takes to change my mind, and I can think of no other obvious way for him to convince me to change this decision.

Replies from: pjeby
comment by pjeby · 2009-05-27T18:54:29.461Z · LW(p) · GW(p)

Evidence is demanded for communicating the change in preferred decision.

You mean status quo bias, like the argument against the Many-Worlds interpretation?

If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won't be able to change my mind as a result of hearing his assertion.

It's funny that you mention this, because I actually know of an author who says something just similar enough to that idea that you could maybe mistake what she says as meaning you should eat the cookies.

Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they're hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now -- in effect, they can literally procrastinate on overeating, because they could now do it "any time". There's no particular moment at which they need to eat up because they're about to be out of reach of food.

I bring this up because, if you heard this theory, and then misinterpreted it as meaning you should eat the cookies, then it would be quite logical for you to be quite skeptical, since it doesn't match your experience.

However, if you simply observed your past experience of overeating and found a correlation between times when you ate cookies and a pending separation from food (e.g. when about to go into a long meeting), I would be very disappointed in your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting.

And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you've definitely misunderstood something I've said.

(Btw, in case anyone asks, the author is Dr. Martha Beck and the book I'm referring to above is called The Four-Day Win.)

Replies from: Eliezer_Yudkowsky, JGWeissman, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-28T05:16:45.742Z · LW(p) · GW(p)

Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they're hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now -- in effect, they can literally procrastinate on overeating, because they could now do it "any time". There's no particular moment at which they need to eat up because they're about to be out of reach of food.

I strongly suspect that this trick wouldn't work on me - the problem is that I've taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, that I was just supposed to keep available and feel the availability of, but not eat, would not feel available. If it was truly genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn't convince myself it was available.

What I may try is telling myself a true statement when I'm tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help - if the basic underlying theory of eating to avoid famine is correct. Some of the Seth Roberts paradigm suggests that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day.

Another thought is that it may be a bad situation for your diet to ever allow yourself to be in food competition with someone else - to ever have two people, at least one of whom is trying to diet, eating from the same bag of snacks in a case where the bag is not immediately refilled on being consumed.

'Tis a pity that such theories will never be tested unless the diet-book industry and its victims/prey/readers become something other than what they are now; even if I were to post, saying this trick worked, it would only be one more anecdote among millions on the Internet.

Replies from: pjeby
comment by pjeby · 2009-05-28T05:34:33.219Z · LW(p) · GW(p)

That might help - if the basic underlying theory of eating to avoid famine is correct.

IIRC, she only advocated this theory for people who were binging in response to anticipated hunger, and not as a general theory of weight loss. It's only a tiny part of the book as a whole, which also discussed other emotional drivers for eating. Part of her process includes making a log of what you eat, at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having... along with a reason why the relevant thought might not be true.

I haven't tried it myself -- I actually didn't buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn't represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools.

That is, it's only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days... but just sitting in the parking lot and not actually going in.... then going in and sitting on a bike but not exercising... etc. At each stage, four days of it is supposed to be enough to make what you've already been doing a non-threatening part of your routine.

I've used the approach to implement some small habits, but nothing major as yet. Seems promising so far.

comment by JGWeissman · 2009-05-27T22:53:56.166Z · LW(p) · GW(p)

It seems that keeping cookies constantly available, so that one never feels they will be unavailable, does not involve any sort of self-deception. One can honestly tell oneself that they don't have to eat the cookie now; it will still be there later.

But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.

It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predict how a person would be affected. With this information, people can figure out if it makes sense for them to risk making their problem worse for a time, by having more cookies available, in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should try some other trick they heard about first.

Replies from: pjeby, AdeleneDawner
comment by pjeby · 2009-05-28T00:19:39.965Z · LW(p) · GW(p)

But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.

And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don't want to do something, you can always find a reason.

Sure, that doesn't mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don't take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.

Even if you don't know what the hell you're doing and try things randomly, you'll improve as long as there's some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better... but ONLY by doing something besides thinking.

After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!

I learned the hard way that my brain's confabulation -- "reasoning" -- is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it's lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.

One of my pet sayings is that "amateurs guess, professionals test". But "test" in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it's 27% likely that the problem with my car is in the spark plugs, but didn't actually test them, I'd best get another mechanic!

The best that statistics can do for the mechanic is to mildly optimize what tests should be done first... but you could get almost as much optimization by testing in easiest-first order.

It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predicts how a person would be affected.

And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn't sound very scalable to me.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-28T03:37:55.179Z · LW(p) · GW(p)

And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don't want to do something, you can always find a reason.

The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first?

And where, pray tell is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn't sound very scalable to me.

Getting volunteer subjects for a study is different than announcing a trick on the internet and expecting people to try it.

Where do you think the data is going to come from if people just try it on their own? How are you going to realize that you have suggested a trick that doesn't work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure on the grounds that people just make excuses?

Replies from: pjeby
comment by pjeby · 2009-05-28T05:19:54.921Z · LW(p) · GW(p)

How are you going to realize if you have suggested a trick that doesn't work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?

I can only assume you're implying that that's what I do. But as I've already stated, when someone has performed a technique to my satisfaction, and it still doesn't work, I have them try something else. I don't just say, "oh well, tough luck, and it's your fault".

There are only a few possibilities regarding an explanation of why "different things work for different people":

  1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves,

  2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2), and

  3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2,...)

On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.)

Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3, that barring actual brain damage or defect, #1 seems like adding unnecessary entities to one's model, without any theoretical or empirical justification whatsoever.

More than that, it sounds exactly like attribution error, and an instance of Dweck's "fixed" mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.

That's why I consider view #1 to be bad instrumental hygiene as well as not that likely to be true anyway. It's a horrible negative self-prime to saddle yourself with.

comment by AdeleneDawner · 2009-05-27T23:19:49.900Z · LW(p) · GW(p)

Actually, it might make more sense to try to figure out why it works some times and not others, which can even happen in the same person... like, uh, me. If I'm careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy 'treats' more often ('just this once', repeatedly) when I go out. If I keep goodies at home, I'll ignore them for a while, but then decide something along the lines of "it'd be a shame to let this go to waste" and eat them anyway.

There are different mental states involved in each of those situations, but I don't know what triggers the switch from one to another.

comment by Vladimir_Nesov · 2009-05-27T19:09:44.537Z · LW(p) · GW(p)

You mean status quo bias, like the argument against the Many-Worlds interpretation?

I mean the argument being too weak to change one's mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn't flip the switch.

comment by conchis · 2009-05-26T21:47:46.653Z · LW(p) · GW(p)

I think this is an excellent point; I'm not sure it's a valid criticism of this community.

comment by Vladimir_Nesov · 2009-05-26T21:44:36.466Z · LW(p) · GW(p)

The word "ideology" sounds wrong. One of the aspects of x-rationality is hoarding general correct-ideas-recognition power, as opposed to autonomously adhering to a certain set of ideas.

It's the difference between an atheist-fanatic who has a blind conviction in the nonexistence of God and participates in anti-theistic color politics, and a person who has a solid understanding of the natural world and from this understanding concludes that a certain set of beliefs is ridiculous.

Replies from: conchis
comment by conchis · 2009-05-26T22:03:47.686Z · LW(p) · GW(p)

As the wiki link points out, the word "ideology" has a fairly neutral sense in which it simply refers to "a way of looking at things", which seems to reflect Byrnema's focus on the underlying assumptions this community brings to things.

I don't think it's a stretch to suggest that many of us here probably do share particular ways of looking at things. It's possible that these general ways of looking-at-things do in fact let us track reality better than other ways of looking-at-things; but it's also possible that we have blind spots, and that our shared "ideology" may sometimes get in the way of the x-rationality we aspire to.

Replies from: JamesCole
comment by JamesCole · 2009-05-27T07:34:45.910Z · LW(p) · GW(p)

'Ideology' may have a fairly neutral sense (of "a way of looking at things"), but I don't think that is what it usually means to people, or is how it's used in the original post. "A burgeoning ideology needs a lot of faithful support in order to develop" isn't true of all "way[s] of looking at things".

"The ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, distorting it." implies that there's isn't much that can be done to defend or reject views other than by having sheer numbers, and I don't think that's (so much) the case here.

But I do take byrnema's point that the community does need an initial period to define itself. What this actually reminds me more of is the development of new paradigms. As Kuhn has described, it takes a while for a new paradigm to muster all the resources it needs to fully define and justify itself, and for a fair while it will be unfairly attacked by others who judge it by the tools and criteria of the old paradigm(s).

For most people in society, the sort of viewpoint embodied in LW (which you can see as like a new paradigm) is quite different to how they are used to seeing things (which you can see as like an old paradigm).

Replies from: AdeleneDawner
comment by AdeleneDawner · 2009-05-27T16:42:35.487Z · LW(p) · GW(p)

'Ideology' may have a fairly neutral sense (of "a way of looking at things"), but I don't think that is what it usually means to people, or is how it's used in the original post.

Shouldn't we be working on being better at ignoring social signaling than this?

Why are you assuming that 'ideology', even given the social-signaling meaning of it, is a bad thing, rather than just a thing?

(Thank you for providing a good example of something I've been trying to find a way to point out for the last few days.)

Replies from: JamesCole, Technologos
comment by JamesCole · 2009-05-28T06:16:26.540Z · LW(p) · GW(p)

Hi, sorry, but I'm not clear on how the social signaling you mention relates to my comment.

I didn't think my comment said anything about ideology being bad, though if you're interested in my opinion on it, here it is. I take ideology to be where your belief in something is less about you believing it is actually true, and more to do with other factors such as 'because I want to be part of the group who holds these views'. (Please take that description of ideology with a grain of salt... I find it very difficult to describe briefly.) I think that can have negative consequences.

comment by Technologos · 2009-05-28T00:12:43.473Z · LW(p) · GW(p)

Ideology, given the social-signaling meaning, is taken to be anti-rational, so naturally it would be something of an insult around here.

I'm not sure that social signaling is strictly the point here--insofar as language is only useful intersubjectively, I would instead suggest that we should attempt to communicate a point in a way that leads to the least confusion rather than insisting that we try to drop all the connotations we have had ingrained for the duration of our lives.

In part, I think this is why we use jargon--we are defining new words with none of the connotations of the old ones, and this might be helpful in moving us past the effects of those connotations.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2009-05-28T15:28:47.354Z · LW(p) · GW(p)

My point wasn't about whether 'ideology' was intended to mean a concept that could be taken as insulting or not. My point was that reacting to it as if it was an insult, and getting defensive, is significantly less rational than taking a more neutral stance.

If the claim that there's an ideology here is false, examine the poster's motivation and react appropriately. Taking offense is a subset of this option, which I'd consider valid if they appear to have been malicious, but that doesn't seem to have been the case, and even if it was, taking offense would probably not be the best reaction to the situation.

If the claim is true (which I think it is), examine the situation to determine if that's a useful or harmful aspect (I think it's at least partly useful; the coherent ideology makes it easier for new members to get started - but the negative could easily outweigh the positive at higher levels of rationality... but then, learning enough epistemic hygiene to break out of ideologies is a big enough part of that that it may be moot... dunno. ask someone who's further along than I am.) and react appropriately, by either working on a solution or (if necessary) defending the status quo. Taking offense or picking nits about the original comment seems pretty pointless, in this case, when there are better angles of the situation to be working on, and comes across like you're trying to deny a fact.

Please bear in mind that I'm using this as an example of this kind of problem; it's not an especially egregious one, it's just convenient.

comment by HalFinney · 2009-05-29T21:02:37.702Z · LW(p) · GW(p)

I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.

The first is that the real purpose of this site is to create minions and funding for Eliezer's mad scheme to take over the world. There should be more recognition and consciousness of this underlying agenda.

comment by Psychohistorian · 2009-05-26T22:22:47.067Z · LW(p) · GW(p)

This is an interesting and worthwhile idea, though TBH I'm not sure I agree with the premise.

The whole "rationality" thing provides more of a framework that a status quo. People who make posts like "Well, I'm a rationalist and a theist, so there! Ha!" do tend to get voted down (when they lack evidence/argument), but I hardly see a problem with this. This community strongly encourages people to provide supporting evidence or argumentation and (interestingly) seems to have no objections to extremely long posts/replies.I have yet to see a well-thought-out, on-topic post get voted down. Admittedly some top-levels (including mine!) never get voted up, but that doesn't seem to be the result of a status quo being unwilling to consider them; it's usually that they aren't totally cogent, they aren't well written, or they're simply too minute to interest people.

I would actually be at least as curious to know what, specifically, has run into this problem to date, as I would be to hear ideas people have wanted to throw out there, but thought would be rejected out of hand.

comment by antisingularity · 2009-06-05T23:14:28.955Z · LW(p) · GW(p)

I don't know if this actually counts as a dissenting opinion, since there seems to be a consensus around here that a little irrationality is okay. But I published a post about the virtues of irrationality (modeled after Yudkowsky's twelve virtues of rationality), found here:

http://antisingularity.wordpress.com/2009/06/05/twelve-virtues-of-irrationality/

I suppose my attempt is to provide a more rational view by including irrationality, but that is merely my opinion. I believe that there are good irrational things in the universe, and I think that is a dissenting opinion from the major views expressed here. Please take that how you will.

Replies from: saturn, Vladimir_Nesov, byrnema
comment by saturn · 2009-06-08T07:12:51.549Z · LW(p) · GW(p)

It seems like, to some extent, you are confusing rationality with being "Spock".

comment by Vladimir_Nesov · 2009-06-05T23:22:31.802Z · LW(p) · GW(p)

Emotion is not irrational. Luck can't be irrational, because it doesn't exist. Aspects of human thought, such as imagination, are the bedrock of human rationality.

Replies from: pjeby, antisingularity
comment by pjeby · 2009-06-05T23:44:18.305Z · LW(p) · GW(p)

Luck can't be irrational, because it doesn't exist.

Really? At least one scientist appears to disagree with you:

Wiseman, 37, is head of a psychology research department at the University of Hertfordshire in England. For the past eight years, he and his colleagues at the university's Perrott-Warrick Research Unit have studied what makes some people lucky and others not. After conducting thousands of interviews and hundreds of experiments, Wiseman now claims that he's cracked the code. Luck isn't due to kismet, karma, or coincidence, he says. Instead, lucky folks -- without even knowing it -- think and behave in ways that create good fortune in their lives.

Replies from: Cyan
comment by Cyan · 2009-06-05T23:59:08.139Z · LW(p) · GW(p)

If we define "luck" as an unusual propensity for fortunate/unfortunate things to happen at random, then Wiseman does not disagree. Wiseman explains the subjective experience of luck in terms of more fundamental character traits that give rise to predictable tendencies. There's nothing irrational about it; arational, maybe, but not irrational.

Replies from: pjeby
comment by pjeby · 2009-06-06T00:17:27.285Z · LW(p) · GW(p)

If we define "luck" as an unusual propensity for fortunate/unfortunate things to happen at random, then Wiseman does not disagree. Wiseman explains the subjective experience of luck in terms of more fundamental character traits that give rise to predictable tendencies.

Yes, exactly. The fact that the typical person's understanding of "luck" does not include a correct theory of how "luck" occurs, doesn't prevent them from observing that there is in fact such a thing and that people vary in their degree of having it.

This sort of thing happens a lot, because human brains are very good at picking up certain kinds of patterns about things that matter to them. They're just very bad at coming up with truthful explanations, as opposed to simple predictive models or useful procedures!

The crowd that believes in "The Secret" is talking about many of the same things as Wiseman's research; I've seen all 4 of his principles in the LoA literature before. I haven't read his book, but my guess is that I will have already seen better practical instruction in these principles from books that were written by people who claim to be channeling beings from another dimension... which would just go to show how better theories aren't always related to better practices.

To be fair, it is a Fast Company piece on the research; I really ought to read the actual book before I judge. Still, from previous experience, scientific advice tends to be dreadfully vague compared to the advice of people who have experience coaching other people at doing something. (i.e. scientific advice is usually much more suggestive than prescriptive, and more about "what" than "how".)

comment by antisingularity · 2009-06-06T01:16:00.781Z · LW(p) · GW(p)

I agree that emotion is not totally irrational. There are systems to it, most of which we probably don't understand in the slightest.

"Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts"

And how am I to know which emotion is the one that fits the facts? If I am cheated, should I be sad or angry (or maybe something else)? Give me an objective way to deal with every emotional situation and then we can call it rational.

I still think luck exists and is irrational. And imagination too.

Replies from: Jack
comment by Jack · 2009-06-06T02:33:52.270Z · LW(p) · GW(p)

Do you mean luck as in the fact that random events occur to randomly distributed individuals and so some will have more good things than bad happen to them and others will have more bad things happen to them than good? Or do you mean that people have some ineffable quality which makes either good things or bad things more likely to occur to them? The first seems obviously true, the second strikes me as quite a claim.

btw, your blog is quite good. As someone with somewhat middle-of-the-road views on singularity issues (more generous than you, more skeptical than most people here), I find your presence here very welcome. I suggest those passing by check out the rest of the articles. It's usually good to read both sides.

Replies from: antisingularity
comment by antisingularity · 2009-06-06T17:41:34.136Z · LW(p) · GW(p)

Thanks for the compliments. I had initially been worried that I might be poorly received around here but people are genuinely encouraging and looking for debate and perspective.

As for luck, I am really referring to your first statement: random events happening to randomly distributed individuals. It's just a tendency in the universe, I know, that we happen to call luck. But since we get to decide what's good and what's bad, it seems to me that sometimes really improbable good things will happen (good luck) and sometimes very improbable bad things will happen (bad luck).

comment by byrnema · 2009-06-06T01:40:28.256Z · LW(p) · GW(p)

The universe is irrational and we have to live in it. [...] Dealing with all of this, a perfectly rational being would forfeit. There is too much chaos, too much unpredictability.

Interesting. I see this as some kind of antithesis to rationality, being in some sense exactly what rationalists deny. Sure, we may believe in chaos and unpredictability, but we still believe that rationality is the best way to deal with it.

While I can sympathize with the view that the universe is sometimes too complex, I do believe that predictable success is possible to some extent (probabilistically, for example), and that being rational is the way to achieve that. If being irrational predictably gives better results in any specific context, then our rational theory needs to be expanded to include that irrational behavior as rational. My strongest belief is that the theory of rationality can always be expanded in a consistent way to include all behavior that yields success. I realize this is a substantial assumption.

I would like to learn more about what sorts of things are nevertheless "beyond" rationality, and whether there are some ways to be more rational about these things, or whether they're just separate (so that the label rational/irrational doesn't apply). For example, I think rationalists generally agree that preferences and values are outside rationality.

Replies from: antisingularity
comment by antisingularity · 2009-06-06T17:49:14.295Z · LW(p) · GW(p)

"Interesting. I see this as some kind of anti-thesis to rationality; being in some sense exactly what rationalists deny. Sure, we may believe in chaos and unpredictability, but we still believe that rationality is the best way to deal with it."

Yes, I suppose you could characterize it as an anti-thesis to rationality. Mostly, I think that rationality is an excellent way to deal with many things. But it is not the solution to every single problem (love is probably the best example of this I can give).

As for things beyond rational, well, your second paragraph, you might agree, is beyond rational. It's not irrational, but it's a value judgment about the fact that the theory of rationality can always be expanded. You can't justify it within the theory itself.

So I'm not advocating for irrationality as a better means to rationality, simply that they both exist and both have their uses. To believe that you can and should increase your rationality is both rational and great. But to believe that you will always be able to achieve perfect rationality strikes me as a bit irrational.

comment by Vichy · 2009-06-01T20:51:14.897Z · LW(p) · GW(p)

I would say the direction I most dissent from Less Wrong is that I don't think 'rationality' is inherently anything worth having. It's not that I doubt its relevance for developing more accurate information, nor its potential efficacy in solving various problems, but if I have a rationalistic bent that is mainly because I'm just that sort of person - being irrational isn't 'bad', it's just - irrational.

I would say the sort of terms and arguments I most reject are those with normative-moral content, since (depending on your definition) I either do not believe in, or reject the relevance of, ethical propositions. Which in itself is the reason I reject 'rationality' as some high standard. I'm also basically uninterested in anything that happens after my death, and I doubt the efficacy of personal influence enough to view any attempts to reform other people (much less the planet) as a waste of time. In fact, I don't think I even have any particular attachment to 'humans-as-a-species'. If they can't adapt to reality, good riddance.

comment by Ziphead · 2009-05-27T17:51:09.947Z · LW(p) · GW(p)

I'm continually surprised that so many people here take various ideas about morality seriously. For me, rationality is very closely associated with moral skepticism, and this view seems to be shared by almost all the rationalist type people I meet IRL here in northern Europe. Perhaps it has something to do with secularization having come further in Europe than in the US?

The rise of rationality in history has undermined not only religion, but at the same time and for the same reasons, all forms of morality. As I see it, one of the main challenges for people interested in rationality is to explore how to live without morality. Many "rationalists" instead go into denial and try to construct some supposedly rational form of morality, more often than not suspiciously similar to the traditional ideas. I'm not sure whether or not Eliezer's metaethical project is an example of this, but in any case he is commendable for taking the issues very seriously. Most other LW:ers seem to be far too uncritical toward their moral prejudices.

Replies from: PhilGoetz, Jess_Riedel, Technologos, byrnema
comment by PhilGoetz · 2009-05-31T23:55:12.468Z · LW(p) · GW(p)

I think you need to define what you mean by "morality" a lot more carefully. It's hard to attribute meaning to the statement "People should act without morals." Even if you mean "Everyone should act strictly within their own self-interest", evolutionary psychology would demand that you define the unit of identity (the body? the gene?), and would smuggle most of what we think of as "morality" back into "self-interest".

comment by Jess_Riedel · 2009-05-28T07:41:09.687Z · LW(p) · GW(p)

Moral skepticism is not particularly impressive as it's the simplest hypothesis. Certainly, it seems extremely hard to square moral realism with our immensely successful scientific picture of a material universe.

The problem is that we still must choose how to act. Without a morality, all we can say is that we prefer to act in some arbitrary way, much as we might arbitrarily prefer one food to another. And... that's it. We can make no criticism whatsoever about the actions of others, not even that they should act rationally. We cannot say that striving for truth is any better than killing babies (or following a religion?) any more than we can say green is a better color than red.

At best we can make empirical statements of the form "A person should act in such-and-such manner in order to achieve some outcome".

Some people are prepared to bite this bullet. Yet most who say they do continue to behave as if they believed their actions were more than arbitrary preferences.

Replies from: Ziphead, Nick_Tarleton
comment by Ziphead · 2009-05-28T14:45:55.354Z · LW(p) · GW(p)

My point is that people striving to be rational should bite this bullet. As you point out, this might cause some problems - which is the challenge I propose that rationalists should take on.

You may wish to think of your actions as non-arbitrary (that is, justified in some special way, cf. the link Nick Tarleton provided), and you may wish to (non-arbitrarily) criticize the actions of others etc. But wishing doesn't make it so. You may find it disturbing that you can't "non-arbitrarily" say that "striving for truth is better than killing babies". This kind of thing prompts most people to shy away from moral skepticism, but if you are concerned with rationality, you should hold yourself to a higher standard than that.

Replies from: Jess_Riedel
comment by Jess_Riedel · 2009-05-28T16:18:24.479Z · LW(p) · GW(p)

I think the problem is much more profound than you suggest. It is not something that rationalists can simply take on with a non-infinitesimal confidence that progress will be made. Certainly not amateur rationalists doing philosophy in their spare time (not that this isn't healthy). I don't mean to say that rationalists should give up, but we have to choose how to act in the meantime.

Personally, I find the situation so desperate that I am prepared to simply assume moral realism when I am deciding how to act, with the knowledge that this assumption is very implausible. I don't believe this makes me irrational. In fact, given our current understanding of the problem, I don't know of any other reasonable approaches.

Incidentally, this position is reminiscent of both Pascal's wager and of an attitude towards morality and AI which Eliezer claimed to previously hold but now rejects as flawed.

comment by Nick_Tarleton · 2009-05-28T07:46:47.238Z · LW(p) · GW(p)

OB: "Arbitrary"

(Wait, Eliezer's OB posts have been imported to LW? Win!)

Replies from: Jess_Riedel
comment by Jess_Riedel · 2009-05-28T15:47:54.379Z · LW(p) · GW(p)

I've read it before. Though I have much respect for Eliezer, I think his excursions into moral philosophy are very poor. They show a lack of awareness that all the issues he raises have been hashed out decades or centuries ago at a much higher level by philosophers, both moral realists and otherwise. I'm sure he believes that he brings some new insights, but I would disagree.

comment by Technologos · 2009-05-28T00:02:29.258Z · LW(p) · GW(p)

My position may be one of those you criticize. I believe something that bears an approximation to "morality" is both worth adhering to and important.

I think a particular kind of morality helps human societies win.

Morality, as I understand it, consists of a set of constraints on acceptable utility functions combined with observable signals of those constraints.

Do I believe that this type of morality is in any sense ultimately correct? No. In a technical sense, I am a complete and total moral skeptic.

However, I do think publicly-observable moral behavior is useful for coordination and cooperation, among other things. To the extent that this makes us better off--to the extent it makes me better off--I would certainly think that even a moral skeptic might find it interesting.

Perhaps LWers are "too uncritical toward their moral prejudices." But it's at least worth examining which of those "moral prejudices" are useful, where this doesn't conflict with other, more deeply held values.

Finally, morality broadly enough construed is a condition of rationality: if morality is taken to simply be your set of values and preferences, then it is literally necessary to a well-defined utility function, which is itself (arguably) a necessary component of rationality.

Replies from: Ziphead
comment by Ziphead · 2009-05-28T14:45:04.731Z · LW(p) · GW(p)

It seems to me that your position can be interpreted in at least two ways.

Firstly, you might mean that it is useful to have common standards for behavior to make society run more smoothly and peacefully. I think almost everyone would agree with this, but these common standards might be non-moral. People might consider them simple social conventions that they adopt for reasons of self-interest (to make their interactions with society flow more smoothly), but that have no special metaphysical status and do not supersede their personal values if a conflict arises.

Secondly, you might mean that it is useful that people in general are moral realists. The question then remains how you yourself, being "a complete and total moral skeptic", relate to questions of morality in your own life and in communication with people holding similar views. Do you make statements about what is morally right or wrong? Do you blame yourself or others for breaking moral rules? Perhaps you don't, but I get the impression that many LW:ers do. (In the recent survey, only 10.9% reported that they do not believe in morality, while over 80% reported themselves to support some moral theory.)

In regard to the second interpretation, one might also ask: if it works for you to be a moral skeptic in a world of moral realists, why shouldn't it work for other people too? Why wouldn't it work for all people? More to the point, I don't think that morality is very useful. Despite what some feared, people didn't become monsters when they stopped believing in God, and their societies didn't collapse. I don't think any of these things will happen when they stop believing in morality either.

Replies from: Technologos
comment by Technologos · 2009-05-28T23:00:58.444Z · LW(p) · GW(p)

I don't think they do have any "special metaphysical status," and indeed I agree that they are "simple social conventions." Do I make statements about moral rights and wrongs? Only by reference to a framework that I believe the audience accepts. In LW's case, this seems broadly to be utilitarian or some variant.

That's precisely my point--morality doesn't have to have any metaphysical status. Perhaps the problem is simply that we haven't defined the term well enough. Regardless, I suspect that more than a few LWers are moral skeptics, in that they don't hold any particular philosophy to be universally, metaphysically right, but they personally value social well-being in some form, and so we can usually assume that helping humanity would be considered positively by a LW audience.

As long as everyone's "personal values" are roughly compatible with the maintenance of society, then yes, losing the sense of morality that excludes such values may not be a problem. I was simply including the belief that personal values should not produce antisocial utility functions (that is, utility functions that have a positive term for another person's suffering) as morality.

Do I think that these things are metaphysically supported? No. But do I think that with fewer prosocial utility functions, we would likely see much lower utilities for most people? Yes.

Of course, whether you care about that depends on how much of a utilitarian you are.

comment by byrnema · 2009-05-29T06:47:43.086Z · LW(p) · GW(p)

Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One of the tenets that I think needs a much more critical view is something I call "reductionism" (perhaps closer to Daniel Dennett's "greedy reductionism" than what you think of). The denial of morality is perhaps one of the best examples of the fallacy of reductionism. Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that lead to intuitively absurd conclusions should require extraordinary justification before we keep believing in them. In other words, you must be a rationalist first and a reductionist second.

First, science is not reductionist. Science doesn’t claim that everything can be understood by what we already understand. Science makes hypotheses and if the hypotheses don’t explain everything, it looks for other hypotheses. So far, we don’t understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn’t exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith).

We believe that everything in the world makes sense. That everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the single fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, and any observation, will never be arbitrary, because it must follow these laws. In particular, we observe that there are laws, and order, over all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (quantum and classical mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc.). At the meta-cognitive level, there will be order, even if we don't perceive or understand that order.

Obviously, morality is a natural emergent property of sapience (since we observe it). Perhaps it is not necessary… concluding necessity would require a model of morality that I don't have. But imagine the space of all sapient beings over all time in the universe. Imagine the patterning of their respective moralities. Certainly, their moralities will be different from each other. (This I can conclude, because I observe differences even among human moralities.) However, it is no leap of faith but just the application of the most important assumption to expect that their moralities will also have certain features in common: they will obey certain laws and will evidence order, even if this is not readily demonstrated in a single realization. Our morality – whatever it is – is meant to be, is natural, is without question obeying the laws of the universe.

By analogy with evolution (here I am departing from science and am reverting to reductionism, trying to understand something in the context of what I do understand -- the analogy doesn’t necessarily hold, and one must use their intuition to estimate if the analogy is reasonable) there may not be a unique emergent “best” morality, but it may be the case that certain moralities are better than others, just as some species are more fit than others. So instead of thinking of the existence of different moralities in humanity as evidence that morality is “relative” and arbitrary or meaningless, I see the variations as evidence that morality is something that is evolving, competing, striving even among humans to fit an idealized meta-pattern of morality, whatever it may be. Like all idealized abstractions, the meta-morality would be physically unobtainable. The meta-morality itself could only be deduced by looking at the pattern of moralities (across sapient life forms would be most useful) and abstracting what is essential, what is meaningful.

It is a constant feature of life to want to live. Every species has an imperative to do its utmost to live; each species contributes itself to the experiment, participating in the demonstration of which aspects of life are most fit. Paradoxically, that same imperative holds even if it means changing from what the species was. There is a trade-off between fighting to stay the same (and "win" for your realization of life) and changing (a possible win for the life of the next species). Morality might be the same: we fight for our idea of morality (with a greater drive, not less, than our drive for life), but we will forfeit our own morality, willingly, for a foreign morality that our own morality recognizes as better. Morality wants to achieve this ideal morality that we only see the vaguest features of. (In complex systems, "wanting" means inexorably moving toward something, globally.)

I’m not always sure when one moral position is better than another – there seems to be plenty of gray at this local level of my understanding. However, some comparisons are quite clear. That morality exists is a more moral position than denying that it exists. Also, morality is not just doing what’s best for the community by facilitating cooperation: that explanation is needlessly reductionist. We can see this by the (abstract) willingness of moral people to sacrifice themselves – even in a total loss situation – for a higher moral ideal. Morality is not transcendent however; “transcendent” is an old word that has lost its usefulness. We can just say that morality is an emergent property. An emergent property of something. A month ago, I would have said intelligence, but I’m not sure. A certain kind of intelligence, surely. Social intelligence, perhaps. That even ants possess, but not a paperclip AI.

[Later edit: I've convinced myself that a paperclip AI does have a morality, though a really different one. Perhaps morality is an emergent property of having a goal. Could you convince a paperclip AI to not make any paperclips if the universe would have more "paperclipness" without them? Maybe it would decide that everything being paperclips results in an arbitrary number, and it would be a stronger statement to eradicate all paperclips...]

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-29T09:37:18.461Z · LW(p) · GW(p)

No, reductionism doesn't lead to denial of morality. Reductionism only denies high-level entities the magical ability to directly influence reality, independently of the underlying quarks. It will only insist that morality be implemented in quarks, not that it doesn't exist.

Replies from: byrnema
comment by byrnema · 2009-05-29T12:29:53.719Z · LW(p) · GW(p)

I agree that if morality exists, it is implemented through quarks. This is what I meant by morality not being transcendent. Used in this sense, as the assertion of a single magisterium for the physical universe (i.e., no magic), I think reductionism is another justified tenet of rationality -- part of the consistent ideology.

However, what would you call the belief I was criticizing? The one that denies the existence of non-material things? (Of course the "existence" of non-material things is something different from the existence of material things, and it would be useful to have a qualified word for this kind of existence.)

Replies from: Apprentice
comment by Apprentice · 2009-05-29T13:04:59.953Z · LW(p) · GW(p)

Eliminative materialism?

Replies from: byrnema
comment by byrnema · 2009-05-29T16:30:07.906Z · LW(p) · GW(p)

Yes, that is quite close. And now that I have a better handle I can clarify: Eliminative materialism is not itself "false" -- it is just an interesting purist perspective that happens to be impracticable. The fallacy is when it is inconsistently applied.

Moral skeptics aren't objecting to the existence of morality because it is an abstract idea; they are objecting to it because the intersection of morality with our current logical/scientific understanding reduces to something trivial compared to what we mean when we talk about morality. I think their argument is along the lines of: if we can't scientifically extend morality to include what we do mean (for example, at least label in some rigorous way what it is we want to include), then we can't rationally mean anything more.

comment by Velochy · 2009-05-27T17:58:10.679Z · LW(p) · GW(p)

One thing that came to mind just this morning: Why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS, not Econ, major), prospect theory and the utility function weighting used in it are usually accepted as how most "irrational" people make their decisions. But this might not be because they are irrational, but rather because our utility functions do actually behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as loss being more costly than gain and so on)...

Or is this how most people here already do things? Any and all feedback on this idea would be really appreciated (especially links to relevant discussion, as I am sure I'm not the first to come up with it).

Replies from: Velochy
comment by Velochy · 2009-05-27T18:20:38.925Z · LW(p) · GW(p)

Sorry. I thought about things a little and realized that a few things about prospect theory definitely need to be scrapped as bad ideas. The probability weighting, for instance. But other quirks (such as loss aversion or having different utilities for loss vs. gain) might be useful to retain...

It would really be good if I knew a bit more about the different decision theories at this point. Does anyone have good references from which one could get an overview?

Replies from: Technologos, Nick_Tarleton
comment by Technologos · 2009-05-28T00:49:58.245Z · LW(p) · GW(p)

The standard argument in economics against anything other than EU maximization involves Dutch-booking: the ability to set people up as money pumps and extract money from them by repeatedly offering subjectively preferred choices that violate transitivity. (Note that consistent loss aversion may arise from diminishing marginal utility of money; loss aversion is only interesting when directionally inconsistent.) Essentially, EU maximization might be something we want to have because it induces consistency in decision-making.

For instance, imagine a preference ordering like the one in Nick_Tarleton's adjacent comment, where +10 is valued differently from +20 followed by -10. Let us say that +9 = +20-10 (without loss of generality; just pick a number on the left side).

Then I can offer you +9 in exchange for +20-10 repeatedly, and you'll prefer it every time, but you ultimately lose money.

The reason that rational risk aversion (which is to say, diminishing marginal utility of money) is not a money pump is that you have to reduce risk every time you extract some expected cash, and that cannot happen forever.

Ultimately, then, prospect theory and related work are useful in understanding human decision-making but not in improving it.
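
To make the money-pump mechanism concrete, here is a minimal sketch. It assumes a piecewise-linear value function with a loss-aversion factor of 2 and an agent who evaluates the +20/-10 sequence by segregating the gain and the loss; none of these specifics come from the comment above, they are just one way to realize it.

    # Minimal money-pump sketch (assumed value function: linear, losses weighted 2x).
    def value(x):
        """Loss-averse value of a single gain or loss, evaluated in isolation."""
        return x if x >= 0 else 2 * x

    def prefers_sure_nine():
        # The +20-then--10 sequence, evaluated piecewise: 20 + (-20) = 0.
        sequence_value = value(20) + value(-10)
        # A sure +9, evaluated as a single gain: 9.
        return value(9) > sequence_value

    wealth_if_swapping = 0
    wealth_if_refusing = 0
    for _ in range(10):
        if prefers_sure_nine():
            wealth_if_swapping += 9     # accepts the subjectively preferred sure +9
        wealth_if_refusing += 20 - 10   # would have netted +10 per round

    print(wealth_if_swapping, wealth_if_refusing)  # 90 100 -> one dollar pumped per round

Every individual swap looks like an improvement to this agent, yet after ten rounds it is ten dollars poorer than if it had refused each time.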

Replies from: linkhyrule5
comment by linkhyrule5 · 2013-08-28T07:14:34.481Z · LW(p) · GW(p)

Question - is there a uniqueness proof of VNM optimality in this regard?

Replies from: Technologos
comment by Technologos · 2015-06-20T12:53:41.943Z · LW(p) · GW(p)

VNM utility is a necessary consequence of its axioms but doesn't entail a unique utility function (it is unique only up to positive affine transformation); as such, the ability to prevent Dutch Books derives more from VNM's assumption of a fixed total ordering of outcomes than from anything else.
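
A quick illustration of that non-uniqueness: any positive affine transformation of a VNM utility function ranks gambles identically, so the axioms pin down preferences, not the numbers. (The functions and numbers below are mine, chosen only for illustration.)

    # Two utility functions related by a positive affine transformation
    # induce the same ordering over gambles.
    def expected_utility(u, gamble):
        """gamble: list of (probability, outcome) pairs."""
        return sum(p * u(x) for p, x in gamble)

    u = lambda x: x ** 0.5        # some risk-averse utility over money (x >= 0)
    v = lambda x: 3 * u(x) + 7    # positive affine transform of u

    coin_flip = [(0.5, 0), (0.5, 100)]   # $0 or $100 with equal probability
    sure_forty = [(1.0, 40)]             # $40 for certain

    print(expected_utility(u, coin_flip) < expected_utility(u, sure_forty))  # True
    print(expected_utility(v, coin_flip) < expected_utility(v, sure_forty))  # True: same ranking

Rescaling or shifting the utility function changes none of the comparisons, which is why the axioms guarantee consistency without singling out one particular set of numbers.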

comment by Nick_Tarleton · 2009-05-27T18:34:46.585Z · LW(p) · GW(p)

Differing utilities for loss vs. gain introduce an apparently absurd degree of path dependence, in which, say, gaining $10 is perceived differently from gaining $20 and immediately thereafter losing $10. Loss vs. gain asymmetry isn't in conflict with expected utility maximization (though nonlinear probability weighting is), but it is inconsistent with stronger intuitions about what we should be doing.

It would really be good if I knew a bit more about the different decision theories at this point.

"Different decision theories" is usually used to mean, e.g., causal decision theory vs. evidential decision theory vs. whatever it is Eliezer has developed. Which of these you use is (AFAIK) orthogonal to what preferences you have, so I assume that doesn't answer your real question. Any reference on different types of utilitarianism might be a little more like what you're looking for, but I can't think of anyone who's catalogued different proposed selfish utility functions.

Replies from: steven0461
comment by steven0461 · 2009-05-27T18:47:41.381Z · LW(p) · GW(p)

Differing utilities for loss vs. gain introduce an apparently absurd degree of path dependence, in which, say, gaining $10 is perceived differently from gaining $20 and immediately thereafter losing $10.

Yes -- the example I've seen is that a loss-averse agent may evaluate a sequence of, say, ten coinflips with -$15/+$20 payoffs positively, while at the same time evaluating each individual such coinflip negatively.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-27T19:12:35.851Z · LW(p) · GW(p)

Hmm:

single flip EU = -1.25032750338

multi-flip EU = 0.0793515109577

I didn't know that. Cool.
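
For anyone curious how those two figures arise: they can be reproduced with a log-shaped value function that weights losses twice as heavily as gains. Nick doesn't say which function he used, so the one below is a guess, but it matches both numbers to the precision shown.

    from math import comb, log

    def v(x):
        """Assumed loss-averse value: log(1+x) on gains, twice that magnitude on losses."""
        return log(1 + x) if x >= 0 else -2 * log(1 - x)

    # One coinflip paying +$20 or -$15 with equal probability.
    single_flip = 0.5 * v(20) + 0.5 * v(-15)

    # Ten such flips evaluated as one bundle: the number of wins k is
    # Binomial(10, 0.5) and the net payoff is 20*k - 15*(10 - k).
    multi_flip = sum(
        comb(10, k) / 2 ** 10 * v(20 * k - 15 * (10 - k))
        for k in range(11)
    )

    print(single_flip)  # about -1.2503275: each flip looks bad on its own
    print(multi_flip)   # about  0.0793515: the bundle of ten looks good

The sign flip between the two evaluations is exactly the effect described above: bundled, the likely gains swamp the loss-averse penalty; taken one at a time, they don't.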

comment by AnlamK · 2009-05-27T00:23:38.986Z · LW(p) · GW(p)

"Where do you think Less Wrong is most wrong?"

I don't know where Less Wrong is most "wrong" - I don't have a reliable conclusion about this, and moreover I don't think the Less Wrong community accepts any group of statements without exception - but I can certainly say this: some posts (and sometimes comments) introduce jargon (e.g. Kullback-Leibler distance, utility functions, priors, etc.) for not very substantial reasons. I think sometimes people have a little urge to show off and reveal to the world how smart they are. Just relax, okay? We all know you are geniuses :-)

(I won't actually go through the trouble of quoting.)

On the other hand, I've been really impressed with some of the posts I have read here. They have been very engaging and interesting.

Replies from: loqi
comment by loqi · 2009-05-27T02:21:36.262Z · LW(p) · GW(p)

I think the tendency to use terms like "utility function" and "prior" stems more from a desire to be precise than to show off. Both have stuck with me as seemingly-useful concepts far outside the space of conversations in which they're potentially intelligible to others.

Unless you know it's superfluous, give jargon the benefit of the doubt. When communication is more precise, we all win.

Replies from: Sideways, billswift
comment by Sideways · 2009-05-27T04:39:48.493Z · LW(p) · GW(p)

Agreed--most of the arguments in good faith that I've seen or participated in were caused by misunderstandings or confusion over definitions.

I would add that once you know the jargon that describes something precisely, it's difficult to go back to using less precise but more understandable language. This is why scientists who can communicate their ideas in non-technical terms are so rare and valuable.

comment by billswift · 2009-05-27T12:34:18.027Z · LW(p) · GW(p)

I'm not so sure of that, since most of the people that use "utility function" and "prior" can't seem to agree on what they mean. They seem to be more terms of art; the art of showing off.

Replies from: steven0461
comment by steven0461 · 2009-05-27T13:51:44.613Z · LW(p) · GW(p)

Huh? A utility function is a map from states/gambles/whatever to real numbers that respects preferences. A prior is a probability assigned without conditioning on evidence. Maybe some terms people use here are for showing off, but these two happen to be clear and useful.
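
As a toy example of "respects preferences" (my own illustration): any assignment of real numbers will do, as long as more-preferred items get larger numbers.

    # A preference ordering and one of the many utility functions that respect it.
    preferences = ["dark chocolate", "milk chocolate", "white chocolate"]  # best to worst

    utility = {"dark chocolate": 2.0, "milk chocolate": 0.5, "white chocolate": -1.0}

    # "Respects preferences": whenever a is preferred to b, utility[a] > utility[b].
    assert all(
        utility[better] > utility[worse]
        for better, worse in zip(preferences, preferences[1:])
    )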

Replies from: Cyan, conchis, timtyler
comment by Cyan · 2009-05-27T14:59:51.859Z · LW(p) · GW(p)

A prior is a probability assigned without conditioning on evidence.

A prior is a probability distribution assigned prior to conditioning on some specific data. If I learn data1 today and data2 tomorrow, my overnight probability distribution is a posterior relative to data1 and a prior relative to data2.

The reason I nitpick this is because the priors we actually talk about here on LW condition on massive amounts of evidence.
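
A small sketch of that relativity of "prior" and "posterior", using a coin with unknown bias and a conjugate Beta model (the setup and numbers are my own, purely for illustration):

    # Sequential updating: the overnight distribution is a posterior with
    # respect to data1 and a prior with respect to data2.
    def update(alpha, beta, heads, tails):
        """Conjugate Beta update for a coin with unknown bias."""
        return alpha + heads, beta + tails

    prior_today = (1, 1)                       # Beta(1, 1), before any of this data
    data1 = (7, 3)                             # today: 7 heads, 3 tails
    overnight = update(*prior_today, *data1)   # posterior w.r.t. data1 ...
    data2 = (2, 8)                             # tomorrow: 2 heads, 8 tails
    final = update(*overnight, *data2)         # ... and prior w.r.t. data2

    print(overnight)  # (8, 4)
    print(final)      # (10, 12)

The labels shift as the data arrive; nothing about the overnight distribution itself marks it as intrinsically "prior" or "posterior".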

Replies from: timtyler
comment by timtyler · 2009-05-27T16:46:33.589Z · LW(p) · GW(p)

More nitpicking: the data doesn't really have to be "specified" - at least, it can be presented in the form of a black box with contents that are not yet known, or perhaps not yet even measured.

comment by conchis · 2009-05-28T10:02:57.766Z · LW(p) · GW(p)

A utility function is a map from states/gambles/whatever to real numbers that respects preferences.

That's not its only meaning. It's not, for example, the definition that a hedonist utilitarian would give (net pleasure-over-pain is not equivalent to preference, unless you're giving preference a very broad interpretation, in which case you've just shifted the ambiguity back a level).

Replies from: steven0461
comment by steven0461 · 2009-05-28T11:23:38.159Z · LW(p) · GW(p)

I've seen that called "utility" but never a "utility function".

Replies from: conchis
comment by conchis · 2009-05-28T13:13:55.583Z · LW(p) · GW(p)

I could go trawling through the literature to get you examples of non-preferentist usages of the words "utility function", but if you're willing to take my word for it, I can assure you that they're pretty common (especially in happiness economics and pre-ordinalist economics, but also quite broadly apart from that). Indeed, it would be very strange if e.g. the hedonist account were a valid definition of utility, but no-one had thought to describe a mapping from states of the world into hedonist-utility as a utility function.

Googling "experienced utility function" turns up a few examples, but there are many more.

Replies from: steven0461
comment by steven0461 · 2009-05-28T13:27:38.450Z · LW(p) · GW(p)

Guess I'll take your word for it. Not sure I remember seeing that usage for "utility function" on LW, though.

ETA: It gets kind of confusing, because if I prefer that people are happy, their happiness becomes my utility, but in a way that doesn't contradict utility functions as a description of preferences.

Replies from: conchis
comment by conchis · 2009-05-28T13:42:30.944Z · LW(p) · GW(p)

Not sure I remember seeing that usage for "utility function" on LW, though.

Many uses are ambiguous enough to encompass either definition. If you aren't aware of the possible ambiguity then you're unlikely to notice anything awry - at least up until the point where you run into someone who's using a different default definition, and things start to get messy. (This has happened to me a couple of times.)

comment by timtyler · 2009-05-27T14:33:15.820Z · LW(p) · GW(p)

I've argued that utilitarians should probably employ surreal-valued utility functions. However, that is hardly a major disagreement. It would be like the creationists arguing that evolution was a theory mired in controversy because of the "punctuated equilibrium" debate.

comment by timtyler · 2009-05-26T21:33:44.897Z · LW(p) · GW(p)

I think the group focusses too much on epistemic rationality - and not enough on reason.

Epistemic rationality is one type of short-term goal among many - whereas reason is the foundation-stone of rationality. So: I would like to see less about the former and more about the latter.

Replies from: conchis
comment by conchis · 2009-05-26T21:41:29.543Z · LW(p) · GW(p)

What do you mean by "reason"?

Replies from: timtyler
comment by timtyler · 2009-05-26T22:14:53.532Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/Reason is fairly reasonable.

Deduction, induction and Occam's razor.

Processing your sensory inputs to derive an accurate model of the world without actually performing any actions (besides what is necessary to output your results).

Reason can be considered to be one part of rationality.

Replies from: Matt_Simpson, pwno
comment by Matt_Simpson · 2009-05-26T22:22:40.283Z · LW(p) · GW(p)

isn't that epistemic rationality? I.e., arriving at the correct answer?

Replies from: timtyler
comment by timtyler · 2009-05-26T22:37:01.048Z · LW(p) · GW(p)

No. Epistemic rationality is a type of instrumental rationality which primarily values truth-seeking. To find the truth, you sometimes have to take actions and perform experiments. Reason is more basic, more fundamental.

Replies from: Matt_Simpson
comment by Matt_Simpson · 2009-05-26T23:11:11.154Z · LW(p) · GW(p)

so you mean the tools by which we arrive at the correct answer?

Replies from: timtyler
comment by timtyler · 2009-05-27T07:22:06.315Z · LW(p) · GW(p)

Only if those "tools" don't involve doing things. Once you start performing experiments and taking steps to gather more data, then you have gone beyond using reason.

If you like, you can imagine a test of reason to match the circumstances of a typical exam - where many ways of obtaining the correct answer are forbidden.

comment by pwno · 2009-05-27T02:05:40.404Z · LW(p) · GW(p)

Reason is useless without rationality.

Replies from: timtyler
comment by timtyler · 2009-05-27T07:30:25.219Z · LW(p) · GW(p)

I would rather say that "reason" is a useful concept. They call them "deductive reasoning" and "inductive reasoning" - and those are the correct names for some very useful tools.

Anyway, you should be able to make out my request to LessWrong - to talk more about reason, especially when it is reason that is under discussion.

comment by thomblake · 2009-05-26T19:27:34.025Z · LW(p) · GW(p)

I like this idea. I don't really have anything to contribute to this thread at the moment, though.

Seems along the same lines as the "closet thread" but better.

Replies from: ThanatosSavehn
comment by ThanatosSavehn · 2009-05-28T05:47:40.944Z · LW(p) · GW(p)

I think this is a problem of Rhetoric. I dialed it back to Plato and Aristotle and have made my way up to "The New Rhetoric: A Treatise on Argumentation". I shall report back if I have any success as I make my way to the here and now. In the meantime I note the following: as a sceptic I hope I'm ready to cast aside any idea, however cherished, that fails of its purpose - that fails the acid-bath test of falsifiability. I cast aside the snake handlers that kept Romney from being nominated, and I cast aside the "empathy" that leads Sotomayor to believe that single Latina Moms make better judgments than white males. But what's left? Is there really room for a party of Rationalists? Won't the purest rationalist sell out his brethren for a better deal offered by the emotionalists? Isn't that what a good rationalist would do? Are we ultimately the victims of our own good sense? Or are we able to deal with the negative externalities of personal rationalism? And if so, how? Alas, even Spock has now decided "if it feels right, do it!"

comment by ivan · 2009-05-27T15:16:19.991Z · LW(p) · GW(p)

I read LW for a few months but I haven't commented yet. This looks like a good place to start.

There are two points in LW community that seem to gravitate towards ideology IMHO:

  1. Anti-religion. Some people hold quite rational religious beliefs, which seem to be a big no-no here.

  2. Pro-singularity. Some other people consider the Singularity merely a "sci-fi fantasy", and I have an impression that such views, if expressed here, would make this community irrationally defensive.

I may be completely wrong though :)

Replies from: timtyler, Cyan, timtyler
comment by timtyler · 2009-05-27T16:52:51.554Z · LW(p) · GW(p)

I don't discuss religion much - but here is my list of "Viable Intelligent Design Hypotheses":

http://originoflife.net/intelligent_design/

Replies from: thomblake
comment by thomblake · 2009-05-27T17:09:16.771Z · LW(p) · GW(p)

Note that none of the items on the list is an alternative to evolution, which is how ID is presented in the US context.

comment by Cyan · 2009-05-27T17:06:14.869Z · LW(p) · GW(p)

I'd replace your item 1 with physicalism. The "rational religious" example you propose might get criticized here, but not for belief in the supernatural.

comment by Marshall · 2009-06-07T12:15:59.871Z · LW(p) · GW(p)

i) A lotta apes are writing on a lotta typewriters

ii) Not much dissent in a post reserved for dissension.

iii) The presumption of being Less Wrong leads to the arrogance of being More Right.

iv) Being More Right leads to the necessity of Violence.

v) The ramblings of the young are worth listening to when you get old.

Replies from: Marshall
comment by Marshall · 2009-06-07T19:33:13.834Z · LW(p) · GW(p)

Mission accomplished!