The Wannabe Rational

post by MrHen · 2010-01-15T20:09:32.662Z · LW · GW · Legacy · 305 comments

Contents

  My Place in This Community
  The Wannabe Sanity Waterline
  Why This Helps
  Conversion

I have a terrifying confession to make: I believe in God.

This post has three prongs:

First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, "If someone holds a belief that is irrational, should they be fully ousted from the community?" I can see a handful of answers to this question and a few of them are discussed below.

Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the question of how rational a particular irrational person is does not get completely settled once their irrationality has been identified. They may be underneath the sanity waterline, but there are multiple levels of rationality hell. Some are deeper than others. This part discusses one way to view irrationals in a manner that encourages growth.

Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?


My Place in This Community

So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point. This isn't to say that my beliefs aren't open for discussion, but here and now I think there are better things to discuss.  Namely, whether talking to people like me is within the purpose of LessWrong. Relevant questions have to do with my status and position at LessWrong. The short list:

  1. Should I have kept this to myself? What does an irrational person gain by confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commentator and participant.
  2. Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems? Or is the system merit-based via karma and community voting? My karma is well above the level needed to post and my comments are generally upvoted more than downvoted. A merit-based system would prevent me from posting anything about religion or other irrational things, but is there a deeper problem? (More discussion below.) Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity waterline?
  3. Being religious, I assume I am far below the sanity waterline that the community desires. How did I manage to scrape up over 500 karma? What have I demonstrated that would be good for other people to demonstrate? Have I acted appropriately as a religious person curious about rationality? Is there a problem with the system that lets someone like me get so far?
  4. Where do I go from here? In the future, how should I act? Do I need to change my behavior as a result of this post? I am not calling out for any responses to my beliefs in particular, nor am I calling to other religious people at LessWrong to identify themselves. I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?


The Wannabe Sanity Waterline

This post has little to do with actual beliefs. I get the feeling that most discussions about the beliefs themselves are not going to be terribly useful. I originally titled this post "The Religious Rational," but figured the opening line was inflammatory enough, and as I began editing I realized that the religious aspect is merely one example of a larger group of irrationals. I could have admitted to chasing UFOs or buying lottery tickets. What I wanted to talk about is the same.

That being said, I fully accept all criticism offered about whatever you feel is appropriate, even if that criticism is simply ignoring me or an admin deleting the post and banning me. I am not trying to dodge the subject of my religious beliefs; I provided myself as an example to be convenient and make the conversation more interesting. I have something relevant and useful to discuss regarding rationalist communities and the act of spawning rationalists from within fields other than rationalism. Whether it directly applies to LessWrong is for you to decide.

How do you approach someone below the sanity waterline? Do you ignore them and look for people above the line? Do you teach them until they drop their irrational deadweight? How do you know which ones are worth pursuing and which are a complete waste of time? Is there a better answer than generalizing at the waterline and turning away everyone who gets wet? The easiest response to these people is to put the burden of rationality on their shoulders: let them teach themselves. I think there is a better way. I think some people are closer to the waterline than others, and deciding to group everyone below the line together makes the job of teaching rationalism harder.

I, for example, can look at my fellow theists and immediately draw up a shortlist of people I consider relatively rationalistic. Compared to the given sanity waterline, all of us are deep underwater due to certain beliefs. But compared to the people on the bottom of the ocean, we're doing great. This leads into the question: "Are there different levels of irrationality?" And also, "Do you approach people differently depending on how far below the waterline they are?"

More concretely, is it useful to make a distinction between two types of theists? Is it possible to create a sanity waterline for the religious? They may be way off on a particular subject, but otherwise their basic worldview is consistent and intact. Is there a religious sanity waterline? Are there rational religious? Is a Wannabe Rational a good place to start?

The reason I ask these questions is not to excuse any particular belief while feeling good about everything else in my belief system. If there is a theist struggling to verify all beliefs but those that involve God, then they are no true rationalist. But if said theist really, really wanted to become a rationalist, it makes sense for them to drop the sacred, most treasured beliefs last. Can rationalism work on a smaller scale?

Quoting from Outside the Laboratory (emphasis not mine):

Now what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world? We ask why, and the scientist says something along the lines of: "Well, no one really knows, and I admit that I don't have any evidence - it's a religious belief, it can't be disproven one way or another by observation." I cannot but conclude that this person literally doesn't know why you have to look at things.

A certain difference between myself and this spirit-believing scientist is that my beliefs are from a younger time, and I have things I would rather do than gallop through that area of the territory checking my accuracy. Namely, I am still trying to discover what the correct map-making tools are.

Also, admittedly, I am unjustifiably attached to that area of my map. It's going to take a while to figure out why I am so attached and what I can do about it. I am not fully convinced that rationalism is the silver bullet that will solve Life, the Universe, and Everything. I am not letting this new thing near something I hold precious. This is a selfish act and will get in the way of my learning, but that sacrifice is something I am willing to make. Hence the reason I am below the LessWrong waterline. Hence me being a Wannabe Rational.

Instead, what I have done is take my basic worldview and chased down the dogma. Given the set of beliefs I would rather not think about right now, where do they lead? While this is pure anathema to the true rationalist, I am not a true rationalist. I have little idea about what I am doing. I am young in your ways and have much to learn and unlearn. I am not starting at the top of my system; I am starting at the bottom. I consider myself a quasi-rational theist not because I am rational compared to the community of LessWrong. I am a quasi-rational theist because I am rational compared to other theists.

To return to the underlying question: Is this distinction valid? If it is valid, is it useful or self-defeating? As a community, does a distinction between levels of irrationality help or hinder? I think it helps. Obviously, I would like to consider myself more rational than not. I would also like to think that I can slowly adapt and change into something even more rational. Asking you, the community, is a good way to find out if I am merely deluding myself.

There may be a wall that I hit and cannot cross. There may be an upper-bound on my rationalism. Right now, there is a cap due to my theism. Unless that cap is removed, there will likely be a limit to how well I integrate with LessWrong. Until then, rationalism has open season on other areas of my map. It has produced excellent results and, as it gains my trust, its tools gain more and more access to my map. As such, I consider myself below the LessWrong sanity waterline and above the religious sanity waterline. I am a Wannabe Rational.


Why This Helps

The advantage of a distinction between different sanity waterlines is that it allows you to compare individuals within groups of people when scanning for potential rationalists. A particular group may all drop below the waterline but, given their particular irrational map, some of them may be remarkably accurate for being irrational. After accounting for dumb luck, does anyone show a talent for reading territory outside of their too-obviously-irrational-for-excuses belief?

Note that this is completely different from questioning where the waterline is actually drawn. This is talking about people clearly below the line. But an irrational map can have rational areas. The more rational areas in the map, the more evidence there is that some of the mapmaker's tools and tactics are working well. Therefore, this mapmaker is above the sanity waterline for that particular group of irrational mapmakers. In other words, this mapmaker is worth conversing with as long as the conversation doesn't drift into the irrational areas of the map.

This allows you to give people below the waterline an attractive target to hit. Walking up to a theist and telling them they are below the waterline is depressing. They do need to hear it, which is why the waterline exists in the first place, and their level of sanity is too low for them to achieve a particular status. But after the chastising you can tell them that other areas of their map are good enough for them to become more rational in those areas. They don't need to throw everything away to become a Wannabe Rational. They will still be considered irrational, but at least their map is more accurate than it was. It is at this point that someone begins their journey to rationalism.

If we have any good reason to help others become more rational, it seems as though this would count toward that goal.


Conversion

This last bit is short. Taking an example of myself, what should I be doing to make my map more accurate? My process right now is something like this:

  1. Look at the map. What are my beliefs? What areas are marked in the ink of science, evidence, rationalism, and logic? What areas aren't and what ink is being used there?
  2. Look at the territory. Beliefs are great, but which ones are working? I quickly notice that certain inks work better. Why am I not using those inks elsewhere? Some inks work better for certain areas, obviously, but some don't seem to be useful at all.
  3. Find the right ink. Contrasting and comparing the new mapmaking methods with the old ones should produce a clear winner. Keep adding stuff to the toolbox once you find a use for it. Take stuff out of the toolbox when it is replaced by a better, more accurate tool. Inks such as, "My elders said so" and "Well, it sounds right" are significantly less useful. Sometimes we have the right ink but use it incorrectly. Sometimes we find a new way to use an old ink.
  4. Revisit old territory. When I throw out an old ink, I examine the areas of the map where that ink was used and revisit the territory with my new tools handy. Some territory is too hard to access now (beliefs about your childhood) and some areas on your map don't have corresponding territories (beliefs about the gender of God).

These things, in my opinion, are learning the ways of rationality. I have a few areas of my map marked, "Do this part later." I have a few inks labeled, "Favorite colors." These are what keep me below the sanity waterline. As time moves forward I pick up new favorite colors and eventually I will come to the areas saved for later. Maybe then I will rise above the waterline. Maybe then I will be a true rationalist.

305 comments

Comments sorted by top scores.

comment by Gavin · 2010-01-16T01:36:06.072Z · LW(p) · GW(p)

MrHen leaned back in his chair.

It had taken hours to write, but it was flawless. Everything was there: complete deference to the community’s beliefs, politely asking permission to join, admission of guilt. With one post, the tenor of LessWrong had been changed. Religion would join politics and picking up women as forbidden topics.

It would only be later that they would realize what had happened. When rationality became restricted by politeness, that would be when he would begin offering arguments that weakened atheist resolve. And he would have defenders, primed by this pitch-perfect post. Once he was made an honorary member of the “in” group, there would be much greater leeway. They had already mentally committed to defending him here; the later details would be immaterial.

After the first online conversion, there would be anger. But at least some would defend him, harkening back to this one post. “It’s okay to be irrational,” they would say, “we’re all irrational about some things.” Oh, the luminaries would never fall. Eliezer, Robin, YVain, Gavin—they were far too strong. But there were those who longed to go back to the warm embrace of belief. Those just emerging from their shells, into the harsh glare of the real. And MrHen, with his equivocating, his rational irrationality—he would lead the way back. Always with the proper respect. A little flattery, a little bowing and scraping, these things go further than one might think in the “rational” world.

Once he was finally banned, and the conversions halted, the citizens of LessWrong would wonder what had driven him. Was it simply his own religious fervor? Or perhaps the old churches had sent him to weaken the growing rationalist community from within—perhaps he was in the employ of the Vatican or Salt Lake City, sent to curb a threat. But perhaps it was more sinister still. Perhaps, with his mission complete, MrHen would report back to his masters at the ‘chan, on the most epic trolling of all time.

They would never know, not for certain.

Replies from: MrHen, orthonormal, Blueberry
comment by MrHen · 2010-01-16T08:51:02.248Z · LW(p) · GW(p)

That was awesome.

comment by orthonormal · 2010-01-16T03:12:54.808Z · LW(p) · GW(p)

Oh man. I had already clicked downvote for excessive paranoia before I read the penultimate sentence. Needless to say, I reversed my judgment immediately.

comment by Blueberry · 2010-01-16T02:07:43.814Z · LW(p) · GW(p)

With one post, the tenor of LessWrong had been changed. Religion would join politics and picking up women as forbidden topics.

I don't understand. This post doesn't suggest that we forbid talking about religion.

Replies from: Kevin
comment by Kevin · 2010-01-16T02:33:59.492Z · LW(p) · GW(p)

It's a half-joke, or a half-sarcastic joke -- I hold such humor in the highest regard and describe it as hyper-cynicism, from my own garbling of this: http://www.snpp.com/other/special/philosophy.html Basically, the poster is mostly joking, as given away by the last two sentences, but he wouldn't have made the post if he didn't think elements of truth behind it existed.

Replies from: Blueberry
comment by Blueberry · 2010-01-17T07:14:53.409Z · LW(p) · GW(p)

the poster is mostly joking, as given away by the last two sentences, but he wouldn't have made the post if he didn't think elements of truth behind it existed.

I understand that, but the "joke" was basically that the original post was a conspiracy to make this site more accepting of religion and ban discussion of it, when the post doesn't suggest anything like that; quite the opposite, in fact.

Replies from: Kevin
comment by Kevin · 2010-01-18T08:20:28.143Z · LW(p) · GW(p)

It works at many levels.

comment by Paul Crowley (ciphergoth) · 2010-01-16T16:44:37.041Z · LW(p) · GW(p)

(Brief foreword: You really should read much more of the sequences. In particular How to Actually Change Your Mind, but there are also blog posts on Religion. I hope that one thing that comes out of this discussion is a rapid growth of those links on your wiki info page...)

What are the requirements to be a member of the LessWrong community? If we upvote your comments, then we value them and on average we hope you stay. If we downvote them, we don't value them and we hope either that they improve or you leave. Your karma is pretty positive, so stay.

You seem to be expecting a different shape of answer, about certain criteria you have to meet, about being an aspiring rationalist, or being above the sanity waterline, or some such. Those things will likely correlate with how your comments are received, but you need not reach for such proxies when asking whether you should stay when you have more direct data. From the other side, we need not feel bound by some sort of transparent criteria we propose to set out in order to be seen to be fair in the decisions we make about this; we all make our own judgement calls on what comments we value with the vote buttons.

I think you're led to expect a different sort of answer because you're coming at this from the point of view of what Eliezer calls Traditional Rationality - rationality as a set of social rules. So your question is, am I allowed this belief? If challenged, can I defend it such that those who hear it acknowledge I've met the challenge? Or can I argue that it should not be required to meet these challenges?

This of course is an entirely bogus question. The primary question that should occupy you is whether your beliefs are accurate, and how to make them more accurate. This community should not be about "how can I be seen to be a goodthinking person" but "how can I be less wrong?"

Also, it seems very much as if you already know how things are going to swing when you subject your theistic beliefs to critical examination. That being so, it's hard to know whether you actually believe in God, or just believe that you believe in God. I hope you will decide that more accurate beliefs are better in all areas of study for several reasons, but one is that I doubt that you are maximizing your own happiness. You are currently in a state of limbo on the subject of religion, where you openly admit that you daren't really think about it. I think that you will indeed find the process of really thinking about it painful, but it will be just as painful next year as it will be now, and if you do it now you'll avoid a year of limbo, a year of feeling bad about yourself for not goodthinking, and a year of being mistaken about something very important.

Replies from: MrHen, aausch
comment by MrHen · 2010-01-16T17:23:57.077Z · LW(p) · GW(p)

This of course is an entirely bogus question. The primary question that should occupy you is whether your beliefs are accurate, and how to make them more accurate. This community should not be about "how can I be seen to be a goodthinking person" but "how can I be less wrong?"

I like this. This clarifies a lot for me.

Replies from: woozle, ciphergoth
comment by woozle · 2010-01-17T00:10:53.106Z · LW(p) · GW(p)

This seems along similar lines to my initial reaction: "belief in God" is an undefined statement, since "God" is undefined (or, alternatively, has so many possible definitions that one is left with more noise than signal), and therefore such a statement does not automatically have any particular implications for your level of rationality. Given without any further context (real-world implications, specific definition of "God", etc.) it is more a social tag (statement of identification with the set of people who say they "believe in God") than anything else.

Are there any implications of this belief which affect how you treat other people? Do any of those implications put you at odds with beliefs which are also reasonable if one does not believe in the existence of [your definition of] God?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T03:34:12.962Z · LW(p) · GW(p)

Saying you believe in an undefined and undefinable fuzzword doesn't reflect well on your high-level mastery of rationality either.

Replies from: pdf23ds, woozle
comment by pdf23ds · 2010-01-17T04:34:19.035Z · LW(p) · GW(p)

OTOH, saying you "believe" in some mostly vacuous statement that you were raised to believe, while not really believing anymore in most of the more obviously false beliefs in the same package, doesn't reflect very poorly on your rationality. (I'm not sure to what extent this applies to MrHen.)

ETA: I view belief in god in a growing rationalist as sort of a vestigial thing. It'll eventually just wither and fall off.

Replies from: Eliezer_Yudkowsky, pdf23ds
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T04:39:28.905Z · LW(p) · GW(p)

It reflects less poorly than seriously believing in astrology, perhaps. But it's still Not Good, the more so if you've been warned. "Just give up already and admit you were completely wrong from the beginning" is not a trivial or dispensable skill.

Replies from: woozle
comment by woozle · 2010-01-17T13:35:32.685Z · LW(p) · GW(p)

It's "not good" on the large scale, but it seems to me that on an individual level MrHen has done a very positive thing -- perhaps two: (1) admitted openly, in front of a crowd known for its non-theism, that he is a theist and holds a belief for which he fully expected some censure; (2) did not cling defensively to that belief.

On #2: His focus on a possible change in his "rationalist" group membership as a result of that belief could be seen as an attempt to divert scrutiny away from his actual belief so that he would not have to defend (and possibly question) it -- but it did not feel to me like that sort of move; it felt more like he was expecting this group to behave much the same way that a religious group would behave if he had openly admitted disbelieving some item of their doctrine: a mis-application of previously experienced behavior, not a diversionary tactic.

Replies from: Eliezer_Yudkowsky, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T16:17:02.559Z · LW(p) · GW(p)

I fully agree that these are impressive subskills that have been displayed, but let us not also forget that it is better to be unimpressively right than impressively wrong. (E.g. Chalmers.)

comment by Paul Crowley (ciphergoth) · 2010-01-17T16:45:43.241Z · LW(p) · GW(p)

With all this assessment of how positive MrHen's actions are, what is the query you're trying to hug?

Replies from: woozle
comment by woozle · 2010-01-18T02:29:21.432Z · LW(p) · GW(p)

Is "query-hugging" a term which has been used elsewhere (e.g. some LW post I should have read)? If I'm interpreting the question correctly, I'm hoping MrHen will now fearlessly examine his "belief in God" and figure out what that means in non-metaphorical real-world terms.

For example, does his God merely provide an uplifting example of goodness for us all to follow, through a series of stories which are not literally true and which one is free to interpret as one wishes? Or (to take a moderate non-liberal theist stance) does his God have a firm belief that gay people, while entitled to the doctrine of "live and let live", are not properly fulfilling some Plan and therefore are not entitled to the same protections as others? Does his God plan to return only after some terrible cataclysm has befallen mankind (and which, therefore, perhaps we should not work so hard to prevent)? Does his God have opinions about the "right to life" of fetal tissue, working on the Sabbath (and which day exactly is the Sabbath... and what constitutes "work"), the value of evidence and reason over faith and doctrine?

Replies from: orthonormal, ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-18T08:20:45.208Z · LW(p) · GW(p)

So the question is, what difference in expectations are you hoping to discriminate between?

comment by pdf23ds · 2010-01-17T06:16:25.139Z · LW(p) · GW(p)

I'm not sure why my comment is at -1. People often start out at a disadvantage, no matter how rational their character, and no matter what their potential*. You can't expect immediate maturity. I was raised as a fundamentalist Christian. I took it pretty seriously around the age of 14 or so, so much so that I started looking into apologetics (the rational defense of the faith). After critically evaluating all the arguments for and against, I ended up abandoning the faith within a couple of years. If my parents hadn't gone down the path of fundamentalism (which only started when I was around 8 anyway--before that they were much more average-like Christians) then I probably wouldn't have become an atheist nearly as soon. I find it unlikely that I wouldn't have ended up as a rationalist, though.

* Of course, people who are raised as rationalists have more potential, but potential has more to do with intelligence and disposition than upbringing.

comment by woozle · 2010-01-17T13:06:54.035Z · LW(p) · GW(p)

The fuzzword has no universally accepted definition, but the speaker may have a specific definition for it -- and that definition could, at least theoretically, be one in which it is entirely rational to believe.

Rather than presuming irrationality just because this is not the case 99.9...% of the time, I wanted to make the point that it really does depend on what that definition is... (a specific case of the more general point that rationalism isn't about group membership or [not] saying certain things, but about how you make decisions) and that this is why a rationalist wouldn't automatically characterize "belief in God" as irrational.

Replies from: alexflint
comment by Alex Flint (alexflint) · 2010-01-19T01:16:18.051Z · LW(p) · GW(p)

If it is not the case 99.9...% of the time then a rationalist certainly would characterize "belief in God" as irrational, with probability 99.9...%.

Replies from: woozle
comment by woozle · 2010-01-19T11:35:02.169Z · LW(p) · GW(p)

Hmm, well, I have to admit that I did that, internally... and now I'm trying to figure out why I didn't want to do it outwardly.

(several wodges of deleted text later...)

Labeling a belief as "irrational" without giving a reason is (a) likely to elicit an emotional defensive response which will inhibit self-critical thinking, and (b) doesn't address the issue of why we believe said belief to be irrational, so is just a kind of argument-from-authority (we are rationalists, which means we are rational, and we say your belief is irrational; therefore it is) which is not a good process to follow if we want to be less wrong.

...so handling it that way, if our goal is to maximize rationality in others, would be irrational.

And we can't address the issue of why it is irrational until we know what it actually means; saying "I believe in God" isn't really a whole lot more meaningful than "The Gostak distims the Doshes". Heck, I can truthfully say it myself: I believe in God -- as a fictional character from an ancient mythology which somehow manages to dominate political discourse in this country. Certainly that God (a character in people's minds) exists, and is very powerful, and it would be irrational of me to deny this.

Yes, "everyone knows" that if you say "I believe in God" it means you believe in a sentient universe-creator (who probably possesses a number of additional characteristics which compel you to behave in certain ways) -- but the actual words don't inherently mean that, which is why I say that it is more of a "social label" than a statement regarding factual matters.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-19T12:22:13.560Z · LW(p) · GW(p)

What assertions does this reasoning not apply to?

Replies from: woozle
comment by woozle · 2010-01-20T10:55:39.501Z · LW(p) · GW(p)

I should think it would not apply to any well-defined assertion of fact -- e.g. "The universe was created by a conscious entity" is a statement we could discuss further -- though "There is evidence that the universe was created by a conscious entity" would be better, because then we are one step further into the dialogue.

Better still would be to add "...and that evidence is [insert argument here]", because then we have a specific line of reasoning to look at and say "No, this doesn't make sense because [fill in counter-argument]", allowing the other person to then explain why our counter is wrong... and so on.

As it is... can we even have a discussion about whether the Gostak distims the Doshes, or whether it is rational to believe that this is true? Not really, because the terms are undefined; we don't know what is being "believed". Same with "God", even though we know what might be (indeed, probably is) intended.

Thinking about it, this phenomenon of having a few handy exceptions to a generally-reliable rule is something frequently exploited by theists (and faitheists).

Theists freely use "God" as a club 99% of the time, to bash people into line and promote their meme, but then on those few occasions when they are backed into a corner by a skeptic they can always say "This? Oh no no, this isn't a club for bashing people, it's just a piece of found art I like to keep on my desk and through which I enjoy contemplating nature's beauty."

So it's very important to identify what we're talking about. If MrHen claims his God is really just a piece of found art, then we have rational grounds for objection if we ever see him using it as a club. If he openly admits that it's a club, then we can object on rational grounds to the idea of bashing people.

Replies from: AdeleneDawner, wedrifid
comment by AdeleneDawner · 2010-01-20T11:40:29.776Z · LW(p) · GW(p)

While I agree with the overall point of this comment, the 99% statistic seems very wrong to me. I expect that for some individuals that's true, but across the whole population I'd be surprised if it's much higher than 40%. I'm basing my estimate on, among other things, having worked in a Roman Catholic nursing home (with actual nuns, though I didn't interact with them often) for four years and not making any particular effort to hide the fact that I'm an atheist from my co-workers. (The residents, I took on a case-by-case basis, as seemed appropriate given the situation.) I experienced exactly one instance in those four years of someone objecting more strongly than 'wait, what?' to my lack of faith: The leader of a new bible study group took offense when I didn't actively participate in their event (and got in my face about it in front of my residents, which you just *don't do* - I was much more upset about her upsetting them than anything else), and my supervisor's reaction to that was to apologize profusely to me (and not about the residents having been upset, either, heh) and forbid that group from coming back. The vast majority of instances where religion came up were either in social bonding contexts or as personal or interpersonal reassurances ('s/he's in heaven now') that were rarely to never directed at me by people who were aware of my atheism, and easily ignorable in any case.

Strawmen aren't good, ok?

Replies from: woozle
comment by woozle · 2010-01-21T12:15:08.182Z · LW(p) · GW(p)

"99.999...%" was intended to refer to the portion of self-identified theists whose theistic beliefs would be demonstrably irrational if explored.

It sounds like you're talking about the portion of self-identified theists who are offended by atheism -- a number which I would expect to be substantially lower.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T12:20:29.228Z · LW(p) · GW(p)

Theists freely use "God" as a club 99% of the time, to bash people into line and promote their meme

Replies from: woozle
comment by woozle · 2010-01-21T12:49:17.731Z · LW(p) · GW(p)

I used that "99%" thing twice -- I apologize for getting muddled about which one you were referring to.

Since we're talking about the "bashing" figure: I maintain that the overwhelming majority of the time when "God" is invoked in the political field, it is being used as a club to bash people into line and promote religious ideas.

Replies from: AdeleneDawner, wedrifid
comment by AdeleneDawner · 2010-01-21T16:56:13.953Z · LW(p) · GW(p)

I maintain that the overwhelming majority of the time when "God" is invoked in the political field, it is being used as a club to bash people into line and promote religious ideas.

I'd agree with that; I don't see much other reason to bring religion up, in that context. I expect that politics, some kinds of child-rearing, and provoked debates constitute the bulk of instances where religion is used as a club, and that those situations aren't the bulk of the instances where religion is used at all. (My estimate for how often religion is used as a club compared to other uses, outside those contexts, is considerably less than 10%. People live this stuff even when we're not around for them to fight with, after all.)

Replies from: woozle
comment by woozle · 2010-01-22T01:45:54.531Z · LW(p) · GW(p)

Perhaps what we are working towards, then, is a recognition that an irrational belief which is Mostly Harmless in personal life can become a deadly threat when let loose in the wrong habitat (such as the political field) -- and that therefore people who wish to embrace this Mostly Harmless irrational belief are much like exotic pet owners in that they need to be aware that their cute furry wuggums can be a serious hazard if not properly contained and cared for.

To bring this back to the original issue -- i.e. why it's necessary for MrHen to explain what his belief means before anyone can claim it is rational or otherwise -- and complete the metaphor:

Believing in God is rather like owning a pet. It may or may not be a particularly rational thing to do (you have to spend a lot of time and money nurturing it, and the benefit you get in return is pretty much entirely psychological), but some pets are much more dangerous than others... and the degree of danger may not have any relationship to how cute and harmless they seem when you first adopt them.

Replies from: Blueberry, AdeleneDawner
comment by Blueberry · 2010-01-22T04:40:14.420Z · LW(p) · GW(p)

some pets are much more dangerous than others... and the degree of danger may not have any relationship to how cute and harmless they seem when you first adopt them.

And once you start owning a cute little pet, it opens the door to owning larger and more dangerous pets.

comment by AdeleneDawner · 2010-01-22T03:51:08.864Z · LW(p) · GW(p)

That sounds about right.

comment by wedrifid · 2010-01-21T14:08:21.194Z · LW(p) · GW(p)

I maintain that the overwhelming majority of the time when "God" is invoked in the political field, it is being used as a club to bash people into line and promote religious ideas.

It is appropriate, then, that politics is referred to as the skillful use of blunt instruments.

Your observations may be somewhat different from mine. I don't know where you reside, but I know that in the US, for example, 'God' plays more of a part in politics than it does here in Australia.

comment by wedrifid · 2010-01-20T12:09:13.421Z · LW(p) · GW(p)

Theists freely use "God" as a club 99% of the time, to bash people into line and promote their meme

It seems you have suffered from the blunt end of a selection effect.

Perhaps:

Theists freely use "God" as a club most of the time, to feel like they belong and maintain a social identity.

Replies from: woozle
comment by woozle · 2010-01-21T12:30:47.859Z · LW(p) · GW(p)

It depends what context we're sampling from. I was thinking of discussion in the media, and/or politics in general, where religion's main contribution seems to be as I described it: demands that the speaker's particular beliefs be given precedence because they come "from God" -- a club for bashing people into line.

Yes, the 99% figure was overprecise; I probably should have said "the overwhelming majority of the time". It would be an interesting study to actually count the number of "bashing people into line" usages versus all other political uses of religion; perhaps religion-based pleas for charity and mercy don't get counted because they seem sane -- something anyone reasonable would say -- so my unconscious reference-counter doesn't add them to religion's score.

In any case, your definition-swapping with the word "club" completely misses my point. To whatever extent MrHen uses God as a club-for-joining (what I called a "social label"), I have no objection.

It is the other sort of club I want MrHen either to specifically reject or defend: does he accept such usage of "belief in God" (if someone says God said it, it must be true), or do reason and critical thinking prevail if someone tries to persuade him that he must do X because of his belief?

comment by Paul Crowley (ciphergoth) · 2010-01-16T20:26:16.413Z · LW(p) · GW(p)

Thanks. You should definitely read No One Can Exempt You From Rationality's Laws, from which this idea is largely drawn.

comment by aausch · 2010-01-17T00:12:59.601Z · LW(p) · GW(p)

Yes, I agree with you here. It looks to me like one of the core values of the community revolves around first evaluating each individual belief for its rationality, as opposed to evaluating the individual. And this seems very sensible to me - given how compartmentalized brains can be, and how rationality in one individual can vary over time.

Also, I am amused by the parallels between this core value and one of the core principles of computer security in the context of banking transactions. As Schneier describes it: evaluate the transaction, not the end user.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-17T00:29:11.641Z · LW(p) · GW(p)

first evaluating each individual belief for its rationality

Again, no, I'm afraid you're still making the same mistake. When you talk about evaluating a belief for its rationality, it still sounds like the mindset where you're trying to work out if the necessary duty has been done to the rationality dance, so that a belief may be allowed in polite society. But our first concern should be: is this true? Does this map match the territory? And rationality is whatever systematically tends to improve the accuracy of your map. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.

Replies from: aausch
comment by aausch · 2010-01-17T00:46:44.509Z · LW(p) · GW(p)

Now I am really confused. How can a belief be rational, and not true?

Replies from: Eliezer_Yudkowsky, Hul-Gil
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T03:33:22.699Z · LW(p) · GW(p)

"Rational" is a systematic process for arriving at true beliefs (or high-scoring probability distributions), so if you want true beliefs, you'll think in the ways you think are "rational". But even in the very best case, you'll hit on 10% probabilities one time out of ten.

I didn't see anything wrong with your original comment, though; it's possible that Ciphergoth is trying to correct a mistake that isn't there.

comment by Hul-Gil · 2011-06-23T07:13:21.709Z · LW(p) · GW(p)

Well, if you got a very improbable result from a body of data, I could see this happening. For example, if most of a group given a medication improved significantly over the control group, but the sample size wasn't large enough and the improvement was actually coincidence, then it would be rational to believe that it's an effective medication... but it wouldn't be true.

Then again, we should only have as much confidence in our proposition as there is evidence for it, so we'd include a whatever-percent possibility of coincidence. I didn't see anything wrong with your original comment, either.
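
As a rough illustration of the kind of coincidence described above, here is a minimal sketch, assuming entirely made-up numbers (ten patients per arm, a 30% chance of spontaneous improvement, and a drug that does nothing at all):

    import random

    # Hypothetical toy trial: the "drug" has no effect, so both arms improve
    # at the same base rate.  The question is how often the treated arm still
    # ends up looking better purely by chance.
    def toy_trial(n_per_arm=10, improve_prob=0.3):
        treated = sum(random.random() < improve_prob for _ in range(n_per_arm))
        control = sum(random.random() < improve_prob for _ in range(n_per_arm))
        return treated > control  # the trial "looks like" the drug helped

    random.seed(0)
    trials = 100_000
    false_wins = sum(toy_trial() for _ in range(trials))
    print(f"Ineffective drug looks better in {false_wins / trials:.0%} of small trials")

With these particular made-up numbers, the treated group comes out ahead in roughly four trials out of ten, even though the drug does nothing.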

Replies from: aausch
comment by aausch · 2011-06-28T03:30:41.015Z · LW(p) · GW(p)

I've since learned that some people use the word "rationality" to mean "skills we use to win arguments and convince people to take our point of view to be true", as opposed to the definition which I've come to expect on this site (currently, on an overly poetic whim, I'd summarize it as "a meta-recursively applied, optimized, truth-finding and decision making process" - actual definition here).

comment by RobinHanson · 2010-01-15T20:32:21.860Z · LW(p) · GW(p)

It may be enough if we find common cause in wanting to be rational in some shared topic areas. As long as we can clearly demarcate off-limit topics, we might productively work on our rationality on other topics. We've heard that politics is the mind killer, and that we will do better working on rationality if we stay away from politics. You might argue similarly about religion. That all said, I can also see a need for a place for people to gather who want to be rational about all topics. So, the question for this community to decide is, what if any topics should be off-limits here?

Replies from: Vladimir_Gritsenko, UnholySmoke, arbimote, Kevin, MrHen
comment by Vladimir_Gritsenko · 2010-01-15T20:43:25.578Z · LW(p) · GW(p)

Agreed.

One caveat: it's great to want to be a rationalist about all things, but let him who is without sin cast the first stone. So much of the community's energies have gone into analyzing akrasia - understanding that behavior X is rational and proper yet not doing it - that it appears hypocritical and counter-productive to reject members because they haven't yet reached all the right conclusions. After all, MrHen did mark religion for later contemplation.

comment by UnholySmoke · 2010-01-18T13:19:09.472Z · LW(p) · GW(p)

"If you could reason with religious people, there would be no religious people."

  • House M.D.

Robin, I'm a little surprised to read you saying that topics on which it's difficult to stay on track should be skirted. As far as I'm concerned, 'What are your religious views?' is the first question on the Basic Rationality test. I know that encouraging compartmentalisation isn't your goal by any means, but it sounds to me as though it would be the primary effect.

I can also see a need for a place for people to gather who want to be rational about all topics.

Now you're talking. No topics should be off-limits!

comment by arbimote · 2010-01-16T06:07:52.350Z · LW(p) · GW(p)

It would be great for this rationalist community to be able to discuss any topic, but in a way that insulates the main rationality discussions from off-topic discussions. Perhaps forum software separate from the main format of LessWrong? Are monthly open threads enough for off-topic discussions?

Replies from: groupuscule
comment by groupuscule · 2010-01-25T03:15:59.802Z · LW(p) · GW(p)

A rationalist forum would be interesting not only for the discussions themselves, but also because it would materialize and test some of the more abstract stuff from this site.

Reading the new year/decade predictions conversations, it struck me that effective treatment of outside content should be Less Wrong's crown jewel--the real proof that rationality produces good ideas.

comment by Kevin · 2010-01-25T15:10:20.265Z · LW(p) · GW(p)

We should discuss non-meta topics on non-meta subreddits. Maybe if you asked Eliezer he would turn on sub-reddit creation or at least make one. I would like a non-meta group blog, a non-meta link-sharing subreddit, and an on-topic meta rationalist link-sharing subreddit.

I think that the problems of scale and education these extra sites will create are not easy, but solving them as soon as possible is desirable.

It's something we need to discuss more fully soon enough; I'll make a top-level post to discuss it eventually.

comment by MrHen · 2010-01-15T20:48:25.859Z · LW(p) · GW(p)

This is an excellent way of saying what I wanted to say and asking what I wanted to ask.

comment by Roko · 2010-01-15T22:11:09.632Z · LW(p) · GW(p)

I think that, in practice, a few religious people on LW are harmless and will probably have a positive effect.

It seems politically correct to go softly-softly on the few theists here, but let's not forget that theism is known to systematically lead to false beliefs (above and beyond the [probabilistically] false belief that there is a God), such as theistic moral realism, denial of evolution and evolutionary psychology, and abandonment of the scientific method. In a community dedicated to creating an accurate map-territory correspondence by systematic weighing of evidence and by fostering a fundamental mistrust of the corrupted hardware that we run on, theism is not welcome en masse.

Replies from: Furcas
comment by Furcas · 2010-01-15T23:11:56.300Z · LW(p) · GW(p)

Hear, hear.

I would add that in order to be welcome en masse, theism first has to be welcome in small quantities, which seems to be the case already, judging from the overall positive response to the original post. This makes me think that Less Wrong is on the path to failure as a rationalist community.

Replies from: Roko
comment by Roko · 2010-01-15T23:17:44.112Z · LW(p) · GW(p)

I think that a mass invasion of theists is unlikely for social reasons - they just won't bother to come; I don't lie awake at night frightened that when I next check LW the top article will be about how we can learn rationality lessons from JC.

Replies from: Furcas
comment by Furcas · 2010-01-15T23:30:17.715Z · LW(p) · GW(p)

Neither do I, but I wouldn't be surprised to see a post promoting religious accommodationism before 2010 is over.

Replies from: billswift
comment by billswift · 2010-01-16T10:03:04.920Z · LW(p) · GW(p)

Neither would I. The last several months have been another of those depressing periods where I have gotten my nose rubbed in the essential truth of Lazarus Long's "Never underestimate the power of human stupidity."

comment by mathemajician · 2010-01-16T11:20:59.739Z · LW(p) · GW(p)

There is nothing about being a rationalist that says that you can't believe in God. I think the key point of rationality is to believe in the world as it is rather than as you might imagine it to be, which is to say that you believe in the existence of things due to the weight of evidence.

Ask yourself: do you want to believe in things due to evidence?

If the answer is no, then you have no right calling yourself a "wannabe rationalist" because, quite simply, you don't want to hold rational beliefs.

If the answer is yes, then put this into practice. Is the moon smaller than the earth? Does Zeus exist? Does my toaster still work? In each case, what is the evidence?

If you find yourself believing something that you know most rationalists don't believe in, and you think you're basing your beliefs on solid evidence and logical reasoning, then by all means come and tell us about it! At that point we can get into the details of your evidence and the many more subtle points of rational reasoning in order to determine whether you really do have a good case. If you do, we will believe.

Replies from: gelisam
comment by gelisam · 2010-01-17T17:15:15.270Z · LW(p) · GW(p)

Uh-oh.

I... I don't think I do want to believe in things due to evidence. Not deep down inside.

When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth.

I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment.

And I take notes, for those rare cases in my life where truth actually is more important to my happiness than social conventions: when I encounter a real-world problem that I actually want to solve. This happens less often than you might think.

Replies from: orthonormal, Nanani, Wilka
comment by orthonormal · 2010-01-18T08:25:14.923Z · LW(p) · GW(p)

Here's another set of downvotes I don't get (ETA: parent was at -2 when I arrived). Gelisam is just stating their personal experience, not in order to claim we must all do likewise, but as their own reaction to the debate.

I think this community would be ill served by a norm that makes it a punishable offense to ever admit one doesn't strive for truth as much as one ought.

As far as replies go:

I'd rather believe, quite simply, in whatever I need to believe in order to be happiest.

It's not so simple. If you're self-deceiving, you might be quite wrong about whether your beliefs actually make you happier! There's a very relevant post on doublethink.

Replies from: AdeleneDawner, gelisam
comment by AdeleneDawner · 2010-01-18T08:31:41.397Z · LW(p) · GW(p)

Here's another set of downvotes I don't get. Gelisam is just stating their personal experience, not in order to claim we must all do likewise, but as their own reaction to the debate.

I think this community would be ill served by a norm that makes it a punishable offense to ever admit one doesn't strive for truth as much as one ought.

Agreed.

comment by gelisam · 2010-01-18T19:01:16.923Z · LW(p) · GW(p)

Ah, so that's why people downvoted my comment! Thanks for explaining. I thought it was only because I appeared to be confusing utilons with hedons.

Regarding the doublethink post, I agree that I couldn't rationally assign myself false but beneficial beliefs, and I feel silly for writing that I could. On the other hand, sometimes I want to believe in false but beneficial beliefs, and that's why I can't pretend to be an aspiring rationalist.

comment by Nanani · 2010-01-18T07:13:35.348Z · LW(p) · GW(p)

"Maximizing truth" doesn't make any sense. You can't maximize truth. You can improve your knowledge of the truth, but the truth itself is independent of your brain state.

In any case, when is untruth more instrumental to your utility function than truth? Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.

Replies from: gelisam
comment by gelisam · 2010-01-18T19:27:07.323Z · LW(p) · GW(p)

You can't maximize truth.

I think it's fairly obvious that "maximizing truth" meant "maximizing the correlation between my beliefs and truth".

Having accurate beliefs is an incredibly useful thing. You may well find it serves your utility better.

Truth is overrated. My prior was heavily biased toward truth, but then a brief and unpleasant encounter with nihilism caused me to lower my estimate.

And before you complain that this doesn't make any sense either, let me spell out that it is an estimate of the probability that the strategy "pursue truth first, happiness second" yields, on average, more hedons than "pursue happiness using the current set of beliefs".

comment by Wilka · 2010-01-18T18:32:46.377Z · LW(p) · GW(p)

When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth.

Have you ever had the experience of learning something true that you would rather not have learned? The only type of examples I can think of here (off the top of my head) would be finding out you had an unfaithful lover, or that you were really adopted. But in both cases, it seems like the 'unhappiness' you get from learning it would pass and you'd be happy that you found out in the long run.

I've heard people say similar things about losing the belief in God - because it could lead to losing (or at least drifting away from) people you hold close, if their belief in God had been an important thing in their relationship to you.

Replies from: faul_sname
comment by faul_sname · 2012-02-28T05:11:04.738Z · LW(p) · GW(p)

Have you ever had the experience of learning something true that you would rather not have learned?

Yes. Three times, in fact. Two of them are of roughly the same class as that one thing floating around, and the third is of a different class and far worse than the other two (involving life insurance and charity: you'll find it if you look).

comment by PhilGoetz · 2010-01-16T03:33:20.642Z · LW(p) · GW(p)

I believe in God too, since I think it's more likely that there is a God than that there isn't. But by "God" I mean "experimenter", or "producer", or "player".

I should start an apocalyptic Dionysian religion around one commandment and threat: "Be entertaining, for sweeps week cometh." The main problem is that would make Hitler a saint.

Replies from: Jack
comment by Jack · 2010-01-16T03:59:21.647Z · LW(p) · GW(p)

I believe in God too, since I think it's more likely that there is a God than that there isn't. But by "God" I mean "simulation designer", or "producer", or "player".

I recognize the simulation hypothesis as valid but what evidence have you that it is more likely the case than not?

I should start an apocalyptic Dionysian religion around one commandment and threat: "Be entertaining, or the world will be cancelled."

I might well join if you can convince me of the above.

Replies from: PhilGoetz, xamdam, Kevin
comment by PhilGoetz · 2010-02-06T19:12:44.141Z · LW(p) · GW(p)

I might well join if you can convince me of the above.

But it would be more entertaining if you became its nemesis and devoted yourself to its destruction.

I begin to sense my new religion may have severe organizational problems.

comment by xamdam · 2010-07-25T14:29:02.509Z · LW(p) · GW(p)

He had me at

Dionysian

comment by Kevin · 2010-01-16T04:16:42.682Z · LW(p) · GW(p)

My intuitive justification:

There are an infinite number of times I can be the Kevin simulated in 2010. I even think it very likely that Kevin_10000CE would want to run a consciousness through an ancestor simulator of the bottleneck period in human history to be able to assimilate the knowledge of that experience.

So I could be in any one of an infinite possible number of simulations, or I could be living in the true 2010. The probability calculation becomes meaningless because of the infinities involved, but I don't see why my intuition is wrong.

Replies from: Jack
comment by Jack · 2010-01-16T04:46:10.919Z · LW(p) · GW(p)

So I could be in any one of an infinite possible number of simulations, or I could be living in the true 2010. The probability calculation becomes meaningless because of the infinities involved, but I don't see why my intuition is wrong.

If your premise is true, the probability you and I are in a simulation is 1 (though for obnoxious reasons, so I understand what you mean). But the premise seems wrong. There are a large number of plausible futures in which no world simulations are ever run.
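
A minimal way to make that explicit, assuming purely for illustration a finite version of the premise in which every un-simulated observer is accompanied by $N$ indistinguishable simulated copies:

$$P(\text{simulated}) = \frac{N}{N+1} \to 1 \quad \text{as } N \to \infty$$

The "obnoxious" part is only that the probability becomes degenerate in the limit; if the premise fails and $N$ is small or zero, it drops accordingly.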

Replies from: Kevin
comment by Kevin · 2010-01-16T05:04:30.697Z · LW(p) · GW(p)

Are most of those possible futures with no world simulations because of the destruction of human civilization, or because humans transcend and ancestor simulations are deemed to be something like unethical?

If the first, I'm much more optimistic about us not killing ourselves than Eliezer.

Replies from: Eliezer_Yudkowsky, Jack
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-06T21:35:22.189Z · LW(p) · GW(p)

I'd think UFAIs would be much more likely to run faithful generic-intelligent-species simulations than Friendly AIs would be likely to run faithful ancestor simulations.

Replies from: Kevin
comment by Kevin · 2010-02-06T22:27:40.980Z · LW(p) · GW(p)

So then the question becomes, if you're a transhuman living under a FAI and you want to play around in simulations of certain interesting times in human history, how realistic can the simulations be?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-07T00:24:28.113Z · LW(p) · GW(p)

Not so realistic that you become a different person who never consented to being simulated, nor so realistic that "waking up" afterward equates to killing an innocent person and substituting the old you in their place.

Replies from: DanielVarga, MichaelHoward, Kevin
comment by DanielVarga · 2010-02-07T02:24:30.211Z · LW(p) · GW(p)

In a universe where merging consciousnesses is just as routine as splitting them, the transhumans may have very different intuitions about what is ethical. For example, I can imagine that starting a brand new consciousness with the intention of gradually dissolving it in another one (a sort of safe landing of the simulated consciousness and its experiences) will be considered perfectly ethical and routine. Maybe it will even be just as routine as us humans reasoning about other humans. (Yes, I know that I don't create a new conscious being when I think about the intentions of another human.)

What I just claimed is that in such a universe, very different ethical norms may emerge. A much stronger claim that I would not try to defend right now is that such a nonchalant and inhuman value system may simply be the logical consequence of our value system when consistently applied to such a weird universe.

Replies from: Kevin, byrnema
comment by Kevin · 2010-02-07T04:56:42.312Z · LW(p) · GW(p)

I agree with you, but I think part of the problem is that we only get to define ethics once, unless we somehow program the FAI to take the changing volition of the transhuman race into account.

Replies from: DanielVarga
comment by DanielVarga · 2010-02-07T14:31:55.788Z · LW(p) · GW(p)

I agree with you,

Do you agree with my first, ridiculously modest claim, or my second, quite speculative one? :)

Replies from: Kevin
comment by Kevin · 2010-02-08T13:42:56.964Z · LW(p) · GW(p)

I agreed specifically with the first modest claim and the general sentiment of the entire post.

comment by byrnema · 2010-02-07T03:51:24.656Z · LW(p) · GW(p)

This comment has been moved.

comment by MichaelHoward · 2010-02-08T13:30:57.103Z · LW(p) · GW(p)

Not so realistic that you become a different person who never consented to being simulated, nor so realistic that "waking up" afterward equates to killing an innocent person and substituting the old you in their place.

Even where the FAI was sure that different person would consent to being simulated if made aware of the situation and thinking clearly? It could throw in some pretty good incentives.

I wonder if we should adjust our individual estimates of being in a Friendly-run sim (vs UF-sim or non-sim) based on whether we think we'd give consent.

I also wonder if we should adjust whether we'd give consent based on how much we'd prefer to be in a Friendly-run sim, and how an FAI would handle that appropriately.

Replies from: Kevin
comment by Kevin · 2010-02-08T13:38:45.253Z · LW(p) · GW(p)

One reason to significantly adjust downward the probability of being in a Friendly-run sim is what I would call "The Haiti Problem"... I'm curious if anyone has solutions to that problem. Does granting eventual immortality (or the desired heaven!) to all simulated persons make up for a lifetime of suffering?

Replies from: orthonormal
comment by orthonormal · 2010-07-25T17:04:39.295Z · LW(p) · GW(p)

Perhaps only a small number of persons need be simulated as fully conscious beings, and the rest are acted out well enough to fool us. Perceived suffering of others can add to the verisimilitude of the simulation.

Of course, internalizing this perspective seems like moral poison, because I really do want the root-level version of me to act against suffering there where it definitely exists.

comment by Kevin · 2010-02-07T10:56:05.520Z · LW(p) · GW(p)

I'm not sure I believe your first clause -- the final chapter of The Metamorphosis of Prime Intellect tried to propose an almost Buddhist-type resurrection as a solution to the problem of fun. If the universe starts feeling too much like a game to some transhumans, I think a desire to live again as a human for a single lifetime might be somewhat common. Does that desire override the suffering that will be created for the new human consciousness that will later be merged back into the immortal transhuman? Most current humans do seem to value suffering for some reason I don't understand yet...

Since this is perilously close to an argument about CEV now, we can probably leave that as a rhetorical question. For what it's worth, I updated my intuitive qualitative probability of living in a simulation somewhat downward because of your statement that as you conceive of your friendly AI right now, it wouldn't have let me reincarnate myself into my current life.

Replies from: Strange7
comment by Strange7 · 2010-02-22T01:09:27.720Z · LW(p) · GW(p)

The masochists that I know seem to value suffering either for interpersonal reasons (as a demonstration of control - beyond that I'm insufficiently informed to speculate), or to establish a baseline against which pleasurable experiences seem more meaningful.

comment by Jack · 2010-01-16T05:09:53.627Z · LW(p) · GW(p)

Are most of those possible futures with no world simulations because of the destruction of human civilization, or because humans transcend and ancestor simulations are deemed to be something like unethical?

Yes.

Also, we might just be too poor in the future (either too poor to run any or too poor to run many). And if it wasn't included in "humans transcend", a Singleton could prohibit them.

Replies from: Kevin
comment by Kevin · 2010-01-17T00:23:53.443Z · LW(p) · GW(p)

Those are certainly possibilities, but we are comparing infinite sets here. Or comparing uncountable futures. I recognize that my premise may "seem" wrong, but I don't think we can convince each other until we can take this out of the realm of comparing infinities.

Replies from: Jack
comment by Jack · 2010-01-17T04:05:53.954Z · LW(p) · GW(p)

I don't think they're uncountable. It's just a continuous probability distribution.

comment by Alicorn · 2010-01-15T20:21:30.747Z · LW(p) · GW(p)

I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?

I want you to keep doing what you have been doing. I find it distressing that you seem to think it'd be a reasonable, or even realistic, response for us to chase you out with torches and pitchforks. I am sorry to hear that we have created an environment that has led you to conceal this fact about yourself for such an extended time. I am pleased to note that you seem to find us worth hanging out with and seeking advice and help from in spite of us apparently having created this unwelcoming atmosphere.

I'm also personally curious about your exact flavor of theism, but that may, as you indicate, be neither here nor there.

If you haven't already, you might want to read Theism, Wednesday, and Not Being Adopted. I don't know if the case I describe is similar to yours or not, though.

Replies from: MrHen, MrHen
comment by MrHen · 2010-01-15T20:40:14.439Z · LW(p) · GW(p)

I find it distressing that you seem to think it'd be a reasonable, or even realistic, response for us to chase you out with torches and pitchforks. I am sorry to hear that we have created an environment that has led you to conceal this fact about yourself for such an extended time.

I really don't know how the community is going to respond. The last time I talked like this I made a comment that ended up receiving the most upvotes of anything I have done. I don't expect torches and pitchforks, but I do expect some form of ultimatum. I also expect an intangible response that will affect my future comments/posts.

But your comment is certainly encouraging. I wasn't so much "hiding". I just didn't have a good reason to come forward. Why would I? Why would any theist in this community? The reason I did is that I am weighing whether I want to actively devote time to continuing this path. If I commit to this path, it is (a) better to say this now than later and (b) a good way to ping for impassable objects with regard to using LessWrong to continue my journey.

EDIT: Oh, and a response to Wednesday is forthcoming but will take more thought. :)

UPDATE: The response is up.

Replies from: JamesAndrix
comment by JamesAndrix · 2010-01-16T06:27:35.742Z · LW(p) · GW(p)

I don't expect torches and pitchforks, but I do expect some form of ultimatum. I also expect an intangible response that will affect my future comments/posts.

I suggest changing your expectations. I identified myself as a theist here long ago, and haven't noticed any negative response. At least one other person did at the same time; I think we got upvotes and someone commented on it being interesting, but that was that. Can't find the link.

I now self-identify as an atheist, so stick around, the magic works. :-)

comment by MrHen · 2010-01-15T21:15:45.052Z · LW(p) · GW(p)

This is the response to the Wednesday post. (Which, by the way, I read way back when it was written. You can find a few of my comments down in the threads. :) )

Wednesday's case is certainly interesting. My younger self used similar logic during his big crisis of faith and I don't consider it to be a poor choice of action. I think my current situation is very apt for a future Wednesday that begins to wonder about some of the things she has seen. Future-Wednesday and Present-MrHen would probably have some excellent discussions.

The big question that is relevant for Wednesday is whether you can successfully compartmentalize areas of your map. You say, "I reject out of hand the idea that she should deconvert in the closet and systematically lie to everyone she knows." I would respond by asking the same questions I asked in my post. Is it helpful to pursue "rational" theism? It isn't true rationalism by any means. But is it better than the alternative?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-15T22:55:42.988Z · LW(p) · GW(p)

How much of the Sequences have you read?

Replies from: MrHen
comment by MrHen · 2010-01-16T15:58:26.689Z · LW(p) · GW(p)

I keep track at my wiki user page.

Replies from: Eliezer_Yudkowsky, Kutta
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-16T17:36:12.047Z · LW(p) · GW(p)

That's basically nothing. Okay, not much point in my wondering "What could I have missed?" then.

Your intentions seem good, and if you read through the Sequences (or even just Map and Territory, Mysterious Answers and How To Actually Change Your Mind) then I expect you'll have a very different perspective at the end of it.

Replies from: byrnema, MrHen
comment by byrnema · 2010-01-16T18:03:57.881Z · LW(p) · GW(p)

I think you did miss something. You write that everything adds back up to normalcy, but I observe that physical materialism feels bereft of meaning compared to the theistic worldview. (I can give more details regarding this, and concede in advance it is not a universal experience.)

If I can construct a free-floating belief system that makes "values" coherent for this bereft person, on what basis should they not prefer the free-floating belief system? The running argument seems to be that they should value 'truth'. However, the obvious catch is that the person only places a terminal value on truth from within the free-floating belief system.

Replies from: Eliezer_Yudkowsky, Furcas, UnholySmoke, MichaelVassar, Nanani, byrnema
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-16T21:41:24.653Z · LW(p) · GW(p)

Byrnema, if you took someone who'd just never heard of God to begin with, never heard of any superstitions, just grew up in a nice materialistic civilization that expected to take over the galaxies someday, and you asked them "What's left, when God's gone?" they'd look up at the stars, look back at you, and say, "I don't understand what you think is missing - it looks to me like everything is there."

I'm sorry that I failed to convey this, and I do worry that the metaethics sequence failed and will need to be done over. But you can't say I didn't try.

Replies from: byrnema, Kevin
comment by byrnema · 2010-01-16T23:02:46.549Z · LW(p) · GW(p)

I'm sorry that I failed to convey this,

You did. I just think it's crazy to think that no one will ever ask, "what's the purpose of taking over all these galaxies?".

I'm also not sure why you mention God specifically. I'm not sure how the existence of a supreme super-power assigning purpose would be any more meaningful than -- or, really, any different from -- the physical laws of the universe assigning purpose.

Replies from: orthonormal, Vive-ut-Vivas
comment by orthonormal · 2010-01-16T23:48:43.627Z · LW(p) · GW(p)

I just think it's crazy to think that no one will ever ask, "what's the purpose of taking over all these galaxies?"

If asked, they might answer along the lines of "so that more people can exist and be happy"; "so that ever more interesting and fun and beautiful patterns can come into being"; "so that we can continue to learn and understand more and more of the strange and wonderful patterns of reality", etc. None of these are magical answers; they can all be discussed in terms of a (more sophisticated than current) analysis of what these future beings want and like, what their ethics and aesthetics consist of (and yes, these are complicated patterns to be found within their minds, not within some Framework of Objective Value), etc.

What I think is crazy is to reject all those answers and say you can't in principle be satisfied with any answer that could be different for a different civilization. I think that such dismissals are a mistake along the lines of asking for the final cause or "purpose" of the fact that rocks fall, and rejecting gravity as an insufficient answer because it's only an efficient cause.

comment by Vive-ut-Vivas · 2010-01-17T05:37:25.496Z · LW(p) · GW(p)

The question itself ("what's the purpose?") presupposes the answer. If you've never heard of God or superstition, why would you assume that there was any purpose other than just to take over all these galaxies?

Replies from: byrnema
comment by byrnema · 2010-01-17T08:29:54.631Z · LW(p) · GW(p)

Whenever you do anything, isn't it natural to question what you're doing it for?

Replies from: Vive-ut-Vivas, Mitchell_Porter
comment by Vive-ut-Vivas · 2010-01-17T18:23:10.875Z · LW(p) · GW(p)

That's not the question you're asking. There's no God-shaped hole in answering "because we feel like taking over galaxies" until you put it there.

Replies from: byrnema
comment by byrnema · 2010-01-17T18:46:29.218Z · LW(p) · GW(p)

I didn't say anything about a God-shaped hole. You're reading something different into my question, or maybe trying to pigeonhole my question into a stereotype that doesn't quite fit.

Whenever I do anything, I have an idea of how that fits into a larger objective. One exception might be activities that I do for simplistic hedonism, but that doesn't provide the full range of satisfaction and joy that I feel when I feel like I'm making progress in something. The pleasure in the idea of taking over galaxies is very much progress-based, and so it would be natural to ask why this would actually be progress.

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-01-17T19:13:40.656Z · LW(p) · GW(p)

Substitute "meaning" for "God", then. The problem is trying to fit everything into a "larger objective": whose objective? That's what I mean when I say you're presupposing the answer.

Also, "why would taking over galaxies be progress?" can be answered pretty simply once you explain what you mean by "progress". Technological advancement? Increased wealth? Curiosity?

Replies from: byrnema
comment by byrnema · 2010-01-17T19:21:17.858Z · LW(p) · GW(p)

Substitute "meaning" for "God", then.

Good. Your comments above now make good sense to me.

The problem is trying to fit everything into a "larger objective"

That's my problem. Maybe it's a problem common to many theists too. Any cures? And is this problem a hardware problem or a logic problem?

Replies from: Vive-ut-Vivas
comment by Vive-ut-Vivas · 2010-01-17T20:15:14.583Z · LW(p) · GW(p)

If I had the catch-all cure to existential angst, I wouldn't be parroting it on here, I'd be trying to sell it for millions!

Maybe you could call it a hardware problem, since I'd liken it to a virus. You've been corrupted to look for a problem when there isn't one, and you know there isn't one, but you just don't feel emotionally satisfied (correct me if I'm wrong here). I don't have an answer for that. I would suspect that the more you distance yourself from these kinds of views (that the universe must have "meaning" and all that), the less relevant the question will even seem. I think the problem just involves breaking a habit.

Replies from: wedrifid
comment by wedrifid · 2010-01-17T22:40:40.394Z · LW(p) · GW(p)

If I had the catch-all cure to existential angst, I wouldn't be parroting it on here, I'd be trying to sell it for millions!

Then Robin would have a field day explaining why people did not actually buy it, despite the wringing of hands and gnashing of teeth.

comment by Mitchell_Porter · 2010-01-17T12:02:23.510Z · LW(p) · GW(p)

Before your journey into nihilism, why did you do things?

ETA: Though this discussion focuses on purposes and actions, I wonder if the problem might be, that something about life which was always present for you and providing meaning, no matter what you did, now appears to be absent under all conceivable circumstances.

comment by Kevin · 2010-01-17T08:50:36.898Z · LW(p) · GW(p)

I couldn't make it through the metaethics sequence but I really liked Three Worlds Collide.

comment by Furcas · 2010-01-16T18:28:52.706Z · LW(p) · GW(p)

Eliezer didn't really miss anything. What you're asking boils down to, "If I value happiness more than truth, should I delude myself into holding a false belief that has no significant consequence except making me happy?"

The obvious answer to that question is, "Yes, unless you can change yourself so that your happiness does not require belief in something that doesn't exist".

The second option is something that Eliezer addressed in Joy in the Merely Real. He didn't address the first option, self-deception, because this website is about truth-seeking, and anyway, most people who want to deceive themselves don't need help to do it.

Replies from: byrnema, ciphergoth
comment by byrnema · 2010-01-16T23:40:42.139Z · LW(p) · GW(p)

I was embarrassed for a while (about 25 minutes after reading your comment and Ciphergoth's) that my ideas would be reduced to the clichés you are apparently responding to. But then I realized I don't need to take it personally, but can qualify what I mean.

First, there's nothing from my question to Eliezer to indicate that I value happiness more than truth, or that I value happiness at all. There are things I value more than truth; or rather, I only find it possible to value truth above all else within a system that is coherent and consistent and thus allows a meaningful concept of truth.

Replies from: Furcas
comment by Furcas · 2010-01-17T02:39:36.111Z · LW(p) · GW(p)

If "feels bereft of meaning" doesn't mean that it makes you unhappy, the only other interpretation that even begins to make sense to me, is that an important part of your terminal values is entirely dependent on the truth of theism.

To experience what that must feel like, I try to imagine how I would feel if I discovered that solipsism is true and that I have no way of ever really affecting anything that happens to me. It would make me unhappy, sure, but more significantly it would also make my existence meaningless in the very real sense that while the desires that are encoded in my brain (or whatever it is that produces my mind) would not magically cease to exist, I would have to acknowledge that there is no possible way for my desires to ever become reality.

Is this closer to what you're talking about? If it isn't, I'm going to have to conclude that either I'm a lot stupider than I thought, or you're talking about a square circle, something impossible.

Replies from: byrnema
comment by byrnema · 2010-01-17T03:34:13.578Z · LW(p) · GW(p)

It is much closer to what I'm talking about.

Orthonormal writes that in the absence of a Framework of Objective Value, he found he still cared about things (the welfare of friends and family, the fate of the world, the truth of his beliefs, etc).

In contrast, I find my caring begins fading away. Some values go quickly and go first -- the fate of the world, the truth of my own beliefs -- but other values linger, long enough for me to question the validity of a worldview that would leave me indifferent to my family.

Orthonormal also writes that in response to my hypothetical question about purpose,

If asked, they might answer along the lines of "so that more people can exist and be happy"; "so that ever more interesting and fun and beautiful patterns can come into being"; "so that we can continue to learn and understand more and more of the strange and wonderful patterns of reality", etc. None of these are magical answers;

And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in and of themselves. In fact, the something which causes me to hesitate when I might feel indifference to my family is a feeling of responsibility.

It occurs to me that satisfying my moral responsibility might be a terminal value for me. If I have none, if it really is the case that I have no moral responsibility to exist and love, then I'd happily not exist and not love.

Orthonormal, yourself, and Eliezer all seem to argue that value nihilism just doesn't happen. Others concede that nihilism does happen, but that this doesn't bother them or that they'd rather sit with an uncomfortable truth than be deluded. So perhaps it's the case that people are intrinsically motivated in different ways, or that people have different thresholds for how much lack of meaning they can tolerate. Or other 'solutions' come to mind.

Replies from: Eliezer_Yudkowsky, orthonormal, AdeleneDawner, Vladimir_Nesov, Eliezer_Yudkowsky, Furcas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T03:43:13.692Z · LW(p) · GW(p)

It seems to me that you conflate the lack of an outside moral authority with a lack of meaning to morality. Consider "fairness". Suppose 3 people with equal intrinsic needs (e.g. equal caloric reserves and need for food) put in an equal amount of work on trapping a deer, with no history of past interaction between any of them. Fairness would call for each of them to receive an equal share of the deer. A 90/9/1 split is unfair. It is unfair even if none of them realize it is unfair; if you had a whole society where women got 10% of the wages of men, it wouldn't suddenly become massively unfair at the first instant someone pointed it out. It is just that an equal split is the state of affairs we describe by the word "fair", and to describe 90/9/1 you'd need some other word like "foograh".

In the same sense, something can be just as fair, or unfair, without there being any God, nor yet somehow "the laws of physics", to state with controlling and final authority that it is fair.

Actually, even God's authority can't make a 90/9/1 split "fair". A God could enforce the split, but not make it fair.

So who needs an authority to tell us what we should do, either? God couldn't make murder right - so who needs God to make it wrong?
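To put the same point in concrete terms, here is a toy sketch of my own (Python, with made-up numbers, assuming only the equal-work, equal-need setup of the deer example): the verdict is computed from the split and the work alone, and there is simply no input for anyone's decree.

```python
# Toy "fairness" check for splitting a deer among hunters; illustrative only.
def is_fair(shares, work, tolerance=1e-9):
    """With equal need, a fair split gives shares proportional to work done."""
    total = sum(shares)
    expected = [total * w / sum(work) for w in work]
    return all(abs(s - e) <= tolerance for s, e in zip(shares, expected))

print(is_fair([1/3, 1/3, 1/3], work=[1, 1, 1]))    # True: equal work, equal shares
print(is_fair([0.90, 0.09, 0.01], work=[1, 1, 1]))  # False: the 90/9/1 split
# Note the absence of any `authority_says_fair` argument: the verdict is a fact
# about the split itself, whether or not anyone decrees it.
```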

Replies from: byrnema
comment by byrnema · 2010-01-17T06:45:47.243Z · LW(p) · GW(p)

Thank you for your effort to understand. However, I don't believe this is in the right direction. I'm afraid I misunderstood or misrepresented my feelings about moral responsibility.

For thoroughness, I'll try to explain it better here, but I don't think it's such a useful clue after all. I hear physical materialists explaining that they still feel value outside an objective value framework naturally/spontaneously. I was reporting that I didn't -- for some set of values, the values just seemed to fade away in the absence of an objective value framework. However, I admit that some values remained. The first value to obviously remain was a sense of moral responsibility, and it was that value that kept me faithful to the others. So perhaps it is a so-called 'terminal value'; in any case, it was the limit where some part of myself said "if this is Truth, then I don't value Truth".

Replies from: CassandraR, orthonormal
comment by CassandraR · 2010-01-17T19:42:15.038Z · LW(p) · GW(p)

The reason I feel value outside of an objective value framework is that I taught myself over weeks and months to do so. If a theist had the rug pulled out from under them morally speaking, then they might well be completely bewildered by how to act and how to think. I am sure this would cause great confusion and pain. The process of moving from a theist worldview to a materialistic worldview is not some flipped switch; a person has to teach themselves new emotional and procedural reactions to common everyday problems. The manner in which to do this is to start from the truth as best you can approximate it and train yourself to have emotional reactions that are in accordance with the truth. There is no easy way to do this, but I personally find it much easier to have a happy life once I trained myself to feel emotions in relation to facts rather than fictions.

comment by orthonormal · 2010-01-17T07:08:20.459Z · LW(p) · GW(p)

Upvoted for honesty and clarity.

some part of myself said "if this is Truth, then I don't value Truth".

I'm not sure there's much more to discuss with you on the topic of theism, then; the object-level arguments are irrelevant to whether you believe. (There are plenty of other exciting topics around here, of course.) All I can do is attempt to convince you that atheism really isn't what it feels like from your perspective.

EDIT: There was another paragraph here before I thought better of it.

Replies from: wedrifid, randallsquared
comment by wedrifid · 2010-01-18T11:56:07.554Z · LW(p) · GW(p)

All I can do is attempt to convince you that atheism really isn't what it feels like from your perspective.

Perhaps we could say "needn't be what it feels like from your perspective". It clearly is that feeling for some. I wonder to what extent their difficulty is, in fact, an external-tribal-belief-shaped hole in their neurological makeup.

Replies from: orthonormal
comment by orthonormal · 2010-01-18T19:00:12.267Z · LW(p) · GW(p)

Agreed. I should remember I'm not neurotypical, in several ways.

comment by randallsquared · 2010-01-18T05:44:44.605Z · LW(p) · GW(p)

All I can do is attempt to convince [byrnema] that atheism really isn't what it feels like from your perspective.

I'm not sure that's possible. As someone who's been an atheist for at least 30 years, I'd say atheism does feel like that, unless there's some other external source of morality to lean on.

From the back and forth on this thread, I'm now wondering if there's a major divide between those who mostly care deeply without needing a reason to care, and those who mostly don't.

Replies from: AdeleneDawner, ciphergoth
comment by AdeleneDawner · 2010-01-18T05:51:00.314Z · LW(p) · GW(p)

I'd thought of that myself a few days ago. It seems like something that we'd experience selection bias against encountering here.

Replies from: RobinZ
comment by RobinZ · 2010-01-18T05:53:32.331Z · LW(p) · GW(p)

I would expect to see nihilist atheists overrepresented here - one of the principles of rationality is believing the truth even when your emotions oppose it.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-18T06:01:07.424Z · LW(p) · GW(p)

I'm not surprised to encounter people here who find nihilism comfortable, or at least tolerable, for that reason. People who find it disabling - who can't care without believing that there's an external reason to care - not so much.

comment by Paul Crowley (ciphergoth) · 2010-01-18T08:47:17.790Z · LW(p) · GW(p)

I don't feel that way at all, personally - I'm very happy to value what I value without any kind of cosmic backing.

comment by orthonormal · 2010-01-17T05:56:12.788Z · LW(p) · GW(p)

Orthonormal, yourself, Eliezer, all seem to argue that value nihilism just doesn't happen.

That's a rather poor interpretation. I pointed out from my own experience that nihilism is not a necessary consequence of leaving religion. I swear to you that when I was religious I agonized over my fear of nihilism, that I loved Dostoyevsky and dreaded Nietzsche, that I poured out my soul in chapels and confessionals time and time again. I had a fierce conscience then, and I still have one now. I feel the same emotional and moral passions as before; I just recognize them as a part of me rather than a message from the heart of the cosmos— I don't need permission from the universe to care about others!

I don't deny that others have adopted positions of moral nihilism when leaving a faith; I know several of them from my philosophy classes. But this is not necessary, and not rational; therefore it is not a good instrumental excuse to maintain theism.

Now, I cannot tell you what you actually feel; but consider two possibilities in addition to your own:

  • What you experience may be an expectation of your values vanishing rather than an actual attenuation of them. This expectation can be mistaken!

  • A temporary change in mood can also affect the strength of values, and I did go through a few months of mild depression when I apostasized. But it passed, and I have since felt even better than I had before it.

comment by AdeleneDawner · 2010-01-17T05:15:28.845Z · LW(p) · GW(p)

This might turn out to be vacuous, but it seems useful to me. Here goes nothing:

Do you have a favorite color? Or a favorite number, or word, or shirt, or other arbitrary thing? (Not something that's a favorite because it reminds you of something else, or something that you like because it's useful; something that you like just because you like it.)

Assuming you do, what objective value does it have over other similar things? None, right? Saying that purple is a better color than orange, or three is a better number than five (to use my own favorites) simply doesn't make sense.

But, assuming you answered 'yes' to the first question, you still like the thing; otherwise it wouldn't be a favorite. It makes sense to describe such things as fun or beautiful, and to use the word 'happiness' to describe the emotion they evoke. And you can have favorites among any type of things, including moral systems. Rationality doesn't mean giving those up - they're not irrational, they're arational. (It does mean being careful to make sure they don't conflict with each other or with reality, though - thinking that purple is somehow 'really' better than orange would be irrational.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-17T08:42:08.125Z · LW(p) · GW(p)

Reminds me of Wittgenstein's "Ethics and aesthetics are one and the same". Not literally true I don't think, but I found it enlightening all the same.

comment by Vladimir_Nesov · 2010-01-17T11:28:12.101Z · LW(p) · GW(p)

And none of these are terminal values for me. Existence, happiness, fun and beauty are pretty much completely meaningless to me in of themselves.

You are not really entitled to your own stated values. You can't just assert that beauty is meaningless to you and through this act make it so. If beauty is important to you, being absolutely convinced that it's not won't make it unimportant. You are simply wrong and confused about your values, at which point getting a better conscious understanding of what "morality" is becomes even more important than if you were naive and relied on natural intuition alone.

Replies from: byrnema
comment by byrnema · 2010-01-17T17:54:08.802Z · LW(p) · GW(p)

I'm not sure to what extent terminal values can be chosen or not, but it seems to me that (the following is slightly different from what you were describing) if you become absolutely convinced that your values aren't important, then it would be difficult to continue thinking your values are important. Maybe the fact that I can't be convinced of the unimportance of my values explains why I can't really be convinced there's no Framework of Objective Value, since my brain keeps outputting that this would make my values unimportant. But maybe, by the end of this thread, my brain will stop outputting that. I'm willing to do the necessary mental work.

By the way, Furcas seemed to understand the negation of value I'm experiencing via an analogy of solipsism.

Replies from: orthonormal
comment by orthonormal · 2010-01-17T18:17:42.352Z · LW(p) · GW(p)

One last time, importance ≠ universality.

If we had been Babyeaters, we would think that eating babies is the right-B thing to do. This doesn't in any way imply we should be enthusiastic or even blasé about baby-eating, because we value the right thing, not the right-B thing that expresses the Babyeaters' morality!

I understand that you can't imagine a value being important without it being completely objective and universal. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!

Imagine first that there is an objective value that you just don't care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

Replies from: Kutta, byrnema
comment by Kutta · 2010-01-17T20:58:10.002Z · LW(p) · GW(p)

Imagine first that there is an objective value that you just don't care about. Easy, right? Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

This is the best (very) short guide to naturalistic metaethics I've read so far.

comment by byrnema · 2010-01-17T19:17:05.760Z · LW(p) · GW(p)

This is very helpful. The only thing I would clarify is that the lesson I need to learn is that importance ≠ objectivity. (I'm not at all concerned about universality.)

I understand that you can't imagine a value being important without it being completely objective [...]. But you can start by admitting that the concept of important-to-you value is at least distinct from the concept of an objective or universal value!

I'm not sure. With a squirrel in the universe, I would have thought the universe was better with more nuts than with fewer. I can understand there being no objective value, but I can't understand objective value being causally or meaningfully distinct from the subjective value.

Next, imagine that there is something you care about, deeply, that just isn't an objective value, but which your world would be awful/bland/horrifying without. Now give yourself permission to care about that thing anyway.

Hm. I have no problem with 'permission'. I just find that I don't care about caring about it. If it's not actually horrible, then let the universe fill up with it! My impression is that intellectually (not viscerally, of course) I fail to weight my subjective view of things. If some mathematical proof really convinced me that something I thought subjectively horrible was objectively good, I think I would start liking it.

(The only issue, that I mentioned before, is that a sense of moral responsibility would prevent me from being convinced by a mathematical proof to suddenly acquire beliefs that would cause me to do something I've already learned is immoral. I would have to consider the probability that I'm insane or hallucinating the proof, etc.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T04:03:34.186Z · LW(p) · GW(p)

I can barely imagine value nihilism, but not a value nihilism from which God or physics could possibly rescue you. If you think that your value nihilism has something to do with God, then I'm going to rate it as much more likely that you suffer from basic confusion, than that the absence of God is actually responsible for the collapse of your values, whereas a real God could have saved them and let you live happily ever after just by ordering you to have fun.

Replies from: Peter_de_Blanc, byrnema
comment by Peter_de_Blanc · 2010-01-17T04:24:02.272Z · LW(p) · GW(p)

I think the basic problem is that evolution re-used some of the same machinery to implement both beliefs and values. Our beliefs reflect features of the external world, so people expect to find similar external features corresponding to their values.

Actually searching for these features will fail to produce any results, which would be very dismaying as long as the beliefs-values confusion remains.

The God meme acts as a curiosity stopper; it says that these external features really do exist, but you're too stupid to understand all the details, so don't bother thinking about it.

Replies from: byrnema
comment by byrnema · 2010-01-17T05:28:13.181Z · LW(p) · GW(p)

Exactly! I think this is exactly the sort of 'solution' that I hoped physical materialism could propose.

I'd have to think about whether the source of the problem is what Peter has guessed (whether it is this particular confusion), but from the inside it exactly feels like a hard-wiring problem (given by evolution) that I can't reconcile.

comment by byrnema · 2010-01-17T07:19:32.561Z · LW(p) · GW(p)

As I wrote above in this thread, I agree that there's not any clear way that the existence of God could solve this problem.

[Note: I took out several big chunks about how religions address this problem, but I understand people here don't want to hear about religion discussed in a positive light. But the relevant bit:]

Peter de Blanc wrote:

The God meme acts as a curiosity stopper; it says that these external features really do exist, but you're too stupid to understand all the details, so don't bother thinking about it.

And this seems exactly right. Without the God meme telling me that it all works out somehow -- for example, somehow the subjective/objective value problem works out -- I'm left in a confused state.

comment by Furcas · 2010-01-17T04:16:16.637Z · LW(p) · GW(p)

What if the existence of a Framework of Objective Value wasn't the only thing you were wrong about? What if you are also wrong in your belief that you need this Framework in order to care about the things that used to be meaningful to you? What if this was simply one of the many things that your old religious beliefs had fooled you about?

It is possible to be mistaken about one's self, just as we can be mistaken about the rest of reality. I know it feels like you need a Framework, but this feeling is merely evidence, not mathematical proof. And considering the number of ex-believers who used to feel as you do and who now live a meaningful life, you have to admit that your feeling isn't very strong evidence. Ask yourself how you know what you think you know.

Replies from: byrnema
comment by byrnema · 2010-01-17T08:06:28.495Z · LW(p) · GW(p)

I would be quite happy to be wrong. I can't think of a single reason not to wish to be wrong. (Not even the sting of a drop in status; in my mind, it would improve my status to have presented a problem that actually has a solution instead of one that just leads in circles.)

Ask yourself how you know what you think you know.

Through the experiment of assimilating the ideas of Less Wrong over the course of a year, I found my worldview changing and becoming more and more bereft of meaning as it seemed more and more logical that value is subjective. This means that no state of the universe is objectively any "better" than any other state, there's no coherent notion of progress, etc. And I can actually feel that pretty well; right on the edge of my consciousness, an awareness that nothing matters, I'm just a program that's running in some physical reality. I feel no loyalty or identity with this program; it just exists. And I find it hard to believe I ought to go there; some intuition tells me this isn't what I'm supposed to be learning. I've lost my way somehow.

This reminds me of the labyrinth metaphor. Where the hell am I? Why am I the only one to find this particular dead end? Should I really listen to my friends on the walkie-talkie saying, 'keep going, it's not really a deep bottomless chasm!', or shouldn't I try and describe it better to make certain you know where I'm at?

Replies from: Jordan, randallsquared, AdeleneDawner, Vladimir_Nesov
comment by Jordan · 2010-01-18T06:26:44.156Z · LW(p) · GW(p)

When I first gave up the idea of objective morality I also plummeted into a weird sort of ambivalence. It lasted for a few years. Finally, I confronted the question of why I even bothered continuing to exist. I decided I wanted to live. I then decided I needed an arbitrary guiding principle in life to help me maintain that desire. I decided I wanted to live as interesting a life as possible. That was my only goal in life, and it was only there to keep me wanting to live.

I pursued that goal for a few years, rather half-heartedly. It was enough to keep me going, but not much more. Then, one day, rather suddenly, I fell completely in love. Really, blubberingly, stupidly in love. I was completely consumed and couldn't have cared less if it was objectively meaningless. A week later, I found out the girl was also in love with me, and I promptly stopped loving her.

Meditating on the whole thing afterwards, I realized I hadn't been in love, but had experienced some other, probably quite disgusting emotion. I had been pulled up from the abyss of subjectivity by the worst kind of garbage! It felt like the punchline of a zen koan. I realized that wallowing in ambivalence was just as worthless as embracing the stupidest purpose, and became ambivalent to the lack of objectivity itself.

After that I began rediscovering and embracing my natural desires. A few years of that and I finally settled down into what I consider a healthy person. But, to this day, I still occasionally feel the fuzzy awareness at the edge of my consciousness that everything is meaningless. And, when I do, I just don't care. So what if everything I love is objectively worthless? Meaninglessness itself is meaningless, so screw it!

I realize this whole story is probably bereft of any sort of rational takeaway, but I thought I'd share anyway, in the hopes of at least giving you some hope. Failing that, it was at least nice to take a break from rationality to write about something totally irrational.

comment by randallsquared · 2010-01-18T05:33:52.272Z · LW(p) · GW(p)

Why am I the only one to find this particular dead end?

You are not.

I cannot remember a time I genuinely believed in God, though I was raised Baptist by a fundamentalist believer. I don't know why I didn't succumb. When I was a teen, I didn't really bother doing anything I didn't want to do, except to avoid immediate punishment. All of my goals were basically just fantasies. Sometime during the 90s I applied Pascal's Wager to objective morality and began behaving as though it existed, since it seemed clear that a more intelligent goal-seeking being than I might well discover some objective morality which I couldn't understand the argument for, and that working toward an objective morality (which is the same thing as a universal top goal, since "morality" consists of statements about goals) requires that I attempt to maximize my ability to do so when it's explained what it is. This is basically the same role you're using God for, if I understand correctly.

Unfortunately, as my hope for a positive singularity dwindles, so does my level of caring about, basically, everything not immediately satisfying to me. I remind myself that the Wager still holds even with a very small chance, but a very small chance persistently feels like zero chance.

Anyway, I don't have a solution, but I wanted to point out that this problem is felt by at least some other people as well, and doesn't necessarily have anything to do with God, per se. I suppose some might suggest that I've merely substituted a sufficiently intelligent goal-seeker for "God"...

comment by AdeleneDawner · 2010-01-17T10:09:59.291Z · LW(p) · GW(p)

If you're still concerned about that after all the discussion about it, it might be a good idea to get some more one-on-one help. Off the top of my head I'd suggest finding a reputable Buddhist monk/master/whatever to work with: I know that meditation sometimes evokes the kind of problem you're afraid of encountering, so they should have some way of dealing with that.

comment by Vladimir_Nesov · 2010-01-17T12:01:03.765Z · LW(p) · GW(p)

This means that no state of the universe is objectively any "better" than any other state, there's no coherent notion of progress, etc.

This is wrong. Some states are really objectively better than other states. The trick is, "better" originates from your own preference, not God-given decree. You care about getting the world to be objectively better, while a pebble-sorter cares about getting the world to be objectively more prime.

Replies from: wedrifid, randallsquared, ciphergoth
comment by wedrifid · 2010-01-17T12:19:35.772Z · LW(p) · GW(p)

This is wrong.

Rather, it is using a different definition of 'better' (or, you could argue, 'objectively') than you are. Byrnema's usage may not be sophisticated or the most useful way to carve reality, but it is a popular usage and the intended meaning is clear.

Some states are really objectively better than other states. The trick is, "better" originates from your own preference, not God-given decree.

That is the framework I use. I agree that byrnema could benefit from an improved understanding of this kind of philosophy. Nevertheless, byrnema's statement is a straightforward use of language that is easy to understand, trivially true and entirely unhelpful.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T12:37:18.013Z · LW(p) · GW(p)

Rather, it is using a different definition of 'better' (or, you could argue, 'objectively') than you are.

It doesn't work for most of any reasonable definition, because you'd need "better" to mean "absolute indifference", which doesn't rhyme.

Replies from: wedrifid
comment by wedrifid · 2010-01-17T12:44:44.118Z · LW(p) · GW(p)

It doesn't work for most of any reasonable definition, because you'd need "better" to mean "absolute indifference"

No it wouldn't. You are confused.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T12:49:48.136Z · LW(p) · GW(p)

I'm pretty sure I can't be confused about the real-world content of this discussion, but we are having trouble communicating. As a way out, you could suggest reasonable interpretations of "better" and "objectively" that make byrnema's "no state of the universe is objectively any "better" than any other state" into a correct statement.

Replies from: wedrifid
comment by wedrifid · 2010-01-17T13:34:00.031Z · LW(p) · GW(p)

I'm pretty sure I can't be confused about the real-world content of this discussion

You appear to have a solid understanding of the deep philosophy. Your basic claims in the two ancestors are wrong and trivially so at about the level of language parsing and logic.

It doesn't work for most of any reasonable definition, because you'd need "better" to mean "absolute indifference"

Far from being required, "absolute indifference" doesn't even work as a meaning in the context: "No state of the universe is objectively any "absolute indifference" than any other state". If you fixed the grammar to make the meaning fit, it would make the statement wrong.

As a way out, you could suggest reasonable interpretations of "better" and "objectively" that make byrnema's "no state of the universe is objectively any "better" than any other state" into a correct statement.

I'm not comfortable making any precise descriptions for a popular philosophy that I think is stupid (my way of thinking about the underlying concepts more or less matches yours). But it would be something along the lines of defining "objectively better" to mean "scores high in a description or implementation of betterness outside of the universe, not dependent on me, etc". Then, if there is in fact no such 'objectively better' thingumy (God, silly half-baked philosophy of universal morality, etc.), people would say stuff like byrnema did and it wouldn't be wrong, just useless.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T13:54:37.666Z · LW(p) · GW(p)

"No state of the universe is objectively any "absolute indifference" than any other state".

"According to a position of absolute indifference, no state of the universe is preferable to any other."

I'm not comfortable making any precise descriptions for a popular philosophy that I think is stupid

That "stupid" for me got identified as "incorrect", not a way to correctly interpret the byrnema's phrase to make it right (but a reasonable guess about the way the phrase came to be).

Replies from: ciphergoth, wedrifid
comment by Paul Crowley (ciphergoth) · 2010-01-17T14:47:41.300Z · LW(p) · GW(p)

"According to a position of absolute indifference, no state of the universe is preferable to any other."

And this I think is why people find moral non-cognitivism so easy to misunderstand - people always try to parse it to understand which variety of moral realism you subscribe to.

  • "There is no final true moral standard."
  • "Ah, so you're saying that all acts are equally good according to the final true moral standard?"
  • "No, I'm saying that there is no final true moral standard."
  • "Oh, so all moral standards are equally good according to the final true moral standard?"
  • "No, I'm saying that there is no final true moral standard."
  • "Oh, so all moral judgements are equally good according to the final true moral standard?"
  • *whimper*
Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T16:13:29.566Z · LW(p) · GW(p)

I like to use the word "transcendent", as in "no transcendent morality", where the word "transcendent" is chosen to sound very impressive and important but not actually mean anything.

However, you can still be a moral cognitivist and believe that moral statements have truth-values; they just won't be transcendent truth-values. What is a "transcendent truth-value"? Shrugs.

It's not like "transcedental morality" is a way the universe could have been but wasn't.

Replies from: byrnema
comment by byrnema · 2010-01-17T17:23:32.418Z · LW(p) · GW(p)

Yes, I think that transcendent is a great adjective for this concept of morality I'm attached to. I like it because it makes it clear why I would label the attachment 'theistic' even though I have no attachment that I'm aware of to other necessarily 'religious' beliefs.

Since I do 'believe in' physical materialism, I expect science either to eventually explain that morality can transcend the subjective/objective chasm in some way or, if morality does not, to identify whether this fact about the universe is consistent or inconsistent with my particular programming. (This latter component specifically is the part I was thinking you haven't covered; I can only say this much now because the discussion has helped develop my thoughts quite a bit already.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T22:51:41.123Z · LW(p) · GW(p)

Er, did you actually read the Metaethics sequence?

comment by wedrifid · 2010-01-17T14:20:57.644Z · LW(p) · GW(p)

"According to a position of absolute indifference, no state of the universe is preferable to any other."

That is a description that you can get to using your definition of 'better' (approximately, depending on how you prefer to represent differences between human preferences). It still completely does away with the meaning Byrnema conveyed.

That "stupid" for me got identified as "incorrect", not a way to correctly interpret the byrnema's phrase to make it right (but a reasonable guess about the way the phrase came to be).

That was clear. But no matter how superior our philosophy, we are still considering straw men if we parse common language with our own idiosyncratic variant. We must choose between translating from their language, forcing them to use ours, ignoring them, or, well, being wrong a lot.

Replies from: byrnema
comment by byrnema · 2010-01-17T18:34:01.336Z · LW(p) · GW(p)

This thread between you and Vladimir_Nesov is fascinating, because you're talking about exactly what I don't understand. Allusions to my worldview being unsophisticated, not useful, stupid and incorrect fill me with the excitement of anticipation that there is a high probability of there being something to learn here.

Some comments:

(1) It appears that the whole issue of what I meant when I wrote, "no state of the universe is objectively any "better" than any other state," has been resolved. We agree that it is trivially true, useless and on some level insane to be concerned with it.

(2) Vladimir_Nesov wrote, "You care about getting the world to be objectively better [in the way you define better], while a pebble-sorter cares about getting the world to be objectively more prime [the way he defines better]."

This is a good point to launch from. Suppose it is true that there is no objective 'better', so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don't place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.

For what it's worth, I understand that this value I'm lacking -- to persist in caring about my subjective values even if they're not objectively substantiated -- is a subjective value. While I seem to lack it, you guys could very reasonably have this value in great measure.

So. Is this a value I can work on developing? Or is there some logical fallacy I'm making that would make this whole dilemma moot once I understood it?

Replies from: orthonormal, wedrifid
comment by orthonormal · 2010-01-17T20:50:48.041Z · LW(p) · GW(p)

This is connected to the Rebelling Within Nature post: have you considered that your criterion "you shouldn't care about a value if it isn't objective" is another value that is particular to you as a human? A simple Paperclip Maximizer wouldn't have the criterion "stop caring about paperclips if it turns out the goodness of paperclips isn't written into the fabric of the universe". (Nor would it have the criterion of respecting other agents' moralities, another thing which you value.)

comment by wedrifid · 2010-01-17T23:17:05.826Z · LW(p) · GW(p)

This is a good point to launch from. Suppose it is true that there is no objective 'better', so that the universe is no more improved by me changing it in ways that I think are better or by the pebble-sorter making things more prime, than either of us doing nothing or not existing. Then I find I don't place any value on whether we are subjectively improving the universe in our different ways, doing nothing or not existing. All of these things would be equivalently useless.

Have a look at Eliezer's posts on morality and perhaps 'subjectively objective'. (But also consider Adelene's suggestion on looking into whether your dissociation is the result of a neurological or psychological state that you could benefit from fixing.)

For what it's worth, I understand that this value I'm lacking -- to persist in caring about my subjective values even if they're not objectively substantiated -- is a subjective value.

Meanwhile I think you do, in fact, have this subjective measure. Not because you must for any philosophical reason but because your behaviour and descriptions indicate that you do subjectively care about your subjective value. Even though you don't think you do. To put it another way, your subjective values are objective facts about the state of the universe and your part thereof, and I believe you are wrong about them.

comment by randallsquared · 2010-01-18T05:13:06.865Z · LW(p) · GW(p)

Some states are really objectively better than other states. The trick is, "better" originates from your own preference

Is there a sense in which you did not just say "The trick is to pretend that your subjective preference is really a statement about objective values"? If by "objectively better" you don't mean "better according to a metric that doesn't depend on subjective preferences", then I think you may be talking past the problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-18T19:19:45.134Z · LW(p) · GW(p)

By "objectively better" I mean that given an ordering called "better", it is an objective fact that one state is "better" than another state. The ordering "better" is constructed from your own decision-making algorithm, you could say from subjective preference. This ordering however is not a matter of personal choice: you can't decide what it is, you only decide given what it already happens to be. It is only "subjective" in the sense that different agents have different preference.

comment by Paul Crowley (ciphergoth) · 2010-01-17T12:11:46.038Z · LW(p) · GW(p)

I can't quite follow that description. "More prime" really is an objective description of a yardstick against which you can measure the world. So is "preferred by me". But to use "objectively better" as a synonym for "preferred by byrnema" seems to me to invite confusion.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-01-17T12:33:22.763Z · LW(p) · GW(p)

But to use "objectively better" as a synonym for "preferred by byrnema" seems to me to invite confusion.

Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (link to the thread -- H/T to Wei Dai).

The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than another, and the implied conclusion along the lines of there being nothing to care about.

"Preferred by byrnema" is bad terminology because of another confusion, where she seems to assume that she knows what she really prefers. So, I could say "objectively more preferred by byrnema", but that can be misinterpreted as "objectively more the way byrnema thinks it should be", which is circular as the foundation for byrnema's own decision-making, just as with a calculator Y that when asked "2+2=?" thinks of an answer in the form "What will calculator Y answer?", and then prints out "42", which thus turns out to be a correct answer to "What will calculator Y answer?". By intermediary of the concept of "better", it's easier to distinguish what byrnema really prefers (but can't know in detail), and what she thinks she prefers, or knows of what she really prefers (or what is "better").

This comment probably does a better job at explaining the distinction, but it took a bigger set-up (and I'm not saying anything not already contained in Eliezer's metaethics sequence).


Replies from: wedrifid
comment by wedrifid · 2010-01-17T12:54:59.481Z · LW(p) · GW(p)

Yes it does, and I took your position recently when this terminological question came up, with Eliezer insisting on the same usage that I applied above and most of everyone else objecting to that as confusing (I can't think of a search term, so no link).

It was in the post for asking Eliezer Questions for his video interview.

The reason to take up this terminology is to answer the specific confusion byrnema is having: that no state of the world is objectively better than another, and the implied conclusion along the lines of there being nothing to care about.

It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people's more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T13:08:11.971Z · LW(p) · GW(p)

It was in the post for asking Eliezer Questions for his video interview.

Nope, can't find my comments on this topic there.

It is one thing to use an idiosyncratic terminology yourself but quite another to interpret other people's more standard usages according to your definitions and respond to them as such. The latter is attacking a Straw Man and the fallaciousness of the argument is compounded with the pretentiousness.

I assure you that I'm speaking in good faith. If you see a way in which I'm talking past byrnema, help me to understand.

Replies from: Wei_Dai, wedrifid
comment by Wei Dai (Wei_Dai) · 2010-01-17T13:45:49.628Z · LW(p) · GW(p)

Is this the thread you're referring to?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T13:47:56.331Z · LW(p) · GW(p)

It is, thank you.

comment by wedrifid · 2010-01-17T14:01:36.738Z · LW(p) · GW(p)

Nope, can't find my comments on this topic there.

Ahh. I was thinking of the less wrong singularity article.

I assure you that I'm speaking in good faith.

I don't doubt that. I probably should consider my words more carefully so I don't cause offence except when I mean to. Both because it would be better and because it is practical.

Assume I didn't use the word 'pretentious' and instead stated that "when people go about saying people are wrong I expect them to have a higher standard of correctness while doing so than I otherwise would." If you substituted "your thinking is insane" for "this is wrong" I probably would have upvoted.

comment by wedrifid · 2010-01-17T12:37:18.249Z · LW(p) · GW(p)

But to use "objectively better" as a synonym for "preferred by byrnema" seems to me to invite confusion.

I suspect it may be even more confusing if you pressed Vladimir into territory where his preferences did not match those of byrnema. I would then expect him to make the claim "You care about getting the world to be objectively , I care about getting the world objectively better, while a pebble sorter cares about getting the world to be objectively more prime". But that line between 'sharing' better around and inventing words like booglewhatsit is often applied inconsistently, so I cannot be sure of Vladimir's take.

comment by Paul Crowley (ciphergoth) · 2010-01-16T19:22:13.571Z · LW(p) · GW(p)

See also Doublethink

Replies from: byrnema
comment by byrnema · 2010-01-16T23:44:58.086Z · LW(p) · GW(p)

A free-floating belief system doesn't have to be double-think. In fact, the whole point of it would be to fill gaps because you would like a coherent, consistent world view even when one isn't given to you. I think that continuing to care about subjective value knowing that there is no objective value requires a disconcerting level of double-think.

comment by UnholySmoke · 2010-01-18T13:09:57.440Z · LW(p) · GW(p)

physical materialism feels bereft of meaning compared to the theistic worldview.

On what are you basing your assumption that the world should have whatever you mean by 'meaning'?

comment by MichaelVassar · 2010-01-16T19:53:35.238Z · LW(p) · GW(p)

I wouldn't even say that the rationalist view is properly seen as being a sub-set of physical materialism, just an evolutionary descendant of materialism. More like abstract ideal dynamicism.

Replies from: byrnema
comment by byrnema · 2010-01-17T17:39:48.569Z · LW(p) · GW(p)

Yes, agreed. (Whenever I've used, or use, the phrase 'physical materialism', this is what I'm referring to.)

comment by Nanani · 2010-01-18T07:18:34.217Z · LW(p) · GW(p)

The universe has the meaning we give it. Meaning is a perception of minds, not an inherent free-floating property of the universe.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-01-18T08:22:56.015Z · LW(p) · GW(p)

Meaning is a perception of minds, not an inherent free-floating property of the universe.

Off-topic, but: do you think the meaning of your own thoughts and cognitive activity is similarly observer-dependent?

comment by byrnema · 2010-01-16T18:20:21.565Z · LW(p) · GW(p)

By the way, I've read enough on Less Wrong to guess that your first reaction will be to feel some frustration that I must not have read the sequences. I've read enough of the sequences to believe that your main argument against feeling value-nihilism is that it just doesn't happen if the person lives in the moment and experiences life openly. Instead of looking for external validation of values, we look within ourselves and feel the internal validation.

Is this correct?

In which case, what about a person who feels like this kind of visceral experience is only a choice -- a moral choice?

Replies from: orthonormal
comment by orthonormal · 2010-01-16T23:29:38.989Z · LW(p) · GW(p)

When I became convinced that my belief in God was poorly founded, I worried intensely that I would become a nihilist and/or feel a perpetual vacuum of value. I've been incredibly relieved to find this fear unfounded.

On the nihilism front, I found that even in the absence of any Framework of Objective Value, I still cared about things (the welfare of friends and family, the fate of the world, the truth of my own beliefs, etc). I had thought that I'd cared about these things only insofar as they fit within the old FOV, but it turned out this fear was just a defense mechanism I employed in order to resist changing my worldview. Even with the FOV gone, I am simply the sort of being that cares about these things, and I don't need the permission of anyone or anything to do so!

I feel the same sense of purpose, passion, and meaning about these matters now that I felt when I was religious. Life is at times less comforting in other ways, but my fear of nihilism was misplaced. (Worse, it was subconsciously manufactured in order to stand in for other fears related to leaving religion, so that I wouldn't have believed someone else telling me this until I went through it myself!)

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2011-01-25T02:29:04.877Z · LW(p) · GW(p)

when I was religious

out of curiosity, what was your choice of poison?

Replies from: orthonormal
comment by orthonormal · 2011-01-25T05:03:21.404Z · LW(p) · GW(p)

Catholicism Classic, Extra Strength.

comment by MrHen · 2010-01-18T19:31:43.923Z · LW(p) · GW(p)

The list on my wiki page isn't technically exhaustive. It is my bookmark for reading through everything in order. There are a few extras there from when I thought I would try to record everything, but it turned out to be too troublesome and the chronological context interests me, so I stopped recording anything outside of my place in the full list of posts. That being said, it is still basically nothing. For some reason I felt like clarifying anyway. :)

I have hit a few of the Map/Territory posts and my current favorite of what I have read is Mysterious Answers to Mysterious Questions.

I am not reading through the posts looking for a silver bullet. I am reading, processing, and looking for Truth. I assume this is what you intended, anyway, and I dislike creating expectations when there isn't a good reason to have them.

comment by Kutta · 2010-01-17T12:11:35.881Z · LW(p) · GW(p)

Speaking from my experience, I whole-heartedly recommend going through Eliezer's old posts and also old LW top level posts in a chronological manner. They're extremely dense in cross-references and I've ended up a few times in browser tab creating sprees that eventually gave me headaches before I switched to a systematic reading plan. Also, keeping track of the comment discussions is only possible this way.

Additionally, there is some sense of unfolding and progression that arises in the strictly chronological way that would be a shame to miss. Naturally, Eliezer tried to advance from easy and independently understandable topics to difficult and heavily interrelated ones. I daresay there was even a heavy emotional charge at the point we reached the final sequences, and I'm sure I was not the only one who was bewildered and intellectually/emotionally exhausted back then. I think it's definitely worthwhile to emulate the same reading experience by sticking to chronological order.

As a side note, I'm not sure I can recommend binge OB/LW reading to younger humans and less life-hardened persons. It gave me a couple of minor and medium crises of faith and major shifts of view in a few months. Being a vivid and often lucid dreamer, I've also had a more than concerning number of dreams that starred Eliezer Yudkowsky.

comment by Kevin · 2010-01-15T21:35:51.641Z · LW(p) · GW(p)

I think your personal beliefs do matter. From my perspective, there is a big difference between "I believe that Jesus Christ lived on Earth and died for my sins and God really listens to my prayers", "I believe that some entity exists in the universe with power greater than we can imagine", "the entire universe is God" or "God is love."

Replies from: orthonormal, MrHen
comment by orthonormal · 2010-01-16T03:02:43.581Z · LW(p) · GW(p)

I'd add that how much rationality I ascribe to someone with a particular religious outlook has quite little correlation with our agreement on object-level beliefs. That is, I find a dogmatic Calvinist to be more likely to think rationally than a person with some vague hodgepodge of beliefs, although the latter will be more likely to agree with me on evolution and on social issues, because the former's beliefs are (to some extent) optimized for consistency while the latter's are generally not.

comment by MrHen · 2010-01-15T22:04:28.337Z · LW(p) · GW(p)

Are you saying that the difference between your examples is enough to include me or exclude me from LessWrong? Or is the difference in how you in particular relate to me here? What actions revolve around the differences you see in those examples?

Replies from: Kevin, Technologos
comment by Kevin · 2010-01-15T22:44:40.263Z · LW(p) · GW(p)

I don't think we would exclude someone solely on the basis of belief, as one of the goals here is to educate.

I'm not sure there is much action involved, but people might treat you differently if you admitted to being an evangelical Christian compared to being a believer because you are uncomfortable giving in to the nihilism of non-belief.

Edit: After rereading your post, yes, there are rational religious people. I have a few friends of the type, and I think the most important part of being a rational religious person is admitting that belief is irrational, steeped in feelings of culture or helplessness rather than convincing evidence. It's a slippery slope, though: if you keep thinking about it you may find it hard to hold onto your belief.

Maybe in a few days you should make a top-level post about your beliefs and we can try to examine the reasons why you believe the way you do, and try to understand why you are comfortable with conflicting beliefs. No pitchforks, I promise; you seem to know the linguistic patterns to use here so that no one will pounce on you.

Replies from: MrHen
comment by MrHen · 2010-01-16T16:08:50.118Z · LW(p) · GW(p)

It's a slippery slope, though: if you keep thinking about it you may find it hard to hold onto your belief.

If I cannot hold onto a belief it isn't worth holding on to.

Maybe in a few days you should make a top-level post about your beliefs and we can try to examine the reasons why you believe the way you do, and try to understand why you are comfortable with conflicting beliefs. No pitchforks, I promise; you seem to know the linguistic patterns to use here so that no one will pounce on you.

My current plan is to inch into the heavy topics with a few basic posts about belief, doubt, and self-delusion. But I know some of these things are discussed elsewhere because I remember someone at OB talking about the plausibility of self-delusion.

In any case, I am still working through the Sequences. I expect a lot of my questions are answered there.

comment by Technologos · 2010-01-16T00:45:27.199Z · LW(p) · GW(p)

I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).

The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson's religiosity has a great deal of effect on what he says and does, and that would cause a problem.

The fact that you wrote this post suggests you are in the former category, and I for one am glad you're here.

Replies from: orthonormal
comment by orthonormal · 2010-01-16T03:06:07.013Z · LW(p) · GW(p)

Best I can tell, it's not so much what you believe that matters here as what you say and do

I agree with the rest of your comment, but this seems very wrong to me. I'd say rather that the unity we (should) look for on LW is usually more meta-level than object-level, more about pursuing correct processes of changing belief than about holding the right conclusions. Object-level understanding, if not agreement, will usually emerge on its own if the meta-level is in good shape.

Replies from: Technologos
comment by Technologos · 2010-01-16T19:03:27.899Z · LW(p) · GW(p)

Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.

comment by Jayson_Virissimo · 2010-01-17T03:51:48.462Z · LW(p) · GW(p)

Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems?

Any rule that would prevent Robert Aumann from contributing here, or that would have prevented Kurt Gödel from contributing here, is a bad rule.

comment by Vive-ut-Vivas · 2010-01-17T06:04:47.751Z · LW(p) · GW(p)

I have a question for you: do you expect that you will still be a theist after having read all the sequences?

Replies from: MrHen
comment by MrHen · 2010-01-18T19:11:26.074Z · LW(p) · GW(p)

Yes. I don't know what is in the sequences, so it is pretty hard to accurately predict my state of beliefs on the flip-side. But as of yet I have not imagined a path that will lead me to atheism. All I have to go on are other people's testimonies and predictions. While those all point to my exiting as an atheist, there hasn't been much explanation as to why that is the prediction. I do not find this strange. I expect to find the explanations in the sequences.

comment by RobinZ · 2010-01-15T23:48:36.038Z · LW(p) · GW(p)

It occurs to me that I never responded to your explicit questions.

1. Should I have kept this to myself? What benefit does an irrational person have for confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commentator and participant.

I think it is fairly obvious that people's beliefspace can have great chasms beneath the sanity waterline while still containing valuable islands or continents of rationality. For my purposes, when asking for book recommendations and the like, I will discount yours to an extent on these grounds (or not, if they are in a specific domain where I consider religion irrelevant), but argument screens out authority, and you've proven your capacity to provide desirable (on the karma scale) commentary. Which leads to:

2. Presumably, there is a level of entry to LessWrong that is enforced. Does this level include filtering out certain beliefs and belief systems? Or is the system merit-based via karma and community voting? My karma is well above the level needed to post and my comments generally do better than worse. A merit-based system would prevent me from posting anything about religion or other irrational things, but is there a deeper problem? (More discussion below.) Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity water-line?

Were I setting the rules, the basic requirements for contributing to LessWrong would go along the lines of:

A. Don't be a jerk, as a rule (exceptions do occur).
B. Respect evidence, even if you defy it on occasion.
C. Respect valid reasoning, even if you cannot always articulate such for your positions.

If someone was (a) persistently mean/annoying/rude to others, (b) dismissive of the authority of observations, or (c) antipathetic to argument/debate/logic/etc., I would not want them wasting my time here. This no more excludes you (who has decided not to examine certain beliefs) than it does Mitchell_Porter (who has refused to accept as definitive the evidence for physicalism with respect to consciousness) or me (who has cast aspersions on the rationality of various LessWrong contributors).

These standards are far, far weaker than Eliezer Yudkowsky's sanity waterline, but I think they approximate the level where self-improvement in rationality becomes possible.

3. Being religious, I assume I am far below the desired sanity waterline that the community desires. How did I manage to scrape up over 500 karma? What have I demonstrated that would be good for other people to demonstrate? Have I acted appropriately as a religious person curious about rationality? Is there a problem with the system that lets someone like me get so far?

See my previous answer: you've demonstrated your ability to contribute in ways which the community approves of. (Examining your past comments supports this feeling.)

4. Where do I go from here? In the future, how should I act? Do I need to change my behavior as a result of this post? I am not calling out for any responses to my beliefs in particular, nor am I calling to other religious people at LessWrong to identify themselves. I am asking the community what they want me to do. Leave? Keep posting? Comment but don't post? Convert? Read everything posted and come back later?

Keep doing what you're doing - but keep "belief in God" in mind as a place where your beliefs differ from ours, and prepare to have (and actually do have) that discussion at some point or points in the future.

comment by Jack · 2010-01-15T21:59:32.834Z · LW(p) · GW(p)

I think the minimal level of rationality necessary to participate successfully here has almost nothing to do with actual beliefs and everything to do with possessing the right attitude -- a willingness to change your mind, a desire to have more accurate beliefs, updating on new evidence, etc. See the Twelve Virtues of Rationality. You seem to be more than adequate in that regard.

If being a theist is a big part of your life, if you do things that you wouldn't do if you were an atheist then I suggest that your theism might be a big enough deal that you should stop beating around the bush and just subject your views to examination and argument in an open thread or in a dedicated thread for people to discuss issues where they don't agree with the rest of the community. But that is a recommendation, not a demand or anything.

If your theism is just a comforting, abstract belief it may well be harmless and you might as well take your time.

I wonder if we make too big a deal out of atheism here. Once you are an atheist it seems obviously true, but it is one of the hardest beliefs to change when you're a theist because it is so entangled in community, identity and normative issues. Scientology (while evil) might have the right strategic approach here. They say Scientology is totally compatible with traditional religious beliefs and then later teach that traditional religions are all false (implanted by Xenu, I guess). Maybe we should be saying "Yeah, most of us are atheists. But rationality can work for anyone!". Only to later explain why there is no god. Would that be wrong?

Replies from: byrnema, MrHen
comment by byrnema · 2010-01-16T00:34:20.781Z · LW(p) · GW(p)

a dedicated thread for people to discuss issues where they don't agree with the rest of the community

There's this one.

And in the interests of organizing information and arguments on LW, there is an argument to be made for separate posts to discuss the differences that lead to really lengthy discussions -- for example, there are posts dedicated to different angles of tolerating theism and -- now -- more posts dedicated to the problem of consciousness. Long after the discussions under these posts have died down, these posts are still places where the ideas can be picked up and probed by a newcomer.

comment by MrHen · 2010-01-15T22:19:27.616Z · LW(p) · GW(p)

If being a theist is a big part of your life, if you do things that you wouldn't do if you were an atheist then I suggest that your theism might be a big enough deal that you should stop beating around the bush and just subject your views to examination and argument in an open thread or in a dedicated thread for people to discuss issues where they don't agree with the rest of the community. But that is a recommendation, not a demand or anything.

One day I expect to have this conversation here. Until then, I expect a handful of discussions leading into why I still believe in God. There is a lot of ground to cover before I address the mean questions head on. As it is now, I am completely ill equipped for such a task.

Maybe we should be saying "Yeah, most of us are atheists. But rationality can work for anyone!". Only to later explain why there is no god. Would that be wrong?

Wrong as in incorrect or wrong as in immoral? I don't think it's wrong under either usage. The only reason I would think it is incorrect is that some people no longer have what it takes to be a rationalist.

comment by JRMayne · 2010-01-16T02:35:50.954Z · LW(p) · GW(p)

Somewhat long and rambly response, perhaps in the spirit of the post:

  • I think those who quest for rationality, even if not completely, ought to be welcome here. Caveat that applies to all: I don't really deserve a vote, as a short-timer here.

  • So long as you are not trying to deliberately peddle irrationality, you're acting in good faith. That goes a fair distance.

  • Religious people are regularly rational and right on a lot of different issues. Rejecting a religious person's view solely because of religion doesn't seem like a good idea at all. (Deciding not to use time on a zombie-vampire hypothesis because it stems from religious belief rather than empirical evidence is dandy, though.) Irrational atheists are also commonplace.

Religion is an indicator of rationality, just not the be-all end-all of it.

This isn't a binary sin/no-sin situation. You can be rational in some areas and not others. Some religious people are able to be quite rational in virtually all day-to-day dealings. Some are poisoned.

We're all wanna-be rationals at some point. This post, to me, is great - the best thing I've seen written by MrHen. If someone tries to tell us that God wants us to eat less bacon, it's going to go badly, but I see no reason to reject or deter MrHen as a member.

  • Don't rationalists want to reach out to religious people at some point? If you fix other parts of the map, religion may fade or change. For some people, religion is the result of irrational thinking, rather than the cause. Right?

If someone stumbles across this and wants to make 300 arguments for God, the group doesn't have time for that. But burning at the stake seems unnecessary; a clear view that this is not rational and not up for dispute.... well, the theists who stay past that are worth something.

  • Personal story: I read the Bible (and Scarne on Gambling) twice through before I was 10, and led Bible studies in my teens; I was also a very skinny guy (with glasses. Playing Dungeons and Dragons). My girlfriend the summer between high school and college (a fellow Christian, though not as properly devout) predicted I would become an atheist and gain 50 pounds within four years. I found that prediction absurd. I was an atheist in 16 months, and gained 50 pounds in three years.

Lesson: Outside view has value. Lesson 2: Continual efforts to examine religious beliefs in good faith predictably lead to less religion. Lesson 3: You can't keep eating far more than everyone else and stay thin.

  • I vote for "Keep posting," if it wasn't clear from the above. This post is sincere, respectful, reasonable, and not trying to convert the heathens. Amen, brother.

[Edited; the auto-numbering was wonky]

Replies from: JamesAndrix, Alicorn
comment by JamesAndrix · 2010-01-16T06:45:23.343Z · LW(p) · GW(p)

My girlfriend the summer between high school and college (a fellow Christian, though not as properly devout) predicted I would become an atheist and gain 50 pounds within four years. I found that prediction absurd. I was an atheist in 16 months, and gained 50 pounds in three years.

With a prediction record like that we should prefer that she were here instead. ;-) I don't suppose she rated her confidence numerically?

comment by Alicorn · 2010-01-16T02:49:30.005Z · LW(p) · GW(p)

If someone stumbles across this and wants to make 300 arguments for God, the group doesn't have time for that.

300? It's been done... more than twice.

Replies from: Jack
comment by Jack · 2010-01-16T03:46:38.291Z · LW(p) · GW(p)

That would be a really nice tool if taken seriously. I don't think there are any valid arguments for theism with true premises but a list of 600+ strawmen isn't going to do much for anyone.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-16T21:33:57.699Z · LW(p) · GW(p)

I don't think there are any valid arguments for theism with true premises but a list of 600+ strawmen isn't going to do much for anyone.

If you can find arguments that aren't ultimately strawmen, please post them. I haven't seen them yet.

edit: By this I mean, if you can find arguments that when reduced to bullet points don't sound like those from that list, I'd like to see them.

comment by MichaelVassar · 2010-01-16T19:46:51.299Z · LW(p) · GW(p)

The point of the community is to figure out how to think, not to blame outsiders.

Unfortunately, the siren song of majoritarianism makes it critical to establish that the world is mad if one is to progress past the gates of Aumann with one's own sanity.

Discussion of the sanity waterline is largely focused on establishing epistemic non-equivalence between claimed "beliefs" in order to prevent efforts to avoid overconfidence from being self-undermining.

comment by komponisto · 2010-01-15T22:48:56.325Z · LW(p) · GW(p)

I liked this post.

Note that "Wannabe Rational" is not terribly different from "aspiring rationalist" -- the very term that most LWers use for themselves!

All of us, presumably, have some beliefs that are not accurate. That, of course, makes us irrational. But we'd like to be more rational. That desire, that aspiration, is the entrance requirement here.

It's true that there is a limit on how rational you can be and still be a theist. But that's not the same as the limit on how rational you can become in the future, given that you are now a theist (or have whatever incorrect belief X).

comment by RobinZ · 2010-01-15T20:23:35.942Z · LW(p) · GW(p)

I haven't read your entire post, but I find it very strange (and distracting, if I'm honest) that you would word it as if you believed it was irrational to believe in God. It is as if you either believe your belief is irrational (in which case, why believe it?) or you believe that it is polite to defer linguistically to the local majority position in this case. (Or something I haven't thought of - it's not like I've mathematically shown that these constitute all cases.)

I expect to find your discussion interesting - I love meta-discussions - but I'm just throwing that out there.

Edit: Ah, I see you discussed that very thing just a few paragraphs later. Interesting.

Replies from: MrHen
comment by MrHen · 2010-01-15T20:30:40.786Z · LW(p) · GW(p)

I thought about addressing this directly in the post but figured it would show up in a comment rather quickly. There are a handful of small reasons for me doing so:

  • Linguistically, LessWrong thinks of religious beliefs as irrational. I do think it is polite to defer to this usage.
  • I understand why this community considers religion to be irrational and do not feel like contending the point.
  • The post is not about religious beliefs but irrational beliefs. I use religion as an example because I used myself as an example of what I was talking about.
  • I expect it to be jarring to read, which hopefully forces the reader to realize that the post has little to do with the beliefs themselves.
Replies from: RobinZ
comment by RobinZ · 2010-01-15T20:58:43.474Z · LW(p) · GW(p)

I can buy that - although now I feel as if my second reply is rather patronizing. I apologize if the links therein are inapplicable to your situation; I would not have worded it as I did if I believed that the reason for your phrasing was as you described.

Replies from: MrHen
comment by MrHen · 2010-01-15T21:08:17.155Z · LW(p) · GW(p)

It's all good. I found something useful in the comment. :)

comment by byrnema · 2010-01-16T01:35:29.718Z · LW(p) · GW(p)

People on LW like to insist that there is a litmus test for rationality; certain things any rationalist believes or does not believe as a necessary condition of being rational. This post makes this pretty explicit (see 'slam-dunks').

However, I wish that the LW community would make more of a distinction between rational beliefs based on really good epistemological foundations (i.e., esoteric philosophical stuff) and rational beliefs that are rational because they actually matter -- because they're useful and help you win.

I'm someone who is interested in philosophy, but I still measure whether something is rational or not based on evidence of whether it works or not. Given my understanding of Less Wrong Rationality, I think my view of rationality is the more Less-Wrong-Rational view -- what defines 'rational' should be evidence based, not based on ability to spin the nicest sounding argument.

Accordingly, I think that someone who denies medical treatment due to religious beliefs (and dies) is much less rational (on a completely different level) than the theist who cannot locate any material way that his beliefs compromise his ability to achieve his terminal values.

The same argument goes for many worlds -- and it's crazy that I might now be charting even more heretical ground. Instead of believing that many worlds is a slam-dunk, I have no belief about it and think many worlds is an esoteric non-issue -- because nothing rests upon it. If I were fascinated by the question, I would read some papers about it, and might form an opinion or might continue to appreciate the aesthetic beauty of arguments on both sides.

However, as soon as the issue had a demonstrated material consequence -- for example, some experiment that could be done or some money pump that could be designed -- I trust I would get to the bottom of it. Because, for one thing, the fact that it materially mattered would mean that there would be some kind of evidence either way.

Replies from: byrnema
comment by byrnema · 2010-01-16T07:59:57.845Z · LW(p) · GW(p)

I just realized that while this is my argument for why I don't think theists are categorically irrational, it doesn't mean that any of them would belong here. Less Wrong obviously values having an accurate map not just to the extent that it facilitates "winning", but also for its own sake, because they value truth. So finally I would qualify that the argument against having theists here isn't that they're necessarily so irrational, but that theism conflicts with the value of having an accurate map. Likewise, Less Wrong might value certain epistemological foundations, such as Occam's razor (obviously) and any others that lead to choosing many worlds as the natural hypothesis.

I just forgot (while composing the message above) that 'Less Wrong' represents a combination of instrumental rationality AND VALUES. I usually think of these values as valuing human life, but these values include valuing epistemic rationality. While Less Wrong is much more tolerant of different values than of wrong beliefs in general, it's justifiably not so tolerant of different values about beliefs.

I think that my comment above should have been down-voted more than it was, since it's not representing the community norm of valuing truth for its own sake. I'm not valuing truth so much these days because I'm going through a value-nihilistic phase that, ironically, I blame on Less Wrong. But 'you guys' who care about truth might down-vote a comment arguing that there is no value to beliefs beyond their effectiveness in achieving goals.

Replies from: zero_call
comment by zero_call · 2010-01-16T20:06:27.773Z · LW(p) · GW(p)

It seems to me like you're creating an artificial dichotomy between the value of truth itself and the material relevancy of truth. To me, these ideas are rather coupled together, and I would up-vote your first post for the same reason I would up-vote your second post.

In other words, to me, "valuing truth for its own sake" includes valuing truth for its importance, testability, relevance, etc. in other areas.

comment by MarkusRamikin · 2011-06-28T04:50:08.276Z · LW(p) · GW(p)

For what it's worth:

  • I would like to see more people like the original poster here.
  • I do not think that the first order of business for a theist coming here needs to be examining their religious beliefs. Which seems to be an assumption behind a lot that was said here.

This is not an atheism conversion site, right? There needn't be pressure. Let them learn the methods of rationality and the language of Bayes, without eyeing them for whether they're ready to profess the teacher's password yet. If they're making useful contributions to the topics they post on, no less than atheist members, that screens off other considerations.

comment by clockbackward · 2010-01-27T02:52:09.580Z · LW(p) · GW(p)

Anyone who claims to be rational in all areas of their lives is speaking with irrational self-confidence. The human brain was not designed to make optimal predictions from data, or to carry out flawless deductions, or to properly update priors when new information becomes available. The human brain evolved because it helped our ancestors spread their genes in the world that existed millions of years ago, and when we encounter situations that are too different from those that we were built to survive in, our brains sometimes fail us. There are simple optical illusions, simple problems from probability, and simple logic puzzles that cause brain failings in nearly everyone.

Matters are even worse than this, though, because the logical systems in our brain and the emotional ones can (and often do) come to differing conclusions. For example, people suffering from a phobia of spiders know perfectly well that a realistic plastic spider cannot hurt them, and yet a plastic spider likely will terrify them, and may even send them running for their lives. Similarly, some theists have come to the conclusion that they logically have no reason to believe in a god, and yet the emotional part of the brain still fills them with the feeling of belief. I personally know one unusually rational person who admits to being just like this. I have even discussed with her ways in which she might try to bring her emotions in line with her reasoning.

So does one irrational belief discredit someone from being a rationalist? Not at all. We all have irrational beliefs. Perhaps a more reasonable definition of a rationalist would be someone who actively seeks out their irrationalities and attempts to eradicate them. But identifying our own irrationalities is difficult, admitting to ourselves that we have them is difficult (for rationalists, anyway), removing them is difficult, and overcoming the emotional attachment we have to them is sometimes the most difficult part of all.

Replies from: pjeby
comment by pjeby · 2010-01-27T07:57:17.245Z · LW(p) · GW(p)

But identifying our own irrationalities is difficult, admitting to ourselves that we have them is difficult (for rationalists, anyway), removing them is difficult, and overcoming the emotional attachment we have to them is sometimes the most difficult part of all.

I would reverse the ordering you have there: overcoming an emotional attachment is actually the easiest thing to do, finding the irrational belief is the hardest. Actually, finding any implicit belief/assumption is hard, whether it's rational or not. We see the picture framed by our beliefs, but not (usually) the frame itself.

Admitting and eliminating one's emotional beliefs can be done in a systematic, near-rote way, simply by asking a few questions (see e.g. Lefkoe or Katie). Identifying one's emotional beliefs, on the other hand, requires something to compare them to, and you can never be quite certain where to start. Brains don't have a "view source code" button, so one is forced to reverse-engineer the assumptions.

comment by ChristianKl · 2010-01-17T12:43:24.934Z · LW(p) · GW(p)

Throwing people out because they hold certain beliefs generally leads to groupthink effects that lead to less clear thinking.

Having someone who plays devil's advocate against the consensus is sometimes even helpful if everyone believes in the consensus. Otherwise one often finds oneself arguing against straw men that come from not fully understanding the argument that's made by the opposing side.

comment by woozle · 2010-01-17T00:27:57.121Z · LW(p) · GW(p)

Also, admittedly, I am unjustifiably attached to that area of my map. It's going to take a while to figure out why I am so attached and what I can do about it. I am not fully convinced that rationalism is the silver-bullet that will solve Life, the Universe, and Everything. I am not letting this new thing near something I hold precious. This is a selfish act and will get in the way of my learning, but that sacrifice is something I am willing to make.

I have had a theory for some time now that people confuse "God" with "good[ness]". They seem to believe that if you don't believe in God, then you don't believe people can be good -- you don't believe that goodness exists in the universe. (As just one small bit of evidence, look at all the creationist arguments which arguably boil down to "if evolution is true, then the universe is a bad place, and I can't believe that, so evolution must be wrong.")

This is, of course, utterly untrue. Goodness does exist, and you don't need to resort to spirituality if believing this is vital to your sanity, because the evidence also shows it to be true.

(This somewhat opens another can of worms, i.e. what if science did show a really depressing universe that we're not emotionally equipped to deal with? ...and, is it rational to embrace rationality just because it happens to prove what we'd like to believe about the universe anyway? Or maybe science does actually show us a pretty scary, dehumanizing universe -- it's just not an impossibly scary, dehumanizing universe and certainly not as scary and evil as the "all goodness comes from God" hypothesis would make it... in which case embracing rationality still seems the most rational thing to do -- just not the easiest.)

(...and you thought you were getting too "meta".)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-17T11:13:30.063Z · LW(p) · GW(p)

This calls for Dennett's classic "Thank goodness".

Replies from: Bo102010
comment by Bo102010 · 2010-01-17T16:25:56.537Z · LW(p) · GW(p)

One of my favorite bits of writing ever, in part because it gave me the right answer to "Bo, I'm going to pray for you." "OK, and I will sacrifice a goat for you."

Replies from: Corey_Newsome
comment by Corey_Newsome · 2010-01-17T21:51:23.359Z · LW(p) · GW(p)

"Corey, I'm going to pray for you." "OK, then I'll think for both of us." Or, "Ok, then I'm going to prey on you."

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-17T23:01:05.067Z · LW(p) · GW(p)

"I'll pray for you."

"I'll think for you."

Is that original? GF and I both think it's awesome.

Replies from: Corey_Newsome
comment by Corey_Newsome · 2010-01-17T23:30:31.512Z · LW(p) · GW(p)

No, but unfortunately I can't find out where it came from. Perhaps P. Z. Myers's collection of infidel quotes (Edit: see PeerInfinity's comment) but I can't access it right now due to Linux problems. (Incidentally, he'll be in the Bay area for a week in a few days. Info here.)

At any rate, that's a good page to read when you're feeling particularly anti-theist and want ammo.

Replies from: PeerInfinity
comment by PeerInfinity · 2010-01-18T02:49:53.626Z · LW(p) · GW(p)

The "infidel quotes" link is broken. Or at least it failed to load when I clicked it. Is this the page you meant to link to:

http://www.pharyngula.org/quotes.html

comment by wedrifid · 2010-01-16T00:38:23.935Z · LW(p) · GW(p)

Should LessWrong /kick people who fail at rationality? Who makes the decision? Who draws the sanity water-line?

If we were doing that I would have /kicked Robin Hanson a long time ago and probably Eliezer too. There are few people who do not have at least one position they stick to more than would be rational.

As far as I am concerned, you are more than welcome and seem to be a thoroughly positive influence towards rational discussion. Besides, you will probably not believe in God for much longer. People just don't tend to change that sort of fundamental part of their identity straight away unless they have some sort of traumatic experience (e.g. hazing).

Replies from: Kevin
comment by Kevin · 2010-01-16T01:36:03.684Z · LW(p) · GW(p)

http://lesswrong.com/lw/1ly/consciousness/1fjv

From this exchange, it doesn't sound like the Alexxarian was being threatened with /kick for failing rationality -- it was for failing to use the right linguistic patterns when he was consistently (and correctly) questioned by people using the right linguistic patterns. The exchange would have gone very differently if Alexxarian said something like "Solak's book sounds convincing to me" instead of "[Solak] logically proves".

MrHen's post is soaked in doubt and admissions of uncertainty, so it is nearly impossible for us to judge him.

Replies from: RobinZ, wedrifid
comment by RobinZ · 2010-01-16T01:52:36.809Z · LW(p) · GW(p)

As a participant in that thread, I saw four problems which threatened to earn him the banhammer:

  1. Topic derailing - rather than engage with the material he was ostensibly replying to, Alexxarian chose to promote his own ideas.

  2. Excessive linkage to outside material without proper summarizing.

  3. Poor understanding of comment etiquette.

  4. Vague thinking and writing.

Linguistic patterns appear in the ultimate and penultimate points, but they do not constitute the whole story.

comment by wedrifid · 2010-01-16T03:17:17.159Z · LW(p) · GW(p)

From this exchange, it doesn't sound like the Alexxarian was being threatened with /kick for failing rationality

I wasn't part of that conversation, but it sounds like Alexxarian was being threatened for reasons distinct from having a particular irrational belief. Do you think Alexxarian's convo was what MrHen was really talking about when he asked the questions here? Being unfamiliar with that potential context, I simply took them at face value as general questions of policy.

Replies from: Kevin
comment by Kevin · 2010-01-16T03:46:53.652Z · LW(p) · GW(p)

I don't think that specific conversation was being referred to, but the general pattern of Eliezer's willingness to ban people that are consistently downmodded in conversations.

My broader point was that by using the appropriate language to admit wrongness and irrationality and uncertainty, it should be permissible to be almost arbitrarily irrational here, at least until someone tells you to go read the sequences before commenting again.

Replies from: wedrifid
comment by wedrifid · 2010-01-16T13:31:24.917Z · LW(p) · GW(p)

I don't think that specific conversation was being referred to, but the general pattern of Eliezer's willingness to ban people that are consistently downmodded in conversations.

What I have always found weird was him actually threatening to delete all future comments from an account rather than actually banning the account. Freedom with message deletion actually makes me more nervous than a free hand with the /kick command. It seems more transparent.

My broader point was that by using the appropriate language to admit wrongness and irrationality and uncertainty, it should be permissible to be almost arbitrarily irrational here, at least until someone tells you to go read the sequences before commenting again.

Humility and basic courtesy do go a long way, don't they?

Replies from: Kevin
comment by Kevin · 2010-01-17T00:25:46.173Z · LW(p) · GW(p)

I think it was the meta thread where I commented that Less Wrong needs a Hacker News style dead/showdead system, which allows you to arbitrarily censor while simultaneously allaying concerns about censorship.

Humility and basic courtesy do go a long way, don't they?

Amen.

comment by Emile · 2010-01-15T23:21:48.282Z · LW(p) · GW(p)

Believing in God may be "below the sanity waterline", but there are plenty of other ways to have crazy beliefs for the wrong reasons (anything other than "because as far as I can tell, it's true") while being an atheist - about science, about themselves, about politics, about morality ...

I think the "politics is the mind killer" policy is a bit of an avowal that the people here are fully capable of irrationality, and that it's more productive to just avoid the subject.

If OB/LW had started a few centuries ago, maybe the policy would have been "religion is the mind killer", with a general recommendation not to talk about religion lest people get excited. Hopefully in a few centuries politics may not be the mind killer either.

comment by MatthewB · 2010-01-18T17:15:49.445Z · LW(p) · GW(p)

Re: Irrational Beliefs.

When I was born, I was given a baby-blanket (blue) and a teddy bear. During childhood, I developed the belief that these two entities protected me, and even clung to those beliefs (although in a much less fully believed fashion). The presence of these two items, even though they really did nothing more than sit in my closet, did help to calm me in times of stress... Yet I knew there was no possible way that a square piece of cloth, and a piece of cloth sewn into the shape of a bear (stuffed and buttoned with eyes), could affect the world. It was the ritual of taking the bear out to talk to it that I found to be more effective. During childhood, the bear even developed a personality (that is, my imagination developed a personality which I then applied to the bear) that allowed me to take action in the face of a great amount of fear (such as fighting back against bullies as a child).

I am not so much worried about people's particular irrational beliefs as I am about what they might do with those beliefs. It worries me that some people use their irrational beliefs as a basis for actions that are not in accord with reality in any way, shape, or form.

However, it might be the case that some people might be able to use an irrational belief (or a tool based upon an irrational belief) in order to find more rational actions to take. That may sound... well, wrong, but I give the case of my bear (and the blanket, which was personified as a blue-bat until it was stolen in 1999) as an example of the fact that it can happen.

Edit: From the PoV of "How can I make my beliefs more accurate?", the bear helps in doing this by giving me an internal dialog where I allow myself to more strongly challenge my assumptions than I would if it were just me (I understand the contradiction inherent in that comment) making the arguments to myself (i.e. I am splitting myself into two personas).

comment by mariz · 2010-01-16T15:24:45.294Z · LW(p) · GW(p)

"So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point."

I think the particulars of your beliefs are important, because they reveal how irrational you might be. Most people get away with God belief because it isn't immediately contradicted by experience. If you merely believe a special force permeates the universe, that's not testable and doesn't affect your life, really. However, if you believe this force is intelligent and interacts with the world (causes miracles, led the Israelites out of Egypt, etc.), these are testable and falsifiable claims (the Exodus should have left evidence of a large Semitic migration through the Sinai, but none exists, for example), and believing them in light of disconfirmatory evidence makes you more irrational.

Because of this lack of testability, it's much easier to believe in vague gods than, for example, that your next lottery ticket will be a winner.

comment by RobinZ · 2010-01-15T20:54:46.717Z · LW(p) · GW(p)

Making a general response to the post, now:

I think it is fairly obvious that the LessWrong community is not innately privileged as arbiters of rationality, or of fact. As such, it is reasonable to be cautious about obscuring large portions of your map with new ink; I don't think anyone should criticize you for moving slowly.

However, regarding your hesitance to examine some beliefs, the obvious thing to do (since your hesitance does not tell you whether or not the beliefs are correct, only examining them does) is to make the consequences of your discovery feel less severe. (It is precisely these feelings which are the only true consequences, but that does not make them nonexistent. Keats felt as if Science had murdered the gnomes, though there were no gnomes to be killed.) The author behind Ebon Musings and Daylight Atheist offers Stardust, Fragile Trappings, Extinguishing the Fear of Hell, To Those Who Doubt Their Religion, Green Fields; Greta Christina has Dancing Molecules, Comforting Thoughts About Death That Have Nothing To Do With God, The Meaning of Death: Part One, Two, Three of Many, The Not So Logical Conclusion: On the Morality of Atheists and Believers, Atheism, Bad Luck, and the Comfort of Reason; some people have found comfort in The Little Book of Atheist Spirituality by André Comte-Sponville, but I found it annoying; ... the long and the short of it is that there are a lot of places you can look to find reasons not to fear a particular answer to an empirical question. (I could do the same exercise with the negative answer to "do human beings have free will?", for example.)

I think Eliezer Yudkowsky talked about this idea, but his phrasing didn't stick with me.

Replies from: orthonormal, MrHen
comment by orthonormal · 2010-01-16T02:54:53.263Z · LW(p) · GW(p)

I think Eliezer Yudkowsky talked about this idea, but his phrasing didn't stick with me.

Leave a Line of Retreat

Replies from: RobinZ
comment by RobinZ · 2010-01-16T03:16:35.097Z · LW(p) · GW(p)

That was the one I was thinking of. Thanks for the link.

comment by MrHen · 2010-01-15T21:03:03.477Z · LW(p) · GW(p)

However, regarding your hesitance to examine some beliefs, the obvious thing to do (since your hesitance does not tell you whether or not the beliefs are correct, only examining them does) is to make the consequences of your discovery feel less severe.

Agreed. And this is very good advice. My map has beliefs about my map, and I figured those are very high priority. Any ooga-boogas about touching an area are probably in the meta-map. As of right now, most of those are open for analysis. The big, annoying one is a self-referential lockout that is likely to get tricky. Of all the discussions that would thoroughly surprise the community here, this one takes the cake. My younger self was pretty clever and saw the future too clearly for my own good.

The ulterior motive of this post is to give me a way to discuss these things without people going, "Wait, back up. You believe in God?"

Replies from: RobinZ
comment by RobinZ · 2010-01-15T21:07:44.261Z · LW(p) · GW(p)

The ulterior motive of this post is to give me a way to discuss these things without people going, "Wait, back up. You believe in God?"

Bear in mind that not everyone reads every post - assuming you continue to discuss matters related to theism without a major reversal of opinion (either on your part or ours - I, naturally, expect the latter to be unlikely), this will still happen occasionally, with increasing frequency as time progresses.

Replies from: MrHen
comment by MrHen · 2010-01-15T21:24:30.386Z · LW(p) · GW(p)

Agreed. Having this post in the archives is useful for my far-future self. I expect it to save me a lot of time.

comment by zero_call · 2010-01-16T04:31:00.331Z · LW(p) · GW(p)

I disagree with creating a hierarchy of rational levels, as you are suggesting. For one thing, how do you categorize all the beliefs of an individual? How do you rank every single belief in terms of value or usefulness? These are serious obstacles that would stand in the way of the execution of your program.

Moreover, I don't believe this categorization of perspective serves any real purpose. In fact it seems that many topics lie either "outside" of rationality, or else, they are not really served by a rational analysis. People shouldn't receive demerits when they choose to differ with some other rational viewpoint, as rationality doesn't exist as an exclusive, single-interpretation domain. This is why, for example, scientists and philosophers frequently disagree.

Lastly, you seem quite self-flagellating about your belief in God (excuse the double meaning.) This is a little bit unwholesome since you haven't offered your justification for your beliefs, so we really have no reason or method to agree with you on your self-inspection. The sorts of questions like "Will we accept you," "Should I be banned", and so on, are questions that require the barter of arguments, not proclamations. As such, I personally can't comment on any of that stuff except to say... lay off yourself a little?

Replies from: MrHen
comment by MrHen · 2010-01-16T17:06:16.633Z · LW(p) · GW(p)

I disagree with creating a hierarchy of rational levels, as you are suggesting. For one thing, how do you categorize all the beliefs of an individual? How do you rank every single belief in terms of value or usefulness? These are serious obstacles that would stand in the way of the execution of your program.

At this point, I have no better answer than feeling it out. It makes it a bit wishy-washy, but all I am really trying to do is get a rough estimate of someone's ability to improve their map.

I agree that it is unfeasible to categorize someone's every belief and then register their rationality on a scale.

Moreover, I don't believe this categorization of perspective serves any real purpose. In fact it seems that many topics lie either "outside" of rationality, or else, they are not really served by a rational analysis. People shouldn't receive demerits when they choose to differ with some other rational viewpoint, as rationality doesn't exist as an exclusive, single-interpretation domain. This is why, for example, scientists and philosophers frequently disagree.

I think I expect more from rationality than you do. I don't think any topic lies outside of the map/territory analogy. Whether we possess the ability to gather evidence from some areas of the territory is a debate worth having. Somewhere in here is the mantra, "Drawing on the map does not affect the territory." I cannot come up with some beliefs and then argue vehemently that there must be territory to go along with it.

I expect studying inaccessible territories to be much like how astronomers discovered the outer planets of the solar system: Neptune's existence was inferred from perturbations in Uranus's orbit before it was identified by observation. Even if we cannot go there ourselves, we can still figure out that something is there.

Lastly, you seem quite self-flagellating about your belief in God (excuse the double meaning.) This is a little bit unwholesome since you haven't offered your justification for your beliefs, so we really have no reason or method to agree with you on your self-inspection. The sorts of questions like "Will we accept you," "Should I be banned", and so on, are questions that require the barter of arguments, not proclamations. As such, I personally can't comment on any of that stuff except to say... lay off yourself a little?

Fair enough. I was trying to accomplish two things with this post and I tried using the flagellating to help people see the other point. It seems to have mixed success.

comment by Morendil · 2010-01-15T20:48:34.788Z · LW(p) · GW(p)

I get the feeling that most discussions about the beliefs themselves are not going to be terribly useful.

You lost me there. I can't think how this discussion can yield a useful result if held entirely at the meta level. It makes a difference what you mean by "believe in God"; your beliefs matter to the extent that they make a difference in how you behave, decide, and so on. Words like "rational" and "rationalist" can be a distraction, as can "God"; behaviour and outcomes offer better focus.

If you find yourself praying for people close to you who have been diagnosed with cancer, that's something you might want to look deeper into:

  • If you find yourself praying for them and encouraging them to stop seeing doctors, I'd say you are too far gone.
  • If you are praying, and your prayers supplement medical attention, and the act of praying is in and of itself a source of hope, then that is an issue of correspondence between reality and your emotions, not as huge a concern.
  • If you are praying out of a private sense of obligation arising from faith, and under no conception that your prayers will make a physical difference in this world, you are at least consistent, if not rational.
Replies from: MrHen
comment by MrHen · 2010-01-15T21:57:25.863Z · LW(p) · GW(p)

You lost me there. I can't think how this discussion can yield a useful result if held entirely at the meta level. It makes a difference what you mean by "believe in God"; your beliefs matter to the extent that they make a difference in how you behave, decide, and so on. Words like "rational" and "rationalist" can be a distraction, as can "God"; behaviour and outcomes offer better focus.

Because there will be more people like me. Is your response, "It depends on the individual beliefs"? How does this play into participating at LessWrong?

Replies from: Morendil
comment by Morendil · 2010-01-15T22:42:25.479Z · LW(p) · GW(p)

Not so much on the individual beliefs, as on what your thought processes are and in what ways you might want to improve them.

We do not possess isolated beliefs, but networks of beliefs. And a belief isn't, by itself, irrational; what is irrational is the process whereby beliefs are arrived at, or maintained, in the face of evidence.

I am an atheist, but I'm far from certain that none of my current beliefs and the way I maintain them would be deemed "irrational" if they came up for discussion, and judged as harshly as theism seems to be.

My intent in participating here is to improve my own thinking processes. Some of the ways this happens are a) coming across posts which describe common flaws in thinking, whereupon I can examine myself for evidence of these flaws; b) coming across posts which describe tools or techniques I can try out on my own; c) perhaps most interesting, seeing other people apply their own thinking processes to interesting issues and learning from their successes (and sometimes their failures).

The karma system strikes me as an inadequate filtering solution, but better than nothing. I'm now routinely browsing LW with the "anti-kibitzing" script in an effort to improve the quality of my own feedback in the form of up- and downvotes. My first reading of a comment from you would be looking for insights valuable in one of the three ways above. Perhaps if your comment struck me as inexplicably obscure I might check out your user name or karma.

By becoming a more active commenter and poster, I hoped to learn as others gave me feedback on whether my contributions were valuable in one of these ways. The karma system has had significant and subtle effects on the ways I choose to engage others here - for good or ill, on balance, I'm still not sure.

Replies from: MrHen
comment by MrHen · 2010-01-16T16:05:00.321Z · LW(p) · GW(p)

Not so much on the individual beliefs, as on what your thought processes are and in what ways you might want to improve them.

Is it possible to glimpse or understand someone's thought processes without delving into their particular beliefs? I assume yes. Since religion is something of a touchy subject, I offer everything else I say as evidence of my thought processes. Is that enough?

We do not possess isolated beliefs, but networks of beliefs. And a belief isn't, by itself, irrational; what is irrational is the process whereby beliefs are arrived at, or maintained, in the face of evidence.

Yeah, that makes sense. There are a few interesting discussions that can lead from this, but I am fairly certain we agree on the major issues. The basic reason I did not want to go into the particular beliefs here is because (a) I felt the meta-discussion about how people should deal with these things was important and (b) I was unsure what the reaction would be.

Replies from: Morendil
comment by Morendil · 2010-01-16T17:15:22.068Z · LW(p) · GW(p)

Since religion is something of a touchy subject, I offer everything else I say as evidence of my thought processes. Is that enough?

That's for you to say.

You chose to bring up religion - more specifically "belief in God". You could have illustrated how you think without bringing up that particular confession; you did so of your own initiative.

The major "meta" question of your post has already been addressed here: yes, you can strive to become "less wrong" whatever your starting point happens to be. All that seems to be required is a capacity for inquiry and a sense of what "wrong" is.

We couldn't function if we weren't rational to some extent. Any adult LessWronger presumably earns enough money to keep a roof over their head, food on the table and an Internet connection within easy reach; this is evidence that some at least of their actions are rational in the sense of making appropriate contributions to their projects.

This community seems to be about more than that basic ability to function in society. There is a strong sense of a more global responsibility: refining the art of human rationality enough to defend not just myself, not just my family, not just my friends, but much bigger groups. Before hanging around here I thought I had ambition, to the extent that I wanted to save my profession from itself. Well, this is a group of people attracted to the notion of at least saving humanity from itself.

In that context, no, I don't think your plea for a "waterline exception" covering your specific pet belief should be taken seriously.

I do, however, think we stand to gain by taking a closer look at religious belief, without attempting to turn it into a bogeyman or a caricature. For this to happen, it seems to me we need to examine the beliefs themselves.

Religious, in fact even spiritual belief is something of a mystery to me; what I find particularly puzzling is precisely how some very smart people I know are able to simultaneously hold those (to me) bizarre beliefs and still function very well in other intellectual domains.

The closest I've come to understanding it was while reading Michael Polanyi's Personal Knowledge; even so, and though I found useful insights in that book, my major conclusion was simply that I lacked enough "spiritual knowledge" to even understand the possibility of spiritual knowledge.

But I'm still curious about it.

Replies from: MrHen
comment by MrHen · 2010-01-16T17:40:04.480Z · LW(p) · GW(p)

Okay, that makes sense. To be clear, I am not trying to resist your questions or curiosity. The more I read the responses here the more I am internally committing to have the discussion about the particulars of my religiousness.

In that context, no, I don't think your plea for a "waterline exception" covering your specific pet belief should be taken seriously.

Fair enough. This answers the question adequately.

I do, however, think we stand to gain by taking a closer look at religious belief, without attempting to turn it into a bogeyman or a caricature. For this to happen, it seems to me we need to examine the beliefs themselves.

Religious, in fact even spiritual belief is something of a mystery to me; what I find particularly puzzling is precisely how some very smart people I know are able to simultaneously hold those (to me) bizarre beliefs and still function very well in other intellectual domains.

I completely agree. Standing on the other side, I find it puzzling that more people are puzzled.

comment by Nic_Smith · 2010-01-16T08:21:23.178Z · LW(p) · GW(p)

My apologies in advance for rambling:

To begin, the subject reminds me of a bumper sticker I occasionally see on a car around here: "Militant Agnostic: I Don't Know, And You Don't Either!"* Though there are plenty of examples of irrational religious beliefs leading to bad results, nonetheless I am not convinced that rationality is most useful when applied to (against?) religion. Just off the top of my head, applying it to politics directly (per Caplan's Myth of the Rational Voter), or even something as mundane as climate (one way or the other), would yield greater dividends, as, absent a religious war among the G-8 (unlikely), improved rationality in these areas should help preserve and improve economic growth, which in turn should fuel funding and legal friendliness toward anti-aging research (including cryonics). It's boring but true -- we can worry about religion later.

How to sort people by level of rationality has been on my mind quite a bit lately, because LW has previously discussed the power of rationalist organizations. We probably haven't identified any way to sort people into various levels of rationality, "at a glance", without dramatic and extensive testing, if only because such a tool would be immensely powerful. (IIRC, opinions differ.) The question that's vexing me is: how do you approach a suspected rationalist or semi-rationalist and try to recruit them to a random cause? I've so far thought of "have a cause that has a wide appeal," but, not having buckets of money, am somewhat at a loss as to what this might be. If rationality really is the art of winning, and if a rationalist group ought to do better at achieving its goals, it should be possible to test such ways of rationality-sorting by making groups out of suspected rationalists and having them work on some goal. "Would you like to join my guild in World of Warcraft?" doesn't seem like it's going to cut it. Going back to the original topic at hand, if you think that theism or atheism is such a great indicator, why not use it to take over the world? (Well, EY does have a group of about 80% atheists here, so maybe that's what he DID).

This brings me to my own religious beliefs. Strangely enough, I moved from Catholic to Deist after reading most of the Old Testament (I skipped Song of Solomon and some of the parts where they were going through lineage or inventory or whatever). On a meta-level, although I didn't realize it at the time, that should not be a possible effect of a substantial portion of the "word of God". OTOH, I am somewhat surprised that no one else seems to have brought up the concept of the free-floating belief.

In turn, this brings me to what's already been brought up in some other comments, but I think needs more emphasis: there are different degrees of irrationality in religion. Suppose that God exists and wants to get a message to you. Would it make sense to go through layers and layers of generations and bureaucracy, knowing, as we do, that humans are corrupt and/or biased, and subject to making mistakes in any case? The probability that the message would arrive as intended is low. And we also see conflicting claims of direct divine revelation even in the modern world. This seems, more or less, like Paine's suggestion that religious claims are "hearsay." I would very cautiously suggest that the more "applied hearsay" a religion has, the less rational it is.

*Looking it up online, the bumper sticker actually seems to be from a [political] progressive website. Leaving aside modern progressives, I just so happen to be reading The Cult of the Presidency, which depicts early 1900s Progressives as utterly insane, mostly due to religious reasons, believing that they had been appointed by God to... make people better... somehow... and cause wars, both literal and figurative. The book is published by Cato, take that as you may.

Replies from: Jack, Nic_Smith
comment by Jack · 2010-01-16T10:13:59.838Z · LW(p) · GW(p)

Just off the top of my head, applying it to politics directly (per Caplan's Myth of the Rational Voter), or even something as mundane as climate (one way or the other), would yield greater dividends, as, absent a religious war among the G-8 (unlikely), improved rationality in these areas should help preserve and improve economic growth, which in turn should fuel funding and legal friendliness toward anti-aging research (including cryonics).

Religion is the most likely motivating force for biological or nuclear terrorism in the next 25 years. It exacerbates geopolitical tensions that could easily lead to broader conflicts (India-Pakistan, Israel-Arab world). And a large part of why AIDS kills millions of Africans every year, contributing to the near impossibility of building economic infrastructure there, is religious superstition and the inane dogma of the Catholic church. For me at least those issues are somewhat more important than making sure rich people don't die of old age.

Replies from: Nic_Smith
comment by Nic_Smith · 2010-01-16T11:09:55.125Z · LW(p) · GW(p)

I suppose my overly economical view offended. Sorry.

I would prefer a world where such conflicts and suffering did not exist. However, it still does not follow that this is where the most effort should be expended. You are talking about dramatically changing the religious beliefs of billions over a few decades. I've suggested that tweaking the political beliefs of some hundreds of millions, already somewhat educated, roughly over the same time period or perhaps a bit longer, may be more doable.

Replies from: Jack
comment by Jack · 2010-01-16T18:01:36.709Z · LW(p) · GW(p)

I'm not offended by your overly economical view. If you have some argument for why anti-aging research will help people more in the long term, great, let's hear it. Nor do I doubt applying rationality to politics would have some good effects - for one, we could set policies that undermine religion and superstition elsewhere. My objection was just that cryonics and anti-aging aren't even close to being important enough to be the operating concern here. A Friendly AI, maybe. But if rich and middle-class Westerners stop dying of old age, I suspect many of the world's problems would be exacerbated and only one would be solved.

I've suggested that tweaking the political beliefs of some hundreds of millions, already somewhat educated, roughly over the same time period or perhaps a bit longer, may be more doable.

No, it is definitely more doable. It just isn't important enough to do if your only reason is financial and legal support for cryonics.

Replies from: Nic_Smith
comment by Nic_Smith · 2010-01-17T04:54:26.328Z · LW(p) · GW(p)

If you have some argument for why anti-aging research will help people more in the long term, great, let's hear it.

Ok: people have value -- human capital, if necessary -- that compounds with time: knowledge, social ties, personal organization, etc. Currently, this is greatly offset by the physical and mental decline of aging. If we could undo and prevent that decline, people would have the opportunity to be unimaginably productive. The problems that you've mentioned are difficult now, but they'd be easier after someone spent a second lifetime dedicated solely to working on them. Furthermore, the management of physical and financial capital across great periods of time is limited -- there isn't anyone that can realistically oversee 300+ year projects and make sure they turn out right. All of this is of value not only to the individual whose life is extended, but to others as well. Admittedly, cryonics doesn't fall into this story perfectly, although a political environment that's better for anti-aging in general should also be better for cryonics.

I will also confess that I don't want to die. You shouldn't either.

comment by Nic_Smith · 2010-01-16T08:34:00.890Z · LW(p) · GW(p)

In case anyone misinterprets that last sidenote as a subtle jab: the book also says that many, not all, of these people switched sides roundabout (IIRC) the 50s through 70s, so no, it isn't.

comment by Jordan · 2010-01-16T01:33:24.343Z · LW(p) · GW(p)

Rationality is a tool. There must be something more fundamental for which the tool is wielded. I liken it to a formal mathematical system, where rationality is the process of proof, and what lies beneath are the axioms of the system. Some choices of axioms are inconsistent, but there are likely many choices that aren't.

While a rational person should never arrive at and then hold an unfalsifiable belief, I don't think it's irrational if an unfalsifiable belief is a starting axiom, something fundamental to who you are. Belief in God may or may not be such an axiom for you, but either way I find it useful to try and keep in mind the purpose of my rationality when applying it to areas of my map that scream when prodded.

comment by roland · 2010-01-15T20:29:26.206Z · LW(p) · GW(p)

There is one thing I don't understand: you seem to perceive your belief in God as irrational. In my understanding you can't believe in something and at the same time believe that this belief is irrational.

If I believe "the sky is red" and I'm aware that this is irrational since I know that in reality the sky is blue there is no way for me to continue believing "the sky is red".

Or did I misunderstand you somehow?

Replies from: MrHen
comment by MrHen · 2010-01-15T20:47:10.839Z · LW(p) · GW(p)

A similar question was asked elsewhere in the comments. I made a bigger reply there. The short answer is that I am being tricky. :)

If I believe "the sky is red" and I'm aware that this is irrational since I know that in reality the sky is blue there is no way for me to continue believing "the sky is red".

This is completely unrelated to my post, but I find this example interesting for the following reasons:

  • Realizing that the sky is not red and realizing that the sky is blue are two different things. Accepting "the sky is red" as irrational is possible without discovering that the sky is blue. If I happen to find an irrational belief in my map but have nothing to put there instead, what is the correct behavior? When I need to act on that area of the map, and all I have is an irrational belief, what should be done?
  • I do not consider it impossible to continue believing in a known irrational belief. This is a much larger discussion. The short version: Not everybody wants to be rational.
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-15T22:24:07.632Z · LW(p) · GW(p)

Realizing that the sky is not red and realizing that the sky is blue are two different things. Accepting "the sky is red" as irrational is possible without discovering that the sky is blue. If I happen to find an irrational belief in my map but have nothing to put there instead, what is the correct behavior? When I need to act on that area of the map, and all I have is an irrational belief, what should be done?

The first thing is to realize that you don't have even the irrational belief, because if the map is wrong, it's worse than useless. You should regress to the prior, accept not knowing the answer, but at the same time being careful about "you either win the lottery or lose, hence equal odds" fallacy (it's "privileging the hypothesis" lingering even after you remove a given hypothesis from dominance). Incidentally, it's rarely a mistake to let go of your beliefs: if they are correct, reality will imprint them back.
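To make the "win or lose, hence equal odds" fallacy concrete, here is a minimal sketch (the 6-of-49 lottery and the numbers are purely illustrative, not something taken from the comment above): treating the two outcomes as equally likely overstates the real probability by a factor of millions.

```python
from math import comb

# Hypothetical 6-of-49 lottery, used only to illustrate the fallacy.
p_win = 1 / comb(49, 6)   # actual chance of matching all six numbers (~1 in 13,983,816)
p_naive = 1 / 2           # "either I win or I don't" treated as equal odds

print(f"actual P(win) ~ {p_win:.10f}")
print(f"naive  P(win) = {p_naive}")
print(f"overestimate  ~ {p_naive / p_win:,.0f}x")  # roughly seven million times too high
```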

I experienced this process while erasing my beliefs in folk medicine practices. At one point, I decided to forget all I knew about this stuff since I was little, and to draw the judgment anew in each case, as if I heard of it for the first time. In a few cases this led to me not believing in the suggested effects that turned out to be real, but that's only to be expected (and the reason I know of these cases is that in time, I learned whether particular pieces of tradition carried any truth, it's like education by osmosis with epistemic hygiene turned on, done from scratch all over again). On the plus side, I got rid of some annoying "rituals", like being afraid of every draught for fear of getting a cold.

Replies from: MrHen, Jayson_Virissimo
comment by MrHen · 2010-01-16T16:27:18.382Z · LW(p) · GW(p)

The first thing is to realize that you don't have even the irrational belief, because if the map is wrong, it's worse than useless. You should regress to the prior, accept not knowing the answer, but at the same time being careful about "you either win the lottery or lose, hence equal odds" fallacy (it's "privileging the hypothesis" lingering even after you remove a given hypothesis from dominance). Incidentally, it's rarely a mistake to let go of your beliefs: if they are correct, reality will imprint them back.

I think I understand you. Let me repeat what you said with my words and see if I get it:

An irrational belief is damaging. It is better to hold no belief and regress to the "I don't know" state of assigning probabilities to outcomes.

Unfortunately, "privileging the hypothesis" is pulling an "article I should have read by now" tag from my memory. Apparently I should go read an article. :)

The follow-up question I have is: how do I act when I cannot find an alternative hypothesis? In other words, I have an irrational belief and I have to use that area of the map. "Do nothing" is an action. Should I just insert that and hope for the best? What if "Do nothing" is the rational belief? Act randomly? And so on and so forth.

My point here can be boiled down to this: Beliefs fuel actions. Actions are expected from reality. Better beliefs produce better actions. What happens when I have no belief or only irrational beliefs when deciding how to act? Assume there is no time for further introspection or fact-finding.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-16T16:39:49.600Z · LW(p) · GW(p)

The second-best guess after the disabled known-irrational solution is often more interesting than "do nothing". On the other hand, "do nothing", when it's the way to go, may be hard to accept for a number of reasons (it can be seen as a signal of not caring, or of excessive loyalty to your position of disbelieving). This is a dangerous pressure, one that can push you to accept a different dogma in place of the discarded one just to fill the gap.

Replies from: MrHen
comment by MrHen · 2010-01-16T16:56:06.408Z · LW(p) · GW(p)

Soft reminder: This is just theory-chat and it has nothing to do with me or my post.

Part of the problem is that some maps don't keep track of second-best solutions. Namely, a common irrational behavior is to chuck everything that doesn't match or adhere to principal dogma. The problem is not so much that there needs to be a way to choose a second-best. The problem is what happens when there is no second best.

This is a dangerous pressure, one that can push you to accept a different dogma in place of the discarded one just to fill the gap.

I am unable to parse, "This". What are you referring to? As in, what is a dangerous pressure?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-16T17:16:13.943Z · LW(p) · GW(p)

The pressure to "do something", in particular to accept a system of beliefs that promotes a particular "something", when for all you know you should just "stay there".

Replies from: MrHen
comment by MrHen · 2010-01-16T17:35:17.916Z · LW(p) · GW(p)

Ah, gotcha. That makes sense.

comment by Jayson_Virissimo · 2010-01-16T00:43:46.346Z · LW(p) · GW(p)

The first thing is to realize that you don't have even the irrational belief, because if the map is wrong, it's worse than useless.

This isn't always the case. It is fairly easy to find anecdotes of explorers (and especially those in war) that have gotten lost and found their way to safety using the "wrong map". Sometimes having a map (even the wrong one) can provide just the amount of hope needed to press onwards.

Replies from: Strange7
comment by Strange7 · 2010-02-22T09:02:39.456Z · LW(p) · GW(p)

There are far fewer available anecdotes of explorers who persisted in using an incorrect map, became even more lost, and were never heard from again. I suspect this is a matter of selection bias.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-02-22T19:48:40.644Z · LW(p) · GW(p)

Sounds probable.

comment by h-H · 2010-01-20T16:49:24.680Z · LW(p) · GW(p)

It'll probably save a lot of time to discuss the particulars of your belief in God instead of going meta. I.e., 'God' is a very specific entity; discussing the specifics instead of imagined abstractions is more useful.

Replies from: MrHen
comment by MrHen · 2010-01-20T17:20:19.419Z · LW(p) · GW(p)

It wouldn't accomplish the same things that I wanted to accomplish with this post. The meta was a point in its own right.

I consider this post a success as it was written. There are ways it could be improved but I do not think adding more details about my particular beliefs is one of those ways.

comment by Jonathan_Graehl · 2010-01-15T21:13:34.185Z · LW(p) · GW(p)

Suppose you could change your desires. Would you choose to abandon your desire to believe in (whatever) God? How about if it turned out to conflict with success in your other values?

Life with less-conflicting desires may be more effective or pleasurable. Maybe it's possible to have a mystical belief that retreats from actual rent-paying rational world-modeling, and only modifies your values and personal interactions. I'd still worry: am I now taking an irrational path toward satisfying myself, because of unquestioned beliefs about how I should behave?

In other words, just because it's hard to measure and know myself, I'd be wary of holding unquestioned beliefs about who I am and what works for me, especially if those were unduly impressed upon me by others in my youth. I think it's possible to hold wrong beliefs about what your desires are, and I think religion encourages it.

Replies from: MrHen
comment by MrHen · 2010-01-15T21:53:52.318Z · LW(p) · GW(p)

Suppose you could change your desires. Would you choose to abandon your desire to believe in (whatever) God? How about if it turned out to conflict with success in your other values?

I can change my desires. But to actually do so requires a desire to do so. These meta-desires are tricky buggers and one wrong step will wreak havoc with the whole system. I don't feel like outlining everything; I just want to point out that my particular case is not as simple as desiring the wrong thing.

In other words, just because it's hard to measure and know myself, I'd be wary of holding unquestioned beliefs about who I am and what works for me, especially if those were unduly impressed upon me by others in my youth. I think it's possible to hold wrong beliefs about what your desires are, and I think religion encourages it.

I, on the other hand, feel like treading carefully anytime something as dangerous as desire is used to apply sweeping changes to a belief system. Pulling the word "God" out is going to put a suspiciously God-shaped hole in my belief system. The first thing I am going to try is finding something else God-shaped and plugging the gaping hole in my suddenly crashing worldview. Instead, I find it easier and more successful to chip parts out of the map and replace them with better chips. I'm not in a hurry and I'd rather see things replaced with Correct stuff instead of merely Better stuff.

I am not trying to say your advice is invalid but I know just enough of myself to see red flags popping up all over the place. It is possible my red-flagger is completely whacked, but if this is the case I should start working on my red-flagger.

comment by MrHen · 2010-01-15T20:13:43.248Z · LW(p) · GW(p)

Someone upvoted this already? It hasn't been up for more than a minute. Do people here really read and process that quickly?

EDIT: Wait, I just checked the timestamps. My internal clock apparently has issues. It looks like it was about 4 minutes.

comment by MrHen · 2010-01-15T20:10:01.912Z · LW(p) · GW(p)

holds breath

comment by rortian · 2010-01-17T02:48:07.932Z · LW(p) · GW(p)

I think that there are very Christian religious overtones in what Eliezer talks about. In his recently posted answers to questions the term saved was used more than once and many times there was reference to how the world is incredibly screwed without the singularity.

You may have ideas that are more traditionally religious, but don't think that others around here don't have thoughts that rhyme with yours.

Replies from: Nick_Tarleton, thomblake
comment by Nick_Tarleton · 2010-01-19T17:18:30.331Z · LW(p) · GW(p)

See Rapture of the Nerds, Not.

Replies from: rortian
comment by rortian · 2010-01-19T20:52:02.454Z · LW(p) · GW(p)

I'm not so sure that this post is something I need to see. I was pointing out parallels in Eliezer's language to something you would hear from an evangelist.

If there is a specific point you'd like to discuss I'd be happy to do that.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-19T21:07:51.167Z · LW(p) · GW(p)

If there is a specific point you'd like to discuss I'd be happy to do that.

You started this thread with a vague claim. If you want talk about specifics, you should quote something that Eliezer has said and explain what Christian overtones you think it has. Pointing to the word "saved" without any context is not enough.

Replies from: rortian
comment by rortian · 2010-01-20T22:02:53.613Z · LW(p) · GW(p)

I thought people would have seen the videos, and thus understood what I was talking about in context. Oh well, here are quotes:

http://www.youtube.com/watch?v=vecaDF7pnoQ#t=2m26s

That's how the world gets saved.

http://www.youtube.com/watch?v=arsI1JcRjfs#t=2m30

The thing that will kill them when they don't sign up for cryonics.

http://www.youtube.com/watch?v=lbzV5Oxkx1E#t=4m00s

But for now it can help the rest of us save the world.

(Probably some paraphrasing but the quotes are in the videos).

So other quotes were in the Vimeo video, but these mainly concern the argument that the singularity is obviously the number one priority. Also troubling to me is the idea that the world is irredeemably flawed before the emergence of FAI. Christianity very much rests on the notion that the world is hopeless without redemption from god.

So the similarity mainly lies in the notion that we need a savior, and look we have one! The you will die without cryonics is sort of icing on the cake.

To all this I would mainly argue what Jaron Lanier does here:

http://bloggingheads.tv/diavlogs/15555?in=00:46:48&out=00:51:08

While Eliezer asserts that he will cure AIDS.

There is a lot to like about this world and a lot of problems to work on. However, it is ridiculous to assert you know the number one priority for earth when you have no evidence that your project will be nearly as successful as you think it will be.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-21T05:23:52.614Z · LW(p) · GW(p)

You have only weak surface similarities, which break down if you look deeper.

In the Christian concept, people need to be saved from the moral punishment for a sin committed before they were born, and this salvation is available only by accepting the religion, and it is absolutely morally right that those who do not accept the religion are not saved, on the authority of a supremely powerful being. The salvation consists of infinite boredom rather than infinite pain after you die.

On the other hand, the concept of an FAI saving the world involves saving people from the harsh reality of an impersonal universe that does not care about us, or anything else. The salvation is for anyone it is in the FAI's power to save; the requirement of cryonics is only because even a superintelligence would likely not be able to have enough information about a person to give them new life after their brain had decayed. If it turns out that the FAI can in fact simulate physics backwards well enough to retrieve such people, that would be a good thing. People who happen to be alive when the FAI goes FOOM will not be excluded because they aren't signed up for cryonics. The salvation consists of as much fun as we can get out of the universe, instead of non-existence after a short life.

To all this I would mainly argue what Jaron Lanier does here

Lanier's argument, within the time you linked to, seemed to consist mostly of misusing the word ideology. Throughout the diavlog, he kept accusing AI researchers and Singularians of having a religion, but he never actually backed that up or even explained what he meant. Meanwhile, he seemed to be worshiping mystery, particularly with regard to consciousness, and was evasive when Eliezer questioned him on it.

Replies from: rortian
comment by rortian · 2010-01-22T00:56:03.517Z · LW(p) · GW(p)

Consider me incredibly underwhelmed to hear a recitation of Eliezer's views.

It is humorous that you simply assert that Lanier just misuses the word ideology. What I find compelling is his advice to simply do the work and see what can be done.

Eliezer is a story teller. You like his stories and apparently find them worth retelling. Far out. I expect that is what you will always get from him. Look for results elsewhere.

comment by thomblake · 2010-01-19T15:29:55.930Z · LW(p) · GW(p)

I think that there are very Christian religious overtones in what Eliezer talks about. In his recently posted answers to questions the term saved was used more than once and many times there was reference to how the world is incredibly screwed without the singularity.

I think 'Christian' is overly specific. But you wouldn't be the first to compare the Singularity to 'Rapture' or some such, or to compare some of these folks to a cult. I think it would be worth everyone's time to make sure there isn't a "hole-shaped god" type effect going on here.

ETA: But remember, If there are biased reasons to say the sun is shining, that doesn't make it dark out.

Replies from: rortian
comment by rortian · 2010-01-19T20:38:15.347Z · LW(p) · GW(p)

Thanks for the reply...the downvoting without it is sort of a bummer.

Notice I did not bring up the rapture... Eliezer does not really use similar language in that regard. Use of the word "save", though, strikes me as more Christian.

Fuckin' a on the god shaped hole stuff. I don't have much patience for people that put arguments forward like that.

comment by nawitus · 2010-01-16T08:19:45.573Z · LW(p) · GW(p)

A person is not really either a rationalist or an irrationalist. There's no general "rationality level". A person can be more or less rational depending on the subject or time, etc. Belief in God may not be that irrational depending on how you define God. And the community should not of course ban someone based on their beliefs in some particular matter. You can probably have a "rational discussion" on other subjects quite well.

Also, there's nothing inherently irrational about chasing UFOs or buying lottery tickets.

comment by gd779 · 2010-01-18T20:22:36.048Z · LW(p) · GW(p)

Belief in God is perfectly rational. For a thorough treatment, see Alvin Plantinga's "Warrant" trilogy, in particular "Warranted Christian Belief" available online in its entirety at the preceding link. It's more than a bit technical and it's not exactly light reading, but it rescues theistic belief from the charge of irrationality (or, at least, from certain charges of irrationality - certain factual charges are outside the book's scope).

comment by roland · 2010-01-15T21:06:36.565Z · LW(p) · GW(p)

I'm not going to read all of your post, but from what I understood I will say the following:

  • the fact that a lot of people here think that being religious is irrational doesn't mean that it actually is irrational.
  • there are people here, including some in the very top according to karma points who hold beliefs that are not religious but clearly irrational IMHO.
  • what one human deems rational another might deem irrational
  • I don't see any problem with you having obtained a lot of karma here. Again, a lot of people (probably all) here have irrational beliefs; why should you be different? Aumann is considered irrational with respect to his religious beliefs, but no one denies his contributions to science.
  • Religion is now considered a great example of irrationality, but only because of the current political and social context (being atheist is "in"). There are beliefs just as irrational that are simply not pointed out as vehemently.

Enough said.

Replies from: Morendil, MrHen
comment by Morendil · 2010-01-15T21:10:31.122Z · LW(p) · GW(p)

Suggestion: if your time is too valuable to read a post from beginning to end, in all likelihood it is also too valuable to comment on the portions you did read.

Replies from: roland
comment by roland · 2010-01-15T21:54:50.866Z · LW(p) · GW(p)

I'll be the judge of that.

Replies from: roland, roland
comment by roland · 2010-01-16T17:03:06.026Z · LW(p) · GW(p)

To those who downvoted me. Could you provide me with your specific reason so that I could actually learn something from this?

Replies from: Cyan, Jack
comment by Cyan · 2010-01-16T18:08:34.761Z · LW(p) · GW(p)

I downvoted, "I'm not going to read all of your post, but..." for the dismissive tone*. After reading this comment I recognized that the tone was something I was projecting onto the comment and removed my downvote. I downvoted "Fuck you whoever downvoted ..." for pointless combativeness*. After downvoting the other two I downvoted "I'll be the judge of that" reflexively but removed the downvote after actually thinking about it for five seconds.

* under the general category of "do not want to see comments like this".

comment by Jack · 2010-01-16T18:28:38.965Z · LW(p) · GW(p)

-16 seems massively disproportionate to the initial offense so I removed my down vote. Upvoted this.

Replies from: roland
comment by roland · 2010-01-16T18:46:47.475Z · LW(p) · GW(p)

What would be the initial offense?

comment by roland · 2010-01-15T22:37:13.057Z · LW(p) · GW(p)

Fuck you whoever downvoted this! It's my time and I decide how to invest it and if you can't live with that, it's your problem! Yeah, downvote me more, I don't give a fuck!!!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-15T22:58:22.229Z · LW(p) · GW(p)

Take it easy.

Replies from: roland
comment by roland · 2010-01-15T23:55:15.112Z · LW(p) · GW(p)

Thanks for your supporting comment, Vladimir. It's just that I feel I've been downvoted quite often recently (feel free to read through my recent comments and see for yourself) and honestly I don't think many of those are justified. If you think otherwise I'm all ears. I simply don't know how to express myself other than the way I did (referring to my venting off). I was thinking about writing a top-level post about this issue but I'm not sure if I want to play this game.

Replies from: mattnewport, komponisto, GuySrinivasan
comment by mattnewport · 2010-01-16T00:04:56.085Z · LW(p) · GW(p)

The recently downvoted posts I see begin with "I'm not going to read all of your post, but", "I'll be the judge of that." and "Fuck you". I don't think it's very surprising those have been downvoted. Consider whether other people are likely to feel you've added something useful to the discussion when you post and you'll probably avoid the downvotes. If you just feel the need to vent occasionally then accept the karma hit as the price for your satisfaction.

Replies from: roland
comment by roland · 2010-01-16T00:15:56.942Z · LW(p) · GW(p)

I don't think it's very surprising those have been downvoted.

Can you be more specific, except regarding the "Fuck you" comment?

My view:

"I'm not going to read all of your post, but"

I had already read a considerable part of the post, commented once, and read other comments. Based on what I had already read and understood, I decided I didn't want to read more of the post (I think this is my right), but since I had already thought about what I read and the other replies considerably, I nevertheless decided to comment based on the information I had. I could have made that very same comment without mentioning the fact that I didn't intend to read the whole thing, but I decided to be honest about the state of my knowledge, so I added that in. So from my perspective I'm being penalized for honest self-disclosure.

"I'll be the judge of that." Was given as an answer to someone suggesting how I should use my time. I still don't see where I'm wrong with that.

I'm all ears to hear your point of view.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-16T00:43:01.559Z · LW(p) · GW(p)

I downvoted all three.

In the first case, my downvote had nothing to do with whether you'd read the entire article or not. It had to do with your apparent lack of understanding of the purpose of the site. You came very close to suggesting that rationality and irrationality are subjective, and essentially indicated that you think rationality shouldn't be valued as highly as is the norm here. The comments about other posters' rationality seemed inappropriate, too: We're here to learn to be more rational, and all of us have areas of irrationality to work on.

The downvote in the second case was based primarily on tone: Morendil offered what appears to me to be a suggestion on how to avoid being downvoted in the future (I'll grant that it's not the best advice that could have been given, but it seems to me to have been given in good faith if nothing else), and you responded in a way that looks both defensive and status-oriented to me, without actually adding any useful information. Such reactions are rarely useful, especially here. If you'd stated your disagreement in a way that opened the issue up for discussion, rather than apparently trying to end the conversation by asserting dominance, I wouldn't've downvoted you. In fact, I may even have upvoted in that case; I like to see good discussions of our group norms - at least, the norms that don't define the group.

Replies from: roland
comment by roland · 2010-01-16T00:53:22.690Z · LW(p) · GW(p)

The comments about other posters' rationality seemed inappropriate, too: We're here to learn to be more rational, and all of us have areas of irrationality to work on.

If I understood you correctly the part after the colon is referencing my viewpoint right? I don't get what is wrong with it unless you want to assert that there are already lots of people here with 100% rationality who don't need to work on it anymore.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-16T01:03:12.984Z · LW(p) · GW(p)

Incorrect; I brought it up because it's part of how-things-are-here that you seemed not to realize that we realize. We acknowledge that we're not perfectly rational, or even rational to the limits of what human minds can accomplish, but we still do have expectations and standards.

Replies from: roland
comment by roland · 2010-01-16T01:13:33.107Z · LW(p) · GW(p)

Quoting myself:

I'm here to improve my rationality; I suppose you are here for the same reason.

http://lesswrong.com/lw/1lv/the_wannabe_rational/1gei

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-16T01:38:57.094Z · LW(p) · GW(p)

I don't disbelieve you, but that's not what I was trying to get at. This site is intended for people who are trying to improve their rationality and have already passed a certain threshold of rationality. The top post poses the question of where that threshold should be, not whether we should have one at all. At least three of the points in your original comment are in conflict with that intention:

  • the fact that a lot of people here think that being religious is irrational doesn't mean that it actually is irrational.
  • what one human deems rational another might deem irrational
  • Religion is now considered a great example of irrationality, but only because of the current political and social context (being atheist is "in"). There are beliefs just as irrational that are simply not pointed out as vehemently.

We do have a non-subjective definition of rationality, by the way.

Replies from: roland
comment by roland · 2010-01-16T02:05:33.147Z · LW(p) · GW(p)

Until we have AGI it will always be humans who will judge what is rational and what isn't. I don't see my points as being contradictory to the site's intention unless you want to assert that there is an absolute judge of rationality somewhere.

Passed a certain threshold of rationality? What would that threshold be? Do you think that Aumann has passed this threshold? Who will judge who passed it and who didn't? Of course we could use religion as a filter but this only tells us that religion has become a salient example of supposed irrationality.

As for the non-subjective definition of rationality I think this is highly questionable even from a bayesian perspective. I'll say it again: even bayesian superintelligences will disagree if they have different priors. So the question becomes: is there a correct prior? AFAIK this question is still open.

And as humans we certainly all have different priors which implies my point:

what one human deems rational another might deem irrational

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-16T02:50:09.115Z · LW(p) · GW(p)

I have other things to do with my evening, so I will probably not be responding to further posts on this thread until tomorrow, and I may not wind up getting back to this thread at all. If someone else would like to pick up the conversation, that's fine with me.

Until we have AGI it will always be humans who will judge what is rational and what isn't. I don't see my points as being contradictory to the site's intention unless you want to assert that there is an absolute judge of rationality somewhere.

False dichotomy. There are definitely other options between considering all rationality subjective and requiring there to be one person who has all the answers. Many topics have been discussed here and elsewhere in the rationalist community and are considered resolved; our normal method is to use those as benchmarks.

Passed a certain threshold of rationality? What would that threshold be?

Opening that question for discussion was a large part of the point of the original post; I expect it to be answered within the next few days. Also note that the question is context-specific: I'm only referring to the expected-rationality threshold here at Less Wrong.

Do you think that Aumann has passed this threshold? Who will judge who passed it and who didn't? Of course we could use religion as a filter but this only tells us that religion has become a salient example of supposed irrationality.

Religion is one of the benchmarks, yes, and there are reasons for that. (No, I don't intend to discuss them; perhaps some of the other posters will give you relevant links if you ask.) As to how the passing of those benchmarks is judged, the whole group is involved in that by way of voting and discussion, and so far that appears to be a useful method that's less subject to bias than traditional forums with formal moderation.

As for the non-subjective definition of rationality I think this is highly questionable even from a bayesian perspective. I'll say it again: even bayesian superintelligences will disagree if they have different priors. So the question becomes: is there a correct prior? AFAIK this question is still open.

We don't have a rationally-determined, uncontroversial method for determining priors, so that obviously won't be one of the benchmarks that we expect people to pass. Using Bayesian reasoning could be, though, or updating because of evidence regardless of the method.

Replies from: roland
comment by roland · 2010-01-16T04:04:22.440Z · LW(p) · GW(p)

I have other things to do with my evening, so I will probably not be responding to further posts on this thread until tomorrow, and I may not wind up getting back to this thread at all.

I don't see anything in your answer that is worthwhile for me to comment on, so yes I consider this finished.

comment by komponisto · 2010-01-16T00:02:39.077Z · LW(p) · GW(p)

I was thinking about writing a top level post about this issue

Please don't; make it an Open (or Meta) Thread comment instead.

comment by SarahNibs (GuySrinivasan) · 2010-01-16T00:32:58.445Z · LW(p) · GW(p)

Don't discount random chance plus miscommunication plus priming.

You've been downvoted quite often recently when making a post that says "I espouse a 9/11 conspiracy theory, here is my evidence and argument". I think we both know why that is, whether or not it's "justified".

On this top-level comment, regardless of what you thought you were saying by "I'm not going to read all of your post, but", it's very plausible you communicated "I'm going to pull a typical internet-poster and not read what you have to say, now you listen to me:" to a reader. A single initial downvote makes it far more likely the next reader will interpret your statement in a negative light as well. etc etc

comment by MrHen · 2010-01-15T21:44:20.782Z · LW(p) · GW(p)

the fact that a lot of people here think that being religious is irrational doesn't mean that it actually is irrational.

True. But if I assumed that I was correct there really wouldn't be much point in me being here.

there are people here, including some in the very top according to karma points who hold beliefs that are not religious but clearly irrational IMHO.

I think this is relevant to the post and the subject at hand. I am not sure I see why you brought it up, however.

what one human deems rational another might deem irrational

Hence rationality. I do not consider the rational/irrational line to be fuzzy. Our vision of it is certainly fuzzy, but I think the art of rationality depends on there being an Answer. Am I wrong in this thought?

I don't see any problem with you having obtained a lot of karma here. Again, a lot of people (probably all) here have irrational beliefs; why should you be different? Aumann is considered irrational with respect to his religious beliefs, but no one denies his contributions to science.

The reason I bring up karma is because I see flaws in the karma system. These flaws are not so much that I happened to get some, but rather that karma has a limit in its ability to predict rationality. As in, it can't. The relevant question is how LessWrong plans to deal with irrational people that slip through the karma system. The answer I feel coming from the community in the short time this post has been live is that it will be handled on a case-by-case basis. (No one has explicitly said this. I am reading between the lines.) I see no problem with that until LessWrong gets ridiculously large.

Religion is now considered a great example of irrationality, but only because of the current political and social context (being atheist is "in"). There are beliefs just as irrational that are simply not pointed out as vehemently.

Religion was chosen because I am religious. I completely agree with you.

I'm not going to read all of your post, but from what I understood I will say the following: [...] Enough said.

Okay. If you are curious, I didn't notice an answer to the primary question: "How do we deal with the irrational amongst us?" Did I miss it?

Replies from: roland
comment by roland · 2010-01-15T22:13:35.756Z · LW(p) · GW(p)

True. But if I assumed that I was correct there really wouldn't be much point in me being here.

Why not? I'm here to improve my rationality; I suppose you are here for the same reason. Would you dismiss Newtonian physics once you discovered that Newton had irrational mystical beliefs?

Hence rationality. I do not consider the rational/irrational line to be fuzzy. Our vision of it is certainly fuzzy, but I think the art of rationality depends on there being an Answer. Am I wrong in this thought?

I tend to agree. But as far as my understanding of Bayesianism goes, even two Bayesian superintelligences starting off with different priors will disagree on what is rational (they will arrive at different beliefs). The question then is: is there one correct prior, and if there is, how do we find it?

The relevant question is how LessWrong plans to deal with irrational people that slip through the karma system.

I didn't notice an answer to the primary question: "How do we deal with the irrational amongst us?" Did I miss it?

Who will be the judge of what is rational/irrational? I have seen perfectly rational posts downvoted into negative numbers while irrational ones have been upvoted. From my observation, points are really given more by the heuristic "does this comment conform to my point of view?" And when more people join LW, the average level of rationality here will approximate the average in the general population. So hoping that people will be able to vote correctly is wishful thinking.

Replies from: MrHen
comment by MrHen · 2010-01-16T16:51:16.392Z · LW(p) · GW(p)

Why not? I'm here to improve my rationality; I suppose you are here for the same reason. Would you dismiss Newtonian physics once you discovered that Newton had irrational mystical beliefs?

If I assumed I was correct, I would be going to other people who believed the way I did and learning from them. I don't assume I am correct, so I try learning from people who believe differently and seeing what sticks.

I tend to agree. But as far as my understanding of Bayesianism goes, even two Bayesian superintelligences starting off with different priors will disagree on what is rational (they will arrive at different beliefs). The question then is: is there one correct prior, and if there is, how do we find it?

I thought there was a way to deal with this? I could be wrong and I haven't read the relevant articles. I just remember people talking about it.

I am not sure I agree with this use of "rational". I would expect these two superintelligences to be able to explain their priors and see that the other has arrived at a rational conclusion given those priors.
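
To make that concrete, here is a toy sketch of the two-priors situation (the setup, names, and numbers are my own assumptions for illustration, not anything from the discussion): two reasoners share the same likelihood model, see the same evidence, and start from very different priors. Early on they disagree; once enough evidence has come in, they end up close together. What never washes out is the choice of prior itself, which is why the "one correct prior" question matters.

```python
import random

# Toy illustration (assumed setup): two reasoners with different priors on the
# same hypothesis, a shared likelihood model, and a shared evidence stream.
random.seed(0)

TRUE_RATE = 0.8          # the world: evidence comes up "heads" 80% of the time
P_HEADS_IF_TRUE = 0.8    # shared likelihood model
P_HEADS_IF_FALSE = 0.5

def update(prior, heads):
    """One Bayesian update on a single heads/tails observation."""
    p_e_true = P_HEADS_IF_TRUE if heads else 1 - P_HEADS_IF_TRUE
    p_e_false = P_HEADS_IF_FALSE if heads else 1 - P_HEADS_IF_FALSE
    return p_e_true * prior / (p_e_true * prior + p_e_false * (1 - prior))

agent_a, agent_b = 0.05, 0.95  # wildly different priors
for trial in range(1, 201):
    heads = random.random() < TRUE_RATE
    agent_a = update(agent_a, heads)
    agent_b = update(agent_b, heads)
    if trial in (1, 10, 50, 200):
        print(f"after {trial:3d} observations: {agent_a:.3f} vs {agent_b:.3f}")
```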

What I am talking about is someone who is arriving at an obviously irrational conclusion.

Who will be the judge of what is rational/irrational? I have seen perfectly rational posts downvoted into negative numbers while irrational ones have been upvoted. From my observation, points are really given more by the heuristic "does this comment conform to my point of view?" And when more people join LW, the average level of rationality here will approximate the average in the general population. So hoping that people will be able to vote correctly is wishful thinking.

Okay, let me reword the question: "How do we deal with the obviously irrational among us?" I am not talking about people near or close to the line. I am talking about people who are clearly irrational.

It sounds like you are saying, "Not with karma because people are not using it that way." I agree.