Raising the Sanity Waterline
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T04:28:49.168Z · LW · GW · Legacy · 232 comments
To paraphrase the Black Belt Bayesian: Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.
If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.
We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem. But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.
Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions? In fact—imagine that we're going to go and survey all your students five years later, and see how many of them have lost their religions compared to a control group; if you make the slightest move at fighting religion directly, you will invalidate the experiment. You may not make a single mention of religion or any religious belief in your classroom, you may not even hint at it in any obvious way. All your examples must center about real-world cases that have nothing to do with religion.
If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?
Here are some such topics I've already covered—not avoiding all mention of religion, but it could be done:
- Affective Death Spirals—plenty of non-supernaturalist examples.
- How to avoid cached thoughts and fake wisdom; the pressure of conformity.
- Evidence and Occam's Razor—the rules of probability.
- The Bottom Line / Engines of Cognition—the causal reasons why Reason works.
- Mysterious Answers to Mysterious Questions—and the whole associated sequence, like making beliefs pay rent and curiosity-stoppers—have excellent historical examples in vitalism and phlogiston.
- Non-existence of ontologically fundamental mental things—apply the Mind Projection Fallacy to probability, move on to reductionism versus holism, then brains and cognitive science.
- The many sub-arts of Crisis of Faith—though you'd better find something else to call this ultimate high master-level technique of actually updating on evidence.
- Dark Side Epistemology—teaching this with no mention of religion would be hard, but perhaps you could videotape the interrogation of some snake-oil sales agent as your real-world example.
- Fun Theory—teach as a literary theory of utopian fiction, without the direct application to theodicy.
- Joy in the Merely Real, naturalistic metaethics, etcetera etcetera etcetera and so on.
But to look at it another way—
Suppose we have a scientist who's still religious, either full-blown scriptural-religion, or in the sense of tossing around vague casual endorsements of "spirituality".
We now know this person is not applying any technical, explicit understanding of...
- ...what constitutes evidence and why;
- ...Occam's Razor;
- ...how the above two rules derive from the lawful and causal operation of minds as mapping engines, and do not switch off when you talk about tooth fairies;
- ...how to tell the difference between a real answer and a curiosity-stopper;
- ...how to rethink matters for themselves instead of just repeating things they heard;
- ...certain general trends of science over the last three thousand years;
- ...the difficult arts of actually updating on new evidence and relinquishing old beliefs;
- ...epistemology 101;
- ...self-honesty 201;
- ...etcetera etcetera etcetera and so on.
When you consider it—these are all rather basic matters of study, as such things go. A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites?
But there are Nobel laureates who haven't taken that course! Richard Smalley if you're looking for a cheap shot, or Robert Aumann if you're looking for a scary shot.
And they can't be isolated exceptions. If all of their professional compatriots had taken that course, then Smalley or Aumann would either have been corrected (as their colleagues kindly took them aside and explained the bare fundamentals) or else regarded with too much pity and concern to win a Nobel Prize. Could you—realistically speaking, regardless of fairness—win a Nobel while advocating the existence of Santa Claus?
That's what the dead canary, religion, is telling us: that the general sanity waterline is currently really ridiculously low. Even in the highest halls of science.
If we throw out that dead and rotting canary, then our mine may stink a bit less, but the sanity waterline may not rise much higher.
This is not to criticize the neo-atheist movement. The harm done by religion is a clear and present danger, or rather, a current and ongoing disaster. Fighting religion's directly harmful effects takes precedence over its use as a canary or experimental indicator. But even if Dawkins, and Dennett, and Harris, and Hitchens should somehow win utterly and absolutely to the last corner of the human sphere, the real work of rationalists will be only just beginning.
232 comments
Comments sorted by top scores.
comment by Nebu · 2009-03-12T14:52:54.640Z · LW(p) · GW(p)
I already mentioned this as a comment to another post, but it's worth repeating here: The human brain has evolved some "dedicated hardware" for accelerating certain tasks.
I already mentioned in that other post that one such piece of hardware was for recognizing faces, and that false positives generated by this hardware caused us to have a feeling of hauntedness and ghosts (because the brain receives a subconscious signal indicating the presence of a face, but consciously looking around we see no one).
Another such piece of hardware (which I only briefly alluded to in the other post) was "agency detection": trying to figure out whether a certain event occurred "naturally", or because another agent (a friend, a foe, or a neutral party?) caused it to happen. False positives from this hardware would cause us to "detect agency" where there was none, and if the event seems far beyond any human's capacity to control, and humans seem to be the most powerful "natural" beings in the universe, then the agent in question must be something supernatural, like God.
I don't have all the details worked out, but it seems plausible that agency-detection could have been naturally selected for, perhaps to be able to integrate better into a society, and to help with knowing when it is appropriate to cooperate and when it is appropriate to defect. It's a useful skill to be able to differentiate between "something good happened to me, because this person wanted something good to happen to me and made it happen. They cooperated (successfully). I should become their friend." versus "something good happened to me, despite this person wanting something bad to happen to me, but it backfired on them. They defected (unsuccessfully). I should be wary of them."
From there, bring in Anna Salamon and Steve Rayhawk's ideas about tag-along selection, and it seems like religion really may be a tag-along evolutionary attribute.
Anyway, I used to be scared of ghosts and the dark and stuff like that, but once I found out about the face-recognition hardware and its false positives (and other hardware, such as sound localization), this fear disappeared almost completely and almost instantaneously.
I was already atheist or agnostic (depending on what definitions you assign to those words) when I found out about the hardware false positives, so I can't say for sure whether, had I been religious, this would have deconverted me.
But if it worked at making me stop "believing"[1] in ghosts, then perhaps it could work at making people stop believing in God as well.
1: Here I am using the term "believe" in the sense of Yvain's post on haunted rationalists. Like everyone else, I would assert that ghosts didn't really exist, and would be willing to make a wager that they didn't exist. And yet, like everyone else, I was still scared of them.
Replies from: hhadzimu, Cameron_Taylor↑ comment by hhadzimu · 2009-03-12T16:43:32.699Z · LW(p) · GW(p)
Excellent description. Reminds me a little of Richard Dawkins in "The God Delusion," explaining how otherwise useful brain hardware 'misfires' and leads to religious belief.
You mention agency detection as one of the potential modules that misfire to bring about religious belief. I think we can generalize that a little more and say fairly conclusively that the ability to discern cause-and-effect was favored by natural selection, and given limited mental resources, it certainly favored errors where cause was perceived even if there was none, rather than the opposite. In the simplest scenario, imagine hearing a rustling in the bushes: you're better off always assuming there's a cause and checking for predators and enemies. If you wrote it off as nothing, you'd soon be removed from the gene pool.
Relatedly, there is evidence that the parts of the brain responsible for our ability to picture absent or fictional people are the same ones used in religious thought. It's understandable why these were selected for: if you come back to your cave to find it destroyed or stolen, it helps to imagine the neighboring tribe raiding it.
These two mechanisms seem to apply to religion: people see a cause behind the most mundane events, especially rare or unusual ones. They disregard the giant sample size of times such events failed to happen, but those are of course less salient. It's a quick hop to imagining an absent/hidden/fictional person - an agent - responsible for causing these events.
Undermining religion on rational grounds must thus begin with destroying the idea that there is necessarily an agent intentionally causing every effect. This should get easier: market economies are famously results of human action, but not of human design - any given result may be the effect of an agent's action, but not necessarily its intended cause. Thus, such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.
It would probably also help to remind people of sample size. I recently heard a story from a religious believer who based her faith on her grandfather's survival in the Korean War, which happened against very high odds. Someone like that must be reminded that many people did not survive similar incidents, and that there is likely no force behind it but random chance. It's much like the origin of life: even if life is possible on only 0.000000001% of planets, and arises on the same tiny fraction of those, given enough planets you will still get life somewhere.
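To make that sample-size point concrete, here is a minimal sketch of the arithmetic; the planet count is an assumed round number for illustration, not a figure from the comment:

```python
# Expected number of planets with life, given a vanishingly small
# per-planet probability but an enormous number of trials.
p_life_possible = 1e-11   # 0.000000001% of planets where life is possible
p_life_arises = 1e-11     # the same tiny fraction of those where it actually arises
n_planets = 1e24          # assumed round number of planets, for illustration only

expected_with_life = n_planets * p_life_possible * p_life_arises
print(expected_with_life)  # 100.0 -- a comfortably nonzero expectation despite the tiny odds
```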
Replies from: Eliezer_Yudkowsky, David_Gerard↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T18:37:29.473Z · LW(p) · GW(p)
Agency misfires and causal misfires can help to suggest religion. For that suggestion to get past your filters, the sanity waterline has to be low. I don't invent a new religion every time I see a face in the clouds or three dandelions lined up in a row.
Replies from: hhadzimu, CronoDAS↑ comment by hhadzimu · 2009-03-12T19:12:19.318Z · LW(p) · GW(p)
Neither do I, though I'm often tempted to find a reason for why my iPod's shuffle function "chose" a particular song at a particular time. ["Mad World" right now.]
It seems that our mental 'hardware' is very susceptible to agency and causal misfires, leaving an opening for something like religious belief. Robin explained religious activities and beliefs as important in group bonding [http://www.overcomingbias.com/2009/01/why-fiction-lies.html], but the fact that religion arose may just be a historical accident. It's likely that something would have arisen in the same place as a group bonding mechanism - perhaps religion just found the gap first. From an individual perspective, this hardly means that the sanity waterline is low. In fact, evolutionarily speaking, playing along may be the sanest thing to do.
The relevant sentence from Robin's post: "Social life is all about signaling our abilities and cooperativeness, and discerning such signals from others." As Norman points out [link below], self-deception makes our signals more credible, since we don't have to act as believers if we are believers. As a result, in the ancestral environment at least, it's "sane" to believe what others believe and not subject it to a conscious and costly rationality analysis. You'd basically expend resources to find out a truth that would make it more difficult for me to deceive others, which is costly in itself.
Of course today, the payoff from signaling group membership is far lower than ever before, which is why religious belief, and especially costly religious activities, violate sanity. Which, perhaps, is why secularism is on the rise: http://www.theatlantic.com/doc/200803/secularism
Replies from: Rings_of_Saturn↑ comment by Rings_of_Saturn · 2009-03-12T20:46:28.509Z · LW(p) · GW(p)
I think this is a good answer to Eliezer's thought experiment. Teach those budding rationalists about the human desire to conform even in the face of the prima facie ridiculousness of the prevailing beliefs.
Teach them about greens and blues; teach them about Easter Islanders building statues with their last failing stock of resources (or is that too close to teaching about religion?). Teach them how common the pattern is: when something is all around you, you are less likely to doubt its wisdom.
Human rationality (at least for now) is still built on the blocks and modules provided to us by evolution. They can lead us astray, like the "posit agency" module firing when no agent is there. But they can also be powerful correctives. A pattern-recognizing module is a dangerous thing when we create imaginary patterns... but, oh boy, when there actually is a pattern there, let that module rip!
↑ comment by CronoDAS · 2009-03-12T19:49:31.553Z · LW(p) · GW(p)
For reference:
http://tvtropes.org/pmwiki/pmwiki.php/Main/RandomNumberGod
Replies from: Vivi↑ comment by Vivi · 2012-04-13T02:27:58.176Z · LW(p) · GW(p)
If I recall correctly, that trope corresponds to the earlier points about humans being driven by evolutionary heuristics to assign agency-based causality to random probability distributions. However, the Laconic entry does summarize the fallacy rather well. Narrative examples such as tropes do tend to ease comprehension. +1 Karma
↑ comment by David_Gerard · 2011-02-21T11:51:38.276Z · LW(p) · GW(p)
This should get easier: market economies are famously results of human action, but not of human design - any given result may be the effect of an agent's action, but not necessarily its intended cause. Thus, such results are not fundamentally different from, say, storms: effects of physical causes but with no intent behind them.
The conspiracy theory of economics remains prevalent, however, and very difficult to disabuse people of. So I'm not sure this is that helpful a handle to disabuse people of religion.
↑ comment by Cameron_Taylor · 2009-03-12T18:19:03.250Z · LW(p) · GW(p)
Good speculation
comment by JulianMorrison · 2009-03-12T08:45:14.516Z · LW(p) · GW(p)
If you want people to repeat this back, write it in a test, maybe even apply it in an academic context, a four-credit undergrad course will work.
If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-03-12T09:00:08.570Z · LW(p) · GW(p)
If you want them to have it as the ground state of their mind in everyday life, you probably need to have taught them songs about it in kindergarten.
I don't know; I agree with you about the likely effects of the four-credit class, but OB has had substantial effects on me and various other people I know, despite not reaching us in kindergarten. Why does OB work as well as it does?
Replies from: Kaj_Sotala, JulianMorrison↑ comment by Kaj_Sotala · 2009-03-14T00:58:41.838Z · LW(p) · GW(p)
Also, I think it's the way OB's teachings get reinforced daily. You don't just study one course and then forget about it: if you read OB/LW regularly, you get constant tiny nudges in the right direction. There's research suggesting that frequent small events have a stronger effect on one's happiness than rare big ones, and I suspect it's the same when it comes to learning new patterns of thought. Our minds are constantly changing and adapting, so if you just make a change once, it'll be drowned out in the sea of other changes. You'll want to bring it up to the point where it becomes self-reinforcing, and that takes time.
This is the reason why I suspect Eliezer's book won't actually have as big of an effect as many may think. Most people will probably read it, think it amazing, think they absolutely have to apply it to their normal lives... then go on and worry about their bills and partners and forget about the book. The main benefit will be for those who'll actually be startled enough to go online and find out more - if they end up as regular readers of OB and LW, or find some other rationality resource, then they have hope. Otherwise, probably not.
Replies from: Eliezer_Yudkowsky, MBlume↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-14T02:19:04.072Z · LW(p) · GW(p)
This is a very good point that I'll try to keep in mind, and another solution would be to have a decent community.
↑ comment by MBlume · 2009-03-14T01:08:32.423Z · LW(p) · GW(p)
Perhaps Eliezer's book should have a note -- please read one chapter per day?
I don't know, I came in and read a little over a year's worth of Eliezer's OB posts in a couple months' exploration, and I think it had a pretty solid impact on me.
↑ comment by JulianMorrison · 2009-03-12T10:05:37.204Z · LW(p) · GW(p)
Unrepresentative sample. Nobody would start reading OB unless they were already at least a rationalist-wannabe.
Replies from: AnnaSalamon, Swimmy↑ comment by AnnaSalamon · 2009-03-12T10:21:55.515Z · LW(p) · GW(p)
I agree about the unrepresentative sample. It would be interesting to try teaching OB in a small class-sized four-credit college seminar, with a follow-up a year later, to see if the material can be presented so as to have impact on ordinary university students, or on ordinary students at a selective university. Probably worthwhile as an experiment, after we do some more basic research seeing if we can detect this "rationality" thing in a survey or something of the OB readership (so we'd know what to test for).
But even given that OB is starting with all or mostly rationalist wannabes, I'm surprised at the impact it's had on my and others' thinking, relative to what happens to rationalist wannabes who don't read OB, or who aren't members of this community.
Replies from: JulianMorrison↑ comment by JulianMorrison · 2009-03-12T11:49:33.434Z · LW(p) · GW(p)
I'd be interested in trying to drag the age range down as low as possible - could 13-year-olds handle uncut OB? I think yes.
I can only speak for myself here, but personally what changed my thinking after reading OB was understanding both how things work, and why they necessarily must be that way and no other. Now when I think about that, I realize it allowed me to completely prune many search trees and redirect a lot of wasted effort "sitting on fences".
Replies from: Marion Z.↑ comment by Marion Z. · 2022-12-02T21:43:40.447Z · LW(p) · GW(p)
Anecdotally, I started casually reading Less Wrong/Overcoming Bias when I was 12. I didn't really get it, obviously, but I got it enough to explain some basic things about biases and evidence and probability to an uninitiated person.
Replies from: sinitsa↑ comment by Swimmy · 2009-03-12T17:47:04.698Z · LW(p) · GW(p)
I started reading OB because I liked Robin Hanson as an economist. I continued reading because I liked Yudkowsky as a writer. I agree I'm still part of an unrepresentative sample (people who are willing to read and consider Yudkowsky's long ramblings), but not everyone found the site because of an interest in rationality per se.
Unfortunately, anyone taking a college course probably would be interested in rationality qua rationality. But the lessons are still valuable for those poor souls who, like I once was, are still religious despite it. The same for those who are religious fence-sitters.
Replies from: JulianMorrison↑ comment by JulianMorrison · 2009-03-12T20:00:06.822Z · LW(p) · GW(p)
You were de-converted? Interesting. What clinched it?
Replies from: Swimmy↑ comment by Swimmy · 2009-03-12T23:07:59.737Z · LW(p) · GW(p)
I left a comment about it here: http://lesswrong.com/lw/2/tell_your_rationalist_origin_story/45#comments
Long story short, OB helped a lot.
comment by pjeby · 2009-03-12T05:22:56.247Z · LW(p) · GW(p)
It seems to me that the principal issue is that, even if you know all those things... that doesn't guarantee that you're actually applying them to your own beliefs or thought processes. There is no "view source" button for the brain, nor even a way to get a stack trace of how you arrived at a particular conclusion... and even if there were, most of us, most of the time, would not push the button or look at the trace, if we were happy with our existing/expected results.
In addition, most people are astonishingly bad at reasoning from the general to the specific... which means that if you don't mention religion explicitly in your hypothetical course, very few people will actually apply the skills in a religious context... especially if that part of their life is working out just fine, from their point of view.
It may be fictional evidence, but I think S.P. Somtow's idea that "The breaking of joy is the beginning of wisdom" has some applicability here... as even highly-motivated individuals have trouble learning to see their beliefs, as beliefs -- and therefore subject to the skills of rationality.
That is, if you think something is part of the territory, you're not going to apply something you think of as map-reading skills.
Hm, in fact, here's an interesting example. One of my students in the Mind Hackers' Guild just posted to our forum, complaining that by eliminating all his negative motivation regarding work, he now had no positive motivation either. But it was not apparent to him that the very fact he considered this a problem, was also an example of negative motivation.
That's because even though I teach people that ALL negative motivation is counterproductive for achieving long-term, directional goals (as opposed to very short-term or avoidance goals), people still assume that "negative motivation" means "motivations I don't like, or already know are irrational"... and so they make exceptions for all the things they think are "just the way it is". (Like in this man's case, an irrational fear linked to his need to "pay the bills".)
And this happens routinely with people, no matter how explicitly and repeatedly I state that, "no, you have to include those too". It seems like people still have to go through the process at least once or twice with someone pointing one of these out, before they "get it" that those other motivations also "count".
Heck, truth be told, I still sometimes take a while to find what hidden assumption in my thinking is leading to interference... even at times when I'd happily push the "view source" button or look at the stack trace... if only that were possible.
But since I routinely and trivially notice these map-territory confusions when my students do them, even without a view-source button -- heck, I can spot them from just a few words in the middle of their forum posts! -- I have to conclude that there is something innate at issue, besides me just not being a good enough teacher. After all, if I can spot these things in them, but not me, there must be some sort of bias at work.
Replies from: RobinHanson↑ comment by RobinHanson · 2009-03-12T13:04:15.101Z · LW(p) · GW(p)
I suspect you are right; the issue isn't that these people haven't "learned" relevant abstractions or tools. They just don't have enough incentive to apply those tools in these contexts. I'm not sure you can "teach" incentives, so I'm not sure there is anything you can teach which will achieve the goal stated. So I'd ask the question: how can we give people incentives to apply their tools to cases like religion?
Replies from: pjeby, AnnaSalamon, Eliezer_Yudkowsky, Lee_A_Arnold↑ comment by pjeby · 2009-03-12T15:32:28.288Z · LW(p) · GW(p)
It's not incentive either. I have plenty of incentive, and so do my students. It's simply that we don't notice our beliefs as beliefs, if they're already in our heads. (As opposed to the situation when vetting input that's proposed as a new belief.)
Since we don't have any kind of built-in function for listing ALL the beliefs involved in a given decision, we are often unaware of the key beliefs that are keeping us stuck in a particular area. We sit there listing all the "beliefs" we can think of, while the single most critical belief in that area isn't registering as a "belief" at all; it just fades in as part of our background assumptions. To us, it's something like "water is wet" -- sure it's a belief, but how could it possibly be relevant to our problem?
Usually, an irrational fear associated with something like, "but how will I pay the bills?" masquerades as simple, factual logic. But the underlying emotional belief is usually something more like, "If I don't pay the bills, then I'm an irresponsible person and no-one will love me." The underlying belief is invisible because we don't look underneath the "logic" to find the emotion hiding underneath.
Unfortunately, all reasoning is motivated reasoning, which means that to find your irrational beliefs in a given area, you have to first dig up a nontrivial number of rationalizations... knowing that the rationalization you're looking for is probably something you specifically created to prevent you from thinking about the motivation involved in the first place! (After all, revealing to others that you think you're irresponsible isn't good genetic fitness... and if you know, that makes it more likely you'll unintentionally reveal it.)
A simple tool, by the way, for digging up the motivation behind seemingly "factual" statements and beliefs is to ask, "And what's bad about that?" or "And what's good about that?".... usually followed by, "And what does that say/mean about YOU?" You pretty quickly discover that nearly everything in the universe revolves around you. ;-)
↑ comment by AnnaSalamon · 2009-03-12T17:54:47.385Z · LW(p) · GW(p)
I'd say there're two problems: one is incentives, as you say; the other is making "apply these tools to your own beliefs" a natural affordance for people -- something that just springs to mind as a possibility, the way drinking a glass of liquid springs to mind on seeing it (even when you're not thirsty, or when the glass contains laundry detergent).
Regarding incentives: good question. If rationality does make people's lives better, but it makes their lives better in ways that aren't obvious in prospect, we may be able to "teach" incentives by making the potential benefits of rationality more obvious to the person's "near"-thinking system, so that the potential benefits can actually pull their behavior. (Humans are bad enough at getting to the gym, switching to more satisfying jobs in cases where this requires a bit of initial effort, etc., that people's lack of acted-on motivation to apply rationality to religion does not strongly imply a lack of incentives to do so.)
Regarding building a "try this on your own beliefs" affordance (so that The Bottom Line or other techniques just naturally spring to mind): Cognitive-Behavioral Therapy people explicitly teach the "now apply this method to your own beliefs, as they come up" steps, and then have people practice those steps as homework. We should do this with rationality as well (even in Eliezer's scenario where we skip mention of religion). The evidence for CBT's effectiveness is fairly good AFAICT; it's worth studying their teaching techniques.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-03-13T05:10:28.146Z · LW(p) · GW(p)
Cognitive-Behavioral Therapy people explicitly teach the "now apply this method to your own beliefs, as they come up" steps, and then have people practice those steps as homework. We should do this with rationality as well (even in Eliezer's scenario where we skip mention of religion). The evidence for CBT's effectiveness is fairly good AFAICT; it's worth studying their teaching techniques.
Great! Links?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T18:49:40.455Z · LW(p) · GW(p)
I think there's a question of understanding here, not just incentives. The knowledge of minds as cognitive engines or the principle of the bottom line, is the knowledge that in full generality you can't draw an accurate map of a city without seeing it or having some other kind of causal interaction with it. This is one of the things that readers have cited as the most important thing they learned from my writing on OB. And it's the difference between being told an equation in school to use on a particular test, versus knowing under what (extremely general) real-world conditions you can derive it.
Like the difference between being told that gravity is 9.8 m/s^2 and being able to use that to answer written questions about gravity on a test or maybe even predict the fall of clocks off a tower, but never thinking to apply this to anything except gravity. Versus being able to do and visualize the two steps of integral calculus that get you from constant acceleration A to 1/2 A t^2, which is much more general than gravity.
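For reference, here are the two integration steps being alluded to, under the standard assumptions of constant acceleration A and zero initial velocity and position:

```latex
v(t) = \int_0^t A \, ds = A t, \qquad
x(t) = \int_0^t v(s) \, ds = \int_0^t A s \, ds = \tfrac{1}{2} A t^2
```

Substituting A = g ≈ 9.8 m/s^2 recovers the falling-clock prediction as one special case of the far more general result.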
If you knew on a gut level - as knowledge - that you couldn't draw a map of a city without looking at it, I think the issue of incentives would be a lot mooter. There might still be incentives whether or not to communicate that understanding, whether or not to talk to others about it, etc., but on a gut level, you yourself would just know.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T19:42:36.398Z · LW(p) · GW(p)
Even if you "just know", this doesn't grant you the ability to perform an instantaneous search-and-replace on the entire contents of your own brain.
Think of the difference between copying code, and function invocation. If the function is defined in one place and then reused, you can certainly make one change, and get a multitude of benefits from doing so.
However, this relies on the original programmer having recognized the pattern, and then consistently using a single abstraction throughout the code. But in practice, we usually learn variations on a theme before we learn the theme itself, and don't always connect all our variations.
And this limitation applies equally to our declarative and procedural memories. If there's not a shared abstraction in use, you have to search-and-replace... and the brain doesn't have very many "indexes" you can use to do the searching with -- you're usually limited to searching by sensory information (which can include emotional responses, fortunately), or by existing abstractions. ("Off-index" or "table scan" searches are slower and unlikely to be complete, anyway -- think of trying to do a search and replace on uses of the "visitor" pattern, where each application has different method names, none of which include "visit" or use "Visitor" in a class name!)
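A toy illustration of the copying-versus-invocation point (hypothetical code, purely to make the analogy concrete):

```python
# Copied-and-varied logic: the same rule re-implemented in each place.
# Changing the tax rate means hunting down every variation by hand,
# and there is no shared name to search for.
def price_with_tax(price):
    return round(price * 1.08, 2)

def invoice_total(line_items):
    return round(sum(item * 1.08 for item in line_items), 2)

# Shared abstraction: one definition, reused everywhere.
# A single edit to TAX_RATE updates every caller at once.
TAX_RATE = 1.08

def with_tax(amount):
    return round(amount * TAX_RATE, 2)

def invoice_total_v2(line_items):
    return with_tax(sum(line_items))

print(invoice_total([10.0, 20.0]), invoice_total_v2([10.0, 20.0]))  # both 32.4
```

The claim in the comment is that beliefs usually behave like the first case rather than the second: the pattern was never factored out, so there is no shared name to search for.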
It seems to me that yours and Robin's view of minds still contains some notion of a "decider" -- that there's some part of you that can just look and see something's wrong and then refuse to execute that wrongness.
But if mind is just a self-modifying program, then not only are we subject to getting things wrong, we're also subject to recording that wrongness, and perpetuating it in a variety of ways... recapitulating the hardware wrongs on a software level, in other words.
And so, while you seem to be saying, "if people were better programmers, they'd write better code"... it seems to me you're leaving out the part where becoming a better programmer has NO effect...
On all the code you've already written.
↑ comment by Lee_A_Arnold · 2009-03-13T22:29:00.236Z · LW(p) · GW(p)
Think of something you might have said to Kurt Gödel: he was a theist. (And not a dualist: he thought materialism is wrong.) In fact he believed the world is rational and is also a Leibnizian monadology with God as the central monad. He was certainly NOT guilty of failing to apply Eliezer's list of "technical, explicit understandings," as far as I can see. I should point out that he separated the question about religion: "Religions are, for the most part, bad -- but religion is not." (Gödel in Wang, 1996.)
comment by HughRistik · 2009-03-12T21:20:12.945Z · LW(p) · GW(p)
Here's another way of evaluating the sanity of religious belief:
It's arguable that the original believers of religion were insane (e.g. shamans with schizotypal personality disorder, temporal lobe epilepsy, etc...), yet with each subsequent believer in your culture, you are less and less insane to believe in it. Throughout history, it would only take a few insane or gullible people with good oratorical skills getting together to make religion sanely believable.
If you are religious because you see spirits, you are insane. If you are religious because your friend Shaman Bob sees spirits and predicts the rainfall, you aren't very smart, but you aren't insane either. If you are religious because your whole tribe believes in the spirits seen by Shaman Bob and has indoctrinated you from birth, you are not insane at all, you are a typical human.
Replies from: thomblake, Eliezer_Yudkowsky, Vladimir_Nesov↑ comment by thomblake · 2009-03-12T21:26:55.655Z · LW(p) · GW(p)
Even better:
Evidence for the existence of God: my ancestors saw God and talked to him, and he did really great things for them, and so they passed down stories about it so that we'd remember. Everybody knows that.
Evidence for the existence of Jesus: same.
Evidence for the existence of Hercules: same.
Evidence for the existence of Socrates: same.
Evidence for the existence of Newton: same. Okay, we have a few more records of this one.
Replies from: NancyLebovitz, Vladimir_Nesov, HughRistik↑ comment by NancyLebovitz · 2010-03-24T14:17:36.352Z · LW(p) · GW(p)
Mohammed is solidly part of history.
Replies from: thomblake↑ comment by thomblake · 2010-03-24T15:26:19.543Z · LW(p) · GW(p)
Certainly not more solid than Newton
Replies from: Marion Z.↑ comment by Marion Z. · 2022-12-02T21:55:07.126Z · LW(p) · GW(p)
No, around the same level as Socrates.
We are sure with 99%+ probability both were real people, it would be possible but really difficult to fake all the evidence of their existence.
We are sure with quite high but lesser probability that the broad strokes of their lives are correct: Socrates was an influential philosopher who taught Plato and was sentenced to death; Muhammad was a guy from Mecca who founded Islam and migrated to Medina, then returned to Mecca with his followers.
We think some of the specific details written about them in history books might be true, but definitely not all of them. Muhammad might have lived in a cave during his young life, and Socrates might have refused to escape from his death sentence, etc.
↑ comment by Vladimir_Nesov · 2009-03-13T01:12:54.983Z · LW(p) · GW(p)
When a coin comes out tails ten times in a row, you'll bet on it being rigged strictly depending on your prior belief about how much you expect it to be rigged. Evidence only makes sense given your prior belief, inferred from other factors. If I hear a report of a devastating hurricane, I believe it more than if the very same report stated that a nuclear bomb went off in a city, and I won't believe it at all if it stated that little green men have landed in front of the White House.
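A minimal numerical sketch of that prior-dependence; the "rigged means always tails" model and both priors are illustrative assumptions:

```python
# Posterior probability that the coin is rigged after ten tails in a row,
# under the simplifying assumption that "rigged" means it always lands tails.
def posterior_rigged(prior_rigged, n_tails=10):
    likelihood_rigged = 1.0              # P(10 tails | rigged)
    likelihood_fair = 0.5 ** n_tails     # P(10 tails | fair) = 1/1024
    joint_rigged = prior_rigged * likelihood_rigged
    joint_fair = (1.0 - prior_rigged) * likelihood_fair
    return joint_rigged / (joint_rigged + joint_fair)

print(posterior_rigged(1e-2))  # ~0.91  -- a modest prior makes ten tails nearly decisive
print(posterior_rigged(1e-6))  # ~0.001 -- with a tiny prior, the same evidence barely moves you
```

The identical ten-tails report supports very different conclusions depending entirely on the prior, which is the point about hurricanes versus little green men.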
Replies from: JamesAndrix↑ comment by JamesAndrix · 2009-03-13T02:05:11.942Z · LW(p) · GW(p)
This is one of the principles of rationality I'm proud to say I discovered on my own: http://heresiology.blogspot.com/2006/01/intelligent-design-1.html
Short version: an interesting formation on Earth might be a sign of human involvement; an interesting formation on Mars, not so much.
↑ comment by HughRistik · 2009-03-12T21:37:14.727Z · LW(p) · GW(p)
Exactly. These are all sane beliefs, even though only some of them are rational.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-27T23:37:59.749Z · LW(p) · GW(p)
It's arguable that the original believers of religion were insane (e.g. shamans with schizotypical personality disorder, temporal lobe epilepsy, etc...), yet with each subsequent believer in your culture, you are less and less insane to believe in it.
But this would be true only if the subsequent believers were not taking into account previous believers as evidence - if they had all come to the same view independently. Otherwise we have an information cascade.
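A minimal simulation sketch of that cascade dynamic, following the standard information-cascade setup; the signal accuracy, the follow-the-crowd rule, and all numbers are illustrative assumptions rather than anything from the comment:

```python
import random

def run_cascade(n_agents=50, p_wrong_signal=0.4, rng=None):
    """The claim is false; each agent's private signal wrongly says 'true'
    with probability p_wrong_signal. Agents also see how many predecessors
    publicly accepted the claim, and follow the crowd once it leans two or
    more believers (or disbelievers) ahead; only otherwise do they fall
    back on their own signal."""
    if rng is None:
        rng = random.Random()
    accepts = rejects = 0
    for _ in range(n_agents):
        if accepts - rejects >= 2:
            believes = True        # acceptance cascade: private signal ignored
        elif rejects - accepts >= 2:
            believes = False       # rejection cascade: private signal ignored
        else:
            believes = rng.random() < p_wrong_signal
        if believes:
            accepts += 1
        else:
            rejects += 1
    return accepts

rng = random.Random(0)
runs = [run_cascade(rng=rng) for _ in range(1000)]
# With these assumed numbers, roughly a third of runs lock into
# near-unanimous belief in the false claim.
print(sum(r > 25 for r in runs) / len(runs))
```

Each new believer here treats earlier believers as evidence, so belief compounds without any new information entering the system.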
Replies from: HughRistik↑ comment by HughRistik · 2009-04-02T23:27:40.945Z · LW(p) · GW(p)
Information cascades may be irrational, but they seem fully sane and neurotypical.
↑ comment by Vladimir_Nesov · 2009-03-13T01:01:33.936Z · LW(p) · GW(p)
If you are religious because your whole tribe believes in the spirits seen by Shaman Bob and has indoctrinated you from birth, you are not insane at all, you are a typical human.
The point is that a typical contemporary human is insane. The problem doesn't go away if everyone is suffering from it. Death is still bad even if everyone dies, and believing in nonsense is still insane, even if everyone bends to some reason to do so.
Replies from: HughRistik↑ comment by HughRistik · 2009-03-13T22:06:32.147Z · LW(p) · GW(p)
Yes, there is a "problem" that everyone is suffering from. But the problem is stupidity, not insanity. There is no reasonable basis to assign insanity to typical contemporary humans just because their brains can't achieve the rationality that a minority of human brains can, unless someone actually has some arguments showing some brain malfunction.
Believing in nonsense is not at all insane if your brain is hardwired to be biased towards certain types of nonsense, or if you aren't smart enough to figure out that you are encountering nonsense. And this is exactly how normal human beings are.
The normal, healthy, and sane functioning of typical contemporary human brains is to be susceptible to certain biases. That's the whole thesis of Overcoming Bias. The sooner that we atypical rationalists get used to this, the better, because myopically characterizing bias as insanity will disguise the fact that one of the biggest threats to rationality is certain perfectly healthy processes in the typical human brain.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T22:07:22.338Z · LW(p) · GW(p)
Taboo "sane". "Neurotypical" might be a good substitute.
comment by Vladimir_Golovin · 2009-03-12T08:48:23.855Z · LW(p) · GW(p)
There was a time in history when religion was completely eliminated from social and scientific life -- the Soviet period, roughly from the 1920s to the 1980s.
I'm not informed well enough to judge the effects the removal of religion had on Soviet science. Granted, the country went from rubble to Sputnik and nuclear weapons, but it is hard for me to untangle the causes of this -- there were other powerful factors at work (e.g. "if you don't do good science, we'll send you and your family to the GULAG").
One thing, however, is certain -- after the Soviet Union collapsed, religion won back its lost ground in a matter of a few years. The memetic sterilization that had been going on for several generations didn't help at all.
Now, about 20 years after the collapse, we see quite a lot of academics publicly mentioning God in their TV interviews, and you'll never hear a public politician mention that he is an atheist -- doing so would instantly ruin his career.
To sum up, I have to agree with the posters suggesting that the 'God-shaped hole' wanting to be filled is innate. Figuring out whether religion is an epistemic need, a signaling tool, or both of these mixed in some proportion is another story.
Replies from: MBlume↑ comment by MBlume · 2009-03-12T09:22:17.781Z · LW(p) · GW(p)
It doesn't have to be a 'God-shaped hole' -- there probably is a hole, and over the past few millennia, the Goddists have learned some excellent strategies to fill it, and to exploit it for the replication of their memes. People like Sagan and Dawkins have spent their lives trying to show that science, properly understood and appreciated, fills the hole better, fits it more truly, than do the ideas of religion.
Bottom line: we're not selling Sweet'n'Low here. If we slap "I Can't Believe It's Not Christ!" on the jar, if we act as though religion is the "real thing," and we've got a convenient stop-gap, people are going to want to go back to the "real thing" every time.
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2009-03-12T13:23:27.202Z · LW(p) · GW(p)
Agreed, the term 'God-shaped hole' is misleading. Actually, I didn't mean any specific monotheistic God, but rather 'One or more anthropomorphic entities with supernatural powers who created the observable world'.
Yes, the Goddists learned to exploit the Hole quite well, but couldn't it be because the Hole provided a better environment for survival of memes involving powerful anthropomorphic entities than for other kinds of memes?
As for science filling the hole better, I of course agree with this, but a layperson may have a different definition of 'better' for this context. You, Dawkins, Sagan and most OB/LW readers define 'better' as 'more closely corresponding to reality', while a layperson may define 'better' as 'making me feel more comfortable'.
(Also, I don't quite understand what part of my post can be interpreted as suggesting to "act as though religion is the "real thing," or that scientific worldview is a quick-and-easy hole filler -- it obviously isn't. Perhaps I wasn't clear enough -- I'm not a native English speaker.)
Replies from: MBlume↑ comment by MBlume · 2009-03-21T05:07:22.717Z · LW(p) · GW(p)
(Also, I don't quite understand what part of my post can be interpreted as suggesting to "act as though religion is the "real thing," or that scientific worldview is a quick-and-easy hole filler -- it obviously isn't. Perhaps I wasn't clear enough -- I'm not a native English speaker.)
Sorry, I didn't mean to imply that you were implying that religion is epistemically the real thing. More that...our sense of sweetness is supposed to detect sugar. Sugar is the real referent of our pleasure in sweet-tasting things, while something like sucralose is simply a substitute, a way of replacing it. I worry that by saying "God-shaped hole," we imply that the supernatural -- whether or not it exists -- really is the original referent of the desires which religion exploits. This could be true, but I do not think it is, and I do not think it is a point we should concede just yet.
comment by haig · 2009-03-12T13:19:14.380Z · LW(p) · GW(p)
I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html, covering research on brain differences between believers and non-believers. The takeaway from the recent study was that "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to it, then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not to let uncertainty or errors slide by, no matter how small, would build a positive habit and develop people's rationality skills.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-03-12T18:19:33.448Z · LW(p) · GW(p)
brilliant
comment by [deleted] · 2009-03-12T11:42:53.839Z · LW(p) · GW(p)
My father grew up in a heavily religious family, and rejected religion at an early age. I'd say he was a clever fellow, but the turning point wasn't intelligence, it was what a horrible little bastard he was as a child, as any of his siblings would tell you.
If you just don't give a shit, all the emotional manipulation in the world will just wash over you like water off a duck's back. And that's all religion really has going for it, appealing to hope, to fear, to love, to respect, to piety, to community.
If you can teach people truly not to care, a huge rotten portion of their psyche falls away. There is a cost, of course... but this will do the job the post demands.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-03-12T18:23:50.575Z · LW(p) · GW(p)
Some truth to that.
comment by PhilGoetz · 2009-03-13T05:37:53.208Z · LW(p) · GW(p)
Recently I contemplated writing an "Atheist's Bible", to present the most important beliefs of atheists. Eventually I realized that this Atheist Bible would not mention atheism. "Atheism" is just the default belief state we were born with. Atheism isn't having reasons not to believe religion; it's not having reasons to believe religion. If one knows how the world works, there are no gaps for religion to fill.
The French Encyclopedia of the late 18th century was by design an atheist work; it carried out this design by not mentioning religion.
Replies from: JJ10DMAN, hypatia↑ comment by JJ10DMAN · 2010-08-10T11:25:54.148Z · LW(p) · GW(p)
On the contrary, I would argue that our default belief state is one full of scary monsters trying to kill us and whirling lights flying around overhead and oh no what is this loud noise and why am I wet
...I can't imagine a human ancestor in that kind of situation not coming up with some kind of desperate Pascal's wager of, "I'll do this ritualistic dance to the harvest goddess because it's not really that much trouble to do in the grand scheme of things, and man if there's any chance of improving the odds of a good harvest, I'm shakin' my rain-maker." Soon you can add, "and everyone else says it works" to the list, and bam, religion.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2010-08-17T21:44:17.053Z · LW(p) · GW(p)
On the contrary, I would argue that our default belief state is one full of scary monsters trying to kill us and whirling lights flying around overhead and oh no what is this loud noise and why am I wet
There is no mention of God in that state; therefore it is atheism. Any person who does not believe in God is an atheist. Anyone who has never thought about whether there is a god, or doesn't have the concept of god, is therefore an atheist.
Replies from: fubarobfusco, JJ10DMAN↑ comment by fubarobfusco · 2011-07-08T04:33:23.024Z · LW(p) · GW(p)
Plug that data into a brain that's been optimized by evolution for thinking about agents and their motives rather than about atmospheric physics, and it's no surprise that you get outputs like "Who threw that rain at me!? What'd I ever do to you, rain-agent? Why are you pissed off at me? What can I do to make you do what I want you to do?"
↑ comment by JJ10DMAN · 2010-10-15T13:48:42.585Z · LW(p) · GW(p)
What I meant was, the moment anyone comes up with such a concept, it would appear so completely and undeniably sensible that it would instantly take hold as accepted truth and only become dislodged with great effort of the combined philosophical efforts of humanity's greatest minds over thousands of years.
It's not technically "default", but that's like saying a magnet is not attracted to a nearby piece of iron "by default" because there's no nearby piece of iron implied by the existence of the magnet. It's technically true, but it kind of misses the important description of a property of magnets.
comment by Daniel_Burfoot · 2009-03-12T09:19:45.957Z · LW(p) · GW(p)
There are a couple of large gorillas in this room.
First, the examples of great scientists who were also religious show that you don't have to be an atheist to make great discoveries. I think the example of Isaac Newton is especially instructive: not only did Newton's faith not interfere with his ability to understand reality, it also constituted the core of his motivation to do so (he believed that by understanding Nature he would come to a greater understanding of God). Faraday's example is also significant: his faith motivated him to refuse to work on chemical weapons for the British government.
Second, evidence shows that religious people are happier. Now, this happiness research is of course murky, and we should hesitate to make any grand conclusions on the basis of it. But if it is true, it is deeply problematic for the kind of rationality you are advocating. If rationalists should "just win", and we equate winning with happiness, and the faithful are happier than atheists, then we should all stop reading this blog and start going to church on Sundays.
There are subtleties here that await discovery. Note for example Taleb's hypothesis that the ancients specifically promoted religion as a way of preventing people from going to doctors, who killed more people than they saved until the 19th century. Robin made a similar point about the cost effectiveness of faith healing.
Replies from: MichaelHoward, JulianMorrison, steven0461, Cameron_Taylor, Annoyance, Nebu, NQbass7↑ comment by MichaelHoward · 2009-03-12T13:54:30.564Z · LW(p) · GW(p)
If rationalists should "just win", and we equate winning with happiness,
Many of us don't, certainly not with happiness alone, but even if we did...
evidence shows that religious people are happier.
I accept a correlation between religious faith and happiness, but it's a long way from there to concluding that taking up religious faith is the best way to gain this happiness. Many sources of long-term happiness - sense of community, feelings of purpose, close family bonds, etc - are more likely to be seen in a religious person, but you don't have to turn to religion to experience them.
↑ comment by JulianMorrison · 2009-03-12T16:52:42.254Z · LW(p) · GW(p)
I hear that people who have had a lobotomy also live untroubled lives of quiet happiness.
↑ comment by steven0461 · 2009-03-12T16:48:00.374Z · LW(p) · GW(p)
If religion making people happier is a special case of conformity to prevalent ideas making people happier, then the happiness benefit from religion is strictly vampiric in nature. God knows religious people make me unhappy.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T17:35:41.467Z · LW(p) · GW(p)
In which case, the rational thing would be to get rid of whatever emotional belief causes them to make you unhappy. Why be unhappy about a mere fact?
Replies from: thomblake, Eliezer_Yudkowsky, Cameron_Taylor, Cameron_Taylor↑ comment by thomblake · 2009-03-12T21:51:11.814Z · LW(p) · GW(p)
Why be unhappy about a mere fact?
It's worth noting that 'mere' is a weasel word of the highest order. If you change that to 'Why be unhappy about a fact' then it loses its emotive force while having effectively the same content, unless you meant 'Why be unhappy about a fact that's not worth being unhappy about' in which case you're just baiting.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T21:55:22.765Z · LW(p) · GW(p)
Actually, I had a specific meaning in mind for "mere" in that context: a fact, devoid of the meaning that you yourself assigned to it. The "mere fact" that Starbucks doesn't sell naughty underwear doesn't make it something to be unhappy about... it's only a problem if you insist on reality doing things the way you want them. Same thing for the existence (or lack thereof) of religious people.
Replies from: thomblake, Nick_Tarleton↑ comment by thomblake · 2009-03-12T22:06:38.071Z · LW(p) · GW(p)
a fact, devoid of the meaning that you yourself assigned to it
So are you then denying meanings that come from other sources? Culturally constructed meaning? Meaning that comes from the relations between concepts?
it's only a problem if you insist on reality doing things the way you want them.
One of the best things about being human is that I insist everything happen the way that I want it to. If it doesn't, then I overcome it by replacing it, fixing it, or destroying it.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T22:12:35.027Z · LW(p) · GW(p)
So are you then denying meanings that come from other sources?
Nope... but they're stored in you, and represented in terms of common reinforcers (to use EY's term).
One of the best things about being human is that I insist everything happen the way that I want it to. If it doesn't, then I overcome it by replacing it, fixing it, or destroying it.
You can only do that in the future, not the past. Religious people already exist, so anything bad you might feel about that fact is already pointless.
Replies from: thomblake↑ comment by thomblake · 2009-03-12T22:16:33.485Z · LW(p) · GW(p)
You can only do that in the future, not the past. Religious people already exist, so anything bad you might feel about that fact is already pointless.
Only if you think emotion can't influence behavior. Feeling bad about religious people existing leads Dawkins to campaign against them, which is intended to stop people from being religious. It's sort of a virtuous circle. Do you think our feelings and actions only matter if they can change the past?
Replies from: pjeby↑ comment by pjeby · 2009-03-12T22:32:44.176Z · LW(p) · GW(p)
And if he didn't feel so badly about them, he might campaign in ways that were more likely to win them over. ;-) He seems far more effective at rallying his base than winning over the undecided.
In fact, it might be interesting to objectively compare the results of sending negatively- and positively- motivated atheists to speak to religious groups or individuals, and measure the religious persons' attitudes towards atheism and religion afterwards, as well as the long-term impact on (de)conversions.
I would predict better success for the positively-motivated persons... if only because positive motivation is a stronger predictor for success at virtually everything than negative motivation is!
↑ comment by Nick_Tarleton · 2009-03-12T22:06:02.301Z · LW(p) · GW(p)
it's only a problem if you insist on reality doing things the way you want them.
I agree that this is common and pathological, but why do you think ALL unhappiness proceeds from it? There is a middle ground between dispassionate judgment and desperate craving.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T22:20:34.047Z · LW(p) · GW(p)
Not really: there are two utility systems, not one. The absence of pleasure does NOT equal pain, nor vice versa. They are only weakly correlated, and functionally independent: they have different effects on the brain and body. And we can have mixed feelings, because both systems can activate at the same time.
But there's no inherent reason why activating the pain system is required in the absence of any given form of pleasure, or else we'd be mind-bendingly unhappy most of the time. MOST pleasant things, in fact, we are happy about the presence of, but not necessarily unhappy at the absence of.
As for the question about "all", please give me an example of a form of unhappiness that does NOT derive from a conflict between your desires, and the current state of reality. ;-)
Replies from: Nick_Tarleton, Vladimir_Nesov↑ comment by Nick_Tarleton · 2009-03-13T04:30:08.196Z · LW(p) · GW(p)
I agree with your first two paragraphs, but fail to see the relevance.
I must be matching cliches myself. I interpreted "you insist on reality doing things the way you want them" as referring to, as I said, pathological craving - a subset of "a conflict between your desires, and the current state of reality". I can't think of any unhappiness that has a reason other than that (although sometimes it has no apparent reason), but what of it? Having desires means having such a conflict (barring omnipotence or impossible luck).
Replies from: pjeby↑ comment by pjeby · 2009-03-13T05:22:23.855Z · LW(p) · GW(p)
My error, actually. I should have explained more clearly what I mean by "conflict"... I lapsed into a bit of a cliche there, undermining the point I was building up to from the first two paragraphs.
The key is conflict with the current state of reality, as distinguished from a future state. If you're busy being mad at the current state, you're probably not working as effectively on improving the future state. Negative motivation is primarily reactive and past/present-focused, rather than active and present/future-focused the way positive motivation is.
A positively-motivated person is not in conflict with the current state of reality just because he or she desires a different state. Someone with an emotion-backed "should", on the other hand, is in conflict.
So my question is/was, can you give an example of active unhappiness that derives from anything other than an objection to the current state of reality?
You implied that there's a "middle ground" between dispassionate judgment and desperate craving, but my entire point is that positives and negatives are NOT a continuum -- they're semi-independent axes.
Some researchers, btw, note that "affective synchrony" -- i.e., the correlation or lack thereof between the two -- is different under conditions of stress and non-stress, and conditions of high pain or pleasure. High pain is much more likely to be associated with low pleasure, and vice versa for high pleasure. But the rest of the time, for most people, they show near-perfect asynchrony; i.e., they're unrelated to each other.
Which means the "middle ground" you posit is actually an illusion. What happens is that your negative and positive evaluations mostly run in parallel, until and unless one system kicks into high gear.
You could compare it to a chess program's tree-trimming: you're cutting off subtrees based on negative evaluation, and more deeply investigating some, based on positive evaluations.
In the ancestral environment -- especially our prehuman ancestry -- this is effective, because negative branches equal death, and you don't have a lot of time to make a choice.
But in humans, most of our negative branch trimming isn't based on actual physical danger or negative consequences: it's based on socially-learned, status-based, self-judgment. We trim the action trees, not because we'll fall off a cliff, but because we'll seem like a "bad" person in some way. And our reasoning is then motivated to cover up the trimmed branches, so nobody else will spot our concerns about them.
So the trimmed branches don't show up in consciousness for long, if at all.
But, since the evaluation modes are semi-independent, we can also have a positive evaluation for the same (socially unacceptable) thought or action that we (for the moment) don't act on.
So we then experience temptation and mixed feelings... and occasionally find ourselves "giving in". (Especially if nobody's around to "catch" us!)
So this dual-axis model is phenomenally better at explaining and predicting what people actually do than the naive linear model is. (People who approach the linear model in reality are about as rare as people who have strongly mixed feelings all the time, at least according to one study.)
The linear model, however, seems to be what evolution wants us to believe, because it suits our need for social and personal deception much better. Among other things, it lets us pretend that our lack of action means a virtuous lack of temptation, when in fact it may simply mean we're really afraid of being discovered!
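To make the tree-trimming analogy concrete, here's another minimal sketch (again a toy of my own, with hypothetical names and thresholds, not a claim about how brains literally compute): in the dual-axis model the avoidance system prunes an option outright once its negative evaluation crosses a threshold, whereas the naive linear model just subtracts it from a single score.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    gain: float  # approach-system ("gain brain") evaluation, 0..1
    pain: float  # avoidance-system ("pain brain") evaluation, 0..1

def linear_choice(options):
    # Naive single-axis model: fold both signals into one score and take the max.
    return max(options, key=lambda o: o.gain - o.pain)

def dual_axis_choice(options, pain_threshold=0.4):
    # Dual-axis model: branches whose pain evaluation crosses the threshold are
    # trimmed before the gain system ever explores them further.
    survivors = [o for o in options if o.pain < pain_threshold]
    return max(survivors, key=lambda o: o.gain) if survivors else None

options = [
    Option("socially risky but rewarding", gain=0.9, pain=0.5),
    Option("safe but bland",               gain=0.3, pain=0.1),
]
print(linear_choice(options).name)     # -> "socially risky but rewarding"
print(dual_axis_choice(options).name)  # -> "safe but bland" (the risky branch was trimmed)
```

Nothing here depends on the particular numbers; the sketch just shows why the two models can recommend different actions from the same inputs -- which is what lets them make different predictions about behavior.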
(whew, more fodder for my eventual post or series thereof!)
↑ comment by Vladimir_Nesov · 2009-03-13T01:23:40.916Z · LW(p) · GW(p)
pjeby:
Not really: there are two utility systems, not one. [...]
You are professing these unorthodox claims instead of communicating them. Why should I listen to anything you say on this issue?
Replies from: pjeby, Nick_Tarleton↑ comment by pjeby · 2009-03-13T03:24:37.132Z · LW(p) · GW(p)
I don't see how they're even remotely unorthodox. Most of what I said is verifiable from your own personal experience.... unless you're claiming that if having sex gives you pleasure, then you're in constant pain when you're not having sex!
However, if you must have the blessings of an authority (which I believe is what "orthodox" means), perhaps you'll find these excerpts of interest:
In addition to behavioral data, evidence from the neurosciences is increasingly in accord with the partial independence of positive and negative evaluative mechanisms or systems (Berntson, Boysen & Cacioppo, in press; Gray, 1987, 1991). The notion dates back at least to the experimental studies of Olds (1958: Olds & Milner, 1954), who spearheaded a literature identifying separate neural mechanisms to be related to the subjective states of pleasure and pain.
...
In an intriguing study that bears on functional rather than stochastic independence, Goldstein and Strube (in press) demonstrated the separability of positive and negative affect within a specific situation and time and the uncoupled activation of positive and negative processes after success and failure feedback, respectively.
The paper containing these two excerpts (from the 1994 APA Bulletin) is probably worth reading in more detail; you can find a copy here.
So, at least in what might be loosely considered "my" field, nothing I said in the post you're referencing is exactly what I'd call "unorthodox".
↑ comment by Nick_Tarleton · 2009-03-13T01:35:34.081Z · LW(p) · GW(p)
Now that the idea has been singled out in hypothesis space, you can evaluate its plausibility on your own, even without supporting argument.
I didn't realize there was an orthodoxy saying that humans have one utility system.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-03-13T01:50:53.117Z · LW(p) · GW(p)
Given how little we have to go on, evaluating the plausibility of a vaguely suggested hypothesis is hard work. One should only do it if one sufficiently expects it to lead to fruition; otherwise you can solve the riddles posed by the Oracle of White Noise till the end of time. Maybe a proper writeup in a separate post will do the trick.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T19:02:41.813Z · LW(p) · GW(p)
Why be unhappy about something that isn't a fact?
Replies from: Cameron_Taylor, pjeby↑ comment by Cameron_Taylor · 2009-03-12T19:35:28.365Z · LW(p) · GW(p)
I share your confusion. In fact, I can't think of a single contrarian counterexample.
↑ comment by pjeby · 2009-03-12T19:10:03.766Z · LW(p) · GW(p)
I don't understand your question. Are you trying to make a case for unhappiness being useful, or supporting the idea that unhappiness is not useful?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T19:32:57.185Z · LW(p) · GW(p)
I'm saying that if you're going to be unhappy about anything - a position I do currently lean toward, albeit with strong reservations - then you should be unhappy about facts.
Replies from: Vladimir_Nesov, pjeby↑ comment by Vladimir_Nesov · 2009-03-12T23:51:03.644Z · LW(p) · GW(p)
Sometimes the important facts you worry about are counterfactual. Which, after all, is what happens when you decide: you determine the real decision by comparing it to your model of its unreal alternative.
↑ comment by pjeby · 2009-03-12T20:07:49.897Z · LW(p) · GW(p)
In order to be unhappy "about" a fact, the fact has to have some meaning... a meaning which can exist only in your map, not the territory, since the fact or its converse has to have some utility -- and the territory doesn't come with utility labels attached.
However, there's another source of possible misunderstanding here: my mental model of the brain includes distinct systems for utility and disutility -- what I usually refer to as the pain brain and gain brain. The gain brain governs approach to things you want, while the pain brain governs avoidance of things you don't want.
In theory, you don't need anything this complex - you could just have a single utility function to squeeze your futures with. But in practice, we have these systems for historical reasons: an animal works differently depending on whether it's chasing something or being chased.
What we call "unhappiness" is not merely the absence of happiness; it's the activation of the "pain-avoidance" system -- a system that's largely superfluous (given our now-greater reasoning capacity) unless you're actually being chased by something.
So, from my perspective, it's irrational to maintain any belief that has the effect of activating the pain brain in situations that don't require an urgent, "this is a real emergency" type of response. In all other kinds of situations, pain-brain responses are less useful because they involve:
- more emotion
- more urgency and stress
- less deep thinking
- less creativity and willingness to explore options
- less risk-taking
And while these characteristics could potentially be life-saving in a truly urgent emergency... they are pretty much life-destroying in all other contexts.
So, while you might have a preference that people not be religious (for example), there is no need for this preference going unmet to cause you any actual unhappiness.
In other words, you can be happy about a condition X being met in reality, without also requiring that you be unhappy when condition X is not met.
Replies from: Eliezer_Yudkowsky, Annoyance↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T20:18:47.718Z · LW(p) · GW(p)
Should I not be unhappy when people die? I know that I could, by altering my thought processes, make myself less unhappy; I know that this unhappiness is not cognitively unavoidable. I choose not to avoid it. The person I aspire to be has conditions for unhappiness and will be unhappy when those conditions are met.
Our society thinks that being unhappy is terribly, terribly sinful. I disagree morally, pragmatically, and furthermore think that this belief leads to a great deal of unhappiness.
(My detailed responses being given in Feeling Rational, Not For the Sake of Happiness Alone, and Serious Stories, and furthermore illustrated in Three Worlds Collide.)
Replies from: pjeby, pjeby↑ comment by pjeby · 2009-03-12T20:36:26.227Z · LW(p) · GW(p)
I don't know. Is it useful for you to be unhappy when people die? For how long? How will you know when you've been sufficiently unhappy? What bad thing will happen if you're not unhappy when people die? What good thing happens if you are unhappy?
And I mean these questions specifically: not "what's good about being unhappy in general?" or "what's good about being unhappy when people die, from an evolutionary perspective?", but why do YOU, specifically, think it's a good thing for YOU to be unhappy when some specific person dies?
My hypothesis: your examination will find that the idea of not being unhappy in this situation is itself provoking unhappiness. That is, you think you should be unhappy when someone dies, because the idea of not being unhappy will make you unhappy also.
The next question to ask will then be what, specifically, you expect to happen in response to that lack of unhappiness, that will cause you to be unhappy.
And at that point, you will discover something interesting: an assumption that you weren't aware of before.
So, if you believe that your unhappiness should match the facts, it would be a good idea to find out what facts your map is based on, because "death => unhappiness" is not labeled on the territory.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T20:45:32.506Z · LW(p) · GW(p)
Pjeby, I'm unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it. To say that it is encoded directly into my utility function (not just that certain things are bad, but that I should be a person who feels bad about them) might be oversimplifying in this case, since we are dealing with a structurally complicated aspect of morality. But just as I don't think music is valuable without someone to listen to it, I don't think I'm as valuable if I don't feel bad about people dying.
If I knew a few other things, I think, I could build an AI that would simply act to prevent the death of sentient beings, without feeling the tiniest bit bad about it; but that AI wouldn't be what I think a sentient citizen should be, and so I would try not to make that AI sentient.
It is not my future self who would be unhappy if all his unhappiness were eliminated; it is my current self who would be unhappy on learning that my nature and goals would thus be altered.
Did you read the Fun Theory sequence and the other posts I referred you to? I'm not sure if I'm repeating myself here.
Replies from: NancyLebovitz, pjeby↑ comment by NancyLebovitz · 2010-03-24T05:50:15.641Z · LW(p) · GW(p)
Possibly relevant: A General Theory of Love suggests that love (imprinting?) includes needing the loved one to help regulate basic body systems. It starts with the observation that humans are the only species whose babies die from isolation.
I've read a moderate number of books by Buddhists, and as far as I can tell, while a practice of meditation makes ordinary problems less distressing, it doesn't take the edge off of grief at all. It may even make grief sharper.
↑ comment by pjeby · 2009-03-12T21:14:45.666Z · LW(p) · GW(p)
I'm unhappy on certain conditions as a terminal value, not because I expect any particular future consequences from it.
Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?
I ask because my experience tells me that there are only a handful of "terminal" negative values, and they are human universals; as far as I can tell, it isn't possible for a human being to create their own terminal (negative) values. Instead, they derive intermediate negative values, and then forget how they did the derivation... following which they invent rationalizations that sound a lot like the ones they use to explain why death is a good thing.
Don't you find it interesting that you should defend this "terminal" value so strongly, without actually asking yourself the question, "What really would happen if I were not unhappy in situation X?" (Where situation X is actually specified to a level allowing sensory detail -- not some generic abstraction.)
It's clear from what you've written throughout this thread that the answer to that question is something like, "I would be a bad person." And in my experience, when you then ask something like, "And how did I learn that that would make me bad?", you'll discover specific, emotional memories that provide the only real justification you had for thinking this thought in the first place... and that it has little or no connection to the rationalizations you've attached to it.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T21:25:59.408Z · LW(p) · GW(p)
Really? How do you know that? What evidence would convince you that your brain is expecting particular future consequences, in order to generate the unhappiness?
You could actually tell me what I fear, and I'd recognize it when I heard it?
What would it take for me to convince you that I'm repulsed by the thing-as-it-is and not its future consequence?
I ask because my experience tells me that there are only a handful of "terminal" negative values
I strongly suspect, then, that you are too good at finding psychological explanations! Conditioned dislike is not the same as conditional dislike. We can train our terminal values, and we can be moved by arguments about them. Now, there may be a humanly universal collection of negative reinforcers, although there is not any reason to expect the collection to be small; but that is not the same thing as a humanly universal collection of terminal values.
I can tell you just exactly what would happen if I weren't unhappy: I would live happily ever afterward. I just don't find that to be the most appealing prospect I can imagine, though one could certainly do worse.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T22:08:24.225Z · LW(p) · GW(p)
What would it take for me to convince you that I'm repulsed by the thing-as-it-is and not its future consequence?
A source listing for the relevant code and data structures in your brain. At the moment, the closest thing I know to that is examining formative experiences, because recontextualizing those experiences is the most rapid way to produce testable change in a human being.
We can train our terminal values, and we can be moved by arguments about them.
Then we mean different things by "terminal" in this context, since I'm referring here to what comes built-in to a human, versus what is learned by a human. How did you learn that you should have that particular terminal value?
I can tell you just exactly what would happen if I weren't unhappy: I would live happily ever afterward.
As far as I can tell, that's a "far" answer to a "near" question -- it sounds like the result of processing symbols in response to an abstraction, rather than one that comes from observing the raw output of your brain in response to a concrete question.
In effect, my question is, what reinforcer shapes/shaped you to believe that it would be bad to live happily ever after?
(Btw, I don't claim that happily-ever-after is possible -- I just claim that it's possible and practical to reduce one's unhappiness by pruning one's negative values to those actually required to deal with urgent threats, rather than allowing them to be triggered by chronic conditions. I don't even expect that I won't grieve people important to me... but I also expect to get over it, as quickly as is practical for me to do so.)
↑ comment by pjeby · 2009-03-12T20:51:56.484Z · LW(p) · GW(p)
Argh. You keep editing your comments after I've already started my replies. I guess I'll need to wait longer before replying, in future.
Your detailed responses are off-point, though, except for "Serious Stories", in which you suggest that it would be useful to get rid of unnecessary and soul-crushing pain and/or sorrow. My position is that a considerable portion of that unnecessary and soul-crushing stuff can be done away with, merely by rational examination of the emotional source of your beliefs in the relevant context.
Specifically, how do you know what "person you aspire to be"? My guess: you aspire to be that person not because of an actual aspiration, but rather because you are repulsed by the alternative -- and the alternative is something you're either afraid you are, or might easily become. (In other words, a 100% standard form of irrationality known as an "ideal-belief-reality conflict".)
What's more, when you examine how you came to believe that, you will find one or more specific emotional experiences... which, upon further consideration, you will find you gave too much weight to, due to their emotional content at the time.
Now, you might not be as eager to examine this set of beliefs as you were to squirt ice water in your ear, but I have a much higher confidence that the result will be more useful to you. ;-)
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T21:02:51.765Z · LW(p) · GW(p)
By "person I aspire to be" I mean that my present self has this property and my present self wants my future self to have this property. I originally wrote "person I define as me" but that seemed like too much of a copout.
Yes, I'm repulsed by imagining the alternative Eliezer who feels no pain when his friends, family, or a stranger in another country dies. It is not clear to me why you feel this is irrational. Nor is it based on any particular emotional experience of mine of having ever been a sociopath.
It seems to me that you are verging here on the failure mode of having psychoanalysis the way that some people have bad breath. If you don't like my arguments, argue otherwise. Just casting strange hints of childhood trauma is... well, it's having psychoanalysis the way some people have bad breath.
So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T21:16:32.872Z · LW(p) · GW(p)
Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?
Replies from: thomblake↑ comment by thomblake · 2009-03-12T21:22:16.692Z · LW(p) · GW(p)
While EY might not put it this way, this line:
So far as I can tell, being a person who hurts when other people hurt is part of that which appears to me from the inside as shouldness.
answered your question
Okay, let me rephrase. Why is it better to be a person who hurts when other people hurt, than a person who is happier when people don't hurt?
since Eliezer was making a moral observation. The answer: It is obviously so. Do you have conflicting observational data?
Replies from: pjeby↑ comment by pjeby · 2009-03-12T21:27:40.922Z · LW(p) · GW(p)
How is it rational to treat a "moral observation" as "obviously so"? That's how religion works, isn't it?
Replies from: Eliezer_Yudkowsky, thomblake↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T21:30:10.591Z · LW(p) · GW(p)
This discussion is now about
NATURALISTIC METAETHICS
my view on which is summarized in Joy in the Merely Good.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T21:52:21.738Z · LW(p) · GW(p)
My question is about the implementation of meta-ethics in the human brain. If I were going to write a program to simulate Eliezer Yudkowsky, what rules (other than "be unhappy when others are unhappy") would I need to program in for you to arrive at this "obvious" conclusion?
In my personal experience, the morality that people arrive at by avoiding negative consequences is substantially different than the morality they arrive at by seeking positive ones.
In other words, a person who does good because they will otherwise be a bad person, is not the same as a person who does good because it brings good. Their actions and attitudes differ in substantive ways, besides the second person being happier. For example, the second person is far more likely to actually be generous and warm towards other people -- especially living, present, individual people, rather than "people" as an abstraction.
So which of these two is really the "good" person, from your moral perspective?
(On another level, by the way, I fail to see how contagious, persistent unhappiness is a moral good, since it greatly magnifies the total amount of unhappiness in the universe. But that's a separate issue from the implementation question.)
Replies from: thomblake↑ comment by thomblake · 2009-03-12T22:02:02.507Z · LW(p) · GW(p)
It seems to me that when you say 'meta-ethics' you simply mean 'ethics'. I don't know why you'd think meta-ethics would need to be implemented in the human brain. Ethics is in the world; meta-ethics doubly so. There's a fact about what's right, just like there's a fact about what's prime. You could ask why we care about what's right, but that's neither an ethical question nor a meta-ethical one. The ethical question is 'what's right?' and the meta-ethical question is 'what makes something a good answer to an ethical question?'. Both of those questions can be answered without reference to humans, though humans are the only reason why anyone would care.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T22:28:07.076Z · LW(p) · GW(p)
Unless Eliezer has some supernatural entity to do his thinking for him, his ethics and meta-ethics require some physical implementation. Where else are you proposing that he store and process them, besides physical reality?
Replies from: thomblake↑ comment by thomblake · 2009-03-12T22:41:42.723Z · LW(p) · GW(p)
I think you're shifting between 'ethics' and 'what Eliezer thinks about ethics'. While it's possible that ideas are not real save via some implementation, I don't think it would therefore have to be in a particular human; systems know things too.
You seem to frequently shift the focus of conversation as it happens, hurting the potential for rational discourse in favor of making emotively positive statements that loosely correlate with the topic at hand. Would you be the same pjeby that writes those reprehensible self-help books?
Replies from: Eliezer_Yudkowsky, pjeby↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T22:49:38.958Z · LW(p) · GW(p)
That seemed a bit ad hominem. The commenter pjeby (I know nothing else about him) seems like someone who might be unfamiliar with part of the LW/OB background corpus but is reasoning pretty well under those conditions.
Replies from: pjeby, thomblake↑ comment by pjeby · 2009-03-12T23:02:04.889Z · LW(p) · GW(p)
Actually, I'm quite familiar with a large segment of the OB corpus -- it's been highly influential on my work. However, I also see what appear to be a few holes or incoherencies within the OB corpus... some of which appear to stem from precisely the issue I've been asking you about in this thread. (i.e. the role of negative utilities in creating bias)
In my personal experience, negative utilities create bias because they cut off consideration of possibilities. This is useful in an emergency -- but not much anywhere else. If human beings had platonically perfect minds, there would be no difference between a uniform utility scale and a dual positive/negative one... but as far as I can tell (and research strongly suggests) we do have two different systems.
So, although you're wary of Robin's "cynicism" and my "psychological explanations", this is inconsistent with your own statements, such as:
There is no perfect argument that persuades the ideal philosopher of perfect emptiness to attach a perfectly abstract label of 'good'. The notion of the perfectly abstract label is incoherent, which is why people chase it round and round in circles. What would distinguish a perfectly empty label of 'good' from a perfectly empty label of 'bad'? How would you tell which was which?
See, I'm as puzzled by your ability to write something like that, and then turn around and argue an absolute utility for unhappiness, as you are puzzled by that Nobel-winning Bayesian dude who still believes in God. From my POV, it's just as inconsistent.
There must be some psychology that creates your position, but if your position is "truly" valid (assuming there were such a thing), then the psychology wouldn't matter. You should be able to destroy the position, and then reconstruct it from more basic principles, once the original influence is removed, no? (This idea is also part of the corpus.)
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-03-12T23:36:18.182Z · LW(p) · GW(p)
pjeby,
Are you familiar with Eliezer's take on naturalistic meta-ethics in particular, or just with other large segments of the OB corpus? If the former, maybe you could take more care to spell out that you get the difference between "achieving one's original goals" and "hacking one's goal-system so that the goal-system thinks one has achieved one's goals (e.g., by wireheading)".
I like your writing, but in this particular thread, my impression is that you're "rounding to the nearest cliche" -- interpreting Eliezer and others as saying the nearest mistake that you've heard your students or others make, rather than making an effort to understand where people are coming from. My impression may be false, but it sounds like I'm not the only one who has it, and it's distracting, so maybe take more care to spell out in visible terms a summary of peoples' main points, so we know you're disagreeing with what they're saying and not with some other view.
More generally, you've joined a community that has been thinking awhile and has some unusual concepts. I'm glad you've joined the commenters, because we badly need the best techniques we can get for changing our own thinking habits and for teaching the same to others -- we need techniques for learning and teaching rationality -- and I find your website helpful here, and your actual thinking on the subject, in context, can probably become better still. But I wonder if you could maybe take a bit more care in general to hear the threads you're responding to. I've felt like you were "rounding to the nearest cliche" in your thread with me as well (I wasn't going off the Lisa Simpson happiness theory), and it might be nice if you could take the stance of a co-participant in the conversation, who is interested in both learning and teaching, instead of repeating the (good) points on your website in response to all comments, whatever the comments' subject matter.
Replies from: pjeby↑ comment by pjeby · 2009-03-13T00:24:26.370Z · LW(p) · GW(p)
First, yes, I do understand the difference between goal-achievement and wireheading. I'm drawing a much finer distinction about the means by which you set up a system to achieve your goals, as well as the means by which you choose those goals in the first place.
It is possible in some cases that I've "rounded to the nearest cliche" as you put it. But I'm pretty confident that I'm not doing that with Eliezer's points, precisely because I've read so much of his work... but also because the mistake I believe he is making (or at least, the thing he appears to not be noticing) is a perfect example of a point that I was trying to make in another thread... about why you can't just put one new, correct belief in someone's head, and have it magically fix every broken belief they already have.
I'm a little confused about the rest of your statement; it doesn't seem to me that I'm repeating the same points, so much as that I've been struggling to deal with the fact that so many of the threads I've become involved in boil down (AFAICT) to the same issues -- and trying NOT to have too much duplication in my responses, while also not wanting to create a bunch of inter-comment links. (Another fine example of how avoiding negatives leads to bad decisions... ;-) )
Now, whether that's a case of me having only a hammer, or whether it's simply because everything really is made out of ones and zeros, I'm not sure. It has been seeming to me for a bit now, that what I really need to do is write an LW article about positive/negative utility and abstract/concrete thinking, as these are the main concepts I work with that clash with some portions of the OB corpus (and some of the more vocal LW commenters). Putting that stuff in one place would certainly help reduce duplication.
Meanwhile, it's not my intention to reduce anyone to cliche, or to presume that I understand something I don't. If I were, I wouldn't spend so much time in so many of my comments, asking so many questions. They are not rhetorical; they represent genuine curiosity. And I've actually learned quite a lot from the process of asking and commenting in the last few days; many things I've written here are NOT concepts I previously had.
This is especially true for the two comments that were replies to you; they were my musings on the ideas I got from your statements, more than critique or commentary of anything you said. I can see how that might make you feel not understood, however. (Also, the "Lisa Simpson theory" part of that one comment was actually directed to the comment you were replying to, not your comment in that thread, btw. I was trying to avoid writing two replies there.)
Replies from: Eliezer_Yudkowsky, AnnaSalamon↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T01:19:02.136Z · LW(p) · GW(p)
I also get the sense that you're trying to say something off-the-cuff in your replies that would be better done as a specific LW post.
↑ comment by AnnaSalamon · 2009-03-13T01:09:29.251Z · LW(p) · GW(p)
Thanks for the thoughtful reply. It's quite possible I misinterpreted. Also, re: the Lisa Simpson thing, I'll be more careful to look at other nearby posts people might be replying to instead of reading comments so much from the new comments page.
It seems slightly odd to me that you say you're "pretty confident" you're not rounding Eliezer's point to the nearest cliche in part because the mistake you think he's making "is a perfect example of a point [you] were trying to make in another thread". Isn't that what it feels like when one rounds someone's response to a pre-existing image of "oh, the such-and-such mistake"?
A LW article about how people think about positive/negative utility, and another about abstract/concrete thinking, sounds wonderful. Then we can sift through your concepts as a community, air confusions or objections in a coherent manner, etc.; and you can reference it and it'll be part of our shared corpus. Both topics sound useful.
Replies from: pjeby↑ comment by pjeby · 2009-03-13T03:04:49.932Z · LW(p) · GW(p)
Isn't that what it feels like when one rounds someone's response to a pre-existing image of "oh, the such-and-such mistake"?
So, how would you distinguish that, from the case where their response is making the such-and-such mistake?
The way I'd distinguish it, is to ask questions that would have different answers, depending on whether the person is making that mistake or not. I asked Eliezer those questions, and of the ones he answered, the answers were consistent with my model of the mistake.
Of course, there's always the possibility of confirmation bias... except that I also know what answers I'd have taken as disconfirming my hypothesis, which makes it at least a little less likely. (But I do know of more than one mechanism by which beliefs and behaviors are formed and maintained, and it would've been plausible -- albeit less probable -- that his evaluation could've been formed another way. And I'd have been perfectly okay with my hypothesis being wrong.)
See, I'm not pointing out what I believe to be a mistake because I think I'm smarter than Eliezer... it's because I'm constantly making the same mistake. We all do, because it's utterly trivial to make it, and really non-trivial to spot it. And if you haven't gotten an intuitive grasp of why and how that mistake comes into being (for example, if you insist it doesn't exist in the first place!), then it's hard to see why there's "no silver bullet" for reducing the complexity of developing "rationality" in people.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2009-03-13T04:05:03.091Z · LW(p) · GW(p)
So, how would you distinguish that, from the case where their response is making the such-and-such mistake?
If my interlocutor is someone who might well have thoughts that don't fit into my schemas, I might be suspicious enough of my impression that they were making one of my standard cached example-mistakes that I'd:
- Make a serious effort at original seeing, and make sure my model of the such-and-such mistake is really the best way to organically understand the situation in front of me; and then
- Describe my schema for the such-and-such mistake (in general), and see if the person agrees that such-and-such is a mistake; and then
- Describe the instance of the such-and-such mistake that the person seems to be making, and ask if they agree or if there's a kind of reasoning going into their claim that doesn't fit into my schema.
Or maybe this is just the pain-in-the-neck method one should use if one's original communication attempt stalls somewhere. Truth be told, I'm at this point rather confused about which aspects of meta-ethics are under dispute, and I can't easily scan back through the conversation to find the quotes of yours that made me think you misunderstood because our conversation has overflowed LW's conversation-display settings. And you've made some good points, and I'm convinced I misunderstood you in at least some cases. I'm going to bow out of this conversation for now and wait to discuss values and their origins properly, in response to your own post. (Or if you'd rather, I'd love to discuss by email; I'm annasalamon at gmail.)
Replies from: pjeby↑ comment by pjeby · 2009-03-13T04:49:27.830Z · LW(p) · GW(p)
Yes, the comment system here is really not suited to the kind of conversation I've been trying to have... not that I'm sure what system would work for it. ;-)
As far as meta-ethics goes, the short summary is:
- "Avoiding badness" and "seeking goodness" are not interchangeable when you experience them concretely on human hardware,
- It is therefore a reasoning error to treat them as if they were interchangeable in your abstract moral calculations (as they will not work the same way in practice),
- Due to the specific nature of the human hardware biases involved (i.e., the respective emotional, chemical, and neurological responses to pain vs. pleasure), badness-avoidance values are highly likely to be found irrational upon detailed examination... and thus they are always the ones worth examining first.
- Badness-avoidance values are a disproportionately high (if not exclusive!) source of "motivated reasoning"; i.e., we don't so much rationalize to paint pretty pictures as to hide the ugly ones. (Which makes rooting them out of critical importance to rationalists.)
This summary is more to clarify my thoughts for the eventual post, than an attempt to continue the discussion. (To me, these things are so obvious and so much a part of my day-to-day experience that I often forget the inferential distance involved for most people.)
These ideas are all capable of experimental verification; the first one has certainly been written about in the literature. None are particularly unorthodox or controversial in and of themselves, as far as I'm aware.
However, there are common arguments against some of these ideas that my own students bring up, so in my (eventual) post I'll need to bring them up and refute them as well.
For example, a common argument against positively-motivated goodness is that feeling good about being generous means you're "really" being selfish... and thus bad! So, the person advancing this argument is motivated to rationalize the "virtue" of being dutiful -- i.e., doing something you don't want to, but nonetheless "should" -- because it would be bad not to.
Strangely, most people have these judgments only in relation to themselves... They see no problem with someone else doing good out of generosity or kindness, with no pain or duty involved. It's only themselves they sentence to this "virtue" of suffering to achieve goodness. (Which is sort of like "fighting for peace" or "f*ing for virginity", but I digress.)
Whether this is something inbuilt, cultural, or selection bias of people I work with, I have no idea. But it's damn common... and Eliezer's making a virtue out of unhappiness (beyond the bare minimums demanded by safety, etc.) fits smack dab in the middle of this territory.
Whew. Okay, I'm going to stop writing this now... this really needs to be a post. Or several. The more I think about how to get here, starting from only the OB corpus and without recapitulating my own, the bigger I realize the inferential gap is.
Replies from: Eliezer_Yudkowsky, Vladimir_Nesov↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-13T05:48:02.959Z · LW(p) · GW(p)
You may be running into the Reversed Stupidity problem: most cases you've seen advocating negative feelings are stupid; therefore, you assume that all such advocacy must result from the same stupidity.
I sympathize because I remember back when I would have thought that anyone arguing against the abolitionist program - that is, the total abolition of all suffering - was a Luddite.
But I eventually realized I didn't want to eliminate my negative reinforcement hardware, and that moreover, I wouldn't be such a bad person if I, you know, just did things the way I did want, instead of doing things the way I felt vaguely dutifully obligated to want but didn't want.
Why am I a terrible, bad person for not wanting to modify myself in that way? What higher imperative should override: "I'd rather not do this"?
Replies from: pjeby↑ comment by pjeby · 2009-03-13T06:43:19.816Z · LW(p) · GW(p)
I didn't say you're a terrible, bad person - I said your choice to be unhappy in the absence of any positive benefit from doing so is likely to be found irrational, if you reflect on the concrete emotional reason you find the prospect abhorrent.
I also don't recommend eliminating the negative reinforcement hardware, I merely recommend carefully vetting all the software you permit to run on it, or to be generated by it. (So don't worry, I'm not an advance spokesperson for the Superhappies.)
This isn't an absolute, just a VERY strong heuristic, in my experience. Sort of like, if someone's going to commit suicide, I have more hoops for them to jump through to prove their rationality than someone who's just going to the grocery store. ;-)
And, based on what you've said thus far, it doesn't sound like you've thoroughly investigated what concrete (near-system) rules drove the creation of your aspiration to suffering.
(As opposed to the abstract ideation that happened afterward, since a major function of abstract ideation is to allow us to hide our near-system rules from ourselves and others... an idea I got from OB, btw, and one that significantly increased the effectiveness of my work!)
Now, were you advocating a positive justification for the use of unhappiness, rather than a desire to avoid its loss, I wouldn't need to apply the same stringency of questioning... in the same way that I wouldn't question a masochist finding enjoyment in the experience of pain!
And if you were giving a detailed rationale for your negative justification, I'd be at least somewhat more satisfied. However, your justifications here and on OB sound to me like vague "apologies for death", that is, they handwave various objections as being "obvious", without providing any specific scenario in which any given person would actually be better off by not having the option of immortality, or by lacking the ability to reject unhappiness, or to get over it with arbitrary quickness.
Also, you didn't answer any of my questions like, "So, how long would you need to be unhappy, after some specific person died?" This kind of vagueness is (in my experience) a strong indicator of negatively-motivated rationalization. After all, if this were as well-thought-out as your other positions, it seems to me that you'd either already have had an answer ready, or one would have come quickly to mind.
That one question is particularly relevant, too, for determining where our positions actually differ -- if they really do! I don't mind being (briefly) unhappy, as an indicator that something is wrong. I just don't see any point in leaving the alarm bell ringing 24/7 thereafter. Our lives and concerns don't exist on the same timescales as our ancestors' did, and a life-threatening problem 20 years from now simply doesn't merit the same type of stress response as one that's going to happen 20 seconds from now. But our nervous systems don't seem to know the difference, or at least lack the required dynamic range for an adequate degree of distinction.
By the way, this comment gives a more detailed explanation of how the negative reinforcement mechanism leads to undesirable results besides excessive stress (like hypocrisy and inner conflict!) compared to keeping it mostly-inactive, within the region where positive reinforcement is equally suitable to create roughly-similar results.
And now, I'm going to sign off for tonight, and take a break from writing here for a while. I need to get back to work on the writing and speaking I do for my paying customers, at least for a few days anyhow. ;-) But I nonetheless look forward to your response.
Replies from: Emile↑ comment by Emile · 2009-03-13T14:06:31.546Z · LW(p) · GW(p)
Interesting thread!
I'm not sure that pjeby has fully addressed Eliezer's concern that "eliminating my negative emotions would be changing my preferences, and changing my preferences so that they're satisfied is against my current preferences (otherwise, I'd just go for being an orgasmium)".
(Well, at least that's how I'd paraphrase it, Eliezer, tell me if I'm wrong)
To which I would answer:
Yes, it's very possible that eliminating some negative emotions would be immoral, or at least, would change one's preferences in a way my previous preferences would disagree with (think: eliminating the guilt over killing people, and things like that. I wouldn't be very happy to learn that the army or police of a dictatorship is researching emotion elimination)
Still, there is probably a wide range of negative feelings that could be removed in a way that doesn't contradict one's original preferences - in the sense that the pre-modification person wouldn't find the behaviour of the modified person objectionable.
The line between which changes are OK and which are not is not that obvious to draw, and many posts on OB talk about it (the difference between the morality of the ancient Greeks and our own, and thus the risk of "freezing" our own morality and barring future moral progress, the Confessor's objections to non-consensual sex, etc.). pjeby might be being a bit light-handed when he dismisses concerns over changing preferences as "irrational", but I think he meant that careful examination could show that those changes stayed in the second category and wouldn't turn one into an immoral monster.
(It feels a bit weird answering pjeby's post in the third person, but it felt clearer to me that way :P I'm not responding to this post in particular)
(Disclaimer: I'm one of pjeby's clients, but that's not why I'm here; I've been reading OvercomingBias since nearly the beginning.)
Replies from: pjeby↑ comment by pjeby · 2009-03-13T15:35:25.895Z · LW(p) · GW(p)
pjeby might be being a bit light-handed when he dismisses concerns over changing preferences as "irrational"
I didn't (explicitly) dismiss those concerns; I said that away-from reasoning has a higher rationality standard to meet, in part because it's likely to be vague.
I wasn't even thinking about preference-changing being dangerous, because our preferences are largely independent and mostly don't "auto-update" when we change one -- there's a LOT of redundancy. So if a specific change isn't compatible with your overall morality, you'll note the dissonance, and change your preferences again to tune things better.
Science-fictional evidence of preference-changing is about as far off as science-fictional evidence of AI behavior... and for the same reasons. The built-in models our brain uses to understand minds and their preferences, are simpler than the models the brain uses to create a mind... and its preferences.
Offtopic: Shortly after you posted this, it appears that someone undertook a massive vote-down campaign, systematically searching for every comment I've ever posted to LW, and voting it down by 1. I don't know if, or how these events are correlated.
But, if the person who undertook that campaign was trying to send me a message of some sort, they neglected to include any actionable information content. I only noticed because the karma number suddenly and dramatically changed when I clicked through from one page to another, reading this morning's new comments.... and that sudden large drop was weird enough to make me investigate.
Otherwise, I probably never would've been aware of their action, as an action, let alone as any sort of feedback! If you want to communicate something to someone, it's probably best to be more explicit. Or, in the alternative, contribute a patch to the LW software to let you filter out posts by people you don't like, or perhaps the entire subthreads they participate in.
Replies from: Emile↑ comment by Vladimir_Nesov · 2009-03-14T23:10:37.201Z · LW(p) · GW(p)
This is what I was talking about. Please do prepare the posts; it'll help you to clarify your position to yourself. Let them lie as drafts for a while, then make a decision about whether to post them. Note that your statements are about the form of human preference computation, not about the utility that computes the "should" following from human preferences. Do you know the derivation of the expected utility formula? You refer to a well-known finding that people avoid negative reward more than they seek positive reward.
Replies from: pjeby↑ comment by pjeby · 2009-03-15T02:23:00.739Z · LW(p) · GW(p)
You refer to a well-known finding that people avoid negative reward more than they seek positive reward.
Well, there is that too, of course, but actually the issues I'm talking about here are (somewhat) orthogonal. Negatively-motivated reasoning is less likely to be rational in large part because it's more vague -- it requires only that the source of negative motivation be dismissed or avoided, rather than a particular source of positive motivation be obtained. Even if negative and positive motivation held the same weight, this issue would still apply.
The literature I was actually referring to (about the largely asynchronous and simultaneous operation of negative and positive motivation), I linked to in another comment here, after you accused me of making unorthodox and unsupported claims. In my posts, I expect to also make reference to at least one paper on "affective synchrony", which is the degree to which our negative and positive motivation systems activate to the same degree at the same time.
Note that your statements are about the form of human preference computation, not about the utility that computes the "should" following from human preferences.
All I'm pointing out is that a rationalist that ignores the irrationality of the hardware on which their computations are being run, while expecting to get good answers out of it, isn't being very rational.
↑ comment by thomblake · 2009-03-12T22:55:45.423Z · LW(p) · GW(p)
It was deliberately ad hominem, of course - just not the fallacious kind. We seriously need profile pages of some sort. Wish I had the stomach for Python.
I don't expect anyone to be familiar with the LW/OB background corpus - I expect my education and training is quite different from yours, for example. However, I still expect one to follow rules of conduct with respect to reasonable discourse, for example avoiding equivocation and its related vices.
Or maybe I'm just viscerally angered by the winky smileys. Who knows.
↑ comment by pjeby · 2009-03-12T22:56:27.957Z · LW(p) · GW(p)
I don't see how I can separate "ethics" from "what Eliezer thinks about ethics" and still have a meaningful conversation with him on the topic.
Meanwhile, reading back through the thread, the only digressions I see in my comments are those made in response to those raised by you or Eliezer. Perhaps you could point to some specific examples of these shifted foci and emotively positive statements? I do not see them.
As for my "reprehensible" books, I trust you formed that judgment by actually reading them, yes? If so, then yes, I'm that person. But if you didn't read them, then clearly your judgment isn't about the books I actually wrote... and thus, I could not have been the person who wrote the (imaginary) ones you'd therefore be talking about. ;-)
Replies from: thomblake↑ comment by thomblake · 2009-03-12T23:01:27.307Z · LW(p) · GW(p)
Perhaps you could point to some specific examples of these shifted foci and emotively positive statements? I do not see them.
I was not referring only to this thread, but to several ongoing discussions. If you'd like clear examples, feel free to contact me via http://thomblake.com or http://thomblake.mp
As Eliezer has kind of pointed out, I'm weary enough from this discussion to be on the verge of irrationality, so I shall retire from it (if only because this forum is devoted to rationality!).
↑ comment by Annoyance · 2009-03-12T20:13:20.744Z · LW(p) · GW(p)
I agree with your reasoning, but I think there are plenty of reasons to be unhappy about religion that go beyond the absence of a preferred state.
In other words, I think I should be actively displeased that religion exists and is prevalent, not merely non-happy. Neutrality is included in non-happiness and, if the word were used logically, in unhappiness. But the way it's actually used, 'unhappy' means active displeasure.
Replies from: pjeby↑ comment by Cameron_Taylor · 2009-03-12T19:37:22.096Z · LW(p) · GW(p)
Why be unhappy about a mere fact?
Because it may serve as a source of motivation as well as serve as a vital mechanism for sending honest signals of your values and allegiances.
Unhappiness also changes our mode of thinking. Unhappy children have been shown to have significantly improved performance at tasks requiring detailed thinking and problem solving.
Happiness is overrated.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T21:40:51.307Z · LW(p) · GW(p)
How does "religious people make me unhappy" motivate anything useful? And to whom is it sending this "honest signal"? What mode of thinking is being changed?
In other words, how is being unhappy about the mere existence of religious people actually useful to the specific person who stated that?
At the moment, what you've said could equally apply to death being a good thing, because it motivates us to not waste time, and because it kills off the less intelligent. In other words, it sounds like a way to justify continued belief in the usefulness of your own unhappiness.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-03-17T19:38:41.144Z · LW(p) · GW(p)
How does "religious people make me unhappy" motivate anything useful? And to whom is it sending this "honest signal"? What mode of thinking is being changed?
I don't know. That wasn't the question I answered.
At the moment, what you've said could equally apply to death being a good thing, because it motivates us to not waste time, and because it kills off the less intelligent.
No it doesn't, that's absurd.
In other words, it sounds like a way to justify continued belief in the usefulness of your own unhappiness.
No. It is several examples of how being unhappy about a fact can provide benefit to people. If unhappiness were a strictly detrimental phenomenon, we would not have evolved to experience it in such depth.
Replies from: pjeby↑ comment by pjeby · 2009-03-17T21:23:33.573Z · LW(p) · GW(p)
That wasn't the question I answered.
But it was the question I asked, in context. The "mere fact" referred to in my question was the existence of religious people -- it was not an abstract question to be applied to any random "mere fact".
No it doesn't, that's absurd.
Right -- because that's exactly what happens when you take a point out of context, treat it as an abstraction, and then reapply it to some other specific fact than the one it was relevant to.
If unhappiness was a strictly detrimental phenonemon then we would not have evolved to experience it in such depth.
Detrimental to whom, relative to what purpose? Evolution doesn't have our personal goals or fulfillment in mind -- just reproductive success. It's merely a happy accident that some of the characteristics useful for one are also useful for the other.
↑ comment by Cameron_Taylor · 2009-03-12T19:59:19.271Z · LW(p) · GW(p)
The mere fact is not the primary cause of net unhappiness from the prevalence of a belief that you do not share. Loss of status and connection is far more difficult to dismiss than a mere fact.
Replies from: pjeby↑ comment by pjeby · 2009-03-12T20:18:01.828Z · LW(p) · GW(p)
If you want status and connection, seek them out. What does that have to do with the fact that you don't have them elsewhere?
Do you have to be unhappy because Starbucks doesn't sell underwear, and Victoria's Secret doesn't serve coffee? Or do you just go someplace that has whatever you're looking for at the moment?
That's how you operate when you have a preference, rather than an emotionally-backed addiction.
In this case, the emotional rule backing the addiction is a "should" -- the idea that people "shouldn't" be religious or "should" be rational. Such rules produce only pain, when you're not in a position to enforce them.
However, if you change "they should" to "I prefer", then you don't have to be unhappy when reality doesn't match your preferences. You are still free (and still motivated) to change the situation, if and when you have the power to do so, but you are not also bound to become unhappy whenever the situation is not exactly as you prefer.
↑ comment by Cameron_Taylor · 2009-03-12T19:53:15.726Z · LW(p) · GW(p)
You may be using the other tribe's battle cries, but you make relevant points. This post comforts me.
↑ comment by Annoyance · 2009-03-12T19:42:25.605Z · LW(p) · GW(p)
"I think the example of Isaac Newton is especially instructive: not only did Newton's faith not interfere with his ability to understand reality,"
Actually, his belief that God constantly corrected the motion of the planets to ensure that they'd remain stable over time crippled his ability to recognize that the planets' deviations from predicted orbits could be caused by other, unknown planets.
Other people, who weren't hung up on trying to find visible signs of godly intervention, used those deviations to predict where new planets should be, and then found them with astronomical searching.
Rationality: 1 Religion: 0
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T19:47:00.577Z · LW(p) · GW(p)
Citation?
Replies from: JoshuaZ, Daniel_Burfoot↑ comment by JoshuaZ · 2010-04-29T05:43:29.916Z · LW(p) · GW(p)
I'm going to echo Eliezer's request for a citation. As far as I know this is simply wrong. First of all, Newton understood that planets interacted with each other gravitationally. Indeed, taking this into account gave slightly better predictions than the strict Keplerian model. The only planet in this solar system that was predicted based on apparent gravitational influence was Neptune, which was predicted based on deviations in the orbit of Uranus. (In fact, people had seen Neptune before but had not realized what it was. Galileo saw it at least once but didn't realize it was a planet (Edit: See remark below).) Uranus wasn't even recognized as a planet until 1781 (some prior intermittent observations had marked it as possibly a star) and even then wasn't widely accepted as a planet for a few years. Newton died about 50 years prior. So there's no way he could have had any hope of using anomalies in the orbit of Uranus to detect Neptune. The situation gets worse given that the anomalies weren't even recognized until Bouvard's detailed calculations in the early part of the 19th century revealed the discrepancy between the observed and predicted orbit of Uranus.
I thus find Annoyance's comment very hard to understand. It is very hard to blame Newton's religion for his failure to use gravitational anomalies to predict the presence of other planets when, in his lifetime, no such anomalies were even pronounced enough to justify the claim.
I suspect that the commentator may be confusing this with the issue of the stability of the orbits. That is, the orbits of the planets are an inherently chaotic system. Newton had an intuitive but non-rigorous grasp of this problem and suggested that God might step in from time to time to nudge a planet to prevent it from doing something wildly bad.
There are two good books to read on these and related issues. One is Kuhn's "The Copernican Revolution." Despite Kuhn's general philosophy coloring the presentation, it gives an excellent summary of the history of astronomy, especially around the switch from the Ptolemaic to Keplerian models. The other book to read is Alan Hirschfeld's "Parallax," which focuses on the problem of stellar parallax from ancient times to the modern era and uses that as a general theme to discuss the history of astronomy.
Edit: Just checked. Apparently the claim that Galileo saw Neptune is not completely clear cut at this point. There are two diagrams which show an object in roughly the right places if those observations were taken at the right time but there's no hard evidence. This doesn't impact the overall point substantially.
↑ comment by Daniel_Burfoot · 2009-03-13T03:23:38.421Z · LW(p) · GW(p)
The paper "Religious Involvement and US Adult Mortality" [Hummer, Rogers, Nam, Ellison] concludes that there is "a seven-year difference in life expectancy at age 20 between those who never attend and those who attend more than once a week."
Now, I'm skeptical of this claim, but it's still a bit shocking.
Replies from: Larks↑ comment by Nebu · 2009-03-12T15:31:22.733Z · LW(p) · GW(p)
Second, evidence shows that religious people are happier. Now, this happiness research is of course murky, and we should hesitate to make any grand conclusions on the basis of it. But if it is true, it is deeply problematic for the kind of rationality you are advocating. If rationalists should "just win", and we equate winning with happiness, and the faithful are happier than atheists, then we should all stop reading this blog and start going to church on Sundays.
I think Eliezer Yudkowsky already addressed this point several times on "that other site" (Overcoming Bias). Basically "happiness" is a nice-sounding one word summary of "what people want", but it's inaccurate.
Look at the examples of the wish-granting genie/AI. If you wish for "happiness", it'll turn you into a pile of orgasmium, basically grey-matter which just experiences infinite pleasure constantly. You've got happiness, but it seems like it's not what most people actually want.
↑ comment by NQbass7 · 2009-03-12T13:34:38.300Z · LW(p) · GW(p)
... shows that you don't have to be an atheist to make great discoveries.
I don't think anyone is making the argument that you do. Plenty of people get through life without basic rationality, and some even do interesting and amazing things. That's not an argument for being religious though - at best, it shows that in certain cases religion doesn't completely cripple your rationality. It's still a risk, however.
As for religious people being happier than atheists, ... In my experience, the average atheist is not at the basic level talked about in this post. Slightly more rational than the average theist I've met (and I tend to spend more time around the smarter, more rationalizing type of theist), but still not even close to this.
Obviously that's just anecdotal, so I wouldn't bet much on it at all, but it's enough for me to question the validity of applying murky evidence about happiness and religion to a discussion about teaching basic rationality. If anything, I would say that evidence seems to more strongly indicate that we should teach basic rationality, because leaving religion without it might make it more difficult to be happy.
comment by Vladimir_Golovin · 2009-03-12T17:51:34.282Z · LW(p) · GW(p)
To return to the question asked in the original post:
what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions?
My first reaction to the question -- too many constraints. I can't quickly think of anything that satisfies all three of them. However, if I'm allowed to drop one constraint, I'd drop the second one ("useful as a general method of rationality"), and my answer would be evolution.
In my experience, understanding evolution down to chemistry -- down to predictable interactions of very simple parts that have nothing mystical or anthropomorphic about them -- can have a tremendous impact on one's further thinking.
Replies from: bentarm↑ comment by bentarm · 2009-03-12T23:34:29.760Z · LW(p) · GW(p)
I'd second that. In fact, I think that knowing about evolution is probably a necessary prerequisite to being a rational atheist. Even Dawkins admits that it would have been pretty difficult not to believe in God before there was a plausible naturalist explanation for the complexity of life.
Of course, it's possible to know that there is a plausible naturalist explanation without really understanding the nuts and bolts of how it works (I'd probably put myself in that category), but maybe it really does help to hammer home the point if you really understand it: if The Creator wasn't necessary to make life, what exactly is He for?
comment by JamesAndrix · 2009-03-12T17:36:33.934Z · LW(p) · GW(p)
"Could you - realistically speaking, regardless of fairness - win a Nobel while advocating the existence of Santa Claus?"
Absolutely! Your examples already show that that level of insanity is tolerated. The difference is only social acceptance of one belief vs. another.
The only way to change that is to have a culture in which people are held to a high standard in every part of their lives. Today if you examine the reason people believe anything, you come off as the jerk. Even outside of religious beliefs.
Person makes unfounded statement, people don't challenge unfounded statement. That's polite conversation. Some people like a few iterations of challenge-response, but only weirdos follow the turtles all the way down.
We look at a scientist's work; what they do outside of their work is generally ignored. Do we want to start discounting the credibility of a physicist because he cheats on his wife?
comment by Roko · 2009-03-12T15:35:06.669Z · LW(p) · GW(p)
So, further to my earlier comment about teaching people how to be happy and how to flourish, I have a question to ask. Suppose that (for some reason) everyone in the world was always very happy with their lives. Would people even consider religion as a serious option in their hypothesis space? I don't think so. Imagine trying to convert a citizen of Banks' Culture to Christianity. Would they even take you seriously? Would it be like trying to convert a grown-up to belief in Santa Claus?
Replies from: pjeby, Cameron_Taylor↑ comment by pjeby · 2009-03-12T15:54:07.710Z · LW(p) · GW(p)
The catch is that highly-religious parents can make their kids unhappy... and thus put religion in a more attractive position. You're talking about adults, here.
But yeah, salvation is a lot harder sell if you don't feel like there's anything you need to be saved from.
Replies from: Roko, Cameron_Taylor↑ comment by Roko · 2009-03-12T16:01:34.003Z · LW(p) · GW(p)
Well indeed. Why do Christians talk so much about Sin? Why do they emphasize the fact that you are a SINNER and Jesus will forgive you for your sin?
Suppose you've just cheated on your boyfriend and you're feeling bad about yourself, that you are an evil person, etc. If, at that point, you are exposed to the Sin meme, and the corresponding solution that if you believe in God your sin will be washed away, why are you going to side with rationality? It would be irrational (from a hedonistic point of view) not to believe in God when in this situation...
Replies from: pjeby, NancyLebovitz↑ comment by pjeby · 2009-03-12T17:20:37.428Z · LW(p) · GW(p)
Dunno, doesn't seem much different from explaining that you're not an evil person, your genes and current programming made you do it, and that you can both let it go -- and feel better -- AND change your programming so that you don't do similar things in the future.
Religion doesn't have a monopoly on forgiveness, after all. In order for it to work, there has to be something there in the brain that supports that function. And nothing stops rationalists from using that same mechanism. Hell, it's a critical part of a set of techniques I teach for altering one kind of "self-esteem".
Now, the idea that there's somebody who loves you no matter what, that might still be attractive. But if everybody learns at a young enough age how to use forgiveness and other methods to address their broken beliefs and judgments, they should already be in the habit of doing that.
Of course, good luck trying to teach forgiveness in schools... Religious folks will positively freak about that, because they DO think they've got a monopoly on the process.
Replies from: Roko↑ comment by Roko · 2009-03-12T17:38:54.786Z · LW(p) · GW(p)
"Dunno, doesn't seem much different from explaining that you're not an evil person, your genes and current programming made you do it, and that you can both let it go -- and feel better -- AND change your programming so that you don't do similar things in the future."
- but people don't get told that they can think along those lines. Unless they see a psychiatrist... The default is for people to suffer lots of bad feelings about themselves. Religion probably acts as a psychiatrist replacement for many people. In fact, if you think about what a psychiatrist does, it's a lot like having someone to pray to, though most psychiatrists are less judgmental than the Christian god.
"Now, the idea that there's somebody who loves you no matter what, that might still be attractive. But if everybody learns at a young enough age how to use forgiveness and other methods to address their broken beliefs and judgments,"
- since most people currently don't do this, this is a good potential strategy for rooting religion out. Which was kind of the point of my original comment.
↑ comment by Cameron_Taylor · 2009-03-12T18:21:48.404Z · LW(p) · GW(p)
reasonable
↑ comment by NancyLebovitz · 2010-03-24T14:15:13.292Z · LW(p) · GW(p)
Not all versions of Christianity are Sin-oriented, but those that are seem to get more traction in this era. There've been times when Unitarianism and Universalism (they used to be separate religions) and the Society of Friends (also known as Quakers) -- much more gentle religions -- were spreading, but they never got huge.
I suggest that people are attracted to drama at least as much as they are to things which are likely to make their lives better.
There may be an underlying premise that "anything which attracts my attention must be worth paying attention to".
Replies from: Roko↑ comment by Roko · 2010-03-24T16:57:12.750Z · LW(p) · GW(p)
I suggest that people are attracted to drama at least as much as they are to things which are likely to make their lives better.
Yes, something like that sounds reasonable. Perhaps it is more that people are attracted to polarity, drama and struggle.
↑ comment by Cameron_Taylor · 2009-03-12T18:20:49.084Z · LW(p) · GW(p)
Not really. If the high-status folks have a religion, you'll seek it.
↑ comment by Cameron_Taylor · 2009-03-12T18:20:14.486Z · LW(p) · GW(p)
yes.
comment by Roko · 2009-03-12T13:52:19.101Z · LW(p) · GW(p)
"Consider this thought experiment - what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions?"
- Teach people how to be happy, how to have high self-esteem, and how to flourish in their lives. Teach them what the pitfalls of human happiness are, what the evolutionary reasons for human misbehavior are, and what we can do about them. Also, as a second line of defense, create a rational religion-replacement.
I have some good experience in this area, because whilst at university I spent time around the university evangelical Christian society. Disturbingly, some members were better mathematicians than me; all were highly intelligent. What they almost all had in common was that they had some good reason to be unhappy with their lives: they were socially awkward, ugly, mentally unbalanced, or had self-confidence issues, etc. The attractive, socially adept, self-confident crowd spent their free time drinking, playing in the sports teams and socializing in a very, ahem, secular way.
Robin (elsewhere in this comment thread) says:
"They just don't have enough incentives to apply those tools in these context."
I agree completely, at least in the case of clever Christians.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-03-12T18:23:04.112Z · LW(p) · GW(p)
harsh
comment by knb · 2009-03-12T05:51:41.477Z · LW(p) · GW(p)
Since you brought up Dawkins, I think teaching about Memetics would be very useful in raising the "sanity waterline". Learning about Memetics really forces you to analyze your beliefs for selfish replicator ideas.
In addition, it challenges the view that consensus is an impregnable defense for believing bizarre things. You are put in the position of actually having to try to cite evidence for why you believe things. Of course even that doesn't work very often, since most people have very strong ideological immune systems that protect their beliefs. But asking those questions, and trying to justify your beliefs is a necessary first step.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-03-12T09:06:05.330Z · LW(p) · GW(p)
Memes are at best a thought-provoking analogy - we have no way of being rigorous about them. I'd love to be wrong about this, but I'd be surprised.
Replies from: Kenny, Johnicholas↑ comment by Kenny · 2013-09-20T15:06:04.487Z · LW(p) · GW(p)
What exactly do you mean by being "rigorous about them"?
Some seemingly 'rigorous' ways of studying memes that spring to mind:
- Archaeologists studying the dissemination of arrowhead technology
- Linguists studying the geographic distribution of phonemes among dialects of a language
- Biblical studies
- Etymology
It's not clear that memes are copied with a high enough degree of fidelity to really be subject to evolution by natural selection, but they certainly share the other structural characteristics of genes, namely variation and differential fitness.
↑ comment by Johnicholas · 2009-03-12T10:49:32.114Z · LW(p) · GW(p)
Would you say that "We have no way of being rigorous about it" means that we shouldn't teach the meme analogy?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-03-12T13:13:13.851Z · LW(p) · GW(p)
It means that if I talk about memes, I leave myself open to an easy challenge to which I currently have no reply. I'd really like a good reply, since I think it's a genuinely useful aid to thinking about what it means for an idea to be popular, so if you have one I'm keen to hear it!
Replies from: Johnicholas↑ comment by Johnicholas · 2009-03-12T14:29:55.198Z · LW(p) · GW(p)
Suppose I present a concrete non-rigorous analogy: "A chain letter is like an organism with a habitat of human minds." What is the easy challenge that I have left myself open to? I already freely conceded that it was non-rigorous.
Replies from: Rings_of_Saturn↑ comment by Rings_of_Saturn · 2009-03-12T21:12:18.205Z · LW(p) · GW(p)
Johnicholas:
You leave yourself open to the reply that the non-rigorousness of the analogy makes it useless or even pernicious. Owning up to a fault doesn't make it go away.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T21:19:49.343Z · LW(p) · GW(p)
Owning up to a fault doesn't make it go away.
Congratulations, you have just reduced the proper use of humility to a single proverb. I shall endeavor to go around repeating this.
comment by PhilGoetz · 2009-03-12T06:06:49.716Z · LW(p) · GW(p)
Err... I actually toss around endorsements of "spirituality" in those contexts where doing so seems likely to have positive effects. Naive realism is a supernatural belief system anyway, just a subtler-than-average one. I'll invoke Einstein, Hume and Spinoza as precedents if you wish. Who do you think, by the way, is more likely to convince a theist to sign up for cryonics: a person who says "god is a stupid idea, this is the only way to survive death" or a person who says "I believe in god too, but I also believe in taking advantage of the best available medical technologies"? I'd accept a double-blind study showing that the former worked better, but it's not how I'd bet.
More importantly, I think that the canary function is more valuable than any harm caused by moderate Christianity, especially if combined with a possible vaccine function.
Also, Sam Harris DOES talk about spirituality, and Dennett about free will.
Finally, for what it's worth, we only have one data point for a scientific civilization rising, and it was in the religious West, not the relatively secular China. Weak evidence, but still evidence.
↑ comment by Marcello · 2009-03-12T07:10:30.430Z · LW(p) · GW(p)
Michael Vassar said:
Naive realism is a supernatural belief system anyway
What exactly do you mean by "supernatural" in this context? Naive realism doesn't seem to be anthropomorphizing any ontologically fundamental things, which is what I mean when I say "supernatural".
Now of course naive realism does assume that certain assumptions about reality which are encoded in our brains from the get-go are right, or at least probably right -- in short, that we have an epistemic gift. However, that can't be what you meant by "supernatural", because any theory that doesn't make that assumption gives us no way to deduce anything at all about reality.
Now, granted, some interpretations of naive realism may wrongly posit some portion of the gift to be true, when in fact, by means of evidence plus other parts of the gift, we end up pretty sure that it's wrong. But I don't think this sort of wrongness makes an idea supernatural. Believing that Newtonian physics is absolutely true, regardless of how fast objects move, is a wrong belief, but I wouldn't call it a supernatural belief.
So, what exactly did you mean?
↑ comment by Marcello · 2009-03-12T07:43:27.099Z · LW(p) · GW(p)
Incidentally, I agree that using the term "spirituality" is not necessarily bad. Though I'm careful to try to use it to refer to the general emotion of awe/wonder/curiosity about the universe. To me the word means something quite opposed to religion. I mean the emotion I felt years ago when I watched Carl Sagan's "Cosmos".... To me religion looks like what happens when spirituality is snuffed out by an answer which isn't as wonderfully strange and satisfyingly true as it could have been.
It's a word with positive connotations, and we might want to steal it. It would certainly help counteract the Vulcan stereotype.
Replies from: steven0461, MBlume↑ comment by steven0461 · 2009-03-12T16:57:04.401Z · LW(p) · GW(p)
I question whether awe and wonder about this giant mostly-unstructured human-hostile death trap we call a universe is an appropriate emotion for a rationalist. Morbid fascination, maybe -- Lovecraft and Teller, not Sagan.
Replies from: Eliezer_Yudkowsky, JulianMorrison↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T18:57:44.587Z · LW(p) · GW(p)
The place has potential if it were fixed up a bit. That's what gets me up in the morning.
↑ comment by JulianMorrison · 2009-03-12T17:10:23.001Z · LW(p) · GW(p)
"Wonder" is the emotion that smells a bit off to me. Can you feel that if you are not enamored of mysterious answers?
Replies from: Johnicholas↑ comment by Johnicholas · 2009-03-12T19:00:16.369Z · LW(p) · GW(p)
Yes. See Sense of Wonder for examples.
↑ comment by MBlume · 2009-03-12T08:12:43.465Z · LW(p) · GW(p)
If we could do this, if we could really do this, in a way that is genuine, and unforced, if we could show people that religion has hijacked their deepest needs and that there are better ways to fill those needs, I really think that could be the opening move to winning this thing. I think that could be what finally gets people to pull their fingers out of their ears, stop screaming "can't hear you, you can't make me think!" and maybe, just maybe learn something.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T18:54:33.880Z · LW(p) · GW(p)
I can't think of 100 posts of mine which fit that description, can you? Why yes, this is one of my not-so-hidden agendas.
Replies from: MBlume↑ comment by MBlume · 2009-03-21T05:02:33.340Z · LW(p) · GW(p)
Oh, absolutely. I'd only been an atheist for about a month when you posted Explaining vs. Explaining Away, and I can't tell you what a relief it was to feel that yes, it was going to be alright.
I was quoting "Bad scientist! No poems for you, gnomekiller!" for days after that =)
My religious friends didn't find it as funny though, if I recall. I worry that maybe there's still an activation energy left to deal with -- that the comfort of joy in the merely real doesn't start to become attractive until you've already confronted, to some extent, the fact of atheism.
comment by Annoyance · 2009-03-12T19:34:05.400Z · LW(p) · GW(p)
If you don't care about consistency, there's really no way to be argued into caring about it. Not rationally, anyway, and not even with unconscious logic. It can only be done by "pushing a motivational button", and without a very detailed model of how your mind works, someone else can only do that by flailing about at random.
Self-consistency is the most basic aspect of effective thought.
Replies from: thomblake↑ comment by thomblake · 2009-03-12T20:19:22.180Z · LW(p) · GW(p)
The contrapositive of this:
If you don't want people to be able to easily manipulate you, be irrational and inconsistent.
Is this right?
Replies from: ciphergoth, Annoyance↑ comment by Paul Crowley (ciphergoth) · 2009-03-13T09:23:30.678Z · LW(p) · GW(p)
If you were to try this, you would instead be irrational in a consistent way described by well-known cognitive biases, and therefore unusually easy to manipulate.
Replies from: David_Gerard↑ comment by David_Gerard · 2011-02-21T11:41:51.642Z · LW(p) · GW(p)
How manipulable do you find the determinedly irrational in practice?
Mostly I just find them deeply painful and try to avoid dealing with them except by occasionally poking them with sticks.
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2011-02-21T15:59:28.284Z · LW(p) · GW(p)
Depends on what I'm trying to get them to do, and how well I understand the framework of their irrationality. In many cases, I find them highly manipulable; in other cases, not at all.
↑ comment by wedrifid · 2011-02-21T17:24:04.496Z · LW(p) · GW(p)
How manipulable do you find the determinedly irrational in practice?
Extremely. But when people are already acting against their own interests it is sometimes more convenient to exploit their current practice without trying to influence them significantly.
↑ comment by Annoyance · 2009-03-12T20:39:26.965Z · LW(p) · GW(p)
Not exactly. If you want to avoid being manipulated, keeping your motivational structure secret is a good idea. Inconsistency does make it harder for others to anticipate your motivations and positions, but it carries some very high costs.
To the degree that rationality converges on common solutions, it is predictable, and that can be a weakness. But the convergence is valuable enough that it's usually worth just putting in some protections.
Replies from: David_Gerard↑ comment by David_Gerard · 2011-02-21T11:44:08.960Z · LW(p) · GW(p)
If you want to avoid being manipulated, keeping your motivational structure secret is a good idea.
FWIW, the Church of the SubGenius advocates the "Nameless Mission. To name it is to doom it." I have found this a useful concept when there's something driving me and I've yet to find its true name, but I like its results so far. I also gave the concept to a friend who's sick of their field (they've basically won that game) and about to change their entire career; they aren't quite sure what to pursue next but are accumulating new hobbies of great interest.
comment by HughRistik · 2009-03-12T17:40:32.700Z · LW(p) · GW(p)
That's what the dead canary, religion, is telling us: that the general sanity waterline is currently really ridiculously low. Even in the highest halls of science.
From the standpoint of rationalists, this kind of thinking looks insane. Yet it is just species-typical thinking with a hardwired basis, as others in this thread have observed. The kinds of cognitive biases that lead to religion, especially social ones such as group conformity and social proof, were/are adaptive. This is how sane human minds work. It is the rationalists who are the crazy people.
Replies from: Annoyance↑ comment by Annoyance · 2009-03-12T19:35:34.421Z · LW(p) · GW(p)
"This is how sane human minds work. It is the rationalists who are the crazy people."
I must disagree. That is how normal human minds work. Sanity is not at all normal.
'Being normal' is highly overrated, but of course the people who rate it so highly are both normal and crazy, so I expect them to continue praising it.
Replies from: HughRistik↑ comment by HughRistik · 2009-03-12T20:31:31.894Z · LW(p) · GW(p)
There is no such thing as "normal and crazy." Insanity implies some malfunction of the brain (hence the study of psychopathology is often called "abnormal psychology"). The brains of normal people, including those who believe in religion, are not malfunctioning at all. In fact, they are "working as intended" (strictly speaking, evolution doesn't "intend" anything; the point is that the hardware developed by evolution is showing no defects).
Rationalists are also perfectly sane (I only called them crazy jokingly, which is how they look to normal people), and their brains are also "working as intended." They just have a less typical phenotype.
comment by CarlShulman · 2009-03-12T04:43:15.577Z · LW(p) · GW(p)
You linked to your Dark Side Epistemology post, which is all about the generally anti-rationalist propaganda generated by organizations with bogus claims to shield, but you avoid mentioning here that a reduction in religion would thus raise the waterline at least somewhat. Why?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-12T04:49:18.214Z · LW(p) · GW(p)
Because religions aren't the only ones who need or generate the Dark Side. They've just got the largest, most sophisticated flocks of paid Sith.
It's true that if religion vanished the derivative of the Dark Side would level off somewhat, but the existing Dark Side would have already been generated.
To put it another way, suppose that all religion vanished from universities tomorrow in a surgical intervention. Would the general quality of science go up? How much? Why?
Replies from: JulianMorrison, CarlShulman, Cameron_Taylor↑ comment by JulianMorrison · 2009-03-12T10:13:22.886Z · LW(p) · GW(p)
I don't even think religion is the worst / most influential offender. That would probably be fiction. People marinate in the stuff, and it's cram full of magical thinking even when omitting the overtly supernatural.
↑ comment by CarlShulman · 2009-03-12T06:32:24.380Z · LW(p) · GW(p)
Restricting the effect to universities prevents me from drawing on a key factor I had in mind -- external pressures that shape scientific research -- since in popular opinion, support for science competes with religion on average. More popular support for and deference to science could have significant positive effects (depending on your model).
Still, the Dark Side needs to be sustained, and when its emotional underpinnings are removed a superstructure can collapse suddenly. Moreover, the negative affect for their previous faith-based religions would tend to make ex-believers more susceptible to the idea of reliance on logic and evidence.
↑ comment by Cameron_Taylor · 2009-03-12T19:48:08.092Z · LW(p) · GW(p)
To put it another way, suppose that all religion vanished from universities tomorrow in a surgical intervention. Would the general quality of science go up? How much? Why?
I would expect the general quality of science to go down. Humans, as they currently exist, will have an outlet for their dark side. This takes the form of an institution of some sort. Eliminating a sink for some of the most 'Dark' personalities just pours more Sith into scientific institutions.
comment by Suh_Prance_Alot · 2018-04-14T13:33:58.746Z · LW(p) · GW(p)
Are there any specific strategies/plans to get Rationalists into positions of socio-political power?
Could a targeted approach be used to reach people who are already in such positions, like say the pope, for a ripple effect?
comment by gilch · 2016-01-06T00:07:10.739Z · LW(p) · GW(p)
When you consider it—these are all rather basic matters of study, as such things go. A quick introduction to all of them (well, except naturalistic metaethics) would be... a four-credit undergraduate course with no prerequisites?
I wonder what it would take to make and run a MOOC? I know that MOOC software has been open sourced (e.g. OpenMOOC, edX). If a single undergrad course could have that big an impact on the world, isn't it worth doing?
comment by MarkusRamikin · 2014-10-21T16:23:09.860Z · LW(p) · GW(p)
First link in article not worky. Black Belt Bayesian's blog got wiped out?
Replies from: sudoLife↑ comment by sudoLife · 2021-08-26T09:06:07.673Z · LW(p) · GW(p)
Yeah, it seems to have been. Proposed edit: https://web.archive.org/web/20110718032458/http://www.acceleratingfuture.com/steven/?p=3
comment by Cameron_Taylor · 2009-03-12T18:18:04.475Z · LW(p) · GW(p)
Outlaw debating class.
comment by groovymutation · 2012-01-26T14:40:42.024Z · LW(p) · GW(p)
Just a few points:
1) I would not call atheism "rationality." Atheism requires a certain degree of blind faith, and accepting lack of evidence for religion as evidence of not-religion, which is not in concordance with the principles of rationality. Perhaps "agnostic atheism" would be a more reasonable perspective. "There is a god" and "there is no god" are both non-falsifiable assertions, and I can think of few things that I would accept as corroborating evidence thereof. You cannot deduce atheism from the fact that religion does not seem to be correct. You can, however, reasonably state that while there might be a god, there probably is not.
2) Occam's razor is not necessarily applicable sui generis. I would direct you to Elliott Sober's excellent paper on the subject, "Let's Razor Occam's Razor." It is a useful argument against the existence of a god, but certainly not definitive.
Replies from: thomblake, TheOtherDave↑ comment by thomblake · 2012-01-26T16:17:28.108Z · LW(p) · GW(p)
To a rationalist, "Thor doesn't exist" and "Thor almost certainly doesn't exist" are pretty much equivalent, and generally caused by "I have no good evidence that Thor exists and a low prior on complicated hypotheses like Thor".
Replies from: groovymutation↑ comment by groovymutation · 2012-01-26T18:16:50.417Z · LW(p) · GW(p)
I both agree and disagree with you.
I would say they are equivalent "for all practical purposes," but that qualifier is necessary. A low prior on a complicated hypothesis is not as relevant as one might think due to the washing out of priors (which is how we can have subjective priors such as "deciding" that the existence of Thor has low prior probability).
And as I said in another comment, you cannot call a probability of 0.9998 and a probability of 1 equivalent. If they were equivalent, your probability would = 1. If it is not, you cannot be rationally justified in making such an absolute statement.
Replies from: Kasseev, None↑ comment by Kasseev · 2012-02-20T06:32:02.356Z · LW(p) · GW(p)
I would point you over to the "Fallacy of Grey" if we are beginning to split hairs over the likelihood of Thor existing. You are right that nothing that is non-tautological can ever be proven with absolute certainty, and this category includes the existence of God and a predicted sunrise every day - however, to function rationally we must give claims the appropriate level of credence that supporting evidence demands; we must not turn a critique against binary black-white causality into a unitary embrace of the grey.
Viewed in this light, I don't really think "gnostic" atheism has much use as a term, as it is nonexistent just as "gnostic" anything-non-tautological is nonexistent. Thus quibbling that the neo-atheists are in fact foolish gnostics is really a little petty, when really just about every rationalist falls into the agnostic category as a matter of principle; which still allows us to maintain a category of very very high unbelief in things like God.
↑ comment by [deleted] · 2012-03-27T22:05:45.650Z · LW(p) · GW(p)
You're right, but that doesn't mean what you're saying is useful.
At some point, I have to make decisions, including what to say I believe. It is far less confusing to say 'I believe that god does not exist' than to say 'I hold a vanishingly low belief in the class of things you refer to when you say 'God.''
Saying that you believe a thing is true, or don't believe a thing is true, is not the same thing as saying that your probability estimates are 1 and 0, respectively. That would mean that no evidence would be adequate to change your beliefs, because you have infinite confidence, and that would obviously be stupid. The implication you're inferring from the term 'atheist' doesn't even really logically follow from the definition. I think the class of atheists that actually do mean that they slavishly and unquestioningly disbelieve in God is small enough to be safely disregarded. Most of the people who call themselves atheists simply hold very very low beliefs in God.
↑ comment by TheOtherDave · 2012-01-26T15:34:47.022Z · LW(p) · GW(p)
It's worth noting that LW generally eschews an exclusive concern with falsification in favor of a concern with how various observations properly affect an observer's confidence levels in various statements.
The canonical example around here is the entirely undetectable dragon in my garage: if I don't observe phenomena that I would expect (probabilistically) to observe if there were a dragon in my garage, that doesn't prove there's no dragon, but it is grounds for me to reduce my confidence in the existence of such a dragon. Enough "missing" evidence causes my confidence in the dragon's existence to drop to negligible levels.
A similar process can cause my confidence in any particular God's existence to drop to negligible levels.
And "there's no X" is a pretty standard way for humans to express the high confidence that there's no X.
Replies from: groovymutation↑ comment by groovymutation · 2012-01-26T18:12:08.660Z · LW(p) · GW(p)
I think we share the same perspective on this issue. My main point regarding the existence/non-existence of a god was that one cannot say with a P of 1 that there is certainly no god. In fact, such an assertion seems to me to be absurd. However, given other evidence, we can have very low confidence in the existence of a god and very high confidence in the non-existence of a god.
However, as a scientist and philosopher of science, I cannot accept that missing evidence implies any one alternative hypothesis. This was one of the many criticisms of Karl Popper's falsification theory: finding evidence that says research program T is not true may imply that not-T is true, or may imply that there is an auxiliary hypothesis in T that needs to be adjusted but that the other assumptions surrounding that theory are still acceptable, etc. If T is false and not-T is true, there is no immediately obvious standard by which to choose which of the theoretically infinite alternative theories is true.
This is getting more theoretical, though, and does not really apply to a binary problem like existence/nonexistence of god, so I'm afraid I've gone off on a bit of a tangent here. At base, I agree with you that the non-existence of god seems to have a probability very close to 1. However, it is not 1, and I would be loath to say it is close enough to 1 for the difference to be "negligible." If your probability is 1 (or 0), then it is 1 (or 0). If it is close to but not quite 1 (or 0), you are not justified in making an absolute statement.
As Lakatos wrote, it is not irrational to continue working within a degenerating research program, for such programs have been seen historically to have comebacks when new evidence is discovered. Personally, however, I'd place my money on the non-existence of god (which seems, to me, to be the progressive research program).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-26T18:47:07.133Z · LW(p) · GW(p)
It's a truism around here that you can't say anything with P=1, on pain of being unable to subsequently change your mind given new evidence. Here's a post along those lines.
Agreed that missing evidence doesn't privilege any single alternative hypothesis, except in cases of strictly binary propositions. However, insofar as T2 and T1 are relevantly similar, events that lower my confidence in T1 will lower my confidence in T2 as well, so missing evidence can legitimately anti-privilege entire classes of explanation. That said, it's important not to generalize over relevant dis-similarities between T2 and T1.
As far as "negligible"... well, enough "missing" evidence causes my confidence in a proposition to drop to a point where the expected value of behaving as though it were true is lower than the expected value of behaving as though it were false. For most propositions this is straightforward enough, but is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)... human brains are not well-calibrated enough to perform sensible expected value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I'm justified in being skeptical about even performing an expected value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
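To put a number on the P=1 point above: in the odds form of Bayes' rule, a probability of exactly 1 corresponds to infinite prior odds, so no finite amount of evidence can ever move it, whereas a merely very high probability like 0.9998 moves easily under strong contrary evidence. A minimal sketch (the function and the numbers are illustrative assumptions, not anything from the thread):

```python
def update(prior, likelihood_ratio):
    """Posterior after one piece of evidence, via the odds form of Bayes' rule.
    likelihood_ratio = P(evidence | hypothesis) / P(evidence | not-hypothesis)."""
    if prior >= 1.0:
        return 1.0  # infinite prior odds: no finite evidence can move certainty
    if prior <= 0.0:
        return 0.0  # likewise for certainty of falsehood
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(update(0.9998, 1e-6))  # ~0.005: strong contrary evidence moves a high-but-not-1 prior
print(update(1.0, 1e-6))     # 1.0: a "probability 1" belief never changes its mind
```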
Replies from: groovymutation, thomblake↑ comment by groovymutation · 2012-01-26T18:53:00.787Z · LW(p) · GW(p)
All right, I think I concede your point. (Not to say I will stop thinking about this issue, of course -- have to be in a constant state of "crisis of belief" &c.) I also think we agree fundamentally about a great many of these points you made in this comment to begin with and perhaps I did not verbalize them coherently -- such as "behaving for all practical purposes as if a given T were true" and so on. The majority of your last paragraph is new to me, however. Thanks.
↑ comment by thomblake · 2012-01-26T18:52:20.919Z · LW(p) · GW(p)
For most propositions this is straightforward enough, but is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)... human brains are not well-calibrated enough to perform sensible expected value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I'm justified in being skeptical about even performing an expected value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
The inverted Pascal's Wager.
or
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program.
comment by Jonnan · 2009-03-12T22:07:39.958Z · LW(p) · GW(p)
Begs the question - I would posit that the minimum assumption for any form of 'spirituality' is body/mind duality, and your proposed 'better' definition of insanity presupposes the result that there is no axiomatic, logical system that can result in body/mind duality being either true or undecidable.
However, so long as it is even undecidable, then a person who uses it as an axiom for further thought is no more 'insane' than someone who explores the logical consequences of parallel lines crossing.
Now, Religion posits not only body-mind duality, but a number of other assumptions, and those other assumptions are generally quite amenable to debunking. But I suspect dualism itself qualifies as undecidable, which would place it outside the pale of propositions that one can explicitly deny while maintaining a cohesive logical structure.
comment by aceofspades · 2012-07-02T05:02:52.317Z · LW(p) · GW(p)
Arguing about the existence of a god is like arguing about free will. The only worthwhile argument concerns differences in anticipated experience, notably things like "Does prayer work?".
Replies from: nshepperd↑ comment by nshepperd · 2012-07-02T06:15:57.212Z · LW(p) · GW(p)
The world would be a very different place if, say, Thor existed and took a strong interest in the affairs of the human world.
Replies from: aceofspades↑ comment by aceofspades · 2012-07-05T18:29:37.702Z · LW(p) · GW(p)
If his interest resulted in actions that would provide evidence of his existence, then yes. Also, if libertarian free will existed then the world would be an even more different place.
Replies from: nshepperd↑ comment by nshepperd · 2012-07-06T07:05:31.510Z · LW(p) · GW(p)
I'd argue that libertarian free will is an incoherent concept, and therefore there is no counterfactual world where it "exists", or if there is, that it is identical to any nondeterministic world without libertarian free will. On the other hand the existence of Thor might be exceedingly improbable, but it's not incoherent.
Replies from: aceofspades↑ comment by aceofspades · 2012-07-19T23:28:04.391Z · LW(p) · GW(p)
In order to dissolve the disagreement: I think the first sentence of my original comment here was ill-posed. It makes sense to me because it serves as a convenient pointer for the type of "religion" espoused by a significant proportion of people which involves "belief" and "faith" and does not actually contain any differences in anticipated experience from a non-religious position. However, given only the original sentence it does not mean much. And even with elaboration it is pretty much going to be tautological. As to my second post I expect that contemplating that particular "counterfactual" is going to be along the lines of considering the "counterfactual" under which 2+2=5 which I do not anticipate being a particularly enlightening discussion based on what I've already read on the subject.
comment by LazyDave · 2010-01-19T00:45:38.007Z · LW(p) · GW(p)
I would dispute the (implied) part of this post that suggests removing religion would necessarily be a good thing. Besides being a way to explain rainbows and earthquakes and whatnot, it is also a "solution" to the prisoner's dilemma. For explanations of physical phenomena, religion is no longer needed. But the "morality" problem is still there. Make everyone rational, and they will do the rational thing, i.e. defect when faced with a prisoner's dilemma (or tragedy of the commons, or whatever). Getting rid of religion may certainly have its benefits, but I would not be too sure it would be an overall good thing.
Replies from: orthonormal, mwengler↑ comment by orthonormal · 2010-01-19T05:22:15.247Z · LW(p) · GW(p)
Make everyone rational, and they will do the rational thing, i.e. defect when faced with a prisoner's dilemma (or tragedy of the commons, or whatever).
Also, have you spent any time searching for a third alternative before deciding that religion is the only thing that can keep people from destroying each other?
Replies from: LazyDave↑ comment by LazyDave · 2010-01-23T22:30:11.491Z · LW(p) · GW(p)
I have wondered for many years what a good alternative would be, and have not been able to come up with one. Now that in itself doesn't mean anything; just because I can't think of one does not mean there isn't one. But given the antipathy of most on this site to religion (as evidenced by my comment getting dinged 4 times for merely suggesting that religion, though irrational, may be socially beneficial), I would think there would be posts upon posts explaining better alternatives. I have not seen them.
It seems to me that many rationalists hate religion so much that they are loath to admit it has any benefits at all, even if those benefits have NOTHING to do with the original reason for the loathing. It reminds me of a few years ago when someone said something to the effect of "Hitler's army had great uniforms" (I don't remember the exact details). Of course, the person had to end up apologizing a million times over, lost her job, etc., even though she was in no way endorsing Hitler's horrendous actions.
Again, I am not even saying that religion is necessarily worth keeping. But the unquestioned assumption that it would be desirable to get rid of it does not seem to get a lot of scrutiny around here.
Replies from: Morendil, Alicorn↑ comment by Morendil · 2010-01-23T22:47:14.458Z · LW(p) · GW(p)
If you are going to take religion's effects into account as well as the truth of it, you need to look at both sides of the ledger, and weigh the ills it brings against the good. No cherry-picking.
Replies from: LazyDave↑ comment by LazyDave · 2010-03-23T22:43:11.512Z · LW(p) · GW(p)
Morendil, I absolutely agree. It may very well be that the ills outweigh the good (though I happen to personally doubt it). I'm just saying that the weighing should be done independently of the rationality of religion (which I think we can all agree is about 0). I just fear that it is too easy for there to be a negative halo effect around religion, which is understandable seeing that this is a forum about rationality.
↑ comment by Alicorn · 2010-01-23T22:37:05.675Z · LW(p) · GW(p)
We think that religions are false, and a shared priority of Less Wrong denizens is to believe things that are true instead. I'll readily admit that religion has some good effects. Many people find it comforting; it's inspired great works of art and music and architecture; it does a lot of work to funnel money to charitable causes, some of which are very helpful; it encourages community-building; and it has historically served as a cultural touchstone to enable the development of some very powerful iconography and tropes.
It's still false.
If you like the good things about religion, there are alternatives (although most of them only work piecemeal). For instance, there's Ethical Culture, which fills in the community gap a departing religion can leave.
Replies from: LazyDave, Vladimir_Nesov↑ comment by LazyDave · 2010-01-23T22:58:50.349Z · LW(p) · GW(p)
We think that religions are false, and a shared priority of Less Wrong denizens is to believe things that are true instead
You'll get no argument from me that religions are false. You will get practically no argument from me that it makes sense to want to believe things that are true. What I question is, is it always rational to make others believe things that are true? If I leave my lights on when I leave the house so that would-be robbers think I am home when I am not, I am making a rational decision to make others believe something that is false.
If I am playing the Prisoner's Dilemma with someone (just once, so no tit-for-tat or anything), and I have the choice of making my opponent either act rationally or irrationally, the rational thing for me to do is make him act irrationally.
Replies from: Vladimir_Nesov, MichaelGR, Alicorn↑ comment by Vladimir_Nesov · 2010-01-23T23:05:01.552Z · LW(p) · GW(p)
See Bayesians vs. Barbarians. You may need the following posts for the background:
- Newcomb's Problem and Regret of Rationality
- Newcomb's Problem standard positions
- The True Prisoner's Dilemma
↑ comment by LazyDave · 2010-01-23T23:37:04.134Z · LW(p) · GW(p)
Thanks; the Bayesians vs. Barbarians post is exactly the kind of thing I was looking for. I'll have to read some of the posts that it links to (as well as re-read the background posts you referred to; haven't read them in a while), as the way it stands I still think the Barbarians would win.
↑ comment by MichaelGR · 2010-01-24T05:22:25.340Z · LW(p) · GW(p)
If I am playing the Prisoner's Dilemma with someone (just once, so no tit-for-tat or anything), and I have the choice of making my opponent either act rationally or irrationally, the rational thing for me to do is make him act irrationally.
↑ comment by Alicorn · 2010-01-23T23:13:51.094Z · LW(p) · GW(p)
is it always rational to make others believe things that are true?
This depends on your values. If chief among them is "honesty", and you caveat the "make others believe things that are true" with a "for the right reasons" clause, then probably, yeah. If honesty has to compete with things like keeping your property, maybe not.
If I am playing the Prisoner's Dilemma with someone (just once, so no tit-for-tat or anything), and I have the choice of making my opponent either act rationally or irrationally, the rational thing for me to do is make him act irrationally.
I'm not sure what the content of "making your opponent behave (ir)rationally" is supposed to be. It's certainly not an uncontroversial tidbit of received wisdom that the rational thing to do in the Prisoner's Dilemma is to defect, which is what you seem to imply.
Replies from: LazyDave↑ comment by LazyDave · 2010-01-23T23:20:14.884Z · LW(p) · GW(p)
I'm not sure what the content of "making your opponent behave (ir)rationally" is supposed to be. It's certainly not an uncontroversial tidbit of received wisdom that the rational thing to do in the Prisoner's Dilemma is to defect, which is what you seem to imply.
Exactly, if I was able to make him act irrationally, he would not defect, whereas I would. And if the definition of rationality is that it makes you win, then it can be perfectly rational to have others act irrationally (i.e. believe wrong things).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-23T23:25:30.286Z · LW(p) · GW(p)
If you both cooperate, instead of you both defecting, you'd both be better off, which is a more rational (more winning) outcome. Thus, making "cooperation" a synonym for "irrational" will irk people around here. (Of course if you defect and the other player cooperates, you'd have the best possible payoff.)
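For readers who want the payoff structure being referenced spelled out, here is a minimal sketch with conventional textbook numbers (the specific values are an illustrative assumption, not from the thread): defection is the better move for each player taken alone, yet mutual cooperation leaves both players better off than mutual defection.

```python
# Illustrative one-shot Prisoner's Dilemma payoffs (conventional textbook values, assumed here).
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect
    ("D", "C"): (5, 0),  # I defect, they cooperate
    ("D", "D"): (1, 1),  # mutual defection
}

# Defecting is better for me no matter what the other player does...
assert payoffs[("D", "C")][0] > payoffs[("C", "C")][0]  # 5 > 3
assert payoffs[("D", "D")][0] > payoffs[("C", "D")][0]  # 1 > 0
# ...and yet both players prefer (C, C) to (D, D).
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]  # 3 > 1
assert payoffs[("C", "C")][1] > payoffs[("D", "D")][1]  # 3 > 1
```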
↑ comment by Vladimir_Nesov · 2010-01-23T22:55:20.021Z · LW(p) · GW(p)
It's still false.
He didn't claim that religion isn't false.
Replies from: Alicorn↑ comment by Alicorn · 2010-01-23T22:57:25.780Z · LW(p) · GW(p)
I didn't say that he so claimed.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-23T22:59:38.096Z · LW(p) · GW(p)
Yes you didn't, but the discussion is at cross-purposes.
↑ comment by mwengler · 2012-01-26T19:50:54.000Z · LW(p) · GW(p)
Russ Roberts' latest podcast with Dave Rose touches on this. My paraphrase: utilitarian morality does not lead to the greatest good. It works well in small groups (they used 25 people as a guideline) but fails in very large groups. This makes sense: for large-group cooperation, coordinators need to correctly predict the motivations of the people involved. If people can be made "mechanical" in the sense that they will do what they are told reliably (because they have a rule morality that tells them to), then larger organized efforts can succeed. If people defect locally from prisoners' dilemmas, the rate at which this happens creates an upper limit on the size of effective cooperative organizations.
I would say that religion is certainly a source of rule-based morality on net. Rationalists (tm) pursuing their personal utility are often ready to defect from rules that they conclude come from outside their utility functions, or are at odds with their utility functions. Since we have a rather larger group of us on the planet than ever before, a path for removing religion might need to be carefully designed not to bring about the collapse of a lot of cooperative endeavors, not to bring about a collapse from rule-based to Rational (tm) utilitarian behavior on the part of broad subpopulations.
Since I may not comment on the David Rose Russ Roberts podcast anywhere else, I will say here that I sure think it is at minimum ironic that the argument against utilitarianism as a moral system is that it is not as productive as rule-based moral systems can be. That is, the argument against utilitarianism is that it does not produce maximum utility. That is pretty much the only kind of argument against utilitarianism that might ever succeed with me.
comment by Amaroq · 2010-07-11T12:13:05.535Z · LW(p) · GW(p)
The answer is so simple, I don't understand why you guys are so strained about it.
Teach people to base their beliefs on reality. Teach them to systematically check their beliefs to make sure they're connected to reality, via induction and conceptual reduction. (If you want to prove that an abstract concept is connected to reality, you break it apart into its constituent concepts. Keep doing this until you've broken the abstract concept up into 1st-level concepts that represent percepts. Once you get from the abstract concept to the perceptual level, your idea has been proven to be connected to reality.)
The problem underlying religion is that people think faith is an acceptable source of knowledge. Teach them that all knowledge must be derived from observation, and you can undercut religion.
"Faith in the supernatural begins as faith in the superiority of others." -John Galt
Replies from: RobinZ
comment by mtraven · 2009-03-12T07:35:04.650Z · LW(p) · GW(p)
The naivety and arrogance of this post are somewhat breathtaking. Religion is (or can be) primarily a set of practices and attitudes, not a collection of propositions, and is perfectly compatible with rationality. In any case, it is one of those things that humans do, and efforts to get rid of it would seem doomed, as can be readily seen by the way people here (atheists, transhumanists, rationalists, etc.) all create pseudo-religions to replace the ones they reject. Religion is a universal part of human nature; you can't be a good reasoner without understanding how humans work; so you must at least understand what religion is. You can start here.
Replies from: MBlume, Rings_of_Saturn↑ comment by MBlume · 2009-03-12T08:07:08.442Z · LW(p) · GW(p)
Religion, as practiced today, is most commonly a collection of propositions, entirely incompatible with rationality.
If religion fills certain needs (and in fact I won't argue that it does) we will find more constructive ways to fill those needs without lying to people.
If you really think that religion isn't about god, then honestly, I don't think we disagree that much -- I don't know why you're starting out with insults flying.
↑ comment by Rings_of_Saturn · 2009-03-12T21:42:51.842Z · LW(p) · GW(p)
I think you are mistaking ambition and the spirit of betterment for "naivety and arrogance."
You say "it is one of those things that humans do and efforts to get rid of it would seem doomed," but that was once applicable to slavery. It may be difficult, quixotic, and almost comically ambitious. But the only thing that would make it "doomed" would be an attitude that says it must be.
MBlume's comment below summarizes nicely how we as a society might go about that (as does this whole thread).
Incidentally, I don't think this comment is really so bad; it's within reasonable argument, if only it didn't come blasting out of the gates with insults.