Is it rational to be religious? Simulations are required for an answer.
post by Aleksei_Riikonen · 2010-08-11T15:20:04.058Z · LW · GW · Legacy · 71 comments
What must a sane person[1] think regarding religion? The naive first approximation is "religion is crap". But let's consider the following:
Humans are imperfectly rational creatures. Our faults include not being psychologically able to maximally operate according to our values. We can, for example, suffer burnout if we try to push ourselves too hard.
It is thus important for us to consider what psychological habits and choices contribute to our being able to work as diligently for our values as we want to (while remaining mentally healthy). It is a theoretical possibility, a hypothesis that could be experimentally studied, that the optimal[2] psychological choices include embracing some form of Faith, i.e. beliefs not resting on logical proof or material evidence.
In other words, it could be that our values imply that Occam's Razor should be rejected in some cases, since embracing Occam's Razor might mean missing out on opportunities to manipulate ourselves psychologically into being more of what we want to be.
To a person aware of The Simulation Argument, the above suggests interesting corollaries:
- Running ancestor simulations is the ultimate tool for finding out what (if any) form of Faith is most conducive to our being able to live according to our values.
- If there is a Creator and we are in fact currently in a simulation being run by that Creator, it would have been rather humorous of them to create our world such that the above method would yield "knowledge" of their existence.
[1]: Actually, what I've written here assumes we are talking about humans. Persons-in-general may be psychologically different, and theoretically capable of perfect rationality.
[2]: At least for some individuals, not necessarily all.
71 comments
Comments sorted by top scores.
comment by cousin_it · 2010-08-11T17:37:05.384Z · LW(p) · GW(p)
First I was like, "oh no, another one claims that faith could be rational without showing why it is". Then I parsed the simulation part and clicked "upvote" because it showed me a new and scary possibility.
In the year 2020, cousin_it downloads some software for running predictor sims. He wants to optimize his future conduct - test which particular minor mind-hack would be the most beneficial, given his individual psychological makeup. So he runs many ancestor sims of himself poisoned with different irrational mind-hacks to see who wins in the past. I am one such sim, poisoned in an unknown manner; what should I do?
Times like these I regret that I can't write good sci-fi in English because it's not my first language.
Replies from: thomblake, DanielLC, jimrandomh↑ comment by thomblake · 2010-08-11T17:51:01.805Z · LW(p) · GW(p)
Times like these I regret that I can't write good sci-fi in English because it's not my first language.
I find that hard to believe; your non-native-speaker status is not apparent from your comments.
Clarification: I find it hard to believe that's a limitation on writing good sci-fi, not that English is not your first language.
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-11T21:29:40.018Z · LW(p) · GW(p)
Writing quality fiction takes more facility with language than writing quality nonfiction (like posts) does. (Not that non-nativeness is an absolute barrier: English was Nabokov's third language, IIRC.)
A cynic could observe that readers of genre fiction are comparatively less demanding in this respect, though.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2010-08-12T07:44:14.665Z · LW(p) · GW(p)
Very off topic, but I've actually often wondered why there don't seem to be any non-native speakers writing commercial fiction in English, given how massively larger the English-speaking market is compared to that of a lot of the smaller European languages. Nabokov is literally the only example I can think of.
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-12T07:58:15.603Z · LW(p) · GW(p)
Well, fiction writing generally isn't the sort of thing one enters into for the wonderful market. You do it because you love it and are somehow good enough at it that it can pay the bills. Probably you write your first novel in your spare time with very low expectations of it ever being published.
So why write in anything but your favorite language? And while, proficiency-wise, people like Nabokov and Conrad exist, chances are that you aren't one of them. (That said, there are probably more non-native writers of note than you think. How many of my favorite authors have red hair? I have no idea.)
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2010-08-12T10:17:39.820Z · LW(p) · GW(p)
Thing is, I read almost no fiction in Finnish and quite a lot in English. There isn't much of a tradition of speculative fiction in Finnish that isn't just copying stuff already done in English. So if I were to write a SF or a fantasy story, I'd seriously consider whether I could do it in English, because for me those kinds of stories are written in English and then maybe poorly translated into Finnish.
I'm sure few people match up to Nabokov or Conrad (whose non-nativeness I didn't know about), but I find it odd that I don't know of any contemporary writers who are even trying to write in English without being native speakers. I'm sure there are ones I don't know about, so any examples of currently active published non-native English fiction writers are welcome.
Replies from: gwern↑ comment by gwern · 2010-08-12T10:56:57.425Z · LW(p) · GW(p)
because for me those kinds of stories are written in English and then maybe poorly translated into Finnish.
Sounds like an opportunity! I wonder if it would be more valuable to translate (with the royalties that implies) or to just rip off?
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2010-08-12T12:52:29.717Z · LW(p) · GW(p)
An opportunity for what? Genre literature is already translated into Finnish, and the publishers with something that isn't Harry Potter or Lord of the Rings are mostly able to stick around, but probably aren't making big profits. Finnish-written SF or fantasy is mostly crap and sinks without a trace. A good indication of the general level of quality is that for the first 20 years of the Atorox prize for the best Finnish SF short story, two outlier authors won ten of the 20 awards. No one has even been able to earn a full-time living writing SF in Finnish, not to mention growing rich.
Of course there's nothing preventing someone from writing stuff on par with Ted Chiang and Jeff VanderMeer in Finnish, but why bother? The book could be a cult classic but probably not a mainstream hit, and there aren't enough non-mainstream Finnish-speaking buyers for a book to earn someone a living.
I don't even know who buys all the translated Finnish SF books. I've read a bunch of those from the library, but almost all of the books I've bought have been in English. Why bother with translations that are both more expensive than the original paperbacks and have clunkier language?
↑ comment by jimrandomh · 2010-08-11T18:11:08.026Z · LW(p) · GW(p)
In the year 2020, cousin_it downloads some software for running predictor sims. He wants to optimize his future conduct - test which particular minor mind-hack would be the most beneficial, given his individual psychological makeup. So he runs many ancestor sims of himself poisoned with different irrational mind-hacks to see who wins in the past. I am one such sim, poisoned in an unknown manner; what should I do?
I have precommitted as strongly as I can to never run simulations of myself which are worse off than the version the simulation was based on. This might fall out as a consequence of UDT, but with a time-varying utility function I'm not really sure.
In general, self-copying and self-simulation require extreme care. They might be able to affect subjective experience in a way that goes back in time. The rules of subjective experience, if any, are non-transferrable (you can't learn them from someone else who's figured them out, even in principle) and might not be discoverable at all.
Replies from: Will_Newsome, cousin_it↑ comment by Will_Newsome · 2010-08-12T01:16:14.237Z · LW(p) · GW(p)
Humans can't easily precommit to anything at all, and even if they could, it'd be incredibly stupid to try without thinking about it for a very very long time. I'm surprised at how many people don't immediately see this.
↑ comment by cousin_it · 2010-08-11T18:58:51.322Z · LW(p) · GW(p)
I don't believe your decision follows from UDT. If you have a short past and a long future, knowledge gained from sims may improve your future enough to pay off the sims' suffering.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-08-11T19:28:06.979Z · LW(p) · GW(p)
This is firmly in the realm of wild speculation and/or science fiction plot ideas. That said -
You're right that it does not follow from UDT alone. I do think it follows from a combination of UDT with many common types of utility functions; in particular, if utility is discounted exponentially with time, or if the sims must halt and being halted is sufficiently bad.
A lot depends on what happens subjectively after the simulation is halted, or if there are sufficient resources to keep it going indefinitely. In the latter case, most simulated bad things can be easily made up for by altering the simulated universe after the useful data has been extracted. This would imply that if you are living in a sim created by your future self, your future self follows UDT, and your future self has sufficient resources, you'll end up in a heaven of some sort. Actually, if you ever did gain the ability to run long-duration simulations of your past selves, then it seems like UDT implies you should run rescue sims of yourself.
Simulations that have to halt after a short duration are very problematic, though; if you anticipate a long life ahead of you, then your past selves probably have a long life ahead of them too, which would be cut short by a sim that has to halt. This would probably outweigh the benefits of any information gleaned.
comment by knb · 2010-08-11T16:59:53.338Z · LW(p) · GW(p)
I think the mistaken assumption here is that you can actually choose to have faith. Certainly you can choose to repeat the words "I have faith". You can even say those words inside your head. You can even attend religious services. That is not the same as actually believing in your religion.
I think this essentially is what Orwell called "Doublethink", and it seems to explain much of the religious behavior I personally have seen.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T17:20:08.402Z · LW(p) · GW(p)
Introspectively, it seems clear to me that it is possible to choose to have faith. It might be, though, that what this "faith" is is a more complicated question than it initially seems. But I can clearly imagine adopting something that very much feels like, and would seem to work like, "faith".
(Might as well now mention that I've been an atheist all my life, and still am. I don't currently have any faith I know of, except perhaps the thing about a time-continuous self.)
Replies from: knb↑ comment by knb · 2010-08-11T17:45:47.331Z · LW(p) · GW(p)
I "chose to have faith" when I had a crisis of faith as a kid. However, after my crisis of faith I no longer actually made predictions based on my faith-beliefs.
"Choosing faith" and actually believing something are very different.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T18:05:46.242Z · LW(p) · GW(p)
Not making predictions based on faith-beliefs sounds just perfect for what I had in mind here. Maybe it can be characterized as "Doublethink" or "belief in belief (without the actual belief)", but if so, that's fine.
The goal would not be to use Faith to make predictions, only to psychologically manipulate oneself.
Replies from: orthonormal↑ comment by orthonormal · 2010-08-11T19:42:06.272Z · LW(p) · GW(p)
The goal would not be to use Faith to make predictions, only to psychologically manipulate oneself.
Relevant LW posts:
comment by thomblake · 2010-08-11T17:01:49.612Z · LW(p) · GW(p)
This post does not seem to contribute much. As nawitus pointed out, the distinction between instrumental and epistemic rationality has already been drawn well enough elsewhere on LW.
While it seems obvious that in some cases, a false belief will have greater utility than a true one (I can set up a contrived example if you need one), it's a devil's bargain. Once you've infected yourself with a belief that cannot respond to evidence, you will (most likely) end up getting the wrong answer on Very Important Problems.
And if you've already had your awakening as a rationalist, I'd like to think it would be impossible to make yourself honestly believe something that you know to be false.
Replies from: thomblake, Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T17:10:14.087Z · LW(p) · GW(p)
I did not hypothesize about infecting oneself with a belief that doesn't respond to evidence.
The kind of hypothetical faith I spoke of would respond to evidence; evidence of what is conducive to being able to act according to one's values.
Replies from: thomblake↑ comment by thomblake · 2010-08-11T17:21:04.380Z · LW(p) · GW(p)
I did not hypothesize about infecting oneself with a belief that doesn't respond to evidence.
In that case, the following is misleading:
Faith, i.e. beliefs not resting on logical proof or material evidence.
At any rate, a belief that would not respond to evidence of its truth or falsehood would be sufficiently malign.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T17:25:06.742Z · LW(p) · GW(p)
Ah, yes, you're correct. That was poorly written.
On your latter point, do you really mean that in the thought experiment of someone wanting to shoot your friend and coming to you to ask for directions, you hope you couldn't make yourself honestly believe falsehoods that you could then convey to the assassin thereby misdirecting him without seeming dishonest?
Replies from: thomblake↑ comment by thomblake · 2010-08-11T17:27:56.432Z · LW(p) · GW(p)
On your latter point, do you really mean that in the thought experiment of someone wanting to shoot your friend and coming to you to ask for directions, you hope you couldn't make yourself honestly believe falsehoods that you could then convey to the assassin thereby misdirecting him without seeming dishonest?
Indeed. As long as we're asking for superpowers, I'd prefer to have the ability to defeat the assassin, or to credibly lie to him without believing my lie.
Given that this situation is not going to happen to me, I'd rather keep the ability to distinguish truth from falsehood without epistemically poisoning myself.
comment by mtraven · 2010-08-12T21:27:11.023Z · LW(p) · GW(p)
This post is based on the (very common) mistake of equating religious practice and religious faith. Religion is only incidentally about what you believe; the more important components are community and ritual practice. From that perspective, it is a lot easier to believe that religion can be beneficial. What you think about the Trinity, for instance, is less important than the fact that you go to Mass and see other members of your community there and engage in these bizarre activities together.
There is an enormous blindspot about society in the libertarian/rationalist community, of which the above is just one manifestation.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-13T03:58:03.901Z · LW(p) · GW(p)
No, I very clearly am aware of those two things as separate things. (Though I could have been clearer about this in my post.)
It is not obvious that faith couldn't be psychologically useful, even separately from practice.
comment by TobyBartels · 2010-08-11T23:34:55.259Z · LW(p) · GW(p)
I know some individuals that I believe would be worse off if they were to have a crisis of faith and lose their religion. And while I can't be sure and have never run any tests to find out, I think that they really believe, not just with belief in belief. By the way, none of these are particularly intelligent people.
But I have a hard time imagining someone intelligent and rational who would be better off deceiving themself and gaining faith. Adopting a religion where you are allowed to fake it (like Risto suggests) would almost certainly be better. Sometimes I adopt foma to help me through the day, but I don't take them seriously.
Of course, it's easy to imagine situations where they would be better off mouthing faith, such as kidnap and interrogation by fundamentalist terrorists, or daily life in a lot of societies (past and present) where rationality is undervalued. But I don't think that this is what you mean.
comment by Will_Newsome · 2010-08-12T01:28:11.822Z · LW(p) · GW(p)
I didn't downvote this post, but I can't say I endorse seeing more posts like it. The concept of this post is one of the least interesting in a huge conceptspace of decision theory problems, especially decision theory problems in an ensemble universe. To focus on 'having faith' and 'rationality' in particular might seem clever, but it fails to illuminate in the same way that e.g. Nesov's counterfactual mugging does. When you start thinking about various things simulators might do, you're probably wrong about how much measure is going to be taken up by any given set of simulations. Especially so once you consider that a superintelligence is extremely likely to occur before brain emulations and that a superintelligence is almost assuredly not going to be running simulations of the kind you specify.
Instead of thinking "What kind of scenario involving simulations could I post to Less Wrong and still be relevant?", as it seems to me you did, it would be much better to ask the more purely curious question "What is the relative power of optimization processes that would cause universes that include agents in my observer moment reference class to find themselves in a universe that looks like this one instead of some other universe?" Asking this question has led me to some interesting insights, and I imagine it would interest others as well.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-12T02:48:32.471Z · LW(p) · GW(p)
Actually, I didn't try to be relevant or interesting to the LW community. I'm just currently genuinely very interested in the kinds of questions this post was about, and selfishly thought I'd get very useful criticism and comments if I'd post here like this (as did indeed happen).
Getting downvoted so much is something that I for some reason enjoy very much :) It probably has to do with me thinking that, while there are very valid points on which my post and my decision to post it can be criticized, the downvoters probably didn't think of those valid reasons and downvote because of them; instead, they likely just had a knee-jerk reaction to religion as a topic (probably even suspecting that I hold religious views that I don't actually have). If this suspicion is true, I would have ended up demonstrating a form of irrationality somewhat widespread within the LW readership.
Or else the typical voting LW member is smarter than I thought, and they did actually notice that I was largely just looking for criticism and comments useful to me, instead of formulating my thoughts further on my own and then perhaps later posting to LW something more polished and with a somewhat different focus.
Replies from: Will_Newsome, Unknowns↑ comment by Will_Newsome · 2010-08-12T04:27:05.701Z · LW(p) · GW(p)
I'm just currently genuinely very interested in the kinds of questions this post was about, and selfishly thought I'd get very useful criticism and comments if I'd post here like this (as did indeed happen).
I've found it hard to avoid doing this kind of thing. Luckily I have people at the Singularity Institute to discuss this kind of hypothesis with. If you post it to the Open Thread, it will like as not be ignored, and if you make a top level post about it, then it will like as not be downvoted (but at least you'll get feedback). Perhaps it would be best if a lot of Less Wrongers made blogs and advertised those blogs in a top level post using some sort of endorsement method? I've thought about writing my own blog before, but it'd be annoying to have to ask a lot of people to check it out or subscribe. But if a lot of LWers did it, it wouldn't be nearly as annoying. Then posts like the one you wrote could still get feedback without taking up space in the minds of a multitude of Less Wrong readers who don't care about the decision theory of simulations.
Getting downvoted so much is something that I for some reason enjoy very much :) It probably has to do with me thinking that, while there are very valid points on which my post and my decision to post it can be criticized, the downvoters probably didn't think of those valid reasons and downvote because of them; instead, they likely just had a knee-jerk reaction to religion as a topic (probably even suspecting that I hold religious views that I don't actually have).
This totally does not work unless you have a way of discovering evidence to distinguish between the two hypotheses, and I don't think you have such a method. Commenters are more likely to have more sophisticated reasons for disagreeing than the average LW lurker who sees the word 'faith' in anything but a totally negative light and immediately downvotes, so posting this gave you little evidence. The downvotes are most likely to come from both the least and most sophisticated of LWers: the least because they're allergic to anything to do with religion, the most because they're allergic to hypotheses that fail to carve reality at its joints. If I was going to downvote the post it'd be because of the latter, but I still don't know what the median reason would be for a downvote.
↑ comment by Unknowns · 2010-08-12T03:00:26.314Z · LW(p) · GW(p)
"the downvoters probably just had a knee-jerk reaction to religion as a topic (probably even suspecting that I'd have religious views that I don't actually have)"
With some exceptions when the poster is well known here, my impression is that posts and comments on the topic of religion do get treated this way on a regular basis.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2010-08-12T04:32:12.855Z · LW(p) · GW(p)
In my experience, bringing up religion to make a point is often a bad call (or so community norms suggest), because politics is the mind-killer and alienating people for no good reason is generally frowned upon; and bringing up religion as a topic of discussion in itself is often done by those who want to bash religion for reasons that are not sophisticated enough.
comment by simplicio · 2010-08-11T18:34:48.765Z · LW(p) · GW(p)
I'm not convinced that the religious have any particular advantages wrt akrasia and such things.
A conversion is sure to give you a great deal of energy & willpower temporarily, but ultimately that's a finite resource.
The main advantage the religious have is supportive community. That is where rationalists really fall down, although I think LW is a step in the right direction.
comment by SilasBarta · 2010-08-11T16:02:18.769Z · LW(p) · GW(p)
I don't think simulations help. Once you start simulating yourself to arbitrary precision, that being would have the same thoughts as you, including "Hey, I should run a simulation", and then you're back to square one.
More generally, when you think about how to interact with other people, you are simulating them, in a crude sense, using your own mind as a shortcut. See empathic inference.
If you become superintelligent and have lots more computing resources, then your simulations of other minds themselves become minds, with experiences indistinguishable from yours, and make the same decisions, for the same reasons. What's worse, the simulations have the same moral weight! See EY's nonperson predicates.
(This has inspired me to consider "Virtualization Decision Theory", VDT, which says, "Act as though setting the output of yourself in a simulation run by beings deciding how to interact with a realer version of you that you care about more.")
Here are my earlier remarks on the simulated-world / religion parallel.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T16:40:17.239Z · LW(p) · GW(p)
Simulations might be of limited utility (given limited computational resources), but they'd certainly help.
Without simulations, it's very difficult to run complex experiments of how an entity behaves in a series of situations, with the only changing variable being the entity's initial beliefs.
Replies from: SilasBarta, thomblake↑ comment by SilasBarta · 2010-08-11T17:01:48.715Z · LW(p) · GW(p)
I think that underscores the crucial difference about doing simulations in this context: You're simulating a being that can itself do simulations, and this is the defining aspect of that being's cognitive architecture.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T17:41:33.457Z · LW(p) · GW(p)
No, it is not particularly relevant to run such simulations where the entities have sufficient technology to run advanced simulations themselves. Almost everything we'd want to find out in this context can be learned from simulations of entities at a limited technological level.
↑ comment by thomblake · 2010-08-11T16:52:31.744Z · LW(p) · GW(p)
Simulations might be of limited utility, but they'd certainly help.
If you believe that, then your earlier estimate of "necessary" was very far off-target.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T17:34:55.576Z · LW(p) · GW(p)
I also think that what intelligence I or anyone else has is imperfect and therefore "of limited utility", but this doesn't mean that our intelligence wouldn't be "necessary" for a very large number of tasks, or even that it wouldn't be "the ultimate tool" for most of those tasks.
So I don't see at all what would be the contradiction you're referring to.
comment by Aleksei_Riikonen · 2010-08-11T15:41:25.807Z · LW(p) · GW(p)
Belief in the concept of a time-continuous "self" might be an example of an article of Faith that is useful for humans.
(Most people believe in a time-continuous self anyway; they just don't realize that current best physics tells us there's no evidence for its existence.)
Replies from: Vladimir_Nesov, NihilCredo↑ comment by Vladimir_Nesov · 2010-08-11T15:51:09.296Z · LW(p) · GW(p)
Don't confuse heuristics with faith.
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T16:10:48.764Z · LW(p) · GW(p)
Physics has already given us better ideas that we could replace a belief in a time-continuous self with. If we choose not to use these ideas that better reflect what we know of reality, I wouldn't call it a heuristic, but instead choosing faith over what pure reason would tell us.
Replies from: jimrandomh, cousin_it↑ comment by jimrandomh · 2010-08-11T16:20:06.260Z · LW(p) · GW(p)
Physics has already given us better ideas that we could replace a belief in a time-continuous self with. If we choose not to use these ideas that better reflect what we know of reality, I wouldn't call it a heuristic, but instead choosing faith over what pure reason would tell us.
But physics has also confirmed that a time-continuous self is a good enough approximation under most circumstances. You wouldn't call choosing Newtonian physics over relativity "faith", and in most cases you wouldn't call it wrong either. It is only when we try to use the approximation in corner cases, like cloning and death, that it becomes a problem.
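To give a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python, assuming an everyday speed of about 30 m/s (an arbitrary illustrative choice), showing how tiny the relativistic correction is at human speeds:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2) for a speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 30.0  # roughly highway driving speed; an arbitrary illustrative choice
excess = lorentz_gamma(v) - 1.0  # how far the relativistic picture departs from the Newtonian one
print(f"gamma - 1 at {v} m/s: {excess:.1e}")                            # ~5e-15
print(f"clock drift over a year: {excess * 365.25 * 24 * 3600:.1e} s")  # ~1.5e-7 s
```

At everyday speeds the two models disagree by a few parts in 10^15, which is the sense in which the approximation is "good enough" outside the corner cases.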
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-11T16:33:00.572Z · LW(p) · GW(p)
The analogy would be that relativity says something that demoralizes us.
Using Newtonian physics as a heuristic when solving problems doesn't allow us to avoid that demoralizing effect. If we still believed that relativity is the model that is actually true, the demoralizing effect would remain.
Replies from: thomblake↑ comment by thomblake · 2010-08-11T16:55:00.227Z · LW(p) · GW(p)
the model that is actually true
Calling a model "true" is a category error. Models predict their relevant details of reality to the accuracy and precision necessary for the tasks to which they are appropriately applied, as best they can.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-11T18:18:18.665Z · LW(p) · GW(p)
Calling a model "true" is a category error.
It's well-defined enough when we talk about models of reality, so long as what "reality" means is assumed understood. It's clearly false that the speed of light is 10 km/s; it's clearly true that the speed of light is not 10 km/s.
Replies from: thomblake↑ comment by thomblake · 2010-08-11T18:47:59.409Z · LW(p) · GW(p)
Yes, the proposition "the speed of light is 10km/s" is false. However, it is entirely possible to have a model which sets the speed of light to 10km/s (to make the math simpler, possibly), that nonetheless churns out accurate predictions.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-11T18:51:49.920Z · LW(p) · GW(p)
LCPW (least convenient possible world). I obviously meant that you use a standard physical framework. Accurate predictions = true model, completely wrong predictions = false model. Simple enough.
Replies from: thomblake↑ comment by cousin_it · 2010-08-11T17:19:53.493Z · LW(p) · GW(p)
I'm gonna pull a Nesov on this one and say that belief in a time-continuous self can be thought of as a value/preference rather than belief. You care about your individual organism because evolution made you care about it, not because it is physically real (whatever that means).
Of course, similar reasoning can be used to show that observed particle physics is a Darwinian construct :-) Last I talked with Nesov about it, this was a big puzzle. Any news?
Replies from: nawitus↑ comment by nawitus · 2010-08-11T17:39:23.017Z · LW(p) · GW(p)
The lack of belief in a time-continuous self would give the same moral value to yourself as to other people, but wouldn't eliminate caring about yourself altogether.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-11T18:22:56.489Z · LW(p) · GW(p)
Wrong. To see the error, try applying the argument to structures other than people.
Replies from: nawitus↑ comment by nawitus · 2010-08-11T21:55:06.386Z · LW(p) · GW(p)
Care to give an example then?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-12T08:23:35.625Z · LW(p) · GW(p)
Example of what? You didn't give your argument, only your conclusion. I only guessed that this argument, whatever it is, would more visibly crumble in the case I suggested.
Replies from: nawitus↑ comment by nawitus · 2010-08-12T14:15:47.867Z · LW(p) · GW(p)
Eh. If you don't know the argument, it's irrational to call it wrong. I didn't really argue anything; I just made an observation for those people who might believe that a time-continuous self is required for morality.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-12T16:26:24.282Z · LW(p) · GW(p)
If you don't know the argument it's irrational to call it wrong.
Your conclusion is wrong; therefore the argument must be wrong as well.
Replies from: nawitus↑ comment by nawitus · 2010-08-12T18:21:28.779Z · LW(p) · GW(p)
And you don't provide any arguments for your claim either.
Okay, here's one: Even with a time-continuous self, humans value other people, even though they don't personally experience anything other people do. There's some (moral) value in other persons. Maybe people value themselves more, but that's not even relevant to the argument. So, if a time-continuous self doesn't exist, people will value their future selves as much as any other person, which is at least more than nothing.
Of course, this assumes that such a person does value other people. It may not apply to every single person.
↑ comment by NihilCredo · 2010-08-14T03:09:22.978Z · LW(p) · GW(p)
In what ways would the world look different to me if my TCS did/did not exist?
Replies from: Aleksei_Riikonen↑ comment by Aleksei_Riikonen · 2010-08-14T03:28:31.870Z · LW(p) · GW(p)
Can't think of any such way.
Similarly, the existence or non-existence of some sorts of inactive Gods doesn't affect your observations in any way.
Occam's Razor would eliminate both those Gods and a time-continuous self, though.
(But personally, I propose that we may have faith in a time-continuous self, if it is sufficiently useful psychologically. And that it's an open question whether there are other supernatural things that at least some should also have faith in.)
comment by PhilGoetz · 2010-08-11T21:41:23.467Z · LW(p) · GW(p)
The basic idea is sound - If you really think religion is good/bad for X, the best proof would be to run a simulation and observe outcomes for X. I interpret the downvoting to -10 as a strong collective irrational bias against religion, even in a purely instrumental context.
The corollaries are distracting.
Replies from: luminosity↑ comment by luminosity · 2010-08-12T02:37:02.223Z · LW(p) · GW(p)
Except I don't read the post as endorsing testing whether religion is good or bad for X, but rather saying that if a simulation showed we'd be better off with religious beliefs we'd be better off adopting them. There's a number of reasons why this seems like a bad idea:
- First, your simulation can only predict whether religion is better for you under the circumstances that you simulate. For instance, suppose you run a simulation of the entire planet earth, but nothing much beyond it. The simulation shows better outcomes for you with religious faith. You then modify yourself to have religious faith. An extinction level asteroid comes hurtling towards earth. Previously, you would have tried to work out the best strategy to divert its course. Now you sit and pray that it goes off course instead.
- Let's say you simulate yourself for twenty years under various religious beliefs, and under atheism, and one of the simulations leads to a better outcome. You alter yourself to adopt this faith. You've now poisoned your ability to conduct similar tests in the future. Perhaps a certain religion is better for the first year, or five years, or five thousand. Perhaps beyond that time it no longer is. Because you have now altered your beliefs to rest upon faith rather than upon testing, you can no longer update yourself out of this religious state, because you will no longer test to see whether it is optimal.
↑ comment by Larks · 2010-08-12T18:36:12.261Z · LW(p) · GW(p)
While true, I doubt the first effect would be significant. You're not very likely to be the one responsible for saving the earth and, singularity aside, terrestrial effects are likely to be far more important to you.
Contrariwise, if you were capable of running a simulation, the odds of your input being relevant to existential risk are much higher. You might be running the simulation to help other people decide whether or not to be religious, or whether to persuade others to be religious, but then it becomes a lot more likely that the combined reduction in epistemic rationality would become an existential issue.
comment by Risto_Saarelma · 2010-08-11T20:15:55.633Z · LW(p) · GW(p)
Don't some interpretations of neopagan magic have a bit of the same idea as the religion thing here? The idea is that there isn't assumed to be any supernatural woo involved; the magic rituals just exist as something that an observing human brain will really glom onto, leading it to do things it is able to do but otherwise might not have done.
I think Eric S. Raymond and Alan Moore have written about magic from this outlook. Chaos magic with its concept of belief as a tool might also be relevant.
Replies from: PhilGoetz
comment by KevinC · 2010-08-15T20:40:56.362Z · LW(p) · GW(p)
I think the question "Is it rational to be religious?" is one that deserves critical attention and testing, but talk of ancestor simulations completely demolishes the point. Any entity capable of creating an actual ancestor simulation--a fully-modeled "Matrix" populated with genuinely human-equivalent sentient Sims--is an entity for whom the results of such a test would be irrelevant and obsolete. The premise, that some form of Faith might be useful or even necessary for rational humans to maximally act in accordance with their values, is not applicable for a posthuman being.
The technology for creating a real ancestor simulation would almost certainly exist in a context of other technologies that are comparably advanced within their fields. If the computer power exists to run a physics engine sufficient to simulate a whole planet and its environs, complete with several billion human-level consciousnesses, the beings who possess that power would almost certainly be able to enhance their own cognitive and psychological capacities to the point that Faith would no longer be necessary for them, even if it might be for us here and now, or for the Sims in the ancestor simulation. A creator of ancestor simulations would for all practical intents and purposes be God, even in relation to his/her own universe. With molecular nanotechnology, utility fogs, programmable matter, and technologies we can't even imagine, conjuring a burning bush or a talking snake or a Resurrection would be child's play.
Proposing ancestor simulations as a way to test the usefulness of Faith is like saying, "Let's use a TARDIS to go watch early space-age planets and see if rockets or solar sails are the best way for us to explore the universe!"
On the other hand, we do already possess computer platforms that are fairly good at emulating other human-level intelligences, and we routinely create plausible, though limited, world-simulations. These are "human brains" and "stories," respectively. So one way to partially test whether or not it could be rational to be religious would be to write a story about a rational person who adopts a Faith and applies it to maximally operate according to his or her values.
Then, present the story to people who hold that Faith, and to people who don't. Is the story itself believable? Do the other minds processing the simulation (story) judge that it accurately models reality? Unfortunately this method cannot simultaneously generate billions of fully-realized simulated lives so that a wide variety of Faiths and life-circumstances under which they are used can be examined. Instead, the author would have to generate what they consider to be a plausible scenario for a rational person adopting a Faith and write a genuinely believable story about it. To serve as an effective test, the story would have to include as many realistic circumstances adverse to the idea as possible, in the same way that the secret to passing the 2-4-6 Test is to look for number sets that produce a "no." It could not be written like a fictional Utopia in which the Utopia works only because everyone shares the author's beliefs and consistently follows them.
Eliezer's story Harry Potter and the Methods of Rationality does a mirror-opposite of this, providing a story-test for the question, "Would the Sequences still be applicable even under the extreme circumstance of being catapulted into the Harry Potter universe?" Some of the best moments in this story are where Harry's rationalist world-view is strained to the utmost, like when he sees Professor McGonagall turn into a cat. A reader who finds the story "believable" (assuming sufficient suspension-of-disbelief to let the magic slide) will come away accepting that, if the Sequences can work even in a world with flying broomsticks and shape-shifting witches, they'll probably work here in our rather more orderly and equation-modelable universe.
So, a "So-And-So and the Methods of Faith" story might, if well-written, be able to demonstrate that Faith could be a valid way of programming the non-rational parts of our brain into helping us maximally operate according to our values.
Another method of testing (perhaps a next step) would be to adopt the techniques of Chaos Magic and/or Neuro-Linguistic Programming and try out the utility of Faith (perhaps testing different Faiths over set periods of time) in one's own life. Or better still: get the funding for a proper scientific study with statistically sufficient sample sizes, control groups, double-blind protocols, etc.
comment by Psychohistorian · 2010-08-12T23:00:17.785Z · LW(p) · GW(p)
This post definitely has problems, but given the fairly interesting discussion it appears to have prompted, does not seem to deserve being in the minus-double digits.
I think the central point is that the practical value of faith is more of an empirical question than a logical one. The central problem, of course, is that for a real person to accept some propositions on faith requires a (likely significant) dent in their overall rationality. The question is not about the value of faith, but about the tradeoffs made to obtain it; a complexity which is not really addressed here, and which may prove so entwined as to be impossible to arrive at deliberately.
In other words, once you've chosen a certain amount of rationality, many paths of faith are closed to you. Conversely, once you've chosen a certain amount of faith, some levels of rationality are closed to you.
comment by Unknowns · 2010-08-11T15:58:49.701Z · LW(p) · GW(p)
I predict a score of around negative five for this post.
Replies from: katydee↑ comment by katydee · 2010-08-11T23:12:23.250Z · LW(p) · GW(p)
I don't think this post should be downvoted. Testable predictions are a good idea and I would like to see more of them here, though ideally they would be posted as spoilers so that people wouldn't correct their actions to fulfill (or not fulfill) the predictions.
Replies from: NihilCredo↑ comment by NihilCredo · 2010-08-14T03:07:37.160Z · LW(p) · GW(p)
This particular prediction has every sign of being meant as a very indirect formulation of "This post sucks".
Replies from: katydee