kebko, (1) doubtless there's something terribly dysfunctional going on; the question is whether it's better treated by giving more aid or by giving less. (2) If the continent's GDP might have been larger than it is, then the argument I was making applies more, not less. (Namely: the amount of foreign aid seems very small in comparison with the total size of the economy, which suggests that the amount of influence it can have had for good or ill probably isn't all that enormous.)
Carl, I like the idea of inventing things and making them free, but it might be unattractive to the people who'd need to do (or at least fund) it because it doesn't look like charity to, e.g., people looking at your accounts; and because unless the technologies are tightly Africa-focused they might lose a lot more in potential revenue than Africa gains in value. Also, it only works in so far as there are the necessary (human and material) resources in the poorest African countries to take advantage of the inventions.
Ian C, you either don't know what reason is or (at least in this case) don't know how to do it.
haig, if she's really calling for an end to all aid to Africa then that seems to go beyond what you suggest. (Eliezer could be right that she's keeping the message simple but really wants something more sophisticated. I am not convinced that this is the right strategy even if she's right about the underlying facts, and I'd also have thought that in a book-length treatment of the issue she could afford to present a less-simplistic version of her case.)
Perhaps I should point out one particular way in which I could be badly wrong: presumably aid tends to go to the poorest African countries, whose GDP may be way below the average, so 1% of GDP might turn out to be a substantial amount for the countries it actually goes to. Perhaps Moyo's book has the relevant numbers?
Eliezer, it's clear that Africa is in trouble. How compelling an argument does Moyo's book offer for believing that Africa is in trouble because it needs less aid, rather than because it needs more?
In this particular context it seems a bit strange to describe Moyo as an African economist. She lives in London and so far as I can tell has lived in the West for most of her adult life. In particular, the two most obvious reasons one might have for trusting an African economist more on this issue -- that her self-interest is more closely aligned with what's best for Africa than with what's best for the West, and that she's constantly exposed to the economic realities of life in poor African countries -- are less applicable than they would be to someone who actually lives in Africa.
Oh, and ... $1 trillion. Sounds like a lot. That's over the last 50 years, though. $20bn/year. Still sounds like a lot. The population of Africa is a little less than a billion. $20/year per person. Hmm. It's not quite so obvious that that would be enough to have a major distorting effect. Total GDP of Africa is something like $2T/year, which would make foreign aid to Africa something like 1% of its GDP. Again, would we really expect much distortion from that? Or, for that matter: If it's possible for aid to help Africa, would we expect aid at that level to have done much good?
(These are all without-even-an-envelope calculations, and could be badly wrong.)
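In case anyone wants to redo the arithmetic, here it is spelled out (a minimal sketch; the inputs are the same rough round numbers as above, not real data):

```python
# Back-of-the-envelope sketch of the figures above. Every input is a rough
# round number quoted in the comment, not real data, so the outputs could
# easily be off by a lot.
total_aid = 1e12            # ~$1 trillion of aid over the whole period
years = 50
population = 1e9            # a little under a billion people
gdp_per_year = 2e12         # ~$2T/year total GDP for Africa

aid_per_year = total_aid / years                 # ~$20bn/year
aid_per_person = aid_per_year / population       # ~$20 per person per year
aid_share_of_gdp = aid_per_year / gdp_per_year   # ~1% of GDP

print(f"${aid_per_year:.2e}/year, ${aid_per_person:.0f}/person/year, {aid_share_of_gdp:.1%} of GDP")
```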
I'm not sure whether "it" in Rasmus's second paragraph is referring specifically to the fact that you can submit old predictions, or to the idea of the site as a whole; but the possibility -- nay, the certainty -- of considerable selection bias makes this (to me) not at all like a database of all pundit predictions, but rather another form of entertainment.
Don't misunderstand me; I think it's an excellent form of entertainment, and entertainment with an important serious side. But even if someone is represented by a dozen predictions on Wrong Tomorrow, all of them (correctly) marked WRONG, that could just mean that it's only the wackiest 1% of their predictions that have been submitted. Which would show that they're far from infallible, but that's hardly news.
Quite possibly this is the best one can do without a large paid staff (which introduces troubles aplenty of its own); it's just not feasible to track every single testable prediction made by any pundit, and if that started being done and noticed the likely result is that pundits would start taking more care to make their predictions untestable.
vroman, see the post on Less Wrong about least-convenient possible worlds. And the analogue in Doug's scenario of the existence of (Pascal's) God isn't the reality of the lottery he proposes -- he's just asking you to accept that for the sake of argument -- but your winning the lottery.
Carl, it clearly isn't based only on that since Eliezer says "You see it all the time in discussion of cryonics".
Eliezer, it seems to me that you may be being unfair to those who respond "Isn't that a form of Pascal's wager?". In an exchange of the form
Cryonics Advocate: "The payoff could be a thousand extra years of life or more!"
Cryonics Skeptic: "Isn't that a form of Pascal's wager?"
I observe that CA has made handwavy claims about the size of the payoff, has said nothing about how the utility of a long life depends on its length (there could well be diminishing returns), has offered nothing at all like a probability calculation, and has entirely neglected the downsides (I think Yvain makes a decent case that they aren't obviously dominated by the upside). So, here as in the original Pascal's wager, we have someone arguing "put a substantial chunk of your resources into X, which has uncertain future payoff Y" on the basis that Y is obviously very large, and apparently ignoring the three key subtleties: how to get from Y to the utility-if-it-works, what other low-probability but high-utility-delta possibilities there are, and just what the probability-that-it-works is. And, here as with the original wager, if the argument does work then its consequences are counterintuitive to many people (presumably including CS).
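For concreteness, here's the sort of bookkeeping that exchange leaves out, with entirely made-up numbers (a sketch, not an argument for either conclusion):

```python
import math

# Entirely made-up placeholder numbers; the point is only that the conclusion
# depends on all three inputs, not just on the size of the claimed payoff.
p_works = 0.05                 # assumed probability that cryonics pays off
extra_years_if_works = 1000    # the advocate's claimed payoff
cost = 1.0                     # assumed utility cost of signing up (money, effort, etc.)

def utility_of_extra_years(years):
    # One possible shape for diminishing returns: logarithmic rather than linear.
    return math.log1p(years)

expected_gain = p_works * utility_of_extra_years(extra_years_if_works) - cost
print(expected_gain)   # negative with these numbers; positive with others
```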
That wouldn't justify saying "That is just Pascal's wager, and I'm not going to listen to you any more." But what CS actually says is "Isn't that a form of Pascal's wager?". It doesn't seem to me an unreasonable question, and it gives CA an opportunity to explain why s/he thinks the utility really is very large, the probability not very small, etc.
I think the same goes for your infinite-physics argument.
I don't see any grounds for assuming (or even thinking it likely) that someone who says "Isn't that just a form of Pascal's wager?" has made the bizarrely broken argument you suggest that they have. If they've made a mistake, it's in misunderstanding (or failing to listen to, or not guessing correctly) just what the person they're talking to is arguing.
Therefore: I think you've committed a Pascal's Wager Fallacy Fallacy Fallacy.
(Second attempt at posting this. My first attempt vanished into the void. Apologies if this ends up being a near-duplicate.)
Patrick (orthonormal), I'm pretty sure "Earth" is right. If you're in the Huygens system already, you wouldn't talk about "the Huygens starline". And the key point of what they're going to do is to keep the Superhappies from reaching Earth; cutting off the Earth/Huygens starline irrevocably is what really matters, and it's just too bad that they can't do it without destroying Huygens. (Well, maybe keeping the Superhappies from finding out any more about the human race is important too.)
Patrick (orthonormal), I'm fairly sure that "Earth" is correct. They haven't admitted that what they're going to do is blow up Huygens (though of course the President guesses), and the essential thing about what they're doing is that it stops the aliens getting to Earth (and therefore to the rest of humanity). And when talking to someone in the Huygens system, talk of "the Huygens starline" wouldn't make much sense; we know that there are at least two starlines with endpoints at Huygens.
Eliezer, did you really mean to have the "multiplication factor" go from 1.5 to 1.2 rather than to something bigger than 1.5?
Beerholm --> Beerbohm, surely? (On general principles; I am not familiar with the particular bit of verse Eliezer quoted.)
Wei Dai, singleton-to-competition is perfectly possible, if the singleton decides it would like company.
Reasoning by analogy is at the heart of what has been called "the outside view" as opposed to "the inside view" (in the context of, e.g., trying to work out how long some task is going to take). Eliezer is on record as being an advocate of the outside view. The key question, I think, is how deep are the similarities you're appealing to. Unfortunately, that's often controversial.
(So: I agree with Robin's first comment here.)
I'd suggest:
1. Existing contributors keep posting at whatever frequency they're happy with (which hopefully would be above zero, but that's up to them).
2. Slowly scour the web for material that wouldn't be out of place on OB. When you find some, ask the author two or three questions. (a) May we re-post this on OB? (b) Would you like to write an article for OB? (c) [if appropriate] May we re-post some of your other existing material on OB?
3. If the posting rate drops greatly from what it is now, have more open threads. (One a week, on a regular schedule?) Be (cautiously) on the lookout for opportunities to say "Would you like to turn that into an OB post?".
I'd strongly not suggest:
Anything that would broaden the focus of OB much. (It already strays a little further from its notional core topic than would be my ideal.)
Voting.
Continuing Robin Hanson's quirk of deleting as many words from the title as is possible without rendering it completely unintelligible. (Or, sometimes, one more than that.) :-)
Those subjunctives in 1-3 of course assume that there are people willing to do that much work. I don't know whether there are, not least because I haven't seriously tried to estimate how much work it is.
Richard, I wasn't suggesting that there's anything wrong with your running a simulation, I just thought it was amusing in this particular context.
Anyone who evaluates the performance of an algorithm by testing it with random data (e.g., simulating these expert-combining algorithms with randomly-erring "experts") is ipso facto executing a randomized algorithm...
So (the argument goes) the randomized algorithm isn't really better than the unrandomized one: a bad result from the unrandomized one will only happen when your environment maliciously hands you a problem whose features match up just wrong with the non-random choices you make, so all you need to do is make those choices in a way that's tremendously unlikely to match up just wrong with anything the environment hands you, because your way of choosing doesn't contain the same sorts of patterns that the environment might inflict on you.
Except that the definition of "random", in practice, is something very like "generally lacking the sorts of patterns that the environment might inflict on you". When people implement "randomized" algorithms, they don't generally do it by introducing some quantum noise source into their system (unless there's a real adversary, as in cryptography), they do it with a pseudorandom number generator, which precisely is a deterministic thing designed to produce output that lacks the kinds of patterns we find in the environment.
So it doesn't seem to me that you've offered much argument here against "randomizing" algorithms as generally practised; that is, having them make choices in a way that we confidently expect not to match up pessimally with what the environment throws at us.
Or, less verbosely:
Indeed randomness can improve the worst-case scenario, if the worst-case environment is allowed to exploit "deterministic" moves but not "random" ones. What "random" means, in practice, is: the sort of thing that typical environments are not able to exploit. This is not cheating.
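To make that concrete, here's a toy sketch of my own (not anything from the post): a deterministic predictor facing an environment that can simulate it, versus the same predictor replaced by a pseudorandom stream the environment can't see.

```python
import random

# A worst-case environment that can simulate a deterministic predictor can make
# it wrong every single round; the same environment gets no grip on choices it
# can't predict -- here just a PRNG stream it doesn't see, which is all that
# "randomized" usually amounts to in practice.

def majority_rule(history):
    """Deterministic prediction: the majority bit seen so far (ties -> 0)."""
    return int(sum(history) * 2 > len(history))

def adversary(history):
    """Worst-case environment: it knows the deterministic rule, so it outputs
    the opposite of whatever that rule would predict."""
    return 1 - majority_rule(history)

def accuracy(rounds, randomize, seed=0):
    rng = random.Random(seed)          # a pseudorandom generator, nothing quantum
    history, correct = [], 0
    for _ in range(rounds):
        outcome = adversary(history)
        guess = rng.randint(0, 1) if randomize else majority_rule(history)
        correct += (guess == outcome)
        history.append(outcome)
    return correct / rounds

print(accuracy(1000, randomize=False))  # 0.0 -- exploited every round
print(accuracy(1000, randomize=True))   # ~0.5 -- no pattern left to exploit
```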
nazgulnarsil, just because you wouldn't have to call it a belief doesn't mean it wouldn't be one; I believe in the Atlantic Ocean even though I wouldn't usually say so in those words.
It was rather tiresome the way that Lanier answered so many things with (I paraphrase here) "ha ha, you guys are so hilariously, stupidly naive" without actually offering any justification. (Apparently because the idea that you should have justification for your beliefs, or that truth is what matters, is so terribly terribly out of date.) And his central argument, if you can call it that, seems to amount to "it's pragmatically better to reject strong AI, because I think people who have believed in it have written bad software and are likely to continue doing so". Lanier shows many signs of being a smart guy, but ugh.
Vladimir, if I understand both you and Eliezer correctly you're saying that Eliezer is saying not "intelligence is reality-steering ability" but "intelligence is reality-steering ability modulo available resources". That makes good sense, but that definition is only usable in so far as you have some separate way of estimating an agent's available resources, and comparing the utility of what might be very different sets of available resources. (Compare a nascent superintelligent AI, with no ability to influence the world directly other than by communicating with people, with someone carrying a whole lot of powerful weapons. Who has the better available resources? Depends on context -- and on the intelligence of the two.) Eliezer, I think, is proposing a way of evaluating the "intelligence" of an agent about which we know very little, including (perhaps) very little about what resources it has.
Put differently: I think Eliezer's given a definition of "intelligence" that could equally be given as a definition of "power", and I suspect that in practice using it to evaluate intelligence involves applying some other notion of what counts as intelligence and what counts as something else. (E.g., we've already decided that how much money you have, or how many nuclear warheads you have at your command, don't count as "intelligence".)
How do you avoid conflating intelligence with power? (Or do you, in fact, think that the two are best regarded as different facets of the same thing?) I'd have more ability to steer reality into regions I like if I were cleverer -- but also if I were dramatically richer or better-connected.
PK, I thought Eliezer's post made at least one point pretty well: If you disagree with some position held by otherwise credible people, try to understand it from their perspective by presenting it as favourably as you can. His worked example of capitalism might be helpful to people who are otherwise inclined to think that unrestrained capitalism is obviously bad and that those who advocate it do so only because they want to advance their own interests at the expense of others less fortunate.
I agree that he's probably violating his own advice when he implies that capitalism amounts to treating "finance as ... an ultimate end".
To those who are saying things like "Eliezer, someone will get power anyway and they'll probably be worse than you, so why not grab power for yourself?", and assuming for the sake of argument that we're talking about some quantity of power that Eliezer is actually in a position to grab: If you grab power and it corrupts you, that's bad not only for everyone else but also for you and whatever your goals were before you got corrupted. Observing that other people would be corrupted just as badly defuses the first of those objections to power-grabbing, but not the second.
Bo, the point is that what's most difficult in these cases isn't the thing that the 10-year-old can do intuitively (namely, evaluating whether a belief is credible, in the absence of strong prejudices about it) but something quite different: noticing the warning signs of those strong prejudices and then getting rid of them or getting past them. 10-year-olds aren't specially good at that. Most 10-year-olds who believe silly things turn into 11-year-olds who believe the same silly things.
Eliezer talks about allocating "some uninterrupted hours", but for me a proper Crisis of Faith takes longer than that, by orders of magnitude. If I've got some idea deeply embedded in my psyche but am now seriously doubting it (or at least considering the possibility of seriously doubting it), then either it's right after all (in which case I shouldn't change my mind in a hurry) or I've demonstrated my ability to be very badly wrong about it despite thinking about it a lot. In either case, I need to be very thorough about rethinking it, both because that way I may be less likely to get it wrong and because that way I'm less likely to spend the rest of my life worrying that I missed something important.
Yes, of course, a perfect reasoner would be able to sit down and go through all the key points quickly and methodically, and wouldn't take months to do it. (Unless there were a big pile of empirical evidence that needed gathering.) But if you find yourself needing a Crisis of Faith, then ipso facto you aren't a perfect reasoner on the topic in question.
Wherefore, I at least don't have the time to stage a Crisis of Faith about every deeply held belief that shows signs of meriting one.
I think there would be value in some OB posts about resource allocation: deciding which biases to attack first, how much effort to put into updating which beliefs, how to prioritize evidence-gathering versus theorizing, and so on and so forth. (We can't Make An Extraordinary Effort every single time.) It's a very important aspect of practical rationality.
Unfortunately, the capabilities of an omnipotent being are themselves not very well defined. Suppose we want to determine whether "The Absolute is an uncle" is meaningful. Well, says the deranged Hegelian arguing the affirmative, of course it is: we just ask our omnipotent being to take a look and see whether the Absolute is an uncle or not.
Butbutbutbut, you say, we can't tell it how to do that, whereas we can tell it how to check whether there's a spaceship past the cosmological horizon. But can we really? I mean, it's not like we know how to make that observation, or we'd be able to make it ourselves. What's the difference between this and checking whether the Absolute is an uncle? "Well, we know what it means to check whether there's a spaceship past the cosmological horizon, but not what it means for the Absolute to be an uncle." Circular argument alert!
It does feel like there's a difference that we can use, but trying to formulate it exactly seems to lead to a circular definition.
(No one is really going to defend "The Absolute is an uncle", but there certainly are people prepared to claim that the existence of an afterlife is testable because dead people might discover it, or because God could tell you whether it's there or not; and I don't think any sort of logical positivist would agree.)
Interesting aesthetic question raised by Caledonian's comment: "not beckoning, but drowning" versus "not wading, but drowning". I think the latter would have worked much better, but presumably C. thought it too obvious and wanted to preserve more of Stevie Smith's semantics. :-)
Arthur, what would keeping a time coordinate buy you in your scenario? Suppose, simplifying for convenience, we have A -> B -> C -> B [cycle], and suppose each state completely determines its successor. What advantage would there be to labelling our states (A,0), (B,1), (C,2), (B,3), (C,4), etc., instead of just A,B,C? Note that there's no observable difference between, say, (B,1) and (B,3); in particular, no memory or record of the past can distinguish them because those things would have to be part of state B itself.
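To spell out the toy model (a minimal sketch, assuming as above that each state completely determines its successor):

```python
# The cycle A -> B -> C -> B -> C -> ..., with an added time label t.
successor = {"A": "B", "B": "C", "C": "B"}

state = "A"
labelled = []
for t in range(6):
    labelled.append((state, t))
    state = successor[state]

print(labelled)   # [('A', 0), ('B', 1), ('C', 2), ('B', 3), ('C', 4), ('B', 5)]
# The label t is pure bookkeeping: everything observable, including any memories
# or records of the past, is part of the state itself, so nothing can distinguish
# ('B', 1) from ('B', 3).
```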
I think David Deutsch has a similar unsorted-pile-of-block-slices view of the world. I don't know if either was influenced by the other.
You can make positions relative in ways other than using pairwise distances as your coordinates. For instance, just take R^4n (or R^11n or whatever) and quotient by the appropriate group of isometries of R^4 or R^11 or whatever. That way you get a dimension linear in the number of particles. The space might be more complicated topologically, but if you take general relativity seriously then I think you have to be prepared to cope with that anyway.
So, in Eliezer's example of triangles in 2-space, we start off with R^6; letting E be the group of isometries of R^2 (three-dimensional: two dimensions for translation, one for rotation, and we also have two components because we can either reflect or not), it acts on R^6 by applying each isometry uniformly to three pairs of dimensions; quotienting R^6 by this action of E, you're left with a 3-dimensional quotient space.
Of course you end up with the same result (up to isomorphism) this way as you would by considering pairwise distances and then noticing that you're working in a small subset of the O(N^2)-dimensional space defined by distances. But you don't have to go via the far-too-many-dimensional space to get there.
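Spelling out the dimension count (my own summary of the construction above, assuming generic configurations with trivial stabilizers):

```latex
\[
\dim\bigl(\mathbb{R}^{6}/E(2)\bigr) \;=\; 6 - \dim E(2) \;=\; 6 - 3 \;=\; 3
\quad \text{(triangles in the plane, up to congruence)},
\]
\[
\dim\bigl(\mathbb{R}^{kN}/E(k)\bigr) \;=\; kN - \Bigl(k + \tbinom{k}{2}\Bigr) \;=\; O(N),
\quad \text{versus the } \tbinom{N}{2} = O(N^2) \text{ pairwise distances}.
\]
```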
But ... suppose the laws of physics are defined over a quotient space like this. From the anti-epiphenomenal viewpoint, I wonder whether we should consider the quantities in the original un-quotiented space to be "real" or not. Consider quantum-mechanical phase or magnetic vector potential, which aren't observable (though other things best thought of as quotients of them are). Preferring to see the quotiented things as fundamental seems to me like the same sort of error as Eliezer (I think rightly) accuses single-world-ists of.
But ... the space of distance-tuples (appropriately subsetted) and the space of position-tuples (appropriately quotiented) are the same space, as I mentioned earlier. So, how to choose? Simplicity, of course. And, so far as we can currently tell, the laws of physics are simpler when expressed in terms of positions than when expressed in terms of distances. So, for me and pending the discovery of some newer better way of expressing the state space that supports our churning quantum mist, sticking with absolute positions seems better for now.
Yeah, but when playing actual Taboo, "rational agents should WIN" (Yudkowsky, E.) and should therefore favour "nine innings and three outs" over your definition (which would also cover some related-but-different games such as rounders, I think). I suspect something like "Babe Ruth" would in fact lead to a quicker win.
None of which is relevant to your actual point, which I think a very good one. I don't think the tool is all that nonstandard; e.g., it's closely related to the positivist/verificationist idea that a statement has meaning only if it can be paraphrased in terms of directly (ha!) observable stuff.
Lee, I'm confident that you'd find that "97 is approximately 100" seems more natural to most people than "100 is approximately 99". As for the percentage differences, (1) why should the percentage difference be the thing to focus on rather than the absolute difference, and (2) why do it that way around? (Only, I think, because of the effect I mentioned above: when you say "X is approximately Y" you're implicitly suggesting Y as a standard of comparison, because it's useful for that purpose one way or another.)
Tiiba, I don't think what I described is a bias, but perhaps I didn't explain it well. I'm proposing that in phrases like "X is approximately Y" and "X is like Y", the connectives are not intended to be taken as symmetrical relations like "differs little from"; rather, they mean something like "If you want to know about X, it may be useful to think about Y instead". And I don't see anything wrong with that, as such.
Let me give an analogy from a field where bias is quite effectively eliminated: pure mathematics. Mathematicians have various notations for expressing how one function compares with another for large x. One of them, written something like "f ~ g", means "the ratio f/g tends to 1 in whatever limiting case we're interested in" (n -> oo, x -> 0, whatever). This really is a symmetrical relation; f ~ g if and only if g ~ f. But if you ask mathematicians which of "x^3+17x^2-25x+1 ~ x^3" and "x^3 ~ x^3+17x^2-25x+1" is more natural then I bet they'll quite consistently go for the former.
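In symbols, with the standard definition spelled out:

```latex
\[
f \sim g \quad\Longleftrightarrow\quad \frac{f}{g} \to 1 \ \text{(in whatever limiting case is of interest)},
\]
\[
\text{e.g. } \frac{x^3 + 17x^2 - 25x + 1}{x^3} \;=\; 1 + \frac{17}{x} - \frac{25}{x^2} + \frac{1}{x^3} \;\to\; 1
\ \text{as } x \to \infty .
\]
```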
Now, if you want to call it a "bias" every time some term that looks symmetrical is used asymmetrically as a matter of convention or convenience, fair enough. I'd prefer to reserve "bias" for cases where the asymmetrical usage actually causes, or is a symptom of, error. As I say, I'm sure there's plenty of error caused by typicality heuristics; but I don't see that the asymmetry in the use of phrases like "is like" is, or indicates, an error.
(What "wrong question" do you think is being answered here?)
Has it been established that people who prefer "98 is approximately 100" to "100 is approximately 98" or "Mexico is like the US" to "the US is like Mexico" do so because, e.g., they think 98 is nearer to 100 than vice versa? It seems to me that "approximately 100" and "like the US" have an obvious advantage over "approximately 98" and "like Mexico": 100 is a nice-round-number, one that people are immediately familiar with the rough size of and that's easy to calculate with; the US is a nation everyone knows (or thinks they do).
I bet there really is a bias here, but that observation doesn't strike me as very good evidence for it. The rival explanations are too good. (The example about disease in ducks and robins is much better.)
Jeffrey wrote: "To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?" Gosh. The only justification I can see for that equivalence would be some general belief that badness is simply independent of numbers. Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?
In any case: since you now appear to be conceding that it's possible for someone to prefer TORTURE to SPECKS for reasons other than a childish desire to shock, are you retracting your original accusation and analysis of motives? ... Oh, wait, I see you've explicitly said you aren't. So, you know that one leading proponent of the TORTURE option actually does care about humanity; you agree (if I've understood you right) that utilitarian analysis can lead to the conclusion that TORTURE is the less-bad option; I assume you agree that reasonable people can be utilitarians; you've seen that one person explicitly said s/he'd be willing to be the one tortured; but in spite of all this, you don't retract your characterization of that view as shocking; you don't retract your implication that people who expressed a preference for TORTURE did so because they want to show how uncompromisingly rationalist they are; you don't retract your implication that those people don't appreciate that real decisions have real effects on real people. I find that ... well, "fairly shocking", actually.
(It shouldn't matter, but: I was not one of those advocating TORTURE, nor one of those opposing it. If you care, you can find my opinions above.)
And we should take care to select something orthodox, for fear of provoking shock and outrage? Do you have any reason to believe that the people who say they prefer TORTURE to SPECKS are motivated by the desire to prove their rationalist credentials, or that they don't appreciate that their decisions have real consequences?
Josh, if you think about a picture like the one Eliezer drew (but in however many dimensions you like) it's kinda obvious that the leading term in the difference between two n-cubes consists of n (n-1)-cubes, one per dimension. So the leading term in the next difference is n(n-1) (n-2)-cubes, and so on. But that doesn't really give the n! thing at a glance. I'm not convinced that anything to do with nth differences can really be seen at a glance without some more symbolic reasoning intervening.
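Here's a tiny numerical check of the n! claim for the cubes (my own illustration, nothing from the post):

```python
# The k-th finite difference of n**k is constant and equal to k!; here k = 3.
from math import factorial

def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

cubes = [n**3 for n in range(8)]      # 0, 1, 8, 27, 64, ...
d1 = diffs(cubes)                     # leading term ~ 3*n**2
d2 = diffs(d1)                        # leading term ~ 3*2*n
d3 = diffs(d2)                        # constant: 3! = 6

print(d1)  # [1, 7, 19, 37, 61, 91, 127]
print(d2)  # [6, 12, 18, 24, 30, 36]
print(d3)  # [6, 6, 6, 6, 6]
print(factorial(3))  # 6
```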
James Bach, I suspect that the really good mathematicians can't be had for cheap because doing mathematics is so important to them, and the quite good mathematicians can't be had for cheap because they've taken high-paying jobs in finance or software or other domains where a mathematical mind is useful. But maybe it depends on what you count as "cheap" and what fraction of the mathematician's time you want to take up with tutoring...
Isabel, I think perhaps differentiation really is easier in some sense than differencing, not least because the formulae are simpler. Maybe that stops being true if you take as your basic objects not n^k but n(n-1)...(n-k+1) or something, but it's hard to see the feeling that n^k is simpler than that as mere historical accident.
Eliezer's use of "the one" is not an error or a Matrix reference, it's a deliberate echo of an ancient rabbinical trope. (Right, Eliezer?)
denis bider, I thought Eliezer's use of "the one" was a deliberate echo of a rabbinical or Talmudic idiom, though I'm not sure how I got that idea and my google-fu isn't sufficient to verify or refute it. ... Ah, but take a look e.g. at page 8 of this book.
Incidentally, Al Cellier, what on earth is scientology doing in your list? It has (so far as I can see) nothing in common with the other items in the list, either in terms of shared beliefs or shared adherents. Are you just trying to annoy any singularitarians and transhumanists who are reading what you write?
Eliezer, I don't think your story would have been appreciably weakened if you'd just deleted the words "to talk about the Singularity". On the other hand, I also don't see any reason why you should have to avoid mentioning Your Strange Beliefs either. Also: surely the conjunction fallacy is not a bias, but a symptom of a bias. (The bias in question being more or less a special case of the availability heuristic: the more detail we're provided with, the easier it is to imagine whatever-it-is.)
burger flipper and Zubon, I think (1) Eliezer's thought processes are quite unusual and (2) his claimed thought processes on the two occasions mentioned by bf are (no more than) quite unusual, which to my mind makes them unsurprising and unsuspicious.
Overcoming Cryonics, singularitarianism seems to me to lack a number of important characteristics that almost all things commonly called religions share, so however wrong or irrational it may be the term "religion" seems unhelpful. (I find that applying the term "religion" to things that aren't commonly regarded as religions generally produces more heat than light.) Likewise for cryonics, life-extensionism and transhumanism more generally. All of which, incidentally, seem to me to be quite separate things, which I agree makes it interesting that they seem usually to get accepted or rejected as a group.
... If they do, that is; I realise that my evidence for this is very thin. Anyone have any figures, or even more extensive anecdotal evidence? Do people who sign up for cryonics believe in the Singularity more often than they should if the only factor is that the Singularity might make signing up for cryonics a better bet? (Etc.)
K Larson, I think Eliezer was wrong about bad political jokes, for two reasons. Firstly, a joke depends on its context, and it may not be possible to depoliticize a joke without losing something essential in the context. Secondly, like it or not, most of us do find it funny to see a disliked powerful figure get their comeuppance, which means that when assessing how good a joke is it's an error to penalize it for getting some of its laughs that way.
(But he was right when he said that finding what would otherwise be a bad joke funny is evidence that its target is playing the Hated Enemy role for you. Eliezer has been quite open about the fact that he greatly dislikes religion.)
And, for what it's worth, I think Eliezer's little drama does introduce one idea that not everyone's thought of (I don't claim that it's new, but novelty as such isn't all that valuable): that preservation of self-esteem might be an element in why the virgin-birth story succeeded. (For what it's worth, though, I think it much more likely to have arisen some time after the events themselves than to have been made up on the spot.)
Joseph, I think the externals of the Christmas and Easter stories (virgin birth arranged by God; agony, death, resurrection, again arranged by God) are pretty much equally coherent. (Coherence isn't their problem.) But the point of each story, for Christians, is something much harder to swallow: Christmas is supposed to be about the Incarnation (with Jesus somehow being entirely human, just as much as we are, and entirely God, etc.) and Easter about the Atonement (where the whole death-and-resurrection thing somehow enables God to forgive the sins of humanity when he couldn't before). Both seem pretty incoherent to me.
They both make good stories, if you don't think too hard about how they're supposed to work. I'm not sure that has much to do with their coherence. (Take a look at the "explanation" in TLTWATW for the Easter-like event. Lewis isn't even trying to deal with the really doubtfully-coherent bits, but he still resorts to entirely arbitrary stuff about Deep Magic and Deeper Magic.)
It's quite well established that stories tend to feel more plausible if they include a wealth of details, even though the presence of those details actually makes the story less probable. (It's more likely that you'll be abducted by aliens than that you'll be abducted by aliens so that they can perform weird sexual experiments on you.) So I'd be very hesitant about taking the fact that a story can be told satisfyingly as a sign that it's less improbable, or more coherent, than a story that can't be told so satisfyingly.
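The underlying probability fact, for the record:

```latex
\[
P(A \wedge B) \;\le\; P(A) \qquad \text{for any events } A, B,
\]
```

so the more detailed abduction story can never be more probable than the bare one, however much more plausible it feels.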
Jey, I think the dichotomy between religious and other beliefs (in how much offence disagreement causes) isn't so stark as it's sometimes painted. Random example: US politics; how would a staunch Reaganite Republican react to the suggestion that Reagan's policies were all deliberately designed simply to funnel money to his big-business pals? For that matter, how do biologists generally react when creationists accuse them (in effect) of a gigantic conspiracy to suppress the truth? I think there's at least some offence taken in both cases, and those accusations (rather than mere disagreement) seem to me to be parallel to Eliezer's story.
Caledonian, we should respect people who have daft beliefs for the same reason(s) we respect anyone else. Someone who views people as mere repositories of beliefs, and doles out respect solely on that basis, should not respect people whose beliefs are, on balance, daft. I don't think that's how most people operate. And having some daft beliefs isn't the same as having daft-on-balance beliefs.
Eliezer, I think you improved the story when you softened the suggestion of extreme promiscuity on Mary's part. The bit about crucifixion is (to my taste) an unsuccessful flourish, not least because (apologies for literal-mindedness here) the Romans would not have crucified someone for having his mother claim he was conceived by direct divine intervention. But having the friend be called Betty is a nice touch. (Wasn't she supposed to be a relation, not just a friend?)
Only because you think of Japanese schoolgirls and tentacle monsters once a minute.
Adirian (sorry for not noticing your response sooner), the situation is more like: we have a million data points and several models that all fit those points very precisely and all agree very precisely on how to interpolate between those points -- but if we try to use them to extrapolate wildly, into regions where in fact we have no way of getting any real data points, they diverge. It also turns out that within the region where we can actually get data -- where the models agree -- they don't agree merely by coincidence, but turn out to be mathematically equivalent to one another.
You are welcome to describe this situation by saying that the models are "completely and totally contradictory", but I think that would be pretty eccentric.
(This is of course merely an analogy. I think the reality is even less favourable to your case.)
rukidding, it's obvious that it's saved some lives (of people who would have been killed by Saddam Hussein and his minions) and cost some lives (of people killed by US forces, or by the people opposing them, or as a result of the general state of lawlessness and civil war in Iraq, or because the chaos there has produced poverty, poor healthcare, etc.), and certainly someone who is unable to consider both doesn't belong in the argument.
But if you're saying that no one "belongs in the argument" who can't make both a serious argument that on balance lives have been saved by the invasion and a serious argument that on balance lives have been lost by the invasion ... well, that's only true if in fact the evidence is rather evenly balanced, and I see no reason to think it is.
denis bider, the people who perpetrated the 2001-09-11 attacks died, and knew they were going to die, so others like them won't be deterred by the likelihood that the USA will go after them personally. It doesn't seem like the US's overreaction to those attacks has been all that effective in harming al Qaeda (I mean, bin Laden is still alive so far as anyone knows). It doesn't seem like it's been all that effective in making people who might have been sympathetic to groups like al Qaeda less so.
So I'm wondering how you expect the overreaction to deter other people who might be considering similar attacks.
Eliezer, I first saw the distinction between "natural" and "supernatural" made the way you describe in something by Richard Carrier. It was probably a blog entry from 2007-01, which points back to a couple of his earlier writings. I had a quick look at the 2003 one, and it mentions a few antecedents.
Nick: Oh, sorry, I forgot that there are still people who take the Copenhagen interpretation seriously. Though actually I suspect that they might just decree that observation by a reversible conscious observer doesn't count. That would hardly be less plausible than the Copenhagen interpretation itself. :-)
(I also have some doubt as to whether sufficiently faithful reversibility is feasible. It's not enough for the system to be restored to its prior state as far as macroscopic observations go; the reversal needs to be able to undo decoherence, so to speak. That seems like a very tall order.)
Adirian: the fact that their agreement-about-observations was predictable in advance doesn't make it any less an agreement. (And if you're talking only about the parts of those theories that are "theories about the reasons why", bracketing all the agreements about what's observed and how to calculate it, then I don't think you are entitled to call the things that disagree completely "models for modern theoretical physics".)
I think "completely and totally contradictory" is putting it too strongly, since they do in fact all agree about all observations we have ever been able to make or ever anticipate being able to make. Extreme verificationists would argue that the bits they disagree about are meaningless :-).
But some Great Thingies might not be readily splittable. For instance, consider the whole edifice of theoretical physics, which is a pretty good candidate for a genuinely great Thingy (though not of quite the same type as most of the Great Thingies under discussion here). Each bit makes most sense in the context of the whole structure, and you can only appreciate why a given piece of evidence is evidence for one bit if you have all the other bits available to do the calculations with.
Of course, all this could just indicate that the whole edifice of theoretical physics (if taken as anything more than a black box for predicting observations) is a self-referential self-supporting delusion, and in a manner of speaking it's not unlikely that that's so -- i.e., the next major advance in theoretical physics could well overturn all the fundamentals while leaving the empirical consequences almost exactly the same. Be that as it may, much of the value of theoretical physics comes from the fact that it is a Great Thingy and not just a collection of Little Thingies, and it seems like it would be a shame to adopt a mode of thinking that prevents us appreciating it as such.
rukidding, Eliezer has already said -- in this very comments thread -- that he isn't aiming to deconvert Christians but to use some features of Christianity as a case study.
The damages experiment, as described here, seems not to nail things down enough to say that what's going on is that damages are expressions of outrage on a scale with arbitrary modulus. Here's one alternative explanation that seems consistent with everything you've said: subjects vary considerably in their assessment of how effective a given level of damages is in deterring malfeasance, and that assessment influences (in the obvious way) their assessment of damages.
(I should add that I find the arbitrary-modulus explanation more plausible.)
Ouch, don't the units in that diagram hurt your brain? (Yeah, I understand what it means and it does make sense, but it looks soooo wrong. Especially in my part of the world where an ounce is a unit of mass or weight, not of volume.)