The Sin of Underconfidence
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T06:30:03.826Z · LW · GW · Legacy · 187 comments
There are three great besetting sins of rationalists in particular, and the third of these is underconfidence. Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.
But he's actually quite right to worry, and I worry too, and any adept rationalist will probably spend a fair amount of time worrying about it. When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That's what makes a lot of cognitive subtasks so troublesome—you know you're biased but you're not sure how much, and you don't know if you're correcting enough—and so perhaps you ought to correct a little more, and then a little more, but is that enough? Or have you, perhaps, far overshot? Are you now perhaps worse off than if you hadn't tried any correction?
You contemplate the matter, feeling more and more lost, and the very task of estimation begins to feel increasingly futile...
And when it comes to the particular questions of confidence, overconfidence, and underconfidence—being interpreted now in the broader sense, not just calibrated confidence intervals—then there is a natural tendency to cast overconfidence as the sin of pride, out of that other list which never warned against the improper use of humility or the abuse of doubt. To place yourself too high—to overreach your proper place—to think too much of yourself—to put yourself forward—to put down your fellows by implicit comparison—and the consequences of humiliation and being cast down, perhaps publicly—are these not loathsome and fearsome things?
To be too modest—seems lighter by comparison; it wouldn't be so humiliating to be called on it publicly, indeed, finding out that you're better than you imagined might come as a warm surprise; and to put yourself down, and others implicitly above, has a positive tinge of niceness about it, it's the sort of thing that Gandalf would do.
So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated for their overconfidence—heck, even if you've just read a couple of dozen—and you don't know exactly how overconfident you are—then yes, you might genuinely be in danger of nudging yourself a step too far down.
I have no perfect formula to give you that will counteract this. But I have an item or two of advice.
What is the danger of underconfidence?
Passing up opportunities. Not doing things you could have done, but didn't try (hard enough).
So here's a first item of advice: If there's a way to find out how good you are, the thing to do is test it. A hypothesis affords testing; hypotheses about your own abilities likewise. Once upon a time it seemed to me that I ought to be able to win at the AI-Box Experiment; and it seemed like a very doubtful and hubristic thought; so I tested it. Then later it seemed to me that I might be able to win even with large sums of money at stake, and I tested that, but I only won 1 time out of 3. So that was the limit of my ability at that time, and it was not necessary to argue myself upward or downward, because I could just test it.
One of the chief ways that smart people end up stupid, is by getting so used to winning that they stick to places where they know they can win—meaning that they never stretch their abilities, they never try anything difficult.
It is said that this is linked to defining yourself in terms of your "intelligence" rather than "effort", because then winning easily is a sign of your "intelligence", whereas failing on a hard problem could have been interpreted in terms of a good effort.
Now, I am not quite sure this is how an adept rationalist should think about these things: rationality is systematized winning and trying to try seems like a path to failure. I would put it this way: A hypothesis affords testing! If you don't know whether you'll win on a hard problem—then challenge your rationality to discover your current level. I don't usually hold with congratulating yourself on having tried—it seems like a bad mental habit to me—but surely not trying is even worse. If you have cultivated a general habit of confronting challenges, and won on at least some of them, then you may, perhaps, think to yourself "I did keep up my habit of confronting challenges, and will do so next time as well". You may also think to yourself "I have gained valuable information about my current level and where I need improvement", so long as you properly complete the thought, "I shall try not to gain this same valuable information again next time".
If you win every time, it means you aren't stretching yourself enough. But you should seriously try to win every time. And if you console yourself too much for failure, you lose your winning spirit and become a scrub.
When I try to imagine what a fictional master of the Competitive Conspiracy would say about this, it comes out something like: "It's not okay to lose. But the hurt of losing is not something so scary that you should flee the challenge for fear of it. It's not so scary that you have to carefully avoid feeling it, or refuse to admit that you lost and lost hard. Losing is supposed to hurt. If it didn't hurt you wouldn't be a Competitor. And there's no Competitor who never knows the pain of losing. Now get out there and win."
Cultivate a habit of confronting challenges—not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you. I recently read of a certain theist that he had defeated Christopher Hitchens in a debate (severely so; this was said by atheists). And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out. Note that this is not self-handicapping in the classic sense—if the debate is indeed arranged (I haven't yet heard back), and I do not prepare, and I fail, then I do lose those stakes of myself that I have put up; I gain information about my limits; I have not given myself anything I consider an excuse for losing.
Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost. In that case you make everything as easy for yourself as possible. To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.
A subtler form of underconfidence is losing your forward momentum—amid all the things you realize that humans are doing wrong, that you used to be doing wrong, of which you are probably still doing some wrong. You become timid; you question yourself but don't answer the self-questions and move on; when you hypothesize your own inability you do not put that hypothesis to the test.
Perhaps without there ever being a watershed moment when you deliberately, self-visibly decide not to try at some particular test... you just.... slow..... down......
It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...
There's not enough hope of triumph to inspire you to try hard...
When you consider doing any new thing, a dozen questions about your ability at once leap into your mind, and it does not occur to you that you could answer the questions by testing yourself...
And having read so much wisdom of human flaws, it seems that the course of wisdom is ever doubting (never resolving doubts), ever the humility of refusal (never the humility of preparation), and just generally, that it is wise to say worse and worse things about human abilities, to pass into feel-good feel-bad cynicism.
And so my last piece of advice is another perspective from which to view the problem—by which to judge any potential habit of thought you might adopt—and that is to ask:
Does this way of thinking make me stronger, or weaker? Really truly?
I have previously spoken of the danger of reasonableness—the reasonable-sounding argument that we should two-box on Newcomb's problem, the reasonable-sounding argument that we can't know anything due to the problem of induction, the reasonable-sounding argument that we will be better off on average if we always adopt the majority belief, and other such impediments to the Way. "Does it win?" is one question you could ask to get an alternate perspective. Another, slightly different perspective is to ask, "Does this way of thinking make me stronger, or weaker?" Does constantly reminding yourself to doubt everything make you stronger, or weaker? Does never resolving or decreasing those doubts make you stronger, or weaker? Does undergoing a deliberate crisis of faith in the face of uncertainty make you stronger, or weaker? Does answering every objection with a humble confession of your fallibility make you stronger, or weaker?
Are your current attempts to compensate for possible overconfidence making you stronger, or weaker? Hint: If you are taking more precautions, more scrupulously trying to test yourself, asking friends for advice, working your way up to big things incrementally, or still failing sometimes but less often than you used to, you are probably getting stronger. If you are never failing, avoiding challenges, and feeling generally hopeless and dispirited, you are probably getting weaker.
I learned the first form of this rule at a very early age, when I was practicing for a certain math test, and found that my score was going down with each practice test I took, and noticed going over the answer sheet that I had been pencilling in the correct answers and erasing them. So I said to myself, "All right, this time I'm going to use the Force and act on instinct", and my score shot up to above what it had been in the beginning, and on the real test it was higher still. So that was how I learned that doubting yourself does not always make you stronger—especially if it interferes with your ability to be moved by good information, such as your math intuitions. (But I did need the test to tell me this!)
Underconfidence is not a unique sin of rationalists alone. But it is a particular danger into which the attempt to be rational can lead you. And it is a stopping mistake—an error which prevents you from gaining that further experience which would correct the error.
Because underconfidence actually does seem quite common among aspiring rationalists whom I meet (though rather less common among rationalists who have become famous role models), I would indeed name it third among the three besetting sins of rationalists.
Comments sorted by top scores.
comment by [deleted] · 2009-04-20T10:05:41.975Z · LW(p) · GW(p)
I wonder if the decline of apprenticeships has made overconfidence and underconfidence more common and more severe.
I'm not a history expert, but it seems to me that a blacksmith's apprentice 700 years ago wouldn't have had to worry about over/underconfidence in his skill. (Gender-neutral pronouns intentionally not used here!) He would have known exactly how skilled he was by comparing himself to his master every day, and his master's skill would have been a known quantity, since his master had been accepted by a guild of mutually recognized masters.
Nowadays, because of several factors, calibrating your judgement of your skill seems to be a lot harder. Our education system is completely different, and regardless of whatever else it does, it doesn't seem to be very good at providing reliable feedback to its students in a way that makes them understand its importance and respond accordingly. Our blacksmith's apprentice (let's call him John) knows when he's screwed up - the sword or whatever that he's made breaks, or his master points out how it's flawed. And John knows why this is important - if he doesn't fix the problem, he's not going to be able to earn a living.
Whereas a modern schoolkid (let's call him Jaden) may be absolutely unprepared to deal with math, but he doesn't know exactly how many years he's behind (it's hard enough to get this information in aggregate, and it seems to be rarely provided to the students themselves on an individual basis - no one is told "you are 3 years behind where you ought to be"). And Jaden has absolutely no clue why that matters, since the link between math and his future employment isn't obvious to him, and no one's explaining it to him. (School isn't for learning; as Paul Graham has explained, "Officially the purpose of schools is to teach kids. In fact their primary purpose is to keep kids locked up in one place for a big chunk of the day so adults can get things done. And I have no problem with this: in a specialized industrial society, it would be a disaster to have kids running around loose.")
Another modern schoolkid (let's call her Jaina) may be really skilled at math, but testing won't indicate this strongly enough (it works both ways; tests saturate at the high end - especially if they're targeting a low level of achievement for the rest of the class - and "you are 3 years ahead of everyone else in this room" is not feedback that is commonly given). And there's a good chance it won't be obvious to her how important this is, and how important becoming even more skilled is. And if she ends up being underconfident in her ability, and the feedback loop ("I know how skilled I am, I know why becoming stronger is important, and I know what I need to do") isn't established, then instead of learning plasma physics and working on ITER or DEMO, she goes into marketing or something. Maybe doing worthy things, but not being as awesome as she could have been.
My point, after this wondering, is that I agree with this post, and want to elaborate: structuring what you do so that you test yourself in the process of doing it is a good way to establish a feedback loop that increases your skill and the accuracy of your confidence in it. I find nothing wrong with the debating example in this post, but I worry that it makes self-testing sound like something that you should go out and do, separate from your everyday work. (Part of this, I think, is due to Eliezer's very unusual occupation.) My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work. Successfully completing it, or coming close, has allowed me to build up my skill ("the compiler in my head") and avoid the danger of underconfidence.
↑ comment by Will_Newsome · 2010-08-09T17:41:04.027Z · LW(p) · GW(p)
A friend of mine, normal in most ways, has exceptionally good mental imagery, such that one time she visited my house and saw a somewhat complex 3-piece metalwork puzzle in my living room and thought about it later that evening after she had left, and was able to solve it within moments of picking it up when she visited a second time. At first I was amazed at this, but I soon became more amazed that she didn't find this odd, and that no one had ever realized she had any particular affinity for this kind of thing in all the time she'd been in school. I'm curious as to how many cognitive skills like this there are to excel at and if many people are actually particularly good at one or many of them without realizing it due to a lack of good tests for various kinds of cognition.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:15:30.058Z · LW(p) · GW(p)
My usual self-testing example is something like "can I write this program correctly on the very first try?". That's a hard challenge, integrated into my everyday work.
I should try to remember to try this the next time I have a short piece of code to write. Furthermore, it's the sort of thing that makes me slightly uncomfortable and is therefore easy to forget, so I should try harder to remember it.
In general, this sort of thing seems like a very useful technique if you can do it without endangering your work. Modded parent up.
↑ comment by LongInTheTooth · 2009-04-20T14:50:37.105Z · LW(p) · GW(p)
Without risk, there is no growth.
If your practice isn't making you feel scared and uncomfortable, it's not helping. Imagine training for a running race without any workouts that raise your heart rate and make you breathe hard.
Feeling out of your comfort zone and at risk of failure is something everybody should seek out on a regular basis.
↑ comment by DanielLC · 2013-05-08T06:02:08.923Z · LW(p) · GW(p)
My usual self-testing example is something like "can I write this program correctly on the very first try?".
I never thought of that as a thing you could do. I think when my code compiles on the first try, it's more often than not a sign of something very wrong. For example, the last time it happened was because I forgot to add the file I was working on to the makefile.
Perhaps I should try to learn to code more precisely.
↑ comment by [deleted] · 2013-05-08T06:36:13.939Z · LW(p) · GW(p)
Heh. (You should use makefiles that automatically build new files, and automatically sense dependencies for rebuild.)
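To be concrete, here is a minimal sketch of such a makefile. (This is my own illustration, not anything from the thread; it assumes a GCC/Clang-style compiler for the -MMD/-MP dependency flags, and 'myprog' is just a placeholder target name.)

    # Build every .c file in the directory; new files never need registering by hand.
    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)
    DEPS := $(OBJS:.o=.d)

    CFLAGS += -MMD -MP    # emit a .d file listing each object's header dependencies

    myprog: $(OBJS)       # the recipe line below must begin with a tab
    	$(CC) $(CFLAGS) -o $@ $^

    -include $(DEPS)      # pull in the auto-generated dependency files

With something like this, forgetting to list a new source file (the failure mode DanielLC describes) simply can't happen, and touching a header rebuilds exactly the objects that include it.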
As I recall, Eliezer said somewhere (I'm too tired to Google for it) that there is no limit to the amount of intelligence that you can use while programming.
comment by Paul Crowley (ciphergoth) · 2009-04-20T07:09:36.655Z · LW(p) · GW(p)
it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that
I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.
↑ comment by CronoDAS · 2009-04-20T18:29:39.906Z · LW(p) · GW(p)
Scientists are frequently advised to never participate in a live debate with a creationist. This is because being right has absolutely nothing to do with winning.
"Debating creationists on the topic of evolution is rather like playing chess with a pigeon - it knocks the pieces over, craps on the board, and flies back to its flock to claim victory." -- Scott D. Weitzenhoffer
Debates are not a rationality competition. They're a Dark Arts competition, in which the goal is to use whatever underhanded trick you can come up with in order to convince somebody to side with you. Evidence doesn't matter, because it's trivial to simply lie your ass off and get away with it.
The only kind of debates worth having are written debates, in which, when someone tells a blatant lie, you can look up the truth somewhere and take all the space you need to explain why it's a lie - and "cite your sources, or you forfeit" is a reasonable rule.
↑ comment by CannibalSmith · 2009-04-20T10:42:47.598Z · LW(p) · GW(p)
Indeed. Association fallacy. Eliezer might not think much of his loss, but it would still be seen by people as a loss for "the atheists" and a victory for "the theists". Debate to win!
↑ comment by Scott Alexander (Yvain) · 2009-04-20T07:22:58.257Z · LW(p) · GW(p)
Who is this theist? I'm interested in watching these debates. (though obviously without knowledge of the specific case, I agree with ciphergoth. It's not just about you, it's about whoever's watching.)
↑ comment by gjm · 2009-04-20T09:30:26.173Z · LW(p) · GW(p)
I agree with ciphergoth's guess.
Eliezer: I agree with ciphergoth and Yvain. Debating, at least as the Theist Who (Apparently) Must Not Be Named is concerned, is a performance art more than it is a form of intellectual inquiry, and unless you've done a lot of it you run the severe risk of getting eaten by someone who has, especially if you decide to handicap yourself. If you engage in such a debate, the chances are that at least some people will watch or hear it, or merely learn of the result, and change their opinions as a result. (Probably few will change so far as to convert or deconvert: maybe none. Many will find that their views become more or less entrenched.)
What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?
(For what it's worth, I think it's rather unlikely that TTWMNBN will agree to a Bloggingheads-style debate. He would want it to be public. And he might decide that Eliezer isn't high-enough-profile to be worth debating. Remember: for him, among other things, this is propaganda.)
[EDITED a few minutes after posting to remove the explicit mention of the theist's name]
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T10:49:26.389Z · LW(p) · GW(p)
For what it's worth, I think it's rather unlikely that TTWMNBN will agree to a Bloggingheads-style debate. He would want it to be public. And he might decide that Eliezer isn't high-enough-profile to be worth debating. Remember: for him, among other things, this is propaganda.
Entirely agreed. There's a chance such a debate could be arranged if the book is a success, though.
↑ comment by JulianMorrison · 2009-04-20T09:33:53.697Z · LW(p) · GW(p)
Rot13 is your friend. (Edit: fixed above)
↑ comment by gjm · 2009-04-20T09:38:54.390Z · LW(p) · GW(p)
I already knew that, as you might have inferred from "I agree with ciphergoth's guess" and, er, the fact that I named him in my last paragraph. (That was an oversight which I am about to correct.) Perhaps I should have been more explicit about what guess I was agreeing with.
I don't know why the coyness, but perhaps TTWMNBN is suspected of googling for his own name every now and then. Or perhaps ciphergoth was just (semi-)respecting Eliezer's decision not to name him.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T10:49:50.438Z · LW(p) · GW(p)
Semi-respecting.
↑ comment by AllanCrossman · 2009-04-20T14:19:38.091Z · LW(p) · GW(p)
But you haven't really not named him. Anyone can decipher these posts with a small amount of effort. All that's happened is that this thread has become annoying to read.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T07:40:56.031Z · LW(p) · GW(p)
Jvyyvnz Ynar Penvt (I'm guessing; certainly the only time I've heard it credibly said that Hitchens lost a debate with a theist)
↑ comment by JulianMorrison · 2009-04-20T09:27:48.592Z · LW(p) · GW(p)
Well, I already know the proper counter to his pet argument. Hat tip, Tnel Qerfpure for explaining gvzr.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T09:39:14.079Z · LW(p) · GW(p)
He has piles of pet arguments, that's part of his technique; he fires off so many arguments that you can't answer them all. I've watched him and put a lot of thought into how I'd answer him, and I'm still not sure how I can fit the problems with his arguments into the time available in a debate, but I'd start with asking either him or the audience to pick which of his arguments I was going to counter in my reply.
In particular, I still don't have a counter to the fine-tuning argument which is short, assumes no foreknowledge, and is entirely intellectually honest.
Could you point me to the counter-argument you rot13'd? Google isn't finding it for me. Thanks!
↑ comment by drnickbone · 2012-06-16T11:41:20.524Z · LW(p) · GW(p)
In particular, I still don't have a counter to the fine-tuning argument which is short, assumes no foreknowledge, and is entirely intellectually honest.
The "fine-tuning" argument falls into the script:
- Here is a puzzle that scientists can't currently explain
- God explains it
- Therefore God exists
If you accept that script you lose the debate, because there will always be some odd fact that can't currently be explained. (And even if it can actually be explained, you won't have time to explain it within the limits of the debate and the audience's knowledge.)
The trap is that it is a very tempting mistake to try and solve the puzzle yourself. It's highly unlikely that you will succeed, and your opponent will already know the flaws in (and counter-arguments to) most of the existing solution attempts, so can throw those at you. Or if you support a fringe theory (which isn't generally considered in the solution space, but might work), the opponent can portray you as a marginal loon.
I suspect that the theist wins these debates because most opponents fall into that trap. They are smart enough that they think that they can resolve the puzzle in question, and so walk right into it. By debating domain experts, the theist positively invites them into the trap.
How I might respond. "I can't currently explain the values of physical constants, and as far as I'm aware no-one else can either. If you think you have an explanation, you can do the scientific community a great service. Just formulate your 'God' theory as a set of equations from which we can derive those values, including some values or degrees of precision that we don't currently know. Propose experiments by which we can test that theory. Submit to a leading physics journal, and get physicists to perform the experiments. When you've done that, you can claim evidence for your theory, and I will be more inclined to support it. You can't do it though, can you?"
↑ comment by Psychohistorian · 2009-04-20T15:34:53.203Z · LW(p) · GW(p)
The anthropic principle does technically work, but it admittedly feels like a cheat, and I'd expect most audiences not already familiar with it to consider it such.
It's not a knock-down counterargument, but it seems to me we don't know enough about physics to say it's actually possible that the universe could be fine-tuned differently. Sure, we can look at a lot of fundamental constants and say, "If that one were different by 1 unit, fusion wouldn't occur," but we don't know if they are interconnected, and I don't think we can accurately model what would occur, so it's possible that it couldn't be different, that other constants would vary with it, and/or that it would make a universe so entirely different from our own that we have no idea what it would be like, so it's quite possible it could support life of some form.
Or, reduced into something more succinct, we don't actually know what the universe would look like if we changed fundamental constants (if this is even possible) because the results are beyond our ability to model, so it's quite possible that most possible configurations would support some form of life.
Multiverse works too, but again feels like cheating. I also admit there may be facts that undermine this, I'm not super-familiar with the necessary physics.
↑ comment by Furcas · 2009-04-20T18:45:05.443Z · LW(p) · GW(p)
If there is no multiverse, "Why is the universe the way it is rather than any other way?" is a perfectly good question to which we haven't found the answer yet. However, theists don't merely ask that question, they use our ignorance as an argument for the existence of a deity. They think a creator is the best explanation for fine-tuning. The obvious counter-argument is that not only is a creator not the best explanation, it's not an explanation at all. We can ask the exact same question about the creator that we asked about the universe: Why is the creator what it is rather than something else? Why isn't 'He' something that couldn't be called a 'creator' at all, like a quark, or a squirrel? Or, to put the whole thing in the right perspective, why is the greater universe formed by the combination of our universe and its creator the way it is, rather than any other way?
At this point the theist usually says that God is necessary, or outside of time, which could just as easily be true of the universe as we know it. Or the theist might say that God is eternal, while our universe probably isn't, which is irrelevant. None of these alleged characteristics of God's explain why He's fine-tuned, anyway.
↑ comment by byrnema · 2009-04-20T19:14:56.871Z · LW(p) · GW(p)
I was thinking along similar lines but didn't post because I was talking myself in circles. So I gave up and weighted the hypothesis that this kind of philosophy is insoluble. Here's what I wrote:
In such a debate, what is the end goal -- what counts as winning the debate question? If they provide a hypothesis that invokes God, is it sufficient to just provide another plausible hypothesis that doesn't? (Then, done.)
Or do you really need to address the root of the root of the question: Why are we here? (Even if you have multi-verses, why are they all here?) And "why" isn't really the question anyway. It's just a complaint, "I don't understand the source of everything." ... "If there is a source 'G', I don't understand the source of 'G'."
You can't answer that question: The property "always existing" or the transition between "not existing and then existing" is a mystery; it's the one thing atheists and theists can agree on. How does giving it a name mean anything more? So I think the best argument is that invoking God doesn't answer the question either.
Unless the problem is really about whether or not this is evidence that something wanted us to be here? Then finding plausible scientific hypotheses for X, Y, Z would never answer the question. You would always have remaining: did someone want this all to be so?
And I got stuck there, because "if something exists, to what extent was it willed?" has no meaning to me at the moment.
↑ comment by billswift · 2009-04-20T13:25:10.726Z · LW(p) · GW(p)
I haven't read this particular version of the fine-tuning argument, but the general counter-argument is that evolution fine-tuned life (humans) for the universe, not that the universe was fine-tuned for humans.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T14:06:26.298Z · LW(p) · GW(p)
Unfortunately, that doesn't work. Without the fine tuning, the Universe consists of undifferentiated mush, and evolution is impossible.
↑ comment by billswift · 2009-04-20T15:33:25.110Z · LW(p) · GW(p)
That isn't any version of the fine tuning argument I've heard. And it just sounds plain stupid. Who makes this particular argument, and more importantly how do they justify it? It sounds like some wild claim that is just too irrational to refute.
↑ comment by AllanCrossman · 2009-04-20T14:07:29.538Z · LW(p) · GW(p)
evolution fine-tuned life (humans) for the universe, not that the universe was fine-tuned for humans.
I don't think this is good enough. There seem to be several physical constants that - if they had been slightly different - would have made any sort of life unlikely.
↑ comment by Alicorn · 2009-04-20T14:33:27.026Z · LW(p) · GW(p)
That part can be deproblematized (if you will forgive the nonce word) by the anthropic principle: if the universe were unsuited for life, there would be no life to notice that and remark upon it.
↑ comment by DanielLC · 2013-05-08T06:10:01.946Z · LW(p) · GW(p)
I don't accept that form of the anthropic principle. I am on a planet, even though planets make up only a tiny portion of the universe, because there's (almost) nobody not on a planet to remark on it. The anthropic principle says that you will be where a person is. However, it can't change the universe. The laws of physics aren't going to rewrite themselves just because there was nobody there to see them.
That being said, if you combine this with multiple universes, it works. The multiverse is obviously suitable for life somewhere. We are going to end up in one of those places.
↑ comment by Luke_A_Somers · 2013-05-08T14:48:38.693Z · LW(p) · GW(p)
Even in the case of a single infinite universe, the anthropic principle does help - it means that any arbitrarily low success rate for forming life is equally acceptable, so long as it is not identically zero.
↑ comment by DanielLC · 2013-05-08T23:06:03.591Z · LW(p) · GW(p)
In that case, it would look like the universal constants don't support life at all, but you somehow managed to get lucky and survive anyway, rather than the universal constants appearing to be fine-tuned.
If the "universal constants" are different in different areas, then it would basically be a multiverse.
↑ comment by Luke_A_Somers · 2013-05-09T01:58:22.439Z · LW(p) · GW(p)
As I understand it, it's possible to pick out even better constants than what we have. For instance, having a fine structure constant between 6 and 7 would cause all atoms with at least 6 protons to be chemically identical to carbon due to 'atomic collapse'. That would probably help life along noticeably.
As things stand, we're pretty marginal. There's a whole lot of not-life out there.
↑ comment by DanielLC · 2013-05-09T03:58:41.463Z · LW(p) · GW(p)
As I understand it, the vast majority of constants are worse than what we have now. You might be able to find something better, but if this was just chance, we're very lucky as it is. Since you're not usually that lucky, it probably wasn't chance.
↑ comment by A1987dM (army1987) · 2013-05-09T17:52:00.067Z · LW(p) · GW(p)
It would probably also completely screw up the triple-alpha process, so that much less carbon will be produced in stars -- assuming stars would be possible in that situation in the first place.
↑ comment by JoshuaZ · 2013-05-09T04:05:57.691Z · LW(p) · GW(p)
Would that help really? Most life requires all of CHNOPS. And pretty much all complex life requires at least a few heavier elements, especially iron, copper, silicon, selenium, chlorine, magnesium, zinc, and iodine. Life won't do much if one can't get any elements heavier than carbon.
↑ comment by Luke_A_Somers · 2013-05-09T05:13:25.041Z · LW(p) · GW(p)
It obviously wouldn't be life exactly as we know it, no! I'm pretty confident that if you replaced all the elements heavier than carbon with carbon, some form of life would be able to emerge. Carbon is where the complexity comes from - everything else is optimization.
Seriously, that's the most blatant case of the failure of imagination fallacy I've seen since I stopped cruising creationist discussion boards.
↑ comment by JoshuaZ · 2013-05-09T15:17:00.514Z · LW(p) · GW(p)
I'm substantially less convinced. While carbon is the main cause of complexity, that's still carbon with other elements. Your options in this hypothetical are hydrogen, helium, lithium, beryllium, boron and carbon and that's it. Helium is effectively out (I think, I don't know enough to be that confident that basic bonding behavior will be that similar when you've drastically altered the fine structure constant.) The chemistry for that set isn't nearly as complicated as that involving full CHNOPS. And the relevant question isn't "can life form with these elements" but rather "how likely is it?" and "how likely is complex life to form"?
↑ comment by AllanCrossman · 2009-04-20T14:45:32.316Z · LW(p) · GW(p)
if the universe were unsuited for life, there would be no life to notice that and remark upon it.
True. But since a universe unsuitable for life seems overwhelmingly the more probable situation, we can still ask why it isn't so.
(My own feeling is that the problem has to be resolved by either "God" or "a multiverse". The idea that there's precisely one universe and it just happens to have the conditions for life seems extraordinary.)
↑ comment by Psy-Kosh · 2009-04-20T20:56:28.533Z · LW(p) · GW(p)
My understanding (I'd have to dig out references) is that the fine tuning may not be as fine as generally believed. Ah, the wikipedia page on the argument has some references on this: http://en.wikipedia.org/wiki/Fine-tuned_Universe#Disputes_on_the_existence_of_fine-tuning
In addition to the anthropic-type arguments, some theoretical work seems to suggest that the fine tuning isn't, i.e., that we don't even need to invoke anthropic reasoning too strongly. Heck, supposedly one can even have stars in a universe with no weak interaction at all.
So, even without appealing to anthropic-style reasoning in multiverses (which I'm not actually opposed to, but there's stuff there that I still don't understand: Born stats, apparent breakdown of the Aumann Agreement Theorem, etc., so it's too easy to get stuff wrong), it may very well be that the fine tuning stuff can be refuted by simply pointing out that, looking at the actual physics, the tuning is rather less fine than claimed.
↑ comment by byrnema · 2009-04-21T00:37:39.741Z · LW(p) · GW(p)
Exactly. The parameters we have define this universe. Any complex system -- presumably most if not all universes -- would have complex patterns. You would just need patterns to evolve that are self-promoting (i.e., accumulative) and evolving, and eventually a sub-pattern will evolve that significantly meta-references. Given that replicating forms can result from simple automata rules and self-referencing appears randomly (in a formal sense) all over the place in a random string (Gödel), it doesn't seem so improbable for such a pattern to emerge. In fact, an interesting question is why is there only one "life" that we know of (i.e., carbon-based)? Once we understand the mechanism of consciousness, we may find that it duplicates elsewhere -- perhaps not in patterns that are accumulative and evolving, but briefly, spontaneously. This is totally idle speculation of course.
Another argument: There's nothing in Physics that says there isn't a mechanism for how the parameters are chosen. It's just another mystery that hasn't been solved yet -- so far, to date, God has reliably delegated answers regarding questions about the empirical world to Science.
↑ comment by gjm · 2009-04-20T09:41:18.942Z · LW(p) · GW(p)
I'd start with asking either him or the audience to pick which of his arguments I was going to counter in my reply.
Yes, that's something I've often thought too. (Not only about this particular theist; the practice of throwing up more not-very-good arguments than can be refuted in the time available seems to be commonplace in debates about religious topics. Quite possibly in all debates, but I haven't watched a broad enough sample to know.)
↑ comment by JulianMorrison · 2009-04-20T09:50:48.674Z · LW(p) · GW(p)
Gur Xnynz Pbfzbybtvpny Nethzrag (via Wikipedia). Counter argument in ISBN 0262042339, where gvzr is explained as fhpprffvir senzrf juvpu vapernfr va pbeeryngvba njnl sebz gur bevtvany fgngr. Nothing in physics requires the bevtvany fgngr to have a pnhfr. It might have a ernfba, but you can't spin a theology around that.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T10:46:19.590Z · LW(p) · GW(p)
Thanks. That doesn't sound like the counter-argument I'd present.
↑ comment by AllanCrossman · 2009-04-20T13:58:40.968Z · LW(p) · GW(p)
Why are we talking in ROT-13?
↑ comment by JulianMorrison · 2009-04-20T07:48:53.458Z · LW(p) · GW(p)
No, winning is good but losing is also useful - we ought to permanently eliminate from the corpus any argument that fails. Even if it wouldn't fail against a blockhead without the intellectual muscle to finesse a counter.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T09:54:39.544Z · LW(p) · GW(p)
Losing is a lot more informative if we build on what we learned last time, don't you think?
↑ comment by Cameron_Taylor · 2009-04-20T09:50:31.665Z · LW(p) · GW(p)
I urge you to prepare properly. Not only Hitchens but Richard Carrier and several other atheists have been humiliated in debate with him, by their own admission. Winning at all is challenge enough, and would be a great service to the world. Given how much of a blow you would find it to lose having fully prepared, I urge you to reconsider whether you're self-handicapping.
Eliezer will be humiliated. Even if Eliezer prepares for the debate he will still lose. Eliezer spends too much time thinking rationally to be a match for a master debater. I've seen him on Bloggingheads. He doesn't spend nearly enough energy producing the kind of bullshit you are supposed to throw together if you want to be considered victorious in a debate.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T09:57:35.994Z · LW(p) · GW(p)
I disagree; I watched Eliezer vs Adam Frank, and at several points I paused it, trying to work out what I'd say in response to Frank's arguments. I still found that Eliezer got across the counterarguments in a far neater way when I unpaused, and he had a lot less time than I did.
(BTW, after hearing that I also learned how his name is pronounced, so I'm better at spelling it correctly: it's Eli-Ezer, four syllables.)
↑ comment by Cameron_Taylor · 2009-04-20T13:49:34.937Z · LW(p) · GW(p)
I disagree; I watched Eliezer vs Adam Frank, and at several points I paused it, trying to work out what I'd say in response to Frank's arguments. I still found that Eliezer got across the counterarguments in a far neater way when I unpaused, and he had a lot less time than I did.
I have not observed that getting across counterarguments in a neat way is a particularly vital element of what it takes to 'win' a debate.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:29:45.678Z · LW(p) · GW(p)
I'd read Frank's book. (And I did try to direct him to the webpages whereby he could have read my stuff.) But I think I could've done it equally well on the fly.
comment by Paul Crowley (ciphergoth) · 2009-04-20T10:59:16.814Z · LW(p) · GW(p)
Unfair debate proposal
You want a debate in which the tables are tilted against you? I see a way to do that which doesn't carry the risks of your current proposal.
A bunch of us get together on an IRC channel and agree to debate you. We thrash out our initial serve; we then spring the topic and our initial serve on you. You must counter immediately, with no time to prepare. We then go away and mull over your counter, and agree a response, which you must again immediately respond to.
We can give ourselves more speaking time than you in each exchange, too, if you want to tilt the tables further (I'm imagining the actual serves and responses being delivered as video).
↑ comment by byrnema · 2009-04-20T17:41:17.919Z · LW(p) · GW(p)
Since Eliezer hasn't prepared by watching earlier debates, one solution could be to just use arguments from the theist's past debates in a simulated debate. As Eliezer prefers, he wouldn't prepare and would have to answer questions immediately.
There are two drawbacks: first, it would just be "us" evaluating whether Eliezer performed well (but then, debate performance is always somewhat subjective); second, we would lose the interaction of question, response and follow-up question.
Nevertheless, Eliezer's off-the-cuff responses to the theist's past questions could be informative.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:10:20.544Z · LW(p) · GW(p)
You're not theists; a handicap is more appropriate if we're going to be debating theology with you taking the positive... but this does sound interesting, so long as we can find a debate position that I agree with but that others are willing to take the negative of.
↑ comment by SoullessAutomaton · 2009-04-20T21:10:36.404Z · LW(p) · GW(p)
I'm pretty sure it's not required that one agree with a position to debate in its favor.
↑ comment by Vladimir_Nesov · 2009-04-20T17:19:50.198Z · LW(p) · GW(p)
This triggered an idea about paranoid debating: require players to submit a preliminary answer in the first few seconds of being presented with the question, then debate.
comment by Eric · 2009-04-25T00:08:05.171Z · LW(p) · GW(p)
I've found some of the characterizations of Craig's arguments and debate style baffling.
When he debates the existence of god, he always delivers the same five arguments (technically, it's four: his fifth claim is that god can be known directly, independently of any argument). He develops these arguments as carefully as time allows, and defends each of his premises. He uses the kalam cosmological argument, the fine tuning argument, the moral argument, and the argument from the resurrection of Jesus. This can hardly be characterized as dumping.
Also, his arguments are logically valid; you won't see any 'brain teaser, therefore god!' moves from him. He's not only a 'theologian'; he's a trained philosopher (he actually has two earned PhDs, one in philosophy and one in theology).
Finally, Craig is at his best when it comes to his responses. He is extremely quick, and is very adept at both responding to criticisms of his arguments, and at taking his opponent's arguments apart.
Debating William Lane Craig on the topic of god's existence without preparation would be as ill advised as taking on a well trained UFC fighter in the octagon without preparation. To extend the analogy further, it would be like thinking it's a good idea because you've won a couple of street fights and want to test yourself.
↑ comment by Jack · 2009-04-25T00:27:55.671Z · LW(p) · GW(p)
I don't think it's a good idea either. But the fact that the debate would be on Bloggingheads rather than in front of an audience with formal speeches and timed rebuttals definitely helps Eliezer. He's free to ask questions, clarify things, etc.
So really it's like fighting a UFC fighter in an alley. Not a good idea, but I guess you might have a chance.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-25T00:47:55.116Z · LW(p) · GW(p)
I'd tend to assume that the absence of a moderator makes it easier to abuse the more inexperienced party.
↑ comment by Jack · 2009-04-25T01:04:02.820Z · LW(p) · GW(p)
Well, you'd have more experience with the medium. But at a formal debate he'd give five arguments, each of which would take your entire speaking time to respond to. On Bloggingheads you can ask for his best argument and then spend as much time as you need on it (or within Bloggingheads' limits, I guess). Also, if you watch formal debates between theists and atheists, the participants often avoid answering the difficult questions. In particular, theists always avoid explaining how invoking God doesn't merely obscure and push the question of creation back a step. This medium gives you an opportunity to press things, and I like to think that opportunity is an advantage for the side of truth.
Still, I'm sure he has an answer to that question. The guy does this for a living; I think even if you prepare it would be a good test of your skills.
↑ comment by Robi Rahman (robirahman) · 2016-03-17T04:23:53.776Z · LW(p) · GW(p)
Did this debate ever end up happening? If it did, is there a transcript available somewhere?
Edit: Found in another comment that WLC turned down the debate.
comment by Mulciber · 2009-04-20T19:54:09.817Z · LW(p) · GW(p)
It sounds as though you're viewing the debate as a chance to test your own abilities at improvisational performance. That's the wrong goal. Your goal should be to win.
"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him."
By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.
In choosing not to try for that, you'll end up sending the message that rationalists don't play to win. You and I know this isn't quite accurate -- what you're doing is more like a rationalist choosing to lose a board game, because that served some other, real purpose of his -- but that is still how it will come across. Do you consider this to be acceptable?
↑ comment by Peter_de_Blanc · 2009-04-21T01:15:04.303Z · LW(p) · GW(p)
This isn't about choosing to lose. It's more about exploration vs. exploitation. If you always use the strategy you currently think is the best, then you won't get the information you need to improve.
↑ comment by Mulciber · 2009-04-21T02:36:46.573Z · LW(p) · GW(p)
That seems contradictory. If you actually thought that always using one strategy would have this obvious disadvantage over another course of action, then doing so would by definition not be "the strategy you currently think is best."
↑ comment by JamesAndrix · 2009-04-21T15:30:10.485Z · LW(p) · GW(p)
Experiments can always be framed as a waste of resources.
There is always something you're using up that you could put to direct productive use, even if it's just your time.
↑ comment by jimmy · 2009-04-21T06:45:03.947Z · LW(p) · GW(p)
You're confusing meta strategies and strategies. The best meta strategy might be implementing strategies that do not have the highest chance of succeeding, simply because you can use the information you gain to choose the actual best strategy when it matters.
Consider the case where you're trying to roll a die many times and get the most green sides coming up, and you can choose between a die that has 3 green sides, and one that probably (p = 0.9) has 2 green sides, but might (p = 0.1) have 4 green sides. If the game lasts 1 roll, you choose the first die. If the game lasts many many rolls, you choose the other die until you're convinced that it only has 2 green sides, even though this is expected to lose in the short term.
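A rough simulation of that setup bears it out (my own sketch; the 1,000-roll horizon, the 30-roll trial period, and the simple "commit to whichever die looked better" rule are arbitrary illustrative choices, not anything specified above):

    import random

    GAMES = 10_000     # independent games to average over
    HORIZON = 1_000    # rolls per game
    TRIAL = 30         # exploratory rolls of the uncertain die

    def play(explore):
        """Return the number of green results over one game."""
        p_known = 3 / 6                                         # die with 3 green sides
        p_unknown = 4 / 6 if random.random() < 0.1 else 2 / 6   # 10%: 4 green, 90%: 2 green
        if not explore:
            # Exploit: always roll the die with the higher known expectation.
            return sum(random.random() < p_known for _ in range(HORIZON))
        # Explore: trial-roll the uncertain die, then commit to whichever looks better.
        trial = sum(random.random() < p_unknown for _ in range(TRIAL))
        p_rest = p_unknown if trial / TRIAL > 0.5 else p_known
        return trial + sum(random.random() < p_rest for _ in range(HORIZON - TRIAL))

    for explore in (False, True):
        avg = sum(play(explore) for _ in range(GAMES)) / GAMES
        print(f"explore={explore}: {avg:.0f} greens per {HORIZON} rolls")

On a single roll the uncertain die is worth only 0.9·(1/3) + 0.1·(2/3) ≈ 0.37 expected greens against the known die's 0.5, but over the long horizon the exploring player averages roughly 512 greens per 1,000 rolls against 500, because the occasional discovery of the 4-green die more than pays for the cheap trial period.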
↑ comment by Mulciber · 2009-04-21T22:06:51.908Z · LW(p) · GW(p)
Both those courses of action with dice sound like strategies to me, not meta strategies. Could you give another example of something you'd consider a meta strategy?
I think there's a larger point lurking here, which is that a good strategy should, in general, provide for gathering information so it can adapt. Do you agree?
↑ comment by MrHen · 2009-04-21T22:37:47.098Z · LW(p) · GW(p)
Both those courses of action with dice sound like strategies to me, not meta strategies. Could you give another example of something you'd consider a meta strategy?
I might be able to clarify the example. The strategy for one roll is the die with 3 green sides. The strategy for multiple rolls is not the same as repeating the strategy for one roll multiple times. That being said, I do not know if that qualifies as a meta-strategy.
A more typical example could be a Rock-Paper-Scissors game. Against a random player, the game-theoretically optimal play is to pick randomly amongst the three choices. Against your cousin Bob, who is known to always pick Rock, picking Paper is the better option. Using knowledge from outside the game lets you win against Bob because you are using a meta-strategy. See also Wikipedia's article on Metagaming.
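A toy sketch of that split (my own illustration, with made-up names): the in-game strategy is the move distribution you play; the meta-strategy is the rule that uses outside-the-game knowledge to select it.

    import random

    MOVES = ("rock", "paper", "scissors")
    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def choose_strategy(known_habit=None):
        """Meta-strategy: use outside-the-game knowledge to pick the in-game strategy."""
        if known_habit is None:
            return lambda: random.choice(MOVES)  # game-theoretic optimum vs. an unknown player
        return lambda: BEATS[known_habit]        # exploit a known habit

    versus_stranger = choose_strategy()       # mixed equilibrium play
    versus_bob = choose_strategy("rock")      # cousin Bob always picks Rock
    print(versus_stranger(), versus_bob())    # second move is always 'paper'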
↑ comment by Mulciber · 2009-04-21T23:47:27.806Z · LW(p) · GW(p)
That does indeed help. Thank you.
So really, a meta strategy would be something like choosing your deck for a Magic tournament based on what types of decks you expect your opponents to use. While the non-meta strategy would be your efforts to win within a game once it's started.
↑ comment by MrHen · 2009-04-22T00:05:26.845Z · LW(p) · GW(p)
Ah, crap. Was that my comment? Sorry. I keep deleting comments when it looks like no one has responded.
But, yeah, Magic has a rather intense meta-game. The reason I deleted my comment was because I realized I had no idea where the meta-strategy was in the dice example so I assumed I missed something. I could be chasing down the wrong definition.
↑ comment by orthonormal · 2009-04-22T04:58:43.988Z · LW(p) · GW(p)
Ah, crap. Was that my comment? Sorry. I keep deleting comments when it looks like no one has responded.
...and that's why you really shouldn't delete a comment unless you think it's doing great harm. You may be worrying a bit too much about what others here think about every comment you make, when it's in fact somewhat random whether anyone replies to a given comment.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-22T05:25:30.937Z · LW(p) · GW(p)
Also, I believe that deleting a comment does not dissipate any negative karma that it has already earned you.
↑ comment by MrHen · 2009-04-22T13:34:47.112Z · LW(p) · GW(p)
Also, I believe that deleting a comment does not dissipate any negative karma that it has already earned you.
This is correct.
I do not delete to avoid the karma hit, I delete to drop the number of comments in a thread. If two other people say the same thing there was no reason for me to say it.
In this case, I realized immediately after I posted the comment that I probably had not done justice to the entire thread, so I deleted it. I find the clutter annoying and if I can voluntarily take my comment out of the path I am happy to do so.
Unfortunately, this apparently does not work because two people have responded before I could delete a comment. So, deleting does not work well and now I know. Next strategy to try, just editing with a sentence saying "Ignore me"? What is the community consensus on this subject? Just leave the comment alone?
It would be neat if there was a way to just hit my own comment with -4 and get it off of people's radar.
comment by pozorvlak · 2009-12-22T10:52:50.451Z · LW(p) · GW(p)
Cultivate a habit of confronting challenges - not the ones that can kill you outright, perhaps, but perhaps ones that can potentially humiliate you.
You may be interested to learn that high-end mountaineers apply exactly the strategy you describe to challenges that might kill them outright. Mick Fowler even states it explicitly in his autobiography - "success every time implies that one's objectives are not challenging enough".
A large part of mountaineering appears to be about identifying the precise point where your situation will become unrecoverable, and then backing off just before you reach it. On the other hand, sometimes you just get unlucky.
comment by komponisto · 2009-04-20T18:46:56.381Z · LW(p) · GW(p)
A slogan I like is that failure is OK, so long as you don't stop trying to avoid it.
While reading this post, a connection with Beware of Other-Optimizing clicked in my mind. Different aspiring rationalists are (more) susceptible to different failure modes. From Eliezer's previous writings it had generally seemed like he was more worried about the problem of standards (for oneself) that are too low -- that is, not being afraid enough of failure -- than about the opposite error, standards that are too high. But I suspect that's largely specific to him; others may need to worry more about being too afraid of failure. Hence I'm happy to see this post.
comment by PhilGoetz · 2009-04-20T15:51:31.028Z · LW(p) · GW(p)
And so I wrote at once to the Bloggingheads folks and asked if they could arrange a debate. This seemed like someone I wanted to test myself against. Also, it was said by them that Christopher Hitchens should have watched the theist's earlier debates and been prepared, so I decided not to do that, because I think I should be able to handle damn near anything on the fly, and I desire to learn whether this thought is correct; and I am willing to risk public humiliation to find out.
This really bothers me, because you weren't just risking your own public humiliation; you were risking our public humiliation. You were endangering an important cause for your personal benefit.
Replies from: Annoyance, Eliezer_Yudkowsky↑ comment by Annoyance · 2009-04-20T18:54:12.534Z · LW(p) · GW(p)
The cause of rationalism does not rise and fall with Eliezer Yudkowsky.
If you fear the consequences of being his partisan, don't align yourself with his party. If you are willing to associate yourself and your reputation with him, accept the necessary consequences of having done so.
Replies from: Jack↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:26:50.425Z · LW(p) · GW(p)
I've done a service or two to atheism, and will do more services in the future, and those may well depend on this test of calibration.
Replies from: Jack, PhilGoetz↑ comment by Jack · 2009-04-20T21:05:11.817Z · LW(p) · GW(p)
Who is the theist? I've actually seen Hitchens perform poorly in a number of debates with theists, just because he doesn't really give a damn about responding to their arguments, since he rightly finds them so silly. Plus, his focus is really on religion being bad more than on religion being false, and as such he is rarely equipped to answer the more advanced theist arguments (like, say, the fine-tuning of physical constants) in the way someone like Dawkins is.
(Edit: forget the question. I just read your reason for not naming him. Fair enough. But if you told someone who it was, they could watch the debate and indicate to you whether or not you really ought to be worried. Particularly if you don't end up debating him, we might get something out of watching him.)
comment by ata · 2011-04-25T17:55:42.221Z · LW(p) · GW(p)
There are three great besetting sins of rationalists in particular, and the third of these is underconfidence.
Were we ever told the other two?
Replies from: steven0461↑ comment by steven0461 · 2011-04-25T18:17:18.640Z · LW(p) · GW(p)
Yes, by Jeffreyssai:
"Three flaws above all are common among the beisutsukai. The first flaw is to look just the slightest bit harder for flaws in arguments whose conclusions you would rather not accept. If you cannot contain this aspect of yourself then every flaw you know how to detect will make you that much stupider. This is the challenge which determines whether you possess the art or its opposite: Intelligence, to be useful, must be used for something other than defeating itself."
"The second flaw is cleverness. To invent great complicated plans and great complicated theories and great complicated arguments - or even, perhaps, plans and theories and arguments which are commended too much by their elegance and too little by their realism. There is a widespread saying which runs: 'The vulnerability of the beisutsukai is well-known; they are prone to be too clever.' Your enemies will know this saying, if they know you for a beisutsukai, so you had best remember it also. And you may think to yourself: 'But if I could never try anything clever or elegant, would my life even be worth living?' This is why cleverness is still our chief vulnerability even after its being well-known, like offering a Competitor a challenge that seems fair, or tempting a Bard with drama."
"The third flaw is underconfidence, modesty, humility. You have learned so much of flaws, some of them impossible to fix, that you may think that the rule of wisdom is to confess your own inability. You may question yourself so much, without resolution or testing, that you lose your will to carry on in the Art. You may refuse to decide, pending further evidence, when a decision is necessary; you may take advice you should not take. Jaded cynicism and sage despair are less fashionable than once they were, but you may still be tempted by them. Or you may simply - lose momentum."
comment by AndySimpson · 2009-04-20T23:13:56.788Z · LW(p) · GW(p)
gjm asks wisely:
What would you think of a musician who decided to give a public performance without so much as looking at the piece she was going to play? Would you not be inclined to say: "It's all very well to test yourself, but please do it in private"?
The central thrust of Eliezer's post is a true and important elaboration of his concept of improper humility, but doesn't it overlook a clear and simple political reality? There are reputational effects to public failure. It seems clear that those reputational effects often outweigh whatever utility is gained from an empirical "test" of one's own abilities: this is why international relations theory isn't a rigorous empirical science. We live in an irrational kaleidoscope of power, driven by instinct and emotion, ordered only fleetingly by rhetoric and guile. In this situation, we need to keep our cards close to our chest if we want to win.
Mulciber adds something along the same lines:
By increasing the challenge the way you suggest, you may very well be acting rationally toward the goal of testing yourself, but you're not doing all you can to cut the opponent. To rationally pursue winning the debate, there's no excuse for not doing your research.
And Eliezer does seem to approve of this mode of thinking in some cases:
Of course this is only a way to think when you really are confronting a challenge just to test yourself, and not because you have to win at any cost. In that case you make everything as easy for yourself as possible. To do otherwise would be spectacular overconfidence, even if you're playing tic-tac-toe against a three-year-old.
So, to sum up my concern, how is this principle of pragmatism reconciled to your choice not to prepare? Isn't it best to test yourself in the peace and safety of your dojo, or in circumstances where the stakes are not high, and use every means available to resist on the actual field of battle?
comment by RobinHanson · 2009-04-20T17:42:57.945Z · LW(p) · GW(p)
We have lots of experimental data showing overconfidence; what experimental data show a consistent underconfidence, in a way that a person could use that data to correct their error? This would be a lot more persuasive to me than the mere hypothetical possibility of underconfidence.
Replies from: timtyler, Eliezer_Yudkowsky↑ comment by timtyler · 2009-04-20T18:20:35.519Z · LW(p) · GW(p)
Underconfidence is surely very common in the general population. It's usually referred to as "shyness", "tentativeness", or "depression" - or by other names besides "underconfidence". These people form part of the audience for the self-help books that encourage people to be more confident.
E.g. see: "The trouble with overconfidence." on PubMed.
Replies from: timtyler↑ comment by timtyler · 2009-04-20T19:45:41.056Z · LW(p) · GW(p)
For underconfidence and depression, see:
"Depressive cognition: a test of depressive realism versus negativity using general knowledge questions." on PubMed.
Underconfidence in visual perceptual judgments:
"The role of individual differences in the accuracy of confidence judgments." on PubMed.
For more on that, see:
"Realism of confidence in sensory discrimination." on PubMed.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T17:57:28.847Z · LW(p) · GW(p)
I believe there were some nice experiments having to do with overcorrection, and I believe those were in "Heuristics and Biases" (the 2003 volume), but I'm on a trip right now and away from my books.
comment by Vladimir_Nesov · 2009-04-24T10:19:40.076Z · LW(p) · GW(p)
I skimmed several debates with WLC yesterday, referenced here. His arguments are largely based on one and the same scheme:
- Everything must have a cause
- Here's a philosophical paradox for you, that can't be resolved within the world
- Since despite the paradox, some fact still holds, it must be caused by God, from outside the world
(Or something like this; step 3 is a bit more subtle than I made it out to be.) What's remarkable is that even though he uses a nontrivial number of paradoxes for step 2, almost all of them were explicitly explained in the material on Overcoming Bias. At least, I was never confused while listening to his arguments, whereas some of his opponents were, on some of the arguments. I don't see WLC as possessing magical oratorical skills, but he bends the facts on occasion and is very careful in what he says. Also, his presentations are too debugged to feel alive, so they come across as unnatural.
The general meta-counterargument would be to break this scheme itself, since he could present some paradox (e.g. anthropics) with no clear known resolution and push his line through it. I'm sure he knows lots of paradoxes, so there is a real danger of encountering an unknown one.
He knows Bayesian math. On one occasion, he basically replied to the claim that there is no evidence for God by saying that absence of evidence only matters if you would expect more evidence given that God exists than given that He doesn't; if you expect no evidence in both cases, its absence can't lower the prior probability. This, of course, contradicts the rest of his arguments, but I guess he'll say that those arguments are some different kind of evidence.
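A minimal formalization of that reply, in standard odds-form Bayes (the notation is mine, not WLC's). Write G for "God exists" and ¬E for "no evidence observed":

\[
\frac{P(G \mid \neg E)}{P(\neg G \mid \neg E)} \;=\; \frac{P(\neg E \mid G)}{P(\neg E \mid \neg G)} \cdot \frac{P(G)}{P(\neg G)}
\]

If P(¬E | G) = P(¬E | ¬G), the likelihood ratio is 1, and the absence of evidence leaves the odds exactly where the prior put them.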
Replies from: ciphergoth, Jack↑ comment by Paul Crowley (ciphergoth) · 2009-04-24T10:56:42.467Z · LW(p) · GW(p)
Many of WLC's arguments have this rough structure:
- Here's a philosophical brain teaser. Doesn't it make your head spin?
- Look, with God we can shove the problem under the carpet
- Therefore, God.
That's why I think that in order to debate him you have to explicitly challenge the idea that God could ever be a good answer to anything; otherwise, you disappear down the rabbit hole of trying to straighten out the philosophical confusions of your audience.
Replies from: MBlume↑ comment by MBlume · 2009-04-25T02:26:31.410Z · LW(p) · GW(p)
"saying 'God' is an epistemic placebo -- it gives you the feeling of a solution without actually solving anything"
something like that?
Replies from: pnrjulius, ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-25T10:46:12.163Z · LW(p) · GW(p)
Well, you could start with something like that, but you're going to have to set out why it doesn't solve anything. Which I think means you're going to have to make the "lady down the street is a witch; she did it" argument. Making that simple enough to fit into a debate slot is a real challenge, but it is the universal rebuke to everything WLC argues.
↑ comment by Jack · 2009-04-24T13:29:27.596Z · LW(p) · GW(p)
If we shouldn't expect evidence in either case then the probability of God's existence is just the prior, right? How could P(God) be above .5? I can't imagine thinking that the existence of an omnipotent, omniscient and benevolent being who answers prayers and rewards and punishes the sins of mortals with everlasting joy or eternal punishment was a priori more likely than not.
I wonder what variety of first cause argument he's making. Even if everything must have a cause, that does not mean there is a first cause, and the existence of a first cause doesn't mean the first cause is God. Aquinas made two arguments of this variety that actually try to prove the existence of God, but they require outdated categories and concepts even to get off the ground.
Replies from: byrnema, handoflixue, Vladimir_Nesov, pangloss↑ comment by byrnema · 2009-04-24T15:27:50.692Z · LW(p) · GW(p)
If God's existence is the prior, I don't think you include that he is also an "omnipotent, omniscient and benevolent being, [...]". Those are things you deduce about him afterward. The way I've thought about it: let X = whatever the explanation is for the creation conundrum. We will call X "God". X exists trivially (by definition); can we then infer properties about X that would justify calling it God? In other words, does the solution to creation have to be something omniscient and benevolent? (This is the part which is highly unlikely.)
Replies from: pnrjulius, Eliezer_Yudkowsky, Jack↑ comment by pnrjulius · 2012-06-12T03:34:48.319Z · LW(p) · GW(p)
If you call X "God" by definition, you may find yourself praying to the Big Bang, or to mathematics.
There is a mysterious force inherent in all matter and energy which binds the universe together. We call it "gravity", and it obeys differential equations.
Replies from: byrnema↑ comment by byrnema · 2012-06-17T05:56:36.612Z · LW(p) · GW(p)
If you call X "God" by definition, you may find yourself praying to the Big Bang, or to mathematics.
The Big Bang and mathematics are good candidates. I've considered them. It only sounds ridiculous because you mentioned praying to them. The value of 'praying to X' is again something you need to deduce, rather than assume.
We call it "gravity", and it obeys differential equations.
Nah, gravity isn't universal or fundamental enough. That is, I would be very surprised if it was a 'first cause' in any way.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-24T18:48:04.425Z · LW(p) · GW(p)
You certainly should not call X "God", nor should you suppose that X has the property "existence" which is exactly that which is to be rendered non-confusing.
Replies from: byrnema↑ comment by byrnema · 2009-04-25T02:19:26.342Z · LW(p) · GW(p)
I just read your posts about the futility of arguing "by definition". I suspect that somewhere there is where my error lies.
More precisely, could you clarify whether I "shouldn't" do those things because they are "not allowed" or because they wouldn't be effective?
Replies from: MBlume, Vladimir_Nesov↑ comment by MBlume · 2009-04-25T02:30:22.215Z · LW(p) · GW(p)
You shouldn't because even though when you speak the word "God" you simply intend "placeholder for whatever eventually solves the creation conundrum," it will be heard as meaning "that being to which I was taught to pray when I was a child" -- whether you like it or not, your listener will attach the fully-formed God-concept to your use of the word.
Replies from: byrnema↑ comment by byrnema · 2009-04-25T02:41:26.433Z · LW(p) · GW(p)
Got it. If X is the placeholder for whatever eventually solves the creation conundrum, there's no reason to call it anything else, much less something misleading.
Replies from: JulianMorrison, MBlume↑ comment by JulianMorrison · 2009-04-25T14:45:35.292Z · LW(p) · GW(p)
In fact even naming it X is a bit of a stretch, because "the creation conundrum" is being assumed here, but my own limited understanding of physics suggests this "conundrum" itself is a mistake. What a "cause" really is, is something like: the information about past states of the universe embedded in the form of the present state. But the initial state doesn't have embedded information, so it doesn't really have either a past or a cause. As far as prime movers go, the big bang seems to be it, sufficient in itself.
Replies from: byrnema↑ comment by byrnema · 2009-04-25T15:11:10.213Z · LW(p) · GW(p)
Yes, I agree with you: there is no real conundrum. In the past, we've solved many "conundrums" (for example, Zeno's paradox and the Liar's Paradox). By induction, I believe that any conundrum is just a problem (often a math problem) that hasn't been solved yet.
While I would say that the solution to Zeno's paradox "exists", I think this is just a semantic mistake I made; a solution exists in a different way than a theist argues that God exists. (This is just something I need to work on.)
Regarding the physics: I understand how a state may not causally depend upon the one preceding it (for example, if the state is randomly generated). I don't understand (can't wrap my head around) whether that means it wasn't caused... it still was generated, by some mechanism.
↑ comment by Vladimir_Nesov · 2009-04-25T11:51:11.133Z · LW(p) · GW(p)
More precisely, could you clarify whether I "shouldn't" do those things because they are "not allowed" or because they wouldn't be effective?
You shouldn't do something not directly because it's not allowed, but for the reason it's not allowed.
Replies from: byrnema↑ comment by byrnema · 2009-04-25T14:00:02.149Z · LW(p) · GW(p)
This comment is condescending and specious.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-25T14:18:34.228Z · LW(p) · GW(p)
That comment was meta. It isn't condescending, as it's not about you.
Replies from: byrnema, byrnema↑ comment by byrnema · 2009-04-25T14:29:40.554Z · LW(p) · GW(p)
It is condescending because you assumed that I didn't know what you were telling me, and you presume to tell me how to make decisions about what I "should" do. And the reason why it irritated me enough to complain is because I know the source of that condescension: I was asking a question in a vulnerable (i.e., feminine) way. And I got a cheap hit for not using language the way a man does. But I'm not saying it's sexism; it's just a cheap shot.
↑ comment by byrnema · 2009-04-25T14:41:18.114Z · LW(p) · GW(p)
It's about me because you imply that I don't already know what you're saying, and I could benefit from this wise advice.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-25T14:48:16.834Z · LW(p) · GW(p)
If someone speaks the obvious, then it's just noise, no new information, and so the speaker should be castigated for destructive stupidity. Someone or I.
↑ comment by Jack · 2009-04-24T17:26:45.399Z · LW(p) · GW(p)
You could do it that way, but then the question is just the prior probability that X has those traits. You can't say, "It would be a lot easier for God to do all of the things we think he needs to do if he were omnipotent; therefore it is more likely that God is omnipotent." Adding properties to God that increase His complexity has to decrease the probability that He exists; otherwise we're always going to be ascribing superpowers to the entities we posit, since extra powers never make it harder for those entities to accomplish the tasks we need them to. Now, I suppose if you could deduce that God has those traits, then you would be providing evidence that X had those traits with a probability of 1. That's pretty remarkable, but anyone is free to have at it.
So either you're putting a huge burden on your evidence to prove that there is some X such that X has these traits OR you have to start out with an extremely low prior.
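Jack's stacking-properties point is just the conjunction rule; as a one-line sketch (the particular property list is illustrative):

\[
P(X \text{ exists} \wedge X \text{ omnipotent} \wedge X \text{ omniscient} \wedge X \text{ benevolent}) \;\le\; P(X \text{ exists})
\]

and each further property can only shrink the left-hand side, so the traits come for free only if they can be deduced with probability 1.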
↑ comment by handoflixue · 2011-04-23T00:19:22.022Z · LW(p) · GW(p)
For some reason, the idea that P(God) = 0.5 exactly amuses me. Thank you for the smile :)
Replies from: LukeStebbing↑ comment by Luke Stebbing (LukeStebbing) · 2011-04-23T01:10:53.985Z · LW(p) · GW(p)
It reminded me of one of my formative childhood books:
What is the probability there is some form of life on Titan? We apply the principle of indifference and answer 1/2. What is the probability of no simple plant life on Titan? Again, we answer 1/2. Of no one-celled animal life? Again, 1/2.
--Martin Gardner, Aha! Gotcha
He goes on to demonstrate the obvious contradiction, and points out some related fallacies. The whole book is great, as is its companion Aha! Insight. (They're bundled into a book called Aha! now.)
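One way to make the contradiction explicit (a sketch, under the simplifying assumption that the two "no X life" events are independent): "no life at all" entails both "no plant life" and "no one-celled animal life", so

\[
P(\text{no life}) \;\le\; P(\text{no plant life} \wedge \text{no animal life}) \;=\; \tfrac{1}{2} \cdot \tfrac{1}{2} \;=\; \tfrac{1}{4},
\]

while the first application of indifference already gave P(no life) = 1 - 1/2 = 1/2. Indifference applied at two different grains assigns the same event two different probabilities.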
↑ comment by Vladimir_Nesov · 2009-04-24T16:33:31.965Z · LW(p) · GW(p)
If we shouldn't expect evidence in either case then the probability of God's existence is just the prior, right? How could P(God) be above .5? I can't imagine thinking that the existence of an omnipotent, omniscient and benevolent being who answers prayers and rewards and punishes the sins of mortals with everlasting joy or eternal punishment was a priori more likely than not.
Contradiction: answered prayers would be lots of evidence.
Replies from: Jack, William↑ comment by Jack · 2009-04-24T17:16:18.823Z · LW(p) · GW(p)
I'm looking at the concept of God and trying to guess what the priors would be for a being that meets that description. That description usually includes answering prayers. If there is evidence of answered prayers, then we might want to raise the probability of God's existence - but a being capable of doing that is going to be so complex that extraordinary evidence is necessary to conclude that one exists.
↑ comment by pangloss · 2009-04-24T14:20:46.431Z · LW(p) · GW(p)
Given the problems for the principle of indifference, a lot of Bayesians favor something more "subjective" with respect to the rules governing appropriate priors (especially in light of Aumann-style agreement theorems).
I'm not endorsing this maneuver, merely mentioning it.
comment by Richard_Kennaway · 2009-04-20T12:49:38.509Z · LW(p) · GW(p)
There is a children's puzzle which consists of 15 numbered square blocks arranged in a frame large enough to hold 16, four by four, leaving one empty space. You can't take the blocks out of the frame. You can only slide a block into the empty space from an adjacent position. The puzzle is to bring the blocks into some particular arrangement.
The mathematics of which arrangements are accessible from which others is not important here. The key thing is that no matter how you move the blocks around, there is always an empty space. Wherever the space is, you can always move a block into it, but however fast you move the blocks, they never fill the frame.
I have not heard of this theologian before, but ciphergoth's description of him firing off piles of pet arguments faster than you can point to the holes does suggest the sliding-block metaphor, though he's playing with a much larger set of blocks. At any rate, it is utterly unlike the sedate, civilised pursuit of truth observed on Bloggingheads. It is theatre addressed to the audience, not the antagonist. As far as he is concerned, you would just be one of his supporting cast.
If a debate is arranged, I second the advice to prepare as well as possible, attending not just to the specific arguments he uses and how others fared against them, but also the theatrics. It may help that Bloggingheads does not have a live audience. I am 90% sure that if he agrees to debate you, he will ask to do so in front of one.
comment by pangloss · 2009-04-20T06:40:14.441Z · LW(p) · GW(p)
This post reminds me of Aristotle's heuristics for approaching the mean when one tends towards the extremes:
"That moral virtue is a mean, then, and in what sense it is so, and that it is a mean between two vices, the one involving excess, the other deficiency, and that it is such because its character is to aim at what is intermediate in passions and in actions, has been sufficiently stated. Hence also it is no easy task to be good. For in everything it is no easy task to find the middle, e.g. to find the middle of a circle is not for every one but for him who knows; so, too, any one can get angry- that is easy- or give or spend money; but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy; wherefore goodness is both rare and laudable and noble.
Hence he who aims at the intermediate must first depart from what is the more contrary to it, as Calypso advises-
Hold the ship out beyond that surf and spray.
For of the extremes one is more erroneous, one less so; therefore, since to hit the mean is hard in the extreme, we must as a second best, as people say, take the least of the evils; and this will be done best in the way we describe. But we must consider the things towards which we ourselves also are easily carried away; for some of us tend to one thing, some to another; and this will be recognizable from the pleasure and the pain we feel. We must drag ourselves away to the contrary extreme; for we shall get into the intermediate state by drawing well away from error, as people do in straightening sticks that are bent." (NE, II.9)
Replies from: thomblake
comment by pangloss · 2009-04-21T01:03:47.293Z · LW(p) · GW(p)
Eliezer, does your respect for Aumann's theorem incline you to reconsider, given how many commenters think you should thoroughly prepare for this debate?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-21T01:44:06.958Z · LW(p) · GW(p)
Actually, the main thing that moved me was the comment about Richard Carrier also losing. I was thinking mostly that Hitchens had just had a bad day. Depending on how formidable the opponent is, it might still be a test of my ability even if I prepare.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-21T06:15:40.602Z · LW(p) · GW(p)
Carrier lost by his own admission, on his home territory.
I've given a lot of thought to how I'd combat what he says, and what I think it comes down to is that standard, "simple" atheism that says "where is your evidence" isn't going to work; I would explicitly lead with the fact that religious language is completely incoherent and does not constitute an assertion about the world at all, and so there cannot be such a thing as evidence for it. And I would anticipate the way he's going to mock it by going there first: "I'm one of those closed-minded scientists who says he'll ignore the evidence for Jesus". At least when I play the debate out in my head, this is always where we end up, and if we start there I can deny him some cheap point scoring.
Replies from: Simey, pangloss↑ comment by Simey · 2009-04-21T14:39:07.923Z · LW(p) · GW(p)
"I'm one of those closed-minded scientists who says he'll ignore the evidence for Jesus"
He would probably answer that it is not scientific to ignore evidence. Miracles cannot be explained by science. But they could - theoretically - be proven with scientific methods. If someone claims to have a scientific proof of a miracle (for example a video), it would be unscientific to just ignore it, wouldn't it?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-21T15:47:54.734Z · LW(p) · GW(p)
The idea is that you would open with this, but go on to explain why there could not be such a thing as evidence, because what is being asserted isn't really an assertion at all.
Replies from: AllanCrossman↑ comment by AllanCrossman · 2009-04-21T16:03:17.520Z · LW(p) · GW(p)
I can't agree with the idea that religious assertions aren't really assertions.
A fairly big thing in Christianity is that Jesus died, but then two or three days later was alive and well. This is a claim about how the world is (or was). It's entirely conceivable that there could be evidence for such a claim. And, in fact, there is evidence - it's just not strong enough evidence for my liking.
↑ comment by pangloss · 2009-04-21T06:27:11.645Z · LW(p) · GW(p)
I don't think making a move towards logical positivism or adopting a verificationist criterion of meaning would count as a victory.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-21T06:40:29.907Z · LW(p) · GW(p)
You don't have to do either of those things, I don't think. Have a look at the argument set out in George H Smith's "Atheism: the Case against God".
Replies from: pangloss↑ comment by pangloss · 2009-04-21T14:35:36.383Z · LW(p) · GW(p)
I didn't think that one had to. That is what your challenge to the theist sounded like. I think that religious language is coherent but false, just like phlogiston or caloric language.
Denying that the theist is even making an assertion, or that their language is coherent is a characteristic feature of positivism/verificationism, which is why I said that.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-21T15:49:03.053Z · LW(p) · GW(p)
No, I think it extends beyond that - see e.g. No Logical Positivist I
comment by noahlt · 2009-04-20T07:30:29.299Z · LW(p) · GW(p)
What is the danger of overconfidence?
Passing up opportunities. Not doing thing you could have done, but didn't try (hard enough).
Did you mean "danger of underconfidence"?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:17:08.503Z · LW(p) · GW(p)
Yes. Fixed. Thanks.
Apparently "danger of overconfidence" is cached in my mind to the point that even when the whole point of the article is the opposite, it still comes out that way. Case in point!
comment by Wei Dai (Wei_Dai) · 2012-09-28T19:38:03.820Z · LW(p) · GW(p)
Can anyone give some examples of being underconfident, that happened as a result of overcorrecting for overconfidence?
Replies from: wmorgan, komponisto↑ comment by wmorgan · 2012-09-28T20:56:00.453Z · LW(p) · GW(p)
I'll give it a shot.
In poker you want to put more money in the pot with strong hands, and less money with weaker ones. However, your hand is secret information, and raising too much "polarizes your range," giving your opponents the opportunity to outplay you. Finally, hands aren't guaranteed -- good hands can lose, and bad hands can win. So you need to bet big, but not too big, with your good hands.
So my buddy and I sit down at the table, and I get dealt a few strong hands in a row, but I raise too big with them -- I'm overconfident -- so I win a couple of small pots, and lose a big one. My buddy whispers to me, "dude...you're overplaying your hands..." Ten minutes later I get dealt another good hand, and I consider his advice, but now I bet too small, underconfident, and miss out on value.
Replace the conversation with an internal monologue, and this is something you see all the time at the poker table. Once bitten, twice shy and all that.
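wmorgan's over/under-betting story has a shape that's easy to sketch. Here is a toy value-betting model (all numbers hypothetical, not real poker data): the opponent calls less often as the bet grows, so the EV of a value bet peaks at an interior bet size, and both the oversized "overconfident" bet and the undersized "underconfident" bet sacrifice value.

```python
# Toy value-betting model. Assumptions (mine, for illustration): you hold
# the best hand, the pot is fixed, and the opponent's call probability
# falls linearly as the bet grows. EV counts only the extra money won.

def call_prob(bet, pot=100):
    # Hypothetical response curve: small bets get called often, huge bets rarely.
    return max(0.0, 1.0 - bet / (2 * pot))

def ev(bet, pot=100):
    # You win `bet` extra when called; a fold wins you nothing extra.
    return call_prob(bet, pot) * bet

bets = range(0, 201, 10)
best = max(bets, key=ev)
print(best, ev(best))  # peaks at 100: bigger and smaller bets both earn less
```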
↑ comment by komponisto · 2012-09-28T20:52:59.790Z · LW(p) · GW(p)
My "revision" to my Amanda Knox post is one. I was right the first time.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2012-09-28T21:20:54.332Z · LW(p) · GW(p)
How did you end up concluding that your original confidence level was correct after all?
Replies from: komponisto↑ comment by komponisto · 2012-09-28T22:31:02.124Z · LW(p) · GW(p)
I realized that there was a difference between the information I had and the information most commenters had; also that I had underestimated my Bayesian skills relative to the LW average, so that my panicked reaction to what I perceived as harsh criticism in a few of the comments was an overreaction brought about by insecurity.
Replies from: Wei_Dai, mfb↑ comment by Wei Dai (Wei_Dai) · 2012-10-01T06:43:08.508Z · LW(p) · GW(p)
I'm afraid I can't accept your example at this point, because based on my priors and the information I have at hand (the probability of guilt that you gave was 10x lower than the next lowest estimate, it doesn't look like you managed to convince anyone else to adopt your level of confidence during the discussions, absence of other evidence indicating that you have much better Bayesian skills than the LW average), I have to conclude that it's much more likely that you were originally overconfident, and are now again.
Can you either show me that I'm wrong to make this conclusion based on the information I have, or give me some additional evidence to update on?
↑ comment by mfb · 2012-09-28T22:47:21.317Z · LW(p) · GW(p)
Interesting posts.
However, I disagree with your prior by a significant amount. The probability that [person in group] commits a murder within one year is small, but so is the probability that [person in group] is in contact with a victim. I would begin with the event [murder has happened], assign a high probability (like ~90%) to "the murderer knew the victim", and then distribute that 90% among the people who knew her (and work with ratios afterwards). I am not familiar enough with the case to do that now, but Amanda would probably get something around 10%, before any evidence or (missing) motive is taken into account.
Replies from: shokwave↑ comment by shokwave · 2012-09-29T00:19:04.073Z · LW(p) · GW(p)
assign a high probability (like ~90%)
A cursory search suggests 54% is more accurate. source, seventh bullet point. Also links to a table that could give better priors.
Replies from: Nornagest↑ comment by Nornagest · 2012-09-29T01:42:44.277Z · LW(p) · GW(p)
I'm reading that as 54% plus some unknown but probably large proportion of the remainder: that includes a large percentage in which the victim's relationship to the perpetrator is unknown, presumably due to lack of evidence. Your link gives this as 43.9%, but that doesn't seem consistent with the table.
If you do look at the table, it says that 1,676 of 13,636 murders were known to be committed by strangers, or about 12%; the unknowns probably don't break down into exactly the same categories (some relationships would be more difficult to establish than others), but I wouldn't expect them to be wildly out of line with the rest of the numbers.
Replies from: mfb↑ comment by mfb · 2012-09-29T12:46:24.953Z · LW(p) · GW(p)
I agree with that interpretation. The 13,636 murders break down as:
- 1,676 committed by strangers
- 5,974 committed by someone with some relation to the victim
- 5,986 unknown
Based on the known cases only, I get 22% strangers. More than expected, but it might depend on the region, too (US <--> Europe). Based on that table, we can do even better: we can exclude reasons known to be unrelated to the specific case, and persons/relations known to be innocent (or non-existent). A bit tricky, as the table is "relation murderer -> victim" and not the other direction, but it should be possible.
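For concreteness, here is the arithmetic behind the 12% and 22% figures, a quick sketch using the counts quoted from the table above:

```python
# Shares of murders committed by strangers, from the thread's quoted counts.
strangers, related, unknown = 1676, 5974, 5986
total = strangers + related + unknown   # 13,636 murders in the table
known = strangers + related             # cases where the relation is established

print(round(strangers / total, 3))   # 0.123 -> Nornagest's ~12% (all cases)
print(round(strangers / known, 3))   # 0.219 -> mfb's ~22% (known cases only)
```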
comment by HughRistik · 2009-04-22T23:02:39.132Z · LW(p) · GW(p)
Eliezer said:
So if you have learned a thousand ways that humans fall into error and read a hundred experimental results in which anonymous subjects are humiliated of their overconfidence - heck, even if you've just read a couple of dozen - and you don't know exactly how overconfident you are - then yes, you might genuinely be in danger of nudging yourself a step too far down.
I have also observed this phenomenon of debiasing being over-emphasized in discussions of rationality, while "heuristic" is treated as a bad word. I tried to get at the problem of passing up opportunities that you mention when I said, in my post on heuristic: "It's a mistake in cartography to have areas of your map that are filled in wrong, but it's also a mistake to have areas on your map blank that you could have filled in, at least with something approximate".
I think we need more success stories of human heuristic. Currently, the glut of information on biases and faulty heuristics is making these more cognitively available, leading to underconfidence.
Of course, it's easier to measure the gravity of mistakes of overconfidence, because we know the bad outcome, and we can speculate that it would have been avoided without the overconfidence. Yet in the case of mistakes of underconfidence, we don't know what we are missing out on, what brilliant theories were prematurely discarded, and what groundbreaking inventions were never created, because the creators (or their colleagues, investors, advisors, professors, whoever) were underconfident.
Yet we can look at examples of great discoveries, ideas, solutions, and practices, and ask what our lives, or the world, would be like if they had been nipped in the bud. Furthermore, there may be cases where two people (say, scientists or entrepreneurs) were both acquainted with the same evidence or theory, yet only one was confident enough about it to capitalize on it.
comment by Nominull · 2009-04-20T17:35:59.725Z · LW(p) · GW(p)
Playing tic-tac-toe against a three-year-old for the fate of the world would actually be a really harrowing experience. The space of possible moves is small enough that he's reasonably likely to force a draw just by acting randomly.
Replies from: somervta↑ comment by somervta · 2013-02-06T04:08:26.106Z · LW(p) · GW(p)
Not if you can go first.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-05-08T15:33:49.997Z · LW(p) · GW(p)
So, you go center.
If he goes on a flat side, you're golden (move in a nearly-opposite corner, you can compel victory).
If he goes in a corner, you go 90° away. Now, if he's really acting randomly, he has a 1/6 chance to block your next-turn win.
Then you block his win threat, making a new threat of your own, which he has a 1/4 chance to block. If he does, he'll make the last block half the time. Multiplying out the whole line (1/2 chance he opens in a corner × 1/6 × 1/4 × 1/2) gives a 1/96 chance that he ties by moving randomly.
That would be enough to make me nervous if the fate of the world were at stake. Would you like to play Global Thermonuclear War?
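If anyone wants to check that back-of-envelope figure, here is a small exact computation. A sketch under my assumptions: the first player plays to maximize the probability of winning outright, and the three-year-old moves uniformly at random; the true optimum need not follow the corner line above, so the exact answer may differ from 1/96.

```python
# Exact probability that X fails to win tic-tac-toe, when X maximizes
# P(win) and O moves uniformly at random.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def p_win(board, to_move):
    """P(X eventually wins | X maximizes this probability, O moves randomly)."""
    w = winner(board)
    if w == 'X':
        return 1.0
    if w == 'O':
        return 0.0
    empties = [i for i, c in enumerate(board) if c == '.']
    if not empties:
        return 0.0  # full board, no winner: a draw, so X did not win
    children = [board[:i] + to_move + board[i+1:] for i in empties]
    if to_move == 'X':
        return max(p_win(b, 'O') for b in children)
    return sum(p_win(b, 'X') for b in children) / len(children)

p = p_win('.' * 9, 'X')
print(1 - p)  # chance the world is not saved outright; compare with 1/96
```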
Replies from: somervta
comment by AlexU · 2009-04-20T14:28:31.766Z · LW(p) · GW(p)
Can someone explain why we can't name the theist in question, other than sheer silliness?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T16:25:28.560Z · LW(p) · GW(p)
Because I consider it unfair to him to talk about a putative debate before he's replied to a request; also somewhat uncourteous to talk about how I plan to handicap myself (especially if it's not a sign of contempt but just a desire to test myself). If people can work it out through effort, that's fine, I suppose, but directly naming him seems a bit discourteous to me. I have no idea whether he's courteous to his opponents outside debate, but I have no particular info that he isn't.
Replies from: AlexU↑ comment by AlexU · 2009-04-20T16:32:59.214Z · LW(p) · GW(p)
How is it unfair to him in any way? He's free to choose whether to debate or not debate you; I doubt any reasonable person would be offended by the mere contemplation of a future debate. And any sort of advantage or disadvantage that might be gained or lost by "tipping him off" could only be of the most trivial sort, the kind any truth-seeking person should best ignore. All this does is make it a bit difficult to talk about the actual substance and ideas underlying the debate, which seems to me the most important stuff anyway.
Replies from: PhilGoetz
comment by HughRistik · 2009-04-22T23:01:26.064Z · LW(p) · GW(p)
This post reminds me of the phrase "cognitive hyper-humility," used by Ben Kovitz's Sophistry Wiki:
Demand for justification before making a move. Of course, this is not always sophistry. In some special areas of life, such as courtroom trials, we demand that a "burden" of specific kinds of evidence be met as a precondition for taking some action. Sophistry tends to extend this need for justification far beyond the areas where it's feasible and useful. Skeptical sophistry tends to push a sort of cognitive hyper-humility, or freezing out of fear of ever being "wrong"--or even being right but not fully justified. If you were to reason as the skeptic suggests that you should reason, you'd never be able to do anything in real life, because you'd never have sufficiently articulated and proven a priori principles to get started, nor evidence to justify your actions according to those principles, nor time to think this stuff through to the demanded degree.
comment by PhilGoetz · 2009-04-20T16:02:25.729Z · LW(p) · GW(p)
IHAPMOE, but the post seems to assume that a person's "rationality" is a float rather than a vector. If you're going to try to calibrate your "rationality", you'd better try to figure out what the different categories of rationality problems are, and how well rationality on one category of problems correlates with rationality on others. Otherwise you'll end up doing something like having greater confidence in your ethical judgements because you do well at sudoku.
comment by Jayson_Virissimo · 2009-04-22T01:07:15.095Z · LW(p) · GW(p)
Does the fact that I find this guy's formulation of the cosmological argument somewhat persuasive mean that I can't hang out with the cool kids anymore? I'm not saying it is an airtight argument, just that it isn't obviously meaningless or ridiculous metaphysics.
comment by CronoDAS · 2009-04-20T21:33:45.187Z · LW(p) · GW(p)
Slightly off-topic:
I don't know if it would be possible to arrange either of them, but there are two debates I'd love to see Eliezer in:
A debate with Amanda Marcotte on evolutionary psychology
and
A debate with Alonzo Fyfe on meta-ethics.
Replies from: HughRistik↑ comment by HughRistik · 2009-04-22T23:46:33.719Z · LW(p) · GW(p)
A debate with Amanda Marcotte on evolutionary psychology
Before anyone even thinks about this, they need to read Gender, Nature, and Nurture by Richard Lippa. He creates a hypothetical debate between Nature and Nurture which is very well done. Nurture has a bunch of arguments that sound "reasonable" and will be persuasive to audiences who are either close-minded or unfamiliar with the research literature, yet are actually sophistry. I would recommend having at least some sort of an answer to all of those points.
Defending evolutionary psychology in a debate is going to be very hard, because the playing field is so stacked. It's really easy to get nailed by skeptical sophistry or defeated by a King on the Mountain. In this case, the King would be arguing something like "male-female differences are socially constructed."
Appreciating the argument of evolutionary psychology, like evolution itself, requires thinking holistically and tying a lot of arguments and evidence together. This is difficult in a verbal debate, where a skilled sophist will take your statements and evidence in isolation and ridicule them without giving you a chance to link them together into a bigger picture:
The amount of information that the King considers at one time is very small: one statement. He makes one decision at a time. He then moves on to the next attempted refutation, putting all previous decisions behind him. The broad panorama--of mathematical, spatial, and temporal relationships between many facts--that makes up the pro-evolution argument, which need to be viewed all at once to be persuasive, cannot get in, unless someone finds a way to package it as a one-step-at-a-time argument (and the King has patience to hear it). Where his opponent was attempting to communicate just one idea, the King heard many separate ideas to be judged one by one.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T20:01:29.321Z · LW(p) · GW(p)
And conversely, as Ari observes:
If you've never hit the ground while skydiving, you're opening your parachute too early.
Replies from: outlawpoet
↑ comment by outlawpoet · 2009-04-21T22:12:46.496Z · LW(p) · GW(p)
er, am I misparsing this?
It seems to me that if you haven't hit the ground while skydiving, you're some sort of magician, or you landed on an artificial structure and then never got off...
comment by [deleted] · 2009-04-20T12:36:52.994Z · LW(p) · GW(p)
This seems like a reflection of a general problem people have, the problem of not getting things done - more specifically, the problem of not getting things done by convincing yourself not to do them.
It's so much easier to NOT do things than to do them, so we're constantly on the lookout for ways not to do them. Of course, we feel bad if we simply don't do them, so we first have to come up with elaborate reasons why it's OK - "I'll have plenty of time to do it later", "There's too much uncertainty", "I already got a lot of work done today", etc. The underconfidence you're describing seems like another attempt at this rather than a peculiar habit of rationalists.
I try to fight this, semi-successfully, by remembering that it's only the RESULT that matters. If I want something, it doesn't matter what clever words or arguments I make to myself about doing it; in the end I either get it or I don't. And there's certainly nothing rational about convincing yourself to not get something you want; rationalists WIN, after all.
Replies from: dclayh↑ comment by dclayh · 2009-04-21T23:18:57.580Z · LW(p) · GW(p)
It's so much easier to NOT do things than do them, so we're constantly on the lookout for ways not to do them.
In CS, laziness is considered a virtue, principally (I believe) because being too lazy to just do something the hard (but obvious) way tends to lead to coming up with an easy (clever) way that's probably faster and more elegant.
And there's certainly nothing rational about convincing yourself to not get something you want
But what if you convince yourself not to want it?
comment by FrF · 2009-04-20T10:57:45.789Z · LW(p) · GW(p)
Eliezer should write a self-help book! Blog posts like the above are very inspiring to this perennial intellectual slacker and general underachiever (meaning: me).
I certainly can relate to this part:
"It doesn't seem worthwhile any more, to go on trying to fix one thing when there are a dozen other things that will still be wrong...
There's not enough hope of triumph to inspire you to try hard..."
comment by Annoyance · 2009-04-20T22:02:00.567Z · LW(p) · GW(p)
Overconfidence is usually costlier than underconfidence. The cost to become completely accurate is often greater than the benefit of being slightly-inaccurate-but-close-enough.
When these two principles are taken into account, underconfidence becomes an excellent strategy. It also leaves potential in reserve in case of emergencies. As being accurately-confident tends to let others know what you can do, it's often desirable to create a false appearance.
Replies from: Douglas_Knight, mattnewport↑ comment by Douglas_Knight · 2009-04-21T03:44:07.957Z · LW(p) · GW(p)
The cost of underconfidence is an opportunity cost. This is easy to miss, so it will be underweighted--salience bias. This is not a rebuttal, but it is a reason to expect people will falsely conclude that overconfidence is costlier.
Replies from: Annoyance↑ comment by Annoyance · 2009-04-21T13:43:06.078Z · LW(p) · GW(p)
I approve of your response, Douglas_Knight, but think that it is both incomplete and somewhat inaccurate.
The cost of underconfidence isn't necessarily or always an opportunity cost. It can be so, yes. But it can also be not so. You are making a subtle and mostly implicit claim of universality regarding an assertion that is not universally the case.
A strategy doesn't need to work in every possible contingency to be useful or valid.
↑ comment by mattnewport · 2009-04-20T22:09:04.871Z · LW(p) · GW(p)
Overconfidence is usually costlier than underconfidence.
I suspect you are overconfident in that belief. Simply stating something is not a persuasive argument.
Replies from: Annoyance↑ comment by Annoyance · 2009-04-20T22:25:11.746Z · LW(p) · GW(p)
"Simply stating something is not a persuasive argument."
Is simply stating that supposed to be persuasive?
Sooner or later we have to accept or reject arguments on their merits, and that requires evaluating their supports. Not demanding supports for them.
Replies from: mattnewport↑ comment by mattnewport · 2009-04-20T22:39:47.448Z · LW(p) · GW(p)
Overconfidence and underconfidence both imply a non-optimal amount of confidence. It's a little oxymoronic to claim that underconfidence is an excellent strategy - if it's an excellent strategy then it's presumably not underconfidence. I assume what you are actually claiming is that in general most people would get better results by being less confident than they are? Or are you claiming that relative to accurate judgements of probability of success it is better to consistently under rather than over estimate?
You claim that overconfidence is usually costlier than underconfidence. There are situations where overconfidence has potentially very high cost (overconfidently thinking you can safely overtake on a blind bend perhaps) but in many situations the costs of failure are not as severe as people tend to imagine. Overconfidence (in the sense of estimating greater probability of success than is accurate) can usefully compensate for over estimating the cost of failure in my experience.
You seem to have a pattern of responding to posts with unsupported statements that appear designed more to antagonize than to add useful information to the conversation.
Replies from: MrHen, Annoyance↑ comment by MrHen · 2009-04-20T23:06:04.753Z · LW(p) · GW(p)
I am replying here instead of higher up because I agree with mattnewport, but this is addressed to Annoyance. It is hard for me to understand what you mean by your post because the links are invisible and I did not instinctively fill them in correctly.
Overconfidence is usually costlier than underconfidence.
As best as I can tell, this is situational. I think mattnewport's response is accurate. More on this below.
The cost to become completely accurate is often greater than the benefit of being slightly-inaccurate-but-close-enough.
It seems that the two paths from this statement are to stay inaccurate or start getting more efficient at optimizing your accuracy. It sounds too similar to saying, "It is too hard. I give up," for me to automatically choose inaccuracy. I want to know why it is so hard to become more accurate.
It also seems situational in the sense that it is not always, just often. This is relevant below.
When these two principles are taken into account, underconfidence becomes an excellent strategy.
In addition to mattnewport's comment about underconfidence implying non-optimal confidence, I think that building this statement on two situational principles is dangerous. Filling out the (situational) blanks leads to this statement:
If underconfidence is less costly than overconfidence, and the cost of becoming more accurate is more than the benefit of being more accurate, then stay underconfident.
This seems to work just as well as saying this:
If overconfidence is less costly than underconfidence, and the cost of becoming more accurate is more than the benefit of being more accurate, then stay overconfident.
Which can really be generalized to this:
If it costs more to change your confidence than the resulting benefit, do not change.
Which just leads us back to mattnewport's comment about optimal confidence. It also seems like it was not the point you were trying to make, so I assume I made a mistake somewhere. As best as I can tell, it was underemphasizing the two situational claims. As a result, I fully understand the request for more support in that area.
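The symmetry is easier to see if you write the generalized rule down; a minimal sketch with hypothetical numbers (note the rule says nothing about which direction the adjustment goes):

```python
def should_adjust(cost_of_calibrating, benefit_of_accuracy):
    # The generalized rule above: change your confidence only when it pays.
    # It reads the same whether you are currently over- or underconfident.
    return benefit_of_accuracy > cost_of_calibrating

print(should_adjust(cost_of_calibrating=5, benefit_of_accuracy=2))   # False: stay put
print(should_adjust(cost_of_calibrating=1, benefit_of_accuracy=10))  # True: recalibrate
```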
It also leaves potential in reserve in case of emergencies. As being accurately-confident tends to let others know what you can do, it's often desirable to create a false appearance.
Acting overconfident is another form of bluffing. Also, acting one way or the other is a little different than understanding your own limits. How does it help if you bluff yourself?
↑ comment by Annoyance · 2009-04-21T13:54:10.464Z · LW(p) · GW(p)
"Overconfidence and underconfidence both imply a non-optimal amount of confidence."
Not in the sense of logical implication. The terms refer to levels of confidence greater or lesser than they should be, with the criteria utilized determining what 'should' means in context. The utility of the level of confidence isn't necessarily linked to its accuracy.
Although accuracy is often highly useful, there are times when it's better to be inaccurate, or to be inaccurate in a particular way, or a particular direction.
"You seem to have a pattern of responding to posts with unsupported statements"
I can support my statements, and support my supports, and support my support supports, but I can't provide an infinite chain of supports. No one can. The most basic components of any discussion stand by themselves, and are validated or not by comparison with reality. Deal with it.
"that appear designed more to antagonize than to add useful information to the conversation"
They're crafted to encourage people to think and to facilitate that process to the degree to which that is possible. I can certainly see how people uninterested in thinking would find that unhelpful, even antagonizing. So?
Replies from: quasimodo↑ comment by quasimodo · 2012-06-16T06:26:42.513Z · LW(p) · GW(p)
Why is confidence or lack thereof an issue aside from personal introspection?
Replies from: beoShaffer↑ comment by beoShaffer · 2012-06-17T07:02:41.630Z · LW(p) · GW(p)
If you are underconfident, you may pass up risky but worthwhile opportunities, or spend resources on unnecessary safety measures. As for overconfidence, see hubris. Also, welcome to Less Wrong.
comment by Vladimir_Nesov · 2009-04-20T15:44:41.717Z · LW(p) · GW(p)
A typo in the article: "What is the danger of overconfidence?" -> "What is the danger of underconfidence?"
comment by hbarlowe · 2009-04-21T00:05:36.423Z · LW(p) · GW(p)
...the third of these is underconfidence. Michael Vassar regularly accuses me of this sin, which makes him unique among the entire population of the Earth.
Well, that sure is odd. Guess that's why Vassar was promoted. It makes sense now.
Anyway, EY's history doesn't seem to me marked by much underconfidence. For example, his name has recently been used in vain at this silly blog, where they're dredging up all sorts of amusing material that seems to support the opposite conclusion.
Since I know EY has guru status around here, please don't jump down my throat. For the record, I agree with everything he says. I must, for the force of his rationality encircles me and compels me.
Anyway, for those who don't want to follow the link, here's the best part -- a bit of pasted material in a comment by someone named jimf:
When Ayn [Rand] announced proudly, as she often did, 'I can account for every emotion I have' -- she meant, astonishingly, that the total contents of her subconscious mind were instantly available to her conscious mind, that all of her emotions had resulted from deliberate acts of rational thought, and that she could name the thinking that had led her to each feeling. And she maintained that every human being is able, if he chooses to work at the job of identifying the source of his emotions, ultimately to arrive at the same clarity and control.
-- Barbara Branden, The Passion of Ayn Rand, pp. 193-195
From a transhumanist acquaintance I once corresponded with:
Jim, dammit, I really wish you'd start with the assumption that I have a superhuman self-awareness and understanding of ethics, because, dammit, I do.
Replies from: orthonormal
↑ comment by orthonormal · 2009-04-21T00:47:58.104Z · LW(p) · GW(p)
With detractors like this, who needs supporters? I almost wonder whether razib wrote that blog post in one of his faux-postmodernist moods.
I advise you all not to read it; badly written and badly supported criticism of EY is too powerful a biasing agent in his favor.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2009-04-21T06:39:00.586Z · LW(p) · GW(p)
This is a brutal oversimplification, but it seems to me, roughly speaking, that in mis-identifying fundamentalism with the humanities, they tend to advocate a reductionism that re-writes science itself in the image of a priestly authoritarianism with too much in common with the very fundamentalisms they claim to disdain (and rightly so).
The author understandably distances himself from his own output, reminiscent of the passages ridiculed in "Politics and the English Language".