Advancing Certainty
post by komponisto · 2010-01-18T09:51:31.050Z
Related: Horrible LHC Inconsistency, The Proper Use of Humility
Overconfidence, I've noticed, is a big fear around these parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But I am going to argue that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting -- which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)
Here's Eliezer, voicing the typical worry:
[I]f you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.
I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.
No wonder, then, that people claim that we humans can't possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people -- even famous scientists! -- said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols?
[EDIT: Unnecessary material removed.]
A probability estimate is not a measure of "confidence" in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging calibration, it is not really appropriate to imagine oneself, say, judging thousands of criminal trials, and getting more than a few wrong here and there (because, after all, one is human and tends to make mistakes). Let me instead propose a less misleading image: picture yourself programming your model of the world (in technical terms, your prior probability distribution) into a computer, and then feeding all that data from those thousands of cases into the computer -- which then, when you run the program, rapidly spits out the corresponding thousands of posterior probability estimates. That is, visualize a few seconds or minutes of staring at a rapidly-scrolling computer screen, rather than a lifetime of exhausting judicial labor. When the program finishes, how many of those numerical verdicts on the screen are wrong?
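To make the image concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of what that rapidly-scrolling screen is doing: one prior and one likelihood model, encoded once, then applied mechanically to thousands of cases via Bayes' theorem.

```python
# A minimal sketch of the image above (illustrative numbers only): encode a prior
# and a likelihood model once, then let the machine grind out posteriors for
# thousands of cases via Bayes' theorem. No fatigue term appears anywhere.

def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    joint_h = prior * likelihood_given_h
    joint_not_h = (1 - prior) * likelihood_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical cases: (prior, P(evidence | guilty), P(evidence | innocent)).
cases = [(0.3, 0.9, 0.001), (0.3, 0.05, 0.8), (0.3, 0.5, 0.5)] * 1000

verdicts = [posterior(p, lh, lnh) for p, lh, lnh in cases]
print(len(verdicts), "posteriors computed; first three:", verdicts[:3])
```

Nothing in this loop gets tired or loses its nerve; the only way the outputs can be systematically wrong is if the model itself is wrong.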
I don't know about you, but modesty seems less tempting to me when I think about it in this way. I have a model of the world, and it makes predictions. For some reason, when it's just me in a room looking at a screen, I don't feel the need to tone down the strength of those predictions for fear of unpleasant social consequences. Nor do I need to worry about the computer getting tired from running all those numbers.
In the vanishingly unlikely event that Omega were to appear and tell me that, say, Amanda Knox was guilty, it wouldn't mean that I had been too arrogant, and that I had better not trust my estimates in the future. What it would mean is that my model of the world was severely stupid with respect to predicting reality. In which case, the thing to do would not be to humbly promise to be more modest henceforth, but rather, to find the problem and fix it. (I believe computer programmers call this "debugging".)
A "confidence level" is a numerical measure of how stupid your model is, if you turn out to be wrong.
The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.
This is the first thing to remember in setting out to dispose of what I call "quantitative Cartesian skepticism": the view that even though science tells us the probability of such-and-such is 10^-50, well, that's just too high of a confidence for mere mortals like us to assert; our model of the world could be wrong, after all -- conceivably, we might even be brains in vats.
Now, it could be the case that 10^-50 is too low of a probability for that event, despite the calculations; and it may even be that that particular level of certainty (about almost anything) is in fact beyond our current epistemic reach. But if we believe this, there have to be reasons we believe it, and those reasons have to be better than the reasons for believing the opposite.
I can't speak for Eliezer in particular, but I expect that if you probe the intuitions of people who worry about 10^-6 being too low of a probability that the Large Hadron Collider will destroy the world -- that is, if you ask them why they think they couldn't make a million statements of equal authority and be wrong on average once -- they will cite statistics about the previous track record of human predictions: their own youthful failures and/or things like Lord Kelvin calculating that evolution by natural selection was impossible.
To which my reply is: hindsight is 20/20 -- so how about taking advantage of this fact?
Previously, I used the phrase "epistemic technology" in reference to our ability to achieve greater certainty through some recently-invented methods of investigation than through others that are native unto us. This, I confess, was an almost deliberate foreshadowing of my thesis here: we are not stuck with the inferential powers of our ancestors. One implication of the Bayesian-Jaynesian-Yudkowskian view, which marries epistemology to physics, is that our knowledge-gathering ability is as subject to "technological" improvement as any other physical process. With effort applied over time, we should be able to increase not only our domain knowledge, but also our meta-knowledge. As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.
If we're smart, we will look back at Lord Kelvin's reasoning, find the mistakes, and avoid making those mistakes in the future. We will, so to speak, debug the code. Perhaps we couldn't have spotted the flaws at the time; but we can spot them now. Whatever other flaws may still be plaguing us, our score has improved.
In the face of precise scientific calculations, it doesn't do to say, "Well, science has been wrong before". If science was wrong before, it is our duty to understand why science was wrong, and remove known sources of stupidity from our model. Once we've done this, "past scientific predictions" is no longer an appropriate reference class for second-guessing the prediction at hand, because the science is now superior. (Or anyway, the strength of the evidence of previous failures is diminished.)
That is why, with respect to Eliezer's LHC dilemma -- which amounts to a conflict between avoiding overconfidence and avoiding hypothesis-privileging -- I come down squarely on the side of hypothesis-privileging as the greater danger. Psychologically, you may not "feel up to" making a million predictions, of which no more than one can be wrong; but if that's what your model instructs you to do, then that's what you have to do -- unless you think your model is wrong, for some better reason than a vague sense of uneasiness. Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress. At the end of the day, you have to shut up and multiply -- epistemically as well as instrumentally.
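As a toy illustration of that last point, here is the "multiply" step spelled out. All of the numbers below are invented for the sketch, not taken from any actual LHC risk analysis; the point is only how much the decision depends on which small probability you are actually entitled to assert.

```python
# "Shut up and multiply", sketched with purely illustrative numbers: compare the
# decision your model actually licenses against the decision you get by flooring
# every small probability at a "modest" human-calibration level.

VALUE_OF_WORLD = 1e15        # hypothetical utility at stake (arbitrary units)
VALUE_OF_EXPERIMENT = 1e3    # hypothetical scientific value of running the LHC

def expected_gain(p_doom):
    return VALUE_OF_EXPERIMENT - p_doom * VALUE_OF_WORLD

model_estimate = 1e-25       # what the physics calculation says (assumed here)
modest_floor = 1e-6          # "I couldn't make a million such statements"

print(expected_gain(model_estimate))  # positive: run the experiment
print(expected_gain(modest_floor))    # negative: the floored estimate forbids it
```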
comment by Wei Dai (Wei_Dai) · 2010-01-18T23:35:48.969Z · LW(p) · GW(p)
I'd like to recast the problem this way: we know we're running on error-prone hardware, but standard probability theory assumes that we're running on errorless hardware, and seems to fail, at least in some situations, when running on error-prone hardware. What is the right probability theory and/or decision theory for running on error-prone hardware?
ETA: Consider ciphergoth's example:
do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?
This kind of reasoning can be derived from standard probability theory and would work fine on someone running errorless hardware. But it doesn't work for us.
We need to investigate this problem systematically, and not just make arguments about whether we're too confident or not confident enough, trying to push the public consensus back and forth. The right answer might be completely different, like perhaps we need different kinds or multiple levels of confidence, or upper and lower bounds on probability estimates.
Replies from: MichaelVassar↑ comment by MichaelVassar · 2010-01-19T05:17:29.167Z · LW(p) · GW(p)
I think that standard probability theory assumes a known ontology and infinite computing power. We should ideally also be able to produce a probability theory for agents with realistically necessary constraints but without the special constraints that we have.
comment by Paul Crowley (ciphergoth) · 2010-01-18T10:25:49.859Z · LW(p) · GW(p)
One simple example: do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?
Replies from: Bo102010↑ comment by Bo102010 · 2010-01-18T13:30:49.008Z · LW(p) · GW(p)
Isn't the expected value still negative?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T13:40:16.572Z · LW(p) · GW(p)
No, the jackpot is much more than a million times bigger than the stake.
EDIT: the expected utility might still be negative, because of the diminishing marginal utility of money.
Replies from: Bo102010↑ comment by Bo102010 · 2010-01-18T18:22:22.694Z · LW(p) · GW(p)
Maybe I'm not understanding your point.
If the odds of winning are one in 100 million, you could very well expect to make a million statements of "I will not win the lottery" and not be right once.
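A quick sketch of that arithmetic (assuming 1-in-100-million odds per draw):

```python
# At 1-in-100-million odds per ticket, a million "I will not win" statements
# are expected to be wrong about 0.01 times in total.
p_win = 1e-8
n_statements = 1_000_000
expected_wrong = n_statements * p_win                   # = 0.01
p_at_least_one_wrong = 1 - (1 - p_win) ** n_statements  # ~ 0.00995
print(expected_wrong, p_at_least_one_wrong)
```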
Replies from: orthonormal, Alicorn↑ comment by orthonormal · 2010-01-18T19:38:55.744Z · LW(p) · GW(p)
As in the LHC example, the criterion is making a million statements with independent reasoning behind each. Predicting a non-win in a million independent lotteries isn't what ciphergoth was thinking, so much as making a million predictions in widely different areas, each of which you (or I) estimate has probability less than 10^-8.
Even ruling out fatigue as a factor by imagining Omega copies me a million times and asks each a different question, I believe my mind is so constituted that I'd be very overconfident in tens of thousands of cases, and that several of them would prove me wrong.
Replies from: MichaelVassar, ciphergoth↑ comment by MichaelVassar · 2010-01-19T05:28:45.445Z · LW(p) · GW(p)
Everything is dependent on everything else. I can't make many independent statements.
Replies from: orthonormal↑ comment by orthonormal · 2010-01-19T05:41:31.405Z · LW(p) · GW(p)
That's certainly true given full rationality and arbitrary computing power, but there are certainly many individual things I could be wrong about without being able to immediately see how it contradicts other things I get right. I wouldn't put it past Omega to pull this off.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T19:51:00.005Z · LW(p) · GW(p)
As in the LHC example, the criterion is making a million statements with independent reasoning behind each. Predicting a non-win in a million independent lotteries isn't what ciphergoth was thinking, so much as making a million predictions in widely different areas, each of which you (or I) estimate has probability less than 10^-8.
I'm not sure this properly represents what I was thinking. We all agree that any decision procedure that leads you to play the lottery is flawed. But the "million equivalent statement" test seems to indicate that you can't be confident enough of not winning to justify not playing, given the payoffs. If you insist on independent reasoning, passing the million-statement test is even harder, and justifying not playing is therefore harder. It's a kind of real-life Pascal's mugging.
I don't have a solution to Pascal's mugging, but for the lottery, I'm inclined to think that I really can have 10^-8 confidence of not winning, that the flaw is with the million-statement test, and it's simply that there aren't a million disparate situations where you can have this kind of confidence, though there certainly are a million broadly similar situations in the reference class "we are actually in a strong position to calculate high-quality odds on this coming to pass".
Replies from: wedrifid, RobinZ↑ comment by wedrifid · 2010-01-18T21:45:57.751Z · LW(p) · GW(p)
We all agree that any decision procedure that leads you to play the lottery is flawed.
I don't.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-18T22:17:07.584Z · LW(p) · GW(p)
Can you please explain that further? Why not? Do you just mean that the pleasure of buying the ticket could be worth a dollar, even though you know you won't win?
Replies from: wedrifid↑ comment by wedrifid · 2010-01-19T00:13:59.443Z · LW(p) · GW(p)
Just reasoning based on a non-linear relationship between money and utility.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-19T03:16:55.173Z · LW(p) · GW(p)
Winning ten million dollars provides less than ten million times the utility of winning one dollar, because the richer you are, the less difference each additional dollar makes. That seems to argue against playing the lottery, though.
Replies from: wedrifid↑ comment by wedrifid · 2010-01-19T03:19:12.549Z · LW(p) · GW(p)
$5,000,000 debt. Bankruptcy laws.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-21T18:35:33.058Z · LW(p) · GW(p)
Very clever! You're right; that is a situation where you might as well play the lottery.
This actually comes up in business, in terms of the types of investments that businesses make when they have a good chance of going bankrupt. They may not play the lottery, but they're likely to make riskier moves since they have very little to lose and a lot to gain.
Replies from: wedrifid↑ comment by wedrifid · 2010-01-21T23:05:32.206Z · LW(p) · GW(p)
They may not play the lottery, but they're likely to make riskier moves since they have very little to lose and a lot to gain.
It also applies if you believe your company will be bailed out by the government. I don't tend to approve of bank bailouts for this reason. (Although government guarantees for deposits I place in a different category.)
↑ comment by RobinZ · 2010-01-18T21:51:09.305Z · LW(p) · GW(p)
It looks to me like the flaw is in calculating the expected utility after changing the probability estimate with the probability of error.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T21:53:44.098Z · LW(p) · GW(p)
What alternative do you have in mind?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-18T22:24:42.190Z · LW(p) · GW(p)
Well, in an abstract case it would be reasonable, but if you are considering (for example) the lottery, the rule of thumb "you won't win playing the lottery" outweighs any expectation of errors in your own calculations.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T23:29:58.125Z · LW(p) · GW(p)
Potentially promising approach, but how does that translate into math?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-18T23:45:50.547Z · LW(p) · GW(p)
Let A represent the event when the lottery under consideration is profitable (positive expected value from playing); let X represent the event in which your calculation of the lottery's value is correct. What is desired is P(A). Trivially:
P(A) = P(X) * P(A|X) + P(~X) * P(A|~X)
From your calculations, you know P(A|X) - this is the arbitrarily-strong confidence komponisto described. What you need to estimate is P(X) and P(A|~X).
P(X) I cannot help you with. From my own experience, depending on whether I checked my work, I'd put it in the range [0.9, 0.999], but that's your business.
P(A|~X) I would put in the range [1e-10, 1e-4].
In order to conclude that you should always play the lottery, you would have to put P(A|~X) close to unity.
Q.E.D.
Edit: The error I see is supposing that a wrong calculation gives positive information about the correct answer. That's practically false - if your calculation is wrong, the prior should be approximately correct.
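A numeric sketch of the decomposition above, with values picked from the ranges just stated (all three probabilities are assumptions chosen only to show the shape of the argument):

```python
# RobinZ's decomposition with assumed numbers plugged in: even granting a 10%
# chance that your own calculation is wrong, P(the lottery is +EV) stays tiny,
# because a botched calculation is not evidence that the lottery is secretly good.
p_X = 0.9              # P(my calculation is correct) -- assumed
p_A_given_X = 1e-9     # P(lottery profitable | calculation correct) -- assumed
p_A_given_not_X = 1e-4 # P(lottery profitable | calculation wrong) -- assumed

p_A = p_X * p_A_given_X + (1 - p_X) * p_A_given_not_X
print(p_A)  # ~1e-5: nowhere near the "close to unity" needed to justify playing
```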
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-18T23:57:20.515Z · LW(p) · GW(p)
I think this doesn't work, or at least is incomplete, because what is needed (under standard decision theory) to decide whether or not to play is not the probability of the lottery having a positive expected value, but the expected utility of the lottery, which I don't see how to compute from your P(A) (assuming that utility is linear in dollars).
ETA: In case the point isn't clear, suppose P(A)=1e-4, but the expected value of the lottery, conditional on A being true, is 1e5, then you should still play, right?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-19T00:11:42.023Z · LW(p) · GW(p)
You're right: recalculating...
Let E(A) be the expected value of the lottery that you should use in determining your actions. Let E(a) be the expected value you calculate. Let p be your confidence in your calculation (a probability in the Bayesian sense).
If we want to account for the possibility of calculating wrong, we are tempted to write something like
E(A) = p * E(a) + (1-p) * x
where x is what you would expect the lottery to be worth if your calculation was wrong.
The naive calculation - the one which says, "play the lottery" - takes x as equal to the jackpot. This is not justified. The correct value for x is closer to your reference-class prediction.
Setting x equal to "negative the cost of the ticket plus epsilon", then, it becomes abundantly clear that your ignorance does not make the lottery a good bet.
Edit: This also explains why you check your math before betting when it looks like a lottery is a good bet, which is nice.
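A sketch of this corrected calculation with assumed numbers (a $1 ticket whose expected value is -$0.50 if the calculation is right):

```python
# The corrected calculation, sketched with assumed numbers: if being wrong about
# the lottery just means the reference-class answer ("you lose the ticket price")
# applies, uncertainty about your own math does not rescue the bet.
p = 0.99           # confidence in my own expected-value calculation (assumed)
E_a = -0.50        # calculated expected value of a $1 ticket (assumed)
x = -1.0 + 1e-6    # value if my calculation is wrong: ~the reference-class answer

E_A = p * E_a + (1 - p) * x
print(E_A)  # still clearly negative
```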
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-01-19T03:04:42.811Z · LW(p) · GW(p)
If we follow your suggestion and obtain E(A) < 0, then compute from that the probability of winning the lottery, we end up with P(will win lottery) < 1e-8. But what if we want to compute P(will win lottery) directly? Or, if you think we shouldn't try to compute it directly, but should do it in this roundabout way, then we need a method for deciding when this indirect method is necessary. (Meta point: I think you might be stopping at the first good answer.)
Replies from: RobinZ↑ comment by RobinZ · 2010-01-19T03:22:52.961Z · LW(p) · GW(p)
The parallel calculation would be
P(L) = p * P_calculated + (1-p) * P_typical
I don't put P_typical very high.
Meta point: I think you might be stopping at the first good answer.
Okay, I'll grant you that one. I'm still promoting my original idea to a top-level post.
Edit: ...in part because I would like more eyes to see it and provide feedback - I would love to know if it has some interesting faults.
Edit: Here it is.
comment by Drahflow · 2010-01-18T13:46:30.669Z · LW(p) · GW(p)
You propose to ignore the "odd" errors humans sometimes make while calculating a probability for some event. However, errors do occur, even when judging the very first case. And they (at least some of them) occur randomly. When you believe you have correctly calculated the probability, you just might have made an error anywhere in the calculation.
If you keep around the "socially accepted" levels of confidence, those errors average out pretty fast; but if you make only one error in 10^5 calculations, you should not assign probabilities smaller than 1/10^5. Otherwise a bet at 10000 to 1 between you and me (a fair game from your perspective) will give me an expected value larger than 0, due to the errors you could be making without noticing.
This is another advantage an AI might have over humans: if the hardware is good enough, probability assignments below 10^-5 might actually be reasonable.
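A sketch of this argument in numbers. The error rate and stakes below are assumed, and note that with a one-in-10^5 blunder rate the counterparty's edge only appears once the odds stretch beyond roughly 1/error-rate, i.e. around 10^5 to 1:

```python
# Drahflow's argument in numbers (all values assumed): if some fraction of your
# "sure thing" judgments are simply miscalculations, there is a floor below which
# your stated probabilities stop being meaningful, and any bet at odds longer
# than roughly 1/error_rate is a gift to your counterparty.
error_rate = 1e-5   # assumed chance that a given calculation of yours is wrong

def counterparty_ev(odds):
    """Counterparty's expected gain, staking $1 against your $<odds> 'sure thing'."""
    return error_rate * odds - (1 - error_rate) * 1

breakeven_odds = (1 - error_rate) / error_rate   # ~1e5 to 1
print(breakeven_odds)
print(counterparty_ev(1e6))   # positive: at such odds, your blunder rate pays them
```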
Replies from: komponisto↑ comment by komponisto · 2010-01-18T14:38:52.560Z · LW(p) · GW(p)
You propose to ignore the "odd" errors humans sometimes make while calculating a probability for some event
I don't think I said any such thing.
There is always some uncertainty; but a belief that the uncertainty is above some particular lower bound is a belief like any other, and no more exempt from the requirements of justification.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-18T16:24:51.802Z · LW(p) · GW(p)
But Drahflow did just justify it. He said you're running on error-prone hardware. Now, there's still the question of how often the hardware makes errors, and there's the problem of privileging the hypothesis (thinking wrongly about the lottery can't make the probability of a ticket winning more than 10^-8, no matter how wrong you are), and there's the horrible LHC inconsistency, but the opposing position is not unjustified. It has justification that goes beyond just social modesty. It's a consistent trend in which people form confidence bounds that are too narrow on hard problems (and to a lesser extent, too wide on easy problems). If you went by the raw experiments then "99% probability" would translate into 40% surprises because (a) people are that stupid (b) people have no grasp of what the phrase "99% probability" means.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T16:55:20.890Z · LW(p) · GW(p)
I agree, and I don't think this contradicts or undermines the argument of the post.
These experiments should definitely shift physicists' probabilities by some nonzero amount; the question is how much. When they calculate that the probability of a marble statue waving is 10 to the minus gazillion, would you really want to argue that, based on surveys like this, they should adjust that to some mundane quantity like 0.01? That seems absurd to me. But if you grant this, then you have to concede that "epistemic bootstrapping" beyond ordinary human levels of confidence is possible. Then the question becomes: what's the limit, given our knowledge of physics (present and future)?
Replies from: Morendil↑ comment by Morendil · 2010-01-18T17:16:40.148Z · LW(p) · GW(p)
If you did see a marble statue wave, after making this calculation, you would resurrect a hypothesis at the one-in-a-million level maybe (someone played a hugely elaborate prank on you involving sawing off a duplicate statue's arm and switching that with the recently examined statue while you were briefly distracted by a phone ringing, say), not a hypothesis at the 10 to the minus whatever (e.g. you are being simulated by Omega for laughs).
Perhaps I'm getting this wrong, but this seems similar in spirit to the "queer uses of probability" discussion in Jaynes, where he asks what kind of evidence you'd have to see to believe in ESP, and you can take the probability of that as an indication of your prior probability for ESP.
Perhaps you're making too much of absolute probabilities, when in general what we're interested in is choosing between two or more competing hypotheses.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T18:02:25.529Z · LW(p) · GW(p)
This comment reads as if you're disagreeing with me about something ("you're making too much..."), but I can't detect any actual disagreement.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-18T16:21:27.394Z · LW(p) · GW(p)
Now, if it is the case that she didn't, then it follows that, given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0.
What, like 1/3^^^3? There isn't that much information in the universe, and come to think, I'm not sure I can conceive of any stream of evidence which would drive the probability that low in the Knox case, because there are complicated hypotheses much less complicated than that in which you're in a computer simulation expressly created for the purpose of deluding you about the Amanda Knox case.
Replies from: komponisto, Sticky, Tyrrell_McAllister↑ comment by komponisto · 2010-01-18T16:26:26.307Z · LW(p) · GW(p)
I thought I was stating a mathematical tautology. I didn't say there was enough information in the universe to get below 1/3^^^3. The point was only that the information controls the probability.
↑ comment by Sticky · 2010-01-23T19:25:04.115Z · LW(p) · GW(p)
But surely any statement one could make about Amanda Knox is only about the Amanda Knox in this world, whether she's a fully simulated human or something less. Perhaps only the places I actually go are fully simulated, and everywhere else is only simulated in its effects on the places I go, so that the light from distant stars is supplied without bothering to run their internal processes; in that case, the innocent Amanda Knox only exists insofar as the effects that an innocent Amanda Knox would have on my part of the world are implemented. Even so, my beliefs about the case can only be about the figure in my own world. It doesn't matter that there could be some other world where Amanda Knox is a murderess and Hitler was a great humanitarian.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-01-23T20:25:31.794Z · LW(p) · GW(p)
I'm not sure why this is being downvoted so much (to –3 when I saw it). It's a good point.
If I'm in a simulation, and the "base reality" is sufficiently different from how things appear to me in the simulation, it stops making sense to say that I'm fooled into attributing false predicates to things in the base reality. I'm so cut off from the base reality that few of my beliefs can be said to be about it at all. It makes more sense to say that I have true beliefs about the things in the simulation. I just have one important false belief about them—namely, that they're not simulated. But that doesn't mean that my other beliefs about them are wrong.
The situation is similar to that of the proverbial man who thinks that penguins are blind burrowing mammals who live in the Namib Desert. Such beliefs aren't really about penguins at all. More probably, the man has true beliefs about some variety of golden mole. He just has one important false belief about them—namely, that they're called "penguins".
Replies from: Sticky↑ comment by Sticky · 2010-01-24T22:58:39.374Z · LW(p) · GW(p)
Perhaps it's being downvoted because of my strange speculation that the stars are unreal -- but it seems to me that if this is a simulation with such a narrow purpose as fooling komponisto/me/us/somebody about the Knox case, it would be more thrifty to only simulate some narrow portion of the world, which need not include Knox herself. Even then, I think, it would make sense to say that my beliefs are about Knox as she is inside the simulation, not some other Knox I cannot have any knowledge of, even in principle.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2010-01-24T23:12:01.701Z · LW(p) · GW(p)
I downvoted the great-grandparent because it ignores the least convenient possible world where the simulators are implementing the entire Earth in detail such that the simulated Amanda Knox is a person, is guilty of the murder, and yet circumstances are such that she seems innocent given your state of knowledge. You're right that implementing the entire Earth is more expensive than just deluding you personally, but that's irrelevant to Eliezer's nitpick, which was only that 1/(3^^^3) really is just that small and yet nonzero.
Replies from: Tyrrell_McAllister, Sticky↑ comment by Tyrrell_McAllister · 2010-01-25T00:00:44.876Z · LW(p) · GW(p)
I think that you've only pushed it up (down?) a level.
If I have gathered sufficiently strong evidence that the simulated Knox is not guilty, then the deception that you're suggesting would very probably amount to constructing a simulated simulated Knox, who is not guilty, and who, it would turn out, was the subject of my beliefs about Knox. My belief in her innocence would be a true belief about the simulated-squared Knox, rather than a false belief about the guilty simulated-to-the-first-power Knox.
All deception amounts to an attempt to construct a simulation by controlling the evidence that the deceived person receives. The kind of deception that we see day-to-day is far too crude to really merit the term "simulation". But the difference is one of degree. If an epistemic agent were sufficiently powerful, then deceiving it would very probably require the sort of thing that we normally think of as a simulation.
ETA: And the more powerful the agent, the more probable it is that whatever we induced it to believe is a true belief about the simulation, rather than a false belief about the "base reality" (except for its belief that it's not in a simulation, of course).
Replies from: Nick_Tarleton, Zack_M_Davis↑ comment by Nick_Tarleton · 2010-01-25T01:15:23.972Z · LW(p) · GW(p)
If I have gathered sufficiently strong evidence that the simulated Knox is not guilty, then the deception that you're suggesting would very probably amount to constructing a simulated simulated Knox, who is not guilty, and who, it would turn out, was the subject of my beliefs about Knox. My belief in her innocence would be a true belief about the simulated-squared Knox, rather than a false belief about the guilty simulated-to-the-first-power Knox.
This is a good point, but your input could also be the product of modeling you and computing "what inputs will make this person believe Knox is innocent?", not modeling Knox at all.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-01-25T01:36:32.500Z · LW(p) · GW(p)
How would this work in detail? When I try to think it through, it seems that, if I'm sufficiently good at gathering evidence, then the simulator would have to model Knox at some point while determining which inputs convince me that she's innocent.
There are shades here of Eliezer's point about Giant Look-Up Tables modeling conscious minds. The GLUT itself might not be a conscious mind, but the process that built the GLUT probably had to contain the conscious mind that the GLUT models, and then some.
Replies from: pengvado↑ comment by pengvado · 2010-01-25T03:03:33.747Z · LW(p) · GW(p)
The process that builds the GLUT has to contain your mind, but nothing else. The deceiver tries all exponentially-many strings of sensory inputs, and sees what effects they have on your simulated internal state. Select the one that maximizes your belief in proposition X. No simulation of X involved, and the deceiver doesn't even need to know anything more about X than you think you know at the beginning.
Replies from: Sticky, komponisto↑ comment by Sticky · 2010-01-25T23:26:47.338Z · LW(p) · GW(p)
If whoever controls the simulation knows that Tyrrell/me/komponisto/Eliezer/etc. are reasonably reasonable, there's little to be gained by modeling all the evidences that might persuade me. Just include the total lack of physical evidence tying the accused to the room where the murder happened, and I'm all yours. I'm sure I care more than I might have otherwise because she's pretty, and obviously (obviously to me, anyway) completely harmless and well-meaning, even now. Whereas, if we were talking about a gang member who's probably guilty of other horrible felonies, I'd still be more convinced of innocence than I am of some things I personally witnessed (since the physical evidence is more reliable than human memory), but I wouldn't feel so sorry for the wrongly convicted.
↑ comment by komponisto · 2010-01-25T03:15:42.061Z · LW(p) · GW(p)
But remember my original point here: level-of-belief is controlled by the amount of information. In order for me to reach certain extremely high levels of certainty about Knox's innocence, it may be necessary to effectively simulate a copy of Knox inside my mind.
ETA: And that of course raises the question about whether in that case my beliefs are about the mind-external Knox ("simulated" or not) or the mind-internal simulated Knox. This is somewhat tricky, but the answer is the former -- for the same reason that the simple, non-conscious model of Amanda I have in my mind right now represents beliefs about the real, conscious Amanda in Capanne prison. Thus, a demon could theoretically create a conscious simulation of an innocent Amanda Knox in my mind, which could represent a "wrong" extremely-certain belief about a particular external reality. But in order to pull off a deception of this order, the demon would have to inhabit a world with a lot more information than even the large amount available to me in this scenario.
↑ comment by Zack_M_Davis · 2010-01-25T00:59:54.162Z · LW(p) · GW(p)
That is a fascinating counterargument that I'm not sure what to make of yet.
Replies from: komponisto↑ comment by komponisto · 2010-01-26T03:47:18.012Z · LW(p) · GW(p)
Here's how I see the whole issue, after some more reflection:
Imagine a hypothetical universe with more than 3^^^3 total bits of information in it, which also contained a version of the Kercher murder. If you knew enough about the state of such a universe (e.g. if you were something like a Laplacian demon with respect to it), you could conceivably have on the order of 3^^^3 bits of evidence that the Amanda Knox of that universe was innocent of the crime.
Now, the possibility would still exist that you were being deceived by a yet more powerful demon. But this possibility would only bound your probability away from 0 by an amount smaller than 1/3^^^3. In your (hypothesized) state of knowledge, you would be entitled to assert a probability of 1/3^^^3 that Knox killed Kercher.
Furthermore, if a demon were deceiving you to the extent of feeding you 3^^^3 bits of "misleading" information, it would automatically be creating, within your mind, a model so complex as to almost certainly contain fully conscious versions of Knox, Kercher, and everyone else involved. In other words, it would effectively be creating an autonomous world in which Knox was innocent. Thus, while you might technically be "mistaken", in the sense that your highly complex model does not "correspond" to the external situation known to the demon, the moral force of that mistake would be undermined considerably, in view of the existence of a morally significant universe in which (the appropriate version of) Knox was indeed innocent.
When we make probability estimates, what we're really doing is measuring the information content of our model. (The more detailed our model, the more extreme our estimates should be.) Positing additional layers of reality only adds information; it cannot take information away. A sufficiently complex model might be "wrong" as a model but yet morally significant as a universe in its own right.
↑ comment by Sticky · 2010-01-25T15:17:34.354Z · LW(p) · GW(p)
What possible world would that be? If it should turn out that the Italian government is engaged in a vast experiment to see how many people it can convince of a true thing using only very inadequate evidence (and therefore falsified the evidence so as to destroy any reasonable case it had), we could, in principle, discover that. If the simulation simply deleted all of her hair, fiber, fingerprint, and DNA evidence left behind by the salacious ritual sex murder, then I can think of two objections. First, something like Tyrrell McAllister's second-order simulation, only this isn't so much a simulated Knox in my own head, I think, as it is a second-order simulation implemented in reality, by conforming all of reality (the crime scene, etc.) to what it would be if Knox were innocent. Second, while an unlawful simulation such as this might seem to undermine any possible belief I might form, I could still in principle acquire some knowledge of it. Suppose whoever is running the simulation decides to talk to me and I have good reason to think he's telling the truth. (This last is indistinguishable from "suppose I run into a prophet" -- but in an unlawful universe that stops being a vice.)
ETA: I suppose if I'm entertaining the possibility that the simulator might start telling me truths I couldn't otherwise know then I could, in principle, find out that I live in a simulated reality and the "real" Knox is guilty (contrary to what I asserted above). I don't think I'd change my mind about her so much as I would begin thinking that there is a guilty Knox out there and an innocent Knox in here. After all, I think I'm pretty real, so why shouldn't the innocent Amanda Knox be real?
↑ comment by Tyrrell_McAllister · 2010-01-18T16:58:43.189Z · LW(p) · GW(p)
There seems to be a deep idea here, but I don't yet see that the numbers really balance out. I would appreciate it if you made a top-level post elaborating on this.
comment by wedrifid · 2010-01-19T02:19:54.412Z · LW(p) · GW(p)
The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.
You're just telling people to pull different probabilities out of their rear end. Framing the other guy's model with 'rear endeness' doesn't make your model any less so. Your model must include information about the part of the universe that is komponisto and more generally about human psychology. Your model appears to make poor predictions about the likelihood that human beliefs are well founded, and so, were it convenient, I would bet against such predictions.
It may be complicated and my model is certainly not detailed but nor is it especially vaguer than it should be given the information I have available.
Replies from: Unknowns↑ comment by Unknowns · 2010-08-23T13:13:08.341Z · LW(p) · GW(p)
Not only does komponisto's model make poor predictions; he in fact wants it to do this. That's why he brings up the image of a computer calculating your posteriors, so that you can say the probability of such and such is 10^-50, even though even komponisto knows that you are not and cannot be calibrated in asserting this probability.
comment by CronoDAS · 2010-01-18T21:53:47.774Z · LW(p) · GW(p)
What are the odds that, given that I didn't make a mistake pressing the buttons, my electronic calculator (which appears to be in proper working order) will give a wrong answer on a basic arithmetic problem that it should be able to solve?
Replies from: RobinZ, ciphergoth↑ comment by RobinZ · 2010-01-18T22:38:47.162Z · LW(p) · GW(p)
With all the caveats, I'd guess somewhere south of one in ten thousand. I would expect the biggest terms by far in the error rate to be:
1. User error.
2. Design fault.
3. Mechanical failure (e.g. solder bump fracture, display damage).
I'd like to know some estimates of probability that high-energy radiation can affect a calculation, but pretty much everything after 1 is highly unlikely.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T22:18:12.680Z · LW(p) · GW(p)
Presumably you're imagining something like a year-old calculator, solar powered and in bright light, reported in good working order and tested on a few problems with known answers, doing arithmetic on integers less than 10,000 in magnitude. Just to close as many of the doors as possible...
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-19T01:02:36.132Z · LW(p) · GW(p)
Just to close as many of the doors as possible...
Shouldn't have to do that here.
comment by ChristianKl · 2010-01-18T22:46:56.186Z · LW(p) · GW(p)
Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress.
That's not really true. 10,000 hours of deliberate practice at making predictions about a given field will improve the intuition by a lot. Intuition isn't fixed.
Replies from: Waldheri↑ comment by Waldheri · 2010-01-19T17:24:50.161Z · LW(p) · GW(p)
Isn't "intuition" in that case not simply subconscious empirical knowledge?
Replies from: ChristianKl↑ comment by ChristianKl · 2010-01-20T11:38:38.726Z · LW(p) · GW(p)
Do you believe that intuition exists in some other form than subconscious empirical knowledge? Provided you don't believe in any paranormal stuff I don't think that there's something else that you could call intuition.
For me science is about having well defined theories and then trying to falsify those theories. When you make decisions based on intuition you aren't making decisions based on theory.
comment by Sniffnoy · 2010-01-18T22:41:42.756Z · LW(p) · GW(p)
"Generalizing from One Example" and "Reference Class of the Unreferenceclassable" links are both broken.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T22:46:31.809Z · LW(p) · GW(p)
Thanks, fixed.
comment by komponisto · 2010-01-18T21:51:09.861Z · LW(p) · GW(p)
In accordance with a suggestion that turned out to be rather good, I've now deleted several paragraphs of unnecessary material from the post. (The contents of these paragraphs, having to do with previous posts of mine, were proving to be a distraction; luckily the post still flows without them.)
comment by RolfAndreassen · 2010-01-18T21:01:25.995Z · LW(p) · GW(p)
Perhaps too much is being made of the "arbitrarily close to zero" remark. Sure, there isn't enough information in the universe to get to one-over-number-made-with-up-arrow-notation. But there's certainly enough to get to, say, one in a thousand, or one in a million; and while this isn't arbitrarily close to zero in a mathematical sense, it's good enough for merely human purposes. And what's worse, the word 'arbitrary' is distracting from the actual argument being made, in what looks to me like a rather classic case of "Is that your real objection?"
comment by Jayson_Virissimo · 2010-01-18T18:27:42.428Z · LW(p) · GW(p)
A probability estimate is not a measure of "confidence" in some psychological sense.
This is one of the possible interpretations of probability. To say that this interpretation is wrong requires an argument, not simply you saying that your interpretation is the correct one.
Replies from: orthonormal↑ comment by orthonormal · 2010-01-18T19:32:13.829Z · LW(p) · GW(p)
Here's one facet of the argument.
comment by ChristianKl · 2010-01-18T23:16:25.443Z · LW(p) · GW(p)
As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.
How do we know that we are acquiring more real information? The number of open questions in science grows. It doesn't shrink.
Replies from: RobinZ, loqi↑ comment by RobinZ · 2010-01-18T23:19:30.309Z · LW(p) · GW(p)
How do we know that we are acquiring more real information?
Because Archimedes didn't have a microwave.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2010-01-19T04:04:26.360Z · LW(p) · GW(p)
How do we know that we are acquiring more real information?
Because Archimedes didn't have a microwave.
If by "know", ChristianKl means having a belief that is universal, necessary, and certain, then we don't know that we have more real information. Nothing short of deductive proof will achieve this kind of knowledge.
RobinZ seems to be (implicitly) using an argument similar to this:
If theory A is true, then technology B will work. Technology B works. ∴ Theory A is true.
This argument, while plausible, commits the fallacy of affirming the consequent, and so isn't deductively valid. This means that it fails to achieve the kind of knowledge that is universal, necessary, and certain.
If, on the other hand, you will settle for knowledge that is particular, contingent, and probable, then it is quite clear that we have made leaps and bounds in the amount of real information that we have access to. For instance, compare Wikipedia to the 1911 Encyclopedia Britannica.
Replies from: RobinZ↑ comment by RobinZ · 2010-01-19T04:14:44.628Z · LW(p) · GW(p)
I'm afraid I don't see what you're driving at. There's nothing in your comment that I disagree with, and nothing in my comment that you do not address correctly, but I thought my reply to ChristianKl was sufficient. Do you believe that it was not? If so, what is the question I should be responding to?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2010-01-19T04:21:11.354Z · LW(p) · GW(p)
I was trying to point out (perhaps badly) that your argument succeeds assuming one definition of knowledge, but fails assuming the other definition.
It isn't clear to me which definition ChristianKl had in mind.
comment by ChristianKl · 2010-01-18T23:12:33.270Z · LW(p) · GW(p)
I think it's fair to compare the LHC with past scientific experiments, but if you do, you should remember that no past scientific experiment destroyed the world, and therefore you don't get a prior probability greater than 0 by that process.
The LHC didn't even work the first time around. You could say the predictions of how the LHC was supposed to work were wrong. There are, however, millions of different ways the LHC could produce results that aren't what anybody expects, none of which involve the LHC blowing up the planet.
comment by Dagon · 2010-01-18T21:10:27.646Z · LW(p) · GW(p)
There have been a number of posts recently on the topic of beliefs, and how fragile they can be. They would benefit A LOT by a link to Making Beliefs Pay Rent
When you say Amanda Knox either killed her roommate, or she didn't, you've moved from a universe of rational beliefs to that of human-responsibility models. It's very unclear (to me) what experience you're predicting with "killed her roommate". This confusion, not any handling of evidence or Bayesian updates, explains a large divergence in estimates that people give. They're giving estimates for different experiences, not different estimates of the same experience.
Replies from: ciphergoth, orthonormal↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T21:52:00.060Z · LW(p) · GW(p)
This is a curious interpretation of "making beliefs pay rent". I hesitate to assert that a difference of belief about a prosaic historical fact, which you could in principle check with a "time camera", is not a real difference of belief unless you can set out specific, realistic predictions they differ in. If one person believes that Lee Harvey Oswald was in the book depository with a rifle and another believe he wasn't even in the building, I don't think they need to articulate the different predictions of their beliefs to believe that they're disagreeing.
Replies from: Dagon↑ comment by Dagon · 2010-01-19T18:28:29.103Z · LW(p) · GW(p)
The difference in expected experience is that some people think about the question given a time camera, while others think about the probability that additional evidence will come to their attention.
I think the probability that I'll ever have a time camera is very low, and the chance that I'd use it to understand the details of this roomate and death relationship even lower, so there is no expected experience from this direction.
Additionally, there are lots of ways for someone to have some responsibility for a death without having a hand on the weapon directly.
To me, probability assignments of her guilt or innocence are primarily a matter of group consensus. There WAS an underlying physical reality, but the proposition given wasn't well enough defined for me to understand the wager.
↑ comment by orthonormal · 2010-01-18T21:17:24.790Z · LW(p) · GW(p)
HTML links and tags don't work here. You can edit your comment and click the "Help" tag under the textbox to see how to do links and italics in this format.
comment by Morendil · 2010-01-18T16:15:52.735Z · LW(p) · GW(p)
How do you get from "uncertainty exists in the map, not the territory" to the following ?
given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0
One's uncertainty about the here-and-now, perhaps. In criminal cases we are dealing with backward inference, and information is getting erased all the time. Right about now perhaps the only way you could get "arbitrarily close to 0" is by scanning Knox's brain or the actual perpetrator's brain; if both should die, we would reach a state of absolute uncertainty about the event, all the evidence you could in principle examine to reach certainty will have been rearranged into unintelligible patterns.
Similarly, I can't shake the notion that the physical distance between you and the evidence should constrain how much strength you think the evidence has, just as much as the temporal distance eventually must.
I could be wrong about that, and I have already come around to your point of view somewhat, but you seem to have a blind spot about this particular objection: you're just some guy who got his information about the case from the Internet (which is fine, so is almost everyone).
Your comparison between the Amanda Knox case and scientific knowledge leaves me cold. Science is concerned with regularities, situations where induction applies; the knowledge sought in a criminal case is of a completely different kind, by definition applying to a unique and hopefully irregular situation.
Yes, I agree that improved "epistemic technology" is grounds for more confidence, even in cases such as this one; but your argument would be improved by throttling back the eloquence - you have at times sounded like a defence lawyer - and focusing more on the concrete details of your argument.
You'd do more to convince me if you observed, for instance, how the Web has allowed you access to multiple reports about the case, improving your chances that the biases in each source would cancel each other out, and the facts that remain are (as you claim) basically all there is to know. Pre-Internet you would have had to rely on one or two "official" news sources. That is also part of "epistemic technology".
Replies from: Vive-ut-Vivas, komponisto, Nick_Tarleton, Nick_Tarleton↑ comment by Vive-ut-Vivas · 2010-01-18T18:24:14.460Z · LW(p) · GW(p)
"Your comparison between the Amanda Knox case and scientific knowledge leaves me cold. Science is concerned with regularities, situations where induction applies; the knowledge sought in a criminal case is of a completely different kind, by definition applying to a unique and hopefully irregular situation."
I'm eerily reminded of creationists arguing that studying evolution isn't "science", because it happened in the past. I don't see how it follows that the knowledge sought in a criminal case is somehow "different" than the knowledge sought in otherwise "legitimate" scientific pursuits. At the risk of playing definition games, if science is simply the methodology used to arrive at correct answers, then science can be applied to the Amanda Knox case - resulting in "scientific" knowledge.
↑ comment by komponisto · 2010-01-18T17:32:08.745Z · LW(p) · GW(p)
You seem to be replying to this post as if it were about the Knox case. It isn't. [ETA: Post now edited to make this clearer.] I'm not making any object-level arguments here about what the probabilities in that case should be. I only referred to it in order to introduce the point that one should think in terms of applying one's model to the data in computer-fashion to obtain probabilities, rather than imagining oneself judging a bunch of similar cases. (The two scenarios ought to be equivalent, but they feel different.)
Science is concerned with regularities, situations where induction applies; the knowledge sought in a criminal case is of a completely different kind, by definition applying to a unique and hopefully irregular situation.
I don't buy this for a minute. You may as well say that cosmology isn't scientific, because the Big Bang isn't repeatable in a lab; or that evolutionary biology won't become scientific until we can recreate dinosaurs.
Replies from: Morendil↑ comment by Morendil · 2010-01-18T18:17:13.916Z · LW(p) · GW(p)
You seem to be replying to this post as if it were about the Knox case. It isn't.
The post refers to your postings on the Knox case a lot. Perhaps you should consider that other readers will share my confusion on that point. Again, I tend to agree with your conclusions, but I find the tone of the writing a distraction from the good bits.
You may as well say that cosmology isn't scientific, because the Big Bang isn't repeatable in a lab; or that evolutionary biology won't become scientific until we can recreate dinosaurs.
In both cases science finds plenty of regularities to reason from, so it seems you're attacking straw men. My point is that there are some matters of fact about which we cannot reduce our uncertainty below a certain level. The details of historical facts tend to belong in that category.
Consider an extreme form of chick sexing. Put a chick in a blender, and while there certainly is a "fact of the matter" as to its having been male or female, you can no longer tell, you have to live with 50:50. Advances in technology can catch up with that, and I'm deliberately choosing an example which is middle-of-the-road in the amount of information that gets randomized (imagine burning the remains). You could in principle recover that information, but only if you had previously observed some regularities (say, hormonal) about chicks. That pretty much captures the difference between science and investigation.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T18:28:03.377Z · LW(p) · GW(p)
The post refers to your postings on the Knox case a lot. Perhaps you should consider that other readers will share my confusion on that point.
I've made some edits to (hopefully) prevent that. The references are to some extent inevitable, since the Knox writings were my only posts up to this point, and the resulting discussions did help to prompt the thoughts expressed here, as a matter of historical fact.
Again, I tend to agree with your conclusions, but I find the tone of the writing a distraction from the good bits.
Could you perhaps give some examples? (I think I automatically tend to write in the sort of tone that I would enjoy reading.)
My point is that there are some matters of fact about which we cannot reduce our uncertainty below a certain level
Yes; certainty (like any technology) is definitely limited by the physics of the universe. Those limits may be considerably beyond the human level, though.
Replies from: Morendil↑ comment by Morendil · 2010-01-18T19:06:46.822Z · LW(p) · GW(p)
Could you perhaps give some examples?
Here's one - "scoffed and sneered, in capital letters" - and elsewhere you used "gasped" to refer to one of my own comments (this may make me oversensitive to this pattern, compared to other readers, but the effect is still there). That sounds dismissive of others' objections.
A more subtle one is the profusion of hyperlinks, to comments, posts and wiki pages, not always necessary to the point being made. More generally the post advances too many distinct ideas; I'd try to say the same thing in fewer words. ("You should fly faster when your instruments are good" seems to be the thrust of the whole post.)
Still more subtle, you are selective in the objections that you choose to respond to.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T19:45:24.978Z · LW(p) · GW(p)
Here's one - "scoffed and sneered, in capital letters"
Hm...that comment did sound like a scoff or sneer to me ("I offer $50 to the AK defense fund..."), and capital letters were in fact used.
and elsewhere you used "gasped" to refer to one of my own comments
What if I had used "balked" instead?
A more subtle one is the profusion of hyperlinks
This one surprises me. The use of hyperlinks to simultaneously provide convenient references and subtly convey conversational nuance has always been for me one of the more enjoyable aspects of Eliezer's writing; I probably learned it from him.
More generally the post advances too many distinct ideas; I'd try to say the same thing in fewer words.
Yikes. This is bad advice for me, since I already obsess about this, and as a result write very little. (I have a hard time allowing myself to just "write what's in my head".) If this is anything like a widespread view, I may have to seriously reconsider whatever plans I may have had of top-level posting in the future.
("You should fly faster when your instruments are good" seems to be the thrust of the whole post.)
I like this figure of speech; I wish I had come up with it.
Still more subtle, you are selective in the objections that you choose to respond to.
That's probably the case with everyone, though, isn't it? Given the constraints of time and attention, it seems hard to avoid this.
Replies from: Morendil, ciphergoth↑ comment by Morendil · 2010-01-18T20:15:51.184Z · LW(p) · GW(p)
The use of hyperlinks to simultaneously provide convenient references
Doesn't work when you link to a discussion comment: you can't tell from the URL what the link points to, so you have to follow it (thank goodness for tabs), breaking the flow.
I may have to seriously reconsider whatever plans I may have had of top-level posting
No no no. Please keep 'em coming. Just, you know: spend more time revising, most of which effort should consist of deleting stuff. Case in point, if the thread post isn't about the Knox case, then just delete every para which is a reference to the Knox case. Most of the time ruthless deletion improves your writing to a surprising extent.
Don't censor yourself in the writing phase, but do delete more in revising. For more on this see Peter Elbow's Writing With Power.
You can do what I do: save the long version to a local text file, "in case you ever need those words again".
Replies from: komponisto↑ comment by komponisto · 2010-01-18T21:40:53.734Z · LW(p) · GW(p)
if the thread post isn't about the Knox case, then just delete every para which is a reference to the Knox case.
You know, you're right. I just realized that the whole section can be cut, and the post still flows. It hadn't occurred to me because the thoughts were linked in my mind -- but that doesn't mean they need to be linked in the post.
Replies from: Morendil, orthonormal↑ comment by Morendil · 2010-01-18T22:03:37.258Z · LW(p) · GW(p)
Welcome to the club. This is one of the things that makes writing hard; you can never read your own stuff quite as a reader sees it.
The Knox reference in the para starting with "In the vanishingly unlikely event..." is now even more jarring. But the part of that para referencing "the model" continues from the previous para, so rather than delete it I'd try to reword it.
Your "core" para is the one that contains the idea, "we are not stuck with the inferential powers of our ancestors" and goes on to discuss "epistemic technology".
A typical good-writing suggestion is to find a way to move the key idea from where it is often found, buried in the middle of the article, to the very top. (Memorable quote which has helped me internalize this advice: "Your article is not a mystery novel. Don't keep the reader guessing until the punchline.")
I wouldn't worry about "EDIT" marks, not in top level posts. Just accept that the discussion can reflect past versions, and make the post the best version you can.
↑ comment by orthonormal · 2010-01-18T22:12:57.610Z · LW(p) · GW(p)
Agree with Morendil about the paragraph beginning "In the vanishingly unlikely event...". Without the earlier references, it's not good to have your example of something you're sure of be something that a newcomer or Googler could find so controversial.
I'd suggest you either swap it for something else in which the very probably correct view is also the mainstream one within the pool of possible readers, or failing that, put your first link to your old post here instead of at the paragraph beginning with "Previously...".
Replies from: komponisto↑ comment by komponisto · 2010-01-18T22:20:38.648Z · LW(p) · GW(p)
Done. (Good catch.)
↑ comment by Paul Crowley (ciphergoth) · 2010-01-18T19:53:47.914Z · LW(p) · GW(p)
Still more subtle, you are selective in the objections that you choose to respond to.
That's probably the case with everyone, though, isn't it? Given the constraints of time and attention, it seems hard to avoid this.
A tricky point! But I think I would worry if I were ignoring a highly-scored argument.
↑ comment by Nick_Tarleton · 2010-01-18T19:56:17.611Z · LW(p) · GW(p)
Right about now perhaps the only way you could get "arbitrarily close to 0" is by scanning Knox's brain or the actual perpetrator's brain; if both should die, we would reach a state of absolute uncertainty about the event, all the evidence you could in principle examine to reach certainty will have been rearranged into unintelligible patterns.
This may be true if you measure from inside the universe, but certainly isn't if you can measure from outside, including observing other quantum branches. (Hey, you did say "in principle.")
↑ comment by Nick_Tarleton · 2010-01-18T19:52:55.759Z · LW(p) · GW(p)
Right about now perhaps the only way you could get "arbitrarily close to 0" is by scanning Knox's brain or the actual perpetrator's brain; if both should die, we would reach a state of absolute uncertainty about the event, all the evidence you could in principle examine to reach certainty will have been rearranged into unintelligible patterns.
You'd have to examine a lot more, but certainly there would still in principle be some finite amount of information (though much of it unmeasurable within the universe, possibly including some in other quantum branches) that would suffice to run the physics backwards (with a finite computation) and figure out what happened.
comment by wedrifid · 2010-01-19T02:05:19.398Z · LW(p) · GW(p)
I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.
That would indeed be misleading imagery, but I don't think it is a fair representation of the imagery you quote. It sounds more like the description a debater would use when trying to make a position sound bad. Fatigue doesn't come into it; that would be silly.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-19T08:54:59.923Z · LW(p) · GW(p)
No, it's a description of a potential failing in the intuition pump that the imagery sets up.
comment by Kevin · 2010-01-18T12:09:30.170Z · LW(p) · GW(p)
This is a side point, perhaps, but something to take into account when assigning probabilities is that while Amanda Knox is not guilty, she is certainly a liar.
When confronted with someone known to be lying during something as high-stakes as a murder trial, people assign them a much higher probability of guilt, because someone who lies during a murder trial is actually more likely to have committed murder. That seems to be useful evidence when we are assigning numerical probabilities, but it was a horrific bias for the judge and jury of the case.
Edit: To orthonormal, yes, that is what I meant, thank you. I also agree that it's possible that her being a sociopath and/or not neurotypical confused the prosecutor.
Replies from: orthonormal, billswift, komponisto↑ comment by orthonormal · 2010-01-18T19:20:15.772Z · LW(p) · GW(p)
IAWYC (and don't understand the downvotes); the point in the last paragraph is a key one. Evidence that a suspect is lying should raise the probability of their guilt, but not nearly to the extent that it actually sways judges and juries (because people have the false idea that everyone but perpetrators will be telling the truth).
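In odds form, and with numbers I'm making up purely for illustration: even a fairly strong likelihood ratio for "caught lying" cannot carry a modest prior anywhere near conviction-level certainty, which is the gap between "evidence" and "grounds to convict".

```python
# Toy odds-form Bayesian update (all numbers are assumptions, not case data):
# how much should "the suspect was caught lying" move P(guilty)?

def update(prior_prob, likelihood_ratio):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Assume guilty suspects are 3x as likely as innocent ones to be caught lying.
print(update(0.20, 3.0))  # prior 20% -> ~43%, far short of "beyond reasonable doubt"
```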
↑ comment by billswift · 2010-01-18T15:14:45.311Z · LW(p) · GW(p)
someone that lies during a murder trial is actually more likely to have committed murder.
People lie all the time, mostly to protect their self-image or their image in others' minds. Just because someone lied during a trial does not mean they are more likely to have committed the crime. Just as often, people misremember, forget things they said before, or remember things they didn't mention before.
Replies from: Kevin, Kevin↑ comment by Kevin · 2010-01-20T10:44:20.206Z · LW(p) · GW(p)
I think if we compare the set of all accused murderers who lie during their trials to those who tell the truth, the whole truth, and nothing but the truth, a higher percentage of the liars will be guilty.
It's improper reasoning, however, to use that as the reason for convicting someone of murder.
I think there is a significant chance she was in the house at the time of the murder, or otherwise knew something that she didn't tell the police, and that major lie could have really confused the prosecutor, who was also the interrogator when she implicated Patrick Lumumba.
↑ comment by komponisto · 2010-01-18T12:32:08.700Z · LW(p) · GW(p)
I've addressed the relationship between legal and Bayesian reasoning here.
In general, I think we should keep discussion of the Knox case to the post dedicated to that subject. Here I'll just note that the meme about Knox being a "liar" derives from the allegation of "changing stories", which is an uninformed misconception.
Replies from: orthonormal, Kevin↑ comment by orthonormal · 2010-01-18T19:28:17.799Z · LW(p) · GW(p)
Sorry to put this here instead of the other thread, but I don't think this actually came up there:
Here I'll just note that the meme about Knox being a "liar" derives from the allegation of "changing stories", which is an uninformed misconception.
It can derive from other sources as well. I ran into the case on the Eyes for Lies blog, written by an experimentally identified "truth wizard" (boy do I hate that term) with a pretty impressive track record for judging liars from their media appearances. The author sees a number of telltale signs of lying and of sociopathy.
Now this shouldn't be admissible in court, and it's not unassailable Bayesian evidence that Amanda Knox is a liar or a sociopath (even these truth wizards are wrong on the order of 5% of the time). But it is evidence of those. (Still, being a sociopath only moderately raises the odds of being involved in the murder, and those are very low given the other facts of the case.)
Replies from: komponisto↑ comment by komponisto · 2010-01-18T20:23:03.961Z · LW(p) · GW(p)
I am strongly tempted to defy the data here.
In fact, looking at the blog, I didn't find much data. There was a link to an unimpressive article by a psychoanalyst, with some not-particularly-expert-sounding comments from the blog author -- who also admitted to not being able to tell whether Knox was lying during the testimony without hearing the questions. Furthermore, the author's understanding of the facts of the case left a lot to be desired, to put it mildly.
But even if we grant that this person has a tested above-average ability to identify characteristic signs of lying/sociopathy, and has identified Knox as possessing some of these signs (an assertion I didn't actually find, though I could have missed it), I'd want to know a lot more: what sort of likelihood ratios are we talking about? (I.e. what fraction of non-sociopaths also exhibit these signs?) Exactly what is this person's error rate? What do other "wizards" say in independent testing with strict experimental protocols? Etc.
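To put rough numbers on that likelihood-ratio question (every figure below is an assumption for the sake of argument, not data from the blog): with a low base rate of sociopathy, even a reader with the quoted 5% false-positive rate yields only a modest posterior.

```python
# Rough sketch of the likelihood-ratio question (all inputs are assumptions):
# what does "flagged as a sociopath" imply, given a low base rate?

def posterior(base_rate, true_positive_rate, false_positive_rate):
    """P(sociopath | flagged) via Bayes' rule."""
    p_flagged = (true_positive_rate * base_rate
                 + false_positive_rate * (1 - base_rate))
    return true_positive_rate * base_rate / p_flagged

# Assume: 3% base rate, the reader catches 90% of sociopaths,
# and mislabels 5% of non-sociopaths (the quoted false-positive figure).
print(posterior(0.03, 0.90, 0.05))  # ~0.36 -- evidence, but hardly decisive
```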
Then there's also the theoretical question: if this evidence is truly worth paying attention to, why shouldn't it be admissible in court? (Presumably there's no danger of abuse of police power or similar, so the reason for exclusion must have to do with the evidentiary strength or lack thereof.)
Replies from: orthonormal↑ comment by orthonormal · 2010-01-18T20:46:04.752Z · LW(p) · GW(p)
if this evidence is truly worth paying attention to, why shouldn't it be admissible in court?
Hmm. I was going to say that it's really a form of private evidence, if these "truth wizards" can tell more accurately on a subconscious level than they can consciously explain. But this basically puts them in the same boat as other expert witnesses, whose authority and probity have to be trusted (or countered by another expert of the same type).
Exactly what is this person's error rate?
Like I said, the usual figure is 5% false positives, and this person did list a recent case where they offered an opinion on the blog and later found themselves mistaken. Their track record otherwise looks pretty good.
I am strongly tempted to defy the data here.
Why? (Serious question.) It doesn't seem to me that there's strong evidence in the other direction, just a low prior of a random person being a sociopath. But given the way that this case has gone, it's worth considering the hypothesis that Amanda Knox is a sociopath who is innocent of this particular crime, but suspected nonetheless because of her atypical behavior during the investigation.
The prosecutor does appear to be a hack with an affinity for farfetched conspiracies, but he didn't try that in every case he's touched -- it's reasonable to suspect that something in Knox's interrogation set him down that trail, and one plausible hypothesis is that she wasn't acting the way a neurotypical human being would act in that situation. Indeed, there are plenty of bits of evidence you mentioned to this effect, but you (rightly) treated them as mostly irrelevant to the question of whether she committed the crime. They are, however, good evidence that she's not neurotypical, and Eyes for Lies' analysis further supports that theory.
Replies from: komponisto↑ comment by komponisto · 2010-01-18T22:13:44.581Z · LW(p) · GW(p)
We may need to do some tabooing. My understanding is that "sociopath" is a much narrower category than "not neurotypical"; in particular, I was under the impression that sociopathy involved a lack of empathy. That doesn't appear to characterize Knox from anything else I have come across (there are perhaps one or two anecdotes that you could retrospectively regard as consistent with that assumption, but only if you didn't know anything else -- most information about Knox from her hometown points in the opposite direction).
It doesn't seem to me that there's strong evidence in the other direction, just a low prior of a random person being a sociopath.
Start here, here, here, and here (4:50).
But you may be right in the sense that I may be overestimating P(Guilty|Sociopath).
↑ comment by Kevin · 2010-01-20T10:35:44.108Z · LW(p) · GW(p)
On applying the word liar, I wasn't intending to allude to an existing meme.
First, she was found guilty of trying to implicate Patrick Lumumba in the murder. I understand she did it under duress. I'm not sure whether "told under duress" changes when we can apply the word liar, but I agree that liar is a charged word.
Second, I mean that I am positive she has told at least one lie while on the witness stand. There are many aspects of the defense's story that don't quite make sense. They, like the prosecution, are making up stories about what exactly happened to Meredith Kercher that night. Also, in Italian courts, defendants are legally allowed to lie on the witness stand; she was not expected to tell nothing but the truth during the trial.
Replies from: wedrifid, komponisto↑ comment by wedrifid · 2010-01-20T10:53:55.991Z · LW(p) · GW(p)
Can we please keep discussion of this particular court case in the relevant thread? We really don't need the politics of near mode 'justice' spreading too much into loosely related topics.
Replies from: Kevin↑ comment by Kevin · 2010-01-20T11:03:24.405Z · LW(p) · GW(p)
I was actually going to post about this in the meta-thread until I saw your reply, but I think orthonormal's statement "I don't think this actually came up there" applies for the most part. Let's please not meta-discuss outside of the meta-thread. I would, however, be fine with a moderator moving this entire thread to the Amanda Knox post, but I don't think that's possible.
Edit: Also, discussing why the prosecution and jury and judge believed Knox and Sollecito guilty with absolute certainty seems relevant.
Replies from: wedrifid↑ comment by komponisto · 2010-01-20T14:10:06.831Z · LW(p) · GW(p)
Reply here.