Absolute Authority

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-08T03:33:43.000Z · LW · GW · Legacy · 78 comments

The one comes to you and loftily says: “Science doesn’t really know anything. All you have are theories—you can’t know for certain that you’re right. You scientists changed your minds about how gravity works—who’s to say that tomorrow you won’t change your minds about evolution?”

Behold the abyssal cultural gap. If you think you can cross it in a few sentences, you are bound to be sorely disappointed.

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information. If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.

Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing. All claims are discounted accordingly. If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

When someone has lived their life accustomed to certainty, you can’t just say to them, “Science is probabilistic, just like all other knowledge.” They will accept the first half of the statement as a confession of guilt; and dismiss the second half as a flailing attempt to accuse everyone else to avoid judgment.

You have admitted you are not trustworthy—so begone, Science, and trouble us no more!

One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you’re certain or not.1

But I suspect that the traditional school regimen also has something to do with it. The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test. But when a student makes a suggestion in class, you don’t have to go along with it—you’re free to agree or disagree (it seems) and no one will punish you.

This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law. In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities. There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation. Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference. And Science, since it confesses itself to have a possibility of error, must belong in the second class.

(I note in passing that I see a certain similarity to they who think that if you don’t get an Authoritative probability written on a piece of paper from the teacher in class, or handed down from some similar Unarguable Source, then your uncertainty is not a matter for Bayesian probability theory.2 Someone might—gasp!—argue with your estimate of the prior probability. It thus seems to the not-fully-enlightened ones that Bayesian priors belong to the class of beliefs proposed by students, and not the class of beliefs commanded you by teachers—it is not proper knowledge.)

The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side. Here is someone who believes they have knowledge more reliable than science’s mere probabilistic guesses—such as the guess that the Moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places. And what is this knowledge that the unenlightened ones set above ours, and why? It’s probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week. Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted. They toss around the word “certainty” like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability. “I’m perfect,” they say without a care in the world, “I must be so far above you, who must still struggle to improve yourselves.”

There is nothing simple you can say to them—no fast crushing rebuttal. By thinking carefully, you may be able to win over the audience, if this is a public debate. Unfortunately you cannot just blurt out, “Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.” It’s a difference of life-gestalt that isn’t easy to describe in words at all, let alone quickly.

What might you try, rhetorically, in front of an audience? Hard to say . . . maybe:

  • “The power of science comes from having the ability to change our minds and admit we’re wrong. If you’ve never admitted you’re wrong, it doesn’t mean you’ve made fewer mistakes.”

  • “For technical reasons of probability theory, if it’s theoretically possible for you to change your mind about something, it can’t have a probability exactly equal to one. I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can’t say you’re absolutely certain of the Virgin Birth.”

But, in a way, the more interesting question is what you say to someone not in front of an audience. How do you begin the long process of teaching someone to live in a universe without certainty?

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn’t be certain of anything, it would not deprive you of the ability to make moral or factual distinctions. To paraphrase Lois Bujold, “Don’t push harder, lower the resistance.”

One of the common defenses of Absolute Authority is something I call “The Argument from the Argument from Gray,” which runs like this:

  • Moral relativists say:
      The world isn’t black and white, therefore:
      Everything is gray, therefore:
      No one is better than anyone else, therefore:
      I can do whatever I want and you can’t stop me bwahahaha.

  • But we’ve got to be able to stop people from committing murder.

  • Therefore there has to be some way of being absolutely certain, or the moral relativists win.

Reversed stupidity is not intelligence. You can’t arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion—it gives the fool too much detailed control over you. Every single line must be correct for a mathematical argument to carry. And it doesn’t follow, from the fact that moral relativists say “The world isn’t black and white,” that this is false, any more than it follows, from Stalin’s belief that 2 + 2 = 4, that “2 + 2 = 4” is false. The error (and it only takes one) is in the leap from the two-color view to the single-color view, that all grays are the same shade.

It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact, not something to get all dramatic about.

I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition.

Oh, and: Logical fallacy: Appeal to consequences of belief.

Let’s see, what else do they need to know? Well, there’s the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.

There’s the whole notion of gaining information by looking at things, rather than being proselytized. When you look at things harder, sometimes you find out that they’re different from what you thought they were at first glance; but it doesn’t mean that Nature lied to you, or that you should give up on seeing.

Then there’s the concept of a calibrated confidence—that “probability” isn’t the same concept as the little progress bar in your head that measures your emotional commitment to an idea. It’s more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true. If you take one hundred people and ask them each to make a statement of which they are “absolutely certain,” how many of these statements will be correct? Not one hundred.
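To make “calibration” concrete, here is a minimal sketch (in Python, with invented data) of how one would measure it: group statements by the speaker’s stated confidence and compare each group’s stated confidence to its actual hit rate.

```python
from collections import defaultdict

# Invented log of (stated confidence, whether the claim turned out true).
claims = [
    (0.999, True), (0.999, False), (0.999, True),
    (0.70, True), (0.70, True), (0.70, False),
]

buckets = defaultdict(list)
for stated, correct in claims:
    buckets[stated].append(correct)

# A well-calibrated speaker's hit rate matches the stated confidence.
for stated, results in sorted(buckets.items()):
    hit_rate = sum(results) / len(results)
    print(f"stated {stated:.3f} -> actually true {hit_rate:.2f} of the time")
```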

If anything, the statements that people are really fanatic about are far less likely to be correct than statements like “the Sun is larger than the Moon” that seem too obvious to get excited about. For every statement you can find of which someone is “absolutely certain,” you can probably find someone “absolutely certain” of its opposite, because such fanatic professions of belief do not arise in the absence of opposition. So the little progress bar in people’s heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn’t even behave monotonically.

As for “absolute certainty”—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. This is incredible enough. (It’s amazing to realize we can actually get that level of confidence for “Thou shalt not win the lottery.”) So let us say nothing of probability 1.0. Once you realize you don’t need probabilities of 1.0 to get along in life, you’ll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn’t just certainty, it’s infinite certainty.
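The arithmetic behind “wrong, on average, around once” is just the expected number of errors, N × (1 − p); a quick sketch:

```python
p = 0.999999  # 99.9999% confidence in each statement
N = 10**6     # one million equally strong, independent statements
print(N * (1 - p))  # expected mistakes: ~1
```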

In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying “We are not infinitely certain” rather than “We are not certain.” For the latter case, in ordinary discourse, suggests you know some specific reason for doubt.

1See “Professing and Cheering,” collected in Map and Territory and findable at rationalitybook.com and lesswrong.com/rationality.

2See “Focus Your Uncertainty” in Map and Territory.

78 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by denis_bider · 2008-01-08T04:45:04.000Z · LW(p) · GW(p)

For all your talk about The One, I'm going to start to call you Morpheus.

comment by James_Bach · 2008-01-08T05:08:40.000Z · LW(p) · GW(p)

I wonder what your life must be like. The way you write, it sounds as if you spend a lot of your time trying to convince crazy people (by which I mean most of humanity, of course) to be less crazy and more rational, like us. Why not just ignore them?

Then I looked at your Wikipedia entry and noticed how young you are. Ah! When I was your age, I was also trying to convert everybody. My endless arguments about software development methods, circa 1994, are still in Google's Usenet archive. So, who am I to talk?

(Note: Mostly I write comments that complain about something you say, but please understand that there's a selection bias here. Even though I often find myself thinking "What an interesting way to think about that. Great idea, Eliezer!" I would rather write comments that have some kind of content, and those tend to be the critical ones.)

comment by Sam5 · 2008-01-08T05:22:17.000Z · LW(p) · GW(p)

I really enjoy your deep analysis of topics, but might I suggest writing shorter entries a bit more often?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-08T05:45:53.000Z · LW(p) · GW(p)

Sam, if I write shorter entries, I'll never get everything said.

James: Snort. One of these days I'll do a post on "maturity bias".

comment by Paul_Gowder · 2008-01-08T05:52:01.000Z · LW(p) · GW(p)

Oh Eliezer, why'd you have to toss that parenthetical in about priors? The rest of the post is so wonderful. But the priors thing... hell, for my part, the objection isn't to priors that aren't imposed by some Authority, it's priors that are completely pulled out of one's arse. Demanding something beyond the whim of some metaphorical marble bouncing about in one's brain before one gets to make a probability statement is hardly the same as demanding capital-A-Authority.

comment by Unknown · 2008-01-08T06:12:40.000Z · LW(p) · GW(p)

The main reason people think a probability of 100% is necessary is that they assume that any other probability implies a subjective feeling of doubt, and they are aware that it is impossible to go through life in a continuous state of subjective doubt about whether or not food is necessary to sustain one's life and the like.

Once someone has separated the probability from this subjective feeling, a person can see that a subjective feeling of certainty can be justified in many cases, even though the probability is less than 100%. Once this has been admitted, I think most people would not have a problem with admitting that 100% probabilities are not possible.

Replies from: PetjaY
comment by PetjaY · 2015-02-07T13:34:49.093Z · LW(p) · GW(p)

I would rather say that for normal people certainty is ~90% probability. You can notice this by observing that people who say something is certain aren't willing to act in ways that would cause serious harm if they were wrong.

comment by Grammarian · 2008-01-08T06:18:58.000Z · LW(p) · GW(p)

Eliezer, every time you say "the one" you mean to say "one".

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-04-14T11:19:18.296Z · LW(p) · GW(p)

"One" means "an arbitrary person". "The one" means "the specific person we were just talking about".

comment by Ian_C. · 2008-01-08T06:32:39.000Z · LW(p) · GW(p)

I once thought I had a fast, crushing argument against the existence of God. I would point to various objects around me and ask "What does that do?" e.g. point at a beach ball and they would say "bounce," point at a bird and they would say "sing." And I would triumphantly say, "See, God can't exist!" and they would look at me blankly.

In my mind, every object I had ever seen did its own peculiar thing - that is, it didn't do "just anything." Therefore the idea of omnipotence - the ability to make objects do whatever you please - is contradicted by all the evidence, and therefore God (a supposedly omnipotent being) is too.

What I didn't grasp was that all the evidence of every object they had seen in their entire life wasn't convincing to them(!), as long as they could still imagine a counter-example. They gave their own imagination the same weight as real evidence. So it wasn't a quick argument after all: it would require explaining evidence vs. imagination, and that would lead to another thing, and another, and before you know it, it's an entire website full of articles. So I have to agree with Eliezer: there's no simple way to convey an entire mental framework in short order.

comment by Paul_Crowley2 · 2008-01-08T09:00:31.000Z · LW(p) · GW(p)

Practically all words (e.g. "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless, because everything falls on one side.

comment by Ben_Jones · 2008-01-08T09:45:21.000Z · LW(p) · GW(p)

Denis, you will definitely enjoy this one.

Thinking of science in religious terms makes the whole thing fall over, for everyone. The only way you can have 100% certainty in something is if it's not falsifiable. The only way something can be unfalsifiable is if it is mysterious, ethereal and makes no testable predictions.

My withering rejoinder? "Yes, you may have god. But do you have any knowledge?"

Replies from: christopherj
comment by christopherj · 2013-10-12T22:42:51.150Z · LW(p) · GW(p)

This is an excellent point, an implication that I ought to have deduced myself but totally didn't. This means not only that absolute certainty about reality is impossible to get, but more interestingly that absolute certainty about reality is entirely useless as it can't make specific predictions. Even if it were something like "can't go faster than the speed of light", being absolutely certain of this would mean that "scientists measuring something going faster than the speed of light because of experimental error" would be a valid prediction, along with "it is an illusion/I am crazy". Since neither experimental result would disprove the certain thing, it must follow that the certain thing can't predict the experimental result.

In fact, I think we can claim that the probability that you're sane should be an upper bound on probabilities you're allowed to claim. Thus to claim arbitrarily high probabilities, you'd need an arbitrarily large group of probably sane people who agree (but then what are the odds that you just imagined the group of people who agree with you?). Since you can't be absolutely certain that you and all your group are perfectly sane (along with the possibility of a coincidentally matching mass hallucination), that would make for an upper bound on certainty. In fact the whole group thing would be unnecessary if we admit the possibility that the person we're trying to convince might be insane.
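A minimal sketch of the cap christopherj proposes, with an invented number for the sanity estimate: whatever confidence you compute for a claim, what you may report is bounded by your confidence in your own sanity.

```python
P_SANE = 0.999  # invented: credence that your own reasoning is intact

def reportable(p_claim: float) -> float:
    # The stated rule: the probability you're sane is an upper bound
    # on the probabilities you're allowed to claim.
    return min(p_claim, P_SANE)

print(reportable(1.0))   # -> 0.999, never 1.0
print(reportable(0.42))  # -> 0.42, unaffected below the cap
```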

Next time someone claims absolute certainty about something, I'll ask them to prove that they're not insane. That should take them into neutral territory that they haven't had time to wall up, and if they did consider that they might be insane it would be an even better argument.

comment by Ian_C. · 2008-01-08T12:09:35.000Z · LW(p) · GW(p)

'Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.'

Yes, exactly. The concept of "certainty" as colloquially used has no referents. It is such a strict standard, the only things that could possibly be referents for it are statements made by an omniscient entity. A statement by any lesser entity could be wrong and therefore could not be a referent. We are beating ourselves up over a concept no more valid than "unicorn."

comment by LG · 2008-01-08T14:12:04.000Z · LW(p) · GW(p)

Ian, your God argument doesn't follow:

1) Objects behave in certain, predictable ways
2) God can make objects behave arbitrarily
4) No objects behave arbitrarily
5) There is no God

Hidden argumentation:

3) Therefore, God WILL make things behave arbitrarily

You can't assume that an omnipotent God will behave in any particular way.

comment by Caledonian2 · 2008-01-08T14:24:35.000Z · LW(p) · GW(p)

You can't assume that an omnipotent God will behave in any particular way.

What happens when an immovable object meets an irresistible force?

comment by Michael_Sullivan · 2008-01-08T15:48:50.000Z · LW(p) · GW(p)

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, this is strong evidence to suggest that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things. As you get more complex than balls, the range of options gets wider and wider. For semi-intelligent animals the range is already spectacularly wide, and for sentient creatures, the array of possibility is literally terrifying to behold.

We see so vast a range in our experience of things, and in the behaviors and powers that they have, that it seems doubtful we can circumscribe too closely what some unknown being would be able to do. Now, complete omnipotence poses huge philosophical and mathematical problems not unlike infinite sets or probabilities of 1. Intuitively I can see that the same arguments rendering probabilities of 1 impossible (or at least impossible to prove) would seem to work equally well against total omnipotence.

But what if omnipotence, like the normal use of "certainty", doesn't have to mean the absolute ability to do anything at all, but merely so much power and range of use of power that it can do anything we could practically conceive for it to do? This is probably the sense in which early writers meant to claim that God is all-powerful, but the lack of precision in language tripped them up.

I suggest we don't have any strong evidence to suggest that such a being could never exist. In fact, anyone who doesn't consider interest in a potential singularity a complete load of horse manure must agree with me that it's entirely possible that some of us will either become or create such beings.

In my mind, either this is no argument against religions with omnipotent gods or it's a damning argument against the singularity. Which is it?

comment by Ian_C. · 2008-01-08T17:30:48.000Z · LW(p) · GW(p)

LG - Your objection is only valid if you assume I am starting with the idea of omnipotence and trying to use the evidence to disprove it. In fact, I am starting with the evidence and showing that the idea of omnipotence can't be arrived at without contradiction.

1) Objects behave in certain, predictable ways
2) Therefore the suggestion that someone could make an object behave arbitrarily contradicts the evidence
3) Therefore the idea of "omnipotence" contradicts the evidence
4) Therefore the idea of God contradicts the evidence

It's a different style of reasoning: starting with reality vs. starting with imagination and then using reality only as a test.

comment by Z._M._Davis · 2008-01-08T20:14:06.000Z · LW(p) · GW(p)

Ian, are you arguing that the concept of omnipotence is incoherent, or merely (as Michael seems to have interpreted you) that we have no reason to believe that any omnipotent entity actually exists?

If you really mean the latter, then I suspect most people here will agree with you: if one does not observe any evidence for omnipotence, and one accepts Occam's razor (as reasonable people do), then one concludes that no omnipotent entity exists, unless and until strong evidence to the contrary comes up.

But it remains the case that the idea of omnipotence is compatible with the evidence. The religious can, without logical self-contradiction, claim that God-in-Her-Infinite-Wisdom chooses to make created objects behave in predictable ways. It's true that one would be silly to believe this story: that would be violating Occam's razor, "starting with imagination, and then using reality only as a test"--however you want to phrase it--but it's not contradictory.

If you want to show that an omnipotent entity cannot exist (that P(God-exists) is closer to, say, P(1+1=9) than P(there's-an-invisible-unicorn-following-you)), you have to do a little more work. Fortunately, it's already been done (see Caledonian's comment).

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-03T17:04:46.147Z · LW(p) · GW(p)

When I was growing up in a Baptist church, one of the primary arguments for explaining away all the evidence that suggests the earth is over four billion years old and that the universe is nearly fourteen billion years old was that God made it look that way on purpose. That is, when he said "let there be light!" he didn't just make the stars and let the light take its course (which would take between thousands and billions of years, and some light we see now would never reach us at all), but made the stars with a past history and their light already hitting us. Same with all the geological evidence - God just made it look as though it were really old. So the universe was 6,000 years old, but it looked exactly like it would if it were 14 billion years old.

Ostensibly this was to test our faith. However, after thinking about it for a few years after I left high school, I realized that if any of this were the least bit true, if God really did exist, if he really designed a universe specifically to trick people into believing he didn't exist (it's the only valid reason I can think of for doing it - it's even what the preachers think, though they don't put it that way), and thereby send whole swaths of people to hell for no reason other than that they were trying to find the truth (which the Bible does admonish one to seek), then he has to be the biggest douchebag in the universe.

That's not evidence against the position though. Really there can never be any evidence against their position - it's theological phlogiston, but it does make it very easy to stop accepting God. Once you do that you realize that a god isn't necessary at all, so why would you believe in one? Especially one that is such a vile, evil, spiteful creature?

comment by RBH2 · 2008-01-08T20:18:26.000Z · LW(p) · GW(p)

Here's an example: some time ago I was discussing evolution with a creationist, and was asked "Can you prove it?" I responded that "prove" isn't the appropriate word, but rather scientists gather and evaluate evidence to see what position the evidence most clearly supports. He crowed in jubilation. "Then you don't have any proof!" he exclaimed.

So my response in that situation has changed. I now respond, "Yes, we have the same level of proof that sends people to death row: We've got the DNA!" That's adapted from Sean B. Carroll, author of The Making of the Fittest.

With respect to the first potential response you identify,

"The power of science comes from having the ability to change our minds and admit we're wrong. If you've never admitted you're wrong, it doesn't mean you've made fewer mistakes."
I tend to simplify that to "Yup, that even has a technical name: It's called learning. I commend it to your attention." :)

RBH

comment by Paul_Gowder · 2008-01-08T22:21:40.000Z · LW(p) · GW(p)

Ian, your argument fails not merely because premise 1 isn't established apodictically. (Which is the flaw of inductive reasoning generally, but which, as Eliezer tries to point out to the religious, doesn't mean we don't have good reason to believe it.)

It also fails because we have counterexamples up the wazoo. Michael's point about sentient creatures is one of them. But we can generate a lot of others just by diddling around the space in which we define "objects." Balls bounce and roll, bowling balls just roll, spherical objects generally do all sorts of crazy things. So the "spherical things" case is a counterexample too, just so far as you define the class of objects in such a way that spherical things count as objects.

You get a one-to-one mapping of object to function only by defining the objects on the functions, by picking as your object a uni-function (or few-function) idea like "ball." So your argument is actually circular in a sense.

comment by g · 2008-01-08T23:47:38.000Z · LW(p) · GW(p)

Eliezer's use of "the one" is not an error or a Matrix reference, it's a deliberate echo of an ancient rabbinical trope. (Right, Eliezer?)

comment by poke · 2008-01-09T00:49:47.000Z · LW(p) · GW(p)

I think Ian makes an important point: people give their ability to imagine something the same weight as evidence. The most gratuitous example of this, relevant here because it's the impetus for inductive probabilism, is the so-called "problem of induction." Say we have two laws concerning the future evolution of some system, call them L1 and L2, such that at some future time t, L2(t) gives a result that is defined only as being NOT the result given by L1(t). L1 is based on observation. L2 represents my ability to imagine that my observations will fail to hold at some future time t. The problem of induction is a result of giving MORE weight to L2 than L1.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-09T02:47:11.000Z · LW(p) · GW(p)

Actually, I didn't realize "the one comes to us and says" was a rabbinical borrowing until it was pointed out to me. But it seems to have the right tone, and it's syntactical; I care not whether it is grammatical.

comment by Paul_Gowder · 2008-01-09T02:58:14.000Z · LW(p) · GW(p)

Poke, that's a really unhelpful way of thinking about the problem of induction. The problem of induction is a problem of logic in the first instance -- a description of the fact that we do have absolute knowledge of the truth of deductive arguments (conditional on the premises being true) but we don't have absolute knowledge of the truth of inductive arguments. And that's just because the conclusion of a deductive argument is (in some sense) contained in the premises, whereas the conclusion of a generalization isn't contained in the individual observations. What's contained in the individual observations (putting on social scientist hat here) is a probability, given one's underlying distribution, of finding data like what you found if the world is a certain way.

That's a real distinction -- it doesn't come from somehow giving weight to imaginary possibilities, it reflects the difference between logical truth (which IS absolute) and empirical truth (which is not).

comment by Ian_C. · 2008-01-09T07:58:40.000Z · LW(p) · GW(p)

Michael: "Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things." Paul: "It also fails because we have counterexamples up the wazoo."

But even if an object behaves in thousands of ways, it is still behaving in those ways and only those ways. If we want to work with it, we must follow cause and effect; we can't simply will it to do what we want. That is the case for all objects I know of; there are no counter-examples.

Z. M. Davis: "are you arguing that the concept of omnipotence is incoherent, or merely [...] that we have no reason to believe that any omnipotent entity actually exists?"

I am arguing that observation proves that omnipotence is impossible. If object behavior is determined by the kind of thing it is (which appears to be the case, since the same kinds of things act the same), then it is not determined by anything else, such as the will of an external agent.

"The religious can, without logical self-contradiction, claim that God-in-Her-Infinite-Wisdom chooses to make created objects behave in predictable ways."

Their claim that object behavior is determined by God contradicts the observation that it is determined by what kind of thing an object is.

p.s. I think this discussion has gotten a little OT (sorry Eliezer)

comment by poke · 2008-01-09T23:33:41.000Z · LW(p) · GW(p)

Paul Gowder,

I think your response is too general. How does the problem of induction being a deductive argument make the conclusion any less absurd? It's a deductive argument that takes as its premise my ability to imagine something being otherwise. That makes sense if you're an Empiricist philosopher, since you accept an Empiricist psychology a priori, but not a lot of sense if you're a scientist or committed to naturalism. Further, the difference you cite between deductive and inductive arguments (that the former is certain and the latter not) is the conclusion of the problem of induction; you can't use it to argue for the problem of induction.

comment by gutzperson · 2008-01-10T08:40:30.000Z · LW(p) · GW(p)

Really like your article. Thanks

comment by Paul_Gowder · 2008-01-10T09:07:13.000Z · LW(p) · GW(p)

Poke: let's attack the problem a different way. You seem to want to cast doubt on the difference along the dimension of certainty between induction and deduction. ("the difference you cite between deductive and inductive arguments (that the former is certain and the latter not), is the conclusion of the problem of induction; you can't use it to argue for the problem of induction")

Either deduction and induction are different along the dimension of certainty, or they're not. So there are four possibilities: induction = certain, deduction = certain (IC, DC); InC, DnC; IC, DnC; and InC, DC.

Surely, you don't agree that induction gives us certain knowledge. The "imagination-based" story: the fact that the coin came up heads the last three million times gives us a very high probability for the proposition that the coin is loaded, but not certainty. But you've rejected the "imagination-based" story. I'm fine with that. Because there are real stories. Countless real stories. Every time one scientist repeats another scientist's experiment and gets a different result, it's a demonstration of the fact that inductive knowledge isn't certain: the first scientist validly drew a conclusion from induction as a result of his/her experiments (do you disagree with that??), and the second scientist showed that the conclusion was wrong or at least incomplete. Ergo, induction doesn't give us certain knowledge.

That eliminates two possibilities, leaving us with InC, DnC and InC, DC. The following is a deductive argument. "1. A. 2. A-->B. 3. B." Assume 1 and 2 are true. Do you think we thereby have certain knowledge that B? If so, you seem to be committed to DC, and thereby to a difference between induction and deduction on the domain of certainty.

(Heavens... the things I do rather than sleep.)

comment by Manon_de_Gaillande2 · 2008-01-13T10:32:55.000Z · LW(p) · GW(p)

Ian C: What about a universal Turing machine?

comment by Kragen_Javier_Sitaker · 2008-02-14T13:14:02.000Z · LW(p) · GW(p)

Maybe you should try telling some parables about people who thought they had certain knowledge. Maybe some of them should include other people who did not think their knowledge was certain.

comment by Amaroq · 2009-07-05T07:36:31.274Z · LW(p) · GW(p)

I cannot accept that Probability must be applied to everything, which of course indirectly states that there are no absolutes, since probability has no 0 or 1.

If you discard absolutes, you must be willing to accept mysticism and contradictions.

I can create a long list of false or contradictory statements, and anyone who lives by probabilities must obediently tell me that every one of them is possible.

  • "Does God exist?" "Probably not, but it's possible."

  • "Can he create a boulder that he cannot lift?" "Probably not, but it's possible."

  • "If God dropped that boulder on me, would I survive?" "Probably not, but it's possible."

  • "Can he lift that boulder that he created to be unliftable by him?" "I dunno. It's possible."

  • "Do I exist?" "Probably, but you might not."

  • "Does existence exist?" "It probably does, but it might not."

  • "Can probability really have 0% or 100%?" "Probably not, but it might be possible..."

Besides, "There are no absolutes" is a statement that invokes an absolute to claim that there are none. It contradicts itself. It is an example of the fallacy of the stolen concept.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2009-07-05T12:12:25.045Z · LW(p) · GW(p)

Not quite so. There are a lot of nearly-impossible things, and you do well to call them "impossible", even if technically they aren't. Likewise, some things are so certain that you do well to call them "absolutely certain" even if technically they aren't. See possibility, antiprediction, fallacy of gray, technical explanation, absolute certainty.

Replies from: Amaroq
comment by Amaroq · 2009-07-05T15:20:56.518Z · LW(p) · GW(p)

Ah, but you see, I was arguing at the technical level, not on the "it's good to call it this" level.

I believe that absolute certainty is required. Not in all, and probably not even in most things. But absolute certainty has to be possible, because without it, I must give technical possibility to self-contradicting statements like this one:

"God exists, he is omniscient, infallible, and he can make a boulder that he cannot lift."

Can you tell me that all the pieces of that statement are technically possible?

P.S. I don't think I commit the fallacy of gray. I accept that there are varying shades of gray. But I believe that there must be a black and a white as well. I also apologize if I seem aggressive. I don't read or post much here. Only when I see something I believe is wrong because I, like you, want there to be less wrong.

P.P.S. I am a beginner Objectivist, so my acceptance of black, white, and gray may be subject to change as I learn more.

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-03T17:24:25.465Z · LW(p) · GW(p)

"God exists, he is omniscient, infallible, and he can make a boulder that he cannot lift."

You screwed that up, there is no contradiction there. God must too be omnipotent to make the argument you are looking for.

And really, it's not a contradiction anyway. If he is all-powerful, then certainly he has the ability to make a rock that he cannot lift if he so chooses. But, since he is all-powerful, he can just as easily make that rock liftable again.

When you are given an absurd premise, absurd outcomes are logical.

Deductive conclusions are only absolute certainties relative to their premises. That is, the conclusion can never be more certain than the premises; in fact, it will be at least as uncertain as the uncertainty of both premises combined. Since the premises can never be certain, the result of deductive reasoning is never certain either, only valid or invalid.

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-03T22:55:30.798Z · LW(p) · GW(p)

I'm curious why this was voted down. If it was because my language was a little harsh, I assure you I did not mean to offend, I simply meant he made a mistake in the wording of the argument - omniscient means "all-knowing", omnipotent means "all-powerful". I'd be surprised to be voted down even though I was right on this matter.

If there is a problem with my reasoning after that, please do point it out to me rather than just voting me down. I'm new to Bayes, and as the Amanda Knox Test demonstrated, I often fail at reasoning. If this is such a case I would very much like to know about it. I can't see where I made the mistake though.

[Edit to change the Amanda Knox link to the original, instead of the spoiler]

Replies from: wedrifid, HonoreDB, Dorikka
comment by wedrifid · 2011-03-04T01:07:47.321Z · LW(p) · GW(p)

God must too be omnipotent to make the argument you are looking for.

Your reasoning is correct. The below quote is not self-contradictory. You may consider substituting the 'too' with 'also' or moving the word order around to make the sentence flow better. When you are saying things forcefully as with "you screwed that up" it pays to be extra careful with wording - higher standards are expected.

"God exists, he is omniscient, infallible, and he can make a boulder that he cannot lift."

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-04T03:57:22.615Z · LW(p) · GW(p)

I was a little careless with "you screwed that up"; I honestly did not intend for it to sound mean, and I could have chosen better words. I simply meant he obviously intended to use the word omnipotent instead of omniscient.

Regarding the word too, however, I completely disagree. That is a valid use of the word, unconventional sure, but valid. I've always enjoyed seeing it employed in such a manner.

[Edit] Maybe putting "too" before "must" would sound a little nicer to some, but I liked the way "God must too" sounded in my head.

Replies from: wedrifid, komponisto
comment by wedrifid · 2011-03-04T06:10:21.737Z · LW(p) · GW(p)

Regarding the word too, however, I completely disagree. That is a valid use of the word, unconventional sure, but valid. I've always enjoyed seeing it employed in such a manner.

You were curious as to why you were downvoted. That wording would, I predict, have been a contributing factor. Wording significantly influences tone. That wording came across as more petulant or crude as a follow up to 'screwed up' than an alternative would have.

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-04T16:48:55.984Z · LW(p) · GW(p)

I still don't see it as a very good reason for a down vote when nothing in the post is considered incorrect.

I expect not to be up voted if I'm being rude and technically correct, but I don't expect to be down voted. Usually when I'm down voted it is because I'm either factually wrong or I've failed at reasoning. Getting down voted for a phrasing that someone considers a little rude seems odd on this particular website. And honestly, I was not intending to be rude in any way, it is a common phrase when someone makes a mistake. I did not intend to imply anything other than the fact that he used the wrong word in his paradox.

In any case, the points aren't a big deal, and someone corrected it anyway. I was just curious if I had made a mistake, because I didn't see one even after looking over what I wrote a second and third time.

Replies from: thomblake
comment by thomblake · 2011-03-04T19:56:37.140Z · LW(p) · GW(p)

Downvotes for rudeness are pretty common, especially after Defecting by Accident.

comment by komponisto · 2011-03-04T16:57:29.898Z · LW(p) · GW(p)

Maybe putting "too" before "must" would sound a little nicer to some, but I liked the way "God must too" sounded in my head.

The order affects the meaning: "must too" doesn't mean "must also"; it means "on the contrary, must!" (Cf. "did too!") I don't think that's the meaning you wanted here.

Replies from: bigjeff5
comment by bigjeff5 · 2012-01-02T01:26:49.381Z · LW(p) · GW(p)

Just noticed this comment when I was looking through my messages for an old comment, and I wanted to respond.

It is the word "too" that is important there, and the usage you describe is only used as an affirmative for contradicting a negative statement (at least, that's proper grammar anyway).

For example, if the original statement had been "God must not make a boulder he cannot lift!" and I had responded with "God must too make a boulder he cannot lift!" you would be right, but the original statement is an affirmative statement ("God can make a boulder he cannot lift."), my own sentence before it is an affirmative (in the grammatical sense - not so much in the "uplifting" sense), so trying to contradict either with an affirmative doesn't make any sense.

Also, I did a Google search, and while using "too" between must and another verb is not common, using "must too" to mean "must also" is by far the most common usage I could find. I do admit that other combinations of verb "too" verb seem to imply contradicting a negation even without the proper context, so that usage is definitely not as clear as I originally thought it would be. I still think it's pretty, though.

comment by HonoreDB · 2011-03-04T01:38:17.391Z · LW(p) · GW(p)

Are omnipotence and omniscience logically distinct? One can "know how to do something" or "be able to learn something."

Replies from: CuSithBell, wedrifid
comment by CuSithBell · 2011-03-04T02:10:34.400Z · LW(p) · GW(p)

Under most conceptions, omnipotence certainly entails at least the ability to become omniscient. It doesn't work the other way - knowing how to shoot a three-point shot in basketball doesn't help an omniscient cantaloupe.

Replies from: HonoreDB
comment by HonoreDB · 2011-03-04T03:11:15.222Z · LW(p) · GW(p)

You don't think it could think its way out of the box? Is causally discrete omniscience really omniscience?

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-04T04:49:13.033Z · LW(p) · GW(p)

If you are going to take the premise that information is the substance and causation of all that exists, then yes, an omniscient being must also be omnipotent. You need that premise first, though, or the omniscient is simply a know-it-all (literally). If no condition exists to change its lack of omnipotence given its current abilities, then no amount of knowledge will allow it to become omnipotent.

Omnipotence does not necessarily imply the knowledge necessary to create omniscience, either. The ability is certainly there, but the knowledge may not be. I'm sure if the omnipotent being were clever it could figure out a way to make it happen, though.

Usually when someone dreams up an all powerful being, they make it all knowing as a matter of course, and vice versa. At least they do these days, anyway. The Greeks liked their gods to have serious flaws, and I can appreciate that.

comment by wedrifid · 2011-03-04T07:00:18.433Z · LW(p) · GW(p)

Are omnipotence and omniscience logically distinct? One can "know how to do something" or "be able to learn something."

Yes, they are distinct. One can "know it is impossible to do something", for example.

comment by Dorikka · 2011-03-04T04:42:15.928Z · LW(p) · GW(p)

Please tell us when you are posting a spoiler for a rationality exercise. I clicked through your link and didn't catch that it was a spoiler fast enough for the exercise itself not to be spoiled.

Replies from: bigjeff5
comment by bigjeff5 · 2011-03-04T16:36:49.839Z · LW(p) · GW(p)

I'm very sorry, I didn't consider that. I actually got to the original Amanda Knox post through the spoiler, but I stopped reading at the mention of the original and went straight to that one first.

I'll change the link so it doesn't trip anybody else up.

comment by wedrifid · 2009-07-05T12:38:34.943Z · LW(p) · GW(p)

Leave the math alone; redefine 'possible' to match your preferred meaning if you must.

comment by David_Gerard · 2011-01-19T12:37:28.950Z · LW(p) · GW(p)

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information.

This is pretty much the standard argument against Wikipedia. It fails to address the question of "what's it for?"

comment by kaz · 2011-08-18T21:52:55.150Z · LW(p) · GW(p)

I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.

I think that latter statement is equivalent to this:

V = Virgin Birth
G = God appears and proclaims ~V

P(V|G) < 1
∴ P(V) < 1

But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
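Writing this out with the law of total probability (using V and G as defined above) makes the dependence on P(G) explicit:

P(V) = P(V|G)·P(G) + P(V|~G)·(1 − P(G))

If P(G) = 0, the first term vanishes regardless of P(V|G), leaving P(V) = P(V|~G), which can still be 1.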

Replies from: Technoguyrob
comment by robertzk (Technoguyrob) · 2011-12-18T08:07:15.795Z · LW(p) · GW(p)

Go a little farther. Let G(X) = God appears and proclaims X. For religions with acknowledgment of divine revelation, which is all major religions, P(G(X)) has been non-zero for certain X (people have received revelation directly from God). Indeed, granting ultimate authority to God, again a factor of all major religions, means that 0 < P(G(X)) < 1 for all X (granting that there is a statement X such that humans know God will not appear and proclaim X is removing ultimate authority from God and assigning part of it to humans--by the way, we can assume the space of X's is countable so there is no problem with summing to 1). So it is not internally consistent to assume, in particular, that P(G(~V)) = 0, without abandoning ultimate authority to God (or probability theory as a way of reasoning about this stuff, as most religions opt to do).

Of course the more productive question is what evolutionary mechanisms allowed human brain architecture to get so off-par with reality while remaining productive from a Darwinian point of view. Some would argue that the potential to be so absurdly wrong is what gives brains their computational power in the first place! Bounded rationality under physical constraints is a very active area of research.

comment by ike · 2014-08-03T04:53:44.656Z · LW(p) · GW(p)

For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one.

This is supposed to be an argument against giving anything a 100% probability. I do agree with the concept, but this particular argument seems wrong. It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know). However, Bayes' rule doesn't just imply that "having a chance of changing your mind" -> "you are not 100% certain"; it also gives us bounds on what posteriors we can have. If we evaluate a 5% chance of changing our minds on something, that would seem to imply that we cannot put a >95% confidence in our original claim.

So, the reason I reject this is as follows:

EY lays out possible evidence for 2+2=3 here. Imagine you believe at the 50% level that someone will cause you to view that evidence tomorrow. Hypnosis, or some other method. Applying Bayes' rule the way EY seems to be applying it here, you should evaluate right now at most a 50% chance that 2+2=4. I think the rational thing to do in that situation (where putting the earplugs together does in fact show 2+2 equaling 4) is to believe that 2+2=4, with around the same confidence as you have now. Therefore, there is something wrong with this line of reasoning.

If anyone can point to what I'm doing wrong, or thinks that in the situation I outlined, the rational thing to do is to evaluate a 50% or lower chance of 2+2=4, I'd like to hear about it.
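A minimal sketch (in Python) of the bound ike describes, assuming that "changing your mind" means your credence in the claim drops to roughly zero in that branch:

```python
# Law of total probability over M = "I change my mind tomorrow".
p_M = 0.05             # estimated chance of changing one's mind
p_A_given_M = 0.0      # assumption: after changing your mind, credence ~0
p_A_given_not_M = 1.0  # the most favorable case otherwise

p_A = p_M * p_A_given_M + (1 - p_M) * p_A_given_not_M
print(p_A)  # -> 0.95: today's credence can be at most 95%
```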

Replies from: Jiro, Wes_W, Wes_W
comment by Jiro · 2014-08-03T14:45:56.318Z · LW(p) · GW(p)

Not everything that changes your mind is evidence within the meaning of Conservation of Expected Evidence. If there's a 50% chance you will believe X tomorrow, but that situation involves believing X because you're hypnotized, that's not evidence at all and you should not change your current beliefs based on that.

Replies from: ike
comment by ike · 2014-08-03T20:50:08.423Z · LW(p) · GW(p)

So then, moving on to the argument that "because I might believe 2+2=3 tomorrow (albeit very unlikely), I can't believe 2+2=4 100% today".

If Omega tells you that tomorrow you will believe that 2+2=3, most of your probability mass is concentrated in the possibility that 2+2=4, but you'll be somehow fooled, perhaps by hypnosis or nano-editing of your brain. Very little if any probability mass is for the theory that 2+2 really equals 3, and you'll have the major revelation tomorrow. In order to use this thought experiment to show that I don't have 100% confidence in 2+2=4, you need to assert that the second probability exists; however, the thought experiment is also consistent with the first probability being high or one and the second being zero (you can't assume I agree that zero is not a probability, or you're begging the question).

comment by Wes_W · 2014-08-03T19:12:07.960Z · LW(p) · GW(p)

EY lays out possible evidence for 2+2=3 here. Imagine you believe at the 50% level that someone will cause you to view that evidence tomorrow. Hypnosis, or some other method. Applying Bayes' rule the way EY seems to be applying it here, you should evaluate right now at most a 50% chance that 2+2=4. I think the rational thing to do in that situation (where putting the earplugs together does in fact show 2+2 equaling 4) is to believe that 2+2=4, with around the same confidence as you have now.

Why do you think that is the correct thing to do in that situation?

Here, in this real situation, yes you should trust your current counting abilities. But if you believe with 50% confidence that, within 24 hours, someone will be able to convince you that your ability to count is fundamentally compromised, you also don't place a high level of confidence on your ability to count things correctly - no more than 50%, in fact.

"I can count correctly" and "[someone can demonstrate to me that] I'm counting incorrectly" are mutually exclusive hypotheses. Your confidence in the two ought not to add up to more than 1.

Replies from: ike, CCC
comment by ike · 2014-08-03T21:23:08.160Z · LW(p) · GW(p)

If I know that I'll actually experience that scenario tomorrow, where I wake up and have all available evidence showing that 2+2=3 but now I still visualize XX+XX=XXXX, then I trust my current vast mound of evidence over a future smaller weird mound of evidence. I'm not evaluating "what will I think 2+2= tomorrow?" (as EY points out elsewhere, this kind of question is not too useful). I'm evaluating "what is 2+2?" For that, it seems irrational to trust future evidence when I might be in an unknown state of mind. The sentence EY has repeated, "Those who dream do not know they dream; but when you wake you know you are awake," seems appropriate here. Just knowing that I will be convinced, by whatever means, is not the same as actually convincing me. What if they hack your mind and insert false memories? If you knew someone would do that tomorrow, would you think that the future memories actually happened in your past?

If you're trying to make the argument that "since someone can fool me later, I can be fooled now and wouldn't notice", well, first of all, that doesn't seem to be the argument EY is making. Second, I might have to be in such a situation to be precise, but I'd expect the future that I am being fooled in would have to delete the memory of this sequence of posts (specifically the 2+2=3 post, and this series of comments). The fact that I remember seems to point to the editing/hacking not happening yet.

After thinking of this I see that an intruder would just change all the references from 2+2=4 to 2+2=3 and vice versa, leaving me with the same logic to justify my belief in 2+2=3. So that didn't work.

How about this: once I have to consider my thought processes hacked, I can't unwind past that anyway, so to keep sane I'll have to assume my current thoughts are not corrupted.

Replies from: Wes_W
comment by Wes_W · 2014-08-04T08:25:16.290Z · LW(p) · GW(p)

I think deception should be treated as a special case, here. Normally, P(X | a seemingly correct argument for X) is pretty high. When you specifically expect deception, this is no longer true.

I'm not sure it's useful to consider "what if they hack your mind" in this kind of conversation. Getting hacked isn't a Bayesian update, and hallucinations do not constitute evidence.

Replies from: ike
comment by ike · 2014-08-04T10:43:26.758Z · LW(p) · GW(p)

If there was a way to differentiate hallucinations from real vision, then I'd agree, but there isn't.

Anyway, I thought of a (seemingly) knockdown argument for not believing future selves: what if you currently believe at 50% that tomorrow you'll be convinced of 2+2=3, the next day 2+2=5, and the next day 2+2=6? (And that it only has one answer.) If you just blindly took those as minimums, then your total probability mass would be at least 150%. Therefore, you can only trust your current self.
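The arithmetic of that knockdown, sketched out: the three anticipated convictions are mutually exclusive (2+2 has only one value), so coherent probabilities for them must sum to at most 1.

```python
floors = {"2+2=3": 0.5, "2+2=5": 0.5, "2+2=6": 0.5}
total = sum(floors.values())
print(total)  # -> 1.5, which exceeds 1: the three 50% forecasts
              # cannot all be floors on one coherent credence
```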

Replies from: Wes_W
comment by Wes_W · 2014-08-04T15:31:41.267Z · LW(p) · GW(p)

If there was a way to differentiate hallucinations from real vision, then I'd agree, but there isn't.

Sure, but that is a different problem than what I'm talking about. Expecting to hallucinate is different than expecting to receive evidence. If you expect to be actually convinced, you ought to update now. If you expect to be "convinced" by hallucination, I don't think any update is required.

Framing the 2+2=3 thing as being about deception is, IMO, failing to engage with the premise of the argument.

Anyway, I thought of a (seemingly) knockdown argument for not believing future selves: what if you currently believe at 50% that tomorrow you'll be convinced of 2+2=3, the next day 2+2=5, and the next day 2+2=6?

I would be very confused, and very worried about my ability to separate truth from untruth. In that state, I wouldn't feel very good about trusting my current self, either.

comment by CCC · 2014-08-04T10:10:11.134Z · LW(p) · GW(p)

"I can count correctly" and "[someone can demonstrate to me that] I'm counting incorrectly" are mutually exclusive hypotheses. Your confidence in the two ought not to add up to more than 1.

Not entirely. It is possible that someone may be able to provide a convincing demonstration of an untrue fact, either due to deliberate deception, or due to an extremely unlikely series of coincidences, or due to the person giving the demonstration genuinely but incorrectly thinking that what they are demonstrating is true.

So, there is some small possibility that I am counting correctly and someone can demonstrate to me that I am not counting correctly. The size of this possibility depends, among other things, on how easily I can be persuaded.

comment by Wes_W · 2014-08-05T04:13:59.281Z · LW(p) · GW(p)

It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know).

By the way, separate from our conversation downthread, I don't think that is the technical reason being referred to. Or at least, it's a rather indirect way of proving that point.

Bayes' Theorem is P(A|B) = P(B|A)P(A)/P(B).

If P(A) = 0, then P(A|B) = P(B|A)*0/P(B) = 0 as well, no matter what P(B|A) and P(B) are. Or in words: if you start with credence exactly zero in some proposition, it is impossible for any piece of evidence to make you update away from that. By the contrapositive, if it is not impossible for you to update away from your original opinion ("change your mind"), your credence is nonzero.

A similar argument holds for probability 1, which should be unsurprising, since P(A) = 1 is equivalent to P(~A) = 0.
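A minimal sketch of that argument in code, with made-up likelihood and marginal numbers; `bayes_update` is just the formula above, not a standard library function:

```python
def bayes_update(prior, likelihood, marginal):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / marginal

# A prior of exactly zero cannot be moved by any evidence:
print(bayes_update(prior=0.0, likelihood=0.999, marginal=0.001))   # 0.0
# A merely tiny prior can be:
print(bayes_update(prior=1e-9, likelihood=0.999, marginal=0.001))  # ~1e-6
```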

Replies from: ike
comment by ike · 2014-08-07T15:00:22.162Z · LW(p) · GW(p)

The problem with this argument is that it assumes that evidence is not altered. What I mean is that Bayesian updating implicitly assumes that all evidence previously used is included in the new calculation, and the new evidence is a strict superset of the old one. However, suppose I hypothetically assign 100% to any math fact "simple enough" that I can verify it mentally in under a minute (to choose an arbitrary threshold). So today, when I'm visualizing 2+2=4, I can say that I put 100% confidence on the claim "2+2=4".

Now, is this contradicted by the fact that tomorrow I will see new evidence, causing me to conclude that 2+2=3? No. It's not just that I'll see new evidence later; my current evidence is itself being changed. Right now, the evidence consists of actual brain operations that visualize 2+2. Tomorrow, that evidence will be in the form of memories of brain operations. If I live in a possible world where only memories can be edited, and not actual running brain processes, then tomorrow I will conclude that my memories of today were faked. That is not something I can conclude today, because I can repeat the visualization at any time. (One minute afterward I might be relying on memories, but at the time, I'm not.)

Replies from: Wes_W
comment by Wes_W · 2014-08-07T19:14:25.308Z · LW(p) · GW(p)

However, suppose I hypothetically assign 100% to any math fact "simple enough" that I can verify it mentally in under a minute (to choose an arbitrary threshold).

That isn't a valid operation.

For one: assigning 100% confidence to your ability to correctly do something on which you do not, historically, have a 100% track record is quite unwise. You probably aren't even 1-10^-6 reliable, historically, and that would still be infinitely far short of 100%. But it's a toy hypothetical, so realism isn't the primary objection.

More importantly, we don't get to arbitrarily assign probabilities.

The problem with this argument is that it assumes that evidence is not altered. What I mean is that Bayesian updating implicitly assumes that all evidence previously used is included in the new calculation, and the new evidence is a strict superset of the old one.

Bayes' Theorem, as the name implies, is a theorem. It does not assume anything about evidence; it doesn't even mention evidence. It talks strictly about probabilities. All this "evidence" stuff is a high-level natural-language abstraction of what the probabilities "mean" - the math itself is a reduction of the concept of evidence. It only assumes some axioms of probability; you may attempt to dispute those if you like, but that would be a very different conversation.

And, because Bayes' Theorem is a theorem, assigning 100% confidence to any proposition of which you could in principle ever cease to have 100% confidence is strictly, provably an error.

The special case of reasoning while unable to trust your own sanity requires lots of conditions that are usually negligible. For example, P(X happened | I remember that X happened) is usually pretty close to 1; for most purposes we can ignore it and pretend "X happened" and "I remember X happened" are the same thing. But if you suspect your memories have been altered, this is no longer true, so you'll have that extra factor in certain calculations.

Nothing that you are describing is outside the domain of the relevant math. It's just weird corner cases.
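A small sketch of that extra factor, with illustrative numbers (the function name `p_happened_given_memory` is mine): under ordinary memory reliability the factor is close to 1, but once false memories become plausible it drops sharply.

```python
def p_happened_given_memory(p_x, p_mem_given_x, p_mem_given_not_x):
    """P(X happened | I remember X), by Bayes' theorem."""
    p_mem = p_mem_given_x * p_x + p_mem_given_not_x * (1 - p_x)
    return p_mem_given_x * p_x / p_mem

# Ordinary conditions: false memories are rare, so the factor is ~1.
print(p_happened_given_memory(0.5, 0.95, 0.001))  # ~0.999
# Suspected tampering: false memories are likely, and the factor drops.
print(p_happened_given_memory(0.5, 0.95, 0.5))    # ~0.655
```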

Replies from: ike
comment by ike · 2014-08-07T20:13:46.231Z · LW(p) · GW(p)

Why can't "X happened" be infinite evidence for X, while "I remember that X happened" is only finite evidence?

Bayes' theorem applies, but because of these special cases it's not being applied accurately.

And, because Bayes' Theorem is a theorem, assigning 100% confidence to any proposition of which you could in principle ever cease to have 100% confidence is strictly, provably an error.

Define "you" and "ever". I argue that the "you" who changes their mind tomorrow is not the same observer as the one who decides with 100% probability today, because the one today has information that the one tomorrow doesn't: namely, actual brain operations, versus mere memories for tomorrow's you.

I could in principle be convinced that my 100% assessment is wrong: by removing or editing the evidence. But that is not Bayesian updating; it's brain editing, followed by a Bayesian update on other evidence.

You're equating today's me with tomorrow's me, and you can't do that unless all my current evidence will still be there tomorrow.

Why didn't EY use a hypothetical other race (the 223ers), who think that everything is evidence for 2+2=3, as his example? Because we need the same person (or observer; is there a technical term for the thing-doing-the-assessing?) to change their mind. I assert that if memory can't be trusted, tomorrow's observer doesn't count as the "same" one, so Bayes' theorem can't be applied straightforwardly.

Replies from: Wes_W
comment by Wes_W · 2014-08-08T08:06:01.773Z · LW(p) · GW(p)

You could consider a proposition to be infinite evidence for itself, I guess. That seems like maybe a kinda defensible interpretation of P(A|A) = 1. I don't think it gets you anything useful, though.

Define "you" and "ever". I argue that the "you" who changes their mind tomorrow is not the same observer as the one who decides with 100% probability today, because the one today has information that the one tomorrow doesn't: namely, actual brain operations, versus mere memories for tomorrow's you.

[∃ B: P(A|B) ∈ (0,1)] → [P(A) ∈ (0,1)]. Better?

If, having made them, your own probability assessments are meaningless and unusable, who cares what values you assign? Set P(A) = 328+5i and P(B) = octopus, for all it matters.

Additionally, I'm not sure it matters when the mind-changing actually occurs. At the instant of assignment, your mind as it is right that moment should already have a value for P(A|B) - how you would counterfactually update on the evidence is already a fact about you. If you would, counterfactually assuming your current mind operates without interference until it can see and process that evidence, update to some credence other than 1, it is already at that moment incorrect to assign a credence of 1. Whether that chain of events does in fact end up happening won't retroactively make you right or wrong; it was already the right or wrong choice when you made it.

Or, if you get mind-hacked, your choice might be totally moot. But this is generally a poor excuse to deliberately make bad choices.

Replies from: ike
comment by ike · 2014-08-08T11:49:52.685Z · LW(p) · GW(p)

[∃ B: P(A|B) ∈ (0,1)] → [P(A) ∈ (0,1)]. Better?

Yes, it makes it clearer what you're doing wrong. I'll do what I should have done earlier, and formalize my argument:

Let's call "2+2=4" A, "2+2=3" B, "I can visualize 2+2=4" C, "I can visualize 2+2=3" D, "I can remember visualizing 2+2=4" E, "I can remember visualizing 2+2=3" F.

So, my claim is that P(A|C) is 1, and likewise P(B|D). (Remember, I don't think it's like this in real life; I'm trying to show that the argument put forward against such assignments is not sufficient.)

What is the Bayes formula for tomorrow's assessment?

Not P(A|C,D), which (if < 1) would indeed disprove P(A|C)=1.

But rather P(A|E,D), which can be less than 1 while P(A|C)=1. I'll make up some arbitrary numbers as priors to show that.

I'm assuming A and B are mutually exclusive, as are C and D.

P(A)=.75

P(B)=.25 (just assume that it's either 2 or 3)

P(C)=.375

P(D)=.125

P(memory of X | X happened yesterday)=.95

P(memory of X | X didn't happen yesterday)=.001

P(E)=P(C)*.95+P(~C)*.001=0.356875

P(F)= P(D)*.95+P(~D)*.001=0.119625

P(C|A)=.50

P(C|B)=0

P(D|A)=0

P(D|B)=.50

P(A|C) = P(C|A)P(A)/P(C)=(.50*.75)/(.75*.50+.25*0)=1

P(A|C,D) is undefined, because C and D are mutually exclusive (which corresponds to not being able to visualize both 2+2=3 and 2+2=4 at the same time)

P(E,D)=P(D)*.001=0.000125 (since D excludes C, any memory of C is a false memory)

P(A|E,D) = P(E,D|A)P(A)/P(E,D) = 0 (because P(D|A)=0, so P(E,D|A)=0)

Using my numbers, you need to derive a mathematical contradiction if there truly are "technical reasons" why this is impossible.

The mistake you (and EY) are making is that you're not comparing P(A) to P(A|B) for some A,B, but P(A|B) to P(A|C) for some A,B,C.

Added: I made two minor errors in definitions, which have been corrected. E and F are not exclusive, and C and D shouldn't be defined as "current", but rather as having happened, which can only be confirmed definitely if they are current. They carry the same evidential power whenever they happened; it's just that if they didn't happen now, they're devalued because memory is fragile.

Added: Fixed a numerical error, and an F where it was supposed to be an E. (Also errors in evaluating E and F. I really should not have assumed values that I could have calculated from values I had already assumed; I have fewer degrees of freedom than I thought.)
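A short Python sketch, assuming only the priors, conditionals, and memory reliabilities stated above, that re-derives the remaining quantities instead of asserting them (the variable names are mine):

```python
# Priors over "2+2=4" (A) vs "2+2=3" (B), and visualization events C, D.
p_A, p_B = 0.75, 0.25
p_C_given_A, p_D_given_A = 0.50, 0.00
p_C_given_B, p_D_given_B = 0.00, 0.50

# Law of total probability.
p_C = p_C_given_A * p_A + p_C_given_B * p_B
p_D = p_D_given_A * p_A + p_D_given_B * p_B
print(p_C, p_D)  # 0.375, 0.125

# Memory reliability, as assumed above.
p_mem_if_happened, p_mem_if_not = 0.95, 0.001
p_E = p_C * p_mem_if_happened + (1 - p_C) * p_mem_if_not
p_F = p_D * p_mem_if_happened + (1 - p_D) * p_mem_if_not
print(p_E, p_F)  # 0.356875, 0.119625

# Today's update: P(A|C) = P(C|A)P(A)/P(C).
print(p_C_given_A * p_A / p_C)  # 1.0

# Tomorrow's update conditions on E (a memory of C) and D, not on C itself.
# P(E,D|A) = 0 because P(D|A) = 0, so the posterior is 0 even though
# P(A|C) was 1 -- the old evidence C has been replaced by the memory E.
p_ED = p_mem_if_not * p_D  # D excludes C, so E must be a false memory
print(0.0 * p_A / p_ED)    # 0.0
```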

Replies from: CCC, Wes_W
comment by CCC · 2014-08-08T12:23:08.870Z · LW(p) · GW(p)

I'm assuming A and B are mutually exclusive, as are C and D, and E and F.

While A and B being mutually exclusive seems reasonable, I don't think it holds for C and D. And I'm pretty sure that it doesn't hold at all for E and F.

If I remember visualising 2+2=3 yesterday and 2+2=4 the day before, then E and F are both simultaneously true.

P(A)=.75

P(C)=.75

P(C|A)=.50

These three statements, taken together, are impossible. Consider:

Over the 0.75 probability space where C is true (second statement), A is only true in half that space (third statement). Thus, A is false in the other half of that space; therefore, there is a probability space of at least 0.375 in which A is false. Yet A is only false over a probability space of size 0.25 (first statement).

In your calculations further down, you use the value P(C) = (.75*.50+.25*0) = 0.375; using that value for P(C) instead of 0.75 removes the contradiction.

Similarly, the following set of statements, considered together, leads to a contradiction (a quick check follows the list):

P(A)=.75

P(B)=.25 (just assume that it's either 2 or 3)

P(D)=.25

P(D|A)=0

P(D|B)=.50
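A quick sanity check of both sets via the law of total probability, filling in P(C|B)=0 and P(D|A)=0 from the parent comment where needed; a minimal sketch, not part of the original reply:

```python
def total_probability(p_x_given_a, p_a, p_x_given_b, p_b):
    """P(X) when A and B partition the space: P(X|A)P(A) + P(X|B)P(B)."""
    return p_x_given_a * p_a + p_x_given_b * p_b

# First set: P(C) was stated as .75, but the conditionals force .375.
print(total_probability(0.50, 0.75, 0.00, 0.25))  # 0.375

# Second set: P(D) was stated as .25, but the conditionals force .125.
print(total_probability(0.00, 0.75, 0.50, 0.25))  # 0.125
```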

Replies from: ike
comment by ike · 2014-08-08T16:12:32.403Z · LW(p) · GW(p)

The first and third comments are correct. I made some errors in first typing it up, now fixed, which shouldn't take away from the argument. The third comment points to an actual mistake, which has also been fixed.

Over the 0.75 probability space where C is true (second statement), A is only true in half that space (third statement).

This is wrong. P(C|A) is read as C given A, which is the chance of C, given that A is true. You're mixing it up with P(A|C). However, if you switch A and C in your paragraph, it becomes a valid critique, which I've fixed, substituting the correct values in. Thanks. (Did I mess anything else up?)

I'm starting to appreciate mathematicians now :)

You need to escape your * symbols so they output correctly.

Replies from: CCC
comment by CCC · 2014-08-09T05:39:51.779Z · LW(p) · GW(p)

You're mixing it up with P(A|C). However, if you switch A and C in your paragraph, it becomes a valid critique, which I've fixed, substituting the correct values in. Thanks.

You're right, I had that backwards.

(Did I mess anything else up?)

Hmmm....

P(F)=.20

P(F)= P(D)*.95+P(C)*.001=0.119125

You have two different values for P(F). Similarly, the value P(E)=0.70 does not match up with P(C), P(D) and the following:

P(memory of X | X happened yesterday)=.95

P(memory of X | X didn't happen yesterday)=.001

None of which is going to affect your point, which seems to come down to the claim that there exist possible events A, B, C, D, E and F such that P(A|C) = 1.

comment by Wes_W · 2014-08-08T21:37:45.611Z · LW(p) · GW(p)

P(A|C) = P(C|A)P(A)/P(C) = (.50*.75)/(.75*.50+.25*0) = 1

*blink*

Well huh. I suppose I ought to concede that point.

There are probabilities of 0 and (implicitly) 1 in the problem setup. I'm not confident it's valid to start with that; I worry it just pushes the problem back a step. But clearly, it is at least possible for probabilities of 1 to propagate to other propositions which did not start at 1. I'll have to think about it for a while.

comment by Pato Lubricado (pato-lubricado) · 2020-08-18T10:32:38.744Z · LW(p) · GW(p)

Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.

Have you considered selling merch? I'm infinitely certain I'd buy a T-shirt with that quote.

comment by Vojtěch Kantor (vojtech-kantor) · 2022-03-02T23:11:10.990Z · LW(p) · GW(p)

The Dalai Lama stated that "If science proves some belief of Buddhism wrong, then Buddhism will have to change."

I like the guy :)

Replies from: papetoast
comment by papetoast · 2022-09-04T00:42:02.188Z · LW(p) · GW(p)

That quote doesn't come from the passage, and it is not obvious to me how it relates to the passage. What are you trying to get at?

comment by xenohunter · 2022-10-03T05:32:00.270Z · LW(p) · GW(p)

Another problem with some people is that they don't consciously believe (or won't openly admit) that they have absolute certainty. In their speech they say that they doubt this and that, that they "cannot know everything", but I suspect that's mostly a trick that lets them add "and neither do you." With such people, one first needs to show them that they are lying to themselves before one can have a conversation about certainty versus uncertainty.