Update Yourself Incrementally

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-14T14:56:33.000Z · 29 comments

Politics is the mind-killer.  Debate is war, arguments are soldiers.  There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack.  This you cannot do.  It is mathematically impossible.  For every expectation of evidence, there is an equal and opposite expectation of counterevidence.

But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two and still survive, so long as the hits don't keep on coming in.

Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.

You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.

It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.

I just know someone in the audience out there is going to say, “But you can’t concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll lose! What could be more viscerally terrifying than that?”

Whatever. Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.

Rationality is not a walk, but a dance. On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right. Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence. Yes, down. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.

If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway.

The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory. That is qualitative reasoning; think quantitatively.
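Thinking quantitatively here can be made concrete. A minimal sketch, using an invented flip sequence: a true "95% heads" hypothesis competing against a fair-coin rival takes a real hit from the one tail, but the nineteen heads more than repay it.

```python
import math

# Quantitative belief-shifting over an illustrative flip sequence
# (19 heads, 1 tail).  H1 = "the coin comes up heads 95% of the
# time"; H2 = "the coin is fair".  Each head multiplies the odds for
# H1 by 1.9; each tail multiplies them by 0.1 -- a hit, not a death.
odds = 1.0  # even prior odds for H1 over H2
for flip in "HHHHHHHHHHTHHHHHHHHH":
    if flip == "H":
        odds *= 0.95 / 0.5   # shift belief upward
    else:
        odds *= 0.05 / 0.5   # shift belief downward -- yes, down
evidence_db = 10 * math.log10(odds)
print(round(evidence_db, 1))  # ~43 db net in favor of H1
```

The single tail costs 10 db of evidence, but each head earns about 2.8 db back, so the cherished (and true) theory survives comfortably.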

For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward. If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn’t leave much room for the probability to go further upward. And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side. The weighted mean of your expected posterior probability must equal your prior probability.

How silly is it, then, to be terrified of revising your probability downward, if you’re bothering to investigate a matter at all? On average, you must anticipate as much downward shift as upward shift from every individual observation.
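This balance can be checked directly. A minimal sketch with made-up numbers for the prior and likelihoods: the probability-weighted average of the two possible posteriors works out to exactly the prior.

```python
# Conservation of expected evidence, with illustrative numbers.
# H is some hypothesis; E is an observation H predicts strongly.
prior = 0.8                       # P(H)
p_e_given_h = 0.95                # P(E|H)
p_e_given_not_h = 0.50            # P(E|~H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
post_if_e = prior * p_e_given_h / p_e                    # belief goes up
post_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)    # belief goes down

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(expected_posterior)  # equals the prior (up to float rounding)
```

Seeing E raises the probability only modestly (0.80 to about 0.88), while not seeing E drops it sharply (to about 0.29); the asymmetry in shift sizes is exactly offset by the asymmetry in how likely each outcome is.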

It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in. You may find your belief drifting downward and further downward. Until, finally, you realize from which quarter the winds of evidence are blowing against you. In that moment of realization, there is no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. Yay! Time to celebrate! Pop a champagne bottle or send out for pizza! You can’t become stronger by keeping the beliefs you started with, after all.

29 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Matthew_C · 2007-08-14T16:32:01.000Z · LW(p) · GW(p)

If you've already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you'd best integrate all the evidence.

OK, here, now go take your own advice. As an academic imprint it's pretty expensive, so if you can't find it in your local university library I'll snail-mail you some relevant extracts.

comment by GreedyAlgorithm · 2007-08-14T16:59:08.000Z · LW(p) · GW(p)

Matthew C:

I don't understand why the Million Dollar Challenge hasn't been won. I've spent some time in the JREF forums and as far as I can see the challenge is genuine and should be easily winnable by anyone with powers you accept. The remote viewing, for instance, that I see on your blog. That's trivial to turn into a good protocol. Why doesn't someone just go ahead and prove these things exist? It'd be good for everyone involved. I see you say: "But for the far larger community of psi deniers who have not read the literature of evidence for psi, and get all your information from the Shermers and Randis of the world, I have a simple message: you are uninformed." So obviously you think that either Randi has bad information or is deliberately sharing bad information. That's fine. If the Challenge is set up correctly it shouldn't matter what Randi does or does not believe/know/whatever. I can only conclude there is at least one serious flaw in the Challenge. Could you tell me what it is?

comment by michael_vassar3 · 2007-08-14T18:04:49.000Z · LW(p) · GW(p)

Matthew: As far as I can tell, Psi is not a hypothesis that constrains the probability density of predictions; it simply says "anything goes, anything can happen". As such, isn't it just an instance of radical skepticism? The thing is, radical skeptical arguments don't change anticipations or prescribe changes in behavior. Taken seriously, it's not clear that such hypotheses even constitute arguments for their own advocacy. Maybe if I draw attention to the unknowable demons behind the curtain I will be better able to deal with them, but maybe that will cause them to eat me. I don't see how an expected value calculation holds that the former is more likely than the latter, just as I don't see how a god who punishes atheists is any less likely than one who punishes believers. Related question: what evidence would cause you to relinquish the psi hypothesis?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-14T19:30:48.000Z · LW(p) · GW(p)

You want me to believe precognition has been scientifically established? Give me one single research protocol which reliably (90% probability) produces results at the p < 0.01 significance level for events 30 minutes in the future.

If the effect is real, however small, there will exist some number of subjects/trials that reliably amplifies the effect to any given level of statistical significance.
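Eliezer's point about amplification can be sketched numerically. The figures below are purely illustrative assumptions (a 51% hit rate where chance predicts 50%, with "reliably" read as 90% power), plugged into the standard one-sided normal-approximation sample-size formula for a proportion.

```python
import math

# How many trials would amplify a tiny real effect to p < 0.01?
# Illustrative assumptions: true hit rate 51% vs. 50% chance,
# detected with 90% probability.
p0, p1 = 0.50, 0.51      # chance rate vs. assumed true rate
z_alpha = 2.326          # one-sided z for significance p < 0.01
z_beta = 1.282           # z for 90% power ("reliably")

n = ((z_alpha * math.sqrt(p0 * (1 - p0))
      + z_beta * math.sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(math.ceil(n))  # on the order of 30,000 trials -- large, but finite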

comment by Matthew_C · 2007-08-14T23:59:04.000Z · LW(p) · GW(p)

Actually, rather than rehashing the entire psi debate here, I'd much prefer you just read the material instead. Chapter 3 of Irreducible Mind is particularly powerful, and I will send excerpts to anyone who gives me a US postal address or PO box (email mcromer @t blast dawt com). The natural history of these phenomena is very easily available, and often very well documented.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-15T00:04:20.000Z · LW(p) · GW(p)

Got protocol? Yes or no?

comment by Tom_McCabe · 2007-08-15T03:13:03.000Z · LW(p) · GW(p)

"Got protocol? Yes or no?"

If there was any actual evidence, somebody would have claimed Randi's million-dollar prize years ago. I wasn't able to find a copy of "The Irreducible Mind" online; it doesn't have a Wikipedia article and apparently isn't that popular. A quick Google of the authors reveals that only one (Bruce Greyson) has a Wikipedia article (http://en.wikipedia.org/wiki/Bruce_Greyson). The lead author, Edward F. Kelly, is employed as a professor of "Perceptual Studies" at the University of Virginia Health System (http://www.healthsystem.virginia.edu/internet/personalitystudies/Edbio.cfm) and has a PhD. from Harvard in "Psycholinguistics/Cognitive Science". The authors seem to work mainly within the field of psychology, asserting that it has "no explanation" for the human mind (http://www.amazon.com/Irreducible-Mind-hard-find-contemporary/dp/customer-reviews/0742547922).

As for the other two links, the first one sounds like nonsense; the "research" was not peer-reviewed, replicated or verified and was "released exclusively to the Daily Mail", a well-known London tabloid (http://en.wikipedia.org/wiki/Daily_Mail). The article he linked is from The Evening Standard, another British tabloid (http://en.wikipedia.org/wiki/The_Evening_Standard), and asserts that "Virtually all the great scientific formulae which explain how the world works allow information to flow backwards and forwards through time - they can work either way, regardless.", as well as a great deal of other obvious nonsense. The second one lists a number of anecdotes, none of which have sources, identifying references or even names.

Replies from: tlhonmey
comment by tlhonmey · 2021-01-07T23:04:56.098Z · LW(p) · GW(p)

Information flowing both backward and forward through time is obviously useless to us since we perceive and move in only one direction.  It's not obviously nonsense.  Our perception moves forward through time, so it seems obvious to us that cause leads to effect.

However...  if, in fact, the effect precipitates the cause...  Or some feedback combination of both...  How would we actually be able to tell?  Our perception only computes in one direction so we always see the cause half of it first and then the effect.

If there were people reliable enough at passing information back to their past selves to beat random chance though I expect they would already have found a way to make use of it.

comment by John2 · 2007-08-15T08:49:37.000Z · LW(p) · GW(p)

"If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will see what looks like contrary evidence."

My question here assumes that you mean one in twenty times you get a tails (if you mean one in twenty times you get a heads, then I'm also confused but for different reasons).

Surely if I have a hypothesis that a coin will land heads 95% of the time (and therefore tails 5% of the time), then every cluster of results in which 1/20 are tails is actually supporting evidence. If I toss a coin X times (where X is some number whereby 95% is a meaningful description of outcomes: X >= 20) and 1 out of every 20 of those is tails, that actually is solid evidence in support of my hypothesis - if, as you say, "one in twenty times" I see a tails, that is very strong evidence that my 95% hypothesis is accurate...

Have I misread your point, or am I thinking about this from the wrong angle?

comment by Stuart_Armstrong · 2007-08-15T10:35:20.000Z · LW(p) · GW(p)

Have I misread your point, or am I thinking about this from the wrong angle?

Maybe the belief here is "the next flip of the coin will be heads". Then each head causes your confidence in that belief to increase, while each tail causes a decrease in that confidence.

You're right, though; the belief "the coin is heads 94-96% of the time" behaves according to more complicated rules. Even if it is true, every so often you will still get evidence that contradicts your belief - such as twenty tails in a row. But not often, and Eliezer's point still applies.

comment by Peter_de_Blanc · 2007-08-15T15:14:05.000Z · LW(p) · GW(p)

John, Stuart, let's do the math:

H1: "the coin will come up heads 95% of the time."

Whether a given coinflip is evidence for or against H1 depends not only on the value of that coinflip, but on what other hypotheses you are comparing H1 to. So let's introduce...

H2: "the coin will come up heads 50% of the time."

By Bayes' Theorem (odds form), the odds conditional upon the data D are:

p(H1|D) / p(H2|D) = [p(H1) p(D|H1)] / [p(H2) p(D|H2)]

So when we see the data, our odds are multiplied by the likelihood ratio p(D|H1)/p(D|H2).

If D = heads, our likelihood ratio is:

p(heads|H1) / p(heads|H2) = .95 / .5 = 1.9.

If D = tails, our likelihood ratio is:

p(tails|H1) / p(tails|H2) = .05 / .5 = 0.1.

If you prefer to measure evidence in decibels, then a result of heads is 10 log10(1.9) ~= +2.8 db of evidence and a result of tails is 10 log10(0.1) = -10.0 db of evidence.

The same result is true regardless of how you group the coinflips; if you get nothing but heads, that is even stronger evidence for H1 than if you get 95% heads and 5% tails. This is true because we are only comparing it to hypothesis H2. If we introduce hypothesis H3:

H3: "the coin will come up heads 99% of the time."

Then we can also measure the likelihood ratio p(D|H1) / p(D|H3).

Plugging in "heads" or "tails", we get:

p(heads|H1) / p(heads|H3) = 0.95 / 0.99 = 0.9595...
p(tails|H1) / p(tails|H3) = 0.05 / 0.01 = 5.0

So a result of heads is about -0.18 db of evidence for H1, and a result of tails is about +7.0 db of evidence.

If you have a uniform prior on [0, 1] for the frequency of a heads, then you can use Laplace's Rule of Succession.
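Peter's figures are easy to check mechanically. A quick sketch in Python, verifying the decibel values above and illustrating the Rule of Succession he mentions (the `rule_of_succession` helper is just for illustration):

```python
import math

# Checking the likelihood ratios and decibel figures above.
def decibels(likelihood_ratio):
    return 10 * math.log10(likelihood_ratio)

# H1 ("95% heads") vs H2 ("50% heads"):
print(round(decibels(0.95 / 0.5), 1))   # heads: +2.8 db
print(round(decibels(0.05 / 0.5), 1))   # tails: -10.0 db

# H1 vs H3 ("99% heads"):
print(round(decibels(0.95 / 0.99), 2))  # heads: -0.18 db
print(round(decibels(0.05 / 0.01), 1))  # tails: +7.0 db

# Laplace's Rule of Succession: with a uniform prior on the heads
# frequency, after observing h heads in n flips, the probability
# that the next flip is heads is (h + 1) / (n + 2).
def rule_of_succession(h, n):
    return (h + 1) / (n + 2)

print(round(rule_of_succession(19, 20), 3))  # 0.909 after 19 heads in 20 flips
```

Note how the same tail that is -10 db against H1 relative to a fair coin is +7 db for H1 relative to H3: evidence only has a sign relative to the hypotheses being compared.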

comment by Matthew_C · 2007-08-15T23:41:05.000Z · LW(p) · GW(p)

McCabe's single-paragraph dismissal of an 800-page book with hundreds of footnotes that he hasn't read, based on Wikipedia entries, seems to be the precise opposite of the raison d’être of Overcoming Bias. And Yudkowsky, I simply dare you to read this book. You talk the good talk here about The Way and the search for truth. I dare you to expose yourself to some of the meticulously documented lacunae in your worldview by reading Irreducible Mind. I appeal to your sense of intellectual pride. Chapter 3 is a good place to start. . .

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-08-16T02:12:31.000Z · LW(p) · GW(p)

So there's no reproducible protocol, then?

I have better things to do with my time.

comment by Ultima_Ratio · 2007-08-16T20:35:42.000Z · LW(p) · GW(p)

Matthew C - it sounds more like you're trying to sell a book than produce a testable experiment.

comment by Doug_S. · 2007-08-17T03:55:09.000Z · LW(p) · GW(p)

Here's the thing.

I could read a book and find that the arguments in the book are "valid" - that it is impossible, or at least unlikely, that the premises are true and the conclusion false. However, what I can't do by reading is determine whether the premises are true.

In the infamous Alien Autopsy "documentary", there were three specific claims made for the authenticity of the video.

1) An expert from Kodak examined the film, and verified that it is as old as was claimed.
2) A pathologist was interviewed, who said that the autopsy portrayed was done in the manner that an actual autopsy would have been done.
3) An expert from Spielberg's movie studio testified that modern special effects could not duplicate the scenes in the video.

If you accept these statements as true, it becomes reasonable to accept that the footage was actually showing what it appeared to show; an autopsy of dead aliens.

Upon seeing these claims, though, my response was along the lines of "I defy the data." As it turns out, all three of those statements were blatant lies. There was no expert from Kodak who verified the film. Kodak offered to verify the film, but was denied access. Many other pathologists said that the way the autopsy was performed in the film was absurd, and that no competent pathologist would ever do an autopsy on an unknown organism in that manner because it would be completely useless. The person from Spielberg's movie studio was selectively quoted and was very angry about it. What he really said was that the film was good for whatever grade B studio happened to have produced it.

I could read your book, but I believe that it is more likely that the statements in the book are wrong than it is that psi exists. As Thomas Jefferson did not say, "It is easier to believe that two Yankee professors [Profs. Silliman and Kingsley of Yale] would lie than that stones would fall from the sky."

The burden of proof is on you, Matthew. Many, many claims of the existence of "psi" have been shown to be bogus, so I give further claims of that nature very little credence. Either tell us about a repeatable experiment - copy a few paragraphs from that book if you have to - or we're going to ignore you.

Replies from: mlionson
comment by mlionson · 2010-02-17T01:10:50.466Z · LW(p) · GW(p)

Although I also think Psi is bogus, my belief has nothing to do with the fact that previous claims of psi have been bogus. Evidence can never justify a theory, any more than finding 10 white swans in a row proves that there are no black swans! Believing that psi is false because of evidence that psi has been false in the past is the logical fallacy of inductivism. Most rational people do not believe in Psi because it has no logical theoretical/scientific basis and because it does not explain things well.

Much of this type of argument strikes me as nonsense. Something that is true can not be justified. One can (and should) argue that something is true. But argument is not justification. If the argument explains something well, then one should believe it, if it is the best theory available.

But evidence can never support any argument. It merely corroborates it. The reason that you believe a coin is fair is not ultimately because the results of an experiment convince you. It would be easy to set up an algorithm that causes the first 3000 examples of a computer simulated coin-flip to have the correct number of heads or tails to make the uninformed believe that the simulated coin flip is fair. But the next 10,000 could yield very different results, just by using an easy-to-create mathematical algorithm. No p-value can be assigned even after 3000 computer simulations of a coin flip. The data never tell a story (to quote someone on another site).

The reason we rationally believe the results of experiment when we flip the coin, but not when we see an apparent computer simulation of a coin flip is: In the case of the actual coin we already have explanations of the effects of gravity on two-sided metal objects, well before we have any data about coin flips. The same is not true about the computer simulation of the coin flip, unless we see the program ahead of time.

It is the theory about the effects of gravity on two-sided metal objects (with a particular pattern of metal distribution) that we try to evaluate when we flip coins. The data never tell us a story about whether the coin is fair. We first have a theory about the coin and its properties, and then we utilize the experiment (the coin flip) to try to falsify our notion that the coin is fair, if the coin looks balanced. Or, we falsify the notion that the coin is not fair, if our initial theory is that the coin does not look balanced. Examples of a phenomenon do not increase the probability of its being true.

The reason we may believe that a coin could be fair is that we first evaluate the structure of the material, note that it seems to have a structure that would promote fairness given standard human flips of coins. Only then do we test it. But it is our rational understanding of the properties of the coin and expectations about the environment which make the coin flip reasonable. The results of any test tell you nothing (logically, nothing at all) about the fairness of a coin unless you first have a theory and an explanation about why the coin should or should not be considered fair.

The reason we do not believe in psi is that it does not explain anything, violates multiple known laws of physics, yet creates no alternative scientific structure that allows us to understand and predict events in our world.

Replies from: Jack
comment by Jack · 2010-02-17T01:33:52.037Z · LW(p) · GW(p)

This is pretty muddled and wrong. You use a lot of terms in an unorthodox way. For example, I don't know how something that is true cannot ever be justified (how else do you know it's true!). Also, there is no such thing as science without induction - no laws of physics or predictions. So I'm pretty confused about what your position is. That's okay, though, because it looks like you've never heard of Bayesian inference. In which case this is a really important day in your life.

The Wikipedia entry

The SEP entry

Eliezer's explanation of the Math

Also: the "Rationality and Science" subsection at the bottom here.

Who has better links?

Edit: Welcome to less wrong, btw! Feel free to introduce yourself.

Edit again: This PDF looks good.

Replies from: Zack_M_Davis, mlionson
comment by Zack_M_Davis · 2010-02-17T01:39:34.060Z · LW(p) · GW(p)

You use a lot of terms in an unorthodox way. [...] Also, there is no such thing as science without induction

I wouldn't call Popperianism unorthodox exactly.

Replies from: Jack
comment by Jack · 2010-02-17T01:43:01.052Z · LW(p) · GW(p)

I sort of see some Popper in the comment but I also see a good deal that isn't.

comment by mlionson · 2010-02-17T03:02:14.039Z · LW(p) · GW(p)

"For example I don't know how something that is true cannot ever be justified (how else do you know it's true!)"

You can't know that something is true. We are fallible. And our best theories are often wrong. We gain knowledge by arguing with each other and trying to point out logical contradictions in our explanations. Experiments can help us to show that competing explanations are wrong (or that ours is!) .

Induction as a scientific methodology has been known (since Hume) to be impossible. Happy to discuss this further if you like. I will certainly read the articles you suggest. Please consider reading David Deutsch's The Fabric of Reality. He (better than Hume, in my estimation) shows the complete irrationality of induction, but I am happy to discuss, if you are interested.

Replies from: Tyrrell_McAllister, Jack
comment by Tyrrell_McAllister · 2010-02-17T04:05:56.041Z · LW(p) · GW(p)

You can't know that something is true.

This is true if you take "know" to mean "absolute certainty". And, precisely because absolute certainty never happens, taking "know" in this sense would be pointless. We would never have the opportunity to use such a word, so why bother having it? For that reason, people on this site take the assertion that they "know" a proposition P to mean that the evidence they've gathered adds up to a sufficiently high probability for P. Here,

  1. "sufficiently high" depends on the context — for example, the expected cost/benefit of acting as though P is true; and

  2. the evidence that they've gathered "adds" in the sense of Bayesian updating.

That's all that they mean by "know".

Induction as a scientific methodology has been known (since Hume) to be impossible.

On the Bayesian interpretation, induction is just a certain mathematical computation. The only limits on its possibility are the limits on your ability to carry out the computations.

Replies from: mlionson
comment by mlionson · 2010-02-17T04:58:24.729Z · LW(p) · GW(p)

"evidence they've gathered adds up to a sufficiently high probability for P"

Perhaps I should ask what you mean by "evidence"? By evidence do you mean examples of an event happening that corroborates a particular theory that someone holds ?

So if

  1. you have an expectation of something happening, and
  2. that something happens,

then you are saying that the event is evidence in favor of the theory. And if the event happens even more when you expect it to then

  1. it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory?

Have I stated your argument correctly?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-02-17T06:16:55.960Z · LW(p) · GW(p)

Perhaps I should ask what you mean by "evidence"?

All input that you have access to is potentially evidence. That is, ideally, all your input would figure into your evaluation of the probability of any proposition whatsoever. And if some input E weren't evidence with respect to some particular proposition H, you would still have to run the Bayesian updating computation to determine that E didn't change the probability that you ought to assign to H.

Obviously, in practice, computing the upshot of all your input is so ideal as to be physically impossible. But, in principle, everything is evidence.

And if the event happens even more when you expect it to then

  1. it is even more evidence for the theory, and this increased probability is calculated by using a Bayesian rule to update your increased expectation of the likelihood of the truth of your theory?

Contradicting prior expectation is a particularly potent kind of evidence. But it is only a special case. Search for "Popper" at Eliezer's An Intuitive Explanation of Bayes' Theorem.

Replies from: mlionson
comment by mlionson · 2010-02-17T07:11:14.622Z · LW(p) · GW(p)

"And if the event happens even more when you expect it to then

it is even more evidence for the theory, "

I am not sure you agreed with this based on your response but I will assume that you did. But correct me if I am wrong!

If you did agree, then consider the Bayesian turkey. Every time he gets fed in November, he concludes that his owner really wants what's best for him and likes him, because he enjoys eating and keeps getting food. Every day more food is provided, exactly as he expects given his theory, so he uses Bayesian statistical inference to increase the confidence he has in his theory about the beneficence of his master. As more food is provided, exactly according to his expectations, he concludes that his theory is becoming more and more likely to be true. Towards the end of November, he considers his theory very true indeed.

You can guess the rest of the story. Turkeys are eaten at Thanksgiving. The turkey was killed.

I think you can see that probabilistic evidence, or any evidence, does not (cannot) logically support a theory. It merely corroborates it. One cannot infer from an example of something a general rule. Exactly the opposite is the case. One cannot infer that because food is provided each day, it will continue to be provided each day. Examples of food being provided do not increase the likelihood that the theory is true. But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.

I can summarize like this. Outcomes of probabilistic experiments do not tell us what it is rational to believe, any more than the turkey was justified in believing in the beneficence of his owner because he kept getting food in November. Probability does not help us develop rational expectations. Rational expectations, on the other hand, do help us to determine what is probable. When the turkey has a rational theory, he can determine the likelihood that he will or will not be given food on a given day.

Replies from: Jack
comment by Jack · 2010-02-17T07:51:05.086Z · LW(p) · GW(p)

A perfect Bayesian turkey would produce multiple hypotheses to explain why he is being fed. One hypothesis would be that his owner loves him, another would be that he is being fattened for eating. Let us stipulate that those are the only possibilities. When the turkey continues to be fed that is new data. But that data doesn't favor one hypothesis over the other. Both hypotheses are about equally consistent with the turkey continuing to be fed so little updating will occur in either direction.

But good theories about the world (people like to eat turkeys on Thanksgiving) help one develop expected probabilities of events. If the turkey had a good theory, he would rationally expect certain probabilities. For example, he would predict that he would be given food up until Nov. 25th, but not after.

But this gives the game away. What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something). If the turkey had this information it isn't even close. The probability distribution immediately shifts drastically in favor of the Thanksgiving meal hypothesis.

Then, if Thanksgiving comes and goes and the turkey is still being fed he can update on that information and the probability his owner loves him goes up again.
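Jack's point can be put in odds form. A minimal sketch in which all the numbers are invented for illustration: daily feedings have a likelihood ratio near 1 between the two hypotheses and so barely move the odds, while surviving Thanksgiving, which only one hypothesis strongly predicts, moves them a lot.

```python
# The turkey's updates in odds form, with invented numbers.
# Odds are "owner loves me" : "owner is fattening me for Thanksgiving".
odds = 1.0  # even prior odds between the two hypotheses

# Each day of feeding: both hypotheses predict food about equally
# well, so the likelihood ratio is ~1 and the odds barely move.
for day in range(30):
    odds *= 0.99 / 0.99
print(odds)  # still 1.0: feeding data doesn't discriminate

# Thanksgiving passes and the turkey is alive and still being fed.
# Assume P(survives | loved) = 0.95 but P(survives | fattened) = 0.02.
odds *= 0.95 / 0.02
print(round(odds, 1))  # ~47.5: a strong update toward "loved"
```

The asymmetry is the whole story: evidence only counts insofar as the competing hypotheses assign it different probabilities.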

Replies from: mlionson
comment by mlionson · 2010-02-17T09:16:23.842Z · LW(p) · GW(p)

"What makes this theory a good one is that people have eaten turkeys for Thanksgiving in the past and induction tells us they are likely to do so in the future (absent other data that suggests otherwise like a rise in Veganism or something)."

I do appreciate your honesty in making this assumption. Usually inductivists are less candid (but secretly believe exactly as you do; we call them crypto-inductivists!)

But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past. There also is no law of mathematics or logic that says that when a sequence of 100 zeroes in a row is observed, the next one is more likely to be another zero. Indeed, there are literally an INFINITE number of hypotheses that are consistent with 100 zeroes coming first and then anything else coming next.

With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc. That is why we think Thanksgiving will come again. It is your understanding of our culture that allows you to make predictions about Thanksgiving, not the fact that it has happened before! For example, you didn't keep writing the year 19XX just because most of your life you did so, and did so repeatedly. You were not fooled by an imaginary principle of induction when the calendar turned from 1999 to 2000. You did not keep writing 19...something just because you had written it before. You understood the calendar, just as you understand our culture and have deep theories about it. That is why you make certain predictions (Thanksgiving will keep coming, but you won't continue to write 19XX, no matter how many times you wrote it in the past).

I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.

Replies from: Jack
comment by Jack · 2010-02-17T10:01:54.903Z · LW(p) · GW(p)

But there is no law of physics, psychology, economics, or philosophy that says that the future must resemble the past

Of course not. Though I'm pretty sure induction occurs in humans without them willing it. This is just Hume's view: certain perceptions become habitual to the point where we are surprised if we do not experience them. We have no choice but to do induction. But none of this matters. Induction is just what we're doing when we do science. If we can't trust it, we can't trust science.

With respect, the reason you believe that Thanksgiving will keep coming has everything to do with your a-priori theory about culture and nothing to do with inductivism. You and I probably have rich theories that cultures can be slow to change, that brains may be hard-wired and difficult to change, that memes reinforce each other, etc.

I'm sorry, my "a priori" theory? In what sense could I possibly know about Thanksgiving a priori? It certainly isn't an analytic truth, and it isn't anything like math or something Kant would have considered a priori. Where exactly are these theories coming from, if not from induction? And how come inductivists aren't allowed to have theories? I have lots of theories, probably close to the same theories you do. The only difference between our positions is that I'm explaining how those theories got here in the first place.

I'm afraid I don't know what to make of your calendar and number examples. Just because I think science is about induction doesn't mean I don't think that social conventions can be learned. Someone explaining that after 1999 comes 2000 counts as pretty good Bayesian evidence that that is how the rest of the world counts. Of course most children aren't great Bayesians and just accept what they are told as true. But the fact that people aren't actually naturally perfect scientists isn't relevant.

I think you can see that your rationality (not a principle of induction, not an assumption that everything stays the same) is actually what caused you to have rational expectations to begin with.

Rationality is just the process of doing induction right. You have to explain what you mean if you mean something else by it :-) (And obviously induction does not mean everything stays the same, but that there are enough regularities to say general things about the world and make predictions. This is crucial. If there were no regularities, the notion of a "theory" wouldn't even make sense. There would be nothing for the theory to describe. Theories explain large classes of phenomena over many times. They can't do that absent regularities.)

comment by Jack · 2010-02-17T04:43:18.441Z · LW(p) · GW(p)

Induction as a scientific methodology has been known (since Hume) to be impossible.

I agree with Hume about just about everything. You're misreading him. Induction definitely isn't impossible. We do it all the time. Scientists do it for a living. Hume certainly didn't think it was impossible. What he thought was that there was no deductive reason for expecting that today will be like yesterday. The only justification is induction itself. Thus, any inductive argument begs the question. But his solution definitely wasn't to throw it out and wallow in extreme skepticism. He thought induction was inevitable (not even something we will, just part of psychological habit formation) and was pretty much the only way of having knowledge about anything.

Hume's position is basically my position. Though I have some sketchy arguments in my head that might let us go farther than Hume, I'm more than comfortable with that. Now it turns out that if your psychological habit formation occurs in a certain way (the Bayesian way), you'll start winning bets against those who form beliefs in different ways. It also lets us do statistical/probabilistic experimentation, which would never falsify anything but can provide evidence for and against theories. It also explains why we like unfalsified theories that have been tested many, many times more than unfalsified theories that have rarely been tested.

If Deutsch has other arguments you can spell out here I'd be happy to hear them.

comment by MinibearRex · 2011-07-20T04:22:59.250Z · LW(p) · GW(p)

There is more discussion of this post here as part of the Rerunning the Sequences series.