What is Bayesianism?

post by Kaj_Sotala · 2010-02-26T07:43:53.375Z · LW · GW · Legacy · 217 comments

This article is an attempt to summarize basic material, and thus probably won't have anything new for the hard core posting crowd. It'd be interesting to know whether you think there's anything essential I missed, though.

You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula. There's a wiki entry about "Bayesian", but that doesn't help much. And the LW usage seems different from just the "Bayesian and frequentist statistics" thing, too. As far as I can tell, there's no article explicitly defining what's meant by Bayesianism. The core ideas are sprinkled across a large number of posts, and 'Bayesian' has its own tag, but there's no single post that explicitly makes the connections and says "this is Bayesianism". So let me try to offer my definition, which boils Bayesianism down to three core tenets.

We'll start with a brief example illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

If you thought a cold was more likely, well, that was the answer I was after. Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do. Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A by the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)
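To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The base rates and the one-per-cent figure are illustrative numbers invented for the example, not real medical statistics.

```python
# Illustrative numbers only: base rates P(A) and likelihoods P(X|A).
p_tumor = 1e-4          # exceedingly few people have a brain tumor
p_cold = 0.5            # most people catch a cold in a given year
p_headache_if_tumor = 1.0   # a tumor always causes a headache
p_headache_if_cold = 0.01   # a cold causes a headache 1% of the time

# Probability that each cause occurs *and* produces the headache.
joint_tumor = p_tumor * p_headache_if_tumor
joint_cold = p_cold * p_headache_if_cold

# Bayes' theorem: normalize by everything that could have caused the symptom.
# (As in the example, we pretend tumors and colds are the only causes.)
total = joint_tumor + joint_cold
print(f"P(tumor | headache) = {joint_tumor / total:.4f}")  # ~0.02
print(f"P(cold  | headache) = {joint_cold / total:.4f}")   # ~0.98
```

Even with a likelihood of 1 for the tumor, its tiny base rate leaves the cold as the overwhelmingly better explanation.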

There should be nothing surprising about that, of course. Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which one is the case, you'll consider which of the explanations is most often true and fits the circumstances best.

Core tenet 1: Any given observation has many different possible causes.

Acknowledging this, however, leads to a somewhat less intuitive realization. For any given observation, how you should interpret it always depends on previous information. Simply seeing that the person was running wasn't enough to tell you that they were in a hurry, or that they were getting some exercise. Or suppose you had to choose between two competing scientific theories about the motion of planets. A theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forwards with His Noodly Appendage. If both of these theories made the same predictions, you'd have to depend on your prior knowledge - your prior, for short - to judge which one was more likely. And even if they didn't make the same predictions, you'd need some prior knowledge that told you which of the predictions were better, or that the predictions matter in the first place (as opposed to, say, theoretical elegance).

Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.

Again, this might seem obvious. But there are many well-known instances in which people forget to apply this information. Take supernatural phenomena: yes, if there were spirits or gods influencing our world, some of the things people experience would certainly be the kinds of things that supernatural beings cause. But then there are also countless mundane explanations, from coincidences to mental disorders to an overactive imagination, that could cause such experiences to be perceived. Most of the time, postulating a supernatural explanation shouldn't even occur to you, because the mundane causes already have lots of evidence in their favor and supernatural causes have none.

Core tenet 2: How we interpret any event, and the new information we get from anything, depends on information we already had.

Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself "if this cause didn't exist, would I still expect to experience this with equal probability?" If the answer is "yes", then it probably wasn't cause A.

This realization, in turn, leads us to

Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.

The fact that anything can be caused by an infinite number of things explains why Bayesians are so strict about the theories they'll endorse. It isn't enough that a theory explains a phenomenon; if it can explain too many things, it isn't a good theory. Remember that if you'd expect to experience something even when your supposed cause was untrue, then that's no evidence for your cause. Likewise, if a theory can explain anything you see - if the theory allowed any possible event - then nothing you see can be evidence for the theory.
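A quick numerical sketch of that last point (with made-up numbers): under Bayes' theorem, a hypothesis that assigns the observation the same probability as its alternatives do gets no boost at all from the observation.

```python
def posterior(prior, p_obs_if_h, p_obs_if_not_h):
    """Update P(H) on an observation via Bayes' theorem."""
    p_obs = p_obs_if_h * prior + p_obs_if_not_h * (1 - prior)
    return p_obs_if_h * prior / p_obs

# A theory that "allows any possible event" fits the observation no better
# than its rivals, so the prior passes through unchanged: no evidence.
print(posterior(0.3, 0.5, 0.5))  # 0.3
# A theory that sticks its neck out is rewarded when the prediction holds.
print(posterior(0.3, 0.9, 0.1))  # ~0.79
```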

At its heart, Bayesianism isn't anything more complex than this: a mindset that takes three core tenets fully into account. Add a sprinkle of idealism: a perfect Bayesian is someone who processes all information perfectly, and always arrives at the best conclusions that can be drawn from the data. When we talk about Bayesianism, that's the ideal we aim for.

Fully internalized, that mindset does tend to color your thought in its own, peculiar way. Once you realize that all the beliefs you have today are based - in a mechanistic, lawful fashion - on the beliefs you had yesterday, which were based on the beliefs you had last year, which were based on the beliefs you had as a child, which were based on the assumptions about the world that were embedded in your brain while you were growing in your mother's womb... it does make you question your beliefs more, and wonder whether all of those previous beliefs really corresponded to reality as well as they could have.

And that's basically what this site is for: to help us become good Bayesians.

217 comments

Comments sorted by top scores.

comment by nazgulnarsil · 2010-02-26T12:32:18.254Z · LW(p) · GW(p)

is there a simple explanation of the conflict between bayesianism and frequentism? I have sort of a feel for it from reading background materials, but a specific example where they yield different predictions would be awesome. has this been posted before?

Replies from: Cyan, Blueberry, bill, PhilGoetz
comment by Cyan · 2010-02-26T20:49:22.500Z · LW(p) · GW(p)

Eliezer's views as expressed in Blueberry's links touch on a key identifying characteristic of frequentism: the tendency to think of probabilities as inherent properties of objects. More concretely, a pure frequentist (a being as rare as a pure Bayesian) treats probabilities as proper only to outcomes of a repeatable random experiment. (The definition of such a thing is pretty tricky, of course.)

What does that mean for frequentist statistical inference? Well, it's forbidden to assign probabilities to anything that is deterministic in your model of reality. So you have estimators, which are functions of the random data and thus random themselves, and you assess how good they are for your purpose by looking at their sampling distributions. You have confidence interval procedures, the endpoints of which are random variables, and you assess the sampling probability that the interval contains the true value of the parameter (and the width of the interval, to avoid pathological intervals that have nothing to do with the data). You have statistical hypothesis testing, which categorizes a simple hypothesis as “rejected” or “not rejected” based on a procedure assessed in terms of the sampling probability of an error in the categorization. You have, basically, anything you can come up with, provided you justify it in terms of its sampling properties over infinitely repeated random experiments.
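To make "justified in terms of sampling properties" concrete, here is a small simulation sketch (my own illustration, with arbitrary parameters): a 95% confidence interval procedure is judged by how often, over endlessly repeated experiments, the random interval it produces covers the fixed true value.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma, n, trials = 5.0, 2.0, 100, 20_000
z = 1.96  # approximate 95% normal quantile

hits = 0
for _ in range(trials):
    sample = rng.normal(mu_true, sigma, n)
    # The interval's endpoints are functions of the random data,
    # and hence random variables themselves.
    half_width = z * sample.std(ddof=1) / np.sqrt(n)
    hits += abs(sample.mean() - mu_true) <= half_width
print(hits / trials)  # close to 0.95: the procedure's long-run coverage
```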

Replies from: Tyrrell_McAllister, Mayo, PhilGoetz, nazgulnarsil
comment by Tyrrell_McAllister · 2010-02-26T21:19:32.868Z · LW(p) · GW(p)

Here is a more general definition of "pure frequentism" (which includes frequentists such as Reichenbach):

Consider an assertion of probability of the form "This X has probability p of being a Y." A frequentist holds that this assertion is meaningful only if the following conditions are met:

  1. The speaker has already specified a determinate set X of things that actually have existed or will exist, and this set contains "this X".

  2. The speaker has already specified a determinate set Y containing all things that have been or will be Ys.

The assertion is true if the proportion of elements of X that are also in Y is precisely p.

A few remarks:

  1. The assertion would mean something different if the speaker had specified different sets X and Y, even though X and Y aren't mentioned explicitly in the assertion.

  2. If no such sets had been specified in the preceding discourse, the assertion by itself would be meaningless.

  3. However, the speaker has complete freedom in what to take as the set X containing "this X", so long as the set contains this X. In particular, the other elements don't have to be exactly like this X, or be generated by exactly the same repeatable procedure, or anything like that. There are practical constraints on X, though. For example, X should be an interesting set.

  4. [ETA:] An important distinction between Bayesianism and Frequentism is this: note that, according to the above, the correct probability has nothing to do with the state of knowledge of the speaker. Once the sets X and Y are determined, there is an objective fact of the matter regarding the proportion of things in X that are also in Y. The speaker is objectively right or wrong in asserting that this proportion is p, and that rightness or wrongness has nothing to do with what the speaker knew. It has only to do with the objective frequency of elements of Y among the elements of X.

comment by Mayo · 2013-09-29T07:24:43.580Z · LW(p) · GW(p)

I'm sorry to see such wrongheaded views of frequentism here. Frequentists also assign probabilities to events where the probabilistic introduction is entirely based on limited information rather than a literal randomly generated phenomenon. If Fisher or Neyman were ever actually read by people purporting to understand frequentist/Bayesian issues, they'd have a radically different idea. Readers of this blog should take it upon themselves to check out some of the vast oversimplifications... And I'm sorry, but Reichenbach's frequentism has very little to do with frequentist statistics. Reichenbach, a philosopher, had an idea that propositions had frequentist probabilities. So scientific hypotheses--which would not be assigned probabilities by frequentist statisticians--could have frequentist probabilities for Reichenbach, even though he didn't think we knew enough yet to judge them. He thought that at some point we'd be able to judge, for a hypothesis of a given type, how frequently hypotheses like it would be true. I think it's a problematic idea, but my point was just to illustrate that some large items are being misrepresented here, and people sold a wrongheaded view. Just in case anyone cares. Sorry to interrupt the conversation (errorstatistics.com)

Replies from: Cyan
comment by Cyan · 2013-09-30T00:24:49.716Z · LW(p) · GW(p)

Do you intend to be replying to me or to Tyrrell McAllister?

comment by PhilGoetz · 2010-02-27T05:47:54.428Z · LW(p) · GW(p)

What does that mean for frequentist statistical inference? Well, it's forbidden to assign probabilities to anything that is deterministic in your model of reality.

Wait - Bayesians can assign probabilities to things that are deterministic? What does that mean?

What would a Bayesian do instead of a T-test?

Replies from: wnoise
comment by wnoise · 2010-02-27T10:34:45.390Z · LW(p) · GW(p)

Wait - Bayesians can assign probabilities to things that are deterministic? What does that mean?

Absolutely!

The Bayesian philosophy is that probabilities are about states of knowledge. Probability is reasoning with incomplete information, not about whether an event is "deterministic"; probabilities still make sense in a completely deterministic universe. In a poker game, there are almost surely no quantum events influencing how the deck is shuffled. Classical mechanics, which is deterministic, suffices to predict the ordering of the cards. Even so, we have neither sufficient initial conditions (on all the particles in the dealer's body and brain, and any incoming signals) nor the computational power to calculate the ordering of the cards. In this case, we can still use probability theory to figure out the probabilities of various hand combinations to guide our betting. Incorporating knowledge of what cards I've been dealt, and what (if any) are public, is straightforward. Incorporating players' actions and reactions is much harder, and not well enough defined that there is a mathematically correct answer, but clearly we should use that knowledge in determining what types of hands we think our opponents are likely to have. If we count the shuffles as the dealer makes them, and see he only shuffled three or four times, in principle we can use the correlations left in the deck (given a reasonable mathematical model of shuffling, such as the one Diaconis constructed to give the result that 7 shuffles are needed to randomize a deck) to give us even more clues about opponents' likely hands.

What would a Bayesian do instead of a T-test?

In most cases we'd step back, and ask what you were trying to do, such that a T-test seemed like a good idea.

For those unaware, a t-test is a way of calculating the "likelihood" of the null hypothesis, in the sense of measuring how likely the data are given that model. If the data are even moderately compatible, Frequentists say "we can't reject it". If the data are terribly unlikely, Frequentists say the null hypothesis can be rejected -- that it's worth looking at another model.

From a Bayesian perspective, this is somewhat backwards -- we don't really care how likely the data are given this model, P(D|M) -- after all, we actually got the data. We effectively want to know how useful the model is, now that we know this data. Some simple consistency requirements and scaling constraints mean that this usefulness has to act just like a probability. So let's just call it the probability of the model, given the data: P(M|D). A small bit of algebra gives us that P(M|D) = P(D|M) * P(M)/P(D), where P(D) is the sum over all models i of P(D|M_i) P(M_i), and P(M_i) is some "prior probability" of each model -- how useful we think that model would be, even without any data collected (but, importantly, with some background knowledge).
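Here is a minimal sketch of that calculation for two made-up models of a possibly biased coin; the models, priors, and data are all invented for illustration.

```python
from math import comb

def likelihood(p_heads, heads, flips):
    """P(D|M): binomial probability of the observed flips under a model."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# Candidate models M_i with prior probabilities P(M_i).
models = {"fair (p=0.5)": (0.5, 0.5), "biased (p=0.8)": (0.8, 0.5)}
heads, flips = 14, 20  # the observed data D

# P(D) = sum over models of P(D|M_i) P(M_i)
p_data = sum(likelihood(p, heads, flips) * prior
             for p, prior in models.values())

for name, (p, prior) in models.items():
    p_model_given_data = likelihood(p, heads, flips) * prior / p_data
    print(f"P({name} | D) = {p_model_given_data:.3f}")
# -> roughly 0.25 for the fair model and 0.75 for the biased one.
```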

In this framework, we don't have absolute objective levels of confidence in our theories. All that is absolute and objective is how the data should change our confidence in various theories. We can't just reject a theory if the data don't match well, unless we have a better alternative theory to which we can switch. In many cases the models can be continuously indexed, so that the index corresponds to a parameter in a unified model; then this becomes parameter estimation -- we get a range of theories with probability densities instead of probabilities, or equivalently, one theory with a probability density on a parameter, and getting new data mechanically turns a crank to give us a new probability density on this parameter.

There are a couple of unsatisfying bits here:
First, it really would be nice to say "this theory is ridiculous because it doesn't explain the data" without any reference to any other theory. But if we know it's the only theory in town, we don't have a choice. If it's not the only theory in town, then how useful it is can really only coherently be measured relative to how useful other theories are.
Second, we need to give "prior probabilities" to our various theories, and the math doesn't give any direct justification for what these should be. However, as long as these aren't crazy, the incoming data will continuously update them, so that the ones that seem more useful will get weighted as more useful, and the ones that aren't will get weighted as less useful. This of course means we need a reasonable space of theories to work over, and we'll only pick a good model if there is a good model in this space. If you eventually realize that "hey, all these models are crappy", there is no good way of expanding the set of models you're willing to consider, though a common way is to just "start over" with an expanded model space and reallocate prior probabilities. You can't just pretend that the first analysis was over a subset of this analysis, because the rescaling due to the P(D) term depends on the set of models you have. (Though you can handwave that you weren't actually calculating P(M_i|D), but P(M_i|D, {M}), the probability of each model given the data, assuming that it was one of these models.)

A sometimes useful shortcut, rather than working directly with the probabilities (and hence needing the rescaling), is to work with the likelihoods (or, more tractably, their logs). The difference of the log-likelihoods of two different theories on some data is a reasonable measure of how much that data should affect their relative ranking. But any given likelihood by itself hasn't much meaning -- only comparison with the rest of a set tells you anything useful.
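Continuing the made-up coin example from the sketch above: the difference of log-likelihoods is meaningful on its own, while either log-likelihood in isolation is not.

```python
import math

def log_likelihood(p_heads, heads, flips):
    # The binomial coefficient is shared by every model, so it cancels
    # in differences and can be dropped.
    return heads * math.log(p_heads) + (flips - heads) * math.log(1 - p_heads)

ll_fair = log_likelihood(0.5, 14, 20)
ll_biased = log_likelihood(0.8, 14, 20)
# A positive difference means the data favor the biased model;
# here by about 1.08 nats, a likelihood ratio of about 3:1.
print(ll_biased - ll_fair)
```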

Replies from: Cyan
comment by Cyan · 2010-02-27T13:35:12.068Z · LW(p) · GW(p)

Very nice! I'd only replace "useful" with "plausible". (Sure, it's hard to define plausibility, but usefulness is not really the right concept.)

Replies from: wnoise
comment by wnoise · 2010-02-27T19:19:00.837Z · LW(p) · GW(p)

"Usefulness" certainly isn't the orthodox Bayesian phrasing. I call myself a Bayesian because I recognize that Bayes's Rule is the right thing to use in these situations. Whether or not the probabilities assigned to hypotheses "actually are" probabilities (whatever that means), they should obey the same mathematical rules of calculation as probabilities.

But precisely because only the manipulation rules matter, I'm not sure it is worth emphasizing that "to be a good Bayesian" you must accord these probabilities the same status as other probabilities. A hardcore Frequentist is not going to be comfortable doing that. Heck, I'm not sure I'm comfortable doing that. Data and event probabilities are things that can eventually be "resolved" to true or false, by looking after the fact. Probability as plausibility makes sense for these things.

But for hypotheses and models, I ask myself "plausibility of what? Being true?" Almost certainly, the "real" model (when that even makes sense) isn't in our space of models. For example, a common, almost necessary, assumption is exchangeability: that any given permutation of the data is equally likely -- effectively that all data points are drawn from the same distribution. Data often doesn't behave like that, instead having a time drift. Coins being tossed develop wear, cards being shuffled and dealt get bent.

I really do prefer to think of some models as being more or less useful. Of course, following this path shades into decision theory: we might want to assign priors according to how "tractable" the models are, both in specification and in computation. Stupid models that just specify outright what the data will be take lots of specification, so they should have lower initial probabilities. Models that take longer to compute data probabilities should similarly have a probability penalty, not simply because they're implausible, but because we don't want to use them unless the data force us to.

Replies from: Douglas_Knight, Cyan, wedrifid
comment by Douglas_Knight · 2010-02-27T23:25:03.074Z · LW(p) · GW(p)

...shades into decision theory...Models that take longer to compute data probabilities should similarly have a probability penalty, not simply because they're implausible, but because we don't want to use them unless the data force us to.

Whoa! that sounds dangerous! Why not keep the beliefs and costs separate and only apply this penalty at the decision theory stage?

Replies from: wnoise
comment by wnoise · 2010-02-27T23:32:36.516Z · LW(p) · GW(p)

Well, I did say it shaded into decision theory...

Yes, it absolutely is dangerous, and thinking about it more I agree it should not be done this way. Probability penalties do not scale correctly with the data collected: they're essentially just a fixed offset. The utility of using a particular method really is a different matter. If a method is unusable, we shouldn't use it, and whether to trade accuracy for manageability should be decided at that level, once we can judge the accuracy -- not earlier.

EDIT: I suppose I was hoping for a valid way of justifying the fact that we throw out models that are too hard to use or analyze -- they never make it into our set of hypotheses in the first place. It's amazing how often conjugate priors "just happen" to be chosen...

comment by Cyan · 2010-02-27T20:20:08.195Z · LW(p) · GW(p)

But for hypotheses and models, I ask myself "plausibility of what? Being true?"

Plausibility of being true given the prior information. Just as Aristotelian logic gives valid arguments (but not necessarily sound ones), Bayes's theorem gives valid but not necessarily sound plausibility assessments.

following this path shades into decision theory

That's pretty much why I wanted to make the distinction between plausibility and usefulness. One of the things I like about the Cox-Jaynes approach is that it cleanly splits inference and decision-making apart.

Replies from: wnoise
comment by wnoise · 2010-02-27T21:12:06.726Z · LW(p) · GW(p)

Plausibility of being true given the prior information.

Okay, sure we can go back to the Bayesian mantra of "all probabilities are conditional probabilities". But our prior information effectively includes the statement that one of our models is the "true one". And that's never the actual case, so our arguments are never sound in this sense, because we are forced to work from prior information that isn't true. This isn't a huge problem, but it in some sense undermines the motivation for finding these probabilities and treating them seriously -- they're conditional probabilities being applied in a case where we know that what is being conditioned on is false. What is the grounding to our actual situation? I like to take the stance that in practice this is still useful -- as an approximation procedure -- sorting through models that are approximately right.

Replies from: Cyan
comment by Cyan · 2010-02-27T22:53:11.472Z · LW(p) · GW(p)

And that's never the actual case, so our arguments are never sound in this sense, because we are forced to work from prior information that isn't true.

One does generally resort to non-Bayesian model checking methods. Andrew Gelman likes to include such checks under the rubric of "Bayesian data analysis"; he calls the computing of posterior probabilities and densities "Bayesian inference", a preceding subcomponent of Bayesian data analysis. This makes for sensible statistical practice, but the underpinnings aren't strong. One might consider it an attempt to approximate the Solomonoff prior.

Replies from: wnoise
comment by wnoise · 2010-02-28T07:31:41.631Z · LW(p) · GW(p)

Yes, in practice people resort to less motivated methods that work well.

I'd really like to see some principled answer that has the same feel as Bayesianism though. As it stands, I have no problem using Bayesian methods for parameter estimation. This is natural because we really are getting pdf(parameters | data, model). But for model selection and evaluation (i.e. non-parametric Bayes) I always feel that I need an "escape hatch" to include new models that the Bayes formalism simply doesn't have any place for.

Replies from: Cyan
comment by Cyan · 2010-02-28T14:56:58.922Z · LW(p) · GW(p)

I feel the same way.

comment by wedrifid · 2010-02-27T23:12:08.458Z · LW(p) · GW(p)

Models that take longer to compute data probabilities should similarly have a probability penalty, not simply because they're implausible, but because we don't want to use them unless the data force us to.

I am much more comfortable leaving probability as it is but using a different term for usefulness.

comment by nazgulnarsil · 2010-02-26T21:31:13.928Z · LW(p) · GW(p)

the tendency to think of probabilities as inherent properties of objects.

yeah, this was my intuitive reason for thinking frequentists are a little crazy.

Replies from: byrnema
comment by byrnema · 2010-02-26T22:47:05.618Z · LW(p) · GW(p)

On the other hand, it's evidence to me that we're talking about different types of minds. Have we identified whether this aspect of frequentism is a choice, or just the way their minds work?

I'm a frequentist, I think, and when I interrogate my intuition about whether 50% heads / 50% tails is a property of a fair coin, it returns 'yes'. However, I understand that this property is an abstract one, and my intuition doesn't make any different empirical predictions about the coin than a Bayesian would. Thus, what difference does it make if I find it natural to assign this property?

In other words, in what (empirically measurable!) sense could it be crazy?

Replies from: wnoise
comment by wnoise · 2010-02-26T23:10:56.653Z · LW(p) · GW(p)

http://comptop.stanford.edu/preprints/heads.pdf

Well, the immediate objection is that if you hand the coin to a skilled tosser, the frequencies of heads and tails in the tosses can be markedly different from 50%. If you put this probability in the coin, then you really aren't modeling things in a manner that accords with results. You can, of course, talk instead about a procedure of coin-tossing, which naturally has to specify the coin as well.

Of course, that merely pushes things back a level. If you completely specify the tossing procedure (people have built coin-tossing machines), then you can repeatedly get 100%/0% splits by careful tuning. If you don't know whether it is tuned to 100% heads or 100% tails, is it still useful to describe this situation probabilistically? A hard-core Frequentist "should" say no, as everything is deterministic. Most people are willing to allow that 50% probability is a reasonable description of the situation. To the extent that you do allow this, you are Bayesian. To the extent that you don't, you're missing an apparently valuable technique.

Replies from: byrnema
comment by byrnema · 2010-02-27T01:15:43.710Z · LW(p) · GW(p)

The frequentist can account for the biased toss and determinism, in various ways.

My preferred reply would be that the 50/50 is a property of the symmetry of the coin. (Of course, it's a property of an idealized coin. Heck, a real coin can land balanced on its edge.) If someone tosses the coin in a way that biases the coin, she has actually broken the symmetry in some way with her initial conditions. In particular, the tosser must begin with the knowledge of which way she is holding the coin -- if she doesn't know, she can't bias the outcome of the coin.

I understand that Bayesians don't tend to abstract things to their idealized forms ... I wonder to what extent Frequentism does this necessarily. (What is the relationship between Frequentism and Platonism?)

Replies from: wnoise, Blueberry
comment by wnoise · 2010-02-27T01:55:12.901Z · LW(p) · GW(p)

The frequentist can account for these things, in various ways.

Oh, absolutely. The typical way is choosing some reference class of idealized experiments that could be done. Of course, the right choice of reference class is just as arbitrary as the right choice of Bayesian prior.

My preferred reply would be that the 50/50 is a property of the symmetry of the coin.

Whereas the Bayesian would argue that the 50/50 property is a symmetry about our knowledge of the coin -- even a coin that you know is biased, but that you have no evidence for which way it is biased.

I understand that Bayesians don't tend to abstract things to their idealized forms

Well, I don't think Bayesians are particularly reluctant to look at idealized forms, it's just that when you can make your model more closely match the situation (without incurring horrendous calculational difficulties) there is a benefit to do so.

And of course, the question is "which idealized form?" There are many ways to idealize almost any situation, and I think talking about "the" idealized form can be misleading. Talking about a "fair coin" is already a serious abstraction and idealization, but it's one that has, of course, proven quite useful.

I wonder to what extent Frequentism does this necessarily. (What is the relationship between Frequentism and Platonism?)

That's a very interesting question.

comment by Blueberry · 2010-02-27T08:44:42.818Z · LW(p) · GW(p)

What is the relationship between Frequentism and Platonism?

To quote from Gelman's rejoinder that Phil Goetz mentioned,

In a nutshell: Bayesian statistics is about making probability statements, frequentist statistics is about evaluating probability statements.

So, speaking very loosely, Bayesianism is to science, inductive logic, and Aristotelianism as frequentism is to math, deductive logic, and Platonism. That is, Bayesianism is synthesis; frequentism is analysis.

Replies from: byrnema
comment by byrnema · 2010-02-27T13:42:35.414Z · LW(p) · GW(p)

Interesting! That makes a lot of sense to me, because I had already made connections between science and Aristotelianism, pure math and Platonism.

comment by Blueberry · 2010-02-26T16:49:58.900Z · LW(p) · GW(p)

This and this might be the kind of thing you're looking for.

Though the conflict really only applies in the artificial context of a math problem. Frequentism is more like a special case of Bayesianism where you're making certain assumptions about your priors, sometimes specifically stated in the problem, for ease of calculation. For instance, in a frequentist analysis of coin flips, you might ignore all your prior information about coins, and assume the coin is fair.
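A rough sketch of that contrast (my own illustration, with invented data): first fix the bias by assumption and ask how surprising the data are; then make the assumption explicit as a prior over the bias and update it.

```python
from scipy import stats

heads, flips = 14, 20  # invented data

# Fixed-assumption analysis: the coin is fair, full stop; ask how
# surprising the observed count is under that hypothesis.
p_value = stats.binomtest(heads, flips, p=0.5).pvalue
print(f"p-value under the fair-coin assumption: {p_value:.3f}")

# Bayesian analysis: a uniform Beta(1, 1) prior over the bias is also
# an assumption, just an explicit one that the data then update.
posterior = stats.beta(1 + heads, 1 + flips - heads)
lo, hi = posterior.interval(0.9)
print(f"posterior mean bias: {posterior.mean():.3f}")   # ~0.68
print(f"90% credible interval: ({lo:.3f}, {hi:.3f})")
```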

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-02-26T17:22:37.481Z · LW(p) · GW(p)

thanks, that's what I was looking for. would it be correct to say that in the frequentist interpretation your confidence interval narrows as your trials approach infinity?

Replies from: wnoise
comment by wnoise · 2010-02-26T18:17:31.950Z · LW(p) · GW(p)

That is a highly desired property of Frequentist methods, but it's not guaranteed by any means.

comment by bill · 2010-02-28T01:25:08.183Z · LW(p) · GW(p)

If it helps, I think this is an example where the two approaches give different answers to the same problem. From Jaynes; see http://bayes.wustl.edu/etj/articles/confidence.pdf , page 22 for the details, and please let me know if I've erred or misinterpreted the example.

Three identical components. You run them through a reliability test and they fail at times 12, 14, and 16 hours. You know that these components fail in a particular way: they last at least X hours, and beyond that point their additional lifetime follows what you assess as an exponential distribution with an average of 1 hour. What is the shortest 90% confidence interval / probability interval for X, the time of guaranteed safe operation?

Frequentist 90% confidence interval: 12.1 hours - 13.8 hours

Bayesian 90% probability interval: 11.2 hours - 12.0 hours

Note: the frequentist interval has the strange property that we know for sure it does not contain X (from the data we know that X <= 12). The Bayesian interval seems to match our common sense better.
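Here is a sketch that reproduces both intervals numerically. It is my reconstruction from the setup above (flat prior for the Bayesian computation, the mean-minus-one estimator for the frequentist one), using scipy for the Gamma sampling distribution.

```python
import numpy as np
from scipy import stats

data = np.array([12.0, 14.0, 16.0])
n, x_min, x_bar = len(data), data.min(), data.mean()

# Bayesian: with a flat prior the posterior is n * exp(n * (theta - x_min))
# for theta <= x_min, so the shortest 90% interval hugs x_min.
lo_bayes = x_min + np.log(1 - 0.9) / n
print(f"Bayesian 90% interval: [{lo_bayes:.2f}, {x_min:.2f}]")  # [11.23, 12.00]

# Frequentist: the unbiased estimator is x_bar - 1, and y = x_bar - theta
# follows a Gamma(n, rate=n) sampling distribution. Scan for the shortest
# [y1, y2] with 90% coverage, then invert to an interval for theta.
y = stats.gamma(a=n, scale=1 / n)
p_low = np.linspace(0.0, 0.0999, 10_000)
y1, y2 = y.ppf(p_low), y.ppf(p_low + 0.9)
shortest = np.argmin(y2 - y1)
print(f"Frequentist 90% CI: [{x_bar - y2[shortest]:.2f}, "
      f"{x_bar - y1[shortest]:.2f}]")
# Close to the interval quoted above -- and entirely above 12, even though
# the data already tell us X cannot exceed 12.
```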

Replies from: cupholder, nazgulnarsil, Jordan
comment by cupholder · 2010-02-28T13:28:46.927Z · LW(p) · GW(p)

Heh, that's a cheeky example. To explain why it's cheeky, I have to briefly run through it, which I'll do here (using Jaynes's symbols so whoever clicked through and has pages 22-24 open can directly compare my summary with Jaynes's exposition).

Call N the sample size and θ the minimum possible widget lifetime (what bill calls X). Jaynes first builds a frequentist confidence interval around θ by defining the unbiased estimator θ∗, which is the observations' mean minus one. (Subtracting one accounts for the sample mean having expectation θ + 1.) θ∗'s probability distribution turns out to be proportional to y^(N-1) exp(-Ny), where y = θ∗ - θ + 1. Note that y is essentially a measure of how far our estimator θ∗ is from the true θ, so Jaynes now has a pdf for that. Jaynes integrates that pdf to get y's cdf, which he calls F(y). He then makes the 90% CI by computing the shortest [y1, y2] such that F(y2) - F(y1) = 0.9. That gives [0.1736, 1.8529]. Substituting in N and θ∗ for the sample and a little algebra (to get a CI corresponding to θ∗ rather than y) gives his θ CI of [12.1471, 13.8264].

For the Bayesian CI, Jaynes takes a constant prior, then jumps straight to the posterior being N exp(N(θ - x1)), where x1's the smallest lifetime in the sample (12 in this case). He then comes up with the smallest interval that encompasses 90% of the posterior probability, and it turns out to be [11.23, 12].

Jaynes rightly observes that the Bayesian CI accords with common sense, and the frequentist CI does not. This comparison is what feels cheeky to me.

Why? Because Jaynes has used different estimators for the two methods [edit: I had previously written here that Jaynes implicitly used different estimators, but this is actually false; when he discusses the example subsequently (see p. 25 of the PDF) he fleshes out this point in terms of sufficient v. non-sufficient statistics.]. For the Bayesian CI, Jaynes effectively uses the minimum lifetime as his estimator for θ (by defining the likelihood to be solely a function of the smallest observation, instead of all of them), but for the frequentist CI, he explicitly uses the mean lifetime minus 1. If Jaynes-as-frequentist had happened to use the maximum likelihood estimator -- which turns out to be the minimum lifetime here -- instead of an arbitrary unbiased estimator he would've gotten precisely the same result as Jaynes-as-Bayesian.

So it seems to me that the exercise just demonstrates that Bayesianism-done-slyly outperformed frequentism-done-mindlessly. I can imagine that if I had tried to do the same exercise from scratch, I would have ended up faux-proving the reverse: that the Bayesian CI was dumber than the frequentist's. I would've just picked up a boring, old-fashioned, not especially Bayesian reference book to look up the MLE, and used its sampling distribution to get my frequentist CI: that would've given me the common sense CI [11.23, 12]. Then I'd construct the Bayesian CI by mechanically defining the likelihood as the product of the individual observations' likelihoods. That last step, I am pretty sure but cannot immediately prove, would give me a crappy Bayesian CI like [12.1471, 13.8264], if not that very interval.

Ultimately, at least in this case, I reckon your choice of estimator is far more important than whether you have a portrait of Bayes or Neyman on your wall.

[Edited to replace my asterisks with ∗ so I don't mess up the formatting.]

Replies from: wnoise, Cyan
comment by wnoise · 2010-02-28T18:33:08.797Z · LW(p) · GW(p)

So it seems to me that the exercise just demonstrates that Bayesianism-done-slyly outperformed frequentism-done-mindlessly.

This example really is Bayesianism-done-straightforwardly. The point is that you really don't need to be sly to get reasonable results.

For the Bayesian CI, Jaynes takes a constant prior, then jumps straight to the posterior being N exp(N(θ - x1))

A constant prior ends up using only the likelihoods. The jump straight to the posterior is a completely mechanical calculation, just products, and normalization.

Then I'd construct the Bayesian CI by mechanically defining the likelihood as the product of the individual observations' likelihoods.

Each individual likelihood is zero for x < θ. This means the product is also zero whenever the smallest observation satisfies x1 < θ. You will get out the same pdf as Jaynes. CIs can be constructed in many ways from pdfs, but constructing the smallest one will give you the same one as Jaynes.

EDIT: for full effect, please do the calculation yourself.

Replies from: Cyan
comment by Cyan · 2010-02-28T18:43:52.765Z · LW(p) · GW(p)

I stopped reading cupholder's comment before the last paragraph (to write my own reply) and completely missed this! D'oh!

comment by Cyan · 2010-02-28T15:05:01.143Z · LW(p) · GW(p)

Jaynes does go on to discuss everything you have pointed out here. He noted that confidence intervals had commonly been held not to require sufficient statistics, pointed out that some frequentist statisticians had been doubtful on that point, and remarked that if the frequentist estimator had been the sufficient statistic (the minimum lifetime) then the results would have agreed. I think the real point of the story is that he ran through the frequentist calculation for a group of people who did this sort of thing for a living and shocked them with it.

Replies from: cupholder
comment by cupholder · 2010-02-28T16:09:40.765Z · LW(p) · GW(p)

You got me: I didn't read the what-went-wrong subsection that follows the example. (In my defence, I did start reading it, but rolled my eyes and stopped when I got to the claim that "there must be a very basic fallacy in the reasoning underlying the principle of confidence intervals".)

I suspect I'm not the only one, though, so hopefully my explanation will catch some of the eyeballs that didn't read Jaynes's own post-mortem.

[Edit to add: you're almost certainly right about the real point of the story, but I think my reply was fair given the spirit in which it was presented here, i.e. as a frequentism-v.-Bayesian thing rather than an orthodox-statisticians-are-taught-badly thing.]

Replies from: Cyan
comment by Cyan · 2010-02-28T17:36:12.910Z · LW(p) · GW(p)

I think my reply was fair...

Independently reproducing Jaynes's analysis is excellent, but calling him "cheeky" for "implicitly us[ing] different estimators" is not fair given that he's explicit on this point.

....given the spirit in which it was presented here, i.e. as a frequentism-v.-Bayesian thing rather than an orthodox-statisticians-are-taught-badly thing.

It's a frequentism-v.-Bayesian thing to the extent that correct coverage is considered a sufficient condition for good frequentist statistical inference. This is the fallacy that you rolled your eyes at; the room full of shocked frequentists shows that it wasn't a strawman at the time. [ETA: This isn't quite right. The "v.-Bayesian" part comes in when correct coverage is considered a necessary condition, not a sufficient condition.]

ETA:

I suspect I'm not the only one, though, so hopefully my explanation will catch some of the eyeballs that didn't read Jaynes's own post-mortem.

This is a really good point, and it makes me happy that you wrote your explanation. For people for whom Jaynes's phrasing gets in the way, your phrasing bypasses the polemics and lets them see the math behind the example.

Replies from: cupholder
comment by cupholder · 2010-02-28T19:03:32.421Z · LW(p) · GW(p)

Independently reproducing Jaynes's analysis is excellent, but calling him "cheeky" for "implicitly us[ing] different estimators" is not fair given that he's explicit on this point.

I was wrong to say that Jaynes implicitly used different estimators for the two methods. After the example he does mention it, a fact I missed due to skipping most of the post-mortem. I'll edit my post higher up to fix that error. (That said, at the risk of being pedantic, I did take care to avoid calling Jaynes-the-person cheeky. I called his example cheeky, as well as his comparison of the frequentist CI to the Bayesian CI, kinda.)

It's a frequentism-v.-Bayesian thing to the extent that correct coverage is considered a sufficient condition for good frequentist statistical inference. This is the fallacy that you rolled your eyes at; the room full of shocked frequentists shows that it wasn't a strawman at the time. [ETA: This isn't quite right. The "v.-Bayesian" part comes in when correct coverage is considered a necessary condition, not a sufficient condition.]

When I read Jaynes's fallacy claim, I didn't interpret it as saying that treating coverage as necessary/sufficient was fallacious; I read it as arguing that the use of confidence intervals in general was fallacious. That was what made me roll my eyes. [Edit to clarify: that is, I was rolling my eyes at what I felt was a strawman, but a different one to the one you have in mind.] Having read his post-mortem fully and your reply, I think my initial, eye-roll-inducing interpretation was incorrect, though it was reasonable on first read-through given the context in which the "fallacy" statement appeared.

Replies from: Cyan
comment by Cyan · 2010-02-28T19:06:12.110Z · LW(p) · GW(p)

I did take care to avoid calling Jaynes-the-person cheeky.

Fair point.

comment by nazgulnarsil · 2010-02-28T09:39:57.218Z · LW(p) · GW(p)

excellent paper, thanks for the link.

comment by Jordan · 2010-02-28T01:39:30.328Z · LW(p) · GW(p)

My intuition would be that the interval should be bounded above by 12 - epsilon, since it seems unlikely (probability zero?) that we got one component that failed at exactly the theoretically fastest possible time.

Replies from: Cyan, JGWeissman
comment by Cyan · 2010-02-28T02:22:47.385Z · LW(p) · GW(p)

You can treat the interval as open at 12.0 if you like; it makes no difference.

comment by JGWeissman · 2010-02-28T02:13:15.780Z · LW(p) · GW(p)

If by epsilon, you mean a specific number greater than 0, the only reason to shave off an interval of length epsilon from the high end of the confidence interval is if you can get the probability contained in that epsilon-length interval back from a smaller interval attached to the low end of the confidence interval. (I haven't worked through the math, and the pdf link is giving me "404 not found", but presumably this is not the case in this problem.)

Replies from: Cyan, Jordan
comment by Cyan · 2010-02-28T02:20:14.128Z · LW(p) · GW(p)

The link's a 404 because it includes a comma by accident -- here's one that works: http://bayes.wustl.edu/etj/articles/confidence.pdf.

comment by Jordan · 2010-02-28T03:40:19.771Z · LW(p) · GW(p)

Thanks, that makes sense, although it still butts up closely against my intuition.

comment by PhilGoetz · 2010-02-27T06:26:59.691Z · LW(p) · GW(p)

Andrew Gelman wrote a parody of arguments against Bayesianism here. Note that he says that you don't have to choose Bayesianism or frequentism; you can mix and match.

I'd be obliged if someone would explain this paragraph, from his response to his parody:

• “Why should I believe your subjective prior? If I really believed it, then I could just feed you some data and ask you for your subjective posterior. That would save me a lot of effort!”: I agree that this criticism reveals a serious incoherence with the subjective Bayesian framework, as well as with the classical utility theory of von Neumann and Morgenstern (1947), which simultaneously demands that an agent can rank all outcomes a priori and expects that he or she will make utility calculations to solve new problems. The resolution of this criticism is that Bayesian inference (and also utility theory) are ideals or aspirations as much as they are descriptions. If there is serious disagreement between your subjective beliefs and your calculated posterior, then this should send you back to re-evaluate your model.

comment by madair · 2010-02-26T13:08:48.403Z · LW(p) · GW(p)

Nice explanation. My only concern is with the opening statement about "aiming low". It makes it difficult to send this article to people without them justifiably rejecting it out of hand as a patronizing act. When the intention to aim low is truly noble, perhaps it is just as accurately described as writing clearly, writing for non-experts, or maybe even just writing an "introduction".

Replies from: Kaj_Sotala, madair
comment by Kaj_Sotala · 2010-02-26T15:13:04.919Z · LW(p) · GW(p)

Good point. I changed "to aim low" to "to summarize basic material".

comment by madair · 2010-02-26T13:11:12.431Z · LW(p) · GW(p)

And besides, as a software developer with plenty of Bayesian theory behind me, I appreciate the simplicity of the article for the clarity it provides me. Thanks for "aiming low" ;-)

comment by Blueberry · 2010-02-26T16:39:53.050Z · LW(p) · GW(p)

Great, great post. I like that it's more qualitative and philosophical than quantitative, which really makes it clear how to think like a Bayesian. Though I know the math is important, having this kind of intuitive, qualitative understanding is very useful for real life, when we don't have exact statistics for so many things.

comment by timtyler · 2010-02-26T18:08:13.355Z · LW(p) · GW(p)

Re: "Core tenet 1: For any given observation, there are lots of different reasons that may have caused it."

This seems badly phrased. It is normally previous events that cause observations. It is not clear what it means for a reason to cause something.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-26T19:26:51.573Z · LW(p) · GW(p)

Good point. That sentence structure was a carryover from Finnish, where you can say that reasons cause things.

Would "Any given observation has many different possible causes" be better?

Replies from: Morthrod
comment by Morthrod · 2010-02-26T22:55:59.340Z · LW(p) · GW(p)

Yes, that would be better.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-27T14:50:08.945Z · LW(p) · GW(p)

Changed.

comment by Jack · 2010-02-26T09:30:50.079Z · LW(p) · GW(p)

I don't know if it belongs here or in a separate post but afaik there is no explanation of the Dutch book argument on Less Wrong. It seems like there should be. Just telling people that structuring your beliefs according to Bayes Theorem will make them accurate might not do the trick for some. The Dutch book argument makes it clear why you can't just use any old probability distribution.

Replies from: Kaj_Sotala, wedrifid
comment by Kaj_Sotala · 2010-02-26T10:28:17.485Z · LW(p) · GW(p)

I thought about whether to include a Dutch Book discussion in this post, but felt it would have been too long and not as "deep core" as the other stuff. More like "supporting core". But yes, it would be good to have a discussion of that up on LW somewhere.

comment by wedrifid · 2010-02-26T10:22:09.980Z · LW(p) · GW(p)

I don't know if it belongs here or in a separate post but afaik there is no explanation of the Dutch book argument on Less Wrong. It seems like there should be.

Post away! (Then have someone add a link and summary of your post in the wiki.)

Replies from: Jack
comment by Jack · 2010-02-26T21:12:31.636Z · LW(p) · GW(p)

I'm on it.

comment by MatthewB · 2010-02-27T07:43:49.998Z · LW(p) · GW(p)

Thanks Kaj,

As I stated in my last post, reading LW often gives me the feeling that I have read something very important, yet I often don't immediately know why what I just read should be important until I have some later context in which to place the prior content.

Your post just gave me the context in which to make better sense of all of the prior content on Bayes here on LW.

It doesn't hurt that I have finally dipped my toes in the Bayesian Waters of Academia in an official capacity with a Probability and Stats class (which seems to be a prerequisite for so many other classes). The combined information from school and the content here have helped me to get a leg up on the other students in the usage of Bayesian Probability at school.

I am just lacking one bit in order to fully integrate Bayes into my life: How to use it to test my beliefs against reality. I am sure that this will come with experience.

comment by MrHen · 2010-02-26T18:49:56.277Z · LW(p) · GW(p)

Possible typo:

A theory about the laws of physics governing the motion of planets, devised by Sir Isaac Newton, or a theory simply stating that the Flying Spaghetti Monster pushes the planets forward>s< with His Noodly Appendage.

In the spirit of aiming low, I don't think you aimed nearly low enough. If I hadn't already read a small amount from the sequences I wouldn't have been able to pick up too much from this article. This reads as a great summary; I am not convinced it is a good explanation.

The rest of this comment is me saying the above in more detail. Do note that this is my perspective. Even a newb such as myself has been tainted with enough keywords to begin inferring details that are not explicitly mentioned. This critique is massively excessive compared to the quality of the work. This means that you did a good job but I went all pesky-picky on you anyway.


You've probably seen the word 'Bayesian' used a lot on this site, but may be a bit uncertain of what exactly we mean by that. You may have read the intuitive explanation, but that only seems to explain a certain math formula.

I don't know which is a more successful way to talk to people: Using "you" or not using "you." Rephrasing those two sentences without the word, "You:"

The word "Bayesian" is used a lot on this site but it is a difficult concept to fully grasp. There is an intuitive explanation, but it focuses on the math behind the concept.

And so on. What I like about your opening:

  • Links to previous descriptions
  • Lets the reader know that the Bayesian concept is deeper than math. Math is at the core but for people who are scared of Math another way to think about the subject is possible.
  • Notes that the concept is difficult to understand because it is difficult to understand, not because the reader is an idiot

Things I don't like:

  • As much as I like the intuitive explanation, starting with Math is bad for people scared of math. Even bringing it up can shut them into a, "Oh no, I won't be able to understand this," mode. I don't know if there is a better way to say what needs to be said, however.
  • "You," in this case, is a little patronizing. Not a big deal; just a minor point.
  • Too defensive. The first couple paragraphs are trying to convince the LessWrong crowd that this explanation is needed. That is good, but the final edit should probably leave it out. The intended audience is much, much lower than that.
  • There is no mention of the Simple Truth or an equivalent starting ground. This may not be needed, but it sure helped me get into the right mindset when starting the sequences.

We'll start with a brief example illustrating Bayes' theorem. Suppose you are a doctor, and a patient comes to you, complaining about a headache. Further suppose that there are two reasons why people get headaches: they might have a brain tumor, or they might have a cold. A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom of a cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

I would drop the term "Bayes' theorem" here. "We'll" is another example of, "You." This paragraph could be touched up a bit but I feel this is more me noticing that my writing style is different from yours.

I am not sold on this being a good first example. I like that it is something that most people will identify with, but the edge cases here are nuts:

  • There are more than two reasons for headaches
  • Do brain tumors always cause a headache?
  • I don't normally get headaches from colds and don't normally associate headaches with colds. When pondering why I have a headache, "colds" is pretty far down the list.
  • More than "exceedingly few" have brain tumors. A heck of a lot more people have colds but "exceedingly few" doesn't immediately translate into "more people have colds."
  • Is the type of headache from a brain tumor the same type of headache from a common cold? This doesn't matter to you, since you don't actually care about the details of the headache, but a reader may very well offer this suggestion as a solution to figuring out if the headache is from a brain tumor or a cold. People like to stick unnecessary details into examples because that is how they solve real-world examples. At this point in the article, they don't care about the example. They are imagining someone with a cold.

Given the chance, I would reword the paragraph as such:

A simple example can be found when someone asks a doctor why they have a headache. The doctor knows that a typical cold will only sometimes cause headaches. The doctor also knows that a brain tumor will almost always cause headaches. If the doctor compared these two causes and decided that it is more likely a brain tumor is at fault, then something went wrong. If you walked into a doctor's office complaining of a headache and were immediately diagnosed with a brain tumor, you would probably be a little suspicious. Bayes' theorem helps us explain what, exactly, went wrong and how to fix it. It uses math to do it, but the basic concept is easy to understand.


Do you want more of this? If not, I can stop now. If so, I can continue later. If you'd like something similar but much shorter and concise, I can do that too.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-26T19:23:30.045Z · LW(p) · GW(p)

This is excellent feedback; please, do go on.

I did wonder if this was still too short and not aiming low enough. I chose to go on the side of briefness, partially because I was worried about ending up with a giant mammoth post and partially because I felt I'd just be repeating what Eliezer's said before. But yeah, looking at it now, I'm not at all convinced of how well I'd have gotten the message if my pre-OB self had read this.

Interesting that you find the usage of "you" and "we" patronizing. I hadn't thought of it like that - I intended it as a way to make the post less formal and build a more comfortable atmosphere to the reader.

Your rewording sounds good: not exactly the way I'd put it, but certainly something to build on.

Hmm, what do people think - if we end up rewriting this, should I just edit this post? Or make an entirely new one? Perhaps keep this one as it is, but work the changes into a future one that's longer?

Replies from: MrHen, pjeby, wnoise, h-H
comment by MrHen · 2010-03-01T18:51:36.367Z · LW(p) · GW(p)

Continuing.

If you thought a cold was more likely, well, that was the answer I was after.

Part of the great danger in explaining a High topic is that people who haven't been able to understand High topics are super wary about looking like an idiot. Math is the most obvious High topic that people hate trying to understand. They would much rather admit to fearing math than trying and failing at understanding it.

This is sad, to me, because math isn't really that hard to understand. It is a daunting subject that never ends but the fundamentals are already understood by anyone who functions in society. They just never put all the pieces together with the right terms.

I am firmly convinced that the Way of Bayes is like this. The sequences are, for the most part, about subjects that could be easy to understand. They make intuitive sense. The details and the numbers are a pain, but the concept itself is something I could explain to nearly everyone I know. (So I think. I haven't actually tried yet.)

A sentence like the one I quoted above is one that will put a layperson on the defensive. This pushes Bayesianism into the realm of High topics: topics that are grasped by the Smart people; the intellectual elite. Asking them questions at all makes them realize they don't know the answer. This is scary. Immediately answering the question and telling them the answer should be obvious could easily make them feel awkward, even if they got the answer correct.

Articles explaining "obvious" things are often explaining not-obvious things and assume that you are following them each step of the way. These articles are full of trick questions and try to make you second guess yourself in an effort to show you what you do not know. This is scary and elitist to someone who has sold their own intelligence short.

Your example is so minor that most people wouldn't have a problem with it. I bring it up because I am picky. This is an example of aiming far, far too high. The audience at LessWrong reads a question/answer like this and enjoys it. They like learning they are wrong and revel in the introspection that follows as they chase down the error in the machine so they can fix it. A layperson dreads this. They think it means they are stupid and unable to understand. They fail at the competition of intelligence whether the competition actually exists or not.

Even if a brain tumor caused a headache every time, and a cold caused a headache only one per cent of the time (say), having a cold is so much more common that it's going to cause a lot more headaches than brain tumors do.

I think this belongs in the description of the example. You could even leave out the actual numbers because they only matter for the people that have the exact numbers. It takes too long to explain that you just made the numbers up because:

  • Every word is more processing that needs to be done
  • The intended audience is probably inexperienced at skimming these sorts of topics
  • The numbers really are irrelevant
  • Someone will disagree with the numbers and make a big stink about something that was irrelevant

Bayes' theorem, basically, says that if cause A might be the reason for symptom X, then we have to take into account both the probability that A caused X (found, roughly, by multiplying the frequency of A with the chance that A causes X) and the probability that anything else caused X. (For a thorough mathematical treatment of Bayes' theorem, see Eliezer's Intuitive Explanation.)

And... the layperson just zoned out. This is the big obstacle in trying to describe Bayesianism. Math scares people away. Even people who are good at math will glaze over when they see As and Xs and words like "probability." I have no idea how to get around this obstacle, honestly. Your attempt was solid. But I still think this is the paragraph where you will lose the lowest rung of your audience.
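(For what it's worth, the arithmetic being gestured at is tiny. A minimal sketch with made-up numbers - the exact figures are assumptions, not anything from the post:)

```python
# Headache example with made-up numbers (all of these are assumptions).
p_cold = 0.5                    # chance a random person catches a cold in a year
p_headache_given_cold = 0.01    # "a headache is rarely a symptom of a cold"
p_tumor = 0.0001                # "exceedingly few people have a brain tumor"
p_headache_given_tumor = 1.0    # "a brain tumor always causes a headache"

# P(cause and symptom) = frequency of cause * chance the cause produces the symptom
p_cold_and_headache = p_cold * p_headache_given_cold      # 0.005
p_tumor_and_headache = p_tumor * p_headache_given_tumor   # 0.0001

total = p_cold_and_headache + p_tumor_and_headache
print("P(cold | headache)  =", p_cold_and_headache / total)   # ~0.98
print("P(tumor | headache) =", p_tumor_and_headache / total)  # ~0.02
```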

There should be nothing surprising about that, of course.

What if they were surprised? What if their whole world reeled at the question of what causes headaches? What if, horrifically, they completely misunderstood the previous example and are currently pondering if their headache means they have a brain tumor?

If they are completely bewildered right now, telling them they shouldn't be surprised will make them feel dumb. Even if they are dumb, your article shouldn't make them feel dumb. It should make them feel smart.

Suppose you're outside, and you see a person running. They might be running for the sake of exercise, or they might be running because they're in a hurry somewhere, or they might even be running because it's cold and they want to stay warm. To figure out which one is the case, you'll try to consider which of the explanations is true most often, and fits the circumstances best.

I don't think this example clarifies much. A bullet list:

  • "they're in a hurry somewhere" sounds funny to me. Perhaps, "they're in a hurry to get somewhere" or "they're in a hurry" works better? This could be a style thing.
  • Running because it's cold will mean random things to random people. If I am outside and it's cold I don't think of running. I think of doing hard work like shoveling snow or simply going inside. The reason I bring this up is that every second someone thinks, "That's weird, why would you run outside if it's cold?" is a second that the points you made above get shoved further away from the points below.
  • To figure out which one is the case, people could think of (a) asking the runner (b) looking for more evidence. Judging which reason happens most often may not translate well. I didn't even attach this language to the headache on first read. If you know the answer you can see the relation but I am not confident that it is available for every reader.

More coming if you still want it. My lunch break is over. :)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-03-01T20:41:57.703Z · LW(p) · GW(p)

Very interesting. Actually, I didn't seek to aim that low - I was targeting the average LW reader (or at least an average person who was comfortable with maths). However, I still find this to be very valuable, since I have played around with the idea of trying to write a book that'd attempt to sell a lay audience (implicitly or explicitly) on the idea that maths and science, especially as applied to rationality and cognitive science, are actually fun.

So I probably won't alter the original article as a reaction to this, but if you want to nevertheless help me in figuring out how to reach that audience, do continue. :)

Replies from: MrHen
comment by MrHen · 2010-03-01T21:20:00.910Z · LW(p) · GW(p)

So I probably won't alter the original article as a reaction to this, but if you want to nevertheless help me in figuring out how to reach that audience, do continue. :)

Haha, will do. I do realize that some of what I am bringing up is extremely petty, but I have watched some of my articles get completely derailed by what I would consider an irrelevant point. Even amongst the high quality discussions in the comments I find myself needing to back up and ask a Really Obvious Question.

This is likely a fault in the way I communicate (which is accentuated online) and also a glitch where people are not willing/able to drop subjects that are bugging them. If I was fundamentally opposed to the claim that all brain tumors caused headaches I would feel compelled to point it out in the comments. (This compulsion is something I am trying to curb.)

In any case, I am glad the comments are helpful and I will continue as I find the time. If you ever start drafting something like what you mentioned I am willing to proofread and comment.

comment by pjeby · 2010-02-26T19:52:00.507Z · LW(p) · GW(p)

Interesting that you find the usage of "you" and "we" patronizing. I hadn't thought of it like that - I intended it as a way to make the post less formal and build a more comfortable atmosphere to the reader.

Using "you" is a two-edged sword; it can create greater intimacy with your audience, but only if you know your audience well enough, and don't mind polarizing your response, or are willing to limit yourself to hypotheticals (e.g. "if you walked into a doctor's office")

If you're less certain of your audience, but still want the strong intimacy or identification response, you may want to use "I" instead. By telling a story that your reader can relate to... that is, a story of how you made this discovery, found out why it's important, or applied it in some way to achieve a goal the reader shares or recognizes as valuable, you allow the reader to simply identify with you on a less conscious/contentious level.

(Notice, for example, how many of Eliezer's best posts begin with such a story, either about Eliezer or some fictional characters.)

comment by wnoise · 2010-02-26T20:26:58.175Z · LW(p) · GW(p)

Hmm, what do people think - if we end up rewriting this, should I just edit this post? Or make an entirely new one? Perhaps keep this one as it is, but work the changes into a future one that's longer?

Personally, I think if it's just minor stylistic changes in expressing the same material, editing the post is the way to go; if it's adding more material, or expressing it radically differently, then a new post is appropriate.

comment by h-H · 2010-02-27T00:52:11.313Z · LW(p) · GW(p)

it's fine the way it is I think, it covers enough without being too specific. great post.

comment by jimrandomh · 2010-02-27T02:33:08.228Z · LW(p) · GW(p)

A frequentist asks, "did you find enough evidence?" A Bayesian asks, "how much evidence did you find?"

Frequentists can be tricky, by saying that a very small amount of evidence is sufficient; and they can hide this claim behind lots of fancy calculations, so they usually get away with it. This makes for better press releases, because saying "we found 10dB of evidence that X" doesn't sound nearly as good as saying "we found that X".
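(A note on the "10dB" phrasing, in case it's unfamiliar: it follows the convention of measuring evidence as ten times the base-10 log of the likelihood ratio. A minimal sketch, with made-up ratios:)

```python
import math

def evidence_db(likelihood_ratio):
    # Decibels of evidence: 10 * log10 of the likelihood ratio
    # (the convention the "10dB" phrasing above alludes to).
    return 10 * math.log10(likelihood_ratio)

print(evidence_db(10))    # 10.0 dB: data 10x likelier under X than under not-X
print(evidence_db(100))   # 20.0 dB
print(evidence_db(1.05))  # ~0.2 dB: "a very small amount of evidence"
```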

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-27T05:43:34.641Z · LW(p) · GW(p)

Since when do frequentists measure evidence in decibels?

Replies from: JGWeissman
comment by JGWeissman · 2010-02-27T05:54:10.669Z · LW(p) · GW(p)

jimrandomh claimed that frequentists don't report amounts of evidence. So you object that measuring in decibels is not how they don't report it? If they don't report amounts of evidence, then of course they don't report it in the precise way in the example.

Replies from: toto
comment by toto · 2010-02-27T20:15:24.119Z · LW(p) · GW(p)

Frequentists (or just about anybody involved in experimental work) report p-values, which are their main quantitative measure of evidence.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-27T21:21:18.161Z · LW(p) · GW(p)

Evidence, as measured in log odds, has the nice property that evidence from independent sources can be combined by adding. Is there any way at all to combine p-values from independent sources? As I understand them, p-values are used to make a single binary decision to declare a theory supported or not, not to track cumulative strength of belief in a theory. They are not a measure of evidence.

Replies from: Academian, Cyan
comment by Academian · 2010-03-17T13:46:03.974Z · LW(p) · GW(p)

Log odds of independent events do not add up, just as the odds of independent events do not multiply. The odds of flipping heads is 1:1, the odds of flipping heads twice is not 1:1 (you have to multiply odds by likelihood ratios, not odds by odds, and likewise you don't add log odds and log odds, but log odds and log likelihood-ratios). So calling log odds themselves "evidence" doesn't fit the way people use the word "evidence" as something that "adds up".

This terminology may have originated here:

http://causalityrelay.wordpress.com/2008/06/23/odds-and-intuitive-bayes/

I'm voting your comment up, because I think it's a great example of how terminology should be chosen and used carefully. If you decide to edit it, I think it would be most helpful if you left your original words as a warning to others :)

Replies from: JGWeissman, ciphergoth
comment by JGWeissman · 2010-03-17T16:53:32.656Z · LW(p) · GW(p)

By "evidence", I refer to events that change an agent's strength of belief in a theory, and the measure of evidence is the measure of this change in belief, that is, the likelihood-ratio and log likelihood-ratio you refer to.

I never meant for "evidence" to refer to the posterior strength of belief. "Log odds" was only meant to specify a particular measurement of strength of belief.

comment by Paul Crowley (ciphergoth) · 2010-03-17T14:44:00.986Z · LW(p) · GW(p)

Can you be clearer? Log likelihood ratios do add up, so long as the independence criterion is satisfied (ie so long as P(E_2|H_x) = P(E_2|E_1,H_x) for each H_x).

Replies from: Academian, Morendil
comment by Academian · 2010-03-17T14:56:52.646Z · LW(p) · GW(p)

Sure, just edited in the clarification: "you have to multiply odds by likelihood ratios, not odds by odds, and likewise you don't add log odds and log odds, but log odds and log likelihood-ratios".
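(Concretely, a toy sketch of that bookkeeping - the coin example and its numbers are made up:)

```python
import math

# Two hypotheses about a coin: fair vs. double-headed (toy assumption).
# Prior odds 1:1. Each observed head carries a likelihood ratio of
# P(head | double-headed) / P(head | fair) = 1.0 / 0.5 = 2.
log_prior_odds = math.log(1.0)    # log of 1:1 odds = 0
log_lr_per_head = math.log(2.0)

# Update on three independent heads: add log likelihood-ratios to the
# log odds (not log odds to log odds).
log_posterior_odds = log_prior_odds + 3 * log_lr_per_head
posterior_odds = math.exp(log_posterior_odds)   # 8:1
print(posterior_odds / (1 + posterior_odds))    # P(double-headed) ~ 0.889
```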

comment by Morendil · 2010-03-17T14:55:09.568Z · LW(p) · GW(p)

As long as there are only two H_x, mind you. They no longer add up when you have three hypotheses or more.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-03-17T14:59:42.488Z · LW(p) · GW(p)

Indeed - though I find it very hard to hang on to my intuitive grasp of this!

Replies from: Academian, Academian
comment by Academian · 2010-03-20T00:51:28.621Z · LW(p) · GW(p)

Here is the post on information theory I said I would write:

http://lesswrong.com/lw/1y9/information_theory_and_the_symmetry_of_updating/

It explains "mutual information", i.e. "informational evidence", which can be added up over as many independent events as you like. Hopefully this will have restorative effects for your intuition!

comment by Academian · 2010-03-17T15:08:38.412Z · LW(p) · GW(p)

Don't worry, I have an information theory post coming up that will fix all of this :)

comment by Cyan · 2010-02-28T03:09:16.817Z · LW(p) · GW(p)

There are lots of papers on combining p-values.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-28T05:57:33.772Z · LW(p) · GW(p)

Well, just looking at the first result, it gives a formula for combining n p-values that, as near as I can tell, lacks the property that C(p1,p2,p3) = C(C(p1,p2),p3). I suspect this is a result of unspoken assumptions that the combined p-values were obtained in a similar fashion (which I violate by trying to combine a p-value combined from two experiments with another obtained from a third experiment), which would be information not contained in the p-value itself. I am not sure of this because I did not completely follow the derivation.

But is there a particular paper I should look at that gives a good answer?
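(For what it's worth, Fisher's method - probably the best-known combiner - makes the order-dependence easy to see. A minimal sketch; the p-values here are made up:)

```python
from math import log
from scipy.stats import chi2

def fisher_combine(pvalues):
    # Fisher's method: -2 * sum(ln p_i) is chi-squared distributed
    # with 2k degrees of freedom under the joint null.
    stat = -2 * sum(log(p) for p in pvalues)
    return chi2.sf(stat, 2 * len(pvalues))

p1, p2, p3 = 0.04, 0.10, 0.20
all_at_once = fisher_combine([p1, p2, p3])
two_then_one = fisher_combine([fisher_combine([p1, p2]), p3])
print(all_at_once, two_then_one)  # the results differ: C(C(p1,p2),p3) != C(p1,p2,p3)
```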

Replies from: Cyan
comment by Cyan · 2010-02-28T14:56:00.510Z · LW(p) · GW(p)

I haven't actually read any of that literature -- Cox's theorem suggests it would not be a wise investment of time. I was just Googling it for you.

Replies from: JGWeissman
comment by JGWeissman · 2010-02-28T17:50:26.515Z · LW(p) · GW(p)

Fair enough, though it probably isn't worth my time either.

Unless someone claims that they have a good general method for combining p-values, such that it does not matter where the p-values come from, or in what order they are combine, and can point me at one specific method that does all that.

comment by Bo102010 · 2010-02-27T00:45:11.173Z · LW(p) · GW(p)

I recently started working through this Applied Bayesian Statistics course material, which has done wonders for my understanding of Bayesianism vs. the bag-of-tricks statistics I learned in engineering school.

Replies from: Seth_Goldin
comment by Seth_Goldin · 2010-02-27T01:34:59.856Z · LW(p) · GW(p)

So I finally picked up a copy of Probability Theory: The Logic of Science, by E.T. Jaynes. It's pretty intimidating and technical, but I was surprised how much prose there is, which makes it quite palatable. We should recommend this more here on Less Wrong.

Replies from: Erebus
comment by Erebus · 2010-03-04T10:33:33.946Z · LW(p) · GW(p)

Just remember that Jaynes was not a mathematician and many of his claims about pure mathematics (as opposed to computations and their applications) in the book are wrong. Especially, infinity is not mysterious.

Replies from: thomblake
comment by thomblake · 2010-03-04T14:24:53.484Z · LW(p) · GW(p)

Especially, infinity is not mysterious.

It should be obvious that infinity (like all things) is not inherently mysterious, and equally obvious that it's mysterious (if not unknown) to most people.

Replies from: Erebus
comment by Erebus · 2010-03-04T17:24:29.968Z · LW(p) · GW(p)

"Infinity is mysterious" was intended as a paraphrase of Jaynes' chapter on "paradoxes" of probability theory, and I intended "mysterious" precisely in the sense of inherently mysterious. As far as I know, Jaynes didn't use the word "mysterious" himself. But he certainly claims that rules of reasoning about infinity (which he conveniently ignores) are not to be trusted and that they lead to paradoxes.

comment by Jayson_Virissimo · 2010-02-27T01:40:25.583Z · LW(p) · GW(p)

Bayesianism is more than just subjective probability; it is a complete decision theory.

A decent summary is provided by Sven Ove Hansson:

  1. The Bayesian subject has a coherent set of probabilistic beliefs.
  2. The Bayesian subject has a complete set of probabilistic beliefs.
  3. When exposed to new evidence, the Bayesian subject changes his (her) beliefs in accordance with his (her) conditional probabilities.
  4. Finally, Bayesianism states that the rational agent chooses the option with the highest expected utility.

The book this quote is taken from can be downloaded for free here.
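(A toy illustration of item 4, expected-utility choice, with entirely made-up numbers:)

```python
# Made-up decision problem: carry an umbrella given P(rain) = 0.3?
p_rain = 0.3
utility = {
    ("umbrella", "rain"): -1,      # encumbered but dry
    ("umbrella", "sun"): -1,
    ("no umbrella", "rain"): -10,  # soaked
    ("no umbrella", "sun"): 0,
}

for option in ("umbrella", "no umbrella"):
    eu = (p_rain * utility[(option, "rain")]
          + (1 - p_rain) * utility[(option, "sun")])
    print(option, eu)   # umbrella: -1.0, no umbrella: -3.0

# The rational agent of item 4 takes the umbrella.
```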

Replies from: wnoise
comment by wnoise · 2010-02-27T09:34:16.617Z · LW(p) · GW(p)

What Bayescraft covers is a matter of tendentious definitions. I personally do not consider decision theory a necessary part of it, though it is certainly part of what we're trying to capture at LessWrong.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-28T01:44:57.830Z · LW(p) · GW(p)

I agree. The line between belief and decision is the line between 3 and 4 in that list and it is such a clean line that the von Neumann-Morgenstern axioms can be (and usually are) presented about a frequentist world.

comment by Jawaka · 2010-02-26T12:55:08.980Z · LW(p) · GW(p)

"A might be the reason for symptom X, then we have to take into account both the probability that X caused A"

I think you have accidentally swapped some variables there

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-26T15:16:17.702Z · LW(p) · GW(p)

Thanks, fixed.

comment by Joanna Morningstar (Jonathan_Lee) · 2010-02-26T09:42:30.551Z · LW(p) · GW(p)

It seems there are a few meta-positions you have to hold before taking Bayesianism as talked about here; you need the concept of Winning first. Bayes is not sufficient for sanity, if you have, say, an anti-Occamian or anti-Laplacian prior.

What this site is for is to help us be good rationalists; to win. Bayesianism is the best candidate methodology for dealing with uncertainty. We even have theorems that show that in its domain it's uniquely good. My understanding of what we mean by Bayesianism is updating in the light of new evidence, and updating correctly within the constraints of sanity (cf Dutch books).

Replies from: Seth_Goldin, prase
comment by Seth_Goldin · 2010-02-27T05:03:07.151Z · LW(p) · GW(p)

We can discuss both epistemic and instrumental rationality.

comment by prase · 2010-02-26T10:41:45.511Z · LW(p) · GW(p)

You are right that Bayesianism isn't sufficient for sanity, but why should it prevent a post explaining what Bayesianism is? It's possible to be a Bayesian with wrong priors. It's also good to know what Bayesianism is, especially when the term is so heavily used. My understanding is that the OP is doing a good job keeping concepts of winning and Bayesianism separated. The contrary would conflate Bayesianism with rationality.

Replies from: Kevin
comment by Kevin · 2010-02-26T11:52:02.688Z · LW(p) · GW(p)

Jonathan's post doesn't seem like much of an argument but more of a criticism. There's lots more to write on this topic.

comment by Nisan · 2010-02-26T23:25:07.182Z · LW(p) · GW(p)

The penultimate paragraph about our beliefs isn't about Bayesianism so much as heuristics and biases. Unless you were a Bayesian from birth, for at least part of your life your beliefs evolved in a crazy fashion not entirely governed by Bayes' theorem. It is for this reason that we should be suspicious of beliefs based on assumptions we've never scrutinized.

comment by Kevin · 2010-02-26T10:59:54.357Z · LW(p) · GW(p)

On Hacker News: http://news.ycombinator.com/item?id=1152886

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-26T16:00:53.220Z · LW(p) · GW(p)

Thanks!

And interestingly, I find myself looking at my upvotes here and there and wondering what the appropriate "conversion rate" is for purposes of feeling good over a successful post. I've now gotten 31 upvotes there, but only 13 here. Obviously getting upvotes over there is easier than over here, so I shouldn't value this as much as if I'd got 13 + 31 = 44 upvotes here. On the other hand, I should probably allow myself a small bonus for writing a cross-domain post that is good enough to get upvotes on both sites. Hum. Man, this is tough.

Replies from: Kevin
comment by Kevin · 2010-02-27T04:53:18.027Z · LW(p) · GW(p)

By any standard you had a successful Hacker News post -- it was on the front page for most of the morning, which is good. The number of votes is not meaningful at all on Hacker News so there's no conversion rate. Also, I strongly suspect that many of the initial early votes on HN came from primary LW users following my link and then upvoting, possibly even people that didn't upvote it on LW.

comment by dhiltonp · 2014-05-16T22:09:38.810Z · LW(p) · GW(p)

The 'Intuitive Explanation' link has changed to http://yudkowsky.net/rational/bayes

comment by ChristianKl · 2012-10-15T12:23:12.780Z · LW(p) · GW(p)

Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.

Don't forget the prior: "The official account of big conflicts with a lot of different interests involved will always leave some things unexplained or otherwise suspicious." "Government agencies who fail on a massive scale don't like to be transparent about how the failure happened."

Actors in government agencies didn't think: "How can I convince the public that 9/11 wasn't an inside job?" They thought: "How can I influence the public perception of 9/11 in a way that gets my department more funding?" Or, when it came to President Bush at the time: "How can I influence the public perception in a way that makes it more likely that I'll win the next election?"

Newspaper journalists don't care about fact-checking every single fact in their articles. It's way too much effort. If you have background knowledge, you will find facts that aren't true in most news stories.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-10-15T19:58:57.745Z · LW(p) · GW(p)

Don't forget the prior: "The official account of big conflicts with a lot of different interests involved will always leave some things unexplained or otherwise suspicious." "Government agencies who fail on a massive scale don't like to be transparent about how the failure happened."

"Governments in general, and the U.S. in specific, have a history of lying to justify war. I can think of several incidents where an official casus belli turned out to be either a lie, as in the second Gulf of Tonkin incident or the Iraqi WMD allegation; or at least significantly doubtful, such as the sinking of the Maine. In these cases, the 'conspiracy theorists' and peace activists were right; and I can't think of any where they were wrong. So they have more credibility than the official report."

Replies from: ChristianKl
comment by ChristianKl · 2012-10-15T21:57:45.471Z · LW(p) · GW(p)

In these cases, the 'conspiracy theorists' and peace activists were right; and I can't think of any where they were wrong. So they have more credibility than the official report."

Knowing that the official report contains information that's false doesn't lead you to know what's true.

comment by roland · 2010-02-27T03:07:04.790Z · LW(p) · GW(p)

Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely,

Here I have to object: you framed it as a publicity stunt, but actually 9/11 has shaped everything in the USA: domestic policies, foreign policies, military spending, the identity of the nation as a whole (it's US vs. THEM), etc. So there is a lot at stake.

Btw, as far as the willingness of the government to kill its own citizens goes, more than 4,000 US soldiers have died in Iraq so far (over 30,000 wounded) and more than 1,000 in Afghanistan, compared to fewer than 3,000 in the WTC attack. This was based on information known to be false; remember the original claim of WMDs in Iraq? So if you seriously maintain that the government is not willing to sacrifice its own citizens, I want to know where you get your priors from.

Replies from: Jack, Jonathan_Graehl
comment by Jack · 2010-02-28T22:02:42.336Z · LW(p) · GW(p)

The controlling feature for this prior isn't "willingness to kill own citizens" or "publicity stunt" but "massively risky". "Massively risky" is actually an incredible understatement. We're talking about people already at the top of the social hierarchy risking death and eternal shame for themselves and their families, in hopes that the hundreds of people who were part of the conspiracy keep quiet and that no damning evidence of a remarkably complicated plot is left behind.

The government's willingness to kill its own citizens, such as it is, less often carries over to civilians and even less often carries over to rich white people on Wall Street. And for something that has helped shape the country... well, remarkably little has changed in the direction that administration wanted things to go. Indeed, why, in all those years of waning popularity, wouldn't they try something like it again (maybe foiling the attempt this time)? If they're so powerful why not get someone else elected President?

Replies from: Alicorn, roland
comment by Alicorn · 2010-02-28T22:08:20.603Z · LW(p) · GW(p)

You know, I have little interest in 9/11 Truth, but I have no patience for the "but it would be so obvious" reply to Truthers. Here is how that conversation translates in my head:

Truther: I think the towers came down due to a deliberate demolition by our government. I think this because thus and so.

Non-Truther: But the government would never have done anything so easy to find out about, because it would carry massive risk. Everybody would know about it.

Truther: Well, if people were paying attention to thus and so, they'd know -

Non-Truther: BUT SINCE I DIDN'T ALREADY KNOW ABOUT THUS AND SO IT'S CLEARLY NOT SOMETHING EVERYBODY KNOWS ABOUT AND I CAN'T HEAR YOU NANANANANANANANA.

Replies from: Jack, PeerInfinity, wnoise
comment by Jack · 2010-02-28T22:17:53.936Z · LW(p) · GW(p)

Just to clarify: Do you think that is what I'm doing here?

Replies from: Alicorn
comment by Alicorn · 2010-02-28T22:28:42.341Z · LW(p) · GW(p)

It was at least strongly reminiscent, enough that the spot under your comment seemed like a good place to put mine, but I did not intend to attack you specifically.

comment by PeerInfinity · 2010-03-01T14:48:59.062Z · LW(p) · GW(p)

obligatory XKCD comic: http://xkcd.com/690/

(actually, that's not as relevant as I first thought, but I'll go ahead and post it here anyway)

Replies from: ata
comment by ata · 2010-03-01T15:07:57.193Z · LW(p) · GW(p)

A little bit more relevant: http://imgur.com/bx1th.png

comment by wnoise · 2010-02-28T22:16:50.904Z · LW(p) · GW(p)

I believe you were unfairly voted down. Your recasting shows that this is essentially an appeal to authority, with the authority being "everyone else".

comment by roland · 2010-02-28T22:32:26.810Z · LW(p) · GW(p)

We're talking about people already at the top of the social hierarchy risking death and eternal shame for themselves and their families, in hopes that the hundreds of people who were part of the conspiracy keep quiet and that no damning evidence of a remarkably complicated plot is left behind.

Well, there is a lot of evidence left behind and that has been cited over and over.

The government's willingness to kill its own citizens, such as it is, less often carries over to civilians and even less often carries over to rich white people on Wall Street.

AFAIK none of the people killed was exceptionally rich and/or powerful.

And for something that has helped shape the country... well, remarkably little has changed in the direction that administration wanted things to go.

Wait, what???

If they're so powerful why not get someone else elected President?

Someone else? What are you talking about? Every President in recent decades has been a member of one of the same two parties. Obama has not significantly changed the foreign policy and is moving in the same direction.

Replies from: Jack
comment by Jack · 2010-02-28T23:02:52.359Z · LW(p) · GW(p)

Well, there is a lot of evidence left behind and that has been cited over and over.

Well, we're talking about the prior. Obviously we can then update on the evidence, whatever that is. People will also disagree about what the evidence means, but the point is that this is a really unlikely event you guys are claiming took place. We can interpret the evidence, but strange coincidences or some video footage not being released are not close to sufficient for me to suddenly start believing 9/11 was an inside job.

AFAIK none of the people killed was exceptionally rich and/or powerful.

I don't know what exceptionally means here but, ya know, the WTC wasn't a homeless shelter.

And for something that has helped shape the country... well, remarkably little has changed in the direction that administration wanted things to go. Wait, what???

...

Someone else? What are you talking about, every President in the last decades has been a member of one of the same two parties. Obama has not significantly changed the foreign policy and is moving in the same direction.

Look, I have no idea what your particular conspiracy is. So it is a little hard to examine the supposed motivations. My comments made sense given certain assumptions about what the motivations of such a conspiracy would be. Obviously they aren't your assumptions so share yours.

Replies from: roland
comment by roland · 2010-02-28T23:23:29.901Z · LW(p) · GW(p)

Well, there is a lot of evidence left behind and that has been cited over and over.

Well we're talking about the prior.

Sorry. What I should have answered was: under the assumption of the conspiracy theory, the people who planned the whole thing are from the executive branch of the government, which is the branch that took charge of the investigation. So they have nothing to fear. Or can you tell me whom they have to fear?

I don't know what exceptionally means here but, ya know, the WTC wasn't a homeless shelter.

By exceptionally rich I mean people like bankers, etc. Most if not all of those killed in the WTC were office workers, cleaning staff, tourists, police and firemen.

comment by Jonathan_Graehl · 2010-02-27T09:14:03.388Z · LW(p) · GW(p)

Well argued, but if you credit the U.S. government with such brazen cruelty toward the citizens it nominally serves, then why would the government need a pretense at all? Why not invade with only forged documents and lies? No self-inflicted wound should be necessary; the U.S. military need not fear intervention by other nations' forces if they appear to only pick on a few small oil-rich nations.

Replies from: roland
comment by roland · 2010-02-27T20:09:52.696Z · LW(p) · GW(p)

Forged documents and lies are not enough to convince public opinion or, better, to arouse strong emotions; something more salient is needed. You have to remember, on 9/11 basically the whole world stood still watching the events unfold. Wikipedia:

The NATO council declared that the attacks on the United States were considered an attack on all NATO nations and, as such, satisfied Article 5 of the NATO charter

http://en.wikipedia.org/wiki/September_11_attacks#cite_note-155

Btw, Article 5 allows the use of armed (military) force. This was the official NATO position even before there was any investigation as to who was supposedly behind the "attacks".

Anyone arguing against military action could be, and still is, decried as unpatriotic and callous towards the families of those who died. You cannot achieve this with just a batch of documents.

comment by Nhoj (nhoj) · 2023-05-12T19:22:41.441Z · LW(p) · GW(p)

I think this parenthetical statement should maybe be a footnote or something, because it makes the and part of the sentence too far away from the both part. Or maybe put it in the following sentence? I got a little lost.

comment by brazil84 · 2010-02-26T22:48:56.132Z · LW(p) · GW(p)

Doesn't "Bayesianism" basically boil down to the idea that one can think of beliefs in terms of mathematical probabilities?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-02-27T05:46:07.318Z · LW(p) · GW(p)

That's like saying that Sunni beliefs boil down to belief in Islam.

Replies from: brazil84, Cyan
comment by brazil84 · 2010-02-27T06:06:26.165Z · LW(p) · GW(p)

Following your analogy, what is the equivalent to Shia Islam?

Put another way: Bayesianism as opposed to what?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-03-03T20:32:35.951Z · LW(p) · GW(p)

Frequentism, according to the posters here. Unless I misunderstand what you mean by thinking of a belief in terms of probabilities.

Replies from: wnoise, brazil84
comment by wnoise · 2010-03-03T21:22:39.303Z · LW(p) · GW(p)

But the standard Frequentist stance is that probabilities are not degrees of belief, but solely long term frequencies in random experiments.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-03-03T22:50:28.002Z · LW(p) · GW(p)

Most "frequentists" aren't such sticklers about terminology. Most people who attach probabilities to beliefs in knowledge representations - say, AI systems - are more familiar with frequentist than Bayesian methodology.

Replies from: wnoise
comment by wnoise · 2010-03-03T23:48:22.854Z · LW(p) · GW(p)

Okay, so most people who use statistics don't know what they're talking about. I find that all too plausible.

comment by brazil84 · 2010-03-03T21:29:05.069Z · LW(p) · GW(p)

Frequentism, according to the posters here

I looked up "Frequentism" on Wikipedia . . . .I don't understand your point.

What concept am I omitting by characterizing "Bayesianism" the way I did?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-03-03T22:48:08.950Z · LW(p) · GW(p)

Google frequentist instead of frequentism. It's the usual way of doing statistics and working with probabilities.

Replies from: brazil84
comment by brazil84 · 2010-03-04T00:11:44.418Z · LW(p) · GW(p)

I did and I still don't understand your point.

Again my question: Exactly what concept am I omitting by characterizing "Bayesianism" the way I did?

comment by Cyan · 2010-03-04T00:21:42.016Z · LW(p) · GW(p)

I PM'ed you regarding this thread. (I mention it here because I seem to recall that you're subject to a bug that prevents you from getting message/reply notifications.)

comment by private_messaging · 2012-06-09T09:19:02.326Z · LW(p) · GW(p)

Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so.

Frequently misunderstood. E.g. you have propositions A and B; you mistakenly consider that one of them will probably happen, and you will give me money if you judge P(A)/P(B) > some threshold.

If both A and B happen to be unlikely, I can exploit that by making arguments which only prompt you to update (lower the probability of) B.

Likewise, if you have some proposition A whose probability is increased by some arguments and decreased by others, I can give you only the arguments in favour of A. As a good Bayesian, you are going to keep updating the belief, to my advantage.

Everything breaks down on incomplete inference graphs, which very frequently contain mistakes (invalid relations, invalid nodes, etc). No matter how much you internalize the tenets, unless you install some sort of quantum hyper-computing implant in your head, your inference graphs will be incomplete to an unknown extent, and only partially propagated. If the propagations are only ever prompted by reading that you should propagate something, you'll be significantly under remote control.
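(A toy simulation of the filtered-argument failure mode described above; all numbers are made up:)

```python
# The full pool of arguments about A is perfectly balanced, but an
# adversary only forwards the arguments that favour A.
odds = 1.0                    # prior odds for A, i.e. P(A) = 0.5
pro_args, con_args = 10, 10   # balanced pool; each argument has LR 2 (or 1/2)

for _ in range(pro_args):     # naive updating on the filtered stream
    odds *= 2.0

print(odds / (1 + odds))      # ~0.999: near-certainty manufactured from nothing
# A correct update would also condition on the selection process that
# decided which arguments reached you.
```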

comment by CaptainOblivious2 · 2010-02-27T16:52:47.599Z · LW(p) · GW(p)

Sub-tenet 1: If you experience something that you think could only be caused by cause A, ask yourself "if this cause didn't exist, would I regardless expect to experience this with equal probability?" If the answer is "yes", then it probably wasn't cause A.

I don't understand this at all - if you experience something that you think could only be caused by A, then the question you're supposed to ask yourself makes no sense whatsoever: absent A, you would expect to never experience this thing, per the original condition! And if the answer to the question is anything above "never", then clearly you don't think that A is the only possible cause!

Replies from: JGWeissman, FAWS
comment by JGWeissman · 2010-02-27T17:45:05.117Z · LW(p) · GW(p)

The point is that people can erroneously report, even to themselves, that they believe their experience could only be caused by cause A. Asking the question if you would still anticipate the experience if cause A did not exist is a way of checking that you really believe that your experience could only be caused by cause A.

More generally, it is useful to examine beliefs you have expressed in high level language, to see if you still believe them after digging deeper into what that high level language means.

comment by FAWS · 2010-02-27T17:47:27.301Z · LW(p) · GW(p)

I think that the inconsistency of such a position was the point. It would probably be better phrased as "... something that has to be caused by cause A" (or possibly just "proof of A"), which is effectively equivalent, but IMO something that someone who would answer "yes" to the quoted question could plausibly have claimed to believe (i.e. I wouldn't be very surprised by the existence of people who are that inconsistent in their beliefs).

comment by knb · 2010-02-27T02:42:50.685Z · LW(p) · GW(p)

Further suppose that there are two reasons for why people get headaches: they might have a brain tumor, or they might have a cold.

Or, if you're very unlucky, you could have a headache and a brain tumor.... :3

comment by PlatypusNinja · 2010-02-26T23:03:52.189Z · LW(p) · GW(p)

A brain tumor always causes a headache, but exceedingly few people have a brain tumor. In contrast, a headache is rarely a symptom for cold, but most people manage to catch a cold every single year. Given no other information, do you think it more likely that the headache is caused by a tumor, or by a cold?

Given no other information, we don't know which is more likely. We need numbers for "rarely", "most", and "exceedingly few". For example, if 10% of humans currently have a cold, and 1% of humans with a cold have a headache, but 1% of humans have a brain tumor, then the brain tumor is actually more likely.

(The calculation we're performing is: compare ("rarely" times "most") to "exceedingly few" and see which one is larger.)
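(Spelled out with the numbers above:)

```python
# Compare ("rarely" times "most") to "exceedingly few", per the comment.
p_cold, p_headache_given_cold = 0.10, 0.01
p_tumor, p_headache_given_tumor = 0.01, 1.0   # tumor taken to always cause headache

print(p_cold * p_headache_given_cold)     # 0.001
print(p_tumor * p_headache_given_tumor)   # 0.01, ten times larger: tumor wins
```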

Replies from: Alicorn, SilasBarta
comment by Alicorn · 2010-02-26T23:21:27.937Z · LW(p) · GW(p)

You're missing the point. This post is suitable for an audience whose eyes would glaze over if you threw in numbers, which is wonderful (I read the "Intuitive Explanation of Bayes' Theorem" and was ranting for days about how there was not one intuitive thing about it! it was all numbers! and graphs!). Adding numbers would make it more strictly accurate but would not improve anyone's understanding. Anyone who would understand better if numbers were provided has their needs adequately served by the "Intuitive" explanation.

Replies from: pjeby, PlatypusNinja
comment by pjeby · 2010-02-27T04:03:02.503Z · LW(p) · GW(p)

Agreed, I did not find the "Intuitive Explanation" to be particularly intuitive even after multiple readings. Understanding the math and principles is one thing, but this post actually made me sit up and go, "Oh, now I see what all the fuss is about" - outside a relatively narrow range of issues like diagnosing cancer or identifying spam emails, that is.

Now I get it well enough to summarize: "Even if A will always cause B, that doesn't mean A did cause B. If B would happen anyway, this tells you nothing about whether A caused B."

Which is both a "well duh" and an important idea at the same time, when you consider that our brains appear to be built to latch onto the first "A" that would cause B, and then stubbornly hang onto it until it can be conclusively disproven.

That's a "click" right there, that makes retroactively comprehensible many reams of Eliezer's math rants and Beisutsukai stories. (Well, not that I didn't comprehend them as such... more that I wasn't able to intuitively recreate all the implications that I now think he was expecting his readers to take away.)

So, yeah... this is way too important of an idea to have math associated with it in any way. ;-)

comment by PlatypusNinja · 2010-02-27T19:56:44.098Z · LW(p) · GW(p)

Personally it bothers me that the explanation asks a question which is numerically unanswerable, and then asserts that rationalists would answer it in a given way. Simple explanations are good, but not when they contain statements which are factually incorrect.

But, looking at the karma scores it appears that you are correct that this is better for many people. ^_^;

comment by SilasBarta · 2010-02-27T01:09:18.296Z · LW(p) · GW(p)

I thought Truly Part of You was an excellent introduction to rationalism/Bayesianism/Less Wrong philosophy that avoids much use of numbers, graphs, and technical language. So I think it's more appropriate for the average person, or for people that equations don't appeal to.

Does anyone who meets that description agree?

And could someone ask Alicorn if she prefers it?

Replies from: djcb
comment by djcb · 2010-02-27T17:23:57.065Z · LW(p) · GW(p)

Hmmmm.... that's an interesting article too, but it focuses on a different question - the question of what knowledge really means - and uses AI concepts to discuss it (somewhat related to Searle's Chinese Room gedankenexperiment).

However, I think the article discussed here is a bit more directly connected to Bayesianism. It's clear what Bayes' theorem means, but what many people today mean by Bayesianism is somewhat of a loose extrapolation of that -- or even just a metaphor.

I think the article does a good job at explaining the current use.

comment by woozle · 2010-02-28T00:48:56.950Z · LW(p) · GW(p)

Okay, I'm rising to the bait here...

I would really appreciate it if people would be more careful about passing on memes regarding subjects they have not researched properly. This should be a basic part of "rationalist etiquette", in the same way that "wash your hands before you handle food" is part of common eating etiquette.

I say this because I'm finding myself increasingly irritated by casual (and ill-informed) snipes at the 9/11 Truth movement, which mostly tries very hard to be rational and evidence-based:

Or take the debate we had on 9/11 conspiracy theories. Some people thought that unexplained and otherwise suspicious things in the official account had to mean that it was a government conspiracy. Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.

This claim is both a straw-man and a false dilemma.

The straw-man: Most of the movement now centers around the call for a new investigation, not around claims that "Bush did it".

Some of us (I include myself as a "truther" only because I agree with their core conclusions; I am not a member of any 9/11-related organization) may believe it likely that the government did something horrendous, but we realize the evidence is weak and circumstantial, that it is unclear exactly what the level of involvement (if any) was, and that the important thing is for a proper inquiry to be conducted.

What is clear from the evidence available is that there has been a horrendous cover-up of some sort, and that the official conclusions do not make sense.

The false dilemma: Where "A" is {there is strong evidence that the official story is substantially wrong, and therefore a proper investigation should be conducted} and "B" is {the government was clearly directly responsible for initiating the whole thing}, believing A does not necessitate believing B. Refuting B (if argument by ridicule is considered an acceptable form of refutation, that is) does not refute A.

I'm still keen on discussing this rationally with anyone who thinks the Truth movement is irrational. RobinZ offered to discuss this further, but 7 months later he still hasn't had time to do more than allude to his general position without actually defining it.

Here are my positions on this issue. I would appreciate it if someone would kindly demolish them and show me what an utterly deluded fool I've been, so that I can go back to agreeing with the apparent rational consensus on this issue -- which seems to be, in essence, that there's nothing substantially wrong with the official story. (If anyone can point me to a concise presentment of what everyone here more or less believes happened on 9/11, I would very much like to see it.)

And if nobody can do that, then could we please stop the casual sniping? Whether or not you believe the official story, you at least have to agree that we really shouldn't be trying to silence skeptical inquiry on any issue, much less one of such importance.

Replies from: Jack, Eliezer_Yudkowsky, komponisto, Douglas_Knight, Kaj_Sotala, Kevin, roland
comment by Jack · 2010-02-28T18:00:23.101Z · LW(p) · GW(p)

Keeping my comments on topic:

may believe it likely that the government did something horrendous, but we realize the evidence is weak and circumstantial

Did you read the actual post about Bayesianism? Part of the point is you're not allowed to do this! One can't both think something is likely and think the evidence is weak and circumstantial! Holding a belief but not arguing for it because you know you don't have the evidence is a defining example of irrationality. If you don't think the government was involved, fine. But if you do you're obligated to defend your belief.

Off Topic: I'm not going to go through every one of your positions but... how long have you been researching the issue? I haven't looked up the answer for every single thing I've heard truthers argue - I don't have the time. But every time I do look something up I find that the truthers just have no idea what they're talking about. And some of the claims don't even pass the blush test. For example, your first "unanswered" question just sounds crazy! I mean, HOLY SHIT! the hijackers' names aren't on the manifest! That is huge! And yet, of course they absolutely are on the flight manifests and, indeed, they flew under their own names. Indeed, we even have seating charts. For example, Mohamed Atta was in seat 8D. That's business class, btw.

Replies from: Eliezer_Yudkowsky, comedian, Peter_de_Blanc, wedrifid, woozle
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-01T19:26:28.003Z · LW(p) · GW(p)

Ah, but... what are the odds that A HIJACKER WOULD FLY IN BUSINESS CLASS??!?

Replies from: wedrifid
comment by wedrifid · 2010-03-02T04:02:44.479Z · LW(p) · GW(p)

I hear business class gives better 'final meals'.

comment by comedian · 2010-03-01T18:04:56.831Z · LW(p) · GW(p)

For example, your first "unanswered" question just sounds crazy! I mean, HOLY SHIT! the hijackers names aren't on the manifest! That is huge! And yet, of course they absolutely are on the flight manifests and, indeed, they flew under their own names. Indeed, we even have seating charts. For example, Mohamed Atta was in seat 8D. That's business class, btw.

This is a crowning moment of awesome.

Replies from: Baruta07, Jack, wedrifid
comment by Baruta07 · 2012-11-10T16:55:41.642Z · LW(p) · GW(p)

Warning: TvTropes may ruin your life. TvTropes should be used at your discretion. (Most Tropers agree that excessive use of TvTropes may be conducive to cynicism and overvaluation of most major media. TvTropes can cause such symptoms as: becoming dangerously genre savvy, spending increasing amounts of time on TvTropes, and a general increase in the number of tropes you use in a conversation. Please think twice before using TvTropes.)

comment by Jack · 2010-03-01T19:18:17.111Z · LW(p) · GW(p)

Does this mean if we're in a simulation written for entertainment I'm about to get killed off?

comment by wedrifid · 2010-03-02T06:48:50.738Z · LW(p) · GW(p)

(Please consider, for the sake of wedrifid's productivity if nothing else, including at least the explicit use of the word 'trope' by way of warning when linking to that black hole of super-stimulus.)

comment by Peter_de_Blanc · 2010-03-02T05:49:42.359Z · LW(p) · GW(p)

One can't both think something is likely and think the evidence is weak and circumstantial!

One definitely can. What else is one supposed to do when evidence is weak and circumstantial? Assign probabilities that sum to less than one?

Replies from: Jack
comment by Jack · 2010-03-02T05:54:22.559Z · LW(p) · GW(p)

If the evidence for a particular claim is weak and circumstantial one should assign that claim a low probability and other, competing, possibilities higher probabilities.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-03-02T06:06:08.728Z · LW(p) · GW(p)

What if the evidence for those is also weak and circumstantial?

Or what if one had assigned that claim a very high prior probability?

comment by wedrifid · 2010-03-02T04:09:21.109Z · LW(p) · GW(p)

But if you do you're obligated to defend your belief.

You're really not. You are not epistemically obliged to accept the challenge of another individual and subject your reasoning to their judgement in the form they desire. That is sometimes a useful thing to do, and sometimes it is necessary for the purpose of persuasion. Of course, it's usually more practical to attack their beliefs instead. That tends to give far more status.

Replies from: Jack
comment by Jack · 2010-03-02T04:22:09.465Z · LW(p) · GW(p)

No. Wrong! You totally are obligated.

Replies from: wedrifid
comment by wedrifid · 2010-03-02T04:43:57.992Z · LW(p) · GW(p)

Are you being facetious or not?

Replies from: Jack
comment by Jack · 2010-03-02T04:52:07.204Z · LW(p) · GW(p)

Well, a little of both. Your position doesn't seem like the kind of thing it makes sense to argue about, so I figured I'd make my point through demonstration and let it rest.

Replies from: wedrifid
comment by wedrifid · 2010-03-02T05:01:13.357Z · LW(p) · GW(p)

and let it rest.

It seems you demonstrated my point.

Replies from: Jack
comment by Jack · 2010-03-02T05:16:47.670Z · LW(p) · GW(p)
  1. Normative questions just aren't the same as factual questions. There is no particular reason to expect eventual agreement on the former, even in principle, so ending conversations is just fine and to be expected.

  2. Edit: Second point was based on a misunderstanding of the objection.

Replies from: wedrifid
comment by wedrifid · 2010-03-02T06:39:03.365Z · LW(p) · GW(p)

Your comment suggested that your goals in any further conversation would be very different from my own (that you were chiefly concerned with status and persuasion and not, say, facts about what discursive norms would be most beneficial).

I am actually quite offended at the accusation and do not believe you have due cause to make it.

The presumption that individuals must accept any challenge and 'defend' their beliefs is a tactic that is commonly exploited. It can be used to imply "you have to convince me, and if I can resist believing you then I am high status". It is something that I object to vocally and is just not part of rationality as I understand it. 'Defensible', like 'burden of proof', just isn't a Bayesian concept, for all the part it plays in traditional rationality.

I actually didn't think you would find my correction of a minor point objectionable. I had assumed you used the phrase 'obligated to defend' offhandedly and my reply was a mere tangent. I expected you to just revise it to something like "But if you do then don't expect to be taken seriously unless you can defend your belief".

Edit: Also, I just managed to lose like 9 karma in the span of two minutes. I presume it isn't you; I'm just airing grievances to the downvoter, should they realize this.

I claim two. I don't think that warranted an upvote because the point it made was not a good one and it also sub-communicated the attitude that you made explicit here. I also downvoted your original comment once it became clear that you present the normative assertion as a true part of your point rather than an accident of language. Come to think of it I originally upvoted the comment so that would count twice.

I left the immediate parent untouched because although it is offensive and somewhat of a reputational attack in that sense it at least is forthright and not underhanded. Outside of this context the last comment of yours I recall voting on is this one, which I considered quite insightful.

Please refrain from making such accusations again in the future without consideration. That I disagree with a single phrase doesn't warrant going personal. I didn't even take note of which author had said 'are obligated to defend' when I replied, much less seek to steal their status.

Replies from: Jack
comment by Jack · 2010-03-02T08:35:03.462Z · LW(p) · GW(p)

Whoa! On reflection this looks like an extended misunderstanding. This isn't especially surprising as we've had trouble communicating before.

I am actually quite offended at the accusation and do not believe you have due cause to make it.

I apologize for offending you. In making the comment I truly didn't mean it as a personal insult- though I can see how it came off that way. There is a not insignificant tendency around here to A) place truth-seeking as secondary to winning and B) reduce things to status games. So in your comment I pattern matched this

That is sometimes a useful thing to do and sometimes it is necessary for the purpose of persuasion. Of course, it's usually more practical to attack their beliefs instead. That tends to give far more status.

with that tendency. And so in saying that persuasion and status seemed to be what you were concerned with I thought I was basically just recognizing the position you had taken.

There isn't an explicit transition to this second part. I can see in retrospect that this was a comment about defending beliefs. You're saying, no it is not an obligation, just sometimes a good idea, here is when it is (pragmatically) a good idea. What I saw the first time was "No, there isn't any obligation like this. Here are the concerns that should instead enter into the decision to defend beliefs: Status and persuasion." Even if the expectation that someone defends their beliefs doesn't rise to the level of an obligation it still seems like the pro-social reasons for doing it have to do with truth-seeking and sharing information. So when all I see is persuasion and status I inferred that you weren't concerned with these other things. Does that make it clear where I was getting it from, even if I got it wrong?

I actually didn't think you would find my correction of a minor point objectionable. I had assumed you used the phrase 'obligated to defend' offhandedly and my reply was a mere tangent. I expected you to just revise it to something like "But if you do then don't expect to be taken seriously unless you can defend your belief".

It wasn't a particularly deliberate phrasing. That said, I think it is a defensible, even obvious, rule of discourse. Of course, one way of describing what happens to someone when they don't obey such rules is just that they are no longer taken seriously. Your tone in the first comment didn't suggest to me that you were only making a minor point, and is part of the reason I interpreted it as differing from my own view more radically than it apparently does. And, I mean, an obligation that people be prepared to give reasons for their views seems like a totally reasonable thing to have in an attempt at cooperative rationalist discourse. Indeed, if people refuse to defend beliefs I have no idea how this kind of cooperation is supposed to proceed. From this perspective your objection looks like it has to be coming from a pretty different set of assumptions.

I'm going to edit the offending comment and remove the material. Would you consider making this last comment somewhat less scolding and accusatory as it was an honest misunderstanding?

Replies from: wedrifid
comment by wedrifid · 2010-03-02T09:02:10.944Z · LW(p) · GW(p)

Hi Jack, thanks for that. I deleted my reply. I can see why you would object to that first interpretation. I too like to keep my 'winning' quite separate from my truth-seeking, and would join you in objecting to exhortations that people should explain reasons for their beliefs only for pragmatic purposes. It may be that my firm disapproval of mixing epistemic rationality with pragmatics was directed at you, not the mutual enemy, so pardon me if that is the case.

I certainly support giving explanations and justifications for beliefs. The main reason I wouldn't support it as an obligation is the kind of thing that you thought I was doing to you. Games can be played with norms, and I don't want people who are less comfortable with filtering out those sorts of games to feel obligated to change their beliefs if they cannot defend them according to the criteria of a persuader.

comment by woozle · 2010-02-28T18:56:00.711Z · LW(p) · GW(p)

Part of the point is you're not allowed to do this!

  1. I'm allowed to believe whatever I want; I'm just not allowed to try to convince you of it unless I have a rational argument.

  2. Isn't this what Bayesianism is all about -- reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?

  3. I do have arguments for my belief, but I'm not really prepared to spend the time getting into it; it's not essential to my main thesis, and I mentioned it only in passing as a way of giving context, to wit: "some people believe this, and I'm not trying to dismiss them, partly because I happen to agree with them, but that belief is entirely beside the point".

On your OT: You win a cookie! I had to research this a bit to figure out what happened, but apparently some 9/11 researchers found a list of passenger-victims and thought it was a passenger manifest. One anomaly does remain in that 6 of the alleged hijackers have turned up alive, but I wouldn't call that enough of an anomaly to be worth worrying about.

(Found the offending factoid under "comments" on the position page; fixing it...)

Replies from: JGWeissman, wedrifid, Jack
comment by JGWeissman · 2010-02-28T19:06:54.164Z · LW(p) · GW(p)

I'm allowed to believe whatever I want; I'm just not allowed to try to convince you of it unless I have a rational argument.

Traditional Rationality is often expressed as social rules, under which this claim might work. But in Bayesian Rationality, there is math that tells you exactly what you ought to believe given the evidence you have observed.

See No One Can Exempt You From Rationality's Laws.

Replies from: woozle
comment by woozle · 2010-02-28T23:17:37.737Z · LW(p) · GW(p)

Okay -- but in practice, what if I don't have time (or mental focus, or whatever resources it takes) to explicitly identify, enumerate, and evaluate each piece of evidence that I may be considering? It took me over an hour just to get this far with a Bayesian analysis of one hypothesis, which I'm probably not even doing right.

Or do we step outside the realm of Bayesian Rationality when we look at practical considerations like "finite computing resources"?

Replies from: Eliezer_Yudkowsky, FAWS
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-01T00:24:15.842Z · LW(p) · GW(p)

I'd actually say, start with the prior and with the strongest piece of evidence you think you have. This of itself should reveal something interesting and disputable.

comment by FAWS · 2010-03-01T00:21:43.849Z · LW(p) · GW(p)

As someone who recently failed at an attempt at Bayesian analysis, let me try to offer a few pointers. You correctly conclude that "What is the likelihood that evidence E would occur even if H were false?" is more immediately relevant than "What is the likelihood that evidence E would not occur if H were true?", which you only asked because you got the notation wrong: "the likelihood that evidence E would occur even if H were false" would be P(E|~H). P(H) is your prior, the probability before considering any evidence E, not the probability in the absence of any evidence. The considerations you list under evidence against are of the sort you would make when determining the priors; asking "What is the likelihood that Bush is a twit if H were true?" and so on would be very difficult to set probabilities for. You CAN treat it that way, but it's far from straightforward.

Actually, I have never seen a non-trivial example of this sort of analysis for this sort of real-world problem done right on this site.

H = this sort of analysis is practical

E = user FAWS has not seen any example of this sort of analysis done right.

P(H) = 0.9. Smart people like Eliezer seem to praise Bayesian thinking, and people ask for priors and so on.

P(E|H) = 0.3. I haven't read every comment, probably not even 10%, but if this is used anywhere it would be here, and if it's practical it should be used at least somewhat regularly.

P(E|~H) = 0.9. Might still be done even if impractical when it's a point of pride and/or group identification, which could be argued to be the case.

Calculating the posterior probability P(H|E):

P(H|E) = P(H&E)/P(E) = P(H)*P(E|H)/P(E) = P(H)*P(E|H)/(P(E|H)*P(H) + P(E|~H)*P(~H)) = 0.9 * 0.3 / (0.3 * 0.9 + 0.9 * 0.1) = 0.27 / 0.36 = 0.75
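
The same update in a few lines of Python, for anyone who wants to check the arithmetic (a minimal sketch using the made-up estimates above):

```python
# The estimates from above (made-up numbers).
p_h = 0.9              # P(H): prior that this sort of analysis is practical
p_e_given_h = 0.3      # P(E|H): no correct example seen, even though practical
p_e_given_not_h = 0.9  # P(E|~H): no correct example seen, because impractical

# Law of total probability: P(E) = P(E|H)*P(H) + P(E|~H)*P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem: P(H|E) = P(E|H)*P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(p_h_given_e)  # 0.75
```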

comment by wedrifid · 2010-03-02T06:53:23.366Z · LW(p) · GW(p)

I'm allowed to believe whatever I want; I'm just not allowed to try to convince you of it unless I have a rational argument.

Isn't this what Bayesianism is all about -- reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?

The best source to look at here is Probability is Subjectively Objective. You cannot (in the Bayesian sense) believe whatever you 'want'. There is precisely one set of beliefs to which you are epistemically entitled given your current evidence, even though I am obliged to form a different set of beliefs given what I have been exposed to.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-02T09:15:41.078Z · LW(p) · GW(p)

Typo in the link syntax. Corrected: Probability is Subjectively Objective.

comment by Jack · 2010-02-28T20:07:02.246Z · LW(p) · GW(p)

Isn't this what Bayesianism is all about -- reaching the most likely conclusion in the face of weak or inconclusive evidence? Or am I misunderstanding something?

Reaching the most likely conclusion while uncertain, yes. But that doesn't mean believing things without evidence.

One anomaly does remain in that 6 of the alleged hijackers have turned up alive, but I wouldn't call that enough of an anomaly to be worth worrying about.

Really? I'd worry about that. That would be a big deal. At the least it would be really embarrassing for the FBI. But it isn't true either!

Replies from: woozle
comment by woozle · 2010-02-28T23:22:56.479Z · LW(p) · GW(p)

But that doesn't mean believing things without evidence.

Lacking sufficient resources (time, energy, focus) to be able to enumerate one's evidence is not the same as not having any. I believe that I have sufficient evidence to believe what I believe, but I do not currently have a transcript of the reasoning by which I arrived at this belief.

But it isn't true either!

What is your evidence that it isn't true? Here's mine. Note that each claim is footnoted with a reference to a mainstream source.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-03-01T01:02:44.663Z · LW(p) · GW(p)

What is your evidence that it isn't true? Here's mine.

What you provide is evidence that some people shared names and some other data with the hijackers. You haven't shown that the actual people identified by the FBI later turned up alive.

Here's Wikipedia on the subject.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-28T01:59:37.469Z · LW(p) · GW(p)

Well, the main thing that'd cause me to mistrust your judgment there, as phrased, is A8. Pre-9/11, airlines had an explicit policy of not resisting hijackers, even ones armed only with boxcutters, because they thought they could minimize casualties that way. So taking over an airplane using boxcutters pre-9/11 is perfectly normal and expected and non-anomalous; and if someone takes exception to that event, it probably implies that in general their anomaly-detectors are tuned too high.

I also suspect that some of these questions are phrased a bit promptingly, and I would ask others, like, "Do you think that malice is a more likely explanation than stupidity for the level of incompetence displayed during Hurricane Katrina? What was to be gained politically from that? Was that level of incompetence more or less than the level of hypothesized government incompetence that you think is anomalous with respect to 9/11?" and so on.

Replies from: woozle, David_J_Balan
comment by woozle · 2010-02-28T15:41:40.452Z · LW(p) · GW(p)

That is a valuable point, and I have amended my A8 response to "MAYBE". The one detail I'm still not sure of is whether pilots would have relinquished control under those circumstances. Can anyone point to the actual text of the "Common Strategy"?

"Pilots for 911 Truth" has this to say:

I find it hard to believe Capt. Burlingame gave up his ship to Hani Hanjour pointing a boxcutter at him. Pilots know The Common Strategy prior to 9/11. Capt. Burlingame would have taken them where they wanted to go, but only after seeing more than a "boxcutter" or knife. ... The pilots' number 1 priority is the safety of the passengers. Number 2 priority is to get them to their destination on time. Pilots dont just give up their airplane to someone with a knife.. regardless of what the press has told you about The Common Strategy prior to 9/11.

"Screw Loose Change" seems to find this statement incredibly offensive, but offers only an emotional argument in response (argument from outrage?) and ignores the original point that these pilots were experienced in this sort of combat and certainly could have fought off attackers with boxcutters, with the "Common Strategy" being the only possible constraint on doing so.

I've added your proposed questions to the questionnaire, somewhat modified.

My answers are:

  • NO: not more likely, just possible -- what actually happened must be determined by the evidence. David Brin, for example, argues that said incompetence was a by-product of a "war on professionalism" waged by the Bush administration. (I would also argue that the question as phrased implies that it is reasonable to judge the question of {whether malice was involved} entirely on the basis of {how "likely" it seems}, and that this is therefore privileging the hypothesis that malice was not involved.)
  • "starving the beast", albeit in a somewhat broader sense than described by Wikipedia: shrink the government by rendering it incompetent, thus eroding support (and hence funding) for government activities
  • I'm not sure what you're getting at here; my immediate answer is "THAT DEPENDS" -- given the range of possible scenarios in which the government is complicit, the incompetence:malice ratio has a wide range of possible values. I don't know if I'm answering the question in the spirit in which it was asked, however.

I've rephrased that last question as a matter of consistency: "Do you believe that the levels of government malice OR stupidity/incompetence displayed regarding Katrina are consistent with whatever levels of government malice or incompetence/stupidity you believe were at work on 9/11?" to which I answer: (a) it's within the range of possibilities, given that the evidence remains unclear as to exactly what the Administration's involvement was on 9/11; (b) the issue of consistency between Katrina and 9/11 argues against the idea that Bushco were "just doing the best they could" on 9/11, since they clearly didn't do this for Katrina; (c) if the evidence pointed to a significantly different level of competence on 9/11 than it does for Katrina, would this be grounds for rejecting the evidence, grounds for trying to determine what might have changed, or grounds for suspecting that someone's "anomaly detectors are tuned too high"?

Please note, however, that I consider all of these issues to be very much diversions from the main question of whether a proper investigation is needed.

comment by David_J_Balan · 2010-03-01T01:29:17.462Z · LW(p) · GW(p)

I vote for malice with regard to Katrina. It's not that there were political gains to be had from that particular disaster happening and the then-government decided to let it happen out of malice. It's that their generally malicious political ideology was on balance a very successful one, but had as one of its weaknesses that it sometimes led to this kind of politically-harmful disaster.

comment by komponisto · 2010-02-28T05:48:43.770Z · LW(p) · GW(p)

The problem you have is the one shared by everyone from devotees of parapsychology to people who believe Meredith Kercher was killed in an orgy initiated by Amanda Knox: your prior on your theory is simply way too high.

Simply put, the events of 9/11 are so overwhelmingly more likely a priori to have been the exclusive work of a few terrorists than the product of a conspiracy involving the U.S. government, that the puzzling details you cite, even in their totality, fail to make a dent in a rational observer's credence of (more or less) the official story.

You might try asking yourself: if the official story were in fact correct, wouldn't you nevertheless expect that there would be strange facts that appear difficult to explain, and that these facts would be seized upon by conspiracy theorists, who, for some reason or another, were eager to believe the government may have been involved? And that they would be able to come up with arguments that sound convincing?

I want to stress that it is not the fact that the terrorists-only theory is officially sanctioned that makes it the (overwhelming) default explanation; as the Kercher case illustrates, sometimes the official story is an implausible conspiracy theory! Rather, it is our background knowledge of how reality operates -- which must be informed, among other things, by an acquaintance with human cognitive biases.

"Not silencing skeptical inquiry" is a great-sounding applause light, but we have to choose our battles, for reasons more mathematical than social: there are simply too many conceivable explanations for any given phenomenon, for it it be worthwhile to consider more than a very small proportion of them. Our choice of which to consider in the first place is thus going to be mainly determined by our prior probabilities -- in other words, our model of the world. Under the models of most folks here, 9/11 conspiracy theories simply aren't going to get any time of day.

If it's different for you, I'd be curious to know what kind of ideas with substantial numbers of adherents you would feel safe in dismissing without bothering to research. (If there aren't any, then I think you severely overestimate the tendency of people's beliefs to be entangled with reality.)

Replies from: Morendil, woozle, roland
comment by Morendil · 2010-02-28T10:32:50.355Z · LW(p) · GW(p)

"Not silencing skeptical inquiry" is a great-sounding applause light

The main issue with it has been noted multiple times by people like Dawkins: there is an effort asymmetry between plucking a false but slightly believable theory out of thin air, and actually refuting that same theory. Making shit up takes very little effort, while rationally refuting random made-up shit takes the same effort as rationally refuting theories whose refutation yields actual intellectual value. Creationists can open a hundred false arguments at very little intellectual cost, and if they are dismissed out of hand by the scientific establishment they get to cry "suppression of skeptical inquiry".

This feels related to pjeby's recent comments about curiosity. The mere feeling that "there's something odd going on here", followed by the insistence that other people should inquire into the odd phenomenon, isn't valid curiosity. That's only ersatz curiosity. Real curiosity is what ends up with you actually constructing a refutable hypothesis, and subjecting it to at least the kind of test that a random person from the Internet would perform - before actually publishing your hypothesis, and insisting that others should consider it carefully.

Inflicting random damage on other people's belief networks isn't promoting "skeptical inquiry", it's the intellectual analogue of terrorism.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-28T10:48:45.102Z · LW(p) · GW(p)

the intellectual analogue of terrorism.

I like this comment lots, but I think this comparison is inadvisable hyperbole.

Replies from: Morendil
comment by Morendil · 2010-02-28T11:42:55.520Z · LW(p) · GW(p)

Perhaps "asymmetric warfare" would be a better term than "terrorism". More general, and without the connotations which I agree make that last line something of an exaggeration.

comment by woozle · 2010-02-28T16:14:53.219Z · LW(p) · GW(p)

Again, you're addressing a straw man -- not my actual arguments. I do not claim that the government was responsible for 9/11; I believe the evidence, if properly examined, would probably show this -- but my interest is in showing that the existing explanations are not just inadequate but clearly wrong.

So, okay, how would you tell the difference between an argument that "sounds convincing" and one which should actually be considered rationally persuasive?

My use of the "applause light" was an attempt to use emotion to get through emotional barriers preventing rational examination. Was it inappropriate?

"There are simply too many conceivable explanations for any given phenomenon for it to be worthwhile to consider more than a very small proportion of them."

I agree. Many of the conclusions reached by the 9/11 Commission are, however, not among that small proportion. Many questions to which we need answers were not even addressed by the Commission. (Your statement here strikes me as a "curiosity stopper".)

Under the models of most folks here, 9/11 conspiracy theories simply aren't going to get any time of day.

This is the problem, yes. What's your point?

I'd be curious to know what kind of ideas with substantial numbers of adherents you would feel safe in dismissing without bothering to research.

None that I can think of. Again, what's your point? I am not "dismissing" the dominant conclusion, I am questioning it. I have, in fact, done substantial amounts of research (probably more than anyone reading this). If anyone is actually dismissing an idea with substantial numbers of adherents, it is those who dismiss "truthers" without actually listening to their arguments.

Are you arguing that "people are irrational, so you might as well give up"?

Replies from: komponisto, Morendil
comment by komponisto · 2010-02-28T18:49:48.182Z · LW(p) · GW(p)

I do not claim that the government was responsible for 9/11; I believe the evidence, if properly examined, would probably show this

This is a flat-out Bayesian contradiction.

So, okay, how would you tell the difference between an argument that "sounds convincing" and one which should actually be considered rationally persuasive?

It's not an easy problem, in general -- hence LW!

But we can always start by doing the Bayesian calculation. What's your prior for the hypothesis that the U.S. government was complicit in the 9/11 attacks? What's your estimate of the strength of each of those pieces of evidence you think is indicative of a conspiracy?

I'd be curious to know what kind of ideas with substantial numbers of adherents you would feel safe in dismissing without bothering to research.

None that I can think of. Again, what's your point? I am not "dismissing" the dominant conclusion, I am questioning it.

You misunderstood. I was talking about your failure to dismiss 9/11 conspiracy theories. I was asking whether there were any conspiracy theories that you would be willing to dismiss without research.

Replies from: woozle
comment by woozle · 2010-02-28T23:07:54.760Z · LW(p) · GW(p)

Again, I think this question is a diversion from what I have been arguing; its truth or falseness does not substantially affect the truth or falseness of my actual claims (as opposed to beliefs mentioned in passing).

That said, I made a start at a Bayesian analysis, but ran out of mental swap-space. If someone wants to suggest what I need to do next, I might be able to do it.

Also vaguely relevant -- this matrix is set up much more like a classical Bayesian word-problem: it lists the various pieces of evidence which we would expect to observe for each known manner in which a high-rise steel-frame building might run down the curtain and join the choir invisible, and then shows what was actually observed in the cases of WTC1, 2, and 7.

Is there enough information there to calculate some odds, or are there still bits missing?

You misunderstood. I was talking about your failure to dismiss 9/11 conspiracy theories. I was asking whether there were any conspiracy theories that you would be willing to dismiss without research.

No, not really. I think of that as my "job" at Issuepedia: don't dismiss anything without looking at it. Document the process of examination so that others don't have to repeat it, and so that those who aren't sure what to believe can quickly see the evidence for themselves (rather than having to go collect it) -- and can enter in any new arguments or questions they might have.

Does that process seem inherently flawed somehow? I'm not sure what you're suggesting by your use of the word "failure" here.

Replies from: komponisto
comment by komponisto · 2010-03-01T02:06:30.031Z · LW(p) · GW(p)

(Some folks have expressed disapproval of this conversation continuing in this thread; ironically, though, it's becoming more and more an explicit lesson in Bayesianism -- as this comment in particular will demonstrate. Nevertheless, after this comment, I am willing to move it elsewhere, if people insist.)

Again, I think this question is a diversion from what I have been arguing; its truth or falseness does not substantially affect the truth or falseness of my actual claims (as opposed to beliefs mentioned in passing)

You're in Bayes-land here, not a debating society. Beliefs are what we're interested in. There's no distinction between an argument that a certain point of view should be taken seriously and an argument that the point of view in question has a significant probability of being true. If you want to make a case for the former, you'll necessarily have to make a case for the latter.

That said, I made a start at a Bayesian analysis, but ran out of mental swap-space. If someone wants to suggest what I need to do next, I might be able to do it.

Here's how you do a Bayesian analysis: you start with a prior probability P(H). Then you consider how much more likely the evidence is to occur if your hypothesis is true (P(E|H)) than it is in general (P(E)) -- that is, you calculate P(E|H)/P(E). Multiplying this "strength of evidence" ratio P(E|H)/P(E) by the prior probability P(H) gives you your posterior (updated) probability P(H|E).

Alternatively, you could think in terms of odds: starting with the prior odds P(H)/P(~H), and considering how much more likely the evidence is to occur if your hypothesis is true (P(E|H)) than if it is false (P(E|~H)); the ratio P(E|H)/P(E|~H) is called the "likelihood ratio" of the evidence. Multiplying the prior odds by the likelihood ratio gives you the posterior odds P(H|E)/P(~H|E).

One of the two questions you need to answer is: by what factor do you think the evidence raises the probability/odds of your hypothesis being true? Are we talking twice as likely? Ten times? A hundred times?

If you know that, plus your current estimate of how likely your hypothesis is, division will tell you what your prior was -- which is the other question you need to answer.
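
To make the bookkeeping concrete, here is a minimal sketch of both forms in Python (the numbers are purely illustrative, not anyone's actual estimates):

```python
# Illustrative numbers only.
p_h = 0.01              # prior probability of the hypothesis H
p_e_given_h = 0.5       # how likely the evidence is if H is true
p_e_given_not_h = 0.05  # how likely the evidence is if H is false

# Probability form: P(H|E) = P(H) * P(E|H) / P(E)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = p_h * p_e_given_h / p_e

# Odds form: posterior odds = prior odds * likelihood ratio
prior_odds = p_h / (1 - p_h)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio

# The two forms agree.
assert abs(posterior - posterior_odds / (1 + posterior_odds)) < 1e-12
print(posterior)  # ~0.092
```

Note what the odds form makes explicit: here the evidence is ten times more likely under H than under ~H, so it multiplies the odds by ten -- and yet the low prior still leaves H quite improbable.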

Is there enough information there to calculate some odds, or are there still bits missing?

If there's enough information for you to have a belief, then there's enough information to calculate the odds. Because, if you're a Bayesian, that's what these numbers represent in the first place: your degree of belief.

I'm not sure what you're suggesting by your use of the word "failure" here

"Your failure to dismiss..." is simply an English-language locution that means "The fact that you did not dismiss..."

comment by Morendil · 2010-02-28T16:27:53.060Z · LW(p) · GW(p)

This thread doesn't belong under the "What is Bayesianism" post. I advise taking it to the older post that discussed "Truthers".

comment by roland · 2010-02-28T22:15:08.759Z · LW(p) · GW(p)

Simply put, the events of 9/11 are so overwhelmingly more likely a priori to have been the exclusive work of a few terrorists than the product of a conspiracy involving the U.S. government

Based on what facts do you think so?

it is our background knowledge of how reality operates -- which must be informed, among other things, by an acquaintance with human cognitive biases.

Where did you get your background knowledge in regards to terrorism and geopolitics from?

The way you argue is the way the average person thinks, because the average person has never been able to look behind the scenes of what happens in politics and instead gets his news from the media.

comment by Douglas_Knight · 2010-02-28T06:47:40.648Z · LW(p) · GW(p)

I would add to Eliezer's comment about A8 that it suggests that your community is bad at filtering good arguments from bad. Similarly, your failure to distance yourself from words like "Truther" is another failure of filtering. It suggests that you are less interested in being listened to than in passing some threshold that allows you to be upset about being ignored. It's like a Hindu whining about being persecuted for using a swastika. Maybe it's not "fair." Life isn't fair.

evidence was destroyed, evidence was ignored, explanations were non-explanations, and some things were just ignored altogether.

That's normal. Most news stories contain non-explanations. When there's an actual opposition, the non-explanations take over. If you want to calibrate, you could look at Holocaust and HIV denial (I'm told they are well described by the above quote), or any medical controversy.

Often it is best to silence incompetent skeptical inquiry.

Replies from: woozle
comment by woozle · 2010-02-28T17:40:42.202Z · LW(p) · GW(p)

I used the term "truther" as an attempt to be honest -- admitting that I pretty much agree with them, rather than trying to pretend to be a devil's advocate or fence-sitter.

I don't see how that's a failure of filtering.

The rest of your first paragraph is basically ad-hominem, as far as serious discussion of this issue goes. If I'm upset, I try not to let it dominate the conversation -- this is a rationalist community, after all, and I am a card-carrying rationalist -- but I also believe it to be justified, for reasons I explained earlier.

"That's normal" -- so are you in the "people aren't rational so you might as well give up" camp along with komponisto? What's your point?

If you want to calibrate, you could look at Holocaust and HIV denial (I'm told they are well described by the above quote), or any medical controversy.

Holocaust denial and HIV denial are easily refuted by the available evidence -- along with global warming denial, evolution denial, moon landing denial, and most religions. 9/11 anomalies manifestly are not, given that I've been trying for years to elicit rational rebuttals and have come up with precious little. Please feel free to send me more.

Often it is best to silence incompetent skeptical inquiry.

Do you really believe this? Why? Who determines that it is incompetent?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-28T18:34:33.025Z · LW(p) · GW(p)

Even the Frequentists (remember Bayes? It's a song about Bayes) agree that the probability of the evidence given the null hypothesis is an important number to consider. That is why I talk about what is normal, and why it is relevant that "Conspiracy theorists will find suspicious evidence, regardless of whether anything suspicious happened."

Holocaust denial and HIV denial are easily refuted by the available evidence

Yet people don't bother to refute them. Instead they pretend to respond.

Replies from: woozle
comment by woozle · 2010-02-28T19:18:49.553Z · LW(p) · GW(p)

The presence of conspiracy theorists neither proves nor refutes the likelihood of a conspiracy. Yes.

To the best of my knowledge, nobody was claiming that it did.

sings "You can claim anything you want... with Alice's Rhetoric" and walks out

comment by Kaj_Sotala · 2010-02-28T09:47:11.303Z · LW(p) · GW(p)

Sorry. I was merely trying to provide an example, not to snipe. If you want to provide a reformulation of that paragraph that better reflects your views, I'll change it.

Replies from: woozle, roland
comment by woozle · 2010-02-28T18:05:03.073Z · LW(p) · GW(p)

Kaj, I've always enjoyed your posts, so I felt bad picking on you and I apologize if I jumped down your throat. It seemed time to say something about this because I've been seeing it over and over again in lots of otherwise very rational/reality-based contexts, and your post finally pushed that button.

For reformulating your summary, I'd have to go read the original discussion, but you didn't link to it.

It's not that it needs to reflect my views, it's that I think we need a more... rigorous? systematic?... way of looking at controversies.

Yes, many of them can be dismissed without further discussion -- global warming denial, evolution denial, holocaust denial, et freaking cetera -- but there are specific reasons we can dismiss them, and I don't think those reasons apply to 9/11 (not even to the official story -- parts of it seem very likely to be true).

Proposed Criteria for Dismissing a Body of Belief

Terminology:

  • a "claim" is an argument favoring or supporting the body of belief
  • a "refutation" is a responding argument which shows the claim to be invalid (in a nested structure -- responses to refutations are also "claims", responses to those claims are also "refutations", etc)

Essential criteria:

  • the work has been done of examining the claims and refuting them
  • no claims remain unrefuted

A further cue, sufficient but not necessary:

  • those promoting the ideology never bring up the refutations of their claims unless forced to do so, even though there is reason to believe they are well aware of those refutations

Any objection to those ground rules? The first set is required so that the uninformed (e.g. those new to the discussion) will have a reference by which to understand why the seemingly-persuasive arguments presented in favor of the given belief system are, in fact, wrong; the final point is a sort of short-cut so we don't waste time dealing with people who are clearly being dishonest.

I submit that, by these rules, we can safely dismiss (at a minimum) global warming denial, evolution denial, Young Earth theories, Biblical literalism, holocaust denial, HIV denial, and anti-gay rhetoric... but not the 9/11 "truth movement".

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-28T21:02:36.401Z · LW(p) · GW(p)

Sure, no problem.

The original 9/11 discussion began as a thread in The Correct Contrarian Cluster and was then moved to The 9/11 Meta-Truther Conspiracy Theory.

Your criteria sound good in principle. My only problem with them is that determining when a claim has really been refuted isn't trivial, especially for people who aren't experts in the relevant domain.

comment by roland · 2010-03-01T00:08:59.822Z · LW(p) · GW(p)

Kaj,

I think it was not wise, and maybe even a bit provocative, to use an example where you know that differing views exist in this forum and that is a source of heated debates. If you are really concerned about it, as opposed to just signaling concern, may I suggest changing it yourself in accordance with the point you are trying to make? Don't put the burden on others.

EDIT: impolite -> provocative

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-03-01T08:47:36.493Z · LW(p) · GW(p)

I must admit that I'm not sure why you think it was unwise to use an example where differing views exist in this forum. That was kinda the point: differing priors lead to differing views.

I'm asking the offended party to provide a better formulation since obviously they know their own side better than I do, and are thus more capable of providing a more neutral formulation.

Replies from: roland
comment by roland · 2010-03-01T18:43:14.310Z · LW(p) · GW(p)

Others considered their prior for "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt",

If I understood you correctly, you write "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt" as a statement of fact. And this very 'fact' is one on which differing views exist and which has been debated on this forum. So in order to make a point you use as fact something that is under dispute, hence my comment. It would be possible to make the point you want to make without using any disputed facts or controversial/sensitive topics at all, and thereby avoid all the controversy.

Just to put it into numbers: of the 161 comments that this post has generated so far, 53 were in reply to woozle's observation on the 9/11 paragraph and 12 in reply to mine. This totals 53+1+12+1 = 67 comments, or 41%. Almost half the comments concern this issue, so at least numerically I think it is undeniable that the discussion has unfortunately been derailed. Btw, this wasn't my purpose, and I assume it wasn't woozle's either; in fact I regret having written anything at all because I think it is futile, and as an aside I have been downvoted by 40 points total. Not that I care that much about karma anyway, but I have the impression that I have been downvoted more as a form of punishment for my dissenting view than for not arguing according to the site's rules.

An alternative formulation I'm pulling out of my hat now, and I'm not a good writer:

Or take the debate about the existence of ghosts and other supernatural phenomena. Some people think that unexplained and otherwise suspicious things in an abandoned house have to mean that ghosts exist. Others considered their prior for "ghosts and supernatural entities exist and are ready to conduct physical operations that scare thousands of people around the world", judged that to be overwhelmingly unlikely, and thought it far more probable that something else caused the suspicious things.

One drawback of my alternative is that people who actually believe in ghosts might take offense, but AFAIK at least on this site this issue has never been a source of debate.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-03-01T22:13:57.949Z · LW(p) · GW(p)

If I understood you correctly you write "the government is ready to conduct massively risky operations that kill thousands of its own citizens as a publicity stunt" as a statement of fact.

I didn't write it as a fact, I wrote it as an assumption whose validity is being evaluated.

Here's an attempt to reword it to make this clearer:

"Others thought that the conspiracy argument required the government to be ready to conduct, as a publicity stunt, massively risky operations that kill thousands of its own citizens. They considered their prior for this hypothetical and judged it overwhelmingly unlikely in comparison to priors such as 'lots of unlikely-seeming things show up by coincidence once you dig deeply enough'."

Replies from: roland
comment by roland · 2010-03-01T22:52:11.531Z · LW(p) · GW(p)

Wording it that way makes it clearer that it is an assumption made by the hypothetical characters. Though based on our previous discussions, I suspect that it also reflects your own assumption, and maybe that's why you failed to clearly distinguish it from the characters' assumptions in the OP. At least woozle and I took objection to it. Of course, it is also possible that we are both reading-impaired.

comment by Kevin · 2010-02-28T06:00:19.842Z · LW(p) · GW(p)

What were your thoughts on Eliezer's Meta-Truther Conspiracy post? If there were a conspiracy, government inaction given foreknowledge of the attacks seems orders of magnitude more likely than any sort of controlled demolition, even for WTC7.

http://lesswrong.com/lw/1kj/the_911_metatruther_conspiracy_theory/

Replies from: woozle, roland
comment by woozle · 2010-02-28T17:05:11.436Z · LW(p) · GW(p)

What were your thoughts on Eliezer's Meta-Truther Conspiracy post?

He brings up a lot of hypotheses; let me see if I can (paraphrase and) respond to the major ones.

  • "9/11 conspiracy theorists" are actually acting on behalf of genuine government conspirators. Their job is to plant truly unbelievable theories about what happened so that people will line up behind the official story and dismiss any dissenters as "just loony conspiracy theorists".

Well, yes, there's evidence that this is what has happened; it is discussed extensively here.

  • The idea that the towers were felled by controlled demolition is loony.

No, it isn't. There is now a great deal of hard evidence pointing in this direction. It may turn out to be wrong, but it is absolutely not loony. See this for some lines of reasoning.

  • "This attack would've had the same political effect whether the buildings came down entirely or just the top floors burned."

If anyone really believes that, I'll be happy to explain why I don't.

  • The actual government involvement was to stand aside and allow the attack, which was in fact perpetrated by middle eastern agents, to succeed.

This is the lesser of the two major "conspiracy" theories, known as "let it happen on purpose" (LIHOP) and "make it happen on purpose" (MIHOP). MIHOP is generally presumed to be a core belief of all "truthers", though this is not in fact the case; there does not appear to be any clear consensus about which scenario is more likely, and (as I said earlier) the actual core belief which defines the "truther" movement is that the official story is significantly wrong and a proper investigation is needed in order to determine what really happened.

Imagine, for example, what the Challenger investigation would have found if Richard Feynman hadn't been there.

  • Conspiracy theorists are all (or mostly) anti-government types.

Well, I can't speak for the rest of them, but I'm not. I strongly dislike how the government operates, but I see it as an essential invention -- something to repair, not discard. The "truther" movement doesn't seem to have any strong political leanings, either, though I might have missed something.

  • Conspiracy theorists will find suspicious evidence, regardless of whether anything suspicious happened.

Ad-hominem. Is the evidence reasonable, or isn't it? If not, why not?

If there were a conspiracy, government inaction given foreknowledge of the attacks seems orders of magnitude more likely than any sort of controlled demolition, even for WTC7.

How likely does it seem that groups of foreign hijackers would succeed in taking control of 4 different planes using only box-cutters and piloting 3 of them into targets in two of the most heavily-guarded airspaces in the world, without even an attempt at interception? How likely is it that no heads would roll as a consequence of this security failure? How likely is it that the plane flown into the Pentagon would execute a difficult hairpin turn in order to fly into the most heavily-protected side of the building? How likely is it that no less than three steel-framed buildings would completely collapse from fire and mechanical damage, for the first time in history, all on the same day? How likely is it that they would not just fall to the ground towards the side most heavily damaged but instead seemingly explode straight downward and outward into microscopic dust particles, leaving almost nothing (aside from the steel girders) larger than a finger, long after the impacts and when the fires were clearly dying down? How likely is it that anyone would try to claim that this was totally what you would expect to happen, even though the buildings were designed to handle such an impact? How likely is it that this would result in pools of molten steel, when jet fuel doesn't burn hot enough to melt steel?

Shall I go on?

Replies from: dripgrind, Jack
comment by dripgrind · 2010-03-01T14:08:28.892Z · LW(p) · GW(p)

I was interested in your defence of the "truther" position until I saw this litany of questions. There are two main problems with your style of argument.

First, the quality of the evidence you are citing. Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case). Anyone who has read newspaper coverage of something they know about in detail will know that, even in the absence of malice, the coverage is less than accurate, especially in a big and confusing event.

When Jack pointed out that a particular piece of evidence you cite is wrong (hijackers supposedly not appearing on the passenger list), you rather snidely replied "You win a cookie!", before conceding that it only took a bit of research to find out that the supposed "anomaly" never existed. But then, instead of considering what this means for the quality of all your other evidence, you sarcastically cite the factoid that "6 of the alleged hijackers have turned up alive" as another killer anomaly, completely ignoring the possibility of identity theft/forged passports!

If you made a good-faith attempt to verify ALL the facts you rely on (rather than jumping from one factoid to another), I'm confident you would find that most of the "anomalies" have been debunked.

Second, the way you phrase all these questions shows that, even when you're not arguing from imaginary facts, you are predisposed to believe in some kind of conspiracy theory.

For example, you seem to think it's unlikely that hijackers could take over a plane using "only box-cutters", because the pilots were "professionals" who were somehow "trained" to fight and might not have found a knife sufficiently threatening. So you think two unarmed pilots would resist ten men who had knives and had already stabbed flight attendants to show they meant business? Imagine yourself actually facing down ten fanatics with knives.

The rest of your arguments that don't rely on debunked facts are about framing perfectly reasonable trains of events in terms that make them seem unlikely - in Less Wrong terms, "privileging the hypothesis". "How likely is it that no heads would roll as a consequence of this security failure?" - well, since the main failure in the official account was that agencies were "stove-piped" and not talking to each other and responsibilities were unclear, this is entirely consistent. Also, governments may be reluctant to implicitly admit that something had been preventable by firing someone straight away - see "Heckuva job, Brownie".

"How likely is it that no less than three steel-framed buildings would completely collapse from fire and mechanical damage, for the first time in history, all on the same day?" It would be amazing if they'd all collapsed from independent causes! But all you are really asking is "how likely is it that a steel-framed building will collapse when hit with a fully-fueled commercial airliner, or parts of another giant steel-framed building?" Since a comparable crash had never happened before, the "first time in history" rhetoric adds nothing to your argument.

"How likely is it that the plane flown into the Pentagon would execute a difficult hairpin turn in order to fly into the most heavily-protected side of the building?"

Well, since it was piloted by a suicidal hijacker who had been trained to fly a plane, I guess it's not unlikely that it would manoeuvre to hit the building. Perhaps a more experienced pilot, or A GOVERNMENT HOLOGRAM DRONE (which is presumably what you're getting at), would have planned an approach that didn't involve a difficult hairpin turn. And why wouldn't an evil conspiracy want the damage to the Pentagon to be spectacular and therefore aim for the least heavily protected side? Since, you know, they know it's going to happen anyway so they can avoid being in the Pentagon at all?

If the plane had manoeuvred to hit the least heavily-protected side of the building, truthers would argue that this also showed that the pilot had uncanny inside knowledge.

"How likely is it that [buildings] would ... explode straight downward?" Well, as a non-expert I would have said a priori that seems unlikely, but the structure of the towers made that failure mode the one that would happen. All you're asking is "how likely is it that the laws of physics would operate?" I'm sure there is some truther analysis disputing that, but then you're back into the realm of imaginary evidence.

"How likely is it that this would result in pools of molten steel?" How likely is it that someone observed pools of molten aluminium, or some other substance, and misinterpreted them as molten steel? After all, you've just said that the steel girders were left behind, so there is some evidence that the fire didn't get hot enough to melt (rather than weaken) steel.

Replies from: dripgrind, woozle
comment by dripgrind · 2010-03-01T14:18:52.535Z · LW(p) · GW(p)

Oh, and to try and make this vaguely on topic: say I was trying to do a Bayesian analysis of how likely woozle is to be right. Should I update on the fact that s/he is citing easily debunked facts like "the hijackers weren't on the passenger manifest", as well as on the evidence presented?

Replies from: LucasSloan
comment by LucasSloan · 2010-03-01T16:47:53.156Z · LW(p) · GW(p)

Yes. A bad standard of accepting evidence causes you to lose confidence in all of the other evidence.
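
One toy way to cash this out in Bayesian terms (a sketch with invented numbers): debunking a single claim lowers your probability that the source vets its claims at all, which in turn drags down the credibility of every other, untested claim from the same source.

```python
# All numbers invented for illustration.
p_careful = 0.5            # prior that the source vets its claims
p_debunk_if_careful = 0.1  # P(a sampled claim gets debunked | careful source)
p_debunk_if_sloppy = 0.6   # P(a sampled claim gets debunked | sloppy source)

def p_other_claim_true(p_careful):
    # A careful source's other claims are mostly right; a sloppy one's often aren't.
    return 0.9 * p_careful + 0.4 * (1 - p_careful)

print(p_other_claim_true(p_careful))  # 0.65 before the debunking

# Bayes' theorem, updating on one claim being debunked:
num = p_debunk_if_careful * p_careful
p_careful_post = num / (num + p_debunk_if_sloppy * (1 - p_careful))
print(p_careful_post)                      # ~0.14
print(p_other_claim_true(p_careful_post))  # ~0.47 after the debunking
```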

comment by woozle · 2010-03-01T16:34:04.805Z · LW(p) · GW(p)

Your standard of verification seems to be the Wikipedia standard - if you can find a "mainstream" source saying something, then you are happy to take it as fact (provided it fits your case).

I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.

The "Wikipedia standard" seems to work pretty well, though -- didn't someone do a study comparing Wikipedia's accuracy with Encyclopedia Britannica's, and they came out about even?

you rather snidely replied "You win a cookie!", before conceding that it only took a bit of research to find out that the supposed "anomaly" never existed. But then, instead of considering what this means for the quality of all your other evidence, you sarcastically cite the factoid that "6 of the alleged hijackers have turned up alive" as another killer anomaly, completely ignoring the possibility of identity theft/forged passports!

I wasn't intending to be snide; I apologize if it came across that way. I meant it sincerely: Jack found an error in my work, which I have since corrected. I see this as a good thing, and a vital part of the process of successive approximation towards the truth.

I also did not cite the 6 living hijackers as a "killer anomaly" but specifically said it didn't seem to be worth worrying about -- below the level of my "anomaly filter".

Just as an example of my thought-processes on this: I haven't yet seen any evidence that the "living hijackers" weren't simply people with the same names as some of those ascribed to the hijackers. I'd need to see some evidence that all (or most) of the other hijackers had been identified as being on the planes but none of those six before thinking that there might have been an error... and even then, so what? If those six men weren't actually on the plane, that is a loose end to be explored -- why did investigators believe they were on the plane? -- but hardly incriminating.

If you made a good-faith attempt to verify ALL the facts you rely on (rather than jumping from one factoid to another), I'm confident you would find that most of the "anomalies" have been debunked.

I verify when I can, but I am not paid to do this. This is why my site (issuepedia.org) is a wiki: so that anyone who finds errors or omissions can make their own corrections. I don't know of any other site investigating 9/11 which provides a wiki interface, so I consider this a valuable service (even if nobody else seems to).

For example, you seem to think it's unlikely that hijackers could take over a plane using "only box-cutters", because the pilots were "professionals" who were somehow "trained" to fight and might not have found a knife sufficiently threatening. So you think two unarmed pilots would resist ten men who had knives and had already stabbed flight attendants to show they meant business? Imagine yourself actually facing down ten fanatics with knives.

The idea that this is unlikely is one I have seen repeatedly, and it makes sense to me: if someone came at me with a box-cutter, I'd be tempted to laugh at them even if I wasn't responsible for a plane-load of passengers -- and I've never been good at physical combat. Furthermore, the "Pilots for 9/11 Truth" site -- which is operated by licensed pilots (it has a page listing its members by name and experience) -- backs up this statement.

And that's the best authority I can find. If you can find me an experienced pilot (or a military veteran, for that matter) who thinks that this is nonsense, I would very much like to hear from them.

The rest of your arguments that don't rely on debunked facts are about framing perfectly reasonable trains of events in terms that make them seem unlikely - in Less Wrong terms, "privileging the hypothesis". "How likely is it that...

I did that precisely as a counter to someone who was doing the same thing in the other direction -- to show that if you accepted "how likely..." as a valid form of argument, then the case is just as strong (if not stronger) for a conspiracy as it is against.

I do not accept "apparent likeliness" as a valid form of argument, and have said so elsewhere.

Well, since it was piloted by a suicidal hijacker who had been trained to fly a plane, I guess it's not unlikely that it would manouevre to hit the building.

You're missing the point; it would have been much easier to hit the other side, the one that wasn't heavily reinforced -- which would have caused more damage, too. On top of that, the maneuver necessary to turn around and hit the reinforced side was, by all accounts, an extremely difficult one which many experienced pilots would hesitate to attempt.

(I suppose one might argue that he overshot and had to turn around; not being skilled, he didn't realize how dangerous this was... so he missed that badly on the first attempt, and yet he was skillful enough to bullseye on the second attempt, skimming barely 10 feet above the ground without even grazing it?)

But that's just one of the "how likely"s, and I shouldn't even be rising to the bait of responding; it's not essential to my main point...

...which, as I have said elsewhere, is this: 9/11 "Truthers" may be wrong, but they are (mostly) not crazy. They have some very good arguments which deserve serious consideration.

Maybe each of their arguments have been successfully knocked down, somewhere -- but I have yet to see any source which does so. All I've been able to find are straw-man attacks and curiosity-stoppers.

Replies from: dripgrind
comment by dripgrind · 2010-03-02T00:09:12.457Z · LW(p) · GW(p)

I am "happy to take it as fact" until I find something contradictory. When that happens, I generally make note of both sources and look for more authoritative information. If you have a better methodology, I am open to suggestions.

So your standard of accepting something as evidence is "a 'mainstream source' asserted it and I haven't seen someone contradict it". That seems like you are setting the bar quite low. Especially because we have seen that your claim about the hijackers not being on the passenger manifest was quickly debunked (or at least, contradicted, which is what prompts you to abandon your belief and look for more authoritative information) by simple googling. Maybe you should, at minimum, try googling all your beliefs and seeing if there is some contradictory information out there.

I wasn't intending to be snide; I apologize if it came across that way. I meant it sincerely: Jack found an error in my work, which I have since corrected. I see this as a good thing, and a vital part of the process of successive approximation towards the truth.

I suggest that a better way to convey that might have been "Sorry, I was wrong" rather than "You win a cookie!" When I am making a sincere apology, I find that the phrase "You win a cookie!" can often be misconstrued.

The idea that this is unlikely is one I have seen repeatedly, and it makes sense to me: if someone came at me with a box-cutter, I'd be tempted to laugh at them even if I wasn't responsible for a plane-load of passengers -- and I've never been good at physical combat. Furthermore, the "Pilots for 9/11 Truth" site -- which is operated by licensed pilots (it has a page listing its members by name and experience) -- backs up this statement.

A box-cutter is a kind of sharp knife. A determined person with a sharp knife can kill you. An 11-year-old girl can inflict fatal injuries with a box-cutter - do you really think that five burly fanatics couldn't achieve the same thing on one adult? All the paragraph above establishes is that you - and maybe some licensed pilots - have an underdeveloped sense of the danger posed by knives.

I propose an experiment - you and a friend can prepare for a year, then I and nine heavyset friends will come at you with box-cutters (you will be unarmed). If we can't make you stop laughing off our attack, then I'll concede you are right. Deal?

Let's go into more details with this "plane manoeuvre" thing.

(I suppose one might argue that he overshot and had to turn around; not being skilled, he didn't realize how dangerous this was... so he missed that badly on the first attempt, and yet he was skillful enough to bullseye on the second attempt, skimming barely 10 feet above the ground without even grazing it?)

Well, what we should really ask is "given that a plane made a difficult manoeuvre to hit the better-protected side of the Pentagon, how much more likely does that make a conspiracy than other possible explanations?"

Here are some possible explanations of the observed event:

  1. The hijacker aimed at the less defended side, overshot, made a desperate turn back and got lucky.

  2. The hijacker wanted to fake out possible air defences, so had planned a sudden turn which he had rehearsed dozens of times in Microsoft Flight Simulator. Coincidentally, the side he crashed into was better protected.

  3. The hijacker was originally tasked to hit a different landmark, got lost, spotted the Pentagon, made a risky turn and got lucky. Coincidentally, the side he crashed into was better protected.

  4. A conspiracy took control of four airliners. The plan was to crash two of them into the WTC, killing thousands of civilians, one into a field, and one into the Pentagon. The conspirators decided that hitting part of the Pentagon that hadn't yet been renovated with sprinklers and steel bars was going a bit too far, so they made the relevant plane do a drastic manoeuvre to hit the best-protected side. There was an unspecified reason they didn't just approach from the best-protected side to start with.

  5. A conspiracy aimed to hit the less defended side of the Pentagon, but a bug in the remote override software caused the plane to hit the most defended side.

etc.

Putting the rest of the truther evidence aside, do the conspiracy explanations stand out as more likely than the non-conspiracy explanations?

...which, as I have said elsewhere, is this: 9/11 "Truthers" may be wrong, but they are (mostly) not crazy. They have some very good arguments which deserve serious consideration.

Maybe each of their arguments has been successfully knocked down, somewhere -- but I have yet to see any source which does so. All I've been able to find are straw-man attacks and curiosity-stoppers.

Well, in this thread alone, you have seen Jack knock down one of your arguments (hijackers not on manifest) to your own satisfaction. And yet you already seem to have forgotten that. Since you've already conceded a point, it's not true that the only opposition is "straw-man attacks and curiosity-stoppers". Do you think my point about alternate Pentagon scenarios is a straw man or a curiosity stopper? Is it possible that anyone arguing against you is playing whack-a-mole, and once they debunk argument A you will introduce unrelated argument B, and once they debunk that you will bring up argument C, and then once they debunk that you will retreat back to A again?

There's a third problem here - the truthers as a whole aren't arguing for a single coherent account of what really happened. True, you have outlined a detailed position (which has already changed during this thread because someone was able to use Google and consequently win a cookie), but you are actually defending the far fuzzier proposition that truthers have "some very good arguments which deserve serious consideration". This puts the burden on the debunkers, because even if someone shows that one argument is wrong, that doesn't preclude the existence of some good arguments somewhere out there. It also frees up truthers to pile on as many "anomalies" as possible, even if these are contradictory.

For example, you assert that it's suspicious that the buildings were "completely pulverized", and also that it's suspicious that some physical evidence - the passports - survived the collapse of the buildings. (And this level of suspicion is based purely on your intuition about some very extreme physical events which are outside of everyday experience. Maybe it's completely normal for small objects to be ejected intact from airliners which hit skyscrapers - have you done simulations or experiments which show otherwise?)

Anyway, this is all off-topic. I think you should do a post where you outline the top three truther arguments which deserve serious consideration.

comment by Jack · 2010-02-28T18:23:22.187Z · LW(p) · GW(p)
  • Conspiracy theorists will find suspicious evidence, regardless of whether anything suspicious happened.

Ad-hominem. Is the evidence reasonable, or isn't it? If not, why not?

As a matter of fact, there are conspiracy theorists about many important public events; cf. the moon landing, JFK, etc. Before there even was a 9/11 Truth movement, people could have predicted there would be conspiracy theorists: 9/11 is just the kind of society-changing event that generates conspiracy theories. Given that, the existence of conspiracy theorists pointing out anomalies in the official story isn't evidence that the official story is substantially wrong, since they would be doing so whether or not the official story was substantially wrong. It's like running a test for a disease that comes up positive 50% of the time if the patient has the disease, and positive 50% of the time if the patient doesn't. That test isn't actually testing for the disease, and these anomalies aren't actually providing evidence for or against the official account of 9/11.
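
To spell the analogy out (a minimal sketch; the 50% figures are the ones from the paragraph above): when a test is equally likely to come up positive whether or not the disease is present, the likelihood ratio is 1, and Bayes' theorem leaves the posterior exactly where the prior was.

```python
# Bayes' theorem applied to the hypothetical disease test described above.

def posterior(prior, p_pos_given_disease, p_pos_given_healthy):
    """P(disease | positive test result)."""
    num = p_pos_given_disease * prior
    return num / (num + p_pos_given_healthy * (1 - prior))

# A test that reads "positive" 50% of the time in either case is uninformative:
for prior in (0.01, 0.2, 0.5):
    print(prior, posterior(prior, 0.5, 0.5))  # posterior equals the prior every time
```

The same goes for the anomalies: if they would turn up whether or not the official story is wrong, observing them moves the posterior nowhere.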

(I think this comment is Bayesian enough that it is on topic, but the whole 9/11 conversation needs to be moved to the comments under Eliezer's Meta-truthers post. Feel free to just post a new comment there.)

Replies from: woozle
comment by woozle · 2010-02-28T19:14:47.647Z · LW(p) · GW(p)

...the existence of conspiracy theorists pointing out anomalies in the official story isn't evidence the official story is substantially wrong...

Correct. What is evidence that the official story is substantially wrong is, well, the evidence that the official story is substantially wrong. (Yes, I need to reorganize that page and present it better.)

Also, does anyone deny that some "conspiracy theories" do eventually turn out to be true?

(Can comment-threads be moved on this site?)

Replies from: Jack
comment by Jack · 2010-02-28T20:40:24.832Z · LW(p) · GW(p)

Comments can't be moved. Just put a hyperlink in this thread (at the top, ideally) and link back with a hyperlink in the new thread.

That list of evidence is almost all exactly the kind of non-evidence we're talking about. In any event like this, one would expect to find weird coincidences and things that can't immediately be explained -- no matter how the event actually happened. That means your evidence isn't really evidence. Start a new thread and I'll try to say more.

comment by roland · 2010-02-28T21:55:41.131Z · LW(p) · GW(p)

If there were a conspiracy, government inaction given foreknowledge of the attacks seems orders of magnitude more likely than any sort of controlled demolition, even for WTC7.

Can you provide us with numbers, please? In regard to WTC7, I bring up the following: how many steel-frame buildings have ever collapsed due to fire plus some structural damage? AFAIK the number is close to zero. And prior to WTC7 (and the twin towers), none had collapsed at essentially free-fall speed. So, applying Bayesian reasoning, the probability of a demolition is certainly much higher.

You certainly can't talk about "orders of magnitude more likely" without providing any numbers.
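
For what it's worth, the odds form of Bayes' theorem shows what "providing numbers" would cash out to here. A minimal sketch, with placeholder values that nobody in this thread has actually defended:

```python
# "A is orders of magnitude more likely than B" is a claim about the ratio
# P(A | evidence) / P(B | evidence). In odds form:
#     posterior odds = prior odds * likelihood ratio
# Both inputs below are placeholders, not anyone's actual estimates.

prior_odds = 50.0        # how much more plausible A ("inaction given foreknowledge")
                         # seemed than B ("controlled demolition") before the evidence
likelihood_ratio = 20.0  # how much better A predicts the observed evidence than B

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)    # 1000.0 -- three orders of magnitude, if you accept both inputs
```

Whether the "orders of magnitude" conclusion holds then turns entirely on whether those two inputs can be defended, which is exactly the demand being made above.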

Replies from: wnoise
comment by wnoise · 2010-02-28T22:15:12.079Z · LW(p) · GW(p)

The towers did not collapse "at essentially free-fall speed". I don't know about WTC7. Do you have evidence for this?

I'm perfectly willing to concede that parts of the government are both able and willing to do cover-ups, even in favor of monstrous acts. One can easily point to things like the Tuskegee syphilis experiment, which lasted for forty years, even after multiple whistle-blowing attempts.

I'm even fairly confident that parts of the government are whitewashing like crazy to cover their asses, and those of their allies. Several members of the 9/11 commission said that they could get no real cooperation in a number of areas, and were repeatedly stonewalled. But my highest-probability theory is that this is to cover up incompetence, rather than either "they deliberately let it happen" or "they made it happen". Everything happening because hijackers crashed planes into the two towers and the Pentagon is perfectly consistent with all three cases.

Replies from: roland
comment by roland · 2010-02-28T22:22:34.056Z · LW(p) · GW(p)

The towers did not collapse "at essentially free-fall speed". I don't know about WTC7. Do you have evidence for this?

Go to YouTube and search for "WTC7 collapse".

Everything happening because hijackers crashed planes into the two towers and the Pentagon is perfectly consistent with all three cases.

What hijackers? There were none on the passenger lists to begin with:

http://911review.org/Sept11Wiki/PassengerList.shtml

Replies from: Jack
comment by Jack · 2010-02-28T22:38:09.701Z · LW(p) · GW(p)

What hijackers? There were none on the passenger lists to begin with: http://911review.org/Sept11Wiki/PassengerList.shtml

Oh my Cthulhu! All this crap can be easily avoided if you just google these weird claims when you hear them. All the hijackers are on the actual manifests; we even know where they sat. CNN put out a list of people they had confirmed were on the flights -- that list wasn't anything official. I corrected Woozle on this like two hours ago. And I don't even study these things: I heard the claim, thought it sounded weird, and googled it.

Replies from: roland
comment by roland · 2010-02-28T22:50:27.963Z · LW(p) · GW(p)

Are you sure? I did some googling and found more controversies, there are even some "hijackers" that are still alive and well today.

http://news.bbc.co.uk/2/hi/middle_east/1559151.stm http://911research.wtc7.net/planes/evidence/passengers.html http://911research.wtc7.net/disinfo/deceptions/identities.html

Replies from: CarlShulman, Jack
comment by CarlShulman · 2010-02-28T23:16:58.471Z · LW(p) · GW(p)

Again, you make wacky claims without mentioning the devastating refutation.

http://en.wikipedia.org/wiki/Hijackers_in_the_September_11_attacks#Cases_of_mistaken_identity

Visibly deceptive and non-truth-seeking antics like this are not going to work around here. I suggest that you and woozle read up on cognitive biases and Bayesian epistemology before trying to argue for this here. One handy debiasing technique:

http://www.overcomingbias.com/2009/02/write-your-hypothetical-apostasy.html

If you do this well, and post your writeup on your personal website or the like, you might be able to get folk to take you seriously, or you might realize that the epistemic procedures you're using (selective search for confirming examples and "allied" sources, etc.) aren't very truth-tracking.

In the meantime, this stuff doesn't belong in the comments section of this post.

Replies from: roland
comment by roland · 2010-02-28T23:45:43.373Z · LW(p) · GW(p)

Visibly deceptive and non-truth-seeking antics like this are not going to work around here.

Can the same be said for ad-hominem attacks?

Well, I've googled some more and it seems that there is a lot of controversy regarding the passenger lists of the different planes. I think that this is a complicated issue and I'm not willing to spend more time to research/discuss it.

One handy debiasing technique: http://www.overcomingbias.com/2009/02/write-your-hypothetical-apostasy.html

You are suggesting I do this? Have you done it yourself, did it work?

In the meantime, this stuff doesn't belong in the comments section of this post.

Well, all this started as a comment on a paragraph of the original post. Maybe the OP shouldn't have chosen an example that is politically sensitive and surrounded by considerable controversy.

Replies from: CarlShulman
comment by CarlShulman · 2010-03-01T00:18:20.454Z · LW(p) · GW(p)

One handy debiasing technique: http://www.overcomingbias.com/2009/02/write-your-hypothetical-apostasy.html

You are suggesting I do this? Have you done it yourself, did it work?

Yes, I've used it with respect to several scientific and ideological issues where I had significant incentives or potential biases favoring one view or another. It helps bring into sharp focus issues that were previously not salient. In psych experiments it's one of the few immediately effective debiasing techniques.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-03-01T01:00:54.585Z · LW(p) · GW(p)

Have you posted your hypothetical apostasies somewhere? Posting some sample hypothetical apostasies, and perhaps follow-up analyses of how writing them reduced the authors' biases, would probably increase others' motivation to try this seriously. (I commented on Nick's post that I tried his suggestion, but didn't get very far.)

Replies from: CarlShulman
comment by CarlShulman · 2010-03-01T01:41:36.640Z · LW(p) · GW(p)

That's somewhat complicated by the fact that I've used it most effectively with regard to things you can't say. But I am going to add this to my task queue, and either post one of my previous ones or do a new one. I've been considering ones on consequentialism and weirdness.

comment by Jack · 2010-02-28T23:19:25.327Z · LW(p) · GW(p)

Er, okay. I guess in addition to knowing how to google, you also have to know how to read story updates and not trust conspiracy websites with HTML straight out of 1998. They were cases of mistaken identity.

Edit: If anyone wants to continue talking about this, move it to the other post. This one has been derailed enough.

comment by roland · 2010-02-28T21:35:04.869Z · LW(p) · GW(p)

I just read this comment. I'm so glad that I'm not the only one who is very skeptical of the official account; here is the comment I wrote: http://lesswrong.com/lw/1to/what_is_bayesianism/1omc

comment by SoulAllnighter · 2010-09-28T05:57:40.661Z · LW(p) · GW(p)

I guess this is the wrong place for this comment, but I don't know where else to put it, and after reading the extensive threads on 9/11 below I felt this was a valid point. If someone objects to this being here, I'll move it somewhere more appropriate. It looks like I'm a bit out of date with the discussion anyway.

Firstly, I should say I'm still very undecided on the matter. I've heard a lot of convincing evidence for both sides of the story, and I know many intelligent people, whose opinions I respect, on both sides of the fence. I do, however, think that the issue is often dismissed too easily.

Many of the criticisms of the 9/11 cover-up theories still implicitly use arguments of ridicule, like "oh yeah, sure, it was all entirely plotted by top US officials who collaborated in this mass conspiracy". As woozle said, the main argument is that there are major holes in the official story, and that is a much harder claim to refute.

A common response to this is "well, of course there are holes; it's a complex official story, and if you look hard enough you're bound to find inconsistencies". Is that really satisfactory? Perhaps if you were investigating a bank robbery or tax fraud, but with an event of this significance and scale, I think any inconsistencies, and even a remote possibility of foul play, should be taken far more seriously.

Secondly, people seem to have an ill-informed, far too high respect for government. These people make manipulative and often very damaging decisions every day. A major argument in the thread below is that we should assign an extremely low prior to a government conspiracy, which should essentially cause us to disregard the possibility. But anyone who has done any real research on 9/11 should have stumbled across Operation Northwoods (sorry, I don't know how to link in these threads: http://en.wikipedia.org/wiki/Operation_Northwoods). This was an uncovered secret government plan to stage a terrorist attack against America and blame it on Cuba, in order to gain public support for invading Cuba. Ring any bells? There is no controversy regarding the existence of this plan, which was eventually cancelled. We know the government is capable of thinking this way, so why should we have such a low prior for this possibility?

Frankly, I'm a bit sick of the whole "it's in the past" attitude. We now know that the invasion of Iraq was totally illegal, and that the American government, and my Australian government, were entirely aware that there were no weapons of mass destruction. But what is our response? Oh well, they fooled us good, hey. I can't believe how easily they were let off the hook for deceiving a nation to start a war and cause thousands of civilian casualties. I know this is off topic, but just consider the very possibility that there was any level of involvement, or at least prior knowledge of the attacks, at any level of government. Surely these allegations should not be dismissed as easily as they are, given that, from what I have heard, there are undeniably some real problems with the official story.