Updating, part 1: When can you change your mind? The binary model

post by PhilGoetz · 2010-05-13T17:55:12.768Z · LW · GW · Legacy · 156 comments

I was recently disturbed by my perception that, despite years of studying and debating probability problems, the LessWrong community as a whole has not markedly improved its ability to get the right answer on them.

I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.

But can that possibly work?  How can someone who isn't already highly accurate identify other people who are highly accurate?

Aumann's agreement theorem (allegedly) says that Bayesians with the same priors agree.  But it doesn't say that reaching agreement helps.  Under what circumstances does revising your opinions, by updating in response to people you consider reliable, actually improve your accuracy?

To find out, I built a model of updating in response to the opinions of others.  It did, eventually, show that Bayesians improve their collective opinions by updating in response to the opinions of other Bayesians.  But this turns out not to depend on them satisfying the conditions of Aumann's theorem, or on doing Bayesian updating.  It depends only on a very simple condition, established at the start of the simulation.  Can you guess what it is?

I'll write another post describing and explaining the results if this post receives a karma score over 10.

That's getting a bit ahead of ourselves, though.  This post models only non-Bayesians, and the results are very different.

Here's the model:

Algorithm:

# Loop over T timesteps
For t = 0 to T-1 {

    # Loop over G people
    For i = 0 to G-1 {

        # Loop over N problems
        For v = 0 to N-1 {

            If (t == 0)
                # Special initialization for the first timestep
                If (random in [0..1] < p_i) g_ivt := 1;  Else g_ivt := 0
            Else {
                # Product over all j of the probability that the answer to v is 1, given j's answer and estimated accuracy
                m1 := Π_j [ p_ij*g_jv(t-1) + (1-p_ij)*(1-g_jv(t-1)) ]

                # Product over all j of the probability that the answer to v is 0, given j's answer and estimated accuracy
                m0 := Π_j [ p_ij*(1-g_jv(t-1)) + (1-p_ij)*g_jv(t-1) ]

                p1 := m1 / (m0 + m1)          # Normalize

                If (p1 > .5) g_ivt := 1;  Else g_ivt := 0
            }
        }

        # Loop over G other people
        For j = 0 to G-1
            # Compute person i's estimate of person j's accuracy
            p_ij := { Σ_{s in [0..t]} Σ_{v in [s..N]} [ g_ivt*g_jvs + (1-g_ivt)*(1-g_jvs) ] } / N
    }
}

p1 is the probability that agent i assigns to problem v having the answer 1.  Each term p_ij*g_jv(t-1) + (1-p_ij)*(1-g_jv(t-1)) is the probability of problem v having answer 1 computed using agent j's beliefs, by adding either the probability that j is correct (if j believes it has answer 1), or the probability that j is wrong (if j believes it has answer 0).  Agent i assumes that everyone's opinions are independent, and multiplies all these probabilities together.  The result, m1, is very small when there are very many agents (m1 is on the order of 0.5^G), so it is normalized by computing a similar product m0 for the probability that v has answer 0, and setting p1 = m1 / (m0 + m1).
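To make the arithmetic concrete, here is a minimal Python sketch (mine, not the original Perl) of one agent's update on one problem; prev_guess[j] stands for g_jv(t-1) and est_acc[j] for p_ij, both names invented for the sketch.

def update_guess(prev_guess, est_acc):
    # One agent's update on one problem, following the m1/m0 normalization above.
    m1 = 1.0   # product for "the answer is 1"
    m0 = 1.0   # product for "the answer is 0"
    for g, p in zip(prev_guess, est_acc):
        m1 *= p * g + (1 - p) * (1 - g)
        m0 *= p * (1 - g) + (1 - p) * g
    p1 = m1 / (m0 + m1)    # normalize
    return 1 if p1 > 0.5 else 0

# Example: two informants say 1, one says 0, all judged modestly reliable.
print(update_guess([1, 1, 0], [0.7, 0.6, 0.55]))   # prints 1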

The sum of sums to compute p_ij (i's opinion of j's accuracy) computes the fraction of problems, summed over all previous time periods, on which person j has agreed with person i's current opinions.  It sums over previous time periods because otherwise, p_ii = 1.  By summing over previous times, if person i ever changes its mind, that will decrease p_ii.  (The inner sum starts from s instead of 0 to accommodate an addition to the model that I'll make later, in which the true answer to problem t is revealed at the end of time t.  Problems whose answer is public knowledge should not be considered in the sum after the time they became public knowledge.)
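Here, under the same caveats, is a sketch of the accuracy-estimate step in Python; guesses[x][s][v] is assumed to hold g_xvs. The division by N copies the formula as written, which (as Caspian notes in the comments below) lets the estimate exceed 1 once more than one timestep is being summed.

def estimate_accuracy(guesses, i, j, t, N):
    # Person i's estimate p_ij of person j's accuracy at the end of timestep t.
    total = 0.0
    for s in range(t + 1):        # all timesteps 0..t
        for v in range(s, N):     # inner sum starts at s, as in the formula above
            gi = guesses[i][t][v]                    # i's current opinion on problem v
            gj = guesses[j][s][v]                    # j's opinion on problem v at time s
            total += gi * gj + (1 - gi) * (1 - gj)   # 1 if they agree, 0 if not
    return total / N              # denominator as written in the post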

Now, what distribution should we use for the p_i?

There is an infinite supply of problems.  Many are so simple that everyone gets them right; many are so hard or incomprehensible that everyone performs randomly on them; and there are many, such as the Monty Hall problem, that most people get wrong because of systematic bias in our thinking.  The population's average performance p_ave over all possible problems thus falls within [0 .. 1].

I chose to model person accuracy instead of problem difficulty.  I say "instead of", because you can use either person accuracy or problem difficulty to set p_ave. Since a critical part of what we're modeling is person i's estimate of person j's accuracy, person j should actually have an accuracy.  I didn't model problem difficulty partly because I assume we only talk about problems of a particular level of difficulty; partly because a person in this model can't distinguish between "Most people disagree with me on this problem; therefore it is difficult" and "Most people disagree with me on this problem; therefore I was wrong about this problem".

Because I assume we talk mainly about high-entropy problems, I set p_ave = 0.5.  I do this by drawing each p_i from a normal distribution with a mean of 0.5, truncated at 0.05 and 0.95.  (I used a standard deviation of 0.15; this isn't important.)
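A sketch of that draw in Python, assuming the truncation is done by resampling (clamping to [0.05, 0.95] would be the other natural reading):

import random

def draw_accuracy(mean=0.5, sd=0.15, lo=0.05, hi=0.95):
    # person accuracy p_i: normal(0.5, 0.15), kept inside [0.05, 0.95]
    while True:
        p = random.gauss(mean, sd)
        if lo <= p <= hi:
            return p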

Because this distribution of p_i is symmetric around 0.5, there is no way to know whether you're living in the world where the right answer is always 1, or where the right answer is always 0.  This means there's no way, under this model, for a person to know whether they're a crackpot (usually wrong) or a genius (usually right).

Note that these agents don't satisfy the preconditions for Aumann agreement, because they produce 0/1 decisions instead of probabilities, and because some agents are biased to perform worse than random.  It's worth studying non-Bayesian agents before moving on to a model satisfying the preconditions for the theorem, if only because there are so many of them in the real world.

An important property of this model is that, if person i is highly accurate, and knows it, p_ii will approach 1, greatly reducing the chance that person i will change their mind about any problem.  Thus, the more accurate a person becomes, the less able they are to change their minds when they are wrong - and this is not an error.  It's a natural limit on the speed at which one can converge on truth.

An obvious problem is that at t=0, person i will see that it always agrees with itself, and set p_ii = 1.  By induction, no one will ever change their mind.  (I consider this evidence for the model, rather than against it.)

The question of how people ever change their mind is key to this whole study.  I use one of two additions to the model to let people change their mind; one of them, mentioned above, is revealing the true answer to problem t at the end of time t.

This model is difficult to solve analytically, so I wrote a Perl script to simulate it.
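Since the Perl script itself isn't included here, the following is a self-contained Python sketch of the base model as I read it - a re-implementation guess, not the author's code. It assumes the correct answer to every problem is 1 (so a person with accuracy p_i answers correctly with probability p_i at t=0), uses the truncated-normal accuracies described above, computes the accuracy estimates after all agents have updated in a timestep, and normalizes those estimates by the number of terms summed rather than by N (the reading Caspian suggests in the comments).

import random

# Illustrative sizes - the post doesn't specify G, N, T, so these are guesses.
G, N, T = 20, 50, 10
SD, LO, HI = 0.15, 0.05, 0.95

def draw_accuracy():
    # p_i ~ normal(0.5, 0.15), truncated to [0.05, 0.95] by resampling
    while True:
        p = random.gauss(0.5, SD)
        if LO <= p <= HI:
            return p

acc = [draw_accuracy() for _ in range(G)]      # true accuracy p_i of each person
est = [[0.0] * G for _ in range(G)]            # est[i][j] = p_ij
guesses = []                                   # guesses[t][i][v] = g_ivt

for t in range(T):
    current = [[0] * N for _ in range(G)]
    for i in range(G):
        for v in range(N):
            if t == 0:
                # the right answer is taken to be 1, so i is right with probability p_i
                current[i][v] = 1 if random.random() < acc[i] else 0
            else:
                m1 = m0 = 1.0
                for j in range(G):
                    g, p = guesses[t - 1][j][v], est[i][j]
                    m1 *= p * g + (1 - p) * (1 - g)
                    m0 *= p * (1 - g) + (1 - p) * g
                denom = m0 + m1
                current[i][v] = guesses[t - 1][i][v] if denom == 0 else (1 if m1 / denom > 0.5 else 0)
    guesses.append(current)

    # Re-estimate everyone's accuracy from agreement with i's current opinions.
    # NOTE: normalized by the number of terms, not by N as literally written above.
    for i in range(G):
        for j in range(G):
            total = count = 0
            for s in range(t + 1):
                for v in range(s, N):
                    gi, gj = guesses[t][i][v], guesses[s][j][v]
                    total += gi * gj + (1 - gi) * (1 - gj)
                    count += 1
            est[i][j] = total / count

    frac_right = sum(sum(row) for row in current) / (G * N)
    print("t=%d  fraction answering 1 (the right answer): %.3f" % (t, frac_right))

As the post observes, in this base version p_ii is 1 from the first timestep onward, so everyone freezes on their initial guesses and the printed fraction stays flat; the additions to the model (such as revealing answers) are what allow minds to change.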

156 comments


comment by Morendil · 2010-05-13T20:29:37.528Z · LW(p) · GW(p)

What matters isn't so much finding the right answer, as having the right approach.

At least as far as I'm concerned, that's the main reason to spend much time here. I don't care whether the answer to Sleeping Beauty is 1/2 or 1/3, that's a mere curio.

I care about the general process whereby you can take a vague verbal description like that, map it into a formal expression that preserves the properties that matter, and use that form to check my intuitions. That's of rather more value, since I might learn how my intuitions could mislead me in situations where that matters.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-13T20:40:12.832Z · LW(p) · GW(p)

The purpose of this post is to ask how you intend that to improve your accuracy. You plan to check your calculations against your intuitions. But the disagreements we have on Sleeping Beauty are disagreements of intuitions, that cause people to perform different calculations. There's no way comparing your intuitions to your calculations can make progress in that situation.

Replies from: Morendil
comment by Morendil · 2010-05-13T20:58:32.470Z · LW(p) · GW(p)

Well, I plan to improve my accuracy by learning to perform, not just any old calculation that happens to be suggested by my intuitions, but the calculations which reflect the structure of the situation. Some of our intuitions about what calculations are appropriate could well be wrong.

The calculations are secondary; as I sometimes tell my kids, the nice thing about math is that you're guaranteed to get a correct answer by performing the operations mechanically, as long as you've posed the question properly to start with. How to pose the question properly in the language of math is what I'd like to learn more of.

Someone may have gotten the right answer to Sleeping Beauty by following a flawed argument: I want to be able to check their calculations, and be able to find the answer myself in similar but different problems.

comment by Dagon · 2010-05-13T20:38:00.806Z · LW(p) · GW(p)

Is there any real-group analog to the answer to problem t becoming mutual knowledge to the entire group? I can't think of a single disagreement here EVER to which the answer has been revealed. Further, I don't expect much revelation until Omega actually shows up.

Replies from: thomblake, PhilGoetz
comment by thomblake · 2010-05-13T21:25:11.676Z · LW(p) · GW(p)

Drawing Two Aces might count.

A bunch of people got the wrong answer, and it was presumed to be against your naive intuitions if you don't know how to do the math. But any doubters understood the right answer once it was pointed out.

Replies from: PhilGoetz, Dagon
comment by PhilGoetz · 2010-05-13T22:57:50.721Z · LW(p) · GW(p)

Thanks for recollecting that. That was a case where someone wrote a program to compute the answer, which could be taken as definitive.

I just counted up the first answers people gave, and their initial answers were 29 to 3 in favor of the correct answer. So there wasn't much disagreement to begin with.

comment by Dagon · 2010-05-14T14:50:41.494Z · LW(p) · GW(p)

I don't think that qualified. There was no revelation, just an agreement on process and on result. That was not a question analogous to PhilGoetz's model, where some agents had more accurate estimates, and you use the result to determine how accurate they might be on other topics.

comment by PhilGoetz · 2010-05-13T21:11:00.316Z · LW(p) · GW(p)

I can't think of a single disagreement here to which the answer has been revealed, either. But - spoiler alert - having the answers to numerous problems revealed to at least some of the agents is the only factor I've found that can get the simulated agents to improve their beliefs.

It's difficult to apply the simulation results to people, who can, in theory, be convinced of something by following a logical argument. The reasons why I think we can model that with a simple per-person accuracy level might need a post of their own.

Replies from: PhilGoetz, RobinZ
comment by PhilGoetz · 2010-05-14T15:29:08.528Z · LW(p) · GW(p)

having the answers to numerous problems revealed to at least some of the agents is the only factor I've found that can get the simulated agents to improve their beliefs.

Oops - that statement was based on a bug in my program.

comment by RobinZ · 2010-05-13T21:35:59.739Z · LW(p) · GW(p)

The usual situation does involve agents changing their answers as time passes differentially towards "true" - your model is extremely simplified, but [edit: may be] accurate enough for the purpose.

comment by garethrees · 2010-05-13T19:51:25.602Z · LW(p) · GW(p)

The Sleeping Beauty problem and the other "paradoxes" of probability are problems that have been selected (in the evolutionary sense) because they contain psychological features that cause people's reasoning to go wrong. People come up with puzzles and problems all the time, but the ones that gain prominence and endure are the ones that are discussed over and over again without resolution: Sleeping Beauty, Newcomb's problem, the two-envelope problem.

So I think there's something valuable to be learned from the fact that these problems are hard. Here are my own guesses about what makes the Sleeping Beauty problem so hard.

First, there's ambiguity in the problem statement. It usually asks about your "credence". What's that? Well, if you're a Bayesian reasoner, then "credence" probably means something like "subjective probability (of a hypothesis H given data D), defined by p(H|D) = p(D|H) p(H) / p(D)". But some other reasoners take "credence" to mean something like "expected proportion of observations consistent with data D in which the hypothesis H was confirmed".

In most problems these definitions give the same answer, so there's normally no need to worry about the exact definition. But the Sleeping Beauty problem pushes a wedge between them: the Bayesians should answer ½ and the others ⅓. This can lead to endless argument between the factions if the underlying difference in definitions goes unnoticed.

Second, there's a psychological feature that makes some Bayesian reasoners doubt their own calculation. (You can try saying "shut up and calculate" to these baffled reasoners but while that might get them the right answer, it won't help them resolve their bafflement.) The problem somehow persuades some people to imagine themselves as an instance of Sleeping Beauty selected uniformly from the three instances {(heads,Monday), (tails,Monday), (tails,Tuesday)}. This appears to be a natural assumption that some reasoners are prepared to make, even though there's no justification for it in the problem description.

Maybe it's the principle of indifference gone wrong: the three instances are indistinguishable (to you) but that doesn't mean the one you are experiencing was drawn from a uniform distribution.

Replies from: PhilGoetz, Morendil, Jonathan_Graehl
comment by PhilGoetz · 2010-05-13T20:36:57.998Z · LW(p) · GW(p)

Most of what you said here has already been said, and rebutted, in the comments on the Sleeping Beauties post, and in the followup post by Jonathan Lee. It would be polite, and helpful, to address those rebuttals. Simply restating arguments, without acknowledging counterarguments, could be a big part of why we don't seem to be getting anywhere.

Replies from: garethrees
comment by garethrees · 2010-05-13T21:24:56.662Z · LW(p) · GW(p)

I did check both threads, and as far as I could see, nobody was making exactly this point. I'm sorry that I missed the comment in question: the threads were very long. If you can point me at it, and the rebuttal, then I can try to address it (or admit I'm wrong).

(Even if I'm wrong about why the problem is hard, I think the rest of my comment stands: it's a problem that's been selected for discussion because it's hard, so it might be productive to try to understand why it's hard. Just as it helps to understand our biases, it helps to understand our errors.)

Replies from: timtyler, PhilGoetz, PhilGoetz
comment by timtyler · 2010-05-13T22:02:33.035Z · LW(p) · GW(p)

Bayesians should not answer ½. Nobody should answer ½: that's the wrong answer.

If your interpretation of the word "credence" leads you to answer ½, you are fighting with the rest of the community over the definition of the concept of subjective probability.

Replies from: Jack, LucasSloan, PhilGoetz, garethrees
comment by Jack · 2010-05-14T05:02:31.599Z · LW(p) · GW(p)

How is this a constructive comment? You're just stating your position again. We all already know your position. I can just as easily say:

Bayesians should not answer 1/3. Nobody should answer 1/3: that's the wrong answer.

If your interpretation of the word "credence" leads you to answer 1/3, you are fighting with the rest of the community over the definition of the concept of subjective probability.

If the entire scientific establishment is using subjective probability in a different way, by all means, show us! But don't keep asserting it like it has been established. That isn't productive.

Replies from: timtyler
comment by timtyler · 2010-05-14T09:52:27.809Z · LW(p) · GW(p)

The point of the comment was to express disapproval of the idea that scientists had multiple different conceptions of subjective probability - and that the Bayesian approach gave a different answer to other ones - and to highlight exactly where I differed from garethrees - mostly for his benefit.

comment by LucasSloan · 2010-05-14T05:07:03.302Z · LW(p) · GW(p)

you are fighting with the rest of the community over the definition of the concept of subjective probability.

There is at least a minority that believes the term "subjective probability" isn't meaningful.

Replies from: timtyler, Jack
comment by timtyler · 2010-05-14T08:33:41.806Z · LW(p) · GW(p)

I only scanned that - and I don't immediately see the relationship to your comment - but it seems as though it would be a large digression of dubious relevance.

comment by Jack · 2010-05-14T05:24:00.553Z · LW(p) · GW(p)

Or whether or not it is meaningful, it is certainly fraught with all the associated confusion of personal identity, the arrow of time and information. I don't think anyone can claim to understand it well enough to assert that those of us who see the Sleeping Beauty problem entailing a different payoff scheme are obviously and demonstrably wrong. We know how to answer related decision problems but no one here has established the right or the best way to assign the payoff scheme to credence. And people seem too frustrated by the fact that anyone could disagree with them to actually consider the pros and cons of using other payoff schemes.

comment by PhilGoetz · 2010-05-14T04:32:41.010Z · LW(p) · GW(p)

Does "the community" mean some scientific community outside of LessWrong? Because LW seems split on the issue.

Replies from: timtyler
comment by timtyler · 2010-05-14T08:35:24.327Z · LW(p) · GW(p)

Well, yes, sure. "That's just peanuts to space".

comment by garethrees · 2010-05-13T22:24:11.591Z · LW(p) · GW(p)

That's interesting. But then you have to either abandon Bayes' Law, or else adopt very bizarre interpretations of p(D|H), p(H) and p(D) in order to make it come out. Both of these seem like very heavy prices to pay. I'd rather admit that my intuition was wrong.

Is the motivating intuition behind your comment the idea that your subjective probability should be the same as the odds you'd take in a (fair) bet?

Replies from: timtyler
comment by timtyler · 2010-05-13T22:45:08.993Z · LW(p) · GW(p)

Subjective probabilities are traditionally analyzed in terms of betting behavior. Bets that are used for elucidating subjective probabilities are constructed using "scoring rules". It's a standard way of revealing such probabilities.

I am not sure what you mean by "abandoning Bayes' Law", or using "bizarre" interpretations of probability. In this case, the relevant data includes the design of the experiment - and that is not trivial to update on, so there is scope for making mistakes. Before questioning the integrity of your tools, is it possible that a mistake was made during their application?

Replies from: garethrees
comment by garethrees · 2010-05-13T22:59:39.644Z · LW(p) · GW(p)

Bayes' Law says, p(H|D) = p(D|H) p(H) / p(D) where H is the hypothesis of interest and D is the observed data. In the Sleeping Beauty problem H is "the coin lands heads" and D is "Sleeping Beauty is awake". p(H) = ½, and p(D|H) = p(D) = 1. So if your intuition tells you that p(H|D) = ⅓, then you have to either abandon Bayes' Law, or else change one or more of the values of p(D|H), p(H) and p(D) in order to make it come out.

(We can come back to the intuition about bets once we've dealt with this point.)

Replies from: Morendil, timtyler
comment by Morendil · 2010-05-13T23:11:36.211Z · LW(p) · GW(p)

Hold on - p(D|H) and P(D) are not point values but probability distributions, since there is yet another variable, namely what day it is.

Replies from: Cyan
comment by Cyan · 2010-05-14T04:14:55.315Z · LW(p) · GW(p)

The other variable has already been marginalized out.

Replies from: timtyler, Morendil
comment by timtyler · 2010-05-14T08:49:57.106Z · LW(p) · GW(p)

So long as it is not Saturday. And the idea that p(H) = ½ comes from Saturday.

comment by Morendil · 2010-05-14T08:36:43.163Z · LW(p) · GW(p)

But marginalizing over the day doesn't work out to P(D)=1 since on some days Beauty is left asleep, depending on how the coin comes up.

Here is (for a three-day variant) the full joint probability distribution, showing values which are in accordance with Bayes' Law but where P(D) and P(D|H) are not the above. We can't "change the values" willy-nilly, they fall out of formalizing the problem.

Frustratingly, I can't seem to get people to take much interest in that table, even though it seems to solve the freaking problem. It's possible that I've made a mistake somewhere, in which case I'd love to see it pointed out.

Replies from: Cyan, timtyler
comment by Cyan · 2010-05-14T18:14:24.468Z · LW(p) · GW(p)

I was just talking about the notation "p(D|H)" (and "p(D)"), given that D has been defined as the observed data. Then any extra variables have to have been marginalized out, or the expression would be p(D, day | H). I didn't mean to assert anything about the correctness of the particular number ascribed to p(D|H).

I did look at the table, but I missed the other sheets, so I didn't understand what you were arguing.

comment by timtyler · 2010-05-14T08:58:38.295Z · LW(p) · GW(p)

It seems to say that p(heads|woken) = 0.25. A whole new answer :-(

Replies from: Morendil
comment by Morendil · 2010-05-14T09:03:50.012Z · LW(p) · GW(p)

That's in the three-day variant; it also has a sheet with the original.

Replies from: timtyler
comment by timtyler · 2010-05-14T09:24:41.103Z · LW(p) · GW(p)

It has three sheets. The respective conclusions are: p(heads|woken) = 0.25, p(heads|woken) = 0.33 and p(heads|woken) = 0.50. One wonders what you are trying to say.

Replies from: Morendil
comment by Morendil · 2010-05-14T09:56:23.572Z · LW(p) · GW(p)

That 1/3 is correct in the original, that 1/2 comes from allocating zero probability mass to "not woken up", and the three-day version shows why that is wrong.

comment by timtyler · 2010-05-14T08:47:42.543Z · LW(p) · GW(p)

I don't see how that analysis is useful. Beauty is awake at the start and the end of the experiment, and she updates accordingly, depending on whether she believes she is "inside" the experiment or not. So, having D mean: "Sleeping Beauty is awake" does not seem very useful. Beauty's "data" should also include her knowledge of the experimental setup, her knowledge of the identity of the subject, and whether she is facing an interviewer with amnesia. These things vary over time - and so they can't usefully be treated as a single probability.

You should be careful when plugging values into Bayes' theorem in an attempt to solve this problem. It contains an amnesia-inducing drug. When Beauty updates, you had better make sure to un-update her again afterwards in the correct manner.

Replies from: garethrees
comment by garethrees · 2010-05-14T10:42:17.970Z · LW(p) · GW(p)

D is the observation that Sleeping Beauty makes in the problem, something like "I'm awake, it's during the experiment, I don't know what day it is, and I can't remember being awoken before". p(D) is the prior probability of making this observation during the experiment. p(D|H) is the likelihood of making this observation if the coin lands heads.

As I said, if your intuition tells you that p(H|D) = ⅓, then something else has to change to make the calculation work. Either you abandon or modify Bayes' Law (in this case, at least) or you need to disagree with me on one or more of p(D), p(D|H), and p(H).

Replies from: timtyler
comment by timtyler · 2010-05-14T21:23:08.343Z · LW(p) · GW(p)

As I said, be careful about using Bayes' theorem in the case where the agent's mind is being meddled with by amnesia-inducing drugs. If Beauty had not had her mind addled by drugs, your formula would work - and p(H|D) would be equal to 1/2 on her first awakening. As it is, Beauty has lost some information that pertains to the answer she gives to the problem - namely the knowledge of whether she has been woken up before already - or not. Her uncertainty about this matter is the cause of the problem with plugging numbers into Bayes' theorem.

The theorem models her update on new information - but does not model the drug-induced deletion from her mind of information that pertains to the answer she gives to the problem.

If she knew it was Monday, p(H|D) would be about 1/2. If she knew it was Tuesday, p(H|D) would be about 0. Since she is uncertain, the value lies between these extremes.

Is over-reliance on Bayes' theorem - without considering its failure to model the problem's drug-induced amnesia - a cause of people thinking the answer to the problem is 1/2, I wonder?

Replies from: garethrees
comment by garethrees · 2010-05-15T09:18:49.491Z · LW(p) · GW(p)

If I understand rightly, you're happy with my values for p(H), p(D) and p(D|H), but you're not happy with the result. So you're claiming that a Bayesian reasoner has to abandon Bayes' Law in order to get the right answer to this problem. (Which is what I pointed out above.)

Is your argument the same as the one made by Bradley Monton? In his paper Sleeping Beauty and the forgetful Bayesian, Monton argues convincingly that a Bayesian reasoner needs to update upon forgetting, but he doesn't give a rule explaining how to do it.

Naively, I can imagine doing this by putting the reasoner back in the situation before they learned the information they forgot, and then updating forwards again, but omitting the forgotten information. (Monton gives an example on pp. 51–52 where this works.) But I can't see how to make this work in the Sleeping Beauty case: how do I put Sleeping Beauty back in the state before she learned what day it is?

So I think the onus remains with you to explain the rules for Bayesian forgetting, and how they lead to the answer ⅓ in this case. (If you can do this convincingly, then we can explain the hardness of the Sleeping Beauty problem by pointing out how little-known the rules for Bayesian forgetting are.)

Replies from: timtyler
comment by timtyler · 2010-05-15T10:05:40.012Z · LW(p) · GW(p)

Well, there is not anything wrong with Bayes' Law. It doesn't model forgetting - but it doesn't pretend to. I would not say you have to "abandon" Bayes' Law to solve the problem. It is just that the problem includes a process (namely: forgetting) that Bayes' Law makes no attempt to model in the first place. Bayes' Law works just fine for elements of the problem involving updating based on evidence. What you have to do is not abuse Bayes' Law - by using it in circumstances for which it was never intended and is not appropriate.

Your opinion that I am under some kind of obligation to provide a lecture on the little-known topic of Bayesian forgetting has been duly noted. Fortunately, people don't need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem - but it would certainly help if they avoid applying the Bayes update rule while completely ignoring the whole issue of the effect of drug-induced amnesia - much as Bradley Monton explains.

Replies from: garethrees
comment by garethrees · 2010-05-17T18:42:34.729Z · LW(p) · GW(p)

You're not obliged to give a lecture. A reference would be ideal.

Appealing to "forgetting" only gives an argument that our reasoning methods are incomplete: it doesn't argue against ½ or in favour of ⅓. We need to see the rules and the calculation to decide if it settles the matter.

Replies from: timtyler
comment by timtyler · 2010-05-17T18:54:04.184Z · LW(p) · GW(p)

To reiterate, people do not need to know or understand the Bayesian rules of forgetting in order to successfully solve this problem. Nobody used this approach to solving the problem - as far as I am aware - but the vast majority obtained the correct answer nonetheless. Correct reasoning is given on http://en.wikipedia.org/wiki/Sleeping_Beauty_problem - and in dozens of prior comments on the subject.

Replies from: garethrees
comment by garethrees · 2010-05-17T19:22:26.579Z · LW(p) · GW(p)

The Wikipedia page explains how a frequentist can get the answer ⅓, but it doesn't explain how a Bayesian can get that answer. That's what's missing.

I'm still hoping for a reference for "the Bayesian rules of forgetting". If these rules exist, then we can check to see if they give the answer ⅓ in the Sleeping Beauty case. That would go a long way to convincing a naive Bayesian.

Replies from: timtyler
comment by timtyler · 2010-05-17T22:07:37.475Z · LW(p) · GW(p)

I do not think it is missing - since a Bayesian can ask themselves at what odds they would accept a bet on the coin coming up heads - just as easily as any other agent can.

What is missing is an account involving Bayesian forgetting. It's missing because that is a way of solving the problem which makes little practical sense.

Now, it might be an interesting exercise to explore the rules of Bayesian forgetting - but I don't think it can be claimed that that is needed to solve this problem - even from a Bayesian perspective. Bayesians have more tools available to them than just Bayes' Law.

FWIW, Bayesian forgetting looks somewhat manageable. Bayes' Law is a reversible calculation - so you can just un-apply it.

comment by PhilGoetz · 2010-05-14T04:52:37.746Z · LW(p) · GW(p)

Okay - WRT "credence", you have a good point; it's a vague word. But, p(H|D) and "expected proportion of observations consistent with data D in which the hypothesis H was confirmed" give the same results. (Frequentists are allowed to use the p(H|D) notation, too.) There isn't a difference between Bayesians and other reasoners; there's a difference between what evidence one believes is being conditioned on. You're correct that your actual claim isn't addressed by comments in those posts; but your claim depends on beliefs that are argued for and against in the comments.

The problem somehow persuades some people to imagine themselves as an instance of Sleeping Beauty selected uniformly from the three instances {(heads,Monday), (tails,Monday), (tails,Tuesday)}

That's the correct interpretation, where "correct" means "what the original author intended". Under the alternate interpretation, you will find yourself wondering why the author wrote all this stuff about Sleeping Beauty falling asleep, and forgetting what happened before, because it has no effect on the answer. This proves that the author didn't have that interpretation.

The clearest explanation yet posted is actually included in the beginning of the Sleeping Beauty post.

comment by PhilGoetz · 2010-05-13T23:26:55.407Z · LW(p) · GW(p)

it's a problem that's been selected for discussion because it's hard, so it might be productive to try to understand why it's hard.

Agreed.

comment by Morendil · 2010-05-13T20:21:51.346Z · LW(p) · GW(p)

I'd be interested in your opinion on this, where I've formalized the SB problem as a joint probability distribution, with as precise a mathematical justification as I could muster, as described here.

It seems that SB even generates confusion as to where the ambiguity comes from in the first place. :)

comment by Jonathan_Graehl · 2010-05-17T23:29:53.373Z · LW(p) · GW(p)

I believe I've proven that the thirders are objectively right (and everyone else wrong).

comment by Mass_Driver · 2010-05-19T02:36:16.966Z · LW(p) · GW(p)

I would like you to publish any results you may generate with your script, and promise to upvote them even if the results do not prove anything, as long as they are presented roughly as clearly as this post is.

comment by PhilGoetz · 2010-05-14T22:15:00.676Z · LW(p) · GW(p)

So... why does this post have such a low rating? Comments? I find it bewildering. If you're interested in LessWrong, you should be interested in finding out under what conditions people become less wrong.

Replies from: Jack, RobinZ, cupholder, mattnewport, orthonormal, Caspian, whpearson
comment by Jack · 2010-05-15T13:13:39.655Z · LW(p) · GW(p)

Posts with a lot of math require me to set aside larger chunks of time to consume them. I do want to examine this but that won't be possible until later this week, which means I don't vote on it until then.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-16T05:40:16.472Z · LW(p) · GW(p)

Thanks - good to know.

comment by RobinZ · 2010-05-15T14:19:27.102Z · LW(p) · GW(p)

You haven't shown that your experiment will do so. Nor have you shown that your experiment models the situation well.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-16T05:23:31.857Z · LW(p) · GW(p)

What would it take to show that? It seems to me that isn't a thing that I could "show", even in theory, since I've found no existing empirical data on Aumann-agreement-type experiments in humans. If you know one, I'd appreciate a comment describing it.

I believe that one of the purposes of LessWrong is to help us gain an understanding of important epistemic issues. Proposing a new way to study the issue and potentially gain insight is therefore important.

I think that your standard implies that LessWrong is like a peer-reviewed journal: A place for people to present completed research programs; not a place for people to cooperate to find answers to difficult problems.

As I've said before, it's not good to apply standards that penalize rigor. If the act of putting equations into a post means that each equation needs to be empirically validated in order to get an upvote, pretty soon nobody is going to put equations into their posts.

Replies from: RobinZ
comment by RobinZ · 2010-05-16T11:40:02.912Z · LW(p) · GW(p)

I'm perfectly happy to come back and vote this up after I am satisfied that it is good, and I haven't and won't vote it down. I think it's a good idea to seek public comment, but the voting is supposed to indicate posts which are excellent for public consumption - this isn't, unless it's the technical first half of a pair of such posts. I want to know that the formalization parallels the reality, and it's not clear that it does before it is run.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-18T04:17:12.424Z · LW(p) · GW(p)

So, you don't want to vote until you see the results; and I don't want to waste an entire day writing up the results if few people are interested. Is there a general solution to this general problem?

(The "Part 1" in the title was supposed to indicate that it is the first part of a multi-part post.)

Replies from: RobinZ
comment by RobinZ · 2010-05-18T10:55:55.516Z · LW(p) · GW(p)

If you are confident in the practical value of your results, I would recommend posting. Otherwise I can't help you.

comment by cupholder · 2010-05-15T13:01:49.262Z · LW(p) · GW(p)

I held off on rating the post because I just skimmed it: I saw that most of it was describing an algorithm/model, decided I didn't have time to check your working, and didn't want to rate a post whose work I hadn't checked. I might not be representative; I don't rate most posts - I've rated just 6 top-level posts so far this May.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-16T05:41:44.277Z · LW(p) · GW(p)

Hmm - I wish I could see whether I have few upvotes, or numerous upvotes and downvotes. They'd have very different implications for what I should do differently.

comment by mattnewport · 2010-05-14T22:38:37.366Z · LW(p) · GW(p)

I'm rather tired of the Sleeping Beauty debate and so didn't read it. If others have had the same reaction this might explain the low score.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-15T03:41:37.604Z · LW(p) · GW(p)

Thanks for answering. This isn't a continuation of the Sleeping Beauty debate, despite what you see in the comment section, which has been hijacked by Sleeping Beauty.

comment by orthonormal · 2010-05-16T02:22:42.319Z · LW(p) · GW(p)

One thing I think is missing from your model is correlation between different answers, and I think that this is actually essential to the phenomenon: ignoring it makes it look like people are failing to come to agreement at all, when what's actually happening is that they're aligning into various ideological groups.

That is, there's a big difference between a group of 100 people with independent answers on 10 binary questions (random fair coinflips), and two groups of 50 who disagree on each of the 10 binary questions. I think that if you compared LW newcomers with veterans, you'd find that the newcomers more resemble the first case, and veterans more the second. This would suggest that peoples' answers are becoming more internally coherent, at least.

In particular, I expect that on this subject the veterans split roughly as follows:

  • Those who subscribe to Bostrom's SIA and are Thirders (1/3 to 1/2 of the LW vets)
  • Those who subscribe to Bostrom's SSA and are Halfers (less than 1/4)
  • Those who reject Bostromian anthropic probabilities entirely (less than 1/4)

One can easily predict the responses of the first two groups on subsequent questions.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-16T05:35:22.270Z · LW(p) · GW(p)

I don't build a model by looking at the observed results of a phenomenon, and building in a special component to produce each observed result. You wouldn't learn anything from your models if you did that; they would produce what you built them to produce. I build a model by enumerating the inputs, modeling each input, and seeing how much of the observed results the output matches.

When I run the simulation, people do in fact align into different groups. So far, always 2 groups. But the alignment process doesn't give either group better overall accuracy. This shows that you don't need any internal coherence or problem understanding for people to align into groups. Attributing accuracy to people who tend to agree with you, and inaccuracy to those who disagree with you, produces saddle-point dynamics. Once the initial random distribution gets off the saddle point, the groups on the opposite sides each rapidly converge to their own attractor.

What's especially interesting is that this way of judging people's accuracy doesn't just cause different groups to converge to different points; it causes the groups to disagree with each other on every point. There isn't one "right" group and one "wrong" group; there are two groups that are right about different things. Their agreement within a group on some topics indirectly causes them to take the opposite opinion on any topic on which other groups have strong opinions. In other words: My enemy's belief P is evidence against P.

In particular, I expect that on this subject the veterans split roughly as follows:

(Sleeping Beauty isn't the subject of this post.)

Replies from: orthonormal, Caspian
comment by orthonormal · 2010-05-25T02:24:07.580Z · LW(p) · GW(p)

OK, I see what you're doing now. It's an interesting model, though one feature jumps out at me now:

In other words: My enemy's belief P is evidence against P.

Although this phenomenon is a well-known fallacy among human beings, it doesn't seem like it should be the rational behavior— and then I noticed that the probabilities p_i can be less than 1/2 in your model, and that some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I'm understanding correctly.

What's the result if you make the probabilities (and accordingly, people's estimates of the probabilities) range from 1/2 to 1 instead of from 0 to 1?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-25T17:54:49.245Z · LW(p) · GW(p)

What's the result if you make the probabilities (and accordingly, people's estimates of the probabilities) range from 1/2 to 1 instead of from 0 to 1?

Then everybody converges onto agreeing on the correct answer for every question. And you just answered the question as to why Bayesians should agree to agree: Because Bayesians can't perform worse than random on average, their accuracies range from 1/2 to 1, and are not biased on any problem (unless the evidence is biased, in which case you're screwed anyway). Averaging their opinions together will thus get the right answer to every (answerable) question. Congratulations! You win 1 Internet!

(The reason for choosing 0 to 1 is explained in the post.)

Although this phenomenon is a well-known fallacy among human beings, it doesn't seem like it should be the rational behavior

The behavior in my model is rational if the results indicate that it gets the right answer. So far, it looks like it doesn't.

some of your agents are in fact reliably anti-correct. This seems like a probable cause of a binary group split, if I'm understanding correctly.

You could probably get the same answer by having some problems, rather than agents, usually be answered wrong. An abundance of wrong answers makes the agents split. The agents don't split into the correct agents and the incorrect agents, at least not for the conditions I've tested. There doubtless are settings that would get them to do that.

comment by Caspian · 2010-05-19T14:03:53.637Z · LW(p) · GW(p)

Does the 2-group split stay even if you continue the simulation until all answers have been revealed?

If you increase the standard deviation of p[i] so there are more very right and very wrong guessers, do they tend to split more into right and wrong groups? I expect they would.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-19T19:09:32.834Z · LW(p) · GW(p)

Does the 2-group split stay even if you continue the simulation until all answers have been revealed?

Good question - no; revelation of answers eventually causes convergence into 1 group.

If you increase the standard deviation of p[i] so there are more very right and very wrong guessers, do they tend to split more into right and wrong groups? I expect they would.

It makes the splitting happen faster.

comment by Caspian · 2010-05-19T13:52:06.307Z · LW(p) · GW(p)

It also didn't get a lot of on-topic comments. Possibly because guessing the answers to your questions seems the wrong way to answer them - the correct way being to put it to the test with the program, which means rewriting it (wasteful) or waiting for you to post it.

Are you planning on posting the perl script? I'm a bit tempted to just translate what you've got in the post into python, but realistically I probably won't get around to it anytime soon.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-19T19:12:46.625Z · LW(p) · GW(p)

I think there's a way to upload it to LessWrong and post a link to it. But I don't know how. My email is at gmail.

Summarizing the results in the same post would result in a gigantic post that people wouldn't want to read.

comment by whpearson · 2010-05-17T23:35:22.180Z · LW(p) · GW(p)

The code could be cleaner. Couldn't

g_ivt*g_jvs + (1-g_ivt)*(1-g_jvs)

be

not (g_ivt xor g_jvs)

or

same g_ivt g_jvs

It would clean up the code a lot, and make it less of a hassle to read. I'd also prefer higher order functions to for loops, but that may just be me.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-18T04:14:37.206Z · LW(p) · GW(p)

The code is written that way to accommodate the continuous case. I think people who aren't C or assembly programmers will find the not(xor) more confusing; and people who are programmers will find the second unfamiliar.

Replies from: whpearson
comment by whpearson · 2010-05-18T09:42:20.049Z · LW(p) · GW(p)

I'm mainly saying the code is a bit opaque at the moment.

If you want to keep the continuous case, fine.

As long as you defined the same or similar function somewhere else, programmers would be fine.

Commenting the code would help people get to grips with it, if you don't want to change it.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-19T01:47:09.779Z · LW(p) · GW(p)

Good idea. Comments it is.

comment by timtyler · 2010-05-13T22:21:36.988Z · LW(p) · GW(p)

Re: "I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy."

FWIW, I think that was how I originally approached the problem. Rather than trying to solve it directly, I first looked at your response, and Robin Hanson's response. After a little reflection, I concluded that you agreed with each other - and that you had both focused on the key issues - and got the answer right.

At that time, most of the rest of the thread was people saying the problem was ambiguous and needed a bet to clear it up - and a fair bit of confusion - with very little defense of the standard answer.

comment by Caspian · 2010-05-19T12:53:06.549Z · LW(p) · GW(p)

This is a really interesting topic, there are heaps of things I want to say about it. I was initially waiting to see what your results were first, to avoid spoilers with my guesses, but that's no way to have a conversation.

First - I think there's an error in the program: When you compute p[i][j] you take a sum then divide by N, but it looks like you should divide by the number of guesses you are adding, which can be more than N since it includes multiple rounds of guesses.

My (inconsistent) thoughts about how the model would behave:

  • They'd quickly learn the ratio of correct initial guesses everyone had, and make near-perfect use of that information. But they don't distinguish between the initial guesses and later updates, so that's not right.

  • Even the bad guessers will get most of their updated estimates right by the end, so their opinions will be assumed to correlate with the truth. If you then went back and posed everyone a new question, all the bad guessers could significantly mislead everyone. That's not the procedure in your code, but you could try it.

  • At the start of the simulation, all the guessers are simply seeing who else agrees with them. The good guessers might be converging to a correct consensus, while the bad guessers could converge to the opposite. But as the simulation progressed and the answers were revealed, the bad guessers would lose confidence in their whole subgroup, including themselves, and follow the good guessing group.

Ideas for variants:

  • Make the initial guess accuracy depend on both guesser accuracy and problem difficulty/deceptiveness. I proposed a formula for this in my previous comment. In this case, the best way to update from the initial guesses would seem to be to follow the average opinion of a few of the best guessers and maybe the reverse of the worst few guessers, but I'm not sure how it would play out in the simulation where you don't know who they are, and you have to update on each other's updated guesses.

  • Make the initial guess accuracy depend on both the skill of the guesser and the difficulty of the question, but vary what weight is given to skill - some questions can be just as hard for skilled guessers as everyone else. In this case, a way to update from an initial guess would be to look at enough of the best guessers that you're confident which way they guess on average (you'd need to sample more if they are near 50%)

  • Repeat the exercise - after the first set of N answers are revealed, continue with N more questions. This time the guessers start with data about each other's accuracy. Then after they are done, N more, etc.

  • Instead of everyone getting the same number of updates, let some update more often.

  • Instead of updating everyone and revealing one answer each round, randomly pick between updating a random person and randomly revealing a correct answer just to one person, which they will be certain of for the rest of the game. You could give different people different chances of updating from group opinions, and of getting the correct answer revealed. Since people don't know who's had what answers revealed they don't stop counting them when evaluating each other's accuracy.

comment by neq1 · 2010-05-14T11:42:41.748Z · LW(p) · GW(p)

The Sleeping Beauty Challenge

Maybe I'm naive, but I actually think that we can come close to consensus on the solution to this problem. This is a community of high IQ, aspiring rationalists.

I think it would be a good exercise to use what we know about rationality, evidence, biases, etc. and work this out.

I propose the following:

  1. I will write up my best arguments in favor of the 1/2 solution. I'll keep it shorter than my original post.

  2. Someone representing the thirders will write up their best arguments in favor of the 1/3 solution

  3. Before reading the others' arguments, we will assume that they are right, and that reading it will only confirm our beliefs (this is hard to do, but I find that this approach can be helpful)

  4. We cannot respond for at least 24 hours. (this will give us time to digest the arguments, without just reacting immediately)

  5. We will then check to see if there is agreement

  6. If we still disagree, we can have some discussion (say, via email) to see if progress can be made

  7. We will post our original two arguments and conclusion here (maybe in a new post)?

What do you think?

I tried to set this up in such a way to reduce some of the known biases that prevent agreement. Am I missing something?

Possible pitfall: if we come to an agreement, people who disagree with our conclusion might say it's because one of us was a poor representative of their viewpoint. However, I think we'd still move a step towards consensus.

What say you?

Replies from: Morendil, garethrees, Jack
comment by Morendil · 2010-05-14T12:17:34.732Z · LW(p) · GW(p)

Unlike Jack, I'm pessimistic about your proposal. I've already changed my mind not once but twice.

The interesting aspect is that this doesn't feel like I'm vacillating. I have gone from relying on a vague and unreliable intuition in favor of 1/3 qualified with "it depends", to being moderately certain that 1/2 was unambiguously correct, to having worked out how I was allocating all of the probability mass in the original problem and getting back 1/3 as the answer that I cannot help but think is correct. That, plus the meta-observation that no-one, including people I've asked directly (including yourself), has a rebuttal to my construction of the table, is leaving me with a higher degree of confidence than I previously had in 1/3.

It now feels as if I'm justified in ignoring pretty much any argument which is "merely" a verbal appeal to one intuition or the other. Either my formalization corresponds to the problem as verbally stated or it doesn't; either my math is correct or it isn't. "Here I stand, I can do no other" - at least until someone shows me my mistake.

Replies from: Jack, cupholder, neq1, timtyler
comment by Jack · 2010-05-14T18:10:08.331Z · LW(p) · GW(p)

So I think I figured this whole thing out. Are people familiar with the type-token distinction and resulting ambiguities? If I have five copies of the book Catcher in the Rye and you ask me how many books I have, there is an ambiguity. I could say one or five. One refers to the type; "Catcher in the Rye is a coming-of-age novel" is a sentence about the type. Five refers to the number of tokens; "I tossed Catcher in the Rye onto the bookshelf" is a sentence about the token. The distinction is ubiquitous and leads to occasional confusion, enough that the subject is at the top of my Less Wrong to-do list. The type-token distinction becomes an issue whenever we introduce identical copies, and the distinction dominates my views on personal identity.

In the Sleeping Beauty case, the amnesia means the experience of waking up on Monday and the experience of waking up on Tuesday, while token-distinct, are type-identical. If we decide the right thing to update on isn't the token experience but the type experience, then the calculations are really easy. The type experience "waking up" has P=1 for heads and tails. So the prior never changes. I think there are some really good reasons for worrying about types rather than tokens in this context, but I won't go into them until I make sure the above makes sense to someone.

Replies from: timtyler, neq1
comment by timtyler · 2010-06-06T07:29:24.346Z · LW(p) · GW(p)

How are you accounting for the fact that - on awakening - beauty has lost information that she previously had - namely that she no longer knows which day of the week it is?

Replies from: Jack
comment by Jack · 2010-06-06T08:03:14.155Z · LW(p) · GW(p)

Maybe it's just because I haven't thought about this in a couple of weeks but you're going to have to clarify this. When does beauty know which day of the week it is?

Replies from: timtyler
comment by timtyler · 2010-06-06T09:39:10.128Z · LW(p) · GW(p)

Before consuming the memory-loss drugs she knows her own temporal history. After consuming the drugs, she doesn't. She is more uncertain - because her memory has been meddled with, and important information has been deleted from it.

Replies from: Jack
comment by Jack · 2010-06-06T09:55:33.308Z · LW(p) · GW(p)

Information wasn't deleted. Conditions changed and she didn't receive enough information about the change. There is a type (with a single token) that is Beauty before the experiment, and that type includes the property 'knows what day of the week it is'; then the experiment begins and the day changes. During the experiment there is another type which is also Beauty; this type has two tokens. This type only has enough information to narrow down the date to one of two days. But she still knows what day of the week it was when the experiment began; it's just your usual indexical shift (instead of knowing the date now, she knows the date then, but it is the same thing).

Replies from: timtyler
comment by timtyler · 2010-06-06T10:33:38.376Z · LW(p) · GW(p)

Her memories were DELETED. That's the whole point of the amnesia-inducing drug.

Amnesia = memory LOSS: http://dictionary.reference.com/browse/Amnesia

Replies from: Jack
comment by Jack · 2010-06-06T12:09:01.372Z · LW(p) · GW(p)

Oh sure, the information contained in the memory of waking up is lost (though that information didn't contain what day of the week it was and you said "namely that she no longer knows which day of the week it is"). I still have zero idea of what you're trying to ask me.

Replies from: timtyler
comment by timtyler · 2010-06-06T14:49:02.271Z · LW(p) · GW(p)

If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.

Replies from: Jack
comment by Jack · 2010-06-06T15:49:52.177Z · LW(p) · GW(p)

I might have some issues with that characterization but they aren't worth going into since I still don't know what this has to do with my discussion of the type-token ambiguity.

Replies from: timtyler
comment by timtyler · 2010-06-06T17:04:53.453Z · LW(p) · GW(p)

It is what was missing from this analysis:

"The type experience "waking up" has P=1 for heads and tails. So the prior never changes."

Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change.

Replies from: Jack
comment by Jack · 2010-06-06T22:21:26.088Z · LW(p) · GW(p)

K.

If she had not ever been given the drug she would be likely to know which day of the week it was. She would know how many times she had been woken up, interviewed, etc. It is because all such information has been chemically deleted from her mind that she has the increased uncertainty that she does.

Yes, counterfactually if she hadn't been given the drug on the second awakening she would have knowledge of the day. But she was given the drug. This meant a loss of the information and knowledge of the memory of the first awakening. But it doesn't mean a loss of the knowledge of what day it is, she obviously never had that. It is because all her new experiences keep getting deleted that she is incapable of updating her priors (which were set prior to the beginning of the experiment). In type-theoretic terms:

If the drugs had not been administered she would not have had type experience "waking up" a second time. She would have had type experience "waking up with the memory of waking up yesterday". If she had had that type experience then she would know what day it is.

Replies from: timtyler
comment by timtyler · 2010-06-07T18:29:14.071Z · LW(p) · GW(p)

Beauty probably knew what day it was before the experiment started. People often do know what day of the week it is.

You don't seem to respond to my: "Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change."

In this case, that is exactly what happens. Had Beauty not been given the drug, her estimates of p(heads) would be: 0.5 on Monday and 0.0 on Tuesday. Since her knowledge of what day it is has been eliminated by a memory-erasing drug, her probability estimate is intermediate between those figures - reflecting her new uncertainty in the face of the chemical deletion of relevant evidence.

Replies from: Jack
comment by Jack · 2010-06-07T20:12:04.638Z · LW(p) · GW(p)

Beauty probably knew what day it was before the experiment started.

Yes. And throughout the experiment she knows what day it was before the experiment started. What she doesn't know is the new day. This is the second or third time I've said this. What don't you understand about an indexical shift?

"Your priors are a function of your existing knowledge. If that knowledge is deleted, your priors may change."

The knowledge that Beauty has before the experiment is not deleted. Beauty has a single anticipated experience going into the experiment. That anticipated experience occurs. There is no new information to update on.

You don't seem to be following what I'm saying at all.

Replies from: timtyler
comment by timtyler · 2010-06-07T20:35:13.193Z · LW(p) · GW(p)

What you said was: "it doesn't mean a loss of the knowledge of what day it is, she obviously never had that". Except that she did have that - before the experiment started. Maybe you meant something different - but what readers have to go on is what you say.

Beauty's memories are deleted. The opinions of an agent can change if they gain information - or if they lose information. Beauty loses information about whether or not she has had a previous awakening and interrogation. She knew that at the start of the experiment, but not during it - so she has lost information that she previously had - it has been deleted by the amnesia-inducing drug. That's relevant information - and it explains why her priors change.

Replies from: Jack
comment by Jack · 2010-06-07T21:51:27.427Z · LW(p) · GW(p)

I'm going to try this one more time.

On Sunday, before the experiment begins, Beauty makes observation O1(a). She knows that O1 was made on a Sunday. She says to herself "I know what day it is now" (an indexical statement pointing to O1(a)). She also predicts the coin will flip heads with P=0.5 and predicts the next experience she has after going to sleep will be O2. Then she wakes up and makes observation O2(a). It is Monday, but she doesn't know this because it could just as easily be Tuesday, since her memory of waking up on Monday will be erased. "I know what day it is now" is now false, not because knowledge was deleted but because of the indexical shift of 'now', which no longer refers to O1(a) but to O2(a). She still knows what day it was at O1(a); that knowledge has not been lost. Then she goes back to sleep and her memory of O2(a) is erased. But O2(a) includes no knowledge of what day it is (though combined with other information Beauty could have inferred what day it was, she never had that information). Beauty wakes up on Tuesday and has observation O2(b). This observation is type-identical to O2(a) and exactly what she anticipated experiencing. If her memory had not been erased she would have had observation O3 -- waking up along with the memory of having woken up the previous day. That would not have been an experience Beauty predicted with P=1, and it would therefore require her to update her belief P(heads) from 0.5 to 0, as she would know it was Tuesday. But she doesn't get to do that; she just has a token of experience O2. She still knows what day it was at O1(a); no knowledge has been lost. And she still doesn't know what day it is 'now'.

[For those following this, note that spatio-temporality is strictly a property of tokens (though we have a linguistic convention of letting types inherit the properties of tokens, as in "the red-breasted woodpecker can be found in North America"... what that really means is that tokens of the type 'red-breasted woodpecker' can be found in North America). This, admittedly, might lead to confusing results that need clarification, and I'm still working on that.]

Replies from: Morendil, timtyler
comment by Morendil · 2010-06-08T07:00:25.840Z · LW(p) · GW(p)

I've been following, but I'm still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.

Take a bag with 1 red marble and 9 green marbles. There is a type "green marble" and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct, are type-identical. It seems that what matters when we compute our credence for the proposition "the next marble I draw will be green" is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.

If you start out ignorant of how many marbles the bag has of each color, you can ask questions like "given that I just took two green marbles in a row, what is my credence in the proposition 'the next marble I draw will be green'". You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
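
As a minimal sketch of that kind of computation -- assuming, purely for concreteness, a uniform prior over how many of the ten marbles are green, an assumption not in the original bag description -- something like this:

from fractions import Fraction

# Hypothetical setup: 10 marbles, g of them green, uniform prior over g = 0..10.
# We draw two greens without replacement and ask for the credence that the
# next draw is also green, plus the expected number of greens left in the bag.
N = 10
posterior = {}
for g in range(N + 1):
    prior = Fraction(1, N + 1)
    like = Fraction(g, N) * Fraction(g - 1, N - 1) if g >= 2 else Fraction(0)
    posterior[g] = prior * like
norm = sum(posterior.values())
posterior = {g: w / norm for g, w in posterior.items()}

p_next_green = sum(w * Fraction(g - 2, N - 2) for g, w in posterior.items())
greens_left = sum(w * (g - 2) for g, w in posterior.items())
print(p_next_green)   # 3/4
print(greens_left)    # 6 green marbles expected to remain (out of 8)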

Statements like "information is gained" or "information is lost" are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.

If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1/2, I would love to see it: I have no attachment any longer to "my opinion" on the topic.

My questions to you, then, are: a) given your reasons for "worrying about types rather than tokens" in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I've linked to earlier? b) what justifies "worrying about types rather than tokens" in this situation, where every other discussion of probability "worries about tokens" in the sense I've outlined above in reference to the bag of marbles? c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?

Replies from: Jack
comment by Jack · 2010-06-08T11:30:37.580Z · LW(p) · GW(p)

show me the specific mistake in my math, rather than appeal to a verbal presentation of a non-formal, intuitive explanation.

My point was that I didn't think anything was wrong with your math. If you count tokens the answer you get is 1/3. If you count types the answer you get is 1/2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1/3 and payouts where the right choice is 1/2.

You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation.

b) what justifies "worrying about types rather than tokens" in this situation, where every other discussion of probability "worries about tokens" in the sense I've outlined above in reference to the bag of marbles?

This was a helpful comment for me. What we're dealing with is actually a special case of the type-token ambiguity: the tokens are indistinguishable. Say I flip a coin. If tails, I put six red marbles into a bag which already contains three red marbles; if heads, I do nothing to the bag with three red marbles. I draw a marble and tell Beauty "red". And then I ask Beauty her credence for the coin landing heads. I think that is basically isomorphic to the Sleeping Beauty problem. In the original she is woken up twice if tails, but that's just like having more red marbles to choose from; the experiences are indistinguishable, just like the marbles.
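
For the single-draw version as literally described there -- one marble drawn, Beauty only told "red" -- the Bayes computation is just the following sketch; whether this really is isomorphic to Sleeping Beauty is of course the very point in dispute:

from fractions import Fraction

# Jack's bag, taken literally: heads -> 3 red marbles, tails -> 9 red marbles,
# one marble is drawn and Beauty is told "red".
p_heads = Fraction(1, 2)
p_red_given_heads = Fraction(1)   # every marble is red either way
p_red_given_tails = Fraction(1)

p_red = p_heads * p_red_given_heads + (1 - p_heads) * p_red_given_tails
print(p_heads * p_red_given_heads / p_red)   # 1/2 -- "red" carries no information

# A thirder-style count that weights each red marble as a separate possible
# draw would instead give 3 / (3 + 9) = 1/4 here, analogous to counting wakings.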

Statements like "information is gained" or "information is lost" are vague and imprecise,

I don't really think they are. That's my major problem with the 1/3 answer. No one has ever shown me the unexpected experience Beauty must have to update from 0.5. But if you feel that way I'll try other methods.

c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?

Offhand, there is no reason to worry about types, as the possible answers to the questions "Do you have exactly two children?" and "Is one of them a boy born on a Tuesday?" are all distinguishable. But I haven't thought really hard about that problem; maybe there is something I'm missing. My approach does suggest a reason for why the Self-Indication Assumption is wrong: the necessary features of an observer are indistinguishable. So it returns 0.5 for the Presumptuous Philosopher problem.

I'll come back with an answer to (a). Bug me about it if I don't. There is admittedly a problem which I haven't worked out: I'm not sure how to relate the experience-type to the day of the week (time is a property of tokens). Basically, the type by itself doesn't seem to tell us anything about the day (just like picking the red marble doesn't tell us whether or not it was added after the coin flip). And maybe that's a reason to reject my approach. I don't know.

comment by timtyler · 2010-06-07T22:18:34.178Z · LW(p) · GW(p)

"No knowledge has been lost"?!?

Memories are knowledge - they are knowledge about past perceptions. They have been lost - because they have been chemically deleted by the amnesia-inducing drug. If they had not been lost, Beauty's probability estimates would be very different at each interview - so evidently the lost information was important in influencing Beauty's probability estimates.

That should be all you need to know to establish that the deletion of Beauty's memories changes her priors, and thereby alters her subjective probability estimates. Beauty awakens, not knowing if she has previously been interviewed - because of her memory loss. She knew whether she had previously been interviewed at the start of the experiment - she hadn't. So: that illustrates which memories have been deleted, and why her uncertainty has increased.

Replies from: Jack
comment by Jack · 2010-06-07T22:44:40.633Z · LW(p) · GW(p)

Yes. The memories have been lost (and the knowledge that accompanies them). The knowledge of what day of the week it is has not been lost because she never had this... as I've said four times. I'm just going to keep referring you back to my previous comments because I've addressed all this already.

Replies from: timtyler
comment by timtyler · 2010-06-08T07:01:12.332Z · LW(p) · GW(p)

You seem to have got stuck on this "day of the week" business :-(

The point is that Beauty has lost knowledge that she once had - and that is why her priors change. That that knowledge is "what day of the week it currently is" seems to me like a fine way of thinking about what information Beauty loses. However, it clearly bugs you - so try thinking about the lost knowledge another way: Beauty starts off knowing with a high degree of certainty whether or not she has previously been interviewed - but then she loses this information as the experiment progresses - and that is why her priors change.

Replies from: Jack
comment by Jack · 2010-06-08T10:24:07.582Z · LW(p) · GW(p)

This example, like the last one, is indexed to a specific time. You don't lose knowledge about conditions at t1 just because it is now t2 and the conditions are different.

Replies from: timtyler
comment by timtyler · 2010-06-08T23:32:20.644Z · LW(p) · GW(p)

Beauty loses information about whether she has previously attended interviews because her memories of them are chemically deleted by an amnesia-inducing drug - not because it is later on.

comment by neq1 · 2010-05-14T18:23:07.234Z · LW(p) · GW(p)

Makes sense to me.

Replies from: Jack
comment by Jack · 2010-05-14T19:17:23.061Z · LW(p) · GW(p)

Cool. Now I haven't quite thought through all this so it'll be a little vague. It isn't anywhere close to being an analytic, formalized argument. I'm just going to dump a bunch of examples that invite intuitions. Basically the notion is: all information is type, not token. Consider, to begin with, the Catcher in the Rye example. The sentence about the type was about the information contained in the book. This isn't a coincidence. The most abundant source of types in the history of the world is pure information: not just every piece of text ever written but every single computer program or file is a type (with its backups and copies as tokens). Our entire information-theoretic understanding of the universe involves this notion of writing the universe like a computer program (with the possibility of running multiple simulations); k-complexity is a fact about types, not tokens (of course this is confusing, since when we think of tokens we often attribute to them the features of their type, but the difference is there). Persons are types (at least in part; I think our concept of personhood confuses types and tokens). That's why most people here think they could survive by being uploaded. When Dennett switches between his two brains it seems like there is only one person because there is only one person-type, though two person-tokens. I forget who it was, but someone here has argued, in regard to decision theory, that when we act we should take into account all the simulations of us that may some day be run and act for them as well. This is merely decision theory representing the fact that what matters about persons is the type.

So if agents are types, and in particular if information is types... well then the type experiences are what we update on; they're the ones that contain information. There is no information to tokens beyond their type. Right? Of course, this is just an intuition that needs to be formalized. But is the intuition clear?

I'm sorry this isn't better formulated. The complexity justifies a top level post which I don't have time for until next week.

comment by cupholder · 2010-05-14T15:14:20.023Z · LW(p) · GW(p)

Entertainingly, I feel justified in ignoring your argument and most of the others for the same reason you feel justified in ignoring other arguments.

I got into a discussion about the SB problem a month ago after Mallah mentioned it as related to the red door/blue doors problem. After a while I realized I could get either of 1/2 or 1/3 as an answer, despite my original intuition saying 1/2.

I confirmed both 1/2 and 1/3 were defensible by writing a computer program to count relative frequencies two different ways. Once I did that, I decided not to take seriously any claims that the answer had to be one or the other, since how could a simple argument overrule the result of both my simple arithmetic and a computer simulation?

Replies from: Morendil, neq1
comment by Morendil · 2010-05-14T15:42:55.757Z · LW(p) · GW(p)

I was thinking about that earlier.

A higher level of understanding of an initially mysterious question should translate into knowing why people may disagree, and still insist on answers that you yourself have discarded. You explain away their disagreement as an inferential distance.

Neither of the answers you have arrived at is correct, from my perspective, and I can explain why. So I feel justified in ignoring your argument for ignoring my argument. :)

That a simulation program should compute 1/2 for "how many times on average the coin comes up heads per time it is flipped" is simply P(x) in my formalization. It's a correct but entirely uninteresting answer to something other than the problem's question.

That your program should compute 1/3 for "how many times on average the coin comes up heads per time Beauty is awoken" is also a correct answer to a slightly more subtly mistaken question. If you look at the "Halfer variant" page of my spreadsheet, you will see a probability distribution that also corresponds to the same "facts" that yield the 1/3 answer, and yet applying the laws of probability to that distribution gives Beauty a credence of 1/2. The question your program computes an answer to is not the question "what is the marginal probability of x=Heads, conditioning on z=Woken".

Whereas, from the tables representing the joint probability distribution, I think I now ought to be able to write a program which can recover either answer: the Thirder answer by inputting the "right" model or the Halfer answer by inputting the "wrong" model. In the Halfer model, we basically have to fail to sample on Heads/Tuesday. Commenting out one code line might be enough.

ETA: maybe not as simple as that, now that I have a first cut of the program written; we'd need to count awakenings on Monday twice, which makes no sense at all. It does look as if our programs are in fact computing the same thing to get 1/3.

Replies from: cupholder
comment by cupholder · 2010-05-14T16:25:26.993Z · LW(p) · GW(p)

Which specific formulation of the Sleeping Beauty problem did you use to work things out? Maybe we're referring to descriptions of the problem that use different wording; I've yet to read a description that's convinced me that 1/2 is an answer to the wrong question. For example, here's what the wiki's description asks:

Beauty wakes up in the experiment and is asked, "With what subjective probability do you believe that the coin landed tails?"

Personally, I believe that using the word 'subjective' doesn't add anything here (it just sounds like a cue to think Bayesian-ishly to me, which doesn't change the actual answer). So I read the question as asking for the probability of the coin landing tails given the experiment's setup. As it's asking for a probability, I see it as wholly legitimate to answer it along the lines of 'how many times on average the coin comes up heads per X,' where X is one of the two choices you mentioned.

Replies from: timtyler, Morendil
comment by timtyler · 2010-05-16T21:33:29.689Z · LW(p) · GW(p)

If you ignore the specification that it is Beauty's subjective probability under discussion, the problem becomes ill-defined - and multiple answers become defensible - depending on whose perspective we take.

Replies from: cupholder
comment by cupholder · 2010-05-16T23:32:05.916Z · LW(p) · GW(p)

The word 'subjective' before the word 'probability' is empty verbiage to me, so (as I see it) it doesn't matter whether you or I have subjectivity in mind. The problem's ill-defined either way; 'the specification that it is Beauty's subjective probability' makes no difference to me.

Replies from: timtyler
comment by timtyler · 2010-05-17T07:39:57.039Z · LW(p) · GW(p)

The perspective makes a difference:

"In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1/3. This is the correct answer from Beauty's perspective. Yet to the experimenter the correct probability is 1/2."

Replies from: cupholder
comment by cupholder · 2010-05-17T16:52:34.902Z · LW(p) · GW(p)

I think it's not the change in perspective or subjective identity making a difference, but instead it's a change in precisely which probability is being asked about. The Wikipedia page unhelpfully conflates the two changes.

It says that the experimenter must see a probability of 1/2 and Beauty must see a probability of 1/3, but that just ain't so; there is nothing stopping Beauty from caring about the proportion of coin flips that turn out to be heads (which is 1/2), and there is nothing stopping the experimenter from caring about the proportion of wakings for which the coin is heads (which is 1/3). You can change which probability you care about without changing your subjective identity and vice versa.

Let's say I'm Sleeping Beauty. I would interpret the question as being about my estimate of a probability ('credence') associated with a coin-flipping process. Having interpreted the question as being about that process, I would answer 1/2 - who I am would have nothing to do with the question's correct answer, since who I am has no effect on the simple process of flipping a fair coin and I am given no new information after the coin flip about the coin's state.

Replies from: timtyler
comment by timtyler · 2010-05-17T18:12:39.658Z · LW(p) · GW(p)

In the original problem post, Beauty is asked a specific question, though - namely:

"What is your credence now for the proposition that our coin landed heads?"

That's fairly clearly the PROBABILITY NOW of the coin having landed heads - and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.

Perspective can make a difference - because different observers have different levels of knowledge about the situation. In this case, Beauty doesn't know whether it is Tuesday or not - but she does know that if she is being asked on Tuesday, then the coin came down tails - and p(heads) is about 0.

Replies from: cupholder
comment by cupholder · 2010-05-17T19:14:19.000Z · LW(p) · GW(p)

In the original problem post, Beauty is asked a specific question, though

It's not specific enough. It only asks for Beauty's credence of a coin landing heads - it doesn't tell her to choose between the credence of a coin landing heads given that it is flipped and the credence of a coin landing heads given a single waking. The fact that it's Beauty being asked does not, in and of itself, mean the question must be asking the latter probability. It is wholly reasonable for Beauty to interpret the question as being about a coin-flipping process for which the associated probability is 1/2.

That's fairly clearly the PROBABILITY NOW of the coin having landed heads - and not the PROPORTION that turn out AT SOME POINT IN THE FUTURE to have landed heads.

The addition of the word 'now' doesn't magically ban you from considering a probability as a limiting relative frequency.

Perspective can make a difference - because different observers have different levels of knowledge about the situation. In this case, Beauty doesn't know whether it is Tuesday or not

Agree.

- but she does know that if she is being asked on Tuesday, then the coin came down tails - and p(heads) is about 0.

It's not clear to me how this conditional can be informative from Beauty's perspective, as she doesn't know whether it's Tuesday or not. The only new knowledge she gets is that she's woken up; but she has an equal probability (i.e. 1) of getting evidence of waking up if the coin's heads or if the coin's tails. So Beauty has no more knowledge than she did on Sunday.

Replies from: timtyler
comment by timtyler · 2010-05-17T21:54:35.495Z · LW(p) · GW(p)

She has LESS knowledge than she had on Sunday in one critical area - because now she doesn't know what day of the week it is. She may not have learned much - but she has definitely forgotten something - and forgetting things changes your estimates of their likelihood just as much as learning about them does.

Replies from: cupholder
comment by cupholder · 2010-05-17T22:14:21.805Z · LW(p) · GW(p)

She has LESS knowledge than she had on Sunday in one critical area - because now she doesn't know what day of the week it is. She may not have learned much - but she has definitely forgotten something -

That's true.

and forgetting things changes your estimates of their likelihood just as much as learning about them does.

I'm not as sure about this. It's not clear to me how it changes the likelihoods if I sketch Beauty's situation at time 1 and time 2 as

  1. A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday. It is Sunday.
  2. I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.

as opposed to just

  1. A coin will be flipped and I will be woken up on Monday, and perhaps Tuesday.
  2. I have been woken up, so a coin has been flipped. It is Monday or Tuesday but I do not know which.

(Edit to clarify - the 2nd pair of statements is meant to represent roughly how I was thinking about the setup when writing my earlier comment. That is, it's evident that I didn't account for Beauty forgetting what day of the week it is in the way timtyler expected, but at the same time I don't believe that made any material difference.)

comment by Morendil · 2010-05-14T16:29:23.672Z · LW(p) · GW(p)

I read it as "What is your credence", which is supposed to be synonymous with "subjective probability", which - and this is significant - I take to entail that Beauty must condition on having been woken (because she conditions on every piece of information known to her).

In other words, I take the question to be precisely "What is the probability you assign to the coin having come up heads, taking into account your uncertainty as to what day it is."

Replies from: cupholder
comment by cupholder · 2010-05-14T16:50:19.241Z · LW(p) · GW(p)

Ahhhh, I think I understand a bit better now. Am I right in thinking that your objection is not that you disapprove of relative frequency arguments in themselves, but that you believe the wrong relative frequency/frequencies is/are being used?

Replies from: Morendil
comment by Morendil · 2010-05-14T17:15:27.211Z · LW(p) · GW(p)

Right up until your reply prompted me to write a program to check your argument, I wasn't thinking in terms of relative frequencies at all, but in terms of probability distributions.

I haven't learned the rules for relative frequencies yet (by which I mean things like "(don't) include counts of variables that have a correlation of 1 in your denominator"), so I really have no idea.

Here is my program - which by the way agrees with neq1's comment here, insofar as the "magic trick" which will recover 1/2 as the answer consists of commenting out the TTW line.

However, this seems perfectly nonsensical when transposed to my spreadsheet: zeroing out the TTW cell at all means I end up with a total probability mass less than 1. So, I can't accept at the moment that neq1's suggestion accords with the laws of probability - I'd need to learn what changes to make to my table and why I should make them.

from random import shuffle, randint

flips=1000

HEADS=0
TAILS=1

# individual cells
HMW = HTW = HMD = HTD = 0.0
TMW = TTW = TMD = TTD = 0.0

def run_experiment():
    global HMW, HTW, HMD, HTD, TMW, TTW, TMD, TTD

    coin = randint(HEADS,TAILS)

    if (coin == HEADS):
      # wake Beauty on monday
      HMW+=1
      # drug Beauty on Tuesday
      HTD+=1

    if (coin == TAILS):
      # wake Beauty on monday
      TMW+=1
      # wake Beauty on Tuesday too
      TTW+=1

for i in range(flips):
  run_experiment()

print "Total samples where heads divided by total samples ~P(H):",(HMW+HTW+HMD+HTD)/(HMW+HTW+HMD+HTD+TMW+TTW+TMD+TTD)
print "Total samples where woken F(W):",HMW+HTW+TMW+TTW
print "Total samples where woken and heads F(W&H):", HMW+HTW
print "P(W&H)=P(W)P(H|W), so P(H|W)=lim F(W&H)/F(W)"
print "Total samples where woken and heads divided by sample where woken F(H|W):", (HMW+HTW)/(HMW+HTW+TMW+TTW)
Replies from: cupholder, Jonathan_Graehl, cupholder
comment by cupholder · 2010-05-14T18:38:29.020Z · LW(p) · GW(p)

Replying again since I've now looked at the spreadsheet.

Using my intuition (which says the answer is 1/2), I would expect P(Heads, Tuesday, Not woken) + P(Tails, Tuesday, Not woken) > 0, since I know it's possible for Beauty to not be woken on Tuesday. But the 'halfer "variant"' sheet says P(H, T, N) + P(T, T, N) = 0 + 0 = 0, so that sheet's way of getting 1/2 must differ from how my intuition works.

(ETA - Unless I'm misunderstanding the spreadsheet, which is always possible.)

Replies from: Morendil
comment by Morendil · 2010-05-14T19:44:25.565Z · LW(p) · GW(p)

Yeah, that "Halfer variant" was my best attempt at making sense of the 1/2 answer, but it's not very convincing even to me anymore.

comment by Jonathan_Graehl · 2010-05-14T18:43:46.699Z · LW(p) · GW(p)

That program is simple enough that you can easily compute expectations of your 8 counts analytically.
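
For instance, using the flips = 1000 from the program above, a quick sketch of those expectations (the E_* names are just shorthand for the expected counts):

flips = 1000  # as in the program above

# The coin is fair, so in expectation half the runs are heads and half tails.
E_HMW = E_HTD = flips / 2.0   # heads: woken Monday, drugged Tuesday
E_TMW = E_TTW = flips / 2.0   # tails: woken Monday and woken Tuesday
E_HTW = E_HMD = E_TMD = E_TTD = 0.0  # these cells are never incremented

print((E_HMW + E_HTW + E_HMD + E_HTD) / (2.0 * flips))    # 0.5, matching ~P(H)
print((E_HMW + E_HTW) / (E_HMW + E_HTW + E_TMW + E_TTW))  # 1/3, matching F(H|W)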

comment by cupholder · 2010-05-14T17:44:52.044Z · LW(p) · GW(p)

Your program looks good here, your code looks a lot like mine, and I ran it and got ~1/2 for P(H) and ~1/3 for F(H|W). I'll try and compare to your spreadsheet.

comment by neq1 · 2010-05-14T15:27:17.217Z · LW(p) · GW(p)

Well, perhaps because relative frequencies aren't always probabilities?

Replies from: cupholder
comment by cupholder · 2010-05-14T15:49:36.105Z · LW(p) · GW(p)

Of course. But if I simulate the experiment more and more times, the relative frequencies converge on the probabilities.

> beauty <- function(n) {
+ 
+     # Number of times the coin comes up tails
+     ntails <- 0
+ 
+     # Number of times SB wakes up
+     wakings <- 0
+ 
+     for (i in 1:n) {
+ 
+         # It's Sunday, flip the coin, 0 is heads, 1 is tails
+         coin <- sample(c(0, 1), 1)
+         ntails <- ntails + coin
+ 
+         if (coin == 0) {
+             # Beauty wakes up once, Monday
+             wakings <- wakings + 1
+         } else {
+             # Beauty wakes up Monday, then Tuesday
+             wakings <- wakings + 2
+         }
+     }
+ 
+     return(c(ntails / wakings, ntails / n))
+ 
+ }
> beauty(5)
[1] 0.1666667 0.2000000
> beauty(50)
[1] 0.375 0.600
> beauty(500)
[1] 0.3036212 0.4360000
> beauty(5000)
[1] 0.3314614 0.4958000
> beauty(50000)
[1] 0.3336354 0.5006800
Replies from: neq1
comment by neq1 · 2010-05-14T16:24:10.918Z · LW(p) · GW(p)

Even in the limit not all relative frequencies are probabilities. In fact, I'm quite sure that in the limit ntails/wakings is not a probability. That's because you don't have independent samples of wakings.

Replies from: cupholder
comment by cupholder · 2010-05-14T16:38:52.462Z · LW(p) · GW(p)

Even in the limit not all relative frequencies are probabilities.

But if there is a probability to be found (and I think there is) the corresponding relative frequency converges on it almost surely in the limit.

In fact, I'm quite sure that in the limit ntails/wakings is not a probability. That's because you don't have independent samples of wakings.

I don't understand.

Replies from: neq1
comment by neq1 · 2010-05-14T16:47:27.305Z · LW(p) · GW(p)

I tried to explain it here: http://lesswrong.com/lw/28u/conditioning_on_observers/1zy8

Basically, the 2 wakings on tails should be thought of as one waking. You're just counting the same thing twice. When you include counts of variables that have a correlation of 1 in your denominator, it's not clear what you are getting back. The thirders are using a relative frequency that doesn't converge to a probability.

Replies from: cupholder
comment by cupholder · 2010-05-14T17:29:40.450Z · LW(p) · GW(p)

Basically, the 2 wakings on tails should be thought of as one waking. You're just counting the same thing twice.

This is true if we want the ratio of tails to wakings. However...

When you include counts of variables that have a correlation of 1 in your denominator, it's not clear what you are getting back. The thirders are using a relative frequency that doesn't converge to a probability

Despite the perfect correlation between some of the variables, one can still get a probability back out - but it won't be the probability one expects.

Maybe one day I decide I want to know the probability that a randomly selected household on my street has a TV. I print up a bunch of surveys and put them in people's mailboxes. However, it turns out that because I am very absent-minded (and unlucky), I accidentally put two surveys in the mailboxes of people with a TV, and only one in the mailboxes of people without TVs. My neighbors, because they enjoy filling out surveys so much, dutifully fill out every survey and send them all back to me. Now the proportion of surveys that say 'yes, I have a TV' is not the probability I expected (the probability of a household having a TV) - but it is nonetheless a probability, just a different one (the probability of any given survey saying, 'I have a TV').
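
A quick sketch of that survey example with made-up numbers -- say half the households own a TV -- showing both frequencies converging, each to its own probability:

import random

random.seed(0)
n_households = 100000
households_with_tv = surveys_total = surveys_yes = 0

for _ in range(n_households):
    has_tv = random.random() < 0.5        # hypothetical: half the street owns a TV
    households_with_tv += has_tv
    n_surveys = 2 if has_tv else 1        # absent-minded: TV owners get two surveys
    surveys_total += n_surveys
    if has_tv:
        surveys_yes += n_surveys

print(households_with_tv / float(n_households))  # ~0.5: P(household has a TV)
print(surveys_yes / float(surveys_total))        # ~2/3: P(a given survey says yes)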

Replies from: neq1
comment by neq1 · 2010-05-14T17:34:56.605Z · LW(p) · GW(p)

That's a good example. There is a big difference though (it's subtle). With Sleeping Beauty, the question is about her probability at a waking. At a waking, there are no duplicate surveys. The duplicates occur at the end.

Replies from: cupholder
comment by cupholder · 2010-05-14T18:01:46.446Z · LW(p) · GW(p)

That is a difference, but it seems independent from the point I intended the example to make. Namely, that a relative frequency can still represent a probability even if its denominator includes duplicates - it will just be a different probability (hence why one can get 1/3 instead of 1/2 for SB).

Replies from: neq1
comment by neq1 · 2010-05-14T18:05:13.391Z · LW(p) · GW(p)

Ok, yes, sometimes relative frequencies with duplicates can be probabilities, I agree.

comment by neq1 · 2010-05-14T17:12:41.816Z · LW(p) · GW(p)

Morendil,

This is strange. It sounds like you have been making progress towards settling on an answer, after discussion with others. That would suggest to me that discussion can move us towards consensus.

I like your approach a lot. It's the first time I've seen the thirder argument defended with actual probability statements. Personally, I think there shouldn't be any probability mass on 'not woken', but that is something worth thinking about and discussing.

One thing that I think is odd. Thirders know she has nothing to update on when she is woken, because they admit she will give the same answer, regardless of if it's heads or tails. If she really had new information that is correlated with the outcome, her credence would move towards heads when heads, and tails when tails.

Consider my cancer intuition pump example. Everyone starts out thinking there is a 50% chance they have cancer. Once woken, regardless of if they have cancer or not, they all shift to 90%. Did they really learn anything about their disease state by being woken? If they did, those with cancer would have shifted their credence up a bit, and those without would have shifted down. That's what updating is.
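
For reference, the 90% figure is what the per-waking (thirder-style) count gives in this variant, with nine wakings for cancer patients and one for everyone else:

# neq1's cancer variant: 50% prior, 9 wakings with cancer, 1 without.
p_cancer = 0.5
wakings_cancer, wakings_healthy = 9, 1

p_cancer_per_waking = (p_cancer * wakings_cancer) / (
    p_cancer * wakings_cancer + (1 - p_cancer) * wakings_healthy)
print(p_cancer_per_waking)   # 0.9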

Replies from: Morendil
comment by Morendil · 2010-05-14T17:39:18.786Z · LW(p) · GW(p)

In your example the experimenter has learned whether you have cancer. And she reflects that knowledge in the structure of the experiment: you are woken up 9 times if you have the disease.

Set aside the amnesia effects of the drug for a moment, and consider the experimental setup as a contorted way of imparting the information to the patient. Then you'd agree that with full memory, the patient would have something to update on? As soon as the second day. So there is, normally, an information flow in this setup.

What the amnesia does is selectively impair the patient's ability to condition on available information. It does that in a way which is clearly pathological, and results in the counter-intuitive reply to the question "conditioning on a) your having woken up and b) your inability to tell what day it is, what is your credence?" We have no everyday intuitions about the inferential consequences of amnesia.

Knowing about the amnesia, we can argue that Beauty "shouldn't" condition on being woken up. But if she does, she'll get that strange result. If she does have cancer, she is more likely to be woken up multiple times than once, and being woken up at all does have some evidential weight.

All this, though, being merely verbal aids as I try to wrap my head around the consequences of the math. And therefore to be taken more circumspectly than the math itself.

Replies from: neq1
comment by neq1 · 2010-05-14T17:49:58.591Z · LW(p) · GW(p)

If she does condition on being woken up, I think she still gets 1/2. I hate to keep repeating arguments, but what she knows when she is woken up is that she has been woken up at least once. If you just apply Bayes rule, you get 1/2.
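
Spelled out, the Bayes step being appealed to here -- under the reading that the evidence is only "woken at least once", which is exactly what thirders dispute -- is:

# Halfer reading: the evidence on waking is just "woken at least once".
p_h = 0.5
p_evidence_given_heads = 1.0   # woken at least once whether heads...
p_evidence_given_tails = 1.0   # ...or tails
p_evidence = p_h * p_evidence_given_heads + (1 - p_h) * p_evidence_given_tails
print(p_h * p_evidence_given_heads / p_evidence)   # 0.5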

If conditioning causes her to change her probability, it should do so in such a way that makes her more accurate. But as we see in the cancer problem, people with cancer give the same answer as people without.

Then you'd agree that with full memory, the patient would have something to update on?

Yes, but then we wouldn't be talking about her credence on an awakening. We'd be talking about her credence on first waking and second waking. We'd treat them separately. With amnesia, 2 wakings are the same as 1. It's really just one experience.

Replies from: Morendil
comment by Morendil · 2010-05-14T20:01:28.291Z · LW(p) · GW(p)

If you just apply Bayes rule, you get 1/2.

Apply it to what terms?

I'm not sure what more I can say without starting to repeat myself, too. All I can say at this point, having formalized my reasoning as both a Python program and an analytical table giving out the full joint distribution, is "Where did I make a mistake?"

Where's the bug in the Python code? How do I change my joint distribution?

Replies from: neq1
comment by neq1 · 2010-05-14T20:21:49.903Z · LW(p) · GW(p)

I like the halfer variant version of your table. I still need to think about your distributions more though. I'm not sure it makes sense to have a variable 'woken that day' for this problem.

comment by timtyler · 2010-06-06T07:26:37.674Z · LW(p) · GW(p)

Congratulations on getting to that point, I figure.

comment by garethrees · 2010-05-17T19:01:13.872Z · LW(p) · GW(p)

I think this kind of proposal isn't going to work unless people understand why they disagree.

comment by Jack · 2010-05-14T11:54:42.385Z · LW(p) · GW(p)

This is good.

I think it would also help if we did something to counter how attached people seem to have gotten to these positions. I'll throw in 20 karma to anyone who changes their mind; who else will?

comment by Caspian · 2010-05-18T14:07:03.064Z · LW(p) · GW(p)

I'd like a variant where there is both person accuracy p[i] and problem easiness E[j], and the odds of person i getting the correct answer initially on problem j are p[i] E[j] : (1-p[i])(1-E[j])

Ideally the updating procedure for this variant wouldn't treat everyone's opinions as independent, but it would also be interesting to see what happens when it mistakenly does treat them as independent.
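
A minimal sketch of that initialization, assuming the odds are converted to a probability of an initially correct answer in the obvious way (the group sizes, the priors on p and E, and the variable names here are all illustrative):

import random

random.seed(0)
G, N = 5, 8                                        # hypothetical numbers of people and problems
p = [random.uniform(0.4, 0.9) for _ in range(G)]   # person accuracies p[i]
E = [random.uniform(0.3, 0.9) for _ in range(N)]   # problem easinesses E[j]
truth = [random.randint(0, 1) for _ in range(N)]   # true answer for each problem

def p_correct(pi, ej):
    # Odds pi*ej : (1-pi)*(1-ej) of an initially correct answer, as a probability
    return pi * ej / (pi * ej + (1 - pi) * (1 - ej))

# Initial opinions: person i answers problem j correctly with that probability
g = [[truth[j] if random.random() < p_correct(p[i], E[j]) else 1 - truth[j]
      for j in range(N)]
     for i in range(G)]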

comment by timtyler · 2010-05-14T21:36:28.356Z · LW(p) · GW(p)

The poll on the subject:

http://lesswrong.com/lw/28u/conditioning_on_observers/1ztb

...currently has 75% saying 1/3 and 25% saying 1/2. (12:4)

Collective intelligence in action?

Replies from: timtyler, Jack
comment by timtyler · 2010-06-06T07:22:05.259Z · LW(p) · GW(p)

Update 2010-06-06 - the raw vote figures are now 14:3.

comment by Jack · 2010-05-14T22:14:46.698Z · LW(p) · GW(p)

It's 13:5 to make up for a case of manipulation and to include my vote.

comment by PhilGoetz · 2010-05-14T17:23:55.614Z · LW(p) · GW(p)

In modeling Bayesians (not described here), I have the problem that saying "I assign this problem probability .5 of being true" really means "I have no information about this problem."

My original model treated that p=.5 as an estimate, so that a bunch of Bayesians who all assign p=.5 to a problem end up respecting each other more, instead of ignoring their own opinions due to having no information about it themselves.

I'm reformulating it to weigh opinions according to the amount of information they claim to have. But what's the right way to do that?

Replies from: thomblake
comment by thomblake · 2010-05-14T17:26:18.639Z · LW(p) · GW(p)

I'm reformulating it to weigh opinions according to the amount of information they claim to have. But what's the right way to do that?

Use a log-based unit, like bits or decibels.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-14T17:55:36.088Z · LW(p) · GW(p)

Yes; but then how to work that into the scheme to produce a probability?

I deleted the original comment because I realized that the equations given already give zero weight to an agent who assigns a problem a belief value of .5. That's because it just multiplies both m0 and m1 by .5.
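
A quick numerical check of that cancellation: scaling both products by the same factor leaves the normalized value m1 / (m0 + m1) unchanged, so a .5 opinion simply drops out. (The particular numbers below are arbitrary.)

# Arbitrary products accumulated from the other agents' opinions
m1, m0 = 0.3, 0.1
print(m1 / (m0 + m1))                    # 0.75

# An agent whose belief is .5 contributes a factor of .5 to both products
m1_with, m0_with = 0.5 * m1, 0.5 * m0
print(m1_with / (m0_with + m1_with))     # still 0.75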

Replies from: thomblake
comment by thomblake · 2010-05-14T18:00:45.177Z · LW(p) · GW(p)

I do wonder though if you should have some way of distinguishing someone who assigns a probability of .5 for complete ignorance, versus one who assigns a probability of .5 due to massive amounts of relevant evidence that just happens to balance out. But then, you'll observe the ignorant fellow updating significantly more than the well-informed fellow on a piece of evidence, and can use that to determine the strength of their convictions.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-17T21:57:42.279Z · LW(p) · GW(p)

I've thought about that. You could use a possible-worlds model, where the ignorant person allows all worlds, and the other person has a restricted set of possible worlds within which p is still .5. If updating then means restricting possible worlds, it should work out right in both cases.

comment by DuncanS · 2010-05-13T22:54:40.309Z · LW(p) · GW(p)

This suggestion contains a classic bootstrapping problem. If only I knew I was good at statistics, then I'd be confident of analysing this problem which tells me whether or not I'm good at statistics. But since I'm not sure, I'm not sure whether this will give me the right answer.

I think I'll stick to counting.

comment by byrnema · 2010-05-13T18:30:41.902Z · LW(p) · GW(p)

Comment moved.

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2010-05-13T18:44:49.659Z · LW(p) · GW(p)

I didn't mean you should move your original comment. That was fine where it was. (Asking people to state their conclusion on the Sleeping Beauty problem, and their reasons.)

Replies from: byrnema
comment by byrnema · 2010-05-13T18:49:22.263Z · LW(p) · GW(p)

I think it would be most organized if their responses were daughters to my comment, so all of the conclusions could be found grouped in one location.

Replies from: Blueberry
comment by Blueberry · 2010-05-13T19:53:07.476Z · LW(p) · GW(p)

if their responses were daughters

Just curious why the responses are female.

Replies from: byrnema
comment by byrnema · 2010-05-13T20:09:46.084Z · LW(p) · GW(p)

The default gender is (usually) male, so I like to play with this by choosing the female gender whenever I have a free choice.

Nevertheless, branches and sub-divisions of any type are typically feminine -- always sisters or daughters. Perhaps the reason for this is that the sisters and daughters inherit the ability of their mothers to again divide/branch/etc and this is considered a female trait.

...I found this answer on yahoo.

Replies from: Blueberry, Jonathan_Graehl
comment by Blueberry · 2010-05-13T20:20:39.148Z · LW(p) · GW(p)

Interesting and thanks. I haven't noticed this before: for whatever reason, I've only seen nodes in a tree structure referred to as "parent" and "child."

Replies from: Jack
comment by Jack · 2010-05-16T03:40:00.658Z · LW(p) · GW(p)

In semantics we called them daughters. Shrug.

comment by Jonathan_Graehl · 2010-05-14T19:01:45.204Z · LW(p) · GW(p)

Males divide and inherit equally well :)

I always assumed that the predominantly male engineers behind terms like motherboard / daughterboard were simply lonely.

comment by PhilGoetz · 2010-05-13T18:36:37.924Z · LW(p) · GW(p)

...but, preferably, in the Sleeping Beauty post?

I've already stated my position there, probably too many times.

Replies from: byrnema
comment by byrnema · 2010-05-13T18:45:10.379Z · LW(p) · GW(p)

Nevertheless, it was your position I couldn't determine (for the amount of resources I was willing to invest).

comment by Mike Bishop (MichaelBishop) · 2010-05-13T18:09:11.525Z · LW(p) · GW(p)

I'm interested in hearing others responses to these questions:

What do you think will happen when I run the program, or its variants? What other variants would you like to see tested?

As for this one:

Is there a fundamental problem with the model?

As you know, that depends on what we want to use the model for. It ignores all sorts of structure in the real world, but that could end up being a feature rather than a bug.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-13T18:18:25.234Z · LW(p) · GW(p)

I want to use it to try to get a grip on what conditions must be satisfied in order that people can expect to improve their accuracy on the problems discussed on LessWrong by participating in LessWrong; and whether accuracy can go to 1, or approaches a limit.

That reminds me; I need to add a sentence about the confidence effect.