## Posts

Comment by mcoram on Open Thread for February 11 - 17 · 2014-02-14T04:17:10.332Z · score: 1 (1 votes) · LW · GW

It's certainly in the right spirit. He's reasoning backwards in the same way Bayesian reasoning does: here's what I see; here's what I know about the possible mechanisms that could produce that observation, and their prior probabilities; so here's what I think is most likely to be really going on.

Comment by mcoram on Open Thread for February 11 - 17 · 2014-02-13T03:23:37.076Z · score: 0 (0 votes) · LW · GW

Thanks Emile,

Is there anything you'd like to see added?

For example, I was thinking of running it on nodejs and logging the scores of players, so you could see how you compare. (I don't have a way to host this, right now, though.)

Another possibility is to add diagnostics. E.g., were you systematically setting your guesses too high, or were they fluctuating more than the data warranted (under some model of the prior/posterior, say)?

Also, I'd be happy to have pointers to your calibration apps or others you've found useful.

Comment by mcoram on Alternative to Bayesian Score · 2014-02-12T05:03:48.449Z · score: 0 (0 votes) · LW · GW

Here's the "normalized" version: f(x)=1+log2(x), g(x)=1+log2(1-x) (i.e. scale f and g by 1/log(2) and add 1).

Now f(1)=1, f(.5)=0, f(0)=-Inf ; g(1)=-Inf, g(.5)=0, g(0)=1.

Ok?

Comment by mcoram on Open Thread for February 11 - 17 · 2014-02-12T01:17:07.214Z · score: 7 (7 votes) · LW · GW

I've written a game (or see (github)) that tests your ability to assign probabilities to yes/no events accurately using a logarithmic scoring rule (called a Bayes score on LW, apparently).

There are a couple of other random processes to guess in the game, and also a quiz. The questions are intended to force you to guess at least some of the time. If you have suggestions for other quiz questions, send them to me by PM in the format:

{q:"1+1=2. True?", a:1} // source: my calculator

where a:1 is for true and a:0 is for false.

Comment by mcoram on Alternative to Bayesian Score · 2014-02-11T23:53:54.963Z · score: 0 (0 votes) · LW · GW

There's no math error.

> Why is it consistent that assigning a probability of 99% to one half of a binary proposition that turns out false is much better than assigning a probability of 1% to the opposite half that turns out true?

I think there's some confusion. Coscott said these three facts:

> Let f(x) be the output if the question is true, and let g(x) be the output if the question is false.
>
> f(x)=g(1-x)
>
> f(x)=log(x)

In consequence, g(x)=log(1-x). So if x=0.99 and the question is false, the output is g(x)=log(1-x)=log(0.01). Or if x=0.01 and the question is true, the output is f(x)=log(x)=log(0.01). So the symmetry that you desire is true.