An exploration of exploitation bias 2021-04-03T23:03:22.773Z
Pseudorandomness contest: prizes, results, and analysis 2021-01-15T06:24:15.317Z
Grading my 2020 predictions 2021-01-07T00:33:38.566Z
Overall numbers won't show the English strain coming 2021-01-01T23:00:34.905Z
Predictions for 2021 2020-12-31T21:12:47.184Z
Great minds might not think alike 2020-12-26T19:51:05.978Z
Pseudorandomness contest, Round 2 2020-12-20T08:35:09.266Z
Pseudorandomness contest, Round 1 2020-12-13T03:42:10.654Z
An elegant proof of Laplace’s rule of succession 2020-12-07T22:43:33.593Z


Comment by UnexpectedValues on An elegant proof of Laplace’s rule of succession · 2021-01-30T01:10:52.441Z · LW · GW

I'm not conditioning on any configuration of points. I agree it's false for a given configuration of points, but that's not relevant here. Instead, I'm saying: number the intervals clockwise from 1 to n + 2, starting with the interval clockwise of Z. Since the n + 2 points were chosen uniformly at random, the interval numbered k1 is just as likely to have the new point as the interval numbered k2, for any k1 and k2. This is a probability over the entire space of outcomes, not for any fixed configuration.

(Or, as you put it, the average probability over all configurations of the probability of landing in a given interval is the same for all intervals. But that's needlessly complicated.)
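The symmetry is also easy to check numerically. Below is a quick Monte Carlo sketch of the argument; the setup and the function name `arc_number` are my own paraphrase of the construction (n + 2 uniform points on a circle, arcs numbered clockwise starting from the arc clockwise of Z, then one more uniform point dropped in).

```python
import random
from bisect import bisect_right

def arc_number(n, rng):
    """Drop n+2 uniform points on a unit-circumference circle (the first
    one plays the role of Z), then drop one more uniform point and return
    the clockwise number (1..n+2) of the arc it lands in, counting from
    the arc immediately clockwise of Z."""
    points = [rng.random() for _ in range(n + 2)]
    z = points[0]
    # Work in coordinates where Z sits at 0 and "clockwise" is increasing.
    shifted = sorted((p - z) % 1.0 for p in points)  # shifted[0] == 0.0 (Z itself)
    x = (rng.random() - z) % 1.0
    return bisect_right(shifted, x)  # arc 1 starts at Z, going clockwise

rng = random.Random(0)
n, trials = 3, 20000
counts = [0] * (n + 2)
for _ in range(trials):
    counts[arc_number(n, rng) - 1] += 1
# Each of the n+2 = 5 arcs should receive about trials/5 = 4000 points.
```

Over the whole outcome space, the counts come out roughly uniform, which is exactly the claim: no fixed configuration is needed.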

Comment by UnexpectedValues on Pseudorandomness contest: prizes, results, and analysis · 2021-01-16T00:26:37.875Z · LW · GW

For what it's worth, the top three finishers were three of the four most calibrated contestants! With this many strings, I think being intentionally overconfident is a bad strategy. (I agree it would make sense if there were like 10 or 20 strings.)

Comment by UnexpectedValues on Pseudorandomness contest: prizes, results, and analysis · 2021-01-16T00:24:28.218Z · LW · GW

Yours was #104 -- you did well!

Comment by UnexpectedValues on Great minds might not think alike · 2021-01-03T06:27:14.382Z · LW · GW

Two things, I'm guessing. First, there's the fact that in baseball you get thousands of data points a year. In presidential politics, you get a data point every four years. If you broaden your scope to national and state legislative elections (which wasn't Shor's focus at the time), in some sense you get thousands per election cycle, but it's more like hundreds because most races are foregone conclusions. (That said, that's enough data to draw some robust conclusions, such as that moderate candidates are more electable. On the other hand, it's not clear how well those conclusions would translate to high-profile races.)

Second, electoral politics is probably a way harder thing to model. There are many more variables at play, things shift rapidly from year to year, etc. Meanwhile, baseball is a game whose rules of play -- allowable actions, etc. -- are simple enough to write down. Strategy shifts over the years, but not nearly as much as in politics. (I say that without having much knowledge of baseball, so I could be... um, off base... here.)

Comment by UnexpectedValues on Overall numbers won't show the English strain coming · 2021-01-02T18:43:32.395Z · LW · GW

That's true. My response is that if I recall correctly, people didn't seem to react very strongly to what was happening in Italy a couple weeks before it was happening here. So I'm not sure that a surge in the UK would inform the US response much (even though it should).

Comment by UnexpectedValues on Overall numbers won't show the English strain coming · 2021-01-02T18:31:52.226Z · LW · GW

Yeah, you're right. 1.3 was the right constant for Covid in March because of a combination of not being locked down and having more and more tests. This was my attempt to make a direct comparison, but maybe the right way to make that comparison would be to say "if R = 1.65 (which I'll trust you leads to a constant of 1.08), we will react about X days slower than if we started from scratch."

What is X? The answer is about 60 (two months). On the one hand, that's a lot more than the 2-3 weeks above; on the other hand, it's less scary because R is lower.
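One way the numbers line up (my own reading, not something stated explicitly above): if noticing a surge requires cases to rise roughly 100x above the baseline noise, the time to get there at a daily growth factor g is log(100)/log(g). The 100x threshold here is purely illustrative.

```python
import math

def days_to_grow(factor_per_day, fold_increase=100):
    """Days for exponential growth at the given daily factor to
    produce the given fold increase."""
    return math.log(fold_increase) / math.log(factor_per_day)

fast = days_to_grow(1.3)   # ~17.6 days: the "2-3 weeks" regime
slow = days_to_grow(1.08)  # ~59.8 days: about two months
```

At a daily factor of 1.3 a 100x rise takes under three weeks; at 1.08 it takes about two months, matching the X above.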

Comment by UnexpectedValues on Great minds might not think alike · 2021-01-01T22:22:27.651Z · LW · GW

Thanks! I've changed the title to "Great minds might not think alike".

Interestingly, when I asked my Twitter followers, they liked "Alike minds think great". I think LessWrong might be a different population. So I decided to change the title on LessWrong, but not on my blog.

Comment by UnexpectedValues on Predictions for 2021 · 2020-12-31T22:21:27.023Z · LW · GW

I like the logarithmic rule better in general, but I used Brier for the pseudorandomness contest because I really cared about the difference between a 60% chance (i.e. looks basically random) and a 40% chance (kind of suspect). The log rule is better at rewarding people for being right at the extremes; the Brier rule is better at rewarding people for being right in the middle.
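A small sketch of that tradeoff, scoring a single binary outcome (the function names are mine; "Brier" here is the per-question quadratic loss and "log loss" the negative log score):

```python
import math

def brier(p, outcome):
    """Quadratic (Brier) loss for assigning probability p to a binary outcome."""
    return (p - outcome) ** 2

def log_loss(p, outcome):
    """Logarithmic loss for assigning probability p to a binary outcome."""
    return -math.log(p if outcome else 1.0 - p)

# Middle of the range: improving from 40% to 60% on an event that happens.
mid_brier = brier(0.4, 1) - brier(0.6, 1)        # 0.36 - 0.16 = 0.20
mid_log   = log_loss(0.4, 1) - log_loss(0.6, 1)  # ~0.405

# Near the extreme: improving from 97% to 99.9% on an event that happens.
ext_brier = brier(0.97, 1) - brier(0.999, 1)       # ~0.0009
ext_log   = log_loss(0.97, 1) - log_loss(0.999, 1) # ~0.0295
```

Relative to Brier, the log rule pays far more for sharpening an already-extreme forecast, while the 40%-vs-60% distinction looms comparatively larger under Brier.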

Regarding bets: I'm willing to make bets, but won't have a blanket policy like "I'll take a bet with anyone who disagrees with me by 10% or more", because that opens me up to a ton of adverse selection. (E.g. I wouldn't bet with Zvi on COVID.) So... feel free to message me if you want to bet, but also be aware that the most likely outcome is that it won't result in a bet.

(Also, the better I know you, the more likely I am to be willing to bet with you.)

Comment by UnexpectedValues on 2021 New Year Optimization Puzzles · 2020-12-31T09:56:06.633Z · LW · GW

Puzzle 3 thoughts: I believe I can do it with one coin (flipped 24251 times), as follows.

First, I claim that for any prime q, it is possible to choose one of q + 1 outcomes with just one coin. I do this as follows:

  • Let p be a probability such that p^q + (1 - p)^q = 1/(q + 1). (Such a p exists by the intermediate value theorem, since p = 0 gives a value that's too large and p = 1/2 gives a value that's too small.)
  • Flip a coin that has probability p of coming up heads exactly q times. If all flips are the same, that corresponds to outcome 1. (This has probability 1/(q + 1) by construction.)
  • For each k between 1 and q - 1, there are (q choose k) ways of getting exactly k heads out of q flips, all equally likely. Note that this quantity is divisible by q, since q divides the numerator q! but none of the factors of k!(q - k)! (this is where we use that q is prime). Thus, we can subdivide the particular sequences of getting k heads out of q flips into q equally-sized classes, for each k. Each class corresponds to an outcome (2 through q + 1). The probability of each of these outcomes is the sum over k of ((q choose k)/q) * p^k * (1 - p)^(q - k) = (1 - 1/(q + 1))/q = 1/(q + 1), which is what we wanted.

Now, note that 2021*12 - 1 = 24251 is prime. (I found this by guessing and checking.) So do the above for q = 24251. This lets you flip a coin 24251 times to get 24252 equally likely outcomes. Now, since 24252 = 2021*12, just assign 12 of the outcomes to each person. Then each person will have a 1/2021 chance of being selected.
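Here is a sketch of the construction in Python, using a small prime (q = 5, so 6 outcomes) so that all 2^q flip sequences can be enumerated and the outcome probabilities checked exactly. The function names and the bisection search for p are my own choices, not part of the original solution.

```python
from itertools import product

def solve_p(q, iters=200):
    """Bisect for p in (0, 1/2) with p**q + (1-p)**q == 1/(q+1).
    The function p**q + (1-p)**q is decreasing on [0, 1/2]: too large
    at p = 0, too small at p = 1/2."""
    lo, hi = 0.0, 0.5
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid**q + (1 - mid)**q > 1 / (q + 1):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def outcome(flips, q):
    """Map a length-q flip sequence to one of q+1 outcomes (1..q+1)."""
    k = sum(flips)
    if k in (0, q):
        return 1  # all flips the same: probability 1/(q+1) by choice of p
    # Rank this sequence among the (q choose k) sequences with exactly k
    # heads, then use rank mod q to pick one of q equally-sized classes.
    same_k = [s for s in product((0, 1), repeat=q) if sum(s) == k]
    return 2 + same_k.index(tuple(flips)) % q

q = 5                      # any prime works; q = 24251 is the one the post needs
p = solve_p(q)
probs = [0.0] * (q + 1)
for flips in product((0, 1), repeat=q):
    pr = p**sum(flips) * (1 - p)**(q - sum(flips))
    probs[outcome(flips, q) - 1] += pr
# Each of the q+1 = 6 outcomes should have probability exactly 1/6.
```

The same logic works for q = 24251 in principle, though there you would compute p and classify a single observed flip sequence rather than enumerate all sequences.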

Conjecture (maybe 50% chance of being true?):

If you're only allowed to use one coin, it is impossible to do this with fewer than 24251 flips in the worst case.


What if you can only use coins with rational probabilities?

Comment by UnexpectedValues on Great minds might not think alike · 2020-12-29T03:10:13.727Z · LW · GW

Thanks for the feedback. Just so I can get an approximate idea if this is the consensus: could people upvote this comment if you like the title as is (and upvote mingyuan's comment if you think it should be changed)? Thanks!

Also, if anyone has a good title suggestion, I'd love to hear it!

Comment by UnexpectedValues on Great minds might not think alike · 2020-12-27T23:18:33.495Z · LW · GW

Yeah, I agree that the post isn't quite sequential. Most of Section II isn't necessary for any of the translator stuff -- it's just that I thought it was an interesting possible explanation of "alike minds think great" bias. (This somewhat disconnected logical structure was a hangup I had about the post; I was considering publishing it as two separate posts but decided not to.)

But, what I was trying to say about Shor and Constance and their need for a translator is: regardless of whether Shor underestimated Constance and vice versa because of this bias, they weren't in a position to understand each other's arguments. A translator's job is to make them understand each other (convert their thoughts into a language that's easily understandable by the other). This allows for reconciliation, because instead of Shor seeing his own argument and "black box Constance belief which I should update on even though I don't understand it", he sees his own argument and "Constance argument translated into Shor language", which he now has a much better idea what to do with. (And likewise symmetrically for Constance.)

Comment by UnexpectedValues on Pseudorandomness contest, Round 2 · 2020-12-20T21:15:53.340Z · LW · GW

Thanks for the suggestion. I might do that later; in the meantime, the following should work for pasting the strings as text (at least on Windows).

  1. Format the cells you are planning to paste the strings in as "text". (Right click -> format -> text)
  2. Copy the strings
  3. Right click -> paste special -> values only (if you hover over the options you should be able to find that one) -> text.
Comment by UnexpectedValues on Pseudorandomness contest, Round 2 · 2020-12-20T18:49:28.815Z · LW · GW

I've changed the rules to get rid of the normalization of probabilities clause, because it was pointed out to me that if someone says 0 to everything in an attempt to do well in Round 1, their Round 2 submission will receive a weight of 0 for Round 1 scoring anyway. It's still possible that there are some incentives issues here, but I doubt there's anything major, and I don't want to mess too much with what people submit.

Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-14T00:58:01.502Z · LW · GW

Yeah, this is okay. But, something that wouldn't be okay is writing some math or whatever on the side as part of calculating more bits. You can look at bits you've typed, but any computation you do with them must be in your head.

Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-14T00:14:59.817Z · LW · GW

I'm not sure I understand; could you clarify? If you're saying that the number of characters you've typed is displayed, that's okay -- that's why I recommended that one. (I suppose this makes my latest comment not quite accurate, but the point of that comment was just to let you know when to stop.)

My guess is that you are not breaking the rules as I intended to state them. Or if you think you did, perhaps do it again without breaking the rules and submit with a comment saying to count the later entry?

Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-13T18:27:03.417Z · LW · GW


Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-13T18:26:47.415Z · LW · GW

Yup, these are all okay!

Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-13T18:26:19.023Z · LW · GW

It is okay for you to go back and change or insert bits. Ideally you are in a featureless room with buttons for 0 and 1, a way to move to any point of your choice in the string you have typed, and a backspace key. Thanks for clarifying!

Comment by UnexpectedValues on Pseudorandomness contest, Round 1 · 2020-12-13T18:24:50.731Z · LW · GW

No, sorry -- any thinking about strategy in advance must be your own thoughts. No external resources for that either.

Comment by UnexpectedValues on An elegant proof of Laplace’s rule of succession · 2020-12-07T22:44:31.920Z · LW · GW

Question to the LW community: I made this linkpost by copy-pasting from my blog and then correcting any bad formatting. Is there an easier way to do this? Also, is there a way to do in-line LaTeX equations here?

Comment by UnexpectedValues on Aggregating forecasts · 2020-08-03T23:29:39.170Z · LW · GW

(Edit: I may have been misinterpreting what you meant by "geometric mean of probabilities." If you mean "take the geometric mean of the probabilities of all events and then scale them proportionally so they add to 1," then I think that's a pretty good method of aggregating probabilities. The point I make below is that the scaling is important.)

I think taking the geometric mean of odds makes more sense than taking the geometric mean of probabilities, because of an asymmetry arising from how the latter deals with probabilities near 0 versus probabilities near 1.

Concretely, suppose Alice forecasts an 80% chance of rain and Bob forecasts a 99% chance of rain. Those are 4:1 and 99:1 odds respectively, and if you take the geometric mean you'll get an aggregate 95.2% chance of rain.

Equivalently, Alice and Bob are forecasting a 20% chance and a 1% chance of no rain -- i.e. 1:4 and 1:99 odds. Taking the geometric mean of odds gives you a 4.8% chance of no rain -- checks out.

Now suppose we instead take a geometric mean of probabilities. The geometric mean of 80% and 99% is roughly 89.0%, so aggregating Alice's and Bob's probabilities of rain in this way will give 89.0%.

On the other hand, aggregating Alice's and Bob's probabilities of no rain, i.e. taking a geometric mean of 20% and 1%, gives roughly 4.5%.

This means that this method of aggregation is inconsistent: you get an 89% chance of rain and a 4.5% chance of no rain, which don't sum to 100%.
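A quick numeric check of the Alice-and-Bob example (the helper `geo_mean` is mine):

```python
import math

def geo_mean(xs):
    """Geometric mean of a list of positive numbers."""
    return math.prod(xs) ** (1 / len(xs))

p_alice, p_bob = 0.80, 0.99

# Geometric mean of odds: consistent whichever way you look at it.
odds = geo_mean([p_alice / (1 - p_alice), p_bob / (1 - p_bob)])  # sqrt(4 * 99)
rain = odds / (1 + odds)                  # ~0.952
no_rain = (1 / odds) / (1 + 1 / odds)     # ~0.048; rain + no_rain == 1

# Geometric mean of probabilities: the two answers don't add to 1.
rain_gm = geo_mean([p_alice, p_bob])              # ~0.890
no_rain_gm = geo_mean([1 - p_alice, 1 - p_bob])   # ~0.045
```

The odds-based aggregate is invariant under swapping "rain" and "no rain"; the probability-based one is not, which is exactly the inconsistency above.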