# 2018 Prediction Contest - Propositions Needed

post by jbeshir · 2018-03-29T15:02:32.409Z · score: 16 (6 votes) · LW · GW · 6 comments

## Contents

```  Motivation
Proposed Plan
The Main Challenge Remaining: Propositions
High Value Long Term Predictions
```

Summary: I think it is worth running a contest to measure ability to make accurate predictions, and am prepared to put up a prize of \$200 for the winner. I need to identify a base of 20-50 propositions, all likely to settle within 2-10 months from now, and would like to ask the community for their suggestions, or for some suggested algorithms for picking them. I'd also like feedback on the idea.

## Motivation

I think that practicing predicting future events significantly improves calibration and provides valuable feedback on one's own level of rationality, especially around the topics you are predicting on.

More fuzzily, I think that at the community level, a greater prevalence of prediction-making could provide gains throughout the community, through feedback on how effective our peers are at making predictions. I don't think it is quite a solution to the schools proliferating without evidence [LW · GW] problem, since it's a single very narrow metric, but schools proliferating with a single very narrow metric would still be a positive step, I think.

Thirdly, I think a lot of people would like to practice prediction-making, but do not get around to it for various reasons, one of which being the difficulty identifying what propositions to make predictions about. As a result, I think with a base of propositions, it would take quite a small expected value nudge to get a decent number of people to try making predictions.

Bringing all these thoughts together, the candidate strategy of running a prediction contest came to mind pretty easily. And the simplest way to see if it is a good idea is to try it. If I get 20 entrants I'd consider it a weak success and worth running another next year; more would be more of a success. I put maybe 60% odds on that, conditional on the contest being run.

## Proposed Plan

Once I have identified a set of 20-50 propositions, I'll create predictions for all of them on PredictionBook, and make a subsequent post here on Less Wrong, listing them all and announcing the contest.

From that time, anyone will have until a specified deadline (~1 month from posting time) to submit predictions on all of them and give me a contact email and their PredictionBook account name through a Google Form. If people submit multiple sets of predictions, the latest one before the deadline will be used (this removes the incentive to delay until the last minute to minimise uncertainty, since an earlier prediction can always be revised before the deadline).

Once the predictions have all settled (in about ten months), I'll score everyone's predictions using log scoring; the winner is whoever's score is highest. I'll make a subsequent post listing the way they all settled, along with the winner and immediate runners-up, and email the winner, asking them to make a comment on one of the predictions containing a string I provide, in order to prove ownership of the PredictionBook account. Once that's done I'll ask for a PayPal account to send the prize to, or I can send it via a cryptocurrency, or even Western Union if preferred.
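The post doesn't specify the exact judging code, but log scoring is well defined: each entrant's score is the sum of log(p) over propositions that settled yes and log(1 − p) over those that settled no, with higher (closer to zero) being better. A minimal sketch of how this might be computed, where the function names and the clamping epsilon are my own illustrative choices:

```python
import math

def log_score(predictions, outcomes, eps=1e-9):
    """Total log score for one entrant.

    predictions: probabilities assigned to each proposition settling "yes".
    outcomes: booleans, True if the proposition settled "yes".
    Higher (less negative) totals are better.
    """
    total = 0.0
    for p, outcome in zip(predictions, outcomes):
        # Clamp away from 0 and 1 so an overconfident entry that misses
        # yields a very large penalty rather than a math domain error.
        p = min(max(p, eps), 1.0 - eps)
        total += math.log(p if outcome else 1.0 - p)
    return total

def rank_entrants(entries, outcomes):
    """entries: {name: [p1, p2, ...]}. Returns names sorted best-first."""
    return sorted(entries, key=lambda name: log_score(entries[name], outcomes),
                  reverse=True)
```

One consequence of log scoring worth noting for entrants: a prediction of exactly 0 or 1 that settles the wrong way is, in the un-clamped formula, an unrecoverable negative-infinity score, so extreme confidence is heavily punished.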

(In subsequent years I might solicit community contributions to the prize, but for this experiment I'll take on the risk.)

Any issues people see? Anything worth changing? Anything that makes this a bad idea?

## The Main Challenge Remaining: Propositions

The blocker on executing this, at the moment (aside from giving people the chance to give feedback on whether this is actually a good idea), is identifying the propositions to use. The hard requirements are that they need to:

• Be clear enough that they can be settled as yes/no easily, with low odds of being voided. This means not being conditional predictions.
• Settle on the 1st of July 2018 or later, to permit half a month for making this happen, a month for people to make their predictions, and a remaining minimum of half a month between the end of submissions and the start of settling. This avoids giving people who make their predictions towards the end of the entry period a strong advantage (even given the ability to submit replacement predictions, this is worth minimising).
• Settle on the 31st of January 2019 or earlier, to permit the contest to be judged and the prize to be awarded on a somewhat reasonable timescale.

Preferred characteristics of the propositions, if I'm lucky enough to get enough ideas that I can be choosy (don't let these block you from making suggestions that violate them, because there's a good chance I won't be able to be choosy):

• Is a prediction for which, on the object level, it is in some way specifically valuable for us to have a well-calibrated probability estimate. Permits making better choices, is academically informative, tests a theory or proposition someone in the community holds, anything in any way valuable. If we get good propositions, we could get more than \$200 in value from this alone: running prediction markets is legally difficult, but a prediction contest approaches some of the same capabilities and might let us borrow some of the same benefits.
• Is a prediction which, in an abstract sense, is on a topic it is practically valuable for the community to be calibrated on. Success of projects (according to a very clear metric)? Ability to judge upcoming/recent academic research? I'm open to suggestions here.
• Settles on the 1st of August 2018 or later, to further reduce any benefit to late entry.
• Settles on 24th December 2018 or earlier, to permit it to be resolved before the end of 2018.
• Is not presently on Betfair or any other major betting exchange (this would preclude some political predictions, primarily). It might be alright to have a couple of these, but I'd rather the contest not boil down to "people who made their own predictions vs people who deferred to what the market currently said".

So, any ideas? Any thoughts on where to look for good propositions? This is the main place I'd like to crowdsource some ideas from the community, because figuring this out on my own would probably produce lower quality results than some of what people here could come up with.

## High Value Long Term Predictions

If people have any particular ideas, it might also be worth throwing in up to five longer-term propositions, settling years or even decades out, which you would be required to assign a probability to in order to enter the contest, but which wouldn't be judged as part of it. This would only be worth doing if we have propositions for which it is significantly valuable to have community consensus probabilities.

There'd be no incentive not to just put random numbers in for these, but I predict that most people won't (although they might put in less effort).

I'm interested in what thoughts people have here. Worth doing? Annoying and would put you off participating? What if they were not required, but just linked from the contest post as an optional extra?

comment by Scott Alexander (scott-alexander) · 2018-03-29T17:19:41.892Z · score: 15 (4 votes) · LW(p) · GW(p)

You might want to try adapting some of the ones from http://slatestarcodex.com/2018/02/06/predictions-for-2018/ and the lists linked at the bottom.

comment by jbeshir · 2018-03-31T22:42:19.097Z · score: 2 (1 votes) · LW(p) · GW(p)

Sounds good. I've looked over them and I could definitely use a fair few of those.

comment by Ben Pace (Benito) · 2018-03-29T20:13:42.806Z · score: 12 (2 votes) · LW(p) · GW(p)

User jacobjacob and I recently made ~150 questions to do predictions on for the next year, you could PM him and he can give you them (sorry I'm busy right now).

comment by jbeshir · 2018-03-31T22:34:37.733Z · score: 2 (1 votes) · LW(p) · GW(p)

Thanks for letting me know! I've sent them a PM, and hopefully they'll get back to me once they're free.

comment by rk · 2018-03-29T17:29:19.188Z · score: 10 (4 votes) · LW(p) · GW(p)

Many questions on Good Judgment Open seem to fit your hard characteristics, if not your preferred characteristics. Might you consider running a challenge for people's Brier score on GJO (if the Ts & Cs allow), or cribbing some questions? An advantage here is that judging is settled for you, and people have evidence with which to model GJO's resolution process.

comment by jbeshir · 2018-03-31T22:46:54.369Z · score: 2 (1 votes) · LW(p) · GW(p)

I need to take a good look over what GJO has to offer here. I'm not sure running a challenge for score on it would meet the goals well (in particular, I think it needs to be bounded in the amount of prediction it requires in order to motivate doing it, yet not gameable by just doing easy questions, and I'd like to be able to see what the probability assignments on specific questions were), but I've not looked at it closely with this in mind. I should at least be able to crib a few questions, or more.