Announcing the Forecasting Innovation Prize

Post by ozziegooen and NunoSempere (Radamantis) · 2020-11-15
There is already a fair amount of interest in judgemental forecasting within Effective Altruism. We think there’s a whole lot of good research left to be done.
Valuable research opportunities are scattered across many areas. We could use people to speculate on research directions, outline incentive mechanisms, try novel forecasting questions with friends, and write up new questions that deserve forecasts. Some of this requires a fair amount of background knowledge, but much of it doesn’t.
The EA and LW communities have a history of using prizes to encourage work in exciting areas. We’re going to try one in forecasting research. If this goes well, we’d like to continue and expand the program going forward.
This prize will total $1,000, split among multiple recipients, with a minimum first-place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).
To enter, first make a public post online between now and Jan 1, 2021. We encourage you to post directly on, or make a link post to, either LessWrong or the EA Forum. Second, complete this form, also before Jan 1, 2021.
If you’d like feedback or would care to discuss possible research projects, please do reach out by filling out this form. We’re happy to advise at any stage of the process.
The judges will be AlexRJL, Nuño Sempere, Eric Neyman, Tamay Besiroglu, Linch Zhang, and Ozzie Gooen. The details of the judging process will vary depending on how many submissions we get. We’ll try to select winners for their importance, novelty, and presentation.
Some Possible Research Areas
Areas of work we would be excited to see explored:
- Operationalizing questions in important domains so that they can be predicted on, e.g., Metaculus. This is currently a significant bottleneck; it’s surprisingly difficult to write good questions. Past examples include the Ragnarök and Animal Welfare series. One suggestion would be to try to come up with forecastable fire alarms for AGI. Tamay Besiroglu has suggested an “S&P 500 for AI forecasts,” i.e., a group of forecasting questions that together track something useful for AI (or for other domains).
- Small experiments where you and/or a group of people use forecasting for your own decision-making, plus a write-up of what you’ve learned. For example, set up a Foretold community to decide which research document you want to write up next. Predictions as a Substitute for Reviews is an example here.
- New forecasting approaches, or forecasting tools being used in new and interesting ways, or applied to new domains. For example, Amplifying generalist research via forecasting, or Ought’s AI timelines forecasting thread.
- Estimable or gears-level models of the world that are well positioned for use in forecasting. For example, a decomposition, informed by one’s own expertise, of a difficult question into smaller questions, each of which can then be forecast. Recent work by CSET-foretell is an example of this.
- Suggestions for, or basic implementations of, better tooling for forecasters, like a Bayes-rule calculator for combining many pieces of evidence, a calculator for Laplace’s rule of succession, etc.
- New theoretical schemes which propose solutions to current problems around forecasting. For a recent example, see Time Travel Markets for Intellectual Accounting.
- Eliciting forecasts from expert forecasters on useful questions. For example, the probabilities of the x-risks outlined in The Precipice.
- Overviews of existing research, or thoughts and reflections on existing prediction tournaments and similar. For example, Zvi’s posts on prediction markets, here and here.
- Figuring out why some puzzling behavior happens in current prediction markets or forecasting tournaments, as in Limits of Current US Prediction Markets (PredictIt Case Study). For a new puzzle suggested by Eric Neyman: PredictIt is thought to be inefficient because it caps trades at $850, charges various fees, etc., which keeps big, informed players from entering and making the market efficient. But that fails to explain why markets without such caps, such as FTX, show prices similar to PredictIt’s. So, is PredictIt reasonable or is FTX unreasonable? If the former, why is there such a strong expert consensus against what PredictIt says so often? If the latter, what explains FTX’s unreasonable prices despite its lack of such limits?
- Comments on existing posts can themselves be very valuable. Feel free to submit a list of good comments instead of one single post.
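To make the tooling suggestion above concrete, here is a minimal sketch of two such calculators: Laplace’s rule of succession, and a Bayes-rule update that combines several independent pieces of evidence expressed as likelihood ratios. The function names and interfaces are illustrative, not taken from any existing tool.

```python
from functools import reduce

def laplace_rule(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability that the
    next trial succeeds, given `successes` out of `trials` so far."""
    return (successes + 1) / (trials + 2)

def bayes_update(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability with several independent pieces of
    evidence, each given as a likelihood ratio P(e|H) / P(e|not-H).
    Works in odds form: multiply prior odds by each likelihood ratio,
    then convert back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = reduce(lambda odds, lr: odds * lr,
                            likelihood_ratios, prior_odds)
    return posterior_odds / (1 + posterior_odds)

# An event occurred in 7 of 10 past trials:
print(laplace_rule(7, 10))            # (7+1)/(10+2) = 8/12 ≈ 0.667

# A 20% prior and two independent pieces of evidence with
# likelihood ratios 3 and 2: odds 0.25 * 3 * 2 = 1.5, i.e. p = 0.6
print(bayes_update(0.20, [3.0, 2.0]))
```

Even toy calculators like these can help forecasters sanity-check intuitions; a submission might wrap them in a small web interface or extend them with sensitivity analysis.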