Test your forecasting ability, contribute to the science of human judgment
post by Mike Bishop (MichaelBishop) · 2012-05-05T15:07:45.655Z · LW · GW · Legacy · 5 comments
As XFrequentist mentioned last August, the Intelligence Advanced Research Projects Activity (IARPA) is sponsoring a forecasting tournament "with the goal of improving forecasting methods for global events of national (US) interest. One of the teams (the Good Judgment Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (not sure exactly what form this will take)."
You can pre-register here.
Last year, approximately 2400 forecasters were assigned to one of eight experimental conditions. I was the #1 forecaster in my condition. It was fun, and I learned a lot, and eventually they are going to give me a public link so that I can brag about this until the end of time. I'm participating again this year, though I plan to regress towards the mean.
I'll share the same info XFrequentist did last year below the fold because I think it's all still relevant.
Despite its importance in modern life, forecasting remains (ironically) unpredictable. Who is a good forecaster? How do you make people better forecasters? Are there processes or technologies that can improve the ability of governments, companies, and other institutions to perceive and act on trends and threats? Nobody really knows.
The goal of the Good Judgment Project is to answer these questions. We will systematically compare the effectiveness of different training methods (general education, probabilistic-reasoning training, divergent-thinking training) and forecasting tools (low- and high-information opinion polls, prediction markets, and process-focused tools) in accurately forecasting future events. We will also investigate how different combinations of training and forecasting tools work together. Finally, we will explore how to communicate forecasts more effectively, in ways that neither overwhelm audiences with technical detail nor oversimplify difficult decisions.
Over the course of each year, forecasters will have an opportunity to respond to 100 questions, each requiring a separate prediction, such as “How many countries in the Euro zone will default on bonds in 2011?” or “Will Southern Sudan become an independent country in 2011?” Researchers from the Good Judgment Project will look for the best ways to combine these individual forecasts to yield the most accurate “collective wisdom” results. Participants also will receive feedback on their individual results.
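(To make "combining individual forecasts" a little more concrete: the simplest possible aggregation is an unweighted average of everyone's probability estimates. The sketch below is purely illustrative, with made-up numbers; the project's actual aggregation methods are presumably more sophisticated than this baseline.)

```python
# Toy illustration of the simplest "collective wisdom" aggregation:
# an unweighted mean of several forecasters' probability estimates.
# This is NOT the Good Judgment Project's actual method, just a baseline.

def aggregate_mean(probabilities):
    """Average a list of probability forecasts, each in [0, 1]."""
    if not probabilities:
        raise ValueError("need at least one forecast")
    return sum(probabilities) / len(probabilities)

# Four hypothetical forecasters on a binary question such as
# "Will Southern Sudan become an independent country in 2011?"
forecasts = [0.10, 0.25, 0.05, 0.40]
print(round(aggregate_mean(forecasts), 2))  # 0.2
```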
All training and forecasting will be done online. Forecasters’ identities will not be made public; however, successful forecasters will have the option to publicize their own track records.
Who We Are
The Good Judgment research team is based at the University of Pennsylvania and the University of California, Berkeley. The project is led by psychologists Philip Tetlock (author of the award-winning Expert Political Judgment), Barbara Mellers (an expert on judgment and decision-making), and Don Moore (an expert on overconfidence). Other team members are experts in psychology, economics, statistics, interface design, futures, and computer science.
We are one of five teams competing in the Aggregative Contingent Estimation (ACE) Program, sponsored by IARPA (the U.S. Intelligence Advanced Research Projects Activity). The ACE Program aims "to dramatically enhance the accuracy, precision, and timeliness of forecasts for a broad range of event types, through the development of advanced techniques that elicit, weight, and combine the judgments of many intelligence analysts." The project is unclassified: our results will be published in traditional scholarly and scientific journals, and will be available to the general public.
A general description of the expected benefits for volunteers:
All decisions involve forecasts, and we all make forecasts all the time. When we decide to change jobs, we perform an analysis of the potential futures of each of our options. When a business decides to invest or disinvest in a project, it moves in the direction it believes presents the best opportunity. The same applies when a government decides to launch or abandon a policy.
But we virtually never keep score. Very few forecasters know what their forecasting batting average is — or even how to go about estimating what it is.
If you want to discover what your forecasting batting average is — and how to think about the very concept — you should seriously consider joining The Good Judgment Project. Self-knowledge is its own reward. But with self-knowledge, you have a baseline against which you can measure improvement over time. If you want to explore how high your forecasting batting average could go, and are prepared to put in some work at self-improvement, this is definitely the project for you.
Could that be any more LessWrong-esque?
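If "forecasting batting average" sounds abstract: the standard way projects like this keep score is with a proper scoring rule, and the one mentioned in the comments below is the Brier score, essentially the mean squared difference between your stated probabilities and what actually happened (lower is better). Here is a minimal sketch with invented numbers, not the project's official scoring code:

```python
# Minimal sketch of a Brier score for binary questions: the mean squared
# error between probability forecasts and outcomes (1 = event happened,
# 0 = it didn't). Lower is better; always saying 0.5 scores 0.25, and
# perfect forecasts score 0.

def brier_score(forecasts, outcomes):
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical track record over four questions:
forecasts = [0.9, 0.2, 0.7, 0.4]   # stated probabilities
outcomes  = [1,   0,   0,   1]     # what actually happened
print(round(brier_score(forecasts, outcomes), 3))  # 0.225
```

Note that Brier's original formulation sums the squared error over every answer option, which for a yes/no question comes out at exactly twice the value above; tournament leaderboards sometimes report scores on that scale.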
Prediction markets can harness the "wisdom of crowds" to solve problems, develop products, and make forecasts. These systems typically treat collective intelligence as a commodity to be mined, not a resource that can be grown and improved. That’s about to change.
Starting in mid-2011, five teams will compete in a U.S.-government-sponsored forecasting tournament. Each team will develop its own tools for harnessing and improving collective intelligence and will be judged on how well its forecasters predict major trends and events around the world over the next four years.
The Good Judgment Team, based at the University of Pennsylvania and the University of California, Berkeley, will be one of the five teams competing – and we’d like you to consider joining our team as a forecaster. If you're willing to experiment with ways to improve your forecasting ability, and if being part of cutting-edge scientific research appeals to you, then we want your help.
We can promise you the chance to: (1) learn about yourself (your skill in predicting – and your skill in becoming more accurate over time as you learn from feedback and/or special training exercises); (2) contribute to cutting-edge scientific work on both individual-level factors that promote or inhibit accuracy and group- or team-level factors that contribute to accuracy; and (3) help us distinguish better from worse approaches to generating forecasts of importance to national security, global affairs, and economics.
Who Can Participate
Requirements for participation include the following:
(1) A baccalaureate (bachelor's) or other undergraduate degree from an accredited college or university (more advanced degrees are welcome);
(2) A curiosity about how well you make predictions about world events – and an interest in exploring techniques for improvement.
More info: http://goodjudgmentproject.blogspot.com/
5 comments
Comments sorted by top scores.
comment by gwern · 2012-05-05T16:32:22.385Z · LW(p) · GW(p)
See also http://lesswrong.com/lw/c3v/link_get_paid_to_train_your_rationality_update/
Replies from: MichaelBishop
↑ comment by Mike Bishop (MichaelBishop) · 2012-05-05T20:18:22.430Z · LW(p) · GW(p)
hmmm, I guess I missed that. Should I remove this post?
Replies from: gwern
comment by Username · 2012-05-06T02:13:33.591Z · LW(p) · GW(p)
Interestingly enough, everyone sees themselves at the top of the leaderboard.
Replies from: Morendil
↑ comment by Morendil · 2012-05-06T10:59:51.005Z · LW(p) · GW(p)
I think that has been fixed. At one point I was ranked #8 in my team, and I finished #2, with an aggregate Brier score of .34, quite close to the leader at .33. Unfortunately that isn't much to brag about, as my team fell off the team leaderboard altogether - the top team had an aggregate score of .28.