General-purpose forecasting and the associated community
post by VipulNaik · 2014-06-26T02:49:51.005Z
This post includes some lists gathered by web surfing and link-traipsing, as well as some broad qualitative impressions. The qualitative impressions are not always thoroughly grounded, and some of them may be plain wrong.
As contract work for MIRI, I've been reviewing forecasting in many different domains. My output is currently scattered across many posts, and I'm planning to do a summary post once I've finished a few more. I thought it would be good to provide some background on the general ideas related to forecasting and the community that surrounds it.
General ideas related to forecasting
There are a number of important type distinctions in forecasting. The jargon differs somewhat across domains, so ignore the terminology I use and concentrate on the ideas:
- Judgmental forecasts (forecasts relying on individual or group judgment) versus model-based forecasts (forecasts that rely on an explicit model)
- Statistical models versus conceptual/causal models: Statistical models start with a relatively general functional form (without necessarily having a strong theoretical justification) and use regression-like approaches to estimate the parameters of the model. Causal or conceptual models start with a fairly specific functional form that has a conceptual justification, and with either no parameters or a very small number of parameters. When the underlying processes are physical, the conceptual models are called physical models.
- Simulations versus closed-form expressions: Simulations require one to run the entire model forward through time (and possibly space) in order to come up with predictions for values at a given later point in time. With simulation-based models, one cannot directly make a forecast for the value after a fixed time interval; rather, one needs to simulate how the system evolves over the whole period. Closed-form expressions may allow one to directly compute predictions at later points in time. Simulations are generally far more computationally intensive, but can produce more accurate forecasts, or a more representative and comprehensive ensemble of forecasts, than simple closed-form expressions.
- Time series forecasts versus explanatory forecasts: Time series forecasts use only the previous values of a variable to predict its future values. Typical techniques here include weighted smoothing and autoregressive methods: the idea is to fit a curve through the points, as in ordinary regression, but to give more weight to fitting the recent observations, and to correct for random and seasonal fluctuations by using moving averages where necessary (see the first sketch after this list). In contrast, explanatory forecasts identify other variables that explain the variable of interest, then use models (statistical or conceptual) of how those other variables predict it.
- Point estimate forecasts, interval forecasts, and probabilistic forecasts: Some forecasts produce a single point estimate for the value being predicted (in the case of continuous variables, this means they produce a single numerical value, whereas for binary questions, they produce a single answer of yes or no). Interval forecasts produce an interval (with a lower bound and an upper bound) that is claimed to contain the actual value with high probability. Probabilistic forecasts provide a probability distribution on the set of possible values.
- Single forecasts versus combined forecasts: Some forecasts use a single forecasting approach to produce the result. Combined forecasts use several different forecasting approaches, run independently of one another (the approaches are not allowed to interact or share information while computing), and then combine the results. The results could be combined by taking a weighted average, or by assigning different probabilities to the results of the different forecasts. In some cases, the full distribution of forecasts may be reported, along with the mean and standard deviation.
The general rationale for why averaging should do better than taking individual forecasts is reasonably clear: individual forecasts may be biased, but averaging over forecasts allows different biases to cancel out, so the overall bias is smaller. What is less clear is why simple averages often outperform sophisticated weighted averaging techniques; but it does seem to be true (see the second sketch below). For more, see the paper "Combining forecasts: a review and annotated bibliography" by Robert T. Clemen, International Journal of Forecasting, 5 (1989), 559-583.
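To make the time series bullet above concrete, here is a minimal sketch of simple exponential smoothing in Python. Everything in it is an illustrative assumption rather than a reference implementation: the sales-like series is made up, alpha = 0.3 is an arbitrary smoothing parameter, and the two-standard-deviation band is a crude rule of thumb. As a bonus, the same run shows the difference between a point forecast and an interval forecast.

```python
import statistics

def smooth_and_forecast(series, alpha=0.3):
    """Run simple exponential smoothing over `series`; return the
    one-step-ahead point forecast and the in-sample forecast errors."""
    level = series[0]
    errors = []
    for value in series[1:]:
        errors.append(value - level)  # error of the forecast made before seeing `value`
        level = alpha * value + (1 - alpha) * level
    return level, errors

history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]  # made-up data
point, errors = smooth_and_forecast(history)

# A crude interval forecast: the point estimate plus or minus two
# standard deviations of the in-sample one-step errors.
sd = statistics.stdev(errors)
print(f"point forecast: {point:.1f}")
print(f"approx. 95% interval: ({point - 2 * sd:.1f}, {point + 2 * sd:.1f})")
```

Larger values of alpha weight recent observations more heavily; in practice, alpha would usually be chosen by minimizing the in-sample one-step errors rather than fixed by hand.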
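In the same spirit, here is a minimal sketch of combining forecasts by a simple (unweighted) average, the approach whose surprising strength the Clemen paper documents. The three component forecasts are hypothetical stand-ins for independently run methods.

```python
from statistics import mean, stdev

# Hypothetical outputs of three independently run forecasting methods.
component_forecasts = {
    "smoothing": 127.4,
    "regression": 133.0,
    "judgmental": 121.5,
}

values = list(component_forecasts.values())
combined = mean(values)  # equal weights: the simple average
print(f"combined forecast: {combined:.1f}")
print(f"disagreement across methods (stdev): {stdev(values):.1f}")
```

The spread across methods is worth reporting alongside the average: large disagreement is a warning that the combined point estimate hides substantial uncertainty.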
A number of methods have been developed to qualitatively guide the thinking of forecasters, particularly in cases where purely quantitative methods are inadequate. These include:
- Reference class forecasting (Wikipedia). This is related to the concept of the outside view that is popular on LessWrong.
- Forecast by analogy (Wikipedia, Forecasting Principles)
For forecasts generated by individual judgments, we can tweak both the incentive structure and the opportunity for forecasters to learn from one another:
- Generating consensus forecasts: There are some protocols for allowing individual forecasters to learn from the forecasting attempts of other forecasters and modify their forecasts based on that. Examples are the Delphi method and the Nominal group technique (NGT).
- Forecasting competitions: In forecasting competitions, forecasters are rewarded (in money or reputation) for getting correct forecasts. Historical and current examples of forecasting competitions include the Makridakis Competitions conducted by Spyros Makridakis and Michele Hibon, the Global Energy Forecasting Competition conducted by a team led by Dr. Tao Hong, and The Good Judgment Project conducted by Philip E. Tetlock, Barbara Mellers, and Don Moore.
- Market-based forecasts or marketcasts: Actual markets (such as stock markets) or prediction markets (such as Intrade) can be used to generate forecasts by allowing people to bet their beliefs. In some cases, the market automatically generates an estimate for the value that is to be forecasted (for instance, the stock market generates an estimate for the total value of a company in the form of a market cap). In other cases, the information generated by market activity can be processed to generate a forecast. The latter approach is called a marketcast.
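A minimal sketch of the last idea: for a binary contract that pays a fixed amount if an event occurs, the market price can be read as a rough probability forecast, with the bid-ask midpoint as a common choice. The quotes below are hypothetical.

```python
def implied_probability(bid, ask, payout=1.0):
    """Read the bid-ask midpoint of a binary contract as a probability."""
    midpoint = (bid + ask) / 2
    return midpoint / payout

# Hypothetical quotes on a contract paying $1 if the event occurs.
prob = implied_probability(bid=0.62, ask=0.66)
print(f"implied probability: {prob:.0%}")  # prints "implied probability: 64%"
```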
Other forecasting overviews
- Forecasting overview by Rob J. Hyndman
- An Overview of Forecasting Methodology by David S. Walonick
- Professional Forecasting Methods by the Society of Actuaries
- Methods on Forecasting Principles, maintained by Kesten C. Green
Research into forecasting (general)
General journals (in approximately decreasing order of prestige/relevance):
- International Journal of Forecasting (website, Wikipedia) (started 1985): This has an impact factor of about 1.85. Papers describing the winning entries of major competitions, such as the M3-Competition and Global Energy Forecasting Competition, have been published here.
- Foresight: The International Journal of Applied Forecasting (website, Wikipedia) (started 2005).
- Journal of Forecasting (website, Wikipedia) (started 1982)
Domain-specific forecasting journals. Unless otherwise indicated, I did not get the impression that any of them is particularly high-status:
- Technological Forecasting and Social Change (website, Wikipedia): This has a relatively high impact factor of about 1.77. The journal focuses on technological forecasting and has published much of the literature on the Delphi method.
- Journal of Business Forecasting (website, Wikipedia) (started 1982)
- International Journal of Business Forecasting and Marketing Intelligence (website)
For specialized domains such as climate or energy, there are no separate journals for forecasting; rather, prediction and forecasting are a part of much of the research published in the main journals in those domains.
Institutes and societies:
- International Institute of Forecasters (website, Wikipedia): This is the world's foremost professional society of forecasters. It also publishes both the International Journal of Forecasting and Foresight: The International Journal of Applied Forecasting.
- Institute of Business Forecasting and Planning (website, LinkedIn group): They publish the Journal of Business Forecasting. This is restricted to forecasting for businesses, but still covers quite a wide swath of material.
- The Society of Actuaries has a Forecasting and Futurism section (website)
Other websites and blogs:
- Forecasting Principles: This website includes a wide swath of forecasting principles, as well as a summary of methods, resources, and existing literature for the application of forecasting to several existing domains, including many of the ones discussed later in this post. Although it's not comprehensive, it's a resource worth checking out when researching what has been said about forecasting in a given domain.
- SAS Business Forecasting Blog by Mike Gilliland (although this is specific to business forecasting, it still covers a wide range of forecasting approaches)
- Rob J. Hyndman's blog
- Andrew Gelman's blog
Notable individuals with all-round interest and achievements in forecasting
- J. Scott Armstrong (faculty page, Wikipedia): He was a founder and editor of the Journal of Forecasting and a founder of the International Journal of Forecasting. He is also behind PollyVote (Wikipedia), one of the most successful prediction tools for United States presidential elections (see here for instance). He is also the author of Principles of Forecasting and one of the main people involved with the Forecasting Principles website.
- Spyros Makridakis (INSEAD profile page, Wikipedia): Principal organizer of the Makridakis Competitions, which I discussed in this post.
- Michele Hibon (INSEAD profile page): Co-organizer with Makridakis of the Makridakis Competitions as well as of the follow-up NN3 competition.
- Nate Silver (Wikipedia): Creator of FiveThirtyEight, one of the most successful and famous prediction websites for United States elections, which uses poll aggregation. Also the author of The Signal and the Noise, a book on prediction and forecasting in many different realms.
- Paul Newbold (disciple of Clive Granger)
- Philip Tetlock (faculty page, Wikipedia): He is best known for his book Expert Political Judgment, which found that most "experts" underperformed simple extrapolation algorithms, and which also identified distinctions between experts who performed well and experts who performed badly.
- Rob J. Hyndman (website, Wikipedia page): He is currently the editor of the International Journal of Forecasting.
- Robert Fildes
- Robert T. Clemen
- Andrew Gelman (website, Wikipedia)
- Kesten C. Green (website): He maintains many parts of the Forecasting Principles website.
For more names of individuals, consider:
- People I listed in my post on forecasting of politics and conflict, and people I'll list in my other domain-specific forecasting posts.
- Lists of chief editors, editorial board members, and authors of papers at the forecasting-related journals, particularly the International Journal of Forecasting.
- Lists of people involved with the Forecasting Principles website.
- Authors of papers related to the Makridakis Competitions.
- People whose work is referenced on the SAS Forecasting blog or on Rob J. Hyndman's blog.
Some general remarks on the forecasting community (qualitative impressions, may be quite wrong)
- As indicated above, there is a reasonably large community of people who consider themselves "forecasters" in a general sense, not just domain-specific forecasters.
- Forecasters can be found as professors at universities as well as consultants and academics-in-residence at companies. There are no departments exclusively devoted to forecasting, so most forecasting academics are found in business and management schools, statistics departments, and departments specific to the domain where they have acquired fame forecasting. The modal forecasting academic is a management professor or business school professor.
- The forecasting community is somewhat more focused on time series forecasting (forecasting future values based on past values) and tools that are specifically useful for such forecasting. This marks them as somewhat different from the machine learning, data science, and predictive analytics communities (scattered across computer science, statistics, and other departments in academia, and in pockets of industry), which are more focused on explanatory forecasting: they generally identify a large number of features that can be used to predict a variable, and then use regression or classification models to make the relationship explicit. However, it's likely that, as more and more fine-grained data becomes routinely available, the focus will shift from time series forecasting toward predictive analytics.
- The relation between the forecasting community and domain-specific forecasters varies widely by domain. The general-purpose forecasting community strongly overlaps with the business forecasting community. There is a reasonable overlap and exchange of ideas with macroeconomic forecasters. This isn't surprising: all these people move in the same circles in academia and industry. There is also some overlap and exchange of ideas with demographic forecasting and with energy use forecasting (I haven't been able to gauge the extent of overlap). However, there are some domains where the forecasting community simply doesn't see eye to eye with the people who make domain-specific forecasts. A notable example is weather and climate. J. Scott Armstrong, one of the leading lights of the forecasting community, is a known global warming skeptic. He co-authored a global warming audit with Kesten C. Green that found climate scientists guilty of ignoring 72 of 89 forecasting principles. He also issued The Global Warming Challenge to Al Gore. More on this in a later post. While Armstrong's views on climate change aren't necessarily representative of the forecasting community, it does seem that the climate science community hadn't interacted much with the forecasting community prior to this conflagration.
- There is some overlap in methods as well as goals between general-purpose forecasting and the futures studies profession. However, as far as I can make out, the communities have very little overlap in membership. The quantitative methods that are the focus of forecasting simply aren't that useful for futures studies, which is viewed as part art, part science. On the other hand, the forecasting methods that rely on qualitative thinking are useful in futures studies, and there is some exchange of ideas here. A journal that probably lives at the nexus of forecasting and futures studies is Technological Forecasting and Social Change.