Lessons from weather forecasting and its history for forecasting as a domain

post by VipulNaik · 2014-06-23T17:08:58.453Z · score: 12 (12 votes) · LW · GW · Legacy · 8 comments


  The three challenges: theory, measurement, and computation
  The basic theory of weather forecasting
  How much precision and accuracy does high resolution buy?
  The problem of chaos and the butterfly effect
  Can (and should) weather forecasting be fully automated?
  Machine learning in weather prediction?
  Prehistory: before weather simulation came to fruition: meeting the theoretical challenge
  The first successful computer-based numerical weather prediction
  Progress since then: the interplay of computational and theoretical

This is the first of two (or more) posts that look at the domain of weather and climate forecasting and what we can learn from the history and current state of these fields for forecasting as a domain. It may not be of general interest to the LessWrong community, but I hope that it's of interest to people here who have some interest either in weather-related material or in forecasting in general.

The science of weather forecasting has come a long way over the past century. Since people started measuring and recording the weather (temperature, precipitation, etc.), two simple algorithms for weather prediction have existed (see also this):

  Persistence: predict that the weather at a future time will be the same as it is right now.
  Climatology: predict that the weather will match the long-run historical average for that location and time of year.

Until the end of the 19th century, there was no weather prediction algorithm that did consistently better than both persistence and climatology. Between persistence and climatology, climatology won out over medium to long time horizons (a week or more), whereas persistence won out in some kinds of places over short horizons (1-2 days), though even there, climatology sometimes does better (see more here). Both methods have very limited utility when it comes to predicting and preparing for rare extreme weather events, such as blizzards, hurricanes, cyclones, polar winds, or heavy rainfall.

This blog post discusses the evolution and progress of weather forecasting algorithms that significantly improve over the benchmarks of persistence and climatology, and the implications both for the future of weather forecasting and for our understanding of forecasting as a domain.

Sources for further reading (uncited material in my post is usually drawn from one of these): Wikipedia's page on the history of numerical weather prediction, The Signal and the Noise by Nate Silver, and The origins of computer weather prediction and climate modeling by Peter Lynch.

The three challenges: theory, measurement, and computation

Any method that tries to do better than persistence and climatology faces a problem: whereas those two methods can rely on existing aggregate records, a more sophisticated prediction algorithm must measure, theorize about, and compute with a much larger number of observed quantities. There are three aspects to the challenge:

  The theoretical challenge: having equations that correctly describe how the state of the atmosphere evolves.
  The measurement challenge: collecting accurate initial data about the current state of the atmosphere at many locations.
  The computational challenge: having enough computing power to solve the (discretized) equations quickly enough for the forecast to be useful.

The basic theory of weather forecasting

The basic idea of weather forecasting is to use the equations of physics to model the evolution of the atmospheric system. In order to do this, we need to know how the system looks at a given point in time. In principle, that information, combined with the equations, should allow us to compute the weather indefinitely into the future. In practice, the equations we create don't have closed-form solutions, the data is only partial (we don't have initial data for the whole world), and even small variations at a given time can balloon into big differences later (this is called the butterfly effect; more on this later in the post).

Instead of trying to solve the system analytically, we discretize the problem (we use discrete spatial locations and discrete time steps) and then solve the problem numerically (this is a bit like using a difference quotient instead of a derivative when computing a rate of change). There are four dimensions to the discretization (three spatial and one temporal), and how fine we make the grid in each dimension is called the resolution in that dimension.
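
To make the discretization idea concrete, here is a toy sketch in Python (my own illustration, not code from any actual forecast system): it steps a one-dimensional heat equation forward on a grid, replacing derivatives with difference quotients exactly as described above.

```python
import numpy as np

# Toy illustration of discretization: solve the 1-D heat equation
#   du/dt = alpha * d^2u/dx^2
# by replacing derivatives with difference quotients on a grid.
# Real weather models solve far more complicated equations in 3-D,
# but the "discretize and step forward in time" idea is the same.

alpha = 0.1                      # diffusivity
nx, nt = 50, 200                 # spatial and temporal resolution
dx, dt = 1.0 / nx, 0.001         # grid spacing and time step

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-100 * (x - 0.5) ** 2)   # initial condition: a warm "bubble"

for _ in range(nt):
    # second spatial derivative via a centered difference quotient
    # (np.roll gives periodic boundaries, which is fine for a toy example)
    d2u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    u = u + dt * alpha * d2u         # explicit forward step in time

print("peak temperature after diffusion:", u.max())
```

Doubling the resolution here means halving dx and also shrinking dt to keep the scheme stable, which is what drives the computational cost scaling discussed next.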

Thus, roughly, becoming finer by a factor of x in all three spatial dimensions and the time dimension requires upping computational resources to about x^4 times the original value. So, doubling in all four dimensions requires improving computational power to 16 times the original value, which means four doublings. Combining this with natural improvements in computing, such as Moore's law, we expect that we should be able to make our grid twice as fine in all dimensions (i.e., double the spatial and temporal resolution) every 8 years or so.
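
As a quick sanity check on that arithmetic, here is a back-of-the-envelope calculation (the 2-year doubling period for compute is an assumption standing in for Moore's law; the actual rate varies):

```python
import math

dims = 4                       # three spatial dimensions plus time
cost_factor = 2 ** dims        # doubling resolution everywhere costs 2^4 = 16x the compute
moore_doubling_years = 2.0     # assumed compute-doubling period

years_to_double_resolution = math.log2(cost_factor) * moore_doubling_years
print(cost_factor, years_to_double_resolution)   # 16 8.0
```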

How much precision and accuracy does high resolution buy?

My intuitive prior would be that, for time horizons short enough that we don't expect chaos to play a dominant role, the relationship suggested by the logarithmic timeline would hold. Does this agree with the literature?

I don't feel like I have a clear grasp of the literature, so the summary below is somewhat ad hoc. I hope it still helps elucidate the relationship.

The problem of chaos and the butterfly effect

The main problem with weather prediction is hypersensitivity to initial conditions: even small differences in initial values can have huge effects over longer timescales (this is sometimes called the butterfly effect). This sensitivity can enter at many different levels, for instance through measurement error, gaps in spatial coverage, or rounding in the numerical computation.

Due to these problems, modern algorithms for numerical weather prediction run simulations with many slight variations of the given initial conditions, using a probabilistic model to assign probabilities to the different scenarios considered. Note that here it is only the initial data that varies: the same model is run on each variation to generate a collection of scenarios weighted by probability.
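
Here is a minimal sketch of that idea, using the Lorenz-63 system as a stand-in for the atmosphere (an assumption purely for illustration; operational ensembles perturb full three-dimensional atmospheric states and run physics-based models):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])                       # "best guess" initial conditions
members = base + rng.normal(scale=1e-3, size=(50, 3))  # 50 slightly perturbed copies

for _ in range(2000):                                  # run every ensemble member forward
    members = np.array([lorenz_step(m) for m in members])

# Treat "x > 0" as a stand-in for some weather event of interest and read the
# probability off the fraction of ensemble members in which it occurs.
print("P(event) ~", np.mean(members[:, 0] > 0))
```

The tiny perturbations (scale 1e-3) grow into a wide spread of final states, which is exactly why the output has to be read probabilistically rather than as a single deterministic forecast.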

As the time horizon for forecasting increases (to one week ahead or beyond), our understanding of how the equilibrating influences of the weather play out becomes fuzzier. For such timescales, we use ensemble forecasting with a collection of different models. The models may use different data and give attention to different aspects of the data, based on slightly different underlying theories of how the different weather phenomena interact. As before, the output is a probabilistic weather prediction.

Can (and should) weather forecasting be fully automated?

Nate Silver observed in his book The Signal and the Noise that the proportional improvement that human input made to the computer models has stayed constant at about 25% for precipitation forecasts and 10% for temperature forecasts, even as the computer models, and therefore the final forecast, have improved considerably over the last few decades. The sources cited by Silver don't seem to be online, but I found another paper with the data that Silver uses. Silver says that humans' main input is in the following respects:

On a related note, Charles Doswell has argued that it would be dangerous to try to fully automate weather forecasting, because direct involvement with the weather forecasting process is crucial for meteorologists to get a better sense of how to make improvements to their models.

Machine learning in weather prediction?

For most of its history, weather prediction has relied on models grounded in our understanding of physics, with some aspects of the models being tweaked based on experience running the models. This differs somewhat from the spirit of machine learning algorithms. Supervised machine learning algorithms take a bunch of input data and output data and try to learn how to predict the outputs from the inputs, with considerable agnosticism about the underlying theoretical mechanisms. In the context of weather prediction, a machine learning algorithm might view the current measured data as the input, and the measured data after a certain time interval (or some summary variable, such as a binary variable recording whether or not it rained) as the output to be predicted. The algorithm would then try to learn a relation from the inputs to the outputs.
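
A minimal sketch of that supervised framing, on entirely made-up data (the features, the data, and the choice of a random forest are all illustrative assumptions, not anything an actual forecasting system does):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fake dataset: each row is "today's measurements" at one station
# (say temperature, pressure, humidity, wind speed); the label records
# whether it rained the next day.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 2] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The model learns the input -> output mapping directly, remaining agnostic
# about the underlying atmospheric physics.
print("held-out accuracy:", model.score(X_test, y_test))
```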

In recent years, machine learning ideas have started being integrated into weather forecasting. However, the core of weather prediction still relies on using theoretically grounded models. (Relevant links: Quora question on the use of machine learning algorithms in weather forecasting, Freakonomics post on a company that claims to use machine learning to predict the weather far ahead, Reddit post about that Freakonomics post).

My uninformed speculation is that machine learning algorithms would be most useful as a substitute for the human input to the model, rather than for the core of the numerical simulation. In particular, to the extent that machine learning algorithms can make progress on the problem of vision, they might be able to use that "vision" to better interpret the results of numerical weather prediction. Moreover, machine learning algorithms may be particularly well-suited to using Bayesian priors (arising from knowledge of historical climate) to identify cases where the numerical models are producing false feedback loops and predicting things that seem unlikely to happen.

Prehistory: before weather simulation came to fruition: meeting the theoretical challenge

Here is a quick summary of the initial steps taken toward realizing weather forecasting as a science. These steps concentrated on the theoretical challenge; the measurement and computational challenges would be tackled later. The most famous of them was Lewis Fry Richardson's attempt, published in 1922, to compute a forecast by hand from the equations of physics: the calculation was enormously laborious, and its prediction was wildly inaccurate.

It's interesting that scientists such as Richardson were so confident of their approach despite its failure to make useful predictions. The confidence arguably stemmed from the fact that the basic equations of physics that the model relied on were indubitably true. It's not surprising that Richardson's confidence wasn't widely shared. What's perhaps more surprising is that enough people were convinced by the approach that, when the world's first computers were made, weather prediction was viewed as a useful initial use of these computers. How they were able to figure out that this approach could bear fruit is related to questions I raised in my paradigm shifts in forecasting post.

Could Richardson have fixed the model and made correct predictions by hand? With the benefit of hindsight, it turns out that if he'd applied a standard smoothing procedure to the original data he worked with, he would have been able to make decent predictions by hand. But this is easier to see in hindsight, when we have the benefit of being able to try out many different tweaks of the algorithm and compare their performance in real time (more on this point below).

The first successful computer-based numerical weather prediction

In the mid-1940s, John von Neumann, one of the key figures in modern computing, stumbled across weather prediction and identified it as a problem ideally suited to computers: it required a huge amount of calculation using clearly defined algorithms on measured initial data. An initiative supported by von Neumann led to the meteorologist Jule Charney getting interested in the problem. In 1950, a team led by Charney came up with a complete numerical algorithm for weather prediction, building on and addressing some computational issues in Richardson's original approach. This was then implemented on the ENIAC, the only computer available at the time. The simulation had a time ratio of 1: it took 24 hours to simulate 24 hours of weather. Charney called it a vindication of the vision of Lewis Fry Richardson.

Progress since then: the interplay of computational and theoretical

Since the 1950 ENIAC implementation, weather forecasting has improved slowly and steadily. The bulk of the progress has come from access to faster computing power. Theoretical models have also improved. However, these improvements are not cleanly separable: the ability to run simulations faster allows for quicker testing and comparison of different algorithms, and experimental adjustment to make them work faster and better. In this way, one-time access to better computational resources can lead to long-term improvements in the algorithms used.


Comments sorted by top scores.

comment by [deleted] · 2014-06-23T17:56:19.931Z · score: 3 (3 votes) · LW · GW

First of all, good job. This is a really nice rundown of pre-1950 weather forecasting.

I would have really appreciated more research in the post-1950 world -- almost all the interesting advances happen there! I imagine with some work the theoretical and numerical improvements could be separated, and this would be incredibly interesting. In fact, it's gotten me interested in looking....

comment by Daniel_Burfoot · 2014-06-24T01:33:06.831Z · score: 2 (2 votes) · LW · GW

People always mention the Butterfly Effect as if it is an unmitigated disaster for humans: it's kind of like Nature saying FU, you are never going to be able to predict me.

And it's true that the Butterfly Effect makes it very hard to make good predictions about the weather. But it also has an upside, because it means if we somehow do figure out how to make good predictions, then we should also be able to easily control the weather. If a butterfly flapping its wings in Tokyo can cause a tornado in Kansas, and we know this, then we should be able to prevent the tornado in Kansas by having another butterfly flap its wings in Osaka (or whatever).

comment by Eugine_Nier · 2014-06-26T04:20:44.967Z · score: 0 (0 votes) · LW · GW

The problem is that the weather also has many variables. While the chaos-implies-control principle works for low-dimensional chaotic systems, e.g., the three-body problem in orbital dynamics, I'm not sure how well it would work for weather.

comment by ShardPhoenix · 2014-06-24T02:23:49.543Z · score: 0 (0 votes) · LW · GW

IIRC for a genuinely chaotic system, long-term predictions diverge for any error in the starting measurements, no matter how small. So if weather behaviour is really chaotic then precisely predicting it in the long-term isn't physically possible.

comment by [deleted] · 2014-06-24T05:02:49.886Z · score: 0 (0 votes) · LW · GW

Unless you start controlling the weather...

comment by peter_hurford · 2014-06-24T19:26:49.017Z · score: 1 (1 votes) · LW · GW

This would be a good post for Main. You should promote it!

(Note to others: as an attempt to reverse the decline of post quantity and quality in Main, do tell people if their post is worth promoting upward. Likewise, if you see an Open Thread post that should go to Discussion.)

comment by Luke_A_Somers · 2014-06-24T16:33:31.803Z · score: 1 (1 votes) · LW · GW

I was just thinking about weather prediction and LW last weekend, when weather.com said 0% chance of rain in my town and it ended up raining for 2 hours over a wide area.

I had some choice things to say about their calibration.

comment by Manfred · 2014-06-23T22:59:05.067Z · score: 1 (1 votes) · LW · GW

This is great so far! Have you considered putting it in Main?