Forecast verification
Forecast verification is a subfield of the climate, atmospheric and ocean sciences dealing with validating, verifying and determining the predictive power of prognostic model forecasts. Because of the complexity of these models, forecast verification goes a good deal beyond simple measures of statistical association or mean error calculations.
Defining the problem
To determine the value of a forecast, we need to measure it against some baseline, or minimally accurate, forecast. There are many types of forecast which, while producing impressive-looking skill scores, are nonetheless naive: a "persistence" forecast can still rival even those of the most sophisticated models. An example is: "What is the weather going to be like today? The same as it was yesterday." This could be considered analogous to a "control" experiment. Another example is a climatological forecast: "What is the weather going to be like today? The same as it was, on average, at this time of year over the past 75 years."
The second example suggests a good method of normalizing a forecast before applying any skill measure. Most weather situations cycle, since the Earth is forced by a highly regular energy source. A numerical weather model must accurately reproduce both the seasonal cycle and (if finely resolved enough) the diurnal cycle, but reproducing these cycles adds no information content, since they are easily predicted from climatological data. Climatological cycles may therefore be removed from both the model output and the "truth" data, so that the skill score applied afterward is more meaningful.
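As a rough illustration of this normalization, the sketch below (Python with NumPy; the data, function names, and 365-day climatology are invented for the example, not taken from any particular verification system) removes a daily climatological cycle from a forecast and its verifying observations, then computes a mean-squared-error skill score against the climatological baseline, which by construction scores zero.

```python
import numpy as np

def daily_climatology(obs, day_of_year, n_days=365):
    """Mean observed value for each day of the year, from a multi-year record."""
    return np.array([obs[day_of_year == d].mean() for d in range(n_days)])

def mse_skill_score(forecast, obs, day_of_year, clim):
    """Mean-squared-error skill score relative to a climatological forecast.

    1.0 = perfect, 0.0 = no better than climatology, negative = worse.
    """
    baseline = clim[day_of_year]              # the climatological "forecast"
    f_anom = forecast - baseline              # model output with the seasonal cycle removed
    o_anom = obs - baseline                   # "truth" with the seasonal cycle removed
    mse_forecast = np.mean((f_anom - o_anom) ** 2)
    mse_climatology = np.mean(o_anom ** 2)    # error of always forecasting the climatology
    return 1.0 - mse_forecast / mse_climatology

# Synthetic demonstration: a seasonally varying "truth" and a forecast with some skill.
rng = np.random.default_rng(0)
doy = np.arange(10 * 365) % 365
truth = 10.0 * np.sin(2 * np.pi * doy / 365) + rng.normal(0.0, 2.0, doy.size)
fcst = truth + rng.normal(0.0, 1.0, doy.size)
clim = daily_climatology(truth, doy)
print(f"MSE skill score vs. climatology: {mse_skill_score(fcst, truth, doy, clim):.2f}")
```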
One way of thinking about it is, "how much does the forecast reduce our uncertainty?"
Christensen et al. (1981) used entropy minimax pattern discovery, based on information theory, to advance the science of long range weather prediction. Previous computer models of weather were based on persistence alone and were reliable only 5–7 days into the future; long range forecasting was essentially random. Christensen et al. demonstrated the ability to predict the probability that precipitation will be below or above average with modest but statistically significant skill one, two and even three years into the future. Notably, this pioneering work discovered the influence of the El Nino/Southern Oscillation (ENSO) on U.S. weather.
Tang et al. (2005) used the relative entropy to characterize the uncertainty of ensemble predictions of the El Nino/Southern Oscillation (ENSO):
R ≈ Σ_i p_i ln(p_i / q_i)
where p is the ensemble distribution and q is the climatological distribution.
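As a minimal sketch of this measure (not a reconstruction of Tang et al.'s actual analysis), the Python snippet below bins a hypothetical climatological record and a hypothetical ensemble forecast into common categories and evaluates R = Σ_i p_i ln(p_i / q_i); the category boundaries and the synthetic samples are assumptions made purely for illustration.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """R = sum_i p_i * ln(p_i / q_i), in nats, for category probabilities p and q."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    nonzero = p > 0                                  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[nonzero] * np.log(p[nonzero] / (q[nonzero] + eps))))

def category_probs(sample, edges):
    """Empirical probability of each category defined by the bin edges."""
    idx = np.digitize(sample, edges)                 # category index 0 .. len(edges)
    return np.bincount(idx, minlength=len(edges) + 1) / sample.size

# Hypothetical example: a sharp 50-member ensemble versus a broad climatology.
rng = np.random.default_rng(1)
edges = np.array([-1.0, -0.5, 0.5, 1.0])             # assumed ENSO-index category boundaries
clim_sample = rng.normal(0.0, 1.0, 5000)             # stand-in climatological record
ensemble = rng.normal(1.2, 0.4, 50)                  # stand-in ensemble forecast

q = category_probs(clim_sample, edges)               # climatological distribution
p = category_probs(ensemble, edges)                  # ensemble (forecast) distribution
print(f"R = {relative_entropy(p, q):.2f} nats")      # larger R = more information beyond climatology
```

A larger R indicates that the ensemble distribution departs more strongly from climatology, i.e. the forecast conveys more information than the climatological baseline alone.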
Further information
The World Meteorological Organization maintains a webpage on forecast verification.
For more in-depth information on how to verify forecasts see the book by Jolliffe and Stephenson or the book chapter by Daniel Wilks.
External links
NWS Glossary of Forecast Verification Metrics
(U.S.) NWS Verification Home
(U.S.) National Hurricane Center Forecast Verification