Uncertainty uncertainty

The answer, as we know, is 42 – but does that mean that it’s exactly 42; or somewhere between 41.5 and 42.5; or is 42 just a ball-park estimate, and the answer could actually be, say, 37?

The value of science is its power to generate new knowledge about the world, but a key part of the scientific approach is that we care almost as much about estimating the accuracy of our new knowledge as about the new knowledge itself. This is certainly my own experience: I must have spent more time calculating how wrong I could be – estimating uncertainty ranges on my results – than on anything else.

One reason I like working with the 20th Century Reanalysis (20CR) is that it comes with uncertainty ranges for all of its results. It achieves this by being an ensemble analysis – everything is calculated 56 times, and the mean of the 56 estimates is the best estimate of the answer, while their standard deviation provides an uncertainty range. This uncertainty range is the basis for our calculation of the ‘fog of ignorance’.
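
For anyone who likes to see such things written down, here is a minimal sketch of that idea in Python (with numpy, and made-up numbers – not the real 20CR machinery):

```python
import numpy as np

# 56 ensemble estimates of (say) sea-level pressure at one place and time,
# in hPa - illustrative numbers only, not real 20CR output.
rng = np.random.default_rng(1)
ensemble = rng.normal(loc=1012.0, scale=3.0, size=56)

best_estimate = ensemble.mean()        # ensemble mean = best estimate
uncertainty   = ensemble.std(ddof=1)   # ensemble spread = uncertainty range

print(f"Pressure: {best_estimate:.1f} +/- {uncertainty:.1f} hPa")
```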

We are testing the effects of the new oldWeather observations on 20CR – by doing parallel experiments reconstructing the weather with and without the new observations. We have definitely produced a substantial improvement, but to say exactly how much of an improvement, where, and when, requires careful attention to the uncertainty in the reconstructions. In principle it’s not that hard: if the uncertainty in the reanalysis including the oldWeather observations is less than the uncertainty without the new observations, then we’ve produced an improvement (there are other possible improvements too, but let’s keep it simple). So I calculated this, and it looked good. But further checks turned up a catch: we don’t know the uncertainty in either case precisely, we only have an estimate of it, so any improvement might not be real – it might be an artefact of the limitations of our uncertainty estimates.
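
In code terms the naive test is just a comparison of the two ensemble spreads – again only a sketch, with made-up numbers and variable names:

```python
# Ensemble spreads (hPa) at one place and time from the two parallel
# experiments - illustrative numbers, not real results.
spread_without_ow = 2.8   # reanalysis without the new oldWeather observations
spread_with_ow    = 2.1   # reanalysis with the new oldWeather observations

if spread_with_ow < spread_without_ow:
    # Apparent improvement - but both spreads are themselves only estimates,
    # so the difference might not be real.
    print("Uncertainty reduced by", spread_without_ow - spread_with_ow, "hPa")
```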

To resolve this I have entered the murky world of uncertainty uncertainty. If I can calculate the uncertainty in the uncertainty range of each reanalysis, I can find times and places where the decrease in uncertainty between the analysis without and with the oldWeather observations is greater than any likely spurious decrease from the uncertainty in the uncertainty. (Still with me? Excellent). These are the times and places where oldWeather has definitely made things better. In principle this calculation is straightforward – I just have to increase the size of the reanalysis ensemble: so instead of doing 56 global weather simulations we do around 5600; I could then estimate the effect of being restricted to only 56. However, running a global weather simulation uses quite a bit of supercomputer time; running 56 of them requires a LOT of supercomputer time; and running 5600 of them is – well, it’s not going to happen.

So I need to do something cleverer. But as usual I’m not the first person to hit this sort of problem, so I don’t have to be clever myself – I can take advantage of a well-established general method for faking large samples when you only have small ones – a tool with the splendid name of the bootstrap. This means estimating the 5600 simulations I need by repeatedly sub-sampling from the 56 simulations I’ve got. The results are in the video below:

Yellow=better, red=worse (well, approximately). Yellow dots mark the new observations provided by oldWeather.
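
For the technically curious, here is roughly what the bootstrap looks like – a minimal sketch in Python with numpy, illustrative only and not the code behind the video: resample the 56 members with replacement, many times, and see how much the ensemble spread varies just from the sampling.

```python
import numpy as np

rng = np.random.default_rng(42)

# 56 ensemble estimates at one place and time (illustrative values only,
# not real 20CR output).
ensemble = rng.normal(loc=1012.0, scale=3.0, size=56)

n_boot = 5600                 # stand-in for the 5600 simulations we can't run
boot_spreads = np.empty(n_boot)
for i in range(n_boot):
    # Resample the 56 members with replacement and recompute the spread.
    resample = rng.choice(ensemble, size=ensemble.size, replace=True)
    boot_spreads[i] = resample.std(ddof=1)

# How much could the spread estimate vary just by chance, with no new
# observations at all? The 2.5th and 97.5th percentiles give thresholds
# for 'unlikely to happen by chance'.
low, high = np.percentile(boot_spreads, [2.5, 97.5])
print(f"Spread estimate: {ensemble.std(ddof=1):.2f} hPa; "
      f"plausible range just from sampling: {low:.2f} to {high:.2f} hPa")
```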


By bootstrapping, we can estimate a decrease in uncertainty that a reanalysis not using the oldWeather observations is unlikely to reach just by chance (less than 2.5% chance). Where a reanalysis using the oldWeather observations has a decrease in uncertainty that’s bigger than this, it’s likely that the new observations caused the improvement. The yellow highlight in this video marks times and places where this happens. We can see that the regions of improvement show a strong tendency to cluster around the new oldWeather observations (shown as yellow dots) – this is what we expect and supports the conclusion that these are mostly real improvements.

It’s also possible, though unlikely, that adding new observations can make the reanalysis worse (increase in estimated uncertainty). The bootstrap also gives an increase in uncertainty that a reanalysis not using the oldWeather observations is unlikely to reach just by chance (less than 2.5% probable) – the red highlight marks times and places where the reanalysis including the observations has an increase in uncertainty that’s bigger than this. There is much less red than yellow, and the red regions are not usually close to new observations, so I think they are spurious results – places where this particular reanalysis is worse by chance, rather than systematically made worse by the new observations.

This analysis meets its aim of identifying, formally, when and where all our work transcribing new observations has produced improvements in our weather reconstructions. But it is still contaminated with random effects: we’d expect to get spurious red and yellow regions each 2.5% of the time anyway (because that’s the threshold we chose), and there is a second problem: the bootstrapped 2.5% thresholds in uncertainty uncertainty are only estimates – they have uncertainty of their own, and where the thresholds are too low we will get too much highlighting (both yellow and red). To quantify and understand this we need to venture into the even murkier world of uncertainty uncertainty uncer… .

No – that way madness lies. I’m stopping here.


OK, as you’re in the 0.1% of people who’ve read all the way to the bottom of this post, there is one more wrinkle I feel I must share with you: The quality metric I use for assessing the improvement caused by adding the oW observations isn’t simply the reanalysis uncertainty, it’s the Kullback–Leibler divergence of the climatological PDF from the reanalysis PDF. So for ‘uncertainty uncertainty’ above read ‘Kullback–Leibler divergence uncertainty’. I’d have mentioned this earlier, except that it would have made an already complex post utterly impenetrable, and methodologically it makes no difference, as one great virtue of the bootstrap is that it works for any metric.
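
For completeness: if both PDFs are approximated as Gaussians (an assumption made here purely for illustration), the Kullback–Leibler divergence has a simple closed form – this is just a sketch of the formula, not the metric code used in the analysis:

```python
import numpy as np

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """D_KL(P || Q) for univariate Gaussians P = N(mu_p, sigma_p^2), Q = N(mu_q, sigma_q^2)."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sigma_q ** 2)
            - 0.5)

# Illustrative numbers only: a broad climatological PDF and a sharper
# reanalysis-ensemble PDF at one place and time, both treated as Gaussian.
print(kl_gaussian(mu_p=1012.0, sigma_p=6.0,    # climatology
                  mu_q=1010.5, sigma_q=2.5))   # reanalysis ensemble
```

The bootstrap doesn’t care which statistic you feed it – the resampling step stays the same, only the quantity computed on each resample changes – which is why the switch of metric makes no methodological difference.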

2 responses to “Uncertainty uncertainty”

  1. Peter Thorne says :

    Very nice post. As always. Putting me to shame.

    Anyway, looking at the animation … the red areas are almost ubiquitously in areas with very few observations, as you say. Plausibly, a reanalysis in regions with few observations may revert to climatology. An ensemble of climatology will look, well, rather like climatology – and so have artificially low spread.

    If you add a few observations, even distant observations, that force the reanalyses to act a bit less like climatology, you might get the paradoxical result that there is an increase in spread of the ensemble in the truly unsampled regions the first go round. But this would actually, potentially, reflect an increase in skill despite an increase in apparent uncertainty, because you were living in a fools’ paradise before?

    • Philip says :

      I think something like that could happen – if the model is biased in the absence of observations (either in mean or variability), then adding a small observational constraint could move some ensemble members away from the biased state towards reality – this would improve the analysis (closer to reality) but would appear to the metric I’m using as a reduction in quality (=> red). The observational constraint would have to be small compared with the bias, but this might occur, particularly in the tropics.

      There are biases present, if only because I’m using the 20CRV2 climatology to compare with 20CRV2C simulations (there is no 2C climatology yet, but I will revise this analysis when we’ve made one). I’ll be surprised if this is a big effect, but I will think further about this – thanks for the suggestion.
