
A centennial: The Battle of the Falkland Islands

oldWeather forum moderator Caro has been showcasing the history in our logs by tweeting, every day, excerpts from the logs of exactly 100 years ago (follow along here). The terse style of the logs is a good match for Twitter, but on some days so much happened that we’d like to go into more detail. December 8th, 1914 was such a day, so Caro has written this post:

It’s been said before: oldWeather is not just about the weather. We transcribe history too, and few of the historical narratives to emerge from our WWI ships’ logs can compare to the events that took place on this day, 8 December, 100 years ago: the Battle of the Falkland Islands. The logs of all nine Royal Navy ships involved ― Bristol, Canopus, Carnarvon, Cornwall, Glasgow, Inflexible, Invincible, Kent, and Macedonia ― have given our transcribers and editors first-hand accounts of one of the most important sea battles of WWI.

Back on November 1, Admiral von Spee and his German cruisers had defeated a Royal Navy squadron near Coronel, Chile. British losses were heavy: the ships Good Hope and Monmouth were lost, and with them the lives of about 1,600 men. Glasgow and Otranto escaped. The British Admiralty, realising the danger of the German ships escaping into the South Atlantic to disrupt the Allies’ operations along the African coast, or sailing around the Horn to attack the now almost defenceless British base in the Falkland Islands, sent a squadron to the South Atlantic to track down von Spee’s cruisers. Eight Royal Navy warships assembled at Port Stanley in the Falkland Islands on December 7. The old battleship Canopus had been in place since mid-November as guardship for Port Stanley, resting on the mud.

On 8 December, the German cruisers Scharnhorst, Gneisenau, Nürnberg, Dresden and Leipzig, together with three auxiliary vessels, gathered to attack the Falklands and raid the British facilities there. Gneisenau and Nürnberg detached from the rest of the German squadron and moved to attack the wireless station and port facilities of Port Stanley. The two raiders were seen by a hilltop spotter who reported their approach to Canopus, waiting out of sight behind the hills.

The logs continue the story:

  • 9.19am Canopus: Opened fire fore & aft 12” turrets on Gneisenau & Nürnberg
  • 9.30am Canopus: Ceased fire. Enemy retreated
  • 9.30am Glasgow: Weighed and proceeded
  • 9.50am Kent: Proceeded to follow enemy. 3 more German cruisers reported in sight, Scharnhorst, Leipzig, and Dresden
  • 10.15am Glasgow: As requisite keeping touch with enemy; squadron weighing and proceeding from Port William
  • 11.43am Carnarvon: Bristol ordered to take Macedonia & destroy transports
  • 12.57pm Inflexible: Opened fire at extreme range on Leipzig, firing 12 rounds of 12 inch, apparently making no hits
  • 12.57pm Invincible: Invincible opened fire
  • 1.25pm Invincible: Enemy’s light cruisers observed to spread to starboard
  • 1.33pm Invincible: Scharnhorst & Gneisenau opened fire
  • 1.35pm Invincible: Cornwall, Kent & Glasgow ordered to chase enemy light cruisers
  • 2.51pm Inflexible: Opened fire on Gneisenau, 15,200 yards, Invincible engaging Scharnhorst, the leading ship in line ahead
  • 3.00pm Glasgow: Opened fire & engaged Leipzig with 6″ gun
  • 3.30pm Bristol: Fired 2 rounds fore 6″ and ordered Santa Isabel and Baden, German colliers, to stop; crews ordered to abandon ships. German crews transferred to Macedonia
  • 4.01pm Inflexible: Scharnhorst listing heavily to starboard, two funnels gone, and ship on fire. Ceased firing on her
  • 4.15pm Carnarvon: Opened fire [on Scharnhorst]
  • 4.17pm Carnarvon: Scharnhorst turned over & sank bow first; cease fire
  • 5.00pm Kent: Kent proceeded in chase of Nürnberg
  • 5.30pm Cornwall: Opened fire [on Leipzig] with 6″ guns & continued action with all guns
  • 5.40pm Macedonia: Opened fire on Baden
  • 5.48pm Inflexible: Finally ceased firing [on Gneisenau]. Signalled to Carnarvon, “I think enemy have hauled down their colours”
  • 6.02pm Invincible: Gneisenau sinks. Invincible, Inflexible and Carnarvon proceeded at full speed to pick up survivors
  • 6.45pm Kent: Opened fire and finally ceased fire at 6.57pm; Nürnberg sank at 7.25pm
  • 6.50pm Cornwall: Enemy [Leipzig] on fire fore and aft
  • 7.00pm Bristol: Macedonia ordered to remain till colliers sunk and proceed to Port Stanley with crews
  • 7.23pm Kent: Stopped and endeavoured to pick up [Nürnberg] survivors
  • 7.53pm Macedonia: Baden sank
  • 8.15pm Macedonia: Opened fire on Santa Isabel
  • 9.00pm Cornwall: Stopped; lowered port boats to pick up [Leipzig] survivors
  • 9.23pm Cornwall: Leipzig foundered
  • 9.30pm Macedonia: Santa Isabel sank

The German auxiliary Seydlitz and light cruiser Dresden escaped. Almost 1,900 German seamen lost their lives; 10 British were killed.

One hundred years on, we remember all those who died at Coronel and the Falklands and in the battles to come.

Credits reel II: This time it’s colourful.

Today is the last Thursday in November, and our friends in the U.S.A. are celebrating Thanksgiving. This festival has not caught on here in the UK, so I’m spared the turkey and the pumpkin pie.

But I do know about being thankful, and today I’m particularly thankful for the 19,683 people who have transcribed at least one logbook page for oldWeather. Every one has made a contribution, from those who visited only once to those who have done thousands of pages and who help guide and drive the project and its community. I’m proud to count them all as co-investigators.

So it’s an appropriate day to release a revised version of the project credits reel:

This project has nineteen thousand, six hundred and eighty-three contributors.

Seas of red


Into the Top500

This visualisation, comparing two reconstructions of the weather of 1918 (each using the oldWeather observations), took four supercomputers to make: The blue contours are from the ERA20C reanalysis, run on a pair of IBM Power775s at ECMWF; the red contours are from 20CRv2C, run on Hopper, NERSC's Cray XE6; and the post-processing and rendering was done on Carver, NERSC's iDataPlex.


The Met Office, where I work, has just finalised an agreement to buy a new supercomputer. This isn’t that rare an event – you can’t do serious weather forecasting without a supercomputer and, just like everyday computers, they need replacing every few years as their technology advances. But this one’s a big-un, and the news reminded me of the importance of high-performance computing, even to observational projects like oldWeather.

To stand tall and proud in the world of supercomputing, you need an entry in the Top500: This is a list, in rank order, of the biggest and fastest computers in the world. These machines are frighteningly powerful and expensive, and a few of them have turned part of their power to using the oldWeather observations.

Two other machines have not used our observations yet (except for occasional tests), but are gearing up to do so in the near future.

My personal favourite, though, is none of these: Carver is not one of the really big boys. An IBM iDataPlex with only 9,984 processor cores, it ranked at 322 in the list when it was new in 2010, and has since fallen off the Top500 altogether, overtaken by newer and bigger machines. It still has the processing power of something like 5,000 modern PCs though, and shares in NERSC’s excellent technical infrastructure and expert staff. I use Carver to analyse the millions of weather observations and terabytes of weather reconstructions we are generating – almost all of the videos that regularly appear here were created on it.

The collective power of these systems is awe-inspiring. One of the most exciting aspects of working on weather and climate is that we can work (through collaborators) right at the forefront of technical and scientific capability.

But although we need these leading-edge systems to reconstruct past weather, they are helpless without the observations we provide. All these computers together could not read a single logbook page, let alone interpret the contents; the singularity is not that close; we’re still, fundamentally, a people project.

43 years in the North

The voyages of the Bear, Corwin, Jeannette, Manning, Rush, Rodgers, Unalga II, and Yukon


Today is the fourth birthday of oldWeather, and it’s almost two years since we started work on the Arctic voyages. So it’s a good time to illustrate some more of what we’ve achieved:

At the moment I’m looking at the Arctic ships we’ve finished: Bear, Corwin, Jeannette, Manning, Rush, Rodgers, Unalga II, and Yukon have each had all of their logbook pages read by three people, so it’s time to add their records to the global climate databases and start using them in weather reconstructions. From them we have recovered 43 ship-years of hourly observations – more than 125,000 observations concentrating on the marginal sea-ice zones in Baffin Bay and the Bering Strait – an enormous addition to our observational records.

The video above shows the movements of this fleet (compressed into a single year). They may occasionally choose to winter in San Pedro or Honolulu, but every summer they are back up against the ice – making observations exactly where we want them most.

So in our last two years of work, we’ve completed the recovery of 43 ship-years of logbooks, and actually we’ve done much more than that: The eight completed ships shown here account for only about 25% of the 1.5 million transcriptions we’ve done so far. So this group is only a taster – there’s three times as much material again already in the pipeline.

Sesquimillional

Sometimes there is just no word powerful enough to describe the achievements of oldWeather.

Back in March we reached a million, and since then we’ve powered on from that milestone, now having added an additional five hundred thousand observations to our tally. That’s two new observations every minute, night and day, 7 days a week: Come rain or shine; snow or sleet; ice, fire, or fog.

The Weather of HMS Beagle

As I’ve mentioned previously, last Thursday I was warm-up man for Charles Darwin and Robert FitzRoy (finally, a job truly worthy of oldWeather) – I was giving a talk about the project at the Progress Theatre in Reading.

HMS Beagle isn’t (yet) one of our ships – the observations from her 1831–6 circumnavigation had been rescued before oldWeather started – but I could use what I’ve learned from analysing the oldWeather observations to show the route of the ship, the weather they experienced, and the effect of their observations on our reanalyses for the period.

The route of HMS Beagle, and the value of her weather observations. The inset graph shows the Beagle's pressure observations (as black dots) and the analysed mean-sea-level pressure (from scout run 3.3.8 of the Twentieth Century Reanalysis): The pale blue band gives the range of first-guess pressure estimates (at the location of the ship), and the dark blue band the analysis range. The dark band is consistently narrower than the light one, showing the more precise estimates of the weather obtained by assimilating the observations.

Uncertainty uncertainty

The answer, as we know, is 42 – but does that mean that it’s exactly 42; or somewhere between 41.5 and 42.5; or is 42 just a ball-park estimate, and the answer could actually be, say, 37?

The value of science is its power to generate new knowledge about the world, but a key part of the scientific approach is that we care almost as much about estimating the accuracy of our new knowledge as about the new knowledge itself. This is certainly my own experience: I must have spent more time calculating how wrong I could be – estimating uncertainty ranges on my results – than on anything else.

One reason I like working with the 20th Century Reanalysis (20CR) is that it comes with uncertainty ranges for all of its results. It achieves this by being an ensemble analysis – everything is calculated 56 times, and the mean of the 56 estimates is the best estimate of the answer, while their standard deviation provides an uncertainty range. This uncertainty range is the basis for our calculation of the ‘fog of ignorance’.
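As a minimal illustration of that mean-and-spread idea, here’s a sketch in Python/NumPy – the numbers are invented stand-ins for real 20CR output, not actual reanalysis values:

```python
import numpy as np

# Invented stand-in for 20CR output: 56 ensemble estimates of
# mean-sea-level pressure (hPa) at one time and place.
rng = np.random.default_rng(1)
ensemble = rng.normal(loc=1013.0, scale=2.5, size=56)

best_estimate = ensemble.mean()      # mean of the members: best estimate
uncertainty = ensemble.std(ddof=1)   # their spread: the uncertainty range

print(f"best estimate: {best_estimate:.1f} hPa")
print(f"uncertainty:   {uncertainty:.1f} hPa")
```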

We are testing the effects of the new oldWeather observations on 20CR – by doing parallel experiments reconstructing the weather with and without the new observations. We have definitely produced a substantial improvement, but to say exactly how much of an improvement, where, and when, requires careful attention to the uncertainty in the reconstructions. In principle it’s not that hard: if the uncertainty in the reanalysis including the oldWeather observations is less than the uncertainty without the new observations, then we’ve produced an improvement (there are other possible improvements too, but let’s keep it simple). So I calculated this, and it looked good. But further checks turned up a catch: we don’t know the uncertainty in either case precisely – we only have an estimate of it – so any improvement might not be real; it might be an artefact of the limitations of our uncertainty estimates.

To resolve this I have entered the murky world of uncertainty uncertainty. If I can calculate the uncertainty in the uncertainty range of each reanalysis, I can find times and places where the decrease in uncertainty between the analysis without and with the oldWeather observations is greater than any likely spurious decrease from the uncertainty in the uncertainty. (Still with me? Excellent). These are the times and places where oldWeather has definitely made things better. In principle this calculation is straightforward – I just have to increase the size of the reanalysis ensemble: so instead of doing 56 global weather simulations we do around 5600; I could then estimate the effect of being restricted to only 56. However, running a global weather simulation uses quite a bit of supercomputer time; running 56 of them requires a LOT of supercomputer time; and running 5600 of them is – well, it’s not going to happen.

So I need to do something cleverer. But as usual I’m not the first person to hit this sort of problem, so I don’t have to be clever myself – I can take advantage of a well-established general method for faking large samples when you only have small ones – a tool with the splendid name of the bootstrap. This means estimating the 5600 simulations I need by repeatedly sub-sampling from the 56 simulations I’ve got. The results are in the video below:

Yellow=better, red=worse (well, approximately). Yellow dots mark the new observations provided by oldWeather.


By bootstrapping, we can estimate a decrease in uncertainty that a reanalysis not using the oldWeather observations is unlikely to reach just by chance (less than 2.5% chance). Where a reanalysis using the oldWeather observations has a decrease in uncertainty that’s bigger than this, it’s likely that the new observations caused the improvement. The yellow highlight in this video marks times and places where this happens. We can see that the regions of improvement show a strong tendency to cluster around the new oldWeather observations (shown as yellow dots) – this is what we expect, and it supports the conclusion that these are mostly real improvements.
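For anyone who wants the mechanics, here’s a minimal sketch of that threshold calculation in Python/NumPy. It is not the 20CR code: the numbers and names are invented, and using the plain ensemble standard deviation as the quality metric is a simplification (see the postscript at the end of this post for the metric actually used):

```python
import numpy as np

rng = np.random.default_rng(0)
N_MEMBERS, N_BOOT = 56, 5600

# Invented 56-member ensembles at one time and place (hPa):
# a reanalysis without the new observations, and one with them.
without_obs = rng.normal(1013.0, 3.0, N_MEMBERS)
with_obs = rng.normal(1013.0, 2.0, N_MEMBERS)

def bootstrap_spreads(members, n_boot=N_BOOT):
    """Spread (std dev) of many resamples drawn with replacement."""
    idx = rng.integers(0, len(members), size=(n_boot, len(members)))
    return members[idx].std(axis=1, ddof=1)

# How big a decrease in spread could appear purely by chance, if we
# re-drew a 56-member ensemble from the no-new-obs reanalysis?
chance_decrease = without_obs.std(ddof=1) - bootstrap_spreads(without_obs)
threshold = np.percentile(chance_decrease, 97.5)  # exceeded <2.5% of the time

actual_decrease = without_obs.std(ddof=1) - with_obs.std(ddof=1)
if actual_decrease > threshold:
    print("decrease too big to be chance: highlight this point yellow")
```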

It’s also possible, though unlikely, that adding new observations can make the reanalysis worse (an increase in estimated uncertainty). The bootstrap also gives an increase in uncertainty that a reanalysis not using the oldWeather observations is unlikely to reach just by chance (less than 2.5% probable) – the red highlight marks times and places where the reanalysis including the observations has an increase in uncertainty that’s bigger than this. There is much less red than yellow, and the red regions are not usually close to new observations, so I think they are spurious results – places where this particular reanalysis is worse by chance, rather than systematically made worse by the new observations.

This analysis meets its aim of identifying, formally, when and where all our work transcribing new observations has produced improvements in our weather reconstructions. But it is still contaminated with random effects: we’d expect spurious red and yellow regions each to appear 2.5% of the time anyway (because that’s the threshold we chose), but there is a second problem: the bootstrapped 2.5% thresholds in uncertainty uncertainty are only estimates – they have uncertainty of their own, and where the thresholds are too low we will get too much highlighting (both yellow and red). To quantify and understand this we need to venture into the even murkier world of uncertainty uncertainty uncer… .

No – that way madness lies. I’m stopping here.


OK, as you’re in the 0.1% of people who’ve read all the way to the bottom of this post, there is one more wrinkle I feel I must share with you: The quality metric I use for assessing the improvement caused by adding the oW observations isn’t simply the reanalysis uncertainty, it’s the Kullback–Leibler divergence of the climatological PDF from the reanalysis PDF. So for ‘uncertainty uncertainty’ above read ‘Kullback–Leibler divergence uncertainty’. I’d have mentioned this earlier, except that it would have made an already complex post utterly impenetrable, and methodologically it makes no difference, as one great virtue of the bootstrap is that it works for any metric.
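For concreteness, here’s what that metric looks like in the simplest case. This is a sketch, not the 20CR code: it assumes both PDFs are Gaussian (which gives the divergence in closed form), reads ‘divergence of the climatological PDF from the reanalysis PDF’ as D_KL(climatology ‖ reanalysis), and the numbers are invented:

```python
import numpy as np

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """D_KL(P || Q) for one-dimensional Gaussian PDFs, in closed form."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
            - 0.5)

# Invented numbers: a broad climatological PDF, a sharper reanalysis PDF (hPa).
climatology = (1013.0, 8.0)  # (mean, standard deviation)
reanalysis = (1009.0, 2.0)

# The further the reanalysis PDF departs from climatology, the more the
# observations have told us. Bootstrapping this number, rather than the
# plain ensemble spread, works in exactly the same way.
print(kl_gaussian(*climatology, *reanalysis))
```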

In search of lost weather

Imagine you have a free hour, one wet weekend, so you settle down to a little light reading:

That’s 8,000 pages. If you read them all again the following weekend (to catch the subtleties that escaped you the first time), and then again, and again, and again, and again, and again; you’d still be 6,000 pages short of matching the work we’ve done reading the logbooks of USS Bear.

So congratulations to lollia paolina, gastcra, Hanibal94, DennisO, jil, pommystuart, LarryW, smith7748, tastiger, and every one of the 402 other crew members – on an achievement of epic proportions: From the 20,930 pages of the Bear’s logs (each read 3 times, remember), we’ve recorded 349,015 weather observations, each with several components (wind speed, barometer, air temperature, etc.) making more than 2.9 million data points.
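(A quick check on those numbers, for the arithmetic-minded – all the figures come from the two paragraphs above:)

```python
bear_pages_read = 20930 * 3      # every page of the Bear's logs, read three times
weekend_reading = 8000 * 7       # 8,000 pages, read seven weekends running
print(bear_pages_read - weekend_reading)  # 6790: the '6,000 pages short'

# Each of the 349,015 observations has several components:
print(2.9e6 / 349015)            # roughly 8 data points per observation
```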

And of course it’s not just weather: those logs also provided 22,957 dates, 6,427 longitudes, 2,947 people, 189 animals, 19,489 places, and more … 8,872,438 characters in all.

As before, I’ve used the transcriptions to make a movie version. But the sheer size of the achievement causes problems even here: I thought that a maximum movie length of 10,000 seconds was way more than enough, but not for the Bear. So while I sort that out, here’s just the first installment: 1884-1890.

A daughter: weatherdetective.net.au

If you look at weatherdetective.net.au you might get a feeling of déjà vu – a sense that you’ve seen something similar before.

oldWeather is not yet four, a bit young to be having children. But that’s four internet years, so maybe it’s time: We’ve contributed DNA to plenty of other projects, but Weather Detective is our first direct descendant.

As with all children it’s a separate person – with its own science team, volunteer community, logbooks, and interface. They are friends, but they will be doing things differently from us (and we’re not too old to learn from their approach).

Congratulations to Christa Pudmenzky, USQ, ABC, and all involved. We wish you fair winds and a following sea.

Update: September 2014 – They’ve released their first batch of results (data, visualisation).
