The Problem

Humor me for a minute here: let’s say you’re the owner and president-for-life of a grocery store chain that operates solely within the state of Missouri. Summertime is approaching, and you’ve noticed that, between the months of March and September, customers tend to buy more bottles of water whenever the temperature is above 85℉ (29.44℃). You want to be prepared for an increase in demand, but you need to place orders for bottled water days in advance.

The Data

To help solve this problem, we will calibrate ensemble weather forecasts using retrospective forecasts (also known as reforecasts, or retrospective weather forecasts using reanalysis for initial conditions) from NOAA’s Global Ensemble Forecast System. This data can be requested and downloaded through NOAA/ESRL’s GEFS Reforecast website. We will calibrate 2-meter temperature forecasts at a lead time of seven days, with forecasts from 1985–2016.

Collecting The Station Data

We will use BigQuery to collect the relevant GHCN observation data! There are a few tricky things with this data set that we have to deal with:

1. The full GHCN data set has observations that go all the way back to the late 1700s! While that is super impressive (and ripe for further exploration!), we are only interested in stations that have observations between 1985–2016.

2. We’re also only interested in stations that are in Missouri. The full GHCN data set has stations all over the world, on every continent! While Edward Lorenz famously asked whether the flap of a butterfly’s wings in Brazil sets off a tornado in Texas (and let’s not forget that nature doesn’t conform to man-made state boundaries), we can speed things up by only considering stations within the state of Missouri.

3. GHCN stations provide observations of multiple meteorological phenomena, such as precipitation, minimum and maximum temperature, and wind speed, from multiple sources. Not all stations provide the same types of reports, and it isn’t clear how each station measures a 24-hour period. We’re only interested in maximum temperature, and we’ll assume that a 24-hour period represents 0:00 UTC–23:59 UTC, so we’ll also have to account for that.

Most of these problems can easily be accounted for with the right SQL queries. Using the BigQuery web interface, we can provide the following query to select stations that meet the above criteria:

SELECT
  ghcnd.id AS station_id,
  ghcnd.firstyear AS first_year,
  ghcnd.lastyear AS last_year,
  station.name AS station_name,
  station.latitude AS lat,
  station.longitude AS lon
FROM
  [bigquery-public-data:ghcn_d.ghcnd_inventory] AS ghcnd
JOIN
  [bigquery-public-data:ghcn_d.ghcnd_stations] AS station
ON
  ghcnd.id = station.id
WHERE
  ghcnd.firstyear <= 1985
  AND ghcnd.lastyear >= 2016
  AND ghcnd.id LIKE "US%"
  AND station.state LIKE "MO%"
  AND ghcnd.element = "TMAX"

which provides us the ID, name, latitude, and longitude of stations that provide a 24-hour maximum temperature report. The query took about 3 seconds to process, which is great since we are on a time crunch here! While you can run queries straight from a DataLab notebook, I opted to save the resulting table from the BigQuery web app and collect the saved table. After loading it, we convert the table into a pandas DataFrame and check the number of stations we found, as well as the first few rows:
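The collection step itself isn’t shown here; a minimal sketch of loading the saved table into pandas might look like the following (the file would really be read with something like `pd.read_csv("mo_stations.csv")`, and the two station rows below are invented stand-ins for the real export):

```python
import io
import pandas as pd

# Stand-in for reading the table saved from the BigQuery web UI;
# the two station rows below are invented examples.
csv_text = """station_id,first_year,last_year,station_name,lat,lon
USC00230001,1893,2017,EXAMPLE STATION A,39.43,-93.13
USC00230002,1903,2017,EXAMPLE STATION B,37.06,-94.51
"""
stations = pd.read_csv(io.StringIO(csv_text))

print(len(stations))    # number of stations found
print(stations.head())  # first few rows
```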

Looking good so far! At this point, we need to get the observations from the BigQuery public data set. We can do this by joining our table of relevant Missouri stations with the daily GHCN observation data. Google provides the data as tables separated by year (e.g., ghcnd_1998 contains daily station data from 1998), and BigQuery supports some awesome SQL-ish extensions, such as wildcards within table names, which let us query only the years we’re interested in. We can reuse the previous query to select just the stations we want while also collecting all of the maximum temperature observations from those stations. Just to show the power of Google’s BigQuery: the following query took me 9.8 seconds to run! It would have taken far longer to download the data from some server, load it into pandas (either in a loop or via dask), and then subset it to the stations we need:

#standardSQL
SELECT
  ghcn.id AS station_id,
  ghcn.date AS ob_time,
  (ghcn.value / 10.0) AS tmax,
  ghcn.sflag AS sflag,
  stations.lat AS lat,
  stations.lon AS lon
FROM
  `bigquery-public-data.ghcn_d.ghcnd_*` AS ghcn
JOIN (
  SELECT
    ghcnd.id AS station_id,
    ghcnd.firstyear AS first_year,
    ghcnd.lastyear AS last_year,
    station.name AS station_name,
    station.latitude AS lat,
    station.longitude AS lon
  FROM
    `bigquery-public-data.ghcn_d.ghcnd_inventory` AS ghcnd
  JOIN
    `bigquery-public-data.ghcn_d.ghcnd_stations` AS station
  ON
    ghcnd.id = station.id
  WHERE
    ghcnd.firstyear <= 1985
    AND ghcnd.lastyear >= 2016
    AND ghcnd.id LIKE "US%"
    AND station.state LIKE "MO%"
    AND ghcnd.element = "TMAX") AS stations
ON
  stations.station_id = ghcn.id
WHERE
  ghcn.qflag IS NULL
  AND _TABLE_SUFFIX >= '1985'
  AND _TABLE_SUFFIX <= '2016'
  AND ghcn.element = 'TMAX'
  AND ghcn.value IS NOT NULL
  AND EXTRACT(MONTH FROM ghcn.date) >= 3
  AND EXTRACT(MONTH FROM ghcn.date) <= 9

which gives us another table (one we could also save on BigQuery rather than spend another 10 seconds re-running the query, which is what I did) with the maximum temperature reports at our stations of interest. You’ll also notice that we divide the maximum temperature values by 10.0, which is necessary because the reports come in units of tenths of degrees Celsius (welcome to the fun world of weather station observations). We collect the data, convert it to a pandas DataFrame, and do some more necessary cleaning of the data:
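The cleaning code isn’t shown; as a toy sketch of the kind of tidying involved (the implausible-value threshold and duplicate handling here are assumptions for illustration, not the exact steps used):

```python
import pandas as pd

# Invented rows standing in for the BigQuery result (tmax already /10).
obs = pd.DataFrame({
    "station_id": ["USC00230001", "USC00230001", "USC00230002"],
    "ob_time": pd.to_datetime(["1990-07-01", "1990-07-02", "1990-07-01"]),
    "tmax": [35.0, 101.2, 30.5],  # deg C; 101.2 is clearly a bad report
})

# Assumed tidying: drop physically implausible maxima and any
# duplicate station/day rows.
obs = obs[obs["tmax"] < 60.0].drop_duplicates(["station_id", "ob_time"])
print(len(obs))  # 2 rows survive
```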

A quick plot of the distribution of our maximum temperature observations shows the data is normal-ish (at least more normal than precipitation), and nothing looks too far out of the ordinary. The maximum value may raise an eyebrow, but anyone who has had to survive a Midwestern summer will tell you it is totally possible for temperatures to reach 114℉ (45.6℃):

Prepare the Forecast Data

Our reforecast data from NOAA/ESRL comes in netCDF format. We can use the excellent xarray package to easily read in and manipulate the data:

30+ years of forecast data at the tips of our fingers!

We see that there are 11 ensemble members (ten perturbed, one control), and we have surface temperature forecasts over a 24-hour period seven days out. To make this easier to work with, we’ll have to convert parts of our xarray Dataset into a friendlier form. First, we need to convert the longitudes (which run from 0 to 360) into the convention that matches the station data (-180 to 180), which will allow us to select only the grid points nearest to each station:
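The notebook code is omitted here, but the longitude shift can be sketched with plain NumPy (the grid values below are illustrative, not the actual GEFS grid):

```python
import numpy as np

def to_180(lon):
    """Map longitudes from the 0-360 convention onto -180-180."""
    return ((np.asarray(lon) + 180.0) % 360.0) - 180.0

# Illustrative 1-degree grid longitudes and a mid-Missouri station.
grid_lons = np.array([0.0, 90.0, 260.0, 267.0, 359.0])
station_lon = -92.3

shifted = to_180(grid_lons)  # [0., 90., -100., -93., -1.]
nearest = grid_lons[np.argmin(np.abs(shifted - station_lon))]
print(nearest)  # 267.0 (i.e., -93 in the station convention)
```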

We also need to convert the temperature data from Kelvin to Celsius, rename the long-winded temperature variable, and convert our initial time (the time at which the model was run) into a valid time (the time at which the forecast applies):
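As a hedged sketch of those two conversions outside of xarray (the initialization time below is made up):

```python
import datetime

def kelvin_to_celsius(t_k):
    """GEFS temperatures come in Kelvin; the stations report Celsius."""
    return t_k - 273.15

# Valid time = initialization time plus the forecast lead (7 days here).
init_time = datetime.datetime(2016, 7, 1, 0, 0)  # hypothetical model run
valid_time = init_time + datetime.timedelta(days=7)

print(round(kelvin_to_celsius(300.0), 2))  # 26.85
print(valid_time)                          # 2016-07-08 00:00:00
```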

We’re also only interested in the maximum temperature over that 24-hour period, so we can reduce the data set by taking the maximum over the forecast hours. Finally, we can create some features from the ensemble forecast data, including an ensemble mean, ensemble standard deviation, the fraction of ensemble members that are above 85℉ (29.44℃), and the time of the year at which we’re forecasting:
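A minimal NumPy sketch of those derived ensemble features (the member temperatures are invented):

```python
import numpy as np

THRESH_C = 29.44  # 85 F expressed in Celsius

# Invented 24-h maximum temperatures (deg C) for the 11 ensemble members.
members = np.array([28.1, 29.9, 30.4, 27.5, 31.0, 29.5,
                    28.8, 30.1, 29.0, 30.7, 28.3])

ens_mean = members.mean()                 # central tendency of the ensemble
ens_std = members.std(ddof=1)             # spread across members
frac_above = np.mean(members > THRESH_C)  # fraction of members over 85 F
# The fractional day of year (valid_time.timetuple().tm_yday / 365.0)
# rounds out the feature set.
print(round(float(frac_above), 3))  # 0.545 (6 of 11 members)
```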

Putting It All Together

We are now ready to put all of the data together! We can do this within xarray by converting the pandas DataFrame into xarray DataArrays:
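The xarray route isn’t reproduced here, but an equivalent pairing can be sketched as a plain pandas merge on station and valid time (toy values throughout):

```python
import pandas as pd

# Toy forecast features and station observations; real values come from
# the GEFS reforecast and the BigQuery GHCN pull.
fcst = pd.DataFrame({
    "station_id": ["A", "A", "B"],
    "valid_time": pd.to_datetime(["2010-07-01", "2010-07-02", "2010-07-01"]),
    "ens_mean": [31.2, 29.0, 27.5],
})
obs = pd.DataFrame({
    "station_id": ["A", "B"],
    "ob_time": pd.to_datetime(["2010-07-01", "2010-07-01"]),
    "tmax": [32.8, 26.1],
})

# Keep only station-days where both a forecast and an observation exist.
merged = fcst.merge(obs, left_on=["station_id", "valid_time"],
                    right_on=["station_id", "ob_time"]).dropna()
print(len(merged))  # 2 matched forecast/observation pairs
```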

In what totally seems like a roundabout manner of doing things, we now convert this xarray Dataset back into a pandas DataFrame (there are better ways of doing this, of course, but this overly verbose method works as well) and drop any rows that have invalid data:

A quick sanity check (in the form of a joint hexbin plot) shows the ensemble mean matches up nicely with the observed data, which is pretty damn good for a maximum temperature forecast seven days out:

Because we are interested in whether the forecast is above 85℉ (29.44℃), we create two columns that represent whether the forecast is above or below our threshold:
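A small sketch of building those threshold columns (the temperatures and column names are illustrative):

```python
import pandas as pd

THRESH_C = 29.44  # 85 F

df = pd.DataFrame({"tmax": [25.0, 30.2, 29.44, 33.1]})
df["above_thresh"] = (df["tmax"] > THRESH_C).astype(int)
df["below_thresh"] = 1 - df["above_thresh"]
print(df["above_thresh"].tolist())  # [0, 1, 0, 1]
```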

Finally: The Modeling!

At last, we get to calibrate the data! The very first thing we want to do (because we are good scientists) is to split the data into separate training and testing sets. If we conveniently ignore that stations close to one another are almost certainly correlated (and thus not independent samples), as well as the day-to-day correlation of temperature forecasts/observations (mainly for simplicity, tbqh ¯\_(ツ)_/¯), we can simply split our data based on the year each observation/forecast pair falls within: we will train on data from 1985–2005, and test on data from 2006–2016:
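The year-based split can be sketched with simple boolean masks (toy rows standing in for the full station-day frame):

```python
import pandas as pd

# Toy forecast/observation pairs; the real frame holds every station-day.
df = pd.DataFrame({
    "valid_time": pd.to_datetime(["1985-06-01", "1999-07-04",
                                  "2006-03-15", "2016-09-30"]),
    "tmax": [30.0, 28.5, 21.0, 27.2],
})

years = df["valid_time"].dt.year
train = df[years <= 2005]  # 1985-2005
test = df[years >= 2006]   # 2006-2016
print(len(train), len(test))  # 2 2
```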

Now, we need to create a baseline forecast: we will use the fraction of uncalibrated ensemble members above our temperature threshold to test against our other methods. Let’s say that in our time as President-For-Life of this grocery chain, we’ve heard of accuracy and are planning to use it as our evaluation metric. Evaluating forecasts where a majority of ensemble members predict >=85℉ temperatures as a “yes” and everything else as a “no”, we’re presented with a surprise:

Wow, 81% accuracy! As it turns out, this is a very misleading statistic (mostly because accuracy is a terrible metric to use here, but we haven’t had much time in our grocery-centric world to look up its caveats), and it brings home an important point about interpreting results. It is easy to predict whether it’ll hit 85℉ in months where that rarely occurs. Measuring the accuracy subset by month tells a different story, with the uncalibrated forecasts performing amazingly well in March–May, and not so great otherwise:
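To see why raw accuracy flatters a forecast when the event is rare, consider a month where hot days are uncommon (all numbers below are invented for illustration):

```python
# A month where >=85 F days are rare (say, 5% of station-days).
obs = [0] * 95 + [1] * 5
pred = [0] * 100  # a skill-free "never hot" forecast

# High accuracy despite containing zero information about hot days.
accuracy = sum(p == o for p, o in zip(pred, obs)) / len(obs)
print(accuracy)  # 0.95
```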

Let’s see if we can improve upon this by focusing specifically on the months that perform poorly (June–September). We will first test a logistic regression method. Our features include the ensemble mean, ensemble standard deviation, fraction of ensemble members above our temperature threshold, and the fractional day of the year. While you’d want to tune hyperparameters in an actual machine learning project, for demonstration purposes we’ll do something simple:

from sklearn.linear_model import LogisticRegression as LR

lr_clf = LR(n_jobs=-1, C=0.01)
lr_clf.fit(month_subset_train[features].values,
           month_subset_train[target].values)

The results are a little better, but not by much:

What if we try Random Forests? These usually work well out of the box, and we’re looking at doing something basic anyway, so again for the sake of simplicity and demonstration, we’ll give it a whirl:

from sklearn.ensemble import RandomForestClassifier as RFC

rf_clf = RFC(n_estimators=1000, max_features='auto', max_depth=4,
             n_jobs=-1, verbose=0)
rf_clf.fit(month_subset_train[features].values,
           month_subset_train[target].values)

Also not much better, and in fact sometimes worse than logistic regression:

OK, so our accuracy may not be getting better, but are our probabilistic forecasts at least becoming more reliable? Looking at a reliability diagram (top panel), which shows how often an event occurs when a given probability of the event is forecast, we see that our calibrated methods are in fact more reliable than the uncalibrated ensemble forecast. We also see that the forecasts from our calibrated models aren’t as sharp as the uncalibrated ensemble forecasts (that is, the ensemble forecasts tend to give probabilities near 0% or 100%, whereas our ML methods do not):
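A reliability diagram boils down to binning forecast probabilities and comparing each bin’s mean probability to the observed event frequency. A sketch of that computation, using toy, perfectly reliable forecasts (not the actual model output):

```python
import numpy as np

def reliability_curve(probs, outcomes, n_bins=10):
    """Mean forecast probability vs. observed frequency in each bin."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    fcst_mean, obs_freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            fcst_mean.append(probs[mask].mean())
            obs_freq.append(outcomes[mask].mean())
    return fcst_mean, obs_freq

# Toy, perfectly reliable forecasts: events occur at the forecast rate.
probs = [0.1] * 10 + [0.9] * 10
outcomes = [1] + [0] * 9 + [1] * 9 + [0]
fm, of = reliability_curve(probs, outcomes)
# A reliable forecast keeps fm[i] close to of[i] in every bin.
```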

So, what’s wrong?

There are several issues that we may be running into:

1. We combined all of the weather stations into one large data set, but there are almost certainly different biases at individual stations (maybe due to, say, nearby topography). It may make more sense to fit models and predict at individual stations instead of pooling the data together, or to group stations with similar characteristics.

2. Accuracy is not the metric to use when evaluating these predictions. You’re better off using things like precision, recall, and F1 scores.

3. The GEFS reforecast data is quite coarse, at 1.0-degree horizontal resolution, so it’s possible we are using only a handful of grid points to correct for all of the stations. There are ways to get the data on a finer Gaussian spatial grid, so that may be worth looking into.

4. We didn’t really optimize our hyperparameters, though I suspect this would not have given us much of a benefit.

5. Forecasting the weather is really tough, especially seven days out! We didn’t even get into more difficult problems like forecasting precipitation.
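To make the point about metrics concrete, here is a small sketch of precision, recall, and F1 computed from confusion counts; note how a skill-free “always no” forecast, which can look very accurate in a rare-event month, collapses to zero on all three (numbers invented):

```python
def precision_recall_f1(pred, obs):
    """Confusion-count metrics for a yes/no forecast."""
    tp = sum(p == 1 and o == 1 for p, o in zip(pred, obs))
    fp = sum(p == 1 and o == 0 for p, o in zip(pred, obs))
    fn = sum(p == 0 and o == 1 for p, o in zip(pred, obs))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Rare event (5% of days); the "always no" forecast earns zero everywhere.
obs = [0] * 95 + [1] * 5
print(precision_recall_f1([0] * 100, obs))  # (0.0, 0.0, 0.0)
```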

Sure, but what’s awesome?

So much! I had never used BigQuery, or much of Google’s cloud computing platform, before this, and it was awesome to play around with.

- BigQuery is legit. It had been a while since I dusted off my SQL chops, so getting to play around with Google’s SQL-like syntax and functions was a lot of fun. Plus, it runs really fast, much faster than a wget command or something similar against NCDC’s servers. I can only hope Google is able to put more weather data up on their cloud services.

- DataLab is also great! It took a bit of time to figure out (though I’ll chalk that up to my limited knowledge of Docker containers), and it’s in beta at the moment so there are some quirks to work out, but I love it! The fact that it can hook up to an instance on Google’s Compute Engine is a wonderful plus.

- Spinning up virtual instances is super easy! While I never had issues with AWS, I found spinning up and configuring VMs a bit easier on GCP.

- There’s plenty of fun stuff one can do with GHCN data! I’m looking forward to playing with the data a lot more, and possibly testing out new ideas combining weather with machine learning.

Part two of this post will address these issues in our quest to have well-calibrated temperature forecasts. Stay tuned!