Tuesday, January 28, 2020

BlueSkiesResearch.org.uk:What can we learn about climate sensitivity from interannual variability?

Another new manuscript of ours out for review, this time on ESDD. The topic is as the title suggests. This work grew out of our trips to Hamburg and later Stockholm, though it wasn’t really the original purpose of our collaboration. However, we were already working with a simple climate model and the 20th century temperature record when the Cox et al paper appeared (previous blogs here, here, here), so it seemed like an interesting and relevant diversion. Though the Cox et al paper was concerned with emergent constraints, this new manuscript doesn’t really have any connection to the one I blogged about earlier, though it is partly for the reasons explained in that post that I have presented the plots with S on the x-axis.

A fundamental point about emergent constraints, which I believe is basically agreed upon by everyone, is that it’s not enough to demonstrate a correlation between something you can measure and something you want to predict; you also have to present a reasonable argument for why you expect this relationship to exist. With 10^6 variables to choose from in your GCM output (and an unlimited range of functions/combinations thereof) it is inevitable that correlations will exist, even in totally random data. So we can only reasonably claim that a relationship has predictive value if it has a theoretical foundation.

The use of variability (we are talking about the year-to-year variation in global mean temperature here, after any trend has been removed) to predict sensitivity has a rather chequered history. Steve Schwartz tried and failed to do this, perhaps the clearest demonstration of this failure being that the relationship he postulated to exist for the climate system (founded on a very simple energy balance argument) did not work for the climate models. Cox et al sidestepped this pitfall by the simple and direct technique of presenting a relationship which had been directly derived from the ensemble of CMIP models, so by construction it worked for these. They also gave a reasonable-looking theoretical backing for the relationship, based on an analysis of a very simple energy balance model. So on the face of it, it looked reasonable enough. Plenty of people had their doubts though, as I’ve documented in the links above.

Rather than explore the emergent constraint aspect in more detail, we chose to approach the problem from a more fundamental perspective: what can we actually hope to learn from variability? We used the paradigm of idealised “perfect model” experiments, which enables us to generate very clear limits to our learning. The model we used is more-or-less the standard two layer energy balance of Winton, Held etc that has been widely adopted, but with a random noise term (after Hasselmann) added to the upper layer to simulate internal variability:
[Figure: equations of the stochastic two-layer energy balance model]
The single-layer model that Cox et al used in their theoretical analysis is also recovered when the ocean mixing parameter γ is set to zero. So the basic question we are addressing is: how accurately can we diagnose the sensitivity of this energy balance model from an analysis of the variability of its output?
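For concreteness, here is a rough Python sketch of the sort of model we are talking about: the two-layer energy balance with a stochastic (Hasselmann) term on the upper layer. The parameter values are purely illustrative placeholders, not the ones used in the paper.

```python
# Minimal sketch of the stochastic two-layer energy balance model (Winton/Held
# style, with Hasselmann noise added to the upper layer). All parameter values
# are illustrative assumptions rather than the values used in the paper.
import numpy as np

def simulate_ebm(S=3.0, years=150, F=None, F2x=3.7, C=8.0, C_D=100.0,
                 gamma=0.7, sigma_noise=0.5, seed=None):
    """Euler-integrate the two-layer EBM with annual steps and return the
    upper-layer (surface) temperature anomaly.

    S           : equilibrium climate sensitivity (K); sets lambda = F2x / S
    F           : external forcing series (W/m^2); None means unforced
    gamma       : upper/deep ocean heat exchange coefficient (W/m^2/K);
                  gamma = 0 recovers the single-layer model
    C, C_D      : upper and deep layer heat capacities (W yr/m^2/K)
    sigma_noise : standard deviation of the stochastic forcing term (W/m^2)
    """
    rng = np.random.default_rng(seed)
    lam = F2x / S                        # feedback parameter (W/m^2/K)
    if F is None:
        F = np.zeros(years)              # unforced control run
    T = np.zeros(years)                  # upper-layer (surface) temperature
    T_D = np.zeros(years)                # deep-ocean temperature
    for t in range(1, years):
        xi = rng.normal(0.0, sigma_noise)    # internal variability term
        T[t] = T[t-1] + (F[t-1] - lam*T[t-1]
                         - gamma*(T[t-1] - T_D[t-1]) + xi) / C
        T_D[t] = T_D[t-1] + gamma * (T[t-1] - T_D[t-1]) / C_D
    return T

# e.g. one 150-year unforced realisation with S = 3K
T = simulate_ebm(S=3.0, seed=0)
```

Setting gamma=0 recovers the single-layer Cox et al case, and passing a forcing series F gives the transient 20th century experiments discussed further down.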

Firstly, we can explore the relationship (in this model) between sensitivity S and the function of variability which Cox et al called ψ.
[Figure: ψ versus sensitivity S for the single-layer (grey) and two-layer (black) energy balance models]
Focussing first on the fat grey dots: these represent the expected value of ψ from an unforced (ie, due entirely to internal variability) simulation of the single-layer energy balance model that Cox et al used as the theoretical foundation for their analysis. And just as they claimed, these points lie on a straight line. So far so good.
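For reference, ψ here is the Cox et al metric combining the standard deviation and lag-1 autocorrelation of the detrended annual temperatures. A minimal version looks like the sketch below; the whole-series linear detrending is a simplification of the windowed procedure that Cox et al actually used (more on that later).

```python
# Sketch of the Cox et al psi metric: psi = sigma_T / sqrt(-ln(rho_1)), where
# sigma_T and rho_1 are the standard deviation and lag-1 autocorrelation of
# the detrended annual temperature anomalies. Assumes rho_1 > 0, which holds
# for the models considered here.
import numpy as np

def psi(T):
    """Compute psi from a 1-D array of annual temperature anomalies."""
    years = np.arange(len(T))
    detrended = T - np.polyval(np.polyfit(years, T, 1), years)  # remove linear trend
    sigma = np.std(detrended)
    rho1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]     # lag-1 autocorrelation
    return sigma / np.sqrt(-np.log(rho1))
```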

But…

It is well known that the single-layer model does a pretty shabby job of representing GCM behaviour during the transient warming over the 20th century, and the two-layer version of the energy balance model gives vastly superior results for only a small increase in complexity. (This is partly why the Schwartz approach failed.) Repeating our analysis with the two-layer version of the model, we get the black dots, where the relationship is clearly nonlinear. This model was in fact considered by the Cox group in a follow-up paper (Williamson et al), in which they argued that it still displayed a near-linear relationship between S and ψ over the range of interest spanned by GCMs. That’s true enough, as the red line overlying the plot shows (I fitted that by hand to the 4 points in the 2-5C range), but there’s also a clear divergence from this relationship for larger values of S.

And moreover…

The vertical lines through each dot are error bars. These are the ±2 standard deviation ranges of the values of ψ that were obtained from a large sample of simulations, each simulation being 150 years long (a generous estimate of the observational time series we have available to deal with). It is very noticeable that the error bars grow substantially with S. This, together with the curvature in the S-ψ relationship, means that it is quite easy for a model with a very high sensitivity to generate a time series that has a moderate ψ value. The obvious consequence is that if you see a time series with a moderate ψ value, you can’t be sure the model that generated it did not have a high sensitivity.
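You can reproduce the flavour of this with a quick Monte Carlo, reusing the simulate_ebm and psi functions sketched above (sample sizes and parameter values remain illustrative):

```python
# Rough Monte Carlo sketch of how the spread of psi grows with S in the
# two-layer model: many 150-year unforced replicates per value of S.
import numpy as np

def psi_spread(S, n_rep=1000, years=150):
    vals = np.array([psi(simulate_ebm(S=S, years=years, seed=k))
                     for k in range(n_rep)])
    return vals.mean(), 2.0 * vals.std()      # mean and +/- 2 sigma half-width

for S in (1.0, 2.5, 5.0, 10.0):
    m, w = psi_spread(S)
    print(f"S = {S:4.1f} K : psi = {m:.3f} +/- {w:.3f}")
```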

We can use calculations of this type to generate the likelihood function p(ψ|S), which can be thought of as a horizontal slice through the above graph at a fixed value of ψ, and turn the handle of the Bayesian engine to generate posterior pdfs for sensitivity, based on having observed a given value of ψ. This is what the next plot shows, where the different colours of the solid lines refer to calculations which assumed observed values for ψ of 0.05, 0.1, 0.15 and 0.2 respectively.
[Figure: posterior pdfs for sensitivity S given observed values of ψ]
These values correspond to the expected value of ψ you get with a sensitivity of around 1, 2.5, 5 and 10C respectively. So you can see from the cyan line that even if you observe a value of 0.1 for ψ, which corresponds to a best-estimate sensitivity of 2.5C in this experiment, you still can’t be very confident that the true value wasn’t rather a lot higher. It is only when you get a really small value of ψ that the sensitivity is tightly constrained (to be close to 1 in the case ψ=0.05 shown by the solid dark blue line).

The 4 solid lines correspond to the case where only S is uncertain and all other model parameters are precisely known. In the more realistic case where other model parameters such as ocean heat uptake are also somewhat uncertain, the solid blue line turns into the dotted line, and then even the low-sensitivity case has significant uncertainty on the high side.

It is also very noticeable that these posterior pdfs are strongly skewed, with a longer right hand tail than left hand (apart from the artificial truncation at 10C). This could be directly predicted from the first plot where the large increase in uncertainty and flattening of the S-ψ relationship means that ψ has much less discriminatory power at high values of S. Incidentally, the prior used for S in all these experiments was uniform, which means that the likelihood is the same shape as the plotted curves and thus we can see that the likelihood is itself skewed, meaning that this is an intrinsic property of the underlying model, rather than an artefact of some funny Bayesian sleight-of-hand. The ordinary least squares approach of a standard emergent constraint analysis doesn’t acknowledge or account for this skew correctly and instead can only generate a symmetric bell curve.
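Here is a sketch of the Bayesian step, again reusing the functions above: build an empirical likelihood p(ψ|S) by Monte Carlo over a grid of S values, approximate it as Gaussian in ψ at each S, and combine it with a uniform prior. The grid, sample sizes and Gaussian approximation are simplifying assumptions for illustration, not necessarily exactly what is done in the paper.

```python
# Sketch of a posterior pdf for S given an observed value of psi, with a
# Monte Carlo estimate of the likelihood p(psi | S) and a uniform prior on S.
import numpy as np
from scipy.stats import norm

def posterior_for_S(psi_obs, S_grid=np.linspace(1.0, 10.0, 37),
                    n_rep=500, years=150):
    like = np.empty_like(S_grid)
    for i, S in enumerate(S_grid):
        samples = np.array([psi(simulate_ebm(S=S, years=years, seed=k))
                            for k in range(n_rep)])
        # approximate p(psi | S) as Gaussian in psi at this value of S
        like[i] = norm.pdf(psi_obs, loc=samples.mean(), scale=samples.std())
    post = like.copy()            # uniform prior: posterior proportional to likelihood
    post /= post.sum() * (S_grid[1] - S_grid[0])   # normalise on the grid
    return S_grid, post

# e.g. the psi = 0.1 case (the "cyan line" above)
S_grid, post = posterior_for_S(psi_obs=0.1)
```

The skew towards high S falls straight out of the shape of this likelihood, as discussed above.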

One thing that had been nagging away at me was the fact that we actually have a full time series of annual temperatures to play with, and there might be a better way of analysing them than just calculating the ψ statistic. So we also did some calculations which used the exact likelihood of the full time series, p({Ti}|S), where {Ti}, i = 1…n is the entire time series of temperature anomalies. I think this is a modest novelty of our paper; no-one else that I know of has done this calculation before, at least not quite in this experimental setting. The experiments below assume that we have perfect observations with no uncertainty, over a period of 150 years with no external forcing. Each simulation with the model generates a different sequence of internal variability, so we plotted the results from 20 replicates of each sensitivity value tested. The colours are as before, representing S = 1, 2.5 and 5C respectively. These results give an exact answer to the question of what it is possible to learn from the full time series of annual temperatures in the case of no external forcing.
[Figure: posterior pdfs for S from the exact likelihood of the full 150-year unforced time series, 20 replicates per sensitivity value]
So depending on the true value of S, you could occasionally get a reasonably tight constraint, if you are lucky, but unless S is rather low, this isn’t likely. These calculations again ignore all other uncertainties apart from S and assume we have a perfect model, which some might think just a touch on the optimistic side…
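To give an idea of what an exact full-time-series likelihood looks like, here is a sketch for the single-layer (γ=0) case only, where the annually discretised model is just an AR(1) process; the two-layer case is a linear-Gaussian state-space model and needs something like a Kalman filter instead. Parameter values are the same illustrative placeholders as before, and the likelihood is conditioned on the first observation for simplicity.

```python
# Sketch of the exact likelihood p({T_i} | S) for the single-layer model,
# where the discretised dynamics are AR(1): T_t = a*T_{t-1} + e_t with
# a = 1 - lambda/C and e_t ~ N(0, (sigma_noise/C)^2).
import numpy as np

def log_likelihood_ar1(T, S, F2x=3.7, C=8.0, sigma_noise=0.5):
    """Gaussian log-likelihood of the full series, conditioned on T[0]."""
    lam = F2x / S
    a = 1.0 - lam / C                    # AR(1) coefficient
    se = sigma_noise / C                 # innovation standard deviation
    resid = T[1:] - a * T[:-1]           # one-step prediction errors
    return np.sum(-0.5 * (np.log(2 * np.pi * se**2) + resid**2 / se**2))

# profile the likelihood over S for one simulated single-layer series
T = simulate_ebm(S=2.5, gamma=0.0, seed=1)
S_grid = np.linspace(0.5, 10.0, 96)
loglik = np.array([log_likelihood_ar1(T, S) for S in S_grid])
```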

So much for internal variability. We don’t have a period of time in the historical record in which there was no external forcing anyway, so maybe that was a bit academic. In fact some of the comments on the Cox paper argued (and Cox et al acknowledged in their reply) that the forced response might be affecting their calculation of ψ, so we also considered transient simulations of the 20th century and implemented the windowed detrending method that they had (originally) argued removed the majority of the forced response. The S-ψ relationship in that case becomes:
[Figure: ψ versus S for forced 20th century simulations, with CMIP5 model results shown as crosses]
where this time the grey and black dots and bars relate not to the one- and two-layer models, but to whether S alone is uncertain or whether other parameters besides S are also considered uncertain. The crosses are results from a bunch of CMIP5 models that I had lying around, not precisely the same set that Cox et al used but significantly overlapping with them. Rather than just using one simulation per model, this plot includes all the ensemble members I had, roughly 90 model runs in total from about 25 models. There appears to be a vague compatibility between the GCM results and the simple energy balance model, but the GCMs don’t show the same flattening off or wide spread at high sensitivity values. Incidentally, the set of GCM results plotted here don’t fit a straight line anywhere nearly as closely as the set Cox et al used. It’s not at all obvious to me why this is the case, and I suspect they just got lucky with the particular set of models they had, combined with the specific choices they made in their analysis.
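For the forced case, the ψ calculation is done in overlapping windows, each detrended separately, with the window values then averaged. A minimal version, reusing the psi function above, might look like the following; the 55-year window is the sort of length used by Cox et al, but treat it and the one-year step as illustrative assumptions.

```python
# Sketch of a windowed estimate of psi for a forced (eg 20th century) run:
# compute psi in overlapping windows, detrending each window separately
# (psi() already removes a linear trend from whatever it is given), and
# average over the windows.
import numpy as np

def psi_windowed(T, window=55, step=1):
    vals = [psi(T[start:start + window])
            for start in range(0, len(T) - window + 1, step)]
    return np.mean(vals)
```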

So it’s no surprise that we get very similar results when looking at detrended variability arising from the forced 20th century simulations. I won’t bore you with more pictures as this post is already rather long. The same general principles apply.

The conclusion is that the theory that Cox et al used to justify their emergent constraint analysis actually refutes their use of a linear fit using ordinary least squares, because the relationship between S and ψ is significantly nonlinear and heteroscedastic (meaning the uncertainties are not constant but vary strongly with S). The upshot is that any constraint generated from ψ – or, more generally, any constraint derived from internal or forced variability – is necessarily going to be skewed with a tail to high values of S. However, variability does still have the potential to be somewhat informative about S and shouldn’t be ignored completely, which many analyses based on the long-term trend automatically do.

Friday, January 24, 2020

BlueSkiesResearch.org.uk: How to do emergent constraints properly

Way back in the mists of time, we did a little bit of work on "emergent constraints". This is a slightly hackneyed term referring to the use of a correlation across an ensemble of models between something we can’t measure but want to estimate (like the equilibrium climate sensitivity S) and something that we can measure like, say, the temperature change T that took place at the Last Glacial Maximum….

Actually our early work on this sort of stuff dates back 15 years, but it was a bit more recently, in 2012, when we published this result
[Figure: equilibrium sensitivity S versus LGM temperature change across the model ensemble, with linear regression fit (from our 2012 paper)]

in the paper blogged about here that we started to think about it a little more carefully. It is easy to plot S against T and do a linear regression, but what does it really mean and how should the uncertainties be handled? Should we regress S on T or T on S? [I hate the arcane terminology of linear regression; the point is whether S is used to predict T (with some uncertainty) or T is used to predict S (with a different uncertainty).] We settled for the conventional approach in the above picture, but it wasn’t entirely clear that this was best.

And is this regression-based approach better or worse than, or even much the same as, using a more conventional and well-established Bayesian Model Averaging/Weighting approach anyway? We raised these questions in the 2012 paper and I’d always intended to think about it more carefully but the opportunity never really arose until our trip to Stockholm where we met a very bright PhD student who was interested in paleoclimate stuff and shortly afterwards attended this workshop (jules helped to organise this: I don’t think I ever got round to blogging it for some reason). With the new PMIP4/CMIP6 model simulations being performed, it seemed a good time to revisit any past-future relationships and this prompted us to reconsider the underlying theory which has until now remained largely absent from the literature.

So, what is our big new idea? Well, we approached it from the principles of Bayesian updating. If you want to generate an estimate of S that is informed by the (paleoclimate) observation of T, which we write as p(S|T), then we use Bayes Theorem to say that
p(S|T) ∝ p(T|S)p(S).
Note that when using this paradigm, the way for the observations T to enter in to the calculation is via the likelihood p(T|S) which is a function that takes S as an input, and predicts the resulting T (probabilistically). Therefore, if you want to use some emergent constraint quasi-linear relationship between T and S as the basis for the estimation then it only really makes sense to use S as the predictor and T as the predictand. This is the opposite way round to how emergent constraints have generally (always?) been implemented in practice, including in our previous work.

So, in order to proceed, we need to create a likelihood p(T|S) out of our ensemble of climate models (ie, (T,S) pairs). Bayesian linear regression (BLR) is the obvious answer here – like ordinary linear regression, except with priors over the coefficients. I must admit I didn’t actually know this was a standard thing that people did until I’d convinced myself that this must be what we had to do, but there is even a wikipedia page about it.
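Here is a minimal numerical sketch of the whole pipeline, under strong simplifying assumptions: a Bayesian linear regression of T on S across the ensemble, with Gaussian priors on the coefficients and a residual standard deviation that is assumed known, whose posterior predictive distribution then serves as the likelihood p(T|S) for updating a prior on S. The ensemble values, priors and observation below are made-up placeholders, not numbers from the paper.

```python
# Sketch of the Bayesian linear regression direction argued for here: regress
# the observable T on S across the model ensemble, use the posterior
# predictive p(T | S) as the likelihood, and update a prior on S on a grid.
# All numbers (ensemble pairs, priors, sigmas, observation) are placeholders.
import numpy as np
from scipy.stats import norm

# (S, T) pairs from the model ensemble -- placeholder values
S_ens = np.array([2.1, 2.7, 3.0, 3.4, 4.1, 4.7])
T_ens = np.array([-3.5, -4.2, -4.4, -5.0, -5.8, -6.5])   # e.g. LGM cooling (K)

sigma = 0.6                                          # assumed residual std dev (K)
m0 = np.zeros(2)                                     # prior mean on (intercept, slope)
V0 = np.diag([10.0**2, 5.0**2])                      # vague Gaussian prior covariance

X = np.column_stack([np.ones_like(S_ens), S_ens])    # design matrix for T = a + b*S
Vn = np.linalg.inv(np.linalg.inv(V0) + X.T @ X / sigma**2)   # posterior covariance
mn = Vn @ (np.linalg.inv(V0) @ m0 + X.T @ T_ens / sigma**2)  # posterior mean

def likelihood_T_given_S(T_obs, S, sigma_obs=0.7):
    """Posterior predictive density of T at sensitivity S, evaluated at T_obs."""
    x = np.array([1.0, S])
    mean = x @ mn
    var = sigma**2 + sigma_obs**2 + x @ Vn @ x       # regression + obs uncertainty
    return norm.pdf(T_obs, loc=mean, scale=np.sqrt(var))

# update a prior on S with an observed T (placeholder numbers)
S_grid = np.linspace(0.5, 8.0, 151)
prior = norm.pdf(S_grid, loc=3.0, scale=1.5)         # example prior on S
post = prior * np.array([likelihood_T_given_S(-5.0, S) for S in S_grid])
post /= post.sum() * (S_grid[1] - S_grid[0])         # normalise on the grid
```

The key point is simply the direction: S is the predictor and T the predictand, so the regression slots straight into Bayes Theorem as a likelihood.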

This therefore is the main novelty of our research: presenting a way of embedding these empirical quasi-linear relationships described as "emergent constraints" in a standard Bayesian framework, with the associated implication that it should be done the other way round.

Given the framework, it’s pretty much plain sailing from there. We have to choose priors on the regression coefficients – this is a strength rather than a weakness in my view, as it forces us to consider explicitly whether we believe the relationship to be physically sound, and to argue for its form. Of course it’s easy to test the sensitivity of results to these prior assumptions. The BLR is easy enough to do numerically, even without using the analytical results that can be generated for particular forms of priors. And here’s one of the results in the paper. Note that the unlabelled x-axis is sensitivity in both of these plots, in contrast to being the y-axis in the one above.
[Figure: posterior estimates of S from the Bayesian linear regression approach (sensitivity on the x-axis)]
While we were doing this work, it turned out that others had also been thinking about the underlying foundations of emergent constraints, and two other highly relevant papers were published very recently. Bowman et al introduces a new framework which seems to be equivalent to a Kalman Filter. In the limit of a large ensemble with a Gaussian distribution, I think this is also equivalent to a Bayesian weighting scheme. One aspect of this that I don’t particularly like is the implication that the model distribution is used as the prior. Other than that, I think it’s a neat idea that probably improves on the Bayesian weighting (eg that we did in the 2012 paper) in the typical case we have, where the ensemble is small and sparse. Fitting a Gaussian is likely to be more robust than using a weighted sum of a small number of samples. But it does mean you start off from the assumption that the model ensemble spread is a good estimator for S, which is therefore considered unlikely to lie outside this range, whereas regression allows us to extrapolate in the case where the observation is at or outside the ensemble range.

The other paper by Williamson and Sansom presented a BLR approach which is in many ways rather similar to ours (more statistically sophisticated in several aspects). However, they fitted this machinery around the conventional regression direction. This means that their underlying prior was defined on the observation with S just being an implied consequence. This works ok if you only want to use reference priors (uniform on both T and S) but I’m not sure how it would work if you already had a prior estimate of S and wanted to update that. Our paper in fact shows directly the effect of using both LGM and Pliocene simulations to sequentially update the sensitivity.

The limited number of new PMIP4/CMIP6 simulations means that our results are substantially based on older models, and the results aren’t particularly exciting at this stage. There’s a chance of adding one or two more dots on the plots as the simulations are completed, perhaps during the review process depending how rapidly it proceeds. With climate scientists scrambling to meet the IPCC submission deadline of 31 Dec, there is now a huge glut of papers needing reviewers…

BlueSkiesResearch.org.uk: Catastrophic tipping points of no return, returned!

Science has been done and papers written! Deadlines have been met! And not by the Govt’s imaginative strategy of declaring that the deadlines no longer exist, as they are doing with NHS waiting times. Though that did seem like an appealing strategy at some points. I will blog about some of it over the next week or two, which may also help marshal our thoughts for a few talks on our work that jules and I are going to give in the next couple of months.

But before I get on with that…just when you thought it was safe to get back on the see-saw…

Someone (ok it was ATTP) recently asked for a copy of my tipping points essay, so having found a version in my increasingly chaotic disk space I thought I might as well put it up here. It’s the final submitted version; there were a few edits in proofs but nothing significant. I don’t have an electronic version of the final publication but the physical book I have in my possession looks very smart (I haven’t had time to read it). If you want Michel’s contrary essay then you’ll have to ask him for it, or else just buy the book which is linked in this previous post. I think we were largely talking past each other, as he preferred to focus on mathematical details whereas I was aiming more towards the original concept of hothouse catastrophising. At least that’s how I see it. I’m sure there are lots more interesting essays in the book (and you can read that however you prefer).
Anyway, there you are. Brickbats and bouquets welcome.

Saturday, January 18, 2020

Will the real Chancellor please stand up?


Britain is better off in. And that’s all because of the Single Market.
It’s a great invention, one that even Lady Thatcher campaigned enthusiastically to create.   
The world’s largest economic bloc, it gives every business in Britain access to 500 million customers with no barriers, no tariffs and no local legislation to worry about. It’s no surprise that nearly half of our exports go to other EU nations, exports that are linked to three million jobs here in the UK. 
And as an EU member we also have preferential access to more than 50 other international markets from Mexico to Montenegro, helping us to export £50 billion of goods and services to them every year. 
Even the most conservative estimates say it could take years to secure agreements with the EU and other countries. 
Having spent six years fighting to get British businesses back on their feet after Labour’s record-breaking recession, I’m not about to vote for a decade of stagnation and doubt.



The chancellor has warned manufacturers that "there will not be alignment" with the EU after Brexit and insists firms must "adjust" to new regulations. Mr Javid declined to specify which EU rules he wanted to drop. 
Speaking to the Financial Times, Sajid Javid admitted not all businesses would benefit from Brexit. "We're also talking about companies that have known since 2016 that we are leaving the EU. Admittedly, they didn't know the exact terms."

I'm old enough to remember a time when the Govt promised us the “exact same benefits” as membership of the single market. Good to know that all those Brexit voters knew exactly what they were voting for. Shame they still haven't managed to share their vision with the rest of us.



Friday, January 10, 2020

Maths homework

For those who struggle with arithmetic, £130Bn is more than 14 times the annual contribution of £9Bn that the UK currently makes to the EU. The end-of-year £200Bn estimate is more than 22 times larger, and exceeds the totality of our contributions over the entire 47 years of our membership. It seems a hefty price to pay for a blue passport and a new 50p piece.

Of course the long-term damage is far greater than can be measured in purely economic terms. Students and the young in particular will be thrilled that the Govt has recently refused to commit to participating in the wildly popular and effective Erasmus exchange program. The rest of the EU members and associates will no doubt be devastated that they will only have 30 countries to choose from rather than 31.

In unrelated news, the racists who told Meghan Markle to f off back where she came from, are apparently upset that she has decided to do just that. Shrug. Some people just love to hate, I guess. The story even got a mention in the Guardian which has obviously gone down-market.

Thursday, January 02, 2020

Review of the blogyear?

Nope, can't be bothered. There's only a handful of posts for 2019; you can read them from the sidebar. Or just scroll down the page. I will be posting about real science quite soon though, once we have recovered from the insane 31 Dec IPCC deadline. Who thought that was a good idea? Bad enough to have that for our own paper, but then along came another couple that we were co-authors on, which required commenting and editing, and a project proposal for which there was really no reason at all for the same deadline to be picked, but back when we were first talking about it, it didn't seem to matter...

Anyway, all 4 things got done in time. Phew. Watch this space for further news.

Wednesday, January 01, 2020

Just when you thought you'd heard the last about brexit

Not that any of my readers would be deluded enough to think that this is going to go away any time soon, just because Johnson wants to pretend it will.

Brexit, in some form or other, is going to start actually happening at the end of the month. I'd call it a grotesque act of self-harm but of course a lot of those who are going to suffer will be those who have opposed it at every turn. In fact it is not so much self-harm as intergenerational conflict. 


Here is the horrific split of voting preference versus age in the 2019 general election:

[Chart: vote share by age group, 2019 general election]

Now I know what you're thinking: old people have always tended to vote tory. That's true to a small extent, but nothing like what has happened recently. Here is the longer set of results from 1992 onwards:

[Chart: vote share by age group, general elections from 1992 onwards]

The blue lines do trend modestly up to the right, and the red ones down, but it was only after the brexit referendum that age became such a sharp dividing line, with roughly a 3:1 split of old voters voting one way and a 3:1 split of the young voting the other in the recent election.

What we have now is nothing short of an ideological war being waged on the young by the old. Having had all the opportunities and benefits of EU membership for most of their adult lives, they are denying this to their own children and grandchildren based on their anti-EU obsession. This isn't just an accidental consequence of not thinking things through: when specifically asked about the possibility of family members losing jobs due to brexit, a majority of brexit-voting pensioners actually said they didn't care, they wanted their precious brexit anyway.

And let's not forget that there are about 2.5 million voters who have never been allowed to vote on brexit because they were under 18 at the time. Well over a million brexit voters are already dead (that's just simple demographics), and yet their views are literally held in higher esteem than those of real live people who are going to suffer the consequences for decades to come. Even if not a single person had changed their mind (and I'd agree not many have), we would have had a majority for remain for the past year. That pretty much agrees with all the opinion polls for the past year and more, not to mention the election result itself, where the tories (and brexit party) totalled about 45% of the vote versus the 55% from parties who wanted at least another referendum on the details, if not outright opposition. Nevertheless, that's the way our system "works", and the tories have the power to do whatever they want for the foreseeable future. They, and those who voted for them, own the consequences in their entirety, especially after they've spent the past few years yelling that they know exactly what they voted for. A bit odd that they never managed to agree what exactly that was (beyond a few trivial slogans), but never mind. I'll not bother predicting because there is not yet any clear picture of what they want to achieve.

Remember the heady days of 2016 when the brexiters told us that we held all the cards and our negotiation with the EU would be the easiest trade deal in history? Now there is no more talk of sunlit uplands; brexit is at best presented as a tedious, costly and difficult task we need to try to get through before the tories can start to undo the damage caused by whoever it was that happened to be pretending to govern the country over the past decade (don't anyone tell Johnson who was in the cabinet over the past few years....). In fact it's such a good idea that the govt is banning any mention of it from February, even though the fun will barely have started at that point.