Monday, January 28, 2019

BlueSkiesResearch.org.uk: Costs of delaying action on climate change

This post was prompted by a silly twitter argument, about which probably the least said the better. Someone who has set themselves up as some sort of “climate communicator” had asserted that if we don’t halve our emissions in 12 years then the world as we know it will end. Moreover, anyone who even thought this assertion was controversial was, in their eyes, a denier. Well, I thought it was not so much controversial as simply false. But I did wonder, what is the actual effect of delaying decarbonisation of the global economy? In the sense of, let’s hypothesise that we actually can take policy action that decreases carbon emissions, what difference does it make when we start?

I’m sure people must have done (and published) these sorts of calcs but to be honest I don’t recall seeing them. Most of the research I’ve seen seems to be more along the lines of: if we delay action then how much more stringent will it have to be, in order to meet a particular target? This pic below shows that sort of thing:
[Figure: emission reduction pathways of increasing stringency required to meet a particular temperature target, depending on when reductions start]
I don’t think this sort of thing is really all that helpful as it gives no clue as to how realistic any of the pathways are. It seems that this sort of graph is basically motivated by a political assertion (“let’s not let warming exceed X degrees”) rather than any plausible understanding of the world we live in. I also don’t think it is very realistic to think that the world will design and implement carbon emissions policies that credibly aim at a particular max temperature change, at least not within my lifetime. So, here’s an alternative question that, although still rather simplistic, is (IMO) more directly relevant to the real world. Let’s assume we are able to decarbonise at some given rate. How much difference does it make how soon we start?

To answer this, we have to model (a) CO2 emissions and how they vary with policy delays (b) how atmospheric CO2 concentrations vary with emissions (c) how climate change depends on CO2 concentrations, and finally perhaps (d) the economic impacts of climate change.

For (a), I assume an exponential growth rate for historical and future emissions up to the initiation of decarbonisation, followed by an exponential decline. I use a historical (and near future) growth rate of 1.9% in these calcs. For decarbonisation, I use a rate of 2%, which would halve our emissions in 35 years. This is less than half the rate that would be required to halve emissions in 12 years as hypothesised earlier. Atmospheric CO2 concentrations are then derived from emissions via the equation of Myhrvold and Caldeira (2012). I could have used real historical emissions for the historical period of my simulation, but actually I get a marginally closer fit to historical CO2 concs when just using the exponential growth with my chosen rate. The three decarbonisation dates tested are 2020, 2030 and 2070, ie starting now(ish), or after a delay of 10 or 50 years respectively.
[Figure: atmospheric CO2 concentration for the three decarbonisation scenarios, with the start of decarbonisation marked on each]
Current CO2 concentration is about 410ppm, increasing by 2.5ppm per year. I didn’t bother distinguishing or labelling the three lines on each graph as it’s obvious which relates to which scenario. I have marked the date at which decarbonisation starts, so you can see how the concentration increases for quite a while after we start to cut emissions.
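For concreteness, here is a minimal sketch of the scenario generation just described. I have not reproduced the actual Myhrvold and Caldeira (2012) relation here; the concentration step below is a generic stand-in (a fixed airborne fraction of each year's emissions decaying exponentially), and the starting emissions value, airborne fraction and decay timescale are all illustrative assumptions rather than the numbers behind my figures.

```python
import numpy as np

def emissions(years, start_decarb, e0=36.0, growth=0.019, decline=0.02, ref_year=2015):
    """Exponential growth at `growth`/yr up to start_decarb, then exponential
    decline at `decline`/yr (2%/yr halves emissions in ~35 years, since
    0.98**35 ~ 0.5). e0 is emissions in GtCO2/yr at ref_year -- an illustrative
    value, not necessarily the one used for the figures."""
    e = e0 * (1 + growth) ** (years - ref_year)
    peak = e0 * (1 + growth) ** (start_decarb - ref_year)
    late = years > start_decarb
    e[late] = peak * (1 - decline) ** (years[late] - start_decarb)
    return e

def concentration(years, emis, c0=280.0, airborne=0.45, tau=100.0):
    """Crude stand-in for the emissions-to-concentration step: a fixed airborne
    fraction of each year's emissions, decaying exponentially with timescale tau.
    Conversion: roughly 7.8 GtCO2 per ppm."""
    ppm_per_gtco2 = 1.0 / 7.8
    conc = np.full(len(years), c0)
    for i, y in enumerate(years):
        ages = y - years[:i + 1]
        conc[i] += airborne * ppm_per_gtco2 * np.sum(emis[:i + 1] * np.exp(-ages / tau))
    return conc

years = np.arange(1850, 2351)
for start in (2020, 2030, 2070):
    co2 = concentration(years, emissions(years, start))
    print(start, "peak CO2 ~ %.0f ppm" % co2.max())
```

The qualitative behaviour seen in the figure above, with concentrations continuing to rise for some time after emissions peak, drops straight out of this sort of calculation.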

The resulting climate change is modelled by the widely-used two-layer model of Winton, Takahashi and Held (2010) discussed in several papers by Held, Winton and others (2010 ish). Parameter values can be changed in this model, but the only one that really matters here is the equilibrium climate sensitivity (ECS) to a doubling of CO2. For non-CO2 forcings (aerosols, volcanoes, methane etc) I use historical estimates for the historical era and just hold these fixed at their current values indefinitely into the future. The model simulation matches historical data pretty reasonably as shown below. The max temp rises (up to the year 2350) for the three scenarios are indicated on the graph, ie you get a 0.25C increase in max temp for a 10 year delay, and 1.6C for 50 years. In other words, each year of delay initially leads to an increase in ultimate warming of about 0.025C, and this number rises steadily to around 0.04C per year in the middle of the century. The differences in temperature seen by the year 2100 are a little less than this, eg at this time there is just under 0.2C difference between the 2020 and 2030 scenarios.
[Figure: simulated global temperature for the three scenarios against historical observations, with the maximum temperature rise (up to 2350) indicated for each]
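For those who want to play along, a two-layer model of this type is only a few lines of code. The sketch below is a generic version; the heat capacities, exchange coefficient and the toy forcing pathway are illustrative assumptions (roughly in the usual calibrated range) and not the settings used to produce the figures here.

```python
import numpy as np

def two_layer(forcing, ecs=3.0, f2x=3.7, c_up=7.5, c_deep=100.0, gamma=0.7, dt=1.0):
    """Two-layer (upper ocean / deep ocean) energy balance model, stepped annually.
    The feedback parameter is set via lam = f2x / ecs. Heat capacities are in
    W yr m^-2 K^-1; all parameter values here are illustrative."""
    lam = f2x / ecs
    T = np.zeros(len(forcing))    # surface / upper-layer temperature anomaly (K)
    Td = np.zeros(len(forcing))   # deep-ocean temperature anomaly (K)
    for i in range(1, len(forcing)):
        exchange = gamma * (T[i - 1] - Td[i - 1])
        T[i] = T[i - 1] + dt * (forcing[i - 1] - lam * T[i - 1] - exchange) / c_up
        Td[i] = Td[i - 1] + dt * exchange / c_deep
    return T

# Toy CO2-only forcing pathway (ppm rising then stabilising), just to exercise the
# model; in the real calculation the forcing comes from the concentration step above,
# with the non-CO2 forcings added on and held fixed in the future.
years = np.arange(1850, 2351)
co2 = 280.0 * np.exp(0.005 * np.clip(years - 1960, 0, 120))
forcing = 3.7 * np.log(co2 / 280.0) / np.log(2.0)
print("max warming %.2f C" % two_layer(forcing, ecs=3.0).max())
```

Because the deep ocean equilibrates slowly in this kind of model, the warming at 2100 understates the ultimate difference between scenarios, which is why the maxima up to 2350 are quoted above.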
Raising the sensitivity of the model increases the ultimate temperature rise of course, and also increases the difference between the scenarios. For a sensitivity of 5C (hard to reconcile with what we believe) the 10y delay leads to an additional ultimate warming of almost 0.4C, though in this case significant warming is continuing beyond the end of the simulations in 2350 and the long-term differences will also grow gradually beyond this time. For sensitivity of 2C, the decadal delay leads to an ultimate difference of just under 0.2C, and is only 0.15C at 2100.

So this is the cost, in climate terms, of delaying decarbonisation. I don’t think the underlying assumptions are unreasonable, though no doubt some could be changed. The growth rate of emissions at 1.9% per year is probably debatable but (when fed through the Myhrvold and Caldeira equation) gives reasonable historical results. My decarbonisation rate is a guess, but results are not very sensitive to this. Eg if we can achieve a 5% decarbonisation rate, then the cost of a 10-year delay is reduced slightly to just under 0.2C rather than the 0.25C I've calculated. Note that the starting point for this post was an assumption (assertion?) that we can decarbonise at 5% per year; otherwise the world is going to end anyway.

Evaluating the economic impact of the warming may be the most contentious part. Here I’ve just used an estimate based on a version of the (Nobel-winning) Nordhaus DICE model, which I also used in this paper. Other estimates are available, and I wouldn't be surprised if these impacts have nudged up slightly but I don't expect they would be radically different. I’ve also used a simple 2% per annum growth rate for past and future GDP which some may disagree with, especially when extrapolated out to 2350. But what else should I have done?
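As a rough illustration of how this kind of damage calculation works, here is a minimal sketch using a generic DICE-flavoured quadratic damage function. The damage coefficient and baseline GDP below are order-of-magnitude assumptions for the example, not the values from the DICE version I actually used.

```python
import numpy as np

def gdp_path(temps, years, gdp0=80.0, ref_year=2015, growth=0.02, damage_coef=0.0025):
    """Gross world product (trillion $/yr, illustrative baseline) growing at
    `growth`/yr, reduced by a DICE-style quadratic damage fraction
    damage_coef * T**2. damage_coef here is a stand-in of roughly the right
    order (~0.25% of GDP per K^2), not the exact DICE value."""
    gross = gdp0 * (1 + growth) ** (years - ref_year)
    return gross * (1 - damage_coef * temps ** 2)

# With this coefficient, 2C of warming costs about 1% of output in a given year,
# and 3C a little over 2% -- small against a couple of centuries of 2%/yr growth.
for t in (2.0, 3.0):
    print("damages at %.0fC: %.1f%% of GDP" % (t, 100 * 0.0025 * t ** 2))
```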
[Figure: global GDP under the three scenarios, including climate damages]
There are indeed three lines on this graph, but they aren’t very clearly separated! The 2%(ish) cost of modest climate change just isn’t very visible against the background of several orders of magnitude of economic growth. To be clear, I don’t think that everything can be readily boiled down to money – as recent events show, many millions are apparently willing to squander untold billions (of other people’s money, of course) on the hypothetical benefits of "sovereignty". Yes, I’m talking about brexit, the costs of which will undoubtedly dwarf any plausible impact of climate change on the UK for many decades to come. But even if we aren’t trying to maximise economic benefit, it’s still an interesting context for the impact of climate change policies.

At this point I will refrain from making any more rhetorical flourishes but will instead leave the reader to decide whether this analysis indicates an end to the world.

Wednesday, January 02, 2019

Predictions

Tim Harford says that the act of making predictions makes for better people (based on this paper). I've always enjoyed making predictions so I suppose I should be pretty wonderful by now. Hmmm...well in fairness he was only suggesting an association not a guarantee. In the hope of improving myself a little further, I offer the following:

  1. Brexit won't happen (p=0.95).
  2. I will run a time (just!) under 2:45 at Manchester marathon (p=0.6).
  3. Jules and I will finish off the rather delayed work with Thorsten and Bjorn (p=0.95).
  4. We will also submit a highly impactful paper in collaboration with many others (p=0.85).
  5. 2019 will be warmer than most years this century so far (p=0.75 - not the result of any real analysis).
  6. The level of CO2 in the atmosphere will increase (p=0.999).


Monday, December 10, 2018

The failure of brexit

This post is long overdue; I thought I should at least write it down before the vote on Tuesday. And now this post has been overtaken by events during its gestation, and it looks like we won't even have that vote. No matter. This isn't really going to be about the failure of the brexit process, that would be too easy. OK, just a few words about that first. It failed because the 52% who voted to leave were all promised such ridiculous and contradictory things: the Bangladeshi Caterers Association being conned by none other than Priti Patel into believing they might find it easier to recruit curry house chefs (see how well that turned out), Scottish fishermen believing we'd get all "our" fish back (hint: a large proportion of them have to be sold into the EU anyway as they are not eaten in the UK), the lies about more money for the NHS, the forthcoming trade deal being the "easiest in human history", and the general fabulists promising three-quarters of the Single Market (but none of those pesky foreigners) and unlimited free trade with no Customs Union, and by the way let's pretend the Irish problem doesn't exist. It was obvious from the outset that there never was an actual real brexit that would be supported by a majority, either in the country or in parliament. Moreover, there was no way we were going to build the necessary infrastructure in the time available for things that would be required by a "real" brexit, like customs checkpoints, let alone replicate all the other functions that the EU currently performs for us (EURATOM being one notable example); this would be a humungous planning exercise and expense that would probably take a decade to achieve even if the govt pulled its finger out and went full steam ahead on it.

Therefore, by the time I'd had my breakfast on the morning of the 24th June 2016 I had worked out that brexit probably wasn't going to happen, and I was feeling a bit stupid that I hadn't actually realised this before the vote. I think the first time I actually wrote this on the blog was June 2017, but I'd already bored Stoat in the pub on the topic rather earlier (must have been August 2016?), and a few others besides. I shouldn't big myself up too much: I have never been 100% certain that brexit was not going to happen, and indeed there are still some mechanisms by which it could happen, but I was always quite confident that it was unlikely. Once the fantasies unravelled, the reality was never going to be attractive, and as long as there's still a way out at that point, we will probably take it.

What I'm really interested in, for the purposes of this blog post, is how and why the rest of the country hasn't allowed itself to work this out: why have we failed to analyse and understand the brexit process adequately?

Since that fateful day in 2016, there has been a rapidly growing cottage industry of experts pontificating on "which brexit" and "consequences of brexit" and "types of brexit" and "routes to brexit". Journalists have breathlessly interviewed any number of talking heads who have come out with their vacuous slogans of "brexit means brexit" and "red, white and blue brexit" and "jobs-first brexit" and...it's all just hot air. It really does seem like they have all been so firmly embedded in their own little self-referential bubble of groupthink that none of them ever stopped to consider...is this really going to happen? There has been an utter failure of the journalistic principle of holding power to account, and also an utter failure of academic research to explore possibilities, to test the boundaries of our knowledge. Instead there has been little more than non-stop regurgitation of the drivel that "brexit means brexit" and that the govt is going to "deliver a successful brexit". I wonder if, to a journalist or political scientist, the new landscape of a post-brexit world is so enticing and exciting that they have wished themselves there already?

The BBC had an official policy of "don't talk about no brexit", right up to and beyond the million-person march in London at the end of October. Humphrys enjoyed sneering about the "ludicrous People's Vote" when forced to mention it, though of course he sneers about most things these days. Note that the last time so many people marched together in London, it was in opposition to the Iraq war. There could be a lesson there, if anyone was prepared to think about it...

Theresa May of all people took everyone by surprise when she was the first person in any position of authority to utter the words "no brexit at all" earlier this year. At which point at least half the country breathed a huge sigh of relief, even though it was only intended as a scare tactic to bring the brexiters into line. Amusingly, even many of them agreed that her threatened no brexit at all was actually better than the dog's dinner she was in the process of negotiating.

And how about the academics and think-tanks? Of course some of these are nakedly political and cannot be taken seriously, but some are supposed to be independent and authoritative. Such as "The UK in a Changing Europe". (Disclaimer: I was at university with Anand Menon, who was a clever and interesting person back then too, so I'm sure he won't mind a bit of gentle criticism.)

Watch this short video, which was published to widespread acclaim just 6 weeks ago at the end of September. In it, Anand promises that he will tell you "everything you need to know" about brexit. He even emphasises the "everything". And then proceeds to talk about different types of brexit and how they might arise. What is telling here is that he didn't discuss the possibility of Article 50 being revoked and the UK staying in the EU - this outcome simply was not in the scope of possible outcomes for him as recently as September! An impressive failure of foresight. (Even those who don't yet think it is the front-runner must surely agree it is now reasonably plausible.) If academics are not able to think the unthinkable and explore the range of possible outcomes, then I have to wonder what they are actually for. It is the political equivalent of a world in which climate scientists had determinedly ignored the possibility that CO2 might be an influence on climate, and had instead devoted themselves to arguing fruitlessly over whether the observed warming was due to the sun, or aerosols instead.

So here's my real point, and the reason for my rant. Journalists and academics, by studiously avoiding speaking truth to power and colluding with this false brexit certainty, have done a great disservice to the British public. Their unwillingness to challenge politicians on both sides has permitted an entirely fake debate about a blue Tory unicorn brexit on one side, and a red Labour unicorn brexit on the other. As a result, the miserable deal that May has produced - pretty much the only one possible, if you insist that keeping out foreigners is the top priority - seems shockingly poor to everyone. We were promised sunlit uplands, and jokers like Johnson and Rees-Mogg are still promising sunlit uplands to all and sundry with no fear of an intelligent challenge from a journalist. (Note how affronted Peter Lilley was recently when the BBC actually did produce a "fact-checker" to opine on his interview.) Meanwhile Labour promises to do the same only better, just because. The entire Govt policy over the last two years has been nothing more than "let's get through the week and see what turns up". And when the plan falls down and we end up staying, roughly half the country will be shocked and feel betrayed, because they were told their vote would be acted on, have spent two years being assured that it was being acted on, and saw everyone pretend that things were going full steam ahead when in fact there never was a plan, or a plausible way forward.

Well, the public were told lots of things, many of them were lies, and this was enabled by the journalists and academics not doing their jobs. Whether it is collusion, group-think, cowardice or stupidity, it has greatly damaged the country, and we will all have to suffer the consequences. I like to think that lessons may be learnt, but in all probability they will all pat each other on the back and utter meaningless aphorisms: "prediction is difficult: especially about the future". Maybe so, but I predicted it, and I was not alone in doing so.

Better post this before it's overtaken by events again :-)

Sunday, December 09, 2018

Social Nonscience again

So, prompted by Doug McNeall's tweet, I went and read that much tweeted (and praised) paper by Iyengar and Massey: "Scientific communication in a post-truth society". My expectations weren't high and it was just as bad as I'd feared.

It starts off with an encouraging abstract:
"Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. [...] We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum."
Great. They realise that the problem is not because scientists communicate badly. It's long been obvious to many of us that there are lots of excellent public communicators in science, certainly within climate change. Some are excellent at both research and communication, some make more of a career out of the communication than the science, and that's fine too. Blaming scientists has long been a lazy excuse by those who should know better.

And even better, these social scientists have a recommendation! They actually have a proposal for how to break the policy logjam. Us scientists should "develop online strategies to counteract campaigns of misinformation". Yes! Let's do that! Though my spidey senses are tingling a bit: is this really the scientists' job? We do research and communicate it; I'm not really sure our expertise is in developing communication strategies in an adversarial environment. Sounds to me like that might be a whole new area of research in itself. Well never mind, let's see what they are actually recommending.

[...reads on through several pages of history and analysis relating to scientific communication in a post-truth society, which is interesting but hardly news...]

On to the section entitled "Communicating Science Today". At last, they are going to explain and expand on their recommendation. Aren't they?

Here is the last paragraph in full:
"At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Science, Engineering, and Medicine could form a consortium of professional scientific organizations to fund the creation of a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information so as to be able to respond quickly with a countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media. Of course, this is much easier said than done, and — given what research tells us about how the tribalization of US society has closed American minds — it might not be very effective."
Oh congratulations. The authors have invented groups like Skeptical Science and RealClimate. Sadly, they don't seem to realise that the scientists are more than a decade ahead of them. Granted, they do seem to be talking about something a bit more grandiose than those sites, but it would be nice if they'd had some awareness of what was already going on, and perhaps offered some sort of useful critique. They seem to be moving from a position of blaming scientists for not communicating adequately, to blaming them for not inventing some sort of magical unicorn for which they have no roadmap and which, they admit, probably wouldn't work even if it could be created. This is progress?


Thursday, November 08, 2018

BlueSkiesResearch.org.uk: Comments on Cox et al

And to think a few weeks ago I was thinking that not much had been happening in climate science…now here’s another interesting issue. I previously blogged quite a lot about the Cox et al paper (see here, here, here). It generated a lot of interest  in the scientific community and I’m not terribly surprised that it provoked some comments which have just been published and which can be found here (along with the reply) thanks to sci-hub.

My previous conclusion was that I was sceptical about the result, but that the authors didn’t seem to have done anything badly wrong (contrary to the usual situation when people generate surprising results concerning climate sensitivity). Rather, it seemed to me like a case of a somewhat lucky result when a succession of reasonable choices in the analysis had turned up an interesting result. It must be noted that this sort of fishing expedition does invalidate any tests of statistical significance and thus also invalidates the confidence intervals on the Cox et al result, but I didn’t really want to pick on that  because everyone does it and I thought that hardly anyone would understand my point anyway 🙂

The comments generally focus on the use of detrended intervals from the 20th century simulations. This was indeed the first thing that came to my mind as a likely Achilles’ heel of the Cox et al analysis. I don’t think I showed it previously, but the first thing I did when investigating their result was to have a play with the ensemble of simulations that had been performed with the MPI model. The Cox et al analysis depends on an estimate of the lag-1 autocorrelation of the annual temperatures of the climate models. Ideally, if you want to calculate the true lag-1 autocorrelation of model output, you should use a long control run (ie an artificial simulation in which there is no external forcing). Of course there is no direct observational constraint on this, but nevertheless this is one of the model parameters involved in the Cox et al constraint, for which they claim a physical basis.

As well as having a long control simulation of the MPI model, there is also a 100-member ensemble of 20th century simulations using it. The size of this ensemble means that as well as allowing an evaluation of the Cox et al detrending approach to estimate the autocorrelation, we can also test another approach, which is to remove the ensemble average (which represents the forced response) rather than detrending a single simulation.
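In code, the estimators being compared look roughly like the sketch below. The data access is left as hypothetical placeholders (load_run, load_ensemble and the 55-year window length are assumptions for illustration, not the actual details of my analysis or of Cox et al).

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-d series of annual anomalies."""
    x = x - x.mean()
    return np.sum(x[1:] * x[:-1]) / np.sum(x * x)

def detrend(x):
    """Cox et al style: subtract a linear fit from a single (windowed) series."""
    t = np.arange(len(x))
    return x - np.polyval(np.polyfit(t, x, 1), t)

# Hypothetical data access -- stand-ins for however the model output is stored:
# control = load_run("MPI-ESM", "piControl")          # long unforced annual series
# ensemble = load_ensemble("MPI-ESM", "historical")   # (100, nyears) array
# window = 55                                         # window length (illustrative)
#
# ens_mean = ensemble.mean(axis=0)                    # estimate of the forced response
# acf_control   = [lag1_autocorr(control[i:i + window])
#                  for i in range(0, len(control) - window, window)]
# acf_detrended = [lag1_autocorr(detrend(run[-window:])) for run in ensemble]
# acf_unforced  = [lag1_autocorr((run - ens_mean)[-window:]) for run in ensemble]
```

The question is then simply whether the detrended and ensemble-mean-removed estimates differ much, either on average or in their spread across ensemble members.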

This figure shows the results I obtained. And rather to my surprise….there is basically no difference. At least, not one worth worrying about. For each of the three approaches, I’ve plotted a small sample from the range of empirical results (the thin pale lines) with the thicker darker colours showing the ensemble mean (which should be a close approximation to the true answer in each case). For the control run I used short chunks comparable in length to the section of 20th century simulation that Cox et al used. The only number that actually matters from this graph is the lag-1 value which is around 0.5, the larger lags are just shown for fun. The weak positive values from 5-10 years probably represent the influence of the model’s El Nino. It’s clear that the methodological differences here are small both in absolute terms and also relative to the sample variation across ensemble members. That is to say, sections of control runs, or 20th century simulations which are either detrended or which have the full forced response removed, all have a lag-1 autocorrelation of about 0.5 albeit with significant variation from sample to sample.
[Figure: autocorrelation of annual temperatures in the MPI model estimated from control-run chunks, detrended 20th century runs, and 20th century runs with the ensemble mean removed; thin pale lines show individual samples, thicker dark lines the ensemble means]
Of course this is only one model out of many, and the result may not be repeated across the CMIP ensemble, but this analysis suggested to me that the detrending approach wasn’t a particularly important issue and so I did not pursue it further. It is interesting to see how heavily the comments focus on it. It seems that the authors of these comments got different results when they looked at the CMIP ensemble.

One thing I’d like to mention again, which the comments do not, is that the interannual variability of the CMIP models is actually more strongly related to sensitivity than either the autocorrelation or Cox’s Psi function (which involves both of these parameters). Here is the plot I showed earlier, which is of course a little bit dependent on the outlying data points (as was commented on my original post). This is sensitivity vs SD (calculated from the control runs) of the CMIP models of course.
[Figure: equilibrium sensitivity vs the standard deviation of annual temperatures (from the control runs) for the CMIP models]
I don’t know why this is so, I don’t know whether it’s important, and I’m surprised that no-one else bothered to mention it as interannual variability is probably rather less sensitive than autocorrelation is to the detrending choices. Maybe I should have written a comment myself 🙂
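For what it’s worth, checking this sort of relationship is straightforward once you have the control-run series and the published sensitivities; the dictionary of model data below is a hypothetical placeholder for however you assemble them.

```python
import numpy as np

def drift_removed(x):
    """Annual anomalies about a linear fit (crude drift removal for a control run)."""
    t = np.arange(len(x))
    return x - np.polyval(np.polyfit(t, x, 1), t)

def compare_constraints(models):
    """models: hypothetical dict {name: (control_run_annual_temps, ecs)} from CMIP output."""
    anoms = {name: drift_removed(x) for name, (x, _) in models.items()}
    ecs = np.array([s for _, s in models.values()])
    sd = np.array([a.std() for a in anoms.values()])
    acf1 = np.array([np.corrcoef(a[:-1], a[1:])[0, 1] for a in anoms.values()])
    print("corr(ECS, interannual SD) = %.2f" % np.corrcoef(ecs, sd)[0, 1])
    print("corr(ECS, lag-1 autocorr) = %.2f" % np.corrcoef(ecs, acf1)[0, 1])
```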

In their reply to the comments, Cox et al now argue that their use of a simple detrending means that their analysis includes a useful influence of some of the 20th century forced response, which "sharpens" the relationship with sensitivity (which is weaker when the CMIP control runs are used). This seems a bit weak to me and, as well as basically contradicting their original hypothesis, it also breaks one of the fundamental principles of emergent constraints: that they should have a clear physical foundation. At the end of the discussion I’m more convinced that the windowing and detrending is a case of "researcher degrees of freedom", ie post-hoc choices that render the statistical analysis formally invalid. It’s an interesting hypothesis rather than a result.

The real test will be applying the Cox et al analysis to the CMIP6 models, although even in this case intergenerational similarity makes this a weaker challenge  than would be ideal. I wonder if we have any bets as to what the results will be?

Saturday, November 03, 2018

BlueSkiesResearch.org.uk: That new ocean heat content estimate

From Resplandy et al (conveniently already up on sci-hub):
Our result — which relies on high-precision O2 measurements dating back to 1991 — suggests that ocean warming is at the high end of previous estimates, with implications for policy-relevant measurements of the Earth response to climate change, such as climate sensitivity to greenhouse gases.
But how big are these implications? As it happens I was just playing around with 20th century energy balance estimates of the climate sensitivity, so I could easily plug in the new number. My calc is based on a number of rough estimates so is not intended to be definitive but should show the general effect fairly realistically.

I was previously using the Johnson et al 2016 estimate of ocean heat uptake, which is 0.68 W/m^2 (on a global mean basis). Resplandy et al raise this value to 0.83 W/m^2. Their paper presents their value as a rather more substantial increase by comparing it to an older value that the IPCC apparently gave.

Plugging the numbers into a standard energy balance approximation, we get the following estimates for the equilibrium sensitivity:

[Figure: posterior estimates of equilibrium climate sensitivity using the Johnson et al and Resplandy et al ocean heat uptake values, together with the prior]
This simple calculation has a (now) well-known flaw that tends to bias the results low, though how low is up for debate (it’s the so-called "pattern effect", or you might know it as the difference between effective and equilibrium sensitivity). I used my preferred Cauchy prior from this old paper. It has a location parameter of 2.5 and a scale factor of 3 (meaning it should have a median of 2.5 and an interquartile range of -0.5 to 5.5, though it’s truncated here at 0 and 10 for convenience).
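For anyone wanting to reproduce the flavour of the calculation, here is a minimal sketch. The forcing change, observed warming and their uncertainties below are illustrative stand-ins (only the heat uptake value and the prior are taken from this post), and the marginalisation over the forcing and heat uptake uncertainties is done by crude Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000

# Illustrative inputs -- stand-ins, not the exact values used for the figure
dF = rng.normal(2.3, 0.5, n)      # forcing change since the late 19th century (W/m^2)
dQ = rng.normal(0.83, 0.10, n)    # ocean heat uptake, Resplandy et al style value (W/m^2)
dT_obs, dT_sd = 0.95, 0.08        # observed warming (K) and its uncertainty
F2x = 3.7                         # forcing for doubled CO2 (W/m^2)

S = np.linspace(0.01, 10.0, 1000)            # sensitivity grid, truncated at 0 and 10

# Cauchy(2.5, 3) prior, truncated and normalised on the grid
prior = 1.0 / (1.0 + ((S - 2.5) / 3.0) ** 2)
prior /= np.trapz(prior, S)

# Likelihood: predicted warming for a given S is S*(dF - dQ)/F2x; average the
# Gaussian likelihood of the observed warming over the forcing/uptake samples
like = np.array([np.mean(np.exp(-0.5 * ((s * (dF - dQ) / F2x - dT_obs) / dT_sd) ** 2))
                 for s in S])

post = prior * like
post /= np.trapz(post, S)

cdf = np.cumsum(post) * (S[1] - S[0])
median, low, high = np.interp([0.5, 0.05, 0.95], cdf, S)
print("median %.2f C, 5-95%% range %.2f-%.2f C" % (median, low, high))
```

Re-running with 0.68 in place of 0.83 for the heat uptake gives the Johnson et al version of the calculation, which is the comparison shown in the figure.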

The old calculation based on Johnson et al’s ocean heat uptake has a median of  2.35C with a 5-95% range of 1.41 – 4.61C. The new estimate raises this to a median of 2.57C with a range of 1.53 – 5.17C. So, about 0.1C on the bottom end and 0.5C on the top end, which may not be the impression the manuscript gives. The medians and ranges are shown with the tick marks. The calculation goes up to 10C but I truncated the plot at 6C (and thus cut off the prior’s 95% limit) for clarity.

Another caveat in my calculation is that the new paper’s main result is based on a longer time interval going back to 1991; if the ocean heat uptake has been accelerating, then that would imply a larger increment to the Johnson et al figure (which relates to a more recent period) and thus a larger effect. But even so it’s an incremental change, and not earth-shattering. As one might expect for such a well-studied phenomenon.

Wednesday, October 24, 2018

BlueSkiesResearch.org.uk: Cloudy with a chance of meatballs (on the pizza)

The CFMIP workshop provided the excuse we were looking for in order to visit Boulder again. Last year this meeting had been held in Tokyo which, on top of being a long way away, also directly clashed with the PMIP workshop that jules was duty-bound to attend. The main focus of the CFMIP workshop is the simulation of clouds in GCMs and how they might be affected by climate change. This is our largest single source of uncertainty in the climate system. We are not really CFMIP people and didn’t plan to present anything, in fact what drew us there more than the workshop content itself was that our two ongoing research collaborations involve a whole bunch of the attendees so it was a good chance for us all to meet face to face for a change rather than via Skype and email. These meetings were fitted in to the odd hour or two here or there in some of the breaks rather than filling a full week as we did in Edinburgh earlier this summer. (Did I not blog that? How remiss of me.) Additionally, jules also tapped up a local for a role as exec editor of GMD, continuing our (their!) policy of rotation and inclusion of new people and ideas. All of that plus the famed Boulder weather and food was sufficient to entice us over for a bit of a holiday that included the workshop.

I had feared that the workshop was going to be strongly focussed on the details of simulating clouds in climate models, but the first day, as well as providing an overview of CFMIP, also discussed the notion of feedbacks and climate sensitivity more generally. This is highly relevant to our current research so we paid careful attention. Interestingly, participants seemed resigned to a large proportion of their simulations and analyses not being completed in time for the next IPCC report. In principle the IPCC merely summarises the science and does not commission, still less undertake, it, but in reality there has usually been a close link between the IPCC and CMIP timetables. With people trying to maximise development time for their climate models, there is limited computing time to do all of the various MIP experiments, especially if some mistake is found and a bunch of simulations have to be repeated. Of course it’s inevitable that the IPCC report is a little out of date because of the lengthy process underpinning it, but it would be unfortunate if lots of new and exciting results were being published around the time that the IPCC produces a report based on much older knowledge. On the other hand most of the climate science the IPCC reports on is mature enough that there probably won’t be many big surprises anyway.

The subsequent days did indeed focus more on details of the behaviour of clouds, and as a result we skipped a fair bit, although this paper (which has just come out) may have wider interest. There was an amusing moment when someone argued that if we wanted to give the best possible predictions for future climate change, we should use the highest possible resolution for an atmospheric model and provide it with prescribed sea surface temperatures in place of a realistic ocean model. It’s a lucky coincidence that he was an atmospheric modeller – just imagine if he’d worked on the ocean for a couple of decades before realising it was completely pointless! 🙂

I also had to arrange a few phone calls from journalists who had got wind of this story, the results of which can be seen here and here (and no, I didn’t tell the 2nd one that I had never lost, but never mind about that small detail). The timetabling of those also interrupted some of the talks. So while the CFMIP meeting was a bit marginal for us, the side meetings made the week extremely productive.

One of the big attractions of a meeting at NCAR for me is the breakfast that is earned through cycling up the substantial hill.



Saturday, October 20, 2018

Credit where it's due

The GMD t-shirts continue to be good value. Jules and I usually try to wear ours at any scientific meetings to spread the word and it often sparks off some interaction. Recently someone prominent came up to me at the workshop dinner and congratulated me on my achievements in respect of GMD. He was particularly impressed with the CMIP6 special issue, which is turning out to be very useful. I pointed out that jules (who was sitting beside me but in civvies) actually bore far more responsibility for this, not only in being Chief Exec Editor of the journal but more specifically both in developing the whole concept of MIP papers within GMD and in negotiating the details of how the CMIP6 special issue would work - which involved some lengthy negotiations with the CMIP people.

At the end of the dinner as he was leaving, he thanked me again for all that I'd done. It's a tough job taking credit for other peoples' work but someone's got to do it!

Monday, October 15, 2018

The bet - final outcome

You may be wondering what had happened with this. As you will recall, some time ago I arranged a bet with two Russian solar scientists who had predicted that the world was going to cool down. The terms of the bet were very simple: we would compare the global mean surface temperature between 1998-2003 and 2012-17 (according to NCDC), and if the latter period was warmer, I would win $10,000 from them, and if it was cooler, they would win the same amount. See here and here for some of the news coverage at the time.

The results were in a while ago, and of course I won easily, as the blue lines in the graph below show. It was never in much doubt, even though their choice of starting period included what was then the extraordinarily hot El Nino year of 1998. In fact the temperature in that year just barely exceeded 2012 (by less than 0.01C) and all subsequent years have been warmer, as you can see from the black dashed line below. It seems unlikely that anyone alive today (or indeed for the next few centuries at least) will ever see such a low temperature again.



So this should be the point at which I ask my blog readers for ideas as to what to spend the $10,000 on. I was hoping to do something that would be climatically and environmentally beneficial, perhaps something that might garner a bit of publicity and make a larger contribution. But they are refusing to pay. More precisely, Bashkirtsev is refusing to pay, and Mashnich is refusing to even reply to email. With impressive chutzpah, Bashkirtsev proposed we should arrange a follow-up bet which he would promise to honour. Of course I'd be happy to consider such a thing, once the first bet is settled. But it looks unlikely that this is going to happen.

It was obvious of course that this settlement risk was the biggest uncertainty right from the start. I had hoped they would value their professional reputations as worth rather more to themselves than the sums of money involved. On the other hand a certain amount of intellectual dishonesty seems necessary in order to maintain the denialist mindset. Of course it could be argued that it's unfair to tar all denialists with the same brush, maybe I was just unlucky to come across the only two charlatans and the rest of the bunch are fine upstanding citizens who just happen to suffer from genuine misunderstandings. Who wants to bet on that?