Friday, July 03, 2020

Ho hum

Haven't posted for a while, so how about a few minutes of James O'Brien to pass the time.

Friday, June 26, 2020 Like a phoenix redux

Even odder than finding that our old EnKF approach for parameter estimation was particularly well suited to the epidemiological problem, was finding that someone else had independently invented the same approach more recently…and had started using it for COVID-19 too!

In particular, this blogpost and the related paper lead me to this 2013 paper wherein the authors develop a method for parameter estimation based on iterating the Kalman equations, which (as we had discovered back in 2003) works much better than doing a single update step in many cases where the posterior is very small compared to the prior and the model is not quite perfectly linear – which is often the case in reality.

The basic idea behind it is the simple insight that if you have two observations of an unknown variable with independent Gaussian errors of magnitude e, this is formally equivalent to a single observation which takes the average value of the two obs, with an error of magnitude e/sqrt(2). This is easily shown by just multiplying the Gaussian likelihoods by hand. Conversely, you can split up a precise observation, with its associated narrow likelihood, into a pair of less precise observations, which have exactly the same joint likelihood but which can be assimilated sequentially, applying the broader likelihood twice. In between the two assimilation steps you can integrate the model so as to bring the state back into balance with the parameters. It works better in practice because the smaller steps are more consistent with the linear assumptions that underpin the entire assimilation methodology.
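This equivalence is easy to verify numerically. Here's a toy sketch in Python (the observation values and error magnitude are made up purely for illustration):

```python
import numpy as np

def gaussian_product(mu1, var1, mu2, var2):
    """The product of two Gaussian likelihoods is (up to normalisation)
    a Gaussian with precision-weighted mean and summed precisions."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

e = 0.5                      # error magnitude of each imprecise obs
obs1, obs2 = 1.8, 2.2        # two observations of the same unknown

# Combine the two imprecise obs directly...
mu_pair, var_pair = gaussian_product(obs1, e**2, obs2, e**2)

# ...or replace them with a single obs: the average, with error e/sqrt(2)
mu_single = 0.5 * (obs1 + obs2)
var_single = (e / np.sqrt(2)) ** 2

assert np.isclose(mu_pair, mu_single)
assert np.isclose(var_pair, var_single)
```

The same multiplication works in either direction, which is what justifies splitting the precise obs into two broader ones.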

This multiple data assimilation idea generalises to replacing one obs N(xo,e) with n obs of the form N(xo,e*sqrt(n)). And similarly for a whole vector of observations, with associated covariance matrix (typically just diagonal, but it doesn’t have to be). We can sequentially assimilate a lot of sets of imprecise obs in place of one precise set, and the true posterior is identical, but the multiple obs version often works better in practice due to generating smaller increments to the model/parameter samples and the ability to rebalance the model between each assimilation step.
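A quick way to convince yourself of the general n-obs version (a hedged sketch, not taken from any of the papers): do one scalar Kalman/Bayes update with the precise obs, then n updates with the inflated error variance n·e², and check the posteriors coincide.

```python
import numpy as np

def kalman_update(mu, var, obs, obs_var):
    """Exact scalar Bayes/Kalman update for a Gaussian prior and likelihood."""
    k = var / (var + obs_var)
    return mu + k * (obs - mu), (1 - k) * var

prior_mu, prior_var = 0.0, 4.0
xo, e2 = 3.0, 0.5            # one precise observation with error variance e^2

# Single assimilation of the precise obs
mu1, var1 = kalman_update(prior_mu, prior_var, xo, e2)

# n assimilations of the same obs value, each with variance n * e^2
n = 8
mu2, var2 = prior_mu, prior_var
for _ in range(n):
    mu2, var2 = kalman_update(mu2, var2, xo, n * e2)

assert np.isclose(mu1, mu2)
assert np.isclose(var1, var2)
```

Each small update adds 1/(n·e²) to the posterior precision, so n of them add exactly the 1/e² that the single precise obs would have.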

Even back in 2003 we went one step further than this and realised that if you performed an ensemble inflation step between the assimilation steps, then by choosing the inflation and error scaling appropriately, you could create an algorithm that converged iteratively to the correct posterior and you could just keep going until it stopped wobbling about. This is particularly advantageous for small ensembles where a poor initial sample with bad covariances may give you no chance of reaching the true posterior under the simpler multiple data assimilation scheme.

I vaguely remembered seeing someone else had reinvented the same basic idea a few years ago and searching the deep recesses of my mind finds this paper here. It is a bit disappointing to not be cited by any of it, perhaps because we’d stopped using the method before they started….such is life. Also, the fields and applications were sufficiently different they might not have realised the methodological similarities. I suppose it’s such an obvious idea that it’s hardly surprising that others came up with it too.

Anyhow, back to this new paper. The figure below is a set of results they have generated for England (they preferred to use these data rather than accumulating over the whole of the UK, for reasons of consistency), where they assimilate different sets of data: first deaths, then deaths and hospitalisations, and finally adding in case data on top (with some adjustments for consistency).

[Figure: their results for England under the three data choices]

The results are broadly similar to mine, though their R trajectories seem very noisy with extremely high temporal variability – I think their prior may use independently sampled values on each day, which to my mind doesn’t seem right. I am treating R as taking a random walk with small daily increments, except on lockdown day. In practice this means my fundamental parameters to be estimated are the increments themselves, with R on any particular day calculated as the cumulative sum of increments up to that time. I’ve included a few trajectories for R on my plot below to show what it looks like.
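A minimal sketch of this parameterisation (the day count, lockdown index, step size and jump are illustrative numbers, not my actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

n_days = 150
lockdown_day = 80            # hypothetical index of lockdown day
daily_step = 0.03            # std dev of the daily random-walk increments

# The parameters to be estimated are the increments themselves; here we
# just draw one sample from the prior for illustration.
increments = rng.normal(0.0, daily_step, n_days)
increments[0] = 3.0              # first "increment" sets the initial R
increments[lockdown_day] = -2.5  # a large jump is allowed on lockdown day

# R on any given day is the cumulative sum of increments up to that day
R = np.cumsum(increments)
```

Estimating increments rather than daily R values directly is what makes the random-walk prior trivial: the increments are just independent Gaussians.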


Monday, June 15, 2020 Like a phoenix…

So, the fortnightly chunks in the last post were doing ok, but it’s still a bit clunky. I quickly found that the MCMC method I was using couldn’t really cope with shorter intervals (meaning more R values to estimate). So, after a bit of humming and hawing, I dusted off the iterative Ensemble Kalman Filter method that we developed 15 years ago for parameter estimation in climate models. I must put a copy up on our web site; it looks like there’s a free version here. For those who are interested in the method, the equations are basically the same as in the standard EnKF used in all sorts of data assimilation applications, but with a couple of tweaks to make it work for a parameter estimation scenario. It had a few notable successes back in the day, though people always sneered at the level of assumptions that it seemed to rely on (to be fair, I was also surprised myself at how well it worked, but found it hard to argue with the results).

And….rather to my surprise….it works brilliantly! I have a separate R value for each day, a sensible prior on this being Brownian motion (small independent random perturbation each day) apart from a large jump on lockdown day. I’ve got 150 parameters in total and everything is sufficiently close to Gaussian and linear that it worked at the first time of asking with no additional tweaks required. One minor detail in the application is that the likelihood calculation is slightly approximate as the algorithm requires this to be approximated by a (multivariate) Gaussian. No big deal really – I’m working in log space for the number of deaths, so the uncertainty is just a multiplicative factor. It means you can’t do the “proper” Poisson/negative binomial thing for death numbers if you care about that, but the reporting process is so much more noisy that I never cared about that anyway and even if I had, model error swamps that level of detail.
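For anyone wondering what the update actually looks like, here is a toy version of the standard perturbed-observation EnKF update for a scalar parameter (just the basic building block with a made-up linear model, not the full iterative scheme, and the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_param_update(theta, y, obs, obs_var):
    """One perturbed-observation EnKF update of a scalar parameter ensemble
    theta, given per-member model predictions y and a single observation."""
    c = np.cov(theta, y)                  # 2x2 sample covariance matrix
    gain = c[0, 1] / (c[1, 1] + obs_var)  # Kalman gain for the parameter
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), len(theta))
    return theta + gain * (perturbed - y)

# Toy linear "model": predicted log-deaths proportional to the parameter
true_theta = 2.0
theta = rng.normal(0.0, 2.0, 1000)        # prior parameter ensemble
y = 1.5 * theta                           # model prediction per member
obs = 1.5 * true_theta                    # "observed" log-deaths
theta_post = enkf_param_update(theta, y, obs, obs_var=0.1)
# theta_post is now centred near true_theta with much reduced spread
```

Working in log space for deaths means the observation error really is just an additive Gaussian term here, which is exactly the approximation described above.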

The main thing to tweak is how big a daily step to put into the Brownian motion. My first guess was 0.05 and that worked well enough. 0.2 is horrible, generating hugely noisy time series for R, and 0.01 is probably inadequate. I think 0.03 is probably about ok. It’s vulnerable to large policy changes of course but the changes we have seen so far don’t seem to have had much effect. I haven’t done lots of validation but a few experiments suggest it’s about right.

Here are a few examples where (top left) I managed to get a validation failure with a daily step of 0.01 (top right) used 0.2 per day but no explicit lockdown, just to see how it would cope (bottom left) same as top left but with a broader step of 0.03 per day (bottom right) the latest forecast.

I’m feeling a bit smug at how well it’s worked. I’m not sure what other parameter estimation method would work this well, this easily. I’ve had it working with an ensemble of 50, doing 10 iterations = 500 simulations in total though I’ve mostly been using an ensemble of 1000 for 20 iterations just because I can and it’s a bit smoother. That’s for 150 parameters as I mentioned above. The widely-used MCMC method could only do about a dozen parameters and convergence wasn’t perfect with chains of 10000 simulations. I’m sure some statisticians will be able to tell me how I should have been doing it much better…

Friday, June 12, 2020

Neoliberalism Kills?

However you "solve" the problem, the pandemic was always going to be very expensive. Government mandated lockdowns might be framed as the point at which the government disrupts market forces, decides to transfer some costs away from individuals, and to strongly shape the future trajectory.

The costs of lockdown vs no-lockdown in terms of both lives and money were probably not that well understood by the decision makers. While epidemiological models can easily predict 100s of thousands of deaths, they assume no changes in behaviour by citizens. Rich countries with low inequality may reasonably hope for auto-lockdown by their citizens without such massive interference by the government. This has probably helped Japan, and perhaps to a lesser extent Sweden (Sweden has plenty of deaths but has also possibly kept more of its economy going..?). On the other hand, in many countries most people cannot afford to lock down, and people not at such high risk (in this case, younger people) may feel much lower motivation to do so.

But instead of really thinking any of this through it seems that our dimwitted politicians simply applied their rubbish political ideological theories. They aren't scientists and do not know that theories have to make testable predictions in order to be worthwhile.

And what happened?! In this case socialism wins while neoliberalism both kills lots and lots of people and crashes the economy (because early lockdown => shorter lockdown). Ooops!

I would think it isn't always the case; socialist governments have surely fucked up very badly in the past when faced with other problems. That's the problem with random theories based on no evidence - they only work every so often. But I am still a bit worried that perhaps neoliberalism always kills and that this is just the first chance it has had to do a proper job of it.

The big caveat in all this is that we do not know what the endgame is. If herd immunity remains the inevitable consequence, then lockdown might be viewed in terms of the effect on quality of life in terms of months rather than in total lives saved. But those months are still pretty valuable, aren't they?

Tuesday, June 09, 2020 More COVID-19 parameter estimation

The 2- and now 3-segment piecewise constant approach seems to have worked fairly well but is a bit limited. I’m not really convinced that keeping R fixed for such a long period and then allowing a sudden jump is really entirely justifiable, especially now we are talking about a more subtle and piecemeal relaxing of controls.

Ideally I’d like to use a continuous time series of R (eg one value per day), but that would be technically challenging with a naive approach involving a whole lot of parameters to fit. Approaches like epi-estim manage to generate an answer of sorts but that approach is based on a windowed local fit to case numbers, and I don’t trust the case data to be reliable. Also, this approach seems pretty bad when there is a sudden change as at lockdown, with the windowed estimation method generating a slow decline in R instead. Death numbers are hugely smoothed compared to infection numbers (due to the long and variable time from infection to death) so I don't think that approach is really viable.

So what I’m trying is a piecewise constant approach, with a chunk length to be determined. I’ll start here with 14 day chunks in which R is held constant, giving us say 12 R values for a 24 week period covering the epidemic (including a bit of a forecast). I choose the starting date to fit the lockdown date into the break between two chunks, giving 4 chunks before and 8 after in this instance.
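In code terms the construction is trivial (the R values here are purely illustrative, not my estimates):

```python
import numpy as np

chunk_len = 14                                # days per chunk
R_chunks = np.array([3.0, 2.9, 3.1, 2.8,      # 4 chunks before lockdown
                     0.8, 0.7, 0.75, 0.7,     # 8 chunks after
                     0.7, 0.72, 0.7, 0.7])

# Expand 12 chunk values into a daily series: 12 x 14 = 168 days = 24 weeks
R_daily = np.repeat(R_chunks, chunk_len)

# The start date is chosen so the lockdown falls on a chunk boundary
lockdown_day = 4 * chunk_len
```

The key design choice is aligning the chunk boundaries with the lockdown date, so the sudden policy change doesn't get smeared across a chunk.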

I’ve got a few choices to make over the prior here, so I’ll show a few different results. The model fit looks ok in all cases so I’m not going to present all of them. This is what we get for the first experiment:

The R values however do depend quite a lot on the details and I’m presenting results from several slightly different approaches in the following 4 plots.

Top left is the simplest version where each chunk has an independent identically distributed prior for R of N(2,1²). This is an example of the MCMC algorithm at the point of failure, in fact a little way past that point, as the 12 parameters aren’t really very well identified by the data. The results are noisy and unreliable and it hasn’t converged very well. The last few values of R here should just sample the prior as there is no constraint at all on them. That they do such a poor job of that is an indication of what a dodgy sample it is. However it is notable that there is a huge drop in R at the right time when the lockdown is imposed, and the values before and after are roughly in the right ballpark. Not good enough, but enough to be worth pressing on with….

The next plot, on the top right, is when I impose a smoothness constraint. R can still vary from block to block, but deviations between neighbouring values are penalised. The prior is still N(2,1²) for each value, so the last values of R trend up towards this range but don’t get there due to the smoothness constraint. The result looks much more plausible to me and the MCMC algorithm is performing better too. However, the smoothness constraint shouldn’t apply across the lockdown, as there was a large and deliberate change in policy and behaviour at that point.

So the bottom left plot has smoothness constraints applied before and after the lockdown but not across it. Note that the pre-lockdown values are more consistent now and the jump at lockdown date is even more pronounced.
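The prior with and without the cross-lockdown penalty can be sketched as a log-density (a toy version; the function and all the numbers are mine, purely for illustration):

```python
import numpy as np

def log_prior(R, mu, sigma, smooth_sigma, lockdown_chunk):
    """Gaussian log-prior (up to a constant) on chunked R values:
    independent N(mu, sigma^2) terms for each chunk, plus a penalty on
    differences between neighbouring chunks, skipped at the lockdown
    boundary so the deliberate policy jump is not penalised."""
    lp = -0.5 * np.sum((R - mu) ** 2) / sigma**2
    for i in range(1, len(R)):
        if i == lockdown_chunk:
            continue  # no smoothness constraint across the lockdown
        lp -= 0.5 * (R[i] - R[i - 1]) ** 2 / smooth_sigma**2
    return lp

# A big jump costs nothing when aligned with the lockdown boundary...
R1 = np.array([3.0, 3.0, 3.0, 3.0, 1.0, 1.0, 1.0, 1.0])
lp_aligned = log_prior(R1, 2.0, 1.0, 0.05, lockdown_chunk=4)
# ...but is heavily penalised anywhere else
lp_misaligned = log_prior(R1, 2.0, 1.0, 0.05, lockdown_chunk=2)
```

With a tight smoothness scale, the same trajectory is vastly more probable under the prior when its jump coincides with the lockdown boundary, which is exactly the behaviour wanted here.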

Finally, I don’t really think a prior of N(2,1²) is suitable at all times. The last plot uses a prior of N(3,1²) before the lockdown and N(1,0.5²) after it. This is probably a reasonable representation of what I really think and the algorithm is working nicely.

Here is what it generates in terms of daily death numbers:

There is still a bit of tweaking to be done but I think this is going to be a better approach than the simple 3-chunk version I’ve been using up to now.

Thursday, June 04, 2020

Doubling times and the curious case of the dog that didn't bark

I'm sure many of you spend your mornings glued to Parliament TV, but for anyone who missed it, there was a House of Lords Science and Technology Select Committee hearing on Tuesday, which discussed the science of Covid-19 with a particular focus on modelling. There are transcripts (two parts) here, and there is a video of the whole event here.

Two things particularly caught my attention. The first is Question 31 in the transcript, at about 10:47, when they are asked about the 20,000 death estimate that Vallance had hoped to undershoot. Matt Keeling first excused this by saying that the lockdown could have been stricter. However, Ferguson had already confirmed that the lockdown was in fact adhered to more strictly than the modellers had anticipated. (Keeling also went on to talk about the failure to protect care homes, which is probably fair enough, though the 20,000 target would have been badly overshot in any case.) It would surely have been appropriate at this point to mention that the 20k target was already missed at the point that the lockdown was imposed on the 23rd March. By this point we had well over a million infected people, which already guaranteed well over 10k deaths, and there was no plausible way of stopping the outbreak dead in its tracks at that point. The decay was always going to be slower than the growth, meaning more cases and deaths.

Then in the follow-up question, the witnesses were asked what had most surprised them compared to their earlier predictions. Ferguson answered that the biggest factor was that more infection had come from Spain and Italy, and this is why they were further ahead on the curve than anticipated! The doubling time has nothing at all to do with how quickly infected people arrived, of course, and getting this wrong made a far greater difference to how things turned out.

It's curious that in two hours of discussion about the modelling, it never once came up that the modellers' estimate of the doubling time was 5-7 days up to the 18th March, and abruptly changed to 3-5 days on the 20th (reaching SAGE on the 23rd, according to the documentation).

Monday, June 01, 2020

The Dim and Dom show

I feel like I should be blogging about something related to the ongoing epidemic, but I can't bring myself to do it. The utterly vacuous, self-destructive, hopelessly incompetent nature of our government is beyond my ability to put into words. I am surprised at the scientists who are still prepared to work with them over the epidemic.

That aside, it's been an interesting couple of weeks. I'd been doing more of the same modelling and forecasting of the epidemic (and have updated our paper and submitted to a real journal), and then suddenly the media got hold of the delayed lockdown story. This is a very simple calculation, initially I thought too trivial to even write into a blog post, but it is of course very eye-catching. After mentions in the Guardian, Telegraph, More or Less, some requests for interviews came in. Initially I ducked them as I didn't really think it was appropriate for a non-expert to be pushing his own research especially as no-one else had backed it up at that point, and ATTP had tried to get results out of the IC model but initially came up with some significantly different answers (after a few more tries at getting the code to do the right things it worked very nicely though). Kit did a very good job on Sky I thought:

and then I found this manuscript (also written by an outsider, a mathematical modeller like me) and the research showing essentially the same results for the USA (manuscript here) (I think the smaller effect is mostly because they looked at a shorter interval), and also the Sunday Times article which managed to claim it was all new research from the IC team, so I relented and did an interview for Vanessa Feltz on Radio 2 (which was live):

and also for the German channel ZDF, which was recorded on Friday. Whether it will/did make the cut remains to be seen...they said they would send a link to the final version so I wait with bated breath.

Thursday, May 21, 2020 The EGU review

Well.. that was a very different EGU!

We were supposed to be in Vienna, but that was all cancelled a while back of course. I might have felt sorry for my AirBnB host, but despite Austria banning everything they didn’t reply to my communication and refused a refund, so when AirBnB eventually (after a lot of ducking and weaving) stepped in, over-ruled them and gave me my money back, I didn’t have much sympathy. They weren’t our usual host, who was already full when I booked a bit late this year.

Rather than the easy option of just cancelling the meeting, the EGU decided to put everything on-line. They didn’t arrange videoconferencing sessions – I think this was probably partly due to the short notice, and also to make everything as simple and accessible as possible to people who might not have had great home broadband or the ability to use streaming software – but instead we had on-line chat (typing) sessions with presentation material previously uploaded by authors, that we could refer to as we liked. There was no formal division into posters and oral presentations. Authors could put up whatever they wanted (50MB max) onto the website beforehand and people were free to download and browse through at will. It is all still up there and available to all permanently, and you can comment on individual presentations up to the end of the month (assuming the authors have allowed this, which most seem to). The EGU has posted this blog with statistics of attendance which shows it to have been an impressive success.

Some people put up huge presentations, far more than they would have managed in a 15 minute slot, but most were more reasonable and presented a short summary. We did poster format for ours as we felt that this allowed more space for text explanation and an easier browsing experience than a sequence of slides with bullet points. Unfortunately my personal program of sessions I had decided to attend has been deleted from the system so I can’t review what I saw in much detail. I usually take notes but this time was too busy with computer screens.

Of course, being in Vienna in spirit, I had to have a schnitzel. I might have to have some more in the future, they were rather good and quite easy to make. Pork fillet, not veal.

The 2nd portion at the end of the week was better as I made my own breadcrumbs rather than using up some ancient panko that was skulking in the back of the cupboard. But we ate them too quickly to take pictures! Figlmüller eat your heart out!

The chat sessions were a bit frenetic. Mostly, the convenors invited each author in turn to post a few sentences in summary, following which there was a short Q-and-A free-for-all. This only allowed for about 5 mins per presentation, which meant maybe 2 or 3 questions. But this wasn’t quite as bad as it seems, since it was easy to scroll through the uploaded material ahead of time and pick out the interesting ones. Questioning could also run over subsequent presentations; it wasn’t too hard to keep track of who was asking what if you made the effort. As usual, there were only a handful of interesting presentations per session for me (at most) so it was easy enough to focus on these. It was also possible to be in several different chat sessions at once, which you can’t do so easily with physical presentations! The structure made it more feasible to focus on whatever piqued our interest, and jules in particular spent more time at those sessions she does not usually get around to attending because they are outside of her main focus. Some convenors grouped presentations into themes and discussed 3-5 of them at a time, for longer. Some naughty convenors thought they would be clever and organise videoconferencing sessions outside of the EGU system, which actually worked pretty well in practice for those (probably a large majority to be honest) who could access it, but not so good for those who had access blocked for a number of reasons. Which is probably why the EGU didn’t organise this themselves. Whether it is actually preferable to the on-line chat is a matter of taste.

Jules was co-convening a couple of sessions and the convenors set up a small zoom session on the side to help coordinate, which added to the fun. A bit of personal chat with colleagues is an important aspect of these conferences. Her presentation is here and outlines some early steps in some work we are currently doing – an update to our previous estimate of the LGM climate, which is now getting on for 10 years (and two PMIP/CMIP cycles) old. I think we should probably find it encouraging that the new models don’t seem very different, though it may just mean that they share the same faults! There is some new data, perhaps not as much as we had hoped. And the method itself could do with a little bit of improvement.

I had actually found it a bit difficult to find the right session for my work when originally submitting it. It didn’t seem to quite fit anywhere, but in the end it turned out fine where I put it. The data assimilation stuff was a little less interesting methodologically speaking, perhaps because it’s a sufficiently mature field that everyone is just getting on with the nuts and bolts of doing it rather than inventing new approaches. I did get one idea out of it that I may end up using though, and this from the Japanese looks absolutely incredible from a technological point of view – nowcasting cloudbursts over Tokyo with a 30 second update cycle! With the extra year they’ve now got, it will probably be operational for the Olympics.

Jules and I also co-authored Martin’s work with us on emergent paleoconstraints which we were originally going to present for him as he wasn’t planning to attend. But, with the remote attendance he ended up able to do it himself which was a small bonus.

Best of all – no coffee queues! Well that and not needing to schlep out at 8pm looking for dinner each night…which is fun but gets pretty tiring by the end of the week. On the downside, we had to buy our own lunches rather than gatecrashing freebies all week like we usually (try to) do.

As for the future…well, it seems pretty embarrassing that it took current events to force the EGU into moving on-line. Some of us have been pushing them on this for years and it’s always been met with “it’s too complicated” by the powers that be. I suspect they mostly like the idea of being in charge of a huge event and enjoy hobnobbing at all the free dinners (don’t we all!) but that doesn’t justify forcing everyone to fly over there and spend at least €2k – probably rather more for most – to take part. It’s a huge amount of time, money, and carbon and we really ought to do better. If one good thing is to come out of the current mess, it might be that people finally wake up to the idea that working remotely really is fully feasible these days with the level of communication technology that is available. Blue Skies Research has been living your future life for more than 5 years now, and it’s great! Roll on next year. I know that turning up has added benefits, and don’t expect all travel to stop. But with remote access, people can easily “go” to both the AGU and EGU each year, dropping in to the bits that interest them, without having to devote a full week and more to each, with huge costs, jet-lag, the carbon budget of a small country, etc.

I expect that the AGU will want to put on a better show this December. Even if travel is opened up by then (which I wouldn’t be confident about at this point) I doubt this will happen quickly enough for the event to be organised in the usual manner. It will be good to have a bit of friendly rivalry to spur things on. In recent years, the AGU has generally been ahead of the EGU in terms of streaming and remote access – last December we watched a couple of live sessions and even asked a question (via text chat) though we were lucky that the small selection of streamed sessions included stuff of interest to us. The EGU has tended to put up streams of just a few of the public debate sessions rather than the science, and this only after the event with no opportunity for direct interaction. Bandwidth is a problem for streaming multiple sessions from the same location, but maybe even an audio stream with downloadable material would work? One thing is for sure, back to “business as usual” is not going to be acceptable now that they’ve shown it can be done differently.

Here’s Karlskirche which I hope to see again in the flesh some time.


Coincidentally, just a few days after the EGU I took part in this one-day webinar. It had a bit of the same sort of stuff – I presented the same work again, anyway! This was a zoom session which worked pretty well; there were one or two technical problems, but you usually get those in a real conference anyway, with people plugging their laptops into the projector. It was great to have people from a range of countries attend and present at what would normally have been a local UK meeting of CliMathNet people. I have never quite managed to attend any of these before because they always seemed like a long way to travel for a short meeting that mostly isn’t directly relevant to our research. I expect to see a rapid expansion of remote meetings of various types in the future.

Tuesday, May 19, 2020


There I was, thinking I was typing into the void...and it turns out the comment notification had got turned off so I hadn't seen them. As well as lots of unread comments, there were quite a few stuck in moderation (it's off by default, but I think that goes on automatically after a period of time).

I am having a look back but if I've missed anything specific please copy and post again so I notice. For the most part it looks like you've answered each other which is helpful :-)

Monday, May 18, 2020

Strategy for a Pandemic: The UK and COVID-19

Sir Lawrence Freedman (a member of the Chilcot Inquiry) has written a review of the UK Govt's response to the coronavirus outbreak, which can be found here.

He explains his motives thusly:

"The inquiry into the United Kingdom’s role in the 2003 Iraq War, of which I was a member, took the view that when inquiring into a contentious area of policymaking, an essential first step was compiling a reliable account. This should be influenced as little as possible by the benefit of hindsight. This article attempts to provide a preliminary account of the development of UK strategy on COVID-19, from the first evidence of trouble in Wuhan in early January to the announcement of the full lockdown on 23 March. As policy-makers claimed to be ‘following the science’, this requires an analysis of the way that the expert community assessed the new coronavirus, the effects of alternative interventions intended to contain its spread and moderate its impact, and how experts’ findings were fed into the policymaking process. It is preliminary because, while there is good material on both the policy inputs and outputs, the material on how the policy was actually made is more speculative."

It's an interesting read, but while reading it I can't help but think of Orwell's aphorism:
"Who controls the past controls the future."
Here is an interesting snippet in which there seems to be a very clear and perhaps important misunderstanding of the time line. Freedman says on p52:

"By that time, the strategy had already begun to shift. Hours after the COBRA meeting, on the evening of 12 March, SAGE met again to hear from Professor Ferguson on the results of his group’s latest modelling. The conclusions, which were made public on 16 March, were startling. What had made the difference was evidence from Italy suggesting that the R0 was more like 3 than 2.5 and, most importantly, that previous estimates of intensive-care requirements had been optimistic."

The paper itself is of course published and uses an R value of 2.4 in the main analysis of mitigation scenarios, with a range of 2.0-2.6 in sensitivity tests. The oral hearing of the Science and Technology Committee that Freedman cites as the source of his information took place on Wednesday 25 March 2020 and can be found here. Ferguson is on at 10:15 onwards, with the relevant comments about R0 right at the end of his segment around 10:55. He says rather disingenuously that the new estimate for R0 of around 3 is "within the wide range of values" that had been considered by modelling groups. Certainly not his, and when you take the doubling time into account, it is very much at the edge of Kucharski's work too.

I think Ferguson is on very dodgy ground indeed in so blithely dismissing this discrepancy in front of the Select Committee, as it is critical to the question of how soon and how aggressively we needed to deal with the epidemic. Note that the doubling time (which is what really matters here) depends not only on R0 but also on the reproductive time scale of the virus. In fact, as I have documented previously, the SPI-M advice specifically pointed to a 5-7 day doubling time as late as the 18th March, at which point they were considering a lockdown for London (only). It was only at the meeting of the 23rd, long after the 12 March date that Freedman refers to, that SAGE learnt of the change of the estimate to 3-5 day doubling, and the lockdown was ordered that same evening. I am no friend of the Tories and there are lots of things they did badly, but specifically in terms of reacting to the abruptly and radically updated scientific advice, their response seems exemplary here.

Also, on p58:
"Given the known sequence for infection, incubation, hospitalisation and death, it is reasonable to conclude that changes in behaviour were having an effect well before 23 March, especially in London."

This may be possible but does not seem necessary. I'm not just drawing on my own modelling here; Flaxman et al consider all the interventions and also find that the lockdown had by far the largest effect on the epidemic, with the other earlier interventions being very minor influences in comparison. Their latest estimate shows R0 dropping from 3.9 to about 3.5 during the week prior, then collapsing to about 0.7 on the 23rd, very similar to my own estimate (as I've discussed before, their slightly larger initial and lower current values for R can probably be attributed to a longer serial interval of 6.5d in their model compared to about 5.5d in mine). Here are both of our latest results, mine as the top plot and theirs in the following two:

Freedman's rosy assessment from p57 onwards of the NHS coping may not be shared by all, particularly the large number of victims who were shut out by the NHS and sent out into the community to die in care homes while infecting many others, with both NHS and care home staff also inadequately protected. If the NHS really had capacity, why did this happen? I know he refers to this subsequently, but doesn't seem to make the connection. "Coping" by refusing treatment to large numbers of sick and dying people isn't really coping, is it?

Anyway, it's an interesting read.