Wednesday, March 25, 2020

Dominic Cummings

Dominic Cummings is a stupid person's clever person.

That's it. That's the tweet.

The long version is, he's a stupid person's clever person because he has read some popular science and juvenile edgy libertarian blogs and can regurgitate great soundbites that impress people who have no clue what he is talking about, nor how science or technology works. 

Unfortunately, such people are in positions of power, meaning that they listen to him and think he's really great. Your average senior civil servant is also probably quite clever, but a history or PPE degree doesn't equip them to deal with him. I know he's a history graduate himself, and that makes his interests unusual. But there's no evidence that he understands enough about what he speaks (and blogs, at length) to really make use of it. His wish for more technical ability in government is quite possibly correct, but weirdos and misfits aren't the best choices here. These people would need the ability to interface with politicians and the real world beyond the wild imagination of their (typically wrong or impractical) theories. There are very many talented scientists with great technical skills, creativity, originality of thought and the social skills to inform, explain, and persuade. I think that most of them would run a mile from the prospect of working in a policy unit in Whitehall. Though there are of course channels for scientific advice to inform policy.

Actually, I really can't be bothered writing any more about this. Tedious, tedious, tedious. But I wanted to get it out there. If Cummings is the answer, then someone asked the wrong question.

I know I haven't backed up any of this with references. So sue me.

That Oxford study, in full, in brief

This post refers. And this manuscript, which has received an unreasonable amount of attention.

Here are a few simulations from the SEIR model I've been using, with different death rates (presented in the pic as deaths per case), adjusted to give the same cumulative death toll in the early phase of the epidemic. Death is assumed to lag infection by 17 days as in the Oxford study. The model used here is possibly a bit simpler than theirs in some respects but very similar in overall behaviour. I have not tried to calibrate precisely to observed data as I'm just making a simple theoretical point. By shifting the epidemic curve backwards and forwards, I can match the death toll from all of these models up to the point where it peaks, by which time we are already past the peak of the epidemic. The only scenario you can rule out from the initial exponential growth (linear in log space) is that we have already passed the peak, as in the pink curve (perhaps also the cyan), or else deaths would already have tailed off.
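This isn't my actual model code, but a minimal sketch of the kind of SEIR-with-lagged-deaths calculation described above might look like the following (all parameter values here are illustrative placeholders, not the ones used for the figures):

```python
import numpy as np

def seir_deaths(r0=2.4, latent=4.0, infectious=4.5, ifr=0.01,
                death_lag=17, i0=1e-6, days=300, pop=67e6):
    """Minimal daily-step SEIR integrator; cumulative deaths are
    IFR times cumulative infections, lagged by death_lag days."""
    beta = r0 / infectious          # transmission rate per infectious person
    s, e, i = 1.0 - i0, 0.0, i0
    new_inf = np.zeros(days)
    for t in range(days):
        inf = beta * s * i          # new infections today (fraction of pop)
        onset = e / latent          # E -> I flow
        recov = i / infectious      # I -> R flow
        s -= inf
        e += inf - onset
        i += onset - recov
        new_inf[t] = inf
    deaths = np.zeros(days)
    deaths[death_lag:] = ifr * pop * np.cumsum(new_inf)[:-death_lag]
    return pop * new_inf, deaths
```

Scaling the death rate down while shifting the start of the epidemic earlier reproduces the point above: the early portions of the death curves are indistinguishable.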

In reality we have lots of reasons to discount the lowest and highest extreme values, and we have observed a lot more than just the death toll. For example, if there really had been a large epidemic by early Feb, the first discovered cases would not all have been clearly linked to foreign travel (and each other). And, contrary to their claim that only 1 in 1000 are seriously ill, there are decent-sized areas in Italy where a higher proportion than this are already dead. All this research shows is that the death toll doesn't tell us how big the epidemic is (and therefore how high the mortality rate is) until after we've passed the peak. Rightly, it is getting a drubbing on Twitter. It's all very well playing mathematical games like this on an idle afternoon, but I think it's deeply irresponsible to have pushed this out to the press, as it suggests a level of uncertainty and disagreement among experts that simply isn't there.

Tuesday, March 24, 2020

A few more thoughts about parameter estimation and uncertainty in epidemic modelling

Last night's post was a bit rushed and devoid of analysis. I'm not surprised to see there have been some other recent model-fitting investigations by recognised researchers in the field, including this one. The model they used is in fact marginally simpler than the one I was considering, in that it does not have a latent period, but on the other hand they explicitly model death and present their equation. Which is quite handy, as I think I should be able to add this to the SEIR model I'm using :-)

Something I should have pointed out yesterday. If all you have is data from the initial exponential part of an epidemic curve, then it's a straight line (in log space) and only constrains two degrees of freedom - a growth rate, and a magnitude. I had 4 uncertain parameters, 3 biological (which jointly determine the rate) and a starting value (ie magnitude). Clearly, my problem is hugely underconstrained by this data and so my priors will have played a large role in determining the results.
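To make the two-degrees-of-freedom point concrete: in the linearised SEIR equations the early growth rate r satisfies (1 + rL)(1 + rD) = R0, where L and D are the latent and infectious periods, so quite different parameter combinations produce exactly the same exponential curve. A quick check (my algebra, not taken from any of the papers mentioned):

```python
import math

def growth_rate(r0, latent, infectious):
    """Early exponential growth rate r for a SEIR model, from the
    linearisation (1 + r*latent)(1 + r*infectious) = R0."""
    a = latent * infectious
    b = latent + infectious
    # positive root of a*r**2 + b*r + (1 - r0) = 0
    return (-b + math.sqrt(b * b + 4 * a * (r0 - 1))) / (2 * a)

r1 = growth_rate(2.4, 4.0, 4.5)
# a quite different parameter triple, tuned to give exactly the same rate:
r2 = growth_rate((1 + 3.0 * r1) * (1 + 5.0 * r1), 3.0, 5.0)
```

Both parameter sets produce identical straight lines in log space, so the data cannot distinguish them; only the priors can.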

Worse, if the data are likely to be under-reported by a large but unknown factor, then we can't constrain the magnitude at all. This is quite likely the case when we look at reported cases of illness, as the most mild cases may never be discovered. In the UK recently, many very likely cases are not tested at all unless the patient becomes seriously ill. If we include in our estimation process an under-reporting factor which is itself unknown, then the magnitude of the epidemic will also be unknown (ok, pedants will note it is bounded below, but probably by an unreasonably low value). This latest paper above and the Ferguson et al research both used deaths to calibrate their models. Death data are probably much more secure than cases of illness, but the relationship between infection and death is again highly uncertain. By assuming a very low death rate, we can estimate the current infection level to be high (so the epidemic is widespread but probably not so harmful), and conversely a high death rate means that current infection level is low and we are in for a bumpy future.

As far as I can see, the Oxford group has basically played this game by introducing a parameter to represent the proportion of the population which is susceptible to a severe form of the disease. This parameter acts as a simple scaling factor in the death equation for their model. When this is set very small, it means the epidemic is very large, which also means it started earlier than thought (though given the nature of exponential growth, not necessarily by an unbelievable margin). When the parameter is large, the epidemic is currently small but will get much more serious.

These seem to be fairly fundamental road-blocks to doing any more detailed parameter estimation, so all results will necessarily be highly dependent on prior assumptions. Even the most elementary model has more degrees of freedom than can be constrained by time series data in the initial phase. It gets better once we are past the peak, as we know the proportion infected is then a substantial part of the population, and this will also help to get a handle on the fatality rate (though the lag means that could take a little longer). But by then it's too late to do anything about it.

Studying sub-populations that have already experienced the disease is one thing that may help. If the death rate is as low as one end of the Oxford paper suggests, how did 7 people die on that cruise ship (out of 700 cases, where there was regular testing and the total number of people was about 3500)? Bad luck, or were they just a particularly unhealthy bunch? And does their fit to Italian data (which implies that the epidemic is basically over) work for Lombardy as opposed to the whole country? Enquiring minds etc...

Edit: OK according to Wikipedia, Lombardy has 60% of the deaths but only 10% of the population. If the Oxford model (with low-death parameter) was fitted to this, I'm sure that it would put them right at the end of their epidemic with people no longer falling ill at much of a rate. However, despite the lockdown having been in effect for a while, there are still a lot of cases being reported. I say that refutes their idea, though it would take a proper calculation to be sure.

A test of Vallance's claim about cases?

Not so long ago, Vallance claimed that there could be about 1,000 cases of COVID-19 for every current death in the UK. Now, the Italian death rate (as a percentage of reported cases) is extremely high, and it might also be high as a percentage of true cases including unreported ones. But if Vallance's estimate applies there too, the 7000 deaths so far would imply 7 million cases, a little over 10% of the population. This is just about enough to significantly bend the exponential curve, as it implies an effective reproduction number of under 90% of R0 (ie, something more like 2.3 new infections per case rather than 2.6).
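The arithmetic, for the record (Italy's population rounded to 60 million; these are my round numbers, not anyone's official figures):

```python
deaths = 7000                    # Italian death toll at the time of writing
cases_per_death = 1000           # Vallance's claimed ratio
population = 60e6                # Italy, roughly

cases = deaths * cases_per_death      # implied current cases
attack = cases / population           # fraction already infected
r0 = 2.6
r_eff = r0 * (1 - attack)             # effective reproduction number
```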

Of course it would be a bit longer before that fed through into the observed fatality rate and with reported cases clearly under-counting by a huge factor, it's not something we could know about for a while. The slight slowdown in death rate being observed now cannot possibly be evidence of this theory yet.

Kucharski estimates a factor of 20 in the under-counting, which would mean about a 2% incidence rate, still a couple of doublings away from that point.

Monday, March 23, 2020

Uncertainty in the COVID-19 model

Time for James to Speak to the Nation. Well, Speak to my Reader at least. Rather than subjecting myself to the waffling windbag, here are some fun experiments in parameter estimation that I've been doing.

The setup is very simple. The SEIR model I've been using has 3 main internal parameters, being the reproduction number R0, the latent period, and the infectious period. They can all be estimated to some extent from case studies, but the appropriate numbers to use in a model are not precisely known. Beyond these parameters, the only remaining tunable value is the number of infected people at the start date (this date isn't worth changing itself as you just go up and down the exponential curve). It seems that Ferguson et al picked the biological numbers from the recent literature and then chose the start value to hit a given number of deaths from their simulations on a date in mid-March. No uncertainties are reported in their analysis, though they have used a few different R0 values in some scenarios. So I think it's worth investigating how much the parameters could reasonably vary, and how much difference this might make.

I've just done a very simple tuning, fitting to values at the start of Feb and 20th March. Since the data are essentially exponential there is little point in using more intermediate data. All it can tell me is the rate of growth and its current level.

Priors for the model parameters are chosen rather arbitrarily by me, based on my previous tuning.
R0 ~ N(2.4,0.3)
Latent_p ~ N(4,1)
Infectious_p ~ N(4.5,1)
Start ~ exp(N(-20,1))

All uncertainties quoted at one standard deviation.
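Generating the prior ensemble is straightforward; a sketch of how the sampling above could be done (the seed and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# priors as stated above, all quoted at one standard deviation
r0       = rng.normal(2.4, 0.3, n)
latent_p = rng.normal(4.0, 1.0, n)
infect_p = rng.normal(4.5, 1.0, n)
start    = np.exp(rng.normal(-20.0, 1.0, n))  # initial infected fraction, lognormal
```

Each row of (r0, latent_p, infect_p, start) then defines one model run in the prior plume.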

Sampling from this prior gives a wide range of possible epidemics, as shown below. The blue plume is the 5-95% range with the median line also drawn. This plume may be a bit misleading as it seems to include the possibility of no epidemic at all! Actually what is happening here is that the different parameter sets lead to different timings of the peak, so at any moment in time quite a lot of them have either not yet started, or already ended. I've plotted a handful of the individual curves as red lines to show that. In fact all parameter sets do lead to a large epidemic, which is an immediate and unavoidable consequence of the R0 value.




Now for the tuning.

For the data....I assume a significant amount of under-reporting in the official case numbers, as discussed by many. Kucharski today suggested the true value (and note this is even for symptomatic cases) was about 16x the reported values, though I'm not sure if this applies from day 1. Vallance recently made the heroic claim that the number of cases might be 1000 times the number of deaths at any given time, presumably in a desperate attempt to pretend that the mortality rate is really low, as that's what he staked his advice on. I guess if you include the newly infected, it's reasonable. My constraint on March 20 (which I'm taking as relating to the total symptomatic and recovered cases) doesn't include his value, but ranges from 22k-162k (median of 60k) on the day when the official number was 4k and deaths were 177. My early Feb constraint is centred on 20, with a range of 7-55, mostly just because the log of 20 is very close to 3. On reflection that range is probably a bit high, but note that it has to account also for further imported cases (which the model does not simulate) rather than just home-grown ones. The initial growth is exponential so it only really makes sense to apply these constraints in log space. A little simplistically, I assume uncorrelated errors on these two data points. In reality, the degrees of under-reporting might be expected to be somewhat related.
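These two constraints can be applied by simple rejection sampling. Purely as a sketch, here is my own stripped-down stand-in for that calculation: the full model is replaced by its linearised exponential growth (rate r satisfying (1 + rL)(1 + rD) = R0), and the dates and target numbers are the ones quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# priors as given above; latent/infectious periods clipped to stay positive
r0  = rng.normal(2.4, 0.3, n)
lat = np.clip(rng.normal(4.0, 1.0, n), 0.5, None)
dur = np.clip(rng.normal(4.5, 1.0, n), 0.5, None)
lstart = rng.normal(-20.0, 1.0, n)        # log initial infected fraction

# early exponential growth rate from the SEIR linearisation
b, a = lat + dur, lat * dur
r = (-b + np.sqrt(b * b + 4 * a * (r0 - 1))) / (2 * a)

pop = 67e6
log_feb = lstart + np.log(pop) + r * 31   # modelled log cases, 1 Feb (day 31)
log_mar = lstart + np.log(pop) + r * 79   # modelled log cases, 20 March (day 79)

# log-space constraints: ~20 cases in early Feb, ~60k on 20 March (1 sd = factor e)
loglik = -0.5 * ((log_feb - np.log(20.0)) ** 2 + (log_mar - np.log(60e3)) ** 2)
accept = rng.random(n) < np.exp(loglik - loglik.max())
posterior_r0 = r0[accept]
```

The accepted samples form the posterior ensemble; note how the two constraints mainly pin down the growth rate and magnitude jointly, rather than any single parameter.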

The plot below shows the prior and posterior spread over the initial segment on a log scale. The two red Hs are the constraints I've used, with red circles at their means; the dark circles below are the actual reported numbers. My pale blue prior has a broad spread, and the green posterior (just tuned by rejection sampling, nothing fancy) focusses nicely on the plausible range.



And what about the projected epidemic? Here we have it:



Compared to the prior, it's greatly constrained, with the timing of the peak ranging from about mid-April to the end of May. Again, the lower limit of that plume is made up of the start of late epidemics and the end of early ones; it doesn't represent a realistic trajectory in itself.

I think one thing that can probably be concluded is that the parametric uncertainty isn't hugely important for the broad policy advice. Rather like climate science, in fact :-) In the absence of strong action, we are getting an epidemic in the next couple of months, and that's really all we need to know. Though the timing is somewhat uncertain...I may investigate that further...watch this space...

Edit: I realised overnight that the way I have calculated the percentiles was a bit clumsy and misleading, and I will change it for future work. Basically I calculated the percentiles of each model compartment first, with the various diagnostics being generated from these, whereas I should have calculated the diagnostics first, and then presented the percentiles of them. A bit of a silly error really. I'll leave it up as a testament to the limitations of doing stuff quickly.




Sunday, March 22, 2020

Mitigation vs suppression

Next thing to do is look at suppression. Vallance recently made an absolutely terrible comment that I find hard to excuse, describing the difference between mitigation and suppression as "semantic". It is not: the distinction is absolutely fundamental to the overall strategy.

The only excuse I can find for his comment is if he just meant we have to make strong efforts to reduce R0 as much as practicable in either case. But still, mitigation (R0 greater than 1) gives us an epidemic, perhaps slowed to some extent ("flattening the curve") but still spreading through a large part of the population until it burns out through herd immunity. Whereas suppression (R0 less than 1) means we control the epidemic, immediately reducing its spread from the moment controls are introduced (though it may take a week or two to see this clearly in the statistics). This threshold of R0=1 is a clear "tipping point" in the system, much more so than is apparent in just about any climate science I've seen. It is extremely implausible that a mitigation scenario would not exceed the figure of 20,000 deaths that Whitty talked about - it would almost certainly be upwards of 100,000 unless there are serious errors in the understanding of data currently available.

Here is a comparison between mitigation and suppression. At the day indicated by the vertical line, a policy is introduced which changes R0 from the base level of 2.5 (odd final digit just for convenience) to the values shown in the legend. The epidemics play out as shown. You may be a little bit puzzled that the number of visible lines doesn't match the legend.



Let's blow up the plot around the imposition of the controls. Oh there they are! All the R0 < 1 values drop away rapidly after a small delay of a few days. 



This also shows up strongly in the number of total cases over the duration of the epidemic. Mitigation (unless you get very close to 1 indeed) always ends up with a substantial proportion of the population infected. Suppression never does. They are as different as apples and kippers. By the way, the total population here is taken to be 67 million, so the R0=2.5 case gets about 90% of us. The suppression strategies are all well under 1%.
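The ~90% figure follows from the standard final-size relation z = 1 − exp(−R0·z), which gives the eventual attack rate z of an uncontrolled epidemic; this is textbook epidemiology rather than anything specific to my model, and a quick fixed-point solution looks like:

```python
import math

def final_size(r0):
    """Solve z = 1 - exp(-r0 * z) by fixed-point iteration (for r0 > 1)."""
    z = 0.9
    for _ in range(200):
        z = 1.0 - math.exp(-r0 * z)
    return z

attack = final_size(2.5)   # ~0.89, i.e. roughly 60 million of 67 million
```

Mitigation scenarios with R0 well above 1 all land on this sort of final size; suppression scenarios never get anywhere near it.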



Suppression does need to be continued basically indefinitely, however, at least until some external change to the paradigm such as a vaccine is produced. And it may be impossible to keep R0 under 1 indefinitely, due to economic realities. But at a minimum, it bears looking at carefully. It appears to be the current policy of the govt, but is not compatible with Johnson's claim that this will all be over in 12 weeks. That's what happens when you elect an incompetent buffoon to be PM. He might be able to bluff his way through a late-night essay crisis with a bit of cod Latin (haven't we all done that in our time) but when faced with a technical problem that requires attention to detail he is utterly out of his depth.

Ferguson et al describes some suppression controls (basically the current policy of widespread social distancing, plus case isolation and home quarantining) which together achieve R0 less than 1 in their model. Let's have a look at their figures and try to work out how effective they think these suppression policies should be. Their Fig 4 is the clearest guide - assuming a basic value of R0=2.2, they claim to keep a lid on the epidemic with policies in place for about 2/3rds of the time and off for the remaining 1/3 of the time. 



The plot suggests the policies need to be in place a bit more than that, I think, but let's take their word for it. I further assume that the "off" state means no controls at all, ie reverting back to R0=2.2. A back-of-the-envelope calculation suggests that the suppressed state should correspond to R0 = sqrt(1/2.2) = 0.67, but that turns out to be not quite adequate, and even R0=0.6 gives a very gradual rise over time, with the drops seen during the 20 days of controls not quite cancelling out the growth in the uncontrolled 10-day periods. I haven't bothered to implement the thresholding procedure as described in Ferguson et al, just done a straightforward 20 days on, 10 days off to see what happens.
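The back-of-the-envelope condition is just that growth over one full on/off cycle cancels out, ie R_on^t_on · R_off^t_off = 1. With the numbers quoted above (and remembering that the full model needs slightly stronger suppression than this naive scaling suggests):

```python
import math

r_off = 2.2                       # uncontrolled reproduction number
t_on, t_off = 20.0, 10.0          # days of controls on / off per cycle

# neutral-growth condition: r_on**t_on * r_off**t_off == 1
r_on = r_off ** (-t_off / t_on)   # = 1/sqrt(2.2), about 0.67

# conversely: with r_off = 2.4 and r_on = 0.65, controlled days needed per 30
t_needed = 30 * math.log(2.4) / (math.log(2.4) - math.log(0.65))
```

t_needed comes out at about 20 days in 30, a little below the 22 that the simulations require, consistent with the naive scaling slightly underestimating the suppression needed.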


The vertical dashed line is drawn 3 days after the change in policy and roughly coincides with the turning point in the epidemic, showing the lag inherent in the system. I think the lag in this model is unrealistically short, for various reasons that are quite obvious when you look at the equations; it isn't designed to simulate changes over such short time scales as a few days. It would take even longer to observe in reality due to observation, testing and reporting delays - probably at least a week to really see it.

Note that this calculation made the possibly optimistic assumption of R0=2.2. If we revert back to the original R0=2.4, and assume the suppressed state has an equivalently scaled value of 2.4/2.2 * 0.6 = 0.65 (just my simple assumption, but it seems a reasonable starting point), then in order to keep a lid on things we need to impose the controls for 22 days in every 30-day period and only have 8 days off, which is what is shown below. Or you could perhaps say 3 weeks on, one off. That's 3/4 of the time rather than 2/3rds. On the other hand I'm sure I saw someone estimate R0=0.3 for Wuhan during the lockdown. I can't find a link to that right now, though, and it did require very draconian controls that we probably wouldn't tolerate.
Perhaps it would be more realistic to find a more sustainable approach which just kept moderate control, maintaining R0 ~ 1 indefinitely. Even R0=1.3 would be a huge relief and buy us many months according to my top plots. That's one for the behavioural policy experts and epidemiologists jointly to work out. I have to say I'm not optimistic about the chances of the current govt and population collectively being able to control the epidemic indefinitely, though I could imagine that widespread cheap and fast testing (for the virus, or better still, for antibodies) would radically change policy options and make control much easier to achieve.


Saturday, March 21, 2020

Calibration and some interventions

In this post I'll try to do some basic calibration of the House model that I played with in the previous post. And then try a small experiment in which I vary the timing of some interventions.

I'm not trying to do anything properly scientific here, at least not yet. However, in the same way that simple climate models can be used for insight and exploration much more effectively than full GCMs, I am hopeful that this simple modelling can be informative (at least to me!) in a similar manner. Rather than trying to use real data, I'm calibrating the model against the Ferguson et al results since they simulate the full epidemic and also describe some of their model parameters.

I start with the base scenario of no interventions. I have tuned the latent and infectious periods a little to match the Ferguson et al comments reasonably; eg they say it takes 5.1 days to go from initial infection to becoming symptomatic, but people are infectious from one day prior to this, so I use 4 days as the latent period. They also say their model has an average generational time scale of 6.5 days (I assume this means the time taken to get a factor R0=2.4 increase in cases), which requires a fairly short infectious period of 4.5 days in my model.

The remaining tunable parameter is just the initial state, which they say was tuned in their model to achieve the right number of deaths by 14 March. This seems an extraordinarily sensitive target (relying as it does on both the infection date and time to death of a mere handful of people with the most severe underlying health conditions) but on the other hand at least it's probably a fairly secure data point. The alternative of targeting reported/estimated infections etc is subject to uncertainty as to what that actually represents and how well it's measured. Anyway, this model does not explicitly simulate deaths, but an initial condition of i0=2e-9 in the model on 1st Jan gives reasonable outputs. This means we have about a quarter of an infected person at the start of Jan, or equivalently 10 infectious cases (plus another 15 exposed, latent) at the start of Feb. While these values may seem just a little high, don't forget that as well as under-reporting, there will have been ongoing import of cases from abroad since that date which this model does not account for. It's not a ridiculous number at any rate.

With these model parameters, we get almost 10,000 infectious cases today (21st March), or 24,000 including latent cases. In the base uncontrolled case, the epidemic peaks around 20 May in terms of the infectious cases. To enable easier comparison with Ferguson's Figure 1 I have plotted the infectious curve, shifted by 21 days, to roughly simulate the shape (but not the size!) of their death curve. This is based on the values of 5 days from symptoms to hospitalisation and a typical stay of 16 days, presented in their report. I'm not expecting close agreement but hoping to be in the right ballpark. And it doesn't look too far off to me, by eye. The vertical grid lines are drawn at the 20th of each month (which, just to be clear, start at the appropriately labelled tick marks, ie the month names are on the 1st). It took some effort to produce plots quite as awful as the Ferguson et al report, which might be generated in Excel perhaps?
Above plot is mine, below is Fig 1 from Ferguson et al

I'm actually really impressed by how well this simple model agrees!

Next up, let's explore the interventions, which were applied in the Ferguson et al model (Figure 2) for a period of 3 months from April 20 to July 20. My main reason for doing this is that I was interested in exploring what sort of change to the basic R0 values they represent, as the report didn't mention that. I'm not trying to match each curve precisely, but am just interested in the general range of values that they represent. Based on my results, it seems that the more aggressive controls represent quite a change, from R0 = 2.4 to perhaps as low as R0 = 1.65 (black curves, various line styles).
It is noticeable that the difference between the two lowest curves of Ferguson et al (light brown and blue), which differ only in the addition of social distancing for the over-70s, is primarily in their height rather than shape or timing. I reckon the reason for this is that the R0 value hasn't actually changed much between them, so the overall dynamics of the epidemic haven't changed, but the relationship between the number of cases and of critical care demands has altered due to fewer cases occurring in this particularly vulnerable older group. Therefore I'm not surprised that I haven't quite managed to match this largest drop. Decreasing R0 still further below 1.65 in the House model delays and spreads the peak in a manner that doesn't match that curve.

This plot also answers a question I was asked on Twitter: what is the effect of bringing forward the start date for these interventions to today, rather than waiting a month as was assumed in the report? The answer turns out to be quite straightforward. Starting a month earlier delays each epidemic, but does not change the overall shape. With hindsight this is obvious, though I didn't realise it before doing the simulations. The early intervention just slows the growth over the initial exponential part (compared to the base case), with the magnitude of the delay depending on the magnitude of the reduction in R0 - up to almost a month in the case of the strongest interventions. Of course interventions have a cost and I've made no attempt to evaluate this.
Above is mine, below is Ferguson et al Fig 2

Enough for now. A simple message is that starting controls sooner rather than later delays the epidemic, without doing much else.

Friday, March 20, 2020

Timing is nothing

Recently mathematical epidemiologist Thomas House published a nice blogpost (see link) in which he argued that it was important to time population control measures in epidemics such as the current one. He presented a simple example in which acting too early in the epidemic would give worse results, just like acting too late. He helpfully presented his model code which I've translated into R and will be using here. (I haven't included the code as I can't work out how to do it neatly like he did on this blogging platform.)

My aim here is to critically examine his example. I believe that while it is technically correct, it has the potential to be catastrophically misleading.

His model is basically a SEIR model, with the acronym representing susceptible, exposed, infectious, resistant/recovered (sources seem to differ on what the R really represents). Exposed means those who are infected but not yet infectious due to a latent period, which is 5 days in this case. Thomas seems to have implemented this model with two consecutive stages for each of the exposed and infectious phases, giving 6 boxes in all. I'm guessing that this is in order to model the delay between the stages better than a simple 4-box scheme would, but I could well be wrong about that. A detail that most readers probably don't care about anyway. The figures below replicate the ones Thomas plotted, so I've clearly replicated the model adequately.
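The six-box structure can be sketched as follows. This is my reconstruction from the description, not Thomas' code; the infectious period in particular is my guess, since only the 5-day latent period and R0 = 2.5 are quoted:

```python
import numpy as np

def seir_two_stage(r0=2.5, latent=5.0, infectious=3.0,
                   i0=1e-6, days=300, dt=0.1):
    """SEIR with two sequential sub-stages for each of E and I
    (Erlang-2 residence times), six boxes in all."""
    beta = r0 / infectious
    s = 1.0 - i0
    e1 = e2 = i2 = 0.0
    i1 = i0
    traj = []
    for _ in range(int(days / dt)):
        i = i1 + i2                        # total infectious fraction
        new = beta * s * i * dt
        # each sub-stage has half the mean residence time of the full stage
        k_e, k_i = 2.0 / latent, 2.0 / infectious
        f1, f2 = k_e * e1 * dt, k_e * e2 * dt
        g1, g2 = k_i * i1 * dt, k_i * i2 * dt
        s  -= new
        e1 += new - f1
        e2 += f1 - f2
        i1 += f2 - g1
        i2 += g1 - g2
        traj.append(i)
    return np.array(traj)
```

Splitting each stage into two sub-stages makes the residence times less dispersed than a single exponential box, which is exactly the better-modelled delay speculated about above.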




Thomas' scenario which I have replicated here is where we start with a baseline of an uncontrolled epidemic, and have the option to implement a 3-week lock-down at the time of our choosing, that temporarily reduces the reproductive rate R0 from 2.5 to 0.75. The top row of plots is a zoom into the early stages of the epidemic, the second row is the full thing. Thomas argued that timing the intervention was critical and in particular that it should not be implemented too early, because getting it right is crucial to minimising the peak. See the different lines in the left hand plots and especially that the maximum of the red curve in the lower left hand plot is lower than the others. His claim is true but I would argue that in our current situation it is also potentially misleading to a catastrophic degree.

The next graph demonstrates my point. It shows an estimate of the critical cases arising from the epidemic, which I take to be 2% of infectious cases at any moment in time. This is a very rough estimate, but the value must surely be significantly greater than the overall mortality rate, which is generally expected to be about 1% (when patients are properly treated). The maximum capacity of intensive care beds in the UK is about 5000 (from the Ferguson et al Imperial College report), and I've plotted this as the cyan line.




Oops.

I hope you all see the problem here. The red peaks may be lower than the others, but none of the model results are in the same postcode, let alone ballpark, as the capacity. If capacity were some 20 times higher (dashed line) then Thomas' analysis would be entirely correct and important. But actually, "let's wait a month or two before we do anything" is in this example a catastrophic policy, and although it's still technically true that a greater proportion of cases are properly treated in his scenario (ie the area under the cyan line and also under the red curve), the vast majority are not. In the base case, 7% are treated and this rises to just over 10% in the best case. Well, while those additional 3% of victims would surely appreciate it, it's a bit like taking a band-aid to a trauma ward and fussing over where to optimally apply it.
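Figures like the 7% and 10% can be computed by comparing areas: the treated fraction is the area under min(curve, capacity) divided by the area under the curve. A tiny illustration with toy numbers, not the model output:

```python
import numpy as np

def treated_fraction(critical_cases, capacity):
    """Fraction of critical cases falling under the capacity line."""
    c = np.asarray(critical_cases, dtype=float)
    return np.minimum(c, capacity).sum() / c.sum()

# toy daily critical-case curve, with a capacity of 2
frac = treated_fraction([1, 2, 10, 2, 1], 2)
```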

I believe a more appropriate conclusion to draw from this analysis is that the short sharp shock of a temporary lock-down is basically useless as a strategy, and we need to develop a better solution. My intuition suggests to me that such a better strategy should be implemented as soon as possible rather than waiting for the epidemic to grow first, as I'll investigate in future posts.

This model appears to give somewhat worse results than the Ferguson et al report, which uses a vastly more detailed model with lots of different sub-populations rather than just one homogeneous bunch of people. Their results suggest only exceeding capacity by a factor of about 20 in the case of no action, versus the 40 here (ie the dashed cyan line would have to be twice as high again to cover the blue peak). Maybe they expect a higher proportion of isolated older people to escape the disease. Anyway I do not expect this model to provide precise and accurate answers, but it does look like a broadly reasonable tool.


Wednesday, March 18, 2020

How they got it so wrong - a theory

Think I will restrict the coronavirus stuff to appearing here, it's not really our professional work at blueskiesresearch.org.uk and I only posted there because of the Rmarkdown to Wordpress publishing thing which isn't actually as useful as I'd hoped it would be.

Anyway.

Recent events have got me thinking - how could the Govt have got it so wrong with their COVID-19 strategy? What has changed (if anything) in the science, and their response? And what should they actually be doing? Especially in light of the research published a couple of days ago, which seemed to merely state the obvious. The Lancet editor Richard Horton has been tweeting critically (eg quoted in this post) and wrote this article in the Guardian:
Indeed, it didn’t need this week’s predictions by Imperial College scientists to estimate the impact of the government’s complacent approach. Any numerate school student could make the calculation. With a mortality of 1% among 60% of a population of some 66 million people, the UK could expect almost 400,000 deaths. The huge wave of critically ill patients that would result from this strategy would quickly overwhelm the NHS.
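For the record, the school-student arithmetic in that quote checks out, as a trivial sketch shows (the 1% mortality and 60% attack rate are of course the assumptions under debate, not established facts):

```python
population = 66_000_000   # UK population, roughly
attack_rate = 0.60        # assumed fraction of the population infected
mortality = 0.01          # assumed 1% death rate among those infected

deaths = population * attack_rate * mortality
print(f"{deaths:,.0f} deaths")   # 396,000 deaths
```

Hence "almost 400,000 deaths".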
The UK has a long-standing pandemic plan - specifically for flu, but while COVID-19 is technically not flu, it is a respiratory disease with strong similarities. The UK has gone through a number of flu pandemics, including the one in 1957, which many of the older generation (including my mother) remember well.

It is explicitly baked into the strategy that
"Stopping the spread or introduction of the pandemic virus into the UK is unlikely to be a feasible option" (Appendix 1, here).

While the plan does mention as many as 750,000 deaths, it may be that people didn't really think that sort of figure was plausible. A few tens of thousands maybe, a bit worse than a normal bad flu year, but hardly existential. So, the basic paradigm seems to be, the govt only ever considered the possibility of the epidemic running through the population, killing as many as it happened to, according to its specific parameters for spread and lethality. Strategies for controlling the epidemic at a lower level, to eliminate it before it burnt itself out, just weren't in their solution space.

Hence, “taking it on the chin” and “herd immunity”. That's just what we do, cos that's what we have always done, cos that's how these things work. The pandemic plan considered the necessity of dealing with the excess deaths, and things like how to reduce the spread to some extent (social distancing and cocooning of particularly vulnerable people are beneficial strategies, albeit with limited effect). But it just never considered whether it might be appropriate to stop the disease entirely.

But actually, when you do the sums, you see that a death rate of 1% (and I'm getting bored of pointing this out, but this estimate is only valid when the incidence of illness is low and victims get good healthcare!) occurring over a time frame of a couple of months is horrific and will see piles of bodies in mass graves, with the NHS totally overwhelmed in the meantime. And we also see from other countries that an alternative approach is possible, one in which stricter controls on social mixing, combined with more aggressive testing and quarantining of contacts of cases, can actually control the epidemic. The key is whether we can get the reproduction number of the virus below 1 and keep it there on a sustainable basis (strictly, R0 is the value in a fully susceptible population with no interventions; what control measures push down is the effective number R). If we can, then we can beat the disease with only a few hundred or thousand deaths, rather than hundreds of thousands.
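A toy generation-by-generation count makes the R = 1 threshold concrete. This is not the SEIR model from the earlier posts, just a sketch with an arbitrary seed size; it ignores depletion of susceptibles, which is fine while prevalence stays low:

```python
def total_infections(R, seed=1000, generations=52):
    """Total infections after summing successive generations at fixed R.
    Ignores susceptible depletion, so only realistic when R is below 1."""
    total, new = seed, seed
    for _ in range(generations):
        new *= R
        total += new
    return total

# Below 1, the outbreak fizzles out at roughly seed / (1 - R) cases:
print(round(total_infections(0.9)))     # about 10,000 cases in total
# Above 1, it grows exponentially until susceptibles run out:
print(f"{total_infections(1.2):.2g}")
```

With a 1% death rate, the R = 0.9 outbreak above costs on the order of a hundred lives; the R = 1.2 one is only stopped by herd immunity, which is the hundreds-of-thousands scenario.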

So that's my theory as to what “changed”. It wasn't the science of the disease. It was the demonstration of an alternative solution, one in which the epidemic is controlled at the outset. If the epidemic had simply overwhelmed China first, before spreading across the world like a dark cloud, swamping nation after nation as it did so, maybe we would have all accepted that a moderate percentage of the population were going to die, and the rest of us would pick up the pieces afterwards. But we've now seen that this is not inevitable, and it seems awfully defeatist to not even try to do better.

BlueSkiesResearch.org.uk: Snap

A week ago, I blogged the graph on the left. Last night, one of the epidemiological modelling teams advising the Govt published the graph on the right. Billed as "new research", it has formed the basis of the new Govt guidelines on social distancing to deal with the coronavirus pandemic. Apart from the choice of colours, probably the most significant difference is that I explicitly considered the possibility that we might increase our healthcare capacity over time (the rising green line in the left plot), whereas the modellers at Imperial College did not (the flat red line near the x-axis).

The contrast between these two sets of projections of the epidemic, and the fatuous “flatten the curve” plots that I discussed here is stark. While my calculation was relatively trivial, it’s good enough to indicate that hoping to “flatten the curve” sufficiently to cope with the full progression of an epidemic is foolish at best.

It's very curious to me that the IC research has been billed as "new science" that justifies a new approach from the Govt. All the underlying data has been known for weeks, months even. The Editor of the Lancet is one of those who has been particularly vocal in pointing this out,
also here
and a horde of twitterati have been saying the same.
It would be interesting to know what has really changed. Is it just that the Govt realised that its genocidal policy of "taking it on the chin" wouldn’t be acceptable once people realised the consequential death toll?

One remaining problem with the IC research is their use of a standard mortality rate of about 1% overall, which makes no attempt to account for the obviously deleterious effect of running out of hospital beds. If 15% of cases require intensive care and it's not available, it is ludicrous to believe that only the same 1% will die.
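A crude sketch of the point, with entirely hypothetical numbers (the baseline 1%, the 15% needing intensive care, the fraction of them who get a bed, and especially the assumed death rate without that care are all illustrative assumptions, not estimates):

```python
def effective_mortality(baseline=0.01,     # death rate with full healthcare
                        icu_need=0.15,     # fraction of cases needing intensive care
                        icu_served=0.20,   # fraction of those who actually get a bed
                        untreated=0.50):   # assumed death rate without that care
    """Add excess deaths among the intensive-care cases who can't be served.
    Every parameter here is an assumption for the sake of argument."""
    unserved = icu_need * (1 - icu_served)
    return baseline + unserved * untreated

print(f"{effective_mortality():.0%}")   # 7%
```

This double-counts slightly (the baseline 1% already includes some intensive-care deaths), but the order of magnitude is the point: once capacity is swamped, the overall death rate could plausibly be several times the quoted 1%.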

It's sloppy of journalists not to query this more forensically.