Edit: To save the reader from having to plough through to the bottom, it is apparent that people in some countries are only being tested once seriously ill and indeed close to death. Hence, the time from "case" to "death" is very small, even though the time from infected/infectious may be much longer.
---
A cheery title for a cheery topic.
One issue that causes significant difficulties in assessing the epidemic is not knowing the fatality rate or case numbers with any real certainty. We do at least have a ballpark estimate for fatality rate of around 1%, probably higher if you only count symptomatic cases, but maybe a bit lower if you assume a significant number of asymptomatic ones too. I've generally been assuming 0.75% for the total fatality rate, based on 1.5% for symptomatic cases and an equal number asymptomatic. This number doesn't really affect many of my shorter-term calculations anyway though it matters for the total death toll under full epidemics.
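Just to make that arithmetic explicit, a one-line sketch (the 1.5% symptomatic fatality rate and the 50/50 symptomatic split are the assumptions stated above):

```python
# Implied overall infection fatality rate under the stated assumptions:
# 1.5% of symptomatic cases die, and an equal number of cases are asymptomatic.
cfr_symptomatic = 0.015
frac_symptomatic = 0.5
ifr = cfr_symptomatic * frac_symptomatic   # = 0.0075, i.e. 0.75%
print(f"implied IFR: {ifr:.2%}")
```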
However, there is another issue that is also extremely important in understanding and interpreting the observations, and which has not received anywhere near as much attention. This is the length of time that it takes people who die, to die. Death data are the only semi-reliable numbers we have (not perfect, but better than anything else in the UK for sure), so even if we assume the death rate is known, and therefore know how many cases there must have been at some point in the past, we are little the wiser if we don't know when those cases occurred. Given a 3-day doubling time, 100,000 cases 5 days ago and 100,000 cases 15 days ago mean very different things for today!
The time-to-die calculation in the IC work seems to be very strongly reliant on a very small data set from early in the Wuhan outbreak. The analysis of these data is presented in this paper here. I think it's a nice piece of work that does the best job available with the limited data. But it only considers 24 deaths, to which it fits a parametric curve. Here is the data set, and the fitted distribution.
The red curve is a direct fit to the data, the blue one is after adjusting for the epidemic growth (which will have biased the observed sample towards shorter times, simply because when looking at deaths on a given day, there were more new cases 10 days ago than there were 20 days ago etc). Ticks are every 10 days, ie the longest observed time to death is 21 days, and not a single death falls in the upper ~30% of the estimated (blue) distribution that lies above this value. This long tail has been created entirely by a parametric extrapolation. I have no idea what theoretical basis there may be for using a Gamma distribution as they did. It may just be convenience - I have no problem with that in principle, but it does mean that the result is rather speculative: 10% of the deaths are estimated to take fully 30 days from symptoms appearing.
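As an aside, the growth correction itself is easy to sketch: during exponential growth at rate r, a death observed today is more likely to have come from a recent onset, so the raw onset-to-death delays are biased short by roughly a factor exp(-r·d), and can be approximately un-biased by re-weighting each observed delay by exp(r·d). This is only an illustration of the idea, not the paper's actual estimation method, and the delays below are made-up placeholders, not the 24 observations:

```python
import numpy as np

r = np.log(2) / 3.0   # epidemic growth rate for an assumed 3-day doubling time
# Hypothetical observed onset-to-death delays in days (placeholders, not the paper's data)
delays = np.array([4.0, 6, 7, 9, 10, 11, 13, 14, 15, 17, 18, 21])

# Long delays are under-represented during growth, so up-weight them by exp(r*d)
weights = np.exp(r * delays)
weights /= weights.sum()

# Weighted moment-matching to a gamma (mean = shape*scale, var = shape*scale^2)
mean = np.sum(weights * delays)
var = np.sum(weights * (delays - mean) ** 2)
shape, scale = mean**2 / var, var / mean
print(f"bias-corrected mean ~ {mean:.1f} days (gamma shape {shape:.1f}, scale {scale:.1f})")
```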
During the exponentially-rising part of the epidemic, this time to death can't be easily deconvolved from the death rate anyway, and it might be concealed from view by uncertainties in this parameter. However, we now have epidemics that have played out at least in part down the exponential decline of a suppressed outbreak. This exponential decline focusses attention on the long tail of the time to death distribution, as in this situation there were many more new cases 20 and even 30 days ago than there were 10 days ago.
It is a simple matter to convolve the observed case statistics (faulty as they may be) with a hypothesised time to death distribution to see what this predicts for the shape of the death time series, and look at how this compares to what was actually seen.
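A minimal sketch of that convolution in Python; the 18.8-day mean is the IC value mentioned later in the post, while the spread, the discretisation and the file name in the usage comment are my own assumptions for illustration:

```python
import numpy as np
from scipy import stats

def predicted_deaths(daily_cases, mean=18.8, sd=8.5, max_delay=60):
    """Convolve daily case counts with a gamma time-to-death distribution,
    returning predicted daily deaths up to a fatality-rate scaling."""
    shape = (mean / sd) ** 2            # gamma parameters from mean and sd
    scale = sd ** 2 / mean
    days = np.arange(max_delay + 2)     # 0, 1, ..., max_delay+1
    # probability of dying d days after being recorded as a case
    pdf = np.diff(stats.gamma.cdf(days, a=shape, scale=scale))
    pdf /= pdf.sum()
    # full convolution, truncated back to the length of the case series
    return np.convolve(daily_cases, pdf)[: len(daily_cases)]

# usage: normalise predicted and observed deaths before comparing their shapes
# cases = np.loadtxt("daily_cases.txt")   # placeholder file name
# deaths_hat = predicted_deaths(cases)
```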
In all the plots below, the red case data are expected to lead the other two curves, but the timing of the predicted deaths (black) really should match the observed deaths (blue). I have normalised the curves to the same magnitude in order to aid visual comparison. Cumulative curves are probably easier to assess by eye but I'll present the daily values too. Click to see larger; I deliberately kept them a bit small as there are so many.
Here are results from Wuhan (Hubei province actually):
and South Korea
and Italy
and Spain
There is a very clear tendency for the predicted death data (black line) to lag too far behind according to this time-to-death curve. South Korea is pretty good but the others are quite badly wrong. Deaths should still be shooting up in Italy and Spain!
As a simple test, I just arbitrarily shortened the time-to-death by using a gamma distribution with the same overall shape but 15 days shorter mean time (yes, 3.8 rather than 18.8 - I said, just an arbitrary test, and a rather extreme one). And look...
It's a startling improvement in all cases except South Korea. SK may be a bit unrepresentative with the early spread focussed among young church-goers (probably a low death rate and I'm guessing a long time to die) but I shouldn't throw out all inconvenient results just because it suits me.
I don't have an answer to this, but I think it's an interesting question. A 3.8 day mean time to death is surely unrealistically short as an average (from onset of symptoms). It may be the case that the time to get test results accounts for some of this difference - ie, the case numbers reported in these data sets are already somewhat delayed relative to symptoms developing, due to the time it takes to process and report cases. But is there really 15 days added delay here? It seems unlikely.
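For anyone wanting to reproduce the arbitrary test above: one reading of "same overall shape but a shorter mean" is to hold the gamma shape parameter fixed and shrink the scale, since the mean of a gamma is shape × scale. The shape value below is an illustrative assumption, not the fitted one:

```python
from scipy import stats

shape = 5.0                                             # illustrative assumption
long_ttd  = stats.gamma(a=shape, scale=18.8 / shape)    # mean 18.8 days
short_ttd = stats.gamma(a=shape, scale=3.8 / shape)     # mean 3.8 days, same shape
print(long_ttd.mean(), short_ttd.mean())                # 18.8, 3.8
```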
27 comments:
I did a lagged auto-correlation plot for EU+US a few days ago, here is the current 4-day lagged auto-correlation ...
https://live.staticflickr.com/65535/49741532211_7ae2be3221_b.jpg
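For anyone wanting to reproduce that kind of check, a minimal sketch of a 4-day lagged autocorrelation of a daily series (the file name is a placeholder):

```python
import numpy as np

lag = 4
daily = np.loadtxt("daily_counts.txt")    # placeholder input: one count per day
r = np.corrcoef(daily[:-lag], daily[lag:])[0, 1]
print(f"lag-{lag} autocorrelation: {r:.2f}")
```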
Currently EU+USA is the World based on doubling times ...
https://live.staticflickr.com/65535/49741732152_657e59f8ce_b.jpg
The raw dailies suggest that most countries have peaked (assuming there are no so-called weekend effects in counts). Three countries still stand out as to current (small) doubling time and current (large) body count: FR, UK and US. Cut-n-paste from this comment at ATTP's posted before coming here ...
https://andthentheresphysics.wordpress.com/2020/04/03/stay-in-your-own-lane/#comment-173908
Definitely a weekend effect in UK deaths. IMO and also mentioned by Spiegelhalter.
Basically the variability in daily values is too large to be due to random noise. An expected number of 700 dying out of a critical population of perhaps 10x that is an easy binomial calculation and it doesn't jump by much from day to day. Off the top of my head. Easy for the reader to check.
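The binomial calculation referred to, sketched out (the 700 expected deaths from a pool of roughly 10x that are the figures in the comment):

```python
import math

n, p = 7000, 0.1                 # critical pool of ~7,000, ~700 expected deaths per day
mean = n * p                     # 700
sd = math.sqrt(n * p * (1 - p))  # ~25, so day-to-day swings of hundreds aren't sampling noise
print(mean, round(sd, 1))        # 700.0 25.1
```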
I predict another small number today then a whopping rise on Tuesday. Just like last week and the week before (a bit less clearly, numbers were smaller and more variable back then). Though of course they may improve methods from one week to the next.
My estimates don't attempt to account for this, it's just a small error in the scheme of things.
What do you make of Taleb et al.'s assertion that Gamma distributions are not the right tool for this particular job?
https://www.academia.edu/42242357/Review_of_Ferguson_et_al_Impact_of_non-pharmaceutical_interventions..._
"[Ferguson et al.] ignore the possibility of superspreader events in gatherings by not including the fat tail distribution of contagion in their model. They don’t provide details in this paper, but prior works use Gamma distributions that are exponentially decaying and don’t represent fat tails, i.e. subexponential class. This leads them to deny the importance of banning them, which has been shown to be incorrect, including in South Korea. Cutting the fat tail of the infection distribution is critical to reducing R₀."
I don't believe it matters, though I could be wrong. I think Taleb is barking up the wrong tree here and possibly misinterpreting the use of the models. No-one is trying to simulate the specific details of (eg) the South Korean outbreak where one person infected many, the aim is to understand the population-level behaviour. It's kind of like saying an energy balance climate model is wrong because it doesn't include storms. Well, it would be wrong if you were trying to simulate Katrina, but that's not what people use energy balance models for.
Confirmed case numbers are very flawed, except for South Korea. Very, very flawed - as in off by an order of magnitude or likely more. GIGO.
One way to see this is the tests:positive ratio. In South Korea it is currently about 15,000 tests : 100 positive, while Spain, New York, New Jersey and Italy have 100 : 45 or similar ratios. They are mostly testing the sick to confirm infection, and not testing the very sick and the dead, or those with mild cases, as tests are in short supply. Triage effect. Deaths are very likely significantly under-reported as well.
If the test is only used on a clearly very sick person, the time to die will be shorter than if the case is identified before symptoms develop.
Washington State has a 100 : 10 ratio. Not good, but not horrid.
http://depts.washington.edu/labmed/covid19/
Also, South Korea has found well more than half of the asymptomatic cases by testing, or the epidemic would be growing. The death rate is between deaths/total_cases and deaths/recovered_cases: 186/10,284 is too low, and 186/6,598 is too high. The two numbers will be the same at the end of the epidemic, when all cases are resolved by recovery or death. Sure, there are likely asymptomatic cases missed, but not that many. The death rate is over 1% for sure.
https://covidtracking.com/static/wsj-7deed4e5103e0a2f19d1af6fce01925a.png
https://covidtracking.com/data/
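The bounding argument above, written out (the figures are the ones quoted in the comment):

```python
deaths, total_cases, recovered = 186, 10_284, 6_598
lower = deaths / total_cases   # ~1.8%: too low, since many current cases are still unresolved
upper = deaths / recovered     # ~2.8%: too high, since recoveries lag behind
print(f"{lower:.1%} <= eventual CFR <= {upper:.1%}")
```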
A ratio I didn't think about that is probably very important.
https://www.covid.is/data
Bottom of page. "Percentage of infected persons who were diagnosed while in quarantine".
If a country can quarantine and/or isolate the "likely to be infected" before there is a large chance of passing on the infection, then even testing wouldn't be needed. 53% isn't high enough, but is high enough to be significant.
All about the tipping point of R0 = 1.0
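A rough sketch of why 53% isn't high enough on its own, under the simplifying assumption that being diagnosed while already in quarantine prevents essentially all onward transmission; the R0 value is an illustrative assumption:

```python
R0 = 3.0                     # illustrative assumption
q = 0.53                     # fraction of infections diagnosed while already in quarantine
R_eff = R0 * (1 - q)         # 1.41 -- still above the tipping point of 1
q_needed = 1 - 1 / R0        # ~0.67 needed for quarantine alone to push R_eff below 1
print(round(R_eff, 2), round(q_needed, 2))
```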
Ban wet markets!
https://www.theguardian.com/world/2020/apr/06/ban-live-animal-markets-pandemics-un-biodiversity-chief-age-of-extinction
Yes Phil, I'm sure you're right, and I got a similar comment on twitter. If people are only tested when they become dangerously ill and are taken into hospital, then the time to death will be artificially short and the fatality rate high. As is seen. The hypothesis of a very short time to death is also refuted by modelling of the time series with a lockdown in several countries, as it implies an early sharp turn in numbers rather than the later smoother one that is seen. I do suspect that the 18.8 mean in the IC work is a little on the long side though and have nudged it down to 15 in my modelling. I really doubt that 10% of people are taking 30 days to die, and repeat my observation that there wasn't a single case over 21 days in the data set that it was based on.
James - Re your 1:49 pm
I doubt that Taleb is "barking up the wrong tree" completely.
He's fond of one pagers these days, but he's also into the precautionary principle, fat tails and "black swans". Another brief COVID-19 "note" of his (plus experimental HTML!) perhaps explains where he's coming from?
https://www.academia.edu/41743064/Systemic_Risk_of_Pandemic_via_Novel_Pathogens_-_Coronavirus_A_Note
The general (non-naive) precautionary principle delineates conditions where actions must be taken to reduce risk of ruin, and traditional cost-benefit analyses must not be used. These are ruin problems where, over time, exposure to tail events leads to a certain eventual extinction. While there is a very high probability for humanity surviving a single such event, over time, there is eventually zero probability of surviving repeated exposures to such events. While repeated risks can be taken by individuals with a limited life expectancy, ruin exposures must never be taken at the systemic and collective level. In technical terms, the precautionary principle applies when traditional statistical averages are invalid because risks are not ergodic.
Could it be that the tail of a Gamma distribution isn't fat enough given the current behaviour of some citizens of the once Great Britain? Not to mention certain citizens of the once United States!
https://edition.cnn.com/2020/04/05/us/church-services-palm-sunday-coronavirus-trnd/index.html
Diamond Princess.
Another of the infected has died. Over a month later.
https://en.wikipedia.org/wiki/2020_coronavirus_pandemic_on_cruise_ships#Deaths
Wow that's interesting. I had the number at 7 - it's now 12!
12/696 = 1.7%, including asymptomatic cases.
So why do you think the death rate is 1% or below, again?
From Ioannidis in Stat:
The one situation where an entire, closed population was tested was the Diamond Princess cruise ship and its quarantine passengers. The case fatality rate there was 1.0%, but this was a largely elderly population, in which the death rate from Covid-19 is much higher.
Projecting the Diamond Princess mortality rate onto the age structure of the U.S. population, the death rate among people infected with Covid-19 would be 0.125%. But since this estimate is based on extremely thin data — there were just seven deaths among the 700 infected passengers and crew — the real death rate could stretch from five times lower (0.025%) to five times higher (0.625%). It is also possible that some of the passengers who were infected might die later, and that tourists may have different frequencies of chronic diseases — a risk factor for worse outcomes with SARS-CoV-2 infection — than the general population. Adding these extra sources of uncertainty, reasonable estimates for the case fatality ratio in the general U.S. population vary from 0.05% to 1%.
Even doubling his numbers you get 0.25% for the central estimate.
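The "projecting onto the age structure" step Ioannidis describes is just a population-weighted sum of age-specific fatality rates. A minimal sketch with purely hypothetical placeholder numbers (neither the shares nor the rates are real data):

```python
# Age-standardised fatality rate: sum over age bands of (population share) x (age-specific rate).
# All numbers below are hypothetical placeholders, purely to show the calculation.
age_share = {"0-39": 0.50, "40-59": 0.26, "60-79": 0.19, "80+": 0.05}
age_ifr   = {"0-39": 0.0002, "40-59": 0.002, "60-79": 0.02, "80+": 0.08}

projected_ifr = sum(age_share[a] * age_ifr[a] for a in age_share)
print(f"projected population fatality rate: {projected_ifr:.2%}")
```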
Sure, he gets about 0.25% as a central estimate, with a range of anything up to 2% CFR. That really doesn't sound all that unreasonable to me. Modest data set of somewhat unrepresentative people (rich for starters) with poor representation across age ranges, of course. But probably in the right ballpark. That could be 5 million dead in the USA easily enough. Why is this a good thing again?
Yes, the Diamond Princess data should be enough to convince any sensible person that an uncontrolled epidemic would not be acceptable. Any mortality rate in that range would overwhelm the health system, and that would make the death toll substantially worse.
Also, Lombardy gives you a clear indication of what happens in a poorly controlled epidemic.
Well, 0.0025*300,000,000 = 750,000, a far cry from 5 million. Still unacceptable though.
Smart policy would take advantage of the fact that mortality is very highly skewed towards those over 70 and those with serious pre-existing conditions. WHO data show a CFR of 0.2% for those under 40. If that's overstated by 10 times, we are at 0.02%. Ioannidis' work suggests that is likely, with real cases being at least 10 times higher than those that are detected. Let's say there are 70 million in the US under 40: 0.002*70,000,000 = 140,000 at the WHO rate, or 14,000 at 0.02%. Still a lot, but roughly 2.8 million die in the US every year.
The problem here of course is that current policy has high casualties too which no one is even counting. We know lots of parents who are stressed out about their jobs. I guess we could just pay everyone indefinitely and discard the financial system, destroying the accumulated savings of everyone. Some lefties would like government in charge of everything I suppose but the vast majority will hate it.
I hear the panic and hysteria, but seriously, what’s your end game? The Great Depression was deadly too.
Right ballpark, as within an order of magnitude. 0.25% to 2.5% is the right ballpark. Let's see if we can find the right base in the ballpark.
Diamond Princess:
The distribution of deaths by age from Wuhan seems to be affected by, among other things, triage. Hospitals were overloaded, so decisions had to be made, and older and sicker people lost. Using the age distribution of fatalities from the US CDC rather than from Wuhan would give a higher estimate for the US population based on the DP deaths.
https://www.businessinsider.com/most-us-coronavirus-deaths-ages-65-older-cdc-report-2020-3?op=1
https://www.cdc.gov/mmwr/volumes/69/wr/pdfs/mm6912e2-H.pdf
Don't use this for anything other than the age distribution, as the number of cases is way underestimated, the epidemic is growing and many of the infected have yet to have enough time to die. Even this age distribution is probably wrong... if we wait long enough.
As Dr Annan stated, cruise ship passengers are unrepresentative, being likely richer and in better health than average.
South Korea:
Testing is not universal, but testing is finding enough of the asymptomatic cases so as to keep R0 below 1.0 without a complete lock down. This puts a limit on the number of missing asymptomatic cases, not all found, but at least a large fraction seems to be found. Epidemic hasn't been around long enough for time to die to settle.
Today: 192/10,331 ≈ 1.86% deaths/total cases; 192/6,694 ≈ 2.87% deaths/recovered. When all cases are resolved, these two numbers will be equal. Deaths seem to resolve more slowly than recoveries.
With the constraint of R0 < 1.0 and a generous estimate of only half of all cases found, and splitting the difference between the above two numbers, I don't see how this data supports any CFR below 1%. Especially given the age distribution of cases in South Korea, with 20-29 year old women being over 25% of all cases. Shincheonji Church of Jesus members were largely young women, a large fraction of the South Korean outbreak. Being both young and female reduces the death rate.
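Making that splitting-the-difference arithmetic explicit (the halving is the comment's generous only-half-of-cases-found assumption):

```python
lower, upper = 192 / 10_331, 192 / 6_694   # ~1.86% and ~2.87%, as above
midpoint = (lower + upper) / 2             # ~2.4%
adjusted = midpoint * 0.5                  # ~1.2%: still above 1% even if half of cases were missed
print(f"{midpoint:.2%} -> {adjusted:.2%}")
```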
Bottom line, there are lots of things that make a CFR hard to estimate. I'm not an expert on this. But I don't see how it would realistically be below 1% for the US population. Perhaps for a third world country with mostly young people...
DY:
10 times more asymptomatic cases than symptomatic? Really???
Ah, I see. You are driven by politics, not science. Reality is so inconvenient.
Phil seems on the money to me.
This is what I really don't get about this sort of thing. To me, this is a puzzle that I want to solve to the best of my ability. Understanding what is going on and making the best predictions I can, that sort of thing. The data are incomplete and biased as to their collection, and models are imperfect, so it's not just a matter of statistical formulae and calculation, but also requires attempts at sensible judgments that take account of the views of a range of experts while accepting that they too are imperfect. No-one is perfect of course, I'm sure I have my own biases, but I try to minimise their effect as much as I can.
And yet to others, it's clearly a game of seeing how far they can distort the evidence to support some sort of agenda, be it personal or political. Motive isn't really the point here. The behaviour is easy to spot a mile off once you have seen it often enough. I just really don't understand what they get out of it. The only way to win in my book is to be closer to the truth, quicker than others. Not to provide a clever argument for an estimate that turns out to be wildly incorrect, be it climate sensitivity or the death rate of C-19. Working out that R0 was wrong in all of the expert modelling was quite a thrill in that regard. It's frustrating to be largely ignored of course, but the challenge of getting the answer is still fun in its own right.
Phil, you are misrepresenting the data and it's tiresome. Half of all cases are asymptomatic in Iceland, for example. Most people who are young don't get sick enough to contact the health care system, and that's still where virtually all the testing takes place. It's a testing issue. If R0 is really 3-4 then mitigation will only work with very extreme measures like shutting down everything.
We need random testing to answer that, as Ioannidis has been saying for 3 weeks. Governments (aside from Iceland) have been really stupid about this, prolonging our ignorance. Perhaps they have a strong political motive. Ioannidis does not have a political motive I can detect, and slandering him does no good and is just confirmation bias of prejudice. Some people always say things that are wrong. Not doing random testing is vastly worse because these are supposed to be elite experts. Looking pretty stupid to me.
As for the endgame. I don't claim to have all the answers. First and foremost, I want to understand the problem. I'm not an expert on control of epidemics, until a few months ago I'd probably not spent 5 minutes thinking about it in my entire life. But also, I didn't approach the analysis from the POV that others have clearly taken, that any govt action is automatically wrong therefore we have to create some results that justify doing nothing.
However, I certainly wouldn't have started from here. We had a useful heads-up from other countries that it was fast-growing with a serious rate of hospitalisations, and it was only a few minutes calculation from there to show that it would overwhelm the NHS in a month. I did this calculation in early March because it became obvious by then that the govt pronouncements made no sense: had I been actually tasked with formulating policy I'm confident I could have worked it out rather sooner. The basic evidence was in by late Jan I believe, according to Richard Horton at The Lancet. The NHS should have been prepared sooner, better, and testing facilities ramped up. This wouldn't even have cost very much if it had turned out less of a problem than we thought.
I am no expert on the policy aspect of reducing contacts, but it would seem sensible to introduce moderate policies to keep R down to a more acceptable level, in a sustainable manner. I'm not sure what would have worked best here but would have wanted to take advice from experts on what would have the most effect on R for the least cost. I've seen various papers discuss a range of policies including a recent report from IC. If R could be reduced to around 1.5-2 without the drastic changes we are now experiencing, then that would have bought us months, literally. Aggressive contact tracing and quarantining would have helped hugely as several other countries showed. We had a plan based on flu that just said "take it on the chin" instead. And politicians (backed up to some extent by dodgy modelling) repeatedly argued that we shouldn't act too soon or too strongly. Vallance himself specifically said he was concerned we should not suppress it too strongly!
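Roughly how a lower R buys months: in a simple fixed-generation-interval picture, cases multiply by R every generation, so the doubling time is T_gen·ln2/lnR and every milestone arrives proportionally later. The 6-day generation interval and the R values below are illustrative assumptions:

```python
import math

T_gen = 6.0                       # assumed generation interval in days
for R in (3.0, 2.0, 1.5):
    t_double = T_gen * math.log(2) / math.log(R)
    print(f"R = {R}: doubling time ~ {t_double:.1f} days")
# A 1000-fold rise (~10 doublings) takes roughly 38, 60 or 103 days
# respectively, so pulling R down to ~1.5-2 really does buy months.
```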
Endgame is R0 < 0.99
Humans win, virus loses.
Getting there with minimal costs and losses is what I want.
What I'm worried about regarding the endgame in the UK is that I haven't seen much preparation for the kind of automatic contact tracing and case tracking that has worked well elsewhere.
Instead there seems to be a lot of noise about how we'll soon have more testing kits. Which probably plays well with the public, given how many complaints I've seen about people not being tested, but by itself (without strategy/infrastructure) is not that useful.
I think the best strategy is, and always was, to clamp down as hard and early as possible, until the number of cases is almost zero, so you can spend lots of effort on tracking the last few cases.
And quarantine for incomers.
But in the short term, a hard clampdown is now needed regardless of the end-game. One could play with a partial lockdown later on...
>"The only way to win in my book is to be closer to the truth, quicker than others. Not to provide a clever argument for an estimate that turns out to be wildly incorrect, be it climate sensitivity or the death rate of C-19."
Are the aerosol declines going to be sufficient to help get a handle on CO2 sensitivity versus aerosols? Too soon for the data yet, but when should any predictions be made? Is it temp that should be predicted and are any latitude bands better discriminators than others?
crandles, I think the general assessment is that it's unlikely to be useful. If people still believed that really high sensitivity and high aerosol forcing was plausible, it might help to rule out the most extreme case, but that's mostly discredited anyway.
Nice article on the British approach here:
https://www.reuters.com/article/us-health-coronavirus-britain-path-speci-idUSKBN21P1VF
Mornin' James (UTC),
And yet to others, it's clearly a game of seeing how far they can distort the evidence to support some sort of agenda, be it personal or political. Motive isn't really the point here. The behaviour is easy to spot a mile off once you have seen it often enough. I just really don't understand what they get out of it.
Here's one theory:
http://GreatWhiteCon.info/2015/05/why-its-so-hard-to-convince-pseudo-skeptics/#Neuroscience
[Showing] one single disgusting image and measuring the brain activity and how the person responded to that was sufficient to allow you to identify if somebody was conservative or liberal. With a single brain image. With 95% accuracy!