Dr Deepti Gurdasani from Queen Mary’s William Harvey Research Institute and Hisham Ziauddeen from the University of Cambridge are co-authors of a correspondence piece in The Lancet Global Health which suggests there were limitations in some of the assumptions used in the COVID-19 pandemic models that formed part of the scientific evidence considered by the UK Government. In this blog post, they explain the importance of using real-world evidence alongside modelling to develop public health responses.
We examined the model described by Hellewell et al. in The Lancet Global Health, in which the authors argue that, while effective contact tracing and isolation could contribute to reducing the overall size of an outbreak, in most plausible outbreak scenarios case isolation and contact tracing alone are insufficient to control outbreaks.
This particular model is very sensitive to a key parameter: the delay between someone becoming symptomatic and being isolated. Based on the previous SARS epidemic and early data from Wuhan, the original model considered a short delay of 3.83 days and a long delay of 8.09 days, and predicted that “in some scenarios even near perfect contact tracing will still be insufficient, and further interventions would be required to achieve control”. On 12 March, the UK Government decided to cease community testing and contact tracing, claiming that the scientific evidence did not support the effectiveness of these strategies (see the ‘Coronavirus: action plan’ and article in the FT).
In our new analysis, published as a correspondence piece in The Lancet Global Health, we argued that even 3.83 days was a very long delay, given that South East Asian countries were already detecting and isolating cases within 24 hours by early March, and rapid tests with a four-hour turnaround were available when the Hellewell et al. model was published at the end of February. Using the original model code, we showed that when the delay is reduced to one day, the model predicts a probability of controlling the epidemic within 12 weeks of more than 80 per cent, with 30 to 60 per cent of contacts traced. This suggests that rapid screening and testing, contact tracing, and isolation could have been a more effective strategy to control transmission in the UK.
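To illustrate how sensitive the predicted chance of control is to this delay, the sketch below runs a minimal branching-process simulation. It is not the original model code used in our analysis; it is a simplified illustration whose parameter values are all assumptions chosen for the example (a reproduction number of 2.5, a negative-binomial offspring distribution with dispersion 0.16, gamma-distributed delays with a mean of about five days, 20 initial cases, and no presymptomatic or asymptomatic transmission). Its only purpose is to show the mechanism: the shorter the delay from symptom onset to isolation, and the larger the fraction of contacts traced, the more onward transmission is blocked and the more often the simulated chains of transmission die out within 12 weeks.

```python
# Minimal branching-process sketch (not the published Hellewell et al. code).
# All parameter values are illustrative assumptions, not fitted estimates.
import numpy as np

rng = np.random.default_rng(2020)

R0 = 2.5          # assumed mean number of secondary cases per unisolated case
K = 0.16          # assumed overdispersion (captures superspreading)
MAX_WEEKS = 12    # "control" here means the chain goes extinct within this window
MAX_CASES = 5000  # beyond this, call the outbreak uncontrolled and stop


def outbreak_controlled(isolation_delay, trace_prob, n_initial=20):
    """Return True if the simulated outbreak dies out within MAX_WEEKS."""
    # Each case is (symptom onset time in days, whether the case was traced).
    cases = [(rng.gamma(2.0, 2.5), False) for _ in range(n_initial)]
    total = n_initial
    while cases:
        next_generation = []
        for onset, traced in cases:
            # Traced cases are isolated at symptom onset; untraced cases only
            # after the onset-to-isolation delay.
            isolation_time = onset if traced else onset + isolation_delay
            # Negative-binomial offspring via the gamma-Poisson mixture.
            n_secondary = rng.poisson(rng.gamma(K, R0 / K))
            for _ in range(n_secondary):
                exposure = onset + rng.gamma(2.0, 2.5)  # assumed transmission timing
                if exposure >= isolation_time:
                    continue  # infector already isolated: transmission blocked
                if exposure > MAX_WEEKS * 7:
                    return False  # chain still transmitting after 12 weeks
                child_onset = exposure + rng.gamma(2.0, 2.5)  # assumed incubation
                next_generation.append((child_onset, rng.random() < trace_prob))
        total += len(next_generation)
        if total > MAX_CASES:
            return False
        cases = next_generation
    return True


def probability_of_control(isolation_delay, trace_prob, n_sims=100):
    return np.mean([outbreak_controlled(isolation_delay, trace_prob)
                    for _ in range(n_sims)])


for delay in (8.09, 3.83, 1.0):
    for trace in (0.3, 0.6):
        p = probability_of_control(delay, trace)
        print(f"delay {delay:>4} days, {trace:.0%} of contacts traced: "
              f"P(control) ~ {p:.2f}")
```

Running this toy version shows the same qualitative pattern as the published model: with the onset-to-isolation delay cut to one day, most simulated outbreaks are extinguished even at moderate levels of contact tracing, whereas with long delays even thorough tracing often fails. The exact numbers depend entirely on the assumed parameters and should not be read as estimates.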
Importantly, by the time the Government made this decision, rapid tests were available. However, even without testing, actively screening for symptoms of COVID-19 would have allowed public health systems to detect cases, isolate them, and trace their contacts, as was done in Singapore.
Modelling essentially tries to simulate what an epidemic might look like when we have only partial information. Relying on models alone in the early stages of an epidemic can lead to incorrect public health responses and unnecessary loss of life. It is vital to also consider the real-world evidence emerging from other countries’ pandemic responses.
By the time the UK made the decision to stop contact tracing, there was already extensive evidence from South East Asian countries showing that contact tracing had allowed them to control transmission; many did not have to implement school closures or lockdowns until much later in the outbreak, well after the curve had been flattened. For example, in early March, South Korea had far fewer deaths than Italy, despite the outbreak having started at around the same time in both countries, suggesting that its case detection and contact tracing strategy was effective.
This is important moving forward because it’s clear that this epidemic is here to stay and that there could be a second wave. It is encouraging that the UK Government is now talking again about contact tracing, but the exact way it is implemented is extremely important. They need to make sure that they are listening to the advice and experience of other countries, and implementing contact tracing in an effective way.
Our article is published alongside a reply from the authors of the original model. The authors make the point that even if the model predicts the epidemic can be controlled by case detection and contact tracing, this alone does not make these strategies feasible as there is the crucial matter of capacity to consider.
We do not agree with this position. The value of the model is in indicating the optimal strategy to pursue; capacity can then be built on that basis. Prior to the outbreak, few countries had the capacity to implement the necessary measures; most have had to massively upscale their capacity to meet the anticipated need. Unfortunately, it is not clear whether the UK ever seriously considered this strategy.
Empirical, real-world data must be considered alongside mathematical models when devising pandemic responses. Models are fallible, and scientists and policy makers must be mindful that over-reliance on models, and a lack of caution in interpreting them, could prove costly.