Among scientists, there is a well-known aphorism: “All models are wrong, but some models are useful.” Coined by the British statistician George Box in 1976, it neatly illustrates the pitfalls of correctly anticipating an outcome based on an imperfect, and ever-changing, set of inputs. And it is the reason modelling alone should never be used to drive policy.
Yet despite this widely accepted limitation, models have played a pivotal role in the Covid pandemic, frequently used to justify the removal of freedoms as fundamental as being allowed to walk in the open air, or hold the hand of a dying loved one – scenarios that modellers never anticipated.
“Should models be used to lock people down? No, they shouldn’t,” says Professor Graham Medley, chair of the Scientific Pandemic Influenza Group on Modelling (SPI-M). “Our job is to lay out a range of possibilities for the future, but modelling can’t predict the future.
“Government has to make difficult decisions to get the country through the epidemic with the least possible harm, but the harms accrue from the controls as well as the virus.”
The speed of the pandemic hasn’t helped matters. Complex models that, ordinarily, would have been developed over months or even years have been required within days. Often, the underlying assumptions have already changed by the time they are released for public scrutiny.
“A week is a long time in politics, but it’s an age in a pandemic,” says Medley.
The first wave
Modelling was thrust into the public consciousness in the spring of 2020, when the Scientific Advisory Group for Emergencies (Sage) began publishing the evidence it was using to advise the Government.
In the early stages of the pandemic, Downing Street’s coronavirus strategy was to “flatten the peak” of cases, and avoid a resurgence in the winter when the NHS would be less able to cope.
Yet after Imperial College published its Report 9 in March 2020 – which warned the health service would soon be overwhelmed with severe cases of Covid, and might face more than 500,000 deaths if politicians took no action – the Government made an abrupt volte-face.
Spooked by the eye-watering numbers and the horrified reaction of the public, Boris Johnson slammed on the brakes and announced a national lockdown.
Modelling the trajectory of an entirely new virus was always going to prove tricky. There was scant information about hospitalisation and death rates, or even how many people had the virus.
Calculations by the London School of Hygiene and Tropical Medicine (LSHTM) released at the time found the number of people infected could be anywhere from 6,000 to 23 million. “One of my colleagues compared modelling a pandemic in real time to doing engineering on a collapsing building,” says Dr Nick Davies, a modeller from the LSHTM.
“At the beginning of the pandemic, we didn’t know what Covid’s properties were, and we were asked to look at worst-case scenarios. Not end of life as we know it, but something towards the more pessimistic but not completely unrealistic end. When things are so uncertain, policymakers want to avoid the worst possible outcome.”
It was never possible to test Imperial’s “500,000 deaths” figure, because lockdown came into effect on March 26. Some experts believed restrictions arrived too late to work properly, while others thought behaviour changes had already begun to limit spread.
Dr Ellen Brooks-Pollock, senior lecturer in veterinary public health and infectious disease modelling at Bristol University, says: “At the start of the pandemic, those huge curves showing everyone getting infected, everybody knew that wouldn’t happen, that people wouldn’t sit back with that going on and carry on behaving normally.
“Even if the Government didn’t put in restrictions, people would have started behaving differently. But we ended up with the baseline being ‘Assuming nothing changes’, because it’s difficult to know what the alternative baseline is.”
Davies adds: “Mobility data does show that people had already started moving around less before lockdown – which roughly coincides with Boris Johnson going on television saying lots of people were going to lose loved ones.
“But I think it’s slightly wishful thinking to think you can let it happen and things will be fine on their own. That’s a pretty massive risk to take.”
How viral modelling works
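The models described in this article are typically compartmental: the population is divided into groups such as susceptible, infected and recovered, and equations move people between those groups at rates set by assumptions about transmission and recovery. A minimal sketch of the idea is below; the parameter values are illustrative assumptions for demonstration, not any group's published figures.

```python
# Minimal SIR (Susceptible-Infected-Recovered) sketch, stepped day by day.
# Illustrative only: real pandemic models add age groups, contact patterns,
# hospital pathways, vaccination and behaviour change.
def sir(beta, gamma, i0, population, days):
    s, i, r = population - i0, float(i0), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # transmission
        new_recoveries = gamma * i                  # recovery
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

# beta/gamma gives the basic reproduction number R0; here R0 = 3 (assumed).
curve = sir(beta=0.3, gamma=0.1, i0=100, population=67_000_000, days=365)
peak_day = curve.index(max(curve))
```

Small changes to the transmission or recovery assumptions can move the projected peak by weeks and change its height by millions, which is why different groups' scenarios, all plausible at the time, diverged so sharply.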
By June 2020, with Covid cases and deaths plummeting, Britain was trying to crawl back to normality, with a phased reopening of schools and retailers. Yet a gloomy document from SPI-M warned that easing lockdown restrictions could drive the R rate above 1, and advised delaying the reopening of schools by a month.
Keen to kickstart the economy, the Government pushed ahead anyway – and the country experienced a largely Covid-free summer. However, by October, cases were starting to rise again, and modelled scenarios were increasingly downbeat about the coming months.
At a press conference on October 31, Sir Patrick Vallance, the UK’s chief scientific adviser, showed a graph from Public Health England (PHE) and Cambridge University warning that deaths could peak at 4,000 a day by December 20.
The data was used to justify a second national lockdown on November 5, but Oxford University quickly pointed out that the numbers were crunched before new tier restrictions had come into effect and were vastly wide of the mark.
Under the model, daily deaths should have reached 1,000 by the day of the press conference, but the rolling seven-day average was 265. The projections used an R rate of 1.3 to 1.5, when it had fallen to between 1.1 and 1.3.
Bob Seely, the MP for the Isle of Wight, described the estimates as “hysterical”, while Penny Mordaunt, the former paymaster general, warned the data were “in need of improvement”.
Within days, Sir Patrick and Sir Chris Whitty, the Government’s chief medical adviser, were forced to admit that the 4,000-a-day figure was unlikely, and the episode was later criticised by the official statistics watchdog.
Even at the height of the winter wave, the daily death count peaked at 1,359, far lower than the 4,000 projection. In fact, by the beginning of December 2020, many of the major modelling groups were a little more optimistic about the pandemic.
“It’s not the case that the models have always been pessimistic,” says Davies. “In a few cases, our models have been optimistic. Right before the emergence of the alpha variant [known as the Kent variant when it was first detected in November 2020], our models were saying that the virus looked like it was about to come down. Then it didn’t.”
The second wave
While health experts were still struggling to understand the virus dynamics of the original Wuhan strain of Covid, a new problem was emerging: variants.
On December 14 2020, Matt Hancock, the then health secretary, stood up in the House of Commons to inform MPs that a newly mutated virus was spreading exponentially through the Home Counties.
Boris Johnson was forced to limit Christmas celebrations for millions on December 21 and, with no sign of a decline, announced a new lockdown on January 4 2021.
Cases peaked in the middle of January, and with the vaccine rollout in full swing, the Government was working on a roadmap out of restrictions by February. However, in the middle of February, scientists at Imperial, Edinburgh and Warwick presented new models that warned early release could result in another, even deadlier wave.
Imperial College estimated the Government’s lifting of restrictions at the tail end of winter could cause between 15,000 and 25,000 hospitalisations in the summer and early autumn – higher even than the first peak in April 2020.
The MP Mark Harper was among the first to point out a flaw in the reasoning, warning there was a “concerning pattern of assumptions not reflecting the much more positive reality”.
A glance at the scientists’ assumptions showed he had a point. The models were based on a pessimistic uptake of an effective vaccine. In the end, hospitalisations never rose beyond 8,500.
[Chart: Estimated rise in hospitalisations in England after step four of the roadmap]
Addressing why the models often seem overly pessimistic, Medley says: “I’m always going to have the public and media saying it wasn’t as bad as the models suggested, but if I’m doing my job properly, there should always be a modelled scenario worse than the one that actually happens.
“For me, the worst outcome would be for the Government to say: ‘Why didn’t you tell me it could be as bad as this?’
“In the early stages of the epidemic, I was criticised very heavily for not locking down to save lives. Then I was criticised very heavily because people thought I was trying to argue for more lockdowns. I was doing neither.”
Dr Leon Danon, of the University of Bristol, added: “The idea that modelling is somehow pessimistic or optimistic is wrong. Models are neutral and much more dispassionate. It is what it is – it’s not ‘calling’ for anything.”
The missing third wave
By March 2021, although the decline in death rates was running three weeks ahead of the central estimates, modelling suggested a full release in June could trigger a third wave with deaths of up to 59,900.
Again, the assumptions used were found to be too pessimistic. A paper issued by Imperial College assumed just 44.6 per cent of the population would have immunity by June 21 – the original Freedom Day. When the date rolled around, it turned out to be 60 per cent.
Vaccine effectiveness had again been underestimated, and it was clear that hospital occupancy was nowhere near as grave as even the most optimistic scenarios.
Scientists at the University of Warwick had suggested the number of people in hospital with Covid by the start of June could hit 1,750, while Imperial said 7,000. In the end, it was around 1,000.
Chris Hopson, chief executive of NHS Providers, warned that modelling had been “crude and unreliable”, and urged the Government not to use it when deciding whether or not to press ahead with Freedom Day.
Yet despite the encouraging signs and vocal warnings, Johnson delayed the end of lockdown by a month: new modelling published on June 14 suggested a deadly third wave was still on the horizon.
In its most pessimistic assessment, Imperial College estimated that Britain could experience a further 203,824 deaths over the next 12 months, while more modest estimates from rival groups suggested more than 50,000 would die.
Critically, the models had failed to factor in new data from PHE (now UKHSA) showing that the vaccines offered much greater protection against hospitalisation than first thought.
While SPI-M estimated that vaccines would reduce infection by between 24 and 48 per cent after a first dose, and between 30 and 60 per cent after the second dose, PHE said it could be closer to 70 per cent after a first dose, and 85 per cent after a second. Switching to a more optimistic vaccine scenario may have seen the projected number of deaths slashed sevenfold.
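The sensitivity to that single assumption is easy to see with back-of-envelope arithmetic; the sketch below is illustrative only, not the SPI-M or PHE models themselves.

```python
# Residual risk per vaccinated person under each effectiveness assumption
# quoted in the article (second dose). Illustrative arithmetic only.
spi_m_effectiveness = 0.60   # upper SPI-M estimate: 60% reduction
phe_effectiveness = 0.85     # PHE estimate: 85% reduction

residual_spi_m = 1 - spi_m_effectiveness   # 40% of risk remains
residual_phe = 1 - phe_effectiveness       # 15% of risk remains
ratio = residual_spi_m / residual_phe      # roughly 2.7 times lower risk
```

In a full transmission model the gap widens further, because fewer infections among the vaccinated also mean less onward spread, which is how switching to the more optimistic vaccine scenario could cut the projected death toll several-fold.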
“A few times, we’ve been really unlucky,” says Davies. “This was obviously really good news, but it made it look like we were just ignoring the data, when we just didn’t know about it when we put together our roadmap scenarios. The UKHSA’s updated vaccine efficacy assumptions turned out to be a lot better than anyone thought they would be.
“But we made it clear to policymakers that things had turned out better than expected, and that things would not be as bad as the scenarios suggested.”
Johnson pressed ahead with lifting full restrictions in July – but a brief spike in cases during summer 2021 led Professor Neil Ferguson of Imperial to predict that Britain would soon hit one million infections a day.
The increases were later shown to have been driven by fans attending the Euro 2020 football tournament, and cases dropped dramatically after the England and Scotland teams were knocked out of the competition. Modellers had also failed to appreciate how cautious the public would be after restrictions were lifted.
“We often can’t predict what people are going to do,” Medley explains. “In normal times, you get a long way thinking about what the average person will do. But in a pandemic, behaviour is a lot less predictable.
“What happened during the Euros was a good example. Pubs were opening up, England was doing really well, and everyone was going out to watch them play. Had we been beaten by Denmark in the semi-final, [the marked spike in summer cases] would have been different.”
However, some experts believe that more attention should have been paid to this ‘real-world’ data. Paul Hunter, professor of medicine at the University of East Anglia, says: “The drop after the Euros was predictable – it wasn’t rocket science. If we’d spent more time looking at what was happening in the real world, rather than at the models, it would have been obvious.”
The omicron wave
Towards the end of last autumn, Britain’s Covid epidemic looked like it was dying out. Then omicron hit.
South Africa first reported the variant on November 24. Within a day, Government scientists here were urging ministers to impose new restrictions. By December 13, the Health Secretary Sajid Javid claimed that there were already 200,000 omicron infections a day in Britain, with cases doubling every two days. An SPI-M consensus statement released on December 14 warned that the variant could bring between 3,000 and 10,000 hospital admissions a day, and between 600 and 6,000 deaths, leading to unsustainable pressures on the NHS.
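The alarm rested on exponential growth: a fixed doubling time compounds very quickly. A rough projection using the article's figures is sketched below; it is back-of-envelope arithmetic under an assumed constant doubling time, not an official forecast.

```python
# Unchecked exponential growth from 200,000 daily infections with a
# two-day doubling time (figures from the article; constant-growth assumption).
def project(daily_cases, doubling_days, horizon_days):
    return daily_cases * 2 ** (horizon_days / doubling_days)

# One week later, if nothing changed: about 2.26 million infections a day.
week_later = project(200_000, 2, 7)
```

Sustained two-day doubling was never realistic for long, as immunity and behaviour change slow growth, but it explains the scale of the December warnings.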
However, it quickly became clear that omicron was much milder than delta. By mid-December, South Africa’s epidemic had levelled off, with statisticians noting that the case fatality rate had plummeted from one in 33 to one in 200.
Real-world data on December 14 showed that the risk of hospitalisation with omicron was 23 per cent lower than delta, while a few weeks later, it was shown to cause just a quarter of deaths.
It now looks as if cases in Britain peaked around January 5 2022, without overwhelming the NHS. Earlier this week, Professor David Heymann of the LSHTM said the UK could be one of the first countries in the world to emerge from the pandemic.
Davies, whose model had suggested that omicron could lead to between 25,000 and 75,000 deaths, says that it quickly became apparent that initial assumptions about omicron were wrong.
“In the very early data, it was looking like omicron severity was similar to delta. But over those first few weeks, the picture clarified and the severity came down,” he says.
“We were informing policymakers every day about the changing picture. It’s difficult to know what to do. Either we don’t put our projections out at all – then it’s completely opaque what’s feeding into Government decisions – or we put them out with the understanding that misinterpretations cannot be corrected.”
Modellers argue it is better to know something about a situation than nothing, even if the whole picture is unclear. Whitty has previously said: “An 80 per cent right paper before a policy decision is made is worth ten 95 per cent right papers afterwards.”
But, given the experience of the pandemic, other experts now think models too unreliable to be driving public health policy.
Hunter says: “I think we have put too much emphasis on modelling, and that has failed us, to a certain extent. The way omicron is panning out is nowhere near as grim as many were predicting.”
Certainly, it is time for the models to come with a health warning, with some calling for a move to interactive graphs that could be updated in real time.
“Maybe we should think about it in terms of a weather forecast, where we say there is a 20 per cent chance of rain,” adds Brooks-Pollock. “We know that sometimes it rains when nobody was expecting it.”
What is clear is that the Government is no longer so enthralled by the models, and is willing to treat them with a healthy dose of scepticism as it pays more attention to real-world data – which is arguably what the modellers wanted all along.
“The idea that models should have some kind of control is wrong,” says Medley, who has acted as the middle-man between the Government and SPI-M, and has not always found it easy to explain the nuances of complex models to politicians, most of whom are humanities graduates.
“In the initial stages of the pandemic, my time was spent talking to people inside the Government who are not science-trained, to help them understand the pitfalls, because graphs can be persuasive.
“Communications have hugely improved. I think we’ve now got the Government to understand that we cannot make predictions about what is going to happen.”