By Dr. Matt Grawitch, Saint Louis University
As we continue to wrestle with COVID-19, the world has become even more polarized than it was during the last election cycle.
In one corner are those who wanted to shut everything down, claiming that upwards of 2 million people in the US would die based on epidemiological models. In the other corner are those who questioned whether shutting everything down would have disastrous unintended consequences for the economy and the livelihoods of citizens, especially those in low-paid jobs. (There are also other, less populated groups, such as those who are clueless and those who claim it is a government conspiracy.) Initial attempts to find a balance between the two predominant perspectives were quickly thrown out the window, though, when New York City saw a rapid spike in infections.
As things stand now, many states have issued stay-at-home orders rather than less extreme options. This has led to a patchwork of gubernatorial orders and decrees declaring some employees essential and others not. It has also created a massive increase in unemployment claims, with a record 16 million people filing in three weeks.
Is the Cure Worse Than the Disease?
The dramatic climb in unemployment claims raises the question of whether the cure is worse than the disease, which assumes there's a way to calculate the value of a single life. As it stands now, many economists use a figure of around $10 million as the value of a statistical life (VSL), a proxy for the amount we are willing to pay for reduced risk and enhanced safety. Using the VSL assumes some level of stable risk calculation (in other words, that we can accurately quantify risk). We can then determine whether specific interventions are likely to save enough lives to justify the financial costs of those decisions.
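As a rough illustration of the arithmetic (not a calculation from this article), a VSL-based comparison might look like the sketch below; the lives-saved and economic-cost figures are hypothetical placeholders.

```python
# Back-of-the-envelope VSL comparison. Illustrative only: the $10 million VSL
# figure is from the article, but the lives-saved and economic-cost numbers
# are hypothetical placeholders.

VSL = 10_000_000  # assumed value of a statistical life, in dollars


def intervention_benefit(lives_saved: float, vsl: float = VSL) -> float:
    """Dollar value assigned to the lives an intervention is expected to save."""
    return lives_saved * vsl


lives_saved = 100_000                 # hypothetical projection of lives saved
economic_cost = 2_000_000_000_000     # hypothetical economic cost ($2 trillion)

benefit = intervention_benefit(lives_saved)
print(f"Estimated benefit: ${benefit:,.0f}")
print(f"Estimated cost:    ${economic_cost:,.0f}")
print("Financially justified under VSL" if benefit > economic_cost
      else "Not financially justified under VSL")
```

Of course, the whole exercise is only as trustworthy as the projected number of lives saved, which is exactly where the models come in.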
In the current crisis, many decisions are based on predictive epidemiological models. While some people view these models as definitive evidence (after all, they were created by experts, right?), they're only as good as the data and assumptions on which they're based, and in this case those inputs are not very reliable. This is why we see very wide margins of error (the low- and high-end estimates) and constantly changing predictions as new information arrives (they're perhaps more valid than a fortune cookie, but who knows for sure). Moreover, depending on the source, they may contradict each other, sparking debate, confusion, and indecision.
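To make the sensitivity point concrete, here is a minimal sketch, not any specific published model; the seed, number of generations, and reproduction-number values are hypothetical placeholders chosen only to show how much a projection swings with its assumptions.

```python
# Naive branching-process projection: each case infects r0 others per
# generation. Hypothetical inputs; for illustration of sensitivity only.

def new_infections(r0: float, generations: int = 10, seed: int = 100) -> float:
    """Projected new infections in the final generation, starting from `seed` cases."""
    return seed * r0 ** generations


for r0 in (1.5, 2.0, 2.5, 3.0):
    print(f"R0 = {r0}: ~{new_infections(r0):,.0f} new infections in generation 10")
```

Even in this toy version, doubling the assumed reproduction number changes the projection by orders of magnitude, which is why small disagreements about inputs produce wildly different headlines.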
An important piece of information often left out of these discussions is that most of the experts producing these models are biased toward predicting the worst-case scenario (a bit like every disaster movie ever made). Their models are not intended to produce conservative estimates but are designed to predict how bad it could be given all the unknowns. The more unknowns there are, the more tenuous the assumptions on which the models are based. The more tenuous the assumptions, the more likely the model errs on the side of pessimism. (Philippe Lemoine actually did a wonderful job of dissecting the dire predictions of the Imperial College model.) It's good that these models err that way initially (so we at least know what the worst-case scenario is given all the unknowns), but it's not so good when decision makers take them at face value.
So, what we've seen is worst-case scenario models used to estimate the costs and benefits of various decisions, with those decisions then judged against that worst case. Although the models are continuously updated, they continue to lead to conclusions that, based on VSL, more extreme actions are financially justified. But when the calculations rest on faulty data, faulty decisions may follow, a potential problem we have right now and one that John Ioannidis raised in the middle of March. Because we don't yet have a firm grasp of the virus's actual transmissibility (how easily it spreads) or its fatality rate (the number of people who die from the disease divided by the number of people who get it), we don't actually know how communicable or deadly the disease is. Without even a close approximation of transmissibility or the fatality rate (which, as Alex Berezow mentioned, are likely inversely related), we run the very real risk that we've overreacted, potentially resulting in unintended consequences down the road (as we're seeing now with the massive spikes in unemployment, which may be just the beginning of an economic crisis).
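To see why the unknown denominator matters so much, here is a minimal sketch; all figures are hypothetical placeholders, not real counts from any country.

```python
# How the implied fatality rate shrinks as the assumed number of true
# infections grows relative to confirmed cases. Hypothetical inputs only.

deaths = 10_000             # hypothetical number of deaths
confirmed_cases = 200_000   # hypothetical number of confirmed cases

for undercount in (1, 5, 10, 20):
    true_infections = confirmed_cases * undercount
    fatality_rate = deaths / true_infections
    print(f"If true infections are {undercount}x confirmed cases, "
          f"the fatality rate is about {fatality_rate:.2%}")
```

The same death count implies anything from a frightening fatality rate to a far more modest one, depending entirely on how many mild or untested infections we assume are out there.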
The Unknowns and the Knowns
There are still lots of unknowns. Why have hotspots developed where they have? What differentiates the countries that have seen relatively few total and per capita cases from those that have seen substantially more? How is the virus transmitted? Can it become aerosolized (i.e., transmissible through the air, including via ventilation systems)? Why are men substantially more likely to die of the virus when men and women are infected at roughly equal rates? While there are plenty of folks willing to opine on answers to these questions (some of them claiming expertise and some of them self-selected pundits), the reality is that the data on which those opinions are based are sketchy at best and inaccurate at worst.
What we do know is the following: as more data come in, the worst-case estimates keep being revised downward. We know the elderly, those with pre-existing conditions (such as lung damage), and those who are otherwise immunocompromised are at high risk of dying if they contract the disease. We know that a disturbing number of people who end up on ventilators after contracting the disease are likely to die. We know that areas with high population density seem to have more total and per capita cases than those with lower population density (population size appears less important than density at this point). We know that masks likely provide some protection, but how much remains a mystery. But even though we have learned a fair amount, it's unclear whether what we've learned is being used to refine high-level decisions.
What Now?
At some point, we’re going to have to be much more deliberative about the tradeoffs we’re making. The longer the shutdown continues, the more likely we are to see unintended consequences that will end up making the cure worse than the disease, especially as the projected deaths from the disease approach a more reliable, stable level (which is likely to be much lower than originally thought). While the shutdowns have offered a repeated body slam to the economy (to keep with our wrestling theme), it’s hard to say at this point exactly how much damage has been done to it. Will it quickly get back on its feet or will it need years to recover? How many lives must be saved to justify the economic consequences? Will there be increases in divorce, suicides, bankruptcy, or alcohol and drug abuse as a result of the decisions being made today? How much is that worth? Is there a way to provide a better balance between risk reduction and economic damage?
At some point, when all is said and done, we’ll be able to conduct a thorough assessment (though I predict both those who claim we overreacted and those who claim the extreme decisions were called for will conclude they were right). In this instance, though, there isn’t a clear winner, and it will be up to those who are responsible for making future decisions to ultimately serve as the referee and decide whether we erred too much on the side of caution or whether we should have been more cautious than we were. How well we do in assessing those outcomes will determine how well we do in the next match – and there will be a next match.
Dr. Matt Grawitch is a PhD psychologist, professor, and Director of Strategic Research at Saint Louis University.