One of the most controversial and heavily debated responses to the pandemic, and one that was and continues to be used by various governments, is Shelter in Place (SIP) orders, commonly referred to as lockdowns. This week saw Australia institute another SIP for the Greater Sydney region in New South Wales. These SIP orders have become familiar, both in news coverage and in the logic used to justify them. Countless articles and studies have lauded these policies as some of the best ways to prevent deaths from the pandemic.
While SIPs are familiar in form and logic, evidence about their long-run effectiveness remains scant. Many of the claims about their efficacy have come from epidemiological models that have largely failed to accurately predict real-world outcomes of the pandemic. In the face of this issue, and in an attempt to better understand the long-run effects, measures beyond simply counting cases and deaths directly attributed to Covid have moved to the front of many policy discussions. Increasingly, references to excess deaths have dominated discussions of the pandemic and how to respond to it.
This transition has generally been a positive one. While every death is tragic, one of the core pieces of information that policymakers need if they are to craft effective pandemic policy is the number of deaths that wouldn’t have occurred absent the pandemic and what effect a policy has on that number.
As public health practitioners moved to this view, a similar transition occurred among those arguing strongly for SIPs. Arguments about the effect of the pandemic on excess deaths now largely dominate the case for implementing SIPs. The same reasoning has also been employed to suggest that, without SIPs, excess deaths would have been far higher than those actually experienced, a claim often based on speculative models of disease spread and death.
What is missing from too many of these claims is any discussion of data-driven, long-run evaluations of the relationship between SIPs and reduced excess deaths. Indeed, a plethora of articles assert that SIPs and other lockdown measures lower the excess deaths created by the pandemic, presenting raw counts of deaths compared to predicted deaths without employing the standard statistical approaches that would allow such a hypothesis to be tested.
For example, in April 2020 The New York Times published a piece (updated in February 2021) that attempts to reconcile Covid-19 deaths with total deaths. The journalists quote a demographer at the Max Planck Institute for Demographic Research, who says that “today’s rise in all-cause mortality takes place under conditions of extraordinary measures…It is likely that without these measures, the current death toll would be even higher.”
Similarly, in May 2020, Business Insider released an article about the correlation between lockdown duration and excess deaths. They state outright: “Later lockdowns suggest higher excess death rates,” and suggest that places that were quicker to lock down have lower excess death rates.
The use of excess deaths can be illuminating because the data ameliorate many issues with misclassifying Covid deaths. Covid-19 death counts are susceptible to under- or overcounting from misdiagnosis and errors in reporting. Excess death data sidestep this reporting obstacle by simply counting the total number of deaths from all causes at a given place and time. We can therefore gauge the pandemic’s mortality impact by comparing total deaths in 2020 to the number of deaths we would have expected under ‘normal’ conditions, based on historical averages.
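To make the arithmetic concrete, the sketch below shows one common way to compute excess deaths: subtract a historical baseline (here, the average of the same calendar week across prior years) from observed all-cause deaths. The numbers and the table are made up purely for illustration; real analyses use national vital-statistics series and more careful baselines that adjust for trend and seasonality.

```python
# Minimal sketch of an excess-death calculation, assuming a hypothetical
# weekly all-cause mortality table with columns: year, week, deaths.
import pandas as pd

# Hypothetical input data; real analyses would use vital-statistics files.
mortality = pd.DataFrame({
    "year":   [2017, 2018, 2019, 2020] * 2,
    "week":   [10] * 4 + [11] * 4,
    "deaths": [51000, 52500, 51800, 61000, 50900, 52100, 51500, 64000],
})

# Baseline: average deaths in the same calendar week over pre-pandemic years.
baseline = (mortality[mortality["year"] < 2020]
            .groupby("week")["deaths"].mean()
            .rename("expected_deaths"))

# Excess deaths: observed 2020 deaths minus the historical expectation.
observed_2020 = mortality[mortality["year"] == 2020].set_index("week")["deaths"]
excess = (observed_2020 - baseline).rename("excess_deaths")

print(pd.concat([observed_2020, baseline, excess], axis=1))
```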
As an increasing amount of data has become available, better statistical examinations of the excess death data have become possible. Examinations that utilize traditional statistical tools represent an important opportunity to explicitly test claims like those made about the efficacy of SIP orders.
One such study was recently released in the National Bureau of Economic Research (NBER) working paper series. The paper, “The Impact of the COVID-19 Pandemic and Policy Responses on Excess Mortality,” applies an event study framework to excess death data from 43 countries and all US states to determine whether there was a significant change in excess deaths after SIP implementation.
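For readers unfamiliar with the method, the sketch below illustrates the general shape of an event study regression on a regional panel of excess deaths. It is not the paper’s actual specification: the file name, column names, and the eight-week event window are assumptions chosen for illustration.

```python
# Simplified event-study sketch (not the NBER paper's specification), assuming a
# hypothetical weekly panel with columns: region, week, sip_week, and
# excess_deaths_per_100k.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("excess_deaths_panel.csv")  # hypothetical input file

# Event time: weeks relative to SIP implementation, capped at +/- 8 weeks.
panel["event_time"] = (panel["week"] - panel["sip_week"]).clip(-8, 8).astype(int)

# Indicator for each event-time bin; t = -1 is dropped as the reference period.
dummies = pd.get_dummies(panel["event_time"], prefix="et", dtype=float)
dummies.columns = [c.replace("-", "m") for c in dummies.columns]
dummies = dummies.drop(columns=["et_m1"])
data = pd.concat([panel, dummies], axis=1)

# Two-way fixed effects: region and calendar-week effects absorb level differences,
# so the event-time coefficients trace excess mortality around SIP adoption.
formula = ("excess_deaths_per_100k ~ " + " + ".join(dummies.columns)
           + " + C(region) + C(week)")
result = smf.ols(formula, data=data).fit(
    cov_type="cluster", cov_kwds={"groups": data["region"]})
print(result.params.filter(like="et_"))
```

If SIPs reduced excess mortality, the post-adoption coefficients in such a design would turn negative relative to the pre-period; the paper reports the opposite pattern.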
The research, while finding slight differences in SIP impact across states and countries, concludes that “following the implementation of SIP policies, excess mortality increases.” This finding runs counter to the general claims from those supportive of SIP policies. Further, they note that the research did not produce an observable difference in excess death trends before and after the SIP was implemented. They explain that “when comparing across countries, we observe a general upward trend, indicating that countries with a longer duration of SIP policies are the ones with higher excess deaths per 100,000 residents in the 24 weeks following [the first] COVID-19 death.”
While these findings are preliminary and will undergo further peer review, they provide interesting evidence that suggests SIP policies might not lead to the claimed and desired results. The authors acknowledge some limitations to their work, especially in its ability to test for the counterfactual and in the issues surrounding the measurement of total mortality numbers.
Despite these limitations, this study represents a welcome turn to the “‘real world’ impact of SIP policies,” rather than modelers’ assumptions about perfect implementation and adherence. Among the most interesting parts of the study is the authors’ view that, based on the evidence, individual behavior in response to Covid-19 risk might have changed even without SIP orders, implying that SIPs are potentially superfluous. Findings like these are based on actual data and solid methodological approaches, and they are hard to ignore.
Their overarching finding is that SIPs are largely ineffective at reducing excess mortality (except perhaps where populations can be completely isolated). These sorts of studies are of particular importance when expansive and drastic public policies––no matter how seemingly well-intentioned––are implemented. More studies like this one that use real data are needed to better understand both the issues that surround current policy and how to respond to similar problems in the future.
Ultimately, these early findings reinforce the core difficulty that policy planners face. Planners can never know enough to plan for every eventuality, nor can they accurately predict what their plans will actually do. As a result, plans often end up based on a “pretense of knowledge” rather than real-world evidence or understanding.