If you’re frequently trying to make sense of the local and national COVID-19 case and death counts from the CDC’s tracker or elsewhere, but are unsure what it all means, you’re not alone. It can be tough to know what information those numbers actually capture, and how to act on them.
Experts have tips to keep in mind, though: today’s case numbers represent infections that happened a week or so ago, as it often takes days for symptoms to appear. And because of delays in the data, weekly averages are more useful than daily counts for spotting trends. Beyond tips like these for interpreting the data, there is the question of the data itself and whether it captures what it should.
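The weekly-average tip above can be sketched in a few lines of code. This is a minimal illustration with made-up daily counts (not real surveillance data) showing how a trailing 7-day average smooths out day-to-day reporting dips:

```python
# Smoothing noisy daily case counts with a trailing 7-day average.
# The numbers below are hypothetical, for illustration only.

def weekly_average(daily_counts):
    """Return the trailing 7-day average for each day with a full week of data."""
    return [
        sum(daily_counts[i - 6:i + 1]) / 7
        for i in range(6, len(daily_counts))
    ]

# Two hypothetical weeks of daily counts with weekend reporting dips:
daily = [120, 130, 125, 140, 135, 60, 55, 150, 160, 155, 170, 165, 70, 65]
smoothed = weekly_average(daily)
print([round(x) for x in smoothed])  # rises steadily despite the dips
```

Where the raw daily counts swing between the 50s and the 170s, the smoothed series climbs gradually, which is why trend-watchers prefer it.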
Recently published research from experts at Drexel’s Dornsife School of Public Health suggests that failures, both local and global, in how these numbers are gathered and shared may be part of the reason for our confusion.
In an opinion article in the journal Frontiers in Medicine, Alex Ezeh, PhD, a professor, and postdoctoral researcher Garumma T. Feyissa, PhD, both in the Dornsife School of Public Health, write that no global consensus exists on what constitutes a COVID-19 death. In some countries, death counts include deaths attributed to underlying conditions co-existing with COVID-19. Some countries include suspected cases of COVID-19 death. Other countries include neither in counts of total COVID-19 deaths.
The World Health Organization recommends that countries count all “probable or confirmed” COVID-19 deaths, “unless there is a clear alternative cause of death that cannot be related to COVID-19 disease (e.g. trauma).” Yet many countries rely exclusively on a positive laboratory test and underreport the total count as a result.
The authors note multiple reasons for this undercount, including broad variations in COVID-19 testing availability and capacity across countries, and report that polymerase chain reaction (PCR) test sensitivity can be as low as 54%, which may result in a significant number of false-negative cases. By basing their reporting solely on positive test results, these countries risk limiting the death count to severe cases that end in a hospital, which the researchers say is problematic.
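To see why a 54% sensitivity matters, a back-of-the-envelope calculation helps. The sketch below uses hypothetical testing volumes (only the 54% sensitivity figure comes from the article above):

```python
# How low test sensitivity hides cases: a test with 54% sensitivity
# misses 46% of true infections. The testing volume is hypothetical.

def missed_cases(true_infections, sensitivity):
    """Expected number of true infections that test negative (false negatives)."""
    return true_infections * (1 - sensitivity)

# If 10,000 infected people are tested with a 54%-sensitive PCR test:
print(missed_cases(10_000, 0.54))  # -> 4600.0 expected false negatives
```

Nearly half of real infections would never appear in a count based only on positive tests, before even considering people who were never tested.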
“These factors make it difficult to compare case fatality rates across countries, especially in low- and middle-income countries where vital registration and data-recording system is poor,” the authors said. Without a reporting standard, counts cannot be meaningfully compared between countries, which makes it harder to see the direct and indirect effects of the pandemic and to develop evidence for policies aimed at saving lives.
Some news outlets are attempting to tackle this undercounting problem. More than 2.8 million people have died as a result of COVID-19, an analysis from The Wall Street Journal reports. Estimates from The New York Times likewise suggest roughly half a million more deaths than governments’ COVID-19 counts report: far more people died in most countries than in previous years, likely in part from downstream consequences of the pandemic, such as reduced access to overwhelmed hospitals. Worldwide counts, drawing on data from the United States CDC, the WHO, and the European Centre for Disease Prevention and Control, among other sources, are compiled by Johns Hopkins University.
Here in Philadelphia, similar problems with undercounting and misdiagnosing persist. Assistant Professor Neal Goldstein, PhD, and Associate Professor Igor Burstyn, PhD, both in the Dornsife School of Public Health, recently co-authored a paper in the journal Spatial and Spatio-temporal Epidemiology that used a unique approach, factoring in data on cases and testing from the Philadelphia Health Department, to reduce bias in Philadelphia’s case counts and get a more accurate picture of total cases. The findings suggest that the prevalence of COVID-19 in Philadelphia has likely been underestimated, that this undercounting is worse in some zip codes than others, and that false diagnoses play only a minor role in overall reported cases.
“It is critical that public health have the best possible data to plan and respond to public health crises as they unfold, rather than relying upon statistical adjustment, such as ours,” the authors write. “But if empirical estimates of accuracy of surveillance are not possible to obtain (as is the case early in any epidemic) decisions that are enshrined in policy must account for likely bias informed from analyses such as ours, rather than either pretending that surveillance data are perfect, or making qualitative unarticulated judgements about the extent of bias in the data.”
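The general idea behind this kind of statistical adjustment can be illustrated with the classic Rogan-Gladen estimator, which corrects an apparent prevalence for imperfect test sensitivity and specificity. To be clear, this is not the Drexel authors’ exact method (their paper develops its own adjustment), and all the numbers below are hypothetical:

```python
# Rogan-Gladen correction of apparent prevalence for test error.
# A textbook sketch, NOT the method from the paper discussed above;
# all inputs are hypothetical.

def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Estimate true prevalence from the test-positive fraction and test accuracy."""
    adjusted = (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(adjusted, 0.0), 1.0)  # clamp to a valid proportion

# Hypothetical: 8% of tests positive, 54% sensitivity, 99% specificity:
print(rogan_gladen(0.08, 0.54, 0.99))  # -> ~0.132, well above the apparent 8%
```

Even this simple correction shows how a low-sensitivity test can make the true prevalence substantially higher than the raw test-positive fraction suggests, which is the direction of bias the Drexel paper reports for Philadelphia.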
To get through the pandemic, governments, including local health departments, need to allocate finite resources in creative, strategic ways. Insights from these papers and others on how to gather and share data effectively should be the foundation of those decisions.
Media interested in speaking with Drs. Ezeh, Feyissa, Goldstein or Burstyn should contact Greg Richter, News Manager, at email@example.com or 215.895.2614.