How Big Data Has Changed Pandemic Response

The COVID-19 pandemic is often compared with the 1918 Spanish Flu pandemic. Both diseases ravaged global populations because the viruses behind them were novel, and both provoked fears sometimes verging on hysteria.

The responses to the two pandemics couldn’t be more different, however. When large numbers of people became very ill in 1918, no one at the time knew what was responsible. Theories ranging from a planetary misalignment to tainted oats abounded. After all, viruses were not identified until 1933.

In 1918, antibiotics hadn’t been discovered. While the life-saving drugs don’t fight viruses, they are useful in treating bacterial infections secondary to the virus. One treatment commonly prescribed in 1918 was a high dose of aspirin, which is now known to actually worsen symptoms associated with pneumonia.

Fast-forward to 2020, and scientists are tracking the novel coronavirus in a way never before possible.

In 1918, people were afraid. People were dying en masse, but no one knew the cause, how to control the spread of the disease, or whether any treatment was effective. People in 2020 have answers to many of those questions about COVID-19, yet they are still afraid. Could it be that we now know too much?

Thanks to big data, we can now track the virus, which helps scientists design ways to fight the disease. But that same tracking can feed forecast models – like this one created with the SAP Analytics Cloud – that portray very precarious futures. Instead of fear born of an information vacuum, today's fear is the product of an infodemic, driven in large part by big data.


Big Data Helps Fight the COVID-19 Pandemic

The fight against the novel coronavirus using big data is at least twofold. The first step is understanding where outbreaks are occurring and forecasting where to expect them next. By combining big data with AI, experts can build more accurate forecast models and compare them against each other across practically any variable.
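As a toy illustration of how such forecasts work, here is a minimal SIR (Susceptible-Infected-Recovered) simulation in Python. This is a sketch only: the compartment model, parameter values and population size below are illustrative assumptions, not the models or data used by any of the groups mentioned in this article.

```python
# Minimal SIR epidemic forecast with daily time steps.
# All parameter values are hypothetical, for illustration only.

def sir_forecast(population, infected, beta, gamma, days):
    """Return a list of daily counts of currently infected people."""
    s, i, r = population - infected, float(infected), 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # contacts that transmit
        new_recoveries = gamma * i                  # infections that resolve
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(round(i))
    return history

# Illustrative run: 1M people, 100 initial cases, beta = 0.3
# effective contacts/day, gamma = 0.1 (a 10-day infectious period).
curve = sir_forecast(1_000_000, 100, beta=0.3, gamma=0.1, days=180)
print(max(curve))  # peak number of simultaneously infected people
```

Real forecasting systems fit parameters like `beta` and `gamma` to observed case data and attach uncertainty intervals, but the underlying logic of projecting tomorrow's infections from today's is the same.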

Beyond forecasting the virus's path, the next step is developing better prevention tools, an effort also aided by data analysis. Scientists from MIT, for example, are developing contact tracing tools that not only identify anyone who might have come into close proximity with a COVID-19 patient, but do so while protecting the privacy of everyone involved.
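The core idea behind privacy-preserving tracing can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of token-based proximity tracing, not MIT's actual protocol: phones exchange random, unlinkable tokens, and exposure matching happens entirely on the user's own device.

```python
import secrets

class Phone:
    """Simplified sketch of token-based private contact tracing."""
    def __init__(self):
        self.sent = []      # random tokens this phone has broadcast
        self.heard = set()  # tokens received from nearby phones

    def broadcast(self):
        token = secrets.token_hex(16)  # random, unlinkable identifier
        self.sent.append(token)
        return token

    def receive(self, token):
        self.heard.add(token)

def exposed(phone, published_positive_tokens):
    # Matching happens on-device; no location or identity is uploaded.
    return bool(phone.heard & set(published_positive_tokens))

alice, bob = Phone(), Phone()
bob.receive(alice.broadcast())   # Alice and Bob were in proximity
# Later, Alice tests positive and publishes only her random tokens.
print(exposed(bob, alice.sent))  # Bob learns he was exposed
```

Because the tokens are random, neither the published list nor the on-device match reveals who was near whom, when, or where.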

No longer would it be necessary to shut down communities en masse when potential cases could be individually identified. Data can even help identify which communities are failing to follow social distancing guidelines: scientists are collecting location data from millions of mobile devices to make exactly those determinations.

“The near real-time COVID-19 trackers that continuously pull data from sources around the world are helping healthcare workers, scientists, epidemiologists and policymakers aggregate and synthesize incident data on a global basis,” Parexel Chief Medical and Scientific Officer Sy Pretorius told Forbes. “There has been some interesting data resulting from GPS analyses of population movement by region, city, etc., which ultimately helps provide a view of the population’s compliance — or lack of compliance — with social-distancing mandates.”
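The region-level compliance signal Pretorius describes boils down to comparing current movement against a pre-pandemic baseline. The sketch below is a hypothetical, heavily simplified version of that calculation; real analyses aggregate far more data and control for many confounders.

```python
# Hypothetical compliance index from aggregated mobility data:
# percent change in average trips per device vs. a baseline period.

def mobility_change(baseline_trips, current_trips):
    """Percent change in mean daily trips relative to the baseline."""
    base = sum(baseline_trips) / len(baseline_trips)
    now = sum(current_trips) / len(current_trips)
    return round(100 * (now - base) / base, 1)

# Illustrative numbers: ~4 trips/device/day before the pandemic,
# just over 1 trip/device/day under a stay-at-home order.
print(mobility_change([4.1, 3.8, 4.3], [1.2, 1.5, 0.9]))  # → -70.5
```

A strongly negative value suggests a region is largely complying with stay-at-home guidance; a value near zero flags a region where movement hasn't changed.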

Knowledge is almost always power, but there’s only so much the human brain – even that belonging to the smartest scientists – can compute. By tracking the spread of the disease and by using AI to forecast the virus’ path months or more into the future, leaders can plan and focus on the most vulnerable communities.

“In the fight against coronavirus, insight into preventive actions, population mobility, the spread of the disease, and the resilience of people and systems to cope with the virus, can help public health and humanitarian leaders respond more effectively to the COVID-19 epidemic,” Dalberg Data Insights’ Rositsa Zaimova wrote in a blog article.


How Big Data Is Hindering the COVID-19 Response

At the same time big data is helping scientists forecast the novel coronavirus pandemic and design effective responses, it is also hindering the COVID-19 response. Forecast models can create panic within the populace, and the same location data can paint false pictures on which policy is then based. For example, in certain parts of the world it's not uncommon for one person to use multiple cell phones, which inflates device-based population counts.

Tracking GPS from mobile devices also misses population segments, such as children and the elderly, that don't carry mobile devices. And mobility tracking says nothing about important variables such as the purpose of a trip.

Of course, analyzing such a massive – and growing – amount of data requires huge computing power, and more every day. It doesn't take a big server or much bandwidth to set up a VoIP system, but that's simply not the case when analyzing scads of information, and computational resources on that scale don't grow on trees. Thanks to the Memorial Sloan-Kettering Cancer Center and the Folding@home Consortium, individuals can donate their unused computational resources to researchers working to better understand the virus: a free client connects users' PCs to a distributed network of computers.

Meanwhile, a continued shortage of testing across most of the world leaves huge knowledge gaps in determining the number of cases in communities, regions and worldwide. Any forecast model built on such incomplete data can only be so accurate.
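A quick, entirely hypothetical calculation shows how incomplete testing distorts even basic metrics. If only a fraction of infections are ever confirmed by a test, a fatality rate computed from confirmed cases overstates the true rate by the inverse of that fraction:

```python
# Illustrative arithmetic on testing gaps. Every number here is
# hypothetical; none describes any real outbreak.
true_infections = 50_000
ascertainment = 0.20   # assume only 20% of infections are detected
deaths = 500

confirmed = int(true_infections * ascertainment)   # 10,000 known cases
naive_cfr = deaths / confirmed        # fatality rate among confirmed cases
true_ifr = deaths / true_infections   # fatality rate among all infections

print(f"naive CFR: {naive_cfr:.1%}")  # 5.0% — looks five times deadlier
print(f"true IFR:  {true_ifr:.1%}")   # 1.0%
```

The same ascertainment gap propagates into forecasts: a model fitted to confirmed cases is fitted to the testing rate as much as to the epidemic itself.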

“While we are seeing greater advancements with Big Data, as both a society and an industry, we still have steps to take to effectively leverage the power of Big Data in search of a cure for COVID-19,” Pretorius said. “Advanced analytics and signal detection within health care systems is one of several large automation improvements that will help surface signs of a start of a pandemic. More integration within a global system needs to continue.”


Big Data’s Influence on Pandemic Response

COVID-19 is far from the world's first pandemic, but how often have communities and governments completely shut down in response? Is the sickness that much worse than a century ago, or do we just know more about it? What's with the hoarding and the fear?

While it's impossible to know exactly what caused the response shift, it's unrealistic to believe data analytics played no part in it. Whereas people in 1918 were afraid of a disease they couldn't understand, people in 2020 are afraid because they know so much. We know just how fast the virus is spreading, how many are dying, how many are considered recovered and what treatments are proving effective. We see models that tell us just where the COVID-19 pandemic will be in six months or a year. But what do we do with that information? Apparently, we hoard toilet paper.
