Over the past few months the world has experienced a series of Covid-19 outbreaks that have generally followed the same pathway: an initial phase with few infections and limited response, followed by a take-off of the famous epidemic curve accompanied by a country-wide lockdown to flatten the curve. Then, once the curve peaks, governments have to address what President Trump has called “the biggest decision” of his life: when and how to manage de-confinement.
Throughout the pandemic, great emphasis has been placed on the sharing (or lack of it) of critical information across countries — in particular from China — about the spread of the disease. By contrast, relatively little has been said about how Covid-19 could have been better managed by leveraging the advanced data technologies that have transformed businesses over the past 20 years. In this article we discuss one way that governments could leverage those technologies in managing a future pandemic — and perhaps even the closing phases of the current one.
The Power of Personalized Prediction
An alternative approach for policy makers to consider adding to their mix for battling Covid-19 is based on the technology of personalized prediction, which has transformed many industries over the last 20 years. Using machine learning and artificial intelligence (AI), data-driven firms (from “Big Tech” to financial services, travel, insurance, retail, and media) make personalized recommendations for what to buy, and practice personalized pricing, risk assessment, credit scoring, and the like, using the data they have amassed about their customers.
In a recent HBR article, for example, Ming Zeng, Alibaba’s former chief strategy officer, described how Ant Financial, his company’s small business lending operation, can assess loan applicants in real time by analyzing their transaction and communications data on Alibaba’s e-commerce platforms. Meanwhile, companies like Netflix evaluate consumers’ past choices and characteristics to make predictions about what they’ll watch next.
The same approach could work for pandemics — and even the future of Covid-19. Using multiple sources of data, machine-learning models would be trained to measure an individual’s clinical risk of suffering severe outcomes if infected with Covid-19: what is the probability they will need intensive care, for which there are limited resources? How likely is it that they will die? The data could include individuals’ basic medical histories (for Covid-19, the severity of symptoms seems to increase with age and with the presence of co-morbidities such as diabetes or hypertension) as well as other data, such as household composition. For example, a young, healthy individual (who might otherwise be classified as “low risk”) could be classified as “high risk” if he or she lives with elderly or infirm people who would likely need intensive care should they get infected.
These clinical risk predictions could then be used to customize policies and resource allocation at the individual or household level, while appropriately accounting for standard medical liabilities and risks. Such predictions could, for instance, enable us to target social distancing and protection at those with high clinical risk scores, while allowing those with low scores to live more or less normally. The criteria for assigning individuals to high- or low-risk groups would, of course, need to be determined, taking into account available resources, medical liability risks, and other risk trade-offs, but the data-science approaches for this are standard and already used in numerous applications.
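To make the idea concrete, here is a minimal sketch of such a risk classifier. The features (age, number of co-morbidities, whether the person lives with a high-risk household member) come from the discussion above, but the logistic weights and the decision threshold are entirely illustrative assumptions — a real model would learn them from clinical data, not hand-pick them:

```python
import math

def clinical_risk_score(age, comorbidities, lives_with_high_risk):
    """Toy logistic risk score. Weights are illustrative, NOT clinically derived."""
    z = (-5.0
         + 0.06 * age                              # risk rises with age
         + 1.2 * comorbidities                     # e.g. diabetes, hypertension
         + 4.0 * (1 if lives_with_high_risk else 0))  # household composition
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to a probability

# Policy-chosen cut-off; in practice it would reflect ICU capacity,
# liability considerations, and other trade-offs discussed above.
RISK_THRESHOLD = 0.5

def risk_group(age, comorbidities, lives_with_high_risk):
    p = clinical_risk_score(age, comorbidities, lives_with_high_risk)
    return "high" if p >= RISK_THRESHOLD else "low"

print(risk_group(25, 0, False))  # young, healthy, lives alone -> "low"
print(risk_group(25, 0, True))   # same person, but lives with elderly -> "high"
print(risk_group(75, 2, False))  # elderly with co-morbidities -> "high"
```

Note how the household-composition feature alone flips the young, healthy individual from “low” to “high” risk, mirroring the example in the text.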
A personalized approach has multiple benefits. It may help build herd immunity with lower mortality — and fast. It would also allow better — and fairer — resource allocation, for example of scarce medical equipment (such as test kits, protective masks, and hospital beds) or other resources.
De-confinement strategies at later stages of a pandemic — the next key step for Covid-19 in most countries — can benefit in a similar way. Deciding which people to start the de-confinement process with is, by nature, a classification problem similar to those familiar to most data-driven firms. Some governments are already approaching de-confinement by using age as a proxy for risk — a relatively crude classification that potentially misses other high-risk individuals (such as the healthy young people living with the elderly in the example above).
Performing classification based on data and AI prediction models could lead to de-confinement decisions that are safe at the community level and far less costly for individuals and the economy. We know that a key feature of Covid-19 is its exceptionally high transmission rate combined with relatively low rates of severe symptoms and mortality. Data indicate that possibly more than 90% of infected people are either asymptomatic or experience only mild symptoms.
In theory, with a reliable prediction of who these 90% are, we could de-confine all of them. Even if they were to infect each other, they would not develop severe symptoms, and so would neither overwhelm the medical system nor die. De-confining this low-clinical-risk 90% would also speed the build-up of herd immunity, at which point the remaining 10% could be de-confined as well.
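The back-of-envelope arithmetic behind this claim can be made explicit with the classical herd-immunity threshold, 1 − 1/R0 (the fraction of the population that must be immune for the epidemic to recede). The R0 value below is an assumption for illustration; early Covid-19 estimates were very uncertain, roughly in the 2–3 range:

```python
def herd_immunity_threshold(r0):
    """Classical threshold 1 - 1/R0: immune fraction needed to halt spread."""
    return 1.0 - 1.0 / r0

# Assumed basic reproduction number (illustrative, not a measured value).
R0 = 2.5
threshold = herd_immunity_threshold(R0)  # 1 - 1/2.5 = 0.6
low_risk_fraction = 0.90                 # the ~90% mild/asymptomatic cited above

print(f"immunity threshold: {threshold:.0%}")      # 60%
print(f"low-risk pool:      {low_risk_fraction:.0%}")  # 90%
```

Since the assumed low-risk pool (90%) exceeds the threshold (60% for R0 = 2.5), that group alone could in principle carry the population past herd immunity before the high-risk 10% is released — the logic of the de-confinement sequence sketched in the text.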