What is the Price of Privacy in Healthcare AI?

By Rich Durante, Managing Director, Pharmaceuticals and Medicines | January 21, 2020


Diagnosis is hard. With a diagnostic error occurring in an estimated 10%-15% of cases worldwide, the medical profession is looking to healthcare innovations including AI to help bring those numbers down. And the investment is huge – the global market for AI in healthcare is expected to rise from $1.3 billion in 2019 to $6.6 billion by 2021.

Patients are also very enthusiastic about AI’s potential role. Our research found that three-quarters of Americans would be interested in having AI involved in their diagnosis. As in other sectors, however, many patients are concerned about data privacy and potential bias.

The field of AI has seen its fair share of hype. IBM Watson, for example, has to date failed to deliver on its early promise. But analysis of multiple studies of different algorithms has shown the diagnostic performance of deep learning models to be equivalent to that of healthcare professionals.

Dr. Alberto Distefano, a professor of ophthalmology at Yale, is optimistic about AI’s role in diagnosis. In an interview with Rich Durante, Managing Director at Maru/Matchbox, he said:

“Artificial intelligence (AI) can play a large role in shaping and guiding clinical judgement. AI has the advantage of performing quick calculations, sorting and analyzing large amounts of data, and coming up with a differential diagnosis based on the best fit. While a clinician does the same thing, the quick creation of a differential diagnosis is based on a learned pattern of recognition based on patient history, exam, labs, imaging, and pathology. AI has the advantage of being able to analyze all these data faster than a human.”

“In the end,” he suggested, “AI and clinical providers can work together to provide the ultimate in healthcare to their patients. AI can analyze all of the data given (especially now as healthcare continues to grow with electronic medical records) to create a differential diagnosis that the clinician can then review and implement (or override, if necessary). Patients will benefit from quicker diagnoses with decreased unnecessary testing and knowledge of all available treatments.”

Personal becomes public

A recent study found that an algorithm widely used in US hospitals to allocate healthcare to patients has been systematically discriminating against black people, mimicking what has been observed in the use of AI in other arenas, including policing. Bias is a problem. Privacy is another important concern.

AI-related privacy issues tend to fly under the radar compared to bias, but privacy is arguably the graver concern. If your personal data is compromised, it can be used and misused in truly alarming ways.

It is hard to get more personal than diagnostic test results. Scans, bloodwork, and tissue biopsies can all be very revealing, not only about the disease but also about the person. And people can often be identified from health data—even when their personally identifiable information is stripped out.

In a study published last winter, researchers were able to reidentify patients based on data from their walking habits. “The results point out a major problem,” said Anil Aswani, one of the study’s authors. “If you strip all the identifying information, it doesn’t protect you as much as you’d think. Someone else can come back and put it all back together if they have the right kind of information.”

Clearly there is a risk that rich diagnostic data could be married to other datasets and used to identify individuals, revealing the most sensitive of details.
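To make that linkage risk concrete, here is a minimal sketch in Python (with entirely made-up records and hypothetical column names, not data from any real study) showing how a “de-identified” diagnostic table can be re-joined to a public dataset using shared quasi-identifiers like ZIP code, birth year, and sex:

    import pandas as pd

    # Hypothetical "de-identified" diagnostic records: names stripped,
    # but quasi-identifiers (ZIP code, birth year, sex) remain.
    health = pd.DataFrame({
        "zip":        ["02139", "02139", "60614"],
        "birth_year": [1958,    1982,    1975],
        "sex":        ["F",     "M",     "F"],
        "diagnosis":  ["melanoma", "diabetes", "asthma"],
    })

    # Hypothetical public dataset (say, a voter roll) carrying the same
    # quasi-identifiers plus names.
    public = pd.DataFrame({
        "name":       ["A. Jones", "B. Smith", "C. Lee"],
        "zip":        ["02139",    "02139",    "60614"],
        "birth_year": [1958,       1982,       1975],
        "sex":        ["F",        "M",        "F"],
    })

    # Joining on the shared quasi-identifiers re-attaches names to
    # diagnoses -- no "personally identifiable information" required.
    reidentified = health.merge(public, on=["zip", "birth_year", "sex"])
    print(reidentified[["name", "diagnosis"]])

No names, addresses, or ID numbers appear in the health table, yet the join recovers them anyway. That is exactly the mechanism the gait study exploited.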

Turn up the noise

Compromising health records, including diagnostic information, would be a colossal breach of privacy. As a result, privacy researchers are trying to find ways to protect data from attackers, who are increasingly using AI to spot patterns and predict behavior.

There is an intriguing weakness in AI, though. Small, carefully chosen changes to an input can confuse a machine learning system, obscuring patterns and confounding predictions. Inputs modified in this way are known as adversarial examples.

The basic premise is that noise is introduced into the data – either as new, fabricated records or as minor modifications to existing ones. Doing this makes it more difficult to generate accurate predictions, which, when the aim is protecting privacy, is exactly the point.
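As a rough illustration (a sketch on synthetic data, not any researcher’s actual method), the snippet below trains a simple stand-in model and nudges a single record a small step along the sign of the model’s weight vector – the same gradient-sign idea behind many adversarial examples – which is often enough to change the prediction:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Stand-in "diagnostic" model trained on synthetic data; all values
    # here are illustrative only.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Take one record and push it a small step along the sign of the
    # model's weights -- the direction that most changes its score.
    x = X[0]
    epsilon = 0.5  # small per-feature perturbation budget
    direction = np.sign(model.coef_[0])
    # Step toward the opposite class of the current prediction.
    if model.predict([x])[0] == 1:
        x_adv = x - epsilon * direction
    else:
        x_adv = x + epsilon * direction

    print("score before:", model.decision_function([x])[0])
    print("score after: ", model.decision_function([x_adv])[0])
    print("prediction before/after:", model.predict([x])[0], model.predict([x_adv])[0])

The perturbation is tiny per feature, yet it can swing the model’s decision – the vulnerability privacy researchers hope to repurpose as a shield.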

Privacy researchers are exploring whether adversarial examples can be used to help protect privacy by adding just enough noise to the data to ensure it is not personally identifiable. “Attackers share in the power of machine learning and also its vulnerabilities. We can turn this vulnerability, adversarial examples, into a weapon to defend our privacy,” said Neil Gong, a Duke computer science professor doing research in this area.

The dilemma is that adding noise reduces accuracy. If we reduce the accuracy of diagnosis, we increase the risk of false positives and false negatives. A false-positive diagnosis can lead to unnecessary treatment, with pointless expenditures and potentially nasty side effects. A false negative could lead to a missed diagnosis, possibly resulting in a cancer that is detected only after it has metastasized.

Walking the tightrope in healthcare innovations

We have a balancing act on our hands. Plenty of noise protects privacy, but it also destroys accuracy; too little noise leaves datasets vulnerable and open to exploitation.
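A toy sweep on synthetic data (illustrative only, with assumed noise levels) shows the shape of the trade-off: as the random noise added to training records grows, individual records become better obscured, but the model trained on them loses accuracy:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for diagnostic data.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    for sigma in [0.0, 0.5, 1.0, 2.0, 4.0]:
        # More noise obscures individual records better -- and typically
        # costs the model trained on them some accuracy.
        noisy = X_train + rng.normal(scale=sigma, size=X_train.shape)
        acc = LogisticRegression(max_iter=1000).fit(noisy, y_train).score(X_test, y_test)
        print(f"noise sigma = {sigma:.1f}   test accuracy = {acc:.2f}")

Where on that curve a diagnostic system should sit is precisely the open question.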

At this early stage, no one knows whether adding noise in the form of adversarial examples will work in practice, but the trade-offs involved make it clear just how tricky it will be to balance privacy with the data AI needs to assist in diagnosis.

The potential for AI in diagnosis is tantalizing. But the risks are very real.

The days ahead for AI in healthcare

AI is just one small part of a larger trend toward digital technology becoming an ever more vital part of medicine. Digital medicine enables all sorts of new approaches from remote consults to robotic exoskeletons for people who are paralyzed. As we move forward, sorting out what works and what doesn’t will invariably involve dead-ends and some harmful data breaches. But the future looks bright to Dr. Distefano and others.

We asked him what he thinks it will be like looking back 20 years from now when digital medicine has matured:

“I cannot wait for that moment when we look back and wonder how we managed to provide care during that time! From difficulty in communication with the office, to missed diagnoses, to archaic treatment regimens that didn’t take into account new research, to the large amount of unnecessary testing that was ordered—there is just so much we won’t miss. In comparison, the future of digital medicine will bring an era of efficient communication amongst providers, staff, and patients, improved analysis of labs and imaging, quick diagnosis without the need to order yet another test, saved money for the healthcare system, decreased administrative burden, and overall happy patients and physicians.”

There is a lot of work to be done before we get to the promised land Dr. Distefano describes. We need to walk tightropes, navigate mazes, and pilot tumultuous waters. All along the way, gauging the opinion of patients, healthcare providers, and other stakeholders will be crucial. Let us help you navigate the use of AI in healthcare together.
