
Applying the “Do No Harm” Principle to Health Data

Author: Joyce Lee
Publish Date: October 15, 2018

At Datavant, we build software that makes it possible to share patient data securely, but our mission is bigger than that: to make it easier for players in the healthcare system to connect and use their health data for the good of patients. This mission requires a broad conception of data ethics — one that encompasses both data security and patient privacy, but also an evaluation of the risk, cost and benefit of different data strategies. We believe that strong data ethics is essential for a company building a health data ecosystem, and we would like to share a few brief thoughts on our philosophical approach here.

What is the “do no harm” principle and how does it apply to health data?

One of the great ironies of health data is that the sources of the data — the patients — can derive limited value from their own information. Most patients need help from other people or organizations that can explain their data to them, use it to improve their care, or conduct additional analysis. In other words, in order to get value from their own data, patients have to share it, and patients’ willingness to share their data is based on trust that those they share it with will use it for good. In our view, these data holders — stewards, if you will — are bound to serve the patient in accordance with the most recognizable axiom of the healthcare profession: “First, do no harm.”

Traditionally applied, the “do no harm” principle requires that healthcare providers weigh the risk that a given course of action will hurt a patient against its potential to improve the patient’s condition; in short, it requires a cost-benefit analysis. This cost-benefit analysis is rarely straightforward. All medical treatments carry risk, and each patient’s risk threshold is unique. Because of the sensitivity of medical information and the vulnerability of the patient seeking care, the doctor is bound to confidentiality with respect to the patient’s information. And given the direct impact of the provider’s decision on the patient, the doctor must respect the patient’s right to understand the cost-benefit calculation and — whenever possible — to consent to any course of action.

When it comes to the use and disclosure of health data, the interpretation of the “do no harm” maxim is similar: data stewards must weigh the risk of harming a patient (often a breach of patient privacy), against the potential benefits. With health data, the harm of a breach or other violation of a patient’s data rights is often acutely felt by the individual while the benefit of analysis on an aggregated dataset is significant but spread across many patients. Data stewards must appreciate this asymmetry from the patient’s perspective, and guard patient privacy carefully while only sharing data for purposes that will improve patient outcomes.

Breaches of privacy as patient harm

In contrast to healthcare providers, the risk that data stewards assess is not bodily harm but informational harm: the loss of control over one’s unique and private information, which could result in reputational, emotional, or financial damage. The appropriate mitigation for informational harm is to remove the patient’s identifying information from the data prior to sharing it. In the data world, we refer to this process as “data protection,” and it parallels the doctor’s obligation of confidentiality in the provider-patient relationship.

The notion of “confidentiality” or “data protection” in data sharing refers to enforcing technical and administrative controls around information access to minimize the risk of harm to the data subject. It is achieved through the use of strong security and privacy controls, which have been codified for healthcare players in the Health Insurance Portability and Accountability Act (HIPAA).

Under HIPAA, security controls are technical and administrative protocols that mitigate the risk of unauthorized access to and disclosure of information. These include things like access permissions and penetration testing. By contrast, privacy controls are infrastructure and process requirements that mitigate the risk of unauthorized access to and disclosure of identity.
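To make the distinction concrete, here is a minimal Python sketch of one familiar security control, role-based access permissions. The roles, actions, and policy table are hypothetical illustrations, not Datavant’s implementation or a HIPAA specification.

```python
# A rough sketch of a security control: role-based access permissions.
# Roles and actions below are hypothetical examples for illustration only.

ALLOWED_ACTIONS = {
    "clinician": {"read_record", "update_record"},
    "analyst": {"read_deidentified"},
    "billing": {"read_claims"},
}

def is_authorized(role: str, action: str) -> bool:
    """Return True only if the role is explicitly granted the action."""
    return action in ALLOWED_ACTIONS.get(role, set())

# An analyst may read de-identified data but not identified records.
assert is_authorized("analyst", "read_deidentified")
assert not is_authorized("analyst", "read_record")
```

Privacy controls, by contrast, act on the data itself rather than on who can reach it, which is where de-identification comes in.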

HIPAA’s privacy requirements focus on de-identification as the primary identity protection mechanism. De-identification removes the identity of the patient from the data, thereby mitigating the risk of direct harm to the patient, which could be emotional, financial, or reputational damage resulting from the unauthorized access to or use of identifiable and sensitive information.
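As a simple illustration of the idea, the sketch below removes direct identifiers from a patient record before sharing. The field names are invented for this example; an actual HIPAA de-identification process (Safe Harbor or Expert Determination) covers many more identifier types and requires formal review.

```python
# A minimal, illustrative sketch of de-identification: dropping direct
# identifiers from a record before sharing. Field names are hypothetical;
# this is not a complete or compliant de-identification procedure.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "ssn": "123-45-6789",
    "diagnosis_code": "E11.9",
    "age_band": "40-49",
}
print(deidentify(record))  # {'diagnosis_code': 'E11.9', 'age_band': '40-49'}
```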

Assessing patient benefit

Under the “do no harm” principle, mitigating the risk of harm — in this case, a violation of privacy — is only half the challenge. The other half is ensuring that patients benefit from the value derived from data sharing.

When defining patient benefit, it’s worth considering why patients disclose information in the first place. Patients don’t voluntarily disclose their data unless they anticipate some value from doing so. There are four basic categories of patient value:

1. Improved diagnosis, including both diagnostic accuracy and how the diagnosis is delivered

2. Improved treatment, including treatment safety, efficacy and availability

3. Improved care, including both quality and affordability

4. Improved information, including both improved access to relevant medical information and greater control over personal data

Below, we briefly consider how data stewards reference these categories when performing a “do no harm” analysis.

“Do no harm” in practice

Imagine a hospital and a health analytics company want to work together to determine correlations in patient symptoms, behavior, or health history that might help the hospital recognize patients who need urgent care or triage patients for more effective care. If the hospital can provide adequately de-identified health data to the analytics company, the privacy risk to the patient is mitigated, and there is benefit to the patient in the form of both improved quality of care and (hopefully) a more efficient diagnosis. In short, under a “do no harm” analysis, it makes sense to move forward.

Contrast this with a lender using medical information to ascertain whether an applicant should pay a higher rate because of a genetic condition. In this situation, the patient’s privacy is not protected (their medical information is not de-identified), and the lender is using the information to discriminate against certain types of borrowers based on genetic conditions. The applicant is harmed by the lender’s use of their personal data, and there is no corresponding benefit.

At Datavant, we believe that the healthcare system can unlock tremendous value through the responsible sharing of patient data across silos. And while the goal of “sharing responsibly” can present a number of ethical questions and challenges, we believe that the medical profession’s existing tenet — “do no harm” — serves as a strong first principle.

