Is a Return to Normal Worth Our Privacy? Part 1: Voluntary Health Tracking

Part 1 of who knows how many. Just another of many unknowns in these crazy times.

“How ironic,” I remarked to my wife just this morning as we walked along the empty main street in our hometown, “that the greatest expression of community-minded behavior in our lifetime has been to drive ourselves into isolation.”

What a great collective act of unselfishness we’ve seen!

Across our towns, our states, our nation, and the globe, perfectly healthy people have voluntarily stayed at home not just to protect themselves, but truly to protect others, to protect those at risk and those caring for the at-risk.

That we put ourselves at economic risk only makes this collective effort more noble. (Some will quibble that much of our isolation was forced, but you can’t deny the extent to which people took this challenge well beyond what was required.)

Contact Tracing: A New “Normal”?

But another test of our community-mindedness will soon be upon us as we seek to return to “normal.”

This return to normal will test our understanding of and commitment to privacy like never before. An early test will be whether and how we accept the ongoing efforts to introduce “contact tracing,” which is widely heralded as one of the first requirements of a return to normal.

Much attention is being paid to the work Apple and Google are doing to develop the technology that will end up being incorporated into a contact tracing app we’d all be asked to use on our phones. But theirs is hardly the only such effort (more on this and other adventures in contact tracing in future articles).

To safely trace contacts, countless individuals will need to offer access to and permit the exchange of personal data like never before. These same individuals will need to trust that this data will be stored securely, only used for the stated purposes and by trusted people, and then destroyed when it is no longer useful (among other considerations).
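To make the mechanics concrete, here is a minimal sketch of the decentralized, rotating-identifier idea behind proposals like the Apple/Google effort. Everything in it (the key size, the rotation schedule, the derivation) is an illustrative assumption on my part, not the actual protocol:

```python
# Illustrative sketch of a decentralized contact-tracing exchange.
# Details are assumptions for illustration, not the real protocol.
import hashlib
import os

def new_daily_key() -> bytes:
    # Each phone generates a fresh random secret every day.
    return os.urandom(16)

def proximity_id(daily_key: bytes, interval: int) -> bytes:
    # Derive a short-lived identifier that rotates (say, every 15
    # minutes) so passers-by can't link broadcasts back to one phone.
    return hashlib.sha256(daily_key + interval.to_bytes(4, "big")).digest()[:16]

# Alice's phone broadcasts rotating IDs; Bob's phone stores what it hears.
alice_key = new_daily_key()
bob_heard = {proximity_id(alice_key, i) for i in range(96)}  # one day of intervals

# If Alice later tests positive, she uploads only her daily keys.
# Bob's phone re-derives her IDs locally and checks for a match.
published_keys = [alice_key]
exposed = any(
    proximity_id(k, i) in bob_heard
    for k in published_keys
    for i in range(96)
)
print("possible exposure:", exposed)
```

The appeal of this style of design is that phones exchange only short-lived random identifiers; names, locations, and contact lists never leave the device unless someone chooses to report a positive test.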

Crash Course in Security and Privacy

As we come to terms with this exchange of data, both as a society and as individuals, we’ll get a concentrated crash course in privacy and security. It will be the greatest unguided security and privacy awareness program ever! I for one will be fascinated to see how it plays out, and I’ll be tracking it on this blog.

In Part 1, I’ll discuss the largely voluntary COVID-related health tracking that is starting to appear in advance of the full-on testing and contact tracing that has been so widely reported. The two projects I describe—COVID Symptom Tracker and DETECT Health Study—offer an opportunity to consider some of the issues that will soon be upon all of us.

COVID Symptom Tracker

“Take 1 minute each day and help fight the outbreak in your community,” reads the appeal on the COVID Symptom Tracker website.

The premise is simple. If enough people self-report whether they’ve been tested and how they’re feeling, the aggregate data can give healthcare professionals a good read on disease spread.

Here’s lead researcher Andrew Chan, MD, MPH: “If enough Americans share daily how they feel, even if they’re well, this app can provide the healthcare system with critically valuable information.”

Doctors and scientists at Massachusetts General Hospital, the Harvard T.H. Chan School of Public Health, King’s College London, and Stanford University School of Medicine built the COVID Symptom Tracker with backing from the health science company Zoe. Full disclosure: I participated in an earlier study backed by Zoe, and they had already won my trust with their clarity and their handling of my data.

Sharing your data is simple: you download the app to your phone and enable notifications. Every day you’re prompted to answer a few simple questions: Have you had a test? How are you feeling? Depending on your answers, other questions follow, or none at all. The data goes to researchers, who use it to track disease spread across U.S. states and in the U.K.
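For a sense of how little data a day’s answers amount to, here’s a hypothetical sketch of what one report might look like; the field names and structure are my guesses, not the app’s actual schema:

```python
# Hypothetical shape of one day's symptom report (field names are
# guesses, not the COVID Symptom Tracker's actual schema).
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class DailyReport:
    report_date: date
    feeling_well: bool
    tested: bool = False
    test_result: Optional[str] = None        # e.g. "negative", "positive"
    symptoms: List[str] = field(default_factory=list)
    postal_code: str = ""                    # coarse location, for mapping spread

# On a healthy day, the report takes seconds:
today = DailyReport(report_date=date.today(), feeling_well=True)
print(today)
```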

DETECT Health Study

“Your wearable might be an early warning system for viral illness,” is the promise of the DETECT Health Study being run by Scripps Research.

Their early research shows that the onset of a cold, flu, coronavirus, or other viral infection may be predicted by changes in your heart rate. If you wear a device like an Apple Watch or a Fitbit, those changes are likely already being tracked on a regular basis.
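For a rough intuition of how that early warning could work, here’s a minimal sketch that flags days when resting heart rate drifts well above a personal baseline. The window and threshold are invented for illustration and are not the DETECT study’s actual method:

```python
# Sketch of flagging unusual resting heart rate against a personal
# baseline (window and threshold are invented for illustration).
from statistics import mean, stdev

def flag_anomalies(daily_rhr, window=28, z=2.0):
    """Return indices of days whose resting heart rate sits more than
    z standard deviations above the trailing window-day baseline."""
    flagged = []
    for i in range(window, len(daily_rhr)):
        baseline = daily_rhr[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_rhr[i] - mu) / sigma > z:
            flagged.append(i)
    return flagged

# Example: a stable baseline around 60 bpm, then a sudden jump.
history = [60 + (i % 3) for i in range(28)] + [72]
print(flag_anomalies(history))  # -> [28]
```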

“Our goal is to identify areas with viral outbreaks quickly,” say the researchers. “We hope that an early warning might someday give public health officials more time to take action, and for you to take better care of yourself and your loved ones.”

The DETECT study asks little of participants: once they sign up and enable the data exchange, all they need to do is wear their devices regularly. The study is led by Jennifer Radin, Ph.D., an epidemiologist at the Scripps Research Translational Institute, and it runs on an application called MyDataHelps, built by CareEvolution. The study is in its early stages, and the team is not yet publicly reporting data.

Can You Trust Them?

With these studies and their associated apps, the ultimate question you have to ask yourself is: can you trust them? For me, the answer is yes. I participate in both of the studies I’ve highlighted here. Here’s how I decided I could trust them:

  • I read every word they published about their studies, including the research papers behind them
  • I researched the app makers and their platforms
  • I studied the privacy policies they used to guide their data collection and handling (this is my health information; I didn’t just click through!)
  • I downloaded the apps directly from the App Store, which has a rigorous approval process

Even with all this, I know I’m taking a calculated risk that my data could be exposed. I understand there is no guarantee. No app is unhackable, no data repository unbreachable.

But I believe the good associated with these studies outweighs the potential harm.

I’ll leave an in-depth exploration of how to decide whether to trust a recipient of your personal data for another time—it’s a really important question each of us has to get comfortable with. But as I hope my brief description shows, making that determination takes real work and attention on your part.

Many of us will come up with shortcuts: some will say yes if it’s backed by Apple; others will take comfort in a company that follows GDPR privacy practices. To be clear, I use these shortcuts all the time, especially when the data I’m sharing isn’t particularly sensitive.

The Uncharted Data Landscape

But we’re about to enter one of the most sensitive data-sharing experiments of all time—there are already a number of efforts I haven’t mentioned—and I for one will be taking no shortcuts in deciding whether or not to trust the recipients of my data. I’ll do my research and pay close attention to all parties involved—but in the end I won’t have perfect information and will have to decide to trust … or not.

In Part 2, though, I’ll consider some reasons NOT to trust, ranging from a basic suspicion the technology won’t work, to a fundamental distrust of big tech, to the dark fear that we’re about to enter a dystopia of big government control. It could get weird!
