1. Read this story
about a strange data-sharing deal between Google DeepMind and a London hospital. (Or read the full paper
it’s based on, which actually reads more like a piece of investigative journalism than a research paper.)
2. Think about what a mutually beneficial arrangement that adequately ensures privacy would actually look like.
There are lots of funky things in the original arrangement between the two organizations, in which Royal Free NHS in London gave DeepMind access to patient data, supposedly as part of a program to improve care for patients with kidney problems. The deal was, according to the paper’s authors, vague, opaque, and peculiar in that it prohibited DeepMind—a deep learning company—from using the data for machine learning.
DeepMind, for what it’s worth, has actually blogged about the agreement a couple of times in the past month:
I think the bigger issue here is how we strike the right balance between privacy, innovation and the law. It’s easy to pick a side at either end of the privacy-innovation spectrum, but the reality is that there’s a lot of space in between that we probably haven’t adequately investigated. This includes the very basic question of where data is actually safer from prying eyes (I might argue on Google servers).
A functional public-private partnership on issues like health care could be remarkably beneficial to everyone involved—patients included—but it’s going to require governments to move beyond some preconceived notions about privacy, and companies to put in the extra work to ensure they’re putting privacy first.
For a couple more stories from yesterday generally related to this topic, check out: