Were you lucky enough to attend last Monday’s UCL conference with Dr Shane Legg, co-founder and chief scientist at DeepMind? Dr Legg explained DeepMind’s beginnings in 2010 and how it rapidly became the world leader in artificial intelligence (AI) research. It’s no surprise that the web giant Google acquired the company for a reported £400 million in 2014. DeepMind operates in various fields including climate change, healthcare and energy, developing complex AI algorithms that provide innovative and sustainable answers to problems no individual could solve alone. The launch of DeepMind Health last February extended this technology to healthcare operations, with the intention of enhancing the quality of care, providing more personalised patient care and cutting ineffective expenses in the health sector. The AI market’s revenue is currently estimated at less than 1 billion USD but is expected to reach 16 billion USD by 2022, with the healthcare sector showing the highest growth. DeepMind’s technology sits at the crossroads of multiple parties, spanning care providers, patients and governments. What does this imply in terms of anticipatory governance, and how do we preserve healthcare’s primary aim, which is to cure patients?
DeepMind Health’s “Streams” project is a centralised phone app that processes all sorts of medical data, from allergies to live-streamed blood test results. In the foreseeable future it will enable caregivers to receive live patient alerts, retrieve medical records more efficiently and share patient information with one another in new ways. Beyond optimising coordination between care providers, DeepMind is also working on AI algorithms that provide digital support for medical decision-making. A collaboration with UCL Hospital focuses on machine-learning algorithms and the radiotherapy process used in the treatment of oral, head and neck cancers. In the long term, they hope this technology will automatically distinguish healthy from cancerous tissue.
The implementation of AI technologies in the healthcare ecosystem is unique and unprecedented in terms of how the technology is being built. Data processing is no longer just a consequence of the will to rationalise biological or health data, as it was in the Human Genome Project era. Data processing is now primarily a driver for building new technologies. DeepMind Health perfectly illustrates this endeavour: its technology has an unanticipated scope of action and depends on the use of our medical data. The company needs to collect vast amounts of medical data but has made no clear-cut statements on its applications. “We are still figuring out what this data might potentially be used for,” said Mustafa Suleyman, Head of DeepMind Health. In terms of regulation, this means we cannot think of this technology as a “product”; we need to apply regulations that take into account the different processes that underpin the technology. What are those?
Machine-learning algorithms have the capacity to learn without being explicitly programmed. This makes them increasingly powerful as time passes and medical records accumulate. DeepMind, and AI corporations in general, offer technologies that are conditioned by access to data. Thus, agency over this technology lies not in the hands of those building it, but in the hands of those who control the medical records. Last April, DeepMind Health agreed a five-year collaboration with a UK National Health Service (NHS) trust, giving it access to the largest data bank in the country, covering an estimated 1.6 million patients each year. When power and potential don’t lie in the same hands, how can the public be empowered to play a meaningful role in shaping the future of the technology?
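The claim that such algorithms grow more powerful as records accumulate can be illustrated with a deliberately toy sketch. Nothing below reflects DeepMind’s actual methods: the “biomarker”, the two diagnosis classes and the nearest-centroid “learning” rule are all invented for illustration. The point is only that the same simple learner, given more records, makes more accurate predictions.

```python
import random

random.seed(0)

def make_patient(label):
    # Synthetic "biomarker" reading: class 0 centred at 0.3, class 1 at 0.7
    centre = 0.3 if label == 0 else 0.7
    return (centre + random.gauss(0, 0.15), label)

def train_centroids(records):
    # "Learning" here is just averaging each class's readings
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in records:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(centroids, records):
    # Predict the class whose centroid is nearest to the reading
    correct = sum(
        1 for x, y in records
        if min(centroids, key=lambda c: abs(x - centroids[c])) == y
    )
    return correct / len(records)

test_set = [make_patient(i % 2) for i in range(1000)]
for n in (4, 40, 400):
    model = train_centroids([make_patient(i % 2) for i in range(n)])
    print(n, "records ->", round(accuracy(model, test_set), 3))
```

With only a handful of records the class averages are noisy; with hundreds they settle close to the true centres and accuracy stabilises. Real clinical models are vastly more complex, but they share this basic dependence on data volume, which is exactly why access to records confers so much leverage.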
DeepMind complied with a very strict legal framework for the sharing agreement with the NHS. Nonetheless, the news drew tremendous criticism in the public sphere, where DeepMind was accused of not obtaining patients’ informed consent. The public’s trust in sharing its personal information for “the common good” isn’t self-evident, mainly out of worry about misuse. This controversy obscured the reality: the NHS is the legally bound owner of UK medical records, and the agreement was made on the basis of “implied consent”, a regulation managed by an NHS branch. Surely we are not yet accustomed to a third party having access to our medical records. Nonetheless, the agreement’s strict legal basis needs to be acknowledged and voiced more widely in order to limit the public’s fears in the face of such a paradigm shift. Only then can we concentrate on the real issue, which is finding ways to regulate transparency within these third parties.
We are widely accustomed to debates around data privacy and misuse from the Internet. What changes when the personal data are medical? Let’s think of DeepMind’s health AI technologies as a mirror of our common experience with social networks. Our personal data, here of the medical kind, feed deep-learning algorithms and subsequently produce output results. Social media users are commoditised as immaterial labour: individual information (tastes, preferences and habits) is traced, collected and then monetised through targeted advertisements and filter bubbles. This is the law imposed by “web cookies”. But cookies and health don’t go together. Here the paradigm shift lies in the fact that AI technology becomes powerful when data is taken collectively rather than individually. The more medical information there is, the more effectively the technology operates and learns to give accurate predictions. This strongly implies that patients, whose data are being used, remain the group whose interests matter most, and also the ones who truly benefit from the technology.
It also implies that the fewer restrictions we put upon collectively owned data, the more efficient this technology will be. However, this largely goes against current systems of healthcare governance. The UK is a special case, as the NHS is considered one of the most open and transparent health systems in the world, and its model greatly values the opening of data. Its budget is 98.8% funded by taxpayers and national insurance contributions. By contrast, in most other countries, such as the US, healthcare systems tilt toward the individual because they are based on personal health insurance. Essentially, the values that lie at the heart of DeepMind’s technology can clash with these current, largely national models of healthcare governance. There is an urgent need to encourage international cooperation among different healthcare systems, as well as non-governmental international stakeholders, and to discuss open international standards for medical records. Overcoming the challenge of interoperability in healthcare would be the most significant step for the future of AI technologies.
Overall, the main challenge in benefiting from AI technologies in healthcare is to transition current values of data privacy toward more transparent, open and international standards. Safeguarding human agency and moral choice in this process will not be easy. Patients are becoming increasingly active players in their care journey, which puts them in the best position to encourage these shifts in values and preserve healthcare’s aim. At DeepMind, “No decision about me without me” seems to be the golden standard of patient involvement. Last September they hosted a special event gathering patients and clinicians in an attempt to define what patient involvement should look like. The future of AI in health could really benefit from multiplying these kinds of endeavours.