
How do we ensure digital healthcare doesn’t leave some patients behind?

The future of healthcare is bright. Daily articles and news reports herald the arrival of digital technologies and artificial intelligence right into the heart of our homes and healthcare institutions. Indeed, technology is no longer seen as an add-on but as essential to the NHS as we look to future-proof our precious health service.

The recent Topol Review, which explored how to prepare the workforce for a digital future, and the release of the “Code of Conduct for Artificial Intelligence Systems used by the NHS” are both statements of intent to deliver this ambitious digital agenda.

In addition, Matt Hancock, secretary of state for health and social care, has identified technology as the answer to some of the challenges faced by the NHS, demonstrating the weight of commitment to artificial intelligence and digital health. The future of this technology is no longer a conversation for tomorrow: it is here. Discussions of technology in healthcare have moved rapidly from ‘if’ to ‘when’ to ‘now’ in the space of a few years.

Many of these discussions focus on AI and digital technology assisting the workforce. Digital health promises a workforce that can focus on patient care by ‘freeing up time’ from administrative tasks, allowing clinical staff to re-prioritise their efforts elsewhere. It offers reassurance that clinicians will be assisted rather than replaced. Shared data sets will allow fluid movement between digital systems with up-to-date patient information. These benefits are of course not limited to clinical staff; patients are set to benefit directly from these advances, with earlier diagnoses, improved quality of care and seamless technology woven into their healthcare, some of it delivered from the comfort of their own (virtual) home.

Too good to be true? Possibly. But it is important that we do not leave the public behind on this exciting journey. It is truly encouraging to see greater emphasis on patient engagement and public consultation in healthcare, yet while distinctions are often acknowledged between different groups of clinical staff and their differing needs (nurses and doctors, for example), patients and their needs are often lumped into one group. To do so is perhaps naïve and does not allow for the public engagement, education and support that this area so desperately needs if it is to work effectively. Engagement to address the ethical and moral dilemmas some of these technologies present must be deep, not simply a “tick box” exercise to fulfil a project plan. It requires diversity and the recognition of patients as individuals rather than a generic body, so that their varied views form the backbone of future policy and research decision-making.

The ‘generic patient’ we so often refer to cannot and should not be generalised. Indeed, where do (or should) we draw the distinction between ‘citizen’ and ‘patient’? ‘The patient’ encompasses vast demographic variability: from a motivated, digitally connected teenager in an inner-city area keen to control their Type 1 diabetes more effectively using an app, to an elderly person living with dementia in a rural community, the group on which our own work focused. A range of views was expressed at our public engagement event with people living with dementia, and it is clear that there is considerable work to be done in this group of patients alone.

Many of the problems perhaps lie in the language used. AI is no doubt a buzzword, but what does it mean to patients? A recent report from MMC Ventures (a venture capital fund) found that 40 per cent of startups claiming to do AI aren’t actually doing it at all. If professionals can’t understand the term AI when they are using it, how can we expect patients and the public to? Digital health, health innovation, AI, machine learning and digital technology are all seemingly used interchangeably. Somehow the patient is expected to make sense of this, and it can be no surprise that AI conjures up misleading images of robots. Indeed, the recent report of a ‘robot’ delivering a final message about their prognosis to a patient in their last hours does little to alleviate these concerns and misconceptions.

Part of the wider conundrum of this rapidly evolving area is how to regulate appropriately. It was clear from our engagement that off-the-shelf devices such as Alexa and Google Home are firmly embedded as tools to help people living with dementia. One theme from our public engagement event was that these devices can offer help and support in the form of a ‘back-up brain’, and indeed they can be incredibly valuable to people living at home. But how do we regulate these devices, and where is the line between their use as a consumer device and as a medical device?

For example, a recent tweet by a healthcare professional heralded as innovative a patient using her Alexa in place of a traditional ‘Life Line’ device, and people rushed to praise this novel approach. Yet even in this single example there are many considerations. Digital assistants such as Alexa no doubt have their uses and have potential in healthcare, but they are currently unlicensed as medical devices. It might not matter if Alexa mishears, drops a connection or fails when you’re trying to add milk to your shopping list; for a patient lying on the floor relying on it to call their loved ones instead of a traditional Life Line device, such a failure can mean the difference between life and death. In an increasingly litigious society, should the device fail, where does the fault lie?

The lines are blurred further when you consider other off-the-shelf devices which don’t use artificial intelligence, such as WiFi cameras. Increasingly, concerned relatives are turning to these devices to keep a close eye on an elderly relative, perhaps one with early signs of memory loss. There are also examples of care homes using them to check the standard of care in their facilities. The legal implications of these devices, however, are perhaps unclear. How or when should they be used for someone who cannot consent to their use, for example in advanced dementia? Does video surveillance without consent constitute a deprivation of liberty, or is it a benign technology facilitating independent living?

It becomes apparent, therefore, that whilst we might be able to regulate some areas of the digital industry, innovation, personal prototyping and the adaptation of technology may outpace regulation. The question becomes more pertinent when formal care facilities such as care homes consider using these devices. How do we decide which technology can be used safely, ethically and within the legal parameters that are rightly needed? Whilst the legal and ethical lines may be blurred in the home, there are clear regulatory and ethical frameworks for using these technologies in formal care settings. But is the current regulatory framework fit for purpose in such a rapidly evolving field?

Much work is ongoing at NHS England to distinguish ‘good’ from ‘bad’ technology and to establish how such standards might be set. But more needs to be done to decide what ‘good vs bad’ looks like for patients.

An AI application looking for early diagnostic markers of dementia, for example, might be appropriately regulated and medically safe, but that doesn’t mean it is necessarily the right thing to do for the public. Whilst early diagnosis is highly prized for curable conditions such as cancer, it is a bigger moral question whether everyone wants to be diagnosed with a condition such as dementia years in advance of any clinical symptoms. How would this risk be managed, for instance, for a patient seeking medical insurance to travel abroad despite not displaying any symptoms of their condition? And how might early diagnosis ‘just because we can’ have an impact on mental health, which is already at breaking point in this country?

As clinicians we have a responsibility to do the best for our patients. Proper engagement with our patients to ensure fully informed consent is imperative. It is not good enough to innovate without engaging the public or properly considering the moral and ethical dilemmas that might arise from the design and implementation of these technologies. This duty lies not only with technologists and scientists but also with clinical staff, who must exercise their own moral judgement in the appropriate use of these technologies.

Many comparisons can be drawn with the pharmaceutical industry in this respect. Drugs are rightly licensed and regulated, and undergo rigorous formal testing in the form of clinical trials. As doctors, we have a responsibility not only to weigh up the evidence presented by these trials but also to exercise professional judgement in deciding on an appropriate treatment for a patient. Technology should be no different. As clinicians of the future we should exercise that same professional judgement and work closely with patients to decide on the best treatments, including those involving technology. Just because the technology exists does not mean it is always appropriate to use it.

Healthcare stands at the dawn of a new era, in some ways akin to the next industrial revolution. It is important that this does not become a technological space race, and that we remember every single day that this technology should benefit, not disadvantage, the patients and public we serve.

Dr James Hadlow is a visiting Darzi Fellow at the University of Kent. Dr Chris Farmer is a professor at the University of Kent. Kent’s Dr Amanda Bates also supported the public engagement work.