At a recent event, a member of the audience asked for an example of an industry where the “genie had been put back into the bottle” with regards to regulation. The implication was that it’s too late to regulate artificial intelligence (AI) and digital health and therefore, what is the point in trying?
One example might be the pharmaceutical industry.
Medicine has moved a long way since the days of innovation without regulation. In 19th-century Germany, August Bier discovered the technique of spinal anaesthesia – an injection in the back which numbs the legs and which remains a commonly used anaesthetic technique to this day. The technique originated with Bier testing it on his assistant Hildebrandt, and vice versa, assessing the block with burning cigars and blows from a hammer.
This practice was completely unregulated and unethical by modern-day standards, and the example is not unique. Without a rigid framework or regulatory bodies, innovation in medicine rocketed, and it is no coincidence that this era produced dramatic medical advances. But this was not without its price – failure and death were common.
Thankfully, regulation has caught up with innovation in the case of pharmaceutical medicine. Now organisations such as the MHRA in the UK and the FDA in the US regulate our medicines to try to mitigate these risks. Clinicians are taught to practise evidence-based medicine grounded in the scrutiny of peer-reviewed journals, and other organisations such as NICE provide decision-making guidance.
For a 19th-century physician and innovator like Bier, regulatory bodies such as these would have been unimaginable. In fact, the very concept of regulation would probably have been perceived as an inconvenience. Unfortunately, it has historically taken events such as the thalidomide tragedy of the early 1960s to trigger rigorous regulation. However, we cannot simply wait and allow tragedy or near misses to shape policy in digital health.
A new era of medicine
The parallels with the world of technology in 2019 are therefore not difficult to see. We stand at the dawn of a new era in medicine, much like a modern-day industrial revolution.
One common reason given for not regulating the digital market in the same way is cost. Pharmaceutical companies spend an average of £1.15bn to bring a drug to market through the rigorous research and development process, including clinical trials. How could such a process be superimposed onto the world of digital health?
Yet there is an “elephant in the room”. Self-regulation and clinical trials cost money, and most technology organisations do not have the deep pockets of multinational pharmaceutical organisations. Indeed, for consumers, drug costs are controversially high in order to pay for this level of scrutiny and regulation. Can technology somehow absorb these costs? In addition, regulation in digital medicine, particularly with regard to algorithms, is not as simple because, by their nature, algorithms may develop and change over time.
It’s clear a small company or a lone innovator simply could not afford to put their product to the test in the same way as medicines, or compete with large multinational companies. Indeed, valid questions are often raised. What should good technology look like? What minimum standards should such products have to achieve?
Striking a balance between good regulation and good technology
Fortunately, work is ongoing in this area and the government’s code of conduct for data-driven health and care technology and NICE go some way to addressing this potential minefield with guidance on levels of evidence for digital technology.
But cost shouldn’t be an excuse not to demonstrate evidence. Perhaps some of our more tolerant attitudes towards technology come from the perception that it doesn’t affect patients in the same way. Medicines are ingested and injected and therefore affect physiology. An app, for example, does not. Clinicians would not sanction the use of a compound that had not been trialled rigorously; why are we therefore seemingly more willing to accept untested technology?
Interestingly this discrepancy extends to how pharmaceutical or technology organisations can interact with the medical community. The pharmaceutical industry is heavily regulated by the Association of the British Pharmaceutical Industry Code of Practice as to how it can interact with medical professionals and patients in terms of direct advertising. To date, there are no barriers to technologists interacting with the medical community and selling products directly in the same way as pharmaceutical medicine.
Yet as clinicians we are increasingly colliding with a world where we are making decisions influenced by commercially available devices. How should a clinician, for example, react to a patient presenting with symptoms as evidenced by an unregulated medical device? Another important question is whether we are being unethical by withholding potentially beneficial digital treatments using algorithms simply because we don’t understand them well enough to regulate them.
How therefore do we strike the balance between good regulation and good technology? Perhaps the answer lies in “ethical product development”.
Often the priority for a health tech start-up is speed to market at the expense of a detailed evaluation of ethics or efficacy. Ethics is in danger of being seen as a luxury rather than part of the core of the process.
Yet to dismiss ethics is to miss the point. Lord Clement-Jones, chair of the House of Lords’ AI select committee, heralded the UK as having the potential to become the world leader in ethical AI. We have a wealth of world-leading organisations on our doorstep to make this a reality.
Bringing patients into the development process
Ethical product development with patients and citizens at the core of the design process is an obvious first step. Sceptics of early patient involvement may cite the quote often attributed to Henry Ford: “If I had asked people what they wanted, they would have said faster horses”.
But working alongside patients as part of the development process is key, and perhaps this is where a genuine and productive compromise can be found: by ensuring we ask the right questions of the potential users of the technology. The problem therefore perhaps lies in Henry Ford’s line of questioning. The question should have been open enough that the answer could have been “I want to get from A to B quicker”.
By ensuring ethics and patient involvement are embedded from the start, what impact might this have on the kind of regulation required? Could a comprehensive partnership approach allow for self-regulation in the truest sense of the term with patients and innovators and clinicians holding each other to account? Patients could ensure their needs are being met and that they feel comfortable with the product that is being developed. Clinicians could similarly ensure their needs are being met while applying clinical judgement to safety and guiding technologists in readiness for efficacy testing much as clinicians work as part of a pharmaceutical research team.
We have to be realistic about regulation, but we cannot simply shrug our shoulders as an industry and say it’s too hard or too expensive. We need to apply the same innovative thinking that we see from technology companies to the question of how we get this right. Crowdfunding for clinical trials? Collaborative approaches to trials for similar devices across different tech organisations? The possibilities are endless.
We now need to stand up as modern clinicians and embrace technology while not forgetting the values we hold most true and the utmost duty of care to our patients.
Regulation shouldn’t be the barrier to innovation. It should be the pride of innovation in the UK and the standards to which we aspire as a modern, ethical healthcare system. Here in the UK, we can truly lead in this field.
Dr James Hadlow is a visiting Darzi Fellow at the University of Kent. Dr Chris Farmer is a professor at the University of Kent.