
Dominic Holmes

Partner at Taylor Vinters

The case for regulating algorithms

For businesses and wider society, the potential benefits of technology are immense. Diseases can be diagnosed and treated with greater accuracy, fraud can be spotted more effectively, productivity can be boosted and human error removed from all sorts of decision-making processes.

The increased use of technology also brings some potential downsides. Some believe that, without any intervention, the benefits of technology will be concentrated in the hands of the privileged few. A recent report from MMC Ventures, The State of AI: Divergence, concluded that, unless we address bias in AI today, there is a risk it will compound inequality, as the most vulnerable in society will be the last to benefit. Biased AI systems could cause individuals to experience economic loss, loss of opportunity and social stigmatisation.

A recent inquiry, announced last week, will bring together the Centre for Data Ethics and Innovation (CDEI) and the Cabinet Office’s race disparity unit to assess algorithmic discrimination in the justice system. The inquiry was born out of concerns about the quality and accountability of decisions made by computer algorithms, and it puts a spotlight on the very real threat of entrenched bias in AI systems. It will also examine similar issues in the finance and recruitment sectors.

‘Explainable’, transparent AI is imperative

Although there have been some discussions at EU level about setting ethical standards or regulating who might be liable where evidence of bias is found, we have yet to see any detailed proposals. We also know, from consulting with industry leaders at The Zebra Project events, that AI will only work if there is public trust in what decisions are being made and how they are being made.

Ultimately, the success of widespread AI depends on creating an environment of trust. Developing more transparent or explainable AI will help people have a better understanding of how algorithms make automated decisions. A lack of transparency is not an inherent feature of machine learning tools – it is a design choice.
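
By way of a simplified illustration only (the features, weights and threshold below are hypothetical, not drawn from any real system), a scoring routine can be designed from the outset to return the reasons behind its output alongside the output itself:

```python
# Minimal sketch: a transparent decision routine that reports how each
# (hypothetical) feature contributed to the outcome.

from dataclasses import dataclass

# Hypothetical, hand-set weights for a simple linear credit-style score.
WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.6}
THRESHOLD = 0.5


@dataclass
class Decision:
    approved: bool
    score: float
    contributions: dict  # per-feature contribution to the score


def decide(applicant: dict) -> Decision:
    """Return the decision together with the reasons behind it."""
    contributions = {
        name: WEIGHTS[name] * applicant.get(name, 0.0) for name in WEIGHTS
    }
    score = sum(contributions.values())
    return Decision(approved=score >= THRESHOLD, score=score,
                    contributions=contributions)


if __name__ == "__main__":
    result = decide({"income": 1.2, "years_at_address": 0.5,
                     "missed_payments": 0.3})
    print("approved:", result.approved, "score:", round(result.score, 2))
    # The contributions are what make the decision explainable to the
    # person affected.
    for feature, value in sorted(result.contributions.items(),
                                 key=lambda kv: abs(kv[1]), reverse=True):
        print(f"{feature}: {value:+.2f}")
```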

There are potential commercial benefits to harnessing that trust. The Zebra Project’s interim report, Future Fundamentals, concluded that trust, transparency and ethics are becoming priority leadership issues for organisations embracing new technologies. As such, demystifying AI in a transparent way will be an enabler for growth.

There is, admittedly, a tricky balance to strike between creating algorithms that make explainable decisions and ensuring that AI systems are robust against manipulation. For example, if AI triggered a fraud alert following a card payment, it would be detrimental to the system if the reasons for that decision could be traced by the fraudster. On the other hand, if an AI system decided that an individual was not insurable, then the insurance company should be able to explain why to the person affected.

Traceability would therefore have to be carefully considered, depending on the type of business and how AI is being used.

Who is ultimately accountable?

Just as a service provider may be liable to its customer for loss caused by an automated decision made using AI, the developer of that AI tool could be liable to the service provider if the loss was caused by a fault in the technology. In other words, there may be a chain of causation.

For example, a haulage company might be liable to its customer if it fails to make a delivery due to a lorry breaking down. If the lorry suffered a mechanical failure caused by faulty manufacturing, the haulage company may be entitled to compensation for all or part of that liability from the manufacturer. But it might not be able to do so if the mechanical failure was caused by the haulage company failing to look after the lorry properly.

In the same way, if an AI solution has taught itself to reach unethical decisions based on the way it is configured by the developer, then the developer might be held accountable. In the next few years, we are likely to see more large companies using AI to sift through high volumes of graduate job applications and decide who should be invited in for an interview.

If the employer applies a blanket rule that requires the technology to reject all applications sent in by female candidates, that is unlawful discrimination instigated by the employer. If, on the other hand, the employer asks the technology to select candidates for interview based solely on non-discriminatory criteria but, over time, the AI teaches itself in such a way that most female candidates are rejected, the fault might rest instead with the developer. But it depends on the underlying cause of the biased decision. Is it an inherent fault in the design of the tool by the developer, or is it a consequence of the data the employer feeds into it? Machines need diverse data to learn from, otherwise they will simply repeat the human biases they may be intended to avoid.
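
Monitoring for that kind of drift can, in principle, be automated. The sketch below is illustrative only – the sample data and group labels are assumptions, and the four-fifths figure is simply a common rule of thumb in adverse-impact analysis – but it shows how selection rates across groups could be compared and flagged on a routine basis:

```python
# Hypothetical audit of an automated screening tool: compare selection
# rates across groups and flag a potential adverse-impact problem.

from collections import defaultdict

FOUR_FIFTHS = 0.8  # common rule-of-thumb threshold in adverse-impact analysis


def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}


def adverse_impact_flags(decisions):
    """Flag any group whose selection rate falls below 80% of the
    best-treated group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    flags = {g: rate / best < FOUR_FIFTHS for g, rate in rates.items()}
    return flags, rates


if __name__ == "__main__":
    # Hypothetical outcomes from an AI CV-screening tool.
    sample = (
        [("female", True)] * 12 + [("female", False)] * 88
        + [("male", True)] * 25 + [("male", False)] * 75
    )
    flags, rates = adverse_impact_flags(sample)
    for group in rates:
        print(f"{group}: rate={rates[group]:.2f}, flagged={flags[group]}")
```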

What might a regulatory framework look like?

As a starting point, any regulation will need to focus on ensuring that companies are completely transparent about what they are using automated decision-making for, why, and how it works. There may be exceptions to an overarching presumption in favour of transparency, but most businesses should be willing to explain the steps they are taking to ensure that automated decisions do not produce unfair or discriminatory outcomes.

As a general rule, the law requires businesses to be responsible for any loss they cause in providing a service or making a decision. That principle applies equally where that service or decision is facilitated by AI, but the success of regulation often relies on ease of enforcement. Therefore, any specific framework would need to have teeth – whether that’s a financial penalty or an effective, well-funded regulator to protect individuals’ rights and conduct audits. Without real sanctions, it is difficult for legislation to drive the right behaviours amongst developers and users of automated decision-making. Without visibility of how companies are using AI technology, achieving an environment of trust will be challenging.

With more effective regulatory oversight – which might, for example, include a voluntary code of practice – we can be more confident that the AI systems being built today will benefit us all in the future. Striking the right balance between effective regulation and continued technological innovation will always be a challenge. We must be careful not to stifle the type of innovation and entrepreneurial spirit that has enabled such rapid advances in recent years. But ultimately, we must focus on building transparency and trust into both the design of these systems and the narratives that surround them.

Dominic Holmes is a partner at law firm Taylor Vinters