The five principles key to any ethical framework for AI

Artificial Intelligence (AI) is a powerful and growing reservoir of capabilities that can be applied to a boundless range of problems, from biotechnology and cybersecurity to smart cities and health care. As such, AI is having a major impact on our lives. How can we ensure that its effects will be beneficial and felt by the largest number of people?

To address this question, a wide range of initiatives have sought to establish ethical principles for the adoption of socially beneficial AI. What societies all over the world need is a shared and applicable ethical framework within which to develop AI policies, regulations, technical standards, and business best practices that are environmentally sustainable and socially preferable.

Of course, such shared frameworks don’t guarantee success. Mistakes and illegal behaviour continue to happen. But their availability means having a clear idea of what ought to be done, and how to evaluate competing solutions. Without an ethical framework, “better safe than sorry” becomes the only guiding rule, excessive caution overrides innovation, and we all lose out.

The good news is that we have the basis for agreement. Last year, the House of Lords Select Committee on Artificial Intelligence, in its report "AI in the UK: Ready, Willing and Able?", suggested the adoption of a cross-sector code of five principles.

More recently, work at the European level, completed by the AI4People project and currently carried on by the High-Level Expert Group (HLEG) on Artificial Intelligence of the European Commission, has led to the adoption of five similar fundamental principles that can provide the ethical framework needed to support future efforts to create socially good AI across the European Union.

First, AI must be beneficial to humanity: it must promote well-being, preserve dignity, and sustain our planet. The second principle is that AI must not infringe on privacy or undermine security. Third, AI must protect and enhance our autonomy and our ability to take decisions and choose between alternatives: AI must be our servant, not our master. The fourth principle concerns justice, or fairness: AI must promote prosperity and solidarity in the fight against inequality, discrimination, and unfairness, and innovation should be inclusive and promote diversity as well as tolerance. Finally, we cannot achieve any of this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).

There are other sets of principles, formulated by the OECD, the IEEE and many others, but the HLEG principles represent a convergence of ethical thinking and, we believe, are currently the best game in town. The G20 meeting in Osaka this year will be a good place to start promoting them internationally, followed perhaps by their incorporation in, or annexure to, the Universal Declaration of Human Rights.

Consensus on an ethical framework for AI is vital, but it is nothing without real-world application and adoption.

As optimists, we hope that the technology firms driving AI will embrace existing law and regulation, and good ethical practice, in everything they do. They should take existing regulatory concepts, such as the "privacy by design" and "data protection impact assessment" requirements of the European General Data Protection Regulation (GDPR), and expand their scope to cover at least the five HLEG ethical principles, ensuring that new uses of AI consider ethics from the start and that any risks arising from the use of AI are addressed in a structured, accountable and redressable fashion. In particular, firms must not shelter behind "black box" excuses. They need to be transparent about the impact of AI solutions on their workforces and on decision making, and they should consider whether to introduce ethics advisory boards.

At the same time, society and government need to be prepared to move quickly if technology firms fail to rise to the challenge. Implementing regulation takes time, and technology moves fast. Governments themselves need to be aware that they are not immune from such regulation. Algorithmic decision making is increasingly prevalent in policing, justice, immigration, and social security systems. We need clear rules about this as well.

We must ensure that ethics is an enabling force, not something used to bypass the law or dilute the positive impact of nascent applications. We need all the capabilities we can muster to tackle global challenges, including inequality, fundamentalism, populism, and global warming. AI can be a powerful ally in the fight for a better world. But to retain public trust in deploying it we need to get its ethics right. We cannot miss this opportunity. It is in all our interests to get this right first time.

Professor Luciano Floridi is OII Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, Chair of the Data Ethics Group of The Alan Turing Institute, member of the High-Level Expert Group (HLEG) on Artificial Intelligence of the European Commission and member of the UK’s Centre for Data Ethics and Innovation.

Tim, Lord Clement-Jones is the former Chair of the House of Lords Select Committee on AI and Co-Chair of the All-Party Parliamentary Group on AI.