The European Commission has unveiled a series of guidelines aimed at ensuring companies deploy artificial intelligence in a way that is fair, safe and accountable.
The rules, developed by a committee of academics and industry representatives, form part of the EU’s plan to increase public and private investment in AI to €20bn a year.
The commission claims the initiative will give European developers of AI a competitive edge when it comes to selling their technology around the world.
“It is only with trust that our society can fully benefit from technologies,” said Andrus Ansip, the vice-president for the digital single market. “Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
This approach mirrors that of the UK government, which last year set out plans to make Britain a leader in the development of ethical AI software and services.
However, questions have been raised about how much impact European governments can have on the development of AI when the majority of research and development takes place in the US and China.
The principles (outlined below) are not mandatory, but provide companies with a framework to deploy AI products in a way that limits the potential risks. A specific assessment list has also been designed to verify the application of each of the key requirements.
In recent years the EU has spearheaded efforts to reform the way technology is developed and applied, with tough new regulations such as GDPR, the EU Cyber Security Act and the E-Privacy directive.
But a European Commission official suggested the guidelines would likely lead to a code of conduct for the industry, rather than enforced regulation.
“[The pilot phase] may lead to a self-regulatory approach such as a code of conduct in the next phase but that is not decided yet,” the official told NS Tech. “The goal certainly is to ensure not only compliance with all laws but an ethical and human-centric approach to all AI made and used in Europe.”
IBM is one of the tech companies represented on the committee and plans to sign up to the guidelines, James Luke – chief technology officer for the company’s UK public sector business – told NS Tech.
“AI is essential to business going forward,” he said. “Adopting these guidelines is not just about complying with privacy laws. It’s the right thing to do and good for business too.”
Luke added that it was essential for algorithms to be developed transparently, and said IBM was building a number of tools to ensure decisions could be traced and explained.
But he rejected calls for regulators to be given the power to inspect algorithms, saying it would be difficult to take a general legal position on the matter “because companies want to protect their intellectual property”.
The full list of principles is:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.