Why we need certification for responsible AI

It's time to turn the theoretical conversations around what AI should look like into real action

In April, the European Commission (EC) proposed the world's first legal framework for artificial intelligence (AI). Such a framework has been sorely needed: longstanding ethical and regulatory concerns surrounding AI have hung over users, businesses, and the public for years. If improperly handled, AI can develop unethical biases, undermine legal and regulatory norms, and blur the lines of organisational accountability.

With AI increasingly used in consumer-facing contexts such as lending, fraud detection, hiring, and healthcare, it's vital to address the technology's risks head-on, both to protect the public and to give businesses and investors confidence in the future of AI.

That's why many organisations, alongside the EC, are already converging on ethical and legal frameworks to guide AI, among them the Linux Foundation's AI & Data group and the World Economic Forum's Global AI Action Alliance. Common principles shared by many of these frameworks include commitments to AI decisions being explainable, transparent, reproducible, and private. The major question that remains is how these frameworks will be implemented and enforced in practice.

While it can be tempting to assume the law will pick up the regulatory and enforcement slack, there are a few reasons why designers and technologists should be wary of relying solely on legislation to enforce these ethical frameworks.

First and foremost, even the most developed legal frameworks are years away from being ratified, meaning the regulatory environment for AI will remain a Wild West until then - with all the risks for consumers and businesses that brings. Second, it's generally preferable for society as a whole if industry takes on proactive self-regulation to complement the law. By ensuring that companies working on and with AI have a clear set of standards to follow, industry can build confidence in AI among its own members, the public, and investors.

As Google and Facebook have shown repeatedly, expecting companies to police themselves in-house isn't viable

A set of industry standards implies a means of testing whether those standards are being applied - that is, a means of certifying companies. However, this risks calling to mind the often dismal attempts at 'self-regulation' we have seen so far: as tech giants like Google and Facebook have shown repeatedly, expecting companies to police themselves in-house isn't viable. That's why any system of certification needs to be run by a transparent and unbiased third party.

Here, AI can take a cue from other industries, such as construction: the LEED green building certification system, overseen by the non-profit US Green Building Council, is used globally to ensure buildings are environmentally responsible and resource-efficient.

Such an independent certifying body can provide a level playing field for AI certification. It can also help encourage better practices among teams, by providing tiers of certification to reflect how thoroughly an AI system meets ethical and regulatory norms. A baseline certification may be handed out to those who meet the minimum threshold for compliance, while more prestigious certifications would be awarded to more thorough and proactive organisations.

The knock-on effects of such certification go far beyond protecting users and the public against substandard AI systems. By reducing developers' and compliance officers' exposure to liability, and removing much of the legal and ethical uncertainty around developing and deploying AI systems, an independent certification system can encourage growth in AI by giving industry and consumers much-needed trust in the technology.

In the US, the team at the Responsible AI Institute - which I advise - is looking to pioneer the first such industry certification. By drawing on the OECD's five principles for responsible AI, we're looking to help turn the theoretical conversations around what AI should look like into real action. If AI is to flourish as a sector and deliver real societal value, it's essential that such certification becomes the norm around the world.

Mark Rolston is founder & chief creative officer at argodesign