Explainable AI: Dissecting the development of auditable artificial intelligence

We're in the middle of a very public debate about the merits of artificial intelligence. Few topics have divided opinion as much as the current use, and future potential, of AI systems.

Their impact on everything from the economy to society, to privacy and security has drawn comment from politicians, academics and technologists alike.

As more and more of our economic, social and civic interactions — from mortgage applications and insurance policies to recruitment and legal processes — are carried out by algorithms, there are those who have reservations about how these decisions are being made.

The collective concern among governments and the general public has often centred on explainability. And it's not unfounded. The techniques used by AI systems to reach decisions are difficult for a layperson to fully understand. They are vast, intricate and complex, operating on the basis of probability and correlation, and unless you possess specialist knowledge of how they work at an algorithmic level, they can appear alien.

If we consider something like deep learning — a form of AI that learns data representations with minimal supervision — then we are discussing a system of neural networks and algorithms that can evolve independently of human intervention. In effect, AI that builds and refines AI.

In some ways this type of learning is easy to comprehend, as its foundation is a trial-and-error process that mirrors human thinking. But when deployed at speed and scale, it becomes impossible to identify and assess the millions of micro-decisions and complex connections that inform its output.

While the majority of the public take no issue with the commercial application of AI (such as Netflix recommendations, auto-tagging on Facebook or suggestions from Alexa), it's a different matter when such systems are used to make financial, legal or healthcare decisions. When it comes to life-changing determinations, the need for clarity is acute.

So, what the technology community refers to as the "black box" problem continues to come under public scrutiny. And while AI technologies remain opaque, operating outside of view, confusion and distrust will continue to frame the debate. You can neither build trust nor inspire confidence in technologies that people don't understand. That's the problem facing AI at present.

As a result, transparency has become a technical issue that companies are grappling with during development. But this is not without its challenges, as many of the most powerful machine learning methods are "black box" by nature. In the case of deep learning, taking a popular technique like convolutional neural networks (CNNs) as an example makes the difficulties facing developers easier to explain.

CNNs are most commonly used for image classification tasks and rely on a large network of weighted nodes. But when a CNN categorises an image, it's very hard to know which features were extracted in making that classification. This means we have very little idea of what the nodes of the CNN actually represent.
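To make that concrete, here is a minimal sketch of a toy CNN whose intermediate activations we can print but not readily read. It uses Python and PyTorch, neither of which the article specifies, and the architecture and random input image are purely illustrative.

```python
# Minimal sketch: a tiny convolutional network whose intermediate
# activations can be inspected, but not readily interpreted.
# Assumes PyTorch is installed; the architecture is illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolutional blocks followed by a linear classifier.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.classifier(feats.flatten(1)), feats

model = TinyCNN()
image = torch.randn(1, 3, 32, 32)            # stand-in for a real photo
logits, feature_maps = model(image)

# The classification is just an argmax over the output scores...
print("predicted class:", logits.argmax(dim=1).item())
# ...but the evidence behind it is a block of 32 8x8 feature maps:
# thousands of weighted activations with no human-readable labels.
print("feature map shape:", feature_maps.shape)  # torch.Size([1, 32, 8, 8])
```

Even in a network this small, the numbers that drive the prediction carry no labels a person could audit; real networks have millions of them.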

AI companies are, however, trying to overcome the explainability problem. There is growing research into sets of "explainer algorithms" that, when applied alongside other statistical techniques, can articulate how and why a decision has been reached.
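The article does not name specific explainer algorithms, but one common pattern in this line of research is the local surrogate: perturb the inputs around a single decision and fit a simple, readable model to the black box's responses nearby, in the spirit of techniques such as LIME. The sketch below illustrates that idea with scikit-learn on synthetic data; the models, weighting scheme and feature names are assumptions for illustration only.

```python
# Minimal sketch of a local-surrogate "explainer": explain one decision
# of an opaque model by fitting a simple linear model around it.
# Assumes scikit-learn and NumPy; the data and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# 1. Train an opaque "black box" model on synthetic data.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Pick one decision we want explained.
instance = X[0]

# 3. Perturb the instance and observe how the black box responds nearby.
rng = np.random.default_rng(0)
neighbours = instance + rng.normal(scale=0.5, size=(500, X.shape[1]))
predictions = black_box.predict_proba(neighbours)[:, 1]

# 4. Weight neighbours by proximity and fit a simple linear surrogate:
#    its coefficients approximate the black box around this one case.
distances = np.linalg.norm(neighbours - instance, axis=1)
weights = np.exp(-(distances ** 2) / 2.0)
surrogate = Ridge(alpha=1.0).fit(neighbours, predictions, sample_weight=weights)

for name, coef in zip([f"feature_{i}" for i in range(X.shape[1])],
                      surrogate.coef_):
    print(f"{name}: {coef:+.3f}")  # sign/size = local influence on the decision
```

The surrogate does not replace the black box; it offers a human-readable account of how that one decision was reached, which is the kind of output regulators and customers are asking for.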

Alternatives to neural network-based machine learning also exist, with techniques such as Random Decision Forests (RDF) opening the door to a better understanding of feature extraction.
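As an illustration of that point, tree ensembles expose a built-in measure of which features drove their decisions. The following sketch uses scikit-learn's random forest on a stock dataset; it stands in for, rather than reproduces, the kinds of financial or legal models the article has in mind.

```python
# Minimal sketch of why tree ensembles are easier to interrogate:
# a random forest reports per-feature importances out of the box.
# Assumes scikit-learn; the dataset is a stock example for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(data.data, data.target)

# Rank features by how much they contribute to the forest's splits.
ranked = sorted(zip(data.feature_names, forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```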

Over the last few years, where artificial intelligence has been concerned, the default position has been to prioritise performance over explainability.

This "creative freedom" has contributed to a number of important innovations and developments, such as Natural Language Processing (NLP) and sequence modelling. But today, explainability is an issue that the industry is being forced to reckon with.

In Europe, the General Data Protection Regulation (GDPR) and the Markets in Financial Instruments Directive (MiFID II) have already started to curtail the unchecked autonomy of AI systems. And as a new regulatory landscape begins to take shape, companies will need to be transparent about the data they're collecting and be prepared to explain how their AI systems work.

In commercial terms, explainable AI is an opportunity to inspire public trust and confidence in these technologies. If we don't take it, the general public will find it difficult to ever fully place their faith in the decision-making of automated systems.

The likely result would be stalled adoption and the loss of the vast, positive benefits that AI-powered technologies bring.

Ben Taylor is CEO of Rainbird