Explainable AI should help us avoid a third 'AI winter'

AI researchers are worried that GDPR will limit availability of training data, but there's an upside too, says Gary Richardson

The General Data Protection Regulation (GDPR) that came into force last year across Europe has rightly made consumers and businesses more aware of personal data. However, there is a real risk that over-correction around data collection will hold back critical AI development. This is an issue not only for data scientists, but also for the companies that use AI-based solutions to stay competitive - and for consumers, who may miss out on the benefits AI could bring to the products and services they rely on.

This is not to say that GDPR is a bad thing. From the right to be forgotten to the right of access, transparency is the main goal of this legislation. The public view of how personal data is seen and valued has changed, and companies have had to put their data practices in order. There's no doubt this was necessary - over the last few years there has been a major backlash against companies abusing their data collection powers and weaponising large data sets.

Training AI with data

AI is unusual among technologies in that it improves over time through training on large quantities of relevant data. While AI technologies have existed for many decades, the exponential rise of big data, commoditised cloud computing and open source in the late 2000s truly fast-tracked development. Collecting and analysing this data is the basis of training AI, enabling the software to evolve and learn how we make decisions.

However, following the significant public backlash against the misuse of data, GDPR was needed to bring about a fundamental change to the quality and quantity of data that businesses can collect, process and hold. Naturally, this affects the data available for AI algorithms to train on, and if not properly managed it could trigger another 'AI winter'.

The right to explanation

Possibly the most important element of the GDPR with regards to AI development is the right to explanation (Article 15). The right to explanation requires businesses to be able to explain how their algorithms came to form decisions. This means that if an algorithm makes the final decision on whether an insurance claim should be paid out or not, the consumer has the right to know exactly what rationale the algorithm used and how it reached that decision. Historically, these 'explanations' have been opaque at best.

What companies need is the ability to provide a clear, robust explanation of how an AI-based approach arrived at a decision. While understandably difficult, it is possible to explain, in easy-to-understand language, exactly what decision-making process has taken place in a machine learning model. Once companies can give consumers detailed, plain-language explanations, we will see a number of positive consequences.
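To make this concrete, here is a minimal sketch of what a decision-plus-rationale response might look like in code. The features, weights and cut-off are entirely hypothetical - they stand in for whatever factors a real insurer's model would use - but the pattern is the point: every factor that influenced the decision is reported back in plain language.

```python
# Hypothetical, transparent scoring model for an insurance-claim decision.
# Each feature's weight is visible, so the rationale can be reported verbatim.
WEIGHTS = {
    "claim_amount_over_threshold": -2.0,  # large claims count against approval
    "policy_active": 3.0,                 # an active policy supports approval
    "prior_fraud_flag": -4.0,             # prior fraud strongly counts against
    "documents_complete": 1.5,            # complete paperwork supports approval
}
APPROVAL_CUTOFF = 0.0

def decide_and_explain(features):
    """Return an approve/deny decision plus a plain-language rationale."""
    score = 0.0
    rationale = []
    for name, weight in WEIGHTS.items():
        if features.get(name):
            score += weight
            direction = "supported" if weight > 0 else "counted against"
            rationale.append(f"'{name}' {direction} approval ({weight:+.1f})")
    decision = "approved" if score > APPROVAL_CUTOFF else "denied"
    return decision, rationale

decision, reasons = decide_and_explain({
    "policy_active": True,
    "documents_complete": True,
    "claim_amount_over_threshold": True,
})
print(decision)               # approved (score = 3.0 + 1.5 - 2.0 = 2.5)
for reason in reasons:
    print("-", reason)
```

A deep neural network cannot be read off this directly, which is why techniques for surfacing feature contributions from black-box models are an active area of work - but the consumer-facing output should look like the rationale list above, whatever sits behind it.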

Chiefly, companies themselves will want to be sure of their compliance with GDPR's requirement to provide a right to explanation. Fines for non-compliance can be significant: maximum fines can reach €20m or four per cent of annual global turnover, whichever is greater.

Further, ensuring the right to explanation will increase trust between the public and businesses. With consumers able to understand exactly how their data is used to train AI, more may be willing to share their personal data for this purpose. This will also help the data sets needed for training AI continue to grow - ultimately avoiding another AI winter and bringing the benefits of AI to businesses and consumers alike.

A third AI winter?

The AI winters of the 1970s and 1990s, which saw research funding slashed and interest in AI wane, were the result of unrealistic expectations and a failure to scale. A third AI winter could be caused by inadequacies and biases in AI algorithms leading to negative impacts across the whole of society. Bias simply does not build value in business, particularly around important decisions such as access to credit and healthcare, or recruitment aimed at increasing diversity.

With many companies now introducing AI to support recruitment screening or health assessments, inherent biases within AI software could have serious real-world impacts. These tools are a positive step forward, but the data sets the AI trains on may carry historic cultural and gender biases - which the AI learns and reproduces as it makes decisions. Without an overhaul of the culture around AI development, these biases will continue to surface across business and do harm to society.

While there is a risk that GDPR could reduce the amount of training data available, leading to another AI winter, in the long run this legislation will actually improve AI development for everyone. Businesses must take responsibility for ensuring that any AI technology integrated into their systems is compliant and as free of bias as possible. Ultimately, customers are the lifeline of any business, and to stay competitive companies need to keep their trust - and avoid another AI winter.

Gary Richardson is MD of emerging technology at 6point6
