ICO: Organisations using AI systems must provide clear explanations of decisions made

Organisations must also ensure that their use of AI is transparent and accountable

The Information Commissioner's Office (ICO) has released its first draft of regulatory guidance on the use of artificial intelligence (AI) systems in organisations.

The guidance, which has been prepared in collaboration with the Alan Turing Institute, warns that organisations planning to use AI systems in their work must be able to clearly explain the decisions those systems make to the individuals affected by them. Moreover, organisations must ensure that their use of AI is 'transparent and accountable'.

Many firms in the UK have started using AI systems to aid decision-making. For example, HR departments in many companies use such systems to shortlist job applicants based on analysis of their CVs. Similarly, insurance firms now use algorithms to handle claims.

According to New Scientist, a recent UK-wide survey found that nearly half of people in the country are worried about AI systems making decisions that no human would be able to explain.

"This is purely about explainability," says Simon McDougall, executive director for technology policy and innovation at the ICO.

"It does touch on the whole issue of black box explainability, but it's really driving at what rights do people have to an explanation. How do you make an explanation about an AI decision transparent, fair, understandable and accountable to the individual?"

The guidance discusses how organisations can explain, in an easy-to-understand form, AI-assisted or AI-delivered services, processes and decisions to the people affected by them.
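
To make this concrete, here is a minimal sketch, not taken from the ICO guidance, of how an organisation using a simple linear scoring model might generate the kind of per-decision explanation the guidance calls for. The loan-application scenario, feature names and data below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-application features: [income_k, years_employed, prior_defaults]
X = np.array([
    [55, 4, 0],
    [23, 1, 2],
    [80, 10, 0],
    [30, 2, 1],
    [60, 7, 0],
    [18, 0, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

feature_names = ["income (k)", "years employed", "prior defaults"]
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_decision(x):
    """Plain-language breakdown of a single automated decision.

    For a linear model, each feature's contribution to the decision
    score is exactly coefficient * value, so this explanation is
    faithful by construction. A black-box model would need a separate
    attribution technique (e.g. permutation importance) instead.
    """
    decision = "approved" if model.predict([x])[0] == 1 else "declined"
    contributions = model.coef_[0] * np.asarray(x, dtype=float)
    lines = [f"Application {decision}. Factors, strongest first:"]
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
        direction = "in favour of approval" if c > 0 else "against approval"
        lines.append(f"  - {name}: weighed {direction} ({c:+.2f})")
    return "\n".join(lines)

print(explain_decision([25, 1, 2]))
```

For a simple model like this, the explanation is exact; the harder problem the guidance addresses is producing comparably faithful and understandable explanations for more opaque systems.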

The guidance consists of three sections:

  1. The basics of explaining AI
  2. Explaining AI in practice
  3. What explaining AI means for your organisation

According to the ICO, organisations may find some parts more relevant than others, depending on their make-up and level of expertise.

"We want to ensure this guidance is practically applicable in the real world, so organisations can easily utilise it when developing AI systems. This is why we are requesting feedback," the ICO said.

The ICO will accept comments on the draft guidance until 24 January 2020, although McDougall has encouraged industry experts to respond to the draft before then.

Earlier this year, Rice University statistician Dr Genevera Allen claimed that the results produced by machine learning algorithms are often misleading or wrong, and are causing a 'crisis' in scientific research.

Allen suggested that researchers must keep questioning the reproducibility of predictions and findings made by machine learning techniques until new computational systems are developed that can critique their own results.