AI technology in self-driving cars could obscure cause of accidents, warn lawyers


Decisions based on fleeting judgements made by neural networks may make it "impossible" to determine the cause of accidents

Self-driving technology in which decisions are made by neural network-based artificial intelligence could obscure the cause of accidents, an independent group of lawyers has argued.

In a paper produced in response to a joint Law Commission of England and Wales and Scottish Law Commission consultation [PDF], the Scottish Faculty of Advocates warned that it might not be possible to even work out the cause of accidents caused by AI-driven autonomous vehicle technology.

Currently, autonomous vehicle decision making is based on rules and algorithms, but the development of systems based on neural network artificial intelligence will make it harder to pinpoint the 'faulty reasoning' or potential flaws in such systems when accidents happen.

"It is a feature of such systems that their internal ‘reasoning' processes tend to be opaque and impenetrable (what is known as the "black box" phenomenon) - the programmers may be unable to explain how they achieve their outcomes," states the paper [PDF].

It continues: "With conventional software, on the other hand, it is always possible to explain the algorithms and examine the source code: errors ought to be capable of detection. Classical AI follows a precise set of logical rules (algorithms) whereas the behaviour of neural networks may only be described statistically (stochastical behaviour)."

The submission adds: "If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system (there might be a clearly identifiable bug in the system, or one of the algorithms might be obviously flawed) but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident."
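The distinction the paper draws can be illustrated with a minimal sketch. All names, thresholds and weights below are invented for illustration; the point is that the first function's logic can be read and audited line by line, while the second's behaviour lives in numerical parameters that explain nothing on their own.

```python
import math

# Rule-based logic: every decision maps to an inspectable line of code,
# so a flawed threshold or missing case could, in principle, be found
# by examining the source after an accident.
def rule_based_brake(distance_m: float, speed_ms: float) -> bool:
    stopping_distance = speed_ms ** 2 / (2 * 7.0)  # assumes ~7 m/s^2 braking
    return distance_m < stopping_distance + 5.0    # 5 m safety margin

# Neural-network-style logic: the decision emerges from learned weights.
# Nothing in these numbers "explains" the behaviour; investigators could
# only characterise it statistically by probing it with many inputs.
WEIGHTS = [0.83, -1.27, 0.41]  # hypothetical trained parameters

def learned_brake(distance_m: float, speed_ms: float) -> bool:
    z = WEIGHTS[0] * speed_ms + WEIGHTS[1] * distance_m + WEIGHTS[2]
    return 1 / (1 + math.exp(-z)) > 0.5  # opaque learned threshold
```

Both functions may brake in the same situations, but only the first offers the kind of post-hoc examination of source code the submission describes.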

On top of that, the submission suggests, "the system driving an automated vehicle may not be entirely self-contained", making it even more difficult to ascertain either culpability or the faults that might have been behind the accident.

Furthermore, some models have been mooted in which processing occurs in the cloud, rather than on systems on-board the vehicle, complicating matters still further.

As such, the organisation suggested, self-driving vehicles ought to be coded to absolutely forbid certain actions, such as edging through crowds of pedestrians or mounting the pavement, even if only to enable an emergency services vehicle to pass.


