AI technology in self-driving cars could obscure cause of accidents, warn lawyers
Decisions based on fleeting judgements made by neural networks may make it "impossible" to determine the cause of accidents
Self-driving technology in which decisions are made by neural network-based artificial intelligence could obscure the cause of accidents, an independent group of lawyers has argued.
In a paper produced in response to a joint consultation by the Law Commission of England and Wales and the Scottish Law Commission [PDF], the Scottish Faculty of Advocates warned that it might not even be possible to work out the cause of accidents involving AI-driven autonomous vehicle technology.
Currently, autonomous vehicle decision-making is based on rules and algorithms, but the development of artificial intelligence systems based on neural networks will make it harder to pinpoint the 'faulty reasoning' or potential flaws in such systems when accidents happen.
"It is a feature of such systems that their internal ‘reasoning' processes tend to be opaque and impenetrable (what is known as the "black box" phenomenon) - the programmers may be unable to explain how they achieve their outcomes," states the paper [PDF].
It continues: "With conventional software, on the other hand, it is always possible to explain the algorithms and examine the source code: errors ought to be capable of detection. Classical AI follows a precise step of logical rules (algorithms) whereas the behaviour of neural networks may only be described statistically (stochastical behaviour)."
The submission adds: "If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system (there might be a clearly identifiable bug in the system, or one of the algorithms might be obviously flawed) but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident."
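The contrast the Faculty draws can be sketched in a few lines of code. The example below is purely illustrative and not taken from the submission: the function names, thresholds and weights are all invented. A conventional, rule-based braking decision can be traced to a specific line of source code, while a toy neural network produces the same kind of output from learned weights that offer no line-by-line explanation.

    # Toy contrast between conventional, rule-based software and a
    # neural-network-style decision. All names and numbers are invented.
    import math

    def rule_based_brake(distance_m: float, speed_ms: float) -> bool:
        # Conventional software: if this brakes too late, the flaw is
        # findable on a specific, inspectable line.
        stopping_distance = speed_ms ** 2 / (2 * 7.0)  # assumes 7 m/s^2 deceleration
        return distance_m < stopping_distance + 5.0    # 5 m safety margin

    def neural_net_brake(distance_m: float, speed_ms: float) -> bool:
        # Neural-network style: the decision emerges from learned weights;
        # no individual number "explains" a late braking decision.
        w = [[0.8, -1.2], [-0.5, 0.9]]  # hidden-layer weights (stand-ins for learned values)
        b = [0.1, -0.3]
        hidden = [math.tanh(w[i][0] * distance_m + w[i][1] * speed_ms + b[i])
                  for i in range(2)]
        out = 1.3 * hidden[0] - 0.7 * hidden[1] + 0.2  # output-layer weights
        return out > 0.0  # why this crosses zero is only describable statistically

    print(rule_based_brake(30.0, 20.0))  # True: 20**2/14 + 5 = 33.6 m > 30 m, fully traceable
    print(neural_net_brake(30.0, 20.0))  # no comparable line-by-line account exists

In the first function, a bug would be an identifiable error in an arithmetic rule; in the second, the same wrong output points only to a cloud of weights shaped by training data.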
On top of that, the submission suggests, "the system driving an automated vehicle may not be entirely self-contained", making it even more difficult to ascertain either culpability or the faults that might have been behind an accident.
Furthermore, some models have been mooted in which processing occurs in the cloud, rather than on systems on-board the vehicle, complicating matters still further.
As such, the organisation suggested, self-driving vehicles ought to be coded to absolutely forbid certain actions, such as edging through crowds of pedestrians or mounting the pavement, even if only to enable an emergency services vehicle to pass.
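In software terms, that safeguard amounts to a deterministic rule layer wrapped around the learned system, vetoing forbidden manoeuvres whatever the AI proposes. A minimal sketch of the idea follows; the action names and fallback are hypothetical and do not come from the submission.

    # Minimal sketch of a hard-coded prohibition layer. The action names
    # and fallback behaviour are invented for illustration.
    FORBIDDEN_ACTIONS = {"edge_through_pedestrians", "mount_pavement"}

    def safe_action(proposed: str) -> str:
        # Veto absolutely forbidden manoeuvres regardless of context,
        # even making way for an emergency services vehicle.
        if proposed in FORBIDDEN_ACTIONS:
            return "hold_position"  # deterministic, auditable fallback
        return proposed

    print(safe_action("mount_pavement"))   # -> hold_position
    print(safe_action("proceed_in_lane"))  # -> proceed_in_lane

Because such a veto is conventional code, any accident traced to it would be examinable in exactly the way the Faculty describes for rule-based systems.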