Algorithms in the justice system: Should computers decide our fate?

James Kitching, Solicitor - Corporate, Coffin Mew, examines the phenomenon of decisions being made by machines in the justice system

In July this year, the Law Society of England and Wales launched an investigation, under its Public Policy Technology and Law Commission, into the use of algorithms in the justice system.

The Law Society's interest in the use of algorithms follows growing public concern about the extent to which decisions are being made by machines rather than humans.

Algorithms are all around us and have been for a long time. They are used by insurance companies to make decisions on quotes, they help Google to find the best match for us, and they allow Facebook to target advertising. What has changed, bringing them to the forefront of public attention, is the greater impact they are having on the way key decisions are made about our lives.

In Durham, the constabulary is using algorithms to inform bail decisions, and in the US they are influencing judges' decisions on the length of prison sentences. Google searches and internet shopping are trivial matters compared with deciding whether someone gets four years or life.

Accountability
Who is to blame when an algorithm gets it wrong? The question is asked constantly in discussions of driverless cars, and it has become a major obstacle for developers trying to get them on the road. Police forces in the UK trialling facial recognition have faced the same problem.

A key part of our justice system is the ability to call to account those who make decisions and to challenge rulings when we think an outcome is wrong. In the tech world, it is the developers who come under scrutiny for the outcomes of algorithms. While it may seem that machines have no-one to answer to, we naturally turn to their creators when we need someone to blame.

Ethics training for computing graduates is now talked about in the same way as it is for lawyers, and we will, and should, continue to expect higher standards of those who write code that affects our lives.

How much better is good enough?
We should aim for 100 per cent accuracy, but just as humans make mistakes in their decision-making, so do machines. The development of algorithms is a process of trial and error, with problems occurring along the way.

Accidents involving driverless cars make the news far more often than those involving human drivers, which shows how high our expectations are, and how low our tolerance, when it comes to algorithms making mistakes. The question is: should algorithms be allowed to operate despite making mistakes, if they make them less often than their human counterparts?

Garbage in, garbage out
One of the most common criticisms levelled at the use of algorithms is that they replicate human bias. The hope is that machine decision-making will be free of the biases found in human society, and yet this has proven not to be the case. Earlier this year it was reported that Admiral Insurance was producing higher quotes for Hotmail users, and for people called Mohammed, than for others.

The problem, however, is not necessarily with the algorithms themselves but with the data being fed into them. Where an algorithm learns from data, any bias already present in that data will be replicated. This means that it is not only the algorithms that must be examined carefully, but also the data they are fed. Only then can lessons be learnt about how to improve the system and remove the bias.
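As a purely illustrative sketch of that point, the toy Python below "trains" a classifier on invented historical bail records in which one group was granted bail less often. The data, the group labels and the 50 per cent threshold are all hypothetical, but they show how a simple frequency-based learner faithfully reproduces whatever bias sits in its training data.

```python
# A minimal, hypothetical sketch of "garbage in, garbage out": a toy
# classifier trained on historically biased decisions reproduces that bias.
# All records, groups and thresholds here are invented for illustration.

from collections import defaultdict

# Invented historical records of (group, bail_granted). Group B was granted
# bail less often in the past -- the bias the model will silently learn.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 40 + [("B", False)] * 60)

def train(records):
    """Learn P(bail granted | group) by simple frequency counting."""
    totals, granted = defaultdict(int), defaultdict(int)
    for group, bail in records:
        totals[group] += 1
        granted[group] += bail  # True counts as 1, False as 0
    return {g: granted[g] / totals[g] for g in totals}

model = train(history)

# Recommend bail only where the historical grant rate was at least 50%,
# so every member of group B is denied -- the past bias, replicated.
for group, rate in sorted(model.items()):
    verdict = "recommend" if rate >= 0.5 else "deny"
    print(f"group {group}: learned bail rate {rate:.0%} -> {verdict} bail")
```

Run it and group B is denied bail every time, not because the code is malicious but because the history it learnt from was skewed.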

Open or secret
As mentioned above, it is important that we can analyse algorithms to understand how and why decisions are being made. "Garbage in, garbage out" still applies, but transparency in the code allows us to follow the data through the system and spot the reasons for bias in the results.

For example, if an algorithm is producing biased results like those at Admiral, we can look at the questions being asked and find the root of the issue. However, there is little incentive for companies to research and create complicated, useful algorithms if they cannot keep their work from others who may copy it.

Making the code available to all reduces the value of an algorithm and makes it difficult to profit from its creation. Companies must therefore find other ways to profit from open source work, or the justice system must hire its own coders.

But the issue of open algorithms isn't just one for those trying to profit. There is also the concern that if users know how an algorithm produces its results, they can manipulate it to produce a desired outcome. But is this such a bad thing?

Credit scores suggest not. The public is encouraged to understand and explore what makes a credit score higher or lower precisely in order to drive behaviours that improve it. Applied to the justice system, it may be better that individuals know which behaviours make them more likely to be released on bail, if that knowledge makes them more likely to act positively.

Humans get things wrong too
The connecting thread through so many of the discussion points around algorithms is a failure to recognise that the criticisms we level at their use apply just as much to human decision-makers. Programmers and tech companies can be held just as accountable for the outcomes of their algorithms as police officers and judges are for the decisions they make. Algorithms make mistakes, but so do we.

We are all continually affected by events around us, and while we may think we are acting fairly, there is always something pressing upon our subconscious. It has been shown that a judge's decision-making can be affected by something as trivial as whether or not they have just had lunch.

With an algorithm, as long as the code isn't hidden, we are able to see how the decision tree is mapped and how an outcome is reached. And we can be sure that, even if an algorithm makes mistakes, it will be consistent in those mistakes, which means we can adapt it and make it better.
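To illustrate, here is a hypothetical, hand-written decision tree of the kind a transparent system might expose. The factors and thresholds are invented for the example; the point is that every branch can be read and audited, and identical inputs always produce identical, and therefore correctable, outputs.

```python
# A hypothetical sketch of the transparency argument: a readable decision
# tree whose every branch can be inspected and, because it is deterministic,
# will make exactly the same mistake on the same input every time.
# The factors and thresholds are invented purely for illustration.

def bail_recommendation(prior_convictions: int, failed_to_appear: bool,
                        has_fixed_address: bool) -> str:
    """Return a recommendation together with the branch taken, so the
    decision path is visible to anyone reviewing the outcome."""
    if failed_to_appear:
        return "deny (branch: previously failed to appear)"
    if prior_convictions > 3:
        return "deny (branch: more than 3 prior convictions)"
    if not has_fixed_address:
        return "refer to human review (branch: no fixed address)"
    return "grant (branch: default)"

# Identical inputs yield identical outputs -- consistent and auditable.
# If a branch proves unfair, it can be edited and the fix applies uniformly.
print(bail_recommendation(prior_convictions=1, failed_to_appear=False,
                          has_fixed_address=True))
print(bail_recommendation(prior_convictions=5, failed_to_appear=False,
                          has_fixed_address=True))
```

No human decision-maker offers that guarantee: we cannot rerun a judge on the same facts and be certain of the same ruling.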

Despite the media coverage and headlines, the impact of algorithms on the important decisions in our lives is still fairly minimal. At the moment they are used more often as a support and a guide than as a final arbiter, but that will change, and so must the way we think about them.

A fairer justice system is one where fewer mistakes are made, and if algorithms can help us get there, that is something we should embrace, even if it takes a while to make them perfect.

James Kitching, Solicitor - Corporate, Coffin Mew
