UK judge warns of justice risks as lawyers cite fake AI-generated cases in court
Calls for the legal sector to develop stronger safeguards
The misuse of generative AI in legal submissions is a growing problem, and LLMs making up cases could undermine the foundations of the justice system, a senior judge warns.
A senior UK judge has issued a warning about the dangers of unchecked AI in the legal profession, after multiple lawyers were found to have submitted fictitious case law generated by AI tools in court proceedings.
High Court Justice Victoria Sharp, delivering a ruling [pdf] alongside fellow judge Jeremy Johnson on Friday, said the misuse of generative AI in legal submissions could undermine the very foundations of the justice system.
The ruling follows two separate incidents in which lawyers cited non-existent court cases in formal proceedings – a practice the judges said could amount to contempt of court or even perverting the course of justice, which carries a maximum sentence of life imprisonment.
"Artificial intelligence is a tool that carries with it risks as well as opportunities," said Justice Sharp.
"Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained."
In one of the cases, linked to a £90 million ($120 million) lawsuit over an alleged breach of a financing agreement involving Qatar National Bank, a lawyer cited 18 fabricated legal cases.
The client, Hamad Al-Haroun, took responsibility and apologised to the court, claiming he had used publicly available AI tools and had not intended to mislead. His solicitor, Abid Hussain, insisted he had relied on his client's research.
Justice Sharp called the situation "extraordinary," remarking that it was unacceptable for a lawyer to depend on a client for legal research.
It should be "the other way around," she noted.
In the second case, part of a tenant's housing claim against the London Borough of Haringey, barrister Sarah Forey referenced five fictional cases.
Forey denied using AI but failed to give what the court deemed a "coherent explanation" for the false citations.
Both lawyers have been referred to their respective professional regulators, but no criminal sanctions were imposed at this stage.
Justice Sharp concluded with a call for the legal sector to develop stronger safeguards.
The ruling adds to growing concerns globally about the role of AI in legal processes.
Courts from the United States to Australia have grappled with similar incidents, as generative AI tools such as ChatGPT become increasingly accessible and tempting for time-pressed legal professionals.
A 2023 US district court case in the Southern District of New York fell into disarray when lawyers, challenged to produce copies of six cases they had cited in a brief, admitted they had asked ChatGPT to find the citations and that the results were fabricated.
The incident led to a $5,000 fine for the lawyers and their firm.
In a 2023 UK tax tribunal, an appellant, claiming assistance from "a friend in a solicitor's office," provided nine fake historical tribunal decisions as precedents, later admitting it was "possible" she had used ChatGPT.
Similarly, in a €5.8 million Danish case this year, appellants narrowly avoided contempt proceedings after relying on a made-up ruling that the judge spotted.