
Calling out the increased use of AI tools to argue legal cases, London’s High Court on Friday ruled that lawyers caught citing non-existent cases can be held in contempt – or even face criminal charges.
It's the latest example of generative artificial intelligence leading lawyers astray, say legal experts.
A senior judge lambasted lawyers in two cases who had apparently used AI tools to prepare written arguments that referred to fake case law, and called on regulators and industry leaders to ensure lawyers understand their ethical obligations.
"There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused," Judge Victoria Sharp said in a written ruling.

"In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities ... and by those with the responsibility for regulating the provision of legal services."
The ruling comes after lawyers around the world have been forced to explain themselves for relying on false authorities since ChatGPT and other generative AI tools became widely available more than two years ago.
Sharp warned in her ruling that lawyers who refer to non-existent cases will be in breach of their duty not to mislead the court, which could also amount to contempt of court.
She added that "in the most egregious cases, deliberately placing false material before the court with the intention of interfering with the administration of justice amounts to the common law criminal offence of perverting the course of justice."
Sharp noted that legal regulators and the judiciary had issued guidance about the use of AI by lawyers, but said that "guidance on its own is insufficient to address the misuse of artificial intelligence."
Generative AI is known to confidently make up facts, and lawyers who use it must exercise caution, legal experts said.
AI sometimes produces false information, known as "hallucinations" in the industry, because the models generate responses based on statistical patterns learned from large datasets rather than by verifying facts in those datasets.
In one of the first cases of its kind, two New York lawyers were sanctioned by a Manhattan judge for trying to argue a case using legal briefs containing six fictitious case citations generated by ChatGPT.
The judge presiding over the personal injury trial, which took place about six months after the OpenAI chatbot's launch in November 2022, fined each of the attorneys $5,000.