Michael Cohen, former US president Donald Trump’s ex-lawyer and fixer, said in a new court filing that he unknowingly used generative AI to produce bogus case citations to help his counsel draft a motion.
According to the filing, which was unsealed late last week, Cohen said this happened as part of his legal bid to end the supervised release imposed for his tax evasion and campaign finance violations.
Cohen, who pleaded guilty to the crimes in 2018 and served time in prison, said he used Google Bard, a generative AI chatbot, to find case citations for his lawyer to use in arguing for a shorter supervised release.
One of Cohen’s attorneys, David Schwartz, used the citations in a motion filed with a US federal judge on Cohen’s behalf.
The problem was that the citations were bogus. In a separate December 12th order, US District Judge Jesse Furman said he could not find any of the three cases cited by Schwartz and asked for an explanation of how they came to be included in the motion.
“As far as the Court can tell, none of these cases exist,” Furman wrote in the order.
In reply, Cohen wrote that, as a non-lawyer, he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.”
“Instead, I understood it to be a supercharged search engine and had repeatedly used it in other contexts to (successfully) find accurate information online,” said Cohen in the filing.
Interestingly, Cohen’s explanation essentially threw his lawyer Schwartz under the bus: he said that at no point did Schwartz raise any concerns about the citations, and that he was surprised his team “would drop the cases into his submission wholesale without even confirming they had existed.”
The incident underscores how quickly a new generation of AI language tools is making its way into legal casework, and not only in the United States.
Just recently, the Regional Court of Frankfurt in Germany, which is increasingly confronted with “mass proceedings,” said it suspected law firms of using AI to recruit large numbers of plaintiffs to file small claims.
Earlier, two New York lawyers were fined over ChatGPT-generated briefs that included fictitious case citations. The lawyers admitted they had used ChatGPT to help them research a case against the Colombian airline Avianca.