Two NYC lawyers fined over ChatGPT-generated briefs


Two New York lawyers were sanctioned in Manhattan federal court for trying to argue a case using legal briefs generated by a hallucinating ChatGPT.

The briefs included six fictitious case citations generated by OpenAI’s large language model chatbot.

The two lawyers, Steven Schwartz and Peter LoDuca, of the personal injury law firm Levidow, Levidow & Oberman in downtown Manhattan, were ordered to pay a $5,000 fine by the judge overseeing the case.

The judge found the lawyers acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court."

In what seems like a comedy of errors, the lawyers admitted to using ChatGPT to help them research a specific case against Colombian airline Avianca.

Last month, Schwartz, who admitted he was the one who used the AI chatbot to do the research, said he did not realize that ChatGPT had provided him with false information.

It wasn’t until the attorneys for Avianca tried to locate some of the cases referenced by the duo that the issue came to light.

The judge said that the lawyers "continued to stand by the fake opinions" even after the court and the airline questioned whether they existed.

To make matters worse, ChatGPT attributed the fabricated opinions to real judges.

As part of the judge's ruling, the lawyers will have to notify each of the real judges named as authors of the six fake citations about their AI faux pas.

The judge also remarked that there was nothing "inherently improper" about the lawyers using ChatGPT "for assistance," but said ethical guidelines "impose a gatekeeping role on attorneys to ensure the accuracy of their filings."

Following Thursday's ruling, Levidow, Levidow & Oberman released a statement from the lawyers.

"We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth," it said.

The firm also said its lawyers "respectfully" disagreed with the court and are reviewing the decision.

The personal injury case against the Colombian airline at the center of the entire debacle was dismissed in a separate order by the judge.

ChatGPT and misinformation

OpenAI and its flagship chatbot ChatGPT have been facing intense scrutiny since the tool's November 2022 launch over hallucinations, misinformation, how it processes data, and questionable privacy practices.

It's also not the first time ChatGPT has provided false information regarding a legal complaint.

In the first lawsuit of its kind, ChatGPT’s creator OpenAI is being sued for defamation by a Georgia resident.

The suit accuses ChatGPT of falsely identifying the man as the suspect in an ongoing criminal case in Washington State involving a pro-gun foundation and the embezzlement of funds.

Lawyers for the man called the ChatGPT-generated summary of that case “a complete fabrication,” with absolutely no resemblance to the actual complaint, even including an erroneous case number.

Experts use the term "hallucination" to describe instances where the chatbot generates what seems to be realistic and plausible information based on made-up 'facts.'

