
Critics have lambasted what appears to be the first recorded use of ChatGPT in a court ruling as unethical and irresponsible. The judge has defended the move, arguing that AI tools may help increase efficiency.
A judge in Colombia has caused controversy by using ChatGPT, a conversational bot from OpenAI, to help rule on a case concerning a child's right to free medical treatment.
The case, filed with a court in the Caribbean city of Cartagena, asked the judge to exempt an autistic child from medical fees due to his parents' limited financial means.
Judge Juan Manuel Padilla Garcia said he consulted ChatGPT before ruling in the child's favor. He included both the prompts and the bot's responses in an official court document published on 30 January.
Questions the judge asked the text-generating program to consider included "Is an autistic child exempt from paying moderation fees for therapies?" and "Has the jurisprudence of the constitutional court made favorable decisions in similar cases?"
ChatGPT responded: "According to the regulations in Colombia, children with a diagnosis of autism are exonerated from paying moderation fees for their therapies." It also said that the constitutional court had made favorable decisions in similar cases in the past.
In the decision document, the judge justified using ChatGPT and similar AI programs to speed up court bureaucracy.
"The purpose of including these AI texts is not in any way to replace the judge's decision. What we really seek is to optimize the time spent in drafting judgments," the judge said.
He said the bot's responses were included in the ruling "after corroboration of the information provided by AI."
While Colombian law does not ban the use of AI in making court decisions, the case has raised some eyebrows in the country's legal community.
Professor Juan David Gutiérrez of Rosario University said it was irresponsible and unethical to use ChatGPT in court rulings due to its tendency to provide "incorrect, inaccurate, and false results."
He argued in a Twitter thread that the judge did not understand how the tool was built – or what risks it posed. "Digital literacy is urgent," he added.
This appears to be the first known use of ChatGPT in a court ruling. Earlier this year, a US company abandoned its plans to use a "robot lawyer" in a courtroom after it became clear the move could face multiple legal challenges.
ChatGPT can write poems and code and even take medical exams, but OpenAI has warned that it can make mistakes. There are also concerns that cybersecurity threat actors could exploit the chatbot.