An AI-powered machine cannot grasp courtroom intricacies: shaking hands, a quivering voice, a bead of sweat, a moment’s hesitation, a change of inflection, or a fleeting break in eye contact.
The US Supreme Court is warning of the dangers that AI may bring to the judiciary. In his 2023 Year-End Report on the Federal Judiciary, US Chief Justice John G. Roberts spoke extensively about AI’s potential to disrupt the way courts work.
“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously, it risks invading privacy interests and dehumanizing the law,” he said.
He noted that the legal profession is, in general, notoriously averse to change.
“For most of our Nation’s first century, lawyers and judges produced their work with quill pens. Still today, as has been the custom for more than two centuries, the Clerk of the Supreme Court sets out white goose quill pens at counsel table before each oral argument,” the report reads.
AI combines algorithms and enormous data sets to solve problems, and it has already demonstrated the ability to earn B grades on law school assignments and even pass the bar exam. Legal research may soon be unimaginable without it, the US Chief Justice acknowledged.
He sees benefits for those who cannot afford a lawyer, noting AI’s potential to increase access to justice. AI can also power highly accessible tools that answer basic questions, such as where to find templates and court forms, how to fill them out, and where to submit them to the judge.
“These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system,” Roberts writes.
“But any use of AI requires caution and humility.”
One glaring problem is AI’s shortcomings, such as “hallucinations,” which have already embarrassed lawyers who submitted briefs citing non-existent cases.
“Always a bad idea,” Roberts noted.
Some legal scholars raise concerns about whether entering confidential information into an AI tool “might compromise later attempts to invoke legal privileges.” Others warn that AI may be unreliable or biased when assessing and predicting discretionary decisions, such as flight risk or recidivism in criminal cases.
Roberts also points to studies showing a persistent public perception that human adjudication is fairer than whatever the machine spits out.
“Legal determinations often involve gray areas that still require application of human judgment. Machines cannot fully replace key actors in court. Judges, for example, measure the sincerity of a defendant’s allocution at sentencing. Nuance matters,” Roberts claims.
He concludes that many AI applications indisputably assist the judicial system in advancing Rule 1 of the Federal Rules of Civil Procedure, which directs the parties and the courts to seek the “just, speedy, and inexpensive” resolution of cases. As AI evolves, courts will need to consider its proper uses in litigation.
“I predict that human judges will be around for a while. But with equal confidence, I predict that judicial work – particularly at the trial level – will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them,” Roberts said.
According to the latest World Economic Forum (WEF) white paper on the Jobs of Tomorrow, 23% of global jobs will change in the next five years due to industry transformation, including AI and other processing technologies. Legal and administrative roles are among those most at risk from AI.