People increasingly willing to turn to ChatGPT for legal advice


Not long after ChatGPT burst onto the scene, two American lawyers were found to have used the technology to research their case. They were busted in large part because the AI had made up six of the cases they were relying on in their defence, and the pair seemed oblivious to the possibility that generative AI could hallucinate its responses.

I'm not sure that this case does much to raise our trust in either AI or human lawyers, but research from the University of Southampton suggests that, as far as the public is concerned, AI may actually appear the more trustworthy option.

We’ve written before about research showing that AI adoption is highest among those with the least knowledge of the technology. This appears to be what's happening here, too.


The study found that non-legal experts, i.e., most of us, were often more inclined to trust legal advice given by ChatGPT than advice given by actual lawyers. The only caveat was that both pieces of advice had to be presented blindly; in other words, participants weren't told who had provided each one.

This is problematic because, while two-thirds of people have experienced some sort of legal trouble in recent years, a huge proportion can't afford to pay for professional help and either don't know how to access legal aid or aren't eligible for it. With ChatGPT and similar tools promising quick and seemingly reliable answers for free, it's understandable why many turn to them for help.

As the American lawyers found, however, these tools are also prone to making things up. While it clearly wasn't the case for them, the logic is that people who know a subject can reliably assess AI output and check its accuracy, whereas those without that knowledge cannot, especially when the advice merely "seems" like it should be true.

AI vs Humanity. Image by Yakobchuk | Shutterstock

Free advice

The researchers gave nearly 300 people two pieces of legal advice and asked which they would be most likely to act upon. The results show that when people weren't told whether the advice came from a human lawyer or from AI, they were more inclined to go with the AI advice. The researchers argue that this underscores the importance of labelling advice appropriately, so we know which content is AI-generated and which is produced by humans.

This is no silver bullet, however, as the study also found that when each piece of advice was accurately labelled, people were still just as likely to trust the AI-generated advice as they were the lawyer's.

Why is this? One reason, the researchers suggest, is that AI systems tend to use more complex language than human lawyers. What's more, strange as it may seem given how verbose the typical ChatGPT answer is, the researchers found that human lawyers actually provide longer answers, albeit in simpler language.


Indistinguishable content

Despite this, the study also found that respondents were generally unable to tell AI-generated content from that produced by human lawyers. Admittedly, that's not entirely true – they scored marginally better than if they had randomly guessed who produced the content – but that seems scant consolation.


Obviously, AI systems are largely unregulated at the moment, despite being used for everything from therapy and medical advice to legal support and companionship. These are often highly regulated fields, yet generative AI currently operates with few constraints.

The consequences of poor advice given by generative AI could be significant, yet these platforms are currently free to dispense it with little oversight.

The EU's AI Act attempts to ensure that AI systems at least mark their output as having been generated by AI, but that hardly helps when users are knowingly interacting with ChatGPT or other such systems. It's hard to imagine that they'll be confused about where the responses are coming from.

Perhaps a better approach would be to improve AI literacy so that the public is aware of the possibility of hallucinations and other ill-informed responses. Then they can at least scrutinize the responses more closely, or apply a more skeptical eye when consuming them.

Artificial intelligence pulling letters. Image by Cybernews.

The research showing that the least informed are often the heaviest users of AI suggests that a lack of awareness can lead people to view AI output as a kind of magic. As a result, the outputs are rarely scrutinized. Better awareness would help people look behind the curtain, appreciate how AI derives its answers, and understand that mistakes are all too common. They can then cross-check the answers with other sources to ensure they make sense.

AI can undoubtedly be useful, but it's no real substitute for human expertise, and it's dangerous when people believe that it is. Given how young the technology is and how difficult it is for regulators to keep pace with developments, we're in a bit of a Wild West at the moment. This research reminds us of some of the risks involved.
