Chatbots, trust, and the next generation of voice cloning


In a recent article, I looked at how the chatbots that are an increasingly ubiquitous part of the online experience might be subtly manipulating our behavior. With AI advancing at such a pace, today's impact is likely the smallest these bots will ever have, as improving capabilities encourage organizations to deploy them at scale.

A new study from the University of Wisconsin-Milwaukee has further explored the impact this will have on us. The research focuses particularly on public-facing interactions and how consumer trust can be built (or broken).

Building trust


The annual Edelman Trust Barometer highlights how poor trust in business currently is, which clearly impacts our willingness to spend money with a brand. Poor customer service has long been cited as a factor in the worsening relationship between consumers and brands. Have chatbots helped?

"The whole idea here is that we need to try and be forward-looking," the researchers explain. "This is sort of an inflection point that we're starting to see with a lot of these generative AI technologies, where … we don't really know what the potential downsides are."

The research builds on previous work by the team that explored how the humanization of chatbots helps to establish trust. They found that coding the bot to be able to tell jokes or adopt various other "humanizing" qualities helped to build relationships between the company and the consumer, which ultimately boosted sales.


Various studies have shown that anthropomorphism can have drawbacks when it falls into the "uncanny valley". For instance, in another study, the researchers found that donors are less likely to support a charity if asked to do so by a human-like chatbot. The researchers argue that charitable donations are an inherently emotional context:

"Having high degrees of anthropomorphism as well as high degrees of emotional appeals are counterproductive because it's already an emotional context and it's almost too abrasive to people," the researchers explain.

Pushing ahead

Obviously, context matters, and deploying chatbots in a setting like retail will differ from deploying them for non-profits, but cost pressures and technological advances are encouraging organizations to push ahead with adoption.


This is only likely to become more important as the technology expands to include not only AI agents but also voice clones capable of mimicking the voices of real human beings.


The key question is not whether voice cloning works, but whether people will trust it, and how easily it can shape what they believe.

The researchers had participants speak with AI bots over the phone. Some bots used the participant’s own cloned voice. The results were striking: even when told the caller could not be trusted, people were more likely to believe what it said—so long as it spoke in their own voice. Familiarity, it seems, breeds belief.

"Even in that situation, when we give them this information, they're more willing to trust that other party, even when they know that this person is not trustworthy," the researchers explain.

What's more, this trust remained high even when consumers were informed that they were engaging with a bot. The researchers believe these findings should inform any future legislation that is designed to protect consumers from nefarious applications of the technology.

Trust matters

Obviously, the risk of fraud and similar scams is incredibly high with voice cloning, and numerous such stories have already emerged. With fraudsters able to create a believable clone from just a few seconds of audio, the manipulative power of the technology is huge.


As with all technologies, the researchers urge us to remember that it's not the technology that's at fault, but how it's deployed. Technology is always agnostic, and voice cloning is no different, with some positive outcomes and some negative ones.


"I think it's really important for us as researchers to think five years ahead," the researchers conclude. "How could we potentially protect people, or at least drive transparency that this is a potential risk?"