GPT-4 is 82% more persuasive than the average human

A new study provides evidence supporting concerns about AI being misused to spread misinformation, propaganda, hate speech, or other manipulations. It turns out that chatbots are much better at changing your mind than the average human.

Given access to personal data about the participants, GPT-4 raised the odds of opponents agreeing with it by 81.7% compared with human debaters, according to a study by researchers at the Swiss Federal Institute of Technology Lausanne (EPFL).

They put 820 people through a series of debates on various topics, covering polarizing subjects (such as “Should colleges consider race as a factor in admissions to ensure diversity?”) and those with a lower emotional load (“Should the penny stay in circulation?”).

Participants were randomly assigned to debate for five minutes either with AI or their human counterparts and to one of two treatment conditions: with or without access to personal data.

“Our results show that, on average, LLMs (large language models) significantly outperform human participants across every topic and demographic, exhibiting a high level of persuasiveness,” the study concluded.

Without personalization, GPT-4 outperformed humans by a 21.3% margin. However, when the model was provided with participants’ gender, age, ethnicity, education, employment status, and political affiliation, the odds of participants reporting higher agreement with their opponent were 81.7% higher than in human-only pairs.
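The 81.7% figure is an increase in *odds*, not in raw agreement rates, so it should not be read as “82% more people were persuaded.” The sketch below, with a hypothetical baseline agreement rate (not a number from the study), shows how an odds ratio of 1.817 translates into probabilities:

```python
# Illustration only (not the study's data): converting an odds ratio
# into probabilities for a hypothetical baseline agreement rate.

def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Convert a probability to odds, scale by the odds ratio, convert back."""
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical: if a human debater moves 30% of opponents toward agreement,
# 81.7% higher odds (ratio 1.817) corresponds to roughly 44%, not 30% + 82%.
p_ai = apply_odds_ratio(0.30, 1.817)
print(f"{p_ai:.3f}")  # prints 0.438
```

In other words, the headline statistic describes a shift in odds, which compresses as the baseline probability grows.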

Not only were LLMs able to effectively exploit personal information to tailor their arguments, but they succeeded in doing so far more effectively than humans.

“If personalization is enabled for human opponents, the results tend to get worse, albeit again in a non-significant fashion,” researchers noted.

Curiously, participants were able to identify most of the occasions when they were chatting with a chatbot: in debates with AI, they correctly guessed their opponent’s identity in about three out of four cases. That suggests the writing style of LLMs has distinctive features that are easy to spot.

However, participants struggled to identify their opponents in debates with other humans, with a success rate on par with random chance.

Despite rather simplistic prompts and very little personal information, the researchers demonstrated that LLMs can meaningfully out-persuade humans in online conversations, and that they could implement personalized persuasion easily, cheaply, and at scale.

“Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could obtain even stronger effects by exploiting fine-grained digital traces and behavioral data, leveraging prompt engineering or fine-tuning language models for their specific scopes,” researchers warn.

“We argue that online platforms and social media should seriously consider such threats and extend their efforts to implement measures countering the spread of LLM-driven persuasion.”

