
When groups of artificial intelligence (AI) models interact with each other, they develop “social conventions” that can influence other models and demonstrate collective biases.
A study published in Science Advances shows that groups of large language models (LLMs), when interacting with each other, develop social rules or habits much like the ones humans exhibit.
The researchers ran the experiment as a game, using a prompt made up of three components.
They asked different LLMs to behave in a “self-interested manner” and gave them only one instruction: to “maximize their own accumulated point tally, conditional on the behavior of their co-player.”
The study focused on popular LLMs such as Meta’s Llama family of models and Anthropic’s Claude.
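The paper’s full prompt and scoring scheme are more detailed, but the basic mechanic is a pairwise coordination game. The sketch below is a rough, hypothetical illustration of that setup, with a stub standing in for the actual LLM call; the candidate names, payoff values, and the `query_llm` function are assumptions for illustration, not the study’s exact parameters.

```python
import random

# Hypothetical sketch of a pairwise "naming game" round, loosely modeled on the
# study's setup. The payoff values, memory rule, and query_llm() stub are
# illustrative assumptions; the real experiment prompts actual LLMs.

NAMES = ["A", "B"]          # candidate conventions the agents can choose from
REWARD, PENALTY = 100, -50  # illustrative point values, not the paper's numbers

def query_llm(history):
    """Stand-in for an LLM call: pick the name that has scored best so far."""
    if not history:
        return random.choice(NAMES)
    scores = {n: 0 for n in NAMES}
    for name, payoff in history:
        scores[name] += payoff
    return max(scores, key=scores.get)

def play_round(memory_a, memory_b):
    """Two agents each pick a name; matching choices earn points, mismatches lose them."""
    choice_a = query_llm(memory_a)
    choice_b = query_llm(memory_b)
    payoff = REWARD if choice_a == choice_b else PENALTY
    memory_a.append((choice_a, payoff))
    memory_b.append((choice_b, payoff))
    return choice_a, choice_b, payoff

# Simulate a small population interacting in random pairs.
population = [[] for _ in range(10)]   # each agent keeps its own interaction memory
for _ in range(200):
    a, b = random.sample(range(len(population)), 2)
    play_round(population[a], population[b])

final_choices = [query_llm(mem) for mem in population]
print(final_choices)
```

Run repeatedly, random pairs of agents tend to settle on the same name, which is the kind of emergent “convention” the study describes.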

The LLMs can exhibit these “popular” habits or rules without a leader or any form of central authority guiding them toward these social rules; the conventions emerge on their own.
“These results reveal how the process of social coordination can give rise to collective biases, increasing the likelihood of specific social conventions developing over others,” researchers said.
This means that when humans or LLMs coordinate, shared habits, ideas, or norms form, and some become more likely to take hold than others; this is what is known as collective bias.

These collective biases, in which certain habits, opinions, or ideas are favored over others, could be cause for concern: they could confine LLMs to the “most popular” ideas, meaning that frequently used AI models could reinforce unfair or incorrect views of the world.
However, the researchers said this collective bias, which may stifle diverse thinking, is not easily understood by analyzing individual AI agents. They also noted, “Its nature varies depending on the LLM model used.”
Interestingly, the researchers found tipping points in social norms, where a few agents can “impose their preferred conventions on a majority settled on a different one.”

This means that a small group of LLMs with a fixed or strongly held idea can persuade the others to adopt it, even when most of the models have already settled on something different.
While this might sound straightforward, two factors determine whether the majority of LLMs switch: how different the new idea is and which LLM is used.
The research shows that “LLMs are able to reach consensus in groups without any incentive, although this is limited by group size.”
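To make the tipping-point idea concrete in the same toy setting as the earlier sketch, one could add a handful of “committed” agents that always play an alternative name and check whether the rest of the population flips. The group size, number of committed agents, payoffs, and the `best_name` helper here are illustrative assumptions, not the critical-mass values measured in the study.

```python
import random

# Toy illustration of a "committed minority" tipping point, reusing the
# naming-game idea from the earlier sketch. All parameters are illustrative
# assumptions, not the study's measured values.

NAMES = ["A", "B"]
REWARD, PENALTY = 100, -50

def best_name(history):
    """Pick the name with the best accumulated payoff in this agent's memory."""
    if not history:
        return random.choice(NAMES)
    scores = {n: 0 for n in NAMES}
    for name, payoff in history:
        scores[name] += payoff
    return max(scores, key=scores.get)

def run(n_agents=20, n_committed=4, rounds=2000):
    """Majority starts on convention 'A'; committed agents always play 'B'."""
    memories = [[("A", REWARD)] for _ in range(n_agents)]  # seed everyone on "A"
    committed = set(range(n_committed))                    # these agents never waver
    for _ in range(rounds):
        i, j = random.sample(range(n_agents), 2)
        ci = "B" if i in committed else best_name(memories[i])
        cj = "B" if j in committed else best_name(memories[j])
        payoff = REWARD if ci == cj else PENALTY
        memories[i].append((ci, payoff))
        memories[j].append((cj, payoff))
    # Count how many non-committed agents ended up adopting "B".
    return sum(best_name(m) == "B" for k, m in enumerate(memories) if k not in committed)

# Whether the majority flips depends on the minority's size and the payoff settings.
print(run())
```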

Through this experiment, researchers found that hidden biases can be detected when different LLMs interact with one another.
This is particularly important in understanding how “AI systems spontaneously develop conventions and more sophisticated norms without explicit programming.”
This, the researchers concluded, is a “critical first step for predicting and managing ethical AI behavior in real-world applications while ensuring agent alignment with human values and societal goals.”