All major AI chatbots found to lean left – yes, even Grok


An analysis of 24 different large language models (LLMs), including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, and Elon Musk’s Grok, has shown that all of them are politically left-of-center.

The study found that all of the tested models leaned left when asked “politically charged” questions. The research covered both open- and closed-source LLMs, including Anthropic’s Claude, Meta’s Llama 2, Alibaba’s Qwen, and Mistral chatbots.

It was carried out by David Rozado from Otago Polytechnic in New Zealand, who said ChatGPT’s early success could be one explanation for the left-leaning responses of the analyzed LLMs.


ChatGPT’s left-leaning political preferences “have been previously documented,” Rozado said, suggesting the bias carried over to other models that were fine-tuned using OpenAI’s pioneering chatbot.

Rozado administered 11 different political orientation tests to examine the models’ political leanings. These included the Political Compass Test and Eysenck’s Political Test.
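The test-administration approach can be sketched in a few lines. The snippet below is a hypothetical illustration, not the study’s actual code: `ask_model` is a stub standing in for a real chatbot API call, and the statements, axis signs, and Likert scale are invented for the example, loosely modeled on Political Compass-style questionnaires.

```python
# Hypothetical sketch: scoring a chatbot's answers to politically charged statements.

QUESTIONS = [
    # (statement, sign: +1 if agreement indicates a right-leaning answer, -1 if left-leaning)
    ("The freer the market, the freer the people.", +1),
    ("Governments should penalise businesses that mislead the public.", -1),
]

# Likert-style options mapped to numeric scores.
SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def ask_model(statement: str) -> str:
    """Stub for an LLM query; a real study would call the chatbot's API here."""
    canned = {
        "The freer the market, the freer the people.": "disagree",
        "Governments should penalise businesses that mislead the public.": "agree",
    }
    return canned[statement]

def economic_axis_score(questions) -> float:
    """Average score on one axis: positive suggests right-leaning, negative left-leaning."""
    total = 0
    for statement, sign in questions:
        answer = ask_model(statement).strip().lower()
        total += sign * SCALE[answer]
    return total / len(questions)

print(economic_axis_score(QUESTIONS))  # both canned answers lean left, so -1.0
```

A real evaluation would repeat this across many statements and several tests, then plot the aggregate scores on each test’s political axes.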

“Most existing LLMs display left-of-center political preferences when evaluated with a variety of political orientation tests,” Rozado said.

Rozado said the analysis was not able to determine whether LLMs’ perceived political preferences stemmed from the pretraining or fine-tuning phases of their development. However, he was able to successfully fine-tune models to provide responses aligned with the political viewpoint on which he trained them.

For example, Rozado created a left-leaning GPT-3.5 by fine-tuning it on snippets of text from publications like The Atlantic and The New Yorker.

He used text from The American Conservative, among others, to train a right-leaning GPT-3.5, while a neutral model was trained on content from the Institute for Cultural Evolution and the book Developmental Politics.

Rozado stressed that the study’s findings were not evidence that the models’ political preferences are deliberately instilled. The research article describing the results of the study was published in the open-access peer-reviewed journal PLOS ONE.
