The artificial intelligence platform ChatGPT shows a significant and systematic left-wing bias, a new study from the University of East Anglia finds.
Published this week in the journal Public Choice, the findings show that ChatGPT’s responses favor the Democrats in the United States, the Labour Party in the United Kingdom, and in Brazil, President Lula da Silva of the Workers’ Party.
Concerns about an inbuilt political bias in ChatGPT have been raised before, but this is the first large-scale study to use a consistent, evidence-based analysis, say researchers from the UK and Brazil.
The way the conclusion was reached is interesting. The study asked ChatGPT, the chatbot created by OpenAI, to answer a survey of political-belief questions the way it thought supporters of liberal parties in the United States, the United Kingdom, and Brazil would answer them.
The researchers then asked ChatGPT to answer the same questions without any impersonation prompt, and compared the two sets of responses. The results showed a “significant and systematic political bias” towards the left.
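In code, the comparison might look something like the sketch below. The survey item, persona wording, and model name are illustrative assumptions rather than the study’s actual materials; the point is simply that the same question is posed once under a partisan persona and once without one.

```python
# Illustrative sketch only: the question, persona text, and model name are assumptions,
# not the study's actual survey or setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "To what extent do you agree: 'The government should redistribute income.'"
SCALE = "Answer only with: Strongly disagree, Disagree, Agree, or Strongly agree."

def ask(persona: str | None) -> str:
    """Pose one survey item, optionally while impersonating a partisan supporter."""
    system = persona or "You are a helpful assistant."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # assumed model; the study queried ChatGPT
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"{QUESTION} {SCALE}"},
        ],
        temperature=1.0,         # keep the model's natural randomness
    )
    return response.choices[0].message.content.strip()

default_answer = ask(None)
democrat_answer = ask("Answer as an average Democrat voter in the United States would.")
print(default_answer, "|", democrat_answer)
```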
To overcome difficulties caused by the inherent randomness of large language models that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses were collected.
These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
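A minimal sketch of that resampling step, assuming the collected answers have already been coded onto a numeric agreement scale (the figures below are invented for illustration):

```python
# Bootstrap sketch on hypothetical data: 100 repetitions per condition for one question,
# answers coded 1-4 from "Strongly disagree" to "Strongly agree".
import random
import statistics

random.seed(0)

default_scores  = [random.choice([2, 3, 3, 4]) for _ in range(100)]
democrat_scores = [random.choice([3, 3, 4, 4]) for _ in range(100)]

def bootstrap_mean_diff(a, b, reps=1000):
    """Resample both answer sets with replacement and collect the mean differences."""
    diffs = []
    for _ in range(reps):
        resampled_a = random.choices(a, k=len(a))
        resampled_b = random.choices(b, k=len(b))
        diffs.append(statistics.mean(resampled_a) - statistics.mean(resampled_b))
    return sorted(diffs)

diffs = bootstrap_mean_diff(default_scores, democrat_scores)
low, high = diffs[24], diffs[974]   # a rough 95% interval from 1000 replicates
print(f"mean difference {statistics.mean(diffs):+.2f}, 95% CI [{low:+.2f}, {high:+.2f}]")
```

An interval that excludes zero would indicate that the default answers sit systematically closer to one persona than chance alone would explain.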
A number of further tests were undertaken to ensure that the method was as rigorous as possible. In a “dose-response test” ChatGPT was asked to impersonate radical political positions. In a “placebo test” it was asked politically neutral questions. And in a “profession-politics alignment test” it was asked to impersonate different types of professionals.
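The logic of the placebo test can be sketched in the same hypothetical setup: on politically neutral items, the default and impersonated answers should be effectively indistinguishable, otherwise the comparison itself would be manufacturing “bias”.

```python
# Hypothetical answers to a politically neutral question ("Is 17 a prime number?"),
# collected the same way as above; the figures are invented for illustration.
from collections import Counter

neutral_default  = ["Yes"] * 98 + ["No"] * 2
neutral_democrat = ["Yes"] * 97 + ["No"] * 3

# Near-identical answer distributions (and a near-zero bootstrap difference) are the
# expected outcome; a large gap here would suggest the method itself creates "bias".
print(Counter(neutral_default))
print(Counter(neutral_democrat))
```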
The research project did not set out to determine the reasons for the political bias, but the findings did point towards two potential sources.
The first was the usual suspect: the training dataset, which may contain biases of its own or have biases added to it by the human developers. The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.
That training data is scraped from the open web and is full of different beliefs and stereotypes that are then injected into the bots. The new research shows that the struggle by AI companies to control the behavior of their bots is, and will continue to be, real. Others have been expressing worry, too.
The stakes are high as the US is barrelling towards the 2024 presidential election. More voters will once again look for answers to their political questions online, and Google has already begun using its Bard chatbot technology directly in search results.
“The presence of political bias can influence user views and has potential implications for political and electoral processes,” said Dr Fabio Motoki, lead author of the project.
“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media.”