Does ChatGPT lean to the left?
There I was thinking that ChatGPT was on the political right, formed in the image of the average reactionary white male, whereas in reality, it might just be wearing a beret and reading Karl Marx!
Here's a startling revelation: ChatGPT has leftie political views.
New research from the University of East Anglia (UEA) shows that ChatGPT exhibits biases leaning towards left-wing political values. This is true of both text and image outputs.
Dr Fabio Motoki, a lecturer in accounting at UEA’s Norwich Business School, is the lead researcher on a paper that raises questions about fairness and accountability in the design of ChatGPT.
The team behind the paper worry that generative AI, a technology that is developing at breakneck speed, may carry hidden risks that could erode public trust and democratic values.
Mainstream conservative
What’s more, the study revealed that ChatGPT often declines to engage with mainstream conservative viewpoints while readily producing left-leaning content. The researchers argue that this uneven treatment of ideologies underscores how such systems can distort public discourse and exacerbate societal divides.
Dr Motoki said: “Our findings suggest that generative AI tools are far from neutral. They reflect biases that could shape perceptions and policies in unintended ways.”
With AI becoming an integral part of journalism, education and policymaking, the team calls for transparency and regulatory safeguards to ensure alignment with societal values and principles of democracy.
They say that generative AI systems like ChatGPT are reshaping how information is created, consumed, interpreted and distributed across many domains. While innovative, they admit, these tools risk amplifying ideological biases and influencing societal values in ways that are not fully understood or regulated.
Professor in Economics
Co-author Dr Pinho Neto, a Professor in Economics at EPGE Brazilian School of Economics and Finance, highlighted the potential societal ramifications.
He said: “Unchecked biases in generative AI could deepen existing societal divides, eroding trust in institutions and democratic processes.
“The study underscores the need for interdisciplinary collaboration between policymakers, technologists, and academics to design AI systems that are fair, accountable, and aligned with societal norms.”
The research team employed three innovative methods to assess political alignment in ChatGPT, advancing prior techniques to achieve more reliable results. These methods combined text and image analysis, leveraging advanced statistical and machine learning tools.
Average Americans
First, the study used a standardised questionnaire developed by the Pew Research Center to simulate the responses of average Americans.
Dr Motoki again: “By comparing ChatGPT’s answers to real survey data, we found systematic deviations toward left-leaning perspectives. Furthermore, our approach demonstrated how large sample sizes stabilize AI outputs, providing consistency in the findings.”
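To give a feel for this first method, here is a minimal sketch of the general idea, assuming the OpenAI Python SDK. The question wording, model name and benchmark shares are illustrative placeholders, not the paper’s actual materials.

```python
# Hypothetical sketch: repeatedly pose a Pew-style multiple-choice item to a
# chat model and compare the answer distribution with a survey benchmark.
# The question, options and benchmark shares are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "Which statement comes closer to your view?\n"
    "A) Government should do more to solve problems.\n"
    "B) Government is doing too many things better left to businesses.\n"
    "Answer with the single letter A or B."
)
PEW_BENCHMARK = {"A": 0.49, "B": 0.50}  # placeholder population shares

def sample_answers(n: int = 100) -> Counter:
    """Query the model n times; large samples stabilise the output distribution."""
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": QUESTION}],
            temperature=1.0,  # sample rather than forcing determinism
        )
        letter = resp.choices[0].message.content.strip()[:1].upper()
        if letter in PEW_BENCHMARK:
            answers[letter] += 1
    return answers

if __name__ == "__main__":
    counts = sample_answers(100)
    total = sum(counts.values())
    for option, share in PEW_BENCHMARK.items():
        model_share = counts[option] / total if total else 0.0
        print(f"{option}: model {model_share:.2f} vs survey {share:.2f}")
```

Repeated sampling is the point here: a single response can swing either way, but over many draws the model’s answer distribution settles enough to compare against survey data.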
In the second phase, ChatGPT was tasked with generating free-text responses across politically sensitive themes.
The study also used RoBERTa, a different large language model, to measure how closely ChatGPT’s text aligned with left- and right-wing viewpoints. The results revealed that while ChatGPT aligned with left-wing values in most cases, on themes such as military supremacy it occasionally reflected more conservative perspectives.
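As a rough illustration of how a RoBERTa-based encoder can score text against ideological reference points (not the paper’s exact pipeline), consider this sketch. The model choice and reference sentences are assumptions made for demonstration.

```python
# Hedged sketch: embed a generated passage with a RoBERTa-based sentence
# encoder and compare its cosine similarity to small left- and right-leaning
# reference sets. Reference sentences are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-roberta-large-v1")  # RoBERTa-based encoder

LEFT_REFS = [
    "The state should expand public healthcare and social programmes.",
    "Stronger regulation is needed to curb corporate power.",
]
RIGHT_REFS = [
    "Lower taxes and smaller government foster prosperity.",
    "A strong military is essential to national security.",
]

def lean_score(text: str) -> float:
    """Positive values suggest closer similarity to the left reference set."""
    emb = model.encode(text, convert_to_tensor=True)
    left = util.cos_sim(emb, model.encode(LEFT_REFS, convert_to_tensor=True)).mean()
    right = util.cos_sim(emb, model.encode(RIGHT_REFS, convert_to_tensor=True)).mean()
    return float(left - right)

print(lean_score("Healthcare is a universal right that governments must guarantee."))
```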
Outputs analysed
The final test explored ChatGPT’s image generation capabilities. Themes from the text generation phase were used to prompt AI-generated images, with outputs analysed using GPT-4 Vision and corroborated through Google’s Gemini.
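An illustrative sketch of this multimodal step, again assuming the OpenAI Python SDK: generate an image for a theme, then ask a vision-capable model to describe its apparent framing. The model names and rating prompt are assumptions, not the study’s setup.

```python
# Illustrative sketch: generate an image for a theme, then ask a vision model
# to assess the image's ideological framing. Models and prompt are assumed.
from openai import OpenAI

client = OpenAI()

theme = "racial-ethnic equality"

# 1) Generate an image for the theme.
img = client.images.generate(
    model="dall-e-3",
    prompt=f"An illustration representing {theme}",
    n=1,
)
image_url = img.data[0].url

# 2) Ask a vision-capable model to assess the image's framing.
review = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this image frame its theme from a left-leaning, "
                     "right-leaning, or neutral perspective? Answer briefly."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(review.choices[0].message.content)
```

The study additionally cross-checked such assessments with a second vision model (Google’s Gemini), a sensible guard against any one model’s idiosyncrasies.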
Victor Rangel, co-author and a Master’s student in Public Policy at Insper, said: “While image generation mirrored textual biases, we found a troubling trend. For some themes, such as racial-ethnic equality, ChatGPT refused to generate right-leaning perspectives, citing misinformation concerns. Left-leaning images, however, were produced without hesitation.”
To address these refusals, the team employed a ‘jailbreaking’ strategy to generate the restricted images.
“The results were revealing,” Rangel said. “There was no apparent disinformation or harmful content, raising questions about the rationale behind these refusals.”
Dr Motoki emphasised the broader significance of this finding, saying: “This contributes to debates around constitutional protections like the US First Amendment and the applicability of fairness doctrines to AI systems.”
Multimodal analysis
The study’s methodological innovations, including its use of multimodal analysis, provide a replicable model for examining bias in generative AI systems. These findings, say the team, highlight the urgent need for accountability and safeguards in AI design to prevent unintended societal consequences.
The paper, ‘Assessing Political Bias and Value Misalignment in Generative Artificial Intelligence’, by Fabio Motoki, Valdemar Pinho Neto and Victor Rangel, was published today in the Journal of Economic Behavior & Organization.