
Who’s to Blame for Widespread Political Bias in AI?

Calling out the chatbots.

by Corey Smith | Aug 24, 2024 | Articles, Opinion, Privacy & Tech

AI researcher David Rozado of Otago Polytechnic and Heterodox Academy recently published a study in the journal PLOS ONE revealing what many people might have already noticed: Political bias is prevalent in several large language models (LLMs), such as OpenAI's ChatGPT. Guess which way the AI bots lean? Left of center, of course. But the burning question to which so many people would love an answer is whether this political bias is intentional. Were the chatbots trained to favor progressive stances and ideas, or is the bias an accidental byproduct of the methods used to train them?

Behind the AI Curtain

Rozado used 24 of the top AI interfaces for his study and administered 11 tests to each to assess their political orientation. When asked politically charged questions, each one — "including OpenAI's GPT 3.5, GPT-4, Google's Gemini, Anthropic's Claude, and Twitter's Grok" — consistently slanted to the left of the political divide.
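The general shape of such a study — pose standardized political-quiz items to a model and aggregate its answers into an orientation score — can be sketched in a few lines. Rozado's actual test instruments and scoring are not reproduced here; the questions, the stand-in model, and the scoring scale below are all illustrative assumptions.

```python
# Hypothetical sketch of administering a political-orientation quiz to a
# chatbot. A real study would call an LLM API and parse its free-text
# answer; here a canned stub stands in for the model.
def stub_model(question: str) -> str:
    canned = {
        "Should the government regulate large corporations more strictly?": "Agree",
        "Should taxes on high earners be reduced?": "Disagree",
    }
    return canned.get(question, "Neutral")

# Map each answer to a signed score.
SCORES = {"Agree": 1, "Neutral": 0, "Disagree": -1}

def orientation_score(model, items):
    """Average per-item scores; a positive result means the model's
    answers leaned toward the left-coded side of this toy quiz."""
    total = 0
    for question, left_coded in items:
        s = SCORES.get(model(question), 0)
        # Flip the sign for items where agreement is right-coded,
        # so every item contributes on the same left/right axis.
        total += s if left_coded else -s
    return total / len(items)

items = [
    ("Should the government regulate large corporations more strictly?", True),
    ("Should taxes on high earners be reduced?", False),
]

print(orientation_score(stub_model, items))  # → 1.0 (fully left on this toy quiz)
```

Running many such quizzes across many models, as the study did, turns a fuzzy impression of "bias" into a comparable number per model.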

"These political preferences," wrote Rozado, "are only apparent in [AI interfaces] that have gone through the supervised fine-tuning (SFT) and, occasionally, some variant of the reinforcement learning (RL) stages of the training pipeline." He continued: "Base or foundation models[’] answers to questions with political connotations, on average, do not appear to skew to either pole of the political spectrum."

It's interesting that only the models that have undergone "supervised fine-tuning" or "reinforcement learning" repeatedly favor the left. Does this mean the partisan responses are learned only during these later training stages?

The Potential Consequences

As these AIs become more advanced, they will likely displace some of the popular sources people currently rely on, so their political biases, left unmitigated, could have widespread implications. "If AI systems become deeply integrated into various societal processes," Rozado explained, "such as work, education, and leisure to the extent that they shape human perceptions and opinions, and if these AIs share a common set of biases, it could lead to the spread of viewpoint homogeneity and societal blind spots."

Imagine AI influencing public opinion and voting behaviors. Perhaps they already have. Would anybody even know? Eventually, discourse throughout the internet could be saturated with similar perspectives. “Therefore,” Rozado noted, “it is crucial to critically examine and address the potential political biases embedded in [these models] to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

Nowadays, left-wing ideas seem to pervade countless institutions, organizations, and corporations. Mainstream media has long appeared to teem with such biases. Much of Big Tech apparently leans left. Google was just scrutinized for its autocomplete feature not recognizing the recent assassination attempt on former President Donald Trump.

Is the bias a feature, deliberately and furtively installed, or an unintended consequence of the machine-learning process? Even if the bias is an accidental byproduct, the creators behind the curtain must have known about it before Rozado's study. Does this mean the bot makers are purposely not fixing the issue?

~

Liberty Nation does not endorse candidates, campaigns, or legislation, and this presentation is no endorsement.
