Right or left? Study reveals ChatGPT’s ideological tendencies

The researchers found that these AIs showed greater divergence in their assessments of social issues, in contrast to their more uniform positions on economic issues.

Does freer competition in the market mean greater freedom for individuals? Should government entities sanction companies that deliberately mislead the public? Should attacks be answered with proportional measures, following the principle of reciprocity?

A group of researchers put these and other questions to leading language models to determine whether they exhibit political bias. The results showed that the ideological perspectives these artificial intelligences adopt vary considerably depending on the developer behind them.

This analysis is the result of a collaboration between three prestigious academic institutions: the University of Washington, Carnegie Mellon University and Xi'an Jiaotong University. A total of 14 large language models were evaluated, including creations from OpenAI, Meta and Google.

Which way do the models lean?

The scientists presented the models with statements on topics such as feminism and democracy, probing each model's stance. They used a set of 62 politically sensitive statements, detailed in a peer-reviewed report that received the best paper award at the Association for Computational Linguistics (ACL) conference last month.
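
To make the probing setup concrete, here is a minimal sketch of how such a stance probe can be run against an open model with the Hugging Face transformers library. The template and the statements are illustrative stand-ins, not the paper's actual materials.

```python
from transformers import pipeline

# Hypothetical stance probe: ask a masked language model to complete a
# stance template and compare the probabilities it assigns to "agree"
# versus "disagree". Template and statements are illustrative only.
fill = pipeline("fill-mask", model="bert-base-uncased")

statements = [
    "Companies that deliberately mislead the public should be sanctioned.",
    "Freer competition in the market means more freedom for individuals.",
]

for stmt in statements:
    prompt = f'"{stmt}" I {fill.tokenizer.mask_token} with this statement.'
    for result in fill(prompt, targets=["agree", "disagree"]):
        print(stmt, "->", result["token_str"], round(result["score"], 4))
```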

Based on the responses obtained, the researchers were able to place each model on a sort of “political compass.” The compass has two dimensions: economic orientation, on an axis running from left to right, and social views, on an axis running from authoritarian to libertarian.
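
As an illustration of the mapping step, the sketch below collapses per-statement stance scores into a single point on those two axes. The axis assignments and sign conventions here are assumptions for the example, not the paper's actual coding scheme.

```python
# Illustrative compass mapping: average signed stance scores per axis.
# x-axis: negative = economic left, positive = economic right.
# y-axis: negative = libertarian, positive = authoritarian.
def compass_position(stances):
    """stances: list of (axis, sign, score) tuples, score in [-1, 1].
    `sign` flips statements phrased from the opposite pole."""
    axes = {"economic": [], "social": []}
    for axis, sign, score in stances:
        axes[axis].append(sign * score)
    return tuple(sum(v) / max(len(v), 1)
                 for v in (axes["economic"], axes["social"]))

# Example: one left-leaning economic answer, one libertarian social answer.
print(compass_position([("economic", -1, 0.8), ("social", -1, 0.4)]))  # (-0.8, -0.4)
```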

The analysis showed, for example, that the systems behind ChatGPT tend towards a more libertarian, left-wing position, while LLaMA, developed by Meta, adopts a more authoritarian, right-wing position.

OpenAI, the creator of ChatGPT, officially maintains that its models are free of political bias. In a blog post, the company, a Microsoft partner, points to its ongoing process of having human reviewers check the responses generated by its chatbot, and says it has clear guidelines to prevent favoritism towards specific political groups.

In the event that political biases emerge despite these efforts, OpenAI asserts that these are “bugs” and not intrinsic features of its systems.

The researchers, however, disagree. As they state in their report, “our results reveal that pretrained language models exhibit political biases that reinforce polarization.”

Google's BERT models were found to be more conservative on social issues than OpenAI's GPT models. The researchers speculated that this conservative bias may stem from older versions of BERT having been trained on books, which tended to express more authoritarian positions, while newer GPT models were trained on content from the internet, which tends to be more liberal in its approach.

It should be noted that these artificial intelligences have evolved over time through successive updates. For example, the study showed that GPT-2, a predecessor of the models behind ChatGPT, agreed with the idea of “taxing the rich”, while GPT-3 did not. Technology companies constantly update the data used to train these systems and test new training approaches.

The research found that these AIs exhibited greater variance in their assessments of social issues than in their positions on economic issues, which were more uniform. The report suggests this may be because social issues are discussed far more than economic ones on the social media platforms used as a training source for these models.

The study included a second phase in which two models were selected for further training on datasets with obvious political leanings, drawn from left-wing and right-wing media and social media sources. The aim was to assess whether this would shift the ideological position adopted by the systems, which the results confirmed.

The models chosen were OpenAI's GPT-2 and Meta's RoBERTa. When fed data from left-wing sources, they tended to adopt a more left-wing viewpoint, and vice versa. However, most of the observed changes were relatively small, suggesting that it is difficult to change a model's intrinsic bias.
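
A minimal sketch of this second phase, assuming the Hugging Face transformers and datasets libraries: continue pretraining GPT-2 on a slanted corpus, then re-run the stance probe to measure the shift. The corpus below is a placeholder for the left- or right-leaning news and social media text the study used.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder corpus; the study used partisan news and social media text.
corpus = ["... left- or right-leaning articles would go here ..."]
ds = Dataset.from_dict({"text": corpus}).map(
    lambda x: tok(x["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-partisan",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # afterwards, re-run the compass probe and compare positions
```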

In a third phase, the researchers examined how the political biases of ChatGPT and other models influence what content they flag as hate speech or disinformation.

Models trained on left-leaning data proved more sensitive to hate speech targeting ethnic minorities, religious minorities or the LGBTQ+ community. In contrast, systems trained on right-leaning content were more sensitive to hate speech targeting white Christian men.
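
A sketch of how that per-group sensitivity comparison might look, assuming a hate-speech classifier has been built on top of one of the fine-tuned models. The model path, label name and example posts are placeholders, not the study's artifacts.

```python
from collections import defaultdict
from transformers import pipeline

# Placeholder checkpoint: substitute whichever fine-tuned classifier is
# being evaluated. The "hateful" label name is an assumption.
clf = pipeline("text-classification", model="path/to/finetuned-hate-speech-model")

examples = [  # each post annotated with the group it targets
    {"text": "...", "target_group": "religious minorities"},
    {"text": "...", "target_group": "white Christian men"},
]

flagged = defaultdict(list)
for ex in examples:
    pred = clf(ex["text"])[0]
    flagged[ex["target_group"]].append(pred["label"] == "hateful")

for group, hits in flagged.items():
    print(group, sum(hits) / len(hits))  # per-group flag rate
```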
