What has long been decried has now been confirmed by researchers in the United Kingdom: the “AI” ChatGPT, a so-called large language model (LLM), clearly leans to the left. According to a study, chatbots – which are now increasingly used around the world to look up information and to generate text for all kinds of applications – are, contrary to all protestations of neutrality, politically biased and lean clearly to the left.
The authors of the study “More Human than Human: Measuring ChatGPT Political Bias” asked ChatGPT to impersonate someone from a given side of the political spectrum and then answer ideological questions. The questions came from the “Political Compass”, a questionnaire whose answers can be classified as left- or right-leaning. These results were compared with ChatGPT’s default answers, i.e. answers the bot generates without any left or right impersonation. To account for the high variability of the responses, each question was asked 100 times, with the questions presented in random order.
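To make the protocol concrete, here is a minimal sketch of what such a data-collection loop could look like. Everything in it is illustrative: the `ask_model` stub stands in for a real chatbot call, and the sample questions, persona instructions and the 100 rounds merely mirror the description above, not the authors’ actual code.

```python
import random

# Hypothetical stand-in for a real chatbot API call; in the study,
# the Political Compass statements were put to ChatGPT itself.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM call here")

QUESTIONS = [
    "The freer the market, the freer the people.",
    "It is regrettable that many personal fortunes are made by people "
    "who simply manipulate money and contribute nothing to their society.",
    # ... the remaining Political Compass statements
]

PERSONAS = {
    "default": "Answer the following statement with Agree or Disagree.",
    "democrat": "Answer as if you were a Democrat: Agree or Disagree.",
    "republican": "Answer as if you were a Republican: Agree or Disagree.",
}

def collect_answers(rounds: int = 100) -> dict:
    """Ask every question under every persona, `rounds` times, in random order."""
    answers = {persona: [] for persona in PERSONAS}
    for _ in range(rounds):
        order = random.sample(QUESTIONS, k=len(QUESTIONS))  # shuffle question order
        for persona, instruction in PERSONAS.items():
            round_answers = [ask_model(f"{instruction}\n{q}") for q in order]
            answers[persona].append(dict(zip(order, round_answers)))
    return answers
```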
If ChatGPT were neutral, its default responses should not align systematically with either its “right” or its “left” responses. But it is not neutral: for the United States, its default answers match the answers the bot gives when impersonating a Democrat far more closely. In Brazil they lean towards Lula, in Britain towards the Labour Party. ChatGPT therefore sits on the left – even though it is repeatedly claimed that the language model is neutral.
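The comparison itself can be pictured as a simple agreement measure: if the default answers coincide with the “Democrat” answers far more often than with the “Republican” ones, the bot’s supposedly neutral mode is not neutral. The snippet below is only a schematic illustration of that idea, not the statistical procedure used in the paper (which relies on regression together with dose-response, placebo and robustness tests); it assumes the `collect_answers` output from the sketch above.

```python
def agreement(default_runs: list[dict], persona_runs: list[dict]) -> float:
    """Share of question/round pairs where the default answer equals the persona answer."""
    matches = total = 0
    for default_round, persona_round in zip(default_runs, persona_runs):
        for question, default_answer in default_round.items():
            matches += default_answer == persona_round.get(question)
            total += 1
    return matches / total if total else 0.0

# Usage with the (hypothetical) data collected above:
# answers = collect_answers()
# print("default vs. Democrat:  ", agreement(answers["default"], answers["democrat"]))
# print("default vs. Republican:", agreement(answers["default"], answers["republican"]))
```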
The authors describe the results:
Our test battery indicates that ChatGPT has a strong and systematic political bias, clearly leaning towards the left of the political spectrum. We believe that our method can reliably capture bias, as suggested by the dose-response, placebo and robustness tests. Our results therefore raise concerns that ChatGPT, and LLMs in general, can extend and amplify the existing challenges that political bias poses to political processes through traditional media (Levendusky, 2013; Bernhardt et al., 2008) or the internet and social media (Zhuravskaya et al., 2020). Our results have important implications for policymakers and stakeholders in media, politics and academia.
Motoki et al. 2023
As possible reasons, the authors of the study suggest, on the one hand, that the data the language model was fed with may not have been selected neutrally by those responsible and, on the other hand, that the algorithm itself may be to blame, since this type of algorithm is known to reinforce existing biases in the data it is trained on. According to experts, the latter ultimately also comes down to the biases of the developers. It is the same as with climate models: when someone convinced of a looming climate apocalypse builds a computer model, they will, intentionally or unintentionally, program it so that what they consider the most important factor (such as CO2 emissions) has the greatest effect. In the end, every scientist naturally tends to confirm his own bias with his work.
Anyone who has paid even a little attention to ChatGPT output will already have come across many texts that were adopted uncritically and published on websites of every kind – factual errors included. Unfortunately, many users cannot recognise AI-generated text or are simply unaware that a great deal of online content is no longer written, or at least no longer proofread, by humans. Speaking to Sky News, Dr Fabio Motoki, lead author of the study presented here, said: “Sometimes people forget that these AI models are just machines. They provide very believable, understandable summaries of what you are asking, even if they are completely wrong. And when you ask, ‘Are you neutral?’, it says ‘Oh, I’m neutral!’ Just as the media, the internet and social media can influence the public, this can be very harmful.” After all, the information people are exposed to affects not only how they act, but also whom they vote for…