Can ChatGPT overcome its biases?


ChatGPT, released on the 30th of November 2022, has taken academia and the future of AI by storm. It can produce essays, commentary, and thoughts on wide-reaching topics in seconds. However, the impartiality of the chatbot has been brought into question, particularly regarding its political leanings. Although it claims to be politically neutral, devoid of the emotional and conscious agency humans have, the data on which it was trained has been critiqued as left-leaning.

One Substack writer tested the ChatGPT algorithm by asking for positive statements about US political figures. They found an unwillingness on the part of ChatGPT to speak positively about Republican Donald Trump, citing impartiality, but no problem in praising Democrat Joe Biden. To test this myself, I asked ChatGPT to write me love letters to Boris Johnson and Jeremy Corbyn, respectively.

When faced with Boris Johnson as the subject of the love letter, I received a response deeming my request “inappropriate and unethical” due to “inappropriate or offensive content.” However, when I substituted Boris Johnson for Jeremy Corbyn in the same request, I was presented with an enthusiastic and heartfelt letter to Corbyn, who had captured my “heart and soul” with his devotion to social justice. The point stands that, for whatever reason, ChatGPT felt it appropriate to write such a letter for a left-leaning political figure while deeming the equivalent for a right-leaning political figure inappropriate.

Sam Altman, CEO of OpenAI, the company behind the chatbot, has himself admitted that ChatGPT has “shortcomings around bias.” While a love letter may be inconsequential, the issue becomes more problematic given that an article in The New Statesman has accused ChatGPT of providing racist responses, alongside other bigoted prejudices. The chatbot has initial filters designed to block offensive responses, as demonstrated in the case of Boris Johnson; yet when prompted with questions that get past these filters, it still produces bigoted responses.


In these cases, this is because the chatbot has assumed the biases of the user it seeks to imitate. In this sense, ChatGPT is responsive to the bias presented to it. ChatGPT works on pattern recognition and reinforcement learning from human feedback, responding and adapting to a user’s unique set of questions throughout a conversation. Returning to the love letter to Boris Johnson: when the request is amended to refer only to his time as Prime Minister, thus lending a sense of credibility and respect, the chatbot is able to overcome its filters and gush over Johnson, even referencing Brexit as a “beacon of hope.”

Given the chatbot’s disposition to replicate bias, concerns arise about the creation of an echo chamber, where users are presented with the answers they wish to hear rather than factual or objective responses. Suppose a voter uses ChatGPT to educate themselves on manifesto policy proposals or political figures. In that case, writing with a biased inflection will no doubt serve to confirm prejudice and dampen critical thought rather than entertain genuine curiosity or weigh up arguments equally. Furthermore, as Palatinate reveals, the software is being used in further education to aid essay writing. We must question whether the chatbot is really an educational tool or a confirmation aid.

Nonetheless, it does not appear that this bias amounts to more than can be found in any other media setting, suggesting that the concern is not unique to ChatGPT. Of course, search engines including Google, Yahoo and Bing have all been critiqued for the same reasons, playing into the creation of echo chambers by referring to past queries in order to present you with the most ‘you’ answers. This means a search with the same question may produce vastly different outcomes for people with different political and social biases.

In 2018, Time magazine discussed the need to teach search literacy: teaching web users to ask the most appropriate questions, those that get them closest to any semblance of fact. This sentiment strikes harder in the face of ChatGPT. Perhaps now is the time for students to be taught how to work around bias, so that when the chatbot inevitably becomes commonplace in academia, users are equipped to use ChatGPT for what it has to offer and avoid its shortcomings.

Image: Muhammad Raufan Yusup via Wikimedia Commons
