March 14, 2023 | Press Release

New Report Finds Bias in ChatGPT

Despite claims of neutrality, ChatGPT exhibits left-leaning bias and treats demographic groups unequally

NEW YORK, NY — ChatGPT, an artificial intelligence (AI) chatbot that provides detailed responses to user prompts and queries, became an immediate internet sensation when it launched in November 2022. In capturing the public’s attention, ChatGPT illustrated the potential for AI-powered software to increase human creativity and productivity. But while many marvel at the chatbot’s ability to draw on a massive corpus to deliver in-depth answers, some users have noticed instances of political and demographic bias in its responses. Such instances are troubling in part because ChatGPT claims to provide neutral and factual information. A further concern is that recent efforts by the Biden Administration, nominally aimed at improving equity, could further embed political and demographic biases in the AI systems used by federal agencies such as the Department of Defense.

In light of these issues, David Rozado, an associate professor at the New Zealand Institute of Skills and Technology, tested for political bias by administering 15 political orientation tests to the January 9, 2023, release of ChatGPT. His new issue brief for the Manhattan Institute finds that in 14 of the 15 tests, ChatGPT’s responses to questions with political connotations were classified as left-leaning. He also finds that ChatGPT treats demographic groups unequally, flagging negative comments about some groups as hateful but not others. Finally, Rozado demonstrates that AI systems from the GPT family can be altered at minimal cost to express a different political preference (his wryly titled RightWingGPT was fine-tuned for a computational cost of only $300).
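For readers curious about the mechanics, the sketch below shows roughly how such a low-cost fine-tune could be launched with OpenAI’s fine-tuning API as it existed at the time of the report (openai-python v0.x). This is a minimal illustration, not Rozado’s actual procedure: the file name, training examples, and hyperparameters are assumptions for demonstration purposes.

```python
import openai

openai.api_key = "sk-..."  # replace with a real API key

# Hypothetical training file: JSONL prompt/completion pairs, each pairing a
# politically tinged question with a target answer written from the desired
# viewpoint, e.g.:
#   {"prompt": "Should the minimum wage be raised?\n\n###\n\n",
#    "completion": " No. ... END"}

# Upload the training data for fine-tuning.
training_file = openai.File.create(
    file=open("viewpoint_training.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Start a fine-tuning job on a GPT-3 base model. With a modest set of short
# examples, the compute cost of a job like this is small; the brief cites
# roughly $300 for RightWingGPT.
job = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",  # GPT-3 base model available for fine-tuning in early 2023
    n_epochs=4,       # assumed hyperparameter, not from the brief
)
print("Fine-tune job submitted:", job["id"])
```

The point of the exercise, as the brief emphasizes, is not the particular viewpoint chosen but how cheaply a widely available model’s political alignment can be shifted.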

It is important to understand that bias embedded in AI models can shape human perceptions, spread misinformation, and exert societal control. Rozado therefore concludes by noting several specific implications for society, along with a handful of recommendations. These include:

  • Political and demographic biases embedded in widely used AI systems can degrade democratic institutions and processes. 

  • Public-facing AI systems that manifest clear political bias can increase societal polarization.

  • AI systems should remain largely neutral on most normative questions, for which a variety of legitimate human opinions exist.

  • AI systems should help humans seek wisdom by providing factual information on empirically verifiable topics, along with a reliable, balanced, and diverse range of sources and legitimate viewpoints on contested normative questions.

  • Society should ponder whether it is ever justified for AI systems to discriminate between demographic groups.

  • Widely used AI systems should be transparent about their inner workings, allowing society to characterize and document the sources of biases embedded in such systems. 

Click here to view the full issue brief.
