"Large Language Models Validate Misinformation, Research Finds"

A new study by researchers at the University of Waterloo reveals that Large Language Models (LLMs) repeat conspiracy theories, harmful stereotypes, and other forms of misinformation. The researchers tested an early version of ChatGPT's understanding of statements across categories including facts, conspiracies, controversies, and misconceptions. The work is part of the team's broader effort to explore human-technology interactions and determine how to mitigate the associated risks. They found that GPT-3 frequently made mistakes, contradicted itself, and repeated harmful misinformation. This article continues to discuss the study titled "Reliability Check: An Analysis of GPT-3's Response to Sensitive Topics and Prompt Wording."

The University of Waterloo reports "Large Language Models Validate Misinformation, Research Finds"
