"Rutgers Researchers Find Flaws in Using Source Reputation for Training Automatic Misinformation Detection Algorithms"

Researchers from Rutgers University have identified a significant flaw in how algorithms designed to detect "fake news" assess the credibility of online news stories. According to the researchers, most of these algorithms rely on a credibility score for the article's source rather than assessing the credibility of each individual article. Vivek K. Singh, an associate professor at the Rutgers School of Communication and Information and coauthor of the study "Misinformation Detection Algorithms and Fairness Across Political Ideologies: The Impact of Article Level Labeling," noted that not all articles published by "credible" sources are accurate, nor are all articles published by "noncredible" sources "fake news." Article-level labels matched the corresponding source-level labels only 51 percent of the time, leading the researchers to conclude that source-level labels are an unreliable proxy for the credibility of individual articles. This labeling procedure has significant implications for tasks such as building robust fake news detectors and auditing fairness across the political spectrum. To address the issue, the study provides a new dataset of articles individually labeled for journalistic quality, along with a method for misinformation detection and fairness audits. The findings underscore the need for more nuanced and trustworthy methods of detecting misinformation in online news and provide valuable resources for future research. This article continues to discuss the flaws discovered in using source reputation for training automatic misinformation detection algorithms.
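To illustrate the agreement measurement at the heart of the finding, the following minimal sketch compares labels inherited from a source's reputation against ground-truth article-level labels. The data and the naive "trust the source" rule here are purely hypothetical, not taken from the study; the 51 percent figure reported by the researchers comes from their own dataset.

```python
# Hypothetical sketch: measuring how often source-level labels agree
# with article-level labels. All data below is illustrative only.
articles = [
    # (source_reputation, article_level_label)
    ("credible", "accurate"),
    ("credible", "misinformation"),    # credible source, inaccurate article
    ("noncredible", "accurate"),       # noncredible source, accurate article
    ("noncredible", "misinformation"),
]

def label_from_source(source_reputation):
    # Naive rule used by many detectors: every article from a
    # "credible" source is treated as accurate, and vice versa.
    return "accurate" if source_reputation == "credible" else "misinformation"

matches = sum(
    label_from_source(src) == truth for src, truth in articles
)
agreement = matches / len(articles)
print(f"Source-level labels match article-level labels {agreement:.0%} of the time")
```

On this toy data the naive rule agrees with the article-level ground truth only half the time, which is the kind of mismatch the researchers quantified at scale.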

Rutgers University reports "Rutgers Researchers Find Flaws in Using Source Reputation for Training Automatic Misinformation Detection Algorithms"
