"Transparent Labeling of Training Data May Boost Trust in Artificial Intelligence"

According to researchers at Pennsylvania State University, showing users that the visual data fed into Artificial Intelligence (AI) systems was correctly labeled could increase people's trust in AI. The team added that the findings could pave the way for scientists to better measure the relationship between labeling credibility, AI performance, and trust. In the study, the researchers discovered that high-quality image labeling increased people's perception of the credibility of the training data and their trust in the AI system. However, when the system displayed additional signs of bias, some aspects of that trust decreased while others remained high. For AI systems to learn, they must first be trained on data that humans often label. According to S. Shyam Sundar, James P. Jimirro Professor of Media Effects at the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State, most users never see how the data is labeled, which raises questions about the accuracy and bias of those labels. Sundar explained that trusting AI systems involves trusting the AI's performance and its ability to accurately reflect reality and truth, which is only possible if the AI has been trained on a good data set. Ultimately, concerns regarding AI trust should be directed toward the training data upon which the AI is built. However, it has been difficult to convey the quality of training data to the general public. This article continues to discuss the research on boosting trust in AI through transparent labeling.

Pennsylvania State University reports "Transparent Labeling of Training Data May Boost Trust in Artificial Intelligence"
