"NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems"

The National Institute of Standards and Technology (NIST) has released a new draft publication that examines how humans decide whether or not to trust recommendations made by an Artificial Intelligence (AI) system. The report is part of NIST's broader effort to improve the trustworthiness of AI systems. This latest publication focuses on how humans experience trust as they use or are impacted by AI systems. According to Brian Stanton, one of the publication's authors, the central question is whether human trust in AI systems can be measured, and if so, how it can be measured accurately and appropriately. Many factors contribute to a person's decision to trust an AI system, including how one thinks and feels about the system and the perceived risks of using it. The NIST publication proposes a list of nine factors that contribute to a user's potential trust in an AI system: security, privacy, accuracy, reliability, resiliency, objectivity, safety, accountability, and explainability. These factors are distinct from the technical requirements of trustworthy AI that NIST is establishing with the community of AI developers and practitioners. The document explores how a person may weigh the nine factors differently depending on the task at hand and the risk involved in trusting the AI's decision, as illustrated in the sketch below. This article continues to discuss the NISTIR 8332 document, NIST's proposed method for evaluating human trust in AI systems, and the importance of enhancing the trustworthiness of such systems.

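To make the idea of context-dependent weighting concrete, the sketch below assumes a hypothetical user who scores a system on the nine factors and then weights those scores differently for a low-risk task (music recommendation) versus a high-risk one (medical diagnosis). The weighted-average formula, the factor scores, and the weights are all illustrative assumptions, not the evaluation method defined in NISTIR 8332.

```python
# Illustrative sketch only -- not NIST's scoring method. It assumes a simple
# weighted average over the nine factors named in the draft, with hypothetical
# scores and weights chosen to show how task risk can change perceived trust.

FACTORS = [
    "security", "privacy", "accuracy", "reliability", "resiliency",
    "objectivity", "safety", "accountability", "explainability",
]

def perceived_trust(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-factor scores (0-1) using user-assigned weights that sum to 1."""
    return sum(scores[f] * weights[f] for f in FACTORS)

# Hypothetical per-factor scores for one AI system (0 = no confidence, 1 = full).
scores = {
    "security": 0.9, "privacy": 0.9, "accuracy": 0.6, "reliability": 0.8,
    "resiliency": 0.8, "objectivity": 0.7, "safety": 0.5, "accountability": 0.8,
    "explainability": 0.9,
}

# Low-risk task: the user weights all nine factors evenly.
music_weights = {f: 1 / len(FACTORS) for f in FACTORS}

# High-risk task: the user emphasizes accuracy and safety, then renormalizes.
medical_weights = dict(music_weights, accuracy=0.3, safety=0.3)
total = sum(medical_weights.values())
medical_weights = {f: w / total for f, w in medical_weights.items()}

print(f"Music recommendation trust: {perceived_trust(scores, music_weights):.2f}")
print(f"Medical diagnosis trust:    {perceived_trust(scores, medical_weights):.2f}")
```

With these assumed numbers, the same system earns lower trust for the medical task because the factors the user weights most heavily there (accuracy and safety) are the ones scored lowest.
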
HS Today reports "NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems"
