"NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence"

The US National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF 1.0). This guidance document aims to help organizations that design, develop, deploy, or use AI systems manage the risks of those systems, including privacy and cybersecurity risks. NIST developed the AI RMF in close collaboration with the private and public sectors, in response to a direction from Congress to develop the framework. It is designed to adapt to the AI landscape as technologies continue to advance and to be used by organizations in varying degrees and capacities, so that society can benefit from AI technologies while also being protected from their potential risks. The AI RMF is split into two parts: the first explains how organizations should frame AI-related risks and describes the characteristics of trustworthy AI systems, while the second describes four functions (Govern, Map, Measure, and Manage) that can help organizations address AI-related risks. This article continues to discuss the AI RMF 1.0, which is aimed at building trust in AI technologies and promoting AI innovation while mitigating risk.

NIST reports "NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence"