"With AI RMF, NIST Addresses Artificial Intelligence Risks"

Business and government organizations are rapidly adopting artificial intelligence (AI) applications for purposes ranging from automating activities, reshaping shopping recommendations, and approving credit to image processing, predictive policing, and much more. But like any other technology, AI can suffer from a range of traditional security weaknesses as well as emerging concerns such as privacy, bias, inequality, and safety issues. The National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF), to better manage the risks associated with AI. The goal of the AI RMF is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The initial draft of the framework builds on a concept paper released by NIST in December 2021. NIST hopes the AI RMF will describe how the risks from AI-based systems differ from those in other domains, and will encourage and equip the many different stakeholders in AI to address those risks purposefully. NIST stated that the AI RMF can be used to map compliance considerations beyond those addressed in the concept paper, including existing regulations, laws, and other mandatory guidance. Although AI is subject to the same risks covered by other NIST frameworks, some risk “gaps” or concerns are unique to AI, and those gaps are what the AI RMF aims to address.

CSO Online reports: "With AI RMF, NIST Addresses Artificial Intelligence Risks"
