"NIST Researchers Warn of Top AI Security Threats"

Researchers from the National Institute of Standards and Technology (NIST) warn that Artificial Intelligence (AI) systems, which rely on large amounts of data to perform their tasks, can fail when exposed to untrustworthy data. A new NIST report, part of the institute's broader effort to support the development of trustworthy AI, draws further attention to the possibility of cybercriminals poisoning AI systems by feeding them bad data. The NIST researchers also found that there is no single defense developers or cybersecurity experts can use to protect AI systems. This article continues to discuss NIST's report on AI security threats.

StateScoop reports "NIST Researchers Warn of Top AI Security Threats"

Submitted by grigby1

Submitted by Gregory Rigby on