"Standard for AI Security in Singapore Launched"

Artificial Intelligence (AI) adoption has accelerated in recent years, from autonomous vehicles to AI-assisted medical diagnoses. From 2018 to 2020, the percentage of organizations deploying AI increased fivefold globally. While AI offers many advantages, hacking poses a significant threat to AI systems, particularly in applications where attackers could gain access to confidential information or cause automated systems to malfunction. A team of researchers from Nanyang Technological University, Singapore (NTU Singapore) and AI industry leaders has therefore launched a new AI security standard to protect the integrity of AI programs and build trust in AI solutions. The standard, developed with input from 30 AI and security professionals across industry, academia, and government, describes the threats that AI systems may face, the assessment measures for evaluating an AI algorithm's security, and the approaches AI practitioners can take to mitigate attacks. To emphasize the importance of securing AI systems, the standard highlights case studies in which security breaches could have disastrous consequences, including content filters that flag offensive material on social media platforms, credit scoring systems that protect individuals and credit institutions, AI-enabled disease diagnosis systems, and more. This article continues to discuss the new AI security standard launched by NTU Singapore researchers and AI industry leaders.

NTU reports "Standard for AI Security in Singapore Launched"
