"Preventing AI From Divulging Its Own Secrets"

A computer system's secrets can be revealed by studying its power usage patterns as it performs operations, so researchers are working to protect AI systems' power signatures from snoopers. According to researchers, the AI systems most vulnerable to these attacks are the machine learning (ML) algorithms that smart home devices and smart cars use to identify images or sounds. The specialized computer chips embedded in such devices run a class of ML algorithms known as neural networks. Because these algorithms are designed to run on chips in smart devices rather than inside a cloud computing server in a hard-to-reach location, it is easier for hackers to reverse-engineer the chip using differential power analysis. Researchers at North Carolina State University have demonstrated what they say is the first countermeasure against differential power analysis attacks targeting neural networks. The countermeasure uses an approach called masking, borrowed from cryptography research and adapted for neural network security. The masking defense can be applied to any type of computer chip that can run a neural network, such as Field-Programmable Gate Arrays (FPGAs) and Application-Specific Integrated Circuits (ASICs). This article continues to discuss differential power analysis attacks, the first countermeasure developed to protect neural networks from these attacks, and the need for continued research on such countermeasures.
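The article does not give the details of the NC State countermeasure, but the general idea of masking in side-channel defense can be illustrated with a minimal sketch: a sensitive value (such as a model weight or activation) is split into randomized shares so that no single intermediate value, and hence no single power measurement, correlates with the secret. The function names and parameters below are hypothetical and chosen only for illustration.

```python
import secrets

def mask(value: int, bits: int = 8):
    """Split an integer into two Boolean shares: value = share0 XOR share1."""
    share0 = secrets.randbits(bits)   # fresh randomness on every execution
    share1 = value ^ share0           # second share conceals the value
    return share0, share1

def unmask(share0: int, share1: int) -> int:
    """Recombine the shares to recover the original value."""
    return share0 ^ share1

if __name__ == "__main__":
    secret_weight = 0b10110101        # stand-in for a sensitive parameter
    s0, s1 = mask(secret_weight)
    # Each share on its own is uniformly random and uncorrelated with the
    # secret, which is what frustrates differential power analysis.
    assert unmask(s0, s1) == secret_weight
```

In a real hardware implementation the computation itself is carried out on the shares without ever recombining them, which is where the engineering difficulty, and the need for further research noted in the article, comes in.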

IEEE Spectrum reports "Preventing AI From Divulging Its Own Secrets"
