"Researchers Demonstrate That Malware Can Be Concealed inside AI Models"
Researchers Zhi Wang, Chaoge Liu, and Xiang Cui recently released a paper showing that malware can be hidden inside Artificial Intelligence (AI) neural networks to slip it past automated detection tools. The three researchers embedded 36.9 mebibytes (MiB) of malware into the neural network behind an AI image-classification system called AlexNet. The malware-embedded model classified images with accuracy within 1 percent of the malware-free model. They found that hiding the malware inside the AI model fragmented it in ways that prevented standard antivirus engines from detecting it. VirusTotal, a service that examines submitted items with more than 70 antivirus scanners and URL/domain blocklisting services, along with a variety of tools for extracting signals from the content it studies, raised no suspicions about the malware-embedded model. The researchers' method involves choosing the best layer to work with in a model that has already been trained and then embedding the malware into that layer. If the accuracy of a malware-embedded model is inadequate, the attacker can instead start with an untrained model, add extra neurons, and then train the model on the same data set used to train the original model. This approach produces a larger model with equivalent accuracy and more room in which to hide the malicious payload. This article continues to discuss the researchers' demonstrated use of an AI neural network to hide malware.
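The paper's own tooling is not reproduced in the article, but the core trick, steganographically stashing payload bytes in the low-order bytes of a layer's float32 weights, can be sketched in a few lines of Python with NumPy. Everything below is an illustrative assumption rather than the authors' exact code: the function names, the little-endian byte layout, and the choice to overwrite three of each weight's four bytes (preserving the sign and high exponent bits) are stand-ins for the technique the paper describes.

    import numpy as np

    def embed_payload(weights, payload):
        """Hide payload bytes in the low-order bytes of float32 weights.

        Sketch of the technique, not the paper's code: overwrites the three
        least significant bytes of each little-endian float32 parameter,
        keeping byte 3 (sign bit plus high exponent bits) so each weight's
        magnitude stays in roughly the same range.
        """
        flat = np.ascontiguousarray(weights, dtype=np.float32).ravel().copy()
        raw = flat.view(np.uint8).reshape(-1, 4)   # 4 raw bytes per float32
        if len(payload) > raw.shape[0] * 3:        # 3 spare bytes per weight
            raise ValueError("payload exceeds the layer's hiding capacity")
        pad = (-len(payload)) % 3                  # pad to a whole number of weights
        buf = np.frombuffer(payload + b"\x00" * pad, dtype=np.uint8)
        raw[: len(buf) // 3, :3] = buf.reshape(-1, 3)
        return flat.reshape(np.shape(weights))

    def extract_payload(weights, length):
        """Recover `length` bytes previously embedded by embed_payload."""
        raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8)
        rows = -(-length // 3)                     # ceil(length / 3) weights used
        return raw.reshape(-1, 4)[:rows, :3].tobytes()[:length]

    # Round-trip demo on a stand-in fully connected layer (hypothetical shape;
    # a benign placeholder string, not actual malware).
    layer = np.random.randn(256, 256).astype(np.float32)
    secret = b"placeholder payload bytes"
    stego = embed_payload(layer, secret)
    assert extract_payload(stego, len(secret)) == secret

Under a scheme like this, a layer with n float32 weights offers 3n bytes of hiding space, which is why the researchers' suggestion of adding extra neurons directly enlarges payload capacity; the network's redundancy absorbs the per-weight perturbation, consistent with the paper's finding that accuracy stayed within 1 percent of the clean model.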
Ars Technica reports "Researchers Demonstrate That Malware Can Be Concealed inside AI Models".