"Malicious AI Models on Hugging Face Backdoor Users' Machines"

At least 100 malicious Artificial Intelligence (AI)/Machine Learning (ML) models were discovered on the Hugging Face platform, some of which are capable of executing code on a victim's machine, giving attackers a persistent backdoor. Hugging Face is a technology company specializing in AI, Natural Language Processing (NLP), and ML. It offers a platform where communities can collaborate on and share models, datasets, and complete applications. JFrog's security team found that about 100 models hosted on the platform contained malicious functionality, potentially leading to data breaches and espionage attacks. These malicious models evaded the security measures Hugging Face has in place, such as malware scanning and model functionality analysis. This article continues to discuss the discovery and potential impact of malicious AI models on Hugging Face.

Bleeping Computer reports "Malicious AI Models on Hugging Face Backdoor Users' Machines"

Submitted by grigby1