"Researchers Uncover Vulnerabilities in Open-Source AI and ML Models"

About three dozen security flaws have been discovered in various open-source Artificial Intelligence (AI) and Machine Learning (ML) models, some of which enable Remote Code Execution (RCE) and information theft. The flaws, found in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty platform. Two of the most severe flaws affect Lunary, a production toolkit for Large Language Models (LLMs). One of them could allow an authenticated user to view or delete external users, enabling unauthorized data access and potential data loss. This article continues to discuss the potential exploitation and impact of the vulnerabilities found in open-source AI and ML models.
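The Lunary issue described above is an instance of broken access control, where a route trusts a client-supplied record identifier without verifying that the caller is authorized to act on that record. The sketch below illustrates the general pattern and one way to fix it; it is hypothetical TypeScript/Express code, not Lunary's actual implementation, and the routes, field names, and header names are assumptions made for illustration.

```typescript
// Illustrative broken-access-control pattern (hypothetical; not Lunary's code).
import express from "express";

const app = express();

// In-memory stand-in for a user store.
const users = new Map<string, { id: string; ownerOrg: string; email: string }>([
  ["u1", { id: "u1", ownerOrg: "org-a", email: "alice@example.com" }],
  ["u2", { id: "u2", ownerOrg: "org-b", email: "bob@example.com" }],
]);

// Vulnerable: any authenticated caller can delete any user, because the
// handler trusts the client-supplied :id with no ownership check.
app.delete("/users/:id", (req, res) => {
  users.delete(req.params.id);
  res.sendStatus(204);
});

// Fixed: confirm the target record belongs to the caller's organization
// before acting on it. Here the caller's org is read from a header as a
// stand-in for a value derived from a verified session.
app.delete("/v2/users/:id", (req, res) => {
  const callerOrg = req.header("x-org");
  const target = users.get(req.params.id);
  if (!target || target.ownerOrg !== callerOrg) {
    // Respond 404 for both missing and foreign records, so the API does
    // not reveal which user IDs exist.
    return res.sendStatus(404);
  }
  users.delete(target.id);
  res.sendStatus(204);
});

app.listen(3000);
```

Answering 404 for both nonexistent and out-of-scope records, as in the fixed handler, also avoids leaking which identifiers are valid to an attacker enumerating IDs.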

THN reports "Researchers Uncover Vulnerabilities in Open-Source AI and ML Models"

Submitted by Gregory Rigby