"Flawed AI Tools Create Worries for Private LLMs, Chatbots"

According to experts, companies that use private instances of Large Language Models (LLMs) to make business data searchable through a conversational interface risk data poisoning and leakage if they do not harden their platforms. For example, Synopsys recently disclosed a Cross-Site Request Forgery (CSRF) flaw impacting applications built on SamurAI's EmbedAI component. Attackers could trick users into uploading poisoned data into their LLM. Mohammed Alshehri, the Synopsys security researcher who discovered the vulnerability, says the open source component's lack of a secure cross-origin policy and session management could allow an attacker to affect a private LLM instance or chatbot. This article continues to discuss the security risks facing private LLMs and chatbots.
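As a hedged illustration only, the sketch below shows the general kind of hardening the researcher describes: requiring an authenticated session and a CSRF token on a data-upload endpoint, and restricting cross-origin requests to a known front end. This is not EmbedAI's actual code; the framework choice (Flask), the endpoint name, and the allowed origin are assumptions for demonstration.

```python
# Minimal sketch of CSRF and cross-origin hardening for an LLM data-upload
# endpoint. Illustrative only; not EmbedAI's implementation.
from flask import Flask, request, session, abort
from flask_wtf.csrf import CSRFProtect
from flask_cors import CORS
import secrets

app = Flask(__name__)
app.secret_key = secrets.token_hex(32)   # needed for signed session cookies

# Reject state-changing requests that lack a valid CSRF token.
csrf = CSRFProtect(app)

# Allow cross-origin requests only from the chatbot's own front end
# (hypothetical origin), instead of a permissive wildcard policy.
CORS(app, origins=["https://chatbot.example.internal"], supports_credentials=True)

@app.route("/upload", methods=["POST"])
def upload_document():
    # Require an authenticated session before accepting reference data,
    # so a forged cross-site request from an anonymous context is rejected.
    if "user_id" not in session:
        abort(401)

    file = request.files.get("document")
    if file is None:
        abort(400)

    # ... validate and store the document for the private LLM's index ...
    return {"status": "accepted", "filename": file.filename}
```

The design point is simply that the upload path that feeds a private LLM's knowledge base should be treated as a sensitive, state-changing operation: tied to a session, protected by an anti-CSRF token, and reachable only from trusted origins.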

Dark Reading reports "Flawed AI Tools Create Worries for Private LLMs, Chatbots"

Submitted by grigby1