"Hundreds of LLM Servers Expose Corporate, Health & Other Online Data"
Hundreds of open source Large Language Model (LLM) builder servers and dozens of vector databases are leaking sensitive data to the web. Companies are rushing to integrate Artificial Intelligence (AI) into their business workflows, but not enough attention is paid to securing these tools and the information they handle. Naphtali Deutsch, a researcher at Legit Security, scanned the web for two types of potentially vulnerable open source AI services: vector databases, which store data for AI tools, and LLM application builders, such as Flowise. The analysis found sensitive personal and corporate data exposed by companies trying to adopt generative AI. This article continues to discuss the vulnerability of LLM automation tools and vector databases to data theft.
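The report does not describe the researcher's tooling, but the core finding, services that answer HTTP requests without any credentials, can be illustrated with a minimal sketch. The hosts, ports, and endpoint paths below are illustrative assumptions (a commonly used Flowise port and a vector database metadata path), not details from the original scan.

```python
"""
Minimal sketch of the kind of exposure check described in the article:
probing candidate hosts for AI services that answer HTTP requests
without authentication. Host names, ports, and paths are illustrative
assumptions, not endpoints taken from the original report.
"""
import requests

# Hypothetical candidate hosts; the actual research used broad internet scanning.
CANDIDATE_HOSTS = ["203.0.113.10", "203.0.113.25"]

# Assumed defaults: Flowise commonly listens on port 3000, and Weaviate
# (one example of a vector database) on port 8080; adjust as needed.
PROBES = [
    ("flowise", 3000, "/api/v1/chatflows"),   # assumed Flowise REST path
    ("weaviate", 8080, "/v1/meta"),           # assumed Weaviate metadata path
]


def check_host(host: str) -> None:
    """Report endpoints that return data with no credentials supplied."""
    for name, port, path in PROBES:
        url = f"http://{host}:{port}{path}"
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # unreachable or filtered; not exposed via this probe
        if resp.status_code == 200:
            print(f"[possible exposure] {name} at {url} answered without authentication")


if __name__ == "__main__":
    for h in CANDIDATE_HOSTS:
        check_host(h)
```

A response of 200 with no authentication is only a first signal; the exposures described in the report were confirmed by the data the services returned, such as stored documents and application configurations.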
Dark Reading reports "Hundreds of LLM Servers Expose Corporate, Health & Other Online Data"
Submitted by grigby1