"LLM Attacks Take Just 42 Seconds on Average, 20% of Jailbreaks Succeed"
According to Pillar Security's "State of Attacks on GenAI" report, attacks on Large Language Models (LLMs) take an average of 42 seconds to complete, and successful LLM attacks result in sensitive data leakage 90 percent of the time. The report shared new insights into LLM attacks and jailbreaks, based on telemetry data and real-world attack examples from over 2,000 AI applications. Pillar Security researchers also found that LLM jailbreaks bypass model guardrails in one out of every five attempts, highlighting the risks posed by the growing use and advancement of generative Artificial Intelligence (GenAI). This article continues to discuss key findings from Pillar's State of Attacks on GenAI report.
SC Media reports "LLM Attacks Take Just 42 Seconds on Average, 20% of Jailbreaks Succeed"
Submitted by grigby1