"LLM Attacks Take Just 42 Seconds on Average, 20% of Jailbreaks Succeed"
"LLM Attacks Take Just 42 Seconds on Average, 20% of Jailbreaks Succeed"
According to Pillar Security's "State of Attacks on GenAI" report, attacks on Large Language Models (LLMs) take an average of 42 seconds to complete, and successful LLM attacks result in sensitive data leakage 90 percent of the time. The report, based on telemetry data and real-world attack examples from more than 2,000 AI applications, shares new insights into LLM attacks and jailbreaks.