"Beware – Your Customer Chatbot is Almost Certainly Insecure: Report"

Customer chatbots built on general-purpose generative Artificial Intelligence (AI) engines are easy to develop but difficult to secure. In January 2024, Ashley Beauchamp tricked DPD's customer service chatbot into swearing, criticizing DPD's service, and writing a disparaging haiku about its owner. DPD shut down the chatbot and blamed the behavior on an error, but others were skeptical because the output resembled 'jailbreaking,' or bypassing an AI's guardrails through prompt engineering. This article continues to discuss the insecurity of AI chatbots.

SecurityWeek reports "Beware – Your Customer Chatbot is Almost Certainly Insecure: Report"

Submitted by grigby1 CPVI