"GenAI Requires New, Intelligent Defenses"
"GenAI Requires New, Intelligent Defenses"
Business and public use of generative Artificial Intelligence (AI) calls for further understanding of generative AI risks and the specific defenses to mitigate those risks. Jailbreaking and prompt injection are two emerging threats to generative AI. Jailbreaking uses specific prompts to trick the AI into producing harmful or misleading results. Similar to SQL injection in databases, prompt injection hides malicious data or instructions within typical prompts, causing the model to produce unintended outputs and resulting in vulnerabilities or reputational risks.