"Put Guardrails Around AI Use to Protect Your Org, but Be Open to Changes"
Security professionals should view artificial intelligence (AI) the way they would any other significant technological advance: it can do immeasurable good in the right hands, but there will always be someone who wants to use it to harm others. For example, ChatGPT and other generative AI tools are already helping scammers craft convincing phishing emails, yet it is the lesser-known uses that should worry CISOs. Large language models (LLMs) such as OpenAI's ChatGPT, Meta's LLaMA, and Google's PaLM 2 are among the most common and accessible AI tools.