"Put Guardrails Around AI Use to Protect Your Org, but Be Open to Changes"

Security professionals should view Artificial Intelligence (AI) much as they would any other significant technological advancement. It has the potential to do immeasurable good in the right hands, but there will always be someone who wants to use it to harm others. For example, ChatGPT and other generative AI tools are being used to help scammers craft convincing phishing emails, but it is the lesser-known uses that should worry CISOs. Large Language Models (LLMs) such as OpenAI's ChatGPT, Meta's LLaMA, and Google's PaLM 2 are among the most common and accessible AI tools. In the wrong hands, LLMs can be manipulated into giving users bad advice, encouraging them to expose sensitive information, writing vulnerable code, or leaking passwords. This article continues to discuss LLMs as a new attack surface and steps for reducing the associated risks.
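
One concrete guardrail implied by this kind of coverage is stopping sensitive data from leaving the organization inside LLM prompts in the first place. Below is a minimal illustrative sketch of such a filter, assuming a simple regex-based redaction step applied before a prompt is forwarded to any LLM; the pattern names and the `redact_secrets` helper are hypothetical, not from the article, and a real deployment would rely on maintained secret-detection tooling and org-specific rules.

```python
import re

# Hypothetical patterns for common secret formats; illustrative only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "password_field": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def redact_secrets(prompt: str) -> tuple[str, list[str]]:
    """Mask likely secrets in a prompt and report which rules fired."""
    fired = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(prompt):
            fired.append(name)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, fired

if __name__ == "__main__":
    raw = "Debug this: password = hunter2 and key AKIAABCDEFGHIJKLMNOP"
    safe, hits = redact_secrets(raw)
    print(safe)  # secrets masked before the prompt ever leaves the org
    print(hits)  # ['aws_access_key', 'password_field']
```

A filter like this would sit at a proxy or gateway between employees and external AI services, letting the organization log and block risky prompts without banning AI use outright, which matches the article's theme of guardrails rather than prohibition.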

Help Net Security reports "Put Guardrails Around AI Use to Protect Your Org, but Be Open to Changes"

Submitted by grigby1 CPVI