"Skyhawk Security Ranks Accuracy of LLM Cyberthreat Predictions"
The cloud security vendor Skyhawk has introduced a new benchmark for evaluating generative artificial intelligence (AI) large language models' (LLMs') ability to identify and score cybersecurity threats within cloud logs and telemetry. According to the company, the free resource analyzes how accurately ChatGPT, Google Bard, Anthropic Claude, and other Llama 2-based open LLMs predict the maliciousness of an attack sequence. From a risk perspective, generative AI chatbots and LLMs can be a double-edged sword, but when used properly, they can significantly improve an organization's cybersecurity. One benefit is their potential to identify and dissect possible security threats faster and at greater volume than human security analysts. A Cloud Security Alliance (CSA) report exploring the cybersecurity implications of LLMs suggests that generative AI models can be used to improve the scanning and filtering of security vulnerabilities. In the paper, CSA demonstrated that OpenAI's Codex Application Programming Interface (API) is an effective vulnerability scanner for programming languages, including C, C#, Java, and JavaScript. This article continues to discuss the generative AI benchmark that evaluates the ability of LLMs to identify and score cybersecurity threats within cloud logs and telemetry.
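To make the idea concrete, a benchmark of this kind can be reduced to comparing each model's maliciousness scores against analyst-labeled ground truth. The sketch below is purely illustrative and is not Skyhawk's actual methodology; the record format, the 0-100 score scale, and the decision threshold are all assumptions.

```python
# Hypothetical sketch (not Skyhawk's benchmark): measure how accurately
# an LLM's maliciousness scores for attack sequences match analyst labels.
# Each record pairs a ground-truth label with the model's 0-100 score.

def accuracy(records, threshold=50):
    """Fraction of sequences where the thresholded score matches the label."""
    correct = sum(
        1 for label, score in records
        if (score >= threshold) == (label == "malicious")
    )
    return correct / len(records)

# Example: three labeled attack sequences and one model's scores.
records = [
    ("malicious", 87),   # model flags it -> correct
    ("benign", 12),      # model clears it -> correct
    ("malicious", 34),   # model under-scores it -> miss
]
print(accuracy(records))  # 2 of 3 sequences scored correctly
```

Running the same labeled sequences through several models and comparing the resulting accuracies is, in outline, how one LLM's threat predictions could be ranked against another's.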
CSO Online reports "Skyhawk Security Ranks Accuracy of LLM Cyberthreat Predictions"