"Microsoft's 'Security Copilot' Unleashes ChatGPT on Breaches"

Artificial Intelligence (AI) within the cybersecurity industry promises tools that can detect unusual network activity, quickly determine what is occurring, and guide incident response in the event of an attack. To date, however, the most credible and valuable services have been Machine Learning (ML) algorithms trained to identify malware and other suspicious network activity. With the proliferation of generative AI tools, Microsoft has created a service aimed at defenders. Security Copilot builds on Microsoft's partnership with OpenAI and on its own work with Large Language Models (LLMs). It integrates system data and network monitoring from security solutions such as Microsoft Sentinel and Defender, as well as from third-party services. Security Copilot delivers alerts, outlines in both text and charts what may be happening within a network, and provides investigation advice. As a human analyst works with Security Copilot to map out a potential security incident, the platform keeps a history and creates summaries so that colleagues added to the project can quickly get up to speed on what has been done so far. The tool can also generate slides and other presentation materials about an investigation to help security teams communicate the facts of a problem to people outside their department. Security Copilot is driven in large part by OpenAI's GPT-4, although Microsoft notes that it also incorporates a proprietary Microsoft-specific security model. This article continues to discuss Microsoft's Security Copilot tool, which aims to deliver the network insights and coordination that AI security systems have long promised.
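The article does not describe Security Copilot's internals. As a rough illustration of the general pattern it describes, feeding alert data from monitoring tools to GPT-4 and asking for a summary and investigation advice, the sketch below uses the public OpenAI Python SDK with invented alert records. The alert fields, prompt wording, and workflow here are assumptions made for illustration only; they are not Microsoft's implementation or API.

```python
# Illustrative sketch only: NOT Security Copilot's implementation.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable; the alert data is invented.
from openai import OpenAI

client = OpenAI()

# Hypothetical alerts standing in for telemetry a SIEM such as Microsoft
# Sentinel or Defender might surface; field names are made up.
alerts = [
    {"time": "2023-03-28T09:14:00Z", "source": "10.0.0.5",
     "event": "Multiple failed sign-ins followed by a successful sign-in"},
    {"time": "2023-03-28T09:21:00Z", "source": "10.0.0.5",
     "event": "Unusual outbound transfer of 4 GB to an unknown IP"},
]

# Ask the model to summarize what may be happening and suggest next steps,
# mirroring the kind of analyst-facing output the article describes.
prompt = (
    "You are assisting a security analyst. Summarize what may be happening "
    "in this network based on the alerts below, and suggest next "
    "investigation steps:\n"
    + "\n".join(f"- {a['time']} {a['source']}: {a['event']}" for a in alerts)
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The actual product, per the article, goes further than a bare LLM call: it grounds responses in live data from Sentinel, Defender, and third-party tools, layers in a Microsoft-specific security model, and keeps a shared investigation history for the team.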

Wired reports "Microsoft's 'Security Copilot' Unleashes ChatGPT on Breaches"
