VI Reflections: False Positives and the Use of AI to Reduce Them
By grigby1
Cisco recently warned that nation-state backed hacking teams are exploiting at least two zero-day vulnerabilities in its ASA firewall platforms to plant malware on telecommunications and energy sector networks. According to Cisco Talos, the attackers are taking aim at software defects in certain devices running Cisco Adaptive Security Appliance (ASA) or Cisco Firepower Threat Defense (FTD) products to implant malware, execute commands, and potentially exfiltrate data from compromised devices.
According to a new Netacea report, most businesses have expressed concern regarding Artificial Intelligence (AI)-enabled cyber threats, with 93 percent of security leaders expecting daily AI-driven attacks by the end of 2024. About 65 percent expect offensive AI to become the norm for cybercriminals. Ransomware is the threat vector most likely to be powered by AI, according to 48 percent of the Chief Information Security Officers (CISOs) surveyed, followed by phishing, malware, bot attacks, and data exfiltration.
Hackers have been abusing file uploads in unpublished GitHub and GitLab comments to generate phishing links that appear to come from legitimate Open Source Software (OSS) projects. The trick enables anyone to impersonate any repository without the owner knowing. According to McAfee, hackers have already used this method to spread the Redline Stealer Trojan through links associated with Microsoft's GitHub-hosted repositories. Additional campaigns have been observed using the same loader.
Security researchers at RiverSafe have found that one in five UK companies have had potentially sensitive corporate data exposed via employee use of generative AI (GenAI). The researchers noted that the data leak risks of unmanaged GenAI use help to explain why three-quarters (75%) of surveyed CISOs claimed that insiders pose a greater risk to their organization than external threats. The researchers stated that UK CISOs are concerned not only about the risks associated with employee misuse of AI, but also about the technology being used by threat actors.
Hive Systems recently released the results of its latest annual analysis of password cracking through brute-force attacks. The company has been conducting this study for several years and has previously targeted passwords hashed with the widely used MD5 algorithm. Because MD5 hashes can often be cracked easily, organizations are increasingly turning to more secure algorithms, specifically Bcrypt. Bcrypt is not the most secure option available, but according to Hive's analysis of data from the Have I Been Pwned breach notification service, its use has grown in recent years.
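The gap between MD5 and an algorithm like Bcrypt comes down to work factor: MD5 computes in a single fast pass, while Bcrypt deliberately makes each guess expensive, which slows brute-force attacks proportionally. A minimal sketch of that contrast, using Python's standard library only (PBKDF2 stands in here for Bcrypt's tunable cost, since `bcrypt` itself is a third-party package; the password and salt are illustrative):

```python
import hashlib
import time

password = b"correct horse battery staple"  # illustrative example password
salt = b"0123456789abcdef"                  # in practice, a random per-password salt

# Legacy approach: one fast, unsalted MD5 call. An attacker can test
# billions of guesses per second on commodity GPUs.
t0 = time.perf_counter()
md5_digest = hashlib.md5(password).hexdigest()
t_md5 = time.perf_counter() - t0

# Key-stretched approach: PBKDF2 with a high iteration count, used here
# to illustrate the deliberate work factor that Bcrypt also provides.
# Every guess an attacker makes must pay this same cost.
t0 = time.perf_counter()
slow_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000).hex()
t_slow = time.perf_counter() - t0

print(f"MD5:    {t_md5:.6f}s  -> {md5_digest}")
print(f"PBKDF2: {t_slow:.6f}s  -> {slow_digest[:16]}...")
```

On typical hardware the stretched derivation takes hundreds of milliseconds versus microseconds for MD5, which is exactly the asymmetry that makes brute-forcing Bcrypt-style hashes impractical at scale.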
South Korean police recently revealed a major hacking campaign that lasted more than a year, allowing hackers from North Korea to steal defense secrets. A report from the Korean National Police Agency (KNPA) published recently blamed the campaign on three North Korean state-backed groups: Lazarus, Kimsuky, and Andariel. The KNPA claimed that the groups targeted as many as 83 defense contractors and subcontractors and managed to steal sensitive information from 10 of them between October 2022 and July 2023.
Google recently announced the availability of a Chrome 124 update that patches four vulnerabilities, including a critical security hole. Google noted that the critical vulnerability, tracked as CVE-2024-4058, is a type confusion bug in the ANGLE graphics layer engine. Google has credited two members of Qrious Secure for reporting CVE-2024-4058. They have been awarded a $16,000 bounty for their findings. Google has not mentioned if CVE-2024-4058 is being exploited in the wild.
Researchers from the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab developed a new chip that can efficiently accelerate Machine Learning (ML) workloads on edge devices such as smartphones while securing sensitive user data against two common types of attacks: side-channel attacks and bus-probing attacks. Without such acceleration, health-monitoring apps can be slow and energy-inefficient, because the ML models behind them must be shuttled between a smartphone and a central memory server.
By aekwall