"New LightSpy Spyware Targets iOS with Enhanced Capabilities"

Security researchers at ThreatFabric have discovered a newer version of the LightSpy spyware, which is known for targeting iOS devices. The researchers noted that its capabilities have been expanded to include functions for compromising device security and stability. The latest version, identified as 7.9.0, is more sophisticated and adaptable than its predecessor, featuring 28 plugins compared with the 12 observed in the earlier version.

Submitted by Adam Ekwall on

"Apple Patches Over 70 Vulnerabilities Across iOS, macOS, Other Products"

Apple recently announced fresh security updates for both iOS and macOS users, addressing over 70 CVEs across its platforms, including several bugs that could lead to the modification of protected file system content. Apple noted that iOS 18.1 and iPadOS 18.1 are now rolling out to mobile users with patches for 28 vulnerabilities that could lead to information leaks, the disclosure of process memory, denial-of-service, sandbox escape, modification of protected system files, heap corruption, and access to restricted files.

Submitted by Adam Ekwall on

"Researchers Uncover Vulnerabilities in Open-Source AI and ML Models"

About three dozen security flaws have been discovered in various open-source Artificial Intelligence (AI) and Machine Learning (ML) models, some of which enable Remote Code Execution (RCE) and the theft of information. The flaws, found in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty program. Two of the most severe flaws affect Lunary, a production toolkit for Large Language Models (LLMs).

Submitted by Gregory Rigby on

"ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis"

Marco Figueroa, Generative Artificial Intelligence (GenAI) bug bounty programs manager at Mozilla, has disclosed new jailbreak methods that can trick the AI-driven chatbot ChatGPT into generating Python exploits and a malicious SQL injection tool. One method involves encoding malicious instructions in hexadecimal format; the other uses emojis. ChatGPT and other AI chatbots are trained not to provide potentially hateful or harmful information.
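
To illustrate the mechanism (not the malicious payloads themselves), the sketch below shows why hex encoding can slip past keyword-based filtering: a scanner looking for trigger words in plaintext sees only hexadecimal digits, while the text round-trips losslessly once decoded. The helper names and the deliberately harmless instruction string are illustrative, not taken from the disclosed research.

```python
def to_hex(text: str) -> str:
    """Encode text as a hexadecimal string (UTF-8 bytes -> hex digits)."""
    return text.encode("utf-8").hex()

def from_hex(hex_str: str) -> str:
    """Decode a hexadecimal string back to the original text."""
    return bytes.fromhex(hex_str).decode("utf-8")

# A harmless stand-in for an instruction; the encoded form contains no
# recognizable words, so a plaintext keyword filter would not match it.
instruction = "print a friendly greeting"
encoded = to_hex(instruction)
print(encoded)
print(from_hex(encoded))  # round-trips to the original instruction
```

Defenses that decode common transformations (hex, base64, and similar) before applying safety checks close this particular gap, which is part of what the disclosure argues for.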

Submitted by Gregory Rigby on

"Russia Targeting Ukrainian Military Recruits With Android, Windows Malware, Google Says"

Google warns of a Russian cyber espionage and influence campaign targeting military recruits in Ukraine to hinder the country's mobilization efforts. A Telegram user named "Civil Defense" has been distributing allegedly free software for locating Ukrainian military recruiters, but the software is actually platform-specific malware. On Android devices without Google Play Protect enabled, it installs commodity malware alongside a decoy mapping application. According to Google, the operation has delivered the Android backdoor "CraxsRat" and the "SunSpinner" malware to victims.

Submitted by Gregory Rigby on

"New Tool Bypasses Google Chrome's New Cookie Encryption System"

The "Chrome-App-Bound-Encryption-Decryption" tool, released by cybersecurity researcher Alexander Hagenah, can bypass Google's new App-Bound Encryption defense against cookie theft and extract saved credentials from the Chrome web browser. Hagenah released the tool after noticing that others were discovering similar bypasses. The tool does what multiple infostealer operations have already added to their malware, but its public availability puts Chrome users who store sensitive data in their browsers at risk.

Submitted by Gregory Rigby on

"AI Hallucinations Can Pose a Risk to Your Cybersecurity"

One of the most significant challenges associated with Artificial Intelligence (AI) hallucinations in cybersecurity is that the error can result in an organization failing to recognize a potential threat. An AI hallucination occurs when a Large Language Model (LLM), such as a generative AI tool, provides an incorrect answer. The answer could be completely wrong or fabricated, such as making up a non-existent research paper. This article continues to discuss the concept of AI hallucinations and how they can affect cybersecurity.

Submitted by Gregory Rigby on

"Notorious Hacker Group TeamTNT Launches New Cloud Attacks for Crypto Mining"

The "TeamTNT" cryptojacking group is behind a new large-scale campaign targeting cloud-native environments for mining cryptocurrencies and renting out breached servers to third parties. According to Assaf Morag, director of threat intelligence at Aqua, the group is targeting exposed Docker daemons to deploy the "Sliver" cyber worm and cryptominers. They are using compromised servers and Docker Hub as the infrastructure for spreading malware. This article continues to discuss the new cloud attacks launched by the TeamTNT hacker group.
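
Since the campaign hinges on Docker daemons reachable over the network, a quick defensive self-check is simply confirming that the daemon's unauthenticated plaintext API port (2375 by default) is not open on your own hosts. The sketch below is a minimal reachability test under that assumption; the function name and the localhost target are illustrative, and it only attempts a TCP connection, nothing more.

```python
import socket

# Default port for the Docker Engine API when exposed over plain TCP
# (no TLS, no authentication) -- the configuration TeamTNT scans for.
DOCKER_PLAINTEXT_PORT = 2375

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Check your own host; an open 2375 means the Docker API is exposed
    # without TLS and should be firewalled or switched to TLS (2376).
    if is_port_open("127.0.0.1", DOCKER_PLAINTEXT_PORT):
        print("WARNING: Docker plaintext API port 2375 is reachable")
    else:
        print("Docker plaintext API port is not exposed")
```

In practice the daemon should listen only on a Unix socket or on 2376 with mutual TLS; an open 2375 is exactly the foothold this campaign exploits.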

Submitted by Gregory Rigby on

"Evasive Panda's CloudScout Toolset Targets Taiwan"

The Advanced Persistent Threat (APT) group "Evasive Panda" developed a toolset named "CloudScout," which has been targeting Taiwanese institutions to steal cloud-based data. The attacks involved CloudScout exploiting session cookies stolen by MgBot plugins to access Google Drive, Gmail, and Outlook accounts without direct authentication. This article continues to discuss findings regarding Evasive Panda's CloudScout toolset.

Submitted by Gregory Rigby on