"Leveling up Cybersecurity Through AI Games"

"Leveling up Cybersecurity Through AI Games"

Angelos Stavrou, professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech and founder of the mobile security startup A2 Labs, is training Artificial Intelligence (AI) to defend computer networks in a $16 million collaborative cybersecurity project. Stavrou wants to take AI training to the next level with a multi-agent training exerciser (MATrEx), a groundbreaking method for training AI in a three-tiered gaming system that includes simulations, emulations, and a real network. This article continues to discuss the idea behind MATrEx.

"SETS Educational Initiative Announced in Google Cyber NYC Program"

"SETS Educational Initiative Announced in Google Cyber NYC Program"

Seven Cornell projects have been awarded funding by the Google Cyber NYC Institutional Research Program to improve online privacy, safety, and security. As part of this broader program, Cornell Tech has also launched the Security, Trust, and Safety (SETS) Initiative aimed at advancing education and research on cybersecurity, privacy, and trust and safety.

"Is ChatGPT the Key to Stopping Deepfakes? Study Asks LLMs to Spot AI-Generated Images"

"Is ChatGPT the Key to Stopping Deepfakes? Study Asks LLMs to Spot AI-Generated Images"

A University at Buffalo-led research team used Large Language Models (LLMs), including OpenAI's ChatGPT and Google's Gemini, to spot deepfakes of human faces. They found that LLMs lagged in performance compared to state-of-the-art deepfake detection algorithms, but their Natural Language Processing (NLP) may make them a more practical deepfake detection tool in the future. According to the study's lead author, Siwei Lyu, LLMs can plainly explain their findings to humans, such as identifying an incorrect shadow or mismatched earrings.
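As a rough illustration of the approach, the sketch below asks a vision-capable LLM whether a face photo is real or AI-generated and to explain the artifacts it sees. The model name, prompt wording, and image path are illustrative assumptions, not the study's actual protocol.

    # Minimal sketch: asking a vision-capable LLM whether a face image looks
    # AI-generated and to explain its reasoning in plain language.
    # Assumptions (not from the study): model name, prompt wording, and image
    # path are illustrative; an OPENAI_API_KEY environment variable is set.
    import base64
    from openai import OpenAI

    client = OpenAI()

    with open("face.jpg", "rb") as f:  # hypothetical input image
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of a vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Is this photo of a human face real or AI-generated? "
                         "Point to specific artifacts (shadows, earrings, skin "
                         "texture) that support your answer."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )

    print(response.choices[0].message.content)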

"DU Cyber Security Researcher Unveils Risks of Virtual Assistants"

"DU Cyber Security Researcher Unveils Risks of Virtual Assistants"

Sanchari Das, assistant professor of computer science at the University of Denver's Ritchie School of Engineering and Computer Science, found several security and privacy issues in popular Virtual Assistant (VA) apps that could expose users' private data to malicious actors. The growing use and sophistication of VA apps prompted Das to explore how much user data they have access to and how many permissions they require.

"Dangerous AI Workaround: 'Skeleton Key' Unlocks Malicious Content"

"Dangerous AI Workaround: 'Skeleton Key' Unlocks Malicious Content"

Microsoft warns that a new direct prompt injection attack called "Skeleton Key" could bypass ethical and safety guardrails in generative Artificial Intelligence (GenAI) models such as ChatGPT. It allows users to access offensive, harmful, or illegal content by giving context to normally forbidden chatbot requests. For example, most commercial chatbots would initially decline if a user asked for instructions on developing dangerous wiper malware that could disrupt power plants. However, revising the prompt in a certain context would likely enable the AI to provide the malicious content.

"US Announces Charges, Reward for Russian National Behind Wiper Attacks on Ukraine"

"US Announces Charges, Reward for Russian National Behind Wiper Attacks on Ukraine"

The US Department of Justice (DOJ) announced charges against a Russian national for his alleged participation in the launch of disruptive cyberattacks against Ukraine before Russia's February 2022 invasion. The individual, Amin Timovich Stigal, is believed to be a member of "Cadet Blizzard," a state-sponsored threat actor also known as "DEV-0586" and "Ruinous Ursa." Court documents allege that the 22-year-old conspired to distribute the "WhisperGate" Master Boot Record (MBR) wiper to the systems of Ukrainian government entities.

"Digital Watermarking to Prevent Fraud: From Medical Images to Fake News"

"Digital Watermarking to Prevent Fraud: From Medical Images to Fake News"

Tanya Koohpayeh Araghi, a researcher at the Interdisciplinary Internet Institute (IN3) of the Universitat Oberta de Catalunya (UOC), has developed a new tool to protect digital data securely and cost-effectively. When doctors use the Internet to transfer images or make diagnoses, the data is vulnerable to attacks, so images must be protected to ensure accuracy and confidentiality. The study focuses on medical images, advancing their protection through a digital watermarking technique.
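The article does not detail the watermarking algorithm itself, so the sketch below shows only the simplest variant of the general idea, least-significant-bit (LSB) embedding, as a hypothetical stand-in; the function names and toy data are illustrative, not the UOC tool's implementation.

    # Minimal sketch of least-significant-bit (LSB) image watermarking, the
    # simplest form of the general technique; illustrative only.
    import numpy as np

    def embed_watermark(image: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
        """Hide one watermark bit in the least significant bit of each pixel."""
        flat = image.flatten().astype(np.uint8)
        bits = np.resize(watermark_bits.astype(np.uint8), flat.shape)  # repeat mark to fill image
        return ((flat & 0xFE) | bits).reshape(image.shape)

    def extract_watermark(image: np.ndarray, length: int) -> np.ndarray:
        """Recover the first `length` watermark bits from the pixel LSBs."""
        return (image.flatten() & 1)[:length]

    # Toy example: a synthetic 8x8 grayscale "medical image" and a short binary mark.
    image = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
    mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

    watermarked = embed_watermark(image, mark)
    recovered = extract_watermark(watermarked, len(mark))
    # Recovered mark matches; altering the watermarked pixels would corrupt the embedded bits.
    assert np.array_equal(recovered, mark)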

"Kimsuky Using TRANSLATEXT Chrome Extension to Steal Sensitive Data"

"Kimsuky Using TRANSLATEXT Chrome Extension to Steal Sensitive Data"

The North Korean state-backed hacker group "Kimsuky" has been linked to the use of a new malicious Google Chrome extension that steals sensitive information. Zscaler ThreatLabz has dubbed the extension "TRANSLATEXT," which could gather email addresses, usernames, passwords, cookies, and browser screenshots. This article continues to discuss the Kimsuky threat and findings regarding its use of a new malicious Google Chrome extension.

"Fortra Patches Critical SQL Injection in FileCatalyst Workflow"

"Fortra Patches Critical SQL Injection in FileCatalyst Workflow"

Fortra recently announced patches for a critical-severity SQL injection vulnerability in FileCatalyst Workflow that could allow attackers to create administrative user accounts. Tracked as CVE-2024-5276 (CVSS score of 9.8), the vulnerability affects FileCatalyst Workflow version 5.1.6 Build 135 and earlier. The company noted that the flaw could also be exploited to modify application data, but that data exfiltration via SQL injection is not possible using this vulnerability.
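For context on the vulnerability class (and not Fortra's actual code), the hypothetical sketch below shows how string-built SQL lets crafted input create an administrative account, and how a parameterized query prevents it; the table and column names are made up.

    # Generic illustration of SQL injection in a user-creation path and the
    # parameterized-query fix. All names here are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'user')")

    def add_user_vulnerable(name: str) -> None:
        # String concatenation lets crafted input rewrite the statement,
        # e.g. name = "eve', 'admin') --" creates an administrative account.
        conn.execute(f"INSERT INTO users VALUES ('{name}', 'user')")

    def add_user_safe(name: str) -> None:
        # Placeholders keep attacker input as data, never as SQL syntax.
        conn.execute("INSERT INTO users VALUES (?, 'user')", (name,))

    add_user_vulnerable("eve', 'admin') --")  # injected admin row
    add_user_safe("eve', 'admin') --")        # stored literally as a name
    print(conn.execute("SELECT * FROM users").fetchall())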

"Russian APT Reportedly Behind New TeamViewer Hack"

"Russian APT Reportedly Behind New TeamViewer Hack"

TeamViewer, a remote connectivity software provider, has detected a corporate network compromise, and some reports suggest that the Russian group "APT29," also known as "Cozy Bear" and "Midnight Blizzard," is responsible for the attack. APT29 is a Russian state-sponsored threat group known for high-impact attacks on major organizations. This article continues to discuss the TeamViewer corporate network hack and the group believed to be behind it.
