NSA 2023 Cybersecurity Year in Review

The National Security Agency has published its 2023 Cybersecurity Year in Review!

In an effort to be more transparent, the National Security Agency publishes an annual year in review describing cybersecurity efforts that better equipped U.S. defenses against high-priority cyber threats. NSA's efforts to help secure the nation's most sensitive systems also help your cybersecurity, because NSA cascades these solutions through public guidance and engages with key technology providers to help them bolster the security of their products and services.

Submitted by Regan Williams on

"PKfail Vulnerability Allows Secure Boot Bypass on Hundreds of Computer Models"

"PKfail Vulnerability Allows Secure Boot Bypass on Hundreds of Computer Models"

According to Binarly, there is a Secure Boot issue affecting hundreds of computer models. The vulnerability, called "PKfail," enables attackers to run malicious code during the device's boot process. It stems from an exposed American Megatrends International (AMI) Platform Key (PK), a Secure Boot private key. The exposed PK was a default key provided by AMI and was not meant for use in production, but several major computer manufacturers never replaced it and shipped many devices with the untrusted key.
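
Public reporting on PKfail indicates the untrusted AMI test keys carry telltale strings such as "DO NOT TRUST" in their certificate subject. As a rough illustration (this is not Binarly's scanner), the sketch below scans the UEFI PK variable on a Linux system for those markers; the efivarfs path and marker strings are assumptions drawn from that reporting, and a clean result is not proof of safety.

```python
#!/usr/bin/env python3
"""Heuristic PKfail check: scan the UEFI Platform Key (PK) variable for the
"DO NOT TRUST"/"DO NOT SHIP" markers reported in AMI's leaked test keys.
A minimal sketch assuming Linux with efivarfs mounted at the usual path."""

from pathlib import Path

# PK variable under the well-known EFI global variable GUID.
PK_VAR = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")
MARKERS = (b"DO NOT TRUST", b"DO NOT SHIP")

def check_pk(var_path: Path = PK_VAR) -> None:
    if not var_path.exists():
        print("No PK variable found (not UEFI, or Secure Boot not provisioned).")
        return
    data = var_path.read_bytes()  # 4 attribute bytes + EFI_SIGNATURE_LIST payload
    hits = [m.decode() for m in MARKERS if m in data]
    if hits:
        print(f"WARNING: PK contains {hits} -- likely an untrusted test key (PKfail).")
    else:
        print("No known test-key marker found in PK (heuristic check only).")

if __name__ == "__main__":
    check_pk()  # reading efivars typically requires root
```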

Submitted by grigby1 CPVI on

"FraudGPT and Other Malicious AIs Are the New Frontier of Online Threats. What Can We Do?"

"FraudGPT and Other Malicious AIs Are the New Frontier of Online Threats. What Can We Do?"

Researchers at Monash University give their insights on the rise of dark Large Language Models (LLMs), what we can do to protect ourselves, and the role of government in regulating Artificial Intelligence (AI). They note that widely available generative AI tools have further complicated cybersecurity, making online security more important than ever. Dark LLMs are uncensored versions of AI systems such as ChatGPT. Re-engineered for criminal activities, they can be used to improve phishing campaigns, create sophisticated malware, and more.

Submitted by grigby1 CPVI on

"Researchers Improve Method to Discover Anomalies in Data"

"Researchers Improve Method to Discover Anomalies in Data"

Washington State University researchers have developed an algorithm that improves the detection of data anomalies, including in streaming data. Their work contributes to Artificial Intelligence (AI) methods that could be applied in domains where anomalies in large amounts of data must be found quickly, such as cybersecurity. This article continues to discuss how the algorithm finds data anomalies better than current anomaly-detection software.
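
The article does not detail the WSU algorithm itself. As a point of comparison only, the sketch below shows a common generic baseline for anomaly detection in streaming data, a rolling z-score over a sliding window; it is illustrative and is not the researchers' method.

```python
"""Illustrative baseline for streaming anomaly detection: flag points whose
rolling z-score exceeds a threshold. A generic sketch, not the Washington
State University algorithm described above."""

from collections import deque
from statistics import fmean, pstdev

def streaming_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) for points more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    history = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(history) == window:
            mu, sigma = fmean(history), pstdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        history.append(x)

if __name__ == "__main__":
    import random
    random.seed(1)
    data = [random.gauss(0, 1) for _ in range(1000)]
    data[500] = 12.0  # injected anomaly
    print(list(streaming_anomalies(data)))  # should report index 500
```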

Submitted by grigby1 CPVI on

"Technology Policy Experts Argue That It Is Time to Rethink Data Privacy Protections"

"Technology Policy Experts Argue That It Is Time to Rethink Data Privacy Protections"

The Association for Computing Machinery's (ACM) global Technology Policy Council (TPC) has released "TechBrief: Data Privacy Protection," which highlights the growing ineffectiveness of controls over information privacy. As data collection has expanded and algorithms and computers have grown more powerful, it has become easier to piece together information about individuals' private lives from public information. This article continues to discuss key points from "TechBrief: Data Privacy Protection."
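
The re-identification risk the TechBrief describes is easy to demonstrate: even with names removed, a handful of quasi-identifiers (such as ZIP code, birthdate, and sex) can single out individuals. The sketch below, using invented records and field names, computes a dataset's k-anonymity over those fields; k = 1 means at least one record is uniquely identifiable from them alone.

```python
"""Minimal illustration of the re-identification risk discussed in the ACM
TechBrief: find the smallest group of records sharing the same
quasi-identifiers. The records and field names here are invented."""

from collections import Counter

records = [
    {"zip": "99352", "birthdate": "1984-07-31", "sex": "F", "diagnosis": "A"},
    {"zip": "99352", "birthdate": "1984-07-31", "sex": "M", "diagnosis": "B"},
    {"zip": "20007", "birthdate": "1990-01-15", "sex": "F", "diagnosis": "C"},
    {"zip": "20007", "birthdate": "1990-01-15", "sex": "F", "diagnosis": "D"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def k_anonymity(rows, keys):
    """Return the smallest equivalence-class size over the given keys:
    k = 1 means someone is uniquely identifiable from those fields."""
    counts = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(counts.values())

print("k =", k_anonymity(records, QUASI_IDENTIFIERS))  # prints k = 1
```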

Submitted by grigby1 CPVI on

"Striking the Balance in Communication Privacy and Lawful Interception"

"Striking the Balance in Communication Privacy and Lawful Interception"

A team of researchers from the University of Luxembourg and the KASTEL Security Research Labs has devised a security protocol that allows court-authorized monitoring of end-to-end encrypted or anonymous communications while also detecting illicit or extensive surveillance. The new security protocol balances legitimate communication interception with privacy protection. This article continues to discuss the new security protocol devised by researchers at the University of Luxembourg and the KASTEL Security Research Labs.
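
The article does not specify the protocol's construction. One standard building block in designs that try to balance lawful access with abuse resistance is threshold secret sharing, which splits a decryption capability across independent parties (for example, a court and an operator) so that no single authority can decrypt alone. The toy sketch below shows Shamir secret sharing over a prime field; it is a generic illustration, not the University of Luxembourg / KASTEL protocol.

```python
"""Toy Shamir secret sharing: split a key so that decryption needs any t of
n independent parties. A generic illustration of threshold access control,
NOT the protocol devised by the researchers above."""

import random

PRIME = 2**127 - 1  # Mersenne prime defining the finite field

def split(secret, n=3, t=2):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split(key, n=3, t=2)
assert reconstruct(shares[:2]) == key  # any two parties can recover the key
assert reconstruct(shares[1:]) == key
```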

Submitted by grigby1 CPVI on

"NVIDIA Patches Flaw in Jetson Software Used in AI-Powered Systems"

"NVIDIA Patches Flaw in Jetson Software Used in AI-Powered Systems"

NVIDIA has patched a high-severity flaw impacting its Jetson series computing boards. The exploitation of this vulnerability could enable Denial-of-Service (DoS), code execution, and privilege escalation in Artificial Intelligence (AI)-powered systems. This article continues to discuss the potential exploitation and impact of the flaw in Jetson software used in AI-powered systems, as well as other NVIDIA vulnerabilities that pose risks to networking and data center solutions.

Submitted by grigby1 CPVI on

"This AI-Powered Cybercrime Service Bundles Phishing Kits with Malicious Android Apps"

"This AI-Powered Cybercrime Service Bundles Phishing Kits with Malicious Android Apps"

The Spanish-speaking cybercrime group "GXC Team" bundles phishing kits with malicious Android apps, advancing Malware-as-a-Service (MaaS) offerings. Group-IB, which has tracked the threat actor since January 2023, called the crimeware solution a "sophisticated AI-powered phishing-as-a-service platform." This article continues to discuss findings regarding GXC Team's bundling of phishing kits with malicious Android apps.

Submitted by grigby1 CPVI on

"US Offers $10 Million Reward for Information on North Korean Hacker"

"US Offers $10 Million Reward for Information on North Korean Hacker"

The US Department of State is offering $10 million for information on Rim Jong Hyok, an alleged member of the hacking group "APT45," which operates on behalf of a North Korean military intelligence agency, the Reconnaissance General Bureau. The group has targeted foreign businesses, government entities, and the defense industry. This article continues to discuss the US offering a reward of up to $10 million for information on a member of APT45.

Submitted by grigby1 CPVI on

"Despite Bans, AI Code Tools Widespread in Organizations"

"Despite Bans, AI Code Tools Widespread in Organizations"

A new Checkmarx report highlights that organizations are concerned about the security threats posed by developers' use of Artificial Intelligence (AI). The company found that although 15 percent of organizations explicitly ban the use of AI tools for code generation, AI tools are nevertheless in use at 99 percent of them. This article continues to discuss key findings from the "Seven Steps to Safely Use Generative AI in Application Security" report.

Infosecurity Magazine reports "Despite Bans, AI Code Tools Widespread in Organizations"

Submitted by grigby1 CPVI on