"NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems"

In a new publication titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)," computer scientists at the National Institute of Standards and Technology (NIST) and their collaborators identify the vulnerabilities of Machine Learning (ML) and Artificial Intelligence (AI). Their publication aims to help AI users and developers understand potential attacks and mitigation strategies. It is part of NIST's broader effort to support the development of trustworthy AI.

Submitted by Gregory Rigby on

"Threat Group Using Rare Data Transfer Tactic in New RemcosRAT Campaign"

An adversary tracked as UAC-0050, known for using the RemcosRAT remote surveillance and control tool against organizations in Ukraine, is back with a new data transfer method that evades Endpoint Detection and Response (EDR) systems. In its latest campaign, the threat actor focuses on Ukrainian government entities. According to researchers at Uptycs, the attacks are likely politically motivated, aiming to gather specific intelligence from Ukrainian government agencies.

Submitted by Gregory Rigby on

"Energy Department Offering $70 Million for Security, Resilience Research"

The US Department of Energy (DOE) recently announced that it is offering up to $70 million in funding for research into technologies that can boost the resilience and security of the energy sector. The funding, offered through the All-Hazards Energy Resilience program, targets research in four key areas: cyber threats, physical threats, natural disasters, and extreme weather events fueled by climate change.

Submitted by Adam Ekwall on

"Nigerian Arrested, Charged in $7.5 Million BEC Scheme Targeting US Charities"

A Nigerian national was recently arrested in Ghana and faces charges in the US for his role in a business email compromise (BEC) scheme involving two charitable organizations.  According to the indictment, between June and August 2020, the man, Olusegun Samson Adejorin, targeted two charities located in North Bethesda, Maryland, and New York, New York.  Adejorin allegedly obtained the credentials of employees of both organizations, accessed their email accounts, and impersonated employees at one of the charities to request withdrawals of funds from the other charity.

Submitted by Adam Ekwall on

Post-Quantum: Cybersecurity Speaker Series

Bailey Bickley, this season's host of NSA's Cybersecurity Speaker Series, speaks with Adrian Stanger and Bill Layton about preparing for post-quantum cryptography. For more on cybersecurity at NSA, and to find out when our next Speaker Series video is posted, follow us on Twitter @NSACyber.

Submitted by Amy Karns on

"Beware: 3 Malicious PyPI Packages Found Targeting Linux with Crypto Miners"

Three new malicious packages that can deploy a cryptocurrency miner on Linux devices have been discovered in the Python Package Index (PyPI) open-source repository. The three malicious packages, named "modularseven," "driftme," and "catme," were downloaded a total of 431 times in the past month before being removed. According to Fortinet FortiGuard Labs researcher Gabby Xiong, the packages deploy a CoinMiner executable on Linux devices. The campaign appears to overlap with a previous campaign that used a package called "culturestreak" to launch a cryptocurrency miner.

Submitted by Gregory Rigby on

AI and Cybersecurity Virtual Institute

AI and Cybersecurity develops methods to protect critical AI algorithms and systems from accidental and intentional degradation and failure.

Abstract

The research projects of the AI and Cybersecurity Virtual Institute sit at the intersection of cybersecurity and Artificial Intelligence (AI). These projects fall into three broad areas: AI for Cybersecurity, Cybersecurity for AI, and Countering AI. Research in AI for Cybersecurity advances the secure application of AI and Machine Learning to cybersecurity challenges. Research in Cybersecurity for AI develops methods to protect critical AI algorithms and systems from accidental and intentional degradation and failure. The Countering AI area concerns the special cyber defenses needed to protect against cyberattacks that are aided by the use of AI.
 

PROJECTS 
 

Trusted Systems Virtual Institute

Trusted Systems involve level-based security, where protection is provided and enforced according to designated security levels.

Abstract

The research projects of the Trusted Systems Virtual Institute further the foundations and applications of trust and trustworthiness of devices and systems. The challenge of trust is examined at each stage of the development life cycle: design, development, use, and retirement. Integral to advancing trust are research projects that improve the understanding and accounting of how human behavior affects trust.

 

PROJECTS 
 

Defensive Mechanisms Virtual Institute

Defense mechanisms can be categorized into groups such as Authentication and Encryption, Malware and Intrusion Detection, and Software Vulnerability.

Abstract

The research projects of the Defensive Mechanisms Virtual Institute advance resiliency by investigating the foundations needed to detect, respond to, and mitigate cyberattacks. This requires theory, models, and tools at each stage of the cyberattack timeline. The field also includes the research necessary to balance performance and security in responding to threats.

 

PROJECTS 

"UMass Amherst Researchers Bring Dream Of Bug-Free Software One Step Closer to Reality"

A team of computer scientists led by the University of Massachusetts (UMass) Amherst announced a new method to automatically generate whole proofs that can be used to prevent software bugs and verify the correctness of the underlying code. The new method, named Baldur, harnesses the Artificial Intelligence (AI) power of Large Language Models (LLMs). When combined with the Thor tool, it achieves a proof-generation efficacy of nearly 66 percent.

Submitted by Gregory Rigby on