Virtual Institutes Mid-Year Meeting Summary

The Science of Security (SoS) Virtual Institutes (VIs) held their Mid-Year meeting at the International Computer Science Institute (ICSI) on July 9-10, 2024. This was the second meeting of the VIs since they were formed in late 2023. The meeting was attended by Principal Investigators (PIs) and/or Co-PIs from all seven VI universities, who briefed attendees on the status of the eleven VI projects. A summary of the kickoff meeting and selected projects can be found here.

The meeting was opened by Adam Tagert, SoS Technical Director, who noted some of the SoS initiative activities since the VI kickoff in January. These included the 11th Annual Symposium on Hot Topics in the Science of Security (HotSoS) in April and the International Science and Engineering Fair, where SoS recognized high school students for their research in cybersecurity. Dr. Tagert then discussed the goals of the VI Mid-Year meeting: to bring the community together, to enable National Security Agency (NSA) researchers to become more familiar with VI projects so the research can be shared more broadly within NSA, and to strengthen the VIs. Shavon Donnell, SoS Program Manager, also welcomed the attendees and reviewed the agenda, noting that participants were looking forward to hearing about the progress of each project over the past six months. The Mid-Year meeting also featured a guest speaker who gave a talk on Artificial Intelligence (AI) and cybersecurity.

VI 1: Trusted Systems

The research projects of the Trusted Systems VI further the foundations and applications of trust and trustworthiness of devices and systems. The challenge of trust is examined at each stage of the development life cycle: design, development, use, and retirement. Integral to advancing trust are research projects that improve the understanding of, and accounting for, the effects of human behavior on trust.

Advancing Security and Privacy of Bluetooth IoT—Zhiqiang Lin, Ohio State University

Dr. Lin reiterated that the goal of this project is to systematically uncover attacks via formal methods. His progress report addressed a largely unexplored area of privacy: formalizing the privacy leaks (the science) of the Bluetooth allowlist mechanism. The problems with allowlists are location tracking and deanonymization, which make it challenging to apply state-of-the-art solutions. His research questions revolve around the conditions necessary to explain allowlist-based attacks in light of current privacy formalizations and how to prevent such attacks. He listed some conditions for an allowlist attack and key defense strategies. His next privacy research goals are to define allowlist attacks in process algebra and provide sufficient properties for proposed solutions.
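To illustrate the tracking risk the talk formalizes, the following toy sketch (not Dr. Lin's process-algebra model; the Peripheral class and addresses are invented for illustration) shows how an allowlist's "respond only to known peers" behavior gives an observer a replayable presence test.

    # Minimal sketch: why an allowlist can enable tracking. A peripheral that
    # only answers peers on its allowlist leaks a distinguishable "known vs.
    # unknown" signal that an observer can replay at different locations.

    class Peripheral:
        def __init__(self, allowlist):
            self.allowlist = set(allowlist)          # bonded peer addresses

        def responds_to(self, peer_address):
            # Allowlist filtering: only allowlisted peers get a response.
            return peer_address in self.allowlist

    def probe_for_target(peripheral, captured_peer_address):
        # An attacker who once observed a legitimate peer's address can
        # replay it anywhere; a response deanonymizes the peripheral.
        return peripheral.responds_to(captured_peer_address)

    victim = Peripheral(allowlist={"aa:bb:cc:dd:ee:ff"})
    print(probe_for_target(victim, "aa:bb:cc:dd:ee:ff"))   # True  -> target present
    print(probe_for_target(victim, "11:22:33:44:55:66"))   # False -> not the target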

Predictable and Scalable Remote Attestation—Perry Alexander, University of Kansas

Dr. Alexander noted that predictable and scalable remote attestation requires evidence and time, flexible mechanisms at scale, and large empirical case studies. He provided some background on the project, which included developing the Copland language and semantics for remote attestation protocols and constructing MAESTRO, a formally verified environment for executing those protocols. He also discussed the attestation testbed, protocol analysis, and changes to MAESTRO, as well as future activities including new empirical case studies and new publications in preparation.
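As a rough illustration of what a remote attestation protocol produces (this is not Copland syntax or MAESTRO code; the measure and sign helpers are hypothetical), the sketch below evaluates a sequenced phrase that measures a component and then signs the resulting evidence for appraisal.

    # Minimal sketch, not Copland/MAESTRO: a toy evaluator for a sequenced
    # attestation phrase that measures a component and then signs the
    # accumulated evidence, yielding something an appraiser could check.
    import hashlib, hmac

    def measure(component_bytes):
        # "Measurement" here is just a hash of the component image.
        return {"kind": "measurement", "value": hashlib.sha256(component_bytes).hexdigest()}

    def sign(evidence, key):
        # Signing binds the evidence to the attesting place (HMAC stands in
        # for a real signature scheme).
        tag = hmac.new(key, evidence["value"].encode(), hashlib.sha256).hexdigest()
        return {"kind": "signed", "evidence": evidence, "sig": tag}

    def run_phrase(component_bytes, key):
        # Sequential composition: measure, then sign the result.
        return sign(measure(component_bytes), key)

    appraisable = run_phrase(b"kernel-image-v1", key=b"attester-key")
    print(appraisable["sig"][:16], appraisable["evidence"]["value"][:16])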

Quantitative Threat Modeling and Risk Assessment in the Socio-Technical Critical Infrastructure Systems—Josh Dehlinger, Towson University

The case study for the election integrity research project focuses on optical scanners used in about 70% of US precincts. Dr. Dehlinger described their systems approach to develop a threat model and analysis.  Their research agenda is to model the relative risks of adversaries and trusted insiders exploiting threat scenarios, and the goal is a comprehensive, updated attack tree and mitigation analysis for critical infrastructure equipment and processes. He described the work done thus far, their findings, and what they plan to do over the remaining years of the project.  Dr. Dehlinger concluded by stating that understanding threats to election integrity enables effective poll worker training, protective mitigation strategies, and policy development.
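The sketch below gives the generic flavor of the attack-tree style of analysis described (the tree, gates, and likelihood numbers are invented, not the project's election-equipment model): AND nodes multiply step likelihoods, OR nodes let the adversary take the easiest path, which supports comparing the relative risk of outsider versus insider scenarios.

    # Minimal sketch, not the project's actual attack tree: AND/OR nodes over
    # leaf likelihoods, useful for comparing the relative risk of scenarios
    # such as an outside adversary versus a trusted insider.

    def evaluate(node):
        if "p" in node:                      # leaf: estimated likelihood
            return node["p"]
        child_values = [evaluate(c) for c in node["children"]]
        if node["gate"] == "AND":            # all steps must succeed
            result = 1.0
            for v in child_values:
                result *= v
            return result
        return max(child_values)             # OR: attacker picks the easiest path

    tamper_scanner = {
        "gate": "OR",
        "children": [
            {"gate": "AND", "children": [{"p": 0.05}, {"p": 0.4}]},   # outsider: gain access, then modify
            {"p": 0.1},                                               # insider: direct modification
        ],
    }
    print(evaluate(tamper_scanner))   # 0.1 -> the insider path dominates in this toy example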

Continuous Reasoning with Gradual Verification—Jonathan Aldrich, Carnegie Mellon University

Dr. Aldrich addressed the challenges with enabling continuous reasoning and described his research approach, and “secret weapon,” as gradual verification, a new approach to verification designed to deal with incomplete specifications; specifications can be gradually extended over time to verify more program properties. He presented two hypotheses in dealing with gradual verification and continuous assurance, both of which help with evolving specifications: 1) better support for incomplete specifications; and 2) combining static and dynamic checks. After performing a case study and observing its results, researchers are starting to build a continuous verification structure. Their current hypothesis is that supporting incomplete specifications and leveraging both static and dynamic checking can support highly productive continuous assurance, proof maintenance, and proof repair.
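A minimal sketch of the idea behind combining static and dynamic checks (this is not the CMU toolchain; the requires decorator is a hypothetical stand-in): obligations that the static verifier cannot discharge from an incomplete specification are checked at run time instead.

    # Minimal sketch of the flavor of gradual verification: the parts of a
    # specification the static verifier could not prove (the "imprecise"
    # remainder) are compiled into runtime checks, so partially specified
    # code still gets sound, if dynamic, protection.

    def requires(static_ok, dynamic_check):
        def wrap(fn):
            def guarded(*args, **kwargs):
                if not static_ok:
                    # Precondition not proven statically: check it at runtime.
                    assert dynamic_check(*args, **kwargs), "precondition violated"
                return fn(*args, **kwargs)
            return guarded
        return wrap

    @requires(static_ok=False, dynamic_check=lambda xs, i: 0 <= i < len(xs))
    def get(xs, i):
        return xs[i]

    print(get([10, 20, 30], 1))   # passes the dynamic check
    # get([10, 20, 30], 5)        # would fail fast instead of misbehaving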

VI 2:  Defensive Mechanisms

The research projects of the Defensive Mechanisms VI advance resiliency by investigating the foundations needed to detect, respond, and mitigate cyberattacks. This requires theory, models, and tools at each stage of the cyberattack timeline. In addition, this field includes the necessary research to balance performance and security in responding to threats.  

Neurosymbolic Autonomous Agents for Cyber-Defense—Gabor Karsai, Vanderbilt University

After providing the background for the research project and some of the work that had already been done, Dr. Karsai described the work in progress.  The current work is based on the following problem statement: Given a network consisting of hosts and operational servers and a neurosymbolic cyber-agent trained with a policy, the objective is to develop runtime assurance algorithms to detect shifts from the distribution used for training the agent. He also addressed the use of a probabilistic DNN ensemble to detect out-of-distribution data obtained from the network. He concluded the presentation by noting that Evolving Behavior Trees (EBTs) have high utility for representing complex long-term tasks. Current and future work includes EBTs using Large Language Models (LLMs) and more work on multiagent systems. 
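A minimal sketch of the ensemble-based out-of-distribution signal described (not the Vanderbilt implementation; the probability values are illustrative): predictive entropy plus disagreement across ensemble members rises when network observations drift from the training distribution.

    # Minimal sketch: using disagreement across a probabilistic DNN ensemble
    # as an out-of-distribution signal. High predictive entropy and member
    # variance on incoming observations suggests the defender agent is
    # operating outside its training distribution.
    import numpy as np

    def ensemble_ood_score(member_probs):
        # member_probs: (n_members, n_actions) predictive distributions
        mean_p = member_probs.mean(axis=0)
        entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))       # total uncertainty
        disagreement = member_probs.var(axis=0).sum()             # member spread
        return entropy + disagreement

    in_dist = np.array([[0.9, 0.05, 0.05], [0.88, 0.07, 0.05], [0.92, 0.04, 0.04]])
    shifted = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.4, 0.1, 0.5]])
    print(ensemble_ood_score(in_dist) < ensemble_ood_score(shifted))   # True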

Resilient Systems through Adaptive Architecture—Eunsuk Kang, Carnegie Mellon University

Dr. Kang described their research approach as resilient-by-design systems: instead of trying to prevent attacks (which are inevitable), designers should aim to preserve as many critical services as possible during an attack. The researchers propose identifying and categorizing services based on their criticality, designing a system architecture with separation based on criticality, and allowing the system to gracefully degrade to maintain highly critical services. Trust boundaries capture an explicit mapping between service requirements and component behaviors. By computing and analyzing trust boundaries, one can identify undesirable couplings between services of different criticality levels and the parts of the system that should be redesigned for improved resilience. Trust boundaries can also be used to guide graceful degradation during an attack by reconfiguring the system to sacrifice services that rely on compromised components. He identified three research questions: 1) What does resilience mean, and how do we analyze a system design for it? 2) How do we architect a system to be resilient? 3) How do we allow the system to adapt and respond dynamically when an attack occurs? He described their approach to addressing these questions, including work on developing a catalogue of reconfiguration tactics for maximizing preserved services and the development of a formal theory and prototype tool for architectural resilience analysis.
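The following sketch illustrates the graceful-degradation idea in miniature (the services, components, and criticality scores are hypothetical, not the CMU prototype): services are ordered by criticality, and any service whose dependencies cross into compromised components is shed first.

    # Minimal sketch: reconfiguration guided by a trust-boundary-style mapping.
    # Services are ranked by criticality and mapped to the components they rely
    # on; when components are compromised, the system sheds services that cross
    # the boundary so highly critical services keep running.

    service_criticality = {"flight_control": 3, "telemetry": 2, "infotainment": 1}
    depends_on = {
        "flight_control": {"rtos", "actuator_bus"},
        "telemetry": {"rtos", "radio"},
        "infotainment": {"radio", "media_stack"},
    }

    def degrade(compromised):
        keep, shed = [], []
        for svc in sorted(service_criticality, key=service_criticality.get, reverse=True):
            if depends_on[svc] & compromised:
                shed.append(svc)          # service relies on a compromised component
            else:
                keep.append(svc)
        return keep, shed

    print(degrade({"radio"}))   # (['flight_control'], ['telemetry', 'infotainment'])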

Towards Trustworthy Autonomous Cyber Defense for Dynamic Intrusion Response—Ehab Al-Shaer, Carnegie Mellon University

Dr. Al-Shaer addressed the current cybersecurity automation environment and the limitations of existing systems, and then enumerated the project objectives as playbook specification, playbook verification, playbook scoring, and playbook adaptation. The objective of the project use case is to develop an autonomous multi-agent architecture (Horde) to compute an optimal defense policy in real time against dynamic, multi-strategy infrastructure DDoS attacks. The approach relies on model-based reinforcement learning using a Partially Observable Markov Decision Process (POMDP) for optimal sequential decision-making, and Dr. Al-Shaer described some of the challenges, strategies, and results of their research.
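A minimal sketch of the POMDP machinery underlying such an approach (the transition and observation matrices are invented, not Horde's models): the defender maintains a belief over hidden attack states and updates it from noisy traffic observations; action selection would then maximize expected value under that belief.

    # Minimal sketch: the belief update at the heart of POMDP-based sequential
    # decision-making. The defender never observes the attacker's strategy
    # directly; it maintains a belief over hidden attack states and updates it
    # from noisy network observations.
    import numpy as np

    T = np.array([[0.8, 0.2],    # transition: P(s' | s) for states {benign, ddos}
                  [0.1, 0.9]])
    O = np.array([[0.7, 0.3],    # observation: P(o | s') for observations {low_traffic, high_traffic}
                  [0.2, 0.8]])

    def belief_update(belief, obs_index):
        predicted = belief @ T                       # propagate through the dynamics
        updated = predicted * O[:, obs_index]        # weight by observation likelihood
        return updated / updated.sum()               # normalize

    belief = np.array([0.9, 0.1])                    # initially: probably benign
    for obs in (1, 1, 1):                            # repeated high-traffic observations
        belief = belief_update(belief, obs)
    print(belief)                                    # mass shifts toward the ddos state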

Vulnerabilities in the Social Attack Surface—David Tamez, University of Kansas

Dr. Tamez identified the four research goals for the project: 1) Develop a new theoretical framework for understanding interventions via social media; 2) Develop Machine Learning (ML) models for the study of cyber-based attacks on norms; 3) Develop tactics for the defense of critical social institutions; and 4) Develop historical insight into the development of information warfare in the Soviet Union and Russia. Some of the project activities include exploring the topic of fairness in ML, forecasting the future of social norms, dealing with deepfakes, and addressing the flow of information.  He concluded by stating that we must make sense of social epistemic environments. 

VI 3:  AI and Cybersecurity

The research projects of the AI and Cybersecurity VI are at the intersection of cybersecurity and AI. These projects fall into the broad areas of AI for Cybersecurity, Cybersecurity for AI, and Countering AI. Research in AI for Cybersecurity advances the secure application of AI and ML to cybersecurity challenges. In Cybersecurity for AI, research develops methods to protect critical AI algorithms and systems from accidental and intentional degradation and failure. Countering AI concerns the special cyber defenses needed to protect against cyberattacks that are aided by the use of AI.

Improving Malware Classifiers with Plausible Novel Samples—Kevin Leach, Vanderbilt University

The project update provided by Dr. Leach dealt with semantics-aware augmentation via MalMixer. Preliminary research results show that MalMixer can help improve classification performance in low-resource settings and help maintain performance as new malware families emerge. He also spoke about robust malware backdoor purification via PBJ and described a preliminary experimental setup and results. He stated that PBJ provides state-of-the-art resilience against backdoor attacks for malware classifiers and purifies malware classifiers without regard to the specific backdoor attack used. He noted that semantics-aware malware augmentation can improve low-resource malware classifiers and provide hard samples for verification. In addition, he pointed out that backdoored malware classifiers can be purified to eliminate the effects of adversary-perturbed training data.
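As a rough illustration of feature-space, family-aware augmentation (this shows the generic mixup-style idea, not MalMixer's actual semantics-aware pipeline; the data is synthetic): interpolating feature vectors only within a family yields additional plausible samples for low-resource families while keeping the family label valid.

    # Minimal sketch: family-aware, mixup-style augmentation in feature space.
    # Interpolating feature vectors only within the same malware family
    # generates extra samples for scarce families.
    import numpy as np

    rng = np.random.default_rng(0)

    def augment_family(features, n_new):
        # features: (n_samples, n_features) for one malware family
        new = []
        for _ in range(n_new):
            i, j = rng.choice(len(features), size=2, replace=False)
            lam = rng.beta(2.0, 2.0)                       # mixing coefficient
            new.append(lam * features[i] + (1 - lam) * features[j])
        return np.stack(new)

    family_x = rng.random((5, 16))              # a scarce family with only 5 samples
    print(augment_family(family_x, 3).shape)    # (3, 16) synthetic, same-family samples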

Leveraging Machine Learning for Binary Software Understanding—Adam Doupe, Arizona State University

Dr. Doupe provided an update on their research, which builds on VarBERT's two-step training process to increase accuracy, and discussed the semantic equivalence case study. Applying lessons learned from VarBERT, the researchers first evaluated existing datasets for type inference. Using TyDa, the TyGR dataset, they expanded VarBERT into a matrix of optimization levels and architectures. He noted the early promise of their approach, saying it is an improvement over other ML techniques and is also better than state-of-the-art non-ML techniques.

Improving Safety and Security of Neural Networks—Michael Mahoney, International Computer Science Institute

Dr. Mahoney said that the project tasks were to improve robustness to adversarial and common corruptions and to develop metrics to verify the safety and trustworthiness of models before deployment. He described their research into heavy-tailed self-regularization (WeightWatcher) and said that the researchers plan to correlate the weight signals with biases and vulnerabilities. He also addressed further applications of their research, including a novel training method (temperature balancing) and a compression method (load balancing).
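A minimal sketch of the kind of spectral diagnostic involved in heavy-tailed self-regularization analysis (a simplified stand-in for what a WeightWatcher-style tool computes, not the project's code): fit a power-law exponent alpha to the tail of a layer's weight eigenvalue spectrum; a smaller alpha indicates a heavier tail.

    # Minimal sketch: estimate the power-law exponent of a layer's eigenvalue
    # spectrum with a Hill estimator. Such per-layer "alpha" signals are the
    # sort of weight metrics the project aims to correlate with robustness,
    # bias, and vulnerability.
    import numpy as np

    def tail_alpha(weight_matrix, tail_fraction=0.2):
        # Eigenvalues of the correlation matrix W^T W (the empirical spectral density)
        eigs = np.linalg.eigvalsh(weight_matrix.T @ weight_matrix)
        eigs = np.sort(eigs[eigs > 1e-12])
        k = max(2, int(tail_fraction * len(eigs)))        # size of the tail to fit
        tail = eigs[-k:]
        x_min = tail[0]
        # Hill estimator for the power-law exponent of the tail
        return 1.0 + k / np.sum(np.log(tail / x_min))

    rng = np.random.default_rng(1)
    W = rng.standard_normal((256, 128)) / np.sqrt(256)    # stand-in for a trained layer
    print(round(tail_alpha(W), 2))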

Guest Speaker

AI Safety and How Will Frontier AI Change the Landscape of Cyber Security

Dawn Song, University of California at Berkeley

Dr. Song addressed how AI will change the landscape of cybersecurity, noting that traditional cybersecurity has an attacker and a defender, but cybersecurity with Frontier AI (later defined as future AI) has a Frontier AI attacker and a Frontier AI defender. She spoke about various adversarial attacks and said that progress in adversarial defense has been slow, with no effective general adversarial defenses. Dr. Song went on to discuss recent work in representation engineering, which she described as a top-down approach to AI transparency that includes designing stimulus and task, collecting neural activity, modeling, and monitoring. In addressing how Frontier AI (dual use) will impact cybersecurity, she cited multiple steps, including knowing your enemy, knowing your defense, and considering the asymmetry between defense and offense. Dr. Song noted that misused AI increases the attacker's capability, reduces the resources needed for attacks, and makes attacks more evasive and stealthy. She pointed out that AI can also enhance defenses, citing reactive defense and proactive, secure-by-construction defense. Among her predictions were that AI will help attackers more in the near term and that AI-assisted attacks will be worse than the attacks we have already experienced. She said that we need to use AI to build secure systems with provable guarantees.


Submitted by Cyber Pack Ventures, Inc.
