VI Kickoff Meeting Summary

On January 11, 2024, the Science of Security (SoS) initiative held the kickoff meeting for its newest iteration of collaborative academic research, the SoS Virtual Institutes (VIs). Rita Bush, Chief, Laboratory for Advanced Cybersecurity Research (LACR), and Shavon Donnell, SoS Program Manager, welcomed the attendees and congratulated the researchers on being selected.

Ms. Donnell provided a brief overview of the SoS program, the goal of which is to foster a self-sustaining, open, and public security science research community to discover key cyber principles necessary to support improved security and privacy. The SoS program started in 2008 and began actively engaging the academic community in 2011 with the creation of the Lablet program.  In addition to the Lablets and now the VIs, the SoS initiative sponsors the Hot Topics in the Science of Security (HotSoS) Symposium (which will next be held virtually on April 2-4, 2024) and the Annual Best Scientific Cybersecurity Paper Competition, which is in its 12th year. 

In addressing the establishment of the VIs, she noted that of the 56 proposals reviewed, only 8 projects were selected. The projects shown below are organized into 3 VIs; each project has Principal Investigators (PIs) and an NSA research liaison.

VI for Trusted Systems

The research projects of the Trusted Systems Virtual Institute further the foundations and applications of trust and trustworthiness of devices and systems. The challenge of trust is examined at each stage of the development life cycle: design, development, use, and retirement. Integral to advancing trust are research projects that advance the understanding of, and accounting for, the effects of human behavior on trust.

  • Advancing Security and Privacy of Bluetooth IoT
    • Ohio State University
    • Zhiqiang Lin
  • Predictable and Scalable Remote Attestation
    • University of Kansas
    • Perry Alexander
  • Quantitative Threat Modeling and Risk Assessment in the Socio-Technical Critical Infrastructure Systems
    • Towson University
    • Natalie Scala, Josh Dehlinger

VI for AI and Cybersecurity

The research projects of the AI and Cybersecurity Virtual Institute sit at the intersection of cybersecurity and Artificial Intelligence (AI). These projects fall into the broad areas of AI for Cybersecurity, Cybersecurity for AI, and Countering AI. Research in AI for Cybersecurity advances the secure application of AI and Machine Learning to cybersecurity challenges. In Cybersecurity for AI, research develops methods to protect critical AI algorithms and systems from accidental and intentional degradation and failure. Countering AI concerns the special cyber defenses needed to protect against cyberattacks that are aided by the use of AI.

  • Improving Malware Classifiers with Plausible Novel Samples
    • Vanderbilt University
    • Kevin Leach, Taylor Johnson
  • Leveraging Machine Learning for Binary Software Understanding
    • Arizona State University
    • Yan Shoshitaishvili, Adam Doupe
  • Improving Safety and Security of Neural Networks
    • International Computer Science Institute
    • Michael Mahoney, Serge Egelman, N. Benjamin Erichson 

VI for Defensive Mechanisms

The research projects of the Defensive Mechanisms Virtual Institute advance resiliency by investigating the foundations needed to detect, respond to, and mitigate cyberattacks. This requires theory, models, and tools at each stage of the cyberattack timeline. In addition, the field includes the research needed to balance performance and security in responding to threats.

  • Neurosymbolic Autonomous Agents for Cyber-Defense
    • Vanderbilt University
    • Xenofon Koutsoukos, Gabor Karsai, Sandeep Neema
  • Vulnerabilities in the Social Attack Surface
    • University of Kansas
    • John Symons

The VI members are expected to cooperate with members of their own VI as well as with the other VIs to help increase and accelerate the overall scientific return of the SoS program. The research generated by the VIs will be disseminated via professional publications, workshops, conferences, and other methods. The PIs will seek to identify how research in these areas can benefit national security.

The remainder of the kickoff consisted of project presentations from all principal investigators.

Trusted Systems

Predictable and Scalable Remote Attestation—Perry Alexander, University of Kansas

Dr. Alexander provided some background on the project, which included developing the Copland language and semantics for remote attestation protocols and constructing MAESTRO, a formally verified environment for executing those protocols. What is lacking, he said, is knowing whether researchers are gathering the right evidence, what their base attestation architecture set is, and how attestation behaves over time. Predictable and scalable remote attestation requires evidence and time, flexible mechanisms at scale, and experimental case studies, and he elaborated on each of those elements. He further addressed outreach efforts associated with the project, including a Science of Security Advisory Board, the Copland Consortium, and a new company, Invary, that is commercializing LKIM measurement technologies that will use MAESTRO infrastructure.
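To make the attestation workflow concrete, the sketch below shows a toy attester/appraiser exchange in Python: the attester measures a component, binds the resulting evidence to a fresh nonce with a keyed MAC, and the appraiser checks the evidence against a known-good value. All names, keys, and the shared-key design here are illustrative assumptions; Copland protocols and MAESTRO are far more general.

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)          # toy shared attestation key (assumption)

def measure(component: bytes) -> bytes:
    """Measurement: hash the component's contents."""
    return hashlib.sha256(component).digest()

def attest(component: bytes, nonce: bytes):
    """Attester: produce evidence bound to the appraiser's fresh nonce."""
    evidence = measure(component)
    mac = hmac.new(KEY, nonce + evidence, hashlib.sha256).digest()
    return evidence, mac

def appraise(evidence: bytes, mac: bytes, nonce: bytes, golden: bytes) -> bool:
    """Appraiser: check evidence integrity, freshness, and the golden value."""
    expected = hmac.new(KEY, nonce + evidence, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected) and evidence == golden

component = b"kernel text segment"        # hypothetical measured target
golden = measure(component)               # known-good measurement
nonce = os.urandom(16)                    # appraiser's freshness challenge
evidence, mac = attest(component, nonce)
print(appraise(evidence, mac, nonce, golden))   # True
```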

Advancing Security and Privacy of Bluetooth IoT—Zhiqiang Lin, Ohio State University

Dr. Lin described Bluetooth Low Energy (BLE) as having low power consumption and long communication distances, and noted that researchers have identified many vulnerabilities associated with this type of communication. The goal of this project is to systematically uncover low energy communication attacks via formal methods. He continued by describing how protocol verification works and discussed some of his prior research in this area. The four tasks associated with this project are: 1) developing a formal method for the full spectrum of the protocols; 2) developing a formal model for all pairing methods; 3) modeling linkability of BLE devices for privacy; and 4) integrating formal verification into the supply chain. The deliverables are formal models of the Bluetooth protocol, analysis of the discovered vulnerabilities, an open-source implementation, and publication of research papers.
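As a rough illustration of uncovering protocol attacks via formal methods (not Dr. Lin's actual models or tools), the sketch below exhaustively searches the state space of a drastically simplified pairing model and flags states where pairing completes while a man-in-the-middle is present. All states, transitions, and the property itself are invented for illustration.

```python
from collections import deque

# State: (pairing_method, mitm_present, pairing_completed)
INITIAL = [(m, mitm, False)
           for m in ("just_works", "passkey")
           for mitm in (False, True)]

def transitions(state):
    method, mitm, done = state
    if done:
        return
    if method == "just_works":
        # "Just Works" performs no user confirmation, so in this toy
        # model pairing completes even with an active man-in-the-middle.
        yield (method, mitm, True)
    elif method == "passkey" and not mitm:
        # Passkey entry (here) only completes when no MITM is present.
        yield (method, mitm, True)

def violates(state):
    method, mitm, done = state
    return done and mitm       # pairing completed with an attacker present

seen, frontier = set(INITIAL), deque(INITIAL)
while frontier:                # breadth-first exploration of all states
    s = frontier.popleft()
    if violates(s):
        print("property violated:", s)
    for t in transitions(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
```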

Quantitative Threat Modeling and Risk Assessment in the Socio-Technical Critical Infrastructure Systems—Natalie Scala, Towson University

Dr. Scala addressed the critical infrastructure sectors defined by the Department of Homeland Security (DHS) and said that all are key targets for cyberattacks. She noted that, in order to safeguard the sectors, researchers need to identify vulnerabilities and develop strategies to prevent and respond to attacks. The process for doing so is to take a threat and mitigation analysis approach, create a framework to model a relative-likelihood risk assessment, and then develop, model, and analyze policy implications and security mitigations. This project focuses on the government facilities sector, specifically election infrastructure, and will leverage the Empowering Secure Elections research lab at Towson University. She presented the problem statement as modeling the relative risk of adversaries and trusted insiders exploiting threat scenarios in developed attack trees, and she described the outcomes and objectives for each of the three years of the project, with the final outcome being an impact analysis. When asked whether this project can be applied to other areas, she said that the goal is to contribute to the framework methodology.
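As a minimal illustration of how relative likelihoods can propagate through an attack tree (not the Towson framework itself), the sketch below combines elicited leaf likelihoods through AND/OR gates under an independence assumption. The scenario and the numbers are entirely hypothetical.

```python
import math

def likelihood(node):
    """Relative likelihood of the (sub)attack rooted at this node."""
    if "gate" not in node:                    # leaf: elicited likelihood
        return node["p"]
    ps = [likelihood(c) for c in node["children"]]
    if node["gate"] == "AND":                 # every sub-step must succeed
        return math.prod(ps)
    return 1 - math.prod(1 - p for p in ps)   # OR: any sub-step suffices

# Hypothetical election-infrastructure scenario with made-up numbers.
tree = {"gate": "OR", "children": [
    {"gate": "AND", "children": [
        {"p": 0.30},    # insider obtains credentials
        {"p": 0.10},    # insider alters configuration undetected
    ]},
    {"p": 0.05},        # external network intrusion
]}
print("relative likelihood at root:", likelihood(tree))
```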

AI and Cybersecurity

Improving Malware Classifiers with Plausible Novel Samples—Kevin Leach, Vanderbilt University

In addressing the pervasiveness of malware, Dr. Leach noted that automated malware analysis depends on effective triage and classification, but that neural malware classifiers lack verifiability and robustness against stealthy and obfuscated samples. While neural networks are a popular means of classification, they lack the explainability, robustness, and verifiability needed for malware analysis. He addressed assuring malware classification with augmentation, noting that augmentation via perturbation is widely used to improve machine learning with sparse data. Semantics-aware augmentation and verification leverage the distinction between interpolatable features (length, entropy, number of sections) and non-interpolatable features (hash values, strings). He provided additional examples of how semantics-aware malware augmentation can improve low-resource malware classifiers and provide hard samples for verification, and he concluded by noting that neural network verification can be used to measure robustness against perturbation of malware samples. In response to a question, he said that this project seeks to focus more on existing detection techniques as a starting point.
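A minimal sketch of the interpolation idea, assuming a simple feature-dictionary representation (our illustration, not the Vanderbilt implementation): numeric, interpolatable features of two samples are blended, while non-interpolatable features such as hashes and strings are inherited unchanged from one parent. The feature names and sample values are hypothetical.

```python
import random

INTERPOLATABLE = {"length", "entropy", "num_sections"}

def augment(a, b, lam=None):
    """Blend numeric features of a and b; inherit everything else from a."""
    lam = random.random() if lam is None else lam
    new = dict(a)                    # hashes, strings, etc. copied verbatim
    for k in INTERPOLATABLE:
        new[k] = lam * a[k] + (1 - lam) * b[k]
    return new

# Hypothetical feature dictionaries for two malware samples.
s1 = {"length": 4096, "entropy": 7.2, "num_sections": 5,
      "sha256": "aa...", "strings": ["connect", "regsvr32"]}
s2 = {"length": 8192, "entropy": 6.1, "num_sections": 8,
      "sha256": "bb...", "strings": ["mutex", "keylog"]}
print(augment(s1, s2, lam=0.5))
```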

Leveraging Machine Learning for Binary Software Understanding—Yan Shoshitaishvili, Arizona State University

Dr. Shoshitaishvili began by noting that intuition suggests that different types of lost information necessitate different information-reconstruction approaches. He said the first task is achieving semantically equivalent decompilation, and he addressed opportunities for ML-augmented decompilation. He described prior work using ML to predict variable names and then discussed the team's current approach, VarBERT, which uses a two-step training process. He spoke about type inference in ML and said that VarBERT works because other tokens remain identical during variable renaming.
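The observation about identical tokens can be illustrated with a small experiment (our illustration, not VarBERT itself): renaming one variable in a snippet of decompiler-style code changes only the tokens for that identifier, leaving the rest of the sequence intact for a model to condition on. The snippet and names below are invented.

```python
import io
import tokenize

code = "v1 = recv(sock)\nif v1:\n    parse(v1)\n"   # decompiler-style snippet

def tokens(src):
    """All non-whitespace token strings of a Python-like snippet."""
    return [t.string
            for t in tokenize.generate_tokens(io.StringIO(src).readline)
            if t.string.strip()]

orig = tokens(code)
renamed = tokens(code.replace("v1", "packet"))      # predicted name swapped in
diff = [(a, b) for a, b in zip(orig, renamed) if a != b]
print(diff)   # only the renamed identifier differs: [('v1', 'packet'), ...]
```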

Improving Safety and Security of Neural Networks—N. Benjamin Erichson, International Computer Science Institute

Dr. Erichson opened his presentation by stating that neural networks are brittle and sensitive to attacks, and he gave some examples of computer vision (CV) model vulnerabilities. The project researchers assume that the adversary can have access to the data and can affect inference time; the project therefore seeks to develop strong data augmentation methods that make models more robust and wash out some of the poisoned data points. The project researchers also want to create metrics to identify good and bad models. They are using MixUp to create virtual data points by forming linear combinations of two data points, and they then further improve robustness by mixing perturbed data points. A key challenge in trying to achieve stronger data perturbations is designing the transformation operator that is applied to a given output. Dr. Erichson then discussed global and local metrics as well as the risk that AIs can be used for attacks. He noted that counter-AI strategies are needed to reduce the advantages of AI to an adversary, and that this project aims to advance the field of AI safety by exploring novel methods for training robust models free from security violations.
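MixUp itself is straightforward to sketch. The snippet below (a minimal NumPy illustration with made-up inputs and an assumed mixing hyperparameter) forms a convex combination of two examples and of their one-hot labels using a Beta-distributed coefficient.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    """Return a virtual training point as a convex combination of two."""
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Made-up inputs and one-hot labels for two classes.
x1, y1 = np.array([0.9, 0.1]), np.array([1.0, 0.0])
x2, y2 = np.array([0.2, 0.8]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x1, y1, x2, y2)
print(x_mix, y_mix)
```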

Defensive Mechanisms

Neurosymbolic Autonomous Agents for Cyber-Defense—Xenofon Koutsoukos, Vanderbilt University

Autonomous agents for cyber applications need to learn, reason, and adapt in order to deploy security mechanisms for defending networked computer systems while maintaining critical operational workflows. The research challenge is that cyber agents need to complete multiple interdependent tasks over variable-length time intervals. With the preceding as background, Dr. Koutsoukos said that this project is trying to create neurosymbolic models using the Cyber Operations Research Gym (CybORG). He spoke about Evolving Behavior Trees (EBTs): how they have been used, their design flow, and assurance methods, as well as preliminary results and extending EBT agents using ChatGPT.
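For readers unfamiliar with behavior trees, the sketch below is a minimal hand-written Python example of the control structure that EBTs evolve (the project's agents, tasks, and CybORG integration are far richer): a fallback node first tries a detect-then-isolate sequence and otherwise keeps monitoring. All behaviors and state keys are hypothetical.

```python
SUCCESS, FAILURE = True, False

class Sequence:
    """Tick children in order; fail as soon as one fails."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        return all(c.tick(state) for c in self.children)

class Fallback:
    """Tick children in order; succeed as soon as one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, state):
        return any(c.tick(state) for c in self.children)

class Action:
    """Leaf node wrapping a condition or response function."""
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

def alert_raised(state):          # condition: was an intrusion flagged?
    return state.get("alert", False)

def isolate_host(state):          # response: cut the host off the network
    state["isolated"] = True
    return SUCCESS

def keep_monitoring(state):       # default behavior when nothing is flagged
    return SUCCESS

agent = Fallback(Sequence(Action(alert_raised), Action(isolate_host)),
                 Action(keep_monitoring))

state = {"alert": True}
agent.tick(state)
print(state)                      # {'alert': True, 'isolated': True}
```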

Vulnerabilities in the Social Attack Surface—John Symons, University of Kansas

Dr. Symons began by stating that the defense of the United States requires a scientific understanding of interventions against social infrastructure, especially the cyber-social interface, and he identified social infrastructure as social institutions, norms, and choice architectures. Defensive strategies include tools for tracking campaigns against social norms and for forecasting norm change; attacks on norms can be studied in order to detect attempts to undermine them. Lines of research in this project include how social norms change, whether changing social norms can be tracked and predicted, and whether adversarial efforts to intervene in social norms can be identified. In order to shift the paradigm of national security, Dr. Symons maintained that we need to focus on social norms to understand and defend the social attack surface. He recommended that we rethink the social attack surface in social terms rather than in terms of individual psychology, increase research into social vulnerabilities, and consider the role of choice architectures in the cyber-social interface.

Since the kickoff meeting, an additional award was made to Carnegie Mellon University (CMU), bringing the total number of projects to 11. The CMU projects are part of the following VIs:

VI for Trusted Systems

  • Continuous Reasoning with Gradual Verification
    • Carnegie Mellon University
    • Jonathan Aldrich

VI for Defensive Mechanisms

  • Resilient Systems through Adaptive Architecture
    • Carnegie Mellon University
    • David Garlan
  • Towards Trustworthy Autonomous Cyber Defense for Dynamic Intrusion Response
    • Carnegie Mellon University
    • Ehab Al-Shaer


Submitted by Cyber Pack Ventures, Inc.
