Summer 2021 SoS Quarterly Lablet Meeting

The Summer 2021 Science of Security and Privacy (SoS) Quarterly Lablet meeting was hosted virtually by Carnegie Mellon University (CMU) on July 13-14, 2021. The virtual attendees from the government and the six SoS Lablets were welcomed by Jonathan Aldrich, the Principal Investigator (PI) at the CMU Lablet, and Heather Lucas, the SoS team lead in NSA’s Laboratory for Advanced Cybersecurity Research. The theme of the Summer Quarterly was Artificial Intelligence (AI) and Machine Learning (ML).

SoS Hard Problems Panel

The SoS initiative is working to set the direction for the future of the Science of Security. The Hard Problems, developed nearly a decade ago, are integrated into all Lablet research, but they need to be reviewed in order to direct research going forward. Because AI and ML were not part of the original Hard Problems and have developed significantly in the past ten years, the SoS community needs to consider how AI and ML can be applied to cybersecurity challenges. The Summer Quarterly kicked off with a panel discussion on the Hard Problems, held in part to elicit input from the SoS community. Moderator Adam Tagert and panel members Ahmad Ridley, Jonathan Aldrich, and Carl Landwehr encouraged attendees to share their research ideas and suggest areas for further exploration.

In order to frame the subject and prompt discussion, Dr. Ahmad Ridley opened the session with a presentation on the National Security Commission on Artificial Intelligence (NSCAI) Final Report (available here).

Some of the questions raised during the follow-on discussions included:

  • What are the national questions for AI and cybersecurity?
  • How do we ensure the security of ML and AI, and how do we develop trust in such systems?
  • How do we address the ethical issues surrounding ML and AI?
  • How do individuals protect themselves from AI?
  • If ML and AI are the next revolution, how do we use history to predict possible changes?

Technical Presentations

Events and Stories: NLP toward Secure Software Engineering
Hui Guo, NCSU

This presentation described research on natural language processing of events and stories in natural language text. The researchers examined breach reports, targeting text related to software development, and posed three research questions: How can we effectively extract targeted events from text? How can we effectively extract targeted event pairs from text? And how can we effectively extract targeted stories from text? Professor Guo presented the research methodology, representative examples, and findings, and then proposed future work, including more reliable extraction from low-quality text, pre-defined event types, a deeper understanding of event relations, and how story understanding can help.
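
As a rough illustration of the kind of event extraction involved (not the NCSU researchers' actual pipeline), a minimal sketch using spaCy's dependency parse to pull verb-centered events from a sentence might look like the following; the example sentence and model name are placeholders.

```python
# Minimal sketch: extract verb-centered "events" (subject, verb, object)
# from text using spaCy's dependency parse. This illustrates the general
# task only; it is not the researchers' actual method.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model (assumed installed)

def extract_events(text):
    """Return (subjects, verb, objects) triples found in the text."""
    events = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c.text for c in token.children
                            if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c.text for c in token.children
                           if c.dep_ in ("dobj", "obj", "attr")]
                if subjects or objects:
                    events.append((subjects, token.lemma_, objects))
    return events

# Hypothetical breach-report sentence for demonstration.
print(extract_events("The attacker exploited a misconfigured server "
                     "and exfiltrated customer records."))
```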

Trustworthy, Robust, and Explainable ML: Joined at the Hip
Matt Fredrikson, CMU

Under this Lablet proposal, researchers aim to analyze and quantify the extent to which attacks can be mounted using implementation-level explanations. Professor Fredrikson introduced attribution methods and showed how they can be used to explain and quantify attacks. He also discussed security pitfalls of explanations and introduced robustness, a practical remedy to these pitfalls. He summarized by noting that faithful attribution methods are useful for gaining insight into attacks and measuring vulnerability. In certain cases, attackers can manipulate attributions with feature perturbations. Lipschitz-continuous models mitigate this threat and are often more explainable to begin with. Globally robust nets are an efficient, effective way to achieve strong Lipschitz continuity.
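
To make the idea of an attribution method concrete, here is a minimal, self-contained sketch of input-times-gradient attribution on a toy logistic model. It illustrates the general idea only, not the specific technique from the CMU work; all weights and inputs are invented values.

```python
# Minimal sketch of a gradient-based attribution method (input * gradient)
# on a toy logistic model. Illustrates the general idea only; it is not
# the attribution technique from the CMU work.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up model parameters and input.
w = np.array([1.5, -2.0, 0.3])
b = 0.1
x = np.array([0.8, 0.5, -1.2])

# Model output and its analytic gradient with respect to the input:
# d sigmoid(w.x + b) / dx = f * (1 - f) * w
f = sigmoid(w @ x + b)
grad = f * (1 - f) * w

# Input-times-gradient attribution: how much each feature contributed.
attribution = x * grad
print("output:", f)
print("attribution per feature:", attribution)
```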

Beyond Lp balls: Attacks on real-world uses of machine learning
Michael Reiter, Duke

This presentation dealt with attacks on practical uses of ML in a setting where attacks aren’t constrained by nearness, noting that traditional attack approaches and traditional defenses may not apply. The attack scenario he described was fooling face recognition, and he provided several approaches and methodologies. His attack scenarios were different from previous ones in that the adversarial examples were, among other differences, far in Lp space from source inputs. The takeaways from his presentation were that real-world applications of ML are vulnerable; small Lp distance attacks and defenses may not apply; and defenses have a long way to go. In response to a question, he pointed out that most of the attacks he described were white-box attacks--if the classifier doesn’t tell us anything other than that it was malware, for example, it is a significant challenge. This presentation relates to the CMU project Securing Safety-Critical Machine Learning Algorithms.
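
For contrast with the unconstrained attacks described in the talk, the following is a minimal sketch of a traditional small-Lp attack, the fast gradient sign method (an L-infinity attack), on a toy logistic classifier; parameters and inputs are invented for illustration.

```python
# Minimal sketch of a traditional Lp-bounded attack (fast gradient sign
# method, an L-infinity attack) on a toy logistic classifier. This is the
# style of "nearness-constrained" attack the talk argues real-world
# adversaries are not limited to.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.3])   # made-up weights
b = 0.1
x = np.array([0.8, 0.5, -1.2])   # made-up benign input, true label y = 1
y = 1.0
eps = 0.1                        # L-infinity budget: |x' - x|_inf <= eps

# Gradient of the logistic loss with respect to x is (f - y) * w.
f = sigmoid(w @ x + b)
grad_loss = (f - y) * w

# FGSM step: move each coordinate eps in the loss-increasing direction.
x_adv = x + eps * np.sign(grad_loss)

print("clean score:", f)
print("adversarial score:", sigmoid(w @ x_adv + b))
```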

Flexible Mechanisms for Remote Attestation
Perry Alexander, KU

The goals of this research deal with formal semantics of trust (a definition of trust sufficient for evaluating systems); a verified remote attestation infrastructure (verified components for assembling trusted systems); enterprise attestation and appraisal; and the sufficiency and soundness of measurement. Professor Alexander provided a remote attestation example that included three attestation managers and seL4 implementation infrastructure. This presentation builds upon the KU project entitled Scalable Trust Semantics and Infrastructure.
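
As a rough, simplified illustration of the remote attestation pattern (not KU's verified infrastructure), the sketch below shows a nonce-based exchange in which an attester reports a measurement of its software and a verifier appraises it; the shared HMAC key stands in for a hardware-rooted signing key.

```python
# Simplified sketch of nonce-based remote attestation. A verifier sends a
# fresh nonce; the attester measures (hashes) its software and returns the
# measurement bound to the nonce under a key. An HMAC key stands in here
# for a hardware-protected signing key; this is not KU's verified design.
import hashlib, hmac, os

SHARED_KEY = os.urandom(32)             # stand-in for an attestation key
GOLDEN_SOFTWARE = b"trusted binary v1"  # verifier's known-good image

def measure(software: bytes) -> bytes:
    """Attester-side measurement: hash of the running software."""
    return hashlib.sha256(software).digest()

def attest(software: bytes, nonce: bytes):
    """Return (measurement, evidence) bound to the verifier's nonce."""
    m = measure(software)
    evidence = hmac.new(SHARED_KEY, nonce + m, hashlib.sha256).digest()
    return m, evidence

def appraise(nonce: bytes, m: bytes, evidence: bytes) -> bool:
    """Verifier-side appraisal: check binding, compare to golden hash."""
    expected = hmac.new(SHARED_KEY, nonce + m, hashlib.sha256).digest()
    return hmac.compare_digest(expected, evidence) and m == measure(GOLDEN_SOFTWARE)

nonce = os.urandom(16)  # fresh challenge prevents replay of old evidence
print(appraise(nonce, *attest(b"trusted binary v1", nonce)))  # True
print(appraise(nonce, *attest(b"tampered binary", nonce)))    # False
```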

A First Look at Soft Attestation in Android Apps
Abbas Razaghpanah, ICSI

Dr. Razaghpanah began by defining remote attestation as a way for a remote party to ensure that the client software and the platform it runs on are trustworthy. He noted that this research provides evidence that apps are abusing remote attestation to hide bad data collection practices and to evade app store policy enforcement. The research questions include why mobile apps use soft attestation; how it works on mobile platforms and what heuristics are used; what apps are protecting when using attestation; and how differently apps would behave when not “watched.” He proposed answers to these questions during the presentation and noted that the researchers used these insights to improve their testing apparatus. He said that proper attestation is very hard on the Android platform, and unless Google can compel all of its OEMs to put a TPM on the device, strong universal attestation won’t happen on Android. This research is under the ICSI project Scalable Privacy Analysis.

Analytics of Cybersecurity Policy: Value for Artificial Intelligence?
Nazli Choucri, MIT

To provide context for her presentation, Professor Choucri briefly described the ongoing VU project, Policy Analytics for Cybersecurity of Cyber-Physical Systems. She noted that cybersecurity policies and guidelines exist in text form, and the project aims to introduce analytics for cybersecurity policy of cyber-physical systems in order to produce tools for policy analysis; the project uses the NIST Cybersecurity Framework applied to a smart grid as a testbed. She discussed the global AI ecology and pointed out that it is extremely difficult for any system operator to apply the cybersecurity policies given to them, because policies expressed only in words create impediments to implementation. AI applies in that everything the researchers have done by hand must be automated.

Emulating Cybersecurity Simulation Models Using Metamodels for Faster Sensitivity Analysis and Uncertainty Quantification
Michael Rausch, UIUC

Dr. Rausch noted that because cybersecurity models are often large, have long execution times, and involve many uncertain input variables, this research uses metamodels. Traditional approaches to handling uncertain input variables include Sensitivity Analysis (SA) and Uncertainty Quantification (UQ). A metamodel is a model designed to emulate the behavior of another model, referred to as the base model: given the same input, the metamodel should produce the same or similar output as the base model, trading some accuracy for much faster runtimes. Once the metamodel is constructed, SA and UQ can be applied to it. He discussed how to construct metamodels and collect training and testing data, and then provided test cases to evaluate the metamodels and the results of the modeling. He emphasized that metamodeling is a powerful approach for analyzing cybersecurity models that run slowly and have many uncertain variables, and concluded that metamodels approximate the base model's behavior relatively accurately while being much faster. Use of metamodels was explored in the UIUC project Monitoring, Fusion, and Response for Cyber Resilience.
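
As a generic illustration of the metamodeling workflow (not UIUC's specific models), the sketch below trains a random-forest surrogate on a small sample from a slow base model, then runs Monte Carlo uncertainty quantification and reads a crude sensitivity ranking off the surrogate; the base model here is an invented stand-in.

```python
# Generic sketch of the metamodel workflow: sample a slow "base model" a
# few hundred times, fit a fast surrogate, then do Monte Carlo UQ and a
# crude sensitivity ranking on the surrogate. The base model below is an
# invented stand-in, not a real cybersecurity simulation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def base_model(x):
    """Stand-in for a slow simulation with three uncertain inputs."""
    return x[0] ** 2 + 0.5 * np.sin(3 * x[1]) + 0.1 * x[2]

# Step 1: collect a modest training set from the expensive base model.
X_train = rng.uniform(0, 1, size=(300, 3))
y_train = np.array([base_model(x) for x in X_train])

# Step 2: fit the metamodel (surrogate).
meta = RandomForestRegressor(n_estimators=200, random_state=0)
meta.fit(X_train, y_train)

# Step 3: Monte Carlo UQ on the cheap surrogate, using far more samples
# than the slow base model could afford.
X_mc = rng.uniform(0, 1, size=(100_000, 3))
y_mc = meta.predict(X_mc)
print(f"output mean={y_mc.mean():.3f}, std={y_mc.std():.3f}")

# Step 4: crude sensitivity ranking from the surrogate's importances.
print("input importances:", meta.feature_importances_)
```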

The complete agenda and selected presentations can be found here.
 
