2022 S&CC PI Meeting

NSF's Smart & Connected Communities effort aims to advance understanding of our cities and communities, and to improve how they function and the quality of life within them, through innovations in computing, engineering, the information and physical sciences, and the social and learning sciences.

Resilient Control of Cyber-Physical Systems with Distributed Learning
Lead PI:
Sayan Mitra
Abstract

Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkottai

Researchers: Pulkit Katdare and Negin Musavi

Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize the capabilities of the systems, but are also vulnerable to a new and imminent class of attacks. This project will develop foundations and methodologies to make such systems resilient. Our focus is on control systems that utilize large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize the system’s performance. Consider the examples of congestion-aware traffic routing and autonomous vehicles; to design controllers for such systems, large amounts of user data are being collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models are used to guide the overall closed-loop control system.
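
As a simplified illustration of this closed-loop structure, the sketch below (in Python, with invented names and dynamics that are not the project's actual models) shows a controller that queries a learned predictor of human driving behavior at each step of its decision loop.

    # Minimal sketch (assumed names and dynamics; illustrative only): a controller
    # queries a learned predictor of human-driver behavior inside its closed loop.
    import numpy as np

    class LearnedDriverModel:
        """Stand-in for a predictive AI model trained on crowd-sourced data."""
        def __init__(self, weights):
            self.weights = weights          # learned parameters

        def predict_accel(self, gap, rel_speed):
            # Linear predictor of the lead driver's acceleration (illustrative only).
            return self.weights @ np.array([gap, rel_speed, 1.0])

    def control_step(ego_speed, gap, lead_speed, model, dt=0.1, target_gap=20.0):
        """One closed-loop step: predict the human driver's motion, then choose
        an ego acceleration that keeps the following gap near target_gap."""
        pred_lead_accel = model.predict_accel(gap, lead_speed - ego_speed)
        pred_lead_speed = lead_speed + pred_lead_accel * dt
        pred_gap = gap + (pred_lead_speed - ego_speed) * dt
        # Simple proportional controller acting on the predicted gap error.
        ego_accel = 0.5 * (pred_gap - target_gap)
        return float(np.clip(ego_accel, -3.0, 2.0))

    model = LearnedDriverModel(np.array([0.01, 0.3, -0.1]))
    print(control_step(ego_speed=28.0, gap=15.0, lead_speed=30.0, model=model))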

Although our current understanding of AI models is very limited, they are already known to have serious vulnerabilities. For example, so-called “adversarial examples” can be generated algorithmically to defeat neural network models while remaining indistinguishable to human senses [73]. Such attacks can cause an autonomous vehicle to crash, facial recognition to fail, and illegal content to bypass filters, and they may be impossible to detect. A second type of vulnerability arises when the adversary provides malicious training samples that may spoil the fidelity of the learned model. A third is the potential violation of the privacy of individuals (e.g., drivers) who provide the training data. More generally, the space of vulnerabilities and their impact on the overall control system are not well understood. This project will address this new and challenging landscape and develop the mathematical foundations for reasoning about such systems and attacks. These foundations will then be the basis for automatically synthesizing the monitoring and control algorithms needed for resilience. The project aligns with the SoS community’s goal of creating resilient cyber-physical systems, and the approaches developed here will contribute to a new compositional reasoning framework for CPS that combines traditional controls with AI models.
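
As a concrete, minimal illustration of the first vulnerability, the following sketch generates an adversarial perturbation with the fast gradient sign method; the model, input, and perturbation budget are placeholders rather than the systems studied in this project.

    # Minimal FGSM sketch (placeholder model and input; illustrative only).
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 8, requires_grad=True)   # clean input
    y = torch.tensor([1])                       # its true label
    eps = 0.05                                  # perturbation budget

    loss = loss_fn(model(x), y)
    loss.backward()
    # Perturb the input in the direction that maximally increases the loss;
    # the change is bounded by eps per coordinate and so stays small.
    x_adv = (x + eps * x.grad.sign()).detach()
    print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))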

Our approach will take a broad view in developing a mathematical framework while simultaneously creating algorithms and tools that will be tested on benchmarks and real data. The theoretical aspects of the project will draw on the team’s expertise in learning theory, formal methods, and robust control. The resulting resilient monitoring, detection, and control synthesis approaches will be tested on data, scenarios, and models from the CommonRoad project, Udacity, and OpenPilot.

 

Sayan Mitra

Sayan Mitra is a Professor, Associate Head of Graduate Affairs, and John Bardeen Faculty Scholar of ECE at UIUC. His research is on safe autonomy. His research group develops theory, algorithms, and tools for control synthesis and verification; some of these have been patented and are being commercialized. Several former PhD students are now professors: Taylor Johnson (Vanderbilt), Parasara Sridhar Duggirala (UNC Chapel Hill), and Chuchu Fan (MIT). Sayan received his PhD from MIT with Nancy Lynch. His textbook on verification of cyber-physical systems was published by MIT Press in 2021. The group's work has been recognized with an NSF CAREER Award, an AFOSR Young Investigator Research Program Award, an ACM SRC gold prize, the IEEE-HKN C. Holmes MacDonald Outstanding Teaching Award (2013), a Siebel Fellowship, and several best paper awards.

Performance Period: 01/01/2018 - 01/01/2018
Institution: University of Illinois at Urbana-Champaign; The University of Texas at Austin
Sponsor: National Security Agency
A Human-Agent-Focused Approach to Security Modeling
Lead PI:
William Sanders
Abstract

Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. This is achieved through development of a general-purpose modeling and simulation approach for the cybersecurity aspects of cyber-systems and of all human agents that interact with those systems. These agents include adversaries, defenders, and users. The ultimate goal is to generate quantitative metric results that will help system architects make better design decisions to achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study the behavior of a large class of systems under cyber attack. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights regarding fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that are able to achieve their mission objectives despite attacks.

Examples of simulation results include time to compromise of information, time to loss of service, percent of time the adversary has system access, and identification of the most common attack paths.
A model that incorporates human agents may yield insights into questions such as:

  • How do technical improvements in prevention and detection countermeasures weigh against improvements to the defender's attack attribution capabilities, as perceived by the adversary? Technical improvements change system behavior, while attribution capabilities can change the behavior of a risk-averse adversary.
  • How do autonomous and human-initiated defenses compare in effectiveness, and what factors impact this comparison?

Assumptions made during the system design process will be made explicit and auditable in the model, which will help bring a more scientific approach to a field that currently often relies on intuition and experience. The primary output of this research will be a well-developed security modeling formalism capable of realistically modeling different human agents in a system, implemented in a software tool, and a validation of both the formalism and the tool with two or more real-life case studies. We plan to make the implementation of the formalism and associated analysis tools freely available to academics to encourage adoption of the scientific methodology our formalism will provide for security modeling. Many academics and practitioners have recognized the need for models for computer security, as evidenced by the numerous publications on the topic. Such modeling approaches are a step in the right direction, but have their own sets of limitations, especially in the way they model the humans that interact with the cyber portion of the system. Some modeling approaches explicitly model only the adversary (e.g., attack trees), or model only one attacker/defender pair (e.g., attack-defense trees [50]). However, there exist some approaches for modeling multiple adversaries, defenders, and users in a system, e.g., [9] [93]. The existing methods are not in common use, for a number of reasons. Often, the models lack realism because of oversimplification, are tailored to narrow use cases, produce results that are difficult to interpret, or are difficult to use, among other problems. Our approach will aim to overcome those limitations.

We seek to develop a formalism that may be used to build realistic models of a cyber-system and the humans who interact with the system—adversaries, defenders, and users—to perform risk analysis as an aid to security architects faced with difficult design choices. We call this formalism a General Agent Model for the Evaluation of Security (GAMES). We define an agent to be a human who may perform some action in the cyber-system: an adversary, a defender, or a user. The formalism will enable the modular construction of individual state-based agent models, which may be composed into one model so the interaction among the adversaries, defenders, and users may be studied. Once constructed, this composed model may be executed or simulated. During the simulation, each individual adversary, defender, or user may use an algorithm or policy to decide what actions the agent will take to attempt to move the system to a state that is advantageous for that agent. The simulation will then probabilistically determine the outcome of each action, and update the state. Modelers will have the flexibility to specify how the agents will behave. The model execution will generate metrics that aid risk assessment and help the security analyst suggest appropriate defensive strategies. The model’s results may be reproduced by re-executing the model, and the model’s assumptions may be audited and improved upon by outside experts.
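
The sketch below illustrates, in miniature, the kind of composed, state-based agent simulation described above; the agents, actions, success probabilities, and metric are invented for illustration and are not the GAMES formalism itself.

    # Minimal sketch of a composed agent simulation (invented agents, actions,
    # and success probabilities; not the actual GAMES formalism).
    import random

    class Agent:
        def __init__(self, name, policy):
            self.name = name
            self.policy = policy        # maps global state -> (action, success_prob, effect)

        def act(self, state):
            action, p, effect = self.policy(state)
            if random.random() < p:     # probabilistic outcome of the chosen action
                effect(state)
            return action

    def adversary_policy(state):
        if not state["compromised"]:
            return "exploit", 0.3, lambda s: s.update(compromised=True)
        return "exfiltrate", 0.5, lambda s: s.update(data_lost=True)

    def defender_policy(state):
        if state["compromised"]:
            return "restore", 0.7, lambda s: s.update(compromised=False)
        return "monitor", 1.0, lambda s: None

    def simulate(agents, steps=100):
        state = {"compromised": False, "data_lost": False}
        for t in range(steps):
            for agent in agents:
                agent.act(state)
            if state["data_lost"]:
                return t                # time step at which data was lost
        return steps

    runs = [simulate([Agent("adversary", adversary_policy),
                      Agent("defender", defender_policy)]) for _ in range(1000)]
    print("mean time to data loss:", sum(runs) / len(runs))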

William Sanders
Performance Period: 01/01/2018 - 10/01/2020
Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Mixed Initiative and Collaborative Learning in Adversarial Environments
Lead PI:
Claire Tomlin
Claire Tomlin

Claire Tomlin is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998 to 2007, and joined Berkeley in 2005. She received the Erlander Professorship of the Swedish Research Council in 2009, a MacArthur Fellowship in 2006, and the Eckman Award of the American Automatic Control Council in 2003. She works in hybrid systems and control, with applications to air traffic systems, robotics, and biology.

Institution: UC Berkeley
Development of Methodology Guidelines for Security Research
Lead PI:
Jeffrey Carver
Abstract

This project seeks to aid the security research community in conducting and reporting methodologically sound science through (1) development, refinement, and use of community-based security research guidelines; and (2) characterization of the security literature based upon those guidelines.

Jeffrey Carver
Institution: North Carolina State University
Sponsor: National Security Agency
Scalable Privacy Analysis
Lead PI:
Serge Egelman
Abstract

One major shortcoming of the current "notice and consent" privacy framework is that the constraints on data usage stated in policies—be they stated privacy practices, regulations, or laws—cannot easily be compared against the technologies that they govern. To that end, we are developing a framework to automatically compare policy against practice. Broadly, this involves identifying the relevant data usage policies and practices in a given domain and then measuring the real-world exchanges of data restricted by those rules. The results of such a method will then be used to measure and predict the harms brought upon the data’s subjects and holders in the event of its unauthorized usage. In doing so, we will be able to infer which specific protected pieces of information, which prohibited operations on that data, and which aggregations thereof pose the highest risks compared to other items covered by the policy. This will shed light on the relationship between the unwanted collection of data, its usage and dissemination, and the resulting negative consequences.

We have built infrastructure into the Android operating system in which we have heavily instrumented the permission-checking APIs and included network-monitoring functionality. This allows us to monitor when an application attempts to access protected data (e.g., PII, persistent identifiers, etc.) and what it does with it. Unlike static analysis techniques, which only detect the potential for certain behaviors (e.g., data exfiltration), executing applications with our instrumentation yields real-time observations of actual privacy violations. The only drawback, however, is that applications need to be executed, and broad code coverage is desired. To date, we have demonstrated that many privacy violations are detectable when application user interfaces are “fuzzed” using random input. However, there are many open research questions about how we can achieve better code coverage to detect a wider range of privacy-related events, while doing so in a scalable manner. Toward that end, we plan to virtualize our privacy testbed and integrate crowd-sourcing. By doing this, we will develop new methods for performing privacy experiments that are repeatable, rigorous, and generalizable. The results of these experiments can then be used to implement data-driven privacy controls, address gaps in regulation, and enforce existing regulations.
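
The sketch below illustrates one way such instrumentation output could be correlated to flag privacy violations; the event format, field names, and time window are hypothetical and are not the project's actual instrumentation.

    # Minimal sketch of correlating permission-access events with network
    # transmissions to flag potential privacy violations (event format and
    # field names are hypothetical, not the project's instrumentation output).
    from datetime import datetime, timedelta

    access_events = [  # app reads a protected identifier
        {"app": "com.example.game", "data": "IMEI", "time": datetime(2022, 3, 1, 10, 0, 5)},
    ]
    network_events = [  # app sends traffic containing that identifier
        {"app": "com.example.game", "payload_contains": "IMEI",
         "dest": "tracker.example.net", "time": datetime(2022, 3, 1, 10, 0, 9)},
    ]

    def flag_violations(accesses, transmissions, window=timedelta(seconds=30)):
        """Report cases where protected data is read and then appears on the
        network from the same app within a short time window."""
        for a in accesses:
            for n in transmissions:
                if (n["app"] == a["app"] and n["payload_contains"] == a["data"]
                        and timedelta(0) <= n["time"] - a["time"] <= window):
                    yield (a["app"], a["data"], n["dest"])

    for app, data, dest in flag_violations(access_events, network_events):
        print(f"{app} sent {data} to {dest}")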

Serge Egelman

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is commercializing his research by performing on-demand privacy analysis of mobile apps for compliance purposes. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and privacy research awards from two different European data protection authorities, CNIL and AEPD. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.

Performance Period: 01/01/2018 - 01/01/2018
Institution: International Computer Science Institute
Reasoning about Accidental and Malicious Misuse via Formal Methods
Lead PI:
Munindar Singh
Co-Pi:
Abstract

This project seeks to aid security analysts in identifying and protecting against accidental and malicious actions by users or software. It does so through automated reasoning over unified representations of user expectations and software implementations, identifying misuses that are sensitive to usage and machine context.

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.

Institution: North Carolina State University
Sponsor: National Security Agency
Characterizing user behavior and anticipating its effects on computer security with a Security Behavior Observatory
Abstract

Systems that are technically secure may still be exploited if users behave in unsafe ways. Most studies of user behavior are conducted either in controlled laboratory settings or as large-scale between-subjects measurements in the field. Both methods have shortcomings: lab experiments do not take place in natural environments and therefore may not accurately capture real-world behaviors (i.e., low ecological validity), whereas large-scale measurement studies do not allow the researchers to probe user intent or gather explanatory data for observed behaviors, and they offer limited control over confounding factors. The Security Behavior Observatory (SBO) addresses this gap through a panel of participants who consent to our observing their daily computing behavior in situ, so we can understand what constitutes “insecure” behavior. We use the SBO to attempt to answer a number of research questions, including: 1) What are risk indicators of a user’s propensity to be infected by malware? 2) Why do victims fail to update vulnerable software in a timely manner? 3) How can user behavior be modeled with respect to security and privacy “in the wild”?

Performance Period: 01/01/2018 - 01/01/2018
Institution: Carnegie Mellon University
Sponsor: National Security Agency
Secure Native Binary Execution
Lead PI:
Prasad Kulkarni
Abstract

Typically, securing software is the responsibility of the software developer. The customer or end-user of the software does not control or direct the steps taken by the developer to employ best-practice coding styles or mechanisms to ensure software security and robustness. Current systems and tools also do not provide the end-user with an ability to determine the level of security in the software they use. At the same time, any flaws or security vulnerabilities ultimately affect the end-user of the software. Therefore, our overall project aim is to provide greater control to the end-user to actively assess and secure the software they use.

Our project goal is to develop a high-performance framework for client-side security assessment and enforcement for binary software. Our research is developing new tools and techniques to: (a) assess the security level of binary executables, and (b) enhance the security level of binary software, when and as desired by the user to protect the binary against various classes of security issues. Our approach combines static and dynamic techniques to achieve efficiency, effectiveness, and accuracy.
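
As a small illustration of client-side assessment, the sketch below inspects two common hardening indicators of an ELF binary (non-executable stack and RELRO) by parsing the output of readelf; the specific checks are illustrative assumptions, not the project's tool.

    # Minimal sketch of a client-side check of two hardening indicators in an
    # ELF binary (non-executable stack and RELRO), by parsing readelf output.
    # The checks are illustrative, not the project's actual assessment tool.
    import subprocess, sys

    def program_headers(path):
        out = subprocess.run(["readelf", "-lW", path],
                             capture_output=True, text=True, check=True)
        return out.stdout

    def assess(path):
        headers = program_headers(path)
        findings = {}
        for line in headers.splitlines():
            # A GNU_STACK segment marked RWE indicates an executable stack.
            if "GNU_STACK" in line:
                findings["nx_stack"] = "RWE" not in line
        findings["relro"] = "GNU_RELRO" in headers
        return findings

    if __name__ == "__main__":
        print(assess(sys.argv[1]))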

Prasad Kulkarni
Institution: University of Kansas
Sponsor: National Security Agency
Secure Native Binary Executions
Lead PI:
Prasad Kulkarni
Abstract

Cyber-physical systems (CPS) routinely employ commercial off-the-shelf (COTS) applications and binaries to realize their overall system goals. COTS applications for CPS are typically programmed using unsafe languages, like C/C++ or assembly. Such programs are often plagued with memory and other vulnerabilities that attackers can exploit to compromise the system.

There are many issues that need to be explored and resolved to provide security in this environment. For instance: (a) different systems may desire distinct and customizable levels of protection (for the same software); (b) different systems may have varying tolerances to the performance and/or timing penalties imposed by existing security solutions, so a solution applicable in one case may not be appropriate for a different system; (c) multiple solutions to the same vulnerability/attack may impose varying levels of security and performance penalties, and such tradeoffs and comparisons with other potential solutions are typically unknown or unavailable to users; and (d) solutions to newly discovered attacks and improvements to existing solutions continue to be devised, yet there is currently no efficient mechanism to retrofit existing application binaries with new security patches with minimal disruption to system operation.

The goal of this research is to design a mechanism to: (a) analyze and quantify the level of security provided and the performance penalty imposed by different solutions to various security risks affecting native binaries, and (b) study and build an architecture that can efficiently and adaptively patch vulnerabilities or retrofit COTS applications with chosen security mechanisms with minimal disruption.

Successful completion of this project will result in:

  • Exploration and understanding of the security and performance tradeoffs imposed by different proposed solutions to important software problems.
  • Discovery of (a set of) mechanisms to reliably retrofit desired compiler-level, instrumentation-based, or other user-defined security solutions into existing binaries.
  • Study and resolution of the issues involved in the design and construction of an efficient production-quality framework to realize the proposed goals.
Prasad Kulkarni