Resilient Control of Cyber-Physical Systems with Distributed Learning
Lead PI:
Sayan Mitra
Abstract

Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkotai

Researchers: Pulkit Katdare and Negin Musavi

Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize the capabilities of the systems, but are also vulnerable to a new and imminent class of attacks. This project will develop foundations and methodologies to make such systems resilient. Our focus is on control systems that utilize large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize the system’s performance. Consider the examples of congestion-aware traffic routing and autonomous vehicles; to design controllers for such systems, large amounts of user data are being collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models are used to guide the overall closed-loop control system.

Although our current understanding of AI models is very limited, they are already known to have serious vulnerabilities. For example, so-called “adversarial examples” can be generated algorithmically to defeat neural network models while remaining indistinguishable to human senses [73]. Such examples can cause an autonomous vehicle to crash, facial recognition to fail, and illegal content to bypass filters, and the attacks may be impossible to detect. A second type of vulnerability arises when the adversary provides malicious training samples that spoil the fidelity of the learned model. A third is the potential violation of the privacy of individuals (e.g., drivers) who provide the training data. More generally, the space of vulnerabilities and their impact on the overall control system are not well understood. This project will address this new and challenging landscape and develop the mathematical foundations for reasoning about such systems and attacks. These foundations will then be the basis for automatically synthesizing the monitoring and control algorithms needed for resilience. The project aligns with the SoS community’s goal of creating resilient cyber-physical systems, and the approaches developed here will contribute towards a new compositional reasoning framework for CPS that combines traditional controls with AI models.
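To make the first vulnerability concrete, the sketch below generates an adversarial example against a toy logistic classifier using the fast gradient sign method (a standard attack, not one specific to this project). The weights, input, and perturbation budget are made up for the demonstration; real attacks target deep networks, but the mechanism is the same: perturb the input in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, already-trained linear classifier: P(class 1 | x).
w = np.array([2.0, -3.0])
b = 0.0

x = np.array([0.5, 0.1])        # a clean input
p = sigmoid(w @ x + b)          # model's confidence for class 1
clean_label = int(p > 0.5)      # classified as class 1

# Gradient of the cross-entropy loss (true label 1) w.r.t. the INPUT:
# dL/dx = (p - 1) * w.  The attacker perturbs x, not the weights.
grad = (p - 1.0) * w

eps = 0.5                       # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad) # FGSM: one signed-gradient step

p_adv = sigmoid(w @ x_adv + b)
adv_label = int(p_adv > 0.5)    # prediction flips to class 0
print(clean_label, adv_label)   # 1 0
```

With image classifiers, `eps` is kept small enough that the perturbed input looks unchanged to a human, which is what makes the attack hard to detect.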

Our approach will take a broad view in developing a mathematical framework while simultaneously creating algorithms and tools that will be tested on benchmarks and real data. The theoretical aspects of the project will draw on the team’s expertise in learning theory, formal methods, and robust control. The resulting resilient monitoring, detection, and control synthesis approaches will be tested on data, scenarios, and models from the CommonRoad project, Udacity, and OpenPilot.

 

Sayan Mitra

Sayan Mitra is a Professor, Associate Head of Graduate Affairs, and John Bardeen Faculty Scholar of ECE at UIUC. His research is on safe autonomy. His research group develops theory, algorithms, and tools for control synthesis and verification, some of which have been patented and are being commercialized. Several former PhD students are now professors: Taylor Johnson (Vanderbilt), Parasara Sridhar Duggirala (UNC Chapel Hill), and Chuchu Fan (MIT). Sayan received his PhD from MIT with Nancy Lynch. His textbook on verification of cyber-physical systems was published by MIT Press in 2021. The group's work has been recognized with an NSF CAREER Award, an AFOSR Young Investigator Research Program Award, an ACM SRC gold prize, the IEEE-HKN C. Holmes MacDonald Outstanding Teaching Award (2013), a Siebel Fellowship, and several best paper awards.

Performance Period: 01/01/2018 - 01/01/2018
Institution: University of Illinois at Urbana-Champaign; The University of Texas at Austin
Sponsor: National Security Agency
A Human-Agent-Focused Approach to Security Modeling
Lead PI:
William Sanders
Abstract

Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. This is achieved through development of a general-purpose modeling and simulation approach for cybersecurity aspects of cyber-systems and of all human agents that interact with those systems: adversaries, defenders, and users. The ultimate goal is to generate quantitative metric results that will help system architects make better design decisions to achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study the behavior under cyber attack of a large class of systems. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights regarding fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that are able to achieve their mission objectives despite attacks.

Examples of simulation results are time to compromise of information, time to loss of service, percent of time adversary has system access, and identification of the most common attack paths.
Examples of insights one may gain from a model that incorporates agents address questions such as:

  • How do technical improvements in prevention and detection countermeasures weigh against improvements to the defender's attack-attribution capabilities as perceived by the adversary? Technical improvements change system behavior, while attribution capabilities can change the behavior of a risk-averse adversary.
  • How do autonomous and human-initiated defenses compare in effectiveness and what factors impact this comparison?

Assumptions made during the system design process will be made explicit and auditable in the model, which will help bring a more scientific approach to a field that currently often relies on intuition and experience. The primary outputs of this research will be a well-developed security modeling formalism capable of realistically modeling the different human agents in a system, an implementation of that formalism in a software tool, and a validation of both the formalism and the tool with two or more real-life case studies. We plan to make the implementation of the formalism and associated analysis tools freely available to academics to encourage adoption of the scientific methodology our formalism will provide for security modeling.

Many academics and practitioners have recognized the need for models for computer security, as evidenced by the numerous publications on the topic. Such modeling approaches are a step in the right direction, but each has its own limitations, especially in the way it models the humans who interact with the cyber portion of the system. Some approaches explicitly model only the adversary (e.g., attack trees), or model only one attacker/defender pair (e.g., attack-defense trees [50]). There do exist approaches for modeling multiple adversaries, defenders, and users in a system, e.g., [9], [93], but these are not in common use, for a number of reasons: the models often lack realism because of oversimplification, are tailored to narrow use cases, produce results that are difficult to interpret, or are difficult to use. Our approach will aim to overcome those limitations.

We seek to develop a formalism that may be used to build realistic models of a cyber-system and the humans who interact with the system—adversaries, defenders, and users—to perform risk analysis as an aid to security architects faced with difficult design choices. We call this formalism a General Agent Model for the Evaluation of Security (GAMES). We define an agent to be a human who may perform some action in the cyber-system: an adversary, a defender, or a user. The formalism will enable the modular construction of individual state-based agent models, which may be composed into one model so the interaction among the adversaries, defenders, and users may be studied. Once constructed, this composed model may be executed or simulated. During the simulation, each individual adversary, defender, or user may use an algorithm or policy to decide what actions the agent will take to attempt to move the system to a state that is advantageous for that agent. The simulation will then probabilistically determine the outcome of each action, and update the state. Modelers will have the flexibility to specify how the agents will behave. The model execution will generate metrics that aid risk assessment and help the security analyst suggest appropriate defensive strategies. The model’s results may be reproduced by re-executing the model, and the model’s assumptions may be audited and improved upon by outside experts.
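The simulation loop described above (agents take turns acting under a policy, action outcomes are resolved probabilistically, and metrics are extracted from the trace) can be sketched in miniature. Everything here is hypothetical and far simpler than the GAMES formalism itself: two agents, a one-flag system state, and hard-coded success probabilities, just to show the execution and metric-generation structure.

```python
import random

class Adversary:
    """Hypothetical agent: tries to move the system to a compromised state."""
    def act(self, state, rng):
        # Attack succeeds with an assumed probability of 0.3.
        if not state["compromised"] and rng.random() < 0.3:
            state["compromised"] = True

class Defender:
    """Hypothetical agent: detects and remediates a compromise."""
    def act(self, state, rng):
        # Remediation succeeds with an assumed probability of 0.5.
        if state["compromised"] and rng.random() < 0.5:
            state["compromised"] = False
            state["remediations"] += 1

def simulate(steps, seed=0):
    rng = random.Random(seed)           # seeded, so runs are reproducible
    state = {"compromised": False, "remediations": 0}
    agents = [Adversary(), Defender()]  # composed model: agents share state
    compromised_steps = 0
    for _ in range(steps):
        for agent in agents:            # each agent decides and acts in turn
            agent.act(state, rng)
        compromised_steps += state["compromised"]
    # Example metric from the abstract: percent of time the
    # adversary has system access.
    return compromised_steps / steps

print(simulate(1000))
```

Seeding the random number generator is what makes results reproducible by re-executing the model, and keeping the agents' decision logic in separate classes is what lets outside experts audit or swap each agent's assumed behavior independently.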

William Sanders
Performance Period: 01/01/2018 - 10/01/2020
Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Mixed Initiative and Collaborative Learning in Adversarial Environments
Lead PI:
Claire Tomlin
Claire Tomlin

Claire Tomlin is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998 to 2007, and joined Berkeley in 2005. She received the Erlander Professorship of the Swedish Research Council in 2009, a MacArthur Fellowship in 2006, and the Eckman Award of the American Automatic Control Council in 2003. She works in hybrid systems and control, with applications to air traffic systems, robotics, and biology.

Institution: UC Berkeley
Development of Methodology Guidelines for Security Research
Lead PI:
Jeffrey Carver
Abstract

This project seeks to aid the security research community in conducting and reporting methodologically sound science through (1) development, refinement, and use of community-based security research guidelines; and (2) characterization of the security literature based upon those guidelines.

Jeffrey Carver
Institution: North Carolina State University
Sponsor: National Security Agency