Civic Innovation Challenge

Submitted by Amy Karns on

The Civic Innovation Challenge is a multi-agency, federal government research and action competition that aims to fund ready-to-implement, research-based pilot projects that have the potential for scalable, sustainable, and transferable impact on community-identified priorities.

NSF's Smart & Connected Communities Effort

The National Science Foundation (NSF) has long been a leader in advancing the fundamental science and engineering research and education that will revolutionize our Nation's cities and communities for the 21st century. NSF investments create the scientific and engineering foundations for smart cities and communities and help to enhance economic vitality, safety, security, health and wellbeing, and overall quality of life.

2022 S&CC PI Meeting

NSF's Smart & Connected Communities effort aims to advance understanding of our cities and communities in order to improve how they function and the quality of life within them, through innovations in computing, engineering, the information and physical sciences, and the social and learning sciences.

Resilient Control of Cyber-Physical Systems with Distributed Learning
Lead PI:
Sayan Mitra
Abstract

Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkottai

Researchers: Pulkit Katdare and Negin Musavi

Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize the capabilities of the systems, but are also vulnerable to a new and imminent class of attacks. This project will develop foundations and methodologies to make such systems resilient. Our focus is on control systems that utilize large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize the system’s performance. Consider the examples of congestion-aware traffic routing and autonomous vehicles; to design controllers for such systems, large amounts of user data are being collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models are used to guide the overall closed-loop control system.
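
The closed-loop structure described above can be sketched minimally. This is a purely illustrative sketch, not the project's implementation: the "learned predictor" is a hypothetical linear stand-in for a trained AI model (e.g., of congestion dynamics), and the controller simply searches a coarse action grid against the predictor's forecast.

```python
# Minimal sketch of a control loop that consults a learned predictor.
# All dynamics and names here are invented for illustration.

def learned_predictor(state, action):
    """Stand-in for a trained AI model predicting the next state."""
    return 0.9 * state + 0.5 * action  # assumed linear dynamics

def controller(state, target):
    """Pick the action the predictor says moves the state toward target."""
    best_action, best_err = 0.0, float("inf")
    for a in [x / 10 for x in range(-10, 11)]:  # coarse action grid
        err = abs(learned_predictor(state, a) - target)
        if err < best_err:
            best_action, best_err = a, err
    return best_action

def run_closed_loop(state, target, steps=20):
    for _ in range(steps):
        # True plant; here it happens to match the predictor.  An attack
        # that corrupts the predictor would break that agreement.
        state = 0.9 * state + 0.5 * controller(state, target)
    return state
```

The key point the project targets is visible even in this toy: the controller's guarantees rest entirely on the predictor's fidelity, which is exactly what an adversary can degrade.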

Although our current understanding of AI models is very limited, they are already known to have serious vulnerabilities. For example, so-called “adversarial examples” can be generated algorithmically for defeating neural network models while appearing indistinguishable to human senses [73]. This can cause an autonomous vehicle to crash, facial recognition to fail, and illegal content to bypass filters, and the attacks may be impossible to detect. A second type of vulnerability arises when the adversary provides malicious training samples that may spoil the fidelity of the learned model. A third vulnerability is the potential violation of the privacy of individuals (e.g., drivers) who provide the training data. More generally, the space of vulnerabilities and their impact on the overall control system are not well-understood. This project will address this new and challenging landscape, and develop the mathematical foundations for reasoning about such systems and attacks. These foundations will then be the basis for automatically synthesizing monitoring and control algorithms needed for resilience. The project aligns with the SoS community’s goal of creating resilient cyber-physical systems, and the approaches developed here will contribute towards development of a new compositional reasoning framework for CPS that combines traditional controls with AI models.
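
The first vulnerability above can be illustrated with a fast-gradient-sign (FGSM-style) sketch, one standard way adversarial examples are generated. The tiny logistic "model" and its weights are invented for illustration; real attacks target deep networks, where the input gradient is computed by backpropagation rather than read off the weights.

```python
import math

# Toy logistic "model": p(class 1) = sigmoid(w . x).
# Weights are made up for illustration.
w = [2.0, -3.0, 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(x, eps):
    """One fast-gradient-sign step pushing the score toward class 1.
    For a logistic model, the sign of the input gradient is sign(w_i)."""
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

x = [0.1, 0.6, 0.2]        # original input, scored as class 0
x_adv = fgsm(x, eps=0.4)   # small per-coordinate perturbation flips the label
```

Each coordinate moves by at most eps, yet the classification flips; against a high-dimensional image model the same budget can be visually imperceptible.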

Our approach will take a broad view in developing a mathematical framework while simultaneously creating algorithms and tools that will be tested on benchmarks and real data. The theoretical aspects of the project will draw on the team’s expertise in learning theory, formal methods, and robust control. The resulting resilient monitoring, detection, and control synthesis approaches will be tested on data, scenarios, and models from the CommonRoad project, Udacity, and OpenPilot.

 

Sayan Mitra

Sayan Mitra is a Professor, Associate Head of Graduate Affairs, and John Bardeen Faculty Scholar of ECE at UIUC. His research is on safe autonomy. His research group develops theory, algorithms, and tools for control synthesis and verification, some of which have been patented and are being commercialized. Several former PhD students are now professors: Taylor Johnson (Vanderbilt), Parasara Sridhar Duggirala (UNC Chapel Hill), and Chuchu Fan (MIT). Sayan received his PhD from MIT, advised by Nancy Lynch. His textbook on verification of cyber-physical systems was published by MIT Press in 2021. The group's work has been recognized with an NSF CAREER Award, an AFOSR Young Investigator Research Program Award, an ACM SRC gold prize, the IEEE-HKN C. Holmes MacDonald Outstanding Teaching Award (2013), a Siebel Fellowship, and several best paper awards.

Performance Period: 01/01/2018 - 01/01/2018
Institution: University of Illinois at Urbana-Champaign; The University of Texas at Austin
Sponsor: National Security Agency
A Human-Agent-Focused Approach to Security Modeling
Lead PI:
William Sanders
Abstract

Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. This is achieved through the development of a general-purpose modeling and simulation approach for the cybersecurity aspects of cyber-systems and of all human agents that interact with those systems: adversaries, defenders, and users. The ultimate goal is to generate quantitative metric results that help system architects make better design decisions to achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study the behavior of a large class of systems under cyber attack. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights into fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that achieve their mission objectives despite attacks.

Examples of simulation results include time to compromise of information, time to loss of service, percentage of time the adversary has system access, and identification of the most common attack paths.
A model that incorporates agents can yield insights into questions such as:

  • How do technical improvements in prevention and detection countermeasures weigh against improvements to the defender's attack-attribution capabilities as perceived by the adversary? Technical improvements change system behavior, while attribution capabilities can change the behavior of a risk-averse adversary.
  • How do autonomous and human-initiated defenses compare in effectiveness and what factors impact this comparison?

Assumptions made during the system design process will be made explicit and auditable in the model, which will help bring a more scientific approach to a field that currently often relies on intuition and experience. The primary output of this research will be a well-developed security modeling formalism capable of realistically modeling the different human agents in a system, implemented in a software tool, and a validation of both the formalism and the tool with two or more real-life case studies. We plan to make the implementation of the formalism and associated analysis tools freely available to academics to encourage adoption of the scientific methodology our formalism will provide for security modeling.

Many academics and practitioners have recognized the need for models for computer security, as evidenced by the numerous publications on the topic. Such modeling approaches are a step in the right direction, but have their own sets of limitations, especially in the way they model the humans who interact with the cyber portion of the system. Some modeling approaches explicitly model only the adversary (e.g., attack trees), or model only one attacker/defender pair (e.g., attack-defense trees [50]). However, there exist some approaches for modeling multiple adversaries, defenders, and users in a system, e.g., [9], [93]. The existing methods are not in common use, for a number of reasons. Often, the models lack realism because of oversimplification, are tailored to narrow use cases, produce results that are difficult to interpret, or are difficult to use, among other problems. Our approach will aim to overcome those limitations.

We seek to develop a formalism that may be used to build realistic models of a cyber-system and the humans who interact with the system—adversaries, defenders, and users—to perform risk analysis as an aid to security architects faced with difficult design choices. We call this formalism a General Agent Model for the Evaluation of Security (GAMES). We define an agent to be a human who may perform some action in the cyber-system: an adversary, a defender, or a user. The formalism will enable the modular construction of individual state-based agent models, which may be composed into one model so the interaction among the adversaries, defenders, and users may be studied. Once constructed, this composed model may be executed or simulated. During the simulation, each individual adversary, defender, or user may use an algorithm or policy to decide what actions the agent will take to attempt to move the system to a state that is advantageous for that agent. The simulation will then probabilistically determine the outcome of each action, and update the state. Modelers will have the flexibility to specify how the agents will behave. The model execution will generate metrics that aid risk assessment and help the security analyst suggest appropriate defensive strategies. The model’s results may be reproduced by re-executing the model, and the model’s assumptions may be audited and improved upon by outside experts.
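
The compose-then-simulate loop described above can be sketched in miniature. Everything here is invented for illustration (the agent policies, the success probabilities, the two-field state); it is not the GAMES formalism itself, only its shape: policies choose actions, outcomes resolve probabilistically, and the run yields a risk metric. A fixed seed makes the run reproducible, matching the reproducibility goal stated above.

```python
import random

def adversary(state):
    """Toy adversary policy: attack until in, then exploit access."""
    return "attack" if not state["compromised"] else "exfiltrate"

def defender(state):
    """Toy defender policy: patch when compromised, else monitor."""
    return "patch" if state["compromised"] else "monitor"

OUTCOME_PROB = {"attack": 0.3, "patch": 0.6}  # assumed success rates

def step(state, rng):
    for policy in (adversary, defender):
        action = policy(state)
        if action == "attack" and rng.random() < OUTCOME_PROB["attack"]:
            state["compromised"] = True
        elif action == "patch" and rng.random() < OUTCOME_PROB["patch"]:
            state["compromised"] = False
    state["compromised_steps"] += state["compromised"]

def simulate(steps=1000, seed=0):
    rng = random.Random(seed)  # fixed seed => reproducible run
    state = {"compromised": False, "compromised_steps": 0}
    for _ in range(steps):
        step(state, rng)
    # Metric: fraction of time the adversary held system access.
    return state["compromised_steps"] / steps
```

Swapping in a different policy function is the toy analogue of the modeler's flexibility to specify agent behavior, and the returned fraction is one of the metric types listed earlier (percentage of time the adversary has system access).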

William Sanders
Performance Period: 01/01/2018 - 10/01/2020
Institution: University of Illinois at Urbana-Champaign
Sponsor: National Security Agency
Mixed Initiative and Collaborative Learning in Adversarial Environments
Lead PI:
Claire Tomlin
Claire Tomlin

Claire Tomlin is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998 to 2007, and joined Berkeley in 2005. She received the Erlander Professorship of the Swedish Research Council in 2009, a MacArthur Fellowship in 2006, and the Eckman Award of the American Automatic Control Council in 2003. She works in hybrid systems and control, with applications to air traffic systems, robotics, and biology.

Institution: UC Berkeley
Development of Methodology Guidelines for Security Research
Lead PI:
Jeffrey Carver
Abstract

This project seeks to aid the security research community in conducting and reporting methodologically sound science through (1) development, refinement, and use of community-based security research guidelines; and (2) characterization of the security literature based upon those guidelines.

Jeffrey Carver
Institution: North Carolina State University
Sponsor: National Security Agency
Scalable Privacy Analysis
Lead PI:
Serge Egelman
Abstract

One major shortcoming of the current "notice and consent" privacy framework is that the constraints on data usage stated in policies—be they stated privacy practices, regulations, or laws—cannot easily be compared against the technologies they govern. To that end, we are developing a framework to automatically compare policy against practice. Broadly, this involves identifying the relevant data usage policies and practices in a given domain, then measuring the real-world exchanges of data restricted by those rules. The results of such a method will then be used to measure and predict the harms brought upon the data's subjects and holders in the event of its unauthorized usage. In doing so, we will be able to infer which specific protected pieces of information, which prohibited operations on that data, and which aggregations thereof pose the highest risks relative to other items covered by the policy. This will shed light on the relationship between the unwanted collection of data, its usage and dissemination, and the resulting negative consequences.
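
A minimal, purely illustrative sketch of the policy-versus-practice comparison: the declared policy format, the data types, and the observed flows below are invented toy data, not the project's actual rule representation.

```python
# Toy comparison of stated policy against observed practice.
# Policy maps each protected data type to its allowed recipients.
policy = {
    "location": {"analytics.example.com"},  # only this recipient allowed
    "contacts": set(),                      # no sharing allowed at all
}

# Hypothetical observed (data type, recipient) flows from measurement.
observed_flows = [
    ("location", "analytics.example.com"),
    ("location", "ads.example.net"),
    ("contacts", "ads.example.net"),
]

def violations(policy, flows):
    """Return the observed flows the stated policy does not permit."""
    return [(dtype, dest) for dtype, dest in flows
            if dest not in policy.get(dtype, set())]
```

Here two of the three observed flows violate the stated policy; in the framework described above, such mismatches are the raw material for estimating which violations pose the highest risk.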

We have built infrastructure into the Android operating system: we have heavily instrumented the permission-checking APIs and added network-monitoring functionality. This allows us to monitor when an application attempts to access protected data (e.g., PII, persistent identifiers, etc.) and what it does with it. Unlike static analysis techniques, which only detect the potential for certain behaviors (e.g., data exfiltration), executing applications with our instrumentation yields real-time observations of actual privacy violations. The drawback, however, is that applications need to be executed, and broad code coverage is desired. To date, we have demonstrated that many privacy violations are detectable when application user interfaces are "fuzzed" using random input. However, there are many open research questions about how to achieve better code coverage and detect a wider range of privacy-related events, while doing so in a scalable manner. Toward that end, we plan to virtualize our privacy testbed and integrate crowd-sourcing. By doing this, we will develop new methods for performing privacy experiments that are repeatable, rigorous, and generalizable. The results of these experiments can then be used to implement data-driven privacy controls, address gaps in regulation, and enforce existing regulations.
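
The dynamic approach can be sketched as follows. This is a deliberately tiny stand-in: the real instrumentation lives inside Android's permission-checking APIs, whereas here a hypothetical hook logs accesses while a toy app's UI handlers are "fuzzed" with random events, showing why runtime observation (unlike static analysis) reports only behavior that actually occurred.

```python
import random

access_log = []  # populated at runtime by the hook below

def guarded_read(data_type):
    """Hypothetical instrumentation hook around protected-data access."""
    access_log.append(data_type)  # a runtime observation, not a static guess
    return f"<{data_type}>"

def toy_app_handler(event):
    """Invented stand-in for an app's UI event handlers."""
    if event == "tap_share":
        return guarded_read("contacts")
    if event == "tap_map":
        return guarded_read("location")
    return None  # most events touch nothing protected

def fuzz(handler, events, trials=200, seed=1):
    """Drive the handler with random UI events; report data types reached."""
    rng = random.Random(seed)
    for _ in range(trials):
        handler(rng.choice(events))
    return set(access_log)
```

The coverage problem discussed above is also visible here: a handler the fuzzer never triggers produces no log entries, so its privacy behavior goes unobserved.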

Serge Egelman

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is commercializing his research by performing on-demand privacy analysis of mobile apps for compliance purposes. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and privacy research awards from two different European data protection authorities, CNIL and AEPD. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.

Performance Period: 01/01/2018 - 01/01/2018
Institution: International Computer Science Institute