Second News Item
This is another news item.
TVA has been working to research best practices, better understand challenges in the Valley, and build a roadmap for the future. It organized a diverse group of partners to identify connected community opportunities, leading to three initial focus areas for its efforts and funding. TVA will expand this partnership network to include community implementation partners and subject matter experts.
Hosted by the Standardization Administration of the People’s Republic of China (SAC), the workshop will be held virtually on March 14-16, 2022, and June 22-24, 2022. The two sessions will be complemented by additional online correspondence and one or more web workshops. The registration deadline is February 8, 2022.
The Civic Innovation Challenge is a multi-agency, federal government research and action competition that aims to fund ready-to-implement, research-based pilot projects that have the potential for scalable, sustainable, and transferable impact on community-identified priorities.
The National Science Foundation (NSF) has long been a leader in advancing the fundamental science and engineering research and education that will revolutionize our Nation's cities and communities for the 21st century. NSF investments create the scientific and engineering foundations for smart cities and communities and help to enhance economic vitality, safety, security, health and wellbeing, and overall quality of life.
This workshop series explores provisions in existing and planned smart city deployments and addresses the challenges critical to integrating humans, physical components and computers in cyber-physical systems at smart city scale.
NSF's Smart & Connected Communities effort aims to advance understanding of our cities and communities, and to improve their functioning and the quality of life within them, through innovations in computing, engineering, and the information, physical, social, and learning sciences.
Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkottai
Researchers: Pulkit Katdare and Negin Musavi
Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize the capabilities of the systems, but are also vulnerable to a new and imminent class of attacks. This project will develop foundations and methodologies to make such systems resilient. Our focus is on control systems that utilize large-scale, crowd-sourced data collection to train predictive AI models, which are then used to control and optimize the system’s performance. Consider the examples of congestion-aware traffic routing and autonomous vehicles; to design controllers for such systems, large amounts of user data are being collected to train AI models that predict network congestion dynamics and human driving behaviors, respectively, and these models are used to guide the overall closed-loop control system.
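As a rough illustration of this pipeline, the sketch below trains a toy congestion predictor from simulated crowd-sourced samples and then uses it inside a simple routing loop. Everything here (the features, the dynamics, the routing rule) is an invented placeholder, not the project's actual models or data.

# Minimal, hypothetical sketch of the pipeline described above:
# crowd-sourced data -> trained predictive model -> closed-loop controller.
# The toy dynamics and features are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# 1. Crowd-sourced training data: (current load, time of day) -> congestion level.
X = rng.uniform(0, 1, size=(1000, 2))
y = 0.7 * X[:, 0] + 0.2 * np.sin(2 * np.pi * X[:, 1]) + 0.05 * rng.normal(size=1000)

# 2. Train a (deliberately simple) predictive model: least squares on fixed features.
def features(x):
    return np.stack([x[..., 0], np.sin(2 * np.pi * x[..., 1]), np.ones(x.shape[:-1])], axis=-1)

theta, *_ = np.linalg.lstsq(features(X), y, rcond=None)

def predict_congestion(load, time_of_day):
    return features(np.array([load, time_of_day])) @ theta

# 3. Closed-loop control: route new demand onto whichever of two roads the
#    model predicts will be less congested, then let congestion evolve.
loads = np.array([0.5, 0.5])
for step in range(10):
    t = (step % 24) / 24.0
    preds = [predict_congestion(l, t) for l in loads]
    chosen = int(np.argmin(preds))   # control decision driven by the AI model
    loads[chosen] += 0.05            # routed demand raises that road's load
    loads *= 0.95                    # congestion decays between steps
print(loads)                         # the argmin policy tends to balance the two roads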
Although our current understanding of AI models is very limited, they are already known to have serious vulnerabilities. For example, so-called “adversarial examples” can be generated algorithmically to defeat neural network models while appearing, to human senses, indistinguishable from benign inputs [73]. Such attacks can cause an autonomous vehicle to crash, facial recognition to fail, and illegal content to bypass filters, and they may be impossible to detect. A second type of vulnerability arises when the adversary provides malicious training samples that may spoil the fidelity of the learned model. A third is the potential violation of the privacy of the individuals (e.g., drivers) who provide the training data. More generally, the space of vulnerabilities and their impact on the overall control system are not well understood. This project will address this new and challenging landscape and develop the mathematical foundations for reasoning about such systems and attacks. These foundations will then be the basis for automatically synthesizing the monitoring and control algorithms needed for resilience. The project aligns with the SoS community’s goal of creating resilient cyber-physical systems, and the approaches developed here will contribute toward a new compositional reasoning framework for CPS that combines traditional controls with AI models.
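To make the first vulnerability concrete, the following sketch applies a fast-gradient-sign (FGSM-style) perturbation to a toy logistic classifier. The weights and input are invented for illustration; attacks such as those in [73] target far larger neural networks, but the mechanism is the same.

# Illustrative fast-gradient-sign (FGSM-style) attack on a toy logistic model,
# showing how a small, targeted perturbation can flip a prediction.
# The model and data below are assumptions for illustration only.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # trained weights of a toy classifier
x = np.array([0.2, -0.1, 0.4])   # a correctly classified input (true label 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x)        # P(label = 1)

# Gradient of the cross-entropy loss w.r.t. the input, for true label y = 1,
# is (p - 1) * w, where p is the model's current confidence.
p = predict(x)
grad_x = (p - 1.0) * w

# FGSM step: nudge each coordinate in the direction that hurts the model most.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))   # confidence ~0.67 drops to ~0.38: prediction flips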
Our approach will take a broad view in developing a mathematical framework while simultaneously creating algorithms and tools that will be tested on benchmarks and real data. The theoretical aspects of the project will draw on the team’s expertise in learning theory, formal methods, and robust control. The resulting resilient monitoring, detection, and control synthesis approaches will be tested on data, scenarios, and models from the CommonRoad project, Udacity, and OpenPilot.
Sayan Mitra is a Professor, Associate Head for Graduate Affairs, and John Bardeen Faculty Scholar of ECE at UIUC. His research is on safe autonomy. His research group develops theory, algorithms, and tools for control synthesis and verification, some of which have been patented and are being commercialized. Several former PhD students are now professors: Taylor Johnson (Vanderbilt), Parasara Sridhar Duggirala (UNC Chapel Hill), and Chuchu Fan (MIT). Sayan received his PhD from MIT, where he was advised by Nancy Lynch. His textbook on verification of cyber-physical systems was published by MIT Press in 2021. The group's work has been recognized with an NSF CAREER Award, an AFOSR Young Investigator Research Program Award, an ACM SRC gold prize, the IEEE-HKN C. Holmes MacDonald Outstanding Teaching Award (2013), a Siebel Fellowship, and several best paper awards.
Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. This is achieved through development of a general-purpose modeling and simulation approach for the cybersecurity aspects of cyber-systems and of all human agents that interact with those systems. These agents include adversaries, defenders, and users. The ultimate goal is to generate quantitative metric results that will help system architects make better design decisions to achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study the behavior of a large class of systems under cyber attack. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights into fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that are able to achieve their mission objectives despite attacks.
Examples of simulation results include time to compromise of information, time to loss of service, the percentage of time an adversary has system access, and identification of the most common attack paths.
Examples of insights one may gain from a model that incorporates these agents include answers to questions such as:
Assumptions made during the system design process will be made explicit and auditable in the model, which will help bring a more scientific approach to a field that currently often relies on intuition and experience. The primary output of this research will be a well-developed security modeling formalism capable of realistically modeling the different human agents in a system, implemented in a software tool, and a validation of both the formalism and the tool with two or more real-life case studies. We plan to make the implementation of the formalism and the associated analysis tools freely available to academics to encourage adoption of the scientific methodology our formalism will provide for security modeling. Many academics and practitioners have recognized the need for computer security models, as evidenced by the numerous publications on the topic. Such modeling approaches are a step in the right direction, but they have their own limitations, especially in the way they model the humans who interact with the cyber portion of the system. Some modeling approaches explicitly model only the adversary (e.g., attack trees), or model only one attacker/defender pair (e.g., attack-defense trees [50]). There do exist approaches for modeling multiple adversaries, defenders, and users in a system, e.g., [9], [93], but the existing methods are not in common use for a number of reasons: the models often lack realism because of oversimplification, are tailored to narrow use cases, produce results that are difficult to interpret, or are difficult to use, among other problems. Our approach will aim to overcome those limitations.
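For contrast with the agent models proposed here, the sketch below evaluates a classic attack tree of the kind referenced above. The tree, its leaf probabilities, and the independence assumption are all invented for illustration; note what the formalism omits: defenders, users, and any notion of time.

# Minimal attack-tree sketch (an assumed toy example, not from the cited work):
# leaves carry success probabilities; OR/AND nodes combine their children.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"           # "leaf", "or", or "and"
    prob: float = 0.0            # success probability (leaves only)
    children: list = field(default_factory=list)

def success_prob(n: Node) -> float:
    if n.kind == "leaf":
        return n.prob
    ps = [success_prob(c) for c in n.children]
    if n.kind == "and":          # all sub-goals must succeed
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                    # "or": at least one sub-goal succeeds
    for p in ps:                 # (assuming independent attempts)
        out *= (1.0 - p)
    return 1.0 - out

steal_data = Node("steal data", "or", children=[
    Node("phish admin", prob=0.2),
    Node("exploit server", "and", children=[
        Node("find vulnerability", prob=0.3),
        Node("bypass firewall", prob=0.5),
    ]),
])
print(success_prob(steal_data))  # 1 - (1-0.2)*(1-0.15) = 0.32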
We seek to develop a formalism that may be used to build realistic models of a cyber-system and the humans who interact with the system—adversaries, defenders, and users—to perform risk analysis as an aid to security architects faced with difficult design choices. We call this formalism a General Agent Model for the Evaluation of Security (GAMES). We define an agent to be a human who may perform some action in the cyber-system: an adversary, a defender, or a user. The formalism will enable the modular construction of individual state-based agent models, which may be composed into one model so the interaction among the adversaries, defenders, and users may be studied. Once constructed, this composed model may be executed or simulated. During the simulation, each individual adversary, defender, or user may use an algorithm or policy to decide what actions the agent will take to attempt to move the system to a state that is advantageous for that agent. The simulation will then probabilistically determine the outcome of each action, and update the state. Modelers will have the flexibility to specify how the agents will behave. The model execution will generate metrics that aid risk assessment and help the security analyst suggest appropriate defensive strategies. The model’s results may be reproduced by re-executing the model, and the model’s assumptions may be audited and improved upon by outside experts.
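As a minimal sketch of that execution loop, the following toy simulation composes an adversary, a defender, and a user acting on shared state, with probabilistic action outcomes and a time-to-compromise metric. The policies, state variables, and probabilities are placeholders invented for this illustration, not part of the GAMES formalism itself.

# Hypothetical sketch of the execution loop described above: three agents act
# on shared state, outcomes are probabilistic, and repeated runs yield a risk
# metric (time to compromise). Every policy and number here is a placeholder.
import random

def adversary(state):
    return ("compromise", 0.3) if not state["patched"] else ("compromise", 0.05)

def defender(state):
    return ("patch", 0.4)

def user(state):
    return ("click_phish", 0.2)      # risky user behavior weakens the system

EFFECTS = {
    "compromise":  lambda s: s.update(compromised=True),
    "patch":       lambda s: s.update(patched=True),
    "click_phish": lambda s: s.update(patched=False),
}

def simulate(seed, horizon=1000):
    rng = random.Random(seed)
    state = {"patched": False, "compromised": False}
    for t in range(1, horizon + 1):
        for agent in (adversary, defender, user):
            action, p = agent(state)     # each agent picks an action...
            if rng.random() < p:         # ...whose outcome is probabilistic
                EFFECTS[action](state)
        if state["compromised"]:
            return t                     # metric: time to compromise
    return horizon

runs = [simulate(seed) for seed in range(500)]
print(sum(runs) / len(runs))             # mean time to compromise over 500 runs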
Claire Tomlin is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998-2007, and in 2005 joined Berkeley. She received the Erlander Professorship of the Swedish Research Council in 2009, a MacArthur Fellowship in 2006, and the Eckman Award of the American Automatic Control Council in 2003. She works in hybrid systems and control, with applications to air traffic systems, robotics, and biology.