Well-intentioned human users continually circumvent security controls. The ubiquity of this circumvention undermines the effectiveness of security designs that implicitly assume it never happens. We seek to develop metrics that enable security engineers and other stakeholders to make meaningful, quantifiable comparisons, decisions, and evaluations of proposed security controls in light of what really happens when those controls are deployed.
This project builds on foundations of human-computer interaction in security and on the investigators' preliminary research: Blythe, Koppel, and Smith have studied workers' reasons for and methods of circumvention, while Xie has studied techniques for helping mobile-app users (who may be enterprise workers) apply security controls to apps before installing them on their mobile devices. Research in large enterprise systems increasingly finds that such apps are a major source of malware invasions into those larger systems. Similarly, with the expanded use of BYOD (bring your own device), such dangers are widespread in the absence of security controls and of users' ability to understand and follow those controls. Security-control circumvention by enterprise workers as mobile-app users is reflected in their willingness to install apps without sufficiently assessing their risk.
This project develops a scientific approach to testing hypotheses about network security when those tests must consider layers of complex interacting policies within the network stack. The work is motivated by the observation that the infrastructure of large networks is hideously complex, and so is vulnerable to various attacks on services and data. Coping with these vulnerabilities consumes significant human management time simply to understand the network's behavior. Unfortunately, even very simple behaviors – such as whether it is possible for any packet (however unusual) to flow between two devices – are difficult for operators to test, and synthesizing these low-level behaviors into a high-level quantitative understanding of network security has been beyond reach.
We propose to develop the analysis methodology needed to support scientific reasoning about the security of networks, with a particular focus on information and data flow security. The core of this vision is Network Hypothesis Testing Methodology (NetHTM), a set of techniques for performing and integrating security analyses applied at different network layers, in different ways, to pose and rigorously answer quantitative hypotheses about the end-to-end security of a network.
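To make the notion of a testable network hypothesis concrete, here is a minimal sketch (not the NetHTM implementation) of one of the low-level questions mentioned above: whether any packet can flow between two devices. It assumes a hypothetical toy model in which a network is a directed graph whose edges carry allow-predicates over a single header field (a port), and checks a reachability hypothesis by breadth-first search.

```python
from collections import deque

def reachable(graph, src, dst, port):
    """Return True if some path can forward a packet with `port` from src to dst."""
    seen, frontier = {src}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return True
        for nbr, allow in graph.get(node, []):
            if nbr not in seen and allow(port):
                seen.add(nbr)
                frontier.append(nbr)
    return False

# Toy topology: a perimeter firewall admits only web ports, and an internal
# firewall between 'web' and 'db' admits only port 5432.
topology = {
    "internet": [("web", lambda p: p in (80, 443))],
    "web":      [("db",  lambda p: p == 5432)],
    "db":       [],
}

# Hypothesis: "no packet on port 22 can ever flow from the internet to the db".
print(reachable(topology, "internet", "db", 22))   # the hypothesis holds if False
print(reachable(topology, "web", "db", 5432))      # the app path itself is open
```

Real networks add header rewriting, stateful rules, and multiple layers of policy, which is exactly why composing such per-layer analyses into end-to-end quantitative answers is the hard part the project targets.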
Brighten Godfrey is an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, and also serves as co-founder and CTO of Veriflow. Before joining UIUC, he was a Ph.D. student at UC Berkeley, advised by Ion Stoica, and a visiting researcher at Intel Labs Berkeley.
In security more than in other computing disciplines, professionals depend heavily on rapid analysis of voluminous streams of data gathered by a combination of network-, file-, and system-level monitors. The data are used both to maintain a constant vigil against attacks and compromises on a target system and to improve the monitoring itself. While the focus of the security engineer is on ensuring operational security, it is our experience that the data are a gold mine of information that can be used to develop greater fundamental insight and hence a stronger scientific basis for building, monitoring, and analyzing future secure systems. The challenge lies in being able to extract the underlying models and develop methods and tools that can be the cornerstone of the next generation of disruptive technologies.
This project is taking an important step in addressing that challenge by developing scientific principles and data-driven formalisms that allow construction of dynamic situation-awareness models that are adaptive to system and environment changes (specifically, malicious attacks and accidental errors). Such models will be able (i) to identify and capture attacker actions at the system and network levels, and hence provide a way to reason about the attack independently of the vulnerabilities exploited; and (ii) to assist in reconfiguring the monitoring system (e.g., placing and dynamically configuring the detectors) to adapt detection capabilities to changes in the underlying infrastructure and to the growing sophistication of attackers. In brief, the continuous measurements and the models will form the basis of what we call execution under probation technologies.
The goal of this project is to develop quantitative, scientifically grounded decision-making methodologies to guide information security investments in private or public organizations, combining human and technological concerns; to demonstrate their use in two or more real-life case studies; and to prototype tools and demonstrate their proof of concept on those case studies. It is our hypothesis that quantitative security models, augmented by collected data, can be used to make credible business decisions about the use of particular security technologies to protect an organization's infrastructure. The key output of this research will be a data-driven, model-based methodology for security investment decision-making, with associated software tool support, and a validation of the usefulness of the tool in a realistic setting. The main scientific contributions will be new abstractions for modeling human behavior, and techniques and tools for optimizing the associated data-collection strategy.
This project is a collaboration between the University of Illinois at Urbana-Champaign and Newcastle University.
Dr. Al-Shaer is a Distinguished Research Fellow in the Software and Societal Systems Department of the School of Computer Science, and a Faculty Member of CyLab, at Carnegie Mellon University. He was also a Distinguished Career Professor in the College of Engineering at Carnegie Mellon University. Before joining CMU, Dr. Al-Shaer was a Professor and the Founding Director of the NSF Cybersecurity Analytics and Automation (CCAA) center at the University of North Carolina at Charlotte from 2011 to 2020.
Dr. Al-Shaer's primary research areas are AI-enabled cybersecurity, including automated adaptive response, domain-specific language models for cybersecurity, formal methods for configuration verification and synthesis, active cyber deception, cyber deterrence, and network resilience. He has published 10 books and more than 250 refereed publications in his areas of expertise. Dr. Al-Shaer was designated by the Department of Defense (DoD) as a Subject Matter Expert (SME) on security analytics and automation in 2011. He was also awarded the IBM Faculty Award in 2012 and the UNC Charlotte Faculty Research Award in 2013.
Dr. Al-Shaer was the General Chair of the ARO Autonomous Cyber Deception Workshop in 2018, of the ACM Conference on Computer and Communications Security (CCS) in 2009 and 2010, and of the NSF Workshop on Assurable and Usable Security Configuration in 2008. He was also the Program Committee Chair for many conferences and workshops, including ACM/IEEE SafeConfig 2013 and 2015, IEEE Integrated Management (IM) 2007, and IEEE POLICY 2008. Al-Shaer has two accepted patents and several submitted ones. He has also led several technology transfer projects, and he serves as an advisory board member for leading companies in cybersecurity automation.
Cyber-Physical Systems (CPS) are vulnerable to elusive dynamics-aware attacks that subtly change local behaviors in ways that lead to large deviations in global behavior, and to system instability. The broad agenda for this project is to classify attacks on different classes of CPS based on detectability. In particular, we are identifying attacks that are impossible to detect in a given class of CPS (with reasonable resources), and we are developing detection algorithms for those that are possible. The methods developed will primarily be aimed at scenarios in which attackers have some ability to intermittently disrupt either the timing or the quality-of-service of software or communication processes, even though the processes may not have been breached in the traditional sense. Much of the work will also apply to cases where such limited disruptions are introduced physically. Our approach is based on a set of powerful technical tools that draw from and combine ideas from robust control theory, formal methods, and information theory.
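A minimal sketch (not the project's algorithm) of why dynamics-aware attacks are elusive: for a scalar linear system, a model-based detector compares measurements against attack-free predictions and flags when the residual exceeds a threshold. An attacker injecting a small bias each step can stay below such a per-step threshold while steadily steering the state away from nominal. All parameters here are illustrative.

```python
import numpy as np

def simulate(a, steps, attack):
    """Run x[k+1] = a*x[k] + u[k] + attack(k) alongside the detector's
    attack-free model, returning per-step residuals and true states."""
    x_true, x_model = 0.0, 0.0
    residuals, states = [], []
    for k in range(steps):
        u = 1.0                    # nominal control input
        x_true = a * x_true + u + attack(k)
        x_model = a * x_model + u  # detector's prediction (assumes no attack)
        residuals.append(abs(x_true - x_model))
        states.append(x_true)
    return np.array(residuals), np.array(states)

threshold = 0.5
res, states = simulate(a=0.9, steps=50, attack=lambda k: 0.04)

# A per-step residual check never fires on this attack...
undetected = bool((res < threshold).all())
# ...yet the deviation accumulates toward 0.04 / (1 - 0.9) = 0.4,
# an order of magnitude larger than any single injection.
```

This is the flavor of stealthiness the project formalizes: combining robust control, formal methods, and information theory to characterize which such attacks are fundamentally undetectable and to build detectors for the rest.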
Sayan Mitra is a Professor, Associate Head of Graduate Affairs, and John Bardeen Faculty Scholar of ECE at UIUC. His research is on safe autonomy. His research group develops theory, algorithms, and tools for control synthesis and verification; some of these have been patented and are being commercialized. Several former PhD students are now professors: Taylor Johnson (Vanderbilt), Parasara Sridhar Duggirala (UNC Chapel Hill), and Chuchu Fan (MIT). Sayan received his PhD from MIT under the supervision of Nancy Lynch. His textbook on verification of cyber-physical systems was published by MIT Press in 2021. The group's work has been recognized with the NSF CAREER Award, the AFOSR Young Investigator Research Program Award, the ACM SRC gold prize, the IEEE-HKN C. Holmes MacDonald Outstanding Teaching Award (2013), a Siebel Fellowship, and several best paper awards.
Abstract:
Cyber-Physical Systems are converging towards component-oriented, platform-based implementations. The community-driven Robot Operating System (ROS) and the proprietary Residential Operating System (of Prodea) are just two examples that indicate this trend. We envision that CPS software will be frequently updated and reconfigured, yet it cannot be guaranteed that deployed systems are completely free of security vulnerabilities. Clearly, there is a need to incorporate appropriate security features in these platforms so that they exhibit the necessary resilience properties and continue providing services even if parts of the larger system are compromised. In this project we develop a model-driven approach to system architecting for these component-based CPS that results in analysis techniques to determine the resilience of the systems, and in synthesis techniques that assist with the implementation. Prototypes and experimental studies will provide the vehicle for evaluation.
Hard Problems Addressed:
- Develop means to design and analyze system architectures that deliver required service in the face of compromised components
- Formal and informal domain-specific modeling languages to represent properties of CPS relevant for resilience
- Scalable and composable analysis approaches to determine the resilience metrics for the system of CPS against security attacks
- Requirements for trustworthy and dependable component-based software platforms that provide support for resilience
Dr. Gabor Karsai is a Professor of Electrical Engineering and Computer Science at Vanderbilt University, and Senior Research Scientist at the Institute for Software-Integrated Systems. He has over thirty years of experience in software engineering. He conducts research in the design and implementation of embedded systems, in programming tools for visual programming environments, in the theory and practice of model-integrated computing, and in resource management and scheduling systems. He received his Diploma, MSc, and Dr. Techn. degrees from the Technical University of Budapest, Hungary, in 1982, 1984 and 1988, respectively, and his PhD from Vanderbilt University in 1988. He has published over 150 papers, and he is the co-author of four patents. He has managed several large research projects on model-based integration of embedded systems, model-based toolchains, fault-adaptive control technology, and coordinated scheduling and planning.
Education
Ph.D., Electrical and Computer Engineering
Vanderbilt University
Dr. Techn., Computer Engineering
Technical University of Budapest
M.S., Electrical Engineering
Technical University of Budapest
B.S., Electrical Engineering
Technical University of Budapest
This research thrust focuses on the design and development of a highly accessible and scalable testbed environment for supporting the evaluation and experimentation efforts across the entire SURE research portfolio. This work is based on our existing technologies and previous results with the Command and Control Windtunnel (C2WT), a large-scale simulation integration platform, and WebGME, a metaprogrammable web-based modeling environment with special emphasis on on-line collaboration, model versioning, and design reuse. We are utilizing these core technologies and other third-party tools (e.g., Emulab) to provide a web-based interface for designing, executing, and evaluating testbenches on a cloud-based simulation infrastructure. The metaprogrammable environment enables us to develop and provide modeling languages that specifically target each research thrust. Furthermore, by leveraging built-in prototypical inheritance, we are building reusable library components in the target domains.
First, the developed visual modeling languages will be used to capture the physical, computational, and communication infrastructure. Second, the simulation models will describe the deployment, configuration, and/or the concrete strategies of security measures and algorithms. Third, the environment will provide entry points for injecting various attack or failure events, either from an existing library of components or via a model-based description of the algorithm.
To stimulate the experimentation and validation efforts in the SURE research thrusts, and to motivate students and outside contributors to participate, we are developing "Red Team" vs. "Blue Team" simulation scenarios in which, using a given CPS infrastructure model, one team is tasked with developing and/or configuring security and fail-over measures while the other team develops an attack model. After the active design phase--when both teams work in parallel and in isolation--the simulation is executed with no external user interaction, potentially several times. The winner is decided based on the scoring weights and rules captured by the infrastructure model. If successful, we may organize championships and maintain a leader board for each infrastructure model.
Peter Volgyesi is a Research Scientist at the Institute for Software Integrated Systems at Vanderbilt University. In the past decade Mr. Volgyesi has been working on several novel and high impact projects sponsored by DARPA, NSF, ONR, ARL and industrial companies (Lockheed Martin, BAE Systems, the Boeing Company, Raytheon, Microsoft). He is one of the architects of the Generic Modeling Environment, a widely used metaprogrammable visual modeling tool, and WebGME - its modern web-based variant. Mr. Volgyesi had a leading role in developing the real-time signal processing algorithms in PinPtr, a low cost, low power countersniper system. He also participated in the development of the Radio Interferometric Positioning System (RIPS), a patented technology for accurate low-power node localization. As PI on two NSF funded projects Mr. Volgyesi and his team developed a low-power software-defined radio platform (MarmotE) and a component-based development toolchain targeting multicore SoC architectures for wireless cyber-physical systems. His team won the Preliminary Tournament of the DARPA Spectrum Challenge in September, 2013.
In security, our concern is typically with securing a particular network or eliminating security holes in a particular piece of software. These are important, but they miss the fact that being secure is fundamentally about the security of all constituent parts, rather than any single part in isolation. In principle, if we can control all the pieces of a system, we can secure all possible channels of attack. In practice, however, the system and security design of the various components are performed by different agents with varying and often conflicting interests. Our goal is to develop a framework, and associated computational tools, for addressing security holistically, accounting for the incentives of all the parties.
In particular, the project aspires to investigate the many facets of decentralization in security. The overarching aim is to answer the following three questions in a variety of relevant settings: 1) what does decentralization of security decisions and the associated incentive misalignment imply for overall system security; 2) in a world of decentralized security decisions, how should an organization optimally secure itself; and 3) how can one design incentives or constraints to improve overall system security. Much of the project focus will be on the interdependence of security decisions, which gives rise to competing decision externalities: positive externalities, where securing one's system reduces exposure risk for others, and negative externalities, where securing one system incentivizes the attacker to attack another. The former will tend to lead to under-investment in security; the latter are expected to push organizations to invest too much.
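The under-investment effect of positive externalities can be seen in a small numerical example. The model below is hypothetical (a sketch, not the project's framework): two symmetric firms each choose an investment x, each firm's expected cost is L·exp(-(x_i + b·x_j)) + x_i, and b in (0,1) is the positive externality from the other firm's security. Iterating best responses finds the Nash equilibrium, which invests strictly less than the social optimum.

```python
import numpy as np

# Hypothetical parameters: potential loss L and externality strength b.
L, b = 20.0, 0.5
grid = np.linspace(0.0, 5.0, 2001)  # candidate investment levels

def cost(xi, xj):
    """Firm i's expected loss plus investment cost, given the other's xj."""
    return L * np.exp(-(xi + b * xj)) + xi

def best_response(xj):
    return grid[np.argmin(cost(grid, xj))]

# Iterate best responses to the symmetric Nash equilibrium.
x = 1.0
for _ in range(100):
    x = best_response(x)
nash = x

# Social optimum: choose a common x minimizing the *sum* of both costs.
social = grid[np.argmin(2 * cost(grid, grid))]

# With b > 0, each firm free-rides on the other's protection: nash < social.
# Analytically, nash = ln(L)/(1+b) while social = ln(L*(1+b))/(1+b).
```

The gap ln(1+b)/(1+b) between the two levels is exactly the externality each firm ignores; the project's game-theoretic models generalize this to asymmetric organizations, networks of interdependence, and negative externalities.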
Yevgeniy Vorobeychik is an Assistant Professor of Computer Science and Computer Engineering at Vanderbilt University. Previously, he was a Principal Member of Technical Staff at Sandia National Laboratories. Between 2008 and 2010 he was a post-doctoral research associate in the Computer and Information Science department at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game-theoretic modeling of security, algorithmic and behavioral game theory and incentive design, optimization, complex systems, epidemic control, network economics, and machine learning. Dr. Vorobeychik has published over 60 research articles on these topics. He was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award. In 2012 he was nominated for the Sandia Employee Recognition Award for Technical Excellence. He was also a recipient of an NSF IGERT interdisciplinary research fellowship at the University of Michigan, as well as a distinguished Computer Engineering undergraduate award at Northwestern University.