An Investigation of Scientific Principles Involved in Attack-Tolerant Software
Lead PI:
Mladen Vouk
Abstract

High-assurance systems, for which security is especially critical, should be designed to (a) auto-detect attacks (even when correlated); (b) isolate or interfere with the activities of a potential or actual attack; and (c) recover a secure state and continue, or fail safely. Fault-tolerant (FT) systems use forward or backward recovery to continue normal operation despite the presence of hardware or software failures. Similarly, an attack-tolerant (AT) system would recognize security anomalies, possibly identify user “intent”, and effect an appropriate defense and/or isolation. Some of the underlying questions in this context are: How is a security anomaly different from a “normal” anomaly, and how does one reliably recognize it? How does one recognize user intent? How does one deal with security failure-correlation issues? What is the appropriate safe response to detection of a potential security anomaly? The key hypothesis is that all security attacks produce an anomalous state signature that is detectable at run-time, given enough appropriate system, environment, and application provenance information. If that is true (and we plan to test that), then fault-tolerance technology (existing or newly developed) may be used with success to prevent or mitigate a security attack. A range of AT technologies will be reviewed, developed, and assessed.
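The run-time detection idea can be illustrated with a toy sketch (not the project's actual method): build a statistical baseline of a system metric from provenance data, and flag observations whose deviation exceeds a threshold as candidate attack signatures. The metric, data, and threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize normal behavior as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, k=3.0):
    """Flag values more than k standard deviations from the baseline mean."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

# Normal system-call rates observed during provenance collection (toy data).
normal_rates = [100, 102, 98, 101, 99, 100, 103, 97]
baseline = build_baseline(normal_rates)

print(is_anomalous(101, baseline))  # typical load: not flagged
print(is_anomalous(250, baseline))  # e.g., a flooding attack's signature: flagged
```

A real AT system would of course track many correlated features and decide between defense, isolation, and safe failure; this sketch only shows the signature-detection step.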

TEAM

PI: Mladen Vouk
Student: Da Young Lee

Understanding the Fundamental Limits in Passive Inference of Wireless Channel Characteristics
Lead PI:
Huaiyu Dai
Abstract

It is widely accepted that wireless channels decorrelate quickly over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to wrong hypotheses about the inference capability of a passive adversary and creates a false sense of security, which exposes legitimate systems to severe threats with little awareness. In this project, we focus on establishing correct models of channel correlation in wireless environments of interest, and on properly evaluating the safety distance metric of existing and emerging wireless security mechanisms, as well as of cyber-physical systems employing these mechanisms. Upon successful completion of the project, the expected outcome will allow us to accurately determine key system parameters (e.g., the security zone for secret key establishment from wireless channels) and confidently assess the security assurance of wireless security mechanisms. More importantly, the results will correct the previous misconception of channel de-correlation and help security researchers develop new wireless security mechanisms on a proven scientific foundation.
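For context, the conventional model that the project challenges can be written down concretely. Under the classical Clarke/Jakes uniform-scattering model, the spatial correlation of the channel at antenna separation d is ρ(d) = J0(2πd/λ), where J0 is the zeroth-order Bessel function; the half-wavelength rule of thumb comes from this curve's rapid decay. A minimal sketch, computing J0 by numerical integration with only the standard library:

```python
import math

def bessel_j0(x, steps=2000):
    """J0(x) via its integral form: (1/pi) * integral_0^pi cos(x sin t) dt (trapezoid rule)."""
    h = math.pi / steps
    total = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    for i in range(1, steps):
        total += math.cos(x * math.sin(i * h))
    return total * h / math.pi

def spatial_correlation(d_over_lambda):
    """Clarke-model channel correlation at separation d (in wavelengths)."""
    return bessel_j0(2.0 * math.pi * d_over_lambda)

print(round(spatial_correlation(0.0), 3))   # 1.0 at zero separation
print(round(spatial_correlation(0.5), 3))   # about -0.304 at half a wavelength
```

Note that even in this idealized model the correlation magnitude at λ/2 is about 0.3, not zero; real propagation environments can deviate much further from the uniform-scattering assumption, which is precisely the gap between model and reality this project targets.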

TEAM

PIs: Huaiyu Dai, Peng Ning
Student: Xiaofan He

Modeling the risk of user behavior on mobile devices
Lead PI:
Ben Watson
Abstract

It is already true that the majority of users' computing experience is a mobile one. Unfortunately, that mobile experience is also riskier: users are often multitasking, hurrying, or uncomfortable, leading them to make poor security decisions. Our goal is to use mobile sensors to predict when users are distracted in these ways and therefore likely to behave insecurely. We will study this possibility in a series of lab and field experiments.

TEAM

PIs: Benjamin Watson, Will Enck, Anne McLaughlin, Michael Rappa

An Adoption Theory of Secure Software Development Tools
Lead PI:
Emerson Murphy-Hill
Abstract

Programmers interact with a variety of tools that help them do their jobs, from "undo" to FindBugs' security warnings to entire development environments. However, programmers typically know about only a small subset of tools that are available, even when many of those tools might be valuable to them. In this project, we investigate how and why software developers find out about -- and don't find out about -- software security tools. The goal of the project is to help developers use more relevant security tools, more often.

TEAM

PI: Emerson Murphy-Hill
Student: Jim Witschey

Low-level Analytics Models of Cognition for Novel Security Proofs
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent often mount widespread computational attacks in order to exploit systems and gain control. Conventional detection techniques can be grouped into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, on the other hand, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns indicative of humans versus bots. Both approaches have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, HIPs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs), which blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems against non-human users.
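As a toy illustration of the passive (HOP-style) end of this spectrum (not the project's actual technique), a monitor might inspect keystroke inter-arrival times: replayed scripts often emit events at near-constant intervals, while human timing is irregular. The function name, data, and threshold below are illustrative assumptions.

```python
from statistics import mean, pstdev

def looks_scripted(intervals_ms, cv_threshold=0.05):
    """Flag event streams whose timing is suspiciously regular.

    cv (coefficient of variation) = stdev / mean; human input
    typically shows far more variability than a replayed script.
    """
    cv = pstdev(intervals_ms) / mean(intervals_ms)
    return cv < cv_threshold

human_typing = [120, 250, 90, 310, 180, 140, 400]   # irregular gaps (ms)
bot_replay   = [100, 100, 101, 100, 99, 100, 100]   # near-constant gaps

print(looks_scripted(human_typing))  # False
print(looks_scripted(bot_replay))    # True
```

A real HOP would combine many such behavioral features, and a bot can defeat this one check by adding jitter, which is exactly the imitation-attack weakness the abstract describes.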

TEAM

PIs: David Roberts, Robert St. Amant
Students: Titus Barik, Arpan Chakraborty, Brent Harrison

Normative Trust: Toward a Principled Basis for Enabling Trustworthy Decision Making
Lead PI:
Munindar Singh
Abstract

This project seeks to develop a deeper understanding of trust than is supported by current methods, which largely disregard the underlying relationships on the basis of which people do or do not trust each other. Accordingly, we begin from the notion of what we term normative relationships—or norms for short—directed from one principal to another. An example of a normative relationship is a commitment: is the first principal committed to doing something for the second principal? (The other main types of normative relationships are authorizations, prohibitions, powers, and sanctions.) Our broad research hypothesis is that trust can be modeled in terms of the relevant norms being satisfied or violated. To demonstrate the viability of this approach, we are mining commitments from emails (drawn from the well-known Enron dataset) and using them to assess trust. Preliminary results indicate that our methods can effectively estimate the trust-judgment profiles of human subjects.
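A minimal sketch of the modeling idea, under the simplifying assumption (ours, for illustration only) that trust in a principal is estimated from counts of satisfied versus violated commitments, smoothed so that sparse evidence yields a neutral score:

```python
def trust_score(satisfied, violated):
    """Estimate trust in a principal from commitment outcomes.

    Laplace smoothing (+1 / +2) pulls the estimate toward 0.5
    when there is little evidence either way.
    """
    return (satisfied + 1) / (satisfied + violated + 2)

# Toy commitment records mined from a message corpus: (debtor, kept?)
records = [("alice", True), ("alice", True), ("alice", False),
           ("bob", False), ("bob", False), ("bob", False)]

def profile(records):
    """Aggregate per-principal outcomes into trust scores."""
    counts = {}
    for who, kept in records:
        s, v = counts.get(who, (0, 0))
        counts[who] = (s + 1, v) if kept else (s, v + 1)
    return {who: trust_score(s, v) for who, (s, v) in counts.items()}

print(profile(records))  # alice scores high, bob scores low
```

The project's actual pipeline mines the commitments themselves from natural-language email, which is the hard part this sketch omits.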

TEAM

PI: Munindar Singh
Student: Anup Kalia

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.

A Science of Timing Channels in Modern Cloud Environments
Lead PI:
Michael Reiter
Abstract

The eventual goal of our research is to develop a principled design for comprehensively mitigating access-driven timing channels in modern compute clouds, particularly of the "infrastructure as a service" (IaaS) variety. This type of cloud permits a customer to deploy arbitrary guest virtual machines (VMs) to the cloud. The security of the cloud-resident guest VMs depends on the virtual machine monitor (VMM), e.g., Xen, to adequately isolate guest VMs from one another. While modern VMMs are designed to logically isolate guest VMs, there remains the possibility of timing "side channels" that permit one guest VM to learn information about another simply by observing features that reflect the other's effects on the hardware platform. Such attacks are sometimes referred to as "access-driven" timing attacks.
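To make the access-driven idea concrete, here is a deterministic toy simulation (not an actual exploit) of one prime-and-probe round on a shared direct-mapped cache: the attacker fills every cache set, lets the victim run, then re-touches its own lines and learns which sets the victim used from the misses it observes. The cache geometry and names are illustrative assumptions.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each set remembers the owner of its last access."""
    def __init__(self, n_sets=8):
        self.n_sets = n_sets
        self.lines = [None] * n_sets

    def access(self, owner, addr):
        """Access addr (set index = addr mod n_sets); return True on a hit."""
        idx = addr % self.n_sets
        hit = self.lines[idx] == owner
        self.lines[idx] = owner
        return hit

cache = DirectMappedCache()

# Prime: the attacker VM touches one line in every cache set.
for addr in range(cache.n_sets):
    cache.access("attacker", addr)

# The victim VM runs; its accesses evict attacker lines in sets 2 and 5.
for addr in (2, 5):
    cache.access("victim", addr)

# Probe: the attacker re-touches its lines; misses reveal the victim's sets.
leaked = [a for a in range(cache.n_sets) if not cache.access("attacker", a)]
print(leaked)  # [2, 5]
```

On real hardware the "hit or miss" signal is recovered from access latency, and the victim's set usage can leak secret-dependent state such as table indices in a cryptographic routine; mitigating exactly this class of leakage is the project's goal.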

TEAM

PI: Michael Reiter (UNC)
Students: Yinqian Zhang, Peng Li

Studying Latency and Stability of Closed-Loop Sensing-Based Security Systems
Lead PI:
Rudra Dutta
Abstract

In this project, our focus is on understanding a class of security systems in analytical terms at a certain level of abstraction. Specifically, the systems we intend to examine are (i) multipath routing (for increasing reliability) and (ii) dynamic firewalls. For multipath routing, the threat scenario is jamming: the nodes disabled by jamming take the place of compromised components in that they fail to perform their proper function. The multipath and diverse-path mechanisms are intended to allow the system to perform its overall function (critical message delivery) despite this. The project will focus on quantifying and bounding this ability to function redundantly. For the firewall, the compromise consists of an attacker guessing the firewall rules and thereby being able to circumvent them. The system is designed to withstand this by dynamically changing the ruleset applied over time. Our project will focus on quantifying or characterizing this ability.
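The dynamic-firewall idea can be sketched abstractly (our illustration, not the project's system): the defender derives the active ruleset from a secret key and the current time epoch via a keyed hash, so an attacker who guesses today's rules gains no advantage in predicting tomorrow's. The rulesets, key, and function names below are illustrative assumptions.

```python
import hmac, hashlib

# Candidate rulesets the defender rotates among (toy examples).
RULESETS = [
    {"allow_tcp_ports": {22, 443}},
    {"allow_tcp_ports": {2222, 8443}},
    {"allow_tcp_ports": {2022, 4443}},
]

def active_ruleset(secret, epoch):
    """Select this epoch's ruleset via a keyed hash of the epoch number.

    Without the secret, predicting the next epoch's ruleset is no
    better than a uniform guess over RULESETS.
    """
    digest = hmac.new(secret, str(epoch).encode(), hashlib.sha256).digest()
    return RULESETS[digest[0] % len(RULESETS)]

def permits(secret, epoch, port):
    """Does the currently active ruleset allow traffic on this port?"""
    return port in active_ruleset(secret, epoch)["allow_tcp_ports"]

key = b"shared-defender-secret"
# The accepted port set changes unpredictably from epoch to epoch.
print([permits(key, e, 443) for e in range(5)])
```

Characterizing how much this rotation actually reduces an attacker's success probability, as a function of rotation rate and guess rate, is the kind of quantification the project pursues.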

TEAM

PIs: Rudra Dutta, Meeko Oishi (UNM-Albuquerque)
Student: Trisha Biswas

Rudra Dutta


Rudra Dutta was born in Kolkata, India, in 1968. After completing elementary schooling in Kolkata, he received a B.E. in Electrical Engineering from Jadavpur University, Kolkata, India, in 1991, an M.E. in Systems Science and Automation from the Indian Institute of Science, Bangalore, India, in 1993, and a Ph.D. in Computer Science from North Carolina State University, Raleigh, USA, in 2001. From 1993 to 1997 he worked for IBM as a software developer and programmer on various networking-related projects. In the Department of Computer Science at North Carolina State University, Raleigh, he served as Assistant Professor from 2001 to 2007 and Associate Professor from 2007 to 2013, and has been Professor since 2013. During the summer of 2005, he was a visiting researcher at the IBM WebSphere Technology Institute in RTP, NC, USA. His current research interests focus on the design and performance optimization of large networking systems, Internet architecture, wireless networks, and network analytics.

His research is currently supported by grants from the National Science Foundation, the National Security Agency, and industry, including a recent GENI grant and a FIA grant from NSF. He has served as a reviewer for many premier journals; on NSF, DoE, ARO, and NSERC (Canada) review panels; and on the organizing committees of many premier conferences, including as Program Co-chair for the Second International Workshop on Traffic Grooming. Most recently, he has served as Program Chair for the Optical Networking Symposium at IEEE Globecom 2008, General Chair of IEEE ANTS 2010, and guest editor of a special issue on Green Networking and Communications of the Elsevier Journal of Optical Switching and Networking. He is currently serving on the Steering Committee of IEEE ANTS 2013 and on the editorial board of the Elsevier Journal of Optical Switching and Networking.

He is married with two children and lives in Cary, North Carolina with his family. His father and his sister's family live in Kolkata, India.

Spatiotemporal Security Analytics and Human Cognition
Lead PI:
David L. Roberts
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent often mount widespread computational attacks in order to exploit systems and gain control. Conventional detection techniques can be grouped into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, on the other hand, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns indicative of humans versus bots. Both approaches have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, HIPs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs), which blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems against non-human users.

TEAM

PI: David Roberts
Student: Titus Barik
