Systematization of Knowledge from Intrusion Detection Models
Lead PI:
Huaiyu Dai
Vulnerability and Resilience Prediction Models
Lead PI:
Mladen Vouk
Warning of Phishing Attacks: Supporting Human Information Processing, Identifying Phishing Deception Indicators, and Reducing Vulnerability
Lead PI:
Christopher Mayhorn
A Human Information-Processing Analysis of Online Deception Detection
Lead PI:
Robert Proctor
Abstract

Human interaction is an integral part of any system. Users interact with a system daily and make many decisions that affect its overall state of security. The fallibility of users has been demonstrated, but little research has focused on fundamental principles for optimizing the usability of security mechanisms. We plan to develop a framework to design, develop, and evaluate user interaction in a security context. We will (a) examine current security mechanisms and develop basic principles that can influence security interface design; (b) introduce new paradigms for security interfaces that utilize those principles; (c) design new human-centric security mechanisms for several problem areas to illustrate the paradigms; and (d) conduct repeatable human-subject experiments to evaluate and refine the principles and paradigms developed in this research.

Leveraging the Effects of Cognitive Function on Input Device Analytics to Improve Security
Lead PI:
David L. Roberts
Abstract

A key concern in security is distinguishing human users from “bot” programs that emulate humans. Users with malicious intent often deploy widespread computational attacks to exploit systems and gain control. Conventional detection techniques fall into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, on the other hand, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns indicative of humans versus bots. Both approaches have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, HIPs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs) that blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems against non-human users.
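To illustrate the passive, HOP-style idea described above (this sketch is not part of the project itself), one commonly cited behavioral signal is input-timing regularity: scripted input tends to be near-metronomic, while human typing is irregular. The function name, threshold, and sample data below are all hypothetical, chosen only for illustration.

```python
import statistics

def looks_scripted(intervals_ms, cv_threshold=0.15):
    """Toy HOP-style check on inter-keystroke timing.

    intervals_ms: gaps between successive key events, in milliseconds.
    cv_threshold: hypothetical cutoff on the coefficient of variation
    (stdev / mean); very uniform timing suggests scripted input.
    """
    if len(intervals_ms) < 2:
        return False  # too little evidence to judge either way
    mean = statistics.mean(intervals_ms)
    if mean == 0:
        return True  # zero-delay events are a strong bot signal
    cv = statistics.stdev(intervals_ms) / mean
    return cv < cv_threshold

# Human typing is bursty and irregular; a naive script replays
# events at an almost constant rate.
human_gaps = [120, 95, 210, 80, 160, 140, 300, 110]
bot_gaps = [100, 100, 101, 100, 99, 100, 100, 100]
print(looks_scripted(human_gaps), looks_scripted(bot_gaps))  # False True
```

A real imitation attack would of course randomize its timing, which is exactly the arms race motivating the move from simple HOPs toward HSPs.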

Understanding Effects of Norms and Policies on the Robustness, Liveness, and Resilience of Systems
Lead PI:
Emily Berglund
Formal Specification and Analysis of Security-Critical Norms and Policies
Lead PI:
Jon Doyle
Abstract

Goal: To understand how security properties vary with norms and policies that govern the behavior of collaborators (users and organizations), to enable identification of norms and policies that achieve desired tradeoffs between security and user preferences.

Research Questions: How can we verify whether a set of norms (1) is consistent and realizable through the policies and preferences of the collaborators, and (2) achieves specified security properties? How can we predict the difficulty of the reasoned and modular creation and maintenance of sets of norms, policies, and preferences by collaborators?

Scientific Understanding of Policy Complexity
Lead PI:
Ninghui Li
Abstract

Goal: To develop a scientific understanding of what makes security policies complex, as well as metrics for measuring security policy complexity, defined as the degree of difficulty relevant users have in understanding a policy.

Research Questions: What is the right way to define security policy complexity? How should we measure users' ability to understand and specify security policies? What features of policy languages or policies make them inherently more complex? Can we transform a security policy into a logically equivalent one that has lower complexity? In other words, is today's high complexity for security policies accidental or inherent?

Resilience Requirements, Design, and Testing
Lead PI:
Kevin Sullivan
Redundancy for Network Intrusion Prevention Systems (NIPS)
Lead PI:
Michael Reiter