Overview

Background: Cyberspace operations (CO) rely heavily on the degree to which users trust, or are suspicious of, their information technology (IT) systems.  Computer network defense service providers hope that users will remain cognizant of threats such as phishing and other social engineering attacks and will apply security policies, standards, and procedures so that IT behaviors indicative of an actual intrusion are reported while those stemming from an innocent system anomaly are not.  Underreporting an actual attack represents an unacceptable statistical Type II error, but overreporting even the most innocuous system bug is a Type I error that wastes precious investigative and forensic IT resources.  Current role- or rule-based security control models balance the level of control against the cost of potential losses, thereby minimizing the human component of the equation. An actor’s role (user or operator) and responsibilities, predispositional beliefs and competencies, and historical human-automation biases all affect one’s likelihood of trusting system information or suspecting malicious activity. The structure of the human-IT system relationship, mission conditions, and operational phase most likely affect both users’ and operators’ ability to trust and to suspect.  On the offensive side, operators need to know the potential effects of their attacks and their probability of success, much as munitions planners rely on weapons-effects charts when planning air operations.  Unfortunately for all cyber warriors, little guidance exists on these matters.  To date, the primary response to the cyber war has been technology-centric, largely ignoring the role of the human operator.  When one considers the human in the loop, the core of the problem centers on an appreciation of how end users trust, or are suspicious of, their systems and the larger network.  A basic understanding of how human trust and suspicion operate in the cyber domain is paramount before any reliable defensive training or offensive effects-based planning can occur.
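As a minimal sketch, the under-/over-reporting trade-off described above can be framed as a binary hypothesis test; the cost weights and priors below are illustrative placeholders, not quantities specified by this topic. Let $H_0$ denote an innocent system anomaly, $H_1$ an actual intrusion, and $R$ the event that the user files a report. Then

    \alpha = P(R \mid H_0) \qquad \text{(Type I: over-reporting a benign anomaly)}

    \beta = P(\bar{R} \mid H_1) \qquad \text{(Type II: missing an actual intrusion)}

and the expected cost of a given reporting policy is

    E[C] = c_{I}\,\alpha\,P(H_0) + c_{II}\,\beta\,P(H_1), \qquad c_{II} \gg c_{I},

where the inequality encodes the premise that a missed intrusion is far costlier than a wasted investigation.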

Objective: This MURI’s objective is to initiate a basic research program that builds the foundational understanding of human trust and suspicion in the cyberspace domain.  The intent is to elevate our understanding of the human role in cyberspace operations, thereby providing guidance toward previously untapped capabilities in this critical domain.

Research Concentration Areas:  Suggested research areas include but are not limited to: (1) Experimental Psychology, to examine human cognition, behavior, and decision making as they relate to trust and suspicion in the cyberspace domain; (2) Neuroscience, to establish correlates and a biological basis for said cognition and behavior; (3) Computer Science, to “close the loop” and integrate the foundational human understanding with the cyberspace platforms of interest; and (4) cybersecurity performance metrics associated with human users and operators of the cyber platforms of interest.  Leveraging these primary areas will enable a triangulation of metrics for the measurement of trust and suspicion.  Such a comprehensive measure of trust has never been attempted in previous research, yet it is necessary for understanding the dynamic nature of trust and suspicion in this new context.  Building on this foundation of metrics, the proposed research program will examine critical antecedents of trust and suspicion so that systemic human vulnerabilities that create susceptibility to cyber attacks can be identified and mitigated.
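As a notional illustration of such metric triangulation, the sketch below combines a self-report, a behavioral, and a physiological channel into a single trust estimate. The channel names, normalizations, and weights are hypothetical placeholders, not validated instruments.

# Notional sketch: triangulating a composite trust estimate from three
# measurement channels. All weights, scales, and channel names are
# hypothetical illustrations, not validated instruments.

from dataclasses import dataclass


@dataclass
class TrustObservation:
    self_report: float      # questionnaire score, normalized to [0, 1]
    compliance_rate: float  # behavioral: fraction of system warnings acted on
    physio_arousal: float   # physiological correlate, normalized to [0, 1]


def composite_trust(obs: TrustObservation,
                    weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted triangulation across the three channels.

    Higher physiological arousal is treated here as evidence of
    suspicion, so it enters inverted. The weighting scheme is a
    placeholder for whatever model the research would validate.
    """
    w_report, w_behavior, w_physio = weights
    score = (w_report * obs.self_report
             + w_behavior * obs.compliance_rate
             + w_physio * (1.0 - obs.physio_arousal))
    return score / sum(weights)


# Example: a user who reports moderate trust, usually heeds warnings,
# and shows low arousal yields a composite estimate of 0.8.
print(round(composite_trust(TrustObservation(0.7, 0.9, 0.2)), 2))

The point of such a sketch is only that convergent measurement across channels is what would let researchers detect when the channels disagree, which is itself informative about trust calibration.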

Impact: An in-depth understanding of the integral role the human operator plays in the cyber domain will provide numerous novel capabilities.  On the defensive side, this research will enable the development of superior cyber defense technology by identifying systemic human vulnerabilities that must be considered if the technology is to be effective. For example, host-based, adaptive, mission-centric computer security policies that incorporate operational conditions and individual user/operator competencies and beliefs could significantly mitigate both internal and external threat events. Combined computational and social science constructs can help illuminate relationships between user and operator trust and suspicion. In conjunction with Intrusion Prevention Systems (IPS), “tailorable” host-based security interfaces could invoke multiple options, from semi-automated user queries to automated remedial actions, all user-centric. Currently, pop-up warnings of possible system intrusions are effective only to the extent that the human operator acknowledges the warning, and such acknowledgement hinges on the level of trust the operator places in the warning system (i.e., appropriate trust calibration).  This research will also enable the design of novel warning systems that function in response to identified human vulnerabilities (e.g., detected cognitive or behavioral signals of decreased vigilance).

On the offensive side, this research stands to enable powerful new capabilities in precision cyber operations, e.g., computer network attack, exploitation, and counter-cyber operations.  With a thorough understanding of how trust and suspicion operate and of the vulnerabilities associated with them, cyber operations can be targeted at key individuals, populations, and times to greatly increase their effectiveness.  Furthermore, such an understanding will provide the capability to model the secondary and tertiary effects of cyber operations at the human level.  For example, in addition to the system failure caused by a denial-of-service attack, there are secondary and tertiary effects on subsequent human trust in systems.  Understanding and modeling these effects will inform the timing and targeting of subsequent operations, whether cyber or otherwise.
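As a notional illustration of modeling such second-order effects at the human level, the sketch below treats post-attack trust as an immediate drop followed by gradual recovery toward a baseline. The exponential form and every parameter value are hypothetical assumptions, not findings of any study.

# Notional sketch: modeling the secondary effect of a successful attack
# on subsequent human trust in a system. The exponential-recovery form
# and all parameters are hypothetical placeholders for models the
# proposed research would develop and validate.

import math


def trust_after_attack(t_days: float,
                       baseline: float = 0.8,
                       shock: float = 0.5,
                       recovery_rate: float = 0.1) -> float:
    """Estimated trust level t_days after an attack attributed to the system.

    Trust drops immediately by `shock` and relaxes back toward
    `baseline` at `recovery_rate` per day (simple exponential recovery).
    """
    return baseline - shock * math.exp(-recovery_rate * t_days)


# A planner could read off when trust has rebounded enough that a
# follow-on deception would again be believed, informing the timing
# of subsequent operations.
for day in (0, 7, 30):
    print(day, round(trust_after_attack(day), 3))

Under these placeholder parameters, trust falls from 0.80 to 0.30 immediately after the attack and recovers to roughly 0.78 after a month; a validated curve of this kind is what would let planners reason about the timing and targeting of follow-on operations.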