The Computational Cybersecurity in Compromised Environments (C3E) Workshops have introduced challenge problems since 2013. The Workshops bring together experts to tackle tough intellectual challenges and point toward novel, practical solutions. Last year, C3E 2018 looked into Adversarial Machine Learning (AML), its connections with Explainable Artificial Intelligence (XAI), and Decision Support Vulnerabilities.
For 2019, C3E further examined a key element explored during the 2018 Workshop: the role of the human in cyber environments. Specifically, the 2019 C3E Workshop explored cognitive security and human-machine teaming in cyber. We find ourselves in the midst of an era of social-information warfare, for which we are presently ill-prepared to mount an effective defense. It is necessary to investigate hybrid approaches that account for both the technological and human sides of the problem. Each track addressed a challenge problem during the breakout sessions. A follow-on program will seek to provide continuity by continuing research into issues identified at the workshop.
Follow-on Challenge Problem Research Projects
The follow-on originally sought research on one of the Challenge Problems, Cognitive Security and Human-Machine Teamwork, and had planned to engage part-time researchers to identify and explore specific issues developed at the workshop and present their findings at the 2020 C3E Workshop. This plan was overtaken by events with the COVID-19 restrictions. However, you may still conduct such research within the C3E 2020 Workshop guidelines. Details about this workshop and other Challenge Problems can be found at https://cps-vo.org/group/c3e/challenge.
The first problem is to develop cognitive models that consider contextual information about configurations, the skills needed to exploit various systems, and complex network structures while predicting a hacker's decisions. These models can act as synthetic hackers to test various defense algorithms.
The second problem involves identifying some of the issues associated with Human-Machine Teaming (HMT) in a cyber defense scenario, using that information to better understand the underlying events taking place, and developing approaches to mitigate the problems identified.
The anticipated outcome will include a description of the critical security events taking place and the reasoning process followed by the researcher. That process may include details on how the automation's assessment was taken into account and possible issues or limitations associated with the support it provided. The second outcome is a recommendation of approaches to better leverage the automation and mitigate the issues observed. Of particular interest is the level of human engagement with the automation. A researcher might prepare software to demonstrate an approach to the Challenge Problem. Because the project is federally funded, this sample software should be made available as open source and posted to the CPS-VO web site when completed.
More details about the tracks and problems are provided below.
Background. Ongoing research has developed cognitive models that predict hackers' and defenders' decisions in abstract cyber security tasks. Cranford et al. (2018, 2019) developed an instance-based learning (IBL) cognitive model (Gonzalez, Lerch & Lebiere, 2003) of attackers that accurately predicts human decision making from experience. Currently, these models can predict a hacker's actions only from abstract information such as "attack on network node". Detailed defender-side features, such as the ports, operating system, or data on the network that was attacked, along with information about information gathering, attack type, and exploits, were not part of these models. Thus, there is a gap between current cognitive agents and human-like agents for complex cyber security scenarios.
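The core IBL mechanism referenced above can be sketched as follows. This is an illustrative reconstruction of instance-based learning (Gonzalez, Lerch & Lebiere, 2003), not the Cranford et al. implementation; the class name, parameter values (decay, noise, default utility), and reward scheme are all assumptions.

```python
import math
import random

class IBLAgent:
    """Minimal instance-based learning (IBL) sketch: stores (time, outcome)
    instances per option and chooses by blended value, weighting past
    outcomes by a recency-based activation."""

    def __init__(self, options, default_utility=10.0, decay=0.5, noise=0.25):
        self.decay = decay    # d: memory decay rate
        self.noise = noise    # s: activation noise scale
        self.t = 0            # decision counter (model time)
        # Prepopulate one optimistic instance per option to drive exploration.
        self.memory = {o: [(0, default_utility)] for o in options}

    def _activation(self, timestamps):
        # Base-level activation: ln of summed power-law decayed recencies,
        # plus Gaussian noise standing in for activation noise.
        base = math.log(sum((self.t - ti + 1) ** -self.decay for ti in timestamps))
        return base + random.gauss(0, self.noise)

    def blended_value(self, option):
        # Softmax over activations gives each stored outcome a retrieval
        # probability; blended value = probability-weighted mean outcome.
        by_outcome = {}
        for ti, outcome in self.memory[option]:
            by_outcome.setdefault(outcome, []).append(ti)
        acts = {u: self._activation(ts) for u, ts in by_outcome.items()}
        tau = math.sqrt(2) * self.noise
        z = sum(math.exp(a / tau) for a in acts.values())
        return sum(u * math.exp(a / tau) / z for u, a in acts.items())

    def choose(self):
        self.t += 1
        return max(self.memory, key=self.blended_value)

    def feedback(self, option, outcome):
        # Store the experienced outcome as a new instance.
        self.memory[option].append((self.t, outcome))
```

An agent like this learns from experienced payoffs rather than from a fixed policy, which is what lets it stand in as a "synthetic hacker" whose biases (recency, surprise at masked configurations) emerge from memory dynamics.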
Challenge. The challenge is to develop cognitive models that consider contextual information about configurations, the skills needed to exploit various systems, complex network structures, etc., while predicting a hacker's decisions. Such models can act as synthetic hackers to test various defense algorithms.
Task. The task is to develop a model that first probes the network to gather information on the number of nodes, open ports, operating systems, and service versions. After collecting this information, the attacker model decides to attack one node. The attack decision includes an understanding of which exploit to use against the node's configuration. In this task, the defender masks the true configurations of some nodes with different observable configurations.
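The two-phase structure of the task, and the way masking defeats an attacker who trusts observed configurations, can be illustrated with a toy environment. All node names, service strings, and exploit labels below are hypothetical; this is not the CyberVAN schema or the DDMLab experiment design.

```python
import random

# Hypothetical nodes: the defender may present a masked (observed)
# configuration that differs from the true one.
NETWORK = {
    "web-01":  {"true": "apache-2.4", "observed": "apache-2.4", "ports": [80, 443]},
    "db-01":   {"true": "mysql-5.7",  "observed": "nginx-1.14", "ports": [3306]},  # masked
    "mail-01": {"true": "exim-4.92",  "observed": "exim-4.92",  "ports": [25]},
}

# Toy table mapping a service to the exploit an attacker would select for it.
EXPLOITS = {"apache-2.4": "CVE-A", "nginx-1.14": "CVE-N",
            "mysql-5.7": "CVE-M", "exim-4.92": "CVE-E"}

def probe(network):
    """Phase 1: gather the observable configuration of every node."""
    return {name: {"service": info["observed"], "ports": info["ports"]}
            for name, info in network.items()}

def attack(network, recon, target=None):
    """Phase 2: pick one node, choose the exploit matching its *observed*
    service, and succeed only if that matches the true configuration."""
    target = target or random.choice(list(recon))
    exploit = EXPLOITS[recon[target]["service"]]
    success = EXPLOITS[network[target]["true"]] == exploit
    return target, exploit, success
```

In this sketch an attack on the masked node fails because the exploit was chosen against the decoy service; a cognitive model of the attacker would additionally capture how repeated failures of this kind reshape the attacker's future probing and target selection.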
Dataset. The CMU Dynamic Decision Making Laboratory (DDMLab) is conducting the above experiment using CyberVAN, a testbed for cyber security research. The DDMLab is collecting human data to test the effectiveness of different masking strategies, and this dataset could serve as the dataset for the challenge problem. Alternatively, challenge problem researchers could use their own physical networks as a testbed to test the effectiveness of different masking strategies and generate datasets.
Outcome. A cognitive agent that can mimic human actions in tasks involving probing and attacking different network nodes. A predictive model of the hacker would also help in learning the attacker's biases and exploiting them to improve cyber defense.
Some questions to be addressed are, for example:
- What are the cognitive processes involved in an attacker's decision-making in cyberattack situations?
- How accurately can cognitive models of attackers predict their actions?
- What can we learn about attackers' biases using cognitive models?
- Given an accurate model of an attacker, how can defensive algorithms be made adaptive?
- Can cognitive models provide better information about the effectiveness of defensive decisions compared to an attacker's cyberattack models?
- Cranford, E., Lebiere, C., Gonzalez, C., Cooney, S., Vayanos, P., & Tambe, M. (2018). Learning about Cyber Deception through Simulations: Predictions of Human Decision Making with Deceptive Signals in Stackelberg Security Games. In CogSci.
- Cranford, E. A., Gonzalez, C., Aggarwal, P., Cooney, S., Tambe, M., & Lebiere, C. (2019). Towards personalized deceptive signaling for cyber defense using cognitive models. In Proceedings of the 17th Annual Meeting of the International Conference on Cognitive Modeling (in press). Montreal, Canada.
- Gonzalez, C., Lerch, F. J., & Lebiere, C. (2003). Instance-based learning in dynamic decision making. Cognitive Science, 27, 591-635.
Background. The traditional model of users operating automated processes and analytics in cyber operations may no longer be valid for modern cyber defense operations centers, where automation is often adaptive and generally works at larger scales and much faster than people. Modern cyber operations are becoming much more akin to environments where people and machines work as a team, leveraging one another's strengths, actions, and decisions. These new roles for both humans and machines in cyber operations will inevitably raise important challenges, including the need to support shared mental models between humans and machines, new ways to better define and share context in human-machine teaming, novel interaction modes and visualizations, and new ways to evaluate the performance and effectiveness of human-machine teams. These are important areas of research that will be topics of discussion at C3E 2019.
Motivation for the Challenge Problem. The proposed challenge problem is two-fold. First, it involves identifying some of the issues associated with HMT in a cyber defense scenario and using that information to better understand the underlying events taking place in the scenario. The second part of the challenge problem is to propose approaches to mitigate the problems identified in the first part.
Proposed Scenario. The cyber defense scenario used for the challenge problem is based on a set of stages of one or more kill chains taking place over an emulated enterprise network. The network is monitored by different algorithms, each making its own analysis and assessment of the events and situation. The user has partial visibility of the network events but has full access to the assessments provided by all algorithms and analysts, as well as background information about the types of algorithms and the training data used for each algorithm both prior to and during the scenario.
Challenge Problem. The analyst is provided with a description of the network and some information about the possible events taking place in the scenario, including information about the different defensive autonomous systems monitoring and responding to network events. The analyst also has information about the types of automation being used and details about how they were trained prior to the scenario. Based on that information, the analyst is expected to make an assessment of the underlying autonomous defensive events taking place and create a narrative describing the reasoning process. The second part of the problem is the identification of approaches or processes to mitigate any issues observed in the first part of the analysis, or to improve effectiveness by better leveraging the human-machine teaming.
Expected Results. The anticipated outcome of the challenge problem includes two parts. First, a description of the critical security events taking place in the scenario and the reasoning process followed by the analyst. That process is expected to include details on how the automation's assessment was taken into account and possible issues or limitations associated with the support it provided. The second outcome is a recommendation of approaches to better leverage the automation and mitigate the issues observed in the first part of the problem.
Some questions to be addressed are, for example:
- What research must be accomplished to begin to create human machines teams defending cyber networks?
- How do humans and automation/intelligent systems share internal states (beliefs, intentions, etc.) with one another?
- How do we create and maintain a shared context between humans and automation/intelligent systems?
- What are the most appropriate modes of interaction between humans and automation/intelligent systems in cyber operations?
- How do we quantify the performance or effectiveness of human-machine teams?
- Rabinowitz, N., Perbet, F., Song, H. F., Zhang, C., Eslami, S. M. A., & Botvinick, M. (2018). Machine Theory of Mind. In ICML.
- Madni, A., & Madni, C. (2018). Architectural Framework for Exploring Adaptive Human-Machine Teaming Options in Simulated Dynamic Environments. Systems, 6(4), 44.
- Don Norman (2017) Design, Business Models, and Human-Technology Teamwork, Research-Technology Management, 60:1, 26-30, DOI: 10.1080/08956308.2017.1255051
- Damacharla, P., Javaid, A., Devabhaktuni, V., Common Metrics to Benchmark Human-Machine Teams (HMT): A Review, IEEE Access, Vol 6, 2018
- Soule, N., Carvalho, M., Last, D., et al. Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses.
Submit any questions to Don Goff, co-PI with Dan Wolf for the Challenge Problem at firstname.lastname@example.org.