Autonomous agents for cyber applications need to learn, reason, and adaptively deploy security rules to defend networked computer systems while maintaining mission-critical, operationally relevant workflows. The goal of the project is to develop machine learning methods for the design of neurosymbolic cyber-security agents that can react autonomously to cyber-attacks. The agents must mitigate cyber-attacks by deploying mitigations and countermeasures at variable-length time intervals: detecting cyber-attacks, isolating compromised components, resetting compromised components to known secure states, and switching to failover configurations. Although recent advances in deep machine learning have enabled the design of sophisticated agents for well-defined tasks, orchestrating defensive actions requires the integration of symbolic models with neural components. Agents need information from a very high-dimensional state space, such as alerts from intrusion detection systems and sensory data monitoring the status of operational workflows. Heterogeneity in time scales across software and systems introduces significant challenges, and determining an optimal mitigation action requires making decisions in the presence of incomplete and noisy information. Further, neurosymbolic models can facilitate effective human-machine interaction, improving trust in machine recommendations and actions.
The project will develop a neurosymbolic model representation referred to as Evolving Behavior Trees (EBTs). Specifically, the research objectives of the project are to develop (1) methods for learning EBTs, (2) methods for the assurance of EBTs, and (3) methods for evaluating autonomous cyber agents based on EBTs.
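To make the behavior-tree idea concrete, the sketch below shows a classical (non-evolving) behavior tree wired to the mitigation actions named above. All class and function names here are hypothetical illustrations, not the project's EBT implementation: a `Selector` tries children until one succeeds, a `Sequence` runs children until one fails, and leaves either test a condition (an IDS alert) or execute a mitigation.

```python
# Minimal behavior-tree sketch (illustrative only; class names are
# hypothetical and do not reflect the project's actual EBT design).
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: runs a function against the shared state."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, state):
        return self.fn(state)

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

def do(name):
    """Build a mitigation leaf that records the action taken."""
    def run(state):
        state["mitigations"].append(name)
        return SUCCESS
    return Action(name, run)

# Condition leaf: did the intrusion detection system raise an alert?
alert_raised = Action("alert?",
                      lambda s: SUCCESS if s["ids_alert"] else FAILURE)

# On an alert, isolate then reset the compromised component;
# otherwise fall through to routine monitoring.
tree = Selector(
    Sequence(alert_raised, do("isolate"), do("reset")),
    do("monitor"),
)

state = {"ids_alert": True, "mitigations": []}
tree.tick(state)
print(state["mitigations"])  # → ['isolate', 'reset']
```

In an EBT setting, the structure of such a tree would itself be subject to learning and evolution rather than being hand-coded as above; the learned tree remains a symbolic, inspectable artifact, which is what supports assurance and human trust.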