A Look at Resilience Breakdowns of Human-assisted Cyber Reasoning Systems


Yan Shoshitaishvili is an Assistant Professor at Arizona State University, where he pursues parallel passions of cybersecurity research, real-world impact, and education. His research focuses on automated program analysis and vulnerability detection techniques. In addition to publishing dozens of research papers in top academic venues, Yan led Shellphish’s participation in the DARPA Cyber Grand Challenge, culminating in a fully autonomous hacking system that won third place in the competition.

Underpinning much of his research is angr, the open-source program analysis framework created by Yan and his collaborators. This framework has powered hundreds of research papers, helped find thousands of security bugs, and continues to be used in research labs and companies around the world.

When he is not doing research, Yan participates in the enthusiast and educational cybersecurity communities. He is a Captain Emeritus of Shellphish, one of the oldest ethical hacking groups in the world, and a founder of the Order of the Overflow, with whom he ran DEF CON CTF, the “world championship” of cybersecurity competitions, from 2018 through 2021. Now, he helps demystify the hacking scene as a co-host of the CTF RadiOOO podcast and forge connections between the government and the hacking community through his participation on CISA’s Technical Advisory Council. In order to inspire students to pursue cybersecurity (and, ultimately, compete at DEF CON!), Yan created pwn.college, an open practice-makes-perfect learning platform that is revolutionizing cybersecurity education for aspiring hackers around the world.

Abstract
In the years since their humble beginnings in DARPA's Cyber Grand Challenge, the autonomous program analysis, vulnerability assessment, and security mitigation systems known as Cyber Reasoning Systems (CRSes) have continued to evolve. Modern CRSes are able to analyze complex software, find non-traditional vulnerabilities, and deploy cutting-edge mitigations. Critically, to surmount limitations in their original autonomous operation mode, they can now seamlessly integrate with human agents.
 
However, despite these advancements, Cyber Reasoning Systems remain alarmingly fallible in practice. This talk will explore the past, present, and potential future of CRS fallibility in autonomous and semi-autonomous settings, drawing parallels with the fallibility of humans and of other branches of Artificial Intelligence, and attempting to identify concrete research directions that can improve the state of the art and, eventually, enable truly resilient Cyber Reasoning Systems!

License: CC-2.5