Virtually every computing system today is at risk from some form of cyber attack. The problem continues to grow in scope, in part because there is not yet a foundational science of security. While the community is certainly improving the security of many systems, that progress is often ad hoc, muddled, and difficult to measure. A broad-spectrum science of security is urgently needed and would involve the systematic gathering of knowledge, new theoretical approaches, observational research, experimental research, and more. Certain subfields of security have a strong scientific basis (e.g., cryptography, formal methods), but there is no comprehensive scientific basis for constructing systems that are trustworthy by design. The lack of a disciplined and rigorous scientific basis profoundly limits our ability to design, deploy, and trust most large-scale and cyber-physical systems (CPS). The goal of the CPS-VO Science of Security Group is therefore to provide a vehicle for community awareness, collaboration, and information sharing in support of maturing the scientific basis for security.

Resilient Control of Cyber-Physical Systems with Distributed Learning
Investigators: Sayan Mitra, Geir Dullerud, and Sanjay Shakkotai
Researchers: Pulkit Katdare and Negin Musavi
Critical cyber and cyber-physical systems (CPS) are beginning to use predictive AI models. These models help to expand, customize, and optimize…
/projects/resilient-control-cyber-physical-systems-distributed-learning
A Human-Agent-Focused Approach to Security Modeling
Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for…
/projects/human-agent-focused-approach-security-modeling
Mixed Initiative and Collaborative Learning in Adversarial Environments
/projects/mixed-initiative-and-collaborative-learning-adversarial-environments
Development of Methodology Guidelines for Security Research
This project seeks to aid the security research community in conducting and reporting methodologically sound science through (1) development, refinement, and use of community-based security research guidelines; and (2) characterization of the security…
/projects/development-methodology-guidelines-security-research
Scalable Privacy Analysis
One major shortcoming of the current "notice and consent" privacy framework is that the constraints on data usage expressed in policies—be they stated privacy practices, regulations, or laws—cannot easily be compared against the technologies that they…
/projects/scalable-privacy-analysis