Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying both structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution yields fast computation for small parameter spaces. For less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
Authored by Lisa Oakley, Alina Oprea, Stavros Tripakis
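As a rough illustration of the ε-ball threat model and robustness check described in the abstract above (not the authors' optimization-based verification or synthesis solutions), the Python sketch below perturbs the transitions of a toy DTMC and compares reachability probabilities. The chain, the value of ε, and the sampling-based search for a bad perturbation are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): an epsilon-ball perturbation of a
# toy DTMC and a check of how much a reachability probability can change.
# The chain, epsilon, and the random search are illustrative assumptions.
import numpy as np

# Toy DTMC: state 0 = start, 1 = intermediate, 2 = goal (absorbing), 3 = fail (absorbing)
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.3, 0.0, 0.6, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
TARGET = 2
TRANSIENT = [0, 1]

def reach_prob(P, start=0):
    """Probability of eventually reaching TARGET, via the standard linear system."""
    A = np.eye(len(TRANSIENT)) - P[np.ix_(TRANSIENT, TRANSIENT)]
    b = P[np.ix_(TRANSIENT, [TARGET])].flatten()
    x = np.linalg.solve(A, b)
    return x[TRANSIENT.index(start)]

def perturb(P, epsilon, rng):
    """Structure-preserving adversary: shift each nonzero entry by up to epsilon,
    then renormalize each row (an approximate epsilon-ball, for illustration)."""
    Q = P.copy()
    for s in TRANSIENT:
        noise = rng.uniform(-epsilon, epsilon, size=P.shape[1]) * (P[s] > 0)
        Q[s] = np.clip(P[s] + noise, 0.0, 1.0)
        Q[s] /= Q[s].sum()
    return Q

rng = np.random.default_rng(0)
base = reach_prob(P)
worst_drop = max(base - reach_prob(perturb(P, epsilon=0.05, rng=rng)) for _ in range(1000))
print(f"nominal P(reach goal) = {base:.3f}, worst sampled drop under eps=0.05: {worst_drop:.3f}")
# In the spirit of the paper: the system would be considered robust for a given delta
# if no admissible perturbation changes the satisfaction probability by more than delta.
```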
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, and stochastic Petri nets, among others. These methods analyze the effect of attacks on the system and estimate security attributes from different perspectives. However, they require input from experts in the application domain to properly capture the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works have focused on generating low-level security models from a high-level description of the system, hiding the technical details from the modeler. In this paper we build on an existing ontology framework for security analysis, available in the ADVISE Meta tool, and extend it in two directions: i) to cover the attack patterns available in the CAPEC database, a comprehensive dictionary of known patterns of attack, and ii) to capture all the adversary profiles defined in the Threat Agent Library (TAL), a reference library for characterizing external and internal threat agents ranging from industrial spies to untrained employees. The proposed extension supports a richer combination of adversary profiles and attack paths, and provides guidance on how to further enrich the ontology based on taxonomies of attacks and adversaries.
Authored by Francesco Mariotti, Matteo Tavanti, Leonardo Montecchi, Paolo Lollini
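One hypothetical way to picture the pairing of TAL adversary profiles with CAPEC attack patterns described above is sketched below. The class fields, skill scale, and feasibility rule are illustrative assumptions and do not reflect the actual ADVISE Meta ontology schema.

```python
# Hypothetical sketch only: a simple representation of combining TAL adversary
# profiles with CAPEC attack patterns. Fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class AttackPattern:            # e.g., an entry from the CAPEC dictionary
    capec_id: str
    name: str
    required_skill: int         # minimum skill level needed to execute the pattern

@dataclass
class ThreatAgent:              # e.g., a profile from the Threat Agent Library (TAL)
    name: str
    skill: int                  # 0 = none ... 4 = expert (illustrative scale)
    intent: str                 # hostile / non-hostile
    access: str                 # internal / external

def feasible_patterns(agent: ThreatAgent, patterns: list[AttackPattern]) -> list[AttackPattern]:
    """Pair an adversary profile with the attack patterns it could plausibly carry out."""
    return [p for p in patterns if agent.skill >= p.required_skill]

patterns = [
    AttackPattern("CAPEC-98", "Phishing", required_skill=1),
    AttackPattern("CAPEC-112", "Brute Force", required_skill=2),
]
spy = ThreatAgent("Industrial Spy", skill=4, intent="hostile", access="external")
employee = ThreatAgent("Untrained Employee", skill=0, intent="non-hostile", access="internal")
print([p.name for p in feasible_patterns(spy, patterns)])       # both patterns
print([p.name for p in feasible_patterns(employee, patterns)])  # none
```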
Connected Autonomous Vehicle (CAV) applications have shown the promise of transformative impact on road safety, transportation experience, and sustainability. However, they also expose large and complex attack surfaces: an adversary can corrupt sensory and communication inputs with catastrophic results. A key challenge in the development of security solutions for CAV applications is the lack of effective infrastructure for evaluating such solutions. In this paper, we address this problem by designing an automated, flexible evaluation infrastructure for CAV security solutions. Our tool, CAVELIER, provides an extensible evaluation architecture for CAV security solutions against compromised communication and sensor channels. The tool can be customized for a variety of CAV applications and targeted to diverse usage models. We illustrate the framework with a number of case studies for security resiliency evaluation in Cooperative Adaptive Cruise Control (CACC).
Authored by Srivalli Boddupalli, Venkata Chamarthi, Chung-Wei Lin, Sandip Ray
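To make the compromised-channel threat concrete, the sketch below simulates a toy CACC follower that consumes a (possibly spoofed) lead-vehicle acceleration received over V2V. This is not CAVELIER itself; the controller gains, vehicle dynamics, and attack model are illustrative assumptions.

```python
# Toy sketch only (not CAVELIER): modeling a compromised V2V channel when evaluating
# the resiliency of a CACC follower. Gains, dynamics, and attack are illustrative.
import numpy as np

DT, HORIZON, DESIRED_GAP = 0.1, 20.0, 10.0   # time step (s), sim length (s), target spacing (m)

def honest_channel(lead_acc, t):
    return lead_acc                           # V2V message reports the true acceleration

def spoofed_channel(lead_acc, t):
    # Attacker on the channel: from t = 5s, report strong acceleration while the
    # lead is actually braking, luring the follower into closing the gap.
    return 3.0 if t >= 5.0 else lead_acc

def simulate(channel):
    lead_pos, lead_vel = 12.0, 20.0           # lead starts 12 m ahead at 20 m/s
    fol_pos, fol_vel = 0.0, 20.0
    min_gap = np.inf
    for step in range(int(HORIZON / DT)):
        t = step * DT
        lead_acc = -2.0 if (t >= 5.0 and lead_vel > 0) else 0.0   # lead brakes at t = 5s
        reported = channel(lead_acc, t)                           # value received over V2V
        gap = lead_pos - fol_pos
        # CACC-style control: spacing error + relative speed + feed-forward of the
        # (possibly spoofed) lead acceleration, clipped to actuator limits.
        fol_acc = np.clip(0.5 * (gap - DESIRED_GAP) + 0.8 * (lead_vel - fol_vel) + reported,
                          -5.0, 3.0)
        lead_vel = max(0.0, lead_vel + lead_acc * DT)
        lead_pos += lead_vel * DT
        fol_vel = max(0.0, fol_vel + fol_acc * DT)
        fol_pos += fol_vel * DT
        min_gap = min(min_gap, lead_pos - fol_pos)
    return min_gap

print(f"min gap, honest channel:      {simulate(honest_channel):6.2f} m")
print(f"min gap, compromised channel: {simulate(spoofed_channel):6.2f} m")  # shrinks toward/below zero
```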
Machine learning (ML) models are increasingly being used in the development of malware detection systems. Existing research in this area primarily focuses on developing new architectures and feature representation techniques to improve model accuracy. However, recent studies have shown that existing state-of-the-art techniques are vulnerable to adversarial machine learning (AML) attacks. Among these, data poisoning attacks have been identified as a top concern for ML practitioners. A recently proposed clean-label poisoning attack, in which an adversary intentionally crafts training samples so that the model learns a backdoor watermark, was shown to degrade the performance of state-of-the-art classifiers. Defenses against such poisoning attacks have been largely under-explored. We investigate a recently proposed clean-label poisoning attack and leverage an ensemble-based Nested Training technique to remove most of the poisoned samples from a poisoned training dataset. Our technique exploits the relatively large sensitivity of poisoned samples to feature noise, which disproportionately affects the accuracy of a backdoored model. In particular, we show that for two state-of-the-art architectures trained on the EMBER dataset and affected by the clean-label attack, the Nested Training approach improves the accuracy on backdoored malware samples from 3.42% to 93.2%. We also show that samples produced by the clean-label attack often successfully evade malware classification even when the classifier is not poisoned during training. However, even in such scenarios, our Nested Training technique can mitigate the effect of such clean-label-based evasion attacks by recovering the model's malware detection accuracy from 3.57% to 93.2%.
Authored by Samson Ho, Achyut Reddy, Sridhar Venkatesan, Rauf Izmailov, Ritu Chadha, Alina Oprea
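The noise-sensitivity intuition behind the defense can be pictured with the sketch below. It is not the paper's Nested Training algorithm: the synthetic data, model, noise scale, and filtering threshold are illustrative assumptions used only to show how unstable samples might be flagged and removed before retraining.

```python
# Hedged illustration only: NOT the paper's Nested Training algorithm, just a minimal
# sketch of the underlying intuition that poisoned samples tend to be more sensitive
# to small feature noise. Data, model, noise scale, and threshold are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a (possibly poisoned) training set of feature vectors.
X = rng.normal(size=(2000, 32))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def noise_sensitivity(model, X, n_trials=10, scale=0.1):
    """Fraction of noisy copies on which the model's prediction flips, per sample."""
    base = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_trials):
        noisy = X + rng.normal(scale=scale, size=X.shape)
        flips += (model.predict(noisy) != base)
    return flips / n_trials

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
sensitivity = noise_sensitivity(model, X)

# Keep only samples whose predictions are stable under noise; retrain on the rest.
keep = sensitivity < 0.3
clean_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[keep], y[keep])
print(f"kept {keep.sum()} / {len(X)} training samples after filtering")
```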