State-of-the-art template reconstruction attacks assume that an adversary has access to part or all of the functionality of a target model. In a practical scenario, however, rigid protection of the target system prevents the adversary from gaining knowledge of the target model. In this paper, we propose a novel template reconstruction attack method utilizing a feature converter. The feature converter enables an adversary to reconstruct an image from a corresponding compromised template without knowledge of the target model. The proposed method was evaluated with qualitative and quantitative measures. We achieved a Successful Attack Rate (SAR) of 0.90 on the Labeled Faces in the Wild (LFW) dataset with compromised templates of only 1280 identities.
Authored by Muku Akasaka, Soshi Maeda, Yuya Sato, Masakatsu Nishigaki, Tetsushi Ohki
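Since the abstract describes the feature converter only at a high level, the following is a minimal, hypothetical PyTorch sketch of one plausible reading of the idea: a small network maps compromised templates from the unknown target model into the feature space of a local surrogate model, whose decoder can then reconstruct a face. All module names, dimensions, and the paired-data training assumption are ours, not the authors'.

```python
import torch
import torch.nn as nn

class FeatureConverter(nn.Module):
    """Maps a compromised template (512-d assumed) into a surrogate feature space."""
    def __init__(self, in_dim=512, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, out_dim),
        )

    def forward(self, template):
        return self.net(template)

converter = FeatureConverter()
optimizer = torch.optim.Adam(converter.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(leaked_template, paired_image, surrogate_encoder):
    """One alignment step, assuming the adversary holds auxiliary (template, image) pairs."""
    target_feat = surrogate_encoder(paired_image).detach()
    loss = criterion(converter(leaked_template), target_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, a surrogate-side decoder (pre-trained on public face data) would turn
# converter(compromised_template) into a reconstructed image.
leaked = torch.randn(4, 512)      # stand-in for compromised templates
print(converter(leaked).shape)    # torch.Size([4, 512])
```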
Satellite technologies are used for both civil and military purposes in the modern world, and typical applications include Communication, Navigation and Surveillance (CNS) services, which have a direct impact on several economic, social and environmental protection activities. The increasing reliance on satellite services for safety-of-life and mission-critical applications (e.g., transport, defense and public safety services) creates a severe, although often overlooked, security problem, particularly when it comes to cyber threats. Like other increasingly digitized services, satellites and space platforms are vulnerable to cyberattacks. Thus, the existence of cybersecurity flaws may pose major threats to space-based assets and associated key infrastructure on the ground. These dangers could obstruct global economic progress and, by implication, international security if they are not properly addressed. Mega-constellations make protecting space infrastructure from cyberattacks much more difficult. This emphasizes the importance of defensive cyber countermeasures to minimize interruptions and ensure efficient and reliable contributions to critical infrastructure operations. Very importantly, space systems are inherently complex Cyber-Physical System (CPS) architectures, in which communication, control and computing processes are tightly interleaved and associated hardware/software components are seamlessly integrated. This represents a new challenge, as many known physical threats (e.g., conventional electronic warfare measures) can now manifest their effects in cyberspace and, vice versa, some cyber threats can have detrimental effects in the physical domain. The concept of cyberspace underlies nearly every aspect of modern society's critical activities and relies heavily on critical infrastructure for economic advancement, public safety and national security. Many governments have expressed the desire to make a substantial contribution to securing cyberspace and are focusing on different aspects of the evolving industrial ecosystem, largely under the impulse of digital transformation and sustainable development goals. The level of cybersecurity attained in this framework is the sum of all national and international activities implemented to protect all actions in the cyber-physical ecosystem. This paper focuses on cybersecurity threats and vulnerabilities in various segments of space CPS architectures. More specifically, the paper identifies the applicable cyber threat mechanisms, conceivable threat actors and the associated space business implications. It also presents metrics and strategies for countering cyber threats and facilitating space mission assurance.
Authored by Kathiravan Thangavel, Jordan Plotnek, Alessandro Gardi, Roberto Sabatini
Recommender systems are powerful tools which touch on numerous aspects of everyday life, from shopping to consuming content, and beyond. However, like other machine learning models, recommender system models are vulnerable to adversarial attacks, and their performance can drop significantly with a slight modification of the input data. Most of the studies in the area of adversarial machine learning are focused on the image and vision domain. There are very few works that study adversarial attacks on recommender systems and even fewer that study ways to make recommender systems robust and reliable. In this study, we explore two state-of-the-art adversarial attack methods proposed by Tang et al. [1] and Christakopoulou et al. [2], and we report our proposed defenses and experimental evaluations against these attacks. In particular, we observe that low-rank reconstruction and/or transformation of the attacked data has a significant alleviating effect on the attack, and we present extensive experimental evidence to demonstrate the effectiveness of this approach. We also show that a simple classifier is able to learn to detect fake users from real users and can successfully discard them from the dataset. This observation highlights the fact that the threat model does not generate fake users that mimic the behavior of real users, so they can be easily distinguished from real users' behavior. We also examine how transforming the latent factors of the matrix factorization model into a low-dimensional space impacts its performance. Furthermore, we combine fake users from both attacks to examine how our proposed defense is able to defend against multiple attacks at the same time. Local low-rank reconstruction was able to reduce the hit ratio of target items from 23.54% to 15.69% while the overall performance of the recommender system was preserved.
Authored by Negin Entezari, Evangelos Papalexakis
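As a concrete illustration of the low-rank reconstruction defense mentioned above, here is a minimal sketch using a global truncated SVD (the paper reports a local low-rank variant; the rank and the random stand-in data below are assumptions, not the paper's setup).

```python
import numpy as np

def low_rank_reconstruct(interactions: np.ndarray, rank: int = 32) -> np.ndarray:
    """Return a rank-`rank` approximation of the (users x items) interaction matrix,
    attenuating low-magnitude perturbations introduced by injected fake users."""
    U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :rank], s[:rank], Vt[:rank, :]
    return U_k @ np.diag(s_k) @ Vt_k

# Example: a small random implicit-feedback matrix standing in for real (poisoned) data.
rng = np.random.default_rng(0)
poisoned = (rng.random((1000, 500)) < 0.05).astype(float)
cleaned = low_rank_reconstruct(poisoned, rank=32)
print(cleaned.shape)   # (1000, 500)
```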
With the increased commercialization of deep learning (DL) models, there is also a growing need to protect them from illicit usage. For reasons of cost and ease of deployment, it is becoming increasingly common to run DL models on the hardware of third parties. Although there are some hardware mechanisms, such as Trusted Execution Environments (TEE), to protect sensitive data, their availability is still limited and they are not well suited to resource-demanding tasks, like DL models, that benefit from hardware accelerators. In this work, we make model stealing more difficult by presenting a novel way to divide up a DL model, with the main part on normal infrastructure and a small part in a remote TEE, and to train it using adversarial techniques. In initial experiments on image classification models for the Fashion-MNIST and CIFAR-10 datasets, we observed that this obfuscation protection makes it significantly more difficult for an adversary to leverage the exposed model components.
Authored by Jakob Sternby, Bjorn Johansson, Michael Liljenstam
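The splitting idea above can be illustrated with a short, hypothetical PyTorch sketch (layer sizes and the split point are assumptions, not the authors' architecture): a large public backbone runs on untrusted hardware, while a small head is intended to execute inside the TEE, so only intermediate activations cross the boundary.

```python
import torch
import torch.nn as nn

public_backbone = nn.Sequential(              # deployed on the untrusted accelerator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

private_head = nn.Sequential(                 # small component intended for the remote TEE
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

def predict(x):
    """x: a batch of 1x28x28 images (Fashion-MNIST-sized, matching the abstract)."""
    activations = public_backbone(x)          # only these activations would leave the host
    return private_head(activations)

print(predict(torch.randn(2, 1, 28, 28)).shape)   # torch.Size([2, 10])
```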
Proactive approaches to security, such as adversary emulation, leverage information about threat actors and their techniques (Cyber Threat Intelligence, CTI). However, most CTI still comes in unstructured forms (i.e., natural language), such as incident reports and leaked documents. To support proactive security efforts, we present an experimental study on the automatic classification of unstructured CTI into attack techniques using machine learning (ML). We contribute two new datasets for CTI analysis, and we evaluate several ML models, including both traditional and deep learning-based ones. We present several lessons learned about how ML can perform at this task, which classifiers perform best and under which conditions, what the main causes of classification errors are, and the challenges ahead for CTI analysis.
Authored by Vittorio Orbinato, Mariarosaria Barbaraci, Roberto Natella, Domenico Cotroneo
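To make the task concrete, here is a tiny, hedged sketch of mapping unstructured CTI text to attack-technique labels with a classical pipeline. The inline sentences, the labels, and the TF-IDF plus linear SVM choice are illustrative assumptions, not the paper's datasets or its best-performing model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy CTI snippets labeled with MITRE ATT&CK technique IDs.
texts = [
    "the actor dumped credentials from lsass memory",
    "a spearphishing attachment delivered the loader",
    "scheduled tasks were created for persistence",
]
labels = ["T1003", "T1566.001", "T1053.005"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["attackers harvested credentials from memory"]))
```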
In recent years, security and privacy have become challenging due to the rapid development of technology. In 2021, Khan et al. proposed an authentication and key agreement framework for smart grid networks and claimed that the proposed protocol provides security against all well-known attacks. However, in this paper, we present an analysis showing that the protocol proposed by Khan et al. fails to protect the secrecy of the shared session key between the user and the service provider. An adversary can derive the session key (online) by intercepting the communicated messages under the Dolev-Yao threat model. We simulated Khan et al.'s protocol for formal security verification using the Tamarin Prover and found a trace that derives the temporary key, which is used to encrypt the login request that includes the user's secret credentials. Hence, the protocol also fails to preserve the privacy of the user's credentials, and therefore any adversary can impersonate the user. As a result, the protocol proposed by Khan et al. is not suitable for practical applications.
Authored by Singam Ram, Vanga Odelu
The traditional threat modeling methodologies work well on a small scale, when evaluating targets such as a data field, a software application, or a system component, but they do not allow for comprehensive evaluation of an entire enterprise architecture. They also do not enumerate and consider a comprehensive set of actual threat actions observed in the wild. Because of the lack of adequate threat modeling methodologies for determining cybersecurity protection needs on an enterprise scale, cybersecurity executives and decision makers have traditionally relied upon marketing pressure as the main input into decision making for investments in cybersecurity capabilities (tools). A new methodology, originally developed by the Department of Defense and further expanded by the Department of Homeland Security, for the first time allows for a threat-based, end-to-end evaluation of cybersecurity architectures and determination of gaps or areas in need of future investment. Although in the public domain, this methodology has not been used outside of the federal government. This paper examines the new threat modeling approach that allows organizations to look at their cybersecurity protections from the standpoint of an adversary. The methodology enumerates threat actions that have been observed in the wild using a cyber threat framework and scores cybersecurity architectural capabilities for their ability to protect against, detect, and recover from each threat action. The results of the analysis form a matrix called a capability coverage map that visually represents the coverage, gaps, and overlaps against threat actions. The threat actions can be further prioritized using a threat heat map, a visual representation of the prevalence and maneuverability of threat actions that can be overlaid on top of a coverage map.
Authored by Branko Bokan, Joost Santos
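A toy sketch of the capability coverage map described above: each capability is scored against each threat action, and threat actions with no coverage surface as gaps. The threat actions, capabilities, and scores below are invented for illustration and are not part of the methodology's actual catalog.

```python
import pandas as pd

threat_actions = ["spearphishing", "credential theft", "lateral movement", "data exfiltration"]
capabilities = {
    "email gateway": {"spearphishing": 2, "credential theft": 0, "lateral movement": 0, "data exfiltration": 0},
    "EDR":           {"spearphishing": 1, "credential theft": 2, "lateral movement": 2, "data exfiltration": 0},
}

# Rows: threat actions; columns: capabilities; scores 0-2 for ability to protect/detect/recover.
coverage = pd.DataFrame(capabilities).reindex(threat_actions).fillna(0)
gaps = coverage[coverage.max(axis=1) == 0].index.tolist()
print(coverage)
print("uncovered threat actions:", gaps)      # ['data exfiltration']
```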
Network Intrusion Detection Systems (NIDS) monitor networking environments for suspicious events that could compromise the availability, integrity, or confidentiality of the network's resources. To ensure NIDSs play their vital roles, it is necessary to identify how they can be attacked by adopting a viewpoint similar to the adversary's, in order to identify vulnerabilities and gaps in defenses. Accordingly, effective countermeasures can be designed to thwart any potential attacks. Machine learning (ML) approaches have been adopted widely for network anomaly detection. However, it has been found that ML models are vulnerable to adversarial attacks. In such attacks, subtle perturbations are inserted into the original inputs at inference time in order to evade classifier detection, or at training time to degrade its performance. Yet, modeling adversarial attacks and the associated threats of employing machine learning approaches for NIDSs has not been addressed. One of the growing challenges is to address the diversity of ML-based systems and ensure their security and trust. In this paper, we conduct threat modeling for ML-based NIDS using the STRIDE and Attack Tree approaches to identify the potential threats at different levels. We model the threats that can potentially be realized by exploiting vulnerabilities in ML algorithms through a simplified structural attack tree. To provide holistic threat modeling, we apply the STRIDE method to the system's data flow to uncover further technical threats. Our models revealed 46 possible threats to consider. These models can help in understanding the different ways that an ML-based NIDS can be attacked; hence, hardening measures can be developed to prevent these potential attacks from achieving their goals.
Authored by Huda Alatwi, Charles Morisset
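For readers unfamiliar with attack trees, here is a small illustrative data structure in the spirit of the structural attack tree mentioned above (the node names and feasibility flags are assumptions, not the paper's tree): OR nodes succeed if any child is feasible, AND nodes require all children.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "OR"                    # "OR" or "AND"; leaves ignore the gate
    feasible: bool = False              # leaf feasibility, set by the analyst
    children: list = field(default_factory=list)

def evaluate(node: Node) -> bool:
    """Propagate leaf feasibility up through AND/OR gates to the root goal."""
    if not node.children:
        return node.feasible
    results = [evaluate(c) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

evade_nids = Node("evade ML-based NIDS", "OR", children=[
    Node("evasion at inference", "AND", children=[
        Node("query access to classifier", feasible=True),
        Node("craft adversarial perturbation", feasible=True),
    ]),
    Node("poison training data", feasible=False),
])
print(evaluate(evade_nids))   # True: the inference-time branch is feasible
```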
Successive approximation register analog-to-digital converters (SAR ADCs) are widely adopted in Internet of Things (IoT) systems due to their simple structure and high energy efficiency. Unfortunately, a SAR ADC dissipates varied and unique power features when it converts different input signals, leading to severe vulnerability to power side-channel attack (PSA). The adversary can accurately derive the input signal by measuring only the power information from the analog supply pin (AVDD), digital supply pin (DVDD), and/or reference pin (Ref), which is fed to trained machine learning models. This paper first presents a detailed mathematical analysis of power side-channel attack (PSA) on SAR ADCs, concluding that the power information from AVDD is the most vulnerable to PSA compared with the other supply pins. Then, an LSB-reused protection technique is proposed, which utilizes the characteristic of the LSB from the SAR ADC itself to protect against PSA. Lastly, this technique is verified in a 12-bit 5 MS/s secure SAR ADC implemented in 65 nm technology. Using the current waveform from AVDD, the adopted convolutional neural network (CNN) algorithms can achieve greater than 99% prediction accuracy from LSB to MSB in the SAR ADC without protection. With the proposed protection, the bit-wise accuracy drops to around 50%.
Authored by Lele Fang, Jiahao Liu, Yan Zhu, Chi-Hang Chan, Rui Martins
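As a hedged sketch of the attack model described above, the following 1-D CNN maps a supply-current trace to one output bit of the converter. The trace length, layer sizes, and random stand-in data are assumptions, not the paper's measurement setup or network.

```python
import torch
import torch.nn as nn

class BitPredictor(nn.Module):
    """Predicts a single ADC output bit (0 or 1) from a power/current trace."""
    def __init__(self, trace_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (trace_len // 4), 2)

    def forward(self, trace):                  # trace: (batch, 1, trace_len)
        return self.classifier(self.features(trace).flatten(1))

model = BitPredictor()
fake_traces = torch.randn(8, 1, 256)           # stand-in for measured AVDD current traces
print(model(fake_traces).shape)                # torch.Size([8, 2])
```

In the protected design, the LSB-reused technique is reported to drive this per-bit prediction accuracy down to roughly chance level (around 50%).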
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution results in fast computation for small parameter spaces. In the case of less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
Authored by Lisa Oakley, Alina Oprea, Stavros Tripakis
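One natural way to write down the ε-ball threat model mentioned in the abstract (our notation, not necessarily the paper's): the adversary may replace the DTMC transition matrix P with any stochastic matrix whose entries deviate from P by at most ε.

```latex
\mathcal{A}_{\epsilon}(P) = \Big\{ P' \;:\; \textstyle\sum_{s'} P'(s,s') = 1,\;
P'(s,s') \ge 0,\; |P'(s,s') - P(s,s')| \le \epsilon \;\; \forall\, s, s' \Big\}
```

Roughly, adversarial robustness verification then asks whether the probabilistic temporal logic property of interest holds for every such P', while worst-case attack synthesis looks for the P' in this set that degrades the property the most.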
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, stochastic Petri nets, and so on. These methods analyze the effect of attacks on the system, and estimate security attributes from different perspectives. However, they require information from experts in the application domain for properly capturing the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works focused on the generation of low-level security models from a high-level description of the system, hiding the technical details from the modeler. In this paper, we build on an existing ontology framework for security analysis, available in the ADVISE Meta tool, and we extend it in two directions: i) to cover the attack patterns available in the CAPEC database, a comprehensive dictionary of known patterns of attack, and ii) to capture all the adversaries' profiles as defined in the Threat Agent Library (TAL), a reference library for defining the characteristics of external and internal threat agents ranging from industrial spies to untrained employees. The proposed extension supports a richer combination of adversaries' profiles and attack paths, and provides guidance on how to further enrich the ontology based on taxonomies of attacks and adversaries.
Authored by Francesco Mariotti, Matteo Tavanti, Leonardo Montecchi, Paolo Lollini
Connected Autonomous Vehicle (CAV) applications have shown the promise of transformative impact on road safety, transportation experience, and sustainability. However, they open up large and complex attack surfaces: an adversary can corrupt sensory and communication inputs with catastrophic results. A key challenge in the development of security solutions for CAV applications is the lack of effective infrastructure for evaluating such solutions. In this paper, we address this problem by designing an automated, flexible evaluation infrastructure for CAV security solutions. Our tool, CAVELIER, provides an extensible evaluation architecture for CAV security solutions against compromised communication and sensor channels. The tool can be customized for a variety of CAV applications and to target diverse usage models. We illustrate the framework with a number of case studies for security resiliency evaluation in Cooperative Adaptive Cruise Control (CACC).
Authored by Srivalli Boddupalli, Venkata Chamarthi, Chung-Wei Lin, Sandip Ray
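To illustrate why compromised communication channels matter for CACC, here is a generic constant-time-headway follower sketch (this is not CAVELIER or any specific controller; the gains and headway are assumptions): a spoofed leader-speed message directly shifts the commanded acceleration.

```python
def cacc_accel(gap, ego_speed, leader_speed_msg, headway=1.2, k_gap=0.4, k_speed=0.8):
    """Desired acceleration from spacing error and the (possibly spoofed) leader speed."""
    desired_gap = headway * ego_speed
    return k_gap * (gap - desired_gap) + k_speed * (leader_speed_msg - ego_speed)

honest = cacc_accel(gap=30.0, ego_speed=25.0, leader_speed_msg=25.0)
spoofed = cacc_accel(gap=30.0, ego_speed=25.0, leader_speed_msg=40.0)  # inflated V2V speed
print(honest, spoofed)   # 0.0 vs 12.0: the spoofed message commands a hard acceleration
```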
Machine learning (ML) models are increasingly being used in the development of malware detection systems. Existing research in this area primarily focuses on developing new architectures and feature representation techniques to improve the accuracy of the model. However, recent studies have shown that existing state-of-the-art techniques are vulnerable to adversarial machine learning (AML) attacks. Among those, data poisoning attacks have been identified as a top concern for ML practitioners. A recently studied clean-label poisoning attack, in which an adversary intentionally crafts training samples so that the model learns a backdoor watermark, was shown to degrade the performance of state-of-the-art classifiers. Defenses against such poisoning attacks have been largely under-explored. We investigate a recently proposed clean-label poisoning attack and leverage an ensemble-based Nested Training technique to remove most of the poisoned samples from a poisoned training dataset. Our technique leverages the relatively large sensitivity of poisoned samples to feature noise, which disproportionately affects the accuracy of a backdoored model. In particular, we show that for two state-of-the-art architectures trained on the EMBER dataset and affected by the clean-label attack, the Nested Training approach improves the accuracy on backdoored malware samples from 3.42% to 93.2%. We also show that samples produced by the clean-label attack often successfully evade malware classification even when the classifier is not poisoned during training. However, even in such scenarios, our Nested Training technique can mitigate the effect of such clean-label-based evasion attacks by recovering the model's malware-detection accuracy from 3.57% to 93.2%.
Authored by Samson Ho, Achyut Reddy, Sridhar Venkatesan, Rauf Izmailov, Ritu Chadha, Alina Oprea
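A hedged sketch of the noise-sensitivity intuition described above (this is not the paper's Nested Training procedure, only an illustration of the underlying signal): training samples whose predictions flip under small feature noise are flagged as likely poisoned and removed.

```python
import numpy as np

def noise_sensitivity(model, X, noise_scale=0.05, trials=10, seed=0):
    """Fraction of noisy repetitions whose prediction differs from the clean prediction."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        flips += (model.predict(noisy) != base)
    return flips / trials

# Usage sketch (clf, X_train, y_train, and the 0.3 threshold are placeholders):
# suspicious = noise_sensitivity(clf, X_train) > 0.3
# X_clean, y_clean = X_train[~suspicious], y_train[~suspicious]
```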