To date, many research works have applied game theory to model the interaction between a cyber attacker and a defender. At the same time, several challenges prevent the development and practical application of such approaches. One of these challenges is that, at each point in time, neither the cyber attacker nor the defender has accurate information about the adversary's strategy, which results in uncertainty when choosing their own strategy. This paper considers the application of hypergame theory to handle this uncertainty. The authors use an attack graph to determine the possible strategies of the cyber attacker, while a graph of dependencies between the assets of the information system is used to determine the payoff of applying a particular strategy. The result of the research is a proposed approach to security analysis and decision support for responding to security incidents, based on hypergame theory.
Authored by Elena Fedorchenko, Igor Kotenko, Boying Given, Yin Li
The rapid growth of communication networks, coupled with the increasing complexity of cyber threats, necessitates the implementation of proactive measures to protect networks and systems. In this study, we introduce a federated learning-based approach for cyber threat hunting at the endpoint level. The proposed method utilizes the collective intelligence of multiple devices to effectively and confidentially detect attacks on individual machines. A security assessment tool is also developed to emulate the behavior of adversary groups and Advanced Persistent Threat (APT) actors in the network. This tool provides network security experts with the ability to assess their network environment's resilience and aids in generating authentic data derived from diverse threats for use in subsequent stages of the federated learning (FL) model. The results of the experiments demonstrate that the proposed model effectively detects cyber threats on the devices while safeguarding privacy.
Authored by Saeid Sheikhi, Panos Kostakos
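For readers who want a concrete picture of the aggregation step behind such a federated approach, the sketch below shows a generic FedAvg-style weighted parameter average. The abstract does not specify the authors' model or aggregation rule, so the shapes, client counts, and weighting here are illustrative assumptions only.

```python
# Minimal FedAvg-style aggregation sketch (illustrative; the paper's exact
# FL pipeline, model architecture, and weighting are not specified here).
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average per-client parameter lists, weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        acc = np.zeros_like(client_weights[0][layer_idx], dtype=float)
        for weights, n_samples in zip(client_weights, client_sizes):
            acc += (n_samples / total) * weights[layer_idx]
        averaged.append(acc)
    return averaged

# Example: three endpoints contribute updates trained on local telemetry.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_model = federated_average(clients, client_sizes=[100, 250, 50])
```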
As cyber attacks grow in complexity and frequency, cyber threat intelligence (CTI) remains a priority objective for defenders. A critical component of CTI at the strategic level of defensive operations is attack attribution. Attributing an attack to a threat group informs defenders about adversaries that are actively engaging them and advances their ability to respond. In this paper, we propose a data analytic approach towards threat attribution using adversary playbooks of tactics, techniques, and procedures (TTPs). Specifically, our approach uses association rule mining on a large real-world CTI dataset to extend known threat TTP playbooks with statistically probable TTPs the adversary may deploy. The benefits are twofold. First, we offer a dataset of learned TTP associations and extended threat playbooks. Second, we show that we can attribute attacks using a weighted Jaccard similarity with 96% accuracy.
Authored by Kelsie Edie, Cole Mckee, Adam Duby
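The abstract does not spell out the similarity computation; a minimal sketch of a weighted Jaccard similarity between an observed TTP set and known playbooks is given below. The TTP identifiers, weights, and group names are hypothetical, and the paper's actual weighting scheme may differ.

```python
# Hedged sketch: weighted Jaccard similarity between a set of observed TTPs
# and a threat group's playbook. Weights and identifiers below are hypothetical.

def weighted_jaccard(observed, playbook, weights):
    """Weighted Jaccard similarity between two sets of TTP identifiers."""
    union = observed | playbook
    inter = observed & playbook
    w_union = sum(weights.get(t, 1.0) for t in union)
    w_inter = sum(weights.get(t, 1.0) for t in inter)
    return w_inter / w_union if w_union else 0.0

# Example: attribute an incident to the most similar extended playbook.
weights = {"T1059": 0.4, "T1566": 0.9, "T1027": 0.7}          # hypothetical
incident = {"T1566", "T1059", "T1105"}
playbooks = {"APT-A": {"T1566", "T1059", "T1027"},
             "APT-B": {"T1486", "T1105"}}
best_match = max(playbooks, key=lambda g: weighted_jaccard(incident, playbooks[g], weights))
```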
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
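The abstract does not name the specific fairness metric that was adapted; as one hedged illustration, the sketch below measures the largest accuracy gap between data sub-populations after a defense is applied. The group labels and predictions are synthetic placeholders.

```python
# Illustrative only: a gap-based fairness measure over data sub-populations.
# The fairness metric actually adapted in the paper may be defined differently.
import numpy as np

def subpopulation_accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two sub-populations."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accuracies = [np.mean(y_pred[groups == g] == y_true[groups == g])
                  for g in np.unique(groups)]
    return max(accuracies) - min(accuracies)

# Example: a defense that disproportionately hurts samples from group 1.
gap = subpopulation_accuracy_gap(y_true=[1, 0, 1, 0, 1, 1],
                                 y_pred=[1, 0, 0, 0, 0, 1],
                                 groups=[0, 0, 1, 1, 1, 0])
print(f"accuracy gap between sub-populations: {gap:.2f}")
```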
Specific Emitter Identification (SEI) is advantageous for its ability to passively identify emitters by exploiting distinct, unique, and organic features unintentionally imparted upon every signal during formation and transmission. These features are attributed to the slight variations and imperfections that exist in the Radio Frequency (RF) front end, thus SEI is being proposed as a physical layer security technique. The majority of SEI work assumes the targeted emitter is a passive source with immutable and difficult-to-mimic signal features. However, Software-Defined Radio (SDR) proliferation and Deep Learning (DL) advancements require a reassessment of these assumptions, because DL can learn SEI features directly from an emitter’s signals and SDR enables signal manipulation. This paper investigates a strong adversary that uses SDR and DL to mimic an authorized emitter’s signal features to circumvent SEI-based identity verification. The investigation considers three SEI mimicry approaches, two different SDR platforms, the presence or lack of signal energy as well as a "decoy" emitter. The results show that "off-the-shelf" DL achieves effective SEI mimicry. Additionally, SDR constraints impact SEI mimicry effectiveness and suggest an adversary’s minimum requirements. Future SEI research must consider adversaries capable of mimicking another emitter’s SEI features or manipulating their own.
Authored by Donald Reising, Joshua Tyler, Mohamed Fadul, Matthew Hilling, Daniel Loveless
In a one-way secret key agreement (OW-SKA) protocol in the source model, Alice and Bob have private samples of two correlated variables X and Y that are partially leaked to Eve through the variable Z, and use a single message from Alice to Bob to obtain a shared secret key. We propose an efficient secure OW-SKA when the message sent over the public channel can be tampered with by an active adversary. Our construction uses a specially designed hash function that is used for reconciliation as well as detection of tampering. For tamper detection, the function serves as a Message Authentication Code (MAC) that maintains its security when the key is partially leaked. We prove the secrecy of the established key and the robustness of the protocol, and discuss our results.
Authored by Somnath Panja, Shaoquan Jiang, Reihaneh Safavi-Naini
Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses
Can we hope to provide provable security against model extraction attacks? As a step towards a theoretical study of this question, we unify and abstract a wide range of “observational” model extraction defenses (OMEDs) - roughly, those that attempt to detect model extraction by analyzing the distribution over the adversary's queries. To accompany the abstract OMED, we define the notion of complete OMEDs - when benign clients can freely interact with the model - and sound OMEDs - when adversarial clients are caught and prevented from reverse engineering the model. Our formalism facilitates a simple argument for obtaining provable security against model extraction by complete and sound OMEDs, using (average-case) hardness assumptions for PAC-learning, in a way that abstracts current techniques in the prior literature. The main result of this work establishes a partial computational incompleteness theorem for the OMED: any efficient OMED for a machine learning model computable by a polynomial size decision tree that satisfies a basic form of completeness cannot satisfy soundness, unless the subexponential Learning Parity with Noise (LPN) assumption does not hold. To prove the incompleteness theorem, we introduce a class of model extraction attacks called natural Covert Learning attacks based on a connection to the Covert Learning model of Canetti and Karchmer (TCC '21), and show that such attacks circumvent any defense within our abstract mechanism in a black-box, nonadaptive way. As a further technical contribution, we extend the Covert Learning algorithm of Canetti and Karchmer to work over any “concise” product distribution (albeit for juntas of a logarithmic number of variables rather than polynomial size decision trees), by showing that the technique of learning with a distributional inverter of Binnendyk et al. (ALT '22) remains viable in the Covert Learning setting.
Authored by Ari Karchmer
Most proposals for securing control systems are heuristic in nature, and while they increase the protection of their target, the security guarantees they provide are unclear. This paper proposes a new way of modeling the security guarantees of a Cyber-Physical System (CPS) against arbitrary false command attacks. As our main case study, we use the most popular testbed for control systems security. We first propose a detailed formal model of this testbed and then show how the original configuration is vulnerable to a single-actuator attack. We then propose modifications to the control system and prove that our modified system is secure against arbitrary, single-actuator attacks.
Authored by John Castellanos, Mohamed Maghenem, Alvaro Cardenas, Ricardo Sanfelice, Jianying Zhou
Due to the broadcast nature of power line communication (PLC) channels, confidential information exchanged on the power grid is prone to malicious exploitation by any PLC device connected to the same power grid. To combat the ever-growing security threats, physical layer security (PLS) has been proposed as a viable safeguard or complement to existing security mechanisms. In this paper, the security analysis of a typical PLC adversary system model is investigated. In particular, we derive the expressions of the corresponding average secrecy capacity (ASC) and the secrecy outage probability (SOP) of the considered PLC system. In addition, numerical results are presented to validate the obtained analytical expressions and to assess the relevant PLS performances. The results show significant impacts of the transmission distances and the used carrier frequency on the overall transmission security.
Authored by Javier Fernandez, Aymen Omri, Roberto Di Pietro
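The abstract does not reproduce the derived closed-form expressions. As background only, the standard physical layer security quantities they build on are the instantaneous secrecy capacity, its average (ASC), and the secrecy outage probability (SOP):

```latex
% Standard definitions (background; not the paper's derived closed forms).
% \gamma_B, \gamma_E: SNRs at the legitimate receiver and eavesdropper;
% R_s: target secrecy rate; [x]^+ = max(x, 0).
\begin{align}
  C_s &= \left[\log_2\left(1+\gamma_B\right) - \log_2\left(1+\gamma_E\right)\right]^{+},\\
  \mathrm{ASC} &= \mathbb{E}\left[C_s\right],\\
  \mathrm{SOP} &= \Pr\left(C_s < R_s\right).
\end{align}
```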
Information system administrators must pay attention to system vulnerability information and take appropriate measures against security attacks on the systems they manage. However, as the number of security vulnerability reports increases, the time required to implement vulnerability remediation also increases; therefore, vulnerability risks must be assessed and prioritized. Especially in the early stages of vulnerability discovery, such as zero-day attacks, the risk assessment must consider changes over time, since it takes time for the information to spread among adversaries and defenders. The Common Vulnerability Scoring System (CVSS) is widely used for vulnerability risk assessment, but it cannot sufficiently cope with temporal changes in the risk of attacks. In this paper, we propose software vulnerability growth models to assist system administrators in decision making. Experimental results show that these models can provide a visual representation of the risk over time.
Authored by Takashi Minohara, Masaya Shimakawa
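The abstract does not state the functional form of the proposed growth models; purely as a hedged illustration, a risk curve analogous to classical software reliability growth models could look like the following, where the symbols are assumptions rather than the authors' notation:

```latex
% Illustrative only; the paper's actual vulnerability growth models may differ.
% R(t): risk t days after disclosure; R_max: CVSS-based ceiling;
% \beta: rate at which exploit knowledge spreads among adversaries.
\begin{equation}
  R(t) = R_{\max}\left(1 - e^{-\beta t}\right).
\end{equation}
```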
In wireless security, cognitive adversaries are known to inject jamming energy on the victim's frequency band and monitor the same band for countermeasures, thereby trapping the victim. Within the class of cognitive adversaries, we propose a new threat model wherein the adversary, upon executing the jamming attack, measures the long-term statistic of the Kullback-Leibler Divergence (KLD) between its observations over each of the network frequencies before and after the jamming attack. To mitigate this adversary, we propose a new cooperative strategy wherein the victim takes the assistance of a helper node in the network to reliably communicate its message to the destination. The underlying idea is to appropriately split their energy and time resources such that their messages are reliably communicated without disturbing the statistical distribution of the samples in the network. We present rigorous analyses of the reliability and covertness metrics at the destination and the adversary, respectively, and then synthesize tractable algorithms to obtain a near-optimal division of resources between the victim and the helper. Finally, we show that the obtained near-optimal division of energy facilitates deceiving the adversary equipped with a KLD estimator.
Authored by Soumita Hazra, J. Harshan
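For reference, the long-term statistic monitored by this adversary is the standard Kullback-Leibler divergence between its pre-attack and post-attack observation distributions P and Q on a given frequency band:

```latex
% Kullback-Leibler divergence between the pre-attack distribution P and the
% post-attack distribution Q of the adversary's observations on one band.
\begin{equation}
  D_{\mathrm{KL}}\!\left(P \,\middle\|\, Q\right) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}.
\end{equation}
```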
Current threat modeling methods focus on understanding the protected network from the perspective of the owners of those networks rather than on comprehensively understanding and integrating the methodology and intent of the threat. We argue that layering the human factors of the adversary over the existing threat models increases the ability of cybersecurity practitioners to truly understand possible threats. Therefore, we need to expand existing adversary and threat modeling approaches in cyberspace to include the representation of human factors of threats, specifically motivations, biases, and perceptions. This additional layer of modeling should be informed by an analysis of cyber threat intelligence reporting. By creating and adopting this expanded modeling, cybersecurity practitioners would have an understanding of how an adversary views their network, which would expand their ability to understand how their network is most likely to be attacked.
Authored by Stephanie Travis, Denis Gračanin, Erin Lanus
The high directionality of millimeter-wave (mmWave) communication systems has proven effective in reducing the attack surface against eavesdropping, thus improving physical layer security. However, even with highly directional beams, the system is still exposed to eavesdropping by adversaries located within the main lobe. In this paper, we propose BeamSec, a solution to protect users even from adversaries located in the main lobe. The key features of BeamSec are: (i) operating without knowledge of the eavesdropper's location/channel; (ii) robustness against colluding eavesdropping attacks; and (iii) standard compatibility, which we prove through experiments on our IEEE 802.11ad/ay-compatible 60 GHz phased-array testbed. Methodologically, BeamSec first identifies uncorrelated and diverse beam pairs between the transmitter and receiver by analyzing signal characteristics available through standard-compliant procedures. Next, it encodes the information jointly over all selected beam pairs to minimize information leakage. We study two methods for allocating transmission time among different beams, namely uniform allocation (no knowledge of the wireless channel) and optimal allocation for maximization of the secrecy rate (with partial knowledge of the wireless channel). Our experiments show that BeamSec outperforms the benchmark schemes against single and colluding eavesdroppers and enhances the secrecy rate by 79.8% over a random path selection benchmark.
Authored by Afifa Ishtiaq, Arash Asadi, Ladan Khaloopour, Waqar Ahmed, Vahid Jamali, Matthias Hollick
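The abstract does not give the allocation objective explicitly; as a hedged sketch in assumed notation, time sharing across the selected beam pairs can be written as a weighted sum of per-pair secrecy rates, which the optimal allocation maximizes under the time budget:

```latex
% Assumed notation (not taken from the paper): \tau_i is the fraction of
% transmission time on beam pair i; \gamma_{B,i}, \gamma_{E,i} are the
% corresponding receiver and eavesdropper SNRs.
\begin{equation}
  R_s = \sum_{i} \tau_i \left[\log_2\left(1+\gamma_{B,i}\right) - \log_2\left(1+\gamma_{E,i}\right)\right]^{+},
  \qquad \sum_{i} \tau_i = 1,\ \tau_i \ge 0.
\end{equation}
```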
State-of-the-art template reconstruction attacks assume that an adversary has access to part or all of the functionality of a target model. However, in a practical scenario, rigid protection of the target system prevents them from gaining knowledge of the target model. In this paper, we propose a novel template reconstruction attack method utilizing a feature converter. The feature converter enables an adversary to reconstruct an image from a corresponding compromised template without knowledge of the target model. The proposed method was evaluated with qualitative and quantitative measures. We achieved a Successful Attack Rate (SAR) of 0.90 on the Labeled Faces in the Wild (LFW) dataset with compromised templates of only 1280 identities.
Authored by Muku Akasaka, Soshi Maeda, Yuya Sato, Masakatsu Nishigaki, Tetsushi Ohki
Satellite technologies are used for both civil and military purposes in the modern world, and typical applications include Communication, Navigation and Surveillance (CNS) services, which have a direct impact on several economic, social, and environmental protection activities. The increasing reliance on satellite services for safety-of-life and mission-critical applications (e.g., transport, defense and public safety services) creates a severe, although often overlooked, security problem, particularly when it comes to cyber threats. Like other increasingly digitized services, satellites and space platforms are vulnerable to cyberattacks. Thus, the existence of cybersecurity flaws may pose major threats to space-based assets and associated key infrastructure on the ground. These dangers could obstruct global economic progress and, by implication, international security if they are not properly addressed. Mega-constellations make protecting space infrastructure from cyberattacks much more difficult. This emphasizes the importance of defensive cyber countermeasures to minimize interruptions and ensure efficient and reliable contributions to critical infrastructure operations. Very importantly, space systems are inherently complex Cyber-Physical System (CPS) architectures, where communication, control and computing processes are tightly interleaved, and associated hardware/software components are seamlessly integrated. This represents a new challenge as many known physical threats (e.g., conventional electronic warfare measures) can now manifest their effects in cyberspace and, vice versa, some cyber threats can have detrimental effects in the physical domain. The concept of cyberspace underlies nearly every aspect of modern society's critical activities and relies heavily on critical infrastructure for economic advancement, public safety and national security. Many governments have expressed the desire to make a substantial contribution to secure cyberspace and are focusing on different aspects of the evolving industrial ecosystem, largely under the impulse of digital transformation and sustainable development goals. The level of cybersecurity attained in this framework is the sum of all national and international activities implemented to protect all actions in the cyber-physical ecosystem. This paper focuses on cybersecurity threats and vulnerabilities in various segments of space CPS architectures. More specifically, the paper identifies the applicable cyber threat mechanisms, conceivable threat actors and the associated space business implications. It also presents metrics and strategies for countering cyber threats and facilitating space mission assurance.
Authored by Kathiravan Thangavel, Jordan Plotnek, Alessandro Gardi, Roberto Sabatini
Recommender systems are powerful tools that touch on numerous aspects of everyday life, from shopping to consuming content, and beyond. However, like other machine learning models, recommender system models are vulnerable to adversarial attacks, and their performance can drop significantly with a slight modification of the input data. Most of the studies in the area of adversarial machine learning are focused on the image and vision domain. There are very few works that study adversarial attacks on recommender systems and even fewer that study ways to make recommender systems robust and reliable. In this study, we explore two state-of-the-art adversarial attack methods proposed by Tang et al. [1] and Christakopoulou et al. [2], and we report our proposed defenses and experimental evaluations against these attacks. In particular, we observe that low-rank reconstruction and/or transformation of the attacked data has a significant alleviating effect on the attack, and we present extensive experimental evidence to demonstrate the effectiveness of this approach. We also show that a simple classifier is able to learn to distinguish fake users from real users and can successfully discard them from the dataset. This observation highlights the fact that the threat model does not generate fake users that mimic the behavior of real users, so they can be easily distinguished from real users' behavior. We also examine how transforming the latent factors of the matrix factorization model into a low-dimensional space impacts its performance. Furthermore, we combine fake users from both attacks to examine how our proposed defense is able to defend against multiple attacks at the same time. Local low-rank reconstruction was able to reduce the hit ratio of target items from 23.54% to 15.69% while the overall performance of the recommender system was preserved.
Authored by Negin Entezari, Evangelos Papalexakis
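To make the low-rank reconstruction defense concrete, the sketch below rebuilds a user-item rating matrix from its top-k singular components, attenuating rating patterns that are not well captured by the dominant factors. This is a simplified global variant; the local low-rank reconstruction reported in the abstract and the choice of k are the authors' own, and the data here are synthetic.

```python
# Hedged sketch of low-rank reconstruction as a poisoning defense for a
# recommender: keep only the top-k singular components of the rating matrix.
import numpy as np

def low_rank_reconstruct(ratings, k=10):
    """Truncated-SVD reconstruction of a user-item rating matrix."""
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Example on a small synthetic matrix (100 users x 50 items, ratings 0-5).
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(100, 50)).astype(float)
R_denoised = low_rank_reconstruct(R, k=5)   # train the recommender on this
```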
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components.
Authored by Lisa Oakley, Alina Oprea, Stavros Tripakis
With the increased commercialization of deep learning (DL) models, there is also a growing need to protect them from illicit usage. For cost and ease-of-deployment reasons, it is becoming increasingly common to run DL models on the hardware of third parties. Although there are some hardware mechanisms, such as Trusted Execution Environments (TEEs), to protect sensitive data, their availability is still limited and not well suited to resource-demanding tasks, like DL models, that benefit from hardware accelerators. In this work, we make model stealing more difficult by presenting a novel way to divide up a DL model, with the main part on normal infrastructure and a small part in a remote TEE, and train it using adversarial techniques. In initial experiments on image classification models for the Fashion MNIST and CIFAR 10 datasets, we observed that this obfuscation protection makes it significantly more difficult for an adversary to leverage the exposed model components.
Authored by Jakob Sternby, Bjorn Johansson, Michael Liljenstam
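A conceptual sketch of the model-splitting idea is shown below: a large backbone runs on untrusted accelerator hardware while a small head is kept for the protected environment. The split point, layer sizes, and the adversarial training procedure are assumptions for illustration, not the paper's actual design.

```python
# Conceptual sketch (assumed architecture): split a classifier so only a small
# "private" head would run inside a TEE, while the backbone runs untrusted.
import torch
import torch.nn as nn

class PublicBackbone(nn.Module):
    """Large feature extractor intended for untrusted hardware."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten())
    def forward(self, x):
        return self.features(x)

class PrivateHead(nn.Module):
    """Small classifier head intended to stay inside the TEE."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)
    def forward(self, z):
        return self.classifier(z)

backbone, head = PublicBackbone(), PrivateHead()
logits = head(backbone(torch.randn(1, 1, 28, 28)))   # Fashion-MNIST-sized input
```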
Proactive approaches to security, such as adversary emulation, leverage information about threat actors and their techniques (Cyber Threat Intelligence, CTI). However, most CTI still comes in unstructured forms (i.e., natural language), such as incident reports and leaked documents. To support proactive security efforts, we present an experimental study on the automatic classification of unstructured CTI into attack techniques using machine learning (ML). We contribute with two new datasets for CTI analysis, and we evaluate several ML models, including both traditional and deep learning-based ones. We present several lessons learned about how ML can perform at this task, which classifiers perform best and under which conditions, which are the main causes of classification errors, and the challenges ahead for CTI analysis.
Authored by Vittorio Orbinato, Mariarosaria Barbaraci, Roberto Natella, Domenico Cotroneo
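As a rough indication of what such a classifier can look like, the sketch below trains a generic TF-IDF plus logistic regression baseline on a few hypothetical CTI snippets labelled with ATT&CK-style technique IDs. The paper evaluates a range of traditional and deep learning models on much larger datasets; this stand-in only illustrates the task framing.

```python
# Generic baseline sketch for mapping unstructured CTI text to attack
# techniques; the snippets and labels below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["attacker sent a spearphishing attachment to finance staff",
         "powershell was used to download and execute the second stage",
         "credentials were dumped from lsass process memory"]
labels = ["T1566.001", "T1059.001", "T1003.001"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["a malicious macro launched powershell to fetch a payload"]))
```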
In recent years, security and privacy have become a challenge due to the rapid development of technology. In 2021, Khan et al. proposed an authentication and key agreement framework for smart grid networks and claimed that the proposed protocol provides security against all well-known attacks. However, in this paper, we present an analysis and show that the protocol proposed by Khan et al. fails to protect the secrecy of the session key shared between the user and the service provider. An adversary can derive the session key (online) by intercepting the communicated messages under the Dolev-Yao threat model. We simulated Khan et al.'s protocol for formal security verification using the Tamarin Prover and found a trace for deriving the temporary key, which is used to encrypt the login request that includes the user's secret credentials. Hence, the protocol also fails to preserve the privacy of the user's credentials, and therefore any adversary can impersonate the user. As a result, the protocol proposed by Khan et al. is not suitable for practical applications.
Authored by Singam Ram, Vanga Odelu
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, stochastic Petri nets, and so on. These methods analyze the effect of attacks on the system, and estimate security attributes from different perspectives. However, they require information from experts in the application domain for properly capturing the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works focused on the generation of low-level security models from a high-level description of the system, hiding the technical details from the modeler.
Authored by Francesco Mariotti, Matteo Tavanti, Leonardo Montecchi, Paolo Lollini
The traditional threat modeling methodologies work well on a small scale, when evaluating targets such as a data field, a software application, or a system component—but they do not allow for comprehensive evaluation of an entire enterprise architecture. They also do not enumerate and consider a comprehensive set of actual threat actions observed in the wild. Because of the lack of adequate threat modeling methodologies for determining cybersecurity protection needs on an enterprise scale, cybersecurity executives and decision makers have traditionally relied upon marketing pressure as the main input into decision making for investments in cybersecurity capabilities (tools). A new methodology, originally developed by the Department of Defense then further expanded by the Department of Homeland Security, for the first time allows for a threat-based, end-to-end evaluation of cybersecurity architectures and determination of gaps or areas in need of future investments. Although in the public domain, this methodology has not been used outside of the federal government. This paper examines the new threat modeling approach that allows organizations to look at their cybersecurity protections from the standpoint of an adversary. The methodology enumerates threat actions that have been observed in the wild using a cyber threat framework and scores cybersecurity architectural capabilities for their ability to protect, detect, and recover from each threat action. The results of the analysis form a matrix called capability coverage map that visually represents the coverage, gaps, and overlaps against threat actions. The threat actions can be further prioritized using a threat heat map – a visual representation of the prevalence and maneuverability of threat actions that can be overlaid on top of a coverage map.
Authored by Branko Bokan, Joost Santos
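To illustrate the shape of a capability coverage map, the toy sketch below scores a few hypothetical capabilities against threat actions for their ability to protect, detect, and recover, then aggregates them to expose gaps. All capability names, threat actions, and scores are invented for illustration and are not taken from the methodology itself.

```python
# Toy coverage-map sketch: rows are threat actions, columns are
# protect/detect/recover scores (0 = none, 3 = strong). All values hypothetical.
import numpy as np

threat_actions = ["spearphishing", "credential dumping", "lateral movement"]
functions = ["protect", "detect", "recover"]

coverage = {
    "email gateway": np.array([[3, 2, 0], [0, 0, 0], [0, 0, 0]]),
    "EDR agent":     np.array([[1, 2, 1], [2, 3, 1], [1, 2, 1]]),
}

# Aggregate across capabilities; rows that stay near zero indicate gaps.
total = sum(coverage.values())
for action, row in zip(threat_actions, total):
    print(action, dict(zip(functions, row.tolist())))
```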
Network Intrusion Detection Systems (NIDS) monitor networking environments for suspicious events that could compromise the availability, integrity, or confidentiality of the network's resources. To ensure NIDSs play their vital roles, it is necessary to identify how they can be attacked, adopting a viewpoint similar to the adversary's in order to identify vulnerabilities and gaps in defenses. Accordingly, effective countermeasures can be designed to thwart any potential attacks. Machine learning (ML) approaches have been widely adopted for network anomaly detection. However, it has been found that ML models are vulnerable to adversarial attacks. In such attacks, subtle perturbations are inserted into the original inputs at inference time in order to evade classifier detection, or at training time to degrade its performance. Yet, modeling adversarial attacks and the associated threats of employing machine learning approaches for NIDSs has not been addressed. A growing challenge is to address the diversity of ML-based systems and ensure their security and trust. In this paper, we conduct threat modeling for an ML-based NIDS using the STRIDE and attack tree approaches to identify the potential threats on different levels. We model the threats that can be potentially realized by exploiting vulnerabilities in ML algorithms through a simplified structural attack tree. To provide holistic threat modeling, we apply the STRIDE method to the system's data flow to uncover further technical threats. Our models revealed 46 possible threats to consider. These models can help in understanding the different ways that an ML-based NIDS can be attacked; hence, hardening measures can be developed to prevent these potential attacks from achieving their goals.
Authored by Huda Alatwi, Charles Morisset
Successive approximation register analog-to-digital converters (SAR ADCs) are widely adopted in Internet of Things (IoT) systems due to their simple structure and high energy efficiency. Unfortunately, a SAR ADC dissipates distinct and input-dependent power features when it converts different input signals, leading to severe vulnerability to power side-channel attack (PSA). The adversary can accurately derive the input signal by only measuring the power information from the analog supply pin (AVDD), digital supply pin (DVDD), and/or reference pin (Ref), which is fed to trained machine learning models. This paper first presents a detailed mathematical analysis of the power side-channel attack on the SAR ADC, concluding that the power information from AVDD is the most vulnerable to PSA compared with the other pins. Then, an LSB-reused protection technique is proposed, which utilizes the characteristic of the LSB from the SAR ADC itself to protect against PSA. Lastly, this technique is verified in a 12-bit 5 MS/s secure SAR ADC implemented in 65 nm technology. Using the current waveform from AVDD, the adopted convolutional neural network (CNN) algorithms can achieve >99% prediction accuracy from LSB to MSB in the SAR ADC without protection. With the proposed protection, the bit-wise accuracy drops to around 50%.
Authored by Lele Fang, Jiahao Liu, Yan Zhu, Chi-Hang Chan, Rui Martins
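For intuition about how the attack side can be modeled, the sketch below defines a small 1-D CNN that maps a measured supply-current trace to per-bit predictions. The trace length, layer sizes, and bit width are assumptions; the paper's actual CNN and measurement setup are not reproduced here.

```python
# Illustrative 1-D CNN for predicting ADC output bits from a power trace.
# Sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PowerTraceCNN(nn.Module):
    def __init__(self, trace_len=256, n_bits=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (trace_len // 4), n_bits))   # one logit per output bit
    def forward(self, x):
        return self.net(x)   # apply sigmoid and threshold per bit at inference

model = PowerTraceCNN()
bit_logits = model(torch.randn(8, 1, 256))   # batch of 8 AVDD current traces
```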