The rapid growth of communication networks, coupled with the increasing complexity of cyber threats, necessitates the implementation of proactive measures to protect networks and systems. In this study, we introduce a federated learning-based approach for cyber threat hunting at the endpoint level. The proposed method utilizes the collective intelligence of multiple devices to effectively and confidentially detect attacks on individual machines. A security assessment tool is also developed to emulate the behavior of adversary groups and Advanced Persistent Threat (APT) actors in the network. This tool provides network security experts with the ability to assess their network environment's resilience and aids in generating authentic data derived from diverse threats for use in subsequent stages of the federated learning (FL) model. The results of the experiments demonstrate that the proposed model effectively detects cyber threats on the devices while safeguarding privacy.
Authored by Saeid Sheikhi, Panos Kostakos
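The preceding abstract describes a federated learning pipeline in which endpoint devices train local detection models and share only model updates with an aggregator. As a minimal sketch of the federated-averaging idea (not the authors' implementation; the model, client data, and aggregation weights here are illustrative assumptions):

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One endpoint's local training pass (logistic regression via gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        scores = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (scores - y) / len(y)      # cross-entropy gradient
        w -= lr * grad
    return w, len(y)

def federated_average(global_weights, clients):
    """One FedAvg round: aggregate local models weighted by local sample counts."""
    results = [local_update(global_weights, X, y) for X, y in clients]
    total = sum(n for _, n in results)
    return sum(w * (n / total) for w, n in results)

# Illustrative use: three endpoints, each holding private (features, label) data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
weights = np.zeros(8)
for _ in range(10):                              # communication rounds
    weights = federated_average(weights, clients)
```

Only the weight vectors leave each device; the raw endpoint telemetry stays local, which is the privacy property the abstract emphasizes.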
As cyber attacks grow in complexity and frequency, cyber threat intelligence (CTI) remains a priority objective for defenders. A critical component of CTI at the strategic level of defensive operations is attack attribution. Attributing an attack to a threat group informs defenders about the adversaries actively engaging them and advances their ability to respond. In this paper, we propose a data analytic approach to threat attribution using adversary playbooks of tactics, techniques, and procedures (TTPs). Specifically, our approach uses association rule mining on a large real-world CTI dataset to extend known threat TTP playbooks with statistically probable TTPs the adversary may deploy. The benefits are twofold. First, we offer a dataset of learned TTP associations and extended threat playbooks. Second, we show that we can attribute attacks using a weighted Jaccard similarity with 96% accuracy.
Authored by Kelsie Edie, Cole Mckee, Adam Duby
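The preceding abstract attributes attacks by comparing an observed TTP set against extended threat playbooks with a weighted Jaccard similarity. A minimal sketch of that comparison (the weighting scheme, technique IDs, and group names are illustrative assumptions, not the authors' learned associations):

```python
def weighted_jaccard(observed, playbook, weights):
    """Weighted Jaccard similarity between an observed TTP set and a playbook.

    `weights` maps each TTP (e.g., a MITRE ATT&CK technique ID) to a weight,
    such as a confidence score from association rule mining.
    """
    union = observed | playbook
    inter = observed & playbook
    denom = sum(weights.get(t, 1.0) for t in union)
    return sum(weights.get(t, 1.0) for t in inter) / denom if denom else 0.0

# Illustrative use: attribute the attack to the most similar extended playbook.
weights = {"T1059": 0.9, "T1027": 0.7, "T1566": 0.8, "T1071": 0.6}
playbooks = {
    "GroupA": {"T1059", "T1027", "T1071"},
    "GroupB": {"T1566", "T1027"},
}
observed = {"T1059", "T1027", "T1566"}
attribution = max(playbooks,
                  key=lambda g: weighted_jaccard(observed, playbooks[g], weights))
```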
Machine learning models are susceptible to a class of attacks known as adversarial poisoning, where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning, either by identifying the poisoned samples to facilitate removal or by developing poison-agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination against sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend that our proposed metric be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
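The preceding abstract pairs robustness metrics with a fairness metric that captures performance disparities across data sub-populations after a defense is applied. A minimal sketch of one such disparity measure (the grouping and the specific gap statistic are illustrative assumptions, not necessarily the metric adapted in the paper):

```python
import numpy as np

def subpopulation_accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy gap between any two data sub-populations."""
    accs = {g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
            for g in np.unique(groups)}
    return max(accs.values()) - min(accs.values()), accs

# Illustrative use: a defense that trades fairness for robustness widens this
# gap relative to the undefended baseline model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_defended = np.array([1, 0, 1, 1, 0, 0, 0, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap, per_group = subpopulation_accuracy_gap(y_true, y_defended, groups)
```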
Specific Emitter Identification (SEI) is advantageous for its ability to passively identify emitters by exploiting distinct, unique, and organic features unintentionally imparted upon every signal during formation and transmission. These features are attributed to the slight variations and imperfections that exist in the Radio Frequency (RF) front end; thus, SEI is being proposed as a physical layer security technique. The majority of SEI work assumes the targeted emitter is a passive source with immutable and difficult-to-mimic signal features. However, Software-Defined Radio (SDR) proliferation and Deep Learning (DL) advancements require a reassessment of these assumptions, because DL can learn SEI features directly from an emitter's signals and SDR enables signal manipulation. This paper investigates a strong adversary that uses SDR and DL to mimic an authorized emitter's signal features to circumvent SEI-based identity verification. The investigation considers three SEI mimicry approaches, two different SDR platforms, the presence or absence of signal energy, as well as a "decoy" emitter. The results show that "off-the-shelf" DL achieves effective SEI mimicry. Additionally, SDR constraints impact SEI mimicry effectiveness and suggest an adversary's minimum requirements. Future SEI research must consider adversaries capable of mimicking another emitter's SEI features or manipulating their own.
Authored by Donald Reising, Joshua Tyler, Mohamed Fadul, Matthew Hilling, Daniel Loveless
In a one-way secret key agreement (OW-SKA) protocol in the source model, Alice and Bob have private samples of two correlated variables X and Y that are partially leaked to Eve through the variable Z, and use a single message from Alice to Bob to obtain a shared secret key. We propose an efficient and secure OW-SKA protocol for the setting where the message sent over the public channel can be tampered with by an active adversary. Our construction uses a specially designed hash function that is used for reconciliation as well as for detection of tampering. For tamper detection, the function acts as a Message Authentication Code (MAC) that maintains its security even when the key is partially leaked. We prove the secrecy of the established key and the robustness of the protocol, and discuss our results.
Authored by Somnath Panja, Shaoquan Jiang, Reihaneh Safavi-Naini
Theoretical Limits of Provable Security Against Model Extraction by Efficient Observational Defenses
Can we hope to provide provable security against model extraction attacks? As a step towards a theoretical study of this question, we unify and abstract a wide range of "observational" model extraction defenses (OMEDs): roughly, those that attempt to detect model extraction by analyzing the distribution over the adversary's queries. To accompany the abstract OMED, we define the notions of complete OMEDs, which allow benign clients to freely interact with the model, and sound OMEDs, which catch adversarial clients and prevent them from reverse engineering the model. Our formalism facilitates a simple argument for obtaining provable security against model extraction by complete and sound OMEDs, using (average-case) hardness assumptions for PAC-learning, in a way that abstracts current techniques in the prior literature. The main result of this work establishes a partial computational incompleteness theorem for the OMED: any efficient OMED for a machine learning model computable by a polynomial size decision tree that satisfies a basic form of completeness cannot satisfy soundness, unless the subexponential Learning Parity with Noise (LPN) assumption does not hold. To prove the incompleteness theorem, we introduce a class of model extraction attacks called natural Covert Learning attacks based on a connection to the Covert Learning model of Canetti and Karchmer (TCC 21), and show that such attacks circumvent any defense within our abstract mechanism in a black-box, nonadaptive way. As a further technical contribution, we extend the Covert Learning algorithm of Canetti and Karchmer to work over any "concise" product distribution (albeit for juntas of a logarithmic number of variables rather than polynomial size decision trees), by showing that the technique of learning with a distributional inverter of Binnendyk et al. (ALT 22) remains viable in the Covert Learning setting.
Authored by Ari Karchmer
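The preceding abstract distinguishes complete and sound OMEDs informally. One hedged way to write these two properties (a simplified formalization for orientation only; the paper's exact definitions, quantifiers, and parameters may differ):

```latex
% Completeness: q queries drawn i.i.d. from the benign distribution D are
% (almost) never rejected by the defense.
\Pr_{x_1,\dots,x_q \sim D}\bigl[\mathrm{OMED}(x_1,\dots,x_q)=\mathsf{reject}\bigr] \le \varepsilon

% Soundness: no efficient adversary can simultaneously have its queries
% accepted and output a good approximation h of the defended model f.
\Pr\bigl[\mathrm{OMED}(x_1,\dots,x_q)=\mathsf{accept} \;\wedge\;
         \Pr_{x\sim D}[h(x)\ne f(x)]\le \delta\bigr] \le \mathrm{negl}(n)
```

The incompleteness theorem stated in the abstract says, roughly, that for polynomial size decision trees these two conditions cannot both hold for an efficient OMED under the subexponential LPN assumption.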
Most proposals for securing control systems are heuristic in nature, and while they increase the protection of their target, the security guarantees they provide are unclear. This paper proposes a new way of modeling the security guarantees of a Cyber-Physical System (CPS) against arbitrary false command attacks. As our main case study, we use the most popular testbed for control systems security. We first propose a detailed formal model of this testbed and then show how the original configuration is vulnerable to a single-actuator attack. We then propose modifications to the control system and prove that our modified system is secure against arbitrary, single-actuator attacks.
Authored by John Castellanos, Mohamed Maghenem, Alvaro Cardenas, Ricardo Sanfelice, Jianying Zhou
Due to the broadcast nature of power line communication (PLC) channels, confidential information exchanged on the power grid is prone to malicious exploitation by any PLC device connected to the same power grid. To combat the ever-growing security threats, physical layer security (PLS) has been proposed as a viable safeguard or complement to existing security mechanisms. In this paper, the security analysis of a typical PLC adversary system model is investigated. In particular, we derive the expressions of the corresponding average secrecy capacity (ASC) and the secrecy outage probability (SOP) of the considered PLC system. In addition, numerical results are presented to validate the obtained analytical expressions and to assess the relevant PLS performances. The results show significant impacts of the transmission distances and the used carrier frequency on the overall transmission security.
Authored by Javier Fernandez, Aymen Omri, Roberto Di Pietro
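The preceding abstract derives the average secrecy capacity (ASC) and secrecy outage probability (SOP) for a PLC wiretap setting. For orientation, the standard definitions these quantities build on are shown below; the paper's closed-form expressions additionally depend on the PLC channel model, transmission distance, and carrier frequency:

```latex
% Instantaneous secrecy capacity: main-link capacity minus wiretap-link capacity
C_s = \bigl[\log_2(1+\gamma_M) - \log_2(1+\gamma_E)\bigr]^{+}

% Average secrecy capacity: expectation over the channel gains of both links
\mathrm{ASC} = \mathbb{E}_{\gamma_M,\gamma_E}\!\left[C_s\right]

% Secrecy outage probability for a target secrecy rate R_s
\mathrm{SOP} = \Pr\bigl(C_s < R_s\bigr)
```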
Information system administrators must pay attention to system vulnerability information and take appropriate measures against security attacks on the systems they manage. However, as the number of security vulnerability reports increases, the time required to implement vulnerability remediation also increases; therefore, vulnerability risks must be assessed and prioritized. Especially in the early stages of vulnerability discovery, such as zero-day attacks, the risk assessment must consider changes over time, since it takes time for the information to spread among adversaries and defenders. The Common Vulnerability Scoring System (CVSS) is widely used for vulnerability risk assessment, but it does not sufficiently capture temporal changes in the risk of attacks. In this paper, we propose software vulnerability growth models to assist system administrators in decision making. Experimental results show that these models can provide a visual representation of the risk over time.
Authored by Takashi Minohara, Masaya Shimakawa
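The preceding abstract proposes growth models so that vulnerability risk can be tracked as exploit knowledge spreads among adversaries and defenders over time. As one purely illustrative shape such a model can take (a hypothetical logistic curve, not the specific models proposed in the paper):

```python
import numpy as np

def logistic_risk(t, k=0.5, t_mid=10.0, r_max=1.0):
    """Illustrative risk-over-time curve: slow uptake right after disclosure,
    rapid growth as exploit knowledge spreads, then saturation.

    t      : days since vulnerability disclosure
    k      : growth rate (how quickly adversaries adopt the exploit)
    t_mid  : inflection point in days
    r_max  : asymptotic risk level
    """
    return r_max / (1.0 + np.exp(-k * (t - t_mid)))

# Illustrative use: flag the first day the modeled risk crosses a threshold,
# as a prioritization cue for remediation.
days = np.arange(0, 31)
risk = logistic_risk(days)
first_critical_day = int(days[np.argmax(risk > 0.8)])
```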
In wireless security, cognitive adversaries are known to inject jamming energy on the victim's frequency band and monitor the same band for countermeasures, thereby trapping the victim. Within the class of cognitive adversaries, we propose a new threat model wherein the adversary, upon executing the jamming attack, measures the long-term statistic of the Kullback-Leibler Divergence (KLD) between its observations over each of the network frequencies before and after the jamming attack. To mitigate this adversary, we propose a new cooperative strategy wherein the victim takes the assistance of a helper node in the network to reliably communicate its message to the destination. The underlying idea is to appropriately split their energy and time resources such that their messages are reliably communicated without disturbing the statistical distribution of the samples in the network. We present rigorous analyses of the reliability and covertness metrics at the destination and the adversary, respectively, and then synthesize tractable algorithms to obtain a near-optimal division of resources between the victim and the helper. Finally, we show that the obtained near-optimal division of energy helps deceive an adversary that relies on a KLD estimator.
Authored by Soumita Hazra, J. Harshan
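The preceding abstract's adversary compares the distribution of its observations on each frequency before and after jamming using the Kullback-Leibler divergence. A minimal sketch of such a histogram-based KLD estimate from raw samples (the binning and smoothing choices are illustrative assumptions):

```python
import numpy as np

def empirical_kld(before, after, bins=32, eps=1e-9):
    """Histogram-based estimate of D_KL(P_before || P_after) from raw samples.

    A covert strategy aims to keep this statistic small on every frequency
    band the adversary monitors.
    """
    lo = min(before.min(), after.min())
    hi = max(before.max(), after.max())
    p, _ = np.histogram(before, bins=bins, range=(lo, hi))
    q, _ = np.histogram(after, bins=bins, range=(lo, hi))
    p = (p + eps) / (p + eps).sum()        # smooth and normalize
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

# Illustrative use: energy samples observed on one band before/after the attack.
rng = np.random.default_rng(1)
before = rng.normal(0.0, 1.0, 10_000)
after = rng.normal(0.1, 1.0, 10_000)       # a slight shift raises the KLD
kld = empirical_kld(before, after)
```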
Current threat modeling methods focus on understanding the protected network from the perspective of the owners of those networks rather than on comprehensively understanding and integrating the methodology and intent of the threat. We argue that layering the human factors of the adversary over the existing threat models increases the ability of cybersecurity practitioners to truly understand possible threats. Therefore, we need to expand existing adversary and threat modeling approaches in cyberspace to include the representation of human factors of threats, specifically motivations, biases, and perceptions. This additional layer of modeling should be informed by an analysis of cyber threat intelligence reporting. By creating and adopting this expanded modeling, cybersecurity practitioners would have an understanding of how an adversary views their network, which would expand their ability to understand how their network is most likely to be attacked.
Authored by Stephanie Travis, Denis Gračanin, Erin Lanus
The high directionality of millimeter-wave (mmWave) communication systems has proven effective in reducing the attack surface against eavesdropping, thus improving physical layer security. However, even with highly directional beams, the system is still exposed to eavesdropping by adversaries located within the main lobe. In this paper, we propose BeamSec, a solution that protects users even from adversaries located in the main lobe. The key features of BeamSec are: (i) operating without knowledge of the eavesdropper's location or channel; (ii) robustness against colluding eavesdropping attacks; and (iii) standard compatibility, which we prove using experiments via our IEEE 802.11ad/ay-compatible 60 GHz phased-array testbed. Methodologically, BeamSec first identifies uncorrelated and diverse beam pairs between the transmitter and receiver by analyzing signal characteristics available through standard-compliant procedures. Next, it encodes the information jointly over all selected beam pairs to minimize information leakage. We study two methods for allocating transmission time among the different beams: uniform allocation (no knowledge of the wireless channel) and optimal allocation for maximization of the secrecy rate (with partial knowledge of the wireless channel). Our experiments show that BeamSec outperforms the benchmark schemes against single and colluding eavesdroppers and enhances the secrecy rate by 79.8% over a random path selection benchmark.
Authored by Afifa Ishtiaq, Arash Asadi, Ladan Khaloopour, Waqar Ahmed, Vahid Jamali, Matthias Hollick
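The preceding abstract allocates transmission time across the selected beam pairs either uniformly or to maximize the secrecy rate. In a generic form (a sketch under standard secrecy-rate notation; the paper's exact formulation, constraints, and channel-knowledge assumptions may differ), the optimal allocation solves:

```latex
% \tau_k: fraction of transmission time on beam pair k;
% \gamma_{B,k}, \gamma_{E,k}: SNRs of the legitimate receiver and the
% eavesdropper on that beam pair; \tau_k = 1/K recovers uniform allocation.
\max_{\{\tau_k\}} \; \sum_{k=1}^{K} \tau_k
  \Bigl[\log_2\bigl(1+\gamma_{B,k}\bigr) - \log_2\bigl(1+\gamma_{E,k}\bigr)\Bigr]^{+}
\quad \text{s.t.} \quad \sum_{k=1}^{K}\tau_k = 1, \;\; \tau_k \ge 0 .
```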