Many forms of machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of conventional black-box models, the tendency is to use explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL) models are becoming more popular than typical Centralized Learning (CL) models due to the distributed nature of networks and security and privacy concerns. It is therefore very timely to research how explainability can be provided for different ML models using Explainable AI (XAI). This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two data sets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI model. We demonstrate that the FL explanation differs from the CL explanation as the client anomaly percentage varies.
Authored by Yasintha Rumesh, Thulitha Senevirathna, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
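A minimal sketch of the Shapley-value idea behind SHAP, which the abstract above applies to a deep anomaly detector: each feature's attribution is its average marginal contribution over all coalitions, with absent features replaced by a baseline. The toy linear "anomaly score" model, feature values, and baseline here are invented for illustration (the paper uses a DNN and SHAP's approximations, not this exponential-cost brute force).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    # Exact Shapley attributions for model f at input x vs. a baseline.
    # Features absent from a coalition take their baseline value.
    # Exponential cost: only viable for a handful of features.
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for subset in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear "anomaly score": for a linear model the Shapley value of
# feature i is exactly w_i * (x_i - baseline_i).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x, base = [1.0, 3.0, 2.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

The efficiency property shown in the final comment is what makes SHAP explanations additive, and it is what lets CL and FL explanations be compared feature by feature.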
Deep learning models are being utilized and further developed in many application domains, but challenges still exist regarding their interpretability and consistency. Interpretability is important to provide users with transparent information that enhances the trust between the user and the learning model. It also gives developers feedback to improve the consistency of their deep learning models. In this paper, we present a novel architectural design to embed interpretation into the architecture of the deep learning model. We apply dynamic pixel-wise weights to input images and produce a highly correlated feature map for classification. This feature map is useful for providing interpretation and transparent information about the decision-making of the deep learning model while keeping full context about the relevant feature information compared to previous interpretation algorithms. The proposed model achieved 92\% accuracy on CIFAR-10 classification without fine-tuning the hyperparameters. Furthermore, it achieved 20\% accuracy under an 8/255 PGD adversarial attack for 100 iterations without any defense method, indicating extra natural robustness compared to other Convolutional Neural Network (CNN) models. The results demonstrate the feasibility of the proposed architecture.
Authored by Weimin Zhao, Qusay Mahmoud, Sanaa Alwidian
Integrated photonics based on the silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates interfaces between the photonic and electronic subsystems, which are required for buffering data and for optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed by hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks exploiting such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on system performance (i.e., power and phase distributions, which affect accuracy) and possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
AI systems face potential hardware security threats. Existing AI systems generally use a heterogeneous architecture of CPU + intelligent accelerator, with a PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on the hardware security isolation architecture, but the conventional hardware security isolation architecture does not include the intelligent accelerator on the PCIe bus. Therefore, from the perspective of hardware security, data offloaded to the intelligent accelerator face great security risks. In order to effectively integrate the intelligent accelerator into the CPU’s security mechanism, a novel hardware security isolation architecture is presented in this paper. The PCIe protocol is extended to be security-aware by adding security information packaging and unpacking logic in the PCIe controller. The hardware resources on the intelligent accelerator are isolated at a fine granularity. The resources classified into the secure world can only be controlled and used by the software of the CPU’s trusted execution environment. Based on the above hardware security isolation architecture, a security isolation spiking convolutional neural network accelerator is designed and implemented in this paper. The experimental results demonstrate that the proposed security isolation architecture imposes no overhead on the bandwidth and latency of the PCIe controller. The architecture does not affect the performance of the entire hardware computing process, from CPU data offloading, through intelligent accelerator computing, to data returning to the CPU. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, models, and output data. This architecture can thus effectively integrate the hardware resources of the intelligent accelerator into the CPU’s security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. It is based on the role of artificial intelligence (AI) in computer network security management to establish a network security system that combines AI with traditional technologies. Furthermore, by incorporating the attention mechanism into a Graph Neural Network (GNN) and utilizing botnet detection, a more robust and comprehensive network security system is developed to improve detection and response capabilities for network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with false positive and false negative rates reduced to 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98\%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNNs and attention mechanisms on enhancing network security performance.
Authored by Fei Xia, Zhihao Zhou
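At the heart of the attention mechanism the abstract above adds to the GNN is a softmax normalisation over a node's neighbour scores, so that the most relevant neighbours dominate the aggregation. A minimal sketch with invented scores (the paper's actual score function and graph are not reproduced here):

```python
import math

def attention_weights(scores):
    # Numerically stable softmax over one node's neighbour scores:
    # subtract the max before exponentiating to avoid overflow.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw attention scores of one node over three neighbours.
alpha = attention_weights([2.0, 0.5, 0.1])
# alpha sums to 1; the highest-scoring neighbour gets the largest weight.
```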
Modern network defense can benefit from the use of autonomous systems, offloading tedious and time-consuming work to agents with standard and learning-enabled components. These agents, operating on critical network infrastructure, need to be robust and trustworthy to ensure defense against adaptive cyber-attackers and, simultaneously, provide explanations for their actions and network activity. However, learning-enabled components typically use models, such as deep neural networks, that are not transparent in their high-level decision-making, leading to assurance challenges. Additionally, cyber-defense agents must execute complex long-term defense tasks in a reactive manner that involve coordination of multiple interdependent subtasks. Behavior trees are known to be successful in modelling interpretable, reactive, and modular agent policies with learning-enabled components. In this paper, we develop an approach to design autonomous cyber defense agents using behavior trees with learning-enabled components, which we refer to as Evolving Behavior Trees (EBTs). We learn the structure of an EBT with a novel abstract cyber environment and optimize learning-enabled components for deployment. The learning-enabled components are optimized for adapting to various cyber-attacks and deploying security mechanisms. The learned EBT structure is evaluated in a simulated cyber environment, where it effectively mitigates threats and enhances network visibility. For deployment, we develop a software architecture for evaluating EBT-based agents in computer network defense scenarios. Our results demonstrate that the EBT-based agent is robust to adaptive cyber-attacks and provides high-level explanations for interpreting its decisions and actions.
Authored by Nicholas Potteiger, Ankita Samaddar, Hunter Bergstrom, Xenofon Koutsoukos
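A minimal sketch of the behavior-tree machinery the abstract above builds on: Sequence and Selector composite nodes tick their children in order, making the policy both reactive and readable. The "defense" leaves below (threat check, block, monitor) are invented stand-ins, not the paper's learned EBT components:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Leaf:
    # A leaf node wraps a condition or action returning SUCCESS/FAILURE.
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

class Sequence:
    # Succeeds only if every child succeeds, in order.
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    # Returns the first child that succeeds (fallback behavior).
    def __init__(self, *children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical defense policy: block a detected threat, otherwise monitor.
threat_detected = Leaf(lambda s: SUCCESS if s["alert"] else FAILURE)
block_host = Leaf(lambda s: (s.__setitem__("blocked", True), SUCCESS)[1])
monitor = Leaf(lambda s: SUCCESS)
tree = Selector(Sequence(threat_detected, block_host), monitor)

state = {"alert": True, "blocked": False}
result = tree.tick(state)  # blocks the host, since an alert is active
```

The tree structure itself is the explanation: reading the composites top-down tells an operator exactly why the agent blocked a host rather than merely monitoring.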
The proposed work presents Intrusion Detection Systems (IDS) powered by artificial intelligence as a novel method for enhancing residential security. The overarching goal of the study is to design, develop, and evaluate a system that employs artificial intelligence techniques for real-time detection and prevention of unauthorized access, in response to the rising demand for such measures. Using machine learning algorithms such as anomaly detection, neural networks, and decision trees, which benefit from the incorporation of data from multiple sensors, the proposed system guarantees the accurate identification of suspicious activities. The proposed work examines large datasets and compares the system with conventional security measures to demonstrate its superior performance and prospective impact on reducing home intrusions. It contributes to the field of residential security by proposing a dependable, adaptable, and intelligent method for protecting homes against the ever-changing infiltration threats that exist today.
Authored by Jeneetha J, B.Vishnu Prabha, B. Yasotha, Jaisudha J, C. Senthilkumar, V.Samuthira Pandi
The development of AI computing has reached a critical inflection point. The parameter scale of large AI neural network models has grown rapidly to the “pre-trillion” level, and the computing needs of training such large-scale models have reached the “exa-scale” level. Besides, the AI foundation model also affects the correctness of AI applications, making it a new information security issue. Future AI development will be pushed by progress in computing power (supercomputers), algorithms (neural network models and parameter scale), and applications (foundation models and downstream fine-tuning). In particular, the computational efficiency of AI will be a key factor in the commercialization and popularization of AI applications.
Authored by Bor-Sung Liang
A fast-expanding area of research on automated AI focuses on the prediction and prevention of cyber-attacks using machine learning algorithms. In this study, we examined the research on applying machine learning algorithms to the problems of strategic cyber defense and attack forecasting. We also provided a technique for assessing and choosing the best machine learning models for anticipating cyber-attacks. Our findings show that machine learning methods, especially random forest and neural network models, are very accurate in predicting cyber-attacks. Additionally, we discovered a number of crucial characteristics, such as source IP, packet size, and malicious traffic, that are strongly associated with the likelihood of cyber-attacks. Our results imply that automated AI research on cyber-attack prediction and security planning holds tremendous promise for enhancing cyber-security and averting cyber-attacks.
Authored by Ravikiran Madala, N. Vijayakumar, Nandini N, Shanti Verma, Samidha Chandvekar, Devesh Singh
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
With increased computational efficiency, deep neural networks (DNNs) have gained importance in the area of medical diagnosis. Many researchers have noted the security concerns of the various DNN models used in clinical applications. Even an efficient model, however, frequently misbehaves when confronted with intentionally modified data samples, called adversarial examples. These adversarial examples are generated with imperceptible perturbations but can fool DNNs into giving false predictions. Thus, adversarial attacks and defense methods stand out in both the AI and security communities and have become a hot research topic of late. Adversarial attacks can be expected in various applications of deep learning models, especially in healthcare for disease prediction or classification, and should be properly handled with effective defensive mechanisms; otherwise they may pose a great threat to human life. This literature survey helps the reader recognize various adversarial attacks and defensive mechanisms. In the field of clinical analysis, this paper gives a detailed survey of adversarial approaches to deep neural networks, starting with the theoretical foundations, techniques, and applications of adversarial attack strategies. The contributions of various researchers to defensive mechanisms against adversarial attacks are also discussed. Finally, several open issues and challenges are discussed that may prompt further research efforts.
Authored by K Priya V, Peter Dinesh
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. This can be provoked with so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectifications: pro-active defense methods aim to correct the input data to classify the adversarial samples correctly, while reducing the accuracy of ordinary samples; passive defense methods, on the other hand, aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setup of autonomous driving: when an input has to be classified, we can neither discard the input nor afford the time for computationally expensive corrections. This motivates our method based on explainable artificial intelligence (XAI) for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples. We experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
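A minimal sketch of how the adversarial attacks discussed above are constructed, using the Fast Gradient Sign Method on a toy logistic-regression "classifier" (the papers above attack DNNs with stronger iterative methods such as PGD; the weights and input here are invented):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    # FGSM step: move each input dimension by eps in the direction
    # that increases the loss.
    return x + eps * np.sign(grad)

def loss_and_grad(w, x, y):
    # Logistic regression with cross-entropy loss; grad_x = d(loss)/dx.
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w
    return loss, grad_x

w = np.array([1.5, -2.0, 0.5])   # hypothetical model weights
x = np.array([1.0, -1.0, 0.5])   # clean input, true label y = 1
y = 1.0
loss0, g = loss_and_grad(w, x, y)
x_adv = fgsm_perturb(x, g, eps=0.1)
loss1, _ = loss_and_grad(w, x_adv, y)
# loss1 > loss0: a perturbation bounded by eps per dimension degrades
# the model's confidence in the correct label.
```

PGD, mentioned in the CIFAR-10 abstract earlier, is essentially this step applied iteratively with a projection back into the eps-ball.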
The Internet of Things (IoT) heralds an innovative generation in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritize task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, robust security, privacy, authentication, and recovery methods are required to realize this goal. In order to construct end-to-end secure IoT environments, this article meticulously reviews the security problems and risks inherent to IoT applications and emphasizes the critical necessity for architectural changes. The paper begins with an examination of security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The discussion focuses primarily on how these technologies help overcome security challenges and foster a secure ecosystem for the IoT.
Authored by Pranav A, Sathya S, HariHaran B
Developing network intrusion detection systems (IDS) presents significant challenges due to the evolving nature of threats and the diverse range of network applications. Existing IDSs often struggle to detect dynamic attack patterns and covert attacks, leading to misidentified network vulnerabilities and degraded system performance. These requirements must be met via dependable, scalable, effective, and adaptable IDS designs. Our IDS can recognise and classify complex network threats by combining the Deep Q-Network (DQN) algorithm with distributed agents and attention techniques. Our proposed distributed multi-agent IDS architecture has many advantages for guiding an all-encompassing security approach, including scalability, fault tolerance, and multi-view analysis. We conducted experiments using industry-standard datasets, including NSL-KDD and CICIDS2017, to determine how well our model performed. The results show that our IDS outperforms others in terms of accuracy, precision, recall, F1-score, and false-positive rate. Additionally, we evaluated our model's resistance to black-box adversarial attacks, which are commonly used to take advantage of flaws in machine learning. Under these difficult circumstances, our model performed quite well. We used a denoising autoencoder (DAE) for further model strengthening to improve the IDS's robustness. Lastly, we evaluated the effectiveness of our zero-day defenses, which are designed to mitigate attacks exploiting unknown vulnerabilities. Through our research, we have developed an advanced IDS solution that addresses the limitations of traditional approaches. Our model demonstrates superior performance, robustness against adversarial attacks, and effective zero-day defenses. By combining deep reinforcement learning, distributed agents, attention techniques, and other enhancements, we provide a reliable and comprehensive solution for network security.
Authored by Malika Malik, Kamaljit Saini
Cloud computing (CC) is vulnerable to existing information technology attacks, since it extends and utilizes information technology infrastructure, applications, and typical operating systems. In this manuscript, an Enhanced capsule generative adversarial network (ECGAN) with a blockchain-based Proof of Authority consensus procedure fostered Intrusion Detection (ID) system is proposed for enhancing cyber security in CC. The data are collected via the NSL-KDD benchmark dataset. The input data are fed to the proposed Z-score normalization process to eliminate redundancy, including missing values. The pre-processing output is fed to feature selection, during which the optimum features are extracted on the basis of univariate ensemble feature selection (UEFS). On the basis of the optimum features, the data are classified as normal or anomalous utilizing the Enhanced capsule generative adversarial network. Subsequently, a blockchain-based Proof of Authority (POA) consensus process is proposed for improving the cyber security of the data in the cloud computing environment. The proposed ECGAN-BC-POA-IDS method is executed in Python and the performance metrics are calculated. The proposed approach has attained 33.7\%, 25.7\%, 21.4\% improved accuracy, 24.6\%, 35.6\%, 38.9\% lower attack detection time, and 23.8\%, 18.9\%, 15.78\% lower delay than the existing methods, like Artificial Neural Network (ANN) with blockchain framework, Integrated Architecture with Byzantine Fault Tolerance consensus, and Blockchain Random Neural Network (RNN-BC), respectively.
Authored by Ravi Kanth, Prem Jacob
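A minimal sketch of the Z-score normalization pre-processing step named in the abstract above, with one common way to handle the missing values it mentions: impute NaNs with the column mean (so they map to 0 after normalization). The sample matrix is invented; the paper's exact imputation rule is not specified here:

```python
import numpy as np

def zscore_normalize(X):
    # Column-wise z-score normalization: (x - mean) / std.
    # NaNs are imputed with the column mean first (assumed strategy),
    # so imputed cells become 0 after normalization.
    X = np.asarray(X, dtype=float).copy()
    col_mean = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_mean[nan_cols]
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (X - mu) / sigma

X = [[1.0, 10.0],
     [2.0, float("nan")],   # missing value in the second feature
     [3.0, 30.0]]
Z = zscore_normalize(X)
# Each column of Z now has zero mean and unit variance.
```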
Information security construction is a social issue, and the most urgent task is information security risk assessment. The Bayesian neural network currently plays a vital role in enterprise information security risk assessment; it overcomes the subjective defects of traditional assessment results and operates efficiently. The risk quantification method based on fuzzy theory and a Bayesian-regularized BP neural network mainly uses fuzzy theory to process the original data and uses the processed data as the input of the neural network, which can effectively reduce the ambiguity of linguistic description. At the same time, special neural network training is carried out to address the tendency of the neural network to fall into local optima. Finally, the risk is verified and quantified through experimental simulation. This paper mainly discusses enterprise information security risk assessment based on a Bayesian neural network, hoping to provide strong technical support for enterprises and organizations carrying out risk rectification plans. The above method therefore provides a new idea for information security risk assessment.
Authored by Zijie Deng, Guocong Feng, Qingshui Huang, Hong Zou, Jiafa Zhang
Cyber threats have been a major issue in the cyber security domain. Every hacker follows a series of cyber-attack stages known as the cyber kill chain. Each stage has its norms and limitations. For a decade, researchers have focused on detecting these attacks, but mere monitoring tools are no longer optimal solutions. Everything in the computer science field is becoming autonomous, which leads to the idea of the Autonomous Cyber Resilience Defense algorithm designed in this work. Resilience has two aspects: response and recovery. Response requires actions to be performed to mitigate attacks; recovery is patching the flawed code or back-door vulnerability. Both aspects have traditionally required human assistance in the cybersecurity defense field. This work aims to develop an algorithm based on Reinforcement Learning (RL) with a Convolutional Neural Network (CNN), far nearer to the human learning process, for malware images. RL learns through a reward mechanism applied to every performed attack: every action has some output that can be classified into positive or negative rewards. To enhance its decision process, a Markov Decision Process (MDP) is integrated into this RL approach. RL impact and induction measures for malware images were measured to obtain optimal results. Successful automated actions are achieved on the Malimg malware image dataset. The proposed work has shown 98\% accuracy in classification, detection, and the deployment of autonomous resilience actions.
Authored by Kainat Rizwan, Mudassar Ahmad, Muhammad Habib
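A minimal sketch of the MDP reward mechanism described above, using tabular Q-learning in place of the paper's CNN-based function approximation. The two-state "infected/clean" environment, action names, and rewards are invented for illustration:

```python
def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    # One tabular Q-learning step:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])
    return Q

# Hypothetical defense MDP: in state "infected" the agent can
# "quarantine" (positive reward, leads to "clean") or "ignore" (negative).
actions = ["quarantine", "ignore"]
Q = {s: {a: 0.0 for a in actions} for s in ["infected", "clean"]}
for _ in range(50):
    q_update(Q, "infected", "quarantine", reward=1.0, s_next="clean")
    q_update(Q, "infected", "ignore", reward=-1.0, s_next="infected")

# The greedy policy in "infected" converges to the rewarded response action.
policy = max(Q["infected"], key=Q["infected"].get)
```

In the paper's setting the state would be a malware image embedding and the actions resilience responses, but the reward-driven update is the same.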
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use AI-powered packet-level network traffic analysis and known intrusion patterns to signal an assault. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks.
However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
Anomaly detection is a challenge well-suited to machine learning, and in the context of information security, the benefits of unsupervised solutions show significant promise. Recent attention to Graph Neural Networks (GNNs) has provided an innovative approach to learn from attributed graphs. Using a GNN encoder-decoder architecture, anomalous edges between nodes can be detected during the reconstruction phase. The aim of this research is to determine whether an unsupervised GNN model can detect anomalous network connections in a static, attributed network. Network logs were collected from four corporate networks and one artificial network using endpoint monitoring tools. A GNN-based anomaly detection system was designed and employed to score and rank anomalous connections between hosts. The model was validated in four realistic experimental scenarios across the four large corporate networks and the smaller artificial network environment. Although quantitative metrics were affected by factors including the scale of the network, qualitative assessments indicated that anomalies from all scenarios were detected. The false positives across each scenario indicate that this model in its current form is useful as an initial triage, though it would require further improvement to become a performant detector. This research serves as a promising step for advancing this methodology in detecting anomalous network connections. Future work to improve results includes narrowing the scope of detection to specific threat types and a further focus on feature engineering and selection.
Authored by Charlie Grimshaw, Brian Lachine, Taylor Perkins, Emilie Coote
Device recognition is the primary step toward a secure IoT system. However, existing device recognition technology often faces the problems of indistinct data characteristics and insufficient training samples, resulting in low recognition rates. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first extract the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors to build an IoT icon library, and realize accurate identification of device types through image retrieval. The experimental results show that the accuracy of sampled retrieval within the icon library reaches 98.5\%, and the recognition accuracy outside the library reaches 83.3\%, effectively identifying the types of IoT devices.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
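A minimal sketch of the retrieval step described above: once icon feature vectors exist, identification reduces to nearest-neighbour lookup by cosine similarity. The 3-dimensional "embeddings" and device names below are invented (the paper uses ResNet50 features, which are far higher-dimensional):

```python
import numpy as np

def retrieve(query_vec, library, top_k=1):
    # Rank library entries by cosine similarity to the query embedding.
    names = list(library)
    M = np.stack([library[n] for n in names])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = M @ q
    order = np.argsort(-sims)[:top_k]
    return [(names[i], float(sims[i])) for i in order]

# Hypothetical icon library of device-type embeddings.
library = {
    "camera":  np.array([0.9, 0.1, 0.0]),
    "router":  np.array([0.1, 0.9, 0.2]),
    "speaker": np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.15, 0.05])  # embedding of an unseen camera icon
best = retrieve(query, library)      # the closest library entry wins
```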
IBMD (Intelligent Behavior-Based Malware Detection) aims to detect and mitigate malicious activities in cloud computing environments by analyzing the behavior of cloud resources, such as virtual machines, containers, and applications. The system uses different machine learning methods, such as deep learning and artificial neural networks, to analyze the behavior of cloud resources and detect anomalies that may indicate malicious activity. The IBMD system can also monitor and accumulate data from various sources, such as network traffic and system logs, to provide a comprehensive view of the behavior of cloud resources. IBMD is designed to operate in a cloud computing environment, taking advantage of the scalability and flexibility of the cloud to detect malware and respond to security incidents. The system can also be integrated with existing security tools and services, such as firewalls and intrusion detection systems, to provide a comprehensive security solution for cloud computing environments.
Authored by Jibu Samuel, Mahima Jacob, Melvin Roy, Sayoojya M, Anu Joy
With the rapid development of science and technology, information security issues have been attracting more attention. According to statistics, tens of millions of computers around the world are infected by malicious software (malware) every year, causing losses of up to several billion USD. Malware uses various methods to invade computer systems, including viruses, worms, and Trojan horses, and exploits network vulnerabilities for intrusion. Most intrusion detection approaches employ behavioral analysis techniques to analyze malware threats with packet collection and filtering, feature engineering, and attribute comparison. These approaches have difficulty differentiating malicious traffic from legitimate traffic. In this work, malware detection and classification are conducted with deep learning and graph neural networks (GNNs) to learn the characteristics of malware. A GNN-based model is proposed for malware detection and classification on a renewable energy management platform. It uses a GNN to analyze malware with Cuckoo Sandbox malware records for malware detection and classification. To evaluate the effectiveness of the GNN-based model, the CIC-AndMal2017 dataset is used to examine its accuracy, precision, recall, and ROC curve. Experimental results show that the GNN-based model achieves better results.
Authored by Hsiao-Chung Lin, Ping Wang, Wen-Hui Lin, Yu-Hsiang Lin, Jia-Hong Chen
Low probability of detection (LPD) has recently emerged as a means to enhance the privacy and security of wireless networks. Unlike existing wireless security techniques, LPD measures aim to conceal the entire existence of wireless communication instead of safeguarding the information transmitted from users. Motivated by LPD communication, in this paper, we study a privacy-preserving and distributed framework based on graph neural networks to minimise the detectability of a wireless ad-hoc network as a whole and predict an optimal communication region for each node in the wireless network, allowing them to communicate while remaining undetected from external actors. We also demonstrate the effectiveness of the proposed method in terms of two performance measures, i.e., mean absolute error and median absolute error.
Authored by Sivaram Krishnan, Jihong Park, Subhash Sagar, Gregory Sherman, Benjamin Campbell, Jinho Choi