With the rapid development of cloud computing services and big data applications, the number of data centers is proliferating, and with it the problem of data center energy consumption is becoming more serious. Data center energy saving has therefore received growing attention as a way to reduce carbon emissions and power costs. Data center energy consumption is dominated by IT equipment and terminal air conditioning. In this paper, we propose a data center energy-saving application system based on a fog computing architecture that reduces air conditioning energy consumption and thereby overall data center energy consumption. Specifically, the intelligent module is placed in the fog node to exploit the low latency, proximal computing, and proximal storage of fog computing; this shortens the network call chain and improves both the stability of acquiring energy-saving policies and the frequency of energy-saving regulation, overcoming the high latency and instability of energy-saving approaches built on a cloud computing architecture. AI techniques in the intelligent module generate energy-saving strategies and remotely regulate the terminal air conditioners to achieve better energy-saving effects, addressing the shortcomings of traditional manual regulation based on expert experience, namely its low adjustment frequency and serious loss of terminal air conditioner cooling capacity. Experimental results show that, compared with traditional manual regulation based on expert experience, the fog-computing-based energy-saving application operates safely and efficiently and reduces the PUE to 1.04. Compared with an AI energy-saving strategy based on cloud computing, the fog-computing-based strategy generates policies faster and with lower latency, improving speed by 29.84\%.
Authored by Yazhen Zhang, Fei Hu, Yisa Han, Weiye Meng, Zhou Guo, Chunfang Li
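The PUE figure reported above is the standard ratio of total facility power to IT equipment power. A minimal sketch of the metric (the kilowatt figures below are illustrative, not from the paper):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt goes to IT equipment; a PUE
    of 1.04 implies cooling and other overheads add only ~4% on top
    of the IT load.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative figures (not from the paper):
print(round(pue(1040.0, 1000.0), 2))  # 1.04
```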
AI systems face potential hardware security threats. Existing AI systems generally use a heterogeneous CPU + intelligent accelerator architecture, with a PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on a hardware security isolation architecture, but the conventional hardware security isolation architecture does not cover the intelligent accelerator on the PCIe bus. From a hardware security perspective, data offloaded to the intelligent accelerator therefore faces great security risks. To effectively integrate the intelligent accelerator into the CPU's security mechanism, this paper presents a novel hardware security isolation architecture. The PCIe protocol is extended to be security-aware by adding security-information packing and unpacking logic to the PCIe controller. The hardware resources on the intelligent accelerator are isolated at a fine granularity: resources classified into the secure world can only be controlled and used by software in the CPU's trusted execution environment. Based on this hardware security isolation architecture, a security-isolated spiking convolutional neural network accelerator is designed and implemented. The experimental results demonstrate that the proposed architecture adds no overhead to the bandwidth or latency of the PCIe controller and does not affect the performance of the entire hardware computing process, from CPU data offloading through intelligent accelerator computing to data return. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, models, and output data, and effectively integrates the intelligent accelerator's hardware resources into the CPU's security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
Edge computing brings computation and analytics capabilities closer to data sources. The available literature on AI solutions for edge computing primarily addresses just two edge layers: the upper layer can communicate directly with the cloud and comprises one or more IoT edge devices that gather sensing data from IoT devices in the lower layer. However, industries mainly adopt a multi-layered architecture, specified by the ISA-95 standard, to isolate and safeguard their assets. In this architecture, only the top layer is connected to the cloud, while the lower layers of the hierarchy interact only with their neighbouring layers. Due to these added intermediate layers (and IoT edge devices) between the top and lower layers, existing AI solutions for typical two-layer edge architectures may not be directly applicable in this scenario. Moreover, not all industries are willing to send and store their private data in the cloud. Implementing AI solutions tailored to a hierarchical edge architecture would improve response times while maintaining the same degree of security by working within the ISA-95-compliant network architecture. This paper explores a strategy for deploying a centralized federated learning-based AI solution in a hierarchical edge architecture and demonstrates its efficacy through a real deployment scenario.
Authored by Narendra Bisht, Subhasri Duttagupta
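The core aggregation step of centralized federated learning, as used in the work above, is typically federated averaging: clients train locally and the server averages their parameters weighted by local sample counts. A minimal sketch (the toy weight vectors and client sizes are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters into a
    global model, weighting each client by its local sample count."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # shape (clients, params)
    coeffs = np.array(client_sizes) / total  # shape (clients,)
    return coeffs @ stacked                  # sample-weighted average

# Three hypothetical edge clients with tiny 1-D parameter vectors.
w = [np.array([1.0, 0.0]), np.array([3.0, 2.0]), np.array([5.0, 4.0])]
n = [10, 10, 20]
print(fed_avg(w, n))  # [3.5 2.5]
```

In a hierarchical (ISA-95-style) deployment, the same step can be applied recursively: each intermediate layer averages the layer below it before forwarding upward.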
The development of 5G, cloud computing, artificial intelligence (AI), and other new-generation information technologies has driven the rapid growth of the data center (DC) industry, which directly increases energy consumption and carbon emissions. In addition to traditional engineering-based methods, AI-based technology has been widely used in existing data centers. However, existing AI model training schemes are time-consuming and laborious. To tackle these issues, we propose an automated training and deployment platform for AI models based on a cloud-edge architecture, covering data processing, data annotation, model training and optimization, and model publishing. The proposed system can generate specific models based on the room environment and standardizes and automates model training, which is helpful for large-scale data center scenarios. Simulation and experimental results show that the proposed solution reduces the time required for single-model training by 76.2\%, and multiple training tasks can run concurrently. It can therefore adapt to large-scale energy-saving scenarios and greatly improve model iteration efficiency, which improves the energy-saving rate and supports green energy conservation in data centers.
Authored by Chunfang Li, Zhou Guo, Xingmin He, Fei Hu, Weiye Meng
Foundation models, such as large language models (LLMs), have been widely recognised as transformative AI technologies due to their capabilities to understand and generate content, including plans with reasoning capabilities. Foundation model based agents derive their autonomy from the capabilities of foundation models, which enable them to autonomously break down a given goal into a set of manageable tasks and orchestrate task execution to meet the goal. Despite the huge efforts put into building foundation model based agents, the architecture design of the agents has not yet been systematically explored. Also, while there are significant benefits of using agents for planning and execution, there are serious considerations regarding responsible AI related software quality attributes, such as security and accountability. Therefore, this paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architecture of two real-world agents.
Authored by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Stefan Harrer, Jon Whittle
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to efficiently prioritize vulnerabilities. The incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. In the future, research will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, thereby promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
As a result of globalization, the COVID-19 pandemic, and the migration of data to the cloud, traditional security measures in which an organization relies on a security perimeter and firewalls no longer work. There is a shift to a concept whereby resources are not implicitly trusted, and a zero-trust architecture (ZTA) based on the zero-trust principle is needed. Applying zero-trust principles to networks ensures that a single insecure Application Programming Interface (API) does not become the weakest link compromising Critical Data, Assets, Applications and Services (DAAS). The purpose of this paper is to review the use of zero trust in securing a network architecture instead of a traditional perimeter. Different software solutions for implementing secure access to applications and services for remote users using zero trust network access (ZTNA) are also summarized, as is the author's qualitative study "Insecure Application Programming Interface in Zero Trust Networks". The study showed that there is increased usage of zero trust in securing networks and protecting organizations from malicious cyber-attacks. The research also indicates that APIs are insecure in zero-trust environments and most organizations are not aware of their presence.
Authored by Farhan Qazi
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), it is of utmost importance to guarantee resilient and transparent security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance UAV security, departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. Our methodology detects and identifies UAVs with an accuracy of 84.59\%, achieved through a unique method that applies a Deep Learning framework to Radio Frequency (RF) signals. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improves the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
We propose a conceptual framework, named the "AI Security Continuum," consisting of dimensions for dealing sustainably and systematically with the breadth of AI security risk in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum across layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
The use of encryption for medical images offers several benefits. First, it enhances the confidentiality and privacy of patient data, preventing unauthorized individuals or entities from accessing sensitive medical information. Second, encrypted medical images may be sent securely over untrusted networks, such as the Internet, without the risk of data eavesdropping or tampering. Traditional methods of storing and retrieving medical images often lack efficient encryption and privacy-preserving mechanisms. This project delves into enhancing the security and accessibility of medical image storage across diverse cloud environments. Through the implementation of encryption methods, pixel scrambling techniques, and integration with AWS S3, the research aims to fortify the confidentiality of medical images while ensuring rapid retrieval. These findings collectively illuminate the security and operational efficiency of the implemented encryption, scrambling techniques, and AWS integration, and offer a foundation for advancing secure medical image retrieval in multi-cloud settings.
Authored by Mohammad Shanavaz, Charan Manikanta, M. Gnanaprasoona, Sai Kishore, R. Karthikeyan, M.A. Jabbar
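The pixel scrambling step mentioned above can be sketched as a key-seeded permutation of pixel positions, which is invertible by anyone holding the key. This is an illustrative assumption about the technique, not the paper's exact algorithm, and on its own it is an obfuscation step, not encryption:

```python
import numpy as np

def scramble(img: np.ndarray, key: int) -> np.ndarray:
    """Permute pixel positions with a key-seeded RNG (illustrative
    scrambling step; a real system pairs this with strong encryption)."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    return img.flatten()[perm].reshape(img.shape)

def unscramble(img: np.ndarray, key: int) -> np.ndarray:
    """Invert the permutation by regenerating it from the same key."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    out = np.empty(img.size, dtype=img.dtype)
    out[perm] = img.flatten()
    return out.reshape(img.shape)

original = np.arange(16, dtype=np.uint8).reshape(4, 4)  # toy "image"
restored = unscramble(scramble(original, key=42), key=42)
assert np.array_equal(restored, original)
```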
Data security in numerous businesses, including banking, healthcare, and transportation, depends on cryptography, which is becoming more and more evident as IoT and AI applications proliferate. Despite the benefits and drawbacks of traditional cryptographic methods such as symmetric and asymmetric encryption, there remains a demand for enhanced security that does not compromise efficiency. This work introduces a novel approach called multi-fused cryptography, which combines the benefits of distinct cryptographic methods to overcome their shortcomings. Through a comparative performance analysis, our study demonstrates that the proposed technique successfully enhances data security during network transmission.
Authored by Irin Loretta, Idamakanti Kasireddy, M. Prameela, D Rao, M. Kalaiyarasi, S. Saravanan
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems. Our solution is robust against black-box attacks and well-established generative adversarial network (GAN) based image restoration. We analyze a potential risk in previous work, where the proposed cosine similarity computation might directly leak the protected precomputed embedding stored on the server side. We propose a Function Secret Sharing (FSS) based face embedding comparison protocol without any intermediate result leakage. In addition, we show experimentally that the proposed protocol is more efficient than the Secret Sharing (SS) based protocol.
Authored by Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
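The leakage risk described above can be illustrated with a toy linear-algebra argument: if a similarity protocol reveals raw inner products (the numerator of cosine similarity) against attacker-chosen probe vectors, d linearly independent probes suffice to recover a d-dimensional stored embedding exactly. This is a hedged sketch of the general attack idea, not the specific protocol analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
protected = rng.normal(size=d)     # embedding stored server-side

# Attacker submits d linearly independent probe "queries" and
# observes the raw inner products a leaky protocol would expose.
probes = rng.normal(size=(d, d))   # almost surely full rank
scores = probes @ protected        # observed similarity numerators

# Solving the linear system recovers the protected embedding.
recovered = np.linalg.solve(probes, scores)
assert np.allclose(recovered, protected)
```

This is why the paper's FSS-based comparison, which avoids exposing intermediate results, matters.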
Recent innovations in computer science and informatics are driving the integration of AI into modern healthcare, extending its applications to medical sectors previously reliant on human expertise. Creating robust and clinically relevant AI models requires extensive data, which can be challenging to gather, particularly when dealing with rare diseases. Data sharing among healthcare entities can address this issue, but legal, privacy, and data ownership concerns hinder such an approach. To foster data sharing, in this paper we propose the GEmelli GeNerator - Real World Data (GEN-RWD) Sandbox, which provides a secure environment for data analysis without compromising sensitive medical data. This modular architecture serves as a research platform for various stakeholders, including clinical researchers, policymakers, and pharmaceutical companies. Authorized users submit research requests through the GUI, which are processed within the hospital, and the results can be accessed without revealing the original clinical data source. In this paper we present the GEN-RWD Sandbox's architecture module in charge of executing the analysis requests, the Processor. The Processor's code is openly shared as the GSProcessor R package, available at https://gitlab.com/benedetta.gottardelli/GSProcessor.
Authored by Benedetta Gottardelli, Roberto Gatta, Leonardo Nucciarelli, Mariachiara Savino, Andrada Tudor, Mauro Vallati, Andrea Damiani
The resurgence of Artificial Intelligence (AI) has been accompanied by a rise in ethical issues. AI practitioners either face challenges in making ethical choices when designing AI-based systems or are not aware of such challenges in the first place. Increasing the level of awareness and understanding of the perceptions of those who develop AI systems is a critical step toward mitigating ethical issues in AI development. Motivated by these challenges, needs, and the lack of engaging approaches to address these, we developed an interactive, scenario-based ethical AI quiz. It allows AI practitioners, including software engineers who develop AI systems, to self-assess their awareness and perceptions about AI ethics. The experience of taking the quiz, and the feedback it provides, will help AI practitioners understand the gap areas, and improve their overall ethical practice in everyday development scenarios. To demonstrate these expected outcomes and the relevance of our tool, we also share a preliminary user study. The video demo can be found at https://zenodo.org/record/7601169\#.Y9xgA-xBxhF.
Authored by Wei Teo, Ze Teoh, Dayang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
In this work, we provide an in-depth characterization study of the performance overhead for running Transformer models with secure multi-party computation (MPC). MPC is a cryptographic framework for protecting both the model and input data privacy in the presence of untrusted compute nodes. Our characterization study shows that Transformers introduce several performance challenges for MPC-based private machine learning inference. First, Transformers rely extensively on “softmax” functions. While softmax functions are relatively cheap in a non-private execution, softmax dominates the MPC inference runtime, consuming up to 50\% of the total inference runtime. Further investigation shows that computing the maximum, needed for providing numerical stability to softmax, is a key culprit for the increase in latency. Second, MPC relies on approximating non-linear functions that are part of the softmax computations, and the narrow dynamic ranges make optimizing softmax while maintaining accuracy quite difficult. Finally, unlike CNNs, Transformer-based NLP models use large embedding tables to convert input words into embedding vectors. Accesses to these embedding tables can disclose inputs; hence, additional obfuscation for embedding access patterns is required for guaranteeing the input privacy. One approach to hide address accesses is to convert an embedding table lookup into a matrix multiplication. However, this naive approach increases MPC inference runtime significantly. We then apply tensor-train (TT) decomposition, a lossy compression technique for representing embedding tables, and evaluate its performance on embedding lookups. We show the trade-off between performance improvements and the corresponding impact on model accuracy using detailed experiments.
Authored by Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin Lee
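The softmax bottleneck described above stems from the max subtraction used for numerical stability: in plaintext it is nearly free, but a secure maximum under MPC requires many comparison rounds. A minimal plaintext sketch of the stable formulation:

```python
import numpy as np

def softmax_stable(x):
    """Softmax with the row maximum subtracted for numerical stability.
    In plaintext, np.max is almost free; under MPC, the secure maximum
    is the step the study above identifies as a key latency culprit."""
    z = x - np.max(x, axis=-1, keepdims=True)  # the costly MPC step
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Large logits overflow a naive exp(); the stable version is fine.
logits = np.array([1000.0, 1001.0, 1002.0])
p = softmax_stable(logits)
assert np.isclose(p.sum(), 1.0)
```

Skipping the max subtraction would avoid the MPC comparisons but overflows for large logits, which is exactly the accuracy/latency tension the characterization highlights.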
Searchable encryption allows users to perform search operations on encrypted data without decrypting it first. Secret sharing is one of the most important cryptographic primitives used to design information-theoretic schemes. Nowadays, cryptosystem designers provide facilities to adjust security parameters in real time to circumvent AI-enabled cyber security threats. For the long-term security of data used by various applications, proactive secret sharing allows the shares of the original secret to be dynamically adjusted during a specific interval of time. In proactive secret sharing, shares are updated at regular intervals by the servers (participants), not by the dealer. In this paper, we propose a novel proactive secret sharing scheme in which the shares stored at servers are updated at regular intervals using preshared pairwise keys between servers. The underlying querying method supports direct search of words over sentences using a conjunctive search function, without generating any index.
Authored by Praveen K, Gabriel S, Indranil Ray, Avishek Adhikari, Sabyasachi Datta, Arnab Biswas
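The pairwise-key refresh idea above can be sketched over simple additive secret sharing (an illustrative stand-in, not the paper's exact scheme): each pair of servers derives the same pseudorandom offset from its preshared key, one adds it and the other subtracts it, so the offsets cancel and the secret is unchanged while every individual share is re-randomized without dealer involvement:

```python
import hashlib

Q = 2**61 - 1  # prime modulus for the additive sharing (illustrative)

def prf(key: bytes, epoch: int) -> int:
    """Toy PRF: hash the preshared key with the epoch number."""
    digest = hashlib.sha256(key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % Q

def refresh(shares, pair_keys, epoch):
    """Proactive refresh: pairwise offsets cancel in the sum, so the
    reconstructed secret is preserved while each share changes."""
    new = list(shares)
    for i in range(len(shares)):
        for j in range(i + 1, len(shares)):
            d = prf(pair_keys[(i, j)], epoch)
            new[i] = (new[i] + d) % Q
            new[j] = (new[j] - d) % Q
    return new

secret = 123456789
shares = [111, 222, (secret - 333) % Q]  # toy 3-party additive sharing
keys = {(0, 1): b"k01", (0, 2): b"k02", (1, 2): b"k12"}  # preshared

refreshed = refresh(shares, keys, epoch=7)
assert sum(refreshed) % Q == secret      # secret survives the refresh
assert refreshed != shares               # but every share changed
```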
The volume of electronic media data has grown at an unprecedented rate in recent years. Video and multimedia data analysis is used in almost all security control areas: traffic control, weather monitoring, video conferencing, social media, and more. As a consequence, it is necessary to store and transmit these data while addressing security and privacy concerns. In this research study, a new Div-Mod Stego algorithm is combined with a multi-secret sharing method, along with temporary frame reordering and a genetic algorithm, to implement high-end security in the video sharing process. Qualitative and quantitative analyses were also carried out to compare the performance of the proposed model with existing models. Computational analysis shows that the proposed solution satisfies the requirements of real-time applications.
Authored by R. Logeshwari, Rajasekar Velswamy, Subhashini R, Karunakaran V
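Div-mod steganography, the building block named above, typically hides a secret digit in a pixel by replacing the pixel's remainder modulo a small base. A hedged sketch of that core embed/extract step (the base and pixel values are illustrative; the paper layers multi-secret sharing, frame reordering, and a genetic algorithm on top):

```python
def embed(pixel: int, digit: int, base: int = 8) -> int:
    """Div-mod embedding: replace pixel % base with the secret digit,
    keeping pixel // base (the visually dominant part) unchanged."""
    assert 0 <= digit < base and 0 <= pixel <= 255
    stego = pixel - (pixel % base) + digit
    return min(max(stego, 0), 255)

def extract(stego_pixel: int, base: int = 8) -> int:
    """Recover the hidden digit as the remainder modulo the base."""
    return stego_pixel % base

p, d = 200, 5
s = embed(p, d)
assert extract(s) == d
assert abs(s - p) < 8  # per-pixel distortion is bounded by the base
```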
At present, people can easily share multimedia information on the Internet, which leads to serious data security issues. Especially in the medical, military, and financial fields, images often contain a great deal of sensitive information. To transmit images safely, many secret image sharing methods have been proposed. However, existing methods cannot simultaneously solve the problems of pixel expansion and the high computational complexity of shadow images. In this paper, we propose an image sharing method that combines a sharing matrix with a variational hyperprior network to reduce the pixel expansion and computational complexity of secret image sharing. The method uses the variational hyperprior network to encode images, introducing a hyperprior to effectively capture spatial dependencies in the latent representation, which allows images to be compressed with high efficiency. The experimental results show that our method has low computational complexity and high security compared with state-of-the-art approaches. In addition, the proposed method effectively reduces pixel expansion when the sharing matrix is used to generate shadow images.
Authored by Yuxin Ding, Miaomiao Shao, Cai Nie
Quantum secret sharing (QSS) is a cryptographic technique relying on the transmission and manipulation of quantum states to securely distribute secret information across multiple participants. However, quantum systems are susceptible to various types of noise that can compromise their security and reliability. It is therefore essential to analyze the influence of noise on QSS to ensure its effectiveness and practicality in real-world quantum communication. This paper studies the impact of various noisy environments on multi-dimensional QSS. Using quantum fidelity, we examine the influence of four noise models: d-phase-flip (dpf), dit-flip (df), amplitude damping (ad), and depolarizing (d). We find that fidelity declines as the noise parameter increases. Furthermore, the results demonstrate that the efficiency of the QSS protocol differs significantly across the distinct noise models.
Authored by Deepa Rathi, Sanjeev Kumar, Reena Grover
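The fidelity-versus-noise relationship above can be illustrated in the simplest (qubit, rather than the paper's qudit) case of the depolarizing channel, where a pure state ρ is mixed with the maximally mixed state and the fidelity F = ⟨ψ|ρ'|ψ⟩ falls linearly as 1 − p/2:

```python
import numpy as np

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """Single-qubit depolarizing channel: rho -> (1-p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

psi = np.array([1.0, 0.0])            # pure state |0>
rho = np.outer(psi, psi.conj())

# Fidelity of the noisy state with the ideal one, for growing noise.
fidelities = [float(np.real(psi.conj() @ depolarize(rho, p) @ psi))
              for p in (0.0, 0.2, 0.4)]
# Fidelity declines monotonically: 1.0, 0.9, 0.8 (= 1 - p/2).
assert fidelities == sorted(fidelities, reverse=True)
```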
At present, technological solutions based on artificial intelligence (AI) are being adopted at an accelerating pace across sectors of the economy and social relations worldwide. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identifying the vulnerabilities, threats, and risks of AI technologies requires considering each technology separately, or in combination when they are used jointly in application solutions. Of the wide range of AI technologies, data preparation, DevOps, machine learning (ML) algorithms, cloud technologies, microprocessors, and public services (including marketplaces) have received the most attention. Due to their high importance and impact on most AI solutions, this paper focuses on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessment, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training materials, plans, policies, and procedures. However, cyber security risk assessment and cyber insurance are complicated to manage and measure, and cybersecurity professionals need a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle, examining and demonstrating how AI can aid cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs, along with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by the AI itself; and (c) generative AI makes it possible to produce easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's Three Laws of Robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
Data in AI-empowered electric vehicles is protected using blockchain technology for immutable and verifiable transactions, in addition to high-strength encryption methods and digital signatures. This research paper compares and evaluates the security mechanisms for V2X communication in AI-enabled EVs. The purpose of the study is to ensure the reliability of security measures by evaluating performance metrics including false positive rate, false negative rate, detection accuracy, processing time, communication latency, computational resources, key generation time, and throughput. A comprehensive experimental approach is implemented using a diverse dataset gathered from actual V2X communication conditions. The evaluation reveals that the security mechanisms perform inconsistently. Message integrity verification obtains the highest detection accuracy, with a low false positive rate of 2\% and a 0\% false negative rate. Traffic encryption has a low processing time, requiring only 10 milliseconds for encryption and decryption, and adds only 5 bytes of communication overhead to V2X messages. The detection accuracy of intrusion detection systems is adequate at 95\%, but they require more computational resources, consuming 80\% of the CPU and 150 MB of memory. In particular attack scenarios, certificate-based authentication and secure key exchange show promise. Certificate-based authentication mitigates MitM attacks with a false positive rate of 3\% and a false negative rate of 1\%. Secure key exchange thwarts replay attacks with a false positive rate of 0\% and a false negative rate of 2\%. Nevertheless, their efficacy varies by attack scenario, highlighting the need for adaptive security mechanisms. The evaluated security mechanisms exhibit varying throughput: message integrity verification and traffic encryption achieve high throughput, enabling secure data transfer rates of 1 Mbps and 800 Kbps, respectively.
Overall, the results contribute to the understanding of V2X communication security challenges in AI-enabled EVs. Message integrity verification and traffic encryption emerge as effective mechanisms that provide robust security with high performance. The study provides insight for designing secure and dependable V2X communication systems. Future research should concentrate on enhancing V2X communication security mechanisms and exploring novel approaches to address emerging threats.
Authored by Edward V, Dhivya. S, M.Joe Marshell, Arul Jeyaraj, Ebenezer. V, Jenefa. A
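Message integrity verification, the best-performing mechanism above, is commonly built on a keyed MAC: the sender attaches a short tag computed over the payload, and the receiver recomputes and compares it in constant time. A minimal HMAC-SHA256 sketch (the key and V2X payload below are hypothetical, and the evaluated system's exact construction is not specified in the abstract):

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can verify integrity."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time tag comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(key, message), tag)

key = b"preshared-v2x-session-key"      # hypothetical key material
msg = b"EV42:lane=2;speed=48;brake=0"   # hypothetical V2X payload
tag = sign(key, msg)                    # 32-byte tag

assert verify(key, msg, tag)                                   # intact
assert not verify(key, b"EV42:lane=2;speed=99;brake=0", tag)   # tampered
```

The 32-byte tag here is larger than the 5-byte overhead reported above, illustrating the kind of overhead/robustness trade-off such evaluations measure.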
This article proposes a security protection technology based on active dynamic defense. It addresses unknown threats that traditional rule-based detection methods cannot detect, effectively resisting undirected virus propagation such as worms; it isolates new unknown viruses, Trojans, and other attack threats; and it strengthens terminal protection, countering east-west lateral-penetration attacks inside the internal network and enhancing the network's adversarial capabilities. We further propose modeling user behavior habits with machine learning algorithms: using historical behavior models, abnormal user behavior can be detected in real time, network danger can be perceived, and network defense strategies can be changed proactively to raise the difficulty for attackers. The goal is comprehensive and effective defense against, identification of, and localization of network attack behaviors, including APT attacks.
Authored by Fu Yu
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. Building on the role of artificial intelligence (AI) in computer network security management, it establishes a network security system that combines AI with traditional technologies. Furthermore, by incorporating an attention mechanism into a Graph Neural Network (GNN) and applying it to botnet detection, a more robust and comprehensive network security system is developed to improve detection of and response to network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with false positive and false negative rates reduced to 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98\%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNNs and attention mechanisms on network security performance.
Authored by Fei Xia, Zhihao Zhou
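The core of the GNN-with-attention approach above is attention-weighted neighborhood aggregation: each node scores its neighbors, softmax-normalizes the scores, and mixes neighbor features by those weights. A GAT-flavored numpy sketch under illustrative assumptions (a tiny 4-host graph with random features; the paper's actual layer design and training are not specified in the abstract):

```python
import numpy as np

def attention_aggregate(h, adj, w_score):
    """One attention-weighted aggregation step: score each edge from
    the concatenated endpoint features, softmax over each node's
    neighbors, then average neighbor features by those weights."""
    out = np.zeros_like(h)
    for i in range(h.shape[0]):
        nbrs = np.nonzero(adj[i])[0]
        scores = np.array([w_score @ np.concatenate([h[i], h[j]])
                           for j in nbrs])
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                 # attention weights sum to 1
        out[i] = (alpha[:, None] * h[nbrs]).sum(axis=0)
    return out

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 3))                  # 4 hosts, 3 features each
adj = np.array([[0, 1, 1, 0],                # symmetric toy topology
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]])
w = rng.normal(size=6)                       # scoring vector (2 * features)
z = attention_aggregate(h, adj, w)
assert z.shape == h.shape
```

In a botnet-detection setting, hosts would be nodes, flows would be edges, and the aggregated features would feed a classifier; that downstream stage is omitted here.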