Many machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of conventional black-box models, the tendency is to use explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL) models are becoming more popular than typical Centralized Learning (CL) models due to the distributed nature of networks and security and privacy concerns. It is therefore timely to research how explainability can be obtained, using Explainable AI (XAI), for different ML models. This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two datasets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI model. We demonstrate that the FL explanations differ from the CL explanations as the client anomaly percentage varies.
Authored by Yasintha Rumesh, Thulitha Senevirathna, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
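A minimal sketch of the core step this abstract describes, explaining a deep-network anomaly detector with SHAP, assuming preprocessed tabular features; the network shape, feature count, and placeholder data are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: explaining a dense-network anomaly detector with SHAP.
# Placeholder data stands in for preprocessed UNSW-NB15/NSL-KDD features.
import numpy as np
import shap
import tensorflow as tf

X_train = np.random.rand(500, 42).astype("float32")  # placeholder features
y_train = np.random.randint(0, 2, 500)               # 0 = normal, 1 = attack
X_test = np.random.rand(20, 42).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(42,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=5, verbose=0)

# KernelExplainer is model-agnostic: it treats the network as a black box.
background = shap.sample(X_train, 50)  # small background set for speed
explainer = shap.KernelExplainer(lambda x: model.predict(x, verbose=0), background)
shap_values = explainer.shap_values(X_test[:5])  # per-feature attributions
```

In an FL setting, one would run the same explainer per client on its local data, which is what makes client-level differences in the explanations observable.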
In the realm of agriculture, where crop health is integral to global food security, our focus is on the early detection of crop diseases. Leveraging Convolutional Neural Networks (CNNs) on a diverse dataset of crop images, our study focuses on the development, training, and optimization of these networks to achieve accurate and timely disease classification. The first segment demonstrates the efficacy of the CNN architecture and optimization strategy, showcasing the potential of deep learning models in automating the identification process. The synergy of robust disease detection and interpretability through Explainable Artificial Intelligence (XAI) presented in this work marks a significant stride toward bridging the gap between advanced technology and precision agriculture. By employing visualization, the research seeks to unravel the decision-making processes of our models. The XAI visualization method proves notably superior in terms of accuracy, achieving 89.75\% and surpassing both the heat-map model and the LIME explanation method, which suggests better identification of the disease. This not only enhances the transparency and trustworthiness of the predictions but also provides invaluable insights for end-users, allowing them to comprehend the diagnostic features considered by the complex algorithm.
Authored by Priyadarshini Patil, Sneha Pamali, Shreya Devagiri, A Sushma, Jyothi Mirje
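A hedged sketch of one of the explanation methods the abstract compares, LIME applied to a CNN image classifier; `cnn_model` and `leaf_image` (assumed to be a uint8 HxWx3 array) are illustrative names, not artifacts from the paper.

```python
# Sketch: LIME explanation for a CNN leaf-disease classifier.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # LIME passes a batch of perturbed images; return class probabilities.
    return cnn_model.predict(np.asarray(images), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    leaf_image,          # HxWx3 uint8 crop image (assumed loaded elsewhere)
    classifier_fn,
    top_labels=3,
    num_samples=1000,    # number of perturbed samples around the image
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)  # superpixels driving the prediction
```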
At present, the adoption of technological solutions based on artificial intelligence (AI) is accelerating across various sectors of the economy and social relations worldwide. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identifying the vulnerabilities, threats, and risks of AI technologies requires considering each technology separately, or in combination where they are used jointly in applied solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors, and public services (including marketplaces) have received the most attention. Due to their high importance and impact on most AI solutions, this paper focuses on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. Based on the role of artificial intelligence (AI) in computer network security management, it establishes a network security system that combines AI with traditional technologies. Furthermore, by incorporating an attention mechanism into a Graph Neural Network (GNN) and applying it to botnet detection, a more robust and comprehensive network security system is developed to improve detection and response capabilities for network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with false positive and false negative rates of 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98\%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNNs and attention mechanisms on network security performance.
Authored by Fei Xia, Zhihao Zhou
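A minimal sketch of the kind of attention-based GNN the abstract describes, using a two-layer graph attention network for benign-versus-botnet node classification; layer sizes, feature dimensions, and the placeholder graph are assumptions, not the paper's architecture.

```python
# Sketch: graph attention network for flagging botnet hosts.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class BotnetGAT(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden, heads=heads)
        self.gat2 = GATConv(hidden * heads, 2, heads=1)  # benign vs. botnet

    def forward(self, x, edge_index):
        h = F.elu(self.gat1(x, edge_index))
        return self.gat2(h, edge_index)

# x: [num_hosts, in_dim] features (e.g., flow statistics);
# edge_index: [2, num_edges] host-to-host connections.
model = BotnetGAT(in_dim=32)
logits = model(torch.randn(100, 32), torch.randint(0, 100, (2, 400)))
loss = F.cross_entropy(logits, torch.randint(0, 2, (100,)))  # placeholder labels
```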
In this work, we present a comprehensive survey of applications of the most recent attention-based transformer architecture in information security. Our review reveals three primary areas of application: intrusion detection, anomaly detection, and malware detection. We present an overview of attention-based mechanisms and their application in each cybersecurity use case, and discuss open problems and future trends in AI-enabled information security.
Authored by M. Vubangsi, Sarumi Abidemi, Olukayode Akanni, Auwalu Mubarak, Fadi Al-Turjman
The proposed work presents Intrusion Detection Systems (IDS) powered by artificial intelligence as a novel method for enhancing residential security. The overarching goal of the study is to design, develop, and evaluate a system that employs AI techniques for real-time detection and prevention of unauthorized access, in response to the rising demand for such measures. Using anomaly detection, neural networks, and decision trees, machine learning algorithms that all benefit from incorporating data from multiple sensors, the proposed system guarantees the accurate identification of suspicious activities. The work examines large datasets and compares the system to conventional security measures to demonstrate its superior performance and prospective impact on reducing home intrusions. It contributes to the field of residential security by proposing a dependable, adaptable, and intelligent method for protecting homes against today's ever-changing infiltration threats.
Authored by Jeneetha J, B.Vishnu Prabha, B. Yasotha, Jaisudha J, C. Senthilkumar, V.Samuthira Pandi
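As a rough illustration of the combination the abstract names, anomaly detection plus decision trees over fused sensor data, here is a hedged sketch; the sensors, features, and thresholds are invented for illustration and do not reflect the paper's system.

```python
# Sketch: combining an unsupervised anomaly detector with a decision tree
# over fused multi-sensor features (motion, door, camera, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.tree import DecisionTreeClassifier

X_train = np.random.rand(1000, 6)          # each row fuses several sensor readings
y_train = np.random.randint(0, 2, 1000)    # 1 = labelled intrusion event

iso = IsolationForest(contamination=0.05, random_state=0).fit(X_train)
tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

def is_suspicious(reading):
    # Flag when either the unsupervised or the supervised detector fires.
    anomalous = iso.predict(reading.reshape(1, -1))[0] == -1
    predicted_intrusion = tree.predict(reading.reshape(1, -1))[0] == 1
    return anomalous or predicted_intrusion
```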
The Internet of Things (IoT) has changed the way we gather medical data in real time, but it also raises concerns about keeping this data safe and private. Ensuring a secure system for IoT is crucial. At the same time, an emerging technology can help the IoT industry greatly: Blockchain. It keeps data secure, transparent, and unchangeable, acting like a ledger for tracking many connected devices and making them work together. To make IoT even safer, facial recognition with Convolutional Neural Networks (CNN) can be used. This paper introduces a healthcare system that combines Blockchain and artificial intelligence in IoT. An implementation of a Raspberry Pi E-Health system is presented and evaluated in terms of function cost, and the system's functions are shown to be low-cost.
Authored by Amina Kessentini, Ibtissem Wali, Mayssa Jarray, Nouri Masmoudi
Deep neural networks have been widely applied in various critical domains. However, they are vulnerable to the threat of adversarial examples. It is challenging to make deep neural networks inherently robust to adversarial examples, while adversarial example detection offers advantages such as not affecting model classification accuracy. This paper introduces common adversarial attack methods and provides an explanation of adversarial example detection. Recent advances in adversarial example detection methods are categorized into two major classes: statistical methods and adversarial detection networks. The evolutionary relationship among different detection methods is discussed. Finally, the current research status in this field is summarized, and potential future directions are highlighted.
Authored by Chongyang Zhao, Hu Li, Dongxia Wang, Ruiqi Liu
With the coming 6G era, spiking neural networks (SNNs) can be powerful processing tools in various areas due to their strong artificial intelligence (AI) processing capabilities, such as biometric recognition, AI robotics, autonomous driving, and healthcare. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise; this can lead to serious consequences such as face recognition anomalies, autonomous driving going out of control, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks with adversarial samples can we defend against them. Most existing adversarial attacks cause a severe accuracy degradation to trained SNNs, but the critical issue is that they only generate adversarial samples by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters, even by human eyes; attack performance and speed can also be improved further. Hence, the Spike Probabilistic Attack (SPA) is presented in this paper, aiming to generate adversarial samples with smaller perturbations, greater model accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed for small perturbations while keeping the attack success rate, which speeds up convergence by adjusting parameters. Both white-box and black-box settings are used to evaluate the merits of SPA. Experimental results show the model's accuracy under white-box attack decreases by 9.25\%-31.15\%, better than other attacks, and the average success rate is 74.87\% under the black-box setting. The experimental results indicate that SPA has better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
Authored by Xuanwei Lin, Chen Dong, Ximeng Liu, Yuanyuan Zhang
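Poisson rate coding, the spike-generation step the SPA attack builds on, can be sketched in a few lines: input intensities in [0, 1] become per-time-step firing probabilities. The time-step count here is an arbitrary choice.

```python
# Sketch of Poisson rate coding: intensities become firing probabilities.
import torch

def poisson_encode(x, time_steps=25):
    # x: tensor of intensities in [0, 1]; returns [time_steps, *x.shape] spikes.
    return (torch.rand(time_steps, *x.shape) < x).float()

image = torch.rand(28, 28)           # placeholder input
spike_train = poisson_encode(image)  # binary spike train per time step
```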
In various fields, such as medical engineering or aerospace engineering, it is difficult to apply the decisions of a machine learning (ML) or deep learning (DL) model that does not account for the vast range of human limitations, which can lead to errors and incidents. Explainable Artificial Intelligence (XAI) serves to explain the results of AI software (ML or DL), still considered black boxes, so that their decisions can be understood and adopted. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of an aircraft turbofan engine. Shapley's method was then applied to explain the DL results. This made it possible to determine the participation rate of each parameter in the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
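A hedged sketch of the explanation step described above: ranking turbofan parameters by mean absolute SHAP value to find the most decisive ones for the predicted RUL. `rul_model`, `X_background`, `X_test`, and `sensor_names` are assumed to come from an existing training pipeline and are not from the paper.

```python
# Sketch: ranking engine parameters by mean |SHAP| for a RUL regressor.
import numpy as np
import shap

explainer = shap.DeepExplainer(rul_model, X_background)  # trained Keras DNN
sv = np.asarray(explainer.shap_values(X_test))
sv = sv.reshape(-1, X_test.shape[1])        # collapse output/sample axes

importance = np.abs(sv).mean(axis=0)        # mean |SHAP| per feature
ranking = sorted(zip(sensor_names, importance), key=lambda p: -p[1])
for name, score in ranking[:10]:
    print(f"{name}: {score:.4f}")           # most decisive parameters first
```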
Alzheimer's disease (AD) is a disorder that affects the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To overcome this delay, this work proposes an approach that uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer's patients to classify the stages of AD, along with the Explainable Artificial Intelligence (XAI) technique Gradient Class Activation Map (Grad-CAM) to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
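Grad-CAM, the XAI technique named in the abstract, can be sketched with PyTorch hooks; the model and target convolutional layer are assumptions, not the paper's network.

```python
# Minimal Grad-CAM sketch: weight a conv layer's activations by the
# spatially averaged gradients of the target class score.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))

    logits = model(x)                 # x: [1, C, H, W] MRI slice
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()          # heatmap in [0, 1]
```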
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such wrong predictions can be provoked by so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses fall into two categories, pro-active defense and passive defense, both unsuitable for quick rectification: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, at the cost of reducing accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither defense mechanism suits the setup of autonomous driving: when an input has to be classified, we can neither discard the input nor take the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
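For context, a minimal sketch of a gradient-sign attack of the kind such defenses must handle (FGSM); this is a standard attack, not the specific attacks or the XAI-based correction evaluated in the paper.

```python
# Sketch of FGSM: perturb the input along the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```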
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare systems and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study uses deep learning to discuss explainable artificial intelligence (XAI) in medical image analysis. Its primary goal is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is typically categorized by explanation method and by technical method. The explanation methods are sub-categorized into three types: text-based, visual-based, and example-based; the technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural-network-based interpretability method for medical image analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This work proposed a unified approach, named LISA, to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, including Local Interpretable Model-Agnostic Explanations (LIME), integrated gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in the black-box model's decisions so that they can be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
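One of the techniques LISA unifies, integrated gradients, can be sketched directly: average the gradients along a straight path from a baseline to the input. The model and tensor shapes here are assumptions, not the paper's setup.

```python
# Sketch of integrated gradients via a Riemann-sum path approximation.
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # x, baseline: [1, C, H, W]; target: class index to attribute.
    alphas = torch.linspace(0, 1, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)      # interpolated inputs
    path.requires_grad_(True)
    model(path)[:, target].sum().backward()
    avg_grad = path.grad.mean(dim=0, keepdim=True) # average gradient on the path
    return (x - baseline) * avg_grad               # per-pixel attribution
```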
Cloud computing (CC) is vulnerable to existing information technology attacks, since it extends and utilizes information technology infrastructure, applications, and typical operating systems. In this manuscript, an intrusion detection (ID) system based on an Enhanced Capsule Generative Adversarial Network (ECGAN) with a blockchain-based Proof-of-Authority consensus procedure is proposed for enhancing cyber security in CC. The data are collected from the NSL-KDD benchmark dataset. The input data are fed to the proposed Z-score normalization process to eliminate redundancy, including missing values. The pre-processed output is fed to feature selection, where the optimum features are extracted using univariate ensemble feature selection (UEFS). Based on the optimum features, the data are classified as normal or anomalous using the enhanced capsule generative adversarial network. Subsequently, a blockchain-based Proof-of-Authority (POA) consensus process is proposed for improving the cyber security of the data in the cloud computing environment. The proposed ECGAN-BC-POA-IDS method is implemented in Python and its performance metrics are calculated. The proposed approach attains 33.7\%, 25.7\%, and 21.4\% higher accuracy, 24.6\%, 35.6\%, and 38.9\% lower attack detection time, and 23.8\%, 18.9\%, and 15.78\% lower delay than existing methods, namely an Artificial Neural Network (ANN) with a blockchain framework, an integrated architecture with Byzantine Fault Tolerance consensus, and a Blockchain Random Neural Network (RNN-BC), respectively.
Authored by Ravi Kanth, Prem Jacob
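A hedged sketch of the preprocessing front end described above, z-score normalization followed by univariate feature selection on NSL-KDD-style tabular features; the ensemble of univariate tests (UEFS) is simplified here to a single ANOVA F-test, and the data are placeholders.

```python
# Sketch: z-score normalization + univariate feature selection.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif

X = np.random.rand(1000, 41)         # placeholder NSL-KDD feature matrix
y = np.random.randint(0, 2, 1000)    # 0 = normal, 1 = anomalous

X_scaled = StandardScaler().fit_transform(X)        # z = (x - mean) / std
selector = SelectKBest(score_func=f_classif, k=20)  # keep top-20 features
X_selected = selector.fit_transform(X_scaled, y)
kept = selector.get_support(indices=True)           # indices of chosen columns
```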
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and to have low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in image recognition domains, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that have higher quality not only when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted, including under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
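The error-driven idea can be illustrated with a selective loss that switches terms based on whether each example is currently predicted correctly. This is a stand-in interpretation using predictive entropy as the uncertainty measure, not the exact EUAT loss functions.

```python
# Sketch: error-driven selective loss (entropy as stand-in uncertainty).
import torch
import torch.nn.functional as F

def euat_style_loss(logits, targets, lam=0.1):
    ce = F.cross_entropy(logits, targets, reduction="none")
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    correct = (logits.argmax(dim=1) == targets).float()
    # Correct samples: penalize uncertainty. Mispredicted: reward it.
    return (ce + lam * (correct * entropy - (1 - correct) * entropy)).mean()
```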
Anomaly detection is a challenge well-suited to machine learning, and in the context of information security, the benefits of unsupervised solutions show significant promise. Recent attention to Graph Neural Networks (GNNs) has provided an innovative approach to learning from attributed graphs. Using a GNN encoder-decoder architecture, anomalous edges between nodes can be detected during the reconstruction phase. The aim of this research is to determine whether an unsupervised GNN model can detect anomalous network connections in a static, attributed network. Network logs were collected from four corporate networks and one artificial network using endpoint monitoring tools. A GNN-based anomaly detection system was designed and employed to score and rank anomalous connections between hosts. The model was validated in four realistic experimental scenarios across the four large corporate networks and the smaller artificial network environment. Although quantitative metrics were affected by factors including the scale of the network, qualitative assessments indicated that anomalies from all scenarios were detected. The false positives across each scenario indicate that this model in its current form is useful for initial triage, though it would require further improvement to become a performant detector. This research serves as a promising step toward advancing this methodology for detecting anomalous network connections. Future work to improve results includes narrowing the scope of detection to specific threat types and a further focus on feature engineering and selection.
Authored by Charlie Grimshaw, Brian Lachine, Taylor Perkins, Emilie Coote
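A minimal sketch of the encoder-decoder scoring idea: embed hosts with a GCN encoder, reconstruct edges with an inner-product decoder, and rank an edge as more anomalous the lower its reconstruction probability. Sizes and features are assumptions, not the paper's model.

```python
# Sketch: GNN autoencoder that scores edges by reconstruction probability.
import torch
from torch_geometric.nn import GCNConv

class EdgeAnomalyAE(torch.nn.Module):
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.enc1 = GCNConv(in_dim, 64)
        self.enc2 = GCNConv(64, emb_dim)

    def forward(self, x, edge_index):
        z = self.enc2(torch.relu(self.enc1(x, edge_index)), edge_index)
        src, dst = edge_index
        # Inner-product decoder: probability that each observed edge is "normal".
        return torch.sigmoid((z[src] * z[dst]).sum(dim=1))

model = EdgeAnomalyAE(in_dim=16)
x = torch.randn(50, 16)                      # host features from logs
edge_index = torch.randint(0, 50, (2, 200))  # observed connections
anomaly_score = 1 - model(x, edge_index)     # rank edges, highest first
```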
IBMD (Intelligent Behavior-Based Malware Detection) aims to detect and mitigate malicious activities in cloud computing environments by analyzing the behavior of cloud resources, such as virtual machines, containers, and applications. The system uses machine learning methods such as deep learning and artificial neural networks to analyze the behavior of cloud resources and detect anomalies that may indicate malicious activity. It can also monitor and accumulate data from various sources, such as network traffic and system logs, to provide a comprehensive view of the behavior of cloud resources. IBMD is designed to operate in a cloud computing environment, taking advantage of the scalability and flexibility of the cloud to detect malware and respond to security incidents. The system can also be integrated with existing security tools and services, such as firewalls and intrusion detection systems, to provide a comprehensive security solution for cloud computing environments.
Authored by Jibu Samuel, Mahima Jacob, Melvin Roy, Sayoojya M, Anu Joy
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-Enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we also need to consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in such environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications, with three deep neural networks, under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks can lead to a decrease in accuracy from 94.93\% to 85.98\% with IID data and from 94.18\% to 30.04\% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
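One of the attack classes the article studies, label-flipping data poisoning during local updates, can be sketched as follows; the client structure, label semantics, and flip fraction are illustrative assumptions, not the paper's exact threat model.

```python
# Sketch: label-flipping poisoning applied by compromised FL clients.
import numpy as np

def poison_labels(y, flip_from=1, flip_to=0, fraction=0.8, rng=None):
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    idx = np.where(y == flip_from)[0]
    chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    y_poisoned[chosen] = flip_to     # attack samples relabelled as benign
    return y_poisoned

# In a simulated FL round, apply this only on malicious clients:
# for client in clients:
#     y_local = poison_labels(y_local) if client.is_malicious else y_local
#     client.train(X_local, y_local)
```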
This article presents a new concept of fully analogue adaptive filters. The adaptation is based on fully analogue neural networks. With the use of a filter bank, it can be used for high frequency and real-time adaptation. The properties of this concept are verified using electronic circuit simulations.
Authored by Filip Paulu, Jiri Hospodka
An Intrusion Detection System (IDS) plays a role in network intrusion detection through network data analysis, and high detection accuracy, precision, and recall are required to detect intrusions. Various techniques, such as expert systems, data mining, and state transition analysis, are used for network data analysis. This paper compares the detection performance of two IDS methods that use data mining. The first technique is a support vector machine (SVM), a machine learning algorithm; the second is a deep neural network (DNN), an artificial neural network model. The accuracy, precision, and recall were calculated and compared using NSL-KDD training and validation data, which is widely used in intrusion detection research. The DNN shows slightly higher accuracy than the SVM model. Since the risk of recognizing an actual intrusion as normal data is much greater than the risk of considering normal data as an intrusion, the DNN proves to be much more effective in intrusion detection than the SVM.
Authored by N Patel, B Mehtre, Rajeev Wankar
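A hedged sketch of the comparison protocol: train an SVM and a small neural network on the same preprocessed split and report accuracy, precision, and recall. The data and hyperparameters here are placeholders, not the paper's NSL-KDD setup.

```python
# Sketch: SVM vs. neural-network IDS comparison on one split.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

X_train = np.random.rand(2000, 41); y_train = np.random.randint(0, 2, 2000)
X_val = np.random.rand(500, 41);    y_val = np.random.randint(0, 2, 500)

models = {
    "SVM": SVC(kernel="rbf"),
    "DNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
}
for name, clf in models.items():
    pred = clf.fit(X_train, y_train).predict(X_val)
    print(name,
          f"acc={accuracy_score(y_val, pred):.3f}",
          f"prec={precision_score(y_val, pred):.3f}",
          f"rec={recall_score(y_val, pred):.3f}")
```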