This paper presents a vulnerability detection scheme for small unmanned aerial vehicle (UAV) systems, aiming to enhance their security resilience. It begins with a comprehensive analysis of UAV system composition, operational principles, and the multifaceted security threats they face, ranging from software vulnerabilities in flight control systems to hardware weaknesses, communication link insecurities, and ground station management vulnerabilities. Subsequently, an automated vulnerability detection framework is designed, comprising three tiers: information gathering, interaction analysis, and report presentation, integrated with fuzz testing techniques for thorough examination of UAV control systems. Experimental outcomes validate the efficacy of the proposed scheme by revealing weak-password issues in the target UAV's services and its susceptibility to abnormal inputs. The study not only confirms the practical utility of the approach but also contributes valuable insights and methodologies to UAV security, paving the way for future advancements in AI-integrated smart gray-box fuzz testing technologies.
Authored by He Jun, Guo Zihan, Ni Lin, Zhang Shuai
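The mutation-based fuzz testing the framework integrates can be illustrated with a minimal sketch. Everything below is illustrative: `toy_command_parser` is a hypothetical stand-in for a UAV control service (not the paper's actual target), and the mutator covers only byte flips, insertions, and deletions.

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Randomly flip, insert, or delete bytes of a seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 8)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = rng.randrange(len(data))
            data[i] ^= 1 << rng.randrange(8)  # flip one bit
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and len(data) > 1:
            del data[rng.randrange(len(data))]
    return bytes(data)

def fuzz(target, seeds, iterations=500, seed=0):
    """Feed mutated seeds to `target`; keep every input that raised."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(rng.choice(seeds), rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

def toy_command_parser(msg: bytes) -> None:
    """Hypothetical control service that assumes well-formed input."""
    header, sep, value = msg.partition(b":")
    if header != b"CMD" or not sep:
        raise ValueError("bad header")
    int(value)  # raises on non-numeric payloads
```

Calling `fuzz(toy_command_parser, [b"CMD:42"])` returns the mutated inputs the parser failed to handle gracefully, which is exactly the kind of abnormal-input susceptibility the experiment above reports.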
The growth of the Internet of Things (IoT) is reshaping and transforming everyday life. The number and diversity of IoT devices have increased rapidly, enabling the vision of a smarter environment and opening the door to further automation, accompanied by the generation and collection of enormous amounts of data. The automation and ongoing proliferation of personal and professional data in the IoT have resulted in countless cyber-attacks enabled by the growing security vulnerabilities of IoT devices. Therefore, it is crucial to detect and patch vulnerabilities before attacks happen in order to secure IoT environments. One of the most promising approaches for combating cybersecurity vulnerabilities and ensuring security is the use of artificial intelligence (AI). In this paper, we provide a review in which we classify, map, and summarize the available literature on AI techniques used to recognize and reduce cybersecurity software vulnerabilities in the IoT. We present a thorough analysis of the major AI trends in cybersecurity, as well as cutting-edge solutions.
Authored by Heba Khater, Mohamad Khayat, Saed Alrabaee, Mohamed Serhani, Ezedin Barka, Farag Sallabi
The increasing number of security vulnerabilities has become an important problem that needs to be solved urgently in the field of software security, which means that current vulnerability mining technology still has great potential for development. However, most existing AI-based vulnerability detection methods focus on designing different AI models to improve the accuracy of vulnerability detection, ignoring the fundamental problems of data-driven AI-based algorithms: first, there is a lack of sufficient high-quality vulnerability data; second, there is no unified, standardized construction method to support the standardized evaluation of different vulnerability detection models. These issues greatly limit security personnel's in-depth research on vulnerabilities. In this survey, we review the current literature on building high-quality vulnerability datasets, aiming to investigate how state-of-the-art research has leveraged data mining and data processing techniques to generate vulnerability datasets and facilitate vulnerability discovery. We also identify the challenges of this new field and share our views on potential research directions.
Authored by Yuhao Lin, Ying Li, MianXue Gu, Hongyu Sun, Qiuling Yue, Jinglu Hu, Chunjie Cao, Yuqing Zhang
In various fields, such as medical engineering or aerospace engineering, it is difficult to apply the decisions of a machine learning (ML) or deep learning (DL) model that does not account for the many human limitations that can lead to errors and incidents. Explainable Artificial Intelligence (XAI) aims to explain the results of artificial intelligence software (ML or DL), still considered black boxes, so that their decisions can be understood and adopted. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of a turbofan engine of an aircraft. Shapley's method was then applied to explain the DL results. This made it possible to determine the participation rate of each parameter in the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
Alzheimer's disease (AD) is a disorder that affects the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To overcome this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, Gradient-weighted Class Activation Mapping (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
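Grad-CAM, as used above, weights each convolutional feature map by the global-average-pooled gradient of the class score with respect to that map, sums the weighted maps, and rectifies the result so only regions with positive influence survive. A minimal pure-Python sketch of that computation, on toy 2x2 feature maps rather than actual MRI activations:

```python
def grad_cam(activations, gradients):
    """activations, gradients: lists of K feature maps, each a 2-D list
    of floats. Returns the ReLU-rectified weighted sum of the maps,
    with channel weights from globally average-pooled gradients."""
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pool the gradient of the class score
    # with respect to each feature map
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    heatmap = [[0.0] * w for _ in range(h)]
    for a_k, fmap in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += a_k * fmap[i][j]
    # ReLU: keep only locations with positive influence on the class
    return [[max(0.0, v) for v in row] for row in heatmap]
```

In practice the activations and gradients come from the last convolutional layer of the trained CNN, and the heatmap is upsampled onto the input scan to highlight the implicated brain regions.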
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. This can be provoked by so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectifications: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, at the cost of reduced accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setting of autonomous driving: when an input has to be classified, we can neither discard the input nor afford the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
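To make concrete how an attack perturbs an input only slightly yet flips a classification, here is a sketch of the classic Fast Gradient Sign Method (FGSM) on a linear scorer, where the loss gradient is available in closed form. This is a standard textbook attack, not the specific attacks or XAI-based corrections evaluated in the paper:

```python
def sign(v: float) -> int:
    """Return -1, 0, or +1 according to the sign of v."""
    return (v > 0) - (v < 0)

def fgsm_linear(x, w, b, y, eps):
    """FGSM on a linear scorer f(x) = w.x + b with label y in {-1, +1}.
    The hinge-style loss gradient w.r.t. x is -y * w, so the attack
    steps each coordinate by eps in the direction sign(-y * w)."""
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, w)]

def score(x, w, b):
    """Linear decision score; the predicted label is its sign."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b
```

With `w = [1.0, -2.0]`, the correctly classified point `[2.0, 0.0]` (score 2) is pushed by `fgsm_linear(..., y=1, eps=1.0)` to a point with negative score, i.e. a misclassification, even though each coordinate moved by at most `eps`.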
Explainable AI (XAI) is a topic of intense activity in the research community today. However, for AI models deployed in the critical infrastructure of communications networks, explainability alone is not enough to earn the trust of network operations teams comprising human experts with many decades of collective experience. In the present work we discuss some use cases in communications networks and state some of the additional properties, including accountability, that XAI models would have to satisfy before they can be widely deployed. In particular, we advocate for a human-in-the-loop approach to train and validate XAI models. Additionally, we discuss the use cases of XAI models around improving data preprocessing and data augmentation techniques, and refining data labeling rules for producing consistently labeled network datasets.
Authored by Sayandev Mukherjee, Jason Rupe, Jingjie Zhu
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare systems and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. Deep learning is used in this study to discuss explainable artificial intelligence (XAI) in medical image analysis. The primary goal of this paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is typically categorized by explanation method and technical method. The explanation methods are sub-categorized into three types: text-based, visual-based, and example-based; the technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical imaging analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This work proposes a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, LISA, incorporates multiple techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), integrated gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases the confidence needed for the black-box model's decisions to be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of XAI techniques and the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
Anomaly detection and its explanation are important in many research areas, such as intrusion detection, fraud detection, and unknown-attack detection in network traffic and logs. It is challenging to identify the cause or explanation of why one instance is an anomaly and another is not, due to the unbounded and unsupervised nature of the problem. Answering this question becomes possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and workings of complex models such as deep learning (DL). This paper aims to detect and explain network anomalies with the kernelSHAP XAI method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision, and F-score. The experiment is conducted with the latest CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
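KernelSHAP approximates Shapley values by sampling feature coalitions; for a toy model with few features, the values it targets can be computed exactly by enumerating all coalitions, which makes for a useful mental model. The sketch below is illustrative only (exponential in the number of features; real SHAP implementations rely on the kernel approximation), with "absent" features replaced by baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for `model` at point `x`; features outside
    a coalition are replaced by the corresponding `baseline` values."""
    n = len(x)
    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Shapley weight of a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi
```

For an additive model the Shapley value of each feature reduces to its individual contribution, which gives a quick sanity check on any implementation.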
This research emphasizes its main contribution in the context of applying black-box models in knowledge-based systems. It elaborates on the fundamental limitations of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data in Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including limited data size due to time constraints on expert data provision and the simplicity of the important features, stemming from limitations on expert authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
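LIME, one of the local explanation algorithms compared above, fits a weighted linear surrogate to the black-box model around the instance being explained: perturb the input, query the model, and weight nearby perturbations more heavily. A one-feature sketch of the idea (illustrative only; the real LIME library handles multiple features, categorical data, and feature selection):

```python
import math
import random

def lime_1d(predict, x, n_samples=200, width=1.0, seed=0):
    """Fit a locally weighted linear surrogate to `predict` around x.
    Returns (intercept, slope); the slope is the local attribution."""
    rng = random.Random(seed)
    zs = [x + rng.gauss(0.0, width) for _ in range(n_samples)]
    ys = [predict(z) for z in zs]
    # exponential kernel: perturbations near x count more
    ws = [math.exp(-((z - x) ** 2) / width ** 2) for z in zs]
    sw = sum(ws)
    zm = sum(w * z for w, z in zip(ws, zs)) / sw
    ym = sum(w * y for w, y in zip(ws, ys)) / sw
    # closed-form weighted least squares for one feature
    cov = sum(w * (z - zm) * (y - ym) for w, z, y in zip(ws, zs, ys))
    var = sum(w * (z - zm) ** 2 for w, z in zip(ws, zs))
    slope = cov / var
    return ym - slope * zm, slope
```

On a model that is already linear, the surrogate recovers the model exactly, which is the expected degenerate case; the interesting behavior arises when `predict` is a nonlinear black box and the slope summarizes its local behavior.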
This work examines whether completion of a programming guide is related to academic success in an introductory programming course at Andrés Bello University (Chile). We investigated whether the guide, which consists of 52 exercises that are not mandatory to solve, helps predict first-year students' failure of the first test of this course. Furthermore, the use of the unified SHAP and XAI framework is proposed to analyze and understand how programming guides influence student performance. The study includes a literature review of previous related studies, a descriptive analysis of the collected data, and a discussion of the practical and theoretical implications of the study. The results obtained will be useful for improving student support strategies and decision-making related to the use of guides as an educational tool.
Authored by Gaston Sepulveda, Billy Peralta, Marcos Levano, Pablo Schwarzenberg, Orietta Nicolis
Forest fire is a problem that cannot be overlooked, as it occurs every year and covers many areas. GISTDA has recognized this problem and created a model to detect burn scars from satellite imagery. However, it is effective only to some extent, with additional manual correction often being required. An automated system enriched with learning capacity is the preferred tool to support this decision-making process. Despite the improved predictive performance, the underlying model may not be transparent or explainable to operators. Reasoning and annotation of the results are essential for this problem, for which the XAI approach is appropriate. In this work, we use the SHAP framework to describe the predictive variables of complex neural models such as DNNs. This can be used to optimize the model, providing overall accuracy up to 99.85% for the present work, and to show stakeholders the reasons and contributing factors involved, such as the various indices based on the reflectance of particular wavelengths (e.g., NIR and SWIR).
Authored by Tonkla Maneerat
Despite intensive research, the survival rate for pancreatic cancer, a fatal and incurable illness, has not dramatically improved in recent years. Deep learning systems have shown superhuman ability in a considerable number of activities, and recent developments in Artificial Intelligence (AI) have led to its widespread use in predictive analytics of pancreatic cancer. However, the improvement in performance is the result of increased model complexity, which turns these systems into “black box” methods and creates uncertainty about how they function and, ultimately, how they make judgements. This ambiguity has made it difficult for deep learning algorithms to be accepted in important fields like healthcare, where their benefit may be enormous. As a result, there has been a significant resurgence in recent years of scholarly interest in the topic of Explainable Artificial Intelligence (XAI), which is concerned with the creation of novel techniques for interpreting and explaining deep learning models. In this study, we utilize Computed Tomography (CT) images and clinical data to predict and analyze pancreatic cancer and the survival rate, respectively. Since pancreatic tumors are too small to identify easily, the region marking through XAI will assist medical professionals in identifying the appropriate region and determining the presence of cancer. Various features are taken into consideration for survival prediction, and the most prominent ones can be identified with the help of XAI, which in turn aids medical professionals in making better decisions. This study mainly focuses on the XAI strategy for deep and machine learning models rather than on prediction and survival methodology.
Authored by Srinidhi B, M Bhargavi
Explainable AI (XAI) techniques are used for understanding the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable XAI dashboards ready to be used by both data scientists and general users, with no code. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project for explaining the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
Recently, deep learning (DL) models have made remarkable achievements in image processing. To increase the accuracy of DL models, more parameters are used; the resulting models are black boxes whose internal structure cannot be understood. This is why DL models cannot be applied in fields where stability and reliability are important, despite their high performance. In this paper, we investigate various Explainable Artificial Intelligence (XAI) techniques to address this problem. We also investigate what approaches exist to make multi-modal deep learning models transparent.
Authored by Haekang Song, Sungho Kim
In this paper, we investigate the use of Explainable Artificial Intelligence (XAI) methods for the interpretation of two Convolutional Neural Network (CNN) classifiers in the field of remote sensing (RS). Specifically, the SegNet and Unet architectures for RS building information extraction and segmentation are evaluated using a comprehensive array of primary- and layer-attribution XAI methods. The attribution methods are quantitatively evaluated using the sensitivity metric. Based on the visualization of the different XAI methods, Deconvolution and GradCAM show reliable results in many of the study areas. Moreover, these methods are able to accurately interpret both Unet's and SegNet's decisions and managed to analyze and reveal the internal mechanisms of both models (confirmed by the low sensitivity scores). Overall, no single method stood out as the best one.
Authored by Loghman Moradi, Bahareh Kalantar, Erfan Zaryabi, Alfian Halin, Naonori Ueda
Machine learning models have become increasingly complex, making it difficult to understand how they make predictions. Explainable AI (XAI) techniques have been developed to enhance model interpretability, thereby improving model transparency, trust, and accountability. In this paper, we present a comparative analysis of several XAI techniques to enhance the interpretability of machine learning models. We evaluate the performance of these techniques on a dataset commonly used for regression or classification tasks. The XAI techniques include SHAP, LIME, PDP, and GAM. We compare the effectiveness of these techniques in terms of their ability to explain model predictions and identify the most important features in the dataset. Our results indicate that XAI techniques significantly improve model interpretability, with SHAP and LIME being the most effective in identifying important features in the dataset. Our study provides insights into the strengths and limitations of different XAI techniques and their implications for the development and deployment of machine learning models. We conclude that XAI techniques have the potential to significantly enhance model interpretability and promote trust and accountability in the use of machine learning models. The paper emphasizes the importance of interpretability in medical applications of machine learning and highlights the significance of XAI techniques in developing accurate and reliable models for medical applications.
Authored by Swathi Y, Manoj Challa
This tertiary systematic literature review examines 29 systematic literature reviews and surveys in Explainable Artificial Intelligence (XAI) to uncover trends, limitations, and future directions. The study explores current explanation techniques, providing insights for researchers, practitioners, and policymakers interested in enhancing AI transparency. Notably, the increasing number of systematic literature reviews (SLRs) in XAI publications indicates a growing interest in the field. The review offers an annotated catalogue for human-computer interaction-focused XAI stakeholders, emphasising practitioner guidelines. Automated searches across ACM, IEEE, and Science Direct databases identified SLRs published between 2019 and May 2023, covering diverse application domains and topics. While adhering to methodological guidelines, the SLRs often lack primary study quality assessments. The review highlights ongoing challenges and future opportunities related to XAI evaluation standardisation, its impact on users, and interdisciplinary research on ethics and GDPR aspects. The 29 SLRs, analysing over two thousand papers, include five directly relevant to practitioners. Additionally, references from the SLRs were analysed to compile a list of frequently cited papers, serving as recommended foundational readings for newcomers to the field.
Authored by Saša Brdnik, Boštjan Šumak
As a common network attack method, the false data injection attack (FDIA) can often cause serious consequences for the power system due to its strong concealment. Attackers utilize grid topology information to carefully construct covert attack vectors, thus bypassing the traditional bad data detection (BDD) mechanism to maliciously tamper with measurements, which is more destructive and threatening to the power system. To address the difficulty of detecting such attacks effectively, a detection method based on an adaptive interpolation-adaptive inhibition extended Kalman filter (AI-AIEKF) is proposed in this paper. Through an adaptive interpolation strategy and an exponential weight function, the AI-AIEKF algorithm improves the estimation accuracy and enhances the robustness of the EKF algorithm. Combined with weighted least squares (WLS), the two estimators respond to the system at different speeds, and a consistency test is introduced to detect FDIAs. Extensive simulations on the IEEE 14-bus system demonstrate that FDIAs can be accurately detected, thus validating the effectiveness of the method.
Authored by Guoqing Zhang, Wengen Gao
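The classical bad data detection (BDD) that such stealthy FDIAs bypass is a residual test: flag a measurement vector whose residual against the estimated state is too large. A minimal sketch, with a hypothetical threshold, showing why a carefully constructed attack vector of the form a = Hc evades it while a naive injection does not:

```python
def chi_squared_bdd(z, H, x_hat, threshold):
    """Classical residual-based bad data detection: flag the measurement
    vector z if the squared residual ||z - H x_hat||^2 exceeds a
    threshold (in practice a chi-squared quantile)."""
    r = [zi - sum(hij * xj for hij, xj in zip(row, x_hat))
         for zi, row in zip(z, H)]
    return sum(ri * ri for ri in r) > threshold

# A stealthy FDIA adds a = H c to the measurements, which shifts the
# state estimate by c while leaving the residual unchanged, so the
# residual test never fires; this is the detection gap the AI-AIEKF
# approach above targets.
```

With H mapping a 2-state system to 3 measurements, adding a = Hc moves the estimator to x + c with a residual of zero, whereas an arbitrary tampering of a single measurement leaves a residual the test catches.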
As deep-learning based image and video manipulation technology advances, the future of truth and information looks bleak. In particular, Deepfakes, wherein a person’s face can be transferred onto the face of someone else, pose a serious threat of spreading convincing misinformation drastic and ubiquitous enough to have catastrophic real-world consequences. To prevent this, an effective detection tool for manipulated media is needed. However, the detector cannot just be good; it has to evolve with the technology to keep pace with, or even outpace, the enemy. At the same time, it must defend against the different attack types to which deep learning systems are vulnerable. To that end, in this paper, we review various methods of both attack and defense on AI systems, as well as modes of evolution for such a system. Then, we put forward a potential system that combines the latest technologies in multiple areas, as well as several novel ideas, to create a detection algorithm that is robust against many attacks and can learn over time with unprecedented effectiveness and efficiency.
Authored by Ian Miller, Dan Lin