Attacks against computer systems are regarded as one of the most serious threats in the modern world. A zero-day vulnerability is a vulnerability unknown to the vendor of the system. Deep learning techniques are widely used for anomaly-based intrusion detection; they give satisfactory results for known attacks, but for zero-day attacks the models give inconsistent results. In this work, two separate environments were first set up to collect training and test data for zero-day attacks, and zero-day attack data were generated by simulating real-time zero-day attacks. A ranking of the features in the training and test data was produced using an explainable AI (XAI) interface. From the collected training data, additional attack data were generated by applying a time-series generative adversarial network (TGAN) to the top 12 features, and the training data were concatenated with the AWID dataset. A hybrid deep learning model combining long short-term memory (LSTM) and a convolutional neural network (CNN) was developed to test the zero-day data against both the GAN-augmented concatenated dataset and the original AWID dataset. Finally, the concatenated dataset was found to give better performance, with 93.53\% accuracy, whereas the AWID dataset alone gave 84.29\% accuracy.
Authored by Md. Asaduzzaman, Md. Rahman
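As a rough illustration of the hybrid architecture mentioned above, the sketch below builds a small CNN-LSTM binary intrusion detector in Keras. The window length, the 12-feature input (matching the top-12 features used for TGAN augmentation), and the layer sizes are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of a hybrid CNN-LSTM intrusion detector (assumed shapes).
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS, N_FEATURES = 32, 12   # assumed sliding-window length and feature count

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    # 1-D convolution extracts local temporal patterns across the window
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # LSTM captures longer-range dependencies in the convolved sequence
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```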
Zero-day attacks, defined by their abrupt appearance without any previous detection mechanisms, present a substantial obstacle in the field of network security. To address this difficulty, a wide variety of machine learning and deep learning models have been used to identify and mitigate zero-day attacks, and these models have been assessed in both binary and multi-class classification scenarios. The objective of this work is to carry out a thorough comparison and analysis of these models, including the impact of class imbalance, utilizing SHAP (SHapley Additive exPlanations) explainability approaches. Class imbalance is a prevalent problem in cybersecurity datasets, characterized by a considerable disparity between the number of attack and non-attack instances. By balancing the dataset, we guarantee equitable representation of both categories, preventing bias towards the dominant category during model training and evaluation. Moreover, the application of SHAP XAI facilitates a deeper understanding of model predictions, empowering analysts to examine the fundamental features that contribute to the detection of zero-day attacks.
Authored by C.K. Sruthi, Aswathy Ravikumar, Harini Sriraman
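A minimal sketch of the balance-then-explain workflow described above is shown below: the majority class is undersampled so attack and non-attack records are equally represented, a classifier is trained, and SHAP summarizes feature contributions. The dataset path, column names, and the gradient-boosting classifier are illustrative assumptions, not the paper's setup.

```python
# Balance the classes, train a classifier, and explain it with SHAP (sketch).
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils import resample

df = pd.read_csv("ids_dataset.csv")                       # hypothetical dataset
attacks, normal = df[df["label"] == 1], df[df["label"] == 0]

# Undersample both classes to the size of the smaller one
n = min(len(attacks), len(normal))
balanced = pd.concat([
    resample(attacks, replace=False, n_samples=n, random_state=0),
    resample(normal, replace=False, n_samples=n, random_state=0),
]).sample(frac=1.0, random_state=0)

X, y = balanced.drop(columns=["label"]), balanced["label"]
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP attributes each prediction to individual input features
shap_values = shap.TreeExplainer(clf).shap_values(X)
shap.summary_plot(shap_values, X)                         # global feature impact
```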
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
The rapid advancement of Deep Learning (DL) offers viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms have been proposed, they lack the ability to explain certain tasks, such as image captioning (IC). This is caused by the nature of the IC task, e.g. the presence of multiple objects from the same category in the captioned image. In this paper we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate the performance of XAI algorithms in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
This paper delves into the nascent paradigm of Explainable AI (XAI) and its pivotal role in enhancing the acceptability of the growing AI systems that are shaping the Digital Management 5.0 era. XAI holds significant promise, promoting compliance with legal and ethical standards and offering transparent decision-making tools. The imperative of interpretable AI systems to counter the black-box effect and adhere to data protection laws like the GDPR is highlighted. This paper pursues a dual objective. Firstly, it provides an in-depth understanding of the emerging XAI paradigm, helping practitioners and academics project their future research trajectories. Secondly, it proposes a new taxonomy of XAI models with potential applications that could facilitate AI acceptability. The academic literature, however, reflects a crucial lack of exploration of the full potential of XAI: existing models remain mainly theoretical and lack practical applications. By bridging the gap between abstract models and the pragmatic implementation of XAI in management, this paper breaks new ground by laying the scientific foundations of XAI in the upcoming era of Digital Management 5.0.
Authored by Samia Gamoura
Explainable AI (XAI) techniques are used to understand the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable no-code XAI dashboards ready to be used by both data scientists and general users. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project to explain the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
This work proposes an interpretable deep learning framework utilizing Vision Transformers (ViT) for the classification of remote sensing images into land use and land cover (LULC) classes. It uses Shapley Additive Explanations (SHAP) values to achieve two-stage explanations: 1) band-wise feature importance per class, showing which bands assist the prediction of each class, and 2) spatial feature understanding, explaining which embedded patches per band affected the network's performance. Experimental results on the EuroSAT dataset demonstrate the ViT's accurate classification, with an overall accuracy of 96.86\%, offering improved results compared to popular CNN models. Heatmaps for each of the dataset's classes highlight the effectiveness of the proposed framework in band explanation and feature importance.
Authored by Anastasios Temenos, Nikos Temenos, Maria Kaselimi, Anastasios Doulamis, Nikolaos Doulamis
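To make the idea of band-wise importance concrete, the sketch below scores each spectral band by permuting it and measuring the accuracy drop of a small classifier. Permutation importance is a stand-in for the paper's SHAP-based band attribution, and the CNN, the 13-band EuroSAT-like input shape, and the random placeholder data are assumptions for illustration only.

```python
# Per-band importance via permutation (stand-in for SHAP band attribution).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

H, W, BANDS, N_CLASSES = 64, 64, 13, 10          # assumed EuroSAT-like dimensions

model = models.Sequential([
    layers.Input(shape=(H, W, BANDS)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(128, H, W, BANDS).astype("float32")    # placeholder images
y = np.random.randint(0, N_CLASSES, size=128)             # placeholder labels
model.fit(X, y, epochs=1, verbose=0)

baseline = model.evaluate(X, y, verbose=0)[1]
for b in range(BANDS):
    Xp = X.copy()
    # Shuffle one band across samples and record the accuracy it costs
    Xp[..., b] = Xp[np.random.permutation(len(Xp)), ..., b]
    drop = baseline - model.evaluate(Xp, y, verbose=0)[1]
    print(f"band {b}: importance {drop:.4f}")
```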
The number of publications related to Explainable Artificial Intelligence (XAI) has increased rapidly over the last decade. However, the subjective nature of explainability has led to a lack of consensus on commonly used definitions of explainability, and the differing problem statements falling under the XAI label have resulted in a lack of comparisons. This paper proposes, in broad terms, a simple comparison framework for XAI methods based on their output and what we call the practical attributes. The aim of the framework is to ensure that everything that can be held constant for the purpose of comparison is held constant, and to set aside many of the subjective elements present in the area of XAI. An example comparison along the lines of the proposed framework is performed on local, post-hoc, model-agnostic XAI algorithms designed to measure the feature importance/contribution for a queried instance. These algorithms are assessed on two criteria using synthetic datasets across a range of classifiers. The first is based on selecting features which contribute to the underlying data structure, and the second is how accurately the algorithms select the features used in a decision tree path. The results from the first comparison showed that when the classifier was able to pick up the underlying pattern in the data, the LIME algorithm was the most accurate at selecting the underlying ground-truth features. The second test returned mixed results: in some instances the XAI algorithms were able to accurately return the features used to produce predictions, but this result was not consistent.
Authored by Guo Yeo, Irene Hudson, David Akman, Jeffrey Chan
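The sketch below illustrates the kind of ground-truth check described in the first comparison: a synthetic dataset is generated with known informative features, a classifier is fitted, and LIME's top-ranked features for a queried instance are compared against the known informative ones. The data generator and classifier are assumptions for illustration, not the paper's experimental setup.

```python
# Check whether LIME recovers the known informative features (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Ground truth: with shuffle=False the first 3 features are the informative ones
X, y = make_classification(n_samples=1000, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=[f"f{i}" for i in range(10)],
                                 class_names=["0", "1"], mode="classification")
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)

# Compare LIME's top-ranked features against the ground truth (f0, f1, f2)
print(exp.as_list())
```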
The growing complexity of wireless networks has sparked an upsurge in the use of artificial intelligence (AI) within the telecommunication industry in recent years. In network slicing, a key component of 5G that enables network operators to lease their resources to third-party tenants, AI models may be employed in complex tasks, such as short-term resource reservation (STRR). When AI is used to make complex resource management decisions with financial and service quality implications, it is important that these decisions be understood by a human-in-the-loop. In this paper, we apply state-of-the-art techniques from the field of Explainable AI (XAI) to the problem of STRR. Using real-world data to develop an AI model for STRR, we demonstrate how our XAI methodology can be used to explain the real-time decisions of the model, to reveal trends about the model’s general behaviour, as well as aid in the diagnosis of potential faults during the model’s development. In addition, we quantitatively validate the faithfulness of the explanations across an extensive range of XAI metrics to ensure they remain trustworthy and actionable.
Authored by Pieter Barnard, Irene Macaluso, Nicola Marchetti, Luiz DaSilva
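As a rough illustration of how explanation faithfulness can be validated quantitatively, the sketch below applies a deletion-style check: the features an explanation ranks highest are replaced by their mean, and the resulting prediction change is compared against removing random features. The regression model, synthetic data, and the use of global importances as a stand-in explanation are assumptions, not the paper's STRR model or its exact XAI metrics.

```python
# Deletion-style faithfulness check for a feature-importance explanation (sketch).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=20, n_informative=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
x = X[0].copy()

# Stand-in explanation: rank features by the model's global importances
ranking = np.argsort(model.feature_importances_)[::-1]

def deletion_drop(feature_ids):
    """Prediction change when the given features are replaced by their mean."""
    x_masked = x.copy()
    x_masked[feature_ids] = X[:, feature_ids].mean(axis=0)
    return abs(model.predict([x])[0] - model.predict([x_masked])[0])

k = 5
top_drop = deletion_drop(ranking[:k])
rand_drop = deletion_drop(np.random.default_rng(0).choice(20, size=k, replace=False))
print(f"top-{k} features drop: {top_drop:.2f} vs. random features drop: {rand_drop:.2f}")
```

A faithful explanation should produce a substantially larger drop for its top-ranked features than for randomly chosen ones.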
To address the problems that may arise from the negative impact of electric vehicle (EV) charging loads on the power distribution network, it is very important to predict distribution network variability as a function of EV charging load. If appropriate facility reinforcement or system operation is carried out based on an evaluation of the impact of EV charging load, facility failures can be prevented in advance and power quality maintained at a certain level, enabling stable network operation. By analysing how the predicted load changes with EV load characteristics through the load prediction model, it is possible to evaluate the influence of EV integration on the distribution network. This paper investigates the effect of EV charging load on the voltage stability, power loss, reliability indices, and economic loss of the distribution network. To this end, we transform the univariate time series of EV charging data into a multivariate time series using feature-engineering techniques. Time-series forecast models are then trained on the multivariate dataset. Finally, XAI techniques such as LIME and SHAP are applied to the models to obtain feature-importance analysis results.
Authored by H. Lee, H. Lim, B. Lee
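The sketch below shows one way the feature-engineering step described above could look: a univariate charging-load series is expanded into a multivariate table of lag, rolling-window, and calendar features that a forecaster can be trained on and later explained with LIME or SHAP. The synthetic series, column names, and lag choices are assumptions for illustration.

```python
# Turn a univariate load series into a multivariate feature table (sketch).
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
load = pd.Series(np.random.rand(len(idx)), index=idx, name="load_kw")  # placeholder series

features = pd.DataFrame(index=load.index)
features["load_kw"] = load                       # target
features["lag_1h"] = load.shift(1)               # previous hour
features["lag_24h"] = load.shift(24)             # same hour yesterday
features["rolling_24h_mean"] = load.rolling(24).mean()
features["hour"] = load.index.hour               # calendar features capture daily cycles
features["dayofweek"] = load.index.dayofweek
features = features.dropna()

print(features.head())
```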
Electrical load forecasting is an essential part of the smart grid for maintaining a stable and reliable grid and supporting decisions for economic planning. With the integration of more renewable energy resources, especially solar photovoltaics (PV), and the transition to a prosumer-based grid, electrical load forecasting is deemed to play a crucial role at both the regional and household levels. However, most existing forecasting methods can be considered black-box models built on deep digitalization enablers such as Deep Neural Networks (DNNs), where human interpretation remains limited; this black-box character restricts insights and applicability. To mitigate this shortcoming, eXplainable Artificial Intelligence (XAI) is introduced as a means to bring transparency into the model's behaviour and enable human interpretation. By utilizing XAI, experienced power market and system professionals can be involved in developing the data-driven approach even without data science expertise. In this study, an electrical load forecasting model utilizing an XAI tool is developed and presented for a Norwegian residential building.
Authored by Eilert Henriksen, Ugur Halden, Murat Kuzlu, Umit Cali
This work proposes a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The unified method, LISA, incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-value-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in the black-box model's decisions, allowing it to be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) for explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
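As an illustration of one component of the unified method, the sketch below explains an image classifier's prediction with LIME. A generic ImageNet-pretrained backbone and a random placeholder image stand in for the authors' Inception-based CXR model; only the LIME call pattern is the point here.

```python
# Explain an image classifier's prediction with LIME (sketch, stand-in model).
import numpy as np
from lime import lime_image
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

model = MobileNetV2(weights="imagenet")
image = np.random.rand(224, 224, 3)              # placeholder image in [0, 1]

def predict_fn(images):
    # LIME passes batches of perturbed copies of the image; preprocess and score them
    return model.predict(preprocess_input(np.array(images) * 255.0))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=2, num_samples=200)

# Mask of the regions that most supported the top predicted label
_, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                          positive_only=True, num_features=5)
print(mask.shape)
```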
The rapid shift towards smart cities, particularly in the era of pandemics, necessitates the employment of e-learning, remote learning systems, and hybrid models. Building adaptive and personalized education becomes a requirement to mitigate the downsides of distance learning while maintaining high levels of achievement. Explainable artificial intelligence (XAI), machine learning (ML), and the internet of behaviour (IoB) are just a few of the technologies that are helping to shape the future of smart education in the age of smart cities through customization and personalization. This study presents a paradigm for smart education based on the integration of XAI and IoB technologies. The research uses data acquired on students' behaviours to determine whether or not current education systems respond appropriately to learners' requirements. Despite the existence of sophisticated education systems, they have not yet reached the degree of development that allows them to be tailored to learners' cognitive needs and to support them in the absence of face-to-face instruction. The study collected data on 41 learners' behaviours in response to academic activities and assessed whether the running systems were able to capture such behaviours and respond appropriately; the evaluation demonstrated a change in students' academic progression when IoT/IoB-based monitoring was used to enable a corresponding response supporting their progression.
Authored by Ossama Embarak
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus only on delivering a single explanation, which fails to account for the diversity of human thought and experience in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explain aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct intensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI [1]. Our method achieves competitive or better performance against state-of-the-art baseline models on explanation generation (up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
Authored by Jialin Yu, Alexandra Cristea, Anoushka Harit, Zhongtian Sun, Olanrewaju Aduragba, Lei Shi, Noura Moubayed
Artificial intelligence (AI) is used in decision-support systems, which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black-box nature, it is insufficient to consider accuracy, precision, and recall as the only metrics for evaluating a model's performance. Domain knowledge is also essential to identify the features that the model deems significant in arriving at its decision. In this paper, we consider a use case of face mask recognition to explain the application and benefits of XAI. Eight models used to solve the face mask recognition problem were selected, and Grad-CAM explainable AI (XAI) is used to explain these state-of-the-art models. Models that selected incorrect features were eliminated even though they had high accuracy. Domain knowledge relevant to face mask recognition, namely facial-feature importance, is applied to identify the model that picked the most appropriate features to arrive at its decision. We demonstrate that models with high accuracy do not necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models with a guarantee that the models are looking at the right features when arriving at the classification. Furthermore, the outcomes of the model can be explained to the user, enhancing their confidence in the AI model being deployed in the field.
Authored by K Srikanth, T Ramesh, Suja Palaniswamy, Ranganathan Srinivasan
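For reference, the sketch below shows a minimal Grad-CAM computation of the kind used above to inspect which image regions a classifier relies on. The pretrained backbone, its layer name, and the random input are assumptions for illustration, not the authors' eight face-mask models.

```python
# Minimal Grad-CAM heatmap for a Keras image classifier (sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

model = MobileNetV2(weights="imagenet")
last_conv = model.get_layer("Conv_1")                  # last convolutional layer
grad_model = tf.keras.Model(model.inputs, [last_conv.output, model.output])

img = preprocess_input(np.random.rand(1, 224, 224, 3).astype("float32") * 255.0)

with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    class_score = tf.reduce_max(preds, axis=1)         # score of the top predicted class

# Weight each feature map by the gradient of the class score with respect to it
grads = tape.gradient(class_score, conv_out)
weights = tf.reduce_mean(grads, axis=(1, 2))
cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
heatmap = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()[0]   # values in [0, 1]
print(heatmap.shape)                                       # coarse spatial attention map
```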
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques to understand and build AI models with trust, reliability, safety, and fairness. Feature importance explanation summarizes feature contributions for end-users to make model decisions. However, XAI methods may produce varied summaries, prompting further analysis to evaluate the consistency across multiple XAI methods applied to the same model and data set. This paper defines metrics to measure the consistency of feature contribution explanation summaries under feature importance order and saliency maps. Driven by these consistency metrics, we develop an XAI process oriented toward the XAI criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process development involving twelve XAI methods on three topics: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process, with defined consistency metrics, for producing rigorous feature contribution explanations.
Authored by Jun Huang, Zerui Wang, Ding Li, Yan Liu
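One simple way to quantify the kind of consistency discussed above is to rank features by each method's attributions and compare the orderings with a rank correlation. The sketch below does this for SHAP and permutation importance; the rank-correlation measure and the toy model are stand-in assumptions, not the paper's own consistency metrics.

```python
# Compare the feature rankings of two explanation methods with Kendall's tau (sketch).
import numpy as np
import shap
from scipy.stats import kendalltau
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Method 1: mean absolute SHAP value per feature
imp_shap = np.abs(shap.TreeExplainer(clf).shap_values(X)).mean(axis=0)

# Method 2: permutation importance
imp_perm = permutation_importance(clf, X, y, random_state=0).importances_mean

tau, _ = kendalltau(imp_shap, imp_perm)
print(f"rank consistency between the two methods (Kendall tau): {tau:.2f}")
```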