Around the world there has been an advancement of IoT edge devices, which in turn has enabled the collection of rich datasets as part of the Mobile Crowd Sensing (MCS) paradigm, which in practice is implemented in a variety of safety-critical applications. In spite of the advantages of such datasets, there exists an inherent data trustworthiness challenge due to the interference of malevolent actors. In this context, a great body of proposed solutions capitalizes on conventional machine learning algorithms for sifting through faulty data without any assumptions on the trustworthiness of the source. However, a number of open issues remain, such as how to cope with strong colluding adversaries while efficiently managing the sizable influx of user data. In this work we suggest that the use of explainable artificial intelligence (XAI) can lead to even more efficient performance, as it tackles the limitations of conventional black-box models by enabling the understanding and interpretation of a model's operation. Our approach enables reasoning about the model's accuracy in the presence of adversaries and can filter out faulty or malicious data, thus enhancing the model's adaptation process. To this end, we provide a prototype implementation coupled with a detailed performance evaluation under different attack scenarios, employing both real and synthetic datasets. Our results suggest that the use of XAI leads to improved performance compared to other existing schemes.
Authored by Sam Afzal-Houshmand, Dimitrios Papamartzivanos, Sajad Homayoun, Entso Veliou, Christian Jensen, Athanasios Voulodimos, Thanassis Giannetsos
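A minimal sketch of how XAI attributions could support the data-sifting step described above, assuming a SHAP-based setup with a synthetic dataset and an illustrative deviation threshold (none of which are specified by the authors):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for MCS sensing reports (features) with labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-sample SHAP attributions; handle both list and array return types.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
phi = sv[1] if isinstance(sv, list) else sv[..., 1]  # positive class

# Contributions whose explanation deviates strongly from the population
# are treated as potentially faulty or malicious and dropped before the
# model is re-fitted (the "adaptation" step).
center = phi.mean(axis=0)
dist = np.linalg.norm(phi - center, axis=1)
suspect = dist > dist.mean() + 2 * dist.std()  # illustrative threshold
model.fit(X[~suspect], y[~suspect])
print(f"flagged {suspect.sum()} of {len(X)} contributions as suspect")
```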
The aim of this study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI genuinely helps to explain how decisions are made by AI models, to increase confidence and trust in the models, to make decisions more reliable, and to expose the transparency of the decision-making mechanism. Even though a number of XAI methods exist, such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, a number of problems in metrics, evaluation, performance, and explanation remain domain-specific and require domain experts to develop new models or to apply available techniques. It is hoped that this article will help researchers to develop XAI solutions for their energy applications and improve their AI approaches in further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
As we know, change is the only constant in healthcare services. In this rapidly developing world, the need to drastically improve healthcare performance is essential. Secure real-time health data monitoring, analysis, and storage give us a highly efficient healthcare system to diagnose, predict, and prevent deadly diseases. Integrating IoT data with blockchain storage technology brings safety and security to the data. The current bottlenecks we face while integrating blockchain and IoT are primarily interoperability, scalability, and a lack of regulatory frameworks. By integrating Explainable AI into the system, it is possible to overcome some of these bottlenecks between IoT devices and blockchain. XAI acts as a middleware solution, helping to interpret the predictions and enforce the standard data communication protocol.
Authored by CH Murthy V, Lawanya Shri
Sixth generation (6G)-enabled massive network MANO orchestration, alongside distributed supervision and fully reconfigurable control logic that manages the dynamic arrangement of network components such as cell-free, Open-Air Interface (OAI), and RIS, is a potent enabler for the upcoming pervasive digitalization of vertical use cases. In such a disruptive domain, artificial intelligence (AI)-driven zero-touch “Network of Networks” intent-based automation shall be able to guarantee a high degree of security, efficiency, scalability, and sustainability, especially in cross-domain and interoperable deployment environments (i.e., where points of presence (PoPs) are non-independent and identically distributed (non-IID)). To this end, this paper presents a novel, open, and fully reconfigurable networking architecture for 6G cellular paradigms, named 6G-BRICKS. In particular, 6G-BRICKS will deliver the first open and programmable O-RAN Radio Unit (RU) for 6G networks, termed the OpenRU, based on an NI USRP-based platform. Moreover, 6G-BRICKS will integrate the RIS concept into OAI alongside Testing as a Service (TaaS) capabilities, multi-tenancy, disaggregated Operations Support Systems (OSS), and Deep Edge adaptation at the forefront. The overall ambition of 6G-BRICKS is to offer evolvability and granularity while, at the same time, tackling big challenges such as interdisciplinary efforts and big investments in 6G integration.
Authored by Kostas Ramantas, Anastasios Bikos, Walter Nitzold, Sofie Pollin, Adlen Ksentini, Sylvie Mayrargue, Vasileios Theodorou, Loizos Christofi, Georgios Gardikis, Md Rahman, Ashima Chawla, Francisco Ibañez, Ioannis Chochliouros, Didier Nicholson, Mario Montagud, Arman Shojaeifard, Alexios Pagkotzidis, Christos Verikoukis
This study addresses the critical need to secure VR network communication from non-immersive attacks, employing an intrusion detection system (IDS). While deep learning (DL) models offer advanced solutions, their opacity as "black box" models raises concerns. Recognizing this gap, the research underscores the urgency of DL-based explainability, enabling data analysts and cybersecurity experts to grasp model intricacies. Leveraging sensed data from IoT devices, our work trains a DL-based model for attack detection and mitigation in the VR network. Importantly, we extend our contribution by providing comprehensive global and local interpretations of the model’s decisions post-evaluation using SHAP-based explanation.
Authored by Urslla Izuazu, Dong-Seong Kim, Jae Lee
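For illustration, a rough sketch of the global and local SHAP interpretations mentioned above; the tree-based stand-in classifier and synthetic traffic features are assumptions, not the paper's VR-network DL model:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for sensed IoT/VR traffic features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X, y)

explainer = shap.TreeExplainer(clf)
phi = explainer.shap_values(X)  # (n_samples, n_features) for binary GBM

# Global interpretation: mean absolute SHAP contribution per feature.
global_importance = np.abs(phi).mean(axis=0)
print("global feature ranking:", np.argsort(global_importance)[::-1])

# Local interpretation: why one specific flow was flagged as an attack.
i = int(np.argmax(clf.predict_proba(X)[:, 1]))
print(f"local attribution for sample {i}:", np.round(phi[i], 3))
```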
Impact of Equivalence Assessment in the Education Sector using the XAI Model of Blockchain with ECTS
The procedure for obtaining an equivalency certificate for international educational recognition is typically complicated and opaque, and differs depending on the nation and system. To overcome these issues and empower students, this study suggests a revolutionary assessment tool that makes use of blockchain technology, chatbots, the European Credit Transfer and Accumulation System (ECTS), and Explainable Artificial Intelligence (XAI). Educational equivalency assessments frequently face difficulties and a lack of openness in a variety of settings. The suggested solution uses blockchain for tamper-proof record keeping and secure data storage, building on the capabilities of each component. This improves the blockchain’s ability to securely store application data and evaluation results, fostering immutability and trust. Using the distributed ledger feature of blockchain promotes fairness in evaluations by preventing tampering and guaranteeing data integrity. The blockchain ensures data security and privacy by encrypting and storing data. Reviewing pertinent material in each domain, we discuss how XAI might explain AI-driven equivalence decisions, promoting fairness and trust. Chatbots can improve accessibility by streamlining data collection and assisting students along the way. Transparency and efficiency are provided via ECTS computations that integrate XAI and chatbots. Emphasizing the availability of multilingual support for international students, we also address issues such as data privacy and system adaptation. The study recommends further research to assess this multifaceted method in practical contexts and improve the technology for ethical and efficient application. In the end, both students and institutions will benefit, as the tool can empower individuals and promote the international mobility of degree equivalence.
Authored by Sumathy Krishnan, R Surendran
Many studies of the adoption of machine learning (ML) in Security Operation Centres (SOCs) have pointed to a lack of transparency and explanation, and thus trust, as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing the degree to which XAI indeed helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper provides our initial findings.
Authored by Håkon Eriksson, Gudmund Grov
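As a concrete illustration of explaining a single ML-generated alert with LIME, here is a small sketch; the feature names and the classifier are hypothetical stand-ins for a SOC pipeline:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical alert features; real SOC telemetry would replace these.
feature_names = ["bytes_out", "bytes_in", "duration", "dst_port_entropy",
                 "failed_logins", "conn_rate"]
X, y = make_classification(n_samples=800, n_features=6, random_state=2)
clf = RandomForestClassifier(random_state=2).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["benign", "alert"], discretize_continuous=True)

# Explain one alert: which features pushed the model towards "alert"?
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature:30s} {weight:+.3f}")
```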
Explainable AI is an emerging field that aims to address how the black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. Explainable AI in manufacturing is expected to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be highly sensitive and come in various formats, need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the Explainable AI and manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution's overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing, as well as for overall security.
Authored by Sangeetha Reji, Jonas Hetterich, Stamatis Pitsios, Vasilis Gkolemi, Sergi Perez-Castanos, Minas Pertselakis
In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important for investigation towards a trustworthy and secure vision for future 6G networks.
Authored by Bartlomiej Siniarski, Chamara Sandeepa, Shen Wang, Madhusanka Liyanage, Cem Ayyildiz, Veli Yildirim, Hakan Alakoca, Fatma Kesik, Betül Paltun, Giovanni Perin, Michele Rossi, Stefano Tomasin, Arsenia Chorti, Pietro Giardina, Alberto Pérez, José Valero, Tommy Svensson, Nikolaos Pappas, Marios Kountouris
Fixed security solutions and related security configurations may no longer meet the diverse requirements of 6G networks. The Open Radio Access Network (O-RAN) architecture is going to be one key entry point to 6G where direct user access is granted. O-RAN promotes the design, deployment, and operation of the RAN with open interfaces, optimized by intelligent controllers. O-RAN networks are to be implemented as multi-vendor systems with interoperable components and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. However, since O-RAN contains many new open interfaces and data flows, new security issues may emerge. Recommendations for dynamic security policy adjustments that consider the energy availability and the risk or security level of the network are lacking in the current state of the art. When the security process is managed and executed in an autonomous way, it must also assure the transparency of the security policy adjustments and provide the reasoning behind the adjustment decisions to the interested parties whenever needed. Moreover, the energy consumption of such security solutions constantly brings overhead to the networking devices. Therefore, in this paper we discuss an XAI-based green security architecture for resilient open radio access networks in 6G, known as XcARet, which provides cognitive and transparent security solutions for O-RAN in a more energy-efficient manner.
Authored by Pawani Porambage, Jarno Pinola, Yasintha Rumesh, Chen Tao, Jyrki Huusko
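A hypothetical sketch of the kind of energy- and risk-aware policy adjustment with attached reasoning that XcARet argues for; the thresholds, policy names, and explanation format are our assumptions, not the paper's specification:

```python
from dataclasses import dataclass

@dataclass
class NetworkState:
    risk_level: float       # 0.0 (benign) .. 1.0 (under attack)
    energy_budget: float    # 0.0 (depleted) .. 1.0 (full)

def select_security_policy(state: NetworkState) -> tuple[str, str]:
    """Return (policy, human-readable rationale) for transparency."""
    if state.risk_level > 0.7:
        return ("full_inspection",
                f"risk {state.risk_level:.2f} exceeds 0.70; "
                "energy cost accepted to preserve resilience")
    if state.energy_budget < 0.3:
        return ("lightweight_monitoring",
                f"energy budget {state.energy_budget:.2f} below 0.30 "
                f"and risk {state.risk_level:.2f} is moderate")
    return ("standard_ids", "risk and energy both within nominal bounds")

policy, reason = select_security_policy(NetworkState(0.4, 0.2))
print(policy, "|", reason)
```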
The pervasive proliferation of digital technologies and interconnected systems has heightened the necessity for comprehensive cybersecurity measures. While deep learning (DL) has become a powerful tool for bolstering security, its effectiveness is being tested by malicious hacking. Cybersecurity has become an issue of critical importance in the modern digital world. By making it possible to identify and respond to threats in real time, deep learning is a critical component of improved security. Adversarial attacks, interpretability of models, and a lack of labeled data are all obstacles that need to be studied further in order to strengthen DL-based security solutions. The safety and reliability of DL in cyberspace depend on being able to overcome these obstacles. The present study presents a novel method for strengthening DL-based cybersecurity, called dynamic adversarial resilience for deep learning-based cybersecurity (DARDL-C). DARDL-C offers a dynamic and adaptable framework to counter adversarial attacks by combining adaptive neural network architectures with ensemble learning, real-time threat monitoring, threat intelligence integration, explainable AI (XAI) for model interpretability, and reinforcement learning for adaptive defense strategies. The purpose of this approach is to make DL models more secure and resistant to the constantly shifting nature of online threats. Simulation analysis is essential for determining DARDL-C's effectiveness in practical settings without compromising real-world safety. Professionals and researchers can evaluate the efficacy and versatility of DARDL-C by simulating realistic threats in controlled contexts. This provides valuable insights into the system's strengths and areas for improvement.
Authored by D. Poornima, A. Sheela, Shamreen Ahamed, P. Kathambari
This paper presents a reputation-based threat mitigation framework that defends potential security threats in electroencephalogram (EEG) signal classification during model aggregation of Federated Learning. While EEG signal analysis has attracted attention because of the emergence of brain-computer interface (BCI) technology, it is difficult to create efficient learning models for EEG analysis because of the distributed nature of EEG data and related privacy and security concerns. To address these challenges, the proposed defending framework leverages the Federated Learning paradigm to preserve privacy by collaborative model training with localized data from dispersed sources and introduces a reputation-based mechanism to mitigate the influence of data poisoning attacks and identify compromised participants. To assess the efficiency of the proposed reputation-based federated learning defense framework, data poisoning attacks based on the risk level of training data derived by Explainable Artificial Intelligence (XAI) techniques are conducted on both publicly available EEG signal datasets and the self-established EEG signal dataset. Experimental results on the poisoned datasets show that the proposed defense methodology performs well in EEG signal classification while reducing the risks associated with security threats.
Authored by Zhibo Zhang, Pengfei Li, Ahmed Hammadi, Fusen Guo, Ernesto Damiani, Chan Yeun
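A minimal sketch of reputation-based aggregation in this spirit: clients whose updates deviate strongly from the consensus lose reputation and, with it, influence. The deviation measure and update rule are illustrative assumptions, not the paper's mechanism:

```python
import numpy as np

def reputation_weighted_average(updates, reputations):
    """Aggregate client weight vectors, weighted by reputation."""
    w = np.asarray(reputations, dtype=float)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

def update_reputations(updates, reputations, decay=0.5):
    """Penalize clients whose update deviates strongly from the median."""
    median = np.median(np.stack(updates), axis=0)
    dists = np.array([np.linalg.norm(u - median) for u in updates])
    suspicious = dists > 3 * np.median(dists)  # robust, illustrative rule
    return [r * decay if s else min(1.0, r + 0.05)
            for r, s in zip(reputations, suspicious)]

# Three honest clients and one poisoned (scaled) update.
updates = [np.ones(4), 1.1 * np.ones(4), 0.9 * np.ones(4), 50 * np.ones(4)]
reps = update_reputations(updates, [1.0, 1.0, 1.0, 1.0])
print("reputations:", reps)            # the poisoner is down-weighted
print("aggregate:", reputation_weighted_average(updates, reps))
```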
Internet of Things (IoT) and Artificial Intelligence (AI) systems have become prevalent across various industries, leading to diverse and far-reaching outcomes, and their convergence has garnered significant attention in the tech world. Studies and reviews are instrumental in supplying industries with a nuanced understanding of the multifaceted developments of this joint domain. This paper undertakes a critical examination of existing perspectives and governance policies, adopting a contextual approach, and addressing not only the potential but also the limitations of these governance policies. In the complex landscape of AI-infused IoT systems, transparency and interpretability are pivotal qualities for informed decision-making and effective governance. In AI governance, transparency allows for scrutiny and accountability, while interpretability facilitates trust and confidence in AI-driven decisions. Therefore, we also evaluate and advocate for the use of two very popular eXplainable AI (XAI) techniques, SHAP and LIME, in explaining the predictive results of AI models. Subsequently, this paper underscores the imperative of not only maximizing the advantages and services derived from the incorporation of IoT and AI but also diligently minimizing possible risks and challenges.
Authored by Nadine Fares, Denis Nedeljkovic, Manar Jammal
The stock market is a topic of interest to all sorts of people. It is a place where prices change very drastically, so something needs to be done to help the people risking their money on it. The public's opinions are crucial for the stock market. Sentiment is a very powerful force that is constantly changing and having a significant impact; it is reflected on social media platforms, where almost the entire country is active, as well as in the daily news. Many projects have been carried out in the stock prediction genre, but since sentiment plays a big part in the stock market, predicting prices without it leads to inefficient predictions; sentiment analysis is therefore very important for stock market price prediction. To predict stock market prices, we combine sentiment analysis from various sources, including news and Twitter. Results are evaluated for two different cryptocurrencies: Ethereum and Solana. Random Forest achieved the best RMSE of 13.434 and MAE of 11.919 for Ethereum. Support Vector Machine achieved the best RMSE of 2.48 and MAE of 1.78 for Solana.
Authored by Arayan Gupta, Durgesh Vyas, Pranav Nale, Harsh Jain, Sashikala Mishra, Ranjeet Bidwe, Bhushan Zope, Amar Buchade
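A small sketch of the sentiment-augmented regression setup described above, with synthetic sentiment scores and prices standing in for the News/Twitter data, and the same RMSE/MAE metrics used for evaluation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
news_sentiment = rng.uniform(-1, 1, n)      # e.g., scored news articles
twitter_sentiment = rng.uniform(-1, 1, n)   # e.g., scored tweets
lagged_price = rng.uniform(1000, 2000, n)   # previous close
# Toy price dynamics: sentiment nudges the next price (assumption).
price = lagged_price * (1 + 0.05 * news_sentiment
                        + 0.03 * twitter_sentiment) + rng.normal(0, 10, n)

X = np.column_stack([news_sentiment, twitter_sentiment, lagged_price])
X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_te, pred):.3f}")
```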
Recently, the increased use of artificial intelligence in healthcare has significantly changed developments in the field of medicine. Medical centres have adopted AI applications and use them to predict disease diagnoses and reduce health risks in a predetermined way. In addition to Artificial Intelligence (AI) techniques for processing data and understanding its results, Explainable Artificial Intelligence (XAI) techniques have also gained an important place in the healthcare sector. In this study, reliable and explainable artificial intelligence studies in the field of healthcare were investigated, and the blockchain framework, one of the latest technologies in the field of reliability, was examined. Many researchers have used blockchain technology in the healthcare industry to exchange information between laboratories, hospitals, pharmacies, and doctors and to protect patient data. In our study, the studies whose keywords were XAI and Trustworthy Artificial Intelligence were first examined, and then, among these studies, priority was given to current articles using blockchain technology. By combining and organizing the existing methods and results of previous studies, our study presents a general framework obtained from the reviewed articles. Obtaining this framework from current studies will be beneficial for future studies by both academics and scientists.
Authored by Kübra Arslanoğlu, Mehmet Karaköse
In this work, a novel framework for detecting malicious networks in IoT-enabled Metaverse networks, which ensures that malicious network traffic is identified and integrated to suit optimal Metaverse cybersecurity, is presented. First, the study raises a core security issue related to cyberthreats in Metaverse networks and their privacy-breaching risks. Second, to address the shortcomings of efficient and effective network intrusion detection (NIDS) of dark web traffic, this study employs a quantization-aware trained (QAT) 1D CNN followed by fully connected networks (1D CNN-GRU-FCN) model, which addresses the memory contingencies in Metaverse NIDS models. The QAT model is made interpretable using eXplainable artificial intelligence (XAI) methods, namely SHapley additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME), to provide trustworthy model transparency and interpretability. Overall, the proposed method contributes storage benefits four times higher than the original model without quantization while attaining a high accuracy of 99.82%.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
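A rough sketch of a 1D CNN followed by a GRU and fully connected layers, the model family named above; the layer sizes are illustrative assumptions, and the quantization-aware training step is not shown:

```python
import torch
import torch.nn as nn

class CnnGruFcn(nn.Module):
    """1D CNN -> GRU -> fully connected classifier (illustrative sizes)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fcn = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, x):                   # x: (batch, seq_len)
        z = self.conv(x.unsqueeze(1))       # (batch, 32, seq_len // 2)
        z, _ = self.gru(z.transpose(1, 2))  # (batch, seq_len // 2, 64)
        return self.fcn(z[:, -1])           # last step -> class logits

logits = CnnGruFcn()(torch.randn(8, 64))    # 8 flows, 64 features each
print(logits.shape)                         # torch.Size([8, 2])
```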
IoT and AI have created a Transportation Management System, resulting in the Internet of Vehicles (IoV). Intelligent vehicles are combined with contemporary communication technologies (5G) to achieve automated driving and adequate mobility. IoV faces security issues in five areas: data safety, V2X communication safety, platform safety, Intermediate Commercial Vehicles (ICV) safety, and intelligent device safety. Numerous types of AI models have been created to reduce infiltration risks on ICVs. The need to integrate confidence, transparency, and repeatability into the creation of Artificial Intelligence (AI) for the safety of ICVs and to deliver harmless transport systems, on the other hand, has led to an increase in explainable AI (XAI). Therefore, the scope of this analysis covers the XAI models employed in ICV intrusion detection systems (IDSs), their taxonomies, and open research concerns. The study's findings demonstrate that, despite its relatively recent application to ICVs, XAI is a promising research area for those looking to increase the net effect of ICVs. The paper also demonstrates that XAI's greater transparency will help it gain acceptance in the vehicle industry.
Authored by Ravula Vishnukumar, Adla Padma, Mangayarkarasi Ramaiah
Peer-to-peer (P2P) lenders face regulatory, compliance, application, and data security risks. A complete methodology that includes more than statistical and economic methods is needed to conduct credit assessments effectively. This study uses systematic literature network analysis and artificial intelligence to understand risk management in P2P lending financial technology. This study suggests that explainable AI (XAI) is better at identifying, analyzing, and evaluating financial industry risks, including those in financial technology, through human agency, monitoring, transparency, and accountability. The LIME framework and SHAP values are widely used machine learning frameworks for data integration to speed up and improve credit score analysis using bank-like criteria. Thus, machine learning is expected to be used to develop a precise and rational individual credit evaluation system in peer-to-peer lending to improve credit risk supervision and forecasting while reducing default risk.
Authored by Ika Arifah, Ina Nihaya
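For illustration, a minimal sketch of SHAP-based credit-score explanation with hypothetical bank-like criteria; the data and model are synthetic stand-ins, not an actual P2P lending dataset:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical bank-like criteria for a synthetic applicant pool.
features = ["income", "debt_ratio", "loan_amount",
            "credit_history_len", "late_payments"]
X, y = make_classification(n_samples=600, n_features=5, random_state=3)
model = GradientBoostingClassifier(random_state=3).fit(X, y)

phi = shap.TreeExplainer(model).shap_values(X)  # (n_samples, n_features)

# Which criteria drive the default prediction for applicant 0?
for name, contrib in sorted(zip(features, phi[0]), key=lambda t: -abs(t[1])):
    print(f"{name:20s} {contrib:+.3f}")
```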
Security applications use machine learning (ML) models and artificial intelligence (AI) to autonomously protect systems. However, security decisions are more impactful if they are coupled with their rationale. The explanation behind an ML model's result provides the rationale necessary for a security decision. Explainable AI (XAI) techniques provide insights into the state of a model's attributes and their contribution to the model's results to gain the end user's confidence. It requires human intervention to investigate and interpret the explanation. The interpretation must align with the system's security profile(s). A security profile is an abstraction of the system's security requirements and the related functionalities to comply with them. Relying on human intervention for interpretation is infeasible for an autonomous system (AS), since it must self-adapt its functionalities in response to uncertainty at runtime. Thus, an AS requires an automated approach to extract security profile information from ML model XAI outcomes. The challenge is unifying the XAI outcomes with the security profile to represent the interpretation in a structured form. This paper presents a component to facilitate AS information extraction from ML model XAI outcomes related to predictions and the generation of an interpretation considering the security profile.
Authored by Sharmin Jahan, Sarra Alqahtani, Rose Gamble, Masrufa Bayesh
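A hypothetical sketch of folding per-feature XAI attributions into a structured, security-profile-aligned interpretation; the profile schema, feature names, and violation threshold are our assumptions, not the paper's component:

```python
import json

security_profile = {  # abstraction: requirement -> monitored features
    "integrity": ["checksum_mismatch", "unexpected_writes"],
    "availability": ["cpu_load", "request_latency"],
}

def interpret(xai_attributions: dict[str, float]) -> dict:
    """Fold per-feature attributions into per-requirement evidence."""
    report = {}
    for requirement, feats in security_profile.items():
        evidence = {f: xai_attributions.get(f, 0.0) for f in feats}
        report[requirement] = {
            "evidence": evidence,
            "violated": any(v > 0.5 for v in evidence.values()),  # assumed
        }
    return report

attributions = {"checksum_mismatch": 0.71, "cpu_load": 0.12,
                "request_latency": 0.05, "unexpected_writes": 0.02}
print(json.dumps(interpret(attributions), indent=2))
```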
Over the past two decades, Cyber-Physical Systems (CPS) have emerged as critical components in various industries, integrating digital and physical elements to improve efficiency and automation, from smart manufacturing and autonomous vehicles to advanced healthcare devices. However, the increasing complexity of CPS and their deployment in highly dynamic contexts undermine user trust. This motivates the investigation of methods capable of generating explanations about the behavior of CPS. To this end, Explainable Artificial Intelligence (XAI) methodologies show potential. However, these approaches do not consider the contextual variables that a CPS may be subjected to (e.g., temperature, humidity), and the provided explanations are typically not actionable. In this article, we propose an Actionable Contextual Explanation System (ACES) that considers such contextual influences. Based on a user query about a behavioral attribute of a CPS (for example, vibrations and speed), ACES creates contextual explanations for the behavior of such a CPS considering its context. To generate contextual explanations, ACES uses a context model to discover sensors and actuators in the physical environment of a CPS and obtains time-series data from these devices. It then cross-correlates these time-series logs with the user-specified behavioral attribute of the CPS. Finally, ACES employs a counterfactual explanation method and takes user feedback to identify causal relationships between the contextual variables and the behavior of the CPS. We demonstrate our approach with a synthetic use case; the favorable results obtained motivate the future deployment of ACES in real-world scenarios.
Authored by Sanjiv Jha, Simon Mayer, Kimberly Garcia
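A minimal sketch of the cross-correlation step: relating a contextual sensor log (e.g., temperature) to a queried behavioral attribute (e.g., vibration) and reporting the best-aligning lag. The signals and the lag search are synthetic assumptions, not the ACES implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(500)
temperature = 20 + 5 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5, t.size)
# Toy setup: vibration follows temperature with a 10-step delay.
vibration = 0.8 * np.roll(temperature, 10) + rng.normal(0, 0.5, t.size)

def best_lag(context: np.ndarray, behavior: np.ndarray, max_lag: int = 50):
    """Return the lag maximizing normalized cross-correlation."""
    c = (context - context.mean()) / context.std()
    b = (behavior - behavior.mean()) / behavior.std()
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.mean(c[max(0, -k):len(c) - max(0, k)]
                      * b[max(0, k):len(b) - max(0, -k)]) for k in lags]
    i = int(np.argmax(np.abs(scores)))
    return lags[i], scores[i]

lag, score = best_lag(temperature, vibration)
print(f"temperature leads vibration by {lag} steps (corr={score:+.2f})")
```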
In recent years there has been a surge of interest in the interpretability and explainability of AI systems, largely motivated by the need to ensure the transparency and accountability of Artificial Intelligence (AI) operations, as well as by the need to minimize the cost and consequences of poor decisions. Another challenge that needs to be mentioned is cybersecurity attacks against AI infrastructures in manufacturing environments. This study examines eXplainable AI (XAI)-enhanced approaches against adversarial attacks for optimizing cyber defense methods in manufacturing image classification tasks. The examined XAI methods were applied to an image classification task, providing some insightful results regarding the utility of Local Interpretable Model-agnostic Explanations (LIME), Saliency maps, and Gradient-weighted Class Activation Mapping (Grad-CAM) as methods to fortify a dataset against gradient evasion attacks. To this end, we “attacked” the XAI-enhanced images and used them as input to the classifier to measure their robustness. Given the analyzed dataset, our research indicates that LIME-masked images are more robust to adversarial attacks. We additionally propose an Encoder-Decoder schema that timely predicts (decodes) the masked images, making the proposed approach suitable for a real-life problem.
Authored by Georgios Makridis, Spyros Theodoropoulos, Dimitrios Dardanis, Ioannis Makridis, Maria Separdani, Georgios Fatouros, Dimosthenis Kyriazis, Panagiotis Koulouris
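A simplified sketch of probing robustness with a gradient evasion attack (FGSM); the tiny CNN and random images stand in for the manufacturing dataset, and the LIME masking step itself is not reproduced here:

```python
import torch
import torch.nn as nn

# Stand-in classifier for 28x28 grayscale images, two classes.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 0.05):
    """One-step gradient evasion: perturb x along the loss gradient sign."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(16, 1, 28, 28)          # stand-in for (masked) images
y = torch.randint(0, 2, (16,))
x_adv = fgsm(x, y)

# Robustness proxy: how many predictions survive the attack unchanged?
with torch.no_grad():
    kept = (model(x).argmax(1) == model(x_adv).argmax(1)).float().mean()
print(f"predictions unchanged under FGSM: {kept:.0%}")
```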
In today's age of digital technology, ethical concerns regarding computing systems are increasing. While the focus of such concerns currently is on requirements for software, this article spotlights the hardware domain, specifically microchips. For example, the opaqueness of modern microchips raises security issues, as malicious actors can manipulate them, jeopardizing system integrity. As a consequence, governments invest substantially to facilitate a secure microchip supply chain. To combat the opaqueness of hardware, this article introduces the concept of Explainable Hardware (XHW). Inspired by and building on previous work on Explainable AI (XAI) and explainable software systems, we develop a framework for achieving XHW comprising relevant stakeholders, requirements they might have concerning hardware, and possible explainability approaches to meet these requirements. Through an exploratory survey among 18 hardware experts, we showcase applications of the framework and discover potential research gaps. Our work lays the foundation for future work and structured debates on XHW.
Authored by Timo Speith, Julian Speith, Steffen Becker, Yixin Zou, Asia Biega, Christof Paar
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of unmanned aerial vehicles (UAVs), hence departing from conventional perimeter defences that may expose vulnerabilities. The Zero Trust Architecture (ZTA) paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying unmanned aerial vehicles (UAVs) is 84.59%. This is achieved through a unique method that utilizes Radio Frequency (RF) signals within a Deep Learning framework. Precise identification is crucial in Zero Trust Architecture (ZTA), as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to Zero Trust Architecture (ZTA) standards guarantees that the classifications of unmanned aerial vehicles (UAVs) are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
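A hypothetical sketch of the continuous, per-transmission authentication loop that ZTA implies, with a stand-in RF-feature classifier; the features, classifier, and confidence threshold are assumptions, not the paper's deep learning pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))          # stand-in RF-signal features
y = rng.integers(0, 2, 300)             # 1 = authorized UAV
clf = RandomForestClassifier(random_state=5).fit(X, y)

def zero_trust_gate(frame_features: np.ndarray, min_conf: float = 0.9):
    """Authenticate one transmission; deny unless confidently authorized."""
    p_authorized = clf.predict_proba(frame_features.reshape(1, -1))[0, 1]
    return p_authorized >= min_conf, p_authorized

for frame in X[:5]:                     # re-check every frame, never trust
    granted, p = zero_trust_gate(frame)
    print(f"access={'GRANT' if granted else 'DENY'}  p_authorized={p:.2f}")
```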