In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important steps towards a trustworthy and secure vision for future 6G networks.
Authored by Bartlomiej Siniarski, Chamara Sandeepa, Shen Wang, Madhusanka Liyanage, Cem Ayyildiz, Veli Yildirim, Hakan Alakoca, Fatma Kesik, Betül Paltun, Giovanni Perin, Michele Rossi, Stefano Tomasin, Arsenia Chorti, Pietro Giardina, Alberto Pérez, José Valero, Tommy Svensson, Nikolaos Pappas, Marios Kountouris
With UAVs on the rise, accurate detection and identification are crucial. Traditional unmanned aerial vehicle (UAV) identification systems involve opaque decision-making, restricting their usability. This research introduces an RF-based Deep Learning (DL) framework for drone recognition and identification. We use cutting-edge eXplainable Artificial Intelligence (XAI) tools: SHapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Our deep learning model uses these methods for accurate, transparent, and interpretable airspace security. With 84.59% accuracy, our deep-learning algorithms distinguish drone signals from RF noise. Most crucially, SHAP and LIME improve UAV detection: detailed explanations reveal the model's identification decision-making process. This transparency and interpretability set our system apart. The accurate, transparent, and user-trustworthy model improves airspace security.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
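As a rough illustration of the model-agnostic explanation step described above, the sketch below trains a toy classifier on synthetic stand-in features and attributes its predictions with SHAP's KernelExplainer; the data, model, and feature count are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: explaining an RF-signal drone classifier with SHAP.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder features standing in for RF signal statistics (not the paper's data).
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy "drone vs. noise" label

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# Model-agnostic SHAP: attribute each prediction to the input features.
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5], nsamples=100)
print(shap_values)  # per-feature contributions for the first 5 signals
```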
The rising use of Artificial Intelligence (AI) in human detection on Edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method that uses XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office Edge network, we identified the training dataset as the main source of bias and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
Despite the tremendous impact and potential of Artificial Intelligence (AI) for civilian and military applications, it has reached an impasse: learning and reasoning work well for certain applications, but AI generally suffers from a number of challenges such as hidden biases and a lack of causal reasoning. In contrast, “symbolic” AI, while not as efficient as “sub-symbolic” AI, offers transparency, explainability, verifiability, and trustworthiness. To address these limitations, neuro-symbolic AI has emerged as a new AI field that combines the efficiency of “sub-symbolic” AI with the assurance and transparency of “symbolic” AI. Furthermore, AI that suffers from the aforementioned challenges will remain inadequate for operating independently in contested, unpredictable, and complex multi-domain battlefield (MDB) environments for the foreseeable future, and AI-enabled autonomous systems will require a human in the loop to complete missions in such contested environments. Moreover, to successfully integrate AI-enabled autonomous systems into military operations, military operators need assurance that these systems will perform as expected and in a safe manner. Most importantly, Human-Autonomy Teaming (HAT) for shared learning, shared understanding, and joint reasoning is crucial to assist operations across military domains (space, air, land, maritime, and cyber) at combat speed with high assurance and trust. In this paper, we present a rough guide to key research challenges and perspectives of neuro-symbolic AI for assured and trustworthy HAT.
Authored by Danda Rawat
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems. Our solution is robust against black-box attacks and well-established generative adversarial network (GAN) based image restoration. We analyze a potential risk in previous work, where the proposed cosine similarity computation might directly leak the protected precomputed embedding stored on the server side. We propose a Function Secret Sharing (FSS) based face embedding comparison protocol without any intermediate result leakage. In addition, we show in experiments that the proposed protocol is more efficient than the Secret Sharing (SS) based protocol.
Authored by Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
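For context on the embedding-comparison problem, here is a minimal two-party sketch of the Secret Sharing (SS) baseline the paper improves on, using a trusted dealer and a Beaver triple to compute an inner product on additive shares; the FSS protocol itself is more involved and is not reproduced here, and all names and dimensions are illustrative.

```python
# Sketch of an SS-based inner-product comparison with a trusted dealer.
import numpy as np

rng = np.random.default_rng(1)
dim = 128

def additive_share(v):
    """Split a value into two additive shares: v = s0 + s1."""
    s0 = rng.normal(size=np.shape(v))
    return s0, v - s0

a = rng.normal(size=dim)  # client's query face embedding
b = rng.normal(size=dim)  # server's protected stored embedding

# Dealer: random (u, v, w = u.v) Beaver triple, shared between the parties.
u, v = rng.normal(size=dim), rng.normal(size=dim)
u0, u1 = additive_share(u); v0, v1 = additive_share(v)
w0, w1 = additive_share(u @ v)
a0, a1 = additive_share(a); b0, b1 = additive_share(b)

# Parties open only the masked differences d = a - u and e = b - v.
d = (a0 - u0) + (a1 - u1)
e = (b0 - v0) + (b1 - v1)

# Local shares of a.b; only party 0 adds the public d.e term.
z0 = d @ v0 + u0 @ e + w0 + d @ e
z1 = d @ v1 + u1 @ e + w1

assert np.isclose(z0 + z1, a @ b)  # reconstructs the inner product only
```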
Modern network defense can benefit from the use of autonomous systems, offloading tedious and time-consuming work to agents with standard and learning-enabled components. These agents, operating on critical network infrastructure, need to be robust and trustworthy to ensure defense against adaptive cyber-attackers and, simultaneously, provide explanations for their actions and network activity. However, learning-enabled components typically use models, such as deep neural networks, that are not transparent in their high-level decision-making, leading to assurance challenges. Additionally, cyber-defense agents must execute complex long-term defense tasks in a reactive manner that involves coordination of multiple interdependent subtasks. Behavior trees are known to be successful in modelling interpretable, reactive, and modular agent policies with learning-enabled components. In this paper, we develop an approach to design autonomous cyber defense agents using behavior trees with learning-enabled components, which we refer to as Evolving Behavior Trees (EBTs). We learn the structure of an EBT with a novel abstract cyber environment and optimize learning-enabled components for deployment. The learning-enabled components are optimized for adapting to various cyber-attacks and deploying security mechanisms. The learned EBT structure is evaluated in a simulated cyber environment, where it effectively mitigates threats and enhances network visibility. For deployment, we develop a software architecture for evaluating EBT-based agents in computer network defense scenarios. Our results demonstrate that the EBT-based agent is robust to adaptive cyber-attacks and provides high-level explanations for interpreting its decisions and actions.
Authored by Nicholas Potteiger, Ankita Samaddar, Hunter Bergstrom, Xenofon Koutsoukos
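To make the behavior-tree abstraction concrete, the following is a minimal sketch of sequence/fallback composition with a toy cyber-defense policy; the node semantics follow standard behavior-tree conventions, not the paper's EBT implementation.

```python
# Minimal behavior-tree sketch with a toy defense policy (illustrative only).
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Ticks children in order; fails on the first child that fails."""
    def __init__(self, children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds on the first child that succeeds."""
    def __init__(self, children): self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

class Leaf:
    def __init__(self, fn): self.fn = fn
    def tick(self, state): return self.fn(state)

# Toy policy: if an intrusion is detected, isolate the host; else keep scanning.
detect  = Leaf(lambda s: SUCCESS if s.get("alert") else FAILURE)
isolate = Leaf(lambda s: (s.update(isolated=True) or SUCCESS))
scan    = Leaf(lambda s: (s.update(scanned=True) or SUCCESS))

tree = Fallback([Sequence([detect, isolate]), scan])
state = {"alert": True}
tree.tick(state)
print(state)  # {'alert': True, 'isolated': True}
```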
The growing deployment of IoT devices has led to unprecedented interconnection and information sharing, but it has also presented novel security difficulties. This research proposes a strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The purpose of this research is to improve the present level of security in IoT ecosystems while ensuring the effectiveness of identifying and mitigating possible threats. The frequency of cyber attacks grows in proportion to the number of people who rely on and utilize the internet. Data sent via a network is vulnerable to interception by both internal and external parties, whether the attack is launched by a human or an automated system; the intensity and effectiveness of these attacks are continuously rising, and avoiding or foiling such attackers has become more difficult. Individuals and businesses with extensive domain expertise occasionally offer IDS solutions that are adaptive, unique, and trustworthy. This research focuses on IDS and cryptography. Building on the scholarly literature on IDS, we investigate several machine learning and deep learning techniques, and we apply cryptographic techniques to further strengthen security standards. Prior research did not consider accuracy and performance problems, and further protection is necessary; deep learning can therefore be made even more effective and accurate in the future.
Authored by Mohammed Mahdi
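As a generic illustration of the ML-based IDS pipeline the abstract describes, the snippet below trains a classifier on synthetic flow features; a real system would use a labeled IoT traffic dataset rather than these stand-in values.

```python
# Hedged sketch of an ML-based IDS: train and evaluate a traffic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10))            # stand-in network flow features
y = (X[:, 1] - X[:, 4] > 0.5).astype(int)  # toy benign (0) / attack (1) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```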
Web 3.0 is known for a decentralized and secure architecture made possible by blockchain technology. By offering a secure and trustworthy platform for transactions and data storage, this new paradigm shift in the digital world promises to transform the way we interact with the internet. Data is the new oil, so protecting it is equally crucial. Blockchain technology is the foundation of the Web 3.0 ecosystem, providing a secure and open method of managing user data. With the launch of Web 3.0, demand for seamless communication across numerous platforms and technologies has increased, and blockchain offers a common framework that makes it possible for various systems to communicate with one another. The decentralized nature of blockchain technology almost precludes hacker access to the system, ushering in a highly secure Web 3.0. By preserving the integrity and validity of data and transactions, blockchain helps to build trust in online transactions. AI can be integrated with blockchain to enhance its capabilities and improve the overall user experience. By merging blockchain and AI, we can build a safe and intelligent web that empowers users, gives them more privacy, and gives them more control over their online data. In this article, we emphasize the value of blockchain and AI technologies in achieving Web 3.0's full potential for a secure internet and propose a blockchain- and AI-empowered framework. The future of technology is now driven by the power of blockchain, AI, and Web 3.0, providing a secure and efficient way to manage digital assets and data.
Authored by Akshay Suryavanshi, Apoorva G, Mohan N, Rishika M, Abdul N
With the popularization of AIoT applications, every endpoint device faces information security risks, so ensuring device security becomes essential. Chip security is divided into software security and hardware security, both of which are indispensable and complement each other. Hardware security underpins the entire cybersecurity ecosystem by providing essential primitives, including key provisioning, hardware cryptographic engines, a hardware unique key (HUK), and a unique identification (UID). This establishes a Hardware Root of Trust (HRoT) with secure storage, secure operation, and a secure environment to provide a trustworthy foundation for chip security. Today's talk starts with how to use a Physical Unclonable Function (PUF) to generate a unique “fingerprint” (static random number) for the chip. Next, we address using a static random number and dynamic entropy to design a high-performance true random number generator and achieve a real anti-tampering HRoT by leveraging static and dynamic entropy. By integrating NIST-standard cryptographic engines, we have created an authentic PUF-based Hardware Root of Trust. The all-in-one integrated solution can handle all the necessary security functions throughout the product life cycle while maintaining a secure boundary to preserve the integrity of sensitive information and assets. Finally, as hardware-level protection extends to operating systems and applications, products and services become secure.
Authored by Meng-Yi Wu
Blockchain, as an emerging distributed database, effectively addresses the centralized storage of IoT data, where storage capacity cannot match the explosive growth in devices and data scale, as well as the data privacy and security contradictions arising from centralized data management. To alleviate the pressure of single-point storage and ensure data security, a blockchain data storage method based on erasure codes is proposed. This method constructs mathematical functions that describe the data in order to split the original block data into multiple fragments and add redundant slices. These fragments are then encoded and stored in different locations using a circular hash space with added virtual nodes, which balances the load among nodes and reduces situations where a single node stores too many encoded data blocks, effectively enhancing the storage space utilization of the distributed storage database. The blockchain stores encoded-data digest information such as storage location, creation time, and hashes, allowing the origin of encoded data blocks to be traced. In case of accidental loss or malicious tampering, this enables effective recovery and ensures the integrity and availability of data in the network. Experimental results indicate that, compared to traditional blockchain approaches, this method effectively reduces the storage pressure on nodes and exhibits a degree of disaster recovery capability.
Authored by Fanyao Meng, Jin Li, Jiaqi Gao, Junjie Liu, Junpeng Ru, Yueming Lu
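The fragment-placement idea (virtual nodes on a circular hash space) can be sketched as follows; the node names, virtual-node count, and fragment split are illustrative assumptions rather than the paper's parameters.

```python
# Illustrative consistent-hash ring with virtual nodes for placing fragments.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node owns `vnodes` positions on the ring for balance.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, fragment_id: str) -> str:
        # First virtual node clockwise from the fragment's hash.
        i = bisect.bisect(self.keys, h(fragment_id)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-A", "node-B", "node-C"])
# e.g. a block erasure-coded into 4 data + 2 redundant fragments:
for frag in [f"block42-frag{i}" for i in range(6)]:
    print(frag, "->", ring.node_for(frag))
```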
An IC used in a safety-critical application such as automotive often requires a lifetime of more than 10 years. Previously, stress tests have been used to establish an accelerated aging model for an IC product under harsh operating conditions; the accelerated aging model is then time-stretched to predict the IC's normal lifetime. However, such long-stretching prediction may not be very trustworthy. In this work, we present a more refined method that provides higher credibility in IC lifetime prediction. We streamline a progressive lifetime prediction method with two phases: a training phase and an inference phase. During the training phase, we collect the aging histories of training devices under various stress levels. During the inference phase, extrapolation is performed on the “stressed lifetime” versus “stress level” space, leading to a more trustworthy prediction of the lifetime.
Authored by Chen-Lin Tsai, Shi-Yu Huang
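A toy version of the inference-phase extrapolation might look like the following, fitting log-lifetime against stress level and extrapolating to a nominal stress; the data points and the linear-in-log model are assumptions, not the authors' model.

```python
# Toy extrapolation on the "stressed lifetime" vs. "stress level" space.
import numpy as np

stress = np.array([1.4, 1.6, 1.8, 2.0])    # normalized stress levels (assumed)
lifetime = np.array([8.0, 4.5, 2.6, 1.5])  # observed stressed lifetimes, years

# Fit log(lifetime) as a linear function of stress, extrapolate to nominal.
slope, intercept = np.polyfit(stress, np.log(lifetime), 1)
nominal_stress = 1.0
predicted = np.exp(slope * nominal_stress + intercept)
print(f"predicted nominal lifetime: {predicted:.1f} years")
```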
Physical fitness is a top priority for people these days, as everyone wants to be healthy. A number of wearable devices are available that help humans monitor their vital body signs and obtain an average idea of their health. Advancements in the efficiency of healthcare systems have fueled the research and development of high-performance wearable devices. There is significant potential for portable healthcare systems to lower healthcare costs and provide continuous health monitoring of critical patients from remote locations. The most pressing need in this field is developing a safe, effective, and trustworthy medical device that can reliably monitor vital signs from various human organs or the environment within or outside the body through flexible sensors, while still allowing the patient to go about their normal day while wearing a wearable or implanted medical device. This article highlights the current landscape of wearable devices and sensors for healthcare applications. Specifically, it focuses on some widely used, commercially available wearable devices for continuously gauging patients' vital parameters and discusses the major factors influencing the surge in demand for medical devices. Furthermore, this paper addresses the challenges and countermeasures of wearable devices in smart healthcare technology.
Authored by Kavery Verma, Preity Preity, Rakesh Ranjan
A fingerprint architecture based on a micro-electro-mechanical system (MEMS) for use as a hardware security component is presented. The MEMS serves as a physically unclonable function (PUF) and is used for fingerprint ID generation, derived from MEMS-specific parameters. The fingerprint is intended to make electronic components uniquely identifiable and thus to ensure protection against unauthorized replacement or manipulation. The MEMS chip consists of 16 individual varactors with continuously adjustable capacitance values that are used for bit derivation (an “analog” PUF). The focus is on the design-related forcing of random technological spread to provide a wide range of different parameters per chip or wafer and thereby achieve maximum key length. Key generation and verification are carried out via fingerprint electronics connected to the MEMS, realized on an FPGA.
Authored by Katja Meinel, Christian Schott, Franziska Mayer, Dhruv Gupta, Sebastian Mittag, Susann Hahn, Sebastian Weidlich, Daniel Bülz, Roman Forke, Karla Hiller, Ulrich Heinkel, Harald Kuhn
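One simple way to derive bits from 16 analog capacitance values, shown purely as a hypothetical illustration, is to threshold each value against the chip median; the actual MEMS design likely uses a different, helper-data-assisted derivation.

```python
# Hypothetical PUF bit derivation from 16 varactor capacitances.
import numpy as np

rng = np.random.default_rng(3)
# Simulated per-chip process spread around a nominal capacitance (pF).
capacitances = rng.normal(loc=1.0, scale=0.05, size=16)

# Threshold each capacitance against the chip median to get one bit each.
median = np.median(capacitances)
bits = (capacitances > median).astype(int)
fingerprint = int("".join(map(str, bits)), 2)
print(f"chip fingerprint: {fingerprint:04x}")
```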
In the realm of Internet of Things (IoT) devices, the trust management system (TMS) has recently been enhanced through the utilisation of diverse machine learning (ML) classifiers. The efficacy of training machine learning classifiers with pre-existing datasets for establishing trustworthiness in IoT devices is constrained by the inadequacy of selecting suitable features. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features may be combined with the best discriminatory features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection methods to mitigate the presence of irrelevant and redundant features. The evaluation of classifiers is performed utilising diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR). The performance assessment is conducted both with and without the application of feature selection methodologies. Ultimately, a comparative analysis of the machine learning models is performed, and the findings demonstrate that our model's efficacy surpasses that of the approaches utilised in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
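The filter- and wrapper-based selection steps can be sketched with standard scikit-learn tools, as below; the placeholder data stands in for the transformed UNSW-NB15 features.

```python
# Sketch of filter-based vs. wrapper-based feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, RFE, mutual_info_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 12))  # e.g. throughput, goodput, packet loss, ...
y = (X[:, 2] + X[:, 7] > 0).astype(int)  # toy trustworthy/untrustworthy label

# Filter method: rank features by mutual information with the label.
filt = SelectKBest(mutual_info_classif, k=5).fit(X, y)
# Wrapper method: recursive feature elimination around a classifier.
wrap = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)

print("filter picks: ", np.flatnonzero(filt.get_support()))
print("wrapper picks:", np.flatnonzero(wrap.get_support()))
```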
Memristive crossbar-based architecture provides an energy-efficient platform to accelerate neural networks (NNs) thanks to its Processing-in-Memory (PIM) nature. However, the device-to-device variation (DDV), which is typically modeled as a Lognormal distribution, deviates the programmed weights from their target values, resulting in significant performance degradation. This paper proposes a new Bayesian Neural Network (BNN) approach to enhance the robustness of weights against DDV. Instead of using the widely-used Gaussian variational posterior in conventional BNNs, our approach adopts a DDV-specific variational posterior distribution, i.e., the Lognormal distribution. Accordingly, in the new BNN approach, the prior distribution is modified to keep consistent with the posterior distribution to avoid expensive Monte Carlo simulations. Furthermore, the mean of the prior distribution is dynamically adjusted in accordance with the mean of the Lognormal variational posterior distribution for better convergence and accuracy. Compared with the state-of-the-art approaches, experimental results show that the proposed new BNN approach can significantly boost the inference accuracy with the consideration of DDV on several well-known datasets and modern NN architectures. For example, the inference accuracy can be improved from 18% to 74% in the scenario of ResNet-18 on CIFAR-10 even under large variations.
Authored by Yang Xiao, Qi Xu, Bo Yuan
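The core of the proposed posterior choice, sampling weights from a Lognormal rather than a Gaussian variational distribution, reduces to a reparameterization like the following single-weight sketch; the variational parameters are placeholders and no training loop is shown.

```python
# Minimal reparameterization sketch of a Lognormal variational posterior.
import numpy as np

rng = np.random.default_rng(5)
mu, log_sigma = 0.0, np.log(0.1)  # variational parameters of one weight

def sample_weight(n):
    # w = exp(mu + sigma * eps), eps ~ N(0, 1)  =>  w ~ Lognormal(mu, sigma)
    eps = rng.normal(size=n)
    return np.exp(mu + np.exp(log_sigma) * eps)

w = sample_weight(10000)
# Sanity check: mean of Lognormal(mu, sigma) is exp(mu + sigma^2 / 2).
print(w.mean(), np.exp(mu + np.exp(log_sigma) ** 2 / 2))
```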
In the landscape of modern computing, fog computing has emerged as a service provisioning mechanism that addresses the dual demands of low latency and service localisation. A fog architecture consists of a network of interconnected nodes that work collectively to execute tasks and process data in a localised area, thereby reducing the delay induced by communication with the cloud. However, a key issue with fog service provisioning models is their limited localised processing capability and storage relative to the cloud, which presents inherent scalability issues. In this paper, we propose volunteer computing coupled with optimisation methods to address the issue of localised fog scalability. The use of optimisation methods ensures the optimal use of fog infrastructure, and to scale the fog network as required, we leverage the notion of volunteer computing. We propose an intelligent approach for node selection in a trustworthy fog environment to satisfy the performance and bandwidth requirements of the fog network. The problem is formulated as a multi-criteria decision-making (MCDM) problem where nodes are evaluated and ranked based on several factors, including service level agreement (SLA) parameters and reputation value.
Authored by Asma Alkhalaf, Farookh Hussain
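A bare-bones weighted-sum MCDM ranking over candidate volunteer nodes might look like this; the criteria, weights, and scores are illustrative assumptions, not the paper's evaluation model.

```python
# Simple weighted-sum MCDM ranking of candidate volunteer nodes.
import numpy as np

criteria = ["sla_compliance", "reputation", "bandwidth", "latency_inv"]
weights = np.array([0.35, 0.30, 0.20, 0.15])  # assumed importance weights

# Rows: candidate nodes; columns: normalized criterion scores in [0, 1].
scores = np.array([
    [0.9, 0.8, 0.6, 0.7],  # node-1
    [0.7, 0.9, 0.8, 0.5],  # node-2
    [0.6, 0.6, 0.9, 0.9],  # node-3
])

ranking = np.argsort(scores @ weights)[::-1]
print("node ranking (best first):", ranking + 1)
```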
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup in which one component is in charge of federated model generation and another of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
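As a hedged sketch of how the six-pillar trustworthiness score could be aggregated, the snippet below averages hard-coded pillar values; the framework computes these metrics from the FL setup, and the equal weighting here is an assumption.

```python
# Illustrative aggregation of the six trust pillars into one score.
pillars = {
    "privacy": 0.55, "robustness": 0.50, "fairness": 0.70,
    "explainability": 0.65, "accountability": 0.60, "federation": 0.60,
}
weights = {p: 1 / len(pillars) for p in pillars}  # assumed equal weighting

trust_score = sum(weights[p] * v for p, v in pillars.items())
print(f"overall trustworthiness: {trust_score:.2f}")  # ~0.6, as in the abstract
```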
The digitalization and smartization of modern digital systems include the implementation and integration of emerging innovative technologies, such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter the potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automation of the security risk assessment of Artificial Intelligence components together with the rest of the system components. Supporting the methodology, a solution for automating Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated when assessing and treating risks on Artificial Intelligence-based cybersecurity solutions integrated in modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-Defined Networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo
Device recognition is the primary step toward a secure IoT system. However, existing device recognition technology often faces the problems of indistinct data characteristics and insufficient training samples, resulting in a low recognition rate. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first collect the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors and build an IoT icon library, realizing accurate identification of device types through image retrieval. The experimental results show that the accuracy of sampled retrieval within the icon library reaches 98.5%, and the recognition accuracy outside the library reaches 83.3%, effectively identifying IoT device types.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
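The embedding-and-retrieval pipeline can be approximated with an off-the-shelf torchvision ResNet50, as sketched below; the preprocessing, cosine-similarity measure, and library construction are assumptions about the described method, not the authors' code.

```python
# Sketch: icon embeddings with ResNet50 + nearest-neighbor retrieval.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
model.fc = torch.nn.Identity()   # drop the classifier -> 2048-d features
preprocess = weights.transforms()

@torch.no_grad()
def embed(img):
    """img: a PIL.Image of a device icon -> 2048-d feature vector."""
    return model(preprocess(img).unsqueeze(0)).squeeze(0)

def identify(query_emb, library, device_types):
    """library: (N, 2048) tensor of icon embeddings; returns the best match."""
    sims = torch.nn.functional.cosine_similarity(
        query_emb.unsqueeze(0), library, dim=1)
    return device_types[int(sims.argmax())]
```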
Trust evaluation and trust establishment play crucial roles in the management of trust within a multi-agent system. When it comes to collaboration systems, trust becomes directly linked to the specific roles performed by agents. The Role-Based Collaboration (RBC) methodology serves as a framework for assigning roles that facilitate agent collaboration. Within this context, the behavior of an agent with respect to a role is referred to as a process role. This research paper introduces a role engine that incorporates a trust establishment algorithm aimed at identifying optimal and reliable process roles. In our study, we define trust as a continuous value ranging from 0 to 1. To optimize trustworthy process roles, we have developed a consensus-based Gaussian Process Factor Graph (GPFG) tool. Our simulations and experiments validate the feasibility and efficiency of our proposed approach with autonomous robots in unsignalized intersections and narrow hallways.
Authored by Behzad Akbari, Haibin Zhu, Ya-Jun Pan
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
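Purely as an illustration of a WESA-style weighted aggregate over the three aspects the abstract names (spatial precision, focus overlap, relevance accuracy), consider the sketch below; the actual ESA/WESA formulas are defined in the paper and may differ.

```python
# Loose sketch of a weighted aggregate over three explanation-quality aspects.
def wesa(spatial_precision, focus_overlap, relevance_accuracy,
         weights=(1 / 3, 1 / 3, 1 / 3)):
    parts = (spatial_precision, focus_overlap, relevance_accuracy)
    assert all(0.0 <= p <= 1.0 for p in parts)  # scores assumed in [0, 1]
    return sum(w * p for w, p in zip(weights, parts))

# e.g. weight relevance accuracy more heavily for a medical model:
print(wesa(0.80, 0.65, 0.90, weights=(0.25, 0.25, 0.50)))
```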
This paper presents a case study of the initial phases of the interface design for an artificial intelligence-based decision-support system for clinical diagnosis. The study presents challenges and opportunities in implementing a human-centered design (HCD) approach during the early stages of the software development of a complex system. These methods are commonly adopted to ensure that systems are designed based on users' needs. For this project, they are also used to investigate users' potential trust issues and ensure the creation of a trustworthy platform. However, the project stage and the heterogeneity of the teams can pose obstacles to their implementation. The implementation of HCD methods proved effective and informed the creation of low-fidelity prototypes. The outcomes of this process can assist other designers, developers, and researchers in creating trustworthy AI solutions.
Authored by Gabriela Beltrao, Iuliia Paramonova, Sonia Sousa