Web 3.0 is characterized by a decentralized, secure architecture made possible by blockchain technology. This paradigm shift promises to transform the way we interact with the internet by offering a secure and trustworthy platform for transactions and data storage. Data is the new oil, and protecting it is equally crucial. Blockchain technology is the foundation of the Web 3.0 ecosystem, providing a secure and open method of managing user data. With the launch of Web 3.0, demand for seamless communication across numerous platforms and technologies has increased, and blockchain offers a common framework through which disparate systems can interoperate. Its decentralized nature makes unauthorized access to the system extremely difficult, ushering in a highly secure Web 3.0, and by preserving the integrity and validity of data and transactions it helps build trust in online interactions. AI can be integrated with blockchain to enhance its capabilities and improve the overall user experience. By merging blockchain and AI, we can build a safe and intelligent web that empowers users, strengthens their privacy, and gives them more control over their online data. In this article, we emphasize the value of blockchain and AI technologies in achieving Web 3.0's full potential for a secure internet and propose a blockchain- and AI-empowered framework. The future of technology is now driven by the combined power of blockchain, AI, and Web 3.0, providing a secure and efficient way to manage digital assets and data.
Authored by Akshay Suryavanshi, Apoorva G, Mohan N, Rishika M, Abdul N
With the popularization of AIoT applications, every endpoint device faces information security risks, so ensuring the security of the device becomes essential. Chip security is divided into software security and hardware security, both of which are indispensable and complement each other. Hardware security underpins the entire cybersecurity ecosystem by providing essential primitives, including key provisioning, hardware cryptographic engines, a hardware unique key (HUK), and a unique identification (UID). This establishes a Hardware Root of Trust (HRoT) with secure storage, secure operation, and a secure environment to provide a trustworthy foundation for chip security. Today's talk starts with how to use a Physical Unclonable Function (PUF) to generate a unique "fingerprint" (static random number) for the chip. Next, we will address using the static random number and dynamic entropy to design a high-performance true random number generator and achieve a genuinely anti-tampering HRoT by leveraging static and dynamic entropy. By integrating NIST-standard cryptographic engines, we have created an authentic PUF-based Hardware Root of Trust. The all-in-one integrated solution can handle all the necessary security functions throughout the product life cycle while maintaining a secure boundary to preserve the integrity of sensitive information and assets. Finally, as hardware-level protection extends to operating systems and applications, products and services become secure.
Authored by Meng-Yi Wu
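The static-plus-dynamic entropy construction described above can be sketched in software. This is a minimal illustration, not the actual silicon design: `derive_huk` and `trng_output` are hypothetical names, `os.urandom` stands in for the on-chip dynamic entropy source, and SHA-256 stands in for the NIST-standard conditioning engine.

```python
import hashlib
import os

def derive_huk(puf_fingerprint: bytes, salt: bytes = b"huk-v1") -> bytes:
    """Derive a hardware unique key (HUK) from the static PUF fingerprint.

    The same chip (same fingerprint) always yields the same HUK.
    """
    return hashlib.sha256(salt + puf_fingerprint).digest()

def trng_output(puf_fingerprint: bytes, n_bytes: int = 32) -> bytes:
    """Condition dynamic entropy with the static PUF value to produce
    random bytes, mixing static and dynamic entropy as described above."""
    dynamic = os.urandom(32)  # stand-in for the on-chip dynamic entropy source
    return hashlib.sha256(puf_fingerprint + dynamic).digest()[:n_bytes]
```

The key property is that `derive_huk` is deterministic per chip, while `trng_output` differs on every call because fresh dynamic entropy is mixed in.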
Blockchain, as an emerging distributed database, effectively addresses two issues of centralized IoT data storage: storage capacity that cannot match the explosive growth in devices and data scale, and the data privacy and security problems arising from centralized data management. To alleviate the pressure on single-point storage and ensure data security, a blockchain data storage method based on erasure codes is proposed. This method constructs mathematical functions describing the data to split the original block data into multiple fragments and add redundant slices. These fragments are then encoded and stored in different locations using a circular hash space with virtual nodes, ensuring load balancing among nodes and reducing situations where a single node stores too many encoded data blocks, effectively enhancing the storage space utilization of the distributed storage database. The blockchain stores encoded-data digest information such as storage location, creation time, and hashes, allowing the origin of encoded data blocks to be traced. In case of accidental loss or malicious tampering, this enables effective recovery and ensures the integrity and availability of data in the network. Experimental results indicate that, compared to traditional blockchain approaches, this method effectively reduces the storage pressure on nodes and exhibits a certain degree of disaster recovery capability.
Authored by Fanyao Meng, Jin Li, Jiaqi Gao, Junjie Liu, Junpeng Ru, Yueming Lu
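The placement step described above, spreading encoded fragments over a circular hash space with virtual nodes for load balancing, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: class and method names are assumptions, and MD5 is used only as a cheap placement hash.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring with virtual nodes, used here to spread
    erasure-coded fragments evenly across storage nodes."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node appears `vnodes` times on the ring so that
        # fragments are spread evenly and no node is overloaded.
        self.ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#vn{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key: str) -> int:
        # Non-cryptographic placement hash; first 8 bytes of MD5.
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def locate(self, fragment_id: str) -> str:
        """Return the node responsible for a fragment: the first virtual
        node clockwise from the fragment's position on the ring."""
        h = self._hash(fragment_id)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]
```

With enough virtual nodes per physical node, fragments of a block land on different nodes with near-uniform load, which is the load-balancing effect the abstract describes.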
An IC used in a safety-critical application such as automotive often requires a long lifetime of more than 10 years. Previously, stress tests have been used as a means to establish an accelerated aging model for an IC product under harsh operating conditions. The accelerated aging model is then time-stretched to predict an IC's normal lifetime. However, such a long-stretching prediction may not be very trustworthy. In this work, we present a more refined method that provides higher credibility in IC lifetime prediction: a progressive lifetime prediction method with two phases, a training phase and an inference phase. During the training phase, we collect the aging histories of some training devices under various stress levels. During the inference phase, extrapolation is performed on the "stressed lifetime" versus "stress level" space, leading to a more trustworthy prediction of the lifetime.
Authored by Chen-Lin Tsai, Shi-Yu Huang
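The inference-phase extrapolation on the "stressed lifetime" versus "stress level" space can be illustrated with a simple model. The log-linear relationship below is an assumption for illustration; the paper's actual aging model and fitting procedure may differ.

```python
import math

def fit_lifetime_model(stress_levels, lifetimes):
    """Least-squares fit of log(lifetime) = a + b * stress over the
    training devices' (stress level, stressed lifetime) pairs."""
    xs = stress_levels
    ys = [math.log(t) for t in lifetimes]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

def predict_lifetime(a, b, stress):
    """Extrapolate the fitted model down to a (lower) normal stress level."""
    return math.exp(a + b * stress)
```

Training data at several stress levels pins down the curve, so the extrapolation to the normal operating condition rests on more than a single time-stretched point.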
Physical fitness is a prime priority for people these days, as everyone wants to be healthy. A number of wearable devices are available that help humans monitor their vital body signs, through which one can get a general idea of their health. Advancements in the efficiency of healthcare systems have fueled the research and development of high-performance wearable devices. There is significant potential for portable healthcare systems to lower healthcare costs and provide continuous health monitoring of critical patients from remote locations. The most pressing need in this field is developing a safe, effective, and trustworthy medical device that can reliably monitor vital signs from various human organs or the environment within or outside the body through flexible sensors, while still allowing the patient to go about their normal day while wearing a wearable or implanted medical device. This article highlights the current scenario of wearable devices and sensors for healthcare applications. Specifically, it focuses on some widely used commercially available wearable devices for continuously gauging patients' vital parameters and discusses the major factors influencing the surge in demand for medical devices. Furthermore, this paper addresses the challenges and countermeasures of wearable devices in smart healthcare technology.
Authored by Kavery Verma, Preity Preity, Rakesh Ranjan
A fingerprint architecture based on a micro electro mechanical system (MEMS) for the use as a hardware security component is presented. The MEMS serves as a physically unclonable function (PUF) and is used for fingerprint ID generation, derived from the MEMS-specific parameters. The fingerprint is intended to allow the unique identifiability of electronic components and thus to ensure protection against unauthorized replacement or manipulation. The MEMS chip consists of 16 individual varactors with continuously adjustable capacitance values that are used for bit derivation (“analog” PUF). The focus is on the design-related forcing of random technological spread to provide a wide range of different parameters per chip or wafer to achieve a maximum key length. Key generation and verification is carried out via fingerprint electronics connected to the MEMS, which is realized by an FPGA.
Authored by Katja Meinel, Christian Schott, Franziska Mayer, Dhruv Gupta, Sebastian Mittag, Susann Hahn, Sebastian Weidlich, Daniel Bülz, Roman Forke, Karla Hiller, Ulrich Heinkel, Harald Kuhn
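One common way to derive bits from analog PUF parameters is to compare each measured value against a chip-level reference. The median-comparison rule below is an illustrative stand-in for the bit-derivation scheme implemented in the FPGA electronics; the actual scheme is not described in detail in the abstract.

```python
def derive_fingerprint_bits(capacitances):
    """Derive a per-chip bit string from the varactor capacitances
    (16 values for the MEMS chip described above) by comparing each
    value to the chip median. Technological spread makes the resulting
    pattern chip-specific."""
    ordered = sorted(capacitances)
    n = len(ordered)
    if n % 2 == 0:
        median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    else:
        median = ordered[n // 2]
    return "".join("1" if c > median else "0" for c in capacitances)
```

A median threshold guarantees a balanced bit string (half ones, half zeros for distinct values), which is desirable for key material; real designs also add error correction against measurement noise.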
In the realm of Internet of Things (IoT) devices, trust management systems (TMS) have recently been enhanced through the utilisation of diverse machine learning (ML) classifiers. The efficacy of training machine learning classifiers on pre-existing datasets for establishing trustworthiness in IoT devices is constrained by the inadequacy of selecting suitable features. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features may be combined with the best discriminatory features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection methods to mitigate the presence of irrelevant and redundant features. The evaluation of classifiers is performed utilising diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR). The performance assessment is conducted both with and without the application of feature selection methodologies. Ultimately, a comparative analysis of the machine learning models is performed, and the findings demonstrate that our model's efficacy surpasses that of the approaches utilised in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
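The additional features computed from the UNSW-NB15 subset can be derived from per-flow counters. A minimal sketch, assuming hypothetical field names for the flow record; the dataset's actual column names differ.

```python
def flow_features(flow):
    """Compute throughput, goodput, and packet loss for one flow.

    Field names (duration, bytes_total, payload_bytes, pkts_sent,
    pkts_received) are illustrative stand-ins for UNSW-NB15 columns.
    """
    duration = max(flow["duration"], 1e-9)        # guard against zero-length flows
    throughput = flow["bytes_total"] / duration   # bytes/s including headers
    goodput = flow["payload_bytes"] / duration    # application-level bytes/s
    packet_loss = 1.0 - flow["pkts_received"] / max(flow["pkts_sent"], 1)
    return {"throughput": throughput,
            "goodput": goodput,
            "packet_loss": packet_loss}
```

Features like these are then appended to the original columns before the filter-based and wrapper-based selection steps prune the irrelevant ones.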
Memristive crossbar-based architecture provides an energy-efficient platform to accelerate neural networks (NNs) thanks to its Processing-in-Memory (PIM) nature. However, the device-to-device variation (DDV), which is typically modeled as Lognormal distribution, deviates the programmed weights from their target values, resulting in significant performance degradation. This paper proposes a new Bayesian Neural Network (BNN) approach to enhance the robustness of weights against DDV. Instead of using the widely-used Gaussian variational posterior in conventional BNNs, our approach adopts a DDV-specific variational posterior distribution, i.e., Lognormal distribution. Accordingly, in the new BNN approach, the prior distribution is modified to keep consistent with the posterior distribution to avoid expensive Monte Carlo simulations. Furthermore, the mean of the prior distribution is dynamically adjusted in accordance with the mean of the Lognormal variational posterior distribution for better convergence and accuracy. Compared with the state-of-the-art approaches, experimental results show that the proposed new BNN approach can significantly boost the inference accuracy with the consideration of DDV on several well-known datasets and modern NN architectures. For example, the inference accuracy can be improved from 18% to 74% in the scenario of ResNet-18 on CIFAR-10 even under large variations.
Authored by Yang Xiao, Qi Xu, Bo Yuan
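The core idea, matching the variational posterior family to the Lognormal device-to-device variation, can be sketched with a reparameterised draw. Function names are illustrative; the paper's training procedure is of course far more involved.

```python
import math
import random

def sample_lognormal_weight(mu, sigma, rng):
    """Reparameterised draw from a Lognormal variational posterior:
    w = exp(mu + sigma * eps) with eps ~ N(0, 1). Because w is the
    exponential of a Gaussian, it is always positive and has the same
    distributional form as the DDV noise below."""
    eps = rng.gauss(0.0, 1.0)
    return math.exp(mu + sigma * eps)

def apply_ddv(weight, sigma_ddv, rng):
    """Model the programmed weight deviating from its target value by
    multiplicative Lognormal device-to-device variation."""
    return weight * math.exp(rng.gauss(0.0, sigma_ddv))
```

Since the product of two Lognormal variables is Lognormal, a Lognormal posterior composes cleanly with the Lognormal DDV noise, which is what lets the approach stay robust under large variations.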
In the landscape of modern computing, fog computing has emerged as a service provisioning mechanism that addresses the dual demands of low latency and service localisation. A fog architecture consists of a network of interconnected nodes that work collectively to execute tasks and process data in a localised area, thereby reducing the delay induced by communication with the cloud. However, a key issue with fog service provisioning models is their limited localised processing capability and storage relative to the cloud, which inherently limits scalability. In this paper, we propose volunteer computing coupled with optimisation methods to address the issue of localised fog scalability. The use of optimisation methods ensures the optimal use of fog infrastructure, while the notion of volunteer computing lets us scale the fog network as requirements grow. We propose an intelligent approach for node selection in a trustworthy fog environment to satisfy the performance and bandwidth requirements of the fog network. The problem is formulated as a multi-criteria decision-making (MCDM) problem in which nodes are evaluated and ranked based on several factors, including service level agreement (SLA) parameters and reputation value.
Authored by Asma Alkhalaf, Farookh Hussain
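The MCDM ranking step can be illustrated with a simple weighted-sum model over normalised criteria. Criterion names and weights below are assumptions for illustration; the paper's actual MCDM formulation and SLA parameters may be more elaborate.

```python
def rank_nodes(nodes, weights):
    """Rank candidate volunteer nodes by a weighted sum of criterion
    scores. Each node is a dict holding normalised scores in [0, 1]
    for every criterion named in `weights` (e.g. SLA compliance,
    reputation, bandwidth); higher total score means higher rank."""
    def score(node):
        return sum(weights[c] * node[c] for c in weights)
    return sorted(nodes, key=score, reverse=True)
```

A quick usage example with two hypothetical nodes:

```python
nodes = [{"name": "a", "sla": 0.9, "reputation": 0.8, "bandwidth": 0.7},
         {"name": "b", "sla": 0.5, "reputation": 0.6, "bandwidth": 0.9}]
weights = {"sla": 0.5, "reputation": 0.3, "bandwidth": 0.2}
ranked = rank_nodes(nodes, weights)   # node "a" outranks node "b"
```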
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, which combines privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup where one component is in charge of the federated model generation and another one is in charge of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves a 0.9724 average F1-Score in the identification on a centralized setup, while the average F1-Score in the federated setup is 0.8320. 
In addition, the model achieves a 0.6 final trustworthiness score on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
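The aggregation of the six pillars into a single trustworthiness value can be sketched as a weighted average. The equal-weight default is an assumption for illustration; the framework's actual metric computation within each pillar is more detailed.

```python
PILLARS = ("privacy", "robustness", "fairness",
           "explainability", "accountability", "federation")

def trustworthiness_score(pillar_scores, pillar_weights=None):
    """Aggregate per-pillar scores in [0, 1] into one trustworthiness
    value via a weighted average over the six pillars named above."""
    if pillar_weights is None:
        pillar_weights = {p: 1.0 for p in PILLARS}  # equal weights by default
    total = sum(pillar_weights[p] for p in PILLARS)
    return sum(pillar_weights[p] * pillar_scores[p] for p in PILLARS) / total
```

Under this scheme, a 0.6 overall score with strong federation metrics implies that some pillars (here, privacy and robustness) score well below the average and pull the aggregate down.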
The digitalization and smartization of modern digital systems include the implementation and integration of emerging innovative technologies, such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter the potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automation of the security risk assessment of Artificial Intelligence components together with the rest of the system components. Supporting the methodology, a solution for automating Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated by assessing and treating risks on Artificial Intelligence-based cybersecurity solutions integrated in modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-Defined Networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo
Device recognition is the primary step toward a secure IoT system. However, existing device recognition technology often faces the problems of indistinct data characteristics and insufficient training samples, resulting in low recognition rates. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first extract the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors to build an IoT icon library, and realize accurate identification of device types through image retrieval. The experimental results show that the accuracy of sampled retrieval within the icon library reaches 98.5%, and the recognition accuracy outside the library reaches 83.3%, which can effectively identify the type of IoT devices.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
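The retrieval step, matching a query icon's ResNet50 feature vector against the icon library, can be sketched with cosine similarity. The vectors, library layout, and acceptance threshold below are illustrative assumptions; real ResNet50 embeddings are 2048-dimensional.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_device_type(query_vec, icon_library, threshold=0.8):
    """Nearest-neighbour retrieval over (device_type, feature_vector)
    pairs. Returns (type, similarity), or (None, similarity) when the
    best match falls below the acceptance threshold (the out-of-library
    case)."""
    best_type, best_sim = None, -1.0
    for device_type, vec in icon_library:
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_type, best_sim = device_type, sim
    return (best_type, best_sim) if best_sim >= threshold else (None, best_sim)
```

The threshold is what separates the in-library accuracy figure from the harder out-of-library case reported above.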
Trust evaluation and trust establishment play crucial roles in the management of trust within a multi-agent system. When it comes to collaboration systems, trust becomes directly linked to the specific roles performed by agents. The Role-Based Collaboration (RBC) methodology serves as a framework for assigning roles that facilitate agent collaboration. Within this context, the behavior of an agent with respect to a role is referred to as a process role. This research paper introduces a role engine that incorporates a trust establishment algorithm aimed at identifying optimal and reliable process roles. In our study, we define trust as a continuous value ranging from 0 to 1. To optimize trustworthy process roles, we have developed a consensus-based Gaussian Process Factor Graph (GPFG) tool. Our simulations and experiments validate the feasibility and efficiency of our proposed approach with autonomous robots in unsignalized intersections and narrow hallways.
Authored by Behzad Akbari, Haibin Zhu, Ya-Jun Pan
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
This paper presents a case study of the initial phases of the interface design for an artificial intelligence-based decision-support system for clinical diagnosis. The study presents challenges and opportunities in implementing a human-centered design (HCD) approach during the early stages of the software development of a complex system. These methods are commonly adopted to ensure that systems are designed based on users' needs. For this project, they are also used to investigate users' potential trust issues and ensure the creation of a trustworthy platform. However, the project stage and the heterogeneity of the teams can pose obstacles to their implementation. The implementation of HCD methods has proven effective and informed the creation of low-fidelity prototypes. The outcomes of this process can assist other designers, developers, and researchers in creating trustworthy AI solutions.
Authored by Gabriela Beltrao, Iuliia Paramonova, Sonia Sousa
The Assessment List for Trustworthy AI (ALTAI) was developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG) set up by the European Commission to help assess whether the AI system that is being developed, deployed, procured, or used, complies with the seven requirements of Trustworthy AI, as specified in the AI HLEG’s Ethics Guidelines for Trustworthy AI. This paper describes the self-evaluation process of the SHAPES pilot campaign and presents some individual case results applying the prototype of an interactive version of the Assessment List for Trustworthy AI. Finally, the available results of two individual cases are combined. The best results are obtained from the evaluation category ‘transparency’ and the worst from ‘technical robustness and safety’. Future work will be combining the missing self-assessment results and developing mitigation recommendations for AI-based risk reduction recommendations for new SHAPES services.
Authored by Jyri Rajamaki, Pedro Rocha, Mira Perenius, Fotios Gioulekas
Recent advances in artificial intelligence, specifically machine learning, have contributed positively to the autonomous systems industry, while also introducing social, technical, legal and ethical challenges to making such systems trustworthy. Although Trustworthy Autonomous Systems (TAS) is an established and growing research direction discussed across multiple disciplines, e.g., Artificial Intelligence, Human-Computer Interaction, Law, and Psychology, the impact of TAS on education curricula and the skills required of future TAS engineers has rarely been discussed in the literature. This study brings together the collective insights of a number of leading TAS experts to highlight the significant challenges for curriculum design, and the skills TAS will require, posed by the rapid emergence of TAS. Our analysis is of interest not only to the TAS education community but also to other researchers, as it offers ways to guide future research toward operationalising TAS education.
Authored by Mohammad Naiseh, Caitlin Bentley, Sarvapali Ramchurn
The continuously growing importance of today's technology paradigms such as the Internet of Things (IoT) and the new 5G/6G standards opens up unique features and opportunities for smart systems and communication devices. Famous examples are edge computing and network slicing. Generational technology upgrades provide unprecedented data rates and processing power. At the same time, these new platforms must address the growing security and privacy requirements of future smart systems. This poses two main challenges concerning the digital processing hardware. First, we need to provide integrated trustworthiness covering hardware, runtime, and the operating system, where integrated means that the hardware must be the basis for supporting secure runtime and operating-system needs under very strict latency constraints. Second, applications of smart systems cover a wide range of requirements where "one-chip-fits-all" cannot be the cost- and energy-effective way forward. Therefore, we need to provide a scalable hardware solution to cover differing processing resource requirements. In this paper, we discuss our research on an integrated design of a secure and scalable hardware platform including a runtime and an operating system. The architecture is built out of composable and preferably simple components that are isolated by default. This allows for the integration of third-party hardware/software without compromising the trusted computing base. The platform approach improves system security and provides a viable basis for trustworthy communication devices.
Authored by Friedrich Pauls, Sebastian Haas, Stefan Kopsell, Michael Roitzsch, Nils Asmussen, Gerhard Fettweis
The traditional process of renting a house has several issues, such as data security, immutability, low trust and high cost due to the involvement of third parties, fraudulent agreements, payment delays and ambiguous contracts. To address these challenges, a blockchain with smart contracts can be an effective solution. This paper leverages the vital features of blockchain and smart contracts to design a trustworthy and secure house rental system. The proposed system involves off-chain and on-chain transactions on a Hyperledger blockchain. The off-chain transaction includes the rental contract creation between tenant and landlord based on their mutual agreement. On-chain transactions include the deposit and rent payment, digital key generation and contract dissolution, following the agreed terms and conditions in the contract. The functional and performance analysis of the proposed system is carried out by applying different test cases. The proposed system fulfills the requirements of the house rental process with high throughput (>92 tps) and affordable latency (<0.7 seconds).
Authored by Pooja Yadav, Shubham Sharma, Ajit Muzumdar, Chirag Modi, C. Vyjayanthi
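The on-chain transaction logic (deposit, rent payment, digital key generation, and contract dissolution) can be sketched as a small state machine. This is a plain-Python illustration of the contract's control flow, not Hyperledger chaincode; the class name and the key-generation stand-in are assumptions.

```python
class RentalContract:
    """Minimal sketch of the on-chain rental logic: the contract moves
    through CREATED -> ACTIVE -> DISSOLVED as the agreed terms are met."""

    def __init__(self, tenant, landlord, deposit, rent):
        self.tenant, self.landlord = tenant, landlord
        self.deposit_due, self.rent = deposit, rent
        self.state = "CREATED"
        self.digital_key = None

    def pay_deposit(self, sender, amount):
        """Tenant pays the agreed deposit; a digital key is issued."""
        assert self.state == "CREATED" and sender == self.tenant
        assert amount == self.deposit_due, "deposit must match the contract"
        self.state = "ACTIVE"
        self.digital_key = f"key-for-{self.tenant}"  # stand-in for key generation
        return self.digital_key

    def pay_rent(self, sender, amount):
        """Tenant pays rent; funds are routed to the landlord."""
        assert self.state == "ACTIVE" and sender == self.tenant
        assert amount == self.rent, "rent must match the contract"
        return {"to": self.landlord, "amount": amount}

    def dissolve(self, sender):
        """Either party dissolves the contract; key revoked, deposit returned."""
        assert self.state == "ACTIVE"
        assert sender in (self.tenant, self.landlord)
        self.state = "DISSOLVED"
        self.digital_key = None
        return self.deposit_due  # deposit returned per the agreed terms
```

Encoding the agreed terms as assertions is what removes the third party: a payment that does not match the contract simply cannot execute.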
With the development of networked embedded technology, the requirements of embedded systems are becoming more and more complex, which increases the difficulty of requirements analysis. Requirements patterns are a means for the comprehension and analysis of the requirements problem. In this paper, we propose seven functional requirements patterns for complex embedded systems based on an analysis of the characteristics of modern embedded systems. The main feature is explicitly distinguishing the controller, the system devices (controlled by the controller) and the external entities (monitored by the controller). In addition to the requirements problem description, we also provide an observable system behavior description, I/O logic and the execution mechanism for each pattern. Finally, we apply the patterns to a solar search subsystem of aerospace satellites, and all 20 requirements can be matched against one of the patterns. This validates the usability of our patterns.
Authored by Xiaoqi Wang, Xiaohong Chen, Xiao Yang, Bo Yang
To assess the fire risk of intelligent buildings, a trustworthy classification model was developed, providing model support for the classification assessment of fire risk in intelligent buildings under smart urban firefighting construction. The model integrates Bayesian Networks (BN) with trustworthy software computing theory and methods, designing metric elements and attributes to assess fire risk along four dimensions: fire situation, building, environment and personnel. A BN is used to calculate the risk value of each fire attribute; the attribute values are then fused into a fire-risk trustworthy value using the trustworthy assessment model. On this basis, the paper constructs a trustworthy classification model for intelligent-building fire risk and classifies the fire risk into five ranks according to the trustworthy value and attribute values. Taking the Shanghai Jing'an 11.15 fire as an example case, the results show that the method provided in this paper can perform fire risk assessment and classification.
Authored by Weilin Wu, Na Wang, Yixiang Chen
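The fusion of attribute risk values into a trustworthy value, and the mapping to five ranks, can be sketched as follows. The weighted-average fusion rule and the rank thresholds are assumptions for illustration; the paper's trustworthy assessment model defines its own fusion operator.

```python
def fuse_trustworthy_value(attr_risks, weights):
    """Fuse the four dimension risk values (fire situation, building,
    environment, personnel), each in [0, 1], into one trustworthy value
    via a weighted average."""
    total = sum(weights.values())
    return sum(weights[k] * attr_risks[k] for k in weights) / total

def risk_rank(trust_value):
    """Map a trustworthy value in [0, 1] to one of five ranks
    (thresholds assumed for illustration)."""
    for upper, rank in [(0.2, "I"), (0.4, "II"), (0.6, "III"), (0.8, "IV")]:
        if trust_value < upper:
            return rank
    return "V"
```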
Fog computing moves computation from the cloud to edge devices to support IoT applications with faster response times and lower bandwidth utilization. IoT users and connected devices are at risk of security and privacy breaches because of the high volume of interactions that occur in IoT environments. These features make it very challenging to maintain and quickly share dynamic IoT data. In this approach, the cloud-fog model offers dependable computing for data sharing in a constantly changing IoT system. The extended IoT cloud first offers vertical and horizontal computing architectures and then combines IoT devices, edge, fog, and cloud into a layered infrastructure. The framework and supporting mechanisms are designed to handle trusted computing by utilising a vertical IoT cloud architecture to protect the IoT cloud once these issues have been taken into account. To protect data integrity and information flow across the different computing models in the IoT cloud, an integrated data provenance and information management method is selected. The effectiveness of the dynamic scaling mechanism is then contrasted with that of static serving instances.
Authored by Bommi Prasanthi, Dharavath Veeraswamy, Sravan Abhilash, Kesham Ganesh
This paper first describes the security and privacy challenges for Internet of Things (IoT) systems and then discusses some of the solutions that have been proposed. It also describes aspects of Trustworthy Machine Learning (TML) and then discusses how TML may be applied to handle some of these security and privacy challenges for IoT systems.
Authored by Bhavani Thuraisingham
Advances in the frontier of intelligence and system sciences have triggered the emergence of Autonomous AI (AAI) systems. AAI systems are cognitive intelligent systems that enable non-programmed and non-pretrained inferential intelligence for autonomous intelligence generation by machines. Basic research challenges for AAI are rooted in its transdisciplinary nature and in the trustworthiness of interactions between human and machine intelligence in a coherent framework. This work presents a theory and a methodology for AAI trustworthiness and its quantitative measurement in real-time contexts, based on basic research in autonomous systems and symbiotic human-robot coordination. Experimental results have demonstrated the novelty of the methodology and the effectiveness of real-time applications in hybrid intelligence systems involving humans, robots, and their interactions in distributed, adaptive, and cognitive AI systems.
Authored by Yingxu Wang
Python continues to be one of the most popular programming languages and has been used in many safety-critical fields such as medical treatment, autonomous driving systems, and data science. These fields place higher security requirements on Python ecosystems. However, existing studies on machine learning systems in Python concentrate on data security, model security and model privacy, and simply assume the underlying Python virtual machines (PVMs) are secure and trustworthy. Unfortunately, whether such an assumption really holds is still unknown.
Authored by Xinrong Lin, Baojian Hua, Qiliang Fan