The growing deployment of IoT devices has enabled unprecedented interconnection and information sharing, but it has also introduced new security challenges. This study proposes a strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The aim is to raise the level of security in IoT ecosystems while ensuring that potential threats are identified and mitigated effectively. As reliance on the internet grows, so does the frequency of cyber attacks: data sent over a network can be intercepted by internal or external parties, whether human or automated, and these attacks are becoming steadily more intense and effective, making them harder to prevent or foil. Adaptive, distinctive, and trustworthy IDS solutions therefore require deep domain expertise. This research focuses on IDS and cryptography. It reviews the scholarly literature on IDS, investigates several machine learning and deep learning techniques, and applies cryptographic techniques to further strengthen security. Prior work has given limited attention to accuracy and performance, and additional protection remains necessary, which suggests that deep learning can become even more effective and accurate in the future.
Authored by Mohammed Mahdi
Blockchain, as an emerging distributed database, addresses two problems of centralized IoT data storage: storage capacity that cannot keep pace with the explosive growth in devices and data, and the data privacy and security concerns that arise from centralized data management. To alleviate the excessive pressure on single-point storage and ensure data security, a blockchain data storage method based on erasure codes is proposed. The method constructs mathematical functions that describe the data in order to split the original block data into multiple fragments and add redundant slices. These fragments are then encoded and stored in different locations using a circular hash space with virtual nodes, which balances the load among nodes, reduces cases where a single node stores too many encoded data blocks, and effectively improves the storage space utilization of the distributed storage database. The blockchain stores digest information about the encoded data, such as storage location, creation time, and hashes, so that the origin of encoded data blocks can be traced. In case of accidental loss or malicious tampering, this enables effective recovery and ensures the integrity and availability of data in the network. Experimental results indicate that, compared to traditional blockchain approaches, this method effectively reduces the storage pressure on nodes and exhibits a certain degree of disaster recovery capability.
Authored by Fanyao Meng, Jin Li, Jiaqi Gao, Junjie Liu, Junpeng Ru, Yueming Lu
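To make the placement idea above concrete, here is a minimal Python sketch of a consistent-hash ring with virtual nodes for distributing erasure-coded fragments across storage nodes. It is an illustration of one common realization, not the paper's implementation; the node names and fragment identifiers are hypothetical.

```python
# Minimal sketch (not the authors' implementation): distributing erasure-coded
# fragments across storage nodes with a consistent-hash ring and virtual nodes.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes_per_node=100):
        # Each physical node is mapped to many virtual positions on the ring
        # so that fragments spread evenly and no single node is overloaded.
        self.ring = {}          # ring position -> physical node
        self.sorted_keys = []   # sorted ring positions
        for node in nodes:
            for v in range(vnodes_per_node):
                pos = self._hash(f"{node}#vnode{v}")
                self.ring[pos] = node
                bisect.insort(self.sorted_keys, pos)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def node_for(self, fragment_id: str) -> str:
        # Walk clockwise to the first virtual node at or after the fragment's hash.
        pos = self._hash(fragment_id)
        idx = bisect.bisect_left(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

# Hypothetical usage: place the data and redundant fragments of one block.
ring = HashRing(nodes=[f"storage-node-{i}" for i in range(8)])
placement = {f"block42-frag{j}": ring.node_for(f"block42-frag{j}") for j in range(6)}
```

Mapping each physical node to many virtual positions smooths the distribution, so no single node accumulates a disproportionate share of encoded blocks, mirroring the load-balancing goal described above.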
An IC used in a safety-critical application such as automotive often requires a lifetime of more than 10 years. Previously, stress tests have been used to establish an accelerated aging model for an IC product under harsh operating conditions, and the accelerated aging model is then time-stretched to predict the IC's normal lifetime. However, such a long-stretching prediction may not be very trustworthy. In this work, we present a more refined method that provides higher credibility in IC lifetime prediction. We streamline a progressive lifetime prediction method with two phases: a training phase and an inference phase. During the training phase, we collect the aging histories of training devices under various stress levels. During the inference phase, extrapolation is performed on the “stressed lifetime” versus “stress level” space, leading to a more trustworthy prediction of the lifetime.
Authored by Chen-Lin Tsai, Shi-Yu Huang
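The extrapolation step described above can be pictured with a small sketch: fit a simple model to the stressed-lifetime versus stress-level data gathered in the training phase, then evaluate it at the nominal stress level. The log-linear form and all numbers below are assumptions for illustration, not the authors' data or model.

```python
# Illustrative sketch only (the paper does not publish code): extrapolating the
# "stressed lifetime" vs. "stress level" relationship to a normal operating condition.
import numpy as np

# Training phase: measured lifetimes (hours) of training devices at elevated stress levels.
stress_levels = np.array([1.4, 1.6, 1.8, 2.0])      # normalized stress (e.g., voltage/temperature factor)
lifetimes_h   = np.array([5200, 2100, 900, 400])    # observed time-to-failure under each stress

# Fit log(lifetime) as a linear function of stress level.
slope, intercept = np.polyfit(stress_levels, np.log(lifetimes_h), deg=1)

# Inference phase: extrapolate to the nominal stress level of normal operation.
nominal_stress = 1.0
predicted_lifetime_h = np.exp(intercept + slope * nominal_stress)
print(f"Predicted nominal lifetime: {predicted_lifetime_h / 8760:.1f} years")
```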
Physical fitness is a top priority these days, as everyone wants to be healthy. A number of wearable devices are available that help people monitor their vital signs and get a general picture of their health. Advances in the efficiency of healthcare systems have fueled the research and development of high-performance wearable devices. Portable healthcare systems have significant potential to lower healthcare costs and provide continuous health monitoring of critical patients from remote locations. The most pressing need in this field is to develop safe, effective, and trustworthy medical devices that can reliably monitor vital signs from various human organs or from the environment inside or outside the body through flexible sensors, while still allowing the patient to go about their normal day with a wearable or implanted medical device. This article highlights the current landscape of wearable devices and sensors for healthcare applications. Specifically, it focuses on widely used, commercially available wearable devices for continuously gauging patients' vital parameters and discusses the major factors driving the surge in demand for medical devices. Furthermore, this paper addresses the challenges and countermeasures of wearable devices in smart healthcare technology.
Authored by Kavery Verma, Preity Preity, Rakesh Ranjan
A fingerprint architecture based on a micro-electro-mechanical system (MEMS) for use as a hardware security component is presented. The MEMS serves as a physically unclonable function (PUF) and is used for fingerprint ID generation derived from MEMS-specific parameters. The fingerprint is intended to make electronic components uniquely identifiable and thus to protect against unauthorized replacement or manipulation. The MEMS chip consists of 16 individual varactors with continuously adjustable capacitance values that are used for bit derivation (an “analog” PUF). The focus is on deliberately forcing random technological spread in the design to provide a wide range of different parameters per chip or wafer and thereby achieve a maximum key length. Key generation and verification are carried out via fingerprint electronics connected to the MEMS, realized with an FPGA.
Authored by Katja Meinel, Christian Schott, Franziska Mayer, Dhruv Gupta, Sebastian Mittag, Susann Hahn, Sebastian Weidlich, Daniel Bülz, Roman Forke, Karla Hiller, Ulrich Heinkel, Harald Kuhn
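As an illustration of how bits might be derived from the 16 varactor capacitances, the sketch below uses a simple pairwise-comparison scheme. The actual bit-derivation circuit, post-processing, and measurement values are not given in the abstract, so everything here is a hypothetical stand-in.

```python
# A minimal sketch, assuming a pairwise-comparison bit-derivation scheme
# (not necessarily the scheme used by the presented MEMS PUF).
import itertools

def derive_fingerprint(capacitances_pF):
    # Compare every pair of varactors; each comparison yields one bit, so
    # 16 varactors give C(16, 2) = 120 raw bits before any error correction.
    bits = []
    for a, b in itertools.combinations(range(len(capacitances_pF)), 2):
        bits.append(1 if capacitances_pF[a] > capacitances_pF[b] else 0)
    return "".join(map(str, bits))

# Hypothetical measurement of the 16 on-chip varactors (pF), dominated by process spread.
measured = [2.31, 2.27, 2.35, 2.29, 2.33, 2.30, 2.28, 2.36,
            2.32, 2.26, 2.34, 2.31, 2.29, 2.37, 2.30, 2.33]
print(derive_fingerprint(measured))
```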
In the realm of Internet of Things (IoT) devices, trust management systems (TMS) have recently been enhanced through the use of diverse machine learning (ML) classifiers. The efficacy of training ML classifiers on pre-existing datasets to establish the trustworthiness of IoT devices is constrained by inadequate feature selection. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features can be combined with the most discriminative features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection to mitigate irrelevant and redundant features. The classifiers are evaluated using diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR), both with and without feature selection. Finally, a comparative analysis of the machine learning models shows that our model's efficacy surpasses that of the approaches in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
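The overall feature-engineering and selection workflow can be sketched as follows. Column names follow the public UNSW-NB15 schema; the concrete feature definitions, selector choices, and classifier are assumptions rather than the study's exact configuration.

```python
# Sketch of the general workflow: derive flow-level features, then apply
# filter-based and wrapper-based feature selection before classification.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, RFE, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("UNSW_NB15_training-set.csv")

# Derive additional features from the raw attributes (assumed definitions).
df["throughput"] = (df["sbytes"] + df["dbytes"]) / df["dur"].replace(0, 1e-6)
df["goodput"]    = df["sbytes"] / df["dur"].replace(0, 1e-6)
df["pkt_loss"]   = df["sloss"] + df["dloss"]

X = df.select_dtypes("number").drop(columns=["label"])
y = df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Filter-based selection: rank features by mutual information with the class label.
filt = SelectKBest(mutual_info_classif, k=15).fit(X_tr, y_tr)

# Wrapper-based selection: recursively eliminate features using a classifier.
wrap = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
           n_features_to_select=15).fit(X_tr[X_tr.columns[filt.get_support()]], y_tr)

selected = X_tr.columns[filt.get_support()][wrap.support_]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr[selected], y_tr)
print(classification_report(y_te, clf.predict(X_te[selected])))
```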
Memristive crossbar-based architecture provides an energy-efficient platform to accelerate neural networks (NNs) thanks to its Processing-in-Memory (PIM) nature. However, the device-to-device variation (DDV), which is typically modeled as a Lognormal distribution, causes the programmed weights to deviate from their target values, resulting in significant performance degradation. This paper proposes a new Bayesian Neural Network (BNN) approach to enhance the robustness of weights against DDV. Instead of using the widely-used Gaussian variational posterior in conventional BNNs, our approach adopts a DDV-specific variational posterior distribution, i.e., the Lognormal distribution. Accordingly, in the new BNN approach, the prior distribution is modified to remain consistent with the posterior distribution to avoid expensive Monte Carlo simulations. Furthermore, the mean of the prior distribution is dynamically adjusted in accordance with the mean of the Lognormal variational posterior distribution for better convergence and accuracy. Compared with the state-of-the-art approaches, experimental results show that the proposed new BNN approach can significantly boost the inference accuracy with the consideration of DDV on several well-known datasets and modern NN architectures. For example, the inference accuracy can be improved from 18% to 74% in the scenario of ResNet-18 on CIFAR-10 even under large variations.
Authored by Yang Xiao, Qi Xu, Bo Yuan
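A minimal PyTorch sketch of the central idea, assuming a standard reparameterization of the Lognormal posterior; the layer below is illustrative and does not reproduce the authors' architecture, prior-adjustment schedule, or training setup.

```python
# A linear layer whose weights follow a Lognormal variational posterior,
# sampled via reparameterization (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LognormalBayesianLinear(nn.Module):
    def __init__(self, in_features, out_features, prior_sigma=0.1):
        super().__init__()
        # Variational parameters of log(w) ~ Normal(mu, sigma), i.e. w ~ Lognormal.
        self.mu = nn.Parameter(torch.full((out_features, in_features), -1.0))
        self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.prior_sigma = prior_sigma

    def forward(self, x):
        sigma = self.log_sigma.exp()
        eps = torch.randn_like(self.mu)
        # Reparameterized Lognormal sample: w = exp(mu + sigma * eps) > 0,
        # matching the positive, multiplicative device-to-device variation.
        w = torch.exp(self.mu + sigma * eps)
        return F.linear(x, w)

    def kl(self):
        # KL between two Lognormals equals the KL between the underlying Normals.
        # Tying the prior mean to the (detached) posterior mean echoes the paper's
        # idea of dynamically adjusting the prior mean; the formula below is the
        # standard Normal-Normal KL divergence.
        prior_mu = self.mu.detach()
        sigma = self.log_sigma.exp()
        return (torch.log(self.prior_sigma / sigma)
                + (sigma**2 + (self.mu - prior_mu)**2) / (2 * self.prior_sigma**2)
                - 0.5).sum()

layer = LognormalBayesianLinear(128, 10)
out = layer(torch.randn(4, 128))
loss = F.cross_entropy(out, torch.randint(0, 10, (4,))) + 1e-3 * layer.kl()
```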
In the landscape of modern computing, fog computing has emerged as a service provisioning mechanism that addresses the dual demands of low latency and service localisation. A fog architecture consists of a network of interconnected nodes that work collectively to execute tasks and process data in a localised area, thereby reducing the delay induced by communication with the cloud. However, a key issue with fog service provisioning models is their limited localised processing capability and storage relative to the cloud, which presents inherent scalability issues. In this paper, we propose volunteer computing coupled with optimisation methods to address localised fog scalability. The optimisation methods ensure the optimal use of the fog infrastructure, and volunteer computing allows the fog network to be scaled as required. We propose an intelligent approach for node selection in a trustworthy fog environment to satisfy the performance and bandwidth requirements of the fog network. The problem is formulated as a multi-criteria decision-making (MCDM) problem in which nodes are evaluated and ranked based on several factors, including service level agreement (SLA) parameters and reputation value.
Authored by Asma Alkhalaf, Farookh Hussain
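A simplified weighted-sum scoring of candidate volunteer nodes illustrates the MCDM formulation; the criteria, weights, and node data below are hypothetical, and the paper's actual ranking method may differ.

```python
# Simplified weighted-sum MCDM sketch for ranking candidate volunteer fog nodes.
import numpy as np

# Candidate nodes: [available CPU (GHz), bandwidth (Mbps), SLA uptime (%), reputation (0-1)]
nodes = {
    "node-A": [2.4, 100, 99.5, 0.91],
    "node-B": [1.8, 250, 98.0, 0.75],
    "node-C": [3.2,  80, 99.9, 0.88],
}
weights = np.array([0.25, 0.25, 0.25, 0.25])   # assumed relative importance of each criterion

matrix = np.array(list(nodes.values()), dtype=float)
# Normalize each benefit criterion to [0, 1] so criteria on different scales are comparable.
normalized = (matrix - matrix.min(axis=0)) / (np.ptp(matrix, axis=0) + 1e-12)
scores = normalized @ weights

ranking = sorted(zip(nodes, scores), key=lambda kv: kv[1], reverse=True)
print(ranking)   # nodes ordered by suitability for inclusion in the fog network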
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, device heterogeneity, and data security. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup in which one component is in charge of federated model generation and another is in charge of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves a 0.9724 average F1-Score for identification in a centralized setup, while the average F1-Score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
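The final aggregation into a single trustworthiness score might look like the sketch below, which simply averages normalized per-pillar scores; the individual pillar values and the equal weights are assumptions chosen only so that the result lands near the reported 0.6.

```python
# Hypothetical aggregation of per-pillar metrics into one trustworthiness score;
# the actual metrics and weighting used by the framework are defined in the paper.
pillar_scores = {           # each pillar already normalized to [0, 1] (assumed values)
    "privacy":        0.45,
    "robustness":     0.50,
    "fairness":       0.72,
    "explainability": 0.66,
    "accountability": 0.70,
    "federation":     0.58,
}
pillar_weights = {p: 1 / len(pillar_scores) for p in pillar_scores}  # equal weights assumed

trust_score = sum(pillar_scores[p] * pillar_weights[p] for p in pillar_scores)
print(f"FL model trustworthiness: {trust_score:.2f}")  # roughly 0.6 with these assumed values
```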
The digitalization and smartization of modern digital systems include the implementation and integration of emerging innovative technologies, such as Artificial Intelligence. By incorporating new technologies, the attack surface of the system also expands, and specialized cybersecurity mechanisms and tools are required to counter the potential new threats. This paper introduces a holistic security risk assessment methodology that aims to assist Artificial Intelligence system stakeholders in guaranteeing the correct design and implementation of technical robustness in Artificial Intelligence systems. The methodology is designed to facilitate the automation of the security risk assessment of Artificial Intelligence components together with the rest of the system components. To support the methodology, a solution for automating Artificial Intelligence risk assessment is also proposed. Both the methodology and the tool will be validated when assessing and treating risks in Artificial Intelligence-based cybersecurity solutions integrated into modern digital industrial systems that leverage emerging technologies such as the cloud continuum, including Software-Defined Networking (SDN).
Authored by Eider Iturbe, Erkuden Rios, Nerea Toledo
Device recognition is the first step toward a secure IoT system. However, existing device recognition technology often suffers from indistinct data characteristics and insufficient training samples, resulting in low recognition rates. To address this problem, a convolutional neural network-based IoT device recognition method is proposed. We first collect the background icons of various IoT devices from the Internet, then use the ResNet50 neural network to extract icon feature vectors and build an IoT icon library, and finally achieve accurate identification of device types through image retrieval. The experimental results show that the accuracy of sampled retrieval within the icon library reaches 98.5%, and the recognition accuracy outside the library reaches 83.3%, which can effectively identify the type of IoT devices.
Authored by Minghao Lu, Linghui Li, Yali Gao, Xiaoyong Li
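The retrieval pipeline can be sketched with torchvision: extract ResNet50 embeddings for the library icons and match a query icon by cosine similarity. The icon paths, library contents, and simple best-match rule below are illustrative assumptions, not the authors' code.

```python
# Minimal retrieval sketch: ResNet50 features for device icons, matched by cosine similarity.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # drop the classifier; keep the 2048-d features
backbone.eval()
preprocess = weights.transforms()

def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(backbone(img), dim=1)   # unit-length feature vector

# Hypothetical icon library: device type -> reference icon image.
library = {name: embed(f"icons/{name}.png") for name in ["camera", "router", "smart_plug"]}

def identify(query_path):
    q = embed(query_path)
    scores = {name: float(q @ vec.T) for name, vec in library.items()}
    return max(scores, key=scores.get)    # device type with the most similar icon
```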
The continuously growing importance of today’s technology paradigms, such as the Internet of Things (IoT) and the new 5G/6G standards, opens up unique features and opportunities for smart systems and communication devices. Famous examples are edge computing and network slicing. Generational technology upgrades provide unprecedented data rates and processing power. At the same time, these new platforms must address the growing security and privacy requirements of future smart systems. This poses two main challenges for the digital processing hardware. First, we need to provide integrated trustworthiness covering hardware, runtime, and the operating system, where "integrated" means that the hardware must be the basis for supporting secure runtime and operating system needs under very strict latency constraints. Second, applications of smart systems cover such a wide range of requirements that "one-chip-fits-all" cannot be the cost- and energy-effective way forward; we therefore need a scalable hardware solution to cover differing processing resource requirements. In this paper, we discuss our research on an integrated design of a secure and scalable hardware platform, including a runtime and an operating system. The architecture is built out of composable and preferably simple components that are isolated by default. This allows for the integration of third-party hardware/software without compromising the trusted computing base. The platform approach improves system security and provides a viable basis for trustworthy communication devices.
Authored by Friedrich Pauls, Sebastian Haas, Stefan Kopsell, Michael Roitzsch, Nils Asmussen, Gerhard Fettweis
With the development of networked embedded technology, the requirements of embedded systems are becoming more and more complex, which increases the difficulty of requirements analysis. Requirements patterns are a means for comprehending and analyzing the requirements problem. In this paper, we propose seven functional requirements patterns for complex embedded systems based on an analysis of the characteristics of modern embedded systems. The main feature is explicitly distinguishing the controller, the system devices (controlled by the controller), and the external entities (monitored by the controller). In addition to the requirements problem description, we also provide an observable system behavior description, the I/O logic, and the execution mechanism for each pattern. Finally, we apply the patterns to a solar search subsystem of an aerospace satellite, and all 20 requirements can be matched to one of the patterns. This validates the usability of our patterns.
Authored by Xiaoqi Wang, Xiaohong Chen, Xiao Yang, Bo Yang
Fog computing moves computation from the cloud to edge devices to support IoT applications with faster response times and lower bandwidth utilization. IoT users and connected devices are at risk of security and privacy breaches because of the high volume of interactions that occur in IoT environments. These features make it very challenging to maintain and quickly share dynamic IoT data. In this method, the cloud-fog combination offers dependable computing for data sharing in a constantly changing IoT system. The extended IoT cloud first offers vertical and horizontal computing architectures and then combines IoT devices, edge, fog, and cloud into a layered infrastructure. The framework and supporting mechanisms are designed to handle trusted computing by utilising a vertical IoT cloud architecture to protect the IoT cloud once these issues have been taken into account. To protect data integrity and information flow for different computing models in the IoT cloud, an integrated data provenance and information management method is selected. The effectiveness of the dynamic scaling mechanism is then compared with that of static serving instances.
Authored by Bommi Prasanthi, Dharavath Veeraswamy, Sravan Abhilash, Kesham Ganesh
The computation of data trustworthiness during double-sided two-way ranging with ultra-wideband signals between IoT devices is proposed. It relies on machine learning-based ranging error correction, in which the certainty of the correction value is used to quantify trustworthiness. In particular, the trustworthiness score and the error correction value are calculated from channel impulse response measurements, using either a modified k-nearest neighbor (KNN) or a modified random forest (RF) algorithm. The proposed scheme is easily implemented using commercial ultra-wideband transceivers and enables real-time surveillance of malicious or unintended modification of the propagation channel. The results on experimental data show a 47% RMSE improvement on the test set when only trustworthy measurements are considered.
Authored by Philipp Peterseil, Bernhard Etzlinger, David Marzinger, Roya Khanzadeh, Andreas Springer
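One plausible realization of the modified KNN idea is sketched below: the correction is taken from the neighbors' known ranging errors, and the trustworthiness score from their agreement. The features, data, and trust formula are assumptions for illustration only, not the paper's exact algorithm.

```python
# Sketch of a KNN-style ranging error correction with a certainty-based trust score.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# X_train: CIR-derived feature vectors, y_train: known ranging errors (meters).
# Random placeholders stand in for the experimental training data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))
y_train = rng.normal(scale=0.3, size=500)

knn = NearestNeighbors(n_neighbors=10).fit(X_train)

def correct_range(measured_range_m, cir_features):
    dist, idx = knn.kneighbors(cir_features.reshape(1, -1))
    neighbor_errors = y_train[idx[0]]
    correction = neighbor_errors.mean()
    # Trustworthiness: high when the neighbors agree (low spread), low otherwise.
    trust = 1.0 / (1.0 + neighbor_errors.std())
    return measured_range_m - correction, trust

corrected, trust = correct_range(12.87, rng.normal(size=16))
```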
Ubiquitous environments embedded with artificial intelligence consist of heterogeneous smart devices communicating with each other in several contexts to carry out the required computation. In such environments, establishing trust among smart users is the key challenge in providing a secure environment for communication in the ubiquitous region. To provide a secure, trusted environment for users of a ubiquitous system, the proposed approach extracts the behavior of smart invisible entities by retrieving their communication behavior in the network and applying recommendation-based filters using deep learning (RBF-DL). The proposed model adopts a deep learning-based classifier to separate unfair recommendations from fair ones and thereby obtain a trustworthy ubiquitous system. The capability of the proposed model is analyzed and validated by considering different attacks and additional instance features, in comparison with generic recommendation systems.
Authored by Jayashree Agarkhed, Geetha Pawar