Nowadays, anomaly-based network intrusion detection systems (NIDS) still have limited real-world application, particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose to use explainable artificial intelligence (XAI) methods to tackle these issues. In our experiments, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. We observe severe imbalances in the number of records per attack type subsumed under the attack classes of the NSL-KDD dataset, which can lead to over-generalization and overfitting in classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance improves considerably, and the new explanations better match the expectations derived from domain knowledge. We therefore conclude that the imbalances in the dataset bias classification and, consequently, also the results of XAI methods such as SHAP. Conversely, XAI methods can be employed to find and debug issues and biases in the data and the applied model. This debugging, in turn, increases the trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
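The pipeline this abstract describes (train an RF on labeled network data, then rank features by global importance) can be sketched as follows. This is a minimal illustration on synthetic data standing in for NSL-KDD, and it uses impurity-based feature importances as a lightweight proxy for SHAP global values; the feature names are hypothetical.

```python
# Sketch: random forest on synthetic flow features, then a global feature
# ranking. Impurity importances stand in for SHAP values here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: src_bytes, dst_bytes, duration, error_rate
X = rng.random((n, 4))
# The label depends only on feature 0, so it should rank first globally.
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
```

A deviation between such a ranking and what a domain expert would nominate as the decisive features is exactly the kind of mismatch the paper uses to debug the dataset.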
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remain useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and to have low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing one of two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that have higher quality not only when evaluated via statistical metrics (e.g., correlation with residuals), but also when employed to build binary classifiers that decide whether the model's output can be trusted, and under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
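The selective-loss idea can be sketched in a few lines of NumPy. This is our reading of the abstract, not the authors' code: predictive entropy is added to the loss for correctly predicted examples (pushing uncertainty down) and subtracted for mispredicted ones (pushing uncertainty up).

```python
# Sketch of an EUAT-style selective loss (illustrative, not the paper's code).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def euat_loss(logits, labels):
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    correct = p.argmax(axis=1) == labels
    h = entropy(p)
    # Minimize entropy on correct predictions, maximize it on mispredictions.
    per_example = np.where(correct, ce + h, ce - h)
    return per_example.mean()

logits = np.array([[4.0, 0.0], [0.0, 4.0]])
labels = np.array([0, 0])   # first example correct, second mispredicted
loss = euat_loss(logits, labels)
```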
In the realm of Internet of Things (IoT) devices, the trust management system (TMS) has recently been enhanced through the utilisation of diverse machine learning (ML) classifiers. The efficacy of training machine learning classifiers with pre-existing datasets for establishing trustworthiness in IoT devices is constrained by the inadequacy of selecting suitable features. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features may be combined with the best discriminatory features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection methods to mitigate the presence of irrelevant and redundant features. The evaluation of classifiers is performed utilising diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR). The performance assessment is conducted both with and without the application of feature selection methodologies. Ultimately, a comparative analysis of the machine learning models is performed, and the findings demonstrate that our model's efficacy surpasses that of the approaches utilised in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
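The two steps named above (deriving throughput/goodput/loss-style features from raw counters, then a filter-based selection pass) can be sketched as follows. The field names are illustrative, not the actual UNSW-NB15 schema, and the correlation threshold is an arbitrary example of a filter criterion.

```python
# Sketch: derived flow features plus a simple correlation-filter selection.
import numpy as np

rng = np.random.default_rng(1)
n = 200
bytes_sent = rng.integers(1_000, 100_000, n).astype(float)
duration = rng.uniform(0.1, 10.0, n)
pkts_sent = rng.integers(10, 1_000, n).astype(float)
pkts_lost = rng.integers(0, 10, n).astype(float)

throughput = bytes_sent / duration          # derived feature
packet_loss = pkts_lost / pkts_sent         # derived feature
noise = rng.random(n)                       # an irrelevant feature
y = (throughput > np.median(throughput)).astype(float)  # toy label

features = {"throughput": throughput, "packet_loss": packet_loss, "noise": noise}
# Filter-based selection: keep features correlated with the label.
selected = [name for name, col in features.items()
            if abs(np.corrcoef(col, y)[0, 1]) > 0.3]
```

A wrapper-based method would instead evaluate candidate subsets by retraining the classifier, which is costlier but accounts for feature interactions.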
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, which combines privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup where one component is in charge of the federated model generation and another one is in charge of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320.
Besides, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
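The final score the abstract reports can be understood as an aggregate over the six pillars. A minimal sketch, assuming equal pillar weights and illustrative per-pillar scores (the paper's exact weighting and metrics may differ):

```python
# Sketch: aggregating six trustworthiness pillars into one score.
# Pillar values are illustrative; equal weights are an assumption.
pillars = {
    "privacy": 0.4, "robustness": 0.5, "fairness": 0.7,
    "explainability": 0.8, "accountability": 0.6, "federation": 0.6,
}
weights = {name: 1 / len(pillars) for name in pillars}
trust_score = sum(weights[p] * s for p, s in pillars.items())
```

With these illustrative numbers the aggregate lands at 0.6, matching the kind of overall score the framework reports; low pillar values (here privacy and robustness) point to where mitigation effort should go.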
Connected, Cooperative, and Autonomous Mobility (CCAM) will take intelligent transportation to a new level of complexity. CCAM systems can be thought of as complex Systems-of-Systems (SoSs). They pose new challenges to security as consequences of vulnerabilities or attacks become much harder to assess. In this paper, we propose the use of a specific type of trust model, called a subjective trust network, to model and assess trustworthiness of data and nodes in an automotive SoS. Given the complexity of the topic, we illustrate the application of subjective trust networks on a specific example, namely Cooperative Intersection Management (CIM). To this end, we introduce the CIM use-case and show how it can be modelled as a subjective trust network. We then analyze how such trust models can be useful both for design-time and run-time analysis, and how they would allow a more precise quantitative assessment of trust in automotive SoSs. Finally, we also discuss the open research problems and practical challenges that need to be addressed before such trust models can be applied in practice.
Authored by Frank Kargl, Nataša Trkulja, Artur Hermann, Florian Sommer, Anderson de Lucena, Alexander Kiening, Sergej Japs
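Subjective trust networks build on subjective-logic opinions, triples of belief, disbelief, and uncertainty that sum to one. A minimal sketch of the standard trust-discounting operator, which is one way such a network can propagate trust, e.g. from a vehicle A through a roadside unit B to a datum X (the scenario and numbers are illustrative, not taken from the paper):

```python
# Sketch: subjective-logic opinions (belief, disbelief, uncertainty)
# and the classic trust-discounting (transitivity) operator.
def discount(a_in_b, b_in_x):
    """A's derived opinion on X, obtained via B."""
    b1, d1, u1 = a_in_b   # A's opinion about B's trustworthiness
    b2, d2, u2 = b_in_x   # B's opinion about datum X
    # Belief/disbelief in X are scaled by A's belief in B; everything
    # A does not believe about B becomes uncertainty about X.
    return (b1 * b2, b1 * d2, d1 + u1 + b1 * u2)

a_trusts_b = (0.8, 0.1, 0.1)    # A is fairly confident in B
b_believes_x = (0.9, 0.0, 0.1)  # B strongly believes datum X

b, d, u = discount(a_trusts_b, b_believes_x)
```

Chaining this operator along paths, and fusing parallel paths, is what allows the quantitative design-time and run-time trust assessments the abstract describes.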
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Authored by Fatemeh Stodt, Christoph Reich, Axel Sikora, Dominik Welte
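The core loop of such a trust management system can be sketched as follows. This is a generic illustration, not the paper's scheme: a node's score rises on good interactions, drops faster on bad ones, and access is granted only above a threshold; all constants are illustrative.

```python
# Sketch: a minimal trust-update rule with an access-decision threshold.
def update(score, good, reward=0.05, penalty=0.2):
    """Reward good behavior slightly; punish bad behavior harder."""
    score = score + reward if good else score - penalty
    return min(1.0, max(0.0, score))       # keep score in [0, 1]

score = 0.5                                 # neutral starting trust
for outcome in [True, True, False, True]:   # observed interactions
    score = update(score, outcome)

grant_access = score >= 0.5                 # access decision
```

Asymmetric reward/penalty is a common design choice in TMS work: trust should be slow to build and quick to lose. On a ledger, each update would be recorded as a transaction so the score's history is auditable.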
Due to concerns about cloud security, digital encryption is applied before outsourcing data to the cloud for utilization. This introduces the challenge of how to efficiently perform queries over ciphertexts. Crypto-based solutions currently suffer from limited operation support, high computational complexity, weak generality, and poor verifiability. An alternative method that utilizes a hardware-assisted Trusted Execution Environment (TEE), i.e., Intel SGX, has emerged to offer high computational efficiency, generality and flexibility. However, SGX-based solutions lack support for multi-user query control and suffer from security compromises caused by untrustworthy TEE function invocation, e.g., key revocation failure, incorrect query results, and sensitive information leakage. In this article, we leverage SGX and propose a secure and efficient SQL-style query framework named QShield. Notably, we propose a novel lightweight secret sharing scheme in QShield to enable multi-user query control; it effectively circumvents key revocation and avoids cumbersome remote attestation for authentication. We further embed a trust-proof mechanism into QShield to guarantee the trustworthiness of TEE function invocation; it ensures the correctness of query results and alleviates side-channel attacks. Through formal security analysis, proof-of-concept implementation and performance evaluation, we show that QShield can securely query over outsourced data with high efficiency and scalable multi-user support.
Authored by Yaxing Chen, Qinghua Zheng, Zheng Yan, Dan Liu
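To illustrate the idea of multi-user secret shares, here is classic Shamir secret sharing over a prime field. QShield's "lightweight" scheme is its own construction and is not reproduced here; this sketch only shows the general mechanism of splitting a key so that any k of n users can reconstruct it.

```python
# Sketch: Shamir (k, n) secret sharing over GF(P), shown as background
# for multi-user secret shares; not QShield's actual scheme.
P = 2**61 - 1  # a Mersenne prime

def make_shares(secret, k, n, coeffs):
    """Split `secret` into n shares; any k shares reconstruct it."""
    poly = [secret] + list(coeffs[: k - 1])   # random poly of degree k-1
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(12345, k=2, n=4, coeffs=[987654321])
secret = reconstruct(shares[:2])   # any two shares suffice
```

In a real deployment the coefficients must be drawn from a cryptographic RNG; they are fixed here only to keep the example deterministic.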
In the face of an increasing attack landscape, it is necessary to cater for efficient mechanisms to verify software and device integrity for detecting run-time modifications in next-generation systems-of-systems. In this context, remote attestation is a promising defense mechanism that allows a third party, the verifier, to ensure a remote device’s configuration integrity and behavioural execution correctness. However, most of the existing families of attestation solutions suffer from the lack of software-based mechanisms for the efficient extraction of rigid control-flow information. This limits their applicability to only those cyber-physical systems equipped with additional hardware support. This paper proposes a multi-level execution tracing framework capitalizing on recent software features, namely the extended Berkeley Packet Filter and Intel Processor Trace technologies, that can efficiently capture the entire platform configuration and control-flow stacks, thus, enabling wide attestation coverage capabilities that can be applied on both resource-constrained devices and cloud services. Our goal is to enhance run-time software integrity and trustworthiness with a scalable tracing solution eliminating the need for federated infrastructure trust.
Authored by Dimitrios Papamartzivanos, Sofia Menesidou, Panagiotis Gouvas, Thanassis Giannetsos
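The core of a measurement-based remote attestation exchange, which this framework extends with control-flow evidence, can be sketched in a few lines. This is a generic challenge-response illustration (key provisioning and transport omitted), not the paper's eBPF/Intel PT mechanism.

```python
# Sketch: nonce-based remote attestation. The verifier challenges the
# prover with a fresh nonce; the prover returns an HMAC over its software
# measurement; the verifier compares against the expected "golden" hash.
import hashlib
import hmac
import os

SHARED_KEY = b"provisioned-device-key"            # illustrative key
GOLDEN = hashlib.sha256(b"firmware-v1.2").digest()  # expected measurement

def prover_respond(nonce, firmware):
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()

def verifier_check(nonce, response):
    expected = hmac.new(SHARED_KEY, nonce + GOLDEN, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)  # freshness: prevents replay of old responses
ok = verifier_check(nonce, prover_respond(nonce, b"firmware-v1.2"))
tampered = verifier_check(nonce, prover_respond(nonce, b"firmware-evil"))
```

Static schemes like this only attest the loaded binary; the framework above adds run-time evidence (configuration and control-flow stacks) so that modifications made after load time are also detectable.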
Embedded systems that make up the Internet of Things (IoT), Supervisory Control and Data Acquisition (SCADA) networks, and Smart Grid applications are coming under increasing scrutiny in the security field. Remote Attestation (RA) is a security mechanism that allows a trusted device, the verifier, to determine the trustworthiness of an untrusted device, the prover. RA has become an area of high interest in academia and industry, and many research works on RA have been published in recent years. This paper reviews the RA research works published from 2003 to 2020. Our contributions are fourfold. First, we have re-framed the problem of RA into five smaller problems: root of trust, evidence type, evidence gathering, packaging and verification, and scalability. We have provided a holistic review of RA by discussing the relationships between these problems and the various solutions that exist in modern RA research. Second, we have presented an enhanced threat model that allows for a greater understanding of the security benefits of a given RA scheme. Third, we have proposed a taxonomy to classify and analyze RA research works and use it to categorize 58 RA schemes reported in the literature. Fourth, we have provided cost-benefit analysis details of each RA scheme surveyed such that security professionals may perform a cost-benefit analysis in the context of their own challenges. Our classification and analysis have revealed areas of future research that have not yet been rigorously addressed.
Authored by William Johnson, Sheikh Ghafoor, Stacy Prowell
With the development of cloud computing and edge computing, data sharing and collaboration have become increasingly common across cloud, edge, and end devices. With the assistance of the edge cloud, end users can access data stored in the cloud by data owners. However, in an unprotected cloud-edge-end network environment, data sharing is vulnerable to security threats from malicious users, and data confidentiality cannot be guaranteed. Most of the existing data sharing approaches use an identity authentication mechanism to resist unauthorized access by illegitimate end users, but this mechanism cannot guarantee the credibility of the end user's network environment. Therefore, this article proposes an approach for trusted sharing of data under cloud-edge-end collaboration (TSDCEE), in which we verify the trustworthiness of the data requester's network environment based on the mechanism of attribute remote attestation. Finally, this article uses the Spin model checker to formally analyze TSDCEE and verify its security properties.
Authored by Xuejian Li, Mingguang Wang
The wide adoption of IoT gadgets and Cyber-Physical Systems (CPS) makes embedded devices increasingly important. While some of these devices perform mission-critical tasks, they are usually implemented using Micro-Controller Units (MCUs) that lack security mechanisms on par with those available to general-purpose computers, making them more susceptible to remote exploits that could corrupt their software integrity. Motivated by this problem, prior work has proposed techniques to remotely assess the trustworthiness of embedded MCU software. Among them, Control Flow Attestation (CFA) enables remote detection of runtime abuses that illegally modify the program's control flow during execution (e.g., control-flow hijacking and code-reuse attacks).
Authored by Antonio Neto, Ivan Nunes
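The log-hashing idea behind CFA can be sketched compactly: each executed branch target extends a running hash, so the verifier can compare the final digest against the set of digests corresponding to legal paths through the program's control-flow graph. The addresses below are illustrative.

```python
# Sketch: control-flow attestation via a hash chain over branch targets.
import hashlib

def cf_digest(path):
    """Fold an ordered list of branch-target addresses into one digest."""
    h = hashlib.sha256()
    for addr in path:
        h.update(addr.to_bytes(8, "little"))
    return h.hexdigest()

legit_path = [0x1000, 0x1040, 0x1080]   # a path allowed by the CFG
hijacked = [0x1000, 0xdead, 0x1080]     # one edge redirected by an attacker

allowed = {cf_digest(legit_path)}       # verifier's precomputed digests
```

A single redirected edge changes the digest, which is how control-flow hijacking and code-reuse attacks become remotely detectable; the practical challenge is the path explosion when precomputing the allowed set.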
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
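One simple way to quantify the classification bias the abstract warns about is the accuracy gap between the best- and worst-served sub-population, measured before and after a defense is applied. This sketch is an illustration of that kind of metric, not necessarily the exact fairness metric the authors adapt.

```python
# Sketch: per-group accuracy and the resulting fairness gap.
def group_accuracies(y_true, y_pred, groups):
    """Accuracy computed separately for each sub-population."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        acc[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return acc

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["a", "a", "b", "a", "b", "b"]   # sub-population labels

acc = group_accuracies(y_true, y_pred, groups)
fairness_gap = max(acc.values()) - min(acc.values())
```

A defense that raises poisoning robustness while widening this gap is exactly the trade-off the evaluation highlights.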
Social networks are good platforms for like-minded people to exchange their views and thoughts. With the rapid growth of web applications, social networks have become huge networks with millions of users. On the other hand, the number of malicious activities by untrustworthy users has also increased. Users must estimate other people's trustworthiness before sharing their personal information with them. Since social networks are huge and complex, estimating a user's trust value is not a trivial task and has gained researchers' attention. Several mathematical methods have been proposed to estimate the user trust value, but they still lack efficient means to analyze user activities. In this paper, "An Efficient Trust Computation Methods Using Machine Learning in Online Social Networks- TCML" is proposed. Here, Twitter user activities are considered to estimate a user's direct trust value. The trust values of unknown users are computed through the recommendations of common friends. Since the available Twitter data set is unlabeled, unsupervised methods are used to categorize (cluster) users and to compute their trust values. In the experimental results, the silhouette score is used to assess cluster quality. The performance of the proposed method is compared with existing methods such as mole and tidal, which it outperforms.
Authored by Anitha Yarava, Shoba Bindu
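The unsupervised step the abstract describes (cluster users by activity features, then check cluster quality with the silhouette score) can be sketched as follows. The two synthetic blobs stand in for real Twitter activity data, and the feature names are illustrative.

```python
# Sketch: clustering user-activity features and scoring cluster quality.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
# Two synthetic user populations, e.g. (tweets/day, replies/day).
active = rng.normal([10.0, 10.0], 0.5, (50, 2))
passive = rng.normal([1.0, 1.0], 0.5, (50, 2))
X = np.vstack([active, passive])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
score = silhouette_score(X, labels)   # close to 1.0 = well-separated clusters
```

A high silhouette score indicates the activity-based clusters are well separated, which is the evidence the paper uses before assigning trust values per cluster.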
Nowadays, Recommender Systems (RSs) have become the indispensable solution to the problem of information overload in many different fields (e-commerce, e-tourism, ...) because they offer their customers more adapted and increasingly personalized services. In this context, collaborative filtering (CF) techniques are used by many RSs since they make it easier to provide recommendations of acceptable quality by leveraging the preferences of similar user communities. However, these techniques suffer from the sparsity of user evaluations, especially during the cold start phase. Indeed, the process of searching for similar neighbors may not succeed due to insufficient data in the user-item rating matrix (the case of a new user or new item). To solve this kind of problem, several solutions in the literature overcome the insufficiency of the data through the social relations between users. These solutions can provide good-quality recommendations even when data is sparse because they permit an estimation of the level of trust between users. This type of metric is often used in the tourism domain to support the computation of similarity measures between users, producing valuable POI (point of interest) recommendations through a better trust-based neighborhood. However, the difficulty of obtaining explicit trust data from the social relationships between tourists leads researchers to infer this data implicitly from the user-item relationships (implicit trust). In this paper, we first present a state of the art of CF techniques that can be utilized to reduce the data sparsity problem during the cold start phase of RSs. Second, we propose a method that essentially relies on user trustworthiness inferred from scores computed from users' ratings of items. Finally, we explain how these relationships, deduced from existing social links between tourists, might be employed as additional sources of information to minimize the cold start problem.
Authored by Sarah Medjroud, Nassim Dennouni, Mhamed Henni, Djelloul Bettache
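A common way to infer implicit trust from ratings, shown here as an illustration rather than the paper's exact formula, is to let trust from user u toward user v grow with the fraction of co-rated items on which their ratings agree.

```python
# Sketch: implicit trust inferred from rating agreement on co-rated items.
def implicit_trust(ratings_u, ratings_v, tol=1):
    """Fraction of commonly rated items where ratings differ by <= tol."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0   # no evidence, no trust
    agreed = sum(abs(ratings_u[i] - ratings_v[i]) <= tol for i in common)
    return agreed / len(common)

# Hypothetical POI ratings (1-5) for two tourists.
alice = {"louvre": 5, "eiffel": 4, "orsay": 2}
bob = {"louvre": 4, "eiffel": 1, "orsay": 2}

trust = implicit_trust(alice, bob)
```

Such scores can then replace or augment similarity measures in the neighborhood search, which is what makes trust-based CF usable in the sparse cold-start regime.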
We are adopting blockchain-based security features for use in web service applications and platforms. These technology concepts allow us to enhance the level of trustworthiness of any kind of public web service platform. Related platforms use simple user registration and validation procedures, which leave huge potential for illegal activities. In contrast, more secure live video identity checks bind massive resources for individual, staff-intensive validation tasks. Our approach combines traditional web-based service platform features with blockchain-based security enhancements. The concepts are used on two layers: for the user identification procedures as well as for the entire process history on the web service platform.
Authored by Robert Manthey, Richard Vogel, Falk Schmidsberger, Matthias Baumgart, Christian Roschke, Marc Ritter, Matthias Vodel
The continuously growing importance of today's technology paradigms such as the Internet of Things (IoT) and the new 5G/6G standards opens up unique features and opportunities for smart systems and communication devices. Famous examples are edge computing and network slicing. Generational technology upgrades provide unprecedented data rates and processing power. At the same time, these new platforms must address the growing security and privacy requirements of future smart systems. This poses two main challenges concerning the digital processing hardware. First, we need to provide integrated trustworthiness covering hardware, runtime, and the operating system, where integrated means that the hardware must be the basis to support secure runtime and operating system needs under very strict latency constraints. Second, applications of smart systems cover a wide range of requirements where "one-chip-fits-all" cannot be the cost- and energy-effective way forward. Therefore, we need to be able to provide a scalable hardware solution to cover differing needs in terms of processing resource requirements. In this paper, we discuss our research on an integrated design of a secure and scalable hardware platform, including a runtime and an operating system. The architecture is built out of composable and preferably simple components that are isolated by default. This allows for the integration of third-party hardware/software without compromising the trusted computing base. The platform approach improves system security and provides a viable basis for trustworthy communication devices.
Authored by Friedrich Pauls, Sebastian Haas, Stefan Kopsell, Michael Roitzsch, Nils Asmussen, Gerhard Fettweis
Advances in the frontier of intelligence and system sciences have triggered the emergence of Autonomous AI (AAI) systems. AAI systems are cognitive intelligent systems that enable non-programmed and non-pretrained inferential intelligence for autonomous intelligence generation by machines. Basic research challenges for AAI are rooted in its transdisciplinary nature and in trustworthiness among interactions of human and machine intelligence in a coherent framework. This work presents a theory and a methodology for AAI trustworthiness and its quantitative measurement in a real-time context, based on basic research in autonomous systems and symbiotic human-robot coordination. Experimental results demonstrate the novelty of the methodology and the effectiveness of real-time applications in hybrid intelligence systems involving humans, robots, and their interactions in distributed, adaptive, and cognitive AI systems.
Authored by Yingxu Wang
The management of technical debt related to non-functional properties such as security, reliability or other trustworthiness dimensions is of paramount importance for critical systems (e.g., safety-critical systems, systems with strong privacy constraints, etc.). Unfortunately, diverse factors such as time pressure, resource limitations, organizational aspects, lack of skills, or the fast pace at which new risks appear can result in a lower level of trustworthiness than the desired or required one. In addition, there is increased interest in considering trustworthiness characteristics not in isolation but in an aggregated fashion, as well as in using this knowledge for risk quantification. In this work, we propose a trustworthiness debt measurement approach using 1) established categories and subcategories of trustworthiness characteristics from SQuaRE, 2) a weighting approach for the characteristics based on the AHP method, 3) a composed indicator based on a fuzzy method, and 4) risk management and analysis support based on Monte Carlo simulations. Given the preliminary nature of this work, while we propose the general approach for all trustworthiness dimensions, we elaborate more on security and reliability. This initial proposal aims to provide a practical approach to managing trustworthiness debt that is suitable for any life cycle phase, and to bring attention to aggregation methods.
Authored by Imanol Urretavizcaya, Nuria Quintano, Jabier Martinez
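Two of the building blocks named above can be sketched together: AHP weights derived from a pairwise-comparison matrix via power iteration, followed by a Monte Carlo pass that turns uncertain characteristic scores into a risk-style distribution. The 2x2 comparison (security judged three times as important as reliability) and the score ranges are illustrative, not the paper's data.

```python
# Sketch: AHP weighting (principal eigenvector of a pairwise-comparison
# matrix) plus a Monte Carlo estimate of the trustworthiness shortfall.
import numpy as np

# Pairwise comparisons: security rated 3x as important as reliability.
A = np.array([[1.0, 3.0],
              [1/3.0, 1.0]])

w = np.ones(2) / 2
for _ in range(50):        # power iteration -> principal eigenvector
    w = A @ w
    w /= w.sum()           # normalize so weights sum to 1

rng = np.random.default_rng(3)
# Uncertain characteristic scores: security in [0.6, 0.9], reliability
# in [0.4, 0.8] (illustrative ranges).
samples = rng.uniform([0.6, 0.4], [0.9, 0.8], size=(10_000, 2))
debt = 1.0 - samples @ w   # trustworthiness shortfall per draw
risk_p95 = np.quantile(debt, 0.95)
```

For a consistent 2x2 matrix like this, the weights converge to 0.75/0.25; the p95 of the debt distribution is the kind of tail figure a risk analysis would report.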
The computation of data trustworthiness during double-sided two-way ranging with ultra-wideband signals between IoT devices is proposed. It relies on machine-learning-based ranging error correction, in which the certainty of the correction value is used to quantify trustworthiness. In particular, the trustworthiness score and error correction value are calculated from channel impulse response measurements, using either a modified k-nearest neighbor (KNN) or a modified random forest (RF) algorithm. The proposed scheme is easily implemented using commercial ultra-wideband transceivers, and it enables real-time surveillance of malicious or unintended modification of the propagation channel. The results on experimental data show an improvement of 47% in RMSE on the test set when only trustworthy measurements are considered.
Authored by Philipp Peterseil, Bernhard Etzlinger, David Marzinger, Roya Khanzadeh, Andreas Springer
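The KNN variant can be sketched as follows, under our reading of the abstract rather than the authors' exact algorithm: the correction is the mean ranging error of the k nearest channel-impulse-response neighbors, and the spread of those neighbors' errors serves as an (un)certainty proxy from which a trust score is derived. The CIR vectors and error values are toy data.

```python
# Sketch: KNN ranging-error correction with a neighbor-agreement trust score.
import numpy as np

def knn_correct(cir, train_cirs, train_errors, k=3):
    """Correction = mean error of k nearest CIRs; trust = neighbor agreement."""
    dists = np.linalg.norm(train_cirs - cir, axis=1)
    nn = np.argsort(dists)[:k]
    correction = train_errors[nn].mean()
    trust = 1.0 / (1.0 + train_errors[nn].std())  # tight spread -> high trust
    return correction, trust

# Toy training set: 2-D CIR feature vectors with known ranging errors (m).
train_cirs = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
train_errors = np.array([0.30, 0.32, 0.28, 2.00])

correction, trust = knn_correct(np.array([0.05, 0.05]), train_cirs, train_errors)
```

A manipulated propagation channel would place the measured CIR far from all training neighbors or among neighbors with wildly differing errors, driving the trust score down, which is what enables the real-time surveillance mentioned above.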
Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it will be used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation to ensure the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and ensure the reliability of AI systems and to ultimately increase the acceptability of AI technology in human society.
Authored by Yejin Shin, KyoungWoo Cho, Joon Kwak, JaeYoung Hwang