The worldwide proliferation of IoT edge devices has enabled the collection of rich datasets as part of the Mobile Crowd Sensing (MCS) paradigm, which in practice is implemented in a variety of safety-critical applications. Despite the advantages of such datasets, they pose an inherent data trustworthiness challenge due to the interference of malevolent actors. In this context, a large body of proposed solutions capitalizes on conventional machine learning algorithms to sift through faulty data without any assumptions on the trustworthiness of the source. However, a number of open issues remain, such as how to cope with strong colluding adversaries while efficiently managing the sizable influx of user data. In this work we argue that the use of explainable artificial intelligence (XAI) can lead to even more efficient performance, as it tackles the limitations of conventional black-box models by enabling the understanding and interpretation of a model's operation. Our approach enables reasoning about the model's accuracy in the presence of adversaries and can screen out faulty or malicious data, thus enhancing the model's adaptation process. To this end, we provide a prototype implementation coupled with a detailed performance evaluation under different attack scenarios, employing both real and synthetic datasets. Our results suggest that the use of XAI leads to improved performance compared to other existing schemes.
Authored by Sam Afzal-Houshmand, Dimitrios Papamartzivanos, Sajad Homayoun, Entso Veliou, Christian Jensen, Athanasios Voulodimos, Thanassis Giannetsos
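The abstract does not specify the concrete XAI pipeline, but a minimal sketch of one plausible instantiation could look as follows: a random forest over hypothetical per-report features, with SHAP attributions used to decide which contributions to discard before adaptation (all feature names, labels, and data are illustrative assumptions, not taken from the paper).

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-report MCS features: deviation from the local consensus,
# report rate, sensor variance. Label 1 = genuine report, 0 = faulty/malicious.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] < 0.5).astype(int)          # synthetic stand-in for ground truth

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)
# Reports whose "genuine" verdict rests mostly on easily forged features (as
# revealed by the per-feature SHAP attributions) can be down-weighted or
# dropped before the model adapts to new data.
```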
In today's age of digital technology, ethical concerns regarding computing systems are increasing. While the focus of such concerns currently is on requirements for software, this article spotlights the hardware domain, specifically microchips. For example, the opaqueness of modern microchips raises security issues, as malicious actors can manipulate them, jeopardizing system integrity. As a consequence, governments invest substantially to facilitate a secure microchip supply chain. To combat the opaqueness of hardware, this article introduces the concept of Explainable Hardware (XHW). Inspired by and building on previous work on Explainable AI (XAI) and explainable software systems, we develop a framework for achieving XHW comprising relevant stakeholders, requirements they might have concerning hardware, and possible explainability approaches to meet these requirements. Through an exploratory survey among 18 hardware experts, we showcase applications of the framework and discover potential research gaps. Our work lays the foundation for future work and structured debates on XHW.
Authored by Timo Speith, Julian Speith, Steffen Becker, Yixin Zou, Asia Biega, Christof Paar
Malware poses a significant threat to global cybersecurity, with machine learning emerging as the primary method for its detection and analysis. However, the opaque nature of machine learning's decision-making process often leads to confusion among stakeholders, undermining their confidence in the detection outcomes. To enhance the trustworthiness of malware detection, Explainable Artificial Intelligence (XAI) is employed to offer transparent and comprehensible explanations of the detection mechanisms, enabling stakeholders to gain a deeper understanding of these mechanisms and assisting in the development of defensive strategies. Despite the recent XAI advancements, several challenges remain unaddressed. In this paper, we explore the specific obstacles encountered in applying XAI to malware detection and analysis, aiming to provide a road map for future research in this critical domain.
Authored by L. Rui, Olga Gadyatskaya
In the realm of agriculture, where crop health is integral to global food security, our focus is on the early detection of crop diseases. Leveraging Convolutional Neural Networks (CNNs) on a diverse dataset of crop images, our study focuses on the development, training, and optimization of these networks to achieve accurate and timely disease classification. The first segment demonstrates the efficacy of the CNN architecture and optimization strategy, showcasing the potential of deep learning models in automating the identification process. The synergy of robust disease detection and interpretability through Explainable Artificial Intelligence (XAI) presented in this work marks a significant stride toward bridging the gap between advanced technology and precision agriculture. By employing visualization, the research seeks to unravel the decision-making processes of our models. The XAI visualization method emerges as notably superior in terms of accuracy, hinting at better identification of the disease: it achieves an accuracy of 89.75%, surpassing both the heat-map model and the LIME explanation method. This not only enhances the transparency and trustworthiness of the predictions but also provides invaluable insights for end users, allowing them to comprehend the diagnostic features considered by the complex algorithm.
Authored by Priyadarshini Patil, Sneha Pamali, Shreya Devagiri, A Sushma, Jyothi Mirje
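The paper's visualization method is not detailed in the abstract; Grad-CAM is one representative technique for visualizing which image regions drive a CNN's disease prediction, sketched here in TensorFlow/Keras (the layer name and input shape are assumptions):

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    """Grad-CAM heat map: which image regions drive the predicted disease class."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        loss = preds[:, class_index]
    grads = tape.gradient(loss, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # pooled gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    return tf.nn.relu(cam)[0].numpy()  # upsample to image size before overlaying
```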
Despite the tremendous impact and potential of Artificial Intelligence (AI) for civilian and military applications, it has reached an impasse: learning and reasoning work well for certain applications, but AI generally suffers from a number of challenges such as hidden biases and causality. In contrast, “symbolic” AI (not as efficient as “sub-symbolic” AI) offers transparency, explainability, verifiability, and trustworthiness. To address these limitations, neuro-symbolic AI has emerged as a new AI field that combines the efficiency of “sub-symbolic” AI with the assurance and transparency of “symbolic” AI. Furthermore, AI (which suffers from the aforementioned challenges) will remain inadequate for operating independently in contested, unpredictable, and complex multi-domain battlefield (MDB) environments for the foreseeable future, and AI-enabled autonomous systems will require a human in the loop to complete missions in such contested environments. Moreover, in order to successfully integrate AI-enabled autonomous systems into military operations, military operators need assurance that these systems will perform as expected and in a safe manner. Most importantly, Human-Autonomy Teaming (HAT) for shared learning and understanding and joint reasoning is crucial to assist operations across military domains (space, air, land, maritime, and cyber) at combat speed with high assurance and trust. In this paper, we present a rough guide to key research challenges and perspectives of neuro-symbolic AI for assured and trustworthy HAT.
Authored by Danda Rawat
Nowadays, anomaly-based network intrusion detection systems (NIDSs) still have limited real-world application; this is particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose to use explainable artificial intelligence (XAI) methods for tackling these issues. In our experimentation, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which can bias classification toward over-generalization and overfitting. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations match the expectations based on domain knowledge better. Thus, we conclude that the imbalances in the dataset bias both the classification and, consequently, the results of XAI methods like SHAP. However, the XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, this debugging results in higher trustworthiness of anomaly-based NIDSs.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
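The two-stage setup described above maps directly onto standard tooling. A minimal sketch, assuming a preprocessed NSL-KDD CSV with both a coarse class label and a fine-grained attack-type label (column names are hypothetical):

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical column layout: encoded NSL-KDD features plus a coarse
# 'attack_class' (DoS, Probe, R2L, U2R, normal) and a fine 'attack_type' label.
df = pd.read_csv("nsl_kdd_train.csv")
X = df.drop(columns=["attack_class", "attack_type"])

# Baseline: classify the coarse attack classes.
rf_coarse = RandomForestClassifier(n_estimators=100).fit(X, df["attack_class"])

# Remedy suggested by the paper: train directly on the attack types to avoid
# the bias introduced by the imbalanced composition of the coarse classes.
rf_fine = RandomForestClassifier(n_estimators=100).fit(X, df["attack_type"])

# Global SHAP explanations, to be compared against domain expertise.
shap_values = shap.TreeExplainer(rf_fine).shap_values(X)
shap.summary_plot(shap_values, X)
```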
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural models to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and exhibit low uncertainty when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domain, considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that have higher quality not only when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted or not, and under distributional data shifts.
Authored by Pedro Mendes, Paolo Romano, David Garlan
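The selective dual-loss idea can be sketched in a few lines of PyTorch. The authors' exact loss functions are not given in the abstract; the version below uses predictive entropy as the uncertainty proxy and a hypothetical weighting term `lam`:

```python
import torch
import torch.nn.functional as F

def euat_style_loss(logits, targets, lam=0.5):
    """Error-driven uncertainty-aware objective in the spirit of EUAT
    (illustrative; not the authors' exact formulation)."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    correct = logits.argmax(dim=1).eq(targets)
    low_u = entropy[correct].mean() if correct.any() else logits.new_zeros(())
    high_u = entropy[~correct].mean() if (~correct).any() else logits.new_zeros(())
    # Minimize uncertainty on correct predictions, maximize it on mispredictions.
    return ce + lam * (low_u - high_u)
```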
In the realm of Internet of Things (IoT) devices, the trust management system (TMS) has been enhanced through the utilisation of diverse machine learning (ML) classifiers in recent times. The efficacy of training machine learning classifiers with pre-existing datasets for establishing trustworthiness in IoT devices is constrained by the inadequacy of selecting suitable features. The current study employs a subset of the UNSW-NB15 dataset to compute additional features such as throughput, goodput, and packet loss. These features may be combined with the best discriminatory features to distinguish between trustworthy and non-trustworthy IoT networks. In addition, the transformed dataset undergoes filter-based and wrapper-based feature selection methods to mitigate the presence of irrelevant and redundant features. The evaluation of classifiers is performed utilising diverse metrics, including accuracy, precision, recall, F1-score, true positive rate (TPR), and false positive rate (FPR). The performance assessment is conducted both with and without the application of feature selection methodologies. Ultimately, a comparative analysis of the machine learning models is performed, and the findings demonstrate that our model's efficacy surpasses that of the approaches utilised in the existing literature.
Authored by Muhammad Aaqib, Aftab Ali, Liming Chen, Omar Nibouche
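Filter-based and wrapper-based selection as described above can be illustrated with scikit-learn. A minimal sketch, assuming a preprocessed UNSW-NB15 subset with the derived features already computed and a binary label (file and column names are hypothetical):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif

# Assumed: derived features (throughput, goodput, packet loss) precomputed,
# plus a binary 'trustworthy' label per record.
df = pd.read_csv("unsw_nb15_subset.csv")
X, y = df.drop(columns=["trustworthy"]), df["trustworthy"]

# Filter-based selection: rank features by mutual information with the label.
X_filter = SelectKBest(mutual_info_classif, k=15).fit_transform(X, y)

# Wrapper-based selection: recursive feature elimination around a classifier.
X_wrapper = RFE(RandomForestClassifier(n_estimators=100),
                n_features_to_select=10).fit_transform(X, y)
```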
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, using Federated Learning to maintain data privacy is also a challenge for IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, which combines privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup where one component is in charge of the federated model generation and another one is in charge of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves an average F1-score of 0.9724 for identification in a centralized setup, while the average F1-score in the federated setup is 0.8320. Besides, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
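The pillar-based score reported above reduces to a weighted aggregation once each pillar has been scored from its own metrics. A minimal sketch with made-up pillar values and an equal-weighting assumption (the paper's actual weights and metric aggregation are not given in the abstract):

```python
# Hypothetical per-pillar scores in [0, 1], each aggregated from its own metrics.
pillars = {"privacy": 0.45, "robustness": 0.55, "fairness": 0.70,
           "explainability": 0.65, "accountability": 0.60, "federation": 0.65}
weights = {p: 1 / len(pillars) for p in pillars}   # equal weighting assumed

trust_score = sum(weights[p] * s for p, s in pillars.items())
print(f"FL model trustworthiness: {trust_score:.2f}")  # ~0.6, as reported above
```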
Connected, Cooperative, and Autonomous Mobility (CCAM) will take intelligent transportation to a new level of complexity. CCAM systems can be thought of as complex Systems-of-Systems (SoSs). They pose new challenges to security, as the consequences of vulnerabilities or attacks become much harder to assess. In this paper, we propose the use of a specific type of trust model, called a subjective trust network, to model and assess the trustworthiness of data and nodes in an automotive SoS. Given the complexity of the topic, we illustrate the application of subjective trust networks on a specific example, namely Cooperative Intersection Management (CIM). To this end, we introduce the CIM use case and show how it can be modelled as a subjective trust network. We then analyze how such trust models can be useful both for design-time and run-time analysis, and how they would allow a more precise quantitative assessment of trust in automotive SoSs. Finally, we also discuss the open research problems and practical challenges that need to be addressed before such trust models can be applied in practice.
Authored by Frank Kargl, Nataša Trkulja, Artur Hermann, Florian Sommer, Anderson de Lucena, Alexander Kiening, Sergej Japs
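Subjective trust networks build on subjective-logic opinions, which can be sketched concisely. The example below shows Jøsang's standard (uncertainty-favouring) trust-discounting operator; the CIM scenario and all numeric values are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion: belief, disbelief, uncertainty (b + d + u = 1)."""
    b: float
    d: float
    u: float

def discount(trust: Opinion, op: Opinion) -> Opinion:
    """A's derived opinion about statement X, given A's trust in B
    and B's opinion about X (uncertainty-favouring discounting)."""
    return Opinion(b=trust.b * op.b,
                   d=trust.b * op.d,
                   u=trust.d + trust.u + trust.b * op.u)

# Hypothetical CIM example: vehicle A trusts roadside unit B, and B asserts
# that an approaching vehicle's position report X is correct.
a_trusts_b = Opinion(b=0.7, d=0.1, u=0.2)
b_about_x = Opinion(b=0.8, d=0.1, u=0.1)
print(discount(a_trusts_b, b_about_x))   # Opinion(b=0.56, d=0.07, u=0.37)
```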
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Authored by Fatemeh Stodt, Christoph Reich, Axel Sikora, Dominik Welte
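The abstract does not specify the ledger-based evaluation logic, but the trust-then-decide loop can be illustrated with a simple exponentially weighted update over recorded interaction outcomes (the update rule, threshold, and outcomes below are all assumptions for intuition only):

```python
def update_trust(trust, outcome, alpha=0.2):
    """Exponentially weighted trust update (illustrative only).
    outcome: 1.0 for a benign interaction recorded on-chain,
    0.0 for a flagged/malicious one."""
    return (1 - alpha) * trust + alpha * outcome

ACCESS_THRESHOLD = 0.5
trust = 0.5                                   # neutral prior for a new node
for outcome in [1, 1, 0, 0, 0, 0]:            # observed interaction outcomes
    trust = update_trust(trust, outcome)
print(trust, "grant" if trust >= ACCESS_THRESHOLD else "deny")
```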
Due to the concern on cloud security, digital encryption is applied before outsourcing data to the cloud for utilization. This introduces the challenge of how to efficiently perform queries over ciphertexts. Crypto-based solutions currently suffer from limited operation support, high computational complexity, weak generality, and poor verifiability. An alternative method that utilizes a hardware-assisted Trusted Execution Environment (TEE), i.e., Intel SGX, has emerged to offer high computational efficiency, generality, and flexibility. However, SGX-based solutions lack support for multi-user query control and suffer from security compromises caused by untrustworthy TEE function invocation, e.g., key revocation failure, incorrect query results, and sensitive information leakage. In this article, we leverage SGX and propose a secure and efficient SQL-style query framework named QShield. Notably, we propose a novel lightweight secret sharing scheme in QShield to enable multi-user query control; it effectively circumvents key revocation and avoids cumbersome remote attestation for authentication. We further embed a trust-proof mechanism into QShield to guarantee the trustworthiness of TEE function invocation; it ensures the correctness of query results and alleviates side-channel attacks. Through formal security analysis, proof-of-concept implementation, and performance evaluation, we show that QShield can securely query over outsourced data with high efficiency and scalable multi-user support.
Authored by Yaxing Chen, Qinghua Zheng, Zheng Yan, Dan Liu
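QShield's lightweight scheme is described as novel and is not reproduced in the abstract; for intuition about how secret sharing can gate multi-user access, the textbook (k, n) Shamir construction is sketched below (field choice and parameters are illustrative):

```python
import random

PRIME = 2**127 - 1   # prime field modulus (illustrative choice)

def share(secret, n, k):
    """Classic (k, n) Shamir secret sharing: any k shares reconstruct the
    secret, so no single user alone can recover the query key."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```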
In the face of an increasing attack landscape, it is necessary to cater for efficient mechanisms to verify software and device integrity for detecting run-time modifications in next-generation systems-of-systems. In this context, remote attestation is a promising defense mechanism that allows a third party, the verifier, to ensure a remote device’s configuration integrity and behavioural execution correctness. However, most of the existing families of attestation solutions suffer from the lack of software-based mechanisms for the efficient extraction of rigid control-flow information. This limits their applicability to only those cyber-physical systems equipped with additional hardware support. This paper proposes a multi-level execution tracing framework capitalizing on recent software features, namely the extended Berkeley Packet Filter and Intel Processor Trace technologies, that can efficiently capture the entire platform configuration and control-flow stacks, thus, enabling wide attestation coverage capabilities that can be applied on both resource-constrained devices and cloud services. Our goal is to enhance run-time software integrity and trustworthiness with a scalable tracing solution eliminating the need for federated infrastructure trust.
Authored by Dimitrios Papamartzivanos, Sofia Menesidou, Panagiotis Gouvas, Thanassis Giannetsos
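Independently of how the trace is gathered (eBPF probes, Intel PT packets), attestation schemes typically condense the event stream into a single digest the verifier can recompute. A simplified, hypothetical sketch of such a hash chain (event encodings are made up):

```python
import hashlib

def extend(measurement: bytes, event: bytes) -> bytes:
    """TPM-style hash-chain extension over trace events; a simplified stand-in
    for condensing configuration records and control-flow data into one
    attestable digest."""
    return hashlib.sha256(measurement + event).digest()

measurement = b"\x00" * 32                    # initial platform measurement
for event in [b"exec:/usr/sbin/nginx",        # hypothetical trace events
              b"lib:libssl.so.3",
              b"cf-edge:0x4012a0->0x401337"]:
    measurement = extend(measurement, event)

# The prover reports the final digest; the verifier recomputes it from a
# known-good reference trace and compares the two values.
print(measurement.hex())
```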
Embedded systems that make up the Internet of Things (IoT), Supervisory Control and Data Acquisition (SCADA) networks, and Smart Grid applications are coming under increasing scrutiny in the security field. Remote Attestation (RA) is a security mechanism that allows a trusted device, the verifier, to determine the trustworthiness of an untrusted device, the prover. RA has become an area of high interest in academia and industry, and many research works on RA have been published in recent years. This paper reviews published RA research from 2003 to 2020. Our contributions are fourfold. First, we have re-framed the problem of RA into 5 smaller problems: root of trust, evidence type, evidence gathering, packaging and verification, and scalability. We have provided a holistic review of RA by discussing the relationships between these problems and the various solutions that exist in modern RA research. Second, we have presented an enhanced threat model that allows for a greater understanding of the security benefits of a given RA scheme. Third, we have proposed a taxonomy to classify and analyze RA research works and use it to categorize 58 RA schemes reported in the literature. Fourth, we have provided cost-benefit analysis details of each RA scheme surveyed so that security professionals may perform a cost-benefit analysis in the context of their own challenges. Our classification and analysis have revealed areas of future research that researchers have not yet rigorously addressed.
Authored by William Johnson, Sheikh Ghafoor, Stacy Prowell
With the development of cloud computing and edge computing, data sharing and collaboration between cloud, edge, and end devices have become increasingly common. With the assistance of the edge cloud, end users can access the data stored in the cloud by data owners. However, in an unprotected cloud-edge-end network environment, data sharing is vulnerable to security threats from malicious users, and data confidentiality cannot be guaranteed. Most existing data sharing approaches use identity authentication mechanisms to resist unauthorized access by illegitimate end users, but such mechanisms cannot guarantee the credibility of the end user's network environment. Therefore, this article proposes an approach for trusted sharing of data under cloud-edge-end collaboration (TSDCEE), in which we verify the trustworthiness of the data requester's network environment based on the mechanism of attribute remote attestation. Finally, this article formally analyzes TSDCEE using the Spin model checker and verifies its security properties.
Authored by Xuejian Li, Mingguang Wang
The wide adoption of IoT gadgets and Cyber-Physical Systems (CPS) makes embedded devices increasingly important. While some of these devices perform mission-critical tasks, they are usually implemented using Micro-Controller Units (MCUs) that lack security mechanisms on par with those available to general-purpose computers, making them more susceptible to remote exploits that could corrupt their software integrity. Motivated by this problem, prior work has proposed techniques to remotely assess the trustworthiness of embedded MCU software. Among them, Control Flow Attestation (CFA) enables remote detection of runtime abuses that illegally modify the program's control flow during execution (e.g., control-flow hijacking and code reuse attacks).
Authored by Antonio Neto, Ivan Nunes
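The verifier-side check at the heart of CFA can be sketched simply: executed control-flow edges reported by the prover must belong to the control-flow graph extracted offline from the binary. All addresses and edges below are hypothetical; a real scheme additionally authenticates the report (e.g., a MAC rooted in the MCU's root of trust):

```python
# Hypothetical set of legal control-flow edges from the program's static CFG.
LEGAL_EDGES = {(0x4010, 0x4030), (0x4030, 0x4050), (0x4050, 0x4010)}

def verify(reported_trace):
    """Accept only traces whose every edge appears in the static CFG."""
    return all(edge in LEGAL_EDGES for edge in reported_trace)

print(verify([(0x4010, 0x4030), (0x4030, 0x4050)]))   # True: benign run
print(verify([(0x4010, 0x4030), (0x4030, 0xdead)]))   # False: hijacked edge
```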
Machine learning models are susceptible to a class of attacks known as adversarial poisoning, where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison-agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
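A sub-population fairness measure of the kind described above can be computed in a few lines. The gap metric below is one simple instantiation, not necessarily the authors' exact formulation; the data and group labels are made up:

```python
import numpy as np

def subpop_accuracy_gap(y_true, y_pred, groups):
    """Gap between the best- and worst-served sub-population accuracies
    (illustrative fairness measure)."""
    accs = [np.mean(y_pred[groups == g] == y_true[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)

# A poisoning defense can keep overall accuracy high while widening this gap.
y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(subpop_accuracy_gap(y_true, y_pred, groups))    # 1/3 for this toy data
```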
Social networks are good platforms for like-minded people to exchange their views and thoughts. With the rapid growth of web applications, social networks have become huge networks with millions of users. On the other hand, the number of malicious activities by untrustworthy users has also increased. Users must estimate other people's trustworthiness before sharing their personal information with them. Since social networks are huge and complex, estimating user trust values is not a trivial task and has attracted considerable research attention. Several mathematical methods have been proposed to estimate user trust values, but they still lack efficient means of analyzing user activities. In this paper, "An Efficient Trust Computation Method Using Machine Learning in Online Social Networks" (TCML) is proposed. Twitter user activities are considered to estimate each user's direct trust value, and the trust values of unknown users are computed through the recommendations of common friends. Since the available Twitter dataset is unlabeled, unsupervised methods are used to categorize users into clusters and to compute their trust values. In the experiments, the silhouette score is used to assess cluster quality. The performance of the proposed method is compared with existing methods such as mole and tidal, which it outperforms.
Authored by Anitha Yarava, Shoba Bindu
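The unsupervised clustering-plus-silhouette step described above maps onto standard scikit-learn calls. A minimal sketch with synthetic activity features (the paper's actual feature set, cluster count, and trust assignment are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Hypothetical per-user activity features (tweet rate, retweet ratio,
# mention count, follower/followee ratio).
X = rng.random((500, 4))

labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("cluster quality (silhouette):", silhouette_score(X, labels))

# Illustrative per-cluster trust assignment; TCML's direct-trust computation
# and propagation via common-friend recommendations are not reproduced here.
cluster_trust = {0: 0.8, 1: 0.5, 2: 0.2}
user_trust = [cluster_trust[c] for c in labels]
```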
Nowadays, Recommender Systems (RSs) have become the indispensable solution to the problem of information overload in many different fields (e-commerce, e-tourism, ...) because they offer their customers more adapted and increasingly personalized services. In this context, collaborative filtering (CF) techniques are used by many RSs since they make it easier to provide recommendations of acceptable quality by leveraging the preferences of similar user communities. However, these techniques suffer from the sparsity of user evaluations, especially during the cold-start phase. Indeed, the search for similar neighbors may fail due to insufficient data in the user-item rating matrix (the case of a new user or new item). To solve this kind of problem, the literature offers several solutions that overcome data insufficiency through the social relations between users. These solutions can provide good-quality recommendations even when data is sparse because they permit an estimation of the level of trust between users. This type of metric is often used in the tourism domain to support the computation of similarity measures between users, producing valuable POI (point of interest) recommendations through a better trust-based neighborhood. However, the difficulty of obtaining explicit trust data from the social relationships between tourists leads researchers to infer this data implicitly from user-item relationships (implicit trust). In this paper, we first survey CF techniques that can be used to reduce the data sparsity problem during the RS cold-start phase. Second, we propose a method that relies essentially on user trustworthiness inferred from scores computed from users' ratings of items. Finally, we explain how these relationships, deduced from existing social links between tourists, might be employed as additional sources of information to minimize the cold-start problem.
Authored by Sarah Medjroud, Nassim Dennouni, Mhamed Henni, Djelloul Bettache
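Implicit trust inferred from ratings, as described above, is commonly computed from rating agreement on co-rated items. The paper's exact score is not specified in the abstract; the sketch below shows one simple instantiation from the literature (function name, scaling, and data are assumptions):

```python
import numpy as np

def implicit_trust(r_u, r_v, rating_range=4.0):
    """Implicit trust between two tourists inferred from co-rated POIs:
    mean rating agreement scaled to [0, 1] (illustrative instantiation)."""
    co_rated = ~np.isnan(r_u) & ~np.isnan(r_v)
    if not co_rated.any():
        return 0.0                     # no overlap: no evidence either way
    return float(np.mean(1 - np.abs(r_u[co_rated] - r_v[co_rated]) / rating_range))

u = np.array([5, 3, np.nan, 4])        # ratings on a 1-5 scale, NaN = unrated
v = np.array([4, np.nan, 2, 4])
print(implicit_trust(u, v))            # 0.875: high agreement on shared POIs
```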
We adopt blockchain-based security features for use in web service applications and platforms. These technology concepts allow us to enhance the level of trustworthiness of any kind of public web service platform. Related platforms use simple user registration and validation procedures, which leave huge potential for illegal activities. In contrast, more secure live video identity checks bind massive staff resources to individual validation tasks. Our approach combines traditional web-based service platform features with blockchain-based security enhancements. The concepts are applied on two layers: the user identification procedures as well as the entire process history on the web service platform.
Authored by Robert Manthey, Richard Vogel, Falk Schmidsberger, Matthias Baumgart, Christian Roschke, Marc Ritter, Matthias Vodel