This paper presents a case study of the initial phases of interface design for an artificial intelligence-based decision-support system for clinical diagnosis. The study presents challenges and opportunities in implementing a human-centered design (HCD) approach during the early stages of the software development of a complex system. These methods are commonly adopted to ensure that systems are designed based on users' needs. For this project, they are also used to investigate users' potential trust issues and ensure the creation of a trustworthy platform. However, the project stage and the heterogeneity of the teams can pose obstacles to their implementation. The implementation of HCD methods proved effective and informed the creation of low-fidelity prototypes. The outcomes of this process can assist other designers, developers, and researchers in creating trustworthy AI solutions.
Authored by Gabriela Beltrao, Iuliia Paramonova, Sonia Sousa
The Assessment List for Trustworthy AI (ALTAI) was developed by the High-Level Expert Group on Artificial Intelligence (AI HLEG) set up by the European Commission to help assess whether an AI system that is being developed, deployed, procured, or used complies with the seven requirements of Trustworthy AI, as specified in the AI HLEG's Ethics Guidelines for Trustworthy AI. This paper describes the self-evaluation process of the SHAPES pilot campaign and presents some individual case results from applying a prototype of an interactive version of the Assessment List for Trustworthy AI. Finally, the available results of two individual cases are combined. The best results are obtained in the evaluation category 'transparency' and the worst in 'technical robustness and safety'. Future work will combine the missing self-assessment results and develop AI-based risk-reduction recommendations for new SHAPES services.
Authored by Jyri Rajamaki, Pedro Rocha, Mira Perenius, Fotios Gioulekas
Recent advances in artificial intelligence, specifically machine learning, have contributed positively to the growth of the autonomous systems industry, while introducing social, technical, legal, and ethical challenges to making such systems trustworthy. Although Trustworthy Autonomous Systems (TAS) is an established and growing research direction discussed in multiple disciplines, e.g., Artificial Intelligence, Human-Computer Interaction, Law, and Psychology, the impact of TAS on education curricula and the skills required of future TAS engineers has rarely been discussed in the literature. This study brings together the collective insights of a number of leading TAS experts to highlight significant challenges for curriculum design and the TAS skills required in light of the rapid emergence of TAS. Our analysis is of interest not only to the TAS education community but also to other researchers, as it offers ways to guide future research toward operationalising TAS education.
Authored by Mohammad Naiseh, Caitlin Bentley, Sarvapali Ramchurn
The continuously growing importance of today's technology paradigms such as the Internet of Things (IoT) and the new 5G/6G standards opens up unique features and opportunities for smart systems and communication devices. Famous examples are edge computing and network slicing. Generational technology upgrades provide unprecedented data rates and processing power. At the same time, these new platforms must address the growing security and privacy requirements of future smart systems. This poses two main challenges concerning the digital processing hardware. First, we need to provide integrated trustworthiness covering hardware, runtime, and the operating system, where integrated means that the hardware must be the basis supporting secure runtime and operating-system needs under very strict latency constraints. Second, applications of smart systems cover such a wide range of requirements that "one-chip-fits-all" cannot be the cost- and energy-effective way forward. Therefore, we need to be able to provide a scalable hardware solution to cover differing processing-resource requirements. In this paper, we discuss our research on an integrated design of a secure and scalable hardware platform, including a runtime and an operating system. The architecture is built out of composable and preferably simple components that are isolated by default. This allows for the integration of third-party hardware/software without compromising the trusted computing base. The platform approach improves system security and provides a viable basis for trustworthy communication devices.
Authored by Friedrich Pauls, Sebastian Haas, Stefan Kopsell, Michael Roitzsch, Nils Asmussen, Gerhard Fettweis
The traditional process of renting a house has several issues, such as data security, immutability, lack of trust, and high cost due to the involvement of third parties, fraudulent agreements, payment delays, and ambiguous contracts. To address these challenges, a blockchain with smart contracts can be an effective solution. This paper leverages the vital features of blockchain and smart contracts to design a trustworthy and secure house rental system. The proposed system involves off-chain and on-chain transactions on a Hyperledger blockchain. The off-chain transaction comprises rental contract creation between tenant and landlord based on their mutual agreement. On-chain transactions include the deposit and rent payments, digital key generation, and contract dissolution, following the agreed terms and conditions in the contract. The functional and performance analysis of the proposed system is carried out by applying different test cases. The proposed system fulfills the requirements of the house rental process with high throughput (>92 tps) and affordable latency (<0.7 seconds).
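As a rough illustration of the rental flow described above, the following Python sketch models the contract's state machine: off-chain agreement, on-chain deposit and rent payment, digital-key generation, and dissolution. The class, state names, and key-derivation scheme are illustrative stand-ins, not the authors' Hyperledger chaincode (which would run on-chain, e.g., as Go or JavaScript chaincode).

```python
import hashlib

class RentalContract:
    """Toy state machine for the rental flow: off-chain agreement, then
    on-chain deposit, rent payment, digital-key generation, and
    dissolution. Names and rules are illustrative, not the authors' code."""

    def __init__(self, tenant, landlord, deposit, rent):
        self.tenant, self.landlord = tenant, landlord
        self.deposit_due, self.rent_due = deposit, rent
        self.state = "CREATED"      # contract created off-chain by mutual agreement
        self.digital_key = None

    def pay_deposit(self, payer, amount):
        if self.state != "CREATED" or payer != self.tenant or amount < self.deposit_due:
            raise ValueError("deposit rejected")
        self.state = "DEPOSIT_PAID"

    def pay_rent(self, payer, amount):
        if self.state != "DEPOSIT_PAID" or payer != self.tenant or amount < self.rent_due:
            raise ValueError("rent rejected")
        # digital key is generated only after the agreed payments clear
        self.digital_key = hashlib.sha256(
            f"{self.tenant}:{self.landlord}".encode()).hexdigest()[:16]
        self.state = "ACTIVE"

    def dissolve(self):
        if self.state != "ACTIVE":
            raise ValueError("nothing to dissolve")
        self.digital_key = None     # key revoked on contract dissolution
        self.state = "DISSOLVED"
```

Each method guards on the current state, mirroring how chaincode enforces the agreed terms before committing a transaction.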
Authored by Pooja Yadav, Shubham Sharma, Ajit Muzumdar, Chirag Modi, C. Vyjayanthi
With the development of networked embedded technology, the requirements of embedded systems are becoming more and more complex, which increases the difficulty of requirements analysis. Requirements patterns are a means for the comprehension and analysis of the requirements problem. In this paper, we propose seven functional requirements patterns for complex embedded systems based on an analysis of the characteristics of modern embedded systems. The main feature is explicitly distinguishing the controller, the system devices (controlled by the controller), and the external entities (monitored by the controller). In addition to the requirements problem description, we also provide an observable system behavior description, I∼O logic, and the execution mechanism for each pattern. Finally, we apply the patterns to a solar search subsystem of aerospace satellites, and all 20 requirements can be matched against one of the patterns, which validates the usability of our patterns.
Authored by Xiaoqi Wang, Xiaohong Chen, Xiao Yang, Bo Yang
To assess the fire risk of intelligent buildings, a trustworthy classification model was developed, providing model support for the classified assessment of fire risk in intelligent buildings under urban intelligent firefighting construction. The model integrates Bayesian Networks (BN) with software trustworthy computing theory and methods, and designs metric elements and attributes to assess fire risk along four dimensions: fire situation, building, environment, and personnel. A BN is used to calculate the risk value of the fire attributes; the fire-risk attribute values are then fused into a fire-risk trustworthy value using the trustworthy assessment model. On this basis, the paper constructs a trustworthy classification model for intelligent-building fire risk and classifies fire risk into five ranks according to the trustworthy value and attribute values. Taking the Shanghai Jing'an 11.15 fire as an example case, the results show that the proposed method can perform fire risk assessment and classification.
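The fusion and classification steps described above can be sketched as a weighted aggregation of per-attribute risk values into a single trustworthy value, which is then mapped to one of five ranks. The weights, thresholds, and rank labels below are illustrative assumptions, not the paper's calibration; the BN inference that produces the attribute risk values is omitted.

```python
def fuse_trustworthy_value(attr_risks, weights):
    """Fuse per-attribute risk values (e.g., produced by a Bayesian network
    for fire situation, building, environment, personnel) into one
    trustworthy value in [0, 1]; higher means lower fire risk."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    risk = sum(r * w for r, w in zip(attr_risks, weights))
    return 1.0 - risk

def classify_rank(trust):
    # five ranks, I (safest) to V (most dangerous); thresholds illustrative
    for rank, lo in (("I", 0.8), ("II", 0.6), ("III", 0.4), ("IV", 0.2)):
        if trust >= lo:
            return rank
    return "V"

# four dimensions: fire situation, building, environment, personnel
trust = fuse_trustworthy_value([0.9, 0.5, 0.3, 0.4], [0.4, 0.3, 0.15, 0.15])
rank = classify_rank(trust)
```

A high fire-situation risk dominates here because of its larger weight, pulling the trustworthy value down into a more dangerous rank.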
Authored by Weilin Wu, Na Wang, Yixiang Chen
Fog computing moves computation from the cloud to edge devices to support IoT applications with faster response times and lower bandwidth utilization. IoT users and linked devices are at risk of security and privacy breaches because of the high volume of interactions that occur in IoT environments. These features make it very challenging to maintain and quickly share dynamic IoT data. In this approach, cloud-fog offers dependable computing for data sharing in a constantly changing IoT system. The extended IoT cloud first offers vertical and horizontal computing architectures, then combines IoT devices, edge, fog, and cloud into a layered infrastructure. The framework and supporting mechanisms are designed to handle trusted computing by utilising a vertical IoT cloud architecture to protect the IoT cloud once these issues have been taken into account. To protect data integrity and information flow for the different computing models in the IoT cloud, an integrated data provenance and information management method is selected. The effectiveness of the dynamic scaling mechanism is then contrasted with that of static serving instances.
Authored by Bommi Prasanthi, Dharavath Veeraswamy, Sravan Abhilash, Kesham Ganesh
This paper first describes the security and privacy challenges for Internet of Things (IoT) systems and then discusses some of the solutions that have been proposed. It also describes aspects of Trustworthy Machine Learning (TML) and then discusses how TML may be applied to handle some of the security and privacy challenges for IoT systems.
Authored by Bhavani Thuraisingham
Advances at the frontier of intelligence and system sciences have triggered the emergence of Autonomous AI (AAI) systems. AAI systems are cognitive intelligent systems that enable non-programmed and non-pretrained inferential intelligence for autonomous intelligence generation by machines. Basic research challenges for AAI are rooted in its transdisciplinary nature and in trustworthiness among interactions of human and machine intelligence in a coherent framework. This work presents a theory and a methodology for AAI trustworthiness and its quantitative measurement in real-time contexts, based on basic research in autonomous systems and symbiotic human-robot coordination. Experimental results demonstrate the novelty of the methodology and the effectiveness of real-time applications in hybrid intelligence systems involving humans, robots, and their interactions in distributed, adaptive, and cognitive AI systems.
Authored by Yingxu Wang
The management of technical debt related to non-functional properties such as security, reliability, or other trustworthiness dimensions is of paramount importance for critical systems (e.g., safety-critical systems, systems with strong privacy constraints, etc.). Unfortunately, diverse factors such as time pressure, resource limitations, organizational aspects, lack of skills, or the fast pace at which new risks appear can result in a lower level of trustworthiness than desired or required. In addition, there is increased interest in considering trustworthiness characteristics not in isolation but in an aggregated fashion, as well as in using this knowledge for risk quantification. In this work, we propose a trustworthiness debt measurement approach using 1) established categories and subcategories of trustworthiness characteristics from SQuaRE, 2) a weighting approach for the characteristics based on the AHP method, 3) a composed indicator based on a fuzzy method, and 4) risk management and analysis support based on Monte Carlo simulations. Given the preliminary nature of this work, while we propose the general approach for all trustworthiness dimensions, we elaborate more on security and reliability. This initial proposal aims to provide a practical approach to managing trustworthiness debt, suitable for any life-cycle phase, and to bring attention to aggregation methods.
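A minimal sketch of steps 2) and 4), under stated assumptions: AHP weights are approximated by the row geometric-mean method, the composed indicator is a simple weighted sum rather than the paper's fuzzy method, and debt is taken as the shortfall of aggregated trustworthiness against a target of 1.0. All matrices, ranges, and numbers are illustrative.

```python
import math
import random

def ahp_weights(pairwise):
    """Approximate AHP weights from a pairwise-comparison matrix using the
    row geometric-mean method (a common AHP approximation)."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    s = sum(gm)
    return [g / s for g in gm]

def trustworthiness_debt(scores, weights, target=1.0):
    # debt = shortfall of the aggregated trustworthiness indicator
    agg = sum(s * w for s, w in zip(scores, weights))
    return max(0.0, target - agg)

def monte_carlo_debt(score_ranges, weights, n=10_000, seed=7):
    """Monte Carlo risk quantification: sample each characteristic score
    uniformly from its uncertainty range and report the mean debt."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        scores = [rng.uniform(lo, hi) for lo, hi in score_ranges]
        total += trustworthiness_debt(scores, weights)
    return total / n

# security vs. reliability comparison (security judged twice as important)
w = ahp_weights([[1, 2], [0.5, 1]])
mean_debt = monte_carlo_debt([(0.6, 0.9), (0.5, 0.8)], w)
```

The uncertainty ranges stand in for imperfect knowledge of each characteristic's current level; the simulation turns them into a distribution of debt from which risk figures can be read.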
Authored by Imanol Urretavizcaya, Nuria Quintano, Jabier Martinez
The computation of data trustworthiness during double-sided two-way ranging with ultra-wideband signals between IoT devices is proposed. It relies on machine-learning-based ranging error correction, in which the certainty of the correction value is used to quantify trustworthiness. In particular, the trustworthiness score and error correction value are calculated from channel impulse response measurements, using either a modified k-nearest neighbor (KNN) or a modified random forest (RF) algorithm. The proposed scheme is easily implemented on commercial ultra-wideband transceivers, and it enables real-time surveillance of malicious or unintended modification of the propagation channel. The results on experimental data show an improvement of 47% in RMSE on the test set when only trustworthy measurements are considered.
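A minimal sketch of the modified-KNN idea, assuming that the correction is the mean ranging error of the k nearest channel-impulse-response (CIR) feature vectors and that trustworthiness decreases with the spread of those neighbor errors; the feature vectors, error labels, and the 1/(1+std) certainty mapping are illustrative, not the paper's exact formulation.

```python
import math

def knn_correct_and_trust(query, train_feats, train_errs, k=3):
    """Return (correction, trustworthiness) for a CIR feature vector.
    Correction = mean ranging error of the k nearest training vectors;
    trustworthiness = 1/(1+std) of those neighbor errors, in (0, 1]."""
    nearest = sorted(
        (math.dist(query, f), e) for f, e in zip(train_feats, train_errs)
    )[:k]
    errs = [e for _, e in nearest]
    mean = sum(errs) / k
    std = math.sqrt(sum((e - mean) ** 2 for e in errs) / k)
    return mean, 1.0 / (1.0 + std)

# toy CIR features (2-D here; real CIRs are much longer) and error labels
corr, trust = knn_correct_and_trust(
    [0.1, 0.2],
    [[0.1, 0.2], [0.11, 0.19], [0.5, 0.9], [0.09, 0.21]],
    [0.30, 0.32, 1.50, 0.28],
)
```

When neighbors agree on the error (low spread), the correction is trusted; a tampered channel would pull in dissimilar neighbors with inconsistent errors and a low trust score.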
Authored by Philipp Peterseil, Bernhard Etzlinger, David Marzinger, Roya Khanzadeh, Andreas Springer
Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it will be used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation to ensure the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and ensure the reliability of AI systems and to ultimately increase the acceptability of AI technology in human society.
Authored by Yejin Shin, KyoungWoo Cho, Joon Kwak, JaeYoung Hwang
With the upsurge of knowledge graph research, the multimedia field supports upper-layer multimedia services by constructing domain knowledge graphs to improve the quality of user experience of those services. However, the quality of a knowledge graph has a great impact on the performance of the multimedia services it supports. Existing methods for quantifying knowledge graph quality have problems such as incomplete utilization of information. Therefore, this paper proposes a trustworthiness measurement method that jointly uses entity-type embedding and a Transformer encoder. It quantifies trustworthiness at both the local and the global level, makes full use of the rich semantic information in the knowledge graph, and comprehensively evaluates its quality. The experimental results show that the proposed method can detect entity/entity-type matching errors that traditional methods cannot, while the error-detection performance on traditional triples remains essentially unchanged.
Authored by Yujie Yan, Peng Yu, Honglin Fang, Zihao Wu
As in many other research domains, Artificial Intelligence (AI) techniques have an increasing footprint in Earth Sciences (ES), where they extract meaningful information from the large amount of highly detailed data available from multiple sensor modalities. While on the one hand the existing success cases endorse the great potential of AI to help address open challenges in ES, on the other hand ongoing discussions and established lessons from studies on the sustainability, ethics, and trustworthiness of AI must be taken into consideration if the community is to ensure that its research efforts move in directions that effectively benefit society and the environment. In this paper, we discuss insights gathered from a brief literature review of the subtopics of AI Ethics, Sustainable AI, AI Trustworthiness, and AI for Earth Sciences, in an attempt to identify some of the promising directions and key needs for successfully bringing these concepts together.
Authored by Philipe Dias, Dalton Lunga
The demo presents recent work on social robots that provide information from knowledge graphs in online graph databases. Sometimes more cooperative responses can be generated by using taxonomies and other semantic metadata that have been added to the knowledge graphs. Sometimes metadata about data provenance suggests higher or lower trustworthiness of the data. This raises the question of whether robots should indicate trustworthiness when providing information, and whether this should be done explicitly through meta-level comments or implicitly, for example by modulating the robots' tone of voice and generating positive or negative affect in the robots' facial expressions.
Authored by Graham Wilcock, Kristiina Jokinen
Taking sellers' trustworthiness as a mediating variable, this paper examines the impact of online reviews on consumers' purchase decisions. Through an online survey, we collected data and tested our hypotheses using the SPSS software. We find that the quality of online reviews has a positive impact on consumers' perceived values and on sellers' trustworthiness. The timeliness of online reviews has a positive impact on consumers' perceived values, which in turn have a positive impact on sellers' trustworthiness. An interesting observation is that perceived values can indirectly influence consumers' purchase decisions through sellers' trustworthiness as a mediating variable. Sellers' trustworthiness has a positive impact on consumers' purchase decisions. We believe that our findings can help online sellers better manage online reviews.
Authored by Xiaohu Qian, Yunxia Li, Mingqiang Yin, Fan Yang
Software trustworthiness evaluation (STE) can be regarded as a Multi-Criteria Decision Making (MCDM) problem involving multiple criteria. However, most current STE methods do not consider the relationships between criteria. This paper presents a software trustworthiness evaluation method based on those relationships. First, the relationships between criteria are described by cooperative and conflicting degrees. Then, a measure for the substitutivity between criteria is proposed, and the cooperative degree between criteria is taken as an approximation of their substitutivity. With the help of this substitutivity, the software trustworthiness measurement model obtained by axiomatic approaches is applied to aggregate the degree to which each candidate software product meets each objective. Finally, the candidate software products are sorted according to the trustworthiness aggregation results, and the optimal product is selected on the basis of this ranking. The effectiveness of the proposed method is demonstrated through a case study.
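The cooperative-degree and aggregation/ranking steps can be sketched as follows, assuming (purely for illustration) that the cooperative degree of two criteria is the share of candidate pairs they rank in the same direction, and that aggregation is a plain weighted sum; the paper's axiomatic measurement model and exact formulas may differ.

```python
def cooperative_degree(si, sj):
    """Illustrative cooperative degree of criteria i and j: the fraction of
    candidate-product pairs on which both criteria agree in ranking
    direction (a simple concordance measure)."""
    n = len(si)
    pairs = [(a, b) for a in range(n) for b in range(a + 1, n)]
    agree = sum(1 for a, b in pairs if (si[a] - si[b]) * (sj[a] - sj[b]) > 0)
    return agree / len(pairs)

def rank_products(criteria_scores, weights):
    """criteria_scores[c][p]: degree to which product p meets criterion c.
    Aggregate with a weighted sum and return (best-first order, scores)."""
    n_products = len(criteria_scores[0])
    agg = [sum(w * c[p] for w, c in zip(weights, criteria_scores))
           for p in range(n_products)]
    order = sorted(range(n_products), key=lambda p: -agg[p])
    return order, agg

# two criteria scored over three candidate products
coop = cooperative_degree([0.9, 0.6, 0.3], [0.8, 0.7, 0.2])
order, agg = rank_products([[0.9, 0.6, 0.3], [0.8, 0.7, 0.2]], [0.5, 0.5])
```

A cooperative degree near 1 signals largely substitutable criteria; in a fuller treatment the weights would be adjusted to avoid double-counting them.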
Authored by Hongwei Tao
The loose coupling and distributed nature of service-oriented architecture (SOA) can easily lead to trustworthiness problems in service composition. Current evaluation methods for Web service composition trustworthiness are biased towards the service provider itself, while ignoring the trustworthiness of the service discoverer and the user. This paper studies a multi-angle evaluation method for Web service composition trustworthiness that comprehensively considers these three aspects of Web service composition and uses the service discoverer to extend the service center. Experiments show that this evaluation method can improve the accuracy and comprehensiveness of trustworthiness evaluation.
Authored by Zhang Yanhong
Artificial intelligence (AI) technology is rapidly being introduced and used as a core technology in all industries, and concerns about unexpected social issues are also emerging. Therefore, countries, standards bodies, and international organizations are developing and distributing guidelines to maximize the benefits of AI while minimizing its risks and side effects. However, there are several hurdles for developers seeking to use them in actual industrial settings, such as ambiguous terminology, lack of domain-specific concreteness, and non-specific requirements. In this paper, approaches to address these problems are presented. If future recommendations or guidelines refer to the proposed approaches, they could serve as more developer-friendly guidelines for assuring AI trustworthiness.
Authored by Jae Hwang
Network security isolation technology is an important means of protecting the internal information security of enterprises. Isolation is generally achieved with traditional network devices such as firewalls and gatekeepers, but their security rules are relatively rigid and cannot adequately meet flexible and changing business needs. Through a double-sandbox structure created for each user, users' virtual machines are isolated from one another and security is ensured. By creating a virtual disk in a virtual machine as a user storage sandbox and encrypting reads and writes to the disk, this paper discusses the shortcomings of traditional network isolation methods and expounds the application of cloud desktop network isolation technology based on VMware technology in universities.
Authored by Kai Ye
Hardware breakpoints are used to monitor the behavior of a program on a virtual machine (VM). Although a virtual machine monitor (VMM) can inspect programs on a VM at hardware breakpoints, the programs themselves can detect hardware breakpoints by reading the debug registers. Malicious programs may change their behavior to avoid introspection and other security mechanisms if a hardware breakpoint is detected. To prevent introspection evasion, methods for hiding hardware breakpoints by returning a fake value to the VM have been proposed. These methods detect read and write operations on the debug registers from the VM and then return control to the VM as if the access had succeeded. However, VM introspection remains detectable from the VM by confirming whether the debug exception is delivered at the address that was set: while previous work handles the read and write operations on the debug registers, the debug exception is not delivered to the VM program. To address this problem, this study presents a method for making hardware breakpoints compatible with VM introspection. The proposed method uses surplus debug address registers to deliver the debug exception at a hardware breakpoint set by the VM program. If a VM program attempts to write a value to a debug register, the VMM detects this and stores the value in a real debug register that is not used for VM introspection. Because the debug exception at the hardware breakpoint is delivered to the VM, hardware breakpoints set by the VM become compatible with VM introspection. The evaluation results showed that the proposed method has a low performance overhead.
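A toy Python model may clarify the mechanism: the VMM reserves one real debug register for introspection, redirects guest debug-register writes to a surplus register, returns the guest's own (shadow) value on reads, and still delivers the guest's debug exception. Register names, counts, and the access model are illustrative, not the paper's implementation.

```python
class ToyVMM:
    """Toy model of the scheme described above: DR0 is reserved for VMM
    introspection; guest debug-register writes go to surplus registers;
    guest reads see shadow values; guest breakpoints still fire."""

    NUM_DRS = 4  # x86 has four address registers, DR0-DR3

    def __init__(self, introspection_addr):
        self.real_drs = [None] * self.NUM_DRS
        self.real_drs[0] = introspection_addr    # VMM-owned breakpoint
        self.shadow_drs = [None] * self.NUM_DRS  # what the guest believes

    def guest_write_dr(self, idx, addr):
        # record the guest's view, but place the breakpoint in a surplus
        # real register so the VMM's DR0 breakpoint is never clobbered
        self.shadow_drs[idx] = addr
        for i in range(1, self.NUM_DRS):
            if self.real_drs[i] in (None, addr):
                self.real_drs[i] = addr
                return
        raise RuntimeError("no surplus debug register available")

    def guest_read_dr(self, idx):
        # fake value: hides the VMM's use of the real registers
        return self.shadow_drs[idx]

    def on_access(self, addr):
        # returns (handled by VMM introspection, exception delivered to guest)
        return (addr == self.real_drs[0], addr in self.shadow_drs)
```

Here `on_access(addr)` stands in for the processor raising a debug exception: a hit on the VMM's register is consumed by introspection, while a hit on a guest-set address is delivered to the guest, matching the compatibility goal of the paper.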
Authored by Masaya Sato, Ryosuke Nakamura, Toshihiro Yamauchi, Hideo Taniguchi
Distributed Ledger Technology (DLT), beyond its initial goal of moving digital assets, allows more advanced approaches such as smart contracts executed on distributed computational nodes, e.g., the Ethereum Virtual Machine (EVM), initially available only on the Ethereum ledger. Since the release of other EVM-based ledgers, use cases that incentivize the integration of smart contracts in other domains, such as IoT environments, have increased. In this paper, we analyze the quantitative metrics of various popular EVM-enabled ledgers that are most expedient for IoT environments, to provide an overview of their potential EVM-enabling characteristics.
Authored by Sandi Gec, Dejan Lavbič, Vlado Stankovski, Petar Kochovski
Pen testing, or penetration testing, is an exercise undertaken to identify and exploit all possible weaknesses in a system or network. It verifies the adequacy or deficiency of the security measures that have been implemented. Our manuscript presents an outline of pen testing. We examine the systems, the advantages, and the respective procedure of pen testing. The pen-testing procedure comprises the following stages: test planning, testing, and test analysis. The testing stage includes the following steps: weakness investigation, information gathering, and weakness exploitation. The manuscript also shows the application of this procedure to conduct pen testing on a model web application.
Authored by Sarthak Baluni, Shivansu Dutt, Pranjal Dabral, Srabanti Maji, Anil Kumar, Alka Chaudhary
With the development of information networks, cloud computing, big data, and virtualization technologies promote the emergence of various new network applications to meet the needs of various Internet services. A security protection system for virtual hosts in a cloud computing center is proposed in this article. The system takes "security as a service" as its starting point, virtual machines as its core, and virtual machine clusters as its unit to provide unified security protection against the borderless character of virtualized computing. The paper builds a network security protection system against APT attacks, establishes a system capability model using the system dynamics method, and conducts simulation analysis. The simulation results prove the validity and rationality of the proposed network communication security framework and modeling analysis method. Compared with traditional methods, this method covers more comprehensive modeling and analysis elements, and the derived results are more instructive.
Authored by Xin Nie, Chengcheng Lou