Fog computing moves computation from the cloud to edge devices to support IoT applications with faster response times and lower bandwidth utilization. IoT users and connected devices are at risk of security and privacy breaches because of the high volume of interactions that occur in IoT environments, and these characteristics make it very challenging to maintain and quickly share dynamic IoT data. In this approach, the cloud-fog combination offers dependable computing for data sharing in a constantly changing IoT system. The extended IoT cloud first offers vertical and horizontal computing architectures, and then combines IoT devices, edge, fog, and cloud into a layered infrastructure. After these issues are taken into account, the framework and supporting mechanisms are designed to provide trusted computing by utilising a vertical IoT cloud architecture to protect the IoT cloud. An integrated data provenance and information management method is selected to protect data integrity and information flow for the different computing models in the IoT cloud. The effectiveness of the dynamic scaling mechanism is then contrasted with that of static serving instances.
Authored by Bommi Prasanthi, Dharavath Veeraswamy, Sravan Abhilash, Kesham Ganesh
This paper first describes the security and privacy challenges for Internet of Things (IoT) systems and then discusses some of the solutions that have been proposed. It also describes aspects of Trustworthy Machine Learning (TML) and discusses how TML may be applied to handle some of these security and privacy challenges for IoT systems.
Authored by Bhavani Thuraisingham
Advances at the frontier of intelligence and system sciences have triggered the emergence of Autonomous AI (AAI) systems. AAI systems are cognitive, intelligent systems that enable non-programmed and non-pretrained inferential intelligence for autonomous intelligence generation by machines. The basic research challenges of AAI are rooted in its transdisciplinary nature and in establishing trustworthiness among interactions of human and machine intelligence in a coherent framework. This work presents a theory and a methodology for AAI trustworthiness and its quantitative measurement in real-time contexts, based on basic research in autonomous systems and symbiotic human-robot coordination. Experimental results demonstrate the novelty of the methodology and the effectiveness of real-time applications in hybrid intelligence systems involving humans, robots, and their interactions in distributed, adaptive, and cognitive AI systems.
Authored by Yingxu Wang
The management of technical debt related to non-functional properties such as security, reliability, or other trustworthiness dimensions is of paramount importance for critical systems (e.g., safety-critical systems, systems with strong privacy constraints, etc.). Unfortunately, diverse factors such as time pressure, resource limitations, organizational aspects, lack of skills, or the fast pace at which new risks appear can result in a lower level of trustworthiness than the desired or required one. In addition, there is increased interest in considering trustworthiness characteristics not in isolation but in an aggregated fashion, as well as in using this knowledge for risk quantification. In this work, we propose a trustworthiness debt measurement approach using 1) established categories and subcategories of trustworthiness characteristics from SQuaRE, 2) a weighting approach for the characteristics based on the AHP method, 3) a composed indicator based on a fuzzy method, and 4) risk management and analysis support based on Monte Carlo simulations. Given the preliminary nature of this work, while we propose the general approach for all trustworthiness dimensions, we elaborate more on security and reliability. This initial proposal aims to provide a practical approach to managing trustworthiness debt suitable for any life-cycle phase and to bring attention to aggregation methods.
Authored by Imanol Urretavizcaya, Nuria Quintano, Jabier Martinez
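The AHP weighting step mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the three characteristics and the pairwise-comparison judgments are invented for the example; the priority weights come from the principal eigenvector of the comparison matrix, with a consistency check as is conventional in AHP.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
# trustworthiness characteristics: security, reliability, maintainability.
# The judgments below are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],    # security compared with the others
    [1/3, 1.0, 2.0],    # reliability compared with the others
    [1/5, 1/2, 1.0],    # maintainability compared with the others
])

# AHP priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency ratio: CR < 0.1 is conventionally acceptable (RI = 0.58 for n = 3).
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
```

The resulting vector `w` would then weight the SQuaRE subcharacteristic scores before they are combined into the composed fuzzy indicator.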
The computation of data trustworthiness during double-sided two-way ranging with ultra-wideband signals between IoT devices is proposed. It relies on machine-learning-based ranging error correction, in which the certainty of the correction value is used to quantify trustworthiness. In particular, the trustworthiness score and error correction value are calculated from channel impulse response measurements, using either a modified k-nearest neighbor (KNN) or a modified random forest (RF) algorithm. The proposed scheme is easily implemented on commercial ultra-wideband transceivers, and it enables real-time surveillance of malicious or unintended modification of the propagation channel. The results on experimental data show an improvement of 47% in RMSE on the test set when only trustworthy measurements are considered.
Authored by Philipp Peterseil, Bernhard Etzlinger, David Marzinger, Roya Khanzadeh, Andreas Springer
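The KNN variant described above can be sketched as follows. The paper's exact feature extraction and certainty definition are not reproduced here; this sketch assumes the correction is the mean ranging error of the k nearest training samples and that trustworthiness is high when those neighbors agree.

```python
import numpy as np

def knn_correct_with_trust(cir_features, train_feats, train_errors, k=5):
    """Modified-KNN sketch: predict a ranging-error correction and a
    trustworthiness score from channel-impulse-response (CIR) features.

    Assumed scheme (not the paper's exact formulation): the correction is
    the mean error of the k nearest training samples, and trustworthiness
    is the inverse of the spread of those neighbors' errors.
    """
    # Euclidean distance from the query CIR features to every training sample.
    d = np.linalg.norm(train_feats - cir_features, axis=1)
    nn = np.argsort(d)[:k]                 # indices of the k nearest neighbors
    correction = train_errors[nn].mean()   # predicted ranging-error correction
    spread = train_errors[nn].std()        # disagreement among neighbors
    trust = 1.0 / (1.0 + spread)           # in (0, 1]; 1 = full agreement
    return correction, trust
```

Measurements whose trust score falls below a chosen threshold would then be discarded before the corrected range is used, which is what drives the RMSE improvement reported in the abstract.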
Artificial intelligence (AI) technology is becoming common in daily life as it finds applications in various fields. Consequently, studies have strongly focused on the reliability of AI technology to ensure that it will be used ethically and in a nonmalicious manner. In particular, the fairness of AI technology should be ensured to avoid problems such as discrimination against a certain group (e.g., racial discrimination). This paper defines seven requirements for eliminating factors that reduce the fairness of AI systems in the implementation process. It also proposes a measure to reduce the bias and discrimination that can occur during AI system implementation to ensure the fairness of AI systems. The proposed requirements and measures are expected to enhance the fairness and ensure the reliability of AI systems and to ultimately increase the acceptability of AI technology in human society.
Authored by Yejin Shin, KyoungWoo Cho, Joon Kwak, JaeYoung Hwang
With the upsurge of knowledge graph research, the multimedia field supports upper-layer multimedia services by constructing domain knowledge graphs to improve the quality of the user experience. However, the quality of a knowledge graph has a great impact on the performance of the multimedia services built on it, and existing methods for quantifying knowledge graph quality suffer from problems such as incomplete use of the available information. Therefore, this paper proposes a trustworthiness measurement method that combines entity-type embedding with a transformer encoder, quantifies trustworthiness at both the local and the global level, makes full use of the rich semantic information in the knowledge graph, and comprehensively evaluates its quality. The experimental results show that the proposed method can detect mismatches between entities and entity types that traditional methods cannot, while its error-detection performance on conventional triples remains essentially unchanged.
Authored by Yujie Yan, Peng Yu, Honglin Fang, Zihao Wu
As in many other research domains, Artificial Intelligence (AI) techniques have been increasing their footprint in Earth Sciences to extract meaningful information from the large amount of highly detailed data available from multiple sensor modalities. While on the one hand the existing success cases endorse the great potential of AI to help address open challenges in Earth Sciences, on the other hand the on-going discussions and established lessons from studies on the sustainability, ethics, and trustworthiness of AI must be taken into consideration if the community is to ensure that its research efforts move in directions that effectively benefit society and the environment. In this paper, we discuss insights gathered from a brief literature review on the subtopics of AI Ethics, Sustainable AI, AI Trustworthiness, and AI for Earth Sciences in an attempt to identify some of the promising directions and key needs for successfully bringing these concepts together.
Authored by Philipe Dias, Dalton Lunga
The demo presents recent work on social robots that provide information from knowledge graphs in online graph databases. Sometimes more cooperative responses can be generated by using taxonomies and other semantic metadata that has been added to the knowledge graphs. Sometimes metadata about data provenance suggests higher or lower trustworthiness of the data. This raises the question whether robots should indicate trustworthiness when providing the information, and whether this should be done explicitly by meta-level comments or implicitly for example by modulating the robots’ tone of voice and generating positive and negative affect in the robots’ facial expressions.
Authored by Graham Wilcock, Kristiina Jokinen
Taking the sellers’ trustworthiness as a mediating variable, this paper examines the impact of online reviews on consumers’ purchase decisions. We conduct an online survey to collect the corresponding data and test our hypotheses using the SPSS software. We find that the quality of online reviews has a positive impact on consumers’ perceived values and on sellers’ trustworthiness. The timeliness of online reviews has a positive impact on consumers’ perceived values, which in turn have a positive impact on sellers’ trustworthiness. An interesting observation is that perceived values can indirectly influence consumers’ purchase decisions through sellers’ trustworthiness as a mediating variable. The sellers’ trustworthiness itself has a positive impact on consumers’ purchase decisions. We believe that our findings can help online sellers better manage online reviews.
Authored by Xiaohu Qian, Yunxia Li, Mingqiang Yin, Fan Yang
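A mediation effect of the kind tested above (review quality → seller trustworthiness → purchase decision) is commonly estimated with a series of regressions. The sketch below is not the paper's SPSS analysis; it uses synthetic data and invented coefficients purely to show how the indirect (mediated) effect is computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic, illustrative data: quality influences trustworthiness,
# which (plus a small direct path) influences the purchase decision.
quality = rng.normal(size=n)                              # review quality
trust = 0.7 * quality + rng.normal(scale=0.5, size=n)     # sellers' trustworthiness
purchase = 0.6 * trust + 0.1 * quality + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, slope_1, slope_2, ...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

c = ols(purchase, quality)[1]          # total effect of quality on purchase
a = ols(trust, quality)[1]             # path: quality -> trustworthiness
beta = ols(purchase, trust, quality)
b, c_prime = beta[1], beta[2]          # path: trust -> purchase; direct effect
indirect = a * b                       # effect mediated via trustworthiness
```

With mediation present, the direct effect `c_prime` is substantially smaller than the total effect `c`, and the indirect effect `a * b` accounts for the difference.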
Software trustworthiness evaluation (STE) can be regarded as a Multi-Criteria Decision-Making (MCDM) problem over a set of criteria. However, most current STE methods do not consider the relationships between criteria. This paper presents a software trustworthiness evaluation method based on these relationships. Firstly, the relationships between criteria are described by cooperative and conflicting degrees. Then, a measure formula for the substitutivity between criteria is proposed, and the cooperative degree between criteria is taken as an approximation of their substitutivity. With the help of this substitutivity, a software trustworthiness measurement model obtained by axiomatic approaches is applied to aggregate the degree to which each candidate software product meets each objective. Finally, the candidate software products are sorted according to the trustworthiness aggregation results, and the optimal product is selected from the alternatives on the basis of the sorting. The effectiveness of the proposed method is demonstrated through a case study.
Authored by Hongwei Tao
The loose coupling and distributed nature of service-oriented architecture (SOA) can easily lead to trustworthiness problems in service composition. Current trustworthiness evaluation methods for Web service composition are biased towards the service provider itself, while ignoring the trustworthiness of the service discoverer and the user. This paper studies a multi-angle trustworthiness evaluation method for Web service composition that comprehensively considers these three aspects and uses the service discoverer to extend the service center. Experiments show that this kind of trustworthiness evaluation method can improve the accuracy and comprehensiveness of the evaluation.
Authored by Zhang Yanhong
Artificial intelligence (AI) technology is rapidly being introduced and used as a core technology across all industries, and concerns about unexpected social issues are also emerging. Therefore, countries, standards bodies, and international organizations are developing and distributing guidelines to maximize the benefits of AI while minimizing its risks and side effects. However, there are several hurdles for developers who wish to apply these guidelines in actual industrial settings, such as ambiguity in terminology, lack of concreteness for particular domains, and non-specific requirements. In this paper, approaches to address these problems are presented. If the recommendations or guidelines developed in the future refer to the proposed approaches, they would offer more developer-friendly guidance for assuring AI trustworthiness.
Authored by Jae Hwang
Network security isolation technology is an important means of protecting the internal information security of enterprises. Isolation is generally achieved through traditional network devices such as firewalls and gatekeepers, but their security rules are relatively rigid and cannot adequately meet flexible and changing business needs. Through a double-sandbox structure created for each user, users' virtual machines are isolated from one another and security is ensured. By creating a virtual disk inside a virtual machine as a user storage sandbox and encrypting its reads and writes, the shortcomings of traditional network isolation methods are discussed, and the application of cloud desktop network isolation technology based on VMware technology in universities is expounded.
Authored by Kai Ye
Hardware breakpoints are used to monitor the behavior of a program on a virtual machine (VM). Although a virtual machine monitor (VMM) can inspect programs on a VM at hardware breakpoints, the programs themselves can detect hardware breakpoints by reading the debug registers. Malicious programs may change their behavior to avoid introspection and other security mechanisms if a hardware breakpoint is detected. To prevent such introspection evasion, methods have been proposed for hiding hardware breakpoints by returning a fake value to the VM. These methods detect read and write operations on the debug registers from the VM and then return processing to the VM as if the access had succeeded. However, VM introspection remains detectable from the VM by confirming whether the debug exception actually fires for the address the VM has set: while previous work handles the read and write operations on the debug registers, the debug exception is not delivered to the VM program. To address this problem, this study presents a method for making hardware breakpoints compatible with VM introspection. The proposed method uses surplus debug address registers to deliver the debug exception for hardware breakpoints set by the VM program. If a VM program attempts to write a value to a debug register, the VMM detects this and stores the value in a real debug register that is not used for VM introspection. Because the debug exception at the hardware breakpoint is delivered to the VM, hardware breakpoints set by the VM become compatible with VM introspection. The evaluation results showed that the proposed method has a low performance overhead.
Authored by Masaya Sato, Ryosuke Nakamura, Toshihiro Yamauchi, Hideo Taniguchi
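The shadowing scheme described above can be illustrated with a toy model. This is not VMM code: it is a minimal Python sketch, under the assumption of a four-register debug-register file where register 0 is reserved for the VMM's introspection breakpoint and the remaining "surplus" registers back the guest's breakpoints, while guest reads see only shadow copies.

```python
class ToyVMM:
    """Toy model of debug-register shadowing for VM introspection.

    Assumptions (illustrative, not the paper's implementation): physical
    register 0 holds the VMM's own introspection breakpoint; guest
    hardware breakpoints are mapped onto the surplus physical registers;
    guest reads return only the shadow values, hiding the VMM's usage.
    """
    def __init__(self, vmm_breakpoint):
        self.phys = {0: vmm_breakpoint, 1: None, 2: None, 3: None}
        self.shadow = {0: 0, 1: 0, 2: 0, 3: 0}   # what the guest observes
        self.map = {}                             # guest reg -> physical reg

    def guest_write(self, reg, addr):
        # Intercepted write: record in the shadow, and back it with a
        # surplus physical register so the exception really fires.
        self.shadow[reg] = addr
        phys = self.map.get(reg) or next(
            r for r, v in self.phys.items() if v is None and r != 0)
        self.phys[phys] = addr
        self.map[reg] = phys

    def guest_read(self, reg):
        return self.shadow[reg]   # fake value: the VMM's breakpoint stays hidden

    def fault(self, addr):
        # A debug exception fires if any physical register matches; it is
        # delivered to the guest only for guest-owned breakpoints.
        hits = [r for r, v in self.phys.items() if v == addr]
        deliver_to_guest = any(self.map[g] in hits for g in self.map)
        return bool(hits), deliver_to_guest
```

In this model a guest that sets a breakpoint both reads back its own value and receives the debug exception, so the introspection-detection check in the abstract no longer distinguishes an introspected VM from a bare one.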
Distributed Ledger Technology (DLT), beyond its initial goal of moving digital assets, allows more advanced approaches such as smart contracts executed on distributed computational nodes, e.g., the Ethereum Virtual Machine (EVM), initially available only on the Ethereum ledger. Since the release of different EVM-based ledgers, the use cases that incentivize the integration of smart contracts in other domains, such as IoT environments, have increased. In this paper, we analyze the quantitative metrics of various popular EVM-enabled ledgers that are most expedient for IoT environments, to provide an overview of potential EVM-enabling characteristics.
Authored by Sandi Gec, Dejan Lavbič, Vlado Stankovski, Petar Kochovski
Pen testing, or penetration testing, is an exercise undertaken to identify and exploit all possible weaknesses in a system or network. It verifies the adequacy or deficiency of the security measures that have been implemented. Our manuscript presents an outline of pen testing. We examine the systems involved, the advantages, and the procedure of pen testing. The pen-testing procedure comprises the following stages: test planning, testing, and test analysis. The testing stage includes the following steps: information gathering, weakness investigation, and weakness exploitation. This manuscript furthermore demonstrates the application of this procedure to pen testing of a model web application.
Authored by Sarthak Baluni, Shivansu Dutt, Pranjal Dabral, Srabanti Maji, Anil Kumar, Alka Chaudhary
With the development of information networks, cloud computing, big data, and virtualization technologies have promoted the emergence of various new network applications to meet the needs of various Internet services. A security protection system for virtual hosts in a cloud computing center is proposed in this article. The system takes "security as a service" as its starting point, takes virtual machines as the core, and uses virtual machine clusters as the unit of unified security protection against the borderless characteristics of virtualized computing. The paper builds a network security protection system against APT attacks, uses the system dynamics method to establish a system capability model, and conducts simulation analysis. The simulation results prove the validity and rationality of the proposed network communication security framework and of the modeling and analysis method. Compared with traditional methods, this method covers more comprehensive modeling and analysis elements, and the derived results are more instructive.
Authored by Xin Nie, Chengcheng Lou
Python continues to be one of the most popular programming languages and has been used in many safety-critical fields such as medical treatment, autonomous driving systems, and data science. These fields put forward higher security requirements for Python ecosystems. However, existing studies on machine learning systems in Python concentrate on data security, model security, and model privacy, and simply assume that the underlying Python virtual machines (PVMs) are secure and trustworthy. Unfortunately, whether such an assumption really holds is still unknown.
Authored by Xinrong Lin, Baojian Hua, Qiliang Fan
Java-based applications are widely used by companies, government agencies, and financial institutions. Every day, these applications process a considerable amount of sensitive data, such as people’s credit card numbers and passwords. Research has found that the Java Virtual Machine (JVM), an essential component for executing Java-based applications, stores data in memory for an unknown period of time even after the data are no longer used. This mismanagement by the JVM puts all data, sensitive or not, in danger and raises a huge concern for Java-based applications globally. The problem has serious implications for many “secure” applications that employ Java-based frameworks or libraries, with a severe security risk that attackers can access sensitive data after the data are thought to be cleared. This paper presents a prototype of a secure Java API designed through an undergraduate student research project. The API is implemented using a direct ByteBuffer so that sensitive data are not managed by JVM garbage collection, and it additionally uses obfuscation so that the data are encrypted. In an initial experimental evaluation, the proposed secure API successfully protects sensitive data from being accessed by malicious users.
Authored by Lin Deng, Bingyang Wei, Jin Guo, Matt Benke, Tyler Howard, Matt Krause, Aman Patel
A huge number of cloud users and cloud providers are threatened by the security issues that come with cloud computing adoption. Cloud computing is a hub of virtualization that provides virtualization-based infrastructure over physically connected systems. With the rapid advancement of cloud computing technology, data protection is becoming increasingly necessary. It is important to weigh the advantages and disadvantages of moving to cloud computing when deciding whether to do so. As a result of security and other problems in the cloud, cloud clients need more time to consider transitioning to cloud environments. Cloud computing, like any other technology, faces numerous challenges, especially in terms of cloud security, and many prospective customers are wary of cloud adoption because of this. Virtualization technologies facilitate the sharing of resources among multiple users. Cloud services are protected using various models such as type-I and type-II hypervisors, OS-level virtualization, and unikernel virtualization, but these also present a variety of security issues. Unfortunately, several attacks have been developed in recent years to compromise the hypervisor and take control of all virtual machines running above it. It is extremely difficult to reduce the size of a hypervisor due to the functions it offers, and it is not acceptable for a secure system design to include a large hypervisor in the Trusted Computing Base (TCB). Cloud computing service providers use virtualization to provide services; however, these methods entail handing over complete ownership of data to a third party. This paper covers a variety of topics related to virtualization protection, including a summary of various solutions and risk mitigation in the VMM (virtual machine monitor). We discuss the issues posed by a malicious virtual machine, as well as the security precautions required to handle malicious behaviors.
We examine the issues involved in investigating malicious behaviors in cloud computing, give a scientific categorization, and indicate future directions. We have identified: i) security specifications for virtualization in cloud computing, which can be used as a starting point for securing cloud virtual infrastructure; ii) attacks that can be conducted against cloud virtual infrastructure; and iii) security solutions to protect the virtualization environment from DDoS attacks.
Authored by Tahir Alyas, Karamath Ateeq, Mohammed Alqahtani, Saigeeta Kukunuru, Nadia Tabassum, Rukshanda Kamran
Virtualization is essential in helping businesses lower operational costs while still ensuring increased productivity, better hardware utilization, and flexibility. According to Patrick Lin, Senior Director of Product Management for VMware, "virtualization is both an opportunity and a threat." This survey reviews the literature on major virtualization technology security concerns, focusing primarily on several open security flaws that virtualization introduces into the environment. Virtual machines (VMs) are overtaking physical machine infrastructures due to their capacity to simulate hardware environments, share hardware resources, and make use of a range of operating systems (OS). By offering a higher level of hardware abstraction and isolation, efficient external monitoring and recording, and on-demand access, VMs offer a more effective security architecture than traditional machines. The survey concentrates on virtual machine-specific security concerns; the security risks mentioned apply to all virtualization technologies now on the market and are not unique to any one particular technology. In addition to noting some security advantages that come with virtualization, the survey first gives a brief review of the various virtualization technologies currently available, and concludes by examining in depth a number of security gaps in the virtualized environment.
Authored by N.B. Kadu, Pramod Jadhav, Santosh Pawar
The world has seen a quick transition from physical devices for local storage to massive virtual data centers, all made possible by cloud storage technology. Businesses have grown to be scalable, meeting consumer demands at every turn. Cloud computing has transformed the way we do business, making IT more efficient and cost-effective, but it also leads to new types of cybercrime. Securing data in the cloud is a challenging task. Cloud security is a mixture of art and science: the art is to create your own techniques and technologies in such a way that the user is properly authenticated; the science is in coming up with ways of securing your application. Data security refers to a broad set of policies, technologies, and controls deployed to protect data, applications, and the associated infrastructure of cloud computing. It ensures that the data has not been accessed by any unauthorized person. Cloud storage systems are considered to be a network of distributed data centers that typically use cloud computing technologies such as virtualization and offer some kind of interface for storing data. Here, virtualization is the process of pooling physical storage from multiple network storage devices so that it looks like a single storage device.
Authored by Jeevitha K, Thriveni J
Cloud computing has since grown tremendously, providing forms of technology and software assistance to companies. Cloud computing is a crucial concept for the distribution of information on the internet, and virtualization is a focal point for supporting the sharing of cloud resources. The confidentiality of data management is the essential concern for the assurance of computer security, since cloud processing does not by itself provide effective privacy protection. The details of data migration to the cloud remain hidden from the clients. In this review, effective mobility techniques for private and secure cloud computing are studied to support infrastructure as a service.
Authored by Betty Samuel, Saahira Ahamed, Padmanayaki Selvarajan
We have seen the tremendous expansion of machine learning (ML) technology in Artificial Intelligence (AI) applications, including computer vision, voice recognition, and many others. The availability of a vast amount of data has spurred the rise of ML technologies, especially Deep Learning (DL). Traditional ML systems consolidate all data into a central location, usually a data center, which may breach privacy and confidentiality rules. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating problems of data privacy, legality, scalability, and unwanted bandwidth loss. This paper outlines a vision for leveraging FL for better traffic-steering predictions. Specifically, we propose a hierarchical FL framework that dynamically updates service function chains in a network by predicting future user demand and network state using the FL method.
Authored by Abdullah Bittar, Changcheng Huang
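The hierarchical aggregation at the heart of such a framework can be sketched as follows. This is a generic hierarchical federated-averaging sketch, not the paper's traffic-steering model: edge aggregators average their clients' parameters weighted by sample counts, and the cloud then averages the edge models.

```python
import numpy as np

def fedavg(models, weights):
    """Weighted average of model parameter vectors (FedAvg step)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def hierarchical_round(edge_groups):
    """One round of hierarchical FL.

    edge_groups: a list of edges, each a list of (params, n_samples)
    tuples from that edge's clients. Each edge aggregates its clients
    locally; the cloud then aggregates the edge models, weighting each
    edge by its total number of samples.
    """
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        params = [p for p, _ in clients]
        sizes = [n for _, n in clients]
        edge_models.append(fedavg(params, sizes))   # edge-level aggregation
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)          # cloud-level aggregation
```

Because raw data never leaves the clients and only parameter vectors travel up the hierarchy, this structure addresses the privacy and bandwidth concerns the abstract raises while still producing a single global model for demand prediction.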