Social networks are good platforms for like-minded people to exchange their views and thoughts. With the rapid growth of web applications, social networks have become huge networks with millions of users. At the same time, the number of malicious activities by untrustworthy users has also increased. Users must estimate the trustworthiness of other people before sharing personal information with them. Since social networks are huge and complex, estimating a user's trust value is not a trivial task and has attracted significant research attention. Several mathematical methods have been proposed to estimate user trust values, but they still lack efficient means of analyzing user activities. This paper proposes TCML, an efficient trust computation method using machine learning in online social networks. Twitter user activities are considered to estimate users' direct trust values, and the trust values of unknown users are computed through the recommendations of common friends. Since the available Twitter dataset is unlabeled, unsupervised methods are used to categorize users into clusters and to compute their trust values. In the experimental results, the silhouette score is used to assess cluster quality. The performance of the proposed method is compared with existing methods such as MoleTrust and TidalTrust, both of which it outperforms.
Authored by Anitha Yarava, Shoba Bindu
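The cluster-and-score step the abstract describes can be illustrated with a minimal sketch (an assumption-laden stand-in, not the authors' code): scikit-learn's KMeans over synthetic user-activity features, with the silhouette score used to pick and assess the clustering. The feature names are hypothetical.

```python
# Minimal sketch: unsupervised clustering of user-activity features with
# cluster quality assessed by silhouette score, as the abstract describes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical per-user activity features: tweets/day, retweet ratio,
# replies received, follower/following ratio.
X = StandardScaler().fit_transform(rng.random((500, 4)))

best_k, best_score = 2, -1.0
for k in range(2, 8):                      # choose k by silhouette quality
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"best k={best_k}, silhouette={best_score:.3f}")
```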
We analyze a dataset from Twitter of misinformation related to the COVID-19 pandemic. We consider this dataset from the intersection of two important but, heretofore, largely separate perspectives: misinformation and trust. We apply existing direct trust measures to the dataset to understand its topology, and to better understand if and how trust relates to the spread of misinformation online. We find evidence for small-worldness in the misinformation trust network; outsized influence from broker nodes; a digital fingerprint that may indicate when a misinformation trust network is forming; and a positive relationship between greater trust and spread of misinformation.
Authored by Bryan Boots, Steven Simske
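Both structural signals reported above, small-worldness and broker influence, have standard graph measures. Below is a minimal, hedged sketch with networkx on a synthetic graph; the generator and thresholds are assumptions, not the paper's data.

```python
# Minimal sketch: probing a graph for small-worldness (sigma) and
# high-betweenness "broker" nodes with networkx.
import networkx as nx

# Stand-in for a misinformation trust network (the real one is the paper's data).
G = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=1)

# Small-world coefficient: sigma > 1 suggests small-world structure.
sigma = nx.sigma(G, niter=5, nrand=5, seed=1)

# Brokers: nodes with outsized betweenness centrality.
bc = nx.betweenness_centrality(G)
brokers = sorted(bc, key=bc.get, reverse=True)[:5]
print(f"sigma={sigma:.2f}, top brokers={brokers}")
```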
The new Web 3.0, or Web3, is a distributed web technology operated mainly by decentralized blockchains and artificial intelligence. Web 3.0 technologies bring changes to Industry 4.0, especially in the business sector. The contribution of this paper is to discuss the new Web 3.0 (not the Semantic Web) and to explore the essential factors of the new Web 3.0 technologies in business and industry, based on the seven layers of the decentralized web. These layers have users, interface, application, execution, settlement, data, and social as their main components. The concept of the seven layers of the decentralized web was introduced by Polynya. This research was carried out using the SLR (Systematic Literature Review) methodology to identify these factors by analyzing high-quality papers in the Scopus database. We found 21 essential factors: Distributed, Real-time, Community, Culture, Productivity, Efficiency, Decentralized, Trust, Security, Performance, Reliability, Scalability, Transparency, Authenticity, Cost Effective, Communication, Telecommunication, Social Network, Use Case, and Business Simulation. We also present opportunities and challenges of these 21 factors in business and industry.
Authored by Calvin Vernando, Hendry Hitojo, Randy Steven, Meyliana, Surjandy
The large amount of information generated on the web is useful for extracting patterns about customers and their purchases. Recommender systems provide a framework for utilizing this information to make suggestions to users according to their previous preferences. They are intelligent systems with decision-making capabilities, which in turn enhances business profit. Recommender systems suffer from problems such as cold start, fake profile generation, and data sparsity. Including trust in a recommender system helps alleviate these problems to a great extent. The phenomenon of trust is derived from daily-life experiences, such as believing the views and reviews suggested by friends and relatives when buying new things. The aim of this paper is to survey how trust can be incorporated in recommender systems and the advantages trust-aware recommender systems have over traditional recommender systems. It highlights the techniques that have been used to develop trust-aware recommenders and the pros and cons of these techniques.
Authored by Megha Raizada
Nowadays, Recommender Systems (RSs) have become the indispensable solution to the problem of information overload in many different fields (e-commerce, e-tourism, ...) because they offer their customers more adapted and increasingly personalized services. In this context, collaborative filtering (CF) techniques are used by many RSs since they make it easier to provide recommendations of acceptable quality by leveraging the preferences of similar user communities. However, these techniques suffer from the sparsity of user evaluations, especially during the cold start phase. Indeed, the process of searching for similar neighbors may not succeed due to insufficient data in the user-item rating matrix (the case of a new user or new item). To solve this kind of problem, the literature offers several solutions that overcome the insufficiency of the data through the social relations between users. These solutions can provide good-quality recommendations even when data is sparse because they permit an estimation of the level of trust between users. This type of metric is often used in the tourism domain to support the computation of similarity measures between users, producing valuable POI (point of interest) recommendations through a better trust-based neighborhood. However, the difficulty of obtaining explicit trust data from the social relationships between tourists leads researchers to infer this data implicitly from user-item relationships (implicit trust). In this paper, we first present a state of the art of CF techniques that can be utilized to reduce the data sparsity problem during the RS cold start phase. Second, we propose a method that essentially relies on user trustworthiness inferred from scores computed from users' ratings of items. Finally, we explain how these relationships deduced from existing social links between tourists might be employed as additional sources of information to minimize the cold start problem.
Authored by Sarah Medjroud, Nassim Dennouni, Mhamed Henni, Djelloul Bettache
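One simple way to infer implicit trust from ratings, sketched below under our own assumptions (this is an illustration, not the authors' formula), is to score the share of co-rated items on which two users' ratings agree within a tolerance.

```python
# Minimal sketch: implicit trust of user u toward user v derived from
# rating agreement on co-rated items. Item names and tolerance are
# hypothetical.
from typing import Dict

def implicit_trust(ratings_u: Dict[str, float],
                   ratings_v: Dict[str, float],
                   tol: float = 1.0) -> float:
    """Return a [0, 1] trust score based on rating agreement."""
    common = ratings_u.keys() & ratings_v.keys()
    if not common:
        return 0.0                      # no shared evidence -> no trust
    agree = sum(abs(ratings_u[i] - ratings_v[i]) <= tol for i in common)
    return agree / len(common)

u = {"poi_louvre": 5.0, "poi_eiffel": 4.0, "poi_orsay": 2.0}
v = {"poi_louvre": 4.5, "poi_eiffel": 1.0, "poi_orsay": 2.5}
print(implicit_trust(u, v))             # 2 of 3 co-rated items agree -> ~0.67
```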
The internet has made everything convenient. Through the World Wide Web, it has almost single-handedly transformed the way we live our lives. In doing so, we have become so fuelled by cravings for fast and cheap web connections that we find it difficult to take in the bigger picture. It is widely documented that we need a safer and more trusting internet, but few know or agree on what this actually means. This paper introduces a new body of research that explores whether there needs to be a fundamental shift in how we design and deliver these online spaces. In detail, the authors suggest the need for an internet security aesthetic that opens up the internet (from end to end) to fully support the people who are using it. Going forward, this research highlights that social trust needs to be a key concern in defining the future value of the internet.
Authored by Fiona Carroll, Rhyd Lewis
Web technologies have created a worldwide web of problems and cyber risks for individuals and organizations. In this paper, we evaluate web technologies, presenting the different technologies and their positive impacts on individuals and business sectors. We also present a cyber-criminal metrics engine for determining attacks on the weaknesses of web technology platforms. Finally, this paper offers a cautionary note to protect Small and Medium Businesses (SMBs) and makes recommendations to help minimize cyber risks and save individuals and organizations from cyberattack distress.
Authored by Olumide Malomo, Shanzhen Gao, Adeyemi Adekoya, Ephrem Eyob, Weizheng Gao
To improve the security and reliability of remote terminals under a trusted cloud platform, an identity authentication model based on DAA optimization is proposed. By introducing a trusted third-party CA, the scheme issues a cross-domain DAA certificate to the trusted platform that needs cross-domain authentication. Privacy CA isolation measures are then taken to improve the security of the platform, so that the authentication scheme can be used for identity authentication when ordinary users log in to a host equipped with a TPM chip. Finally, the trusted computing platform environment is established, and the per-entity performance load distribution and the total performance load of the DAA protocol, measured in machine cycles, are obtained through experimental analysis. The results show that the scheme can meet the requirements of anonymity, time cost, and cross-domain authentication in the trusted cloud computing platform, and that it is a useful supplement and extension to existing theories of web service security.
Authored by Yi Liang, Youyong Chen, Xiaoqi Dong, Changchao Dong, Qingyuan Cai
Federated Data-as-a-Service systems are helpful in applications that require dynamic coordination of multiple organizations, such as maritime search and rescue, disaster relief, or contact tracing of an infectious disease. In such systems it is often the case that users cannot be wholly trusted, and access control conditions need to take the level of trust into account. Most existing work on trust-based access control in web services focuses on a single aspect of trust, like user credentials, but trust often has multiple aspects such as users’ behavior and their organization. In addition, most existing solutions use a fixed threshold to determine whether a user’s trust is sufficient, ignoring the dynamic situation where the trade-off between benefits and risks of granting access should be considered. We have developed a Multi-aspect and Adaptive Trust-based Situation-aware Access Control Framework we call “MATS” for federated data sharing systems. Our framework is built using Semantic Web technologies and uses game theory to adjust a system’s access decisions based on dynamic situations. We use query rewriting to implement this framework and optimize the system’s performance by carefully balancing efficiency and simplicity. In this paper we present this framework in detail, including experimental results that validate the feasibility of our approach.
Authored by Dae-Young Kim, Nujood Alodadi, Zhiyuan Chen, Karuna Joshi, Adina Crainiceanu, Don Needham
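The benefit-risk trade-off described above can be made concrete with a minimal sketch (our own assumption-laden toy, not the MATS decision engine): an access decision that compares expected gain against expected loss rather than applying a fixed trust threshold.

```python
# Minimal sketch: adaptive, situation-aware access decision. All numbers
# and the utility form are illustrative assumptions.
def grant_access(trust: float, benefit: float, damage: float) -> bool:
    """Grant iff the expected benefit of access exceeds the expected loss.

    trust   -- probability in [0, 1] that the user behaves honestly
    benefit -- value gained when an honest user is served
    damage  -- loss incurred when a dishonest user is served
    """
    expected_gain = trust * benefit
    expected_loss = (1.0 - trust) * damage
    return expected_gain > expected_loss

# Same trust level, different situations -> different decisions.
print(grant_access(0.7, benefit=10, damage=5))    # True: low-stakes data
print(grant_access(0.7, benefit=10, damage=100))  # False: high-stakes data
```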
Initiatives for the redecentralization of the Web, such as SoLiD, aim to enhance users' privacy by enforcing transparency about the data used by Web applications. However, it is a challenge for a Web application acquiring data from third-party sources to trust data originating from many, or even hidden, parties. A decentralized web application needs to evaluate trust and make trust-aware decisions autonomously, without relying on a centralized infrastructure. While many related trust models consider direct or reputation-based trust for making trust-aware decisions, in decentralized web applications content and context factors (called content trust) become critical due to the arbitrary number of potential data providers and the contextual nature of trust. Moreover, the dynamic nature of the decentralized web necessitates trust-aware decisions that are made autonomously by the machine in a collaborative environment without further human intervention. To address these challenges, we present ConTED, a content trust evaluation framework enabling decentralized Web applications to evaluate content trust autonomously. We also describe the architecture concept, which makes it feasible to integrate content trust models into decentralized Web applications. To demonstrate feasibility, ConTED is integrated with the aTLAS testbed, a web-based testbed for examining trust in a redecentralized web. Finally, we evaluate ConTED in terms of scalability and accuracy through a set of experiments.
Authored by Valentin Siegert, Arved Kirchhoff, Martin Gaedke
Cloud computing has been widely adopted because of its low price, high reliability, and generality of services. However, considering that cloud computing transactions between users and service providers are usually asynchronous, data privacy involving users and service providers may lead to a crisis of trust, which in turn hinders the expansion of cloud computing applications. In this paper, we propose DPP, a data privacy-preserving cloud computing scheme based on homomorphic encryption, which achieves correctness, compatibility, and security. DPP implements data privacy preservation by introducing homomorphic encryption. To verify the security of DPP, we instantiate it based on the Paillier homomorphic encryption scheme and evaluate its performance. The experimental results show that the time consumption of the key steps in the DPP scheme is reasonable and acceptable.
Authored by Jing Wang, Fengheng Wu, Tingbo Zhang, Xiaohua Wu
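For readers unfamiliar with the primitive, here is a minimal sketch of the additive homomorphism Paillier provides. It uses the third-party `phe` Python package as an assumed stand-in; the paper's own instantiation is not shown.

```python
# Minimal sketch: Paillier lets a cloud compute on ciphertexts (addition,
# scalar multiplication) without ever seeing the plaintexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# User encrypts inputs and ships only ciphertexts to the cloud.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# Cloud computes on ciphertexts.
enc_sum = enc_a + enc_b          # Enc(17 + 25)
enc_scaled = enc_a * 3           # Enc(17 * 3)

# Only the key holder can decrypt the results.
assert private_key.decrypt(enc_sum) == 42
assert private_key.decrypt(enc_scaled) == 51
```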
Face verification is by far the most popular biometric technology used for authentication, since it is non-invasive and does not require the assistance of the user. In contrast, fingerprint and iris identification technologies require the user's help during the identification process. The technology behind facial recognition has been around for years, but as it has grown more sophisticated, its applications have expanded greatly. These days, a third-party service provider is often hired to perform facial recognition. The sensitivity of face data raises important privacy concerns about outsourcing servers. In order to protect the privacy of users, this paper discusses privacy-preserving face recognition frameworks applied to different networks. In this survey, we focus primarily on the accuracy of face recognition, computation time, and algorithmic approaches to face recognition on edge and cloud-based networks.
Authored by Rajashree Nambiar, M. Jaiganesh, M.V. Rao
Fraud detection is an integral part of financial security monitoring tools; however, traditional fraud detection methods cannot detect existing malicious fraud, and cloud-based fraud detection systems risk revealing data and cannot protect the privacy of the detected subjects, so the privacy and security of fraud detection data becomes a significant problem. Homomorphic encryption, as a provable cryptographic scheme for outsourcing private computation to the cloud, can ensure that the cloud performs polynomial calculations over encrypted data without direct contact with users' actual data, thereby guaranteeing data privacy and security. Aiming at the data privacy problems in the fraud detection process, this paper combines homomorphic encryption with logistic regression fraud detection technology to study a logistic regression fraud detection algorithm over homomorphic ciphertext, and constructs a cloud privacy fraud detection method based on customer-side and cloud computing services. The CKKS encryption scheme is used to encrypt the fraud dataset and realize the logistic regression fraud detection algorithm under ciphertext. Experiments show that the difference between fraud detection accuracy on ciphertext and on plaintext is less than 3%. Under the condition of ensuring the privacy of the sensitive data to be detected, the effectiveness of the fraud detection model is not affected.
Authored by Zhuang Chen, Mingdian Cai, Zhikun Wang
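A minimal sketch of logistic-regression inference under CKKS follows, using the third-party TenSEAL library as an assumed stand-in (not the paper's code); since CKKS supports only additions and multiplications, the sigmoid is replaced by a common low-degree polynomial approximation. Parameter sizes and the model weights are assumptions.

```python
# Minimal sketch: logistic regression scoring on CKKS ciphertexts.
import tenseal as ts

# Key setup (parameters are assumptions sized for a degree-3 polynomial).
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=16384,
                     coeff_mod_bit_sizes=[60, 40, 40, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

weights, bias = [0.8, -0.4, 0.2], 0.1             # pre-trained plaintext model
enc_x = ts.ckks_vector(context, [1.0, 2.0, 3.0])  # encrypted transaction features

enc_logit = enc_x.dot(weights) + [bias]           # dot yields a 1-slot ciphertext
# sigmoid(x) ~ 0.5 + 0.197x - 0.004x^3, a common low-degree approximation
enc_score = enc_logit.polyval([0.5, 0.197, 0, -0.004])

print(enc_score.decrypt())   # fraud probability, visible only to the key holder
```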
The problem of privacy protection for trajectory data has received increasing attention in recent years with the significant growth in the volume of users who contribute trajectory data rich in user information. This creates serious privacy concerns, as exposing an individual's private information may enable attacks threatening the user's safety. In this demonstration we present TP3, a novel practical framework for supporting trajectory privacy preservation in Mobile Cloud Environments (MCEs). In TP3, non-expert users submit their trajectories and the system is responsible for determining their privacy exposure before sharing them with data analysts in return for various benefits, e.g., better recommendations. TP3 makes a number of contributions: (a) it evaluates the privacy exposure of the users utilizing various privacy operations; (b) it is latency-efficient, as it implements the privacy operations as serverless functions which can scale automatically to serve an increasing number of users with low latency; and (c) it is practical and cost-efficient, as it exploits the serverless model to adapt to the demands of the users with low operational costs for the service provider. Finally, TP3's Web-UI provides insights to the service provider regarding the performance and the respective revenue from the service usage, while enabling the user to submit trajectories with recommended privacy preferences.
Authored by Dimitrios Tomaras, Michail Tsenos, Vana Kalogeraki
With the help of Voice-controlled Digital Assistants (VCDAs), end users can perform various tasks, such as creating shopping lists, setting reminders, or controlling smart home devices via voice commands. However, in multi-user environments, the different end users of VCDAs may not have access to the same controls to protect their privacy. The primary end users who set up VCDAs usually have full control over the data collected by VCDAs, including text transcripts and audio recordings of the other end users. In order for these secondary end users to gain access to privacy settings, they must also create an account with the appropriate manufacturer and accept an invitation from the primary end user to join the respective VCDA. As a result, they depend on the primary end user and the creation of a user account to be able to protect their privacy. Through a user account, however, personal information, such as name, address, or age, can be linked to audio recordings, which poses additional privacy risks to secondary end users. For both primary and secondary end users, audio recordings are still maintained on cloud servers operated by manufacturers, resulting in a lack of transparency for all end users. In this paper, we thus propose an approach to improve the protection of both primary and secondary end users that spans from device set-up to utilization. Our approach is based on the concept of local registration and offline storage of voice commands.
Authored by Luca Acosta, Delphine Reinhardt
A large number of establishments and organizations implement clouds to store their databases. Increasingly active attacks are used on clouds to gain unauthorized access or to perform harmful actions that may affect users' privacy. Therefore, many studies have proposed increasing the level of security in clouds based on several strategies. Behavior is one of the promising strategies that might prevent unauthorized access or processes. In this paper, a set of features drawn from several previous studies, based on user activity and events in a special-purpose cloud, is used to prevent unauthorized processes and to alert the user about bad actions during his or her work in the cloud environment. The results of the comparison show that the event-based features require fewer resources and less time. Thus, they need to be enhanced by adding more informative features, or some available features from other strategies.
Authored by Mohammed Sheet, Melad Saeed
Cloud computing plays a significant part in sharing resources and data with other devices via data outsourcing. Data collaboration services, a potential service offered by the cloud service provider (CSP), support the consistency and availability of shared data among users. When sharing resources, providing secure writing and access control operations is a complicated process. This study develops a Privacy Preserving Encryption with Optimal Key Generation Technique (PPE-OKGT) for the cloud computing environment. The presented PPE-OKGT technique secures data via an encryption process prior to storing it on the cloud server. To accomplish this, the technique employs data encryption technology to transform the input data into a hidden format. Besides, in order to improve secrecy, the presented PPE-OKGT technique designs a chaotic search and rescue optimization (CSRO) algorithm for the optimal generation of keys. The promising performance of the PPE-OKGT technique is verified through a set of experiments. A comprehensive comparison study reports the enhancements of the PPE-OKGT technique over other models.
Authored by Sanjeeva Polepaka, B Gayathri, Shahnawaz Ayoub, Himanshu Sharma, Yudhveer Moudgil, S Kannan
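The general idea of optimization-driven key generation can be sketched generically (this is our own illustration, not the paper's CSRO algorithm): candidate keys are produced from a chaotic map and scored by a fitness function, keeping the fittest candidate. The map, fitness, and loop are all assumptions.

```python
# Minimal sketch: chaotic-map key candidates scored by byte entropy.
import math

def chaotic_key(seed: float, length: int = 32) -> bytes:
    """Derive key bytes from the logistic map x <- 4x(1-x)."""
    x, out = seed, bytearray()
    for _ in range(length):
        x = 4.0 * x * (1.0 - x)
        out.append(int(x * 255))
    return bytes(out)

def fitness(key: bytes) -> float:
    """Shannon entropy of the key bytes (higher is better)."""
    counts = {b: key.count(b) for b in set(key)}
    return -sum(c / len(key) * math.log2(c / len(key)) for c in counts.values())

# Search loop: keep the fittest candidate over many chaotic seeds.
best = max((chaotic_key(0.1 + 0.8 * i / 1000) for i in range(1, 1000)),
           key=fitness)
print(best.hex(), f"entropy={fitness(best):.2f}")
```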
Cloud computing platforms are state-of-the-art platforms widely used by various organizations. Data storage and data sharing are the most widely used services in the cloud, while maintaining data integrity is a major challenge. On a public cloud platform that is not fully trusted, users must generate digital signatures of their data and then share the generated signatures for integrity auditing. Any attack on the cloud, most likely carried out by an external entity, can compromise users' valuable data. By generating signatures, we can define rules for who can access, update, or delete the data. If data is updated by an unauthorized user, auditing can identify which data has been compromised. We use asymmetric keys: when a user uploads data to the cloud, a digital signature is created with the user's private key, and when a TPA (Third-Party Auditor) wants to check the integrity of that user's data, the signature can be verified with the user's public key. We not only achieve low-cost data storage by compressing data but also establish a data access protocol to maintain data privacy.
Authored by Subhash Rathod, Ratnashil Khobragade, Vilas Thakare, K.H. Walse, Sushama Pawar
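The sign-with-private-key, verify-with-public-key flow described above looks roughly as follows; this is a minimal sketch using the third-party `cryptography` package (an assumption), not the paper's own auditing protocol.

```python
# Minimal sketch: owner signs a data block before upload; an auditor
# verifies it with the public key. Tampering raises InvalidSignature.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"user file block stored in the cloud"

# Owner signs before upload.
signature = private_key.sign(
    data,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Auditor verifies with the public key.
try:
    private_key.public_key().verify(
        signature, data,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("integrity verified")
except InvalidSignature:
    print("data has been tampered with")
```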
When an underwater acoustic sensor network (UASN) is applied to underwater data collection, the different data importance ratings (DIR) of sensor nodes affect the scheduling time slots of data collection. In this paper, we propose a Q-learning and DIR-based media access control (Q-DIR MAC) protocol for dynamic clustering underwater acoustic sensor networks (UASNs), in which the nodes in the network may drift with the movement of ocean currents. We use the k-means algorithm to divide the nodes into several clusters. Each partitioned cluster is composed of one cluster head (CH) and several cluster members (CMs). The CMs can be divided into three levels according to their DIR: non-urgent, normal, and very urgent. The number of nodes of each of the three types follows a normal distribution. The data importance of each node is introduced into the reward function design of the Q-learning. The results show that, in dynamic clustering UASNs, the proposed Q-DIR MAC protocol can ensure that important data is sent to the destination node in time, without reducing the data success rate, under the priority transmission mechanism.
Authored by Wenxiang Zhang, Weidi Huang, Yougan Chen, Xiaomei Xu
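The key idea, folding data importance into the Q-learning reward, can be illustrated with a minimal sketch. The weights, reward shape, and state/action names are our own assumptions, not the protocol's exact design.

```python
# Minimal sketch: tabular Q-learning whose reward is weighted by each
# node's data importance rating (DIR), so urgent traffic earns more for a
# successful, timely transmission.
DIR_WEIGHT = {"non_urgent": 1.0, "normal": 2.0, "very_urgent": 4.0}
ALPHA, GAMMA = 0.1, 0.9          # learning rate, discount factor
Q = {}                           # (state, action) -> value

def reward(dir_level: str, delivered: bool, delay: float) -> float:
    """Importance-weighted reward: urgent data earns (and loses) more."""
    base = 1.0 if delivered else -1.0
    return DIR_WEIGHT[dir_level] * (base - 0.1 * delay)

def update(state, action, dir_level, delivered, delay, next_state, actions):
    r = reward(dir_level, delivered, delay)
    best_next = max((Q.get((next_state, a), 0.0) for a in actions), default=0.0)
    q = Q.get((state, action), 0.0)
    Q[(state, action)] = q + ALPHA * (r + GAMMA * best_next - q)

# One simulated slot: a very-urgent CM transmits in slot 2 and succeeds.
update("cluster3", "slot2", "very_urgent", True, 0.4, "cluster3",
       ["slot1", "slot2", "slot3"])
print(Q)
```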
Propagation delay and channel loss are two vital factors affecting the reliability of Underwater Acoustic Networks (UANs). Unlike terrestrial networks, UANs have long propagation delays and poor channel quality, which lead to serious data collisions and high bit error rates, respectively. However, complex underwater environments impose great challenges on evaluating propagation delay and channel loss. As temperature is the most critical factor affecting them, in this paper we propose to employ temperature to evaluate them. However, existing temperature prediction research is insufficient in accuracy or efficiency. This paper proposes a temperature prediction-assisted approach for evaluating propagation delay and channel loss, aiming to improve the reliability and performance of underwater acoustic networks. We build a nonlinear autoregressive dynamic neural network-based temperature prediction model to improve prediction accuracy and reduce time complexity. Then, we evaluate propagation delay and channel loss considering different marine environments, including shallow and deep sea. Extensive simulation results show that our approach performs better than five advanced baselines.
Authored by Rui Gao, Jun Liu, Shanshan Song, En Wang, Yu Gou, Tong Zhang, Jun-hong Cui
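A nonlinear autoregressive predictor of the kind the abstract describes can be sketched as a small neural network fed with lagged temperature readings; the synthetic data, lag window, and network size below are all assumptions, not the paper's model.

```python
# Minimal sketch: nonlinear autoregressive temperature prediction with an
# MLP over the last `lags` readings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic seasonal temperature series with noise.
temp = 15 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.2, t.size)

lags = 7                                   # autoregressive window
X = np.column_stack([temp[i:i - lags] for i in range(lags)])
y = temp[lags:]                            # next-step target

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0).fit(X[:-200], y[:-200])
pred = model.predict(X[-200:])
print("test RMSE:", np.sqrt(np.mean((pred - y[-200:]) ** 2)))
```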
With the rapid development of underwater sensor networks, the design of underwater demodulators becomes increasingly significant. However, underwater acoustic communication faces many problems, such as propagation time delay, multipath effects, and Doppler effects, due to the complexity of the underwater environment, making demodulation of underwater communication signals a challenging task. To solve this problem, we propose a novel binary phase shift keying (BPSK) demodulator for underwater acoustic communication based on a convolutional neural network, which demodulates the modulated data by detecting the positions of phase shifts. The method proposed in this paper significantly reduces the bit error rate (BER) compared with the results of the traditional method on the URPC1 (Underwater Robot Picking Contest) dataset.
Authored by Tianshun Han, Zhensheng Shi, Haiyong Zheng, Junyu Dong, Zhaorui Gu, Bing Zheng
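A CNN-based demodulator of this kind can be sketched as a 1-D convolutional network mapping a window of received samples to a bit decision. The architecture below is an illustrative assumption, not the paper's network.

```python
# Minimal sketch: 1-D CNN that classifies a window of received BPSK
# samples as bit 0 or 1 by learning where the carrier phase flips.
import torch
import torch.nn as nn

class BPSKDemodulator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),               # logit for bit 0/1
        )

    def forward(self, x):                   # x: (batch, 1, window) samples
        return self.net(x)

model = BPSKDemodulator()
rx = torch.randn(8, 1, 128)                # stand-in for received waveforms
bits = (torch.sigmoid(model(rx)) > 0.5).int().squeeze(1)
print(bits)
```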
The underwater acoustic sensor network (UASN) is a promising underwater networking technology with wide applications, but due to the limited energy supply there is an urgent need to design reliable, low-power routing protocols for UASNs to extend network lifetime. In this paper, we propose a Q-learning and data priority-based routing protocol with dynamic computing cluster heads (QD-DCR) to extend the network lifetime of UASNs. In the QD-DCR protocol, the underwater nodes are clustered and cluster head (CH) nodes are set; the CH nodes are responsible only for computing the optimal data transmission path and storing the Q-value table, while the non-CH nodes are responsible for data transmission. Meanwhile, according to data priority, we design different data transmission methods that can effectively use the limited resources of the UASN to transmit urgent data. To further distribute the residual energy of sensor nodes evenly, we also design dynamic CH selection, which can avoid potential energy holes. In addition, we adopt Q-learning to determine the optimal next hop instead of the greedy next hop in a cluster. We also define an action utility function that takes into account both residual energy and node depth, extending the network lifetime by distributing the residual energy evenly. Simulation results show that the proposed QD-DCR protocol can effectively extend the network lifetime compared with a classic lifetime-extended routing protocol (QELAR), while alleviating the issue of unevenly distributed residual energy in the network.
Authored by Shen Tu, Xiuling Zhu, Yougan Chen, Xiaomei Xu
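An action utility combining residual energy and node depth, as the abstract mentions, might take a weighted-sum form; the sketch below is an assumed formulation (weights, normalization, and neighbor values are ours, not QD-DCR's exact equation).

```python
# Minimal sketch: next-hop utility rewarding neighbors with high residual
# energy and smaller depth (closer to the surface sink).
def action_utility(residual_energy: float, initial_energy: float,
                   depth: float, max_depth: float,
                   w_energy: float = 0.6, w_depth: float = 0.4) -> float:
    """Higher utility -> more attractive next hop."""
    energy_term = residual_energy / initial_energy   # in [0, 1]
    depth_term = 1.0 - depth / max_depth             # shallower is better
    return w_energy * energy_term + w_depth * depth_term

# Candidate next hops: (residual energy in J, depth in m)
neighbors = {"n1": (80.0, 400.0), "n2": (50.0, 150.0), "n3": (95.0, 700.0)}
best = max(neighbors,
           key=lambda n: action_utility(neighbors[n][0], 100.0,
                                        neighbors[n][1], 1000.0))
print("next hop:", best)
```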
Underwater Acoustic Sensor Networks (UASNs) are a prominent field in communications due to their many applications. UASNs enable underwater data collection and monitoring in different applications, but they face several challenges, such as node mobility, low bandwidth, high energy consumption, and routing. The complexity of underwater routing is increased by node mobility. Several underwater routing protocols exist in the literature; they determine the next hop based on different criteria, such as link quality, residual energy, and hop-count. Many underwater routing protocols use hop-count either as the sole criterion or as one of several criteria to choose the next hop. Such routing protocols result in a lower hop-count and hence smaller end-to-end delays, making them instrumental in delay-sensitive applications where end-to-end delay is the primary requirement. However, maintaining up-to-date information on the hop-count of nodes is a major challenge due to frequent changes in underwater topology caused by water currents. This survey paper focuses on underwater routing protocols that use hop-count in selecting the next hop, and on how hop-count information is updated in various hop-count-based underwater routing protocols.
Authored by Sahil Kumar, Pradeep Nazareth, B. Chandavarkar
Traditional Web application category recognition is implemented by fingerprint rule matching, in which fingerprint rules are difficult to extract and coverage is limited. At present, many improved identification methods semi-automatically extract fingerprints through certain rules and identify Web application categories through clustering or classification algorithms, but they still rely on fingerprint rules and human intervention, and the time complexity of classification is too high to process large amounts of data. This paper proposes the Multi-layer Simhash Algorithm, combined with DBSCAN clustering, to realize intelligent identification of Web application types, pioneering the complete automation of Web application fingerprint identification. This method can discover unknown Web applications and predict unknown application types, and it solves the problems of fingerprint rule extraction and manual dependence. The paper uses the TF-IDF algorithm to extract keywords and their weights from the Web page text; then the Multi-layer Simhash Algorithm transforms the text feature words and weights into a binary characteristic hash value; finally, the Hamming distance between the input Web page and the characteristic hash value of each known category is compared with the radius of the base class, which determines the category of the input Web application. The experimental results show that the accuracy of Web application category recognition and prediction is more than 97% and 93%, respectively.
Authored by Fuji Han, Dongjun Zhu
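The simhash-plus-Hamming-distance pipeline can be illustrated with a minimal sketch of the standard (single-layer) simhash, not the paper's multi-layer variant; the sample terms, weights, and radius are assumptions.

```python
# Minimal sketch: 64-bit simhash from TF-IDF-style (term, weight) pairs,
# with pages compared by Hamming distance against a category radius.
import hashlib

def simhash(weighted_terms: dict, bits: int = 64) -> int:
    v = [0.0] * bits
    for term, weight in weighted_terms.items():
        h = int(hashlib.md5(term.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += weight if (h >> i) & 1 else -weight
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

page_a = {"login": 0.9, "wordpress": 0.8, "admin": 0.5}
page_b = {"login": 0.7, "wordpress": 0.9, "theme": 0.4}
d = hamming(simhash(page_a), simhash(page_b))
print(d, "-> same category" if d <= 16 else "-> different")  # radius 16 assumed
```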
Providing security to IoT systems is essential to protect them from various attacks. Such security features include credential management to avoid hard-coding credentials in web applications, key management for secure inter-device communication, and assignment of trust scores to devices based on various parameters. This work presents the design and implementation of an open-source simulation environment with credential management, key management, and trust score calculation features. In credential management, credentials are sent to the target device, where they are stored in a JSON file; the web application on the device uses these credentials for authentication. In key management, an X.509 certificate and a private key file are generated and used for secure message communication via a session key that is secretly exchanged between the devices. For trust score calculation, parameters are collected from the device, and feedback parameters given by other devices are also sent to the centralised server. A dynamic weighted average model is applied to the trust values derived from these parameters to obtain the trust score of the device. In addition to the design, the source code of our simulation environment is made publicly available so that researchers can alter and extend its capabilities.
Authored by Srivatsan V, Vinod Pathari
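A dynamic weighted average of device parameters and peer feedback might look like the sketch below; this is an assumed formulation for illustration (the weighting rule and parameter names are ours, not the paper's exact model).

```python
# Minimal sketch: blend a device's own measured parameters with peer
# feedback, shifting weight toward feedback as reports accumulate.
from statistics import mean

def trust_score(self_params: list, feedback: list) -> float:
    """All inputs normalized to [0, 1]; returns the blended trust score."""
    self_part = mean(self_params)
    if not feedback:
        return self_part
    # Dynamic weight: more peer reports -> feedback counts more (cap 0.7).
    w_fb = min(0.7, 0.1 * len(feedback))
    return (1 - w_fb) * self_part + w_fb * mean(feedback)

device_params = [0.9, 0.8, 0.95]        # e.g. uptime, patch level, auth success
peer_feedback = [0.6, 0.7, 0.8, 0.75]   # scores reported by other devices
print(f"trust score: {trust_score(device_params, peer_feedback):.3f}")
```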