Verifiable Dynamic Searchable Symmetric Encryption (VDSSE) enables users to securely outsource databases (document sets) to cloud servers and perform searches and updates. The verifiability property prevents users from accepting incorrect search results returned by a malicious server. However, we observe that the community currently focuses only on preventing malicious behavior by the server while ignoring incorrect updates from the client, which are very likely to happen since the client keeps no record against which to check them. Indeed, most existing VDSSE schemes cannot tolerate incorrect updates from the client; for instance, deleting a nonexistent keyword-identifier pair can break their correctness and soundness. In this paper, we demonstrate vulnerabilities of a class of existing VDSSE schemes that prevent them from ensuring correctness and soundness under incorrect updates. We propose an efficient fault-tolerant solution that treats any DSSE scheme as a black box and turns it into a fault-tolerant VDSSE secure in the malicious model. Forward privacy is an important property of DSSE that prevents the server from linking an update operation to previous search queries. Our approach can also turn any forward-secure DSSE scheme into a fault-tolerant VDSSE without breaking the forward security guarantee. In this work, we take FAST [1] (TDSC 2020), a forward-secure DSSE scheme, as an example, implement a prototype of our solution, and evaluate its performance. Even compared with the previously fastest forward-private construction that does not support fault tolerance, the experiments show that our construction saves 9× client storage and achieves better search and update efficiency.
Authored by Dandan Yuan, Shujie Cui, Giovanni Russello
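To make the failure mode concrete, below is a minimal sketch (our own illustration, not the paper's construction) of a client-side guard that sanitizes updates before they reach a black-box DSSE scheme; `dsse_update` is a hypothetical handle to the underlying scheme. Note that this naive local set costs the client storage linear in the database size, which is exactly the overhead an efficient fault-tolerant design aims to avoid.

```python
# Illustrative sketch: filter out incorrect updates (duplicate insertions,
# deletions of nonexistent pairs) before forwarding them to any DSSE scheme.
class FaultTolerantClient:
    def __init__(self, dsse_update):
        self.live = set()              # locally tracked (keyword, doc_id) pairs
        self.dsse_update = dsse_update # hypothetical black-box DSSE update call

    def update(self, op, keyword, doc_id):
        pair = (keyword, doc_id)
        if op == "add":
            if pair in self.live:
                return False           # duplicate insertion: drop
            self.live.add(pair)
        elif op == "del":
            if pair not in self.live:
                return False           # deleting a nonexistent pair: drop
            self.live.discard(pair)
        else:
            raise ValueError(op)
        self.dsse_update(op, keyword, doc_id)   # forward the sanitized update
        return True
```

Because the guard runs entirely on the client, it does not change what the server observes, so a forward-private DSSE scheme behind it would keep its guarantee.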
Although the public cloud is known for its powerful capabilities, consumers cannot fully depend on cloud service providers to protect personal data, owing to the lack of client control. To preserve privacy, data owners outsource encrypted data rather than plaintext. To enforce fine-grained access control over ciphertexts and share decryption capabilities with others, ciphertext-policy attribute-based encryption (CP-ABE) may be employed. This, however, falls short of effectively protecting against new threats. Under a number of older methods, the public cloud is unable to validate whether a downloader can actually decrypt; the stored files are therefore accessible to anyone with access to the data store. A malicious attacker may download large numbers of files in order to launch Economic Denial of Sustainability (EDoS) attacks, greatly depleting cloud resources, while the cloud storage user is responsible for paying the fee. Additionally, the public cloud serves as both the accountant and the payee of resource-consumption costs, without offering data owners any transparency. Cloud storage infrastructure should assuage these concerns in practice. In this study, we provide a technique for resource accountability and defense against EDoS attacks for encrypted cloud storage. It uses CP-ABE schemes in a black-box manner and complies with the access policy of arbitrary CP-ABE constructions. After presenting two methods for different settings, speed and security evaluations are given.
Authored by Ankur Biswas, K V, Pradeep, Arvind Pandey, Surendra Shukla, Tej Raj, Abhishek Roy
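As a hedged illustration of the verification gap described above, the sketch below shows one way a server could check decryption capability before serving a file: encrypt a random nonce under the access policy and serve the ciphertext only if the downloader returns the nonce. `abe_encrypt` and `abe_decrypt` are placeholders for any black-box CP-ABE implementation, not a real library API, and this is not the paper's protocol.

```python
# Hypothetical challenge-response sketch: before serving a large ciphertext,
# the server checks that the requester can decrypt under the policy, which
# blocks EDoS-style bulk downloads by ineligible users.
import os, hmac, hashlib

def issue_challenge(abe_encrypt, policy):
    nonce = os.urandom(16)
    return nonce, abe_encrypt(policy, nonce)   # only eligible users recover it

def answer_challenge(abe_decrypt, secret_key, challenge_ct):
    return abe_decrypt(secret_key, challenge_ct)

def verify_and_serve(nonce, response, ciphertext):
    # constant-time comparison; serve the file only on a correct response
    if hmac.compare_digest(hashlib.sha256(nonce).digest(),
                           hashlib.sha256(response or b"").digest()):
        return ciphertext
    return None                                 # download attempt blocked
```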
With the advent of the era of the Internet of Things (IoT), increasing data volumes make storage outsourcing a new trend for enterprises and individuals. However, data breaches frequently occur, bringing significant challenges to the privacy protection of outsourced data management systems. There is an urgent need for efficient and secure data sharing schemes for outsourced data management infrastructure, such as the cloud. Therefore, this paper designs a dual-server-based data sharing scheme with data privacy and high efficiency for the cloud, enabling internal members to exchange their data efficiently and securely. The dual servers guarantee that neither server can obtain the complete data independently, by adopting secure two-party computation. In our proposed scheme, if the data is destroyed while being sent to the user, it cannot be restored. To prevent malicious deletion, the data owner adds a random number to verify identity during the uploading procedure. To ensure data security, the data is transmitted in ciphertext throughout the process by using searchable encryption. Finally, black-box leakage analysis and a theoretical performance evaluation demonstrate that our proposed data sharing scheme provides solid security and high efficiency in practice.
Authored by Xingqi Luo, Haotian Wang, Jinyang Dong, Chuan Zhang, Tong Wu
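A minimal sketch of the dual-server principle, assuming additive (XOR) secret sharing as the underlying mechanism; the paper's actual secure two-party computation protocol is richer than this, but the sketch shows why neither server alone learns anything.

```python
# Each server stores one share; either share alone is uniformly random,
# so only the two servers together (or the client) can reconstruct.
import secrets

def split(data: bytes):
    share_a = secrets.token_bytes(len(data))             # goes to server A
    share_b = bytes(d ^ a for d, a in zip(data, share_a))  # goes to server B
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

share_a, share_b = split(b"member record 42")
assert reconstruct(share_a, share_b) == b"member record 42"
```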
This article analyzes a joint data security architecture that integrates artificial intelligence and cloud computing in the era of big data. The article discusses the integrated applications of big data, artificial intelligence, and cloud computing. As an important part of big data security protection, the joint data security protection technical architecture is not only related to the security of joint data in the big data era, but also has an important impact on the overall development of the data era. On this basis, the thesis takes big data security and the joint data security protection technical architecture as its research content; after a brief explanation of big data security, it conducts a detailed study of the big data security and joint data security protection technical architecture from five aspects, together with further reflections.
Authored by Jikui Du
Big Data (BD) is the combination of several technologies that address the gathering, analyzing, and storing of massive heterogeneous data. The tremendous growth of the Internet of Things (IoT) and related technologies is the fundamental driver behind this enduring development. Moreover, the analysis of this data requires high-performance servers for advanced and parallel data analytics. Thus, data owners with limited capabilities may outsource their data to a powerful but untrusted environment, i.e., the cloud. Furthermore, data analytic techniques performed in an external cloud may raise various security threats regarding the confidentiality and integrity of the transferred, analyzed, and stored data. To counter these security issues and challenges, several techniques have been proposed. This survey paper aims to summarize and emphasize the security threats within the Big Data framework, and to highlight research work related to Big Data Analytics (BDA).
Authored by Hany Habbak, Khaled Metwally, Ahmed Mattar
Cloud computing has become an integral part of medical big data. The cloud's capability to store large data volumes has attracted growing attention. The integrity and privacy of patient data are among the issues that cloud-based medical big data systems must address. This research work introduces a data integrity auditing scheme for cloud-based medical big data, which helps minimize the risk of unauthorized access to the data. Multiple copies of the data are stored to ensure that it can be recovered quickly in case of damage. The scheme also enables doctors to easily track changes in patients' conditions through the data blocks. The simulation results prove the effectiveness of the proposed scheme.
Authored by A. Vineela, N. Kasiviswanath, Shoba Bindu
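The sketch below illustrates one plausible shape of such auditing (an assumption on our part, not the paper's protocol): the owner keeps per-block digests and spot-checks random blocks of each stored copy, falling back to another replica when a check fails.

```python
# Illustrative replica-integrity spot check over fixed-size data blocks.
import hashlib, random

BLOCK = 4096

def block_tags(data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(tags, fetch_block, rounds=10):
    # `fetch_block(i)` is a hypothetical callback returning block i from one copy
    for i in random.sample(range(len(tags)), min(rounds, len(tags))):
        if hashlib.sha256(fetch_block(i)).hexdigest() != tags[i]:
            return False        # damaged copy: recover from another replica
    return True
```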
The big data platform based on cloud computing realizes the storage, analysis, and processing of massive data, and provides users with more efficient, accurate, and intelligent Internet services. Combined with the characteristics of a college teaching resource sharing platform based on the cloud computing model, the platform's multi-faceted security defense strategy is studied in terms of security management, security inspection, and technical means. In the detection module, the support vector machine is optimized, the detection period is determined, DDoS traffic characteristics are extracted, and a source-ID blacklist is established; in the defense module, the triggering of the defense mechanism, the construction of the forwarder's forwarding queue, and the reallocation of forwarder forwarding capability are realized.
Authored by Zhiyi Xing
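A hedged sketch of the detection module's core idea using scikit-learn; the flow features and training values here are assumptions for illustration, not the paper's feature set or data.

```python
# Toy SVM-based DDoS detector over simple per-window flow features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: packets/s, bytes/s, distinct source IDs in window, SYN ratio
X_train = np.array([[120, 9e4, 15, 0.10], [80, 5e4, 9, 0.20],
                    [9000, 7e6, 800, 0.90], [12000, 9e6, 950, 0.95]])
y_train = np.array([0, 0, 1, 1])     # 0 = benign window, 1 = DDoS window

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

if clf.predict([[11000, 8e6, 900, 0.92]])[0] == 1:
    print("flag offending source IDs for the blacklist")
```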
With the development of information networks, cloud computing, big data, and virtualization technologies promote the emergence of various new network applications to meet the needs of various Internet services. A security protection system for virtual hosts in a cloud computing center is proposed in this article. The system takes “security as a service” as its starting point, takes virtual machines as the core, and provides unified security protection in units of virtual machine clusters against the borderless characteristics of virtualized computing. The thesis builds a network security protection system against APT attacks, uses the system dynamics method to establish a system capability model, and conducts simulation analysis. The simulation results prove the validity and rationality of the proposed network communication security system framework and modeling analysis method. Compared with traditional methods, this method models and analyzes a more comprehensive set of elements, and the derived results are more instructive.
Authored by Xin Nie, Chengcheng Lou
Data security is a vast term without fixed limits, but there are a certain number of tools and techniques that can help in attaining security. The honeypot is among the tools designed to protect the security of a network, but in a very different manner: it is a system designed and developed to be compromised and exploited. Honeypots are meant to lure invaders, but as computing systems advance, intrusion technologies are gaining influence in parallel. In this research work, an approach involving Apache Spark (a big data technique) is introduced for use with the honeypot system. This work includes an extensive study based on several research papers, through which detailed experiment-based results are presented for the best-known open-source honeypot systems. The most promising method of using the honeypot with Apache Spark in a sequential channel is also proposed, with the help of a framework diagram.
Authored by Akshay Mudgal, Shaveta Bhatia
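As a rough illustration of pairing a honeypot with Apache Spark, the PySpark sketch below aggregates attacker activity from honeypot logs; the log path and JSON field names (`src_ip`, `command`) are assumptions about the honeypot's output format, not anything specified by the paper.

```python
# Batch-analyze honeypot logs with Spark: rank attackers by hit count
# and by the variety of commands they attempted.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("honeypot-analytics").getOrCreate()

logs = spark.read.json("hdfs:///honeypot/logs/*.json")   # illustrative path
top_attackers = (logs.groupBy("src_ip")
                     .agg(F.count("*").alias("hits"),
                          F.countDistinct("command").alias("distinct_cmds"))
                     .orderBy(F.desc("hits")))
top_attackers.show(20)
```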
Intelligent, smart, Cloud, reconfigurable manufacturing, and remote monitoring all intersect in modern industry and mark the path toward more efficient, effective, and sustainable factories. Many obstacles are found along the path, including legacy machineries and technologies, security issues, and software that is often hard, slow, and expensive to adapt to face unforeseen challenges and needs in this fast-changing ecosystem. Lightweight, portable, loosely coupled, easily monitored, variegated software components, supporting Edge, Fog and Cloud computing, that can be (re)created, (re)configured and operated from remote through Web requests in a matter of milliseconds, and that rely on libraries of ready-to-use tasks also extendable from remote through sub-second Web requests, constitute a fertile technological ground on top of which fourth-generation industries can be built. In this demo it will be shown how, starting from a completely virgin Docker Engine, it is possible to build, configure, destroy, rebuild, and operate, exclusively from remote, exclusively via API calls, computation networks that can (i) raise alerts based on configured thresholds or trained ML models, (ii) transform Big Data streams, (iii) produce and persist Big Datasets on the Cloud, (iv) train and persist ML models on the Cloud, (v) use trained models for one-shot or stream predictions, and (vi) produce tabular visualizations, line plots, pie charts, and histograms, in real time, from Big Data streams. Also, it will be shown how easily such computation networks can be upgraded with new functionalities in real time, from remote, via API calls.
Authored by Mirco Soderi, Vignesh Kamath, John Breslin
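A minimal sketch of the remote, API-driven workflow demonstrated above, using the docker-py SDK; the engine address, image, and container role are placeholders, not details from the demo.

```python
# Create, inspect, and tear down a container on a remote Docker Engine,
# entirely through API calls.
import docker

client = docker.DockerClient(base_url="tcp://factory-edge:2375")  # remote engine

# (re)create a lightweight component in a fraction of a second
container = client.containers.run(
    "alpine:latest",
    command=["sh", "-c", "echo alert-node ready && sleep 3600"],
    name="alert-node-1",
    detach=True,
)
print(container.status)

# destroy and later rebuild it, again exclusively via API calls
container.stop(timeout=1)
container.remove()
```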
High-volume, fast-paced, diversified, and high-value data are becoming a focal point of data processing and research. Privacy security protection based on the data life cycle is a method to protect privacy. It is used to protect the confidentiality, integrity, and availability of personal data and to prevent unauthorized access or use. The main advantage of this method is that it can fully control all aspects related to the information system and its users. With the openness of the cloud, attackers can use the cloud to re-compute and analyze big data in ways that may infringe on others' privacy. Privacy protection based on the data life cycle is a means of privacy protection covering the whole process of data production, collection, storage, and use. This approach involves all stages, from the creation of personal information by individuals (e.g., by filling out forms online or at work) to destruction after use for the intended purpose (e.g., deleting records). Privacy security based on the data life cycle ensures that any personal information collected is used only for the purpose of its initial collection and is destroyed as soon as possible.
Authored by Hongjun Zhang, Shuyan Cheng, Qingyuan Cai, Xiao Jiang
This paper designs a network security protection system based on artificial intelligence technology, covering both hardware and software. The system can simultaneously collect public Internet data and confidential data inside the organization, and encrypts the latter through a TCM chip solidified in the hardware to ensure that only designated machines can read confidential material. The data edge-cloud collaborative acquisition architecture based on chip encryption can realize cross-network transmission of confidential data. At the same time, this paper proposes an edge-cloud collaborative information security protection method for industrial control systems by combining end-address hopping and load balancing algorithms. Finally, using WinCC, Unity3D, MySQL, and other development environments, the feasibility and effectiveness of the system are verified by experiments.
Authored by Xiuyun Lu, Wenxing Zhao, Yuquan Zhu
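As a loose illustration of the end-address hopping idea (our assumption of one common realization, not the paper's algorithm), endpoints sharing a secret can derive a synchronized, time-sliced address that an outside attacker cannot predict.

```python
# Hedged sketch of end-address hopping: both endpoints derive the same
# pseudo-random address for the current time slot from a shared secret.
import hmac, hashlib, time

def hop_address(secret: bytes, slot_len: int = 30) -> str:
    slot = int(time.time() // slot_len)            # current hopping interval
    digest = hmac.new(secret, str(slot).encode(), hashlib.sha256).digest()
    return f"10.0.{digest[0]}.{digest[1]}"         # illustrative address space

print(hop_address(b"shared-secret"))
```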
This paper analyzes, from various perspectives, the problems in China's existing emergency management technology system, and designs the construction of an intelligent emergency system in combination with the development of the new generation of Internet of Things, big data, cloud computing, and artificial intelligence technologies. The overall design relies on scientific and technological innovation to lead the reform of the emergency management mechanism and process reengineering, so as to build an intelligent emergency technology system characterized by "holographic monitoring, early warning, intelligent research and accurate disposal". The goal is an intelligent emergency management system that integrates intelligent monitoring and early warning, intelligent emergency disposal, efficient recovery and rehabilitation, improved emergency standards, and safe operation and maintenance.
Authored by Huan Shi, Bo Hui, Biao Hu, RongJie Gu
The Internet of Vehicles (IoV) is driving a rapid expansion of connected devices. This massive number of devices constantly generates massive, near-real-time data streams for numerous applications, known as big data. Analyzing such big data to find, predict, and control decisions is a critical means for the IoV to enhance service quality and experience. Thus, the main goal of this paper is to study the impact of big data analytics on traffic prediction in the IoV. We use big data analytics steps to predict traffic flow based on different deep neural models, such as LSTM, CNN-LSTM, and GRU. The models are validated using the evaluation metrics MAE, MSE, RMSE, and R². Hence, a case study based on a real-world road is used to implement and test the efficiency of the traffic prediction models.
Authored by Hakima Khelifi, Amani Belouahri
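A sketch of the simplest of the models named above (a plain LSTM) for traffic-flow prediction with Keras; the window size, layer widths, and the stand-in random data are assumptions, not the paper's configuration.

```python
# Predict the next traffic-flow reading from a sliding window of past readings.
import numpy as np
import tensorflow as tf

WINDOW = 12                    # past 12 readings predict the next one
X = np.random.rand(1000, WINDOW, 1).astype("float32")   # stand-in sequences
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanAbsoluteError(),
                       tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```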
The age of data (AoD) is identified as one of the most novel and important metrics to measure the quality of big data analytics for Internet-of-Things (IoT) applications. Meanwhile, mobile edge computing (MEC) is envisioned as an enabling technology to minimize the AoD of IoT applications by processing the data in edge servers close to IoT devices. In this paper, we study the AoD minimization problem for IoT big data processing in MEC networks. We first propose an exact solution for the problem by formulating it as an Integer Linear Program (ILP). We then propose an efficient heuristic for the offline AoD minimization problem. We also devise an approximation algorithm with a provable approximation ratio for a special case of the problem, by leveraging the parametric rounding technique. We further develop an online learning algorithm with a bounded regret for the online AoD minimization problem under dynamic arrivals of IoT requests and uncertain network delay assumptions, by adopting the Multi-Armed Bandit (MAB) technique. We finally evaluate the performance of the proposed algorithms by extensive simulations and implementations in a real test-bed. Results show that the proposed algorithms outperform existing approaches, reducing the AoD by around 10%.
Authored by Zichuan Xu, Wenhao Ren, Weifa Liang, Wenzheng Xu, Qiufen Xia, Pan Zhou, Mingchu Li
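To give a feel for the MAB ingredient, here is a toy epsilon-greedy bandit that learns which edge server yields the lowest AoD; the paper's algorithm has a provable regret bound, whereas this simple variant is only an illustration of the idea.

```python
# Epsilon-greedy server selection: mostly exploit the server with the
# lowest observed mean AoD, occasionally explore the others.
import random

class EdgeServerBandit:
    def __init__(self, n_servers, eps=0.1):
        self.eps = eps
        self.counts = [0] * n_servers
        self.mean_aod = [0.0] * n_servers   # running mean of observed AoD

    def choose(self):
        if random.random() < self.eps:
            return random.randrange(len(self.counts))       # explore
        return min(range(len(self.counts)), key=lambda i: self.mean_aod[i])

    def observe(self, server, aod):
        self.counts[server] += 1
        self.mean_aod[server] += (aod - self.mean_aod[server]) / self.counts[server]
```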
Software Defined Networking (SDN) is a solution for Data Center Networks (DCN). It offers centralized control that helps simplify management and reduce big data issues in storage management and data analysis. This paper investigates the performance of deploying an SDN controller in a DCN. It considers network topologies with different numbers of hosts using the Mininet emulator, and evaluates the performance of the DCN with Python-based SDN controllers. The evaluation compares the POX and RYU controllers as DCN solutions in terms of throughput, delay, overhead, and convergence time. The results show that POX outperforms the RYU controller and is the best choice for DCN.
Authored by Jellalah Alzarog, Abdalwart Almhishi, Abubaker Alsunousi, Tareg Abulifa, Wisam Eltarjaman, Salem Sati
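A sketch of the kind of experimental setup described: a Mininet topology attached to an external Python controller (POX or RYU) listening on the default OpenFlow port. The host count and addresses are illustrative, not the paper's exact topologies.

```python
# Attach an emulated single-switch topology to a remote SDN controller
# and take a connectivity/loss baseline.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

topo = SingleSwitchTopo(k=8)                 # 8 hosts on one switch
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(name,
                                                       ip="127.0.0.1",
                                                       port=6633))
net.start()
print(net.pingAll())                         # packet-loss baseline
net.stop()
```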
Fraud mechanisms have evolved from isolated actions performed by single individuals to complex criminal networks. This paper aims to contribute to the identification of potentially relevant nodes in fraud networks. While traditional methods for fraud detection rely on identifying abnormal patterns, this paper proposes STARBRIDGE: a new linear and scalable, ranked, parameter-free method to identify fraudulent nodes and rings based on bridging, influence, and control metrics. This is applied to the telecommunications domain, where fraudulent nodes form a star-bridge-star pattern. Over 75% of nodes involved in fraud exhibit elevated control and bridging centrality and double the influence scores compared with non-fraudulent nodes in the same role, stars and bridges being the chief positions.
Authored by Pedro Fidalgo, Rui Lopes, Christos Faloutsos
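To illustrate the star-bridge-star pattern, the sketch below builds a toy graph of two stars joined by a bridge and computes betweenness centrality (used here as a stand-in for the paper's bridging metric) with networkx; the hubs and bridge dominate the scores, as fraudulent rings do in the abstract.

```python
# Two stars joined by a bridge: the hub/bridge nodes score highest
# on betweenness (bridging) centrality.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("hub1", leaf) for leaf in ("a", "b", "c")])  # first star
G.add_edges_from([("hub2", leaf) for leaf in ("x", "y", "z")])  # second star
G.add_edge("hub1", "hub2")                                       # the bridge

bc = nx.betweenness_centrality(G)
print(sorted(bc, key=bc.get, reverse=True)[:2])   # hub1 and hub2 lead
```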
Application domains like big data and IoT require a lot of user data to be collected and analyzed to extract useful information, and that data might include users' sensitive and personal information. Hence, it is strongly required to ensure the privacy of user data before releasing it into the public space. Since the fields of IoT and big data are constantly evolving with new types of privacy attacks and prevention mechanisms, there is an urgent need for new research and surveys to develop an overview of the state of the art. We conducted a systematic mapping study on selected papers related to user privacy in IoT and big data, published between 2010 and 2021. This study focuses on identifying the main privacy objectives, attacks, and measures taken to prevent attacks in the two application domains. Additionally, a visualized classification of the existing attacks is presented, along with privacy metrics, to draw similarities and dissimilarities among different attacks.
Authored by Raisa Islam, Mohammad Hossen, Dongwan Shin
Mobile terminals, especially smartphones, are changing people's work and life styles. For example, mobile payments are experiencing rapid growth as consumers use mobile terminals as part of their lifestyle. However, security is a big challenge for mobile application services. In order to reduce security risks, a mobile terminal security assessment should be conducted before providing application services. An approach for comprehensive security assessment is proposed in this paper by defining security metrics with corresponding scores and determining the relative weights of the security metrics based on the analytic hierarchy process (AHP). An overall security assessment of Android-based mobile terminals is implemented for mobile payment services with a payment fraud detection accuracy of 89%, which shows that the proposed approach to security assessment is reasonable.
Authored by Zhiyuan Hu, Linghang Shi, Huijun Chen, Chao Li, Jinghui Lu
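The AHP weighting step mentioned above can be sketched concretely: derive metric weights from a pairwise comparison matrix via its principal eigenvector. The comparison values below are made up for illustration, not the paper's judgments.

```python
# Derive relative weights of three security metrics from a Saaty-scale
# pairwise comparison matrix (principal-eigenvector method).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(weights)    # relative metric weights, summing to 1
```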
Code optimization is an essential feature of compilers, and almost all software products are released with compiler optimizations applied. Consequently, bugs in code optimization inevitably have a significant impact on the correctness of software systems. Locating optimization bugs in compilers is challenging, as compilers typically support a large number of optimization configurations. Although prior studies have proposed to locate compiler bugs via generating witness test programs, they are still time-consuming and not effective enough. To address these limitations, we propose an automatic bug localization approach, ODFL, for locating compiler optimization bugs via differentiating finer-grained options. Specifically, we first independently disable the fine-grained options that are enabled by default under the bug-triggering optimization levels, to obtain bug-free and bug-related fine-grained options. We then configure several effective passing and failing optimization sequences based on these fine-grained options to obtain multiple failing and passing compiler coverage profiles. Finally, the generated coverage information can be fed to Spectrum-Based Fault Localization formulae to rank the suspicious compiler files. We run ODFL on 60 buggy GCC compilers from an existing benchmark. The experimental results show that ODFL significantly outperforms RecBi, the state-of-the-art compiler bug isolation approach, in terms of all the evaluated metrics, demonstrating the effectiveness of ODFL. In addition, ODFL is much more efficient than RecBi, as it saves more than 88% of the bug-locating time on average.
Authored by Jing Yang, Yibiao Yang, Maolin Sun, Ming Wen, Yuming Zhou, Hai Jin
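The final SBFL step can be sketched with the classic Ochiai formula, one common choice among Spectrum-Based Fault Localization formulae (the abstract does not say which formulae ODFL uses, so this is illustrative); the coverage counts below are invented.

```python
# Rank compiler files by Ochiai suspiciousness from pass/fail coverage counts.
import math

def ochiai(failed_cov, passed_cov, total_failed):
    # failed_cov: failing runs covering the file; passed_cov: passing runs
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

files = {"gcc/combine.c": (9, 1), "gcc/expr.c": (2, 8)}   # illustrative counts
ranked = sorted(files, key=lambda f: ochiai(*files[f], total_failed=10),
                reverse=True)
print(ranked)   # most suspicious file first
```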
The increased dissemination of open source software to a broader audience has led to a proportional increase in the dissemination of vulnerabilities. These vulnerabilities are introduced by developers, some intentionally or negligently. In this paper, we work to quantify the relative risk that a given developer represents to a software project. We propose using empirical-software-engineering-based analysis on the vast data made available by GitHub to create a Developer Risk Score (DRS) for prolific contributors on GitHub. The DRS can then be aggregated across a project as a derived vulnerability assessment, which we call the Computational Vulnerability Assessment Score (CVAS). The CVAS represents the correlation between the Developer Risk Scores across projects and the vulnerabilities attributed to those projects. We believe this to be a contribution toward quantifying the risk introduced by specific developers across open source projects. Both of the risk scores, those for contributors and for projects, are derived from an amalgamation of data from GitHub and beyond. We seek to provide this risk metric as a force multiplier for the project maintainers responsible for reviewing code contributions. We hope this will lead to a reduction in the number of vulnerabilities introduced into projects in the open source ecosystem.
Authored by Jon Chapman, Hari Venugopalan
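A loose sketch of the aggregation-and-correlation idea (the paper's exact formulas are not given here, so every choice below, including the commit-share weighting, is an assumption): weight contributor risk scores by commit share to get a project score, then correlate project scores with vulnerability counts.

```python
# Illustrative DRS -> project-score aggregation and correlation check.
import numpy as np
from scipy.stats import pearsonr

def project_score(drs, commit_share):
    return float(np.average(drs, weights=commit_share))

projects = [project_score([0.2, 0.7], [120, 30]),
            project_score([0.9, 0.4], [200, 50]),
            project_score([0.1, 0.3], [80, 80])]
vuln_counts = [3, 11, 1]                    # fabricated illustrative counts

r, p = pearsonr(projects, vuln_counts)
print(f"correlation r={r:.2f}, p={p:.3f}")
```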
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques to understand and build AI models with trust, reliability, safety, and fairness. Feature importance explanation summarizes feature contributions so that end-users can make sense of model decisions. However, XAI methods may produce varied summaries, prompting further analysis to evaluate consistency across multiple XAI methods on the same model and data set. This paper defines metrics to measure the consistency of feature contribution explanation summaries under feature importance order and saliency maps. Driven by these consistency metrics, we develop an XAI process oriented around the criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process on twelve XAI methods and three topics: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process with defined consistency metrics for producing rigorous feature contribution explanations.
Authored by Jun Huang, Zerui Wang, Ding Li, Yan Liu
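One plausible instance of a feature-importance-order consistency metric is rank agreement between two XAI methods' importance orders; the choice of Kendall's tau below is our assumption for illustration, not necessarily the paper's metric.

```python
# Rank-order consistency between two XAI methods over the same features.
from scipy.stats import kendalltau

features = ["age", "income", "tenure", "region", "score"]
rank_a = [1, 2, 3, 4, 5]    # importance ranks from method A (e.g., SHAP)
rank_b = [2, 1, 3, 5, 4]    # importance ranks from method B (e.g., LIME)

tau, p = kendalltau(rank_a, rank_b)
print(f"rank consistency tau={tau:.2f}")   # 1.0 = identical orders
```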
Biometric security is a fast-growing area that has received considerable attention over the past few years. Digital hiding and encryption technologies provide an effective solution to secure biometric information from intentional or accidental attacks. Visual cryptography is an approach for encrypting information that takes the form of visual data, for example images. Since biometric templates stored in databases are generally images, visual cryptography can be employed effectively for protecting templates from attack. This study develops a share creation with improved encryption process for secure biometric verification (SCIEP-SBV) technique. The presented SCIEP-SBV technique mainly aims to attain security via encryption and a share creation (SC) procedure. First, the biometric images undergo the SC process to produce several shares. For the encryption process, a homomorphic encryption (HE) technique is utilized in this work. To further improve secrecy, an improved bald eagle search (IBES) approach is exploited. The simulation values of the SCIEP-SBV system are tested on biometric images. The extensive comparison study demonstrates the improved outcomes of the SCIEP-SBV technique over compared methods.
Authored by Shammi L, Milind, Emilin Shyni, Khair Nisa, Ravi Bora, S. Saravanan
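A minimal (2,2) XOR-based visual secret sharing sketch showing the share-creation principle only; the paper's SC procedure, HE layer, and IBES optimization are richer than this, and the stand-in template is random data.

```python
# Split a template image into two shares; each share alone is uniformly
# random, and XOR-ing both shares reconstructs the template.
import numpy as np

def make_shares(template: np.ndarray):
    rng = np.random.default_rng()
    share1 = rng.integers(0, 256, template.shape, dtype=np.uint8)
    share2 = share1 ^ template        # hides the template in either share
    return share1, share2

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in template
s1, s2 = make_shares(img)
assert np.array_equal(s1 ^ s2, img)   # both shares needed to reconstruct
```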
When storing face biometric samples in accordance with ISO/IEC 19794 as JPEG2000-encoded images, it is necessary to encrypt them for the sake of users' privacy. The literature suggests selective encryption of JPEG2000 images as a fast and efficient method of encryption; the trade-off is that some information is left in plaintext, which could be used by an attacker in case the encrypted biometric samples are leaked. In this work, we attempt to utilize a convolutional neural network to perform cryptanalysis of the encryption scheme. That is, we want to assess whether any information left in plaintext in the selectively encrypted face images can be used to identify the person. The chosen approach is to train CNNs for biometric face recognition not only with plaintext face samples but additionally to conduct a refinement training with partially encrypted data. If this system can successfully utilize encrypted face samples for biometric matching, we can show that the information left in encrypted biometric face samples is actually usable for biometric recognition. The method works, and we show that a supposedly secure biometric sample still contains identifying information on average over the whole database.
Authored by Heinz Hofbauer, Yoanna Martínez-Díaz, Luis Luevano, Heydi Méndez-Vázquez, Andreas Uhl
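The refinement-training step can be sketched as continued training of a recognition CNN on selectively encrypted samples; the toy model below and its random stand-in data are placeholders, not the paper's architecture or dataset.

```python
# Continue training ("refine") a face-ID classifier on partially
# encrypted samples to probe what identifying signal survives encryption.
import numpy as np
import tensorflow as tf

n_ids = 100
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(112, 112, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_ids, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# pretend these are JPEG2000 faces after selective encryption
x_enc = np.random.rand(256, 112, 112, 1).astype("float32")
y_ids = np.random.randint(0, n_ids, 256)
model.fit(x_enc, y_ids, epochs=3, batch_size=32)   # refinement pass
```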
In this paper, an overall introduction to fingerprint encryption algorithms is given, and a fingerprint encryption algorithm with error correction is then designed by adding an error correction mechanism. This new fingerprint encryption algorithm can generate a random key in the form of polynomial coefficients using a binary sequence generator, encrypt the fingerprint, and use Lagrange interpolation to recover the polynomial during authentication. Because a cyclic redundancy check (CRC) code is used to identify the correct key, the accuracy of the algorithm can be ensured. Experimental results indicate that the fuzzy vault algorithm with error correction realizes template protection well and meets the requirements of biometric information security protection. In addition, they indicate that the system's security can be enhanced by changing the key's length.
Authored by Liang Chang
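The key-recovery step described above can be sketched directly: the key is hidden in a polynomial's constant term, and Lagrange interpolation over enough genuine points restores it at authentication time. The field size, polynomial, and points below are illustrative, and this omits the chaff points and CRC filtering of a full fuzzy vault.

```python
# Recover the vault key f(0) from degree+1 genuine (x, y) points via
# Lagrange interpolation over a prime field.
P = 2**31 - 1      # prime modulus (illustrative parameter)

def lagrange_at_zero(points):
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# f(x) = key + 7x + 3x^2; three genuine minutiae points suffice
f = lambda x: (123456789 + 7 * x + 3 * x * x) % P
points = [(2, f(2)), (5, f(5)), (9, f(9))]
assert lagrange_at_zero(points) == 123456789
```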