The development of information networks, cloud computing, big data, and virtualization technologies has promoted the emergence of new network applications that meet the needs of various Internet services. This article proposes a security protection system for virtual hosts in cloud computing centers. The system takes "security as a service" as its starting point, places virtual machines at its core, and uses virtual machine clusters as the unit of unified security protection, addressing the borderless nature of virtualized computing. The paper builds a network security protection system against APT attacks, establishes a system capability model using the system dynamics method, and conducts simulation analysis. The simulation results demonstrate the validity and rationality of the proposed network communication security framework and modeling analysis method. Compared with traditional methods, this method covers more comprehensive modeling and analysis elements, and its deduced results are more instructive.
Authored by Xin Nie, Chengcheng Lou
Data security is a vast field without clear limits, but a number of tools and techniques can help strengthen it. The honeypot is one tool designed to protect the security of a network, though in a very different manner: it is a system designed and developed to be compromised and exploited. Honeypots are meant to lure invaders, but as computing systems advance, intrusion technologies are gaining influence in parallel. This research introduces an approach that pairs Apache Spark (a big data technique) with a honeypot system. The work includes an extensive study based on several research papers, from which experiment-based results on the best-known open-source honeypot systems are presented. The most promising method of using a honeypot with Apache Spark in a sequential channel is also proposed with the help of a framework diagram.
Authored by Akshay Mudgal, Shaveta Bhatia
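As a toy illustration of pairing a honeypot with Apache Spark, the following sketch streams honeypot events through Spark Structured Streaming and tallies login attempts per attacker IP. It assumes Cowrie-style JSON logs landing in a local honeypot_logs/ directory; the field names are illustrative, not taken from the paper.

```python
# Hypothetical sketch: feeding honeypot logs into Apache Spark for
# near-real-time attacker analysis. Assumes Cowrie-style JSON logs.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("honeypot-spark").getOrCreate()

schema = (StructType()
          .add("src_ip", StringType())
          .add("eventid", StringType())
          .add("timestamp", TimestampType()))

events = spark.readStream.schema(schema).json("honeypot_logs/")

# Count login attempts per source IP as the stream arrives.
attempts = (events
            .filter(col("eventid").startswith("cowrie.login"))
            .groupBy("src_ip")
            .count())

query = (attempts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```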
Intelligent, smart, Cloud, reconfigurable manufacturing, and remote monitoring all intersect in modern industry and mark the path toward more efficient, effective, and sustainable factories. Many obstacles are found along the path, including legacy machinery and technologies, security issues, and software that is often hard, slow, and expensive to adapt to unforeseen challenges and needs in this fast-changing ecosystem. Lightweight, portable, loosely coupled, easily monitored, variegated software components, supporting Edge, Fog, and Cloud computing, that can be (re)created, (re)configured, and operated remotely through Web requests in a matter of milliseconds, and that rely on libraries of ready-to-use tasks also extendable remotely through sub-second Web requests, constitute a fertile technological ground on top of which fourth-generation industries can be built. This demo shows how, starting from a completely virgin Docker Engine, it is possible to build, configure, destroy, rebuild, and operate, exclusively from remote and exclusively via API calls, computation networks that can (i) raise alerts based on configured thresholds or trained ML models, (ii) transform Big Data streams, (iii) produce and persist Big Datasets on the Cloud, (iv) train and persist ML models on the Cloud, (v) use trained models for one-shot or stream predictions, and (vi) produce tabular visualizations, line plots, pie charts, and histograms, in real time, from Big Data streams. It is also shown how easily such computation networks can be upgraded with new functionalities in real time, remotely, via API calls.
Authored by Mirco Soderi, Vignesh Kamath, John Breslin
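A minimal sketch of the remote-API workflow the demo describes, using the official Docker SDK for Python. The engine address, image, and container name are illustrative assumptions, not the authors' actual tooling.

```python
# Minimal sketch: driving a (remote) Docker Engine purely through API
# calls with the official Python SDK. All names below are assumptions.
import docker

# base_url would point at the factory edge node in a real deployment.
client = docker.DockerClient(base_url="tcp://edge-node.example:2375")

# (Re)create a stream-processing task container on demand.
container = client.containers.run(
    image="alpine:3.19",
    command="echo 'task node up'",
    name="stream-transform-node",
    detach=True,
)
print(container.logs())  # may still be empty until the command finishes

# Tear it down and rebuild at will, entirely from remote.
container.stop()
container.remove()
```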
Large-capacity, fast-paced, diversified, and high-value data are becoming a hotbed of data processing and research. Privacy protection based on the data life cycle is a method of protecting the confidentiality, integrity, and availability of personal data and preventing unauthorized access or use. Its main advantage is full control over all aspects of the information system and its users. With the opening of the cloud, attackers can use the cloud to recompute and analyze big data in ways that may infringe on others' privacy. Life-cycle-based privacy protection covers the whole process of data production, collection, storage, and use, involving every stage from the creation of personal information by individuals (e.g., by filling out forms online or at work) to destruction after use for the intended purpose (e.g., deleting records). It ensures that any personal information collected is used only for the purpose of its initial collection and is destroyed as soon as possible.
Authored by Hongjun Zhang, Shuyan Cheng, Qingyuan Cai, Xiao Jiang
This paper designs a network security protection system based on artificial intelligence technology from both the hardware and software sides. The system can simultaneously collect public Internet data and confidential data inside the unit, encrypting the latter through a TCM chip embedded in the hardware to ensure that only designated machines can read confidential material. The data edge-cloud collaborative acquisition architecture based on chip encryption enables cross-network transmission of confidential data. The paper also proposes an edge-cloud collaborative information security protection method for industrial control systems that combines end-address hopping and load balancing algorithms. Finally, using WinCC, Unity3D, MySQL, and other development environments, the feasibility and effectiveness of the system are verified experimentally.
Authored by Xiuyun Lu, Wenxing Zhao, Yuquan Zhu
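The end-address hopping component could, for instance, derive a synchronized port schedule from a pre-shared key. The sketch below illustrates that general technique under stated assumptions; it is not the paper's exact algorithm.

```python
# Hypothetical sketch of end-address (port) hopping: both endpoints
# derive the port for each time slot from a shared key, so the
# listening address keeps moving for an outside observer.
import hmac, hashlib, time

SHARED_KEY = b"demo-key"               # assumption: pre-shared out of band
PORT_BASE, PORT_COUNT = 20000, 10000   # assumption: allowed hopping range
SLOT_SECONDS = 10                      # assumption: hop every 10 seconds

def current_port(now=None):
    """Port both endpoints agree on for the current time slot."""
    slot = int((now if now is not None else time.time()) // SLOT_SECONDS)
    digest = hmac.new(SHARED_KEY, str(slot).encode(), hashlib.sha256).digest()
    return PORT_BASE + int.from_bytes(digest[:4], "big") % PORT_COUNT

print("listen/connect on port", current_port())
```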
This paper analyzes, from various perspectives, the problems in China's existing emergency management technology system and designs the construction of an intelligent emergency system in combination with the new generation of Internet of Things, big data, cloud computing, and artificial intelligence technologies. The overall design relies on scientific and technological innovation to lead the reform of the emergency management mechanism and process reengineering, building an intelligent emergency technology system characterized by "holographic monitoring, early warning, intelligent research and accurate disposal". The goal is an intelligent emergency management system that integrates intelligent monitoring and early warning, intelligent emergency disposal, efficient rehabilitation, improved emergency standards, and safe operation and maintenance.
Authored by Huan Shi, Bo Hui, Biao Hu, RongJie Gu
The Internet of Vehicles (IoV) is driving a rapid expansion of connected devices. This enormous number of devices constantly generates massive, near-real-time data streams for numerous applications, known as big data. Analyzing such big data to find, predict, and control decisions is a critical way for the IoV to enhance service quality and experience. The main goal of this paper is therefore to study the impact of big data analytics on traffic prediction in the IoV. Big data analytics steps are used to predict traffic flow based on different deep neural models, such as LSTM, CNN-LSTM, and GRU. The models are validated using the evaluation metrics MAE, MSE, RMSE, and R2. A case study based on a real-world road is used to implement and test the efficiency of the traffic prediction models.
Authored by Hakima Khelifi, Amani Belouahri
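A hedged sketch of one of the evaluated model families, a plain LSTM, trained on a synthetic stand-in for the road data; the window size, layer sizes, and epoch count are assumptions.

```python
# Toy LSTM traffic-flow predictor; synthetic data stands in for the
# real-world road measurements used in the paper.
import numpy as np
from tensorflow import keras

series = np.sin(np.linspace(0, 60, 2000)) + np.random.normal(0, 0.1, 2000)

WINDOW = 12  # assumed look-back window of past flow readings
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]
X = X[..., None]  # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.Input((WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[keras.metrics.MeanAbsoluteError(),
                       keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE, RMSE]
```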
The age of data (AoD) is identified as one of the most novel and important metrics to measure the quality of big data analytics for Internet-of-Things (IoT) applications. Meanwhile, mobile edge computing (MEC) is envisioned as an enabling technology to minimize the AoD of IoT applications by processing the data in edge servers close to IoT devices. In this paper, we study the AoD minimization problem for IoT big data processing in MEC networks. We first propose an exact solution for the problem by formulating it as an Integer Linear Program (ILP). We then propose an efficient heuristic for the offline AoD minimization problem. We also devise an approximation algorithm with a provable approximation ratio for a special case of the problem, by leveraging the parametric rounding technique. We further develop an online learning algorithm with a bounded regret for the online AoD minimization problem under dynamic arrivals of IoT requests and uncertain network delay assumptions, by adopting the Multi-Armed Bandit (MAB) technique. We finally evaluate the performance of the proposed algorithms by extensive simulations and implementations in a real test-bed. Results show that the proposed algorithms outperform existing approaches, reducing the AoD by around 10%.
Authored by Zichuan Xu, Wenhao Ren, Weifa Liang, Wenzheng Xu, Qiufen Xia, Pan Zhou, Mingchu Li
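The online component can be illustrated with a standard UCB1 bandit choosing among edge servers with unknown delays; the server count and delay model below are assumptions, not the paper's setup.

```python
# Illustrative UCB1 multi-armed bandit for online edge-server selection,
# in the spirit of the paper's MAB-based online algorithm.
import math, random

N_SERVERS = 4
true_delay = [0.3, 0.5, 0.2, 0.6]   # unknown mean network delays (made up)

counts = [0] * N_SERVERS
mean_reward = [0.0] * N_SERVERS      # reward = negative observed delay

def choose(t):
    for s in range(N_SERVERS):       # play each arm once first
        if counts[s] == 0:
            return s
    return max(range(N_SERVERS),
               key=lambda s: mean_reward[s]
               + math.sqrt(2 * math.log(t) / counts[s]))

for t in range(1, 2001):
    s = choose(t)
    delay = random.gauss(true_delay[s], 0.05)
    counts[s] += 1
    mean_reward[s] += (-delay - mean_reward[s]) / counts[s]

print("requests per server:", counts)  # concentrates on the low-delay server
```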
Software Defined Networking (SDN) is a solution for Data Center Networks (DCNs). It offers centralized control that helps simplify management and reduce the big data issues of storage management and data analysis. This paper investigates the performance of deploying an SDN controller in a DCN, considering network topologies with different numbers of hosts using the Mininet emulator. The performance of the DCN is evaluated with Python-based SDN controllers, comparing the POX and RYU controllers as DCN solutions in terms of throughput, delay, overhead, and convergence time. The results show that POX outperforms the RYU controller and is the best choice for DCNs.
Authored by Jellalah Alzarog, Abdalwart Almhishi, Abubaker Alsunousi, Tareg Abulifa, Wisam Eltarjaman, Salem Sati
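The kind of Mininet experiment described can be set up in a few lines of Python; the tree topology, host count, and controller address below are illustrative assumptions (a POX or Ryu instance is presumed to be listening on the OpenFlow port).

```python
# Sketch of a Mininet topology attached to a remote POX/Ryu controller.
# Requires a Mininet-capable host run as root.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

# Depth-2 tree with fanout 4 gives 16 hosts; vary fanout to change scale.
topo = TreeTopo(depth=2, fanout=4)
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(
                  name, ip="127.0.0.1", port=6633))
net.start()
# pingAll exercises the controller's flow setup; its latency feeds the
# delay/convergence-time comparison between POX and Ryu.
net.pingAll()
net.stop()
```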
Fraud mechanisms have evolved from isolated actions performed by single individuals to complex criminal networks. This paper contributes to the identification of potentially relevant nodes in fraud networks. While traditional methods for fraud detection rely on identifying abnormal patterns, this paper proposes STARBRIDGE: a new linear, scalable, ranked, parameter-free method to identify fraudulent nodes and rings based on Bridging, Influence, and Control metrics. It is applied to the telecommunications domain, where fraudulent nodes form a star-bridge-star pattern. Over 75% of the nodes involved in fraud exhibit higher control and bridging centrality and double the influence scores when compared to non-fraudulent nodes in the same role, with stars and bridges being the chief positions.
Authored by Pedro Fidalgo, Rui Lopes, Christos Faloutsos
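As a rough illustration of ranking bridge-like nodes in a star-bridge-star pattern, the sketch below uses betweenness centrality from networkx as a stand-in for STARBRIDGE's own Bridging metric, which is not reproduced here.

```python
# Rank nodes that bridge two "stars", mimicking the star-bridge-star
# fraud pattern; betweenness centrality is a proxy, not the paper's metric.
import networkx as nx

G = nx.star_graph(5)                                     # star A: hub 0, leaves 1-5
G.add_edges_from((10, leaf) for leaf in range(11, 16))   # star B: hub 10
G.add_edges_from([(0, 6), (6, 10)])                      # node 6 bridges the stars

scores = nx.betweenness_centrality(G)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(node, round(score, 3))   # the hubs and the bridge rank on top
```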
Application domains like big data and IoT require a lot of user data to be collected and analyzed to extract useful information, and those data might include sensitive and personal information. Hence, the privacy of user data must be ensured before releasing them in the public space. Since the fields of IoT and big data are constantly evolving with new types of privacy attacks and prevention mechanisms, there is an urgent need for new research and surveys that develop an overview of the state of the art. We conducted a systematic mapping study of selected papers related to user privacy in IoT and big data published between 2010 and 2021. The study focuses on identifying the main privacy objectives, attacks, and measures taken to prevent attacks in the two application domains. Additionally, a visualized classification of existing attacks is presented along with privacy metrics to draw similarities and dissimilarities among different attacks.
Authored by Raisa Islam, Mohammad Hossen, Dongwan Shin
Mobile terminals, especially smartphones, are changing people's work and life styles. For example, mobile payments are experiencing rapid growth as consumers use mobile terminals as part of their lifestyles. However, security is a big challenge for mobile application services. To reduce security risks, a mobile terminal security assessment should be conducted before providing application services. This paper proposes an approach to comprehensive security assessment that defines security metrics with corresponding scores and determines the relative weights of the security metrics based on the analytic hierarchy process (AHP). An overall security assessment of Android-based mobile terminals is implemented for mobile payment services, achieving a payment fraud detection accuracy of 89%, which shows that the proposed approach is reasonable.
Authored by Zhiyuan Hu, Linghang Shi, Huijun Chen, Chao Li, Jinghui Lu
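The AHP step can be illustrated by deriving metric weights from a pairwise comparison matrix via its principal eigenvector; the 3x3 judgments below are made up for illustration.

```python
# Minimal AHP sketch: relative weights of security metrics from a
# pairwise comparison matrix (Saaty scale), plus a consistency check.
import numpy as np

# A[i, j] = how much more important metric i is than metric j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print("metric weights:", np.round(weights, 3))

# Consistency ratio (random index RI = 0.58 for n = 3); CR < 0.1 is acceptable.
lam_max = eigvals.real.max()
CI = (lam_max - 3) / (3 - 1)
print("consistency ratio:", round(CI / 0.58, 3))
```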
Code optimization is an essential feature of compilers, and almost all software products are released with compiler optimizations. Consequently, bugs in code optimization inevitably cast significant impact on the correctness of software systems. Locating optimization bugs in compilers is challenging, as compilers typically support a large number of optimization configurations. Although prior studies have proposed to locate compiler bugs via generating witness test programs, they are still time-consuming and not effective enough. To address these limitations, we propose ODFL, an automatic approach for locating compiler optimization bugs by differentiating fine-grained options. Specifically, we first disable, one at a time, the fine-grained options that are enabled by default under the bug-triggering optimization levels, to obtain bug-free and bug-related fine-grained options. We then configure several effective passing and failing optimization sequences based on these fine-grained options to obtain multiple failing and passing compiler coverage profiles. Finally, the generated coverage information is fed to Spectrum-Based Fault Localization formulae to rank the suspicious compiler files. We run ODFL on 60 buggy GCC compilers from an existing benchmark. The experimental results show that ODFL significantly outperforms RecBi, the state-of-the-art compiler bug isolation approach, in terms of all the evaluated metrics, demonstrating the effectiveness of ODFL. In addition, ODFL is much more efficient than RecBi, saving more than 88% of bug-locating time on average.
Authored by Jing Yang, Yibiao Yang, Maolin Sun, Ming Wen, Yuming Zhou, Hai Jin
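The final ranking step can be illustrated with a standard SBFL formula such as Ochiai over per-file coverage counts; the coverage data below is fabricated, and the choice of Ochiai (rather than whichever formulae the paper uses) is an assumption.

```python
# Spectrum-Based Fault Localization sketch: rank compiler files by the
# Ochiai formula over passing/failing coverage counts (all data made up).
import math

# file -> (covered by how many failing runs, covered by how many passing runs)
coverage = {"fold-const.c": (9, 2), "tree-ssa.c": (3, 8), "expr.c": (1, 9)}
TOTAL_FAIL = 10

def ochiai(failed_cov, passed_cov):
    denom = math.sqrt(TOTAL_FAIL * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

for f in sorted(coverage, key=lambda f: -ochiai(*coverage[f])):
    print(f, round(ochiai(*coverage[f]), 3))  # most suspicious file first
```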
The increased dissemination of open source software to a broader audience has led to a proportional increase in the dissemination of vulnerabilities. These vulnerabilities are introduced by developers, some intentionally or negligently. In this paper, we work to quantify the relative risk that a given developer represents to a software project. We propose using empirical software engineering analysis of the vast data made available by GitHub to create a Developer Risk Score (DRS) for prolific contributors on GitHub. The DRS can then be aggregated across a project as a derived vulnerability assessment; we call this the Computational Vulnerability Assessment Score (CVAS). The CVAS represents the correlation between the Developer Risk Scores across projects and the vulnerabilities attributed to those projects. We believe this to be a contribution toward quantifying the risk introduced by specific developers across open source projects. Both risk scores, those for contributors and those for projects, are derived from an amalgamation of data from both inside and outside GitHub. We seek to provide this risk metric as a force multiplier for the project maintainers responsible for reviewing code contributions. We hope this will lead to a reduction in the number of introduced vulnerabilities for projects in the open source ecosystem.
Authored by Jon Chapman, Hari Venugopalan
Explainable Artificial Intelligence (XAI) research focuses on effective explanation techniques for understanding and building AI models with trust, reliability, safety, and fairness. Feature importance explanation summarizes feature contributions so that end-users can understand model decisions. However, different XAI methods may produce varied summaries, prompting further analysis of the consistency across multiple XAI methods on the same model and data set. This paper defines metrics to measure the consistency of feature contribution explanation summaries under feature importance order and saliency map. Driven by these consistency metrics, we develop an XAI process oriented on the XAI criterion of feature importance, which performs a systematic selection of XAI techniques and evaluation of explanation consistency. We demonstrate the process development involving twelve XAI methods on three topics: a search ranking system, code vulnerability detection, and image classification. Our contribution is a practical and systematic process with defined consistency metrics to produce rigorous feature contribution explanations.
Authored by Jun Huang, Zerui Wang, Ding Li, Yan Liu
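One plausible consistency metric over feature-importance orders is rank correlation between two methods' outputs; the sketch below uses Kendall's tau on fabricated scores (the paper defines its own metrics, which are not reproduced here).

```python
# Consistency of feature-importance orders between two hypothetical XAI
# methods, measured as Kendall rank correlation; all scores are made up.
from scipy.stats import kendalltau

features = ["age", "income", "tenure", "usage", "region"]
method_a_scores = [0.41, 0.25, 0.15, 0.12, 0.07]   # e.g., a SHAP-like method
method_b_scores = [0.38, 0.20, 0.22, 0.11, 0.09]   # e.g., a LIME-like method

tau, p_value = kendalltau(method_a_scores, method_b_scores)
print(f"rank consistency over {features}: Kendall tau = {tau:.2f}")
```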
Biometric security is a fast-growing area that has received considerable attention over the past few years. Digital hiding and encryption technologies provide an effective solution for securing biometric information from intentional or accidental attacks. Visual cryptography is an approach for encrypting information that takes visual form, for example images. Since the biometric templates stored in databases are generally images, visual cryptography can be employed effectively to protect templates from attack. This study develops a share creation with improved encryption process for secure biometric verification (SCIEP-SBV) technique. The presented SCIEP-SBV technique aims to attain security via encryption and a share creation (SC) procedure. First, the biometric images undergo the SC process to produce several shares. For the encryption process, a homomorphic encryption (HE) technique is utilized. To further improve secrecy, an improved bald eagle search (IBES) approach is exploited. The simulation values of the SCIEP-SBV system are tested on biometric images. An extensive comparison study demonstrates the improved outcomes of the SCIEP-SBV technique over compared methods.
Authored by Shammi L, Milind, Emilin Shyni, Khair Nisa, Ravi Bora, S. Saravanan
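The SC step can be illustrated with a toy 2-out-of-2 XOR sharing, where either share alone is indistinguishable from noise. This is a simplification of the paper's scheme, which additionally applies HE and the IBES optimization.

```python
# Toy 2-out-of-2 share creation: a random share plus its XOR with the
# biometric image; both shares are needed to recover the template.
import numpy as np

rng = np.random.default_rng(0)
template = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # stand-in biometric image

share1 = rng.integers(0, 256, template.shape, dtype=np.uint8)  # pure noise
share2 = template ^ share1          # second share completes the secret

recovered = share1 ^ share2
assert np.array_equal(recovered, template)
```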
When storing face biometric samples in accordance with ISO/IEC 19794 as JPEG2000-encoded images, it is necessary to encrypt them for the sake of users' privacy. The literature suggests selective encryption of JPEG2000 images as a fast and efficient encryption method; the trade-off is that some information is left in plaintext. This could be used by an attacker in case the encrypted biometric samples are leaked. In this work, we attempt to utilize a convolutional neural network to perform cryptanalysis of the encryption scheme. That is, we want to assess whether any information left in plaintext in the selectively encrypted face images can be used to identify the person. The chosen approach is to train CNNs for biometric face recognition not only with plaintext face samples but additionally to conduct refinement training with partially encrypted data. If this system can successfully utilize encrypted face samples for biometric matching, it shows that the information left in encrypted biometric face samples is actually usable for biometric recognition. The method works, and we can show that a supposedly secure biometric sample still contains identifying information on average over the whole database.
Authored by Heinz Hofbauer, Yoanna Martínez-Díaz, Luis Luevano, Heydi Méndez-Vázquez, Andreas Uhl
In this paper, an overview of fingerprint encryption algorithms is given, and a fingerprint encryption algorithm with error correction is then designed by adding an error correction mechanism. This new fingerprint encryption algorithm produces a stochastic key in the form of polynomial coefficients using a binary sequencer, encrypts the fingerprint, and uses Lagrange interpolation to restore the polynomial during authentication. Because a cyclic redundancy check code is used to find the most accurate key, the accuracy of the algorithm can be ensured. Experimental results indicate that the fuzzy vault algorithm with error correction realizes template protection well and meets the requirements of biological information security protection. They also indicate that the system's safety performance can be enhanced by changing the key's length.
Authored by Liang Chang
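The recovery step of a fuzzy vault can be illustrated with plain Lagrange interpolation: given enough genuine points, the secret polynomial (whose coefficients encode the key) is restored. Real vaults additionally mix in chaff points and use the CRC check mentioned above; the points below are toy values.

```python
# Lagrange interpolation sketch: recover the constant term of the secret
# polynomial p(x) = 7 + 3x + 2x^2 from genuine (x, y) evaluations.
from fractions import Fraction

points = [(1, 12), (2, 21), (3, 34)]  # genuine minutiae-bound evaluations

def interpolate_at(x0, pts):
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= Fraction(x0 - xj, xi - xj)
        total += term
    return total

print(interpolate_at(0, points))  # -> 7, the secret constant term
```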
Considered sensitive information by ISO/IEC 24745, biometric data should be stored and used in a protected way; if not, the privacy and security of end-users can be compromised. The advent of quantum computers also demands quantum-resistant solutions. This work proposes the use of the Kyber and Saber public key encryption (PKE) algorithms together with homomorphic encryption (HE) in a face recognition system. Kyber and Saber, both based on lattice cryptography, were two finalists of the third round of the NIST post-quantum cryptography standardization process; after the third round was completed, Kyber was selected as the PKE algorithm to be standardized. Experimental results show that the recognition performance of the non-protected face recognition system is preserved under protection, achieving smaller protected templates and keys and shorter execution times than other lattice-based HE schemes reported in the literature. The parameter sets considered achieve security levels of 128, 192, and 256 bits.
Authored by Roberto Román, Rosario Arjona, Paula López-González, Iluminada Baturone
Efficient large-scale biometric identification is a challenging open problem in biometrics today. Adding biometric information protection by cryptographic techniques increases the computational workload even further. Therefore, this paper proposes an efficient and improved use of coefficient packing for homomorphically protected biometric templates, allowing for the evaluation of multiple biometric comparisons at the cost of one. In combination with feature dimensionality reduction, the proposed technique facilitates a quadratic computational workload reduction for biometric identification, while long-term protection of the sensitive biometric data is maintained throughout the system. In previous works on using coefficient packing, only a linear speed-up was reported. In an experimental evaluation on a public face database, efficient identification in the encrypted domain is achieved on off-the-shelf hardware with no loss in recognition performance. In particular, the proposed improved use of coefficient packing allows for a computational workload reduction down to 1.6% of a conventional homomorphically protected identification system without improved packing.
Authored by Pia Bauspieß, Jonas Olafsson, Jascha Kolberg, Pawel Drozdowski, Christian Rathgeb, Christoph Busch
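A plaintext analogy of the packing idea, with assumptions mine (the actual scheme operates on homomorphically encrypted coefficient slots): placing k gallery templates side by side in one vector lets a single correlation pass produce all k probe-template inner products at once, mirroring how packing batches comparisons at the cost of one.

```python
# Plaintext analogy of coefficient packing: one correlation over the
# packed gallery yields every probe-template inner product.
import numpy as np

d, k = 4, 3                                  # feature dim, gallery size (toy)
rng = np.random.default_rng(1)
gallery = rng.standard_normal((k, d))
probe = rng.standard_normal(d)

packed = gallery.reshape(-1)                 # one "ciphertext" holding k templates
scores = np.correlate(packed, probe, mode="valid")[::d]

assert np.allclose(scores, gallery @ probe)  # all k comparisons in one pass
print(scores)
```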
The Advanced Encryption Standard (AES) algorithm plays an important role in data security applications. The S-box module in AES provides the main confusion and diffusion during encryption and causes significant path delay overhead. In most cases, either LUTs or embedded memories are used for the S-box computation, and these are vulnerable to attacks that pose a serious risk to real-world applications. In this paper, the SubBytes and inverse SubBytes operations of AES are implemented using composite field arithmetic. The proposed work includes an efficient multi-round AES cryptosystem with higher-order transformation and a composite-field S-box formulation, with possible inner-stage pipelining schemes that can be used for throughput enhancement along with path delay optimization. Finally, biometric-driven key generation schemes are used to formulate the cipher key dynamically, providing a higher degree of security for computing devices.
Authored by Ashutosh Gupta, Anita Agrawal
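For reference, the S-box being decomposed is the GF(2^8) multiplicative inverse followed by an affine transform; the sketch below computes it directly in software, whereas composite-field hardware maps the inverse into GF((2^4)^2) operations to cut area and delay.

```python
# Reference model of the AES S-box: GF(2^8) inverse + affine transform.
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def gf_inv(a):
    """Inverse as a^254 (group order 255); 0 maps to 0 by convention."""
    if a == 0:
        return 0
    result = 1
    for _ in range(254):
        result = gf_mul(result, a)
    return result

def sbox(byte):
    x = gf_inv(byte)
    y = x
    for shift in (1, 2, 3, 4):       # affine transform as XOR of rotations
        y ^= ((x << shift) | (x >> (8 - shift))) & 0xFF
    return y ^ 0x63

assert sbox(0x00) == 0x63 and sbox(0x53) == 0xED  # FIPS-197 test values
```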
In healthcare 4.0 ecosystems, authentication of healthcare information allows health stakeholders to be assured that data originates from the correct source. Recently, biometric-based authentication has become a preferred choice, but as templates are stored on central servers, there are high chances of copying and generating fake biometrics. An adversary can forge the biometric pattern and gain access to critical health systems. To address this limitation, the paper proposes a scheme, PHBio, in which an encryption-based biometric system is designed before the template is stored on the server. Once a user provides their biometrics, the authentication process does not decrypt the data, but rather uses a homomorphic-enabled Paillier cryptosystem. The scheme presents the encryption and comparison parts, which are based on a Euclidean distance (EUD) strategy between the user input and the stored template on the server. We consider the minimum distance and compare it with a predefined threshold to confirm a biometric match and authenticate the user. The scheme is evaluated against parameters like accuracy, false rejection rates (FRRs), and execution time. The results indicate the validity of the scheme in real-time health setups.
Authored by Deepti Saraswat, Karan Ladhiya, Pronaya Bhattacharya, Mohd Zuhair
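A hedged sketch of this style of comparison using the python-paillier (phe) library: the server combines an encrypted template with a plaintext probe to obtain the encrypted squared Euclidean distance without ever decrypting the template. The feature vectors, key length, and threshold are toy values, not the paper's parameters.

```python
# Encrypted squared-Euclidean-distance comparison under Paillier:
# ||x - t||^2 = sum x_i^2 - 2 * sum x_i * t_i + sum t_i^2, where only the
# t-dependent parts stay encrypted (Paillier supports ciphertext addition
# and multiplication by plaintext scalars).
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

template = [4, 7, 2]                 # enrolled biometric features (toy)
probe = [5, 7, 1]                    # fresh capture at authentication (toy)

enc_t = [pub.encrypt(v) for v in template]
enc_t_sq_sum = pub.encrypt(sum(v * v for v in template))

enc_dist = enc_t_sq_sum + sum(x * x for x in probe)
for x, enc_ti in zip(probe, enc_t):
    enc_dist = enc_dist + enc_ti * (-2 * x)

THRESHOLD = 5                        # assumed predefined match threshold
print("match" if priv.decrypt(enc_dist) <= THRESHOLD else "reject")
```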
Cancelable biometrics is a new technology that protects the privacy of a person's biometric content and thereby helps protect the person's identity. Here, instead of being stored directly in the authentication database, the biometric information is transformed into a non-invertible coded format that is used for providing access. The conversion into an encrypted code requires an encryption key provided by the user. Both invertible and non-invertible coding techniques exist, but the non-invertible kind provides additional security to the user. In this paper, a non-invertible cancelable biometric method is proposed in which the biometric image information is canceled and encoded into a code using a user-provided encryption key. This code is generated from the image histogram after continuous bin updating to the maximal value and is then encrypted with the Hill cipher. The code is stored in the database instead of the biometric information. The technique is applied to a set of retinal information taken from the Indian Diabetic Retinopathy database.
Authored by Subhaluxmi Sahoo
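The final encryption step can be illustrated with a toy Hill cipher over Z_256; the 2x2 key matrix and histogram-derived code below are illustrative stand-ins, not the paper's values.

```python
# Toy Hill cipher: multiply blocks of the code by a key matrix mod 256.
import numpy as np

K = np.array([[3, 3],
              [2, 5]])   # user-supplied key; det = 9 is odd, so K is invertible mod 256

def hill_encrypt(code):
    blocks = code.reshape(-1, 2)            # process the code 2 bytes at a time
    return (blocks @ K % 256).astype(np.uint8).reshape(-1)

histogram_code = np.array([12, 200, 7, 33], dtype=np.uint8)  # stand-in code
print(hill_encrypt(histogram_code))
```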
Face recognition is a biometric technique that uses a computer or machine to facilitate the recognition of human faces. The advantage of this technique is that it can detect faces without direct contact with the device. In practice, the security of face recognition data systems is still given little attention. Therefore, this study proposes a technique for securing the data stored in a face recognition system database. It implements the Viola-Jones algorithm, the Kanade-Lucas-Tomasi (KLT) algorithm, and the Principal Component Analysis (PCA) algorithm, and applies a database security algorithm using XOR encryption. Several tests and analyses have been performed with this method. The histogram analysis shows that the encrypted images carry no visual information from the plain images. In addition, the correlation between the encrypted and plain images is weak, giving high security against statistical attacks, with an entropy value of around 7.9. The average time required to carry out the recognition process is 0.7896 s.
Authored by Magfirawaty Magfirawaty, Fauzan Setiawan, Muhammad Yusuf, Rizki Kurniandi, Raihan Nafis, Nur Hayati
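A minimal sketch of the XOR encryption step together with the entropy check reported above; the keystream generation and stand-in image are assumptions.

```python
# XOR-encrypt a stored face image and measure the ciphertext entropy;
# values near 8 bits indicate a near-uniform (statistically flat) result.
import numpy as np

rng = np.random.default_rng(42)
face = rng.integers(0, 256, (64, 64), dtype=np.uint8)       # stand-in face image
keystream = rng.integers(0, 256, face.shape, dtype=np.uint8)

cipher = face ^ keystream          # XOR encryption; XOR again to decrypt

def entropy(img):
    counts = np.bincount(img.reshape(-1), minlength=256)
    p = counts[counts > 0] / img.size
    return float(-(p * np.log2(p)).sum())

print(f"cipher entropy = {entropy(cipher):.2f} bits")
```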
Cutting-edge biometric recognition systems extract distinctive feature vectors from biometric samples using deep neural networks and measure the amount of (dis-)similarity between two samples. Studies have shown that personal information (e.g., health condition, ethnicity, etc.) can be inferred from those feature vectors, and that biometric samples can be reconstructed from them, making their protection an urgent necessity. State-of-the-art biometric protection solutions are based on homomorphic encryption (HE), performing recognition over encrypted feature vectors so that the features and their processing stay hidden and only the outcome is released. However, this comes at a cost in efficiency, since HE handles large numbers of multiplications poorly, and for (dis-)similarity measures this number is proportional to the vector's dimension. In this paper, we tackle the HE performance bottleneck by freeing the two common (dis-)similarity measures, the cosine similarity and the squared Euclidean distance, from multiplications. Assuming normalized feature vectors, our approach pre-computes and organizes those (dis-)similarity measures into lookup tables, transforming their computation into simple table lookups and summation only. We study quantization parameters for the values in the lookup tables and evaluate performance on both synthetic and facial feature vectors, for which we achieve recognition performance identical to the non-tabularized baseline systems. We then assess efficiency under HE and record runtimes between 28.95 ms and 59.35 ms for the three security levels, demonstrating the enhanced speed.
Authored by Amina Bassit, Florian Hahn, Raymond Veldhuis, Andreas Peter
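An illustrative plaintext version of the multiplication-free idea: quantize the normalized feature values, precompute all pairwise products once, and reduce each similarity to table lookups plus a sum. The quantization granularity below is an assumption, and the HE integration is not shown.

```python
# Lookup-table cosine similarity: precomputed products turn the dot
# product of quantized features into lookups and summation only.
import numpy as np

LEVELS = np.linspace(-1.0, 1.0, 257)           # assumed quantization grid
TABLE = np.outer(LEVELS, LEVELS)               # all pairwise products, built once

def quantize(v):
    """Index of the nearest quantization level for each component."""
    return np.abs(v[:, None] - LEVELS[None, :]).argmin(axis=1)

rng = np.random.default_rng(3)
a, b = rng.standard_normal(128), rng.standard_normal(128)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)

qa, qb = quantize(a), quantize(b)
cos_lookup = TABLE[qa, qb].sum()               # lookups + summation only
print(round(cos_lookup, 3), round(float(a @ b), 3))  # close to the exact value
```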