Vulnerability detection has always been an essential part of maintaining information security, and existing work can significantly improve detection performance. However, owing to differences in representation forms and deep learning models, existing methods still have limitations. To overcome this defect, we propose VDBWGDL, a vulnerability detection method based on weight graphs and deep learning. First, it accurately locates vulnerability-sensitive keywords and, through code variant methods, generates variant code that satisfies both the vulnerability trigger logic and the programmer's coding style. Then, the control flow graph is sliced around vulnerable-code keywords and critical program statements. Each code block is converted by the deep learning model into a vector carrying rich semantic information and inserted into the weight graph, where each node is assigned a weight according to specific rules. Finally, similarity is computed with a similarity comparison algorithm, and suspected vulnerabilities are reported according to different thresholds. VDBWGDL improves accuracy and F1 score by 3.98% and 4.85%, respectively, compared with four state-of-the-art models. The experimental results demonstrate the effectiveness of VDBWGDL.
Authored by Xin Zhang, Hongyu Sun, Zhipeng He, MianXue Gu, Jingyu Feng, Yuqing Zhang
This paper presents secure MatDot codes, a family of evaluation codes that support secure distributed matrix multiplication via a careful selection of evaluation points that exploit the properties of the dual code. We show that the secure MatDot codes provide security against the user by using locally recoverable codes. These new codes complement the recently studied discrete Fourier transform codes for distributed matrix multiplication schemes that also provide security against the user. There are scenarios where the associated costs are the same for both families and instances where the secure MatDot codes offer a lower cost. In addition, the secure MatDot code provides an alternative way to handle the matrix multiplication by identifying the fastest servers in advance. In this way, it can determine a product using fewer servers, specified in advance, than the MatDot codes which achieve the optimal recovery threshold for distributed matrix multiplication schemes.
Authored by Hiram López, Gretchen Matthews, Daniel Valvo
A weather radar is expected to provide valid information about weather conditions in real time. To obtain such results, weather radar collects many data samples, producing a large volume of data; the radar equipment must therefore provide large bandwidth for transmission and large-capacity storage media. The burden of data volume can be reduced by applying compression at the time of data acquisition. Compressive Sampling (CS) is a data acquisition method that carries out sampling and compression simultaneously, speeding up computation, reducing bandwidth on the transmission medium, and saving storage. The CS method has three stages: a sparsity transform using the Discrete Cosine Transform (DCT), sampling with a measurement matrix, and reconstruction with the Orthogonal Matching Pursuit (OMP) algorithm. The sparsity transform converts the radar signal into a sparse representation, sampling extracts the important information from the signal, and reconstruction recovers the radar signal. The data used in this study are real IDRA beat-signal data. In the CS simulations performed, the best PSNR and RMSE values were obtained with a compression ratio (CR) of two, while the shortest computation time was obtained with a CR of 32. CS simulation of one sector via DCT with CR = 2 produces a PSNR of 20.838 dB and an RMSE of 0.091; with CR = 32 it requires a computation time of 10.574 seconds.
Authored by Muhammad Ammar, Rita Purnamasari, Gelar Budiman
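The three CS stages named in the abstract above (DCT sparsity transform, measurement-matrix sampling, OMP reconstruction) follow a standard pipeline that can be sketched generically; the signal length, measurement count, and sparsity level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: x = D @ s, where s is sparse.
    k, i = np.meshgrid(np.arange(n), np.arange(n))
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[:, 0] *= 1 / np.sqrt(n)
    D[:, 1:] *= np.sqrt(2 / n)
    return D

def omp(A, y, sparsity):
    # Orthogonal Matching Pursuit: greedily pick the column most
    # correlated with the residual, then re-fit by least squares.
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                            # length, measurements, sparsity
D = dct_matrix(n)
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = D @ s_true                                  # signal sparse in the DCT domain
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # compressive sampling: m << n
s_hat = omp(Phi @ D, y, k)                      # recover sparse coefficients
x_hat = D @ s_hat                               # inverse sparsity transform
```

With far fewer measurements than Nyquist samples (48 vs. 128 here), the sparse signal is recovered essentially exactly, which is the effect the paper quantifies with PSNR and RMSE.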
Compressive Sensing (CS) has a wide range of applications across various domains, yet the sampling of sparse signals, whether periodic or aperiodic, remains an under-explored topic. This paper proposes novel Sparse Spasmodic Sampling (SSS) techniques for different sparse signals in the original domain. The SSS techniques overcome the drawbacks of existing CS sampling techniques: they can sample any sparse signal efficiently and also locate the non-zero components in the signal. First, Sparse Spasmodic Sampling model-1 (SSS-1), which samples random points while including the non-zero components, is proposed. A second technique, Sparse Spasmodic Sampling model-2 (SSS-2), follows the same working principle with some design refinements: unlike SSS-1, it samples equidistant points. It is demonstrated that, with either sampling technique, the signal can be reconstructed by a reconstruction algorithm from a smaller number of measurements. Simulation results demonstrate the effectiveness of the proposed sampling techniques.
Authored by Umesh Mahind, Deepak Karia
In conventional sampling, signals are sampled at the Nyquist rate, whereas in compressive sensing signals are sampled below the Nyquist rate by taking random projections and reconstructing the signal from very few measurements. However, recovering an image from compressive measurements on a multi-resolution grid, where a certain region of interest (RoI) matters more than the rest of the image, is not efficient with conventional Cartesian sampling, which performs poorly for motion image sensing recovery and is limited to stationary image sensing. The proposed work achieves improved results using Radial sampling, a type of compressive sensing. This paper discusses the Radial sampling approach together with Sparse Fourier Transform algorithms, which help reduce acquisition cost and input/output overhead.
Authored by Tesu Nema, M. Parsai
Mid-infrared spectroscopic imaging (MIRSI) is an emerging class of label-free, biochemically quantitative technologies targeting digital histopathology. Conventional histopathology relies on chemical stains that alter tissue color. This approach is qualitative, often making histopathologic examination subjective and difficult to quantify. MIRSI addresses these challenges through quantitative and repeatable imaging that leverages native molecular contrast. Fourier transform infrared (FTIR) imaging, the best-known MIRSI technology, has two challenges that have hindered its widespread adoption: data collection speed and spatial resolution. Recent technological breakthroughs, such as photothermal MIRSI, provide an order of magnitude improvement in spatial resolution. However, this comes at the cost of acquisition speed, which is impractical for clinical tissue samples. This paper introduces an adaptive compressive sampling technique to reduce hyperspectral data acquisition time by an order of magnitude by leveraging spectral and spatial sparsity. This method identifies the most informative spatial and spectral features, integrates a fast tensor completion algorithm to reconstruct megapixel-scale images, and demonstrates speed advantages over FTIR imaging while providing spatial resolutions comparable to new photothermal approaches.
Authored by Mahsa Lotfollahi, Nguyen Tran, Chalapathi Gajjela, Sebastian Berisha, Zhu Han, David Mayerich, Rohith Reddy
A power amplifier (PA) is an inherently nonlinear device that is widely used in communication systems. Because of this nonlinearity, the communication system struggles to perform well; digital predistortion (DPD) is the standard remedy. Most DPD solutions fit the PA with a Volterra series. However, for wideband signals the performance of the Volterra model degrades. In this paper, we replace the Volterra series with B-spline functions, which fit the PA better for wideband signals. A further benefit is that the orthogonality of the coding matrix A is improved, enhancing the stability of the computation. Additionally, we use compressive sampling to reduce the complexity of the function model.
Authored by Cen Liu, Laiwei Luo, Jun Wang, Chao Zhang, Changyong Pan
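The B-spline model the abstract describes is built from standard B-spline basis functions. A generic Cox-de Boor evaluation, not the authors' implementation, can be sketched as follows; the knot vector and degree are illustrative assumptions:

```python
import numpy as np

def bspline_basis(i, k, t, x):
    # Cox-de Boor recursion: degree-k B-spline basis B_{i,k}(x)
    # over knot vector t, evaluated at points x.
    if k == 0:
        return np.where((t[i] <= x) & (x < t[i + 1]), 1.0, 0.0)
    left = right = 0.0
    if t[i + k] > t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    if t[i + k + 1] > t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, x)
    return left + right

# Uniform knot vector; the 13 cubic (k = 3) basis functions form a
# partition of unity on the interior of the knot span, which is what
# makes them a stable regression basis for fitting PA nonlinearity.
t = np.linspace(-0.3, 1.3, 17)
x = np.linspace(0.01, 0.99, 50)
B = np.array([bspline_basis(i, 3, t, x) for i in range(len(t) - 4)])
```

Because each basis function has compact support, the design matrix built from these columns is much better conditioned than the monomial-like terms of a Volterra expansion, which is the stability benefit the abstract refers to.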
Communication systems across a variety of applications are increasingly using the angular domain to improve spectrum management. They require new sensing architectures to perform energy-efficient measurements of the electromagnetic environment that can be deployed in a variety of use cases. This paper presents the Directional Spectrum Sensor (DSS), a compressive sampling (CS) based analog-to-information converter (CS-AIC) that performs spectrum scanning in a focused beam. The DSS offers increased spectrum sensing sensitivity and interferer tolerance compared to omnidirectional sensors. The DSS implementation uses a multi-antenna beamforming architecture with local oscillators that are modulated with pseudo random waveforms to obtain CS measurements. The overall operation, limitations, and the influence of wideband angular effects on the spectrum scanning performance are discussed. Measurements on an experimental prototype are presented and highlight improvements over single antenna, omnidirectional sensing systems.
Authored by Petar Barac, Matthew Bajor, Peter Kinget
This research proposes a camera for three-dimensional (3D) image acquisition, constructed from a megahertz-range intensity-modulated active light source and a kilo-frame-rate fast camera based on the compressive sensing (CS) technique.
Authored by Quang Pham, Yoshio Hayasaki
The compressed sensing (CS) method can reconstruct images from a small amount of under-sampled data and is an effective approach to fast magnetic resonance imaging (MRI). Because traditional optimization-based CS-MRI models suffer from non-adaptive sampling and "shallow" representation ability, they cannot characterize the rich patterns in MRI data. In this paper, we propose a CS-MRI method based on the iterative shrinkage-thresholding algorithm (ISTA) and adaptive sparse sampling, called DSLS-ISTA-Net. Corresponding to the sampling and reconstruction stages of CS, the network framework comprises two modules, a sampling sub-network and an improved ISTA reconstruction sub-network, which are coordinated with each other through end-to-end training in an unsupervised way. The sampling sub-network and the ISTA reconstruction sub-network implement adaptive sparse sampling and deep sparse representation, respectively. In the testing phase, we investigate different modules and parameters in the network structure and perform extensive experiments on MR images at different sampling rates to obtain the optimal network. By combining the advantages of model-based and deep-learning-based methods and accounting for both adaptive sampling and deep sparse representation, the proposed networks significantly improve reconstruction performance compared with state-of-the-art CS-MRI approaches.
Authored by Wenwei Huang, Chunhong Cao, Sixia Hong, Xieping Gao
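The classical ISTA iteration that the reconstruction sub-network above unrolls is a gradient step on the data-fidelity term followed by soft-thresholding. A minimal model-based sketch on a synthetic 1-D sparse recovery problem (the sizes and regularization weight are illustrative assumptions, not the paper's learned network) is:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the l1 norm: shrink each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter):
    # ISTA: s <- soft(s + (1/L) A^T (y - A s), lam / L), where
    # L = ||A||_2^2 is the Lipschitz constant of the gradient.
    L = np.linalg.norm(A, 2) ** 2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        s = soft_threshold(s + (A.T @ (y - A @ s)) / L, lam / L)
    return s

rng = np.random.default_rng(1)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sampling operator
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ s_true                                   # under-sampled measurements
s_hat = ista(A, y, lam=0.01, n_iter=2000)        # sparse reconstruction
```

DSLS-ISTA-Net replaces the fixed operator A, step size, and threshold in this loop with learned counterparts trained end to end, which is where its performance gain over the hand-tuned iteration comes from.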
Scanning Transmission Electron Microscopy (STEM) offers high-resolution images that are used to quantify the nanoscale atomic structure and composition of materials and biological specimens. In many cases, however, the resolution is limited by electron beam damage, since in traditional STEM a focused electron beam scans every location of the sample in a raster fashion. In this paper, we propose a scanning method based on the theory of Compressive Sensing (CS) that subsamples the electron probe locations using a line-hop sampling scheme, significantly reducing electron beam damage. We experimentally validate the feasibility of the proposed method by acquiring real CS-STEM data and recovering images using a Bayesian dictionary learning approach. We further support the method by applying a series of masks to fully-sampled STEM data to simulate what real CS-STEM is expected to produce. Finally, we perform the real-data experimental series under a constrained dose budget to limit the impact of electron dose on the results, ensuring that the total electron count remains constant for each image.
Authored by D. Nicholls, A. Robinson, J. Wells, A. Moshtaghpour, M. Bahri, A. Kirkland, N. Browning
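The masking experiments described above can be imitated by generating a subsampling mask over probe positions and applying it to a fully-sampled image. The generator below is a plausible sketch of a line-hop scheme with illustrative parameters, not the authors' exact scanning pattern:

```python
import numpy as np

def line_hop_mask(rows, cols, max_hop, seed=0):
    # Along each scan row, visit one probe position, then hop forward
    # a random distance of 1..max_hop pixels, so only a fraction of
    # positions receive electron dose.
    rng = np.random.default_rng(seed)
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        c = int(rng.integers(0, max_hop))   # random start offset per row
        while c < cols:
            mask[r, c] = True
            c += int(rng.integers(1, max_hop + 1))
    return mask

mask = line_hop_mask(64, 64, max_hop=8)
sampling_ratio = mask.mean()   # fraction of probe positions actually dosed
# Simulated CS-STEM acquisition: zero out unvisited positions of a
# fully-sampled image `img` with  img * mask  before reconstruction.
```

With an average hop of about 4.5 pixels, roughly a fifth of the positions are sampled, so the beam dose per frame drops proportionally while the reconstruction algorithm fills in the rest.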
Compressive radar receivers have attracted considerable research interest owing to their ability to balance sub-Nyquist sampling against high resolution. In evaluating the performance of compressive time delay estimators, the Cramer-Rao bound (CRB) has commonly been used to lower-bound the mean square error (MSE). However, as a local bound, the CRB is not tight in the a priori performance region. In this paper, we introduce the Ziv-Zakai bound (ZZB) methodology into the compressive sensing framework and derive a deterministic ZZB for compressive time delay estimators as a function of the compressive sensing kernel. By effectively incorporating the a priori information of the unknown time delay, the derived ZZB is much tighter than the CRB, especially in the a priori performance region. Simulation results demonstrate that the derived ZZB outperforms the Bayesian CRB over a wide range of signal-to-noise ratios, where different types of a priori distributions of the time delay are considered.
Authored by Zongyu Zhang, Chengwei Zhou, Chenggang Yan, Zhiguo Shi
With the development of computer technology and information security technology, computer networks will increasingly become an important means of information exchange, permeating all areas of social life. Recognizing the vulnerabilities and potential threats of computer networks and the various security problems that exist in practice, and designing and researching a computer quality architecture that ensures the security of network information, are therefore urgent issues. The purpose of this article is to study the design and realization of information security technology and a computer quality system structure. The article first summarizes the basic theory of information security technology and then elaborates its core technologies. Considering the current status of computer quality system structures and analyzing their existing problems and deficiencies, information security technology is then used as the basis for designing and researching the computer quality system structure. The article systematically expounds the function-module data, interconnection structure, and routing selection of the computer quality system structure, and applies comparative, observational, and other research methods to its design and study. Experimental research shows that when the load on the studied structure is 0 or 100, the data loss rate for data of different lengths is 0 and the correctness rate is 100%, demonstrating very high feasibility.
Authored by Yuanyuan Hu, Xiaolong Cao, Guoqing Li
This paper presents a new network security evaluation method: an assessment method based on absorbing Markov chains. The method differs from other graph-theoretic network security situation assessment methods in that it addresses issues such as the poor objectivity of other methods, incomplete consideration of evaluation factors, and mismatches between evaluation results and the actual state of the network. First, the method collects the security elements in the network. Then, using graph theory combined with an absorbing Markov chain, the threat values of vulnerable nodes are calculated and sorted. Finally, the most likely attack path is obtained by incorporating network asset information to determine the current network security status. The experimental results show that the method fully considers the vulnerability and threat-node ranking together with the specifics of the system's network assets, bringing the evaluation result close to the actual network situation.
Authored by Hongbin Gao, Shangxing Wang, Hongbin Zhang, Bin Liu, Dongmei Zhao, Zhen Liu
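The threat-value calculation above rests on standard absorbing-Markov-chain machinery: writing the transition matrix in canonical form [[Q, R], [0, I]], the fundamental matrix N = (I - Q)^-1 gives the expected number of visits to each transient (vulnerable) node before absorption. A toy sketch with made-up transition probabilities, not the paper's network, is:

```python
import numpy as np

# Toy attack graph: states 0-2 are transient vulnerable nodes, state 3
# is the absorbing attacker goal.  Q holds transitions among transient
# states, R the transitions into the absorbing state; each full row of
# [Q | R] sums to 1.
Q = np.array([[0.0, 0.6, 0.2],
              [0.1, 0.0, 0.5],
              [0.0, 0.3, 0.0]])
R = np.array([[0.2], [0.4], [0.7]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix: expected visits
B = N @ R                         # absorption probabilities per start node
visits = N[0]                     # expected visits starting from node 0
ranking = np.argsort(-visits)     # rank vulnerable nodes by exposure
```

With a single absorbing state every attack eventually reaches the goal (B is all ones), so the ranking of expected visits, rather than the absorption probability, is what differentiates the threat posed by each vulnerable node.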
As the structure of cyberspace becomes more and more complex, problems such as dynamic network topology, complex composition, large spatial scale, and a high degree of self-organization grow increasingly important. In this paper, we model cyberspace elements and their dependencies using graph theory. The layer dimension adopts a network-space modeling method that combines virtual and real elements, while the level dimension adopts a spatial iteration method. Combining the layer and level models into one, this paper proposes a fast modeling method for a cyberspace security structure model that takes network connection relationships, hierarchical relationships, and vulnerability information as input. The method can clearly express both the individual vulnerability constraints in the network space and the hierarchical relationships of the complex dependencies among network individuals. For independent network elements or groups of elements, it is flexible and can greatly reduce computational complexity in later applications.
Authored by Yuwen Zhu, Lei Yu
To address the single hopping strategy used in terminal-information-hopping active defense technology, a variety of heterogeneous hopping modes are introduced into the terminal information hopping system, the definition of terminal information is expanded, and an adaptive adjustment of the hopping strategy is given. A network adversarial training simulation system is researched and designed, and the related subsystems are discussed from the perspective of key technologies and their implementation, including an interactive adversarial training simulation system, adversarial training simulation support software, an adversarial training simulation evaluation system, and an adversarial training mock repository. The system can provide a good environment for research on network confrontation theory and for network confrontation training simulation, which is of great significance.
Authored by Man Wang
Online information security labs intended for training and facilitating hands-on learning for distance students at master’s level are not easy to develop and administer. This research focuses on analyzing the results of a DSR project for design, development, and implementation of an InfoSec lab. This research work contributes to the existing research by putting forth an initial outline of a generalized model for design theory for InfoSec labs aimed at hands-on education of students in the field of information security. The anatomy of design theory framework is used to analyze the necessary components of the anticipated design theory for InfoSec labs in future.
Authored by Sarfraz Iqbal
The digital transformation brought on by 5G is redefining current models of end-to-end (E2E) connectivity and service reliability to include security-by-design principles necessary to enable 5G to achieve its promise. 5G trustworthiness highlights the importance of embedding security capabilities from the very beginning, while the 5G architecture is being defined and standardized. Security requirements need to overlay and permeate the different layers of 5G systems (physical, network, and application) as well as the different parts of an E2E 5G architecture, within a risk-management framework that takes into account the evolving security-threat landscape. 5G presents a typical use case of wireless communication and computer networking convergence, where the fundamental building blocks include components such as Software Defined Networks (SDN), Network Functions Virtualization (NFV), and the edge cloud. This convergence extends many of the security challenges and opportunities applicable to SDN/NFV and cloud to 5G networks. Thus, 5G security needs to consider additional security requirements (compared to previous generations) such as SDN controller security, hypervisor security, orchestrator security, cloud security, and edge security. At the same time, 5G networks offer security improvement opportunities that should be considered: 5G architectural flexibility, programmability, and complexity can be harnessed to improve resilience and reliability. The working group scope fundamentally addresses the following:
• 5G security considerations need to overlay and permeate the different layers of 5G systems (physical, network, and application) as well as the different parts of an E2E 5G architecture, within a risk-management framework that takes into account the evolving security-threat landscape.
• 5G exemplifies a use case of heterogeneous access and computer networking convergence, which extends a unique set of security challenges and opportunities (e.g., related to SDN/NFV and the edge cloud) to 5G networks. Similarly, 5G networks by design offer potential security benefits and opportunities through harnessing the architectural flexibility, programmability, and complexity to improve resilience and reliability.
• The IEEE FNI security WG's roadmap framework follows a taxonomic structure, differentiating the 5G functional pillars and corresponding cybersecurity risks. As part of cross-collaboration, the security working group will also look into the security issues associated with other roadmap working groups within the IEEE Future Network Initiative.
Authored by Ashutosh Dutta, Eman Hammad, Michael Enright, Fawzi Behmann, Arsenia Chorti, Ahmad Cheema, Kassi Kadio, Julia Urbina-Pineda, Khaled Alam, Ahmed Limam, Fred Chu, John Lester, Jong-Geun Park, Joseph Bio-Ukeme, Sanjay Pawar, Roslyn Layton, Prakash Ramchandran, Kingsley Okonkwo, Lyndon Ong, Marc Emmelmann, Omneya Issa, Rajakumar Arul, Sireen Malik, Sivarama Krishnan, Suresh Sugumar, Tk Lala, Matthew Borst, Brad Kloza, Gunes Kurt
Cloud security has become a serious challenge due to the increasing number of attacks occurring day by day. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game-theory-based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for security in the cloud. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. Its performance is evaluated on the CICIDS-2018 dataset: the dataset is normalized, and optimal points for normal and attack data are computed with the Improved Whale Algorithm (IWA). Simulation results show that the proposed model outperforms existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
Authored by Ashima Jain, Khushboo Tripathi, Aman Jatain, Manju Chaudhary
Information security construction is a societal issue, and the most urgent task is to do an excellent job of information risk assessment. The Bayesian neural network currently plays a vital role in enterprise information security risk assessment; it overcomes the subjective defects of traditional assessment results and operates efficiently. The risk quantification method based on fuzzy theory and a Bayesian-regularized BP neural network mainly uses fuzzy theory to preprocess the original data and takes the processed data as the input of the neural network, which effectively reduces the ambiguity of linguistic descriptions. At the same time, dedicated training is carried out to counter the neural network's tendency to fall into local optima. Finally, the risk is verified and quantified through experimental simulation. This paper mainly discusses enterprise information security risk assessment based on a Bayesian neural network, hoping to provide strong technical support for enterprises and organizations carrying out risk rectification plans. The method thus offers a new approach to information security risk assessment.
Authored by Zijie Deng, Guocong Feng, Qingshui Huang, Hong Zou, Jiafa Zhang
Malware created by Advanced Persistent Threat (APT) groups does not typically carry out an attack in a single stage. The "Cyber Kill Chain" framework developed by Lockheed Martin describes an APT through a seven-stage life cycle [5]. APT groups are generally nation-state actors [1]. They perform highly targeted attacks and do not stop until the goal is achieved [7]. Researchers are continually working toward systems and processes that create an environment safe from APT-type attacks [2]. In this paper, the threat considered is ransomware developed by APT groups. WannaCry is an example of a highly sophisticated ransomware created by the Lazarus Group of North Korea, and its level of sophistication is evident from the existence of a contingency plan of attack upon being discovered [3], [6]. The major contribution of this research is the analysis of APT-type ransomware using game theory, presenting optimal strategies for the defender through the development of equilibrium solutions when faced with an APT-type ransomware attack. The goal of the equilibrium solutions is to help the defender prepare before the attack and minimize losses during and after it.
Authored by Rudra Baksi
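The equilibrium solutions described above can be illustrated on a toy two-strategy zero-sum model. The strategy labels and payoff entries below are hypothetical, chosen purely to show how a mixed-strategy equilibrium is computed in closed form for a 2x2 game:

```python
import numpy as np

# Hypothetical zero-sum ransomware game: rows are defender strategies
# (say, "restore from backup" vs. "isolate and investigate"), columns
# are attacker strategies ("encrypt only" vs. "encrypt and exfiltrate").
# Entries are the defender's losses: the defender minimizes, the
# attacker maximizes.
L = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Closed-form mixed equilibrium for a 2x2 game with no saddle point:
# each player's mix makes the opponent indifferent between strategies.
a, b, c, d = L.ravel()
denom = a - b - c + d
p = (d - c) / denom              # defender's probability of row 0
q = (d - b) / denom              # attacker's probability of column 0
value = (a * d - b * c) / denom  # expected loss at equilibrium
```

Here p = 0.25, q = 0.5, and the game value is 2.5: by randomizing, the defender caps the expected loss at 2.5 regardless of what the attacker does, which is the "preparedness" guarantee an equilibrium strategy provides.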
This article analyzes the current state of computer network security in colleges and universities, future development trends, and the relationship between software vulnerabilities and worm outbreaks. After analyzing a server model with buffer-overflow vulnerabilities, a worm implementation model based on remote buffer-overflow technology is proposed. Complex networks are the medium of worm propagation; the article therefore analyzes common complex-network evolution models (the regular network model, the ER random-graph model, the WS small-world model, and the BA scale-free model) and extracts network node characteristics such as degree distribution, single-source shortest distance, clustering coefficient, richness coefficient, and closeness centrality.
Authored by Chunhua Feng
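Of the evolution models listed above, the BA scale-free model is the one whose hub structure most strongly shapes worm propagation. A minimal preferential-attachment sketch (node counts and attachment parameter are illustrative, not from the article) is:

```python
import random

def barabasi_albert(n, m, seed=0):
    # Grow a scale-free graph: each new node attaches to m targets
    # chosen with probability proportional to their current degree.
    rng = random.Random(seed)
    targets = list(range(m))  # first new node links to the m seed nodes
    repeated = []             # each node appears once per unit of degree
    edges = []
    for new in range(m, n):
        for t in set(targets):          # de-duplicate repeated picks
            edges.append((new, t))
            repeated.append(t)
            repeated.append(new)
        # Sampling uniformly from `repeated` is exactly degree-
        # proportional (preferential) attachment.
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = barabasi_albert(200, 3)
```

The resulting degree distribution is heavy-tailed: a few early nodes become hubs with far more than m links, which is why worms spread much faster on BA networks than on ER random graphs of the same density.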
As more and more information systems rely on sensors for their critical decisions, there is a growing threat of injecting false signals into sensors in the analog domain. In particular, LightCommands showed that MEMS microphones are susceptible to light, through the photoacoustic and photoelectric effects, enabling an attacker to silently inject voice commands into smart speakers. Understanding such unexpected transduction mechanisms is essential for designing secure and reliable MEMS sensors. Is there any other transduction mechanism enabling laser-induced attacks? We answer the question positively by experimentally evaluating two commercial piezoresistive MEMS pressure sensors. By shining a laser at the piezoresistors through an air hole in the sensor package, the pressure reading changes by ±1000 hPa with 0.5 mW of laser power. This phenomenon can be explained by the photoelectric effect at the piezoresistors, which increases the number of carriers and decreases the resistance. We finally show that an attacker can induce a target signal in the sensor reading by shining an amplitude-modulated laser at the sensor.
Authored by Tatsuki Tanaka, Takeshi Sugawara
The amount of stored and transferred data is expanding rapidly, so managing and securing the large volume of diverse applications should be a high priority. The Structured Query Language Injection Attack (SQLIA) remains one of the most common and dangerous threats in the world. A large number of approaches and models have been presented to mitigate, detect, or prevent SQL injection attacks, yet the attack is still alive: based on several recent studies in 2021, it still represents the highest risk among web application security risks. Most earlier and current models are built on static, dynamic, hybrid, or machine learning techniques. In this paper, we review the latest research on SQL injection attacks and their types, and survey the most recent techniques, models, and approaches used to mitigate, detect, or prevent this kind of dangerous attack. We then explain the weaknesses of these techniques and highlight the critical points they miss. As a result, more effort is still needed to build a genuinely novel and comprehensive solution that covers all kinds of malicious SQL commands. Finally, we provide significant guidelines for mitigating such attacks, and we strongly believe these tips will help developers, decision makers, researchers, and even governments to devise solutions in future research to stop SQLIA.
Authored by Mohammad Qbea'h, Saed Alrabaee, Mohammad Alshraideh, Khair Sabri
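The single most effective mitigation surveyed in such work, parameterized queries, can be demonstrated in a few lines; the table, rows, and payload below are illustrative, not from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"   # classic tautology-based injection payload

# Vulnerable: string concatenation lets the payload rewrite the query,
# turning a lookup for one user into a query that matches every row.
leaky = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Mitigated: a parameterized query binds the payload as a literal
# value, so no user matches and nothing leaks.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()
```

The vulnerable query returns both rows while the parameterized one returns none, which is exactly the class of behavior that static, dynamic, and machine-learning detectors try to catch after the fact and that parameter binding prevents by construction.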
Electrical substations in the power grid act as the critical interface points between the transmission and distribution networks. Over the years, digital technology has been integrated into substations for remote control and automation. As a result, substations are more prone to cyber attacks and exposed to digital vulnerabilities. One notable cyber attack vector is malicious command injection, which can shut down substations and subsequently cause power outages, as demonstrated in the Ukraine power plant attack of 2015. Prevailing measures based on cyber rules (e.g., firewalls and intrusion detection systems) are often inadequate to detect advanced, stealthy attacks that use legitimate-looking measurements or control messages to cause physical damage. Additionally, defenses that use physics-based approaches (e.g., power flow simulation, state estimation) to detect malicious commands suffer from high latency. Machine learning is a potential solution for detecting command injection attacks with high accuracy and low latency. However, sufficient datasets are not readily available to train and evaluate machine learning models. Focusing on this particular challenge, we discuss various approaches for generating synthetic data that can be used to train the models, and we evaluate models trained with the synthetic data against attack datasets that simulate malicious command injections at different levels of sophistication. Our findings show that synthetic data generated with some power-grid domain knowledge helps train machine learning models that are robust against different types of attacks.
Authored by Jia Teo, Sean Gunawan, Partha Biswas, Daisuke Mashima