In this paper, we present a ternary source coding scheme based on a special class of low-density generator-matrix (LDGM) codes. We prove that a ternary linear block LDGM code, whose generator matrix is randomly generated with each element independently and identically distributed, is universal for source coding in terms of the symbol-error rate (SER). To circumvent high-complexity maximum-likelihood decoding, we introduce a special class of convolutional LDGM codes, called block Markov superposition transmission of repetition (BMST-R) codes, which are iteratively decodable by a sliding-window algorithm. The presented BMST-R codes are then applied to construct a tandem scheme for Gaussian source compression, in which a dead-zone quantizer is introduced before the ternary source coding. The main advantages of this scheme are its universality and flexibility: the dead-zone quantizer can choose a proper quantization level according to the distortion requirement, while the LDGM codes can adapt the code rate to approach the entropy of the quantized sequence. Numerical results show that the proposed scheme performs well for ternary sources over a wide range of code rates and that the distortion introduced by quantization dominates provided that the code rate is slightly greater than the discrete entropy.
Authored by Tingting Zhu, Jifan Liang, Xiao Ma
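A minimal sketch of the dead-zone quantization step described above, mapping Gaussian samples to ternary symbols; the threshold value and the {-1, 0, +1} alphabet are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

def dead_zone_quantize(x, t=0.6):
    """Map each sample to a ternary symbol in {-1, 0, +1}; samples with
    |x| <= t fall in the dead zone and quantize to 0. Widening t trades
    distortion for a lower-entropy ternary sequence (illustrative only)."""
    q = np.zeros(x.shape, dtype=int)
    q[x > t] = 1
    q[x < -t] = -1
    return q

source = np.random.randn(10_000)        # Gaussian source samples
symbols = dead_zone_quantize(source)

# Empirical entropy of the quantized sequence (bits/symbol), which the
# LDGM code rate would be chosen to approach.
_, counts = np.unique(symbols, return_counts=True)
p = counts / counts.sum()
print("discrete entropy:", -(p * np.log2(p)).sum())
```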
Improving the reliability and security of transmitted data has gained increasing attention with the growth of power-limited and lightweight communication devices, and transmissions must also meet specific latency requirements. Combining data encryption and encoding in a single physical-layer block has been explored to study its effect on security and latency relative to traditional sequential data transmission. Some current works target secure error-correcting codes that may be candidates for post-quantum applications. However, modifying popular channel coding techniques to guarantee secrecy while maintaining the same error performance and decoder complexity is challenging, since altering the structure of the channel coding blocks degrades decoding performance. The inherent redundancy of error-correcting codes also complicates the encryption method. In this paper, we briefly review the security schemes proposed for turbo codes. We then propose a secure turbo code design and compare it with the relevant security schemes in the literature. We show that the proposed method is more secure without adding complexity.
Authored by Ahmed Aladi, Emad Alsusa
Vulnerability discovery is an important field of computer security research and development today. Most current vulnerability discovery methods require large-scale manual auditing, and the code parsing process is cumbersome and time-consuming, which reduces their effectiveness. Moreover, because vulnerability discovery is inherently uncertain, a basic tool design principle is that tools should assist security analysts rather than completely replace them. The purpose of this paper is to study a source code vulnerability discovery method based on graph neural networks. The paper analyzes the three processes of the source code vulnerability mining method (data preparation, source code vulnerability mining, and security assurance) and also examines the particularities of the experimental results. The empirical analysis shows that traditional source code vulnerability mining methods become more concise and convenient after adopting graph neural network technology; in a survey, more than 82% of respondents felt that vulnerability mining methods designed with graph neural networks are more efficient.
Authored by Zhenghong Jiang
In this paper, we propose a new ordered statistics decoding (OSD) for linear block codes, referred to as local constraint-based OSD (LC-OSD). Distinguished from the conventional OSD, which chooses the most reliable basis (MRB) for re-encoding, the LC-OSD chooses an extended MRB on which local constraints are naturally imposed. A list of candidate codewords is then generated by performing a serial list Viterbi algorithm (SLVA) over the trellis specified by the local constraints. To terminate the SLVA early and reduce complexity, we present a simple criterion that monitors the ratio of the bound on the likelihood of the unexplored candidate codewords to the sum of the hard-decision vector's likelihood and the up-to-date optimal candidate's likelihood. Simulation results show that the LC-OSD requires far fewer test patterns than the conventional OSD while incurring negligible performance loss. Comparisons with other complexity-reduced OSDs are also conducted, showing the advantages of the LC-OSD in terms of complexity.
Authored by Yiwen Wang, Jifan Liang, Xiao Ma
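The early-termination rule can be pictured as a simple ratio test; the function below is a schematic of the criterion the abstract describes, with placeholder names and threshold rather than the paper's exact quantities:

```python
def should_terminate(unexplored_bound, hd_likelihood, best_likelihood,
                     threshold=1e-3):
    """Stop the serial list Viterbi search once the best likelihood any
    unexplored candidate codeword could still attain is negligible
    relative to the hard-decision vector's likelihood plus the best
    candidate found so far (threshold is an illustrative assumption)."""
    ratio = unexplored_bound / (hd_likelihood + best_likelihood)
    return ratio < threshold
```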
Vulnerability detection has always been an essential part of maintaining information security, and existing work can significantly improve detection performance. However, due to differences in representation forms and deep learning models, various methods still have limitations. To overcome these limitations, we propose VDBWGDL, a vulnerability detection method based on weight graphs and deep learning. First, it accurately locates vulnerability-sensitive keywords and generates variant code that satisfies the vulnerability-trigger logic and the programmer's coding style through code-variant methods. Then, the control flow graph is sliced around vulnerable code keywords and program-critical statements. Each code block is converted into a vector containing rich semantic information by the deep learning model and fed into the weight graph, where different weights are set for each node according to specific rules. Finally, similarity is computed through a similarity comparison algorithm, and suspected vulnerabilities are output according to different thresholds. VDBWGDL improves accuracy and F1 score by 3.98% and 4.85%, respectively, compared with four state-of-the-art models. The experimental results prove the effectiveness of VDBWGDL.
Authored by Xin Zhang, Hongyu Sun, Zhipeng He, MianXue Gu, Jingyu Feng, Yuqing Zhang
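The final similarity-and-threshold step might look like the following sketch; the cosine metric, the 0.85 threshold, and the vector inputs are assumptions for illustration, not VDBWGDL's actual settings:

```python
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_suspected(candidate_vecs, known_vuln_vecs, threshold=0.85):
    """Flag candidate code-block vectors whose best similarity to any
    known vulnerable pattern exceeds the threshold."""
    suspected = []
    for i, c in enumerate(candidate_vecs):
        score = max(cosine_similarity(c, k) for k in known_vuln_vecs)
        if score >= threshold:
            suspected.append((i, score))
    return suspected
```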
This paper presents secure MatDot codes, a family of evaluation codes that support secure distributed matrix multiplication via a careful selection of evaluation points that exploits the properties of the dual code. We show that the secure MatDot codes provide security against the user by using locally recoverable codes. These new codes complement the recently studied discrete Fourier transform codes for distributed matrix multiplication schemes, which also provide security against the user. There are scenarios where the associated costs are the same for both families, and instances where the secure MatDot codes offer a lower cost. In addition, the secure MatDot code provides an alternative way to handle the matrix multiplication by identifying the fastest servers in advance. In this way, it can determine a product using fewer servers, specified in advance, than the MatDot codes, which achieve the optimal recovery threshold for distributed matrix multiplication schemes.
Authored by Hiram López, Gretchen Matthews, Daniel Valvo
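For orientation, here is the plain (non-secure) MatDot construction with p = 2 partitions; the secure variant in the paper additionally masks the inputs and selects evaluation points via the dual code, which this sketch omits:

```python
import numpy as np

p = 2
A = np.random.randint(0, 5, (4, 4))
B = np.random.randint(0, 5, (4, 4))
A0, A1 = np.hsplit(A, p)      # column-wise split of A
B0, B1 = np.vsplit(B, p)      # row-wise split of B

# Encode: pA(x) = A0 + A1*x,  pB(x) = B0*x + B1, so that the product
# AB = A0@B0 + A1@B1 appears as the x^1 coefficient of pA(x)pB(x).
def task(x):
    return (A0 + A1 * x) @ (B0 * x + B1)

# 2p - 1 = 3 servers each evaluate the product at a distinct point.
xs = [1, 2, 3]
results = [task(x) for x in xs]

# Interpolate the degree-2 polynomial and read off its x^1 coefficient.
V = np.vander(xs, 3, increasing=True)
w = np.linalg.inv(V)[1]       # weights extracting the x^1 coefficient
C = sum(wi * Ri for wi, Ri in zip(w, results))
assert np.allclose(C, A @ B)
```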
The Social Internet of Vehicles (SIoV) has emerged as one of the most promising applications of vehicle communication, providing a safe and comfortable driving experience. It reduces traffic jams and accidents, thereby saving public resources. However, wrongly communicated messages can cause serious issues, including threats to life, so it is essential to verify the reliability of a message before acting on it. Existing works use cryptographic primitives such as threshold authentication and ring signatures, which incur huge computation and communication overheads; moreover, the ring signature size grows linearly with the threshold value. Our objective is to keep the signature size constant regardless of the threshold value. This work proposes MuSigRDT, a multisignature contract-based data transmission protocol using the Schnorr digital signature. MuSigRDT provides incentives to encourage vehicles to share correct information in real time and participate honestly in the SIoV. MuSigRDT is shown to be secure under the Universal Composability (UC) framework. The MuSigRDT contract is deployed on Ethereum's Rinkeby testnet.
Authored by Badavath Naik, Somanath Tripathy, Susil Mohanty
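To see why the aggregate signature stays constant-size, here is a toy Schnorr multisignature over a small Schnorr group; the parameters are far too small for real use, and rogue-key defenses and the contract logic of MuSigRDT are deliberately omitted:

```python
import hashlib, secrets

# Toy Schnorr group: safe prime p = 2q + 1 with q prime; g = 4 generates
# the order-q subgroup. Real deployments use much larger groups.
p, q, g = 2819, 1409, 4

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def multisign(keys, msg):
    nonces = [secrets.randbelow(q - 1) + 1 for _ in keys]
    R = 1
    for r in nonces:                      # aggregate nonce commitment
        R = R * pow(g, r, p) % p
    X = 1
    for _, Xi in keys:                    # aggregate public key
        X = X * Xi % p
    c = H(R, X, msg)
    s = sum(r + c * x for r, (x, _) in zip(nonces, keys)) % q
    return R, s, X                        # (R, s) size independent of #signers

def verify(R, s, X, msg):
    return pow(g, s, p) == R * pow(X, H(R, X, msg), p) % p

keys = [keygen() for _ in range(5)]       # five vehicles co-sign
R, s, X = multisign(keys, "hazard reported at junction 12")
assert verify(R, s, X, "hazard reported at junction 12")
```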
Nowadays, microservice architecture is known as a successful and promising architecture for smart city applications. Applying microservices to the design and implementation of systems has many advantages, such as autonomy, loose coupling, composability, scalability, and fault tolerance. However, the complexity of calls between microservices leads to problems in security, accessibility, and data management during system execution. To address these challenges, various researchers and developers have in recent years focused on the use of microservice patterns in the implementation of microservice-based systems. Microservice patterns are the result of developers' successful experiences in addressing common challenges in microservice-based systems. However, no guideline has yet been provided for an in-depth understanding of microservice patterns and how to apply them to real systems. The purpose of this paper is to investigate in detail the most widely used and important microservice patterns in order to analyze the function of each pattern, extract behavioral signatures, and construct a service dependency graph for them, so that researchers and practitioners can use the provided guideline to create a microservice-based system equipped with design patterns. To construct the proposed guideline, five real open-source projects were carefully investigated and analyzed, and the results were used to build the guideline.
Authored by Neda Mohammadi, Abbas Rasoolzadegan
The long-living nature and byte-addressability of persistent memory (PM) amplify the importance of strong memory protections. This paper develops temporal exposure reduction protection (TERP) as a framework for enforcing memory safety. By aiming to minimize the time during which a PM region is accessible, TERP offers a complementary dimension of memory protection. The paper gives a formal definition of TERP and explores the semantic space of TERP constructs and their relations to security and composability in both sequential and parallel executions. It proposes programming-system and architecture solutions to the key challenges in adopting TERP, drawing on novel support in both compilers and hardware to efficiently meet the exposure-time target. Experiments validate the efficacy of the proposed support for TERP in both efficiency and exposure-time minimization.
Authored by Yuanchao Xu, Chencheng Ye, Xipeng Shen, Yan Solihin
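As a software analogy for the exposure-window idea (the paper's actual TERP support lives in compilers and hardware, not in Python), a region can be made reachable only for the duration of an access:

```python
import mmap
from contextlib import contextmanager

@contextmanager
def exposed_region(size):
    """Map an anonymous buffer only while it is being used, then tear it
    down immediately, shrinking the window in which the data is
    accessible (a conceptual sketch of exposure reduction)."""
    buf = mmap.mmap(-1, size)
    try:
        yield buf
    finally:
        buf.close()          # end of the exposure window

with exposed_region(4096) as region:
    region[:5] = b"hello"    # the region is reachable only inside this block
```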
A weather radar is expected to provide valid, real-time information about weather conditions. To obtain these results, a weather radar takes many data samples, producing a large volume of data; the radar equipment must therefore provide high-capacity bandwidth for transmission and large storage media. The burden of data volume can be reduced by applying compression techniques at the time of data acquisition. Compressive Sampling (CS) is a data acquisition method that allows the sampling and compression processes to be carried out simultaneously, speeding up computation, reducing bandwidth on the transmission medium, and saving storage. There are three stages in the CS method: sparsity transformation using the Discrete Cosine Transform (DCT) algorithm, sampling using a measurement matrix, and reconstruction using the Orthogonal Matching Pursuit (OMP) algorithm. The sparsity transformation converts the representation of the radar signal into a sparse form, sampling extracts the important information from the radar signal, and reconstruction recovers the radar signal. The data used in this study are real IDRA beat-signal data. Based on the CS simulations, the best PSNR and RMSE values are obtained with a compression ratio (CR) of 2, while the shortest computation time is obtained with a CR of 32. CS simulation in one sector via DCT with a CR of 2 produces a PSNR of 20.838 dB and an RMSE of 0.091; with a CR of 32, it requires a computation time of 10.574 seconds.
Authored by Muhammad Ammar, Rita Purnamasari, Gelar Budiman
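The three CS stages listed above can be prototyped in a few lines; the synthetic tone below stands in for the IDRA beat signal, and the sizes and compression ratio are illustrative:

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n, cr = 256, 4                         # signal length, compression ratio
m = n // cr                            # number of CS measurements
t = np.arange(n)
x = np.cos(2 * np.pi * 5 * t / n)      # stand-in for a radar beat signal

Phi = np.random.randn(m, n) / np.sqrt(m)       # measurement matrix
Psi = idct(np.eye(n), norm="ortho", axis=0)    # DCT synthesis basis
y = Phi @ x                                    # compressed acquisition

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(Phi @ Psi, y)                          # recover sparse DCT coefficients
x_hat = Psi @ omp.coef_

print("RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))
```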
Compressive Sensing (CS) has a wide range of applications across various domains, yet the sampling of sparse signals, whether periodic or aperiodic, remains an under-explored topic. This paper proposes novel Sparse Spasmodic Sampling (SSS) techniques for different sparse signals in the original domain. The SSS techniques are proposed to overcome the drawbacks of existing CS sampling techniques: they can sample any sparse signal efficiently and also locate the non-zero components in the signal. First, Sparse Spasmodic Sampling model-1 (SSS-1), which samples random points while including the non-zero components, is proposed. A second sampling technique, Sparse Spasmodic Sampling model-2 (SSS-2), has the same working principle as model-1 with some design advancements; unlike SSS-1, it samples equidistant points. It is demonstrated that, with either sampling technique, the signal can be reconstructed by a reconstruction algorithm from a smaller number of measurements. Simulation results demonstrate the effectiveness of the proposed sampling techniques.
Authored by Umesh Mahind, Deepak Karia
In conventional sampling, signals are sampled at the Nyquist rate; in compressive sensing, signals are sampled below the Nyquist rate by randomly taking signal projections and reconstructing the signal from very few measurements. However, recovering an image from compressive measurements on a multi-resolution grid is inefficient when the image has a region of interest (RoI) that is more important than the rest. Conventional Cartesian sampling cannot give good results for motion image sensing recovery and is limited to stationary image sensing. The proposed work gives improved results by using radial sampling (a form of compressive sensing). This paper discusses the radial sampling approach together with the application of Sparse Fourier Transform algorithms, which help reduce acquisition cost and input/output overhead.
Authored by Tesu Nema, M. Parsai
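A radial mask over the 2-D Fourier plane, the sampling geometry discussed above, can be generated as follows; the spoke count and the stand-in image are illustrative assumptions:

```python
import numpy as np

def radial_mask(shape, n_spokes=32):
    """Binary mask selecting radial spokes through the center of the
    2-D Fourier plane; fewer spokes give higher compression."""
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    cy, cx = h // 2, w // 2
    r = np.arange(-max(h, w), max(h, w))
    for k in range(n_spokes):
        theta = np.pi * k / n_spokes
        ys = (cy + r * np.sin(theta)).astype(int)
        xs = (cx + r * np.cos(theta)).astype(int)
        ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
        mask[ys[ok], xs[ok]] = True
    return mask

img = np.random.rand(128, 128)              # stand-in image
F = np.fft.fftshift(np.fft.fft2(img))
measurements = F[radial_mask(F.shape)]      # keep only coefficients on spokes
print(f"kept {measurements.size / F.size:.1%} of Fourier coefficients")
```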
Mid-infrared spectroscopic imaging (MIRSI) is an emerging class of label-free, biochemically quantitative technologies targeting digital histopathology. Conventional histopathology relies on chemical stains that alter tissue color. This approach is qualitative, often making histopathologic examination subjective and difficult to quantify. MIRSI addresses these challenges through quantitative and repeatable imaging that leverages native molecular contrast. Fourier transform infrared (FTIR) imaging, the best-known MIRSI technology, has two challenges that have hindered its widespread adoption: data collection speed and spatial resolution. Recent technological breakthroughs, such as photothermal MIRSI, provide an order of magnitude improvement in spatial resolution. However, this comes at the cost of acquisition speed, which is impractical for clinical tissue samples. This paper introduces an adaptive compressive sampling technique to reduce hyperspectral data acquisition time by an order of magnitude by leveraging spectral and spatial sparsity. This method identifies the most informative spatial and spectral features, integrates a fast tensor completion algorithm to reconstruct megapixel-scale images, and demonstrates speed advantages over FTIR imaging while providing spatial resolutions comparable to new photothermal approaches.
Authored by Mahsa Lotfollahi, Nguyen Tran, Chalapathi Gajjela, Sebastian Berisha, Zhu Han, David Mayerich, Rohith Reddy
A power amplifier (PA) is an inherently nonlinear device that is widely used in communication systems. The nonlinearity of the PA makes it hard for the communication system to work well, and digital predistortion (DPD) is the standard way to solve this problem. Most DPD solutions fit the PA with a Volterra model. However, for wideband signals the performance of the Volterra model degrades. In this paper, we replace the Volterra model with B-spline functions, which fit the PA better for wideband signals. A further benefit is that the orthogonality of the coding matrix A is improved, enhancing the stability of the computation. Additionally, we use compressive sampling to reduce the complexity of the function model.
Authored by Cen Liu, Laiwei Luo, Jun Wang, Chao Zhang, Changyong Pan
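A memoryless illustration of the B-spline fit: below, a cubic least-squares spline is fitted to a toy AM/AM compression curve; the tanh PA model, noise level, and knot placement are illustrative stand-ins, not the paper's wideband PA model:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

amp_in = np.sort(np.random.rand(2000))     # input envelope samples
amp_out = np.tanh(2.5 * amp_in)            # toy PA compression curve
amp_out = amp_out + 0.01 * np.random.randn(amp_in.size)

k = 3                                      # cubic B-splines
interior = np.linspace(0.1, 0.9, 8)        # interior knots (illustrative)
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]   # clamped knot vector
spline = make_lsq_spline(amp_in, amp_out, t, k=k)

resid = amp_out - spline(amp_in)
print(f"fit RMS error: {np.sqrt(np.mean(resid ** 2)):.4f}")
```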
Communication systems across a variety of applications are increasingly using the angular domain to improve spectrum management. They require new sensing architectures that perform energy-efficient measurements of the electromagnetic environment and can be deployed in a variety of use cases. This paper presents the Directional Spectrum Sensor (DSS), a compressive sampling (CS) based analog-to-information converter (CS-AIC) that performs spectrum scanning in a focused beam. The DSS offers increased spectrum-sensing sensitivity and interferer tolerance compared to omnidirectional sensors. The DSS implementation uses a multi-antenna beamforming architecture with local oscillators that are modulated with pseudorandom waveforms to obtain CS measurements. The overall operation, limitations, and influence of wideband angular effects on spectrum scanning performance are discussed. Measurements on an experimental prototype are presented, highlighting improvements over single-antenna, omnidirectional sensing systems.
Authored by Petar Barac, Matthew Bajor, Peter Kinget
This research proposes a camera for three-dimensional (3D) image acquisition based on the compressive sensing (CS) technique, constructed from a megahertz-range intensity-modulated active light source and a kilo-frame-rate fast camera.
Authored by Quang Pham, Yoshio Hayasaki
The compressed sensing (CS) method can reconstruct images from a small amount of under-sampled data and is an effective method for fast magnetic resonance imaging (MRI). Because traditional optimization-based models for MRI suffer from non-adaptive sampling and shallow representation ability, they are unable to characterize the rich patterns in MRI data. In this paper, we propose a CS MRI method based on the iterative shrinkage thresholding algorithm (ISTA) and adaptive sparse sampling, called DSLS-ISTA-Net. Corresponding to the sampling and reconstruction stages of the CS method, the network framework includes two modules: a sampling sub-network and an improved ISTA reconstruction sub-network, which are coordinated with each other through end-to-end training in an unsupervised way. The sampling sub-network and the ISTA reconstruction sub-network are responsible for adaptive sparse sampling and deep sparse representation, respectively. In the testing phase, we investigate different modules and parameters in the network structure and perform extensive experiments on MR images at different sampling rates to obtain the optimal network. By combining the advantages of model-based and deep learning-based methods, and taking both adaptive sampling and deep sparse representation into account, the proposed networks significantly improve reconstruction performance compared with state-of-the-art CS-MRI approaches.
Authored by Wenwei Huang, Chunhong Cao, Sixia Hong, Xieping Gao
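The fixed (non-learned) ISTA iteration that such networks unroll is compact; the sketch below shows classical ISTA on a toy sparse-recovery problem, without the learned sampling or thresholds of DSLS-ISTA-Net:

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, y, lam=0.05, n_iter=300):
    """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0     # toy sensing matrix
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```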
Scanning Transmission Electron Microscopy (STEM) offers high-resolution images that are used to quantify the nanoscale atomic structure and composition of materials and biological specimens. In many cases, however, the resolution is limited by the electron beam damage, since in traditional STEM, a focused electron beam scans every location of the sample in a raster fashion. In this paper, we propose a scanning method based on the theory of Compressive Sensing (CS) and subsampling the electron probe locations using a line hop sampling scheme that significantly reduces the electron beam damage. We experimentally validate the feasibility of the proposed method by acquiring real CS-STEM data, and recovering images using a Bayesian dictionary learning approach. We support the proposed method by applying a series of masks to fully-sampled STEM data to simulate the expectation of real CS-STEM. Finally, we perform the real data experimental series using a constrained-dose budget to limit the impact of electron dose upon the results, by ensuring that the total electron count remains constant for each image.
Authored by D. Nicholls, A. Robinson, J. Wells, A. Moshtaghpour, M. Bahri, A. Kirkland, N. Browning
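A guess at the flavor of a line-hop subsampling mask (the exact scheme is defined in the paper): on each scan line, only a random subset of probe positions is visited, directly cutting the dose:

```python
import numpy as np

def line_hop_mask(shape, keep_fraction=0.25, rng=None):
    """On every scan row, keep a random subset of pixel positions
    instead of the full raster (illustrative subsampling scheme)."""
    rng = rng or np.random.default_rng()
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    k = max(1, int(keep_fraction * w))
    for row in range(h):
        mask[row, rng.choice(w, size=k, replace=False)] = True
    return mask

stem = np.random.rand(256, 256)            # stand-in fully-sampled STEM frame
mask = line_hop_mask(stem.shape)
subsampled = np.where(mask, stem, np.nan)  # simulated CS-STEM acquisition
print(f"dose reduced to {mask.mean():.0%} of the full raster")
```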
Compressive radar receivers have attracted considerable research interest due to their ability to balance sub-Nyquist sampling against high resolution. In evaluating the performance of compressive time-delay estimators, the Cramer-Rao bound (CRB) has commonly been used to lower-bound the mean square error (MSE). However, being a local bound, the CRB is not tight in the a priori performance region. In this paper, we introduce the Ziv-Zakai bound (ZZB) methodology into the compressive sensing framework and derive a deterministic ZZB for compressive time-delay estimators as a function of the compressive sensing kernel. By effectively incorporating the a priori information of the unknown time delay, the derived ZZB is much tighter than the CRB, especially in the a priori performance region. Simulation results demonstrate that the derived ZZB outperforms the Bayesian CRB over a wide range of signal-to-noise ratios, where different types of a priori distributions of the time delay are considered.
Authored by Zongyu Zhang, Chengwei Zhou, Chenggang Yan, Zhiguo Shi
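For reference, the standard Ziv-Zakai bound for a scalar delay with a uniform prior on [0, T] (the textbook form, not the paper's compressive-kernel-specific expression) reads

```latex
\mathrm{MSE} \;\ge\; \frac{1}{T} \int_{0}^{T}
h \, \mathcal{V}\!\left\{ (T - h)\, P_{\min}(h) \right\} \mathrm{d}h ,
```

where P_min(h) is the minimum error probability of the binary hypothesis test between delays tau and tau + h, and V{.} is the valley-filling operator; the paper's contribution is, in essence, evaluating this bound as a function of the compressive sensing kernel.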
Topic modeling algorithms from the natural language processing (NLP) discipline have been used for various applications; for instance, topic modeling for product recommendation in e-commerce systems. In this paper, we briefly review topic modeling applications and then describe our proposed idea of utilizing topic modeling approaches for cyber threat intelligence (CTI) applications. We improve on previous work by implementing the BERTopic and Top2Vec approaches, enabling users to select their preferred pre-trained text/sentence embedding model, and supporting various languages. We implemented our proposed idea as the new topic modeling module for the Open Web Application Security Project (OWASP) Maryam: Open-Source Intelligence (OSINT) framework. We also describe our experimental results using a leaked hacker forum dataset (nulled.io) to attract more researchers and open-source communities to participate in the Maryam project of the OWASP Foundation.
Authored by Hatma Suryotrisongko, Hari Ginardi, Henning Ciptaningtyas, Saeed Dehqan, Yasuo Musashi
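A minimal example of the kind of BERTopic usage such a module wraps; the post list is a tiny placeholder (real usage feeds thousands of scraped forum posts) and the embedding model name is one arbitrary choice among user-selectable options:

```python
from bertopic import BERTopic

posts = [
    "selling database dump from recent breach",
    "new credential stuffing tool released",
    "looking for RDP access to corporate networks",
] * 20   # toy corpus; clustering needs a realistically sized input

# Any sentence-transformers model name can be passed in, mirroring the
# user-selectable embedding model described above.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", min_topic_size=5)
topics, probs = topic_model.fit_transform(posts)
print(topic_model.get_topic_info())      # discovered CTI-relevant topics
```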
Big shopping marts are expanding their business all over the world, but not all of them are protected by advanced security systems. Cases where people take items out of the mart without billing are common. Such marts require an advanced feature-based security system so that they can run an efficient, loss-free business. The idea presented here can not only be implemented in marts to enhance their security but can also be applied in various other fields to cope with inefficient management systems. Several store issues, such as regularly updating stock, placing orders for new products, and replacing expired products, can be solved with the idea we present. We also plan to make the slow processes of billing and checkout faster and more efficient, resulting in better customer satisfaction.
Authored by Shubh Khandelwal, Shreya Sharma, Sarthak Vishnoi, Ms Ashi Agarwal
Artificial intelligence (AI), born of the rapid development of new technologies, has altered the environment of business financial audits and raised new problems in recent years. As the pioneers of enterprise financial monitoring, auditors must actively and proactively adapt to the new audit environment in the age of AI; however, auditors' performance during this adaptation has not been favorable. In this paper, methods such as data analysis and field research are used to conduct investigations and surveys. In the process of applying AI to the financial auditing of a business, a number of issues are discovered, such as the underappreciation of auditors, information security risks, and uncertainty about liability risk. Based on these problems, suggestions for improvement are provided, including cultivating multi-skilled talent, emphasizing the value of auditors, and developing a mechanism for accepting responsibility.
Authored by Wenfeng Xiao
Network intrusion detection is a popular technology for current network security, but existing network intrusion detection technology suffers in practice from low detection efficiency, low detection accuracy, and otherwise poor detection performance. To solve these problems, a new approach combining artificial intelligence with network intrusion detection is proposed. AI-based network intrusion detection applies artificial intelligence techniques, such as neural networks and related algorithms, to network intrusion detection; the application of these techniques makes automatic detection by network intrusion detection models possible.
Authored by Chaofan Lu
Artificial intelligence (AI) and machine learning (ML) have been transforming our environment and the way people think, behave, and make decisions during the last few decades [1]. In the last two decades, everyone connected to the Internet, whether an enterprise or an individual, has become concerned about the security of their computational resources. Cybersecurity is responsible for protecting hardware and software resources from cyber attacks, e.g., viruses, malware, intrusion, and eavesdropping. Cyber attacks come either from black-hat hackers or from cyber warfare units. AI and ML have played an important role in developing efficient cybersecurity tools. This paper presents the latest machine learning-based cybersecurity tools: Windows Defender ATP, Darktrace, Cisco Network Analytics, IBM QRadar, StringSifter, Sophos Intercept X, SIME, NPL, and Symantec Targeted Attack Analytics.
Authored by Taher Ghazal, Mohammad Hasan, Raed Zitar, Nidal Al-Dmour, Waleed Al-Sit, Shayla Islam
Document scanning aims to transform captured photographs of documents into scanned document files. However, current methods based on traditional techniques or key-point detection suffer from low detection accuracy. In this paper, we are the first to propose a document processing system based on semantic segmentation. Our system uses OCRNet to segment documents; then, perspective transformation and other post-processing algorithms are used to obtain well-scanned documents based on the segmentation result. Meanwhile, we optimized OCRNet's loss function and reached 97.25 MIoU on the test dataset.
Authored by Ziqi Shan, Yuying Wang, Shunzhong Wei, Xiangmin Li, Haowen Pang, Xinmei Zhou
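The perspective-transformation step mentioned above, sketched with OpenCV; the corner coordinates (in practice recovered from the segmentation mask), file names, and output size are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg")              # captured photograph of a document

# Four document corners, ordered top-left, top-right, bottom-right,
# bottom-left; in the described system these come from the OCRNet mask.
corners = np.float32([[120, 80], [980, 60], [1010, 1400], [90, 1420]])
w, h = 850, 1100                           # target scan size (illustrative)
target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(corners, target)
scan = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("scan.png", scan)
```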