To exploit the high temporal correlation across video frames of the same scene, the current frame is predicted from already-encoded reference frames using block-based motion estimation and compensation. While this approach efficiently captures the translational motion of moving objects, it handles other types of affine motion and object occlusion/de-occlusion poorly. Recently, deep learning has been used to model the high-level structure of human pose in specific actions from short videos and then generate virtual future frames by predicting the pose with a generative adversarial network (GAN). Modelling the high-level structure of human pose can therefore exploit semantic correlation by predicting human actions and determining their trajectories. Video surveillance applications will benefit, as stored “big” surveillance data can be compressed by estimating human pose trajectories and generating future frames through semantic correlation. This paper explores a new way of video coding by modelling human pose from the already-encoded frames and using the frame generated at the current time as an additional forward-referencing frame. The proposed approach is expected to overcome the limitations of traditional backward-referencing frames by predicting the blocks containing moving objects with lower residuals. Our experimental results show that the proposed approach achieves up to 2.83 dB PSNR gain and 25.93% bitrate savings on average for high-motion video sequences compared to standard video coding.
Authored by S Rajin, Manzur Murshed, Manoranjan Paul, Shyh Teng, Jiangang Ma
With the rapid development of artificial intelligence, video target tracking is widely used in intelligent video surveillance, intelligent transportation, intelligent human-computer interaction, and intelligent medical diagnosis. Deep learning has achieved remarkable results in computer vision: it has solved many problems that traditional algorithms find difficult, improved machine understanding of images and videos, and driven progress in related computer-vision technologies. This paper combines a deep learning algorithm with a target tracking algorithm and conducts experiments on basketball motion detection video, in the hope that the experimental results will be helpful for target tracking in basketball motion detection video.
Authored by Tieniu Xia
Video summarization aims to improve the efficiency of large-scale video browsing by producing concise summaries. It is popular in many scenarios, such as video surveillance, video review, and data annotation. Traditional video summarization techniques filter in the image-feature dimension or the image-semantics dimension alone. However, such techniques can discard a large amount of potentially useful information, especially for videos with rich text semantics such as interviews and teaching videos, because only the information relevant to the image dimension is retained. To solve this problem, this paper treats video summarization as a continuous multi-dimensional decision-making process. Specifically, the summarization model predicts a probability for each frame and its corresponding text, and we design reward methods for each of them. Finally, comprehensive summaries in two dimensions, i.e. images and semantics, are generated. This approach is not only unsupervised, relying on neither labels nor user interaction, but also decouples the semantic and image summarization models to provide more usable interfaces for subsequent engineering use.
Authored by Haoran Sun, Xiaolong Zhu, Conghua Zhou
The deep learning approach is one of the most widely studied topics in public safety and tracking and has attracted considerable interest in recent years. Current public safety methods exist for counting and detecting persons, but abnormal events occurring in public spaces are seldom detected and reported to raise an automated alarm. Our proposed method detects anomalies (deviations from normal events) in video surveillance footage using deep learning and raises an alarm if an anomaly is found. The proposed model is trained to detect anomalies and is then applied to the recordings of surveillance cameras used to monitor public safety. The video is assessed frame by frame, and an alarm is raised whenever an anomaly is detected.
Authored by K Nithesh, Nikhath Tabassum, D. Geetha, R Kumari
In recent years, to continuously promote the construction of safe cities, security monitoring equipment has been deployed widely across the country. Using computer vision technology to realize effective intelligent analysis of violence in video surveillance is therefore very important for maintaining social stability and protecting people's lives and property. Video surveillance systems are widely used because they are intuitive and convenient. However, existing systems offer relatively limited functionality, generally only the viewing, querying, and playback of recorded video. In addition, researchers have paid little attention to the complex abnormal behavior of violence, and related research often ignores the differences between violent behaviors in different scenes. At present, video abnormal-behavior detection faces two main problems: there is little video data of abnormal behavior, and the definition of abnormal behavior cannot be clearly distinguished across scenes. The main existing approach is to model normal behavior events first and then define videos that do not conform to the normal model as abnormal; among such methods, deep learning of video spatio-temporal feature representations shows good promise. Faced with massive surveillance videos, deep learning is needed to identify violent behaviors, so that machines can learn to recognize human actions instead of relying on manual monitoring of camera images to raise alarms on violent behavior. The networks are trained mainly on video datasets to recognize violent actions.
Authored by Xuezhong Wang
In this paper, we quantify elements representing video features and propose deep-learning-based bitrate prediction for compressed video. In particular, we use deep learning to overcome the disadvantage that the bitrate of a video compressed with a Constant Rate Factor (CRF) cannot be predicted in advance. We identify the video-feature elements related to the bitrate obtained when the video is compressed, and we confirm through various deep learning techniques that this relationship can be found.
Authored by Hankaram Choi, Yongchul Bae
The Internet has evolved to the point that gigabytes and even terabytes of data are generated and processed on a daily basis. Such a data stream is characterised by high volume, velocity, and variety and is referred to as Big Data. Traditional data processing tools can no longer be used to process big data, because they were not designed to handle such a massive amount of data. This problem also concerns cyber security, where tools like intrusion detection systems employ classification algorithms to analyse the network traffic. Achieving high-accuracy attack detection becomes harder as the amount of data increases, and the algorithms must be efficient enough to keep up with the throughput of a huge data stream. Due to the challenges posed by a big data environment, some monitoring systems have already shifted from deep packet inspection to flow-level inspection. The goal of this paper is to evaluate the applicability of an existing intrusion detection technique that performs deep packet inspection in a big data setting. We have conducted several experiments with Apache Spark to assess the performance of the technique when classifying anomalous packets, showing that it benefits from the use of Spark.
Authored by Fabrizio Angiulli, Angelo Furfaro, Domenico Saccá, Ludovica Sacco
Attack detection in enterprise networks increasingly faces large data volumes, partly arriving in high bursts, and heavily fluctuating data flows that often cause arbitrary discarding of data packets in overload situations, which attackers can exploit to hide their activities. Attack detection systems usually configure a comprehensive set of signatures for known vulnerabilities in different operating systems, protocols, and applications. Many of these signatures, however, are not relevant in every context, since certain vulnerabilities have already been eliminated, or the vulnerable applications or operating system versions are not installed on the systems involved. In this paper, we present an approach for clustering data flows to assign them to dedicated analysis units that contain only the signature sets relevant for the analysis of those flows. We discuss the performance of this clustering and show how it can be used in practice to improve the efficiency of an analysis pipeline.
Authored by Michael Vogel, Franka Schuster, Fabian Kopp, Hartmut König
Though several deep learning (DL) detectors have been proposed for network attack detection and achieve high accuracy, they are computationally expensive and struggle to satisfy real-time detection on high-speed networks. Recently, programmable switches have exhibited remarkable throughput efficiency on production networks, indicating that a timely detector could be deployed there. Therefore, we present Soter, a DL-enhanced in-network framework for accurate real-time detection. Soter consists of two phases: one filters packets with a rule-based decision tree running on the Tofino ASIC; the other executes a well-designed lightweight neural network on the CPU for thorough inspection of the suspicious packets. Experiments on a commodity switch demonstrate that Soter behaves stably in ten network scenarios with different traffic rates and fulfills per-flow detection in 0.03 s. Moreover, Soter naturally adapts to distributed deployment among multiple switches, guaranteeing a higher total throughput for large data centers and cloud networks.
Authored by Guorui Xie, Qing Li, Chupeng Cui, Peican Zhu, Dan Zhao, Wanxin Shi, Zhuyun Qi, Yong Jiang, Xi Xiao
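The two-phase design described in the abstract above (a cheap rule-based filter first, a heavier model only for suspicious packets) can be illustrated with a minimal sketch. The thresholds, field names, and the `deep_inspect` stub below are hypothetical, not taken from the Soter paper:

```python
def fast_filter(pkt: dict) -> bool:
    """Phase 1: rule-based decision tree (as would run on a switch).
    Returns True when the packet looks suspicious and needs phase 2."""
    if pkt["len"] > 1400 and pkt["proto"] == "udp":
        return True
    if pkt["dst_port"] in {23, 2323}:  # telnet ports, common scan targets
        return True
    return False

def deep_inspect(pkt: dict) -> str:
    """Phase 2 stub: stands in for the lightweight neural network on the CPU."""
    return "malicious" if pkt["len"] > 1450 else "benign"

def classify(pkt: dict) -> str:
    # Most packets are cleared by the cheap filter; only suspicious
    # ones pay the cost of the heavier model.
    return deep_inspect(pkt) if fast_filter(pkt) else "benign"

verdict = classify({"len": 1480, "proto": "udp", "dst_port": 53})
```

The split mirrors the paper's motivation: keeping the expensive model off the fast path is what makes per-flow real-time detection feasible.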
Network attacks become more complicated as technology improves, and traditional statistical methods may be insufficient for detecting constantly evolving network attacks. For this reason, payload-based deep packet inspection methods are very important for detecting attack flows before they damage the system. In the proposed method, features are extracted from the byte distributions in the payload, and N-Gram analysis methods are used so that these features characterize the flows more deeply. The proposed procedure has been tested on the IDS 2012 and 2017 datasets, which are widely used in the literature.
Authored by Süleyman Özdel, Pelin Ateş, Çağatay Ateş, Mutlu Koca, Emin Anarım
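The payload features described in the abstract above can be sketched as a byte-value histogram plus n-gram frequencies. This is an illustrative reconstruction, not the authors' implementation; the sample HTTP payload is invented:

```python
from collections import Counter

def byte_distribution(payload: bytes) -> list:
    """Normalised histogram of byte values: a fixed 256-feature vector."""
    counts = Counter(payload)
    total = len(payload) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def ngram_features(payload: bytes, n: int = 2, top_k: int = 10) -> list:
    """Relative frequencies of the top_k most common byte n-grams."""
    grams = Counter(payload[i:i + n] for i in range(len(payload) - n + 1))
    total = sum(grams.values()) or 1
    return [(g, c / total) for g, c in grams.most_common(top_k)]

payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
hist = byte_distribution(payload)       # 256 values summing to 1
bigrams = ngram_features(payload, n=2, top_k=3)
```

Vectors like `hist` and `bigrams` are what a downstream classifier would consume to separate benign from attack flows.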
Internet service providers (ISPs) rely on network traffic classifiers to provide secure and reliable connectivity for their users. Encrypted traffic introduces a challenge, as attack detection is no longer viable using classic Deep Packet Inspection (DPI) techniques. Distinguishing encrypted from non-encrypted traffic is the first step in addressing this challenge, and several attempts have been made to identify encrypted traffic. In this work, we compare the detection performance of DPI, traffic-pattern, and randomness tests for identifying encrypted traffic at different levels of granularity. In an experimental study, we evaluate these candidates and show that a traffic-pattern-based classifier outperforms the others for encryption detection.
Authored by Hossein Doroud, Ahmad Alaswad, Falko Dressler
The growing number of cybersecurity incidents and the ever-increasing complexity of cybersecurity attacks are forcing the industry and the research community to develop robust and effective methods to detect and respond to network attacks. Many tools are either built upon a large number of rules and signatures, which only large third-party vendors can afford to create and maintain, or are based on complex artificial intelligence engines which, in most cases, still require personalization and fine-tuning through costly service contracts offered by the vendors. This paper introduces an open-source network traffic monitoring system based on the concept of cyberscore, a numerical value that represents how relevant a network activity is for spotting cybersecurity-related events. We describe how this technique has been applied in real-life networks and present the results of this evaluation.
Authored by Luca Deri, Alfredo Cardigliano
Current intrusion detection techniques cannot keep up with the increasing amount and complexity of cyber attacks. In fact, most of the traffic is encrypted, which prevents the application of deep packet inspection approaches. In recent years, Machine Learning techniques have been proposed for post-mortem detection of network attacks, and many datasets have been shared by research groups and organizations for training and validation. Unlike the vast related literature, in this paper we propose an early classification approach, conducted on the CSE-CIC-IDS2018 dataset (which contains both benign and malicious traffic), for the detection of malicious attacks before they can damage an organization. To this aim, we investigated a different set of features and the sensitivity of the performance of five classification algorithms to the number of observed packets. Results show that ML approaches relying on ten packets provide satisfactory results.
Authored by Idio Guarino, Giampaolo Bovenzi, Davide Di Monda, Giuseppe Aceto, Domenico Ciuonzo, Antonio Pescapè
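Early classification of the kind described above rests on extracting a fixed-size feature vector from only the first few packets of a flow. The sketch below is a hypothetical illustration of such features (packet lengths, inter-arrival times, direction ratio); the field names and the ten-packet window follow the abstract's idea, not the paper's exact feature set:

```python
from statistics import mean, pstdev

def early_flow_features(packets: list, n: int = 10) -> list:
    """Fixed-size feature vector from the first n packets of a flow."""
    window = packets[:n]
    lengths = [p["len"] for p in window]
    # Inter-arrival times between consecutive packets in the window.
    iats = [b["ts"] - a["ts"] for a, b in zip(window, window[1:])] or [0.0]
    upstream = sum(1 for p in window if p["dir"] == "up")
    return [
        len(window),                 # packets actually observed
        mean(lengths), pstdev(lengths),
        mean(iats), pstdev(iats),
        upstream / len(window),      # share of client-to-server packets
    ]

flow = [{"ts": 0.00, "len": 60,   "dir": "up"},
        {"ts": 0.02, "len": 1500, "dir": "down"},
        {"ts": 0.03, "len": 60,   "dir": "up"}]
vec = early_flow_features(flow, n=10)
```

A vector like `vec` would then be fed to any of the five classifiers the paper compares, letting a verdict be issued after ten packets rather than at flow termination.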
The Manufacturer Usage Description (MUD) standard aims to reduce the attack surface for IoT devices by locking down their behavior to a formally-specified set of network flows (access control entries). Formal network behaviors can also be systematically and rigorously verified in any operating environment. Enforcing MUD flows and monitoring their activity in real-time can be relatively effective in securing IoT devices; however, its scope is limited to endpoints (domain names and IP addresses) and transport-layer protocols and services. Therefore, misconfigured or compromised IoTs may conform to their MUD-specified behavior but exchange unintended (or even malicious) contents across those flows. This paper develops PicP-MUD with the aim to profile the information content of packet payloads (whether unencrypted, encoded, or encrypted) in each MUD flow of an IoT device. That way, certain tasks like cyber-risk analysis, change detection, or selective deep packet inspection can be performed in a more systematic manner. Our contributions are twofold: (1) We analyze over 123K network flows of 6 transparent (e.g., HTTP), 11 encrypted (e.g., TLS), and 7 encoded (e.g., RTP) protocols, collected in our lab and obtained from public datasets, to identify 17 statistical features of their application payload, helping us distinguish different content types; and (2) We develop and evaluate PicP-MUD using a machine learning model, and show how we achieve an average accuracy of 99% in predicting the content type of a flow.
Authored by Arman Pashamokhtari, Arunan Sivanathan, Ayyoob Hamza, Hassan Gharakheili
Currently, El Salvador is working to implement the digital signature, and as part of this technology a Public Key Infrastructure (PKI) is required, which must validate Certificate Authorities (CAs). A CA must implement software that allows it to manage digital certificates and perform security procedures for the execution of cryptographic operations, such as encryption, digital signatures, and non-repudiation of electronic transactions. The present work proposes a digital certificate management system that complies with the Digital Signature Law of El Salvador and secure cryptography standards. Additionally, a security discussion is provided.
Authored by Álvaro Zavala, Leonel Maye
A digital signature is an application of asymmetric cryptography used to assure the recipient that a received message actually comes from the intended sender. A problem that often arises in the conventional process of obtaining a letter's approval from an authorized official is that, even when the letter is important and urgent, signing is delayed because the official is not present. With this obstacle, a letter that should be distributed immediately is held up for a long time by the signing process. The purpose of this study is to prevent eavesdropping and tampering during data transmission by using a digital signature to authenticate the data, and to minimize fake signatures on letters not made and authorized by the relevant officials, based on digital signatures stored in a database. This research implements the Rivest-Shamir-Adleman (RSA) method in an application that provides authorization, i.e. online signing, with a digital signature. The results show that the RSA algorithm can run in an application with the digital signature method: in ISO 9126 testing by expert examiners, the questionnaire distributed to users and application operators yielded a good average score of 79.81 on the ISO 9126 conversion scale. A recommendation for future work is to use Bcrypt instead of MD5 to make the database more secure.
Authored by Wahyu Widiyanto, Dwi Iskandar, Sri Wulandari, Edy Susena, Edy Susanto
As the demand for effective information protection grows, security has become the primary concern in protecting data from attackers. Cryptography is one of the methods for safeguarding such information: it is a method of storing and distributing data in a specific format that can only be read and processed by the intended recipient. It offers a variety of security services such as integrity, authentication, confidentiality, and non-repudiation. The confidentiality service is required to prevent disclosure of information to unauthorized parties. In this paper, it is proved that there are no ideal hash functions underlying digital signature concepts.
Authored by Nagaeswari Bodapati, N. Pooja, Amrutha Varshini, Naga Jyothi
This is the era of the internet, and we communicate our confidential data over it in daily life. It is therefore necessary to verify authenticity in communication and to prevent repudiation by the sender; we use digital signatures for this purpose. Many versions of digital signatures are available on the market, but in every algorithm the original message and its digest are sent to the receiver, so no security is applied to the original message itself. In this paper we propose an algorithm that secures the original message and its integrity. We use the RSA algorithm for encryption and decryption, and the SHA-256 algorithm for computing the hash.
Authored by Surendra Chauhan, Nitin Jain, Satish Pandey
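The hash-then-sign flow that the RSA/SHA-256 abstracts above describe can be sketched with textbook-sized toy parameters. This is purely illustrative: the key `(n=3233, e=17, d=2753)` is the classic teaching example, far too small for real use, and reducing the digest mod n is a toy step, not part of a real padding scheme such as PKCS #1:

```python
import hashlib

# Toy RSA parameters (textbook example; real keys are >= 2048 bits).
p, q = 61, 53
n = p * q            # 3233
e, d = 17, 2753      # e*d = 1 (mod phi(n)), phi(n) = 3120

def sign(message: bytes) -> int:
    """Hash-then-sign: SHA-256 digest reduced mod n (toy reduction)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)          # signature = h^d mod n

def verify(message: bytes, signature: int) -> bool:
    """Recompute the digest and compare with signature^e mod n."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"Letter approved by the authorized official"
sig = sign(msg)
ok = verify(msg, sig)
```

Because the RSA map x -> x^e mod n is a bijection on Z_n, any altered signature value fails verification, which is the non-repudiation property these papers rely on.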
This research investigates efficient architectures for implementing the CRYSTALS-Dilithium post-quantum digital signature scheme on reconfigurable hardware, in terms of speed, memory usage, power consumption, and resource utilisation. Post-quantum digital signature schemes involve significant computational effort, making efficient hardware accelerators an important contributor to the future adoption of such schemes. This is work in progress, comprising the establishment of a comprehensive test environment for operational profiling and an investigation of novel architectures to achieve optimal performance.
Authored by Donal Campbell, Ciara Rafferty, Ayesha Khalid, Maire O'Neill
The MANET architecture's future growth will make extensive use of encryption to keep network participants safe. Using a digital-signature node ID, we illustrate how the secure growth of subjective clusters can be stimulated while simultaneously addressing security and energy-efficiency concerns. The dynamic topology of a MANET allows nodes to join and leave at any time, which can be exploited by a form of attack known as a black hole attack. In MATLAB R2012a, an attacker used a digital signature ID to authenticate the node from which he wished to intercept messages, demonstrating that he had the shortest path with the least energy consumption (DSEP). Keywords: digital signature, MANET, AODV, black hole attack, single black hole attack, DSEP.
Authored by Sunil Gupta, Mohammad Shahid, Ankur Goyal, Rakesh Saxena, Kamal Saluja
Being web-based, the EJBCA digital signature server may have vulnerabilities; a list of web-based vulnerabilities can be found in the OWASP Top 10 2021. Anticipating attacks with an effective and efficient forensics application is therefore necessary. The concept of digital forensic readiness can be applied as a pre-incident plan, with a digital forensic lifecycle pipeline establishing an efficient forensic process. Managing digital evidence in the pre-incident plan includes data collection, examination, analysis, and a findings report. Based on this concept, we designed an information system that carries out the entire flow and provides attack-evidence collection, visualization of attack statistics in an executive summary, mitigation recommendations, and generation of a forensic report in physical form when needed. This research offers an information system that can support the digital forensic process and maintain the integrity of the EJBCA digital signature server.
Authored by Ihsan Rasyid, Luqman Zagi, Suhardi
The paper presents the concept of combining digital signature technology with the currently trending blockchain technology to provide a mechanism that detects dubious data and stores it where it remains secure for the long term. The features of blockchain technology perfectly complement the requirements of today's educational field. The growing use of digital certificates makes it easier for a dubious certificate to exist among genuine ones, hampering the integrity of professional life. Associating a hash key and a timestamp with a digital document ensures that a third party cannot corrupt the certificate. The blockchain ensures that, after verification, nobody else can misuse the uploaded data, keeping it safe for a long time. The information on the blockchain can be retrieved at any moment by the user, using the unique ID associated with every user.
Authored by Adeeba Habeeb, Vinod Shukla, Suchi Dubey, Shaista Anwar
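The hash-plus-timestamp binding described in the abstract above can be sketched in a few lines: fingerprint the certificate together with its issue time, then chain each record to the previous block's hash. The record layout and field names below are invented for illustration; they are not the paper's design:

```python
import hashlib
import json

def certificate_record(doc: bytes, issued_at: float) -> dict:
    """Bind a document to a timestamp via a SHA-256 fingerprint."""
    fp = hashlib.sha256(doc + str(issued_at).encode()).hexdigest()
    return {"timestamp": issued_at, "hash": fp}

def add_block(chain: list, record: dict) -> dict:
    """Append a block whose hash covers the record and the previous block."""
    prev = chain[-1]["block_hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    block = {"prev": prev, "record": record,
             "block_hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(block)
    return block

chain = []
rec = certificate_record(b"B.Sc. certificate, Jane Doe", issued_at=1700000000.0)
add_block(chain, rec)
```

Verification is simply recomputing the fingerprint from the presented document and timestamp: any forged document yields a different hash, so it cannot match the record anchored in the chain.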
The rapid development of technology makes it easier for everyone to exchange information and knowledge, but exchanging information via the internet is threatened by security issues. Security issues, especially the confidentiality of information content and its authenticity, are vital and must be protected. This is particularly true for agencies that frequently hold events issuing certificates in digital form to participants. Digital certificates are digital files conventionally used as proof of participation or a token of appreciation. We therefore need a security technology for certificates as a source of information, known as cryptography. This study aims to validate and authenticate digital certificates with digital signatures using SHA-256, DSA, and 3DES. The SHA-256 hash function is used in line with the DSA method, and 3DES is implemented with two private keys to increase the security of digital certificate files. In the MSE calculation, the pixel changes that appear when a file is manipulated range from a lowest value of 7.4510 to a highest value of 165.0561, which confirms that the security of the proposed method is maintained, because only the original file is valid.
Authored by Bagas Yulianto, Budi Handoko, Eko Rachmawanto, Pujiono, Arief Soeleman
Recently, placing vehicles in parking areas has become a problem, and a smart parking system is proposed to solve it. Most smart parking systems are centralized, and such systems are at risk of a single point of failure that can affect the whole system. To overcome this weakness of the centralized system, the most popular mechanism researchers have proposed is blockchain. However, if no mechanism is implemented in the blockchain to verify the authenticity of every transaction, the system is not secure against impersonation attacks. This study combines the blockchain mechanism with a Ring Learning With Errors (RLWE)-based digital signature to secure the scheme against impersonation and double-spending attacks. RLWE, first proposed by Lyubashevsky et al., is a development of the earlier Learning With Errors (LWE) scheme.
Authored by Jihan Atiqoh, Ari Barmawi, Farah Afianti
The rise of social media has brought with it the rise of fake news, and fake news comes with negative consequences. With fake news being such a huge issue, efforts should be made to identify all forms of it; however, this is not simple. Manually identifying fake news is extremely subjective, as determining the accuracy of the information in a story is complex and difficult even for experts. On the other hand, an automated solution requires a good understanding of NLP, which is also complex and may have difficulty producing accurate output. Therefore, the main problem this project focuses on is the viability of developing a system that can effectively and accurately detect and identify fake news. Finding a solution would significantly benefit the media industry, particularly the social media industry, where a large proportion of fake news is published and spread. To this end, this project proposed the development of a fake news identification system using deep learning and natural language processing. The system was developed using a Word2vec model combined with a Long Short-Term Memory (LSTM) model to showcase the compatibility of the two models in a single system. It was trained and tested on two dataset collections, each consisting of one real-news dataset and one fake-news dataset. Furthermore, three independent variables were chosen (the number of training cycles, data diversity, and vector size) to analyze the relationship between these variables and the accuracy of the system. These three variables were found to have a significant effect on accuracy. The system was then trained and tested with the optimal variable settings and achieved the minimum expected accuracy level of 90%.
The achievement of this accuracy level confirms the compatibility of the LSTM and Word2vec models and their capability to be synergized into a single system that identifies fake news with a high level of accuracy.
Authored by Anand Matheven, Burra Kumar