In defense and security applications, detecting the direction of a moving target is as important as target detection and classification. In this study, a methodology was proposed for classifying different mobile targets in ground surveillance radar data as approaching or receding, and convolutional neural networks (CNNs) based on transfer learning were employed for this purpose. To improve classification performance, the use of two key concepts was proposed: the Deep Convolutional Generative Adversarial Network (DCGAN) and decision fusion. With DCGAN, the limited data available for training was augmented, creating a larger training dataset whose distribution matches that of the original data for both moving directions. The generated synthetic data was then used alongside the original training data to train three different pre-trained deep convolutional networks. Finally, the classification results obtained from these networks were combined with a decision fusion approach. To evaluate the performance of the proposed method, the publicly available RadEch dataset, consisting of eight ground target classes, was utilized. The experimental results show that the combined use of the proposed DCGAN and decision fusion methods increased the direction detection accuracy for the person, vehicle, group-of-persons, and all-targets groups by 13.63%, 10.01%, 14.82%, and 8.62%, respectively.
Authored by Asli Omeroglu, Hussein Mohammed, Argun Oral, Yucel Ozbek
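To make the decision-fusion step concrete, here is a minimal sketch (ours, not the authors' code) of soft-voting fusion over the softmax outputs of three fine-tuned CNNs; the class names, weights, and probability values are illustrative assumptions.

```python
# Illustrative soft-voting decision fusion (not the authors' code): average
# the class-probability outputs of three fine-tuned CNNs and take the argmax.
import numpy as np

def fuse_decisions(prob_list, weights=None):
    """Weighted average of class-probability vectors, then argmax."""
    probs = np.asarray(prob_list, dtype=float)     # (n_classifiers, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return int(fused.argmax()), fused

# Toy outputs of three CNNs on one radar-echo input: [P(approaching), P(receding)]
p1, p2, p3 = [0.72, 0.28], [0.41, 0.59], [0.66, 0.34]
label, fused = fuse_decisions([p1, p2, p3])
print(["approaching", "receding"][label], fused)   # approaching [0.597 0.403]
```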
With the electric power distribution grid facing ever-increasing complexity and new threats from cyber-attacks, situational awareness for system operators is quickly becoming indispensable. Identifying de-energized lines on the distribution system during a SCADA communication failure is a prime example where operators must act quickly to deal with an emergent loss of service. Loss of cellular towers, poor signal strength, and even cyber-attacks can impact SCADA visibility of line devices on the distribution system. Neural networks (NNs) provide a unique approach to learn the characteristics of normal system behavior, identify when abnormal conditions occur, and flag these conditions for system operators. This study applies a 24-hour load forecast for distribution line devices, given the weather forecast and day of the week, and then determines the current state of distribution devices based on changes in SCADA analogs from communicating line devices. A neural-network-based algorithm is applied to historical events on Alabama Power's distribution system to identify de-energized sections of line when a significant amount of SCADA information is hidden.
Authored by Matthew Leak, Ganesh Venayagamoorthy
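As a hedged illustration of the forecasting idea described above (not Alabama Power's actual system), the sketch below trains a small regressor to map hour, weekday, and temperature to expected line load, then flags a device whose SCADA analog falls far below the forecast; the synthetic data, features, and threshold are all placeholders.

```python
# Hedged sketch: forecast expected load from (hour, weekday, temperature) and
# flag a line device whose SCADA analog falls far below the forecast.
# Synthetic data; features and threshold are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((2000, 3)) * [24, 7, 40]          # hour, weekday, temp (deg C)
y = 50 + 2.0 * X[:, 2] + 5 * np.sin(X[:, 0] / 24 * 2 * np.pi) \
    + rng.normal(0, 2, 2000)                     # toy load in amps

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

forecast = model.predict([[14.0, 2.0, 30.0]])[0] # 2 pm, Wednesday, 30 deg C
observed = 0.0                                   # the device's SCADA analog
if observed < 0.2 * forecast:                    # illustrative threshold
    print("possible de-energized line section")
```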
Like other critical cyber-physical systems, power grid monitoring systems and devices are subject to various cyberattacks by determined and well-funded adversaries, and numerous real-world cyberattacks on power grid systems have been publicly reported. Phasor Measurement Unit (PMU) networks with Phasor Data Concentrators (PDCs) are the main building blocks of the overall wide-area monitoring and situational awareness systems in the power grid. The data between PMUs and PDC(s) are sent over legacy networks, which are subject to many attack scenarios with no, or inadequate, countermeasures in protocols such as IEEE C37.118.2. In this paper, we consider a stealthy data spoofing attack against PMU networks, called a mirroring attack, in which an adversary injects a copy of a set of packets in reverse order immediately following their original positions, wiping out the correct values. To the best of our knowledge, we are the first in the literature to consider such an attack, which is more challenging both in terms of strategy and in the lower percentage of spoofed packets. As part of our detection countermeasure, we use a novel framing approach to apply a 2D Convolutional Neural Network (CNN), which avoids the computational overhead of classical sample-based classification algorithms. Our experimental evaluation shows promising results in terms of both high accuracy and true positive rates, even under the aforementioned stealthy adversarial attack scenarios.
Authored by Yusuf Korkmaz, Alvin Huseinovic, Halil Bisgin, Saša Mrdović, Suleyman Uludag
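A minimal sketch of the mirroring attack as described: a window of PMU measurements is copied, reversed, and written over the samples immediately following it. Representing the stream as a flat array of samples is our simplifying assumption.

```python
# Sketch of the mirroring attack: a window of PMU samples is copied, reversed,
# and written over the samples immediately following it, wiping the real data.
# Modeling the stream as a flat array of samples is a simplifying assumption.
import numpy as np

def mirror_attack(stream, start, width):
    """Overwrite stream[start+width : start+2*width] with the reversed copy
    of stream[start : start+width]."""
    out = stream.copy()
    out[start + width : start + 2 * width] = stream[start : start + width][::-1]
    return out

clean = np.arange(12, dtype=float)   # stand-in for a PMU measurement stream
print(mirror_attack(clean, start=3, width=3))
# [ 0.  1.  2.  3.  4.  5.  5.  4.  3.  9. 10. 11.]
```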
In today’s fast-paced world, cybercrime has time and again proved to be one of the biggest hindrances to national development. According to recent trends, victims’ data is most often breached by trapping them in a phishing attack. The security and privacy of users’ data has become a matter of tremendous concern. To address this problem and protect naive users’ data, a tool has been proposed that helps identify whether a Windows executable is malicious or not through static analysis. In addition, a comparative study has been performed by implementing different classification models, such as logistic regression, neural networks, and SVM. The static analysis approach takes as input parameters of the executables, properties obtained from PE section headers, and API calls. Comparing the different models identifies the best model to be used for static malware analysis.
Authored by Naman Aggarwal, Pradyuman Aggarwal, Rahul Gupta
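As a hedged sketch of the kind of static features involved (the abstract does not publish its exact feature set), the following uses the pefile library to pull PE section-header properties that are commonly fed to classifiers such as logistic regression or an SVM:

```python
# Hedged sketch using the pefile library: pull PE section-header properties of
# the kind commonly fed to static malware classifiers. The exact feature set
# used in the paper is not published; these fields are illustrative.
import pefile

def section_features(path):
    pe = pefile.PE(path)
    feats = []
    for s in pe.sections:
        feats.append({
            "name": s.Name.rstrip(b"\x00").decode(errors="replace"),
            "virtual_size": s.Misc_VirtualSize,
            "raw_size": s.SizeOfRawData,
            "entropy": s.get_entropy(),   # high entropy often indicates packing
        })
    return feats
```

The per-section dictionaries would then be flattened into a fixed-length vector before training the compared classifiers.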
Security is undoubtedly the most serious problem for Web applications, and SQL injection (SQLi) attacks are among the most damaging. Detecting blind SQL injection vulnerabilities is very important but, unfortunately, not fast enough. This is because time-based blind SQL injection lacks web page feedback, so a delay function must be set artificially, and success is judged by observing the response time of the page. Moreover, the brute-force and binary-search methods used in the injection require many web requests, so obtaining database information through blind SQL injection takes a long time. In this paper, a gated recurrent neural network-based blind SQL injection technique is proposed to generate the predicted characters in blind SQL injection. Using a deep-learning-based neural language model and character sequence prediction, the proposed method learns the regularities of common database information, so that it can predict the next possible character from the database information obtained so far and rank the candidates by probability. The training model is evaluated, and experiments on a test range compare the proposed method with sqlmap (currently the most advanced SQLi test automation tool). The experimental results show that the proposed method is more effective than sqlmap in time-based blind SQL injection: it obtains the database information of the target site with fewer requests and runs faster.
Authored by Jiahui Zheng, Junjian Li, Chao Li, Ran Li
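To illustrate the core primitive, the sketch below (PyTorch; the vocabulary, sizes, and prefix are our assumptions, and the model is untrained here) shows a character-level GRU that, given the characters recovered so far, ranks candidate next characters by probability so the injection loop can try the likeliest guesses first:

```python
# Minimal PyTorch sketch (untrained; vocabulary, sizes, and prefix are ours):
# a character-level GRU predicts the next character of database metadata and
# ranks candidates by probability for the blind-injection loop to try first.
import torch
import torch.nn as nn

class CharGRU(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):                  # x: (batch, seq_len) of char ids
        h, _ = self.gru(self.embed(x))
        return self.head(h[:, -1, :])      # logits for the next character

vocab = "abcdefghijklmnopqrstuvwxyz0123456789_"
model = CharGRU(len(vocab))
prefix = torch.tensor([[vocab.index(c) for c in "users_tab"]])
ranked = torch.softmax(model(prefix), dim=-1)[0].argsort(descending=True)
print([vocab[i] for i in ranked[:5]])      # try these characters first
```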
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. While hybrid approaches aim for the “best of both worlds,” the challenges of applying them in the real world are largely unknown. We conduct a data-driven analysis of the challenges, and resultant bugs, involved in writing reliable yet performant imperative DL code, studying 250 open-source projects, consisting of 19.7 MLOC, along with 470 and 446 manually examined code patches and bug reports, respectively. The results indicate that hybridization (i) is prone to API misuse, (ii) can result in performance degradation (the opposite of its intention), and (iii) has limited application due to execution mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
Authored by Tatiana Vélez, Raffi Khatchadourian, Mehdi Bagherzadeh, Anita Raja
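One concrete instance of the API-misuse category the study reports can be sketched with TensorFlow's tf.function, a common hybridization mechanism (the study covers hybridization generally; this specific pitfall is our illustrative choice): a Python side effect inside a traced function executes at trace time, not on every call.

```python
# Sketch of one documented pitfall when hybridizing eager code with
# TensorFlow's tf.function (our illustrative choice of mechanism): a Python
# side effect inside the traced function runs at trace time, not per call.
import tensorflow as tf

calls = []

@tf.function
def double(x):
    calls.append(1)      # Python side effect: executes only while tracing
    return x * 2

double(tf.constant(1.0))
double(tf.constant(2.0))
print(len(calls))        # 1, not 2 -- the body was traced once and reused
```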
On the Internet of Battlefield Things (IoBT), unmanned aerial vehicles (UAVs) provide significant operational advantages. However, the exploitation of a UAV by an untrustworthy entity might lead to security violations or even the destruction of crucial IoBT network functionality. The IoBT system faces substantial issues related to data tampering and fabrication through illegal access. This paper proposes an intelligent architecture called IoBT-Net, built on a convolutional neural network (CNN) and connected with blockchain technology, to identify and trace illicit UAVs in the IoBT system. Data stored on the blockchain ledger is protected from unauthorized access, data tampering, and invasions. Conveniently, this paper presents a low-complexity, robustly performing CNN, called LRCANet, to estimate the angle of arrival (AOA) for object localization. The proposed LRCANet is efficiently designed with two core modules, called GFPU and stacks, which are cleverly organized with regular and point convolution layers, a max-pool layer, and a ReLU layer associated with residual connectivity. Furthermore, the effectiveness of LRCANet is evaluated on various network and array configurations and RMSE, and compared with the accuracy and complexity of the existing state of the art. Additionally, the implementation of tailored drone-based consensus is evaluated in terms of three major classes and compared with other existing consensus approaches.
Authored by Mohtasin Golam, Rubina Akter, Revin Naufal, Van-Sang Doan, Jae-Min Lee, Dong-Seong Kim
Existing solutions for scheduling arbitrarily complex distributed applications on networks of computational nodes are insufficient for scenarios where the network topology is changing rapidly. New Internet of Things (IoT) domains like the Internet of Robotic Things (IoRT) and the Internet of Battlefield Things (IoBT) demand solutions that are robust and efficient in environments that experience constant and/or rapid change. In this paper, we demonstrate how recent advancements in machine learning (in particular, in graph convolutional neural networks) can be leveraged to solve the task scheduling problem with decent performance and in much less time than traditional algorithms.
Authored by Jared Coleman, Mehrdad Kiamari, Lillian Clark, Daniel D'Souza, Bhaskar Krishnamachari
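For readers unfamiliar with the building block, a toy sketch of the graph-convolution propagation rule such schedulers rely on, H' = ReLU(Â H W) with Â the degree-normalized adjacency including self-loops, follows; the graph and feature values are placeholders for network/task encodings.

```python
# Toy sketch of the GCN propagation rule H' = ReLU(A_hat @ H @ W), where
# A_hat is the degree-normalized adjacency with self-loops. The graph and
# feature values are placeholders for network/task encodings.
import numpy as np

def gcn_layer(A, H, W):
    A_loop = A + np.eye(len(A))                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_loop.sum(axis=1)))
    A_hat = d_inv_sqrt @ A_loop @ d_inv_sqrt       # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)          # ReLU

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # 3 compute nodes
H = np.random.randn(3, 4)                                # node features
W = np.random.randn(4, 2)                                # learned weights
print(gcn_layer(A, H, W).shape)                          # (3, 2)
```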
Malicious attacks, malware, and ransomware families pose critical security issues for cybersecurity and may cause catastrophic damage to computer systems, data centers, and web and mobile applications across various industries and businesses. Traditional anti-ransomware systems struggle to fight newly created, sophisticated attacks. Therefore, state-of-the-art techniques, including both traditional and neural-network-based architectures, can be immensely useful in developing innovative ransomware solutions. In this paper, we present a feature-selection-based framework adopting different machine learning algorithms, including neural-network-based architectures, to classify the security level for ransomware detection and prevention. We applied multiple machine learning algorithms, Decision Tree (DT), Random Forest (RF), Naïve Bayes (NB), and Logistic Regression (LR), as well as Neural Network (NN)-based classifiers, on a selected number of features for ransomware classification. We performed all experiments on one ransomware dataset to evaluate our proposed framework. The experimental results demonstrate that RF classifiers outperform the other methods in terms of accuracy, F-beta, and precision scores.
Authored by Mohammad Masum, Md Faruk, Hossain Shahriar, Kai Qian, Dan Lo, Muhaiminul Adnan
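A hedged sketch of such a feature-selection-plus-classifier pipeline in scikit-learn follows; the synthetic data, k, and hyperparameters are placeholders, not the paper's settings.

```python
# Hedged sketch of a feature-selection-plus-classifier pipeline in
# scikit-learn; the synthetic data, k, and hyperparameters are placeholders,
# not the paper's settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, n_features=40, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=15)),  # keep 15 best features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```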
Ransomware is an emerging threat that imposed a \$5 billion loss in 2017, rose to \$20 billion in 2021, and is predicted to hit \$256 billion in 2031. While initially targeting PC (client) platforms, ransomware recently leaped over to server-side databases, starting in January 2017 with the MongoDB Apocalypse attack and continuing in 2020 with 85,000 MySQL instances ransomed. Previous research developed countermeasures against client-side ransomware; however, the problem of server-side database ransomware has received little attention so far. In our work, we aim to bridge this gap and present DIMAQS (Dynamic Identification of Malicious Query Sequences), a novel anti-ransomware solution for databases. DIMAQS performs runtime monitoring of incoming queries and pattern matching using two classification approaches (Colored Petri Nets (CPNs) and Deep Neural Networks (DNNs)) for attack detection. Our system design exhibits several novel techniques, such as dynamic color generation, to efficiently detect malicious query sequences globally (i.e., without limiting detection to distinct user connections). Our proof-of-concept and ready-to-use implementation targets MySQL servers. The evaluation shows high efficiency without false negatives for both approaches and a false positive rate of nearly 0%. Both classifiers show very moderate performance overheads, below 6%. We will publish our datasets and implementation, allowing the community to reproduce our tests and results.
Authored by Christoph Sendner, Lukas Iffländer, Sebastian Schindler, Michael Jobst, Alexandra Dmitrienko, Samuel Kounev
A recommender system is a filtering application that uses personalized information from acquired big data to predict a user's preferences. Traditional recommender systems rely primarily on keywords or scene patterns; users' subjective emotion data are rarely utilized for preference prediction. Novel brain-computer interfaces hold incredible promise for intelligent applications, such as recommender systems, that rely on collected user data. This paper describes a deep learning method that uses Brain-Computer Interface (BCI)-based neural measures to predict a user's preference for short music videos. Our models are employed for both population-wide and individualized preference prediction. The recognition method is based on dynamic histogram measurement and a deep neural network for distinctive feature extraction and improved classification. Our models achieve 97.21%, 94.72%, 94.86%, and 96.34% classification accuracy on two-class, three-class, four-class, and nine-class individualized predictions, respectively. The findings provide evidence that a personalized recommender system built on an implicit BCI has the potential to succeed.
Authored by Sukun Li, Xiaoxing Liu
Recommenders are central to many applications today. The most effective recommendation schemes, such as those based on collaborative filtering (CF), exploit similarities between user profiles to make recommendations but potentially expose private data. Federated learning and decentralized learning systems address this by letting the data stay on users' machines to preserve privacy: each user performs training on local data, and only the model parameters are shared. However, sharing model parameters across the network may still yield privacy breaches. In this paper, we present Rex, the first enclave-based decentralized CF recommender. Rex exploits trusted execution environments (TEEs), such as Intel Software Guard Extensions (SGX), which provide shielded environments within the processor, to improve convergence while preserving privacy. First, Rex enables raw data sharing, which ultimately speeds up convergence and reduces network load. Second, Rex fully preserves privacy. We analyze the impact of raw data sharing in both deep neural network (DNN) and matrix factorization (MF) recommenders and showcase the benefits of trusted environments in a full-fledged implementation of Rex. Our experimental results demonstrate that, through raw data sharing, Rex decreases training time by 18.3x and network load by two orders of magnitude over standard decentralized approaches that share only parameters, while fully protecting privacy by leveraging trustworthy hardware enclaves with very little overhead.
Authored by Akash Dhasade, Nevena Dresevic, Anne-Marie Kermarrec, Rafael Pires
Adversarial attacks have recently been proposed to scrutinize the security of deep neural networks. Most black-box adversarial attacks, which have partial access to the target through queries, are target-specific; e.g., they require a well-trained surrogate that accurately mimics a given target. In contrast, target-agnostic black-box attacks are developed to attack any target; e.g., they learn a generalized surrogate that can adapt to any target via fine-tuning on samples queried from the target. Despite their success, current state-of-the-art target-agnostic attacks require tremendous numbers of fine-tuning steps, and consequently an immense number of queries to the target, to generate successful attacks. The high query complexity of these attacks makes them easily detectable and thus defendable. We propose a novel query-efficient target-agnostic attack that trains a generalized surrogate network to output the adversarial directions w.r.t. the inputs and equips it with an effective fine-tuning strategy that only fine-tunes the surrogate when it fails to provide useful directions for generating attacks. In particular, we show that to effectively adapt to any target and generate successful attacks, it is sufficient to fine-tune the surrogate with informative samples that help it escape the failure mode with additional information on the target's local behavior. Extensive experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that the proposed target-agnostic approach can generate highly successful attacks for any target network with very few fine-tuning steps, and thus a significantly smaller number of queries (reduced by several orders of magnitude), compared to the state-of-the-art baselines.
Authored by Raha Moraffah, Huan Liu
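The paper's surrogate is trained to output adversarial directions directly; as a hedged illustration of the underlying primitive, the sketch below derives a direction from a stand-in surrogate's input gradient (FGSM-style), with the model, step size, and input shape as assumptions.

```python
# Hedged illustration of the underlying primitive (the paper's surrogate
# outputs directions directly): derive an adversarial direction from a
# stand-in surrogate's input gradient, FGSM-style. Model, step size, and
# input shape are assumptions.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def adversarial_direction(x, label):
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(surrogate(x), label).backward()
    return x.grad.sign()                  # direction suggested by the surrogate

x = torch.rand(1, 3, 32, 32)              # a CIFAR-10-sized input
y = torch.tensor([3])
x_adv = (x + (8 / 255) * adversarial_direction(x, y)).clamp(0, 1)
print((x_adv - x).abs().max())            # perturbation bounded by 8/255
```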
When storing face biometric samples in accordance with ISO/IEC 19794 as JPEG2000-encoded images, it is necessary to encrypt them for the sake of users’ privacy. The literature suggests selective encryption of JPEG2000 images as a fast and efficient method for encryption; the trade-off is that some information is left in plaintext, which could be used by an attacker in case the encrypted biometric samples are leaked. In this work, we attempt to utilize a convolutional neural network to perform cryptanalysis of the encryption scheme. That is, we want to assess whether any information left in plaintext in the selectively encrypted face images can be used to identify the person. The chosen approach is to train CNNs for biometric face recognition not only with plaintext face samples but additionally to conduct refinement training with partially encrypted data. If such a system can successfully utilize encrypted face samples for biometric matching, we can show that the information left in encrypted biometric face samples is actually usable for biometric recognition. The method works, and we can show that a supposedly secure biometric sample still contains identifying information on average over the whole database.
Authored by Heinz Hofbauer, Yoanna Martínez-Díaz, Luis Luevano, Heydi Méndez-Vázquez, Andreas Uhl
Blind identification of channel codes is crucial in intelligent communication and non-cooperative signal processing, and it plays a significant role in wireless physical-layer security, information interception, and information confrontation. Previous research shows high computational complexity due to manual feature extraction; in addition, problems of poor accuracy and poor robustness at low signal-to-noise ratio (SNR) remain to be resolved. To address these difficulties, this paper proposes a novel recognizer based on a deep residual shrinkage network (DRSN) that uses deep learning to blindly distinguish the type and parameters of channel codes without any prior knowledge or channel state; furthermore, feature extraction from codewords by the neural network avoids intricate calculations. We evaluated the performance of this recognizer on AWGN, single-path fading, and multi-path fading channels, and the experimental results show that the proposed method works well. It achieves over 85% recognition accuracy for channel codes in AWGN channels when the SNR is not lower than 4 dB and provides an improvement of more than 5% in recognition accuracy over previous research, which demonstrates the validity of the proposed method.
Authored by Haifeng Peng, Chunjie Cao, Yang Sun, Haoran Li, Xiuhua Wen
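For concreteness, here is a sketch of the soft-thresholding "shrinkage" module at the heart of a DRSN, which shrinks features toward zero by a learned per-channel threshold and thereby suppresses noise at low SNR; the layer sizes and input shape are our assumptions.

```python
# Sketch of the soft-thresholding "shrinkage" module at the heart of a DRSN:
# features are shrunk toward zero by a learned per-channel threshold,
# suppressing noise at low SNR. Layer sizes and input shape are assumptions.
import torch
import torch.nn as nn

class Shrinkage(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, length)
        absmean = x.abs().mean(dim=2)        # per-channel average magnitude
        tau = (absmean * self.fc(absmean)).unsqueeze(2)  # learned threshold
        return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

x = torch.randn(8, 16, 64)                   # e.g. features of noisy codewords
print(Shrinkage(16)(x).shape)                # torch.Size([8, 16, 64])
```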
A botnet is a new type of attack method developed and integrated on the basis of traditional malicious code such as network worms and backdoor tools, and it is extremely threatening. This work combines deep learning and neural network methods to detect and classify botnets. The approach does not rely on any prior features, and the final multi-class classification accuracy exceeds 98.7%, a significant result.
Authored by Xiaoran Yang, Zhen Guo, Zetian Mai
The robustness of supply chain networks (SCNs) against sequential topology attacks is significant for maintaining firm relationships and activities. Although SCNs have experienced many emergencies demonstrating that mixed failures exacerbate the impact of cascading failures, existing studies of sequential attacks rarely consider the influence of mixed failure modes on cascading failures. In this paper, a reinforcement learning (RL)-based sequential attack strategy is applied to SCNs with cascading failures that consider mixed failure modes. To solve the large state space search problem in SCNs, a deep Q-network (DQN) optimization framework combining deep neural networks (DNNs) and RL is proposed to extract features of state space. Then, it is compared with the traditional random-based, degree-based, and load-based sequential attack strategies. Simulation results on Barabasi-Albert (BA), Erdos-Renyi (ER), and Watts-Strogatz (WS) networks show that the proposed RL-based sequential attack strategy outperforms three existing sequential attack strategies. It can trigger cascading failures with greater influence. This work provides insights for effectively reducing failure propagation and improving the robustness of SCNs.
Authored by Lei Zhang, Jian Zhou, Yizhong Ma, Lijuan Shen
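A hedged sketch of the attack-step selection such a DQN framework performs: a small Q-network scores each surviving node from state features, and the attacker removes the highest-scoring node (epsilon-greedy during training). The state encoding and network sizes are placeholders, not the paper's design.

```python
# Hedged sketch of DQN-style attack-step selection: a small Q-network scores
# each surviving node from state features, and the attacker removes the
# highest-scoring node (epsilon-greedy while training). State encoding and
# network sizes are placeholders, not the paper's design.
import numpy as np
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def choose_target(node_feats, alive, eps=0.1):
    """node_feats: (n_nodes, 8) features; alive: boolean survival mask."""
    if np.random.rand() < eps:                        # explore
        return int(np.random.choice(np.flatnonzero(alive)))
    with torch.no_grad():
        q = q_net(torch.tensor(node_feats, dtype=torch.float32)).squeeze(1)
    q[~torch.tensor(alive)] = float("-inf")           # skip failed nodes
    return int(q.argmax())                            # exploit

feats = np.random.randn(10, 8)
alive = np.ones(10, dtype=bool)
print(choose_target(feats, alive))
```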
Cognitive radio (CR) networks are an emerging and promising technology to improve the utilization of vacant bands. In CR networks, security is a very noteworthy domain. Two threatening attacks are primary user emulation (PUE) and spectrum sensing data falsification (SSDF). A PUE attacker mimics the primary user's signals to deceive legitimate secondary users, while an SSDF attacker falsifies its observations to misguide the fusion center into making a wrong decision about the status of the primary user. In this paper, we propose a scheme based on clustering the secondary users to counter SSDF attacks. Our focus is on detecting and classifying each cluster as reliable or unreliable. We introduce two different methods to achieve this goal, using an artificial neural network (ANN) in both, and, in the second, five additional classifiers: support vector machine (SVM), random forest (RF), K-nearest neighbors (KNN), logistic regression (LR), and decision tree (DT). Moreover, we consider deterministic and stochastic attack strategies with white Gaussian noise (WGN). Results demonstrate that our method outperforms a recently suggested scheme.
Authored by Nazanin Parhizgar, Ali Jamshidi, Peyman Setoodeh
Vulnerability discovery is an important field of computer security research and development today. Most current vulnerability discovery methods require large-scale manual auditing, and the code parsing process is cumbersome and time-consuming, which reduces their effectiveness. Moreover, given the inherent uncertainty of vulnerability discovery, the most basic tool design principle is to assist security analysts rather than completely replace them. The purpose of this paper is to study a source code vulnerability discovery method based on graph neural networks. This paper analyzes the three stages of the method, data preparation, source code vulnerability mining, and security assurance, and also analyzes suspicious and particular cases in the experimental results. The empirical analysis shows that traditional source code vulnerability mining methods become more concise and convenient when graph neural network technology is used, and a survey found that more than 82% of respondents felt that vulnerability mining methods designed with graph neural networks achieve higher design efficiency.
Authored by Zhenghong Jiang
Cloud security has become a serious challenge due to the increasing number of attacks day by day. An Intrusion Detection System (IDS) requires an efficient security model to improve security in the cloud. This paper proposes a game-theory-based model, named Game Theory Cloud Security Deep Neural Network (GT-CSDNN), for security in the cloud. The proposed model works with a Deep Neural Network (DNN) to classify attack and normal data. The performance of the proposed model is evaluated on the CICIDS-2018 dataset. The dataset is normalized, and optimal points for normal and attack data are evaluated based on the Improved Whale Algorithm (IWA). Simulation results show that the proposed model exhibits improved performance compared with existing techniques in terms of accuracy, precision, F-score, area under the curve, False Positive Rate (FPR), and detection rate.
Authored by Ashima Jain, Khushboo Tripathi, Aman Jatain, Manju Chaudhary
Information security construction is a social issue, and the most urgent task is to do an excellent job of information risk assessment. Bayesian neural networks currently play a vital role in enterprise information security risk assessment; they overcome the subjective defects of traditional assessment and operate efficiently. The risk quantification method based on fuzzy theory and a Bayesian-regularized BP neural network mainly uses fuzzy theory to process the original data and uses the processed data as the input of the neural network, which can effectively reduce the ambiguity of linguistic descriptions. At the same time, special neural network training is carried out to address the network's tendency to fall into local optima. Finally, the risk is verified and quantified through experimental simulation. This paper mainly discusses enterprise information security risk assessment based on a Bayesian neural network, hoping to provide strong technical support for enterprises and organizations carrying out risk rectification plans. The above method thus provides a new approach to information security risk assessment.
Authored by Zijie Deng, Guocong Feng, Qingshui Huang, Hong Zou, Jiafa Zhang
The emergence of smart cars has revolutionized the automotive industry. Today's vehicles are equipped with different types of electronic control units (ECUs) that enable autonomous functionalities like self-driving, self-parking, lane keeping, and collision avoidance. The ECUs are connected to each other through an in-vehicle network, named Controller Area Network. In this talk, we will present the different cyber attacks that target autonomous vehicles and explain how an intrusion detection system (IDS) using machine learning can play a role in securing the Controller Area Network. We will also discuss the main research contributions for the security of autonomous vehicles. Specifically, we will describe our IDS, named Histogram-based Intrusion Detection and Filtering framework. Next, we will talk about the machine learning explainability issue that limits the acceptability of machine learning in autonomous vehicles, and how it can be addressed using our novel intrusion detection system based on rule extraction methods from Deep Neural Networks.
Authored by Abdelwahid Derhab
Intrusion detection for the Controller Area Network (CAN) protocol requires modern methods in order to compete with other electrical architectures. Fingerprint Intrusion Detection Systems (IDS) provide a promising new approach to solving this problem: by characterizing network traffic from known ECUs, hazardous messages can be discriminated. In this article, a modified version of Fingerprint IDS is employed, utilizing both step-response and spectral characterization of network traffic via neural network training. With the addition of feature-set reduction and hyperparameter tuning, this method achieves a 99.4% detection rate for trusted ECU traffic.
Authored by Kunaal Verma, Mansi Girdhar, Azeem Hafeez, Selim Awad
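As a hedged illustration of the spectral-characterization step (sampling rate, signal model, and feature count are our assumptions), the sketch below reduces a sampled CAN bus edge to its strongest FFT magnitudes, a per-ECU fingerprint vector that a neural network classifier can consume:

```python
# Hedged sketch of the spectral-characterization step (sampling rate, signal
# model, and feature count are assumptions): reduce a sampled CAN bus edge to
# its strongest FFT magnitudes, a per-ECU fingerprint feature vector.
import numpy as np

def spectral_features(signal, n_features=16):
    """Return the magnitudes of the n_features strongest frequency bins."""
    mag = np.abs(np.fft.rfft(signal - signal.mean()))
    return np.sort(mag)[::-1][:n_features]

t = np.linspace(0, 1e-3, 1000)                # 1 ms of a simulated bus edge
sig = 2.5 * (1 - np.exp(-t / 5e-5)) + 0.01 * np.random.randn(t.size)
print(spectral_features(sig).shape)           # (16,) -> classifier input
```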
The Controller Area Network (CAN) is the most extensively used in-vehicle network. It enables communication among the many electronic control units (ECUs) found in most modern vehicles. CAN is the de facto in-vehicle network standard due to its error-avoidance techniques and similar features, but it is vulnerable to various attacks. In this research, we propose a CAN bus intrusion detection system (IDS) based on convolutional neural networks (CNNs). U-CAN is a segmentation model trained on CAN traffic data preprocessed using Hamming distance and a saliency detection algorithm. The model is trained and tested on publicly available datasets of raw and reverse-engineered CAN frames. With an F1 score of 0.997, U-CAN can detect DoS, fuzzing, spoofing-gear, and spoofing-RPM attacks in the publicly available raw CAN frames. The model trained on reverse-engineered CAN signals containing plateau attacks also achieves a true-positive rate and false-positive rate of 0.971 and 0.998, respectively.
Authored by Araya Desta, Shuji Ohira, Ismail Arai, Kazutoshi Fujikawa
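To illustrate the Hamming-distance preprocessing mentioned above (the toy payloads and per-ID pairing are our assumptions), the sketch below counts the bits that differ between consecutive 8-byte CAN payloads, exposing exactly which bits change:

```python
# Sketch of the Hamming-distance preprocessing: count the bits that differ
# between consecutive 8-byte payloads of the same CAN ID, exposing exactly
# which bits change. The payload values here are toy examples.
def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

prev = bytes.fromhex("0011223344556677")
curr = bytes.fromhex("001122334455667F")
print(hamming(prev, curr))   # 1 -> a single payload bit flipped
```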
With advances in deep learning, social networks are commonly modeled by heterogeneous graph neural networks (HGNNs). Compared to models of homogeneous data, HGNNs absorb many aspects of information about individuals in the training stage, which means more information is covered in the learned result, especially sensitive information. However, privacy-preserving methods for homogeneous graphs preserve only a single type of node attribute or relationship and cannot work effectively on heterogeneous graphs due to their complexity. To address this issue, we propose HeteDP, a novel privacy-preserving method for heterogeneous graph neural networks based on a differential privacy mechanism, which provides a double guarantee on graph features and topology. In particular, we first define a new attack scheme to reveal privacy leakage in heterogeneous graphs. We then design a two-stage pipeline framework, consisting of a privacy-preserving feature encoder and a heterogeneous link reconstructor with gradient perturbation based on differential privacy, to tolerate data diversity and defend against the attack. To better control the noise and promote model performance, we utilize a bi-level optimization pattern to allocate a suitable privacy budget to the two modules. Our experiments on four public benchmarks show that HeteDP resists heterogeneous graph privacy leakage with admirable model generalization.
Authored by Yuecen Wei, Xingcheng Fu, Qingyun Sun, Hao Peng, Jia Wu, Jinyan Wang, Xianxian Li
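As a hedged sketch of the gradient-perturbation building block HeteDP's differential privacy rests on (this is the standard Gaussian mechanism from DP-SGD, not HeteDP's exact procedure; the clip norm and noise multiplier are placeholders set by the privacy budget):

```python
# Hedged sketch of the gradient-perturbation building block (the standard
# Gaussian mechanism from DP-SGD, not HeteDP's exact procedure): clip each
# gradient's L2 norm, then add calibrated Gaussian noise. The clip norm and
# noise multiplier are placeholders chosen from the privacy budget.
import torch

def dp_perturb(grads, clip_norm=1.0, noise_multiplier=1.1):
    noisy = []
    for g in grads:
        scale = torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
        g = g * scale                                 # clip: ||g||_2 <= clip_norm
        g = g + torch.randn_like(g) * (noise_multiplier * clip_norm)
        noisy.append(g)
    return noisy

grads = [torch.randn(4, 4), torch.randn(4)]           # toy parameter gradients
print([tuple(g.shape) for g in dp_perturb(grads)])    # [(4, 4), (4,)]
```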