Modern connected vehicles are equipped with a large number of sensors, which enable a wide range of services that can improve overall traffic safety and efficiency. However, remote access to connected vehicles also introduces new security issues affecting both inter- and intra-vehicle communications. In fact, existing intra-vehicle communication systems, such as the Controller Area Network (CAN), lack security features such as encryption and secure authentication for Electronic Control Units (ECUs). Instead, Original Equipment Manufacturers (OEMs) seek security through obscurity by keeping secret the proprietary format with which they encode the information. Recently, it has been shown that the reuse of CAN frame IDs can be exploited to perform CAN bus reverse engineering without physical access to the vehicle, thus raising further security concerns in a connected environment. This work investigates whether anonymizing the frames of each newly released vehicle is sufficient to prevent CAN bus reverse engineering based on frame ID matching. The results show that, by adopting Machine Learning techniques, anonymized CAN frames can still be fingerprinted and identified in an unknown vehicle with an accuracy of up to 80%.
Authored by Alessio Buscemi, Ion Turcanu, German Castignani, Thomas Engel
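A minimal sketch of the kind of frame fingerprinting the abstract describes: summary statistics of decoded signals, which survive ID anonymization, are used to train a classifier that recognizes the same signals in another vehicle. The feature set, the synthetic signal generator, and the Random Forest choice are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def signal_features(v):
    """Summary statistics that survive anonymization of the frame ID."""
    v = np.asarray(v, dtype=float)
    return [v.mean(), v.std(), v.min(), v.max(),
            np.mean(np.abs(np.diff(v)))]  # typical step size

def synth_signal(kind):
    """Toy generator standing in for decoded CAN signals."""
    t = np.linspace(0, 10, 500)
    if kind == "speed":   # slowly varying, bounded
        return 60 + 20 * np.sin(0.3 * t) + rng.normal(0, 1, t.size)
    else:                 # "rpm": faster oscillation, larger range
        return 2000 + 800 * np.sin(2.0 * t) + rng.normal(0, 50, t.size)

labels = ["speed", "rpm"] * 50
X = [signal_features(synth_signal(k)) for k in labels]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# "Unknown vehicle": same physics, different noise draw -- no frame IDs used.
test = [signal_features(synth_signal(k)) for k in ["rpm", "speed"]]
print(clf.predict(test))  # expected: ['rpm' 'speed']
```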
Intrusion detection for the Controller Area Network (CAN) protocol requires modern methods to keep pace with other electrical architectures. Fingerprint Intrusion Detection Systems (IDS) offer a promising new approach to this problem: by characterizing network traffic from known ECUs, hazardous messages can be discriminated. In this article, a modified version of Fingerprint IDS is employed, utilizing both step-response and spectral characterization of network traffic via neural network training. With the addition of feature-set reduction and hyperparameter tuning, this method achieves a 99.4% detection rate for trusted ECU traffic.
Authored by Kunaal Verma, Mansi Girdhar, Azeem Hafeez, Selim Awad
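A hedged sketch of the spectral-characterization idea: each (synthetic) transceiver waveform is reduced to coarse FFT power bins and fed to a small neural network that identifies the transmitting ECU. The waveform model, bin count, and network size are assumptions for illustration; the paper's step-response features and tuned hyperparameters are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def spectral_features(signal, n_bins=8):
    """Coarse power spectrum of a sampled bus waveform."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return [b.mean() for b in np.array_split(spectrum, n_bins)]

def ecu_waveform(ecu_id, n=256):
    """Toy stand-in: each ECU transceiver has a distinct dominant frequency."""
    t = np.arange(n)
    freq = 0.05 + 0.03 * ecu_id
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, n)

X = [spectral_features(ecu_waveform(ecu)) for ecu in [0, 1, 2] * 100]
y = [0, 1, 2] * 100

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
clf.fit(X, y)
print(clf.predict([spectral_features(ecu_waveform(2))]))  # expected: [2]
```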
The exponential growth of IoT-type systems has led to a reconsideration of the field of database management systems in terms of storing and handling high-volume data. Recently, many real-time Database Management Systems (DBMS) have been developed to address issues such as security, managing concurrent access to stored data, and optimizing data query performance. This paper studies methods that reduce the temporal validity range for common DBMS. The primary purpose of IoT edge devices is to generate data and make it available for machine learning or statistical algorithms; this is achieved inside the Knowledge Discovery in Databases process. In order to visualize and obtain critical Data Mining results, all the device-generated data must be made available as fast as possible for selection, preprocessing, and data transformation. In this research, we investigate whether IoT edge devices can be used with properly configured common DBMS in order to access data quickly, instead of working with real-time DBMS. We study what kinds of transactions are needed in large IoT ecosystems and analyze techniques for controlling concurrent access to common resources (stored data). For this purpose, we built a series of applications that simulate concurrent write operations to a common DBMS in order to investigate the performance of concurrent access to database resources. Another procedure tested with the developed applications is increasing the availability of data for users and data mining applications, which is achieved by using field indexing.
Authored by Valentin Pupezescu, Marilena-Cătălina Pupezescu, Lucian-Andrei Perișoară
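A small illustration, using SQLite as a stand-in for a "common DBMS", of the two experiments the abstract mentions: concurrent write transactions from simulated edge devices, and field indexing to speed up the selection step. Table and column names are invented for the example.

```python
import sqlite3, threading, time, tempfile, os

db_path = os.path.join(tempfile.mkdtemp(), "iot.db")
with sqlite3.connect(db_path) as con:
    con.execute("CREATE TABLE readings (device_id INT, ts REAL, value REAL)")
    # Field indexing: speeds up per-device selection for data mining.
    con.execute("CREATE INDEX idx_device ON readings (device_id)")

def writer(device_id, n=200):
    """Each simulated edge device uses its own connection to the shared DB."""
    con = sqlite3.connect(db_path, timeout=10)  # wait on lock contention
    for _ in range(n):
        con.execute("INSERT INTO readings VALUES (?, ?, ?)",
                    (device_id, time.time(), device_id * 1.0))
        con.commit()  # one transaction per insert, as in the simulation
    con.close()

threads = [threading.Thread(target=writer, args=(d,)) for d in range(8)]
start = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
print(f"8 concurrent writers finished in {time.perf_counter() - start:.2f}s")

with sqlite3.connect(db_path) as con:
    # The index on device_id makes this selection fast.
    print(con.execute("SELECT COUNT(*) FROM readings WHERE device_id = 3").fetchone())
```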
False data injection cyber-attack detection models for smart grid operation have been extensively explored recently, considering both analytical physics-based and data-driven solutions. A hybrid data-driven, physics-based model framework for monitoring the smart grid was recently developed; however, the framework has not yet been implemented in a real-time environment. In this paper, the framework of the hybrid model is developed within a real-time simulation environment. The OPAL-RT real-time simulator is used to enable Hardware-in-the-Loop testing of the framework. The IEEE 9-bus system is considered as the test grid for gaining insight. The process of building the framework and the challenges faced during development are presented. The performance of the framework is investigated under various false data injection attacks.
Authored by Valeria Vega-Martinez, Austin Cooper, Brandon Vera, Nader Aljohani, Arturo Bretas
One of the major concerns in real-time monitoring systems in a smart grid is the cyber-security threat. The false data injection attack is emerging as a major form of attack on Cyber-Physical Systems (CPS). A False Data Injection Attack (FDIA) can lead to severe issues such as insufficient generation, physical damage to the grid, power flow imbalance, and economic loss. Recent advancements in machine learning algorithms have helped overcome the drawbacks of classical detection techniques for such attacks. In this article, we propose to use Autoencoders (AEs) as a novel machine learning approach to detect FDI attacks without any major modifications. The performance of the method is validated through analysis of the simulation results. The algorithm achieves high accuracy owing to its unsupervised nature.
Authored by Amritha G, Vishakh Kh, Jishnu C V, Manjula Nair
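A minimal sketch of autoencoder-based FDIA detection as described above: the model is trained only on normal measurements, and samples whose reconstruction error exceeds a threshold are flagged. The measurement model, the tiny sklearn autoencoder, and the 99th-percentile threshold are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Toy measurement vectors (e.g., bus power flows) under normal operation.
normal = rng.normal(1.0, 0.05, size=(1000, 10))

# Autoencoder: the network is trained to reproduce its own input through
# a narrow bottleneck, using normal data only (unsupervised).
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=3000, random_state=2)
ae.fit(normal, normal)

def reconstruction_error(x):
    return np.mean((ae.predict(x.reshape(1, -1)) - x) ** 2)

# Detection threshold taken from the normal-data error distribution.
tau = np.percentile([reconstruction_error(x) for x in normal[:200]], 99)

attacked = normal[0].copy()
attacked[3] += 0.5  # injected false data on one measurement
print(reconstruction_error(normal[1]) > tau)  # expected: False
print(reconstruction_error(attacked) > tau)   # expected: True
```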
The Internet of Things is a developing technology that converts physical objects into virtual objects connected to the internet using wired and wireless network architectures. The use of cross-layer techniques in the Internet of Things is primarily driven by the high heterogeneity of hardware and software capabilities. Although the traditional layered architecture has been effective for a while, cross-layer protocols have the potential to greatly improve a number of wireless network characteristics, including bandwidth and energy usage. Security is also one of the main concerns with the Internet of Things, and machine learning (ML) techniques are considered the most cutting-edge and viable approach, which has led to a plethora of new research directions for tackling IoT's growing security issues. In the proposed study, a number of cross-layer approaches based on machine learning techniques that have been proposed in the past to address issues and challenges brought on by the heterogeneity of IoT are examined in depth. Additionally, the main issues are discussed and analyzed, including those related to scalability, interoperability, security, privacy, mobility, and energy utilization.
Authored by K. Saranya, Dr. A. Valarmathi
In the deep nano-scale regime, reliability has emerged as one of the major design issues for high-density integrated systems. Among others, key reliability-related issues are soft errors, high temperature, and aging effects (e.g., NBTI, Negative Bias Temperature Instability), which jeopardize correct application execution. A tremendous amount of research effort has been invested at individual system layers. Moreover, in the era of growing cyber-security threats, modern computing systems experience a wide range of security threats at different layers of the software and hardware stacks. However, considering the escalating reliability and security costs, designing a highly reliable and secure system requires engaging multiple system layers (i.e., both hardware and software) to achieve cost-effective robustness. This talk provides an overview of important reliability issues, prominent state-of-the-art techniques, and various hardware-software collaborative reliability modeling and optimization techniques developed at our lab, with a focus on recent work on ML-based reliability techniques. Afterwards, this talk discusses how advanced ML techniques can be leveraged to devise new types of hardware security attacks, for instance on logic-locked circuits. Towards the end, I will also give a quick pitch on the reliability and security challenges of embedded machine learning (ML) on resource/energy-constrained devices subjected to unpredictable and harsh scenarios.
Authored by Muhammad Shafique
In the Smart Grid paradigm, the operation of this critical infrastructure is increasingly exposed to cyber-threats due to the increased dependency on communication networks. An adversary can launch an attack on power grid operation through False Data Injection (FDI) into system measurements and/or through attacks on the communication network, such as flooding the communication channels with unnecessary data or intercepting messages. A cross-layered strategy that combines power grid data, communication network monitoring, and Machine Learning-based processing is a promising solution for detecting cyber-threats. In this paper, an implementation of an integrated cross-layer framework is presented. The advantage of such a framework is the augmentation of valuable data, which enhances the detection of anomalies in power grid operation. The IEEE 118-bus system is built in Simulink to provide a power grid testing environment, and communication network data is emulated using SimComponents. The performance of the framework is investigated under various FDI and communication attacks.
Authored by Nader Aljohani, Dennis Agnew, Keerthiraj Nagaraj, Sharon Boamah, Reynold Mathieu, Arturo Bretas, Janise McNair, Alina Zare
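A toy sketch of the cross-layer idea: power-layer and network-layer features are concatenated so a single classifier can catch both FDI and communication attacks, where a single-layer model misses half of them. The synthetic features, attack signatures, and gradient-boosting classifier are assumptions; the paper's Simulink/SimComponents data are not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 1000

# Toy stand-ins for the two telemetry layers: bus measurements (power grid)
# and link statistics (communication network).
grid = rng.normal(1.0, 0.05, size=(n, 6))
net = rng.normal(100, 10, size=(n, 4))
y = (rng.random(n) < 0.3).astype(int)  # 1 = attack window

# Half the attacks are FDI (visible only in grid data), half are flooding
# (visible only in network data) -- the case cross-layer fusion is for.
fdi = (y == 1) & (rng.random(n) < 0.5)
flood = (y == 1) & ~fdi
grid[fdi, 0] += 0.3
net[flood, 0] += 40

X = np.hstack([grid, net])  # the cross-layer data augmentation
cross = GradientBoostingClassifier(random_state=3).fit(X[:800], y[:800])
single = GradientBoostingClassifier(random_state=3).fit(grid[:800], y[:800])
print(f"cross-layer accuracy: {cross.score(X[800:], y[800:]):.3f}")
print(f"grid-only accuracy:   {single.score(grid[800:], y[800:]):.3f}")
```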
The value and size of information exchanged through dark-web pages are remarkable. Recently, many studies have shown the value of using machine-learning methods to extract security-related, useful knowledge from those dark-web pages. In this scope, our goals in this research focus on evaluating the best prediction models while analyzing traffic-level data coming from the dark web. Results and analysis showed that feature selection played an important role in identifying the best models: sometimes the right combination of features would increase a model's accuracy. For some feature set and classifier combinations, Src Port and Dst Port both proved to be important features; when available, they were always selected over most other features, and when absent, many other features were selected to compensate for the information they provided. The Protocol feature was never selected, regardless of whether Src Port and Dst Port were available.
Authored by Ahmad Al-Omari, Andrew Allhusen, Abdullah Wahbeh, Mohammad Al-Ramahi, Izzat Alsmadi
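A small, hedged illustration of the feature-selection finding: on a synthetic traffic sample constructed so a port feature carries the class signal, mutual-information ranking selects the port features and ignores Protocol, mirroring the pattern reported above. The data generator and scoring function are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(4)
n = 2000

# Toy traffic-level records. By construction, the destination port carries
# the class signal, as the Src/Dst Port finding above suggests it can.
dst_port = rng.choice([80, 443, 9050], n)      # 9050: Tor-style service
src_port = rng.integers(1024, 65535, n)
protocol = rng.choice([6, 17], n)              # TCP/UDP, uninformative here
pkt_len = rng.normal(500, 100, n)
y = (dst_port == 9050).astype(int)

X = np.column_stack([src_port, dst_port, protocol, pkt_len])
names = ["SrcPort", "DstPort", "Protocol", "PktLen"]

scores = mutual_info_classif(X, y, discrete_features=[True, True, True, False],
                             random_state=4)
for name, s in sorted(zip(names, scores), key=lambda p: -p[1]):
    print(f"{name:>8}: {s:.3f}")

selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
print("selected:", [m for m, keep in zip(names, selector.get_support()) if keep])
```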
Cyber threats can cause severe damage to computing infrastructure and systems, as well as data breaches that make sensitive data vulnerable to attackers and adversaries. It is therefore imperative to discover those threats and stop them before bad actors penetrate information systems. Threat-hunting algorithms based on machine learning have shown great advantages over classical methods. Reinforcement learning models are becoming more accurate at identifying not only signature-based but also behavior-based threats, and quantum mechanics brings a new dimension, improving classification speed with exponential advantage. The accuracy of AI/ML algorithms can be affected by many factors, ranging from the algorithm and the data to prejudicial or even intentional bias; as a result, AI/ML applications need to be unbiased and trustworthy. In this research, we developed a machine learning-based cyber threat detection and assessment tool. It uses a two-stage analysis method (both unsupervised and supervised learning) on 822,226 log records from a web server on the AWS cloud. The results show that the algorithm can identify threats with high confidence.
Authored by Shuangbao Wang, Md Arafin, Onyema Osuagwu, Ketchiozo Wandji
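One plausible reading of the two-stage (unsupervised + supervised) method, sketched on synthetic log features: an unsupervised detector produces pseudo-labels, and a supervised classifier is then trained on them for fast, repeatable scoring of future log batches. The feature set, Isolation Forest, and Random Forest choices are assumptions, not the authors' exact tool.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Toy per-client features from web-server logs, e.g. (requests/min,
# error ratio, distinct URLs). Real AWS logs would need parsing first.
normal = rng.normal([20, 0.02, 15], [5, 0.01, 4], size=(5000, 3))
attacks = rng.normal([300, 0.60, 200], [50, 0.10, 30], size=(100, 3))
X = StandardScaler().fit_transform(np.vstack([normal, attacks]))
y_true = np.array([0] * 5000 + [1] * 100)

# Stage 1 (unsupervised): isolation-based anomaly scores -> pseudo-labels.
iso = IsolationForest(contamination=0.02, random_state=5).fit(X)
pseudo = (iso.predict(X) == -1).astype(int)

# Stage 2 (supervised): a classifier trained on the pseudo-labels gives
# consistent scoring of new log batches.
clf = RandomForestClassifier(random_state=5).fit(X, pseudo)
pred = clf.predict(X)
print(f"agreement with ground truth: {(pred == y_true).mean():.3f}")
```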
Steady advancement in Artificial Intelligence (AI) development over recent years has caused AI systems to become more readily adopted across industry and military use-cases globally. As powerful as these algorithms are, there are still gaping questions regarding their security and reliability. Beyond adversarial machine learning, software supply chain vulnerabilities and model backdoor injection exploits are emerging as potential threats to the physical safety of AI-reliant CPS such as autonomous vehicles. In this work-in-progress paper, we introduce the concept of AI supply chain vulnerabilities together with a proof-of-concept autonomous exploitation framework. We investigate the viability of algorithm backdoors and third-party software library dependencies as components of modern AI attack kill chains. We leverage an autonomous vehicle case study to demonstrate the applicability of our offensive methodologies within a realistic AI CPS operating environment.
Authored by Daniel Williams, Chelece Clark, Rachel McGahan, Bradley Potteiger, Daniel Cohen, Patrick Musau
Missing values are an unavoidable problem for classification tasks of machine learning in medical data. With the rapid development of medical systems, large-scale medical data is increasing. Missing values increase the difficulty of mining hidden but useful information in these medical datasets. Deletion and imputation methods are the most popular methods for dealing with missing values. Existing studies have not compared and discussed the deletion and imputation methods of missing values under the row missing rate and the total missing rate, and they rarely use experimental datasets that are mixed-type and large-scale. In this work, medical datasets of various sizes and mixed types are used. At the same time, performance differences between deletion and imputation methods are compared under the MCAR (Missing Completely At Random) mechanism in the baseline task, using LR (Linear Regression) and SVM (Support Vector Machine) classifiers for classification with the same row and total missing rates. Experimental results show that, under the MCAR missing mechanism, the performance of the two types of processing methods is related to the size of the datasets and the missing rates. As the missing rate increases, the performance of both types of methods decreases, but the deletion method decreases faster, while imputation methods based on machine learning have more stable and better classification performance on average. In addition, small datasets are easily affected by the processing methods used for missing values.
Authored by Lijuan Ren, Tao Wang, Aicha Seklouli, Haiqing Zhang, Abdelaziz Bouras
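A compact sketch of the deletion-vs-imputation comparison under MCAR, using the medical breast-cancer dataset bundled with scikit-learn as a stand-in and an SVM classifier. The 5% cell-wise missing rate and mean imputation are illustrative choices; the paper also evaluates ML-based imputers.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X, y = load_breast_cancer(return_X_y=True)

# MCAR: every cell is masked independently with the same probability.
mask = rng.random(X.shape) < 0.05
X_miss = X.copy()
X_miss[mask] = np.nan

# Deletion: drop any row containing a missing value.
keep = ~np.isnan(X_miss).any(axis=1)
acc_del = cross_val_score(SVC(), X_miss[keep], y[keep], cv=5).mean()

# Imputation: fill missing cells (mean imputation here).
X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
acc_imp = cross_val_score(SVC(), X_imp, y, cv=5).mean()

print(f"rows kept after deletion: {keep.sum()} / {len(X)}")
print(f"deletion accuracy:   {acc_del:.3f}")
print(f"imputation accuracy: {acc_imp:.3f}")
```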
Model compression is one of the most preferred techniques for efficiently deploying deep neural networks (DNNs) on resource-constrained Internet of Things (IoT) platforms. However, a simply compressed model is often vulnerable to adversarial attacks, leading to a conflict between robustness and efficiency, especially for IoT devices exposed to complex real-world scenarios. We, for the first time, address this problem by developing a novel framework dubbed Magical-Decomposition that simultaneously enhances both robustness and efficiency for hardware. By leveraging a hardware-friendly model compression method called singular value decomposition, the defending algorithm can be supported by most existing DNN hardware accelerators. Going a step further, by using a recently developed DNN interpretation tool, the underlying mechanism by which adversarial accuracy is increased in the compressed model is clearly highlighted. Ablation studies and extensive experiments under various attacks/models/datasets consistently validate the effectiveness and scalability of the proposed framework.
Authored by Xin Cheng, Mei-Qi Wang, Yu-Bo Shi, Jun Lin, Zhong-Feng Wang
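A minimal sketch of the SVD compression primitive the framework builds on: a dense layer's weight matrix is factored and truncated to rank k, replacing one fat layer with two thin ones. The matrix sizes and rank are arbitrary; the robustness-enhancing parts of Magical-Decomposition are not shown here.

```python
import numpy as np

rng = np.random.default_rng(7)

# A dense layer's weight matrix. Trained DNN weights are often nearly
# low-rank, which is what makes SVD compression effective.
W = rng.normal(size=(512, 64)) @ rng.normal(size=(64, 512))  # true rank 64
W += rng.normal(scale=0.01, size=W.shape)                    # small noise

U, s, Vt = np.linalg.svd(W, full_matrices=False)

k = 64                          # retained rank: the compression knob
A = U[:, :k] * s[:k]            # 512 x k
B = Vt[:k, :]                   # k x 512

# The layer y = W x is replaced by two thin layers: y = A (B x).
x = rng.normal(size=512)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)

orig, comp = W.size, A.size + B.size
print(f"params: {orig} -> {comp} ({comp / orig:.1%}), relative error {err:.2e}")
```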
The Cloud provides access to a shared pool of resources such as storage, networking, and processing. Distributed Denial of Service (DDoS) attacks are dangerous for Cloud services because they mainly target the availability of resources, so it is important to detect and prevent DDoS attacks for the continuity of Cloud services. In this review, we analyze the different mechanisms for detecting and preventing DDoS attacks in Clouds. We identify the major DDoS attacks in Clouds and compare the frequently-used strategies to detect, prevent, and mitigate those attacks, which will help future researchers in this area.
Authored by Muhammad Tehaam, Salman Ahmad, Hassan Shahid, Muhammad Saboor, Ayesha Aziz, Kashif Munir
One of the major threats in the cyber-security and networking world is the Distributed Denial of Service (DDoS) attack. With the massive development of science and technology, the privacy and security of various organizations have become a major concern. Computer intrusion and DDoS attacks have always been a significant issue in networked environments. DDoS attacks result in the non-availability of services to end-users: they interrupt regular traffic flow by flooding the target with packets, causing the system to crash. This research presents a machine learning-based DDoS attack detection system to overcome this challenge. For training and testing purposes, we have used the NSL-KDD dataset. We trained our model with the Logistic Regression classifier, Support Vector Machine, K-Nearest Neighbour, and Decision Tree classifier, obtaining accuracies of 90.4%, 90.36%, 89.15%, and 82.28%, respectively. We have also added a feature called BOTNET Prevention, which scans for phishing URLs and prevents a healthy device from becoming part of a botnet.
Authored by Neeta Chavan, Mohit Kukreja, Gaurav Jagwani, Neha Nishad, Namrata Deb
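A hedged sketch of the four-classifier comparison, with a synthetic dataset standing in for the preprocessed NSL-KDD features (so the printed accuracies will not match the paper's 90.4/90.36/89.15/82.28%).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for preprocessed NSL-KDD features (attack vs. normal).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=8)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "Decision Tree": DecisionTreeClassifier(random_state=8),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:>20}: {model.score(X_te, y_te):.4f}")
```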
Spotting Distributed Denial of Service (DDoS) attacks is a classification problem in machine learning. A Denial of Service (DoS) assault is essentially a deliberate attack launched from a single source with the intent of rendering the target's application unavailable. To accomplish this, attackers typically aim to consume all available network bandwidth, which prevents authorized users from accessing system resources. DDoS assaults, in contrast to DoS attacks, involve several sources being used by the attacker to launch the attack. DDoS attacks are most frequently observed at the network, transport, presentation, and application layers of the 7-layer OSI architecture. Using the most well-known standard dataset and multiple regression analysis, we have created a machine learning model in this work that can predict DDoS and bot assaults based on traffic.
Authored by Soumyajit Das, Zeeshaan Dayam, Pinaki Chatterjee
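A minimal sketch of "multiple regression for attack prediction": the binary attack label is regressed on several traffic features, and the fitted value is thresholded at 0.5 (a linear probability model). The flow features and attack signature are invented for the example; the paper's standard dataset is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 3000

# Toy flow features: packets/s, bytes/s, unique sources, SYN ratio.
X = rng.normal([100, 5e4, 50, 0.1], [20, 1e4, 10, 0.05], size=(n, 4))
y = np.zeros(n)
ddos = rng.random(n) < 0.2
X[ddos] += [400, 2e5, 500, 0.6]  # volumetric attack signature
y[ddos] = 1

# Multiple regression of the label on the traffic features; thresholding
# the fitted value turns the regression into a classifier.
reg = LinearRegression().fit(X[:2000], y[:2000])
pred = (reg.predict(X[2000:]) >= 0.5).astype(int)
print(f"accuracy: {(pred == y[2000:]).mean():.3f}")
```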
Cloud computing provides a great platform for users to utilize various computational services in order to accomplish their requests. However, it is difficult to utilize computational storage services for file handling due to increased protection issues: Distributed Denial of Service (DDoS) attacks are the most commonly found attacks preventing cloud service utilization. DDoS attack detection and load balancing in the cloud are therefore critical issues that need attention for improved performance. This is addressed in this research work by measuring the trust factors of virtual machines in order to predict the most trustworthy VMs, which are combined to form a trusted source vector. After trust evaluation, the Bat algorithm is utilized for optimal load distribution, predicting the optimal VM resource for task allocation while taking budget into account. This method is most useful in the process of detecting DDoS attacks on VM resources. Finally, DDoS attacks are prevented by introducing a Fuzzy Extreme Learning Machine classifier, which learns the cloud resource setup details on the basis of which DDoS attacks can be detected and prevented. The overall performance of the suggested study design is evaluated in a Java simulation model to demonstrate the superiority of the proposed algorithm over the current research method.
Authored by Sai Manoj
Computer and vehicular networks are both prone to multiple information security breaches for many reasons, such as the lack of standard protocols for secure communication and authentication. Distributed Denial of Service (DDoS) is a threat that disrupts communication in networks, and accurate detection and prevention of DDoS attacks is a necessity for making networks safe. In this paper, we have experimented with two machine learning-based techniques, one each for attack detection and attack prevention. These detection and prevention techniques are implemented in different environments, including vehicular and computer network environments. Three different datasets connected to heterogeneous environments are adopted for experimentation: the first is the NSL-KDD dataset, based on computer network traffic; the second is based on a simulated vehicular environment; and the third, the CIC-DDoS 2019 dataset, is a computer network-based dataset. These datasets contain different numbers of attributes and instances of network traffic. For attack detection, the AdaBoostM1 classification algorithm is used in WEKA, and for attack prevention, a Logit model is used in STATA. Results show that an accuracy of more than 99.9% is obtained on the simulation-based vehicular dataset. This is the highest accuracy rate among the three datasets, and it is obtained within a very short period of time, i.e., 0.5 seconds. In the same way, we use a Logit regression-based model to classify packets; this model shows an accuracy of 100%.
Authored by Amandeep Verma, Rahul Saha
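An illustrative Python analogue of the paper's toolchain, with scikit-learn's AdaBoost standing in for WEKA's AdaBoostM1 (detection) and logistic regression for the STATA Logit model (prevention). Synthetic data replaces the three traffic datasets, so the printed accuracies are not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the preprocessed traffic datasets (NSL-KDD etc.).
X, y = make_classification(n_samples=4000, n_features=25, n_informative=10,
                           weights=[0.7], random_state=10)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=10)

# Detection: boosted decision stumps, the sklearn counterpart of AdaBoostM1.
ada = AdaBoostClassifier(n_estimators=100, random_state=10).fit(X_tr, y_tr)
print(f"AdaBoost detection accuracy: {ada.score(X_te, y_te):.4f}")

# Prevention: a logit model classifies packets so that flows predicted
# malicious can be dropped before they do harm.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Logit classification accuracy: {logit.score(X_te, y_te):.4f}")
```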
In recent decades, the Distributed Denial of Service (DDoS) attack has been one of the most expensive attacks for business organizations. DDoS is a form of cyber-attack that disrupts the operation of computer resources and networks. As technology advances, the styles and tools used in these attacks become more diverse. These attacks are increasing in frequency, volume, and intensity, and they can quickly disrupt the victim, resulting in significant financial loss. This paper describes the significance of DDoS attacks and proposes a new method for detecting and mitigating them by analyzing the traffic coming to the server from the BOTNET in the attacking system. The process of analyzing the requests coming from the BOTNET uses a machine learning algorithm for decision making. The simulation is carried out, and the results of the DDoS attack are analyzed.
Authored by D Satyanarayana, Aisha Alasmi
DDoS attacks still represent a severe threat to network services. While there are more or less workable solutions to defend against these attacks, there is significant space for further research regarding the automation of reactions and subsequent management. In this paper, we focus on one piece of the whole puzzle: we strive to automatically infer filtering rules that are specific to the current DoS attack, to decrease the time to mitigation. We employ a machine learning technique to create a model of the traffic mix based on observing network traffic during the attack and normal periods, and the model is then converted into filtering rules. We evaluate our approach with various setups of hyperparameters. The results of our experiments show that the proposed approach is capable of inferring successful filtering rules.
Authored by Martin Žádník
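A hedged sketch of rule inference from a traffic model: a shallow decision tree is trained on flows from the normal and attack periods, and each root-to-leaf path becomes a candidate filtering rule. The flow features and attack mix are invented; the paper's actual model and rule-conversion step may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
n = 4000

# Toy per-flow features observed during the normal and attack periods.
pkt_rate = rng.normal(100, 30, n)
pkt_len = rng.normal(800, 200, n)
dst_port = rng.choice([53, 80, 443], n).astype(float)
y = np.zeros(n, dtype=int)
atk = rng.random(n) < 0.3                       # the attack traffic mix:
pkt_rate[atk] += 500                            # high-rate...
pkt_len[atk] = rng.normal(90, 10, atk.sum())    # ...small packets

X = np.column_stack([pkt_rate, pkt_len, dst_port])
names = ["pkt_rate", "pkt_len", "dst_port"]

# A shallow tree keeps the learned traffic model directly convertible
# into a handful of filtering rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=11).fit(X, y)
print(export_text(tree, feature_names=names))
# Each path ending in class 1 becomes one filter, e.g.
# "drop if pkt_rate > T1 and pkt_len <= T2" with thresholds from the tree.
```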
DDoS attacks produce a lot of traffic on the network. Thanks to the rise of Software Defined Networking (SDN), DDoS attacks may be fought in a novel way; however, if conducted at regular intervals, DDoS detection and data gathering may lead to higher system load in the SDN and attached systems, greater expense, and slow reaction times to DDoS. Using the Identification Retrieval algorithm, we offer a new DDoS detection framework for detecting resource-scarcity-type DDoS attacks. To detect low-rate DDoS attacks, we employ a combination of network traffic characteristics. The KSVD technique is used to generate a dictionary of network traffic parameters; in addition to providing legitimate and attack traffic models for dictionary construction, the suggested technique may be applied to live network traffic as well. Matching Pursuit and wavelet-based DDoS detection algorithms are also implemented and compared using two separate datasets. Despite the difficulties in identifying low-rate DoS (LR-DoS) attacks, the results of the study show that our technique has a detection accuracy of 89%. The attacks are explained for each type of DDoS, along with how SDN weaknesses may be exploited. We conclude that machine learning-based DDoS detection mechanisms and cutoff-point DDoS detection techniques are the two most prevalent methods used to identify DDoS attacks in SDN. More significantly, the generational process, benefits, and limitations of each DDoS detection system are explained. This is the case in our testing environment, where the intrusion detection system (IDS) is able to block all previously identified threats.
Authored by E. Fenil, Mohan Kumar
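A small sketch of the dictionary-based detection idea, with scikit-learn's online dictionary learning standing in for KSVD and Orthogonal Matching Pursuit playing the Matching Pursuit role: traffic windows that the learned dictionary cannot sparsely reconstruct are flagged. The traffic model and threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(12)

# Windows of normal traffic features (toy stand-in for the real parameters).
normal = np.sin(np.linspace(0, 8 * np.pi, 16)) + rng.normal(0, 0.1, (500, 16))

# Learn a sparse dictionary of normal traffic patterns (the paper uses KSVD;
# sklearn's online dictionary learning is a readily available analogue).
dico = MiniBatchDictionaryLearning(n_components=12, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3, random_state=12)
dico.fit(normal)

def residual(x):
    """Sparse-coding reconstruction error via OMP over the dictionary."""
    code = dico.transform(x.reshape(1, -1))
    return np.linalg.norm(code @ dico.components_ - x)

tau = np.percentile([residual(x) for x in normal[:100]], 99)
attack = rng.normal(0, 1.0, 16)  # traffic not spanned by the dictionary
print(residual(normal[0]) > tau, residual(attack) > tau)  # expected: False True
```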
A distributed denial-of-service (DDoS) attack is a malicious attempt by attackers to disrupt the normal traffic of a targeted server, service, or network. This is done by overwhelming the target and its surrounding infrastructure with a flood of Internet traffic. Multiple compromised computer systems (bots or zombies) then act as sources of attack traffic; exploited machines can include computers and other network resources such as IoT devices. The attack results in either degraded network performance or a total service outage of critical infrastructure, which can lead to heavy financial losses and reputational damage. These attacks maximise effectiveness by controlling the affected systems remotely and establishing networks of bots, called botnets. It is very difficult to separate the attack traffic from normal traffic, and early detection is essential for successful mitigation, so detecting attacks and mitigating their effects plays a very important role in cybersecurity. This can be done by deploying machine learning or deep learning models to monitor the traffic data. We propose using various machine learning and deep learning algorithms to analyse the traffic patterns and separate malicious traffic from normal traffic. Two suitable datasets have been identified (the DDoS attack SDN dataset and the CICDDoS2019 dataset). All essential preprocessing is performed on both datasets, and feature selection is performed before detection techniques are applied. Eight different neural network, ensemble, and machine learning models are chosen, and the datasets are analysed. The best model is chosen based on the performance metrics (a deep neural network model), and an alternative is also suggested (the next best, a hypermodel). Optimisation by hyperparameter tuning further enhances the accuracy. Based on the nature of the attack and the intended target, suitable mitigation procedures can then be deployed.
Authored by Ms. Deepthi Bennet, Ms. Preethi Bennet, D Anitha
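A compact, framework-agnostic sketch of the DNN-plus-hyperparameter-tuning step, using scikit-learn's MLP and grid search in place of the paper's eight models and Keras-style hypermodel. The synthetic data stands in for the preprocessed SDN and CICDDoS2019 datasets.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a preprocessed, feature-selected DDoS dataset.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=13)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=13)

pipe = Pipeline([("scale", StandardScaler()),
                 ("dnn", MLPClassifier(max_iter=1000, random_state=13))])

# Hyperparameter tuning: grid search over depth/width and learning rate.
grid = GridSearchCV(pipe, {
    "dnn__hidden_layer_sizes": [(32,), (64, 32), (64, 64, 32)],
    "dnn__learning_rate_init": [1e-3, 1e-2],
}, cv=3)
grid.fit(X_tr, y_tr)

print("best config:", grid.best_params_)
print(f"test accuracy: {grid.score(X_te, y_te):.4f}")
```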
Targeted attack identification and detection have always been a concern of network security in the current environment, and the economic losses caused by DDoS attacks are enormous. In recent years, DDoS attack detection has made great progress, mainly at the application and network layers. In this paper, a review and discussion are carried out according to the different detection methods and platforms. This paper mainly includes three parts, which respectively review statistics-based machine learning detection, targeted attack detection on SDN platforms, and attack detection on cloud service platforms. Finally, research suggestions for DDoS attack detection are given.
Authored by Jing Chen, Lei Yang, Ziqiao Qiu
This paper studies Distributed Denial of Service (DDoS) attack detection by adopting the Deep Neural Network (DNN) model in Software Defined Networking (SDN). We first deploy the flow collector module to collect the flow table entries. Considering the detection efficiency of the DNN model, we also design some features manually in addition to the features automatically obtained from the flow table. We then use the preprocessed data to train the DNN model and make predictions. The overall detection framework is deployed in the SDN controller. The experimental results illustrate that the DNN model has higher accuracy in identifying attack traffic than classical machine learning algorithms, which lays a foundation for defense against DDoS attacks.
Authored by Wanqi Zhao, Haoyue Sun, Dawei Zhang
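A hedged sketch of the manual feature-design step: statistics computed over one poll of the flow table, of the kind a controller-resident detector could feed to the DNN alongside per-flow counters. The entry format and the specific features are assumptions, not the paper's exact set.

```python
import math
from collections import Counter

def flow_features(flow_entries, interval_s):
    """Hand-crafted statistics over one poll of an SDN flow table.

    Each entry is a dict like {"src": ..., "dst": ..., "pkts": int}.
    DDoS floods typically show many short flows, high source entropy,
    and few packets per flow.
    """
    n = len(flow_entries)
    srcs = Counter(e["src"] for e in flow_entries)
    probs = [c / n for c in srcs.values()]
    src_entropy = -sum(p * math.log2(p) for p in probs)
    return {
        "flow_rate": n / interval_s,                    # new entries per second
        "src_entropy": src_entropy,                     # spoofing indicator
        "avg_pkts_per_flow": sum(e["pkts"] for e in flow_entries) / n,
    }

# A flood-like poll: many distinct (spoofed) sources, one packet each.
flood = [{"src": f"10.0.{i % 250}.{i // 250}", "dst": "10.0.0.1", "pkts": 1}
         for i in range(5000)]
print(flow_features(flood, interval_s=5))
```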
Machine learning-based DDoS attack detection methods are mostly implemented at the packet level with expensive computational time costs, and the space cost of sketch-based detection methods is uncertain. This paper proposes a two-stage DDoS attack detection algorithm combining a time series-based multi-dimensional sketch with machine learning technologies. Besides packet numbers, total lengths, and protocols, we construct the time series-based multi-dimensional sketch with limited space cost by storing elephant-flow information with the Boyer-Moore voting algorithm and a hash index. In the first stage of detection, we adopt a CNN to generate sketch-level DDoS attack detection results from the time series-based multi-dimensional sketch. For sketches with potential DDoS attacks, we use an RNN with flow information extracted from the sketch to implement flow-level DDoS attack detection in the second stage. Experimental results show that not only is the detection accuracy of our proposed method very close to that of packet-level DDoS attack detection methods based on machine learning, but the computational time cost of our method is also much smaller with regard to the number of machine learning operations.
Authored by Yanchao Sun, Yuanfeng Han, Yue Zhang, Mingsong Chen, Shui Yu, Yimin Xu
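A minimal sketch of the Boyer-Moore voting ingredient named above: a constant-space majority vote over flow keys that tracks the candidate elephant flow within a measurement window. The flow-key format and window contents are invented; the surrounding sketch structure and CNN/RNN stages are not shown.

```python
class MajorityVoteSketch:
    """Boyer-Moore majority vote over a stream of flow keys.

    Tracks, in O(1) space, the candidate flow that may account for the
    majority of packets in the window -- the elephant flow whose details
    are stored alongside the multi-dimensional sketch.
    """

    def __init__(self):
        self.candidate, self.count = None, 0

    def update(self, flow_key):
        if self.count == 0:
            self.candidate, self.count = flow_key, 1
        elif flow_key == self.candidate:
            self.count += 1
        else:
            self.count -= 1

# One measurement window: a DDoS victim pair dominates the packet stream.
stream = [("198.51.100.7", 80)] * 700 + [("192.0.2." + str(i), 443)
                                         for i in range(300)]
sketch = MajorityVoteSketch()
for key in stream:
    sketch.update(key)
print("elephant-flow candidate:", sketch.candidate)
# A second pass (or the sketch's per-flow counters) confirms the candidate
# truly dominates the window before the flow-level RNN stage is invoked.
```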