The unsupervised cross-domain NER task addresses the setting in which data in a new domain are fully unlabeled: labeled data from a source domain are leveraged to predict entities in the unlabeled target domain. Since training models on a large domain corpus is time-consuming, in this paper we consider an alternative that introduces syntactic dependency structure. Such information is more accessible and can be shared between sentences from different domains. We propose a novel framework with a dependency-aware GNN (DGNN) to learn these common structures from the source domain and adapt them to the target domain, alleviating the data scarcity issue and bridging the domain gap. Experimental results show that our method outperforms state-of-the-art methods.
Authored by Luchen Liu, Xixun Lin, Peng Zhang, Lei Zhang, Bin Wang
Guidelines, directives, and policy statements are usually presented in “linear” text form: word after word, page after page. However necessary, this practice impedes full understanding, obscures feedback dynamics, and hides mutual dependencies, cascading effects, and the like, even when augmented with tables and diagrams. The net result is often a checklist response as an end in itself. All this creates barriers to the intended realization of guidelines and undermines their potential effectiveness. We present a solution strategy that treats text as “data”, transforms it into a structured model, and generates network views of the text(s), which we can then use for vulnerability mapping, risk assessment, and control point analysis. As proof of concept, we draw on the NIST conceptual model and analyze guidelines for smart grid cybersecurity, more than 600 pages of text.
Authored by Nazli Choucri, Gaurav Agarwal
Deep learning models rely on single-word features and positional features of text to achieve good results in relation extraction tasks. However, previous studies have failed to make full use of the semantic information contained in sentence dependency syntax trees, and data sparseness and noise propagation still affect classification models. The BERT (Bidirectional Encoder Representations from Transformers) pretrained language model provides better representations for natural language processing tasks, and entity-enhancement methods have proven effective in relation extraction. This paper therefore combines the shortest dependency path with an entity-enhanced BERT pretrained language model to reduce the impact of noise terms on the classification model and obtain more semantically expressive feature representations. The algorithm is tested on the SemEval-2010 Task 8 English relation extraction dataset, and the final F1 score reaches 0.881.
Authored by Zeyu Sun, Chi Zhang
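As a hedged illustration of the shortest dependency path idea in the abstract above (keeping only the tokens on the tree path between the two entity mentions), a plain BFS over the parse tree suffices; the head-array encoding and function name below are our own, not the paper's implementation:

```python
from collections import deque

def shortest_dependency_path(heads, start, end):
    """Shortest path between two token indices in a dependency tree,
    treating head->dependent arcs as undirected.
    heads[i] is the index of token i's head (-1 for the root)."""
    # Build an undirected adjacency list from the head array.
    adj = {i: [] for i in range(len(heads))}
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    # Standard BFS from the start token.
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == end:
            break
        for nxt in adj[node]:
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    # Reconstruct the path end -> start, then reverse it.
    path, node = [], end
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]
```

Feeding only the tokens on the returned path into the encoder is what prunes away the noise terms the abstract mentions.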
Cyber attacks are among the most challenging security issues worldwide. Attacks on digital systems and organizations are increasing, while innovation in and dependency on digital technology, infrastructure, connectivity, and strategy grow day by day, significantly extending the scope of cyber threats. Attackers are becoming more sophisticated, well-organized, and professional, writing malware in Python, C, C++, Java, SQL, PHP, JavaScript, Ruby, and other languages. Accurate attack modeling techniques support cyber-attack planning and can be applied quickly during an ongoing attack. This paper creates a new cyber-attack model that extends an existing model and provides a better understanding of a network's vulnerabilities. Moreover, it helps protect company or private network infrastructure from future cyber-attacks. The final goal is to handle cyber-attacks in an efficacious manner using attack modeling techniques. Many organizations, companies, authorities, industries, and individuals now face cybercrime; our model applies in any environment, whether a honeypot, firewall, DMZ, or other security measure is present.
Authored by Mostafa Al-Amin, Mirza Khatun, Mohammed Uddin
To address the excessive number of component vulnerabilities and the limited defense resources in industrial cyber-physical systems, a method for analyzing the security-critical components of a system is proposed. First, the components and vulnerability information in the system are modeled with a SysML block definition diagram. Second, since the SysML block definition diagram is difficult to analyze directly, a block security dependency graph model is proposed, and transformation rules from the SysML block definition diagram to the block security dependency graph are established according to the structure of the diagram and its vulnerability information. Then, a method for calculating component security importance is proposed, and a security-critical component analysis tool is designed and implemented. Finally, an example drone system illustrates the effectiveness of the proposed method. The method can provide theoretical and technical support for selecting key defense components in industrial cyber-physical systems.
Authored by Junjie Zhao, Bingfeng Xu, Xinkai Chen, Bo Wang, Gaofeng He
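The abstract above does not give the component security importance formula; purely as an illustrative assumption, one simple rule adds to each component's own vulnerability score a damped share of the scores of the components that depend on it (all names and the damping factor below are hypothetical, not the paper's method):

```python
def security_importance(vuln, deps, damping=0.5):
    """Toy security-importance score: a component's own vulnerability
    score plus a damped share of each dependent's score, since
    compromising a component endangers everything built on it.
    vuln: {component: CVSS-like score}
    deps: {component: [components it depends on]}"""
    importance = dict(vuln)
    for comp, needed in deps.items():
        for dep in needed:
            # comp depends on dep, so dep inherits part of comp's score.
            importance[dep] += damping * vuln[comp]
    return importance
```

Ranking components by this score would then suggest where scarce defense resources should go first.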
Modern vehicles have multiple electronic control units (ECUs) that are connected together as part of a complex distributed cyber-physical system (CPS). The ever-increasing communication between ECUs and external electronic systems has made these vehicles particularly susceptible to a variety of cyber-attacks. In this work, we present a novel anomaly detection framework called TENET to detect anomalies induced by cyber-attacks on vehicles. TENET uses temporal convolutional neural networks with an integrated attention mechanism to learn the dependencies between messages traversing the in-vehicle network. After deployment in a vehicle, TENET employs a robust quantitative metric and a classifier, together with the learned dependencies, to detect anomalous patterns. TENET achieves an improvement of 32.70% in False Negative Rate, 19.14% in the Matthews Correlation Coefficient, and 17.25% in the ROC-AUC metric, with 94.62% fewer model parameters and 48.14% lower inference time, compared to the best-performing prior works on automotive anomaly detection.
Authored by Sooryaa Thiruloga, Vipin Kukkala, Sudeep Pasricha
Steady advancement in Artificial Intelligence (AI) development over recent years has led AI systems to be more readily adopted across industry and military use-cases globally. As powerful as these algorithms are, there are still gaping questions regarding their security and reliability. Beyond adversarial machine learning, software supply chain vulnerabilities and model backdoor injection exploits are emerging as potential threats to the physical safety of AI-reliant CPS such as autonomous vehicles. In this work-in-progress paper, we introduce the concept of AI supply chain vulnerabilities together with a proof-of-concept autonomous exploitation framework. We investigate the viability of algorithm backdoors and third-party software library dependencies for applicability to modern AI attack kill chains. We leverage an autonomous vehicle case study to demonstrate the applicability of our offensive methodologies within a realistic AI CPS operating environment.
Authored by Daniel Williams, Chelece Clark, Rachel McGahan, Bradley Potteiger, Daniel Cohen, Patrick Musau
Third-party libraries with rich functionalities facilitate the fast development of JavaScript software, leading to the explosive growth of the NPM ecosystem. However, they also bring new security threats: vulnerabilities can be introduced through dependencies on third-party libraries, and these threats can be excessively amplified by transitive dependencies. Existing research considers only direct dependencies, or reasons about transitive dependencies based on reachability analysis, which neglects the NPM-specific dependency resolution rules applied during actual installation and thus resolves dependencies incorrectly. Consequently, further fine-grained analysis, such as precisely tracking vulnerability propagation and its evolution over time in dependencies, cannot be carried out at a large scale, nor can ecosystem-wide solutions for vulnerabilities in dependencies be derived. To fill this gap, we propose a knowledge graph-based dependency resolution, which resolves the inner dependency relations of dependencies as trees (i.e., dependency trees) and investigates the security threats from vulnerabilities in dependency trees at a large scale. Specifically, we first construct a complete dependency-vulnerability knowledge graph (DVGraph) that captures the whole NPM ecosystem (over 10 million library versions and 60 million well-resolved dependency relations). Based on it, we propose a novel algorithm (DTResolver) to statically and precisely resolve dependency trees, as well as transitive vulnerability propagation paths, for each package by taking the official dependency resolution rules into account. We then carry out an ecosystem-wide empirical study on vulnerability propagation and its evolution in dependency trees. Our study unveils many useful findings, and we further discuss the lessons learned and solutions for different stakeholders to mitigate the vulnerability impact in NPM based on our findings.
For example, we implement a dependency tree-based vulnerability remediation method (DTReme) for NPM packages, which achieves much better performance than the official tool (npm audit fix).
Authored by Chengwei Liu, Sen Chen, Lingling Fan, Bihuan Chen, Yang Liu, Xin Peng
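To make the notion of transitive vulnerability propagation in dependency trees concrete, here is a toy walk over an already-resolved tree (nested dicts standing in for DTResolver output); it is a sketch of the concept, not DVGraph or DTResolver itself:

```python
def vulnerable_paths(tree, vulnerable, path=()):
    """Collect every root-to-package path that ends at a package with
    a known vulnerability, i.e. the transitive propagation paths.
    tree: nested dicts, package name -> sub-dependency tree."""
    found = []
    for pkg, subtree in tree.items():
        cur = path + (pkg,)
        if pkg in vulnerable:
            found.append(cur)
        found.extend(vulnerable_paths(subtree, vulnerable, cur))
    return found
```

A remediation tool in the spirit of DTReme would then try to rewrite the tree so that no such path remains.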
Legacy programs are normally monolithic (that is, all code runs in a single process and is not partitioned), so a bug may leave the entire program vulnerable and therefore untrusted. Program partitioning separates a program into multiple partitions to isolate sensitive data or privileged operations. Manual program partitioning requires programmers to rewrite the entire source code, which is cumbersome, error-prone, and not generic. Automatic program partitioning tools can separate programs according to a dependency graph constructed from data or program dependencies, but programmers still need to manually implement remote service interfaces for inter-partition communication. Therefore, in this paper we propose AutoSlicer, which partitions a program more automatically, requiring the programmer only to annotate sensitive data. AutoSlicer constructs accurate data dependency graphs (DDGs) by enabling execution flow graphs (EFGs), and its DDG-based partitioning algorithm computes partition information from the sensitive annotations. In addition, a code refactoring toolchain automatically transforms the source code into sensitive and insensitive partitions that can be deployed on a remote procedure call framework. The experimental evaluation shows that AutoSlicer effectively improves the accuracy of program partitioning (by 13%-27%) through EFGs and separates real-world programs with relatively small performance overhead (0.26%-9.42%).
Authored by Weizhong Qiang, Hao Luo
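The DDG-based partitioning step above can be approximated, purely for illustration, as taint-style reachability: every statement that transitively consumes data from an annotated sensitive source lands in the sensitive partition. The graph encoding and names are our own, not AutoSlicer's:

```python
def partition(ddg, sensitive):
    """Split statements into (sensitive, insensitive) partitions by
    forward reachability over a data dependency graph.
    ddg: {stmt: [statements that consume its data]}
    sensitive: statements annotated as sensitive sources."""
    tainted = set(sensitive)
    stack = list(sensitive)
    while stack:
        stmt = stack.pop()
        for consumer in ddg.get(stmt, []):
            if consumer not in tainted:
                tainted.add(consumer)
                stack.append(consumer)
    return tainted, set(ddg) - tainted
```

The two returned sets correspond to the code that would be refactored into the sensitive and insensitive processes.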
Cyber-Physical Systems (CPS) are systems that contain digital embedded devices while depending on environmental influences or external configurations. Identifying the relevant influences of a CPS and modeling its dependencies on external influences is difficult. We propose to learn these dependencies with decision trees in combination with clustering. The approach automatically identifies relevant influences and yields a data-driven explanation of system behavior in terms of the system's use case. Our paper presents a case study of our method on a Real-Time Localization System (RTLS), demonstrating the usefulness of our approach, and discusses further applications of a learned decision tree.
Authored by Swantje Plambeck, Görschwin Fey, Jakob Schyga, Johannes Hinckeldeyn, Jochen Kreutzfeldt
Cloud computing provides access to a shared pool of resources such as storage, networking, and processing. Distributed Denial of Service (DDoS) attacks are dangerous for Cloud services because they mainly target the availability of resources. Detecting and preventing DDoS attacks is important for the continuity of Cloud services. In this review, we analyze the different mechanisms for detecting and preventing DDoS attacks in Clouds. We identify the major DDoS attacks in Clouds and compare the frequently used strategies to detect, prevent, and mitigate them, which will help future researchers in this area.
Authored by Muhammad Tehaam, Salman Ahmad, Hassan Shahid, Muhammad Saboor, Ayesha Aziz, Kashif Munir
One of the major threats in the cyber security and networking world is the Distributed Denial of Service (DDoS) attack. With the massive development of science and technology, the privacy and security of various organizations are at stake. Computer intrusion and DDoS attacks have always been significant issues in networked environments. DDoS attacks make services unavailable to end-users: they interrupt regular traffic flow and flood the target with packets, causing the system to crash. This research presents a machine learning-based DDoS attack detection system to overcome this challenge. For training and testing we used the NSL-KDD dataset, and we trained our model with the Logistic Regression, Support Vector Machine, K-Nearest Neighbour, and Decision Tree classifiers, obtaining accuracies of 90.4%, 90.36%, 89.15%, and 82.28%, respectively. We have also added a feature called BOTNET Prevention, which scans for phishing URLs and prevents a healthy device from becoming part of a botnet.
Authored by Neeta Chavan, Mohit Kukreja, Gaurav Jagwani, Neha Nishad, Namrata Deb
Spotting Distributed Denial of Service (DDoS) attacks is a classification problem in machine learning. A Denial of Service (DoS) assault is a deliberate attack launched from a single source with the intent of rendering the target's application unavailable. Attackers typically aim to consume all available network bandwidth, which prevents authorized users from accessing system resources. DDoS assaults, in contrast to DoS attacks, involve several sources being used by the attacker to launch the attack. DDoS attacks are most frequently observed at the network, transport, presentation, and application layers of the 7-layer OSI architecture. Using a well-known standard dataset and multiple regression analysis, we have created a machine learning model that can predict DDoS and bot assaults from traffic.
Authored by Soumyajit Das, Zeeshaan Dayam, Pinaki Chatterjee
Cloud computing provides a great platform for users to utilize various computational services to accomplish their requests. However, it is difficult to utilize computational storage services for file handling because of growing security concerns, and Distributed Denial of Service (DDoS) attacks are the most common attacks preventing cloud service utilization. DDoS attack detection and load balancing in the cloud are therefore pressing issues for improved performance. This work addresses them by measuring the trust factors of virtual machines in order to identify the most trustworthy VMs, which are combined to form a trusted source vector. After trust evaluation, the Bat algorithm is utilized for optimal load distribution, predicting the optimal VM resource for task allocation with the budget in mind. This method is particularly useful for detecting DDoS attacks on VM resources. Finally, DDoS attacks are prevented by introducing a Fuzzy Extreme Learning Machine classifier, which learns the cloud resource setup details on which DDoS attack prevention is based. The overall performance of the suggested design is evaluated in a Java simulation model to demonstrate the superiority of the proposed algorithm over the current research method.
Authored by Sai Manoj
The Internet of Things (IoT) and its protocols CoAP and MQTT have security issues that have entirely changed the security strategies that should be applied to resource-constrained devices. Several challenges have been observed across multiple security domains, but Distributed Denial of Service (DDoS) attacks are especially dangerous in IoT systems with real-time requirements. We therefore investigate the IoT paradigm and the CoAP and MQTT protocols to determine whether network services can be efficiently delivered, managed, and disseminated to the devices. Best practices are being adopted in the IoT to enrich this task; however, threats inherited from traditional networks have not yet been effectively mitigated. In this paper, we present a deep, qualitative, and comprehensive systematic mapping to answer the following research questions: (i) what is the state of the art in IoT security; (ii) how can the challenges of constrained devices be solved through infrastructure involvement; (iii) what types of techniques, protocols, and paradigms need to be studied; (iv) which security profiles should be taken care of; and (v) how are proposals evaluated: (a) in simulated, virtualized, or emulated environments, or (b) on real devices, and if so, on which devices. A comparative study with other papers indicates that our work presents a timely contribution toward understanding and formulating IoT security challenges for constrained devices.
Authored by Márcio Nascimento, Jean Araujo, Admilson Ribeiro
Computer and vehicular networks are both prone to multiple information security breaches for many reasons, such as the lack of standard protocols for secure communication and authentication. Distributed Denial of Service (DDoS) is a threat that disrupts communication in networks, and detecting and preventing DDoS attacks accurately is a necessity for making networks safe. In this paper, we have experimented with two machine learning-based techniques, one each for attack detection and attack prevention. These detection and prevention techniques are implemented in different environments, including vehicular and computer network environments. Three datasets connected to heterogeneous environments are adopted for experimentation: the NSL-KDD dataset, based on computer network traffic; a dataset based on a simulated vehicular environment; and the computer network-based CIC-DDoS 2019 dataset. These datasets contain different numbers of attributes and instances of network traffic. For attack detection, the AdaBoostM1 classification algorithm is used in WEKA, and for attack prevention, a Logit model is used in STATA. Results show that an accuracy of more than 99.9% is obtained on the simulated vehicular dataset, the highest accuracy rate among the three datasets, and it is obtained within a very short time (0.5 seconds). In the same way, we use a Logit regression-based model to classify packets, which shows an accuracy of 100%.
Authored by Amandeep Verma, Rahul Saha
Distributed Denial-of-Service (DDoS) attacks aim to cause downtime or a lack of responsiveness for web services. DDoS attacks targeting the application layer are among the hardest to catch: they generally appear legitimate at lower layers and attempt to exploit common application functionality or aspects of the HTTP protocol, rather than simply sending large amounts of traffic as in volumetric flooding. Attacks can focus on functionality such as database operations, file retrieval, or general backend code. In this paper, we examine common forms of application layer attacks and preventative and detection measures, and take a closer look at HTTP flooding attacks by the High Orbit Ion Cannon (HOIC) and “low and slow” attacks through Slowloris.
Authored by Samuel Black, Yoohwan Kim
The new software-defined networking (SDN) paradigm supports network innovation and makes the control of network operations more agile. The flow table is the main component of an SDN switch; it contains a set of flow entries that define how new flows are processed. Low-rate distributed denial-of-service (LR-DDoS) attacks are difficult to detect and mitigate because they behave like legitimate users. There are many detection methods for LR-DDoS attacks in the literature, but none of them detects single-packet LR-DDoS attacks. In fact, LR-DDoS attackers exploit vulnerabilities in TCP's congestion control mechanism either to periodically retransmit bursts of attack packets for a short period or to continuously launch single attack packets at a constant low rate. The proposed scheme detects LR-DDoS by examining all incoming packets and filtering the single packets sent from different source IP addresses to the same destination at a constant low rate. Sending single packets at a constant low rate increases the number of flows at the switch, which can easily overflow it. After detecting the single attack packets, the proposed scheme prevents LR-DDoS at an early stage by deleting the flows created by these packets once they reach a threshold. In our experiments, the scheme achieves 99.47% accuracy in this scenario. In addition, the scheme uses simple logic and simple calculations, which reduces the overhead on the SDN controller.
Authored by Wisam Muragaa
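The single-packet pattern the scheme above targets, many distinct sources each sending one packet to the same destination, can be sketched as a simple per-destination counter; the threshold and data layout are illustrative assumptions, not the paper's exact detection logic:

```python
from collections import defaultdict

def detect_lr_ddos(packets, threshold):
    """Flag destinations that receive single packets from many distinct
    sources: each such packet creates a new flow entry, inflating the
    switch flow table. packets: iterable of (src_ip, dst_ip) pairs."""
    sources_per_dst = defaultdict(set)
    flagged = set()
    for src, dst in packets:
        sources_per_dst[dst].add(src)
        if len(sources_per_dst[dst]) >= threshold:
            flagged.add(dst)  # candidate for flow deletion at the switch
    return flagged
```

In the actual scheme, the flows created by packets toward a flagged destination would then be deleted before the flow table overflows.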
Wireless sensor networks (WSNs) are used in many areas, such as battlefield surveillance, patient monitoring, traffic control, and environmental and building surveillance. Wireless technology, however, brings with it a host of new threats. Because WSNs communicate across radio frequencies, they are more susceptible to interference than wired networks. The authors of this research examine the security goals of WSNs as well as DDoS attacks. Many techniques are available for detecting DDoS attacks in WSNs; these alternatives, however, stop the assault only after it has begun, resulting in data loss and wasting limited sensor node resources. The study concludes with a new method for detecting the UDP Reflection Amplification Attack in WSNs, along with guidance on how to apply it and handle such a case.
Authored by B.J Kumar, V.S Gowda
Distributed Denial of Service (DDoS) attacks aim to make a server unresponsive by flooding the target with a large volume of packets (volume-based DDoS attacks), by keeping connections open for a long time and exhausting resources (low-and-slow DDoS attacks), or by targeting protocols (protocol-based attacks). Volume-based DDoS attacks that flood the target with a large number of packets are easier to detect because of the abnormality in packet flow. Low-and-slow DDoS attacks, however, make the server unavailable by keeping connections open for a long time while sending traffic similar to genuine traffic, making such attacks difficult to detect. This paper proposes a solution to detect and mitigate one such low-and-slow DDoS attack, Slowloris, in an SDN (Software Defined Networking) environment. The proposed solution involves communication between the detection-and-mitigation module and the controller of the Software Defined Network to obtain the data needed to detect and mitigate the attack.
Authored by A Sai, B Tilak, Sai Sanjith, Padi Suhas, R Sanjeetha
Network security is a prominent topic gaining international attention. The Distributed Denial of Service (DDoS) attack is often regarded as one of the most serious threats to network security. Software Defined Networking (SDN) decouples the control plane from the data plane, which can meet various network requirements, but SDN can also become the target of DDoS attacks. This paper proposes an automated DDoS attack mitigation method based on the programmability of the Ryu controller and the features of OpenFlow switch flow tables. The Mininet platform is used to simulate the whole process, from SDN traffic generation to traffic classification with a K-Nearest Neighbor model, as well as identifying and mitigating DDoS attacks. The packet counts on the victim's malicious-traffic input port are significantly lower after the mitigation method is applied than before, so the goal of mitigating the DDoS attack is successfully achieved.
Authored by Danni Wang, Sizhao Li
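The K-Nearest Neighbor classification step above can be sketched from scratch on toy flow features such as packet rate and byte rate; this is a generic KNN, not the paper's trained model or feature set:

```python
from collections import Counter

def knn_classify(train, features, k=3):
    """Label a flow by majority vote among its k nearest training
    samples (squared Euclidean distance over the feature vectors).
    train: list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Flows labeled as attack traffic would then trigger the controller to install blocking flow entries on the switch.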
In recent decades, the Distributed Denial of Service (DDoS) attack has been one of the most expensive attacks for business organizations. A DDoS attack is a form of cyber-attack that disrupts the operation of computer resources and networks. As technology advances, the styles and tools used in these attacks become more diverse. The attacks have increased in frequency, volume, and intensity, and they can quickly disrupt the victim, resulting in significant financial loss. In this paper, we describe the significance of DDoS attacks and propose a new method for detecting and mitigating them by analyzing the traffic coming to the server from the attacking botnet. The requests coming from the botnet are analyzed using a machine learning algorithm for decision making. A simulation is carried out, and the results of the DDoS attack are analyzed.
Authored by D Satyanarayana, Aisha Alasmi
DDoS attacks still represent a severe threat to network services. While there are more or less workable solutions to defend against these attacks, there is significant room for further research on automating reactions and subsequent management. In this paper, we focus on one piece of the puzzle: automatically inferring filtering rules specific to the current DoS attack in order to decrease the time to mitigation. We employ a machine learning technique to create a model of the traffic mix by observing network traffic during the attack and during normal periods, and the model is then converted into filtering rules. We evaluate our approach with various hyperparameter setups. The results of our experiments show that the proposed approach is feasible in terms of its capability to infer successful filtering rules.
Authored by Martin Žádník
Over the past few years, DDoS attack incidents have been rising continuously across the world. DDoS attackers have also shifted their targets toward cloud environments, as the majority of services have moved their operations to the cloud. Various authors have proposed distinct solutions to minimize the effects of DDoS attacks on victim services and co-located services in cloud environments. In this work, we propose an approach that separates incoming requests at the container level and advocates employing the scale-inside-out [10] approach for all suspicious requests. In this manner, all authenticated benign requests continue to be served even in the presence of an attack. We also improve the usage of the scale-inside-out approach by applying it to the separate container that serves the suspicious requests. The results of our proposed technique show a significant decrease in the response time of benign users during a DDoS attack as compared with existing solutions.
Authored by Anmol Kumar, Gaurav Somani
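The container-level request separation above can be illustrated with a minimal router that sends benign and suspicious requests to different serving queues; the suspicion predicate here is a stand-in, and the real system's classification and scale-inside-out mechanics are more involved:

```python
def separate_requests(requests, is_suspicious):
    """Route each request to the normal container's queue or to an
    isolated container's queue, so benign users keep being served
    while suspicious traffic is handled (and scaled) separately.
    is_suspicious: predicate over a single request."""
    benign, suspicious = [], []
    for req in requests:
        (suspicious if is_suspicious(req) else benign).append(req)
    return benign, suspicious
```

The isolated queue is the one a scale-inside-out policy would shrink or grow independently of the benign-serving container.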
Undoubtedly, technology has not only transformed our work and lifestyle but also carries with it many security challenges. The Distributed Denial-of-Service (DDoS) attack is one of the most prominent attacks witnessed in present-day cyberspace. This paper outlines several DDoS attacks, their mitigation stages, attack propagation, and malicious codes, and finally contrasts normal and DDoS-attacked scenarios. A case study of a SYN flooding attack has been carried out using Metasploit, observing CPU utilization, frame length, and rate in the normal and attacked phases. Preliminary results clearly show that in a normal scenario, CPU usage is about 20%, whereas in the attacked phases, with the same CPU load, CPU execution overhead is nearly 90% to 100%. Thus, through this research, the major differences were found in CPU usage, frame length, and degree of data flow. The Wireshark tool was used as the network traffic analyzer.
Authored by Sambhavi Kukreti, Sumit Modgil, Neha Gehlot, Vinod Kumar