Dual Connectivity is a key approach to optimizing throughput and latency in heterogeneous networks. Originally introduced by the 3rd Generation Partnership Project (3GPP) for terrestrial communications, it has not been widely explored in satellite systems. In this paper, Dual Connectivity is implemented in a multi-orbital satellite network, where a network model is developed that exploits the diversity gains of Dual Connectivity and Carrier Aggregation to enhance satellite uplink capacity. A software-defined network controller is introduced at the network layer, coupled with a carefully designed hybrid resource allocation algorithm. The algorithm performs optimal dynamic flow control and traffic steering, considering resource availability and the channel propagation information of the orbital links to arrive at a resource allocation pattern that enhances uplink system performance. Simulation results evaluate the achievable gains in throughput and latency; in addition, we provide useful insights into the design of multi-orbital satellite networks with an implementable scheduler design.
Authored by Michael Dazhi, Hayder Al-Hraishawi, Mysore Shankar, Symeon Chatzinotas
With the intelligent development of the power system, the two-layer structure of the smart grid and the propagation of failures across layers have significantly changed attack paths: from single-layer to multi-layer, and from static to dynamic. To address the limitation of traditional attack path identification methods, which consider only single-layer attack paths, this paper proposes the idea of cross-layer attacks, integrating the threat propagation mechanism of the information layer with the failure propagation mechanism of the physical layer to establish a forward-backward bi-directional detection model. The model predicts possible cross-layer attack paths and evaluates their generation probabilities, providing theoretical guidance and technical support for defenders. Experimental results show that the proposed method effectively identifies dynamic cross-layer attacks in the smart grid.
Authored by Binbin Wang, Yi Wu, Naiwang Guo, Lei Zhang, Chang Liu
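The bi-directional path model itself is not spelled out in the abstract, but the core idea of scoring cross-layer attack paths can be sketched as follows. This is a hypothetical illustration: the two-layer topology, node names, and edge probabilities are all invented, and a path's generation probability is taken as the product of per-edge propagation probabilities (a common simplification, not necessarily the paper's formula).

```python
# Hypothetical sketch of cross-layer attack-path enumeration (not the
# authors' model): nodes prefixed "c_" are cyber-layer, "p_" physical-
# layer; each edge carries an assumed propagation probability.
EDGES = {  # assumed toy two-layer topology
    "c_hmi":     [("c_scada", 0.8)],
    "c_scada":   [("p_relay", 0.6), ("p_breaker", 0.4)],  # cross-layer hops
    "p_relay":   [("p_bus", 0.9)],
    "p_breaker": [("p_bus", 0.7)],
    "p_bus":     [],
}

def attack_paths(src, dst, prob=1.0, path=None):
    """Forward DFS yielding (path, probability) pairs from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path, prob
        return
    for nxt, p in EDGES.get(src, []):
        if nxt not in path:  # avoid cycles
            yield from attack_paths(nxt, dst, prob * p, path)

for path, p in sorted(attack_paths("c_hmi", "p_bus"), key=lambda t: -t[1]):
    print(" -> ".join(path), f"prob={p:.3f}")
```

A backward pass from the target could prune nodes that cannot reach it before the forward enumeration runs, which is the intuition behind a forward-backward bi-directional search.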
The Internet of Things is a developing technology that converts physical objects into virtual objects connected to the internet over wired and wireless network architectures. The use of cross-layer techniques in the Internet of Things is primarily driven by the high heterogeneity of hardware and software capabilities. Although the traditional layered architecture has been effective for a while, cross-layer protocols can greatly improve a number of wireless network characteristics, including bandwidth and energy usage. Security is also one of the main concerns with the Internet of Things, and machine learning (ML) techniques are considered the most cutting-edge and viable approach, which has led to a plethora of new research directions for tackling IoT's growing security issues. In the proposed study, a number of previously offered cross-layer approaches based on machine learning techniques that address the issues and challenges brought on by the heterogeneity of IoT are examined in depth. Additionally, the main issues are identified and analyzed, including those related to scalability, interoperability, security, privacy, mobility, and energy utilization.
Authored by K. Saranya, Dr. A. Valarmathi
In the deep nano-scale regime, reliability has emerged as one of the major design issues for high-density integrated systems. Key reliability-related issues include soft errors, high temperature, and aging effects (e.g., NBTI, Negative Bias Temperature Instability), which jeopardize the correct execution of applications. A tremendous amount of research effort has been invested at individual system layers. Moreover, in the era of growing cyber-security threats, modern computing systems experience a wide range of security threats at different layers of the software and hardware stacks. However, considering the escalating reliability and security costs, designing a highly reliable and secure system requires engaging multiple system layers (i.e., both hardware and software) to achieve cost-effective robustness. This talk provides an overview of important reliability issues, prominent state-of-the-art techniques, and various hardware-software collaborative reliability modeling and optimization techniques developed at our lab, with a focus on recent work on ML-based reliability techniques. Afterwards, the talk discusses how advanced ML techniques can be leveraged to devise new types of hardware security attacks, for instance on logic-locked circuits. Towards the end, I will also give a quick pitch on the reliability and security challenges of embedded machine learning (ML) on resource/energy-constrained devices subjected to unpredictable and harsh scenarios.
Authored by Muhammad Shafique
In the Smart Grid paradigm, critical infrastructure operation is increasingly exposed to cyber-threats due to increased dependency on communication networks. An adversary can attack a power grid operation through False Data Injection (FDI) into system measurements and/or through attacks on the communication network, such as flooding the communication channels with unnecessary data or intercepting messages. A cross-layered strategy that combines power grid data, communication network monitoring, and Machine Learning-based processing is a promising solution for detecting such threats. In this paper, an implementation of an integrated cross-layer framework is presented. The advantage of such a framework is the augmentation of valuable data that enhances the detection of anomalies in power grid operation. The IEEE 118-bus system is built in Simulink to provide a power grid testing environment, and communication network data is emulated using SimComponents. The performance of the framework is investigated under various FDI and communication attacks.
Authored by Nader Aljohani, Dennis Agnew, Keerthiraj Nagaraj, Sharon Boamah, Reynold Mathieu, Arturo Bretas, Janise McNair, Alina Zare
Embedded devices are becoming increasingly pervasive in safety-critical systems of the emerging cyber-physical world. While trusted execution environments (TEEs), such as ARM TrustZone, have been widely deployed in mobile platforms, little attention has been given to deployment on real-time cyber-physical systems, which present a different set of challenges compared to mobile applications. For safety-critical cyber-physical systems, such as autonomous drones or automobiles, the current TEE deployment paradigm, which focuses only on confidentiality and integrity, is insufficient. Computation in these systems also needs to be completed in a timely manner (e.g., before the car hits a pedestrian), putting a much stronger emphasis on availability. To bridge this gap, we present RT-TEE, a real-time trusted execution environment. There are three key research challenges. First, RT-TEE bootstraps the ability to ensure availability using a minimal set of hardware primitives on commodity embedded platforms. Second, to balance real-time performance and scheduler complexity, we designed a policy-based event-driven hierarchical scheduler. Third, to mitigate the risks of having device drivers in the secure environment, we designed an I/O reference monitor that leverages software sandboxing and driver debloating to provide fine-grained access control on peripherals while minimizing the trusted computing base (TCB). We implemented prototypes on both ARMv8-A and ARMv8-M platforms and tested the system on both synthetic tasks and real-life CPS applications. We evaluated the rover and plane in simulation, and the quadcopter both in simulation and on a real drone.
Authored by Jinwen Wang, Ao Li, Haoran Li, Chenyang Lu, Ning Zhang
Supply chain cyberattacks that exploit insecure third-party software are a growing concern for the security of the electric power grid. These attacks seek to deploy malicious software in grid control devices during the fabrication, shipment, installation, and maintenance stages, or as part of routine software updates. Malicious software on grid control devices may inject bad data or execute bad commands, which can cause blackouts and damage power equipment. This paper describes an experimental setup to simulate the software update process of a commercial power relay as part of a hardware-in-the-loop simulation for grid supply chain cyber-security assessment. The laboratory setup was successfully utilized to study three supply chain cyber-security use cases.
Authored by Joseph Keller, Shuva Paul, Santiago Grijalva, Vincent Mooney
The damage or destruction of Critical Infrastructures (CIs) affects societies' sustainable functioning. Therefore, it is crucial to have effective methods to assess the risk and resilience of CIs. Failure Mode and Effects Analysis (FMEA) and Failure Mode Effects and Criticality Analysis (FMECA) are two approaches to risk assessment and criticality analysis. However, these approaches are complex to apply to intricate CIs and associated Cyber-Physical Systems (CPS). We provide a top-down strategy, starting from a high abstraction level of the system and progressing to cover the functional elements of the infrastructures. This approach develops from FMECA but estimates risks and focuses on assessing resilience. We applied the proposed technique to a real-world CI, predicting how possible improvement scenarios may influence the overall system resilience. The results show the effectiveness of our approach in benchmarking CI resilience, providing a cost-effective way to evaluate plausible alternatives concerning the improvement of preventive measures.
Authored by Gonçalo Carvalho, Nadia Medeiros, Henrique Madeira, Bruno Cabral
Cyber-Physical Systems (CPS) join hardware and software components to perform real-time services. Maintaining a system's reliability is critical to the continuous delivery of these services. However, the CPS running environment is full of uncertainties and can easily lead to performance degradation. As a result, a recovery technique is needed to achieve resilience in the system, keeping in mind that this technique should be as green as possible. This early doctoral proposal suggests a game-theoretic solution to achieve resilience and greenness in CPS. Game theory is known for its fast decision-making, helping a system choose what maximizes its payoffs. The proposed game model is described over a real-life collaborative artificial intelligence system (CAIS), in which robots work with humans to achieve a common goal, and shows how the expected results will achieve the resilience of CAIS with a minimized CO2 footprint.
Authored by Diaeddin Rimawi
Cyber-Physical Systems (CPS) have a physical part that interacts with sensors and actuators. The data read from sensors and the data generated to drive actuators are crucial for the correct operation of this class of devices. Most implementations trust the data being read from sensors and the data output to actuators. Real-time validation of a system's input and output data is crucial for the safety of its operation. This paper proposes an architecture for handling this issue through smart data guards detached from sensors and controllers and acting solely on the data. This mitigates potential issues from malfunctioning sensors as well as deliberate sensor and controller attacks. The data guards understand the expected data and can detect anomalies and correct them in real time. This approach adds stronger guarantees of fault-tolerant behavior in the presence of attacks and sensor failures.
Authored by Anton Hristozov, Eric Matson, Eric Dietz, Marcus Rogers
Security is a critical aspect in the process of designing, developing, and testing software systems. Due to the increasing need for security-related skills within software systems, there is a growing demand for these skills to be taught in computer science. A series of security modules was developed not only to meet this demand but also to assess the impact of these modules on teaching critical cyber security topics in computer science courses. This full paper in the innovative practice category presents the outcomes of six security modules in a freshman-level course at two institutions. The study adopts a Model-Eliciting Activity (MEA) as a project for students to demonstrate an understanding of the security concepts. Two experimental studies were conducted: 1) the teaching effectiveness of implementing the cyber security modules and MEA project, and 2) students' experiences with conceptual modeling tasks in problem-solving. In measuring the effectiveness of teaching security concepts with the MEA project, students' performance, attitudes, and interests, as well as the instructor's effectiveness, were assessed. For the conceptual modeling tasks in problem-solving, the results of student outcomes were analyzed. After implementing the security modules with the MEA project, students showed a strong understanding of cyber security concepts and an increased interest in broader computer science concepts. The instructor's beliefs about teaching, learning, and assessment shifted from teacher-centered to student-centered during their experience with the security modules and MEA project. Although 64.29% of students' solutions do not seem suitable for real-world implementation, 76.9% of the developed solutions showed a sufficient degree of creativity.
Authored by Jeong Yang, Young Kim, Brandon Earwood
To address the problems of an excessive number of component vulnerabilities and limited defense resources in industrial cyber-physical systems, a method for analyzing the security-critical components of a system is proposed. Firstly, the components and vulnerability information in the system are modeled based on the SysML block definition diagram. Secondly, since the SysML block definition diagram is difficult to analyze directly, a block security dependency graph model is proposed. On this basis, transformation rules from the SysML block definition diagram to the block security dependency graph are established according to the structure of the block definition diagram and its vulnerability information. Then, a calculation method for component security importance is proposed, and a security-critical component analysis tool is designed and implemented. Finally, the example of a Drone system illustrates the effectiveness of the proposed method. The application of this method can provide theoretical and technical support for selecting key defense components in industrial cyber-physical systems.
Authored by Junjie Zhao, Bingfeng Xu, Xinkai Chen, Bo Wang, Gaofeng He
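The abstract does not give the exact formula for component security importance, so the following is only an invented stand-in that captures the flavor of ranking blocks in a security dependency graph: each block's score combines its own vulnerability severity with severities that can propagate to it along dependency edges, decayed per hop. The Drone-like component names and severity numbers are assumptions.

```python
# Illustrative sketch only (the paper's exact metric is not given in the
# abstract): rank components by combining each block's own vulnerability
# severity with severities propagating up from its dependencies.
DEPENDS_ON = {          # assumed Drone-like topology: A depends on B, ...
    "flight_ctrl": ["imu", "radio"],
    "imu": [],
    "radio": ["firmware"],
    "firmware": [],
}
SEVERITY = {"flight_ctrl": 2.0, "imu": 1.0, "radio": 3.0, "firmware": 5.0}

def importance(block, decay=0.5, depth=0, seen=None):
    """Severity of the block plus decayed severity of everything it
    transitively depends on (a compromise there propagates up)."""
    seen = seen or set()
    if block in seen:
        return 0.0
    seen.add(block)
    score = SEVERITY[block] * (decay ** depth)
    for dep in DEPENDS_ON[block]:
        score += importance(dep, decay, depth + 1, seen)
    return score

# most security-critical components first
ranking = sorted(SEVERITY, key=lambda b: -importance(b))
```

Under these assumed numbers, "radio" outranks even the highly vulnerable "firmware" because it both carries its own severity and sits on firmware's propagation path.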
Steady advancement in Artificial Intelligence (AI) development over recent years has caused AI systems to become more readily adopted across industry and military use-cases globally. As powerful as these algorithms are, there are still open questions regarding their security and reliability. Beyond adversarial machine learning, software supply chain vulnerabilities and model backdoor injection exploits are emerging as potential threats to the physical safety of AI-reliant CPS such as autonomous vehicles. In this work-in-progress paper, we introduce the concept of AI supply chain vulnerabilities together with a proof-of-concept autonomous exploitation framework. We investigate the viability of algorithm backdoors and third-party software library dependencies for applicability to modern AI attack kill chains. We leverage an autonomous vehicle case study to demonstrate the applicability of our offensive methodologies within a realistic AI CPS operating environment.
Authored by Daniel Williams, Chelece Clark, Rachel McGahan, Bradley Potteiger, Daniel Cohen, Patrick Musau
Third-party libraries with rich functionalities facilitate the fast development of JavaScript software, leading to the explosive growth of the NPM ecosystem. However, they also bring new security threats, since vulnerabilities can be introduced through dependencies on third-party libraries. In particular, the threats can be excessively amplified by transitive dependencies. Existing research considers only direct dependencies or reasons about transitive dependencies via reachability analysis, which neglects the NPM-specific dependency resolution rules applied during real installation, resulting in wrongly resolved dependencies. Consequently, further fine-grained analysis, such as precise vulnerability propagation and its evolution over time in dependencies, cannot be carried out precisely at a large scale, nor can ecosystem-wide solutions for vulnerabilities in dependencies be derived. To fill this gap, we propose a knowledge graph-based dependency resolution, which resolves the inner dependency relations of dependencies as trees (i.e., dependency trees), and investigates the security threats from vulnerabilities in dependency trees at a large scale. Specifically, we first construct a complete dependency-vulnerability knowledge graph (DVGraph) that captures the whole NPM ecosystem (over 10 million library versions and 60 million well-resolved dependency relations). Based on it, we propose a novel algorithm (DTResolver) to statically and precisely resolve dependency trees, as well as transitive vulnerability propagation paths, for each package by taking the official dependency resolution rules into account. On this basis, we carry out an ecosystem-wide empirical study on vulnerability propagation and its evolution in dependency trees. Our study unveils many useful findings, and we further discuss the lessons learned and solutions for different stakeholders to mitigate the vulnerability impact in NPM. For example, we implement a dependency tree based vulnerability remediation method (DTReme) for NPM packages, which achieves much better performance than the official tool (npm audit fix).
Authored by Chengwei Liu, Sen Chen, Lingling Fan, Bihuan Chen, Yang Liu, Xin Peng
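To make the notions of dependency trees and transitive vulnerability propagation paths concrete, here is a toy resolver. Everything in it is illustrative: the registry, package names, versions, and advisories are invented, and real NPM resolution (semver ranges, hoisting, deduplication) is far more involved than this sketch.

```python
# Toy illustration of resolving a dependency tree and tracing transitive
# vulnerability propagation paths; all packages and advisories are made up.
REGISTRY = {  # (name, version) -> direct dependencies
    ("app", "1.0.0"):      {"lodash": "4.17.0", "express": "4.16.0"},
    ("lodash", "4.17.0"):  {},
    ("express", "4.16.0"): {"qs": "6.5.0"},
    ("qs", "6.5.0"):       {},
}
VULNERABLE = {("qs", "6.5.0"), ("lodash", "4.17.0")}  # assumed advisories

def dependency_tree(name, version):
    """Recursively resolve the full dependency tree as nested dicts."""
    deps = REGISTRY[(name, version)]
    return {(name, version): [dependency_tree(d, v) for d, v in deps.items()]}

def vulnerable_paths(tree, prefix=()):
    """Yield every root-to-vulnerable-package path in the tree."""
    for pkg, children in tree.items():
        path = prefix + (f"{pkg[0]}@{pkg[1]}",)
        if pkg in VULNERABLE:
            yield path
        for child in children:
            yield from vulnerable_paths(child, path)

tree = dependency_tree("app", "1.0.0")
for p in vulnerable_paths(tree):
    print(" -> ".join(p))
```

The second printed path shows the transitive case the abstract emphasizes: the vulnerability reaches the root only through an intermediate dependency.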
Documents are a common method of storing information and one of the most conventional forms of expressing ideas. Cloud servers store a user's documents alongside those of thousands of other users, in place of physical storage devices. Indexes corresponding to the documents are also stored at the cloud server to enable users to retrieve documents of interest. The index includes keywords, the identities of the documents in which the keywords appear, and Term Frequency-Inverse Document Frequency (TF-IDF) values that reflect the keywords' relevance scores for the dataset. Currently, there are no efficient methods to delete keywords from millions of documents on cloud servers without compromising user privacy. Most existing approaches use divide-and-conquer algorithms that split a bigger problem into sub-problems and then combine the results, and they do not focus on fine-grained deletion. This work achieves fine-grained deletion of keywords by keeping the size of the TF-IDF matrix constant after processing a deletion query, which comprises the keywords to be deleted. The experimental results of the proposed approach confirm that the precision of ranked search remains very high after deletion, without recalculation of the TF-IDF matrix.
Authored by Kushagra Lavania, Gaurang Gupta, D.V.N. Kumar
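A minimal sketch of the constant-size deletion idea described above. This is not the paper's protocol (which additionally protects user privacy on the server); it only shows the core trick of zeroing a deleted keyword's row in place so the TF-IDF matrix dimensions never change and no recomputation is needed. The keywords and scores are invented.

```python
# Sketch: process a deletion query by zeroing TF-IDF rows in place,
# so the matrix keeps its shape and ranked search needs no rebuild.
tfidf = {  # keyword -> per-document relevance scores (assumed values)
    "cloud":  [0.42, 0.00, 0.31],
    "server": [0.10, 0.55, 0.00],
    "secret": [0.00, 0.27, 0.64],
}

def delete_keywords(index, query):
    """Zero the rows named in the deletion query; shape stays constant."""
    for kw in query:
        if kw in index:
            index[kw] = [0.0] * len(index[kw])

rows_before = len(tfidf)
delete_keywords(tfidf, ["secret"])
assert len(tfidf) == rows_before               # matrix size unchanged
assert all(s == 0.0 for s in tfidf["secret"])  # keyword no longer ranks
```

Because untouched rows keep their original scores, ranked search over the remaining keywords behaves exactly as before the deletion.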
Side Channel Attacks (SCAs), which exploit the physical information generated when an encryption algorithm is executed on a device to recover the key, have become one of the key threats to the security of encrypted devices. Recently, deep learning techniques have been applied to SCAs with good results on publicly available datasets. In this paper, we propose a power trace decomposition method that divides the original power traces into two parts: the data-influenced part, defined as the data power traces (Tdata), and the remainder, defined as the device-constant power traces. Using Tdata to train the network model has clear advantages over training on the original power traces. To verify the effectiveness of the approach, we evaluated an ATXmega128D4 microcontroller by capturing the power traces generated while it implemented AES-128. Experimental results show that network models trained on Tdata outperform models trained on the raw power traces (Traw) in classification accuracy, training time, cross-subkey key recovery, and cross-device key recovery.
Authored by Fanliang Hu, Feng Ni
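One plausible reading of the decomposition above is that the device-constant component can be estimated as the per-sample mean over many captures, with Tdata as the residual. The paper's actual method may differ, and the trace values below are invented.

```python
# Hedged sketch of the trace-decomposition idea: estimate the device-
# constant component as the per-sample mean over all captured traces,
# and take T_data = T_raw - constant.
raw_traces = [            # each row: one power trace (samples over time)
    [1.0, 2.0, 3.0],
    [1.2, 2.2, 2.8],
    [0.8, 1.8, 3.2],
]

n = len(raw_traces)
device_constant = [sum(t[i] for t in raw_traces) / n
                   for i in range(len(raw_traces[0]))]

# the data-dependent residual that would feed the network model
data_traces = [[s - c for s, c in zip(trace, device_constant)]
               for trace in raw_traces]
```

By construction, each sample position of the residual traces averages to zero across captures, so only data-dependent variation remains for the model to learn from.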
The new paradigm of software-defined networking (SDN) supports network innovation and makes the control of network operations more agile. The flow table is the main component of an SDN switch; it contains a set of flow entries that define how new flows are processed. Low-rate distributed denial-of-service (LR-DDoS) attacks are difficult to detect and mitigate because they behave like legitimate users. There are many detection methods for LR-DDoS attacks in the literature, but none of them detect single-packet LR-DDoS attacks. In fact, LR-DDoS attackers exploit vulnerabilities in the TCP congestion control mechanism either to periodically retransmit bursts of attack packets for a short time period or to continuously launch single attack packets at a constant low rate. The proposed scheme detects LR-DDoS by examining all incoming packets and filtering the single packets sent from different source IP addresses to the same destination at a constant low rate. Sending single packets at a constant low rate increases the number of flows at the switch, which can easily overflow it. After detecting the single attack packets, the proposed scheme prevents LR-DDoS at an early stage by deleting the flows created by these packets once they reach a threshold. According to the experimental results, the scheme achieves 99.47% accuracy in this scenario. In addition, the scheme uses simple logic and simple calculations, which reduces the overhead on the SDN controller.
Authored by Wisam Muragaa
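The early-stage mitigation described above can be sketched as follows; the threshold value, data structures, and IP addresses are assumptions for illustration, not details from the paper.

```python
# Simplified sketch: count the flow entries installed toward each
# destination by single packets, and evict those flows once a
# destination's count crosses a threshold, before the table overflows.
from collections import defaultdict

THRESHOLD = 3          # assumed flow-count limit per destination
flow_table = set()     # (src_ip, dst_ip) flow entries on the switch
per_dst = defaultdict(set)

def on_packet_in(src, dst):
    """Called when a packet misses the flow table; returns evicted flows."""
    flow_table.add((src, dst))
    per_dst[dst].add(src)
    if len(per_dst[dst]) >= THRESHOLD:      # likely LR-DDoS pattern
        evicted = {f for f in flow_table if f[1] == dst}
        flow_table.difference_update(evicted)
        per_dst[dst].clear()
        return evicted
    return set()

# three single packets from distinct sources toward one victim
for i in range(3):
    dropped = on_packet_in(f"10.0.0.{i}", "10.0.1.1")
```

The third single packet trips the threshold, and all flows created by the suspicious packets are deleted, freeing the switch's flow table.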
Distributed Denial of Service (DDoS) attacks aim to make a server unresponsive by flooding the target server with a large volume of packets (volume-based DDoS attacks), by keeping connections open for a long time and exhausting its resources (Low and Slow DDoS attacks), or by targeting protocols (protocol-based attacks). Volume-based DDoS attacks that flood the target server with a large number of packets are easier to detect because of the abnormality in packet flow. Low and Slow DDoS attacks, however, make the server unavailable by keeping connections open for a long time while sending traffic similar to genuine traffic, making detection difficult. This paper proposes a solution to detect and mitigate one such Low and Slow DDoS attack, Slowloris, in an SDN (Software Defined Networking) environment. The proposed solution involves communication between the detection and mitigation module and the controller of the Software Defined Network to obtain the data needed to detect and mitigate the attack.
Authored by A Sai, B Tilak, Sai Sanjith, Padi Suhas, R Sanjeetha
Network security is a prominent topic that is gaining international attention. The Distributed Denial of Service (DDoS) attack is often regarded as one of the most serious threats to network security. Software Defined Networking (SDN) decouples the control plane from the data plane, which can meet various network requirements, but SDN can itself become the target of DDoS attacks. This paper proposes an automated DDoS attack mitigation method based on the programmability of the Ryu controller and the features of OpenFlow switch flow tables. The Mininet platform is used to simulate the whole process, from SDN traffic generation to using a K-Nearest Neighbor model for traffic classification, as well as identifying and mitigating the DDoS attack. The packet counts on the victim's malicious-traffic input port are significantly lower after the mitigation method is applied than before, so the goal of mitigating the DDoS attack is successfully achieved.
Authored by Danni Wang, Sizhao Li
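The K-Nearest Neighbor classification step can be illustrated with a from-scratch sketch; the flow features below (e.g., packet rate and distinct-flow count) and their values are invented, whereas the paper trains on traffic generated in Mininet.

```python
# From-scratch k-nearest-neighbor sketch of the traffic-classification
# step; feature vectors and labels here are invented toy data.
def knn_classify(sample, training, k=3):
    """Majority label among the k nearest training points (Euclidean)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda t: dist(t[0], sample))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# assumed features: (packets per second, distinct flows observed)
training = [
    ((10, 2), "benign"),  ((12, 3), "benign"),  ((8, 1), "benign"),
    ((900, 80), "ddos"),  ((850, 75), "ddos"),  ((950, 90), "ddos"),
]
print(knn_classify((870, 78), training))   # classified as attack traffic
```

In the paper's pipeline, a "ddos" verdict would then trigger the Ryu controller to install flow rules blocking the offending port.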
DDoS attacks produce a great deal of traffic on the network. The rise of Software Defined Networking (SDN) enables novel ways of fighting DDoS attacks; however, if DDoS detection and data gathering are conducted at regular intervals, they can increase system load across the SDN and its systems, raise the cost of SDN, and slow the reaction to DDoS. Using the Identification Retrieval algorithm, we offer a new DDoS detection framework for detecting resource-exhaustion DDoS attacks. To detect low-rate DDoS attacks, we employ a combination of network traffic characteristics, and the K-SVD technique is used to generate a dictionary of network traffic parameters; besides providing legitimate and attack traffic models for dictionary construction, the suggested technique can also be applied to live network traffic. Matching Pursuit and wavelet-based DDoS detection algorithms are also implemented and compared on two separate data sets. Despite the difficulty of identifying LR-DoS attacks, the results of the study show that our technique achieves a detection accuracy of 89%. We explain each type of DDoS attack and how SDN weaknesses may be exploited, and we conclude that machine learning-based and threshold-based detection techniques are the two most prevalent methods used to identify DDoS attacks in SDN. More significantly, the generation process, benefits, and limitations of each DDoS detection system are explained. In our testing environment, the intrusion detection system (IDS) was able to block all previously identified threats.
Authored by E. Fenil, Mohan Kumar
Software Defined Networking (SDN) is an emerging technology that provides flexibility in network communication. SDN separates the data forwarding plane from the control plane, which includes the controller, resulting in a centralized network. Due to centralized control, the network becomes more dynamic, and resources are managed efficiently and cost-effectively. Network Virtualization is the transformation of the network from hardware-based to software-based, and Network Function Virtualization (NFV) permits the implementation, adaptable provisioning, and management of network functions virtually. Combining SDN with NFV lets a network strengthen the features of both, and the approach has therefore attracted notable research attention over the last few years. However, the SDN platform introduces network security challenges. The network becomes vulnerable when a large number of requests are encapsulated inside packet_in messages and passed from switch to controller for instructions because they are not matched by existing flow entry rules; this exhausts resources and becomes a bottleneck for the entire network, leading to a DDoS attack. Quick provisional methods are necessary to prevent the switches from breaking down, and researchers have developed mechanisms that detect and mitigate such flood attacks. This paper provides a comprehensive survey of research on frameworks for detecting and mitigating flood-based DDoS attacks in Software Defined Networks (SDN) with the help of NFV.
Authored by Namita Ashodia, Kishan Makadiya
This paper explores the detection of and defense against DDoS attacks in the SDN architecture of the 5G environment, and proposes a DDoS attack detection method based on a two-stage deep learning model, CNN-LSTM, in the SDN network. It not only greatly improves the accuracy of attack detection but also reduces the time needed to classify and inspect network traffic, so that DDoS attack traffic can be blocked in time to ensure the availability of network services.
Authored by Mengxue Li, Binxin Zhang, Guangchang Wang, Bin ZhuGe, Xian Jiang, Ligang Dong
This paper studies Distributed Denial of Service (DDoS) attack detection by adopting a Deep Neural Network (DNN) model in Software Defined Networking (SDN). We first deploy a flow collector module to collect the flow table entries. Considering the detection efficiency of the DNN model, we also design some features manually in addition to the features obtained automatically from the flow table. We then use the preprocessed data to train the DNN model and make predictions. The overall detection framework is deployed in the SDN controller. The experimental results illustrate that the DNN model identifies attack traffic with higher accuracy than machine learning algorithms, which lays a foundation for defense against DDoS attacks.
Authored by Wanqi Zhao, Haoyue Sun, Dawei Zhang
In recent years, a new type of cyber attack, the targeted attack, has been observed. It targets specific organizations or individuals, whereas conventional large-scale attacks do not focus on specific targets. Organizations publish many Word and PDF files on their websites; these files may provide a starting point for targeted attacks if they include hidden data unintentionally generated in the authoring process. Adhatarao and Lauradoux analyzed hidden data in PDF files published by security agencies in many countries and showed that many PDF files potentially leak information such as author names and details of the information system and computer architecture. In this study, we analyze hidden data in PDF files published on the websites of police agencies in Japan and compare the results with Adhatarao and Lauradoux's. We gathered 110,989 PDF files: 56% of them contain personal names, organization names, usernames, or numbers that appear to be IDs within the organizations, and 96% contain software names.
Authored by Taichi Hasegawa, Taiichi Saito, Ryoichi Sasaki
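The kind of hidden-data scan described above can be approximated with a stdlib-only pass over a PDF's raw bytes, looking for common Info-dictionary keys. This is only a sketch: real PDFs also hide data in XMP metadata streams and compressed objects, which this ignores, and the sample bytes are synthetic.

```python
# Minimal stdlib-only sketch of a PDF hidden-metadata scan: grep the raw
# bytes for literal /Key (value) entries from the Info dictionary.
import re

META_KEYS = (b"Author", b"Creator", b"Producer", b"Title")

def scan_pdf_metadata(data: bytes):
    """Return {key: value} for literal /Key (value) entries in the bytes."""
    found = {}
    for key in META_KEYS:
        m = re.search(rb"/" + key + rb"\s*\(([^)]*)\)", data)
        if m:
            found[key.decode()] = m.group(1).decode("latin-1")
    return found

# synthetic stand-in for a fragment of a downloaded PDF
blob = b"... /Author (t.suzuki) /Producer (ExampleWriter 9.1) ..."
print(scan_pdf_metadata(blob))
```

Run over a crawl of published PDFs, a scan like this surfaces exactly the personal names and software names the study counts.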
Web services use server-side input sanitization to guard against harmful input. Some web services publish their sanitization logic to make their client interface more usable, e.g., allowing clients to debug invalid requests locally. However, this usability practice poses a security risk: services may share the regexes they use to sanitize input strings, and regex-based denial of service (ReDoS) is an emerging threat. Although prominent service outages caused by ReDoS have spurred interest in this topic, we know little about the degree to which live web services are vulnerable to ReDoS. In this paper, we conduct the first black-box study measuring the extent of ReDoS vulnerabilities in live web services. We apply the Consistent Sanitization Assumption: that client-side sanitization logic, including regexes, is consistent with the sanitization logic on the server side. We identify a service's regex-based input sanitization in its HTML forms or its API, find vulnerable regexes among them, craft ReDoS probes, and pinpoint vulnerabilities. We analyzed the HTML forms of 1,000 services and the APIs of 475 services. Of these, 355 services publish regexes; 17 services publish unsafe regexes; and 6 services are vulnerable to ReDoS through their APIs (6 domains; 15 subdomains). Both Microsoft and Amazon Web Services patched their web services as a result of our disclosure. Since these vulnerabilities were found in API specifications, not HTML forms, we proposed a ReDoS defense for a popular API validation library, and our patch has been merged. To summarize: in client-visible sanitization logic, some web services advertise ReDoS vulnerabilities in plain sight. Our results motivate short-term patches and long-term fundamental solutions. "Make measurable what cannot be measured." -Galileo Galilei
Authored by Efe Barlas, Xin Du, James Davis
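A deliberately crude heuristic in the spirit of regex triage: flag patterns where a quantified group is itself quantified (e.g., (a+)+), a classic source of catastrophic backtracking. The authors use proper vulnerability detection, not this check, and many unsafe regexes would evade so simple a filter.

```python
# Crude ReDoS triage sketch: flag regexes containing a quantified group
# under another quantifier, the textbook nested-quantifier pattern.
import re

NESTED_QUANTIFIER = re.compile(r"\([^()]*[+*][^()]*\)[+*{]")

def looks_redos_prone(pattern: str) -> bool:
    """True if the pattern contains a quantified group under a quantifier."""
    return NESTED_QUANTIFIER.search(pattern) is not None

print(looks_redos_prone(r"^(a+)+$"))          # nested quantifier: suspicious
print(looks_redos_prone(r"^[a-z]+@[a-z]+$"))  # no nesting: passes
```

A flagged regex would then be confirmed the way the study does it: by crafting an attack input and measuring matching time against the live endpoint (never blindly in production).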