Network information security is now a major concern, and measuring the trustworthiness of terminal devices is of great significance to the security of the entire network. Existing methods for measuring the security trustworthiness of terminal devices still suffer from high complexity and a lack of universality. In this paper, a device fingerprint library of network access terminal devices is first established through a hybrid device fingerprint collection method; secondly, software and hardware features of the device fingerprint are used to increase the uniqueness of device identification, and a multi-dimensional standard metric is used to measure the trustworthiness of the terminal device; finally, blockchain technology is used to store the fingerprint and standard model of network access terminal equipment on the chain. To improve the security level of network access devices, a device access method that considers the trust of terminal devices from multiple perspectives is implemented.
Authored by Jiaqi Peng, Ke Yang, Jiaxing Xuan, Da Li, Lei Fan
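As a purely illustrative sketch of the idea described in the abstract above (not the authors' implementation), the Python fragment below derives a device fingerprint from mixed software and hardware features and combines several trust dimensions into a weighted score; all feature names, dimension scores, weights, and the admission threshold are hypothetical.

```python
import hashlib

# Hypothetical mixed software/hardware features collected from a terminal device.
device = {
    "mac": "00:1A:2B:3C:4D:5E",      # hardware feature
    "cpu_id": "BFEBFBFF000906EA",    # hardware feature
    "os_version": "Ubuntu 22.04",    # software feature
    "open_ports": "22,443",          # software feature
}

# Fingerprint: a stable hash over the concatenated, sorted features.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(device.items())).encode()
).hexdigest()

# Multi-dimensional trust measurement with illustrative weights;
# each dimension is a score in [0, 1] produced by a separate check.
dimensions = {"identity": 0.95, "integrity": 0.90, "behaviour": 0.80}
weights = {"identity": 0.4, "integrity": 0.4, "behaviour": 0.2}
trust = sum(weights[d] * dimensions[d] for d in dimensions)

# Admit the device only if the aggregate trust exceeds a policy threshold.
print(fingerprint[:16], "trusted" if trust >= 0.85 else "rejected")
```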
TVM (Tensor Virtual Machine) is a deep learning compiler that supports the conversion of machine learning models into TVM IR (intermediate representation) and optimises the generation of high-performance machine code for various hardware platforms. While the traditional approach is to parallelise the loop transformations of individual operators, in this paper we partition the implementation of operators in the deep learning compiler TVM and schedule the partitions in parallel to obtain faster operator running times. An optimisation algorithm for partitioning and parallel scheduling is designed for TVM, in which operators such as two-dimensional convolutions are partitioned into multiple smaller implementations and several partitioned operators are run under parallel scheduling, with performance estimation used to derive the best partitioning and parallel scheduling decisions. To evaluate the effectiveness of the algorithm, multiple instances of the two-dimensional convolution, average pooling, maximum pooling, and ReLU activation operators with different input sizes were tested on a CPU platform; the experiments show that the performance of these operators is improved and that they run faster.
Authored by Zhiyu Li, Xiang Zhou, Wenbin Weng
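To make the scheduling idea concrete, here is a minimal TVM tensor-expression sketch (not the paper's partitioning algorithm): an element-wise ReLU operator is split into partitions along its outer axis and the partitions are executed in parallel on the CPU. The split factor, sizes, and target are illustrative.

```python
import numpy as np
import tvm
from tvm import te

M, N = 1024, 1024
A = te.placeholder((M, N), name="A", dtype="float32")
# A simple ReLU operator expressed with TVM's tensor-expression API.
B = te.compute((M, N), lambda i, j: tvm.tir.max(A[i, j], tvm.tir.const(0, "float32")),
               name="relu")

s = te.create_schedule(B.op)
outer, inner = s[B].split(B.op.axis[0], factor=64)  # partition the outer loop
s[B].parallel(outer)                                # run the partitions in parallel

func = tvm.build(s, [A, B], target="llvm")
a = tvm.nd.array(np.random.randn(M, N).astype("float32"))
b = tvm.nd.array(np.zeros((M, N), dtype="float32"))
func(a, b)
```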
Derivatives are key to numerous science, engineering, and machine learning applications. While existing tools generate derivatives of programs in a single language, modern parallel applications combine a set of frameworks and languages to leverage available performance and function in an evolving hardware landscape. We propose a scheme for differentiating arbitrary DAG-based parallelism that preserves scalability and efficiency, implemented into the LLVM-based Enzyme automatic differentiation framework. By integrating with a full-fledged compiler backend, Enzyme can differentiate numerous parallel frameworks and directly control code generation. Combined with its ability to differentiate any LLVM-based language, this flexibility permits Enzyme to leverage the compiler toolchain for parallel and differentiation-specific optimizations. We differentiate nine distinct versions of the LULESH and miniBUDE applications, written in different programming languages (C++, Julia) and parallel frameworks (OpenMP, MPI, RAJA, Julia tasks, MPI.jl), demonstrating similar scalability to the original program. On benchmarks with 64 threads or nodes, we find a differentiation overhead of 3.4–6.8× on C++ and 5.4–12.5× on Julia.
Authored by William Moses, Sri Narayanan, Ludger Paehler, Valentin Churavy, Michel Schanen, Jan Hückelheim, Johannes Doerfert, Paul Hovland
Model checking is one of the most commonly used techniques in formal verification. However, the exponentially large state space renders exhaustive state enumeration inefficient even for a moderate System on Chip (SoC) design. In this paper, we propose a method that leverages symbolic execution to accelerate state space search and pinpoint security vulnerabilities. We automatically convert the hardware design to functionally equivalent C++ code and utilize the KLEE symbolic execution engine to perform state exploration through heuristic search. To reduce the search space, we symbolically represent essential input signals while making non-critical inputs concrete. Experimental results demonstrate that our method can precisely identify security vulnerabilities at significantly lower computation cost.
Authored by Shibo Tang, Xingxin Wang, Yifei Gao, Wei Hu
The long-living nature and byte-addressability of persistent memory (PM) amplify the importance of strong memory protections. This paper develops temporal exposure reduction protection (TERP) as a framework for enforcing memory safety. Aiming to minimize the time during which a PM region is accessible, TERP offers a complementary dimension of memory protection. The paper gives a formal definition of TERP, explores the semantic space of TERP constructs and their relations with security and composability in both sequential and parallel executions, and proposes programming-system and architecture solutions to the key challenges in adopting TERP, drawing on novel support in both compilers and hardware to efficiently meet the exposure-time target. Experiments validate the efficacy of the proposed support for TERP, in terms of both efficiency and exposure-time minimization.
Authored by Yuanchao Xu, Chencheng Ye, Xipeng Shen, Yan Solihin
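The following is a purely conceptual Python sketch of the temporal-exposure idea, not the paper's constructs: a hypothetical `pm_unlock`/`pm_lock` pair is wrapped in a context manager so that a persistent-memory region is accessible only inside the narrowest possible window. Both calls and the `exposed` helper are invented for illustration.

```python
from contextlib import contextmanager

@contextmanager
def exposed(region):
    """Make a PM region accessible only for the duration of the block."""
    region.pm_unlock()          # hypothetical call: enable access to the region
    try:
        yield region
    finally:
        region.pm_lock()        # hypothetical call: revoke access immediately

# Usage: exposure time is bounded by the length of the with-block.
# with exposed(log_region) as r:
#     r.append(record)
```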
Artificial intelligence (AI) and machine learning (ML) have been used to transform our environment and the way people think, behave, and make decisions over the last few decades [1]. In the last two decades, everyone connected to the Internet, whether an enterprise or an individual, has become concerned about the security of their computational resources. Cybersecurity is responsible for protecting hardware and software resources from cyber attacks such as viruses, malware, intrusion, and eavesdropping. Cyber attacks come either from black-hat hackers or from cyber warfare units. AI and ML have played an important role in developing efficient cybersecurity tools. This paper presents the latest cybersecurity tools based on machine learning, namely: Windows Defender ATP, Darktrace, Cisco Network Analytics, IBM QRadar, StringSifter, Sophos Intercept X, SIME, NPL, and Symantec Targeted Attack Analytics.
Authored by Taher Ghazal, Mohammad Hasan, Raed Zitar, Nidal Al-Dmour, Waleed Al-Sit, Shayla Islam
With the development of computer technology and information security technology, computer networks will increasingly become an important means of information exchange, permeating all areas of social life. Recognizing the vulnerabilities and potential threats of computer networks, as well as the various security problems that exist in practice, designing and researching a computer quality architecture, and ensuring the security of network information are therefore issues that urgently need to be resolved. The purpose of this article is to study the design and realization of information security technology and the computer quality system structure. The article first summarizes the basic theory of information security technology and then elaborates on its core technologies. Combining the current status of the computer quality system structure and analyzing its existing problems and deficiencies, information security technology is used as the basis for designing and researching the computer quality system structure. The article systematically expounds the function-module data, interconnection structure, and routing selection of the computer quality system structure, and applies comparative, observational, and other research methods in the design and research of information security technology and the computer quality system structure. Experimental research shows that when the load of the studied computer quality system structure is 0 or 100, the data loss rate for data of different lengths is 0% and the correctness rate is 100%, which demonstrates extremely high feasibility.
Authored by Yuanyuan Hu, Xiaolong Cao, Guoqing Li
This paper presents a case study on designing and implementing a secure communication protocol over a Controller Area Network (CAN). The CAN-based protocol uses a hybrid encryption method on a relatively simple hardware/software environment. Moreover, blockchain technology is proposed as a working solution to provide an extra level of security for the proposed system.
Authored by Adrian-Florin Croitoru, Florin Stîngă, Marius Marian
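As an illustration of a hybrid scheme of the kind the abstract above describes (not the authors' exact protocol), the sketch below uses the Python `cryptography` package: an RSA key transports a fresh symmetric session key, and AES-GCM protects the CAN payload, which in practice would then be fragmented across 8-byte CAN frames. Key sizes, the CAN identifier, and the payload are illustrative.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's asymmetric key pair (provisioned ahead of time).
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: wrap a fresh AES session key with the receiver's public key.
session_key = AESGCM.generate_key(bit_length=128)
wrapped = receiver_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Sender: encrypt a CAN payload; the ciphertext would be split over 8-byte frames.
nonce = os.urandom(12)
payload = bytes([0x12, 0x34, 0x56, 0x78])
ciphertext = AESGCM(session_key).encrypt(nonce, payload, b"CAN_ID_0x1A0")

# Receiver: unwrap the session key and authenticate/decrypt the payload.
unwrapped = receiver_key.decrypt(
    wrapped,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, b"CAN_ID_0x1A0") == payload
```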
We study how information flows through a multi-robot network in order to better understand how to provide resilience to malicious information. While the notion of global resilience is well studied, one way existing methods provide global resilience is by bringing robots closer together to improve the connectivity of the network. However, large changes in network structure can impede the team from performing other functions such as coverage, where the robots need to spread apart. Our goal is to mitigate the trade-off between resilience and network structure preservation by applying resilience locally in areas of the network where it is needed most. We introduce a metric, Influence, to identify vulnerable regions in the network requiring resilience. We design a control law targeting local resilience at the vulnerable areas by improving the connectivity of robots within these areas so that each robot has at least 2F+1 vertex-disjoint communication paths between itself and the high-influence robot in the vulnerable area. We demonstrate the performance of our local resilience controller in simulation and in hardware by applying it to a coverage problem and comparing our results with an existing global resilience strategy. For the hardware experiments, we show that our control provides local resilience to vulnerable areas in the network while only requiring 9.90% and 15.14% deviations from the desired team formation compared to the global strategy.
Authored by Matthew Cavorsi, Stephanie Gil
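A minimal sketch (using networkx, not the authors' controller) of the connectivity condition described above: for a non-adjacent pair of nodes, the local vertex connectivity equals the number of vertex-disjoint paths between them (Menger's theorem), so the 2F+1 requirement can be checked directly on the communication graph. The graph and F value are illustrative.

```python
import networkx as nx

F = 1  # number of malicious robots tolerated in the vulnerable area

# Illustrative communication graph: nodes are robots, edges are comm links.
G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4), (1, 2)])
robot, high_influence = 0, 4   # robot 0 checks its paths to the high-influence robot 4

# Local vertex connectivity between the pair = number of vertex-disjoint paths.
k = nx.node_connectivity(G, robot, high_influence)
print(f"{k} vertex-disjoint paths;", "resilient" if k >= 2 * F + 1 else "add links")
```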
The exponential growth of IoT-type systems has led to a reconsideration of the field of database management systems in terms of storing and handling high-volume data. Recently, many real-time Database Management Systems (DBMSs) have been developed to address issues such as security, managing concurrent access to stored data, and optimizing data query performance. This paper studies methods that allow the temporal validity range to be reduced for common DBMSs. The primary purpose of IoT edge devices is to generate data and make it available for machine learning or statistical algorithms; this takes place within the Knowledge Discovery in Databases process. In order to visualize and obtain critical data mining results, all device-generated data must be made available as quickly as possible for selection, preprocessing, and data transformation. In this research we investigate whether IoT edge devices can be used with properly configured common DBMSs in order to access data quickly, instead of working with real-time DBMSs. We study what kinds of transactions are needed in large IoT ecosystems and analyze techniques for controlling concurrent access to common resources (stored data). For this purpose, we built a series of applications that simulate concurrent write operations to a common DBMS in order to investigate the performance of concurrent access to database resources. Another procedure tested with the developed applications is increasing the availability of data for users and data mining applications, which is achieved through field indexing.
Authored by Valentin Pupezescu, Marilena-Cătălina Pupezescu, Lucian-Andrei Perișoară
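A small sketch (using SQLite from the Python standard library rather than the DBMSs evaluated in the paper) of the two ideas tested by the applications described above: several writer threads insert sensor readings concurrently, and a field index is created to speed up subsequent selection for data-mining queries. Table layout, thread count, and workload size are illustrative.

```python
import sqlite3
import threading

DB = "iot_edge.db"

def setup():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS readings (device_id INTEGER, ts REAL, value REAL)")
    # Field indexing: speeds up per-device selection for later data-mining steps.
    con.execute("CREATE INDEX IF NOT EXISTS idx_device ON readings (device_id, ts)")
    con.commit()
    con.close()

def writer(device_id, n=100):
    # Each thread uses its own connection; the DBMS serializes concurrent writes.
    con = sqlite3.connect(DB, timeout=10)
    for i in range(n):
        con.execute("INSERT INTO readings VALUES (?, ?, ?)", (device_id, float(i), i * 0.1))
        con.commit()
    con.close()

setup()
threads = [threading.Thread(target=writer, args=(d,)) for d in range(4)]
for t in threads: t.start()
for t in threads: t.join()

con = sqlite3.connect(DB)
print(con.execute("SELECT COUNT(*) FROM readings WHERE device_id = 2").fetchone())
```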
False data injection cyber-attack detection models for smart grid operation have been explored extensively in recent years, considering both analytical physics-based and data-driven solutions. Recently, a hybrid data-driven, physics-based model framework for monitoring the smart grid was developed; however, the framework had not yet been implemented in a real-time environment. In this paper, the framework of the hybrid model is developed within a real-time simulation environment. The OPAL-RT real-time simulator is used to enable Hardware-in-the-Loop testing of the framework, and the IEEE 9-bus system is used as the test grid for gaining insight. The process of building the framework and the challenges faced during development are presented, and the performance of the framework is investigated under various false data injection attacks.
Authored by Valeria Vega-Martinez, Austin Cooper, Brandon Vera, Nader Aljohani, Arturo Bretas
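For context only, the snippet below shows the textbook physics-based building block used in false-data-injection detection, a weighted least-squares state estimate followed by a chi-square test on the measurement residual, rather than the hybrid framework implemented on the OPAL-RT testbed. The measurement model, noise levels, and injected value are synthetic.

```python
import numpy as np
from scipy.stats import chi2

# Illustrative linearized measurement model z = H x + e for a small grid.
rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))          # 8 measurements, 3 state variables
R = np.diag(np.full(8, 0.01))        # measurement error covariance
x_true = np.array([1.0, 0.98, 1.02])
z = H @ x_true + rng.normal(scale=0.1, size=8)

z[0] += 1.5                          # injected false data on one measurement

# Weighted least-squares state estimation.
W = np.linalg.inv(R)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

# Chi-square bad-data test on the residual.
r = z - H @ x_hat
J = float(r.T @ W @ r)
threshold = chi2.ppf(0.99, df=H.shape[0] - H.shape[1])
print("attack suspected" if J > threshold else "measurements consistent")
```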
The excess buffering of packets in network elements, also referred to as bufferbloat, results in high latency. Considering the requirements of traffic generated by video conferencing systems like Zoom, cloud-rendered gaming platforms like Google Stadia, or even video streaming services such as Netflix, Amazon Prime, and YouTube, the timeliness of such traffic is important. Ensuring low latency to IP flows with high throughput calls for the application of Active Queue Management (AQM) schemes. This introduces yet another problem, as the co-existence of scalable and classic congestion controls leads to the starvation of classic TCP flows. Technologies such as Low Latency Low Loss Scalable Throughput (L4S) and the corresponding dual-queue coupled AQM, DualPI2, provide a robust solution to these problems. However, their deployment on hardware targets such as programmable switches is quite challenging due to the complexity of the algorithms and the architectural constraints of switching ASICs. In this study, we provide proof-of-concept implementations of two AQMs that enable the co-existence of scalable and traditional TCP traffic, namely DualPI2 and the preceding single-queue PI2 AQM, on an Intel Tofino switching ASIC. Given the fixed operation of the switch's traffic manager, we investigate to what extent it is possible to implement a fully RFC-compliant version of the two AQMs on the Tofino ASIC. The study shows that an appropriate split between control and data plane operations is required, and that the fixed functionality of the traffic manager can be exploited to support such solutions.
Authored by Gergő Gombos, Maurice Mouw, Sándor Laki, Chrysa Papagianni, Koen De Schepper
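To illustrate the kind of AQM logic being mapped onto the switch, here is a simplified PI2/DualPI2 probability-update sketch in Python (parameter values are illustrative and not taken from the paper or the RFC): a linear PI controller tracks queueing delay, the base probability is applied to scalable (L4S) traffic via a coupling factor, and its square is used for classic TCP traffic.

```python
# Simplified PI2 / DualPI2 probability update (illustrative parameters only).
TARGET = 0.015             # target queueing delay in seconds
ALPHA, BETA = 0.16, 3.2    # PI gains per update interval
K = 2.0                    # coupling factor between L4S marking and classic dropping

p_base = 0.0               # base probability maintained by the PI controller
prev_delay = 0.0

def on_update(qdelay):
    """Called every update interval with the current queueing-delay estimate."""
    global p_base, prev_delay
    p_base += ALPHA * (qdelay - TARGET) + BETA * (qdelay - prev_delay)
    p_base = min(max(p_base, 0.0), 1.0)
    prev_delay = qdelay
    mark_l4s = min(K * p_base, 1.0)   # ECN-marking probability for scalable flows
    drop_classic = p_base ** 2        # squared probability for classic TCP flows
    return mark_l4s, drop_classic

# Example: a few update intervals with growing delay.
for d in (0.005, 0.020, 0.030):
    print(on_update(d))
```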
The demand for increased use of flexibility in power systems is driven by changing grid utilization. Largely untapped flexibility potential can be exploited through novel flexibility markets. Different approaches for these markets are being developed and vary in their handling of transaction schemes and the relations between participating entities. This paper delivers the conceptual development of a holistic system architecture for the realization of an interregional flexibility market, which targets market-based congestion management in the transmission and distribution system through trading between system operators and flexibility providers. The framework combines a market mechanism with the required supplements, such as appropriate control algorithms for emergency situations, cyber-physical system monitoring, and cyber-security assessment. The resulting methods are implemented and verified in a remote power-hardware-in-the-loop setup coupling a real-world low-voltage grid with a geographically distant real-time simulation, using state-of-the-art control system applications with an integration of the aforementioned architecture components.
Authored by Oliver Kraft, Oliver Pohl, Ulf Häger, Kai Heussen, Nils Müller, Zeeshan Afzal, Mathias Ekstedt, Hossein Farahmand, Dmytro Ivanko, Ankit Singh, Sasiphong Leksawat, Andreas Kubis
The Internet of Things is a developing technology that converts physical objects into virtual objects connected to the internet using wired and wireless network architectures. The use of cross-layer techniques in the Internet of Things is primarily driven by the high heterogeneity of hardware and software capabilities. Although the traditional layered architecture has been effective for a while, cross-layer protocols have the potential to greatly improve a number of wireless network characteristics, including bandwidth and energy usage. In addition, security is one of the main concerns with the Internet of Things, and machine learning (ML) techniques are considered the most cutting-edge and viable approach, which has led to a plethora of new research directions for tackling IoT's growing security issues. In the proposed study, a number of cross-layer approaches based on machine learning techniques, offered in the past to address issues and challenges brought about by the heterogeneity of IoT, are examined in depth. Additionally, the main issues are discussed and analyzed, including those related to scalability, interoperability, security, privacy, mobility, and energy utilization.
Authored by K. Saranya, Dr. A. Valarmathi
In the deep nano-scale regime, reliability has emerged as one of the major design issues for high-density integrated systems. Among others, key reliability-related issues are soft errors, high temperature, and aging effects (e.g., NBTI, Negative Bias Temperature Instability), which jeopardize correct application execution. A tremendous amount of research effort has been invested at individual system layers. Moreover, in the era of growing cyber-security threats, modern computing systems experience a wide range of security threats at different layers of the software and hardware stacks. However, considering the escalating reliability and security costs, designing a highly reliable and secure system requires engaging multiple system layers (i.e., both hardware and software) to achieve cost-effective robustness. This talk provides an overview of important reliability issues, prominent state-of-the-art techniques, and various hardware-software collaborative reliability modeling and optimization techniques developed at our lab, with a focus on recent work on ML-based reliability techniques. Afterwards, the talk discusses how advanced ML techniques can be leveraged to devise new types of hardware security attacks, for instance on logic-locked circuits. Towards the end of the talk, I will also give a quick pitch on the reliability and security challenges of embedded machine learning (ML) on resource- and energy-constrained devices subjected to unpredictable and harsh scenarios.
Authored by Muhammad Shafique
Embedded devices are becoming increasingly pervasive in safety-critical systems of the emerging cyber-physical world. While trusted execution environments (TEEs), such as ARM TrustZone, have been widely deployed in mobile platforms, little attention has been given to deployment on real-time cyber-physical systems, which present a different set of challenges compared to mobile applications. For safety-critical cyber-physical systems, such as autonomous drones or automobiles, the current TEE deployment paradigm, which focuses only on confidentiality and integrity, is insufficient. Computation in these systems also needs to be completed in a timely manner (e.g., before the car hits a pedestrian), putting a much stronger emphasis on availability. To bridge this gap, we present RT-TEE, a real-time trusted execution environment. There are three key research challenges. First, RT-TEE bootstraps the ability to ensure availability using a minimal set of hardware primitives on commodity embedded platforms. Second, to balance real-time performance and scheduler complexity, we designed a policy-based, event-driven hierarchical scheduler. Third, to mitigate the risks of having device drivers in the secure environment, we designed an I/O reference monitor that leverages software sandboxing and driver debloating to provide fine-grained access control on peripherals while minimizing the trusted computing base (TCB). We implemented prototypes on both ARMv8-A and ARMv8-M platforms. The system was tested on both synthetic tasks and real-life CPS applications: we evaluated a rover and a plane in simulation, and a quadcopter both in simulation and on a real drone.
Authored by Jinwen Wang, Ao Li, Haoran Li, Chenyang Lu, Ning Zhang
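As a language-agnostic illustration of the fine-grained peripheral access control performed by an I/O reference monitor of this kind (written in Python for brevity and not the RT-TEE implementation), each sandboxed driver is limited to an explicit allow-list of peripheral register ranges. The driver names and MMIO addresses below are hypothetical.

```python
# Hypothetical per-driver policy: allowed MMIO register ranges (start, end).
POLICY = {
    "gps_driver":   [(0x4000_0000, 0x4000_00FF)],
    "motor_driver": [(0x4001_0000, 0x4001_003F)],
}

def check_access(driver, addr, size):
    """Reference-monitor check: permit only accesses inside allowed ranges."""
    for start, end in POLICY.get(driver, []):
        if start <= addr and addr + size - 1 <= end:
            return True
    return False

assert check_access("gps_driver", 0x4000_0010, 4)        # inside its own range
assert not check_access("gps_driver", 0x4001_0000, 4)    # another peripheral
assert not check_access("unknown_driver", 0x4000_0000, 4)  # no policy -> deny
```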
Supply chain cyberattacks that exploit insecure third-party software are a growing concern for the security of the electric power grid. These attacks seek to deploy malicious software in grid control devices during the fabrication, shipment, installation, and maintenance stages, or as part of routine software updates. Malicious software on grid control devices may inject bad data or execute bad commands, which can cause blackouts and damage power equipment. This paper describes an experimental setup to simulate the software update process of a commercial power relay as part of a hardware-in-the-loop simulation for grid supply chain cyber-security assessment. The laboratory setup was successfully utilized to study three supply chain cyber-security use cases.
Authored by Joseph Keller, Shuva Paul, Santiago Grijalva, Vincent Mooney
A Cyber-Physical System (CPS) combines hardware and software components to perform real-time services. Maintaining the system's reliability is critical to the continuous delivery of these services. However, the CPS running environment is full of uncertainties and can easily lead to performance degradation. As a result, a recovery technique is needed to achieve resilience in the system, keeping in mind that this technique should be as green as possible. This early doctorate proposal suggests a game-theoretic solution to achieve resilience and greenness in CPS. Game theory is known for its fast performance in decision-making, helping the system choose what maximizes its payoffs. The proposed game model is described over a real-life collaborative artificial intelligence system (CAIS) that involves robots working with humans to achieve a common goal, and it shows how the expected results will achieve the resilience of CAIS with a minimized CO2 footprint.
Authored by Diaeddin Rimawi
Model compression is one of the most preferred techniques for efficiently deploying deep neural networks (DNNs) on resource-constrained Internet of Things (IoT) platforms. However, a simply compressed model is often vulnerable to adversarial attacks, leading to a conflict between robustness and efficiency, especially for IoT devices exposed to complex real-world scenarios. We, for the first time, address this problem by developing a novel framework dubbed Magical-Decomposition to simultaneously enhance both robustness and efficiency for hardware. By leveraging a hardware-friendly model compression method called singular value decomposition, the defending algorithm can be supported by most existing DNN hardware accelerators. Going a step further, by using a recently developed DNN interpretation tool, the underlying mechanism by which adversarial accuracy can be increased in the compressed model is clearly highlighted. Ablation studies and extensive experiments under various attacks/models/datasets consistently validate the effectiveness and scalability of the proposed framework.
Authored by Xin Cheng, Mei-Qi Wang, Yu-Bo Shi, Jun Lin, Zhong-Feng Wang
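The hardware-friendly compression primitive named above can be sketched in a few lines of numpy (illustrative rank and layer sizes only, not the Magical-Decomposition framework): a dense layer's weight matrix is replaced by a rank-r factorization, so one large matrix multiply becomes two much smaller ones.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256))      # dense layer weight matrix
r = 32                               # retained rank (the compression knob)

# Singular value decomposition and rank-r truncation: W ~ A @ B.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                 # 512 x r
B = Vt[:r, :]                        # r x 256

# The compressed layer computes x -> (x @ B.T) @ A.T with far fewer weights.
x = rng.normal(size=(1, 256))
y_full = x @ W.T
y_low = (x @ B.T) @ A.T

params_full = W.size
params_low = A.size + B.size
print(f"params: {params_full} -> {params_low}, "
      f"relative error {np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full):.3f}")
```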
Software Defined Networking (SDN) is an emerging technology that provides flexibility in network communication. SDN separates the data forwarding plane from the control plane, which includes the controller, resulting in a centralized network. Due to centralized control, the network becomes more dynamic, and resources are managed efficiently and cost-effectively. Network virtualization is the transformation of a network from hardware-based to software-based, and Network Function Virtualization (NFV) permits the implementation, adaptable provisioning, and management of network functions virtually. Virtualizing SDN networks allows the strengths of SDN and NFV to be combined, and has therefore attracted notable research attention over the last few years. The SDN platform also introduces network security challenges: the network becomes vulnerable when a large number of requests are encapsulated inside packet_in messages and passed from the switch to the controller for instructions because they are not matched by existing flow entry rules, which exhausts resources, creates a bottleneck for the entire network, and leads to a DDoS attack. Quick mitigation methods are necessary to prevent the switches from breaking down. To address this problem, researchers have developed mechanisms that detect and mitigate flood attacks. This paper provides a comprehensive survey of frameworks used for detecting and subsequently mitigating flood DDoS attacks in Software Defined Networks (SDN) with the help of NFV.
Authored by Namita Ashodia, Kishan Makadiya
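A minimal controller-side sketch (generic Python, not tied to a specific SDN controller or to any framework from the survey) of the detection idea described above: count packet_in messages per source over a sliding window and flag sources whose rate exceeds a threshold, so that a blocking flow rule can then be installed. The threshold and window length are illustrative.

```python
import time
from collections import defaultdict, deque

WINDOW = 5.0        # sliding window in seconds
THRESHOLD = 100     # packet_in messages per source per window

arrivals = defaultdict(deque)   # source -> timestamps of recent packet_in events

def on_packet_in(src):
    """Called for every packet_in; returns True if src looks like a flood source."""
    now = time.monotonic()
    q = arrivals[src]
    q.append(now)
    while q and now - q[0] > WINDOW:      # drop events outside the window
        q.popleft()
    if len(q) > THRESHOLD:
        # Mitigation hook: e.g. install a drop rule for src on the ingress switch.
        return True
    return False

# Example: a burst from one host trips the detector.
for _ in range(150):
    flagged = on_packet_in("10.0.0.7")
print("flood detected" if flagged else "normal")
```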
This research investigates efficient architectures for the implementation of the CRYSTALS-Dilithium post-quantum digital signature scheme on reconfigurable hardware, in terms of speed, memory usage, power consumption and resource utilisation. Post-quantum digital signature schemes involve a significant computational effort, making efficient hardware accelerators an important contributor to the future adoption of such schemes. This is work in progress, comprising the establishment of a comprehensive test environment for operational profiling, and the investigation of novel architectures to achieve optimal performance.
Authored by Donal Campbell, Ciara Rafferty, Ayesha Khalid, Maire O'Neill
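For orientation, the core arithmetic that such hardware accelerators speed up is polynomial multiplication in Z_q[x]/(x^256 + 1) with Dilithium's modulus q = 8380417. The sketch below is a plain schoolbook reference in Python; deployed implementations and the hardware designs use the number-theoretic transform (NTT) instead, which is exactly where the architectural trade-offs studied here arise.

```python
Q = 8380417   # Dilithium modulus
N = 256       # polynomial degree

def poly_mul(a, b):
    """Schoolbook multiplication in Z_q[x] / (x^N + 1) (negacyclic convolution)."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                c[k] = (c[k] + a[i] * b[j]) % Q
            else:
                # x^N = -1, so coefficients that wrap around pick up a sign flip.
                c[k - N] = (c[k - N] - a[i] * b[j]) % Q
    return c

# Tiny self-check: x * x^(N-1) = x^N = -1 mod (x^N + 1).
a = [0] * N; a[1] = 1
b = [0] * N; b[N - 1] = 1
assert poly_mul(a, b)[0] == Q - 1
```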