Markov models of the reliability of fault-tolerant computer systems are proposed that take into account two stages of recovery of redundant memory devices. At the first stage, physical recovery of the memory devices is carried out; at the second, informational recovery takes place, which consists of entering the data necessary to perform the required functions. Memory redundancy is introduced to increase the system's resilience to the loss of unique data generated during its operation. Data replication is implemented across all functional memory devices, and information recovery is carried out using the replicas of data stored in working memory devices. The model takes into account the criticality of the system to the timeliness of real-time calculations and to the impossibility of restoring information after multiple memory failures that lead to the loss of all stored replicas of unique data. The system readiness (availability) coefficient and the probability of its transition to a non-recoverable state are determined. The readiness of the system for the timely execution of requests is evaluated, taking into account how the computer's performance is apportioned between servicing requests and re-entering information into memory after its physical recovery.
Authored by Vladimir Bogatyrev, Stanislav Bogatyrev, Anatoly Bogatyrev
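The two-stage recovery idea described above can be sketched as a small continuous-time Markov chain. This is a minimal illustration for a single device, not the authors' full model; the rates `lam`, `mu1`, and `mu2` are assumed values:

```python
# Minimal sketch (assumed rates, not the paper's model): a three-state
# continuous-time Markov chain for one memory device with two-stage recovery.
#   state 0: operational
#   state 1: failed, undergoing physical repair         (rate mu1 -> state 2)
#   state 2: physically repaired, reloading replicas    (rate mu2 -> state 0)
# Failures occur at rate lam (state 0 -> state 1).

def availability(lam: float, mu1: float, mu2: float) -> float:
    """Steady-state probability of the operational state.

    Balance equations give pi1 = pi0*lam/mu1 and pi2 = pi0*lam/mu2,
    so availability is 1 / (1 + lam/mu1 + lam/mu2).
    """
    return 1.0 / (1.0 + lam / mu1 + lam / mu2)

# Example: failures once per 1000 h, physical repair 10 h on average,
# informational recovery (replica reload) 2 h on average.
A = availability(lam=1 / 1000, mu1=1 / 10, mu2=1 / 2)
print(round(A, 4))  # slower informational recovery lowers availability
```

The closed form makes the paper's point visible: the informational stage contributes its own downtime term `lam/mu2`, so ignoring it overstates availability.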
In recent times, the Network-on-Chip (NoC) has become the state of the art for communication in Multiprocessor Systems-on-Chip due to scalability issues in this area. However, these systems are exposed to security threats such as the extraction of secret information, so the need for secure communication arises in such environments. In this work, we present a communication protocol based on authenticated encryption with recovery mechanisms to establish secure end-to-end communication between NoC nodes. In addition, a selected key agreement approach required for secure communication is implemented. The security functionality is located in the network adapter of each processing element. If data is tampered with or deleted during transmission, recovery mechanisms ensure that the corrupted data is retransmitted by the network adapter without intervention from the processing element. We simulated and implemented the complete system in SystemC TLM using the NoC simulation platform PANACA. Our results show that we can maintain a high rate of correctly transmitted information even when attackers have infiltrated the NoC system.
Authored by Julian Haase, Sebastian Jaster, Elke Franz, Diana Göhringer
A critical property of distributed data processing systems is their high reliability. A practical solution to this problem is to place archive copies on magnetic media at the nodes of the system. These archives are used to restore data destroyed during the processing of requests to that data. The paper shows the impact of using such archives on the reliability indicators of distributed systems.
Authored by Sergey Somov, Larisa Bogatyryova
An approach to substantiating the choice of a maintenance discipline for a redundant computer system, with the possible use of node resources preserved after failures, is considered. The choice is aimed at improving the reliability and profitability of the system, taking into account the operational costs of restoring nodes. Reliability models are proposed for systems with service disciplines that provide either immediate recovery of nodes after failures or degradation of the system using the node resources preserved in it after failures. The models take into account whether the loss of information accumulated during the operation of the system is admissible or inadmissible. The operating costs, including the costs of restoring nodes, are determined for the system maintenance disciplines under consideration.
Authored by Vladimir Bogatyrev, Stanislav Bogatyrev, Anatoly Bogatyrev
At present, the black-start mode of a large power grid is mostly limited to relying on black-start power supplies inside the system, or to a recovery mode that treats the transmission power of tie lines between systems as the black-start power supply. The starting power considered in a large power outage is therefore incomplete, and it is difficult to exploit the respective advantages of internal and external power sources. In this paper, a method for the coordinated black-start of a large power grid's internal and external power sources is proposed by combining the two modes. Firstly, a black-start capability evaluation system is built to screen out the internal black-start power supplies, and the external black-start power supplies are determined by analyzing the connection relationships between systems. Then, based on the specific implementation principles, a black-start power supply coordination strategy is formulated using the Dijkstra shortest-path algorithm. Based on the condensation idea, a black-start zoning and path optimization method applicable to this strategy is proposed. Finally, black-start security verification and corresponding control measures are adopted to obtain a scheme of black-start cooperation between internal and external power sources in a large power grid. The method is applied to a real large power grid and compared with the conventional restoration strategy to verify its feasibility and efficiency.
Authored by Liang Guili, Zhang Dongying, Wang Wei, Gong Cheng, Cui Duo, Tian Yichun, Wang Yan
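The shortest-path step of the strategy above can be illustrated with a plain Dijkstra search. This is a hypothetical sketch: the node names and edge weights (standing in for a restoration cost such as switching time) are made up, not taken from the paper's grid:

```python
import heapq

# Hypothetical sketch: find the cheapest restoration path from a
# black-start source to a unit to be cranked. Edge weights stand in
# for a restoration cost; the tiny grid below is illustrative only.

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, weight), ...]} -> (cost, path)."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

grid = {
    "hydro": [("bus1", 2.0), ("bus2", 5.0)],
    "bus1": [("bus2", 1.0), ("thermal", 4.0)],
    "bus2": [("thermal", 1.5)],
}
cost, path = dijkstra(grid, "hydro", "thermal")
print(cost, path)  # 4.5 ['hydro', 'bus1', 'bus2', 'thermal']
```

In the paper's setting the same search would run over the condensed zone graph, with weights reflecting the chosen implementation principles rather than raw distances.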
The Time-Triggered Architecture (TTA) presents a blueprint for building safe and real-time constrained distributed systems, based on a set of orthogonal concepts that make extensive use of a globally consistent notion of time and a priori knowledge of events. Although the TTA tolerates arbitrary failures of any of its nodes by architectural means (active node replication, a membership service, and bus guardians), the design of these means considers only accidental faults. However, distributed safety- and real-time-critical systems have been evolving into more open and interconnected systems, operating autonomously for prolonged times and interfacing with other, possibly non-real-time, systems. Therefore, the existence of vulnerabilities that adversaries may exploit to compromise system safety cannot be ruled out. In this paper, we discuss potential targeted attacks capable of bypassing TTA's fault-tolerance mechanisms and demonstrate how two well-known recovery techniques, proactive and reactive rejuvenation, can be incorporated into the TTA to reduce the window of vulnerability for attacks without introducing extensive and costly changes.
Authored by Mohammad Alkoudsi, Gerhard Fohler, Marcus Völp
Achieving agile and resilient autonomous capabilities for cyber defense requires moving past indicators and situational awareness into automated response and recovery capabilities. The objective of the AlphaSOC project is to use state-of-the-art sequential decision-making methods to automatically investigate and mitigate attacks on cyber-physical systems (CPS). To demonstrate this, we developed a simulation environment that models the distributed navigation control system and physics of a large ship with two rudders and thrusters for propulsion. Defending this control network requires processing large volumes of cyber and physical signals to coordinate defensive actions over many devices with minimal disruption to nominal operation. We are developing a Reinforcement Learning (RL)-based approach to solve the resulting sequential decision-making problem, which has large observation and action spaces.
Authored by Ryan Silva, Cameron Hickert, Nicolas Sarfaraz, Jeff Brush, Josh Silbermann, Tamim Sookoor
The parameters of a transmission line vary with environmental and operating conditions, so this paper proposes a digital twin-based transmission line model. Based on synchrophasor measurements from phasor measurement units, the proposed model uses maximum likelihood estimation (MLE) to reduce the uncertainty between the digital twin and its physical counterpart. A case study is conducted to show the influence of measurement uncertainty on the digital twin of the transmission line and to analyze the effectiveness of the MLE method. The results show that the proposed digital twin-based model is effective in reducing the influence of measurement uncertainty and improving fault location accuracy.
Authored by Han Zhang, Xiaoxiao Luo, Yongfu Li, Wenxia Sima, Ming Yang
This paper introduces IronMask, a new versatile verification tool for masking security. IronMask is the first to offer the verification of standard simulation-based security notions in the probing model as well as recent composition and expandability notions in the random probing model. It supports any masking gadgets with linear randomness (e.g. addition, copy and refresh gadgets) as well as quadratic gadgets (e.g. multiplication gadgets) that might include non-linear randomness (e.g. by refreshing their inputs), while providing complete verification results for both types of gadgets. We achieve this complete verifiability by introducing a new algebraic characterization for such quadratic gadgets and exhibiting a complete method to determine the sets of input shares which are necessary and sufficient to perform a perfect simulation of any set of probes. We report various benchmarks which show that IronMask is competitive with state-of-the-art verification tools in the probing model (maskVerif, scVerif, SILVER, matverif). IronMask is also several orders of magnitude faster than VRAPS (the only previous tool verifying random probing composability and expandability) as well as SILVER (the only previous tool providing complete verification for quadratic gadgets with non-linear randomness). Thanks to this completeness and increased performance, we obtain better bounds for the tolerated leakage probability of state-of-the-art random probing secure compilers.
Authored by Sonia Belaïd, Darius Mercadier, Matthieu Rivain, Abdul Taleb
With the considerable success achieved by modern fuzzing infrastructures, more crashes are produced than ever before. To dig out the root causes, rapid and faithful crash triage for large numbers of crashes has always been attractive. However, hindered by the practical difficulty of reducing analysis imprecision without compromising efficiency, this goal has not been accomplished. In this paper, we present an end-to-end crash triage solution, Default, for accurately and quickly pinpointing unique root causes from large numbers of crashes. In particular, we quantify the “crash relevance” of program entities based on mutual information, which serves as the criterion for unique crash bucketing and allows us to bucket massive crashes without pre-analyzing their root causes. The quantification of “crash relevance” is also used to shorten long crashing traces. On this basis, we use the interpretability of neural networks to precisely pinpoint the root cause in the shortened traces by evaluating each basic block's impact on the crash label. Evaluated on 20 programs with 22,216 crashes in total, Default demonstrates remarkable accuracy and performance, well beyond what state-of-the-art techniques can achieve: crash deduplication was achieved at a processing speed of 0.017 seconds per crashing trace, without missing any unique bugs. After that, it identifies the root causes of 43 unique crashes with no false negatives and an average false positive rate of 9.2%.
Authored by Xing Zhang, Jiongyi Chen, Chao Feng, Ruilin Li, Wenrui Diao, Kehuan Zhang, Jing Lei, Chaojing Tang
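The mutual-information criterion can be illustrated on toy data. The sketch below (traces and labels are invented, and a trace is reduced to the set of basic blocks it covers) scores each block by the mutual information between "block appears in the trace" and the crash label:

```python
from math import log2
from collections import Counter

# Illustrative sketch (toy traces, not the paper's system): score each
# basic block by the mutual information between the indicator
# "block appears in the trace" and the crash label.

def mutual_information(traces, labels, block):
    n = len(traces)
    joint = Counter((block in t, lab) for t, lab in zip(traces, labels))
    mi = 0.0
    for (hit, lab), c in joint.items():
        p_xy = c / n
        p_x = sum(v for (h, _), v in joint.items() if h == hit) / n
        p_y = sum(v for (_, l), v in joint.items() if l == lab) / n
        mi += p_xy * log2(p_xy / (p_x * p_y))
    return mi

traces = [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"a", "d"}]
labels = ["crash", "crash", "ok", "ok"]      # one label per trace

scores = {blk: mutual_information(traces, labels, blk)
          for blk in {"a", "b", "c", "d"}}
best = max(scores, key=scores.get)
print(best)  # "c": present in exactly the crashing traces -> 1 bit of MI
```

Block "a" appears in every trace and block "b" is uncorrelated with the label, so both score ~0; the criterion naturally suppresses blocks that carry no crash-relevant signal, which is what makes bucketing possible without pre-analyzing root causes.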
Port knocking provides an added layer of security on top of the existing security systems of a network. A predefined port knocking sequence is used to open ports that are closed by the firewall by default. The server treats a request as valid if the knocking sequence is correct and opens the desired port. However, this sequence poses a security threat due to its static nature. This paper presents a port knock sequence-based communication protocol in Software-Defined Networking (SDN). SDN provides better manageability by separating the control plane from the data plane, but at the cost of communication overhead between the switches and the controller. To avoid this overhead, we use the port knocking concept in the data plane without any involvement of the SDN controller. This study proposes three port knock sequence-based protocols (static, partial dynamic, and dynamic) in the data plane. To test the protocols in an SDN environment, a P4 implementation of the underlying model is done in the BMV2 (behavioral model version 2) virtual switch. To check the security of the protocols, an informal security analysis is performed, which shows that the proposed protocols are secure enough to be implemented in the SDN data plane.
Authored by Isha Pali, Ruhul Amin
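The static variant can be sketched as a per-client state machine that opens a port only after the secret knocks arrive in order. This is a toy illustration; the knock sequence and port numbers are invented, and a real deployment (as in the paper) would implement the state machine in P4 match-action tables rather than Python:

```python
# Toy sketch of static port knocking: track each client's progress
# through the secret sequence; a wrong knock resets the progress.

KNOCK_SEQUENCE = (7000, 8000, 9000)   # illustrative closed ports, in order

class KnockTracker:
    def __init__(self, sequence=KNOCK_SEQUENCE):
        self.sequence = sequence
        self.progress = {}            # client -> index of next expected knock

    def knock(self, client: str, port: int) -> bool:
        """Record one knock; return True once the client may connect."""
        idx = self.progress.get(client, 0)
        if port == self.sequence[idx]:
            idx += 1
        else:
            # mismatch: restart, counting this knock if it begins the sequence
            idx = 1 if port == self.sequence[0] else 0
        if idx == len(self.sequence):
            self.progress[client] = 0  # sequence consumed; must knock again
            return True
        self.progress[client] = idx
        return False

tracker = KnockTracker()
attempts = [7000, 8000, 22, 7000, 8000, 9000]
opened = [tracker.knock("10.0.0.5", p) for p in attempts]
print(opened)  # only the completed in-order sequence opens the port
```

The stray probe to port 22 resets the client's progress, which is exactly the property that makes scanning ineffective but also why a static sequence, once sniffed, is replayable (motivating the paper's dynamic variants).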
Dynamic Host Configuration Protocol (DHCP) is a protocol that provides IP addresses and network configuration parameters to the hosts present in a network. It is deployed in small, medium, and large organizations and removes the burden from the network administrator of manually assigning network parameters to every host in order to establish communication. Every vendor that incorporates the DHCP service in its devices follows the workflow defined in the corresponding Request for Comments (RFC). DHCP starvation and DHCP flooding are Denial of Service (DoS) attacks that prevent the provision of IP addresses by DHCP. Port security and DHCP snooping are built-in security features that prevent these DoS attacks. However, novel techniques have been devised to bypass these security features using the ARP and ICMP protocols to perform the attack. The purpose of this research is to analyze the implementation of DHCP in multiple devices to verify the involvement of both ARP and ICMP in the address acquisition process of DHCP as per the RFC, and to validate the results of prior research, which assumes that ARP or ICMP is used by default in all devices.
Authored by Shameel Syed, Faheem Khuhawar, Shahnawaz Talpur, Aftab Memon, Miquel-Angel Luque-Nieto, Sanam Narejo
An Intrusion Detection System (IDS) is an application that detects intrusions in a network. An IDS aims to detect malicious activities and protect computer networks from unknown users, called attackers. Network security is one of the significant tasks of providing secure data transfer. Virtualization of networks becomes more complex with IoT technology. Deep Learning (DL) is widely used in many networks to detect complex patterns and is a very suitable approach for detecting malicious nodes or attacks. Software-Defined Networking (SDN) is the default virtualization approach for computer networks. Attackers keep developing new techniques to attack networks, so new protocols are required to prevent these attacks. In this paper, a unique deep intrusion detection approach (UDIDA) is developed to detect attacks in SDN. Performance results show that the proposed approach achieves higher accuracy than existing approaches.
Authored by Vamsi Krishna, Venkata Matta
Nowadays, in this COVID era, working from home is often preferred to working from the office. Because of this, the need for firewalls has been increasing day by day: every organization uses a firewall to secure its network and creates VPN servers to allow its employees to work from home, so the security of the firewall plays a crucial role. In this paper, we compare the two most popular open-source firewalls, pfSense and OPNsense. We examine the security they provide by default, without any other attachment. To do this, we performed four different attacks on the firewalls and compared the results. We observed that both provide similar security, though pfSense has a slight edge over OPNsense when an attacker performs a brute-force attack.
Authored by Harsh Kiratsata, Deep Raval, Payal Viras, Punit Lalwani, Himanshu Patel, Panchal D.
In recent years, the epidemic of speculative side channels significantly increases the difficulty in enforcing domain isolation boundaries in a virtualized cloud environment. Although mitigations exist, the approach taken by the industry is neither a long-term nor a scalable solution, as we target each vulnerability with specific mitigations that add up to substantial performance penalties. We propose a different approach to secret isolation: guaranteeing that the hypervisor is Secret-Free (SF). A Secret-Free design partitions memory into secrets and non-secrets and reconstructs hypervisor isolation. It enforces that all domains have a minimal and secret-free view of the address space. In contrast to state-of-the-art, a Secret-Free hypervisor does not identify secrets to be hidden, but instead identifies non-secrets that can be shared, and only grants access necessary for the current operation, an allow-list approach. SF designs function with existing hardware and do not exhibit noticeable performance penalties in production workloads versus the unmitigated baseline, and outperform state-of-the-art techniques by allowing speculative execution where secrets are invisible. We implement SF in Xen (a Type-I hypervisor) to demonstrate that the design applies well to a commercial hypervisor. Evaluation shows performance comparable to baseline and up to 37% improvement in certain hypervisor paths compared with Xen default mitigations. Further, we demonstrate Secret-Free is a generic kernel isolation infrastructure for a variety of systems, not limited to Type-I hypervisors. We apply the same model in Hyper-V (Type-I), bhyve (Type-II) and FreeBSD (UNIX kernel) to evaluate its applicability and effectiveness. The successful implementations on these systems prove the generality of SF, and reveal the specific adaptations and optimizations required for each type of kernel.
Authored by Hongyan Xia, David Zhang, Wei Liu, Istvan Haller, Bruce Sherwin, David Chisnall
Recently, research on AI-based network intrusion detection has been actively conducted. In previous studies, machine learning models such as the SVM (Support Vector Machine) and RF (Random Forest) showed consistently high performance, whereas the NB (Naïve Bayes) model showed varying performance with large deviations. In this paper, after analyzing the causes of the NB model's varying performance reported in several studies, we measured the performance of the Gaussian NB model according to the smoothing factor, which is closely related to these causes. Furthermore, we compared the performance of the Gaussian NB model with that of the other models as a zero-day attack detection system. In our experiments, the accuracy was 38.80% and 87.99% when the smoothing factor was 0 and the default value, respectively, and the highest accuracy was 94.53% when the smoothing factor was 1e-01. In the experiments, we used only some types of attack data in the NSL-KDD dataset. The experiments show the applicability of the Gaussian NB model as a zero-day attack detection system in the future. In addition, it is clarified that the smoothing factor of the Gaussian NB model determines the shape of the Gaussian distribution that defines the likelihood.
Authored by Kijung Bong, Jonghyun Kim
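Why the smoothing factor matters can be shown with one feature. In the spirit of scikit-learn's `var_smoothing` parameter, a fraction of the largest feature variance is added to every per-class variance before computing likelihoods; the numbers below are illustrative, not the paper's data:

```python
from math import exp, pi, sqrt

# Rough sketch: Gaussian NB evaluates a Gaussian likelihood per feature.
# A feature that is (nearly) constant within a class has variance ~0, so
# the unsmoothed Gaussian collapses to a spike and assigns zero likelihood
# to any unseen value -- a common situation with network-record features.
# Adding smoothing * var_max to the variance keeps the density usable.

def gaussian_pdf(x, mean, var, smoothing, var_max):
    v = var + smoothing * var_max      # smoothed variance
    return exp(-(x - mean) ** 2 / (2 * v)) / sqrt(2 * pi * v)

# Feature that was essentially constant (variance ~1e-12) in one class:
mean, var, var_max = 0.0, 1e-12, 1.0

p_unsmoothed = gaussian_pdf(0.5, mean, var, smoothing=0.0, var_max=var_max)
p_smoothed = gaussian_pdf(0.5, mean, var, smoothing=1e-1, var_max=var_max)

print(p_unsmoothed, p_smoothed)  # spike underflows to 0; smoothed is usable
```

With smoothing 0 the class likelihood underflows to exactly zero and vetoes the class regardless of the other features, which is consistent with the sharp accuracy differences the paper reports across smoothing-factor settings.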
Recently exposed vulnerabilities reveal the necessity to improve the security of branch predictors. Branch predictors record history about the execution of different processes, and this information from different processes is stored in the same structure and thus accessible to each of them. This leaves attackers with opportunities for malicious training and malicious perception. Physical or logical isolation mechanisms, such as using dedicated tables or flushing during context switches, can provide security but incur non-trivial costs in space and/or execution time. Randomization mechanisms incur a performance cost in a different way: those with higher security add latency to the critical path of the pipeline, while the simpler alternatives leave vulnerabilities to more sophisticated attacks. This paper proposes HyBP, a practical and effective hybrid protection mechanism for building secure branch predictors. The design applies physical isolation and randomization to the right components to achieve the best of both worlds. We propose to protect the smaller tables with physical isolation based on the (thread, privilege) combination, and to protect the large tables with randomization. Surprisingly, the physical isolation also significantly enhances the security of the last-level tables by naturally filtering out accesses, reducing the information flow to these bigger tables. As a result, key changes can happen less frequently and be performed conveniently at context switches. Moreover, we propose a latency-hiding design for a strong cipher by precomputing the "code book" with a validated, cryptographically strong cipher. Overall, our design incurs a performance penalty of 0.5% compared to 5.1% for physical isolation under the default context-switching interval in Linux.
Authored by Lutan Zhao, Peinan Li, RUI HOU, Michael Huang, Xuehai Qian, Lixin Zhang, Dan Meng
Conpot is a low-interaction SCADA honeypot system that mimics a Siemens S7-200 proprietary device in default deployments. Honeypots operating with standard configurations can be easily detected by adversaries using scanning tools such as Shodan. This study focuses on the capabilities of the Conpot honeypot and how these capabilities can be used to lure attackers. In addition, the presented research establishes a framework that enables customized configuration, thereby enhancing the honeypot's functionality to achieve a high degree of deceptiveness and realism when presented to Shodan scanners. A comparison between the default and customized deployments is conducted to prove the modified deployments' effectiveness. The resulting annotations can help cybersecurity personnel better understand the effectiveness of the honeypot's artifacts and how they can be used deceptively. Lastly, it informs and educates cybersecurity audiences on how important it is to deploy honeypots with advanced deceptive configurations to bait cybercriminals.
Authored by Warren Cabral, Leslie Sikos, Craig Valli
DDoS is a major issue in network security and a threat to service providers that renders a service inaccessible for a period of time. The number of Internet of Things (IoT) devices has grown rapidly; nevertheless, security on these devices is frequently disregarded. Many detection methods exist, mostly focused on machine learning. However, the best method has not been defined yet. The aim of this paper is to find the optimal volumetric DDoS attack detection method, first by comparing different existing machine learning methods and, second, by building an adaptive lightweight heuristic model relying on a few traffic attributes and simple DDoS detection rules. With this new simple model, our goal is to decrease the classification time. Finally, we compare the machine learning methods with our new adaptive heuristic method, which shows promising results at both the accuracy and performance levels.
Authored by Rani Rahbani, Jawad Khalife
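The "few attributes, simple rules" idea can be sketched as a window-based classifier. The attributes (per-source packet rate and small-packet share), thresholds, and window contents below are assumptions for illustration, not the paper's tuned model:

```python
from collections import defaultdict

# Minimal sketch of a lightweight heuristic detector: within one time
# window, flag a source when its packet count and its share of tiny
# packets both exceed fixed thresholds. All numbers are illustrative.

PKT_RATE_THRESHOLD = 1000      # packets per window
SMALL_PKT_RATIO = 0.9          # fraction of packets under 100 bytes

def classify_window(packets):
    """packets: iterable of (src_ip, size_bytes) -> set of flagged IPs."""
    count = defaultdict(int)
    small = defaultdict(int)
    for src, size in packets:
        count[src] += 1
        if size < 100:
            small[src] += 1
    return {src for src, n in count.items()
            if n > PKT_RATE_THRESHOLD and small[src] / n > SMALL_PKT_RATIO}

window = [("10.0.0.9", 64)] * 5000 + [("192.168.1.2", 1200)] * 300
print(classify_window(window))  # only the high-rate, tiny-packet source
```

Classification here is a couple of counter updates per packet, which is the source of the speed advantage over ML inference; an adaptive variant would recalibrate the two thresholds from recent benign traffic instead of fixing them.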
Cyber attacks keep states, companies, and individuals at bay, draining precious resources including time, money, and reputation. Attackers thereby seem to have a first-mover advantage, leading to a dynamic defender-attacker game. Automated approaches that take advantage of Cyber Threat Intelligence on past attacks have the potential to empower security professionals and hence increase cyber security. Consistently, there has been a lot of research on automated approaches in cyber risk management, including work on predictive attack algorithms and threat hunting. Combining data on countermeasures from the “MITRE Detection, Denial, and Disruption Framework Empowering Network Defense” and adversarial data from “MITRE Adversarial Tactics, Techniques and Common Knowledge”, this work aims at developing methods that enable highly precise and efficient automatic incident response. We introduce Attack Incident Responder, a methodology that works with simple heuristics to find the most efficient sets of countermeasures for hypothesized attacks. By doing so, the work contributes to narrowing the attacker's first-mover advantage. Experimental results show promisingly high average precision in predicting effective defenses when using the methodology. In addition, we compare the proposed defense measures against a static set of defensive techniques offering robust security against observed attacks. Furthermore, we combine the approach of automated incident response with an approach for threat hunting, enabling full automation of security operation centers. By this means, we define a threshold for the precision of attack hypothesis generation that must be met for predictive defense algorithms to outperform the baseline. The calculated threshold can be used to evaluate attack hypothesis generation algorithms. The presented methodology for automated incident response may be a valuable support for information security professionals.
Last, the work elaborates on the combination of static base defense with adaptive incident response for generating a bio-inspired artificial immune system for computerized networks.
Authored by Florian Kaiser, Leon Andris, Tim Tennig, Jonas Iser, Marcus Wiens, Frank Schultmann
Recent studies have demonstrated the lack of robustness of image reconstruction networks to test-time evasion attacks, posing security risks and potential for misdiagnoses. In this paper, we evaluate for the first time how vulnerable such networks are to training-time poisoning attacks. In contrast to image classification, we find that trigger-embedded basic backdoor attacks on these models, executed using heuristics, lead to poor attack performance. Thus, it is non-trivial to generate backdoor attacks for image reconstruction. To tackle the problem, we propose a bi-level optimization (BLO)-based attack generation method and investigate its effectiveness on image reconstruction. We show that BLO-generated backdoor attacks can yield a significant improvement over the heuristics-based attack strategy.
Authored by Vardaan Taneja, Pin-Yu Chen, Yuguang Yao, Sijia Liu
With the rise of IoT applications, about 20.4 billion devices were expected to be online in 2020, a number projected to rise to 75 billion by 2025. The different sensors in IoT devices let them acquire and process data remotely and in real time, giving them the information needed to make smart decisions and manage IoT environments well. IoT security is one of the most important things to consider when developing, implementing, and deploying IoT platforms. The Internet of Things (IoT) allows people to communicate with, monitor, and control automated devices from afar. This paper shows how to use deep learning and machine learning to build an IDS that can be deployed on IoT platforms as a service. In the proposed method, a CNN maps the features, and a random forest classifies normal and attack classes. In the end, the proposed method noticeably improved all performance parameters; its average performance metrics went up by 5% to 6%.
Authored by Mehul Kapoor, Puneet Kaur
Python continues to be one of the most popular programming languages and has been used in many safety-critical fields such as medical treatment, autonomous driving systems, and data science. These fields put forward higher security requirements for Python ecosystems. However, existing studies on machine learning systems in Python concentrate on data security, model security, and model privacy, and simply assume the underlying Python virtual machines (PVMs) are secure and trustworthy. Unfortunately, whether such an assumption really holds is still unknown. This paper presents, to the best of our knowledge, the first and most comprehensive empirical study on the security of CPython, the official and most widely deployed Python virtual machine. To this end, we first designed and implemented a software prototype dubbed PVMSCAN, then used it to scan the source code of the latest CPython (version 3.10) and 10 other versions (3.0 to 3.9), which consist of 3,838,606 lines of source code. Empirical results give relevant findings and insights into the security of Python virtual machines, such as: 1) CPython virtual machines are still vulnerable; for example, PVMSCAN detected 239 vulnerabilities in version 3.10, including 55 null dereferences, 86 uninitialized variables, and 98 dead stores; 2) Python/C API-related vulnerabilities are very common and have become one of the most severe threats to the security of PVMs; for example, 70 Python/C API-related vulnerabilities are identified in CPython 3.10; 3) the overall quality of the code remained stable during the evolution of Python VMs, with vulnerabilities per thousand lines (VPTL) at 0.50; and 4) automatic vulnerability rectification is effective: 166 out of 239 (69.46%) vulnerabilities can be rectified by a simple yet effective syntax-directed heuristic. We have reported our empirical results to the developers of CPython; they have acknowledged us and already confirmed and fixed 2 bugs (as of this writing), while others are still being analyzed. This study not only demonstrates the effectiveness of our approach, but also highlights the need to improve the reliability of infrastructures like Python virtual machines by leveraging state-of-the-art security techniques and tools.
Authored by Xinrong Lin, Baojian Hua, Qiliang Fan
Coverage-guided testing has been shown to be an effective way to find bugs. If we model coverage-guided testing as a search problem (i.e., finding inputs that can cover more branches), then its efficiency mainly depends on two factors: (1) the accuracy of the search algorithm and (2) the number of inputs that can be evaluated per unit time. Therefore, improving the search throughput is an effective way to improve the performance of coverage-guided testing. In this work, we present a novel design to improve the search throughput: evaluating newly generated inputs with JIT-compiled path constraints. This approach allows us to significantly improve single-thread throughput as well as scale to multiple cores. We also developed several optimization techniques to eliminate major bottlenecks during this process. Evaluation of our prototype, JIGSAW, shows that our approach can achieve three orders of magnitude higher search throughput than existing fuzzers and can scale to multiple cores. We also find that, with such high throughput, a simple gradient-guided search heuristic can solve path constraints collected from a large set of real-world programs faster than SMT solvers with much more sophisticated search heuristics. Evaluation of end-to-end coverage-guided testing also shows that our JIGSAW-powered hybrid fuzzer can outperform state-of-the-art testing tools.
Authored by Ju Chen, Jinghan Wang, Chengyu Song, Heng Yin
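The "simple gradient-guided search" idea can be sketched without any JIT machinery: treat a branch predicate as a distance function over the input and walk downhill until the distance hits zero, i.e. the branch flips. The constraint below is a made-up toy, and evaluating it as a plain Python function stands in for JIGSAW's compiled constraints:

```python
# Loose sketch of gradient-guided constraint solving (toy constraint,
# not JIGSAW's JIT-compiled evaluation): minimize a branch-distance
# function by greedy local search over integer inputs.

def branch_distance(x):
    # distance to satisfying the hypothetical path constraint x*x - 7*x == 60
    return abs(x * x - 7 * x - 60)

def solve(dist, x=0, max_steps=10_000):
    """Greedy local search: step one unit in whichever direction lowers
    the distance; a distance of zero means the constraint is satisfied."""
    for _ in range(max_steps):
        d = dist(x)
        if d == 0:
            return x
        x = x + 1 if dist(x + 1) < d else x - 1
    return None                      # budget exhausted (e.g. local minimum)

x = solve(branch_distance)
print(x, branch_distance(x))  # a root of x*x - 7*x - 60, distance 0
```

Each search step is just a couple of constraint evaluations, which is why raising evaluation throughput (the paper's JIT compilation) directly speeds up solving; the trade-off is that greedy search can stall in local minima where an SMT solver would not.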
Globalization of the integrated circuit (IC) supply chain exposes designs to security threats such as reverse engineering and intellectual property (IP) theft. Designers may want to protect specific high-level synthesis (HLS) optimizations or micro-architectural solutions of their designs. Hence, protecting the IP of ICs is essential. Behavioral locking is an approach to thwart these threats by operating at high levels of abstraction instead of reasoning on the circuit structure. Like any security protection, behavioral locking requires additional area. Existing locking techniques have a different impact on security and overhead, but they do not explore the effects of alternatives when making locking decisions. We develop a design-space exploration (DSE) framework to optimize behavioral locking for a given security metric. For instance, we optimize differential entropy under area or key-bit constraints. We define a set of heuristics to score each locking point by analyzing the system dependence graph of the design. The solution yields better results for 92% of the cases when compared to baseline, state-of-the-art (SOTA) techniques. The approach has results comparable to evolutionary DSE while requiring 100× to 400× less computational time.
Authored by Luca Collini, Ramesh Karri, Christian Pilato