Provenance 2022 - Traditional Intrusion Detection Systems (IDS) have struggled to keep up with the rise of sophisticated cyberattacks such as Advanced Persistent Threats (APTs) in recent years. Provenance-based Intrusion Detection Systems (PIDS) use data provenance concepts to enable fine-grained event correlation, and results show increased detection accuracy and reduced false-alarm rates compared to traditional IDS. In particular, rule-based approaches to PIDS have demonstrated high detection accuracy, low false-alarm rates, and fast detection times. However, rules are manually created by security experts, which is time-consuming and does not guarantee high-quality rules. To address this issue, we propose an automated rule generation framework that automatically generates robust rules describing malicious files. The resulting high-quality rules can be used in PIDS to promptly identify similar attacks and other affected systems.
Authored by Michael Zipperle, Florian Gottwalt, Yu Zhang, Omar Hussain, Elizabeth Chang, Tharam Dillon
Provenance 2022 - Connected vehicles (CVs) have facilitated the development of intelligent transportation systems that support critical safety information sharing with minimal latency. However, CVs are vulnerable to various external and internal attacks. While cryptographic techniques can mitigate external attacks, preventing internal attacks is challenging because they originate from authorized but malicious entities. Thwarting internal attacks requires identifying the trustworthiness of the participating vehicles. This paper proposes a trust management framework for CVs based on interaction provenance that ensures privacy, considers both in-vehicle and vehicular-network security incidents, and supports flexible security policies. To this end, we present an interaction provenance recording and trust management protocol. Different events are extracted from the interaction provenance, and trustworthiness is calculated using fuzzy policies over those events.
Authored by Mohammad Hoque, Ragib Hasan
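To make the last step concrete, here is a minimal sketch of fuzzy-policy trust scoring from provenance-derived events. The event names, membership breakpoints, and weights are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of fuzzy-policy trust scoring from provenance events.
# Event names, breakpoints, and weights below are invented for
# illustration; the paper's actual policies are richer.

def membership(count, low, high):
    """Ramp membership: 0 at/below `low`, 1 at/above `high`."""
    if count <= low:
        return 0.0
    if count >= high:
        return 1.0
    return (count - low) / (high - low)

def trust_score(events):
    """Map event counts extracted from interaction provenance to [0, 1]."""
    suspicion = 0.6 * membership(events.get("in_vehicle_alerts", 0), 1, 5) \
              + 0.4 * membership(events.get("network_misbehavior", 0), 1, 10)
    return 1.0 - min(1.0, suspicion)

print(trust_score({"in_vehicle_alerts": 2, "network_misbehavior": 4}))
```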
Provenance 2022 - Advanced Persistent Threats (APTs) are typically sophisticated, stealthy, long-term attacks that are difficult to detect and investigate. Provenance graphs built from system audit logs have recently become an important approach for APT detection and investigation. However, existing provenance-based approaches, which either require rules based on expert knowledge or cannot pinpoint attack events in a provenance graph, still cannot effectively mitigate APT attacks. In this paper, we present Deepro, a provenance-based APT campaign detection approach that not only effectively detects attack-relevant entities in a provenance graph but also precisely recovers APT campaigns based on the detected entities. Specifically, Deepro first customizes a general-purpose GNN (Graph Neural Network) model to represent and detect process nodes in a provenance graph by automatically learning the distinct patterns of attack and benign behaviors from the rich contextual information in the graph. Then, Deepro detects attack-relevant file and network entities according to their data dependencies on the detected process nodes. Finally, Deepro recovers APT campaigns by correlating the detected entities based on their causal relationships in the provenance graph. We evaluated Deepro on ten real-world APT attacks. The results show that Deepro detects attack events with an average F1-score of 98.81% and thus produces precise provenance subgraphs of APT attacks.
Authored by Na Yan, Yu Wen, Luyao Chen, Yanna Wu, Boyang Zhang, Zhaoyang Wang, Dan Meng
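Deepro's first stage classifies process nodes with a GNN over the provenance graph. As intuition only, a toy sketch of one neighborhood-aggregation round followed by a sigmoid scoring layer; the adjacency, features, and weights here are random placeholders, whereas the paper learns them from attack and benign behavior patterns:

```python
import numpy as np

# Toy provenance graph: adjacency list over node ids; each node has a
# feature vector (e.g., encoded event types). Features and weights are
# random stand-ins for what the paper's GNN would learn.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
X = np.random.rand(4, 8)            # 4 process nodes, 8 features each
W = np.random.rand(8, 1)            # scoring weights (would be learned)

def aggregate(X, adj):
    """One message-passing round: average each node with its neighbors."""
    H = np.zeros_like(X)
    for v, nbrs in adj.items():
        H[v] = (X[v] + X[nbrs].sum(axis=0)) / (1 + len(nbrs))
    return H

H = aggregate(X, adj)
scores = 1 / (1 + np.exp(-(H @ W)))  # sigmoid: attack likelihood per node
suspicious = [v for v in adj if scores[v, 0] > 0.5]
```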
Provenance 2022 - Data provenance is meta-information about the origin and processing history of data. We demonstrate the provenance analysis of SQL queries and use it for query debugging. How-provenance determines which query expressions were relevant for evaluating selected pieces of output data. Likewise, Where- and Why-provenance determine relevant pieces of input data. The combined provenance notions can be explored visually and interactively. We support a feature-rich SQL dialect with correlated subqueries and focus on bag semantics. Our fine-grained provenance analysis derives individual data provenance for table cells and SQL expressions.
Authored by Tobias Muller, Pascal Engel
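To illustrate the distinction between output and input provenance, a toy sketch computing witness rows (in the spirit of Why-provenance) for a simple selection query over an in-memory table. The schema and predicate are invented; the demonstrated system analyzes full SQL with correlated subqueries and bag semantics:

```python
# Toy why-provenance: for each output row of a selection, record which
# input rows (by id) justify it. Table and predicate are invented for
# illustration only.
orders = [
    {"id": 1, "amount": 120, "region": "EU"},
    {"id": 2, "amount": 80,  "region": "US"},
    {"id": 3, "amount": 200, "region": "EU"},
]

def select_with_provenance(rows, pred):
    return [({"amount": r["amount"]}, {("orders", r["id"])})
            for r in rows if pred(r)]

for out, why in select_with_provenance(orders, lambda r: r["region"] == "EU"):
    print(out, "justified by input rows:", why)
```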
Provable Security - Recent research has shown that hackers can efficiently infer sensitive user activities merely by observing the network traffic of smart home devices. To protect users' privacy, researchers have designed several traffic obfuscation methods. However, existing methods usually consume high bandwidth or provide weak privacy protection. In this paper, we conduct thorough research on smart home traffic obfuscation. We first propose a fixed-value obfuscation scheme and prove that it is perfectly secure by showing the indistinguishability of user activities. Yet, fixed-value obfuscation has high bandwidth consumption. To further reduce the bandwidth consumption, we propose combining fixed-value obfuscation with Multipath TCP transmission. The security and performance of the proposed multipath fixed-value obfuscation method are theoretically analyzed. We have implemented the proposed methods and tested them on public packet traces and simulated smart home networks. The experimental results match well with the theoretical analysis.
Authored by Gaofeng He, Xiancai Xiao, Renhong Chen, Haiting Zhu, Zhaowei Zhang, Bingfeng Xu
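The core of fixed-value obfuscation is making every device's traffic look identical: pad each packet to a constant size and transmit at a constant rate, injecting dummy packets when idle. A minimal sketch, where the size and interval constants are assumptions and `send` / `stop` are caller-supplied (e.g., a socket send callback and a threading.Event):

```python
import queue
import time

PKT_SIZE = 512          # fixed packet size in bytes (assumed value)
INTERVAL = 0.05         # fixed send interval in seconds (assumed value)
outbox = queue.Queue()  # real application traffic is queued here

def obfuscated_sender(send, stop):
    """Emit one fixed-size packet every INTERVAL, real or dummy, so an
    observer sees the same traffic pattern regardless of user activity."""
    while not stop.is_set():
        try:
            payload = outbox.get_nowait()
        except queue.Empty:
            payload = b""                          # dummy traffic when idle
        pkt = payload[:PKT_SIZE].ljust(PKT_SIZE, b"\x00")  # pad to fixed size
        send(pkt)
        time.sleep(INTERVAL)
```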
Provable Security - Design-hiding techniques are a central piece of academic and industrial efforts to protect electronic circuits from being reverse-engineered. However, these techniques have lacked a principled foundation to guide their design and security evaluation, leading to a long line of broken schemes. In this paper, we begin to lay this missing foundation.
Authored by Animesh Chhotaray, Thomas Shrimpton
Provable Security - The Industrial Internet of Things (IIoT) has brought enormous changes to both individual lives and the way society works, with implications for almost every facet of life, including smart logistics, smart grids, and smart cities. In particular, the number of IIoT devices is growing so quickly that large numbers of devices and sensors constantly communicate with one another and exchange substantial quantities of data. As a result, the potential for eavesdropping and hijacking attacks on messaging services has grown, and protecting data privacy and security have become two key problems. In recent years, lightweight certificateless signature (CLS) protocols, which are both more secure and more lightweight, have become popular for building IIoT protocols; the Schnorr signature serves as the foundation of the underlying mechanism. Despite this, we found that the vast majority of currently deployed CLS schemes are susceptible to a number of common security flaws, including man-in-the-middle (MITM) attacks, key generation centre (KGC) compromise attacks, and distributed denial of service (DDoS) attacks. To address these issues, we propose a novel pairing-free provable data approach. To develop a reliable and efficient CLS scheme, the design leverages blockchain technology and smart contracts. We then simulate both Type-I and Type-II adversaries and run a series of tests to verify the dependability of our system. The security analysis and evaluation show that our design provides more reliable security assurance at a lower overall cost of computation (reduced by around 40.0% at most) and transmission time (reduced by around 94.7% at most) than other proposed schemes.
Authored by Meenakshi Garg, Krishan Sharma
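Since the discussed CLS constructions build on the Schnorr signature, here is a textbook Schnorr sign/verify over a toy prime-order group. This is a minimal sketch with demo-sized parameters, not the paper's pairing-free CLS scheme, which additionally involves KGC-issued partial keys and blockchain logging:

```python
import hashlib
import secrets

# Textbook Schnorr over a toy group: p = 2q + 1, generator g of order q.
# Demo-sized numbers for readability; real schemes use ~256-bit groups
# or elliptic curves.
p, q, g = 2039, 1019, 2

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1      # secret key
    return x, pow(g, x, p)                # (sk, pk)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1      # fresh per-signature nonce
    r = pow(g, k, p)
    e = H(r, msg)
    s = (k + x * e) % q
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r = (pow(g, s, p) * pow(y, -e, p)) % p    # g^s * y^(-e) = g^k
    return H(r, msg) == e

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```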
Provable Security - Unlike coverage-based fuzzing, which gives equal attention to every part of the code, directed fuzzing aims to direct a fuzzer to a specific target in the code, e.g., code with potential vulnerabilities. Despite much progress, we observe that existing directed fuzzers are still inefficient, as they often symbolically or concretely execute many program paths that cannot reach the target code, wasting considerable computational resources. This paper presents BEACON, which can effectively direct a greybox fuzzer through the sea of paths in a provable manner. That is, assisted by a lightweight static analysis that computes abstracted preconditions for reaching the target, we can prune 82.94% of the executing paths at runtime with negligible analysis overhead (less than 5 hours) while guaranteeing that the pruned paths are spurious with respect to the target. We have implemented our approach, BEACON, and compared it to five state-of-the-art (directed) fuzzers in the application scenario of vulnerability reproduction. The evaluation results demonstrate that BEACON is 11.50x faster on average than existing directed greybox fuzzers, and it can also improve the speed of the conventional coverage-guided fuzzers AFL, AFL++, and Mopt in reproducing specific bugs, with 6.31x, 11.86x, and 10.92x speedups, respectively. More interestingly, when used to test vulnerability patches, BEACON found 14 incomplete fixes of existing CVE-identified vulnerabilities and 8 new bugs, 10 of which are exploitable and have new CVE IDs assigned.
Authored by Heqing Huang, Yiyuan Guo, Qingkai Shi, Peisen Yao, Rongxin Wu, Charles Zhang
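BEACON's key idea is to precompute an abstract precondition for reaching the target and abandon any execution whose state already violates it. A toy interval-based sketch of that check; the interval, program point, and instrumentation are invented stand-ins for the tool's actual static analysis:

```python
# Toy precondition-based path pruning: the static analysis is assumed to
# have derived that reaching the target requires x in [100, 200] at this
# program point; any execution outside the interval is provably spurious
# and can be abandoned before paying the full execution cost.
PRECONDITION = {"x": (100, 200)}   # assumed analysis result

def check_precondition(state):
    return all(lo <= state[var] <= hi
               for var, (lo, hi) in PRECONDITION.items())

def fuzz_one(program, inp):
    state = {"x": inp}
    if not check_precondition(state):
        return "pruned"            # path provably cannot reach the target
    return program(state)          # only now execute the rest of the path

print([fuzz_one(lambda s: "executed", x) for x in (42, 150, 999)])
# -> ['pruned', 'executed', 'pruned']
```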
Provable Security - We address logic locking, a mechanism for securing digital Integrated Circuits (ICs) from piracy by untrustworthy foundries. We discuss previous work and the state of the art, and observe that, despite more than a decade of research on the topic (resulting in both powerful attacks and subsequent defenses), there is no consensus on what it means for a particular locking mechanism to be secure. This paper attempts to remedy this situation. Specifically, it formulates a definition of security for a logic locking mechanism based on indistinguishability and relates the definition to security from actual attackers in a precise and unambiguous manner. We then describe a mechanism that satisfies the definition, thereby achieving (provable) security from all prior attacks. The mechanism assumes the existence of both a puncturable pseudorandom function family and an indistinguishability obfuscator, two cryptographic primitives that exist under well-founded assumptions. The mechanism builds upon the Stripped-Functionality Logic Locking (SFLL) framework, a state-of-the-art family of locking mechanisms whose potential for ever achieving security is currently in question. Along the way, partly as motivation, we present additional results, such as a reason, founded in average-case complexity, for why benchmark circuits locked with a prior scheme are susceptible to the well-known SAT attack against such schemes, and why provably thwarting the SAT attack is insufficient as a meaningful notion of security for logic locking.
Authored by Mohamed Massad, Nahid Juma, Jonathan Shahen, Mariana Raykova, Siddharth Garg, Mahesh Tripunitara
Provable Security - This paper studies provable security guarantees for cyber-physical systems (CPS) under actuator attacks. Specifically, we consider safety for CPS and propose a new attack-detection mechanism based on a zeroing control barrier function (ZCBF) condition. To reduce the conservatism in its implementation, we design an adaptive recovery mechanism based on how close the state is to violating safety. We show that the attack-detection mechanism is sound, i.e., there are no false negatives for adversarial attacks. Finally, we use a Quadratic Programming (QP) approach for online recovery (and nominal) control synthesis. We demonstrate the effectiveness of the proposed method in a case study involving a quadrotor with an attack on its motors.
Authored by Kunal Garg, Ricardo Sanfelice, Alvaro Cardenas
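A zeroing control barrier function h certifies the safe set {x : h(x) >= 0} as long as hdot(x, u) + alpha(h(x)) >= 0 holds along trajectories; the detector above flags an attack when the observed dynamics break that certified bound. A scalar toy sketch of this idea, with the dynamics, alpha, and thresholds chosen for illustration rather than taken from the paper's quadrotor case study:

```python
# Toy ZCBF attack detector: safe set is h(x) = x_max - x >= 0, with
# single-integrator dynamics x' = u and the common linear choice
# alpha(h) = k * h. A healthy actuator applies the commanded u; an
# attacked one does not.
x_max, k, dt = 10.0, 1.0, 0.01

def h(x):
    return x_max - x

def hdot(u):
    return -u                      # dh/dt for x' = u

def zcbf_ok(x, u):
    """ZCBF condition: hdot + alpha(h) >= 0 must hold for safe inputs."""
    return hdot(u) + k * h(x) >= 0

def detect_attack(x, u_cmd, x_next):
    """Flag an attack if the observed state change violates the bound
    that the commanded (safe) input was certified to satisfy."""
    hdot_observed = (h(x_next) - h(x)) / dt
    return zcbf_ok(x, u_cmd) and hdot_observed + k * h(x) < -1e-6

x, u = 9.0, 0.5
print(detect_attack(x, u, x + dt * u))      # False: actuator healthy
print(detect_attack(x, u, x + dt * 5.0))    # True: actuator over-actuated
```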
Provable Security - Spectre vulnerabilities violate our fundamental assumptions about architectural abstractions, allowing attackers to steal sensitive data despite previously state-of-the-art countermeasures. To defend against Spectre, developers of verification tools and compiler-based mitigations are forced to reason about microarchitectural details such as speculative execution. In order to aid developers with these attacks in a principled way, the research community has sought formal foundations for speculative execution upon which to rebuild provable security guarantees.
Authored by Sunjay Cauligi, Craig Disselkoen, Daniel Moghimi, Gilles Barthe, Deian Stefan
Provable Security - With the rapid development of cloud storage technology, effectively verifying the integrity of cloud data has become a major issue in cloud storage. Recently, Nayak and Tripathy proposed a cloud auditing protocol. Although the protocol is efficient, it is not secure, as Yu et al.'s attack shows. Even if the server does not store the users' data, it can forge the provable data possession proof and pass the integrity audit. We point out the issue in their article and describe in detail the process by which the cloud server forges the proof. Finally, we give an improved and more secure scheme and analyze its security.
Authored by Xu Wang, Ruifeng Li
Provable Security - As data are outsourced, they are stored as multiple copies on different cloud servers to improve consistency and availability. The Provable Data Possession (PDP) protocol is used to prove the integrity of these multiple copies. Previously, PDP protocols stored all copies on a single cloud storage server. Many PDP protocols also depended on public key infrastructure, which lacks security and leads to vulnerabilities. To store the copies on multiple different cloud servers, identity-based provable data possession is used. Using homomorphic tags, data are stored in multiple clouds and their integrity is checked simultaneously. Our scheme is based on the Computational Diffie-Hellman hard problem and is the first to provide provable data possession of multiple copies across multiple clouds. We present the system model and security model, and our experiments show that the PDP scheme is applicable and practical.
Authored by Chamaeshika, Vasanthi, Jerart Julus
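Homomorphic tags are what let a server combine per-block tags into one compact proof that the client checks without holding the data. A toy RSA-style sketch in the spirit of classic PDP constructions; the numbers are demo-sized and this is not the paper's identity-based, Diffie-Hellman-based multi-copy scheme:

```python
# Toy RSA-based PDP with homomorphic tags (Ateniese-style flavor).
# Demo-sized parameters; real schemes use large moduli.
p, q = 61, 53
N, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))                 # private exponent
g = 2

blocks = [5, 9, 14]                               # file blocks (toy)
tags = [pow(g, m * d, N) for m in blocks]         # client: tag_i = g^(m_i*d)

# Server proves possession of challenged blocks without sending them:
challenge = [(0, 3), (2, 7)]                      # (block index, coefficient)
proof_tag, proof_mu = 1, 0
for i, c in challenge:
    proof_tag = (proof_tag * pow(tags[i], c, N)) % N   # homomorphic combine
    proof_mu += c * blocks[i]

# Client verifies using only the public key and the challenge:
# proof_tag^e = g^(sum c_i * m_i) must equal g^proof_mu (mod N).
assert pow(proof_tag, e, N) == pow(g, proof_mu, N)
```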
Protocol Verification - PCI Express (Peripheral Component Interconnect) is a point-to-point, high-performance, serial interconnect protocol. PCIe outperforms older buses and offers greater bandwidth, making it a fantastic option for a wide range of applications. PCIe features layered architecture with three distinct layers. Packets are used to convey information between these layers. The verification IP of the physical layer in PCI Express is implemented in this paper. The Universal Verification Methodology is used for development of VIP of PCIe, which is written in System Verilog (UVM).
Authored by Viraj Vaidya, Vaishali Ingale, Ashlesha Gokhale
Protocol Verification - Transactions involving multiple blockchains are implemented by cross-chain protocols. These protocols are based on smart contracts, programs that run on blockchains and are executed by a network of computers. Verifying the runtime correctness of smart contracts is a problem of compelling practical interest, since smart contracts can automatically transfer ownership of cryptocurrencies, electronic securities, and other valuable assets among untrusting parties. Such verification is challenging since smart contract execution is time-sensitive, and the clocks on different blockchains may not be perfectly synchronized. This paper describes a method for runtime monitoring of blockchain executions. First, we propose a generalized runtime verification technique for verifying partially synchronous distributed computations against metric temporal logic (MTL) by exploiting bounded-skew clock synchronization. Second, we introduce a progression-based formula rewriting scheme for monitoring MTL specifications that employs SMT solving techniques, and we report experimental results.
Authored by Ritam Ganguly, Yingjie Xue, Aaron Jonckheere, Parker Ljung, Benjamin Schornstein, Borzoo Bonakdarpour, Maurice Herlihy
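Progression rewrites a temporal formula against each observed event so that whatever remains must hold of the future. A minimal sketch for one of the simplest MTL fragments, a bounded "eventually"; the paper's rewriting covers full MTL and handles partial synchrony and clock skew via SMT, none of which is shown here:

```python
# Minimal progression for F[0,b] p ("p must hold within b time units"),
# tracked as an absolute deadline. Events are (timestamp, {true atoms});
# the spec and trace below are invented for illustration.
TRUE, FALSE = "true", "false"

def progress(formula, ts, atoms):
    op, atom, deadline = formula            # ("F", p, absolute deadline)
    if atom in atoms:
        return TRUE                          # obligation satisfied now
    if ts > deadline:
        return FALSE                         # deadline passed: violation
    return formula                           # obligation carries forward

spec = ("F", "funds_released", 30)          # must release funds by t = 30
trace = [(10, {"deposit"}), (25, {"funds_released"})]
for ts, atoms in trace:
    spec = progress(spec, ts, atoms)
    if spec in (TRUE, FALSE):
        print("verdict:", spec)
        break
```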
Protocol Verification - As a new generation of high-performance chip-to-chip packet transmission protocol, Interlaken is usually embedded in FPGAs, ARM processors, DSPs, and other chips as an IP core. For such a high-bandwidth network communication protocol, testing and verification work is key to supporting users' highly reliable applications. This paper analyzes the framing layer and protocol layer of an Interlaken IP core embedded in an FPGA, encapsulates pre-defined data at the client end of the protocol layer, tests operation at a transmission rate of 100 Gbps, and injects errors into one of the four user-side segments to simulate packet encapsulation errors. The study found that when any control word in any segment is lost during packet transmission, about half of the packets are received and about a quarter are received correctly.
Authored by Chao Zhang, Huabo Sun, Zichen Zhang, Lipei Zhang, Sun Lei
Protocol Verification - Symbolic protocol verification generally abstracts probabilities away, treating computations that succeed only with negligible probability, such as guessing random numbers or breaking an encryption scheme, as impossible. This abstraction, sometimes referred to as the perfect cryptography assumption, has proven very useful as it simplifies automation of the analysis. However, probabilities may also appear in the control flow, where they are generally not negligible. In this paper we consider a framework for symbolic protocol analysis with a probabilistic choice operator: the probabilistic applied pi calculus. We define and explore the relationships between several behavioral equivalences. In particular we show the need for randomized schedulers and exhibit a counterexample to a result in a previous work that relied on non-randomized ones. As in other frameworks that mix both non-deterministic and probabilistic choices, schedulers may sometimes be unrealistically powerful. We therefore consider two subclasses of processes that avoid this problem. In particular, when considering purely non-deterministic protocols, as is done in classical symbolic verification, we show that a probabilistic adversary has—maybe surprisingly—a strictly superior distinguishing power for may testing, which, when the number of sessions is bounded, we show to coincide with purely possibilistic similarity.
Authored by Vincent Cheval, Raphaëlle Crubillé, Steve Kremer
Protocol Verification - Microservices is an architectural style that enables the design of applications as a series of services that are independent of each other in function, maintainability, deployability, and scalability. Classical distributed transaction protocols such as the two-phase commit (2PC) protocol are not suitable for this pattern, because they require all participants in a transaction to commit or roll back before the transaction can proceed. This is why the saga pattern was chosen to replace 2PC. Saga is a mechanism, belonging to the 'database per service' pattern, that guarantees distributed transactions spanning many services. The saga pattern provides transaction management using a sequence of local transactions. A saga can be rolled back via compensating transactions, which ensure one of two outcomes: either all operations complete successfully, or the corresponding compensating transactions are run to undo the changes. In this paper, we verify the saga pattern using the Maude language. We use the saga protocol by orchestration and present a way to verify it.
Authored by Manel Djerou, Okba Tibermacine
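An orchestrated saga runs local transactions in sequence and, on failure, runs the compensations of the completed steps in reverse order. A minimal sketch of that control flow; the step names are invented, and the paper verifies this logic formally in Maude rather than implementing it this way:

```python
# Minimal orchestrated saga: each step pairs a local transaction with
# its compensation; on failure, completed steps are undone in reverse.
def run_saga(steps):
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):   # undo in reverse order
            compensate()
        return "rolled back"
    return "committed"

def fail():
    raise RuntimeError("payment declined")

saga = [
    (lambda: print("reserve stock"), lambda: print("release stock")),
    (fail,                           lambda: print("refund payment")),
]
print(run_saga(saga))   # prints "release stock", then "rolled back"
```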
Protocol Verification - Bus protocols are critical for the operation of a system, as all communications are handled through the bus by following a predetermined structure. A verification IP is designed to check whether the system follows the specified protocol for seamless communication between multiple blocks in the system. As process technology shrinks, the number of sub-blocks in the system increases, and so does verification complexity. In a traditional verification architecture, the design under test (DUT) signals are individually connected to the verification environment; by binding an interface to the sub-blocks, the signals are encapsulated and simpler to handle. In this work, an AHB verification module is designed using the interface binding technique.
Authored by Vignesh Mahendra, D.S. Sam, A Hernisha, Josephine Atchaya
Protocol Verification - Attribute-based encryption (ABE) is an extension of identity-based encryption and public-key encryption. It achieves fine-grained access control and a one-to-many encryption mode, making it suitable for practical applications. In addition to mathematical proofs, it is essential to verify its security properties using different protocols. This paper proposes two key exchange protocols to securely exchange the user secret keys in ABE. We used ProVerif, an automated cryptographic protocol verifier, during protocol verification. We specifically aim to protect the confidentiality of the generated keys. The proposed protocols are formally analysed and their security properties formally proved.
Authored by Baasansuren Bat-Erdene, Yuping Yan, Mohammed Kamel, Peter Ligeti
Protocol Verification - System-on-Chip (SoC) designs are becoming increasingly complex. Multiple Intellectual Property blocks (IPs) are integrated into a single SoC, and these IPs communicate through various bus protocols. Verification takes almost 70% of the design cycle, so a reusable verification environment for these commonly used protocols is very important. In this paper, the AXI4-Lite protocol is verified using a UVM-based testbench structure. To verify all channels of the AXI protocol, data is written into a 4-bit shift register and read back. The UVM testbench acts as a master device that sends all control information, data, and addresses to the register through the AXI interface. To assess whether the verification goals are achieved, coverpoints are written, and functional and code coverage reports are analyzed. The Synopsys VCS tool is used for simulation.
Authored by Hardi Sangani, Usha Mehta
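Each AXI4-Lite channel completes a VALID/READY handshake: the write path uses the AW (address), W (data), and B (response) channels, and the read path uses AR and R. As a rough behavioral sketch of the write-then-readback scenario described above, in Python for intuition rather than the paper's UVM/SystemVerilog environment:

```python
# Behavioral model of AXI4-Lite transactions against a 4-bit register.
# Handshake timing is abstracted away: each method represents the
# completed handshakes of the channels it names.
class Axi4LiteSlave:
    def __init__(self):
        self.regs = {0x0: 0}                 # 4-bit register at address 0x0

    def write(self, awaddr, wdata, wstrb=0b1):
        # AW and W handshakes accepted; apply the write if strobed.
        if wstrb & 0b1:
            self.regs[awaddr] = wdata & 0xF  # register is 4 bits wide
        return "OKAY"                        # BRESP on the B channel

    def read(self, araddr):
        # AR handshake, then RDATA + RRESP on the R channel.
        return self.regs.get(araddr, 0), "OKAY"

dut = Axi4LiteSlave()
assert dut.write(0x0, 0b1011) == "OKAY"
assert dut.read(0x0) == (0b1011, "OKAY")     # read back what was written
```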
Protocol Verification - Display Serial Interface (DSI) is a high-speed serial interface standard. It supports display and touch screens in mobile devices such as smartphones, laptops, tablets, and other platforms. DSI describes several layers that define the detailed interconnect between a host processor and a peripheral device in a mobile system. The targeted DSI protocol layer is the low-level layer in the standard, which is responsible for byte organization, the addition of error-checking information, and packet formulation. In this paper, we propose a constrained-random Coverage-Driven Verification (CDV) approach for the DSI low-level protocol layer using the Universal Verification Methodology (UVM). This approach could be the basis of a standardized Verification Intellectual Property (VIP) to test and verify the standardized DSI layer. We provide a well-established base of a reusable and scalable UVM environment that can verify the DSI protocol layer using techniques such as an error injection mechanism, SystemVerilog Assertions (SVA), and directed UVM sequences that aim to cover the different real-life scenarios between the host processor and the peripheral device. Our results show that we can detect all inserted errors, assuring the functionality of the DSI low-level protocol layer. Different real-life scenarios between the host processor and the peripheral device are covered with 100% functional coverage and 93% code coverage.
Authored by Ahmed Ali, Ahmed Shalaby, Sherif Saif, Mohamed Taher
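To make "byte organization, error-checking information, and packet formulation" concrete: a DSI long packet is a 4-byte header (data ID, 16-bit word count, ECC byte) followed by the payload and a 16-bit checksum. A sketch of packet formulation, where the CRC parameters are my assumption of the usual MIPI CRC-16 and the ECC byte is stubbed; consult the DSI specification for the exact codes:

```python
# Sketch of DSI long-packet formulation:
# [DataID | WC low | WC high | ECC] + payload + [CS low | CS high].
def crc16(data, poly=0x8408, crc=0xFFFF):
    """Reflected CRC-16 (assumed MIPI parameters: poly 0x8408, init 0xFFFF)."""
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

def ecc(header3):
    return 0x00   # placeholder: real DSI uses a Hamming-style ECC byte

def long_packet(data_id, payload):
    wc = len(payload)                              # word count
    header = bytes([data_id, wc & 0xFF, wc >> 8])
    cs = crc16(payload)
    return header + bytes([ecc(header)]) + payload + bytes([cs & 0xFF, cs >> 8])

pkt = long_packet(0x39, b"\x2c\x00\x10")   # e.g., a DCS long write (0x39)
```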
Privacy Policies and Measurement - The fundamental goal of tone mapping is to reproduce a given scene or image so that it closely matches real-world brightness as perceived by humans on display devices. Medical imaging uses such techniques to reduce noise and sharpen details, enhancing the visual representation of the image. Because details play such an important role in identifying and treating disease, it is critical to concentrate on the most important features when displaying medical images. Principal component analysis is a method for reducing the complexity of high-dimensional data: it approximates high-dimensional data with fewer dimensions, where each dimension is a principal component, a linear combination of the underlying components, ordered by the amount of information it carries. This approach can be used to solve a wide range of everyday problems. The paper also highlights how Big Data may be used to analyze Internet and image data sources effectively, along with concerns of privacy and methods for securing the components of pattern environments and systems. Edges, on the other hand, which are almost always interesting features of an image, are characterized by sharp transitions in grey levels, so averaging filters have the undesirable side effect of blurring edges. Another application of this kind of method is the smoothing of false contours that result from using an insufficient number of grey levels.
Authored by Rajeev Kumar, Neha Sharma, Sandeep Kumar
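Since the passage leans on principal component analysis for dimensionality reduction, a minimal numpy sketch of PCA on illustrative random data (not tied to the paper's imaging pipeline):

```python
import numpy as np

# Minimal PCA: center the data, take the top-k eigenvectors of the
# covariance matrix, and project. Random data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))              # 100 samples, 8 dimensions
Xc = X - X.mean(axis=0)                    # center each dimension
cov = Xc.T @ Xc / (len(X) - 1)             # sample covariance
vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
components = vecs[:, ::-1][:, :2]          # top-2 principal components
X_reduced = Xc @ components                # each row: 2-D approximation
```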
Privacy Policies and Measurement - The Function-as-a-Service cloud computing paradigm has made large-scale application development convenient and efficient as developers no longer need to deploy or manage the necessary infrastructure themselves. However, as a consequence of this abstraction, developers lose insight into how their code is executed and data is processed. Cloud providers currently offer little to no assurance of the integrity of customer data. One approach to robust data integrity verification is the analysis of data provenance—logs that describe the causal history of data, applications, users, and non-person entities. This paper introduces ProProv, a new domain-specific language and graphical user interface for specifying policies over provenance metadata to automate provenance analyses.
Authored by Kevin Dennis, Shamaria Engram, Tyler Kaczmarek, Jay Ligatti
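The flavor of a provenance policy is a predicate over the causal history of a data item. A toy sketch of checking one such policy over a provenance log; the log schema and policy are invented for illustration and are not ProProv's actual syntax (the relation names loosely follow the W3C PROV vocabulary):

```python
# Toy provenance-policy check: "every output object must have been
# generated by an activity that a trusted agent started".
log = [
    {"type": "wasGeneratedBy", "entity": "report.pdf", "activity": "etl_job"},
    {"type": "wasStartedBy",   "activity": "etl_job",  "agent": "alice"},
]
TRUSTED = {"alice"}

def generated_by_trusted(entity, log):
    acts = {r["activity"] for r in log
            if r["type"] == "wasGeneratedBy" and r["entity"] == entity}
    return any(r["type"] == "wasStartedBy" and r["activity"] in acts
               and r["agent"] in TRUSTED for r in log)

assert generated_by_trusted("report.pdf", log)
```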
Privacy Policies and Measurement - The emergence of mobile edge computing (MEC) puts unprecedented pressure on privacy protection, even though it improves computation performance, including energy consumption and computation delay, through computation offloading. To address this, we propose a deep reinforcement learning (DRL)-based computation offloading scheme that jointly optimizes privacy protection and computation performance. We investigate the privacy exposure risk caused by offloading history and define an analysis metric to evaluate the privacy level. To find the optimal offloading strategy, we propose an algorithm combining actor-critic, off-policy, and maximum-entropy methods to accelerate the learning rate. Simulation results show that the proposed scheme outperforms other benchmarks.
Authored by Zhengjun Gao, Guowen Wu, Yizhou Shen, Hong Zhang, Shigen Shen, Qiying Cao
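The joint objective couples computation performance with a privacy-exposure metric over the offloading history. A minimal sketch of such a reward shaping; the weights and the entropy-based exposure metric are my assumptions for illustration, not the paper's exact definitions:

```python
import math

# Sketch of a privacy-aware offloading reward: penalize delay, energy,
# and privacy exposure. The exposure term uses the entropy of the
# offloading-history distribution across edge servers (a low-entropy,
# predictable pattern leaks more about user behavior).
def privacy_exposure(history):
    total = sum(history.values())
    ent = -sum((n / total) * math.log2(n / total)
               for n in history.values() if n > 0)
    return 1.0 / (1.0 + ent)       # more predictable -> higher exposure

def reward(delay, energy, history, w=(1.0, 0.5, 2.0)):
    return -(w[0] * delay + w[1] * energy + w[2] * privacy_exposure(history))

print(reward(0.2, 0.1, {"edge_A": 9, "edge_B": 1}))
```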