Provable Security - Unlike coverage-based fuzzing, which gives equal attention to every part of the code, directed fuzzing aims to steer a fuzzer toward a specific target in the code, e.g., code with potential vulnerabilities. Despite much progress, we observe that existing directed fuzzers are still inefficient, as they often symbolically or concretely execute many program paths that cannot reach the target code, wasting substantial computational resources. This paper presents BEACON, which can effectively direct a greybox fuzzer through the sea of paths in a provable manner. That is, assisted by a lightweight static analysis that computes abstracted preconditions for reaching the target, we can prune 82.94\% of the executing paths at runtime with negligible analysis overhead (<5h), with the guarantee that the pruned paths are spurious with respect to the target. We have implemented our approach, BEACON, and compared it to five state-of-the-art (directed) fuzzers in the application scenario of vulnerability reproduction. The evaluation results demonstrate that BEACON is 11.50x faster on average than existing directed greybox fuzzers, and it also speeds up the conventional coverage-guided fuzzers AFL, AFL++, and Mopt in reproducing specific bugs by 6.31x, 11.86x, and 10.92x, respectively. More interestingly, when used to test vulnerability patches, BEACON found 14 incomplete fixes of existing CVE-identified vulnerabilities and 8 new bugs, 10 of which are exploitable and have been assigned new CVE IDs.
Authored by Heqing Huang, Yiyuan Guo, Qingkai Shi, Peisen Yao, Rongxin Wu, Charles Zhang
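A minimal sketch of the pruning idea in Python (hypothetical names and intervals, not BEACON's actual implementation): a static analysis computes an interval precondition that any target-reaching path must satisfy, and an instrumented check terminates executions that provably cannot reach the target.

    import sys

    # Interval precondition for variable x, computed offline by the static
    # analysis: reaching the target requires 100 <= x <= 200 (assumed values).
    PRECONDITION = (100, 200)

    def instrumented_check(x: int) -> None:
        lo, hi = PRECONDITION
        if not (lo <= x <= hi):
            sys.exit(0)  # path is provably spurious w.r.t. the target: stop early

    def trigger_target(x: int) -> None:
        assert False, "target reached"  # the bug the fuzzer is directed at

    def program_under_test(x: int) -> None:
        instrumented_check(x)  # inserted right after x becomes known
        if x % 7 == 3:         # ... arbitrary intermediate logic ...
            trigger_target(x)

Every execution aborted by the check would have been wasted fuzzing effort; executions inside the interval proceed unchanged.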
Provable Security - We address logic locking, a mechanism for securing digital Integrated Circuits (ICs) from piracy by untrustworthy foundries. We discuss previous work and the state-of-the-art, and observe that, despite more than a decade of research that has gone into the topic (resulting in both powerful attacks and subsequent defenses), there is no consensus on what it means for a particular locking mechanism to be secure. This paper attempts to remedy this situation. Specifically, it formulates a definition of security for a logic locking mechanism based on indistinguishability and relates the definition to security from actual attackers in a precise and unambiguous manner. We then describe a mechanism that satisfies the definition, thereby achieving (provable) security from all prior attacks. The mechanism assumes the existence of both a puncturable pseudorandom function family and an indistinguishability obfuscator, two cryptographic primitives that exist under well-founded assumptions. The mechanism builds upon the Stripped-Functionality Logic Locking (SFLL) framework, a state-of-the-art family of locking mechanisms whose potential for ever achieving security is currently in question. Along the way, partly as motivation, we present additional results, such as a reason founded in average-case complexity for why benchmark circuits locked with a prior scheme are susceptible to the well-known SAT attack against such schemes, and why provably thwarting the SAT attack is insufficient as a meaningful notion of security for logic locking.
Authored by Mohamed Massad, Nahid Juma, Jonathan Shahen, Mariana Raykova, Siddharth Garg, Mahesh Tripunitara
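For intuition, a toy Python model of stripped-functionality locking in the SFLL-HD^0 / TTLock style (an illustration of the general framework only; the paper's provably secure construction is different and relies on the cryptographic primitives named above):

    # Toy SFLL-style locking: the stripped circuit corrupts f on one
    # protected input, and a restore unit flips the output back whenever
    # the applied key matches. Values here are arbitrary examples.

    def f(x: int) -> int:
        return x & 1  # original circuit function

    SECRET = 0b1011   # protected input, hard-wired into the stripped circuit

    def f_stripped(x: int) -> int:
        return f(x) ^ (1 if x == SECRET else 0)

    def locked(x: int, key: int) -> int:
        return f_stripped(x) ^ (1 if x == key else 0)  # restore unit

    # The correct key restores f on every input; a wrong key leaves the
    # circuit incorrect on up to two input patterns.
    assert all(locked(x, SECRET) == f(x) for x in range(16))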
Provable Security - This paper studies provable security guarantees for cyber-physical systems (CPS) under actuator attacks. Specifically, we consider safety for CPS and propose a new attack-detection mechanism based on a zeroing control barrier function (ZCBF) condition. To reduce the conservatism in its implementation, we design an adaptive recovery mechanism based on how close the state is to violating safety. We show that the attack-detection mechanism is sound, i.e., there are no false negatives for adversarial attacks. Finally, we use a Quadratic Programming (QP) approach for online recovery (and nominal) control synthesis. We demonstrate the effectiveness of the proposed method in a case study involving a quadrotor with an attack on its motors.
Authored by Kunal Garg, Ricardo Sanfelice, Alvaro Cardenas
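For reference, the standard ZCBF condition (in the style of Ames et al.; the paper's exact detection condition may differ): for a safe set $C = \{x : h(x) \ge 0\}$ and control-affine dynamics $\dot{x} = f(x) + g(x)u$, $h$ is a ZCBF if

    \sup_{u \in U} \left[ L_f h(x) + L_g h(x)\,u \right] \ge -\alpha\big(h(x)\big)

for some extended class-$\mathcal{K}$ function $\alpha$. Roughly, a detector built on this condition can flag an actuator attack whenever the observed evolution of $h$ violates the bound while the state is still inside $C$.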
Provable Security - Spectre vulnerabilities violate our fundamental assumptions about architectural abstractions, allowing attackers to steal sensitive data despite previously state-of-the-art countermeasures. To defend against Spectre, developers of verification tools and compiler-based mitigations are forced to reason about microarchitectural details such as speculative execution. In order to aid developers with these attacks in a principled way, the research community has sought formal foundations for speculative execution upon which to rebuild provable security guarantees.
Authored by Sunjay Cauligi, Craig Disselkoen, Daniel Moghimi, Gilles Barthe, Deian Stefan
Provable Security - With the rapid development of cloud storage technology, effectively verifying the integrity of cloud data has become a major issue in cloud storage. Recently, Nayak and Tripathy proposed a cloud auditing protocol. Although the protocol is efficient, it is not secure, as the attack by Yu et al. shows: even if the server does not store the users' data, it can forge the provable data possession proof and pass the integrity audit. We point out the issue in their article and describe in detail the process by which the cloud server forges the proof. We finally give an improved and more secure scheme and analyze its security.
Authored by Xu Wang, Ruifeng Li
Provable Security - When data are outsourced, they are stored as multiple copies on different cloud servers to improve consistency and availability. Provable Data Possession (PDP) protocols are used to prove the integrity of such multi-copy data. Previously, PDP protocols stored all copies on a single cloud storage server, and many depended on a public key infrastructure, which lacks security and leads to vulnerabilities. We therefore use identity-based provable data possession to store multiple copies on different cloud servers. Using homomorphic tags, data stored on multiple clouds can have their integrity checked simultaneously. Our scheme is based on the hardness of the Computational Diffie-Hellman problem and is the first to support provable data possession of multiple copies on multiple clouds. We present the system and security models, and experimental research shows that our PDP scheme is both applicable and practical.
Authored by Chamaeshika, Vasanthi, Jerart Julus
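For intuition, the classic BLS-style homomorphic tags (Shacham-Waters) illustrate how integrity can be checked via aggregated tags; the paper's identity-based, multi-copy construction differs, notably in key derivation:

    \sigma_i = \big(H(i)\,u^{m_i}\big)^{\alpha}
    \quad\text{(tag for block } m_i \text{ under secret key } \alpha\text{)},
    \qquad
    \sigma = \prod_i \sigma_i^{\nu_i}, \quad \mu = \sum_i \nu_i m_i,
    \qquad
    e(\sigma, g) \stackrel{?}{=} e\Big(\prod_i H(i)^{\nu_i}\cdot u^{\mu},\; g^{\alpha}\Big),

where $\{(i,\nu_i)\}$ is the auditor's random challenge; the homomorphism lets one aggregated proof cover many blocks at once.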
Protocol Verification - PCI Express (Peripheral Component Interconnect Express, PCIe) is a point-to-point, high-performance serial interconnect protocol. PCIe outperforms older buses and offers greater bandwidth, making it an excellent option for a wide range of applications. PCIe features a layered architecture with three distinct layers, and packets are used to convey information between these layers. This paper implements the verification IP (VIP) of the PCIe physical layer. The VIP is developed using the Universal Verification Methodology (UVM) and written in SystemVerilog.
Authored by Viraj Vaidya, Vaishali Ingale, Ashlesha Gokhale
Protocol Verification - Transactions involving multiple blockchains are implemented by cross-chain protocols. These protocols are based on smart contracts, programs that run on blockchains and are executed by a network of computers. Verifying the runtime correctness of smart contracts is a problem of compelling practical interest, since smart contracts can automatically transfer ownership of cryptocurrencies, electronic securities, and other valuable assets among untrusting parties. Such verification is challenging since smart contract execution is time-sensitive, and the clocks on different blockchains may not be perfectly synchronized. This paper describes a method for runtime monitoring of blockchain executions. First, we propose a generalized runtime verification technique for verifying partially synchronous distributed computations against metric temporal logic (MTL) by exploiting bounded-skew clock synchronization. Second, we introduce a progression-based formula rewriting scheme for monitoring MTL specifications that employs SMT solving techniques, and we report experimental results.
Authored by Ritam Ganguly, Yingjie Xue, Aaron Jonckheere, Parker Ljung, Benjamin Schornstein, Borzoo Bonakdarpour, Maurice Herlihy
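A minimal Python sketch of progression-based rewriting for a small MTL fragment over unit-time steps (illustrative only; the paper additionally handles bounded clock skew and discharges the rewritten formulas with SMT solving):

    # Formulas are tuples; progress() rewrites a formula against the
    # current state so the result must hold on the remaining trace.
    TRUE, FALSE = ("true",), ("false",)

    def Ap(p): return ("ap", p)
    def Until(f, g, a, b): return ("until", f, g, a, b)  # f U_[a,b] g

    def progress(f, state):
        if f in (TRUE, FALSE):
            return f
        tag = f[0]
        if tag == "ap":
            return TRUE if f[1] in state else FALSE
        if tag == "not":
            g = progress(f[1], state)
            return FALSE if g == TRUE else TRUE if g == FALSE else ("not", g)
        if tag in ("and", "or"):
            l, r = progress(f[1], state), progress(f[2], state)
            if tag == "and":
                if FALSE in (l, r): return FALSE
                return r if l == TRUE else l if r == TRUE else ("and", l, r)
            if TRUE in (l, r): return TRUE
            return r if l == FALSE else l if r == FALSE else ("or", l, r)
        # timed until: decrement the interval at each unit step
        g1, g2, a, b = f[1], f[2], f[3], f[4]
        if b < 0:
            return FALSE
        now = progress(g2, state) if a == 0 else FALSE
        if now == TRUE:
            return TRUE
        cont = progress(g1, state)
        nxt = Until(g1, g2, max(a - 1, 0), b - 1)
        later = FALSE if cont == FALSE else nxt if cont == TRUE else ("and", cont, nxt)
        if now == FALSE: return later
        if later == FALSE: return now
        return ("or", now, later)

    # req U_[0,2] grant: a grant must be observed within two steps.
    phi = Until(Ap("req"), Ap("grant"), 0, 2)
    for observed in [{"req"}, {"req"}, {"grant"}]:
        phi = progress(phi, observed)
    print(phi)  # ('true',): the property held on this trace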
Protocol Verification - As a new generation of high-performance chip-to-chip packet transmission protocol, Interlaken is usually embedded in FPGA, ARM, DSP, and other chips as an IP core. For such a high-bandwidth network communication protocol, testing and verification are key to supporting users' highly reliable applications. This paper analyzes the framing layer and protocol layer of an Interlaken IP core embedded in an FPGA, encapsulates pre-defined data at the client end of the protocol layer, tests operation at a transmission rate of 100 Gbps, and injects errors into one segment of the four-segment user side to simulate packet-encapsulation errors. The study found that when any control word in any segment is lost during packet transmission, about 1/2 of the packets are received and about 1/4 are received correctly.
Authored by Chao Zhang, Huabo Sun, Zichen Zhang, Lipei Zhang, Sun Lei
Protocol Verification - Symbolic protocol verification generally abstracts probabilities away, considering computations that succeed only with negligible probability, such as guessing random numbers or breaking an encryption scheme, as impossible. This abstraction, sometimes referred to as the perfect cryptography assumption, has proven very useful as it simplifies automation of the analysis. However, probabilities may also appear in the control flow, where they are generally not negligible. In this paper we consider a framework for symbolic protocol analysis with a probabilistic choice operator: the probabilistic applied pi calculus. We define and explore the relationships between several behavioral equivalences. In particular, we show the need for randomized schedulers and exhibit a counterexample to a result in a previous work that relied on non-randomized ones. As in other frameworks that mix both non-deterministic and probabilistic choices, schedulers may sometimes be unrealistically powerful. We therefore consider two subclasses of processes that avoid this problem. In particular, when considering purely non-deterministic protocols, as is done in classical symbolic verification, we show that a probabilistic adversary has—maybe surprisingly—a strictly superior distinguishing power for may testing, which, when the number of sessions is bounded, we show to coincide with purely possibilistic similarity.
Authored by Vincent Cheval, Raphaëlle Crubillé, Steve Kremer
Protocol Verification - Microservices is an architectural style that enables the design of applications as a series of services independent from each other in function, maintainability, deployability, and scalability. Classical distributed transaction protocols such as two-phase commit (2PC), which require all participants in a transaction to commit or roll back before the transaction can proceed, are not suitable for this pattern. This is why the saga pattern was chosen to replace 2PC. Saga is a mechanism that belongs to the 'database per service' pattern and guarantees distributed transactions that span many services. The saga pattern provides transaction management using a sequence of local transactions. A saga can be rolled back by compensating transactions, ensuring one of two outcomes: either all operations complete successfully, or the corresponding compensating transactions are run to undo the changes. In this paper, we verify the saga pattern using the Maude language. We use orchestration-based saga and present a way to check it.
Authored by Manel Djerou, Okba Tibermacine
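A minimal Python sketch of an orchestration-based saga with compensation (hypothetical services; the paper formalizes and model-checks this behavior in Maude):

    class SagaAbort(Exception):
        pass

    def run_saga(steps):
        """steps: list of (action, compensation) pairs. Either every action
        commits, or the compensations of all committed actions run in
        reverse order -- the two outcomes the saga pattern guarantees."""
        done = []
        try:
            for action, compensation in steps:
                action()
                done.append(compensation)
        except SagaAbort:
            for compensation in reversed(done):
                compensation()
            return "rolled back"
        return "committed"

    def reserve_flight():  print("flight reserved")
    def cancel_flight():   print("flight cancelled")
    def charge_card():     raise SagaAbort("payment declined")
    def refund_card():     print("card refunded")

    print(run_saga([(reserve_flight, cancel_flight),
                    (charge_card, refund_card)]))
    # -> flight reserved / flight cancelled / rolled back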
Protocol Verification - Bus protocols are critical for the operation of a system, as all communications are handled through the bus by following a predetermined structure. A verification IP is designed to check that the system follows the specified protocol for seamless communication between the multiple blocks in the system. As process technology shrinks, the number of sub-blocks in a system increases, and so does the verification complexity. In a traditional verification architecture, the design-under-test (DUT) signals are individually connected to the verification environment; by binding an interface to the sub-blocks, the signals are encapsulated and simpler to handle. In this work, an AHB verification module is designed by employing the interface-binding technique.
Authored by Vignesh Mahendra, D.S. Sam, A Hernisha, Josephine Atchaya
Protocol Verification - Attribute-based encryption (ABE) is an extension of identity-based encryption and public-key encryption. It can achieve fine-grained access control and a one-to-many encryption mode, making it suitable for practical applications. In addition to mathematical proofs, it is essential to verify its security properties using different protocols. This paper proposes two key exchange protocols to securely exchange the user secret keys in ABE. ProVerif, an automated cryptographic protocol verifier, is used for the protocol verification. We specifically aim to protect the confidentiality of the generated keys. The proposed protocols are formally analysed, and their security properties are formally proved.
Authored by Baasansuren Bat-Erdene, Yuping Yan, Mohammed Kamel, Peter Ligeti
Protocol Verification - System-on-Chip (SoC) designs are becoming more complex nowadays. Multiple Intellectual Properties (IPs) are integrated into a single SoC, and these IPs communicate using various bus protocols. Verification takes almost 70\% of the design cycle; hence, a reusable verification environment for these commonly used protocols is very important. In this paper, the AXI4-Lite protocol is verified using a UVM-based testbench structure. To verify all channels of the AXI protocol, data is written into a 4-bit shift register and read back. The UVM testbench acts as a master device that sends all control information, data, and addresses to the register through the AXI interface. To assess verification goal achievement, coverpoints are written, and functional and code coverage reports are analyzed. The Synopsys VCS tool is used for the simulation.
Authored by Hardi Sangani, Usha Mehta
Protocol Verification - Display Serial Interface (DSI) is a high-speed serial interface standard. It supports displays and touch screens in mobile devices such as smartphones, laptops, tablets, and other platforms. DSI describes several layers that define the detailed interconnect between a host processor and a peripheral device in a mobile system. The targeted DSI protocol layer is the low-level layer of the standard, which is responsible for byte organization, the addition of error-checking information, and packet formulation. In this paper, we propose a constrained-random Coverage-Driven Verification (CDV) approach for the DSI low-level protocol layer using the Universal Verification Methodology (UVM). This approach could be the basis of a standardized Verification Intellectual Property (VIP) to test and verify the standardized DSI layer. We provide a well-established base for a reusable and scalable UVM environment that can verify the DSI protocol layer using various techniques, such as an error-injection mechanism, SystemVerilog Assertions (SVA), and direct UVM sequences that aim to cover the different real-life scenarios between the host processor and the peripheral device. Our results show that all inserted errors are detected, assuring the functionality of the DSI low-level protocol layer. The different real-life scenarios between the host processor and the peripheral device are covered with 100\% functional coverage and 93\% code coverage.
Authored by Ahmed Ali, Ahmed Shalaby, Sherif Saif, Mohamed Taher
Image Intelligence in Cyber Security using Sensing System towards the Future Generation Intelligence
Privacy Policies and Measurement - The fundamental goal of tone mapping is to reproduce a given scene or image on display devices at a brightness close to that of the real world as perceived by the human eye. Medical imaging uses such techniques to reduce noise and sharpen details, enhancing the visual presentation of the image. Because details play such an important role in diagnosing and treating disease, it is critical to concentrate on the most important features when displaying medical images. Principal component analysis offers a method for reducing the complexity of high-dimensional data: it approximates high-dimensional data with fewer dimensions, where each principal component is a linear combination of the underlying components, ordered by the amount of information it carries. This technique can be applied to a wide range of recurring problems, including the effective analysis of Internet and image data sources at Big Data scale, along with the attendant privacy concerns and methods for securing the components of pattern environments and systems. Edges, on the other hand, which are almost always interesting features of an image, are characterized by sharp transitions in grey levels; averaging filters therefore have the undesirable side effect of blurring edges. Another application of this kind of method is the smoothing of false contours that result from using an insufficient number of grey levels.
Authored by Rajeev Kumar, Neha Sharma, Sandeep Kumar
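A minimal Python sketch of the PCA dimensionality reduction described above, via SVD (data and dimensions are arbitrary examples):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))          # 200 samples, 10 features
    Xc = X - X.mean(axis=0)                 # center each feature

    # Principal components are the right-singular vectors of the centered
    # data; singular values order them by the variance they explain.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 3                                   # keep the top-3 components
    Z = Xc @ Vt[:k].T                       # project to k dimensions

    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(f"{explained:.1%} of the variance kept in {k} dimensions")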
Privacy Policies and Measurement - The Function-as-a-Service cloud computing paradigm has made large-scale application development convenient and efficient as developers no longer need to deploy or manage the necessary infrastructure themselves. However, as a consequence of this abstraction, developers lose insight into how their code is executed and data is processed. Cloud providers currently offer little to no assurance of the integrity of customer data. One approach to robust data integrity verification is the analysis of data provenance—logs that describe the causal history of data, applications, users, and non-person entities. This paper introduces ProProv, a new domain-specific language and graphical user interface for specifying policies over provenance metadata to automate provenance analyses.
Authored by Kevin Dennis, Shamaria Engram, Tyler Kaczmarek, Jay Ligatti
Privacy Policies and Measurement - Although mobile edge computing (MEC) improves computation performance, including energy consumption and computation delay, through computation offloading, its emergence imposes unprecedented pressure on privacy protection. To this end, we propose a deep reinforcement learning (DRL)-based computation offloading scheme to jointly optimize privacy protection and computation performance. The privacy exposure risk caused by the offloading history is investigated, and an analysis metric is defined to evaluate the privacy level. To find the optimal offloading strategy, an algorithm combining actor-critic, off-policy, and maximum-entropy methods is proposed to accelerate the learning rate. Simulation results show that the proposed scheme performs better than other benchmarks.
Authored by Zhengjun Gao, Guowen Wu, Yizhou Shen, Hong Zhang, Shigen Shen, Qiying Cao
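The combination of actor-critic, off-policy learning, and maximum entropy corresponds to a soft-actor-critic-style objective; in its standard form (the paper's exact notation is an assumption):

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
             \Big[ r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big],

where $r$ would encode the joint privacy/performance reward and the temperature $\alpha$ trades reward against the entropy bonus that encourages exploration and speeds up learning.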
Privacy Policies and Measurement - Email is one of the oldest and most popular applications on today’s Internet and is used for business and private communication. However, most emails are still susceptible to being intercepted or even manipulated by the servers transmitting the messages. Users with S/MIME certificates can protect their email messages. In this paper, we investigate the market for S/MIME certificates and analyse the impact of the ordering and revocation processes on the users’ privacy. We complete those processes for each vendor and investigate the number of requests, the size of the data transfer, and the number of trackers on the vendor’s Web site. We further collect all relevant documents, including privacy policies, and report on their number of words, readability, and quality. Our results show that users must make at least 86 HTTP requests and transfer at least 1.35 MB to obtain a certificate and 178 requests and 2.03 MB to revoke a certificate. All but one vendor employ third-party tracking during these processes, which causes between 43 and 354 third-party requests. Our results further show that the vendors’ privacy policies are at least 1701 words long which requires a user approximately 7 minutes to read. The longest policy requires approximately half an hour to be read. Measurements of the readability of all vendors’ privacy policies indicate that users need a level of education that is nearly equivalent to a bachelor’s degree to comprehend the texts. We also report on the quality of the policies and find that the vendors achieve compliance scores between 45 \% and 90 \%. With our method, vendors can measure their impact on the users’ privacy and create better products. On the other hand, users benefit from an analysis of all S/MIME certificate vendors in that they can make an informed choice of their vendor based on the objective metrics obtained by our study. Ultimately, the results help to increase the prevalence of encrypted emails and render society less susceptible to surveillance.
Authored by Tobias Mueller, Max Hartenstein
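The readability findings above are grade-level-style measurements; one common such metric is the Flesch-Kincaid grade level, sketched below in Python (whether the paper uses exactly this formula is an assumption, and the counts are hypothetical):

    def fk_grade(words: int, sentences: int, syllables: int) -> float:
        # Standard Flesch-Kincaid grade-level formula.
        return 0.39 * words / sentences + 11.8 * syllables / words - 15.59

    # A policy averaging 25 words per sentence and 1.7 syllables per word:
    print(round(fk_grade(words=2500, sentences=100, syllables=4250), 1))
    # -> 14.2, i.e. roughly bachelor's-degree-level reading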
Privacy Policies and Measurement - With increased reliance on digital storage for personal, financial, medical, and policy information comes a greater demand for robust digital authentication and cybersecurity protection measures. Current security options include alpha-numeric passwords, two-factor authentication, and biometric options such as fingerprint or facial recognition; however, all of these methods have drawbacks. This project leverages the fact that physical handwritten signatures are still prevalent in society and that the thoroughly trained process and motions of a handwritten signature are unique to every individual. Thus, a writing stylus that can authenticate its user via inertial signature detection is proposed, which classifies inertial measurement features for user identification. The current prototype consists of two triaxial accelerometers, one mounted at each of the stylus' terminal ends. How the pen is held, stroke style, and writing speed all affect the stylus-tip accelerations, enabling unique signature detection and deterring forgery attacks. Novel, manual spatiotemporal features relating to such metrics were proposed, and a multi-layer perceptron was utilized for binary classification. Results of a preliminary user study are promising, with an overall accuracy of 95.7\%, sensitivity of 100\%, and recall rate of 90\%.
Authored by Divas Subedi, Isabella Yung, Digesh Chitrakar, Kevin Huang
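A minimal Python sketch of the classification stage: a multi-layer perceptron performing binary genuine-vs-forgery classification on handcrafted inertial features (synthetic stand-in data; the feature layout and network size are hypothetical, not the paper's):

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    # e.g. columns: mean/peak tip acceleration, stroke duration, grip tilt...
    genuine = rng.normal(loc=0.0, scale=1.0, size=(150, 8))
    forged  = rng.normal(loc=0.8, scale=1.2, size=(150, 8))
    X = np.vstack([genuine, forged])
    y = np.array([1] * 150 + [0] * 150)   # 1 = genuine user, 0 = forgery

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                        random_state=0).fit(Xtr, ytr)
    print(f"accuracy: {clf.score(Xte, yte):.2f}")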
Privacy Policies and Measurement - Although the number of smart Internet of Things (IoT) devices has grown in recent years, the public's perception of how effectively these devices secure IoT data has been questioned. Many IoT users do not have a good level of confidence in the security or privacy procedures implemented within IoT smart devices for protecting personal IoT data. Moreover, determining the level of confidence end users have in their smart devices is becoming a major challenge. In this paper, we present a study that focuses on identifying the privacy concerns IoT end users have when using IoT smart devices. We investigated multiple smart devices and conducted a survey to identify users' privacy concerns. Furthermore, we identify five IoT privacy-preserving (IoTPP) control policies that we define and employ in comparing the privacy measures implemented by various popular smart devices. Results from our study show that over 86\% of participants are very or extremely concerned about the security and privacy of their personal data when using smart IoT devices such as the Google Nest Hub or Amazon Alexa. In addition, our study shows that a significant number of IoT users may not be aware that their personal data is collected, stored, or shared by IoT devices.
Authored by Daniel Joy, Olivera Kotevska, Eyhab Al-Masri
Privacy Policies and Measurement - We report on the ideas and experiences of adapting Brane, a workflow execution framework, for use cases involving medical data exchange and processing. These use cases impose new requirements on the system to enforce policies encoding safety properties, ranging from access control to legal regulations pertaining to data privacy. Our approach emphasizes users’ control over the extent to which they cooperate in distributed execution, at the cost of revealing information about their policies.
Authored by Christopher Esterhuyse, Tim Muller, Thomas Van Binsbergen, Adam Belloum
Privacy Policies and Measurement - It is estimated that over 1 billion Closed-Circuit Television (CCTV) cameras are operational worldwide. The advertised main benefits of CCTV cameras have always been the same: physical security, safety, and crime deterrence. The current scale and rate of deployment of CCTV cameras bring additional research and technical challenges for CCTV forensics as well as for privacy enhancements.
Authored by Hannu Turtiainen, Andrei Costin, Timo Hämäläinen, Tuomo Lahtinen, Lauri Sintonen
Privacy Policies and Measurement - Modelling and analyzing massive policy discourse networks is of great importance in critical policy studies and has recently attracted increasing research interest. Yet effective representation schemes, quantitative policymaking metrics, and proper analysis methods remain largely unexplored. To address the above challenges, we propose a government policy network built with Latent Dirichlet Allocation embeddings, which models multiple entity types and the complex relationships between them. Specifically, we constructed the network from approximately 700 rural innovation and entrepreneurship policies released by the Chinese central government and eight provincial governments over the past eight years. We verified that entity degree in the policy network follows a power-law distribution. Moreover, we propose a metric, coordination strength, to evaluate the coordination between central and local departments, and we find that it effectively reflects their coordination relationship. This study could serve as a theoretical basis for applications such as policy discourse relationship prediction and departmental collaborative analysis.
Authored by Yilin Kang, Renwei Ou
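A quick Python sketch of checking a power-law degree distribution by a log-log linear fit (illustrative toy data; a rigorous analysis would use maximum-likelihood fitting, e.g. Clauset-Shalizi-Newman):

    import numpy as np

    degrees = np.array([1]*500 + [2]*180 + [3]*90 + [4]*50 + [5]*30 +
                       [8]*12 + [13]*5 + [21]*2)   # toy entity degrees
    values, counts = np.unique(degrees, return_counts=True)
    # Under a power law P(k) ~ k^-alpha, log(count) is linear in log(k).
    slope, intercept = np.polyfit(np.log(values), np.log(counts), 1)
    print(f"estimated exponent: {-slope:.2f}")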
Privacy Policies and Measurement - First introduced as a way of recording client-side state, third-party services have proliferated widely on the Web and have become a fundamental part of the Web ecosystem. However, there is widespread concern that third-party vendors are being abused to track and profile individuals online for commercial, analytical, and various other purposes. This paper builds a platform called "PRIVIS", aiming to provide unique insights into how the privacy ecosystem is structured and affected, through the analysis of data that stems from real users. First, to showcase what can be learned from this ecosystem through a data-driven analysis across country, time, and first-party categories, PRIVIS visualises data gathered from over 10K Chrome installers. It also equips participants with the means to collect and analyze their own data so that they can assess, from their own perspective, how their browsing habits are shared with third parties. Based on real-user datasets, third-party quantity is not the only measure of web privacy risk; the measure proposed in this paper is how well third-party providers know their users. Second, PRIVIS studies the interplay between user location, special periods (after the epidemic outbreak), and the overall number of third parties observed. The visualisation suggests that lockdown policies facilitated growth in the number of third parties: collectively, there was more third-party activity than both before the lockdowns and in the corresponding periods of the previous year, and among the lockdown stages, the first lockdown shows the most aggressive growth.
Authored by Xuehui Hu