Smart contracts typically manage financial assets, which makes them attractive attack targets. Many static analysis tools have been developed to facilitate the contract audit process, but not all of them take into account two special features of smart contracts: (1) external variables, like time, are constrained by real-world factors; (2) internal variables persist between executions. Since these features introduce implicit constraints into contracts, they significantly affect the performance of static tools, e.g., causing errors in reachability analysis and resulting in false positives. In this paper, we conduct a systematic study on implicit constraints from three aspects. First, we summarize the implicit constraints in smart contracts. Second, we evaluate the impact of such constraints on state-of-the-art static tools. Third, we propose a lightweight but effective mitigation method named ConSym to deal with such constraints and integrate it into OSIRIS. The evaluation results show that ConSym can filter out 96% of false positives and reduce false negatives by two-thirds.
Authored by Tingting Yin, Chao Zhang, Yuandong Ni, Yixiong Wu, Taiyu Wong, Xiapu Luo, Zheming Li, Yu Guo
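The paper's central observation, that real-world bounds on external variables prune spurious paths, can be made concrete with an SMT solver. The sketch below is not ConSym itself; the timestamp bounds and variable name are illustrative assumptions, and it only shows how one such implicit constraint renders a suspicious path condition infeasible:

```python
# A minimal sketch (not the authors' ConSym implementation) of how a symbolic
# executor might encode the real-world constraint that a block timestamp is
# bounded, so that paths requiring an impossible timestamp are pruned.
# Requires the z3-solver package; names and bounds are illustrative.
from z3 import BitVec, Solver, sat

timestamp = BitVec("block_timestamp", 64)

s = Solver()
# Implicit real-world constraint: timestamps lie between the Unix epochs of
# roughly the chain's launch date and a generous far-future bound.
s.add(timestamp >= 1438269973)          # 2015-07-30, around Ethereum's genesis
s.add(timestamp <= 4102444800)          # 2100-01-01, generous upper bound

# A path condition a naive analysis might report: "timestamp == 0".
# With the implicit constraint in place, the path is infeasible.
s.push()
s.add(timestamp == 0)
print(s.check() == sat)   # False: unreachable path, so no false positive
s.pop()
```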
The interaction between the transmission systems of doubly-fed wind farms and the power grid, and the resulting system stability, have long received wide attention both in China and abroad. In recent years, wind farms have commonly been equipped with static var generators (SVGs) to improve voltage stability. This paper therefore studies the subsynchronous oscillation (SSO) problem in a grid-connected doubly-fed wind farm with static var generators. First, based on impedance analysis, sequence impedance models of the doubly-fed induction generator (DFIG) and the static var generator are established. Then, based on the Bode-plot stability criterion and time-domain simulation, the influence of connecting the static var generator on the SSO of the system is analyzed. Finally, a sensitivity analysis of the main parameters of the doubly-fed induction generator and the static var generator is carried out. The results show that the most sensitive parameter is the proportional gain of the DFIG current inner loop, and its value should be reduced to lower the risk of SSO.
Authored by Yingchi Tian, Shiwu Xiao
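To make the Bode-based criterion concrete, here is a toy numerical check, not the paper's model: with invented grid and device impedance curves, it locates the 0 dB crossings of the impedance ratio and reports the phase margin there, which shrinks as SSO risk grows:

```python
# A toy illustration of the impedance-ratio Bode criterion: instability risk
# exists where |Zgrid/Zdevice| crosses 0 dB with the phase approaching 180
# degrees. The impedance expressions below are invented placeholders.
import numpy as np

f = np.logspace(0, 3, 2000)            # 1 Hz .. 1 kHz
w = 2 * np.pi * f

# Hypothetical impedances: a series R-L grid vs. a device exhibiting a
# negative-resistance region (a common cause of subsynchronous oscillation).
Z_grid = 0.02 + 1j * w * 1e-3
Z_dev = -0.05 + 1j * (w * 5e-4 - 1e2 / w)

ratio = Z_grid / Z_dev
mag_db = 20 * np.log10(np.abs(ratio))
phase = np.degrees(np.angle(ratio))

# Find 0 dB crossings and report the phase margin at each.
for i in np.where(np.diff(np.sign(mag_db)) != 0)[0]:
    pm = 180 - abs(phase[i])
    print(f"crossover near {f[i]:.1f} Hz, phase margin {pm:.1f} deg")
```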
In this paper, axial symmetry is exploited to analyze the deformation and stress of a wheel, reducing the scale of the analysis and the cost in industrial production. First, the material properties are defined; then the rotational cross-section of the wheel is modeled, boundary conditions are defined, the model is meshed with finite elements, and the angular velocity and pressure loads acting during rotation are applied. Solving the model yields the radial and axial deformation diagrams and the radial, axial, and equivalent stress distributions of the wheel. Exploiting axisymmetry reduces the cost of the analysis and applies to any material or component with this property, facilitating product design and improvement and reducing production cost.
Authored by Ye Yangfang, Ma Jing, Zhang Wenhui, Zhang Dekang, Zhou Shuhua, You Zhangping
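As a rough sanity check that needs no FEA package, the classic closed-form stresses of a solid rotating disk can be compared against such axisymmetric results; this is a textbook formula, not the paper's method, and the material and geometry values below are invented, steel-like placeholders:

```python
# Closed-form radial and hoop stresses of a solid rotating disk, often used
# to sanity-check axisymmetric FEA of rotating parts. Values are illustrative.
import numpy as np

rho, nu = 7850.0, 0.3          # density (kg/m^3), Poisson's ratio
omega, b = 300.0, 0.25         # angular velocity (rad/s), outer radius (m)

r = np.linspace(0.0, b, 6)
sigma_r = (3 + nu) / 8 * rho * omega**2 * (b**2 - r**2)
sigma_t = (3 + nu) / 8 * rho * omega**2 * b**2 \
          - (1 + 3 * nu) / 8 * rho * omega**2 * r**2

for ri, sr, st in zip(r, sigma_r / 1e6, sigma_t / 1e6):
    print(f"r = {ri:.2f} m: sigma_r = {sr:.3f} MPa, sigma_theta = {st:.3f} MPa")
```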
This paper uses finite element analysis to compare a split box and an integrated box in terms of modal analysis and static analysis. It concludes that the integrated box offers excellent vibration characteristics and high strength tolerance. Furthermore, based on the S-N curve of the material and the load spectrum of the box, the fatigue analysis software Fe-safe estimates the fatigue life of the integrated box at 26.24 years, which meets the service life requirements. The reliability analysis module PDS is used to calculate the reliability of the box; the reliability of the integrated box is 96.5999%, which meets the performance requirements.
Authored by Liang Xuan, Chunfei Zhang, Siyuan Tian, Tianmin Guan, Lei Lei
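The Fe-safe workflow mentioned above can be approximated in miniature with Basquin's S-N relation and Palmgren-Miner damage accumulation; all parameters and the load spectrum below are invented for illustration, not the paper's data:

```python
# A minimal sketch of S-N-curve fatigue life estimation in the spirit of the
# Fe-safe analysis described above. Basquin parameters and the load spectrum
# are invented placeholders.
import numpy as np

# Basquin relation: S = Sf * (2N)**b  =>  N = 0.5 * (S / Sf)**(1 / b)
Sf, b = 900.0, -0.09          # hypothetical fatigue strength coeff. (MPa), exponent

def cycles_to_failure(stress_mpa):
    return 0.5 * (stress_mpa / Sf) ** (1.0 / b)

# Load spectrum: (stress amplitude in MPa, cycles applied per year)
spectrum = [(250.0, 2.0e5), (180.0, 1.5e6), (120.0, 8.0e6)]

# Palmgren-Miner linear damage accumulation: failure when damage sums to 1.
damage_per_year = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"estimated life: {1.0 / damage_per_year:.1f} years")
```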
In today’s fast-paced world, cybercrime has time and again proved to be one of the biggest hindrances to national development. According to recent trends, victims’ data is most often breached by trapping them in a phishing attack. The security and privacy of users’ data has become a matter of tremendous concern. To address this problem and protect naive users’ data, we propose a tool that performs static analysis on a Windows executable to help identify whether it is malicious. We also perform a comparative study by implementing different classification models: Logistic Regression, Neural Network, and SVM. The static analysis approach takes as input parameters of the executables, namely properties obtained from PE section headers, i.e., API calls. Comparing the different models identifies the best model for static malware analysis.
Authored by Naman Aggarwal, Pradyuman Aggarwal, Rahul Gupta
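A condensed version of the described comparison is straightforward to set up with scikit-learn; in this sketch, synthetic features stand in for the real PE-section-header and API-call features used by the authors:

```python
# A condensed sketch of the comparative study described above: training
# Logistic Regression, SVM, and a neural network on static PE features.
# Synthetic data stands in for real PE-header / API-call feature vectors.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 2000 samples x 40 features, e.g. counts of selected API calls per binary.
X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(kernel="rbf"),
    "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=800),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, f"accuracy = {model.score(X_te, y_te):.3f}")
```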
Software-based scan diagnosis is the de facto method for debugging logic scan failures. The physical analysis success rate is high on dies diagnosed with a maximum score, one symptom, one suspect, and a shorter net. This limits the maximum utilization of scan diagnosis data for PFA. There have been several attempts to combine dynamic fault isolation techniques with scan diagnosis results to enhance the utilization and success rate. However, this is not a feasible approach for a foundry, due to limited product design and test knowledge and hardware requirements such as a probe card and tester. Suitable for a foundry, an enhanced diagnosis-driven analysis scheme was proposed in [1] that classifies failures as front-end-of-line (FEOL) and back-end-of-line (BEOL), improving the die selection process for PFA. In this paper, static NIR PEM and a defect prediction approach are applied to dies that are already classified as FEOL and BEOL failures yet considered unsuitable for PFA due to low scores, multiple symptoms, and multiple suspects. Successful case studies are highlighted to showcase the effectiveness of using static NIR PEM as the next-level screening process to further maximize the utilization of scan diagnosis data.
Authored by S. Moon, D. Nagalingam, Y. Ngow, A. Quah
Static analyzers have become increasingly popular both as developer tools and as subjects of empirical studies. Whereas static analysis tools exist for disparate programming languages, the bulk of the empirical research has focused on the popular Java programming language. In this paper, we investigate to what extent some known results about using static analyzers for Java change when considering C#, another popular object-oriented language. To this end, we combine two replications of previous Java studies. First, we study which static analysis tools are most widely used among C# developers, and which warnings are more commonly reported by these tools on open-source C# projects. Second, we develop and empirically evaluate EagleRepair: a technique to automatically fix code in response to static analysis warnings; this is a replication of our previous work for Java [20]. Our replication indicates, among other things, that 1) static code analysis is fairly popular among C# developers too; 2) ReSharper is the most widely used static analyzer for C#; 3) several static analysis rules are commonly violated in both Java and C# projects; and 4) automatically generating fixes to static code analysis warnings with good precision is feasible in C#. The EagleRepair tool developed for this research is available as open source.
Authored by Martin Odermatt, Diego Marcilio, Carlo Furia
Code-graph based software defect prediction methods have become a research focus in the SDP field. Among them, the Code Property Graph is used as a data representation for code defects owing to its ability to characterize the structural features and dependencies of defective code. However, because of the coarse granularity of the Code Property Graph, redundant information unrelated to defects is often attached to the characterization of software defects. How to locate software defects at a finer granularity within the Code Property Graph thus remains an open problem. Static analysis is a technique for identifying software defects using predefined defect rules, and many proven static analysis tools exist in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of static analysis tools. Experiments show that this location method can effectively predict the location of specific defect types in real software programs.
Authored by Haoxiang Shi, Wu Liu, Jingyu Liu, Jun Ai, Chunhui Yang
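A minimal sketch of the localization idea, assuming the analyzer reports a (file, line) pair and the CPG nodes carry location attributes; real CPGs, e.g., those produced by Joern, are far richer than this hand-made toy:

```python
# A schematic of the localization idea: take a (file, line) defect report
# from a static analyzer and select only the Code Property Graph nodes whose
# location matches, yielding a finer-grained defect subgraph.
import networkx as nx

cpg = nx.DiGraph()
cpg.add_node("call:strcpy", file="util.c", line=42)
cpg.add_node("arg:dst", file="util.c", line=42)
cpg.add_node("decl:dst", file="util.c", line=40)
cpg.add_edge("call:strcpy", "arg:dst", label="AST")
cpg.add_edge("decl:dst", "arg:dst", label="REACHES")

report = {"file": "util.c", "line": 42, "rule": "buffer-overflow"}
hits = [n for n, d in cpg.nodes(data=True)
        if d["file"] == report["file"] and d["line"] == report["line"]]
print("defect-relevant CPG nodes:", hits)
```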
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of the alarms generated on a version are repeated, i.e., they have also been generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress the repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after the suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using the corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes that have different priorities assigned to them. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms, obtained by prioritizing the classes, can help suppress lower-ranked alarms when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open-source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
Authored by Tukaram Muske, Alexander Serebrenik
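The ranking step can be pictured as a simple priority sort over change classes; the class names and priorities below are illustrative placeholders, not the paper's exact six classes:

```python
# A schematic ranking step: each delta alarm carries the type of code change
# that made it a delta alarm, and classes judged more likely to actually
# impact the alarm are inspected first. Names/priorities are illustrative.
CLASS_PRIORITY = {
    "value_changed": 1,        # change likely impacts the alarm: inspect first
    "condition_changed": 2,
    "call_context_changed": 3,
    "declaration_moved": 4,
    "comment_or_format": 5,
    "unrelated_file": 6,       # change unlikely to impact the alarm
}

alarms = [
    {"id": "A1", "change_class": "comment_or_format"},
    {"id": "A2", "change_class": "value_changed"},
    {"id": "A3", "change_class": "call_context_changed"},
]

for alarm in sorted(alarms, key=lambda a: CLASS_PRIORITY[a["change_class"]]):
    print(alarm["id"], alarm["change_class"])
```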
Native code is now commonplace within Android app packages, where it co-exists and interacts with Dex bytecode through the Java Native Interface to deliver rich app functionalities. Yet, state-of-the-art static analysis approaches have mostly overlooked the presence of such native code, which may nonetheless implement key sensitive, or even malicious, parts of the app behavior. This limitation of the state of the art is a severe threat to validity in a large range of static analyses that do not have a complete view of the executable code in apps. To address this issue, we propose a new advance in the ambitious research direction of building a unified model of all code in Android apps. The JUCIFY approach presented in this paper is a significant step towards such a model: we extract and merge call graphs of native code and bytecode to make the final model readily usable by a common Android analysis framework; in our implementation, JUCIFY builds on the Soot internal intermediate representation. We performed empirical investigations to highlight how, without the unified model, a significant number of Java methods called from native code are “unreachable” in apps' call graphs, both in goodware and malware. Using JUCIFY, we were able to enable static analyzers to reveal cases where malware relied on native code to hide invocations of payment-library code or of other sensitive code in the Android framework. Additionally, JUCIFY's model enables state-of-the-art tools to achieve better precision and recall in detecting data leaks through native code. Finally, we show that by using JUCIFY we can find sensitive data leaks that pass through native code.
Authored by Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, Xiaoyu Sun, Kevin Allix, Tegawendé Bissyandé, Jacques Klein
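The core merging idea can be sketched with networkx: compose the Dex and native call graphs, then add edges at the JNI boundary, after which natively invoked Java methods become reachable; all method names here are invented:

```python
# A toy version of the unified-model idea: union the bytecode call graph with
# the native-code call graph and connect them at JNI boundary edges, making
# natively invoked Java methods reachable.
import networkx as nx

dex_cg = nx.DiGraph([("MainActivity.onCreate", "Native.init")])
native_cg = nx.DiGraph([("native_init", "native_pay")])

unified = nx.compose(dex_cg, native_cg)
# JNI bindings discovered by the analysis link the two worlds in both
# directions: Java -> native entry points, native -> Java callbacks.
unified.add_edge("Native.init", "native_init")
unified.add_edge("native_pay", "PaymentLib.charge")

print(nx.has_path(unified, "MainActivity.onCreate", "PaymentLib.charge"))  # True
```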
Synthetic static code analysis test suites are important for testing the basic functionality of such tools. We present a framework that uses different source code patterns to generate Cross-Site Scripting and SQL injection test cases. A decision tree is used to determine whether the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems in static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
Authored by Felix Schuckert, Hanno Langweg, Basel Katt
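A stand-in for the decision step, with invented features and rules rather than the framework's real source-code patterns, might look like this:

```python
# A minimal stand-in for the decision tree described above: given the data
# flow patterns composed into a generated test case, a small hand-built tree
# decides whether the case is vulnerable. Features/rules are illustrative.
def is_vulnerable(case):
    if case["sanitizer"] == "none":
        return True                      # tainted input reaches the sink
    if case["sanitizer"] == "html_escape":
        return case["sink"] == "sql"     # wrong sanitizer for a SQL sink
    return False

cases = [
    {"sanitizer": "none", "sink": "html"},
    {"sanitizer": "html_escape", "sink": "sql"},
    {"sanitizer": "parameterized", "sink": "sql"},
]
print([is_vulnerable(c) for c in cases])   # [True, True, False]
```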
The increasing use of Infrastructure as Code (IaC) in DevOps brings benefits in the speed and reliability of deployment operations, but it also extends to infrastructure code the challenges typical of software systems. IaC scripts can contain defects that result in security and reliability issues in the deployed infrastructure, so techniques for detecting and preventing them are needed. We analyze and survey the current state of research in this respect by conducting a literature review on static analysis techniques for IaC. We describe the analysis techniques, defect categories, and platforms targeted by tools in the literature.
Authored by Michele Chiari, Michele De Pascalis, Matteo Pradella
Long analysis times are a key bottleneck for the widespread adoption of whole-program static analysis tools. Fortunately, however, a user is often only interested in finding errors in the application code, which constitutes a small fraction of the whole program. Current application-focused analysis tools overapproximate the effect of the library and hence reduce the precision of the analysis results. However, empirical studies have shown that users have high expectations of precision and will ignore tool results that do not meet these expectations. In this paper, we introduce the first tool, QueryMax, that significantly speeds up an application code analysis without dropping any precision. QueryMax acts as a pre-processor to an existing analysis tool, selecting the partial library that is most relevant to the analysis queries in the application code. The selected partial library plus the application is given as input to the existing static analysis tool, with the remaining library pointers treated as the bottom element in the abstract domain. This achieves a significant speedup over a whole-program analysis, at the cost of a few lost errors and with no loss in precision. We instantiate and run experiments on QueryMax for a cast-check analysis and a null-pointer analysis. For a particular configuration, QueryMax enables these two analyses to achieve, relative to a whole-program analysis, an average recall of 87%, a precision of 100%, and a geometric-mean speedup of 10x.
Authored by Akshay Utture, Jens Palsberg
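The pre-processing step can be pictured as a reachability computation over the library dependency graph, seeded with the library types that the application queries touch; the class names and edges below are invented:

```python
# A sketch of the partial-library selection idea: starting from the library
# classes the analysis queries actually reference, keep only what is
# reachable in the dependency graph; everything else is later treated as the
# bottom element of the abstract domain.
import networkx as nx

lib_deps = nx.DiGraph([
    ("ArrayList", "AbstractList"),
    ("AbstractList", "Object"),
    ("HashMap", "AbstractMap"),
    ("AbstractMap", "Object"),
    ("Crypto", "Object"),
])

# Library types referenced by the analysis queries in the application code.
query_roots = {"ArrayList"}

partial_library = set()
for root in query_roots:
    partial_library |= {root} | nx.descendants(lib_deps, root)
print("selected partial library:", sorted(partial_library)))
# HashMap, AbstractMap, and Crypto are omitted and treated as bottom.
```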
Static analysis tools generate a large number of alarms that require manual inspection. In prior work, repositioning of alarms was proposed to (1) merge multiple similar alarms together and replace them with fewer alarms, and (2) report alarms as close as possible to the causes of their generation. The premise is that the proposed merging and repositioning of alarms will reduce the manual inspection effort. To evaluate this premise, this paper presents an empirical study with 249 developers on the proposed merging and repositioning of static alarms. The study is conducted using static analysis alarms generated on C programs, where the alarms are representative of the merging vs. non-merging and repositioning vs. non-repositioning situations in real-life code. Developers were asked to manually inspect and determine whether assertions added to the C code, corresponding to the alarms, hold. Additionally, two spatial cognitive tests were administered to relate performance to cognitive abilities. The empirical evaluation results indicate that, contrary to expectations, there was no evidence that merging and repositioning of alarms reduces manual inspection effort or improves inspection accuracy (at times, a negative impact was found). Cognitive abilities did correlate with comprehension and alarm inspection accuracy.
Authored by Niloofar Mansoor, Tukaram Muske, Alexander Serebrenik, Bonita Sharif
The proliferation of autonomous and connected vehicles on our roads is increasingly felt. However, the problems related to their energy consumption, safety, and security keep arising in debates among the various stakeholders. Focusing on the security aspect of such systems, we see a family of problems that must be investigated as soon as possible, particularly those that may manifest as the system expands. This work therefore aims to model and simulate the behavior of a system of autonomous and connected vehicles facing a malware invasion. To achieve this objective, we propose a model for our system inspired by those used in epidemiology, such as SI, SIR, SEIR, etc. Adapting these to our case study, we define stochastic processes to characterize its dynamics. After fixing the values of the various parameters and of the initial conditions, we run 100 simulations of our system, visualize the results obtained, analyze them, and offer interpretations. We conclude by outlining the lessons and recommendations drawn from the results.
Authored by Manal Mouhib, Kamal Azghiou, Abdelhamid Benali
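A minimal stochastic SIR-style simulation in the spirit of the abstract, with invented rates and fleet size (vehicles are susceptible, infected by malware, or recovered/patched), might look as follows:

```python
# A minimal stochastic SIR simulation: vehicles are Susceptible, Infected
# (by malware), or Recovered (patched). Rates and fleet size are invented.
import random

def simulate(n=500, i0=5, beta=0.3, gamma=0.1, steps=100, seed=None):
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        # Each step, infections and recoveries are drawn stochastically.
        new_inf = sum(rng.random() < beta * i / n for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# 100 runs, as in the study; report the mean final infected count.
finals = [simulate(seed=k)[-1][1] for k in range(100)]
print(sum(finals) / len(finals))
```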
Low-frequency oscillation (LFO) is a security and stability issue of major concern in power systems, and measurement data play an important role in the online monitoring and analysis of low-frequency oscillation parameters. To address the problem that noisy measurement data reduce the accuracy of modal parameter identification, a VMD-SSI modal identification algorithm is proposed, which combines the variational mode decomposition (VMD) algorithm for noise reduction with the stochastic subspace algorithm for identification. The VMD algorithm decomposes and reconstructs the noisy initial signal, filtering out the noise. The optimized signal is then fed into the stochastic subspace identification (SSI) algorithm, which yields the modal parameters. Simulation of a three-machine nine-node system verifies that the VMD-SSI modal identification algorithm has good anti-noise performance.
Authored by Yanjun Zhang, Peng Zhao, Ziyang Han, Luyu Yang, Junrui Chen
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution results in fast computation for small parameter spaces. In the case of less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities is more scalable. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over various properties, threat models, and case studies.
Authored by Lisa Oakley, Alina Oprea, Stavros Tripakis
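A small worked instance of the underlying verification problem: computing the probability of eventually reaching a bad state in a DTMC, then recomputing it after an ε-bounded perturbation of one transition row. The chain and ε below are invented:

```python
# Reachability probability in a DTMC before and after an adversary perturbs
# one transition row within an epsilon ball. The chain is a toy example.
import numpy as np

# States: 0, 1 transient; 2 = bad (absorbing), 3 = safe (absorbing).
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.4, 0.2, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def reach_bad(P, transient=(0, 1), bad=2):
    idx = list(transient)
    Q = P[np.ix_(idx, idx)]            # transient-to-transient block
    b = P[idx, bad]                    # one-step probability into the bad state
    # x = Qx + b  =>  x = (I - Q)^(-1) b
    return np.linalg.solve(np.eye(len(idx)) - Q, b)

print("nominal:", reach_bad(P)[0])

# Epsilon perturbation of state 0's row (still a valid distribution).
eps = 0.05
P_adv = P.copy()
P_adv[0] += np.array([-eps, 0.0, eps, 0.0])
print("perturbed:", reach_bad(P_adv)[0])
```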
When it comes to cryptographic random number generation, poor understanding of the security requirements and the “mythical aura” of black-box statistical testing frequently lead it to be used as a substitute for cryptanalysis. To make things worse, a seemingly standard document, NIST SP 800-22, describes 15 statistical tests and suggests that they can be used to evaluate random and pseudorandom number generators in cryptographic applications. The Chinese standard GM/T 0005-2012 describes similar tests. These documents have not aged well. The weakest pseudorandom number generators will easily pass these tests, promoting false confidence in insecure systems. We strongly suggest that SP 800-22 be withdrawn by NIST; we consider it to be not just irrelevant but actively harmful. We illustrate this by discussing the “reference generators” contained in the SP 800-22 document itself. None of these generators are suitable for modern cryptography, yet they pass the tests. For future development, we suggest focusing on stochastic modeling of entropy sources instead of model-free statistical tests. Random bit generators should also be reviewed for potential asymmetric backdoors via trapdoor one-way functions, and for security against quantum computing attacks.
Authored by Markku-Juhani Saarinen
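The abstract's claim is easy to reproduce: a cryptographically worthless linear congruential generator typically sails through SP 800-22's monobit (frequency) test. The sketch below implements just that one test; the LCG constants are the classic glibc-style ones, and the bit-extraction choice is an illustrative assumption:

```python
# A predictable LCG passing the SP 800-22 monobit test, illustrating why
# passing such tests says little about cryptographic quality.
import math

def lcg_bits(n, seed=12345):
    x, bits = seed, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31
        bits.append((x >> 15) & 1)     # take one middle bit per output
    return bits

bits = lcg_bits(1_000_000)
s = sum(2 * b - 1 for b in bits)       # +1/-1 running sum
p_value = math.erfc(abs(s) / math.sqrt(2 * len(bits)))
print(f"monobit p-value = {p_value:.3f}")  # typically well above 0.01: "passes"
```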
Security evaluation can be performed using a variety of analysis methods, such as attack trees, attack graphs, threat propagation models, stochastic Petri nets, and so on. These methods analyze the effect of attacks on the system and estimate security attributes from different perspectives. However, they require information from experts in the application domain to properly capture the key elements of an attack scenario: i) the attack paths a system could be subject to, and ii) the different characteristics of the possible adversaries. For this reason, some recent works have focused on the generation of low-level security models from a high-level description of the system, hiding the technical details from the modeler. In this paper we build on an existing ontology framework for security analysis, available in the ADVISE Meta tool, and extend it in two directions: i) to cover the attack patterns available in the CAPEC database, a comprehensive dictionary of known patterns of attack, and ii) to capture all the adversaries’ profiles defined in the Threat Agent Library (TAL), a reference library for defining the characteristics of external and internal threat agents ranging from industrial spies to untrained employees. The proposed extension supports a richer combination of adversaries’ profiles and attack paths, and provides guidance on how to further enrich the ontology based on taxonomies of attacks and adversaries.
Authored by Francesco Mariotti, Matteo Tavanti, Leonardo Montecchi, Paolo Lollini
The smart grid is a new generation of grid that integrates the traditional grid and grid information systems, and the information security of the smart grid is an extremely important part of the whole grid. Research on trusted computing technology provides new ideas for protecting the information security of the power grid. To address the problem of large deviations in the calculation of trusted dynamic thresholds, caused by characteristics such as self-similarity and traffic bursts in smart grid information collection, a traffic prediction model based on ARMA and a Poisson process is proposed. The Hurst coefficient is determined more accurately using R/S analysis, which ultimately improves the efficiency and accuracy of the trusted dynamic threshold calculation.
Authored by Fangfang Dang, Lijing Yan, Shuai Li, Dingding Li
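The R/S step can be sketched compactly; the synthetic white-noise input below, for which H should come out near 0.5, stands in for real smart-grid traffic samples:

```python
# A compact R/S (rescaled range) estimate of the Hurst exponent, the step
# used above to characterize self-similarity in collected traffic.
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())          # cumulative deviations
            r = z.max() - z.min()                # range
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) vs log(n) estimates the Hurst exponent H.
    return np.polyfit(log_n, log_rs, 1)[0]

traffic = np.random.default_rng(0).normal(size=4096)   # H ~ 0.5 for white noise
print(f"H = {hurst_rs(traffic):.2f}")
```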
The use of Virtual Machine (VM) migration as support for software rejuvenation was introduced more than a decade ago. Since then, several works have validated this approach from experimental and theoretical perspectives. Recently, some works shed light on the possibility of using the same technique as a Moving Target Defense (MTD). However, to date, no work has evaluated the availability and security levels while applying VM migration for both rejuvenation and MTD (multipurpose VM migration). In this paper, we conduct a comprehensive evaluation using Stochastic Petri Net (SPN) models to tackle this challenge. The evaluation covers the steady-state system availability, expected MTD protection, and related metrics of a system under time-based multipurpose VM migration. Results show that the availability and security improvement due to VM migration deployment surpasses 50% in the best scenarios. However, there is a trade-off between availability and security metrics, meaning that improving one implies compromising the other.
Authored by Matheus Torquato, Paulo Maciel, Marco Vieira
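The kind of steady-state availability figure that SPN tools compute can be illustrated in miniature with a hand-built continuous-time Markov chain; the states and rates below are invented assumptions, not the paper's model:

```python
# A miniature of the steady-state availability computation that SPN tools
# automate: a CTMC alternating between "up", a migration state (assumed to
# make the VM briefly unavailable), and "failed". Rates (per hour) invented.
import numpy as np

# States: 0 = up, 1 = migrating, 2 = failed. Rows of Q sum to zero.
Q = np.array([
    [-0.11,  0.10,  0.01],   # up -> migrating (MTD trigger), up -> failed
    [ 2.00, -2.00,  0.00],   # migration completes quickly
    [ 0.50,  0.00, -0.50],   # repair returns the system to "up"
])

# Steady state: pi @ Q = 0 with sum(pi) = 1, solved as a least-squares system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"steady-state availability = {pi[0]:.4f}")
```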
In this paper, we consider a discrete-time stochastic Stackelberg game in which a defender (also called the leader) has to defend a target against an attacker (also called the follower). The attacker has a private type that evolves as a controlled Markov process. The objective is to compute the stochastic Stackelberg equilibrium of the game, where the defender commits to a strategy. The attacker's strategy is the best response to the defender's strategy, and the defender's strategy is optimal given that the attacker plays a best response. In general, computing such an equilibrium involves solving a fixed-point equation for the whole game. In this paper, we present an algorithm that computes such strategies by solving lower-dimensional fixed-point equations for each time t. Based on this algorithm, we compute the Stackelberg equilibrium of a security example.
Authored by Deepanshu Vasal
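To make the solution concept concrete, here is a toy static, full-information Stackelberg computation with pure-strategy commitment; the payoff matrices are invented, and the paper's dynamic, private-type setting is far richer than this:

```python
# Toy Stackelberg equilibrium: the defender commits to a pure strategy, the
# attacker best-responds, and the defender picks the commitment yielding the
# best resulting payoff. Payoff matrices are invented placeholders.
import numpy as np

# Rows: defender actions; columns: attacker actions.
U_def = np.array([[3, -2], [1,  2]])
U_att = np.array([[-1, 4], [2, -3]])

best_val, best_pair = -np.inf, None
for d in range(U_def.shape[0]):
    a = int(np.argmax(U_att[d]))       # attacker's best response to commitment d
    if U_def[d, a] > best_val:
        best_val, best_pair = U_def[d, a], (d, a)

print("equilibrium (defender, attacker):", best_pair, "defender payoff:", best_val)
```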
The Common Vulnerability Scoring System (CVSS) is intended to capture the key characteristics of a vulnerability and correspondingly produce a numerical score to indicate its severity. Important efforts have been devoted to building a CVSS stochastic model that provides a high-level risk assessment to better support cybersecurity decision-making. However, these efforts do not consider HPC (High-Performance Computing) networks using a Science Demilitarized Zone (DMZ) architecture, which has special design principles to facilitate data transfer, analysis, and storage through a broadband backbone. In this paper, an HPCvul (CVSS-based vulnerability and risk assessment) approach is proposed for HPC networks to provide ongoing awareness of the HPC security situation in a dynamic cybersecurity environment. For this purpose, HPCvul advocates the standardization of the security-related data collected from the network to achieve data portability. HPCvul adopts an attack graph to model the likelihood of successful exploitation of a vulnerability. It is able to merge multiple attack graphs from different HPC subnets to yield a full picture of a large HPC network. Substantial results are presented in this work to demonstrate the design of HPCvul and its performance.
Authored by Jayanta Debnath, Derock Xie
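The graph-merging idea can be sketched with networkx: per-subnet attack graphs share boundary hosts, and composing them permits chaining exploit likelihoods (which HPCvul derives from CVSS) along a full path. All node names and scores below are invented:

```python
# A schematic of the attack-graph merge: per-subnet graphs share boundary
# hosts; composing them yields a network-wide view on which exploit
# likelihoods can be chained along a path (assuming independent exploits).
import networkx as nx

sub_a = nx.DiGraph()
sub_a.add_edge("internet", "dtn-node", likelihood=0.6)   # DTN in the Science DMZ
sub_b = nx.DiGraph()
sub_b.add_edge("dtn-node", "hpc-login", likelihood=0.4)
sub_b.add_edge("hpc-login", "compute", likelihood=0.7)

full = nx.compose(sub_a, sub_b)   # merge on the shared boundary node

prob = 1.0
path = nx.shortest_path(full, "internet", "compute")
for u, v in zip(path, path[1:]):
    prob *= full[u][v]["likelihood"]
print(path, f"chained likelihood = {prob:.3f}")
```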
Swarm learning (SL) is an emerging and promising decentralized machine learning paradigm that has achieved high performance in clinical applications. SL solves the problem of the central structure in federated learning by combining edge computing with a blockchain-based peer-to-peer network. While it shows promising results under the assumption of independent and identically distributed (IID) data across participants, SL suffers from performance degradation as the degree of non-IID data increases. To address this problem, we propose a generative augmentation framework for swarm learning called SL-GAN, which augments non-IID data by generating synthetic data from the participants. SL-GAN trains generators and discriminators locally and aggregates them periodically via a randomly elected coordinator in the SL network. Under standard assumptions, we theoretically prove the convergence of SL-GAN using stochastic approximation. Experimental results demonstrate that SL-GAN outperforms state-of-the-art methods on three real-world clinical datasets: Tuberculosis, Leukemia, and COVID-19.
Authored by Zirui Wang, Shaoming Duan, Chengyue Wu, Wenhao Lin, Xinyu Zha, Peiyi Han, Chuanyi Liu
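A bare-bones sketch of one aggregation round as described, with plain numpy arrays standing in for generator weights and a random draw standing in for the coordinator election (the real system trains GANs and runs over a blockchain peer-to-peer network):

```python
# One aggregation round: a randomly elected coordinator averages locally
# trained generator parameters and would broadcast the result. Weights and
# participant count are invented placeholders.
import random
import numpy as np

def aggregation_round(participant_weights, rng):
    coordinator = rng.randrange(len(participant_weights))   # random election
    averaged = [np.mean(layer, axis=0)                      # element-wise mean
                for layer in zip(*participant_weights)]
    return coordinator, averaged

rng = random.Random(0)
# Three participants, each with two "layers" of generator weights.
weights = [[np.full(4, i), np.full(3, 10.0 * i)] for i in range(1, 4)]
coord, merged = aggregation_round(weights, rng)
print("coordinator:", coord, "merged layer 0:", merged[0])
```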
With the continuous development of the Internet, artificial intelligence, 5G, and other technologies, various issues have started to receive attention, among which network security is now one of the key research directions for scholars at home and abroad. Building on traditional Internet technology, this paper develops a security identification system at the physical layer of the Internet, which can identify security problems in network infrastructure equipment and resolve them at the physical layer. Compared with traditional security identification systems developed at the network layer, a system developed at the physical layer can base its protection on the physical origin of traffic and thus solve part of the network security problem at its root, effectively identifying and resolving network security issues. The experimental results show that the security identification system identifies basic network security problems very effectively; the system is developed on the physical layer of the Internet, with protection carried out from the physical device. In the experiment, the retransmission symbol error rates of the CQ-PNC algorithm and the ML algorithm are 110 and 102, respectively; the latter has a lower error rate and provides better protection.
Authored by Yunge Huang