In an advanced metering infrastructure (AMI), the electric utility collects power consumption data from smart meters to improve energy optimization and provides detailed information on power consumption to its customers. However, AMI is vulnerable to data falsification attacks launched by organized adversaries. Such attacks can be detected by analyzing customers' fine-grained power consumption data; however, analyzing customers' private data violates their privacy. Although homomorphic encryption-based schemes have been proposed to tackle this problem, their disadvantage is a long execution time. This paper proposes a new privacy-preserving data falsification detection scheme that shortens the execution time. We adopt elliptic curve cryptography (ECC)-based homomorphic encryption (HE), which enables detection without revealing customer power consumption data. HE is a form of encryption that permits computations on encrypted data without decryption, and ECC keeps the computation lightweight. Our experimental evaluation shows that the proposed scheme runs 18 times faster than the CKKS scheme, a common HE scheme.
Authored by Sanskruti Joshi, Ruixiao Li, Shameek Bhattacharjee, Sajal Das, Hayato Yamana
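As a rough illustration of the additive homomorphism such a scheme builds on, the sketch below uses a toy "exponential ElGamal" construction over a small prime field rather than the authors' ECC implementation; the modulus, generator, keys, and meter readings are illustrative assumptions only.

```python
# Toy sketch of the additive homomorphism that ECC/ElGamal-style HE schemes
# rely on, using "exponential ElGamal" over a small prime-order group.
# This is NOT the authors' scheme; the parameters are insecurely small and
# chosen only to keep the example self-contained and runnable.
import random

p = 0xFFFFFFFB  # small prime modulus (insecure, demo only)
g = 5           # assumed generator

sk = random.randrange(2, p - 1)      # utility's secret key
pk = pow(g, sk, p)                   # matching public key

def encrypt(m: int):
    """Encrypt a small meter reading m as (g^r, g^m * pk^r)."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

def add(c1, c2):
    """Component-wise product of ciphertexts adds the plaintexts."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_sum(c, max_sum=10_000):
    """Recover g^sum, then brute-force the small discrete log."""
    gm = (c[1] * pow(c[0], p - 1 - sk, p)) % p
    for m in range(max_sum + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("sum out of range")

readings = [120, 340, 95]                        # kWh readings from meters
aggregate = encrypt(readings[0])
for r in readings[1:]:
    aggregate = add(aggregate, encrypt(r))
assert decrypt_sum(aggregate) == sum(readings)   # 555, computed under encryption
```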
5G has significantly facilitated the development of attractive applications such as autonomous driving and telemedicine thanks to its lower latency, higher data rates, and massive connectivity. However, 5G still has security and privacy issues, such as the privacy of network slicing and the flexibility and efficiency of network slice selection. For the smart grid scenario, this paper proposes a 5G slice selection security scheme based on the Pohlig-Hellman algorithm, which protects the slice selection privacy data exchanged between user Ui and the Access and Mobility Management Function (AMF), so that the data are not exposed to third-party attackers. Compared with other schemes, the proposed scheme is simple to deploy, incurs low computational overhead, follows a simple process, and does not require a PKI system. The security analysis also verifies that the scheme protects the slice selection privacy data between Ui and the AMF.
Authored by Jiming Yao, Peng Wu, Duanyun Chen, Wei Wang, Youxu Fang
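The Pohlig-Hellman exponentiation cipher is commutative, which is the property PKI-free schemes of this kind typically exploit. The sketch below demonstrates that commutativity with hypothetical parameters and a simplified two-party flow; it is not the paper's actual Ui-AMF protocol.

```python
# Minimal sketch of the Pohlig-Hellman exponentiation cipher and its
# commutativity. The prime, the toy slice identifier, and the two-party
# flow are illustrative assumptions, not the protocol from the paper.
import math
import random

p = 2**61 - 1  # public prime assumed to be shared by both parties (demo size)

def keygen():
    """Pick an encryption exponent e coprime to p-1 and its inverse d."""
    while True:
        e = random.randrange(3, p - 1)
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

def enc(m, e):  return pow(m, e, p)
def dec(c, d):  return pow(c, d, p)

e_ui, d_ui = keygen()     # user's key pair
e_amf, d_amf = keygen()   # AMF's key pair

slice_id = 123456789      # sensitive slice-selection value (toy encoding)

# Commutativity: encrypting in either order yields the same double ciphertext,
# so each side can add or remove its own layer without learning the other's key.
c1 = enc(enc(slice_id, e_ui), e_amf)
c2 = enc(enc(slice_id, e_amf), e_ui)
assert c1 == c2

# Either party can peel off its own layer independently.
assert dec(dec(c1, d_ui), d_amf) == slice_id
```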
In today's era, the smart grid is the carrier of the new energy technology revolution and a critical stage in the development of grid intelligence. Smart grid operation and maintenance generate large volumes of heterogeneous, polymorphic data, that is, big data. This paper analyzes power big data prediction technology for smart grid applications and proposes practical application strategies. It provides an in-depth analysis of the relationship between cloud computing, key big data technologies, and the smart grid, and gives an overview of the key technologies of electric power big data.
Authored by Guang-ye Li, Jia-xin Zhang, Xin Wen, Lang-Ming Xu, Ying Yuan
To address the lack of targeted data security grading methods in power grid data governance, this paper analyzes mainstream data security grading standards at home and abroad, surveys the characteristics of power grid data security grading requirements, and proposes a grid data security classification scheme that considers the security impact on four dimensions: the nation, society, individuals, and enterprises. The scheme establishes the principles of power grid data security classification. Based on the basic question of “who will be affected, and to what extent, if power grid data security is compromised”, it defines three classification factors to be considered, namely the degree of impact, the scope of impact, and the objects affected, and divides power grid data into five security levels. For the operational stage of power grid data security grading, this paper summarizes practical experience and gives a recommended grading process. The scheme fits the current state of power grid data classification and lays the foundation for power grid data governance.
Authored by Jinqiang Fan, Yonggang Xu, Jing Ma
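A minimal sketch of how the three classification factors could be combined into five levels follows; the factor encodings, weights, and cut-offs are invented for illustration and do not reproduce the paper's grading tables.

```python
# Illustrative grading rule in the spirit of "who is affected, how badly,
# and how broadly". All concrete values below are hypothetical.
from dataclasses import dataclass

IMPACT_OBJECTS = {"national": 4, "social": 3, "enterprise": 2, "individual": 1}
IMPACT_DEGREES = {"severe": 3, "moderate": 2, "minor": 1}
IMPACT_SCOPES  = {"wide": 3, "regional": 2, "local": 1}

@dataclass
class GridDataAsset:
    name: str
    affected_object: str   # who is harmed if the data is compromised
    impact_degree: str     # how badly
    impact_scope: str      # how broadly

def security_level(asset: GridDataAsset) -> int:
    """Map the three classification factors onto five levels (5 = most sensitive)."""
    score = (IMPACT_OBJECTS[asset.affected_object]
             + IMPACT_DEGREES[asset.impact_degree]
             + IMPACT_SCOPES[asset.impact_scope])
    # Hypothetical cut-offs dividing the score range [3, 10] into five levels.
    for level, threshold in ((5, 9), (4, 8), (3, 6), (2, 4)):
        if score >= threshold:
            return level
    return 1

print(security_level(GridDataAsset("dispatch telemetry", "national", "severe", "wide")))  # 5
print(security_level(GridDataAsset("billing record", "individual", "minor", "local")))    # 1
```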
With the gradual construction and deployment of cloud computing, the information security problems of the smart grid have surfaced. Therefore, when building a smart grid cloud computing platform, information security must be considered in planning, infrastructure, and management simultaneously, and it is urgent to build an information network that is secure from terminal to platform to data. This paper introduces the concept of cloud security technology and its latest developments and discusses the main cloud security construction strategies for electric power enterprises.
Authored by Guocong Feng, Qingshui Huang, Zijie Deng, Hong Zou, Jiafa Zhang
Smart grid (SG) is considered the next generation of the traditional power grid. It comprises three main infrastructures: the power system, information, and communication infrastructures. Cybersecurity is imperative for the information infrastructure and for the secure, reliable, and efficient operation of the smart grid; a lack of proper implementation poses a considerable challenge to SG deployment. Therefore, this paper presents a comprehensive survey of cybersecurity in the smart grid context. The cybersecurity-related information infrastructure is clarified, and the impact of adopting cybersecurity on control and management systems is discussed. The paper also highlights the cybersecurity issues and challenges associated with control decisions in the smart grid.
Authored by Amira Mohammed, Gibin George
The increasing demand for the interconnected IoT-based smart grid is facing threats from cyber-attacks due to inherent vulnerabilities in the smart grid network. There is a pressing need to evaluate and model these vulnerabilities to avoid cascading failures in power systems. In this paper, we propose and evaluate a vulnerability assessment framework based on attack probability for the protection and security of a smart grid. Several factors are taken into consideration, such as the probability of attack, the propagation of an attack from a parent node to child nodes, the effectiveness of the basic metering system, Kalman estimation, and the Advanced Metering Infrastructure (AMI). The IEEE 300-bus smart grid was simulated using MATPOWER to study the effectiveness of the proposed framework by injecting false data injection attacks (FDIA) and studying their propagation. Our results show that severity assessment standards such as the Common Vulnerability Scoring System (CVSS), AMI measurements, and Kalman estimates were very effective for the vulnerability assessment of the smart grid under FDIA scenarios.
Authored by Muhammad Rashed, Joarder Kamruzzaman, Iqbal Gondal, Syed Islam
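The sketch below illustrates the residual test that least-squares or Kalman-style state estimation uses to flag bad data, and why a carefully crafted FDIA can evade it; the tiny measurement model, noise level, and threshold are assumed for illustration and are unrelated to the IEEE 300-bus case studied in the paper.

```python
# Rough sketch of residual-based bad-data detection in a DC state-estimation
# setting, with a naive and a stealthy false data injection for contrast.
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1.0, 0.0],      # measurement matrix: 2 state angles, 4 measurements
              [0.0, 1.0],
              [1.0, -1.0],
              [-1.0, 1.0]])
x_true = np.array([0.10, 0.05])
sigma = 0.01
z = H @ x_true + rng.normal(0, sigma, size=4)       # honest measurements

def residual_norm(z):
    """Least-squares state estimate, then the 2-norm of the measurement residual."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

threshold = 4 * sigma   # hypothetical detection threshold

print("clean   :", residual_norm(z) <= threshold)            # passes

# A naive (non-stealthy) injection on one meter is flagged...
z_attacked = z.copy(); z_attacked[2] += 0.2
print("naive   :", residual_norm(z_attacked) <= threshold)   # fails the test

# ...while an attack crafted as a = H @ c stays in the column space of H
# and leaves the residual unchanged, which is why FDIA is dangerous.
z_stealthy = z + H @ np.array([0.03, -0.02])
print("stealthy:", residual_norm(z_stealthy) <= threshold)   # still passes
```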
In this paper we consider the cyber security requirements of smart buildings. We identify cyber risks, threats, attack scenarios, security objectives, and related security controls. The work was done as part of a smart building design and construction project. From the identified controls we derived security practices for engineering security into smart buildings. The paper provides an ideal toward which system security engineers can strive in the basic design and implementation of the most critical components of smart buildings. The intent of the concept is to help practitioners avoid ad hoc approaches in the development of security mechanisms for smart buildings with shared space.
Authored by Tapio Frantti, Markku Korkiakoski
In the smart grid, sharing power data among various energy entities can unlock greater value from the data. However, the risk of unauthorized access during sharing makes many entities unwilling to share their data for fear of leakage. Based on blockchain and ABAC (Attribute-Based Access Control) technology, this paper proposes an access control scheme that lets users enforce fine-grained access control over their data when sharing it. The solution uses smart contracts to achieve automated and reliable policy evaluation. IPFS (InterPlanetary File System) is used for off-chain distributed storage to relieve the storage pressure on the blockchain and guarantee reliable data storage. At the same time, all operations in the system are recorded on the blockchain, ensuring accountability. Finally, experiments demonstrate the feasibility of the proposed scheme.
Authored by Xiao Liang, Ningyu An, Da Li, Qiang Zhang, Ruimiao Wang
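The policy-evaluation step that such a smart contract automates can be sketched as a simple attribute-matching check, as below; the attribute names, the example policy, and the default-deny rule are assumptions, and on-chain storage and IPFS integration are omitted.

```python
# Minimal sketch of attribute-based access control (ABAC) policy evaluation.
# The policy format and attributes are invented for illustration.
from typing import Dict, List

policies: List[Dict] = [{
    "subject":  {"role": "analyst", "organization": "grid-op-A"},
    "resource": {"type": "load-profile", "owner": "grid-op-A"},
    "action":   {"name": "read"},
    "effect":   "permit",
}]

def matches(required: Dict[str, object], supplied: Dict[str, object]) -> bool:
    """Every attribute the policy names must be present with the same value."""
    return all(supplied.get(k) == v for k, v in required.items())

def evaluate(subject, resource, action) -> str:
    for p in policies:
        if (matches(p["subject"], subject)
                and matches(p["resource"], resource)
                and matches(p["action"], action)):
            return p["effect"]
    return "deny"   # default-deny, as fine-grained sharing normally requires

print(evaluate({"role": "analyst", "organization": "grid-op-A"},
               {"type": "load-profile", "owner": "grid-op-A"},
               {"name": "read"}))                      # permit
print(evaluate({"role": "analyst", "organization": "grid-op-B"},
               {"type": "load-profile", "owner": "grid-op-A"},
               {"name": "read"}))                      # deny
```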
5G network slicing plays a key role in smart grid services. Existing authentication schemes for 5G slicing in smart grids incur high computing costs, so they are time-consuming and do not fully consider authentication security. Targeting the 5G smart grid scenario, this paper proposes an identity-based lightweight secondary authentication scheme. Compared with other well-known methods, in the protocol interaction of this paper both the user Ui and the grid server authenticate each other's identities, preventing illegal users from impersonating legitimate ones. The grid user Ui and the grid server can complete the authentication process without resorting to complex bilinear pairing computations, so the computational overhead is small. They can also complete the authentication process without transmitting the original identity, so the scheme provides anonymous authentication. Moreover, the authentication process does not require infrastructure such as a PKI, so deployment is simple. Experimental results show that the protocol is feasible in practical applications.
Authored by Yue Yu, Jiming Yao, Wei Wang, Lanxin Qiu, Yangzhou Xu
The complexity and scale of modern software programs often lead to overlooked programming errors and security vulnerabilities. Developers often rely on automatic tools, like static analysis tools, to look for bugs and vulnerabilities. Static analysis tools are widely used because they can understand nontrivial program behaviors, scale to millions of lines of code, and detect subtle bugs. However, they are known to generate an excess of false alarms which hinder their utilization as it is counterproductive for developers to go through a long list of reported issues, only to find a few true positives. One of the ways proposed to suppress false positives is to use machine learning to identify them. However, training machine learning models requires good quality labeled datasets. For this purpose, we developed D2A [3], a differential analysis based approach that uses the commit history of a code repository to create a labeled dataset of Infer [2] static analysis output.
Authored by Saurabh Pujar, Yunhui Zheng, Luca Buratti, Burn Lewis, Alessandro Morari, Jim Laredo, Kevin Postlethwait, Christoph Görn
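The differential-labeling idea can be sketched as follows: findings reported by the analyzer before a bug-fixing commit but absent afterwards are labeled as likely true bugs, while persistent findings are labeled as likely false positives. The Finding fields and the fingerprint heuristic below are simplifications for illustration, not D2A's actual matching logic.

```python
# Sketch of differential-analysis-based labeling of static analysis output.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Finding:
    bug_type: str     # e.g. "NULL_DEREFERENCE" as a tool like Infer would report
    file: str
    procedure: str

def fingerprint(f: Finding) -> Tuple[str, str, str]:
    """Line-number-free identity so small edits don't break the match."""
    return (f.bug_type, f.file, f.procedure)

def label(before: List[Finding], after: List[Finding]):
    after_ids = {fingerprint(f) for f in after}
    labeled = []
    for f in before:
        is_fixed = fingerprint(f) not in after_ids
        labeled.append((f, 1 if is_fixed else 0))   # 1 = likely bug, 0 = likely FP
    return labeled

before = [Finding("NULL_DEREFERENCE", "net/http.c", "parse_header"),
          Finding("BUFFER_OVERRUN",  "util/buf.c",  "copy_chunk")]
after  = [Finding("BUFFER_OVERRUN",  "util/buf.c",  "copy_chunk")]

for finding, y in label(before, after):
    print(finding.bug_type, "->", "likely bug" if y else "likely false positive")
```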
Smart contracts are usually financial-related, which makes them attractive attack targets. Many static analysis tools have been developed to facilitate the contract audit process, but not all of them take account of two special features of smart contracts: (1) The external variables, like time, are constrained by real-world factors; (2) The internal variables persist between executions. Since these features import implicit constraints into contracts, they significantly affect the performance of static tools, such as causing errors in reachability analysis and resulting in false positives. In this paper, we conduct a systematic study on implicit constraints from three aspects. First, we summarize the implicit constraints in smart contracts. Second, we evaluate the impact of such constraints on the state-of-the-art static tools. Third, we propose a lightweight but effective mitigation method named ConSym to deal with such constraints and integrate it into OSIRIS. The evaluation result shows that ConSym can filter out 96% of false positives and reduce false negatives by two-thirds.
Authored by Tingting Yin, Chao Zhang, Yuandong Ni, Yixiong Wu, Taiyu Wong, Xiapu Luo, Zheming Li, Yu Guo
The interaction between the transmission systems of doubly-fed wind farms and the power grid, and the resulting system stability, have long attracted wide attention at home and abroad. In recent years, wind farms have generally installed static var generators (SVGs) to improve voltage stability. Therefore, this paper studies the subsynchronous oscillation (SSO) problem in a grid-connected doubly-fed wind farm equipped with static var generators. First, based on impedance analysis, the sequence impedance models of the doubly-fed induction generator and the static var generator are established. Then, based on the Bode-plot stability criterion and time-domain simulation, the influence of connecting the static var generator on the SSO of the system is analyzed. Finally, a sensitivity analysis of the main parameters of the doubly-fed induction generator and the static var generator is carried out. The results show that the most sensitive parameter is the proportional gain of the doubly-fed induction generator's current inner loop, and its value should be reduced to lower the risk of SSO.
Authored by Yingchi Tian, Shiwu Xiao
In this paper, axial symmetry is used to analyze the deformation and stress changes of a wheel, reducing the scale and cost of the analysis in industrial production. First, the material properties are defined; then the rotational cross-section of the wheel is established, the boundary conditions are defined, the model is meshed with finite elements, and the angular velocity and pressure loads during rotation are applied. The analysis yields the radial and axial deformation diagrams and the radial, axial, and equivalent stress distributions of the wheel. Exploiting axisymmetry reduces the analysis cost and can be applied to other materials or components with this characteristic, facilitating product design and improvement and reducing production cost.
Authored by Ye Yangfang, Ma Jing, Zhang Wenhui, Zhang Dekang, Zhou Shuhua, You Zhangping
This paper uses finite element analysis to compare the split box and the integrated box in terms of modal analysis and static analysis. It is concluded that the integrated box offers excellent vibration characteristics and high strength tolerance. At the same time, according to the S-N curve of the material and the load spectrum of the box, the fatigue life of the integrated box, calculated with the fatigue analysis software Fe-safe, is 26.24 years, which meets the service life requirements. The reliability analysis module PDS is used to calculate the reliability of the box; the reliability of the integrated box is 96.5999%, which meets the performance requirements.
Authored by Liang Xuan, Chunfei Zhang, Siyuan Tian, Tianmin Guan, Lei Lei
In today’s fast-paced world, cybercrime has repeatedly proved to be one of the biggest hindrances to national development. According to recent trends, the victim’s data is most often breached by trapping them in a phishing attack. The security and privacy of users’ data have become matters of tremendous concern. To address this problem and protect naive users’ data, we propose a tool that helps identify whether a Windows executable is malicious by performing static analysis on it. In addition, a comparative study has been performed by implementing different classification models such as logistic regression, a neural network, and an SVM. The static analysis approach uses parameters of the executables and properties obtained from PE section headers, i.e., API calls. Comparing the different models identifies the best model to use for static malware analysis.
Authored by Naman Aggarwal, Pradyuman Aggarwal, Rahul Gupta
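A sketch of the model comparison described above is given below using scikit-learn; the feature matrix is synthetic, standing in for features that would in practice be extracted from PE section headers and API call lists.

```python
# Sketch: train several classifiers on numeric features derived from Windows
# executables and compare them. The features and labels here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 400
# Hypothetical features: section entropy, #imported APIs, size of .text, ...
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in benign/malicious labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network":      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    "svm":                 SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:20s} accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")
```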
Software based scan diagnosis is the de facto method for debugging logic scan failures. Physical analysis success rate is high on dies diagnosed with maximum score, one symptom, one suspect and shorter net. This poses a limitation on maximum utilization of scan diagnosis data for PFA. There have been several attempts to combine dynamic fault isolation techniques with scan diagnosis results to enhance the utilization and success rate. However, it is not a feasible approach for foundry due to limited product design and test knowledge and hardware requirements such as probe card and tester. Suitable for a foundry, an enhanced diagnosis-driven analysis scheme was proposed in [1] that classifies the failures as frontend-of-line (FEOL) and backend-of-line (BEOL) improving the die selection process for PFA. In this paper, static NIR PEM and defect prediction approach are applied on dies that are already classified as FEOL and BEOL failures yet considered unsuitable for PFA due to low score, multiple symptoms, and suspects. Successful case studies are highlighted to showcase the effectiveness of using static NIR PEM as the next level screening process to further maximize the scan diagnosis data utilization.
Authored by S. Moon, D. Nagalingam, Y. Ngow, A. Quah
Static analyzers have become increasingly popular both as developer tools and as subjects of empirical studies. Whereas static analysis tools exist for disparate programming languages, the bulk of the empirical research has focused on the popular Java programming language. In this paper, we investigate to what extent some known results about using static analyzers for Java change when considering C#, another popular object-oriented language. To this end, we combine two replications of previous Java studies. First, we study which static analysis tools are most widely used among C# developers, and which warnings are most commonly reported by these tools on open-source C# projects. Second, we develop and empirically evaluate EagleRepair: a technique to automatically fix code in response to static analysis warnings; this is a replication of our previous work for Java [20]. Our replication indicates, among other things, that 1) static code analysis is fairly popular among C# developers too; 2) ReSharper is the most widely used static analyzer for C#; 3) several static analysis rules are commonly violated in both Java and C# projects; and 4) automatically generating fixes to static code analysis warnings with good precision is feasible in C#. The EagleRepair tool developed for this research is available as open source.
Authored by Martin Odermatt, Diego Marcilio, Carlo Furia
Code-graph-based software defect prediction methods have become a research focus in the software defect prediction (SDP) field. Among them, the Code Property Graph is used as a data representation for code defects because it can characterize the structural features and dependencies of defective code. However, owing to the coarse granularity of the Code Property Graph, redundant information unrelated to defects is often attached to the characterization of software defects. How to locate software defects at a finer granularity in the Code Property Graph therefore remains an open problem. Static analysis is a technique for identifying software defects using predefined defect rules, and there are many proven static analysis tools in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of a static analysis tool. Experiments show that this location method can effectively predict the location of specific defect types in real software programs.
Authored by Haoxiang Shi, Wu Liu, Jingyu Liu, Jun Ai, Chunhui Yang
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of alarms generated on a version are repeated, i.e., they have also been generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress the repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after the suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using the corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes with different priorities. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms, obtained by prioritizing the classes, can help suppress lower-ranked alarms when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
Authored by Tukaram Muske, Alexander Serebrenik
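A sketch of priority-based ranking of delta alarms by the kind of code change that produced them is shown below; the six change kinds and their priorities are placeholders for illustration, not the paper's actual classes.

```python
# Sketch: rank delta alarms so that those whose triggering code change is more
# likely to affect the alarm are inspected first. All class names are invented.
from dataclasses import dataclass
from typing import List

# Hypothetical priorities: lower number = inspect earlier (more likely real).
CHANGE_PRIORITY = {
    "expression_in_alarm_modified": 1,
    "data_flow_to_alarm_changed":   2,
    "control_flow_around_alarm":    3,
    "called_function_changed":      4,
    "declaration_only_change":      5,
    "unrelated_file_change":        6,
}

@dataclass
class DeltaAlarm:
    alarm_id: str
    change_kind: str   # the code change due to which the alarm is a delta alarm

def rank(alarms: List[DeltaAlarm]) -> List[DeltaAlarm]:
    return sorted(alarms, key=lambda a: CHANGE_PRIORITY.get(a.change_kind, 6))

alarms = [DeltaAlarm("A1", "declaration_only_change"),
          DeltaAlarm("A2", "expression_in_alarm_modified"),
          DeltaAlarm("A3", "control_flow_around_alarm")]
for a in rank(alarms):
    print(a.alarm_id, a.change_kind)   # A2, A3, A1 -- inspect A2 first
```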
Native code is now commonplace within Android app packages, where it co-exists and interacts with Dex bytecode through the Java Native Interface to deliver rich app functionalities. Yet, state-of-the-art static analysis approaches have mostly overlooked the presence of such native code, which, however, may implement some key sensitive, or even malicious, parts of the app behavior. This limitation of the state of the art is a severe threat to validity in a large range of static analyses that do not have a complete view of the executable code in apps. To address this issue, we propose a new advance in the ambitious research direction of building a unified model of all code in Android apps. The JUCIFY approach presented in this paper is a significant step towards such a model, where we extract and merge call graphs of native code and bytecode to make the final model readily usable by a common Android analysis framework: in our implementation, JUCIFY builds on the Soot internal intermediate representation. We performed empirical investigations to highlight how, without the unified model, a significant number of Java methods called from the native code are “unreachable” in apps' call graphs, both in goodware and malware. Using JUCIFY, we were able to enable static analyzers to reveal cases where malware relied on native code to hide invocation of payment library code or of other sensitive code in the Android framework. Additionally, JUCIFY's model enables state-of-the-art tools to achieve better precision and recall in detecting data leaks through native code. Finally, we show that by using JUCIFY we can find sensitive data leaks that pass through native code.
Authored by Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, Xiaoyu Sun, Kevin Allix, Tegawendé Bissyandé, Jacques Klein
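The call-graph unification idea can be sketched with networkx (rather than the Soot-based implementation the paper describes): compose the bytecode and native call graphs and add bridge edges at the JNI boundary, after which Java methods reached only through native code become reachable. The node names and bridge edges below are invented for illustration.

```python
# Sketch: merge a Dex/bytecode call graph with a native-code call graph and
# connect them at the JNI boundary so native-mediated calls become visible.
import networkx as nx

bytecode_cg = nx.DiGraph()
bytecode_cg.add_edges_from([
    ("MainActivity.onCreate", "Billing.init"),
    ("MainActivity.onCreate", "System.loadLibrary"),
])

native_cg = nx.DiGraph()
native_cg.add_edges_from([
    ("libcore.so::JNI_OnLoad", "libcore.so::start_payment"),
])

# Bridge edges discovered at the Java Native Interface boundary (assumed).
jni_bridges = [
    ("System.loadLibrary", "libcore.so::JNI_OnLoad"),       # Java -> native
    ("libcore.so::start_payment", "PaymentLib.charge"),     # native -> Java
]

unified = nx.compose(bytecode_cg, native_cg)
unified.add_edges_from(jni_bridges)

entry = "MainActivity.onCreate"
print("reachable without native view:",
      "PaymentLib.charge" in nx.descendants(bytecode_cg, entry))   # False
print("reachable in the unified graph:",
      "PaymentLib.charge" in nx.descendants(unified, entry))       # True
```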
Synthetic static code analysis test suites are important to test the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross Site Scripting and SQL injection test cases. A decision tree is used to determine if the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems of static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
Authored by Felix Schuckert, Hanno Langweg, Basel Katt
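The pattern-combination idea can be sketched as below: source, sanitizer, and sink snippets are combined into test cases, and a simple rule (standing in for the paper's decision tree) marks each combination as vulnerable or safe; the PHP snippets and the safe-filter rule are illustrative assumptions.

```python
# Sketch: generate SQL injection test cases by combining code patterns and
# decide vulnerability with a simple rule in place of a learned decision tree.
from itertools import product

sources = {
    "get_param": 'SOURCE = $_GET["id"];',
}
filters = {
    "none":     "DATA = SOURCE;",
    "cast_int": "DATA = (int)SOURCE;",
    "escape":   "DATA = mysqli_real_escape_string($conn, SOURCE);",
}
sinks = {
    "query": 'mysqli_query($conn, "SELECT * FROM t WHERE id=\'" . DATA . "\'");',
}

SAFE_FILTERS = {"cast_int", "escape"}   # assumption standing in for the decision tree

def generate():
    for (sname, s), (fname, f), (kname, k) in product(
            sources.items(), filters.items(), sinks.items()):
        code = "\n".join([s, f, k])
        vulnerable = fname not in SAFE_FILTERS
        yield f"{sname}_{fname}_{kname}", code, vulnerable

for case_id, code, vulnerable in generate():
    print(case_id, "-> vulnerable" if vulnerable else "-> safe")
```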
The increasing use of Infrastructure as Code (IaC) in DevOps leads to benefits in the speed and reliability of deployment operations, but it also brings to infrastructure the challenges typical of software systems. IaC scripts can contain defects that result in security and reliability issues in the deployed infrastructure, so techniques for detecting and preventing them are needed. We analyze and survey the current state of research in this respect by conducting a literature review on static analysis techniques for IaC. We describe the analysis techniques, defect categories, and platforms targeted by the tools in the literature.
Authored by Michele Chiari, Michele De Pascalis, Matteo Pradella
Long analysis times are a key bottleneck for the widespread adoption of whole-program static analysis tools. Fortunately, however, a user is often only interested in finding errors in the application code, which constitutes a small fraction of the whole program. Current application-focused analysis tools overapproximate the effect of the library and hence reduce the precision of the analysis results. However, empirical studies have shown that users have high expectations on precision and will ignore tool results that don't meet these expectations. In this paper, we introduce the first tool QueryMax that significantly speeds up an application code analysis without dropping any precision. QueryMax acts as a pre-processor to an existing analysis tool to select a partial library that is most relevant to the analysis queries in the application code. The selected partial library plus the application is given as input to the existing static analysis tool, with the remaining library pointers treated as the bottom element in the abstract domain. This achieves a significant speedup over a whole-program analysis, at the cost of a few lost errors, and with no loss in precision. We instantiate and run experiments on QueryMax for a cast-check analysis and a null-pointer analysis. For a particular configuration, QueryMax enables these two analyses to achieve, relative to a whole-program analysis, an average recall of 87%, a precision of 100% and a geometric mean speedup of 10x.
Authored by Akshay Utture, Jens Palsberg
Static analysis tools generate a large number of alarms that require manual inspection. In prior work, repositioning of alarms was proposed to (1) merge multiple similar alarms and replace them with fewer alarms, and (2) report alarms as close as possible to the causes of their generation. The premise is that the proposed merging and repositioning of alarms will reduce the manual inspection effort. To evaluate this premise, this paper presents an empirical study with 249 developers on the proposed merging and repositioning of static alarms. The study is conducted using static analysis alarms generated on C programs, where the alarms are representative of the merging vs. non-merging and repositioning vs. non-repositioning situations in real-life code. Developers were asked to manually inspect and determine whether assertions added corresponding to alarms in the C code hold. Additionally, two spatial cognitive tests were administered to examine their relationship with inspection performance. The empirical evaluation results indicate that, contrary to expectations, there was no evidence that merging and repositioning of alarms reduces manual inspection effort or improves inspection accuracy (at times a negative impact was found). Results on cognitive abilities correlated with comprehension and alarm inspection accuracy.
Authored by Niloofar Mansoor, Tukaram Muske, Alexander Serebrenik, Bonita Sharif