Effective information security risk management is essential for the survival of any business that depends on IT. In this paper we present an efficient and effective solution for finding the best parameters for managing cyber risks using artificial intelligence. A genetic algorithm (GA) is used, as it provides the required optimization and intelligence. Results show that the GA is proficient at finding the best parameters and minimizing the risk.
Authored by Osama Hosam
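As a rough illustration of this kind of GA-based parameter search (a minimal sketch only, not the paper's implementation; the risk function, parameter count, and GA settings below are all hypothetical):

import random

# Hypothetical: minimize a placeholder risk score over 5 parameters in [0, 1].
def risk(params):
    # Stand-in for a real risk model; lower is better.
    return sum((p - 0.3) ** 2 for p in params)

def evolve(pop_size=50, n_params=5, generations=100, mut_rate=0.1):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=risk)                       # rank by (lower) risk
        survivors = pop[: pop_size // 2]         # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_params)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:       # uniform mutation
                child[random.randrange(n_params)] = random.random()
            children.append(child)
        pop = survivors + children
    return min(pop, key=risk)

best = evolve()
print(best, risk(best))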
The growing maturity of orchestration languages is contributing to the elaboration of cloud composite services, whose resources may be deployed over different distributed infrastructures. These composite services are subject to changes over time that are typically required to support cloud properties, such as scalability and rapid elasticity. In particular, the migration of their elementary resources may be triggered by performance constraints. However, changes induced by this migration may introduce vulnerabilities that may compromise the resources, or even the whole cloud service. In that context, we propose an automated SMT-based (satisfiability modulo theories) security framework for supporting the migration of resources in cloud composite services and preventing the occurrence of new configuration vulnerabilities. We formalize the underlying security automation based on SMT solving, in order to assess the migrated resources and select adequate countermeasures, considering both endogenous and exogenous security mechanisms. We then evaluate its benefits and limits through a large series of experiments based on a proof-of-concept prototype implemented over the commonly used open-source solver CVC4. These experiments show a minimal overhead with regular operating systems deployed in cloud environments.
Authored by Mohamed Oulaaffart, Remi Badonnel, Christophe Bianco
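To give a flavor of the SMT-based assessment (a sketch only; the paper's prototype uses CVC4, while this example uses the Z3 Python bindings for brevity, and the configuration variables are hypothetical), one can encode a migration policy and ask the solver whether a post-migration state violates it:

from z3 import Bool, Solver, Implies, Not, sat

# Hypothetical configuration facts about a migrated resource.
exposes_ssh = Bool("exposes_ssh")       # resource exposes port 22
dest_firewall = Bool("dest_firewall")   # destination host runs a firewall

s = Solver()
# Security policy: exposing SSH requires a firewall at the destination.
s.add(Implies(exposes_ssh, dest_firewall))
# Candidate post-migration state to check.
s.add(exposes_ssh, Not(dest_firewall))

# sat here means the insecure state is consistent with the facts,
# i.e. the migration would introduce a configuration vulnerability.
print("vulnerable" if s.check() == sat else "safe")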
Simulations have gained paramount importance in software development for wireless sensor networks and have been a vital focus of the scientific community in this decade for providing efficient, secure, and safe communication in smart cities. Network simulators are widely used for the development of safe and secure communication architectures in smart cities. Therefore, in this technical survey report, we conduct experimental comparisons among ten different simulation environments that can be used to simulate smart-city operations. We comprehensively analyze and compare the simulators COOJA, NS-2 with the Mannasim framework, NS-3, OMNeT++ with the Castalia framework, WSNet, TOSSIM, J-Sim, GloMoSim, SENSE, and Avrora. Each simulator was run eight times, and the comparison among them is critically scrutinized. The main objective of this paper is to assist developers and researchers in selecting the appropriate simulator for a given scenario to provide safe and secure wired and wireless networks. In addition, we discuss in detail the supported simulation environments, functions and operating modes, wireless channel models, energy consumption models, and physical-, MAC-, and network-layer protocols. The selection of these simulation frameworks is based on features, literature, and important characteristics. Lastly, we conclude our work by providing a detailed comparison and describing the pros and cons of each simulator.
Authored by Ali Mohsin, Sana Aurangzeb, Muhammad Aleem, Muhammad Khan
We continue to tackle the problem of poorly defined security metrics by building on and improving our previous work on designing sound security metrics. We reformulate the previous method into a set of conditions that are clearer and more widely applicable for deriving sound security metrics. We also modify and enhance some concepts that led to an unforeseen weakness in the previous method that was subsequently found by users, thereby eliminating this weakness from the conditions. We present examples showing how the conditions can be used to obtain sound security metrics. To demonstrate the conditions' versatility, we apply them to show that an aggregate security metric made up of sound security metrics is also sound. This is useful where an aggregate measure may be preferred as a way to more easily understand the security of a system.
Authored by George Yee
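As a toy illustration of the aggregation point only (the soundness conditions themselves are defined in the paper; the component metrics and weights below are purely hypothetical):

# Hypothetical component metrics, each already normalized to [0, 1]
# and assumed to satisfy the paper's soundness conditions.
metrics = {"patch_coverage": 0.9, "auth_strength": 0.7, "audit_freq": 0.5}
weights = {"patch_coverage": 0.5, "auth_strength": 0.3, "audit_freq": 0.2}

# Weighted-sum aggregate; with weights summing to 1, the result stays in [0, 1].
aggregate = sum(weights[k] * metrics[k] for k in metrics)
print(round(aggregate, 3))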
This paper presents a MATLAB Graphical User Interface (GUI) based tool that determines the performance evaluation metrics of physically unclonable functions (PUFs). PUFs are hardware security primitives that can be utilized in several hardware security applications, such as integrated circuit protection, device authentication, secret key generation, and hardware obfuscation. Like any other technology, PUF evaluation requires testing different performance metrics, each of which can be determined by at least one mathematical equation. The proposed tool (PUFs Tool) reads the PUF instances' output and then computes and generates the values of the main PUF performance metrics: uniqueness, reliability, uniformity, and bit-aliasing. In addition, it generates a bar code for each PUF instance considered in the evaluation process. The PUFs Tool is designed and developed using the App Designer of MATLAB R2021b.
Authored by Husam Kareem, Khaleel Almousa, Dmitriy Dunaev
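The four metrics named above have standard definitions in the PUF literature; the following NumPy sketch is an independent re-implementation of those textbook formulas on toy data, not the MATLAB tool itself:

import numpy as np

# responses: k instances x n bits; repeats: k x m x n noisy re-reads (toy data).
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(8, 64))
repeats = responses[:, None, :] ^ (rng.random((8, 5, 64)) < 0.03)

uniformity = responses.mean(axis=1)      # fraction of 1s per instance
bit_aliasing = responses.mean(axis=0)    # fraction of 1s per bit position

# Uniqueness: mean pairwise inter-chip fractional Hamming distance.
k = len(responses)
hd = [np.mean(responses[i] != responses[j])
      for i in range(k) for j in range(i + 1, k)]
uniqueness = np.mean(hd)

# Reliability: 1 - mean intra-chip fractional Hamming distance across re-reads.
intra = np.mean(repeats != responses[:, None, :], axis=(1, 2))
reliability = 1 - intra

print(uniqueness, reliability.mean(), uniformity.mean(), bit_aliasing.mean())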
Autonomous vehicles (AVs) are capable of making driving decisions autonomously using multiple sensors and complex autonomous driving (AD) software. However, AVs introduce numerous unique security challenges that have the potential to create safety consequences on the road. Security mechanisms require a benchmark suite and an evaluation framework to generate comparable results. Unfortunately, AVs lack a proper benchmarking framework for evaluating attack and defense mechanisms and quantifying safety measures. This paper introduces BenchAV, a security benchmark suite and evaluation framework for AVs, to address current limitations and pressing challenges of AD security. The benchmark suite contains 12 security and performance metrics, and an evaluation framework that automates the metric collection process using the CARLA simulator and the Robot Operating System (ROS).
Authored by Mohammad Hoque, Mahmud Hossain, Ragib Hasan
This article analyzes the key technologies currently applied to improve the quality, efficiency, safety, and sustainability of Smart Grid systems and identifies the tools to optimize them and possible gaps in this area, considering the different energy sources, distributed generation, microgrids, and energy consumption and production capacity. The research was conducted with a qualitative methodological approach, where the literature review covered studies published from 2019 to 2022 in five (5) databases, following the study selection process recommended by the PRISMA guide. Of the five hundred and four (504) publications identified, ten (10) studies provided insight into the technological trends that are impacting this scenario, namely: Internet of Things, Big Data, Edge Computing, Artificial Intelligence, and Blockchain. It is concluded that to obtain the best performance within Smart Grids, it is necessary to have maximum synergy between these technologies, since this union will enable the application of advanced smart digital technology solutions to energy generation and distribution operations, thus enabling a new level of optimization.
Authored by Ivonne Núñez, Elia Cano, Carlos Rovetto, Karina Ojo-Gonzalez, Andrzej Smolarz, Juan Saldana-Barrios
Electromagnetic energy harvesting is a new and effective way to supply power to the condition monitoring sensors installed on or near transmission lines. We use Computer Simulation Technology (CST) software to simulate different designs of stand-alone electromagnetic energy harvesters. The power generated by energy harvesters of different design structures is compared and analyzed through simulation and experimental results. We then propose an improved design of the energy harvester.
Authored by Guowei An, Congzheng Han, Fugui Zhang, Kun Liu
The complexity and scale of modern software programs often lead to overlooked programming errors and security vulnerabilities. Developers often rely on automatic tools, like static analysis tools, to look for bugs and vulnerabilities. Static analysis tools are widely used because they can understand nontrivial program behaviors, scale to millions of lines of code, and detect subtle bugs. However, they are known to generate an excess of false alarms, which hinders their utilization, as it is counterproductive for developers to go through a long list of reported issues only to find a few true positives. One proposed way to suppress false positives is to use machine learning to identify them. However, training machine learning models requires good-quality labeled datasets. For this purpose, we developed D2A [3], a differential-analysis-based approach that uses the commit history of a code repository to create a labeled dataset of Infer [2] static analysis output.
Authored by Saurabh Pujar, Yunhui Zheng, Luca Buratti, Burn Lewis, Alessandro Morari, Jim Laredo, Kevin Postlethwait, Christoph Görn
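The differential-labeling idea behind D2A can be sketched as follows (a heavy simplification; the helper below is a stand-in for actually running the analyzer on two checkouts around a bug-fixing commit):

# Sketch of differential labeling (simplified; run_analyzer is a stand-in
# for running Infer on a checkout and fingerprinting its reported issues).
def run_analyzer(commit):
    # Would return a set of fingerprinted issues, e.g. (file, bug_type).
    fake = {"before_fix": {("a.c", "NULL_DEREF"), ("b.c", "LEAK")},
            "after_fix": {("b.c", "LEAK")}}
    return fake[commit]

def label_pair(before, after):
    issues_before = run_analyzer(before)
    issues_after = run_analyzer(after)
    # Issues removed by the fix commit are labeled as likely real bugs;
    # issues that survive the fix are labeled as likely false alarms.
    positives = issues_before - issues_after
    negatives = issues_before & issues_after
    return positives, negatives

print(label_pair("before_fix", "after_fix"))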
Smart contracts are usually financial-related, which makes them attractive attack targets. Many static analysis tools have been developed to facilitate the contract audit process, but not all of them take into account two special features of smart contracts: (1) external variables, like time, are constrained by real-world factors; (2) internal variables persist between executions. Since these features introduce implicit constraints into contracts, they significantly affect the performance of static tools, for example causing errors in reachability analysis and resulting in false positives. In this paper, we conduct a systematic study of implicit constraints from three aspects. First, we summarize the implicit constraints in smart contracts. Second, we evaluate the impact of such constraints on state-of-the-art static tools. Third, we propose a lightweight but effective mitigation method named ConSym to deal with such constraints and integrate it into OSIRIS. The evaluation results show that ConSym can filter out 96% of false positives and reduce false negatives by two-thirds.
Authored by Tingting Yin, Chao Zhang, Yuandong Ni, Yixiong Wu, Taiyu Wong, Xiapu Luo, Zheming Li, Yu Guo
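A minimal sketch of how an implicit constraint can eliminate a false positive (hypothetical bounds, using the Z3 Python bindings; ConSym's actual encoding is more elaborate): bounding an external variable such as the block timestamp to realistic values can turn a path that a naive analysis reports as reachable into an infeasible one.

from z3 import Int, Solver, And

ts = Int("block_timestamp")
s = Solver()
# Path condition a naive analysis would flag as reachable: the branch
# requires a timestamp earlier than the contract's deployment time.
s.add(ts < 1_500_000_000)
print(s.check())  # sat: the naive analysis reports the path (false positive)

# Implicit real-world constraint: timestamps lie after deployment
# and before a sane horizon (hypothetical bounds).
s.add(And(ts > 1_600_000_000, ts < 2_000_000_000))
print(s.check())  # unsat: the path is actually infeasible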
This paper uses finite element analysis to compare the split box and the integrated box in terms of both modal analysis and static analysis. It is concluded that the integrated box has excellent vibration characteristics and high strength tolerance. At the same time, according to the S-N curve of the material and the load spectrum of the box, the fatigue life of the integrated box, computed with the fatigue analysis software Fe-safe, is 26.24 years, which meets the service-life requirements. The reliability analysis module PDS is used to calculate the reliability of the box; the reliability of the integrated box is 96.5999%, which meets the performance requirements.
Authored by Liang Xuan, Chunfei Zhang, Siyuan Tian, Tianmin Guan, Lei Lei
Software-based scan diagnosis is the de facto method for debugging logic scan failures. The success rate of physical failure analysis (PFA) is high on dies diagnosed with a maximum score, one symptom, one suspect, and a shorter net. This poses a limitation on the maximum utilization of scan diagnosis data for PFA. There have been several attempts to combine dynamic fault isolation techniques with scan diagnosis results to enhance the utilization and success rate. However, this is not a feasible approach for a foundry due to limited product design and test knowledge and hardware requirements such as probe cards and testers. Suitable for a foundry, an enhanced diagnosis-driven analysis scheme was proposed in [1] that classifies failures as front-end-of-line (FEOL) and back-end-of-line (BEOL), improving the die selection process for PFA. In this paper, static NIR PEM and a defect prediction approach are applied to dies that are already classified as FEOL and BEOL failures yet considered unsuitable for PFA due to low score, multiple symptoms, and multiple suspects. Successful case studies are highlighted to showcase the effectiveness of using static NIR PEM as the next-level screening process to further maximize scan diagnosis data utilization.
Authored by S. Moon, D. Nagalingam, Y. Ngow, A. Quah
Static analyzers have become increasingly popular both as developer tools and as subjects of empirical studies. Whereas static analysis tools exist for disparate programming languages, the bulk of the empirical research has focused on the popular Java programming language. In this paper, we investigate to what extent some known results about using static analyzers for Java change when considering C#, another popular object-oriented language. To this end, we combine two replications of previous Java studies. First, we study which static analysis tools are most widely used among C# developers, and which warnings are more commonly reported by these tools on open-source C# projects. Second, we develop and empirically evaluate EagleRepair: a technique to automatically fix code in response to static analysis warnings; this is a replication of our previous work for Java [20]. Our replication indicates, among other things, that 1) static code analysis is fairly popular among C# developers too; 2) ReSharper is the most widely used static analyzer for C#; 3) several static analysis rules are commonly violated in both Java and C# projects; and 4) automatically generating fixes to static code analysis warnings with good precision is feasible in C#. The EagleRepair tool developed for this research is available as open source.
Authored by Martin Odermatt, Diego Marcilio, Carlo Furia
Code-graph-based software defect prediction methods have become a research focus in the SDP field. Among them, the Code Property Graph is used as a form of data representation for code defects due to its ability to characterize the structural features and dependencies of defective code. However, because of the coarse granularity of the Code Property Graph, redundant information unrelated to defects is often attached to the characterization of software defects. Thus, how to locate software defects at a finer granularity in the Code Property Graph is a problem to be solved. Static analysis is a technique for identifying software defects using predefined defect rules, and there are many proven static analysis tools in industry. In this paper, we propose a method for locating specific types of defects in the Code Property Graph based on the results of a static analysis tool. Experiments show that this location method based on static analysis results can effectively predict the location of specific defect types in real software programs.
Authored by Haoxiang Shi, Wu Liu, Jingyu Liu, Jun Ai, Chunhui Yang
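One way to picture the proposed combination (a sketch with made-up data structures, not the authors' implementation) is to project the (file, line) locations reported by a static analysis tool onto fine-grained Code Property Graph nodes:

# Hypothetical CPG nodes: (node_id, file, line, node_kind).
cpg_nodes = [(1, "util.c", 42, "CALL"), (2, "util.c", 42, "IDENTIFIER"),
             (3, "util.c", 57, "CALL")]

# Hypothetical static analysis findings: (file, line, defect_type).
findings = [("util.c", 42, "NULL_DEREF")]

def locate(findings, cpg_nodes, kinds=("CALL", "IDENTIFIER")):
    # Keep only fine-grained CPG nodes that coincide with a reported
    # defect location; these anchor the defect inside the graph.
    hits = {(f, l) for f, l, _ in findings}
    return [n for n in cpg_nodes if (n[1], n[2]) in hits and n[3] in kinds]

print(locate(findings, cpg_nodes))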
Static analysis tools help to detect common programming errors but generate a large number of false positives. Moreover, when applied to evolving software systems, around 95% of alarms generated on a version are repeated, i.e., they have also been generated on the previous version. Version-aware static analysis techniques (VSATs) have been proposed to suppress the repeated alarms that are not impacted by the code changes between the two versions. The alarms reported by VSATs after the suppression, called delta alarms, still constitute 63% of the tool-generated alarms. We observe that delta alarms can be further postprocessed using their corresponding code changes: the code changes due to which VSATs identify them as delta alarms. However, none of the existing VSATs or alarm postprocessing techniques postprocesses delta alarms using the corresponding code changes. Based on this observation, we use the code changes to classify delta alarms into six classes that have different priorities assigned to them. The assignment of priorities is based on the type of code changes and their likelihood of actually impacting the delta alarms. The ranking of alarms, obtained by prioritizing the classes, can help suppress alarms that are ranked lower when resources to inspect all the tool-generated alarms are limited. We performed an empirical evaluation using 9789 alarms generated on 59 versions of seven open-source C applications. The evaluation results indicate that the proposed classification and ranking of delta alarms help to identify, on average, 53% of delta alarms as more likely to be false positives than the others.
Authored by Tukaram Muske, Alexander Serebrenik
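The priority assignment can be pictured as a mapping from change type to class; the class names and ranks below are illustrative, not the paper's exact six classes:

# Illustrative priority classes keyed by the kind of code change that
# caused an alarm to be reported as a delta alarm (lower rank = inspect first).
PRIORITY = {
    "affected_expression_changed": 1,   # change directly touches the alarm
    "control_flow_changed": 2,          # guard or branch around it changed
    "called_function_changed": 3,
    "unrelated_file_changed": 6,        # unlikely to impact the alarm
}

def rank(delta_alarms):
    # delta_alarms: list of (alarm_id, change_type) pairs.
    return sorted(delta_alarms, key=lambda a: PRIORITY.get(a[1], 6))

alarms = [("a1", "unrelated_file_changed"), ("a2", "affected_expression_changed")]
print(rank(alarms))  # a2 first: most likely impacted; a1 last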
Native code is now commonplace within Android app packages, where it co-exists and interacts with Dex bytecode through the Java Native Interface to deliver rich app functionalities. Yet, state-of-the-art static analysis approaches have mostly overlooked the presence of such native code, which may implement some key sensitive, or even malicious, parts of the app behavior. This limitation of the state of the art is a severe threat to validity in a large range of static analyses that do not have a complete view of the executable code in apps. To address this issue, we propose a new advance in the ambitious research direction of building a unified model of all code in Android apps. The JUCIFY approach presented in this paper is a significant step towards such a model, where we extract and merge call graphs of native code and bytecode to make the final model readily usable by a common Android analysis framework: in our implementation, JUCIFY builds on the Soot internal intermediate representation. We performed empirical investigations to highlight how, without the unified model, a significant amount of Java methods called from the native code are “unreachable” in apps' call graphs, both in goodware and malware. Using JUCIFY, we were able to enable static analyzers to reveal cases where malware relied on native code to hide invocation of payment library code or of other sensitive code in the Android framework. Additionally, JUCIFY's model enables state-of-the-art tools to achieve better precision and recall in detecting data leaks through native code. Finally, we show that by using JUCIFY we can find sensitive data leaks that pass through native code.
Authored by Jordan Samhi, Jun Gao, Nadia Daoudi, Pierre Graux, Henri Hoyez, Xiaoyu Sun, Kevin Allix, Tegawende Bissyandè, Jacques Klein
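At its core, the unification step merges the bytecode and native call graphs and stitches them at JNI boundaries; a toy version with networkx follows (hypothetical method names, and far simpler than JUCIFY's Soot-based model):

import networkx as nx

# Toy call graphs: Dex bytecode side and native side.
dex = nx.DiGraph([("App.onCreate", "App.nativeInit")])
native = nx.DiGraph([("Java_App_nativeInit", "pay_lib_charge")])

merged = nx.compose(dex, native)
# Stitch the JNI boundary: the declared native method resolves to the
# registered native symbol (the mapping is assumed known here).
merged.add_edge("App.nativeInit", "Java_App_nativeInit")

# Without the stitch, pay_lib_charge is unreachable from onCreate.
print(nx.has_path(merged, "App.onCreate", "pay_lib_charge"))  # True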
Synthetic static code analysis test suites are important for testing the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross-Site Scripting and SQL injection test cases. A decision tree is used to determine whether the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems of static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
Authored by Felix Schuckert, Hanno Langweg, Basel Katt
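The role of the decision tree can be sketched as follows (toy features and labels; the paper derives its tree from source code patterns for the two vulnerability types):

from sklearn.tree import DecisionTreeClassifier

# Toy encoding of generated test cases: [input_sanitized, sink_is_html,
# uses_prepared_statement]; label 1 = vulnerable.
X = [[0, 1, 0], [1, 1, 0], [0, 0, 0], [0, 0, 1]]
y = [1, 0, 1, 0]   # unsanitized input reaching an unprotected sink is vulnerable

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Classify a newly generated test case before adding it to a suite.
print(tree.predict([[0, 1, 1]]))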
The increasing use of Infrastructure as Code (IaC) in DevOps brings benefits in the speed and reliability of deployment operations, but it also extends to infrastructure the challenges typical of software systems. IaC scripts can contain defects that result in security and reliability issues in the deployed infrastructure, so techniques for detecting and preventing them are needed. We analyze and survey the current state of research in this respect by conducting a literature review on static analysis techniques for IaC. We describe the analysis techniques, defect categories, and platforms targeted by the tools in the literature.
Authored by Michele Chiari, Michele De Pascalis, Matteo Pradella
Long analysis times are a key bottleneck for the widespread adoption of whole-program static analysis tools. Fortunately, however, a user is often only interested in finding errors in the application code, which constitutes a small fraction of the whole program. Current application-focused analysis tools overapproximate the effect of the library and hence reduce the precision of the analysis results. However, empirical studies have shown that users have high expectations of precision and will ignore tool results that do not meet these expectations. In this paper, we introduce QueryMax, the first tool that significantly speeds up an application code analysis without dropping any precision. QueryMax acts as a pre-processor to an existing analysis tool, selecting a partial library that is most relevant to the analysis queries in the application code. The selected partial library plus the application is given as input to the existing static analysis tool, with the remaining library pointers treated as the bottom element in the abstract domain. This achieves a significant speedup over a whole-program analysis, at the cost of a few lost errors, and with no loss in precision. We instantiate and run experiments on QueryMax for a cast-check analysis and a null-pointer analysis. For a particular configuration, QueryMax enables these two analyses to achieve, relative to a whole-program analysis, an average recall of 87%, a precision of 100%, and a geometric mean speedup of 10x.
Authored by Akshay Utture, Jens Palsberg
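The selection step can be approximated as reachability over the call graph from the application's query points into the library (a toy BFS sketch; QueryMax's actual relevance criterion is analysis-specific):

from collections import deque

# Toy call graph: application methods calling into a large library.
calls = {"app.main": ["lib.parse"], "lib.parse": ["lib.lex"],
         "lib.lex": [], "lib.unused": ["lib.huge"]}

def partial_library(queries, calls):
    # BFS from the query points; only the reachable library slice is kept,
    # and the rest of the library is treated as bottom downstream.
    seen, todo = set(), deque(queries)
    while todo:
        m = todo.popleft()
        if m in seen:
            continue
        seen.add(m)
        todo.extend(calls.get(m, []))
    return {m for m in seen if m.startswith("lib.")}

print(partial_library(["app.main"], calls))  # lib.unused/lib.huge excluded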
Smart grid is a new generation of grid that integrates the traditional grid and the grid information system, and the information security of the smart grid is an extremely important part of the whole grid. Research on trusted computing technology provides new ideas for protecting the information security of the power grid. To address the problem of large deviations in the calculation of trusted dynamic thresholds, caused by characteristics such as self-similarity and traffic bursts in smart grid information collection, a traffic prediction model based on ARMA and the Poisson process is proposed. The Hurst coefficient is determined more accurately using R/S analysis, which ultimately improves the efficiency and accuracy of the trusted dynamic threshold calculation.
Authored by Fangfang Dang, Lijing Yan, Shuai Li, Dingding Li
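The R/S estimate of the Hurst coefficient mentioned above has a compact form: for each window size n, compute the rescaled range R/S and fit the slope of log(R/S) against log(n). A NumPy sketch on synthetic traffic (simplified; production estimators average over many window placements):

import numpy as np

def hurst_rs(x, sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())      # cumulative deviations
            r = z.max() - z.min()            # range
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)        # rescaled range
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]   # slope = Hurst estimate

traffic = np.random.default_rng(1).normal(size=4096)
print(hurst_rs(traffic))  # ~0.5 for uncorrelated traffic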
The use of Virtual Machine (VM) migration as support for software rejuvenation was introduced more than a decade ago. Since then, several works have validated this approach from experimental and theoretical perspectives. Recently, some works shed light on the possibility of using the same technique as a Moving Target Defense (MTD). However, to date, no work has evaluated the availability and security levels while applying VM migration for both rejuvenation and MTD (multipurpose VM migration). In this paper, we conduct a comprehensive evaluation using Stochastic Petri Net (SPN) models to tackle this challenge. The evaluation covers the steady-state system availability, expected MTD protection, and related metrics of a system under time-based multipurpose VM migration. Results show that the availability and security improvement due to VM migration deployment surpasses 50% in the best scenarios. However, there is a trade-off between availability and security metrics, meaning that improving one implies compromising the other.
Authored by Matheus Torquato, Paulo Maciel, Marco Vieira
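The kind of steady-state computation behind such SPN evaluations can be illustrated on a tiny continuous-time Markov chain (a drastic simplification of the paper's models, with hypothetical rates): solve pi*Q = 0 with pi summing to 1 and read off the probability of the up states.

import numpy as np

# Toy 3-state CTMC: 0 = up, 1 = migrating (still up), 2 = down.
# Hypothetical rates per hour: migrate, finish, fail, repair.
mig, fin, fail, rep = 0.5, 6.0, 0.01, 2.0
Q = np.array([[-(mig + fail), mig,   fail],
              [fin,          -fin,   0.0 ],
              [rep,           0.0,  -rep ]])

# Steady state: pi*Q = 0 and sum(pi) = 1, solved as a least-squares system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]    # both up states count as available
print(availability)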
Security is of vital importance in wireless industrial communication systems, where spoofing attacks can lead to economic losses or even safety accidents. To address this concern, existing approaches mainly rely on traditional cryptographic algorithms. However, these methods cannot meet the needs of short delay and lightweight operation. In this paper, we propose a CSI-based PHY-layer security authentication scheme to detect spoofing attacks. The main idea takes advantage of the spatially uncorrelated nature of wireless channels to identify spoofing nodes at the physical layer. We demonstrate a MIMO-OFDM-based spoofing detection prototype in industrial environments. First, we use Universal Software Radio Peripherals (USRPs) to establish a MIMO-OFDM communication system. Second, we demonstrate our proposed CSI-based PHY-layer authentication scheme. Finally, the effectiveness of the proposed approach is verified via attack experiments.
Authored by Songlin Chen, Sijing Wang, Xingchen Xu, Long Jiao, Hong Wen
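The physical-layer test itself is simple once CSI is available (a sketch on synthetic channel vectors; real CSI would come from the MIMO-OFDM receiver): consecutive frames from the same transmitter yield highly correlated CSI, while a spoofer at another location does not.

import numpy as np

rng = np.random.default_rng(2)
h_legit = rng.normal(size=64) + 1j * rng.normal(size=64)   # legitimate channel
h_next = h_legit + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
h_spoof = rng.normal(size=64) + 1j * rng.normal(size=64)   # different location

def correlated(h1, h2, threshold=0.8):
    # Normalized correlation of channel state information vectors.
    rho = abs(np.vdot(h1, h2)) / (np.linalg.norm(h1) * np.linalg.norm(h2))
    return rho >= threshold

print(correlated(h_legit, h_next))   # True: same transmitter
print(correlated(h_legit, h_spoof))  # False: flagged as spoofing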
Software vulnerabilities threaten the security of computer systems, and recently more and more vulnerabilities have been discovered and disclosed. For each detected vulnerability, the relevant personnel analyze its characteristics and, in combination with a vulnerability scoring system, determine its severity level, so as to decide which vulnerabilities need to be dealt with first. In recent years, characteristic-description-based methods have been used to predict the severity level of vulnerabilities. However, traditional text processing methods only grasp the superficial meaning of the text and ignore important contextual information. Therefore, this paper proposes an innovative method, called BERT-CNN, which combines BERT's task-specific layer with a CNN to capture important contextual information in the text. First, we use BERT to process the vulnerability description and other information, including Access Gained, Attack Origin, and Authentication Required, to generate feature vectors. Then these feature vectors of vulnerabilities and their severity levels are input into a CNN, and the parameters of the CNN are obtained. Next, the fine-tuned BERT and the trained CNN are used to predict the severity level of a vulnerability. The results show that our method outperforms the state-of-the-art method with 91.31% on F1-score.
Authored by Xuming Ni, Jianxin Zheng, Yu Guo, Xu Jin, Ling Li
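The architecture can be outlined in PyTorch as follows (a sketch of the general BERT-plus-CNN pattern, not the authors' exact configuration; the filter count, kernel size, and severity-class count are assumptions):

import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNN(nn.Module):
    def __init__(self, n_classes=4, n_filters=100, kernel=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.conv = nn.Conv1d(self.bert.config.hidden_size, n_filters, kernel)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, input_ids, attention_mask):
        # Token embeddings from BERT: (batch, seq, hidden).
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h = self.conv(h.transpose(1, 2)).relu()   # (batch, filters, seq')
        h = h.max(dim=2).values                   # max-pool over tokens
        return self.fc(h)                         # severity-class logits

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["Buffer overflow in parser allows remote code execution."],
            return_tensors="pt", padding=True, truncation=True)
print(BertCNN()(batch["input_ids"], batch["attention_mask"]).shape)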
Ethical bias in machine learning models has become a matter of concern in the software engineering community. Most prior software engineering works concentrated on finding ethical bias in models rather than fixing it. After finding bias, the next step is mitigation. Prior researchers mainly tried to use supervised approaches to achieve fairness. However, in the real world, getting data with trustworthy ground truth is challenging, and ground truth can itself contain human bias. Semi-supervised learning is a technique where, incrementally, labeled data is used to generate pseudo-labels for the rest of the data (and then all that data is used for model training). In this work, we apply four popular semi-supervised techniques as pseudo-labelers to create fair classification models. Our framework, Fair-SSL, takes a very small amount (10%) of labeled data as input and generates pseudo-labels for the unlabeled data. We then synthetically generate new data points to balance the training data based on class and protected attribute, as proposed by Chakraborty et al. in FSE 2021. Finally, a classification model is trained on the balanced pseudo-labeled data and validated on test data. After experimenting on ten datasets and three learners, we find that Fair-SSL achieves performance similar to three state-of-the-art bias mitigation algorithms. That said, the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. To the best of our knowledge, this is the first SE work where semi-supervised techniques are used to fight ethical bias in SE ML models. To facilitate open science and replication, all our source code and datasets are publicly available at https://github.com/joymallyac/FairSSL.
Authored by Joymallya Chakraborty, Suvodeep Majumder, Huy Tu
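The semi-supervised core can be reproduced with scikit-learn's self-training wrapper (a generic sketch of the pseudo-labeling step only, on synthetic data; Fair-SSL additionally balances the pseudo-labeled data on class and protected attribute before training):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Keep labels for only 10% of the data; mark the rest as unlabeled (-1).
y = np.full(1000, -1)
labeled = rng.choice(1000, size=100, replace=False)
y[labeled] = y_true[labeled]

# Self-training: confident predictions become pseudo-labels iteratively.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
model.fit(X, y)
print((model.predict(X) == y_true).mean())  # accuracy over all points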
This research evaluates the accuracy of two methods of authorship prediction, syntactical analysis and n-grams, and explores their potential usage. The proposed algorithm measures n-grams and counts adjectives, adverbs, verbs, nouns, punctuation, and sentence length from the training data, and normalizes each metric. It then compares the metrics of training samples to testing samples and predicts authorship based on the correlation they share for each metric. The strength of the correlation between the testing and training data carries significant weight in the decision-making process. For example, if analysis of one metric approaches a 100% positive correlation, that metric is assigned the maximum weight in the decision. Conversely, a 100% negative correlation receives the minimum weight. This new method of authorship validation holds promise for future innovation in fraud protection, the study of historical documents, and maintaining integrity within academia.
Authored by Jared Nelson, Mohammad Shekaramiz
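The correlation-weighted decision described above can be sketched directly (toy feature extraction; the actual metric set includes n-grams, part-of-speech counts, and per-corpus normalization):

import numpy as np

def features(text):
    words = text.split()
    # Toy stylometric metrics: mean word length, words per sentence,
    # punctuation rate (stand-ins for the full metric set).
    n_sent = max(text.count("."), 1)
    return np.array([np.mean([len(w) for w in words]),
                     len(words) / n_sent,
                     sum(c in ",.;!?" for c in text) / len(words)])

def predict_author(sample, training):
    # Score each candidate by correlation of feature vectors; the
    # strongest positive correlation carries the decision.
    f = features(sample)
    scores = {a: np.corrcoef(f, features(t))[0, 1] for a, t in training.items()}
    return max(scores, key=scores.get)

corpus = {"A": "Short words here. Quick lines.",
          "B": "Altogether lengthier constructions, meandering clauses; verbose."}
print(predict_author("Tiny terse text. Brief bits.", corpus))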