Hard Problems: Predictive Metrics 2015

 

One of the hard problems in the Science of Security is the development of predictive metrics. The work on this topic cited here was presented in 2015.


Abraham, S.; Nair, S., "Exploitability Analysis Using Predictive Cybersecurity Framework," in Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, pp. 317-323, 24-26 June 2015. doi: 10.1109/CYBConf.2015.7175953

Abstract: Managing security is a complex process, and existing research in the field of cybersecurity metrics provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that can enable enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework, and we analyze how the atomic attributes that make up the exploitability score, namely Access Complexity, Access Vector, and Authentication, evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model and the Alhazmi-Malaiya logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cybersecurity model to improve decision making and reduce risk.

Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; decision making; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Markov processes; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model. (ID#: 15-8566)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7175953&isnumber=7175890
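
For readers who want a concrete feel for the approach, the sketch below evolves a toy two-state, non-homogeneous Markov chain whose daily exploitation probability is modulated by a logistic discovery-rate term. It is a minimal illustration of the idea described in the abstract, not the authors' calibrated model; the state space, the logistic parameters, and the scaling rule are all assumptions made for demonstration.

```python
# Illustrative sketch of a non-homogeneous (time-dependent) Markov chain for
# exploitability, loosely inspired by the abstract above. The two states, the
# logistic discovery-rate term, and all parameter values are assumptions, not
# the authors' calibrated model.
import numpy as np

def discovery_rate(t, a=0.05, b=100.0):
    # Hypothetical Alhazmi-Malaiya-style logistic vulnerability discovery curve.
    return 1.0 / (1.0 + np.exp(-a * (t - b)))

def daily_transition_matrix(t, base_prob=0.01):
    # P[i, j] = probability of moving from state i to state j on day t.
    # State 0 = "not exploited", state 1 = "exploited" (absorbing).
    p_exploit = min(1.0, base_prob * (1.0 + discovery_rate(t)))
    return np.array([[1.0 - p_exploit, p_exploit],
                     [0.0,             1.0]])

state = np.array([1.0, 0.0])           # start fully in "not exploited"
for t in range(365):                   # evolve the chain over one year
    state = state @ daily_transition_matrix(t)

print(f"P(exploited within a year) ~ {state[1]:.3f}")
```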

 

Yaming Tang; Fei Zhao; Yibiao Yang; Hongmin Lu; Yuming Zhou; Baowen Xu, "Predicting Vulnerable Components via Text Mining or Software Metrics? An Effort-Aware Perspective," in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 27-36, 3-5 Aug. 2015. doi: 10.1109/QRS.2015.15

Abstract: In order to identify vulnerable software components, developers can take software metrics as predictors or use text mining techniques to build vulnerability prediction models. A recent study reported that text mining based models have higher recall than software metrics based models. However, this conclusion was drawn without considering the sizes of individual components, which affect the code inspection effort needed to determine whether a component is vulnerable. In this paper, we investigate the predictive power of these two kinds of prediction models in the context of effort-aware vulnerability prediction. To this end, we use the same data sets, containing 223 vulnerabilities found in three web applications, to build vulnerability prediction models. The experimental results show that: (1) in the effort-aware ranking scenario, text mining based models only slightly outperform software metrics based models; (2) in the effort-aware classification scenario, text mining based models perform similarly to software metrics based models in most cases; and (3) most of the effect sizes (i.e., the magnitudes of the differences) between these two kinds of models are trivial. These results suggest that, from the viewpoint of practical application, software metrics based models are comparable to text mining based models. Therefore, for developers, software metrics based models are practical choices for vulnerability prediction, as the cost to build and apply these models is much lower.

Keywords: Internet; data mining; software metrics; text analysis; Web applications; effort-aware perspective; effort-aware ranking scenario; effort-aware vulnerability prediction; software metrics based models; text mining; vulnerability prediction models; vulnerable software components; Context; Context modeling; Predictive models; Software; Software metrics; Text mining; effort-aware; prediction; software metrics; text mining; vulnerability (ID#: 15-8567)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7272911&isnumber=7272893
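
The effort-aware scenario the authors study can be illustrated in a few lines of code: components are inspected in order of predicted risk per line of code, and recall is tracked against cumulative inspected SLOC. All numbers below are fabricated for illustration.

```python
# Minimal sketch of effort-aware ranking evaluation: components are inspected
# in order of predicted risk per line of code, and recall is tracked against
# cumulative inspection effort (SLOC). All data here is fabricated; the
# paper's setup and metrics are richer than this.
sloc       = [1200, 300, 4500, 800, 150]   # component sizes
risk_score = [0.9,  0.7, 0.85, 0.2, 0.6]   # model-predicted risk
vulnerable = [1,    1,   0,    0,   1]     # ground truth

# Rank by risk density (risk per line) rather than raw risk.
order = sorted(range(len(sloc)), key=lambda i: risk_score[i] / sloc[i], reverse=True)

found, effort, total_vulns = 0, 0, sum(vulnerable)
for i in order:
    effort += sloc[i]
    found += vulnerable[i]
    print(f"inspected {effort:>5} SLOC -> recall {found / total_vulns:.2f}")
```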

 

Woody, C.; Ellison, R.; Nichols, W., "Predicting Cybersecurity Using Quality Data," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-5, 14-16 April 2015. doi: 10.1109/THS.2015.7225327

Abstract: Within the process of system development and implementation, programs assemble hundreds of different metrics for tracking and monitoring software, such as budgets, costs and schedules, contracts, and compliance reports. Each contributes, directly or indirectly, toward the cybersecurity assurance of the results. The Software Engineering Institute has detailed size, defect, and process data on over 100 software development projects. The projects include a wide range of application domains. Data from five projects identified as successful safety-critical or security-critical implementations were selected for cybersecurity consideration. Material was analyzed to identify a possible correlation between modeling quality and security and to identify potential predictive cybersecurity modeling characteristics. While not a statistically significant sample, this data indicates the potential for establishing benchmarks for ranges of quality performance (for example, defect injection rates, removal rates, and test yields) that provide a predictive capability for cybersecurity results.

Keywords: safety-critical software; security of data; software quality; system monitoring; Software Engineering Institute; cybersecurity assurance; cybersecurity consideration; predictive capability; predictive cybersecurity modeling characteristics; programs assemble; quality data; quality performance; safety-critical implementation; security-critical implementation; software development project; software monitoring; software tracking; system development; Contracts; Safety; Schedules; Software; Software measurement; Testing; Topology; engineering security; quality modeling; security predictions; software assurance (ID#: 15-8568)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225327&isnumber=7190491
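
The benchmarking idea, quality measures such as defect injection rates serving as predictors of security outcomes, reduces in its simplest form to a correlation check. The sketch below uses fabricated numbers; the paper's analysis draws on real SEI project data and is considerably more careful.

```python
# Toy sketch of the benchmarking idea: correlate a quality measure (defect
# injection rate per KLOC) with a security outcome (post-release
# vulnerabilities). Data is fabricated for illustration only.
import numpy as np

injection_rate = np.array([12.0, 25.0, 8.0, 40.0, 15.0])   # defects / KLOC
vulns_found    = np.array([2,    6,    1,   9,    3])       # post-release vulns

r = np.corrcoef(injection_rate, vulns_found)[0, 1]
print(f"Pearson r between injection rate and vulnerabilities: {r:.2f}")
```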

 

Abraham, S.; Nair, S., "A Novel Architecture for Predictive CyberSecurity Using Non-homogenous Markov Models," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 774-781, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.446

Abstract: Evaluating the security of an enterprise is an important step towards securing its system and resources. However, existing research provides limited insight into understanding the impact attacks have on the overall security goals of an enterprise. We still lack effective techniques to accurately measure the predictive security risk of an enterprise, taking into account the dynamic attributes associated with vulnerabilities that can change over time. It is therefore critical to establish an effective cyber-security analytics strategy to minimize risk and protect critical infrastructure from external threats before an attack even starts. In this paper we present an integrated view of security for computer networks within an enterprise: understanding threats and vulnerabilities, and performing analysis to evaluate the current as well as the future security situation of an enterprise in order to address potential situations. We formally define a non-homogeneous Markov model for quantitative security evaluation using attack graphs which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate, to help visualize the future security state of the network, leading to actionable knowledge and insight. We present experimental results from applying this model on a sample network to demonstrate the practicality of our approach.

Keywords: Markov processes; computer network security; attack graphs; computer networks; cyber security analytics strategy; dynamic attributes; enterprise security goals; external threats; impact attacks; nonhomogeneous Markov model; nonhomogenous Markov Models; predictive cybersecurity; predictive security risk; quantitative security evaluation; time dependent covariates; Biological system modeling; Computer architecture; Computer security; Markov processes; Measurement; Attack Graph; CVSS; Cyber Situational Awareness; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model (ID#: 15-8569)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7345354&isnumber=7345233
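
A minimal way to picture quantitative evaluation over an attack graph is to treat it as an absorbing Markov chain and ask how likely the attacker is to reach the goal state by a given day. In the paper the transition probabilities evolve with vulnerability age and discovery rate; the sketch below freezes them at illustrative values.

```python
# Sketch of quantitative security evaluation over an attack graph treated as
# an absorbing Markov chain: state 2 ("attacker reaches goal") is absorbing.
# The transition probabilities, which in the paper are time-dependent, are
# fixed illustrative values here.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # state 0: initial foothold
              [0.00, 0.85, 0.15],    # state 1: intermediate host compromised
              [0.00, 0.00, 1.00]])   # state 2: goal (absorbing)

state = np.array([1.0, 0.0, 0.0])
for day in (30, 90, 180):
    prob = state @ np.linalg.matrix_power(P, day)
    print(f"P(goal compromised by day {day:>3}) ~ {prob[2]:.3f}")
```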

 

Anger, E.; Yalamanchili, S.; Dechev, D.; Hendry, G.; Wilke, J., "Application Modeling for Scalable Simulation of Massively Parallel Systems," in 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), and 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), pp. 238-247, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.286

Abstract: Macro-scale simulation has been advanced as one tool for application-architecture co-design to express the operation of exascale systems. These simulations approximate the behavior of system components, trading off accuracy for increased evaluation speed. Application skeletons serve as the vehicle for these simulations, but they require accurately capturing the execution behavior of computation. The complexity of application codes, the heterogeneity of the platforms, and the increasing importance of simulating multiple performance metrics (e.g., execution time, energy) require new modeling techniques. We propose flexible statistical models to increase the fidelity of application simulation at scale. We present performance model validation for several exascale mini-applications that leverage a variety of parallel programming frameworks targeting heterogeneous architectures, for both time and energy performance metrics. When paired with these statistical models, application skeletons were simulated on average 12.5 times faster than the original application, incurring only 6.08% error, which is 12.5% faster and 33.7% more accurate than baseline models.

Keywords: parallel architectures; parallel programming; power aware computing; principal component analysis; application-architecture codesign; energy performance metrics; exascale systems; flexible statistical model; heterogeneous architectures; massively parallel systems; parallel programming frameworks; performance metrics; performance model; scalable simulation modeling; statistical model; time performance metrics; Analytical models; Computational modeling; Data models;Hardware; Load modeling; Predictive models; Skeleton (ID#: 15-8570)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7336170&isnumber=7336120

 

Daniel R. Thomas, Alastair R. Beresford, Andrew Rice; “Security Metrics for the Android Ecosystem;” SPSM '15 Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, October 2015, Pages 87-98. doi: 10.1145/2808117.2808118

Abstract: The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. Using a corpus of 20,400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.

Keywords: android, ecosystems, metrics, updates, vulnerabilities (ID#: 15-8571)

URL: http://doi.acm.org/10.1145/2808117.2808118
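
The FUM score combines f, the proportion of devices free from known critical vulnerabilities, u, the proportion running the latest version, and m, the mean number of vulnerabilities the manufacturer has not yet fixed, into a score out of 10. The sketch below reflects our reading of that structure with a 4/3/3 weighting; consult the paper for the authoritative definition.

```python
# Sketch of a FUM-style security score out of 10, following the component
# structure described in the paper. The exact weighting/functional form
# should be checked against the original; this 4/3/3 combination is our
# reading of it, not an authoritative copy.
import math

def fum_score(f, u, m):
    # f, u in [0, 1]; m >= 0. The m-term decays from 1 toward 0 as the
    # number of unfixed vulnerabilities grows.
    return 4.0 * f + 3.0 * u + 3.0 * (2.0 / (1.0 + math.exp(m)))

# The abstract quotes ecosystem-wide scores of 2.87 (overall), 5.17 (Nexus)
# and 3.97 (LG); the toy inputs below are illustrative only.
print(f"example manufacturer: {fum_score(f=0.30, u=0.20, m=2.0):.2f} / 10")
```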

 

Cormac Herley, Wolter Pieters; “'If You Were Attacked, You'd Be Sorry': Counterfactuals as Security Arguments;” NSPW '15 Proceedings of the 2015 New Security Paradigms Workshop, September 2015, Pages 112-123. doi: 10.1145/2841113.2841122

Abstract: Counterfactuals (or what-if scenarios) are often employed as security arguments, but the dos and don'ts of their use are poorly understood. They are useful to discuss vulnerability of systems under threats that haven't yet materialized, but they can also be used to justify investment in obscure controls. In this paper, we shed light on the role of counterfactuals in security, and present conditions under which counterfactuals are legitimate arguments, linked to the exclusion or inclusion of the threat environment in security metrics. We provide a new paradigm for security reasoning by deriving essential questions to ask in order to decide on the acceptability of specific counterfactuals as security arguments, which can serve as a basis for further study in this field. We conclude that counterfactuals are a necessary evil in security, which should be carefully controlled.

Keywords: adversarial risk, control strength, counterfactuals, security arguments, security metrics, threat environment (ID#: 15-8572)

URL:  http://doi.acm.org/10.1145/2841113.2841122

 

Yang Liu, Jing Zhang, Armin Sarabi, Mingyan Liu, Manish Karir, Michael Bailey; “Predicting Cyber Security Incidents Using Feature-Based Characterization of Network-Level Malicious Activities;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on Security and Privacy Analytics, March 2015, Pages 3-9. doi: 10.1145/2713579.2713582

Abstract: This study offers a first step toward understanding the extent to which we may be able to predict cyber security incidents (which can be of one of many types) by applying machine learning techniques and using externally observed malicious activities associated with network entities, including spamming, phishing, and scanning, each of which may or may not have direct bearing on a specific attack mechanism or incident type. Our hypothesis is that when viewed collectively, malicious activities originating from a network are indicative of the general cleanness of a network and how well it is run, and that furthermore, collectively they exhibit fairly stable and thus predictive behavior over time. To test this hypothesis, we utilize two datasets in this study: (1) a collection of commonly used IP address-based/host reputation blacklists (RBLs) collected over more than a year, and (2) a set of security incident reports collected over roughly the same period. Specifically, we first aggregate the RBL data at a prefix level and then introduce a set of features that capture the dynamics of this aggregated temporal process. A comparison between the distribution of these feature values taken from the incident dataset and from the general population of prefixes shows distinct differences, suggesting their value in distinguishing between the two while also highlighting the importance of capturing dynamic behavior (second order statistics) in the malicious activities. These features are then used to train a support vector machine (SVM) for prediction. Our preliminary results show that we can achieve reasonably good prediction performance over a forecasting window of a few months.

Keywords: network reputation, network security, prediction, temporal pattern, time-series data (ID#: 15-8573)

URL:  http://doi.acm.org/10.1145/2713579.2713582
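
The pipeline the abstract outlines, summarizing a prefix's blacklisting history into features that capture both magnitude and dynamics and feeding them to an SVM, can be sketched as follows. The three features and the synthetic data are illustrative stand-ins for the paper's RBL-derived feature set.

```python
# Sketch of the pipeline described above: summarize a prefix's daily
# blacklisted-IP counts into simple dynamic features (mean, variance, trend),
# then train an SVM to separate incident from non-incident prefixes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(series):
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]          # first-order trend
    return [series.mean(), series.std(), slope]  # magnitude + dynamics

# Fabricated daily RBL counts: "dirty" prefixes trend upward over 60 days.
clean = [rng.poisson(5, 60).astype(float) for _ in range(50)]
dirty = [rng.poisson(5, 60) + np.linspace(0, 20, 60) for _ in range(50)]

X = np.array([features(s) for s in clean + dirty])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```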

 

Patrick Morrison, Kim Herzig, Brendan Murphy, Laurie Williams; “Challenges with Applying Vulnerability Prediction Models;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 4. doi: 10.1145/2746194.2746198

Abstract: Vulnerability prediction models (VPM) are believed to hold promise for providing software engineers guidance on where to prioritize precious verification resources to search for vulnerabilities. However, while Microsoft product teams have adopted defect prediction models, they have not adopted vulnerability prediction models (VPMs). The goal of this research is to measure whether vulnerability prediction models built using standard recommendations perform well enough to provide actionable results for engineering resource allocation. We define 'actionable' in terms of the inspection effort required to evaluate model results. We replicated a VPM for two releases of the Windows Operating System, varying model granularity and statistical learners. We reproduced binary-level prediction precision (~0.75) and recall (~0.2). However, binaries often exceed 1 million lines of code, too large to practically inspect, and engineers expressed preference for source file level predictions. Our source file level models yield precision below 0.5 and recall below 0.2. We suggest that VPMs must be refined to achieve actionable performance, possibly through security-specific metrics.

Keywords: churn, complexity, coverage, dependencies, metrics, prediction, vulnerabilities (ID#: 15-8574)

URL:  http://doi.acm.org/10.1145/2746194.2746198
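
The paper's actionability argument can be made concrete with back-of-envelope arithmetic: similar precision at binary versus file granularity implies very different inspection effort per vulnerability found. The counts and LOC figures below are illustrative assumptions loosely anchored to the figures quoted in the abstract.

```python
# Back-of-envelope version of the actionability argument: the same model
# quality implies very different inspection costs at binary vs. source-file
# granularity. All counts and LOC figures are illustrative.
def effort_per_find(n_flagged, precision, loc_per_unit):
    true_positives = n_flagged * precision
    # LOC a reviewer must read, on average, per real vulnerability found.
    return (n_flagged * loc_per_unit) / true_positives

# Binary-level: precision ~0.75, but a binary can exceed 1M LOC.
print(f"binary-level: {effort_per_find(20, 0.75, 1_000_000):,.0f} LOC per vulnerability")
# File-level: precision below 0.5, but a file is small.
print(f"file-level:   {effort_per_find(200, 0.45, 400):,.0f} LOC per vulnerability")
```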

 

Gargi Saha, T. Pranav Bhat, K. Chandrasekaran; “A Generic Approach to Security Evaluation for Multimedia Data;” ICCCT '15 Proceedings of the Sixth International Conference on Computer and Communication Technology 2015, September 2015, Pages 333-338. doi: 10.1145/2818567.2818669

Abstract: Beginning with a critical analysis of existing multimedia metrics, this paper builds upon their drawbacks in streaming media by introducing alternate metrics, backed by analytical correctness proofs of accuracy and comparative simulations with earlier metrics to justify the improvements made in security judgement techniques.

Keywords: Luminance Similarity Score, Multimedia, Multimedia security metrics, Security Metrics, Security evaluation metrics (ID#: 15-8575)

URL:  http://doi.acm.org/10.1145/2818567.2818669

 

Mohammad Noureddine, Ken Keefe, William H. Sanders, Masooda Bashir; “Quantitative Security Metrics with Human in the Loop;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 21. doi: 10.1145/2746194.2746215

Abstract: The human factor is often regarded as the weakest link in cybersecurity systems. The investigation of several security breaches reveals an important impact of human errors in exhibiting security vulnerabilities. Although security researchers have long observed the impact of human behavior, few improvements have been made in designing secure systems that are resilient to the uncertainties of the human element. In this work, we summarize the state-of-the-art work in human cybersecurity research, and present the Human-Influenced Task-Oriented (HITOP) formalism for modeling human decisions in security systems. We also provide a roadmap for future research. We aim to develop a simulation tool that allows modeling and analysis of security systems in light of the uncertainties of human behavior.

Keywords: human models, quantitative security metrics, security modeling (ID#: 15-8576)

URL:  http://doi.acm.org/10.1145/2746194.2746215

 

Shouling Ji, Shukun Yang, Ting Wang, Changchang Liu, Wei-Han Lee, Raheem Beyah; “PARS: A Uniform and Open-source Password Analysis and Research System;” ACSAC 2015 Proceedings of the 31st Annual Computer Security Applications Conference, December 2015, Pages 321-330. doi: 10.1145/2818000.2818018

Abstract: In this paper, we introduce an open-source and modular password analysis and research system, PARS, which provides a uniform, comprehensive and scalable research platform for password security. To the best of our knowledge, PARS is the first such system that enables researchers to conduct fair and comparable password security research. PARS contains 12 state-of-the-art cracking algorithms, 15 intra-site and cross-site password strength metrics, 8 academic password meters, and 15 of the 24 commercial password meters from the top-150 websites ranked by Alexa. Also, detailed taxonomies and large-scale evaluations of the PARS modules are presented in the paper.

Keywords: Passwords, cracking, evaluation, measurement, metrics (ID#: 15-8577)

URL:  http://doi.acm.org/10.1145/2818000.2818018

 

Sofia Charalampidou, Apostolos Ampatzoglou, Paris Avgeriou; “Size and Cohesion Metrics as Indicators of the Long Method Bad Smell: An Empirical Study;” PROMISE '15 Proceedings of the 11th International Conference on Predictive Models and Data Analytics in Software Engineering, October 2015, Article No. 8. doi: 10.1145/2810146.2810155

Abstract: Source code bad smells are usually resolved through the application of well-defined solutions, i.e., refactoring. In the literature, software metrics are used as indicators of the existence and prioritization of resolving bad smells. In this paper, we focus on the long method smell (i.e., one of the most frequent and persistent bad smells) that can be resolved by the extract method refactoring. Until now, the identification of long methods or extract method opportunities has been performed based on cohesion, size or complexity metrics. However, the empirical validation of these metrics has exhibited relatively low accuracy with regard to their capacity to indicate the existence of long methods or extract method opportunities. Thus, we empirically explore the ability of size and cohesion metrics to predict the existence and the refactoring urgency of long method occurrences, through a case study on Java open-source methods. The results of the study suggest that one size and four cohesion metrics are capable of characterizing the need and urgency for resolving the long method bad smell, with a higher accuracy compared to the previous studies. The obtained results are discussed by providing possible interpretations and implications to practitioners and researchers.

Keywords: Long method, case study, cohesion, metrics, size (ID#: 15-8578)

URL: http://doi.acm.org/10.1145/2810146.2810155
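
To make the cohesion angle concrete, the toy sketch below computes an LCOM-like "lack of cohesion" count over the variable sets used by each statement of a method; two unrelated statement clusters signal an extract-method opportunity. The paper evaluates established size and cohesion metrics, so treat this only as an illustration of the underlying intuition.

```python
# Minimal sketch of a cohesion signal for long-method detection: treat each
# statement's variable uses as a set and compare statement pairs that share
# no variables against pairs that do (an LCOM-like count). This toy version
# only conveys the idea, not the paper's actual metrics.
from itertools import combinations

# Hypothetical method: variables referenced by each statement.
statements = [
    {"total", "items"},      # part A of the method
    {"total", "tax"},
    {"log", "msg"},          # unrelated part B -> refactoring candidate
    {"log", "level"},
]

disjoint = sum(1 for a, b in combinations(statements, 2) if not a & b)
shared   = sum(1 for a, b in combinations(statements, 2) if a & b)
lcom = max(0, disjoint - shared)
print(f"size={len(statements)} statements, LCOM-like score={lcom}")
```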

 

Haining Chen, Omar Chowdhury, Jing Chen, Ninghui Li, Robert Proctor; “Towards Quantification of Firewall Policy Complexity;” HotSoS '15 Proceedings of the 2015 Symposium and Bootcamp on the Science of Security, April 2015, Article No. 18. doi: 10.1145/2746194.2746212

Abstract: Developing metrics for quantifying the security and usability aspects of a system has been of constant interest to the cybersecurity research community. Such metrics have the potential to provide valuable insight on security and usability of a system and to aid in the design, development, testing, and maintenance of the system. Working towards the overarching goal of such metric development, in this work we lay down the groundwork for developing metrics for quantifying the complexity of firewall policies. We are particularly interested in capturing the human perceived complexity of firewall policies. To this end, we propose a potential workflow that researchers can follow to develop empirically-validated, objective metrics for measuring the complexity of firewall policies. We also propose three hypotheses that capture salient properties of a firewall policy which constitute the complexity of a policy for a human user. We identify two categories of human-perceived policy complexity (i.e., syntactic complexity and semantic complexity), and for each of them propose potential complexity metrics for firewall policies that exploit two of the hypotheses we suggest. The current work can be viewed as a stepping stone for future research on development of such policy complexity metrics.

Keywords: firewall policies, policy complexity metrics (ID#: 15-8579)

URL: http://doi.acm.org/10.1145/2746194.2746212

 

Niketa Gupta, Deepali Singh, Ashish Sharma; “Identifying Effective Software Metrics for Categorical Defect Prediction Using Structural Equation Modeling;” WCI '15 Proceedings of the Third International Symposium on Women in Computing and Informatics, April 2015, Pages 59-65. doi: 10.1145/2791405.2791484

Abstract: Software defect prediction is a pre-eminent area of software engineering which has witnessed huge importance over the last decades. The identification of defects in the early stages of software development improves the quality of the software system and reduces the effort of maintaining the quality of the software product. Many research studies have been conducted to construct prediction models that consider the CK metrics suite and object-oriented software metrics. For prediction model development, consideration of the interaction among the metrics is not a common practice. This paper presents an empirical evaluation in which several software metrics were investigated in order to identify the effective set of metrics for each defect category, which can significantly improve the defect prediction model made for each defect category. For each of the metrics, the Pearson correlation coefficient with the number of defects in each category was calculated, and subsequently a stepwise regression model was constructed to predict the reduced metric set for each defect category. We have further proposed a novel approach for modeling the defects using structural equation modeling, which validates our work. Structural models were built for each defect category using structural equation modeling.

Keywords: Defect Prediction, Software Metrics, Stepwise regression model, Structural Equation Modeling (ID#: 15-8580)

URL:  http://doi.acm.org/10.1145/2791405.2791484
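
The two-step procedure described in the abstract, Pearson screening followed by stepwise regression, can be sketched as follows with fabricated CK-style metrics; the paper additionally validates the selected metric sets with structural equation models.

```python
# Sketch of the two-step procedure: screen metrics by Pearson correlation
# with defect counts, then build a greedy forward-stepwise linear model.
# Data and metric names are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 40
metrics = {                       # toy CK-style metrics
    "WMC": rng.normal(10, 3, n),
    "CBO": rng.normal(5, 2, n),
    "LOC": rng.normal(200, 50, n),
}
defects = 0.5 * metrics["WMC"] + 0.1 * metrics["LOC"] + rng.normal(0, 2, n)

# Step 1: Pearson screening.
for name, x in metrics.items():
    print(f"{name}: r = {np.corrcoef(x, defects)[0, 1]:+.2f}")

# Step 2: greedy forward selection by correlation with the current residual.
selected, residual = [], defects - defects.mean()
for _ in range(2):
    best = max(metrics, key=lambda m: abs(np.corrcoef(metrics[m], residual)[0, 1])
                                      if m not in selected else -1)
    X = np.column_stack([metrics[m] for m in selected + [best]] + [np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
    residual = defects - X @ coef
    selected.append(best)
    print(f"added {best}; residual std = {residual.std():.2f}")
```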

 

Xiaoyuan Jing, Fei Wu, Xiwei Dong, Fumin Qi, Baowen Xu; “Heterogeneous Cross-Company Defect Prediction by Unified Metric Representation and CCA-Based Transfer Learning;” ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 496-507. doi: 10.1145/2786805.2786813

Abstract: Cross-company defect prediction (CCDP) learns a prediction model by using training data from one or multiple projects of a source company and then applies the model to the target company data. Existing CCDP methods are based on the assumption that the data of the source and target companies should have the same software metrics. However, for CCDP, the source and target company data is usually heterogeneous, namely the metrics used and the size of the metric set differ between the data of the two companies. We call CCDP in this scenario the heterogeneous CCDP (HCCDP) task. In this paper, we aim to provide an effective solution for HCCDP. We propose a unified metric representation (UMR) for the data of the source and target companies. The UMR consists of three types of metrics, i.e., the common metrics of the source and target companies, source-company-specific metrics and target-company-specific metrics. To construct the UMR for source company data, the target-company-specific metrics are set to zeros, while for the UMR of the target company data, the source-company-specific metrics are set to zeros. Based on the unified metric representation, we for the first time introduce canonical correlation analysis (CCA), an effective transfer learning method, into CCDP to make the data distributions of the source and target companies similar. Experiments on 14 public heterogeneous datasets from four companies indicate that: 1) for HCCDP with partially different metrics, our approach significantly outperforms state-of-the-art CCDP methods; 2) for HCCDP with totally different metrics, our approach obtains comparable prediction performance in contrast with within-project prediction results. The proposed approach is effective for HCCDP.

Keywords: Heterogeneous cross-company defect prediction (HCCDP), canonical correlation analysis (CCA), common metrics, company-specific metrics, unified metric representation (ID#: 15-8581)

URL: http://doi.acm.org/10.1145/2786805.2786813
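
The UMR construction is straightforward to sketch: arrange metrics as [common | source-specific | target-specific] and zero-fill the slots a company does not collect, then apply CCA to align the two views. The sketch below exercises that idea with random data; the row pairing passed to CCA is a simplification of the paper's training protocol.

```python
# Sketch of the unified metric representation (UMR) plus CCA step: pad each
# company's data with zeros for the other company's specific metrics, then
# use canonical correlation analysis to align the two distributions.
# Shapes and data are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n_src, n_tgt = 100, 80
common, src_only, tgt_only = 4, 3, 2          # metric-set sizes

src = rng.normal(size=(n_src, common + src_only))
tgt = rng.normal(size=(n_tgt, common + tgt_only))

# UMR layout: [common | source-specific | target-specific], zero-filled
# where a company does not collect a metric.
umr_src = np.hstack([src, np.zeros((n_src, tgt_only))])
umr_tgt = np.hstack([tgt[:, :common], np.zeros((n_tgt, src_only)), tgt[:, common:]])

# CCA needs paired views; pair the first min(n_src, n_tgt) rows here just to
# exercise the API (the paper's pairing/training protocol is different).
k = min(n_src, n_tgt)
cca = CCA(n_components=2).fit(umr_src[:k], umr_tgt[:k])
src_proj, tgt_proj = cca.transform(umr_src[:k], umr_tgt[:k])
print("projected shapes:", src_proj.shape, tgt_proj.shape)
```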

 

Christoffer Rosen, Ben Grawi, Emad Shihab; “Commit Guru: Analytics and Risk Prediction of Software Commits;” ESEC/FSE 2015 Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, August 2015, Pages 966-969. doi: 10.1145/2786805.2803183

Abstract: Software quality is one of the most important research sub-areas of software engineering. Hence, a plethora of research has focused on the prediction of software quality. Much of the software analytics and prediction work has proposed metrics, models and novel approaches that can predict quality with high levels of accuracy. However, adoption of such techniques remains low; one of the reasons for this low adoption of current analytics and prediction techniques is the lack of actionable and publicly available tools. We present Commit Guru, a language-agnostic analytics and prediction tool that identifies and predicts risky software commits. Commit Guru is publicly available and is able to mine any Git SCM repository. Analytics are generated at both the project and commit levels. In addition, Commit Guru automatically identifies risky (i.e., bug-inducing) commits and builds a prediction model that assesses the likelihood of a recent commit introducing a bug in the future. Finally, to facilitate future research in the area, users of Commit Guru can download the data for any project that is processed by Commit Guru with a single click. Several large open source projects have been successfully processed using Commit Guru. Commit Guru is available online at commit.guru. Our source code is also released freely under the MIT license.

Keywords: Risky Software Commits, Software Analytics, Software Metrics, Software Prediction (ID#: 15-8582)

URL: http://doi.acm.org/10.1145/2786805.2803183
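
In the spirit of Commit Guru, commit-level risk prediction can be sketched as a logistic regression over simple change metrics labeled by whether a commit later induced a bug. The features, data, and model below are illustrative; they are not the tool's actual feature set or learner.

```python
# Sketch of commit-level risk prediction: train a logistic regression on
# simple change metrics (lines added/deleted, files touched, author
# experience) labeled by whether the commit later induced a bug.
# Features and data are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([
    rng.poisson(50, n),       # lines added
    rng.poisson(20, n),       # lines deleted
    rng.poisson(3, n),        # files touched
    rng.integers(1, 100, n),  # prior commits by author
])
# Fabricated labels: bigger, wider changes by newer authors are riskier.
risk = 0.01 * X[:, 0] + 0.2 * X[:, 2] - 0.02 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_commit = [[300, 80, 12, 2]]   # large commit, new contributor
print(f"P(bug-inducing) ~ {model.predict_proba(new_commit)[0, 1]:.2f}")
```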

 

Junhyung Moon, Kyoungwoo Lee; “Spatio-Temporal Visual Security Metric for Secure Mobile Video Applications;” MoVid '15 Proceedings of the 7th ACM International Workshop on Mobile Video, March 2015, Pages 9-14. doi: 10.1145/2727040.2727047

Abstract: With the widespread adoption of mobile and wearable devices, various mobile video applications are emerging. Some of those applications contain sensitive data, such as military information, and need to be protected from anonymous intruders. Thus, several video encryption techniques have been proposed. Accordingly, it has become essential to evaluate the visual security of encrypted videos. Several techniques have attempted to evaluate the visual security in the spatial domain but failed to capture it in the temporal domain. Thus, we present a temporal visual security metric and consequently propose a spatio-temporal visual security metric by combining ours with an existing metric which evaluates the spatial visual security. Our experimental results demonstrate that our proposed metrics appropriately evaluate temporal distortion as well as spatial distortion of encrypted videos while ensuring high correlation with subjective evaluation scores. Further, we examine the tradeoff between the energy consumption of mobile video encryption techniques and the visual security of encrypted videos. This tradeoff study is useful in determining the right encryption technique that satisfies the energy budget for secure mobile video applications.

Keywords: metric, spatio-temporal, spatio-temporal metric, video encryption, visual quality, visual security (ID#: 15-8583)

URL:  http://doi.acm.org/10.1145/2727040.2727047
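
A rough sketch of the spatio-temporal idea: measure spatial distortion as the per-frame difference between original and encrypted frames, measure temporal distortion as the difference in frame-to-frame motion between the two sequences, and combine the two. The weighted-mean combination below is an assumption, not the paper's metric.

```python
# Sketch of combining spatial and temporal visual-security signals for an
# encrypted video. The combination rule (a simple weighted mean) and all
# data are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(4)
orig = rng.random((10, 64, 64))                              # 10 grayscale frames
enc = np.clip(orig + rng.normal(0, 0.3, orig.shape), 0, 1)   # "encrypted" frames

# Spatial: mean absolute per-frame difference.
spatial = np.mean([np.abs(o - e).mean() for o, e in zip(orig, enc)])
# Temporal: difference in frame-to-frame motion between the two sequences.
motion_o = np.abs(np.diff(orig, axis=0)).mean(axis=(1, 2))
motion_e = np.abs(np.diff(enc, axis=0)).mean(axis=(1, 2))
temporal = np.abs(motion_o - motion_e).mean()

w = 0.5  # assumed weighting between the two components
print(f"spatial={spatial:.3f}, temporal={temporal:.3f}, "
      f"combined score={w * spatial + (1 - w) * temporal:.3f}")
```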

 

Niketa Gupta, Deepali Panwar, Ashish Sharma; “Modeling Structural Model for Defect Categories Based On Software Metrics for Categorical Defect Prediction;” ICCCT '15 Proceedings of the Sixth International Conference on Computer and Communication Technology 2015, September 2015, Pages 46-50. doi: 10.1145/2818567.2818576

Abstract: Software defect prediction is a pre-eminent area of software engineering which has witnessed huge importance over the last decades. The identification of defects in the early stages of software development not only improves the quality of the software system but also reduces the time, cost and effort associated with maintaining the quality of the software product. The quality of software can be best assessed by software metrics, and a number of software metrics have been proposed for this purpose. Many research studies have been conducted to construct prediction models that consider the CK (Chidamber and Kemerer) metrics suite and object-oriented software metrics. For prediction model development, consideration of the interaction among the metrics is not a common practice. This paper presents an empirical evaluation in which several software metrics were investigated in order to identify the effective set of metrics for each defect category, which can significantly improve the defect prediction model made for each defect category. For each of the metrics, the Pearson correlation coefficient with the number of defects in each category was calculated, and subsequently a stepwise regression model was constructed for each defect category to predict the set of metrics that are good indicators of that category. We have further proposed a novel approach for modeling the defects using structural equation modeling, which validates our work. Structural models were built for each defect category using structural equation modeling.

Keywords: Defect Prediction, Software Metrics, Stepwise regression model, Structural Equation Modeling (ID#: 15-8584)

URL: http://doi.acm.org/10.1145/2818567.2818576

 

Meriem Laifa, Samir Akrouf, Ramdane Maamri; “Online Social Trust: an Overview;” IPAC '15 Proceedings of the International Conference on Intelligent Information Processing, Security and Advanced Communication, November 2015, Article No. 9. doi: 10.1145/2816839.2816912

Abstract: There is a wealth of information created every day through computer-mediated communications. Trust is an important component in sustaining successful interactions and filtering the overflow of information. The concept of trust is widely used in computer science in various contexts and for different aims. This variety can confuse or mislead new researchers who are interested in the trust concept but not familiar enough with it to find work relevant to their projects. Therefore, in this paper we give an overview of online trust by focusing on its social aspect, and we classify important reviewed work in an attempt to guide new researchers in this domain and facilitate the first steps of their research projects. Based on previous trust surveys, we considered the following criteria: (1) the trust dimension and its research purpose, (2) the trusted context, and (3) the application domain in which trust is applied.

Keywords: Trust, classification, metrics, online social network (ID#: 15-8585)

URL: http://doi.acm.org/10.1145/2816839.2816912

 

Ben Stock, Stephan Pfistner, Bernd Kaiser, Sebastian Lekies, Martin Johns; “From Facepalm to Brain Bender: Exploring Client-Side Cross-Site Scripting;” CCS '15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, October 2015, Pages 1419-1430. doi: 10.1145/2810103.2813625

Abstract: Although studies have shown that at least one in ten Web pages contains a client-side XSS vulnerability, the prevalent causes for this class of Cross-Site Scripting have not been studied in depth. Therefore, in this paper, we present a large-scale study to gain insight into these causes. To this end, we analyze a set of 1,273 real-world vulnerabilities contained on the Alexa Top 10k domains using a specifically designed architecture, consisting of an infrastructure which allows us to persist and replay vulnerabilities to ensure a sound analysis. In combination with a taint-aware browsing engine, we can therefore collect important execution trace information for all flaws. Based on the observable characteristics of the vulnerable JavaScript, we derive a set of metrics to measure the complexity of each flaw. We subsequently classify all vulnerabilities in our data set accordingly to enable a more systematic analysis. In doing so, we find that although a large portion of all vulnerabilities have a low complexity rating, several incur a significant level of complexity and are repeatedly caused by vulnerable third-party scripts. In addition, we gain insights into other factors related to the existence of client-side XSS flaws, such as missing knowledge of browser-provided APIs, and find that the root causes for Client-Side Cross-Site Scripting range from unaware developers to incompatible first- and third-party code.

Keywords: analysis, client-side XSS, complexity metrics (ID#: 15-8586)

URL: http://doi.acm.org/10.1145/2810103.2813625

 

Daniel Vecchiato, Marco Vieira, Eliane Martins; “A Security Configuration Assessment for Android Devices;” SAC '15 Proceedings of the 30th Annual ACM Symposium on Applied Computing, April 2015, Pages 2299-2304. doi: 10.1145/2695664.2695679

Abstract: The wide spread of mobile devices, such as smartphones and tablets, and their ever-advancing capabilities make them an attractive target for attackers. This, together with the fact that users frequently store critical personal information on such devices and that many organizations currently allow employees to use their personal devices to access the enterprise information infrastructure and applications, makes the assessment of the security of mobile devices a key issue. This paper proposes an approach, supported by a tool, that allows assessing the security of Android devices based on user-defined settings, which are known to be a key source of security vulnerabilities. The tool automatically extracts 41 settings from the mobile devices under testing, 14 of which are defined and proposed in this work and the remaining adapted from the well-known CIS benchmarks. The paper discusses the settings that are analyzed, describes the overall architecture of the tool, and presents a preliminary evaluation that demonstrates the importance of this type of tool as a foundation for assessing the security of mobile devices.

Keywords: android security, mobile device, security assessment (ID#: 15-8587)

URL:  http://doi.acm.org/10.1145/2695664.2695679


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.