Software Assurance 2015

 

 

Software assurance is an essential element in the development of scalable and composable systems. For a complete system to be secure, each subassembly must be secure, and that security must be measurable. The research cited here examines software assurance metrics and was presented in 2015.


D. Kumar and M. Kumari, "Component Based Software Engineering: Quality Assurance Models, Metrics," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359358

Abstract: Component based software engineering is a recent trend in software development. The fundamental idea is to reuse already completed components rather than developing everything from scratch each time. Component based development brings many advantages: faster development, lower development costs, better usability, and so on. It is, however, not yet a mature process, and many issues remain. For instance, when you purchase a component you do not know its exact behavior, and you have no control over its maintenance. To develop component based products successfully, organizations must introduce new development methods. We highlight quality assurance for component based software. In this paper we propose a QAM of CBM that covers CRA, CD, certification, customization, SAD, SI, ST, and SM.

Keywords: object-oriented programming; quality assurance; software engineering; software metrics; CBM; CD; CRA; QAM; SAD; SM; ST; SI; certification; component based improvement; component based software engineering; quality assurance models; software advancement; software metrics; Quadrature amplitude modulation; Reliability; Component based software engineering; Metrics; Quality Assurance Characteristics; Quality Assurance Models; life cycle (ID#: 16-9409)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359358&isnumber=7359191

 

C. Woody, R. Ellison and W. Nichols, "Predicting Cybersecurity Using Quality Data," Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-5. doi: 10.1109/THS.2015.7225327

Abstract: Within the process of system development and implementation, programs assemble hundreds of different metrics for tracking and monitoring software such as budgets, costs and schedules, contracts, and compliance reports. Each contributes, directly or indirectly, toward the cybersecurity assurance of the results. The Software Engineering Institute has detailed size, defect, and process data on over 100 software development projects. The projects include a wide range of application domains. Data from five projects identified as successful safety-critical or security-critical implementations were selected for cybersecurity consideration. Material was analyzed to identify a possible correlation between modeling quality and security and to identify potential predictive cybersecurity modeling characteristics. While not a statistically significant sample, this data indicates the potential for establishing benchmarks for ranges of quality performance (for example, defect injection rates and removal rates and test yields) that provide a predictive capability for cybersecurity results.

Keywords: safety-critical software; security of data; software quality; system monitoring; Software Engineering Institute; cybersecurity assurance; cybersecurity consideration; predictive capability; predictive cybersecurity modeling characteristics; programs assemble; quality data; quality performance; safety-critical implementation; security-critical implementation; software development project; software monitoring; software tracking; system development; Contracts; Safety; Schedules; Software; Software measurement; Testing; Topology; engineering security; quality modeling; security predictions; software assurance (ID#: 16-9410)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7225327&isnumber=7190491

 

A. Delaitre, B. Stivalet, E. Fong and V. Okun, "Evaluating Bug Finders -- Test and Measurement of Static Code Analyzers," Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, Florence, 2015, pp. 14-20. doi: 10.1109/COUFLESS.2015.10

Abstract: Software static analysis is one of many options for finding bugs in software. Like compilers, static analyzers take a program as input. This paper covers tools that examine source code - without executing it - and output bug reports. Static analysis is a complex and generally undecidable problem. Most tools resort to approximation to overcome these obstacles and it sometimes leads to incorrect results. Therefore, tool effectiveness needs to be evaluated. Several characteristics of the tools should be examined. First, what types of bugs can they find? Second, what proportion of bugs do they report? Third, what percentage of findings is correct? These questions can be answered by one or more metrics. But to calculate these, we need test cases having certain characteristics: statistical significance, ground truth, and relevance. Test cases with all three attributes are out of reach, but we can use combinations of only two to calculate the metrics. The results in this paper were collected during Static Analysis Tool Exposition (SATE) V, where participants ran 14 static analyzers on the test sets we provided and submitted their reports to us for analysis. Tools had considerably different support for most bug classes. Some tools discovered significantly more bugs than others or generated mostly accurate warnings, while others reported wrong findings more frequently. Using the metrics, an evaluator can compare candidates and select the tool that aligns best with his or her objectives. In addition, our results confirm that the bugs most commonly found by tools are among the most common and important bugs in software. We also observed that code complexity is a major hindrance for static analyzers and detailed which code constructs tools handle well and which impede their analysis.

Keywords: program debugging; program diagnostics; program testing; SATE V; bug finder evaluation; code complexity; ground truth; software static analysis; static analysis tool exposition V; static code analyzer measurement; static code analyzer testing; statistical significance; Complexity theory; Computer bugs; Java; Measurement; NIST; Production; Software; software assurance; software faults; software vulnerability; static analysis tools (ID#: 16-9411)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7181477&isnumber=7181467
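The three tool-evaluation questions posed in this abstract reduce to standard measures. A minimal sketch with hypothetical warning data (illustrative only, not SATE V results) shows how precision and recall are computed from a tool's warning set and a ground-truth bug set:

```python
# Precision and recall of a hypothetical static analyzer, computed
# against a small ground-truth bug set (illustrative numbers only).

def precision_recall(reported, ground_truth):
    """reported: set of (file, line) warnings; ground_truth: set of real bugs."""
    true_positives = reported & ground_truth
    precision = len(true_positives) / len(reported) if reported else 0.0
    recall = len(true_positives) / len(ground_truth) if ground_truth else 0.0
    return precision, recall

ground_truth = {("a.c", 10), ("a.c", 42), ("b.c", 7), ("c.c", 3)}
reported = {("a.c", 10), ("b.c", 7), ("b.c", 99)}  # one false positive

p, r = precision_recall(reported, ground_truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```

Computing both per bug class, as SATE V did, then shows which weakness categories each tool supports well.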

 

C. Tantithamthavorn, S. McIntosh, A. E. Hassan, A. Ihara and K. Matsumoto, "The Impact of Mislabelling on the Performance and Interpretation of Defect Prediction Models," Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, Florence, 2015, pp. 812-823. doi: 10.1109/ICSE.2015.93

Abstract: The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random, and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.

Keywords: software performance evaluation; software reliability; Apache Jackrabbit system; Lucene system; defect prediction model interpretation; defect prediction model performance; defect prediction models; mislabelling impact; prediction model reliability; randomly-injected noise; software modules; Data mining; Data models; Noise; Noise measurement; Predictive models; Software; Data Quality; Mislabelling; Software Defect Prediction; Software Quality Assurance (ID#: 16-9412)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7194628&isnumber=7194545

 

Sun-Jan Huang, Wen-Chuan Chen and Ping-Yao Chiu, "Evaluation Process Model of the Software Product Quality Levels," Industrial Informatics - Computing Technology, Intelligent Technology, Industrial Information Integration (ICIICII), 2015 International Conference on, Wuhan, 2015, pp. 55-58. doi: 10.1109/ICIICII.2015.101

Abstract: The software industry still faces many problems in controlling and evaluating software product quality. One of the primary reasons is that, in addition to lacking an objective software product quality assessment model, software organizations do not have a well-defined mechanism for measuring quality attributes and further evaluating the level of software product quality. This paper proposes a process model for evaluating the level of software product quality, based on the International Standard ISO/IEC 14598 - Software Product Evaluation. The proposed process model can generate a tailored software product quality evaluation model based on the type of information system. Accordingly, the required software measures are collected and analyzed to provide feedback for improving software product quality. The model can help software development organizations establish their own evaluation models of the software product quality level and thus serve as an agreement on software product quality requirements.

Keywords: quality control; software development management; software metrics; software quality; software standards; International Standard ISO/IEC 14598; evaluation process model; information system; objective software product quality assessment model; quality attribute measurement; quality control; quality evaluation; software development; software industry; software product quality level; IEC Standards; ISO Standards; Measurement; Organizations; Product design; Quality assessment; Software; Measurement and Analysis; Software Product Quality Level; Software Quality Assurance (ID#: 16-9413)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7373789&isnumber=7373746

 

Luyi Li, Minyan Lu and Tingyang Gu, "Constructing Runtime Models of Complex Software-Intensive Systems for Analysis of Failure Mechanism," Reliability Systems Engineering (ICRSE), 2015 First International Conference on, Beijing, 2015, pp. 1-10. doi: 10.1109/ICRSE.2015.7366482

Abstract: With the growing complexity of complex software-intensive systems, new features emerge, such as logical complexity, boundary erosion and failure normalization, which bring new challenges for software dependability assurance. As a result, there is an urgent need to analyze the failure mechanism of these systems in order to ensure their dependability. Research indicates that because of these emerging features, the failure mechanism of complex software-intensive systems is closely related to the system's runtime states and behaviors. Direct analysis of the failure mechanism on actual complex software-intensive systems is costly and nearly impossible because of their large scale, so failure mechanism analysis is normally performed on abstract models of real systems. However, current modelling methods are insufficient for comprehensively describing a system's internal interactions, software/hardware interaction behavior, and runtime behavior, so a new modelling method is needed to support the description of these new features. This paper proposes a method for constructing runtime models of complex software-intensive systems which takes into consideration internal interaction behavior, interaction behavior between software and hardware on the system boundary, and dynamic runtime behavior. The proposed method includes a static structure model to describe the static structural properties of the system, a software/hardware interaction model to describe the interaction characteristics between hardware and software on the system boundary, and a dynamic runtime behavior model to formally describe the dynamic features of runtime behavior. An example demonstrates how to use the proposed method, and its implications for failure mechanism analysis in complex software-intensive systems are discussed.

Keywords: program diagnostics; software metrics; software reliability; system recovery; abstract model; boundary erosion; complex software-intensive system; dynamic runtime behavior model; failure mechanism analysis; failure normalization; internal interaction behavior; logical complexity; runtime model; software dependability assurance; software-hardware interaction behavior; software-hardware interaction model; static structure model; system boundary; system internal interaction; system runtime behavior; system runtime state; Analytical models; Failure analysis; Object oriented modeling; Runtime; Software; Unified modeling language; dynamic runtime behavior; failure mechanism; interaction between software and hardware; runtime model; static structure (ID#: 16-9414)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7366482&isnumber=7366393

 

R. E. Garcia, R. C. Messias Correia, C. Olivete, A. Costacurta Brandi and J. Marques Prates, "Teaching and Learning Software Project Management: A Hands-on Approach," Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-7. doi: 10.1109/FIE.2015.7344412

Abstract: Project management is an essential activity across several areas, including Software Engineering. Through good management it is possible to meet deadlines and budget goals and, above all, to deliver a product that meets customer expectations. Project management activity encompasses: measurement and metrics; estimation; risk analysis; schedules; tracking and control. Considering the importance of managing projects, courses related to Information Technology and Computer Science must present to students the concepts, techniques and methodologies necessary to cover all project management activities. Software project management courses aim at preparing students to apply the management techniques required to plan, organize, monitor and control software projects. In a nutshell, software project management focuses on process, problem and people. In this paper we propose an approach to teaching and learning software project management using practical activities, with the intention of providing the experience of applying theoretical concepts in practice. The approach, applied since 2006 in a Computer Science course, is based on teamwork. Each team is divided into groups assuming different roles in software process development. We have set four groups, each assuming a different role (manager; software quality assurance; analyst and designer; programmer). The team must be guided across the software process by its manager. We use four projects, and each group is in charge of managing a different project. In this paper we present the proposed approach (based on hands-on activities for project management), summarize the lessons learned from applying it since 2006, and present a qualitative analysis of data collected during its application.

Keywords: computer science education; educational courses; project management; risk analysis; scheduling; software management; software metrics; teaching; team working; analyst; computer science course; control; designer; estimation; hands-on approach; information technology; learning software project management course; manager; measurement; metrics; programmer; qualitative analysis; risk analysis; schedule; software engineering; software process development; software quality assurance; teaching; teamwork; tracking; Education; Monitoring; Project management; Schedules; Software; Software engineering; Learning Project Management; Practical Activities; Teaching Methodology; Teamwork (ID#: 16-9415)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7344412&isnumber=7344011

 

C. J. Hwang, A. Kush and Ruchika, "Performance Evaluation of MANET Using Quality of Service Metrics," Innovative Computing Technology (INTECH), 2015 Fifth International Conference on, Galicia, 2015, pp. 130-135. doi: 10.1109/INTECH.2015.7173483

Abstract: An ad hoc network is a collection of mobile nodes dynamically forming a temporary network without the use of any existing network infrastructure or centralized administration. Several routing protocols have been proposed for ad hoc networks, prominent among them Ad hoc On-Demand Distance Vector (AODV) routing and Dynamic Source Routing (DSR). An effort has been made to apply software quality assurance parameters to ad hoc networks to achieve the desired results. This paper analyses the performance of the AODV and DSR routing protocols against quality assurance metrics. The performance differentials of the AODV and DSR protocols are analyzed using the NS-2 simulator and compared in terms of the quality assurance metrics applied.

Keywords: mobile ad hoc networks; quality assurance; quality of service; routing protocols; software quality; AODV routing protocols; DSR routing protocols; MANET performance evaluation; NS-2 simulator; ad hoc on demand distance vector routing; dynamic source routing; mobile ad hoc network; mobile node collection; network infrastructure; quality of service metrics; software quality assurance parameter; temporary network; Mobile ad hoc networks; Reliability; Routing; Routing protocols; Usability; AODV; DSR; MANET; NS2; PDR; SQA (ID#: 16-9416)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7173483&isnumber=7173359
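Two of the metrics listed in the keywords above, packet delivery ratio (PDR) and end-to-end delay, can be sketched over a simplified trace (hypothetical records, not the NS-2 trace format):

```python
# Packet delivery ratio and mean end-to-end delay from a simplified
# trace: packet id -> send timestamp and packet id -> receive timestamp.

def pdr_and_delay(sent, received):
    """sent/received: dicts mapping packet id -> timestamp (seconds)."""
    delivered = [pid for pid in sent if pid in received]
    pdr = len(delivered) / len(sent) if sent else 0.0
    delays = [received[pid] - sent[pid] for pid in delivered]
    mean_delay = sum(delays) / len(delays) if delays else 0.0
    return pdr, mean_delay

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.18, 4: 0.42}  # packet 3 was dropped

pdr, delay = pdr_and_delay(sent, received)
print(f"PDR={pdr:.2f} mean delay={delay*1000:.0f} ms")
```

A protocol comparison like the paper's would compute these per scenario (node count, mobility, load) and compare the resulting curves.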

 

T. Bin Noor and H. Hemmati, "A Similarity-Based Approach for Test Case Prioritization Using Historical Failure Data," Software Reliability Engineering (ISSRE), 2015 IEEE 26th International Symposium on, Gaithersburg, MD, 2015, pp. 58-68. doi: 10.1109/ISSRE.2015.7381799

Abstract: Test case prioritization is a crucial element of software quality assurance in practice, especially in the context of regression testing. Typically, test cases are prioritized so that they detect potential faults earlier. The effectiveness of test cases, in terms of fault detection, is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in subsequent releases, and they are therefore ranked highly during prioritization. However, in practice, a failing test case may not be exactly the same as a previously failed test case, but quite similar, e.g., when the new failing test is a slightly modified version of an old failing one written to catch an undetected fault. In this paper, we define a class of metrics that estimate test case quality using similarity to previously failing test cases. We have conducted several experiments with five real-world open source software systems, with real faults, to evaluate the effectiveness of these quality metrics. The results of our study show that our proposed similarity-based quality measure is significantly more effective for prioritizing test cases than existing test case quality measures.

Keywords: fault diagnosis; program testing; public domain software; quality assurance; regression analysis; software metrics; software quality; statistical testing; code coverage; code size; historical failure data; historical fault detection; open source software systems; regression testing; similarity-based quality measure; software quality assurance; software quality metrics; test case prioritization; Context; Fault detection; History; Measurement; Software quality; Testing; Code coverage; Distance function; Execution trace; Historical data; Similarity; Test case prioritization; Test quality metric; Test size (ID#: 16-9417)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7381799&isnumber=7381793
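The similarity idea can be illustrated with execution traces and a Jaccard measure against previously failing tests (hypothetical data and a deliberately simple distance; the paper's actual metrics and distance functions differ in detail):

```python
# Rank test cases by similarity of their execution traces (sets of
# covered methods) to previously failing tests: tests that resemble
# past failures run first. Illustrative sketch, not the paper's tooling.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def prioritize(tests, past_failures):
    """tests: {name: trace set}; past_failures: list of trace sets.
    Score each test by its max similarity to any past failing trace."""
    score = {name: max((jaccard(trace, f) for f in past_failures), default=0.0)
             for name, trace in tests.items()}
    return sorted(tests, key=lambda name: score[name], reverse=True)

past_failures = [{"parse", "validate", "save"}]
tests = {
    "t_parse_bad_input": {"parse", "validate"},
    "t_render":          {"render", "layout"},
    "t_save_roundtrip":  {"parse", "validate", "save", "load"},
}
print(prioritize(tests, past_failures))
```

Here `t_save_roundtrip` ranks first because its trace overlaps the past failure most, which is the intuition behind ranking near-duplicates of old failing tests highly.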

 

H. Sharma and A. Chug, "Dynamic Metrics Are Superior Than Static Metrics in Maintainability Prediction: An Empirical Case Study," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359354

Abstract: Software metrics help us to make meaningful estimates for software products and guide managerial and technical decisions such as budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. Many design metrics have been proposed in the literature to measure various constructs of the Object Oriented (OO) paradigm, such as class, coupling, cohesion, inheritance, information hiding and polymorphism, and to use them in determining various aspects of software quality. However, conventional static metrics have been found to be inadequate for modern OO software due to the presence of run-time polymorphism, template classes, template methods, dynamic binding, and code left unexecuted under specific input conditions. This gap motivates the use of dynamic metrics instead of traditional static metrics to capture software characteristics and deploy them for maintainability prediction. As dynamic metrics are more precise in capturing the execution behavior of the software system, in the current empirical investigation, using open source code, we validate and verify the superiority of dynamic metrics over static metrics. Four machine learning models are used to build the prediction model, with training performed using both the static and the dynamic metric suite. The results are analyzed using prevalent prediction accuracy measures, which indicate that the predictive capability of dynamic metrics exceeds that of static metrics irrespective of the machine learning prediction model. These results would be helpful to practitioners, who can use dynamic metrics in maintainability prediction to achieve precise planning of resource allocation.

Keywords: learning (artificial intelligence); object-oriented methods; public domain software; resource allocation; software maintenance; software metrics; software quality; OO software; design metrics; dynamic binding; dynamic metrics; machine learning prediction model; maintainability prediction; object oriented paradigm; open source code; prevalent prediction accuracy; resource allocation; run time polymorphism; software characteristics; software metrics; software product estimation; software quality; static metrics; template class; template methods; Dynamic metrics; Machine learning; Software maintainability prediction; Software quality; Static metrics (ID#: 16-9418)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7359354&isnumber=7359191

 

T. Cseri, "Examining Structural Correctness of Documentation Comments in C++ Programs," Scientific Conference on Informatics, 2015 IEEE 13th International, Poprad, 2015, pp. 79-84. doi: 10.1109/Informatics.2015.7377812

Abstract: Tools guaranteeing the correctness of software focus almost exclusively on the syntax and the semantics of programming languages. Compilers, static analysis tools, etc. generate diagnostic messages on inconsistencies of the language elements. However, source code contains other important artifacts: comments, which are highly important to document, understand and therefore maintain the software. It is a common experience that the quality of comments erodes during the lifecycle of the software. In this paper we investigate the quality of the documentation comments, which follow a predefined strict syntax, because they are written to be processed using an external documentation generator tool. We categorize the inconsistencies identified by Doxygen - the most widespread documentation tool for C++. We define a metric to represent the quality of the comments and we investigate how this metric changes during the lifetime of a project. The aim of the research is to provide quality assurance for the non-language components of the software.

Keywords: C++ language; computational linguistics; software metrics; software quality; system documentation; C++ programs; Doxygen; comments quality metric; documentation comments quality; documentation comments structural correctness; external documentation generator tool; quality assurance; software nonlanguage components; syntax; Documentation; Generators; HTML; Semantics; Software; Syntactics (ID#: 16-9419)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7377812&isnumber=7377797
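A comment-quality metric of the kind described can be sketched as the fraction of documented functions whose @param tags match the actual parameter names; the toy checker below is far simpler than Doxygen's analysis and handles only flat C++ declarations:

```python
import re

def comment_params(comment):
    """Names documented with \\param or @param in a Doxygen-style comment."""
    return re.findall(r"[\\@]param\s+(\w+)", comment)

def decl_params(decl):
    """Parameter names from a simple C++ declaration (toy parser)."""
    inner = re.search(r"\(([^)]*)\)", decl).group(1)
    if not inner.strip():
        return []
    return [p.split()[-1].lstrip("*&") for p in inner.split(",")]

def quality(entries):
    """entries: list of (comment, declaration) pairs. Metric = fraction
    whose documented parameter names exactly match the declaration."""
    ok = sum(comment_params(c) == decl_params(d) for c, d in entries)
    return ok / len(entries) if entries else 1.0

entries = [
    ("/** @param x input value\n    @param out result */",
     "void scale(double x, double *out);"),
    ("/** @param n size */",              # stale: parameter renamed to 'count'
     "void resize(int count);"),
]
print(quality(entries))  # 0.5
```

Tracking this fraction across revisions, as the paper does for its metric, makes comment erosion visible over a project's lifetime.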

 

N. Ohsugi et al., "Using Trac for Empirical Data Collection and Analysis in Developing Small and Medium-Sized Enterprise Systems," Empirical Software Engineering and Measurement (ESEM), 2015 ACM/IEEE International Symposium on, Beijing, 2015, pp. 1-9. doi: 10.1109/ESEM.2015.7321217

Abstract: This paper describes practical case studies of using Trac as a platform for collecting empirical data in the development of small and medium-sized enterprise systems. Project managers use various empirical data, such as size, development effort and the number of bugs found. These data are vital for management, although the cost of preparing an effective combination of measurement tools, procedures and continuous monitoring to collect reliable data is not small, and many small and medium-sized projects are constrained by budget limitations. This paper describes practical examples of low-cost data collection in the development of two enterprise systems: a small project (5 development personnel at the peak period, down to 3 during maintenance) and a medium-sized project (80 personnel at the peak, down to 28). Over 29 months, ten basic metrics and seven derived metrics were collected regarding effort, size and quality, and were used for progress management, estimation, and quality assurance.

Keywords: data analysis; small-to-medium enterprises; Trac; data analysis; empirical data collection; progress management; small and medium-sized enterprise systems; Data collection; Estimation; Maintenance engineering; Measurement; Monitoring; Personnel; Reliability (ID#: 16-9420)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321217&isnumber=7321177

 

R. Yanggratoke et al., "Predicting Real-Time Service-Level Metrics From Device Statistics," Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, Ottawa, ON, 2015, pp. 414-422. doi: 10.1109/INM.2015.7140318

Abstract: While real-time service assurance is critical for emerging telecom cloud services, understanding and predicting performance metrics for such services is hard. In this paper, we pursue an approach based upon statistical learning whereby the behavior of the target system is learned from observations. We use methods that learn from device statistics and predict metrics for services running on these devices. Specifically, we collect statistics from a Linux kernel of a server machine and predict client-side metrics for a video-streaming service (VLC). The fact that we collect thousands of kernel variables, while omitting service instrumentation, makes our approach service-independent and unique. While our current lab configuration is simple, our results, gained through extensive experimentation, prove the feasibility of accurately predicting client-side metrics, such as video frame rates and RTP packet rates, often within 10-15% error (NMAE), also under high computational load and across traces from different scenarios.

Keywords: Linux; cloud computing; operating system kernels; software performance evaluation; video streaming; Linux kernel; VLC; client-side metrics prediction; device statistics; performance metrics; real-time service assurance; real-time service-level metrics prediction; server machine; service instrumentation; statistical learning; telecom cloud services; video-streaming service; Computational modeling; Generators; Load modeling; Measurement; Predictive models; Servers; Streaming media; Quality of service; cloud computing; machine learning; network analytics; statistical learning; video streaming (ID#: 16-9421)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140318&isnumber=7140257
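The learning setup can be sketched with a one-variable stand-in: fit a regression from a device statistic to a client-side metric and score it with the normalized mean absolute error (NMAE) used in the paper. The numbers below are synthetic, nothing like the thousands of kernel variables actually collected:

```python
# One-feature stand-in for the paper's setup: predict a client-side
# metric (video frame rate) from a device statistic via least squares,
# then score with normalized mean absolute error (NMAE).

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def nmae(actual, predicted):
    """Mean absolute error normalized by the mean of the actual values."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / (sum(actual) / len(actual))

# synthetic training data: CPU load (%) vs. observed frame rate (fps)
load = [10, 20, 40, 60, 80]
fps = [30.0, 29.0, 26.5, 23.0, 19.5]

slope, intercept = fit_line(load, fps)
pred = [slope * x + intercept for x in load]
print(f"NMAE = {nmae(fps, pred):.3f}")
```

The paper's 10-15% NMAE figure corresponds to values of 0.10-0.15 on this scale; its models are of course multivariate and evaluated on held-out traces rather than training data.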

 

W. C. Barott, T. Dabrowski and B. Himed, "Fidelity and Complexity in Passive Radar Simulations," High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, Daytona Beach Shores, FL, 2015, pp. 277-278. doi: 10.1109/HASE.2015.30

Abstract: A case study of the trade-off between fidelity and complexity is presented for a passive radar simulator. Although it is possible to accurately model the underlying physics, signal processing, and environment of a radar, the resulting model may be both too complex and too costly to evaluate. Instead, simplifying various model attributes reduces complexity and permits fast evaluation of performance metrics over large areas, such as the United States. Several model simplifications and their impact on the results are discussed.

Keywords: digital simulation; passive radar; radar computing; United States; complexity; complexity reduction; fidelity; passive radar simulations; radar environment; signal processing; Accuracy; Atmospheric modeling; Computational modeling; Passive radar; Predictive models; modeling; passive radar; simulation (ID#: 16-9422)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027444&isnumber=7027398

 

E. Takamura, K. Mangum, F. Wasiak and C. Gomez-Rosa, "Information Security Considerations for Protecting NASA Mission Operations Centers (MOCs)," Aerospace Conference, 2015 IEEE, Big Sky, MT, 2015, pp. 1-14. doi: 10.1109/AERO.2015.7119207

Abstract: In NASA space flight missions, the Mission Operations Center (MOC) is often considered “the center of the (ground segment) universe,” at least by those involved with ground system operations. It is at and through the MOC that the spacecraft is commanded and controlled, and science data are acquired. This critical element of the ground system must be protected to ensure the confidentiality, integrity and availability of the information and information systems supporting mission operations. This paper identifies and highlights key information security aspects affecting MOCs that should be taken into consideration when reviewing and/or implementing protective measures in and around MOCs. It stresses the need for compliance with information security regulations and mandates, and the need to reduce IT security risks that can have a negative impact on the mission if not addressed. This compilation of key security aspects was derived from numerous observations, findings, and issues discovered in IT security audits the authors have conducted on NASA mission operations centers in the past few years. It is not a recipe for securing MOCs, but rather an insight into key areas that must be secured to strengthen the MOC and enable mission assurance. Most concepts and recommendations in the paper can be applied to non-NASA organizations as well. Finally, the paper emphasizes the importance of integrating information security into the MOC development life cycle, as configuration, risk and other management processes are tailored to support the delicate environment in which mission operations take place.

Keywords: aerospace computing; command and control systems; data integrity; information systems; risk management; security of data; space vehicles; IT security audits; IT security risk reduction; MOC development life cycle; NASA MOC protection; NASA mission operation center protection; NASA space flight missions; ground system operations; information availability; information confidentiality; information integrity; information security considerations; information security regulation; information systems; nonNASA organizations; spacecraft command and control; Access control; Information security; Monitoring; NASA; Software; IT security metrics; NASA; access control; asset protection; automation; change control; connection protection; continuous diagnostics and mitigation; continuous monitoring; ground segment ground system; incident handling; information assurance; information security; information security leadership; information technology leadership; infrastructure protection; least privilege; logical security; mission assurance; mission operations; mission operations center; network security; personnel screening; physical security; policies and procedures; risk management; scheduling restrictions; security controls; security hardening; software updates; system cloning and software licenses; system security; system security life cycle; unauthorized change detection; unauthorized change deterrence; unauthorized change prevention (ID#: 16-9423)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7119207&isnumber=7118873

 

J. D. Rocco, D. D. Ruscio, L. Iovino and A. Pierantonio, "Mining Correlations of ATL Model Transformation and Metamodel Metrics," Modeling in Software Engineering (MiSE), 2015 IEEE/ACM 7th International Workshop on, Florence, 2015, pp. 54-59. doi: 10.1109/MiSE.2015.17

Abstract: Model transformations are considered to be the "heart" and "soul" of Model Driven Engineering, and as such, advanced techniques and tools are needed for supporting the development, quality assurance, maintenance, and evolution of model transformations. Although model transformation developers now have powerful languages and tools for developing and testing model transformations, very few techniques are available to support the understanding of transformation characteristics. In this paper, we propose a process to analyze model transformations with the aim of identifying to what extent their characteristics depend on the corresponding input and target metamodels. The process relies on a number of transformation and metamodel metrics that are calculated and properly correlated. The paper discusses the application of the approach on a corpus consisting of more than 90 ATL transformations and 70 corresponding metamodels.

Keywords: program diagnostics; software metrics; ATL model transformation; correlation modeling; metamodel metrics; model driven engineering; Analytical models; Complexity theory; Correlation; IP networks; Indexes; Measurement; Object oriented modeling (ID#: 16-9424)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7167403&isnumber=7167386
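The core of the mining step described in the abstract above is computing pairwise correlations between transformation metrics and metamodel metrics across a corpus. The following is a minimal sketch of that idea; the metric names (rule count, helper count, metamodel class count) and the sample values are illustrative assumptions, not data from the paper.

```python
# Sketch: correlate a transformation metric with a metamodel metric
# across a corpus, in the spirit of the mining approach above.
# Metric names and values are hypothetical illustrations.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One record per (ATL transformation, input metamodel) pair.
corpus = [
    {"rules": 12, "helpers": 3, "mm_classes": 25},
    {"rules": 30, "helpers": 9, "mm_classes": 60},
    {"rules": 7,  "helpers": 1, "mm_classes": 14},
    {"rules": 22, "helpers": 6, "mm_classes": 41},
]

r = pearson([c["rules"] for c in corpus],
            [c["mm_classes"] for c in corpus])
print(f"rules vs. metamodel classes: r = {r:.2f}")
```

A strong positive r would suggest the transformation characteristic (here, rule count) scales with the metamodel characteristic; in practice such coefficients would be computed for every metric pair and filtered for significance.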

 

Y. S. Olaperi and S. Misra, "An Empirical Evaluation of Software Quality Assurance Practices and Challenges in a Developing Country," Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 867-871. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.129

Abstract: Globally, it has been ascertained that the implementation of software quality assurance practices throughout the software development cycle yields quality software products that satisfy users and meet specified requirements. Awareness and adoption of these techniques have increased the quality and patronage of software products. However, in developing countries such as Nigeria, indigenously produced software is not patronized by large corporations such as banks for their financial portfolios, or even by the government. This research investigated the software quality assurance practices of practitioners in Nigeria, and the challenges faced in implementing software quality, in a bid to improve the quality and patronage of software. It was observed that while most practitioners claim to adhere to software quality practices, they barely have an understanding of software quality standards, and a vast majority do not have a distinct software quality assurance team to enforce this quality. The core challenges inhibiting the practice of these software quality standards have also been identified. The research has helped to reveal some issues within the industry, for which possible solutions have been proffered.

Keywords: human factors; quality assurance; software development management; software process improvement; software quality; software standards; Nigeria; developing countries; software development cycle; software quality assurance practices; software quality assurance team; software quality improvement; user satisfaction; Companies; Planning; Software quality; Standards organizations; software; software quality; software quality assurance; software quality challenges (ID#: 16-9425)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7363169&isnumber=7362962

 

J. Morris-King and H. Cam, "Ecology-Inspired Cyber Risk Model for Propagation of Vulnerability Exploitation in Tactical Edge," Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 336-341. doi: 10.1109/MILCOM.2015.7357465

Abstract: A multitude of cyber vulnerabilities on the tactical edge arise from the mix of network infrastructure, physical hardware and software, and individual user-behavior. Because of the inherent complexity of socio-technical systems, most models of tactical cyber assurance omit the non-physical influence propagation between mobile systems and users. This omission leads to a question: how can the flow of influence across a network act as a proxy for assessing the propagation of risk? Our contribution toward solving this problem is to introduce a dynamic, adaptive ecosystem-inspired model of vulnerability exploitation and risk flow over a tactical network. This model is based on ecological characteristics of the tactical edge, where the heterogeneous characteristics and behaviors of human-machine systems enhance or degrade mission risk in the tactical environment. Our approach provides an in-depth analysis of vulnerability exploitation propagation and risk flow using a multi-agent epidemic model which incorporates user-behavior and mobility as components of the system. This user-behavior component is expressed as a time-varying parameter driving a multi-agent system. We validate this model by conducting a synthetic battlefield simulation, where performance results depend mainly on the level of functionality of the assets and services. The composite risk score is shown to be proportional to infection rates from the Standard Epidemic Model.

Keywords: human factors; military communication; mobile ad hoc networks; multi-agent systems; telecommunication computing; telecommunication network reliability; time-varying systems; dynamic adaptive ecosystem-inspired model; ecology-inspired cyber risk model; human-machine systems; mobile systems; mobile users; multiagent epidemic model; nonphysical influence propagation; risk flow; risk propagation; socio-technical system complexity; synthetic battlefield simulation; tactical cyber assurance; tactical edge; tactical network; time-varying parameter; user-behavior; vulnerability exploitation propagation; Biological system modeling; Computational modeling; Computer security; Ecosystems; Risk management; Timing; Unified modeling language; Agent-based simulation; Ecological modeling; Epidemic system; Risk propagation; Tactical edge network (ID#: 16-9426)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7357465&isnumber=7357245
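The abstract above describes a multi-agent epidemic model in which a time-varying user-behavior parameter modulates how vulnerability exploitation propagates. The following toy sketch illustrates that general idea with an SIR-style simulation; all parameter values, the behavior function, and the contact scheme are illustrative assumptions, not figures from the paper.

```python
# Toy sketch of an agent-based epidemic of vulnerability exploitation,
# where a time-varying user-behavior factor scales the per-contact
# compromise probability. All parameters are illustrative assumptions.
import random

random.seed(1)

N = 50            # agents (nodes on the tactical edge)
BASE_P = 0.08     # baseline per-contact compromise probability
RECOVER_P = 0.05  # per-step remediation (patching) probability
STEPS = 60

def behavior(t):
    """Time-varying user-behavior factor: riskier behavior mid-mission."""
    return 1.5 if 20 <= t < 40 else 1.0

state = ["S"] * N   # S = susceptible, I = infected, R = remediated
state[0] = "I"      # one initially exploited node

for t in range(STEPS):
    nxt = state[:]
    for i, s in enumerate(state):
        if s == "I":
            if random.random() < RECOVER_P:
                nxt[i] = "R"
            # each infected agent contacts a few random peers
            for j in random.sample(range(N), 3):
                if state[j] == "S" and random.random() < BASE_P * behavior(t):
                    nxt[j] = "I"
    state = nxt

ever_infected = state.count("I") + state.count("R")
print(f"nodes compromised at some point: {ever_infected}/{N}")
```

In the paper's framing, a composite risk score is tracked alongside the infection rate; here one would aggregate per-node risk over time rather than just counting compromised nodes.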


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.