Software Assurance, 2014, Part 1



Software assurance is an essential element in the development of scalable and composable systems.  For a complete system to be secure, each subassembly must be secure. The research work cited here was presented in 2014.




Konrad Iwanicki, Przemyslaw Horban, Piotr Glazar, Karol Strzelecki; “Bringing Modern Unit Testing Techniques to Sensornets,” ACM Transactions on Sensor Networks (TOSN), Volume 11, Issue 2, August 2014, Article No. 25. doi:10.1145/2629422

Abstract: Unit testing, an important facet of software quality assurance, is underappreciated by wireless sensor network (sensornet) developers. This is likely because our tools lag behind the rest of the computing field. As a remedy, we present a new framework that enables modern unit testing techniques in sensornets. Although the framework takes a holistic approach to unit testing, its novelty lies mainly in two aspects. First, to boost test development, it introduces embedded mock modules that automatically abstract out dependencies of tested code. Second, to automate test assessment, it provides embedded code coverage tools that identify untested control flow paths in the code. We demonstrate that in sensornets these features pose unique problems, solving which requires dedicated support from the compiler and operating system. However, the solutions have the potential to offer substantial benefits. In particular, they reduce the unit test development effort by a few factors compared to existing solutions. At the same time, they facilitate obtaining full code coverage, compared to merely 57–72% that can be achieved with integration tests. They also allow for intercepting and reporting many classes of runtime failures, thereby simplifying the diagnosis of software flaws. Finally, they enable fine-grained management of the quality of sensornet software.

Keywords: Unit testing, code coverage, embedded systems, mock objects, software quality assurance, wireless sensor networks (ID#: 15-6236)

URL:  http://doi.acm.org/10.1145/2629422
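
The embedded mock modules and coverage tooling described above are specific to the authors' sensornet toolchain, but the underlying technique — automatically substituting a tested component's dependencies and then checking which control-flow paths were exercised — can be sketched in any language. Below is a minimal illustration using Python's standard unittest.mock; the sensor-driver and radio names are hypothetical stand-ins, not the paper's API:

```python
import unittest
from unittest.mock import Mock

def sample_and_send(driver, radio):
    """Read a sensor and transmit the value; report failure cleanly."""
    try:
        reading = driver.read()
    except IOError:
        return False              # failure path we also want covered
    radio.send(reading)
    return True

class SampleAndSendTest(unittest.TestCase):
    def test_normal_path(self):
        driver, radio = Mock(), Mock()
        driver.read.return_value = 42           # mocked dependency
        self.assertTrue(sample_and_send(driver, radio))
        radio.send.assert_called_once_with(42)  # interaction observed, no real radio

    def test_driver_failure_path(self):
        driver, radio = Mock(), Mock()
        driver.read.side_effect = IOError("bus error")  # injected fault
        self.assertFalse(sample_and_send(driver, radio))
        radio.send.assert_not_called()

if __name__ == "__main__":
    unittest.main()
```

Running such a suite under a coverage tool (e.g., coverage.py) then reveals whether both the normal and the failure path were exercised, mirroring the role of the paper's embedded code-coverage support.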



Peter C. Rigby, Daniel M. German, Laura Cowen, Margaret-Anne Storey; “Peer Review on Open-Source Software Projects: Parameters, Statistical Models, and Theory,” ACM Transactions on Software Engineering and Methodology (TOSEM) - Special Issue International Conference on Software Engineering (ICSE 2012) and Regular Papers, Volume 23, Issue 4, August 2014, Article No. 35. doi:10.1145/2594458

Abstract: Peer review is seen as an important quality-assurance mechanism in both industrial development and the open-source software (OSS) community. The techniques for performing inspections have been well studied in industry; in OSS development, software peer reviews are not as well understood. To develop an empirical understanding of OSS peer review, we examine the review policies of 25 OSS projects and study the archival records of six large, mature, successful OSS projects. We extract a series of measures based on those used in traditional inspection experiments. We measure the frequency of review, the size of the contribution under review, the level of participation during review, the experience and expertise of the individuals involved in the review, the review interval, and the number of issues discussed during review. We create statistical models of review efficiency, review interval, and review effectiveness (the issues discussed during review) to determine which measures have the largest impact on review efficacy. We find that OSS peer reviews are conducted asynchronously by empowered experts who focus on changes that are in their area of expertise. Reviewers provide timely, regular feedback on small changes. The descriptive statistics clearly show that OSS review is drastically different from traditional inspection.

Keywords: Peer review, inspection, mining software repositories, open-source software (ID#: 15-6237)

URL:  http://doi.acm.org/10.1145/2594458
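
To make the paper's measures concrete: given archival review records, quantities such as review interval, participation, and contribution size reduce to simple descriptive statistics. A rough sketch under an assumed record layout (the fields below are invented for illustration, not taken from the study's data):

```python
from datetime import datetime
from statistics import median

# Hypothetical archival records: one dict per review thread.
reviews = [
    {"posted": datetime(2014, 3, 1, 9), "last_reply": datetime(2014, 3, 1, 17),
     "reviewers": {"alice", "bob"}, "loc_changed": 42, "issues": 3},
    {"posted": datetime(2014, 3, 2, 8), "last_reply": datetime(2014, 3, 4, 12),
     "reviewers": {"carol"}, "loc_changed": 310, "issues": 7},
]

# Review interval: first post to last reply, in hours.
intervals = [(r["last_reply"] - r["posted"]).total_seconds() / 3600 for r in reviews]
participation = [len(r["reviewers"]) for r in reviews]   # reviewers per review
sizes = [r["loc_changed"] for r in reviews]              # contribution size

print(f"median review interval: {median(intervals):.1f} h")
print(f"median participation:   {median(participation)} reviewer(s)")
print(f"median change size:     {median(sizes)} LOC")
```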



Lucas Layman, Victor R. Basili, Marvin V. Zelkowitz; “A Methodology for Exposing Risk in Achieving Emergent System Properties,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 3, May 2014,  Article No. 22. doi:10.1145/2560048

Abstract: Determining whether systems achieve desired emergent properties, such as safety or reliability, requires an analysis of the system as a whole, often in later development stages when changes are difficult and costly to implement. In this article we propose the Process Risk Indicator (PRI) methodology for analyzing and evaluating emergent properties early in the development cycle. A fundamental assumption of system engineering is that risk mitigation processes reduce system risks, yet these processes may also be a source of risk: (1) processes may not be appropriate for achieving the desired emergent property; or (2) processes may not be followed appropriately. PRI analyzes development process artifacts (e.g., designs pertaining to reliability or safety analysis reports) to quantify process risks that may lead to higher system risk. We applied PRI to the hazard analysis processes of a network-centric, Department of Defense system-of-systems and two NASA spaceflight projects to assess the risk of not achieving one such emergent property, software safety, during the early stages of the development lifecycle. The PRI methodology was used to create measurement baselines for process indicators of software safety risk, to identify risks in the hazard analysis process, and to provide feedback to projects for reducing these risks.

Keywords: Process risk, risk measurement, software safety (ID#: 15-6238)

URL:  http://doi.acm.org/10.1145/2560048



Pingyu Zhang, Sebastian Elbaum; “Amplifying Tests to Validate Exception Handling Code: An Extended Study in the Mobile Application Domain,” ACM Transactions on Software Engineering and Methodology (TOSEM) - Special Issue International Conference on Software Engineering (ICSE 2012) and Regular Papers, Volume 23, Issue 4, August 2014, Article No. 32. doi:10.1145/2652483

Abstract: Validating code handling exceptional behavior is difficult, particularly when dealing with external resources that may be noisy and unreliable, as it requires (1) systematic exploration of the space of exceptions that may be thrown by the external resources, and (2) setup of the context to trigger specific patterns of exceptions. In this work, we first present a study quantifying the magnitude of the problem by inspecting the bug repositories of a set of popular applications in the increasingly relevant domain of Android mobile applications. The study revealed that 22% of the confirmed and fixed bugs have to do with poor exception handling code, and half of those correspond to interactions with external resources. We then present an approach that addresses this challenge by performing a systematic amplification of the program space explored by a test through manipulation of the behavior of external resources. Each amplification attempts to expose a program’s exception handling constructs to new behavior by mocking an external resource so that it returns normally or throws an exception following a predefined set of patterns. Our assessment of the approach indicates that it can be fully automated, is powerful enough to detect 67% of the faults reported in bug reports of this kind, is precise enough that 78% of the detected anomalies were fixed, and has great potential to assist developers.

Keywords: Test transformation, exception handling, mobile applications, test amplification, test case generation (ID#: 15-6239)

URL: http://doi.acm.org/10.1145/2652483
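
The amplification idea lends itself to a compact sketch: mock the external resource so that successive calls follow a predefined pattern of normal returns and thrown exceptions, then check that the code under test survives every pattern. The names below are illustrative, not drawn from the paper's Android tooling:

```python
import itertools
from unittest.mock import Mock

class NetworkDown(Exception):
    """Stands in for an external-resource failure (e.g., a dropped connection)."""

def fetch_with_retry(fetch, retries=2):
    """Code under test: retry a flaky external fetch, then give up cleanly."""
    for _ in range(retries + 1):
        try:
            return fetch()
        except NetworkDown:
            continue
    return None

def behaviour(step):
    # 'ok' -> a normal return value, 'err' -> an exception to be raised
    return 42 if step == "ok" else NetworkDown()

# Amplification: re-run the test under every bounded pattern of resource
# behaviour, with the resource mocked to follow the pattern call by call.
for pattern in itertools.product(["ok", "err"], repeat=3):
    mock_fetch = Mock(side_effect=[behaviour(s) for s in pattern])
    result = fetch_with_retry(mock_fetch)
    # Oracle: the program must never crash; it returns data or None.
    assert result in (42, None), f"unhandled failure under {pattern}"
    print(pattern, "->", result)
```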



Salah Bouktif, Houari Sahraoui, Faheem Ahmed; “Predicting Stability of Open-Source Software Systems Using Combination of Bayesian Classifiers,” ACM Transactions on Management Information Systems (TMIS), Volume 5, Issue 1, April 2014, Article No. 3. doi:10.1145/2555596

Abstract: The use of free and Open-Source Software (OSS) systems is gaining momentum. Organizations are also now adopting OSS, despite some reservations, particularly about quality issues. Stability of software is one of the main features in software quality management that needs to be understood and accurately predicted. It deals with the impact resulting from software changes and argues that stable components lead to cost-effective software evolution. Changes are a more common phenomenon in OSS than in proprietary software, which makes OSS system evolution a rich context in which to study and predict stability. Our objective in this work is to build stability prediction models that are not only accurate but also interpretable, that is, able to explain the link between the architectural aspects of a software component and its stability behavior in the context of OSS. We therefore propose a new approach based on classifier combination that is capable of preserving prediction interpretability. Because our approach is classifier-structure dependent, we propose a particular solution for combining Bayesian classifiers in order to derive a more accurate composite classifier that preserves interpretability. This solution is implemented using a genetic algorithm and applied in the context of a large-scale OSS system, namely the standard Java API. The empirical results show that our approach outperforms state-of-the-art approaches from both machine learning and software engineering.

Keywords: Bayesian classifiers, Software stability prediction, genetic algorithm (ID#: 15-6240)

URL:  http://doi.acm.org/10.1145/2555596
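
As a rough illustration of interpretable classifier combination (not the paper's actual algorithm), one can train Bayesian classifiers on separate metric subsets — so each remains individually readable — and then search for combination weights that maximize accuracy. The sketch below assumes scikit-learn and NumPy are available; GaussianNB stands in for the authors' Bayesian classifiers, and a simple grid search stands in for their genetic algorithm:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in for component metrics (X) and stability labels (y).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Two Bayesian classifiers trained on different metric subsets,
# so each stays individually interpretable.
nb1 = GaussianNB().fit(X[:, :2], y)
nb2 = GaussianNB().fit(X[:, 2:], y)

def combined_accuracy(w):
    """Accuracy of the weighted combination of the two posteriors."""
    p = w * nb1.predict_proba(X[:, :2]) + (1 - w) * nb2.predict_proba(X[:, 2:])
    return (p.argmax(axis=1) == y).mean()

# Toy stand-in for the paper's genetic algorithm: scan the weight space.
best_w = max(np.linspace(0, 1, 101), key=combined_accuracy)
print(f"best weight {best_w:.2f}, accuracy {combined_accuracy(best_w):.3f}")
```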



Yuming Zhou, Baowen Xu, Hareton Leung, Lin Chen; “An In-Depth Study of the Potentially Confounding Effect of Class Size in Fault Prediction,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014, Article No. 10. doi:10.1145/2556777

Abstract: Background. The extent of the potentially confounding effect of class size in the fault prediction context is not clear, nor is the method to remove the potentially confounding effect, or the influence of this removal on the performance of fault-proneness prediction models. Objective. We aim to provide an in-depth understanding of the effect of class size on the true associations between object-oriented metrics and fault-proneness. Method. We first employ statistical methods to examine the extent of the potentially confounding effect of class size in the fault prediction context. After that, we propose a linear regression-based method to remove the potentially confounding effect. Finally, we empirically investigate whether this removal could improve the prediction performance of fault-proneness prediction models. Results. Based on open-source software systems, we found: (a) the confounding effect of class size on the associations between object-oriented metrics and fault-proneness in general exists; (b) the proposed linear regression-based method can effectively remove the confounding effect; and (c) after removing the confounding effect, the prediction performance of fault prediction models with respect to both ranking and classification can in general be significantly improved. Conclusion. We should remove the confounding effect of class size when building fault prediction models.

Keywords: Metrics, class size, confounding effect, fault, prediction (ID#: 15-6241)

URL:  http://doi.acm.org/10.1145/2556777
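
The regression-based removal of the size confound can be sketched directly: regress the metric on class size and keep only the residual. A minimal illustration on synthetic data (the variable names and coefficients are invented), where the apparent metric-fault association exists only through class size:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: class size confounds a coupling metric and fault counts.
size = rng.lognormal(mean=5, sigma=1, size=500)
coupling = 0.02 * size + rng.normal(scale=2, size=500)   # metric grows with size
faults = 0.01 * size + rng.normal(scale=1, size=500)     # so does fault count

def residualize(metric, size):
    """Remove the linear effect of size: keep the residual of metric ~ size."""
    slope, intercept = np.polyfit(size, metric, deg=1)
    return metric - (slope * size + intercept)

raw_corr = np.corrcoef(coupling, faults)[0, 1]
adj_corr = np.corrcoef(residualize(coupling, size), faults)[0, 1]
print(f"association with size left in:  {raw_corr:.2f}")   # inflated
print(f"association with size removed:  {adj_corr:.2f}")   # near zero
```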



Lionel Briand, Davide Falessi, Shiva Nejati, Mehrdad Sabetzadeh, Tao Yue; “Traceability and SysML Design Slices to Support Safety Inspections: A Controlled Experiment,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014,  Article No. 9. doi:10.1145/2559978

Abstract: Certifying safety-critical software and ensuring its safety requires checking the conformance between safety requirements and design. Increasingly, the development of safety-critical software relies on modeling, and the System Modeling Language (SysML) is now commonly used in many industry sectors. Inspecting safety conformance by comparing design models against safety requirements requires safety inspectors to browse through large models and is consequently time consuming and error-prone. To address this, we have devised a mechanism to establish traceability between (functional) safety requirements and SysML design models to extract design slices (model fragments) that filter out irrelevant details but keep enough context information for the slices to be easy to inspect and understand. In this article, we report on a controlled experiment assessing the impact of the traceability and slicing mechanism on inspectors' conformance decisions and effort. Results show a significant decrease in effort and an increase in decisions’ correctness and level of certainty.

Keywords: Empirical software engineering, design, requirements specification, software and system safety, software/program verification (ID#: 15-6242)

URL: http://doi.acm.org/10.1145/2559978 
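
The slicing mechanism can be approximated as graph traversal: start from the model elements traced to a safety requirement and keep a bounded neighborhood as context. A toy sketch with hypothetical element names, not the authors' SysML tooling:

```python
from collections import deque

# Hypothetical SysML-like design model as a graph: element -> connected elements.
model = {
    "BrakeController": ["WheelSensor", "HydraulicUnit", "DiagnosticsLog"],
    "WheelSensor": ["BrakeController"],
    "HydraulicUnit": ["BrakeController", "PumpMotor"],
    "PumpMotor": ["HydraulicUnit"],
    "DiagnosticsLog": [],
    "InfotainmentHub": [],
}

# Traceability: a safety requirement points at the elements that realise it.
trace = {"REQ-BRK-01": ["BrakeController"]}

def design_slice(requirement, depth=1):
    """Elements traced from the requirement plus `depth` hops of context."""
    frontier = deque((e, 0) for e in trace[requirement])
    kept = set()
    while frontier:
        element, d = frontier.popleft()
        if element in kept or d > depth:
            continue
        kept.add(element)
        frontier.extend((n, d + 1) for n in model[element])
    return kept

print(design_slice("REQ-BRK-01"))   # excludes the unrelated InfotainmentHub
```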



Yong Ge, Guofei Jiang, Min Ding, Hui Xiong; “Ranking Metric Anomaly in Invariant Networks,” ACM Transactions on Knowledge Discovery from Data (TKDD), Volume 8, Issue 2, June 2014, Article No. 8. doi:10.1145/2601436

Abstract: The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in the distributed information systems. A traditional approach to modeling monitoring data is to discover invariant relationships among the data. Indeed, we can discover all invariant relationships among all pairs of monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two monitoring data sources. Such an invariant network representation can help system experts to localize and diagnose system faults by examining broken invariant relationships and their related metrics, since system faults usually propagate among the monitoring data and eventually lead to some broken invariant relationships. However, at any one time there are usually many broken links (invariant relationships) within an invariant network, and without proper guidance it is difficult for system experts to manually inspect this large number of broken links. To this end, in this article, we propose the problem of ranking metrics according to their anomaly levels for a given invariant network, a nontrivial task due to the uncertainties and the complex nature of invariant networks. Specifically, we propose two types of algorithms for ranking metric anomaly by link analysis in invariant networks. Along this line, we first define two measurements to quantify the anomaly level of each metric and introduce the mRank algorithm. We also provide a weighted score mechanism and develop the gRank algorithm, which involves an iterative process to obtain a score measuring the anomaly levels. In addition, some extended algorithms based on the mRank and gRank algorithms are developed by taking into account the probability of links being broken as well as noisy links. Finally, we validate all the proposed algorithms on a large number of real-world and synthetic data sets to illustrate the effectiveness and efficiency of the different algorithms.

Keywords: Metric anomaly ranking, invariant networks, link analysis (ID#: 15-6243)

URL:   http://doi.acm.org/10.1145/2601436
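
A simplified score in the spirit of mRank (the paper's actual measurements are more refined) ranks each metric by the fraction of its invariant links that are broken:

```python
# Invariant network: metric -> neighbours; plus links broken after a fault.
links = {
    "cpu":  {"mem", "disk", "net"},
    "mem":  {"cpu", "disk"},
    "disk": {"cpu", "mem"},
    "net":  {"cpu"},
}
broken = {frozenset(p) for p in [("cpu", "mem"), ("cpu", "disk"), ("cpu", "net")]}

def anomaly_score(metric):
    """Fraction of the metric's invariants that are broken."""
    nbrs = links[metric]
    return sum(frozenset((metric, n)) in broken for n in nbrs) / len(nbrs)

for m in sorted(links, key=anomaly_score, reverse=True):
    print(f"{m}: {anomaly_score(m):.2f}")
```

Note that net, with only a single invariant, ties with cpu; degree effects like this are exactly what the paper's weighted gRank variant and its extensions are designed to handle.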

 

Hwidong Na, Jong-Hyeok Lee; “Linguistic Analysis of Non-ITG Word Reordering Between Language Pairs with Different Word Order Typologies,” ACM Transactions on Asian Language Information Processing (TALIP), Volume 13, Issue 3, September 2014, Article No. 11. doi:10.1145/2644810

Abstract: The Inversion Transduction Grammar (ITG) constraints have been widely used for word reordering in machine translation studies. They are, however, so restricted that some types of word reordering cannot be handled properly. We analyze three corpora between SVO and SOV languages: Chinese-Korean, English-Japanese, and English-Korean. In our analysis, sentences that require non-ITG word reordering are manually categorized. We also report the results for two quantitative measures that reveal the significance of non-ITG word reordering. In conclusion, we suggest that ITG constraints are insufficient to deal with word reordering in real situations.

Keywords: Machine translation, corpus analysis, inversion transduction grammar (ID#: 15-6244)

URL:  http://doi.acm.org/10.1145/2644810
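
The ITG constraint itself is easy to operationalize: a target-side permutation is ITG-reachable exactly when it can be reduced to a single span by repeatedly merging adjacent spans that cover a contiguous range of positions (equivalently, when it avoids the permutations 2413 and 3142). The sketch below is one plausible way to flag the non-ITG sentences the authors categorize, not their annotation procedure:

```python
def is_itg(perm):
    """True if `perm` is reachable by ITG reordering (binarizable permutation).

    Repeatedly merge two adjacent spans whose values form one contiguous
    range; ITG-reorderable permutations reduce to a single span.
    """
    spans = [(v, v) for v in perm]           # (min, max) of each span
    changed = True
    while changed and len(spans) > 1:
        changed = False
        for i in range(len(spans) - 1):
            (lo1, hi1), (lo2, hi2) = spans[i], spans[i + 1]
            lo, hi = min(lo1, lo2), max(hi1, hi2)
            if hi - lo + 1 == (hi1 - lo1 + 1) + (hi2 - lo2 + 1):
                spans[i:i + 2] = [(lo, hi)]  # straight or inverted merge
                changed = True
                break
    return len(spans) == 1

print(is_itg([2, 4, 1, 3]))   # False: the classic non-ITG "inside-out" order
print(is_itg([3, 1, 2, 4]))   # True: expressible with straight/inverted rules
```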



Klaas-Jan Stol, Paris Avgeriou, Muhammad Ali Babar, Yan Lucas, Brian Fitzgerald; “Key Factors for Adopting Inner Source,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 2, March 2014, Article No. 18. doi:10.1145/2533685

Abstract: A number of organizations have adopted Open Source Software (OSS) development practices to support or augment their software development processes, a phenomenon frequently referred to as Inner Source. However, the adoption of Inner Source is not straightforward: many organizations struggle with the question of whether Inner Source is an appropriate approach to software development for them in the first place. This article presents a framework derived from the literature on Inner Source, which identifies nine important factors that need to be considered when implementing Inner Source. The framework can be used as a probing instrument to assess an organization on these nine factors so as to gain an understanding of whether or not Inner Source is suitable. We applied the framework in three case studies at Philips Healthcare, Neopost Technologies, and Rolls-Royce, all large organizations that have either adopted Inner Source or were planning to do so. Based on the results presented in this article, we outline directions for future research.

Keywords: Case study, framework, inner source, open-source development practices (ID#: 15-6245)

URL:  http://doi.acm.org/10.1145/2533685

 

M. Unterkalmsteiner, R. Feldt, T. Gorschek; “A Taxonomy for Requirements Engineering and Software Test Alignment,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 2, March 2014, Article No. 16. doi:10.1145/2523088

Abstract: Requirements engineering and software testing are mature areas and have seen much research. Nevertheless, their interactions have been sparsely explored beyond the concept of traceability. To fill this gap, we propose a definition of requirements engineering and software test (REST) alignment, a taxonomy that characterizes the methods linking the respective areas, and a process to assess alignment. The taxonomy can support researchers in identifying new opportunities for investigation, and practitioners in comparing alignment methods and evaluating alignment, or the lack thereof. We constructed the REST taxonomy by analyzing alignment methods published in the literature, iteratively validating the emerging dimensions. The resulting concept of an information dyad characterizes the exchange of information required for any alignment to take place. We demonstrate use of the taxonomy by applying it to five in-depth cases and illustrate angles of analysis on a set of thirteen alignment methods. In addition, we developed an assessment framework (REST-bench), applied it in an industrial assessment, and showed that, with low effort, it can identify opportunities to improve REST alignment. Although we expect that the taxonomy can be further refined, we believe that the information dyad is a valid and useful construct for understanding alignment.

Keywords: Alignment, software process assessment, software testing, taxonomy (ID#: 15-6246)

URL:  http://doi.acm.org/10.1145/2523088



Federico Mari, Igor Melatti, Ivano Salvo, Enrico Tronci; “Model-Based Synthesis of Control Software from System-Level Formal Specifications,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014, Article No. 6. doi:10.1145/2559934

Abstract: Many embedded systems are indeed software-based control systems, that is, control systems whose controller consists of control software running on a microcontroller device. This motivates investigation of formal model-based design approaches for the automatic synthesis of embedded-systems control software. We present an algorithm, along with a tool QKS implementing it, that takes a formal model (as a discrete-time linear hybrid system) of the controlled system (plant), implementation specifications (that is, the number of bits in the Analog-to-Digital (AD) conversion), and system-level formal specifications (that is, safety and liveness requirements for the closed-loop system), and returns correct-by-construction control software that has a Worst-Case Execution Time (WCET) linear in the number of AD bits and meets the given specifications. We show the feasibility of our approach by presenting experimental results on using it to synthesize control software for a buck DC-DC converter, a widely used mixed-mode analog circuit, and for the inverted pendulum.

Keywords: Hybrid systems, correct-by-construction control software synthesis, model-based design of control software (ID#: 15-6247)

URL:  http://doi.acm.org/10.1145/2559934
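
One way to picture the WCET claim (a heavily simplified sketch, not the QKS algorithm): the synthesized controller is evaluated bit by bit over the quantized state, so its worst-case execution time grows linearly with the number of AD bits. All names, dynamics, and the trivial control law below are invented for illustration:

```python
# Toy plant: keep a scalar state near zero with u in {-1, +1}.
N_BITS = 4
XMIN, XMAX = -1.0, 1.0

def quantize(x, n_bits=N_BITS):
    """AD conversion: map x in [XMIN, XMAX] to an n-bit code."""
    levels = 2 ** n_bits
    code = int((x - XMIN) / (XMAX - XMIN) * levels)
    return min(max(code, 0), levels - 1)

def control(code, n_bits=N_BITS):
    """'Synthesized' control law evaluated over the code's bits.

    Here the decision only inspects the most significant bit, but the loop
    mimics the general bit-by-bit walk (one step per AD bit) that makes the
    worst-case execution time linear in n_bits.
    """
    u = +1.0
    for bit in range(n_bits - 1, -1, -1):    # one step per AD bit
        if bit == n_bits - 1 and (code >> bit) & 1:
            u = -1.0                         # upper half of range: push down
    return u

x = 0.3
for step in range(5):
    u = control(quantize(x))
    x = 0.8 * x + 0.1 * u                    # toy closed-loop dynamics
    print(f"step {step}: x = {x:+.3f}, u = {u:+.0f}")
```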



Dilan Sahin, Marouane Kessentini, Slim Bechikh, Kalyanmoy Deb; “Code-Smell Detection as a Bilevel Problem,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24, Issue 1, September 2014, Article No. 6. doi:10.1145/2675067

Abstract: Code smells represent design situations that can affect the maintenance and evolution of software. They make the system difficult to evolve. Code smells are detected, in general, using quality metrics that represent some symptoms. However, the selection of suitable quality metrics is challenging due to the absence of consensus in identifying some code smells based on a set of symptoms and the high calibration effort of manually determining the threshold value for each metric. In this article, we propose treating the generation of code-smell detection rules as a bilevel optimization problem. Bilevel optimization problems represent a class of challenging optimization problems that contain two levels of optimization tasks. In these problems, only the optimal solutions to the lower-level problem become feasible candidates for the upper-level problem. In this sense, the code-smell detection problem can be treated as a bilevel optimization problem, but due to the lack of suitable solution techniques, it has in the past been approached as a single-level optimization problem. In our adaptation, the upper-level problem generates a set of detection rules, a combination of quality metrics, which maximizes the coverage of the base of code-smell examples and artificial code smells generated by the lower level. The lower level maximizes the number of generated artificial code smells that cannot be detected by the rules produced by the upper level. The main advantage of our bilevel formulation is that the generation of detection rules is not limited to code-smell examples identified manually by developers, which are difficult to collect, but allows the prediction of new code-smell behavior that differs from that of the base of examples. The statistical analysis of our experiments over 31 runs on nine open-source systems and one industrial project shows that seven types of code smells were detected with an average of more than 86% in terms of precision and recall. The results confirm that our bilevel proposal outperforms state-of-the-art code-smell detection techniques. The evaluation performed by software engineers also confirms the relevance of the detected code smells for improving the quality of software systems.

Keywords: Search-based software engineering, code smells, software quality (ID#: 15-6248)

URL:  http://doi.acm.org/10.1145/2675067
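
A detection rule of the kind the upper level evolves can be pictured as a conjunction of metric thresholds. The sketch below hand-picks one such rule for the Blob smell; the metric names, thresholds, and class data are illustrative only:

```python
# Hypothetical metric vectors for a few classes.
classes = {
    "OrderManager": {"LOC": 1200, "NMD": 45, "CBO": 22},  # lines, methods, coupling
    "Money":        {"LOC": 150,  "NMD": 12, "CBO": 3},
    "ReportUtil":   {"LOC": 900,  "NMD": 60, "CBO": 5},
}

# A detection rule as the upper level might evolve it: a conjunction of
# metric thresholds (here hand-picked, not optimized).
blob_rule = {"LOC": 700, "NMD": 40, "CBO": 15}

def matches(metrics, rule):
    """A class is flagged when every metric meets its threshold."""
    return all(metrics[m] >= t for m, t in rule.items())

flagged = [name for name, m in classes.items() if matches(m, blob_rule)]
print("Blob candidates:", flagged)   # ['OrderManager']
```

In the paper's bilevel setting, the upper level searches over such rules to cover both real and artificial smell examples, while the lower level generates artificial smells that escape the current rules.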

 

Eric Yuan, Naeem Esfahani, Sam Malek; “A Systematic Survey of Self-Protecting Software Systems,” ACM Transactions on Autonomous and Adaptive Systems (TAAS), Volume 8, Issue 4, January 2014, Article No. 17. doi:10.1145/2555611

Abstract: Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends and challenges in the existing approaches, which reveals a number of opportunities that will shape the focus of future research efforts.

Keywords: Self-protection, adaptive security, autonomic computing, self-* properties, self-adaptive systems (ID#: 15-6249)

URL:  http://doi.acm.org/10.1145/2555611



Juan De Lara, Esther Guerra, Jesús Sánchez Cuadrado; “When and How to Use Multilevel Modelling,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24, Issue 2, December 2014, Article No. 12. doi:10.1145/2685615

Abstract: Model-Driven Engineering (MDE) promotes models as the primary artefacts in the software development process, from which code for the final application is derived. Standard approaches to MDE (like those based on MOF or EMF) advocate a two-level metamodelling setting where Domain-Specific Modelling Languages (DSMLs) are defined through a metamodel that is instantiated to build models at the metalevel below.  Multilevel modelling (also called deep metamodelling) extends the standard approach to metamodelling by enabling modelling at an arbitrary number of metalevels, not necessarily two. Proposers of multilevel modelling claim this leads to simpler model descriptions in some situations, although its applicability has scarcely been evaluated. Thus, practitioners may find it difficult to discern when to use it and how to implement multilevel solutions in practice.  In this article, we discuss those situations where the use of multilevel modelling is beneficial, and identify recurring patterns and idioms. Moreover, in order to assess how often the identified patterns arise in practice, we have analysed a wide range of existing two-level DSMLs from different sources and domains, to detect when their elements could be rearranged in more than two metalevels. The results show that this scenario is not uncommon and that in some application domains (like software architecture and enterprise/process modelling) it is pervasive, with a high average number of pattern occurrences per metamodel.

Keywords: Model-driven engineering, domain-specific modelling languages, metamodelling, metamodelling patterns, multilevel modeling (ID#: 15-6250)

URL: http://doi.acm.org/10.1145/2685615
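
The core multilevel device, deep instantiation with potency, fits in a few lines: an element carries a potency counting how many further metalevels it can be instantiated at. A minimal sketch (illustrative only, not tied to any particular multilevel tool):

```python
class Clabject:
    """Class/object hybrid: an element that is both instance and type.

    `potency` counts how many more times it can be instantiated — the
    standard deep-instantiation device in multilevel modelling.
    """
    def __init__(self, name, potency, meta=None):
        self.name, self.potency, self.meta = name, potency, meta

    def instantiate(self, name):
        if self.potency == 0:
            raise TypeError(f"{self.name} has potency 0: not instantiable")
        return Clabject(name, self.potency - 1, meta=self)

# Three metalevels instead of the usual two:
product_type = Clabject("ProductType", potency=2)   # top-level language element
book = product_type.instantiate("Book")             # domain type (potency 1)
moby_dick = book.instantiate("MobyDick")            # concrete instance (potency 0)
print(moby_dick.name, "->", moby_dick.meta.name, "->", moby_dick.meta.meta.name)
```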



Gerwin Klein, June Andronick, Kevin Elphinstone, Toby Murray, Thomas Sewell, Rafal Kolanski, Gernot Heiser; “Comprehensive Formal Verification of an OS Microkernel,” ACM Transactions on Computer Systems (TOCS), Volume 32, Issue 1, February 2014, Article No. 2. doi:10.1145/2560537

Abstract: We present an in-depth coverage of the comprehensive machine-checked formal verification of seL4, a general-purpose operating system microkernel.  We discuss the kernel design we used to make its verification tractable. We then describe the functional correctness proof of the kernel’s C implementation and we cover further steps that transform this result into a comprehensive formal verification of the kernel: a formally verified IPC fastpath, a proof that the binary code of the kernel correctly implements the C semantics, a proof of correct access-control enforcement, a proof of information-flow noninterference, a sound worst-case execution time analysis of the binary, and an automatic initialiser for user-level systems that connects kernel-level access-control enforcement with reasoning about system behaviour. We summarise these results and show how they integrate to form a coherent overall analysis, backed by machine-checked, end-to-end theorems. The seL4 microkernel is currently not just the only general-purpose operating system kernel that is fully formally verified to this degree. It is also the only example of formal proof of this scale that is kept current as the requirements, design and implementation of the system evolve over almost a decade. We report on our experience in maintaining this evolving formally verified code base.

Keywords: Isabelle/HOL, L4, microkernel, operating systems, seL4 (ID#: 15-6251)

URL:  http://doi.acm.org/10.1145/2560537



Kai Pan, Xintao Wu, Tao Xie; “Guided Test Generation for Database Applications via Synthesized Database Interactions,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 2, March 2014, Article No. 12. doi:10.1145/2491529

Abstract: Testing database applications typically requires the generation of tests consisting of both program inputs and database states. Recently, a testing technique called Dynamic Symbolic Execution (DSE) has been proposed to reduce manual effort in test generation for software applications. However, applying DSE to generate tests for database applications faces various technical challenges. For example, the database application under test needs to physically connect to the associated database, which may not be available for various reasons. The program inputs whose values are used to form the executed queries are not treated symbolically, posing difficulties for generating valid database states or appropriate database states for achieving high coverage of query-result-manipulation code. To address these challenges, in this article, we propose an approach called SynDB that synthesizes new database interactions to replace the original ones from the database application under test. In this way, we bridge various constraints within a database application: query-construction constraints, query constraints, database schema constraints, and query-result-manipulation constraints. We then apply a state-of-the-art DSE engine called Pex for .NET from Microsoft Research to generate both program inputs and database states. The evaluation results show that tests generated by our approach can achieve higher code coverage than existing test generation approaches for database applications.

Keywords: Automatic test generation, database application testing, dynamic symbolic execution, synthesized database interactions (ID#: 15-6252)

URL:  http://doi.acm.org/10.1145/2491529
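
The effect SynDB aims at can be pictured without a DSE engine: for code that manipulates query results, tests supply synthesized database states chosen to drive each branch. In the paper these states fall out of Pex's constraint solving over the synthesized interactions; the sketch below hand-picks them and uses hypothetical schema fields:

```python
def classify_customer(rows):
    """Query-result-manipulation code under test."""
    if not rows:
        return "unknown"
    total = sum(r["amount"] for r in rows)
    return "vip" if total > 1000 else "regular"

# Synthesized database interaction: instead of connecting to a live
# database, the test supplies rows that satisfy the (hypothetical) schema
# and are chosen to drive each branch of the manipulation code.
synthesized_states = [
    [],                                   # empty-result branch
    [{"amount": 200}, {"amount": 300}],   # 'regular' branch
    [{"amount": 900}, {"amount": 400}],   # 'vip' branch
]

for rows in synthesized_states:
    print(rows, "->", classify_customer(rows))
```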



Akshay Dua, Nirupama Bulusu, Wu-Chang Feng, Wen Hu; “Combating Software and Sybil Attacks to Data Integrity in Crowd-Sourced Embedded Systems,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13, Issue 5s, November 2014, Article No. 154. doi:10.1145/2629338

Abstract: Crowd-sourced mobile embedded systems allow people to contribute sensor data, for critical applications, including transportation, emergency response and eHealth. Data integrity becomes imperative as malicious participants can launch software and Sybil attacks modifying the sensing platform and data. To address these attacks, we develop (1) a Trusted Sensing Peripheral (TSP) enabling collection of high-integrity raw or aggregated data, and participation in applications requiring additional modalities; and (2) a Secure Tasking and Aggregation Protocol (STAP) enabling aggregation of TSP trusted readings by untrusted intermediaries, while efficiently detecting fabricators. Evaluations demonstrate that TSP and STAP are practical and energy-efficient.

Keywords: Trust, critical systems, crowd-sourced sensing, data integrity, embedded systems, mobile computing, security (ID#: 15-6253)

URL: http://doi.acm.org/10.1145/2629338
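
The trust argument rests on a standard building block: the peripheral authenticates each reading so that untrusted intermediaries can relay but not fabricate data. A minimal sketch with an HMAC standing in for the TSP's attestation (the actual TSP/STAP design is considerably richer, covering aggregation and efficient fabricator detection):

```python
import hashlib
import hmac
import json

KEY = b"device-key-shared-with-verifier"   # hypothetically provisioned into the TSP

def tsp_reading(value):
    """Trusted peripheral: sign each reading so tampering is detectable."""
    payload = json.dumps({"value": value}).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    """Verifier: recompute the tag and compare in constant time."""
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = tsp_reading(21.5)
print("honest relay accepted:     ", verify(payload, tag))

# An untrusted intermediary fabricating data cannot forge a valid tag.
forged = json.dumps({"value": 99.9}).encode()
print("fabricated reading accepted:", verify(forged, tag))
```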



Robert M. Hierons; “Combining Centralised and Distributed Testing,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 24, Issue 1, September 2014, Article No. 5. doi:10.1145/2661296

Abstract: Many systems interact with their environment at distributed interfaces (ports) and sometimes it is not possible to place synchronised local testers at the ports of the system under test (SUT). There are then two main approaches to testing: having independent local testers or a single centralised tester that interacts asynchronously with the SUT. The power of using independent testers has been captured using implementation relation dioco. In this article, we define implementation relation diococ for the centralised approach and prove that dioco and diococ are incomparable. This shows that the frameworks detect different types of faults and so we devise a hybrid framework and define an implementation relation diocos for this. We prove that the hybrid framework is more powerful than the distributed and centralised approaches. We then prove that the Oracle problem is NP-complete for diococ and diocos but can be solved in polynomial time if we place an upper bound on the number of ports. Finally, we consider the problem of deciding whether there is a test case that is guaranteed to force a finite state model into a particular state or to distinguish two states, proving that both problems are undecidable for the centralised and hybrid frameworks.

Keywords: Centralised testing, distributed testing, model-based testing (ID#: 15-6254)

URL: http://doi.acm.org/10.1145/2661296



Ming Xia, Yabo Dong, Wenyuan Xu, Xiangyang Li, Dongming Lu; “MC2: Multimode User-Centric Design of Wireless Sensor Networks for Long-Term Monitoring,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 3, April 2014, Article No. 52. doi:10.1145/2509856

Abstract: Real-world, long-running wireless sensor networks (WSNs) require intense user intervention in the development, hardware testing, deployment, and maintenance stages. Most network designs are network-centric and focus primarily on network performance, for example, efficient sensing and reliable data delivery. Although several tools have been developed to assist debugging and fault diagnosis, the heavy underlying burden that users face throughout the lifetime of WSNs has yet to be systematically examined. In this article, we propose a general Multimode user-CentriC (MC2) framework that can, with simple user inputs, adjust itself to assist user operation and thus reduce the users’ burden at various stages. In particular, we have identified utilities that are essential at each stage and grouped them into modes. In each mode, only the corresponding utilities will be loaded, and modes can be easily switched using the customized MC2 sensor platform. As such, we reduce the runtime interference between various utilities and simplify their development as well as their debugging. We validated our MC2 software and the sensor platform in a long-lived microclimate monitoring system deployed at a wildland heritage site, Mogao Grottoes. In our current system, 241 sensor nodes have been deployed in 57 caves, and the network has been running for over five years. Our experimental validation shows that the MC2 framework shortens the time for network deployment and maintenance, and makes network maintenance doable by field experts (in our case, historians).

Keywords: MC2 framework, Wireless sensor networks, user-centric design (ID#: 15-6255)

URL: http://doi.acm.org/10.1145/2509856  
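
The multimode idea reduces to loading only the utilities registered for the current stage, so utilities from different stages cannot interfere at runtime. A schematic sketch; the mode and utility names are invented for illustration:

```python
# Utilities registered per lifecycle stage; only the active mode's are run.
MODES = {
    "testing":     ["hardware_selftest", "radio_range_check"],
    "deployment":  ["neighbor_discovery", "link_quality_report"],
    "monitoring":  ["sense_and_store", "low_power_listen"],
    "maintenance": ["battery_report", "flash_dump"],
}

class ModeManager:
    def __init__(self):
        self.mode = None

    def switch(self, mode):
        self.mode = mode
        print(f"[mode] {mode}: loading {', '.join(MODES[mode])}")

    def run(self):
        for utility in MODES[self.mode]:
            print(f"  running {utility}")   # placeholder for the real utility

mgr = ModeManager()
for stage in ("testing", "deployment", "monitoring"):
    mgr.switch(stage)
    mgr.run()
```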



Lihua Huang, Sulin Ba, Xianghua Lu; “Building Online Trust in a Culture of Confucianism: The Impact of Process Flexibility and Perceived Control,” ACM Transactions on Management Information Systems (TMIS), Volume 5, Issue 1, April 2014, Article No. 4.  doi:10.1145/2576756

Abstract: The success of e-commerce companies in a Confucian cultural context takes more than advanced IT and process design that have proven successful in Western countries. The example of eBay’s failure in China indicates that earning the trust of Chinese consumers is essential to success, yet the process of building that trust requires something different from that in the Western culture. This article attempts to build a theoretical model to explore the relationship between the Confucian culture and online trust. We introduce two new constructs, namely process flexibility and perceived control, as particularly important factors in online trust formation in the Chinese cultural context. A survey was conducted to test the proposed theoretical model. This study offers a new explanation for online trust formation in the Confucian context. The findings of this article can provide guidance for companies hoping to successfully navigate the Chinese online market in the future.

Keywords: Confucianism, culture, e-commerce, online market, perceived control, process flexibility, trust (ID#: 15-6256)

URL:  http://doi.acm.org/10.1145/2576756



Amit Zoran, Roy Shilkrot, Suranga Nanyakkara, Joseph Paradiso; “The Hybrid Artisans: A Case Study in Smart Tools,” ACM Transactions on Computer-Human Interaction (TOCHI), Volume 21, Issue 3, June 2014, Article No. 15.  doi:10.1145/2617570

Abstract: We present an approach to combining digital fabrication and craft, demonstrating a hybrid interaction paradigm where human and machine work in synergy. The FreeD is a hand-held digital milling device, monitored by a computer while preserving the maker’s freedom to manipulate the work in many creative ways. Relying on a pre-designed 3D model, the computer gets into action only when the milling bit risks the object’s integrity, preventing damage by slowing down the spindle speed, while the rest of the time it allows complete gestural freedom. We present the technology and explore several interaction methodologies for carving. In addition, we present a user study that reveals how synergistic cooperation between human and machine preserves the expressiveness of manual practice. This quality of the hybrid territory evolves into design personalization. We conclude with the creative potential of open-ended procedures within this hybrid interactive territory of manual smart tools and devices.

Keywords: (not provided) (ID#: 15-6257)

URL:  http://doi.acm.org/10.1145/2617570

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.