Software Assurance, 2014

Part 2



Software assurance is an essential element in the development of scalable and composable systems. For a complete system to be secure, each subassembly must be secure. The research work cited here was published in 2014.




Luis Angel D. Bathen, Nikil D. Dutt; “Embedded RAIDs-on-Chip for Bus-Based Chip-Multiprocessors,” ACM Transactions on Embedded Computing Systems (TECS) — Regular Papers, Volume 13, Issue 4, November 2014, Article No. 83. doi:10.1145/2533316

Abstract: The dual effects of larger die sizes and technology scaling, combined with aggressive voltage scaling for power reduction, increase the error rates for on-chip memories. Traditional on-chip memory reliability techniques (e.g., ECC) incur significant power and performance overheads. In this article, we propose a low-power-and-performance-overhead Embedded RAID (E-RAID) strategy and present Embedded RAIDs-on-Chip (E-RoC), a distributed dynamically managed reliable memory subsystem for bus-based chip-multiprocessors. E-RoC achieves reliability through redundancy by optimizing RAID-like policies tuned for on-chip distributed memories. We achieve on-chip reliability of memories through the use of Distributed Dynamic ScratchPad Allocatable Memories (DSPAMs) and their allocation policies. We exploit aggressive voltage scaling to reduce power consumption overheads due to parallel DSPAM accesses, and rely on the E-RoC Manager to automatically handle any resulting voltage-scaling-induced errors. We demonstrate how E-RAIDs can further enhance the fault tolerance of traditional memory reliability approaches by designing E-RAID levels that exploit ECC. Finally, we show the power and flexibility of the E-RoC concept through the benefits of heterogeneous E-RAID levels that fit each application’s needs (fault tolerance, power/energy, performance). Our experimental results on CHStone/Mediabench II benchmarks show that our E-RAID levels converge to 100% error-free data rates much faster than traditional ECC approaches. Moreover, E-RAID levels that exploit ECC can guarantee 99.9% error-free data rates at ultra-low Vdd on average, whereas traditional ECC approaches were able to attain at most 99.1% error-free data rates. We observe an average 22% increase in dynamic power consumption when using traditional ECC approaches with respect to the baseline (non-voltage-scaled SPMs), whereas our E-RAID levels save dynamic power consumption by an average of 27% (with respect to the same baseline), while incurring at worst 2% higher performance overheads than traditional ECC approaches. By voltage scaling the memories, traditional ECC approaches save static energy by 6.4% on average, whereas our E-RAID approaches achieve 23.4% average static energy savings. Finally, we observe that mixing E-RAID levels allows us to further reduce dynamic power consumption by up to 55.5% at the cost of an average 5.6% increase in execution time over traditional approaches.

Keywords: Information assurance, chip-multiprocessors, embedded systems, policy, scratchpad memory, security, virtualization (ID#: 15-6258)

URL: http://doi.acm.org/10.1145/2533316
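
The core E-RAID idea above, masking voltage-scaling-induced bit errors by treating distributed scratchpads as RAID-style replicas, can be illustrated with a toy model. The sketch below is only an analogy to RAID-1-style triplication with majority voting, not the paper's design; all names are hypothetical and Python dicts stand in for on-chip memory banks.

    # Toy model of the E-RAID redundancy idea: treat three distributed
    # scratchpad banks as RAID-1-style replicas and mask voltage-scaling-
    # induced bit errors with a bitwise 2-of-3 majority vote on reads.

    def write_replicated(banks, addr, word):
        """Mirror the same word into all three scratchpad banks."""
        for bank in banks:
            bank[addr] = word

    def read_majority(banks, addr):
        """A bit is 1 in the result iff it is 1 in at least two replicas."""
        a, b, c = (bank[addr] for bank in banks)
        return (a & b) | (a & c) | (b & c)

    banks = [{}, {}, {}]
    write_replicated(banks, 0x10, 0b1010_1100)
    banks[1][0x10] ^= 0b0100_0000          # inject a single-bit soft error
    assert read_majority(banks, 0x10) == 0b1010_1100  # error is masked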



Haibo Zeng, Marco Di Natale, Qi Zhu; “Minimizing Stack and Communication Memory Usage in Real-Time Embedded Applications,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13, Issue 5s, November 2014, Article No. 149. doi:10.1145/2632160

Abstract: In the development of real-time embedded applications, especially those on systems-on-chip, an efficient use of RAM is as important as effective scheduling of computation resources. The protection of communication and state variables accessed by concurrent tasks must provide real-time schedulability guarantees while using the least amount of memory. Several schemes, including preemption thresholds, have been developed to improve schedulability and save stack space by selectively disabling preemption. However, the design synthesis problem is still open. In this article, we target the assignment of the scheduling parameters to minimize memory usage for systems of practical interest, including designs compliant with automotive standards. We propose algorithms either proven optimal or shown to improve on randomized optimization methods like simulated annealing.

Keywords: Preemption threshold scheduling, data synchronization mechanism, memory usage, stack requirement (ID#: 15-6259)

URL: http://doi.acm.org/10.1145/2632160 
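
The stack-saving effect of preemption thresholds that this article optimizes can be seen in a small model: task B can preempt task A only if B's priority exceeds A's threshold, so tasks that cannot preempt one another may share stack space, and the system's stack bound is the costliest feasible preemption chain. The sketch below illustrates that bound under assumed task parameters; it is not the authors' synthesis algorithm.

    # Sketch of the stack-sharing bound under preemption thresholds,
    # with made-up task parameters (higher number = higher priority).

    from functools import lru_cache

    tasks = [  # (priority, preemption_threshold, stack_bytes)
        (1, 3, 512),
        (2, 2, 256),
        (3, 4, 384),
        (4, 4, 128),
    ]

    @lru_cache(maxsize=None)
    def worst_stack(i):
        """Stack of task i plus the worst chain of tasks able to preempt it."""
        _prio, thresh, stack = tasks[i]
        nested = [worst_stack(j) for j, (p, _, _) in enumerate(tasks) if p > thresh]
        return stack + max(nested, default=0)

    # Raising thresholds disables preemptions and shrinks this bound.
    print(max(worst_stack(i) for i in range(len(tasks))))  # -> 640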



Peng Li, Debin Gao, Michael K. Reiter; “StopWatch: A Cloud Architecture for Timing Channel Mitigation,” ACM Transactions on Information and System Security (TISSEC), Volume 17, Issue 2, November 2014, Article No. 8. doi:10.1145/2670940

Abstract: This article presents StopWatch, a system that defends against timing-based side-channel attacks that arise from coresidency of victims and attackers in infrastructure-as-a-service clouds. StopWatch triplicates each cloud-resident guest virtual machine (VM) and places replicas so that the three replicas of a guest VM are coresident with nonoverlapping sets of (replicas of) other VMs. StopWatch uses the timing of I/O events at a VM’s replicas collectively to determine the timings observed by each one or by an external observer, so that observable timing behaviors are similarly likely in the absence of any individual coresident VM. We detail the design and implementation of StopWatch in Xen, evaluate the factors that influence its performance, demonstrate its advantages relative to alternative defenses against timing side channels with commodity hardware, and address the problem of placing VM replicas in a cloud under the constraints of StopWatch so as to still enable adequate cloud utilization.

Keywords: Timing channels, clouds, replication, side channels, virtualization (ID#: 15-6260)

URL: http://doi.acm.org/10.1145/2670940 
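
A plausible toy model of the replica-based timing defense: an attacker coresident with one replica can perturb only that replica's observations, and a representative statistic over the three replicas suppresses the perturbation. The use of the median here is an assumption for illustration, not a detail taken from the article.

    # Illustrative model (an assumption, not code from the article): deliver
    # each I/O event to a guest at a collective statistic of the timestamps
    # its three replicas observed.

    import statistics

    def observed_time(replica_timestamps):
        """Collapse the three replicas' event times into one observable time."""
        assert len(replica_timestamps) == 3
        return statistics.median(replica_timestamps)

    # Replica 2 shares hardware with an attacker inducing contention delay;
    # the delivered timing barely moves, starving the timing channel.
    print(observed_time([10.02, 10.03, 11.90]))  # -> 10.03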



Wei Hu, Dejun Mu, Jason Oberg, Baolei Mao, Mohit Tiwari, Timothy Sherwood, Ryan Kastner; “Gate-Level Information Flow Tracking for Security Lattices,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 20, Issue 1, November 2014, Article No. 2. doi:10.1145/2676548

Abstract: High-assurance systems found in safety-critical infrastructures are facing steadily increasing cyber threats. These critical systems require rigorous guarantees in information flow security to prevent confidential information from leaking to an unclassified domain and the root of trust from being violated by an untrusted party. To enforce bit-tight information flow control, gate-level information flow tracking (GLIFT) has recently been proposed to precisely measure and manage all digital information flows in the underlying hardware, including implicit flows through hardware-specific timing channels. However, existing work in this realm is either restricted to two-level security labels or essentially targets two-input primitive gates and several simple multilevel security lattices. This article provides a general way to expand the GLIFT method for multilevel security. Specifically, it formalizes tracking logic for an arbitrary Boolean gate under finite security lattices, presents a precise tracking logic generation method for eliminating false positives in GLIFT logic created in a constructive manner, and illustrates application scenarios of GLIFT for enforcing multilevel information flow security. Experimental results show various trade-offs in precision and performance of GLIFT logic created using different methods. They also reveal the area and performance overheads that should be expected when expanding GLIFT to multilevel security.

Keywords: High-assurance system, formal method, gate-level information flow tracking, hardware security, multilevel security, security lattice (ID#: 15-6261)

URL:  http://doi.acm.org/10.1145/2676548 
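
For a two-level lattice, the GLIFT shadow logic for a 2-input AND gate is a standard example from the GLIFT literature and shows the precision the article generalizes to arbitrary gates and finite lattices: the output is labeled tainted only when a tainted input can actually influence the output value, which is what removes the false positives of naive label-join tracking.

    # Standard two-level GLIFT shadow logic for a 2-input AND gate, the
    # base case that the article generalizes to arbitrary gates and
    # finite security lattices.

    def and_glift(a, a_t, b, b_t):
        y = a & b
        # Tainted a matters only when b = 1; tainted b only when a = 1.
        y_t = (a_t & b) | (b_t & a) | (a_t & b_t)
        return y, y_t

    # An untainted b = 0 forces y = 0 regardless of a, so y is untainted:
    assert and_glift(a=1, a_t=1, b=0, b_t=0) == (0, 0)
    # With b = 1 the tainted input a propagates to the output:
    assert and_glift(a=1, a_t=1, b=1, b_t=0) == (1, 1)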



Huang-Ming Huang, Christopher Gill, Chenyang Lu; “Implementation and Evaluation of Mixed-Criticality Scheduling Approaches for Sporadic Tasks,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Real-Time and Embedded Technology and Applications, Domain-Specific Multicore Computing, Cross-Layer Dependable Embedded Systems, and Application of Concurrency to System Design (ACSD), Volume 13, Issue 4s, July 2014, Article No. 126. doi:10.1145/2584612

Abstract: Traditional fixed-priority scheduling analysis for periodic and sporadic task sets is based on the assumption that all tasks are equally critical to the correct operation of the system. Therefore, every task has to be schedulable under the chosen scheduling policy, and estimates of tasks’ worst-case execution times must be conservative in case a task runs longer than is usual. To address the significant underutilization of a system’s resources under normal operating conditions that can arise from these assumptions, several mixed-criticality scheduling approaches have been proposed. However, to date, there have been few quantitative comparisons of system schedulability or runtime overhead for the different approaches.  In this article, we present a side-by-side implementation and evaluation of the known mixed-criticality scheduling approaches, for periodic and sporadic mixed-criticality tasks on uniprocessor systems, under a mixed-criticality scheduling model that is common to all these approaches. To make a fair evaluation of mixed-criticality scheduling, we also address previously open issues and propose modifications to improve particular approaches. Our empirical evaluations demonstrate that user-space implementations of mechanisms to enforce different mixed-criticality scheduling approaches can be achieved atop Linux without kernel modification, with reasonably low (but in some cases nontrivial) overhead for mixed-criticality real-time task sets.

Keywords: Real-time systems, mixed-criticality scheduling (ID#: 15-6262)

URL:  http://doi.acm.org/10.1145/2584612 
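
Most of the evaluated approaches build on a Vestal-style model in which each criticality level has its own execution-time budget; a budget overrun by a high-criticality task triggers a mode switch that sheds low-criticality work. The sketch below illustrates only that shared mechanism, with made-up task parameters, not any one approach from the article.

    # Sketch of a Vestal-style mode switch (hypothetical tasks and budgets):
    # a high-criticality task overrunning its optimistic LO-mode budget
    # moves the system to HI mode, where pessimistic budgets apply and
    # LO-criticality tasks are suspended.

    tasks = {
        "flight_ctrl": {"crit": "HI", "budget_ms": {"LO": 2.0, "HI": 5.0}},
        "logging":     {"crit": "LO", "budget_ms": {"LO": 1.0}},
    }
    mode = "LO"

    def on_budget_overrun(name):
        """Invoked by the enforcement mechanism when a task exhausts its budget."""
        global mode
        if mode == "LO" and tasks[name]["crit"] == "HI":
            mode = "HI"
            return [t for t, d in tasks.items() if d["crit"] == "LO"]  # suspend
        return []

    print(on_budget_overrun("flight_ctrl"))  # -> ['logging']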



Dionisio De Niz, Lutz Wrage, Anthony Rowe, Ragunathan (Raj) Rajkumar; “Utility-Based Resource Overbooking for Cyber-Physical Systems,” ACM Transactions on Embedded Computing Systems (TECS) - Special Issue on Risk and Trust in Embedded Critical Systems, Special Issue on Real-Time, Embedded and Cyber-Physical Systems, Special Issue on Virtual Prototyping of Parallel and Embedded Systems (ViPES), Volume 13, Issue 5s, November 2014, Article No. 162. doi:10.1145/2660497

Abstract: Traditional hard real-time scheduling algorithms require the use of worst-case execution times to guarantee that deadlines will be met. Unfortunately, many algorithms whose parameters are derived from sensing the physical world, such as visual recognition tasks, suffer large variations in execution time, leading to pessimistic overall utilization. In this article, we present ZS-QRAM, a scheduling approach that enables the use of flexible execution times and application-derived task utilities in order to maximize total system utility. In particular, we provide a detailed description of the algorithm, formal proofs of its temporal protection, and a detailed evaluation. Our evaluation uses the Utility Degradation Resilience (UDR) metric, showing that ZS-QRAM is able to obtain 4× as much UDR as ZSRM, a previous overbooking approach, and almost 2× as much UDR as Rate-Monotonic with Period Transformation (RM/TP). We then evaluate a Linux kernel module implementation of our scheduler on an Unmanned Air Vehicle (UAV) platform. We show that, by using our approach, we are able to keep the tasks that render the most utility by degrading lower-utility ones, even in the presence of highly dynamic execution times.

Keywords: Real-time scheduling, mixed-criticality systems, quality of service, unmanned aerial vehicles, utility functions (ID#: 15-6263)

URL:  http://doi.acm.org/10.1145/2660497 
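
The overbooking intuition can be sketched as a utility-density packing problem: when measured demand exceeds the booked CPU capacity, shed the tasks that contribute the least utility per unit of CPU. This is an illustrative simplification with hypothetical numbers, not the ZS-QRAM algorithm or its zero-slack analysis.

    # Illustrative simplification of utility-based overbooking: keep tasks
    # in order of utility per unit of CPU and shed the rest, so the
    # highest-utility work survives overload. All numbers are made up.

    tasks = [  # (name, utility, measured_cpu_share)
        ("video_track", 10.0, 0.45),
        ("telemetry",    6.0, 0.25),
        ("thumbnailer",  1.5, 0.35),
    ]

    def degrade_to_fit(tasks, capacity=1.0):
        kept, load = [], 0.0
        for name, util, cpu in sorted(tasks, key=lambda t: t[1] / t[2], reverse=True):
            if load + cpu <= capacity:
                kept.append(name)
                load += cpu
        return kept

    print(degrade_to_fit(tasks))  # -> ['telemetry', 'video_track']; thumbnailer shed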



William Enck, Peter Gilbert, Seungyeop Han, Vasant Tendulkar, Byung-Gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, Anmol N. Sheth; “TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones,” ACM Transactions on Computer Systems (TOCS), Volume 32, Issue 2, June 2014,  Article No. 5. doi:10.1145/2619091

Abstract: Today’s smartphone operating systems frequently fail to provide users with visibility into how third-party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system-wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android’s virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU-bound microbenchmark and imposes negligible overhead on interactive third-party applications. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, in our 2010 study we found that 20 applications potentially misused users’ private information; a similar fraction of the applications tested in our 2012 study did so as well. Monitoring the flow of privacy-sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.

Keywords: Information-flow tracking, mobile apps, privacy monitoring, smartphones (ID#: 15-6264)

URL: http://doi.acm.org/10.1145/2619091 
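
TaintDroid performs its tracking inside the Dalvik interpreter at the variable, method, message, and file levels; the Python model below only illustrates the core propagation rule, with hypothetical tag values: a computed result carries the union of its operands' taint tags, and a monitored sink can inspect tags before data leaves the device.

    # Toy model of taint-tag propagation in the spirit of TaintDroid.

    TAINT_LOCATION = 0x1
    TAINT_IMEI     = 0x2

    class Tainted:
        def __init__(self, value, tag=0):
            self.value, self.tag = value, tag
        def __add__(self, other):
            # The result's tag is the bitwise OR of the operands' tags.
            return Tainted(self.value + other.value, self.tag | other.tag)

    lat = Tainted(37.7749, TAINT_LOCATION)   # from a location source
    blended = lat + Tainted(0.0001)          # arithmetic keeps the tag

    def network_send(data):                  # a monitored sink
        if isinstance(data, Tainted) and data.tag:
            print(f"ALERT: tagged data (tags={data.tag:#x}) leaving the device")

    network_send(blended)                    # -> ALERT ... tags=0x1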



Reinhard Schneider, Dip Goswami, Samarjit Chakraborty, Unmesh Bordoloi, Petru Eles, Zebo Peng; “Quantifying Notions of Extensibility in FlexRay Schedule Synthesis,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 19 Issue 4, August 2014, Article No. 32. doi:10.1145/2647954

Abstract: FlexRay has become a well-established in-vehicle communication bus at most original equipment manufacturers (OEMs) such as BMW, Audi, and GM. Given the increasing cost of verification and the high degree of crosslinking between components in automotive architectures, an incremental design process is commonly followed. In order to incorporate FlexRay-based designs in such a process, the resulting schedules must be extensible, that is: (i) when messages are added in later iterations, they must preserve deadline guarantees of already scheduled messages, and (ii) they must accommodate as many new messages as possible without changes to existing schedules. Not only has extensible scheduling received little attention so far, but the traditional metrics used to quantify extensibility cannot be trivially adapted to FlexRay schedules, because they do not exploit specific properties of the FlexRay protocol. In this article, we introduce, for the first time, notions of extensibility for FlexRay that capture all of the protocol-specific properties. In particular, we focus on the dynamic segment of FlexRay, and we present a number of metrics to quantify extensible schedules. Based on the introduced metrics, we propose strategies to synthesize extensible schedules and compare the results of different scheduling algorithms. We demonstrate the applicability of the results with industrial-size case studies and also show that the proposed metrics can be represented visually, thereby allowing easy interpretation.

Keywords: FlexRay, automotive, extensibility, schedule synthesis (ID#: 15-6265)

URL: http://doi.acm.org/10.1145/2647954 



Bin Ren, Todd Mytkowicz, Gagan Agrawal; “A Portable Optimization Engine for Accelerating Irregular Data-Traversal Applications on SIMD Architectures,” ACM Transactions on Architecture and Code Optimization (TACO), Volume 11, Issue 2, June 2014, Article No. 16. doi:10.1145/2632215

Abstract: Fine-grained data parallelism is increasingly common in the form of longer vectors integrated with mainstream processors (SSE, AVX) and various GPU architectures. This article develops support for exploiting such data parallelism for a class of nonnumeric, nongraphic applications, which perform computations while traversing many independent, irregular data structures. We address this problem by developing several novel techniques. First, for code generation, we develop an intermediate language for specifying such traversals, followed by a runtime scheduler that maps traversals to various SIMD units. Second, we observe that good data locality is crucial to sustained performance from SIMD architectures, whereas many applications that operate on irregular data structures (e.g., trees and graphs) have poor data locality. To address this challenge, we develop a set of data layout optimizations that improve spatial locality for applications that traverse many irregular data structures. Unlike prior data layout optimizations, our approach incorporates a notion of both interthread and intrathread spatial reuse into data layout. Finally, we enable performance portability (i.e., the ability to automatically optimize applications for different architectures) by accurately modeling the impact of inter- and intrathread locality on program performance. As a consequence, our model can predict which data layout optimization to use on a wide variety of SIMD architectures.  To demonstrate the efficacy of our approach and optimizations, we first show how they enable up to a 12X speedup on one SIMD architecture for a set of real-world applications. To demonstrate that our approach enables performance portability, we show how our model predicts the optimal layout for applications across a diverse set of three real-world SIMD architectures, which offers as much as 45% speedup over a suboptimal solution.

Keywords: Irregular data structure, SIMD, fine-grained parallelism (ID#: 15-6266)

URL:  http://doi.acm.org/10.1145/2632215 



Ghassan O. Karame, Aurélien Francillon, Victor Budilivschi, Srdjan Čapkun, Vedran Čapkun; “Microcomputations as Micropayments in Web-based Services,” ACM Transactions on Internet Technology (TOIT), Volume 13, Issue 3, May 2014, Article No. 8. doi:10.1145/2611526

Abstract: In this article, we propose a new micropayment model for nonspecialized commodity web services based on microcomputations. In our model, a user who wishes to access online content (offered by a website) does not need to register or pay to access the website; instead, the user agrees to run microcomputations on behalf of the service provider in exchange for access to the content. These microcomputations can, for example, support ongoing computing projects that have clear social benefits (e.g., projects relating to medical research) or can contribute toward commercial computing projects. We analyze the security and privacy of our proposal and show that it preserves the privacy of users. We argue that this micropayment model is economically and technically viable and that it can be integrated into existing distributed computing frameworks (e.g., the BOINC platform). In this respect, we implement a prototype of a system based on our model and deploy it on Amazon Mechanical Turk to evaluate its performance and usability given a large number of users. Our results show that our proposed scheme does not affect the browsing experience of users and is likely to be used by a nontrivial proportion of users. Finally, we empirically show that our scheme incurs bandwidth and CPU consumption comparable to the resource usage incurred by online advertisements featured on popular websites.

Keywords: Distributed computing, monetization, microcomputations, micropayments, privacy (ID#: 15-6267)

URL:  http://doi.acm.org/10.1145/2611526 



David Basin, Cas Cremers; “Know Your Enemy: Compromising Adversaries in Protocol Analysis,” ACM Transactions on Information and System Security (TISSEC), Volume 17, Issue 2, November 2014,  Article No. 7. doi:10.1145/2658996

Abstract: We present a symbolic framework, based on a modular operational semantics, for formalizing different notions of compromise relevant for the design and analysis of cryptographic protocols. The framework’s rules can be combined to specify different adversary capabilities, capturing different practically-relevant notions of key and state compromise. The resulting adversary models generalize the models currently used in different domains, such as security models for authenticated key exchange. We extend an existing security-protocol analysis tool, Scyther, with our adversary models. This extension systematically supports notions such as weak perfect forward secrecy, key compromise impersonation, and adversaries capable of state-reveal queries. Furthermore, we introduce the concept of a protocol-security hierarchy, which classifies the relative strength of protocols against different adversaries.  In case studies, we use Scyther to analyse protocols and automatically construct protocol-security hierarchies in the context of our adversary models. Our analysis confirms known results and uncovers new attacks. Additionally, our hierarchies refine and correct relationships between protocols previously reported in the cryptographic literature.

Keywords: Security protocols, adversary models, automated analysis, threat models (ID#: 15-6268)

URL:  http://doi.acm.org/10.1145/2658996 



Songqing Chen, Lei Liu, Xinyuan Wang, Xinwen Zhang, Zhao Zhang; “A Host-Based Approach for Unknown Fast-Spreading Worm Detection and Containment,” ACM Transactions on Autonomous and Adaptive Systems (TAAS) - Special Section on Best Papers from SEAMS 2012, Volume 8, Issue 4, January 2014, Article No. 21. doi:10.1145/2555615

Abstract: The fast-spreading worm, which immediately propagates itself after a successful infection, is becoming one of the most serious threats to today’s networked information systems. In this article, we present WormTerminator, a host-based solution for fast Internet worm detection and containment that uses virtual machine techniques and exploits the defining characteristic of fast worms. In WormTerminator, a virtual machine cloning the host OS runs in parallel to the host OS; thus, the virtual machine has the same set of vulnerabilities as the host. Any outgoing traffic from the host is diverted through the virtual machine. If the outgoing traffic from the host is fast worm propagation, the virtual machine should become infected and will exhibit the worm propagation pattern very quickly, because a fast-spreading worm starts to propagate as soon as it successfully infects a host. To prove the concept, we have implemented a prototype of WormTerminator and examined its effectiveness against the real Internet worm Linux/Slapper. Our empirical results confirm that WormTerminator is able to completely contain worm propagation in real time without blocking any non-worm traffic. The major performance cost of WormTerminator is a one-time delay at the start of each outgoing normal connection for worm detection. To reduce this overhead, caching is utilized, through which WormTerminator delays no more than 6% of normal outgoing traffic on average.

Keywords: WormTerminator, polymorphic worms, virtual machine, worm containment, zero-day worms (ID#: 15-6269)

URL:  http://doi.acm.org/10.1145/2555615 



Min Y. Mun, Donnie H. Kim, Katie Shilton, Deborah Estrin, Mark Hansen, Ramesh Govindan; “PDVLoc: A Personal Data Vault for Controlled Location Data Sharing,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 4, June 2014, Article No. 58. doi:10.1145/2523820

Abstract: Location-Based Mobile Service (LBMS) is one of the most popular smartphone services. LBMS enables people to connect with each other more easily and to analyze aspects of their lives. However, sharing location data can compromise people’s privacy. We present PDVLoc, a controlled location data-sharing framework based on selectively sharing data through a Personal Data Vault (PDV). A PDV is a privacy architecture in which individuals retain ownership of their data. Data are routinely filtered before being shared with content-service providers, and users or data custodian services can participate in making controlled data-sharing decisions. PDVLoc gives users flexible and granular access control over their location data. We have implemented a prototype of PDVLoc and evaluated it using real location-sharing social networking applications, Google Latitude and Foursquare. Our user study of 19 participants over 20 days shows that most users find PDVLoc useful for managing and controlling their location data, and are willing to continue using it.

Keywords: Location-based mobile service, personal data vault, privacy, selective sharing, system (ID#: 15-6270)

URL: http://doi.acm.org/10.1145/2523820 



Wolf-Bastian Pöttner, Hans Seidel, James Brown, Utz Roedig, Lars Wolf; “Constructing Schedules for Time-Critical Data Delivery in Wireless Sensor Networks,” ACM Transactions on Sensor Networks (TOSN), Volume 10, Issue 3, April 2014, Article No. 44. doi:10.1145/2494528

Abstract: Wireless sensor networks for industrial process monitoring and control require highly reliable and timely data delivery. To meet performance requirements, specialised schedule-based medium access control (MAC) protocols are employed. In order to construct an efficient system, it is necessary to find a schedule that can support the given application requirements in terms of data delivery latency and reliability. Furthermore, additional requirements such as transmission power may have to be taken into account when constructing the schedule. In this article, we show how such a schedule can be constructed. We describe methods and tools to collect the data necessary as input for schedule calculation. Moreover, due to the high complexity of schedule calculation, we also introduce a heuristic. We evaluate the proposed methods in a real-world process automation and control application deployed in an oil refinery, and we further present a long-term experiment in an office environment. Additionally, we discuss a framework for schedule life-cycle management.

Keywords: Reliability, WSAN, WSN, schedule, scheduling, timeliness, wireless sensor and actor network, wireless sensor network (ID#: 15-6271)

URL: http://doi.acm.org/10.1145/2494528 



Michael Sirivianos, Kyungbaek Kim, Jian Wei Gan, Xiaowei Yang; “Leveraging Social Feedback to Verify Online Identity Claims,” ACM Transactions on the Web (TWEB), Volume 8, Issue 2, March 2014, Article No. 9. doi:10.1145/2543711

Abstract: Anonymity is one of the main virtues of the Internet, as it protects privacy and enables users to express opinions more freely. However, anonymity hinders the assessment of the veracity of assertions that online users make about their identity attributes, such as age or profession. We propose FaceTrust, a system that uses online social networks to provide lightweight identity credentials while preserving a user’s anonymity. FaceTrust employs a “game with a purpose” design to elicit the opinions of the friends of a user about the user’s self-claimed identity attributes, and uses attack-resistant trust inference to assign veracity scores to identity attribute assertions. FaceTrust provides credentials, which a user can use to corroborate his assertions. We evaluate our proposal using a live Facebook deployment and simulations on a crawled social graph. The results show that our veracity scores are strongly correlated with the ground truth, even when dishonest users make up a large fraction of the social network and employ the Sybil attack.

Keywords: (ID#: 15-6272)

URL: http://doi.acm.org/10.1145/2543711  

 

Mingqiang Li, Patrick P. C. Lee; “STAIR Codes: A General Family of Erasure Codes for Tolerating Device and Sector Failures,” ACM Transactions on Storage (TOS) - Special Issue on Usenix Fast 2014, Volume 10, Issue 4, October 2014, Article No. 14. doi:10.1145/2658991

Abstract: Practical storage systems often adopt erasure codes to tolerate device failures and sector failures, both of which are prevalent in the field. However, traditional erasure codes employ device-level redundancy to protect against sector failures, and hence incur significant space overhead. Recent sector-disk (SD) codes are available only for limited configurations. By making a relaxed but practical assumption, we construct a general family of erasure codes called STAIR codes, which efficiently and provably tolerate both device and sector failures without any restriction on the size of a storage array and the numbers of tolerable device failures and sector failures. We propose the upstairs encoding and downstairs encoding methods, which provide complementary performance advantages for different configurations. We conduct extensive experiments on STAIR codes in terms of space saving, encoding/decoding speed, and update cost. We demonstrate that STAIR codes not only improve space efficiency over traditional erasure codes, but also provide better computational efficiency than SD codes based on our special code construction. Finally, we present analytical models that characterize the reliability of STAIR codes, and show that the support of a wider range of configurations by STAIR codes is critical for tolerating sector failure bursts discovered in the field.

Keywords: Erasure codes, device failures, reliability analysis, sector failures (ID#: 15-6273)

URL:  http://doi.acm.org/10.1145/2658991 
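
As background for the device-versus-sector trade-off the article addresses, the simplest erasure code, single XOR parity, already reconstructs one failed device from the survivors; STAIR codes generalize such constructions to tolerate configurable mixes of whole-device and per-sector failures. The sketch below shows only the baseline parity mechanism, not the paper's upstairs/downstairs encodings.

    # Baseline mechanism only: single XOR parity across device strips.

    def parity(chunks):
        """Bytewise XOR of equal-length byte strings."""
        out = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                out[i] ^= byte
        return bytes(out)

    d0, d1, d2 = b"data0001", b"data0002", b"data0003"
    p = parity([d0, d1, d2])          # parity strip on a fourth device
    # Device 1 fails; XOR of the survivors and the parity rebuilds it.
    assert parity([d0, d2, p]) == d1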



Chia-Heng Tu, Hui-Hsin Hsu, Jen-Hao Chen, Chun-Han Chen, Shih-Hao Hung; “Performance and Power Profiling for Emulated Android Systems,” ACM Transactions on Design Automation of Electronic Systems (TODAES), Volume 19, Issue 2, March 2014, Article No. 10. doi:10.1145/2566660

Abstract: Simulation is a common approach for assisting system design and optimization. For system-wide optimization, energy and computational resources are often the two most critical issues. Monitoring the energy state of each hardware component and measuring the time spent in each state are needed for accurate energy and performance prediction. For software optimization, it is important to profile the energy and the time consumed by each software construct in a realistic operating environment with a proper workload. However, conventional approaches to simulation often fail to produce satisfying data. First, building a cycle-accurate simulation environment for a complex system, such as an Android smartphone, is difficult and can take a long time. Second, a slow simulation can significantly alter the behavior of multithreaded, I/O-intensive applications and can affect the accuracy of profiles. Third, existing software-based profilers generally do not work on simulators, which makes performance analysis of complicated software difficult, for example, Java applications executed by the Dalvik VM in an Android system. To address these problems, we proposed and prototyped a framework called virtual performance analyzer (VPA). VPA takes advantage of an existing emulator or virtual machine monitor to reduce the complexity of building a simulator. VPA allows the user to selectively and incrementally integrate timing models and power models into the emulator, with our carefully designed performance/power monitors, tracing facility, and profiling tools used to evaluate and analyze the emulated system. The emulated system can run at different speeds to help verify whether the profile data are affected by the emulation speed. Finally, VPA supports existing software-based profilers and enables non-intrusive tracing/profiling by minimizing the probe effect. Our experimental results show that the VPA framework allows users to quickly establish a performance/power evaluation environment and gather useful information to support system design and software optimization for Android smartphones.

Keywords: Android system emulation, full system emulation, performance profiling, performance tracing, power model, timing estimation (ID#: 15-6274)

URL:  http://doi.acm.org/10.1145/2566660 



Amit Chakrabarti, Graham Cormode, Andrew Mcgregor, Justin Thaler; “Annotations in Data Streams,” ACM Transactions on Algorithms (TALG), Volume 11, Issue 1, October 2014, Article No. 7. doi:10.1145/2636924

Abstract: The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” that can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a nontrivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend—including recent studies of multipass streaming, read/write streams, and randomly ordered streams—of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.

Keywords: Interactive proof, annotations, data streams (ID#: 15-6275)

URL:  http://doi.acm.org/10.1145/2636924 
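
One classic ingredient of such annotation schemes is a multiset fingerprint: the verifier keeps O(1) state while streaming, the helper later replays the items in sorted order as the annotation, and equal fingerprints certify with high probability that the replay is the same multiset, from which order statistics like the median follow. The sketch below is a simplified textbook construction under these assumptions, not a specific protocol from the article.

    # Verifier-side fingerprint check for an annotated data stream.

    import random

    P = (1 << 61) - 1              # prime modulus for the fingerprint field
    r = random.randrange(1, P)     # verifier's secret evaluation point

    def fingerprint(items):
        """Evaluate prod(r - x) mod P; equal multisets give equal values."""
        fp = 1
        for x in items:
            fp = fp * (r - x) % P
        return fp

    stream = [5, 3, 9, 3, 7]
    fp_seen = fingerprint(stream)      # O(1) space while the stream passes

    annotation = sorted(stream)        # helper's claimed sorted replay
    # (A real verifier also checks sortedness pairwise as it streams by.)
    if fingerprint(annotation) == fp_seen:
        print("verified median:", annotation[len(annotation) // 2])  # -> 5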



Andrew F. Tappenden, James Miller; “Automated Cookie Collection Testing,” ACM Transactions on Software Engineering and Methodology (TOSEM), Volume 23, Issue 1, February 2014, Article No. 3. doi:10.1145/2559936

Abstract: Cookies are used by over 80% of Web applications utilizing dynamic Web application frameworks. Applications deploying cookies must be rigorously verified to ensure that the application is robust and secure. Given the intense time-to-market pressures faced by modern Web applications, testing strategies that are low cost and automatable are required. Automated Cookie Collection Testing (CCT) is presented, and is empirically demonstrated to be a low-cost and highly effective automated testing solution for modern Web applications. Automatable test oracles and evaluation metrics specifically designed for Web applications are presented, and are shown to be significant diagnostic tests. Automated CCT is shown to detect faults within five real-world Web applications. A case study of over 580 test results for a single application is presented demonstrating that automated CCT is an effective testing strategy. Moreover, CCT is found to detect security bugs in a Web application released into full production.

Keywords: Cookies, Web application testing, adaptive random testing, automated testing, software testing, test generation, test strategies (ID#: 15-6276)

URL:  http://doi.acm.org/10.1145/2559936 
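
One kind of automatable oracle in the spirit of CCT is a check of the security attributes on each cookie an application sets. The snippet below is a hedged sketch of such an oracle using only Python's standard library; the header value is made up, and the paper's actual oracles and evaluation metrics are considerably richer.

    # Flag cookies set without the attributes a robust deployment needs.

    from http.cookies import SimpleCookie

    def audit_set_cookie(header_value):
        """Return (cookie, problem) findings for one Set-Cookie header."""
        jar, findings = SimpleCookie(), []
        jar.load(header_value)
        for name, morsel in jar.items():
            if not morsel["secure"]:
                findings.append((name, "missing Secure: sent over plain HTTP"))
            if not morsel["httponly"]:
                findings.append((name, "missing HttpOnly: readable by scripts"))
        return findings

    print(audit_set_cookie("session=abc123; Path=/"))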

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.