Compressive Sampling 2015
Compressive sampling (or compressive sensing) is an important theory in signal processing. It allows efficient acquisition and reconstruction of a signal and may also be the basis for user identification. For the Science of Security, the topic has implications for resilience, cyber-physical systems, privacy, and composability. The works cited here were published or presented in 2015.
Liwen Xu, Xiaohong Hao, Nicholas D. Lane, Xin Liu, Thomas Moscibroda; “Cost-Aware Compressive Sensing for Networked Sensing Systems,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 130–141. doi:10.1145/2737095.2737105
Abstract: Compressive Sensing is a technique that can help reduce the sampling rate of sensing tasks. In mobile crowdsensing applications or wireless sensor networks, the resource burden of collecting samples is often a major concern. Therefore, compressive sensing is a promising approach in such scenarios. An implicit assumption underlying compressive sensing — both in theory and its applications — is that every sample has the same cost: its goal is to simply reduce the number of samples while achieving a good recovery accuracy. In many networked sensing systems, however, the cost of obtaining a specific sample may depend highly on the location, time, condition of the device, and many other factors of the sample. In this paper, we study compressive sensing in situations where different samples have different costs, and we seek to find a good trade-off between minimizing the total sample cost and the resulting recovery accuracy. We design Cost-Aware Compressive Sensing (CACS), which incorporates the cost-diversity of samples into the compressive sensing framework, and we apply CACS in networked sensing systems. Technically, we use regularized column sum (RCS) as a predictive metric for recovery accuracy, and use this metric to design an optimization algorithm for finding a least cost randomized sampling scheme with provable recovery bounds. We also show how CACS can be applied in a distributed context. Using traffic monitoring and air pollution as concrete application examples, we evaluate CACS based on large-scale real-life traces. Our results show that CACS achieves significant cost savings, outperforming natural baselines (greedy and random sampling) by up to 4x.
Keywords: compressive sensing, crowdsensing, resource-efficiency (ID#: 15-7000)
URL: http://doi.acm.org/10.1145/2737095.2737105
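To make the cost-aware idea concrete, the following toy sketch (plain NumPy; the costs, sizes, and simple inverse-cost bias are invented for illustration and are not the paper's RCS-based optimization) biases a randomized sampling scheme toward cheap samples while keeping every coordinate observable:

```python
# Toy sketch of cost-biased compressive sampling (not the paper's RCS
# optimization): sample indices with probability skewed toward low-cost
# locations, keeping a nonzero chance for every coordinate.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 200                      # signal length, sample budget
cost = rng.uniform(1.0, 10.0, n)      # hypothetical per-sample costs

# Inclusion probabilities favor cheap samples but never vanish,
# which keeps every coordinate observable.
p = 1.0 / cost
p = np.clip(m * p / p.sum(), 0.0, 1.0)
selected = rng.random(n) < p          # randomized sampling scheme

print(f"samples taken: {selected.sum()}, "
      f"total cost: {cost[selected].sum():.1f}, "
      f"uniform-cost baseline: {cost.mean() * selected.sum():.1f}")
```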
Liwen Xu, Xiaohong Hao, Nicholas D. Lane, Xin Liu, Thomas Moscibroda; “More with Less: Lowering User Burden in Mobile Crowdsourcing Through Compressive Sensing,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 659–670. doi:10.1145/2750858.2807523
Abstract: Mobile crowdsourcing is a powerful tool for collecting data of various types. The primary bottleneck in such systems is the high burden placed on the user who must manually collect sensor data or respond in-situ to simple queries (e.g., experience sampling studies). In this work, we present Compressive CrowdSensing (CCS), a framework that enables compressive sensing techniques to be applied to mobile crowdsourcing scenarios. CCS enables each user to provide significantly reduced amounts of manually collected data, while still maintaining acceptable levels of overall accuracy for the target crowd-based system. Naïve applications of compressive sensing do not work well for common types of crowdsourcing data (e.g., user survey responses) because the necessary correlations that are exploited by a sparsifying basis are hidden and non-trivial to identify. CCS comprises a series of novel techniques that enable such challenges to be overcome. We evaluate CCS with four representative large-scale datasets and find that it is able to outperform standard uses of compressive sensing, as well as conventional approaches to lowering the quantity of user data needed by crowd systems.
Keywords: compressive sensing, mobile crowdsensing (ID#: 15-7001)
URL: http://doi.acm.org/10.1145/2750858.2807523
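As background for how compressive recovery tolerates heavily reduced data, the following minimal sketch recovers a sparse vector from far fewer random measurements than unknowns via basis pursuit (L1 minimization cast as a linear program). It is generic compressive sensing, not the CCS framework; all sizes are arbitrary:

```python
# Minimal compressive-sensing recovery (basis pursuit): min ||x||_1 s.t. Ax = y,
# solved as an LP via the split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                  # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x                             # compressed observations

c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])             # A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x))
```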
Xingsong Hou, Chunli Wang, Xueming Qian; “Compressive Imaging Based on Coefficients Cutting in DLWT Domain and Approximate Message Passing,” ICIMCS ’15, Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, August 2015, Article No. 71. doi:10.1145/2808492.2808563
Abstract: In compressive imaging (CI), accurate coefficient recovery is possible when the transform coefficients of an image are sufficiently sparse. However, conventional transforms, such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT), cannot acquire a sufficiently sparse representation. A large number of small coefficients interferes with the recovery of the large ones. This paper aims to improve recovery performance by accurately reconstructing as many of the large coefficients as possible. Thus, a compressive imaging scheme based on coefficients cutting in the directional lifting wavelet transform (DLWT) domain and a low-complexity iterative Bayesian algorithm is proposed. The proposed scheme is an improved version of our previous work. In this paper, the relation between the best-fitted cutoff ratio and the sampling rate is further analyzed. Due to the efficient Bayesian recovery algorithm, the proposed method offers better recovery performance with much lower complexity than our previous work. Experimental results show that our method outperforms many state-of-the-art compressive imaging recovery methods.
Keywords: DLWT, approximate message passing, coefficients cutting, compressive imaging, tail folding (ID#: 15-7002)
URL: http://doi.acm.org/10.1145/2808492.2808563
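A hedged illustration of the coefficients-cutting idea, using a DCT as a stand-in for the paper's DLWT (the cutoff ratio and test signal are invented): keep only the largest transform coefficients so that the many small ones cannot interfere with recovering the large ones.

```python
# "Coefficients cutting" in miniature: retain the largest transform
# coefficients up to a cutoff ratio, zero the rest, and invert.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
sig = np.cumsum(rng.normal(size=512))     # smooth-ish test signal
coef = dct(sig, norm="ortho")

cutoff_ratio = 0.1                        # fraction of coefficients kept
k = int(cutoff_ratio * coef.size)
mask = np.zeros_like(coef, dtype=bool)
mask[np.argsort(np.abs(coef))[-k:]] = True

approx = idct(np.where(mask, coef, 0.0), norm="ortho")
print("relative error with 10% of coefficients:",
      np.linalg.norm(approx - sig) / np.linalg.norm(sig))
```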
Yun Tan, Xingsong Hou, Xueming Qian; “A Fine-Grain Nonlocal Weighted Average Method for Image CS Reconstruction,” ICIMCS ’15, Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, August 2015, Article No. 70. doi:10.1145/2808492.2808562
Abstract: Compressive sensing can acquire a signal at a sampling rate far below the Nyquist rate if the signal is sparse in some domain. However, reconstructing a signal from its observations is challenging because it is an ill-posed problem in practice. As classical CS reconstruction methods, total variation (TV) and iteratively reweighted TV (ReTV) exploit only local image information, which results in some loss of image structure and causes blocking artifacts. In this paper, we observe that natural images contain abundant nonlocal repetitive structures, so we propose a novel fine-grain nonlocal weighted average method for natural-image CS reconstruction that makes full use of these repetitive structures to recover the image from its observations. In addition, an efficient iterative bound optimization algorithm, which converges stably in our experiments, is applied to the above CS reconstruction. Experimental results on different natural images demonstrate that our proposed algorithm outperforms existing classical natural-image CS reconstruction algorithms in Peak Signal-to-Noise Ratio (PSNR) and subjective evaluation.
Keywords: CS reconstruction, compressive sensing, iterative algorithm, nonlocal repetitive (ID#: 15-7003)
URL: http://doi.acm.org/10.1145/2808492.2808562
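The nonlocal prior the paper exploits can be demonstrated in a few lines. The toy function below (not the paper's CS solver; the patch size and bandwidth h are arbitrary) re-estimates each pixel as a similarity-weighted average over pixels whose surrounding patches look alike:

```python
# Toy nonlocal weighted average: each pixel becomes a similarity-weighted
# mean of all pixels, weighted by how alike their surrounding patches are.
import numpy as np

def nonlocal_average(img, patch=3, h=0.1):
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    # Collect every patch as a row vector.
    patches = np.stack([
        padded[i:i + patch, j:j + patch].ravel()
        for i in range(H) for j in range(W)
    ])
    out = np.empty(H * W)
    for idx in range(H * W):
        d2 = np.sum((patches - patches[idx]) ** 2, axis=1)
        w = np.exp(-d2 / (h ** 2))        # patch-similarity weights
        out[idx] = w @ img.ravel() / w.sum()
    return out.reshape(H, W)

noisy = np.clip(np.eye(16) + 0.1 * np.random.default_rng(3).normal(size=(16, 16)), 0, 1)
print(nonlocal_average(noisy).round(2))
```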
Michael B. Cohen, Richard Peng; “Lp Row Sampling by Lewis Weights,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, June 2015, Pages 183–192. doi:10.1145/2746539.2746567
Abstract: We give a simple algorithm to efficiently sample the rows of a matrix while preserving the p-norms of its product with vectors. Given an n × d matrix A, we find with high probability and in input sparsity time an A′ consisting of about d log d rescaled rows of A such that ‖Ax‖₁ is close to ‖A′x‖₁ for all vectors x. We also show similar results for all Lp that give nearly optimal sample bounds in input sparsity time. Our results are based on sampling by “Lewis weights”, which can be viewed as statistical leverage scores of a reweighted matrix. We also give an elementary proof of the guarantees of this sampling process for L1.
Keywords: lp regression, lewis weights, matrix concentration bounds, row sampling (ID#: 15-7004)
URL: http://doi.acm.org/10.1145/2746539.2746567
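For intuition, the L2 analogue of Lewis-weight sampling is ordinary leverage-score row sampling, sketched below (the sizes and the 4·d·log d budget are illustrative; Lewis weights generalize these scores to other p-norms, as the abstract notes):

```python
# Leverage-score row sampling: keep ~O(d log d) rescaled rows of A so that
# ||A'x||_2 stays close to ||Ax||_2 for all x.
import numpy as np

rng = np.random.default_rng(4)
n, d = 5000, 20
A = rng.normal(size=(n, d)) * rng.uniform(0.1, 10.0, size=(n, 1))

# Leverage score of row i: a_i^T (A^T A)^{-1} a_i, computed via QR.
Q, _ = np.linalg.qr(A)
tau = np.sum(Q ** 2, axis=1)              # leverage scores, sum to d

m = int(4 * d * np.log(d))                # sample budget
p = np.minimum(1.0, m * tau / d)          # inclusion probabilities
keep = rng.random(n) < p
A_sub = A[keep] / np.sqrt(p[keep, None])  # rescale kept rows for unbiasedness

x = rng.normal(size=d)
print(np.linalg.norm(A @ x), "≈", np.linalg.norm(A_sub @ x))
```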
Ying Yan, Jiaxing Zhang, Bojun Huang, Xuzhan Sun, Jiaqi Mu, Zheng Zhang, Thomas Moscibroda; “Distributed Outlier Detection using Compressive Sensing,” SIGMOD ’15, Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, May 2015, Pages 3–16. doi:10.1145/2723372.2747641
Abstract: Computing outliers and related statistical aggregation functions from large-scale big data sources is a critical operation in many cloud computing scenarios, e.g., service quality assurance, fraud detection, or novelty discovery. Such problems commonly have to be solved in a distributed environment where each node only has a local slice of the entirety of the data. To process a query on the global data, each node must transmit its local slice of data or an aggregated subset thereof to a global aggregator node, which can then compute the desired statistical aggregation function. In this context, reducing the total communication cost is often critical to the overall efficiency. In this paper, we show both theoretically and empirically that these communication costs can be significantly reduced for common distributed computing problems if we take advantage of the fact that production-level big data usually exhibits a form of sparse structure. Specifically, we devise a new aggregation paradigm for outlier detection and related queries. The paradigm leverages compressive sensing for data sketching in combination with outlier detection techniques. We further propose an algorithm that works even for non-sparse data that concentrates around an unknown value. In both cases, we show that the communication cost is reduced to the logarithm of the global data size. We incorporate our approach into Hadoop and evaluate it on real web-scale production data (distributed click-data logs). Our approach reduces data-shuffling I/O by up to 99%, and end-to-end job duration by up to 40% on many actual production queries.
Keywords: big sparse data, compressive sensing, distributed aggregation, outlier detection (ID#: 15-7005)
URL: http://doi.acm.org/10.1145/2723372.2747641
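The communication saving rests on the linearity of compressive sketches: each node compresses its local slice with a shared random matrix, and the aggregator sums the small sketches. The sketch below shows only that aggregation step, with invented sizes, not the paper's outlier-recovery pipeline:

```python
# Linear sketches for distributed aggregation: each node sends an
# m-dimensional sketch (m << n); linearity means the sum of sketches
# equals the sketch of the summed data.
import numpy as np

rng = np.random.default_rng(5)
n, m, nodes = 10000, 400, 8
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Global data = sum of local slices; a few coordinates are outliers.
local = [np.zeros(n) for _ in range(nodes)]
for node in local:
    node[rng.choice(n, 3, replace=False)] = rng.normal(10, 1, 3)

sketches = [Phi @ x for x in local]        # each node transmits m values
global_sketch = np.sum(sketches, axis=0)   # Phi @ sum(x) = sum(Phi @ x)
assert np.allclose(global_sketch, Phi @ np.sum(local, axis=0))
print("communication per node:", m, "values instead of", n)
```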
Marco Trevisi, Ricardo Carmona-Galán, Ángel Rodríguez-Vázquez; “Hardware-Oriented Feature Extraction Based on Compressive Sensing,” ICDSC ’15, Proceedings of the 9th International Conference on Distributed Smart Cameras, September 2015, Pages 211–212. doi:10.1145/2789116.2802657
Abstract: Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but with relevant values only at some specific points, so this set can be considered sparse. Under this premise, many algorithms have been devised to extract features from compressive samples, but none of them maps easily to hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point for the design of a smart imager capable of compressive feature extraction. To prove the concept, we devised a simulation using Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.
Keywords: compressive feature extraction, compressive sensing (ID#: 15-7006)
URL: http://doi.acm.org/10.1145/2789116.2802657
Damian Pfammatter, Domenico Giustiniano, Vincent Lenders; “A Software-Defined Sensor Architecture for Large-Scale Wideband Spectrum Monitoring,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 71–82. doi:10.1145/2737095.2737119
Abstract: Today’s spectrum measurements are mainly performed by governmental agencies which drive around using expensive specialized hardware. The idea of crowdsourcing spectrum monitoring has recently gained attention as an alternative way to capture the usage of wide portions of the wireless spectrum at larger geographical and time scales. To support this vision, we develop a flexible software-defined sensor architecture that enables distributed data collection in real-time over the Internet. Our sensor design builds upon low-cost commercial off-the-shelf (COTS) hardware components with a total cost per sensor device below $100. The low-cost nature of our sensor platform makes the sensing approach particularly suitable for large-scale deployments but imposes technical challenges regarding performance and quality. To circumvent the limits of our solution, we have implemented and evaluated different sensing strategies and noise reduction techniques. Our results suggest that our sensor architecture may be useful in application areas such as dynamic spectrum access in cognitive radios, detecting regions with elevated electro-smog, or simply to gain an understanding of the spectrum usage for advanced signal intelligence such as anomaly detection or policy enforcement.
Keywords: crowdsourcing, distributed, spectrum monitoring, wideband (ID#: 15-7007)
URL: http://doi.acm.org/10.1145/2737095.2737119
Anusha Withana, Roshan Peiris, Nipuna Samarasekara, Suranga Nanayakkara; “zSense: Enabling Shallow Depth Gesture Recognition for Greater Input Expressivity on Smart Wearables,” CHI ’15, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 3661–3670. doi:10.1145/2702123.2702371
Abstract: In this paper we present zSense, which provides greater input expressivity for spatially limited devices such as smart wearables through a shallow-depth gesture recognition system using non-focused infrared sensors. To achieve this, we introduce a novel Non-linear Spatial Sampling (NSS) technique that significantly cuts down the number of required infrared sensors and emitters. These can be arranged in many different configurations; for example, the number of sensor-emitter units can be as small as one sensor and two emitters. We implemented different configurations of zSense on smart wearables such as smartwatches, smartglasses and smart rings. These configurations naturally fit into the flat or curved surfaces of such devices, providing a wide scope of zSense-enabled application scenarios. Our evaluations reported over 94.8% gesture recognition accuracy across all configurations.
Keywords: compressive sensing, interacting with small devices, shallow depth gesture recognition, smart wearables (ID#: 15-7008)
URL: http://doi.acm.org/10.1145/2702123.2702371
Aosen Wang, Zhanpeng Jin, Chen Song, Wenyao Xu; “Adaptive Compressed Sensing Architecture in Wireless Brain-Computer Interface,” DAC ’15, Proceedings of the 52nd Annual Design Automation Conference, June 2015, Article No. 173. doi:10.1145/2744769.2744792
Abstract: Wireless sensor nodes advance the brain-computer interface (BCI) from laboratory setup to practical applications. Compressed sensing (CS) theory provides a sub-Nyquist sampling paradigm to improve the energy efficiency of electroencephalography (EEG) signal acquisition. However, EEG is a structure-variational signal with time-varying sparsity, which decreases the efficiency of compressed sensing. In this paper, we present a new adaptive CS architecture to tackle the challenge of EEG signal acquisition. Specifically, we design a dynamic knob framework to respond to EEG signal dynamics, and then formulate its design optimization into a dynamic programming problem. We verify our proposed adaptive CS architecture on a publicly available data set. Experimental results show that our adaptive CS can improve signal reconstruction quality by more than 70% under different energy budgets while only consuming 187.88 nJ/event. This indicates that the adaptive CS architecture can effectively adapt to the EEG signal dynamics in the BCI.
Keywords: (not provided) (ID#: 15-7009)
URL: http://doi.acm.org/10.1145/2744769.2744792
Mohammad-Mahdi Moazzami, Dennis E. Phillips, Rui Tan, Guoliang Xing; “ORBIT: A Smartphone-Based Platform for Data-Intensive Embedded Sensing Applications,” IPSN ’15, Proceedings of the 14th International Conference on Information Processing in Sensor Networks, April 2015, Pages 83–94. doi:10.1145/2737095.2737098
Abstract: Owing to their rich processing, multi-modal sensing, and versatile networking capabilities, smartphones are increasingly used to build data-intensive embedded sensing applications. However, various challenges must be systematically addressed before smartphones can be used as a generic embedded sensing platform, including high power consumption and the lack of real-time functionality and user-friendly embedded programming support. This paper presents ORBIT, a smartphone-based platform for data-intensive embedded sensing applications. ORBIT features a tiered architecture, in which a smartphone can interface to an energy-efficient peripheral board and/or a cloud service. ORBIT as a platform addresses the shortcomings of current smartphones while utilizing their strengths. ORBIT provides profile-based task partitioning, allowing it to intelligently dispatch the processing tasks among the tiers to minimize the system power consumption. ORBIT also provides a data processing library that includes two mechanisms, namely an adaptive delay/quality trade-off and data partitioning via multi-threading, to optimize resource usage. Moreover, ORBIT supplies an annotation-based programming API for developers that significantly simplifies application development and provides programming flexibility. Extensive microbenchmark evaluation and two case studies, seismic sensing and multi-camera 3D reconstruction, validate the generic design of ORBIT.
Keywords: data processing, data-intensive applications, embedded sensing, smartphone (ID#: 15-7010)
URL: http://doi.acm.org/10.1145/2737095.2737098
Pradeep Sen, Matthias Zwicker, Fabrice Rousselle, Sung-Eui Yoon, Nima Khademi Kalantari; “Denoising Your Monte Carlo Renders: Recent Advances in Image-Space Adaptive Sampling and Reconstruction,” SIGGRAPH ’15, ACM SIGGRAPH 2015 Courses, July 2015, Article No. 11. doi:10.1145/2776880.2792740
Abstract: Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. The current shift in the computer graphics industry towards Monte Carlo rendering has sparked renewed interest in effective, practical noise reduction techniques that are applicable to a wide range of rendering effects, and easily integrated into existing production pipelines. In this course, we survey recent advances in image-space adaptive sampling and reconstruction (filtering) algorithms for noise reduction, which have proven effective at reducing the computational cost of Monte Carlo techniques in practice. These techniques reduce variance by either controlling the sampling density over the image plane, and/or aggregating samples in a reconstruction step, possibly over large image regions in a way that preserves scene detail. To do this, they apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. In some cases, they use the statistical analysis to set the parameters for filtering. In others, they estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. In this course, we aim to provide an overview for practitioners to assess these approaches, and for researchers to identify open research challenges and opportunities for future work. In an introduction, we will first situate image-space adaptive sampling and reconstruction in the larger context of variance reduction for Monte Carlo rendering, and discuss its conceptual advantages and potential drawbacks. In the next part, we will provide details on five specific state-of-the-art algorithms. We will provide visual and quantitative comparisons, and discuss advantages and disadvantages in terms of image quality, computational requirements, and ease of implementation and integration with existing renderers. We will conclude the course by pointing out how some of these techniques are proving useful in real-world applications. Finally, we will discuss directions for potential further improvements. This course brings together speakers who have made numerous contributions to image-space adaptive rendering, which they presented at recent ACM SIGGRAPH, ACM SIGGRAPH Asia, and other conferences. The speakers bridge the gap between academia and industry, and they will be able to provide insights relevant to researchers, developers, and practitioners alike.
Keywords: (not provided) (ID#: 15-7011)
URL: http://doi.acm.org/10.1145/2776880.2792740
Elena Ikonomovska, Sina Jafarpour, Ali Dasdan; “Real-Time Bid Prediction using Thompson Sampling-Based Expert Selection,” KDD ’15, Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2015, Pages 1869–1878. doi:10.1145/2783258.2788586
Abstract: We study online meta-learners for real-time bid prediction that predict by selecting a single best predictor among several subordinate prediction algorithms, here called “experts”. These predictors belong to the family of context-dependent past performance estimators that make a prediction only when the instance to be predicted falls within their areas of expertise. Within the advertising ecosystem, it is very common for the contextual information to be incomplete, hence, it is natural for some of the experts to abstain from making predictions on some of the instances. Experts’ areas of expertise can overlap, which makes their predictions less suitable for merging; as such, they lend themselves better to the problem of best expert selection. In addition, their performance varies over time, which gives the expert selection problem a non-stochastic, adversarial flavor. In this paper we propose to use probability sampling (via Thompson Sampling) as a meta-learning algorithm that samples from the pool of experts for the purpose of bid prediction. We show performance results from the comparison of our approach to multiple state-of-the-art algorithms using exploration scavenging on a log file of over 300 million ad impressions, as well as comparison to a baseline rule-based model using production traffic from a leading DSP platform.
Keywords: bayesian online learning, multi-armed bandits, online advertising, online algorithms, randomized probability matching (ID#: 15-7012)
URL: http://doi.acm.org/10.1145/2783258.2788586
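A minimal version of Thompson Sampling over a pool of experts, with Beta posteriors on each expert's success rate and random abstentions (all accuracies and rates below are invented; this sketches the selection idea, not the paper's bid predictor):

```python
# Thompson Sampling for best-expert selection: draw from each available
# expert's Beta posterior, pick the argmax, observe a reward, update.
import numpy as np

rng = np.random.default_rng(6)
true_acc = [0.60, 0.75, 0.55]            # hidden accuracy of each expert
alpha = np.ones(3)                        # Beta(1,1) priors
beta = np.ones(3)

for t in range(2000):
    available = [i for i in range(3) if rng.random() > 0.2]  # some abstain
    if not available:
        continue
    theta = rng.beta(alpha[available], beta[available])      # posterior draws
    pick = available[int(np.argmax(theta))]
    reward = rng.random() < true_acc[pick]                    # expert correct?
    alpha[pick] += reward
    beta[pick] += 1 - reward

print("posterior means:", (alpha / (alpha + beta)).round(3))
```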
Leye Wang, Daqing Zhang, Animesh Pathak, Chao Chen, Haoyi Xiong, Dingqi Yang, Yasha Wang; “CCS-TA: Quality-Guaranteed Online Task Allocation in Compressive Crowdsensing,” UbiComp ’15, Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2015, Pages 683–694. doi:10.1145/2750858.2807513
Abstract: Data quality and budget are two primary concerns in urban-scale mobile crowdsensing applications. In this paper, we leverage the spatial and temporal correlation among the data sensed in different sub-areas to significantly reduce the required number of sensing tasks allocated (corresponding to budget), yet ensuring the data quality. Specifically, we propose a novel framework called CCS-TA, combining the state-of-the-art compressive sensing, Bayesian inference, and active learning techniques, to dynamically select a minimum number of sub-areas for sensing task allocation in each sensing cycle, while deducing the missing data of unallocated sub-areas under a probabilistic data accuracy guarantee. Evaluations on real-life temperature and air quality monitoring datasets show the effectiveness of CCS-TA. In the case of temperature monitoring, CCS-TA allocates 18.0-26.5% fewer tasks than baseline approaches, allocating tasks to only 15.5% of the sub-areas on average while keeping overall sensing error below 0.25°C in 95% of the cycles.
Keywords: crowdsensing, data quality, task allocation (ID#: 15-7013)
URL: http://doi.acm.org/10.1145/2750858.2807513
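The inference step that lets CCS-TA skip sub-areas can be approximated by low-rank matrix completion. The toy below uses truncated-SVD imputation on an invented rank-2 field; CCS-TA layers Bayesian accuracy guarantees and active task allocation on top of this kind of deduction:

```python
# Low-rank imputation of a partially sensed city grid: fill missing cells,
# project to rank 2, re-impose observations, repeat.
import numpy as np

rng = np.random.default_rng(7)
true = np.outer(np.sin(np.linspace(0, 3, 20)), np.cos(np.linspace(0, 2, 30))) + 1
mask = rng.random(true.shape) < 0.3        # only 30% of cells are sensed

est = np.where(mask, true, true[mask].mean())
for _ in range(50):
    U, s, Vt = np.linalg.svd(est, full_matrices=False)
    low_rank = (U[:, :2] * s[:2]) @ Vt[:2]
    est = np.where(mask, true, low_rank)   # keep sensed cells fixed

print("mean abs error on unsensed cells:",
      np.abs((est - true)[~mask]).mean().round(4))
```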
Xiufeng Xie, Eugene Chai, Xinyu Zhang, Karthikeyan Sundaresan, Amir Khojastepour, Sampath Rangarajan; “Hekaton: Efficient and Practical Large-Scale MIMO,” MobiCom ’15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 304–316. doi:10.1145/2789168.2790116
Abstract: Large-scale multiuser MIMO (MU-MIMO) systems have the potential for multi-fold scaling of network capacity. The research community has recognized this theoretical potential and developed architectures [1,2] with large numbers of RF chains. Unfortunately, building the hardware with a large number of RF chains is challenging in practice. CSI data transport and computational overhead of MU-MIMO beamforming can also become prohibitive under large network scale. Furthermore, it is difficult to physically append extra RF chains to existing communication equipment to support such large-scale MU-MIMO architectures. In this paper, we present Hekaton, a novel large-scale MU-MIMO framework that combines legacy MU-MIMO beamforming with phased-array antennas. The core of Hekaton is a two-level beamforming architecture. First, the phased-array antennas steer spatial beams toward each downlink user to reduce channel correlation and suppress the cross-talk interference in the RF domain (for beamforming gain); then we adopt legacy digital beamforming to eliminate the interference between downlink data streams (for spatial multiplexing gain). In this way, Hekaton realizes a good fraction of the potential large-scale MU-MIMO gains even with the limited number of RF chains on existing communication equipment. We evaluate the performance of Hekaton through an over-the-air testbed built on the WARPv3 platform and trace-driven emulation. In the evaluations, Hekaton can improve single-cell throughput by up to 2.5X over conventional MU-MIMO with a single antenna per RF chain, while using the same transmit power.
Keywords: large-scale mu-mimo, phased-array antenna, scalability, two-level beamforming (ID#: 15-7014)
URL: http://doi.acm.org/10.1145/2789168.2790116
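The digital half of such a two-level design is classical zero-forcing beamforming, sketched below with an invented 3-user, 6-antenna channel (a textbook construction, not Hekaton's implementation): the precoder W = H^H (H H^H)^{-1} makes H·W diagonal, eliminating inter-stream interference.

```python
# Zero-forcing digital beamforming: choose W so that H @ W is diagonal,
# i.e. each user's stream arrives free of the other streams.
import numpy as np

rng = np.random.default_rng(8)
users, antennas = 3, 6
H = (rng.normal(size=(users, antennas)) +
     1j * rng.normal(size=(users, antennas))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # pseudoinverse precoder
W /= np.linalg.norm(W)                           # normalize transmit power
print(np.abs(H @ W).round(3))                    # ~diagonal: no cross-talk
```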
Linghe Kong, Xue Liu; “mZig: Enabling Multi-Packet Reception in ZigBee,” MobiCom ’15, Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, September 2015, Pages 552–565. doi:10.1145/2789168.2790104
Abstract: This paper presents mZig, a novel physical layer design that enables a receiver to simultaneously decode multiple packets from different transmitters in ZigBee. As a low-power and low-cost wireless protocol, ZigBee has been widely used in sensor networks, cyber-physical systems, and smart buildings. Since ZigBee-based networks usually adopt tree or cluster topologies, convergecast scenarios, in which multiple transmitters need to send packets to one receiver, are common. For example, in a smart home, all appliances report data to one control plane via ZigBee. However, concurrent transmissions in convergecast lead to a severe collision problem. Conventional ZigBee avoids collisions using backoff time, which introduces additional time overhead. Advanced methods resolve collisions instead of avoiding them; the state-of-the-art ZigZag resolves one m-packet collision using m retransmissions. We propose mZig to resolve one m-packet collision from the collision itself, so the theoretical throughput is improved m-fold. Leveraging the unique features in ZigBee’s physical layer, including its chip rate, half-sine pulse shaping and O-QPSK modulation, mZig subtly decomposes multiple packets from one collision in baseband signal processing. The practical factors of noise, multipath, and frequency offset are taken into account in mZig design. We implement mZig on USRPs and establish a seven-node testbed. Experiment results demonstrate that mZig can receive up to four concurrent packets in our testbed. The throughput of mZig is 4.5x that of conventional ZigBee and 3.2x that of ZigZag in convergecast with four or more transmitters.
Keywords: collision, convergecast, multi-packet reception, zigbee (ID#: 15-7015)
URL: http://doi.acm.org/10.1145/2789168.2790104
Pan Hu, Pengyu Zhang, Deepak Ganesan; “Laissez-Faire: Fully Asymmetric Backscatter Communication,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 255–267. doi:10.1145/2785956.2787477
Abstract: Backscatter provides dual-benefits of energy harvesting and low-power communication, making it attractive to a broad class of wireless sensors. But the design of a protocol that enables extremely power-efficient radios for harvesting-based sensors as well as high-rate data transfer for data-rich sensors presents a conundrum. In this paper, we present a new fully asymmetric backscatter communication protocol where nodes blindly transmit data as and when they sense. This model enables fully flexible node designs, from extraordinarily power-efficient backscatter radios that consume barely a few micro-watts to high-throughput radios that can stream at hundreds of Kbps while consuming a paltry tens of micro-watts. The challenge, however, lies in decoding concurrent streams at the reader, which we achieve using a novel combination of time-domain separation of interleaved signal edges, and phase-domain separation of colliding transmissions. We provide an implementation of our protocol, LF-Backscatter, and show that it can achieve an order of magnitude or more improvement in throughput, latency and power over state-of-the-art alternatives.
Keywords: architecture, backscatter, wireless (ID#: 15-7016)
URL: http://doi.acm.org/10.1145/2785956.2787477
Raef Bassily, Adam Smith; “Local, Private, Efficient Protocols for Succinct Histograms,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, June 2015, Pages 127–135. doi:10.1145/2746539.2746632
Abstract: We give efficient protocols and matching accuracy lower bounds for frequency estimation in the local model for differential privacy. In this model, individual users randomize their data themselves, sending differentially private reports to an untrusted server that aggregates them. We study protocols that produce a succinct histogram representation of the data. A succinct histogram is a list of the most frequent items in the data (often called “heavy hitters”) along with estimates of their frequencies; the frequency of all other items is implicitly estimated as 0. If there are n users whose items come from a universe of size d, our protocols run in time polynomial in n and log(d). With high probability, they estimate the frequency of every item up to error O(√(log(d)/(ε²n))). Moreover, we show that this much error is necessary, regardless of computational efficiency, and even for the simple setting where only one item appears with significant frequency in the data set. Previous protocols (Mishra and Sandler, 2006; Hsu, Khanna and Roth, 2012) for this task either ran in time Ω(d) or had much worse error (about (log(d)/(ε²n))^{1/6}), and the only known lower bound on error was Ω(1/√n). We also adapt a result of McGregor et al (2010) to the local setting. In a model with public coins, we show that each user need only send 1 bit to the server. For all known local protocols (including ours), the transformation preserves computational efficiency.
Keywords: algorithms, complexity, differential privacy, heavy hitters, local protocols, succinct histograms (ID#: 15-7017)
URL: http://doi.acm.org/10.1145/2746539.2746632
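The local model the paper works in can be illustrated with basic randomized response for a single item (the paper's protocols extend the idea to succinct histograms over huge universes with one-bit reports; the frequency and ε below are invented):

```python
# Local-model frequency estimation in miniature: each user randomizes a
# one-bit report (randomized response) and the server debiases the aggregate.
import numpy as np

rng = np.random.default_rng(9)
n, eps = 100000, 1.0
truth = rng.random(n) < 0.3               # does user hold the item?

p = np.exp(eps) / (1 + np.exp(eps))       # probability of reporting truthfully
reports = np.where(rng.random(n) < p, truth, ~truth)

# Debias: E[mean] = p*f + (1-p)*(1-f)  =>  f = (mean - (1-p)) / (2p - 1)
f_hat = (reports.mean() - (1 - p)) / (2 * p - 1)
print(f"true frequency 0.300, estimate {f_hat:.3f}")
```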
Guy Bresler; “Efficiently Learning Ising Models on Arbitrary Graphs,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, June 2015, Pages 771–782. doi:10.1145/2746539.2746631
Abstract: We consider the problem of reconstructing the graph underlying an Ising model from i.i.d. samples. Over the last fifteen years this problem has been of significant interest in the statistics, machine learning, and statistical physics communities, and much of the effort has been directed towards finding algorithms with low computational cost for various restricted classes of models. Nevertheless, for learning Ising models on general graphs with p nodes of degree at most d, it is not known whether or not it is possible to improve upon the p^d computation needed to exhaustively search over all possible neighborhoods for each node. In this paper we show that a simple greedy procedure allows one to learn the structure of an Ising model on an arbitrary bounded-degree graph in time on the order of p². We make no assumptions on the parameters except what is necessary for identifiability of the model, and in particular the results hold at low temperatures as well as for highly non-uniform models. The proof rests on a new structural property of Ising models: we show that for any node there exists at least one neighbor with which it has high mutual information.
Keywords: ising model, markov random field, structure learning (ID#: 15-7018)
URL: http://doi.acm.org/10.1145/2746539.2746631
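The structural property driving the result, that some true neighbor carries high mutual information, is easy to observe empirically. The toy below builds a three-node chain with invented edge strengths (not the paper's greedy algorithm) and compares the empirical mutual information of a neighbor versus a non-neighbor:

```python
# Empirical mutual information in a toy Ising-like chain x1 - x2 - x3:
# the true neighbor x2 shows higher MI with x1 than the non-neighbor x3.
import numpy as np

rng = np.random.default_rng(10)
N = 20000
x1 = rng.choice([-1, 1], N)
x2 = np.where(rng.random(N) < 0.85, x1, -x1)   # strong edge x1-x2
x3 = np.where(rng.random(N) < 0.85, x2, -x2)   # strong edge x2-x3

def mutual_info(a, b):
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            p_ab = np.mean((a == va) & (b == vb))
            mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

print("I(x1;x2) =", round(mutual_info(x1, x2), 4),
      " I(x1;x3) =", round(mutual_info(x1, x3), 4))
```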
David Medernach, Jeannie Fitzgerald, R. Muhammad Atif Azad, Conor Ryan; “Wave: Incremental Erosion of Residual Error,” GECCO Companion ’15, Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, July 2015, Pages 1285–1292. doi:10.1145/2739482.2768503
Abstract: Typically, Genetic Programming (GP) attempts to solve a problem by evolving solutions over a large, and usually pre-determined, number of generations. However, overwhelming evidence shows that not only does the rate of performance improvement drop considerably after a few early generations, but that further improvement also comes at a considerable cost (bloat). Furthermore, each simulation (a GP run) is typically independent yet homogeneous: it does not re-use solutions from a previous run and retains the same experimental settings. Some recent research on symbolic regression divides work across GP runs where the subsequent runs optimise the residuals from a previous run and thus produce a cumulative solution; however, all such subsequent runs (or iterations) still remain homogeneous, using a pre-set, large number of generations (50 or more). This work introduces Wave, a divide and conquer approach to GP whereby a sequence of short but sharp, and dependent yet potentially heterogeneous GP runs provides a collective solution; the sequence is akin to a wave such that each member of the sequence (that is, a short GP run) is a period of the wave. Heterogeneity across periods results from varying settings of system parameters, such as population size or number of generations, and also by alternating use of the popular GP technique known as linear scaling. The results show that Wave trains faster and better than both standard GP and multiple linear regression, prolongs discovery through constant restarts (which, as a side effect, also reduces bloat), innovatively leverages a learning aid, that is, linear scaling at various stages instead of using it constantly regardless of whether it helps, and performs reasonably even with a tiny population size (25), which bodes well for real-time or data-intensive training.
Keywords: fitness landscapes, genetic algorithms, genetic programming, machine learning, performance measures, semantic GP (ID#: 15-7019)
URL: http://doi.acm.org/10.1145/2739482.2768503
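The residual-erosion scheme reduces, in caricature, to sequential fitting of residuals. In the sketch below, tiny polynomial fits of varying degree stand in for Wave's short, heterogeneous GP runs (the degrees and data are invented):

```python
# Residual erosion in miniature: each short stage models only the residual
# left by the sum of its predecessors; the stages form a cumulative solution.
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.normal(size=x.size)

residual, stages = y.copy(), []
for degree in (1, 3, 5):                   # heterogeneous short "periods"
    coeffs = np.polyfit(x, residual, degree)
    stages.append(coeffs)
    residual = residual - np.polyval(coeffs, x)   # erode what's left
    print(f"degree {degree}: residual RMS = {np.sqrt((residual**2).mean()):.4f}")

y_hat = sum(np.polyval(c, x) for c in stages)     # cumulative model
print("cumulative-model RMS error:", np.sqrt(((y_hat - y) ** 2).mean()).round(4))
```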
Paul Tune, Matthew Roughan; “Spatiotemporal Traffic Matrix Synthesis,” SIGCOMM ’15, Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, August 2015, Pages 579–592. doi:10.1145/2829988.2787471
Abstract: Traffic matrices describe the volume of traffic between a set of sources and destinations within a network. These matrices are used in a variety of tasks in network planning and traffic engineering, such as the design of network topologies. Traffic matrices naturally possess complex spatiotemporal characteristics, but their proprietary nature means that little data about them is available publicly, and this situation is unlikely to change. Our goal is to develop techniques to synthesize traffic matrices for researchers who wish to test new network applications or protocols. The paucity of available data, and the desire to build a general framework for synthesis that could work in various settings requires a new look at this problem. We show how the principle of maximum entropy can be used to generate a wide variety of traffic matrices constrained by the needs of a particular task, and the available information, but otherwise avoiding hidden assumptions about the data. We demonstrate how the framework encompasses existing models and measurements, and we apply it in a simple case study to illustrate the value.
Keywords: maximum entropy, network design, spatiotemporal modeling, traffic engineering, traffic matrix synthesis (ID#: 15-7020)
URL: http://doi.acm.org/10.1145/2829988.2787471
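In the simplest instance of the framework, when only row and column totals are known, the maximum-entropy traffic matrix is the classical gravity model, which iterative proportional fitting (IPF) reaches from a uniform start (the totals below are invented):

```python
# Max-entropy traffic matrix under marginal constraints via iterative
# proportional fitting: alternately rescale rows and columns to the totals.
import numpy as np

row_totals = np.array([30.0, 50.0, 20.0])   # traffic leaving each source
col_totals = np.array([40.0, 35.0, 25.0])   # traffic entering each sink

T = np.ones((3, 3))                          # max-ent start: all flows equal
for _ in range(100):
    T *= (row_totals / T.sum(axis=1))[:, None]
    T *= col_totals / T.sum(axis=0)

print(T.round(2))                            # converges to the gravity model
print((np.outer(row_totals, col_totals) / row_totals.sum()).round(2))
```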
Jean Bourgain, Sjoerd Dirksen, Jelani Nelson; “Toward a Unified Theory of Sparse Dimensionality Reduction in Euclidean Space,” STOC ’15, Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, June 2015, Pages 499–508. doi:10.1145/2746539.2746541
Abstract: Let Φ ∈ R^{m×n} be a sparse Johnson-Lindenstrauss transform [52] with column sparsity s. For a subset T of the unit sphere and ε ∈ (0, 1/2), we study settings for m, s that ensure E_Φ sup_{x∈T} | ‖Φx‖₂² − 1 | < ε, i.e. so that Φ preserves the norm of every x ∈ T simultaneously and multiplicatively up to 1+ε. We introduce a new complexity parameter, which depends on the geometry of T, and show that it suffices to choose s and m such that this parameter is small. Our result is a sparse analog of Gordon’s theorem, which was concerned with a dense Φ having i.i.d. Gaussian entries. We qualitatively unify several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries. Our work also implies new results in using the sparse Johnson-Lindenstrauss transform in randomized linear algebra, compressed sensing, manifold learning, and constrained least squares problems such as the Lasso.
Keywords: compressed sensing, dimensionality reduction, manifold learning, randomized linear algebra, sparsity (ID#: 15-7021)
URL: http://doi.acm.org/10.1145/2746539.2746541
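A sparse Johnson-Lindenstrauss transform of the kind studied here is easy to construct: each column receives s random ±1/√s entries. The sketch below (sizes and s invented; Φ is stored densely for simplicity, though in a sparse representation Φx costs only O(s·nnz(x))) checks norm preservation on one vector:

```python
# Sparse JL construction: s nonzero ±1/√s entries per column, placed in
# random rows; then check that ‖Φx‖₂ ≈ ‖x‖₂ for a test vector.
import numpy as np

rng = np.random.default_rng(12)
n, m, s = 5000, 256, 8

Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, s, replace=False)          # s nonzeros per column
    Phi[rows, j] = rng.choice([-1.0, 1.0], s) / np.sqrt(s)

x = rng.normal(size=n)
x[np.abs(x) < 2.5] = 0                              # a sparse-ish test vector
print("‖x‖₂ =", round(np.linalg.norm(x), 3),
      " ‖Φx‖₂ =", round(np.linalg.norm(Phi @ x), 3))
```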
Xingliang Yuan, Cong Wang, Kui Ren; “Enabling IP Protection for Outsourced Integrated Circuit Design,” ASIACCS ’15, Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 237–247. doi:10.1145/2714576.2714601
Abstract: As today’s integrated circuits (ICs) easily involve millions and even billions of gates, known as very large-scale integration (VLSI), one natural trend is to move the prohibitive in-house design procedure to the low-cost public cloud. However, such a migration also raises a challenging demand for practical and privacy-preserving techniques to safeguard the sensitive IC design data, i.e., the intellectual property (IP). In this paper, we initiate the first study along this direction and present a practical system for privacy-preserving IC timing analysis, an essential and expensive procedure that repeatedly evaluates timing delays on a gate-level circuit. For privacy protection, our system leverages a key observation that many IP blocks are universally reused and shared across different IC designs, and thus only a small portion of critical IP blocks need to be protected. By carefully extracting such critical design data from the whole circuit, our system only outsources the non-critical data to the public cloud. As such “data splitting” does not readily facilitate correct timing analysis, we then develop specialized algorithms to enable the public cloud to take only the non-critical data and return intermediate results. Such results can later be integrated with critical design data by the local server for fast timing analysis. We also propose a heuristic algorithm to considerably reduce the bandwidth cost in the system. Through rigorous security analysis, we show our system is resilient to IC reverse engineering and protects both the critical IP gate-level design and functionality. We evaluate our system over large IC benchmarks with up to a million gates to show its efficiency and effectiveness.
Keywords: integrated circuits, ip protection, secure outsourcing, timing analysis (ID#: 15-7022)
URL: http://doi.acm.org/10.1145/2714576.2714601
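The procedure being outsourced, repeated evaluation of timing delays on a gate-level circuit, is at its core a longest-path computation over a DAG. A toy instance (gate names and delays invented for illustration) in one topological pass:

```python
# Static timing analysis in miniature: arrival time at each gate is its own
# delay plus the latest arrival among its fan-in gates; the output's arrival
# time is the critical-path delay.
from collections import defaultdict

gate_delay = {"in": 0.0, "g1": 1.2, "g2": 0.9, "g3": 1.5, "out": 0.1}
fanin = {"g1": ["in"], "g2": ["in"], "g3": ["g1", "g2"], "out": ["g3"]}

arrival = defaultdict(float)
for gate in ["in", "g1", "g2", "g3", "out"]:        # topological order
    preds = fanin.get(gate, [])
    arrival[gate] = gate_delay[gate] + max((arrival[p] for p in preds), default=0.0)

print("critical-path delay:", arrival["out"])       # 1.2 + 1.5 + 0.1 = 2.8
```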
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.