Coding Theory

Coding theory is an essential piece of information theory and a core element of cryptography. The research cited here addresses signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems in coding. These works were presented or published between January and August of 2014.

  • Vempaty, A.; Han, Y.S.; Varshney, L.R.; Varshney, P.K., "Coding Theory For Reliable Signal Processing," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 200-205, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785331 With increased dependence on technology in daily life, there is a need to ensure its reliable performance. There are many applications where we carry out inference tasks assisted by signal processing systems. A typical system performing an inference task can fail for multiple reasons: the presence of a component with a permanent failure, a malicious component providing corrupt information, or simply an unreliable component that randomly provides faulty data. Therefore, it is important to design systems which perform reliably even in the presence of such unreliable components. Coding theory based techniques provide a possible solution to this problem. In this position paper, we survey some of our recent work on the use of coding theory based techniques for the design of some signal processing applications. As examples, we consider distributed classification and target localization in wireless sensor networks. We also consider the more recent paradigm of crowdsourcing and discuss how coding based techniques can be used to mitigate the effect of unreliable crowd workers in the system.
    Keywords: error correction codes; signal processing; telecommunication network reliability; wireless sensor networks; coding theory; crowdsourcing; distributed inference; malicious component; permanent failure; reliable signal processing; wireless sensor network; Encoding; Maximum likelihood estimation; Reliability theory; Sensors; Wireless sensor networks; Coding theory; Crowdsourcing; Distributed Inference; Reliability; Wireless Sensor Networks (ID#:14-2689)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785331&isnumber=6785290
  • Vempaty, A.; Varshney, L.R.; Varshney, P.K., "Reliable Crowdsourcing for Multi-Class Labeling Using Coding Theory," Selected Topics in Signal Processing, IEEE Journal of, vol.8, no.4, pp. 667-679, Aug. 2014. doi: 10.1109/JSTSP.2014.2316116 Crowdsourcing systems often have crowd workers that perform unreliable work on the task they are assigned. In this paper, we propose the use of error-control codes and decoding algorithms to design crowdsourcing systems for reliable classification despite unreliable crowd workers. Coding theory based techniques also allow us to pose easy-to-answer binary questions to the crowd workers. We consider three different crowdsourcing models: systems with independent crowd workers, systems with peer-dependent reward schemes, and systems where workers have common sources of information. For each of these models, we analyze classification performance with the proposed coding-based scheme. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We also show that pairing among workers and diversification of the questions help in improving system performance. We demonstrate the effectiveness of the proposed coding-based scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use of good codes may improve the performance of the crowdsourcing task over typical majority-voting approaches.
    Keywords: decoding; error correction codes; pattern classification; Amazon Mechanical Turk; classification performance analysis; coding theory based techniques; crowdsourcing microtask platform; decoding algorithms; error-control codes; independent crowd workers; majority-voting approaches; multiclass labeling; peer-dependent reward schemes; reliable crowdsourcing system; Algorithm design and analysis; Decoding; Hamming distance; Noise; Reliability; Sensors; Vectors; Crowdsourcing; error-control codes; multi-class labeling; quality assurance (ID#:14-2690)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784318&isnumber=6856242
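    The mechanism described in this abstract is easy to prototype. The sketch below is a hedged illustration of the general code-matrix idea, not the authors' exact construction: the code matrix, worker count, and error rate are invented for the example. Each worker is assigned one column of a binary code matrix and answers the corresponding binary question, some answers are flipped at random to model unreliable workers, and the fusion step classifies by minimum Hamming distance over the rows.

```python
# Illustrative sketch of classification via error-control codes, in the spirit
# of Vempaty et al.; the code matrix and error model are invented for the example.
import random

# Rows = classes, columns = workers. Row m is the codeword for class m.
# Worker j is asked: "does the item belong to a class whose bit in my column is 1?"
A = [
    [0, 0, 0, 0, 1, 1, 1, 1],  # class 0
    [0, 1, 1, 0, 0, 1, 1, 0],  # class 1
    [1, 0, 1, 0, 1, 0, 1, 0],  # class 2
    [1, 1, 0, 1, 0, 0, 1, 0],  # class 3
]

def crowd_answers(true_class, p_wrong=0.2):
    """Each worker answers their binary question, flipped with prob p_wrong."""
    return [bit ^ (random.random() < p_wrong) for bit in A[true_class]]

def classify(answers):
    """Decode to the class whose codeword is closest in Hamming distance."""
    return min(range(len(A)),
               key=lambda m: sum(a != b for a, b in zip(A[m], answers)))

random.seed(1)
trials, correct = 10_000, 0
for _ in range(trials):
    true_class = random.randrange(len(A))
    if classify(crowd_answers(true_class)) == true_class:
        correct += 1
print(f"accuracy with 20% unreliable answers: {correct / trials:.3f}")
```

    The same decoding loop extends to more classes by adding rows, provided the rows stay well separated in Hamming distance.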
  • Guangfu Wu; Lin Wang; Trieu-Kien Truong, "Use of Matroid Theory To Construct A Class Of Good Binary Linear Codes," Communications, IET, vol.8, no.6, pp. 893-898, April 17 2014. doi: 10.1049/iet-com.2013.0671 It is still an open challenge in coding theory how to design a systematic linear (n, k) code C over GF(2) with maximal minimum distance d. In this study, based on matroid theory (MT), a limited class of good systematic binary linear codes (n, k, d) is constructed, where n = 2^(k-1) + ··· + 2^(k-δ) and d = 2^(k-2) + ··· + 2^(k-δ-1) for k ≥ 4, 1 ≤ δ < k. These codes are well known as special cases of codes constructed by Solomon and Stiffler (SS) back in the 1960s. Furthermore, a new shortening method is presented. By shortening the optimal codes, new kinds of good systematic binary linear codes can be designed with parameters n = 2^(k-1) + ··· + 2^(k-δ) - 3u and d = 2^(k-2) + ··· + 2^(k-δ-1) - 2u for 2 ≤ u ≤ 4, 2 ≤ δ < k. The advantage of MT over the original SS construction is that it yields a generator matrix in systematic form. In addition, the dual code C⊥, with relatively high rate and optimal minimum distance, can be obtained easily.
    Keywords: binary codes; combinatorial mathematics; linear codes; matrix algebra; SS construction; Solomon-Stiffler code construction; coding theory; dual code; generator matrix; matroid theory; maximal minimum distance; optimal codes; shortening method; systematic binary linear codes (ID#:14-2691)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798003&isnumber=6797989
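    As a quick sanity check on the parameter formulas quoted above, the fragment below evaluates n and d for a few (k, δ) pairs. It is plain arithmetic only and does not perform the matroid construction itself; for k = 4 and δ = 1 it returns the familiar (8, 4, 4) parameters.

```python
# Worked evaluation of the code parameters quoted in the abstract
# (arithmetic only; this does not construct the codes themselves).
def params(k, delta):
    n = sum(2 ** (k - i) for i in range(1, delta + 1))      # n = 2^(k-1)+...+2^(k-δ)
    d = sum(2 ** (k - 1 - i) for i in range(1, delta + 1))  # d = 2^(k-2)+...+2^(k-δ-1)
    return n, d

for k, delta in [(4, 1), (4, 2), (5, 2)]:
    n, d = params(k, delta)
    print(f"k={k}, δ={delta}: (n, k, d) = ({n}, {k}, {d})")
# k=4, δ=1 gives (8, 4, 4); k=4, δ=2 gives (12, 4, 6); k=5, δ=2 gives (24, 5, 12)
```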
  • Xunrui Yin; Zongpeng Li; Xin Wang, "A Matroid Theory Approach To Multicast Network Coding," INFOCOM, 2014 Proceedings IEEE, pp. 646-654, April 27-May 2, 2014. doi: 10.1109/INFOCOM.2014.6847990 Network coding encourages the mixing of information flows at intermediate nodes of a network for enhanced network capacity, especially for one-to-many multicast applications. A fundamental problem in multicast network coding is to construct a feasible solution such that encoding and decoding are performed over a finite field of size as small as possible. Coding operations over very small finite fields (e.g., F2) enable low computational complexity in theory and ease of implementation in practice. In this work, we propose a new approach based on matroid theory to study multicast network coding and its minimum field size requirements. Applying this new approach that translates multicast networks into matroids, we derive the first upper-bounds on the field size requirement based on the number of relay nodes in the network, and make new progress toward proving that coding over very small fields (F2 and F3) suffices for multicast network coding in planar networks.
    Keywords: combinatorial mathematics; matrix algebra; multicast communication; network coding; coding operations; decoding; encoding; enhanced network capacity; information flows; intermediate nodes; matroid theory; minimum field size requirements; multicast network coding; multicast networks; one-to-many multicast applications; planar networks; relay nodes; Encoding; Multicast communication; Network coding; Receivers; Relays; Throughput; Vectors (ID#:14-2692)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847990&isnumber=6847911
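    The field-size question is easiest to appreciate on the textbook butterfly network, where a single XOR over F2 at the bottleneck edge lets two sinks each recover both source bits. The sketch below is that standard example from the literature, not the matroid translation developed in the paper.

```python
# Classic butterfly-network example of multicast network coding over F2:
# two source bits a, b must both reach both sinks, but the middle edge
# can carry only one bit per use. XOR coding at the bottleneck suffices.
def butterfly(a: int, b: int):
    m = a ^ b                 # relay node codes over F2 (one XOR)
    sink1 = (a, a ^ m)        # sink 1 sees a directly, decodes b = a XOR m
    sink2 = (b ^ m, b)        # sink 2 sees b directly, decodes a = b XOR m
    return sink1, sink2

for a in (0, 1):
    for b in (0, 1):
        s1, s2 = butterfly(a, b)
        assert s1 == (a, b) and s2 == (a, b)   # both sinks decode both bits
print("both sinks recover (a, b) in all four cases")
```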
  • Shanmugam, K.; Dimakis, A.G.; Langberg, M., "Graph Theory Versus Minimum Rank For Index Coding," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 291-295, June 29-July 4, 2014. doi: 10.1109/ISIT.2014.6874841 We obtain novel index coding schemes and show that they provably outperform all previously proposed graph theoretic bounds. Further, we establish a rather strong negative result: all known graph theoretic bounds are within a logarithmic factor of the chromatic number. This is in striking contrast to minrank, since prior work has shown that minrank can outperform the chromatic number by a polynomial factor in some cases. The conclusion is that all known graph theoretic bounds are not much stronger than the chromatic number.
    Keywords: graph colouring; linear codes; chromatic number; graph theoretic bounds; index coding scheme; logarithmic factor; minimum rank; minrank; polynomial factor; Channel coding; Indexes; Interference; Unicast; Upper bound (ID#:14-2693)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874841&isnumber=6874773
  • Chen, H.C.H.; Lee, P.P.C., "Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage: Theory and Implementation," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp. 407-416, Feb. 2014. doi: 10.1109/TPDS.2013.164 To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage, along with efficient data integrity checking and recovery procedures, becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. Therefore, we study the problem of remotely checking the integrity of regenerating-coded data against corruptions under a real-life cloud storage setting. We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving its intrinsic properties of fault tolerance and repair-traffic saving. Our DIP scheme is designed under a mobile Byzantine adversarial model, and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for a performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage testbed under different parameter choices. We further analyze the security strengths of our DIP scheme via mathematical models. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.
    Keywords: cloud computing; data integrity; data protection; DIP scheme; data integrity protection; fault tolerance; mobile Byzantine adversarial model; performance-security trade-off; regenerating-coded data integrity checking; regenerating-coding-based cloud storage; remote integrity checking; repair-traffic saving; thin-cloud storage; experimentation; implementation; remote data checking; secure and trusted storage systems (ID#:14-2694)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547608&isnumber=6689796
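    For readers unfamiliar with remote integrity checking, the sketch below shows its simplest flavor: MAC each stored block, then spot-check a random sample. This is a deliberately generic illustration under invented parameters; the paper's DIP scheme additionally preserves the regenerating code's fault tolerance and addresses a mobile Byzantine adversary, none of which appears here.

```python
# Minimal flavor of remote integrity checking by spot-checking authenticated
# blocks. Generic illustration only; the paper's DIP scheme also preserves the
# regenerating code's structure and defends a mobile Byzantine model.
import hmac, hashlib, os, random

KEY = os.urandom(32)

def tag(index: int, block: bytes) -> bytes:
    """Client-side MAC binding each block to its position."""
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

blocks = [os.urandom(64) for _ in range(100)]
tags = [tag(i, b) for i, b in enumerate(blocks)]     # stored alongside the data

blocks[17] = os.urandom(64)                          # simulate a corruption in the cloud

# Verify a random subset of blocks; larger samples raise detection probability.
sample = random.sample(range(len(blocks)), 20)
bad = [i for i in sample if not hmac.compare_digest(tags[i], tag(i, blocks[i]))]
print("corrupt blocks caught in this sample:", bad)
```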
  • Gomez, Arley; Mejia, Carolina; Montoya, J. Andres, "Linear Network Coding And The Model Theory Of Linear Rank Inequalities," Network Coding (NetCod), 2014 International Symposium on, pp. 1-5, 27-28 June 2014. doi: 10.1109/NETCOD.2014.6892128 Let n ≥ 4. Can the entropic region of order n be defined by a finite list of polynomial inequalities? This question was first asked by Chan and Grant. We showed, in a companion paper, that if this were the case, one could solve many algorithmic problems coming from network coding, index coding and secret sharing. Unfortunately, it seems that the entropic regions are not semialgebraic. Are the Ingleton regions semialgebraic sets? We provide some evidence showing that the Ingleton regions are semialgebraic. Furthermore, we show that if the Ingleton regions are semialgebraic, then one can solve many algorithmic problems coming from Linear Network Coding.
    Keywords: Electronic mail; Encoding; Indexes; Network coding; Polynomials; Random variables; Vectors (ID#:14-2695)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892128&isnumber=6892118
  • Xenoulis, K., "List Permutation Invariant Linear Codes: Theory and Applications," Information Theory, IEEE Transactions on, vol.60, no.9, pp. 5263-5282, Sept. 2014. doi: 10.1109/TIT.2014.2333000 The class of q-ary list permutation invariant linear codes is introduced in this paper along with probabilistic arguments that validate their existence when certain conditions are met. The specific class of codes is characterized by an upper bound that is tighter than the generalized Shulman-Feder bound and relies on the distance of the codes' weight distribution to the binomial (multinomial, respectively) one. The bound applies to cases where a code from the proposed class is transmitted over a q-ary output symmetric discrete memoryless channel and list decoding with fixed list size is performed at the output. In the binary case, the new upper bounding technique allows the discovery of list permutation invariant codes whose upper bound coincides with the sphere-packing exponent. Furthermore, the proposed technique motivates the introduction of a new class of upper bounds for general q-ary linear codes whose members are at least as tight as the DS2 bound as well as all its variations for the discrete channels treated in this paper.
    Keywords: channel coding; decoding; linear codes; memoryless systems; generalized Shulman-Feder bound; list decoding; list permutation invariant linear codes; symmetric discrete memoryless channel; Hamming distance; Hamming weight; Linear codes; Maximum likelihood decoding; Vectors; Discrete symmetric channels; double exponential function; list decoding; permutation invariance; reliability function (ID#:14-2696)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843999&isnumber=6878505
  • Micciancio, D., "Locally Dense Codes," Computational Complexity (CCC), 2014 IEEE 29th Conference on, pp. 90-97, 11-13 June 2014. doi: 10.1109/CCC.2014.17 The Minimum Distance Problem (MDP), i.e., the computational task of evaluating (exactly or approximately) the minimum distance of a linear code, is a well known NP-hard problem in coding theory. A key element in essentially all known proofs that MDP is NP-hard is the construction of a combinatorial object that we may call a locally dense code. This is a linear code with large minimum distance d that admits a ball of smaller radius r < d containing an exponential number of codewords, together with some auxiliary information used to map these codewords. In this paper we provide a generic method to explicitly construct locally dense binary codes, starting from an arbitrary linear code with sufficiently large minimum distance. Instantiating our construction with well known linear codes (e.g., Reed-Solomon codes concatenated with Hadamard codes) yields a simple proof that MDP is NP-hard to approximate within any constant factor under deterministic polynomial time reductions, simplifying and explaining recent results of Cheng and Wan (STOC 2009 / IEEE Trans. Inf. Theory, 2012) and Austrin and Khot (ICALP 2011). Our work is motivated by the construction of analogous combinatorial objects over integer lattices, which are used in NP-hardness proofs for the Shortest Vector Problem (SVP). We show that for the max norm, locally dense lattices can also be easily constructed. However, all currently known constructions of locally dense lattices in the standard Euclidean norm are probabilistic. Finding a deterministic construction of locally dense Euclidean lattices, analogous to the results presented in this paper, would prove the NP-hardness of approximating SVP under deterministic polynomial time reductions, a long standing open problem in the computational complexity of integer lattices.
    Keywords: binary codes; combinatorial mathematics; computational complexity; linear codes; MDP; NP-hard problem; SVP; arbitrary linear code; codewords; coding theory; combinatorial object construction; computational complexity; deterministic polynomial time reductions; integer lattices; locally dense Euclidean lattices; locally dense binary codes; locally dense lattices; max norm; minimum distance problem; shortest vector problem; standard Euclidean norm; Binary codes; Lattices; Linear codes; Polynomials; Reed-Solomon codes; Symmetric matrices; Vectors; NP-hardness; coding theory; derandomization; lattices; minimum distance problem; shortest vector problem (ID#:14-2697)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875478&isnumber=6875460
  • Paajanen, P., "Finite p-Groups, Entropy Vectors, and the Ingleton Inequality for Nilpotent Groups," Information Theory, IEEE Transactions on, vol.60, no.7, pp. 3821-3824, July 2014. doi: 10.1109/TIT.2014.2321561 In this paper, we study the capacity/entropy regions of finite, directed, acyclic networks with multiple sources and multiple sinks by means of group theory and entropy vectors coming from groups. There is a one-to-one correspondence between the entropy vector of a collection of n random variables and a certain group-characterizable vector obtained from a finite group and n of its subgroups. We consider nilpotent-group-characterizable entropy vectors and show that they are all also abelian-group-characterizable; hence, they satisfy the Ingleton inequality. It is known that not all entropic vectors can be obtained from abelian groups, so our result implies that to get more exotic entropic vectors, one has to go at least to soluble groups or larger nilpotency classes. The result also implies that the Ingleton inequality is satisfied by nilpotent groups of bounded class, depending on the order of the group.
    Keywords: entropy; group theory; network coding; Ingleton inequality; abelian group; capacity-entropy region; finite p-groups; group theory; group-characterizable vector; multiple-sinks network; network coding theory; nilpotent group characterizable entropy vectors; Channel coding; Entropy; Indexes; Lattices; Random variables; Structural rings; Vectors; Non-Shannon type inequalities; entropy regions; network coding theory; nilpotent groups; p-groups (ID#:14-2698)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809978&isnumber=6832684
  • Guruswami, V.; Narayanan, S., "Combinatorial Limitations of Average-Radius List-Decoding," Information Theory, IEEE Transactions on, vol.60, no.10, pp. 5827-5842, Oct. 2014. doi: 10.1109/TIT.2014.2343224 We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound of O(1/γ) and lower bound of Ω_p(log(1/γ)) for the list size needed to list decode up to error fraction p with rate γ away from capacity, i.e., rate 1 - h(p) - γ (here p ∈ (0, 1/2) and γ > 0). Our main result is that in any binary code C ⊆ {0, 1}^n of rate 1 - h(p) - γ, there must exist a set L ⊂ C of Ω_p(1/√γ) codewords such that the average distance of the points in L from their centroid is at most pn. In other words, there must exist Ω_p(1/√γ) codewords with low average radius. The standard notion of list-decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of the average distance. The average radius form is in itself quite natural; for instance, the classical Johnson bound in fact implies average-radius list-decodability. The remaining results concern the standard notion of list-decoding and help clarify the current state of affairs regarding combinatorial bounds for list-decoding, as follows. First, we give a short simple proof, over all fixed alphabets, of the above-mentioned Ω_p(log(1/γ)) lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky. Second, we show that one cannot improve the Ω_p(log(1/γ)) lower bound via techniques based on identifying the zero-rate regime for list-decoding of constant-weight codes (a typical approach for negative results in coding theory, including the Ω_p(log(1/γ)) list-size lower bound). On a positive note, our Ω_p(1/√γ) lower bound for average-radius list-decoding circumvents this barrier. Third, we exhibit a reverse connection between the existence of constant-weight and general codes for list-decoding, showing that the best possible list size, as a function of the gap γ of the rate to the capacity limit, is the same up to constant factors for both constant-weight codes (with weight bounded away from p) and general codes. Fourth, we give simple second-moment-based proofs that with high probability a list size of Ω_p(1/γ) is needed for list-decoding random codes from errors as well as erasures.
    Keywords: Binary codes; Decoding; Entropy; Hamming distance; Standards; Upper bound; Combinatorial coding theory; linear codes; list error-correction; probabilistic method; random coding (ID#:14-2699)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866234&isnumber=6895347
  • Shpilka, A., "Capacity-Achieving Multiwrite WOM Codes," Information Theory, IEEE Transactions on, vol.60, no.3, pp. 1481-1487, March 2014. doi: 10.1109/TIT.2013.2294464 In this paper, we give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial time encoding and decoding algorithms. The block length of our construction is N = (t/ε)^(O(t/(δε))), where ε is the gap to capacity, and encoding and decoding run in time N^(1+δ). This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.
    Keywords: codes; decoding; alphabets; capacity-achieving binary t-write WOM codes; capacity-achieving multiwrite WOM codes; decoding algorithms; polynomial time encoding; Decoding; Encoding; Force; Indexes; Polynomials; Vectors; Writing; Coding theory; WOM-codes; flash memories; hash-functions; write-once memories (ID#:14-2700)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680743&isnumber=6739111
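    The flavor of WOM coding is captured by the classical Rivest-Shamir code, which stores 2 bits, rewritable once, in 3 write-once binary cells. The sketch below implements that textbook code, which is far simpler than the capacity-achieving t-write family constructed in the paper, and exhaustively checks all write sequences.

```python
# Classic Rivest-Shamir WOM code: 2 bits, written twice, in 3 write-once cells.
# (Textbook example; the paper constructs capacity-achieving t-write codes.)
from itertools import product

FIRST = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
         (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}

def write(state, value):
    """Return the new cell state; cells may only change 0 -> 1."""
    if state == (0, 0, 0) or state == FIRST[value]:
        target = FIRST[value]                        # first write (or value unchanged)
    else:
        target = tuple(1 - b for b in FIRST[value])  # second write: complement codeword
    assert all(t >= s for s, t in zip(state, target)), "write-once violated"
    return target

def read(state):
    """Weight <= 1 states use the first-generation map, others the complement map."""
    if sum(state) <= 1:
        return next(v for v, c in FIRST.items() if c == state)
    return next(v for v, c in FIRST.items() if tuple(1 - b for b in c) == state)

# Exhaustively check every (first value, second value) pair.
for v1, v2 in product(FIRST, repeat=2):
    s1 = write((0, 0, 0), v1)
    s2 = write(s1, v2)
    assert read(s1) == v1 and read(s2) == v2
print("all 16 write sequences decode correctly")
```

    Two writes of 2 bits in 3 cells give a total rate of 4/3, already above the single-write rate of 1, which is the basic gain WOM codes pursue.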
  • Bitouzé, N.; Graell i Amat, A.; Rosnes, E., "Using Short Synchronous WOM Codes to Make WOM Codes Decodable," Communications, IEEE Transactions on, vol.62, no.7, pp. 2156-2169, July 2014. doi: 10.1109/TCOMM.2014.2323308 In the framework of write-once memory (WOM) codes, it is important to distinguish between codes that can be decoded directly and those that require the decoder to know the current generation so as to successfully decode the state of the memory. A widely used approach to constructing WOM codes is to design first nondecodable codes that approach the boundaries of the capacity region and then make them decodable by appending additional cells that store the current generation, at an expense of rate loss. In this paper, we propose an alternative method to making nondecodable WOM codes decodable by appending cells that also store some additional data. The key idea is to append to the original (nondecodable) code a short synchronous WOM code and write generations of the original code and the synchronous code simultaneously. We consider both the binary and the nonbinary case. Furthermore, we propose a construction of synchronous WOM codes, which are then used to make nondecodable codes decodable. For short-to-moderate block lengths, the proposed method significantly reduces the rate loss as compared to the standard method.
    Keywords: decoding; WOM codes decodable; capacity region; current generation; nondecodable codes; short synchronous WOM codes; short-to-moderate block lengths; standard method; write generations; write once memory; Binary codes; Decoding; Encoding; Solids; Standards; Synchronization; Vectors; Coding theory; Flash memories; decodable codes; synchronous write-once memory (WOM) codes (ID#:14-2701)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815644&isnumber=6860331
  • Papailiopoulos, D.S.; Dimakis, A.G., "Locally Repairable Codes," Information Theory, IEEE Transactions on, vol.60, no.10, pp. 5843-5855, Oct. 2014. doi: 10.1109/TIT.2014.2325570 Distributed storage systems for large-scale applications typically use replication for reliability. Recently, erasure codes were used to reduce the large storage overhead, while increasing data reliability. A main limitation of off-the-shelf erasure codes is their high repair cost during single node failure events. A major open problem in this area has been the design of codes that: 1) are repair efficient and 2) achieve arbitrarily high data rates. In this paper, we explore the repair metric of locality, which corresponds to the number of disk accesses required during a single node repair. Under this metric, we characterize an information theoretic tradeoff that binds together the locality, code distance, and storage capacity of each node. We show the existence of optimal locally repairable codes (LRCs) that achieve this tradeoff. The achievability proof uses a locality aware flow-graph gadget, which leads to a randomized code construction. Finally, we present an optimal and explicit LRC that achieves arbitrarily high data rates. Our locality optimal construction is based on simple combinations of Reed-Solomon blocks.
    Keywords: Encoding; Entropy; Joints; Maintenance engineering; Measurement; Peer-to-peer computing; Vectors; Information theory; coding theory; distributed storage (ID#:14-2702)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818438&isnumber=6895347
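    Locality can be illustrated with a toy code that keeps one XOR parity per small group of data blocks: a single lost block is then repaired from its group alone rather than from k blocks as in an MDS code. The sketch below is only a minimal illustration of the locality metric, not the paper's optimal LRC construction; block sizes and group counts are arbitrary.

```python
# Toy illustration of repair locality: one XOR parity per group of r data blocks.
# Repairing any single lost block reads only its group (locality r), unlike an
# MDS code where repair would read k blocks. Not the paper's optimal LRC.
from functools import reduce

def xor(blocks):
    """Bytewise XOR of a list of equal-length byte blocks."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

r = 4
data = [bytes([i] * 8) for i in range(8)]            # 8 data blocks of 8 bytes
groups = [data[i:i + r] for i in range(0, len(data), r)]
parities = [xor(g) for g in groups]                  # one local parity per group

# Lose block 5 (in group 1) and repair it from its surviving group members
# plus the group parity: 4 local reads instead of 8.
lost = 5
survivors = [b for j, b in enumerate(groups[1]) if j != lost - r]
repaired = xor(survivors + [parities[1]])
assert repaired == data[lost]
print(f"block {lost} repaired from {len(survivors) + 1} local reads")
```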
  • Yaakobi, E.; Mahdavifar, H.; Siegel, P.H.; Vardy, A.; Wolf, J.K., "Rewriting Codes for Flash Memories," Information Theory, IEEE Transactions on, vol.60, no.2, pp. 964-975, Feb. 2014. doi: 10.1109/TIT.2013.2290715 Flash memory is a nonvolatile computer memory comprising blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes, known as floating codes (or flash codes) and buffer codes, have been designed in order to maximize the number of times that information stored in a flash memory can be written (and rewritten) prior to incurring a block erasure. An (n, k, t)_q flash code ℂ is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of ℂ, defined as δ(ℂ) = n(q-1) - t, is a measure of how close the code comes to perfectly utilizing all these transitions. In this paper, we show a construction of flash codes with write deficiency O(qk log k) if q ≥ log₂ k, and at most O(k log² k) otherwise. An (n, r, ℓ, t)_q buffer code is a coding scheme for storing a buffer of r ℓ-ary symbols such that for any sequence of t symbols, it is possible to successfully decode the last r symbols that were written. We improve upon a previous upper bound on the maximum number of writes t in the case where there is a single cell to store the buffer. Then, we show how to improve a construction by Jiang that uses multiple cells, where n ≥ 2r.
    Keywords: block codes; flash memories; random-access storage; block erasures; buffer codes; coding schemes; flash codes; flash memories; floating codes; l-ary symbols; multiple cells; nonvolatile computer memory; rewriting codes; Ash; Buffer storage; Decoding; Encoding; Indexes; Upper bound; Vectors; Buffer codes; coding theory; flash codes; flash memories (ID#:14-2703)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6662417&isnumber=6714461
  • Vempaty, A.; Han, Y.S.; Varshney, P.K., "Target Localization in Wireless Sensor Networks Using Error Correcting Codes," Information Theory, IEEE Transactions on, vol.60, no.1, pp. 697-712, Jan. 2014. doi: 10.1109/TIT.2013.2289859 In this paper, we consider the task of target localization using quantized data in wireless sensor networks. We propose a computationally efficient localization scheme by modeling it as an iterative classification problem. We design coding theory based iterative approaches for target localization where at every iteration, the fusion center (FC) solves an M-ary hypothesis testing problem and decides the region of interest for the next iteration. The coding theory based iterative approach works well even in the presence of Byzantine (malicious) sensors in the network. We further consider the effect of non-ideal channels. We suggest the use of soft-decision decoding to compensate for the loss due to the presence of fading channels between the local sensors and FC. We evaluate the performance of the proposed schemes in terms of the Byzantine fault tolerance capability and probability of detection of the target region. We also present performance bounds, which help us in designing the system. We provide asymptotic analysis of the proposed schemes and show that the schemes achieve perfect region detection irrespective of the noise variance when the number of sensors tends to infinity. Our numerical results show that the proposed schemes provide a similar performance in terms of mean square error as compared with the traditional maximum likelihood estimation but are computationally much more efficient and are resilient to errors due to Byzantines and non-ideal channels.
    Keywords: decoding; error correction codes; fading channels; iterative methods; probability; wireless sensor networks; Byzantine fault tolerance capability; M-ary hypothesis testing problem; asymptotic analysis; coding theory based iterative approaches; efficient localization scheme; error correcting codes; fading channels; fusion center; iterative classification problem; maximum likelihood estimation; mean square error; perfect region detection; probability; soft-decision decoding; target localization; wireless sensor networks; Decoding; Encoding; Fading; Hamming distance; Sensor fusion; Wireless sensor networks; Byzantines; Target localization; error correcting codes; wireless sensor networks (ID#:14-2704)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657772&isnumber=6690264
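    The region-detection step has a worst-case guarantee worth making explicit: if the rows of the code matrix have minimum Hamming distance d_min, the fusion center recovers the correct region despite any ⌊(d_min - 1)/2⌋ flipped sensor reports. The sketch below verifies this exhaustively for an invented 4-region, 7-sensor assignment; the matrix is illustrative, not taken from the paper.

```python
# Deterministic illustration of Byzantine tolerance in code-based region
# detection: with row minimum distance d_min, any fewer than d_min/2 flipped
# sensor bits still decode to the right region. Matrix invented for the example.
from itertools import combinations

# Rows = 4 candidate regions, columns = 7 sensors; pairwise row distance is 4.
CODE = [
    (0, 0, 0, 0, 0, 0, 0, 0),
]
CODE = [
    (0, 0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1, 0),
    (1, 0, 1, 0, 1, 0, 1, 0),
    (1, 1, 0, 1, 0, 0, 1, 0),
]

def decode(received):
    """Pick the region whose codeword is nearest in Hamming distance."""
    return min(range(len(CODE)),
               key=lambda m: sum(a != b for a, b in zip(CODE[m], received)))

d_min = min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(CODE, 2))
t = (d_min - 1) // 2                      # guaranteed-correctable Byzantine count

# Exhaustive check: every region, every pattern of up to t flipped sensors.
for m, codeword in enumerate(CODE):
    for k in range(t + 1):
        for flips in combinations(range(len(codeword)), k):
            received = [b ^ (i in flips) for i, b in enumerate(codeword)]
            assert decode(received) == m
print(f"d_min = {d_min}: region recovered despite any {t} Byzantine sensor(s)")
```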

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or modifications to specific citations via email to SoS.Project (at) SecureDataBank.net. Please include the ID# of the specific citation in your correspondence.