Acoustic Fingerprints

Acoustic fingerprints can be used to identify an audio sample or to quickly locate similar items in an audio database. As a security tool, acoustic fingerprints also provide a biometric modality for identifying a user. Current research explores a range of applications, including mobile device security, countering anti-forensics, image-processing-based fingerprint matching, and client-side embedding.
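
To make the basic lookup idea concrete, the following is a minimal, self-contained sketch in the spirit of well-known landmark-hashing fingerprint systems. It illustrates the general technique only, not any particular system cited below; every parameter (frame length, peaks per frame, fan-out) and the synthetic noise "tracks" are arbitrary choices for demonstration.

```python
import numpy as np
from collections import defaultdict

def spectral_peaks(signal, frame_len=1024, hop=512, peaks_per_frame=3):
    """Return (frame_index, frequency_bin) pairs for the strongest bins
    of each short-time spectrum (a crude landmark extractor)."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    peaks = []
    for t, spectrum in enumerate(spectra):
        for f in np.argsort(spectrum)[-peaks_per_frame:]:
            peaks.append((t, int(f)))
    return peaks

def landmark_hashes(peaks, fan_out=3):
    """Pair each peak with a few following peaks; the hash is the two
    frequency bins plus their time gap, stored with the anchor time."""
    entries = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1:i + 1 + fan_out]:
            entries.append(((f1, f2, t2 - t1), t1))
    return entries

def build_index(tracks):
    """Inverted index: hash -> list of (track_name, anchor_time)."""
    index = defaultdict(list)
    for name, signal in tracks.items():
        for h, t in landmark_hashes(spectral_peaks(signal)):
            index[h].append((name, t))
    return index

def identify(query_signal, index):
    """Vote for the track whose matching hashes agree on a time offset."""
    votes = defaultdict(int)
    for h, t_query in landmark_hashes(spectral_peaks(query_signal)):
        for name, t_track in index.get(h, ()):
            votes[(name, t_track - t_query)] += 1
    (name, _offset), _count = max(votes.items(), key=lambda kv: kv[1])
    return name

# Demonstration with synthetic "tracks" (random noise stands in for audio).
rng = np.random.default_rng(0)
tracks = {name: rng.standard_normal(3 * 22050) for name in ("track_a", "track_b")}
index = build_index(tracks)
excerpt = tracks["track_b"][512 * 40:]     # an excerpt starting on a frame boundary
print(identify(excerpt, index))            # expected: track_b
```

The inverted index is the key design choice: lookup cost grows with the number of query hashes rather than with the size of the database, which is what makes fingerprint search over large audio collections fast. The systems cited below refine this basic idea in different directions.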
 

  • Liu, Yuxi; Hatzinakos, Dimitrios, "Human Acoustic Fingerprints: A Novel Biometric Modality for Mobile Security," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3784-3788, 4-9 May 2014. (ID#:14-1601) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854309&isnumber=6853544 Recently, the demand for more robust protection against unauthorized use of mobile devices has been growing rapidly. This paper presents a novel biometric modality, Transient Evoked Otoacoustic Emission (TEOAE), for mobile security. Prior work has investigated TEOAE for biometrics in a setting where an individual is to be identified among a pre-enrolled identity gallery. However, this limits applicability to the mobile environment, where attacks in most cases come from imposters unknown to the system beforehand. Therefore, we employ an unsupervised learning approach based on an autoencoder neural network to tackle this blind recognition problem. The learning model is trained on a generic dataset and used to verify an individual in a random population. We also introduce a framework for a mobile biometric system with practical application in mind. Experiments show the merits of the proposed method, and system performance is further evaluated by cross-validation, with an average EER of 2.41% achieved. Keywords: Autoencoder Neural Network; Biometric Verification; Mobile Security; Otoacoustic Emission; Time-frequency Analysis
  • Zeng, Hui; Qin, Tengfei; Kang, Xiangui; Liu, Li, "Countering Anti-Forensics of Median Filtering," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2704-2708, 4-9 May 2014. (ID#:14-1602) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854091&isnumber=6853544 The statistical fingerprints left by median filtering can be a valuable clue for image forensics. However, these fingerprints may be maliciously erased by a forger. Recently, a tricky anti-forensic method was proposed to remove median filtering traces by restoring an image's pixel-difference distribution. In this paper, we analyze the traces left by this anti-forensic technique and propose a novel countermeasure. The experimental results show that our method can reveal this anti-forensic manipulation effectively at a low computational load. To the best of our knowledge, this is the first work on countering anti-forensics of median filtering. Keywords: Image forensics; anti-forensic; median filtering; pixel difference
  • Rafii, Zafar; Coover, Bob; Han, Jinyu, "An Audio Fingerprinting System for Live Version Identification Using Image Processing Techniques," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 644-648, 4-9 May 2014. (ID#:14-1603) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853675&isnumber=6853544 Suppose that you are at a music festival checking out an artist, and you would like to quickly learn about the song that is being played (e.g., title, lyrics, album, etc.). If you have a smartphone, you could record a sample of the live performance and compare it against a database of existing recordings from the artist. Services such as Shazam or SoundHound will not work here, as this is not the typical framework for audio fingerprinting or query-by-humming systems: a live performance is neither identical to its studio version (e.g., variations in instrumentation, key, tempo, etc.) nor is it a hummed or sung melody. We propose an audio fingerprinting system that can handle live version identification by using image processing techniques. Compact fingerprints are derived using a log-frequency spectrogram and an adaptive thresholding method, and template matching is performed using the Hamming similarity and the Hough transform. (A simplified code sketch of the binarization and matching steps appears after this list.) Keywords: adaptive thresholding; Constant Q Transform; audio fingerprinting; cover identification
  • Naini, Rohit; Moulin, Pierre, "Fingerprint Information Maximization for Content Identification," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3809-3813, 4-9 May 2014. (ID#:14-1604) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854314&isnumber=6853544 This paper presents a novel design of content fingerprints based on maximization of the mutual information across the distortion channel. We use the information bottleneck method to optimize the filters and quantizers that generate these fingerprints. A greedy optimization scheme is used to select filters from a dictionary and allocate fingerprint bits. We test the performance of this method for audio fingerprinting and show substantial improvements over existing learning-based fingerprints. Keywords: Audio fingerprinting; Content Identification; Information bottleneck; Information maximization
  • Bianchi, Tiziano; Piva, Alessandro, "TTP-Free Asymmetric Fingerprinting Protocol Based on Client-Side Embedding," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3987-3991, 4-9 May 2014. (ID#:14-1605) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854350&isnumber=6853544 In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables secure distribution of personalized decryption keys containing the Buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using non-blind decoding, and it is robust with respect to common attacks. The proposed scheme can be a valid solution to both customers' rights and scalability issues in multimedia content distribution. Keywords: Buyer-Seller watermarking protocol; Client-side embedding; Fingerprinting; secure watermark embedding
  • Coover, Bob; Han, Jinyu, "A Power Mask Based Audio Fingerprint," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1394-1398, 4-9 May 2014. (ID#:14-1606) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853826&isnumber=6853544 The Philips audio fingerprint [1] has been used for years, but its robustness against external noise has not been studied accurately. This paper shows that the Philips fingerprint is noise resistant and is capable of recognizing music that is corrupted by noise at a −4 to −7 dB signal-to-noise ratio. In addition, the drawbacks of the Philips fingerprint are addressed by utilizing a “Power Mask” in conjunction with the Philips fingerprint during the matching process. This Power Mask is a weight matrix applied to the fingerprint bits, which allows mismatched bits to be penalized according to their relevance in the fingerprint. The effectiveness of the proposed fingerprint was evaluated by experiments using a database of 1030 songs and 1184 query files that were heavily corrupted by two types of noise at varying levels. Our experiments show that the proposed method significantly improves the noise resistance of the standard Philips fingerprint. (A simplified code sketch of this weighted matching idea appears after this list.) Keywords: Audio Fingerprint; Music Recognition
  • Moussallam, Manuel; Daudet, Laurent, "A General Framework for Dictionary-Based Audio Fingerprinting," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3077-3081, 4-9 May 2014. (ID#:14-1607) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854166&isnumber=6853544 Fingerprint-based audio recognition systems must address competing objectives: fingerprints must be both robust to distortions and discriminative, while their dimension must remain low to allow fast comparison. This paper proposes to restate these objectives as a penalized sparse representation problem. On top of this dictionary-based approach, we propose a structured sparsity model in the form of a probabilistic distribution for the sparse support. A practical suboptimal greedy algorithm is then presented and evaluated on robustness and recognition tasks. We show that some existing methods can be seen as particular cases of this algorithm and that the general framework allows other points of a Pareto-like continuum to be reached. (A simplified greedy sparse-coding sketch appears after this list.) Keywords: Audio Fingerprinting; Sparse Representation
  • Wen-Long Chin, Trong Nghia Le, Chun-Lin Tseng, Wei-Che Kao, Chun-Shen Tsai, Chun-Wei Kao, "Cooperative Detection of Primary User Emulation Attacks Based on Channel-Tap Power in Mobile Cognitive Radio Networks," International Journal of Ad Hoc and Ubiquitous Computing, Volume 15, Issue 4, May 2014, Pages 263-274. (ID#:14-1608) URL: http://dl.acm.org/citation.cfm?id=2629824.2629828&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 This paper discusses a novel approach to detecting primary user emulation attacks (PUEA) in mobile cognitive radio (CR) networks. The method focuses on identifying such attacks when the signal-to-noise ratio (SNR) on the network is low. Users are detected directly at the physical layer (PHY), using channel-tap power as an identifying radio-frequency (RF) fingerprint. A fixed sample size test (FSST) and a sequential probability ratio test (SPRT) are employed to maintain detection performance in fading channels. Results are discussed. (A textbook SPRT sketch appears after this list.) Keywords: (not provided)
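
Rafii, Coover, and Han (ID#:14-1603) derive compact binary fingerprints from a log-frequency spectrogram with adaptive thresholding and match them with the Hamming similarity and the Hough transform. The sketch below is a loose illustration under stated simplifications, not the paper's implementation: the log-frequency spectrogram is approximated by pooling FFT bins into logarithmically spaced bands rather than computing a Constant Q Transform, the adaptive threshold is a local median per band, and the Hough-transform alignment is replaced by a brute-force search over time offsets.

```python
import numpy as np

def log_frequency_spectrogram(signal, sr=22050, frame_len=2048, hop=512, n_bins=48):
    """Approximate log-frequency spectrogram: pool FFT bins into
    logarithmically spaced bands (a stand-in for a Constant Q Transform)."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    edges = np.geomspace(100.0, sr / 2, n_bins + 1)   # log-spaced band edges
    out = np.zeros((len(frames), n_bins))
    for b in range(n_bins):
        in_band = (freqs >= edges[b]) & (freqs < edges[b + 1])
        out[:, b] = spectra[:, in_band].sum(axis=1)
    return out

def adaptive_binarize(spec, window=16):
    """One bit per time-frequency cell: 1 if the cell exceeds the median
    of a local time window in its band (a simple adaptive threshold)."""
    bits = np.zeros(spec.shape, dtype=np.uint8)
    for t in range(spec.shape[0]):
        lo, hi = max(0, t - window), min(spec.shape[0], t + window + 1)
        bits[t] = spec[t] > np.median(spec[lo:hi], axis=0)
    return bits

def best_hamming_similarity(fp_query, fp_ref):
    """Largest fraction of agreeing bits over all time alignments of the
    query against the reference (brute force; the paper uses the Hough
    transform to find the alignment instead)."""
    best = 0.0
    for offset in range(fp_ref.shape[0] - fp_query.shape[0] + 1):
        segment = fp_ref[offset:offset + fp_query.shape[0]]
        best = max(best, float(np.mean(segment == fp_query)))
    return best
```

A live query would be binarized the same way and compared against the binarized studio recordings, with the highest-scoring recording reported as the match.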
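
Coover and Han (ID#:14-1606) add a “Power Mask” to the classic Philips fingerprint so that mismatched bits are penalized according to their relevance. The sketch below is one plausible reading of that idea, offered as an assumption rather than the paper's exact construction: the Philips-style bits are the signs of energy differences across bands and frames, each bit's weight is the magnitude of the same difference (normalized per frame), and matching uses a weighted rather than plain bit error rate.

```python
import numpy as np

def band_energies(signal, frame_len=2048, hop=1024, n_bands=33):
    """Per-frame energies in coarse frequency bands."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    spectra = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    edges = np.linspace(0, spectra.shape[1], n_bands + 1, dtype=int)
    return np.array([[spectra[t, edges[b]:edges[b + 1]].sum()
                      for b in range(n_bands)]
                     for t in range(len(frames))])

def philips_bits_and_mask(energies):
    """Philips-style bits: sign of the band/frame energy difference.
    The 'power mask' here is the magnitude of that same difference,
    normalized per frame (an assumption about the paper's weight matrix)."""
    d = np.diff(energies, axis=0)            # frame-to-frame differences
    d = d[:, :-1] - d[:, 1:]                 # band-to-band differences
    bits = (d > 0).astype(np.uint8)
    mask = np.abs(d) / (np.abs(d).sum(axis=1, keepdims=True) + 1e-12)
    return bits, mask

def weighted_bit_error(bits_query, bits_ref, mask_ref):
    """Average per-frame weighted mismatch: bits with low weight
    (unreliable under noise) contribute little to the error."""
    n = min(len(bits_query), len(bits_ref))
    mismatch = (bits_query[:n] != bits_ref[:n]).astype(float)
    return float((mismatch * mask_ref[:n]).sum() / n)
```

Under this weighting, bits derived from near-zero energy differences, which flip easily when noise is added, barely affect the match score, which is the intuition behind penalizing mismatches by relevance.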
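
Moussallam and Daudet (ID#:14-1607) cast fingerprinting as a penalized sparse representation problem solved with a greedy algorithm. The sketch below strips this down to a bare matching-pursuit loop over a random dictionary, keeping only the indices of the selected atoms as the fingerprint and comparing fingerprints by support overlap; the structured probabilistic support model that is the paper's actual contribution is not reproduced here, and the dictionary, signal sizes, and noise level are arbitrary.

```python
import numpy as np

def matching_pursuit_support(x, dictionary, n_atoms=10):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual; return the indices of the chosen atoms."""
    residual = np.asarray(x, dtype=float).copy()
    support = set()
    for _ in range(n_atoms):
        correlations = dictionary @ residual          # atoms are the rows
        k = int(np.argmax(np.abs(correlations)))
        support.add(k)
        residual -= correlations[k] * dictionary[k]   # atoms are unit-norm
    return frozenset(support)

def support_similarity(s1, s2):
    """Jaccard overlap between two sparse supports, used here as a crude
    stand-in for fingerprint comparison."""
    return len(s1 & s2) / len(s1 | s2)

# Usage: a mildly distorted copy of a signal tends to keep a similar support.
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((256, 1024))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)  # unit-norm atoms
clean = rng.standard_normal(1024)
noisy = clean + 0.05 * rng.standard_normal(1024)
s_clean = matching_pursuit_support(clean, dictionary)
s_noisy = matching_pursuit_support(noisy, dictionary)
print("support overlap:", support_similarity(s_clean, s_noisy))
```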
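
Chin et al. (ID#:14-1608) pair a channel-tap-power RF fingerprint with sequential detection. The sketch below shows only a textbook sequential probability ratio test, under the simplifying assumption that the tap power observed from the legitimate primary user and from an attacker follows Gaussian distributions with known (here invented) parameters; the cooperative and fading-channel aspects of the paper are not modelled.

```python
import math
import random

def gaussian_log_likelihood(x, mean, std):
    """Log density of a Gaussian observation."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

def sprt(observations, mean_legit, mean_attack, std, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: accumulate the
    log-likelihood ratio of 'attacker' vs. 'legitimate user' until it
    crosses one of the two thresholds, then decide."""
    upper = math.log((1 - beta) / alpha)      # decide: attacker
    lower = math.log(beta / (1 - alpha))      # decide: legitimate
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += (gaussian_log_likelihood(x, mean_attack, std)
                - gaussian_log_likelihood(x, mean_legit, std))
        if llr >= upper:
            return "primary user emulation attack", n
        if llr <= lower:
            return "legitimate primary user", n
    return "undecided", len(observations)

# Usage with synthetic channel-tap power samples (assumed statistics).
random.seed(0)
samples = [random.gauss(1.8, 0.5) for _ in range(200)]   # resembles an attacker
print(sprt(samples, mean_legit=1.0, mean_attack=1.8, std=0.5))
```

The appeal of the sequential test is that it usually reaches a decision after far fewer samples than a fixed sample size test with the same error targets.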

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modification of specific citations via email to SoS.Project (at) SecureDataBank.net. Please include the ID# of the specific citation in your correspondence.