Edge Detection and Metrics 2015

Edge detection is an important problem in image and signal processing. The works cited here examine the development of metrics for it. All were presented or published in 2015.




Jaiswal, A.; Garg, B.; Kaushal, V.; Sharma, G.K., “SPAA-Aware 2D Gaussian Smoothing Filter Design Using Efficient Approximation Techniques,” in VLSI Design (VLSID), 2015 28th International Conference on, vol., no., pp. 333–338, 3–7 Jan. 2015. doi:10.1109/VLSID.2015.62

Abstract: The limited battery lifetime and rapidly increasing functionality of portable multimedia devices demand energy-efficient designs. The filters employed in these devices are based mainly on Gaussian smoothing, which is slow and severely affects performance. In this paper, we propose a novel energy-efficient approximate 2D Gaussian smoothing filter (2D-GSF) architecture that exploits “nearest pixel approximation” and rounds off Gaussian kernel coefficients. The proposed architecture significantly improves Speed-Power-Area-Accuracy (SPAA) metrics in designing energy-efficient filters. The efficacy of the proposed approximate 2D-GSF is demonstrated on a real application, edge detection. The simulation results show 72%, 79% and 76% reductions in area, power and delay, respectively, with an acceptable 0.4 dB loss in PSNR as compared to the well-known approximate 2D-GSF.

Keywords: Gaussian processes; approximation theory; edge detection; smoothing methods; SPAA metric; SPAA-aware 2D Gaussian smoothing filter design; approximation technique; edge detection; energy-efficient approximate 2D GSF architecture; energy-efficient approximate 2D Gaussian smoothing filter architecture; energy-efficient designs; limited battery lifetime; nearest pixel approximation; portable multimedia devices; rounding-off Gaussian kernel coefficient; speed-power-area-accuracy; Adders; Approximation methods; Complexity theory; Computer architecture; Image edge detection; Kernel; Smoothing methods; Approximate design; Edge-detection; Energy-efficiency; Error Tolerant Applications; Gaussian Smoothing Filter (ID#: 15-7061)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031756&isnumber=7031671
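The kernel-coefficient rounding idea from the abstract above can be illustrated with a small sketch (an illustration only, not the paper's hardware architecture; function names are ours): rounding each normalized Gaussian coefficient to the nearest power of two lets a hardware implementation replace multipliers with bit shifts.

```python
import math

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2D Gaussian smoothing kernel."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def round_to_power_of_two(v):
    """Round a positive coefficient to the nearest power of two,
    so a multiply can be realized as a bit shift in hardware."""
    return 2.0 ** round(math.log2(v)) if v > 0 else 0.0

def approximate_kernel(kernel):
    """Replace every coefficient with a power of two, then renormalize."""
    approx = [[round_to_power_of_two(v) for v in row] for row in kernel]
    s = sum(map(sum, approx))
    return [[v / s for v in row] for row in approx]
```

For a 3×3, sigma = 1 kernel this yields shift-friendly weights of 1/4 (center), 1/8 (edges) and 1/16 (corners), which happen to sum to one already.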

 

Rui Xu; Naman, A.T.; Mathew, R.; Rufenacht, D.; Taubman, D., “Motion Estimation with Accurate Boundaries,” in Picture Coding Symposium (PCS), 2015, vol., no., pp. 184–188, May 31 2015–June 3 2015. doi:10.1109/PCS.2015.7170072

Abstract: This paper investigates several techniques that increase the accuracy of motion boundaries in the estimated motion fields of a local dense estimation scheme. In particular, we examine two matching metrics: one is MSE in the image domain, and the other is a recently proposed multiresolution metric that has been shown to produce more accurate motion boundaries. We also examine several different edge-preserving filters. The edge-aware moving average filter, proposed in this paper, takes an input image and the result of an edge detection algorithm, and outputs an image that is smooth except at the detected edges. Compared to the adoption of edge-preserving filters, we find that matching metrics play a more important role in estimating accurate and compressible motion fields. Nevertheless, the proposed filter may provide further improvements in the accuracy of the motion boundaries. These findings can be very useful for a number of recently proposed scalable interactive video coding schemes.

Keywords: edge detection; image filtering; image matching; image resolution; motion estimation; moving average processes; MSE; compressible motion field; edge detection algorithm; edge-aware moving average filter; edge-preserving filters; image domain; local dense estimation scheme; matching metrics; motion boundary; motion estimation; multiresolution metric; scalable interactive video coding scheme; Accuracy; Image coding; Image edge detection; Joints; Measurement; Motion estimation; Video coding (ID#: 15-7062)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7170072&isnumber=7170026
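A minimal 1D sketch in the spirit of the edge-aware moving average described above (the function name and the edge-map convention are assumptions, not taken from the paper): the window is allowed to grow around each sample but is never allowed to cross a detected edge.

```python
def edge_aware_moving_average(signal, edges, radius=2):
    """Moving average that does not average across edges.
    edges[j] == True means an edge lies between samples j and j+1."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = i, i
        # Grow the window leftwards until an edge or the radius stops it.
        while lo > max(0, i - radius) and not edges[lo - 1]:
            lo -= 1
        # Grow the window rightwards the same way.
        while hi < min(n - 1, i + radius) and not edges[hi]:
            hi += 1
        window = signal[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out
```

A step signal with its boundary flagged as an edge passes through unsmoothed, which is exactly the behaviour the filter is meant to have.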

 

Haj-Hassan, H.; Chaddad, A.; Tanougast, C.; Harkouss, Y., “Comparison of Segmentation Techniques for Histopathological Images,” in Digital Information and Communication Technology and its Applications (DICTAP), 2015 Fifth International Conference on, vol., no., pp. 80–85, April 29 2015–May 1 2015. doi:10.1109/DICTAP.2015.7113175

Abstract: Image segmentation is widely used in medical imaging applications to detect anatomical structures and regions of interest. This paper surveys numerous segmentation models used in the biomedical field. We organize the segmentation techniques into four approaches, namely thresholding, edge-based, region-based and snake. These techniques have been compared with simulation results that demonstrate the feasibility of medical image segmentation. The snake approach demonstrated high performance metrics in detecting irregular shapes such as the carcinoma cell type. This study shows the advantage of the deformable segmentation technique for segmenting abnormal cells, with a Dice similarity value over 83%.

Keywords: biomedical optical imaging; cellular biophysics; edge detection; gradient methods; image segmentation; medical image processing; object detection; vectors; anatomical structure detection; biomedical field; carcinoma cell type; dice similarity value; edge-based approach; gradient vector; histopathological image segmentation techniques; irregular shape detection; medical image segmentation; medical imaging applications; region-based approach; regions-of-interest detection; snake approach; thresholding approach; Anatomical structure; Biological system modeling; Decision support systems; Image edge detection; Image segmentation; Simulation; Segmentation; biomedical; edge; region; snake; thresholding (ID#: 15-7063)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7113175&isnumber=7113160
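The Dice similarity used for evaluation above is straightforward to compute; a minimal sketch for binary masks (flat lists of 0/1 values):

```python
def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = {i for i, v in enumerate(mask_a) if v}
    b = {i for i, v in enumerate(mask_b) if v}
    if not a and not b:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))
```

A segmentation would pass the paper's reported quality bar when this value exceeds 0.83 against the ground-truth mask.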

 

Mukherjee, M.; Edwards, J.; Kwon, H.; La Porta, T.F., “Quality of Information-Aware Real-Time Traffic Flow Analysis and Reporting,” in Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, vol., no., pp. 69–74, 23–27 March 2015. doi:10.1109/PERCOMW.2015.7133996

Abstract: In this paper we present a framework for Quality of Information (QoI)-aware networking. QoI quantifies how useful a piece of information is for a given query or application. Herein, we present a general QoI model, as well as a specific example instantiation that carries throughout the rest of the paper. In this model, we focus on the tradeoffs between precision and accuracy. As a motivating example, we look at traffic video analysis. We present simple algorithms for deriving various traffic metrics from video, such as vehicle count and average speed. We implement these algorithms both on a desktop workstation and on a less-capable mobile device. We then show how QoI-awareness enables end devices to make intelligent decisions about how to process queries and form responses, such that substantial bandwidth savings are realized.

Keywords: mobile computing; traffic information systems; video signal processing; QoI; average speed; bandwidth savings; desktop workstation; end devices; form responses; information-aware real-time traffic flow analysis; mobile device; quality of information-aware networking; traffic metrics; traffic video analysis; vehicle count; Accuracy; Cameras; Image edge detection; Quality of service; Sensors; Streaming media; Vehicles (ID#: 15-7064)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7133996&isnumber=7133953

 

Lokhande, S.S.; Dawande, N.A., “A Survey on Document Image Binarization Techniques,” in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, vol., no., pp. 742–746, 26–27 Feb. 2015. doi:10.1109/ICCUBEA.2015.148

Abstract: Document image binarization is performed to segment foreground text from the background in badly degraded documents. In this paper, a comprehensive survey has been conducted of some state-of-the-art document image binarization techniques. After describing these techniques, their performance has been compared using various evaluation metrics widely used for document image analysis and recognition. On the basis of this comparison, it has been found that the adaptive contrast method is the best-performing method. Accordingly, the partial results that we have obtained for the adaptive contrast method are stated, and the mathematical model and block diagram of the method are described in detail.

Keywords: document image processing; image recognition; image segmentation; text analysis; adaptive contrast method; background text segmentation; document image analysis; document image binarization techniques; document image recognition; foreground text segmentation; mathematical model; performance evaluation metrics; Distortion; Image edge detection; Image segmentation; Mathematical model; Measurement; Text analysis; degraded document image; document image binarization; image contrast; segmentation (ID#: 15-7065)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7155946&isnumber=7155781
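The adaptive contrast method the survey favours is involved; as a baseline illustration of the thresholding family of binarization techniques it covers, here is Otsu's classic global threshold (a standard reference method, not necessarily the paper's exact implementation):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's global threshold: pick the gray level that maximizes
    the between-class variance of foreground vs. background."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count so far
    sum_b = 0.0  # background intensity sum so far
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold are classified as foreground; for a clean bimodal histogram the threshold lands between the two modes.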

 

Balodi, A.; Dewal, M.L.; Rawat, A., “Comparison of Despeckle Filters for Ultrasound Images,” in Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, vol., no., pp. 1919–1924, 11–13 March 2015. doi: (not provided)

Abstract: A comparative study of despeckle filters for ultrasound images is presented in this paper. Ultrasound images are corrupted by speckle noise, which has limited the growth of automatic diagnosis for ultrasound imaging. This paper compiles twelve despeckling filters for speckle noise reduction. A comparative study has been done in terms of preserving texture features and edges. Six evaluation metrics, namely signal-to-noise ratio (SNR), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, beta metric (β) and figure of merit (FoM), are calculated to investigate the performance of the despeckle filters.

Keywords: biomedical ultrasonics; image filtering; image texture; mean square error methods; medical image processing; speckle; ultrasonic imaging; FoM; PSNR; RMSE; SSIM; automatic diagnosis; beta metric; despeckle filters; despeckling filters; figure of merit; peak signal to noise ratio; root mean square error; speckle noise reduction; structural similarity index; ultrasound images; Image edge detection; Measurement; Optical filters; Signal to noise ratio; Speckle; Wiener filters; Beta metric; Despeckle; FoM; PSNR; RMSE; SNR; SSIM (ID#: 15-7066)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100578&isnumber=7100186
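Two of the metrics listed above, RMSE and PSNR, can be computed directly from their definitions; a minimal sketch over flat grayscale pixel lists:

```python
import math

def rmse(ref, test):
    """Root mean square error between two equal-length images (flat lists)."""
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float('inf') if e == 0 else 20 * math.log10(peak / e)
```

Higher PSNR means the despeckled image is closer to the reference; an RMSE equal to the peak value corresponds to 0 dB.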

 

Marburg, A.; Hayes, M.P., “SMARTPIG: Simultaneous Mosaicking and Resectioning Through Planar Image Graphs,” in Robotics and Automation (ICRA), 2015 IEEE International Conference on, vol., no., pp. 5767–5774, 26–30 May 2015. doi:10.1109/ICRA.2015.7140007

Abstract: This paper describes Smartpig, an algorithm for the iterative mosaicking of images of a planar surface using a unique parameterization which decomposes inter-image projective warps into camera intrinsics, fronto-parallel projections, and inter-image similarities. The constraints resulting from the inter-image alignments within an image set are stored in an undirected graph structure allowing efficient optimization of image projections on the plane. Camera pose is also directly recoverable from the graph, making Smartpig a feasible solution to the problem of simultaneous localization and mapping (SLAM). Smartpig is demonstrated on a set of 144 high resolution aerial images and evaluated with a number of metrics against ground control.

Keywords: SLAM (robots); cameras; directed graphs; image segmentation; iterative methods; robot vision; SLAM; Smartpig algorithm; camera pose; fronto-parallel projections; ground control; high resolution aerial images; image iterative mosaicking; image projection optimization; image set; inter-image alignments; inter-image projective decomposition; inter-image similarity; intrinsic camera; planar image graphs; planar surface; simultaneous location and mapping; undirected graph structure; Cameras; Cost function; Image edge detection; Measurement; Silicon; Three-dimensional displays (ID#: 15-7067)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7140007&isnumber=7138973

 

Kerouh, F.; Serir, A., “A No Reference Perceptual Blur Quality Metric in the DCT Domain,” in Control, Engineering & Information Technology (CEIT), 2015 3rd International Conference on, vol., no., pp. 1–6, 25–27 May 2015. doi:10.1109/CEIT.2015.7233043

Abstract: Blind objective metrics that automatically quantify perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. We present, in this paper, a perceptual no-reference blur assessment metric developed in the frequency domain. As blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to analyse, perceptually, the impact of blur distortion on high frequencies using the discrete cosine transform (DCT) and the just noticeable blur (JNB) concept, which relies on the human visual system. Comprehensive testing demonstrates the proposed Perceptual Blind Blur Quality Metric's (PBBQM) good consistency with subjective quality scores, as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.

Keywords: blind source separation; discrete cosine transforms; image restoration; DCT domain; JNB; PBBQM; blind blur quality measures; blind objective metrics; blur distortion; blurring affects; digital imaging systems; discrete cosine transform; edges; fine image details; frequency domain; human visual system; image high frequency components; just noticeable blur concept; no reference perceptual blur quality metric; perceived image quality degradation; perceptual blind blur quality metric; perceptual no reference blur assessment metric; subjective quality scores; Databases; Discrete cosine transforms; Frequency-domain analysis; Image edge detection; Visual systems; Wavelet transforms; Blurring; Discrete Cosine Transform (DCT); Just Noticeable Blur threshold (JNB); blind quality metric (ID#: 15-7068)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7233043&isnumber=7232976
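The full PBBQM (JNB thresholds, perceptual weighting) cannot be reconstructed from the abstract, but its underlying intuition, namely that blur removes high-frequency DCT energy, can be sketched in a toy 1D form (function names and the cutoff choice are ours):

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a 1D signal."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def high_freq_ratio(x, cutoff=None):
    """Fraction of non-DC spectral energy at or above `cutoff`;
    sharper signals score higher, blurred ones lower."""
    coeffs = dct_1d(x)
    if cutoff is None:
        cutoff = len(x) // 2
    total = sum(c * c for c in coeffs[1:])
    high = sum(c * c for c in coeffs[cutoff:])
    return high / total if total > 1e-12 else 0.0
```

A sharp step carries noticeably more relative high-frequency energy than a smooth ramp over the same range, which is the signal a no-reference blur metric exploits.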

 

Windisch, G.; Kozlovszky, M., “Image Sharpness Metrics for Digital Microscopy,” in Applied Machine Intelligence and Informatics (SAMI), 2015 IEEE 13th International Symposium on, vol., no., pp. 273–276, 22–24 Jan. 2015. doi:10.1109/SAMI.2015.7061889

Abstract: Image sharpness measurements are important parts of many image processing applications. Multiple algorithms for measuring image sharpness have been proposed and evaluated in the past, but they were developed with out-of-focus photographs in mind and do not work as well with images taken using a digital microscope. In this article we show the difference between images taken with digital cameras, images taken with a digital microscope, and artificially blurred images. The conventional sharpness measures are executed on all these categories to measure the difference, and a standard image set taken with a digital microscope is proposed and described to serve as a common baseline for further sharpness measures in the field.

Keywords: cameras; feature extraction; image processing; microscopy; photography; artificially blurred images; digital cameras; digital microscopy; image processing applications; image sharpness measurements; image sharpness metrics; out-of-focus photographs; Digital cameras; Image databases; Image edge detection; Measurement; Microscopy; Noise (ID#: 15-7069)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7061889&isnumber=7061844
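One widely used conventional sharpness measure of the kind the paper compares is the variance of the Laplacian response (whether it is in the paper's exact set is an assumption on our part); a pure-Python sketch over a 2D grayscale list:

```python
def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour Laplacian over
    all interior pixels of a 2D grayscale image. Higher = sharper."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A flat image scores zero while a high-contrast texture scores high, which is why this measure separates focused from defocused photographs but can be misled by the low-texture fields common in microscopy.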

 

Khademi, A.; Moody, A.R., “Multiscale Partial Volume Estimation for Segmentation of White Matter Lesions Using Flair MRI,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, vol., no., pp. 568–571, 16–19 April 2015. doi:10.1109/ISBI.2015.7163937

Abstract: For robust segmentation of white matter lesions (WML), a partial volume fraction (PVF) estimation approach was previously developed for FLAIR MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction was estimated directly from each FLAIR MRI using an adaptively defined global edge map that exploits a novel relationship between edge content and PVA. Although promising, the approach required predefined noise filter parameters, and the edge metric was computed on a single scale, which limits wide-scale implementation. To handle these challenges, this work defines a novel multiscale PVF estimation approach based on scale space derivatives. The result is a scale-invariant representation of edge content, which is used to estimate a multiscale (scale-invariant) PV fraction. Validation results show the method performing better than the previous version.

Keywords: biomedical MRI; image segmentation; medical image processing; FLAIR MRI; edge content scale-invariant representation; edge metrics; global edge map; multiscale PV fraction; multiscale partial volume estimation; multispectral scan; noise filter parameter; partial volume fraction estimation; scale space derivative; white matter lesion segmentation; Image edge detection; Image segmentation; Lesions; Magnetic resonance imaging; Noise; Volume measurement; FLAIR; MRI; WML; partial volume (ID#: 15-7070)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163937&isnumber=7163789

 

Hakim, Aayesha; Talele, K.T.V.; Harsh, Rajesh; Verma, Dharmesh, “Electronic Portal Image Processing for High Precision Radiotherapy,” in Computation of Power, Energy Information and Communication (ICCPEIC), 2015 International Conference on, vol., no., pp. 0007–0012, 22–23 April 2015. doi:10.1109/ICCPEIC.2015.7259436

Abstract: The advent of the a-Si Electronic Portal Imaging Device (EPID) has provided an important tool for clinicians to verify the location of the radiation therapy beam with respect to the patient anatomy. However, Electronic Portal Images (EPI) are blurred and suffer from low contrast due to Compton scattering, and it is difficult to differentiate organs and tissues in low-contrast images. Better in-treatment images are needed to extract relevant anatomical features for reliable patient set-up verification. The goal of this research work was to investigate several image processing techniques for contrast enhancement and edge detection/sharpening on EPI in DICOM format and to improve their visual aspects for better diagnosis and intervention. We propose a hybrid approach to enhance the quality of electronic portal images by using the CLAHE algorithm and median filtering followed by image sharpening. Results suggest impressive improvement in image quality by the proposed method. To quantify the degree of enhancement or degradation for the various techniques experimentally, metrics like RMSE and PSNR are compared.

Keywords: Cancer; DICOM; Head; Histograms; Image edge detection; Neck; Thorax; DICOM; Electronic Portal Imaging Device (EPID); image enhancement; image processing (ID#: 15-7071)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7259436&isnumber=7259434
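CLAHE extends plain histogram equalization with tiling and histogram clipping; the core contrast-enhancement step it builds on, global equalization via the CDF remap, can be sketched as follows (a sketch of the classical technique, not the paper's pipeline):

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization: remap each gray level through the
    cumulative distribution so the output histogram is roughly flat."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)

    def remap(p):
        if n == cdf_min:          # constant image: nothing to stretch
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [remap(p) for p in pixels]
```

A narrow band of gray levels is stretched across the full output range, which is the low-contrast problem the EPI enhancement above targets.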

 

Mastan Vali, S.K.; Naga Kishore, K.L.; Prathibha, G., “Robust Image Watermarking Using Tetrolet Transform,” in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, vol., no., pp. 1–5, 24–25 Jan. 2015. doi:10.1109/EESCO.2015.7253651

Abstract: This paper proposes a new watermarking technique based on the tetrolet domain. The tetrolet transform is a new adaptive Haar-type wavelet transform based on tetrominoes. It avoids Gibbs oscillation because it applies the Haar function at the edges of the image. Our proposed watermarking algorithm embeds the watermark into tetrolet coefficients selected by considering tetrominoes of different shapes. We evaluated the effectiveness of the watermarking approach using quality metrics such as RMSE and PSNR together with robustness parameters. The experimental results reveal that the proposed watermarking scheme is robust against common image processing attacks.

Keywords: Discrete wavelet transforms; Image edge detection; Robustness; Watermarking; Adaptive Haar type wavelets; PSNR; RMSE; Robustness; Tetrolet Transform; Tetrominoes (ID#: 15-7072)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7253651&isnumber=7253613

 

Sandic-Stankovic, D.; Kukolj, D.; Le Callet, P., “DIBR Synthesized Image Quality Assessment Based on Morphological Wavelets,” in Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on, vol., no., pp. 1–6, 26–29 May 2015. doi:10.1109/QoMEX.2015.7148143

Abstract: Most Depth Image Based Rendering (DIBR) techniques produce synthesized images which contain nonuniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality metrics. Morphological filters maintain important geometric information, such as edges, across different resolution levels. In this paper, a morphological wavelet peak signal-to-noise ratio measure, MW-PSNR, based on morphological wavelet decomposition is proposed to tackle the evaluation of DIBR synthesized images. It is shown that MW-PSNR achieves much higher correlation with human judgment than state-of-the-art image quality measures in this context.

Keywords: edge detection; image filtering; rendering (computer graphics); wavelet transforms; DIBR synthesized image evaluation; DIBR synthesized image quality assessment; MW-PSNR; depth image based rendering; edge coherency; image quality metrics; morphological filters; morphological wavelet decomposition; morphological wavelet peak signal-to-noise ratio measure; morphological wavelets; nonuniform geometric distortions; Distortion; Image quality; Lattices; Measurement; Signal resolution; Wavelet transforms; DIBR synthesized image quality assessment; Multi-scale PSNR; lifting scheme; morphological wavelets; nonseparable morphological wavelet decomposition; quincunx sampling (ID#: 15-7073)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148143&isnumber=7148077

 

Kyoungmin Lee; Kolsch, M., “Shot Boundary Detection with Graph Theory Using Keypoint Features and Color Histograms,” in Applications of Computer Vision (WACV), 2015 IEEE Winter Conference on,  vol., no., pp. 1177–1184, 5–9 Jan. 2015. doi:10.1109/WACV.2015.161

Abstract: The TRECVID report of 2010 [14] evaluated video shot boundary detectors as achieving “excellent performance on [hard] cuts and gradual transitions.” Unfortunately, while re-evaluating the state of the art of shot boundary detection, we found that detectors need to be improved, because the characteristics of consumer-produced videos have changed significantly since the introduction of mobile gadgets such as smartphones, tablets and cameras intended for outdoor activities, and because video editing software has been evolving rapidly. In this paper, we evaluate the best-known approach on a contemporary, publicly accessible corpus, and present a method that achieves better performance, particularly on soft transitions. Our method combines color histograms with keypoint feature matching to extract comprehensive frame information. Two similarity metrics, one for individual frames and one for sets of frames, are defined based on graph cuts. These metrics are formed into temporal feature vectors on which an SVM is trained to perform the final segmentation. The evaluation on said “modern” corpus of relatively short videos yields a performance of 92% recall (at 89% precision) overall, compared to 69% (at 91%) for the best-known method.

Keywords: edge detection; graph theory; image colour analysis; image matching; image segmentation; support vector machines; video signal processing; SVM; color histograms; comprehensive frame information; graph cuts; graph theory; key point feature matching; segmentation; temporal feature vectors; video shot boundary detection; Color; Feature extraction; Histograms; Image color analysis; Measurement; Support vector machines; Vectors (ID#: 15-7074)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7046015&isnumber=7045853
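A toy version of the color-histogram component of the method above (the bin count, distance measure and threshold here are illustrative choices, not the paper's): consecutive frames whose coarse RGB histograms differ sharply are flagged as hard cuts.

```python
def color_histogram(frame, bins=4, levels=256):
    """Coarse per-channel histogram of an RGB frame (list of (r, g, b))."""
    hist = [0] * (3 * bins)
    step = levels // bins
    for r, g, b in frame:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    return hist

def histogram_distance(f1, f2, bins=4):
    """Normalized L1 distance between frame histograms, in [0, 1];
    values near 1 suggest a hard cut."""
    h1, h2 = color_histogram(f1, bins), color_histogram(f2, bins)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / (2 * 3 * len(f1))

def detect_cuts(frames, threshold=0.5):
    """Indices i where a cut is detected between frames i-1 and i."""
    return [i for i in range(1, len(frames))
            if histogram_distance(frames[i - 1], frames[i]) > threshold]
```

The paper goes further by combining such histogram evidence with keypoint matches and an SVM, precisely because histograms alone miss soft transitions.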

 

Saleh, F.S.; Azmi, R., “Automated Lesion Border Detection of Dermoscopy Images Using Spectral Clustering,” in Pattern Recognition and Image Analysis (IPRIA), 2015 2nd International Conference on, vol., no., pp. 1–6, 11–12 March 2015. doi:10.1109/PRIA.2015.7161640

Abstract: Skin lesion segmentation is one of the most important steps in automated early skin cancer detection, since the accuracy of the following steps depends significantly on it. In this paper we present a novel approach based on spectral clustering that provides accurate and effective segmentation of dermoscopy images. In the proposed method, an optimized clustering algorithm effectively extracts lesion borders using a spectral graph partitioning algorithm in an appropriate color space, considering the special characteristics of dermoscopy images. The proposed segmentation method has been applied to 170 dermoscopic images and evaluated with two metrics, using segmentations provided by an experienced dermatologist as the ground truth. The experimental results demonstrate that complex contours are distinguished correctly, and that challenging features of skin lesions, such as topological changes, weak or false contours, and asymmetry in color and shape, are handled well when compared to four state-of-the-art methods.

Keywords: cancer; edge detection; feature extraction; image colour analysis; image segmentation; medical image processing; pattern clustering; shape recognition; skin; automated lesion border detection; color asymmetry; dermoscopy image segmentation; lesion border extraction; shape asymmetry; skin cancer detection; spectral clustering; spectral graph partitioning algorithm; Clustering algorithms; Hair; Image color analysis; Image segmentation; Lesions; Malignant tumors; Skin; Dermoscopic Images; Segmentation; Spectral Clustering; Uniform color space; automated early skin cancer detection (ID#: 15-7075)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161640&isnumber=7161613

 

Moradi, M.; Falahati, A.; Shahbahrami, A.; Zare-Hassanpour, R., “Improving Visual Quality in Wireless Capsule Endoscopy Images with Contrast-Limited Adaptive Histogram Equalization,” in Pattern Recognition and Image Analysis (IPRIA), 2015 2nd International Conference on, vol., no., pp. 1–5, 11–12 March 2015. doi:10.1109/PRIA.2015.7161645

Abstract: Wireless Capsule Endoscopy (WCE) is a noninvasive device for the detection of gastrointestinal problems, especially small bowel diseases such as polyps, which cause gastrointestinal bleeding. The quality of WCE images is very important for diagnosis. In this paper, a new method is proposed to improve the quality of WCE images using the Removing Noise and Contrast Enhancement (RNCE) algorithm. The algorithm has been implemented and tested on real images. The quality metrics used for performance evaluation of the proposed method are the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR) and Edge Strength Similarity for Image (ESSIM). The results obtained from SSIM, PSNR and ESSIM indicate that the implemented RNCE method improves the quality of WCE images significantly.

Keywords: biological organs; biomedical optical imaging; diseases; endoscopes; image denoising; image enhancement; medical image processing; WCE image quality; bowel diseases; contrast enhancement algorithm; contrast-limited adaptive histogram equalization; diagnosis; edge strength similarity-for-image; gastrointestinal bleeding; gastrointestinal problem detection; noninvasive device; peak signal-to-noise ratio; performance evaluation; polyps; quality metrics; removing noise algorithm; similarity index measure; visual quality; wireless capsule endoscopy images; Diseases; Endoscopes; Gastrointestinal tract; Imaging; PSNR; Wireless communication; Contrast Enhancement; Medical Image Processing; Wireless Capsule Endoscopy (WCE) (ID#: 15-7076)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7161645&isnumber=7161613

 

Gomez-Valverde, J.J.; Ortuno, J.E.; Guerra, P.; Hermann, B.; Zabihian, B.; Rubio-Guivernau, J.L.; Santos, A.; Drexler, W.; Ledesma-Carbayo, M.J., “Evaluation of Speckle Reduction with Denoising Filtering in Optical Coherence Tomography for Dermatology,” in Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, vol., no., pp. 494–497, 16–19 April 2015. doi:10.1109/ISBI.2015.7163919

Abstract: Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we evaluate various denoising filters with high edge-preserving potential for the reduction of speckle noise in 256 dermatological OCT B-scans. Our results show that the Enhanced Sigma Filter and Block Matching 3-D (BM3D) as 2D denoising filters, and the Wavelet Multiframe algorithm considering adjacent B-scans, achieved the best results in terms of the enhancement quality metrics used. Our results suggest that a combination of 2D filtering followed by a wavelet-based compounding algorithm may significantly reduce speckle, increasing signal-to-noise and contrast-to-noise ratios, without the need for extra acquisitions of the same frame.

Keywords: biomedical optical imaging; diseases; filtering theory; image denoising; image enhancement; image matching; medical image processing; optical tomography; skin; wavelet transforms; 2D denoising filters; 3D block matching; BM3D; OCT images; contrast-to-noise ratios; dermatological OCT B-scans; enhancement quality metrics; high edge-preserving potential; optical coherence tomography; sigma filter; signal-to-noise ratios; skin disease diagnosis; speckle noise; speckle reduction; wavelet multiframe algorithm; wavelet-based compounding algorithm; Adaptive optics; Biomedical optical imaging; Digital filters; Optical filters; Optical imaging; Speckle; Tomography; Optical Coherence Tomography; denoising; dermatology; speckle (ID#: 15-7077)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7163919&isnumber=7163789 

 


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.