This record comes from PubMed

Detection of abnormality in wireless capsule endoscopy images using fractal features

2020 Dec; 127: 104094. [Epub] 2020 Oct 27

Language: English; Country: United States; Media: print-electronic

Document type: Journal Article, Research Support, Non-U.S. Gov't

Wireless capsule endoscopy (WCE) is one of the most recent non-invasive technologies for examining the gastrointestinal tract. Because an 8-15 h video contains thousands of endoscopic images, an evaluator must pay constant attention for a relatively long time (60-120 min). Pathological findings may appear in only a few images, each displayed for evaluation for just a few seconds, so there is a significant risk of missing the pathology, with all the negative consequences for the patient. Manually reviewing a video to identify abnormal images is therefore not only a tedious, time-consuming task that overwhelms human attention but is also error-prone. In this paper, a method is proposed for the automatic detection of abnormal WCE images. The differential box counting method is used to extract the fractal dimension (FD) of WCE images, and a random-forest-based ensemble classifier is used to identify abnormal frames. The FD is a well-known descriptor of texture, smoothness, and roughness. Here, FDs are extracted from pixel blocks of WCE images and fed to the classifier to identify images with abnormalities. To determine a suitable pixel-block size for FD feature extraction, various block sizes are evaluated with six frequently used classifiers, and a 7×7 block size is empirically found to give the best performance. The random forest ensemble classifier is likewise selected through the same empirical study. The performance of the proposed method is evaluated on two datasets of WCE frames. Results demonstrate that the proposed method outperforms some state-of-the-art methods, with AUCs of 85% and 99% on Dataset-I and Dataset-II, respectively.
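To make the pipeline described in the abstract concrete, the sketch below computes a differential-box-counting fractal dimension for each 7×7 pixel block and feeds the resulting per-image feature vector to a random forest. It is an illustrative approximation only, not the authors' code: the helper names (differential_box_counting_fd, fd_feature_vector), the choice of box scales, the scikit-learn classifier settings, and the toy data are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def differential_box_counting_fd(block, gray_levels=256):
    """Estimate the fractal dimension (FD) of a grayscale block with
    differential box counting: the intensity surface is covered by
    boxes of side s and height h, and FD is the slope of
    log(N_r) versus log(1/r)."""
    m = block.shape[0]                         # block assumed square, e.g. 7x7
    scales = list(range(2, m // 2 + 1)) or [2]
    log_nr, log_inv_r = [], []
    for s in scales:
        h = max(1.0, gray_levels * s / m)      # box height in intensity units
        n_r = 0
        for i in range(0, m - s + 1, s):
            for j in range(0, m - s + 1, s):
                cell = block[i:i + s, j:j + s]
                # boxes needed to span the intensity range of this cell
                n_r += int(np.ceil(cell.max() / h) - np.ceil(cell.min() / h)) + 1
        log_nr.append(np.log(n_r))
        log_inv_r.append(np.log(m / s))        # r = s/m, so 1/r = m/s
    slope, _ = np.polyfit(log_inv_r, log_nr, 1)  # least-squares fit: slope = FD
    return slope


def fd_feature_vector(image, block_size=7):
    """Tile the image into block_size x block_size patches and return
    one FD value per patch (row-major order) as the feature vector."""
    rows, cols = image.shape
    feats = [
        differential_box_counting_fd(
            image[i:i + block_size, j:j + block_size].astype(float))
        for i in range(0, rows - block_size + 1, block_size)
        for j in range(0, cols - block_size + 1, block_size)
    ]
    return np.asarray(feats)


# Toy end-to-end run with random "frames" and dummy labels (illustration only).
rng = np.random.default_rng(0)
X = np.stack([fd_feature_vector(rng.integers(0, 256, (63, 63))) for _ in range(40)])
y = rng.integers(0, 2, 40)                     # 0 = normal, 1 = abnormal
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```

In practice the feature vectors would be computed from labelled WCE frames and the classifier evaluated with AUC on held-out data, as reported in the paper.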

