Why Use Position Features in Liver Segmentation Performed by Convolutional Neural Network
Status: PubMed-not-MEDLINE. Language: English. Country: Switzerland. Medium: electronic-eCollection.
Document type: journal article
PubMed: 34658919
PubMed Central: PMC8518428
DOI: 10.3389/fphys.2021.734217
- Keywords
- convolutional neural network, liver volumetry, machine learning, medical imaging, position features, semantic segmentation
- Publication type
- journal article (MeSH)
Liver volumetry is an important tool in clinical practice, and liver volume is calculated primarily from Computed Tomography. Unfortunately, automatic segmentation algorithms based on handcrafted features tend to leak the segmented object into surrounding tissues such as the heart or the spleen. Convolutional neural networks are now widely used in many computer vision applications, including image segmentation, and provide very promising results. In our work, we use robustly segmentable structures, namely the spine, the body surface, and the sagittal plane, as key points for estimating position inside the body. Signed distance fields derived from these structures are computed and supplied as additional input channels to our convolutional neural network, specifically a U-Net, an architecture widely used in medical image segmentation. Our work shows that this additional position information improves segmentation results. We test the approach in two experiments on two public datasets of Computed Tomography images and evaluate the results with Accuracy, the Hausdorff distance, and the Dice coefficient. Code is publicly available at: https://gitlab.com/hachaf/liver-segmentation.git.
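The core idea, a signed distance field computed from a robustly segmentable key structure and stacked with the CT intensities as an extra input channel, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy slice, the hypothetical spine mask, and the `signed_distance_field` helper are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask):
    """Signed distance from the boundary of a binary mask:
    negative inside the structure, positive outside."""
    outside = distance_transform_edt(~mask)  # distance to nearest mask voxel
    inside = distance_transform_edt(mask)    # distance to nearest background voxel
    return outside - inside

# Toy 2D "CT slice" with a hypothetical spine mask as the key structure.
ct = np.zeros((64, 64), dtype=np.float32)
spine = np.zeros((64, 64), dtype=bool)
spine[28:36, 28:36] = True

sdf = signed_distance_field(spine).astype(np.float32)

# Stack CT intensities and the position feature as input channels,
# e.g. a (channels, H, W) tensor fed to a U-Net-style network.
x = np.stack([ct, sdf], axis=0)
print(x.shape)  # (2, 64, 64)
```

In the paper's setting the same construction would be applied per structure (spine, body surface, sagittal plane), each contributing one additional channel alongside the CT data.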
Biomedical Center, Faculty of Medicine in Pilsen, Charles University, Pilsen, Czechia
Department of Cybernetics, Faculty of Applied Sciences, University of West Bohemia, Pilsen, Czechia
Department of Informatics, Faculty of Applied Sciences, University of West Bohemia, Pilsen, Czechia