Dataset-Agnostic Vessel Segmentation of Retinal Fundus Images by a Vector Quantized Variational Autoencoder
DOI: https://doi.org/10.47611/jsrhs.v10i3.2280

Keywords: Medical Imaging, Vessel Segmentation, Autoencoder, Artificial Intelligence

Abstract
Retinal fundus images play a major role in the diagnosis of diseases such as diabetic retinopathy. Doctors frequently perform vessel segmentation as a key step in retinal image analysis. Because this is laborious and time-consuming, AI researchers have developed U-Net models to automate the process. However, the U-Net struggles to generalize its predictions across datasets due to variability in fundus images. To overcome this limitation, I propose a cross-domain Vector Quantized Variational Autoencoder (VQ-VAE) that is dataset-agnostic: regardless of the training dataset, the VQ-VAE accurately predicts vessel segmentations. The model does not have to be retrained for each new target dataset, eliminating the need for additional data, resources, and time. The VQ-VAE consists of an encoder-decoder network with a custom discrete embedding space; the encoder's output is quantized through this embedding space and then decoded to produce a segmentation mask. Both this VQ-VAE and a U-Net were trained on the DRIVE dataset and tested on the DRIVE, IOSTAR, and CHASE_DB1 datasets. Both models succeeded on the dataset they were trained on, DRIVE. However, the U-Net failed to generate vessel segmentation masks when tested on the other datasets, while the VQ-VAE maintained high accuracy, with F1 scores from 0.758 to 0.767 across datasets. The model can therefore produce convincing segmentation masks for new retinal image datasets without additional data, time, or resources. Applications include running the VQ-VAE immediately after a fundus image is taken to streamline the vessel segmentation process.
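The quantization step described above (mapping each encoder output vector to its nearest vector in a discrete embedding space) can be sketched as follows. This is a minimal NumPy illustration of the general VQ-VAE codebook lookup from van den Oord et al.; the codebook size, vector dimension, and function names here are illustrative assumptions, not the paper's actual hyperparameters or implementation.

```python
import numpy as np

def quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook embedding.

    z_e:      (N, D) array of encoder outputs (flattened spatial positions).
    codebook: (K, D) array of K discrete embedding vectors.
    Returns the quantized vectors (N, D) and their codebook indices (N,).
    """
    # Squared Euclidean distance from every encoder vector to every embedding.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    # Index of the nearest embedding for each encoder vector.
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

rng = np.random.default_rng(0)
codebook = rng.standard_normal((8, 4))   # K=8 embeddings of dimension D=4
z_e = rng.standard_normal((16, 4))       # 16 encoder output vectors
z_q, idx = quantize(z_e, codebook)       # z_q is then passed to the decoder
```

In a full model, `z_q` would be fed to the decoder to produce the segmentation mask, with a straight-through gradient estimator used during training since `argmin` is not differentiable.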
References or Bibliography
Abràmoff, Michael D et al. “Retinal imaging and image analysis.” IEEE reviews in biomedical engineering vol. 3 (2010): 169-208. doi:10.1109/RBME.2010.2084567
Fraz, Muhammad Moazam et al. “An ensemble classification-based approach applied to retinal blood vessel segmentation.” IEEE transactions on bio-medical engineering vol. 59,9 (2012): 2538-48. doi:10.1109/TBME.2012.2205687
Guo, Changlu, et al. “SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation.” ArXiv:2004.03696 [Cs, Eess], (2020). arXiv.org, http://arxiv.org/abs/2004.03696.
Zhang, Jiong et al. “Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores,” IEEE Transactions on Medical Imaging, vol. 35, no. 12, pp. 2631-2644, (2016). DOI: 10.1109/TMI.2016.2587062
Oord, Aaron van den, et al. “Neural Discrete Representation Learning.” ArXiv:1711.00937 [Cs], (2018). arXiv.org, http://arxiv.org/abs/1711.00937.
Ronneberger, Olaf, et al. “U-Net: Convolutional Networks for Biomedical Image Segmentation.” ArXiv:1505.04597 [Cs], (2015). arXiv.org, http://arxiv.org/abs/1505.04597.
Abbasi-Sureshjani, Samaneh et al. “Biologically-inspired supervised vasculature segmentation in SLO retinal fundus images,” in International Conference Image Analysis and Recognition, pp. 325-334. Springer, (2015). DOI: 10.1007/978-3-319-20801-5_35
Staal, Joes et al. “Ridge-based vessel segmentation in color images of the retina.” IEEE transactions on medical imaging vol. 23,4 (2004): 501-9. doi:10.1109/TMI.2004.825627
Yan, Wenjun, et al. “The Domain Shift Problem of Medical Image Segmentation and Vendor-Adaptation by Unet-GAN.” Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, edited by Dinggang Shen et al., vol. 11765, Springer International Publishing, (2019), pp. 623-631. doi:10.1007/978-3-030-32245-8_69.
Zhang, Han, et al. “Self-Attention Generative Adversarial Networks.” ArXiv:1805.08318 [Cs, Stat], June 2019. arXiv.org, http://arxiv.org/abs/1805.08318.
Zhao et al. "Supervised Segmentation of Un-Annotated Retinal Fundus Images by Synthesis." IEEE Transactions on Medical Imaging vol. 38, no. 1, pp. 46-56, (2019) doi: 10.1109/TMI.2018.2854886.
Copyright (c) 2021 Tejas Prabhune; David Walz
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.