Investigative Ophthalmology & Visual Science
June 2023
Volume 64, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2023
Quantification of Hard Exudates Using Color Fundus Photographs based on Deep Learning Pix2Pix GAN Image Translation Model
Author Affiliations & Notes
  • Vinisha Sant
    University of Pittsburgh, Pittsburgh, Pennsylvania, United States
  • Sandeep Chandra Bollepalli
    Department of Ophthalmology, UPMC, Pittsburgh, Pennsylvania, United States
  • Abdul Rasheed Mohammed
    School of Optometry and Vision Science, University of Waterloo, Waterloo, Ontario, Canada
  • Jose Sahel
    Department of Ophthalmology, UPMC, Pittsburgh, Pennsylvania, United States
  • Jay Chhablani
    Department of Ophthalmology, UPMC, Pittsburgh, Pennsylvania, United States
  • Kiran Kumar Vupparaboina
    Department of Ophthalmology, UPMC, Pittsburgh, Pennsylvania, United States
  • Footnotes
    Commercial Relationships   Vinisha Sant None; Sandeep Chandra Bollepalli None; Abdul Rasheed Mohammed None; Jose Sahel Avista RX, Code C (Consultant/Contractor), GenSight Biologics, Sparing Vision, Prophesee, Chronolife, Tilak Healthcare, VegaVect Inc., Avista, Tenpoint, SharpEye, Code I (Personal Financial Interest), Unpaid censor on the board of GenSight Biologics and SparingVision; Censor on the board of Avista, Chair advisory board of SparingVision, Tenpoint, Institute of Ophthalmology Basel (IOB); President of the Fondation Voir & Entendre; Director board of trustees RD Fund (Foundation Fighting Blindness), Gilbert Foundation advisory board, Code S (non-remunerative); Jay Chhablani None; Kiran Vupparaboina None
  • Footnotes
    Support  The work was supported by the NIH CORE Grant P30 EY08098 to the Dept. of Ophthalmology, the Eye and Ear Foundation of Pittsburgh; the Shear Family Foundation Grant to the University of Pittsburgh Department of Ophthalmology; and an unrestricted grant from Research to Prevent Blindness, New York, NY; and partly by Grant BT/PR16582/BID/7/667/2016.
Investigative Ophthalmology & Visual Science June 2023, Vol.64, OD62. doi:
      Vinisha Sant, Sandeep Chandra Bollepalli, Abdul Rasheed Mohammed, Jose Sahel, Jay Chhablani, Kiran Kumar Vupparaboina; Quantification of Hard Exudates Using Color Fundus Photographs based on Deep Learning Pix2Pix GAN Image Translation Model. Invest. Ophthalmol. Vis. Sci. 2023;64(8):OD62.

Abstract

Purpose : Diabetic retinopathy (DR) is one of the leading causes of blindness. Hard exudates (HEs) are an important clinical indicator of DR and can be visualized on color fundus (CF) photographs. Accurate segmentation of HEs is therefore significant for quantifying and monitoring DR progression. In this study, we attempted a novel image translation method based on the Pix2Pix generative adversarial network (GAN) deep learning model to segment HEs in CF photographs.

Methods : This study was performed on a retrospective dataset of 150 CF images taken from subjects diagnosed with DR. We adopted the Pix2Pix GAN model, which is designed to translate an image pixel-by-pixel into a targeted image; here, we translate a CF image into the corresponding HE-segmented CF image (Figure 1). For the generator, we used a residual encoder-decoder (ResUNet) model. Ground-truth segmentations required for training were obtained using our previously validated semi-automated method. In the training images, HEs were marked in blue, and Pix2Pix learns to reproduce these blue regions. Both the generator and the discriminator were optimized with mean squared error (MSE) loss functions. The train-test split was 120:30. Segmentation performance was evaluated using the Dice coefficient (DC) between the ground-truth and the Pix2Pix GAN segmentations.
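The blue-marked ground-truth images described above can be converted to binary HE masks before computing training targets or evaluation metrics. The abstract does not specify how the blue annotations are encoded, so the channel-difference threshold (`margin`) below is a hypothetical choice; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def blue_mask(rgb, margin=50):
    """Binary mask of blue-marked HE pixels in an RGB annotation image.

    A pixel counts as 'blue' when its blue channel exceeds both the red
    and green channels by `margin` (assumed threshold; the abstract does
    not state how the blue markings are thresholded).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (b - r > margin) & (b - g > margin)

# Tiny synthetic example: one blue-marked pixel in a 2x2 image.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (10, 10, 200)  # strongly blue pixel (marked HE)
mask = blue_mask(img)
print(int(mask.sum()))  # -> 1
```

In practice, annotation overlays drawn on top of a fundus image may use anti-aliased edges, so a tolerance-based rule like this is more robust than an exact color match.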

Results : On the 30 test eyes, the proposed method achieved an average Dice score of 91.47% against the ground-truth segmentation. Figure 2 compares ground-truth and algorithmic segmentations on representative CF images.
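The Dice coefficient reported above is a standard overlap measure between two binary masks, DC = 2|A∩B| / (|A|+|B|). A minimal NumPy sketch (the epsilon term is an assumption to guard against empty masks, not part of the abstract):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy example: ground truth has 2 positive pixels, prediction matches 1.
gt_mask = np.array([[1, 1, 0, 0]], dtype=bool)
pred_mask = np.array([[1, 0, 0, 0]], dtype=bool)
score = dice(pred_mask, gt_mask)
print(round(score, 4))  # -> 0.6667  (2*1 / (1+2))
```

A per-eye Dice score averaged over the 30 test images would yield the summary figure reported in the abstract.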

Conclusions : The proposed Pix2Pix-GAN-based approach showed close agreement with ground-truth segmentation. The method is generalizable and can be adapted to other segmentation tasks.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.

 

Figure 1: Schematic of proposed HE segmentation using CF images based on Pix2Pix GAN.


 

Figure 2: Comparison of Pix2Pix based HE segmentation against ground-truth on representative CF images.

