ARVO Annual Meeting Abstract | July 2019
Volume 60, Issue 9
Open Access
Geographic Atrophy Lesion Segmentation Using a Deep Learning Network (U-net)
Author Affiliations & Notes
  • Jasmine Patil
    Clinical Imaging Group, Genentech, California, United States
  • Michael Kawczynski
    Personalized Healthcare, Roche, California, United States
  • Simon S. Gao
    Clinical Imaging Group, Genentech, California, United States
    Personalized Healthcare, Roche, California, United States
  • Alexandre Fernandez Coimbra
    Clinical Imaging Group, Genentech, California, United States
  • Footnotes
    Commercial Relationships: Jasmine Patil, Genentech (E); Michael Kawczynski, Genentech (E); Simon Gao, Genentech (E); Alexandre Coimbra, Genentech (E)
    Support: None
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 1459.
Citation: Jasmine Patil, Michael Kawczynski, Simon S. Gao, Alexandre Fernandez Coimbra; Geographic Atrophy Lesion Segmentation Using a Deep Learning Network (U-net). Invest. Ophthalmol. Vis. Sci. 2019;60(9):1459.
Abstract

Purpose: To automatically segment geographic atrophy (GA) lesions using a deep learning network and to compare the results with manual segmentations by two readers.

Methods: A deep learning model was developed using spectral domain optical coherence tomography (OCT) image data from patients in the MAHALO study (NCT01229215). A total of 63 images were chosen to develop the model (48 for training and 15 for testing); images with significant motion artifacts were excluded. GA lesions were segmented manually on OCT en face choroidal images by two readers to provide annotations for training and testing the model, with each manual segmentation taking 12 minutes on average. Reader 1 (JP) graded all 63 images (48 training and 15 testing); Reader 2 (SSG) graded the 15 test images. The training images were randomly split 5 times into 42 training images and 6 development images to train the model. The Dice coefficient (a similarity measure that ranges from 0 to 1, with 1 indicating perfect overlap) between the model segmentation and the annotation masks was compared with the Dice coefficient between the two readers.
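
For reference, the Dice coefficient used for evaluation can be computed as in the minimal NumPy sketch below. This is an illustration, not code from the study; the function name and the convention that two empty masks score 1.0 are assumptions.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        # Sorensen-Dice coefficient between two binary masks; 1.0 = perfect overlap.
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        if total == 0:
            return 1.0  # assumed convention: two empty masks agree perfectly
        return 2.0 * intersection / total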

Results: A Unet_1024 network was used to build the model. The Dice score between the Reader 1 and Reader 2 manual segmentations was 0.952 on the 15 test patients. On the same test set, the Dice score between the U-net model and Reader 1 was 0.846; this score was achieved by averaging the predictions from the 5 trained models. Across the 5 random splits, the average Dice scores ranged from 0.849 to 0.918 on the development sets and from 0.816 to 0.83 on the holdout set. Figure 1 shows (A, D, G) en face OCT images, (B, E, H) segmentation of the GA lesion by Reader 1 (red) and Reader 2 (green), and (C, F, I) segmentation by Reader 1 (red) and the U-net model (yellow).
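
As a sketch of the ensemble step reported above, the snippet below averages per-pixel probability maps from the 5 split-trained models and thresholds the mean to produce a binary GA mask. It is illustrative only: predict_proba stands in for the per-model inference call, and the 0.5 threshold is an assumption not stated in the abstract.

    import numpy as np

    def ensemble_segmentation(models, en_face_image, threshold=0.5):
        # Average the per-pixel GA probability maps from the 5 split-trained
        # U-net models, then binarize the mean map into a segmentation mask.
        # predict_proba is a hypothetical inference method; the 0.5 threshold
        # is an assumed choice, not one reported in the abstract.
        probability_maps = [m.predict_proba(en_face_image) for m in models]
        mean_probability = np.mean(probability_maps, axis=0)
        return (mean_probability >= threshold).astype(np.uint8)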

Conclusions: The results demonstrate the feasibility of GA segmentation using a deep learning approach. The segmentations produced by the model achieved a Dice score comparable to that between the two readers. Further improvement of the model and the inclusion of additional data could significantly reduce the manual effort required for GA segmentation.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.

 

Figure 1. Images A, D, G are OCT en face images. Images B, E, H are the manual segmentations by Reader 1 (red) and Reader 2 (green). Images C, F, I show the Reader 1 segmentation (red) and the model-predicted segmentation (yellow). The corresponding Dice coefficients are shown in the right-hand corner of each image. Image I has the lowest Dice score in the test set.

