July 2018
Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2018
Deep learning-based automatic segmentation of ellipsoid zone defects in optical coherence tomography images of macular telangiectasia type 2
Author Affiliations & Notes
  • Jessica Loo
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Leyuan Fang
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • David Cunefare
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Glenn J Jaffe
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Sina Farsiu
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   Jessica Loo, None; Leyuan Fang, None; David Cunefare, None; Glenn Jaffe, Heidelberg Engineering (C); Sina Farsiu, None
    Support  The Lowy Medical Research Institute and NIH R01 EY022691
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1225.
Abstract

Purpose : To develop an automatic method to detect ellipsoid zone (EZ) defects in retinal spectral domain optical coherence tomography (SD-OCT) images of subjects with macular telangiectasia type 2 (MacTel2).

Methods : A convolutional neural network (CNN) was developed and trained to classify clusters of OCT A-scans as defective or normal. The CNN was trained on baseline images of 78 eyes from the international, multicenter, randomized phase 2 trial of ciliary neurotrophic factor for MacTel2 (NTMT02; Neurotech, Cumberland, RI, USA) and tested on images of 20 eyes, acquired six months post-baseline, whose baseline images were not used for training. During testing, an en face OCT map of the probability that each A-scan was defective was generated from the trained CNN. A threshold was applied to obtain a binary map of EZ defects. This binary map was compared to the binary map obtained from manual segmentation of the images by an expert reader. The Dice similarity coefficient (DSC) between the two maps and the EZ defect areas of both maps were calculated.
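As an illustration of the evaluation step described above, the following minimal Python sketch thresholds the CNN probability map and computes the DSC against the expert's binary map. It assumes NumPy arrays as inputs, and the threshold of 0.5 is illustrative only; the abstract does not state the value used in the study.

import numpy as np

def dice_coefficient(pred, truth):
    # Dice similarity coefficient between two binary en face maps.
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * overlap / total

def evaluate(prob_map, manual_map, threshold=0.5):
    # prob_map: en face map of per-A-scan defect probabilities from the CNN
    # manual_map: binary map from expert manual segmentation
    # threshold: illustrative value; the study's actual threshold is not given in the abstract
    auto_map = prob_map >= threshold  # binary map of EZ defects
    return auto_map, dice_coefficient(auto_map, manual_map)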

Results : Using 5-fold cross-validation, the mean DSC across all 98 eyes was 0.8739. The mean ± standard deviation of the EZ defect area by automatic and manual segmentation was 0.897 ± 0.659 mm² and 0.837 ± 0.660 mm², respectively. The average computation time per image was 0.9 seconds. Figure 1 shows an example of the results. On qualitative assessment of the automatic segmentations, the algorithm was more likely to make mistakes when the retina was obscured by blood vessels.
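For reference, an en face defect area such as those reported above can be obtained from a binary map as the defect-pixel count multiplied by the per-pixel footprint. The sketch below uses placeholder pixel-spacing arguments, since the SD-OCT scan geometry is not given in the abstract.

def ez_defect_area_mm2(binary_map, spacing_x_mm, spacing_y_mm):
    # Area of EZ defects in mm^2: number of defect pixels times the en face pixel footprint.
    # spacing_x_mm and spacing_y_mm are placeholders; actual values depend on the scan protocol.
    return float(binary_map.astype(bool).sum()) * spacing_x_mm * spacing_y_mm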

Conclusions : Overall, there was excellent agreement between the automatic and manual segmentations. Some of the algorithm errors could be associated with “borderline” atrophic images, which are difficult to classify even for an expert performing manual segmentation. The automatic algorithm will be very useful in observational and interventional MacTel2 clinical trials to assess changes in EZ defect area over time.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

 

Figure 1: (a) Manual segmentation binary map. (b) Automatic segmentation probability map. (c) Automatic segmentation binary map. (d) Overlay of (a) and (c) showing true positives (green), false positives (blue) and false negatives (red). (e) OCT image corresponding to the yellow scan line in (a-d) showing where the algorithm has predicted EZ defects (white boxes), true positives (green), false positives (blue) and false negatives (red).
