ARVO Annual Meeting Abstract  |   June 2017
Volume 58, Issue 8
Open Access
Deep convolutional neural networks for automated OCT pathology recognition
Author Affiliations & Notes
  • Daniel B Russakoff
    Voxeleron LLC, San Francisco, California, United States
  • Jonathan D Oakley
    Voxeleron LLC, San Francisco, California, United States
  • Robert Chang
    Byers Eye Institute at Stanford University, Stanford, California, United States
  • Footnotes
    Commercial Relationships: Daniel Russakoff, Voxeleron (E), Voxeleron (P); Jonathan Oakley, Voxeleron (E), Voxeleron (P); Robert Chang, None
    Support: None
Daniel B Russakoff, Jonathan D Oakley, Robert Chang; Deep convolutional neural networks for automated OCT pathology recognition. Invest. Ophthalmol. Vis. Sci. 2017;58(8):668. doi:
Abstract

Purpose : To use deep learning algorithms, with and without automated segmentation-based preprocessing, to classify individual optical coherence tomography (OCT) tomograms as “pathology present” or “pathology absent” based on expert labelling. Deep learning approaches have shown increasing success in automating the analysis of such images. Their limitations include the need for large labelled datasets and substantial compute power, yet deep learning is now capable of matching expert clinicians, as demonstrated with diabetic retinopathy fundus photographs [Gulshan 2016]. One way to overcome the data limitation is intelligent preprocessing, which reduces model complexity and better addresses the bias-variance trade-off inherent in machine learning problems.

Methods : Macular OCT B-scans (Cirrus; Carl Zeiss Meditec, Inc.) from 100 unique patients were used, 20 of whom exhibited evident pathology, including age-related macular degeneration (AMD), epiretinal membrane (ERM), and high myopia. Each B-scan was manually classified, yielding 15,081 negative examples (controls) and 7,284 positive examples (pathology). For preprocessing, automated layer segmentation software (Orion, Voxeleron LLC) was used to crop each B-scan from the internal limiting membrane (ILM) to a fixed offset below Bruch’s membrane. The cropped B-scans were then resampled to a uniform size (Fig 1). The patients were randomly split into training (60%), validation (10%), and test (30%) sets. A deep convolutional neural network (CNN) with 8 layers was trained [2 convolutional layers, 2 pooling layers, 1 dropout layer, 2 hidden layers, and a softmax regression layer]. Another CNN was trained on the same data but without the automated segmentation-based preprocessing.
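
The abstract does not publish code, so the following is only a minimal sketch of what the segmentation-based preprocessing step might look like. The target size, the pixel offset below Bruch’s membrane, and the simple bounding-box crop are all assumptions; in practice the layer boundaries would come from the Orion segmentation output.

    # Hypothetical sketch of the segmentation-based preprocessing:
    # crop each B-scan from the ILM down to a fixed offset below Bruch's
    # membrane, then resample to a uniform size. Target size, offset,
    # and the bounding-box crop are assumptions, not the authors' values.
    import numpy as np
    import tensorflow as tf

    TARGET_H, TARGET_W = 128, 256   # assumed uniform size after resampling

    def crop_and_resample(bscan: np.ndarray,
                          ilm_rows: np.ndarray,
                          bruchs_rows: np.ndarray,
                          offset_px: int = 30) -> np.ndarray:
        """bscan: 2-D grayscale B-scan; ilm_rows / bruchs_rows: per-column
        row indices of the ILM and Bruch's membrane from layer segmentation."""
        top = int(ilm_rows.min())                                  # highest ILM point
        bottom = min(bscan.shape[0], int(bruchs_rows.max()) + offset_px)
        cropped = bscan[top:bottom, :].astype(np.float32)
        # Add a channel axis and resample to the uniform network input size.
        resized = tf.image.resize(cropped[:, :, np.newaxis], (TARGET_H, TARGET_W))
        return resized.numpy()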
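Likewise, the 8-layer CNN is only described at the block level; a minimal Keras sketch consistent with that description might look as follows, where the input size, filter counts, kernel sizes, hidden-unit counts, dropout rate, optimizer, and loss are all assumptions.

    # Hypothetical 8-layer CNN matching the block-level description:
    # 2 convolutional layers, 2 pooling layers, 1 dropout layer,
    # 2 hidden (fully connected) layers, and a softmax output layer.
    # Every hyperparameter below is an assumption.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, (5, 5), activation="relu",
                      input_shape=(128, 256, 1)),          # conv 1 (assumed input size)
        layers.MaxPooling2D((2, 2)),                       # pool 1
        layers.Conv2D(64, (5, 5), activation="relu"),      # conv 2
        layers.MaxPooling2D((2, 2)),                       # pool 2
        layers.Flatten(),
        layers.Dropout(0.5),                               # dropout
        layers.Dense(256, activation="relu"),              # hidden 1
        layers.Dense(64, activation="relu"),               # hidden 2
        layers.Dense(2, activation="softmax"),             # softmax output
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])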

Results : The accuracy of the system, as measured on the test set, was 98.8%, with a sensitivity of 96.9% and a specificity of 99.8%. Without the automated segmentation-based preprocessing, the accuracy dropped to 85.3%, with a sensitivity of 78.7% and a specificity of 88.6% (Fig 2).
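
For reference, accuracy, sensitivity, and specificity follow directly from the confusion-matrix counts shown in Fig 2; a generic computation (not the authors' code, with placeholder counts) is:

    # Generic confusion-matrix metrics: accuracy, sensitivity (true
    # positive rate), and specificity (true negative rate).
    def classification_metrics(tp: int, fp: int, tn: int, fn: int):
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return accuracy, sensitivity, specificity

    # Placeholder counts only, not the study's actual numbers.
    print(classification_metrics(tp=90, fp=2, tn=880, fn=3))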

Conclusions : A deep CNN is capable of accurately differentiating pathological from normal macular OCT B-scans. With automated segmentation-based preprocessing, classification accuracy improved from 85.3% to 98.8% (a 15.8% relative increase), with corresponding gains in sensitivity and specificity.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.

 

Fig 1. Example preprocessed input images (from left to right): ERM, AMD, and two “pathology absent” cases.


 

Fig 2. Confusion matrices with (left) and without (right) the automated segmentation-based preprocessing.

