August 2019
Volume 60, Issue 11
Open Access
ARVO Imaging in the Eye Conference Abstract
Real-time Scene Understanding in Ophthalmic Anterior Segment OCT Images
Author Affiliations & Notes
  • Hessam Roodaki
    Carl Zeiss Meditec, Munich, Germany
    Technische Universität München, Munich, Germany
  • Matthias Grimm
    Technische Universität München, Munich, Germany
  • Nassir Navab
    Technische Universität München, Munich, Germany
    Johns Hopkins University, Baltimore, Maryland, United States
  • Abouzar Eslami
    Carl Zeiss Meditec, Munich, Germany
  • Footnotes
    Commercial Relationships   Hessam Roodaki, Carl Zeiss Meditec (E); Matthias Grimm, None; Nassir Navab, None; Abouzar Eslami, Carl Zeiss Meditec (E)
    Support  None
Investigative Ophthalmology & Visual Science August 2019, Vol.60, PB095.
Abstract

Purpose : Machine learning algorithms are effective and efficient at interpreting medical images and segmenting anatomies. Here we present an approach that goes one step further by gaining scene understanding with modern machine learning techniques. Our method reliably detects anatomies of the anterior segment of the eye in OCT B-scans and implicitly infers the location of acquisition.

Methods : The neural network architecture used in our work is U-Net, a fully convolutional classifier without fully connected layers, with several modifications. Batch renormalization is introduced to increase training efficiency. Squeeze-and-excitation layers are added to model interdependencies between channels. Dilated convolutions are used to enlarge the receptive field of the network. The design emphasizes scene understanding so that the network accurately learns the position of anatomies relative to one another. The ADAM algorithm is used for training, with cross-entropy loss as the cost function. The network classifies each input pixel into one of four classes: cornea, sclera, iris, or background. A spectral-domain OCT system is used for data acquisition. An automated method captures random OCT B-scans with varying parameters such as size, scale, location, and gain from ex vivo porcine eyes to form the training dataset. Multiple images are captured from each location as a form of data augmentation. Annotation of the anatomies in the acquired dataset is initialized by multiple automated algorithms and then manually refined.
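
The implementation is not included with the abstract; the following is a minimal PyTorch sketch of the described components: squeeze-and-excitation layers, dilated convolutions in a fully convolutional U-Net-style encoder-decoder, four output classes, and ADAM training with cross-entropy loss. Layer widths, depth, and the learning rate are illustrative assumptions, and plain batch normalization stands in for batch renormalization, which core PyTorch does not provide.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeExcite(nn.Module):
    """Squeeze-and-excitation: channel-wise feature reweighting."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = x.mean(dim=(2, 3))                              # squeeze: global average pool
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))    # excite: per-channel gates in (0, 1)
        return x * w[:, :, None, None]

class DilatedConvBlock(nn.Module):
    """Two dilated 3x3 convolutions with normalization, ReLU, and an SE layer.
    Note: plain BatchNorm2d is a stand-in for the batch renormalization used
    in the abstract, which stock PyTorch does not provide."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            SqueezeExcite(out_ch),
        )

    def forward(self, x):
        return self.block(x)

class MiniUNet(nn.Module):
    """Small U-Net-style encoder-decoder; fully convolutional, no dense layers."""
    def __init__(self, classes: int = 4):  # cornea, sclera, iris, background
        super().__init__()
        self.enc1 = DilatedConvBlock(1, 32)
        self.enc2 = DilatedConvBlock(32, 64)
        self.bottleneck = DilatedConvBlock(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = DilatedConvBlock(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = DilatedConvBlock(64, 32)
        self.head = nn.Conv2d(32, classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Training setup as described: ADAM with per-pixel cross-entropy loss.
model = MiniUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # illustrative learning rate
criterion = nn.CrossEntropyLoss()

logits = model(torch.randn(1, 1, 256, 256))   # B-scan batch: (N, 1, H, W)
labels = torch.randint(0, 4, (1, 256, 256))   # per-pixel class indices
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```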

Results : In total, 7503 training images and 1136 validation images are used. The network achieves a pixel-classification accuracy of 95.62% over all classes on the entire validation dataset. Inference on a 1024×1024 OCT B-scan takes about 50 milliseconds.
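
The reported figure is overall pixel accuracy, i.e. the fraction of correctly classified pixels across the whole validation set. A minimal sketch of that metric follows; the model and data loader are hypothetical, in the style of the sketch above.

```python
import torch

@torch.no_grad()
def pixel_accuracy(model, loader, device="cpu"):
    """Fraction of correctly classified pixels over an entire dataset."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:  # images: (N, 1, H, W), labels: (N, H, W)
        preds = model(images.to(device)).argmax(dim=1)  # per-pixel class predictions
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total
```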

Conclusions : We have presented a reliable machine learning method for real-time anatomy classification in OCT with acceptable accuracy. The algorithm segments successfully regardless of the input image size, scale, or location. However, to train a neural network effectively for clinical use, gathering large datasets from human subjects or applying domain adaptation is crucial.

This abstract was presented at the 2019 ARVO Imaging in the Eye Conference, held in Vancouver, Canada, April 26-27, 2019.

Figure: Ophthalmic anterior segment OCT B-scans and anatomy classification results. Red represents cornea, green is sclera, and blue shows iris.
