July 2018
Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2018
Comparing the assessment of image quality of Optical Coherence Tomography data with Deep Neural Networks and Image processing
Author Affiliations & Notes
  • Krunalkumar Ramanbhai Patel
    CARIn, Carl Zeiss India, Bangalore, Karnataka, India
  • Keyur Ranipa
    CARIn, Carl Zeiss India, Bangalore, Karnataka, India
  • Namritha Shivaram
    CARIn, Carl Zeiss India, Bangalore, Karnataka, India
  • Footnotes
    Commercial Relationships   Krunalkumar Ramanbhai Patel, CARIn, Carl Zeiss India (E); Keyur Ranipa, CARIn, Carl Zeiss India (E); Namritha Shivaram, CARIn, Carl Zeiss India (E)
    Support  None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1734. doi:

      Krunalkumar Ramanbhai Patel, Keyur Ranipa, Namritha Shivaram; Comparing the assessment of image quality of Optical Coherence Tomography data with Deep Neural Networks and Image processing. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1734.

Abstract

Purpose : Over the last two decades, OCT-based imaging has become popular in ophthalmology. An acquired OCT volume of a human eye consists of multiple frames, and analysis is performed by segmenting retinal layers and then computing measurement parameters. This process depends heavily on the quality and alignment of the acquired scans, so an automated algorithm to assess the quality and alignment of OCT scans would be useful. In this abstract, we compare two approaches for this task: 1. Deep Neural Network (DNN), 2. Image Processing (IP).

Methods : OCT images were acquired with PRIMUS 200 (ZEISS, India) and CIRRUS HD-OCT (ZEISS, Dublin, CA). We used 28,443 B-scans of 1024 x 512 pixels for our evaluation. Ground-truth labels were Good Quality, Poor Quality, Off-center, and Signal Loss. Fig 1 shows example images for each class.
The IP approach segments a region of interest (ROI) containing the retinal layers, then computes the ROI position within the axial field of view (FOV) and the SNR (#ROI pixels / #rest pixels) to detect Off-center and Poor images, respectively. To detect the Signal Loss category, the SNR is evaluated over 16 vertical sectors of the image. The DNN-based approach uses a pre-trained VGG-16 convolutional neural network. Trainable parameters are estimated on a randomly drawn 80% of the data, yielding 22,752 training images. Both approaches are evaluated on the remaining hold-out test set of 5,689 images (4,047 Good, 491 Off-center, 1,018 Signal Loss, 123 Poor).
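The IP heuristics described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ROI thresholding rule, the acceptable center band, and the SNR cutoffs are all made-up assumptions for demonstration.

```python
import numpy as np

def quality_flags(bscan, n_sectors=16, snr_poor=0.05,
                  center_band=(0.25, 0.75), loss_snr=0.02):
    """Hypothetical sketch of the IP approach: threshold the B-scan to get
    an ROI mask of retinal tissue, then derive position and SNR features.
    All thresholds here are illustrative, not the authors' values."""
    # Crude ROI segmentation: pixels brighter than mean + 1 std count as tissue.
    mask = bscan > bscan.mean() + bscan.std()
    n_roi = mask.sum()
    snr = n_roi / max(mask.size - n_roi, 1)   # SNR = #ROI pixels / #rest pixels
    # Axial ROI position: mean row index of ROI pixels, normalized to [0, 1].
    rows = np.nonzero(mask)[0]
    pos = rows.mean() / bscan.shape[0] if rows.size else 0.5
    off_center = not (center_band[0] <= pos <= center_band[1])
    poor = snr < snr_poor
    # Signal loss: evaluate SNR per vertical sector; any dead sector flags loss.
    sectors = np.array_split(mask, n_sectors, axis=1)
    signal_loss = any(
        s.sum() / max(s.size - s.sum(), 1) < loss_snr for s in sectors
    )
    return {"poor": poor, "off_center": off_center, "signal_loss": signal_loss}
```

On a synthetic B-scan with a bright, centered horizontal band, all three flags come back false; shifting the band toward the top triggers `off_center`, and blanking a vertical strip triggers `signal_loss`.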

Results : The DNN-based approach achieved accuracies of 96%, 96%, 88%, and 79% for Good, Poor, Off-center, and Signal Loss images, respectively, while the IP-based approach achieved 91%, 76%, 78%, and 99%. For the binary classification problem, i.e., classifying images as Good or Bad (Poor, Off-center, or Signal Loss), the DNN-based method achieved accuracies of 96% and 87%, compared with 91% and 82% for the IP-based approach. Fig 2 shows the confusion matrices and the ROC curve for both approaches.
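The grouping of the three defect classes into a single Bad class can be illustrated from a confusion matrix. The matrix below is made-up example data, not the study's results; per-class accuracy is computed here as recall (correct predictions over true class count), one plausible reading of the reported figures.

```python
import numpy as np

# Class order assumed: Good, Poor, Off-center, Signal Loss.
def per_class_accuracy(cm):
    """Per-class recall: diagonal divided by the true count of each class."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

def binary_accuracy(cm):
    """Collapse Poor/Off-center/Signal Loss into one Bad class (row/col 0 is Good);
    any Bad subtype predicted as any other Bad subtype still counts as correct."""
    cm = np.asarray(cm, dtype=float)
    return (cm[0, 0] + cm[1:, 1:].sum()) / cm.sum()
```

For example, with a toy matrix `[[8,1,1,0],[1,3,0,0],[0,0,4,0],[0,1,0,3]]`, the Good class recall is 8/10 and the binary accuracy is (8 + 11)/22.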

Conclusions : Based on the results of this study, DNN-based automated OCT image quality evaluation is more reliable than the IP-based approach. This may reduce the dependence of downstream analysis algorithms on OCT image quality.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

 

Fig 1 Sample OCT images

 

Fig 2 Confusion matrices & ROC curve
