ARVO Annual Meeting Abstract  |  June 2017
Volume 58, Issue 8  |  Open Access
Minimizing inter-camera image variation effects on retinal image screening algorithms with autoencoder
Author Affiliations & Notes
  • Niranchana Manivannan
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
  • Jeremy Benson
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
    Computer Science Dept., University of New Mexico, Albuquerque, New Mexico, United States
  • Sheila C Nemeth
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
  • Zyden Jarry
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
  • Trilce Estrada
    Computer Science Dept., University of New Mexico, Albuquerque, New Mexico, United States
  • E Simon Barriga
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
  • Peter Soliz
    VisionQuest Biomedical LLC, Albuquerque, New Mexico, United States
  • Footnotes
    Commercial Relationships   Niranchana Manivannan, VisionQuest Biomedical LLC (E); Jeremy Benson, VisionQuest Biomedical LLC (E); Sheila Nemeth, VisionQuest Biomedical LLC (E); Zyden Jarry, VisionQuest Biomedical LLC (E); Trilce Estrada, University of New Mexico (E); E Simon Barriga, VisionQuest Biomedical LLC (I); Peter Soliz, VisionQuest Biomedical LLC (I)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2017, Vol.58, 4833.

      Niranchana Manivannan, Jeremy Benson, Sheila C Nemeth, Zyden Jarry, Trilce Estrada, E Simon Barriga, Peter Soliz; Minimizing inter-camera image variation effects on retinal image screening algorithms with autoencoder. Invest. Ophthalmol. Vis. Sci. 2017;58(8):4833.

Abstract

Purpose : Screening for retinal diseases has expanded rapidly in the field of telemedicine. A major hurdle to successfully implementing automatic screening is the limited scalability of the algorithms: a single algorithm cannot be applied to images from an arbitrary retinal camera. The purpose of this study is to demonstrate a machine learning technique that normalizes images from any retinal camera so that they can be used for screening without modification of the automatic algorithms.

Methods : A total of 2,352 optic disc-centered retinal images were collected from two retinal cameras (Canon CR1 and CR2). The goal was to emulate the characteristics of the CR2 camera using images from the CR1. The common approach is to crop and interpolate images to fit another camera’s format. Our approach, using an autoencoder, takes spatial (size) and camera (illumination, contrast) characteristics into account when converting an image. A sparse autoencoder model with 700 hidden units was developed, operating on green-channel images. To evaluate quantitative differences between the images, entropy and contrast were calculated using the gray-level co-occurrence matrix (GLCM) for the CR1, CR2, interpolated, and autoencoded images. To evaluate qualitative changes, the CR1, autoencoded, and interpolated images were graded for diabetic retinopathy.
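The abstract gives no implementation details, so the following is only a minimal sketch of the sparse-autoencoder conversion step. It assumes paired green-channel patches from the two cameras, a single 700-unit hidden layer, an L1 sparsity penalty, and a 32x32 patch size; the framework (PyTorch), the loss weights, and the CR1-to-CR2 pairing scheme are all assumptions, not the authors' code.

# Minimal sketch of the camera-conversion step (not the authors' implementation).
# Assumptions: paired green-channel patches, one 700-unit hidden layer,
# an L1 sparsity penalty, and a 32x32 patch size.
import torch
import torch.nn as nn

PATCH = 32                      # hypothetical patch side length
IN_DIM = PATCH * PATCH          # flattened green-channel patch

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim=IN_DIM, hidden=700):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
SPARSITY_WEIGHT = 1e-4          # assumed weight of the L1 penalty

def train_step(cr1_patches, cr2_patches):
    """One update; inputs are (batch, IN_DIM) tensors of intensities in [0, 1]."""
    recon, code = model(cr1_patches)
    # Reconstruct the CR2-format target from the CR1 input, plus a sparsity penalty;
    # the abstract does not state how targets are defined, so this pairing is assumed.
    loss = mse(recon, cr2_patches) + SPARSITY_WEIGHT * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()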

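The GLCM-based contrast and entropy comparison could be computed along the lines below; the GLCM distance, angle, and gray-level count are assumptions, and scikit-image is used only for illustration rather than as the authors' tooling.

# Illustrative GLCM contrast and entropy for a green-channel image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast_entropy(green, levels=256):
    """green: 2-D uint8 green-channel image."""
    # Assumed GLCM parameters: distance 1, angle 0, symmetric and normalized.
    glcm = graycomatrix(green, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    p = glcm[:, :, 0, 0]                      # normalized co-occurrence probabilities
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return contrast, entropy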
Results : Figure 1 shows CR1 images before and after autoencoding. Agreement (Cohen’s kappa) between the CR1 gradings and the gradings of the interpolated and autoencoded images was 0.83 and 0.84, respectively (almost perfect agreement). These results show that the autoencoding algorithm preserves the structural information that influences the grader’s decision while transforming the other image characteristics. The differences in contrast and entropy between the converted images and the target camera format (CR2) were reduced by 14% and 18%, respectively, when autoencoding was used instead of interpolation. This shows that autoencoding brought the contrast and entropy of the converted CR1 images closer to the target (CR2) than interpolation did.
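For reference, the Cohen's kappa agreement reported above can be computed directly from two sets of diabetic retinopathy grades; the grade lists in this sketch are hypothetical placeholders, not study data.

# Illustrative only: Cohen's kappa between two sets of DR grades.
# The grade lists are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

grades_cr1 = [0, 1, 2, 0, 3, 1, 0, 2]           # hypothetical grades of CR1 images
grades_autoencoded = [0, 1, 2, 0, 3, 1, 1, 2]   # hypothetical grades of converted images

print(f"Cohen's kappa: {cohen_kappa_score(grades_cr1, grades_autoencoded):.2f}")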

Conclusions : We demonstrated that our autoencoding model converts images obtained from one camera model to emulate the characteristics of another, enabling them to be used in automatic algorithms that are not compatible with the original camera model.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.

 

Figure 1: CR1 (left) and CR1 autoencoded to CR2 format (right)

