July 2019
Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract
A deep learning approach to patient alignment and retina tracking
Author Affiliations & Notes
  • Muzammi Arshad Arain
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Niranchana Manivannan
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Homayoun Bagherinia
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • David Nolan
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Footnotes
    Commercial Relationships   Muzammi Arain, Carl Zeiss Meditec, Inc. (E); Niranchana Manivannan, Carl Zeiss Meditec, Inc. (E); Homayoun Bagherinia, Carl Zeiss Meditec, Inc. (E); David Nolan, Carl Zeiss Meditec, Inc. (E)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 6120.
      Muzammi Arshad Arain, Niranchana Manivannan, Homayoun Bagherinia, David Nolan; A deep learning approach to patient alignment and retina tracking. Invest. Ophthalmol. Vis. Sci. 2019;60(9):6120.

Abstract

Purpose : Patient alignment is an important step in the operation of any ophthalmic instrument. In the CLARUS™ 500 ultra-widefield fundus imager (ZEISS, Dublin, CA), alignment is guided by both pupil images and widefield infrared (IR) fundus images. Here we report the use of IR preview images for automatic patient alignment and retina tracking, using deep learning (DL) to detect the optic nerve head (ONH).

Methods : Widefield IR preview images were collected with a CLARUS™ 500 during patient alignment at a rate of 10 frames per second. The images were processed by a DL pipeline consisting of a preprocessing block and an optimized U-Net architecture that localizes the ONH. This three-level contraction/expansion network was trained on 128x128 inputs (downsampled by a factor of 24) against images with manually annotated ONH locations. The Dice coefficient was used as the loss function, with a sigmoid activation in the final layer. The U-Net was sparsified to achieve real-time performance. Here we report the application of the algorithm to continuously acquired data from a healthy test subject.
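
The abstract does not include code; as a minimal sketch, the soft Dice coefficient used as the training loss (together with a sigmoid output, which keeps predictions in [0, 1]) can be written as follows. The function names and the smoothing constant `eps` are illustrative choices, not part of the authors' implementation.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Soft Dice coefficient between a predicted probability mask
    (e.g. the sigmoid output of the final U-Net layer) and a binary
    ground-truth ONH mask. Both arrays are flattened before use."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    # eps avoids division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    """Loss minimized during training: 1 - Dice, so perfect overlap -> 0."""
    return 1.0 - dice_coefficient(pred, target)
```

In practice this loss would be expressed in the training framework's tensor operations so it remains differentiable; the NumPy version above only illustrates the arithmetic.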

Results : 274 images were collected within 4.5 minutes at 10 frames per second. Compared to ground truth, the DL algorithm detected the ONH in 234 images (85%) with an error of less than 150 µm, within 100 ms per frame. In 14% of the images, both the grader and the DL algorithm failed to detect the ONH. In the remaining 1%, the algorithm detected the ONH but the grader failed to identify it on the first try; after the algorithm reported a location, the grader was able to verify it by manually increasing the contrast. The detected motion of the ONH was within +/- 3 mm of the reference image, as indicated in Fig. 1.
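
The tracking result in Fig. 1d implies a step the abstract does not spell out: reducing each predicted mask to a single ONH position and comparing it to a reference frame. A minimal sketch of that step is below; the threshold, the centroid reduction, and the `um_per_px` scale factor are assumptions for illustration, not the authors' documented method.

```python
import numpy as np

def onh_centroid(prob_mask, threshold=0.5):
    """Reduce a predicted ONH probability mask to a (row, col) centroid.
    Returns None when no pixel clears the threshold (detection failure)."""
    ys, xs = np.nonzero(prob_mask >= threshold)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def displacement_um(curr, ref, um_per_px):
    """Euclidean displacement of the detected ONH from the reference
    frame, converted from pixels to micrometres via an assumed scale."""
    dy = (curr[0] - ref[0]) * um_per_px
    dx = (curr[1] - ref[1]) * um_per_px
    return (dx ** 2 + dy ** 2) ** 0.5
```

Applied per frame, the sequence of centroids gives the motion trace during alignment, and the per-frame displacement from ground truth gives the error represented by the circle sizes in Fig. 1d.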

Conclusions : We demonstrated the feasibility of a DL algorithm for detecting the ONH in a series of fundus images acquired during patient alignment. This method can enable faster and more accurate patient alignment during subsequent patient visits and during repeated acquisitions.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.

 

Fig. 1: Examples of a) ONH detection failure by both the DL algorithm and the human grader, b) failure by the human grader on the first try, and c) successful detection by both the human grader and the DL algorithm. The graph in d) shows tracking results, with the centers of the circles indicating the motion of the detected ONH during alignment and their size representing the detection error relative to ground truth.
