Open Access
ARVO Annual Meeting Abstract | July 2018
A machine learning approach to optic nerve head detection in widefield fundus images
Author Affiliations & Notes
  • Kevin Meng
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Conor Leahy
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Homayoun Bagherinia
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Gary C Lee
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Luis De Sisternes
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Nathan Shemonski
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Footnotes
    Commercial Relationships   Kevin Meng, Carl Zeiss Meditec, Inc. (E); Conor Leahy, Carl Zeiss Meditec, Inc. (E); Homayoun Bagherinia, Carl Zeiss Meditec, Inc. (E); Gary Lee, Carl Zeiss Meditec, Inc. (E); Luis De Sisternes, Carl Zeiss Meditec, Inc. (E); Nathan Shemonski, Carl Zeiss Meditec, Inc. (E)
    Support  None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1727. doi:
Kevin Meng, Conor Leahy, Homayoun Bagherinia, Gary C Lee, Luis De Sisternes, Nathan Shemonski; A machine learning approach to optic nerve head detection in widefield fundus images. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1727.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Identification of the optic nerve head (ONH) is important to computer-aided analysis of retinal images. Identifying the ONH poses many challenges; for example, lens reflexes or lesions in the eye can be falsely identified as the ONH. We present a robust machine-learning approach to identifying the ONH in widefield fundus images by classifying a number of hand-crafted features.

Methods : Many pixel-level features can be derived from a widefield fundus image, but our approach focused on a few simple properties of the ONH, such as rotational invariance and brightness variance. The features take the form of heatmaps of feature prominence across regions of the image. We trained a random forest classifier from the scikit-learn Python package; input images were weighted by the classifier and the location of the ONH in each image was estimated. We used a dataset of 397 color fundus images with a wide variety of diseases, captured by a CLARUS™ 500 instrument (ZEISS, Dublin, CA) and randomly sampled into training and test sets (50/50 split). Success was defined as the estimated ONH region overlapping a 3 mm region (up to 3 pixels in radius) around the ground-truth ONH position. Ground truth was determined by a human grader visually identifying the ONH location.
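
As a rough illustration of this pipeline (a sketch, not the authors' implementation), the code below derives two simple per-pixel feature heatmaps on grayscale images, local mean brightness and local brightness variance, trains a scikit-learn RandomForestClassifier on them, and takes the argmax of the smoothed probability map as the ONH estimate. The feature choices, window sizes, label radius, and forest parameters are illustrative assumptions; the rotational-invariance feature mentioned above is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def feature_heatmaps(img, size=15):
    """Per-pixel feature heatmaps: local mean brightness and local brightness
    variance (Var = E[x^2] - E[x]^2 over a sliding window). Both are simple
    stand-ins for the hand-crafted features described in the abstract."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    return np.stack([mean, var], axis=-1)  # shape (H, W, 2)

def pixel_labels(shape, onh_xy, radius=30):
    """Binary mask marking pixels within `radius` pixels of the annotated
    ONH centre; the radius is an assumed value, not taken from the abstract."""
    yy, xx = np.indices(shape)
    return ((xx - onh_xy[0]) ** 2 + (yy - onh_xy[1]) ** 2 <= radius ** 2).astype(int)

def train_onh_classifier(images, onh_centres):
    """Fit a random forest that scores each pixel's likelihood of lying on the ONH."""
    X = [feature_heatmaps(img).reshape(-1, 2) for img in images]
    y = [pixel_labels(img.shape, c).ravel() for img, c in zip(images, onh_centres)]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.concatenate(X), np.concatenate(y))
    return clf

def locate_onh(clf, img):
    """Estimate the ONH position as the argmax of the smoothed probability map."""
    prob = clf.predict_proba(feature_heatmaps(img).reshape(-1, 2))[:, 1]
    prob = uniform_filter(prob.reshape(img.shape), size=15)  # suppress isolated peaks
    y, x = np.unravel_index(np.argmax(prob), img.shape)
    return x, y
```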

Results : With our current set of features, a 95.6% success rate was achieved. Table 1 shows the breakdown of our results.

Conclusions : A machine-learning approach to identifying the ONH in widefield fundus images is feasible and robust. The approach can also be applied to grayscale images, with potential applications to optical coherence tomography angiography (OCT-A) and fundus autofluorescence (FAF) images. The success rate may be further improved as additional features are defined.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

Figure 1: (a) Sixteen widefield fundus images taken with the CLARUS 500. (b) Corresponding predicted ONH locations overlaid with the ground truth. Red = predicted; green = ground truth; yellow = regions of overlap.

Table 1: Results of the model predicting the ONH location for 183 test images (trained on 217 images). TP: model located the ONH correctly. FP: model located the ONH incorrectly. FN: model was unable to locate the ONH. TN: model did not find an ONH and no ONH was present; TN is always 0 because the ONH is present in every image.
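
The following is a minimal sketch of how the Table 1 counts and the reported success rate could be computed, assuming predictions are (x, y) coordinates or None when the model is unable to locate the ONH. The pixel tolerance here is a hypothetical stand-in for the 3 mm overlap criterion defined in the Methods.

```python
import math

def evaluate(predictions, ground_truths, tol=30.0):
    """Count TP/FP/FN over the test set. `tol` (pixels) is an assumed
    stand-in for the 3 mm overlap criterion; TN is omitted because the
    ONH is present in every image, so TN is always 0."""
    tp = fp = fn = 0
    for pred, gt in zip(predictions, ground_truths):
        if pred is None:
            fn += 1                       # model unable to locate the ONH
        elif math.dist(pred, gt) <= tol:
            tp += 1                       # located correctly (within tolerance)
        else:
            fp += 1                       # located, but in the wrong place
    return tp, fp, fn, tp / len(ground_truths)
```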
