Kevin Meng, Conor Leahy, Homayoun Bagherinia, Gary C Lee, Luis De Sisternes, Nathan Shemonski; A machine learning approach to optic nerve head detection in widefield fundus images. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1727.
Identification of the optic nerve head (ONH) is important for computer-aided analysis of retinal images. Identifying the ONH poses many challenges: lens reflexes or lesions in the eye, for example, can be falsely identified as the ONH. We present a robust machine-learning approach to identifying the ONH in wide-field fundus images by classifying a number of hand-crafted features.
Many pixel-level features can be derived from a wide-field fundus image, but our approach focuses on a few simple features of the ONH, such as rotational invariance and brightness variance. Each feature takes the form of a heatmap of feature prominence across regions of the image. We trained a random forest classifier from the scikit-learn Python package; input images are weighted by the classifier, and the location of the ONH in the image is estimated. We used a dataset of 397 color fundus images spanning a wide variety of diseases, captured by a CLARUS™ 500 instrument (ZEISS, Dublin, CA) and randomly sampled into training and test sets (50/50 split). Success was defined as the estimated ONH region overlapping a 3 mm region (up to 3 pixels radius) around the ground-truth ONH position. Ground truth was determined by a human visually identifying the ONH location.
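The pipeline above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature values, region counts, and candidate scoring below are hypothetical placeholders standing in for the hand-crafted heatmap features (e.g. rotational invariance and brightness variance) described in the abstract.

```python
# Hypothetical sketch of the described pipeline: hand-crafted per-region
# feature vectors are classified with a scikit-learn random forest, and
# the highest-scoring region is taken as the estimated ONH location.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder training data: rows are candidate image regions; columns are
# feature-heatmap responses (e.g. rotational symmetry, brightness variance).
X_onh     = rng.normal(loc=1.0, scale=0.2, size=(100, 2))  # ONH-like regions
X_non_onh = rng.normal(loc=0.0, scale=0.2, size=(100, 2))  # background regions
X = np.vstack([X_onh, X_non_onh])
y = np.array([1] * 100 + [0] * 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score candidate regions of a new image; the region with the highest
# predicted ONH probability becomes the location estimate.
candidates = np.array([[0.9, 1.1], [0.1, -0.1], [0.05, 0.2]])
probs = clf.predict_proba(candidates)[:, 1]
best_region = int(np.argmax(probs))
```

In practice each candidate region's feature vector would be read off the feature heatmaps rather than generated synthetically as above.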
With our current set of features, a 95.6% success rate was achieved. Table 1 shows the breakdown of our results.
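The success criterion can be made concrete with a short sketch. This is an assumed parameterization of the overlap rule stated above (a prediction counts as correct when it falls within a small radius of the ground-truth position); the function names and the radius handling are illustrative, not the authors' definition.

```python
# Hypothetical implementation of the success criterion: a predicted ONH
# location is a hit if it lies within `radius_px` pixels of ground truth.
import math

def onh_hit(pred_xy, truth_xy, radius_px=3.0):
    """True when the predicted location overlaps the ground-truth region."""
    dx = pred_xy[0] - truth_xy[0]
    dy = pred_xy[1] - truth_xy[1]
    return math.hypot(dx, dy) <= radius_px

def success_rate(preds, truths, radius_px=3.0):
    """Fraction of test images whose prediction satisfies the criterion."""
    hits = sum(onh_hit(p, t, radius_px) for p, t in zip(preds, truths))
    return hits / len(preds)

# Toy example: one hit (distance sqrt(2) <= 3) and one miss.
rate = success_rate([(0, 0), (10, 10)], [(1, 1), (0, 0)])
```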
A machine-learning approach to identifying the ONH in wide-field fundus images is feasible and robust. The approach can be applied to grayscale images, with applications to optical coherence tomography angiography (OCT-A) and fundus autofluorescence (FAF) images. The success rate may improve as additional features are defined.
This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
Figure 1: a) Sixteen wide-field fundus images taken with the CLARUS 500. b) Corresponding chart of predicted ONH locations with the ground truths overlaid. Red = predicted; green = ground truth; yellow = regions of overlap.
Table 1: Results of our model predicting the ONH location in 183 test images, trained on 217 images. TP: model located the ONH correctly. FP: model located the ONH incorrectly. FN: model was unable to locate the ONH. TN: model did not find the ONH and no ONH was present; TN is always 0 because the ONH is present in every image.