ARVO Annual Meeting Abstract  |   June 2017
A machine learning approach to determine refractive errors of the eye
Author Affiliations & Notes
  • Arne Ohlendorf
    Technology and Innovation, Carl Zeiss Vision International GmbH, Aalen, Germany
    Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
  • Alexander Leube
    Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
  • Christian Leibig
    Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
  • Siegfried Wahl
    Technology and Innovation, Carl Zeiss Vision International GmbH, Aalen, Germany
    Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
  • Footnotes
    Commercial Relationships   Arne Ohlendorf, Carl Zeiss Vision International GmbH (F); Alexander Leube, None; Christian Leibig, None; Siegfried Wahl, Carl Zeiss Vision International GmbH (F)
    Support  None
Investigative Ophthalmology & Visual Science, June 2017, Vol. 58, 1136.
Abstract

Purpose : Because the optimal correction of refractive errors must account for neural contributions to perception, refractive errors of the eye are traditionally assessed by subjective measurement or by mechanistic models such as image quality metrics. The aim of this research was to explore whether a machine learning approach can identify the perceptually optimal sphero-cylindrical refractive correction, judged against subjective measurements.

Methods : Data from 460 eyes of 230 subjects with a mean age of 33.1 ± 11.5 years (range: 18 to 78 years) were included in the study. Subjective assessment of the non-cycloplegic refractive errors under monocular conditions was performed by two optometrists using a digital phoropter (ZEISS Visuphor 500, Carl Zeiss Vision GmbH, Germany) and SLOAN optotypes. Wavefront aberrations were approximated by Zernike polynomials up to the 7th radial order (i.Profiler plus, Carl Zeiss Vision). A multilayer perceptron (MLP) with two hidden layers was trained to predict the sphero-cylindrical refractive correction from 37-dimensional feature vectors (36 Zernike coefficients + pupil diameter). All data were used for training and testing via 10-fold cross-validation, as sketched in the example below. Bland-Altman analysis was performed to investigate the mean differences and the 95% Limits of Agreement (LoA) between the refractive components (M, J0 and J45) of the MLP predictions and of independent subjective refractions.
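
The abstract specifies only that the MLP has two hidden layers and takes a 37-dimensional input; the sketch below illustrates such a setup with scikit-learn, where the hidden-layer sizes, hyperparameters, and random placeholder data are illustrative assumptions rather than the authors' configuration. The power-vector conversion follows the standard Thibos notation (M = S + C/2, J0 = -(C/2)·cos 2α, J45 = -(C/2)·sin 2α).

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def power_vector(sphere, cyl, axis_deg):
        # Convert sphero-cylindrical notation (S, C, axis) to the
        # Thibos power vector (M, J0, J45), all in dioptres.
        ax = np.deg2rad(axis_deg)
        m = sphere + cyl / 2.0
        j0 = -(cyl / 2.0) * np.cos(2.0 * ax)
        j45 = -(cyl / 2.0) * np.sin(2.0 * ax)
        return np.stack([m, j0, j45], axis=-1)

    # Random placeholder data standing in for the real measurements:
    # X = 36 Zernike coefficients + pupil diameter per eye,
    # y = subjective power vectors (M, J0, J45).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(460, 37))
    y = rng.normal(size=(460, 3))

    # Two hidden layers as stated in the abstract; the sizes are assumed.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    )

    # 10-fold cross-validation: each eye is predicted by a model
    # that never saw it during training.
    y_pred = cross_val_predict(model, X, y, cv=10)

    # Per-component mean squared error in D², as reported in the Results.
    mse = ((y_pred - y) ** 2).mean(axis=0)
    print(dict(zip(["M", "J0", "J45"], np.round(mse, 3))))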

Results : Using the machine learning approach to predict the power vectors of refraction, the mean squared error (MSE) of the prediction (M: MSE = 0.17 D²; J0: MSE = 0.03 D²; J45: MSE = 0.03 D²) was in a similar range to the resolution commonly used for subjective prescriptions (M: 0.14 D²; J0 and J45: 0.02 D²). 95% of the predictions were contained within ±0.83 D (M), ±0.35 D (J0) and ±0.34 D (J45), respectively. These intervals are comparable to those previously observed for both mechanistic models and subjective assessments of refraction. The bias with respect to the subjective measurements was close to zero for all three vector components (ΔM = 0.00 D, ΔJ0 = 0.01 D and ΔJ45 = 0.02 D).
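
For reference, a minimal sketch of the Bland-Altman statistics quoted above: the bias is the mean of the prediction-minus-subjective differences, and the 95% limits of agreement are bias ± 1.96 times the standard deviation of those differences. The variables y_pred and y are those from the cross-validation sketch in the Methods.

    def bland_altman(pred, subj):
        # Mean difference (bias) and 95% limits of agreement for one
        # power-vector component; differences taken as pred - subj.
        diff = np.asarray(pred) - np.asarray(subj)
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)

    for i, name in enumerate(["M", "J0", "J45"]):
        bias, (lo, hi) = bland_altman(y_pred[:, i], y[:, i])
        print(f"{name}: bias = {bias:+.2f} D, 95% LoA = [{lo:+.2f} D, {hi:+.2f} D]")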

Conclusions : The neural network based prediction of the refractive correction led to power vectors of refraction comparable to those obtained from subjective measurements.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.
