Investigative Ophthalmology & Visual Science
July 2018
Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2018
A novel Bayesian adaptive method for mapping the visual field
Author Affiliations & Notes
  • Pengjing Xu
    Psychology, The Ohio State University, Columbus, Ohio, United States
  • Luis A Lesmes
    Adaptive Sensory Technology, San Diego, California, United States
  • Deyue Yu
    College of Optometry, The Ohio State University, Columbus, Ohio, United States
  • Zhong-Lin Lu
    Psychology, The Ohio State University, Columbus, Ohio, United States
  • Footnotes
    Commercial Relationships   Pengjing Xu, None; Luis Lesmes, Adaptive Sensory Technology (I), Adaptive Sensory Technology (E), Adaptive Sensory Technology (P); Deyue Yu, None; Zhong-Lin Lu, Adaptive Sensory Technology (I), Adaptive Sensory Technology (P)
  • Footnotes
    Support   National Eye Institute (EY021553, EY025658)
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1266. doi:

      Pengjing Xu, Luis A Lesmes, Deyue Yu, Zhong-Lin Lu; A novel Bayesian adaptive method for mapping the visual field. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1266.

Abstract

Purpose: Measuring visual functions (light and contrast sensitivity, visual acuity, reading speed, crowding) across retinal locations provides visual field maps (VFMs) that are valuable for detecting and managing eye disease. Mapping light sensitivity is standard in glaucoma care, but the assessment is noisy (Keltner et al., 2000), and mapping other visual functions is difficult. To improve the precision of light sensitivity mapping and to enable VFM testing of other visual functions, we developed a novel hybrid Bayesian adaptive testing framework. This study validates the quick VFM (qVFM) method for measuring light sensitivity across the visual field.

Methods: The qVFM combines a parametric approach for a preliminary assessment of the VFM's shape with a non-parametric approach for assessing individual visual field (VF) locations. In both simulation and psychophysics studies, we sampled 100 VF locations (60 × 60 deg) and compared the performance of the qVFM with a qYN procedure that tested each location independently (Lesmes et al., 2015). Subjects were cued to report a target dot's presence or absence, with its luminance adaptively adjusted on each trial. Simulated runs of 300 trials (for both qVFM and qYN) were used to compare the accuracy and precision of the two methods. In addition, data were collected from six eyes (3 OS, 3 OD) of four normal subjects.
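To make the trial-by-trial logic concrete, the sketch below implements a simplified single-location Bayesian adaptive yes/no test: a posterior over candidate thresholds is updated after each response, and the next stimulus is chosen to minimize expected posterior entropy. This is an illustration of the general adaptive testing idea only, not the published qVFM implementation; the qVFM additionally couples all 100 locations through a parametric model of the VFM's shape, and the function names, parameter values, and stimulus-selection rule here are assumptions.

    # Minimal sketch of one-location Bayesian adaptive yes/no threshold
    # estimation (illustrative only; not the published qVFM or qYN code).
    import numpy as np

    def p_yes(stim_db, thresh_db, slope=1.0, guess=0.03, lapse=0.03):
        """P(report 'yes') for a target at stim_db given a threshold of thresh_db.
        Higher dB means a dimmer target, so detection falls as stim_db rises."""
        detect = 1.0 / (1.0 + np.exp(slope * (stim_db - thresh_db)))
        return guess + (1.0 - guess - lapse) * detect

    thresholds = np.linspace(0.0, 40.0, 201)   # hypothesis space for the threshold (dB)
    stim_levels = np.linspace(0.0, 40.0, 41)   # available target luminances (dB)
    posterior = np.full(thresholds.shape, 1.0 / thresholds.size)  # uniform prior

    def expected_entropy(stim, posterior):
        """Expected posterior entropy after showing `stim` (information-gain rule)."""
        yes_lik = p_yes(stim, thresholds)
        p_yes_marg = np.sum(posterior * yes_lik)
        entropy = 0.0
        for resp_prob, lik in ((p_yes_marg, yes_lik), (1.0 - p_yes_marg, 1.0 - yes_lik)):
            post = posterior * lik
            post /= post.sum()
            entropy += resp_prob * -np.sum(post * np.log(post + 1e-12))
        return entropy

    rng = np.random.default_rng(0)
    true_threshold = 25.0                      # simulated observer's threshold (dB)
    for _ in range(150):
        # Choose the most informative stimulus, simulate a response, update beliefs.
        stim = stim_levels[np.argmin([expected_entropy(s, posterior) for s in stim_levels])]
        response = rng.random() < p_yes(stim, true_threshold)
        lik = p_yes(stim, thresholds) if response else 1.0 - p_yes(stim, thresholds)
        posterior = posterior * lik
        posterior /= posterior.sum()

    print("posterior-mean threshold estimate (dB):", np.sum(posterior * thresholds))

In the full method described above, the per-location posteriors would first be informed by the parametric assessment of the VFM's shape and then refined non-parametrically at each of the 100 locations.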

Results: Simulations showed that the bias and standard deviation (SD) of the estimated thresholds (in dB, where dB = -10 × log10(luminance in asb / 10000)) were 0.049 and 0.63 dB for the qVFM, and 0.21 and 0.85 dB for the qYN. Estimates of within-run variability (68.2% half-widths of the credible intervals, HWCIs) were comparable to cross-run variability (SD). For the human subjects, the average HWCI of the qVFM estimates decreased from 7.65 dB on the first trial to 0.51 dB after 150 trials and 0.41 dB after 300 trials. The root mean squared error (RMSE) between light sensitivity estimates from the qVFM and qYN methods started at 1.95 dB on the first trial and decreased to 1.51 dB after 150 qVFM trials and to 1.08 dB after 300 trials.
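For reference, the dB scale quoted above follows the conversion stated in the Results; the short helper below (its name is illustrative) makes the mapping from luminance to dB explicit.

    # Conversion used for the reported thresholds, as defined in the Results:
    # dB = -10 * log10(luminance in asb / 10000). The function name is illustrative.
    import math

    def luminance_to_db(luminance_asb):
        """Map target luminance in apostilbs (asb) to the perimetric dB scale."""
        return -10.0 * math.log10(luminance_asb / 10000.0)

    print(luminance_to_db(10000.0))  # 0 dB  (maximum-luminance target)
    print(luminance_to_db(1000.0))   # 10 dB
    print(luminance_to_db(10.0))     # 30 dB (a much dimmer target)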

Conclusions: The qVFM provides accurate, precise, and efficient mapping of light sensitivity. The method can be extended to map other visual functions, with potential clinical applications in monitoring vision loss, evaluating therapeutic interventions, and developing effective rehabilitation for low vision.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

 
