ARVO Annual Meeting Abstract  |   June 2020
Mapping visual field with Bayesian-adaptive eye-tracking qVFM method
Author Affiliations & Notes
  • Pengjing Xu
    College of Optometry, The Ohio State University, Columbus, Ohio, United States
  • Zhong-Lin Lu
    Division of Arts and Sciences, NYU Shanghai, Shanghai, China
    Center for Neural Science and Department of Psychology, New York University, New York, New York, United States
  • Deyue Yu
    College of Optometry, The Ohio State University, Columbus, Ohio, United States
  • Footnotes
    Commercial Relationships   Pengjing Xu, The Ohio State University (P); Zhong-Lin Lu, Adaptive Sensory Technology (I), The Ohio State University (P); Deyue Yu, The Ohio State University (P)
  • Footnotes
    Support  EY025658, EY021553
Investigative Ophthalmology & Visual Science June 2020, Vol. 61, 4618.
      Pengjing Xu, Zhong-Lin Lu, Deyue Yu; Mapping visual field with Bayesian-adaptive eye-tracking qVFM method. Invest. Ophthalmol. Vis. Sci. 2020;61(7):4618.

      Download citation file:


      © ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

Purpose : A Bayesian adaptive qVFM method has recently been developed to provide accurate, precise, and efficient visual field maps (VFMs) of visual functions such as light sensitivity and contrast sensitivity (Xu et al., 2019; in press). In the initial implementations of the qVFM method, the observer's response was obtained through button/key press or verbal report. To simplify the response process and improve the usability of the qVFM method, we implemented it using an eye-movement response in an nAFC light detection task.

Methods : Six eyes (3 OS, 3 OD) of three observers were tested using both the eye-tracking and original qVFM methods (YN task with key press). Four measurements (200 trials/measurement) of the VFM (36 testing locations, evenly sampled across a visual field of 36°×36°) were obtained using each method. The target stimulus was a light disc (Goldmann size III) with luminance adaptively adjusted in each trial. In the eye-tracking qVFM test, the observer was instructed to make a saccade to the target location, and the saccade direction was used to determine the accuracy of the response (correct if within ±22.5° of the target direction). In the YN qVFM test, the observer reported the presence or absence of the target at a cued location with a key press.
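The saccade-scoring rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and coordinate conventions (screen coordinates relative to a fixation point) are assumptions. It marks a response correct when the saccade direction falls within ±22.5° of the direction from fixation to the target location.

```python
import math

def saccade_correct(fix_x, fix_y, sac_x, sac_y, tgt_x, tgt_y, tol_deg=22.5):
    """Score a saccade response: correct if its direction from fixation
    is within +/- tol_deg of the direction to the target location
    (the +/-22.5 deg criterion stated in the Methods)."""
    sac_angle = math.degrees(math.atan2(sac_y - fix_y, sac_x - fix_x))
    tgt_angle = math.degrees(math.atan2(tgt_y - fix_y, tgt_x - fix_x))
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (sac_angle - tgt_angle + 180.0) % 360.0 - 180.0
    return abs(diff) <= tol_deg
```

For example, a saccade landing slightly above a target straight to the right of fixation (angular error ≈ 17°) would be scored correct, while a saccade straight upward (error 90°) would not.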

Results : The average estimated volume under the surface of the VFM, a summary metric of the measured VFM, was 331 dB and 330 dB after 200 trials for the eye-tracking and YN qVFM methods, respectively. The agreement between the estimated VFMs from the two methods was evaluated by the root mean square error, which started at 3.97 dB, decreased to 1.14 dB after 100 trials, and to 0.50 dB after 200 trials. The average within-run variabilities (68.2% half-width of the credible interval, HWCI) of the eye-tracking and YN qVFM estimates decreased from 3.59 dB on the first trial to 0.62 dB and 0.85 dB after 100 trials, and to 0.50 dB and 0.68 dB after 200 trials, respectively. The repeated-run variabilities of the eye-tracking and YN qVFM estimates were comparable to the HWCIs (0.53 dB and 0.80 dB after 100 trials, and 0.38 dB and 0.36 dB after 200 trials, respectively).

Conclusions : The eye-tracking qVFM method provides a more user-friendly mapping of light sensitivity. The method could find potential clinical applications in monitoring vision loss, evaluating therapeutic interventions, and developing effective rehabilitation for people with visual impairment.

This is a 2020 ARVO Annual Meeting abstract.

 

Figure 1: Estimated VFMs (OS and OD) from one human observer.