ARVO Annual Meeting Abstract  |  September 2016
Volume 57, Issue 12
Open Access
Simulating Reduced Acuity in Low Vision: Validation of Two Models
Author Affiliations & Notes
  • Quan Lei
    Psychology, University of Minnesota Twin Cities, Minneapolis, Minnesota, United States
  • Daniel Kersten
    Psychology, University of Minnesota Twin Cities, Minneapolis, Minnesota, United States
  • William Thompson
    School of Computing, University of Utah, Salt Lake City, Utah, United States
  • Gordon E Legge
    Psychology, University of Minnesota Twin Cities, Minneapolis, Minnesota, United States
  • Footnotes
    Commercial Relationships   Quan Lei, None; Daniel Kersten, None; William Thompson, None; Gordon Legge, None
    Support  NIH Grant EY017835
Investigative Ophthalmology & Visual Science September 2016, Vol. 57, 634.
Abstract

Purpose : Low vision is associated with a wide range of acuity reduction. Simulating visual information loss due to acuity reduction can provide tools for visualization of the real-world challenges faced by people with low vision. The purpose of this study is to validate and compare two image-processing models, one linear and the other nonlinear, that aim to simulate acuity loss.

Methods : Both models implement a spatial-frequency filter based on a contrast-sensitivity function (CSF), shifted along the frequency axis to represent acuity reduction. The linear model scales the spatial-frequency content of an image by the ratio of the shifted to the unshifted CSF. The nonlinear model, based on Peli (1990), decomposes an image into a discrete set of frequency bands and applies a hard threshold to each band using the acuity-shifted CSF. Both models were tested psychophysically on subjects with normal vision in a letter-recognition task. Sloan letters filtered by one of the models were presented individually on each trial, simulating levels of visual acuity from 0 (normal) to 1.5 logMAR. To investigate how stimulus contrast interacts with acuity loss, the Michelson contrast of the letters was 20%, 50%, or 80%. Eight subjects completed 1800 trials with letters varying in nominal logMAR size, contrast, and simulated acuity loss. Effective acuity was measured as the logMAR letter size yielding 75% correct performance at each combination of stimulus contrast and simulated acuity loss.
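For concreteness, below is a minimal Python sketch of the two filters as described above. The abstract does not specify the CSF formula, band structure, or viewing geometry, so everything here is an assumption: the CSF is stood in by the classic Mannos & Sakrison (1974) model scaled by an assumed peak sensitivity, the nonlinear model is a simplified octave-band version of Peli's scheme, and all function names and parameters (csf, linear_acuity_filter, nonlinear_acuity_filter, ppd) are hypothetical, not the authors' implementation.

```python
# Sketch of the two acuity-reduction filters; see assumptions noted above.
import numpy as np

def csf(f, peak_sensitivity=200.0):
    """Contrast sensitivity at spatial frequency f (cycles/degree).

    Stand-in CSF: Mannos & Sakrison (1974), which peaks near 8 c/deg,
    scaled by an assumed peak sensitivity. Not the authors' actual CSF.
    """
    return peak_sensitivity * 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def _radial_freq(shape, ppd):
    """Radial spatial frequency (cycles/degree) of each FFT coefficient,
    assuming `ppd` pixels per degree of visual angle."""
    fy = np.fft.fftfreq(shape[0]) * ppd
    fx = np.fft.fftfreq(shape[1]) * ppd
    return np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

def linear_acuity_filter(image, logmar, ppd):
    """Linear model: attenuate each frequency by CSF_shifted(f) / CSF(f).

    Shifting the CSF down the log-frequency axis by `logmar` log units
    is equivalent to evaluating the unshifted CSF at f * 10**logmar.
    """
    f = _radial_freq(image.shape, ppd)
    gain = np.clip(csf(f * 10 ** logmar) / csf(f), 0.0, 1.0)
    gain[0, 0] = 1.0  # pass mean luminance through unchanged
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

def nonlinear_acuity_filter(image, logmar, ppd, n_bands=6):
    """Nonlinear model: octave-band decomposition with a hard threshold.

    Loosely follows Peli (1990): each band is kept only where its local
    contrast (simplified here to band amplitude over mean luminance)
    exceeds the detection threshold 1/CSF at the band's center frequency,
    with the CSF shifted to the simulated acuity. Frequencies outside
    the band stack are discarded in this simplified sketch.
    """
    F = np.fft.fft2(image)
    f = _radial_freq(image.shape, ppd)
    mean_lum = image.mean()
    out = np.full(image.shape, mean_lum)
    centers = (ppd / 2) / 2.0 ** np.arange(1, n_bands + 1)  # octave spacing
    for fc in centers:
        mask = (f >= fc / np.sqrt(2)) & (f < fc * np.sqrt(2))
        band = np.real(np.fft.ifft2(F * mask))
        threshold = 1.0 / csf(fc * 10 ** logmar)  # acuity-shifted CSF
        band[np.abs(band) / mean_lum < threshold] = 0.0
        out += band
    return out

# Example: simulate 1.0 logMAR acuity loss at an assumed 60 pixels/degree.
img = np.random.rand(256, 256)
sim_linear = linear_acuity_filter(img, logmar=1.0, ppd=60)
sim_nonlinear = nonlinear_acuity_filter(img, logmar=1.0, ppd=60)
```

As a sanity check on the linear model, setting logmar=0 makes the gain identically 1, so the image passes through unchanged, matching the "normal" end of the simulated acuity range.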

Results : Both models successfully simulated different levels of acuity reduction. For both models and all subjects, measured acuity regressed linearly on simulated acuity with a slope close to unity, with the nonlinear model falling closer to the theoretical unity line. Measured acuity also showed a clear contrast dependence: acuity dropped steadily as letter contrast was reduced, an effect that was more pronounced for the nonlinear model. Informal observation suggested that the nonlinear model is more robust than the linear one to moderate variations in viewing conditions.

Conclusions : Acuity loss in low vision can be simulated by either a linear or a nonlinear model with qualitatively similar performance characteristics. Both models provide a tool for visualizing and investigating the visual challenges commonly encountered by people with low vision in the real world. Further work is needed to determine which of the two models is better suited to specific applications.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.
