Visual Psychophysics and Physiological Optics  |   October 2024
Quantifying the Functional Relationship Between Visual Acuity and Contrast Sensitivity Function
Author Affiliations & Notes
  • Zhong-Lin Lu
    Division of Arts and Sciences, NYU Shanghai, Shanghai, China
    Center for Neural Science, New York University, New York, New York, United States
    Department of Psychology, New York University, New York, New York, United States
    NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China
  • Yukai Zhao
    Center for Neural Science, New York University, New York, New York, United States
  • Luis Andres Lesmes
    Adaptive Sensory Technology Inc., San Diego, California, United States
  • Michael Dorr
    Adaptive Sensory Technology Inc., San Diego, California, United States
  • Correspondence: Zhong-Lin Lu, 4 Washington Pl, New York, NY 10003, USA; zhonglin@nyu.edu
Investigative Ophthalmology & Visual Science October 2024, Vol.65, 33. doi:https://doi.org/10.1167/iovs.65.12.33
Abstract

Purpose: Studies have reported that individuals with certain ocular disorders may have significant decreases in contrast sensitivity function (CSF) despite having normal or near normal visual acuity (VA). This study seeks to elucidate this phenomenon by investigating the relationship between VA and CSF.

Methods: We analyzed data from 14 eyes tested with the Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) and quantitative CSF (qCSF) tests under four Bangerter foil conditions (n = 56). From the CSF data, we estimated peak gain, peak frequency, and contrast sensitivity acuity (CSA). We explored the correlations between VA and the CSF parameters and evaluated five predictive models of VA, using CSA alone and in combination with additional CSF parameters, through ridge regression.

Results: We found that similar VA scores can correspond with markedly different CSFs and observed significant correlations among all CSF parameters and between VA and each CSF parameter (all P < 0.001). The most effective predictive model, incorporating CSA and peak gain, explained 90.97% of the variance with a root mean squared error (RMSE) of 0.0676 logMAR, comparable with the average standard deviation of the VA scores (0.0627 logMAR), and accounted for 38.6% of the residual variance not explained by the CSA-alone model.

Conclusions: This study offers the first empirical inference of the quantitative relationship between VA and CSF, suggesting that various CSF parameter combinations can yield identical VA. This might help to explain why some clinical populations with normal or near-normal VA exhibit significant CSF deficits and calls for further research in different clinical settings.

Visual acuity (VA) and contrast sensitivity function (CSF) are essential metrics for evaluating visual function. VA assesses spatial resolution,1 and CSF measures the ability to detect subtle luminance changes against the background.2,3 Although many studies have reported robust correlations between VA and CSF in both healthy older adults and clinical populations,4–14 there are notable exceptions where patients with certain ocular disorders exhibit substantial decreases in contrast sensitivity (CS) despite having VA scores similar to those of control subjects or those in earlier stages of the disease.15–20 Clinically, standard letter acuity often fails to detect functional vision issues, particularly in the early stages of macular diseases, and may not always reflect a patient's own perception of their visual function.21,22 Subjective visual complaints are common in ophthalmology clinics, even among patients with a VA of 20/20.23 Because CS is more strongly associated with performance in daily visual tasks3,24–26 and vision-related quality of life8,27,28 than VA, impaired CSF can help to explain subjective visual complaints and diminished quality of life in patients with macular disease and good measured VA.29,30 
For example, Figure 1 illustrates a case of treated amblyopia in which patients with 20/20 VA in both previously amblyopic eyes (pAEs) and fellow eyes (pFEs) showed significantly worse CS at intermediate (P < 0.05) and high (P < 0.01) spatial frequencies, as well as notable CS acuity (CSA) deficits, in the pAEs (P < 0.01).20 Additional studies reinforce this finding. One study comparing 151 eyes with maculopathy but good VA (≥20/30) with 93 control eyes found significant CSF reductions at low, intermediate, and some high spatial frequencies (1.5, 3.0, 6.0, and 12.0 cycles per degree [cpd]; all P < 0.01) and significant CSA deficits after controlling for age and VA in a multivariate regression analysis (P < 0.02).18 An analysis of a subset of 49 eyes with maculopathy (11 with retinal vein occlusion, 8 with macula-off retinal detachment, 27 with dry AMD, and 3 with wet AMD), alongside 62 eyes from healthy controls with very good VA (all ≥20/20; P = 0.10), revealed significant decreases in CS at low and intermediate spatial frequencies (0.15, 0.20, 0.17, and 0.14 log10 units at 1, 1.5, 3, and 6 cpd, respectively; all P < 0.001), although there was no significant difference in CSA (P = 0.22). 
Figure 1.
 
Individual (A–E) and average (F) CSFs of five patients with treated amblyopia and 20/20 VA in both eyes (triangle, pFE; circle, pAE). The average CSFs for the pAEs and pFEs are shown with the P value of the paired t-test at each spatial frequency. #0.05 < P < 0.10; *P < 0.05; **P < 0.01. Adapted from Huang et al. (2007).20
In another study by Joltikov et al.,15 four groups with a VA of ≥20/40 were evaluated: nondiabetics (control group), diabetics without retinopathy (no-DR group), diabetics with mild non-proliferative retinopathy (mild NPDR group), and diabetics with moderate to very severe NPDR (moderate NPDR group). Although the VA did not differ significantly among the groups, all diabetic groups showed decreased CS compared with controls in low, intermediate, and high spatial frequencies (1.5, 3.0, 6.0, 12.0, and 18.0 cpd; all P < 0.01). The no-DR group exhibited significant CS deficits at low and intermediate spatial frequencies compared with controls (0.12, 0.16, 0.25, and 0.21 log10 units at 1.5, 3.0, 6.0, and 12.0 cpd, respectively; all P < 0.05). Additionally, the moderate NPDR group had lower CS compared with the mild NPDR group in low, intermediate and high spatial frequencies (0.15, 0.20, 0.23, 0.28, and 0.22 log10 units at 1.5, 3.0, 6.0, 12.0, and 18.0 cpd, respectively; all P < 0.05). Although not reported explicitly, visual inspection of the CSFs suggests apparent CSA deficits in the diabetic groups. 
In a study involving 50 patients with thyroid-associated ophthalmopathy (excluding dysthyroid optic neuropathy) and a mean VA of 20/20—similar to 20 control patients (P = 0.598)—significant CS deficits were observed at intermediate and high spatial frequencies (0.11, 0.19, 0.27, and 0.24 log10 units at 3, 6, 12, and 18 cpd, respectively; all P < 0.001). These deficits included apparent CSA issues.16 
These findings highlight a discrepancy between CSF and VA, suggesting a many-to-one relationship, where different CSF configurations may correspond with similar VAs. The current study investigates this phenomenon by examining the relationship between VA and CSF. 
Typically, VA is assessed using high-contrast, sharp black optotypes on a white background, with the smallest identifiable optotype determining acuity (Fig. 2A). In contrast, CSF measurements use gratings or filtered optotypes with more subtle differences from a gray background. Figure 2B displays five optotypes with one octave bandwidth, commonly used in CSF tests. The spatial frequency spectra of VA and CSF optotypes are notably different (see Figs. 2C, 2D). A typical CSF model with five underlying spatial frequency channels (Fig. 2E) is used to estimate channel activations by VA optotypes (Fig. 2F), showing that VA optotypes of five sizes activate all five channels. Conversely, CSF optotypes activate only one spatial frequency channel each (Fig. 2G). Notably, the smallest VA optotype most strongly engages a channel centered at an intermediate spatial frequency, not the highest one. 
Figure 2.
 
(A) VA optotypes at five different sizes. (B) CSF optotypes with one octave bandwidth at five different sizes. (C) Fourier power spectra of the VA optotypes, with each curve representing a different size. (D) Fourier power spectra of the CSF optotypes, with each curve representing a different size. (E) A CSF with five channels. (F) Activations of the five channels by VA optotypes from largest to smallest. Each panel displays activations by the optotype for a specific optotype size. (G) Activations of the five channels by CSF optotypes from largest to smallest.
Previous reports suggest that individuals can effectively perform the VA task with spatial frequencies much lower than the dominant frequency of the smallest identifiable optotype.20,31–37 In contrast, CSA (the cutoff spatial frequency at which the CS = 1.0) represents the highest spatial frequency perceivable by an observer at 100% optotype contrast relative to a gray background. Despite variations in assessment methodologies and underlying mechanisms, research consistently indicates a robust correlation between VA and CSA,38,39 implying that alterations in one metric may be associated with changes in the other. In contrast, CSF changes in some ocular diseases may not affect VA.15–20 
From a theoretical standpoint, the CSF serves as a comprehensive measure of spatial vision and holds the potential to predict VA directly. Several studies have demonstrated that human performance in tasks involving the identification of sharp optotypes can be accurately predicted from CSF profiles.40–43 For instance, Chung et al.40 developed the CSF observer model, which successfully predicts the center frequencies in letter identification tasks for both foveal and peripheral vision. This predictive capacity was achieved by using the CSF as a spatial modulation transfer function for the sharp optotypes tested under identical background luminance, retinal location, and optotype layout. The CSF observer model simplifies the human visual system to a single channel, with the CSF serving as its spatial modulation transfer function. Several variations of this model incorporated multiple spatial frequency channels underlying the CSF to enhance predictions of VA thresholds.38,40,42,44–54 
The CSF observer model has demonstrated its effectiveness when CSF and VA are assessed under consistent testing conditions. However, such consistency is often lacking in practical scenarios. For instance, the quantitative CSF (qCSF) test in Zhao et al. (2023)55 and the Electronic Early Treatment Diabetic Retinopathy (E-ETDRS) test in Zhao et al. (2021)56 had several differences (Fig. 3), including variations in optotype layouts and crowding. Typically, CSF is evaluated using grating stimuli of fixed size across various spatial frequencies,57–59 whereas VA is assessed using letter stimuli that vary in size. CSFs measured with fixed-size gratings often show much larger CSAs compared with those measured with letter optotypes, whose sizes decrease with increasing spatial frequency.60 The difference in stimuli can create more complex relationships between estimated VA and CSF, especially when considering the specific optotypes, background luminance, and optotype layouts used in each test. Although it is theoretically possible to establish a functional relationship between VA and CSF under identical experimental setups using the CSF observer model, the challenge lies in accurately estimating this relationship from experimental data that may not perfectly adhere to those conditions. 
Figure 3.
 
Illustrations of the optotypes in (A) E-ETDRS and (B) qCSF tests.
In this study, we analyzed existing data55,56 from 14 eyes tested with E-ETDRS and qCSF61,62 under four Bangerter foil conditions (N = 56) to evaluate the relationship between VA and CSF based on statistical analysis. The null hypothesis posits that CSA alone can provide the best predictions of VA. The alternative hypothesis suggests that CSA, combined with additional CSF parameters, would be more predictive. 
We obtained the average VA from the four repeated E-ETDRS tests for each eye in every test condition. Next, we re-parameterized the CSF model49 to encompass three critical parameters: peak gain (PG), peak frequency (PF), and CSA. We estimated these parameters for each eye in every test condition from the CSF data. We visualized CSFs with statistically equivalent VAs and evaluated the correlations between VA and the CSF parameters, as well as among the CSF parameters themselves. Subsequently, we applied ridge regression63–66 with cross-validation to evaluate five predictive models of VA, using CSA alone and in combination with additional CSF parameters, to identify the best predictive model. Finally, we generated CSFs with very different shapes that correspond to the same VA based on the functional relationship in the best predictive model. 
Methods
Data
The dataset comprised 14 eyes of 7 observers with either normal or corrected-to-normal vision, each subjected to E-ETDRS and qCSF tests under four conditions: no foil (F0) and three levels of Bangerter foils (Ryser Ophtalmologie, St. Gallen, Switzerland) with nominal acuities of 20/25 (F1), 20/30 (F2), and 20/100 (F3). Each eye underwent the qCSF assessment once and the E-ETDRS test four times. The sequence of tests under the four Bangerter foil conditions was randomized across subjects. During each trial, the stimuli remained on the screen until the subject verbally reported the identities of the optotypes, which were then entered into the computer by the experimenter. Written informed consent was obtained from all participants, and the study protocol adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board for human subject research at The Ohio State University. 
The E-ETDRS Test
The optotype stimuli used in the E-ETDRS test included ten black Sloan letters: C, D, H, K, N, O, R, S, V, and Z. The test used these letters in 20 optotype sizes, equally spaced between −0.3 and 1.6 logMAR. To replicate the crowding effects of the ETDRS chart, flanker bars with the same stroke width as the letters and a flanker-to-center distance of 1.75 letter widths were included (see Fig. 3A). 
The E-ETDRS procedure67 involved two phases: a screening phase and a threshold phase. The screening phase provides an initial estimate of the observer's VA and determines the starting optotype sizes for the threshold phase. During the threshold phase, testing begins with the sizes identified in the screening phase, and additional sizes are tested until the upper and lower bounds of the size range are established. The upper bound is defined as the smallest optotype size at which all five letters are identified correctly or the largest size achievable, whichever is smaller. The lower bound is defined as the largest optotype size at which none of the five letters is correctly identified or the smallest size achievable, whichever is larger. This process samples the full range of the acuity psychometric function. 
In the E-ETDRS method, a pool of five letters is used (without replacement), with each letter presented one at a time at each optotype size within the range. The number of correctly identified letters determines the final acuity estimate: the visual acuity score (VAS) equals the number of letters identified correctly in the threshold phase plus five letters for each size between the largest achievable size (1.6 logMAR in this implementation) and the upper bound tested in the threshold phase. VA in logMAR is then computed as VA(logMAR) = 1.7 − 0.02 × VAS.
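As a minimal illustration of this scoring rule (the letter counts and number of untested larger sizes below are hypothetical, and the variable names are ours rather than from the original implementation):

```matlab
% E-ETDRS scoring sketch: convert the visual acuity score (VAS, total letters
% credited) to logMAR, assuming 5 letters per optotype size and 0.1 logMAR steps.
lettersCorrect = [5 5 4 2 0];        % letters identified at each size tested (largest to smallest)
nUntestedLargerSizes = 10;           % sizes between 1.6 logMAR and the upper bound tested
VAS = sum(lettersCorrect) + 5 * nUntestedLargerSizes;   % credit 5 letters per untested larger size
VA_logMAR = 1.7 - 0.02 * VAS;        % E-ETDRS conversion: each letter is worth 0.02 logMAR
fprintf('VAS = %d letters, VA = %.2f logMAR\n', VAS, VA_logMAR);
```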
The qCSF Test
In the qCSF test, 10 Sloan letters (C, D, H, K, N, O, R, S, V, and Z) were used, each filtered with a raised cosine filter.68 The filter function is given by40:  
\[
F(f) = \begin{cases}
0, & f < \dfrac{f_0}{2} \ \text{or} \ f > 2f_0,\\[2ex]
\dfrac{1}{2} + \dfrac{1}{2}\cos\!\left(\dfrac{\log\left(f/f_0\right)}{\log\left(f_{\mathrm{cutoff}}/f_0\right)}\,\pi\right), & f \in \left[\dfrac{f_0}{2},\ 2f_0\right],
\end{cases}
\]
(1)
where f is the radial spatial frequency, f0 = 3 cycles per object is the center frequency, and fcutoff = 2f0 ensures that the full bandwidth at half height spans one octave. Each filtered letter image was normalized by its maximum absolute intensity, resulting in a maximum Michelson contrast of 1.0 after normalization. Stimulus contrast values, ranging from 0.002 to 1.000 in 128 equal logarithmic steps, were generated by scaling the normalized images. Nineteen spatial frequencies, ranging from 1.19 to 30.95 cpd, were obtained by resizing. 
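A brief sketch of Equation 1 applied to a letter image in the frequency domain follows; the image size, file name, and the assumption that the letter width defines one "object" are illustrative choices, not details taken from the original implementation.

```matlab
% Raised cosine log-frequency filter (Equation 1) applied to a letter image.
N = 256;                              % image size in pixels (assumed square; placeholder)
f0 = 3;                               % center frequency, cycles per object
fcut = 2 * f0;                        % cutoff so that the full bandwidth at half height is one octave
[u, v] = meshgrid(-N/2:N/2-1);        % frequency coordinates, cycles per image (= per object here)
f = sqrt(u.^2 + v.^2);                % radial spatial frequency
F = 0.5 + 0.5 * cos(pi * log(f ./ f0) ./ log(fcut / f0));
F(f < f0/2 | f > 2*f0) = 0;           % zero outside [f0/2, 2*f0]

img = double(imread('sloan_letter.png'));     % hypothetical N-by-N grayscale letter image
spec = fftshift(fft2(img));                   % centered spectrum
filtered = real(ifft2(ifftshift(spec .* F)));
filtered = filtered / max(abs(filtered(:)));  % normalize so maximum Michelson contrast = 1.0
```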
The qCSF test consisted of 50 rows, each displaying 3 bandpass-filtered optotypes (Fig. 3B), randomly sampled with replacement from the 10 Sloan letters. The sizes and contrasts of the optotypes were determined by the qCSF method.61,62 In brief, the qCSF method models CSFs using a log parabola with three parameters: PG θPG, peak spatial frequency θPF, and bandwidth at half height θBH. Unlike conventional methods that adaptively select stimuli only in contrast space at each spatial frequency, the qCSF method optimizes stimuli in both contrast and spatial frequency spaces, maximizing the information gain about the CSF in each trial. This Bayesian active learning algorithm selects the optimal stimulus before each trial and updates the posterior probabilities of CSF parameters based on the observer's responses, providing a direct estimate of the entire CSF curve rather than discrete contrast thresholds or sensitivities. 
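The update step of this Bayesian procedure can be sketched as follows. The parameter grid, psychometric function form, guessing rate, and lapse rate below are illustrative assumptions rather than the exact settings of the qCSF implementation, and the stimulus-selection step (choosing the next trial to maximize expected information gain) is omitted.

```matlab
% One-trial Bayesian update of the CSF parameter posterior (illustrative sketch).
PG = logspace(log10(2), log10(2000), 30);    % peak gain grid
PF = logspace(log10(0.5), log10(8), 25);     % peak frequency grid, cpd
BH = linspace(1, 6, 20);                     % bandwidth at half height grid, octaves
[pg, pf, bh] = ndgrid(PG, PF, BH);
posterior = ones(size(pg)) / numel(pg);      % uniform prior over the grid

fTrial = 6;  cTrial = 0.05;  correct = true; % stimulus shown and observed response
logS = log10(pg) - (4 / log10(2)) .* ((log10(fTrial) - log10(pf)) ./ bh).^2;  % log parabola CSF (Equation 2)
S = 10.^logS;

% Assumed psychometric function for a 10-alternative letter identification task.
guessRate = 0.1;  slope = 2;  lapseRate = 0.04;
pCorrect = guessRate + (1 - guessRate - lapseRate) .* (1 - exp(-(cTrial .* S).^slope));

if correct
    likelihood = pCorrect;
else
    likelihood = 1 - pCorrect;
end
posterior = posterior .* likelihood;         % Bayes rule
posterior = posterior / sum(posterior(:));   % renormalize
```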
Apparatus
The E-ETDRS and qCSF tests were performed using MATLAB (MathWorks Corp., Natick, MA, USA). A 24-inch Dell P2415Q liquid-crystal display monitor with a resolution of 3840 × 2160 pixels and a background luminance of 97 cd/m2 was used for the E-ETDRS tests, and a 55-inch Samsung UN55FH6030 monitor with a resolution of 1920 × 1080 pixels and a background luminance of 95.4 cd/m2 was used for the qCSF test. On the Samsung monitor, a bit-stealing algorithm was used to achieve 9-bit grayscale resolution,69 and gamma correction was performed using a psychophysical procedure.70 Participants viewed the displays monocularly from a distance of 4 m, with the untested eye covered by an opaque patch. 
The analyses of qCSF data were conducted on a Dell computer with an Intel Xeon W-2145 @ 3.70 GHz CPU (8 cores and 16 threads) and 64 GB of RAM. Additionally, the ridge regression analyses were conducted on a MacBook Air with an M2 chip and 16 GB of RAM using MATLAB R2022b. 
Scoring the Data
The VA scores from the four repeated E-ETDRS tests were averaged to generate the average E-ETDRS VA score for each eye in each of the four Bangerter foil conditions (N = 56). 
In the qCSF test, the CSF is modeled using a log parabola function with three parameters: PG (θPG), PF (θPF), and bandwidth at half height (θBH) (Fig. 4). We derived an equation relating CSA (θCSA) to these parameters. This approach enabled us to transform the joint posterior distribution of θPG, θPF, and θBH from each qCSF test (see Supplementary Materials A) into a joint posterior distribution of θA = (θPG, θPF, θCSA). Consequently, we were able to estimate the mean and standard deviation of θPG, θPF, and θCSA for each test. 
Figure 4.
 
The log parabola model of the CSF: PG θPG, peak spatial frequency θPF, bandwidth θBH, and CSA θCSA.
In the qCSF test,61 CSF is expressed as a function of θ = (θPG, θPF, θBH):  
\[
\log_{10}\left(S(f \mid \theta)\right) = \log_{10}\left(\theta_{\mathrm{PG}}\right) - \frac{4}{\log_{10}(2)}\left(\frac{\log_{10}(f) - \log_{10}\left(\theta_{\mathrm{PF}}\right)}{\theta_{\mathrm{BH}}}\right)^{2}.
\]
(2)
 
When f =  θCSA, CS S(f|θ) = 1.0:  
\[
\log_{10}\left(\theta_{\mathrm{PG}}\right) - \frac{4}{\log_{10}(2)}\left(\frac{\log_{10}\left(\theta_{\mathrm{CSA}}\right) - \log_{10}\left(\theta_{\mathrm{PF}}\right)}{\theta_{\mathrm{BH}}}\right)^{2} = \log_{10}(1.0).
\]
(3)
 
Therefore, we can derive θCSA:  
\[
\log_{10}\left(\theta_{\mathrm{CSA}}\right) = \log_{10}\left(\theta_{\mathrm{PF}}\right) + \frac{1}{2}\,\theta_{\mathrm{BH}}\sqrt{\log_{10}(2)\,\log_{10}\left(\theta_{\mathrm{PG}}\right)}.
\]
(4)
 
The equation allows us to compute CSA (θCSA) from θPG, θPF, and θBH obtained from the original qCSF test. 
Conversely, we can derive θBH as a function of θPG, θPF, and θCSA, and reparametrize the CSF model with the updated parameters θA = (θPG, θPF, θCSA):  
\[
\log_{10}\left(S(f \mid \theta_{\mathrm{A}})\right) = \log_{10}\left(\theta_{\mathrm{PG}}\right) - \log_{10}\left(\theta_{\mathrm{PG}}\right)\left(\frac{\log_{10}(f) - \log_{10}\left(\theta_{\mathrm{PF}}\right)}{\log_{10}\left(\theta_{\mathrm{CSA}}\right) - \log_{10}\left(\theta_{\mathrm{PF}}\right)}\right)^{2}.
\]
(5)
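A compact sketch of Equations 4 and 5 (the function handles and numeric values below are ours, for illustration only):

```matlab
% Equation 4: cutoff acuity (CSA) from peak gain (PG), peak frequency (PF),
% and bandwidth at half height (BH).
csaFromLogParabola = @(PG, PF, BH) 10.^(log10(PF) + 0.5 .* BH .* sqrt(log10(2) .* log10(PG)));

% Equation 5: reparameterized log10 contrast sensitivity at frequency f,
% given thetaA = (PG, PF, CSA).
logS = @(f, PG, PF, CSA) log10(PG) - log10(PG) .* ...
    ((log10(f) - log10(PF)) ./ (log10(CSA) - log10(PF))).^2;

% Consistency check: at f = CSA, sensitivity should equal 1.0 (log10 S = 0).
PG = 100; PF = 3; BH = 2.5;                 % illustrative parameter values
CSA = csaFromLogParabola(PG, PF, BH);
fprintf('CSA = %.1f cpd, log10 S at CSA = %.3g\n', CSA, logS(CSA, PG, PF, CSA));
```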
 
Evaluating the Relationship Between VA and CSF
We assessed the correlations between VA and the three CSF parameters, as well as correlations among the three CSF parameters across all individuals, treating each eye and foil manipulation as an individual. To test whether CSA alone or in combination with other CSF parameters offers the best predictions of VA and to determine the optimal relationship between VA and CSF, we constructed and evaluated five predictive models. These models used the following predictors: 
  • 1. CSA alone
  • 2. CSA combined with PG
  • 3. CSA combined with PF
  • 4. All three CSF parameters (CSA, PG, and PF)
  • 5. All three CSF parameters along with their interactions, derived from feature engineering techniques
We divided the data into training and validation subsets. Ridge regression was applied to fit each model to the training subset. We then compared the models based on their performance in predicting VA in the validation subset. 
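The five predictor sets can be written out explicitly. In the sketch below, the interaction terms for model 5 are taken to be the pairwise products of the three parameters, an assumption consistent with the six predictors noted later for that model; the variable names are placeholders.

```matlab
% Predictor sets for the five models (sketch). CSA, PG, and PF are 56-by-1
% vectors of the estimated CSF parameters.
X = cell(5, 1);
X{1} = CSA;                                        % model 1: CSA alone
X{2} = [CSA, PG];                                  % model 2: CSA + PG
X{3} = [CSA, PF];                                  % model 3: CSA + PF
X{4} = [CSA, PG, PF];                              % model 4: all three parameters
X{5} = [CSA, PG, PF, CSA.*PG, CSA.*PF, PG.*PF];    % model 5: plus pairwise interactions
```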
In ridge regression, the cost function is defined as:  
\[
\mathrm{Cost\ function} = \sum_{i=1}^{I_{\mathrm{train}}}\left(\beta_0 + \sum_{l=1}^{L}\beta_l\,\mathrm{CSF}_{li} - \mathrm{VA}_i\right)^{2} + \lambda\sum_{l=1}^{L}\beta_l^{2},
\]
(6a)
 
\[
\mathrm{VA}_i = \frac{\sum_{j=1}^{J}\mathrm{VA}_{ij}}{J},
\]
(6b)
where β0 is the intercept, CSFli represents CSF predictor l for individual i (with L = 1, 2, 2, 3, and 6 for the five predictive models, respectively), VAij is the VA score for individual i in test j, J = 4 is the number of repeated tests, VAi represents the average VA for individual i in the training subset, βls are the coefficients for the predictors, Itrain is the total number of individuals in the training subset, and λ is the ridge parameter. 
The cost function consists of two terms: the first represents the summed squared difference between model predictions and the observations in the training subset, and the second term regularizes the solution by penalizing large sums of squared coefficients. Ridge regression was chosen over ordinary multivariate linear regression, which uses only the first term in the cost function, due to the high correlations among CSF parameters. 
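A minimal ridge fit using the closed-form penalized least-squares solution is sketched below; it standardizes the predictors before applying the penalty, which is a common choice but an assumption here rather than a documented detail of the original analysis.

```matlab
% Ridge regression (Equation 6a) via the closed-form penalized least-squares solution.
% X: Itrain-by-L matrix of CSF predictors; y: Itrain-by-1 vector of average VA scores.
function [beta, beta0] = ridgeFit(X, y, lambda)
    mu = mean(X, 1);  sd = std(X, 0, 1);
    Z  = (X - mu) ./ sd;                          % standardized predictors
    L  = size(Z, 2);
    betaZ = (Z' * Z + lambda * eye(L)) \ (Z' * (y - mean(y)));  % ridge solution
    beta  = betaZ ./ sd';                         % coefficients on the original predictor scale
    beta0 = mean(y) - mu * beta;                  % intercept (not penalized)
end
```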
We conducted ridge regression 5000 times for each of the five predictive models, each with 1001 λ values ranging from 0.0 to 1.0. In each iteration, the dataset was split randomly into training (75%) and validation (25%) subsets. All five models were trained using the same training subset. We then calculated the root mean squared error (RMSEval(λ)) for the validation subset for each model and each λ value using the formula:  
\[
\mathrm{RMSE}_{\mathrm{val}}(\lambda) = \sqrt{\frac{\sum_{i=1}^{I_{\mathrm{val}}}\left(\widehat{\mathrm{VA}}_i(\lambda) - \mathrm{VA}_i\right)^{2}}{I_{\mathrm{val}}}},
\]
(7)
where \(\widehat{\mathrm{VA}}_i(\lambda)\) represents the predicted VA score for individual i with ridge parameter λ and Ival is the total number of individuals in the validation subset. 
The optimal λ parameter (λopt) for each model was identified as the value that minimized the average RMSEval(λ) across the 5000 validation tests (Supplementary Materials B). We then extracted RMSEval(λopt) of each model using its respective λopt across all 5000 iterations, resulting in 5000 RMSEval(λopt) values per model. Because these values were not normally distributed, we used one-tailed sign tests (MATLAB function signtest) on RMSEval(λopt) to compare the performance of the models. 
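The cross-validation loop can be sketched as follows, reusing the ridgeFit helper above; CSFpred and VAavg are placeholders for the 56-by-L predictor matrix and the 56-by-1 average VA vector, and the subsequent sign tests are omitted for brevity.

```matlab
% Repeated random-split cross-validation over a grid of ridge parameters (sketch).
lambdas = linspace(0, 1, 1001);
nIter   = 5000;
rmseVal = zeros(nIter, numel(lambdas));
for it = 1:nIter
    idx      = randperm(size(CSFpred, 1));
    nTrain   = round(0.75 * numel(idx));
    trainIdx = idx(1:nTrain);  valIdx = idx(nTrain+1:end);
    for k = 1:numel(lambdas)
        [beta, beta0] = ridgeFit(CSFpred(trainIdx,:), VAavg(trainIdx), lambdas(k));
        pred = beta0 + CSFpred(valIdx,:) * beta;                  % linear prediction (Equation 8 form)
        rmseVal(it,k) = sqrt(mean((pred - VAavg(valIdx)).^2));    % Equation 7
    end
end
[~, kOpt] = min(mean(rmseVal, 1));      % lambda_opt minimizes the mean validation RMSE
lambdaOpt = lambdas(kOpt);
```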
The optimal \(\beta _l^{opt}\) and intercept \(\beta _0^{opt}\) for each model were determined by averaging the coefficients under each model's respective λopt across the 5000 iterations. The relationship between VA and CSF for each model was then defined as:  
\[
\widehat{\mathrm{VA}}_i^{\mathrm{opt}} = \beta_0^{\mathrm{opt}} + \sum_{l=1}^{L}\beta_l^{\mathrm{opt}}\,\mathrm{CSF}_{li},
\]
(8)
where \(\widehat {VA}_i^{opt}\) represents the predicted VA score for individual i. To evaluate the performance of the models across the entire dataset, we computed the overall RMSE as:  
\[
\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{I_{\mathrm{total}}}\left(\widehat{\mathrm{VA}}_i^{\mathrm{opt}} - \mathrm{VA}_i\right)^{2}}{I_{\mathrm{total}}}},
\]
(9)
where Itotal = 56 is the number of test sets in the entire dataset. 
In addition, we computed R2 of the models based on their overall RMSE:  
\[
R^{2} = 1 - \frac{I_{\mathrm{total}}\,\mathrm{RMSE}^{2}}{\sum_{i=1}^{I_{\mathrm{total}}}\left(\mathrm{VA}_i - \overline{\mathrm{VA}}\right)^{2}},
\]
(10)
where \(\overline {VA} \) is the average VA across all 56 tests. 
To assess the percentage of variance explained by adding additional predictors to the CSA-alone model, we computed the ratio of the variance reduction achieved by the model (relative to the CSA-alone model) to the residual variance of the CSA-alone model (relative to measurement noise). The percentage of variance reduction (%RV) is given by:  
\[
\%RV = \frac{\mathrm{RMSE}_{\mathrm{CSA\text{-}alone}}^{2} - \mathrm{RMSE}_{\mathrm{CSA+}}^{2}}{\mathrm{RMSE}_{\mathrm{CSA\text{-}alone}}^{2} - \mathrm{VAR}_{\mathrm{mea}}} \times 100\%,
\]
(11a)
 
\[
\mathrm{VAR}_{\mathrm{mea}} = \frac{\sum_{i=1}^{I_{\mathrm{total}}=56}\ \sum_{j=1}^{J=4}\left(\mathrm{VA}_{ij} - \mathrm{VA}_i\right)^{2}}{I_{\mathrm{total}}\,J},
\]
(11b)
where VARmea is the mean variance of the repeated VA tests and the lower bound of \(\mathrm{RMSE}^2\) for any model. 
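A short sketch of how Equations 9 to 11 are evaluated, given predicted VA scores, average observed VA scores, and the individual repeated measurements (the variable names and the two placeholder RMSE values for the compared models are ours):

```matlab
% Overall model evaluation (Equations 9-11).
% VAhat: 56-by-1 predicted VA; VAavg: 56-by-1 average observed VA; VAall: 56-by-4 repeated VA scores.
Itotal = numel(VAavg);  J = 4;
RMSE   = sqrt(sum((VAhat - VAavg).^2) / Itotal);                 % Equation 9
R2     = 1 - Itotal * RMSE^2 / sum((VAavg - mean(VAavg)).^2);    % Equation 10
VARmea = sum(sum((VAall - VAavg).^2)) / (Itotal * J);            % Equation 11b
% Equation 11a: fraction of the CSA-alone model's residual variance explained by a larger model.
pctRV  = (rmseCSAalone^2 - rmseCSAplus^2) / (rmseCSAalone^2 - VARmea) * 100;
```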
CSFs With Different Shapes That Correspond With Identical VA Scores
Using the functional relationships between VA and CSF from the best model, we generated CSFs with very different shapes that yield identical VA scores by maintaining the VA constant and solving for the corresponding CSF parameters. 
Results
CSFs With Statistically Equivalent VA Scores
Of the 56 pairs of VA and CSF test results, we discovered numerous CSFs that corresponded with statistically equivalent VA scores. Figure 5 illustrates four such CSFs as three pairs across three panels, all corresponding to VAs of 0.00 ± 0.02 logMAR. In Figure 5A, the two CSFs exhibit very similar CSAs but different PGs and PFs. In Figure 5B, the two CSFs display highly disparate CSAs. In Figure 5C, one CSF lies entirely below the other; the two CSFs have different PGs, PFs, and CSAs. As in Figure 1, very similar VAs can result from significantly different CSFs. 
Figure 5.
 
CSFs with statistically equivalent VA scores (0.00 ± 0.02 logMAR). (A) Two CSFs with very similar CSAs but different PGs and PFs. (B) Two CSFs with highly disparate CSAs. (C) Two CSFs with different PGs, PFs and CSAs.
Correlations
Figure 6 presents scatter plots depicting pairs of CSF parameters. The Pearson's correlation coefficient between CSA and PF was 0.7271, between CSA and PG was 0.7895, and between PF and PG was 0.4643 (all P < 0.001). Figure 7 presents scatter plots between average VA scores and CSF parameters. The Pearson's correlation coefficient between VA and CSA was −0.9512, between VA and PF was −0.6626, and between VA and PG was −0.7943 (all P < 0.001). 
Figure 6.
 
Scatter plots illustrating the relationships among CSF parameters. Each point on the plot represents a pair of CSF parameters for a tested eye in one of the four Bangerter foil conditions (N = 56).
Figure 7.
 
Scatter plots illustrating the relationships between VA scores and CSF parameters (N = 56). Each point on the plot represents a pair of VA score and one of the CSF parameters for a tested eye in one of the four Bangerter foil conditions.
Model Comparison
As shown in the Table, a one-sided sign test based on RMSEval(λopt) from 5000 repeated cross-validation tests indicated that models 2 (CSA + PG), 3 (CSA + PF), and 4 (CSA + PG + PF) all significantly outperformed the CSA-alone model (all P < 0.001). However, model 5, which included all three CSF parameters along with their interactions, yielded significantly worse predictions than the CSA-alone model (P > 0.999), because it overfit the data. Further comparisons of models 3 (CSA + PF) and 4 (CSA + PG + PF) with model 2 (CSA + PG) showed that both performed significantly worse than model 2 (all P > 0.999). These results indicate that the model incorporating both CSA and PG as predictors was the best model. 
Table.
 
Overall RMSE and Model Comparison Results
The Table also presents the overall RMSE and R2 for each of the five models. The model with both CSA and PG as predictors achieved the second lowest overall RMSE (0.0676) and the second highest R2 (0.9097). In contrast, model 4, which had the lowest overall RMSE and the highest overall R2, produced a significantly higher RMSEval in cross-validation tests and was not identified as the best model. This result was likely due to overfitting, because the model with three coefficients captured too much noise in the data. The best model, which included PG along with CSA, accounted for 38.6% of the residual variance left unexplained by the CSA-alone model. 
Functional Relationship Between VA and CSF Parameters
The predicted and observed VA scores from the best model are depicted in Figure 8 as a scatter plot. The overall RMSE between the predicted and observed VA scores was 0.0676 logMAR, comparable with the average standard deviation (\(\sqrt{\mathrm{VAR}_{\mathrm{mea}}}\)) of the measurement (0.0627 logMAR). This finding suggests that the predictions were highly accurate. 
Figure 8.
 
A scatter plot of predicted and observed VA scores (N = 56). Each point on the plot represents a pair of predicted and observed VA scores for a tested eye in one of the four Bangerter foil conditions.
The best predictions were provided by the following equation:  
\[
\widehat{\mathrm{VA}}_i = 1.3916 - 0.8831\,\mathrm{CSA}_i - 0.1557\,\mathrm{PG}_i.
\]
(12)
 
In this equation, the predicted VA score \(\widehat{\mathrm{VA}}_i\) decreases (i.e., improves) as both CSA and PG increase, with no contribution from PF. Because PF can vary within its observable range, the equation implies that different combinations of CSF parameters can lead to the same VA score. In other words, there is a many-to-one relationship between CSF parameters and VA: multiple CSF configurations can correspond to the same VA outcome. 
CSFs With Identical VA Scores
Using Equation 12, we can investigate various combinations of CSF parameters that result in identical VA scores. By setting \(\widehat{\mathrm{VA}}_i\) to a fixed value, such as 0 logMAR, we can search for combinations of CSA and PG within their respective observable ranges from the literature that yield the fixed value, while freely selecting PF within its observable range from the literature. Figure 9 presents sample results from one such exploration. In this instance, we fixed \(\widehat{\mathrm{VA}}_i\) to 0 logMAR, identified numerous combinations of CSA and PG that satisfied Equation 12, and illustrate a few of them. In Figure 9A, the two CSFs exhibit the same CSA but differ in PG and PF. In Figure 9B, the two CSFs have identical PF but vary in PG and CSA. In Figure 9C, the two CSFs share the same PG but differ in PF and CSA. Finally, in Figure 9D, the two CSFs demonstrate differences in PG, PF, and CSA. 
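For instance, Equation 12 can be solved for the PG needed to hold VA at 0 logMAR for any given CSA. In the sketch below the grid of CSA values is purely illustrative, and CSA and PG are expressed in whatever units were used to fit the regression.

```matlab
% Trading CSA against PG at a fixed predicted VA (Equation 12, sketch).
VAfixed  = 0;                                  % target VA, logMAR
CSAgrid  = linspace(1.2, 1.4, 5);              % candidate CSA values (illustrative units)
PGneeded = (1.3916 - 0.8831 .* CSAgrid - VAfixed) ./ 0.1557;  % solve Equation 12 for PG
disp([CSAgrid(:), PGneeded(:)]);               % each row is a (CSA, PG) pair yielding VA = 0
% PF does not enter Equation 12, so it may vary freely within its observable range.
```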
Figure 9.
 
CSFs with VA = 0 logMAR based on Equation 12. (A) Two CSFs with the same CSA but different PGs and PFs. (B) Two CSFs with the same PF but different PGs and CSAs. (C) Two CSFs with the same PG but different PFs and CSAs. (D) Two CSFs with different PGs, PFs, and CSAs.
Discussion
In this study, we tested the hypothesis of whether CSA alone or in combination with other CSF parameters provides the best VA predictions and established the relationship between VA and CSF with the best predictive model. Initially, we estimated PG, PF, and CSA from the CSF data, exploring CSFs with statistically equivalent VAs. Subsequently, we evaluated correlations among VA and CSF parameters and applied ridge regression with cross-validation to evaluate five predictive models of VA. Finally, we generated CSFs leading to identical VA based on these relationships. 
Of the 56 pairs of VA and CSF assessments, we found numerous highly distinct CSFs corresponding with statistically equivalent VA scores. Significant Pearson's correlation coefficients were observed among all three CSF parameters and between VA and each parameter (all P < 0.001). We identified the model that included CSA and PG as predictors as the best predictive model of VA; it explained 90.97% of the variance, with an RMSE between predicted and observed VA scores of 0.0676 logMAR, comparable with the mean standard deviation of 0.0627 logMAR in the measurement. 
Because only CSA and PG are included in the best predictive model and PF can vary within its observable range, the functional relationship between VA and CSF suggests that different combinations of CSF parameters may lead to the same VA. By setting the predicted VA to a fixed value, we identified numerous combinations of CSA, PG, and PF yielding identical VA, illustrating the many-to-one relationship between CSF configurations and VA. 
Many studies have found that patients with certain ocular disorders exhibit substantial CS deficits despite having VA scores similar to those of control subjects or those in earlier stages of the disease.15–20 In addition, subjective visual complaints are common in ophthalmology clinics, even among patients with a VA of 20/20.23 The quantitative functional relationship between VA and CSF found in this study may help to explain these phenomena in clinical populations. Although VA is strongly correlated with CSA, as reported in the literature,4–14 it also tends to decrease with increasing PG. In other words, CSFs with both high CSA and high PG are associated with better VA. 
The quantitative relationship could also enhance the accuracy, precision, and efficiency of both VA and CSF tests within the Bayesian adaptive testing framework70,71 and be exploited in developing collective end points that combine information from both VA and CSF tests.55 For instance, the predicted VA from CSF measurements could serve as informative priors in the quantitative VA (qVA) test.56,72 Conversely, knowing the VA of an observer could inform the prior in the qCSF test.61 Such informative priors could greatly enhance the accuracy, precision, and efficiency of the tests. 
The quantitative relationship between VA and CSF identified in this study was derived from E-ETDRS and qCSF tests conducted on 14 eyes across four Bangerter foil conditions. This relationship may differ depending on the stimuli and procedures used in VA and CSF testing. For instance, VA measured with different charts, such as the Snellen chart,73 ETDRS chart,74 and qVA,72 might show varying relationships with CSF. Similarly, VA measured with ETDRS under different luminance levels may exhibit different relationships with CSF. Moreover, VA assessed with the ETDRS chart could show different relationships with CSF measured using grating stimuli. 
Additionally, the exact functional relationship between VA and CSF may vary among different clinical populations, even when using the same tests. Investigating whether this relationship remains consistent across diverse clinical populations or if each clinical population has a unique relationship would be highly insightful. Such research could have significant implications: If the relationship is consistent across all populations, it could be used to develop more efficient tests universally. Conversely, if the relationship varies by population, it may become a crucial metric for disease diagnosis. 
Despite the limitations posed by the specific data used in this study, the identified quantitative relationship between VA and CSF may help explain certain clinical phenomena in functional vision assessment. Furthermore, the methodology developed here can be applied to explore the functional relationships between VA and CSF parameters in other tests and populations. 
Acknowledgments
Supported by the National Eye Institute (EY017491 and EY032125). 
Disclosure: Z.-L. Lu, Adaptive Sensory Technology, Inc. (I, P), Jiangsu Juehua Medical Technology Co., LTD. (I, P); Y. Zhao, None; L.A. Lesmes, Adaptive Sensory Technology, Inc. (I, P, E); M. Dorr, Adaptive Sensory Technology, Inc. (I, P, E) 
References
Whittaker SG, Lovie-Kitchin J. Visual requirements for reading. Optom Vis Sci. 1993; 70(1): 54–65. [CrossRef] [PubMed]
Lord SR. Visual risk factors for falls in older people. Age Ageing. 2006; 35(Suppl 2): ii42–ii45. [PubMed]
Owsley C, McGwin G. Vision and driving. Vision Res. 2010; 50(23): 2348–2361. [CrossRef] [PubMed]
Elliott DB, Situ P. Visual acuity versus letter contrast sensitivity in early cataract. Vision Res. 1998; 38(13): 2047–2052. [CrossRef] [PubMed]
Brown B, Lovie-Kitchin JE. High and low contrast acuity and clinical contrast sensitivity tested in a normal population. Optom Vis Sci. 1989; 66(7): 467–473. [CrossRef] [PubMed]
Rubin GS, Roche KB, Prasada-Rao P, Fried LP. Visual impairment and disability in older adults. Optom Vis Sci. 1994; 71(12): 750. [CrossRef] [PubMed]
Haegerstrom-Portnoy G, Schneck ME, Brabyn JA. Seeing into old age: vision function beyond acuity. Optom Vis Sci. 1999; 76(3): 141–158. [CrossRef] [PubMed]
West SK, Rubin GS, Broman AT, Muñoz B, Bandeen-Roche K, Turano K. How does visual impairment affect performance on tasks of everyday life? The SEE Project. Salisbury Eye Evaluation. Arch Ophthalmol. 2002; 120(6): 774–780. [CrossRef] [PubMed]
Bellmann C, Unnebrink K, Rubin GS, Miller D, Holz FG. Visual acuity and contrast sensitivity in patients with neovascular age-related macular degeneration. Graefes Arch Clin Exp Ophthalmol. 2003; 241(12): 968–974. [CrossRef] [PubMed]
Alexander KR, Derlacki DJ, Fishman GA. Visual acuity vs letter contrast sensitivity in retinitis pigmentosa. Vision Res. 1995; 35(10): 1495–1499. [CrossRef] [PubMed]
Kiser AK, Mladenovich D, Eshraghi F, Bourdeau D, Dagnelie G. Reliability and consistency of visual acuity and contrast sensitivity measures in advanced eye disease. Optom Vis Sci. 2005; 82(11): 946–954. [CrossRef] [PubMed]
Cormack FK, Tovee M, Ballard C. Contrast sensitivity and visual acuity in patients with Alzheimer's disease. Int J Geriatr Psychiatry. 2000; 15(7): 614–620. [CrossRef] [PubMed]
Roark MW, Stringham JM. Visual performance in the “real world”: contrast sensitivity, visual acuity, and effects of macular carotenoids. Mol Nutr Food Res. 2019; 63(15): 1801053. [CrossRef]
Xiong YZ, Kwon M, Bittner AK, Virgili G, Giacomelli G, Legge GE. Relationship between acuity and contrast sensitivity: differences due to eye disease. Invest Ophthalmol Vis Sci. 2020; 61(6): 40. [CrossRef] [PubMed]
Joltikov KA, de Castro VM, Davila JR, et al. Multidimensional functional and structural evaluation reveals neuroretinal impairment in early diabetic retinopathy. Invest Ophthalmol Vis Sci. 2017; 58(6): BIO277–BIO290. [CrossRef] [PubMed]
Tu Y, Jin H, Xu M, et al. Reduced contrast sensitivity function correlated with superficial retinal capillary plexus impairment in early stage of dysthyroid optic neuropathy. Eye Vis. 2023; 10(1): 11. [CrossRef]
Onal S, Yenice O, Cakir S, Temel A. FACT contrast sensitivity as a diagnostic tool in glaucoma: FACT contrast sensitivity in glaucoma. Int Ophthalmol. 2008; 28(6): 407–412. [CrossRef] [PubMed]
Wai KM, Vingopoulos F, Garg I, et al. Contrast sensitivity function in patients with macular disease and good visual acuity. Br J Ophthalmol. 2022; 106(6): 839–844. [CrossRef] [PubMed]
Vingopoulos F, Wai KM, Katz R, Vavvas DG, Kim LA, Miller JB. Measuring the contrast sensitivity function in non-neovascular and neovascular age-related macular degeneration: the Quantitative Contrast Sensitivity Function test. J Clin Med. 2021; 10(13): 2768. [CrossRef] [PubMed]
Huang C, Tao L, Zhou Y, Lu ZL. Treated amblyopes remain deficient in spatial vision: a contrast sensitivity and external noise study. Vision Res. 2007; 47(1): 22–34. [CrossRef] [PubMed]
Jindra L, Zemon V. Contrast sensitivity testing - a more complete assessment of vision. J Cataract Refract Surg. 1989; 15(2): 141–148. [CrossRef] [PubMed]
Woods RL, Wood JM. The role of contrast sensitivity charts and contrast letter charts in clinical practice. Clin Exp Optom. 1995; 78(2): 43–57. [CrossRef]
Arden G. Importance of measuring contrast sensitivity in cases of visual disturbance. Br J Ophthalmol. 1978; 62(4): 198–209. [CrossRef] [PubMed]
Marron JA, Bailey IL. Visual factors and orientation-mobility performance. Am J Optom Physiol Opt. 1982; 59(5): 413–426. [CrossRef] [PubMed]
Freeman EE, Muñoz B, Turano KA, West SK. Measures of visual function and time to driving cessation in older adults. Optom Vis Sci. 2005; 82(8): 765. [CrossRef] [PubMed]
Brown B. Reading performance in low vision patients: relation to contrast and contrast sensitivity. Optom Vis Sci. 1981; 58(3): 218. [CrossRef]
Vingopoulos F, Bannerman A, Zhou P, et al. Towards the validation of quantitative contrast sensitivity as a clinical endpoint: correlations with vision-related quality of life in bilateral AMD. Br J Ophthalmol. 2024; 108(6): 846–851. [CrossRef] [PubMed]
Stellmann J, Young K, Pöttgen J, Dorr M, Heesen C. Introducing a new method to assess vision: computer-adaptive contrast-sensitivity testing predicts visual functioning better than charts in multiple sclerosis patients. Mult Scler J Exp Transl Clin. 2015; 1: 2055217315596184. [PubMed]
Mangione CM, Lee PP, Gutierrez PR, et al. Development of the 25-item National Eye Institute Visual Function Questionnaire. Arch Ophthalmol. 2001; 119(7): 1050–1058. [CrossRef] [PubMed]
Okamoto F, Okamoto Y, Hiraoka T, Oshika T. Vision-related quality of life and visual function after retinal detachment surgery. Am J Ophthalmol. 2008; 146(1): 85–90.e1. [CrossRef] [PubMed]
Anderson RS, Thibos LN. Relationship between acuity for gratings and for tumbling-E letters in peripheral vision. J Opt Soc Am A Opt Image Sci Vis. 1999; 16(10): 2321–2333. [CrossRef] [PubMed]
McAnany JJ, Alexander KR. Spatial frequencies used in Landolt C orientation judgments: relation to inferred magnocellular and parvocellular pathways. Vision Res. 2008; 48(26): 2615–2624. [CrossRef] [PubMed]
Anderson R, Thibos L. The filtered Fourier difference spectrum predicts psychophysical letter discrimination in the peripheral retina. Spat Vis. 2004; 17(1-2): 5–15. [CrossRef] [PubMed]
Alexander KR, McAnany JJ. Determinants of contrast sensitivity for the Tumbling E and Landolt C. Optom Vis Sci. 2010; 87(1): 28. [CrossRef] [PubMed]
Hall C, Wang S, Bhagat R, McAnany JJ. Effect of luminance noise on the object frequencies mediating letter identification. Front Psychol. 2014; 5: 663. [CrossRef] [PubMed]
McAnany JJ. The effect of exposure duration on visual acuity for letter optotypes and gratings. Vision Res. 2014; 105: 86–91. [CrossRef] [PubMed]
McAnany JJ, Alexander KR, Lim JI, Shahidi M. Object frequency characteristics of visual acuity. Invest Ophthalmol Vis Sci. 2011; 52(13): 9534–9538. [CrossRef] [PubMed]
Hou F, Lu ZL, Huang CB. The external noise normalized gain profile of spatial vision. J Vis. 2014; 14(13): 9. [CrossRef] [PubMed]
Thurman SM, Davey PG, McCray KL, Paronian V, Seitz AR. Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements. J Vis. 2016; 16(15): 15. [CrossRef] [PubMed]
Chung STL, Legge GE, Tjan BS. Spatial-frequency characteristics of letter identification in central and peripheral vision. Vision Res. 2002; 42(18): 2137–2152. [CrossRef] [PubMed]
Rovamo J, Luntinen O, Näsänen R. Modelling the dependence of contrast sensitivity on grating area and spatial frequency. Vision Res. 1993; 33(18): 2773–2788. [CrossRef] [PubMed]
Watson AB, Ahumada AJ, Jr. Predicting visual acuity from wavefront aberrations. J Vis. 2008; 8(4): 17. [CrossRef]
Majaj NJ, Pelli DG, Kurshan P, Palomares M. The role of spatial frequency channels in letter identification. Vision Res. 2002; 42(9): 1165–1184. [CrossRef] [PubMed]
Chen G, Hou F, Yan FF, et al. Noise provides new insights on contrast sensitivity function. PLoS One. 2014; 9(3): e90579. [CrossRef] [PubMed]
DeValois RL, DeValois KK. Spatial Vision. Oxford, UK: Oxford University Press; 1990.
Graham NVS. Visual Pattern Analyzers. Oxford, UK: Oxford University Press; 2001.
Watson AB. Detection and recognition of simple spatial forms. In: Braddick OJ, Sleigh AC, eds. Physical and Biological Processing of Images. New York: Springer-Verlag; 1983: 100–114.
Watson AB. Visual detection of spatial contrast patterns: evaluation of five simple models. Opt Express. 2000; 6(1): 12–33. [CrossRef] [PubMed]
Watson AB, Ahumada AJ. A standard model for foveal detection of spatial contrast. J Vis. 2005; 5(9): 717–740. [CrossRef] [PubMed]
Petrov AA, Dosher BA, Lu ZL. The dynamics of perceptual learning: an incremental reweighting model. Psychol Rev. 2005; 112(4): 715–743. [CrossRef] [PubMed]
Legge GE, Foley JM. Contrast masking in human vision. J Opt Soc Am. 1980; 70(12): 1458–1471. [CrossRef] [PubMed]
Watson AB, Solomon JA. Model of visual contrast gain control and pattern masking. J Opt Soc Am A. 1997; 14(9): 2379–2391. [CrossRef]
Chung STL, Tjan BS. Spatial-frequency and contrast properties of reading in central and peripheral vision. J Vis. 2009; 9(9): 16. [CrossRef] [PubMed]
Kwon M, Legge GE. Spatial-frequency cutoff requirements for pattern recognition in central and peripheral vision. Vision Res. 2011; 51(18): 1995–2007. [CrossRef] [PubMed]
Zhao Y, Lesmes LA, Dorr M, Lu ZL. Collective endpoint of visual acuity and contrast sensitivity function from hierarchical Bayesian joint modeling. J Vis. 2023; 23(6): 13. [CrossRef] [PubMed]
Zhao Y, Lesmes LA, Dorr M, Bex PJ, Lu ZL. Psychophysical validation of a novel active learning approach for measuring the visual acuity behavioral function. Transl Vis Sci Technol. 2021; 10(1): 1–1. [CrossRef] [PubMed]
Ginsburg AP. Contrast sensitivity and functional vision. Int Ophthalmol Clin. 2003; 43(2): 5–15. [CrossRef] [PubMed]
Owsley C. Contrast sensitivity. Ophthalmol Clin N Am. 2003; 16(2): 171–177. [CrossRef]
Ginsburg AP. A new contrast sensitivity vision test chart. Am J Optom Physiol Opt. 1984; 61(6): 403–407. [CrossRef] [PubMed]
Lu ZL, Zhao Y, Lesmes LA, Dorr M. Quantifying the functional relationship between visual acuity and contrast sensitivity function. Invest Ophthalmol Vis Sci. 2024; 65(7): 5456.
Lesmes LA, Lu ZL, Baek J, Albright TD. Bayesian adaptive estimation of the contrast sensitivity function: the quick CSF method. J Vis. 2010; 10(3): 17.1–21. [CrossRef] [PubMed]
Hou F, Lesmes L, Bex P, Dorr M, Lu ZL. Using 10AFC to further improve the efficiency of the quick CSF method. J Vis. 2015; 15(9): 2. [CrossRef] [PubMed]
Marquardt DW. Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation. Technometrics. 1970; 12(3): 591–612. [CrossRef]
Marquardt DW, Snee RD. Ridge regression in practice. Am Stat. 1975; 29(1): 3–20. [CrossRef]
Hoerl AE, Kennard RW. Ridge regression: biased estimation for nonorthogonal problems. Technometrics. 1970; 12(1): 55–67. [CrossRef]
Hoerl AE, Kennard RW. Ridge regression: applications to nonorthogonal problems. Technometrics. 1970; 12(1): 69–82. [CrossRef]
Beck RW, Moke PS, Turpin AH, et al. A computerized method of visual acuity testing: adaptation of the early treatment of diabetic retinopathy study testing protocol. Am J Ophthalmol. 2003; 135(2): 194–205. [CrossRef] [PubMed]
Hou F, Lesmes LA, Kim W, et al. Evaluating the performance of the quick CSF method in detecting contrast sensitivity function changes. J Vis. 2016; 16(6): 18. [CrossRef] [PubMed]
Tyler CW. Colour bit-stealing to enhance the luminance resolution of digital displays on a single pixel basis. Spat Vis. 1997; 10(4): 369–377. [CrossRef] [PubMed]
Lu ZL, Dosher BA. Visual Psychophysics: From Laboratory to Theory. Cambridge, MA: MIT Press; 2013.
Lu ZL, Xu P, Lesmes L, Yu D. Systems and methods for measuring visual function maps. Available at: https://patents.google.com/patent/US10925481B2/en?q=(contrast+sensitivity)&inventor=Zhong-LIn+lu. Published online February 23, 2021. Accessed May 19, 2024.
Lesmes LA, Dorr M. Active learning for visual acuity testing. In: Proceedings of the 2nd International Conference on Applications of Intelligent Systems. APPIS ’19. Association for Computing Machinery; 2019: 1–6.
Snellen H. Optotypi ad visum determinandum (letterproeven tot bepaling der gezichtsscherpte; probebuchstaben zur bestimmung der sehschaerfe). Utrecht Neth Weyers. Vol. 1. J. Greven, 1862.
Early Treatment Diabetic Retinopathy Study Research Group. Early Treatment Diabetic-Retinopathy Study design and base-line patient characteristics - ETDRS report number-7. Ophthalmology. 1991; 98(5): 741–756. [CrossRef] [PubMed]