**Purpose**:
Perimetric sensitivities become more variable with glaucomatous functional loss. This study examines the extent to which this relation varies between locations, and whether this can be predicted by eccentricity-related differences in spatial summation.

**Methods**:
Longitudinal series of visual fields from standard automated perimetry were obtained from participants with suspected or extant glaucoma. For each location in the 24-2 visual field, heterogeneous fixed-effects models were fit to the data, assuming that variability increased exponentially as sensitivity decreased. The predicted variability at each location was calculated when sensitivity was either 30 dB or 25 dB.

**Results**:
Variability significantly increased with damage at all 52 locations. When sensitivity was 30 dB, variability increased with eccentricity, with *P* = 0.0003. The average SD was 1.54 dB at the four most central locations, versus 1.74 dB at the most peripheral locations. When sensitivity was 25 dB, variability did not vary predictably with eccentricity, with *P* = 0.340. The average SD was 2.36 dB at the four central locations, versus 2.24 dB at the most peripheral locations.

**Conclusions**:
The relation between sensitivity and variability differed by eccentricity. Among healthy locations, variability was lower centrally, where the stimulus size is larger than Ricco's area, than peripherally. Among damaged locations, variability did not systematically vary with eccentricity. This could be because Ricco's area expands in glaucoma, such that stimuli were now smaller than this area at all locations.

Standard automated perimetry is central to the diagnosis and monitoring of glaucoma^{1–4}; however, its results are notoriously variable between tests for the same eye,^{5–9} delaying detection of damage and disease progression.^{10} Most perimetric research can be framed as a series of gradual improvements to the signal-to-noise ratio.^{11–13} To reduce this variability in future functional testing paradigms, it is important to first understand the sources of the variability in current testing.

Test-retest variability is affected by several factors,^{14} including the experience level of the technician conducting the test,^{15} the time of day,^{15} and even the time of year.^{15,16} However, the biggest source of variability is short-term fluctuation^{8,17} related to the psychometric function. The physiologic response to a stimulus presentation involves an increase in the rate at which retinal ganglion cells produce neural spikes, but this spiking rate is inherently probabilistic.^{18,19} Therefore, the observer will not always respond to stimuli of greater contrast than the physiologic detection threshold, and will sometimes respond to stimuli of lesser contrast.^{19,20} Indeed, perimetric testing algorithms aim to converge to a contrast to which the observer will respond on a certain percentage of presentations, and not respond to the remaining presentations.

With glaucomatous damage, the frequency-of-seeing curve becomes shallower,^{21} causing an increase in intratest variability,^{8,22,23} a major component of test-retest variability, because there is a broader range of contrasts at which the response probability will be neither 0% nor 100%. Heijl et al.^{22} suggested that this increase in variability may vary by distance from fixation, whereby variability was lower in the central visual field than peripherally in eyes with near-normal sensitivity, but there was no apparent correlation between variability and eccentricity in eyes with moderate glaucomatous loss. That study was performed by testing 51 patients four times within a month. The effect may be confounded by the fact that normal sensitivities are higher centrally than peripherally, which would also be expected to result in lower variability.^{21}

Retinal ganglion cell density in the normal eye declines with increasing eccentricity.^{24,25} A related psychophysical feature of the normal visual system is that Ricco's area increases with visual field eccentricity.^{26} For stimuli smaller than Ricco's area, complete spatial summation occurs, such that stimulus area multiplied by stimulus intensity is constant at the detection threshold (hence, a smaller stimulus must have proportionately higher intensity for it to be equally detectable). For stimuli larger than Ricco's area, only partial spatial summation occurs. Ricco's area is also larger in glaucomatous eyes than in normal eyes.^{27} The difference in detection contrast between normal and glaucomatous locations appears to be greater for stimuli smaller than Ricco's area.^{27} It has therefore been suggested that modifying stimulus size so that it remains within Ricco's area may be a way to increase the signal-to-noise ratio of perimetry.^{27–29}
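The arithmetic of these two regimes can be sketched briefly. In the following Python sketch, the summation exponent `k` is an illustrative assumption (complete summation corresponds to k = 1; partial summation to some 0 < k < 1), not a measured value:

```python
import math

def threshold_shift_db(area_ratio, k):
    """Change in threshold intensity (dB) when stimulus area is scaled by
    `area_ratio`. Under complete spatial summation (k = 1), area * intensity
    is constant at threshold, so threshold intensity scales as area**(-1);
    under partial summation the exponent k lies between 0 and 1."""
    return 10 * math.log10(area_ratio ** (-k))

# Halving the area under complete summation doubles the threshold intensity,
# so the stimulus must be about 3 dB more intense to remain detectable:
print(round(threshold_shift_db(0.5, 1.0), 2))
# Under partial summation (illustrative k = 0.5) the penalty is smaller:
print(round(threshold_shift_db(0.5, 0.5), 2))
```

This is why, below Ricco's area, a smaller stimulus must be proportionately more intense to remain equally detectable, while above it the penalty for shrinking the stimulus is reduced.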

At peripheral locations in healthy eyes,^{30} and more centrally in glaucomatous eyes,^{27} the size III stimulus is approximately equal to or smaller than Ricco's area, causing complete spatial summation to occur in many eyes. The slope of partial summation (the change in threshold for a given change in stimulus size) becomes steeper with increasing eccentricity,^{31} while the relation for complete spatial summation remains unchanged.^{32} Therefore, existing perimetric data can be used to reveal important information about the consequences of using stimuli larger or smaller than Ricco's area. In particular, if the relative size of the stimulus and Ricco's area is indeed a key determinant of perimetric variability, then the sensitivity-variability relation should differ with eccentricity.

This study revisits the relation reported by Heijl et al.^{22} using a longitudinal dataset, analyzed in a manner so as to remove the effects of possible disease progression. The advantages of this approach are 2-fold: first, it represents a more clinically realistic scenario, in particular with regard to learning effects,^{33} because the intertest interval is 6 months rather than just a week; and second, it allows a much larger dataset to be used so that more precise pointwise results can be generated. If variability is indeed lower when stimulus size is larger than Ricco's area, then this will provide an essential piece of information concerning the expected variability when deciding on the optimal balance between stimulus size and contrast in testing algorithms.

**Methods**:

Participants were drawn from an ongoing longitudinal study of glaucoma.^{34} Inclusion criteria were a diagnosis of primary open-angle glaucoma, and/or likelihood of developing glaucomatous damage (e.g., ocular hypertension with other risk factors, such as a suspicious-looking optic disc or a family history of glaucoma), as determined by each participant's physician. Exclusion criteria were an inability to perform reliable visual field testing, best-corrected visual acuity worse than 20/40, substantial cataract or media opacities likely to increase light scatter, or other conditions or medications that may significantly affect the visual field. A visual field defect was not a requirement for study entry. Participants provided written informed consent once all of the risks and benefits of participation were explained to them. All protocols were approved and monitored by the Legacy Health Institutional Review Board, and adhered to the Health Insurance Portability and Accountability Act of 1996 and the tenets of the Declaration of Helsinki.

Visual fields obtained using the SITA Standard algorithm^{35} on the Humphrey Field Analyzer II, with the 24-2 test pattern, were included for analysis. Tests were excluded if >15% false positives or >33% fixation losses were recorded. If the technician considered the test to be unreliable, it was repeated, and only the last test on that day for each eye was included in the analysis. For this study, only series of five or more visual fields were included.

Statistical analyses were performed in R using the nlme package.^{36} Sensitivities ≤15 dB were excluded from the analysis^{37,38}; see the Discussion for more on this point. Eyes were then excluded if fewer than five visual fields remained in their series, because estimates of their rate of change would be insufficiently accurate.

For each location, a data frame *LongitudinalData* contains columns for *Sensitivity* (the pointwise sensitivity values at the chosen location), *Eye* (a unique identifier for each eye in the dataset), and *TestDate* (the date on which the visual field was taken, expressed as the number of days since the start of the series). The model assumes that *Sensitivity* changes linearly with *TestDate*, with a different starting point and rate for each eye. The *weights* argument in the model-fitting function means that the error variance is assumed to be related exponentially to *Sensitivity*, as in previous empirical studies.^{21} For a given value of *Sensitivity*, the SD of the error terms is assumed to be given by *SD(Sensitivity) = A_{0} · e^{B·(Sensitivity − C)}*, which simplifies to *SD(Sensitivity) = A · e^{B·Sensitivity}*. The starting value for the algorithm was set to *B = −0.08*, as this was found to result in successful convergence of the algorithm for all test locations. For heteroscedastic linear models, weighting observations according to the reciprocal of the error variance function has been shown to be more robust to model misspecification than transforming the data to achieve homoscedasticity.^{39}
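The variance model can be illustrated outside of nlme as a direct maximum-likelihood fit. The Python sketch below simulates data from the assumed exponential variance function and recovers its parameters; unlike the full model, the underlying means are treated as known, purely to isolate the variance-fitting step, and all numeric values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate sensitivities whose noise SD follows SD(s) = A * exp(B * s)
A_true, B_true = 14.8, -0.08
true_mean = rng.uniform(15, 35, 4000)          # underlying sensitivities, dB
obs = true_mean + rng.normal(0.0, A_true * np.exp(B_true * true_mean))

def neg_log_lik(params):
    """Gaussian negative log-likelihood with SD = exp(logA + B * sensitivity)."""
    logA, B = params
    sd = np.exp(logA + B * true_mean)
    return np.sum(np.log(sd) + 0.5 * ((obs - true_mean) / sd) ** 2)

res = minimize(neg_log_lik, x0=[np.log(10.0), -0.05], method="Nelder-Mead")
A_hat, B_hat = float(np.exp(res.x[0])), float(res.x[1])
print(round(A_hat, 1), round(B_hat, 3))   # should land near A_true and B_true
```

In the real analysis the per-eye linear trends and the variance function are estimated jointly, which is what the *weights* argument to the R fitting function arranges.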

The fitted value of parameter *A* was extracted from the fitted model using the code *sigma(Fit)*. The fitted value of parameter *B* was extracted together with its approximate 95% confidence interval (based on a normal approximation to the distribution of the maximum likelihood estimator), using the code *intervals(Fit)$varStruct*. These were then used to calculate the predicted SD of errors when the sensitivity is 30 dB or 25 dB, to provide examples of the predictions and illustrate the differences in behavior across the visual field (note that these examples are based on the results from the entire dataset, not just on locations with these particular sensitivities).

*Fit* and a homoscedastic model with constant error variance, *Fit.Homoscedastic*, were compared by Akaike's Information Criterion (AIC) to assess whether incorporating heteroscedasticity improves the fit of the model to the observed data, which can be taken as an indication of whether the error variance is significantly related to sensitivity. The null hypothesis is that the error variance is independent of sensitivity, in which case the homoscedastic and heteroscedastic models will fit the data equally well.
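The logic of this AIC comparison can be sketched on simulated data (all parameter values below are assumptions): fit a constant-SD model and an exponential-SD model by maximum likelihood and compare AIC = 2k + 2·NLL, where k is the number of variance parameters:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Heteroscedastic data: noise SD shrinks as sensitivity rises (assumed values)
s = rng.uniform(15, 35, 2000)
y = s + rng.normal(0.0, 14.8 * np.exp(-0.08 * s))

def nll_hetero(p):
    logA, B = p
    sd = np.exp(logA + B * s)
    return np.sum(np.log(sd) + 0.5 * ((y - s) / sd) ** 2)

def nll_homo(p):
    sd = np.exp(p[0])
    return np.sum(np.log(sd) + 0.5 * ((y - s) / sd) ** 2)

nll_h = minimize(nll_hetero, [np.log(5.0), -0.05], method="Nelder-Mead").fun
nll_0 = minimize(nll_homo, [np.log(5.0)], method="Nelder-Mead").fun

# AIC = 2 * (number of parameters) + 2 * negative log-likelihood
aic_hetero = 2 * 2 + 2 * nll_h
aic_homo = 2 * 1 + 2 * nll_0
print(aic_hetero < aic_homo)   # heteroscedastic model fits decisively better
```

When the data are genuinely heteroscedastic, the one extra variance parameter is repaid many times over in log-likelihood, so the heteroscedastic model wins on AIC despite its penalty.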

To assess whether the results were affected by autocorrelation between tests close together in time, a further model, *Fit.CAR*, was fit, incorporating a continuous-time first-order autoregressive (CAR1) correlation structure.^{40} A recent study used an autoregressive moving average correlation structure for similar analyses^{41}; however, that approach discretized time using "visit number" as its covariate, and a CAR model would be expected to be more realistic in a situation in which the intervisit time interval can vary substantially (for example, if data from one test visit was excluded due to unreliability of the sensitivity value, resulting in an intervisit interval of 1 year instead of 6 months). When fitting the *Fit.CAR* model, the value of the variance function coefficient was fixed at the value of *B* found in the primary model for this location. Principally, this is to ensure that direct comparisons between *Fit* and *Fit.CAR* are affected only by the presence of an autocorrelation term, and not by differing values of the variance function coefficient; a secondary advantage is that fixing this coefficient greatly aids with achieving algorithmic convergence. The comparison provided by *anova(Fit, Fit.CAR)* can then be used to assess whether significant autocorrelation is present. Note that *Fit* is nested within *Fit.CAR*, and hence they can be formally compared using an ANOVA, whereas *Fit* and *Fit.Homoscedastic* are not nested (due to the different weightings of observations) and so have to be compared using AIC.
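The distinction can be sketched generically: for nested models, twice the difference in maximized log-likelihood is compared against a χ² distribution, which is the likelihood-ratio test that *anova* performs. The toy models below are illustrative only, not the perimetric ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, 500)

# Nested pair: the restricted model fixes the mean at 0; the full model estimates it.
def neg_log_lik(mu):
    return np.sum(0.5 * (y - mu) ** 2)   # Gaussian with known unit variance

lrt = 2 * (neg_log_lik(0.0) - neg_log_lik(y.mean()))  # likelihood-ratio statistic
p_value = stats.chi2.sf(lrt, df=1)                    # one extra free parameter
print(p_value < 0.05)   # the extra parameter is clearly needed here
```

This test is only valid when one model is a constrained special case of the other; for non-nested pairs (such as two differently weighted fits), an information criterion like AIC is used instead.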

**Results**:

Figure 1 maps the fitted value of the variance function coefficient *B* (top left); the fitted value of *A*, equivalent to the SD of the error terms that would be predicted when *Sensitivity = 0 dB* (top right); and the fitted SDs of the error terms when *Sensitivity = 30 dB* (bottom left) and when *Sensitivity = 25 dB* (bottom right) as examples illustrating the difference in behavior at different sensitivities. Note that the predicted SD of the test-retest variability would be a factor of √2 greater than this, due to there being variability in both the test and the retest observations. Notably, the variance function coefficient was negative at all locations (*B* was significantly less than zero with *P* < 0.05), consistent with previous findings that variability decreases as sensitivity increases.^{21} Averaging across all locations gave mean values of *A = 14.8* and *B = −0.070*.

When *Sensitivity = 30 dB*, the predicted variability generally increases with eccentricity, from a mean SD of 1.54 dB at the four central locations to a mean of 1.74 dB at the most peripheral ring of locations. The Pearson correlation between eccentricity and the fitted SD at *Sensitivity = 30 dB* was 0.483, with *P* = 0.0003. The width of the 95% confidence interval for the variance function coefficient *B* varied by location, from ±0.0037 to ±0.0067. Treating the values of *A* as fixed, this results in 95% intervals for the fitted SD at *Sensitivity = 30 dB* with widths ranging from ±0.14 dB to ±0.31 dB away from the values shown. This indicates that the coefficients at locations around the edge of the 24-2 field were significantly higher than those at central locations with *P* < 0.05.
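These intervals follow from propagating the confidence interval for *B* through the variance function with *A* held fixed. A small sketch, using the whole-field average parameters and a mid-range half-width for *B* (illustrative values, not the per-location results):

```python
import math

def sd_interval(A, B, half_width_B, sens_db):
    """Interval for SD(s) = A * exp(B * s) when B is uncertain by +/- half_width_B
    and A is treated as fixed, as described in the text."""
    low = A * math.exp((B - half_width_B) * sens_db)
    mid = A * math.exp(B * sens_db)
    high = A * math.exp((B + half_width_B) * sens_db)
    return low, mid, high

# Illustrative: average A and B with a mid-range half-width for B
low, mid, high = sd_interval(14.8, -0.070, 0.005, 30)
print(round(mid - low, 2), round(high - mid, 2))  # note the asymmetry of the interval
```

Because the SD depends exponentially on *B*, the resulting interval for the fitted SD is asymmetric about the point estimate.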

When *Sensitivity = 25 dB*, there is no obvious trend of variability against eccentricity. The Pearson correlation between eccentricity and the fitted SD at *Sensitivity = 25 dB* was 0.135, with *P* = 0.340. Now, the mean predicted SD was 2.36 dB at the four central locations, versus a mean of 2.24 dB at the most peripheral locations. At this sensitivity, the 95% confidence intervals for the fitted SDs had widths ranging from ±0.15 dB to ±0.39 dB. It is therefore possible that central locations may be even more variable than many of the peripheral locations once this level of functional loss has been sustained. However, such conclusions must be viewed with caution, because the proportion of eyes that had sensitivity between 15 dB and 25 dB was below 1.6% at all four of the most central locations, compared with more than 20% at the four most superior locations.

The heteroscedastic model (*Fit* in the Methods section above) fit the data significantly better than the homoscedastic model (*Fit.Homoscedastic*) at all 52 locations, as evidenced by a lower AIC value. The model incorporating autocorrelation between observations that were close together in time (*Fit.CAR*) significantly improved the fit at 5 of the 52 locations. However, even at these locations, the fitted value of the correlation between the residuals at two observations 1 year apart was below 0.01, indicating that any autocorrelation was not meaningfully affecting the results.

To examine the sensitivity-variability relation without assuming a parametric form, the fitted values from *Fit.Homoscedastic* at each of the 52 locations were split into 1-dB-wide bins, and the SDs of the residuals from all data points within each bin were calculated. These results are plotted in Figure 2 for two example locations, (+15°, −15°) and (−9°, +3°). It can be clearly seen that when the sensitivity is near-normal, the variance is higher for location (+15°, −15°) than for location (−9°, +3°), but this difference disappears once more substantial damage has occurred. Note that the apparently high variability at location (−9°, +3°) for sensitivities 34 to 35 dB may not be a reliable result, because only 30 visual fields (0.4%) fell within that bin.
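The binning procedure is simple to reproduce. The sketch below simulates residuals from an assumed variance function, purely to show the mechanics of the 1-dB binning:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated fitted sensitivities, and residuals whose spread grows as sensitivity falls
fitted = rng.uniform(15, 35, 5000)
resid = rng.normal(0.0, 14.8 * np.exp(-0.08 * fitted))

# Split the fitted sensitivities into 1-dB-wide bins; take the SD of residuals per bin
edges = np.arange(15.0, 36.0, 1.0)
bin_idx = np.digitize(fitted, edges) - 1
bin_sd = [resid[bin_idx == i].std() for i in range(len(edges) - 1)]

print(round(bin_sd[0], 2), round(bin_sd[-1], 2))  # low-sensitivity bins vary more
```

As in Figure 2, sparsely populated bins give noisy SD estimates, which is why bins containing very few fields should be interpreted with caution.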

**Discussion**:

Averaged across all locations, the fitted variance function was *SD(Sensitivity) = 14.8 · e^{−0.070·Sensitivity}*. Equivalently, *log(SD) = −0.070·Sensitivity + 2.70*. This is close to, but slightly smaller than, the estimate from Henson et al.^{21} of *log(SD) = −0.081·Sensitivity + 3.27*. It is not surprising that these differ, because that article was estimating the slopes of frequency-of-seeing curves, and does not incorporate the effect of the perimetric testing algorithm. However, the similarity in the form of the function and in the magnitude of its parameters is reassuring. Other studies using perimetric data but with otherwise different methodologies have shown very similar effect sizes. In our results, averaging all locations, the predicted SD was 1.10 dB at a sensitivity of 35 dB, and 2.15 dB at a sensitivity of 25 dB. By way of comparison, Artes et al.^{42} reported SDs of approximately 0.9 dB and 2.1 dB at those sensitivities, whereas Russell et al.^{43} reported SDs of 1.2 dB and 2.2 dB.
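The two fitted relations can be evaluated side by side; note that *log* here is the natural logarithm (ln 14.8 ≈ 2.70, matching the intercept above):

```python
import math

def sd_this_study(s):
    """log(SD) = -0.070 * Sensitivity + 2.70 (natural log), from this study."""
    return math.exp(-0.070 * s + 2.70)

def sd_henson(s):
    """log(SD) = -0.081 * Sensitivity + 3.27, as reported by Henson et al."""
    return math.exp(-0.081 * s + 3.27)

for s in (35, 30, 25):
    print(s, round(sd_this_study(s), 2), round(sd_henson(s), 2))
```

Across this sensitivity range the Henson et al. line sits slightly above this study's line, consistent with the text.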

In contrast, when sensitivity was reduced to 25 dB, variability no longer varied predictably with eccentricity (*P* = 0.340). This confirms and extends previous findings by Heijl et al.^{22} These results are relevant to proposals to alter the stimulus size used in perimetry,^{29} and indeed it may be suboptimal to use the same stimulus size at all locations and all levels of damage.^{13} This study demonstrates that the optimal stimulus size will not only be affected univariately by damage level and location, but also by the interaction between them.

Ricco's area increases with eccentricity^{30,31} and with glaucoma,^{27} but contrast sensitivity for a stimulus of area equal to Ricco's area has been reported to remain constant.^{27,32,44} It seems reasonable to propose, then, that variability might also remain constant between locations at which the sensitivity has deteriorated (due to aging and/or glaucoma) to the point at which Ricco's area exactly equals the stimulus size. In a healthy eye, Ricco's area equals that of a size III target at locations that are around the edge of, or just outside, the 24-2 visual field grid. At such locations, normal sensitivity for a subject of the average age in our cohort is between 27 and 30 dB. Notably, as seen in Figure 3, variability was most spatially homogeneous when sensitivity was 28 to 29 dB.

The slope of partial spatial summation becomes steeper with increasing eccentricity.^{31} Similarly here, variability (for a given sensitivity) does appear to vary with eccentricity while sensitivity is above approximately 28 to 29 dB. There may be a discontinuity in the sensitivity-variability relation at the point at which Ricco's area starts to exceed the 0.43° diameter of the size III stimulus, with the magnitude of this change in the relation varying with eccentricity.

Optical aberrations of the eye increase with eccentricity.^{45} It is possible that these aberrations are the limiting factor for contrast sensitivity in normal eyes, causing variability to increase with eccentricity; whereas in a glaucomatous eye, undersampling resulting from loss of retinal ganglion cells may become the limiting factor, and so the eccentricity effect is diminished. Further evidence would be needed before concluding that the magnitude of such an effect could entirely explain our results.

Increased variability has also been reported toward the edge of the normal visual field,^{46} even though this effect is not as pronounced as at the edge of a glaucomatous scotoma.^{47}

Perimetric sensitivity estimates at severely damaged locations have been reported to correlate poorly (*R*^{2} < 0.1) with sensitivities at the same location measured more accurately using frequency-of-seeing curves.^{37} This is consistent with the fitted SD of the error terms increasing exponentially as sensitivity decreases. In this study, locations with sensitivity ≤15 dB were excluded from the analysis, so that this unreliability would not affect the main results; however, it should be noted that the heteroscedastic linear model used in this study fits the model by weighting observations according to the reciprocal of the error variance.^{39} Therefore sensitivities ≤15 dB would have low weighting in the regression model, and so their omission is unlikely to have greatly affected our results or conclusions.

It has been suggested that glaucomatous visual field series may be better described by an exponential than a linear course of sensitivity over time,^{40,48} as would be performed when analyzing possible risk factors for rapid progression. Intuitively, such an exponential model is more consistent with longitudinal series in which the eye is functionally normal for an extended period of time before visual field loss develops. Certainly, a reduced baseline sensitivity is predictive of a worse rate of change.^{49} However, an accelerating exponential model whose variability increases exponentially as sensitivity decreases cannot be fit to the data, due to lack of identifiability between the exponential parameters. Just as importantly, this study used data from patients' entire series of visual fields, even if a treatment change occurred partway. If an eye were seen to be rapidly progressing, then the patient's clinician would alter the management strategy accordingly, hopefully slowing the rate of progression. As a result, accelerating and decelerating series were approximately equally common in our dataset. When performing quadratic fits for individual eyes, 47.4% had a negative coefficient for Time squared, indicating that the series was decelerating (the rate of change was slowing over time); 52.6% had a positive coefficient, indicating that the series was accelerating (the rate of change was becoming faster). Overall, these coefficients were not significantly different from zero (*P* = 0.206, *t*-test). This causes a linear model to fit the data better than an accelerating exponential model in this particular study. Any such treatment changes would not occur at a consistent time in the series, or at a consistent sensitivity for an individual location, and so they should not influence the distribution of residuals.


All tests in this study used the SITA Standard algorithm.^{35} After all stimulus presentations have been made, the algorithm applies a proprietary spatial smoothing algorithm, aiming to reduce variability without obscuring true defects. It seems unlikely that this would cause a consistent eccentricity-related difference in the sensitivity-variability relation, but because the details are not public knowledge, it cannot be ruled out that this may have affected the results.

It has been reported that the increase in variability with damage is less pronounced for some other stimulus types, such as stimuli based on frequency-doubling^{23,50} or size modulation.^{13} In this study, it is apparent that the increase in variability with damage is also far less pronounced at peripheral than central locations when using static size III stimuli. It is therefore important to consider visual field location (and in particular eccentricity) as an additional relevant factor in such studies, because this represents a considerable confound when attributing such findings to stimulus type alone.

**Disclosure**: S.K. Gardiner, Haag-Streit Inc. (C)

**References**:

1. Arch Ophthalmol. 1997; 115: 777–784.
2. J Glaucoma. 2003; 12: 139–150.
3. Ophthalmology. 2008; 115: 941–948.e1.
4. J Glaucoma. 2016; 25: 629–633.
5. Arch Ophthalmol. 1987; 105: 1544–1549.
6. Ophthalmology. 1991; 98: 79–83.
7. Invest Ophthalmol Vis Sci. 1999; 40: 648–656.
8. Invest Ophthalmol Vis Sci. 2001; 42: 1404–1410.
9. Vision Res. 2006; 46: 1732–1745.
10. Invest Ophthalmol Vis Sci. 2011; 52: 3237–3245.
11. Invest Ophthalmol Vis Sci. 2009; 50: 4700–4708.
12. Trans Vis Sci Tech. 2013; 2 (6): 3.
13. Sci Rep. 2018; 8: 2172.
14. Ophthalmology. 1990; 97: 44–48.
15. Invest Ophthalmol Vis Sci. 2012; 53: 7010–7017.
16. Ophthalmology. 2013; 120: 724–730.
17. Arch Ophthalmol. 1984; 102: 704–706.
18. J Vis. 2006; 6: 1159–1171.
19. Vision Res. 2008; 48: 1859–1869.
20. Acta Ophthalmol (Copenh). 1976; 54: 325–338.
21. Invest Ophthalmol Vis Sci. 2000; 41: 417–421.
22. Am J Ophthalmol. 1989; 108: 130–135.
23. Invest Ophthalmol Vis Sci. 2005; 46: 2451–2457.
24. Vision Res. 1995; 35: 7–24.
25. J Comp Neurol. 1990; 300: 5–25.
26. J Opt Soc Am A Opt Image Sci Vis. 2000; 17: 641–560.
27. Invest Ophthalmol Vis Sci. 2010; 51: 6540–6548.
28. Invest Ophthalmol Vis Sci. 2013; 54: 3975–3983.
29. Ophthalmic Physiol Opt. 2017; 37: 160–176.
30. Invest Ophthalmol Vis Sci. 2015; 56: 3565–3576.
31. PLoS One. 2016; 11: e0158263.
32. J Physiol. 1970; 207: 611–622.
33. Optom Vis Sci. 2008; 85: 1043–1048.
34. Invest Ophthalmol Vis Sci. 2012; 53: 3598–3604.
35. Acta Ophthalmol Scand. 1997; 75: 368–375.
36. nlme: Linear and Nonlinear Mixed Effects Models. R Foundation for Statistical Computing, 2017.
37. Ophthalmology. 2014; 121: 1359–1369.
38. Arch Ophthalmol. 2010; 128: 570–576.
39. J Am Stat Assoc. 1982; 77: 878–882.
40. Invest Ophthalmol Vis Sci. 2013; 54: 5505–5513.
41. Trans Vis Sci Tech. 2017; 6 (3): 1.
42. Invest Ophthalmol Vis Sci. 2002; 43: 2654–2659.
43. Invest Ophthalmol Vis Sci. 2012; 53: 5985–5990.
44. Progress in Retinal Research. 1990; 9: 273–336.
45. J Opt Soc Am A. 1998; 15: 2522–2529.
46. Am J Ophthalmol. 1989; 107: 417–420.
47. Ophthalmology. 1991; 98: 1529–1532.
48. Trans Vis Sci Tech. 2015; 4 (1): 8.
49. Optom Vis Sci. 2011; 88: 56–62.
50. Invest Ophthalmol Vis Sci. 2009; 50: 974–979.