**Purpose**:
To propose a static automated perimetry strategy that increases the speed of visual field (VF) evaluation while retaining threshold estimate accuracy.

**Methods**:
We propose a novel algorithm, spatial entropy pursuit (SEP), which evaluates individual locations by using zippy estimation by sequential testing (ZEST) but additionally uses neighboring locations to estimate the sensitivity of related locations. We model the VF with a conditional random field (CRF) where each node represents a location estimate that depends on itself as well as its neighbors. Tested locations are randomly selected from a pool of locations and new locations are added such that they maximally reduce the uncertainty over the entire VF. When no location can further reduce the uncertainty significantly, remaining locations are estimated from the CRF directly.

**Results**:
SEP was evaluated and compared to the tendency-oriented strategy (TOP), ZEST, and the dynamic test strategy by using computer simulations on a test set of 245 healthy and 172 glaucomatous VFs. For glaucomatous VFs, the root-mean-square error (RMSE) of SEP was comparable to that of existing strategies (3.4 dB), whereas the number of stimulus presentations of SEP was up to 23% lower than that of the other methods. For healthy VFs, SEP had an RMSE comparable to the evaluated methods (3.1 dB) but required 55% fewer stimulus presentations.

**Conclusions**:
When compared to existing methods, SEP showed improved performance, especially with respect to test speed. Thus, it represents an interesting alternative to existing strategies.

^{1,2} and patients with neurological disorders.^{3}

^{4,5} and (2) those that use neighboring locations to estimate the sensitivity of related locations.^{6–8} The latter category has received growing interest, as it appears to optimize speed and accuracy more effectively.

^{6} uses a dynamic approach to estimate thresholds at locations and leverages already-found location values to seed neighboring locations when they are selected for testing. Alternatively, the tendency-oriented strategy (TOP)^{8,9} uses neighboring locations to estimate sensitivity thresholds in an asynchronous fashion, leading to extremely fast estimation at the cost of accuracy. More recently, new strategies^{10,11} have focused on using neighboring locations in a more data-driven and coherent fashion, which has led to improved performance. Our work follows this line of research as well.

^{12} represented as a graph of location estimates whose values depend on themselves and on their neighbors. Prior location estimates and neighborhood relationships are derived from a large data set of glaucomatous visual fields. For comparison, we evaluated the performance of our method against that of existing strategies by using simulations implemented in the Open Perimetry Interface (OPI).^{13}

^{10} and Rubinstein et al.^{11} Accordingly, an iterative scheme was derived that follows ZEST testing^{5} at individual locations and internally determines the next location and stimulus intensity to use by leveraging a visual field model. We begin by explaining our method and the visual field model, and how we evaluated our algorithm by using computer simulations.

^{14} is lower than a predefined value), at which point the location is removed from the pool of unfinished locations. ZEST terminates when all locations have been tested.
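The per-location procedure just described can be sketched as follows: a PMF over candidate thresholds is updated by Bayes' rule after every response, and testing stops once the PMF's spread falls below a stop value. This is an illustrative sketch only: it uses a flat prior, a noise-free simulated observer, and the PMF's standard deviation as the stopping measure (a common stand-in for the entropy-style criterion referenced above); all names and parameter values are ours.

```python
import numpy as np
from math import erf, sqrt

def p_seen(stimulus, threshold, sd=1.0):
    # Cumulative-normal probability that a stimulus of `stimulus` dB is seen
    # given a true sensitivity of `threshold` dB (higher dB = dimmer).
    return 0.5 * (1.0 + erf((threshold - stimulus) / (sd * sqrt(2.0))))

def zest_location(true_threshold, levels=np.arange(41.0), sd_stop=1.5,
                  max_presentations=20, seed=0):
    """ZEST at a single location (illustrative): keep a PMF over candidate
    thresholds, present the PMF mean, update by Bayes' rule after each
    response, and stop once the PMF's SD drops below `sd_stop`."""
    rng = np.random.default_rng(seed)
    pmf = np.full(levels.size, 1.0 / levels.size)   # flat prior for simplicity
    n = 0
    while True:
        mean = float(np.dot(pmf, levels))
        sd = float(np.sqrt(np.dot(pmf, (levels - mean) ** 2)))
        if sd <= sd_stop or n >= max_presentations:
            return mean, n                           # final threshold estimate
        seen = rng.random() < p_seen(mean, true_threshold)
        likelihood = np.array([p_seen(mean, t) for t in levels])
        pmf = pmf * (likelihood if seen else 1.0 - likelihood)
        pmf = pmf / pmf.sum()
        n += 1

est, n_pres = zest_location(true_threshold=25.0)
```

Presenting the PMF mean as the next stimulus is one common ZEST variant; the mode or a cost-minimizing level could be used instead.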

^{1,15} (4863 visual fields from 278 eyes of 139 glaucoma patients). The PMFs generated from these data were smoothed with a Gaussian kernel (*σ* = 1.5, window = 10 dB).

*σ* = 2.5, window = 10 dB *×* 10 dB).
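The PMF smoothing described above can be sketched as a truncated Gaussian convolution followed by renormalization. The *σ* and window values follow the text; the 1-dB bin width is our assumption for illustration.

```python
import numpy as np

def smooth_pmf(pmf, sigma=1.5, window_db=10.0, bin_db=1.0):
    """Smooth an empirical threshold PMF with a truncated Gaussian kernel.
    sigma and the window follow the values quoted above; the 1-dB bin width
    is an assumption for illustration."""
    half = int(window_db / (2.0 * bin_db))
    offsets = np.arange(-half, half + 1) * bin_db
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()                       # kernel integrates to 1
    smoothed = np.convolve(np.asarray(pmf, dtype=float), kernel, mode="same")
    return smoothed / smoothed.sum()             # renormalize to a proper PMF

# A spiky empirical PMF over 0-40 dB becomes smooth and still sums to 1.
raw = np.zeros(41)
raw[[18, 25, 26]] = [0.2, 0.5, 0.3]
smoothed = smooth_pmf(raw)
```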

^{16} which iteratively propagates information found at individual locations to neighboring nodes, leveraging the spatial relationships of locations encoded via edge connections. In this way, each node influences nodes farther away according to the probabilistic dependencies. To avoid updating the nodes corresponding to already finished locations, we fixed these nodes' PMFs during the information propagation.
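The propagation step with finished locations held fixed can be sketched as sum-product loopy belief propagation on a small graph of PMFs. The pairwise potential, graph, and all names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def loopy_bp(unaries, edges, pairwise, fixed, n_iter=10):
    """Sum-product loopy belief propagation over per-location PMFs.
    Nodes in `fixed` (finished locations) keep their PMF: their outgoing
    messages are computed from that PMF alone and their beliefs are never
    updated, mirroring the clamping described in the text."""
    n, K = unaries.shape
    nbrs = {i: [] for i in range(n)}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    msgs = {(a, b): np.full(K, 1.0 / K) for e in edges for a, b in (e, e[::-1])}
    for _ in range(n_iter):
        new = {}
        for src, dst in msgs:
            prod = unaries[src].copy()
            if src not in fixed:                 # fixed nodes ignore incoming
                for k in nbrs[src]:
                    if k != dst:
                        prod *= msgs[(k, src)]
            out = pairwise.T @ prod              # marginalize over src states
            new[(src, dst)] = out / out.sum()
        msgs = new
    beliefs = unaries.astype(float).copy()
    for i in range(n):
        if i not in fixed:                       # fixed nodes keep their PMF
            for k in nbrs[i]:
                beliefs[i] *= msgs[(k, i)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Three locations in a chain; location 0 is finished and clamped near state 4.
K = 5
pairwise = np.exp(-0.5 * np.abs(np.subtract.outer(np.arange(K), np.arange(K))))
unaries = np.full((3, K), 1.0 / K)
unaries[0] = np.array([0.02, 0.02, 0.02, 0.02, 0.92])
beliefs = loopy_bp(unaries, [(0, 1), (1, 2)], pairwise, fixed={0})
```

After propagation, the belief of the untested middle location is pulled toward the clamped neighbor's threshold, which is the effect the text describes.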

^{17,18} could be used instead.

*pool* of four locations that are being tested at any given point in time, initially selected as those with the highest uncertainty according to their initial PMFs. The algorithm then automatically and dynamically removes terminated locations from this pool and adds new locations in a way that reduces the overall uncertainty of the visual field as much as possible.

*M_H(i)* and *M_G(i)* are nonnegative and stand for the entropy and the neighborhood heterogeneity of the visual field estimates at location *i*, respectively. The parameter *α* ∈ ℝ^+ is a weight that influences the relative importance of these two factors. The next location is then selected as the one maximizing *C_i* in Equation 1, implying that either one or both of the defined measures should be high. The entropy measure, *M_H(i)*, quantifies the uncertainty of the modeled PMF of location *i*, while the neighborhood heterogeneity, *M_G(i)*, represents an approximation of the spatial threshold gradient of the current field estimate (see 1 for computational details). In particular, *M_G(i)* quantifies neighborhood threshold consistency and is higher at locations whose neighbor estimates differ from one another.

Once the location maximizing *C_i* is moved to the pool of locations to be tested, we substitute the PMF of the newly added location with the modeled PMF. This is achieved by summing the model PMF with a constant *δ*, to reduce its confidence, and then multiplying it with the prior PMF. This effectively avoids being overconfident in the model probabilities.

Testing proceeds until the maximum *C_i* is lower than a predefined value. In this case, no additional location is moved to the pool, and the algorithm terminates as soon as the remaining three locations are finished. Importantly, this implies that SEP does not measure all visual field locations and, after termination of the algorithm, infers sensitivity thresholds for the untested locations from the visual field model.
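A hypothetical sketch of the selection criterion *C_i* = *M_H(i)* + *α·M_G(i)* and of the *δ*-softened seeding of a newly added location's PMF. Here *M_G(i)* is approximated by the spread of the neighboring median estimates rather than by the gradient kernels of the appendix, and all names and values are ours.

```python
import numpy as np

def entropy(pmf):
    """Shannon entropy M_H(i) of a location's PMF (0*log 0 treated as 0)."""
    p = np.asarray(pmf, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_next(pmfs, neighbor_medians, alpha=1.0):
    """Score each candidate location with C_i = M_H(i) + alpha * M_G(i)
    and return the index of the maximizer. M_G(i) is approximated here by
    the spread of the neighboring median estimates (illustrative only)."""
    scores = []
    for pmf, nbrs in zip(pmfs, neighbor_medians):
        m_h = entropy(pmf)
        m_g = float(max(nbrs) - min(nbrs))       # neighborhood heterogeneity
        scores.append(m_h + alpha * m_g)
    return int(np.argmax(scores)), scores

def seed_pmf(model_pmf, prior_pmf, delta=0.05):
    """Soften the model PMF with a constant delta, then multiply it with the
    prior, to avoid overconfidence in the model probabilities."""
    p = (np.asarray(model_pmf, dtype=float) + delta) * np.asarray(prior_pmf)
    return p / p.sum()
```

A location with a flat (uncertain) PMF surrounded by disagreeing neighbors thus outscores a confident location in a uniform region, which is exactly the behavior the criterion is designed to produce.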

^{13} Responses to stimulus presentations were modeled by sampling from a frequency-of-seeing (FOS) curve (i.e., a psychometric function) with predefined false-positive and false-negative response rates of 3% and 1%, respectively. The slope of the FOS curve for a given threshold was modeled with a cumulative normal distribution whose standard deviation (SD) was set according to a published variability formula.^{19} The maximum SD allowed for the slope was set to 6 dB. To find suitable parameters for SEP and ZEST (i.e., parameters that minimize the number of stimulus presentations while yielding accuracy levels comparable to the dynamic test strategy), a parameter optimization procedure was performed (see 2) on a subset of the data.
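The simulated observer described above can be sketched directly. The published variability formula for the slope SD is not reproduced here, so `slope_sd` is left as a free parameter that is capped at the stated 6 dB.

```python
from math import erf, sqrt

def fos_probability(stimulus_db, threshold_db, slope_sd, fp=0.03, fn=0.01):
    """Probability of a 'seen' response under a cumulative-normal FOS curve
    with the quoted 3% false-positive and 1% false-negative rates. The
    published variability formula for `slope_sd` is not reproduced here;
    whatever it yields is capped at 6 dB, as stated in the text."""
    sd = min(slope_sd, 6.0)
    p = 0.5 * (1.0 + erf((threshold_db - stimulus_db) / (sd * sqrt(2.0))))
    return fp + (1.0 - fp - fn) * p              # mix in response errors
```

A very dim stimulus is then seen with probability close to the false-positive rate, and a very bright one with probability close to 1 minus the false-negative rate.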

^{20} and the Haag-Streit dynamic test strategy, which we denote as ZEST, TOP-like, and dynamic-like, respectively (the exact implementations of the dynamic test strategy and TOP are not public and may differ slightly from our implementations).

^{11} Here we computed *max_d* by calculating the greatest difference in threshold sensitivity between a location and any of its eight adjacent locations (ignoring the blind-spot locations 26 and 35). Thus, high *max_d* values indicate locations at scotoma borders, whereas low values indicate locations in uniform regions.
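The *max_d* computation just described can be sketched over a 2-D threshold grid, with NaN marking blind-spot and non-field entries; the grid layout and names are ours.

```python
import numpy as np

def max_d(field):
    """max_d at every location: the greatest absolute difference in threshold
    between a location and any of its eight adjacent locations. `field` is a
    2-D array of thresholds; NaN marks blind-spot / non-field entries."""
    padded = np.pad(field.astype(float), 1, constant_values=np.nan)
    out = np.full(field.shape, np.nan)
    for r in range(field.shape[0]):
        for c in range(field.shape[1]):
            if np.isnan(field[r, c]):
                continue                          # skip non-field entries
            window = padded[r:r + 3, c:c + 3].copy()
            window[1, 1] = np.nan                 # exclude the location itself
            diffs = np.abs(window - field[r, c])
            if not np.all(np.isnan(diffs)):
                out[r, c] = np.nanmax(diffs)
    return out
```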

*U* test, *P* > 0.05). The median RMSE of the dynamic-like strategy was 2.3 dB (significant difference with SEP, Mann-Whitney *U* test, *P* < 0.001). The median number of stimulus presentations was 64 in SEP, 98 in ZEST, and 142 in the dynamic-like strategy (significant differences, Mann-Whitney *U* test, *P* < 0.001). The TOP-like strategy had a significantly lower number of stimulus presentations than all other algorithms, but also a significantly higher median RMSE of 4.8 dB (Mann-Whitney *U* test, *P* < 0.001).

*U* test, *P* > 0.05) and 3.5 dB in the dynamic-like strategy (significant difference with SEP, Mann-Whitney *U* test, *P* < 0.01). The median number of stimulus presentations was 113 in SEP, 123 in ZEST, and 146 in the dynamic-like strategy (significant difference, Mann-Whitney *U* test, *P* < 0.001). The TOP-like strategy had a significantly lower number of stimulus presentations than the other algorithms, but also a significantly higher median RMSE of 5.8 dB (Mann-Whitney *U* test, *P* < 0.001). In SEP, there is no clear dependency of RMSE on MD, whereas the number of stimulus presentations correlates well with MD, especially for MD values between 0 and −14 dB. This is expected, as SEP was optimized to provide the same accuracy level for any subject by adapting its speed. The relationship between RMSE and MD, however, reveals that the median error is highest in visual fields with an intermediate MD of −14 dB, which correspond mostly to heterogeneous visual fields (see 4).

*t*-test, *P* < 0.05). The mean error of tested locations in SEP was 0.23 dB (SD = 3.3), compared to −0.0016 dB (SD = 4.5) for untested locations. The error distributions of tested and untested locations in SEP differed significantly (two-sample *t*-test, *P* < 0.001). The average number of tested locations per visual field in SEP was 39 (SD = 5.1) out of 54.

*max_d* values among (a) tested locations and (b) inferred (untested) locations. Additionally, the errors of tested (c) and untested (d) locations are displayed as a function of *max_d*.

*U* test, *P* < 0.001).

In *SEP_entropy*, the next location to test is picked by using only the entropy measure (*α* = 0), and in *SEP_random*, locations are picked randomly. Note that *SEP_random* tests locations as in ZEST but still infers the untested locations by using the CRF. The simulations were performed on a test set of glaucomatous visual fields, and each visual field was measured five times.

^{21} This can be interpreted as a smart smoothing technique, in which SEP does not allow information propagation through the already tested locations, avoiding the smoothing of thresholds at locations that have been thoroughly tested.

^{22} as well as by the fact that if a uniform PMF were used, this behavior would not be observed. This effect is stronger when a smaller *σ* is used for the Gaussian smoothing kernel (i.e., the peak at −1 is more pronounced) and can be prevented by using a higher *σ*, but this has been shown to increase the acquisition time significantly. As such, research into a more appropriate approach to modeling the prior probabilities could further improve performance for visual fields with high variability.

*max_d* value higher than 12 dB. This indicates that most locations at scotoma borders (high *max_d*) are in fact being tested. Looking at the errors, it can be seen that for tested locations the error is generally not higher at scotoma borders, where *max_d* is high. Among untested locations, the error is generally higher than in tested locations (as already observed in Fig. 4f), but without an error increase as a function of *max_d*. This indicates that in the rare cases where a location with a high *max_d* remains unmeasured, it does not significantly increase the error of our algorithm.

Lastly, it can be seen that ZEST eliminates errors faster when using a CRF to get intermediate estimates for untested locations (*SEP_random*). This is to be expected, since the estimates for untested locations in ZEST are normative values that can be uninformative in some cases.^{11} Further analysis would be helpful to better understand the influence of the neighborhood model on the performance.

**D. Wild**, None;

**Ş.S. Kucur**, Haag-Streit Foundation (F);

**R. Sznitman**, Haag-Streit Foundation (F)

*Invest Ophthalmol Vis Sci*. 2013; 54: 6694–6700.

*Graefes Arch Clin Exp Ophthalmol*. 1994; 232: 509–515.

*Ophthalmology*. 1979; 86: 1302–1312.

*Ophthalmology*. 1993; 100: 949–954.

*Vision Res*. 1994; 34: 885–912.

*Ger J Ophthalmol*. 1995; 4: 25–31.

*Acta Ophthalmol Scand*. 1997; 75: 368–375.

*Perimetry Update, 1996/1997: Proceedings of the XIIth International Perimetric Society Meeting*. Vol 1997. Würzburg, Germany: Kugler Publications; 1996: 119–123.

*Vision Res Sup Jermov*. 1996; 36: 88.

*Invest Ophthalmol Vis Sci*. 2014; 55: 3265–3274.

*Transl Vis Sci Technol*. 2016; 5: 7.

*Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*. Washington DC: IEEE Computer Society; 2004: 695–702.

*J Vis*. 2012; 12 (11): 22.

*Bell System Tech J*. 1948; 27: 379–423.

*Invest Ophthalmol Vis Sci*. 2014; 55: 2350–2357.

*Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence*. San Francisco, CA: Morgan Kaufmann Publishers Inc. 1999: 467–475.

*Vision Res*. 2009; 49: 2157–2163.

*Exp Eye Res*. 2012; 102: 70–78.

*Invest Ophthalmol Vis Sci*. 2000; 41: 417–421.

*Invest Ophthalmol Vis Sci*. 2003; 44: 1962–1968.

*Scand J Stat*. 1994; 21: 375–387.

*Optom Vis Sci*. 2005; 82: 43–51.

*Digital Image Processing*. 2nd ed. Boston, MA: Addison-Wesley Longman Publishing Co., Inc. 2001.

*M_G(i)* is computed from the current visual field estimate **E** constructed from the CRF visual field estimate. Let **E** be an 8 *×* 9 matrix, where *x^t* represents the median of the PMF at location *i*, and matrix entries that do not correspond to a visual field location are padded with zeros, as in the study by Gonzalez and Woods.^{23}

*M_G(i)* is computed by using two 3 *×* 3 kernels, which are convolved with the visual field estimate **E** to get approximations of the horizontal and vertical derivatives, respectively. *M_G(i)* can then be computed as
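The two-kernel computation above can be sketched as follows. Sobel kernels (the classic choice in Gonzalez and Woods) and a gradient-magnitude combination are assumptions here, since the exact kernels and the combining equation are not shown in this excerpt.

```python
import numpy as np

# 3 x 3 derivative kernels; Sobel is the classic choice in Gonzalez and Woods,
# but the paper's exact kernels are not shown in this excerpt.
SOBEL_X = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def convolve2d_same(image, kernel):
    """Minimal same-size 2-D convolution with zero padding (the appendix pads
    non-field entries with zeros)."""
    k = np.flipud(np.fliplr(kernel))
    padded = np.pad(image, 1)
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * k)
    return out

def m_g(estimate):
    """Neighborhood heterogeneity from horizontal and vertical derivative
    approximations; combining them as a gradient magnitude is an assumption,
    since the text's own combining equation is not shown."""
    gx = convolve2d_same(estimate, SOBEL_X)
    gy = convolve2d_same(estimate, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```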

*update_fp*, *update_fn*, *localStopVal*, *nIter*, *α*, *δ*, and *globalStopVal* (Table), while fixing the above parameters at the values found in the first step. The parameters that minimized the mean number of stimulus presentations while not exceeding the mean acquisition error of the dynamic-like test strategy were chosen.
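The selection rule of this optimization step can be sketched as a constrained grid search: minimize the mean number of presentations subject to the error budget set by the dynamic-like strategy. The grids, the `evaluate` callback, and the budget below are placeholders, not the paper's actual search space.

```python
import itertools

def choose_parameters(candidates, evaluate, error_budget):
    """Return the setting minimizing mean presentations subject to the mean
    error not exceeding the dynamic-like strategy's error. `evaluate` must
    return (mean_error, mean_presentations) for a setting; in practice it
    would wrap the full simulation and is a placeholder here."""
    best, best_n = None, float("inf")
    for setting in candidates:
        err, n_pres = evaluate(setting)
        if err <= error_budget and n_pres < best_n:
            best, best_n = setting, n_pres
    return best

# Hypothetical grid over alpha and delta only, for illustration; the text's
# second step also sweeps localStopVal, nIter, globalStopVal, etc.
grid = list(itertools.product([0.5, 1.0, 2.0], [0.01, 0.05, 0.1]))
```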