Gary Lee, Mary Durbin, Sophia Yu, Luke Chong, John Flanagan, Carol Cheung, Tin Aung, Tien Yin Wong, Ching Yu Cheng, Aiko Iwase, Makoto Araie, Thomas Callan; Performance of simulated visual fields using structure-derived prior information. Invest. Ophthalmol. Vis. Sci. 2020;61(9):PB0037.
In this preliminary study, we evaluated the performance of using structure-derived visual field priors (S-priors) for simulated visual fields (VFs).
Retrospective data from 1399 subjects (one eye per subject) from a Singapore population study were used. HFA™ II-i (ZEISS, Dublin, CA) SITA Standard 24-2 VFs and CIRRUS™ HD-OCT (ZEISS, Dublin, CA) 200x200 Optic Disc cubes were analyzed. A randomly chosen 70% of eyes was used to train two regressors to predict the VF: i) a random forest (RF) using the 256-point circumpapillary retinal nerve fiber layer (RNFL) data and age; ii) a simplified mixed-scale dense convolutional neural net (CNN) [Pelt et al. PNAS 2018; 115(12)] using the RNFL thickness map. The remaining 30% of eyes served as a test set, both to predict S-priors and to provide true input fields to a VF simulator.

The simulator implemented a Bayesian ZEST using a bi-modal starting probability density function (PDF), as described previously [Chong et al. OPO 2015; 35(2)], centering the normal mode on age-normal values derived from 118 eyes described previously [Flanagan et al. IOVS 2016; 57(12)]. ZESTs using a uni-modal PDF for custom priors centered on each type of S-prior were also simulated (ZEST-RF, ZEST-CNN). Slopes of frequency-of-seeing responses were modeled as previously described [Henson et al. IOVS 2000; 41(2)]. False answer rates were set to 0%, 5%, and 20% to represent 3 responder types.

Performance of simulated versus true VFs was evaluated using mean absolute error (MAE) and the total number of stimulus questions. Two one-sided, paired t-tests (α=0.05) for inter-strategy equivalence versus ZEST were performed using equivalence limits of ±0.5 dB for MAE and ±5% for total questions.
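As a rough illustration of the RF regressor described above, the sketch below fits a multi-output random forest mapping circumpapillary RNFL thickness plus age to 24-2 VF thresholds. The data here are synthetic stand-ins, and the array shapes, feature values, and hyperparameters are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 100 eyes, 256 circumpapillary RNFL
# thickness samples plus age as features; 52 non-blind-spot 24-2
# VF thresholds (dB) as the multi-output target.
n_eyes, n_rnfl, n_vf = 100, 256, 52
X = np.hstack([rng.normal(90, 10, (n_eyes, n_rnfl)),   # RNFL (µm)
               rng.uniform(40, 80, (n_eyes, 1))])      # age (years)
y = rng.normal(28, 3, (n_eyes, n_vf))                  # VF thresholds (dB)

# RandomForestRegressor handles multi-output targets natively, so a
# single model predicts all 52 VF locations at once.
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X, y)
s_prior = rf.predict(X[:1])   # structure-derived prior for one eye
```

The predicted 52-value field (`s_prior`) is what would seed the custom starting PDF of a ZEST-RF run.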
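The ZEST procedure at a single VF location can be sketched as follows. This is a minimal illustration, not the implementation used in the study: the threshold domain, frequency-of-seeing slope, stopping rule, and bimodal prior parameters are all assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import norm

def zest(true_thresh, prior, domain, fp=0.05, fn=0.05,
         slope=2.0, sd_stop=1.5, max_trials=20, rng=None):
    """Minimal ZEST sketch for one VF location (illustrative only).

    domain : candidate threshold values (dB); prior : probability mass
    over domain; fp/fn : false-positive / false-negative answer rates;
    slope : SD of the cumulative-Gaussian frequency-of-seeing curve.
    """
    if rng is None:
        rng = np.random.default_rng()
    pdf = prior / prior.sum()
    for _ in range(max_trials):
        stim = float((pdf * domain).sum())        # next stimulus = PDF mean
        # simulated observer: dimmer (higher-dB) stimuli are seen less often
        p_seen = fp + (1 - fp - fn) * norm.cdf(true_thresh - stim, scale=slope)
        seen = rng.random() < p_seen
        # Bayes update using the same FOS model as the likelihood
        like = fp + (1 - fp - fn) * norm.cdf(domain - stim, scale=slope)
        pdf = pdf * (like if seen else 1 - like)
        pdf = pdf / pdf.sum()
        mean = (pdf * domain).sum()
        if np.sqrt((pdf * (domain - mean) ** 2).sum()) < sd_stop:
            break
    return float((pdf * domain).sum())

domain = np.arange(0.0, 41.0)
# bimodal starting PDF: abnormal mode near 5 dB, normal mode near 28 dB
prior = norm.pdf(domain, 5, 4) + norm.pdf(domain, 28, 4)
est = zest(true_thresh=27.0, prior=prior, domain=domain,
           rng=np.random.default_rng(1))
```

Replacing `prior` with a narrow uni-modal PDF centered on a structure-derived prediction is what distinguishes ZEST-RF and ZEST-CNN from the baseline ZEST, and is why a good prior reduces the number of stimulus questions needed before the stopping rule fires.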
Mean VF mean deviations were -1.8±2.4 dB and -2.7±2.7 dB for the training and test sets, respectively (p<0.001). The models predicted higher thresholds more accurately than lower thresholds (Fig 1). Nonetheless, overall MAEs for ZEST-RF and ZEST-CNN were equivalent to ZEST (see Table 1). Total questions were reduced by 16-19% with S-priors.
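The equivalence comparison behind these MAE results, two one-sided paired t-tests (TOST) with ±0.5 dB limits, can be sketched as follows. The data are synthetic and the helper name is hypothetical; equivalence is declared only when the mean paired difference is significantly above the lower limit and significantly below the upper limit.

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, low, high, alpha=0.05):
    """Two one-sided paired t-tests (TOST) for equivalence of means."""
    d = np.asarray(a) - np.asarray(b)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() - low) / se            # H0: true diff <= low
    t_high = (d.mean() - high) / se          # H0: true diff >= high
    p_low = stats.t.sf(t_low, df=n - 1)      # reject if diff > low
    p_high = stats.t.cdf(t_high, df=n - 1)   # reject if diff < high
    return max(p_low, p_high) < alpha        # both rejected => equivalent

rng = np.random.default_rng(0)
mae_zest = rng.normal(2.0, 0.3, 200)             # hypothetical per-eye MAE
mae_rf = mae_zest + rng.normal(0.0, 0.1, 200)    # nearly identical strategy
equivalent = tost_paired(mae_rf, mae_zest, low=-0.5, high=0.5)
```

The same helper applied to per-eye total question counts, with limits of ±5% of the baseline mean, would mirror the second equivalence test in the Methods.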
The findings suggest that even models trained on limited data to predict VF priors from OCT can reduce the duration of a VF exam in this population while maintaining comparable overall error. With more data representative of a clinical population and more refined models, performance may improve further.
This is a 2020 Imaging in the Eye Conference abstract.
Fig. 1. Overview of S-prior models
Table 1. Summary of MAE and total stimulus questions