Hsin-Hao Yu, Stefan Maetschke, Bhavna Josephine Antony, Hiroshi Ishikawa, Gadi Wollstein, Joel S Schuman, Simon Wail; Estimating visual field functions in glaucoma patients using multi-regional neural networks on OCT images. Invest. Ophthalmol. Vis. Sci. 2019;60(9):1462.
To develop and validate a multi-regional neural network that combines features from the macular (MAC) and optic nerve head (ONH) regions, and to estimate visual field function parameters from OCT volumes.
Macular and ONH OCT images (Cirrus HD-OCT, Zeiss, Dublin, CA; 200x200x1024 samplings over 6x6x2 mm, downsampled to 64x64x128 voxels) were acquired from both eyes of 64 healthy and 222 glaucoma subjects over multiple visits (range: 1-28; median = 5.5), forming a dataset of 4000 scans. Automated perimetry (Humphrey visual field, SITA 24-2) was administered at each visit. Two deep neural networks with 3D convolutional filters were trained to estimate the visual field index (VFI) and the mean deviation (MD), using images from one region each. In addition, a third network was constructed to take both MAC and ONH images as inputs, using a fully connected late-fusion layer that combined the outputs of the two regional networks. The mean squared errors of VFI and MD were minimized using 80% of the data for training and 10% for validation. Performance was evaluated on 10% hold-out test sets in 5-fold cross-validation. Class activation maps (CAM) were used to localize the features extracted by the networks.
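The late-fusion design described above can be sketched in miniature with numpy. This is an illustrative toy, not the authors' implementation: the linear "encoders" stand in for the 3D convolutional branches, and all weight shapes and names are hypothetical; only the input voxel grid (128x64x64 after downsampling) and the two regression targets (VFI, MD) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def encode_region(vol, W):
    # Stand-in for a regional 3D-CNN branch: coarse average pooling to a
    # 4x4x4 grid, flattened and linearly projected to a 32-d feature vector.
    d, h, w = vol.shape
    pooled = vol.reshape(4, d // 4, 4, h // 4, 4, w // 4).mean(axis=(1, 3, 5))
    return pooled.ravel() @ W          # (64,) @ (64, 32) -> (32,)

def late_fusion_predict(mac_vol, onh_vol, W_mac, W_onh, W_head, b_head):
    # Fully connected late-fusion layer: concatenate the two regional
    # feature vectors and regress [VFI, MD] with a single linear head.
    feats = np.concatenate([encode_region(mac_vol, W_mac),
                            encode_region(onh_vol, W_onh)])
    return feats @ W_head + b_head     # (64,) @ (64, 2) -> (VFI, MD)

# Hypothetical random volumes matching the downsampled 128x64x64 voxel grid.
mac = rng.standard_normal((128, 64, 64))
onh = rng.standard_normal((128, 64, 64))
W_mac, W_onh = rng.standard_normal((2, 64, 32)) * 0.1
W_head = rng.standard_normal((64, 2)) * 0.1
vfi_md = late_fusion_predict(mac, onh, W_mac, W_onh, W_head, np.zeros(2))
print(vfi_md.shape)  # (2,)
```

Freezing each branch's output into a shared fully connected layer is what lets the fused model be compared directly against the single-region networks.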
The network performed well on the test set when trained only on ONH images: RMSE for VFI and MD were 12.53±2.30 (s.d.) and 4.28±0.78 dB, corresponding to Pearson correlations (r) of 0.89±0.02 and 0.87±0.03, respectively. Interestingly, the network was also able to estimate VF test results when trained on MAC images alone, at slightly worse performance: r for VFI and MD were 0.79±0.09 and 0.79±0.09, respectively (RMSE: 15.6±2.76 for VFI and 5.19±1.01 dB for MD). The network trained on both MAC and ONH images did not achieve significantly higher performance than the network trained on ONH alone (r for VFI and MD: 0.87±0.01 and 0.86±0.03; RMSE for VFI and MD: 12.86±2.38 and 4.36±0.95 dB; Fig. 1). CAM visualization indicated that the features learned by the networks were highly localized, concentrating on the RNFL and the photoreceptor bands. The combined MAC/ONH network concentrated more strongly on the photoreceptor bands in the macula (Fig. 2).
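The two performance measures reported above, RMSE and Pearson correlation, are standard and can be computed as follows. The numeric example uses made-up MD values for illustration only, not data from the study.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-square error between measured and estimated values,
    # as reported for the VFI and MD (dB) estimates.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pearson_r(y_true, y_pred):
    # Pearson correlation coefficient between measured and estimated values.
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Hypothetical measured vs. estimated MD values (dB), for illustration:
measured  = [-1.2, -5.0, -10.3, -2.4, -7.7]
estimated = [-1.0, -4.6, -11.1, -3.0, -7.2]
print(rmse(measured, estimated), pearson_r(measured, estimated))
```

In the cross-validated setting, these metrics would be computed on each fold's hold-out test set and then summarized as mean ± s.d. across the 5 folds.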
For estimating VFI and MD test results in glaucoma patients, structural information in the MAC and ONH is highly correlated and largely redundant; ONH images contain almost all of the relevant features.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.