We read with interest the article by Odden et al.,1 "Evaluation of Central and Peripheral Visual Field Concordance in Glaucoma."
The authors compared the percentage of abnormal points between central and peripheral visual field (VF) loss in patients with glaucoma and found that there were significantly more abnormal points in the peripheral field than in the central field among patients with earlier stages of VF loss. The definition of an abnormal point used in the study was "a sensitivity at least 6 dB below the expected value for that location," for both the central and peripheral fields. However, applying the same standard to both fields may underestimate the percentage of abnormal points detected in the central VF.
Visual responses in different regions of the retina depend on the size and spacing of photoreceptors, the density of ganglion cells, and the functional connections between the retina and the cortex. Because the fovea projects to a larger cortical surface than the periphery, the cortical magnification factor is also larger in this area.2 Johnson et al.3 reported that retinal sensitivity decreases rapidly from the fovea to 30° of eccentricity and continues to drop rapidly beyond 30°. Heijl et al.4 found that sensitivity deviations of less than 5 dB might be noteworthy near the fovea, whereas deviations of more than 10 dB might occur in the peripheral field before abnormality would be expected. Furthermore, Katz and Sommer5 demonstrated that the standard deviations of threshold values by location in automated VF testing increased uniformly from the center to the periphery. Therefore, using a 6-dB reduction to define an abnormal point in the central VF would be overly stringent. Applying the same definition to the peripheral VF would likewise be debatable, although that choice is constrained by the peripheral 60 screening test algorithm used in this study.
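The contrast we describe can be made concrete with a minimal sketch. The cutoff values below are illustrative assumptions loosely motivated by the cited reports (not values taken from the study or from any normative database), and the two functions are hypothetical names introduced only for this example:

```python
# Illustrative sketch (hypothetical cutoffs): a fixed 6-dB abnormality
# criterion versus an eccentricity-dependent one. All numbers here are
# assumptions for illustration, not published normative values.

def is_abnormal_fixed(deviation_db: float) -> bool:
    """Fixed criterion: at least 6 dB below the expected value."""
    return deviation_db >= 6.0

def is_abnormal_by_eccentricity(deviation_db: float,
                                eccentricity_deg: float) -> bool:
    """Hypothetical eccentricity-scaled criterion: smaller deviations
    count as abnormal near fixation; larger ones are required in the
    periphery, where normal variability is greater."""
    if eccentricity_deg <= 10:    # central field: ~5 dB may be noteworthy
        cutoff = 5.0
    elif eccentricity_deg <= 30:  # mid-periphery
        cutoff = 6.0
    else:                         # far periphery: larger variability
        cutoff = 10.0
    return deviation_db >= cutoff

# A 5-dB deviation at 4° eccentricity: missed by the fixed rule,
# flagged by the scaled rule.
print(is_abnormal_fixed(5.0))                 # False
print(is_abnormal_by_eccentricity(5.0, 4.0))  # True

# An 8-dB deviation at 50°: flagged by the fixed rule, but still within
# plausible normal variability under the scaled rule.
print(is_abnormal_fixed(8.0))                  # True
print(is_abnormal_by_eccentricity(8.0, 50.0))  # False
```

Under such a scheme, the fixed 6-dB rule both under-counts central abnormalities and over-counts peripheral ones, which is the direction of bias our letter raises.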