Abstract
Purpose:
A feedback mechanism in the post-natal developing eye uses visual cues to control axial elongation so that the eye achieves and maintains good focus, a process termed emmetropization. Here we present a model of how the human retina could use chromatic cues to determine the magnitude and sign of defocus in complex visual scenes integrated across the entire visible spectrum, that is, in hyperspectral images.
Methods:
We extended a model based on tree shrews (Gawne and Norton 2020) to the human eye, assuming that the activities of human medium- and long-wavelength-sensitive cones ("M+LWS") are summed into a single value. We applied this model to 26 hyperspectral images of real-world scenes (Chakrabarty and Zickler 2011). For each hyperspectral image we calculated the radially averaged spatial frequency spectra for both the short-wavelength-sensitive (SWS) and M+LWS cone classes at several levels of simulated defocus. We define the "hyperspectral drive" as the difference between the averaged signal amplitudes of the two cone classes, SWS - (M+LWS), at each spatial frequency.
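The following Python sketch illustrates the kind of computation described above; it is not the authors' implementation. The cone sensitivity curves, the Gaussian blur standing in for optical defocus, the assumed ~1 D chromatic difference of focus between cone classes, the sigma_per_d scaling, and the random cube standing in for a real hyperspectral scene are all illustrative assumptions.

import numpy as np

def radial_average_spectrum(img, n_bins=64):
    # Radially average the 2-D amplitude spectrum of an image.
    f = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    return np.array([f.ravel()[idx == i].mean() for i in range(n_bins)])

def gaussian_blur_fft(img, sigma_px):
    # Approximate defocus blur with a Gaussian applied in the frequency domain.
    if sigma_px <= 0:
        return img
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    otf = np.exp(-2.0 * (np.pi * sigma_px) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def hyperspectral_drive(cube, wavelengths, defocus_d, sigma_per_d=2.0):
    # cube: (H, W, B) hyperspectral image; wavelengths: (B,) in nm.
    # Toy Gaussian cone sensitivities (assumptions, not measured fundamentals).
    sws_w = np.exp(-((wavelengths - 440.0) / 30.0) ** 2)
    mlws_w = np.exp(-((wavelengths - 555.0) / 60.0) ** 2)
    sws_img = np.tensordot(cube, sws_w, axes=([2], [0]))
    mlws_img = np.tensordot(cube, mlws_w, axes=([2], [0]))
    # Longitudinal chromatic aberration modeled as a crude +/-0.5 D offset
    # between the SWS and M+LWS focal planes (illustrative value).
    sws_blur = gaussian_blur_fft(sws_img, sigma_per_d * abs(defocus_d + 0.5))
    mlws_blur = gaussian_blur_fft(mlws_img, sigma_per_d * abs(defocus_d - 0.5))
    return radial_average_spectrum(sws_blur) - radial_average_spectrum(mlws_blur)

# Toy usage with a random "scene" standing in for a real hyperspectral image.
rng = np.random.default_rng(0)
cube = rng.random((128, 128, 31))
wavelengths = np.linspace(400.0, 700.0, 31)
for d in (-2.0, 0.0, 2.0):
    drive = hyperspectral_drive(cube, wavelengths, d)
    print(f"defocus {d:+.1f} D -> drive at a low spatial frequency bin: {drive[1]:.3f}")

In the actual analysis the radial frequency axis would be converted to cycles per degree and the drive examined as a function of defocus at selected spatial frequencies, as in the Results below.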
Results:
Fig. 1 illustrates the hyperspectral drive as a function of image defocus for six representative spatial frequencies. At 0.25 and 0.5 cycles per degree (CPD), the drive is highly variable across scenes and inaccurate (it is not consistently zero at 0 D of defocus). At 1 CPD the drive function is less variable across scenes and more accurate. At 2 CPD the drive is even less variable and still accurate, but loses effectiveness beyond about ±2 D of defocus. At 10 CPD, the operating range of the drive function shrinks to less than ±1 D.
Conclusions:
Emmetropization likely uses multiple visual cues and, even for chromatic cues, probably integrates them across a range of spatial frequencies. However, this analysis suggests that there is a “sweet spot” for the use of chromatic signals in emmetropization, roughly in the range of 1-2 CPD, which lies within the resolution of the widely spaced SWS cones.
This is a 2021 ARVO Annual Meeting abstract.