To our knowledge, there have been no previous reports on how reduced resolution affects the perception of blur in patients with CVL or other visual impairments, with the exception of a case study of cataracts removed in adulthood28 and a study of patients undergoing routine cataract surgery,29 both of which suggest long-term contrast adaptation in these patients. However, as we noted, cataracts reduce contrast in a low-pass fashion, unlike the reduced resolution, without an accompanying contrast reduction, that is associated with the sparse sampling of the peripheral retina. We have demonstrated that adaptation to blur and sharpness can be measured in patients with vision impairments, specifically CVL. Adaptation to both blurred and sharpened images was repeatable and showed individual variability among subjects with CVL, as shown previously in NS individuals.12 Although many subjects with CVL reported difficulty performing the test and believed they were doing poorly, most (12/14) of the subjects with CVL who participated in this study could perform the task. The two subjects who could not complete the test did not have worse vision than the others. Patients with vision impairments similar to those who participated in this study have difficulty performing common everyday tasks such as reading, driving, and watching television.30–33 Nevertheless, our subjects with CVL could differentiate blur from sharpness in the tested images. This is an important consideration both in designing image-enhancement devices and in correcting refractive errors for patients with CVL.
It is possible that, in our experiment, subjects with CVL used image-processing artifacts, rather than the intended blur and sharpness, as cues to differentiate the images. For example, even though the images were equalized for mean brightness, local brightness cues may have been available to the participants. More importantly, the method that we and others5 have used to process the images (changing the slope of the original image's spatial frequency spectrum, followed by adjustment of the overall RMS contrast) causes an unintended change at the very low end of the spatial frequency spectrum. Specifically, when creating blurred images, in addition to the intended reduction of contrast at high frequencies, an unintended increase in contrast of the low-frequency content is also created. Similarly, a contrast decrease at low frequencies is a consequence of increasing high-frequency contrast when sharpening the images. This change in spatial content might have been used by our subjects to differentiate between blurred and sharpened images. Even if these artifacts allowed discrimination of the differences, it is not clear that they would lead to any change in the PSN. Nevertheless, this could be considered a limitation of studies using blurred and sharpened image processing until an alternative solution is found.
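To illustrate why this low-frequency artifact arises, the following is a minimal sketch, in Python with NumPy, of the kind of spectral-slope manipulation described above. The function name, the parameterization of the slope change (here delta_s), and the RMS normalization details are assumptions made for illustration, not the exact pipeline used in this or the cited studies; the key point is that restoring overall RMS contrast after attenuating high frequencies necessarily boosts the remaining low-frequency contrast.

```python
import numpy as np

def slope_filtered_image(img, delta_s):
    """Illustrative sketch (not the study's exact code): change the slope of
    the image's amplitude spectrum by delta_s (negative = blur, positive =
    sharpen), then restore the original overall RMS contrast."""
    img = np.asarray(img, dtype=float)
    mean_lum = img.mean()
    spectrum = np.fft.fft2(img - mean_lum)

    # Radial spatial frequency of every FFT sample (cycles/pixel)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    freq = np.hypot(fx, fy)
    freq[0, 0] = 1.0  # avoid 0**negative at DC; the DC term is ~0 anyway

    # Multiplying the amplitude spectrum by freq**delta_s tilts the
    # log-log spectral slope by delta_s.
    filtered = np.real(np.fft.ifft2(spectrum * freq ** delta_s))

    # Restore the overall RMS contrast of the original image.
    # For delta_s < 0 (blur), this rescaling raises low-frequency contrast
    # above that of the original; this is the unintended artifact
    # discussed above.
    filtered *= (img - mean_lum).std() / filtered.std()
    return filtered + mean_lum
```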
We hypothesized that subjects with CVL, despite their long-term neural blur (reduced resolution), would adapt to image blur similarly to the adaptation found at the fovea of NS controls. Indeed, we found that short-term blur adaptation takes place despite the long-term neural (low-resolution) blur in subjects with CVL, as perception of best focus in subjects with CVL was similar to that of subjects with normal vision. There was no difference between the CVL and NS groups in the perceived normal image when adapted to the original image (YIntercept) or in the gain (slope) of the blur adaptation curve. Attention to low-resolution images (from long-term use of peripheral vision, the PRL) does not change the apparent focus in subjects with CVL and thus does not require adaptation. Our results also indicated that the short-term blur adaptation is not a change in criterion, as the null stimulus for the adaptation (i.e., the spatial-spectral slope that does not produce a blur aftereffect) is also roughly the same stimulus that appears subjectively in focus foveally.34
The CVL group had adaptation curve peaks (XBlurPeak and XSharpPeak, beyond which the magnitude of the PSN would not increase as the adapting stimulus was made blurrier or sharper) at more extreme adapting-stimulus levels than subjects in the NS group in this study. However, there was no difference in the proportion of subjects with CVL whose peaks fell outside the measurement range when compared with a larger sample of subjects with normal sight.12 The adaptation level to the most blurred image tested in this study showed greater PSNs for subjects with CVL than for subjects with normal vision tested in the periphery. This difference was not found for sharpness. Overall, the short-term adaptation to blur of the subjects with CVL was very similar to the foveal and peripheral adaptation of individuals with NS. The experience of reduced resolution consequent to using the peripheral retina is often described as blur by patients with CVL. However, in an unrelated, unpublished study in which CVL was experimentally induced with simulated scotomas and a gaze-contingent display system in subjects with normal sight, all the participants described their vision as having reduced resolution, not as appearing blurred. Thus, even though patients with CVL use the term “blur” to describe their vision, this may simply reflect a lack of vocabulary to distinguish between blur and reduced resolution.
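For readers unfamiliar with the adaptation-curve parameters referred to throughout this discussion (YIntercept, gain, XBlurPeak, XSharpPeak), the following is a minimal descriptive sketch. It assumes a simple linear relation between the adapting spectral-slope change and the resulting PSN that saturates beyond the two peaks; the actual fitting procedure used in the study may differ, and the parameter values shown are illustrative only.

```python
import numpy as np

def psn_model(ds_adapt, y_intercept, gain, x_blur_peak, x_sharp_peak):
    """Descriptive sketch of the adaptation curve: the point of subjective
    normality (PSN) shifts linearly with the adapting slope change
    (y_intercept, gain) and stops changing beyond the blur/sharp peaks.
    Assumed functional form, for illustration only."""
    ds_effective = np.clip(ds_adapt, x_blur_peak, x_sharp_peak)
    return y_intercept + gain * ds_effective

# Illustrative parameter values only (not measured data):
adapting_levels = np.linspace(-1.0, 1.0, 9)   # blurred ... sharpened adaptors
psn = psn_model(adapting_levels, y_intercept=0.0, gain=0.3,
                x_blur_peak=-0.75, x_sharp_peak=0.75)
```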
No significant correlations were found between any of the individual demographic or visual parameters and the raw adaptation values or the gain of the adaptation function in subjects with CVL. Our results did not show differences in adaptation between the patient and control groups despite a significant difference in age between the groups (median age of 61 years for subjects with CVL versus 25 years for NS subjects). Furthermore, no correlation between age and adaptation was found in either group. That result may appear to differ from that reported by Elliott et al.,35 who found a small difference when comparing central vision adaptation in a group of young adults (mean age, 25 years) and a group of older adults (mean age, 74 years). The greater within-individual variability found in our study, a consequence of testing subjects who use their peripheral vision, likely accounts for this difference between the studies. No differences in adaptation were found between the sexes. In addition, no correlations were found between adaptation and the habitual PRL, despite significant variability in PRL stability and location among the subjects with CVL. This was the case even when subject CVL3, who had very poor fixation stability, was excluded from the analyses. Grouped data showed that YIntercept adaptation values did not differ from those of NS control subjects. Thus, the image that appears normal to patients with CVL after adapting to the “normal” original image would be a normal image, unlike the case of the myopic, non-CVL subject with long-term blur adaptation (Fig. 7). In addition, when NS subjects adapted using peripheral vision, they showed values very similar to those of the subjects with CVL using their PRL; adaptation was not stronger in the periphery.
When we tested an NS subject who was long-term adapted to dioptric blur because he did not wear his glasses, the adaptation curve measured with full refractive correction showed a significant shift toward blurred images being perceived as in focus (Fig. 7). This shift indicates that the subject had experienced long-term adaptation to defocus blur, which resulted in a blurred retinal image being perceived as normal. Adaptation to defocus blur is different from the adaptation to the low-resolution, neural “blur” experienced by patients with CVL. When another NS subject was adapted to different types of optical blur (Fig. 8), that subject showed no indication of long-term adaptation to the blur used in either condition, as the YIntercept was shifted toward sharpness in both conditions. This finding suggests that, unlike the subject who was habitually uncorrected and therefore exposed to long-term defocus blur (Fig. 7), the amount of time this other subject (Fig. 8) adapted to blur (1 hour) was not sufficient to produce the long-term adaptation required to show an effect with the short-term adaptation paradigm used in our study.
When NS subjects were freely looking at the processed images, they showed adaptation levels similar to those previously described.5,12 Apart from differences in XBlurPeak and Gain, there were no differences between the free-viewing (central) and eccentric-viewing (peripheral) adaptation conditions in our subjects. Broadly, this finding is consistent with data reported by Haber et al.21 for three subjects. As the perception of blur and sensitivity to blur (and sharpness) vary with retinal eccentricity,10,17 it might be expected that adaptation to blur and sharpness would also vary with eccentricity. However, the lack of an effect of eccentricity on adaptation in our study may reflect a higher-level normalization of the perception of blur and sharpness,36 driven by a greater change in neural gain control in the periphery that produces greater suppression of sensitivity to blur there, rather than by a change in criterion.34
Both groups of study participants also adapted to sharpened images as part of the procedure. Subjects with CVL did not show significantly more adaptation to sharpness than NS subjects and did not seem to adapt to enhanced images differently. The main difference in the adaptation values between NS and CVL subjects was that subjects with CVL had blur and sharp peaks at more extreme values of the adapting stimulus. We therefore recommend that, in future studies measuring adaptation with this paradigm in subjects with poor vision, the adaptation images extend to levels beyond Δs = ±0.75.
Short- and long-term aftereffects of adaptation to enhanced (sharpened) images12,37,38 may have significant implications for low vision rehabilitation options such as image enhancement. If patients adapt to the level of enhancement in a display (e.g., a television or head-mounted display), the benefits of the enhancement could be diminished, as the images may no longer be perceived as enhanced. On the other hand, adaptation to sharpness has potentially beneficial effects. If patients adapt to the enhancement, the displayed images could appear more natural to them (not artificially distorted), making it more likely that they, as well as others with normal vision who may be using the display (e.g., a television) at the same time, will accept the enhancement. The individual variability found in the preferred level of enhancement of displayed images, even when controlling for VA or impairment,37,38 may be a consequence of individual differences in adaptation to various sharpness levels.12 Adaptation to blur in patients with CVL may also influence their tolerance of blur, with consequent implications for the prescription of visual aids.