Visual Psychophysics and Physiological Optics  |   September 2012
Image Jitter Enhances Visual Performance when Spatial Resolution Is Impaired
Author Affiliations & Notes
  • Lynne M. Watson
    Department of Life Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Niall C. Strang
    Department of Life Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Fraser Scobie
    Department of Physics, Durham University, Durham, United Kingdom
  • Gordon D. Love
    Department of Physics, Durham University, Durham, United Kingdom
  • Dirk Seidel
    Department of Life Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Velitchko Manahilov
    Department of Life Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
Investigative Ophthalmology & Visual Science September 2012, Vol.53, 6004-6010. doi:10.1167/iovs.11-9157
      Lynne M. Watson, Niall C. Strang, Fraser Scobie, Gordon D. Love, Dirk Seidel, Velitchko Manahilov; Image Jitter Enhances Visual Performance when Spatial Resolution Is Impaired. Invest. Ophthalmol. Vis. Sci. 2012;53(10):6004-6010. doi: 10.1167/iovs.11-9157.

Abstract

Purpose: Visibility of low–spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulation of visual objects could enhance the performance of low vision patients, who primarily perceive images of low–spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment.

Methods: Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with durations of 100 or 166 ms and amplitudes within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment.

Results: Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals.

Conclusions: Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

Introduction
To successfully interact with the visual world, humans detect and recognize complex patterns of light and dark. This information is limited for approximately 124 million people worldwide who have a reduced spatial resolution that cannot be improved by conventional corrective devices. 1 Such visual impairment affects many essential everyday tasks, including reading and face recognition. 2 Visual aids, which work on the principle of magnifying the visual object of regard to compensate for the reduction in clarity, have been the conventional method of improving vision in these patients. This method can often be useful, but the resulting magnification effects can create problems with spatial misrepresentation and information sampling. 3 Therefore, effective aids based on a different principle would be of great value. 
Individuals with normal vision see low–spatial frequency images better when the image contrast flickers at around 5 to 10 Hz compared with equivalent stationary images. 4 These data suggest that enhancing visual sensitivity by modulating the contrast of visual objects could be a promising approach to improve the performance of people with low vision who primarily perceive images comprised of low spatial frequencies (e.g. Fig. 1B). Indeed, improvement of contrast sensitivity to full-field flicker was found in visually impaired patients. 5 The improvement in vision, however, may come at a cost as the use of repetitive modulations of image contrast is not a typical everyday condition. Additionally, prolonged viewing of such cyclic temporal modulations may cause discomfort and neural habituation. 6  
Figure 1. 
 
A text sample and normalized word recognition speed in the presence of image jittering for visually impaired observers. (A) A text sample as presented on the screen. (B) A low-pass spatial frequency filtered text sample demonstrating how individuals with central visual impairment might perceive the stimulus. (C) Normalized word recognition speed (NRS; NRS % = 100[RSm − RSs]/RSs, where RSm and RSs denote the word recognition speeds for modulated and stationary text, respectively) averaged across participants (n = 14) for temporally modulated text at various jitter amplitudes and interjitter intervals: 0.5°/166 ms, 1°/166 ms, 0.5°/100 ms, and 1°/100 ms. *Denotes the normalized word recognition speed is significantly different from zero at P < 0.05. Error bars represent SEM.
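The normalization used throughout the Results can be expressed as a one-line helper. A minimal sketch (the speeds below are illustrative values, not data from the study):

```python
def normalized_recognition_speed(rs_modulated, rs_stationary):
    """NRS % = 100 * (RSm - RSs) / RSs, where RSm and RSs are the word
    recognition speeds (words/minute) for modulated and stationary text."""
    return 100.0 * (rs_modulated - rs_stationary) / rs_stationary

# A reader who doubles their speed under jitter shows an NRS of 100%:
nrs = normalized_recognition_speed(40.0, 20.0)
```

A positive NRS indicates faster recognition with jitter; a doubling of speed corresponds to the roughly 101% average reported for the severe-loss group.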
To avoid these problems, we adopted an approach similar to that used by the visual system in people with good vision. When fixating on an object of interest, the eyes are constantly in motion, causing jittering of the image on the retina. Intriguingly, when these involuntary eye movements are eliminated in the laboratory, visual objects fade to a homogeneous field. 7,8 There are three types of involuntary eye movement: (1) tremor, an aperiodic motion with a frequency of approximately 90 Hz and an amplitude of less than 1 arcmin; (2) drifts, slow eye movements with an amplitude up to 5 arcmin; and (3) microsaccades, irregular jerk-like eye movements with amplitudes and frequencies in the range of 4 to 30 arcmin and 0.25 to 5 Hz. These eye movements are thought to counteract neural adaptation by jittering stationary images over the receptive fields of visual neurons, producing transient bursts of neural spikes. 6 This mechanism could determine—at least in part—the sustained activity of neurons with small-sized receptive fields, tuned to fine details, which form the parvocellular visual pathway. 9 For stationary stimuli with higher spatial frequencies, fixational microsaccades are of sufficient amplitude to induce sustained activity in these neurons, which lasts over the course of the stimulus presentation. The neurons from the magnocellular pathway have larger receptive fields and are sensitive to stimuli of low spatial frequencies and higher temporal frequencies. 9 These neurons respond transiently only to the onset and offset of stationary stimuli. The retinal image shifts, produced by fixational microsaccades, are not large enough to elicit neuronal activity during the exposure of stationary low–spatial frequency images. Retinal image shifts with larger amplitudes, however, could produce activation of magnocellular neurons during the stimulus exposure and thus enhance sensitivity to low–spatial frequency images. 
This led us to the hypothesis that those with central visual impairment could see better in the presence of induced retinal-image jitter. This idea appears incongruous as image jitter is usually associated with image degradation, not enhancement. Indeed, some studies have found impairment of reading speed 10 (Parish DH, Legge GE, unpublished observations, 1989) and visual acuity 11 in the presence of induced image jitter. Here we tested this counterintuitive hypothesis by measuring word recognition speed and accuracy of facial emotion discrimination in volunteers with AMD using stationary and jittering retinal images that resembled involuntary microsaccades, but with higher frequency and larger-than-normal amplitudes. 
The performance improvement with jitter in people with central visual loss could be related to stochastic repositioning of the image from the scotoma into peripheral functioning locations. As this suggests that people without central visual loss would not improve their performance with jitter, we also report the findings of a control experiment that measures word recognition speed in people with simulated low vision without central loss. 
In a further control experiment, we measured contrast sensitivity for discriminating orientation of gratings of 0.5 cyc/deg at various temporal modulations of stimulus contrast to gain direct evidence that temporal modulations improve visual sensitivity to low–spatial frequency stimuli in people with central visual loss. 
Methods
Participants
In total, 30 participants took part in this study, of whom 21 had AMD (see Supplementary Material and Supplementary Table S1). In control experiment 1, nine participants with normal vision, whose visual acuity was reduced to 1.2 LogMAR with Bangerter filters, were tested. Prior to the experiments, distance visual acuity (Bailey-Lovie chart 12 ) and contrast sensitivity (Pelli-Robson chart 13 ) were measured. All subjects passed the Mini-Mental State Examination, indicating no cognitive dysfunction. 14 Ethical approval was obtained from the Glasgow Caledonian University Ethics Board, and all tests were conducted in accordance with the tenets of the Declaration of Helsinki. 
Apparatus
Computer-generated stimuli were presented on a 19-inch RGB monitor (VisionMaster Pro 450; Iiyama, Oude Meer, The Netherlands) with a temporal resolution of 120 Hz, a spatial resolution of 1024 × 768 pixels, and a mean luminance of 30 cd/m2. A custom video summation device provided 256 grey levels and 12-bit precision. Software, written in Pascal for MS-DOS, was used to generate stimuli and control the experimental procedures. The 90% decay of the monitor phosphor signal (using white color and an image formed by one line) was approximately 1 ms. 
In experiments 1, 2, and control experiment 1, image jitter was produced by randomly presenting one of five stimuli, each containing the image preloaded at a different location in a separate page of video memory. 
In experiment 3, image jitter was produced by a monocular prototype of optoelectronic jitter goggles (see Supplementary Material and Supplementary Figs. S1 and S2), which can electronically displace the retinal image to one of four locations (forming a virtual square with 1.8° sides). The prototype contains two birefringent prisms that deflect the light passing through them by one of two perpendicular angles, depending on the polarization state of the light. Two ferroelectric liquid crystal polarization modulators are used to switch the polarization state rapidly, controlling the deflection angle. A similar principle has been used in an alternative guise to produce very fast switchable lenses for stereoscopic vision. 15  
Stimuli
Experiment 1 and Control Experiment 1. The stimuli consisted of 11 unrelated words of three, four, five, and six letters presented in four rows of 13 character spaces. The words were taken from a 302-word lexicon 17 and were presented in Courier font (Fig. 1A). The letters were dark (4.2 cd/m2) on a light background (30 cd/m2), giving a Weber contrast of 86%. The size of the letter “x” was 1.4° (height) by 1.6° (width) as viewed at 57 cm. The angular center-to-center spacing of horizontally adjacent letters was 2.12°. This angular size is equivalent to a visual acuity of 1.4 LogMAR (6/150 Snellen acuity). 16 In experiment 2, the viewing distance was halved (28 cm) for three participants (E, F, and M; see Supplementary Material and Supplementary Table S1) with binocular acuity below 1.4 LogMAR, because they experienced difficulties recognizing letters at 57 cm. 
Experiments 2 and 3. Facial stimuli were sourced from the Mac Brain set 17 and were cropped using a data analysis program (MATLAB; MathWorks, Natick, MA) to remove external cues such as hair and clothing. The faces were angry or happy. The mean height/width ± SD were 13.6 ± 0.8/8.5 ± 0.6° at a viewing distance of 57 cm (experiment 2) and 9.2 ± 0.6/5.8 ± 0.4° at a viewing distance of 85 cm (experiment 3). The contrast of each image was adjusted to produce a constant contrast energy of 0.092 deg2 (experiment 2) or 0.184 deg2 (experiment 3), calculated as the sum of the squared contrast of each image pixel, multiplied by the pixel area. Mean luminance was 30 cd/m2 in experiment 2 and 6 cd/m2 in experiment 3. The stimulus duration was 1.4 seconds. The stimulus onset and offset consisted of a gradual increase and decrease in luminance over 0.2 seconds, respectively. 
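The contrast-energy normalization described above can be sketched in a few lines (an illustrative implementation, not the authors' MATLAB code; the pixel contrasts and pixel area below are hypothetical):

```python
def contrast_energy(pixel_contrasts, pixel_area_deg2):
    """Contrast energy = sum of squared per-pixel contrast, multiplied
    by the pixel area in deg^2 (as defined in the Methods)."""
    return sum(c * c for c in pixel_contrasts) * pixel_area_deg2

def scale_to_energy(pixel_contrasts, pixel_area_deg2, target_energy):
    """Rescale pixel contrasts so the image reaches the target contrast energy."""
    gain = (target_energy / contrast_energy(pixel_contrasts, pixel_area_deg2)) ** 0.5
    return [c * gain for c in pixel_contrasts]

# Hypothetical 3-pixel "image" scaled to the experiment-2 target of 0.092 deg^2:
scaled = scale_to_energy([0.1, 0.2, -0.3], 0.01, 0.092)
```

Because energy is quadratic in contrast, a single multiplicative gain (the square root of the energy ratio) suffices to hit the target exactly.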
Control Experiment 2. The stimuli were sinusoidal luminance gratings (30 × 22° in size) of 0.5 cyc/deg, generated on a computer screen. They were oriented at 45° or −45° from vertical. Grating contrast was reversed sinusoidally in counterphase at temporal frequencies of 0, 5, 10, and 20 Hz. 
Procedures
Experiment 1. Participants were presented in random order with stationary and dynamic text samples. The stationary text was centered on the screen. The dynamic text jittered between the screen's center position and one of four positions at a polar angle of 45, 135, 225, or 315°. The image jitter amplitude was 0.5 or 1° visual angle and the interjitter duration was 166 or 100 ms. Participants were instructed to read the words aloud as quickly as possible. Each experimental condition was presented in a random order for 30 seconds. The first attempt at each condition was treated as a practice session and excluded from analysis. If the text sample was completed before the 30-second interval, another text sample was presented. The number of text samples in each experimental condition varied (range 1–4) depending on the reading speed. Word recognition ability was calculated as the number of words correctly identified per minute. Given that the words were unrelated, this experimental task is better characterized as word recognition rather than reading. 
Control Experiment 1. The experimental procedure was similar to that used in experiment 1. The image jitter amplitude was 1° and the interjitter duration was 500, 250, 100, 50, and 25 ms. 
Experiment 2. Each trial started with a warning tone of 0.2 seconds, followed by a 0.5-second gap and a stimulus of 1-second duration. Subjects reported verbally whether the presented face was angry or happy. The experimenter recorded the response by pressing an appropriate button. The experimental session involved 12 practice trials and 56 experimental trials. The stimuli were either stationary or jittering (as in experiment 1). Performance accuracy in each condition was measured using a discriminability index (d'), d' = z(H) − z(F), where z(H) and z(F) are the z-scores of the correct responses to happy faces (hits) and the errors made on angry faces (false alarms). The bias (C), reflecting a criterion shift, was evaluated as C = −[z(H) + z(F)]/2. 
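The d' and C formulas above map directly onto the inverse of the standard normal CDF. A sketch using Python's statistics.NormalDist (the hit and false-alarm rates are hypothetical, not the study's data):

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, false_alarm_rate):
    """d' = z(H) - z(F); C = -[z(H) + z(F)] / 2, where z is the inverse
    standard-normal CDF applied to the hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    zh, zf = z(hit_rate), z(false_alarm_rate)
    return zh - zf, -(zh + zf) / 2.0

# Symmetric performance (90% hits, 10% false alarms) gives zero bias:
d, c = dprime_and_bias(0.9, 0.1)
```

In practice, hit and false-alarm rates of exactly 0 or 1 must be adjusted (e.g., by a small correction) before applying the inverse CDF, since z is undefined at those extremes.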
Experiment 3. The experimental procedure was similar to that used in experiment 2. Participants viewed the stimuli with their dominant eye through the jitter goggles, adopting their habitual fixation behavior. 
Control Experiment 2. Contrast threshold for discriminating grating orientation was determined using a one-interval, two-alternative forced-choice procedure in conjunction with a staircase method designed to converge on 79% correct responses. Each trial started with a warning tone of 0.2 seconds, followed by a 0.5-second gap and a stimulus of 1-second duration. Subjects were instructed to report the perceived orientation of the gratings (45° or −45° from vertical) by pressing the appropriate mouse button. A feedback tone indicated when a response was incorrect. Each staircase started at a suprathreshold signal contrast with a contrast step of 0.2 log units. The stimulus contrast was decreased after three correct responses and increased after one incorrect response. After each staircase reversal, the step size was halved; this continued until the step size reached 0.05 log units. The subsequent six staircase reversals were then collected, and the threshold was taken as the geometric mean of these estimates. 
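The staircase logic described above can be sketched as follows. This is an illustrative simplification, not the authors' Pascal code: it halves the step at every reversal down to 0.05 log units and estimates threshold as the geometric mean of the last six reversal levels.

```python
class Staircase:
    """3-down/1-up staircase (converges on ~79% correct). Contrast is
    tracked in log10 units; the step starts at 0.2 log units and is
    halved at each reversal until it reaches 0.05 log units."""
    def __init__(self, start_log_contrast=-0.5, step=0.2, min_step=0.05):
        self.level = start_log_contrast
        self.step = step
        self.min_step = min_step
        self.correct_run = 0
        self.last_direction = 0   # +1 = contrast increased, -1 = decreased
        self.reversal_levels = []

    def update(self, correct):
        if correct:
            self.correct_run += 1
            if self.correct_run < 3:
                return                        # level changes only after 3 correct
            direction, self.correct_run = -1, 0   # 3 correct: decrease contrast
        else:
            direction, self.correct_run = +1, 0   # 1 error: increase contrast
        if self.last_direction and direction != self.last_direction:
            self.reversal_levels.append(self.level)   # a reversal occurred
            if self.step > self.min_step:
                self.step = max(self.step / 2.0, self.min_step)
        self.last_direction = direction
        self.level += direction * self.step

    def threshold(self):
        """Geometric mean (in linear contrast) of the last six reversals."""
        last = self.reversal_levels[-6:]
        return 10 ** (sum(last) / len(last))
```

Averaging log levels and exponentiating is equivalent to the geometric mean of the linear contrast values, matching the threshold measure described in the text.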
Statistical Analysis
The Shapiro-Wilk test for normality found the data for each experimental condition followed a normal distribution. This allowed the use of appropriate parametric tests. 
Results
Experiment 1
Word recognition ability was measured for 14 participants (mean age ± SD = 79 ± 4.4 years) with AMD (mean binocular visual acuity ± SD = 0.99 ± 0.39 LogMAR). Because the speed of word recognition for stationary text varied markedly across participants (10–74 words/minute), the speed in the presence of image jitter for each subject was normalized by the corresponding speed for stationary text. A repeated-measures ANOVA (jitter rate [2] × jitter amplitude [2]) did not reveal a significant main effect of any experimental condition on the normalized word recognition speed. One-sample t-tests with Holm-Bonferroni correction for multiple comparisons showed that the normalized speed of word recognition for each condition was significantly (t[13] > 3.03, P < 0.05) greater than zero (Fig. 1C). The average normalized speed across all conditions was 66 ± 9.4% (mean ± SEM). 
The effect of image jitter on the speed of word recognition decreased as the speed for stationary text increased. This is illustrated by a power function (Fig. 2) fitting the normalized word recognition speed data. A profound improvement was seen in participants with severe visual loss (binocular visual acuity > 1 LogMAR, mean ± SD = 1.31 ± 0.25 LogMAR, n = 8; Fig. 2, filled circles) whose normalized word recognition speed (101 ± 25 %) was significantly higher (t[9] = 2.946, P = 0.013, heteroscedastic t-test) than that (19 ± 9%) of participants with moderate visual loss (binocular visual acuity < 1 LogMAR, mean ± SD = 0.67 ± 0.16 LogMAR, n = 6; Fig. 2, empty circles). 
Figure 2. 
 
Word recognition speed normalized for each subject, averaged over the different image jitter parameters, as a function of word recognition speed for stationary text. Filled circles show data for participants with severe visual loss. Empty circles illustrate data for participants with moderate visual loss. Solid line represents the best-fitting power function: NRS = a/RSs^b; a = 7523, b = 1.61; R2 = 0.86.
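The power function shown in Figure 2 can be fit by ordinary least squares in log-log coordinates, since log NRS = log a − b log RSs. A sketch (the data below are synthetic, generated from the reported parameters, not the study's measurements):

```python
import math

def fit_power_law(rs_stationary, nrs):
    """Least-squares fit of NRS = a / RSs**b in log-log coordinates.
    Assumes all NRS values are positive."""
    xs = [math.log10(x) for x in rs_stationary]
    ys = [math.log10(y) for y in nrs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 10 ** (my - slope * mx), -slope   # (a, b)

# Synthetic data generated from the reported fit (a = 7523, b = 1.61):
speeds = [10.0, 20.0, 40.0, 74.0]
a, b = fit_power_law(speeds, [7523.0 / s ** 1.61 for s in speeds])
```

Note that a log-log fit requires strictly positive NRS values; participants whose jitter effect was negative would need a different fitting approach.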
Control Experiment 1
Word recognition speed was measured for nine participants (28 ± 9 years) whose visual acuity was reduced to 1.2 LogMAR using Bangerter filters. Repeated measures ANOVA found a main effect of interjitter interval (F[5,40] = 172, P < 0.001). Pairwise multiple comparisons showed that the speed of word recognition, compared with stationary text, was significantly improved for interjitter intervals of 500, 250, and 100 ms, while interjitter intervals of 50 ms and 25 ms produced a marked slowing of word recognition speed (Table). 
Table. 
 
Word Recognition Speed for Different Interjitter Intervals, Normalized by that for Stationary Text
Inter-Jitter Interval (ms)   Normalized Word Recognition Speed (%)       t        P
500                                       42                           5.235   <0.001
250                                       57                           5.493   <0.001
100                                       40                           3.876    0.001
 50                                      −58                           5.269   <0.001
 25                                      −89                           7.237   <0.001
Experiment 2
Performance accuracy (d') for discriminating facial emotions (Fig. 4C) was measured for 16 volunteers (mean age ± SD = 79 ± 4.6 years) with AMD (mean binocular visual acuity ± SD = 1.0 ± 0.37 LogMAR). Repeated-measures ANOVA revealed a main effect of temporal image modulation (F[4,60] = 24.4, P < 0.001). Pairwise multiple comparisons showed that d' for all conditions with jittering face images was significantly higher (t[30] > 7.210, P < 0.001, t-tests) than that for stationary stimuli. The differences between the data for various jitter parameters were not significant. The average d' for jittering stimuli increased by a factor of 2 compared with that of stationary stimuli. Performance improvement did not correlate with the visual acuity of the observers (correlation analysis, P = 0.36, R2 = 0.09). It should be noted that in this single-interval task, the response biases for each experimental condition were not significantly different from zero, implying that the observers were not biased toward happy or angry emotional face expressions. 
Figure 3. 
 
Word recognition speed as a function of interjitter interval duration for subjects (n = 9) with simulated low vision, using Bangerter filters to reduce acuity to 1.2 LogMAR. Word recognition speed improved for interjitter intervals of 500, 250, and 100 ms and was impaired for interjitter intervals of 50 ms and 25 ms. The upper horizontal axis shows fundamental temporal frequency calculated as 1/(2 × T), where T denotes the interjitter interval duration in seconds. Error bars illustrate SEM.
Figure 4. 
 
Example photographs of facial stimuli and sensitivity for discriminating emotions of face images for observers with AMD (n = 16). (A) A face expressing angry and happy emotions. Reprinted with permission from the Research Network on Early Experience and Brain Development, http://www.macbrain.org/resources.htm. (B) Low-pass spatial frequency–filtered images illustrating how people with central visual impairment might perceive face images. (C) Sensitivity (d') for discriminating angry and happy facial emotions for stationary (black bar) and jittering images (empty bars) at various jitter amplitudes and interjitter intervals as explained in Figure 1. *Denotes significantly different d' (P < 0.001) compared with stationary stimuli. Error bars represent SEM.
Experiment 3
Performance accuracy was measured for seven participants (mean age ± SD = 80 ± 3.2 years) with AMD (mean monocular visual acuity ± SD = 0.73 ± 0.27 LogMAR) using the face stimuli presented in experiment 2. Participants looked with the dominant eye at the stimuli through the jitter goggles (Fig. 5A) when they were either inactive (making the face stimuli stationary) or active (causing the face images to jitter randomly with an amplitude of 1.8 or 2.6° and interjitter interval of 166 ms). Performance accuracy (d') for jittering stimuli increased significantly (t[12] = 2.916, P = 0.01, pairwise t-test) by a factor of 2.8 compared with that for stationary stimuli (Fig. 5B). The response biases for stationary and jittering stimuli were not significantly different from zero. 
Figure 5. 
 
Performance of people with AMD (n = 7) for discriminating facial emotions using jitter goggles. (A) Illustration of the prototype of jitter goggles, which contained: birefringent prisms (a and c); ferroelectric liquid crystal polarization modulators (b and d); and a polarizing filter (e). The polarization modulators allow one of four possible deflection angles, produced by the prisms, to be selected. (B) Sensitivity (d') for discriminating angry and happy facial emotions for stationary (black bar) and jittering images (empty bar; amplitude of 1.8 or 2.6° and interjitter interval of 166 ms). *Denotes significantly higher d' from stationary stimuli at P < 0.005. Error bars represent SEM.
Control Experiment 2
In this experiment, seven participants (mean age ± SD = 76 ± 7.9 years) with AMD (mean binocular visual acuity ± SD = 1.1 ± 0.3 LogMAR) took part. Contrast sensitivity (the reciprocal of contrast threshold) as a function of temporal frequency had a band pass form (Fig. 6). Repeated-measures ANOVA showed a main effect of temporal frequency (F[1.36, 8.16] = 5.32, P < 0.05, Huynh-Feldt correction). Pairwise multiple comparisons found that contrast sensitivities for temporal frequencies of 5 Hz (1.9 ± 0.10 log units; 1/threshold = 80) and 10 Hz (2.0 ± 0.10 log units; 1/threshold = 100) were significantly higher (P < 0.05, Holm-Bonferroni correction) than that of stationary stimuli (1.7 ± 0.11 log units; 1/threshold = 50). 
Figure 6. 
 
Contrast sensitivity for discriminating grating orientation as a function of temporal frequency for subjects (n = 7) with AMD. *Denotes significantly higher differences in contrast sensitivity at P < 0.05. Error bars illustrate SEM.
Discussion
Our findings support the proposed hypothesis that induced retinal-image jitter improves the ability of those with central visual impairment to recognize words and discriminate facial emotions. 
In experiment 1, text jittering markedly improved word recognition speed for observers with severe central vision loss (visual acuity > 1 LogMAR), who recognized stationary words more slowly. This effect was reduced in observers with moderate central vision loss (visual acuity < 1 LogMAR), whose word recognition speed for stationary text was faster. 
An explanation for the different effects of image jitter could be related to the fact that reading speed depends on print size. Reading speed is known to increase as print size increases up to a critical print size, beyond which reading speed remains constant for a wide range of print sizes and then declines at extremely large print sizes. 18 In our study, we did not measure the critical print size for each subject, but the size of the text samples is likely to be below the critical print size for people with severe visual impairment, being up to twice the spatial resolution limit (“x” width of 1.6° corresponds to 1.28 LogMAR). It has previously been reported that reading rates are more dependent on contrast when letter size is below the critical print size. 19 Therefore, the enhanced word recognition speed in our subjects could be related to increased visibility of the text due to image jitter. The fundamental spatial frequency of the text samples (the reciprocal of the center-to-center character spacing 19 ) was 0.47 cyc/deg for a 57-cm viewing distance and 0.24 cyc/deg for a 28-cm viewing distance. Recognizing words requires a bandwidth of one octave in spatial frequency, extending upward from the fundamental frequency of the characters. 20 Therefore, the spectral information used to recognize words in the present study involves low–spatial frequency components within the range of 0.47 to 0.94 cyc/deg (57-cm viewing distance) and 0.24 to 0.48 cyc/deg (28-cm viewing distance). Such low–spatial frequency images are processed by peripheral retinal neurons with larger receptive fields in people with central field loss. The temporal modulations, elicited by the induced jitter of retinal images, could produce sustained neuronal responses leading to enhancement in the effective contrast of the text samples. 
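The one-octave band quoted above follows directly from the character spacing; as a worked check (not the authors' code):

```python
def word_recognition_band(center_spacing_deg):
    """The fundamental spatial frequency of text is the reciprocal of the
    center-to-center character spacing; word recognition draws on roughly
    one octave upward from that fundamental (i.e., up to twice it)."""
    f0 = 1.0 / center_spacing_deg
    return f0, 2.0 * f0

# 2.12 deg spacing at the 57-cm viewing distance:
low, high = word_recognition_band(2.12)
```

At 28 cm the angular spacing doubles to 4.24°, halving the band to roughly 0.24 to 0.48 cyc/deg, as stated in the text.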
People with moderate visual impairment showed faster speeds for recognizing stationary words, suggesting that the character size is closer to their critical print size. This finding could be related to the possibility that people with visual acuity better than 1.0 LogMAR use less peripheral parts of the retina than those with worse acuity, which in turn would lead to a reduction in critical print size. 21 Enhancement of apparent contrast would therefore have little effect on word recognition speed, as reading speed is less dependent on contrast when letter size is close to the critical print size. 18  
These findings suggest that the image jitter is an effective approach for people with severe visual field loss to enhance their word recognition speed without magnifying print size to the level of critical print size. Thus, some disadvantages of text magnification could be reduced: high text magnification affects field size and reduces visual span. 22 It must be noted that the word recognition speed, found for people with severe visual field loss in the presence of image jitter, is below or close to previously reported spot reading speed levels (40 words/minute). 23 It is possible that the enhancement of word recognition speed may be further improved if low vision patients underwent training sessions that employed the image-jitter approach. 
It has previously been reported that externally induced positional uncertainty of words impairs reading. Random fast (33 ms) displacements of text samples in both horizontal and vertical directions appeared to reduce reading speed compared with stationary text (Parish DH, Legge GE, unpublished observations, 1989). Another study 10 measured reading speed for single words presented at 4 and 8° of eccentricity whose position was randomly shifted every 40 ms along the radius of an arc on which the words were displayed. This induced positional uncertainty reduced reading speed compared with that without image jitter. These results suggested that the impaired fixation stability in low vision, which was found in people with newly developed 24 or longstanding macular degeneration, 25 might account for the reduced reading rates. Reducing retinal image instability, however, did not improve visual acuity in AMD. 11 Moreover, visual acuity was impaired when retinal image displacement had increased speed (by a factor of 10 compared with the speed of microsaccadic eye movements during fixation); enlarged amplitudes (mean amplitudes of 0.5° at an eccentricity of 5°) 26 ; and short interjitter intervals (24.8 ± 17.2 ms for speeds above 50°/second, as estimated from Fig. 4 11 ). 
The seemingly paradoxical findings of enhanced word recognition speed with image jitter found here and the reduced reading speed and visual acuity with image jitter found in the above-mentioned studies (Parish DH, Legge GE, unpublished observations, 1989) 10,11 can be explained by differences in the temporal characteristics of the image jitter. Our control experiment 1 showed that in simulated low vision conditions, jittering text with interjitter intervals of 500, 250, and 100 ms (1–5 Hz fundamental temporal frequencies) enhanced the speed of word recognition, while shorter interjitter intervals of 50 and 25 ms (10–20 Hz) impaired performance in comparison with that for stationary text (Fig. 3). These results are closely related to the band-pass characteristic of contrast sensitivity to stimuli of low spatial frequencies as a function of stimulus temporal frequency. Robson 4 found that the sensitivity for detecting a central flickering grating of 0.5 cyc/deg, compared with that at a temporal frequency of 0.5 Hz, increased by a factor of 6 at a 6-Hz flicker rate and reduced by a factor of 10 at a 30-Hz flicker rate. Virsu et al. 27 showed that when gratings of 0.75 cyc/deg were presented at different eccentricities, contrast sensitivity decreased as a function of eccentricity, but the temporal contrast sensitivity functions had a similar band-pass form with a peak at approximately 6 Hz. Georgeson 28 showed that the peak of the contrast sensitivity function at 8 Hz for 1.5 cyc/deg flickering gratings was also found in the above-threshold temporal modulation function using a contrast-matching method. Our control experiment 2 also provides direct evidence that contrast modulations of 5 to 10 Hz enhanced contrast sensitivity for discriminating orientation of 0.5 cyc/deg gratings in people with central visual loss (Fig. 6). Thus, the impaired reading rate 10 (Parish DH, Legge GE, unpublished observations, 1989) and visual acuity 11 could be due to reduced sensitivity of the visual system to high-frequency modulations (interjitter intervals of 24–40 ms) of retinal images. 
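The interval-to-frequency conversion used above, f = 1/(2T), can be sketched directly (illustrative only; the function name is ours, not from the study):

```python
def fundamental_temporal_frequency_hz(interjitter_ms):
    """f = 1/(2T): one full jitter cycle (displace and return)
    spans two interjitter intervals of duration T."""
    t_seconds = interjitter_ms / 1000
    return 1 / (2 * t_seconds)

# The five interjitter intervals tested in control experiment 1:
intervals_ms = [500, 250, 100, 50, 25]
freqs = [fundamental_temporal_frequency_hz(t) for t in intervals_ms]
# → [1.0, 2.0, 5.0, 10.0, 20.0] Hz: the 1–5 Hz conditions enhanced
# word recognition, while 10–20 Hz impaired it relative to stationary text.
```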
It is worth noting that one study 29 has suggested that fixation instability could improve the perception of an eccentrically fixated text stimulus. Normal subjects were asked to alternate fixation, every 3 to 4 seconds, between two dots spaced 10° apart, and to report changes in the perception of a peripheral letter. With steady fixation, subjects perceived a rapid fading effect that reduced letter recognition, while they experienced a transient enhancement of the letter immediately after the saccade. The authors suggested that patients with central scotomas would improve their perception of peripheral text if they performed saccades between different preferred retinal loci (PRLs). Such saccadic movements, however, cannot explain the elevation in word recognition speed found in our study, because saccades between different PRLs would have similar effects on stationary and jittering text. 
It could be argued that image jitter improves word recognition in the presence of a central visual field defect by stochastically moving the retinal image away from the central scotoma to a preferred peripheral retinal location. In control experiment 1, Bangerter filters produced an attenuation of high spatial frequencies (for illustration see Figs. 4A and B), simulating reduced visual acuity without the presence of a scotoma. The results show that improvement in visual performance still occurred in the jitter condition. This suggests that the positive effect of image jitter is not confined to patients with central field loss and that this technique may be applicable to a variety of ocular conditions. 
Experiments 2 and 3 showed that jitter elevated performance for facial emotion discrimination by a factor of 2 to 2.8. This improvement was achieved without magnification of the images. The results of experiments 1 and 2, in which images were displaced on a computer display, support the future development of software or hardware that adds jitter to computer screen images. The similar performance elevation found with the prototype jitter goggles (experiment 3) suggests that compact jitter goggles, based on the current prototype, could be a novel tool for improving the performance of low vision patients in various everyday activities without image magnification. Additionally, the optoelectronic method for displacing stationary images clearly shows that the performance improvement obtained by jittering images on a computer screen cannot be accounted for by persistence of the screen phosphor. 
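The paper does not restate its d' formula at this point; assuming the standard equal-variance signal detection index, d' = z(hit rate) − z(false-alarm rate), a factor-of-2.5 elevation would look like the following (the hit and false-alarm rates here are hypothetical, for illustration only):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Equal-variance signal detection sensitivity:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical discrimination rates (angry vs. happy), not data from the study:
stationary = d_prime(0.69, 0.31)   # d' ≈ 1.0
jittering = d_prime(0.93, 0.16)    # d' ≈ 2.5, a factor of ~2.5 elevation
```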
It is well known that involuntary microsaccades have primarily excitatory effects at all levels of the visual system 6 and they may play a role in enabling efficient neural representation of natural stimuli. 30,31 However, this strong modulation of neural responses was found for neurons sensitive to fine detail, but not for those selective for low–spatial frequency stimuli. 32,33 Our results suggest that frequent, large-amplitude induced jittering of retinal images produces sustained neuronal responses, which could be an efficient mechanism for processing the low–spatial frequency images normally perceived by those with central visual impairment. It would be interesting to analyze the effects of image jitter on the spatiotemporal characteristics of retinal images during natural fixation. Such information is important because the real input to the visual system in the presence of image jitter is determined by the spatiotemporal properties of both the jitter and involuntary eye movements. Thus, the statistical properties of the external world are modulated by the retinal image dynamics. 31–33  
A number of questions are raised by these findings. Under the conditions of the present study, the performance improvement for images displayed on a computer display was found over a wide range of spatial and temporal characteristics of image jitter. What level of jitter would produce optimal performance in dynamic everyday activities? How does image jitter interact with involuntary eye movements? How well is jitter tolerated by patients? The answers to these questions require further research. The present results suggest that optoelectronic aids based on image jitter are an exciting new concept that may improve the quality of life of people with central visual impairment. 
Supplementary Materials
Acknowledgments
We thank Martin Banks, Gunter Loffler, and the anonymous reviewers for their valuable comments on the manuscript. 
References
Resnikoff S Pascolini D Etya'ale D Global data on visual impairment in the year 2002. Bull World Health Organ . 2004;82:844–851. [PubMed]
Hassan SE Lovie-Kitchin JE Woods RL. Vision and mobility performance of subjects with age-related macular degeneration. Optom Vis Sci . 2002;79:697–707. [CrossRef] [PubMed]
Dickinson CM Fotinakis V. The limitations imposed on reading by low vision aids. Optom Vis Sci . 2000;77:364–372. [CrossRef] [PubMed]
Robson JG. Spatial and temporal contrast sensitivity functions of the visual system. J Opt Soc Am . 1966;56:1141–1142. [CrossRef]
Hall EC Arditi A. Temporal processing in very low vision. Vis Impair Res . 2000;2:43–48. [CrossRef]
Martinez-Conde S Macknik SL Hubel DH. Microsaccadic eye movements and firing of single cells in the striate cortex of macaque monkeys. Nat Neurosci . 2000;3:251–258. [CrossRef] [PubMed]
Ditchburn RW Ginsborg BL. Vision with a stabilized retinal image. Nature . 1952;170:36–37. [CrossRef] [PubMed]
Riggs LA Ratliff F Cornsweet JC Cornsweet TN. The disappearance of steadily fixated visual test objects. J Opt Soc Am . 1953;43:495–501. [CrossRef]
Livingstone M Hubel D. Segregation of form, color, movement, and depth: anatomy, physiology, and perception. Science . 1988;240:740–749. [CrossRef] [PubMed]
Falkenberg HK Rubin GS Bex PJ. Acuity, crowding, reading and fixation stability. Vision Res . 2007;47:126–135. [CrossRef] [PubMed]
Macedo AF Crossland MD Rubin GS. Investigating unstable fixation in patients with macular disease. Invest Ophthalmol Vis Sci . 2011;52:1275–1280. [CrossRef] [PubMed]
Bailey IL Lovie JE. New design principles for visual acuity letter charts. Am J Optom Physiol Opt . 1976;53:740–755. [CrossRef] [PubMed]
Pelli DG Robson JG Wilkins AJ. The design of a new letter chart for measuring contrast sensitivity. Clin Vis Sci . 1988;2:187–199.
Folstein MF Folstein SE McHugh PR. “Mini-mental state.” A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res . 1975;12:189–198. [CrossRef] [PubMed]
Love GD Hoffman DM Hands JW Gao J Kirby AK Banks MS. High-speed switchable lens enables the development of a volumetric stereoscopic display. Opt Express . 2009;17:15716–15725. [CrossRef]
Legge GE Ross JA Luebker A LaMay JM. Psychophysics of reading. VIII. The Minnesota Low-Vision Reading Test. Optom Vis Sci . 1989;66:843–853.
Tottenham N Tanaka J Leon AC The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res . 2009;168:242–249. [CrossRef] [PubMed]
Legge GE Pelli DG Rubin GS Schleske MM. Psychophysics of reading--I. Normal vision. Vision Res . 1985;25:239–252. [CrossRef] [PubMed]
Legge GE Rubin GS Luebker A. Psychophysics of reading--V. The role of contrast in normal vision. Vision Res . 1987;27:1165–1177. [CrossRef] [PubMed]
Legge GE Rubin GS Pelli DG Schleske MM. Psychophysics of reading--II. Low vision. Vision Res . 1985;25:253–265. [CrossRef]
Chung STL Mansfield JS Legge GE. Psychophysics of reading. XVIII. The effect of print size on reading speed in normal peripheral vision. Vision Res . 1998;38:2949–2962.
Legge GE Ahn SJ Klitz TS Luebker A. Psychophysics of reading--XVI. The visual span in normal and low vision. Vision Res . 1997;37:1999–2010. [CrossRef] [PubMed]
Whittaker SG Lovie-Kitchin J. Visual requirements for reading. Optom Vis Sci . 1993;70:54–65. [CrossRef]
Crossland MD Culham LE Rubin GS. Fixation stability and reading speed in patients with newly developed macular disease. Ophthalmic Physiol Opt . 2004;24:327–333. [CrossRef] [PubMed]
Rubin GS Feely M. The role of eye movements during reading in patients with age-related macular degeneration. Neuro-Ophthalmology . 2009;33:120–126. [CrossRef]
Macedo AF Crossland MD Rubin GS. The effect of retinal image slip on peripheral visual acuity. J Vis . 2008;8:1–11. [CrossRef] [PubMed]
Virsu V Rovamo J Laurinen P Nasanen R. Temporal contrast sensitivity and cortical magnification. Vision Res . 1982;22:1211–1217. [CrossRef] [PubMed]
Georgeson MA. Temporal properties of spatial contrast vision. Vision Res . 1987;27:765–780. [CrossRef] [PubMed]
Deruaz A Matter M Whatham AR. Can fixation instability improve text perception during eccentric fixation in patients with central scotomas? Br J Ophthalmol . 2004;88:461–463. [CrossRef] [PubMed]
Martinez-Conde S Macknik SL Troncoso XG Dyar TA. Microsaccades counteract visual fading during fixation. Neuron . 2006;49:297–305. [CrossRef] [PubMed]
Greschner M Bongard M Rujan P Ammermuller J. Retinal ganglion cell synchronization by fixational eye movements improves feature estimation. Nat Neurosci . 2002;5:341–347. [CrossRef] [PubMed]
Rucci M Iovin R Poletti M Santini F. Miniature eye movements enhance fine spatial detail. Nature . 2007;447:851–855.
Poletti M Rucci M. Oculomotor synchronization of visual responses in modeled populations of retinal ganglion cells. J Vis . 2008;8:1–15. [CrossRef] [PubMed]
Footnotes
 Supported by the College of Optometrists and Wellcome Trust (VS/07/GLAC/A2), UK.
 Disclosure: L.M. Watson, None; N.C. Strang, None; F. Scobie, None; G.D. Love, None; D. Seidel, None; V. Manahilov, None
Figure 1. 
 
A text sample and normalized word recognition speed in the presence of image jittering for visually impaired observers. (A) A text sample as presented on the screen. (B) A low-pass spatial frequency filtered text sample demonstrating how individuals with central visual impairment might perceive the stimulus. (C) Normalized word recognition speed (NRS; NRS% = 100(RSm − RSs)/RSs, where RSm and RSs denote the word recognition speeds for modulated and stationary text, respectively) averaged across participants (n = 14) for temporally modulated text at various jitter amplitudes and interjitter intervals: 0.5°/166 ms, 1°/166 ms, 0.5°/100 ms, and 1°/100 ms. *Denotes the normalized word recognition speed is significantly different from zero at P < 0.05. Error bars represent SEM.
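The NRS normalization in the caption is a percent change relative to the stationary baseline; a minimal check with hypothetical speeds (not values from the study):

```python
def normalized_recognition_speed(rs_modulated, rs_stationary):
    """NRS% = 100 * (RSm - RSs) / RSs, as defined in the Figure 1 caption."""
    return 100 * (rs_modulated - rs_stationary) / rs_stationary

# Hypothetical speeds in words/minute, for illustration only:
nrs = normalized_recognition_speed(28.0, 20.0)  # → 40.0 (% faster with jitter)
```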
Figure 2. 
 
Word recognition speed normalized for each subject, averaged for different parameters of image jitter, as a function of word recognition speed for stationary text. Filled circles show data for participants with severe visual loss. Empty circles illustrate data for participants with moderate visual loss. Solid line represents the best-fitting power function: NRS = a/RSs^b; a = 7523, b = 1.61; R² = 0.86.
Figure 3. 
 
Word recognition speed as a function of interjitter interval duration for subjects (n = 9) with simulated low vision, using Bangerter filters to reduce acuity to 1.2 LogMAR. Word recognition speed improved for interjitter intervals of 500, 250, and 100 ms and was impaired for interjitter intervals of 50 ms and 25 ms. The upper horizontal axis shows fundamental temporal frequency calculated as 1/(2 × T), where T denotes the interjitter interval duration in seconds. Error bars illustrate SEM.
Figure 4. 
 
Example photographs of facial stimuli and sensitivity for discriminating emotions of face images for observers with AMD (n = 16). (A) A face expressing angry and happy emotions. Reprinted with permission from the Research Network on Early Experience and Brain Development, http://www.macbrain.org/resources.htm. (B) Low-pass spatial frequency–filtered images illustrating how people with central visual impairment might perceive face images. (C) Sensitivity (d') for discriminating angry and happy facial emotions for stationary (black bar) and jittering images (empty bars) at various jitter amplitudes and interjitter intervals as explained in Figure 1. *Denotes significantly different d' (P < 0.001) compared with stationary stimuli. Error bars represent SEM.
Figure 5. 
 
Performance of people with AMD (n = 7) for discriminating facial emotions using jitter goggles. (A) Illustration of the prototype of jitter goggles, which contained: birefringent prisms (a and c); ferroelectric liquid crystal polarization modulators (b and d); and a polarizing filter (e). The polarization modulators allow one of four possible deflection angles, produced by the prisms, to be selected. (B) Sensitivity (d') for discriminating angry and happy facial emotions for stationary (black bar) and jittering images (empty bar; amplitude of 1.8 or 2.6° and interjitter interval of 166 ms). *Denotes significantly higher d' from stationary stimuli at P < 0.005. Error bars represent SEM.
Figure 6. 
 
Contrast sensitivity for discriminating grating orientation as a function of temporal frequency for subjects (n = 7) with AMD. *Denotes significantly higher differences in contrast sensitivity at P < 0.05. Error bars illustrate SEM.
Table. 
 
Word Recognition Speed for Different Interjitter Intervals, Normalized by that for Stationary Text
Inter-Jitter Interval (ms)    Normalized Word Recognition Speed (%)    t        P
500                           42                                       5.235    <0.001
250                           57                                       5.493    <0.001
100                           40                                       3.876     0.001
 50                          −58                                       5.269    <0.001
 25                          −89                                       7.237    <0.001