January 2019
Volume 60, Issue 1
Open Access
Low Vision  |   January 2019
People With Central Vision Loss Have Difficulty Watching Videos
Author Affiliations & Notes
  • Francisco M. Costela
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
    Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States
  • Daniel R. Saunders
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
    Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States
  • Dylan J. Rose
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Sidika Katjezovic
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Stephanie M. Reeves
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Russell L. Woods
    Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, Massachusetts, United States
    Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States
Investigative Ophthalmology & Visual Science January 2019, Vol.60, 358-364. doi:10.1167/iovs.18-25540
Abstract

Purpose: People with central vision loss (CVL) often report difficulties watching video. We objectively evaluated the ability to follow the story (using the information acquisition method).

Methods: Subjects with CVL (n = 23) or normal vision (NV, n = 60) described the content of 30-second video clips from movies and documentaries. We derived an objective information acquisition (IA) score for each response using natural-language processing. To test whether the impact of CVL was simply due to reduced resolution, another group of NV subjects (n = 15) described video clips with defocus blur that reduced visual acuity to 20/50 to 20/800. Mixed models included random effects correcting for differences between subjects and between the clips, with age, gender, cognitive status, and education as covariates.

Results: Compared with both NV groups, IA scores were lower for the CVL group (P < 0.001). IA scores decreased with worsening visual acuity (P < 0.001), and this decrease was steeper for the CVL group than for the NV-defocus group (P = 0.01), seen as a greater discrepancy at worse levels of visual acuity.

Conclusions: The IA method was able to detect difficulties in following the story experienced by people with CVL. Defocus blur failed to recreate the CVL experience. IA is likely to be useful for evaluations of the effects of vision rehabilitation.

Individuals with central vision loss (CVL) report watching television and movies at least as much as people with full sight.1,2 Many viewers with CVL report significant difficulties, such as recognizing faces and following plots,1–3 and report substantial difficulties using computers and portable electronic devices to view video.2 The elderly, particularly those over the age of 65, also report watching TV more often than younger adults and children,2,4 and the proportion of this demographic affected by age-related visual impairments, such as AMD and cataract, is increasing.5–7 
When watching a directed video (i.e., video content controlled by a director—“Hollywood” movie or TV program), most individuals with normal vision (NV) will look in about the same region of a visual display at about the same time.8,9 People with CVL are expected to do the same as viewers with NV but with less ability, due to reduced vision (identifying objects of interest), poor eye movement control (being able to direct the gaze to the target location), and unstable fixation (holding the gaze at the visual target). Recently, we showed that people with a binocular preferred retinal locus (PRL; an alternate region of the retina used to view objects of interest when the fovea has been lost) do follow the same scan path as viewers with NV, although usually not with the clinically-measured (fixational) PRL.10 
These factors may impact the ability of subjects with CVL to follow the story and are likely responsible for the reports of difficulty watching video, such as on TV and in movies.1,2 To our knowledge, the impact of CVL on watching TV or movies has not been objectively measured. Here, we report two studies: In the first study, we present the first use of an objective approach to quantifying the information acquisition (IA) of subjects with CVL (N = 23) and normal vision (N = 60). The IA score11 evaluates the ability to follow the story in a video clip, which is a primary requirement of watching video, even when done for pleasure. Given the difficulties reported by people with CVL while watching TV, we hypothesized that IA scores would be lower for these subjects when compared with NV subjects. 
Some of what is known about the relationship between CVL and its effects on task performance comes from studies that have simulated vision impairment. Commonly used simulations include optical defocus (refractive blur),12–17 diffusive filters,17–22 and image blur (through image processing).15,16,20–24 These may resemble the “initial stages of sudden onset, acquired visual loss”18 due to cataract, keratoconus, corneal scarring, or uncorrected refractive error, in that their primary effect is a uniform reduction in image resolution across the visual field. Such simulations have been used to study reading,13,14,19,25 pedestrian mobility,15–17 postural stability,22 gaze perception,16 eye–hand coordination,24 and driving.17 In addition, “visual impairment simulators,” which can be found at trade shows or in the laboratory,26,27 have been used to study way-finding,28 pharmacy education,27,29 and the impact on the empathy of medical residents.30 
Optical defocus places objects at a depth plane that is out of focus on the retina.12–17 Studies that have used these simulations suggest that they may impair viewers comparably to those with low visual acuity and low vision for certain tasks.18,20–22,31 In the second study, we induced blur using defocus lenses (refractive blur), which have previously been used to simulate impaired vision.12–17 We examined whether the effects of CVL simulated with optical defocus were similar to those of real CVL by using IA scores. We hypothesized that the low resolution produced by defocus would not entirely explain the deficit experienced by subjects with CVL. 
Methods
Subjects
The CVL group consisted of 23 subjects with CVL (median age 60; range 29–87 years) from the community in and around Boston, Massachusetts. Vision characteristics of the CVL group, including diagnoses, are reported in Table 1. Subjects in the CVL group had an average binocular visual acuity (VA) of 0.84 logMAR (range −0.02 to 1.88; 20/19 to 20/1520), average letter contrast sensitivity (CS) scores of 1.22 (range 0.9 to 1.55) units, and a relative or full central scotoma or scotomata indicated by binocular perimetry. For Table 1, if monocular fixation was found to be at the fovea in either eye, the CVL subject was considered to be using a fovea (though the quality of vision, measured with standard clinical tests such as VA and visual field (VF) assessments, was impaired compared to healthy eyes). 
Table 1
 
Vision Characteristics of the CVL Group
The NV group consisted of 60 subjects with NV who have been described previously.10,11,32,33 Recruitment was stratified with three equally sized age groups: <60 years, 60–70 years, and >70 years, each with equal numbers of men and women. Each NV-control subject watched a different subset of 40 video clips from a set of 200 clips. There was no difference between the CVL and NV groups in gender (χ2 = 0.28, P = 0.60), age (Kolmogorov-Smirnov, D = 0.17, P = 0.71) or education (D = 0.14, P = 0.92). Subjects in the NV group had an average binocular VA of 0.01 (range: −0.12 to 0.24) logMAR, average letter CS scores of 1.82 (range: 1.50 to 2.10) units, and no VF defects found in binocular perimetry. 
The NV-defocus group consisted of 15 additional NV subjects (median 29; range 21–67 years). There was no difference between the groups in gender, (χ2 = 2.22, P = 0.14), but the NV-defocus group was younger than the CVL group (Wilcoxon-Mann-Whitney, z = 4.35, P < 0.001). Education and Montreal Cognitive Assessment (MoCA)34 scores were not available for the NV-defocus group. 
Additional data on the characteristics of the individuals in the CVL group are described in Table 1. Summary demographics for the three groups are shown in Table 2. 
Table 2
 
Summary of Self-Reported Demographic Characteristics of Subjects in Each Group
The binocular VA of all subjects was assessed using a computerized single-letter VA test. The letter CS was assessed using a custom computer program that produces letter CS scores comparable to the Mars and Pelli-Robson charts. The VFs of all subjects were assessed using a Goldmann manual perimeter or a custom computerized VF mapping program (comparable to a Tangent screen). Each subject was screened for the presence of cognitive defects using the MoCA.34 MoCA scores in the CVL group (median 26, range 17 to 30) were not different (D = 0.16, P = 0.76) from those in the NV group (median 26, range 22 to 30). These MoCA scores were not adjusted for the difficulties experienced by people with CVL performing four of the items.35 All subjects had a MoCA score of 17 or better. Apart from the NV-defocus group, subjects were shown the video clips wearing habitual (not necessarily optimal) optical correction. The NV-defocus group had an optimal correction for the viewing distance, and positive lenses adjusted to obtain the required visual acuities at the 1-m viewing distance. 
The research followed the tenets of the Declaration of Helsinki. The Institutional Review Board of the Schepens Eye Research Institute approved all studies. Informed consent was obtained from each subject prior to data collection. 
Information Acquisition (IA) Method
IA is an objective approach to evaluate the ability to perceive and understand a sensory stimulus, using descriptions of the stimulus made by the observer. Here, in the case of video, IA evaluates the ability to follow the story. We restricted responses to descriptions of the visual content, even though audio content was available. We have found that, with careful instruction, responses can be restricted to the visual content32 with no difference in IA when the audio content is not available. Subjects viewed 30-second video clips wearing their habitual optical correction or the NV-defocus lenses. An experimenter gave the instructions and was in the room during data collection, but the MATLAB program automatically displayed the prompts after viewing each clip, asking the subject to provide verbal responses to the open-ended queries: “Describe this movie clip in a few sentences, as if to someone who has not seen it” and then, “List several additional visual details that you might not mention in describing the clip to someone who has not seen it.” Subjects were instructed to report, without time constraints, on the visual aspects of the clip only. The spoken responses to each prompt were recorded using a headset microphone and later transcribed. 
Video Clips
As previously described,10,11,32,33 there were 200 video clips, chosen to represent a range of genres and types of depicted activities. As the subjects viewed the clips on a 27-inch display (aspect ratio 16:9) from 100 cm, the videos were 33° of visual angle wide, with a varying height related to the aspect ratio of the original material. The clips were displayed by a MATLAB program using the Psychophysics Toolbox36 and Video Toolbox.37 Before beginning data collection, participants in all groups watched and described three 30-second clips as practice of the procedure. Each of the 200 video clips had been watched by at least 32 of 159 subjects with NV (which includes the 60 subjects in the NV group).11 This constituted the response (control) database to which each new response was compared (see section “Scoring of Description of the Video Clips,” below). 
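The 33° figure follows directly from the display diagonal, aspect ratio, and viewing distance. A minimal sketch as a cross-check (the function name is ours, not from the study):

```python
import math

def display_width_deg(diagonal_in: float, aspect_w: int, aspect_h: int,
                      distance_cm: float) -> float:
    """Visual angle (degrees) subtended by the width of a flat display."""
    diag_cm = diagonal_in * 2.54
    # Recover the physical width from the diagonal and the aspect ratio.
    width_cm = diag_cm * aspect_w / math.hypot(aspect_w, aspect_h)
    return 2 * math.degrees(math.atan(width_cm / 2 / distance_cm))

# 27-inch 16:9 display viewed from 100 cm, as in the study.
print(round(display_width_deg(27, 16, 9, 100)))  # → 33
```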
Scoring of Descriptions of the Video Clips
The following are examples of responses provided by two subjects as descriptions of the same 30-second clip from the documentary Deep Blue: 
 

Subject with NV: “The scene opens with a shot of the beach with the waves coming in and lots of white birds on the beach, which it's a rocky beach and then it cuts to a clip of group of sea lions who are all sort of sitting around on the shore and their whole group of probably like 50 sea lions and the camera cuts to zoom in on an older sea lion playing with a younger sea lion. The seals were brown and it was daylight. The water was peaceful.” (IA score = 4.8)

Subject with CVL: “There were many seals by the coastline. The seal mother was playing with the baby seals.” (IA score = 1.5)
As described previously,10,11,32,33 these natural-language responses to an open-ended prompt were objectively scored for their relevant content using an automated “wisdom of the crowd” approach (i.e., collective opinion of a group of individuals rather than that of a single expert38) to determine the IA score. The text of each response was processed with the Text to Matrix Generator toolbox for MATLAB. The number of words (after removing stopwords) shared by each pair of responses, disregarding repeated instances of the word in either response, produced a shared-word count for each pair of responses. The IA score for each video clip for each study subject was the average of the shared-word counts from the paired comparisons with each of the responses from the response database (crowd) for the same clip. For subjects within the NV group, we removed their own response to a given clip from the response database when calculating the IA score (“leave one out” approach). 
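The scoring just described can be sketched in a few lines. The study used the Text to Matrix Generator toolbox for MATLAB; this Python sketch is ours, and the tiny stopword list and example crowd responses are illustrative only:

```python
# Illustrative stopword list; a real implementation uses a much larger one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "with", "it",
             "was", "were", "is", "in", "on", "by"}

def content_words(response: str) -> set:
    # Unique words with punctuation stripped; repeated instances of a
    # word in a response are disregarded, as in the shared-word count.
    return {w.strip(".,;:!?\u201c\u201d").lower()
            for w in response.split()} - STOPWORDS

def ia_score(response: str, crowd_responses: list) -> float:
    # Average shared-word count over paired comparisons with every
    # crowd response for the same clip ("wisdom of the crowd").
    words = content_words(response)
    counts = [len(words & content_words(r)) for r in crowd_responses]
    return sum(counts) / len(counts)

crowd = ["seals playing on a rocky beach with white birds",
         "a group of sea lions on the shore, waves coming in"]
print(ia_score("There were many seals by the coastline", crowd))  # → 0.5
```

For a subject whose own response is already in the crowd database (the NV group), that response would be removed before averaging, matching the leave-one-out approach described above.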
Experimental Design
In Study 1, we evaluated the impact of CVL on watching movies and television by comparing the IA scores of people with CVL to people with NV. All 23 subjects with CVL viewed the same set of 20 video clips, while subjects with NV in Study 1 (NV group) viewed 40 clips randomly selected from the set of 200 video clips (that included the 20 video clips viewed by the subjects with CVL). 
In Study 2, we asked whether reduced resolution, simulated through image blur caused by defocus, was the cause of the reduced IA scores of the CVL group. We investigated the validity of using defocus to simulate CVL. We compared the IA scores of the CVL subjects who participated in Study 1 to NV subjects with four levels of VA reduction. 
The NV-defocus group has been reported previously to illustrate that reduced VA reduces IA scores (a dose-response effect).32 When best corrected, subjects in the NV-defocus group had an average binocular VA of −0.14 (range: −0.3 to 0.1) logMAR and no VF defects were found in binocular perimetry. They watched the same set of 20 video clips watched by the CVL group, while wearing varying levels of spherical defocus lenses to produce optical blur. For each NV-defocus subject, lenses were found that produced five levels of VA through spherical myopic defocus. The lenses selected for each subject ranged from 0 to +9 D, producing visual acuities of 20/20 or better (0.0 logMAR; no defocus), 20/50 (0.4 logMAR), 20/125 (0.8 logMAR), 20/320 (1.2 logMAR), and 20/800 (1.6 logMAR) at the 1-m viewing distance. Each subject in this group saw four video clips at each of these defocus levels, and the order of the defocus conditions and the clips was randomized between subjects. As about 12 video clips are required to obtain a stable estimate of the IA score in NV subjects,32 we obtained only a noisy estimate of each defocus subject's IA score at each level and, thus, do not report individual IA scores at each defocus level (however, the group estimates were robust at each defocus level). Some demographics for the three groups are shown in Table 2. 
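The Snellen equivalents of the five target levels follow from the logMAR definition (Snellen denominator = 20 × 10^logMAR, rounded to the nearest standard chart line); a small sketch, with names of our own choosing:

```python
def snellen_denominator(logmar: float) -> float:
    """Exact Snellen denominator for a 20-ft numerator."""
    return 20 * 10 ** logmar

# The study's target levels; the exact values round to the standard
# chart lines 20/20, 20/50, 20/125, 20/320, and 20/800.
for logmar in (0.0, 0.4, 0.8, 1.2, 1.6):
    print(f"{logmar:.1f} logMAR ~ 20/{snellen_denominator(logmar):.0f}")
```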
Statistical Analyses
Statistical analyses were conducted using Stata/IC 14 (StataCorp, College Station, TX, USA). To compare group (study sample) demographics, we used the chi-square test for categories, the Kolmogorov-Smirnov test for the equivalence of ordered distributions, and the Wilcoxon-Mann-Whitney test to compare the central tendency of ordered distributions. To analyze the primary questions of Studies 1 and 2, we generated linear mixed models39 that accommodated the “crossed-random” design used to collect the data (i.e., subjects saw different subsets of video clips). Linear mixed models are robust to missing data, and the random effects for subject and for video clip account for individual differences between subjects (some people are more loquacious or more observant) and between video clips (e.g., the average number of shared words per description varied between 1.6 and 8.7 per clip in responses made by the NV group). Saunders et al.32 found that IA scores can vary with age, education, and gender; therefore, we included these factors as covariates in the models. Also, MoCA34 scores were included as a covariate, as cognitive ability could affect the ability to perform the task (describing video clips). As the sample sizes were small, we accepted P ≤ 0.01 as significant, and report terms with 0.10 ≥ P > 0.01 as trends. 
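Written out, the Study 1 model takes the following form; this is our notation for the crossed-random-effects structure just described, not a formula reproduced from the paper:

```latex
\mathrm{IA}_{ij} = \beta_0 + \beta_1\,\mathrm{CVL}_i + \beta_2\,\mathrm{age}_i
  + \beta_3\,\mathrm{gender}_i + \beta_4\,\mathrm{educ}_i + \beta_5\,\mathrm{MoCA}_i
  + u_i + v_j + \varepsilon_{ij},
\qquad
u_i \sim \mathcal{N}(0,\sigma_u^2),\quad
v_j \sim \mathcal{N}(0,\sigma_v^2),\quad
\varepsilon_{ij} \sim \mathcal{N}(0,\sigma^2)
```

where \(u_i\) is the random intercept for subject \(i\) and \(v_j\) the random intercept for video clip \(j\); because subjects saw different subsets of clips, the two random effects are crossed rather than nested.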
Results
Study 1
We used a linear mixed model to compare the CVL and NV groups, while “subject” and “video clip” were fully-crossed random factors, and age, education, gender, and MoCA scores were included as covariates. The CVL group had an average score that was 1.10 shared words lower than the NV group (z = 4.24, P < 0.001; Fig. 1) when corrected for age, gender, education, and MoCA score. IA scores decreased with increasing age (0.2 shared words per decade; z = 2.99; P = 0.003) and increased with increasing education level (0.2 shared words per increment in education; z = 2.20; P = 0.03) but did not vary with gender (z = 1.14; P = 0.26) or MoCA score (z = 0.20; P = 0.84). 
Figure 1
 
IA scores of the NV and CVL groups. Error bars represent 95% confidence intervals.
We hypothesized that subjects who were still able to use the fovea might show better performance on the task, as eye movement control is better with a fovea than with a PRL.40,41 Five CVL subjects were considered to be using their fovea in at least one eye (Table 1). Their IA scores were not significantly higher than the CVL subjects who did not have a functional fovea in either eye, so were using a PRL (b = 0.12 shared words; z = 0.08, P = 0.93), when corrected for VA (z = 1.53; P = 0.13), education (z = 3.56, P < 0.001), and gender (z = 1.75, P = 0.08). Age and MoCA scores were highly nonsignificant and were removed from the final linear mixed model to ensure sufficient degrees of freedom (total sample N = 438). 
Study 2
We used a linear mixed model to compare the CVL and NV-defocus groups with VA as a fixed factor, while “subject” and “video clip” were fully crossed random factors, and age and gender were included as covariates (education and MoCA scores were not available for the NV-defocus group). As shown in Figure 2, IA scores decreased with worsening VA in both the NV-defocus group (−0.5 shared words per logMAR unit, i.e., per ten lines on a Bailey-Lovie42 VA chart; z = 4.49, P < 0.001) and the CVL group (−1.8 shared words per logMAR unit, z = 3.73, P < 0.001), and IA scores decreased more quickly with worsening VA in the CVL group (z = 2.72, P = 0.007), when corrected for age and gender. IA scores decreased with increasing age (0.02 shared words per decade; z = 2.09, P = 0.04). 
Figure 2
 
IA scores of the NV-defocus group (red circles), grouped by defocus level, and the CVL subjects (blue squares). Error bars represent the 95% confidence interval at each defocus level. Gray region corresponds to the 95% confidence interval of the fit to the CVL group.
As the NV-defocus group was younger than both the NV group (Wilcoxon-Mann-Whitney, z = 4.57, P < 0.001) and the CVL group (z = 4.36, P < 0.001), we compared IA scores between the NV group and the NV-defocus group at best corrected. When viewing without defocus lenses (best corrected), the NV-defocus group had IA scores that were not significantly worse (by 0.63 shared words; z = 1.65; P = 0.10; Fig. 3) than those of the NV group when corrected for age (z = 3.87; P < 0.001) and gender (z = 0.95; P = 0.34). This illustrates that including age as a covariate can correct for the age difference between the two groups. 
Figure 3
 
Comparison of the IA score between NV and NV-defocus groups in the best-corrected condition, when corrected for age. Error bars represent 95% CI.
Discussion
Currently, there are about four million people in the USA with low vision,5 most of whom have CVL. Self-reported difficulties with watching television, an activity of daily living, have been previously reported for CVL and other visual impairments.1–3,43 However, no previous studies have objectively evaluated this difficulty in people with CVL. Here, we present an innovative method to evaluate visual information acquisition in a group of people with CVL and in two control groups (one of them wearing spherical defocus lenses to blur vision and equate VA). As we hypothesized, based on the difficulties that people with CVL report while watching movies, the CVL group had IA scores that were significantly lower than those of the NV group. This confirms that people with CVL have real difficulties watching movies. Watching television and movies is a common activity of daily living, and people with CVL watch television at least as much as people with full sight.1,2 Assisting people with such difficulties will improve their quality of life and may have secondary impacts on associated problems such as depression. Thus, further development and evaluation of novel methods of vision rehabilitation that are directed toward watching television and movies is required. 
Our results confirm that the IA metric can objectively find differences in the ability to view video material. It has the potential to be used to evaluate the impact of rehabilitation interventions that involve repeated measurements. As a preliminary evaluation of repeated testing, IA scores were obtained twice on five subjects with CVL, with intervals between test sessions that ranged from 1.9 years to 5.0 years with an average interval of 3.4 years. The difference in VA between sessions ranged from 0.00 logMAR to 0.42 logMAR with an average difference of 0.18 logMAR. As the 95% confidence interval for repeated VA tests of subjects with CVL is about 0.2 to 0.3 logMAR,44,45 there were only modest changes in VA. Central scotomas found using a custom, computer-based version of the Tangent screen were consistent in shape and location between visits for each subject. Figure 4 shows the IA scores on the two visits for each subject. Changes in IA score were less than one shared word and were not related to the measured changes in VA. Previously, we reported that the within-visit repeatability of IA scores of the NV subjects was ±1 shared word (95% confidence interval).33 
Figure 4
 
Comparison of the IA score between first and second session for the four CVL subjects who repeated the experiment.
In that recent publication,33 we tested the benefits of using a superimposed dynamic cue that assisted people with hemianopia watching movies and were able to measure a within-subject rehabilitative effect using the IA score. The IA score could be adopted in other areas of vision research (e.g., video compression, image enhancement, reading, scene comprehension) and other disciplines (e.g., cognitive impairment, hearing impairment, speech impairment). For example, when standard vision and cognitive test scores were within the normal range, low IA scores could suggest the presence of aphasias or other forms of disfluency in speech or writing. Or, it could be used to test IA from auditory stimuli to identify differences in hearing ability, or to evaluate sound compression algorithms or low-quality audio settings. Further, if both speech and low-level vision are normal, a low IA score could suggest cognitive impairments, such as those resulting from traumatic brain injury or Alzheimer's disease. 
In our second study, we examined whether reduced resolution could exclusively explain the difficulties experienced by subjects with CVL. IA scores decreased as VA worsened with both CVL and defocus (Fig. 2). However, the NV-defocus group had significantly higher IA scores than the CVL group at worse VA levels. Our results suggest that the use of defocus lenses to simulate CVL viewing conditions failed to recreate the visual experience of people with CVL. Therefore, further studies should examine and quantify additional factors, such as oculomotor control patterns, for individuals with CVL and NV subjects under similar conditions. Indeed, we recently found that people with CVL do not look in about the same place as normally sighted people (Woods et al., IOVS 2017;58:ARVO E-Abstract 2483), which supports the large individual differences in functional adaptation to CVL reported by previous studies. 
To compensate for the loss of the fovea, people with CVL rely on eccentric viewing and often adopt a preferred retinal locus (PRL) or pseudo-fovea.46 Even so, the five subjects with CVL who were still using a fovea in at least one eye (during monocular viewing) in our sample did not have better IA scores than the 18 subjects using a PRL in both eyes. This suggests that monocular evaluations of the PRL may be inadequate and do not reflect function when watching movies. It also shows that even when there is some foveal sparing (with reduced VA, or even with “good” VA), there can be a disability when watching movies. Fixation with a PRL is much more unstable than with the fovea47 (even when the foveal view is blurred with defocus10), and unstable fixation further impairs target detection and identification.48 Recently, we showed that many people with CVL use a PRL to view videos that differs from that found with a fixation task.10 
As defocus blur was not enough to recreate the CVL experience on this task, for simulations of CVL that include a central scotoma, it may be essential to evaluate individuals with NV performing the visual tasks with simulated scotomas in a gaze-contingent paradigm. Training NV subjects to develop a PRL using accurate gaze-contingent systems is key to realistically simulating impaired visual conditions. Recent studies have shown the benefit of training NV subjects to develop a fixational PRL49,50 and, promisingly, to develop oculomotor re-referencing. One of the problems with many gaze-contingent systems is the delay between an eye movement and the update of the display, which is particularly a problem when making saccades (about three times per second). This system latency can be measured easily51 and the effect of system latency in gaze-contingent systems can be reduced by predicting saccadic eye movements and updating at the predicted location instead of the last measured location.52,53 
While we were able to find differences between groups that were consistent with our expectations, this does not mean that the IA score method is a valid measurement instrument. For that, we are preparing a manuscript that uses Rasch analysis54 to evaluate the measurement properties of our IA approach. While mixed-effects models (as used above) account for differences between subjects and differences between video clips, it is not clear that this is equivalent to the adjustments made when data are fit to the Rasch model. Unlike Rasch analysis, mixed-effects models (or other common statistical tests) do not identify when items (here, video clips) are not performing properly (i.e., as expected under the Rasch measurement model). Similarly, mixed-effects models cannot identify when a subject performs inconsistently, as can be done with Rasch analysis. Thus, Rasch analysis may have substantial benefits over the mixed-effects models used here, but Rasch analysis is more complicated and time-consuming to conduct. 
In summary, the IA method was able to find the increased difficulty following the story experienced by people with CVL and is consistent with their reports of difficulty. Further, IA showed that defocus blur failed to recreate the CVL experience. These results confirm that IA can be used to evaluate the impact of vision impairment on the video-viewing task and is likely to be useful for evaluation of the effect of vision rehabilitation. 
Acknowledgments
The authors thank Sarah Sheldon for assistance with data collection. Supported by National Eye Institute awards R01EY019100 and P30EY003790. The funding organization had no role in the design or conduct of this research. 
Disclosure: F.M. Costela, None; D.R. Saunders, P; D.J. Rose, P; S. Katjezovic, None; S.M. Reeves, None; R.L. Woods, P 
References
Wolffsohn JS, Mukhopadhyay D, Rubinstein M. Image enhancement of real-time television to benefit the visually impaired. Am J Ophthalmol. 2007; 144: 436–440.
Woods RL, Satgunam P. Television, computer and portable display device use by people with central vision impairment. Ophthalmic Physiol Opt. 2011; 31: 258–274.
Neve H, van Doren K. Watching television by visually impaired elderly people. In: Proceedings of the 9th International Conference on Low Vision – Vision 2008. Montreal, Canada; 2008.
Depp CA, Schkade DA, Thompson WK, Jeste DV. Age, affective experience, and television use. Am J Prev Med. 2010; 39: 173–178.
Congdon N, O'Colmain B, Klaver CC, et al. Causes and prevalence of visual impairment among adults in the United States. Arch Ophthalmol. 2004; 122: 477–485.
Friedman DS, Wilson MR, Liebmann JM, Fechtner RD, Weinreb RN. An evidence-based assessment of risk factors for the progression of ocular hypertension and glaucoma. Am J Ophthalmol. 2004; 138: S19–S31.
Massof RW. A model on the prevalence and incidence of low vision and blindness among adults in the U.S. Optom Vision Sci. 2002; 79: 31–38.
Dorr M, Martinetz T, Gegenfurtner KR, Barth E. Variability of eye movements when viewing dynamic natural scenes. J Vis. 2010; 10 (10): 28.
Goldstein RB, Woods RL, Peli E. Where people look when watching movies: do all viewers look at the same place? Comput Biol Med. 2007; 37: 957–964.
Costela FM, Kajtezovic S, Woods RL. The preferred retinal locus used to watch videos. Invest Ophthalmol Vis Sci. 2017; 58: 6073–6081.
Saunders DR, Bex PJ, Rose DJ, Woods RL. Measuring information acquisition from sensory input using automated scoring of natural-language descriptions. PLoS One. 2014; 9: e93251.
Dickinson CM, Rabbitt PMA. Simulated visual impairment: effects on text comprehension and reading speed. Clin Vision Sci. 1991; 6: 301–308.
Thorn F, Thorn S. Television captions for hearing-impaired people: a study of key factors that affect reading performance. Hum Factors. 1996; 38: 452–463.
Chung ST, Jarvis SH, Cheung SH. The effect of dioptric blur on reading performance. Vision Res. 2007; 47: 1584–1594.
Rand KM, Barhorst-Cates EM, Kiris E, Thompson WB, Creem-Regehr SH. Going the distance and beyond: simulated low vision increases perception of distance traveled during locomotion [published online ahead of print April 21, 2018]. Psychol Res. doi:10.1007/s00426-018-1019-2.
Scott AC, Atkins KN, Bentzen BL, Barlow JM. Perception of pedestrian signals by pedestrians with varying levels of vision. Transp Res Rec. 2012; 2299. doi:10.3141/2299-07.
Bochsler TM, Legge GE, Kallie CS, Gage R. Seeing steps and ramps with simulated low acuity: impact of texture and locomotion. Optom Vision Sci. 2012; 89: E1299–E1307.
Bowers AR, Reid VM. Eye movements and reading with simulated visual impairment. Ophthalmic Physiol Opt. 1997; 17: 392–402.
Legge GE, Pelli DG, Rubin GS, Schleske MM. Psychophysics of reading – I. Normal vision. Vision Res. 1985; 25: 239–252.
Hecht H, Horichs J, Sheldon S, Quint J, Bowers A. The effects of simulated vision impairments on the cone of gaze. Atten Percept Psychophys. 2015; 77: 2399–2408.
Wood JM, Tyrrell RA, Chaparro A, Marszalek RP, Carberry TP, Chu BS. Even moderate visual impairments degrade drivers' ability to see pedestrians at night. Invest Ophthalmol Vis Sci. 2012; 53: 2586–2592.
Anand V, Buckley JG, Scally A, Elliott DB. Postural stability changes in the elderly with cataract simulation and refractive blur. Invest Ophthalmol Vis Sci. 2003; 44: 4670–4675.
Thompson WB, Legge GE, Kersten DJ, Shakespeare RA, Lei Q. Simulating visibility under reduced acuity and contrast sensitivity. J Opt Soc Am A Opt Image Sci Vis. 2017; 34: 583–593.
Maiello G, Kwon M, Bex PJ. Three-dimensional binocular eye-hand coordination in normal vision and with simulated visual impairment. Exp Brain Res. 2018; 236: 691–709.
Legge GE, Rubin GS, Pelli DG, Schleske MM. Psychophysics of reading – II. Low vision. Vision Res. 1985; 25: 253–265.
Rosenblum LP. How to make homemade vision simulators. Chapel Hill, NC: Early Intervention Training Center for Infants and Toddlers With Visual Impairments, FPG Child Development Institute, UNC-CH 2003. Available at: https://www.tsbvi.edu/technology/534-hatton-functional-vision/4788-session-i-handout-e-how-to-make-homemade-vision-simulators.
Zagar M, Baggarly S. Low vision simulator goggles in pharmacy education. Am J Pharm Educ. 2010; 74: 146.
Koneczny S, Rousek JB, Hallbeck MS. Simulating visual impairment to detect hospital way-finding difficulties. Stud Health Technol Inform. 2009; 142: 133–135.
Zagar M, Baggarly S. Simulation-based learning about medication management difficulties of low-vision patients. Am J Pharm Educ. 2010; 74: 146.
Hou CH, Lin KK, Chang CJ, Lee JS. The effect of low-vision simulators on ophthalmology residents' perception of quality of life. Can J Ophthalmol. 2009; 44: 692–696.
Rand KM, Barhorst-Cates EM, Kiris E, Thompson WB, Creem-Regehr SH. Going the distance and beyond: simulated low vision increases perception of distance traveled during locomotion [published online ahead of print April 21, 2018]. Psychol Res. doi:10.1007/s00426-018-1019-2.
Saunders DR, Bex PJ, Woods RL. Crowdsourcing a normative natural language dataset: a comparison of Amazon Mechanical Turk and in-lab data collection. J Med Internet Res. 2013; 15: e100.
Costela FM, Saunders DR, Kajtezovic S, Rose DJ, Woods RL. Measuring the difficulty watching video with hemianopia and an initial test of a rehabilitation approach. Trans Vis Sci Tech. 2018; 7 (4): 13.
Nasreddine ZS, Phillips NA, Bedirian V, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005; 53: 695–699.
Wittich W, Phillips N, Nasreddine ZS, Chertkow H. Sensitivity and specificity of the Montreal Cognitive Assessment modified for individuals who are visually impaired. J Visual Impair Blind. 2010; 104: 360–368.
Brainard DH. The psychophysics toolbox. Spat Vision. 1997; 10: 433–436.
Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vision. 1997; 10: 437–442.
Surowiecki J. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Society and Nations. New York, NY: Doubleday; 2004.
Janssen DP. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants. Behav Res Meth Instru. 2012; 44: 232–247.
Peli E. Control of eye movement with peripheral vision: implications for training of eccentric viewing. Am J Optom Physiol Opt. 1986; 63: 113–118.
Whittaker SG, Cummings RW, Swieson LR. Saccade control without a fovea. Vision Res. 1991; 31: 2209–2218.
Bailey IL, Lovie JE. New design principles for visual acuity letter charts. Am J Optom Physiol Opt. 1976; 53: 740–745.
Costela FM, Sheldon SS, Walker B, Woods RL. People with hemianopia report difficulty with TV, computer, cinema use, and photography. Optom Vision Sci. 2018; 95: 428–434.
Kiser AK, Mladenovich D, Eshraghi F, Bourdeau D, Dagnelie G. Reliability and consistency of visual acuity and contrast sensitivity measures in advanced eye disease. Optom Vision Sci. 2005; 82: 946–954.
Woods RL, Lovie-Kitchin JE. The reliability of visual performance measures in low vision. In: OSA Technical Digest Series. Vision Science and Its Applications. Vol. 1. Washington, DC: Optical Society of America; 1995: 246–249.
Cummings RW, Whittaker SG, Watson GR, Budd JM. Scanning characteristics and reading with a central scotoma. Am J Optom Physiol Opt. 1985; 62: 833–843.
Bellmann C, Feely M, Crossland MD, Kabanarou SA, Rubin GS. Fixation stability using central and pericentral fixation targets in patients with age-related macular degeneration. Ophthalmology. 2004; 111: 2265–2270.
Falkenberg HK, Rubin GS, Bex PJ. Acuity, crowding, reading and fixation stability. Vision Res. 2007; 47: 126–135.
Kwon M, Nandy AS, Tjan BS. Rapid and persistent adaptability of human oculomotor control in response to simulated central vision loss. Curr Biol. 2013; 23: 1663–1669.
Woods RL. PRL development, measurement and benefit. Paper presented at the Update on the PRL Symposium, American Academy of Optometry Annual Meeting, New Orleans, Louisiana, United States, October 2015.
Saunders DR, Woods RL. Direct measurement of the system latency of gaze-contingent displays. Behav Res Meth. 2014; 46: 439–447.
Han P, Saunders DR, Woods RL, Luo G. Trajectory prediction of saccadic eye movements using a compressed exponential model. J Vis. 2013; 13 (8): 27.
Wang S, Woods RL, Costela FM, Luo G. Dynamic gaze-position prediction of saccadic eye movements using a Taylor series. J Vis. 2017; 17 (14): 3.
Rasch G. Probabilistic Models for Some Intelligence and Attainment Tests. Copenhagen, Denmark: Danmarks Paedagogiske Institut; 1960.
Figure 1
 
Comparison of IA scores between the NV and CVL groups. Error bars represent 95% confidence intervals.
Figure 2
 
IA scores of the NV-defocus group (red circles), grouped by defocus level, and of the CVL subjects (blue squares). Error bars represent the 95% confidence interval at each defocus level. The gray region corresponds to the 95% confidence interval of the fit to the CVL group.
Figure 3
 
Comparison of IA scores between the NV and NV-defocus groups in the best-corrected condition, corrected for age. Error bars represent 95% CIs.
Figure 4
 
Comparison of IA scores between the first and second sessions for the four CVL subjects who repeated the experiment.
Table 1
 
Vision Characteristics of the CVL Group
Table 2
 
Summary of Self-Reported Demographic Characteristics of Subjects in Each Group