The Effect of Peripheral Visual Field Loss on Representations of Space: Evidence for Distortion and Adaptation
Author Affiliations
Francesca C. Fortenbaugh, John C. Hicks, and Kathleen A. Turano
From the Lions Vision Center, The Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland.
Investigative Ophthalmology & Visual Science June 2008, Vol.49, 2765-2772. doi:10.1167/iovs.07-1021
Abstract

Purpose. To determine whether peripheral field loss (PFL) systematically distorts spatial representations and whether persons with actual PFL show adaptation effects.

Methods. Nine participants with PFL from retinitis pigmentosa (RP) learned the locations of statues in a virtual environment by walking a predetermined route. After this, the statues were removed and the participants were to walk to where they thought each statue had been located. Placement errors, defined as the differences between the actual and estimated locations, were calculated and decomposed into distance errors and angular offsets.

Results. Participants showed distortions in remembered statue locations, with mean placement errors increasing with decreasing field of view (FOV) size. A correlation was found between FOV size and mean distance error but not mean angular offsets. Compared with eye movements of normal-vision participants with simulated PFL from a previous study, the eye movements of the RP participants were shorter in duration, and smaller saccadic amplitudes were observed only for the RP participants with the smallest FOV sizes. The RP participants also made more fixations to the statues than the simulated PFL participants. Results from a real-world replication of the task showed no behavioral differences between simulated and naturally occurring PFL.

Conclusions. PFL is associated with distortions in spatial representations that increase with decreasing FOV. The differences in eye movement and gaze patterns suggest possible adaptive changes on the part of the RP participants. However, the use of different sampling strategies did not aid the performance of the RP participants as FOV size decreased.

Numerous studies have demonstrated that peripheral field loss (PFL) is associated with decreases in perceived ability 1 and actual performance 2 3 4 5 on mobility tasks. Although understanding the behavioral consequences related to PFL is important for the development of rehabilitation protocols, another important factor that has received less attention is how PFL affects spatial representations of the world. 
Many of the actions completed on a daily basis are guided by vision, when it is available. 6 7 8 Moreover, effective navigation requires that a person retain knowledge of his or her final destination, current location in an environment, and any obstacles that may lie along the way. Because of this, distortions in spatial representations resulting from PFL could lead to the execution of actions that are not effectively calibrated with the locations of objects in an environment. Thus, it may be that the orientation and mobility problems reported by persons with PFL result not only from a failure to detect objects but also from a failure to accurately localize or remember the locations of objects that were previously seen. 
To date, a few studies 9 10 11 have demonstrated spatial localization deficits in persons with PFL in tasks in which their movements were restricted. More recently, a study by Fortenbaugh et al. 12 that simulated PFL in participants with normal vision while they walked around a virtual environment found evidence for systematic distortions in remembered target locations even when participants were free to move around the environment. However, because PFL was simulated in that study, participants were not given time to adapt to the loss of their peripheral vision. Disorders that result in PFL, such as retinitis pigmentosa (RP) and glaucoma, develop gradually over time. As a result, persons with PFL caused by pathologic conditions have time to adapt to the loss of their peripheral vision and to develop compensatory strategies. Therefore, a replication of the study was conducted with participants who have naturally occurring PFL to determine whether similar distortions occur and whether performance across the two groups is comparable. If persons with actual PFL showed adaptation effects, their performance was expected to be better than that of persons with simulated PFL; if not, similar performance was expected across the two groups. 
Methods
Participants
Nine adults with diagnoses of RP participated in this experiment. Participants ranged in age from 46 to 65 years old (mean age, 54.6 years) and had no other ocular or musculoskeletal disorders. The experiment followed the tenets of the Declaration of Helsinki, and all participants were compensated for their time. 
Visual Functioning
Table 1 shows basic visual functioning for the participants. Visual acuity and contrast sensitivity were measured binocularly using each subject’s habitual refractive correction. Visual acuity was tested with an ETDRS eye chart 13 transilluminated at 130 cd/m², and the number of letters correctly read 14 was converted to logMAR. Peak contrast sensitivity was tested with the Pelli-Robson letter sensitivity test. 15 The test was administered at a viewing distance of 1 m with overhead illumination (approximately 85 cd/m²). Contrast sensitivity was scored as the number of letters correctly read and converted to log contrast sensitivity. Visual fields were measured monocularly by kinetic perimetry with a Goldmann perimeter using the III/4e target (0.44° test spot at 320 cd/m²) on a background luminance of 10 cd/m² (subject 4 was tested previously using II/4e [0.25° test spot] and V/4e [0.64° test spot] targets at 320 cd/m²). Binocular visual fields were estimated from the measured left and right monocular visual fields. 
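For illustration, a minimal sketch (in Python) of letter-by-letter scoring conversions of the kind described above is given below. It assumes that each ETDRS letter is worth 0.02 logMAR and each Pelli-Robson letter 0.05 log units of contrast sensitivity; the chart-specific parameters (the logMAR of the first line counted and the number of letters per line) are assumptions that would need to match the chart and test distance actually used, and are not values taken from the study.

    def letters_to_logmar(letters_read, top_line_logmar=1.0, letters_per_line=5):
        # Letter-by-letter ETDRS scoring: each correct letter is worth 0.02 logMAR.
        # top_line_logmar is an assumed value for the first line of the chart at the
        # test distance; it must be set to match the chart actually used.
        return top_line_logmar + 0.02 * letters_per_line - 0.02 * letters_read

    def letters_to_logcs(letters_read):
        # Pelli-Robson letter-by-letter scoring: each correct letter is worth
        # 0.05 log units of contrast sensitivity (chart-specific offsets ignored here).
        return 0.05 * letters_read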
Stimuli and Apparatus
The environment was an immersive virtual replication of the actual laboratory (Fig. 1), created using three-dimensional software (Studio Max; Discreet, Montreal, Canada). A replication was used so that the experimenter would not have to interfere with the participants’ walking patterns during the testing phase: all the walls and support columns present in the real laboratory were in the same locations in the virtual environment. Six replications of the same statue, each of different height (range, 1.6–3.0 m) and color, were placed throughout the room (distance from starting position, 2.7–11.1 m). 
Head and Eye Tracking.
An optical tracker (HiBall-3000; 3rd Tech, Chapel Hill, NC) was used to monitor head position and orientation (resolution, 0.2 mm; angular precision, ≤0.03°). Head position and orientation were sampled every 7 ms by optical sensors mounted in a holder attached to the top of the headset, which detected the signals from infrared light-emitting diodes housed in the ceiling of the testing room. The output of the head tracker was filtered using an exponential smoothing function with an 80-ms time constant. Point of view was calculated from the head position and orientation data. 
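As an illustration of this type of filtering, the sketch below (in Python, not the original tracking software) applies a discrete single-pole exponential smoother with a 7-ms sample interval and an 80-ms time constant; the smoothing weight alpha = 1 − exp(−dt/tau) is an assumed discretization, and the function and variable names are illustrative.

    import numpy as np

    def exponential_smooth(samples, dt=0.007, tau=0.080):
        # Single-pole exponential smoothing of tracker samples (e.g., head position x, y, z).
        # samples: array of shape (n, k) taken every dt seconds; tau: time constant in seconds.
        samples = np.asarray(samples, dtype=float)
        alpha = 1.0 - np.exp(-dt / tau)            # weight given to each new sample
        smoothed = np.empty_like(samples)
        smoothed[0] = samples[0]
        for i in range(1, len(samples)):
            smoothed[i] = smoothed[i - 1] + alpha * (samples[i] - smoothed[i - 1])
        return smoothed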
Eye tracking was performed using software developed in house (by LH) on the output of cameras housed within the headset in front of each eye (sampling rate, 60 Hz; mean spatial variability, 0.52°). Pupil tracking was performed by identifying the center of mass of thresholded pixels within a specified region of interest. A five-point calibration was performed before the learning phase began. Drift-correction calibrations were performed every few minutes, or as needed, by waiting until the participant returned to the starting position during the task, briefly showing the calibration crosses, and having the participant fixate the center cross. 
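A minimal sketch (in Python, not the in-house software) of pupil localization by thresholding and center of mass within a region of interest follows; the dark-pupil assumption and all names are illustrative.

    import numpy as np

    def pupil_center(frame, roi, threshold):
        # Estimate the pupil center as the center of mass of thresholded pixels inside
        # a region of interest (assumes a dark pupil on a brighter background).
        # frame: 2-D grayscale eye image; roi: (row0, row1, col0, col1).
        r0, r1, c0, c1 = roi
        patch = frame[r0:r1, c0:c1]
        rows, cols = np.nonzero(patch < threshold)   # candidate pupil pixels
        if rows.size == 0:
            return None                              # no pupil found in this ROI
        return (c0 + cols.mean(), r0 + rows.mean())  # (x, y) in full-frame coordinates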
Head-Mounted Display.
The display device was a head-mounted display system (a modified Low Vision Enhancement System developed by Robert Massof at the Wilmer Eye Institute). The headset contained two color microdisplays (SVGA; 800 × 600 3D OLED Microdisplay; eMagin, Bellevue, WA). Each display measured 51° (H) × 41° (V), with a spatial resolution of approximately 0.06°/pixel and a refresh rate of 60 Hz. Spatially offset images were sent to each display, producing a stereo view with 100% overlap. 
Virtual Environment Task
Procedure.
Before the experiment began, participants completed three practice trials in an unrelated environment to help recalibrate their motor systems in the virtual environment. Then participants were led to the starting position and orientation by the experimenter and informed that they should pay attention to both the environment and the objects located within it because they would be asked about them later. 
The experiment consisted of a learning phase and a testing phase. During the learning phase, participants completed a predetermined walking path. This included walking from the starting position to each statue and back to the starting position to learn where the statues were located relative to the starting position and then walking from each statue to every other statue (and the starting position) such that each distance was traversed exactly once. This gave the participants a chance to learn where the statues were located relative to one another. Two distinct walking patterns were used and alternated across participants. 
Once the learning phase was completed, participants were turned away from the statues, and the experimenter pressed a button to make the statues disappear. The participants then returned to the starting orientation and walked, in a predetermined order, from the starting position to where they thought each statue had been located. When participants reached each estimated statue location, they verbally signaled to the experimenter, who then pressed a button to make the statue appear at that location and to record it. Two different statue placement orderings were used, with the orderings alternating across participants. After all statues had been placed, participants returned to the starting position and were given the opportunity to move any statues they thought were incorrectly placed. This was done to allow the participants to use the statues as reference points for one another and to prevent order effects. 
Performance Measures.
Three measures of accuracy were calculated for the estimated statue locations. First, placement errors (the distance between the estimated and the true statue locations) were calculated as a global measure of error. However, given that all walked distances originated from the starting position and orientation, placement errors were also decomposed into distance errors and angular offsets. Distance errors were defined as the difference between the estimated distance from the starting position to the statue and the true distance to the statue; thus, an underestimation of the distance to a statue gave a negative distance error. Angular offsets were defined as the difference between the angles, measured from the x-axis (90° to the right of the starting orientation; Fig. 1), of the radial lines connecting the starting position to the estimated and true statue locations; by this convention, counterclockwise rotations gave a negative angular offset. 
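As an illustration of this decomposition, a minimal sketch (in Python) is given below. It assumes 2-D floor-plane coordinates with the starting position at the origin and the x-axis 90° to the right of the starting orientation, as in Figure 1; the function and variable names are illustrative rather than taken from the original analysis code.

    import numpy as np

    def decompose_error(true_xy, est_xy):
        # Placement error, signed distance error, and signed angular offset for one
        # statue, with the starting position at the origin of the floor-plane frame.
        true_xy = np.asarray(true_xy, dtype=float)
        est_xy = np.asarray(est_xy, dtype=float)

        placement_error = np.linalg.norm(est_xy - true_xy)                 # global error (m)
        distance_error = np.linalg.norm(est_xy) - np.linalg.norm(true_xy)  # < 0 = underestimate

        # Angles of the two radial lines, measured from the x-axis (counterclockwise positive).
        ang_est = np.degrees(np.arctan2(est_xy[1], est_xy[0]))
        ang_true = np.degrees(np.arctan2(true_xy[1], true_xy[0]))
        # Sign chosen so that a counterclockwise displacement of the estimate is negative,
        # matching the convention stated in the text; wrapped to [-180, 180) degrees.
        angular_offset = (ang_true - ang_est + 180.0) % 360.0 - 180.0

        return placement_error, distance_error, angular_offset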
Eye Movements.
To determine whether the RP participants adopted eye movement strategies different from those of the simulated PFL participants, the total number of fixations and the mean fixation durations, fixation rates, and saccadic amplitudes were calculated for each participant during the learning and testing phases. Because the total number of fixations depended on walking speed and fixation rate was inversely related to fixation duration, only mean fixation duration and saccadic amplitude are reported here. Fixations were defined using a velocity threshold of 25°/s; with the temporal limits of our system, this corresponded to the eye remaining within 1.6° of the same spot on the scene for two frames. 
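A minimal sketch (in Python) of velocity-threshold fixation detection of this kind is given below; it assumes gaze samples are given as azimuth and elevation in degrees at 60 Hz, treats azimuth/elevation differences as planar angles (an approximation), and uses illustrative grouping rules and names rather than the original analysis code.

    import numpy as np

    def detect_fixations(gaze_deg, fs=60.0, vel_thresh=25.0, min_samples=2):
        # Consecutive samples whose angular speed stays below vel_thresh (deg/s)
        # are grouped into one fixation; returns (first_sample, last_sample) pairs.
        gaze_deg = np.asarray(gaze_deg, dtype=float)
        step = np.diff(gaze_deg, axis=0)
        speed = np.hypot(step[:, 0], step[:, 1]) * fs   # deg/s between samples i and i+1
        slow = speed < vel_thresh

        fixations = []
        start = None
        for i, s in enumerate(slow):
            if s and start is None:
                start = i
            if start is not None and (not s or i == len(slow) - 1):
                end = i + 1 if s else i                 # last sample inside the fixation
                if end - start + 1 >= min_samples:
                    fixations.append((start, end))
                start = None
        return fixations

    # Mean fixation duration (s): np.mean([(e - s) / fs for s, e in fixations])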
Gaze Strategies.
Gaze patterns were also analyzed to determine whether the RP participants relied on different visual cues for learning the statue locations than the simulated PFL participants. Objects within the environment were classified into five groups (statues, columns, walls, ground, sky), and the proportion of fixations made to each category during the learning and testing phases was calculated. Proportions of fixations were used to control for individual differences in the time taken to complete the task, which could be affected by the participants’ ages and differences in mobility. 
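For illustration, a minimal sketch (in Python) of tabulating per-category fixation proportions is given below; it assumes each detected fixation has already been assigned one of the five category labels, and the labels and function name are illustrative.

    from collections import Counter

    def fixation_proportions(labels, categories=("statue", "column", "wall", "ground", "sky")):
        # Proportion of fixations falling on each object category for one participant and phase.
        counts = Counter(labels)
        total = sum(counts.get(c, 0) for c in categories)
        if total == 0:
            return {c: 0.0 for c in categories}
        return {c: counts.get(c, 0) / total for c in categories}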
Real-World Validation Task
To assess the validity of the results in the virtual environment, a real-world replication of the experiment was completed in the actual laboratory. Three of the normal-vision participants from the original study 12 and five of the RP participants (Table 1) who completed the virtual environment task in the first part of the study participated in the experiment. All participants performed the real-world validation task at least 1 month after they had completed the virtual environment task. Rectangular cardboard boxes (height range, 1.4–1.85 m; distance range from the starting position, 2.25–10.3 m), of the same colors as the statues in the virtual environment, were used as targets (Fig. 1), and the instructions were identical to those used in the virtual environment task. Different statue locations were used to prevent the participants from relying on memories from the previous testing session in the virtual environment. Participants completed the task without wearing the headset used in the virtual environment. Therefore, no field of view (FOV) restrictions were applied, and eye movements could not be recorded. 
Results
As noted in Table 1, two of the RP participants had annuli of spared vision in their peripheral visual fields. For clarity, FOV sizes for these two participants were classified in all figures according to the sizes of their remaining central visual fields. However, because their FOVs were qualitatively different from those of the other participants, the data for these two participants were excluded from group analyses. Because the FOV sizes of the RP participants were not all equivalent to those of the simulated PFL participants in the previous study, 12 comparisons between the two groups were made by determining the means and 95% confidence intervals of the simulated PFL participants at the three FOV sizes tested and linearly interpolating the expected ranges for FOV sizes between those tested. Therefore, all figures for the virtual environment task display the individual means for the RP participants and the group means and 95% confidence intervals for the simulated PFL participants tested previously. 12 
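For illustration, a minimal sketch (in Python) of linearly interpolating an expected range between tested FOV sizes follows; the tested FOV values (10°, 20°, 40°) follow the conditions mentioned in the text, but the means and confidence bounds shown are placeholders, not the study’s data.

    import numpy as np

    # FOV sizes tested with simulated PFL (degrees) and, for one outcome measure,
    # the group means and 95% CI bounds at those sizes (placeholder values only).
    fov_tested = np.array([10.0, 20.0, 40.0])
    mean_sim = np.array([2.0, 1.5, 1.0])
    ci_low = np.array([1.6, 1.2, 0.8])
    ci_high = np.array([2.4, 1.8, 1.2])

    def expected_range(fov):
        # Piecewise-linear interpolation of the simulated-PFL mean and 95% CI at a given FOV.
        return (np.interp(fov, fov_tested, mean_sim),
                np.interp(fov, fov_tested, ci_low),
                np.interp(fov, fov_tested, ci_high))

    # Example: expected range for an RP participant with an 18-degree FOV.
    m, lo, hi = expected_range(18.0)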
Virtual Environment Behavioral Data
Figure 2 shows the mean placement errors for the RP participants and the simulated PFL participants. 12 The RP participants showed a significant negative correlation between placement error and FOV size (r = −0.87; P = 0.01), following the pattern of the simulated PFL participants, with placement errors increasing as FOV decreased. However, three of the four participants with 20° FOV showed smaller placement errors than those observed when comparable FOV sizes were simulated. Furthermore, the mean placement error for the participant with a 30° FOV and annuli was much larger than expected. 
For distance and angular offset errors, signed and absolute values were measured to assess the direction and magnitude of errors, respectively. Figures 3a and 3b show the mean signed and absolute distance errors, respectively, for the RP participants and simulated PFL participants as a function of FOV. Signed distance errors show a significant positive correlation, with the participants increasingly underestimating statue distances as FOV size decreased (r = 0.84; P = 0.02). Absolute distance errors show a negative correlation, with the average magnitude of errors increasing with decreasing FOV size (r = −0.83; P = 0.02). Although two of the RP participants with 20° FOV and the RP participant with 30° FOV and annuli deviated from the expected signed distance errors estimated from the simulated PFL participants, all the RP participants were within the expected range for the absolute distance errors. 
Figures 3c and 3d show the mean signed and absolute angular offsets, respectively, of the RP and simulated PFL participants. There is a significant correlation between signed angular offsets and FOV size (r = 0.85; P = 0.01) but not between absolute angular offsets and FOV size (r = −0.65; P = 0.12). RP participants with the smallest FOV sizes did show a greater counterclockwise (negative) angular offset for estimated target locations than the other RP participants. Angular offset errors were comparable in direction to, though smaller in magnitude than, the errors predicted from the simulated PFL participants. 
Virtual Environment Eye Movement Data
Because of technical difficulties, eye movement data were available for only six of the nine participants. Figures 4a and 4b show the mean fixation durations for these RP participants and the simulated PFL participants during the learning phase and testing phase, respectively. Although there is no trend across FOV sizes for the RP participants (P > 0.3 for learning and testing phases), the graphs show a tendency for the RP participants to fixate for shorter durations than the simulated PFL participants in both the learning and the testing phases. 
Figures 4c and 4d show the mean saccadic amplitudes for the RP participants and the simulated PFL participants in the learning and testing phases, respectively. Again, no trend across FOV size is seen for the RP participants (P > 0.4 for learning and testing phases). Mean saccadic amplitudes are comparable across the two groups, though the two RP participants with the smallest FOV sizes show smaller saccadic amplitudes than do the other participants. 
Virtual Environment Gaze Strategies
Figure 5a shows the mean proportions of fixations made to the statues, ground, walls, and columns for the learning phase, when all the statues were visible. (For brevity, the proportions for sky fixations are not shown.) The proportion of fixations did not correlate with FOV size for any of the categories (P > 0.1 for all). However, the panels show that the RP participants fixated on the statues to a greater extent than did the simulated PFL participants with comparable FOV sizes (approximately 40% of all fixations were to statues), whereas the proportion of wall fixations was less than that of the simulated PFL participants. In addition, one RP participant with a 10° FOV tended to look at the columns more often (approximately 15% of all fixations) than the other participants. 
Figure 5b shows the mean proportion of fixations made to the different categories during the testing phase. Again, no significant correlations between FOV size and proportion of fixations to a category were found (P > 0.1 for all). The general trend for the proportion of fixations made to the ground, walls, and columns resembled that of the simulated PFL participants. Although the proportion of fixations made to the statues was not as high as that seen during the learning phase, the graph still shows a tendency for the RP participants with smaller FOV sizes to fixate on the statues to a greater extent than the simulated PFL participants. Decreases in the proportion of fixations made to the statues are accompanied by increases in the proportion of fixations made to the ground and walls across FOV sizes. 
Real-World Validation Results
For analytical comparisons, the three normal-vision participants (i.e., 180° vertical FOV) in the real-world task were binned into a group with the 40° FOV simulated PFL participants because these two FOV sizes represent the largest FOV sizes possible in each condition, given the sizes of the screens in the head-mounted display (all analyses were also calculated without binning the data for the 180° FOV group in the real-world task and the 40° FOV simulated PFL group in the virtual environment; the significance of the results did not change). The RP participant with a 9° FOV was binned into the 10° FOV condition. Figure 6a shows the mean placement errors for the eight participants in this experiment along with the means for the simulated PFL participants from the previous study. 12 Results of a 2 (environment) × 3 (FOV) ANOVA showed a significant effect of FOV size (F(2,23) = 11.62; P < 0.01). There was no effect of environment or FOV × environment interaction (P ≥ 0.4 for both). Figure 6b shows the mean signed distance errors for the participants across the two tasks as a function of FOV size. Results of a second 2 × 3 ANOVA showed a significant effect of FOV size (F(2,23) = 8.56; P < 0.01) but no main effect of environment or FOV × environment interaction (P > 0.6 for both). Figure 6c shows the mean signed angular offsets as a function of FOV for the two tasks. A 2 × 3 ANOVA calculated on these means did not show a main effect of FOV or environment or an FOV × environment interaction (P > 0.1 for all). 
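For illustration, a minimal sketch (in Python, using statsmodels) of a 2 (environment) × 3 (FOV) between-subjects ANOVA of this general form is given below; the data frame, column names, and numeric values are placeholders for illustration only and are not the authors’ analysis code or data.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # One row per participant: mean placement error, environment, and binned FOV size.
    # All values below are placeholders for illustration only.
    df = pd.DataFrame({
        "error": [1.1, 1.4, 2.0, 0.9, 1.3, 2.2, 1.0, 1.5, 1.9, 1.2, 1.6, 2.1],
        "env": ["virtual"] * 6 + ["real"] * 6,
        "fov": [40, 20, 10, 40, 20, 10] * 2,
    })

    # Two-way ANOVA with main effects of environment and FOV and their interaction.
    model = ols("error ~ C(env) * C(fov)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))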
Discussion
The present results demonstrate that persons with real PFL show distortions in spatial representations similar to those observed when PFL was simulated in participants with normal vision. 12 In particular, mean placement errors for the real PFL participants increased with decreasing FOV size. Further analyses showed that this effect is driven in large part by a tendency to underestimate the distances to the statues. Signed angular offsets were also comparable between the two groups. When comparing the performances of participants with real PFL to the simulated PFL participants in the real-world task, the same trends across FOV size were observed for placement and distance errors. Across all performance measures, no effect of environment or FOV × environment interaction was observed. This suggests that the distortions observed in the spatial representations of the real PFL and the simulated PFL participants in the virtual environment task are related to the loss of the peripheral visual field and not to an experimental artifact. Collectively, then, the behavioral data indicate a relationship between the ability to create veridical spatial representations and FOV size, with distortions increasing as FOV size decreases, regardless of the nature of the field loss (simulated or natural) or the type of environment (virtual or real world). 
There were, however, some unexpected results in the behavioral data from the participants with 20° FOV sizes. Two of the RP participants with 20° FOV showed an overestimation of signed distances, indicating a bias in the opposite direction compared with that predicted from the simulated PFL participants’ data. Overestimating distances may carry more functional disadvantages than underestimating them, such as bumping into objects that are not continually monitored. Nevertheless, when coupled with the absolute distance errors, the results show that the magnitude of these participants’ biases still falls within the range predicted from the simulated PFL participants’ data. In addition, though the signed angular offsets of the RP participants were similar to those of the simulated PFL participants, the absolute angular offsets of six of the nine RP participants were smaller in magnitude than expected, indicating that these RP participants were slightly better either at representing the orientations of the statues or at maintaining their heading while walking to them. 
In contrast to the behavioral results, the eye movement and gaze strategies of the RP participants in the present study showed marked differences from those observed with the simulated PFL participants. First, fixation durations for both the testing and the learning phases were shorter for all but one of the RP participants than for the simulated PFL participants. Previous research 16 has shown that persons with PFL caused by ocular disorders exhibit shorter fixation durations than do normal-vision controls when completing a dot-counting task but not a visual search task. It has also been found that simulating PFL in normal-vision observers can lead to increases in fixation duration with decreasing FOV size. 17 Given that there is no consensus on how fixation durations typically change with reductions in FOV size, it is not clear whether the deviation in fixation durations between the RP and the simulated PFL participants here is the result of adaptive strategies on the part of the RP participants, a response to the FOV restrictions by the simulated PFL participants, or both. 
Analyses of gaze strategies also showed that the RP participants fixated on the statues to a greater extent than the simulated PFL participants in both the learning and the testing phases. The small decrease in the proportion of statue fixations in the testing phase is not surprising, given that, in this part of the experiment, the statues were visible only after the participants had placed them. Although this result is consistent with previous studies indicating that persons with PFL tend to focus their gaze on the target when walking, 18 19 the present study is limited in determining whether this result reflects a conscious strategy used by the RP participants. 
The results of the present study also suggest that the effect of PFL on spatial representations may depend not only on the absolute size of the remaining visual field but also on the parts of the visual field that are spared. Specifically, the RP participant with large annuli and a 30° central visual field showed larger signed distance errors and angular offsets than the other RP participants, with and without annuli, and the simulated PFL participants. When asked whether she perceived the annuli in her peripheral visual field, the participant responded that she did not. Instead, she reported “seeing” a larger, continuous FOV. Although both participants with annuli had large areas of spared vision, the annuli of the participant with a 30° central FOV spanned both hemifields, creating a unified ring around the remaining central region in her binocular FOV, whereas the annuli of the other participant did not. For both participants, however, annuli would not have provided more direct visual input during testing because the regions spanned by their annuli fell outside the FOV provided by the screens of the headset, with a maximum eccentricity of 25.5° horizontally and 20.5° vertically. 
In conclusion, the present results extend previous findings demonstrating distortions in perceived eccentricity 9 and online perception of distances 10 in persons with PFL, and they have important implications for the development of orientation and mobility rehabilitation protocols. The results indicate that the strategies used by the RP participants did not effectively compensate for their FOV loss in a task that involved remembering spatial layouts. As such, the present findings may help to explain some of the navigation difficulties reported by persons with PFL when completing similar tasks in their daily lives. 1 Furthermore, because previous studies 18 19 have found a tendency for persons with PFL to use certain types of strategies during navigation that do not involve actively learning spatial layouts, it may be helpful for persons with PFL to engage in training that focuses on deficits in encoding and retrieving spatial configurations and object locations in memory. However, further work is needed to understand the mechanisms driving these distortions and what compensatory strategies, if any, are effective in reducing them. 
 
Table 1.
 
Participant Characteristics
Subject Sex Age (years) Acuity (logMAR) Contrast Sensitivity (logCS) FOV (degrees) Islands Real-World Task
1 M 51 0.34 0.95 5 No No
2 F 54 0.32 0.35 9 No Yes
3 M 52 −0.06 1.75 10 No Yes
4 F 46 0 1.75 18 Eccentricity: ∼45°–90°; radial: ∼40° above or below horizontal meridian No
5 F 54 0.14 1.50 20 No No
6 F 55 0.16 1.40 20 No Yes
7 M 57 0.16 1.65 20 No Yes
8 F 57 0.56 0.85 20 No Yes
9 F 65 0.22 1.65 30 Eccentricity: ∼45°–70°; radial: ∼90° above or below horizontal meridian No
Figure 1.
 
Environments used in this experiment. (a) Top-down view of the virtual environment with the statues present. The starting position is the bull’s eye, which was visible throughout the experiment; the statues have been labeled with numbers here for clarity. Overlaid on the starting position is the coordinate frame used for analyses. The starting orientation was along the y-axis. (b) First-person view of the virtual environment as seen by the participants. (c) First-person view of the environment used for the real-world task.
Figure 2.
 
Virtual environment behavioral data. Mean placement error in meters as a function of FOV. The mean placement errors for the simulated PFL participants measured in the previous study 12 are shown as the gray circles and solid gray line. The region specifying the 95% confidence interval for these means is shown as the hatched area. Individual means for the RP participants tested in this study are shown as black diamonds for the participants without annuli and as black squares for the participants with annuli. FOV is measured in degrees of visual angle and represents the average diameter of the participant’s visual field extent for the RP participants with real PFL.
Figure 3.
 
Virtual environment behavioral data. The same symbols from Figure 2 are used here. The region specifying the 95% confidence interval for the means of the simulated PFL participants is shown as the hatched area. (a) Mean signed distance error in meters as a function of FOV. (b) Mean absolute distance error in meters as a function of FOV. (c) Mean signed angular offset in degrees as a function of FOV. (d) Mean absolute angular offset in degrees as a function of FOV.
Figure 4.
 
Virtual environment eye movement data. The same symbols from Figure 2 are used here. The region specifying the 95% confidence interval for the means of the simulated PFL participants is shown as the hatched area. (a) Mean fixation duration in seconds as a function of FOV during the learning phase. (b) Mean fixation duration in seconds as a function of FOV during the testing phase. (c) Mean saccadic amplitude in degrees of visual angle as a function of FOV during the learning phase. (d) Mean saccadic amplitude in degrees of visual angle as a function of FOV during the testing phase.
Figure 5.
 
Virtual environment gaze strategies. The same symbols from Figure 2 are used here. The region specifying the 95% confidence interval for the means of the simulated PFL participants is shown as the hatched area. The panels show the mean proportion of fixations made to the statues, ground, walls, and columns as a function of FOV. (a) Mean proportions of fixations for each category during the learning phase. (b) Mean proportions of fixations for each category during the testing phase.
Figure 6.
 
Real-world validation task. (a) Mean placement errors in meters as a function of FOV. (b) Mean signed distance errors in meters as a function of FOV. (c) Mean signed angular offset in degrees as a function of FOV. The black squares represent the means of the eight participants who returned for this study, and the gray circles represent the means of the simulated PFL participants from the previous experiment. 12 FOV is in degrees of visual angle, and the error bars are ±1 SE.
The authors thank Judy Hao for help with data collection and analyses. 
References
1. Turano KA, Geruschat DR, Stahl JW, Massof RW. Perceived visual ability for independent mobility in persons with retinitis pigmentosa. Invest Ophthalmol Vis Sci. 1999;40:865–877.
2. Haymes S, Guest D, Heyes A, Johnston A. Mobility of persons with retinitis pigmentosa as a function of vision and psychological variables. Optom Vis Sci. 1996;73:621–637.
3. Kuyk T, Elliott JL, Fuhr PS. Visual correlates of obstacle avoidance in adults with low vision. Optom Vis Sci. 1998;75:174–182.
4. Long RG, Rieser JJ, Hill EW. Mobility in individuals with moderate visual impairments. J Vis Impairment Blindness. 1990;84:111–118.
5. Marron JA, Bailey IL. Visual factors and orientation-mobility performance. Am J Optom Physiol Opt. 1982;59:413–426.
6. Hollands MA, Marple-Horvat DE. Coordination of eye and leg movements during visually guided stepping. J Motor Behav. 2001;33:205–216.
7. Hollands MA, Patla AE, Vickers JN. “Look where you’re going!”: gaze behavior associated with maintaining and changing the direction of locomotion. Exp Brain Res. 2002;143:221–230.
8. Land MF, Hayhoe MM. In what ways do eye movements contribute to everyday activities? Vis Res. 2001;41:3559–3565.
9. Temme LA, Maino JH, Noell WK. Eccentricity perception in the periphery of normal observers and those with retinitis pigmentosa. Am J Optom Physiol Opt. 1985;62:736–743.
10. Turano KA, Schuchard RA. Space perception in observers with visual field loss. Clin Vis Sci. 1991;6:289–299.
11. Turano KA. Bisection judgments in patients with retinitis pigmentosa. Clin Vis Sci. 1991;6:119–130.
12. Fortenbaugh FC, Hicks JC, Hao L, Turano KA. Losing sight of the bigger picture: peripheral field loss compresses representations of space. Vis Res. 2007;47:2506–2520.
13. Ferris F, Kassoff A, Bresnick G, Bailey I. New visual acuity charts for clinical research. Am J Ophthalmol. 1982;94:91–96.
14. Bailey I, Bullimore M, Raasch T, Taylor H. Clinical grading and the effects of scaling. Invest Ophthalmol Vis Sci. 1991;32:422–432.
15. Pelli D, Robson J, Wilkens A. The design of a new letter chart for measuring contrast sensitivity. Clin Vis Sci. 1988;2:187–199.
16. Coeckelbergh TRM, Cornelissen FW, Brouwer WH, Kooijman AC. The effect of visual field defects on eye movements and practical fitness to drive. Vis Res. 2002;42:669–677.
17. Cornelissen FW, Bruin KJ, Kooijman AC. The influence of artificial scotomas on eye movements during visual search. Optom Vis Sci. 2005;82:27–35.
18. Turano KA, Yu D, Hao L, Hicks JC. Optic-flow and egocentric-direction strategies in walking: central vs peripheral visual field. Vis Res. 2005;45:3117–3132.
19. Vargas-Martin F, Peli E. Eye movements of patients with tunnel vision while walking. Invest Ophthalmol Vis Sci. 2005;47:5295–5302.