Indoor Spatial Updating With Impaired Vision
Gordon E. Legge, Christina Granquist, Yihwa Baek, Rachel Gage
Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
Correspondence: Gordon E. Legge, Department of Psychology, University of Minnesota, 75 E River Road, Minneapolis, MN 55455, USA; [email protected]
Invest Ophthalmol Vis Sci. 2016;57(15):6757–6765. https://doi.org/10.1167/iovs.16-20226
Abstract

Purpose: Spatial updating is the ability to keep track of position and orientation while moving through an environment. We asked how normally sighted and visually impaired subjects compare in spatial updating and in estimating room dimensions.

Methods: Groups of 32 normally sighted, 16 low-vision, and 16 blind subjects estimated the dimensions of six rectangular rooms. Updating was assessed by guiding the subjects along three-segment paths in the rooms. At the end of each path, they estimated the distance and direction to the starting location, and to a designated target. Spatial updating was tested in five conditions ranging from free viewing to full auditory and visual deprivation.

Results: The normally sighted and low-vision groups did not differ in their accuracy for judging room dimensions. Correlations between estimated size and physical size were high. Accuracy of low-vision performance was not correlated with acuity, contrast sensitivity, or field status. Accuracy was lower for the blind subjects. The three groups were very similar in spatial-updating performance, and exhibited only weak dependence on the nature of the viewing conditions.

Conclusions: People with a wide range of low-vision conditions are able to judge room dimensions as accurately as people with normal vision. Blind subjects have difficulty in judging the dimensions of quiet rooms, but some information is available from echolocation. Vision status has little impact on performance in simple spatial updating; proprioceptive and vestibular cues are sufficient.

Spatial updating refers to the ability to keep track of one's position and orientation while moving through an environment.1 The ability to judge the dimensions of an indoor space is also important for safe and effective mobility. How do people with impaired vision compare with normally sighted subjects in spatial updating and in judging the sizes of rooms? 
Gallistel2 defined a cognitive map as “a record in the central nervous system of macroscopic geometric relations among surfaces in the environment used to plan movements through the environment” (p. 103). Many studies have revealed the salience and importance of the geometry of rectangular spaces in establishing orientation in animals (rats, birds, fish, monkeys) and humans.3 It has even been debated whether there exists a special brain module for encoding the geometry of navigable spaces.2–7 In this context, identifying and encoding the boundaries of an indoor space, such as a room, may be important for establishing a frame of reference for maintaining orientation. Kelly et al.8 showed that spatial updating with respect to a target location was better in rooms with straight walls and visible corners than in a circular room with no visible corners. Both animals and humans also make use of nongeometric features (landmarks) in establishing and maintaining orientation within a space. For example, Kalia et al.9 showed that subjects with low vision make use of nongeometric visual features in learning the corridor layout of a building.
There is a growing body of evidence that spatial updating and other aspects of spatial navigation differ little between sighted and visually impaired subjects when testing conditions are equivalent.10–12 But it is possible that access to visual landmarks or a frame of reference provided by room shape might reveal differences in updating performance between blind, low-vision, and normally sighted subjects.
In our previous study,13 normally sighted subjects estimated room dimensions with and without visual restrictions. With unrestricted vision, mean errors in judging room dimensions were near 20%. Severe blur (Snellen 20/900) but not mild blur (20/135) yielded larger errors in dimension judgments. A narrow field (8°) was associated with increased error, but less than with severe blur. To test spatial updating, the subjects walked along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. There was no effect of visual restriction on estimates of distance back to the starting location, and only severe blur yielded larger errors in the direction estimates. If the results from this study generalize to people with low vision, severe deficits in acuity or field might adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation would be less affected by visual impairment. 
In the present article, we report on spatial updating and room-size estimates from groups of blind and low-vision subjects. We compare their performance to that of the normally sighted group from our previous study.13 
This study is part of a multidisciplinary project aimed at understanding and enhancing visual accessibility for people with impaired vision. Visual accessibility is the use of vision to travel efficiently and safely through an environment, to perceive the spatial layout of key features in the environment, and to keep track of one's location in the environment. In previous studies, we focused on factors affecting the visibility of local features present in indoor spaces, such as steps, ramps, and geometrically simple convex objects.14–17 These studies involved both normally sighted subjects with artificial acuity reduction and low-vision subjects. In the current study, we have focused on perception of large-scale properties of indoor spaces—specifically, the ability to judge the size of a space and the ability to update one's location within the space.
Methods
Participants
The normally sighted group comprised 32 college students (20 female, 12 male). Mean acuity (Early Treatment Diabetic Retinopathy Study [ETDRS] chart) was −0.10 logMAR, and mean contrast sensitivity (Pelli-Robson chart) was 1.98. The data for one subject were excluded because the distance responses were unrealistic, implying that the subject misunderstood the task. Room-estimate data were unavailable for one other subject because of time constraints. As a result, the room-size data are based on 30 subjects and the spatial-updating data on 31 subjects.
Groups of 16 blind (mean age 53.5 years) and 16 low-vision (mean age 52.9 years) subjects participated. They were selected based on their self-reported ability to navigate independently. For purposes of this study, “low vision” was defined functionally; it refers to subjects who self-identified as having low vision with known deficits in acuity and/or field, and who had some useful vision for detecting large architectural features such as doorways and intersections. “Blind” subjects were those who relied entirely on nonvisual cues for mobility. 
For the low-vision group, we recruited subjects with a wide range of acuities and several with peripheral field loss because our previous study13 indicated that severe blur and severe field restriction affected task performance for subjects with normal vision. Field loss was classified into three categories, based on self-report—central (C), peripheral (P), and none (N). Tables 1 and 2 list characteristics of the visually impaired subjects. 
Table 1. Subject Characteristics: Low Vision
Table 2. Subject Characteristics: Blind
The experiment required one session lasting 1 to 2 hours. Written informed consent was obtained with procedures approved by the University of Minnesota's Institutional Review Board. This research complied with the tenets of the Declaration of Helsinki. 
Test Spaces and Conditions
Testing took place in seven rectangular rooms (Fig. 1) in a campus building. The subjects were tested in six of these rooms, usually A through F in Figure 1. A seventh room (G) was used when room C became unavailable. 
Figure 1. Photos from the seven rooms. The dimensions of the rooms were (Door Side × Non-Door Side in feet) (A) 7.6 × 15.2, (B) 16.2 × 20.0, (C) 19.9 × 22.1, (D) 32.7 × 16.6, (E) 33.2 × 16.6, (F) 27.1 × 23.7, (G) 17.4 × 19.0.
Figure 2. Schematic diagram illustrating the three-segment path. An experimenter guided the subject along a three-segment path beginning at the doorway (starting location). At the first waypoint, the subject dropped a beanbag (target). At the end of the path, the subject made judgments about the distance and direction to the starting location and the target. (Adapted from Fig. 2 in Legge et al.13)
All the rooms had overhead fluorescent lights, and three had windows with natural daylight. Three rooms had carpeted floors, and the other four had light-colored linoleum flooring. 
Standing at the doorway with free viewing, subjects estimated the length and width of each room in feet or meters. We used the term “Door Side” to refer to the length of the side containing the entrance door, and “Non-Door Side” to refer to the orthogonal dimension.
In the spatial-updating trials, the experimenter led the subject along the path with a 2-ft wooden rod (Fig. 2). Path segments were approximately 3, 6, or 9 ft long (range from 2 ft 10 in to 8 ft 10 in). None of the turning angles were 90°; most were between 20° and 60° (range from 3° to 69°). The subject dropped a target (beanbag) at the end of the first segment. 
Updating trials were conducted with the following five viewing conditions varying in the availability of visual and auditory input. 
Control.
Subjects made spatial-updating judgments with no visual constraints; that is, they were allowed to look back at the starting point and target location. The purpose of the control condition was to estimate baseline performance levels without any visual or auditory restrictions. For the blind subjects, the control condition was similar to the forward facing and auditory conditions. 
Forward Facing.
Subjects used their normal, binocular vision. They were not permitted to look back at the starting point or target location. This forced them to use spatial-updating information gathered along the path. 
Preview.
The subject observed the space, with both vision and hearing, from the doorway for 10 seconds. Visual and auditory cues were then blocked during the guided walk along the path. Subjects with vision were blindfolded, and all subjects wore noise-reducing earmuffs with earphones playing auditory white noise to mask acoustic cues. The preview condition was included to determine whether visual or auditory imagery, gleaned during the preview, would facilitate spatial updating. 
Auditory.
The normally sighted and low-vision subjects were blindfolded and had no visual exposure to the room before or during the updating trials. 
Deprivation.
Subjects with vision were blindfolded, and all subjects wore noise-reducing earmuffs with earphones as described above. The goal was to test spatial updating in the absence of visual and auditory cues. 
After reaching the end of the three-segment path, the subject made four spatial-updating judgments—distance and direction to the starting location (doorway), and distance and direction to the target (beanbag). Subjects reported directions using a version of the verbal pointing method, which has been shown to yield more accurate and less biased estimates than physical pointing.18 In brief, the subject first reported the quadrant (front left, front right, back left, or back right), and then the number of degrees away from the front–back axis. 
For both types of updating responses, our performance measure was absolute error, computed as the unsigned difference between the physically measured distance or direction and the subject's response. For example, if the distance to the starting location was 20 ft and the subject estimated either 24 ft or 16 ft, the absolute error was scored as 4 ft. 
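For concreteness, the direction report can be decoded into a signed bearing and both response types scored as unsigned error. The following Python sketch only illustrates this scoring scheme; the function names and sign convention are ours, not from the study.

```python
# Illustrative scoring sketch (not the authors' code).

def verbal_to_bearing(quadrant: str, degrees: float) -> float:
    """Convert a verbal-pointing report (quadrant plus degrees away from
    the front-back axis) into a signed bearing: degrees clockwise from
    straight ahead, in the range -180 to 180."""
    bearings = {
        "front right": degrees,
        "front left": -degrees,
        "back right": 180.0 - degrees,
        "back left": -(180.0 - degrees),
    }
    return bearings[quadrant]

def absolute_error(measured: float, response: float) -> float:
    """Unsigned difference between the measured value and the response."""
    return abs(measured - response)

def angular_error(measured_deg: float, response_deg: float) -> float:
    """Unsigned angular difference, wrapped into 0 to 180 degrees."""
    d = abs(measured_deg - response_deg) % 360.0
    return min(d, 360.0 - d)

# Worked example from the text: a 20-ft starting distance estimated as
# either 24 ft or 16 ft scores an absolute error of 4 ft.
assert absolute_error(20.0, 24.0) == 4.0
assert absolute_error(20.0, 16.0) == 4.0
```

Wrapping direction errors into the 0° to 180° range ensures that, for example, bearings of +170° and −170° differ by 20° rather than 340°.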
Procedure
Subjects were familiarized with the distance and direction responses in practice trials. 
Estimates of room dimensions were made while the subject stood at the doorway, during room visits separate from the updating trials. Seven of the blind subjects actively probed the space with sounds—tongue clicks, claps, or finger snaps. None of the low-vision subjects used these strategies.
The subject began each updating trial at the doorway of the space, facing directly into the space, holding the rod in the left hand and the beanbag in the right hand. The experimenter always guided the subject along the route with the rod, even for subjects with vision in the control and forward facing trials. At the end of the path, the subject remained facing forward (except in the control condition where free viewing was allowed) while giving the four updating responses. 
All subjects performed five control updating trials in one room. The protocol differed between the normally sighted and the two visually impaired groups for the other four visual/auditory conditions. Each normally sighted subject was tested once in each of the four conditions so that data were available for 32 trials for each testing condition (31 trials after exclusion of the data from one subject as described earlier). The blind and low-vision subjects were tested twice in each of the four visual–auditory restriction conditions. Because there were 16 subjects in each of these groups, data were available from 32 trials for each group in each condition. 
For each group separately, we conducted a within-subject 1-way ANOVA to examine the effect of the viewing conditions on absolute error in the distance or direction estimates. For the blind group, data from the auditory and forward facing conditions were merged because they were functionally equivalent. 
To compare updating performance between the three groups for each visual condition, we used a linear mixed-effects (LME) model because we had an unbalanced design. ANOVA results were derived from the LME models for repeated measures with group as a fixed effect and subject as a random effect. 
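As an illustration of this analysis, a random-intercept model of the kind described above can be fit with the statsmodels formula API in Python. This is a sketch under assumed variable names and simulated data, not the authors' analysis code.

```python
# Sketch: absolute error modeled with group as a fixed effect and
# subject as a random effect (random intercept per subject).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for group in ["sighted", "low_vision", "blind"]:   # hypothetical labels
    for s in range(8):                             # hypothetical subjects
        for trial in range(4):                     # repeated measures
            rows.append({"group": group,
                         "subject": f"{group}_{s}",
                         "abs_error": rng.normal(4.0, 1.0)})
df = pd.DataFrame(rows)

model = smf.mixedlm("abs_error ~ group", data=df, groups=df["subject"])
print(model.fit().summary())   # tests on the group coefficients
```

A random intercept per subject accommodates the unbalanced repeated-measures design (one trial per condition for the normally sighted group, two for each visually impaired group).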
Results
Estimating Room Dimensions
Each subject estimated the length and width of six rooms. For each subject, we fitted linear regression models relating estimates of length to physical length, and compiled individual slopes and intercepts of the regression lines, as well as the Pearson's correlation coefficients. We focus here primarily on the Door Side estimates because the range of room sizes was greater for the Door Side (7.63–33.25 ft) than for the Non-Door Side (15.25–23.75 ft). 
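The per-subject fits are ordinary least-squares regressions of estimated length on physical length. A minimal sketch, using the Door Side lengths of rooms A through F from Figure 1 and hypothetical estimates:

```python
# Sketch of one subject's regression (the estimates are hypothetical).
from scipy.stats import linregress

physical = [7.6, 16.2, 19.9, 32.7, 33.2, 27.1]   # Door Side lengths, ft
estimates = [9.0, 15.0, 22.0, 30.0, 35.0, 25.0]  # one subject's responses

fit = linregress(physical, estimates)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f} ft, "
      f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```

A slope of 1 and an intercept of 0 ft would indicate veridical estimates; the Pearson r indexes how consistently a subject ordered the rooms by size.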
The mean correlation coefficients for the Door Side, averaged across subjects within the groups, were high and similar for the normally sighted and low-vision groups (0.92 and 0.88). The mean correlation coefficient was lower for the blind group (0.53) but still significantly greater than 0.
We computed the ratios of subjects' estimates of the room dimensions to the physical lengths. Ratios greater than 1.0 represent overestimates of room size, and values less than 1.0 represent underestimates. For the Door Side, the mean ratios for the three groups were normally sighted 1.10 (significantly greater than 1.0; t[178] = 4.383, P < 0.0001); low vision 0.95 (not significantly different from 1.0); and blind 0.70 (significantly less than 1.0, t[95] = −7.55, P < 0.0001). 
We also computed Weber fractions for room-size estimates, defined as the ratio of absolute (unsigned) error in the size estimates to physical size. For the low-vision group, the mean Weber fractions across rooms were 0.23 for the Door Side and 0.22 for the Non-Door Side. For the normally sighted group, the Weber fractions were 0.25 for the Door Side and 0.18 for the Non-Door Side. The mean Weber fractions did not differ significantly between the normally sighted and low-vision groups.
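Both summary measures follow directly from their definitions. A short sketch with the same hypothetical data as above, including the one-sample t-test against 1.0 used for the bias analysis:

```python
# Sketch: size-estimate ratios and Weber fractions (hypothetical data).
import numpy as np
from scipy.stats import ttest_1samp

physical = np.array([7.6, 16.2, 19.9, 32.7, 33.2, 27.1])   # ft
estimates = np.array([9.0, 15.0, 22.0, 30.0, 35.0, 25.0])  # hypothetical

ratios = estimates / physical       # > 1 overestimate, < 1 underestimate
weber = np.abs(estimates - physical) / physical

t, p = ttest_1samp(ratios, popmean=1.0)   # test for systematic bias
print(f"mean ratio = {ratios.mean():.2f} (t = {t:.2f}, p = {p:.3f}); "
      f"mean Weber fraction = {weber.mean():.2f}")
```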
Figure 3 shows separate panels for the 16 low-vision subjects, arranged in order of logMAR acuity from best (0.1 at the top left) to worst (1.96 at the bottom right). The letters P, C, and N refer to field loss—peripheral, central, or none. Each panel shows the subject's Door Side length estimates as a function of the physical length, along with a linear regression line and a diagonal equality line. Table 3 lists the corresponding slope, intercept, and Pearson's correlation coefficient for each subject. The correlation coefficients range from 0.62 (low-vision subject LV15) to 0.99 (LV10), with a mean of 0.88. The mean slope of the regression lines is 0.85, a little lower than the veridical value of 1.0 (t[18] = −2.03, P = 0.06). The mean intercept of 1.75 ft does not differ significantly from 0. Remarkably, none of these parameters of the individual data—slope, intercept, or Pearson's r—correlates significantly with the logMAR acuity or contrast sensitivity of the low-vision subjects.
Figure 3. Estimates of room-size length (Door Side) for 16 low-vision subjects. Each panel of the figure shows the subject's estimate of the Door Side length as a function of the physical length. The panels are arranged in order of increasing logMAR acuity from top left (logMAR 0.1) to lower right (logMAR 1.96). The single letters in each panel represent field loss categorized as central (C), peripheral (P), or none (N). Red lines are linear regression fits (with parameters in Table 3), and black diagonal lines represent perfect performance.
Table 3. Linear Regression Fits (Slope, Intercept, and Correlation Coefficients) for Estimates of Door Side Length as a Function of Physical Length in Low-Vision Subjects
For the 30 subjects in the normally sighted group, the corresponding means were slope = 1.02; intercept = 1.70 ft; and Pearson's r = 0.92. None of these values differed significantly between the normally sighted and low-vision groups. 
We also tested for differences in regression slope, intercept, and correlation coefficients for the three subgroups of low-vision subjects defined by field status. These comparisons may not generalize beyond our study, given the small subgroups—nine with peripheral loss, three with central loss, and four with no field loss. For the Door Side data, 1-way ANOVA tests revealed no significant difference in regression slope or correlation coefficients but a significant difference in intercepts (F[2,13] = 4.46, P < 0.05). The difference in intercepts was due to the relatively high mean value of 5.1 ft for the central-loss group. 
These results imply that people with a wide range of low-vision conditions are able to estimate room sizes as accurately as normally sighted subjects. 
Figure 4 and Table 4 show corresponding data for the 16 blind subjects. All of the regression lines, with the exception of that for subject B17, have slopes substantially less than 1, representing compression of the range of estimates and, in many cases, substantial underestimates. Only for 4 of the 16 subjects (B8, B10, B12, and B17) are the correlation coefficients significantly greater than 0 (P < 0.05). By comparison, 14 of the 16 subjects in the low-vision group and 28 of the 30 in the normally sighted group had correlation coefficients significantly greater than 0.
Figure 4. Estimates of room-size length (Door Side) for 16 blind subjects. Each panel in the figure shows the subject's estimate of the Door Side length as a function of the physical length. Red lines are linear regression fits (with parameters in Table 4), and black diagonal lines represent perfect performance.
Table 4. Linear Regression Fits (Slope, Intercept, and Correlation Coefficients) for Estimates of Door Side Length as a Function of Physical Length in Blind Subjects
Updating Performance
After walking along a three-segment path, the subjects estimated the distance and direction to the starting location (doorway) and the target (a beanbag dropped at the first turn). Figure 5 shows mean absolute errors and confidence intervals for the three groups in the five auditory/visual conditions. 
Figure 5. Errors in estimating distance and direction to the starting location and the target for three groups (normally sighted, low vision, and blind) for five viewing conditions. The bars represent group mean absolute errors (with 95% confidence intervals) in feet for the starting distance (upper left), for the target distance (upper right), and in degrees for the starting direction (lower left) and target direction (lower right).
Data in the control condition provide baseline performance with unrestricted viewing. For the normally sighted group, the mean absolute error for estimating the starting distance in the control condition was 3.28 ft. When expressed as a fraction of the physical distance (Weber fraction), the mean was 0.22. This value is in agreement with the Weber fractions for room-size estimates discussed above, indicating similar precision in the two cases. A portion of this error is attributable to a bias to underestimate the starting distance. Averaged across all trials in the control condition, the mean ratio of estimated distance to physical distance was 0.85, which differed significantly (t[154] = −7.51, P < 0.0001) from a value of 1.0 representing unbiased estimates. This underestimation bias is quantitatively similar to results from other studies using verbal estimates of distance.19 The low-vision group exhibited very similar performance in the control condition for the starting distance, with a mean absolute error of 3.41 ft, a mean Weber fraction of 0.23, and a bias to underestimate the distance with a mean ratio of 0.81. The corresponding errors for the blind group were somewhat larger: mean absolute error of 5.49 ft, a mean Weber fraction of 0.37, and an underestimation bias with a mean ratio of 0.72. In short, all three groups exhibited an underestimation bias for distance back to the starting location. 
In contrast, the estimates of direction back to the starting location in the control trials did not exhibit any systematic bias (analysis of signed errors showed no significant differences from 0). The mean absolute errors for the three groups were normally sighted 26.5°, low vision 27.8°, and blind 36.4°. These values are larger than the values near 5° reported by Philbeck et al.,18 who pioneered the verbal-pointing method we used. Their data were obtained under conditions likely to lead to smaller errors—stationary subjects with more precise control over facing direction, estimating distances to small, localized targets on a nearby table, with potentially useful visual cues to direction in the background beyond the target. Their 5° directional accuracy may represent a lower bound on errors for directional judgments.
Next, we will discuss the effects of the five viewing conditions and then compare performance across the three groups. 
The effects of the viewing conditions were small for updating with respect to the starting location. For the starting distance, F-tests revealed that there was no significant effect of viewing condition for any of the three groups. For the starting direction, neither the normally sighted group nor the blind group showed an effect of viewing condition. But there was a significant effect for the low-vision group (F[4,60] = 3.233, P < 0.05), with t-tests revealing that the absolute errors in the control condition (27.8°) and the forward facing condition (31.4°) were significantly lower than in the deprivation condition (49.9°). 
The effects of viewing condition were more prominent for updating with respect to the target. For target distance, the low-vision group showed no significant effect of viewing condition. But there was a significant effect for the normally sighted group (F[4,104] = 7.691, P < 0.0001), with t-tests revealing that the absolute error in the control condition (2.61 ft) was significantly lower than in the auditory condition (3.73 ft) and the deprivation condition (4.23 ft). The blind group also exhibited a significant effect of condition on target distance (F[3,45] = 4.229, P < 0.05), with the absolute error in the control condition (3.23 ft) being smaller than in the deprivation condition (4.27 ft). 
For the target direction, the normally sighted group exhibited a significant effect of viewing condition (F[4,104] = 5.393, P < 0.001), with t-tests revealing that the error in the control condition (25.4°) was lower than in the preview (37.0°), auditory (38.8°), and deprivation (45.7°) conditions. There was also an effect of viewing condition on target direction for the low-vision group (F[4,60] = 3.048, P < 0.05), with the error in the control condition (28.1°) being significantly lower than in the auditory (43.9°) and deprivation (45.6°) conditions. The blind group did not show an effect of viewing condition on target direction. 
When effects of viewing condition occurred, they most often reflected poorer performance in the deprivation condition and in updating to the beanbag target.
Next, we compare the updating performance between the three groups. We conducted separate ANOVAs (see Methods) for the five viewing conditions and the four performance measures depicted in Figure 5. Only 2 of the 20 tests yielded significant effects, both associated with the control condition. The significant ANOVA results were (1) control condition for starting distance (F[2,60] = 4.73, P < 0.05), with group mean absolute errors of normally sighted (3.28 ft), low vision (3.41 ft), and blind (5.49 ft); and (2) control condition for target direction (F[2,60] = 4.67, P < 0.05), with group mean absolute errors in direction of normally sighted (25.4°), low vision (28.2°), and blind (36.3°). It is not surprising that the normally sighted and low-vision groups had consistently smaller errors in the control trials than the blind subjects. In these free-viewing trials, subjects with vision could look back at the starting or target location, an advantage unavailable to the blind subjects.
For the remaining conditions, in which none of the subjects had direct viewing of the starting or target locations, no statistically significant group differences were observed. Although not statistically significant, the distance errors of the blind group were numerically larger than those of the other two groups. This difference was less evident for the direction errors.
The overall pattern of updating results indicates that when subjects were not permitted to look directly back at reference landmarks, any differences in spatial-updating performance between the normally sighted, low-vision, and blind groups were small. 
We note that our normally sighted group was younger on average than our two visually impaired groups. Previous research has shown that aging can affect indoor wayfinding, for example, in the use of geometric versus nongeometric landmarks9 or the use of vestibular cues for spatial updating.20 The lack of major group differences in our updating data implies that the age differences among our groups did not play an important role. Consistent with this, we found no significant correlations between age and the absolute errors in the four updating responses in the control condition for our blind and low-vision groups. Correlations between absolute errors and the number of years since onset of visual impairment were also low and nonsignificant, with one exception: a significant correlation of 0.32 (P < 0.005) for the blind group when judging the starting distance.
Discussion
The goals of this study were to assess the impact of impaired vision on the ability to judge room dimensions in a building and to keep track of location within the room while moving through the space. 
Room Dimension Estimates
We found that our low-vision subjects performed as well as our normally sighted subjects in estimating room dimensions. For both groups, the mean ratios of estimated size to physical size were close to the value of 1.0 representing no systematic error, Weber fractions were close to 20%, and individual subjects exhibited dimension estimates that were highly correlated with physical lengths. 
The acuities of our low-vision subjects ranged from logMAR 0.1 to 1.96 (Snellen 20/25–20/1824). The good performance of these subjects is consistent with previous findings that artificial blur (in the range 20/500–20/800) had little impact on the perceived distance of visible objects for normally sighted subjects.21,22 But in our previous study,13 we found that normally sighted subjects with severe blur (logMAR 1.65), but not mild blur (logMAR 0.83), exhibited larger errors in room-size judgments. Only two of our low-vision subjects had acuities similar to or worse than the “severe blur”—LV05 (logMAR 1.62) and LV17 (logMAR 1.96). These two subjects had regression slopes and correlation coefficients (Table 3) that were below the group means. If we had recruited more subjects with acuities poorer than logMAR 1.6, we might have observed increased errors in room-size judgments. 
Our previous study indicated that normally sighted subjects with fields artificially restricted to 8° diameter exhibited slightly larger errors in estimating room size. Although we did not have quantitative measures of field size for our low-vision subjects, at least three of them—LV09, LV10, and LV12—were classified as low vision due to restricted fields, with diameters less than 20°. It is evident from Figure 3 that these three subjects performed well in estimating room size. It may be the case that none of our subjects had fields of 8° or less, and that increased errors in room-size judgments occur only for the severest of field restrictions. 
Several studies have found deficits in obstacle-avoidance tasks correlated with the extent of field loss in low vision.23–25 In an obstacle-avoidance task with artificial field restriction, normally sighted subjects showed performance deficits for restricted fields from roughly 10° to 30° for low-, medium-, and high-contrast conditions.26 These studies make clear that field restriction poses problems for navigating through cluttered spaces, but they do not speak directly to the impact of field restriction on distance perception. Our study indicates that field loss does not compromise the ability to judge the dimensions of spaces.
What cues might our low-vision subjects have used for judging room size? Low vision would limit the use of cues that rely on high spatial frequencies such as texture gradients. A more reliable cue was likely the angle of declination between the line of sight and the wall–floor boundary.13,21 In most of the spaces, the contrast between the wall and the floor was high due to differences in surface materials or to differences in illumination from windows or overhead lights. 
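The declination cue can be made concrete: under the assumption of a level floor and a known eye height, the distance to the wall–floor boundary follows from simple trigonometry. A minimal sketch (the numbers are illustrative, not from the study):

```python
# Sketch: distance from the angle-of-declination cue. Assuming a level
# floor and eye height h, a wall-floor boundary seen at an angle theta
# below eye level lies at distance d = h / tan(theta).
import math

def distance_from_declination(eye_height_ft: float,
                              declination_deg: float) -> float:
    return eye_height_ft / math.tan(math.radians(declination_deg))

# Example: with a 5.2-ft eye height, a boundary seen 10 deg below eye
# level implies a wall about 29.5 ft away.
print(f"{distance_from_declination(5.2, 10.0):.1f} ft")
```

Because this cue depends on locating a single high-contrast boundary rather than resolving fine detail, it remains usable at low spatial resolution.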
Most of our blind subjects had difficulty judging room size, often underestimating size; 74 of the 96 data points lie below the equality line in Figure 4. Several of the regression lines are nearly flat with vertical intercepts near 10 ft, likely representing guessing on the part of the subjects. Only 4 out of 16 subjects exhibited significant correlations between their Door Side estimates and the physical length. 
What acoustic cues might be useful in judging room size? The perceived distance of sound sources in a room, such as a human speaking from a podium, is useful in placing a lower bound on room size.27 In our study, the rooms were quiet, and there were no consistent sound sources to aid in size judgments. The only external sound source was the voice of the experimenter, who always stood near the subjects. 
Anyone who has experienced an anechoic room notices that it sounds different from most other rooms. There is substantial evidence that echolocation can be used by humans for judging object size, distance, shape, and surface material.28 Our seven rooms (Fig. 1) varied in acoustic properties, due to differences in floor and wall materials and the presence and distribution of furniture. Information from echolocation decreases with object distance, but can be used out to 2 to 7 m for object detection or depth discrimination.29,30 It is likely that echolocation provided cues to room size in our study. Three of the four subjects whose Door Side estimates correlated significantly with physical size used an active form of echolocation. Subject B10 intentionally wore heavy boots and stomped the floor prior to her estimates. Subject B12 spoke into the room and sometimes clapped. Subject B17 used finger snaps and claps. Subject B8 did not use any explicit echolocation strategies. Four of the 12 subjects who did not exhibit significant correlations also generated sounds for echolocation (B03, B11, B14, and B16). Their correlations ranged from 0.50 to 0.71, and they may have received some useful information about room size.
The bottom line is that even very low vision is sufficient for estimating room size, but auditory cues are not reliable in spaces lacking sound sources for reference. 
Spatial Updating
Loomis et al.1 described two different methods for spatial updating. Piloting relies on reference to external visual or auditory landmarks for spatial updating. Path integration depends on proprioceptive, vestibular, and optic or acoustic flow information about self-motion for updating and might be less dependent on visual or auditory input. 
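Path integration amounts to dead reckoning: accumulate the heading changes, sum the displacement vectors of the segments, and invert the net displacement to recover the distance and direction back to the start. The sketch below works through this computation for a three-segment path; the segment lengths and turn angles are hypothetical values within the ranges given in Methods.

```python
# Sketch: ideal path integration for the three-segment task.
import math

def home_vector(segments_ft, turns_deg):
    """segments_ft: length of each segment, in feet.
    turns_deg: turn preceding each segment after the first
    (positive = turn to the right).
    Returns (distance to start in ft, direction to start in degrees
    relative to the final heading, positive = to the right)."""
    heading = 0.0   # degrees; 0 = facing straight into the room
    x = y = 0.0
    for i, seg in enumerate(segments_ft):
        if i > 0:
            heading += turns_deg[i - 1]
        x += seg * math.sin(math.radians(heading))
        y += seg * math.cos(math.radians(heading))
    distance = math.hypot(x, y)
    # Bearing of the start point (-x, -y), expressed relative to the
    # final heading and wrapped into -180..180.
    bearing = math.degrees(math.atan2(-x, -y)) - heading
    bearing = (bearing + 180.0) % 360.0 - 180.0
    return distance, bearing

dist, bearing = home_vector([3.0, 6.0, 9.0], [40.0, -55.0])
print(f"start is {dist:.1f} ft away, {bearing:+.1f} deg from final heading")
```

Proprioceptive and vestibular signals can, in principle, supply the segment lengths and turn angles in this computation without any visual or auditory input, consistent with the weak condition effects reported above.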
If subjects required visual landmarks for spatial updating, we anticipated that the vision status of our three groups would affect the accuracy of their spatial-updating judgments. This was not the case. Apart from the control condition, in which subjects with vision could look directly back at the starting and target locations, there were no major differences between the three groups. Previous research has also shown that vision is not necessary for path integration.1 Our findings contribute to the growing body of evidence that spatial updating and other aspects of spatial navigation differ little between sighted and visually impaired subjects when testing conditions are equivalent.10–12 Our results indicate further that even with the benefit of access to visual landmarks, there is little difference in spatial updating between normally sighted, low-vision, and blind groups. Our results are also consistent with the proposal that spatial representations used in updating are “amodal” and can be abstracted from distinct sensory channels (visual, auditory, proprioceptive) or even from language descriptions.7,11,12,31,32
If auditory cues were useful for spatial updating, we anticipated that performance in the deprivation condition (no visual or auditory cues) would be worse than in the auditory condition (auditory cues only). But we found no significant differences in updating performance between these two conditions for any of our three groups. 
The lack of dependence on visual or auditory input implies that nonvisual body-centered cues were sufficient for spatial updating. Vestibular and proprioceptive cues during movement can be useful for path integration33 and sometimes take precedence over vision.34,35 Previous studies have demonstrated the important role of vestibular cues in path integration.36,37 Our previous study included conditions in which normally sighted subjects were pushed in a wheelchair along the three-segment paths to determine whether reduced proprioceptive input would result in poorer spatial updating.13 The wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. These results confirm that vestibular cues can be effective for spatial updating. Note, however, that Sholl38 reported more accurate directional estimates from walking subjects than from subjects pushed in a wheelchair. One possible reason for the difference between the two studies is that Sholl used a pointing response whereas we used a verbal response.
We now comment briefly on two remaining issues—updating with respect to the target and updating in the preview condition. Based on the prior literature,39 we thought that updating with respect to an arbitrary landmark (the beanbag target) might be prone to greater errors than updating with respect to the starting location. We found no evidence for this difference. Both the blind and low-vision groups actually had slightly smaller distance errors with respect to the target than to the starting location. 
We included the preview condition for comparison with the deprivation condition. Previous research has shown that visual preview can sometimes facilitate spatial updating in a blind-walking task.40,41 Our blind subjects showed no consistent preview benefit. This may not be surprising, given that their preview was limited to auditory cues. Both the normally sighted and low-vision groups had smaller direction errors in the preview condition compared with the deprivation condition, but these differences were not statistically significant. Rieser et al.42 found that subjects with a life history of visual experience (normally sighted blindfolded, and late-blind subjects) performed better than early-blind subjects in indicating the directions to objects in a familiar room after locomotion through the room. They proposed that the better performance was due to calibration of visual flow with nonvisual walking cues that could be used for keeping track of the directions to unseen objects. In our simpler task, subjects updated with respect to points along the path of locomotion (the starting point and target). In this case, calibration of distance and direction was likely based on proprioceptive cues. 
Conclusions
Our study was motivated by an interest in the impact of impaired vision on perception of global features of indoor environments. A pedestrian's safe and effective mobility in indoor spaces can benefit from knowledge of the size and shape of layouts, and also the ability to keep track of one's position and orientation in the space. 
A major conclusion is that people with a wide range of low-vision conditions are able to judge room size as accurately as people with normal vision. There are two caveats. First, our study was limited to spaces of moderate size (up to roughly 30 ft in length). It is possible that judgments of larger spaces—lobbies, concert halls, railway stations—would place people with low vision at a disadvantage. Second, we believe that an important cue for judging room dimensions in low vision is high contrast between the floor and the walls. Without this cue, low-vision performance might suffer. Good building design could enhance visual accessibility by ensuring high contrast through appropriate selection of the floor and wall finishes. 
Another major conclusion is that vision status has only a small impact on performance in a simple spatial-updating task. Performance was very similar for blind, low-vision, and normally sighted subjects. Performance was little affected when subjects in the three groups were deprived of auditory cues. We conclude that proprioceptive and vestibular cues are sufficient for a simple spatial-updating task. But the real-world paths taken by people are often much more complex than the three-segment paths studied here, and pedestrians may be distracted by other activities in the space. For example, consider circulating at a party and engaging in conversation while trying to keep track of distance and direction to the room entrance. As the number of path segments increases, path-integration errors in position and orientation would increase. Increasing errors in spatial updating as a function of the number of path segments have been observed for normally sighted subjects tested in circular rooms lacking geometric cues for a frame of reference.8 But, in most cases, a person with normal vision can use familiar landmarks in the space for error correction. If such landmarks are not visible for a person with low vision, and stable auditory landmarks are absent, difficulties in spatial updating may occur. Large, highly visible features of known size and location in the space could prove useful as landmarks for people with low vision. 
Acknowledgments
The authors thank Tiana Bochsler for help with the experimental design and Loren Fabry and Jee Won Choi for help in testing subjects. 
Presented in the form of preliminary reports at the Envision conference, Minneapolis, Minnesota, United States, September 19, 2014; and at the annual meeting of the Association for Research in Vision and Ophthalmology, Denver, Colorado, United States, May 6, 2015. Data from this study are available from the Data Repository for the University of Minnesota (DRUM) at the following persistent identifier: http://hdl.handle.net/11299/182367
Supported by National Institutes of Health Grant EY017835. 
Disclosure: G.E. Legge, Precision Vision (R); C. Granquist, None; Y. Baek, None; R. Gage, None 
References
1. Loomis JM, Klatzky RL, Golledge RG, Cicinelli JG, Pellegrino JW, Fry PA. Nonvisual navigation by blind and sighted: assessment of path integration ability. J Exp Psychol Gen. 1993; 122: 73–91.
2. Gallistel CR. The Organization of Learning. Cambridge, MA: MIT Press; 1990.
3. Cheng K, Newcombe NS. Is there a geometric module for spatial orientation? Squaring theory and evidence. Psychon Bull Rev. 2005; 12: 1–23.
4. Cheng K. A purely geometric module in the rat's spatial representation. Cognition. 1986; 23: 149–178.
5. Epstein R, Kanwisher N. A cortical representation of the local visual environment. Nature. 1998; 392: 598–601.
6. Cheng K. Whither geometry? Troubles of the geometric module. Trends Cogn Sci. 2008; 12: 355–361.
7. Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA. Modality-independent coding of spatial layout in the human brain. Curr Biol. 2011; 21: 984–989.
8. Kelly JW, McNamara TP, Bodenheimer B, Carr TH, Rieser JJ. The shape of human navigation: how environmental geometry is used in maintenance of spatial orientation. Cognition. 2008; 109: 281–286.
9. Kalia AA, Legge GE, Giudice NA. Learning building layouts with non-geometric visual information: the effects of visual impairment and age. Perception. 2008; 37: 1677–1699.
10. Klatzky RL, Golledge RG, Loomis JM, Cicinelli JG, Pellegrino JW. Performance of blind and sighted persons on spatial tasks. J Vis Impair Blind. 1995; 89: 70–82.
11. Giudice NA, Betty MR, Loomis JM. Equivalence of spatial images from touch and vision: evidence from spatial updating in blind and sighted individuals. J Exp Psychol Learn Mem Cogn. 2011; 37: 621–634.
12. Schinazi VR, Thrash T, Chebat D-R. Spatial navigation by congenitally blind individuals. Wiley Interdiscip Rev Cogn Sci. 2016; 7: 37–58.
13. Legge GE, Gage R, Baek Y, Bochsler TM. Indoor spatial updating with reduced visual information. PLoS One. 2016; 11: e0150708.
14. Legge GE, Yu D, Kallie CS, Bochsler TM, Gage R. Visual accessibility of ramps and steps. J Vis. 2010; 10 (11): 8.
15. Bochsler TM, Legge GE, Kallie CS, Gage R. Seeing steps and ramps with simulated low acuity: impact of texture and locomotion. Optom Vis Sci. 2012; 89: E1299–E1307.
16. Bochsler TM, Legge GE, Gage R, Kallie CS. Recognition of ramps and steps by people with low vision. Invest Ophthalmol Vis Sci. 2013; 54: 288–294.
17. Kallie CS, Legge GE, Yu D. Identification and detection of simple 3D objects with severely blurred vision. Invest Ophthalmol Vis Sci. 2012; 53: 7997–8005.
18. Philbeck J, Sargent J, Arthur J, Dopkins S. Large manual pointing errors, but accurate verbal reports, for indications of target azimuth. Perception. 2008; 37: 511–534.
19. Loomis JM, Philbeck JW. Measuring spatial perception with spatial updating and action. In: Klatzky RL, Behrmann M, MacWhinney B, eds. Embodiment, Ego-Space, and Action. New York, NY: Taylor & Francis; 2008: 1–43.
20. Allen GL, Kirasic KC, Rashotte MA, Haun DBM. Aging and path integration skill: kinesthetic and vestibular contributions to wayfinding. Percept Psychophys. 2004; 66: 170–179.
21. Tarampi MR, Thompson WB. Intact spatial updating with severely degraded vision. Atten Percept Psychophys. 2010; 72: 23–27.
22. Kalia AA, Schrater PR, Legge GE. Combining path integration and remembered landmarks when navigating without vision. PLoS One. 2013; 8: e72170.
23. Marron JA, Bailey IL. Visual factors and orientation-mobility performance. Am J Optom Physiol Opt. 1982; 59: 413–426.
24. Long RG, Rieser JJ, Hill EW. Mobility in individuals with moderate visual impairments. J Vis Impair Blind. 1990; 84: 111–118.
25. Kuyk T, Elliott JL, Biehl J, Fuhr PS. Environmental variables and mobility performance in adults with low vision. J Am Optom Assoc. 1996; 67: 403–409.
26. Hassan SE, Hicks JC, Lei H, Turano KA. What is the minimum field of view required for efficient navigation? Vision Res. 2007; 47: 2115–2123.
27. Kolarik AJ, Pardhan S, Cirstea S, Moore BC. Using acoustic information to perceive room size: effects of blindness, room reverberation time, and stimulus. Perception. 2013; 42: 985–990.
28. Kolarik AJ, Cirstea S, Pardhan S, Moore BCJ. A summary of research investigating echolocation abilities of blind and sighted humans. Hear Res. 2014; 310: 60–68.
29. Schenkman BN, Nilsson ME. Human echolocation: blind and sighted persons' ability to detect sounds recorded in the presence of a reflecting object. Perception. 2010; 39: 483–501.
30. Schörnich S, Nagy A, Wiegrebe L. Discovering your inner bat: echo-acoustic target ranging in humans. J Assoc Res Otolaryngol. 2012; 13: 673–682.
31. Bryant DJ. Representing space in language and perception. Mind Lang. 1997; 12: 239–264.
32. Loomis JM, Klatzky RL, Giudice NA. Representing 3D space in working memory: spatial images from vision, touch, hearing, and language. In: Lacey S, Lawson R, eds. Multisensory Imagery: Theory and Applications. New York, NY: Springer; 2013: 131–156.
33. Mittelstaedt ML, Mittelstaedt H. Idiothetic navigation in humans: estimation of path length. Exp Brain Res. 2001; 139: 318–332.
34. Campos JL, Byrne P, Sun HJ. The brain weights body-based cues higher than vision when estimating walked distances. Eur J Neurosci. 2010; 31: 1889–1898.
35. Chance SS, Gaunet F, Beall AC, Loomis JM. Locomotion mode affects the updating of objects encountered during travel: the contribution of vestibular and proprioceptive inputs to path integration. Presence. 1998; 7: 168–178.
36. Klatzky RL, Loomis JM, Beall AC, Chance SS, Golledge RG. Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychol Sci. 1998; 9: 293–298.
37. Arthur JC, Kortte KB, Shelhamer M, Schubert MC. Linear path integration deficits in patients with abnormal vestibular afference. Seeing Perceiving. 2012; 25: 155–178.
38. Sholl MJ. The relation between horizontality and rod-and-frame and vestibular navigational performance. J Exp Psychol Learn Mem Cogn. 1989; 15: 110–125.
39. Wang RF, Spelke ES. Updating egocentric representations in human navigation. Cognition. 2000; 77: 215–250.
40. Arthur JC, Philbeck JW, Chichka D. Spatial memory enhances the precision of angular self-motion updating. Exp Brain Res. 2007; 183: 557–568.
41. Philbeck JW, Klatzky RL, Behrmann M, Loomis JM, Goodridge J. Active control of locomotion facilitates nonvisual navigation. J Exp Psychol Hum Percept Perform. 2001; 27: 141–153.
42. Rieser JJ, Guth DA, Hill EW. Sensitivity to perspective structure while walking without vision. Perception. 1986; 15: 173–188.