Improving Mobility Performance in Low Vision With a Distance-Based Representation of the Visual Scene
Author Affiliations & Notes
  • Joram J. van Rheede
Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
  • Iain R. Wilson
Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
  • Rose I. Qian
Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
  • Susan M. Downes
    Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford Eye Hospital, Oxford University Hospitals National Health Service (NHS) Trust, Oxford, United Kingdom
    National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, United Kingdom
  • Christopher Kennard
Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
    National Institute for Health Research (NIHR) Biomedical Research Centre, Oxford, United Kingdom
  • Stephen L. Hicks
Division of Clinical Neurology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
  • Correspondence: Stephen L. Hicks, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, OX3 9DU, Oxfordshire, UK; stephen.hicks@ndcn.ox.ac.uk
Investigative Ophthalmology & Visual Science July 2015, Vol. 56, 4802-4809. doi: https://doi.org/10.1167/iovs.14-16311
Abstract

Purpose: Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles.

Methods: We accomplished this by developing a pair of “residual vision glasses” (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants during the completion of a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to find correlates of obstacle detection and hesitations in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions.

Results: All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation.

Conclusions: A depth-based representation of the visual environment may offer low vision patients improvements in independent mobility. It is important for further work to explore whether practice can overcome the reductions in speed and increased hesitation that were observed in our trial.

Severe visual impairment has a profound impact on independence and quality of life.1,2 However, most types of visual impairment, including those meeting the criteria for “blindness,”3,4 still leave people with some remaining vision. Low vision services are available to help people increase the use of such residual vision, but often rely mainly on magnification.5 Progress in wearable technology has resulted in a number of head-mounted electronic magnification devices that appear to be of some benefit to low vision patients.6 However, such detail-enhancing interventions have only modest effects on well-being, and a particularly limited impact on mobility and independence. 
Several other strategies for increasing the use of residual vision have been explored. Firstly, some approaches focus on presenting the image to the part of the retina with the best function by warping7 or displacing parts of the visual image.8,9 However, this seems to have limited success,9 partly because people with significant visual field defects often already adapt their gaze direction to achieve the same effect. Another approach is to use image processing strategies to boost the visibility of a scene through increased contrast, brightness, or emphasis of contours and features.10–13 In a similar fashion, the visual scene can be simplified to accommodate substantially reduced visual ability.14 However, if such algorithms are applied indiscriminately to the whole image, the actual benefit may be limited, as the elements of the visual scene most relevant to a person with low vision may not necessarily be the ones that are most enhanced. To avoid this problem, machine learning algorithms based on scene segmentation and object detection can be used to highlight items of particular importance to the user.15–17 
Indeed, particularly in the case of the lowest vision patients, it may be desirable to employ an intelligent prioritization strategy so that those elements of the visual scene relevant to a particular task are emphasized, while others are left to the background. Such a prioritization strategy should be based on the type of information that is most relevant to the visual problem that people are trying to solve. In the context of mobility, the most important information is an awareness of the spatial position of oneself in relation to any obstacles in the environment. Encoded relative to a person, this can be expressed as distance. 
Therefore, we have attempted to make residual vision more useful to sight-impaired people by representing the visual scene in a manner that emphasizes information about the distance of objects. This distance-based representation of the visual scene was implemented on a head-mounted set of displays (residual vision glasses [RVGs]). We describe experiments performed with visually impaired patients using a prototype that represents an improvement on an earlier version that was used for a proof-of-principle study.18 In the current study, we have assessed the use of these RVGs in a mobility setting that required participants to cross a room while avoiding obstacles. We also introduced new, objective measures of mobility performance that probe visual awareness. These allowed us to investigate the dynamics of mobility in more detail than with traditionally used measures, such as walking speed and number of unintentional obstacle contacts alone. 
Methods
Participants
We recruited 11 visually impaired participants from the local population of Oxfordshire, United Kingdom, including three men and eight women, with ages ranging from 26 to 85. Participants had a variety of visual problems, the most common of which was retinitis pigmentosa (5 participants), and had visual acuities ranging from 0.5 to 1.6 on the logMAR scale. The characteristics and visual status of individual participants are recorded in the Table.
Table
 
Visual Status of Participants
Additionally, five normally sighted control participants (3 female, 2 male; age range, 26–28) took part in the experiment, serving as a reference point for "normal" visual mobility performance. The research followed the tenets of the Declaration of Helsinki; informed consent was obtained from all subjects after explanation of the nature and possible consequences of the study; and the research was approved by the research ethics committee of the University of Oxford. 
Residual Vision Glasses
The RVGs consisted of a pair of 4 × 4 cm organic light emitting diode (OLED) panels and an infrared depth camera, mounted within an adjustable headset (Figs. 1a–c). The displays were non-see-through, and participants were prevented from seeing any other part of the visual environment by enclosing the head-mounted display in opaque material. The displays were positioned at a distance of approximately 3 cm from the participants' eyes, producing a field of view of approximately 60° horizontal per eye. The OLED panels were mounted without optics and, as such, were too close to focus on. Participants were encouraged to focus beyond the displays at a comfortable distance, as if they were looking through them. The images on the left and right panels were adjusted to produce a binocularly fused display. The RVGs were built without focusing optics because it is envisaged that this form factor could readily be built into a see-through display with a very wide field of view. The viewing angle of contemporary head-mounted see-through displays is limited to less than 25° horizontal due to the bulk and distortion inherent in wider-angle see-through lenses. Given the severely impaired vision of our participants, we prioritized field of view over image clarity. 
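As a rough geometric check of the quoted field of view (ignoring eye relief and pupil position, which reduce the effective angle), a 4-cm-wide panel viewed from 3 cm subtends approximately

\[
2\arctan\!\left(\frac{2\ \mathrm{cm}}{3\ \mathrm{cm}}\right) \approx 67^{\circ},
\]

broadly consistent with the reported value of approximately 60° per eye.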
Figure 1
 
A prototype for depth-based residual vision glasses to aid mobility in low vision. (a) The residual vision glasses including the head mount. Arrows indicate the location of the camera and displays. (b) Headset mounted on a glass model head, with arrows indicating the distance from the eyes to the displays and the location of the camera relative to the displays. (c) The representation of the visual environment in the residual vision glasses as the depth-to-brightness algorithm is running. While the representation on the displays is low resolution, it is easy to pick out the presence of a human figure in the foreground. Note that for the experiments, an opaque cover was used to block out any visual information other than what was presented on the displays.
The depth camera used was an Asus Xtion Pro (Asus, Fremont, CA, USA), based on the PrimeSense Carmine 1080 structured light sensor with a 58° H, 45° V field of view and a range of approximately 8 m. The displays were custom made using Densitron (Corona, CA, USA) color OLED panels (DD-160128FC-2B) with a maximum resolution of 160 × 128. Custom driver boards were made that were capable of transmitting 30-Hz video data in 8-bit gray scale. The light output for each gray level is shown in Supplementary Figure S1 and ranged from 0 (the lowest level) to 75.5 cd/m2 (maximal brightness). The resolution was down-sampled to 20 × 16 pixels per eye. The displays were mounted on the headset in such a way as to allow each of the OLED panels to be repositioned independently for each eye. The displays covered an area of approximately 60° H and 50° V, which allowed a 1-to-1 mapping of the camera image onto the external world. 
It should be noted that normally sighted participants (n = 3) were all able to resolve individual and adjacent pixels despite the fact that the display was not in focus. Two bright pixels spaced one or more pixels apart could be resolved separately, as could two dark pixels, suggesting that it was possible to make use of the full resolution of the display. 
RVGs: Image Processing
Processing was performed on a ThinkPad X220 laptop (Lenovo, Beijing, China) that was carried by either the participant or the experimenter. The laptop ran Ubuntu Linux with our own custom code written in C++. Depth information, acquired from the Xtion camera at 30 frames per second, was used to generate a visual image that represented nearby objects as brighter and objects further away as less bright. Maximum brightness was attributed to surfaces between 0.7 and 1.0 m away, and minimum brightness, that is, black, was attributed to surfaces at 3.5 m or further. The principal effect of this process is to dramatically increase the contrast of foreground objects. One limitation of this system is that the Xtion camera is insensitive to distances closer than 70 cm, and any surface within this range is represented as black. However, participants were made aware of this beforehand, and because obstacles could be seen brightening as they approached, we do not believe this seriously affected the results. 
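For illustration, a minimal sketch of this depth-to-brightness mapping in C++ (to match the implementation language used on the device). The function names, the linear ramp between 1.0 and 3.5 m, and the block-averaging down-sampling are our assumptions, not the published code:

```cpp
#include <cstdint>
#include <vector>

// Map a depth reading (metres) to an 8-bit grey level: nearer is brighter.
// Thresholds follow the Methods text; the linear ramp shape is assumed.
uint8_t depthToBrightness(float depthM) {
    if (depthM < 0.7f) return 0;    // sensor blind zone: rendered black
    if (depthM <= 1.0f) return 255; // 0.7-1.0 m: maximum brightness
    if (depthM >= 3.5f) return 0;   // 3.5 m and beyond: black
    float t = (3.5f - depthM) / (3.5f - 1.0f); // 1 at 1.0 m, 0 at 3.5 m
    return static_cast<uint8_t>(t * 255.0f + 0.5f);
}

// Down-sample a depth frame to the 20 x 16 display resolution by block
// averaging (averaging is an assumption; handling of invalid/zero sensor
// returns is omitted for brevity).
std::vector<uint8_t> renderFrame(const std::vector<float>& depth,
                                 int srcW, int srcH,
                                 int dstW = 20, int dstH = 16) {
    std::vector<uint8_t> out(static_cast<size_t>(dstW) * dstH);
    const int bw = srcW / dstW, bh = srcH / dstH;
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float sum = 0.0f;
            for (int dy = 0; dy < bh; ++dy)
                for (int dx = 0; dx < bw; ++dx)
                    sum += depth[(y * bh + dy) * srcW + (x * bw + dx)];
            out[y * dstW + x] = depthToBrightness(sum / (bw * bh));
        }
    }
    return out;
}
```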
Experimental Procedure
Testing was done in a sports hall at the John Radcliffe Hospital complex in Oxford, United Kingdom. The lighting in the sports hall was turned down to simulate navigation in a dimly lit environment. Before starting any trials with the RVGs, participants were briefly familiarized with the headset, and it was confirmed that they could use the RVGs to identify basic shapes in their environment as well as detect motions of the experimenter, such as walking or hand waving. No other training was given; however, an experimenter was always nearby to prevent participants from injuring themselves through high-velocity collisions. 
Participants were asked to traverse the length of a 15 × 5 m arena. Cylindrical soft foam obstacles, 0.5 m in diameter and 1.2 m in height and covered in dark cloth, were placed at randomly assigned locations drawn from a 5 × 5 grid of possible positions in the 5 × 5 m central part of this arena (Fig. 2a, Supplementary Fig. S2). Participants were instructed to walk to the other side of the room at a comfortable pace while avoiding making contact with the obstacles. Participants completed a variable number of trials (28 on average), and the difficulty of the obstacle course (in terms of the number of obstacles placed in their path) was increased over the course of the trials up to a maximum of six obstacles, incrementing by one obstacle per two trials. Subjects completed a number of trials with and without the RVGs, and condition order was counterbalanced across subjects. We will refer to the condition without the RVGs as "unaided" and the condition with the RVGs as "aided." 
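For concreteness, a hypothetical sketch of this placement schedule (the starting obstacle count and 0-based trial indexing are our assumptions):

```cpp
#include <algorithm>
#include <random>
#include <utility>
#include <vector>

// Draw obstacle positions for one trial: min(1 + trial/2, 6) obstacles,
// sampled without replacement from the 5 x 5 grid of candidate cells.
std::vector<std::pair<int, int>> placeObstacles(int trial, std::mt19937& rng) {
    const int n = std::min(1 + trial / 2, 6);
    std::vector<std::pair<int, int>> cells;
    for (int r = 0; r < 5; ++r)
        for (int c = 0; c < 5; ++c) cells.emplace_back(r, c);
    std::shuffle(cells.begin(), cells.end(), rng);
    cells.resize(n); // keep the first n shuffled cells
    return cells;    // grid coordinates within the 5 x 5 m central area
}
```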
Figure 2
 
Dynamic measures of mobility performance in an obstacle avoidance task. (a) A schematic illustration of the obstacle avoidance task. Participants were asked to cross a 15-m obstacle course. Soft cylindrical foam obstacles (height, 120 cm; diameter, 0.5 m) were randomly positioned in the 5 × 5 m central area. Overhead cameras captured each trial and the footage was used to extract participant position over time. The grid indicated with the dotted lines in the central 5 × 5 m area indicates all possible obstacle positions. (b) Visual obstacle awareness was quantified as “deviation distance,” which was defined as the distance between the participant and an obstacle at the final time point before the participant deviated from a collision course with the obstacle. (c) Hesitation behavior was quantified using fluctuations in the speed of participants as they crossed the obstacle course. The speed was normalized to the maximum speed attained during a trial, and then the differences between subsequent peaks and troughs in the speed signal were added up to yield a hesitation score for each participant, where lower scores indicate less and higher scores indicate more hesitation.
Data Acquisition and Analysis
Participants' paths through the mobility course were recorded with a pair of ceiling-mounted low-light monochrome area scan cameras (GigE UI-5240CP-M-XX; resolution, 1280 H × 1024 V; focal length, 3.5 mm; IDS, Obersulm, Germany). Each camera had an effective field of view of 90°; the cameras were mounted 6 m above the testing area, 3 m apart, and together captured the entire scene. Video was acquired at 25 frames per second and logged to a PC for offline processing. Before participants' paths were traced, the video was corrected to remove fisheye distortion. Obstacle contacts were noted by the experimenters during the trials. 
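The method used for the fisheye correction is not reported; as one possibility, a sketch using OpenCV's fisheye camera model, where the intrinsics K and distortion coefficients D are placeholders that would come from a prior checkerboard calibration of the ceiling cameras:

```cpp
#include <opencv2/opencv.hpp>

// Undistort one overhead video frame with OpenCV's fisheye model.
// K (3x3 camera matrix) and D (4 distortion coefficients) are assumed
// to have been estimated beforehand, e.g., with cv::fisheye::calibrate.
cv::Mat undistortFrame(const cv::Mat& frame, const cv::Mat& K, const cv::Mat& D) {
    cv::Mat undistorted;
    cv::fisheye::undistortImage(frame, undistorted, K, D, K); // reuse K as new camera matrix
    return undistorted;
}
```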
The bird's-eye view video footage of each trial was used to trace the paths of participants in custom-written LabVIEW software (National Instruments Corporation, Austin, TX, USA). This resulted in an X and Y position for the participant in each video frame. Paths were imported into MATLAB (MathWorks, Natick, MA, USA) and analyzed with custom scripts. In addition to providing information about participants' overall speed and trial completion time, these paths allowed us to investigate the dynamics of obstacle avoidance and target recognition. 
While the commonly used measures of participants' walking speed and number of (unintentional) obstacle contacts provide important insights into the functional mobility of participants, we also were interested in visual awareness. Goodrich and Ludt19 introduced the "detection distance" to probe this construct, as a measure of the distance at which a participant could explicitly recognize whether there was an object ahead of them. However, this required participants to stop and interrupt their walking. We introduced a related, but implicit, measure of obstacle detection distance that we were able to extract from participants' paths after the experiments: we determined participants' minimum distance to an obstacle before they deviated from it ("deviation distance," Fig. 2b). This was accomplished by first establishing whether, at any point during a given trial, a participant's trajectory would have put them on a collision course with any of the obstacles. The trajectory was determined from the change in x,y-position between successive video frames, and was considered a "collision course" when an extrapolation of that trajectory intersected with the area occupied by an obstacle. The point at which a participant deviated from a collision course with the obstacle was then determined. Where a participant was on a collision course with the same obstacle more than once, the final time the participant deviated from the collision course was used to calculate deviation distance (defined as the distance between this "deviation point" and the obstacle). 
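A minimal sketch of the deviation-distance computation, re-implemented in C++ for illustration (the analysis was done in MATLAB; modelling the obstacle as a circle around its center, and approximating the deviation point by the last collision-course frame, are our assumptions):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// path holds the participant's tracked position per video frame.
// Returns the distance to the obstacle at the last frame on which the
// extrapolated heading still intersected the obstacle circle, or -1 if
// the participant was never on a collision course with it.
double deviationDistance(const std::vector<Pt>& path, Pt obstacle, double radius) {
    int lastCollisionFrame = -1;
    for (size_t i = 1; i < path.size(); ++i) {
        double dx = path[i].x - path[i - 1].x;
        double dy = path[i].y - path[i - 1].y;
        double len = std::hypot(dx, dy);
        if (len == 0) continue;           // standing still: no heading defined
        dx /= len; dy /= len;             // unit heading vector
        double cx = obstacle.x - path[i].x;
        double cy = obstacle.y - path[i].y;
        double t = cx * dx + cy * dy;     // along-heading distance to closest approach
        if (t <= 0) continue;             // obstacle is behind the walker
        double perp = std::hypot(cx - t * dx, cy - t * dy);
        if (perp <= radius) lastCollisionFrame = static_cast<int>(i);
    }
    if (lastCollisionFrame < 0) return -1.0;
    const Pt& p = path[lastCollisionFrame]; // approximate deviation point
    return std::hypot(obstacle.x - p.x, obstacle.y - p.y);
}
```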
Finally, we wanted to investigate people's confidence in their visual judgments. We noted that participants with mobility difficulty often hesitated, indicating a lack of confidence in their ability to detect or evade obstacles (Fig. 2c). We used changes in walking speed as a measure of visual confidence. For each trial, speed was normalized to vary from 0 (standstill) to 1 (maximum speed), and smoothing was used to remove small variations in speed. Peaks and troughs in the speed signal were then detected between the point where 50% of maximum velocity was first attained and where it was last attained. To exclude noise and small fluctuations introduced by manual tracking, peaks closer than 20 frames (approximately 2/3 of a second) to the nearest larger peak, and troughs closer than 20 frames to the nearest larger trough, were ignored. To quantify hesitation, we took the absolute speed differences between consecutive peaks and troughs and summed them to give a hesitation score, with higher values indicating a greater number and a larger magnitude of speed variations. This represents an objective measure of hesitation that does not require an independent mobility trainer to observe trials and rate hesitation behavior. 
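A simplified sketch of the hesitation score (the smoothing step and the 20-frame minimum-separation rule for rival extrema are omitted for brevity; the extremum-detection details are our assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// speed is the frame-by-frame walking speed for one trial.
double hesitationScore(std::vector<double> speed) {
    if (speed.size() < 3) return 0.0;
    double vmax = *std::max_element(speed.begin(), speed.end());
    if (vmax <= 0) return 0.0;
    for (double& v : speed) v /= vmax;   // normalise to [0, 1]
    // Restrict analysis to the span between the first and last frames on
    // which 50% of maximum speed was attained, as in the Methods.
    size_t first = 0, last = speed.size() - 1;
    while (first < last && speed[first] < 0.5) ++first;
    while (last > first && speed[last] < 0.5) --last;
    // Sum absolute differences between consecutive local extrema.
    double score = 0.0, prevExtremum = speed[first];
    for (size_t i = first + 1; i + 1 <= last; ++i) {
        bool peak   = speed[i] > speed[i - 1] && speed[i] >= speed[i + 1];
        bool trough = speed[i] < speed[i - 1] && speed[i] <= speed[i + 1];
        if (peak || trough) {
            score += std::abs(speed[i] - prevExtremum);
            prevExtremum = speed[i];
        }
    }
    return score; // higher = more, and larger, speed fluctuations
}
```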
Statistical Analysis
We performed linear mixed effects analyses of the relationship between condition (aided versus unaided) and trial completion time, deviation distance, and hesitation score of visually impaired participants using the lme4 package20 in R (R Core Team, Vienna, Austria). In addition to "condition," we entered logMAR and condition order (aided first versus aided last) as fixed effects in the model to investigate whether visual acuity or condition order had an effect on mobility. We then included "subject" as a random effect, allowing for random intercepts and slopes. P values were obtained using likelihood ratio tests of the full model with the effect in question against the model without the effect in question. The final model was chosen as the least complex model beyond which adding further explanatory variables did not lead to a significant improvement. Effect sizes are reported as main effects ± SE. Other statistical comparisons were performed using paired t-tests unless otherwise noted. 
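For reference, the likelihood ratio statistic underlying these P values compares each pair of nested models,

\[
\chi^2 = -2\left[\ln L_{\mathrm{reduced}} - \ln L_{\mathrm{full}}\right],
\]

with degrees of freedom equal to the difference in the number of estimated parameters between the two models.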
Results
Obstacle Contacts
Unaided, just over half of visually impaired participants (6/11) were able to navigate the course without making contact with any of the obstacles. Each of the participants who had contacted obstacles in the unaided condition showed an improvement in their number of obstacle contacts in the aided condition (Fig. 3a). Without RVGs, these participants unintentionally contacted obstacles on 30.5 ± 15.0% of trials, while in the aided condition they did so on 11.2 ± 14.1% of trials (P = 0.031, Wilcoxon signed-rank test, n = 5). This indicates that those participants who had difficulties avoiding obstacles in the unaided condition were able to use the RVGs to improve their visual mobility performance. 
Figure 3
 
Residual vision glasses reduce collisions but also increase trial completion time. (a) Percentage of trials with one or more collisions for each subject, ordered by unaided performance. Black dots represent baseline performance, and arrows and diamonds represent performance in the aided condition. Red arrows indicate deterioration, green arrows indicate improvement, and black diamonds indicate equal performance. Control participants are represented separately in the blue dotted rectangle. Note that all those participants who could not avoid colliding with obstacles in the unaided condition showed a marked improvement in the aided condition. (b) Trial completion time in seconds for each subject, ordered by unaided performance. Arrows, diamonds, and control participants as in (a).
Trial Completion Time
An important aspect of mobility performance is the speed at which people are able to travel. Therefore, we were interested to see whether participants' trial completion times were significantly altered in trials with the RVGs. Moreover, we investigated whether there were effects of visual acuity or condition order. A linear mixed effects analysis of trial completion time revealed that a model with condition ("aided" or "unaided") and logMAR score as fixed effects, and a random slope for subject, best explained our experimental data. In the aided condition participants took 8.2 ± 1.8 s longer, and a higher logMAR score was associated with slower trial completion (−13.6 ± 4.6 s for an increase of 1 logMAR). The random slope for subject indicates that the effect of wearing the RVGs varied between subjects, as can be seen in Figure 3b. This model had significantly more explanatory power than a null (intercept-only) model (χ2[4] = 161.44, P < 0.0001) and than the models without condition (χ2[1] = 11.56, P = 0.0007), without logMAR (χ2[1] = 6.23, P = 0.013), or without a random slope for subject (χ2[2] = 52.97, P < 0.0001). It should be noted that visually impaired participants proceeded more slowly with the glasses even during straight walking trials in which they were aware no obstacles were present (walking speed unaided, 110 ± 15 pixels/s; aided, 79 ± 14 pixels/s; P = 0.026; n = 7). For reference, on average 1 pixel = 1 cm. 
Deviation Distance
While obstacle contacts and speed provide some indication of people's mobility performance, they say less about the extent to which participants were visually aware of obstacles. Moreover, we observed a clear ceiling effect for obstacle contact avoidance, as the majority of participants were able to navigate the course without unintentional obstacle contacts. Therefore, to estimate individual visual awareness of obstacles, we investigated walking behavior in relation to obstacles over time (Fig. 4a). We used deviation distance (see Methods) as an indicator of visual awareness of an obstacle. First, we validated the measure by confirming that it could differentiate between control and visually impaired participants, and found a significant difference in deviation distance between these two groups in the unaided condition (control, 386 ± 39 pixels; visually impaired, 271 ± 93 pixels; P = 0.0194). We then performed a linear mixed effects analysis to investigate the effect of wearing the RVGs as well as any effect of visual acuity or condition order. This revealed that a model with no fixed effects, but an effect of condition that interacted with subject, was best able to account for the observed data. In other words, there was a significant effect of condition ("aided" or "unaided"), but the direction of this effect varied by subject. This model provided a significant increase in explanatory value over the null model (χ2[2] = 8.42, P = 0.015). Further exploring the relationship between subject and effect of condition, we found that the effect of wearing the glasses correlated linearly with performance without the glasses, such that those people with the worst visual ability as measured by deviation distance benefited most from the RVGs, and those with the best visual ability benefited least (r = −0.90, P < 0.0001; see Fig. 4b). 
Figure 4
 
Residual vision glasses can increase visual obstacle awareness but increase hesitation. (a) Deviation distance ordered by unaided performance. Black dots represent baseline performance, and arrows and diamonds represent performance in the aided condition. Red arrows indicate deterioration, green arrows indicate improvement, and black diamonds indicate equal performance. Control participants are represented separately in the blue dotted rectangle. Note that participants with low deviation distances tend to perform better, while participants with high deviation distances tend to perform worse with the RVGs. (b) Deviation distance difference plotted against unaided deviation distance, illustrating a highly significant negative correlation between baseline performance and the effect of the RVGs, such that those with the worst prior performance showed most improvement and those with the best prior performance showed most deterioration. Black dots represent visually impaired participants, blue dots represent control participants. (c) Hesitation score ordered by unaided performance (dots, arrows, diamonds, and control participants as in [a]). Hesitation increased for nearly all participants in the aided condition.
Hesitation Score
Participants' confidence in their visual performance was quantified through a hesitation score (see Methods and Fig. 4c). First, it was confirmed that this score could differentiate between the mobility performance of control subjects and that of patients (control, 0.30 ± 0.28; visually impaired, 1.34 ± 0.77; P = 0.0113). Next, we performed a linear mixed effects analysis to investigate the effect of wearing the glasses as well as possible contributions of visual acuity and condition order. A model with condition plus a random slope for each subject best explained the data. In the aided condition, hesitation scores increased by 0.72 ± 0.25, though the random slope for each subject indicated that the effect size varied significantly between subjects. This model performed significantly better than the null model (χ2[3] = 42.14, P < 0.0001), the model without the random slope (χ2[2] = 19.43, P < 0.0001), and the model with no main effect of condition (χ2[1] = 6.09, P = 0.014). Hence, while their performance in terms of avoiding obstacle contacts and detecting obstacles could improve, we did find that participants slowed down and hesitated more with the RVGs. 
Discussion
In this study, we investigated whether a prototype set of RVGs, which represent the distance of objects in the environment as brightness on a pair of head-mounted displays, could improve the mobility of people with low vision. We report that participants at the lower end of the visual ability spectrum experienced a benefit from the RVGs when traversing an obstacle course. While the RVGs increased the time that participants took to navigate the course, participants who were unable to avoid obstacles in the control condition significantly reduced their number of unintentional contacts with obstacles. By analyzing participants' walking paths, we were able to extract information about the dynamics of walking behavior that could be used to infer to what extent participants were aware of their visual surroundings. We found that obstacle awareness can be increased for the people with the most significant visual mobility problems. However, there was an increase in hesitation and a decrease in overall walking speed in trials that included the RVGs, suggesting that the unfamiliar displays can decrease confident walking in untrained participants. These results provide further evidence that, in principle, visual obstacle detection can be improved by presenting distance information in a manner that is easier to interpret for low vision patients. 
It is likely that participants' unfamiliarity with the RVGs and the altered representation of the visual world can account for the increased hesitation and lower walking speeds of participants. The finding that even in trials where participants knew no obstacles were present, walking speed still was significantly slower when using the RVGs supports this hypothesis. It is important to note that in our current study, participants were given only a few minutes to become familiar with the visual scene representation on the displays, performing a few simple orienting tasks, such as pointing out the experimenter walking around the room or identifying one or two obstacles when they were placed in front of them. 
The increasing difficulty of the task (with increasing numbers of obstacles as trials progressed) meant that it was difficult to observe any training effects. Future work will need to establish how long it takes participants to reach optimal performance with the RVGs. A study investigating the use of night vision goggles for people with night blindness found that subjective improvements continued for 3 weeks of home use of the goggles, and objective improvements in walking speed were observed during mobility tests at a time point after 5 weeks of training.21 
Dynamic Measures of Mobility Performance
In this study, we presented and validated new measures of mobility performance based on the paths taken by participants through an obstacle course. These measures are objective in that they are generated from the path data and do not require rating by an experimenter or mobility trainer. We used deviation distance as a measure of visual obstacle awareness, and observed a difference between controls and visually impaired participants in the unaided condition, indicating that a large deviation distance is indicative of better visual performance. This represents an implicit way to measure the same construct as the explicit "detection distance" measure of Goodrich and Ludt,19 without requiring interruption of walking. Moreover, a hesitation score, probing hesitations in walking behavior, was used as a measure of confidence (or lack thereof) in visual awareness of obstacles. Again, this measure was significantly different between visually impaired and control participants, indicating that a low hesitation score is indicative of normal mobility performance. 
In the future, these measures can be used in mobility tests that are more ecologically valid, as they do not require participants to follow a set path or move from a set starting point to a set end point; values are simply extracted from a participant's movements with respect to any nearby obstacles or targets. This should allow for mobility tests with tasks that are more relevant to the real world, such as tasks that require searching for targets or free exploration. 
Methodological Considerations
In the current study, we used obstacles of a set size and shape: cylinders 1.2 m in height and 0.5 m in diameter. While such obstacles pose real mobility hazards to visually impaired individuals, obstacles in the real world are more variable in nature. Future studies could include hazards such as steps, low-lying trip hazards, and overhanging obstacles. 
Normally sighted participants were included in the study for two reasons. Firstly, they were included to establish the maximum expected performance with the RVGs, as visually impaired participants would not be expected to do better than normally sighted, healthy volunteers. Secondly, it was useful to obtain a reference point for normal, healthy mobility performance to validate the deviation distance and hesitation score measures. However, our normally sighted volunteers were all relatively young (26–28 years old), and it is possible that younger and older participants differ in their ability to rapidly adapt to our novel way of displaying the visual world. Therefore, the maximum expected performance of people of similar ages to our visually impaired group may differ somewhat from that of our control group. 
Head-Mounted Display
In this study, we were able to show that focused displays are not a prerequisite for sighted navigation. The information presented was simple enough not to require focusing optics, though a focusable display would provide better local contrast on the retina, which might benefit some participants. While normally sighted individuals were able to benefit from the full resolution of the display, we cannot exclude the possibility that our participants used a combination of head movements and total display luminance as a depth cue to overcome the lack of spatial resolution caused by the defocused display. Focusable displays would likely make it easier for visually impaired participants to obtain spatial information from the displays. 
The RVGs used in this study were assembled mainly from low-cost, off-the-shelf components, demonstrating that a depth-based visual aid to help low vision patients with mobility can be produced inexpensively. The non-see-through nature of the displays meant that people could not use the normal visual cues that they relied on for mobility, so the displays hindered mobility performance in normally sighted volunteers and the better-performing visually impaired participants. Combining the RVGs' wide field of view with an optically see-through display is likely to be an advantage for real world navigation. 
Acknowledgments
We thank Robert Cowburn and Jonathan Attwood for their assistance with the running of experiments. 
Supported by NIHR i4i Grant II-LB-1111-20005 and by the NIHR Biomedical Research Centre, Oxford (CK, SD). This report presents independent research commissioned by the NIHR under the i4i program. The authors alone are responsible for the content and writing of the paper. 
Disclosure: J.J. van Rheede, None; I.R. Wilson, None; R.I. Qian, None; S.M. Downes, None; C. Kennard, None; S.L. Hicks, None 
References
Stelmack J. Quality of life of low-vision patients and outcomes of low-vision rehabilitation. Optom Vis Sci. 2001; 78: 335–342.
Scott IU, Smiddy WE, Schiffman J, Feuer WJ, Pappas CJ. Quality of life of low-vision patients and the impact of low-vision services. Am J Ophthalmol. 1999; 128: 54–62.
World Health Organization. Global data on visual impairments 2010. Available at: http://www.who.int/blindness/GLOBALDATAFINALforweb.pdf.
World Health Organization. International Statistical Classification of Diseases and Related Health Problems 10th Revision. Geneva, Switzerland: World Health Organization; 2010.
Lamoureux EL, Pallant JE, Pesudovs K, Rees G, Hassell JB, Keeffe JE. The effectiveness of low-vision rehabilitation on participation in daily living and quality of life. Invest Ophthalmol Vis Sci. 2007; 48: 1476–1482.
Culham LE, Chabra A, Rubin GS. Clinical performance of electronic head-mounted, low-vision devices. Ophthalmic Physiol Opt. 2004; 24: 281–290.
Loshin DS, Juday RD. The programmable remapper: clinical applications for patients with field defects. Optom Vis Sci. 1989; 66: 389–395.
Parodi MB, Toto L, Mastropasqua L, Depollo M, Ravalico G. Prismatic correction in patients affected by age-related macular degeneration. Clin Rehabil. 2004; 18: 828–832.
Smith HJ, Dickinson CM, Cacho I, Reeves BC, Harper RAA. Randomized controlled trial to determine the effectiveness of prism spectacles for patients with age-related macular degeneration. Arch Ophthalmol. 2005; 123: 1042–1050.
Peli E, Peli T. Image enhancement for the visually impaired. Opt Eng. 1984; 23: 47–51.
Peli E, Goldstein RB, Young GM, Trempe CL, Buzney SM. Image enhancement for the visually impaired. Invest Ophthalmol Vis Sci. 1991; 32: 2337–2350.
Wolffsohn JS, Mukhopadhyay D, Rubinstein M. Image enhancement of real-time television to benefit the visually impaired. Am J Ophthalmol. 2007; 144: 436–440.
Al-Atabany WI, Memon MA, Downes SM, Degenaar PA. Designing and testing scene enhancement algorithms for patients with retina degenerative disorders. Biomed Eng Online. 2010; 9: 27.
Atabany W, Degenaar P. A robust edge enhancement approach for low vision patients using scene simplification. In: Cairo International Biomedical Engineering Conference (CIBEC). New York, NY: Institute of Electrical and Electronics Engineers; 2008.
Everingham MR, Thomas BT, Troscianko T. Head-mounted mobility aid for low vision using scene classification techniques. Int J Virt Real. 1999; 3: 3–12.
Everingham MR, Thomas BT, Troscianko T. Wearable mobility aid for low vision using scene classification in a Markov random field model framework. Int J Human-Computer Interact. 2003; 15: 231–244.
Jones T, Troscianko T. Mobility performance of low-vision adults using an electronic mobility aid. Clin Exp Optom. 2005; 89: 10–17.
Hicks SL, Wilson I, Muhammed L, Worsfold J, Downes SM, Kennard C. A depth-based head-mounted visual display to aid navigation in partially sighted individuals. PLoS One. 2013; 8: e67695.
Goodrich GL, Ludt R. Assessing visual detection ability for mobility in individuals with low vision. Vis Impair Res. 2003; 5: 57–71.
Bates D, Maechler M, Bolker B, Walker S. lme4: linear mixed-effects models using Eigen and S4. In: R Package Version 1.1-7. 2014. Available at: http://CRAN.R-project.org/package=lme4.
Hartong DT, Jorritsma FF, Neve JJ, Melis-Dankers BJM, Kooijman AC. Improved mobility and independence of night-blind people using night-vision goggles. Invest Ophthalmol Vis Sci. 2004; 45: 1725–1731.