Ellen Bowman, Lei Liu; Low Vision Patients Can Transfer Skills They Learned From Virtual Reality to Real Streets. Invest. Ophthalmol. Vis. Sci. 2015;56(7):2622.
Virtual reality holds great potential to improve the efficiency, affordability, and accessibility of low vision rehabilitation. However, whether patients with impaired vision can learn useful skills in a virtual environment and then apply them to solve real-world problems has not been tested.
Twelve subjects with vision too poor to use pedestrian signals to make a safe street crossing were trained to use the start (surge) of traffic in the near lane of parallel traffic (the lane closest to the subject, moving in the same direction as the crossing) to determine the safest time to cross a street within a WALK cycle. A safe surge came a few seconds after the onset of the pedestrian signal, when cars entered the intersection. The subject was asked to say “GO” while standing on the curb at the moment she felt it was safest to initiate crossing. The onset time of the pedestrian signal, the surge time (first straight-going car passing through the outer boundary of the crosswalk), and the GO time were recorded with stopwatches by two testers. The GO time was converted to a safety score (SS), defined as the proportion of the WALK cycle remaining after GO; SS = 0 if GO occurred during the red cycle. The SS for all subjects was evaluated at 4 real street corners before and after training, with at least 3 attempts per corner. A certified orientation & mobility specialist taught the subjects how to use the near-lane parallel traffic surge. Eight subjects were trained in virtual streets generated by a semi-cave simulator (VS group) and 4 were trained in real streets (RS group).
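The safety score described above can be sketched as a small function. This is a hypothetical illustration, not the authors' analysis code; the time convention (seconds measured from WALK onset) and the parameter names `go_time` and `walk_duration` are assumptions.

```python
def safety_score(go_time: float, walk_duration: float) -> float:
    """Proportion of the WALK cycle remaining at the moment of GO.

    Assumes go_time is measured in seconds from the onset of the WALK
    signal and walk_duration is the length of the WALK cycle. A GO that
    falls outside the WALK cycle (i.e., during the red) scores 0.
    """
    if go_time < 0 or go_time >= walk_duration:
        return 0.0  # GO during the red cycle
    return (walk_duration - go_time) / walk_duration
```

For example, with a 30-second WALK cycle, saying GO at its onset yields SS = 1.0, GO at 15 s yields SS = 0.5, and any GO during the red yields SS = 0.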
Both groups showed a significant increase in SS after training (0.25 vs. 0.81, t=15.11, p<.0001 for VS; 0.29 vs. 0.76, t=8.48, p<.0001 for RS). Repeated-measures ANOVA showed no significant interaction between training effect and group. Before training, the VS group said GO 65% of the time in the red cycle (InRed), 17% in the WALK cycle but before the surge (BeforeSurge), 1% in the WALK cycle with less than half the cycle left (SafeLow), and only 17% in the WALK cycle with more than half the cycle left (SafeHigh). After training in virtual streets, the numbers became 3%, 4%, 2% and 91% (Fig. 1). The RS group showed a similar pattern after real street training (Fig. 2).
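The four GO-time categories used in these results can be expressed as a simple classifier. This is a sketch under assumed conventions (all times in seconds from WALK onset; the half-cycle boundary assigned to SafeHigh), not the authors' scoring code.

```python
def classify_go(go_time: float, surge_time: float, walk_duration: float) -> str:
    """Assign a GO response to one of the four categories reported above.

    Assumes go_time and surge_time are seconds from WALK onset and
    walk_duration is the length of the WALK cycle.
    """
    if go_time < 0 or go_time >= walk_duration:
        return "InRed"        # GO outside the WALK cycle
    if go_time < surge_time:
        return "BeforeSurge"  # in the WALK cycle, but before the traffic surge
    if (walk_duration - go_time) / walk_duration < 0.5:
        return "SafeLow"      # less than half the cycle remaining
    return "SafeHigh"         # at least half the cycle remaining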
Patients with severely impaired vision can learn important visual skills in a virtual environment and can apply those skills to solve real-world problems. Virtual street training can be as effective as real street training.