Brent Scott Carpenter, Quan Lei, Daniel Kersten, Gordon E Legge; Validating physics based 3D renderings for use with low vision psychophysics. Invest. Ophthalmol. Vis. Sci. 2017;58(8):5416.
Detecting pedestrian hazards is a common and difficult task for people with low vision. Past studies of the visibility of ramps and steps to people with reduced acuity (due to blurring goggles or natural low vision) have employed handmade structures (Bochsler, Legge, Gage & Kallie, 2013). However, physical construction of environments places severe limits on the range of conditions that can practically be investigated. The present study examined observers' ability to detect common pedestrian hazards using physics-based three-dimensional (3D) modeling and image processing as a substitute for constructing physical scenes and stimuli. To validate 3D rendering as a medium for creating psychophysical stimuli, we tested whether the pattern of errors gathered via a computer display matched the pattern gathered in a real scene with hand-built navigational hazards.
We employed a five-alternative forced-choice object recognition task using physics-based 3D renderings of a real scene containing five obstacle classes: step up, step down, ramp up, ramp down, and flat. The images were created with the Radiance rendering software (Ward, 1994). Normally sighted observers viewed spatially filtered versions of these images that simulated six acuities ranging from LogMAR 1.0 to LogMAR 2.0 in steps of 0.2. The images were displayed to match the real scene as closely as possible in contrast and visual angle. The patterns of confusions across the five categories in the rendered scenes were compared with those obtained in the real scene.
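The abstract does not specify the filtering method, but acuity reduction of this kind is commonly simulated by low-pass filtering an image at a cutoff spatial frequency derived from the LogMAR value. The sketch below is a minimal illustration under two stated assumptions: that the acuity limit maps to a cutoff of 30 / MAR cycles per degree (30 cpd at LogMAR 0), and that a Gaussian modulation transfer function falling to 50% at the cutoff is an acceptable filter shape; the function name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def simulate_acuity(image, logmar, pixels_per_degree):
    """Low-pass filter a grayscale image to approximate a LogMAR acuity.

    Assumption: the acuity limit corresponds to a cutoff spatial
    frequency of 30 / MAR cycles per degree (MAR in arcminutes).
    """
    mar_arcmin = 10.0 ** logmar                   # minimum angle of resolution
    cutoff_cpd = 30.0 / mar_arcmin                # cutoff, cycles/degree
    cutoff_cpp = cutoff_cpd / pixels_per_degree   # cutoff, cycles/pixel

    # Radial frequency grid in cycles/pixel.
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)

    # Gaussian MTF that falls to 50% at the cutoff frequency
    # (assumed filter shape, not taken from the abstract).
    mtf = 0.5 ** ((f / cutoff_cpp) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mtf))
```

Because the filter's gain is 1 at zero frequency, mean luminance is preserved while high-frequency detail is attenuated, which is the key requirement when matching a blurred rendering to a real blurred view.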
The pattern of classification performance using 3D renderings with simulated low vision was nearly identical to that of normally sighted participants doing the same task with the targets in the real space while wearing blurring goggles. Bootstrapping of confusion matrices showed that the digital task had a Spearman's rank correlation of 0.9041, 99% CI [0.8672, 0.9410], with the physical task. A simulated null showed a Spearman's rank correlation of -0.0079, 99% CI [-0.5164, 0.5006].
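One plausible way to obtain such a bootstrapped correlation and CI is to resample trials with replacement within each task, rebuild both 5x5 confusion matrices on each iteration, correlate their flattened cells, and take percentile bounds. The sketch below follows that scheme; the function names, the percentile CI method, and the tie-handling choice are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def confusion_matrix(true, resp, n_classes=5):
    """Count matrix: rows = presented class, columns = reported class."""
    m = np.zeros((n_classes, n_classes))
    for t, r in zip(true, resp):
        m[t, r] += 1
    return m

def spearman(x, y):
    """Spearman rank correlation with average ranks for ties."""
    def ranks(a):
        order = np.argsort(a, kind="stable")
        r = np.empty(len(a))
        s = a[order]
        i = 0
        while i < len(a):
            j = i
            while j + 1 < len(a) and s[j + 1] == s[i]:
                j += 1
            r[order[i:j + 1]] = (i + j) / 2.0 + 1.0  # average rank for ties
            i = j + 1
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

def bootstrap_spearman_ci(trials_a, trials_b, n_boot=2000, alpha=0.01, seed=0):
    """Point estimate and percentile (1 - alpha) CI for the Spearman
    correlation between two tasks' flattened confusion matrices.

    trials_a, trials_b: (true_labels, responses) arrays per task.
    """
    rng = np.random.default_rng(seed)
    ta, ra = map(np.asarray, trials_a)
    tb, rb = map(np.asarray, trials_b)
    point = spearman(confusion_matrix(ta, ra).ravel(),
                     confusion_matrix(tb, rb).ravel())
    stats = []
    for _ in range(n_boot):
        ia = rng.integers(0, len(ta), len(ta))  # resample trials, task A
        ib = rng.integers(0, len(tb), len(tb))  # resample trials, task B
        stats.append(spearman(confusion_matrix(ta[ia], ra[ia]).ravel(),
                              confusion_matrix(tb[ib], rb[ib]).ravel()))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```

With alpha = 0.01 this yields a 99% interval, matching the CIs reported above in form, though the exact values depend on the real trial data.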
Using physics-based 3D renderings and spatial filtering, we observed recognition patterns on computer displays nearly identical to those observed with real obstacles. Our results validate computer displays presenting physically accurate 3D renderings as a means to expand the range of testable spaces in studies of visual accessibility.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.