Nick M Barnes, Adele F Scott, Ashley Stacey, Chris McCarthy, David Feng, Matthew A Petoe, Lauren N Ayton, Rebecca Dengate, Robyn H Guymer, Janine Walker; Enhancing object contrast using augmented depth improves mobility in patients implanted with a retinal prosthesis. Invest. Ophthalmol. Vis. Sci. 2015;56(7):755. doi: https://doi.org/.
© ARVO (1962-2015); The Authors (2016-present)
Prosthetic vision shows promise for improving performance in orientation and mobility tasks. To date, however, prosthetic vision research has been conducted mainly in high-contrast (e.g., black/white) environments, with limited investigation of performance in low-contrast environments. In retinal implants, vision processing is critical for scene understanding, including detecting objects that have low contrast with their surroundings and ensuring they remain visible. The ability to detect poorly contrasted objects is impaired with state-of-the-art vision processing, i.e., Intensity-based visual representations. We evaluated the effectiveness of an Augmented Depth-based vision processing algorithm (ADVP), compared with Intensity-based vision processing (IVP) and System Off (SO), for avoiding low-contrast trip hazards in retinal prosthetic vision.
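The contrast problem described above can be illustrated with a minimal sketch. This is not the authors' actual ADVP algorithm; it is a hypothetical one-scanline example, with made-up pixel and depth values, showing why an intensity-based mapping can render a dark obstacle on a dark floor invisible while a depth-based mapping makes it stand out:

```python
def to_phosphene_levels(values, n_levels=8, lo=0.0, hi=1.0):
    """Quantize normalized values into discrete stimulation levels."""
    span = (hi - lo) or 1.0
    return [min(n_levels - 1, int((v - lo) / span * n_levels)) for v in values]

# One scanline across the floor: a dark obstacle (intensity 0.12) sits on a
# dark floor (intensity 0.10) but protrudes toward the camera (1.2 m away,
# against a 3.0 m background). All values are hypothetical.
intensity = [0.10, 0.10, 0.12, 0.12, 0.10]
depth_m   = [3.0,  3.0,  1.2,  1.2,  3.0]

# Intensity-based representation: obstacle and floor quantize identically.
ivp = to_phosphene_levels(intensity)

# Depth-based representation: nearer surfaces map to brighter phosphenes,
# so the obstacle stands out. Normalize closeness within a working range.
near, far = 0.5, 4.0
closeness = [(far - min(max(d, near), far)) / (far - near) for d in depth_m]
advp = to_phosphene_levels(closeness)

print("IVP :", ivp)   # [0, 0, 0, 0, 0] -- obstacle indistinguishable
print("ADVP:", advp)  # [2, 2, 6, 6, 2] -- obstacle clearly brighter
```

The obstacle's intensity contrast (0.12 vs 0.10) vanishes under coarse quantization, while its depth contrast (1.2 m vs 3.0 m) survives, which is the intuition behind augmenting the representation with depth.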
Two participants with profound vision loss (bare light perception) due to retinitis pigmentosa were implanted with a 24-channel retinal prosthesis (400 or 600 μm electrode diameter) in the suprachoroidal space.

Participants traversed a straight corridor (2.2 × 7.5 m; dark floor, white walls) containing dark ground-based obstacles that varied in number, size, and placement. The presentation order of the visual representation and the obstacle characteristics was randomized. Participants 1 and 2 completed 60 and 25 traversals, respectively, over a two-day period. The primary outcome measure was the number of contacts with obstacles and walls per traversal.
ADVP (P1: n=21, mean=0.714±0.784; P2: n=12, mean=1.33±1.44) was associated with significantly fewer contacts than IVP (P1: n=22, mean=1.18±0.907, p=.025; P2: n=7, mean=3.14±1.77, p=.025) and SO (P1: n=17, mean=1.76±1.20, p=.002; P2: n=6, mean=6.17±3.55, p=.001) for both participants. No significant difference in the number of contacts was evident between IVP and SO for either P1 (p=.237) or P2 (p=.067).
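As a quick consistency check, the per-condition sample sizes reported above sum to each participant's stated traversal total (60 for P1, 25 for P2). The values below are taken directly from the abstract:

```python
# Per-condition traversal counts reported in the results.
p1 = {"ADVP": 21, "IVP": 22, "SO": 17}
p2 = {"ADVP": 12, "IVP": 7, "SO": 6}

print(sum(p1.values()))  # 60, matching P1's reported total
print(sum(p2.values()))  # 25, matching P2's reported total
```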
Vision processing techniques that provide scene understanding through enhanced depth representation can improve the performance of a retinal prosthesis in detecting and avoiding low-contrast trip hazards, compared with the standard Intensity-based visual representation and System Off. These findings highlight the need for robust vision processing methods in retinal prostheses, especially given the display limitations of current devices.