ARVO Annual Meeting Abstract  |   June 2023
Volume 64, Issue 8
Open Access
Navigational outcomes with a depth-based vision processing method in a second generation suprachoroidal retinal prosthesis
Author Affiliations & Notes
  • Lauren Moussallem
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Lisa Lombardi
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Matthew A. Petoe
    Bionics Institute, East Melbourne, Victoria, Australia
    Medical Bionics Department, The University of Melbourne, Melbourne, Victoria, Australia
  • Rui Jin
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Optometry and Vision Science, Faculty of Medicine, Dentistry and Health Science, The University of Melbourne, Parkville, Victoria, Australia
  • Maria Kolic
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Elizabeth K. Baglin
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Carla J Abbott
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
    Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
  • Janine G. Walker
    Health and Biosecurity, Commonwealth Scientific and Industrial Research Organisation, Canberra, Australian Capital Territory, Australia
  • Nick Barnes
    School of Computing, Australian National University, Canberra, Australian Capital Territory, Australia
  • Penelope J Allen
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
    Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
  • Footnotes
    Commercial Relationships   Lauren Moussallem Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Lisa Lombardi Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Matthew Petoe Bionics Institute, Code F (Financial Support), Bionics Institute, Code P (Patent); Rui Jin Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Maria Kolic Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Elizabeth Baglin Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Carla Abbott Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Janine Walker Bionic Vision Technologies Pty Ltd, Code F (Financial Support); Nick Barnes Bionic Vision Technologies Pty Ltd, Code F (Financial Support), Australian National University, Code P (Patent); Penelope Allen Bionic Vision Technologies Pty Ltd, Code F (Financial Support), Centre for Eye Research Australia, Code P (Patent)
    Support  NHMRC Grant 1082358, Retina Australia Grant
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 4616. doi:
      Lauren Moussallem, Lisa Lombardi, Matthew A. Petoe, Rui Jin, Maria Kolic, Elizabeth K. Baglin, Carla J Abbott, Janine G. Walker, Nick Barnes, Penelope J Allen; Navigational outcomes with a depth-based vision processing method in a second generation suprachoroidal retinal prosthesis. Invest. Ophthalmol. Vis. Sci. 2023;64(8):4616.

Abstract

Purpose : Our second-generation (44-channel) suprachoroidal retinal prosthesis has been shown to improve object localisation and navigation in recipients with end-stage retinitis pigmentosa using an intensity-based vision processing method (Lanczos2; L2). However, detecting objects that have low contrast relative to the background remains difficult. Hence, the aim was to compare the effectiveness of a depth-based vision processing method (LBE) with L2 for navigating a laboratory obstacle course containing both high- and low-contrast obstacles.
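
The sketch below illustrates, in outline only, why the two strategies differ on low-contrast obstacles: an intensity-based pipeline downsamples image brightness to the electrode grid, so a white obstacle against a white background produces little signal, whereas a depth-based pipeline converts distance into stimulation strength regardless of contrast. This is a minimal sketch, not the implanted system's software; the 11x4 electrode layout, grayscale/depth camera inputs, and the 0.5-4.0 m depth mapping are all assumptions.

```python
# Minimal sketch (not the device's actual processing) contrasting the two
# strategies. Assumptions: 44 electrodes arranged as an 11x4 grid, 8-bit
# grayscale and metric depth frames, linear depth-to-stimulation mapping.
import numpy as np
from PIL import Image

GRID_W, GRID_H = 11, 4  # hypothetical layout of the 44 channels

def intensity_based(gray: np.ndarray) -> np.ndarray:
    """Lanczos-style intensity processing: downsample image brightness to
    the electrode grid, so stimulation tracks luminance. A white obstacle
    on a white background yields almost no signal here."""
    img = Image.fromarray(gray.astype(np.uint8), mode="L")
    small = img.resize((GRID_W, GRID_H), resample=Image.LANCZOS)
    return np.asarray(small) / 255.0  # stimulation levels in [0, 1]

def depth_based(depth_m: np.ndarray, near=0.5, far=4.0) -> np.ndarray:
    """Depth-based processing: nearer surfaces stimulate more strongly,
    independent of their contrast against the background."""
    d = np.clip(depth_m, near, far)
    salience = ((far - d) / (far - near) * 255).astype(np.uint8)  # 255 = near
    img = Image.fromarray(salience, mode="L")
    small = img.resize((GRID_W, GRID_H), resample=Image.LANCZOS)
    return np.asarray(small) / 255.0
```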

Methods : Four implant recipients (#NCT05158049) with profound vision loss due to retinitis pigmentosa were acclimatised to both vision processing methods. The seeded obstacles comprised a black mannequin, a white mannequin, a black overhanging box, a white overhanging box, a black bin, a white bin, a black stationary box, and a white stationary box. The obstacle course background was white, with a length of 19.2 m. One obstacle was positioned in each of five rows, at the left, middle, or right position, such that five of six objects were used per trial. Participants were masked to the number and type of obstacles. They were asked to detect and navigate through the obstacle course, avoiding contact with each obstacle and the wall (20-30 trials each, randomised).
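
The following is a hypothetical reconstruction of that trial randomisation: one obstacle per row, placed left, middle, or right, with five of six objects drawn per trial. The six-object pool shown is an assumption for illustration (the abstract lists eight obstacle types in total, and does not specify how the per-trial pool was formed).

```python
# Hypothetical trial-randomisation sketch; the object pool is assumed.
import random

OBJECT_POOL = ["black mannequin", "white mannequin", "black overhanging box",
               "white overhanging box", "black bin", "white bin"]
POSITIONS = ["left", "middle", "right"]
N_ROWS = 5

def generate_trial(rng: random.Random) -> list[tuple[str, str]]:
    """Return an (object, position) pair for each of the five rows."""
    chosen = rng.sample(OBJECT_POOL, N_ROWS)   # five of the six objects
    return [(obj, rng.choice(POSITIONS)) for obj in chosen]

rng = random.Random(2023)
for row, (obj, pos) in enumerate(generate_trial(rng), start=1):
    print(f"row {row}: {obj} ({pos})")
```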

Results : Overall, the LBE method (63.6 ± 10.7%, mean ± SD) performed significantly better than L2 (48.5 ± 11.2%) for detection of obstacles (p < 0.001, Mack-Skillings test). For low-contrast (white) obstacles, LBE performed significantly better than L2 (LBE 62.6 ± 5.9%, L2 15.4 ± 10.5%, p < 0.001), whereas for high-contrast (black) obstacles, L2 outperformed LBE (L2 81.7 ± 16.5%, LBE 65.0 ± 14.5%, p = 0.005). Critically, LBE performance was equivalent for high- and low-contrast obstacles, whereas L2's was not. Comparing object types, LBE outperformed L2 for detection of mannequins, overhanging boxes, and large bins (p = 0.045), whereas L2 outperformed LBE for detection of ground-based boxes (p = 0.045).
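
For readers wanting to reproduce this style of paired, nonparametric comparison: the study used a Mack-Skillings test (a Friedman-type rank test for replicated blocked data), which SciPy does not implement. The sketch below therefore substitutes a Wilcoxon signed-rank test on per-participant detection rates as a simpler stand-in, with placeholder values that are not the study's data.

```python
# Illustrative analysis sketch only: Wilcoxon signed-rank as a stand-in for
# the Mack-Skillings test. Values below are placeholders, NOT study data.
import numpy as np
from scipy.stats import wilcoxon

lbe = np.array([60.0, 70.0, 55.0, 69.0])  # placeholder per-participant %
l2  = np.array([45.0, 52.0, 41.0, 56.0])  # placeholder per-participant %

stat, p = wilcoxon(lbe, l2)  # paired, nonparametric comparison
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")
print(f"LBE {lbe.mean():.1f} ± {lbe.std(ddof=1):.1f}% vs "
      f"L2 {l2.mean():.1f} ± {l2.std(ddof=1):.1f}%")
```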

Conclusions : The depth-based vision processing method performed better than the intensity-based method when navigating an obstacle course seeded with both high- and low-contrast obstacles. Hence, there is potential for LBE to be incorporated into the bionic eye vision processing system to aid real-world navigation.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.
