ARVO Annual Meeting Abstract  |   June 2022  |   Volume 63, Issue 7  |   Open Access
Comparing alerts in a smart object finder for visual prosthesis wearers
Author Affiliations & Notes
  • Gislin Dagnelie
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland, United States
  • Roksana Sadeghi
    Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
  • Arathy Kartha
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland, United States
  • Paul Gibson
    Advanced Medical Electronics Corp, Minneapolis, Minnesota, United States
  • Ryan Chamberlain
    Minnesota Health Solutions, Minneapolis, Minnesota, United States
  • Louis Barrett
    Minnesota Health Solutions, Minneapolis, Minnesota, United States
  • Kevin Kramer
    Minnesota Health Solutions, Minneapolis, Minnesota, United States
  • Footnotes
    Commercial Relationships   Gislin Dagnelie None; Roksana Sadeghi None; Arathy Kartha None; Paul Gibson Advanced Medical Electronics Corp, Code E (Employment); Ryan Chamberlain Minnesota Health Solutions, Code E (Employment); Louis Barrett Minnesota Health Solutions, Code E (Employment); Kevin Kramer Minnesota Health Solutions, Code E (Employment)
  • Footnotes
    Support  R44EY027650
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 4523 – F0310. doi:

      Gislin Dagnelie, Roksana Sadeghi, Arathy Kartha, Paul Gibson, Ryan Chamberlain, Louis Barrett, Kevin Kramer; Comparing alerts in a smart object finder for visual prosthesis wearers. Invest. Ophthalmol. Vis. Sci. 2022;63(7):4523 – F0310.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Visual prosthesis wearers' ability to locate people and objects in real-life environments is limited by low resolution and distracting information. Object recognition by a neural net can assist them, provided we can efficiently convey information about the location and nature of a desired object in the visual field.

Methods : Video from a head-worn camera was analyzed for the presence of selected objects by a TensorFlow SSD neural net trained on the COCO dataset, running on a Raspberry Pi processor in a control box worn by the subject. Subjects wore their Argus II system and earbuds, and were either shown the raw Argus II video or given one of 3 prompts for objects in view: a flashing icon in the image, icon + binaural tone conveying direction and size/closeness, or icon + spoken object identity. Here we present data collected in Argus II users looking for and walking towards a person standing in one of 6 pre-selected positions in our lab. Each position was tested twice, in random order, and each outbound trial was followed by a return trial to the starting location; a trial ended when the subject reached the target or when it timed out or was left incomplete. Prior to testing, subjects familiarized themselves with the size of the room and were given several practice trials, with each prompt modality, on a person standing nearby. Outcomes were success rate, number of steps, and time to completion.
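
The abstract does not specify how the binaural tone encodes direction and size/closeness. Purely as an illustration, the Python sketch below maps a normalized SSD-style bounding box to a short stereo cue, with horizontal position driving left/right balance and box area (a stand-in for closeness) driving pitch. The function name, sample rate, frequency range, and panning law are arbitrary choices for this example, not parameters of the actual device.

    import numpy as np

    SAMPLE_RATE = 22050  # Hz; arbitrary choice for this sketch

    def box_to_binaural_cue(box, duration=0.3):
        """Map a normalized detection box (ymin, xmin, ymax, xmax) to a short
        stereo tone: horizontal box center sets left/right balance, box area
        (a proxy for closeness) sets pitch. All mappings are illustrative."""
        ymin, xmin, ymax, xmax = box
        x_center = (xmin + xmax) / 2.0            # 0 = far left, 1 = far right
        area = max((xmax - xmin) * (ymax - ymin), 1e-6)

        # Larger (closer) objects get a higher-pitched tone, 300-1200 Hz here.
        freq = 300.0 + 900.0 * min(area * 4.0, 1.0)

        t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        tone = np.sin(2.0 * np.pi * freq * t)

        # Constant-power panning between the two ears.
        pan = float(np.clip(x_center, 0.0, 1.0))
        left = np.cos(pan * np.pi / 2.0) * tone
        right = np.sin(pan * np.pi / 2.0) * tone
        return np.stack([left, right], axis=1).astype(np.float32)

    # Example: a person detected slightly right of center, filling ~10% of the frame.
    cue = box_to_binaural_cue((0.3, 0.55, 0.8, 0.75))
    print(cue.shape)  # (samples, 2) stereo buffer ready for playback through earbuds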

Results : With the native Argus II system, success rates varied from 25–38%; 95–99 s and 27–33 steps were required to reach the target. In the trials with prompts, these values were: 92–96%, 26–34 s, and 9–17 steps (icon); 83–100%, 31–32 s, and 12–15 steps (icon + tone); and 92–100%, 35–37 s, and 9–18 steps (icon + voice). All changes in prompted trials relative to the native system were significant by a 2-tailed t-test. A difference between prompts was found for only one subject: icon + tone resulted in faster completion than icon alone. Return trials showed a slight, but not significant, improvement over outbound trials. Subjects preferred using the system, but did not express a preference for a particular prompt.

Conclusions : Providing an Argus II user with a system that detects a desired object results in significant improvement in all measured performance aspects, but no evidence was found for an advantage of any particular alert type.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
