Abstract
Purpose:
The Intracortical Visual Prosthesis (ICVP) uses wireless floating microelectrode arrays to stimulate the visual cortex, aiming to restore vision in blind individuals. A camera captures light and sends information to software that controls stimulation of the visual cortex. We tested the device's ability to convey the direction of motion of people passing in front of an implanted participant (P1), a function that is essential for daily navigation and for avoiding moving obstacles. Additionally, we aimed to evaluate potential learning effects across trials.
Methods:
A direction-of-walking task was performed using 2 glasses-mounted cameras, 1 thermal (TC) and 1 visible-light sensitive (VC), and no camera (control). In each trial, P1 sat facing a dark background while a walker in light-colored clothing crossed their field of view at a distance of 2.7 m, entering randomly from the left or the right side. P1 reported verbally from which side the walker entered, and response accuracy and response time were recorded. Twenty trials each were run for the VC and control conditions; only 10 trials were run for the TC condition due to limited time. All analyses used two-sample t-tests.
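As an illustration only, the sketch below shows how the stated two-sample t-test comparisons of response times could be carried out in Python with SciPy. The data are simulated placeholders, not the study's recordings, and the use of Welch's (unequal-variance) test is an assumption; the abstract specifies only that two-sample t-tests were used.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical response times in seconds (not the study's data):
# 20 visible-light camera (VC) trials, 20 no-camera control trials, 10 thermal camera (TC) trials
rt_vc = rng.normal(2.0, 0.5, 20)
rt_control = rng.normal(3.2, 0.8, 20)
rt_tc = rng.normal(2.9, 0.7, 10)

# Two-sample t-test on response time, VC vs. no-camera control
t_stat, p_val = stats.ttest_ind(rt_vc, rt_control, equal_var=False)
print(f"VC vs control: diff = {rt_control.mean() - rt_vc.mean():.2f} s, p = {p_val:.4f}")

# Same comparison between the two cameras
t_stat, p_val = stats.ttest_ind(rt_vc, rt_tc, equal_var=False)
print(f"TC vs VC: diff = {rt_tc.mean() - rt_vc.mean():.2f} s, p = {p_val:.4f}")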
Results:
P1 reached 100% accuracy with both TC and VC, significantly higher than the accuracy in the control condition (45%, p < 0.0001). P1 also responded faster with VC than with the control (difference = 1.20 s, p < 0.01). There was no significant difference between the two cameras, although on average P1 responded faster with VC (difference = 0.998 s, p = 0.066) (Fig. 1). There was evidence of learning: the response time for the second half of VC trials was lower than for the first half (difference = 0.86 s, p < 0.05); however, the linear trend between response time and trial number was weak for both cameras due to high variability (Fig. 2). Future tests will be done to confirm the learning effects, especially for TC.
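For illustration, a minimal sketch of the learning-effect checks described above: a split-half comparison of VC response times and a linear fit of response time against trial number. The data below are simulated with a mild downward trend and are not the study's recordings; only the form of the tests mirrors the reported analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical VC response times with a slight improvement across 20 trials
rt_vc = rng.normal(2.2, 0.5, 20) - 0.04 * np.arange(20)

# First half vs. second half of trials (two-sample t-test)
first_half, second_half = rt_vc[:10], rt_vc[10:]
t_stat, p_val = stats.ttest_ind(first_half, second_half, equal_var=False)
print(f"first vs second half: diff = {first_half.mean() - second_half.mean():.2f} s, p = {p_val:.4f}")

# Linear trend of response time across trial number
trials = np.arange(1, 21)
fit = stats.linregress(trials, rt_vc)
print(f"slope = {fit.slope:.3f} s/trial, r^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f}")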
Conclusions:
The ICVP with a camera gives the user real-time information about the direction of motion, allowing the participant to identify it more quickly and with perfect accuracy; without the device, the participant's performance was at chance. Because the stimulus provided both high contrast and radiated heat, either camera worked well. This study validates the ICVP's capacity for real-time motion detection. Future testing will address more challenging situations, such as lower contrast, to evaluate the flexibility of the cameras and optimal threshold settings.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.