Open Access
Clinical Trials  |   August 2017
Identification of Characters and Localization of Images Using Direct Multiple-Electrode Stimulation With a Suprachoroidal Retinal Prosthesis
Author Affiliations & Notes
  • Mohit N. Shivdasani
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Nicholas C. Sinclair
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Lisa N. Gillespie
    Bionics Institute, East Melbourne, Victoria, Australia
  • Matthew A. Petoe
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Samuel A. Titchener
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • James B. Fallon
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Thushara Perera
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Darien Pardinas-Diaz
    Bionics Institute, East Melbourne, Victoria, Australia
  • Nick M. Barnes
    Computer Vision Research Group, Data61, Canberra, Australian Capital Territory, Australia
    College of Engineering & Computer Science, Australian National University, Canberra, Australian Capital Territory, Australia
  • Peter J. Blamey
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia
  • Correspondence: Mohit N. Shivdasani, Bionics Institute, 384-388 Albert Street, East Melbourne, VIC 3002, Australia; mshivdasani@bionicsinstitute.org
  • Footnotes
     See the appendix for the members of the Bionic Vision Australia Consortium.
Investigative Ophthalmology & Visual Science August 2017, Vol.58, 3962-3974. doi:https://doi.org/10.1167/iovs.16-21311
      Mohit N. Shivdasani, Nicholas C. Sinclair, Lisa N. Gillespie, Matthew A. Petoe, Samuel A. Titchener, James B. Fallon, Thushara Perera, Darien Pardinas-Diaz, Nick M. Barnes, Peter J. Blamey, for the Bionic Vision Australia Consortium; Identification of Characters and Localization of Images Using Direct Multiple-Electrode Stimulation With a Suprachoroidal Retinal Prosthesis. Invest. Ophthalmol. Vis. Sci. 2017;58(10):3962-3974. https://doi.org/10.1167/iovs.16-21311.

Abstract

Purpose: Retinal prostheses provide vision to blind patients by eliciting phosphenes through electrical stimulation. This study explored whether character identification and image localization could be achieved through direct multiple-electrode stimulation with a suprachoroidal retinal prosthesis.

Methods: Two of three retinitis pigmentosa patients implanted with a suprachoroidal electrode array were tested on three psychophysical tasks. Electrode patterns were stimulated to elicit perception of simple characters, following which percept localization was tested using either static or dynamic images. Eye tracking was used to assess the association between accuracy and eye movements.

Results: In the character identification task, accuracy ranged from 2.7% to 93.3%, depending on the patient and character. In the static image localization task, accuracy decreased from near perfect to <20% with decreasing contrast (patient 1). Patient 2 scored up to 70% at 100% contrast. In the dynamic image localization task, patient 1 recognized the trajectory of the image up to speeds of 64 deg/s, whereas patient 2 scored just above chance. The degree of eye movement in both patients was related to accuracy and, to some extent, stimulus direction.

Conclusions: The ability to identify characters and localize percepts demonstrates the capacity of the suprachoroidal device to provide meaningful information to blind patients. The variation in scores across all tasks highlights the importance of using spatial cues from phosphenes, which becomes more difficult at low contrast. The use of spatial information from multiple electrodes and eye-movement compensation is expected to improve performance outcomes during real-world prosthesis use in a camera-based system. (ClinicalTrials.gov number, NCT01603576.)

Over the last decade, retinal prostheses have emerged as the only regulatory-approved technology to provide artificial vision to patients with profound vision loss due to photoreceptor dystrophies such as retinitis pigmentosa (RP).1 These devices work by electrically stimulating surviving second- and third-order retinal neurons via an implanted array of electrodes to elicit the perception of light flashes termed phosphenes. Multiple electrodes can be stimulated to induce the perception of an image, usually captured by a video camera. Although more than 20 groups worldwide have been working to develop such a device, only three devices have been commercialized so far. The Argus II epiretinal (i.e., electrode array attached directly to the inner surface of the retina) prosthesis from Second Sight Medical Products, Inc. (Sylmar, CA, USA) is approved for sale in both the United States and the European Union, while the Alpha-IMS/AMS subretinal (i.e., electrode array placed between the retinal pigment epithelium and the choroid) prosthesis from Retina Implant AG (Reutlingen, Germany) and the IRIS II epiretinal prosthesis from Pixium Vision (Paris, France) are currently approved for sale in the European Union only. Our group, through the Bionic Vision Australia program, has been working on placing an electrode array between the sclera and the choroid, thus using a suprachoroidal approach.2–5 While each anatomical location provides unique advantages, the potential benefits of the suprachoroidal location are the ease of surgical implantation of the electrode array, mechanical stability in situ over the long term, the ability to cover a large area of the visual field, and minimal risk of retinal trauma.6 
During a 2-year clinical trial in three RP patients, our suprachoroidal electrode array was found to be safe, with no ocular complications during surgery; the only intraocular adverse event was bleeding that resolved without intervention.2 Reliable phosphene thresholds could be measured in all patients, and the most effective stimulus parameters3 and individual phosphene characteristics5 were determined through extensive psychophysical studies using single-electrode stimulation. Using a camera-based semiportable system along with head scanning, patients were able to recognize simple shapes and objects, and data from one patient showed that the implant was capable of providing measurable visual acuity on an optotype acuity task.2,4 However, between the psychophysical studies with single-electrode stimulation3,5 and the provision of functional vision using multiple electrodes with a camera-based system,2,4 a number of psychophysical tests involving direct-to-array multiple-electrode stimulation were conducted. The main goal of these tests was to explore how spatiotemporal interactions and vision preprocessing of images,4 as would typically occur with a camera-based system performing multiple-electrode stimulation, affected visual performance, but without the use of head scanning. We wanted to evaluate the degree to which patients were able to integrate spatial information intrinsic to the electrode array with multiple-electrode stimulation. In particular, we evaluated how patients identified three commonly encountered spatial features of stimuli: shape, direction without motion, and direction with motion. In two of the three tasks, an electrical representation of actual visual stimuli was used. A second subgoal was to study how changing either the properties of the stimulus pulses (amplitude, polarity, timing, etc.) corresponding to the representation of the visual stimuli or the properties of the visual stimuli themselves (contrast, speed) affected patient performance. 
The performance of patients in these tasks would be expected to provide insights into their behavior when using a full camera-based system. For example, we would expect patients who perform strongly on these psychophysical tasks to rely less on head scanning to explore a visual scene, while those with lower performance would prefer to use head scanning. These insights would in turn inform targeted rehabilitation strategies, customized so that each patient can fully benefit from the visual percepts received upon electrical stimulation while complementing their ability to integrate such percepts. Additionally, during all tasks the direction of gaze was recorded using an external video-based eye tracker, as it has been shown with the Argus II device that eye movements can significantly affect patient performance on psychophysical tasks involving phosphene localization.7 Specifically, we wanted to ascertain whether eye movements occurred during the image localization tasks and whether the degree and direction of movement were related to performance. A priori, we hypothesized that larger eye movements would lead to poorer performance. 
Materials and Methods
Patient and Device Description
Detailed descriptions have been provided in our previous reports.2,3 Briefly, three patients with RP and long-standing blindness were recruited for this trial. The research followed the tenets of the Declaration of Helsinki, and informed consent was obtained from all three patients after a detailed explanation of the nature of the study. All procedures were approved by the Human Research Ethics Committee of the Royal Victorian Eye and Ear Hospital, and the trial was registered at ClinicalTrials.gov (no. NCT01603576). All patients were implanted with a prototype suprachoroidal retinal prosthesis consisting of a 19 × 8-mm intraocular electrode array with 20 stimulating electrodes (schematic in Fig. 1A). A helically coiled cable exited the eye and connected the electrode array to a titanium percutaneous plug that exited the skin behind the ear. This plug provided direct access to the electrodes via an external stimulator8 for flexibility in conducting various psychophysical experiments through a custom-designed psychophysics software suite.3 
Figure 1
 
Schematic of electrode array layout. (A) Twenty electrodes were available for stimulation. Note that electrodes 9, 17, and 19 were smaller in diameter (400 vs. 600 μm for the other electrodes), and the outer ring of electrodes (filled black) was shorted together and available for use as part of a common ground (CG) return. (B) Arrangement of ganged pairs for P2. A total of 10 ganged pairs were made available for stimulation. P2, patient 2.
General Task Parameters
The stimulus parameters for the various psychophysical tasks were chosen based on a systematic study of perceptual thresholds3 and phosphene characteristics5 from stimulating each individual electrode on the array, and they are summarized in Table 1. Patient 1 (P1) and P2 were able to perceive phosphenes that changed in brightness and size with increasing charge levels and had distinct appearance and locations in visual space for most electrodes on the array.5 In contrast, P3 did not perceive much variance in phosphene appearance regardless of the electrode being stimulated, the stimulus amplitude level above threshold, or the mode of stimulation used.5 As a result, we performed the psychophysical tasks described in this study only with P1 and P2. For all tasks, sequential multiple-electrode stimulation was used with charge-balanced biphasic current pulses presented in an interleaved fashion (i.e., continuous stimulation of multiple electrodes) using a fixed delay of 100 μs from the end of the pulse on one electrode to the start of the pulse on another electrode. Stimulation was performed either in a monopolar (MP) configuration using one of two intraocular return electrodes or in a common ground (CG) configuration using all the electrodes on the intraocular array shorted together as a return. The electrodes stimulated, duration of stimulation, charge amplitudes for each electrode, and duty cycle for stimulation were varied depending on the task. The three tasks were performed at different times relative to the day of surgical implantation in each patient (Table 1). 
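The interleaved timing described above can be sketched as a simple scheduling model. This is an illustrative reconstruction, not the actual stimulator firmware: the function name and the assumption that each biphasic pulse occupies two phases plus an interphase gap, with the next electrode starting a fixed 100-μs delay after the previous pulse ends, follow the text.

```python
def interleaved_pulse_times(n_electrodes, phase_width_us, interphase_gap_us,
                            inter_electrode_delay_us=100.0):
    """Return the start time (in us) of the biphasic pulse on each electrode
    within one interleaved frame. Each pulse occupies two phases plus an
    interphase gap; the next electrode's pulse starts a fixed delay after
    the previous pulse ends (sequential, never simultaneous)."""
    pulse_dur = 2 * phase_width_us + interphase_gap_us
    times = []
    t = 0.0
    for _ in range(n_electrodes):
        times.append(t)
        t += pulse_dur + inter_electrode_delay_us
    return times

# Illustrative frame: 12 interleaved electrodes with the 148-us phase width
# and 20-us interphase gap listed in Table 1 for P1.
starts = interleaved_pulse_times(12, 148, 20)
frame_us = starts[-1] + 2 * 148 + 20  # end of the last pulse in the frame
```

Under these assumptions, a frame of 12 interleaved pulses spans about 4.9 ms, which fits within the 5-ms period implied by 200 pulses per second per electrode.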
Between the time of performing the character identification task and the other tasks, it was found that the electrode–retina distance and thresholds greatly increased in P2,3 to the point where many electrodes could not be stimulated at sufficient levels above threshold while staying below the safe charge limits (447 nanocoulombs for each electrode).3 Therefore, a ganged-pair approach was adopted for P2 in the two image localization tasks, where two adjacent electrodes (for example, 1 and 6, or 5 and 10; Fig. 1B) were shorted together and simultaneously stimulated against the MP return to elicit the perception of a single phosphene. These ganged pairs, described in our previous study, comprised mutually exclusive electrodes, thus giving a choice of 10 different ganged pairs (Fig. 1B) from 20 stimulating electrodes that elicited 10 different phosphenes in P2.3 However, at the time of performing the image localization tasks, it was found that electrode 3 had an intermittent connection within the percutaneous plug, and therefore, for safety reasons, the pair containing electrodes 3 and 8 was disabled and not stimulated. During all tasks, the direction of gaze was recorded using an external infrared eye-tracking camera (Arrington Research, Inc., Scottsdale, AZ, USA) fitted to a pair of glasses to analyze eye-movement data. The eye tracker was continuously sampled at 60 Hz by the psychophysics software, and data were analyzed offline in relation to stimulus times. 
Table 1
 
Summary of Parameters Used for the Three Tasks in Both Patients
Character Identification
The main objective of this task was to assess whether patterns containing multiple electrodes could be identified and recognized in a recall task. To aid patients with memorizing, we configured the pattern sets to create the perception of a simple character (letter or number) that patients could associate with having seen before they lost their vision. As the contribution of each electrode to shape perception was expected to be patient specific due to shape characteristics of individual phosphenes,5 a patient-directed approach was used to form individual electrode patterns. The procedure involved stimulating groups of up to 10 electrodes on the array at fixed levels (up to 6 dB above threshold) and asking patients to describe what they saw. The number and location of electrodes, or the stimulus amplitudes, were adjusted until the patient reported clearly seeing a shape that closely resembled a recognizable character each time the same pattern of electrodes was stimulated, for at least five repeat trials (tested on multiple days over a few weeks). 
Two patient-specific sets of eight recognizable characters were finalized over a few weeks of testing with feedback and familiarization: P1 (Fig. 2A; letters: F and P; numbers: 1, 3, 5, 6, 7, and 9) and P2 (Fig. 2B; letters: C, F, L, O, P, and Z; numbers: 1 and 7). During the initial phase of this task, we were unable to reliably elicit the perception of the numbers 3, 5, 6, and 9 in P2 despite adjusting stimulus parameters, so these were replaced with other letters of the alphabet that were easier to recognize. The finalized pattern sets were then used in an eight-alternative forced choice (8AFC) recall task, with three repeats per pattern (2-second duration) across blocks of 24 trials. The patient was asked to respond with the character seen, with no feedback provided. Data were collected over several weeks from multiple blocks of trials. This method of familiarization followed by testing was chosen deliberately to assess how well patients could integrate spatial information intrinsic to the electrode array and combine multiple phosphenes to associate the resultant percept with a prelearned familiar character when asked to do so at random. As a corollary to this investigation, for P1 we also assessed the effects of electrode configuration (MP versus common ground [CG]), stimulus amplitude above threshold (2 vs. 6 dB), and polarity (anodic-first [AF] versus cathodic-first [CF]) on task performance. 
Figure 2
 
Electrodes used for character identification. The eight different patterns of electrodes used for this task in P1 (A) and P2 (B). Filled electrodes indicate those that were stimulated in a sequential fashion for each character. Note the electrode numbering is oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1). Due to different phosphene locations and phosphene shapes for individual electrodes, the pattern of electrodes stimulated on the array did not always match the intended character (for example with P1, the pattern of electrodes for the number 5 does not look like the number 5 on the array).
Static Image Localization
In this task, fixed patterns of electrodes, selected from discrete regions of the array, were presented to the patient as a direct-to-array variant of the BaLM task (Fig. 3) described in Bach et al.9 Traditionally, the BaLM task has been performed using a camera-based system with head scanning, and results from our patients have been reported previously.2,4 However, the advantage of using a direct-to-array version of the task in this study was threefold. Firstly, we were able to eliminate the contributions of head scanning to localization, as head scanning can increase the acuity provided by the device over that of the physical limit imposed by the geometric spacing between electrodes.10 We were also able to eliminate other factors such as nonvisual information that can influence performance with prosthetic vision.11 Lastly, performance using different stimulation strategies could be directly compared with a high degree of stimulus repeatability. 
Figure 3
 
Setup for the static image localization task in P1 at different contrast levels showing output levels for each electrode obtained from the MVP algorithm applied to each image. The percentage number shown at the top indicates the contrast level (i.e., background intensity subtracted from the wedge intensity, with the wedge intensity fixed to 100%). The number inside each electrode and the color denotes the pixel brightness value (maximum of 255) for that electrode obtained from the vision-processing algorithm. Note, at contrast levels less than 100%, all 20 electrodes were stimulated, whereas at the 100% contrast level, only three electrodes were stimulated for this orientation. Note the electrodes are oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1). MVP, minimal vision processing.
To generate the stimuli, an image of a white wedge (67-degree segment of a circle presented on the center of the screen) on a black background was converted into a set pattern of stimulated electrodes on the array. The wedge was oriented in one of four directions (left, right, up, or down). The choice of electrodes to be stimulated and the stimulus amplitude above threshold for each electrode were determined using vision-preprocessing algorithms4 applied to the image (as would typically happen with a camera-based system), with the electrode layout overlaid as an electrode image field corresponding to a visual field projection of 19.44 W × 19.2 H degrees on the retina.12 To calculate the stimulus amplitude for each electrode, a minimal vision-processing (MVP) scheme4 was implemented, where the brightness of a single nearest-neighbor pixel corresponding to the area of the image overlaid by each electrode was used to assign a brightness output value from 0 to 255 (Fig. 3). An output level of 0 from the algorithm for a given electrode was assigned to a current level of 0 μA, that is, the electrode was not stimulated; the minimum nonzero output of 1 was assigned to the perceptual threshold for that electrode (i.e., 0 dB above threshold); and a level of 255 was assigned to a designated maximum charge level for each electrode (typically, 6 dB above threshold). Output levels between 1 and 255 were quantized into 10 equal steps, which mapped onto 10 corresponding charge steps from 0 dB up to the maximum level in decibels above threshold. The task was set up as a 4AFC where, for each trial, the patient was asked to identify the orientation of the wedge randomly appearing in one of the four directions. Data were collected over several weeks from multiple blocks of trials. However, the task setup differed slightly between the two patients, as described below. 
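The MVP output-to-charge mapping above can be sketched as follows. The endpoints (0 → no stimulation, 1 → threshold, 255 → maximum level) follow the text; the exact quantization arithmetic (rounding to the nearest of the 10 steps) and the function names are our assumptions.

```python
def mvp_level_to_db(output, max_db=6.0, n_steps=10):
    """Map an MVP output level (0-255) to a stimulus level in dB above
    perceptual threshold: 0 -> electrode not stimulated, 1 -> 0 dB
    (threshold), 255 -> max_db, with intermediate levels quantized
    into n_steps equal steps."""
    if output <= 0:
        return None  # current level of 0 uA; electrode not stimulated
    # Quantize outputs 1..255 into step indices 0..n_steps-1,
    # mapped linearly onto the dB scale.
    idx = round((output - 1) * (n_steps - 1) / 254)
    return idx * max_db / (n_steps - 1)

def db_to_charge(threshold_charge_nc, level_db):
    """Convert a level in dB above threshold to absolute charge (nC),
    assuming the usual 20*log10 amplitude convention."""
    return threshold_charge_nc * 10 ** (level_db / 20.0)
```

Under the 20·log10 convention, the typical 6-dB maximum corresponds to roughly double the threshold charge.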
Task Setup in P1
Patient 1 was found to be very good at localizing individual phosphenes,5 and when first trying this task, we obtained ceiling results of 100% accuracy with every configuration of parameters tested. Therefore, to make the task more challenging, a contrast reduction technique (Fig. 3) was implemented with P1. To do this, a different set of images was used with the wedge intensity kept fixed at 100% (i.e., white) and the background intensity increased from black (0% intensity) toward white in 10% grayscale steps until there was only 10% contrast (i.e., 10% intensity difference between the wedge and background). 
Furthermore, performance on this task in P1 was compared using a brightness-balanced versus an unbalanced phosphene map since, at contrast levels below 100%, all 20 electrodes were stimulated (see Fig. 3) and it was possible that brighter electrodes (in the unbalanced map) could skew the patient's perception of wedge orientation. For the brightness-unbalanced map, the maximum stimulus amplitude for each electrode was fixed at 6 dB above threshold. For the brightness-balanced map, the maximum stimulus amplitude on each electrode was adjusted so that all 20 electrodes produced equal brightness at this maximum stimulus amplitude. All electrodes were stimulated in an MP fashion using AF pulses, 148 μs phase width, 20 μs interphase gap, and a stimulation rate of 200 pulses per second (pps) per electrode (Table 1). In order to explore spatiotemporal effects13 when addressing all 20 electrodes with these parameters, two timing strategies were chosen using burst intervals of 167 milliseconds (Table 1). The first strategy stimulated seven electrodes per interval, thus requiring three time intervals to address all 20 electrodes (i.e., interval 1: seven electrodes; interval 2: seven electrodes; interval 3: six electrodes). The second strategy stimulated 12 electrodes per interval, requiring only two of the three intervals to address all electrodes, with the third interval having no electrodes stimulated (i.e., interval 1: 12 electrodes; interval 2: 8 electrodes; interval 3: no electrodes). Intervals were repeated cyclically to provide a total stimulus duration of 2 seconds, and electrodes were selected in decreasing order of pixel intensity assigned by the vision-processing algorithm. Performance between these two timing strategies was compared in conjunction with the use of the brightness-balanced phosphene map only. 
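Both timing strategies amount to filling fixed-capacity 167-ms intervals with electrodes taken in decreasing order of their assigned pixel intensity. A minimal sketch, with a function name of our choosing:

```python
def assign_intervals(levels, per_interval, n_intervals=3):
    """Assign electrode indices to stimulation intervals in decreasing
    order of vision-processing output level, with at most `per_interval`
    electrodes per 167-ms interval. Electrodes beyond the total capacity
    would be left unstimulated."""
    order = sorted(range(len(levels)), key=lambda e: levels[e], reverse=True)
    intervals = [[] for _ in range(n_intervals)]
    for rank, elec in enumerate(order):
        i = rank // per_interval  # fill interval 0 first, then 1, ...
        if i < n_intervals:
            intervals[i].append(elec)
    return intervals
```

With 20 electrodes, `per_interval=7` reproduces the 7/7/6 split and `per_interval=12` the 12/8/0 split described above.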
Task Setup in P2
We previously found with P2 that higher-rate stimulation on single electrodes caused individual phosphenes to be fuller, more persistent, and more intense.5 Also, at the time of this study, a ganged-pair approach was required for stimulation in P2 on this task because individual electrode thresholds had increased. Performance in P2 was therefore compared between two stimulus parameter combinations (Table 1; both using MP stimulation with AF pulses): 500 μs phase width, 500 μs interphase gap, and 50 pps per electrode versus 148 μs phase width, 20 μs interphase gap, and 400 pps per electrode. In these comparisons, only 100% contrast was used, since P2 had difficulty performing the task at lower contrast levels due to all electrode pairs being stimulated in every trial. 
Dynamic Image Localization
In this task, a series of dynamic electrode patterns was presented, corresponding to the processed image of a white moving bar sweeping across the array (Fig. 4). The width of the bar image was fixed at 3.5 degrees, and the speed of the bar was varied between 16 and 80 deg/s, thus varying the duration of each trial from ∼0.24 seconds (for a vertically moving bar at 80 deg/s) to ∼1.71 seconds (for a diagonally moving bar at 16 deg/s). The patient was asked to report the perceived direction of the moving bar. To produce the electrode patterns, a vision-preprocessing algorithm continuously sampled an animated GIF image of a moving bar on a black background approximately every 100 milliseconds (akin to using a camera with a frame rate of 10 Hz), thus activating a different pattern of up to 12 electrodes in P1 or up to six ganged pairs in P2 every 100 milliseconds. The vision-processing scheme in this task included a Lanczos2 filter (as described in Barnes et al.4) to ensure that the amplitude envelope of stimulation was smoothly modulated as the bar moved across the array. The main difference between this vision-processing scheme and the MVP scheme used in the static image localization task was that, instead of a single pixel being assigned to an electrode, multiple pixels covering a circular area were sampled by each electrode, and a weighted average of these pixel values was used to generate the filter output.4 
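The weighted-average sampling can be sketched as follows. The kernel is the standard Lanczos-2 window; the sampling radius, the separable weighting, and the function names are simplifying assumptions of ours (the actual implementation is described in Barnes et al.4).

```python
import math

def lanczos2(x):
    """Standard Lanczos-2 kernel: sinc(x) * sinc(x/2) for |x| < 2, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= 2:
        return 0.0
    px = math.pi * x
    return (math.sin(px) / px) * (math.sin(px / 2) / (px / 2))

def sample_electrode(image, cx, cy, radius=2):
    """Weighted average of pixel brightness around an electrode centre
    (cx, cy), weighted by a separable Lanczos-2 kernel. `image` is a
    2-D list of brightness values (0-255); out-of-bounds pixels are
    skipped and the result is normalized by the weight sum."""
    total, wsum = 0.0, 0.0
    for y in range(int(cy) - radius, int(cy) + radius + 1):
        for x in range(int(cx) - radius, int(cx) + radius + 1):
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                w = lanczos2(x - cx) * lanczos2(y - cy)
                total += w * image[y][x]
                wsum += w
    return total / wsum if wsum else 0.0
```

On a uniform patch the weighted average simply reproduces the patch brightness, while at a moving edge the kernel produces the smoothly modulated amplitude envelope the task required.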
Figure 4
 
Setup for the dynamic image localization task in P1. In each image, the electrodes activated and the output levels for each electrode were obtained from the Lanczos2 filter applied to the moving bar, which was presented to the algorithm as an uncompressed GIF image. The number inside each electrode and its color denote the pixel brightness value for that electrode (maximum of 255) obtained from the vision-processing algorithm. In this example, the bar moved from left to right, as indicated by the arrow. Note that the filter color scheme is identical to that in Fig. 3, and electrodes are oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1).
P1 used a brightness-balanced phosphene map for this task, as brightness balancing was shown to improve performance compared to an unbalanced phosphene map in the previous task (see Results). Stimuli for P1 included eight orientations of the bar (two moving horizontally, i.e., left to right and right to left; two vertically, i.e., top to bottom and bottom to top; and four moving diagonally across the screen) tested at all speeds. P1 completed multiple blocks of 40 trials each (five repeats of each orientation at five different speeds). For P2, because of general difficulties with tasks involving sequential stimulation, lower performance on the first two tasks, and an inability to perform this task at higher speeds, only the four cardinal orientations were tested (without diagonals) using a single speed of 16 deg/s. P2 completed multiple blocks of 16 trials each (four repeats of each orientation at a single speed). 
Data Analyses
Accuracy scores were obtained separately for each task in each patient. Accuracy scores for different stimulus parameters were compared separately for each task and each patient using Pearson's χ2 statistics for binomial distributions. Confusion matrices were further constructed to examine stimulus-versus-response patterns for the different stimuli in each task. Eye-movement data obtained with the eye tracker (Fig. 5A) were analyzed only for the static image localization task, for three datasets, since this was the task where we expected the maximum influence of eye movement on performance. These were (1) P2 at a stimulus rate of 50 pps, (2) P2 at a stimulus rate of 400 pps, and (3) P1 at a contrast level of 30% with all three brightness-balancing strategies combined (this contrast level was chosen to match the accuracy of P2 on this task). Additionally, sequences of eye-position data recorded in the absence of any stimuli (using sham stimulus times) were used as a control for each patient, giving a total of five datasets. The eye-tracker system output the gaze location at each sample in time as an x,y coordinate pair (Fig. 5B). The eye-tracking system was uncalibrated, so the absolute values of gaze location were converted to eye movement (separately for the x and y directions) relative to a reference point (see Fig. 10 in Results). Gaze location immediately before each trial served as a convenient reference, since patients were asked to center their eyes before each trial was initiated. Prior to determining the central gaze location, all raw data (sampled at 60 Hz) were smoothed using a centered moving-average window of 300 milliseconds and downsampled to 20 Hz. 
The average gaze location at the time point 150 milliseconds before trial onset was chosen as an approximation of the center gaze location for each trial, as this was obtained entirely from samples during the 300 milliseconds immediately before stimulation (i.e., −300 to 0 milliseconds). In the x direction, positive eye movements correspond to the right and negative movements to the left, while in the y direction, positive movements correspond to upward and negative movements to downward. 
Figure 5
 
Eye-tracking data. (A) Video image of eye-tracker view to determine if patient was looking at the center before initiating the start of each trial. (B) Raw eye-tracker data (sampled at 60 Hz) during task 2 (x gaze and y gaze) as a function of time during a 2-second stimulus presentation (indicated by dashed lines).
Figure 6
 
Mean (±SEM) accuracy for P1 on the character identification task for different stimulus parameters. Accuracy was found to have no dependency on the level above threshold, the return configuration, or the polarity of the pulses (χ2 = 5.1, P = 0.645).
Figure 7
 
Confusion matrices showing overall percentage number of trials where P1 and P2 responded for a given presented pattern on the character identification task. To better highlight confusions, each cell is colored according to its frequency of occurrence from 0% (yellow) to maximum (green).
Figure 8
 
Mean accuracy scores (± SEM) across five blocks of trials in P1 on the static image localization task when testing different contrast levels. Accuracy significantly depended on the contrast and also depended on the stimulation timing strategy at a contrast of 10%.
Figure 9
 
Confusion matrices showing overall percentage number of trials where P2 responded with a wedge orientation after excluding the “not clearly seen” trials on the static image localization task. P2 performed significantly better on the task when testing with the 400-pps stimuli as it was found to evoke clearer, brighter, and more persistent phosphenes.5 The left direction was found to invoke the least accuracy. D, down; L, left; R, right; U, up. To better highlight confusions, each cell is colored according to its frequency of occurrence from 0% (yellow) to maximum (green).
Figure 10
 
Relative eye movement from center as a function of time. (A) Average (solid line) ± SEM (shading) for x (blue) and y (red) direction eye movements relative to the center gaze location (−0.15 seconds) in P1 (n = 45 trials). (B) Similar data plotted for P2 when using the 400-pps stimulus rate (n = 84 trials). Similar data also analyzed when using sham trials constructed from time between stimuli in (C) P1 (n = 99 trials) and (D) P2 (n = 73 trials).
Statistical analyses were performed on the eye-tracker data to answer three main questions: (1) Did the patients perform significant eye movements that were stimulus related? (2) Did the magnitude of eye movement vary with the accuracy of each trial (i.e., whether the patient answered correctly or incorrectly)? (3) Was the direction of eye movement associated with the direction of the stimulus? Eye movement relative to center was analyzed using a separate analysis of variance (ANOVA) in each patient and separately for the x and y directions. Post hoc analyses were carried out using the Tukey method. 
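A minimal sketch of this ANOVA-plus-Tukey pipeline, using synthetic eye-movement magnitudes in place of the recorded traces (the group sizes and means below are invented purely for illustration):

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(42)
# Synthetic relative eye-movement magnitudes (arbitrary units), one group
# per time bin across the 2-second stimulus window (values are invented).
bins = [rng.normal(mu, 0.3, 30) for mu in (0.0, 0.4, 0.8, 1.0)]

f_stat, p = f_oneway(*bins)   # question 1: was movement stimulus related?
posthoc = tukey_hsd(*bins)    # Tukey post hoc: which groups differ?
```

The same structure applies to questions 2 and 3 by grouping trials by accuracy (correct vs. incorrect) or by stimulus direction instead of by time bin.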
Results
Both patients were able to perform all three tasks but with significantly different levels of accuracy. P1 performed better than P2 across all tasks. 
Character Identification
When analyzing across all stimulus parameters and all trials, P1 scored significantly above chance (12.5% accuracy) on this 8AFC task (P1, n = 600 trials, 45.5% accuracy, χ2 = 158.7, P < 0.0001); however, P2 did not perform significantly better than chance (n = 96 trials, 17.7% accuracy, χ2 = 1, P = 0.314). When looking at the effect of stimulus parameters in P1, accuracy scores ranged between 37.5% and 62.5% within each block of 24 trials, but on average did not depend on the stimulus parameter combination (Fig. 6; χ2 = 5.1, P = 0.645). Confusion matrices were generated for both P1 and P2 in order to assess stimulus-dependent effects on accuracy (Fig. 7). Accuracy scores across all trials varied markedly with the stimulus pattern for both P1 and P2. Accuracy scores for P1 ranged between 2.7% for the number 1 and 93.3% for the number 7, with the letter P (69.3%) being the next most accurate. For P2, scores ranged between 8.3% for the letter Z and 36.4% for the letter L, with the number 7 (27.3%) being the next most accurate. 
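The above-chance comparisons here use Pearson's χ2 statistics; an exact binomial test is an equivalent way to check performance against chance. The trial counts below are reconstructed from the reported percentages, so this is a sketch rather than the study's analysis code.

```python
from scipy.stats import binomtest

# Correct-trial counts reconstructed from the reported accuracies
# on the 8AFC task (chance = 1/8 = 12.5%).
p1 = binomtest(273, 600, p=1/8, alternative="greater").pvalue  # P1: 45.5% of 600
p2 = binomtest(17, 96, p=1/8, alternative="greater").pvalue    # P2: 17.7% of 96
```

As in the reported χ2 analysis, P1's performance is far above chance while P2's does not reach significance.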
In P1, most confusion occurred where the number 1 (in 70.7% of trials) and the letter F (in 54.7% of trials) were both perceived as the letter P. However, the confusion between 1 and P was highly asymmetrical: 1 was confused with P, but not the reverse. Similarly, the numbers 3, 5, and 9 were often confused with each other. In P2, there was some degree of confusion with every pattern type, with the most prominent confusions being the letters F and Z evoking responses of the number 1 (in 50% of trials) and the letter O (in 41.7% of trials), respectively. 
Static Image Localization
Both P1 and P2 performed significantly above chance (P < 0.05) on this task. Furthermore, they also performed above the clinically relevant pass criterion9 of 62.5% when the target versus background contrast was set to 100%. The 62.5% pass criterion was used since there is less than a 0.011% probability of randomly achieving or exceeding this criterion, based on analyses of the psychometric characteristics of 4AFC tasks.9 P1 performed with 100% accuracy on this task when the contrast was set to 100%, but performance worsened at lower contrast levels (Fig. 8). While P1 performed at a similar level of accuracy when using brightness-balanced and brightness-unbalanced phosphene maps, the timing strategy of seven electrodes per time interval tended to improve performance, particularly at low contrast (Fig. 8). To confirm this, statistical analyses were performed for the three low-contrast bins (10%, 20%, and 30%) using χ2 tests and Bonferroni correction. Performance was significantly higher for the balanced map with seven electrodes per frame compared to the other two strategies, but only at 10% contrast (χ2 = 8.75, P = 0.013). 
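The pass criterion corresponds to a tail probability under random guessing and can be checked directly from the binomial distribution. The 40-trial block size below is an assumption for illustration; the actual derivation of the criterion follows the psychometric analyses in reference 9.

```python
import math
from scipy.stats import binom

def pass_prob(n_trials, chance, criterion):
    """Probability that random guessing meets or exceeds the pass criterion."""
    k = math.ceil(criterion * n_trials)       # minimum number of correct trials
    return binom.sf(k - 1, n_trials, chance)  # P(X >= k)

# Illustrative: a hypothetical 40-trial 4AFC block with the 62.5% criterion
p_random = pass_prob(40, 0.25, 0.625)
```

For any reasonable block size, the chance of guessing one's way to 62.5% correct on a 4AFC task is well below the 0.011% bound cited above.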
In P2, accuracy significantly depended (χ2 = 20, P < 0.0001) on whether pulses were presented at 400 pps (n = 168 trials, 63.1% accuracy) or at 50 pps (n = 168 trials, 38.7% accuracy). In addition, when using the 50-pps rate, P2 reported not clearly seeing the stimulus for a significantly greater number of trials compared to when using the 400-pps rate, and therefore a forced guess was necessary (χ2 = 7.5, P < 0.01; 50 pps, n = 14 trials not clear; 400 pps, n = 3 trials not clear). Confusion matrices for this task (Fig. 9), after excluding the “not clearly seen” trials, revealed that the left direction was the most difficult to identify with both the 50-pps (17.2%) and 400-pps (54.5%) stimuli, while the direction with the highest accuracy was either up (50 pps, 63.4%) or right (400 pps, 70.5%). 
The Effect of Eye Movements on Image Localization
Figure 10 shows the average (± standard error of the mean [SEM]) x and y magnitude of eye movement relative to the center gaze location as a function of time relative to stimulus onset. Both P1 (Fig. 10A) and P2 (Fig. 10B) exhibited significant eye movement during the 2-second stimulus presentation (separate 1-way ANOVAs, P < 0.001). Note that the observed nystagmus did not affect the quality of the eye-tracking information (as determined by the pupil quality measure reported by the commercial eye-tracker software). P1 moved more in the x direction than in the y direction, while P2 tended to move more in the y direction during the first half of the trial and in the x direction toward the second half of the trial. No such systematic eye movements were observed in either P1 (Fig. 10C) or P2 (Fig. 10D) when sham stimulus trials were analyzed in periods during which there was no stimulus presented (separate 1-way ANOVAs, P ≥ 0.986). However, the variability in the eye-movement data during these times supports the notion of varying degrees of resting-state nystagmus in both patients. 
To analyze the effects of stimulus direction and accuracy, we calculated the average eye movement separately for correct and incorrect trials for each stimulus direction (Fig. 11). A 2-way ANOVA showed that the average magnitude of eye movement across all trials relative to center depended significantly on the interaction between stimulus direction and accuracy (P < 0.001), though the dependence differed between patients and between the two stimulus rates in P2 (Fig. 11B, 11C). Generally, P1 moved her eye for all stimulus directions mainly toward the right but also slightly downward (Fig. 11A), while P2 always moved his eye toward the left but also in the direction congruent with the up and down stimuli (Fig. 11B, 11C). In addition, the magnitude of eye movements was higher in P2 compared to P1 but similar for both stimulus rates in P2. Notably, when analyzing eye movement in the direction congruent with the stimulus, correct trials were associated with significantly larger eye movements than incorrect trials. Table 2 shows the relationship between the magnitude of eye movements and accuracy in the two movement directions for each of the patient data sets and stimulus directions, using the Tukey method for post hoc comparisons. In all but two conditions, P1 and P2 performed significantly larger eye movements when they answered correctly than when they answered incorrectly (Table 2). In the two exceptions (both when the stimulus direction was toward the right), P1 performed a larger rightward movement for incorrect than for correct trials, and P2 performed a similar degree of leftward movement for both correct and incorrect trials. 
Figure 11
 
Average eye movement in arbitrary units (a.u.) from center (0,0). (A) Average eye movement from center as a function of stimulus direction and accuracy for P1 (correct trials: green; incorrect trials: red). Circles represent mean (±SEM) of those movements where correct trials were associated with significantly larger eye movements compared to incorrect trials. Dots represent mean of movements that were either not significantly different between correct and incorrect trials or where incorrect trials were associated with a larger eye movement (Tukey post hoc tests; see Table 2). Similar data are plotted for P2 when using a 50-pps rate (B) and when using a 400-pps rate (C). Note the axes have been adjusted to display the range of average movement in each patient.
Table 2
 
Relationship Between Eye Movement and Accuracy
Dynamic Image Localization
Figure 12 shows total accuracy scores in P1 for the dynamic image localization task. Performance was above chance at all speeds tested and at a high level of accuracy at most speeds. Only at the fastest speed was performance found to reduce to just below the pass criterion of 56.25% based on an 8AFC task.9 Analyses (χ2) revealed that accuracy significantly depended on the speed of the moving bar (n = 56 trials with each speed, χ2 = 28.8, P < 0.0001). Multiple comparison pairwise χ2 tests after Bonferroni correction against a control speed of 16 deg/s showed that performance significantly reduced only when testing the 80 deg/s speed (χ2 = 10.5, P < 0.0001). 
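These pairwise comparisons can be sketched as 2 × 2 χ2 tests with Bonferroni correction; the per-speed correct counts below are hypothetical (the actual scores appear in Fig. 12).

```python
import numpy as np
from scipy.stats import chi2_contingency

N = 56                       # trials per speed
correct = {16: 48, 80: 30}   # hypothetical correct counts (illustrative only)

# 2 x 2 contingency table: rows = speeds, columns = correct/incorrect
table = np.array([[correct[16], N - correct[16]],
                  [correct[80], N - correct[80]]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
p_bonf = min(p * 4, 1.0)     # Bonferroni over four non-control speeds
```

Each non-control speed would be tested against the 16 deg/s control in the same way, with the correction guarding the family-wise error rate.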
Figure 12
 
Total accuracy scores in P1 on the dynamic image recognition task when testing with different moving bar speeds. Accuracy significantly depended on the speed of the bar and significantly reduced only when the speed was increased to 80 deg/s. Dashed line indicates 56.25% accuracy (pass criterion).
On this task, with four alternative choices of bar orientation and a constant speed of 16 deg/s, P2 performed significantly above chance (P < 0.05) and significantly better when testing with the 400-pps stimuli compared to the 50-pps stimuli (χ2 = 5.3, P < 0.05). He was unable to reach the pass criterion of 62.5% (4AFC9) for either stimulus rate tested (50 pps, n = 48 trials, 27.1% accuracy; 400 pps, n = 48 trials, 50% accuracy). In addition, the patient reported that ∼10% of the trials were not seen when testing at 50 pps. Confusion matrix analyses did not reveal any systematic pattern of confusion between the four stimulus directions (data not shown). 
Discussion
This study aimed to provide insights into the type of perceptual information that could be delivered to blind recipients implanted with a suprachoroidal retinal prosthesis through stimulation of multiple electrodes. Performance was evaluated on a range of psychophysical tasks involving character identification, static image localization, and dynamic image localization. The main finding is that sequential stimulation of multiple electrodes can produce distinct and recognizable percepts, and patients are generally able to use this information to identify characters, locate them in the visual field, and recognize the direction of motion; however, there can be a vast degree of performance variation between patients and between different stimulation strategies. 
In the first task, patients were tested on character identification after having learned the percepts evoked from predetermined meaningful patterns of stimulation. Most studies involving patients implanted with other devices have used ad hoc pattern identification and reported outcomes,14–16 and only a few studies, including ours, have assessed this in a systematic task-based form. For example, in a study using subretinal stimulation17 with interelectrode gaps of 10–200 milliseconds (i.e., very slow interleaving compared to our study, where gaps were ∼100 μs), identification of different letters was assessed in one patient. In that study, the patient scored above chance for three different groups of four letters each, and scored nearly 100% when distinguishing between the letters C, I, L, and O. While the accuracy scores obtained for some groups of letters were higher in this patient than those found in our study, it is important to note that the two study protocols were significantly different. In the subretinal study, since electrodes were not continuously interleaved and much larger interpulse intervals were used, it is likely that spatiotemporal electrode interactions would have occurred differently and the patient would not have seen phosphenes as occurring simultaneously. This notion is corroborated by another study using epiretinal stimulation in patients,18 in which the amount of phase shift between pulses was shown to affect performance on a discrimination task between two patterns of interleaved electrodes, with smaller phase shifts usually harder to discriminate.18 Despite the task being difficult, we found that P1 could identify most characters with a reasonable degree of accuracy and P2 could identify some of the characters. 
It is possible that some of the characters were identified solely on the basis of their overall or local differences in brightness against others or the presence of a given electrode in the pattern evoking a unique phosphene shape. This may have been due to the fact that we did not brightness-match the individual phosphenes evoked by single electrodes for this task. It is important to note that patients were familiarized for a few weeks with these electrode patterns, and both patients described that they did perceive letters and numbers upon stimulation. Therefore, we expected that some level of cognition in recognizing spatial information delivered through the patterns would be maintained during the testing sessions. Another possibility, as also discussed by Horsager et al.,18 is that our patients used fine temporal differences between different electrodes forming a pattern as a cue to recognize a certain character; however, this is unlikely given the very short (100 μs) interval between electrodes. 
When analyzing the confusion matrices in Figure 7 and the electrode patterns in Figure 2, the accuracy of identifying a given character appears to have depended to some extent on the similarities and differences in the electrodes used to create it. For example, in P1, confusion between the letters F and P could be attributed to only one electrode (E4) being additionally stimulated for the character P. Similarly, the electrodes used to form the characters 3, 5, and 9 were very similar (E4 swapped with E11 was the only difference between 5 and 9), thus possibly causing confusion between them. Conversely, the high accuracy in identifying the number 7 could be attributed to the fact that this pattern uniquely involved stimulation of E13 and E19, which were not used in any other patterns. The asymmetrical confusion between 1 and P was surprising; however, it raises the possibility of axonal activation leading to a more elongated percept in the direction temporal to the optic nerve, as depicted in recent calcium-imaging studies, particularly as the phase widths used for this task would have led to both direct and network-mediated activation of retinal ganglion cells.19 Similarly, in P2, while the overall score was below chance, the character L was the most identifiable, and the pattern used to form this character activated electrodes that evoked phosphenes in only the left and upper visual fields. The confusion between the other characters seemed more random in this patient. Interestingly, in P1 we expected more current spread and larger phosphene overlap at higher stimulus amplitudes, possibly causing difficulty in character identification. However, increasing the charge on each electrode from 2 to 6 dB above threshold did not affect identification performance. 
While the brightness of phosphenes elicited by individual electrodes would increase with increasing charge, these increases may not have scaled by the same amount across electrodes; it is therefore encouraging that performance did not worsen at higher amplitudes. Finally, going from a MP mode of stimulation to a CG mode (which may be expected to reduce current spread20 and therefore reduce phosphene overlap) also did not affect performance. These combined results further suggest that both patients may have relied predominantly on cues evoked by stimulation (or exclusion) of unique electrodes, but their abilities to use these cues greatly differed. While we have data from only two patients, the intersubject variability seen in our study is consistent with another recent study assessing long-term reproducibility of phosphenes evoked by electrode patterns in a larger cohort of patients.21 Thus, overall, this test provided some basic insights into whether individual patients were likely to use spatial information intrinsic to the electrode array. Specifically, we confirmed that patterns more similar in spatial aspects (i.e., differing by only one to two electrodes) are easily confused, while those that involve orthogonal sets of electrodes can be highly recognizable despite extensive phosphene overlap. In addition, axonal stimulation can possibly distort patterns to an extent that they are easily confused with another pattern. Confusion between patterns that share common electrodes has also been reported for Argus II users interpreting direct-to-array braille patterns.16 
When static image localization was performed, P1 was able to detect the orientation of the wedges with high accuracy when using both high- and low-contrast images, while P2 performed above chance levels and even exceeded the pass criterion when using the 400-pps stimuli, but only for a high-contrast image. The difference between P1 and P2 was probably due to P1's ability to recognize small brightness differences between phosphenes and localize individual phosphenes in the visual field,5 whereas P2 may have relied more on overall brightness cues for this task. It is important to note, however, that for some wedge orientations, P2 was able to score at a high level (up to 70%), indicating an ability to obtain limited spatial information from multiple-electrode stimulation. The lower scores in the left direction are most likely due to the fact that we had to disable the electrode pair containing electrodes 3 and 8 (see Methods), and stimulation of this electrode pair would have strongly conveyed information on the left. 
In addition, testing with P1 at low contrast highlighted that a brightness-balanced map of phosphenes with fewer electrodes stimulated per time interval may be more useful than an unbalanced map when multiple phosphenes are presented concurrently. It is easy to see why balancing phosphene brightness would lead to higher performance, but less clear why seven electrodes per time interval tended to improve performance. We hypothesize this may be related to the fact that we used three time intervals to complete one cycle of stimulation. In each interval, the seven brightest of the 20 electrodes, which were chosen to be stimulated first, would have been those overlaying the wedge rather than the background, since the wedge was always brighter than the background by at least 10%. Conversely, when the system picked 12 electrodes to stimulate in each interval, these would include a mix of the brightest electrodes overlaying the wedge as well as background electrodes evoking less-bright phosphenes. Thus it is possible that P1 made use of not just spatial cues but combined spatiotemporal and brightness cues to determine the orientation of the wedge at low contrast levels. The notion of using temporal cues is also strengthened by the results obtained at 10% contrast, whereby P1 scored above chance only when using the seven-electrodes-per-interval strategy. At this contrast, the use of brightness cues would be difficult given the quantization of pixel brightness values into electrical stimulus amplitudes, which meant that all electrodes were stimulated at the same level above threshold (for the unbalanced maps) or at the same brightness (for the balanced maps). 
While a strategy involving fewer electrodes per interval and a brightness-balanced phosphene map might be more beneficial to use in a full camera-based system, P1's scores remained above the pass criterion when using 12 electrodes per interval and even when using brightness-unbalanced phosphenes. This was true up to contrast levels as low as 30%, making P1's ability impressive considering the degree of difficulty with several phosphenes occurring simultaneously at different brightness levels. Thus, overall, both patients were able to localize phosphenes occurring with multiple-electrode stimulation, but with significantly different abilities. 
The notion that P2 relied predominantly on brightness cues for phosphene localization was confirmed with the dynamic image localization task. This task predominantly required the use of spatial cues, and P2, while scoring above chance, was unable to reach the pass criterion, again indicating that limited spatial information was conveyed to him. P1, on the other hand, was highly accurate at detecting and following changes occurring in phosphenes as the bar moved, and even at speeds up to 64 deg/s performance did not drop substantially. To our knowledge, only one other study has reported the identification of motion direction using a similar moving-bar task (albeit with a head-mounted camera and with head scanning) in patients implanted with the Argus II device.22 In that study, most patients were tested using a bar speed of 31.6 deg/s, and because results were reported as a response error in degrees rather than as an accuracy score, we are unable to directly compare our results. Other studies in patients implanted with the Alpha-IMS device also tested motion recognition; however, they used a random sequence of dots on a screen that moved in a certain direction.23,24 In those studies, patients could recognize motion at speeds of up to 35 deg/s. Thus, although our data are limited, we believe suprachoroidal stimulation has the potential to provide vision comparable to that of other devices, but with the added benefit of a simpler surgery and a more stable electrode–tissue interface. 
While the low scores with P2 on all three tasks may allude to the importance of being able to integrate spatial information through the implant, we cannot rule out the possibility that the much larger electrode–retina distances and associated increased thresholds observed in P2 compared to P1 over the first year of implantation2 significantly affected the resolution of the device and P2's ability to integrate spatial information, as higher charge levels would have led to more current spread and more overlap between adjacent phosphenes. Interestingly, P2 did perform better overall when using the 400-pps stimuli compared to the 50-pps rate; this correlates with our previous report that phosphenes were clearer and more distinct5 and thresholds were lower3 when using higher stimulus rates. It may thus be useful for a suprachoroidal implant to stimulate at higher rates for patients with large electrode–retina distances, although the safety of high-rate stimulation of the eye is currently unknown. 
Some interesting observations were also made when assessing the effect of eye movement on performance. First, both patients tended to perform systematic, seemingly controlled, and less variable stimulus-related eye movements even when instructed to keep their eyes centrally fixated. Interestingly, this was despite P2 having a more severe degree of nystagmus in a resting state compared to P1, as observed in our data (larger SEM of eye movements in Figure 10D compared to 10B when no stimuli were being presented) and also by visual observation of his eyes between trials. The intentional eye movements may be a natural response to patients having located a phosphene in their visual field. In a study performing epiretinal stimulation, it has been shown that eye movements can cause phosphenes to move farther away in the visual field.7 Therefore, a priori, one might expect that eye movements would have been the main reason for P2's poor performance and for the reduction in P1's scores with decreasing contrast, and to some extent this is true, as P2 did move his eyes to a larger degree than P1. However, further analyses of our data challenge this assumption to some extent. We observed a similar degree of eye movements for P1 across different contrast levels (data not shown), and both patients exhibited significantly larger eye movements on trials where they responded correctly rather than incorrectly. These results suggest that eye movements may have helped our patients to respond correctly, or that they moved their eyes after they detected the stimulus direction. We cannot determine the exact phenomenon since we did not record their response times in relation to stimulus times. We also cannot rule out that these movements were related to the specific placement of the electrode array with respect to the retina or to compensatory habits learned by the patients. 
Secondly, we found that in P2 for 3/4 stimulus directions, regardless of whether a trial was answered correctly or incorrectly, eye movements were congruent to the stimulus direction, while P1's eyes did not “follow” the direction of the phosphenes. This was a very interesting finding as it suggests that P2's eyes “knew” the direction of where phosphenes occurred and reflexively moved in that direction, but that did not always translate into enough cognitive information for him to answer correctly. The relationship between the direction of eye movement and the stimulus direction seen in P2 has also been observed in patients implanted with the Alpha-IMS photodiode-based device,25 where eye gaze location correlated with the location of the stimulus. This strengthens the notion that the eye's natural response to any visual stimulus occurring in our visual field is to move in the direction of that stimulus, as shown in the control subjects in the study performed by Hafed et al.25 
It is important to note that while eye movements in our patients are not likely to be the primary reason for poor performance in a static image localization task requiring spatial discrimination, eye movements, particularly those associated with nystagmus, would presumably be detrimental when using the prosthesis in conjunction with a fixed front-facing camera. Sabbah et al.7 showed that head and eye position in implanted patients were significantly mismatched and that patients had to train themselves on strategies to combat the effects of eye movements on perception, particularly when trying to perform tasks involving visual and motor coordination. It is also particularly interesting that both P1 and P2 scored significantly lower on a similar wedge task when head scanning with an external camera was involved in our previous study.4 Specifically, P1 and P2 on average scored 18% and 13% lower, respectively, when using 100% contrast wedges and the same vision-processing algorithm settings with an external camera compared to the direct-to-array task described in this study. This further highlights the negative influence that head scanning without compensating for eye movements can have on real-world performance, as patients were not asked to keep their eyes centrally fixated when conducting the camera-based study, and we observed that they were not able to maintain a steady gaze despite being instructed to do so. This stresses the importance of eye tracking in patients implanted with a visual prosthesis, so that not only can patient eye behavior be studied during use of the prosthesis, but compensation techniques can also be designed to combat the issue of phosphenes moving as a result of eye movements.7,26 One way to implement a compensation technique would be to enable prosthesis users to scan a visual scene exclusively with their eyes or in conjunction with head scanning. 
Based on the combined results from all three tasks in our study, we would expect that in a camera-based navigation task P1 would use both spatial and brightness information in phosphenes, whereas P2 may predominantly use a “phosphene detection” approach with limited spatial information through multiple-electrode stimulation. Indeed, we have already shown2,4 that P1 has been able to perform reasonably well on the Landolt-C and grating acuity tasks, which require an even higher degree of spatial resolution than the tasks presented in this study. P2 was not able to perform these acuity tasks, indicating that the spatial information provided to him by the implant was limited. Nevertheless, our study has shown that suprachoroidal stimulation can provide enough information to enable patients to perform basic identification and localization tasks with multiple-electrode stimulation. We propose that the tasks described in this paper could be used as part of a screening process to assess each patient's ability to integrate spatial information intrinsic to the electrode array and to identify salient features of visual stimuli before embarking on full camera-based use of the prosthesis. The results of these tasks could then be used to optimize rehabilitation strategies so that each patient can fully utilize the information received from electrical stimulation and gain maximum benefit from the prosthesis. 
Acknowledgments
Supported by the Australian Research Council through its Special Research Initiative in Bionic Vision Science and Technology awarded to Bionic Vision Australia, an NHMRC Project Grant 1082358 awarded to PJ Allen, and by the Bertalli Family and Clive & Vera Ramaciotti Foundations to the Bionics Institute; the Victorian Government through its Operational Infrastructure Program (Bionics Institute and the Centre for Eye Research Australia [CERA]); and a National Health and Medical Research Council, Centre for Clinical Research Excellence Award #529923 (CERA). 
Disclosure: M.N. Shivdasani, P; N.C. Sinclair, P; L.N. Gillespie, None; M.A. Petoe, P; S.A. Titchener, None; J.B. Fallon, None; T. Perera, None; D. Pardinas-Diaz, None; N.M. Barnes, P; P.J. Blamey, P 
References
Lok C. Curing blindness: vision quest. Nature. 2014; 513: 160–162.
Ayton LN, Blamey PJ, Guymer RH, et al. First-in-human trial of a novel suprachoroidal retinal prosthesis. PLoS One. 2014; 9: e115239.
Shivdasani MN, Sinclair NC, Dimitrov PN, et al. Factors affecting perceptual thresholds in a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2014; 55: 6467–6481.
Barnes N, Scott AF, Lieby P, et al. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering. J Neural Eng. 2016; 13: 036013.
Sinclair NC, Shivdasani MN, Perera T, et al. The appearance of phosphenes elicited using a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2016; 57: 4948–4961.
Shepherd RK, Shivdasani MN, Nayagam DA, Williams CE, Blamey PJ. Visual prostheses for the blind. Trends Biotechnol. 2013; 31: 562–571.
Sabbah N, Authie CN, Sanda N, Mohand-Said S, Sahel JA, Safran AB. Importance of eye position on spatial localization in blind subjects wearing an Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2014; 55: 8259–8266.
Slater KD, Sinclair NC, Nelson TS, Blamey PJ, McDermott HJ. NeuroBi: a highly configurable neurostimulator for a retinal prosthesis and other applications. IEEE J Transl Eng Health Med. 2015; 3: 1–11.
Bach M, Wilke M, Wilhelm B, Zrenner E, Wilke R. Basic quantitative assessment of visual performance in patients with very low vision. Invest Ophthalmol Vis Sci. 2010; 51: 1255–1260.
Caspi A, Zivotofsky AZ. Assessing the utility of visual acuity measures in visual prostheses. Vision Res. 2015; 108: 77–84.
Garcia S, Petrini K, Rubin GS, Da Cruz L, Nardini M. Visual and non-visual navigation in blind patients with a retinal prosthesis. PLoS One. 2015; 10: e0134369.
Dacey DM, Petersen MR. Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proc Natl Acad Sci U S A. 1992; 89: 9666–9670.
Horsager A, Greenberg RJ, Fine I. Spatiotemporal interactions in retinal prosthesis subjects. Invest Ophthalmol Vis Sci. 2010; 51: 1223–1233.
Klauke S, Goertz M, Rein S, et al. Stimulation with a wireless intraocular epiretinal implant elicits visual percepts in blind humans: results from stimulation tests during the EPIRET3 prospective clinical trial. Invest Ophthalmol Vis Sci. 2011; 52: 449–455.
Rizzo JF III, Wyatt J, Loewenstein J, Kelly S, Shire D. Perceptual efficacy of electrical stimulation of human retina with a microelectrode array during short-term surgical trials. Invest Ophthalmol Vis Sci. 2003; 44: 5362–5369.
Lauritzen TZ, Harris J, Mohand-Said S, et al. Reading visual braille with a retinal prosthesis. Front Neurosci. 2012; 6: 168.
Wilke R, Gabel VP, Sachs H, et al. Spatial resolution and perception of patterns mediated by a subretinal 16-electrode array in patients blinded by hereditary retinal dystrophies. Invest Ophthalmol Vis Sci. 2011; 52: 5995–6003.
Horsager A, Greenberg RJ, Fine I. Spatiotemporal interactions in retinal prosthesis subjects. Invest Ophthalmol Vis Sci. 2010; 51: 1223–1233.
Weitz AC, Nanduri D, Behrend MR, et al. Improving the spatial resolution of epiretinal implants by increasing stimulus pulse duration. Sci Transl Med. 2015; 7: 318ra203.
Cicione R, Shivdasani MN, Fallon JB, et al. Visual cortex responses to suprachoroidal electrical stimulation of the retina: effects of electrode return configuration. J Neural Eng. 2012; 9: 036009.
Luo YH, Zhong JJ, Clemo M, da Cruz L. Long-term repeatability and reproducibility of phosphene characteristics in chronically implanted Argus II retinal prosthesis subjects. Am J Ophthalmol. 2016; 170: 100–109.
Dorn JD, Ahuja AK, Caspi A, et al. The detection of motion by blind subjects with the epiretinal 60-electrode (Argus II) retinal prosthesis. Arch Ophthalmol. 2012: 1–7.
Stingl K, Bartz-Schmidt KU, Besch D, et al. Artificial vision with wirelessly powered subretinal electronic implant Alpha IMS. Proc Biol Sci. 2013; 280: 20130077.
Stingl K, Bartz-Schmidt KU, Besch D, et al. Subretinal visual implant Alpha IMS: clinical trial interim report. Vision Res. 2015; 111: 149–160.
Hafed ZM, Stingl K, Bartz-Schmidt KU, Gekeler F, Zrenner E. Oculomotor behavior of blind patients seeing with a subretinal visual implant. Vision Res. 2016; 118: 119–131.
Barry MP, Dagnelie G. Hand-camera coordination varies over time in users of the Argus II retinal prosthesis system. Front Syst Neurosci. 2016; 10: 41.
Appendix
The Bionic Vision Australia Consortium consists of five member organizations (Centre for Eye Research Australia, Bionics Institute, Data61, University of Melbourne, and University of New South Wales) and three partner organizations (The Royal Victorian Eye and Ear Hospital, National Vision Research Institute of Australia, and the University of Western Sydney). For this publication, the consortium members consist of (in alphabetical order): 
Penelope J. Allen,1,2 Lauren N. Ayton,1,2 Peter N. Dimitrov,1 Chi D. Luu,1,2 Chris McCarthy,3,4 Hugh J. McDermott,5,6 David A.X. Nayagam,5,7 Robert K. Shepherd,5,6 Joel Villalobos,5,6 and Chris E. Williams5,6 
1Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia 
2Department of Surgery (Ophthalmology), The University of Melbourne, Parkville, Victoria, Australia 
3Computer Vision Research Group, Data61, Canberra, Australian Capital Territory, Australia 
4Department of Computer Science and Software Engineering, Swinburne University of Technology, Hawthorn, Victoria, Australia 
5Bionics Institute, East Melbourne, Victoria, Australia 
6Department of Medical Bionics, The University of Melbourne, Parkville, Victoria, Australia 
7Department of Pathology, The University of Melbourne, Parkville, Victoria, Australia 
Figure 1
 
Schematic of electrode array layout. (A) Twenty electrodes were available for stimulation. Note electrodes 9, 17, and 19 were smaller in diameter (400 vs. 600 μm for the other electrodes), and the outer ring of electrodes (filled black) was shorted together and available for use as part of a CG return. (B) Arrangement of ganged pairs for P2. A total of 10 ganged pairs were made available for stimulation. P2, patient 2.
Figure 2
 
Electrodes used for character identification. The eight different patterns of electrodes used for this task in P1 (A) and P2 (B). Filled electrodes indicate those that were stimulated in a sequential fashion for each character. Note the electrode numbering is oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1). Due to different phosphene locations and phosphene shapes for individual electrodes, the pattern of electrodes stimulated on the array did not always match the intended character (for example with P1, the pattern of electrodes for the number 5 does not look like the number 5 on the array).
Figure 3
 
Setup for the static image localization task in P1 at different contrast levels showing output levels for each electrode obtained from the MVP algorithm applied to each image. The percentage number shown at the top indicates the contrast level (i.e., background intensity subtracted from the wedge intensity, with the wedge intensity fixed to 100%). The number inside each electrode and the color denotes the pixel brightness value (maximum of 255) for that electrode obtained from the vision-processing algorithm. Note, at contrast levels less than 100%, all 20 electrodes were stimulated, whereas at the 100% contrast level, only three electrodes were stimulated for this orientation. Note the electrodes are oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1). MVP, minimal vision processing.
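The per-electrode brightness values shown in Figure 3 come from the vision-processing algorithm applied to the wedge image. As a rough stand-in for, and not a description of, the actual MVP implementation, one can think of each electrode as reporting the mean image intensity in a small region it covers, scaled 0 to 255. The electrode positions, region radius, and image geometry below are hypothetical.

```python
# Illustrative sketch only: each electrode reports the mean intensity
# of the image region it covers, scaled 0-255. This is a stand-in for,
# not a description of, the actual MVP algorithm.
import numpy as np

def electrode_levels(image, centers, radius_px):
    """Return one 0-255 brightness value per electrode.

    image     : 2D grayscale image with values in 0-255
    centers   : list of (row, col) electrode centres in image coordinates
    radius_px : radius of the circular sampling region around each centre
    """
    rows, cols = np.indices(image.shape)
    levels = []
    for (r, c) in centers:
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius_px ** 2
        levels.append(int(round(image[mask].mean())))
    return levels

# A 50% contrast stimulus: background at 128, wedge region at 255.
img = np.full((100, 100), 128.0)
img[:, 60:] = 255.0
# Two hypothetical electrode positions: one over the background,
# one over the wedge.
print(electrode_levels(img, [(50, 20), (50, 80)], radius_px=8))
# → [128, 255]
```

This also illustrates why, at contrasts below 100%, all electrodes are driven: the nonzero background intensity maps every electrode to a nonzero output level, whereas at 100% contrast (background 0) only electrodes under the wedge are stimulated.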
Figure 4
 
Setup for the dynamic image localization task in P1. In each image, the electrodes activated, and the output levels for each electrode were obtained from the Lanczos2 filter applied to the moving bar that was presented to the algorithm as an uncompressed gif image. The number inside each electrode and the color denotes the pixel brightness value for that electrode (maximum of 255) obtained from the vision-processing algorithm. In this example, the bar orientation was from left to right as indicated by the arrow direction. Note filter color scheme is identical to that in Fig. 3, and electrodes are oriented to patient perception (i.e., vertically flipped to that depicted in Fig. 1).
Figure 5
 
Eye-tracking data. (A) Video image of eye-tracker view to determine if patient was looking at the center before initiating the start of each trial. (B) Raw eye-tracker data (sampled at 60 Hz) during task 2 (x gaze and y gaze) as a function of time during a 2-second stimulus presentation (indicated by dashed lines).
Figure 6
 
Mean (±SEM) accuracy for P1 on the character identification task for different stimulus parameters. Accuracy was found to have no dependency on the level above threshold, the return configuration, or the polarity of the pulses (χ2 = 5.1, P = 0.645).
Figure 7
 
Confusion matrices showing overall percentage number of trials where P1 and P2 responded for a given presented pattern on the character identification task. To better highlight confusions, each cell is colored according to its frequency of occurrence from 0% (yellow) to maximum (green).
Figure 8
 
Mean accuracy scores (± SEM) across five blocks of trials in P1 on the static image localization task when testing different contrast levels. Accuracy significantly depended on the contrast and also depended on the stimulation timing strategy at a contrast of 10%.
Figure 9
 
Confusion matrices showing overall percentage number of trials where P2 responded with a wedge orientation after excluding the “not clearly seen” trials on the static image localization task. P2 performed significantly better on the task when testing with the 400-pps stimuli as it was found to evoke clearer, brighter, and more persistent phosphenes.5 The left direction was found to invoke the least accuracy. D, down; L, left; R, right; U, up. To better highlight confusions, each cell is colored according to its frequency of occurrence from 0% (yellow) to maximum (green).
Figure 10
 
Relative eye movement from center as a function of time. (A) Average (solid line) ± SEM (shading) for x (blue) and y (red) direction eye movements relative to the center gaze location (−0.15 seconds) in P1 (n = 45 trials). (B) Similar data plotted for P2 when using the 400-pps stimulus rate (n = 84 trials). Similar data also analyzed when using sham trials constructed from time between stimuli in (C) P1 (n = 99 trials) and (D) P2 (n = 73 trials).
Figure 11
 
Average eye movement in arbitrary units (a.u.) from center (0,0). (A) Average eye movement from center as a function of stimulus direction and accuracy for P1 (correct trials: green; incorrect trials: red). Circles represent mean (±SEM) of those movements where correct trials were associated with significantly larger eye movements compared to incorrect trials. Dots represent mean of movements that were either not significantly different between correct and incorrect trials or where incorrect trials were associated with a larger eye movement (Tukey post hoc tests; see Table 2). Similar data are plotted for P2 when using a 50-pps rate (B) and when using a 400-pps rate (C). Note the axes have been adjusted to display the range of average movement in each patient.
Figure 12
 
Total accuracy scores in P1 on the dynamic image recognition task when testing with different moving bar speeds. Accuracy significantly depended on the speed of the bar and significantly reduced only when the speed was increased to 80 deg/s. Dashed line indicates 56.25% accuracy (pass criterion).
Table 1
 
Summary of Parameters Used for the Three Tasks in Both Patients
Table 2
 
Relationship Between Eye Movement and Accuracy