Open Access
Low Vision  |   June 2017
Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis
Author Affiliations & Notes
  • Matthew A. Petoe
    Bionics Institute of Australia, East Melbourne, Victoria, Australia
    Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
  • Chris D. McCarthy
    Computer Vision Research Group, Data61, Canberra, Australian Capital Territory, Australia
    Department of Computer Science and Software Engineering, Swinburne University of Technology, Hawthorn, Victoria, Australia
  • Mohit N. Shivdasani
    Bionics Institute of Australia, East Melbourne, Victoria, Australia
    Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
  • Nicholas C. Sinclair
    Bionics Institute of Australia, East Melbourne, Victoria, Australia
    Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
  • Adele F. Scott
    Computer Vision Research Group, Data61, Canberra, Australian Capital Territory, Australia
  • Lauren N. Ayton
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
    Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
  • Nick M. Barnes
    Computer Vision Research Group, Data61, Canberra, Australian Capital Territory, Australia
    College of Engineering & Computer Science, Australian National University, Canberra, Australian Capital Territory, Australia
  • Robyn H. Guymer
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
    Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
  • Penelope J. Allen
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
    Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
  • Peter J. Blamey
    Bionics Institute of Australia, East Melbourne, Victoria, Australia
    Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
  • Correspondence: Matthew A. Petoe, Bionics Institute, 384-388 Albert Street, East Melbourne, VIC 3002, Australia; mpetoe@bionicsinstitute.org
  • Footnotes
     See the appendix for the members of the Bionic Vision Australia Consortium.
Investigative Ophthalmology & Visual Science, June 2017, Vol. 58, 3231–3239. https://doi.org/10.1167/iovs.16-21041
Abstract

Purpose: With a retinal prosthesis connected to a head-mounted camera, subjects can perform low vision tasks using a combination of electrode discrimination and head-directed localization. The objective of the present study was to investigate the contribution of retinotopic electrode discrimination (perception corresponding to the arrangement of the implanted electrodes with respect to their position beneath the retina) to visual performance for three recipients of a 24-channel suprachoroidal retinal implant. Proficiency in retinotopic discrimination may allow good performance with smaller head movements, and identification of this ability would be useful for targeted rehabilitation.

Methods: Three participants with retinitis pigmentosa performed localization and grating acuity assessments using a suprachoroidal retinal prosthesis. We compared retinotopic and nonretinotopic electrode mapping and hypothesized that participants with measurable acuity in a normal retinotopic condition would be negatively impacted by the nonretinotopic condition. We also expected that participants without measurable acuity would preferentially use head movement over retinotopic information.

Results: Only one participant was able to complete the grating acuity task. In the localization task, this participant exhibited significantly greater head movements and significantly lower localization scores when using the nonretinotopic electrode mapping. There was no significant difference in localization performance or head movement for the remaining two subjects when comparing retinotopic to nonretinotopic electrode mapping.

Conclusions: Successful discrimination of retinotopic information is possible with a suprachoroidal retinal prosthesis. Head movement behavior during a localization task can be modified using a nonretinotopic mapping. Behavioral comparisons using retinotopic and nonretinotopic electrode mapping may be able to highlight deficiencies in retinotopic discrimination, with a view to address these deficiencies in a rehabilitation environment. (ClinicalTrials.gov number, NCT01603576).

Over the last decade, retinal prostheses have emerged as a promising technology to restore limited visual function to those who are blind from photoreceptor loss (such as in retinitis pigmentosa). State-of-the-art prostheses aim to electrically stimulate surviving neurons within the retina via an implant containing an array of stimulating electrodes.1–3 When the implant is used in conjunction with a head-mounted video camera, the visual environment can be sampled and conveyed to the recipient as patterns of electrical activity, perceived as “phosphenes” of light. As we have recently reported for a suprachoroidal retinal prosthesis,4 these phosphenes have complex and overlapping shapes, but their arrangement in visual space was perceived by two patients as generally retinotopic (corresponding to the arrangement of the implanted electrodes with respect to their position beneath the retina). As may be the case for a subset of other retinal prosthesis recipients, our third patient was unable to discriminate the layout of phosphenes in the visual field, and this can affect visual performance. 
Visual performance with a retinal prosthesis is mediated by how effectively a recipient can use and interpret the phosphenes (Lauritzen T, et al. IOVS 2011;52:ARVO E-Abstract 4927). For implant recipients, the most basic of low-vision tasks are light detection (detecting electrode activity) and light localization (discerning which viewing direction causes the electrodes to be active). These tasks can be achieved using nonspatial information from the phosphenes—no true form or pattern vision is required to perform such tasks.5 More challenging tasks include motion detection, object recognition or discrimination, target following, and wayfinding, which require interpretation of patterned information from multiple electrodes. Interpreting patterned information relating to a topographical electrode arrangement can be termed intrinsic retinotopic discrimination (retina-centered). This is in contrast to interpreting extrinsic spatiotopic information (world-centered), which incorporates head scanning and can be perceived to some degree with a single or merged phosphene.6 
Retinotopic discrimination can be measured using an acuity task, such as grating acuity or the Landolt-C, but can also be exercised during a localization task, as precise discrimination allows localization within the narrow implant field-of-view without the need for camera movement. Earlier studies have proposed that scrambling or merging the image-to-electrode mapping allows an objective comparison with normal electrode mapping conditions, under the hypothesis that performance on a visual task will deteriorate with the removal of retinotopic information if patients have perceptual access to that information. For example, Caspi and Zivotofsky7 performed a simulation-based comparison between a normal “patterned display” and an “unpatterned display” (conveying the same overall brightness as the normal mode but without any pattern) and found poorer Landolt-C acuity and a threefold increase in response time for participants viewing the unpatterned display. In a study of retinal prosthesis (Argus I; Second Sight Medical Products, Inc., Sylmar, CA, USA) users with 16 epiretinal electrodes, scrambling the image-to-electrode mapping reduced performance on a grating acuity task to chance levels.8 A similar deterioration in performance with a scrambled electrode map has been observed in 60-channel retinal prosthesis (Argus II; Second Sight Medical Products, Inc.) users performing motion detection9 and letter recognition tasks,10 leading to the conclusion that retinotopic discrimination was beneficial in those activities. Moreover, a distinction between retinotopic and spatiotopic abilities was noted in retinal prosthesis (Second Sight Medical Products, Inc.) users by Kotecha et al.,11 who found in a tabletop reach-and-grasp task that a scrambled mapping allowed detection of an object but provided insufficient information to make a precise movement toward it. Finally, Luo et al. (IOVS 2014;55:ARVO E-Abstract 1834) found in retinal prosthesis (Second Sight Medical Products, Inc.) users that a scrambled mapping did not impact identification scores for solid shapes, presumably because the total number of active phosphenes was sufficient to determine some useful features of the objects (e.g., large versus small) in the absence of retinotopic information. Scrambling in a light localization task has not been previously reported. 
The objective of the present study was to investigate the contribution of retinotopic discrimination to visual performance in a light localization task and a grating acuity task for three recipients of a 24-channel suprachoroidal retinal implant. Using comparisons between retinotopic and scrambled image-to-electrode mapping conditions in these two low-vision tasks, the present study examined relationships between retinotopic discrimination, localization performance, and magnitude of head and camera movement. Determining the relative contributions of retinotopic discrimination and head scanning to specific visual tasks is a preliminary step in assessing patient behaviors and tailoring rehabilitation programs to develop specific prosthetic vision skills and strategies (Lauritzen T, et al. IOVS 2012;53:ARVO E-Abstract 5508). 
Methods
Participants
Three participants with profound vision loss from retinitis pigmentosa (i.e., bare-light perception in both eyes) were enrolled in a clinical trial of a prototype 24-channel electrode array implanted in the suprachoroidal space.12 The participants (one 53-year-old female [P1], and two males aged 50 [P2] and 63 years [P3]) were identified through a screening process at the Centre for Eye Research Australia. The Human Ethics Committee of the Royal Victorian Eye & Ear Hospital approved the study (Application #11/1032H) and the study was registered at www.clinicaltrials.gov (ClinicalTrials.gov number, NCT01603576). The research followed the tenets of the Declaration of Helsinki and informed consent was obtained from all participants upon explanation of the nature and possible consequences of the study. 
All participants had had light-perception-only vision for between 8 and 10 years (P2) or approximately 20 years (P1 and P3), and all were guide dog users. The worse-seeing eye was selected for implantation and had bare-light perception acuity in all three participants. This was determined during three separate preoperative clinical assessments, which included a range of tests such as visual acuity, electroretinography, and manual Goldmann perimetry.13 
Electrical stimulation parameters were explored in a psychophysics setting to address between-subject differences in perceptual thresholds14 and to allow a period of familiarization with the implant. The participants were later trained to use a head-mounted video camera with the implant system, using head movements to explore the field-of-view. At the time of the present study, each participant had at least 6 months of laboratory experience using the head-mounted camera to perform screen-based low vision tasks, two of which are reported here. Full details of the first 12 months of this clinical trial have been reported previously by Ayton et al.,12 as have factors affecting perceptual thresholds14 and phosphene perception.4 
The Implant System
The 24-channel device used in this study consisted of an intraocular electrode array, composed of a silicone substrate (19 mm long, 8 mm wide) with 20 stimulating platinum electrodes (17 of 600-μm and 3 of 400-μm diameter), implanted in the suprachoroidal space.12,14 The intraocular array was connected by a helical lead wire to a titanium percutaneous connector, which was externalized behind the subject's ear, allowing direct access to the electrodes via an external stimulator.15 Two large 2000-μm-diameter electrodes on the intraocular array served as return paths, with the remaining two channels connected to alternate return paths (not used in this study): a guard ring surrounding the stimulating electrodes and a platinum pin implanted subcutaneously behind the ear. 
Stimulation Parameters
Stimulation parameters for each participant were described in a phosphene map that specified the electrodes available for stimulation; the pulse width (PW), interphase gap (IPG), and stimulation rate (pulses per second [pps]) for each electrode; and the threshold and maximum currents for each electrode (the maximum set to 6 dB above threshold).14 For P1, monopolar anodic-first biphasic pulses with 500-μs PW and 500-μs IPG at 50 pps were used for the light localization task, and 148-μs PW and 20-μs IPG at 200 pps were used for the grating acuity task, chosen according to the patient's preference for their brightness and temporal (fading) profiles. Phosphene maps for P2 and P3 specified electrically coupled (“ganged”) pairs of adjacent electrodes, as combining the effective surface area proved more effective at evoking phosphenes within the safe charge limit.14 Parameters for P2 were anodic-first biphasic pulses of 148-μs PW and 20-μs IPG at 400 pps. Parameters for P3 were anodic-first biphasic pulses of 200-μs PW and 200-μs IPG at 200 pps. For all participants, stimulation during the light localization task was interspersed with intervals of no stimulation to alleviate the brightness fading and adaptation observed with continuous stimulation.16,17 
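As a worked illustration of the dynamic range implied by the phosphene map, the sketch below converts an electrode's threshold current to its maximum current. It assumes the 6 dB of headroom is expressed as a current ratio (20 log10), which makes the maximum roughly twice the threshold; the example threshold value is hypothetical.

```python
def dynamic_range_ua(threshold_ua, headroom_db=6.0):
    """Safe stimulation range for one electrode, assuming the 6 dB of headroom
    is a current ratio (20*log10): the maximum is ~2x the threshold current."""
    max_ua = threshold_ua * 10 ** (headroom_db / 20.0)  # 10**(6/20) ~= 1.995
    return threshold_ua, max_ua

# Example: a hypothetical electrode with a 150-uA threshold would be
# stimulated between 150 uA and ~299 uA.
print(dynamic_range_ua(150.0))
```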
Phosphene appearance varied depending upon the subject, electrode position, and stimulation parameters, but percepts were controllable (in terms of size and brightness), generally retinotopically arranged (albeit with some distortion), and locatable in the visual field by P1 and P2.4 Although phosphene intensity increased with stimulation level in P3, there was no clearly discernible difference between phosphenes from different electrodes. Nystagmus and spontaneous visual percepts were complicating factors for P2 and P3. 
Video Processing
Testing incorporated a head-mounted video camera (Arrington Research, Inc., Scottsdale, AZ, USA) with a manufacturer-stated field-of-view of 67° × 50.25° and a resolution of 320 × 240 pixels. Within the implant, the 20 stimulating electrodes were arranged in a staggered grid covering 3.5 × 3.46 mm of retinal area, corresponding to a visual field projection of approximately 13° × 12°.4 A similarly sized subregion of the camera image was presented to the electrodes after video processing. 
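To make the geometry concrete, the short sketch below estimates the camera's angular resolution and the approximate pixel size of the sampled subregion. The subregion dimensions are an inference from the stated fields of view, not values reported in the text.

```python
CAMERA_FOV_DEG = (67.0, 50.25)   # horizontal, vertical (manufacturer stated)
CAMERA_PIXELS = (320, 240)

deg_per_px_x = CAMERA_FOV_DEG[0] / CAMERA_PIXELS[0]   # ~0.21 deg/pixel
deg_per_px_y = CAMERA_FOV_DEG[1] / CAMERA_PIXELS[1]   # ~0.21 deg/pixel

# The electrode array projects to roughly 13 x 12 deg of visual field,
# so the sampled camera subregion is on the order of:
subregion_px = (13.0 / deg_per_px_x, 12.0 / deg_per_px_y)   # ~62 x 57 pixels
print(subregion_px)
```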
Video processing was performed using a custom implementation in a computing environment (MATLAB; MathWorks, Inc., Natick, MA, USA) running under a personal computer operating system (Windows XP; Microsoft Corp., Redmond, WA, USA). The video processing for each electrode mapping condition (Fig. 1) is described below. 
Figure 1
 
Image-to-electrode mapping conditions used in this study, represented as electrode activity on the 20-electrode layout. (A) A captured image. (B) Normal (retinotopic) condition, where the intensity values are mapped to electrodes in the corresponding location, using the Lanczos2 transform described in Barnes et al.18 (C) In the scrambled condition, the location mapping was purposefully corrupted to be nonretinotopic but contained the same number and intensity of phosphenes as for normal. (D) In the noise condition, the electrodes are repeatedly assigned random intensity values that are unrelated to the camera image. (E) In the system-off condition, no electrodes are stimulated.
System-On (Normal)
In the normal (retinotopic) electrode mapping condition, each captured camera image was filtered using a Lanczos2 antialiasing filter to reduce inherent aliasing artefacts such as flickering.18 The filtered image was then downsampled and translated to electrode stimulation parameters, using the physical position of each electrode on the retina to define the image-to-electrode mapping and the sampled luminance of the scene to define the stimulation intensity. 
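The following Python/NumPy sketch illustrates the general idea of this pipeline: Lanczos2-weighted sampling of the image at each electrode's projected position, followed by a linear luminance-to-current mapping. It is a simplified stand-in for the study's MATLAB implementation; the electrode pixel positions, the sampling scale, and the linear intensity-to-current map are assumptions.

```python
import numpy as np

def lanczos2(x):
    """Lanczos-2 kernel: sinc(x) * sinc(x/2) for |x| < 2, zero elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 2.0, np.sinc(x) * np.sinc(x / 2.0), 0.0)

def sample_electrodes(image, electrode_px, scale=8.0):
    """Lanczos-2 weighted sample of a grayscale image at each electrode centre.
    `electrode_px` lists assumed (row, col) positions; `scale` is the assumed
    number of camera pixels per electrode spacing. Electrodes are assumed to
    lie away from the image border."""
    h, w = image.shape
    support = 2.0 * scale
    intensities = []
    for r, c in electrode_px:
        rows = np.arange(max(0, int(r - support)), min(h, int(r + support) + 1))
        cols = np.arange(max(0, int(c - support)), min(w, int(c + support) + 1))
        weights = np.outer(lanczos2((rows - r) / scale), lanczos2((cols - c) / scale))
        patch = image[np.ix_(rows, cols)].astype(float)
        intensities.append((weights * patch).sum() / weights.sum())
    return np.array(intensities)

def luminance_to_current(intensity, threshold_ua, max_ua):
    """Linearly map 8-bit luminance onto each electrode's safe current range."""
    norm = np.clip(np.asarray(intensity, dtype=float) / 255.0, 0.0, 1.0)
    return threshold_ua + norm * (max_ua - threshold_ua)
```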
Scrambled
The scrambled condition applied the same video processing to the captured image as system-on (normal). However, the mapping between the camera image and the electrode layout was purposefully randomized every 5 seconds, so that the image-to-electrode mapping was nonretinotopic (Fig. 1) and repeatedly unfamiliar. The total charge dosage delivered to participants was comparable to that which would have been delivered under the normal condition. Temporal information from the input image stream (e.g., in response to camera movement) was still present. This enabled effective masking of the scrambled condition (i.e., subjects tended to be unable to differentiate the scrambled and normal conditions), making it a suitable positive control.8,10,11 
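A minimal sketch of how such a condition could be driven is shown below: the electrode intensities computed by the normal pipeline are passed through a random permutation that is re-drawn every 5 seconds. The class name and timing mechanism are illustrative assumptions, not the study's implementation.

```python
import time
import numpy as np

class ScrambledMapping:
    """Re-draws a random electrode permutation at a fixed interval (5 s here),
    keeping the set of intensities intact but relocating them nonretinotopically."""

    def __init__(self, n_electrodes=20, interval_s=5.0, seed=None):
        self.rng = np.random.default_rng(seed)
        self.n = n_electrodes
        self.interval_s = interval_s
        self.perm = np.arange(n_electrodes)
        self._next_shuffle = 0.0

    def apply(self, intensities):
        now = time.monotonic()
        if now >= self._next_shuffle:
            self.perm = self.rng.permutation(self.n)
            self._next_shuffle = now + self.interval_s
        return np.asarray(intensities)[self.perm]  # same values, scrambled locations
```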
Random Noise Stimulation (Noise)
A noise condition was included to mask residual vision and to assess participant engagement in comparison to the system-off condition. Whenever a new image was acquired from the camera, each electrode was assigned a random intensity value within the safe dynamic range. The mean intensity of stimulation over the electrode array was therefore uncorrelated with the input image stream. From the participant's perspective, there was no systematic relationship between spatial information, head movement, and electrode activity. 
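For completeness, a sketch of the noise condition under the same assumptions: each new camera frame triggers an independent random draw within each electrode's safe range, so the output carries no information about the scene. The per-electrode thresholds and maxima in the example are hypothetical.

```python
import numpy as np

def noise_frame(rng, thresholds_ua, maxima_ua):
    """One frame of the noise condition: every electrode gets an independent
    random current within its safe dynamic range, uncorrelated with the image."""
    low = np.asarray(thresholds_ua, dtype=float)
    high = np.asarray(maxima_ua, dtype=float)
    return rng.uniform(low, high)

# Example with hypothetical thresholds of 150 uA and maxima of 300 uA:
rng = np.random.default_rng(0)
print(noise_frame(rng, [150.0] * 20, [300.0] * 20))
```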
System-Off
Under the system-off condition, no stimulation was delivered via the implanted electrode array, and thus no meaningful information was displayed. 
Vision Tasks
Light Localization.
In the first part of the study, the four electrode mapping conditions were compared using static images from the light localization subtask of the basic assessment of light and motion (BaLM) test.19 The BaLM has proven validity and reliability and has been used with low vision populations, including recipients of visual prostheses and low-vision assistive devices.19,20 In the light localization task, participants identified the orientation of a wedge of light on a black computer screen (up/down/left/right) in a four-alternative forced-choice (4AFC) paradigm. The chance rate was 25%, and a criterion of 62.5% was the benchmark for success, as described by Bach et al.19 Participants completed a total of 40 to 64 trials per condition, in blocks of 8 trials per condition, collected over 2 to 5 sessions. The order of presentation of the electrode mapping conditions was counterbalanced and randomly allocated for each session using an online randomization service (www.random.org, provided in the public domain by Randomness and Integrity Services Limited, Dublin, Ireland). 
The light localization stimuli were presented on a computer screen at 2048 × 1280 resolution (64 cm width, 40 cm height; Model U3011; Dell, Inc., Round Rock, TX, USA). Participants were seated at a calibrated distance of 57 cm from their head-mounted camera to the computer screen, so that the screen subtended 58.7° in width by 38.7° in height. The initial fixation dot was 6° in diameter, and the wedges of light were presented at 20° eccentricity against a black background. Background lighting was 111 ± 3 lux, measured using a light meter (Amprobe LM-200LED; Allied Electronics, Inc., Fort Worth, TX, USA) with the measuring transducer near the participant's head, facing up toward the lighting source. 
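The quoted screen subtense follows directly from the screen dimensions and the 57-cm viewing distance (a convenient distance at which 1 cm on the screen subtends roughly 1°); a quick check:

```python
import math

def subtense_deg(extent_cm, distance_cm):
    """Visual angle of a flat extent viewed head-on from its centre line."""
    return 2.0 * math.degrees(math.atan((extent_cm / 2.0) / distance_cm))

print(subtense_deg(64, 57))  # ~58.6 deg, matching the quoted 58.7 deg width
print(subtense_deg(40, 57))  # ~38.7 deg height
```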
Participants had been previously trained to find the fixation dot at the center of the screen by first seeking phosphene activity in the central position, or defaulting to a natural head position if no activity could be found. Participants were permitted to touch the screen enclosure to orientate to the center between trials. The researcher initiated the presentation of a light wedge at a randomly selected location after the participant verbally indicated they had found the central fixation point. Using the central fixation point as a reference, participants were instructed to locate the wedge by carefully exploring each of the four possible directions without urgency. It was also possible to perform the task by keeping the head-mounted camera centralized over the fixation dot and interpreting peripheral electrode activity as corresponding to the wedge location. Participants completed eight training trials in the “normal” condition at the beginning of each session in order to familiarize themselves with the task. 
Grating Acuity.
In the second part of the study, participants attempted the grating acuity task from the Freiburg visual acuity test (FrACT, version 3.8.1sq,21 available online at http://michaelbach.de/fract). This task required participants to identify the orientation of parallel lines of varying spacing (4AFC) and provided an estimate of visual acuity using the BestPEST search algorithm.22 
Gratings had a square-wave profile and were presented as a 1280 × 800 resolution image projected onto a 2030 × 1520-mm screen (Grandview GRPC100V; Hills Ltd., Edwardstown, SA, Australia) using a short-throw projector (Model No. S500wi; Dell, Inc.). Using a projected image allowed a lower acuity range to be measured than was possible with the computer screen. The FrACT software included a 700-pixel calibration bar, which was measured at the start of each session and was typically around 1160 mm. At a viewing distance of 2 m, the FrACT software reported the image dimensions as corresponding to 2.85 arcmin per pixel. 
The grating acuity task contained 23 presentations, beginning at 0.121 cyc/deg. The lowest measurable grating frequency was 0.033 cyc/deg (floor effect). We recorded and analyzed the BestPEST acuity threshold results and the accuracy per presentation across conditions. 
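The reported 2.85 arcmin per pixel can be reproduced from the calibration bar and viewing distance, and it also implies roughly how many grating cycles fit on the display at the measurement floor; the cycle count is our inference, not a value given in the text.

```python
import math

mm_per_px = 1160.0 / 700.0                                        # calibration bar
arcmin_per_px = math.degrees(math.atan(mm_per_px / 2000.0)) * 60  # ~2.85 at 2 m

floor_cpd = 0.033                                   # lowest measurable frequency
px_per_cycle = (60.0 / floor_cpd) / arcmin_per_px   # ~640 pixels per cycle
cycles_across_image = 1280.0 / px_per_cycle         # ~2 cycles across the display
print(arcmin_per_px, cycles_across_image)
```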
Data Acquisition
During the light localization task, responses were given verbally by participants and recorded on score sheets by two researchers (MP, CM). When the participant responded, the operator immediately advanced the presentation of the static images (cycling from the random light wedge to the subsequent fixation dot) by pressing the right-arrow key of the computer keyboard. These keypresses also initiated a time-stamped capture of the camera image (Fig. 2). Accuracy was expressed as the percentage of correct participant responses. For the grating acuity task, P1 responded using a handheld keypad, whereas P2 and P3 preferred to provide verbal responses. Image snapshots were also recorded during the grating acuity task, but accuracy data were provided by the FrACT program itself. 
Figure 2
 
A screen capture (left image) from the head-mounted camera at the start of a BaLM light localization task trial shows the image subregion used for video processing as a red boundary box. For each repeated trial, starting position was defined as the distance between the fixation dot and the centroid of the video processing subregion (shown as a green arrow), converted to degrees. Response offset (right image) was defined as the distance between the starting position and the centroid of the video processing subregion (shown as a red arrow), converted to degrees.
A measure of head position at the start of each trial (“starting position”) and offset at response time (“response offset”) was calculated from the captured snapshot images. Each snapshot included a red-bordered box denoting the camera subregion used by video processing (Fig. 2) and was analyzed using a custom script in the same computing environment (MATLAB; MathWorks, Inc.). The camera offset from the fixation dot at the start of each trial was calculated in pixels and converted to degrees (“starting position,” Fig. 2, left). The camera position at response time was calculated in pixels and converted to degrees (“end position,” Fig. 2, right). Response offset was taken to be the magnitude of the vector between the start and end positions. It was not practical to quantify response offset this way during the grating acuity task, as there was no fixation dot between trials. 
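A sketch of that conversion, assuming the camera's nominal scale of roughly 0.21° per pixel (derived earlier from the stated field-of-view) applies uniformly across the image; the pixel coordinates in the example are hypothetical.

```python
import math

DEG_PER_PX = 67.0 / 320.0   # assumed uniform scale from the camera field-of-view

def offset_deg(start_px, end_px):
    """Magnitude of the camera excursion between two snapshots, in degrees."""
    dx = end_px[0] - start_px[0]
    dy = end_px[1] - start_px[1]
    return math.hypot(dx, dy) * DEG_PER_PX

# Example: a 40-pixel horizontal and 30-pixel vertical shift
print(offset_deg((160, 120), (200, 150)))   # ~10.5 deg
```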
Statistical Analyses
The main outcome measures for the BaLM light localization task were accuracy (%), starting position (degrees), and response offset (degrees). Comparisons of accuracy between conditions used χ2 statistics, and post hoc comparisons used pairwise Fisher's exact tests with a Bonferroni-adjusted P value of 0.05/6, or 0.008. Accuracy in each condition was also compared to the 4AFC chance rate (25%) using a right-tailed test. Starting position and response offset data were from nonnormal distributions (Anderson-Darling test) and thus were compared between conditions at the group level using the nonparametric Kruskal-Wallis test, with post hoc comparisons using Dunn's test.23 Additionally, we performed a between-subject comparison of response offset in the normal condition only. 
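As an illustration of the two accuracy tests described above, the sketch below uses SciPy to run a right-tailed exact binomial test against the 25% chance rate and a pairwise Fisher's exact test judged against the Bonferroni-adjusted criterion. The trial counts are hypothetical, and the implementations used in the original analysis (Minitab) may differ in detail.

```python
from scipy.stats import binomtest, fisher_exact

def above_chance_p(n_correct, n_trials, chance=0.25):
    """Right-tailed exact binomial test of accuracy against the 4AFC chance rate."""
    return binomtest(n_correct, n_trials, chance, alternative="greater").pvalue

def pairwise_fisher(correct_a, total_a, correct_b, total_b, alpha=0.05, n_comparisons=6):
    """Fisher's exact test on correct/incorrect counts for two conditions,
    judged against the Bonferroni-adjusted threshold of 0.05/6 ~= 0.008."""
    table = [[correct_a, total_a - correct_a],
             [correct_b, total_b - correct_b]]
    _, p = fisher_exact(table)
    return p, p < alpha / n_comparisons

# Hypothetical counts: 43/48 correct in normal versus 16/48 in scrambled
print(above_chance_p(43, 48))
print(pairwise_fisher(43, 48, 16, 48))
```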
The main outcome measure for the grating acuity task was threshold (cyc/deg). Due to the low number of repeat observations, a Mood median test was used for comparisons of grating acuity threshold (cyc/deg) between conditions. Statistical analyses were performed in a commercial statistics package (Minitab; Minitab, Inc., State College, PA, USA). 
Results
All participants were able to complete the BaLM light localization task. Participants were not informed which electrode mapping condition was being used, but were aware of the system-off condition due to the absence of phosphenes. Participants were successfully masked to the scrambled condition, but occasionally reported that the output from the noise condition seemed random and did not appear to be modulated by their head or camera movements. Mean response times were 14 seconds for P1, 55 seconds for P2, and 36 seconds for P3. 
Figure 3 shows the accuracy scores on the BaLM light localization task, for each participant using the four electrode mapping conditions. 
Figure 3
 
Graph indicates mean scores on the 4AFC BaLM light localization task, with bars indicating 95% confidence intervals. Significance levels are denoted with asterisks. *P < 0.05. **P < 0.01. ***P < 0.001. The solid horizontal line indicates the level of chance (25%) for a correct response, while the dashed horizontal line shows the criterion for success (62.5%).
BaLM Light Localization Task
Accuracy.
For P1, accuracy differed between test conditions (Pearson χ2 = 52.691, df = 3, P < 0.001). Accuracy in the normal condition was significantly better than for any other condition (all P < 0.001). There were no significant differences between scrambled, noise, or system-off (all P > 0.05). The proportion of correct responses was significantly above chance for the normal condition (P < 0.001), but not for the scrambled (P = 0.085), noise (P = 0.560), or system-off (P = 0.301) conditions. 
For P2, accuracy differed between test conditions (Pearson χ2 = 17.185, df = 3, P < 0.001). Accuracy in the normal condition was significantly better than for noise (P = 0.0002) and system-off (P = 0.0023). There were no significant differences between normal and scrambled, or between other combinations of test conditions (all P > 0.05). The proportion of correct responses was significantly above chance for the normal (P < 0.001) and scrambled (P = 0.001) conditions, but not for the noise (P = 0.429) or system-off (P = 0.200) conditions. 
For P3, accuracy did not differ between test conditions (Pearson χ2 = 6.215, df = 3, P = 0.102). The proportion of correct responses was significantly above chance for the normal (P = 0.001) and scrambled (P = 0.005) conditions, but not for the noise (P = 0.560) or system-off (P = 0.103) conditions. 
Starting Position.
Starting position (Fig. 4) was significantly affected by test condition for all three participants (all P < 0.001, Kruskal-Wallis H = 30.15 for P1, H = 52.97 for P2, H = 16.97 for P3). Post hoc rank sum comparisons were performed using a critical Z-value of 2.128. 
Figure 4
 
Graph indicates median starting position (degrees eccentricity from center fixation dot) recorded during BaLM light localization task, with bars indicating critical confidence intervals of 86.8% around the median. Significance levels are denoted with asterisks. *P < 0.05. **P < 0.01. ***P < 0.001.
Starting position for P1 was most eccentric from the fixation dot in the scrambled condition (Z = 5.089, P < 0.001 versus normal; Z = 3.860, P < 0.001 versus noise; Z = 3.330, P < 0.001 versus system-off). 
Starting position of P2 was most eccentric from the fixation dot in the system-off condition (Z = 6.879, P < 0.001 versus normal; Z = 5.036, P < 0.001 versus scrambled; Z = 4.210, P < 0.001 versus noise). Starting position for noise was significantly more eccentric than for normal (Z = 2.321, P = 0.02). 
Starting position of P3 was significantly more eccentric from the fixation dot in the system-off condition, when compared to the normal condition (Z = 3.630, P < 0.001) and the scrambled condition (Z = 3.390, P < 0.001). 
Response Offset.
Response offset (Fig. 5) was significantly different between conditions for P1 and P2 (H = 36.85, P < 0.001 for P1; H = 23.83, P < 0.001 for P2). Response offset for P3 was not significantly affected by test condition (Kruskal-Wallis H = 1.71, P = 0.634). Post hoc rank sum comparisons were performed using a critical Z-value of 2.128. 
Figure 5
 
Graph indicates median response offset recorded during BaLM light localization task, with bars indicating critical confidence intervals of 86.8% around the median. Significance levels are denoted with asterisks. *P < 0.05. **P < 0.01. ***P < 0.001.
Response offset of P1 was smallest in the normal condition (Z = 4.365, P < 0.001 versus scrambled; Z = 5.346, P < 0.001 versus noise; Z = 4.871, P < 0.001 versus system-off). 
Response offset of P2 was smallest in the system-off condition (Z = 3.736, P < 0.001 versus normal; Z = 2.268, P = 0.023 versus scrambled; Z = 3.974, P < 0.001 versus noise). 
Between-subject differences in response offset were examined in the normal condition only. A Kruskal-Wallis test confirmed a significant difference between subjects (H = 48.00, P < 0.001). Subject P1 had the smallest response offset of the three participants (Z = 4.883, P < 0.001 versus P2; Z = 6.452, P < 0.001 versus P3). There was no significant difference in response offset between P2 and P3 (critical Z-value of 1.834). 
Grating Acuity Task
Subjects P2 and P3 were unable to score above the measurement floor (0.033 cyc/deg) in the 4AFC grating acuity task in the normal condition. No attempt was made for the other test conditions. 
Subject P1 was able to complete the 4AFC grating acuity task in the normal condition. The maximum theoretical acuity for this participant was approximately 0.141 cyc/deg, or 2.33 logMAR (20/4242), based on the 1-mm electrode pitch and projection on the fovea.24 
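The quoted ceiling can be reproduced approximately from the electrode geometry and then converted to logMAR and Snellen notation. The retinal conversion factor of roughly 3.5° per millimetre near the fovea is an assumed textbook value, not one given in the paper.

```python
import math

DEG_PER_MM = 3.5                           # assumed retinal conversion near the fovea
pitch_deg = 1.0 * DEG_PER_MM               # 1-mm electrode pitch ~= 3.5 deg
nyquist_cpd = 1.0 / (2.0 * pitch_deg)      # ~0.143 cyc/deg (paper quotes ~0.141)

grating_cpd = 0.141                        # value quoted in the text
mar_arcmin = 60.0 / (2.0 * grating_cpd)    # one bar width ~= 213 arcmin
logmar = math.log10(mar_arcmin)            # ~2.33 logMAR
snellen_denom = 20.0 * mar_arcmin          # ~20/4256, close to the quoted 20/4242
print(nyquist_cpd, logmar, snellen_denom)
```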
Grating acuity results for P1 (Fig. 6) were significantly better in the normal condition than in the three other conditions (Mood median test: χ2 = 11.44, df = 3, P = 0.010). A low result of 0.048 cyc/deg in the normal condition was obtained following a run using the scrambled condition and is likely to reflect participant fatigue. Grubbs' test suggested this result was an outlier (G = 2.47, P = 0.003). The result of 0.053 cyc/deg in the system-off condition can similarly be considered an outlier (G = 1.50, P < 0.001). All remaining results in the scrambled, noise, and system-off conditions were at the measurement floor of 0.033 cyc/deg. 
Figure 6
 
Graph indicates grating acuity results for P1 using the FrACT software. The dashed line at 0.141 cyc/deg indicates the maximum theoretical acuity for this participant, based on a 1-mm electrode pitch and proximity to the fovea. The solid line at 0.033 cyc/deg indicates the measurement floor of the testing environment.
Discussion
The purpose of the present study was to quantify differences in participant behavior and performance when using a retinal prosthesis on tasks that benefit from retinotopic discrimination. We hypothesized that participants with measurable acuity would be negatively affected by a nonretinotopic (scrambled) electrode mapping. We further hypothesized that participants with poor acuity would be less affected by this nonretinotopic electrode mapping, and that this subset of participants would preferentially use head movement over intrinsic retinotopic information. We included the noise and system-off conditions to elucidate the contributions of head position, residual vision, and participant engagement to task performance. 
The main finding of this study was that the removal of retinotopic information had varying impacts on participant performance, in accordance with their assessed acuity. Subject P1's localization scores using the scrambled mapping were at chance levels, whereas localization scores for P2 and P3 using the scrambled mapping were above chance. Subject P1 was the only participant to be significantly impacted by the scrambled condition, demonstrating significantly poorer accuracy when compared to the normal condition (33.9% vs. 89.6%, P < 0.001), a significantly more eccentric starting position from the fixation dot, and a significantly larger response offset. Subject P1 was also the only participant in this cohort to obtain a measurable grating acuity. 
The null difference in accuracy between the normal and scrambled conditions implies that participants P2 and P3 were not predominantly reliant on intrinsic retinotopic information to perform localization. When comparing response offset in the normal condition, P2 and P3 exhibited larger head movements during the task than P1. Further, P2 and P3 demonstrated no significant within-subject difference in head movement between the normal and scrambled conditions. In contrast, P1's head position in the scrambled condition was significantly further from the fixation dot at the start of each trial and significantly more eccentric at response time (compared to normal). 
These results support the notion that an absence of meaningful retinotopic discrimination leads to a greater reliance on head movement to complete a localization task. In the context of navigation, an awareness of retinotopic information could allow obstacles to be localized within a stationary field-of-view, whereas participants with limited retinotopic discrimination would require head movements to localize obstacles. This is in agreement with Caspi et al.,8 who advised that removing retinotopic information from an electrode array limits acuity to that afforded by the overall extent of the array. 
As in the present study, only a small fraction of subretinal and epiretinal bionic eye recipients can perform an acuity task to an assessable degree.2,9,25 Success on any given low-vision task requires a judgement of which method is best to use (retinotopic discrimination versus head scanning), and these are two distinct skill sets to develop (Lauritzen T, et al. IOVS 2012;53:ARVO E-Abstract 5508). A comparison between normal and scrambled mapping provides a means to assess retinotopic discrimination in patients who are unable to perform an acuity task. If no behavioral differences are observed between the normal and scrambled conditions, it can be concluded that the subject is using the prosthesis as a “single pixel” display.7 In this event, the experimenter should confirm that the electrode mapping strategy is appropriate for the recipient. An electrode mapping strategy that stimulates more spatially separated electrodes (by omitting intermediate electrodes) may make retinotopic discrimination more intuitive to the user, albeit at the expense of visual acuity. 
The notion of a normal versus scrambled test condition revealing successful utilization of retinotopic information may also apply to visual prostheses with an unknown perceptual layout. In the case of cortical implants26,27 and optic nerve stimulation,28 the relationship between electrode and percept location is less well known. If a scrambled test condition yields no impact on performance when compared to a normal condition, it may be inferred that discrimination of patterned information is absent or that the mapping of percept locations is not yet optimal. 
The present study also sheds light on participant engagement in each condition, with a view to overcoming a perceived lack of engagement in tasks performed with the system off. We observed this in P2, who showed a significantly eccentric starting position and minimal head movement (i.e., small response offset) in the system-off condition, in spite of our instruction to attempt the task with residual vision. The addition of electrode activity (including the noise condition) provided motivation for this participant to perform head-directed scanning appropriate to the task. 
We propose that a scrambled test condition provides valuable information on retinotopic discrimination and is preferable to the system-off condition as a control setting in vision prosthesis trials, but we advocate using it judiciously. Although recent reports on retinal prosthesis (Argus II; Second Sight Medical Products, Inc., Sylmar, CA, USA) patients have questioned whether retinotopic information has relevance to subjects' lives (proposing that system-on versus system-off is sufficient to assess functional ability),29 outcomes from subretinal visual implant (Alpha IMS; Retina Implant AG, Reutlingen, Germany) patients suggest that an ability to recognize shapes or details of objects (form vision requiring retinotopic discrimination) correlates with the device being reported as “useful” in daily life.2 
One limitation of the present study is that the head movement data are derived from only two time points. If a subject moved their head back to center before verbally responding, the head movement measure would be underestimated. Future studies could examine head and eye position throughout the entire task. Observation of patient behaviors may inform targeted rehabilitation programs to develop specific skills and strategies, such as improved retinotopic discrimination (Lauritzen T, et al. IOVS 2012;53:ARVO E-Abstract 5508). There remains, however, the possibility that testing with a scrambled mapping is fatiguing and may affect subject confidence and learning if used too regularly. 
Conclusions
Using a comparison between retinotopic and nonretinotopic image-to-electrode mapping conditions, we have confirmed that successful discrimination of retinotopic information is possible with a suprachoroidal retinal prosthesis. Following an analysis of behavioral measures on tasks requiring a combination of localization and retinotopic discrimination, we propose that a comparison between normal and scrambled electrode mapping is able to highlight deficiencies in retinotopic discrimination, and that these deficiencies may be accompanied by a greater reliance on head movement to perform localization. If future retinal implants are to provide a wide field-of-view for the purpose of navigation, retinotopic discrimination should be assessed in prosthesis recipients with a view to offering targeted rehabilitation or specific electrode layouts for differing discrimination abilities. 
Acknowledgments
Supported by the Australian Research Council through its Special Research Initiative in Bionic Vision Science and Technology awarded to Bionic Vision Australia, an NHMRC Project Grant 1082358 awarded to PJ Allen, and by the Bertalli Family and Clive & Vera Ramaciotti Foundations to the Bionics Institute; the Victorian Government through its Operational Infrastructure Program (Bionics Institute and the Centre for Eye Research Australia [CERA]); and a National Health and Medical Research Council, Centre for Clinical Research Excellence Award #529923 (CERA). 
Disclosure: M.A. Petoe, None; C.D. McCarthy, None; M.N. Shivdasani, None; N.C. Sinclair, None; A.F. Scott, None; L.N. Ayton, None; N.M. Barnes, None; R.H. Guymer, None; P.J. Allen, None; P.J. Blamey, None 
References
1. da Cruz L, Dorn JD, Humayun MS, et al. Five-year safety and performance results from the Argus II retinal prosthesis system clinical trial. Ophthalmology. 2016;123:2248–2254.
2. Stingl K, Bartz-Schmidt KU, Besch D, et al. Subretinal visual implant Alpha IMS – clinical trial interim report. Vision Res. 2015;111:149–160.
3. Fujikado T, Kamei M, Sakaguchi H, et al. Clinical trial of chronic implantation of suprachoroidal-transretinal stimulation system for retinal prosthesis. Sensor Mater. 2012;24:181–187.
4. Sinclair NC, Shivdasani MN, Perera T, et al. The appearance of phosphenes elicited using a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2016;57:4948–4961.
5. Dagnelie G. Psychophysical evaluation for visual prosthesis. Annu Rev Biomed Eng. 2008;10:339–368.
6. Caspi A, Roy A, Dorn JD, Greenberg RJ. Retinotopic to spatiotopic mapping in blind patients implanted with the Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2017;58:119–127.
7. Caspi A, Zivotofsky AZ. Assessing the utility of visual acuity measures in visual prostheses. Vision Res. 2015;108:77–84.
8. Caspi A, Dorn JD, McClure KH, Humayun MS, Greenberg RJ, McMahon MJ. Feasibility study of a retinal prosthesis. Arch Ophthalmol. 2009;127:398–401.
9. Dorn JD, Ahuja AK, Caspi A, et al. The detection of motion by blind subjects with the epiretinal 60-electrode (Argus II) retinal prosthesis. JAMA Ophthalmol. 2013;131:183–189.
10. da Cruz L, Coley BF, Dorn J, et al. The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. Br J Ophthalmol. 2013;97:632–636.
11. Kotecha A, Zhong J, Stewart D, da Cruz L. The Argus II prosthesis facilitates reaching and grasping tasks: a case series. BMC Ophthalmol. 2014;14:1.
12. Ayton LN, Blamey PJ, Guymer RH, et al. First-in-human trial of a novel suprachoroidal retinal prosthesis. PLoS One. 2014;9:e115239.
13. Ayton LN, Apollo NV, Varsamidis M, Dimitrov PN, Guymer RH, Luu CD. Assessing residual visual function in severe vision loss. Invest Ophthalmol Vis Sci. 2014;55:1332–1338.
14. Shivdasani MN, Sinclair NC, Dimitrov PN, et al. Factors affecting perceptual thresholds in a suprachoroidal retinal prosthesis. Invest Ophthalmol Vis Sci. 2014;55:6467–6481.
15. Slater KD, Sinclair NC, Nelson TS, Blamey PJ, McDermott HJ. neuroBi: a highly configurable neurostimulator for a retinal prosthesis and other applications. IEEE J Transl Eng Health Med. 2015;3:1–11.
16. Pérez Fornos A, Sommerhalder J, da Cruz L, et al. Temporal properties of visual perception on electrical stimulation of the retina. Invest Ophthalmol Vis Sci. 2012;53:2720–2731.
17. Ray A, Lee EJ, Humayun MS, Weiland JD. Continuous electrical stimulation decreases retinal excitability but does not alter retinal morphology. J Neural Eng. 2011;8:045003.
18. Barnes N, Scott AF, Lieby P, et al. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering. J Neural Eng. 2016;13:036013.
19. Bach M, Wilke M, Wilhelm B, Zrenner E, Wilke R. Basic quantitative assessment of visual performance in patients with very low vision. Invest Ophthalmol Vis Sci. 2010;51:1255–1260.
20. Nau A, Bach M, Fisher C. Clinical tests of ultra-low vision used to evaluate rudimentary visual perceptions enabled by the BrainPort vision device. Transl Vis Sci Technol. 2013;2(3):1.
21. Bach M. The Freiburg visual acuity test - automatic measurement of visual acuity. Optom Vis Sci. 1996;73:49–53.
22. Lieberman HR, Pentland AP. Microcomputer-based estimation of psychophysical thresholds: the Best PEST. Behav Res Methods Instrum. 1982;14:21–25.
23. Dunn OJ. Multiple comparisons using rank sums. Technometrics. 1964;6:241–252.
24. Dacey DM, Petersen MR. Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proc Natl Acad Sci U S A. 1992;89:9666–9670.
25. Ahuja AK, Behrend MR. The Argus II retinal prosthesis: factors affecting patient selection for implantation. Prog Retin Eye Res. 2013;36:1–23.
26. Dobelle WH, Turkel J, Henderson DC, Evans JR. Mapping projection of visual field onto visual cortex in man by direct electrical stimulation. Trans Am Soc Artif Intern Organs. 1978;24:15–17.
27. Dobelle WH, Turkel J, Henderson DC, Evans JR. Mapping the representation of the visual field by electrical stimulation of human visual cortex. Am J Ophthalmol. 1979;88:727–735.
28. Sun JJ, Lu YL, Cao PJ, et al. Spatiotemporal properties of multipeaked electrically evoked potentials elicited by penetrative optic nerve stimulation in rabbits. Invest Ophthalmol Vis Sci. 2011;52:146–154.
29. Geruschat DR, Richards TP, Arditi A, et al. An analysis of observer-rated functional vision in patients implanted with the Argus II retinal prosthesis system at three years. Clin Exp Optom. 2016;99:227–232.
Appendix
The Bionic Vision Australia Consortium
The Bionic Vision Australia Consortium consists of five member organizations (Centre for Eye Research Australia, Bionics Institute, Data61, University of Melbourne, and University of New South Wales) and three partner organizations (The Royal Victorian Eye and Ear Hospital, the National Vision Research Institute of Australia, and the University of Western Sydney). For this publication, the consortium members consist of (in alphabetical order): Anthony N. Burkitt,1,2 Owen Burns,1 Peter N. Dimitrov,3 Lisa N. Gillespie,1,4 Paulette Lieby,5 Chi D. Luu,3,6 Hugh J. McDermott,1,4 David A.X. Nayagam,1,4 Darien Paradinas-Diaz,1 Thushara Perera,1,4 Robert K. Shepherd,1,4 Joel Villalobos,1,4 Janine G. Walker,5,7 and Chris E. Williams.1,4 
1. Bionics Institute of Australia, East Melbourne, VIC 3002, Australia
2. Department of Electrical and Electronic Engineering, University of Melbourne, Australia
3. Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
4. Department of Medical Bionics, University of Melbourne, Parkville, VIC 3052, Australia
5. Computer Vision Research Group, Data61, Canberra, ACT 2601, Australia
6. Department of Surgery (Ophthalmology), University of Melbourne, Parkville, VIC 3052, Australia
7. Centre for Mental Health Research, Australian National University, Canberra, Australia