February 2018
Volume 59, Issue 2
Open Access
Eye Movements, Strabismus, Amblyopia and Neuro-ophthalmology  |   February 2018
Eye Movement Control in the Argus II Retinal-Prosthesis Enables Reduced Head Movement and Better Localization Precision
Author Affiliations & Notes
  • Avi Caspi
    Jerusalem College of Technology, Jerusalem, Israel
    Second Sight Medical Products, Inc., Sylmar, California, United States
  • Arup Roy
    Second Sight Medical Products, Inc., Sylmar, California, United States
  • Varalakshmi Wuyyuru
    Second Sight Medical Products, Inc., Sylmar, California, United States
  • Paul E. Rosendall
    The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, United States
  • Jason W. Harper
    The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, United States
  • Kapil D. Katyal
    The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, United States
  • Michael P. Barry
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Gislin Dagnelie
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Robert J. Greenberg
    Second Sight Medical Products, Inc., Sylmar, California, United States
  • Correspondence: Avi Caspi, Department of Electrical and Electronic Engineering, Jerusalem College of Technology, Jerusalem 91160, Israel and Second Sight Medical Products, Inc., Sylmar, CA 91342, USA; [email protected]
Investigative Ophthalmology & Visual Science February 2018, Vol.59, 792-802. doi:https://doi.org/10.1167/iovs.17-22377
Abstract

Purpose: Visual scanning by sighted individuals is done using eye and head movements. In contrast, scanning using the Argus II is solely done by head movement, since eye movements can introduce localization errors. Here, we tested if a scanning mode utilizing eye movements increases visual stability and reduces head movements in Argus II users.

Methods: Eye positions were measured in real-time and were used to shift the region of interest (ROI) that is sent to the implant within the wide field of view (FOV) of the scene camera. Participants were able to use combined eye-head scanning: shifting the camera by moving their head and shifting the ROI within the FOV by eye movement. Eight blind individuals implanted with the Argus II retinal prosthesis participated in the study. A white target appeared on a touchscreen monitor and the participants were instructed to report the location of the target by touching the monitor. We compared the spread of the responses, the time to complete the task, and the amount of head movements between combined eye-head and head-only scanning.

Results: All participants benefited from the combined eye-head scanning mode. Better precision (i.e., narrower spread of the perceived location) was observed in six out of eight participants. Seven of eight participants were able to adopt a scanning strategy that enabled them to perform the task with significantly less head movement.

Conclusions: Integrating an eye tracker into the Argus II is feasible, reduces head movements in a seated localization task, and improves pointing precision.

Retinal prostheses for restoring limited sight are used worldwide to treat people who have lost their sight due to outer retinal degenerative diseases such as retinitis pigmentosa (RP). In retinal prosthesis systems currently approved to treat blindness, there are different approaches regarding the position of the sensing element that captures the visual information of the scene. The Argus II retinal prosthesis uses a head-mounted camera,1–3 while the Alpha-IMS uses a photodiode array on the retina.4 In all cases, the visual percept in the brain is generated by electrical stimulation of the remaining secondary retinal neurons using an array of electrodes. For the percept to be useful, the electrical stimulation, which is delivered in retina-centered coordinates, should convey information to the brain that is perceived in the correct spatial location (i.e., in world-centered coordinates). The brain performs this transformation on the basis of eye and head positions. This transformation works naturally with the implanted photodiode approach, because the source image and stimulation patterns change as the eyes move. Eye movements have a different effect in systems based on a head-mounted camera, since the source image and stimulation patterns do not change unless the camera moves, even if the eyes point in a drastically different direction. 
It has been shown5,6 that eye positions affect the perceived location of phosphenes elicited by electrical stimulation of the retina. This effect is consistent with the inherent dissociation between the image acquisition device and eye movements in the head-mounted camera configuration. Ideally, the visual axes of the eye and the camera should be aligned, such that the brain's information on eye orientation, and therefore on percept location, is not misleading. Any parafoveal placement of the stimulating array can be taken into account by reinterpreting the eye's visual axis to pass through the center of the array instead of the fovea. The camera's axis can be configured by selecting a region of interest (ROI) in the scene camera's wide field of view (FOV). This ROI setting has also been referred to as the camera alignment position.7 Implantees with head-mounted cameras and fixed ROIs are typically instructed to scan using head motions and to keep their eyes straight when using the system. The ROI can be set to match where an implantee localizes percepts on average. Nevertheless, prosthesis users can still suffer eye-camera misalignments during head motion due to, for example, the vestibulo-ocular reflex.5 
Despite this eye orientation-stimulation dissociation, there are still several advantages to the head-mounted camera approach. The sensing element is not implanted in the eye and it is therefore not exposed to the hostile biological environment. In addition, sending information from an external camera allows for flexible image processing, such as implementing algorithms for object or face detection. Moreover, placing the capturing device outside the eye allows for the modular use of various other sensors such as thermal imagers8 or depth cameras for obstacle detection.9 The implanted photodiode array approach, without an external sensing element, lacks such flexibility. However, the sensor does move with the eye and stimulation is consistent with oculomotor information.10 
To provide visual information aligned with the oculomotor system, a combined approach of an implanted photodiode array and an external sensor has undergone preclinical testing11 and was approved for human clinical trials.12 In this device, images acquired by an external camera are projected onto the retina using near-infrared light, which is converted into electric current by photodiodes in each pixel of the implanted array to stimulate the nearby inner retinal neurons. Nonetheless, treating blindness due to diseases that affect the inner retina or the optic nerve will require stimulation at a higher level in the visual pathway, for example at the lateral geniculate nucleus (LGN)13 or at the visual cortex.14 The topographic map within these areas is retina based.15 Such implants will stimulate areas that have a retinotopic map with an image from an external sensor and will thus also have a dissociation between eye movements and the imager. 
The idea of adjusting the neural stimulation pattern according to gaze direction was proposed over two decades ago,16 but a reliable off-the-shelf technology to calibrate and track eye position in blind individuals was not available until recently. Herein, we designed an experiment that measures the benefit of an eye tracker integrated with the head-mounted imaging system of the Argus II retinal prosthesis. The measured eye position is used, in real time, to select the area within the head-mounted camera image that is processed and delivered to the electrodes on the retina. To accomplish this, we used a self-calibrating eye tracker to shift the line of sight of the implant based on eye position. Specifically, our research focused on whether eye movements, voluntary and involuntary, can be used to steer the retinal prosthesis' line of sight, reduce the amount of head scanning, and improve pointing precision. 
The experiment addresses several unexplored issues related to eye movements in blind individuals. It is unclear if they are able to hold their gaze steady during head scanning, and if they can reduce involuntary eye movements. If they cannot, compensating for eye movements may reduce localization errors. In addition, it is unknown if blind individuals can plan a top-down, voluntary, eye movement to assist with searching and scanning. To address these questions, we measured the pointing precision and the amount of eye movements during a localization task in two scanning modes: “head-only” and “eye-head.” The head-only mode replicates the normal behavior during daily use of the implant. In this mode, scanning can be done only through head movements that steer the camera, and there is no correction for eye position. In the eye-head mode, the eye tracker is enabled, and the ROI is based on the instantaneous measured eye position. In this mode, scanning can be done by either head or eye movements, since there is a correction for eye position. 
Methods
Participants, Informed Consent, and the Argus II Implant
Argus II implantees in the United States were invited to participate in the psychophysics research study that took place at Second Sight Medical Products, Inc. in Sylmar, California, and in the Wilmer Eye Institute at Johns Hopkins University, Baltimore, Maryland. 
Eight Argus II implantees (Table 1) whose blindness was caused by retinitis pigmentosa participated in the study. No substantial nystagmus was observed with any of the participants while using the Argus II. For two participants, implantation and rehabilitation with the Argus II was done as part of the Argus II clinical trial (ClinicalTrials.gov Identifier: NCT00407602). The other implantees received the implant as a routine medical procedure at one of the US-based Argus II implant centers. These implantations were performed under humanitarian device exemption (HDE) H110002 issued by the United States Food and Drug Administration (FDA; Feb 13, 2013). 
Table 1
 
Participant Demographics
The study protocol, including the eye-tracking procedure, was approved by the Western IRB and by the Johns Hopkins Medicine IRB. Informed consent was read to the participants, who signed the consent form after all questions were answered, prior to the start of the experiment. All research procedures adhered to the tenets of the Declaration of Helsinki. 
The Argus II retinal implant consists of an implanted array of 60 electrodes arranged in a 10 × 6 rectangular layout. The array covers an area of the retina corresponding to 18° × 11°, assuming that 293 μm on the retina equates to 1° of visual angle.17 The stimulation waveform at each of the 10 × 6 implanted electrodes is calculated by a video processing unit and sent wirelessly to the implant. The transceiver that communicates with the implant was taped to the eye-tracking glasses. 
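For reference, the stated angular coverage corresponds to a retinal extent of roughly 5.3 × 3.2 mm, a back-of-the-envelope conversion using the 293 μm per degree figure rather than a value reported in the text:
\begin{equation*}18^\circ \times 293\ \mu\text{m}/^\circ \approx 5.3\ \text{mm},\qquad 11^\circ \times 293\ \mu\text{m}/^\circ \approx 3.2\ \text{mm}.\end{equation*}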
Eye Tracker Setup
The experimental setup allowed participants to steer the line of sight of the Argus II by using either head or eye movements. Head movements shifted the FOV of the scene camera, while eye movements shifted the ROI within the camera's FOV. This setup allowed combined line-of-sight steering by head and eye movements. 
A tracking device (Eye Tracking Glasses 2.0, ETG 2.0; SensoMotoric Instruments, Teltow, Germany) was used to measure gaze position at 60 Hz. A miniature camera and six infrared illuminators for each eye, mounted on a lightweight glasses frame, were used in a self-calibrating mode. Eye-tracking calibration without visual stimuli was done based on pupil location relative to corneal reflections of the infrared illuminators.18 Snapshots from the eye-tracking camera showing the pupil and the six corneal reflections can be seen in Figure 1. In the self-calibrating mode, the participants moved their eyes across a sufficient range to allow the system to create a model of the eyeball without any participant response to visual stimuli. 
Figure 1
 
Snapshots from the scene and eye tracker cameras during the localization task. Top: Images from the eye tracking camera with the corneal reflections of the self-calibrating eye tracker. Bottom: Images from the scene camera. The orange disk indicates the gaze position. The yellow rectangle indicates the ROI that was sent to the implant.
The eye tracking range of the ETG 2.0 is ±40° horizontally and ±30° vertically. However, the transceiver of the Argus II loses its link with the implant at angular displacements smaller than the range of the eye tracker; hence, the eye tracker's range did not pose a limitation. The Argus II is designed to maintain a link at orientations of ±30° of the implanted coil/antenna relative to the external antenna when the coils are collinear with a separation of 20 mm in air. In practice, the range is narrower due to the attenuation of radiofrequency (RF) energy by the tissues and misalignment between the external and implanted antennas. To help the user maintain a stable link between the external transceiver and the implant, the Argus II system beeps when the link is lost. The beep alerts the user that the eyes need to be shifted back to the center to restore the link. 
The head-mounted scene camera (IDM-200; Imaging Diagnostics, Ness Ziona, Israel) has a 1/4-inch CMOS sensor with a resolution of 640 × 480 pixels and a 73° × 55° FOV when using a lens with a focal length of 2.84 mm (LP2839IR-M7; Misumi Electronics Corp., New Taipei City, Taiwan). The size of the ROI matched the FOV of the Argus II implant, 18° × 11°. The position of the ROI within the scene image was either fixed (head-only condition) or set in real time according to the eye position from the eye tracker (eye-head condition). The instantaneous eye position was acquired by a laptop and was used to calculate the ROI in each video frame. The ROI image content was delivered to the external processing unit of the Argus II, which then sent stimulation levels to the 10 × 6 epiretinal electrodes via the RF transmitter. 
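As an illustration of how the instantaneous eye position can select the ROI within the scene image, the sketch below maps gaze angles to a pixel window using the camera and implant geometry given above. The linear pixel-per-degree mapping, the sign convention, and all names are illustrative assumptions, not the implementation used in the device.

import numpy as np

# Scene camera and implant geometry from the Methods section.
CAM_W_PX, CAM_H_PX = 640, 480        # scene camera resolution (pixels)
CAM_W_DEG, CAM_H_DEG = 73.0, 55.0    # scene camera field of view (degrees)
ROI_W_DEG, ROI_H_DEG = 18.0, 11.0    # implant field of view (degrees)

PX_PER_DEG_X = CAM_W_PX / CAM_W_DEG  # ~8.8 px/deg
PX_PER_DEG_Y = CAM_H_PX / CAM_H_DEG  # ~8.7 px/deg

def roi_bounds(eye_x_deg=0.0, eye_y_deg=0.0):
    """Return (left, top, width, height) of the ROI in camera pixels.

    Zero offsets keep the ROI at the center of the frame (head-only mode);
    in eye-head mode the instantaneous gaze angles shift the ROI.
    Positive angles shift the ROI right/down in image coordinates
    (an assumed sign convention).
    """
    cx = CAM_W_PX / 2 + eye_x_deg * PX_PER_DEG_X
    cy = CAM_H_PX / 2 + eye_y_deg * PX_PER_DEG_Y
    w = int(round(ROI_W_DEG * PX_PER_DEG_X))
    h = int(round(ROI_H_DEG * PX_PER_DEG_Y))
    left = int(round(np.clip(cx - w / 2, 0, CAM_W_PX - w)))
    top = int(round(np.clip(cy - h / 2, 0, CAM_H_PX - h)))
    return left, top, w, h

# Example: gaze 10 degrees right and 5 degrees up of straight ahead.
print(roi_bounds(10.0, -5.0))

In head-only mode the same function would simply be called with zero offsets, keeping the ROI fixed at the center of the camera's FOV.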
Closed-Loop Eye Movement Scanning
The new mode of scanning using eye movements was explained to each participant prior to the beginning of the session. The nose piece of the eye tracking glasses was adjusted for each participant to center the eye in the eye-tracking image (Fig. 1). Participants were instructed to voluntarily move their eyes up-down-right-left to complete the self-calibration of the eye tracker. 
In order to familiarize the participants with the eye-scanning mode, we conducted a simple two-alternative forced choice (2-AFC) test. A vertical white bar was presented either to the right or to the left and the participant was requested to report its location. Participants moved their eyes right and left to steer the line of sight of the implant and find the bar. At least 20 search trials were conducted for each participant and all participants were able to find the target with 100% accuracy using eye movements. Feedback was given after each trial and the participants gained confidence that eye movements can be used to shift the implant's line of sight. 
To measure the benefit of eye position image steering, we compared the localization precision and amount of head movements with and without closed-loop, eye-tracking controlled scanning. With eye-tracking control, voluntary and involuntary eye movements moved the ROI within the scene image. In this mode, the participants were able to scan either by head movements that shifted the entire FOV of the scene camera or by eye movements that shifted the ROI within the camera's FOV. Without eye-tracking control, the participants were able to scan only by head movements that shifted the entire camera's FOV and the ROI was fixed at the center of the camera's FOV. 
Pointing Task
A white target appeared at random locations on a touchscreen monitor (1915L; ELO Touch Solutions, Menlo Park, CA, USA) and the participant was instructed to report the target's location by touching it.7,19 The pointed location on the touchscreen was registered by the software. The target was a circle with a diameter of 5.5 cm and the participant was seated so that the distance from the camera to the screen was 40 cm. 
In each trial, the center location of the target was selected randomly, independent from the other trials in the run. The maximal eccentricity of the target's center was 16° along the horizontal axis and 10.5° along the vertical axis. The angle subtended by the target on the camera was about 8° when the target was presented at the center of the monitor and about 7.6° when the target was in a corner, at maximal eccentricity. 
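The quoted angular size follows directly from the target diameter and the nominal 40 cm viewing distance:
\begin{equation*}\theta = 2\arctan\!\left(\frac{5.5/2}{40}\right) \approx 7.9^\circ.\end{equation*}
At a corner of the monitor the line of sight to the target is slightly longer, so the same 5.5 cm disk subtends the somewhat smaller angle quoted above.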
Each session consisted of several runs, and each run consisted of 20 trials. In half of the runs, the participants searched for the target using combined eye-head scanning, while in the other half, the eye-tracking control was disabled and participants used head-only scanning, as in daily use of the Argus II. The participants were instructed which scanning condition to use in each run, combined eye-head scanning or head-only scanning. In combined eye-head scanning, participants were advised to try using eye movements as they had before losing their sight, but they were free to move their head as well. 
Pointing precision differences between combined eye-head scanning and head-only scanning were analyzed for each participant using the nonparametric Wilcoxon rank-sum test. In this study, we evaluated the closeness of the pointing locations to each other. The accuracy (i.e., the closeness of the pointing locations to the target's true location) was not considered in these analyses. Pointing accuracy, after accounting for eye orientation, should be a function of the position of the electrodes on the retina, inherent pointing bias, and any differences in eye and camera positions not taken into account. Errors in pointing accuracy can be corrected by adjusting the ROI within the camera's FOV.7 We did not correct the average offset between the pointing location and the perceived location, so there was a fixed difference between the target and the pointing locations. Had the offset been corrected, the participants might either have seen their pointing hand or been able to determine when the hand blocked the target. In the current study, participants were not able to see their hand, so pointing was an open-loop task. 
In a closed-loop pointing task the subject can see the pointing finger. Therefore, visual feedback guides the pointing finger until the error is reduced to zero,20 and thus does not reveal information about the actual spatial location where the stimulus is perceived in the brain. A closed-loop pointing task has merit if the trajectory of the pointing is recorded. In such a setup, one can differentiate between the initial and stabilization phases of the trajectory.21 The initial phase of the movement is based on the perceived location of the stimulus, while the stabilization phase is based on the visual feedback that minimizes the error. In an open-loop pointing task, the error is a function of the perceived location in the brain and the ability to point without seeing the finger. This pointing error is the same in eye-head and head-only scanning. Hence, the performance in these two open-loop scanning conditions assessed the difference in precision with which the brain registered the stimuli. 
For each trial, we calculated the distance on the screen along the horizontal (\(dx\)) and vertical (\(dy\)) axes between the location of the target and the perceived location the participant reported by touching the monitor. The pointing precision for each trial was defined as the angular distance between the location pointed to in that trial and the average across all responses in the run. The angular pointing precision in trial j is given by:
\begin{equation}\tag{1}Precision(j) = \arctan\left[\frac{\sqrt{\left(dx_j - \left\langle dx\right\rangle\right)^2 + \left(dy_j - \left\langle dy\right\rangle\right)^2}}{L}\right]\end{equation}
where \(\left\langle dx\right\rangle\) and \(\left\langle dy\right\rangle\) are the averages of the distances on the screen between the marked location and the location of the target along the horizontal and vertical axes, respectively; \(dx_j\) and \(dy_j\) are the distances on the screen at trial j between the marked location and the location of the target along the horizontal and vertical axes, respectively; and \(L\) is the nominal fixed distance between the camera and the screen. 
The pointing deviations along the horizontal and vertical axes in trial j are given by:  
\begin{equation}\tag{2}\eqalign{&D{V_{Horizontal}}\left( j \right) = d{x_j} - \left\langle {dx} \right\rangle \cr&D{V_{Vertical}}\left( j \right) = d{y_j} - \left\langle {dy} \right\rangle \cr} \end{equation}
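A compact numerical restatement of Equations 1 and 2, assuming on-screen distances in centimeters and the nominal 40 cm camera-to-screen distance; the function and variable names are illustrative.

import numpy as np

def pointing_metrics(dx, dy, L=40.0):
    """Per-trial pointing precision (Eq. 1) and deviations (Eq. 2).

    dx, dy : per-trial on-screen distances (cm) between the touched
             location and the target along the horizontal/vertical axes.
    L      : nominal camera-to-screen distance (cm).
    """
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    # Eq. 2: deviation of each trial from the run's mean offset.
    dev_h = dx - dx.mean()
    dev_v = dy - dy.mean()
    # Eq. 1: angular distance of each trial from the mean response (degrees).
    precision = np.degrees(np.arctan(np.hypot(dev_h, dev_v) / L))
    return precision, dev_h, dev_v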
 
Head Motion Recording
To quantify the amount of head motion during the task, an inertial measurement unit (3DM-GX3-25; MicroStrain, Williston, VT, USA) was mounted on the glasses. The amount of head movement (i.e., head scanning) was quantified using the root mean square (RMS) of the head angular velocity.22–24 For each trial, we calculated the RMS of the head velocity using the following equation:  
\begin{equation}\tag{3}{V_{RMS}} = \sqrt {{1 \over N}\,\cdot\,\mathop \sum \limits_{i = 1}^N \left( {v_{xi}^2 + v_{yi}^2 + v_{zi}^2} \right)} {\rm ,}\end{equation}
where \(v_{xi}\), \(v_{yi}\), and \(v_{zi}\) are the head velocities of the ith sample along the x, y, and z axes, respectively, and \(N\) is the number of samples in the trial.  
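Equation 3 translates directly into code; the sketch below assumes angular velocity samples from the IMU (names are illustrative).

import numpy as np

def rms_head_velocity(vx, vy, vz):
    """RMS head angular velocity over a trial (Eq. 3).

    vx, vy, vz : arrays of angular velocity samples along the x, y,
                 and z axes, one entry per IMU sample in the trial.
    """
    vx, vy, vz = (np.asarray(a, float) for a in (vx, vy, vz))
    return np.sqrt(np.mean(vx**2 + vy**2 + vz**2))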
Data Analysis
Three variables were analyzed: pointing precision, head motion, and trial duration. The Wilcoxon rank-sum test was performed on each variable, and a Bonferroni-corrected P value of 0.016 (0.05/3) was considered significant. Comparison of the data across all participants was done using the Wilcoxon signed-rank test to compare matched samples from each participant with and without eye-tracking control. 
Correlation coefficients between pointing deviation and trial-end eye position were calculated using the Pearson linear correlation method. As we compared four independent correlations (horizontal and vertical in the two scanning modes for each participant), a Bonferroni-corrected P value of 0.0125 (0.05/4) was considered significant. 
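As a sketch of how these comparisons can be run with standard SciPy routines (the function names and data layout are illustrative assumptions; each array holds one value per trial):

import numpy as np
from scipy import stats

def compare_conditions(eye_head, head_only, n_tests=3):
    """Wilcoxon rank-sum test with a Bonferroni-corrected threshold (0.05/3)."""
    _, p = stats.ranksums(eye_head, head_only)
    return p, p < 0.05 / n_tests

def gaze_deviation_correlation(eye_pos, deviation, n_tests=4):
    """Pearson correlation between trial-end eye position and pointing
    deviation, with a Bonferroni-corrected threshold (0.05/4)."""
    r, p = stats.pearsonr(eye_pos, deviation)
    return r, p < 0.05 / n_tests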
The confidence ellipse containing 95% of the eye position samples was calculated from the two-dimensional covariance matrix (in the public domain, http://www.visiondummy.com/2014/04/draw-error-ellipse-representing-covariance-matrix/). Based on the χ2 distribution with 2 degrees of freedom, the semimajor and semiminor axes of the ellipse were set to √5.991 (approximately 2.45) times the standard deviation along the respective principal axes. 
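A minimal sketch of the ellipse computation, following the approach in the linked tutorial (the chi-square value for 2 degrees of freedom at the 95% level is 5.991; names are illustrative):

import numpy as np

def confidence_ellipse_axes(x, y, chi2_95=5.991):
    """Semi-axes and orientation of the 95% eye-position confidence ellipse.

    The axes follow the eigenvectors of the 2-D covariance matrix; each
    semi-axis is sqrt(chi2 * eigenvalue), i.e. sqrt(5.991) times the
    standard deviation along that principal axis.
    """
    cov = np.cov(np.vstack([x, y]))            # 2 x 2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    semi_minor, semi_major = np.sqrt(chi2_95 * eigvals)
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return semi_major, semi_minor, angle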
Comparison in a Sighted Subject
Open- and closed-loop pointing precision was measured in a sighted subject, the first author (male, 50 years old), using the same task as the blind participants (Fig. 2). The ROI from the scene camera was presented on liquid-crystal display (LCD) goggles (Wrap 920 VR; Vuzix Corp., Rochester, NY, USA). The scene camera was mounted on the LCD goggles to allow head scanning. 
Figure 2
 
Closed-loop (left) and open-loop (right) pointing for a sighted subject using LCD goggles. The symbols indicate the pointing location in each trial relative to the mean pointing location. The width of each square in the grid is 10°; solid circles indicate the size of the target. In the closed-loop task, the visual feedback reduced the deviation. Data represent 60 trials for each condition.
To occlude the pointing finger and create an open-loop pointing task, the ROI was set to an off-center location in the scene camera FOV but displayed at the center of the LCD goggles. The fixed deviation caused the sighted participant to point outside the ROI, so the pointing finger was not visible. 
Results
Pointing precision, the amount of head movement, and the time to find the target were measured in the two scanning modes. Corrections for eye position were applied only in the combined eye-head scanning mode, not in head-only scanning, in which the eye-tracking control was disabled. Pointing precision was computed from the measured deviations across trials as specified in Equation 1. The amount of head movement was quantified by calculating the RMS velocity of the head motion. The measured results from all eight Argus II participants are summarized in Table 2. 
Table 2
 
Pointing Precision, Head Movement, and Time to Find the Target for all Participants
Paired Wilcoxon signed-rank tests comparing the matched samples of eye-head to head-only scanning found two benefits of combined eye-head scanning. The spread in the pointing location was 25% narrower (P < 0.016) and there was, on average, 50% less head movement (P < 0.016). It is important to note that head movements were much more variable across subjects than eye movements. There was no significant difference in the time to find the target between the two scanning conditions for most participants. 
Sample charts comparing the precision and amount of head movement between the two scanning modes are shown in Figures 3 through 6. Figure 3 presents the data of participant P2, who had better pointing precision and was able to conduct the task with significantly less head movement in the combined eye-head scanning mode (left panels) relative to the head-only scanning mode (right panels). Figure 4 shows the data of participant P3, who was able to conduct the task with almost no head movement using combined eye-head scanning and had better precision than with head-only scanning. 
Figure 3
 
Results for participant P2 that show better precision and less head movement using the combined eye-head scanning (left) in comparison to head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. The width of each square in the grid is 10°; solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 40 trials for each scanning condition.
Figure 4
 
Results for participant P3 that show better precision and almost no head movement using the combined eye-head scanning (left) in comparison to head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 60 trials for each scanning condition.
Figure 5
 
Results for participant P5 that show good precision (i.e., the spread of pointing errors was confined to an area approximately equal to the size of the target) in both scanning modes. Nonetheless, this participant conducted the task with significantly less head movement using the combined eye-head scanning (left) in comparison to head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 60 trials for each scanning condition.
Figure 6
 
Results for participant P6 who had similar precision in the two scanning modes, but had significantly less head movement using the combined eye-head scanning (left) in comparison to head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 40 trials for each scanning condition.
Figures 5 and 6 show data from participants P5 and P6, respectively. These two participants showed no difference in pointing precision between the two scanning modes. Nevertheless, both performed the search task with significantly less head movement in the combined eye-head scanning mode relative to the head-only scanning mode. It is worth noting that P5 was able to achieve a pointing precision that was narrower than the size of the target with head-only scanning, so one would not expect to observe improved precision with eye-head scanning. The data of the other participants are given in the Supplementary Figures S1 through S4. 
The eye positions in all trials for the two scanning conditions are shown in Figure 7 for one participant. The data for all participants are shown in the Supplementary Figures S5 through S10. The solid line marks the confidence ellipse that defines the region containing 95% of all eye position samples. The dimensions of the 95% confidence ellipse represent the amount of eye movement and are given in Table 3. All participants had a larger spread of eye positions in the eye-head scanning mode than in the head-only mode. Nevertheless, there was a considerable amount of eye movement for all participants during the head-only mode. 
Figure 7
 
Eye position samples in all trials during the eye-head scanning (left) and the head-only scanning mode (right). Sample data are presented for participant P3, who had the least extent of eye movements during the head-only scanning mode. Data for all participants are summarized in Table 3 and shown in Supplementary Figures S5 through S10. Each color symbol shows the eye samples during a single trial. The solid contour is the ellipse that contains 95% of the eye position samples. The size of the ellipse in the head-only scanning is a measure of the participants' ability to hold their gaze straight during the task. The difference in ellipse sizes between conditions is an indication of the increase in eye movements in the eye-head condition.
Table 3
 
The Dimensions of the Ellipse That Contains 95% of Eye Position Samples Across All Trials
To assess the contribution of the eye position to the pointing precision, we examined the dependency between the pointing deviation and the eye position at the end of the trial. Sample charts of the pointing deviation along the horizontal and vertical axes as a function of the eye position along the respective axis, for the two scanning conditions, are presented in Figure 8. For this participant, there is a significant correlation (P < 0.0125) between the pointing deviation and the eye position only in the head-only scanning mode. The correlation coefficients for all participants are given in Table 4, and the charts of the deviation versus eye position are provided in the Supplementary Materials. Table 4 lists Pearson's linear correlation coefficients for the cases in which the P value was smaller than 0.0125. 
Figure 8
 
Pointing deviation versus the eye position at the end of the trial for participant P2. Charts for all participants can be found in the Supplementary Figures S11 through S16. Data represent 40 trials for each scanning condition. Left: data in the eye-head scanning mode, with correction for eye movements. Right: data in the head-only scanning mode, without correction for eye movements. It can be seen that the eye positions contribute to deviations in pointing when there is no correction for eye movements.
Table 4
 
Pearson's Linear Correlation Coefficient Between the Pointing Deviation and the Eye Position at the End of Trial
For four out of seven participants (P2, P3, P6, and P7) there is a significant correlation between the pointing deviation along the two axes and the eye position at the end of the trial (i.e., when the participants indicated their response) in the head-only scanning condition. For these participants there was no significant correlation between the pointing deviation and eye position in the eye-head scanning mode. This indicates that the eye position contributed significantly to the pointing deviation for these participants in the head-only mode. In the eye-head mode, the eye tracker improves the pointing precision by correcting for the contribution of eye position to the pointing deviation. 
For participant P5, there is a significant inverse correlation between the pointing deviation and the eye position in the eye-head scanning mode. No correlation was observed for this participant in the head-only mode. This participant had the best pointing performance of all participants in the head-only scanning mode. He was able to hold his gaze straight at the time of pointing; thus, eye movement did not contribute to the pointing deviation. In the eye-head scanning mode, the eye tracker could have been slightly misaligned and overcorrected the eye movements. The overcorrection caused a pointing deviation in the opposite direction, which can explain why there was no improvement in precision. Nonetheless, the eye tracker benefited this participant by significantly reducing head scanning, as can be seen in Figure 5. 
For participants P4 and P8, there was no significant correlation between pointing deviation and eye position at the end of the trial in either scanning mode. It is noteworthy that P4 had a remarkably high head velocity and that P8 had the shortest response times (Table 2). As the eye position is sampled in a specific time window relative to the end of the trial, it is possible that, due to the fast head motion and fast response times, the decision about the target location was not made at the same point in each trial. Participant P1 was tested with an earlier version of the experimental setup without synchronization between the eye data and the time of pointing; hence, these data are not available for P1. 
Discussion
All participants demonstrated better precision and/or less head movement from integrating the eye tracker into the Argus II retinal prosthesis. In addition, one participant (P3) was able to perform the search task faster with the new eye-head scanning mode. For all other participants the duration of the trials with and without eye control was similar. Most participants (P1, P2, P3, P4, P7, and P8) had a significantly narrower spread of pointing locations with combined eye-head scanning relative to head-only scanning. All participants except P4 moved their head significantly less in the combined eye-head scanning. Although participant P4 had better precision with eye-head scanning, he did not change his scanning strategy in the new combined eye-head scanning mode. It is possible that, due to his relatively advanced age (84 at the time of testing) and the length of time he had spent using the system with head-only scanning (more than 9 years), he was set in his ways of searching using head movements. 
The response on the touch monitor was made in an open-loop pointing manner, meaning that the participants could not see their hand while pointing. Hence, the measured pointing error was a summation of the error of the perceived target location in the brain and the error in hand pointing. Most likely, the participants used proprioception of their hand muscles to guide the hand to the location of the percept in world-centered coordinates. Errors in proprioception and motor control would have added variability to touch responses and decreased pointing precision. An additional source of variability was that the participants were not instructed to find the center or a specific location of the target. Therefore, it is possible that in different trials, the participants observed different parts of the target. Nevertheless, these sources of variability would affect both scanning modes. Based on the above, we can conclude that the differences in pointing precision between the eye-head and head-only scanning modes are attributable to the effects of integrating eye tracking into the prosthesis system. 
The mean open-loop pointing precision of the sighted subject presented in Figure 2 is 6.6°. This score is worse than those of all Argus II participants in the eye-head scanning mode with correction for eye position (Table 2). This can be attributed to an adaptation of blind individuals to pointing in open loop, as they do during their daily activities without seeing their hand. In a similar pointing task,25 the relative pointing deviation (i.e., precision) was on average 4.4°. This is similar to the precision we measured in the blind participants implanted with the Argus II prosthesis. 
The deviation in pointing for most of the participants (P1, P2, P3, P5, and P7) was comparable to the size of the target. A smaller target should be explored in future open-loop pointing experiments. In addition, it might be advisable to use the same predefined target locations in the different modes being tested. Subjects might prefer to point to different parts of the target depending on the location of the target on the screen. A separate comparison of the results for each location could eliminate any bias from the target location. 
In real-world tasks, hand pointing is generally done in closed loop, where the person can correct for errors in the pointing based on visual or tactile feedback. Nonetheless, the improvement in the pointing precision as measured here in open loop is an indication that the brain mapped the visual information of the retinal stimulation to more accurate spatial locations based on the position of the eye. Visual stability is the ability to create spatial continuity of the world across movements of the eye and head.26,27 Our results support the notion that eye movements introduce confusion to locating percepts in Argus II users. Integration of an eye tracker in a visual prosthesis compensates for the eye movements and improves visual stability. 
Due to the narrow FOV of currently available retinal prostheses, scanning is a key component for their efficient use. For percepts to be assigned correct locations in space, it is critical to keep the line of sight of the implant aligned with that of the eye during scanning. This becomes more important during activities that do not allow for slow, repeated scanning, such as mobility tasks. Unfortunately, many eye movements during gaze shifts are involuntary and cannot be suppressed. It has been shown that sighted observers perform several saccades per head movement, and these are not always in the same direction as the head movement.28 There are also many factors that affect eye-head coordination in real-world gaze behavior,29 including external factors such as irregularity of the terrain.30 Involuntary eye movements, even when users have been trained to keep their eyes straight, introduce localization error. Our integrated eye tracking helps to compensate for the effects of such eye movements on perceived phosphene locations. 
In this study, the participants did not have substantial training with the new eye-head scanning mode. The improvements presented here indicate that the ability to map stimulation in retina-centered coordinates to correct locations in world-centered coordinates still exists in adventitiously blind people's brains. This is consistent with previous observations that blind patients with a subretinal visual implant can make a saccadic eye movement to the correct location.10 Training and adaptation to the new combined eye-head scanning might further improve performance with closed-loop eye movement scanning. It has been shown that people with tunnel vision alter their eye scanning pattern after training.31 After acquiring a target using eye scanning, a visual prosthesis user may need to perform some head compensation to re-center the eyes relative to the head. This can improve localization precision, as the accuracy of eye tracking is better when the eyes are centered than when they are directed toward the periphery. 
Sighted observers often move their eyes based on visual information acquired through peripheral vision. In contrast, retinal prosthesis users need to direct eye movements outside the implant's FOV. Such saccades outside the implant's FOV are not triggered by visual information and are also observed in patients with tunnel vision.31,32 Sighted observers plan saccades in advance (i.e., the endpoint of a second saccade is set before the first saccade is initiated).33 Most likely, visual prosthesis users cannot plan eye movements in advance. If the target is not found in the ROI, the brain must issue a top-down command to move the eye to the next location. Such top-down commands cannot be planned ahead and most likely add delay relative to scene scanning by sighted observers. Prosthesis users may need to adopt a different scanning strategy than sighted people to adapt to eye scanning with their visual prosthesis. Training and adaptation may lead to faster target acquisition with combined eye-head scanning. The short duration of these experiments did not allow us to examine this effect. 
The results presented here confirm that an eye tracker can be used to enable combined eye-head scanning in users of the Argus II retinal prosthesis. With the new feature, there is a link between instantaneous eye position and the visual information delivered to the retinal electrodes. The dissociation between the camera and the eye position in the Argus II means that eye position can distort the location the brain assigns to a percept. This has been used to explain the variation in percept locations in Argus II users.34 Our results show that an eye tracker can be used to reduce the variability in percept locations. In principle, the disadvantage of a head-mounted camera-based retinal prosthesis relative to the implanted photodiode approach with regard to eye movements can be resolved by integrating an eye tracker that steers the line of sight within the camera's FOV. 
Future multicenter research is needed to test the benefit of an eye tracker in real-world tasks. This will require a mobile eye tracking device integrated with the prosthesis.35 It will be of particular interest to see whether the integration of an eye tracker improves the performance of patients in orientation and mobility tasks such as sidewalk tracking.2 Furthermore, with an integrated mobile eye tracker, participants will be able to train with this new feature for substantial periods of time. This could lead to more efficient use of the eye scanning mode. 
Acknowledgments
Supported by the Alfred E. Mann Fund. 
Disclosure: A. Caspi, Second Sight Medical Products (C), P; A. Roy, Second Sight Medical Products (E, I), P; V. Wuyyuru, Second Sight Medical Products (E, I), P; P.E. Rosendall, None; J.W. Harper, None; K.D. Katyal, None; M.P. Barry, Second Sight Medical Products (F); G. Dagnelie, eSight (F), Quadra Logic Technologies (C, F), P; R.J. Greenberg, Second Sight Medical Products (E, I), P 
References
1. da Cruz L, Dorn JD, Humayun MS, et al. Five-year safety and performance results from the Argus II Retinal Prosthesis System Clinical Trial. Ophthalmology. 2016; 123: 2248–2254.
2. Dagnelie G, Christopher P, Arditi A, et al. Performance of real-world functional vision tasks by blind subjects improves after implantation with the Argus II retinal prosthesis system. Clin Exp Ophthalmol. 2017; 45: 152–159.
3. Ho AC, Humayun MS, Dorn JD, et al. Long-term results from an epiretinal prosthesis to restore sight to the blind. Ophthalmology. 2015; 122: 1547–1554.
4. Stingl K, Bartz-Schmidt KU, Besch D, et al. Subretinal visual implant Alpha IMS – clinical trial interim report. Vision Res. 2015; 111: 149–160.
5. Sabbah N, Authie CN, Sanda N, Mohand-Said S, Sahel JA, Safran AB. Importance of eye position on spatial localization in blind subjects wearing an Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2014; 55: 8259–8266.
6. Caspi A, Roy A, Dorn JD, Greenberg RJ. Retinotopic to spatiotopic mapping in blind patients implanted with the Argus II retinal prosthesis. Invest Ophthalmol Vis Sci. 2017; 58: 119–127.
7. Barry MP, Dagnelie G. Hand-camera coordination varies over time in users of the Argus II retinal prosthesis system. Front Syst Neurosci. 2016; 10: 41.
8. Hedin DS, Seifert GJ, Dagnelie G, Havey GD, Knuesel RJ, Gibson PL. Thermal imaging aid for the blind. Conf Proc IEEE Eng Med Biol Soc. 2006; 1: 4131–4134.
9. McCarthy C, Walker JG, Lieby P, Scott A, Barnes N. Mobility and low contrast trip hazard avoidance using augmented depth. J Neural Eng. 2015; 12: 016003.
10. Hafed ZM, Stingl K, Bartz-Schmidt KU, Gekeler F, Zrenner E. Oculomotor behavior of blind patients seeing with a subretinal visual implant. Vision Res. 2016; 118: 119–131.
11. Lorach H, Goetz G, Smith R, et al. Photovoltaic restoration of sight with high visual acuity. Nat Med. 2015; 21: 476–482.
12. Waltz E. French regulators approve human trial of a bionic eye. IEEE Spectrum. 2017. Available at: https://spectrum.ieee.org/the-human-os/biomedical/bionics/french-regulators-approve-human-trial-of-a-bionic-eye
13. Pezaris JS, Reid RC. Demonstration of artificial visual percepts generated through thalamic microstimulation. Proc Natl Acad Sci U S A. 2007; 104: 7670–7675.
14. Normann RA, Greger B, House P, Romero SF, Pelayo F, Fernandez E. Toward the development of a cortically based visual neuroprosthesis. J Neural Eng. 2009; 6: 035001.
15. Andersen RA, Essick GK, Siegel RM. Encoding of spatial location by posterior parietal neurons. Science. 1985; 230: 456–458.
16. Dagnelie G, Massof RW. Toward an artificial eye. IEEE Spectrum. 1996; 33: 20–29.
17. Oyster CW. The Human Eye: Structure and Function. Sunderland, MA: Sinauer Associates, Inc.; 1999.
18. Zoccolan D, Graham BJ, Cox DD. A self-calibrating, camera-based eye tracker for the recording of rodent eye movements. Front Neurosci. 2010; 4: 193.
19. Ahuja AK, Dorn JD, Caspi A, et al. Blind subjects implanted with the Argus II retinal prosthesis are able to improve performance in a spatial-motor task. Br J Ophthalmol. 2011; 95: 539–543.
20. Crawford JD, Medendorp WP, Marotta JJ. Spatial transformations for eye-hand coordination. J Neurophysiol. 2004; 92: 10–19.
21. Sarlegna F, Blouin J, Bresciani JP, Bourdin C, Vercher JL, Gauthier GM. Target and hand position information in the online control of goal-directed arm movements. Exp Brain Res. 2003; 151: 524–535.
22. Hammal Z, Cohn JF, Messinger DS. Head movement dynamics during play and perturbed mother-infant interaction. IEEE Trans Affect Comput. 2015; 6: 361–370.
23. Demer JL, Goldberg J, Porter FI. Effect of telescopic spectacles on head stability in normal and low vision. J Vestib Res. 1990; 1: 109–122.
24. Hammal Z, Cohn JF, George DT. Interpersonal coordination of head motion in distressed couples. IEEE Trans Affect Comput. 2014; 5: 155–167.
25. Endo T, Kanda H, Hirota M, Morimoto T, Nishida K, Fujikado T. False reaching movements in localization test and effect of auditory feedback in simulated ultra-low vision subjects and patients with retinitis pigmentosa. Graefes Arch Clin Exp Ophthalmol. 2016; 254: 947–956.
26. Melcher D. Visual stability. Philos Trans R Soc Lond B Biol Sci. 2011; 366: 468–475.
27. Wurtz RH. Neuronal mechanisms of visual stability. Vision Res. 2008; 48: 2070–2089.
28. Fang Y, Nakashima R, Matsumiya K, Kuriki I, Shioiri S. Eye-head coordination for visual cognitive processing. PLoS One. 2015; 10: e0121035.
29. Lappi O. Eye movements in the wild: oculomotor control, gaze behavior & frames of reference. Neurosci Biobehav Rev. 2016; 69: 49–68.
30. 't Hart BM, Einhäuser W. Mind the step: complementary effects of an implicit task on eye and head movements in real-life gaze allocation. Exp Brain Res. 2012; 223: 233–249.
31. Ivanov IV, Mackeben M, Vollmer A, Martus P, Nguyen NX, Trauzettel-Klosinski S. Eye movement training and suggested gaze strategies in tunnel vision – a randomized and controlled pilot study. PLoS One. 2016; 11: e0157825.
32. Luo G, Vargas-Martin F, Peli E. The role of peripheral vision in saccade planning: learning from people with tunnel vision. J Vis. 2008; 8 (14): 25.
33. Caspi A, Beutter BR, Eckstein MP. The time course of visual information accrual guiding eye movement decisions. Proc Natl Acad Sci U S A. 2004; 101: 13086–13090.
34. Luo YH, Zhong JJ, Clemo M, da Cruz L. Long-term repeatability and reproducibility of phosphene characteristics in chronically implanted Argus II retinal prosthesis subjects. Am J Ophthalmol. 2016; 170: 100–109.
35. Tomasi M, Pundlik S, Bowers AR, Peli E, Luo G. Mobile gaze tracking system for outdoor walking behavioral studies. J Vis. 2016; 16 (3): 27.
Figure 1. Snapshots from the scene and eye-tracker cameras during the localization task. Top: Images from the eye-tracking camera with the corneal reflections of the self-calibrating eye tracker. Bottom: Images from the scene camera. The orange disk indicates the gaze position; the yellow rectangle indicates the ROI that was sent to the implant.
Figure 2. Closed-loop (left) and open-loop (right) pointing for a sighted subject using LCD goggles. The symbols indicate the pointing location in each trial relative to the mean pointing location. The width of each square in the grid is 10°; solid circles indicate the size of the target. In the closed-loop task, visual feedback reduced the deviation. Data represent 60 trials for each condition.
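As a point of reference for how the pointing spread shown in Figures 2 through 6 can be quantified, the sketch below is an illustrative NumPy example, not the authors' analysis code; the synthetic responses and the 60-trial count are assumptions for demonstration. It centers each trial's touch response on the mean pointing location and reports the spread of the deviations along each axis.

```python
import numpy as np

# Hypothetical pointing responses (degrees of visual angle), one row per trial.
# Columns: horizontal and vertical touch position on the monitor.
rng = np.random.default_rng(0)
responses = rng.normal(loc=[2.0, -1.0], scale=[3.0, 4.0], size=(60, 2))  # placeholder data

# Deviation of each trial from the mean pointing location
# (this is what the scatter plots in Figures 2-6 display).
deviations = responses - responses.mean(axis=0)

# A simple precision summary: standard deviation of the deviations per axis.
spread_x, spread_y = deviations.std(axis=0, ddof=1)
print(f"horizontal spread: {spread_x:.1f} deg, vertical spread: {spread_y:.1f} deg")
```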
Figure 3. Results for participant P2, showing better precision and less head movement with combined eye-head scanning (left) than with head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. The width of each square in the grid is 10°; solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 40 trials for each scanning condition.
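The head-movement histograms in Figures 3 through 6 are built from per-trial RMS head velocity. Below is a minimal sketch of one way such a quantity can be computed, assuming head orientation (yaw and pitch) sampled at a fixed rate; the 50-Hz rate, the variable names, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rms_head_velocity(head_angles_deg: np.ndarray, sample_rate_hz: float) -> float:
    """RMS of the angular head speed over one trial.

    head_angles_deg: (n_samples, 2) array of yaw and pitch angles in degrees.
    """
    dt = 1.0 / sample_rate_hz
    # Angular velocity components between consecutive samples (deg/s).
    velocity = np.diff(head_angles_deg, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)       # combined angular speed
    return float(np.sqrt(np.mean(speed ** 2)))     # root-mean-square speed

# Example with synthetic data: gentle head scanning sampled at an assumed 50 Hz.
rng = np.random.default_rng(1)
angles = np.cumsum(rng.normal(scale=0.2, size=(500, 2)), axis=0)
print(f"RMS head velocity: {rms_head_velocity(angles, 50.0):.1f} deg/s")
```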
Figure 4. Results for participant P3, showing better precision and almost no head movement with combined eye-head scanning (left) in comparison to head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 60 trials for each scanning condition.
Figure 5. Results for participant P5, showing good precision in both scanning modes (i.e., the spread of pointing errors was confined to an area approximately equal to the size of the target). Nonetheless, this participant performed the task with significantly less head movement using combined eye-head scanning (left) than head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 60 trials for each scanning condition.
Figure 6. Results for participant P6, who had similar precision in the two scanning modes but significantly less head movement using combined eye-head scanning (left) than head-only scanning (right). Top: Pointing location in each trial relative to the mean pointing location. Solid circles indicate the size of the target. Bottom: Histograms of RMS head velocity. Data represent 40 trials for each scanning condition.
Figure 7. Eye position samples in all trials during the eye-head scanning mode (left) and the head-only scanning mode (right). Sample data are presented for participant P3, who had the smallest extent of eye movements during the head-only scanning mode. Data for all participants are summarized in Table 3 and shown in Supplementary Figures S5 through S10. Each colored symbol shows the eye samples during a single trial. The solid contour is the ellipse that contains 95% of the eye position samples. The size of the ellipse in the head-only scanning mode is a measure of the participants' ability to hold their gaze straight during the task; the difference in ellipse sizes between conditions indicates the increase in eye movements in the eye-head condition.
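The 95% ellipse shown in Figure 7 can be derived from the covariance of the eye position samples. The sketch below is an illustrative NumPy/SciPy implementation under a bivariate-normal assumption; the chi-square scaling and the synthetic samples are assumptions for demonstration, not the authors' procedure.

```python
import numpy as np
from scipy.stats import chi2

def ellipse_95(eye_xy_deg: np.ndarray):
    """Axis lengths (degrees) and orientation (radians) of the ellipse expected
    to contain 95% of the samples, assuming a bivariate normal distribution."""
    cov = np.cov(eye_xy_deg, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)       # principal axes of the spread
    scale = chi2.ppf(0.95, df=2)                 # 95% quantile for 2 dimensions
    half_axes = np.sqrt(scale * eigvals)         # semi-axis lengths
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])
    return 2 * half_axes, angle

# Synthetic eye position samples (degrees), stand-ins for one scanning condition.
rng = np.random.default_rng(2)
samples = rng.multivariate_normal([0, 0], [[4.0, 1.0], [1.0, 2.0]], size=2000)
axes_deg, angle_rad = ellipse_95(samples)
print(f"ellipse axes: {axes_deg[1]:.1f} x {axes_deg[0]:.1f} deg, "
      f"orientation: {np.degrees(angle_rad):.0f} deg")
```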
Figure 8. Pointing deviation versus eye position at the end of the trial for participant P2. Charts for all participants can be found in Supplementary Figures S11 through S16. Data represent 40 trials for each scanning condition. Left: Data in the eye-head scanning mode, in which eye movements are compensated by shifting the ROI. Right: Data in the head-only scanning mode, in which there is no compensation for eye movements. Eye positions contribute to deviations in pointing when there is no correction for eye movements.
Table 1. Participant Demographics
Table 2. Pointing Precision, Head Movement, and Time to Find the Target for All Participants
Table 3. The Dimensions of the Ellipse That Contains 95% of Eye Position Samples Across All Trials
Table 4. Pearson's Linear Correlation Coefficient Between the Pointing Deviation and the Eye Position at the End of Trial
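The quantity reported in Table 4 (and visualized in Figure 8) is a standard Pearson coefficient between the per-trial eye position at the end of the trial and the per-trial pointing deviation. The short sketch below uses synthetic placeholder data and scipy.stats.pearsonr; it is an illustration of the statistic, not the authors' analysis script.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic per-trial values (degrees): horizontal eye position at the end of
# each trial and the corresponding horizontal pointing deviation from the mean.
rng = np.random.default_rng(3)
eye_at_end = rng.normal(scale=5.0, size=40)
pointing_dev = 0.8 * eye_at_end + rng.normal(scale=2.0, size=40)  # correlated by construction

r, p_value = pearsonr(eye_at_end, pointing_dev)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```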
Supplement 1