Open Access
Visual Psychophysics and Physiological Optics  |   December 2023
The Influence of Phosphene Synchrony in Driving Object Binding in a Simulation of Artificial Vision
Author Affiliations & Notes
  • Noya Meital-Kfir
    Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, United States
    Department of Neurosurgery, Harvard Medical School, Boston, Massachusetts, United States
  • John S. Pezaris
    Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, United States
    Department of Neurosurgery, Harvard Medical School, Boston, Massachusetts, United States
  • Correspondence: John S. Pezaris, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA; [email protected]
Investigative Ophthalmology & Visual Science December 2023, Vol. 64, 5. doi:https://doi.org/10.1167/iovs.64.15.5
Abstract

Purpose: Electrical microstimulation techniques used in visual prostheses are designed to restore visual function following acquired blindness. Patterns of induced focal percepts, known as phosphenes, are achieved by applying localized electrical pulses to the visual pathway to bypass the impaired site in order to convey images from the external world. Here, we use a simulation of artificial vision to manipulate relationships between individual phosphenes to observe the effects on object binding and perception. We hypothesize that synchronous phosphene presentation will facilitate object perception as compared to asynchronous presentation.

Methods: A model system that tracks gaze position of normal, sighted participants to present patterns of phosphenes on a computer screen was used to simulate prosthetic vision. Participants performed a reading task at varying font sizes (1.1–1.4 logMAR) and under varying levels of phosphene temporal noise while reading accuracy and speed were measured.

Results: Reading performance was significantly affected by temporal noise in phosphene presentation, with increasing desynchronization leading to lower reading scores. A drop in performance was also observed when the total latency between the gaze position and phosphene update was increased without adding temporal noise.

Conclusions: Object perception (here, text perception) is enhanced with synchronously presented phosphenes as compared to asynchronously presented ones. These results are fundamental for developing an efficient temporal pattern of stimulation and for the creation of high-fidelity prosthetic vision.

The utility of a visual prosthesis device lies in its ability to provide detailed, cohesive percepts. Although recipients of visual prostheses emphasize the importance of functional vision and the ability of prosthetic devices to provide real-world experiences,1,2 little is known about the details of how artificial visual input creates the perception of an image. In natural vision, the integration of features with common properties into a unified object, known as perceptual binding, is thought to depend on the temporal relationships between neural representations. Visual input that evokes synchronized neural activity is more likely to be perceptually bound than input that does not.3–6 In a previous report, we reviewed the low-level effects of synchronous versus asynchronous stimulation in both simulated and clinical visual prostheses7 and posed a series of open questions. Here, we begin to address the first question by experimentally assessing the influence of stimulation synchrony on simulated prosthetic vision in a higher-level task. 
The creation of phosphenes, the small spots of light that form the pixels of artificial vision, can be achieved in visual prostheses by applying localized electrical pulses to intact tissue in the visual pathway. A single electrical contact typically elicits a single phosphene with a fixed location in the visual field. Multiple contacts create sets of phosphenes in constellation-like patterns across the visual field. 
The creation of an image from a set of phosphenes requires translating the visual scene into a pattern of phosphene activation that varies in intensity across the set. While this approach successfully produces functional restoration,8–11 temporal and spatial interactions between electrodes may have significant perceptual impact.7,10,12–14 
In particular, multielectrode stimulation can be designed to take place simultaneously, with the spatial pattern of current applied in the same instant to all electrodes to be stimulated, resulting in synchronous electrical pulses, or it may involve some form of sequencing from electrode to electrode, resulting in asynchronous electrical pulses.7 Although synchrony is known to facilitate object formation in normal vision, this effect might be reversed under artificial sight due to the peculiarities of electrically evoked neural activity.15 In cochlear implants, for example, synchronous stimulation reduces the quality of the percept due to interelectrode electric field interactions.3,6,7,16 Studies exploring the contribution of synchrony to artificial vision in animals17 as well as patients18,19 reported high perceptual sensitivity to temporal shifts between stimulation trains applied to separate electrodes. In an early study, one participant demonstrated more accurate reading of visual braille with asynchronous than with synchronous stimulation.20 
Contemporary visual prostheses like Argus II and Orion,21 IRIS II,22 Australian bionic eye,23 and CORTIVIS24 are all believed to use asynchronous stimulation7 through multiplexing architectures that rapidly switch stimulation from electrode to electrode. These systems have been successful in demonstrations of object perception in blind participants.12,14,25 However, most studies activated a minimal number of electrodes with preselected retinotopic locations that corresponded to the shape to be conveyed, without gaze compensation to impart allospatial object stability,26,27 and did not explore the question of synchronous versus asynchronous stimulation. 
Here, we explore the perceptual consequences of desynchronizing artificial stimulation on the formation of complex visual percepts. We used a noninvasive simulation of artificial vision with sighted participants as a model system for blind individuals with visual prostheses to assess the effect of temporal desynchronization of phosphene presentation in a reading task based on the MNREAD test.28,29 We hypothesized that synchronous phosphene presentation, as a model for synchronous electrical stimulation, would facilitate object perception, and thus reading ability, as compared to asynchronous presentation. 
Methods
Participants
Our experiment included 23 naive participants (9 male/14 female) with normal or corrected-to-normal vision. Participants were all expert English readers, recruited from staff and students at the Massachusetts General Hospital (MGH). The experimental task and procedure were approved by the MGH Institutional Review Board and adhered to the Declaration of Helsinki. Informed consent was obtained from each participant. 
Apparatus
Participants were seated in front of a gaze tracker (Tobii Pro Spectrum 1200; Tobii, Inc., Stockholm, Sweden) with a high-speed 25-in. display mounted on top (ROG Swift 360Hz PG259QN; Asus, Taipei, Taiwan). Behavioral control, including stimulus creation and presentation, was performed through custom software (PLECS v.210906) running on a standard personal computer (Model P330; Lenovo, Inc., Beijing, China). Participants were seated approximately 65 cm away from the stimulus monitor and positioned so that the center of the screen was at a comfortable gaze location. The display occupied 45 by 36 degrees of the visual field. To help minimize participant movement, a fixed-position chair was used. Although the gaze tracker is capable of higher temporal resolution, we ran it at 300 Hz to reduce gaze position noise to an acceptable level. The participant display was set to a 240 Hz frame rate and 1280 × 720 resolution. The frame-to-frame delay at 240 Hz is approximately 4.2 ms. Thus, a three-frame delay is 12.5 ms, and a six-frame delay is 25 ms; these values have important roles in this report, as will be seen later. The total system latency from eye motion to photons entering the eye from an updated display includes delays within the gaze tracker, stimulus frame generation, and monitor. Based on published figures, we estimate gaze tracker delays at 5 ms, the minimum delay for computing a new frame is 4.2 ms (at 240 Hz), and the display has been reported to have a 2.6-ms delay (TFT Central, tftcentral.co.uk/reviews/asus_rog_swift_360hz_pg259qn), for a conservatively estimated minimum system latency of 12 ms. In this experiment, when additional frames of latency are included, they are added to that floor. Latency values for clinical devices are not generally available but are expected to be at least as large, based on estimated frame rates.7 
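As a rough illustration of the latency budget described above, the following sketch (Python; the constants are the delay estimates quoted in this paragraph, and the helper names are ours, not part of the experimental software) converts frame counts to milliseconds and sums the component delays.

```python
# Illustrative latency-budget arithmetic for the simulation hardware.
# The component delays below are the estimates quoted in the text; the
# helper and constant names are ours, not part of the experimental software.

MONITOR_HZ = 240
FRAME_MS = 1000 / MONITOR_HZ  # ~4.2 ms per display frame at 240 Hz

def frames_to_ms(n_frames: int) -> float:
    """Convert a delay expressed in display frames to milliseconds."""
    return n_frames * FRAME_MS

print(f"3-frame delay: {frames_to_ms(3):.1f} ms")  # ~12.5 ms
print(f"6-frame delay: {frames_to_ms(6):.1f} ms")  # ~25.0 ms

# Conservative minimum system latency from eye motion to updated photons.
GAZE_TRACKER_MS = 5.0        # estimated gaze tracker delay
FRAME_COMPUTE_MS = FRAME_MS  # minimum of one frame to compute the new image
DISPLAY_MS = 2.6             # reported display input lag
floor_ms = GAZE_TRACKER_MS + FRAME_COMPUTE_MS + DISPLAY_MS
print(f"Estimated latency floor: {floor_ms:.0f} ms")  # ~12 ms
```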
Reading Task and Stimuli
Participants performed a simple reading task (see Procedure), presented as a block of trials in which the participant attempted to read sentences out loud, either under the control condition of natural view, where text was presented normally on the screen, or under experimental conditions of phosphene view, where text was presented through an array of phosphenes using a gaze-contingent architecture. 
Reading stimuli presented to the participants were 72 simple task sentences (or just sentences) taken from the MNREAD assessment of reading acuity.28,29 Each task sentence contains 60 characters (including spaces) and between 10 and 13 words, broken evenly across three lines. Sentences were rendered in the Times New Roman font at four font sizes (1.1–1.4 logMAR in steps of 0.1 logMAR). Similar font sizes have been used in our previous work; here, the upper limit was determined by the extent of the monitor, as sentences at the largest font size filled the screen. Each sentence was presented once and was shown in natural view or one of the phosphene viewing conditions. Natural view of the sentences, the normal viewing of text on the computer screen, was used as a control condition; phosphene view, with text presented through the artificial vision simulation, was used as the experimental condition. It is important to note that while we employed sentences taken from the MNREAD corpus, this task does not follow the MNREAD procedures and uses a different method for scoring and evaluation. 
The Phosphene Pattern and Generation of the Experimental Viewing Condition
A pattern of 2000 phosphenes, P2000, was used to simulate vision with a prosthetic device.30 Phosphene positions were determined by eccentricity, packed more densely toward the point of regard and more sparsely at the periphery, but spanned the entire visual field. Phosphene size was also determined by eccentricity, tracking the profile of receptive field size across the visual field for the early visual system, scaled by empirical measurements from the literature and unpublished nonhuman primate results in our laboratory, as used in our previous reports.27,30–36 The particular pattern used reflects a thalamic visual prosthesis13,30 and features a phosphene count that we believe to be achievable in a near-term future device.31 Previous reports have assessed the visual acuity of highly similar patterns at 1.08 ± 0.09 logMAR.36 
The phosphene pattern P2000 was computationally applied as a veridical filter on an image of a text sentence, with each phosphene's brightness representing the local image luminance.33 Phosphenes were modeled by independent two-dimensional Gaussians. The brightness of each phosphene was determined by the average luminance of the portion of the image it covered using the phosphene's Gaussian profile as a weighting distribution. The brightness of spatially overlapping phosphenes combined linearly for the overlapping portions, with a soft saturation. A static version of this filtering for the central area of the phosphene pattern is shown in Figure 1.
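To make this filtering step concrete, here is a minimal sketch (Python with NumPy; illustrative only, not the PLECS implementation, and the tanh soft saturation is an assumed stand-in for the unspecified saturating nonlinearity) that computes each phosphene's brightness as the Gaussian-weighted average of the underlying image and combines overlapping phosphenes linearly before saturating.

```python
import numpy as np

def gaussian_weights(shape, cx, cy, sigma):
    """2-D Gaussian profile of one phosphene, in pixel coordinates."""
    ys, xs = np.indices(shape)
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def render_phosphene_view(image, phosphenes):
    """Render a phosphene view of `image` (luminance assumed in [0, 1]).

    `phosphenes` is a list of (cx, cy, sigma) tuples. Each phosphene's
    brightness is the Gaussian-weighted average of the image under its
    profile; overlapping phosphenes are summed linearly and then passed
    through a soft saturation (tanh here, an assumed stand-in for the
    unspecified saturating nonlinearity).
    """
    out = np.zeros_like(image, dtype=float)
    for cx, cy, sigma in phosphenes:
        w = gaussian_weights(image.shape, cx, cy, sigma)
        brightness = (w * image).sum() / w.sum()  # local average luminance
        out += brightness * w                     # draw the phosphene
    return np.tanh(out)                           # soft saturation
```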
Figure 1. Phosphene (upper row) and natural view (lower row) at the four different font sizes (logMAR 1.1–1.4). The images here depict approximately 6 degrees of visual angle centered on the point of regard as the participant is looking at the word /to/. For illustrative clarity, phosphenes that fall on the black background are depicted in dark gray to indicate their locations and sizes; during the experiment, they were shown as black. When viewed through the active simulation that is responsive to changes in gaze position, recognition is substantially easier than might be expected from these static depictions.
We used a gaze-contingent architecture to create phosphenes that were stabilized on the retina13,31,32 using methods previously reported27,33–36 and summarized here. The center of P2000 was shifted, on a frame-by-frame basis, to track the participant's instantaneous gaze position, thus stabilizing the phosphene positions on the retina and allowing the participant to steer the pattern using normal eye and head movements. The phosphene pattern illuminated a virtual text image that was fixed in space at the surface of the monitor, as it would be during natural view trials, with different portions of the image appearing through the phosphene filtering as the participant read the text. The exact number of phosphenes that fell within the screen area varied as the pattern was steered by the participant's gaze position (recall that P2000 spanned the entire visual field). Because of the center-weighted nature of the pattern, about half of the phosphenes by count were within the area of the monitor during the task.34 
We used a real-time design to present the stimuli. For each 240 Hz frame of the participant monitor, the gaze position was read, the phosphene pattern position updated, filtering applied to an off-screen text image to determine each phosphene brightness, phosphenes rendered to an internal buffer, and then the buffer sent to the video card (see Figure 4 in Vurro et al.33). This architecture was then augmented to support synchronized versus desynchronized phosphene presentation as described below. 
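The per-frame sequence can be summarized in the following sketch (Python; each step is passed in as a callable because the actual gaze tracker I/O, P2000 pattern shift, Gaussian filtering, and video-buffer hand-off are specific to the PLECS software and are not reproduced here).

```python
def run_trial(read_gaze, shift_pattern, filter_image, render, present, n_frames):
    """Per-frame sequence of the real-time presentation loop (sketch only).

    Each step is a caller-supplied function standing in for the actual
    gaze tracker I/O, pattern shift, filtering, and display hand-off.
    """
    for _ in range(n_frames):              # one iteration per 240 Hz display frame
        gaze = read_gaze()                 # read the current gaze position
        pattern = shift_pattern(gaze)      # re-center the phosphene pattern on the point of regard
        levels = filter_image(pattern)     # per-phosphene brightness from the off-screen text image
        buffer = render(pattern, levels)   # render phosphenes to an internal buffer
        present(buffer)                    # send the buffer to the video card
```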
Phosphene Temporal Noise
We used four different phosphene temporal patterns to test the effect of temporal desynchronization of phosphene presentation on performance. Each temporal pattern was defined by its latency and dispersion. Latency (L) expresses the mean temporal delay between the current eye position and phosphene activation, averaged over all phosphenes (such measurements are sometimes also called a group delay because they describe the behavior of the group as a whole). For example, a latency or group delay of six frames (25 ms) implies that at each time point, the phosphenes display the image intensity that corresponds to the eye position six frames earlier despite appearing in visual space at the current eye position. Large latencies create the perceptual effect of an elastic slippage where the image seen through the phosphenes appears to take a while to catch up after each saccade. Dispersion (D) describes conditions in which temporal scatter is added to the presentation of the phosphenes relative to the average latency in order to create asynchrony. In this case, a dispersion of three frames (12.5 ms) means that the activation of phosphenes is uniformly distributed across six frames in time (plus or minus three frames) relative to the average latency. Large dispersions create the perceptual effect of scintillation. 
Four phosphene temporal patterns were used for this study, shown graphically in Figure 2, coded as (1) L0/D0, in which all phosphenes were presented simultaneously (zero dispersion, or D0) in correspondence to the current eye position (zero additional latency, or L0); (2) L6/D0, where all phosphenes were presented simultaneously with a fixed delay of six frames relative to the current eye position; (3) L6/D3, in which the presentation of phosphenes was distributed across plus or minus three frames relative to the average six-frame latency; and (4) L6/D6, in which the presentation of phosphenes was distributed across plus or minus six frames relative to the average six-frame latency. In a given trial, only one temporal pattern was used (see Fig. 3 for a comparison), but the specific latency for each phosphene was resampled from a uniform distribution based on the current temporal pattern for the computation of each frame. Trials with zero dispersion presented phosphenes in a traditional manner, equivalent to conditions used in previous reports from our group where experiments were conducted without added temporal noise27,32–36; trials with nonzero dispersion, the novel presentation condition used here, spread the activation of phosphenes over brief windows of time. 
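A minimal sketch of how these temporal patterns could be realized is shown below (Python with NumPy; we assume integer-frame delays drawn uniformly from latency ± dispersion and resampled every frame, matching the description above, and the helper names are our own, not the experimental code).

```python
import numpy as np

rng = np.random.default_rng(0)

# The four temporal patterns, expressed as (latency, dispersion) in display frames.
CONDITIONS = {"L0/D0": (0, 0), "L6/D0": (6, 0), "L6/D3": (6, 3), "L6/D6": (6, 6)}

def per_phosphene_delays(n_phosphenes, latency, dispersion):
    """Sample one delay (in display frames) per phosphene for the current frame.

    Delays are drawn uniformly from [latency - dispersion, latency + dispersion];
    with dispersion = 0 every phosphene shares the same delay (synchronous
    presentation). In the experiment, delays were resampled for every frame.
    """
    return rng.integers(latency - dispersion, latency + dispersion + 1,
                        size=n_phosphenes)

def frame_brightness(gaze_history, frame_idx, delays, sample_brightness):
    """Brightness of each phosphene on frame `frame_idx` (sketch only).

    `gaze_history[k]` is the gaze sample from frame k, and
    `sample_brightness(gaze_xy, i)` is any function (e.g., the Gaussian
    filter sketched earlier) returning phosphene i's brightness when the
    pattern is centered on `gaze_xy`. Each phosphene is drawn at the current
    gaze position but shows image content from `delays[i]` frames earlier.
    """
    return [sample_brightness(gaze_history[max(frame_idx - d, 0)], i)
            for i, d in enumerate(delays)]
```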
Figure 2. The four phosphene temporal patterns. Latency (L) expresses the average temporal delay or group delay between the current eye position and the activation of each phosphene. Dispersion (D) is the amount of temporal scatter added to the average latency. In trials with nonzero dispersion, the specific latency for each phosphene is resampled from uniform distributions (gold, red) on a frame-by-frame basis. The participant's monitor has a frame rate of 240 Hz, so each frame is approximately 4.2 ms. The color code for stimulus conditions introduced here is continued in later figures.
Figure 3. Example images under the four latency/dispersion conditions. In each case, the gaze location is at the center of the screen, where a large letter A suddenly appears. For the top row, L0/D0, the letter is immediately and synchronously displayed at frame 0 (purple outline). For the second row, L6/D0, the letter is delayed to frame 6 but synchronously displayed (green outline). For the third row, L6/D3, the letter is asynchronously displayed from frames 3 to 9 (gold outline) and is stabilized afterward. For the fourth row, L6/D6, the letter is asynchronously displayed from frames 0 to 12 (red outline) and stabilized afterward, off the edge of the figure. Scintillation can be seen in the bottom two rows where dispersion is nonzero.
Procedure
Participants took an informal, binocular Snellen chart test at the beginning of the experimental session to verify their vision was in the normal range (all participants had acuity of 20/40 or better, with a median of 20/10). Importantly, fine visual acuity was not necessary for this task. 
Participants were then seated in front of a computer monitor and a gaze tracker system. The screen-to-eye distance was adjusted for each participant, with a nominal value of 65 cm, to ensure stable gaze detection by the gaze tracker system. A standard 9-point grid calibration procedure followed, using the manufacturer's calibration tool (Eyetracker Manager; Tobii, Inc., Stockholm, Sweden). The experiment then proceeded as two blocks of 36 trials each. A break of a few minutes was given between blocks. Each trial (Fig. 4) started with the participant fixating on a central white dot for at least 400 ms. Then, one of the sentences was presented on the screen in either phosphene view (Fig. 1, upper row; Fig. 5) or natural view mode (Fig. 1, lower row). The participants were instructed to read the sentence out loud as quickly and accurately as possible while scanning the text, left to right, without going back to previous words or making corrections. In case of difficulty or uncertainty, the participants were encouraged to guess as many words as possible without spending too much time and effort on each one. Once finished reading, the participant terminated the trial by fixating on a dot near the top of the screen area for at least 400 ms. Reading time was measured from the start of sentence presentation to the start of fixation on the next-trial dot (i.e., the startup latency of initiating reading was included, but the 400-ms terminating fixation was not; methodologic differences from similar tests in the literature may have resulted in lower reading speeds here). No participant had difficulty with fixation. During each trial, the experimenter used a prepared score sheet of task sentences to mark words that were read correctly and those that were read incorrectly or skipped. An audio recording was made during data collection to facilitate post hoc scoring verification. 
Figure 4. Experimental paradigm. Each trial started with the presentation of a central target that the participant was required to fixate in order to engage the simulation. Then, a simple, three-line sentence was presented on the screen in either natural view or phosphene view. Participants read the sentence out loud as quickly and accurately as possible but without any limitation on time. Participants indicated they were done with the sentence by fixating on a dot displayed at the top of the screen. The time to read a given sentence was measured from the moment the sentence was displayed through the time the participant fixated the Advance to Next Sentence dot.
Figure 5. Reading phase of the experimental paradigm. Here, the core reading period of a phosphene-view trial is schematically depicted. The base image used to create each frame is shown (top image), with an overlay of the participant's gaze position from the trial (purple trace). The base image is fixed relative to the monitor as the participant moves their gaze to steer the phosphene pattern across the screen. The scanning path reflects tracing out the three lines of text. Below, three key moments from the reading period (blue arrows) are depicted in phosphene view (bottom row), as they were shown to the participant, with the participant's gaze (and thus the center of the phosphene pattern) at the start of the sentence (/The/), middle (/here/), and end (/boat/). The smaller size and higher density of phosphenes near the point of regard, and the larger size and lower density toward the periphery, can be seen to shift between the three frames in response to gaze movements, in order to stabilize the phosphene pattern on the retina.
Each participant completed 72 trials in total. For the phosphene viewing conditions, four presentations of each combination of font size and latency/dispersion condition were performed (4 presentations × 4 font sizes × 4 L/D conditions = 64 trials); for the natural viewing condition, two presentations at each font size were performed (2 presentations × 4 font sizes = 8 trials). All participants followed the same fixed sequence of sentences. To help the participants gain familiarity with the task, the first sentence was presented in a normal viewing condition, and the following two sentences were presented in the easiest phosphenated viewing conditions (largest two font sizes with L0/D0). For the remaining 69 trials, the order of experimental condition (font size, viewing condition, and phosphene temporal pattern) was randomly interleaved in a balanced manner to mitigate longitudinal effects, with a different random sequence for each participant. 
Analysis
Reading performance was measured through accuracy and speed: the percentage of correctly read words and the number of correctly read words per minute, respectively. Psychometric curves of reading accuracy and speed were created for each participant by fitting sigmoid functions to the performances for each L/D combination. For reading accuracy, the inflection point of the fitted sigmoid was taken as the accuracy threshold (in other reports, we have called this measurement equivalent acuity), and for reading speed, the inflection point was taken as the speed threshold. For reading accuracy, sigmoidal fits were performed for curves spanning 0% to 100%; for reading speed, the range was 0 to 90 words per minute (WPM), with the upper limit based on a fit to the population L0/D0 data. Means and standard deviations of these values were computed across the participant population, and statistical tests were applied to determine significant differences as reported in the Results section. Significances were tested with α = 0.05, using Holm–Bonferroni correction. 
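For illustration, a minimal sketch of this analysis is given below (Python with NumPy/SciPy; the logistic parameterization, starting values, and function names are our own assumptions, not the authors' fitting code).

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k, ymax):
    """Logistic function rising from 0 to ymax; x0 is the inflection point."""
    return ymax / (1.0 + np.exp(-k * (x - x0)))

def fit_threshold(font_sizes, scores, ymax):
    """Fit a sigmoid across font size and return its inflection point.

    `ymax` fixes the upper asymptote: 100 (%) for accuracy curves or 90 WPM
    for speed curves, following the ranges given in the text.
    """
    f = lambda x, x0, k: sigmoid(x, x0, k, ymax)
    popt, _ = curve_fit(f, font_sizes, scores,
                        p0=[float(np.mean(font_sizes)), 10.0])
    return popt[0]  # the threshold (logMAR at the inflection point)

def holm_bonferroni(pvals, alpha=0.05):
    """Return which hypotheses are rejected after Holm-Bonferroni correction."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            reject[idx] = True
        else:
            break  # stop at the first non-significant test in sorted order
    return reject
```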
Results
To determine the effect of asynchronous phosphene activation, we measured the speed and accuracy of reading under different temporal patterns of phosphene drive. Figure 6 shows the psychometric curves of reading accuracy (top row) and reading speed (bottom row) as functions of font size for each of the L/D temporal patterns. By inspection, and consistent with earlier work in our group,27,33–35 reading performance improved from smaller to larger font sizes (reading accuracy: F(3, 22) = 325, P = 0.001; reading speed: F(3, 22) = 93.5, P = 0.001; pooled across all phosphene conditions). For this report, we will concentrate on how these curves shifted between conditions. 
Figure 6. Psychometric curves of population reading accuracy (top row) and reading speed (bottom row) as a function of font size for each of the four latency and dispersion conditions. Gray open circles represent observed data of individual participants (n = 23). Black lines are sigmoidal fits across font size for the population, with filled colored circles and bars indicating the mean and standard deviation of the inflection points across participants.
Figure 7. Dependence of accuracy thresholds (left) and speed thresholds (right) on latency/dispersion condition. Colors are as in Figure 2. Thresholds are the inflection points of fitted sigmoids (Fig. 6) of reading accuracy and speed versus font size for 23 participants. For each metric, we see an increasing logMAR value with increasing asynchrony (L6/D0, green; L6/D3, gold; L6/D6, red), reflecting increasing levels of difficulty of the task. Performance in the comparison condition with zero latency and asynchrony (L0/D0, purple) is substantially better, demonstrating that there is an effect from the group delay as well as synchrony level. Significances are shown for pairwise t-tests with asterisks representing Holm–Bonferroni corrected thresholds for α values of 0.05, 0.01, and 0.001 (see main text for discussion).
The inflection points of sigmoidal fits were used to express performance thresholds for each L/D stimulus condition derived from reading accuracy and reading speed (Fig. 6). Lower thresholds reflect better performance. For reading accuracy, the L0/D0 condition showed a threshold of 1.02 ± 0.06 logMAR; L6/D0, 1.16 ± 0.05 logMAR; L6/D3, 1.18 ± 0.06 logMAR; and L6/D6, 1.19 ± 0.06 logMAR. For reading speed, the L0/D0 condition showed a threshold of 1.18 ± 0.09 logMAR; L6/D0, 1.40 ± 0.07 logMAR; L6/D3, 1.42 ± 0.06 logMAR; and L6/D6, 1.45 ± 0.06 logMAR. In the natural view control condition, all participants showed 100% reading accuracy for all font sizes, with a maximum reading speed of 165 ± 37 WPM found for the population at logMAR 1.2. For phosphene view at L0/D0, maximum reading speed was 73 ± 15 WPM, at logMAR 1.4. Both natural view and L0/D0 measurements are consistent with our previous reports (all of which used phosphene view that would be classified as L0/D0 here).27,33,34 For phosphene view at L6/D0, L6/D3, and L6/D6, reading speed at the largest font size was 46 ± 26, 42 ± 12, and 38 ± 13 WPM, respectively. 
Using these values, we made two fundamental comparisons, one varying the level of synchrony (equivalently, dispersion) and another varying the level of group delay (equivalently, latency). Each variation revealed sensitivity in both reading accuracy and reading speed thresholds (Fig. 7). Reading performance was found to be sensitive to the synchrony level of the phosphene temporal pattern (accuracy threshold: F(2, 22) = 14, P < 0.001; speed threshold: F(2, 22) = 19, P < 0.001), being significantly better under the zero-dispersion condition L6/D0 than under the three-frame dispersion condition L6/D3 (accuracy threshold: t(22) = −3.2, P = 0.004; speed threshold: t(22) = −3.6, P = 0.002) or the six-frame dispersion condition L6/D6 (accuracy threshold: t(22) = −4.8, P = 0.001; speed threshold: t(22) = −6.5, P < 0.001). A significant difference between the highest temporal dispersion (L6/D6) and the mid-level dispersion condition (L6/D3) was found for the speed threshold (t(22) = −2.1, P = 0.04) but not for the accuracy threshold (t(22) = −1.9, P = 0.08, NS). 
We then examined latency alone, for the two zero-dispersion conditions. Increasing latency from zero to six frames, L0/D0 to L6/D0, significantly reduced performance in both reading accuracy and reading speed (t(22) = −15, P < 0.0001 and t(22) = −19, P < 0.001, respectively). 
Discussion
In the current work, we assessed the effect of temporal desynchronization of phosphene presentation on the quality of simulated artificial vision. We hypothesized that performance on a reading task would be better with synchronously presented phosphenes than with asynchronously presented ones. Overall, our results provide evidence that prosthetic vision is sensitive to temporal changes in phosphene presentation, both in the phosphene-to-phosphene delay (temporal dispersion around an average group delay) and in the group delay itself. 
We found that temporal desynchronization of phosphenes negatively impacted participant performance. Baseline reading scores with synchronously presented phosphenes (L6/D0) degraded with increasing temporal dispersion (L6/D3 and L6/D6). These findings are consistent with the assumption that perceptual binding is facilitated by synchronous activation of distributed neural activity. Synchronization increases the probability that the neural response will exceed the threshold level, thereby eliciting a highly salient neuronal signal. Based on this temporal selectivity, the human visual system binds visual elements of a single object and segregates one representation from another.3–5 
In addition, we found that increasing the average phosphene latency, or group delay, from baseline plus zero to baseline plus six frames also negatively impacted performance under the conditions of zero dispersion. A temporal delay between the current eye position and the phosphene display evokes transient conflicts between what we see and where we are looking with each eye movement,26 thereby reducing reading efficiency.27 
In this work, the visual input was either delivered simultaneously or was temporally scattered across multiple frames. In the case of temporally distributed phosphene presentation, we might expect the visual input to lose some of its relevancy as its presentation is delayed relative to the stimulus onset. A potential explanatory mechanism is provided by rank-order neural coding theory,37 a feed-forward model in which visual information is encoded in the order in which each cell fires. The receptive field of each neuron is associated with a specific weight, reflecting its effectiveness given the temporal order of neural firing. A maximal weight is given to cells that are strongly activated and therefore are first to fire. Subsequently, due to the engagement of inhibitory circuitry by first-firing cells, lower weights are given to cells that fire later.37–40 Accordingly, object perception depends on the first burst of neural activity. Temporal dispersion of phosphene presentation, as tested in this work, potentially attenuates the effectiveness of later-arriving information by breaking the synergistic effects of simultaneity.41,42 
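As a toy illustration of rank-order weighting (not a model fit from this study), the sketch below (Python with SciPy; the geometric attenuation factor is an arbitrary illustrative choice in the spirit of the formulation by Thorpe and colleagues) assigns full weight to the earliest-firing inputs and attenuates later ranks, so dispersing the same inputs in time reduces their total effective weight.

```python
import numpy as np
from scipy.stats import rankdata

def rank_order_weights(first_spike_times, attenuation=0.8):
    """Toy rank-order code: weight inputs by firing order, not exact timing.

    The earliest-firing inputs receive full weight and each later rank is
    attenuated geometrically, mimicking progressive inhibition of
    later-firing cells; simultaneous spikes share the top rank. The
    attenuation factor of 0.8 is purely illustrative.
    """
    ranks = rankdata(first_spike_times, method="min") - 1  # 0 = earliest rank
    return attenuation ** ranks

# Synchronous inputs all keep full weight; dispersing the same inputs over
# several frames pushes some to later ranks and attenuates their contribution.
print(rank_order_weights([0.0, 0.0, 0.0, 0.0]))   # [1.  1.  1.  1. ]
print(rank_order_weights([0.0, 4.2, 8.3, 12.5]))  # [1.  0.8  0.64  0.512]
```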
The effect of temporal desynchronization observed here was more pronounced for reading speed than for reading accuracy. This dichotomy can be understood by noting that our temporal manipulations changed only phosphene timing, not image resolution. While the same amount of information was delivered in all conditions, the temporal pattern in which this information was delivered varied, influencing the time necessary to amass the information but not the final amount available. At fine time scales, phosphenes are presented gradually in the asynchronous conditions (e.g., Fig. 3), and thus a longer time is required until all the visual information is perceptually available. 
Supporting our hypothesis that asynchronous phosphenes could have a perceptual impact are reports of high sensitivity to pulse timing across electrodes in animal17 and human studies of retinal18,19 and cortical43 stimulation. In an experiment by Horsager and colleagues,19 in which pulse trains were applied to multiple electrodes with varying synchrony, retinal prosthesis recipients were able to differentiate between patterns of electrical stimulation based on their temporal properties down to 1.5 ms of asynchrony (less than a frame in this study, and so equivalent to D0.36 on the scale used here). A complementary report18 by that group explored the same question in more detail, asking participants to compare the brightness of percepts produced when stimulating pairs of electrodes synchronously or asynchronously. For asynchrony of 0.075 to 9.0 ms, an increase in current was required to match the brightness of synchronous stimulation in six of nine electrode pairs. 
There is limited information on the appearance of lateral geniculate nucleus (LGN) phosphenes, which complicates presenting participants in these simulations with images that accurately correspond to the percepts thalamic microstimulation would generate. We have taken a simple approach of using Gaussian blobs, corresponding to reports from both retinal and cortical microstimulation in humans. Given the lack of information about thalamic phosphene spatial and temporal interaction, we have assumed independence and no fading; while those assumptions are likely inaccurate, without empirical data, the choices for improvement are not clear. We know from work in nonhuman primates that LGN phosphenes appear to be spatially focal with relatively short persistence.13 The closest analogy to LGN stimulation might be found in optic nerve stimulation, where phosphenes were reported to be surprisingly complex, albeit with surface rather than penetrating electrodes.44 While challenges to be faced in improving our simulation model include the potential effects of loss of retinal input and degeneration of the retinothalamic projection on LGN phosphene creation, experimental45 and clinical14,46 descriptions of phosphenes generated through cortical microstimulation in blind and sighted patients suggest that our simulations here are not wildly inaccurate. 
Although we have concentrated on the conveyance of scenes through visual prostheses, asynchronous stimulation can have utility under other considerations. For example, there has been some investigation of presenting individual letters either in braille20 or as traced-out paths,14 approaches that used asynchronous stimulation to advantage.7 It is also important to recognize the potential engineering benefit of lower circuit cost with a temporally multiplexed system that switches a small number of stimulation generators among a larger number of electrodes.7 
Conclusions
In the current work, we studied the impact of phosphene timing on object binding. We recognize that functional, naturalistic artificial vision experiences must hold across a wide range of stimuli. Therefore, our task was designed to simulate vision with a large number of electrodes covering a wide range of the visual field while integrating gaze information in scene processing. We showed that the perceptual experience varies according to the phosphenes' temporal pattern, with temporal synchronization facilitating reading performance. We conclude that temporal information has a fundamental role in object binding and segregation in artificial vision, providing prosthesis designers with a new tool to separate objects of interest from background. 
Acknowledgments
Supported by the William M. Wood Foundation, Bank of Boston Trustee. 
Disclosure: N. Meital-Kfir, None; J.S. Pezaris, None 
References
Erickson-Davis C, Korzybska H. What do blind people “see” with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS One. 2021; 16(2): e0229189. [CrossRef] [PubMed]
Karadima V, Pezaris EA, Pezaris JS. Attitudes of potential recipients toward emerging visual prosthesis technologies. Sci Rep. 2023; 13(1): 10963. [CrossRef] [PubMed]
Singer W. Neuronal synchrony: a versatile code for the definition of relations? Neuron. 1999; 24(1): 49–65. [CrossRef] [PubMed]
Usher M, Donnelly N. Visual synchrony affects binding and segmentation in perception. Nature. 1998; 394: 179–182.
Leonards U, Singer W, Fahle M. The influence of temporal phase differences on texture segmentation. Vis Res. 1996; 36(17): 2689–2697. [CrossRef] [PubMed]
Gray CM. The temporal correlation hypothesis of visual feature integration: still alive and well. Neuron. 1999; 24: 31–47. [CrossRef] [PubMed]
Moleirinho S, Whalen AJ, Fried SI, Pezaris JS. The impact of synchronous versus asynchronous electrical stimulation in artificial vision. J Neural Eng. 2021; 18(5), doi:10.1088/1741-2552/abecf1.
Shepherd RK, Shivdasani MN, Nayagam DAX, Williams CE, Blamey PJ. Visual prostheses for the blind. Trends Biotechnol. 2013; 31(10): 562–571. [CrossRef] [PubMed]
Ayton LN, Barnes N, Dagnelie G, et al. An update on retinal prostheses. Clin Neurophysiol. 2020; 131(6): 1383–1398. [CrossRef] [PubMed]
Foroushani AN, Pack CC, Sawan M. Cortical visual prostheses: from microstimulation to functional percept. J Neural Eng. 2018; 15(2): 021005. [CrossRef] [PubMed]
Mirochnick RM, Pezaris JS. Contemporary approaches to visual prostheses. Mil Med Res. 2019; 6(1): 19. [PubMed]
Chen SC, Suaning GJ, Morley JW, Lovell NH. Simulating prosthetic vision: I. Visual models of phosphenes. Vis Res. 2009; 49(12): 1493–1506. [CrossRef] [PubMed]
Pezaris JS, Reid RC. Demonstration of artificial visual percepts generated through thalamic microstimulation. Proc Natl Acad Sci USA. 2007; 104(18): 7670–7675. [CrossRef] [PubMed]
Beauchamp MS, Oswalt D, Sun P, et al. Dynamic stimulation of visual cortex produces form vision in sighted and blind humans. Cell. 2020; 181(4): 774–783. [CrossRef] [PubMed]
Kara P, Pezaris JS, Yurgenson S, Reid RC. The spatial receptive field of thalamic inputs to single cortical simple cells revealed by the interaction of visual and electrical stimulation. Proc Nat Acad Sci USA. 2002; 99(25): 16261–16266. [CrossRef] [PubMed]
De Balthasar C, Boëx C, Cosendai G, Valentini G, Sigrist A, Pelizzone M. Channel interactions with high-rate biphasic electrical stimulation in cochlear implant subjects. Hear Res. 2003; 182(1–2): 77–87. [PubMed]
Manzur HE, Alvarez J, Babul C, Maldonado PE. Synchronization across sensory cortical areas by electrical microstimulation is sufficient for behavioral discrimination. Cereb Cortex. 2013; 23(12): 2976–2986. [CrossRef] [PubMed]
Horsager A, Boynton GM, Greenberg RJ, Fine I. Temporal interactions during paired-electrode stimulation in two retinal prosthesis subjects. Invest Ophthalmol Vis Sci. 2011; 52(1): 549–557. [CrossRef] [PubMed]
Horsager A, Greenberg RJ, Fine I. Spatiotemporal interactions in retinal prosthesis subjects. Invest Ophthalmol Vis Sci. 2010; 51(2): 1223–1233. [CrossRef] [PubMed]
Donaldson P. Experimental visual prosthesis. Proc IEE. 1973; 120(2): 281–298.
Bloch E, da Cruz L. The Argus II retinal prosthesis system. In: Vinjamuri R, ed. Prosthesis. London, UK: IntechOpen; 2019, doi:10.5772/intechopen.84947.
Hornig R, Dapper M, Joliff E, et al. Pixium vision: first clinical results and innovative developments. In: Gabel V, ed. Artificial Vision. Cham, Switzerland: Springer; 2017: 99–113.
Shivdasani L, Cicione F, Allen L, Suaning L, Shepherd W. Evaluation of stimulus parameters and electrode geometry for an effective suprachoroidal retinal prosthesis. J Neural Eng. 2010; 7(3): 036008.
Fernandez N. CORTIVIS approach for an intracortical visual prostheses. In: Gabel V, ed. Artificial Vision. Cham, Switzerland: Springer; 2017: 191–201.
da Cruz L, Coley BF, Dorn J, et al. The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. Br J Ophthalmol. 2013; 97(5): 632–636. [CrossRef] [PubMed]
Paraskevoudi N, Pezaris JS. Eye movement compensation and spatial updating in visual prosthetics: mechanisms, limitations and future directions. Front Syst Neurosci. 2019; 12: 1–21.
Paraskevoudi N, Pezaris JS. Full gaze contingency provides better reading performance than head steering alone in a simulation of prosthetic vision. Sci Rep. 2021; 11(1): 1–17. [PubMed]
Mansfield JS, Legge GE, Luebker A, Cunningham K. MNRead Acuity Charts: Continuous-Text Reading-Acuity Charts for Normal and Low Vision. Long Island City, NY: Lighthouse Low Vision Products; 1994.
Mansfield JS, Legge GE. The MNREAD Acuity Chart. In: Legge G, ed. Psychophysics of Reading in Normal and Low Vision. Mahwah, NJ: Lawrence Erlbaum Associates Inc.; 2007: 167–191.
Pezaris JS, Reid RC. Simulations of electrode placement for a thalamic visual prosthesis. IEEE Trans Biomed Eng. 2009; 56(1): 172–178. [CrossRef] [PubMed]
Pezaris JS, Eskandar EN. Getting signals into the brain: visual prosthetics through thalamic microstimulation. Neurosurg Focus. 2009; 27(1): 1–20. [CrossRef]
Killian NJ, Vurro M, Keith SB, Kyada MJ, Pezaris JS. Perceptual learning in a non-human primate model of artificial vision. Sci Rep. 2016; 6: 1–16. [CrossRef] [PubMed]
Vurro M, Crowell AM, Pezaris JS. Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans. Front Hum Neurosci. 2014; 8: 1–14. [CrossRef] [PubMed]
Rassia KEK, Pezaris JS. Improvement in reading performance through training with simulated thalamic visual prostheses. Sci Rep. 2018; 8(1): 1–19. [CrossRef] [PubMed]
Rassia KEK, Moutoussis K, Pezaris JS. Reading text works better than watching videos to improve acuity in a simulation of artificial vision. Sci Rep. 2022; 12: 12953. [CrossRef] [PubMed]
Bourkiza B, Vurro M, Jeffries A, Pezaris JS. Visual acuity of simulated thalamic visual prostheses in normally sighted humans. PLoS One. 2013; 8(9): e73592. [CrossRef] [PubMed]
Thorpe SJ, Imbert M. Biological constraints on connectionist modeling. In: Pfeifer R, Schreter Z, Fogelman-Soulié F, Steels L, eds. Connectionism in Perspective. Amsterdam: Elsevier; 1989: 63–92.
Thorpe S, Delorme A, Van Rullen R. Spike-based strategies for rapid processing. Neural Networks. 2001; 14(6–7): 715–725. [PubMed]
Van Rullen R, Thorpe SJ. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Comput. 2001; 13: 1255–1283. [CrossRef] [PubMed]
Van Rullen R, Guyonneau R, Thorpe SJ. Spike times make sense. Trends Neurosci. 2005; 28: 1–4.
Elliott MA, Müller HJ. Synchronous information presented in 40-Hz flicker enhances visual feature binding. Psychol Sci. 1998; 9(4): 277–283. [CrossRef]
Fahle M. Figure-ground discrimination from temporal information. Proc R Soc B Biol Sci. 1993; 254(1341): 199–203.
Cicione R, Fallon JB, Rathbone GD, Williams CE, Shivdasani MN. Spatiotemporal interactions in the visual cortex following paired electrical stimulation of the retina. Invest Ophthalmol Vis Sci. 2014; 55(12): 7726–7738. [CrossRef] [PubMed]
Veraart C, Wanet-Defalque M-C, Gérard B, Vanlierde A, Delbeke J. Pattern recognition with the optic nerve visual prosthesis. Artif Organs. 2003; 27(11): 996–1004. [CrossRef] [PubMed]
Brindley GS, Lewin WS. The sensations produced by electrical stimulation of the visual cortex. J Physiol. 1968; 196(2): 479–493. [CrossRef] [PubMed]
Oswalt D, Bosking W, Sun P, et al. Multi-electrode stimulation evokes consistent spatial patterns of phosphenes and improves phosphene mapping in blind subjects. Brain Stimulation. 2021; 14: 1356–1372. [CrossRef] [PubMed]