Visual Psychophysics and Physiological Optics | May 2019, Volume 60, Issue 6 | Open Access
Perceptual Learning of Visual Span Improves Chinese Reading Speed
Author Affiliations & Notes
  • Zhuoting Zhu
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Yin Hu
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Chimei Liao
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  • Ren Huang
    School of Art and Design, Guangdong University of Technology, Guangzhou, China
  • Stuart Keel
    Centre for Eye Research Australia; Ophthalmology, Department of Surgery, University of Melbourne; Royal Victorian Eye and Ear Hospital, Melbourne, Australia
  • Yanping Liu
    Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Guangdong Provincial Key Laboratory of Brain Function and Disease, Sun Yat-sen University, Guangzhou, China
  • Mingguang He
    State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
    Centre for Eye Research Australia; Ophthalmology, Department of Surgery, University of Melbourne; Royal Victorian Eye and Ear Hospital, Melbourne, Australia
  • Correspondence: Mingguang He, Department of Preventive Ophthalmology, Zhongshan Ophthalmic Center, no. 54 Xianlie Road, Yuexiu District, Guangzhou 510060, People's Republic of China; mingguang_he@yahoo.com
  • Yanping Liu, Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, No.2, Zhishan Garden, No.132, East Outer Ring Road, Guangzhou 510060, China; liuyp33@mail.sysu.edu.cn
  • Footnotes
     ZZ, YH, and CL contributed equally to the work presented here and should therefore be regarded as equivalent authors.
Investigative Ophthalmology & Visual Science, May 2019, Vol. 60, 2357–2368. doi:10.1167/iovs.18-25780
Abstract

Purpose: Evidence has indicated that the size of the visual span (the number of identifiable letters without movement of the eyes) and reading speed can be boosted through perceptual learning in alphabetic scripts. In this study, we investigated whether benefits of perceptual learning could be extended to visual-span size and sentence reading (all characters are presented at the same time) for Chinese characters and explored changes in sensory factors contributing to changes in visual-span size following training.

Methods: We randomly assigned 26 normally sighted subjects to either a control group (n = 13) or a training group (n = 13). Pre- and posttests were administered to evaluate visual-span profiles (VSPs) and reading speed. Training consisted of trigram (sequences of three characters) character-recognition trials over 4 consecutive days. VSPs are plots of recognition accuracy as a function of character position. Visual-span size was quantified as the area under the VSP in bits of information transmitted. A decomposition analysis of the VSPs was used to quantify the effects of the sensory factors (crowding and mislocation). We compared visual-span size, these sensory factors, and reading speed before and after training.

Results: Following training, the visual-span size significantly increased by 11.7 bits, and reading speed increased by 50.8%. The decomposition analysis showed a significant reduction for crowding (−13.1 bits) but a minor increase in the magnitude of mislocation errors (1.46 bits) following training.

Conclusions: These results suggest that perceptual learning expands the visual-span size and further improves Chinese text sentence-reading speed, indicating that visual span may be a common sensory limitation on reading that can be overcome with practice.

Visual span is the limited number of letters that can be recognized accurately with a single fixation.1 The visual span can be considered a “window size” limitation on reading based on the significant correlations between visual-span size and reading speed in both normal and visually impaired populations.1–6 The correlations between visual-span size and reading speed are also implicated in the reading development of English-speaking children.7 Furthermore, the extension of the visual-span hypothesis across different print sizes and contrasts, testing eccentricities, different writing scripts (e.g., English and Korean), and various tasks (e.g., reading and face recognition) implies that visual span may be a common limitation in pattern recognition.1,2,8–16 Previous studies have shown that the visual-span size can be enlarged through perceptual learning and that the attendant reading speed, measured using the rapid serial visual presentation (RSVP) paradigm, can also be improved with alphabetic scripts.8,11,14,15,17 However, little is known regarding the impact of perceptual learning on visual-span size and reading performance for Chinese characters. In this study, we explored whether the benefits of perceptual learning could be extended to the visual-span size and reading speed for Chinese characters.
Visual perceptual learning is defined as a long-lasting enhancement in performance on visual tasks through training and is considered to be mediated by brain plasticity, which occurs not only during early development but also with aging.18–23 The effectiveness of visual perceptual learning has been validated in a variety of visual tasks, including motion detection,24 global shape discrimination,25 peripheral texture judgment,26 multiple object searching,27 stimulus orientation detection,28 contour synthesis,29 and contrast sensitivity discrimination.30 Accordingly, the improvements in visual performance observed in previous studies have motivated investigations of targeted, noninvasive practice and training to postpone, slow, or reverse declines in visual function.31
Perceptual learning has been considered a low-level form of training, given its specificity to the trained features (e.g., orientation specificity) and location.18 Furthermore, evidence from electrophysiological and neuroimaging studies has suggested that such training paradigms target specific subsets of V1 neurons encoding related features32–35 or improve specific V1 sensory signals.36–38 However, recently, researchers have found that the effects of perceptual learning can be generalized to different stimulus locations and orientations.39–42 Importantly, some studies have documented that low-level training paradigms can improve high-level cognitive processes such as attention, reading fluency, and working memory recall ability.8,9,11,14,15,43 These results imply that high-level cognitive functions, such as reading, can be improved by basic perceptual learning.
One of the most common complaints in low vision clinics is slow reading,44 even with the use of magnifying aids. One explanation for the slow reading rate is that most people with low vision, particularly those with vision loss in the central field, have to rely on their peripheral visual field to read, which significantly reduces the efficiency of reading.2,8,12 In 2001, Legge and colleagues2 measured the visual span of eight normally sighted subjects using the trigram (sequences of three random letters) letter-recognition task and confirmed a positive link between visual-span size and reading speed. A number of studies have further shown that the reading speed of subjects with normal vision or low vision can be increased by enlarging visual-span size through the trigram letter-recognition task.8,9,11,14,15 Therefore, in the current research, we investigated whether Chinese text reading speed can be improved through perceptual training with a Chinese-character recognition task.
Three sensory factors have been proposed to limit visual-span size4: (1) decreased resolution, that is, declines in acuity at character positions farther from the fixation point; (2) crowding, that is, difficulty identifying target objects due to the contiguity of neighbors; and (3) mislocations, that is, mistakes with respect to relative position in a string of letters. Among these three factors, crowding has been considered the major sensory factor contributing to compromised reading performance in the peripheral visual field.45 In support of this view, He and colleagues46 concluded that a considerable reduction in crowding, a small reduction in the magnitude of mislocation errors, and a negligible effect on visual acuity were the basis of the training-related improvement in reading speed for English text. However, whether these sensory factors change in the same way after training with Chinese text remains unclear. Therefore, we also explored changes in these patterns after training using a decomposition analysis (see Methods).
A large number of studies have investigated the benefits of perceptual learning on visual-span size and the attendant reading speed measured using the RSVP paradigm.8,9,11,14,15 Within this paradigm, the presentation duration and presentation location of the word are strictly controlled. Furthermore, reading performance at different eccentricities can be obtained using the RSVP paradigm. However, the RSVP paradigm limits eye movement and preliminary processing of forthcoming words,47 which are significant aspects during sentence reading (where all words/characters are presented at the same time).48 By contrast, sentence reading is a much more natural form of reading and allows readers to endogenously distribute attention and skip or refixate words. Importantly, an established relationship between the RSVP paradigm and sentence reading is still lacking. Nevertheless, evidence has confirmed a positive relationship between the visual-span size and reading speed in sentence reading.49 Therefore, in our study, we used a sentence-reading mode to examine the benefits of perceptual learning on Chinese reading performance. 
Of note, many differences in linguistic features exist between Chinese and alphabetic scripts.50,51 First, Chinese is a morphosyllabic writing system. A Chinese character, a box-shaped symbol, represents the basic unit of Chinese text, and there are approximately 2500 frequently used characters in Chinese, in contrast to 26 letters in English. Second, complexity varies across characters (from 1 to 64 strokes per character),52 and the visual information packed into each Chinese character is much greater than that in each English letter. Last, but not least, Chinese text is written as a string of equally spaced characters with no extra space demarcating word boundaries; readers must therefore use linguistic/lexical knowledge to segment characters into meaningful words. The current study therefore investigated whether the benefits of perceptual learning on visual-span size and reading speed observed for alphabetic scripts can also be observed in Chinese text. If the visual-span size can be enlarged for Chinese text through perceptual learning and the attendant reading performance can also be improved, we would further explore how sensory factors, such as resolution, crowding, and mislocations, explain the training-related improvements in the visual-span size and reading speed.
Methods
Subjects
Twenty-six native Mandarin speakers were recruited from Sun Yat-sen University (mean age [SD]: 23.5 [1.30] years; range, 21–27 years). All subjects were right-handed, with normal or corrected-to-normal visual acuity (best-corrected visual acuity in each eye: 20/20 or better) and no history of ophthalmologic (except for refractive errors), neurological, or psychiatric disorders. All subjects were naïve to the goals of the experiment and gave written informed consent prior to the experiment. The protocol was approved by the Institutional Review Board of Sun Yat-sen University and adhered to the Declaration of Helsinki.
Apparatus and Stimuli
The stimuli were generated using MATLAB (MathWorks, Natick, MA, USA) with Psychophysics Toolbox extensions53,54 and presented on an ASUS monitor (VG278HE; refresh rate = 144 Hz; resolution = 1920 × 1080; Taipei, Taiwan, China). The stimuli were Chinese characters (Song font; white) on a black background (24 cd/m2). A Spyder calibrator (Datacolor, Lawrenceville, NJ, USA) was used to calibrate the correspondence between luminance and gray level. All stimuli were viewed binocularly from 60 cm in a dark room. The viewing distance was maintained using a chin rest, and the subject's eye movements were recorded (monocular, right eye) using a desktop-mounted EyeLink 1000 system (sampling rate = 500 Hz; spatial resolution = 0.01°; SR Research, Osgoode, Ontario, Canada). The stimulus size (defined as the height of a Chinese character) in the trigram character-recognition task and the sentence-reading task subtended 1° of retinal angle, which is well above the acuity threshold in central vision.55 The center-to-center spacing between adjacent positions in the trigram character-recognition task, and between adjacent characters in the sentence-reading task, was 1.1 times the width of a Chinese character (character width: 1° of retinal angle).
Experimental Design
Prior to the experiment, all subjects were tested with the Hanyu Shuiping Kaoshi (HSK; level 4), which is an official standardized test to evaluate Chinese language proficiency. All subjects scored above 95, indicating normal reading comprehension abilities, and were included in the formal experiment. The experimental protocol used in this study is shown in Figure 1. The formal experiment consisted of a pretest (day 1), training paradigm (days 2–5), and a posttest (day 6). In the pre- and posttests, subjects completed a trigram character-recognition task for visual-span profile (VSP) measurements, and a sentence-reading task for reading speed measurements. To ascertain changes in VSP measurements and sentence-reading speed measurements attributable to the perceptual learning rather than to the effects of repetition, we randomly assigned the 26 subjects to two groups: a control group (n = 13, age: 23.5 ± 1.51 years) and a training group (n = 13, age: 23.5 ± 1.13 years; P > 0.05). All of the subjects received the pretests and posttests on day 1 and day 6. From day 2 to day 5, subjects in the training group received an additional 4 days of perceptual training. Standard semiautomatic calibration and validation procedures were conducted at the commencement of the experiment, after every 50 trials in the trigram character-recognition task or 16 sentences in the sentence-reading task, or when repeated errors occurred during the fixation control phase. 
Figure 1
A schematic cartoon illustrating the basic experimental design of the study. A total of 26 subjects were randomly assigned to two groups: a control group (n = 13) and a training group (n = 13). The pretest and posttest consisted of measurements of sentence-reading speed and visual-span profile. Subjects belonging to the control group received only the pretest and posttest, while each subject in the training group took a pretest and a posttest with an intervening training procedure consisting of four sessions, scheduled on 4 consecutive days.
Pre- and Posttest Visual-Span Profile Measurement
Through the trigram character-recognition task, we measured VSPs in the pre- and posttests. In each trial, a trigram (3 characters randomly selected from 26 characters in the C3 group [Fig. 2] from Wang et al.56) was presented on the horizontal midline (Fig. 3A). Based on the perimetric complexity,57 the 700 most frequently used Chinese characters (ordered by frequency in State Language Work Committee, Bureau of Standard, 1992) were split into five mutually exclusive groups. Twenty-six characters with complexity values close to the median complexity value in each group were chosen to constitute C1 to C5 subgroups. The median complexity subgroup (C3) was selected for the trigram character-recognition task in this study. 
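For concreteness, one plausible reading of this grouping procedure is sketched in MATLAB below. The variable names (chars, complexity) and the equal-size quintile split are illustrative assumptions, not the authors' code.

```matlab
% Sketch of the stimulus-set construction (assumed data layout): 'chars' is a
% 1x700 cell array of the most frequent characters and 'complexity' a 1x700
% vector of their perimetric complexity values.
[~, order] = sort(complexity);          % order characters by complexity
grp = ceil((1:700) / 140);              % five mutually exclusive groups of 140
stimSets = cell(1, 5);
for g = 1:5
    members = order(grp == g);          % indices of characters in group g
    med = median(complexity(members));  % median complexity within the group
    [~, nearest] = sort(abs(complexity(members) - med));
    stimSets{g} = chars(members(nearest(1:26)));  % the 26 nearest the median
end
% stimSets{3} corresponds to the C3 (median-complexity) set used in this study.
```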
Figure 2
The stimulus set for the trigram character-recognition task selected from the C3 group in Wang et al.56 Based on the perimetric complexity,57 the 700 most frequently used Chinese characters (ordered by frequency in State Language Work Committee, Bureau of Standard, 1992) were split into five mutually exclusive groups. Twenty-six characters with complexity values close to the median complexity value in each group were chosen to constitute C1 to C5 subgroups. The median complexity subgroup (C3) was selected for the trigram character-recognition task in this study.
Figure 3
The visual-span profile measurement for Chinese characters using the trigram character-recognition method. (A) Schematic illustration of the visual-span profile. Top: A string of three characters was presented at position 4 on the horizontal midline. The gray numbers indicate the position of each slot and were not presented during the test. The size of the visual span was quantified using two methods: the width of the fitted split-Gaussian curve at 80% recognition accuracy (number of characters) and the area under the split-Gaussian curve in bits of information transmitted. Bottom: The visual-span profile is a plot of recognition accuracy by character position, transformed to information transmitted (bits). Recognition accuracy approached 100% at the fixation point and gradually dropped with increasing distance from the fixation point. (B) Schematic illustration of the visual-span measurement. After the white fixation dot was displayed at the center of the black screen for 1000 ms, two vertically aligned green dots were presented at the center of the screen to maintain stable fixation until the end of each trial. Three underlines were displayed for 50 ms indicating the next trigram positions. After a 70-ms display of the two vertical green dots, the trigram stimulus was presented on the screen for 250 ms. Then the screen went blank, and the subject was required to report the three characters of the trigram in order, from left to right.
A total of 15 positions along the horizontal midline (Fig. 3A), indicated by the position of the middle character in the trigram, were tested. The position on the central fixation was labeled 0, and left and right positions were labeled with negative and positive numbers, respectively. Each trigram position was tested 10 times in a randomized order, thus yielding a total of 150 trials in each block. The subject was initially shown the card with 26 characters in the C3 group and then told to remain fixated on the white dot or between the two vertically aligned green dots during the trials. Prior to the formal experiment, a practice session was conducted to ensure stable fixation. The white dot fixation point was displayed at the center of the black screen for 1000 ms (Fig. 3B). Then, two vertically aligned green dots were presented in the center of the screen to maintain stable fixation until the end of each trial. Three underlines were displayed for 50 ms indicating the next trigram positions. After a 70-ms duration of the two vertical green dots, the trigram stimulus was presented on the screen for 250 ms. Then, the screen went blank, and the subject was required to report the three characters of the trigram, from left to right. The subject could refer to the card with the 26 characters in the C3 group when needed. A character was scored as being correctly recognized only when the subject reported the exact character and its correct position in the trigram. The subject pressed the space key to initiate the next trial. All of the subjects were encouraged to take a short break after completing 50 trials. 
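The scoring rule just described lends itself to a simple per-position tally. The following MATLAB sketch illustrates one way to compute it; the trials struct and its field names are hypothetical, not the study's actual data format.

```matlab
% Sketch of per-position scoring (assumed data layout): 'trials' is a struct
% array with fields midPos (position of the middle character), shown and
% reported (1x3 cell arrays of characters), and fixationBroken (logical).
positions = -6:6;                 % character positions retained for fitting
nCorrect  = zeros(size(positions));
nTotal    = zeros(size(positions));
for t = 1:numel(trials)
    if trials(t).fixationBroken   % drop trials with >1 deg fixation drift
        continue
    end
    slots = trials(t).midPos + (-1:1);     % positions of the three characters
    for k = 1:3
        idx = find(positions == slots(k), 1);
        if isempty(idx), continue; end     % e.g., outer characters at +/-7
        nTotal(idx)   = nTotal(idx) + 1;
        % correct only if the exact character is reported at the same slot
        nCorrect(idx) = nCorrect(idx) + strcmp(trials(t).shown{k}, trials(t).reported{k});
    end
end
accuracy = nCorrect ./ nTotal;    % recognition accuracy per character position
```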
Combining the recognition scores at each character position, that is, as the left, middle, or right character of a trigram, we calculated the recognition accuracy at each position. A trial was excluded if eye movements exceeded 1° of retinal angle away from the fixation point, based on the eye tracking analysis; however, such unstable fixation was rare (fewer than 15 trials [10%] per subject). We then fitted each data set with a split-Gaussian function.2,8 Of note, only 13 positions (−6 to +6) were included in the curve fitting, since the inner character of a trigram was never displayed at positions ±7. Recognition probabilities at the different positions x, P(x), were calculated by the following equation:
\begin{equation}\tag{1}P(x) = \begin{cases} A\exp\left(-x^{2}/2\sigma_{R}^{2}\right) & \text{if } x \ge 0 \\ A\exp\left(-x^{2}/2\sigma_{L}^{2}\right) & \text{if } x < 0 \end{cases}\end{equation}
with \(A\), \({\sigma _L}\), and \({\sigma _R}\) representing the amplitude of the Gaussian curve and the standard deviations of the left and right halves of the curve, respectively.
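A minimal MATLAB sketch of this fit, reusing positions and accuracy from the scoring sketch above, might look as follows; lsqcurvefit (Optimization Toolbox) is one of several fitters that could be used, and the bounds and starting values are illustrative assumptions.

```matlab
% Split-Gaussian of Equation 1: sigma_L applies left of fixation, sigma_R right.
splitGauss = @(p, x) p(1) .* exp(-x.^2 ./ (2 * (p(2).*(x < 0) + p(3).*(x >= 0)).^2));
% p(1) = A (amplitude), p(2) = sigma_L, p(3) = sigma_R
p0   = [1, 4, 4];                          % starting guesses
pFit = lsqcurvefit(splitGauss, p0, positions, accuracy, [0 0.1 0.1], [1.1 20 20]);
% One span measure used below: the curve width at 80% recognition accuracy,
% i.e., solve A*exp(-x^2/(2*sigma^2)) = 0.8 on each side (valid when A > 0.8).
halfL = pFit(2) * sqrt(2 * log(pFit(1) / 0.8));
halfR = pFit(3) * sqrt(2 * log(pFit(1) / 0.8));
spanWidthChars = halfL + halfR;            % visual-span size in characters
```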
The visual-span size was quantified using two methods: the width of the fitted split-Gaussian curve at 80% recognition accuracy (in number of characters) and the area under the split-Gaussian curve in bits of information transmitted (Fig. 3A). Information transmitted in bits was computed following previously described methods.2,58 In brief, recognition accuracy was transformed into bits of information using the following equation:59
\begin{equation}\tag{2}{\rm{Bits\ of\ Information}} = - 0.037 + 4.676 \times {\rm{Proportion\ Correct\ of\ Letter\ Identification}}\end{equation}
where recognition accuracy is linearly related to bits of information. We then integrated the bits of information transmitted across all positions of the VSPs. The information transmitted through the VSPs was further divided into three parts, representing foveal vision (interval −1 to +1) and parafoveal vision to the left (interval −6 to −1) and to the right (interval +1 to +6) of the midline. These subareas were used to test the effects of different regions of the visual field on reading performance.
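The conversion to bits and the subarea integrals can be sketched as follows, again reusing positions and accuracy from above; trapezoidal integration is an assumption, as the paper does not specify the numerical integration rule.

```matlab
% Information-transmitted measures (Equation 2) over the profile.
bits     = -0.037 + 4.676 .* accuracy;     % bits transmitted per position
spanBits = trapz(positions, bits);         % area under the profile, in bits
% Subareas: fovea (-1 to +1), parafoveal left (-6 to -1), parafoveal right (+1 to +6).
fov   = positions >= -1 & positions <= 1;
left  = positions >= -6 & positions <= -1;
right = positions >=  1 & positions <= 6;
fovealBits = trapz(positions(fov),   bits(fov));
leftBits   = trapz(positions(left),  bits(left));
rightBits  = trapz(positions(right), bits(right));
```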
Reading Speed Measurement
We quantified the effects of perceptual training on reading performance using a sentence-reading task in which all characters were presented at the same time (Fig. 4). Reading speed was measured in the pre- and posttests. The sentences were chosen from the reading material of Liu et al.60 Prior to the sentence-reading task, subjects were told to read each sentence as fast as possible and to answer a comprehension question after each sentence. After a fixation calibration, a fixation dot was displayed at the center of the screen. Subjects pressed the space bar on the keyboard to initiate the sentence presentation on the horizontal midline of the computer screen and pressed the space bar again to signal the completion of the sentence reading and termination of the trial. Subjects read 80 sentences silently over five blocks according to a counterbalanced design. The numbers of total and unique characters per sentence ranged from 18 to 32 (24.7 ± 3.93) and from 16 to 29 (23.2 ± 3.42), respectively. The same protocol was used to measure reading speeds in the posttest, but with a different set of 80 sentences, in which the numbers of total and unique characters per sentence ranged from 17 to 33 (24.2 ± 3.98) and from 15 to 30 (22.4 ± 3.68), respectively. There were no differences in the numbers of total or unique characters per sentence between the reading materials used before and after training (P > 0.05). The average character complexity (defined as the number of strokes) was similar for the two sets of sentences (pretest: 7.50 ± 0.55 strokes, range, 6.21–9.05 strokes; posttest: 7.35 ± 0.74 strokes, range, 5.96–9.35 strokes; P > 0.05). Characters in the pretest and posttest sentences overlapped with those in the C3 group by 11.9% and 10.7%, respectively.
Figure 4
Schematic illustration of sentence-reading task. The sentences were chosen from the reading material of Liu et al.60 Prior to the sentence-reading task, subjects were told to read the sentence as fast as possible and answer a comprehension question after each sentence. After a fixation calibration, the fixation dot was displayed at the center of the screen. Subjects pressed the space bar on the keyboard to initiate the sentence presentation on the horizontal midline of the computer screen. Subjects pressed the space bar again to signal the completion of the sentence reading and termination of the trial. English translations were provided for illustrative purposes and were not shown during the experiment.
Training
Subjects in the training group received training on the trigram character-recognition task, as described previously, from day 2 to day 5. The training procedure consisted of 16 trigram character-recognition blocks conducted over four sessions (2400 trials in total). Subjects completed the training on 4 consecutive days, and each training session lasted 1.0 to 1.5 hours.
Data Analysis
Visual-Span Decomposition
The decomposition method was used to quantitatively assess the sensory factors underlying the visual span; it has been described in detail elsewhere.46 In brief, three VSPs with different criteria were plotted: a standard profile requiring correct recognition of both the character and its position in the trigram, a profile permitting mislocation errors, and a profile based on single-character recognition accuracy. The losses in information transmitted resulting from limitations in visual resolution, crowding, and mislocation errors were computed by comparing these three types of VSPs with 100% perfect performance (Fig. 5). The contribution of acuity was quantified by comparing 100% perfect performance with the single-character profile. The effect of crowding was assessed as the loss of information transmitted between the single-character profile and the profile permitting mislocation errors. The impact of mislocations was defined by comparing the mislocation profile with the standard profile. Previous studies46,56 concluded that crowding imposed the largest constraint on visual-span size and that visual acuity had only a negligible influence on both English and Chinese VSPs. Moreover, the characters in this study subtended 1° of retinal angle, which is well above the acuity threshold at 10° of retinal eccentricity.55 Therefore, we assumed perfect performance for single-character recognition.
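A minimal sketch of this decomposition in MATLAB follows, assuming three accuracy profiles over the same positions vector used above; accStd, accMis, and accSingle are hypothetical names for the standard, mislocation-permitting, and single-character profiles.

```matlab
% Decomposition of the visual span into acuity, crowding, and mislocation
% effects, each expressed as a loss of information transmitted (in bits).
toBits  = @(acc) -0.037 + 4.676 .* acc;             % Equation 2
area    = @(acc) trapz(positions, toBits(acc));     % bits under a profile
perfect = area(ones(size(positions)));              % 100% accuracy everywhere
accSingle = ones(size(positions));                  % assumed perfect (see text)
acuityEffect   = perfect - area(accSingle);         % ~0 by assumption
crowdingEffect = area(accSingle) - area(accMis);    % single-char vs mislocation
mislocEffect   = area(accMis)    - area(accStd);    % mislocation vs standard
```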
Figure 5
Schematic illustration of the decomposition analysis. Perfect performance − standard profile = resolution/acuity effect + crowding effect + mislocation errors effect. Acuity effect = perfect profile − single-character profile (green area). Crowding effect = single-character profile − mislocation profile (blue area). Mislocation effect = mislocation profile − standard profile (red area). Three types of visual-span profiles were plotted: a standard profile requiring correct recognition of both the character and its position in the trigram, a profile permitting mislocation errors, and a profile based on single-character recognition accuracy. The losses in information transmitted resulting from limitations in visual resolution, crowding, and mislocation errors were computed by comparing these three types of VSPs with 100% perfect performance. The contribution of acuity was quantified by comparing 100% perfect performance with the single-character profile. The effect of crowding was assessed as the loss of information transmitted between the single-character profile and the mislocation profile. The impact of mislocations was defined by comparing the mislocation profile with the standard profile.
Sentence-Reading Speed
The sentence-reading speed was calculated from the eye movement data in the sentence-reading task. The total presentation duration (in minutes) of each sentence was extracted from the eye tracking data. For each sentence, the average reading speed (characters per minute, cpm) was defined as the number of characters in the sentence divided by the total presentation duration.
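This computation reduces to a per-sentence ratio, sketched below with assumed vectors nChars and durMin:

```matlab
% Sentence-reading speed, assuming per-sentence vectors 'nChars' (character
% counts) and 'durMin' (presentation durations in minutes, taken from the
% eye tracking record).
speedCpm     = nChars ./ durMin;   % characters per minute, per sentence
meanSpeedCpm = mean(speedCpm);     % subject-level average reading speed
```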
Statistical Analysis
All statistical analyses were performed using Stata (ver. 14.0; StataCorp, College Station, TX, USA). Recognition accuracy at the different character positions was fitted in MATLAB (ver. 5.2.2) with an asymmetric (split-Gaussian) function. Visual-span parameters (visual-span size, peak amplitude, standard deviations of the left and right halves of the Gaussian curve, foveal area, and parafoveal left and right areas) and reading performance measurements, together with their changes following training, were compared between the control and training groups with linear mixed models (LMMs), in which group, testing session, and their interaction were modeled as fixed effects and subject was modeled as a random effect. In these analyses, we corrected for multiple comparisons with the Bonferroni method.
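The models were fitted in Stata; for illustration, a roughly equivalent specification in MATLAB's fitlme (Statistics and Machine Learning Toolbox) is sketched below, with a hypothetical data table tbl (one row per subject per session).

```matlab
% Assumed columns of tbl: spanBits (outcome), group (control/training),
% session (pre/post), and subject (identifier).
tbl.group   = categorical(tbl.group);
tbl.session = categorical(tbl.session);
tbl.subject = categorical(tbl.subject);
% Fixed effects for group, session, and their interaction; random intercept
% per subject. The group:session term tests the training effect.
lme = fitlme(tbl, 'spanBits ~ group*session + (1|subject)');
disp(lme.Coefficients)
```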
Results
First, we compared the visual-span parameters and sentence-reading speeds of the control and training groups; no significant differences existed between the two groups in the pretest (all P values > 0.05).
Visual-Span Profiles
Subjects' performance in identifying characters in the control group and the training group is presented in Figure 6A, where the recognition accuracy of the characters (from −6 to +6) is plotted by middle-character position. Fitted split-Gaussian curves are illustrated in Figure 6B. Both the raw curves (Fig. 6A) and fitted curves (Fig. 6B) showed that the visual-span size in the posttest was larger than that in the pretest in the training group, suggesting an enhancement in character-identification performance across most character positions after training. The visual-span size was consistent between the pre- and posttests in the control group (pretest: 5.67 ± 1.27 characters, posttest: 5.89 ± 0.99 characters; Bonferroni corrected P > 0.05). In the training group, however, the visual-span size increased from 5.04 ± 0.96 to 8.41 ± 1.47 characters after the 4-day training (Bonferroni corrected P < 0.001) (Fig. 7A). The changes in visual-span size following training differed between the control and training groups (Bonferroni corrected P < 0.001 for interaction, Table 1). The visual-span size in bits of information transmitted in the posttest was statistically larger than that in the pretest in the training group (pretest: 32.8 ± 3.99 bits, posttest: 44.4 ± 3.89 bits; Bonferroni corrected P < 0.001); however, a similar amount of information was transmitted in the pretest and posttest in the control group (pretest: 35.2 ± 4.70 bits, posttest: 36.3 ± 3.81 bits; Bonferroni corrected P > 0.05; Fig. 7B). The changes in bits of information transmitted through the visual span following training also differed between the control and training groups (Bonferroni corrected P < 0.001 for interaction, Table 1).
Figure 6
Visual-span profiles for the pretest (dashed line) and posttest (solid line) in the control group and the training group. (A) Raw data. (B) Fitted split-Gaussian curves. Both the raw data (A) and the fitted curves (B) of recognition accuracy for the 13 trigram positions show that the visual-span size in the posttest was larger than that in the pretest in the training group, suggesting an enhancement in character-identification performance across most character positions after training.
Figure 7
Main parameters: (A) size of the visual span in number of Chinese characters, (B) size of the visual span in bits of information transmitted, and (C) reading speed in the pretest and posttest in the control and training groups. Error bars represent ±1 SEM. n.s., not significant. ***P < 0.001. The size of the visual span in characters, the size of the visual span in bits of information transmitted, and the sentence-reading speed were similar between the pretest and posttest in the control group (all Bonferroni corrected P > 0.05). In the training group, after the 4-day training, the visual-span size increased from 5.04 ± 0.96 to 8.41 ± 1.47 characters (Bonferroni corrected P < 0.001) and from 32.8 ± 3.99 to 44.4 ± 3.89 bits (Bonferroni corrected P < 0.001), and the sentence-reading speed increased from 319.0 ± 74.2 to 484.6 ± 161.3 cpm (Bonferroni corrected P < 0.001).
Table 1
Results of the Linear Mixed-Model Analysis of Reading Speed and of Visual-Span Size in Number of Chinese Characters and in Bits of Information Transmitted
The differences in other parameters of the VSPs between the pre- and posttests in the control group were not significant (all P > 0.05). The peak amplitude (pretest: 0.97 ± 0.04, posttest: 0.99 ± 0.01; Bonferroni corrected P > 0.05; Fig. 8A) and foveal area (pretest: 13.5 ± 0.40 bits, posttest: 13.9 ± 0.18 bits; Bonferroni corrected P > 0.05; Fig. 8D) of the VSPs in the training group were also not significantly different between the pretest and posttest. Figures 8B, 8C, 8E, and 8F illustrate the significant differences between the pretest and posttest in the training group: \({\sigma _L}\) (pretest: 3.89 ± 0.78, posttest: 6.21 ± 1.14; Bonferroni corrected P < 0.001); \({\sigma _R}\) (pretest: 4.41 ± 0.61, posttest: 6.68 ± 1.11; Bonferroni corrected P < 0.001); parafoveal left area (pretest: 8.70 ± 2.14 bits, posttest: 14.6 ± 2.24 bits; Bonferroni corrected P < 0.001); and parafoveal right area (pretest: 10.5 ± 1.70 bits, posttest: 15.9 ± 1.83 bits; Bonferroni corrected P < 0.001). The improvements in \({\sigma _L}\), \({\sigma _R}\), parafoveal left area, and parafoveal right area were statistically different between the control group and the training group (all Bonferroni corrected P < 0.001 for interaction, Table 2).
Figure 8
Parameters of the visual-span profiles in the pretest and posttest in the control and training groups. Error bars represent ±1 SEM. n.s., not significant. ***P < 0.001. (A, D) The changes in the peak amplitude and foveal area of the visual-span profiles were not statistically different between the pretest and posttest in either group (all Bonferroni corrected P > 0.05). (B, C, E, F) Significant pretest-to-posttest differences in the standard deviations \({\sigma _L}\) and \({\sigma _R}\), the parafoveal left area, and the parafoveal right area in the training group (all Bonferroni corrected P < 0.001), with similar magnitudes of these parameters in the control group (all Bonferroni corrected P > 0.05).
Table 2
Results of the Linear Mixed-Model Analysis of Peak Amplitude, Standard Deviations of the Left and Right Halves of the Gaussian Curve, Foveal Area, Parafoveal Right and Left Areas, and Crowding and Mislocation Error Effects
Decomposition Analysis
In the decomposition analysis (Fig. 9A), the area representing the crowding effects (blue) was significantly larger than the area representing the mislocation effects (red). We also found a notable reduction in the area representing the crowding effects (blue) from the pre- to posttest in the training group. Statistical analyses likewise showed a significant reduction in the effects of crowding (pretest: 27.0 ± 4.73 bits, posttest: 13.8 ± 3.45 bits; Bonferroni corrected P < 0.001) and an increase in mislocation effects (pretest: 1.36 ± 1.30 bits, posttest: 2.82 ± 1.12 bits; Bonferroni corrected P = 0.003) following training in the training group (Fig. 9B). The reduction of crowding effects in the training group was statistically larger than that in the control group (Bonferroni corrected P < 0.001 for interaction, Table 2), while the changes in mislocation effects were similar between the control and training groups. The impact of crowding and mislocation was similar in the pretest and posttest in the control group (all P > 0.05).
Figure 9
Decomposition analysis of the visual span. Upper: Comparison of decomposition profiles in the pretest and posttest. Error bars represent ±1 SEM. n.s., not significant. **P < 0.01; ***P < 0.001. The area representing the crowding effects (blue) was significantly larger than the area representing the mislocation effects (red). A notable reduction in the area representing the crowding effects (blue) from pretest to posttest was observed in the training group. Lower: Comparison of the effects of each factor in the pretest and posttest. There was a significant reduction in the effects of crowding (13.1 bits) and a small increase in mislocation effects (1.46 bits) following training in the training group. The impact of crowding and mislocation was similar in the pretest and posttest in the control group (all Bonferroni corrected P > 0.05).
Sentence-Reading Speed
The average baseline sentence-reading speeds in the control group and the training group were similar in the pretest (control group: 340.9 ± 131.7 cpm, training group: 319.0 ± 74.2 cpm; P > 0.05). However, in the training group, subjects' reading speed increased significantly to 484.6 ± 161.3 cpm after training, representing a 50.8% increase in reading speed (Bonferroni corrected P < 0.001). Subjects in the control group showed nearly identical average reading speeds in the pre- and posttest sessions (pretest: 340.9 ± 131.7 cpm, posttest: 343.3 ± 127.6 cpm; Bonferroni corrected P > 0.05) (Fig. 7C). The mean improvement in sentence-reading speed was significantly higher in the training group than in the control group (Bonferroni corrected P < 0.001 for interaction, Table 1). 
Discussion
The main objective of the present study was to test whether the visual-span size for Chinese characters could be widened through perceptual learning and, if so, whether this expanded visual span was associated with faster Chinese sentence-reading speed. Our results indicated that after 4 days of perceptual learning, the visual-span size increased by 11.7 bits and the sentence-reading speed improved by 50.8%. Furthermore, our decomposition analysis revealed that 4 days of training induced a substantial reduction in crowding effects (−13.1 bits) but a minor increase in mislocation effects (1.46 bits). These findings suggest that perceptual learning improves the visual-span size and Chinese sentence-reading speed, indicating that visual span may be a sensory limitation in pattern recognition that can be overcome through training.
Our findings were consistent with previous studies, including those of Chung et al.8 and Bernard et al.,17 who reported increases in visual-span size of 6 and 6.4 bits, respectively, and those of Yu et al.15 and Lee et al.,11 who documented improvements of 4.7 and 8.8 bits following training. The different eccentricities at which the trigram was presented (horizontal midline versus a horizontal line 10° in the upper or lower visual field) may explain the relatively larger improvement in visual-span size observed in the present study. The improvement in reading speed in our experiment was slightly larger than the 41.0% in the study of Chung et al.8 but slightly smaller than the 54.0% in the study of Yu et al.,15 the 83.5% reported by Lee et al.,11 and the 63.6% reported by Bernard et al.17 Differences in processing logographic versus alphabetic scripts, in the RSVP versus sentence-reading paradigms, and in testing eccentricities could explain these discrepancies. Of note, such cross-study comparisons should be made with caution. According to the information-transmitted formula, 100% perfect performance in identifying a character or letter equals 4.7 bits of information transmitted. The additional bits of information per fixation after training may boost reading performance. The average English word contains 5.1 letters, whereas the average Chinese word contains 1.5 characters.61,62 We may therefore infer that one bit of information has different impacts on Chinese and English reading performance. Further studies are needed to test this speculation.
Exploring the effects of sensory factors on the visual-span size following perceptual learning is important for a deeper understanding of the attendant improvements in reading performance. Previous studies46,56 have reported a negligible impact of visual acuity on both English and Chinese VSPs. Moreover, the characters used in our study subtended 1° of retinal angle, which is well above the acuity threshold at 10° of retinal eccentricity.55 Therefore, we assumed the effect of visual acuity on the visual span to be zero and determined the crowding and mislocation effects in the analysis. Our decomposition analysis indicated that the enlargement of the visual-span size was related to a significant reduction in crowding but a small increase in mislocation effects. Our finding of decreased crowding effects supports findings from Pelli et al.45 and He et al.,46 who also showed a strong correlation between visual-span size and crowding, and a significant reduction in crowding effects after training. The mechanisms underlying the training-related reduction in crowding are unknown. However, based on the principle of crowding, we speculate that training may reduce the difficulty of identifying the target character among neighboring characters, reflecting either bottom-up58,63 or top-down64 influences. Both the bottom-up and top-down proposals assert that crowding is excessive feature integration over an inappropriately large area. According to the bottom-up proposals, the size of this area is anatomically fixed and therefore independent of attention and other factors.58,63 According to the top-down proposals, however, the critical spacing related to crowding varies and is under attentional control.64 Bernard et al.17 found similar benefits of trigram training on letter recognition and maximum reading speed using uncorrelated random trigrams and trigrams more specific to the reading task (correlated trigrams frequently found in everyday English usage), suggesting potential bottom-up influences. In contrast, He et al.46 observed that the training effects could transfer from a trained visual field to an untrained one. Yu et al.15 not only reported the transfer of learning from the trained visual field to an untrained one but also observed the transfer of training across different print sizes. These results imply that nonretinotopic mechanisms and top-down influences are involved in the training effects. Nevertheless, the present study could not distinguish between these two explanations. Additional studies are needed to explore the mechanisms underlying the training-related improvement.
Despite the consistent reduction in crowding effects following training, the impact of training on mislocation errors is still inconclusive. He et al.46 demonstrated a minor reduction in mislocation errors after training. However, Xiong et al.40 divided mislocation errors into target misplacement errors (reporting a correctly identified target letter at a flanker location) and flanker substitution errors (reporting a flanker as the target letter) and found that normalized target misplacement errors increased rather than decreased, while normalized flanker substitution errors were unchanged following training. Furthermore, after reanalyzing the data from He et al., Xiong et al.40 concluded that mislocation errors were unchanged when normalized by the corresponding recognition errors. In the present study, we observed a small increase in the effect of mislocation errors. Even when our mislocation errors were normalized by the total error rates, we still found an increase in the normalized mislocation errors after training. We were unable to divide mislocation errors into target misplacement and flanker substitution errors in the present analysis. Nevertheless, we speculate that the increase in mislocation errors observed here may be due to an increase in target misplacement errors following training. Target misplacement errors may be more likely to reflect memory errors than reporting errors.40 In the trigram recognition task, the subject was required to report the recognizable characters within the trigram in order. Given the relatively narrower visual span for Chinese characters compared with alphabetic letters,56 subjects were less likely to differentiate characters in trigrams presented farther from the horizontal midline. After 4 days of training, using a protocol identical to that used for alphabetic letters, character-recognition ability in the visual periphery improved. However, the working-memory load also increased, which might increase memory errors. Furthermore, based on feature-integration theory,58,65 perceptual training reduced the abnormal integration of features between target and flankers; mislocation errors, as positional errors, might not be reduced. Therefore, our results favor the feature-integration explanation of crowding.
Explanations of the attendant improvements in reading speed following perceptual learning also remain inconclusive. Reading is a far more complex task than recognizing letters or characters. One possible explanation is that the enlarged visual span improves reading speed after perceptual training: improvements in recognition accuracy following training increase the information transmitted through the visual span. According to the formula for information transmitted, perfect (100%) performance in identifying characters corresponds to 4.7 bits of information transmitted, and the additional bits transmitted per fixation after training may boost reading performance. Moreover, theoretical models1,2,12 have documented significant associations between visual-span size and reading speed across different print fonts, contrast conditions, and testing eccentricities. In our previous analysis, we also found a positive relationship between visual-span size and Chinese sentence-reading speed (correlation coefficient = 10.3, P = 0.021; data collected in 2017 and submitted for publication). In the present analysis, a strong positive correlation was found between the improvement in reading speed and the enlargement of the visual span (r = 0.61, P = 0.001). These results confirm the link between letter/character recognition and reading speed, suggesting that the visual span is a sensory limitation on reading speed across different languages. Furthermore, we found that the parafoveal right area under the VSPs and the standard deviation of the right half of the fitted curves increased significantly after training. This considerable improvement of the visual span in the reading direction may benefit parafoveal preview and increase reading speed by shortening reading times on the subsequently foveated word.66,67 However, in contrast to the study by Chung et al.,8 we did not find significant changes in the peak amplitude of the fitted curves after training. The training protocol in the present study presented the trigram on the horizontal midline, and the peak amplitude across subjects approached 4.7 bits (100% recognition accuracy) before training, so a ceiling effect in performance may have been present. In the study of Chung et al.,8 by contrast, the trigram was presented on a horizontal line 10° in the upper or lower visual field, and the average peak amplitude in the pretest was approximately 3.70 bits (80% recognition accuracy). Similarly, no significant differences were observed in the foveal area of the VSPs after training, which may also be due to a ceiling effect in foveal recognition performance. These findings indicate that the training-induced enhancement of reading speed in previous studies8,11,14,17 and the improvement in sentence-reading speed in our study can be attributed to changes in the VSPs.
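For concreteness, the bit values above follow directly from the size of the stimulus set (a worked example assuming the 26-character set and 13 character positions described in Figures 2 and 3):

\[
I_{\max} = \log_2 26 \approx 4.70 \ \text{bits per position}, \qquad
\text{visual-span size} = \sum_{i=1}^{13} I(p_i) \le 13 \times 4.70 \approx 61.1 \ \text{bits},
\]

where \(I(p_i)\) denotes the information transmitted at character position \(i\) given recognition accuracy \(p_i\). The pretest and posttest values reported below (32.8 and 44.4 bits) are consistent with this upper bound.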
However, some studies have indicated that improvements in visual-span size and reading speed may be a consequence of general learning.68 It has been reported that training can improve the ability to distribute attention effectively.69–71 In contrast, Lee et al.11 reported that the allocation of attention in peripheral vision was not correlated with improvements in the trigram recognition task after training. Another explanation is that the improvements reflect gained experience with the task (i.e., a practice effect).72 Nonetheless, comparing the changes between the training group and the control group in our study rules out the possibility that the improvements on the visual task are due only to performing the same task repeatedly.
Our findings offer encouraging evidence that sentence-reading speed can be improved and crowding effects reduced following perceptual training. These findings have practical implications. First, reading speed matters for quality of life and for productivity in daily life. Second, Chinese patients with central vision loss, who must use their peripheral vision to read, may benefit from perceptual learning to improve their sentence-reading speeds. In China, the leading cause of central vision loss is age-related macular degeneration; a recent systematic review and meta-analysis reported that its prevalence in China ranged from 2.44% in people aged 45 to 49 years to 19.0% in those aged 85 to 89 years.73 However, interpretation of our findings is subject to the following limitations. First, without follow-up data, we cannot report on the retention of the training gains. However, evidence suggests that subjects can preserve a large proportion of their improvements for at least 3 months,8 and given the comparable results and similar experimental design of that study,8 we speculate that retention would be similarly high for our subjects. Second, the subjects in our study were young adults with normal vision, so the findings may not generalize to patients with low vision, who are usually much older. However, a number of studies have reported that older people with vision loss can also benefit from reading-speed training.9,74,75
In summary, our study suggests that perceptual learning improves visual-span size and Chinese sentence-reading speed. These findings indicate that the visual span is a common limitation on reading that can be overcome with training. Furthermore, we showed that a reduction in the crowding effect contributed to the improvements in visual-span size. Future studies investigating the cortical sites of perceptual learning will be useful to confirm these findings and to elucidate the possible mechanisms.
Acknowledgments
Supported by the Fundamental Research Funds of the State Key Laboratory of Ophthalmology and the National Natural Science Foundation of China (31500890). Mingguang He receives support from the University of Melbourne Research Accelerator Program and the CERA Foundation. The Centre for Eye Research Australia receives Operational Infrastructure Support from the Victorian State Government. The sponsor or funding organization had no role in the design or conduct of this research.
Disclosure: Z. Zhu, None; Y. Hu, None; C. Liao, None; R. Huang, None; S. Keel, None; Y. Liu, None; M. He, None 
References
Legge GE, Ahn SJ, Klitz TS, Luebker A. Psychophysics of reading. XVI. The visual span in normal and low vision. Vision Res. 1997; 37: 1999–2010.
Legge GE, Mansfield JS, Chung ST. Psychophysics of reading. XX. Linking letter recognition to reading speed in central and peripheral vision. Vision Res. 2001; 41: 725–743.
Liu R, Patel BN, Kwon M. Age-related changes in crowding and reading speed. Sci Rep. 2017; 7: 8271.
Yu D, Legge GE, Wagoner G, Chung ST. Sensory factors limiting horizontal and vertical visual span for letter recognition. J Vis. 2014; 14 (9): 23.
Cheong AM, Legge GE, Lawrence MG, Cheung SH, Ruff MA. Relationship between visual span and reading performance in age-related macular degeneration. Vision Res. 2008; 48: 577–588.
Crossland MD, Rubin GS. Eye movements and reading in macular disease: further support for the shrinking perceptual span hypothesis. Vision Res. 2006; 46: 590–597.
Kwon M, Legge GE, Dubbels BR. Developmental changes in the visual span for reading. Vision Res. 2007; 47: 2889–2900.
Chung ST, Legge GE, Cheung SH. Letter-recognition and reading speed in peripheral vision benefit from perceptual learning. Vision Res. 2004; 44: 695–709.
Chung ST. Improving reading speed for people with central vision loss through perceptual learning. Invest Ophthalmol Vis Sci. 2011; 52: 1164–1170.
He Y, Scholz JM, Gage R, Kallie CS, Liu T, Legge GE. Comparing the visual spans for faces and letters. J Vis. 2015; 15 (8): 7.
Lee HW, Kwon M, Legge GE, Gefroh JJ. Training improves reading speed in peripheral vision: is it due to attention? J Vis. 2010; 10 (6): 18.
Legge GE, Hooven TA, Klitz TS, Mansfield JS, Tjan BS. Mr. Chips 2002: new insights from an ideal-observer model of reading. Vision Res. 2002; 42: 2219–2234.
Papinutto M, Lao J, Ramon M, Caldara R, Miellet S. The Facespan-the perceptual span for face recognition. J Vis. 2017; 17 (5): 16.
Yu D, Cheung SH, Legge GE, Chung ST. Reading speed in the peripheral visual field of older adults: does it benefit from perceptual learning? Vision Res. 2010; 50: 860–869.
Yu D, Legge GE, Park H, Gage E, Chung ST. Development of a training protocol to improve reading performance in peripheral vision. Vision Res. 2010; 50: 36–45.
He Y, Kwon M, Legge GE. Common constraints limit Korean and English character recognition in peripheral vision. J Vis. 2018; 18 (1): 5.
Bernard JB, Arunkumar A, Chung ST. Can reading-specific training stimuli improve the effect of perceptual learning on peripheral reading speed? Vision Res. 2012; 66: 17–25.
Fahle M, Poggio T. Perceptual Learning. Cambridge, MA: MIT Press; 2002.
Frank SM, Reavis EA, Tse PU, Greenlee MW. Neural mechanisms of feature conjunction learning: enduring changes in occipital cortex after a week of training. Hum Brain Mapp. 2014; 35: 1201–1211.
Gibson EJ. Perceptual learning. Annu Rev Psychol. 1963; 14: 29–56.
Goldstone RL. Perceptual learning. Annu Rev Psychol. 1998; 49: 585–612.
Sagi D. Perceptual learning in vision research. Vision Res. 2011; 51: 1552–1566.
Seitz A, Watanabe T. A unified model for perceptual learning. Trends Cogn Sci. 2005; 9: 329–334.
Bower JD, Watanabe T, Andersen GJ. Perceptual learning and aging: improved performance for low-contrast motion discrimination. Front Psychol. 2013; 4: 66.
Mayhew SD, Kourtzi Z. Dissociable circuits for visual shape learning in the young and aging human brain. Front Hum Neurosci. 2013; 7: 75.
Andersen GJ, Ni R, Bower JD, Watanabe T. Perceptual learning, aging, and improved visual performance in early stages of visual processing. J Vis. 2010; 10 (13): 4.
Legault I, Allard R, Faubert J. Healthy older observers show equivalent perceptual-cognitive training benefits to young adults for multiple object tracking. Front Psychol. 2013; 4: 323.
DeLoss DJ, Watanabe T, Andersen GJ. Optimization of perceptual learning: effects of task difficulty and external noise in older adults. Vision Res. 2014; 99: 37–45.
McKendrick AM, Battista J. Perceptual learning of contour integration is not compromised in the elderly. J Vis. 2013; 13 (1): 5.
DeLoss DJ, Watanabe T, Andersen GJ. Improving vision among older adults: behavioral training to improve sight. Psychol Sci. 2015; 26: 456–466.
Deveau J, Lovcik G, Seitz AR. Applications of perceptual learning to ophthalmology. In: Davey P, ed. Ophthalmology: Current Clinical and Research Updates. Rijeka, Croatia: InTech; 2014: 395–414.
Karni A, Sagi D. Where practice makes perfect in texture discrimination: evidence for primary visual cortex plasticity. Proc Natl Acad Sci U S A. 1991; 88: 4966–4970.
Schoups AA, Vogels R, Orban GA. Human perceptual learning in identifying the oblique orientation: retinotopy, orientation specificity and monocularity. J Physiol. 1995; 483 (pt 3): 797–810.
Teich AF, Qian N. Learning and adaptation in a recurrent model of V1 orientation selectivity. J Neurophysiol. 2003; 89: 2086–2100.
Yan Y, Rasch MJ, Chen M, et al. Perceptual training continuously refines neuronal population codes in primary visual cortex. Nat Neurosci. 2014; 17: 1380–1387.
Dosher BA, Lu ZL. Mechanisms of perceptual learning. Vision Res. 1999; 39: 3197–3221.
Law CT, Gold JI. Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nat Neurosci. 2009; 12: 655–663.
Mollon JD, Danilova MV. Three remarks on perceptual learning. Spat Vis. 1996; 10: 51–58.
Xiao LQ, Zhang JY, Wang R, Klein SA, Levi DM, Yu C. Complete transfer of perceptual learning across retinal locations enabled by double training. Curr Biol. 2008; 18: 1922–1926.
Xiong YZ, Yu C, Zhang JY. Perceptual learning eases crowding by reducing recognition errors but not position errors. J Vis. 2015; 15 (11): 16.
Zhang JY, Zhang GL, Xiao LQ, Klein SA, Levi DM, Yu C. Rule-based learning explains visual perceptual learning and its specificity and transfer. J Neurosci. 2010; 30: 12323–12328.
Wang R, Zhang JY, Klein SA, Levi DM, Yu C. Task relevancy and demand modulate double-training enabled transfer of perceptual learning. Vision Res. 2012; 61: 33–38.
Lawton T, Shelley-Tremblay J. Training on movement figure-ground discrimination remediates low-level visual timing deficits in the dorsal stream, improving high-level cognitive functioning, including attention, reading fluency, and working memory. Front Hum Neurosci. 2017; 11: 236.
Elliott DB, Trukolo-Ilic M, Strong JG, Pace R, Plotkin A, Bevers P. Demographic characteristics of the vision-disabled elderly. Invest Ophthalmol Vis Sci. 1997; 38: 2566–2575.
Pelli DG, Tillman KA, Freeman J, Su M, Berger TD, Majaj NJ. Crowding and eccentricity determine reading rate. J Vis. 2007; 7 (2): 20.
He Y, Legge GE, Yu D. Sensory and cognitive influences on the training-related improvement of reading speed in peripheral vision. J Vis. 2013; 13 (7): 14.
Hutzler F, Fuchs I, Gagl B, et al. Parafoveal X-masks interfere with foveal word recognition: evidence from fixation-related brain potentials. Front Syst Neurosci. 2013; 7: 33.
Rayner K. Eye movements and attention in reading, scene perception, and visual search. Q J Exp Psychol. 2009; 62: 1457–1506.
Risse S. Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading. J Vis. 2014; 14 (8): 11.
Rayner K, Li X, Juhasz BJ, Yan G. The effect of word predictability on the eye movements of Chinese readers. Psychon Bull Rev. 2005; 12: 1089–1093.
Wei W, Li X, Pollatsek A. Word properties of a fixated region affect outgoing saccade length in Chinese reading. Vision Res. 2013; 80: 1–6.
Taylor I, Taylor MM. Writing and Literacy in Chinese, Korean and Japanese. John Benjamins Publishing; 1995: 412.
Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997; 10: 433–436.
Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997; 10: 437–442.
Zhang JY, Zhang T, Xue F, Liu L, Yu C. Legibility of Chinese characters in peripheral vision and the top-down influences on crowding. Vision Res. 2009; 49: 44–53.
Wang H, He X, Legge GE. Effect of pattern complexity on the visual span for Chinese and alphabet characters. J Vis. 2014; 14 (8): 6.
Pelli DG, Burns CW, Farell B, Moore-Page DC. Feature detection and letter identification. Vision Res. 2006; 46: 4646–4674.
Pelli DG, Palomares M, Majaj NJ. Crowding is unlike ordinary masking: distinguishing feature integration from detection. J Vis. 2004; 4 (12): 1136–1169.
Beckmann PJ. Preneural Factors Limiting Letter Identification in Central and Peripheral Vision [doctoral thesis]. Minneapolis, MN: University of Minnesota; 1998.
Liu Y, Reichle ED, Li X. The effect of word frequency and parafoveal preview on saccade length during the reading of Chinese. J Exp Psychol Hum Percept Perform. 2016; 42: 1008–1025.
Olive J, Christianson C, McCary J, eds. Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation. Springer Science+Business Media; 2011.
Freeman J, Pelli DG. An escape from crowding. J Vis. 2007; 7 (2): 22.
He S, Cavanagh P, Intriligator J. Attentional resolution and the locus of visual awareness. Nature. 1996; 383: 334–337.
Greenwood JA, Bex PJ, Dakin SC. Positional averaging explains crowding with letter-like stimuli. Proc Natl Acad Sci U S A. 2009; 106: 13130–13135.
Inhoff AW. Integrating information across eye fixations in reading: the role of letter and word units. Acta Psychol. 1990; 73: 281–297.
Rayner K, McConkie GW, Ehrlich S. Eye movements and integrating information across fixations. J Exp Psychol Hum Percept Perform. 1978; 4: 529–544.
Saugstad P, Lie I. Training of peripheral visual acuity. Scand J Psychol. 1964; 5: 218–224.
Baron A, Mattila WR. Response slowing of older adults: effects of time-limit contingencies on single- and dual-task performances. Psychol Aging. 1989; 4: 66–72.
McDowd JM. The effects of age and extended practice on divided attention performance. J Gerontol. 1986; 41: 764–769.
Richards E, Bennett PJ, Sekuler AB. Age related differences in learning with the useful field of view. Vision Res. 2006; 46: 4217–4231.
Beard BL, Levi DM, Reich LN. Perceptual learning in parafoveal vision. Vision Res. 1995; 35: 1679–1690.
Song P, Du Y, Chan KY, Theodoratou E, Rudan I. The national and subnational prevalence and burden of age-related macular degeneration in China. J Glob Health. 2017; 7: 020703.
Nilsson UL, Nilsson SE. Rehabilitation of the visually handicapped with advanced macular degeneration. A follow-up study at the Low Vision Clinic, Department of Ophthalmology, University of Linkoping. Doc Ophthalmol. 1986; 62: 345–367.
Nilsson UL, Frennesson C, Nilsson SE. Location and stability of a newly established eccentric retinal locus suitable for reading, achieved through training of patients with a dense central scotoma. Optom Vis Sci. 1998; 75: 873–878.
Figure 1. A schematic cartoon illustrating the basic experimental design of the study. A total of 26 subjects were randomly assigned to two groups: a control group (n = 13) and a training group (n = 13). The pretest and posttest consisted of measurements of sentence-reading speed and visual-span profile. Subjects belonging to the control group received only the pretest and posttest, while each subject in the training group took a pretest and a posttest with an intervening training procedure consisting of four sessions, scheduled on 4 consecutive days.
Figure 2. The stimulus set for the trigram character-recognition task selected from the C3 group in Wang et al.56 Based on the perimetric complexity,57 the 700 most frequently used Chinese characters (ordered by frequency in State Language Work Committee, Bureau of Standard, 1992) were split into five mutually exclusive groups. Twenty-six characters with complexity values close to the median complexity value in each group were chosen to constitute C1 to C5 subgroups. The median complexity subgroup (C3) was selected for the trigram character-recognition task in this study.
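The grouping procedure described in this caption can be sketched as follows (a minimal illustration, not the authors' code; the complexity values here are randomly generated rather than measured):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical perimetric-complexity values for the 700 most frequent characters.
    complexity = rng.uniform(5.0, 40.0, size=700)

    # Split the 700 characters into five mutually exclusive complexity groups.
    order = np.argsort(complexity)
    groups = np.array_split(order, 5)          # five groups of 140 characters each

    subgroups = []
    for g in groups:
        median_c = np.median(complexity[g])
        # Pick the 26 characters closest to the group median -> subgroups C1..C5.
        closest = g[np.argsort(np.abs(complexity[g] - median_c))[:26]]
        subgroups.append(closest)

    c3 = subgroups[2]                          # median-complexity subgroup (C3)
    print(len(c3))                             # 26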
Figure 3. The visual-span profile measurement for Chinese characters using the trigram character-recognition method. (A) Schematic illustration of the visual-span profile. Top: A string of three characters (character images not available) was presented at position 4 on the horizontal midline. The gray numbers indicate the position of each slot and were not presented during the test. The size of the visual span was quantified using two methods: the width of the fitted split-Gaussian curve at 80% recognition accuracy (in number of characters) and the area under the split-Gaussian curve in bits of information transmitted. Bottom: The visual-span profile is a plot of recognition accuracy by character position, subsequently transformed to information transmitted (bits). Recognition accuracy approached 100% at the fixation point and gradually dropped with increasing distance from fixation. (B) Schematic illustration of the visual-span measurement. After a white fixation dot was displayed at the center of the black screen for 1000 ms, two vertically aligned green dots were presented at the center of the screen to maintain stable fixation until the end of each trial. Three underlines were then displayed for 50 ms, indicating the next trigram positions. After a further 70 ms with only the two vertical green dots, the trigram stimulus was presented for 250 ms. The screen then went blank, and the subject was required to report the three characters of the trigram in order, from left to right.
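The split-Gaussian fit and the 80%-accuracy width described above can be sketched as follows (a minimal illustration with hypothetical accuracies; the study's measured profiles are not reproduced here):

    import numpy as np
    from scipy.optimize import curve_fit

    def split_gaussian(x, amplitude, sigma_l, sigma_r):
        """Asymmetric Gaussian centered at fixation (position 0), with separate
        spreads for the left and right halves of the visual-span profile."""
        sigma = np.where(x < 0, sigma_l, sigma_r)
        return amplitude * np.exp(-x**2 / (2 * sigma**2))

    # Hypothetical recognition accuracies at the 13 character positions (-6..6).
    positions = np.arange(-6, 7)
    accuracy = np.array([0.35, 0.50, 0.66, 0.80, 0.92, 0.99, 1.00,
                         0.99, 0.95, 0.88, 0.76, 0.62, 0.48])

    (amplitude, sigma_l, sigma_r), _ = curve_fit(
        split_gaussian, positions, accuracy, p0=[1.0, 3.0, 3.0])

    # Visual-span size in characters: width of the fitted curve at 80% accuracy
    # (solve amplitude * exp(-x^2 / (2 sigma^2)) = 0.8 for each half).
    width = (sigma_l + sigma_r) * np.sqrt(2 * np.log(amplitude / 0.8))
    print(amplitude, sigma_l, sigma_r, width)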
Figure 4. Schematic illustration of the sentence-reading task. The sentences were chosen from the reading material of Liu et al.60 Prior to the sentence-reading task, subjects were told to read the sentence as fast as possible and answer a comprehension question after each sentence. After a fixation calibration, the fixation dot was displayed at the center of the screen. Subjects pressed the space bar on the keyboard to initiate the sentence presentation on the horizontal midline of the computer screen. Subjects pressed the space bar again to signal the completion of the sentence reading and termination of the trial. English translations were provided for illustrative purposes and were not shown during the experiment.
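The sentence-reading speeds reported in this study are in characters per minute (cpm), presumably computed in the conventional way (our formulation, not quoted from the Methods):

\[
\text{reading speed (cpm)} = \frac{\text{number of characters in the sentence}}{\text{reading time (s)}} \times 60 .
\]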
Figure 5. Schematic illustration of the decomposition analysis. Perfect performance − standard profile = resolution/acuity effect + crowding effect + mislocation effect. Acuity effect = perfect profile − single-character profile (green area). Crowding effect = single-character profile − mislocation profile (blue area). Mislocation effect = mislocation profile − standard profile (red area). Three types of visual-span profiles were plotted: a standard profile requiring correct recognition of both the character and its position in the trigram, a profile permitting mislocation errors, and a profile based on single-character recognition accuracy. The losses in information transmitted resulting from limitations in visual resolution, crowding, and mislocation errors were computed by comparing these three types of VSPs with 100% perfect performance. The contribution of acuity was quantified by comparing 100% perfect performance with the single-character profile. The effect of crowding was assessed as the loss of information transmitted between the single-character profile and the profile permitting mislocation errors. The impact of mislocations was defined by comparing the profile permitting mislocation errors with the standard profile.
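A minimal numerical sketch of this decomposition arithmetic (the per-position bit values are hypothetical, not the study's data):

    import numpy as np

    PERFECT = np.log2(26)   # ~4.70 bits per position for a 26-character set

    # Hypothetical information transmitted (bits) at 13 positions for the three
    # profile types: standard, mislocation-tolerant, and single-character.
    standard    = np.array([1.9, 2.3, 2.8, 3.3, 3.9, 4.4, 4.7,
                            4.5, 4.1, 3.6, 3.1, 2.6, 2.1])
    mislocation = np.array([2.1, 2.5, 3.0, 3.5, 4.0, 4.5, 4.7,
                            4.6, 4.2, 3.8, 3.3, 2.8, 2.3])
    single_char = np.array([4.1, 4.3, 4.5, 4.6, 4.7, 4.7, 4.7,
                            4.7, 4.7, 4.6, 4.5, 4.3, 4.2])

    acuity_effect      = np.sum(PERFECT - single_char)      # resolution losses
    crowding_effect    = np.sum(single_char - mislocation)  # flanker interference
    mislocation_effect = np.sum(mislocation - standard)     # positional errors

    # The three effects sum to the total loss relative to perfect performance.
    assert np.isclose(acuity_effect + crowding_effect + mislocation_effect,
                      np.sum(PERFECT - standard))
    print(acuity_effect, crowding_effect, mislocation_effect)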
Figure 6. Visual-span profiles for the pretest (dashed line) and posttest (solid line) in the control group and the training group. (A) Raw data. (B) Fitted split-Gaussian curves. Character-recognition performance across character positions in the control and training groups is presented in Figure 7. The raw data (A) and fitted curves (B) of recognition accuracy for the 13 trigram positions show that visual-span size in the posttest was larger than in the pretest for the training group, indicating enhanced character-identification performance across most character positions after training.
Figure 7. Main parameters: (A) size of the visual span in number of Chinese characters, (B) size of the visual span in bits of information transmitted, and (C) reading speed between the pretest and posttest in the control and training groups. Error bars represent ±1 SEM. n.s., not significant. ***P < 0.001. The size of the visual span in characters, the size of the visual span in bits, and the sentence-reading speed were similar between pretest and posttest in the control group (all Bonferroni-corrected P > 0.05). In the training group, the visual-span size increased from 5.04 ± 0.96 to 8.41 ± 1.47 characters and from 32.8 ± 3.99 to 44.4 ± 3.89 bits, and sentence-reading speed increased from 319.0 ± 74.2 to 484.6 ± 161.3 cpm after the 4-day training (all Bonferroni-corrected P < 0.001).
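Expressed as relative gains, the training-group values in Figure 7 correspond to

\[
\frac{8.41}{5.04} \approx 1.67, \qquad \frac{44.4}{32.8} \approx 1.35, \qquad \frac{484.6}{319.0} \approx 1.52,
\]

that is, roughly a 67% enlargement of the visual span in characters, a 35% increase in bits of information transmitted, and a 52% increase in sentence-reading speed.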
Figure 8. Parameters of the visual-span profiles between pretest and posttest in the control and training groups. Error bars represent ±1 SEM. n.s., not significant. ***P < 0.001. (A, D) The changes in the peak amplitude and foveal area of the visual-span profiles did not differ statistically between pretest and posttest in either the control or the training group (all Bonferroni-corrected P > 0.05). (B, C, E, F) Significant pretest-to-posttest differences in the profile parameters, including the standard deviations σL and σR, the parafoveal left area, and the parafoveal right area, were found in the training group (all Bonferroni-corrected P < 0.001), whereas these parameters were of similar magnitude in the control group (all Bonferroni-corrected P > 0.05).
Figure 9. Decomposition analysis of the visual span. Upper: Comparison of decomposition profiles in the pretest and posttest. Error bars represent ±1 SEM. n.s., not significant. **P < 0.01; ***P < 0.001. The area representing the crowding effect (blue) was significantly larger than the area representing the mislocation effect (red). A notable reduction in the crowding area (blue) from pretest to posttest was observed in the training group. Lower: Comparison of the effect of each factor in the pretest and posttest. There was a significant reduction in the effect of crowding (13.1 bits) and a small increase in the mislocation effect (1.46 bits) following training in the training group. The impacts of crowding and mislocation were similar in the pretest and posttest in the control group (all Bonferroni-corrected P > 0.05).
Table 1. Results of the Linear Mixed-Model Analysis of Reading Speed and of Visual-Span Size in Number of Chinese Characters and in Bits of Information Transmitted
Table 2. Results of the Linear Mixed-Model Analysis of the Peak Amplitude, the Standard Deviations of the Left and Right Halves of the Gaussian Curve, the Foveal Area, the Parafoveal Right and Left Areas, and the Crowding and Mislocation-Error Effects