Clinical and Epidemiologic Research | November 2016 | Volume 57, Issue 14 | Open Access
Feasibility of Macular Integrity Assessment (MAIA) Microperimetry in Children: Sensitivity, Reliability, and Fixation Stability in Healthy Observers
Author Affiliations & Notes
  • Pete R. Jones
    Institute of Ophthalmology, University College London (UCL), London, United Kingdom
    National Institute for Health Research Moorfields Biomedical Research Centre, London, United Kingdom
  • Narmella Yasoubi
    Institute of Ophthalmology, University College London (UCL), London, United Kingdom
  • Marko Nardini
    Institute of Ophthalmology, University College London (UCL), London, United Kingdom
    Department of Psychology, Durham University, Durham, United Kingdom
  • Gary S. Rubin
    Institute of Ophthalmology, University College London (UCL), London, United Kingdom
    National Institute for Health Research Moorfields Biomedical Research Centre, London, United Kingdom
  • Correspondence: Pete R. Jones, Institute of Ophthalmology, University College London, 11-43 Bath Street, London EC1V 9EL, United Kingdom; p.r.jones@ucl.ac.uk
Investigative Ophthalmology & Visual Science November 2016, Vol.57, 6349-6359. doi:https://doi.org/10.1167/iovs.16-20037
Abstract

Purpose: To assess the feasibility of macular integrity assessment (MAIA) microperimetry (MP) in children. Also to establish representative outcome measures (differential light sensitivity, fixation stability, test–retest reliability) for children without visual impairment.

Methods: Thirty-three adults and 33 children (9–12 years) were asked to perform three monocular MAIA examinations within a single session (dominant eye only).

Results: Children exhibited poorer test–retest reliability than adults for measures of both mean sensitivity (95% coefficient of repeatability [CoR95] = 2.7 vs. 2.3 dB, P = 0.036) and pointwise sensitivity (CoR95 = 6.2 vs. 5.7 dB, P < 0.001). Mean sensitivity was lower in children (27.6 vs. 29.5 dB, P < 0.001), and fixation stability was poorer (95% bivariate contour ellipse area [BCEA95] = 4.58 vs. 1.14, P < 0.001). Mean sensitivity was negatively correlated with fixation stability (r = −0.44, P < 0.001). Both children and adults exhibited substantial practice effects, with mean sensitivity improving by 0.5 dB (adults) and 0.9 dB (children) between examinations 1 and 2 (P ≤ 0.017). There were no significant differences between examinations 2 and 3 (P ≥ 0.374).

Conclusions: Microperimetry is feasible in 9- to 12-year-old children. However, systematically lower sensitivities mean that the classification boundary for “healthy” performance should be lowered in children, pending development of techniques to improve attentiveness/fixation that may reduce or remove this difference. High measurement variability suggests that the results of multiple tests should be averaged when possible. Learning effects are a potential confound, and it is recommended that the results of the first examination be discarded.

Microperimetry (MP) is a method for mapping contrast sensitivity across the visual field.1 Unlike related techniques, such as standard automated threshold perimetry, MP incorporates fundus imaging, and uses eye tracking to correct for eye movements and/or noncentral fixation. This allows MP to relate measures of visual function to precise locations on the retina, making it particularly well suited to assessing subtle changes in visual function. In clinics, MP is used most widely to diagnose and monitor macular degeneration2—a disease that predominantly affects the elderly. Increasingly, though, there is a desire to use MP with children, for example, to assess the efficacy of gene therapies for inherited retinal dystrophies.3–5 
However, there are outstanding questions regarding the feasibility and reliability of MP in children. The test requires participants to sit still and to remain vigilant for extended periods, and it is unclear how these demands affect completion rates or measurement reliability. Furthermore, while there have been numerous studies of test–retest reliability in adults,6–14 a lack of equivalent data in children makes interpreting test data problematic. For example, failure to evidence a change in sensitivity before and after treatment could be due to (currently unknown) levels of measurement error, while spurious improvements in sensitivity could be misreported due to our current ignorance of practice effects. Finally, there currently exist no normative databases for children (existing norms are based exclusively on populations aged 18+ years15–17). In terms of MP's use as a diagnostic method, it is therefore unclear at present what constitutes “normal” performance for healthy children. 
The goals of the present study were twofold: to assess the feasibility and test–retest reliability of MP in children, and to establish age-appropriate outcome measures (sensitivity, fixation stability) for healthy children aged 9 to 13 years without visual impairment. 
Methods
Participants
Participants were 33 children (9.2–12.9 years; M = 10.7, SD = 1.0; 13 female) and 33 adults (18.8–31.1 years; M = 24.1, SD = 3.1; 23 female), with no known visual problems or medical conditions and no previous experience of perimetry. Children younger than 9 years were not tested, as it was found during piloting that they were often too physically small to operate the MP comfortably (see next section). 
Normal vision was assessed by an Early Treatment Diabetic Retinopathy Study (ETDRS) recognition acuity chart (uncorrected monocular acuity ≤ 0.50 logMAR; M = 0.0), and also by a brief parental questionnaire. Children were recruited via advertisements in the local area and received small toys and certificates for participating. Adults were recruited through the University College London (UCL) psychology subject pool and received £7 compensation for their time. 
Prior to testing, written consent was obtained from all participants (adults) or a responsible caregiver (children), and children gave verbal (<10 years) or written (≥10 years) assent to participate. The research was carried out in accordance with the Declaration of Helsinki and was approved by the UCL Ethics Committee. 
Macular Integrity Assessment (MAIA) Microperimeter
Microperimetry assessments were carried out using the MAIA (Centervue, Padova, Italy). This device was selected as it has a dynamic range wide enough to avoid ceiling effects in healthy observers, and because it is capable of operating without the use of mydriatics (minimum pupil size: 2.5 mm). This is important as children often find mydriatics unpleasant and because the use of mydriatics can also distort the results of static perimetry (e.g., underestimation of differential light sensitivity18,19). The MAIA uses a 25-Hz eye tracker to monitor fixation, performs retinal imaging using a scanning laser ophthalmoscope, and generates targets by projecting light from a white light-emitting diode (LED) directly onto the retina. For present purposes, the principal limitation of the current MAIA device is that, because of its current design, young children are often too short to reach the chin rest when seated (Centervue states that child-friendly modifications of the forehead and front-rest assembly are currently being developed). Testing was therefore restricted to 9- to 12-year-olds. In contrast, we have previously used the Nidek MP-1 (Nidek Technologies, Padova, Italy) to test small numbers of patients as young as 5 years old.3 However, the MP-1 was not appropriate for the present study, as its limited dynamic range means that healthy observers would be expected to perform at ceiling.8,20 
Test Protocol
Participants completed three MP examinations within a single session. Examinations used the MAIA's default test parameters and its standard 10°/37-point test pattern (see Fig. 1). All testing was conducted monocularly, with the participant's nondominant eye patched, and testing took place in a quiet, darkened room (0.29 lux; Amprobe LM-120 Light Meter; Danaher Corporation, Washington, DC, USA). Before testing, participants underwent a brief period of training, allowing them to familiarize themselves with the stimulus target and practice correct operation of the response button. In total, testing took approximately 40 minutes, including an initial adaptation period of 7 to 10 minutes and including a short (3–5 minutes) break between each successive test. Participants remained within the same darkened room during these breaks to avoid having to readapt. 
Figure 1
 
The test pattern, consisting of 37 locations distributed within a 10° diameter area. Targets were Goldmann III (0.43° radius) circles of variable intensity, presented against a uniform 4-asb background. The maximum differential luminance of the target was 1000 asb, yielding a dynamic range of 36 dB. Target intensity was adjusted in 4-dB/2-dB steps, using a 4-2 threshold strategy. Target locations were determined by a radial grid consisting of a single central point, and three loci, at 1°, 3°, 5° eccentricity. The cardinal locations, marked with asterisks, were tested first, followed by all other locations in random order (interleaved). During testing, participants were asked to maintain fixation on the central red annulus, which was 0.5° in radius.
Primary Outcome Measures
Pointwise Sensitivity (PWS).
This is the minimum luminance contrast that the observer reliably reported seeing at a particular test location (n = 37). It is otherwise known as differential light sensitivity and is the principal output of the MAIA. In accordance with perimetric convention, PWS is reported in decibels of attenuation, thus:

\[ \mathrm{PWS_{dB}} = 10 \log_{10}\!\left(\frac{\Delta L_{\max}}{\Delta L_{\mathrm{jnd}}}\right), \]

where ΔLjnd is the smallest differential luminance level that could be reliably detected (as estimated by the MAIA), and ΔLmax is the greatest differential luminance level that the MP device is capable of producing. For the MAIA, ΔLmax = 1000 apostilbs (asb), and the smallest possible value of ΔLjnd is 0.25 asb. The dynamic range is therefore 36 dB, with PWSdB = 0 and 36 representing floor and ceiling performance, respectively.  
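Although the study's analyses were performed in Matlab, the conversion can be illustrated with a short Python sketch (the function name and default value below are ours, for illustration only):

import math

def luminance_to_db(delta_l, delta_l_max=1000.0):
    """Differential luminance increment (asb) -> dB of attenuation,
    relative to the device's maximum increment (1000 asb for the MAIA)."""
    return 10.0 * math.log10(delta_l_max / delta_l)

# luminance_to_db(1000.0) -> 0.0 dB  (floor performance)
# luminance_to_db(0.25)   -> ~36 dB  (ceiling performance)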
Mean Sensitivity.
This is an overall index of luminance sensitivity, computed as the arithmetic mean of all 37 PWS values for a given test:

\[ \mathrm{MS} = \frac{1}{37}\sum_{i=1}^{37} \mathrm{PWS}_i . \]
95% Coefficient of Repeatability (CoR95).
This is a measure of test–retest reliability based upon the variance of within-subject, intertest differences.21,22 Also known as the smallest real difference (SRD), the CoR95 is directly related to the 95% limits of agreement (LoA) proposed by Bland and Altman.21,22 CoR95 was computed separately for both mean sensitivity (MS) and each of the 37 PWS values, and this was done for every test repetition (exam 1 vs. 2, exam 2 vs. 3) and both age groups (children, adults). The formula for CoR95 is

\[ \mathrm{CoR}_{95} = 1.96 \times \mathrm{std}\!\left(x_{\mathrm{retest}} - x_{\mathrm{test}}\right), \]

where std is the sample (“Bessel”-corrected) standard deviation of the within-subject test–retest differences.  
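As a concrete illustration of this calculation, a minimal Python/NumPy sketch is given below (the analyses themselves were run in Matlab; the variable names here are hypothetical):

import numpy as np

def cor95(test, retest):
    """95% coefficient of repeatability: 1.96 x the Bessel-corrected standard
    deviation of the within-subject test-retest differences."""
    diffs = np.asarray(retest, dtype=float) - np.asarray(test, dtype=float)
    return 1.96 * np.std(diffs, ddof=1)

# e.g., repeatability of mean sensitivity between exams 1 and 2:
# cor95(ms_exam1, ms_exam2)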
95% Bivariate Contour Ellipse Area (BCEA95).
This is a measure of fixation stability, computed based on the best-fitting ellipse containing 95% of the raw fixation coordinates from a given test.23,24 Note that this measure was preferred over the arbitrary, discrete heuristic of Fujii and colleagues25 (“stable” if more than 75% of fixations fall within 2°, “relatively unstable” if more than 75% fall within 4°, “unstable” otherwise), which is less statistically powerful, and which has been shown to correlate less well with reading speed.24 The formula for BCEA95 is

\[ \mathrm{BCEA}_{95} = 5.99\,\pi\,\sigma_H\,\sigma_V\sqrt{1-\rho^{2}}, \]

where σH and σV are the (corrected) standard deviations of the horizontal and vertical fixation coordinates, respectively, and ρ is the product–moment correlation of these two variables. Note that 5.99 ≈ −2ln(0.05).  
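The same calculation can be sketched in Python/NumPy under the formula above (illustrative only; coordinate inputs are assumed to be in degrees, so the result is an area in deg²):

import numpy as np

def bcea95(x_deg, y_deg):
    """95% bivariate contour ellipse area from horizontal (x) and vertical (y)
    fixation coordinates, in degrees; returns an area in deg^2."""
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    sigma_h = np.std(x, ddof=1)            # Bessel-corrected SDs
    sigma_v = np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]          # product-moment correlation
    chi2 = -2.0 * np.log(0.05)             # ~5.99 for the 95% contour
    return chi2 * np.pi * sigma_h * sigma_v * np.sqrt(1.0 - rho ** 2)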
Statistical Analysis
Data were analyzed in Matlab (The Mathworks, Natick, MA, USA). For all statistics, 95% confidence intervals (CI95) were computed via bootstrapping (n = 20,000), using the bias-corrected and accelerated-percentile (BCa) method.26 A nonparametric bootstrapping procedure, similar in principle to a Mann-Whitney U test, was also used to evaluate differences in CoR95. For example, to compare repeatability (exam 1 vs. 2) between children and adults, 32 paired-samples MS values (MSexam1, MSexam2) were independently randomly drawn from the empirical data for each age group. These were used to compute two independent estimates of CoR95. The difference in CoR95 was then computed (ChildrenCoR95 − AdultsCoR95). This procedure was repeated 20,000 times. The P value was then computed as 2x, where x was the proportion of these 20,000 differences that had the opposite sign to the observed difference in CoR95
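A simplified Python/NumPy sketch of this bootstrap comparison is given below (illustrative only: here each group is resampled with its own sample size, and the input names are hypothetical):

import numpy as np

def cor95(test, retest):
    # as defined above: 1.96 x SD of within-subject differences
    return 1.96 * np.std(np.asarray(retest, dtype=float) -
                         np.asarray(test, dtype=float), ddof=1)

def bootstrap_cor95_difference(child_pairs, adult_pairs, n_boot=20000, seed=0):
    """Nonparametric comparison of CoR95 between two groups.
    *_pairs: (n, 2) arrays of paired (exam A, exam B) MS values per person."""
    rng = np.random.default_rng(seed)
    child = np.asarray(child_pairs, dtype=float)
    adult = np.asarray(adult_pairs, dtype=float)
    observed = (cor95(child[:, 0], child[:, 1]) -
                cor95(adult[:, 0], adult[:, 1]))
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        c = child[rng.integers(0, len(child), len(child))]   # resample pairs
        a = adult[rng.integers(0, len(adult), len(adult))]
        diffs[i] = cor95(c[:, 0], c[:, 1]) - cor95(a[:, 0], a[:, 1])
    # two-tailed P: twice the proportion of resampled differences whose sign
    # is opposite to that of the observed difference
    p_value = 2.0 * np.mean(np.sign(diffs) != np.sign(observed))
    return observed, p_value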
To ensure normality, BCEA and test duration data were log-transformed prior to parametric analysis. 
Raw Data Repository
All reported data are available as Supplementary Material. 
Results
Completion Rates
Of 33 adults enrolled, 32 (97%) completed all three tests successfully. One adult was excluded as their eyes could not be tracked reliably by the MAIA (zero tests completed). 
Of 33 children enrolled, 28 (85%) completed all three tests successfully. Two children were excluded as their eyes could not be tracked reliably (zero tests completed). Two children were excluded as they were restless and uncooperative (fewer than three tests completed, noisy and unreliable data). One child was excluded due to a freak technical error (the response button broke and had to be replaced). 
Test Duration
As shown in Figure 2, mean test duration was consistently greater for children (M = 5.8; SD = 0.8 minutes) than adults (M = 5.1; SD = 0.4 minutes). To assess this difference formally, a 3 × 2 mixed-model ANOVA was run, with a within-subjects factor of exam number (three levels: 1, 2, 3), and a between-subjects factor of age (two levels: child, adult). There was a significant main effect of age (F(1, 58) = 44.98, P ≪ 0.001), indicating that test durations were consistently longer for children than adults. There was no main effect of exam number (F(2, 116) = 0.74, P = 0.480), and no interaction between age and exam number (F(2, 116) = 0.57, P = 0.568), indicating no substantive effects of learning or fatigue on test duration. 
Figure 2
 
Mean test durations (±95% CI) for children (solid blue line) and adults (dashed red line).
Mean Sensitivity
Figure 3A shows MS for children and adults, as well as normative reference values (black lines). Across all exams, group mean MS was 29.2 dB in adults (SD = 1.5) and 27.3 dB in children (SD = 1.6). However, a significant learning effect was observed in both age groups, with MS increasing (exam 1 vs. 3) from 28.8 to 29.5 dB in adults and from 26.7 to 27.6 dB in children. 
Figure 3
 
Mean sensitivity (MS). (A) Histograms showing how MS was distributed across individual participants. Black curves show normative reference data based on 200 normally sighted 50- to 87-year-olds.16 (B) Group mean MS (±95% CI), broken down by examination number and age group. Asterisks indicate significant effects (see text).
To analyze the data, MS values were entered into a 3 × 2 mixed-model ANOVA (exam number × age). There was a significant main effect of age (F(1, 58) = 34.12, P ≪ 0.001) and no interaction between age and exam number (F(2, 116) = 0.65, P = 0.526). Together with Figure 3B, this confirms that children exhibited consistently lower MS scores than adults. There was a significant main effect of exam number (F(2, 116) = 14.18, P ≪ 0.001), confirming that MS scores improved across exams (Fig. 3B). Post hoc tests showed that this practice effect was confined primarily to the first examination, with significant differences between exams 1 and 2 but not between exams 2 and 3 for either age group (see Table 1). Similarly, adult MS values were significantly lower than the mean normative value of 29.78 dB in exam 1, but not in exam 2 or 3 (Table 1). Children's MS scores were consistently lower than the mean of the (adult) normative data throughout. 
Table 1
 
Statistical Differences in Group Mean MS, as Compared to Previously Published Normative Data16 (Columns 2–3) or to the Previous Exam (Columns 4–5)
Pointwise Sensitivity
To examine how luminance sensitivity varied across the visual field, Figure 4 shows mean PWS values, averaged across children (Fig. 4A) and adults (Fig. 4B). 
Figure 4
 
Pointwise sensitivity (PWS). (A) Distribution of children's group mean PWS values for each location in the visual field (averaged across all three exams). (B) Equivalent data for the 32 adult participants, shown in the same format as in (A). (C) Mean difference in PWS between children and adults, shown for individual points (top), and averaged across points of equal eccentricity (bottom). Significant differences are shown in red, with shading to indicate effect size. Nonsignificant differences are shown in gray.
The hill of vision was apparent in both children and adults, with sensitivity decreasing as a function of eccentricity (note that sensitivity in the central point was depressed, due to a known effect of simultaneous masking by the fixation target27). Comparing between children and adults, the previously established difference in MS consisted principally of a uniform reduction in PWS across the visual field, with children exhibiting significantly lower PWS values at 35 of 37 locations (Fig. 4C; ΔPWS37: M = −1.9; SD = 0.5 dB). However, as shown in the lower part of Figure 4C, the difference between children and adults did tend to be somewhat smaller for near (eccentricity = 1°: mean ΔPWS = −1.7 dB) points than for intermediate (3°: −2.14 dB) or far (5°: −2.14 dB) points. The cause of this difference is unclear, but a pure order effect can be discounted, since test trials for the outermost (5°) and inner (1°) loci were randomly interleaved. 
Test–Retest Reliability
Bland-Altman analyses were used to compute the 95% coefficient of repeatability (CoR95) for each repetition (exam 1 vs. 2, exam 2 vs. 3) and each age group (children, adults). For each of these four permutations, CoR95 was computed once for MS, and independently for each of the 37 individual locations. The principal results are given in rows 19 to 22 of Table 2, shown after comparable data from nine previous studies. 
Table 2
 
Sensitivity (MS) and Test–Retest Repeatability (CoR95 for MS and PWS) Estimates for Previous Studies (Rows 1–18) and the Present Study (Rows 19–22)
Repeatability of MS.
The mean CoR95 for MS was greater (worse) for children than adults (Fig. 5C; 2.7 vs. 2.3 dB). Using the bootstrapping analysis described in the Statistical Analysis section, the difference between children and adults was not significant when comparing either of the two individual repetitions (exam 1 vs. 2: P = 0.104; exam 2 vs. 3: P = 0.060), but was significant when CoR95 values were averaged across both repetitions (main effect: P = 0.036). Thus, MS values were less reliable for children than adults. There was an indication that reliability may improve with practice, with CoR95 decreasing from 3.0 to 2.4 dB in children and 2.6 to 1.9 dB in adults (Figs. 5A, 5B). However, this difference was not significant, either when averaging across age groups (main effect: P = 0.065) or when comparing within each age group (children: P = 0.110; adults: P = 0.164). 
Figure 5
 
Test–retest reliability (CoR95) for MS. (A) Bland-Altman plots for exam 1 versus exam 2. Gray shaded regions show 95% confidence intervals around the mean difference. Dashed red lines indicate the 95% limits of agreement (μ ± CoR95). (B) Equivalent data for exam 2 versus exam 3, shown in the same format as in (A). (C) Group mean CoR95 values (±95% CI), broken down by repetition number and age group (higher = less reliable). Analogous plots for PWS are given in Figure 7.
Figure 6
 
Test–retest reliability (CoR95) for PWS. (A) Group mean CoR95 values (±95% CI), broken down by repetition number and age group. Values are computed based on the individual CoR95 values given in (B–E). See Figure 7 for analogous values computed via a single Bland-Altman analysis, applied to all of the raw PWS values pooled together. Asterisks indicate significant effects (see text). (B–E) Group mean CoR95 values (±95% CI) for each location in the visual field. Each value represents the output of an independent Bland-Altman analysis using the PWS values for that location only. Points where children and adults differed significantly are highlighted in red.
Figure 7
 
Test–retest reliability for PWS (pooled PWS analysis): same format as Figure 5. The analysis in (C) is an alternative to that given in Figure 6A, and yields qualitatively identical results. However, instead of averaging over independent reliability estimates made at each location, in the present case data from every test location are pooled together to provide a single estimate of PWS test–retest reliability. This analysis is insensitive to any potential location-specific differences in PWS reliability, but is computationally easy, and is provided here for comparison with previous studies (e.g., Refs. 6, 10, 14).
There was a small but significant relationship between variability (|ΔMS|) and sensitivity (mean MS), with greater variability at lower sensitivities (Pearson's linear correlation: r(118) = −0.36, P < 0.001; data aggregated across all conditions). 
Repeatability of PWS.
The mean CoR95 for PWS was greater (worse) for children than adults (Fig. 6A; 6.2 vs. 5.7 dB). This overall difference was significant (P < 0.001), and age differences were also significant when making post hoc, between-subjects comparisons within each repetition (exam 1 vs. 2: P < 0.001; exam 2 vs. 3: P < 0.001). Thus, PWS values were less reliable for children than adults. However, there was no indication that differences in reliability between adults and children varied systematically across the visual field (Figs. 6B, 6C). 
Across repetitions, CoR95 decreased from 7.3 to 5.2 dB in children (exam 1 vs. 2 versus exam 2 vs. 3) and from 6.5 to 4.8 dB in adults. The decrease in CoR95 with practice was significant when averaging across age groups (main effect: P = 0.018). However, when each age group was analyzed independently, the decrease in CoR95 was borderline significant for adults (P = 0.040) but nonsignificant for children (P = 0.063). Thus, the present sample provided positive but weak evidence that test reliability improves with practice. 
There was a small but significant relationship between variability (|ΔPWS|) and sensitivity (mean PWS), with greater variability at lower sensitivities (Pearson's linear correlation: r(4438) = −0.22, P ≪ 0.001; data aggregated across all conditions). This effect persisted even when mean PWS values greater than 28 dB were excluded (r(2237) = −0.39, P ≪ 0.001); that is, variability is likely to be systematically underestimated for sensitivities near ceiling (for example, a mean PWS of 35 dB could not, by definition, show a test–retest increase greater than 2 dB, as the maximum score was 36 dB). 
Fixation Stability
Figure 8A shows group mean contour ellipses, averaged separately within children and adults. It is clear by inspection that children were poorer at maintaining their gaze on the fixation target (geometric mean BCEA95: 4.58 vs. 1.14). This meant that children frequently fixated target areas 3° eccentric, while adults' gaze was predominantly localized to within the central 1°. 
Figure 8
 
Fixation stability. (A) Bivariate contour ellipse heat maps for children (left) and adults (right). Larger distributions indicate less fixation stability. Data from all three exams were included. The test grid (blue circles) and an arbitrary fundus image are also shown, for scale (target eccentricities: 0°, 1°, 3°, 5°). (B) Group mean BCEA95 values (±95% CI), for children (solid blue line) and adults (dashed red line). (C) Group mean (±95% CI) percentage of eye-tracking samples containing an eye movement exceeding a velocity criterion of 20°/s. (D) Fixation instability (dispersion) as a function of time, for children (blue) and adults (red). Fixation stability was indexed by the standard distance deviation (SDD) of fixation coordinates within 15-second bins. Markers indicate group mean dispersion. Shaded regions indicate 95% confidence intervals. Solid lines indicate piecewise linear (“broken stick”) fits to the data from all three exams (for fitting purposes, time assumed to increase continuously from 0 to 900 seconds. Fits were made using least-squares linear spline fitting, and contained four free parameters: left slope/intercept, right slope, break point). (E) Scatter plot showing the relationship between logBCEA95 and MS. Each marker indicates a single exam, with red/blue markers indicating adults/children. The solid black line is the best-fitting geometric mean regression slope.
To assess this age difference formally, a 3 × 2 mixed-model ANOVA was run (Fig. 8B), with a within-subjects factor of exam number (three levels: 1, 2, 3), and a between-subjects factor of age (two levels: child, adult). There was a significant main effect of age (F(1, 58) = 20.86, P ≪ 0.001), indicating that fixation stability was consistently lower for children than adults (see Fig. 8B). There was also a main effect of exam number (F(2, 116) = 5.35, P = 0.006), indicating an effect of learning or fatigue. There was no interaction between age and exam number (F(2, 116) = 0.47, P = 0.629). 
Furthermore, 3.3% of children's gaze samples exceeded a velocity criterion of 20°/s (Fig. 8C). This indicates that children were making rapid eye movements 3.3% of the time during testing. In contrast, only 1.2% of adults' gaze samples exhibited substantial eye movements. This main effect of age on %HighVelocity was highly significant (F(1, 58) = 38.91, P ≪ 0.001), indicating that children not only moved their eyes over a greater spatial extent (Fig. 8A), but also exhibited more frequent eye movements (Fig. 8C). There was also no main effect of exam number (F(2, 116) = 1.85, P = 0.163), and no interaction between age and exam number (F(2, 116) = 0.55, P = 0.578). 
To further investigate possible learning/fatigue effects, the standard distance deviation (SDD: a two-dimensional analogue of standard deviation) of raw fixation coordinates was computed for successive 15-second time bins. The results are shown in Figure 8D, averaged within age groups. By inspection, it can be seen that fixation stability was relatively constant in adults throughout the period of testing (fitted regression slope: β < 0.001). Conversely, children's fixation stability improved throughout exam 1 (−0.028°/min), becoming most stable by approximately 2.5 minutes into exam 2, after which point stability deteriorated until the end of testing (β = 0.024°/min). This pattern is most parsimoniously explained by an initial period of learning, followed by mounting fatigue. 
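For concreteness, the binned dispersion measure can be sketched as follows (a Python/NumPy illustration; one common formulation of the SDD is used here, and the exact correction applied in the original analysis is not specified):

import numpy as np

def standard_distance_deviation(x_deg, y_deg):
    """Standard distance deviation: square root of the summed
    (Bessel-corrected) variances of the horizontal and vertical coordinates."""
    return np.sqrt(np.var(x_deg, ddof=1) + np.var(y_deg, ddof=1))

def sdd_per_bin(t_sec, x_deg, y_deg, bin_width=15.0):
    """SDD of fixation coordinates within successive time bins (seconds)."""
    t = np.asarray(t_sec, dtype=float)
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    sdd = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (t >= lo) & (t < hi)
        sdd.append(standard_distance_deviation(x[in_bin], y[in_bin])
                   if in_bin.sum() > 1 else np.nan)
    return np.array(sdd)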
As shown in Figure 8E, there was a significant relationship between fixation stability and test performance (Spearman's rank correlation of logBCEA95 versus MS: r(178) = −0.44, P < 0.001). Greater fixation stability predicted greater MS, and changes in fixation stability explained 19% of the variability in MS. A multiple regression containing both logBCEA and age was able to explain an additional 13% of the variability in MS (adjusted R² = 0.32, F(3, 176) = 29.19, P ≪ 0.001), and age was also a significant predictor in the model (t(177) = 5.55, P < 0.001). This indicates that some, but not all, of the age-related difference in sensitivity was related to fixation stability. Note that it was not the case that both logBCEA and MS were simply comorbid upon age, since within-subject changes in fixation stability also predicted changes in MS between exams (r(58) = −0.21, P = 0.019). However, we cannot, with the present data, prove a direct causal link between fixation stability and sensitivity, and it may be that logBCEA and MS are associated via a third, more general factor that varies with age, such as attentiveness. 
Discussion
The results indicate that MP is feasible in children, but that it is somewhat slower and less reliable than in adults. They also demonstrate that normally sighted children exhibit systematically lower sensitivities. Norms for classifying abnormal vision should be modified accordingly when assessing children. As has been reported previously in adults,10 there was strong evidence of learning between the first and second examinations, and in future it may be desirable to build some form of formal practice component into MP test protocols. However, any additional demands upon participants should be tempered against possible signs of fatigue, as fixation stability in children deteriorated toward the end of their third examination. 
Feasibility and Reliability
In terms of completion rates, the results were encouraging, and were similar to values reported previously for children performing standard automated threshold perimetry.28 Although more children (five) than adults (one) failed to complete all three MP assessments, only two of these failures were due to fatigue or fussiness. The individuals (two children, one adult) excluded due to poor eye tracking could likely have been tested after administering mydriatics, and this was confirmed post hoc in the one adult participant (0.5% tropicamide). The remaining 28 children completed all three exams successfully and generally exhibited good test compliance. 
Compared with adults, children's test results were less reliable in terms of both PWS (CoR95 = 6.2 vs. 5.7 dB) and overall MS (CoR95 = 2.7 vs. 2.3 dB). However, from a clinical perspective, the mean difference between children and adults was relatively small (∼0.5 dB). What may be more of a concern is the relatively large amount of variability observed in both children and adults. These levels were similar to those reported previously for adult patients and controls (Table 2), and represent a relatively large degree of measurement error. For example, a pointwise decrease of 6.2 dB translates to an increase of over 300% in target luminance. The corollary is that, wherever possible, assessments should be based on the average of multiple test locations and/or multiple examinations. 
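To make this conversion explicit, using the decibel definition given in the Methods, a difference of 6.2 dB corresponds to a luminance ratio of

\[ 10^{6.2/10} \approx 4.2, \]

that is, the just-detectable target increment must be more than four times greater—an increase of over 300%.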
Finally, it is important to note that, due to the physical configuration of the current MAIA device, we were able to reliably test only children aged 9 years and older. During piloting, some younger children were able to complete examinations while standing. However, these children reported substantial discomfort, and generally failed to complete the full study protocol. Those children younger than 9 who could be seated comfortably generally appeared to perform the test without difficulty, and their results did not, prima facie, appear to differ substantively from those reported for older children in the present study. However, it remains to be seen exactly what the minimum feasible age is for MP, and how completion rates and reliability vary among very young children (who, for example, exhibit poorer fixation stability29 and struggle to inhibit saccades30). 
Sensitivity
Adult sensitivity values were consistent with expected normative values, as well as with data from previous studies (Table 2). For example, in examination 3, group mean MS was 29.5 dB for adults, versus the expected normative value of 29.8 dB. In contrast, children's group mean MS values were 1.9 dB poorer than those of adults (2.2 dB below adult normative data). To put this 1.9-dB difference in context, an MS deficit of 2 to 3 dB is often considered clinically31,32 and/or statistically33 meaningful. To avoid incorrectly classifying visual fields from healthy children as abnormal, it is therefore recommended that, in the short term, the classification boundaries for abnormal/suspect/normal performance be down-adjusted by 1.9 dB when testing children. In the longer term, it may be possible to resolve this age-related difference in sensitivity by accounting for factors such as inattentiveness and fixation stability, as discussed below. 
Children's lower sensitivity estimates do not appear to be an artifact of random measurement error, as age-related differences were consistent and robust. Children exhibited a level of interobserver variability similar to adults (SD = 1.5 dB vs. 1.4 dB in adults), and the 1.9-dB deficit is consistent with previous data from ordinary static threshold perimetry. For example, Patel and colleagues34 reported a 1.85-dB deficit among 9- to 11-year-olds (grid = ±24°), while Tschopp and colleagues35 reported a ∼1-dB deficit in 7- and 8-year-olds (grid = ±3°). 
Children's reduced sensitivity estimates are also unlikely to reflect physiological immaturities, since retinal cell layers are generally fully developed by 10 years of age,36,37 and the fovea itself appears morphologically mature by 24 months post term.37,38 Instead, Tschopp et al.35 suggested that age differences in perimetry may be due to gradual maturation in selective visual attention, with the need to fixate a central target causing a form of “cognitive tunnel vision” in younger children. The pattern of sensitivity estimates observed in the present study is consistent with this hypothesis, in that children's deficit was most pronounced at more eccentric locations (3° and 5°) and smaller around the center (1°, 0°). However, while differences in selective attention may be part of the explanation, they cannot obviously account for why children exhibited poorer fixation stability than adults. 
One further possibility is that children's unstable fixation and their lower sensitivities are both symptoms of some more global immaturity, such as a general lack of attentiveness. This could be assessed in future by adding explicit catch trials to the test protocol in order to assess children's false-negative (lapse) rates and false-positive rates (trigger errors).39 Previous work by Tschopp and colleagues40 has shown that lapse rates measured in this manner are strong predictors of children's thresholds on standard automated perimetry; and on this basis, it seems plausible that such lapses may also explain some or all of children's deficits in MP. A second/additional possibility is that children's reduced sensitivities were a direct consequence of their reduced fixation stability. Substantial reductions in visual sensitivity are known to occur during eye movements,41,42 and children made more and larger eye movements than adults (e.g., due to “searching” behaviors, physiological nystagmus, or failure to suppress instinctual foveation behaviors). Such eye movements may have caused children to miss some suprathreshold stimuli altogether. Furthermore, even when a saccade concluded before stimulus presentation was complete, the effective duration of some stimuli may have been reduced (and for brief stimuli, Bloch's law states that luminance detection thresholds are dependent on stimulus duration43,44). Finally, eye movements toward a stimulus on trial T may have led to the stimulus on trial T+1 being misplaced onto a more eccentric (i.e., less sensitive) retinal location. Ideally, any such eye movements should have been automatically compensated for, since the MAIA is explicitly designed to quantify and correct for changes in gaze location. However, the sampling rate of the MAIA's integrated eye tracker is only 25 Hz, and this, while sufficient for detecting gross changes in preferred retinal locus, is orders of magnitude too slow for the system to compensate fully for brief, rapid eye movements such as saccades and microsaccades. This means that excessive fixation instability is still likely to affect sensitivity estimates. In future studies it would therefore be instructive to see whether children's fixation stability can be improved, either through practice, instruction, reward, or the use of a more engaging fixation target, and whether, as has been found in some adults,45,46 improvements in fixation stability lead to improvements in estimated visual sensitivities. 
In this regard, the fixation target in the MAIA may be considered particularly poor, consisting as it does of an annulus (Fig. 1). The advantage of this design is that it allows targets to be presented at the direct center of the macula (i.e., inside the target). However, participants often reported being confused about exactly where they were supposed to fixate, and previously it has been shown that such “hollow” (pericentral) targets result in quantifiably poorer fixation stability than a simple cross or dot.47 Thus, it may be that a simple change in fixation target may increase test reliability in some children. In younger children, however, it may be necessary to take more extreme measures to inhibit the foveation saccades that are normally triggered by the sudden appearance of light stimuli in the visual field (Ygge JE, et al. IOVS 2004;45:ARVO E-Abstract 2512).29,48 For example, researchers have previously tried using a moving fixation target, which must be tracked using a joystick,49 or a brief display in the fixation square of a graphic symbol, which the subject has to name correctly for the test to proceed.50 It remains to be understood, though, precisely how the benefits of these approaches (e.g., in terms of test accuracy and reliability) trade off against the concomitant increases in test duration and complexity. 
Learning
A substantial learning effect was observed in both children and adults, with sensitivity increasing between exams 1 and 2, though not between exams 2 and 3. This implies that sensitivity is substantially underestimated during the first examination. Failure to appreciate this potential confound could cause the beneficial effects of a treatment to be exaggerated, or a progressive deterioration in visual function to be obscured. 
Based on the present data, the immediate recommendation is to discard the results of the first examination and to use the second test as the baseline (as has been suggested previously for adults10) (Wong E, et al. IOVS 2013;54:ARVO E-Abstract 3603). It is unknown whether this procedure would have to be repeated every session (i.e., whether any learning is retained for weeks/months between visits). However, this would be straightforward to assess in future empirical studies. 
In the longer term, the additional demands of a “practice test” should be tempered against the fact that children's fixation stability deteriorated markedly during exam 3, which we interpret as a likely sign of fatigue. Since the majority of learning on psychophysical tests tends to occur within the first few trials,51,52 it may therefore be beneficial to develop a formal practice regimen capable of inducing asymptotic performance rapidly, without having to complete an entire examination. For example, using the Octopus static perimeter, Tschopp and colleagues48 found that around 75 trials was sufficient training for an 8-year-old child, compared with the several hundred trials that make up a complete test. Since in the present data the test locations were interleaved, it was not possible to confirm whether 75 practice trials would also suffice in MAIA MP. However, given the similarity of the two procedures, a comparable figure seems likely. It is worth noting, though, that in Tschopp's data the amount of practice required varied with age, increasing substantially for younger children. This highlights the importance of avoiding a “one size fits all” approach to pediatric assessments and the need to tailor examination protocols to the needs of the individual. 
Conclusions
  1.  
    Microperimetry is feasible in 9- to 12-year-old children. Most children were able to successfully perform three consecutive MP assessments, with only two (6%) needing to be excluded due to fussiness.
  2.  
    However, children exhibited poorer test–retest reliability, in terms of both MS (CoR95 = 2.7 vs. 2.3 dB, P = 0.036) and PWS (CoR95 = 6.2 vs. 5.7 dB, P < 0.001). Given the magnitude of variability, it is recommended that independent estimates be averaged wherever possible.
  3.  
    Estimated sensitivity was 1.9 dB lower in children than adults (MS = 27.6 vs. 29.5 dB, P < 0.001). This suggests that the classification boundary for “healthy” performance should be lowered in children, pending development of techniques to improve attentiveness/fixation that may reduce or remove this difference.
  4.  
    Children's lower sensitivities were correlated with their poorer fixation stability (BCEA95 = 4.58 vs. 1.14, P < 0.001). This may represent greater inattentiveness among children—a possibility that could be assessed by incorporating explicit catch trials into the current test design. Additionally, there may be a direct causal relationship between fixation stability and estimated sensitivity. This could be evaluated by using alternative fixation targets, which have already been shown to promote fixation stability in adults.
  5.  
    Both children and adults exhibited substantial practice effects, with sensitivity increasing significantly between examinations 1 and 2 (ΔMS = 0.5/0.9 dB, P ≤ 0.017), but not between examinations 2 and 3 (ΔMS = 0.1/0.2 dB, P ≥ 0.374). Such learning is an important potential confound, and based on this, it is recommended that the results of the first examination be discarded, at least until an effective training protocol can be developed.
Acknowledgments
The authors thank Sarah Kalwarowsky for help with pilot testing, and Aisha McLean for help with data analysis. 
Supported by Fight for Sight, and by the National Institute for Health Research Biomedical Research Centre located at (both) Moorfields Eye Hospital and the University College London Institute of Ophthalmology. 
Disclosure: P.R. Jones, None; N. Yasoubi, None; M. Nardini, None; G.S. Rubin, None 
References
Midena E. Microperimetry: an introduction. In: Midena E, ed. Microperimetry and Multimodal Retinal Imaging. Berlin, Heidelberg: Springer; 2014: 3–4.
Kroisamer J-S, Gerendas BS, Schmidt-Erfurth U. Early and intermediate age-related macular degeneration. In: Midena E, ed. Microperimetry and Multimodal Retinal Imaging. Berlin Heidelberg: Springer; 2014: 69–76.
Bainbridge JWB, Mehat MS, Sundaram V, et al. Long-term effect of gene therapy on Leber's congenital amaurosis. N Engl J Med. 2015; 372: 1887–1897.
Buckland KF, Gaspar HB. Gene and cell therapy for children—new medicines, new challenges? Adv Drug Deliv Rev. 2014; 73: 162–169.
Garg A, Tsang SH. Retinal dystrophies. In: Midena E, ed. Microperimetry and Multimodal Retinal Imaging. Berlin, Heidelberg: Springer; 2014: 137–142.
Chen FK, Patel PJ, Xing W, et al. Test-retest variability of microperimetry using the Nidek MP1 in patients with macular disease. Invest Ophthalmol Vis Sci. 2009; 50: 3464–3472.
Weingessel B, Sacu S, Vécsei-Marlovits PV, Weingessel A, Richter-Mueksch S, Schmidt-Erfurth U. Interexaminer and intraexaminer reliability of the microperimeter MP-1. Eye. 2009; 23: 1052–1058.
Acton JH, Bartlett NS, Greenstein VC. Comparing the Nidek MP-1 and Humphrey field analyzer in normal subjects. Optom Vis Sci. 2011; 88: 1288–1297.
Cideciyan AV, Swider M, Aleman TS, et al. Macular function in macular degenerations: repeatability of microperimetry as a potential outcome measure for ABCA4-associated retinopathy trials. Invest Ophthalmol Vis Sci. 2012; 53: 841–852.
Wu Z, Ayton LN, Guymer RH, Luu CD. Intrasession test-retest variability of microperimetry in age-related macular degeneration. Invest Ophthalmol Vis Sci. 2013; 54: 7378–7385.
Jeffrey BG, Cukras CA, Vitale S, Turriff A, Bowles K, Sieving PA. Test-retest intervisit variability of functional and structural parameters in X-linked retinoschisis. Trans Vis Sci Tech. 2014; 3: 5.
Wu Z, Jung CJ, Ayton LN, Luu CD, Guymer RH. Test-retest repeatability of microperimetry at the border of deep scotomas. Invest Ophthalmol Vis Sci. 2015; 56: 2606–2611.
Molina-Martín A, Piñero DP, Pérez-Cambrodí RJ. Reliability and intersession agreement of microperimetric and fixation measurements obtained with a new microperimeter in normal eyes. Curr Eye Res. 2015; 41: 1–10.
Wong EN, Mackey DA, Morgan WH, Chen FK. Intersession test-retest variability of conventional and novel parameters using the MP-1 microperimeter. Clin Ophthalmol. 2016; 10: 29–42.
Midena E, Vujosevic S, Cavarzeran F; Microperimetry Study Group. Normal values for fundus perimetry with the microperimeter MP1. Ophthalmology. 2010; 117: 1571–1576.
Vujosevic S, Smolek MK, Lebow KA, Notaroberto N, Pallikaris A, Casciano M. Detection of macular function changes in early (AREDS 2) and intermediate (AREDS 3) age-related macular degeneration. Ophthalmologica. 2010; 225: 155–160.
Sabates FN, Vincent RD, Koulen P, Sabates NR, Gallimore G. Normative data set identifying properties of the macula across age groups: integration of visual function and retinal structure with microperimetry and spectral-domain optical coherence tomography. Retina. 2011; 31: 1294–1302.
Kudrna GR, Stanley MA, Remington LA. Pupillary dilation and its effects on automated perimetry results. J Am Optom Assoc. 1995; 66: 675–680.
Mendívi A. Influence of a dilated pupil on the visual field in glaucoma. J Glaucoma. 1997; 6: 217–220.
Parodi MB, Triolo G, Morales M, et al. MP1 and MAIA fundus perimetry in healthy subjects and patients affected by retinal dystrophies. Retina. 2015; 35: 1662–1669.
Vaz S, Falkmer T, Passmore AE, Parsons R, Andreou P. The case for using the repeatability coefficient when calculating test-retest reliability. PLoS One. 2012; 8: e73990.
Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999; 8: 135–160.
Steinman RM. Effect of target size, luminance, and color on monocular fixation. J Opt Soc Am. 1965; 55: 1158–1164.
Crossland MD, Dunbar HMP, Rubin GS. Fixation stability measurement using the MP1 microperimeter. Retina. 2009; 29: 651–656.
Fujii GY, de Juan E, Sunness J, Humayun MS, Pieramici DJ, Chang TS. Patient selection for macular translocation surgery using the scanning laser ophthalmoscope. Ophthalmology. 2002; 109: 1737–1744.
Efron B. Better bootstrap confidence intervals. J Am Stat Assoc. 1987; 82: 171–185.
Denniss J, Astle AT. Central perimetric sensitivity estimates are directly influenced by the fixation target. Ophthalmic Physiol Opt. 2016; 36: 453–458.
Patel DE, Cumberland PM, Walters BC, Russell-Eggitt I, Rahi JS; for the OPTIC Study Group. Study of optimal perimetric testing in children (OPTIC): feasibility reliability and repeatability of perimetry in children. PLoS One. 2015; 10: e0130895.
Aring E, Grönlund MA, Hellström A, Ygge J. Visual fixation development in children. Graefes Arch Clin Exp Ophthalmol. 2007; 245: 1659–1665.
Munoz DP, Everling S. Look away: the anti-saccade task and the voluntary control of eye movement. Nat Rev Neurosci. 2004; 5: 218–228.
Kahook MY, Noecker RJ. How do you interpret a 24-2 Humphrey Visual Field printout. Glaucoma Today. 2007; 57–59.
Anderson DR. Interpretation of a single field. Automated Static Perimetry. Mosby; 1991: 91–161.
Wall M, Doyle CK, Zamba KD, Artes P, Johnson CA. The repeatability of mean defect with size III and size V standard automated perimetry. Invest Ophthalmol Vis Sci. 2013; 54: 1345–1351.
Patel DE, Cumberland PM, Walters BC, Russell-Eggitt I, Cortina-Borja M, Rahi JS; for the OPTIC Study Group. Study of optimal perimetric testing in children (OPTIC): normative visual field values in children. Ophthalmology. 2015; 122: 1711–1717.
Tschopp C, Safran AB, Viviani P, Reicherts M, Bullinger A, Mermoud C. Automated visual field examination in children aged 5–8 years: part II: normative values. Vision Res. 1998; 38: 2211–2218.
Vajzovic L, Hendrickson AE, O'Connell RV, et al. Maturation of the human fovea: correlation of spectral-domain optical coherence tomography findings with histology. Am J Ophthalmol. 2012; 154: 779–789.
Hendrickson A, Possin D, Vajzovic L, Toth CA. Histologic development of the human fovea from midgestation to maturity. Am J Ophthalmol. 2012; 154: 767–778.e2.
Dubis AM, Costakos DM, Subramaniam CD, et al. Evaluation of normal human foveal development using optical coherence tomography and histologic examination. Arch Ophthalmol. 2012; 130: 1291–1300.
Jones PR, Kalwarowsky S, Braddick OJ, Atkinson J, Nardini M. Optimizing the rapid measurement of detection thresholds in infants. J Vis. 2015; 15 (11): 2.
Tschopp C, Viviani P, Reicherts M, et al. Does visual sensitivity improve between 5 and 8 years? A study of automated visual field examination. Vision Res. 1999; 39: 1107–1119.
Kulikowski JJ. Effect of eye movements on the contrast sensitivity of spatio-temporal patterns. Vision Res. 1971; 11: 261–273.
Volkmann FC, Riggs LA, White KD, Moore RK. Contrast sensitivity during saccadic eye movements. Vision Res. 1978; 18: 1193–1199.
Krauskopf J. Discrimination and detection of changes in luminance. Vision Res. 1980; 20: 671–677.
Nachmias J. Effect of exposure duration on visual contrast sensitivity with square-wave gratings. J Opt Soc Am. 1967; 57: 421–427.
Tarita-Nistor L, González EG, Mandelcorn MS, Lillakas L, Steinbach MJ. Fixation stability, fixation location and visual acuity after successful macular hole surgery. Invest Ophthalmol Vis Sci. 2009; 50: 84–89.
Tarita-Nistor L, González EG, Markowitz SN, Steinbach MJ. Plasticity of fixation in patients with central vision loss. Vis Neurosci. 2009; 26: 487–494.
Bellmann C, Feely M, Crossland MD, Kabanarou SA, Rubin GS. Fixation stability using central and pericentral fixation targets in patients with age-related macular degeneration. Ophthalmology. 2004; 111: 2265–2270.
Tschopp C, Safran AB, Viviani P, Bullinger A, Reicherts M, Mermoud C. Automated visual field examination in children aged 5–8 years: Part I: experimental validation of a testing procedure. Vision Res. 1998; 38: 2203–2210.
Mutlukan E, Damato BE. Computerised perimetry with moving and steady fixation in children. Eye. 1993; 7: 554–561.
Frisen L. High-pass resolution perimetry. In: Lachapelle, ed. Doc Ophthalmol. Springer; 1993: 1–25.
Molloy KEA, Moore DR, Sohoglu E, Amitay S. Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning. PLoS One. 2012; 7: e36929.
Poggio T, Fahle M, Edelman S. Fast perceptual learning in visual hyperacuity. Science. 1992; 256: 1018–1021.
Figure 1
 
The test pattern, consisting of 37 locations distributed within a 10° diameter area. Targets were Goldmann III (0.43° radius) circles of variable intensity, presented against a uniform 4-asb background. The maximum differential luminance of the target was 1000 asb, yielding a dynamic range of 36 dB. Target intensity was adjusted in 4-dB/2-dB steps, using a 4-2 threshold strategy. Target locations were determined by a radial grid consisting of a single central point, and three loci, at 1°, 3°, 5° eccentricity. The cardinal locations, marked with asterisks, were tested first, followed by all other locations in random order (interleaved). During testing, participants were asked to maintain fixation on the central red annulus, which was 0.5° in radius.
Figure 2. Mean test durations (±95% CI) for children (solid blue line) and adults (dashed red line).
Figure 3. Mean sensitivity (MS). (A) Histograms showing how MS was distributed across individual participants. Black curves show normative reference data based on 200 normally sighted 50- to 87-year-olds.16 (B) Group mean MS (±95% CI), broken down by examination number and age group. Asterisks indicate significant effects (see text).
Figure 4. Pointwise sensitivity (PWS). (A) Distribution of children's group mean PWS values for each location in the visual field (averaged across all three exams). (B) Equivalent data for the 32 adult participants, shown in the same format as in (A). (C) Mean difference in PWS between children and adults, shown for individual points (top), and averaged across points of equal eccentricity (bottom). Significant differences are shown in red, with shading to indicate effect size. Nonsignificant differences are shown in gray.
Figure 5. Test–retest reliability (CoR95) for MS. (A) Bland-Altman plots for exam 1 versus exam 2. Gray shaded regions show 95% confidence intervals around the mean difference. Dashed red lines indicate the 95% limits of agreement (μ ± CoR95). (B) Equivalent data for exam 2 versus exam 3, shown in the same format as in (A). (C) Group mean CoR95 values (±95% CI), broken down by repetition number and age group (higher = less reliable). Analogous plots for PWS are given in Figure 7.
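For reference, the repeatability statistic reported in Figures 5 through 7 can be reproduced from paired test–retest values with a few lines of code. The sketch below (using hypothetical sensitivity values) computes the bias, CoR95 (taken here as 1.96 × the SD of the within-subject differences), and the 95% limits of agreement; it is a plausible reconstruction of the analysis, not the authors' own code. The same function can be applied to MS values, to PWS values at a single location (as in Fig. 6), or to all PWS values pooled together (as in Fig. 7).

```python
import numpy as np

def bland_altman(test1, test2):
    """Bland-Altman summary for paired test-retest measurements.
    Returns the mean difference (bias), the 95% coefficient of repeatability
    (CoR95 = 1.96 * SD of differences), and the 95% limits of agreement."""
    t1, t2 = np.asarray(test1, float), np.asarray(test2, float)
    diff = t2 - t1
    bias = diff.mean()
    cor95 = 1.96 * diff.std(ddof=1)
    return bias, cor95, (bias - cor95, bias + cor95)

# Hypothetical mean-sensitivity values (dB) from exams 1 and 2, five observers
ms_exam1 = [28.1, 27.4, 29.0, 26.8, 28.5]
ms_exam2 = [28.9, 27.9, 29.2, 27.5, 29.1]
bias, cor95, loa = bland_altman(ms_exam1, ms_exam2)
print(f"bias = {bias:+.2f} dB, CoR95 = {cor95:.2f} dB, "
      f"LoA = [{loa[0]:.2f}, {loa[1]:.2f}] dB")
```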
Figure 6. Test–retest reliability (CoR95) for PWS. (A) Group mean CoR95 values (±95% CI), broken down by repetition number and age group. Values are computed from the individual CoR95 values given in (B–E). See Figure 7 for analogous values computed via a single Bland-Altman analysis applied to all of the raw PWS values pooled together. Asterisks indicate significant effects (see text). (B–E) Group mean CoR95 values (±95% CI) for each location in the visual field. Each value represents the output of an independent Bland-Altman analysis using the PWS values for that location only. Points where children and adults differed significantly are highlighted in red.
Figure 7. Test–retest reliability for PWS (pooled PWS analysis): same format as Figure 5. The analysis in (C) is an alternative to that given in Figure 6A and yields qualitatively identical results. However, instead of averaging over independent reliability estimates made at each location, here data from every test location are pooled together to provide a single estimate of PWS test–retest reliability. This analysis is insensitive to any potential location-specific differences in PWS reliability, but is computationally simpler, and is provided here for comparison with previous studies (e.g., Refs. 6, 10, 14).
Figure 8. Fixation stability. (A) Bivariate contour ellipse heat maps for children (left) and adults (right). Larger distributions indicate poorer fixation stability. Data from all three exams were included. The test grid (blue circles) and an arbitrary fundus image are also shown, for scale (target eccentricities: 0°, 1°, 3°, 5°). (B) Group mean BCEA95 values (±95% CI) for children (solid blue line) and adults (dashed red line). (C) Group mean (±95% CI) percentage of eye-tracking samples containing an eye movement exceeding a velocity criterion of 20°/s. (D) Fixation instability (dispersion) as a function of time, for children (blue) and adults (red). Fixation stability was indexed by the standard distance deviation (SDD) of fixation coordinates within 15-second bins. Markers indicate group mean dispersion. Shaded regions indicate 95% confidence intervals. Solid lines indicate piecewise linear (“broken stick”) fits to the data from all three exams (for fitting purposes, time was assumed to increase continuously from 0 to 900 seconds; fits were made using least-squares linear spline fitting and contained four free parameters: left slope/intercept, right slope, and break point). (E) Scatter plot showing the relationship between logBCEA95 and MS. Each marker indicates a single exam, with red/blue markers indicating adults/children. The solid black line is the best-fitting geometric mean regression slope.
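The two fixation-stability indices in Figure 8 can be computed from raw fixation coordinates as sketched below, assuming the conventional bivariate contour ellipse formula BCEA_P = 2πk·σx·σy·√(1 − ρ²) with k = −ln(1 − P) (≈3.0 for P = 0.95), and defining the standard distance deviation as the RMS distance of samples from their centroid. This is a generic reconstruction under stated assumptions, not the MAIA's internal computation, and the simulated 25-Hz, 15-second sample stream is purely hypothetical.

```python
import numpy as np

def bcea(x_deg, y_deg, proportion=0.95):
    """Bivariate contour ellipse area (deg^2) enclosing the given proportion
    of fixation samples: BCEA = 2*pi*k*sigma_x*sigma_y*sqrt(1 - rho^2),
    with k = -ln(1 - P) (~3.0 for the 95% ellipse)."""
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    k = -np.log(1.0 - proportion)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho ** 2)

def standard_distance_deviation(x_deg, y_deg):
    """Standard distance deviation (deg): RMS distance of fixation samples
    from their centroid, used as a per-bin dispersion index."""
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    return np.sqrt(np.mean((x - x.mean()) ** 2 + (y - y.mean()) ** 2))

# Simulated fixation samples (hypothetical 25-Hz tracker, one 15-s bin)
rng = np.random.default_rng(0)
x, y = rng.normal(0, 0.4, 375), rng.normal(0, 0.3, 375)
print(f"BCEA95 = {bcea(x, y):.2f} deg^2, "
      f"SDD = {standard_distance_deviation(x, y):.2f} deg")
```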
Table 1. Statistical Differences in Group Mean MS, as Compared to Previously Published Normative Data16 (Columns 2–3) or to the Previous Exam (Columns 4–5)
Table 2. Sensitivity (MS) and Test–Retest Repeatability (CoR95 for MS and PWS) Estimates for Previous Studies (Rows 1–18) and the Present Study (Rows 19–22)