Investigative Ophthalmology & Visual Science
July 2024, Volume 65, Issue 8
Open Access
Retina  |   July 2024
Microperimetry Characteristics of Regions With a Truly Nonresponding Location: Implications for Atrophic Age-Related Macular Degeneration
Author Affiliations & Notes
  • Zhichao Wu
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
    Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
  • Maximilian Pfau
    Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
    Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
  • Monika Fleckenstein
    Department of Ophthalmology and Visual Sciences, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah, United States
  • Robyn H. Guymer
    Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
    Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
  • Correspondence: Zhichao Wu, Centre for Eye Research Australia, Level 7, 32 Gisborne Street, East Melbourne, VIC 3002, Australia; [email protected]
Investigative Ophthalmology & Visual Science July 2024, Vol.65, 44. doi:https://doi.org/10.1167/iovs.65.8.44
Abstract

Purpose: To understand the microperimetry response characteristics of regions with a truly nonresponding location, which will be useful when considering criteria for end-stage atrophic age-related macular degeneration (AMD).

Methods: A simulation model was developed using data from 128 participants with bilateral large drusen at baseline seen over 36 months at 6-month intervals. One hundred thousand pairs of real-world microperimetry testing results were simulated separately with and without one truly nonresponding location, where the sensitivity of one randomly selected location for the former group was derived from the distribution of responses from a truly nonresponding location at the optic nerve head from 60 healthy participants.

Results: Only 60% of the simulated test pairs with a truly nonresponding location had ≥1 location that was <0 decibels (dB) on both tests. In contrast, 91% of the simulated test pairs had ≥1 location that was ≤10 dB on both tests, and 87% had ≥1 location that was ≤10 dB on both tests and <0 dB on one of the tests. Of the simulated test pairs without a truly nonresponding location, 0.04%, 1.4%, and 0.4% met these three criteria, respectively.

Conclusions: Regions with a truly nonresponding test location do not consistently show a repeatable absolute scotoma (<0 dB); instead, they much more often show a repeatable deep visual sensitivity defect (≤10 dB), with or without an absolute scotoma on one of the tests. These findings are crucial if functional criteria are to be considered as part of a definition of end-stage atrophic AMD.

“Geographic atrophy” (GA) is the term used to define the late atrophic stage of age-related macular degeneration (AMD) that can develop when eyes with the early stages of this disease progress.1–4 Development of GA has been used as part of the primary end point of late AMD in several large preventative treatment trials.5–8 For many, implicit in the consideration of GA as representing the end stage of AMD is an assumption that visual function will be nonexistent in this region, meaning that there would be an absolute scotoma in the area of GA.9
A foundational study by Sunness and colleagues, nearly three decades ago, evaluated visual sensitivity in the macular region in eyes with GA and reported that 95% of all locations with an absolute scotoma fell within, or on the border of, an area of atrophy.10 In that study, an absolute scotoma was defined as a nonresponse to both single presentations of a dimmer and brighter stimulus (approximately 0.17° × 0.17°, presented for 400 milliseconds) using a custom-built fundus-oriented perimeter.10 However, 83% of points with an inconsistent response (where a dimmer stimulus was seen, but a brighter stimulus was not seen) also fell within, or on the border of, an area of atrophy.10 In addition, from an average of 57 points tested per individual, an average of 3 points with a relative scotoma (where a dimmer stimulus was not seen, but a brighter stimulus was seen) and 0.5 seeing points (where both the dimmer and brighter stimuli were seen) per individual also fell within an area of atrophy.10 Although it is not possible to specifically determine what proportion of test locations within an area of atrophy had a non-seeing response in this previous study, these findings underscore how a non-seeing response is not always consistently returned within regions considered to be atrophic.
One of our recent studies11 using a modern microperimeter showed that test locations 0.65° inside a region of GA had visual sensitivities that were higher than expected for an absolute scotoma, which were not entirely accounted for by false-positive responses. Another one of our recent studies12 observed that only 64% of points that fell within 0 µm to 250 µm inside the GA margin, and 88% of points that fell >250 µm inside the GA margin, were missed on defect-mapping microperimetry (where a single 10 decibel [dB] stimulus was presented once at 208 test locations in the central 8° radius region). These findings indicate that a notable proportion of 10 dB stimuli falling within an atrophic region exhibited a seen response.12 Therefore, these recent findings highlight how regions with GA may not always exhibit an absolute scotoma, likely as a result of both false-positive responses and residual visual sensitivity. 
To better understand what visual sensitivity responses could be expected in atrophic regions in AMD, and the phenomenon of residual visual sensitivity, it is critical to first understand the characteristics of microperimetry tests in regions with a truly nonresponding location. Such knowledge will also be helpful for considering functional criteria that might define an end-stage atrophic AMD lesion when considering end points in this disease. This study, therefore, sought to address this knowledge gap by developing a computer model to simulate real-world microperimetry test results from eyes of individuals with intermediate AMD, with and without a truly nonresponding test location. The characteristics of high-density, targeted microperimetry tests of regions with early OCT atrophic lesions from our recent study13 were then compared with those from the simulated microperimetry tests. This was performed to illustrate how knowledge about the response characteristics of tests from regions with a truly nonresponding location can be used to interpret the clinical findings of visual sensitivity loss associated with atrophic AMD.
Methods
This study included individuals who were enrolled in the sham treatment arm of the Laser Intervention in the Early Stages of AMD (LEAD) Study,14,15 and observational studies conducted at the Centre for Eye Research Australia (CERA).16,17 Data from these previous studies were used to develop a computer simulation model for microperimetry testing results to compare different approaches for detecting tests with a nonresponding test location. These approaches were then evaluated in individuals enrolled in a prospective observational study who underwent high-density, targeted microperimetry testing of early optical coherence tomography (OCT) atrophic lesions, conducted at CERA.13 These studies were approved by the respective human research ethics committee and conducted in adherence with the Declaration of Helsinki, and all participants provided written informed consent. 
Development of a Microperimetry Computer Simulation Model
As an overview, we sought to develop a computer simulation model of mesopic microperimetry results from eyes of individuals with bilateral large drusen (meeting the definition of intermediate AMD2) in this study. This model could then be used to simulate microperimetry testing results in two groups to evaluate the performance of different approaches for detecting tests with a nonresponding test location: (i) a group where the visual sensitivity of one test location is derived from the distribution of responses from nonresponding locations (from the optic nerve head [ONH] of healthy individuals, which are expected to be nonresponding), and (ii) another group without a nonresponding test location. These two groups thus serve as cases and controls, respectively, for the presence of a nonresponding test location, enabling the evaluation of the true- and false-positive rates, respectively, of different approaches for detecting such tests.
Clinical Cohorts Used to Develop Simulation Model
A summary of the eligibility criteria of the previous studies and the microperimetry testing performed in each of the studies, as well as the test locations and inclusion criteria for the tests for this study, is provided in Figure 1. In brief, individuals in the sham treatment arm of the LEAD study14,15 were used to develop the simulation model of microperimetry test results from eyes of individuals with bilateral large drusen. The exclusion criterion for all eyes was evidence of late AMD on multimodal imaging (MMI), based on the presence of neovascular AMD, GA on conventional fundus photography (CFP), or nascent geographic atrophy (nGA) on OCT imaging, as defined previously.18,19 Only eyes of individuals without MMI-defined late AMD over at least five visits were included, to enable robust estimation of “true” visual sensitivity at each test location (described further below). Healthy individuals from two previous studies who underwent microperimetry testing at the ONH, as a model of deep scotomas or a region of nonresponding retina,16,17 were included to obtain estimates of visual sensitivity measurements from a nonresponding test location. All microperimetry testing was performed using the Macular Integrity Assessment (MAIA) device (CenterVue, Padova, Italy), and only tests performed using the “follow-up” function (i.e. using estimates from a prior test to seed the thresholding procedure, to minimize the systematic underestimation of visual sensitivity loss20) and with a false-positive rate ≤25% were included in this study.
Figure 1.
 
Summary of the clinical cohorts included in this study. AMD = age-related macular degeneration; MMI = multimodal imaging; ONH = optic nerve head; * = based on the presence of neovascular AMD, geographic atrophy on color fundus photography, or nascent geographic atrophy on optical coherence tomography imaging; # = determined by identifying the location with the largest difference in visual sensitivity (or highest gradient) between its two adjacent locations.
Construction of the Simulation Model
Real-world microperimetry test results from eyes with large drusen were simulated by obtaining estimates of the “true” visual sensitivity at each location from the clinical cohort, as well as estimates of measurement variability. To obtain the “true” estimates of visual sensitivity, an ordinary least squares regression model was fitted to the measured visual sensitivity over time at each location. The estimated visual sensitivities across all the locations at each visit for each eye provided a collection of “sensitivity templates.” The differences between the measured and fitted visual sensitivities at each location (or the residuals) were then derived to obtain estimates of measurement variability. 
The residuals were binned according to their fitted values in 1-dB bins and pooled across all locations from all eyes to generate empirical probability distribution functions (PDFs) for each fitted visual sensitivity bin. This was performed because measurement variability is expected to differ based on the true visual sensitivity.20,21 The residuals from each location of each test were then converted into probabilities based on the empirical PDF of its fitted sensitivity, thereby providing standardized estimates of measurement variability and a collection of “noise templates” (or templates of testing performance that account for the correlations between the responses across all locations for a given test22).
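The binning and standardization steps above can be sketched as follows. This is an illustrative Python reconstruction only (the study's analyses were performed in Stata), and the function and variable names are hypothetical:

```python
import numpy as np

def fit_location(sensitivities, visit_times):
    """Ordinary least squares fit of sensitivity versus time for one test
    location; returns the fitted ("true") sensitivities and the residuals."""
    slope, intercept = np.polyfit(visit_times, sensitivities, 1)
    fitted = intercept + slope * np.asarray(visit_times)
    return fitted, np.asarray(sensitivities) - fitted

def build_residual_pdfs(fitted_all, residuals_all):
    """Pool residuals across all eyes/locations into 1-dB bins of their
    fitted sensitivity; each bin's sorted residuals serve as an empirical
    probability distribution function (PDF)."""
    pdfs = {}
    bins = np.floor(np.asarray(fitted_all)).astype(int)
    for b in np.unique(bins):
        pdfs[int(b)] = np.sort(np.asarray(residuals_all)[bins == b])
    return pdfs

def residual_to_probability(residual, fitted, pdfs):
    """Standardize a residual as its percentile rank within the empirical
    PDF of its fitted-sensitivity bin (one entry of a "noise template")."""
    pdf = pdfs[int(np.floor(fitted))]
    return np.searchsorted(pdf, residual, side="right") / len(pdf)
```

Because the probabilities (rather than the raw residuals) are retained, measurement variability can later be re-expressed against the empirical PDF of any other "true" sensitivity, which is what makes the noise templates reusable across sensitivity templates.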
Because all eyes included had at least five visits of testing and up to two microperimetry tests per visit, only the first five visits were included, and only one randomly selected test per visit was included. One hundred “noise templates” per individual were then randomly sampled with replacement, and ten randomized sets of each sampled “noise template” were created by randomizing their probabilities across the different test locations, creating a total of 1000 “noise templates” per person to sample from. Ten thousand “sensitivity templates” were then randomly sampled with replacement from the collection of “sensitivity templates” available, by randomly selecting one person and then one “sensitivity template” from all the available visits. Real-world microperimetry test results could then be simulated by combining a randomly selected “sensitivity template” with a randomly selected “noise template.” The probabilities from the latter were used to determine the residual (or measurement variability) to be added from the corresponding empirical PDF of the “true” sensitivity at a given location. These simulated test results thus represent those from eyes with large drusen and without a nonresponding test location. To simulate tests from eyes with one nonresponding test location, visual sensitivity at one randomly selected location was instead derived from the empirical PDF of the responses from the ONH. This empirical PDF was also derived by randomly selecting, with replacement, one location from one microperimetry test of each individual up to 10,000 times (to ensure equal contribution from each individual). These above steps are illustrated in Figure 2.
Figure 2.
 
Schematic of the development and implementation of the computer simulation model for generating real-world microperimetry test results. “Development of Computer Simulation Model” section = starting from the top left, visual sensitivity measurements at each location from each test performed at each study visit are fitted with an ordinary least squares regression model (two test locations are shown as examples). The fitted sensitivities of each test location represent the “true” visual sensitivity, thereby providing “sensitivity templates” at each visit. The differences between the measured and fitted visual sensitivities (or residuals) were then derived, and empirical probability distribution functions (PDFs) of these residuals, binned into 1 decibel (dB) bins, were generated (an example of the distribution at the 30 dB bin is shown). The empirical PDFs were then used to convert the residuals at each visit into probabilities (or standardized estimates of measurement variability), termed “noise templates.” “Simulation of Microperimetry Test Results” section = a randomly selected “sensitivity template” and “noise template” were used to derive the corresponding residuals at each test location (A). The residuals were then added to the “sensitivity template” to simulate a real-world microperimetry test result from a region without a truly nonresponding test location (B). When simulating test results from a region with a truly nonresponding test location, one randomly selected location was also assigned a random visual sensitivity threshold value from the empirical PDF of the optic nerve head (ONH) (C). Note that the value of “−1” dB is assigned to locations with an absolute scotoma (or “<0 dB”).
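The template-combination step can be sketched as below. This is again a hypothetical Python illustration (names such as `simulate_test` and `residual_pdfs` are assumptions, not the study's code), with −1 dB coding an absolute scotoma as in Figure 2:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_test(sens_template, noise_probs, residual_pdfs, onh_pdf=None):
    """Combine a 'sensitivity template' with a 'noise template': the
    probability at each location is read back through the inverse empirical
    CDF of the residuals for that location's true-sensitivity 1-dB bin.
    If onh_pdf is given, one randomly chosen location is instead drawn from
    the ONH (truly nonresponding) response distribution."""
    simulated = np.empty(len(sens_template))
    for i, (true_db, p) in enumerate(zip(sens_template, noise_probs)):
        pdf = residual_pdfs[int(np.floor(true_db))]
        simulated[i] = true_db + np.quantile(pdf, p)  # inverse empirical CDF
    if onh_pdf is not None:
        loc = rng.integers(len(simulated))
        simulated[loc] = rng.choice(onh_pdf)  # truly nonresponding location
    # -1 dB is assigned to locations with an absolute scotoma ("<0 dB")
    return np.maximum(np.round(simulated), -1.0)
```

Calling `simulate_test` with `onh_pdf=None` yields a control test (no nonresponding location), and supplying the ONH distribution yields a case test, mirroring the two simulated groups described above.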
Functional Criteria for Detecting Tests with a Nonresponding Location
The simulated pairs of microperimetry tests were used to determine the true- and false-positive rates for detecting tests with a nonresponding location. Two different functional criteria were evaluated across varying cutoffs (in 1-dB increments): (i) requiring ≥1 test location to have a sensitivity lower than the cutoff on both tests at the same location, and (ii) requiring ≥1 test location to have a sensitivity <0 dB on one of the tests and a sensitivity lower than the cutoff on the other test, at the same location. The true- and false-positive rates for detecting tests with a nonresponding location were specifically reported for cutoffs based on the floor of the physical and effective dynamic range of the microperimeter used in this study. The former corresponds to the brightest stimulus that can be presented by the device (i.e. 0 dB), and the latter corresponds to the upper limits of human perception.23 For the microperimeter used in this study, the floor of the effective dynamic range (defined as the threshold where 5% of retest values include ≤0 dB, i.e. considered indistinguishable from the floor of the physical dynamic range) was previously reported to be 10 dB.21 As such, the true- and false-positive rates for detecting tests with a nonresponding location based on the cutoffs of <0 dB and ≤10 dB using the two functional criteria above were specifically reported.
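As a concrete sketch, the two criteria can be evaluated over simulated test pairs as below (illustrative Python only, not the study's Stata code; absolute scotomas are coded as −1 dB, so a cutoff of −1 expresses “<0 dB”):

```python
import numpy as np

def repeatable_defect(test1, test2, cutoff_db, require_scotoma_on_one=False):
    """Criterion 1: >=1 co-located point <= cutoff_db on both tests.
    Criterion 2 (require_scotoma_on_one=True): that point must also be
    <0 dB (coded as -1) on at least one of the two tests; for cutoffs >= 0
    this matches '<0 dB on one test, <= cutoff on the other'."""
    t1, t2 = np.asarray(test1), np.asarray(test2)
    both = (t1 <= cutoff_db) & (t2 <= cutoff_db)
    if require_scotoma_on_one:
        both &= (t1 < 0) | (t2 < 0)
    return bool(both.any())

def rates(pairs_with, pairs_without, cutoff_db, require_scotoma=False):
    """True-positive rate over pairs simulated WITH a nonresponding
    location; false-positive rate over pairs simulated WITHOUT one."""
    tp = np.mean([repeatable_defect(a, b, cutoff_db, require_scotoma)
                  for a, b in pairs_with])
    fp = np.mean([repeatable_defect(a, b, cutoff_db, require_scotoma)
                  for a, b in pairs_without])
    return tp, fp
```

Under this coding, the “<0/<0 dB” criterion is `repeatable_defect(t1, t2, -1)`, “≤10/≤10 dB” is `repeatable_defect(t1, t2, 10)`, and “<0/≤10 dB” is `repeatable_defect(t1, t2, 10, require_scotoma_on_one=True)`.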
Evaluation of Functional Criteria in Early OCT Atrophic Changes
Individuals who underwent high-density, targeted microperimetry testing of early OCT atrophic changes in a recent prospective observational study13 were included to illustrate how the proportion of lesions meeting different functional criteria for having a nonresponding test location differs based on the severity of the corresponding OCT atrophic changes present. A summary of the eligibility criteria of this study and parameters used for microperimetry testing is also provided in Figure 1. Briefly, this study included eyes with at least incomplete retinal pigment epithelium (RPE) and outer retinal atrophy (iRORA)24 at baseline, which was defined according to the original description as requiring the presence of choroidal signal hypertransmission associated with a region of RPE attenuation or disruption and evidence of overlying photoreceptor degeneration (based on presence of ellipsoid zone [EZ] or external limiting membrane [ELM] disruption, outer nuclear layer [ONL] thinning, subsidence of the outer plexiform layer [OPL] and inner nuclear layer [INL], or a hyporeflective wedge-shaped band within Henle's fiber layer).9,24 Participants were first seen at a baseline visit, and those with nGA (based on the presence of subsidence of the OPL and INL, and/or a hyporeflective wedge-shaped band18,19) or more progressed atrophic changes were also offered an opportunity to undergo longitudinal follow-up at 3-month intervals up to 12 months. 
All participants underwent microperimetry testing using a high-density stimulus pattern, which was manually centered on an atrophic lesion of interest (as described in detail previously13). At each visit, the two microperimetry tests, performed using the “follow-up” function, were evaluated for the presence of at least one test location that was: (i) <0 dB on both tests, (ii) ≤10 dB on both tests, and (iii) ≤10 dB on both tests, and <0 dB for one of the tests. 
OCT imaging was also performed at all visits, and each lesion that was tested on microperimetry was independently graded to confirm the presence of at least iRORA. The following features were then measured for these lesions: (i) choroidal signal hypertransmission, (ii) RPE attenuation or disruption, (iii) complete RPE loss (defined when a bare Bruch's membrane was visible, without overlying dysmorphic RPE or debris), and (iv) subsidence of the INL and OPL, and/or hyporeflective wedge-shaped band (meeting the definition of nGA). These measurements were then also used to determine if the lesion met the definition of complete RPE and outer retinal atrophy (cRORA),9 which required the presence of hypertransmission ≥250 µm associated with RPE attenuation or disruption ≥250 µm and evidence of overlying photoreceptor degeneration. In this recent prospective observational study,13 we observed that the presence of hypertransmission ≥500 µm, complete RPE loss ≥250 µm, and nGA ≥500 µm were all independently associated with a significant increase in the number of test locations with a visual sensitivity of ≤10 dB.13 We thus determined if a lesion had cRORA or nGA ≥500 µm, with or without (i) either hypertransmission ≥500 µm or complete RPE loss ≥250 µm, (ii) hypertransmission ≥500 µm, (iii) complete RPE loss ≥250 µm, or (iv) both hypertransmission ≥500 µm and complete RPE loss ≥250 µm (providing a total of 10 different criteria for early OCT atrophy). 
The average proportion of lesions with microperimetry tests that had at least one location with visual sensitivities on two tests meeting the <0 dB and ≤10 dB criteria, as described above, was then derived. This was performed by randomly selecting one visit per participant up to 10,000 times (to again ensure equal contributions from each individual) and then calculating the average proportion of microperimetry test result pairs meeting the different criteria above. All analyses were performed using Stata/BE (software version 18; StataCorp, College Station, TX, USA).
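The equal-weighting resampling described above can be sketched as follows (hypothetical Python illustration; the study's analyses used Stata, and `meets` stands in for any one of the three criteria):

```python
import numpy as np

def average_proportion(visits_by_participant, meets, n_draws=10_000, seed=0):
    """Average proportion of lesions meeting a criterion: in each draw, one
    visit is randomly selected per participant so that every individual
    contributes equally regardless of their number of visits, and the
    per-draw proportions are then averaged."""
    rng = np.random.default_rng(seed)
    participants = list(visits_by_participant)
    props = np.empty(n_draws)
    for d in range(n_draws):
        hits = []
        for p in participants:
            visits = visits_by_participant[p]
            hits.append(meets(visits[rng.integers(len(visits))]))
        props[d] = np.mean(hits)
    return float(props.mean())
```

Resampling one visit per participant, rather than pooling all visits, prevents participants followed longitudinally from dominating the estimate.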
Results
The characteristics of the clinical cohorts used to develop the simulation model, and the cohort with early OCT atrophic changes used to evaluate the different functional criteria for detecting a microperimetry test with a nonresponding location, are summarized in Table 1.
Table 1.
 
Characteristics of the Clinical Cohorts Included to Develop the Microperimetry Simulation Model and to Evaluate the Functional Criteria for Detecting a Nonresponding Test Location
For the 231 eyes from 128 participants with bilateral large drusen at baseline, the median mean sensitivity and pointwise sensitivity standard deviation from all microperimetry tests across all visits in this cohort were 26.2 dB (interquartile range [IQR] = 24.8 to 27.5 dB) and 2.1 dB (IQR = 1.8 to 2.4 dB), respectively. The median mean sensitivity and pointwise sensitivity standard deviation from 100 simulated microperimetry tests from the same visits of each eye of each participant in the clinical cohort were 26.2 dB (IQR = 24.8 to 27.4 dB) and 2.1 dB (IQR = 1.8 to 2.5 dB), respectively, near-identical to those seen in the clinical cohort. The distributions of these two parameters in the clinical cohort and simulated tests are shown in Figure 3. For the cohort of 60 eyes of 60 healthy individuals who underwent microperimetry testing at the ONH, the 90th and 95th percentiles of the pointwise sensitivity of the locations within the ONH from the empirical probability distribution function were 2 dB and 9 dB, respectively.
Figure 3.
 
Histograms showing the distribution of the mean sensitivity (first row) and pointwise sensitivity standard deviation (both measured in decibels [dB]; second row) from all tests across all visits in the clinical cohort (left column) and from 100 simulated sequences from the same visits, eyes, and participants in the clinical cohort (right column).
Functional Criteria for Detecting Simulated Tests With a Nonresponding Location
To understand the microperimetry testing characteristics of a region with a truly nonresponding location, the true- and false-positive rates for detecting the simulated tests with and without a nonresponding location using different cutoffs are plotted in Figure 4. Two different types of criteria across different cutoffs were considered: criterion 1 = requiring ≥1 location to have a sensitivity lower than the cutoff at the same location on two tests; criterion 2 = requiring ≥1 location to have a sensitivity <0 dB on one test, and a sensitivity lower than the cutoff on the other test, at the same location on two tests.
Figure 4.
 
True- and false-positive rates (left and right, respectively) for detecting simulated microperimetry tests with and without one nonresponding test location respectively for two different criteria, across different cutoffs (in decibels [dB]): criterion 1 = requiring ≥1 location to have a sensitivity lower than the cutoff at the same location on two tests; criterion 2 = requiring ≥1 location to have a sensitivity <0 dB on one test, and a sensitivity lower than the cutoff on the other test, at the same location on two tests.
This shows, for example, that the true- and false-positive rates of detecting tests with a nonresponding test location were 60% and 0.04%, respectively, when requiring ≥1 location to be <0 dB at the same location on two tests (termed “<0/<0 dB”). In contrast, requiring ≥1 location to be ≤10 dB at the same location on two tests (termed “≤10/≤10 dB”) resulted in true- and false-positive rates of 91% and 1.4%, respectively, whilst these rates were 87% and 0.4%, respectively, when further requiring the location to also be <0 dB on one of the two tests (termed “<0/≤10 dB”).
Evaluation in Early OCT Atrophic Changes
The prevalence of repeatable visual sensitivity abnormalities for early OCT atrophic lesions meeting the three different criteria above (<0/<0 dB, <0/≤10 dB, and ≤10/≤10 dB) is presented in Table 2.
Table 2.
 
Percentage of Early Atrophic Lesions Across Different Criteria on Optical Coherence Tomography (OCT) Imaging With Repeatable Visual Sensitivity Abnormalities on Microperimetry
For the <0/<0 dB criterion, only lesions that had either cRORA or nGA ≥500 µm, with hypertransmission ≥500 µm and complete RPE loss ≥250 µm, showed a proportion of microperimetry tests meeting this criterion (58% for both) comparable to that seen in the simulations above (which had true- and false-positive rates of 60% and 0.04%, respectively; see Fig. 4).
Considering the <0/≤10 dB criterion, only lesions that had either (i) cRORA with both hypertransmission ≥500 µm and complete RPE loss ≥250 µm, or (ii) nGA ≥500 µm with complete RPE loss ≥250 µm, with or without hypertransmission ≥500 µm, showed a proportion of tests meeting this criterion (100% and ≥92%, respectively) comparable to that seen in the simulations above (which had true- and false-positive rates of 87% and 0.4%, respectively; see Fig. 4).
Finally, for the ≤10/≤10 dB criterion, lesions that had either (i) cRORA with complete RPE loss ≥250 µm, with or without hypertransmission ≥500 µm, or (ii) nGA ≥500 µm with hypertransmission ≥500 µm and/or complete RPE loss ≥250 µm, showed a proportion of tests meeting this criterion (≥88% and ≥97%, respectively) comparable to that seen in the simulations above (which had true- and false-positive rates of 91% and 1.4%, respectively; see Fig. 4).
Discussion
This study demonstrated that a repeatable absolute scotoma (<0/<0 dB; based on the floor of the physical dynamic range of the device) was only seen in 60% of simulated microperimetry test pairs of regions with a truly nonresponding location. In contrast, a repeatable deep visual sensitivity defect (≤10/≤10 dB; based on the floor of the effective dynamic range of the device) was seen in >90% of the simulated microperimetry test pairs, or >85% of the time if additionally requiring an absolute scotoma to be present in at least one of the two tests (<0/≤10 dB). These findings provide an important reference for interpreting microperimetry testing results of regions with atrophic AMD. 
When pooling data from our two previous studies in which the ONH of healthy individuals was assessed on microperimetry testing,16,17 we observed that 95% of the pointwise visual sensitivities were ≤9 dB in a region where visual sensitivity is expected to be absent. These findings are unsurprising, as computer simulation models based on standard automated perimetry have previously shown that locations simulated as being truly nonresponding do not always have an absolute scotoma, as evidenced by the measurement error seen in such locations.25,26 More specifically, findings from a recent computer simulation model that considered individual-specific false-positive response rates with the same microperimeter device used in this study showed that 95% of the pointwise sensitivities were expected to be ≤14 dB in that cohort.11
Given these findings, it is therefore unsurprising that only 60% of the simulated microperimetry test pairs in this study with a truly nonresponding location showed a repeatable absolute scotoma, as false-positive responses can result in one or both tests returning a nonscotomatous response. Accordingly, microperimetry testing of regions of atrophic AMD would not be expected to almost always return a repeatable absolute scotoma, even more so if some surviving photoreceptors are indeed present in such atrophic regions.27–29 This is an important consideration when seeking to define anatomic feature(s) of end-stage atrophic AMD that are associated with nonexistent visual function, as not even a region with a truly nonresponding location would be expected to consistently return a repeatable absolute scotoma. 
Instead, we observed that such simulated test pairs frequently exhibited a repeatable deep visual sensitivity defect that was ≤10 dB on both tests, with similar rates whether or not one of these locations was additionally required to be an absolute scotoma (>85% for <0/≤10 dB and >95% for ≤10/≤10 dB). These findings were expected because the cutoff of ≤10 dB (based on the floor of the effective dynamic range of the microperimeter used in this study21) allows truly nonresponding test locations to be effectively captured, even in the presence of false-positive responses. Such repeatable deep visual sensitivity defects are also rarely present in eyes with intermediate AMD (without atrophic AMD), occurring in only 0.4% and 1.4% of the simulated test pairs based on the <0/≤10 dB and ≤10/≤10 dB criteria, respectively. As such, these criteria based on detecting repeatable deep visual sensitivity defects ≤10 dB, rather than repeatable absolute scotomas (<0 dB) alone, are both highly sensitive and specific for identifying regions with a truly nonresponding test location. 
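For illustration, the three repeatability criteria discussed above can be expressed in a few lines of code. This is a hypothetical sketch (the function name and parameterization are ours), not the authors' implementation; it only assumes, as in the simulation model, that an absolute scotoma ("<0 dB") is coded as −1 dB.

```python
import numpy as np

def repeatable_defect(test1, test2, require_absolute=False, cutoff=10):
    """Check whether any test location shows a repeatable defect across a
    pair of microperimetry tests. Absolute scotomas ("<0 dB") are coded as
    -1 dB. With the defaults, this is the "<=10/<=10 dB" criterion; with
    require_absolute=True, the "<0/<=10 dB" criterion; and with cutoff=-1,
    the "<0/<0 dB" criterion."""
    t1, t2 = np.asarray(test1, float), np.asarray(test2, float)
    deep1, deep2 = t1 <= cutoff, t2 <= cutoff
    if require_absolute:
        # Absolute scotoma on one test, plus a defect at or below the
        # cutoff at the same location on the other test
        hit = ((t1 < 0) & deep2) | ((t2 < 0) & deep1)
    else:
        # Defect at or below the cutoff at the same location on both tests
        hit = deep1 & deep2
    return bool(np.any(hit))

# Example: each test has an absolute scotoma, but never repeated at the
# same location, so only the deep-defect criteria are met
t1, t2 = [-1, 4, 25], [8, -1, 26]
print(repeatable_defect(t1, t2, cutoff=-1))              # <0/<0 dB
print(repeatable_defect(t1, t2, require_absolute=True))  # <0/<=10 dB
print(repeatable_defect(t1, t2))                         # <=10/<=10 dB
```

The example makes the point of the paragraph concrete: false-positive responses that break repeatability of the absolute scotoma still leave a repeatable defect at the ≤10 dB cutoff.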
We illustrate the implications of these findings from the computer simulation model in a clinical cohort of individuals who underwent targeted, high-density microperimetry testing of early OCT atrophic lesions, as reported recently.13 Only 10%, 20%, and 58% of cRORA lesions met the <0/<0 dB, <0/≤10 dB, and ≤10/≤10 dB criteria, respectively, on such testing, markedly lower than the proportions seen in the computer simulation results for a region with a truly nonresponding location (60%, 87%, and 91%, respectively). These findings thus illustrate the functional characteristics of a current definition of end-stage atrophic AMD on OCT imaging (i.e. cRORA), and how it does not exhibit response characteristics similar to those of a region with a truly nonresponding test location. In contrast, cRORA lesions with both hypertransmission ≥500 µm and complete RPE loss ≥250 µm showed a more comparable prevalence of tests meeting these abovementioned criteria on microperimetry testing (58%, 100%, and 100%, respectively), underscoring how such lesions are more likely to represent those with a truly nonresponding test location. 
Further work to independently assess these OCT scans by multiple reading centers is underway to comprehensively assess the OCT features that are associated with visual sensitivity loss. Further work is also underway to determine how often atrophic AMD changes seen on other imaging modalities (e.g. color fundus photographs and fundus autofluorescence) meet these functional criteria in this cohort. Nonetheless, these findings illustrate how the results of the computer simulation model can provide a useful reference for understanding whether certain anatomical features on retinal imaging exhibit similar characteristics on microperimetry testing as those with a truly nonresponding test location. 
Limitations of this study include some methodological aspects of the computer simulation model. For example, the estimates of the “true” visual sensitivity at each test location in the eyes of individuals with intermediate AMD could potentially be more robustly determined by repeated testing over a short period of time, rather than by evaluation of longitudinal data (of up to 36 months of follow-up), as performed in this study. Modeling of the “true” visual sensitivity using the longitudinal data in this study assumed linear changes over time, which may not always capture the dynamic changes in the pathological features in the early stages of AMD.30,31 Despite this, the distributions of the visual sensitivity characteristics (mean sensitivity and pointwise sensitivity standard deviation) of the simulated and clinical tests were near-identical, demonstrating that the computer simulation model generated realistic microperimetry test results with high fidelity. Another limitation of this study was that the visual sensitivity responses used in the simulation model came from the ONH of healthy individuals who were younger than those with intermediate AMD in this study. However, previous studies have reported that false-positive response rates in adults are relatively similar across different age groups,32,33 and it is thus unlikely that the results of this study would be markedly different if the responses from the ONH of an older cohort of healthy individuals were used in the computer simulation model instead. 
In conclusion, this study showed that regions with a truly nonresponding test location show a repeatable absolute scotoma in only 60% of the simulated microperimetry test pairs, whereas >85% to >90% showed a repeatable deep visual sensitivity defect (≤10 dB, with or without additionally requiring at least one of the two tests to have an absolute scotoma). These findings provide crucial evidence regarding the microperimetry test characteristics of regions with a truly nonresponding location, which is needed when interpreting the results of microperimetry tests of regions with atrophic AMD and critical if functional criteria are to be considered as part of a definition of end-stage atrophic AMD. 
Acknowledgments
Supported by the National Health & Medical Research Council of Australia (#2008382 [Z.W.] and #1194667 [R.H.G.]) and National Institutes of Health (Core Grant EY014800 to the Department of Ophthalmology & Visual Sciences, University of Utah, and an Unrestricted Grant from Research to Prevent Blindness, New York, NY, to the Department of Ophthalmology & Visual Sciences, University of Utah). CERA receives operational infrastructure support from the Victorian Government. The funders had no role in the manuscript writing and the decision to submit the manuscript for publication. 
Disclosure Z. Wu, None; M. Pfau, Roche/Genentech (F), Novartis (F), Apellis (F) outside the submitted work; M. Fleckenstein, None; R.H. Guymer, Roche/Genentech (F), Bayer (F), Novartis (F), Apellis (F), Janssen (F), Astellas (F), AbbVie (F) outside the submitted work 
References
Bird AC, Bressler NM, Bressler SB, et al. An international classification and grading system for age-related maculopathy and age-related macular degeneration. Surv Ophthalmol. 1995; 39(5): 367–374. [CrossRef] [PubMed]
Ferris FL, III, Wilkinson C, Bird A, et al. Clinical classification of age-related macular degeneration. Ophthalmology. 2013; 120(4): 844–851. [CrossRef]
Fleckenstein M, Keenan TD, Guymer RH, et al. Age-related macular degeneration. Nat Rev Dis Primers. 2021; 7(1): 1–25. [CrossRef] [PubMed]
Guymer RH, Campbell TG. Age-related macular degeneration. Lancet. 2023; 401(10386): 1459–1472. [CrossRef] [PubMed]
Age-Related Eye Disease Study Research Group. A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS Report No. 8. Arch Ophthalmol. 2001; 119(10): 1417–1436. [CrossRef] [PubMed]
Complications of Age-Related Macular Degeneration Prevention Trial Research Group. Laser treatment in patients with bilateral large drusen: the complications of Age-Related Macular Degeneration Prevention Trial. Ophthalmology. 2006; 113(11): 1974–1986. [CrossRef] [PubMed]
Friberg TR, Brennen PM, Freeman WR, Musch DC; PTAMD Study Group. Prophylactic treatment of age-related macular degeneration report number 2: 810-nanometer laser to eyes with drusen: bilaterally eligible patients. Ophthalmic Surg Lasers Imaging. 2009; 40(6): 530–538. [CrossRef] [PubMed]
The Age-Related Eye Disease Study 2 Research Group. Lutein + zeaxanthin and omega-3 fatty acids for age-related macular degeneration. JAMA. 2013; 309(19): 2005–2015. [CrossRef] [PubMed]
Sadda SR, Guymer R, Holz FG, et al. Consensus definition for atrophy associated with age-related macular degeneration on OCT: classification of Atrophy Report 3. Ophthalmology. 2018; 125(4): 537–548. [CrossRef] [PubMed]
Sunness JS, Bressler NM, Maguire MG. Scanning laser ophthalmoscopic analysis of the pattern of visual loss in age-related geographic atrophy of the macula. Am J Ophthalmol. 1995; 119(2): 143–151. [CrossRef] [PubMed]
Pfau M, von der Emde L, Dysli C, et al. Light sensitivity within areas of geographic atrophy secondary to age-related macular degeneration. Invest Ophthalmol Vis Sci. 2019; 60(12): 3992–4001. [CrossRef] [PubMed]
Wu Z, Hadoux X, Jannaud M, et al. Visual sensitivity loss in geographic atrophy: structure–function evaluation using defect-mapping microperimetry. Invest Ophthalmol Vis Sci. 2024; 65(1): 36. [CrossRef]
Wu Z, Glover EK, Gee EE, et al. Functional evaluation of retinal pigment epithelium and outer retinal atrophy by high-density targeted microperimetry testing. Ophthalmol Sci. 2024; 4(2): 100425. [CrossRef] [PubMed]
Guymer RH, Wu Z, Hodgson LAB, et al. Subthreshold nanosecond laser intervention in age-related macular degeneration: the LEAD randomized controlled clinical trial. Ophthalmology. 2019; 126(6): 829–838. [CrossRef] [PubMed]
Wu Z, Luu CD, Hodgson LAB, et al. Secondary and exploratory outcomes of the subthreshold nanosecond laser intervention randomized trial in age-related macular degeneration: a LEAD Study Report. Ophthalmol Retina. 2019; 3(12): 1026–1034. [CrossRef] [PubMed]
Wu Z, Jung CJ, Ayton LN, et al. Test-retest repeatability of microperimetry at the border of deep scotomas. Invest Ophthalmol Vis Sci. 2015; 56(4): 2606–2611. [CrossRef] [PubMed]
Wu Z, Cimetta R, Caruso E, Guymer RH. Performance of a defect-mapping microperimetry approach for characterizing progressive changes in deep scotomas. Transl Vis Sci Technol. 2019; 8(4): 16. [CrossRef] [PubMed]
Wu Z, Luu CD, Hodgson LA, et al. Prospective longitudinal evaluation of nascent geographic atrophy in age-related macular degeneration. Ophthalmol Retina. 2020; 4(6): 568–575. [CrossRef] [PubMed]
Wu Z, Goh KL, Hodgson LAB, Guymer RH. Incomplete retinal pigment epithelial and outer retinal atrophy: longitudinal evaluation in age-related macular degeneration. Ophthalmology. 2023; 130(2): 205–212. [CrossRef] [PubMed]
Wu Z, Hadoux X, Jannaud M, et al. Systematic underestimation of visual sensitivity loss on microperimetry: implications for testing protocols in clinical trials. Transl Vis Sci Technol. 2023; 12(7): 11. [CrossRef] [PubMed]
Pfau M, Lindner M, Müller PL, et al. Effective dynamic range and retest reliability of dark-adapted two-color fundus-controlled perimetry in patients with macular diseases. Invest Ophthalmol Vis Sci. 2017; 58(6): BIO158–BIO167. [CrossRef] [PubMed]
Wu Z, Medeiros FA. Development of a visual field simulation model of longitudinal point-wise sensitivity changes from a clinical glaucoma cohort. Transl Vis Sci Technol. 2018; 7(3): 22. [CrossRef] [PubMed]
Wall M, Woodward KR, Doyle CK, Zamba G. The effective dynamic ranges of standard automated perimetry sizes III and V and motion and matrix perimetry. Arch Ophthalmol. 2010; 128(5): 570–576. [CrossRef] [PubMed]
Guymer RH, Rosenfeld PJ, Curcio CA, et al. Incomplete retinal pigment epithelial and outer retinal atrophy in age-related macular degeneration: classification of Atrophy Meeting Report 4. Ophthalmology. 2020; 127(3): 394–409. [CrossRef] [PubMed]
Spry P, Johnson C, McKendrick A, Turpin A. Measurement error of visual field tests in glaucoma. Br J Ophthalmol. 2003; 87(1): 107–112. [CrossRef] [PubMed]
Spenceley S, Henson D. Visual field test simulation and error in threshold estimation. Br J Ophthalmol. 1996; 80(4): 304–308. [CrossRef] [PubMed]
Sarks J, Sarks S, Killingsworth M. Evolution of geographic atrophy of the retinal pigment epithelium. Eye. 1988; 2(5): 552–577. [CrossRef] [PubMed]
Bird AC, Phillips RL, Hageman GS. Geographic atrophy: a histopathological assessment. JAMA Ophthalmol. 2014; 132(3): 338–345. [CrossRef] [PubMed]
Kim S, Sadda S, Humayun M, et al. Morphometric analysis of the macula in eyes with geographic atrophy due to age-related macular degeneration. Retina. 2002; 22(4): 464–470. [CrossRef] [PubMed]
Hartmann KI, Bartsch DUG, Cheng L, et al. Scanning laser ophthalmoscope imaging stabilized microperimetry in dry age-related macular degeneration. Retina. 2011; 31(7): 1323–1331. [CrossRef] [PubMed]
Wu Z, Cunefare D, Chiu E, et al. Longitudinal associations between microstructural changes and microperimetry in the early stages of age-related macular degeneration. Invest Ophthalmol Vis Sci. 2016; 57(8): 3714–3722. [CrossRef] [PubMed]
Nelson-Quigg JM, Twelker JD, Johnson CA. Response properties of normal observers and patients during automated perimetry. Arch Ophthalmol. 1989; 107(11): 1612–1615. [CrossRef] [PubMed]
Shirakami T, Omura T, Fukuda H, et al. Real-world analysis of the aging effects on visual field reliability indices in central 10-2 tests. J Pers Med. 2022; 12(10): 1600. [CrossRef] [PubMed]
Figure 1.
 
Summary of the clinical cohorts included in this study. AMD = age-related macular degeneration; MMI = multimodal imaging; ONH = optic nerve head; * = based on the presence of neovascular AMD, geographic atrophy on color fundus photography, or nascent geographic atrophy on optical coherence tomography imaging; # = determined by identifying the location with the largest difference in visual sensitivity (or highest gradient) between its two adjacent locations.
Figure 2.
 
Schematic of the development and implementation of the computer simulation model for generating real-world microperimetry test results. “Development of Computer Simulation Model” section = starting from the top left, visual sensitivity measurements at each location from each test performed at each study visit are fitted with an ordinary least squares regression model (two test locations are shown as examples). The fitted sensitivities of each test location represent the “true” visual sensitivity, thus providing “sensitivity templates” at each visit. The differences between the actual measured visual sensitivity and the fitted sensitivity (or residuals) were then derived, and empirical probability distribution functions (PDFs) of these residuals binned into 1 decibel (dB) bins were then derived (an example of the distribution at the 30 dB bin is shown). The empirical PDFs were then used to convert the residuals at each visit into probabilities (or “standardized estimates of measurement variability”), and these were termed “noise templates.” “Simulation of Microperimetry Test Results” section = a randomly selected “sensitivity template” and “noise template” were then used to derive the corresponding residuals at each test location (A). The residuals are then added to the “sensitivity template” to simulate a real-world microperimetry test result from a region without a truly nonresponding test location (B). When simulating test results from a region with a truly nonresponding test location, one randomly selected location is also simulated to have a random visual sensitivity threshold value drawn from the empirical PDF of the optic nerve head (ONH) (C). Note that the value of “−1” dB is assigned to locations with an absolute scotoma (or “<0 dB”).
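The pipeline in this schematic can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the function names are ours, residuals are drawn from a pooled bank rather than per-bin empirical PDFs, and a 0-36 dB device scale (with −1 dB coding "<0 dB") is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_sensitivity_template(visit_times, measured):
    """OLS fit of sensitivity vs. time at one location; the fitted values
    act as the "true" sensitivity at each visit, and the residuals feed
    the empirical noise distributions."""
    slope, intercept = np.polyfit(visit_times, measured, 1)
    fitted = intercept + slope * np.asarray(visit_times, float)
    residuals = np.asarray(measured, float) - fitted
    return fitted, residuals

def simulate_test(sensitivity_template, residual_bank, onh_thresholds=None):
    """Add sampled residuals to a sensitivity template to simulate one
    real-world test. If onh_thresholds is given, one random location is
    replaced with a draw from the ONH threshold distribution to simulate
    a truly nonresponding location (-1 dB codes "<0 dB")."""
    noise = rng.choice(residual_bank, size=sensitivity_template.shape)
    test = np.clip(np.round(sensitivity_template + noise), -1, 36)
    if onh_thresholds is not None:
        test[rng.integers(len(test))] = rng.choice(onh_thresholds)
    return test
```

Drawing the nonresponding location's threshold from the ONH distribution, rather than fixing it at −1 dB, is what lets the model reproduce the false-positive responses that make absolute scotomas unrepeatable.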
Figure 3.
 
Histograms showing the distribution of the mean sensitivity (first row) and pointwise sensitivity standard deviation (both measured in decibels [dB]; second row) from all tests across all visits in the clinical cohort (left column) and from 100 simulated sequences from the same visits, eyes, and participants in the clinical cohort (right column).
Figure 4.
 
True- and false-positive rates (left and right, respectively) for detecting simulated microperimetry tests with and without one nonresponding test location respectively for two different criteria, across different cutoffs (in decibels [dB]): criterion 1 = requiring ≥1 location to have a sensitivity lower than the cutoff at the same location on two tests; criterion 2 = requiring ≥1 location to have a sensitivity <0 dB on one test, and a sensitivity lower than the cutoff on the other test, at the same location on two tests.
Table 1.
 
Characteristics of the Clinical Cohorts Included to Develop the Microperimetry Simulation Model and to Evaluate the Functional Criteria for Detecting a Nonresponding Test Location
Table 2.
 
Percentage of Early Atrophic Lesions Across Different Criteria on Optical Coherence Tomography (OCT) Imaging With Repeatable Visual Sensitivity Abnormalities on Microperimetry