ARVO Annual Meeting Abstract  |   September 2016
Spatial Interpolation Enables Normative Data Comparison in Gaze-Contingent Perimetry
Author Affiliations & Notes
  • Jonathan Denniss
    School of Psychology, University of Nottingham, Nottingham, United Kingdom
  • Andrew Astle
    School of Psychology, University of Nottingham, Nottingham, United Kingdom
  • Footnotes
    Commercial Relationships   Jonathan Denniss, None; Andrew Astle, None
    Support  This work was supported by a College of Optometrists Postdoctoral Award (JD & ATA) and an NIHR Postdoctoral Fellowship (ATA). This abstract presents independent research funded by the NIHR. The views expressed are those of the authors and not necessarily those of the NHS, NIHR, or the Department of Health.
Investigative Ophthalmology & Visual Science September 2016, Vol.57, 3938. doi:
Abstract

Purpose : Comparison to normative data is key to the interpretation of many clinical tests, but is currently limited in microperimetry because the position of the test grid varies with fixation. We hypothesized that a surface could be accurately fitted to normative data collected using a densely sampled, spatially extensive grid. This would enable sensitivity at any tested spatial location to be compared to the corresponding point on the normative surface.

Methods : Healthy participants (n=60, age 19-50, median 23 years) undertook microperimetry (MAIA-2, CenterVue, Padova, Italy) using custom test locations: 237 points on a square grid with 1° spacing up to 5° eccentricity and 2° spacing from 5° to 13° eccentricity. A surface was fitted to the 5th percentile of the data at each location using a variety of spatial interpolation methods, and Universal Kriging was selected based on RMS error. The efficacy of this approach was assessed in two ways. First, a leave-one-out method was used to compare individual datasets to surfaces fitted to the remaining data, similar to Total Deviation indices in standard automated perimetry. Second, we considered instances in which the location of the anatomical fovea is unknown: the test grid was spatially shifted by an amount sampled from the distribution of relative fovea-optic disc positions, and the shifted data were compared to the fixed fitted surface. This sampling procedure was repeated 1000 times per participant.
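
For illustration only, the surface fitting and leave-one-out comparison described above might be sketched as follows in Python. The pykrige package, the spherical variogram, the regional-linear drift term, the function names, and the data layout (a participants x locations array of sensitivities in dB) are assumptions made for this sketch; the abstract does not specify the software or kriging settings used.

```python
# Minimal sketch (not the authors' code): fit a Universal Kriging surface to the
# 5th percentile of normative sensitivities and query it at arbitrary locations.
import numpy as np
from pykrige.uk import UniversalKriging

def fifth_percentile_surface(x_deg, y_deg, sens_db):
    """Fit a kriging surface to the per-location 5th-percentile sensitivities.

    x_deg, y_deg : arrays of test-location coordinates (degrees), one per grid point
    sens_db      : 2-D array, participants x locations, sensitivities in dB
    """
    p5 = np.percentile(sens_db, 5, axis=0)               # 5th percentile per location
    return UniversalKriging(x_deg, y_deg, p5,
                            variogram_model="spherical",       # assumption
                            drift_terms=["regional_linear"])   # assumption

def normative_limit(uk, x_query, y_query):
    """Interpolate the normative (5th-percentile) limit at arbitrary locations."""
    z, _var = uk.execute("points", np.atleast_1d(x_query), np.atleast_1d(y_query))
    return np.asarray(z)

def leave_one_out_flag_rate(x_deg, y_deg, sens_db, i):
    """Refit without participant i and return the fraction of their tested
    points that fall below the interpolated 5th-percentile limit (expect ~5%)."""
    rest = np.delete(sens_db, i, axis=0)
    uk = fifth_percentile_surface(x_deg, y_deg, rest)
    return float(np.mean(sens_db[i] < normative_limit(uk, x_deg, y_deg)))
```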

Results : All participants achieved reliable results with stable fixation. Surfaces were fitted accurately, with RMS error <0.5 dB across all tested locations. Across all participants, a median of 4.6% (IQR 2.0% to 9.6%) of tested points fell below the surface fitted to the 5th percentile of the remaining data, close to the expected 5%. When grids were spatially shifted (fovea location unknown), the difference in the number of abnormal locations from baseline ranged from -17 to +15 (mean -0.2, SD 2.3, p=0.11).
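
The grid-shift analysis described in the Methods might be sketched in the same style. The bivariate-normal model of fovea-position error and its standard deviation are placeholders; in the study, offsets were sampled from the measured distribution of relative fovea-optic disc positions.

```python
# Minimal sketch (assumptions throughout): offset all test locations by a random
# fovea-position error and recount points falling below the 5th-percentile surface.
import numpy as np

rng = np.random.default_rng(0)

def shifted_flag_counts(uk, x_deg, y_deg, sens_db_i, sd_deg=0.5, n_repeats=1000):
    """Count tested points below the normative limit after random grid shifts.

    uk         : fitted UniversalKriging surface (see previous sketch)
    sens_db_i  : one participant's sensitivities at (x_deg, y_deg), in dB
    sd_deg     : assumed SD of fovea-position error in degrees (placeholder)
    """
    counts = np.empty(n_repeats, dtype=int)
    for k in range(n_repeats):
        dx, dy = rng.normal(0.0, sd_deg, size=2)          # random grid offset
        limit, _ = uk.execute("points", x_deg + dx, y_deg + dy)
        counts[k] = int(np.sum(sens_db_i < np.asarray(limit)))
    return counts
```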

Conclusions : Spatial interpolation of densely sampled, spatially extensive normative microperimetry data is accurate and enables sensitivity at any testable visual field location to be compared to normative limits. When the fovea location is unknown, normative limits may require alteration to maintain specificity. With a larger normative database, this technique will aid the detection of early functional defects in patients with suspected central vision disorders.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.
