May 2007
Volume 48, Issue 13
ARVO Annual Meeting Abstract
Numerical Modeling of Visual Field Test Data for Glaucoma Detection and Evaluation
Author Affiliations & Notes
  • D. Wroblewski
    BioFormatix, Escondido, California
  • B. A. Francis
    Doheny Eye Institute, University of Southern California, Los Angeles, California
  • V. Chopra
    Doheny Eye Institute, University of Southern California, Los Angeles, California
  • P. Quiros
    Doheny Eye Institute, University of Southern California, Los Angeles, California
  • R. K. Massengill
    Mednovus, Leucadia, California
  • Footnotes
    Commercial Relationships D. Wroblewski, BioFormatix, E; B.A. Francis, BioFormatix, C; V. Chopra, BioFormatix, C; P. Quiros, BioFormatix, C; R.K. Massengill, BioFormatix, C.
  • Footnotes
    Support NIH Grant EY014077
Investigative Ophthalmology & Visual Science May 2007, Vol.48, 1642. doi:
D. Wroblewski, B. A. Francis, V. Chopra, P. Quiros, R. K. Massengill; Numerical Modeling of Visual Field Test Data for Glaucoma Detection and Evaluation. Invest. Ophthalmol. Vis. Sci. 2007;48(13):1642.

© ARVO (1962-2015); The Authors (2016-present)


Purpose: To implement and evaluate novel pre-processing approaches and pattern recognition methods for automatic interpretation of visual field tests and glaucoma detection.

Methods: Visual field (VF) test data were obtained with a standard automated perimetry device (HFA II, 24-2 algorithm). Ancillary data included treated and untreated intra-ocular pressure (IOP), cup-to-disc ratio, patient age, sex, and family-history risk factor. Classifications of the visual fields and the ophthalmic diagnoses were provided by experts. 1135 cases were analyzed, including normals, glaucoma suspects, pre-perimetric glaucoma, and glaucoma patients at different levels of progression. The automatic classification scheme uses a hierarchical approach: fields are first classified as normal vs. artifactual, and non-artifactual fields are then classified as normal, glaucoma suspect, or glaucoma. The parameters used by the classification models were derived from sensitivity distributions and selected through statistical analyses and model-sensitivity evaluations. Non-parametric, non-linear classification algorithms (radial basis function neural networks and support vector machines) were used, and model ensembles were employed to increase classification accuracy. Classification accuracy was determined using numerical validation methods (random cross-validation and bootstrap), and the classification confidence level is evaluated from the results of training and numerical validation.
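The abstract does not disclose its implementation (the models ship inside the VFExpert program). As a minimal illustrative sketch of the two-stage hierarchical scheme described above, the following uses scikit-learn RBF-kernel support vector machines on synthetic stand-in data; the library choice, feature layout (52 test-point sensitivities plus three ancillary features), and all variable names are assumptions, not the authors' code.

```python
# Hypothetical sketch of the hierarchical classifier: stage 1 screens out
# artifactual fields, stage 2 assigns a diagnostic class. Synthetic data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features: 52 sensitivity values from a 24-2 visual field test
# plus 3 ancillary features (IOP, cup-to-disc ratio, age). Labels are random.
n, d = 300, 55
X = rng.normal(size=(n, d))
is_artifact = rng.integers(0, 2, size=n)   # stage-1 labels: 1 = artifactual
diagnosis = rng.integers(0, 3, size=n)     # stage-2 labels: normal/suspect/glaucoma

# Stage 1: normal vs. artifactual fields.
stage1 = SVC(kernel="rbf")
stage1.fit(X, is_artifact)

# Stage 2: diagnostic classification on fields judged non-artifactual.
mask = stage1.predict(X) == 0
stage2 = SVC(kernel="rbf")
stage2.fit(X[mask], diagnosis[mask])

# Random cross-validation to estimate diagnostic accuracy, as in the abstract.
scores = cross_val_score(SVC(kernel="rbf"), X[mask], diagnosis[mask], cv=5)
print(round(scores.mean(), 2))
```

The abstract's models also average over ensembles; that step would wrap several such classifiers (e.g. trained on resampled data) and vote on their predictions.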

Results: Numerical analyses indicate about 80-90% agreement between the classifications provided by the automated system and glaucoma experts. Eyes deemed erroneously classified are re-evaluated by experts; this iterative procedure assures high quality of the database. Numerical validation of the models indicates about 60% accuracy in classifying glaucoma-suspect cases, which nominally do not exhibit VF defects; this indicates the ability of the numerical model to discern subtle changes in the VF that may be indicative of early stages of glaucoma. Overall accuracy is estimated at 89% for classification into artifactual vs. correct classes, and 78% for classification into the three diagnostic classes.
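The bootstrap validation mentioned in the Methods can be illustrated as follows. This is not the authors' procedure: the per-case agreement labels are simulated here at roughly the reported 78% three-class rate, purely to show how a bootstrap yields an accuracy estimate with an uncertainty interval.

```python
# Illustrative bootstrap estimate of classification accuracy with a 95%
# percentile interval. Agreement labels are simulated, not real study data.
import numpy as np

rng = np.random.default_rng(1)
n_cases = 1135                      # cohort size from the abstract
agree = rng.random(n_cases) < 0.78  # assumed per-case model/expert agreement

boot = np.array([
    rng.choice(agree, size=n_cases, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(boot.mean(), 2), round(lo, 3), round(hi, 3))
```

With 1135 cases the interval is narrow (a few percentage points), which is consistent with quoting point accuracies such as 89% and 78%.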

Conclusions: Ongoing effort is directed at enhancing the VFExpert annotated database and at modeling glaucoma progression. The classification models are implemented as part of the Windows-based VFExpert program, which also provides data-transfer and data-management capabilities.

Keywords: visual fields • computational modeling 
