ARVO Annual Meeting Abstract  |   May 2004
Support vector machine (SVM) classification of multifocal visual evoked potential responses (mfVEP) from glaucoma patients.
Author Affiliations & Notes
  • F. Baroumand
    Ophthalmology Department,
    Columbia University, New York, NY
  • A.B. Kundaje
    Computer Science Department,
    Columbia University, New York, NY
  • X. Zhang
    Visual Science Lab – Psychology Department,
    Columbia University, New York, NY
  • C. Leslie
    Computer Science Department,
    Columbia University, New York, NY
  • D.C. Hood
    Visual Science Lab – Psychology Department,
    Columbia University, New York, NY
  • Footnotes
    Commercial Relationships  F. Baroumand, None; A.B. Kundaje, None; X. Zhang, None; C. Leslie, None; D.C. Hood, None.
  • Footnotes
    Support  EY02115
Investigative Ophthalmology & Visual Science May 2004, Vol. 45, Issue 13, 3302.
Abstract

Purpose: To compare support vector machine (SVM) classification of multifocal visual evoked potential (mfVEP) responses from patients with glaucoma to classifications based upon routine clinical tests.

Methods: The 50 patients studied had (1) a mean deviation (MD) better than -8 dB in both eyes on the 24-2 Humphrey Visual Field (HVF), and (2) glaucomatous damage in at least one eye, defined as an abnormal disc and an abnormal HVF (PSD < 5% and/or Glaucoma Hemifield Test (GHT) outside normal limits). Monocular mfVEPs were obtained using VERIS (EDI) with three channels of recording, and probability plots were analyzed with custom software [1,2]. The eyes were divided into two groups based upon the GHT, cup-to-disk ratio (CDR), and disk morphology. Eyes with abnormal results on all three tests were labeled "glaucoma", while those with one or more normal tests were labeled "suspect". The mfVEPs from the "glaucoma" eyes and from 50 normal controls were used to train the SVM [3]. The trained SVM was then applied to the mfVEP records from the "suspect" eyes, and machine outcomes were compared to results based upon the mfVEP and HVF. Each eye was represented as a feature vector by concatenating data points from all channels. Simple dimensionality reduction techniques, such as averaging over windows of time and averaging over spatial sectors, were explored. Performance was evaluated by hold-one-out class loss, i.e., the proportion of incorrect predictions made in hold-one-out cross-validation experiments. In this setting, we held out both eyes of a given patient from the training set to avoid bias.

Results: Cross-validation results indicated that time averaging improved test accuracy, while spatial averaging did not. The best error rate on hold-one-out cross-validation was 37% (25% incorrect predictions on the normal eyes and 61% on the affected eyes). SVM predictions for the "suspect" eyes agreed with the mfVEP (cluster test) in only 57% of eyes and with the HVF (PSD-cluster test) in 53% of eyes, while the mfVEP and HVF cluster tests agreed in 74% of these eyes.

Conclusions: The weak performance of the SVM might be improved by using other dimensionality reduction techniques, such as PCA, or by training on a larger data set.

References:
1. Hood et al. (2002) AO.
2. Hood & Greenstein (2003) Prog Ret Eye Res.
3. Boser et al. (1992) Fifth Ann. Workshop Comp. Learn Th., 144-152.
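The training and evaluation scheme described in the Methods can be illustrated with a short script. The sketch below is not the authors' code; it assumes a Python/scikit-learn setup in which each eye's mfVEP waveforms are averaged over fixed time windows, concatenated across the three recording channels into a feature vector, and classified with an SVM, with cross-validation that holds out both eyes of a patient at once. The synthetic data, array shapes, 10-sample window length, kernel choice, and the time_window_average helper are all illustrative assumptions, not details taken from the abstract.

    # Illustrative sketch only -- not the authors' implementation.
    # Assumptions: 3 recording channels, synthetic stand-in waveforms,
    # a 10-sample averaging window, and a linear kernel.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 100 eyes (50 patients x 2 eyes), 3 channels,
    # 300 time samples per channel; label 1 = "glaucoma", 0 = normal control.
    responses = rng.normal(size=(100, 3, 300))
    labels = np.repeat([1, 0], 50)
    patient_ids = np.arange(100) // 2      # both eyes share one patient ID

    def time_window_average(x, window=10):
        """Average each waveform over non-overlapping windows of `window`
        samples (the simple time-averaging dimensionality reduction)."""
        n_eyes, n_channels, n_samples = x.shape
        n_win = n_samples // window
        x = x[:, :, :n_win * window]
        return x.reshape(n_eyes, n_channels, n_win, window).mean(axis=3)

    # Feature vector per eye: time-averaged waveforms concatenated across channels.
    X = time_window_average(responses).reshape(len(responses), -1)

    # Hold-one-patient-out cross-validation: both eyes of a patient are left
    # out together, so the classifier is never tested on an eye whose fellow
    # eye was in the training set.
    clf = SVC(kernel="linear")             # kernel choice is an assumption
    cv = LeaveOneGroupOut()
    scores = cross_val_score(clf, X, labels, groups=patient_ids, cv=cv)
    print("estimated hold-one-out error rate:", 1.0 - scores.mean())

Grouping the cross-validation folds by patient rather than by eye mirrors the bias-avoidance step in the abstract: fellow eyes are correlated, so testing on one eye while training on the other would overstate accuracy.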

Keywords: electrophysiology: clinical • perimetry • visual fields 