F. Baroumand, A.B. Kundaje, X. Zhang, C. Leslie, D.C. Hood; Support vector machine (SVM) classification of multifocal visual evoked potential (mfVEP) responses from glaucoma patients. Invest. Ophthalmol. Vis. Sci. 2004;45(13):3302.
Purpose: To compare support vector machine (SVM) classification of multifocal visual evoked potential (mfVEP) responses from patients with glaucoma to classifications based upon routine clinical tests.

Methods: The 50 patients studied had: 1. a mean deviation (MD) better than –8 dB in both eyes on the 24–2 Humphrey Visual Field (HVF) and 2. glaucomatous damage in at least one eye, as defined by an abnormal disc and an abnormal HVF (PSD < 5% and/or Glaucoma Hemifield Test (GHT) outside normal limits). Monocular mfVEPs were obtained using VERIS (EDI) with three channels of recording, and probability plots were analyzed with custom software [1,2]. The eyes were divided into two groups based upon the GHT, cup-to-disc ratio (CDR), and disc morphology. Eyes with abnormal results on all three tests were labeled "glaucoma", while those with one or more normal tests were labeled "suspect". The mfVEPs from the "glaucoma" eyes and from 50 normal controls were used to train the SVM [3]. The trained SVM was then applied to mfVEP records from the "suspect" eyes, and the machine outcomes were compared to results based upon the mfVEP and HVF. Each eye was represented as a feature vector by concatenating data points from all channels. Simple dimensionality reduction techniques, such as averaging over windows of time and averaging over spatial sectors, were explored. Performance was evaluated by hold-one-out class loss, i.e., the proportion of incorrect predictions made in hold-one-out cross-validation experiments. In this setting, both eyes of a given patient were held out of the training set to avoid bias.

Results: Cross-validation results indicated that time averaging improved test accuracy, while spatial averaging did not. The best error rate on hold-one-out cross-validation was 37% (25% incorrect predictions on the normal eyes and 61% on the affected eyes).
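The pipeline described in the Methods — time-window averaging of each channel, concatenation into a feature vector, and hold-one-out cross-validation that leaves out both eyes of a patient — can be sketched as follows. This is a minimal illustration, not the authors' code: the data shapes, window size, and the use of a linear-kernel scikit-learn SVM are assumptions for the example.

```python
# Sketch of the abstract's pipeline (hypothetical data shapes): each eye's
# mfVEP record is reduced by averaging over fixed time windows, the channels
# are concatenated into one feature vector, and a linear SVM is evaluated by
# leave-one-patient-out cross-validation (both eyes of a patient held out).
import numpy as np
from sklearn.svm import SVC

def time_window_average(record, window=10):
    """Average each channel over consecutive time windows.
    record: (n_channels, n_samples) array -> flat feature vector."""
    n_ch, n_s = record.shape
    n_win = n_s // window
    trimmed = record[:, :n_win * window].reshape(n_ch, n_win, window)
    return trimmed.mean(axis=2).ravel()  # concatenate channels

def leave_one_patient_out_error(X, y, patient_ids):
    """Hold-one-out class loss, holding out BOTH eyes of each patient."""
    errors = 0
    for pid in np.unique(patient_ids):
        test = patient_ids == pid
        clf = SVC(kernel="linear").fit(X[~test], y[~test])
        errors += np.sum(clf.predict(X[test]) != y[test])
    return errors / len(y)
```

Holding out both eyes of a patient, rather than single eyes, matters because the two eyes of one patient are correlated; leaving one in the training set while testing on the other would bias the error estimate downward.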
SVM predictions for the "suspect" eyes agreed with the mfVEP (cluster test) in only 57% of eyes and with the HVF (PSD-cluster test) in 53% of eyes, while the mfVEP and HVF cluster tests agreed in 74% of these eyes.

Conclusions: The weak performance of the SVM might be improved by using other dimensionality reduction techniques, such as PCA, or by training on a larger data set.

References: 1. Hood et al. (2002) AO. 2. Hood & Greenstein (2003) Prog Ret Eye Res. 3. Boser et al. (1992) Fifth Ann. Workshop Comp. Learn. Th., 144–152.