July 2018
Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2018
Accuracy of a popular online symptom checker for ophthalmic diagnoses
Author Affiliations & Notes
  • Michael Nguyen
    McMaster University, Hamilton, Ontario, Canada
  • Alexander Gregor
    University of Toronto, Toronto, Ontario, Canada
  • Anne Beattie
    McMaster University, Hamilton, Ontario, Canada
  • Gloria Isaza
    McMaster University, Hamilton, Ontario, Canada
  • Carl Shen
    McMaster University, Hamilton, Ontario, Canada
  • Footnotes
    Commercial Relationships   Michael Nguyen, None; Alexander Gregor, None; Anne Beattie, None; Gloria Isaza, None; Carl Shen, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 5225. doi:

      Michael Nguyen, Alexander Gregor, Anne Beattie, Gloria Isaza, Carl Shen; Accuracy of a popular online symptom checker for ophthalmic diagnoses. Invest. Ophthalmol. Vis. Sci. 2018;59(9):5225.

      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : As more patients use the internet to research their health concerns, it is important for ophthalmologists to be familiar with the features and limitations of online symptom checkers. This cross-sectional descriptive study evaluated the diagnostic accuracy, triage urgency, and inter-rater agreement of a popular online symptom checker for common ophthalmic presentations.

Methods : Forty-two validated clinical vignettes of ophthalmic complaints were generated and distilled to their core presenting symptoms. Cases were entered into the WebMD® online symptom checker by both medically trained and non-medically trained personnel. Output from the symptom checker, including the number of symptoms entered, the ranked list of diagnoses, and the triage urgency, was recorded. Inter-rater agreement was calculated using Cohen's kappa coefficient.
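The inter-rater agreement statistic named above can be sketched as follows. This is a minimal illustration of Cohen's kappa for two raters' binary judgments (e.g., "correct diagnosis in the top 3: yes/no" per vignette); the ratings shown are invented for illustration and are not the study's data.

```python
# Minimal sketch of Cohen's kappa for two raters' categorical ratings.
# The example ratings below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative ratings for 10 vignettes (1 = correct diagnosis in top 3).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(a, b), 2))  # → 0.78
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why it is preferred over simple percent agreement for this kind of reliability check.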

Results : The mean number of symptoms entered was 3.6±1.6 (range 1-8), of which a mean of 0.5±0.8 (range 0-3) were extra-ocular. The mean number of diagnoses generated by the symptom checker was 26.8±21.8 (range 1-99). The primary diagnosis given by the symptom checker was correct in 11/42 (26%) of cases. The correct diagnosis was included in the symptom checker's top 3 diagnoses in 16/42 (38%) of cases, and was absent from the symptom checker's list entirely in 18/42 (43%) of cases. When the correct diagnosis was listed, its mean position on the differential was 4.7±8.2 (range 1-39). The most common primary diagnosis made by the symptom checker was "nearsightedness". In 14 of the 17 cases where the triage urgency of the primary diagnosis was incorrect, an urgent case would have been triaged as non-urgent. The symptom checker showed better diagnostic accuracy for non-urgent conditions. Inter-rater agreement for the correct diagnosis appearing in the top 3 was strong (Cohen's kappa = 0.74).
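The accuracy proportions above (primary diagnosis correct, correct diagnosis in the top 3, correct diagnosis absent) follow directly from the rank of the true diagnosis in each ranked differential. A minimal sketch of that computation, on invented vignette data rather than the study's:

```python
# Sketch: deriving top-k accuracy figures from ranked differential lists.
# The case data below is illustrative, not the study's data.

def rank_of_correct(differential, correct):
    """1-based rank of the correct diagnosis in the list, or None if absent."""
    try:
        return differential.index(correct) + 1
    except ValueError:
        return None

# Each case: (symptom checker's ranked differential, true diagnosis).
cases = [
    (["nearsightedness", "cataract", "glaucoma"], "glaucoma"),
    (["conjunctivitis", "dry eye"], "conjunctivitis"),
    (["stye", "blepharitis"], "uveitis"),
]

ranks = [rank_of_correct(diff, truth) for diff, truth in cases]
n = len(cases)
primary_correct = sum(r == 1 for r in ranks) / n                 # rank 1
top3_correct = sum(r is not None and r <= 3 for r in ranks) / n  # rank ≤ 3
not_listed = sum(r is None for r in ranks) / n                   # absent
print(primary_correct, top3_correct, not_listed)
```

On these three toy cases the ranks are [3, 1, None], giving 1/3 primary accuracy, 2/3 top-3 accuracy, and 1/3 not listed; the study applied the same bookkeeping across its 42 vignettes.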

Conclusions : Online symptom checkers can arrive at the correct clinical diagnosis, but a significant proportion of diagnoses are not captured. As a highly visual specialty in which distinct diseases often share similar symptomatic presentations, ophthalmology may represent a particularly challenging field for internet-based symptom checkers. Further research reflecting the real-life use of internet diagnostic resources is required.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
