ARVO Annual Meeting Abstract | June 2020
Inter-rater agreement on decision making in glaucoma clinical encounters
Author Affiliations & Notes
  • Isdin Oke
    Boston Medical Center, Boston, Massachusetts, United States
  • Babak Eliassi-Rad
    Boston Medical Center, Boston, Massachusetts, United States
  • Manishi Desai
    Boston Medical Center, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships  Isdin Oke, None; Babak Eliassi-Rad, None; Manishi Desai, None
    Support  None
Investigative Ophthalmology & Visual Science June 2020, Vol. 61, 4554.
Abstract

Purpose : Glaucoma encounters provide an ideal model for exploring patterns in clinical decision making, given the large amount of quantitative data that must be synthesized at each visit and the high degree of variability in management patterns between providers. We propose using a series of cases to identify patterns in the decision-making process of individual providers. We hypothesize that the overall inter-rater agreement of evaluators will increase with level of training.

Methods : We performed a retrospective chart review of consecutive glaucoma clinic visits of patients diagnosed with primary open-angle glaucoma or normal-tension glaucoma, managed medically on 1 or 2 intraocular pressure (IOP)-lowering medications. We developed a survey of 40 cases using retrospectively derived parameters including demographics, IOP, central corneal thickness, cup-to-disc ratio, nerve fiber layer thickness, and visual fields. We asked providers with varying training backgrounds to indicate whether they would escalate IOP-lowering therapy for each case. Inter-rater agreement was assessed with Cohen's kappa and Fleiss' kappa statistics.
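
The abstract does not include analysis code; the following is a minimal Python sketch of how the two agreement statistics could be computed, assuming responses are arranged as a cases-by-raters matrix of binary escalate/do-not-escalate decisions. The matrix shape, the random example data, and the use of scikit-learn and statsmodels are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch (not the authors' code): pairwise Cohen's kappa and
# group-level Fleiss' kappa for a hypothetical ratings matrix in which
# ratings[c, r] = 1 if rater r would escalate therapy for case c, else 0.
import numpy as np
from itertools import combinations
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(40, 7))  # hypothetical: 40 cases, 7 raters

# Pairwise Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e), where p_o is the
# observed agreement between the two raters and p_e the agreement expected
# by chance from their marginal escalation rates.
pairwise = {
    (i, j): cohen_kappa_score(ratings[:, i], ratings[:, j])
    for i, j in combinations(range(ratings.shape[1]), 2)
}
print("pairwise kappa range:", min(pairwise.values()), max(pairwise.values()))

# Fleiss' kappa for the whole group: first collapse the raw ratings into a
# (cases x categories) table of counts, then apply the statistic.
counts, _ = aggregate_raters(ratings)
print("Fleiss kappa:", fleiss_kappa(counts, method="fleiss"))

Under the same assumptions, confidence intervals of the kind reported in the Results could be obtained by bootstrapping this computation over cases.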

Results : Thirty-two individuals at various stages of training participated in the investigation (glaucoma specialists, N = 7; comprehensive providers, N = 8; residents, N = 9; students, N = 8). Inter-rater agreement was 0.368 for the entire cohort (95% CI = 0.353 to 0.382), 0.352 for glaucoma specialists (95% CI = 0.283 to 0.422), and 0.381 for resident trainees (95% CI = 0.327 to 0.434). There were no statistically significant differences in inter-rater agreement between training groups. Pairwise kappa values ranged from -0.228 to 0.827, with the highest pairwise agreement observed between comprehensive providers. For the cases in which at least 6 of the 7 glaucoma specialists agreed (N = 24), the trainee inter-rater agreement was 0.445 (95% CI = 0.424 to 0.466); for encounters with disagreement among specialists, it was 0.159 (95% CI = 0.125 to 0.193).

Conclusions : There was moderate inter-rater agreement between glaucoma specialists. We did not observe any statistically significant differences in inter-rater agreement between training levels in this study. A quantitative understanding of the parameters experts prioritize and of their decision-making patterns can have applications in resident education and the design of preferred practice patterns.

This is a 2020 ARVO Annual Meeting abstract.
