Investigative Ophthalmology & Visual Science
June 2020, Volume 61, Issue 7
ARVO Annual Meeting Abstract  |   June 2020
An assessment of subjective meibography image grading between observers and the impact of formal gland interpretation training on inter-observer agreement of grading scores.
Author Affiliations & Notes
  • Leslie O'Dell
    Dry Eye Center of PA, Manchester, Pennsylvania, United States
  • Clare Halleran
    East Coast Eye Care Country, Newfoundland, Canada
  • Scott Schwartz
    Dr. Scott Schwartz Optometrist and Associate, Michigan, United States
  • Scott G Hauswirth
    University of Denver, Colorado, United States
  • Jonathan Andrews
    Andrews Eye Corporation / Optometric Associates, Lancaster, Pennsylvania, United States
  • Jennifer Harthan
    Illinois College of Optometry, Illinois, United States
  • Justin Kwan
    Professional Eye Care Center, Illinois, United States
  • Jacob Lang
    Associated Eye Care, Minnesota, United States
  • Katherine Mastrota
    The New York Hotel Trades Council and Hotel Association of NYC Health Center, Inc, New York, United States
  • Tracy Doll
    Pacific College of Optometry, Oregon, United States
  • Milton M Hom
    Canyon City Eyecare, California, United States
  • Footnotes
    Commercial Relationships   Leslie O'Dell, Aeire (C), Alcon (C), Allergan (C), Bausch Health (C), Eye Eco (C), Eyevance (C), Glaukos (C), JNJ vision (C), Kala (C), Novartis (C), Shire (C), Sight Sciences (C), Sun (C), Takeda (C); Clare Halleran, Allergan (C), Shire Canada (C); Scott Schwartz, Alcon (F), Bausch Health (F); Scott Hauswirth, Alcon (C), Allergan (C), Avedro (C), Biotissue/Tissue Tech (C), Dompe (C), EyePoint Pharma (C), EyeVance (C), Glaukos (C), Horizon (C), Hovione (C), JNJ vision (C), Kala (C), Novartis (C), NuSight Medical (C), Ocular Therapeutix (C), Science based health (C), Sight Sciences (C), Sun (C), Takeda (C), TearRestore (C), Tear Solutions (C); Jonathan Andrews, None; Jennifer Harthan, Allergan (R), Contamac (F), Essilor (R), Essilor (C), Kala (F), Metro (F), Metro Optics (R), Metro Optics (C), SynergEyes (R), Takeda (R), Tangible Science (F); justin kwan, None; Jacob Lang, Allergan (C), Horizon (C), Novartis (C), Ocular Therapeutix (C), Sun (C); katherine mastrota, Ocusoft (I), Science based Health (C), Sun (C), Tear Lab (I); Tracy Doll, Allergan (C), Bausch Health (C), Novartis (C), Sun (C), Tissue Tech (C); Milton Hom, Allergan (C), Bausch Health (C), Eyenovia (C), Eyevance (C), Hovione (C), Kala (C), Novartis (C), Shire (C), Sydnexis (C), Tarsus (C)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 486. doi:
Abstract

Purpose : Many characteristics of meibomian glands, including atrophy and segmentation, can be graded subjectively by an observer. Past studies have shown subjective measures of meibography imaging to be variable between observers. This study examined inter-observer agreement for atrophy and for a novel scale of meibomian gland segmentation, and investigated whether formal training in meibography interpretation improves inter-observer agreement.

Methods : Twenty meibography images (10 upper-eyelid images, 10 lower-eyelid images) captured with three commercial meibographers were selected from multiple clinics and graded by nine skilled dry eye clinician observers. Meibography images were graded on a whole-integer ordinal scale (0 to 4) using the Pult Atrophy scale and the LEO Segmentation scale. The observers were then asked to complete a meibography interpretation training and regrade the same set of meibography images. Fleiss' kappa was computed to compare each grader against a key agreed upon by three experienced graders.
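The agreement statistic used above can be sketched in code. The following is a minimal pure-Python implementation of Fleiss' kappa, applied to hypothetical grades for a single observer paired against the expert key; the grades and the 0–4 category range are illustrative assumptions, not study data.

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa for a set of subjects each rated by the same
    number of raters.

    ratings: list of tuples, one per subject, holding every rater's
             grade for that subject.
    categories: iterable of all possible grades (e.g. range(5)).
    """
    n_subjects = len(ratings)
    n_raters = len(ratings[0])
    # n_ij: how many raters assigned category j to subject i
    counts = [Counter(r) for r in ratings]
    # Per-subject observed agreement P_i
    p_i = [
        (sum(c ** 2 for c in cnt.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for cnt in counts
    ]
    p_bar = sum(p_i) / n_subjects
    # Marginal proportion of each category across all ratings
    total = n_subjects * n_raters
    p_j = [sum(cnt[cat] for cnt in counts) / total for cat in categories]
    p_e = sum(p ** 2 for p in p_j)  # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Illustrative grades (not study data): one observer vs. the
# 3-expert key on four images, graded 0-4
observer = [0, 1, 2, 2]
key      = [0, 1, 2, 3]
kappa = fleiss_kappa(list(zip(observer, key)), categories=range(5))
```

A kappa of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate worse-than-chance agreement, which matches the ranges reported in the Results below.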

Results : When assessing gland atrophy, mean agreement was 0.34 (range 0.23-0.51) before training and 0.34 (range 0.05-0.55) after training. Five graders improved, with increases in kappa of 0.04 to 0.11; two graders' agreement decreased by 0.01 to 0.08; and one observer's agreement decreased by 0.22, but only because that observer did not grade all of the images and any missed grades were scored as disagreement. One observer did not complete the second round of grading. Mean segmentation agreement was 0.065 (range -0.01-0.21) before training and 0.13 (range 0.05-0.23) after training. Agreement improved for three observers by 0.02 to 0.11 and decreased for four observers by 0.01 to 0.21.

Conclusions : Meibography images are currently graded subjectively, which calls into question reliability between observers. Image quality is paramount to interpretation. Intensive training in meibography interpretation can improve grading agreement for gland segmentation and gland atrophy. Atrophy continues to be the most reliable measure between observers. With continued training and use of new grading scales, clinicians may grade meibography images more reliably. Future objective quantification of meibomian gland morphology by artificial intelligence/machine learning would obviate the need for subjective grading.

This is a 2020 ARVO Annual Meeting abstract.

 

 

LEO Segmentation Scale