ARVO Annual Meeting Abstract  |   July 2018
Comparison of subjective grading and objective assessment in meibography between two computational programs and evaluation of inter- and intraobserver reproducibility.
Author Affiliations & Notes
  • Manuel Alejandro Garza Leon
    Health Sciences Division, Universidad de Monterrey, Monterrey, Mexico
    Departamento de Cornea, Asociación para Evitar la Ceguera en México, Mexico, Ciudad de México, Mexico
  • Laura Gonzalez
    Departamento de Cornea, Asociación para Evitar la Ceguera en México, Mexico, Ciudad de México, Mexico
  • Nallely Ramos Betancourt
    Departamento de Cornea, Asociación para Evitar la Ceguera en México, Mexico, Ciudad de México, Mexico
  • Everardo Hernandez-Quintela
    Departamento de Cornea, Asociación para Evitar la Ceguera en México, Mexico, Ciudad de México, Mexico
  • Footnotes
    Commercial Relationships   Manuel Garza Leon, None; Laura Gonzalez, None; Nallely Ramos Betancourt, None; Everardo Hernandez-Quintela, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 3811. doi:
Abstract

Purpose : Objective assessment of meibography is an important step forward in evaluating the Meibomian glands, and new imaging equipment appears frequently; however, the analysis of the images has been little studied. Our objectives were to compare the measurement of the area of Meibomian gland loss between two computer programs, to evaluate intra- and interobserver reproducibility, and to compare subjective clinical evaluation with both computer programs.

Methods : Prospective, longitudinal, observational study. Meibographies taken with the Antares® meibograph (CSO, Florence, Italy) were selected at random. The images were analyzed with two programs (Phoenix and ImageJ) five times, with a week between measurements, by an expert observer for each program. Intraobserver repeatability was evaluated with the intraclass correlation coefficient (ICC) and the intrasubject standard deviation. The two programs were compared by their mean measurements. Interobserver reproducibility was assessed by comparing the first measurement from each program by both experts; finally, subjective staging by an expert was compared with the measurements from the computer programs.
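The abstract does not name the exact ICC model or the statistical software used; a common choice for this design is the two-way random-effects, absolute-agreement, single-measurement form, ICC(2,1). As an illustrative sketch only (not the authors' actual analysis), it can be computed from an n-subjects × k-sessions matrix of area-loss percentages via a two-way ANOVA decomposition:

```python
import numpy as np

def icc2_1(ratings):
    """Illustrative ICC(2,1): two-way random effects, absolute agreement,
    single measurement. `ratings` is an (n_subjects, k_sessions) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # Mean squares from two-way ANOVA without replication
    MSR = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    MSC = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)  # sessions
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    MSE = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
```

Under this absolute-agreement form, a consistent offset between sessions lowers the ICC even when rankings are preserved, which is the behavior usually wanted for repeatability of a quantitative measurement.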

Results : Fifty-four images were evaluated. The ICC was 0.989 for Phoenix and 0.988 for ImageJ, with intrasubject SDs of 2.54 and 2.94, respectively. There was a significant difference (p < 0.0001) between the measurements of the two programs (Phoenix 24.48 ± 13.97%, ImageJ 29.05 ± 15.17%). The interobserver comparison showed no statistically significant difference for either program (Phoenix: 24.48 ± 13.97% for the first observer and 24.93 ± 12.70% for the second; ImageJ: 27.91 ± 14.82% for the first observer and 29.05 ± 15.17% for the second).

Conclusions : The comparison of the two programs showed a significant difference. Interobserver differences, with both ImageJ and Phoenix, were not statistically significant. Intraobserver repeatability was high for both programs.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
