M. J. Perez Carrasco, C. Sánchez-Ramos, M. Vinas-Pena, C. Bonnin-Arias, A. Forlan, M. San Miguel, G. Ramírez; Analysis of Intra-observer's Repeatability in Retinopathy's Classification of Patients With, Both, Clear and Yellow Iol's Implantation. Invest. Ophthalmol. Vis. Sci. 2009;50(13):745.
To discriminate between different moderate retinopathies from retinographies according to the Clinical Age-Related Maculopathy Staging System (CARMS), and to analyze intra-observer reliability.
808 retinographies were taken from 101 subjects (both genders). Every patient had undergone cataract surgery with implantation of clear or yellow IOLs. 55 retinographies were classified using the CARMS system, which categorizes the images according to the presence or absence of drusen, retinal pigment epithelium irregularities, geographic atrophy, pigment epithelium detachment, and choroidal neovascularisation. The 55 retinographies were categorized by 2 experts in a blinded, independent way. The method consisted of 2 different classification sequences with each of the 3 evaluated systems. The goal of this double-classification method, with a non-randomised change of presentation order, was to exclude the influence of the evaluation of the previous image on the following one (Velo effect). Intra-observer repeatability was assessed by calculating the Kappa index as an agreement parameter for each expert.
The intra-observer reliability analysis was as follows. For CARMS, the observed concordance was 87.27% for observer A and 95.45% for observer B; the expected concordance was 57.36% and 57.44%, and the Kappa index was 0.7016 and 0.8932, for A and B respectively. For drusen, the observed concordance was 87.27% for observer A and 98.18% for observer B; the expected values were 59.50% and 61.16%, and the Kappa index was 0.6857 and 0.9532, for A and B respectively. For pigmentation, the observed concordance was 87.27% and 92.73% for observers A and B; the expected concordance was 55.21% and 53.72%, and the Kappa indexes were 0.7159 and 0.8429. Reliability measured by the Kappa index was graded as: 0.80-1.00 (Excellent), 0.60-0.80 (Good), 0.40-0.60 (Moderate), 0.20-0.40 (Low), <0.20 (Bad).
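The reported Kappa values are consistent with Cohen's formula, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed and p_e the chance-expected agreement proportion. As an illustrative sanity check (a sketch using the figures reported above, not part of the original analysis):

```python
def cohen_kappa(p_obs, p_exp):
    """Cohen's kappa from observed and chance-expected agreement proportions."""
    return (p_obs - p_exp) / (1 - p_exp)

# Observer B, CARMS: observed 95.45%, expected 57.44%
print(round(cohen_kappa(0.9545, 0.5744), 3))  # 0.893, matching the reported 0.8932 up to rounding

# Observer A, drusen: observed 87.27%, expected 59.50%
print(round(cohen_kappa(0.8727, 0.5950), 4))  # 0.6857, as reported
```

Applying the same formula to the other observed/expected pairs reproduces each reported Kappa index to within rounding.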
The Kappa indexes obtained in the intra-observer analysis show reliability ranging from Good to Excellent. A higher discrepancy was observed in the first retinas classified, which indicates that prior training is necessary.