Letters to the Editor | December 2010
Rasch Analysis for QoL Questionnaires
Author Affiliations & Notes
  • Henk Kelderman
    Faculty of Psychology and Education, Free University, Amsterdam, The Netherlands
  • Joost Felius
    Retina Foundation of the Southwest, Dallas, Texas
  • Jan Passchier
    Faculty of Psychology and Education, Free University, Amsterdam, The Netherlands
Investigative Ophthalmology & Visual Science, December 2010, Vol. 51, 6898-6899. https://doi.org/10.1167/iovs.10-5762
Vianya-Estopa et al.1 have criticized the Amblyopia and Strabismus Questionnaire (A&SQ), in contrast with the very favorable outcome of our recent factor analysis.2 We would like to comment on the measurement model used in their appraisal.
In clinimetrics, test administration is seen as a probability experiment in which the essential attributes of interest, here quality of life, as well as incidental effects such as errors due to the measurement experiment itself, are thought to influence the item responses.3 The purpose of a measurement model is to describe this experiment and to provide estimates of the latent variable of interest and of the variance of its measurement errors. The authors analyzed the A&SQ data with Andrich's rating scale model,4 which assumes that (1) the subject makes a series of consecutive choices between neighboring categories,5 (2) all items measure in the same scale units,6 and (3) a subject's responses uniquely indicate the attribute of interest and do not depend on each other.4 None of these assumptions, however, seems to describe the measurement experiment adequately or to have much relevance for the meaningfulness of statements based on A&SQ scores. Lack of fit of this model should not be interpreted as evidence for multidimensionality, but as lack of fit to the joint set of assumptions of the rating scale model. Therefore, the second dimension found by the authors may well have been the result of a shortcoming in the description of the measurement experiment.
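For reference, a standard statement of Andrich's rating scale model (the notation here is ours, not taken from the letter) gives the probability that subject v, with latent trait θ, responds in category x of item i as

\[
P(X_{vi}=x \mid \theta_v) \;=\; \frac{\exp\!\left[\,x(\theta_v-\beta_i)-\sum_{k=1}^{x}\tau_k\right]}{\sum_{j=0}^{m}\exp\!\left[\,j(\theta_v-\beta_i)-\sum_{k=1}^{j}\tau_k\right]}, \qquad \sum_{k=1}^{0}\tau_k \equiv 0,
\]

where \(\beta_i\) is the item location and \(\tau_1,\dots,\tau_m\) are category thresholds shared by all items. The shared thresholds correspond to assumption (2), and the absence of any terms linking responses to different items corresponds to assumption (3).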
In our analysis2 we took a pragmatic approach and used principal component analysis (PCA) to assess whether a substantial amount of the response variance could be explained by a small number of factors (preferably one), rather than testing for deviations from a restrictive one-dimensional model. PCA allows items to be measured on different interval scales7 and controls for correlated residuals by introducing additional factors that explain negligible amounts of response variance. We focused on effect size rather than on adherence to a (virtually unattainable) normative standard such as Rasch homogeneity. For this reason, PCA is generally considered the method of choice in the assessment of measurement instruments for attitudes and personality traits such as the Big Five.8,9
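As a minimal illustration of this kind of check (not the analysis of reference 2; the data below are simulated placeholders and the item count is only assumed for the example), the explained-variance profile of a PCA can be inspected to judge whether a single component dominates:

import numpy as np
from sklearn.decomposition import PCA

# Placeholder data: 200 subjects responding to 26 Likert-type items
# (both numbers are assumptions for illustration only).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 26)).astype(float)

# Standardize the items so that the PCA operates on the correlation matrix.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

pca = PCA()
pca.fit(z)

# Proportion of response variance explained by each component: a dominant
# first component supports treating the questionnaire as one-dimensional,
# while later components absorbing only negligible variance is consistent
# with correlated residuals of little practical importance.
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_[:5], 3))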
If there is reason to believe that our methods were too crude, the usual approach is to compare the fit of a one-dimensional discrete item response model against that of a well-fitting multidimensional model.10 Given the number of parameters to be estimated, multidimensional models require a large sample of subjects. We predict, however, that even when a two-dimensional model is preferred over a one-dimensional one, the correlation between the factors will be too high in most populations to be of much practical interest. If the factors are uncorrelated, the second dimension may instead be an effect of the measurement experiment itself, picking up only small artifacts such as priming, which will generally not have a systematic influence on the A&SQ score. In summary, we do not think that the analysis of Vianya-Estopa et al.1 undermines the practical usefulness of the A&SQ or threatens the meaningfulness of statements based on A&SQ scores.
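As a sketch of such a comparison (the log-likelihoods, parameter counts, and factor correlation below are invented placeholders, not results for the A&SQ), one could contrast the fitted one- and two-dimensional models with a likelihood-ratio test and then inspect the estimated factor correlation:

from scipy.stats import chi2

# Hypothetical fit statistics, for illustration only.
loglik_1d, n_params_1d = -5210.4, 52    # one-dimensional item response model
loglik_2d, n_params_2d = -5180.2, 79    # two-dimensional alternative
factor_correlation = 0.85               # estimated correlation between the two factors

# Likelihood-ratio test of the nested one-dimensional model against the
# two-dimensional model.
lr_stat = 2 * (loglik_2d - loglik_1d)
df = n_params_2d - n_params_1d
p_value = chi2.sf(lr_stat, df)
print(f"LR = {lr_stat:.1f}, df = {df}, p = {p_value:.4f}")

# Even when the two-dimensional model fits significantly better, a factor
# correlation this high leaves the second dimension with little practical value.
print(f"Estimated factor correlation: {factor_correlation:.2f}")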
References
1. Vianya-Estopa M, Elliott DB, Barrett BT. An evaluation of the Amblyopia and Strabismus Questionnaire using Rasch analysis. Invest Ophthalmol Vis Sci. 2010;51:2496–2503.
2. Van de Graaf ES, Felius J, van Kempen-du Saar H. Construct validation of the Amblyopia and Strabismus Questionnaire (A&SQ) by factor analysis. Graefes Arch Clin Exp Ophthalmol. 2009;247:1263–1268.
3. Hobart JC, Cano SJ, Zajicek JP, Thompson AJ. Rating scales as outcome measures for clinical trials in neurology: problems, solutions, and recommendations. Lancet Neurol. 2007;6:1094–1105.
4. Andrich D. A rating formulation for ordered response categories. Psychometrika. 1978;43:561–573.
5. Mellenbergh GJ. Conceptual notes on models for discrete polytomous item responses. Appl Psychol Meas. 1995;19:91–100.
6. Fischer GH. Derivations of the Rasch model. In: Fischer GH, Molenaar IW, eds. Rasch Models: Foundations, Recent Developments and Applications. Berlin: Springer-Verlag; 1995:37.
7. Kelderman H. Measurement exchangeability and normal one-factor models. Biometrika. 2004;91:738–742.
8. Rost J, Carstensen CH, von Davier M. Sind die Big Five Rasch-skalierbar? [Are the Big Five Rasch scalable?] Diagnostica. 1999;45(4):119–127.
9. McCrae RR, Zonderman AB, Costa PT, Bond MH. Evaluating replicability of factors in the Revised NEO Personality Inventory: confirmatory factor analysis versus Procrustes rotation. J Pers Soc Psychol. 1996;70:552–566.
10. Bock RD, Gibbons R, Muraki E. Full-information item factor analysis. Appl Psychol Meas. 1988;12:261–280.