May 2006
Volume 47, Issue 5
Clinical and Epidemiologic Research  |   May 2006
How Evidence-Based Are Publications in Clinical Ophthalmic Journals?
Author Affiliations
  • Timothy Y. Y. Lai
    Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Eye Hospital, Hong Kong, People’s Republic of China; and the Department of Community Medicine, The University of Hong Kong, Hong Kong, People’s Republic of China.
  • Gabriel M. Leung
    Department of Community Medicine, The University of Hong Kong, Hong Kong, People’s Republic of China.
  • Victoria W. Y. Wong
    Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Eye Hospital, Hong Kong, People’s Republic of China.
  • Robert F. Lam
    Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Eye Hospital, Hong Kong, People’s Republic of China.
  • Andy C. O. Cheng
    Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Eye Hospital, Hong Kong, People’s Republic of China.
  • Dennis S. C. Lam
    Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong Eye Hospital, Hong Kong, People’s Republic of China.
Investigative Ophthalmology & Visual Science May 2006, Vol.47, 1831-1838. doi:10.1167/iovs.05-0915
Abstract

Purpose. To evaluate the methodological quality and level of evidence of publications in four leading general clinical ophthalmology journals.

Methods. All 1919 articles published in the American Journal of Ophthalmology, Archives of Ophthalmology, British Journal of Ophthalmology, and Ophthalmology in 2004 were reviewed. The methodological rigor and the level of evidence in the articles were rated according to the McMaster Hedges Project criteria and the Oxford Centre for Evidence-Based Medicine levels of evidence.

Results. Overall, 196 (24.4%) of the 804 publications that were included for assessment met the Hedges criteria. Articles on economic evaluation and those on prognosis achieved the highest passing rates, with 80.0% and 74.4% of articles, respectively, meeting the Hedges criteria. Publications on etiology, diagnosis, and treatment fared less well, with respective passing rates of 28.3%, 20.2%, and 14.7%. Published systematic reviews and randomized controlled trials were uncommon in the ophthalmic literature, at least in these four journals during 2004. According to the Oxford criteria, 57.6% of the articles were classified as level 4 evidence compared with 18.1% classified as level 1. Articles on prognosis had the highest proportion (43.0%) rated as level 1 evidence. Generally, articles that reached the Hedges threshold were rated higher on the level-of-evidence scale (Spearman’s ρ = 0.73; P < 0.001).

Conclusions. The methodological quality of publications in the clinical ophthalmic literature was comparable to that in the literature of other specialties. There was substantial heterogeneity in quality between different types of articles. Future methodological improvements should focus on the areas identified as having the largest deficiencies.

In recent years, there has been increasing demand for, and growing interest among clinicians in, the explicit practice of evidence-based medicine (EBM). EBM has been formally defined as “the conscientious, explicit, and judicious use of the best current evidence in making decisions about the care of individual patients.” 1 Practicing EBM means integrating individual clinical expertise with the best currently available external clinical evidence from systematic research. 1 The concept of EBM is to ground clinical practice in scientific inquiry for the provision of quality care. 2 3 We have found that the proportion of evidence-based interventions in ophthalmology, in a local specialized acute care setting, is comparable with that in other specialties, with approximately 80% of interventions being directly supported by published evidence. 4 Medical journals are valuable sources of information for clinicians practicing EBM, as they are more readily available and up to date than traditional textbooks. However, to practice according to the best evidence, research must have high internal and external validity so that clinicians have solid ground on which to base their decisions. 
Studies have been performed to evaluate the methodological quality of journal articles in different specialties, including general internal medicine, anesthesiology, family medicine, pediatric surgery, and physical therapy. 5 6 7 8 9 10 11 12 Based on standardized criteria for the assessment of the methodological rigor and clinical relevance of research articles, McKibbon et al. 5 demonstrated that only approximately 7% of all articles published in the top 20 clinical journals in general internal medicine passed the prespecified criteria as having high methodological quality and clinical relevance. A similar study in physical therapy showed that only 11% of journal articles met the predefined standard. 6 Moreover, although investigators in several studies have evaluated compliance with methodological standards in ophthalmic publications stratified by study design (e.g., randomized controlled trials [RCTs] or reports on diagnostic tests 13 14 15 16 ), a systematic, comprehensive review using this approach has not yet been performed. We therefore assessed all articles published consecutively during 2004 in four leading general clinical ophthalmology journals, stratified by study type, to determine their relevance and methodological quality for the practice of evidence-based ophthalmology. 
Methods
All articles published from January 1 to December 31, 2004, in the top four ranking journals in general clinical ophthalmology (American Journal of Ophthalmology, Archives of Ophthalmology, British Journal of Ophthalmology, and Ophthalmology) according to the 2003 ISI (Institute for Scientific Information) impact factors were manually reviewed by four of the authors (TYYL, VWYW, RFL, and ACOC). All 48 regular issues of the four monthly journals were reviewed; the two supplementary issues of the American Journal of Ophthalmology published in 2004 were excluded. Each issue of the journal was randomly assigned for assessment by two of the authors independently, and disagreement was settled by consensus. All reviewers were practicing ophthalmologists and had received Master’s level training in EBM methods. 
To evaluate the methodological quality of the articles, we adopted the Hedges Project criteria developed by the Health Information Research Unit of McMaster University. 17 This assessment method was chosen because it is a validated, standardized instrument developed for appraising whether health research meets the standards of high-quality secondary evidence-based journals, such as the ACP Journal Club. 17 Details of the Hedges Project criteria are listed in Table 1, and a summary flowchart of the review process is presented in Figure 1. With the use of these criteria, all studies were classified according to the study format as follows: (1) an original study that reports first-hand data, (2) a review article that aims at summarizing the preexisting literature, (3) a general or miscellaneous article that discusses a topic without original observation, or (4) a case report that reports individualized data. All except general and miscellaneous articles were then assessed to determine whether they were of direct interest to human healthcare, and only articles that dealt with patient care were further evaluated. The purpose of each article was subsequently identified according to the definitions in Table 1, and articles that did not fit into any of the purpose categories were labeled “other.” Articles classified as other and qualitative studies were excluded from the methodological assessment. Each article was evaluated against the methodological rigor items for its corresponding category and rated dichotomously as pass or fail. An article must fulfill all the defined criteria to “pass” its category. In addition to using the Hedges Project criteria to assess methodological rigor, each article was classified according to the Oxford Centre for Evidence-Based Medicine levels of evidence. 18 This classification system ranks the validity of the evidence into a hierarchy, with level 1 being the highest level and level 5 the lowest. 
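The all-or-nothing character of the dichotomous rating can be sketched in a few lines. This is a hypothetical illustration only, not the actual review instrument: the criterion names loosely paraphrase Table 1 for the treatment category, and the article records are invented.

```python
# Hypothetical sketch of the dichotomous Hedges rating: an article "passes"
# its purpose category only if it fulfills every criterion for that category.
# Criterion names loosely paraphrase Table 1 (treatment category).
TREATMENT_CRITERIA = (
    "random_allocation",               # random allocation to comparison groups
    "outcome_assessment_80pct",        # >=80% of entrants in one major analysis
    "analysis_consistent_with_design",
)

def hedges_pass(article, criteria=TREATMENT_CRITERIA):
    """Rate an article dichotomously: True (pass) only if all criteria hold."""
    return all(article.get(c, False) for c in criteria)

# A typical case series fails on randomization; a well-conducted RCT passes.
case_series = {"analysis_consistent_with_design": True}
rct = {c: True for c in TREATMENT_CRITERIA}
```

A single unmet item (here, the case series lacking randomization) is enough to fail the category, which is why pass rates track the rarest criterion.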
The criteria for level of evidence are different for each purpose category. For example, RCTs would be considered the highest level of evidence (level 1) for articles on therapy, prevention, or etiology, whereas cohort studies would be considered the highest level of evidence for articles on prognosis and diagnosis. 
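The purpose-dependent ranking can be pictured as a lookup table. The entries below are a simplified, hypothetical reduction of the Oxford scheme (the real 2001 table distinguishes sublevels such as 1a/1b and attaches quality caveats), included only to show that the design earning level 1 differs by category.

```python
# Simplified, hypothetical reduction of the Oxford levels of evidence:
# the study design that earns level 1 depends on the purpose category.
OXFORD_LEVEL = {
    "therapy":   {"rct": 1, "cohort": 2, "case_control": 3,
                  "case_series": 4, "expert_opinion": 5},
    "prognosis": {"inception_cohort": 1, "retrospective_cohort": 2,
                  "case_series": 4, "expert_opinion": 5},
}

def oxford_level(purpose, design):
    """Look up the (simplified) level for a purpose/design pair."""
    return OXFORD_LEVEL[purpose][design]
```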
Statistical analyses were performed on computer (SPSS ver. 11.5; SPSS, Inc., Chicago, IL). The number and proportion of articles that passed or failed each of the Hedges Project criteria and their Oxford level of evidence for the purpose category were calculated. The nonparametric Mann-Whitney test was used to assess the differences in the Oxford level of evidence of the articles that passed or failed, according to the Hedges Project criteria. Results of these two sets of standards were also analyzed with Spearman rank correlation tests. P ≤ 0.05 was considered statistically significant. 
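The rank-correlation step can be reproduced without SPSS; the sketch below implements Spearman's ρ (the Pearson correlation of mid-rank vectors) in plain Python on invented data. Note the sign convention: with Oxford levels coded 1 = best, a Hedges pass aligns with a lower level number, so ρ comes out negative under this coding; reversing either scale yields the positive agreement reported in the Results.

```python
# Spearman's rho = Pearson correlation of the rank vectors (mid-ranks for ties).
# hedges: 1 = passed the Hedges criteria; oxford: level 1 (best) .. 5 (worst).
# Both vectors are invented, for illustration only.
hedges = [1, 1, 1, 0, 0, 0, 0, 1]
oxford = [1, 1, 2, 4, 4, 3, 5, 2]

def ranks(xs):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1            # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            out[order[k]] = mid
        i = j + 1
    return out

def spearman_rho(x, y):
    """Pearson correlation computed on the rank vectors of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```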
Results
Classification of Article Format
A total of 1919 articles published in the four selected journals during 2004 were assessed. There were 1043 (54.4%) original studies, 26 (1.4%) review articles, and 421 (21.9%) case reports (Table 2) . Four hundred twenty-nine (22.4%) articles were classified as general or miscellaneous and were excluded from the methodological assessment (e.g., editorials and letters to the editor). A further 128 (6.7%) articles were excluded, as they did not pertain to human healthcare and hence were not immediately relevant to clinical practice (e.g., animal or laboratory studies). 
Purpose of the Articles
Table 3 summarizes the remaining 1362 articles with relevance to human healthcare according to the purpose of the article. The most common purpose was “other,” with 530 (38.9%) articles that failed to meet any purpose definition in the Hedges Project criteria, followed by treatment, with 505 (37.1%) articles. An example of an article classified as other would be a descriptive study reporting the clinical features of a condition. There were 87 to 104 (6.4%–7.6%) articles in each category evaluating the etiological, prognostic, and diagnostic aspects of various ophthalmic diseases. Studies concerned with qualitative outcomes were uncommon. Similarly infrequent were economic studies: only five (0.4%) articles addressed the economic aspects of ophthalmology (all published in the British Journal of Ophthalmology), whereas two (0.1%) articles were identified as assessing clinical prediction guides. 
Assessment by Hedges Project Criteria and Oxford Level of Evidence Stratified by Purpose of the Article
Of the 804 publications evaluated, 196 (24.4%) passed the assessment (Table 4). The passing rate was highest for articles on economic evaluation and lowest for articles on clinical prediction guides, albeit with five or fewer entries each, so that the small denominators might have confounded any direct comparison with the more common categories. Of the five publications on the economics of healthcare, four (80%) passed the assessment and one (20%) failed because of the lack of a sensitivity analysis. Both articles on clinical prediction guides failed to fulfill the Hedges Project criteria because the guide was not validated on another set of real patients (test set). 
The commonest type of ophthalmic publication evaluated for methodological rigor concerned therapy, in which 73 (14.7%) of 496 articles passed the assessment. The main reason for failure to meet the criteria was the lack of random allocation of participants to treatment groups, in 412 (83.1%) articles. Another criterion that treatment studies failed to fulfill was follow-up of at least 80% of patients entering the investigation (n = 23; 4.6%). 
Methodological assessment of studies on etiology showed 28 (28.3%) of the 99 articles passed. Areas of methodological standards in which the etiology articles failed to reach the specified criteria included 50 (50.5%) articles without prospective collection of data, 43 (43.4%) without blinding of observers to exposure, and 35 (35.4%) without clearly identified comparison groups. 
As for the 86 publications on prognosis, the passing rate was high, with 64 (74.4%) articles passing the assessment. The articles that failed to meet the criteria on prognosis included 12 (14.0%) publications without an inception cohort of individuals initially free of the outcome of interest and 11 (12.8%) articles without follow-up of at least 80% of patients until a major study end point was reached. 
There were 99 articles on diagnostic studies, of which 20 (20.2%) publications passed. The commonest reason for failure was the lack of interpretation of diagnostic standard without knowledge of test result and vice versa (blinding), which occurred in 73 (73.7%) articles. Other reasons included failure to include a sufficient spectrum of participants in 38 (38.4%) articles, the lack of an objective “gold” diagnostic standard for the diagnosis in 24 (24.2%), failure to conduct both the new test and diagnostic standard on participants in 26 (26.3%), and analysis inconsistent with the study design in 4 (4.0%). 
Of the 25 review articles that were relevant to human healthcare, eight were excluded, as their purposes were classified as other. Seven (41.2%) of the remaining 17 review articles passed the assessment. Articles that failed to meet the criteria included nine (52.9%) without an explicit statement of the inclusion and exclusion criteria, eight (47.1%) without description of the methods, and five (29.4%) without including one or more articles that met other criteria listed for the purpose category. 
Regarding the level of evidence assigned to the selected studies, two publications on clinical prediction guides were excluded, as the assessment of clinical prediction is currently unavailable under the classification system devised by the Oxford Centre for Evidence-Based Medicine. Analyses showed that articles that passed according to the Hedges Project criteria had a significantly higher level of evidence compared with articles that failed, both for the overall group (P < 0.001) and for individual purpose categories (all P < 0.05). Significant correlation between the Hedges Project criteria and the Oxford system was also found (Spearman’s ρ = 0.73; P < 0.001). 
Most of the assessed publications were classified as level 4 evidence, with 462 (57.6%) articles. One hundred forty-five (18.1%) articles were rated as level 1 evidence, and five (0.6%) were rated as level 5 (i.e., articles based on expert opinion without explicit critical appraisal). Similar to the Hedges Project criteria passing rate, the level-of-evidence rating was high for publications on prognosis, with 37 (43.0%) and 31 (35.6%) of the 86 articles rated as level 1 and 2 evidence, respectively. In contrast, etiological studies generally had a much lower level of evidence, with two (2.0%) and six (6.0%) articles rated as level 1 and 2 evidence, respectively. Studies on diagnosis were intermediate, with 28 (28.3%) and 6 (6.1%) articles rated as level 1 and 2 evidence, respectively. For articles on treatment, nearly all were classified as level 4 evidence, as studies on therapy were mostly case series. 
Assessment by Hedges Project Criteria and Oxford Level of Evidence Stratified by the Publishing Journal
Of the 804 publications included in the methodological assessment, the British Journal of Ophthalmology (29.0%) had the highest proportion of articles that passed, followed by Ophthalmology and the Archives of Ophthalmology with 28.6% and 26.4%, respectively (Table 5) . The American Journal of Ophthalmology had the lowest proportion, with 18.4% of articles passing the criteria. Again, a significant positive relationship was found between the articles that passed the Hedges Project criteria in each journal and their Oxford level of evidence (all P < 0.001). It was found that articles published in the Archives of Ophthalmology had the highest level of evidence, with 26.1% of articles rated as level 1, followed by Ophthalmology with 21.3% of articles at level 1. Most of the articles evaluated (>50% in each journal) were level 4. 
Discussion
For clinicians to practice EBM, the literature should be based on good-quality research, as publications with poor methodological quality may provide misleading information and misguide clinical practice, especially when most busy clinicians do not have the time or, frankly, the necessary skills in epidemiology or biostatistics to appraise such research critically. The quality of publications in ophthalmic journals has apparently been improving in recent years, as more emphasis is placed on the quality of research publications. For instance, Ang et al. 19 reported that the proportion of prospective studies published in the American Journal of Ophthalmology and British Journal of Ophthalmology increased from 1% to 2% in 1980 to approximately 10% in 1999. This suggests that, using the rigor of study design as a proxy, the quality of publications may have been gradually improving over the years. Using the Hedges Project criteria, we found in the present study that 196 (24.4%) of the 804 assessed publications in the four ophthalmic journals in 2004 passed the standardized methodological assessment. This rate is similar to those in previous studies in general internal medicine and physical therapy, in which approximately 10% to 20% of publications fulfilled the criteria. 5 6 Rated by the Oxford Centre for Evidence-Based Medicine levels of evidence, 18.1% of the assessed articles were classified as level 1. Our findings showed that articles that passed the Hedges Project criteria had a significantly higher level of evidence than did those that failed. More than 80% of articles that passed the Hedges Project criteria were level 1 or 2 evidence, compared with 7.6% of articles that failed the criteria. This suggests that the Oxford and Hedges methods are consistent with each other in the assessment of evidence. 
However, despite having a high level of evidence, some articles failed to meet the Hedges Project criteria because of differences in the sets of criteria used. For example, 12 articles on diagnosis were rated as level 1 evidence but failed the Hedges Project criteria assessment because of lack of interpretation of the test without knowledge of the diagnostic (“gold”) standard result and vice versa. Therefore, both methods of evaluating evidence may complement each other and enhance the validity assessment of the publications. 
The commonest topic of the research articles identified in our study was treatment, comprising more than half of the assessed articles. However, only 14.7% of these articles passed the methodological assessment, the main reason for failing being the lack of randomization (>80% of studies). This finding confirms that RCTs on therapy are uncommon in the ophthalmic literature, as has been observed for other clinical specialties. 6 7 8 9 10 11 12 Ideally, all trials on therapy should be RCTs, since this design is subject to the least bias. Lauritsen and Moller 7 found that only approximately 20% of articles published in five leading anesthesia journals were RCTs, whereas Thomas et al. 9 reported that only 6% of studies published in three primary care journals within a 5-year period were RCTs. In another review of articles published in two pediatric surgery journals in 1998, only 3 of the 111 studies identified were RCTs. 11 The proportion of RCTs was also low in sports medicine journals, in which only 9.5% of original research articles were RCTs. 8 Nonetheless, descriptive studies such as case reports or small case series may be the only type of evidence available for uncommon conditions. Therefore, it is unreasonable to expect that any journal would obtain a perfect score on the methodological quality assessments. RCTs thus cannot be the only basis for making decisions about patient care. Results from RCTs should also be weighed with physicians’ experience, and this combination should form the essence of the practice of EBM. 
Our assessment of diagnostic studies showed that 20 (20.2%) of the 99 publications passed the Hedges Project criteria. Nearly three fourths of the articles on diagnosis failed because they lacked blinded interpretation of the test result without knowledge of the diagnostic standard, and vice versa, which is important in preventing biases that may overestimate the accuracy of the diagnostic test. Moreover, nearly 40% of articles on diagnosis did not include a sufficient spectrum of participants and may therefore be subject to selection or recruitment biases. 14 Other areas in which studies failed to meet the criteria included the lack of an objective gold standard for diagnosis and/or inconsistent performance of the diagnostic standard in all participants. Without the reference standard, it would be difficult for readers to assess the clinical applicability of the diagnostic test under investigation. 
For etiological studies, cohort and case–control studies are usually the best study designs, as it is usually unethical or inappropriate to perform RCTs. The passing rate of these studies was 28.3%, with areas of methodological deficiencies including the lack of prospective recruitment of subjects, absence of blinding of observers, and the lack of clearly identified comparison groups. Inclusion of these methodological aspects is important, as it would allow readers to assess the likelihood of potential biases and the robustness of any associations. 
Our study showed that studies on prognosis had a high passing rate of 74.4%, indicating that most were based on an appropriate inception cohort (i.e., patients who were initially free of the outcome of interest) and had a high follow-up rate. 
Studies on healthcare economics in ophthalmology also had a high passing rate, with four (80%) of the five articles fulfilling the criteria. The one study that failed did so because it lacked a sensitivity analysis. Although healthcare economics studies are rare in the ophthalmic literature, they are increasing in popularity throughout the healthcare literature, with more economics studies being published in recent years. 20 The high quality of research in healthcare economics may be associated with the special section titled “value-based ophthalmology” in the British Journal of Ophthalmology, which provides a specific channel for researchers to report on economic aspects of ophthalmology. With increasing scarcity of healthcare resources, economic evaluation is becoming more relevant to clinicians, and more research in this area is desirable for enhancing the practice of EBM. 
In this study, we found that review articles were uncommon in the four selected journals, with only 26 published in 2004. Most failed the methodological assessment because they lacked an explicit statement of inclusion and exclusion criteria. Review articles, especially systematic reviews, have the potential to serve as a valuable source of evidence for clinicians. The publication policies of individual journals might have affected the methodological standards of the review articles. For example, Ophthalmology has a special section for evidence-based reviews on ophthalmic technology assessment, and this may enhance the availability of systematic reviews in the ophthalmic literature. In contrast, the “perspective” section in the British Journal of Ophthalmology mainly publishes narrative reviews, which do not have explicit inclusion and exclusion criteria. 
One shortcoming of this study was the relatively short review period: we assessed only articles published within 1 year. Because we selectively assessed only four general clinical ophthalmology journals, the results should not be extrapolated to the rest of the ophthalmic literature. The actual quality of ophthalmic publications might differ significantly from our estimates, as the journals we assessed were those with the highest impact factors in general clinical ophthalmology. Another limitation was related to the adoption of the Hedges Project criteria for methodological assessment. Although the Hedges Project criteria allow comprehensive assessment of various types of studies, they do not allow in-depth assessment of some study designs. The quality of RCTs and studies on diagnostic tests was not assessed as comprehensively as by the items listed in the CONSORT (Consolidated Standards of Reporting Trials) and STARD (Standards for Reporting of Diagnostic Accuracy) statements, 21 22 which were developed to enhance the quality of reporting in RCTs and diagnostic studies, respectively. Important aspects of methodology and reporting, such as randomization methods, adequacy of allocation concealment, blinding, sample size calculations, and intention-to-treat analysis, are commonly underreported in RCTs. 23 As for studies on diagnostic tests in ophthalmology, it is crucial for researchers to report the important details of the tests, as these can significantly alter the research findings. For example, failure to report indeterminate results, as listed in the STARD statement, was found to cause significant overestimation of the performance of the diagnostic test. 24 We also classified nearly 40% of articles as “other,” and the methodological rigor of these articles could not be assessed with the Hedges Project criteria or the Oxford level-of-evidence classification. 
Many publications, particularly case reports, were descriptive, did not fit into any of the purpose definitions, and thus could not be assessed with either system. This high rate of exclusion was also observed in a previous study in which the Hedges Project criteria were used to assess publication methodology. 6 
In conclusion, our study showed that the methodological quality of the ophthalmic literature was comparable to the standards in other areas of medicine. Methodological rigor varied considerably between articles of different purposes, and this study has identified areas for improvement in each category. Articles on economics and prognosis had much higher passing rates than those on etiology, diagnosis, or treatment. Our findings provide a baseline for further investigation into the quality standards of ophthalmic research. We hope that more articles with better methodological standards will be published in the future, allowing the practice of EBM to be supported by higher quality clinical research. 
 
Table 1.
 
Methodological Rigor Assessment According to the Hedges Project Criteria
Purpose Category Definition Items of Methodological Rigor Assessment
Etiology Content pertains directly to determining if there is an association between an exposure and a disease or condition i. Observations concerned with the relationship between exposures and putative clinical outcomes;
ii. Data collection is prospective;
iii. Clearly identified comparison group(s);
iv. Blinding of observers of outcome to exposure.
Prognosis Content pertains directly to the prediction of the clinical course or natural history of a disease or condition i. Inception cohort initially free of the outcome of interest;
ii. Follow-up of 80% or more patients until the occurrence of a major study end point or to the end of the study;
iii. Analysis consistent with study design.
Diagnosis Content pertains directly to using a tool to arrive at a diagnosis of a disease or condition i. Inclusion of a spectrum of participants;
ii. Objective diagnostic (“gold”) standard or current clinical standard for diagnosis;
iii. Participants received both the new test and some form of the diagnostic standard;
iv. Interpretation of diagnostic standard without knowledge of test result and vice versa;
v. Analysis consistent with study design.
Treatment Content pertains directly to an intervention for therapy (including adverse effects studies), prevention, rehabilitation, quality improvement or continuing medical education. i. Random allocation of participants to comparison groups;
ii. Outcome assessment of at least 80% of those entering the investigation accounted for in one major analysis at any given follow-up assessment;
iii. Analysis consistent with study design.
Economics Content pertains directly to the economics of a health care issue. i. The research question is a comparison of alternatives;
ii. Alternative services or activities compared on outcomes produced (effectiveness) and resources consumed (costs);
iii. Evidence of effectiveness must be from a study of real patients that meets the above-noted criteria for diagnosis, treatment, quality improvement, or a systematic review article;
iv. Effectiveness and cost estimates based on individual patient data (micro-economics);
v. Results presented in terms of the incremental or additional costs and outcomes of one intervention over another;
vi. Sensitivity analysis if there is uncertainty.
Clinical prediction guide Content pertains directly to the prediction of some aspect of a disease or condition. i. Guide is generated in one or more sets of real patients (training set);
ii. Guide is validated in another set of real patients (test set).
Review articles Any full text bannered “review,” “overview,” or “meta-analysis” in the title or section heading, or indicating in the text that the intention was to review, summarize, or highlight the literature on a particular topic i. Statement of the clinical topic;
ii. Explicit statement of the inclusion and exclusion criteria;
iii. Description of the methods;
iv. One or more articles in the review must meet the above-noted criteria.
Qualitative Content relates to how people feel or experience certain situations. Excluded from methodological rigor assessment
Other Content of the study does not fit into any of the other definitions Excluded from methodological rigor assessment
Figure 1.
 
Summary flow diagram of the review process.
Table 2.
 
Summary of the Study Format of the 1919 Articles Assessed
Journal Original Study Review Case Report General and Miscellaneous* Total
American Journal of Ophthalmology 301 (47.7) 11 (1.7) 179 (28.4) 140 (22.2) 631 (100)
Archives of Ophthalmology 160 (42.6) 4 (1.0) 116 (30.9) 96 (25.5) 376 (100)
British Journal of Ophthalmology 306 (64.8) 5 (1.1) 88 (18.6) 73 (15.5) 472 (100)
Ophthalmology 276 (62.7) 6 (1.4) 38 (8.6) 120 (27.3) 440 (100)
Total 1043 (54.4) 26 (1.4) 421 (21.9) 429 (22.4) 1919 (100)
Table 3. Study Purpose of the 1362 Articles Relevant to Human Health Care for Each Journal
Publishing Journal Study Format Etiology Prognosis Diagnosis Treatment Economics Clinical Prediction Guide Qualitative Other* Total
American Journal of Ophthalmology All 36 (7.9) 28 (6.2) 40 (8.8) 189 (41.7) 0 (0.0) 1 (0.2) 8 (1.8) 151 (33.3) 453
Original study 20 (7.6) 27 (10.3) 20 (7.6) 119 (45.2) 0 (0.0) 1 (0.4) 8 (3.0) 68 (25.9) 263
Review 0 (0.0) 0 (0.0) 3 (27.3) 2 (18.2) 0 (0.0) 0 (0.0) 0 (0.0) 6 (54.5) 11
Case report 16 (8.9) 1 (0.6) 17 (9.5) 68 (38.0) 0 (0.0) 0 (0.0) 0 (0.0) 77 (43.0) 179
Archives of Ophthalmology All 13 (5.3) 15 (6.1) 14 (5.7) 77 (31.2) 0 (0.0) 1 (0.4) 7 (2.8) 120 (48.6) 247
Original study 9 (7.1) 15 (11.8) 11 (8.7) 45 (35.4) 0 (0.0) 1 (0.8) 7 (5.5) 39 (30.7) 127
Review 0 (0.0) 0 (0.0) 0 (0.0) 2 (50.0) 0 (0.0) 0 (0.0) 0 (0.0) 2 (50.0) 4
Case report 4 (3.4) 0 (0.0) 3 (2.6) 30 (25.9) 0 (0.0) 0 (0.0) 0 (0.0) 79 (68.1) 116
British Journal of Ophthalmology All 32 (9.1) 21 (6.0) 20 (5.7) 115 (32.7) 5 (1.4) 0 (0.0) 10 (2.8) 149 (42.3) 352
Original study 25 (9.6) 20 (7.7) 17 (6.5) 80 (30.8) 5 (1.9) 0 (0.0) 9 (3.5) 104 (40.0) 260
Review 1 (20.0) 1 (20.0) 0 (0.0) 3 (60.0) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 5
Case report 6 (6.9) 0 (0.0) 3 (3.4) 32 (36.8) 0 (0.0) 0 (0.0) 1 (1.1) 45 (51.7) 87
Ophthalmology All 20 (6.5) 23 (7.4) 30 (9.7) 124 (40.0) 0 (0.0) 0 (0.0) 3 (1.0) 110 (35.4) 310
Original study 16 (6.0) 23 (8.6) 27 (10.1) 112 (41.9) 0 (0.0) 0 (0.0) 3 (1.0) 86 (32.2) 267
Review 1 (20.0) 0 (0.0) 2 (40.0) 2 (40.0) 0 (0.0) 0 (0.0) 0 (0.0) 0 (0.0) 5
Case report 3 (7.9) 0 (0.0) 1 (2.6) 10 (26.3) 0 (0.0) 0 (0.0) 0 (0.0) 24 (63.2) 38
Total All 101 (7.4) 87 (6.4) 104 (7.6) 505 (37.1) 5 (0.4) 2 (0.1) 28 (2.1) 530 (38.9) 1362
Original study 70 (7.6) 85 (9.3) 75 (8.2) 356 (38.8) 5 (0.5) 2 (0.2) 27 (2.9) 297 (32.4) 917
Review 2 (8.0) 1 (4.0) 5 (20.0) 9 (36.0) 0 (0.0) 0 (0.0) 0 (0.0) 8 (32.0) 25
Case report 29 (6.9) 1 (0.2) 24 (5.7) 140 (33.3) 0 (0.0) 0 (0.0) 1 (0.2) 225 (53.6) 420
Table 4. Articles That Met the Hedges Project Criteria and Their Level of Evidence for Each Purpose Category
Study Purpose Assessment by Hedges Project Criteria Articles (%) Level of Evidence According to Oxford Centre for Evidence-Based Medicine P *
Level 1 Level 2 Level 3 Level 4 Level 5
Etiology (n = 99) Passed 28 (28.3) 2 (7.1) 2 (7.1) 24 (85.7) 0 (0.0) 0 (0.0) <0.001
Failed 71 (71.7) 0 (0.0) 4 (5.6) 31 (43.7) 36 (50.7) 0 (0.0)
Prognosis (n = 86) Passed 64 (74.4) 35 (54.7) 24 (37.5) 0 (0.0) 5 (7.8) 0 (0.0) <0.001
Failed 22 (25.6) 2 (9.1) 7 (31.8) 0 (0.0) 13 (59.1) 0 (0.0)
Diagnosis (n = 99) Passed 20 (20.2) 16 (80.0) 3 (15.0) 0 (0.0) 1 (5.0) 0 (0.0) <0.001
Failed 79 (79.8) 12 (15.2) 3 (3.8) 24 (30.4) 40 (50.6) 0 (0.0)
Treatment (n = 496) Passed 73 (14.7) 71 (97.3) 1 (1.4) 0 (0.0) 1 (1.4) 0 (0.0) <0.001
Failed 423 (85.3) 1 (0.2) 16 (3.8) 43 (10.2) 362 (85.6) 1 (0.2)
Economics (n = 5) Passed 4 (80.0) 0 (0.0) 4 (100.0) 0 (0.0) 0 (0.0) 0 (0.0) 0.046
Failed 1 (20.0) 0 (0.0) 0 (0.0) 0 (0.0) 1 (100.0) 0 (0.0)
Clinical prediction guide (n = 2) Passed 0 (0.0) N/A†
Failed 2 (100.0) N/A†
Review article (n = 17) Passed 7 (41.2) 5 (71.4) 1 (14.3) 1 (14.3) 0 (0.0) 0 (0.0) 0.003
Failed 10 (58.8) 1 (10.0) 0 (0.0) 2 (20.0) 3 (30.0) 4 (40.0)
Total (n = 804) Passed 196 (24.4) 129 (65.8) 35 (17.9) 25 (12.8) 7 (3.6) 0 (0.0) <0.001
Failed 608 (75.6) 16 (2.6) 30 (5.0) 100 (16.5) 455 (75.1) 5 (0.8)
Overall 804 (100.0)† 145 (18.1) 65 (8.1) 125 (15.6) 462 (57.6) 5 (0.6)
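The P values in Table 4 compare the level-of-evidence distributions of articles that passed versus failed the Hedges criteria. As a sketch only (the exact statistical test used is not stated in this excerpt; a Pearson chi-square test of independence is assumed), the "Total" row comparison can be reproduced in pure Python:

```python
import math

# Table 4 "Total" row: Oxford CEBM level 1-5 counts for articles that
# passed vs. failed the Hedges criteria. The 2 clinical prediction guide
# articles with no assignable level are excluded, so n = 802, not 804.
passed = [129, 35, 25, 7, 0]
failed = [16, 30, 100, 455, 5]

table = [passed, failed]
row_tot = [sum(r) for r in table]
col_tot = [sum(c) for c in zip(*table)]
grand = sum(row_tot)

# Pearson chi-square statistic: sum over cells of (O - E)^2 / E,
# where E is the expected count under independence.
chi2 = sum(
    (obs - row_tot[i] * col_tot[j] / grand) ** 2 / (row_tot[i] * col_tot[j] / grand)
    for i, row in enumerate(table)
    for j, obs in enumerate(row)
)
dof = (len(table) - 1) * (len(col_tot) - 1)  # (2 - 1) * (5 - 1) = 4

# The chi-square survival function has a closed form for 4 degrees of freedom.
p = math.exp(-chi2 / 2) * (1 + chi2 / 2)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P < 0.001: {p < 0.001}")
```

The closed-form tail probability used here is specific to 4 degrees of freedom; a general implementation would use `scipy.stats.chi2_contingency` instead.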
Table 5. Articles That Met the Hedges Project Criteria and Their Level of Evidence by Journal
Journal Assessment by Hedges Project Criteria Articles (%) Level of Evidence According to Oxford Centre for Evidence-Based Medicine P *
Level 1 Level 2 Level 3 Level 4 Level 5
American Journal of Ophthalmology (n = 631) Total assessed 293 (46.4) 36 (12.3) 18 (6.1) 45 (15.4) 192 (65.5) 2 (0.7)
 Passed 54 (18.4) 30 (55.6) 12 (22.2) 10 (18.5) 2 (3.7) 0 (0.0) <0.001
 Failed 239 (81.6) 6 (2.5) 6 (2.5) 35 (14.6) 190 (79.5) 2 (0.8)
Excluded† 338 (53.6) N/A
Archives of Ophthalmology (n = 376) Total assessed 119 (31.6) 31 (26.1) 11 (9.2) 16 (13.4) 61 (51.3) 0 (0.0)
 Passed 34 (28.6) 26 (76.5) 7 (20.6) 1 (2.9) 0 (0.0) 0 (0.0) <0.001
 Failed 85 (71.4) 5 (5.9) 4 (4.7) 15 (17.6) 61 (71.8) 0 (0.0)
Excluded† 257 (68.4) N/A
British Journal of Ophthalmology (n = 472) Total assessed 193 (40.9) 36 (18.7) 15 (7.8) 32 (16.6) 107 (55.4) 3 (1.5)
 Passed 56 (29.0) 34 (60.7) 8 (14.3) 11 (19.6) 3 (5.4) 0 (0.0) <0.001
 Failed 137 (71.0) 2 (1.5) 7 (5.1) 21 (15.3) 104 (74.9) 3 (2.2)
Excluded† 279 (59.1) N/A
Ophthalmology (n = 440) Total assessed 197 (44.8) 42 (21.3) 21 (8.1) 32 (16.2) 102 (51.8) 0 (0.0)
 Passed 52 (26.4) 39 (75.0) 8 (15.4) 3 (5.8) 2 (3.8) 0 (0.0) <0.001
 Failed 145 (73.6) 3 (2.1) 13 (9.0) 29 (20.0) 100 (69.0) 0 (0.0)
Excluded† 243 (55.2) N/A
The authors thank Brian Haynes and Nancy Wilczynski of McMaster University for providing us with the details of the complete Hedges Project criteria for use in this study. 
Sackett DL, Rosenberg WC, Muir Gray JA, et al. Evidence-based medicine: what it is and what it isn’t. BMJ. 1996;312:71–72.
Leung GM. Evidence-based practice revisited. Asia Pac J Public Health. 2001;13:116–121.
Slawson DC, Shaughnessy AF. Using “medical poetry” to remove the inequities in health care delivery. J Fam Pract. 2001;50:51–56.
Lai TY, Wong VW, Leung GM. Is ophthalmology evidence-based? A clinical audit of the emergency unit of a regional eye hospital. Br J Ophthalmol. 2003;87:385–390.
McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med. 2004;2:33.
Miller PA, McKibbon KA, Haynes RB. A quantitative analysis of research publications in physical therapy journals. Phys Ther. 2003;83:123–131.
Lauritsen J, Moller AM. Publications in anesthesia journals: quality and clinical relevance. Anesth Analg. 2004;99:1486–1491.
Bleakley C, MacAuley D. The quality of research in sports journals. Br J Sports Med. 2002;36:124–125.
Thomas T, Fahey T, Somerset M. The content and methodology of research papers published in three United Kingdom primary care journals. Br J Gen Pract. 1998;48:1229–1232.
Schumm LP, Fisher JS, Thisted RA, Olak J. Clinical trials in general surgical journals: are methods better reported? Surgery. 1999;125:41–45.
Thakur A, Wang EC, Chiu TT, et al. Methodology standards associated with quality reporting in clinical studies in pediatric surgery journals. J Pediatr Surg. 2001;36:1160–1164.
Merenstein J, Rao G, D’Amico F. Clinical research in family medicine: quantity and quality of published articles. Fam Med. 2003;35:284–288.
Scherer RW, Crawley B. Reporting of randomized clinical trial descriptors and use of structured abstracts. JAMA. 1998;280:269–272.
Harper R, Reeves B. Compliance with methodological standards when evaluating ophthalmic diagnostic tests. Invest Ophthalmol Vis Sci. 1999;40:1650–1657.
Sanchez-Thorin JC, Cortes MC, Montenegro M, Villate N. The quality of reporting of randomized clinical trials published in Ophthalmology. Ophthalmology. 2001;108:410–415.
Siddiqui MA, Azuara-Blanco A, Burr J. The quality of reporting of diagnostic accuracy studies published in ophthalmology journals. Br J Ophthalmol. 2005;89:261–265.
Wilczynski NL, McKibbon KA, Haynes RB. Enhancing retrieval of best evidence for health care from bibliographic databases: calibration of the hand search of the literature. Medinfo. 2001;10:390–393.
Oxford Centre for Evidence-Based Medicine. Levels of Evidence. May 2001. Available at: http://www.cebm.net/levels_of_evidence.asp. Accessed March 14, 2006.
Ang A, Tong L, Bhan A. Analysis of publication trends in two international renowned ophthalmology journals. Br J Ophthalmol. 2001;85:1497–1498.
Brown MM, Brown GC. Value based medicine. Br J Ophthalmol. 2004;88:979.
Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357:1191–1194.
Bossuyt PM, Reitsma JB, Bruns DE, et al. Standards for Reporting of Diagnostic Accuracy: toward complete and accurate reporting of studies of diagnostic accuracy—the STARD initiative. Am J Clin Pathol. 2003;119:18–22.
Altman DG. Poor-quality medical research: what can journals do? JAMA. 2002;287:2765–2767.
Reeves BC. Evidence about evidence. Br J Ophthalmol. 2005;89:253–254.