Abstract
Purpose:
The pediatric ophthalmic examination is a challenge to ophthalmologists at any level of training. We previously introduced a new teaching tool, the PEAR, to provide structured feedback to residents, fellows and medical students on conducting a pediatric ophthalmic examination. Based on our ongoing experience with using the PEAR, we developed a revised version, which includes an examination sheet that is completed by the trainee. The purpose of the current study was to establish face and content validity of this new PEAR toolkit.
Methods:
The two-page Pediatric Examination Assessment Rubric (PEAR) toolkit was sent to 14 content experts (pediatric ophthalmologists at academic medical centers) for their review and comments. The faculty members were encouraged to use the rubric in a clinical setting and provide feedback on its content and structure.
Results:
We received feedback from 10 content experts. All 10 provided positive comments regarding the rubric and examination sheet, including their expert opinions that both instruments cover the content areas necessary to optimally teach and evaluate a comprehensive pediatric ophthalmic examination. Four experts suggested revisions to the rubric, while three experts suggested revisions to the trainee examination sheet. The substantive revisions included changing “occludes untested eye” to “uses age-appropriate occluder (e.g., adhesive patch)”, changing a rating for the binocular ophthalmoscopy component of the examination from “performs with good form” to “chooses an appropriate degree of illumination and spot size”, adding to the rapport section a checklist item on properly sanitizing hands, and adding a section on measuring the current glasses prescription using a lensometer. Other suggested revisions were minor wording changes. The experts’ comments were incorporated into the new PEAR toolkit, establishing face and content validity. The PEAR toolkit is displayed in Figures 1 and 2.
Conclusions:
With feedback from content experts, a new PEAR toolkit with face and content validity was developed. A study to determine interrater reliability of the PEAR toolkit is ongoing.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.