April 2009
Volume 50, Issue 13
ARVO Annual Meeting Abstract  |   April 2009
Real-time Image Quality Feedback for Fundus Camera Photography
Author Affiliations & Notes
  • B. Davis
    VisionQuest Biomedical, Albuquerque, New Mexico
  • G. Heileman
    Elec & Comp Eng,
    University of New Mexico, Albuquerque, New Mexico
  • M. Pattichis
    University of New Mexico, Albuquerque, New Mexico
  • S. Murillo
    VisionQuest Biomedical, Albuquerque, New Mexico
  • E. S. Barriga
    VisionQuest Biomedical, Albuquerque, New Mexico
  • P. Soliz
VisionQuest Biomedical, Albuquerque, New Mexico
  • Footnotes
    Commercial Relationships  B. Davis, None; G. Heileman, None; M. Pattichis, None; S. Murillo, None; E.S. Barriga, None; P. Soliz, None.
  • Footnotes
    Support  R41EY018971
Investigative Ophthalmology & Visual Science April 2009, Vol.50, 327. doi:
      B. Davis, G. Heileman, M. Pattichis, S. Murillo, E. S. Barriga, P. Soliz; Real-time Image Quality Feedback for Fundus Camera Photography. Invest. Ophthalmol. Vis. Sci. 2009;50(13):327.

      © ARVO (1962-2015); The Authors (2016-present)

      ×
  • Supplements
Abstract

Purpose: To demonstrate a technique for real-time assessment of retinal image quality that gives the photographer immediate feedback on the technical source of poor image quality.

Methods: The dataset consisted of 1,454 ETDRS field two digital images from different Topcon cameras. Patients had been imaged both with and without mydriasis. Image formats varied from 1000 x 1000 pixels to 2048 x 1536 pixels. Images were graded pass/fail by an ophthalmologist or an ophthalmic technician. One hundred mathematical features were extracted. These features represented the texture, color balance, contrast, and sharpness of a region, and were calculated for seven regions of the image centered on the fovea. The features were then processed to identify those most important for correctly classifying an image as pass or fail in image quality. Ground truth for the automatic image quality classifier was provided by the pass/fail grades of the human analysts. Image quality was also assessed in terms of being out of focus or over-/under-exposed. Image artifacts were not explicitly identified. A form of regression was used for the classifier model.
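The region-based features described above can be illustrated with a minimal sketch. The code below is not the authors' implementation: the fovea `center`, the region `radii`, and the specific feature set (RMS contrast, mean intensity for exposure, per-channel means for color balance, variance of a Laplacian response for sharpness) are all illustrative assumptions, computed here over concentric regions of the image.

```python
import numpy as np

def laplacian_map(gray):
    """4-neighbor Laplacian response; its variance is a common sharpness proxy."""
    return (-4 * gray[1:-1, 1:-1]
            + gray[:-2, 1:-1] + gray[2:, 1:-1]
            + gray[1:-1, :-2] + gray[1:-1, 2:])

def region_features(rgb, center, radii):
    """Feature vector over concentric regions around `center`.

    rgb: HxWx3 float array in [0, 1]. `center` stands in for the detected
    fovea and `radii` for the region boundaries -- both are illustrative
    assumptions, not the paper's parameters.
    """
    gray = rgb.mean(axis=2)
    lap = laplacian_map(gray)
    h, w = gray.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - center[0], xx - center[1])
    feats, inner = [], 0.0
    for r in radii:
        ring = (dist >= inner) & (dist < r)       # annular region mask
        g = gray[ring]
        feats.append(g.std())                     # contrast (RMS)
        feats.append(g.mean())                    # exposure
        feats.extend(rgb[ring].mean(axis=0))      # color balance per channel
        feats.append(lap[ring[1:-1, 1:-1]].var()) # sharpness
        inner = r
    return np.array(feats)
```

With two radii this yields a 12-element vector (6 features per region); a feature-selection step and a regression classifier, as in the abstract, would then operate on such vectors.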

Results: Three data sets of 600, 396, and 358 images, each of a single format (number of pixels), were first used in the classifier. For the binary classification problem (pass/fail), the areas under the ROC curve for the automatic classifier were 1.0, 1.0, and 0.98, respectively. A test was performed on the set of 600 images to determine the ability not only to recognize poor image quality, but also to identify the technical source of the image degradation. In this test, a kappa of 99.6% was achieved when compared to the grader, and the source of the quality degradation was correctly identified for 99% of the images.
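The area under the ROC curve reported above can be computed directly from classifier scores without tracing the curve, via its Mann-Whitney interpretation: the probability that a randomly chosen "pass" image scores higher than a randomly chosen "fail" image. A small sketch on illustrative scores (not the study's data):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as P(score of a random positive > score of a random negative),
    i.e. the Mann-Whitney U statistic divided by n_pos * n_neg,
    with ties counted as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0, as reported for two of the data sets, corresponds to a score threshold that separates the pass and fail images perfectly.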

Conclusions: Each image was classified in a few milliseconds, demonstrating the ability to identify poor-quality images in real time. The image features used do not require explicit segmentation of retinal structures, avoiding a computational cost that would otherwise limit the ability to perform image quality assessment in real time.

Keywords: image processing • retina • imaging/image analysis: non-clinical 