Open Access
ARVO Annual Meeting Abstract  |   June 2021
Automatic image stitching of video frames captured during indirect retinopathy of prematurity examinations using a condensing lens and camera phone
Author Affiliations & Notes
  • Jeffrey Wigdahl
    VisionQuest Biomedical, Albuquerque, New Mexico, United States
  • Sam Wilson
    Keeler, United Kingdom
  • Maya Barley
    Keeler, United Kingdom
  • E Simon Barriga
    VisionQuest Biomedical, Albuquerque, New Mexico, United States
  • Footnotes
    Commercial Relationships   Jeffrey Wigdahl, Keeler (F), VisionQuest Biomedical (E); Sam Wilson, Keeler (E); Maya Barley, Keeler (E); E Simon Barriga, VisionQuest Biomedical (E)
    Support   R44 EY 023474
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 3250.
Abstract

Purpose : To demonstrate software that automatically generates wide-field images of neonates' retinas from videos captured with a low-cost, narrow-field imager consisting of a camera phone fitted to a headset and an indirect ophthalmoscopy lens. Accurate determination of plus disease is critical for achieving favorable outcomes in infants with retinopathy of prematurity (ROP). However, given the narrow field of a smartphone-based imager, it can be difficult to judge the vascular changes present in an infant's posterior pole. To meet the needs of the developing world, low-cost tools and software must provide information as actionable as that from more expensive counterparts that capture larger fields of view.

Methods : To generate a mosaic, each video must first be stripped of extraneous information. Processing proceeds in four steps, sketched in code after the list:
Lens Detection: A circular Hough transform is used to detect the most likely lens candidate in each frame.
Image Quality: A deep neural network was trained on over 100,000 images labeled with three classes (outlier, low, high). The model determines which frames are used to create the mosaic.
Grouping: During the ROP exam, the lens is out of frame when the doctor moves to a different area of the retina. The initial grouping of usable frames exploits this so that each group represents a contiguous view of the retina.
Image Stitching: Each group is stitched together, then an attempt is made to stitch the group outputs into a single final mosaic. Stitching is performed using FAST keypoint detection and rotated BRIEF descriptors (as in ORB) to find correspondences between images.
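
The lens-detection step could be implemented with OpenCV's circular Hough transform as in the Python sketch below. The parameter values (blur kernel, vote thresholds, radius bounds) are illustrative assumptions, not the authors' settings.

    import cv2
    import numpy as np

    def detect_lens(frame):
        """Return (x, y, r) of the most likely condensing-lens circle, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)         # suppress sensor noise before edge detection
        circles = cv2.HoughCircles(
            gray,
            cv2.HOUGH_GRADIENT,
            dp=2,                              # accumulator resolution ratio
            minDist=gray.shape[0],             # expect at most one lens per frame
            param1=100,                        # Canny high threshold
            param2=60,                         # accumulator vote threshold
            minRadius=gray.shape[0] // 6,      # assumed plausible lens radii
            maxRadius=gray.shape[0] // 2,
        )
        if circles is None:
            return None
        x, y, r = circles[0][0]                # candidates are ordered by vote count
        return int(x), int(y), int(r)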
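
The abstract does not name the quality network's architecture, so the sketch below stands in a stock ResNet-18 (an assumption) whose three outputs mirror the abstract's classes; the checkpoint file name is hypothetical.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    CLASSES = ["outlier", "low", "high"]       # classes named in the abstract

    # Placeholder backbone and weights; the authors' architecture is not stated.
    model = models.resnet18(num_classes=len(CLASSES))
    model.load_state_dict(torch.load("quality_model.pt"))   # hypothetical checkpoint
    model.eval()

    preprocess = T.Compose([
        T.ToPILImage(),                        # accepts an OpenCV ndarray crop
        T.Resize((224, 224)),
        T.ToTensor(),
    ])

    def frame_quality(lens_crop):
        """Classify a cropped lens region as 'outlier', 'low', or 'high' quality."""
        with torch.no_grad():
            logits = model(preprocess(lens_crop).unsqueeze(0))
        return CLASSES[int(logits.argmax(dim=1))]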
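
Steps 3 and 4 might look as follows: frames are grouped into contiguous runs (a run ends whenever the lens leaves the field of view), and each group is stitched pairwise with ORB, OpenCV's combination of FAST keypoints and rotated BRIEF descriptors, plus a RANSAC homography. The match thresholds and the naive overlay blend are simplifying assumptions.

    import cv2
    import numpy as np

    def group_frames(per_frame):
        """per_frame: list of (frame, lens_or_None, quality) in video order."""
        groups, current = [], []
        for frame, lens, quality in per_frame:
            if lens is None:                   # lens left the field: close the group
                if current:
                    groups.append(current)
                    current = []
            elif quality == "high":
                current.append(frame)
        if current:
            groups.append(current)
        return groups

    def stitch_pair(base, new):
        """Warp `new` onto `base` via ORB correspondences; None if too few matches."""
        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(base, None)
        k2, d2 = orb.detectAndCompute(new, None)
        if d1 is None or d2 is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
        if len(matches) < 10:                  # assumed minimum for a stable fit
            return None
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None
        warped = cv2.warpPerspective(new, H, (base.shape[1], base.shape[0]))
        return np.where(warped > 0, warped, base)   # naive overlay blend

A group mosaic can be built by folding stitch_pair over the group's frames, and the same routine can then be attempted between the group mosaics to form the final wide-field image.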

Results : Twenty videos were used to test the algorithms, with the success criterion being the retinal field of view present in the output mosaics. The full field of view, as determined by an independent grader, was manually extracted from each video and visually compared against the automatic output. The full field of view was present in 18 of the 20 videos; in the remaining two, only a single view was underrepresented. An example of the processing pipeline and its output is shown in Fig. 1.

Conclusions : We demonstrated an automatic approach that extracts the usable retinal information from a video and creates one or more wide-field images that can be used for documentation, education, or making an accurate ROP diagnosis.

This is a 2021 ARVO Annual Meeting abstract.

 

Figure 1. (A) Frames are processed to find the lens. (B) Image quality is determined for each lens. (C) Good-quality frames are grouped by region and (D) stitched into the final mosaic output.

