ARVO Annual Meeting Abstract  |   April 2011
The Application Of Lucky Imaging To Adaptive Optics Scanning Laser Ophthalmoscope
Author Affiliations & Notes
  • Gang Huang
    School of Optometry, Indiana University, Bloomington, Indiana
  • Zhangyi Zhong
    School of Optometry, Indiana University, Bloomington, Indiana
  • Weiyao Zou
    School of Optometry, Indiana University, Bloomington, Indiana
  • Xiaofeng Qi
    School of Optometry, Indiana University, Bloomington, Indiana
  • Stephen A. Burns
    School of Optometry, Indiana University, Bloomington, Indiana
  • Footnotes
    Commercial Relationships  Gang Huang, None; Zhangyi Zhong, None; Weiyao Zou, None; Xiaofeng Qi, None; Stephen A. Burns, None
  • Footnotes
    Support  NIH Grants EY04395, EY14375, P30EY019008
Investigative Ophthalmology & Visual Science April 2011, Vol.52, 4056.
Gang Huang, Zhangyi Zhong, Weiyao Zou, Xiaofeng Qi, Stephen A. Burns; The Application Of Lucky Imaging To Adaptive Optics Scanning Laser Ophthalmoscope. Invest. Ophthalmol. Vis. Sci. 2011;52(14):4056.
Abstract

Purpose: Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, when frames are simply aligned and averaged, variations in image quality occur, in some cases due to variations in focus across, for instance, the foveal pit. To address these variations, we applied an image post-processing method called lucky imaging (Fried, 1978) to AOSLO images.

Methods: Several datasets from cone imaging sessions were selected, each consisting of multiple frames collected at a frame rate of 30 Hz. For each dataset, the images in the sequence were first corrected for sinusoidal scan distortions, then aligned using a within-frame motion compensation technique, and the resulting aligned images were stacked. We next empirically set a sub-isoplanatic window based on the frame size and moved it across the images. For each sub-image in the window, we calculated its co-occurrence matrix contrast to assess the sub-image's quality. For areas without cones, such as blood vessels, the contrast value is of the same order as in cone regions, but the variance of the contrast value across frames is higher; thus non-photoreceptor regions were also automatically identified and marked. Finally, the sub-images of the same region in the image sequence were sorted by their contrast values, and those with the highest contrast were selected and averaged. The percentage to average was chosen such that in cone-bearing regions we averaged from 10% to 30% of all frames, while for non-cone regions all the frames were used.
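The selection-and-averaging step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the window size, gray-level quantization, co-occurrence offset, and variance threshold are all assumed parameters, and the contrast measure is the standard gray-level co-occurrence matrix (GLCM) contrast.

```python
import numpy as np

def glcm_contrast(patch, levels=16, offset=(0, 1)):
    """GLCM contrast of one patch (values assumed in [0, 1]).
    Contrast = sum_{i,j} (i - j)^2 * P(i, j), where P is the
    normalized co-occurrence matrix for the given pixel offset."""
    q = np.clip((patch * levels).astype(int), 0, levels - 1)
    dr, dc = offset
    a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()
    b = q[dr:, dc:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)          # accumulate gray-level pairs
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())

def lucky_average(frames, win=32, keep=0.2, var_thresh=None):
    """Lucky-imaging average of an aligned frame stack (n, h, w).
    For each win x win window, rank the sub-images across frames by
    GLCM contrast and average the top `keep` fraction.  Windows whose
    contrast varies strongly across frames (above var_thresh) are
    treated as non-cone regions and averaged over all frames."""
    n, h, w = frames.shape
    out = np.zeros((h, w))
    n_keep = max(1, int(round(keep * n)))
    for r in range(0, h, win):
        for c in range(0, w, win):
            subs = frames[:, r:r + win, c:c + win]
            contrasts = np.array([glcm_contrast(s) for s in subs])
            if var_thresh is not None and contrasts.var() > var_thresh:
                # High contrast variance: likely vessel / non-cone region,
                # so fall back to averaging every frame.
                out[r:r + win, c:c + win] = subs.mean(axis=0)
            else:
                best = np.argsort(contrasts)[-n_keep:]   # luckiest frames
                out[r:r + win, c:c + win] = subs[best].mean(axis=0)
    return out
```

With `keep=0.1`–`0.3` this matches the 10%–30% selection described for cone-bearing regions; passing a `var_thresh` enables the all-frame fallback for non-photoreceptor regions.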

Results: The processed results demonstrate clear improvements in contrast and uniformity. Quantitative analysis shows that contrast was increased by 20% and brightness by 10%. For some local regions, the contrast could be improved even more, depending on the quality of the original image sequences. For image sequences containing both cones and blood vessels, the results show that the image quality of the cones is improved while the blood vessel regions remain at the same noise level as with simple averaging, as expected.
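The abstract does not state which contrast metric underlies the 20% figure; as an illustrative assumption, the improvement could be quantified with RMS contrast (standard deviation over mean intensity), comparing the lucky-imaging result against a plain frame average:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of intensity over mean intensity."""
    return float(img.std() / img.mean())

def percent_improvement(lucky_img, simple_avg_img):
    """Percent change in RMS contrast of the lucky-imaging result
    relative to a simple average of all frames."""
    c_lucky = rms_contrast(lucky_img)
    c_simple = rms_contrast(simple_avg_img)
    return 100.0 * (c_lucky - c_simple) / c_simple
```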

Conclusions: We have applied lucky imaging to AOSLO image post-processing and have been able to increase the contrast and uniformity of cone-photoreceptor images. In the examples shown, by selecting only the best sub-regions from the dataset, the contrast was improved by 20% on average and the uniformity was also improved, especially in areas where depth varies, such as near the foveal pit.

Keywords: image processing 