April 2009
Volume 50, Issue 13
ARVO Annual Meeting Abstract  |   April 2009
Multimodal Extraction of Retinal Features Within Large Heterogeneous Data Sets
Author Affiliations & Notes
  • E. Troeger
    Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
  • E. Zrenner
    Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
  • R. Wilke
    Institute for Ophthalmic Research, University of Tuebingen, Tuebingen, Germany
  • Footnotes
    Commercial Relationships  E. Troeger, None; E. Zrenner, None; R. Wilke, None.
  • Footnotes
    Support  Tistou und Charlotte Kerstan Stiftung Vision 2000 Sehen-Kunst-Sinnesfunktion
Investigative Ophthalmology & Visual Science April 2009, Vol.50, 340.

      E. Troeger, E. Zrenner, R. Wilke; Multimodal Extraction of Retinal Features Within Large Heterogeneous Data Sets. Invest. Ophthalmol. Vis. Sci. 2009;50(13):340.

Abstract

Purpose: A multitude of modalities is available today for morphological (fundus, red-free, and IR photography; OCT; autofluorescence; angiography) and functional (ERG, mfERG, (micro-)perimetry) assessment of the retina. Although many features, such as vascular attributes and changes in retinal appearance, morphology, and function, can be extracted from these data, usually only data from a few modalities are considered in combination to assess interrelations. Here we present an approach to semi-automatically extract clinically meaningful features on a large scale.
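To illustrate what combining features across modalities might look like in practice, the following is a minimal sketch of a per-patient record that aggregates values from several modalities. All field and modality names here are hypothetical, chosen for illustration; the abstract does not specify the authors' actual data schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical per-patient record combining features from several
# modalities; names are illustrative, not the authors' actual schema.
@dataclass
class RetinalFeatureRecord:
    patient_id: str
    av_ratio: Optional[float] = None                    # from fundus photography
    autofluorescence_area_mm2: Optional[float] = None   # from AF imaging
    mferg_amplitude_nv: Optional[float] = None          # from multifocal ERG
    modalities: set = field(default_factory=set)

    def add_feature(self, name: str, value: float, modality: str) -> None:
        # store the value and remember which modality contributed it
        setattr(self, name, value)
        self.modalities.add(modality)

record = RetinalFeatureRecord("P001")
record.add_feature("av_ratio", 0.68, "fundus")
record.add_feature("mferg_amplitude_nv", 42.0, "mfERG")
# record.modalities now contains both "fundus" and "mfERG"
```

Keeping per-feature modality provenance in one record is what makes later cross-modal analysis (e.g. relating mfERG amplitude to fundus findings) straightforward.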

Methods: The accuracy of feature extraction is important for the comparability of multimodal data. Prominent pathological changes of the retina are abundant in our data sets and lower the specificity and sensitivity of automatic extraction algorithms; therefore, a semi-automatic approach was chosen. A tool was developed to support fast feature extraction by providing a user-friendly interface for manual extraction steps as well as automatic extraction routines. The tool is based on the Windows Presentation Foundation, which allows for easy integration of customized visualization components, e.g. for perimetry or OCT data. Algorithms are prototyped in MATLAB, and parallelizable algorithms are implemented using the NVIDIA CUDA programming interface.
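An operation parallelizes well on CUDA when every pixel can be processed independently, one thread per pixel. The sketch below uses pure Python as a stand-in for such a kernel, thresholding a grayscale image to flag bright pixels (the kind of primitive that might feed optic-disc detection); the image values and cutoff are invented for illustration.

```python
# Illustrative sketch of a data-parallel, per-pixel operation of the kind
# that maps well onto CUDA. In a real CUDA kernel each pixel would be
# handled by one GPU thread; here a nested comprehension plays that role.
def threshold_kernel(pixel: int, cutoff: int) -> int:
    # one "thread": an independent per-pixel decision
    return 1 if pixel >= cutoff else 0

def threshold_image(image: list, cutoff: int) -> list:
    # the "grid": apply the kernel to every pixel independently
    return [[threshold_kernel(p, cutoff) for p in row] for row in image]

# toy 2x3 grayscale image (values are invented for illustration)
image = [[10, 200, 30],
         [220, 240, 15]]
mask = threshold_image(image, 128)
# mask == [[0, 1, 0], [1, 1, 0]]
```

Because the per-pixel work has no data dependencies, the same structure translates directly into a CUDA kernel launched over a 2D thread grid, which is where the speedups reported below come from.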

Results: We defined ~30 features across all available imaging and functional testing modalities, including A/V ratio, areas of changed autofluorescence, bone spicules, and mfERG amplitudes and latencies, for a large cohort of patients with hereditary retinal dystrophies (>1,500 patients). The fast semi-automatic concept proved well suited to obtaining feature values within this large and heterogeneous data set. Extraction algorithms can be optimized with CUDA, reducing computation time, e.g. for automated delineation of the optic disc, by a factor of 14 (NVIDIA 8600M GT) compared with the non-parallelized implementation. If automatic algorithms fail due to advanced disease stages, the tool still provides convenient manual extraction routines.
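The automatic-with-manual-fallback behavior described above can be sketched as a simple control-flow pattern: run the automatic algorithm, and hand off to manual delineation when it cannot produce a confident result. All function names, the confidence threshold, and the placeholder results below are hypothetical.

```python
# Sketch of the fallback pattern: attempt automatic extraction first,
# fall back to manual delineation when confidence is too low (e.g. in
# advanced disease stages). Names and threshold are illustrative.
def extract_optic_disc(image, automatic, manual, min_confidence=0.8):
    result, confidence = automatic(image)
    if confidence >= min_confidence:
        return result, "automatic"
    # automatic algorithm not confident enough: defer to the human
    return manual(image), "manual"

# placeholder extractors standing in for real algorithms / UI steps
auto_ok   = lambda img: ("disc@(120,140)", 0.95)  # confident detection
auto_fail = lambda img: (None, 0.20)              # algorithm gives up
manual    = lambda img: "disc@(118,143)"          # user-drawn outline

r1 = extract_optic_disc(None, auto_ok, manual)    # ("disc@(120,140)", "automatic")
r2 = extract_optic_disc(None, auto_fail, manual)  # ("disc@(118,143)", "manual")
```

Routing every low-confidence case to the same manual interface is what keeps the workflow usable across the full severity range of the cohort.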

Conclusions: Our tool enables fast and robust feature extraction for multimodal statistical analysis and will be extended with statistical capabilities and with feature extraction approaches based on co-registered data sets. Parallel algorithms will be optimized further and may enable near real-time extraction of even complex features. The planned application to large-scale data sets may yield novel insights into interrelations between morphological and functional features in retinal disease.

Keywords: retina • imaging/image analysis: clinical • imaging methods (CT, FA, ICG, MRI, OCT, RTA, SLO, ultrasound) 