April 2010
Volume 51, Issue 13
ARVO Annual Meeting Abstract  |   April 2010
A Framework for Fast 3-D Histomorphometric Reconstructions
Author Affiliations & Notes
  • S. Gaffling
    Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
    SAOT Graduate School in Advanced Optical Technologies, Erlangen, Germany
  • M. Scholz
    Institute of Anatomy II, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
  • M. Eichhorn
    Institute of Anatomy II, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
  • E. Lütjen-Drecoll
    Institute of Anatomy II, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
  • Footnotes
    Commercial Relationships  S. Gaffling, None; M. Scholz, None; M. Eichhorn, None; E. Lütjen-Drecoll, None.
    Support  None.
Investigative Ophthalmology & Visual Science April 2010, Vol.51, 4370.

      S. Gaffling, M. Scholz, M. Eichhorn, E. Lütjen-Drecoll; A Framework for Fast 3-D Histomorphometric Reconstructions. Invest. Ophthalmol. Vis. Sci. 2010;51(13):4370.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: Three-dimensional (3-D) histomorphometric reconstructions have proven valuable for investigating structural changes in the eye caused by diseases such as glaucoma. Time-consuming manual interaction during the reconstruction, however, makes the process slow and subjective. We propose a framework for fast 3-D reconstruction of histological data sets with minimal user interaction, making histomorphometric reconstructions feasible for everyday use.

Methods: Histological images exhibit many artifacts that must be corrected to ensure a faithful reconstruction of tissue morphology. Most notable are inter-slice intensity differences caused by inhomogeneities in slice thickness and in exposure time during histological staining. Therefore, a normalization step is performed using histogram equalization and bias-field elimination methods. Afterwards, intensity-driven rigid registration is used to coarsely align the images of the sequence. Large gaps caused by disrupted or missing slices are restored by interpolation: the two neighboring slices are registered non-rigidly, and the resulting deformation field is partially applied to the reference image to reconstruct the slice in between. To account for tissue deformations introduced during sectioning, the images are then registered non-rigidly to each other. Finally, the images are stacked into a 3-D histomorphometric volume. The approach was tested on several datasets of human and animal eyes, including a dataset of 82 section images of a human optic nerve head. Since acquisition of ground-truth data for histological volumes remains an open issue, evaluation was performed by visual inspection by experts.
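Two of the steps above — intensity normalization via histogram equalization, and restoring a missing slice by applying a fraction of a deformation field — can be sketched in NumPy. The function names, the dense-displacement representation, and the nearest-neighbour warp are illustrative assumptions for brevity, not the authors' implementation:

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: remap intensities so the cumulative
    histogram is roughly uniform, reducing inter-slice brightness
    differences before registration. `img` holds integers in [0, levels)."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # lookup table
    return lut[img]

def warp(img, disp, alpha):
    """Apply the fraction `alpha` of a dense displacement field `disp`
    (shape (2, H, W), row/column offsets) to `img`. With the field from
    a non-rigid registration of the two neighbouring slices, alpha = 0.5
    approximates the missing slice halfway between them
    (nearest-neighbour sampling, for simplicity)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(yy + alpha * disp[0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xx + alpha * disp[1]).astype(int), 0, w - 1)
    return img[sy, sx]
```

A production pipeline would use subpixel interpolation for the warp and combine these with the rigid and non-rigid registration steps themselves, e.g. via an intensity-driven registration toolkit.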

Results: The histological images were first improved using the image enhancement methods, which also enables subsequent automatic registration and segmentation. The severity of the artifacts, however, has a significant effect on the quality of the reconstructed volume. Nevertheless, the overall 3-D impression of a histological dataset proved to be beneficial.

Conclusions: While existing reconstruction and visualization techniques required about one month on average, our framework completes a reconstruction in approximately one hour, depending on the resolution. This makes 3-D reconstruction of histological slices clearly more feasible for everyday use in anatomical laboratories.

Keywords: image processing • microscopy: light/fluorescence/immunohistochemistry • anatomy 