Step 1 of the algorithm provides an approximation (Fig. 2B) of the original image that is suitable for segmentation. Specifically, step 1 removes the background noise and produces an approximation of the original image that tends to be piecewise constant. In other words, this approximation consists of homogeneous domains formed by one or more adjacent layers, which highlights the interfaces between different layers. However, adjacent layers with low contrast are merged into a single layer complex; that is, they cannot be resolved, as is evident from Figures 2A and 2B. Finally, we mention that step 1 has been carried out by a TV (total variation) method.
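The implementation details of the TV step are not given in the text; purely as an illustration, the following is a minimal sketch of ROF-type TV denoising by explicit gradient descent. All parameter values (`lam`, `step`, `eps`, `n_iter`) are hypothetical choices for this sketch, not those used in the study.

```python
import numpy as np

def tv_denoise(img, lam=1.0, step=0.02, eps=1e-2, n_iter=300):
    """Explicit gradient descent on a smoothed ROF/TV functional:
    min_u TV_eps(u) + (lam/2) * ||u - img||^2.
    Illustrative sketch only; all parameters are hypothetical."""
    f = img.astype(float)
    u = f.copy()
    for _ in range(n_iter):
        # forward differences (periodic wrap via roll is acceptable for a sketch)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # smoothed gradient norm
        px, py = ux / mag, uy / mag              # normalized gradient field
        # divergence via backward differences (adjoint of the forward ones)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - lam * (u - f))        # descend the energy
    return u

# tiny demo on a synthetic piecewise-constant image with additive noise
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[16:, :] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

On a noisy two-layer test image this flattens the homogeneous regions while keeping the jump between them, which is the piecewise-constant behavior that step 1 relies on.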
27 Step 2 (Fig. 2C) of the imaging algorithm is devoted to improving the sharpness of the boundaries of the domains found in step 1. Step 2 improves the image especially in those parts where the boundaries between different domains are not sharp. Step 2 has been carried out by means of the so-called shock filter.
28 Step 3 (Fig. 2D) is devoted to the extraction of the contours, that is, the boundaries between different domains. This is one of the most critical parts of the segmentation algorithm, and an ad hoc procedure has been developed (Giovinco G, Savastano MC, Ventre S, Tamburrino A. Automated detection of the retinal layers from OCT spectral domain images, submitted for publication, 2013). The idea behind this procedure is that the magnitude of the gradient of the image is maximal at the boundaries. The procedure searches for connected lines joining the points where the gradient magnitude attains its maxima, and it has been developed ad hoc for images of layered structures such as those produced by OCT. The output of step 3 may contain artifacts: after the contour extraction and the related segmentation, small regions may appear as inclusions within larger regions (
Fig. 2E). To remove these inclusions, which have no physiological significance, we adopted an ad hoc fusion strategy based on the length of the boundary shared between the “small” region and the surrounding regions. The fusion rule assigns the inclusion to the adjacent region having the largest number of contour pixels in common with it. The effect of the region fusion algorithm is shown in
Figure 2F. At this stage we have the final segmentation of the retinal layers, apart from the hyperreflective layer, which has been extracted separately. First, we remove the previously segmented regions from the image. Then, a nonlinear complex diffusion algorithm
29 is applied to mitigate the background noise affecting the hyperreflective region and the choroid. Finally, a contour extraction is applied as described in steps 2 and 3. The resulting image is shown in
Figure 2G. The final segmentation result is shown in
Figure 2H, where the contours are superimposed onto the original image of
Figure 2A.
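The contour-extraction procedure of step 3 is detailed only in the submitted manuscript cited above, so its exact form cannot be reproduced here. Purely as an illustration of the underlying idea, tracing connected maxima of the gradient magnitude in a layered image, one possible sketch follows. The function name, the seed row, and the `max_jump` drift limit are all hypothetical.

```python
import numpy as np

def trace_layer_boundary(img, seed_row, max_jump=2):
    """Illustrative sketch of gradient-maximum contour tracing for a layered
    image: in each column (A-scan), follow the row where the vertical
    gradient magnitude peaks, allowing at most `max_jump` rows of drift
    between adjacent columns. NOT the authors' exact procedure
    (unpublished); names and parameters are hypothetical."""
    grad = np.abs(np.diff(img.astype(float), axis=0))  # vertical gradient magnitude
    n_rows, n_cols = grad.shape
    path = np.empty(n_cols, dtype=int)
    row = seed_row
    for c in range(n_cols):
        lo = max(row - max_jump, 0)
        hi = min(row + max_jump + 1, n_rows)
        row = lo + int(np.argmax(grad[lo:hi, c]))      # strongest nearby edge
        path[c] = row
    return path

# demo: a synthetic two-layer image with a gently sloping interface
rows, cols = 40, 60
interface = (20 + 3 * np.sin(np.linspace(0, 2 * np.pi, cols))).astype(int)
img = np.zeros((rows, cols))
for c in range(cols):
    img[interface[c]:, c] = 1.0
boundary = trace_layer_boundary(img, seed_row=20)
```

Restricting the search to a small window around the previous column's estimate is what makes such tracing well suited to layered structures: it yields one connected line per interface rather than scattered gradient maxima.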
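The region fusion rule used to remove non-physiological inclusions can likewise be sketched in a few lines. The minimum-size threshold and the 4-neighbor contact count below are hypothetical implementation choices; the text specifies only that the inclusion is merged with the adjacent region sharing the most contour pixels.

```python
import numpy as np
from collections import Counter

def fuse_small_regions(labels, min_size=20):
    """Illustrative sketch of the boundary-length fusion rule: any region
    smaller than `min_size` pixels is merged into the neighboring region
    with which it shares the most adjacent (contour) pixels.
    Hypothetical parameterization, not the authors' exact implementation."""
    labels = labels.copy()
    sizes = Counter(labels.ravel().tolist())
    for lab, size in list(sizes.items()):
        if size >= min_size:
            continue
        mask = labels == lab
        contact = Counter()                  # shared-boundary pixel counts
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            nb = np.roll(labels, shift, axis=axis)
            edge = mask & (nb != lab)        # contour pixels facing another region
            contact.update(nb[edge].tolist())
        if contact:
            labels[mask] = contact.most_common(1)[0][0]
    return labels

# demo: a 3-pixel inclusion (label 2) sitting inside a large region (label 0)
seg = np.zeros((10, 10), dtype=int)
seg[:, 5:] = 1          # region 1 on the right
seg[4, 1:4] = 2         # small inclusion inside region 0
fused = fuse_small_regions(seg, min_size=10)
```

The inclusion is absorbed by region 0, its only contour neighbor, while both large regions are left untouched.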
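One well-known formulation of nonlinear complex diffusion is the ramp-preserving scheme of Gilboa, Sochen, and Zeevi, in which the imaginary part of the evolving image acts as a smoothed second-derivative edge indicator that slows diffusion near edges. Whether this matches reference 29 exactly cannot be verified from the text; the sketch below also simplifies the divergence form to a diffusivity-weighted Laplacian, and all parameters are hypothetical.

```python
import numpy as np

def complex_diffusion(img, n_iter=30, dt=0.1, k=2.0, theta=np.pi / 30):
    """Simplified sketch of ramp-preserving nonlinear complex diffusion:
    u_t ~= c(Im u) * lap u, with c = exp(i*theta) / (1 + (Im u / (k*theta))^2).
    Hypothetical parameters; the divergence form is approximated."""
    u = img.astype(complex)
    for _ in range(n_iter):
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u)
        # diffusivity shrinks where the imaginary part (edge indicator) is large
        c = np.exp(1j * theta) / (1.0 + (u.imag / (k * theta)) ** 2)
        u = u + dt * c * lap
    return u.real

# demo: smoothing background noise around a single intensity step
rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[16:, :] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
smoothed = complex_diffusion(noisy)
```

In the flat background the diffusivity stays close to 1 and the noise is averaged out, while near the step the growing imaginary part throttles the diffusion, mirroring the noise-mitigation role this stage plays before the final contour extraction.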