September 2018, Volume 59, Issue 11
Open Access
Multidisciplinary Ophthalmic Imaging
Cone Photoreceptor Cell Segmentation and Diameter Measurement on Adaptive Optics Images Using Circularly Constrained Active Contour Model
Author Affiliations & Notes
  • Jianfei Liu
    Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
  • HaeWon Jung
    Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
  • Alfredo Dubra
    Department of Ophthalmology, Stanford University, Palo Alto, California, United States
  • Johnny Tam
    Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, Bethesda, Maryland, United States
  • Correspondence: Johnny Tam, Ophthalmic Genetics and Visual Function Branch, National Eye Institute, National Institutes of Health, 10 Center Drive Room 10N109, Bethesda, MD 20892, USA; [email protected]
Investigative Ophthalmology & Visual Science, September 2018, Vol. 59, 4639-4652. https://doi.org/10.1167/iovs.18-24734
Abstract

Purpose: Cone photoreceptor cells can be noninvasively imaged in the living human eye by using nonconfocal adaptive optics scanning ophthalmoscopy split detection. Existing metrics, such as cone density and spacing, are based on simplifying cone photoreceptors to single points. The purposes of this study were to introduce a computer-aided approach for segmentation of cone photoreceptors, to apply this technique to create a normal database of cone diameters, and to demonstrate its use in the context of existing metrics.

Methods: Cone photoreceptor segmentation is achieved through a circularly constrained active contour model (CCACM). Circular templates and image gradients attract active contours toward cone photoreceptor boundaries. Automated segmentation from in vivo human subject data was compared to ground truth established by manual segmentation. Cone diameters computed from curated data (automated segmentation followed by manual removal of errors) were compared with histology and published data.

Results: Overall, there was good agreement between automated and manual segmentations and between diameter measurements (n = 5191 cones) and published histologic data across retinal eccentricities ranging from 1.35 to 6.35 mm (temporal). Interestingly, cone diameter was correlated to both cone density and cone spacing (negatively and positively, respectively; P < 0.01 for both). Application of the proposed automated segmentation to images from a patient with late-onset retinal degeneration revealed the presence of enlarged cones above individual reticular pseudodrusen (average 23.0% increase, P < 0.05).

Conclusions: CCACM can accurately segment cone photoreceptors on split detection images across a range of eccentricities. Metrics derived from this automated segmentation of adaptive optics retinal images can provide new insights into retinal diseases.

Noninvasive imaging of cone photoreceptor mosaics in the living human eye has been enabled by various adaptive optics (AO) ophthalmoscopy modalities.1–3 Quantitative assessment of the mosaic through metrics on AO retinal images, such as cone density and spacing, has shown potential for clinical application2,4 with substantial efforts already realized toward assembling normative databases.5–12 To overcome the tedious task of manually identifying cones and to remove the variability of human graders, various automated algorithms have been developed for two types of AO modalities: confocal13–16 and nonconfocal17–19 AO scanning light ophthalmoscopy (AOSLO). However, most quantitative metrics have been based on representing each cone as a point.9 In this work, we focused on using region-based descriptors of cone photoreceptors (i.e., representing cones as a cloud of points as opposed to a single point such as the centroid) as seen by nonconfocal split detection AO,11 starting with cone diameter. Enlargement of cone photoreceptors has been reported in various diseases.12,20,21 However, when performed manually, segmentation is several-fold more time intensive than identification of cone centers. Therefore, we present a novel algorithm that builds upon our previous work20,22 enabling automated segmentation of cone photoreceptors, and demonstrate its potential value for clinical application. 
Cell segmentation is an active area of research in digital pathology and microscopy.23 Active contour models (ACMs)24,25 are of particular interest here because of their subpixel accuracy as well as robustness to image noise.26–30 ACM is a propagation process of deformable and closed contours that is controlled by image forces pulling them to stop at object boundaries.24,25 If the contour has an explicit mathematical representation, such as a spline, the ACM is also called a "snake"24; if it is implicitly embedded into a high-dimensional function and the actual contour is the zero isosurface of that function, then it is also called a "level set."25 Level sets are often exploited in cell segmentation because they are superior to snakes in handling touching cells.26–29 Cell shape priors from a set of manually marked cell contours are often embedded into the level-set framework to constrain level-set propagation to the vicinity of the shape boundary, which can prevent cell oversegmentation due to weak cell boundaries that fail to stop contour propagation.27,30 However, directly applying these approaches to segment cone photoreceptors is challenging given the anisotropic boundary consisting of dark and bright shading on two opposite sides and little to no contrast on the orthogonal sides (Fig. 1A). This three-dimensional appearance with oblique illumination also poses an interesting dilemma in defining the precise location of the boundary, which we will discuss later. To date, there have been very few articles addressing image analysis of split detection images,17,18,22,31,32 with only one report of cone photoreceptor segmentation22 (defined as the extraction of the region occupied by cells as opposed to identification of cell centers). 
Figure 1
 
Overview of cone photoreceptor segmentation algorithm on split detection images (subject 2). (A) Input image. (B) Dual-intensity extreme region detections (yellow dots). (C) Dual-intensity extreme regions (red and blue regions). (D) Detection pair connections that represent candidate cone photoreceptors for segmentation (green lines). (E) Convex hull of each cone (black contours). (F) Circular templates computed from contours (yellow circles). (G) Cone boundaries (cyan contours) from CCACM. (H) Refined boundaries (green contours) via snake model. White arrows indicate a cone photoreceptor with single extreme region detection; black arrows, cone with more than two region detections. A recovery process finds missing opposing intensity regions to complete the process, allowing the proposed segmentation algorithm to handle cones with single-region detection while still tolerating multiple-region detections. Scale bar: 10 μm.
Here, instead of manually creating shape priors,27,30 we propose to solve the cone photoreceptor segmentation problem by dynamically establishing cell-specific circular templates, based on automatic detection of the dark and bright regions of each cone, an approach we call the circularly constrained active contour model (CCACM). This improves upon our previous work22 by adding a snake-based refinement step that achieves subpixel segmentation. We then demonstrate the potential power of the CCACM approach in streamlining the creation of a normal database of cone photoreceptor diameters across a wide range of eccentricities. Finally, we illustrate the potential clinical utility of photoreceptor inner segment size as a metric by studying a patient with photoreceptors that appear to be locally enlarged over reticular pseudodrusen. 
Methods
Data Collection
Research procedures adhered to the tenets of the Declaration of Helsinki and were approved by the Institutional Review Board of the National Institutes of Health. Written informed consent was obtained after the nature of the research and possible consequences of the study were explained to the subjects. 
Data were acquired by using a custom-built multimodal AO retinal imager (based on an AOSLO)11,33 outfitted with a computer-controlled fixation system.34 Image sequences were corrected for eye motion35 and manually assembled into montages that included both confocal and split detection images, as previously described.17 In this study, we imaged 10 subjects with no history of systemic or ocular disease (five female, five male; age range, 22–40 years; mean ± SD, 26.3 ± 5.6 years; additional information in Supplementary Table S1) and also a patient with late-onset retinal degeneration (male, 55 years). For each subject, over 100 retinal locations were imaged, from the fovea out to an eccentricity of approximately 6 mm in the temporal direction. The powers of the 790- and 850-nm light sources measured at the cornea were less than 135 μW and 35 μW, respectively, which were less than the maximum permissible exposure set by the American National Standards Institute standard Z136.1 2014.36 The retinal scaling factor for conversion from degrees to millimeters was computed by using a paraxial ray trace on a three-surfaced simplified model eye37 using the subject's biometric information (axial length, corneal curvature, and anterior chamber depth) measured with an IOL Master (Carl Zeiss Meditec, Dublin, CA, USA). 
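As an illustration of the degrees-to-millimeters conversion, the following minimal sketch uses Bennett's abbreviated formula as a simplified stand-in for the three-surface paraxial ray trace used in this study; the function name and the example axial length are ours.

```python
def retinal_scale_mm_per_deg(axial_length_mm: float) -> float:
    """Approximate retinal magnification factor via Bennett's abbreviated
    formula, q = 0.01306 * (AL - 1.82) mm/deg; a simplified stand-in for
    the three-surface model-eye ray trace described in the Methods."""
    return 0.01306 * (axial_length_mm - 1.82)

# Example: a 1.5-degree field on a 24-mm eye spans roughly 0.43 mm of retina,
# close to the ~0.46 mm quoted in the text (the exact value depends on the eye).
print(retinal_scale_mm_per_deg(24.0) * 1.5)
```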
Automated Cone Segmentation Algorithm
The automated cone segmentation algorithm consists of seven steps (Fig. 1; a detailed mathematical description of the proposed algorithm is provided in the Appendix). 
Step 1: Dual Region Detection
Because cone photoreceptors contain both dark and bright regions, multiscale circular voting17 is used to detect region pairs. Briefly, this algorithm detects gradient magnitude values at the boundaries of circular regions and effectively transfers them into the centers of the local radii of curvature. The region centroid is then determined as the point with the highest response values (Fig. 1B, yellow dots). 
Step 2: Dual Region Segmentation
Pairs of dark and bright regions are segmented for the purpose of establishing cell-specific circular templates for segmentation in the subsequent steps. Since dual regions are either dark or bright, their intensities correspond to local minima or maxima, respectively. Their first image derivatives are therefore close to zero, with second derivatives either positive for dark regions or negative for bright regions. Therefore, we used the Hessian matrix to represent the second image derivatives of the 2D image intensity function (see Appendix). 
Dual region detections from Step 1 form seed points to initialize this step. Starting from each seed point, region growing is exploited to include image points with positive or negative Hessian matrix response according to Equation A7 to segment dark and bright regions (Fig. 1C, red and blue regions). Since actual cone segmentation is carried out in a subsequent step, the positioning of this dual region segmentation relative to actual cone boundaries is deemphasized here. Instead, we leverage the robustness of the Hessian matrix response to establish stable circular templates for initializing the actual cone segmentation that follows. 
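To make this step concrete, a minimal sketch of the scale-selected Hessian determinant response (cf. Equations A5–A7) is shown below, assuming scale-space derivatives computed with Gaussian filtering (sigma = sqrt(t)) and t-squared scale normalization; the function name and data layout are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_hessian_det(image, t_min=0.5, t_max=7.7, gamma=1.2):
    """Signed determinant of the scale-normalized Hessian at the scale that
    maximizes its magnitude per pixel (cf. Equations A5-A7). Positive values
    suggest candidate dark regions (local minima); negative values suggest
    bright regions (local maxima)."""
    image = image.astype(float)
    best = np.zeros_like(image)
    t = t_min
    while t <= t_max:
        s = np.sqrt(t)  # scale parameter t corresponds to the variance sigma^2
        Lyy = gaussian_filter(image, sigma=s, order=(2, 0))
        Lxx = gaussian_filter(image, sigma=s, order=(0, 2))
        Lxy = gaussian_filter(image, sigma=s, order=(1, 1))
        det = (t ** 2) * (Lxx * Lyy - Lxy ** 2)  # t^2: scale normalization
        update = np.abs(det) > np.abs(best)
        best[update] = det[update]
        t *= gamma  # geometric scale progression (gamma = 1.2 in this work)
    return best

# Region growing from each seed point would then admit a neighboring pixel
# when abs(response) exceeds the threshold eps_H (set to 100 in the Appendix).
```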
Step 3: Dual Region Connection
This step pairs bright and dark regions corresponding to each cell, according to the following criteria: (1) dark regions are always on the left of bright ones (Fig. 1C; red and blue regions, respectively), (2) the distance between their region detections (Fig. 1B, yellow dots) is less than the expected maximum cone radius (4.5 μm in our datasets, which is consistent with what has been reported in histology38), and (3) if there are multiple region candidates, the one with the largest area is selected (Fig. 1C, black arrow). Dual regions from a single cone photoreceptor can thus be connected (Fig. 1D, green lines), as sketched below. Some cells contained only a single region (Fig. 1C, white arrow). The strategy for recovery of such regions is described in Step 4. 
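The pairing logic might be sketched as follows; the data layout (centroid/area dicts), the micrometers-per-pixel value, and the greedy matching order are our assumptions, not specified by the paper.

```python
import numpy as np

def connect_dual_regions(dark_regions, bright_regions,
                         max_radius_um=4.5, um_per_px=0.35):
    """Pair each dark region with a bright region to its right (Step 3).
    Regions are dicts with 'centroid' (x, y) and 'area' in pixels; at the
    0.23-mm FOV, a 70-um ROI spans ~200 px, hence ~0.35 um/px."""
    max_dist = max_radius_um / um_per_px
    pairs, used = [], set()
    for d in dark_regions:
        candidates = []
        for j, b in enumerate(bright_regions):
            if j in used:
                continue
            dx = b['centroid'][0] - d['centroid'][0]
            dist = np.hypot(dx, b['centroid'][1] - d['centroid'][1])
            # Criteria: bright to the right of dark, within the maximum radius.
            if dx > 0 and dist < max_dist:
                candidates.append((b['area'], j))
        if candidates:
            _, j = max(candidates)  # largest-area candidate wins
            used.add(j)
            pairs.append((d, bright_regions[j]))
    return pairs
```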
Step 4: Convex Hull Determination
This step determines initial bounding regions for each cell from its dual regions. The morphology of the bounding region is used to determine the size of the circular templates. Note that there are usually gaps between dual regions (Fig. 1C). To fill these gaps, two distance functions39 are computed for each gap starting from the boundary contours of its dual regions, where the distance function measures the shortest distance of each image point to the starting contour. The shortest distance between the two contours is used as a filling threshold: the gap is filled with the set of image points whose distance to both contours is less than the shortest contour distance (Equation A10; Fig. 1E, green regions). Finally, the combined dual regions plus filled gaps are passed to the convex hull algorithm40 to determine the smallest convex polygon (Fig. 1E, black contours). 
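A sketch of the gap filling and convex hull computation (cf. Equations A8–A10) follows, assuming the two regions arrive as boolean masks; scipy's Euclidean distance transform stands in for the distance functions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.spatial import ConvexHull

def bounding_region(dark_mask, bright_mask):
    """Fill the gap between a dual-region pair and compute the convex hull
    (Step 4; cf. Equations A8-A10). Masks are boolean images of one cone."""
    # Distance of every pixel to each region (0 inside the region itself).
    D_dark = distance_transform_edt(~dark_mask)
    D_bright = distance_transform_edt(~bright_mask)
    # Shortest distance between the two contours (Equation A9).
    d = min(D_dark[bright_mask].min(), D_bright[dark_mask].min())
    # Gap: points within d of both contours (Equation A10).
    gap = (D_dark <= d) & (D_bright <= d)
    combined = dark_mask | bright_mask | gap
    pts = np.argwhere(combined)             # (row, col) coordinates
    hull = ConvexHull(pts)                  # smallest convex polygon
    return combined, pts[hull.vertices]
```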
Recovery Procedure
A recovery procedure is applied to cones that contain only a single region (Figs. 1C–E, white arrows). First, we generate an intensity histogram of all dark regions that were identified from the previous steps. Next, for each cone with a single bright region, we define a trapezoidal arc search area starting from the region detection of the current bright region to find a seed point in the missing dark region. The missing dark region is recovered through a fast marching algorithm25 to grow the seed point. Following recovery, the convex hull algorithm can be used to determine the bounding regions for dual regions, except that there are no filled gaps. In the opposite case (dark regions missing bright regions), the same recovery process is applied except that we invert the intensity values of the original split detection image as \(\bar I(x,y) = 255 - I(x,y)\), where the maximum intensity value is 255. Thus, dark regions are changed to bright, and bright ones are inverted to dark for the purposes of recovery. 
Step 5: Circular Template Construction
This step aims to create circular templates from bounding regions to help constrain actual cone segmentation. An ellipse is fit to each bounding region from the previous step, with minor and major axes \(r_1\) and \(r_2\), respectively. A circular template is constructed centered on the bounding region with radius given by \((r_1 + r_2)/2\), based on the assumption that cone photoreceptors are circular in shape (Fig. 1F, yellow circles). Since these circular templates are constructed dynamically from the image itself, they are already placed in the desired image locations to constrain cone segmentation. 
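The ellipse fit is not specified in detail; the sketch below uses second-order image moments of the bounding region as one plausible implementation (the function name is ours).

```python
import numpy as np

def circular_template(region_mask):
    """Fit an ellipse to a bounding region via central second moments and
    return the circular template of Step 5 (center, radius = (r1 + r2)/2).
    For a filled ellipse, each covariance eigenvalue equals (semi-axis)^2/4."""
    ys, xs = np.nonzero(region_mask)
    center = (xs.mean(), ys.mean())
    cov = np.cov(np.vstack([xs - center[0], ys - center[1]]))
    r1, r2 = 2.0 * np.sqrt(np.sort(np.linalg.eigvalsh(cov)))  # minor, major
    return center, 0.5 * (r1 + r2)
```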
Step 6: Circularly Constrained Active Contour Model
The goal of this step is to find cone boundary contours at locations of high image gradient magnitude, guided by the circular templates from Step 5. Level-set propagation is used to reduce oversegmentation of any cone photoreceptors that touch (e.g., at lower eccentricities, where they are closer together). The level set is initialized by using the circular template contour, and image forces are defined as the weighted linear combination of image gradient magnitudes and normalized distance from the circular template boundary (Appendix, Equation A16). Level sets are iteratively pulled toward cone boundaries by these image forces (Fig. 1G, cyan contours). The image forces reach a minimum when the contour approaches the boundary balanced between the image gradients and the circular template, at which point the iterations are stopped. 
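A hedged sketch of this step follows, using scikit-image's morphological geodesic active contour as a stand-in for the paper's level-set implementation; the speed image combines an edge-stopping term with a circular-template distance penalty in the spirit of Equation A16, and the weights are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import morphological_geodesic_active_contour

def ccacm_segment(image, center, radius, w_grad=0.7, w_circ=0.3, n_iter=100):
    """Level-set propagation constrained by a circular template (Step 6).
    The contour tends to stop where the speed image is small: near strong
    image gradients and near the template circle."""
    image = image.astype(float)
    gy, gx = np.gradient(gaussian_filter(image, 1.0))
    edge_stop = 1.0 / (1.0 + np.hypot(gx, gy))   # small at strong edges
    yy, xx = np.mgrid[:image.shape[0], :image.shape[1]]
    r = np.hypot(xx - center[0], yy - center[1])
    circ_penalty = np.abs(r - radius) / radius   # normalized template distance
    speed = w_grad * edge_stop + w_circ * circ_penalty
    init = (r <= radius).astype(np.int8)         # initialize with the template
    return morphological_geodesic_active_contour(
        speed, n_iter, init_level_set=init, smoothing=1)
```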
Step 7: Active Contour Refinement
Cone boundary contours from CCACM often contain jagged edges due to pixel discretization (Fig. 1G, cyan contours), which can introduce quantization errors into the computation of cone photoreceptor diameters. Moreover, cones do not appear to have jagged edges in images (or if they do, it is below the resolution limit of the current system). Therefore, the purpose of this step is to reduce the effects of pixelation and to achieve subpixel segmentation. Instead of using level-set propagation to refine contours, an explicit active contour model (snake)24 is used because it contains a higher-order contour smoothness term. Enforcing the smoothness term relaxes the contour to remove the jagged edges in subpixel space (Fig. 1H, green contours). 
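The refinement could be expressed with scikit-image's snake implementation, as in the hedged sketch below; alpha (elasticity) and beta (rigidity, the higher-order smoothness term) are illustrative values, not the paper's parameters.

```python
import numpy as np
from skimage.segmentation import active_contour

def refine_contour(image, contour_rc, alpha=0.05, beta=5.0):
    """Subpixel snake refinement (Step 7). `contour_rc` is an (N, 2) array
    of (row, col) boundary points from the level set; a large beta weights
    the smoothness term that irons out pixelated, jagged edges."""
    return active_contour(image.astype(float), contour_rc,
                          alpha=alpha, beta=beta, w_edge=1.0, gamma=0.01)
```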
Cone Diameter Measurement
The area enclosed by each cell contour is calculated by treating the contour as a polygon.41 Cone diameter is then defined as the diameter of the circle with equivalent area. 
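Concretely, this amounts to the shoelace polygon area and an area-equivalent circle, as in the minimal sketch below (the function name is ours).

```python
import numpy as np

def equivalent_diameter(contour):
    """Area of a closed contour via the shoelace (polygon) formula and the
    diameter of the circle with the same area; contour is (N, 2) (x, y)."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return 2.0 * np.sqrt(area / np.pi)
```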
Pixel Sampling
Pixel sampling is determined during image acquisition by the field of view (FOV): larger FOVs reduce the number of pixels within each cone, which can destabilize some of the pixel-based image operations in the segmentation algorithm. A commonly used FOV for AO imaging (e.g., ∼0.46 mm, corresponding to 1.5° of visual angle) can have cones that are only approximately 11 pixels across, even at eccentric locations (Fig. 2). In contrast, smaller FOVs lead to larger numbers of pixels within each cone, but may be less ideal for image acquisition owing to the need to acquire more image locations and the increase in retinal light exposure when compared to larger FOVs. Therefore, determining a pixel sampling that yields accurate diameter estimates is important. Ten retinal regions of interest (ROIs) (60 × 60 μm) from 10 healthy subjects at an eccentricity of approximately 4.5 mm were selected to determine the optimal FOV for cone diameter estimation. Each area was imaged with three different FOVs: 0.23, 0.30, and 0.46 mm. Cone contours were manually marked in the image with the 0.23-mm FOV (Fig. 2, red contours) because it contained the highest pixel sampling. Manually marked contours, including repeated markings from the same expert grader, were compared with the results from CCACM. Segmentation accuracy was evaluated by using the average absolute diameter difference (ADD) and relative diameter difference (RDD) as defined in Table 1. All subsequent results were generated from optimized pixel samplings. 
Figure 2
 
FOV (pixel sampling) optimization. Comparison of cone segmentation results on images acquired at the same retinal region of subject 5 with different FOVs: (A) 0.23 mm (cone diameter ∼22 pixels); (B) 0.30 mm (∼16 pixels); (C1) 0.46 mm (∼11 pixels); (C2) 0.46 mm upsampled to 0.23 mm (∼22 pixels). For visualization purposes, all images are displayed at the same scale. CCACM segmentation results are shown in green contours, and manual markings in red. Since cone photoreceptors are composed of intensity patterns of half dark and half bright regions, the width of each region will contain even fewer pixels (approximately half the cone diameter). This can lead to pixel-sampling–induced errors in segmentation. The cone indicated by white arrows was oversegmented in (B) and (C1), but not (A), while the one indicated by blue arrows was significantly oversegmented in (C1), but not in (A) and (B). These oversegmentation issues can be addressed as shown in (C2) by upsampling the image to provide sufficient pixel numbers within each cone. Scale bar: 10 μm.
Table 1
 
Five Metrics to Evaluate the Segmentation Accuracy Between Manually Marked and Automatically Segmented Cone Boundary Contours
Validation of Cone Segmentation Results
Ten retinal ROIs (60 × 60 μm) from 10 healthy subjects at the eccentricity of approximately 4.5 mm from fixation were selected to evaluate segmentation accuracy. At this eccentricity, approximately half of the image is covered by cones, while the other half is not, providing equal opportunity for false positives and false negatives to appear. These 10 retinal ROIs are different from the ones used for the FOV optimization. All cone photoreceptors from these ROIs were manually segmented by a single expert grader to generate ground truth. For evaluation of segmentation accuracy, in addition to ADD and RDD, we also used average symmetric contour distance (ASD), root mean square symmetric contour distance (RMSD), and maximum symmetric contour distance (MSD) (Table 1). These metrics are widely used to evaluate accuracy in organ and tumor segmentation.42 
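Table 1 is not reproduced here; the sketch below assumes the standard definitions of these contour metrics (and of ADD/RDD as absolute and relative diameter differences), as commonly used in the segmentation literature.42

```python
import numpy as np
from scipy.spatial.distance import cdist

def contour_distances(A, B):
    """ASD, RMSD, and MSD between two sampled contours A and B ((N, 2)
    arrays), using their symmetric nearest-point distances."""
    d = cdist(A, B)                       # all pairwise point distances
    both = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return both.mean(), np.sqrt((both ** 2).mean()), both.max()

def diameter_differences(d_auto, d_manual):
    """ADD and RDD, assuming ADD = mean |d_auto - d_manual| over cones and
    RDD = the same difference relative to the manual diameter."""
    diff = np.abs(np.asarray(d_auto) - np.asarray(d_manual))
    return diff.mean(), (diff / np.asarray(d_manual)).mean()
```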
Owing to the “3D” appearance of cells, the precise cell boundary can be difficult to define (e.g., whether and how much of the “shadow” to include in the cell). To better quantify this issue, for each ROI, five cone photoreceptors were randomly selected (approximately half of the cones that are fully contained within each ROI; Supplementary Fig. S1); we then asked the same grader to repeat manual segmentation on unmarked versions of these cones 8.5 months afterwards (at which point the grader did not retain any memory of the previous markings). This grader did not have any knowledge of the automated segmentation results until the completion of the study. Differences between the first and second contours were analyzed to understand repeatability. We also compared automated segmentation results with the first manual marks, second manual marks, and average of two manual marks. 
Quantification of Cone Diameters in Relation to Published Studies
As reported previously,17 cone photoreceptors cannot currently be reliably resolved by using split detection in the foveal center of healthy retinas. Therefore, at eccentricities ranging from approximately 1.35 to 6.35 mm along the temporal direction, 146 ROIs (70 × 70 μm each) were selected from 10 healthy subjects. Whenever needed, the ROIs were slightly shifted to avoid blood vessels. Following automated CCACM segmentation, data were manually corrected to remove erroneous identifications for the purposes of establishing a preliminary normal database. Cone diameters were computed from the normal database and plotted against published histology and in vivo data.11,12,38,43 
Cone diameter is a region-based quantitative measurement, which differs from existing point-based measurements such as cone density and spacing. Whether there are relationships between the two remains to be explored. Therefore, we also computed the cone density and spacing for the same 146 ROIs, using a previously developed implementation9 of the density recovery profile approach.44,45 Univariate linear regression was performed to assess the correlation between diameter and density as well as diameter and spacing, with 1-way analysis of variance performed to determine statistical significance. Differences between cone diameters measured in males and females were compared by using a 2-tailed, unpooled, paired t-test; axial length and age between males and females were compared by using a 2-tailed, unpooled t-test. Moreover, the coefficient of variation was used to evaluate the relative variability of cone diameters at each ROI by dividing the standard deviation of cone diameters by their mean at each eccentricity. 
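The regression and coefficient-of-variation computations might look like the following sketch; the arrays are placeholders standing in for the per-ROI measurements, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
diameter = rng.normal(7.0, 0.5, 146)       # placeholder per-ROI values (um)
density = 12000 - 900 * diameter + rng.normal(0, 300, 146)   # cones/mm^2
spacing = 1.1 * diameter + rng.normal(0, 0.3, 146)           # um

# Univariate linear regression of diameter against density and spacing.
for name, x in [("density", density), ("spacing", spacing)]:
    fit = linregress(x, diameter)
    print(f"diameter vs {name}: R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")

# Coefficient of variation of cone diameters within one ROI.
print("CV =", diameter.std() / diameter.mean())
```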
Demonstration of Value of Cone Diameter Measurements in a Patient With Late-Onset Retinal Degeneration
Cone diameter measurements were performed on images from a patient with late-onset retinal degeneration whose eye contained reticular pseudodrusen lesions46 that were visible with multimodal imaging, including AOSLO, as has been previously described.47 Here, we selected four ROIs where single reticular pseudodrusen were clearly visible as hyperreflective regions on confocal reflectance AOSLO across a range of eccentricities (2.85, 3.52, 4.39, and 4.58 mm; each ROI was 175 × 175 μm). CCACM, followed by manual correction, was used to segment cone photoreceptors and measure cone diameters and the coefficient of variation on the selected regions corresponding to lesion (cones directly above individual reticular pseudodrusen) and nonlesion (neighboring cones not directly above individual reticular pseudodrusen). For the manual correction, three expert graders independently identified cones inside ROIs and only cones with agreement between at least 2 of 3 graders were selected (the purpose of this was to ensure agreement of cone identification only, before any segmentation analysis). To calculate cone spacing and density in the nonlesion area, we used ROIs of size 70 × 70 μm. For each lesion, cone diameters in lesion and nonlesion areas were compared by using a 2-tailed, unpooled t-test. Nonlesion cone diameters were compared to the expected normal cone diameter from our data by using a 2-tailed, unpooled, paired t-test. Finally, nonlesion cone spacing and density were compared to expected normal values from histology48 by using a 2-tailed, unpooled, paired t-test. 
Results
Selection of Field of View (Pixel Sampling) for CCACM-Based Cone Segmentation
The average ADD on all 10 ROIs was 0.92, 1.05, and 1.55 μm for the FOVs of 0.23, 0.30, and 0.46 mm, respectively (Fig. 2; Table 2). The automated segmentation accuracy gradually decreased with increasing FOV, suggesting that the FOV of 0.23 mm is the best choice for the proposed implementation of CCACM. Oversegmentation was the primary source of reduced segmentation accuracy at the larger FOVs (Figs. 2B, 2C1), caused by inaccurate dual region segmentation. In addition, the multiscale Hessian matrix requires a sufficient number of image points to be stable (Equation A7). Nevertheless, we found that segmentation accuracy could be nearly fully recovered by upsampling (bicubically increasing pixel density) the 0.46-mm FOV to 0.23 mm (Fig. 2C2). Interestingly, segmentation accuracy remained stable and sometimes even dropped with further upsampling (to an equivalent FOV of 0.15 mm or beyond), owing to inhomogeneities inside the dark and bright regions within cones that were emphasized by the additional upsampling. Upsampling also increased computational cost. In our instrument, the 0.23-mm FOV is the smallest FOV that we use. Therefore, the optimal pixel sampling corresponded to a FOV of 0.23 mm, which was used to perform cone segmentation and diameter estimation in this work. 
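The bicubic upsampling mentioned above can be expressed with scipy's cubic-spline zoom; a minimal sketch (the function name is ours):

```python
from scipy.ndimage import zoom

def upsample_bicubic(image, factor=2.0):
    """Bicubic (cubic-spline, order 3) upsampling, e.g., a 0.46-mm FOV image
    upsampled 2x to match the pixel sampling of the 0.23-mm FOV."""
    return zoom(image.astype(float), factor, order=3)
```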
Table 2
 
Evaluation of Segmentation Accuracy on the Same Retinal Regions With Different Square Fields of View on 10 Subjects
Accuracy of Cone Segmentation
CCACM accurately segmented cone photoreceptors in 10 subjects with varying levels of image quality: ASD, RMSD, MSD, ADD, and RDD were all within a small fraction of the actual cone size (Fig. 3; Table 3). For the 0.23-mm FOV, the diameter of a cone photoreceptor is approximately 22 pixels. This means that the average contour difference between manually marked and automatically segmented cones is less than 2 pixels (ASD). The difference in cone diameter is also approximately 2 pixels (ADD). The computation time to segment cones in a 70 × 70 μm ROI (approximately 200 × 200 pixels) was 1.6 seconds on a standard Windows 7 64-bit PC (Microsoft, Redmond, WA, USA; Intel quad-core i7-3770 3.4-GHz CPU, released April 2012; 16 GB of RAM). CCACM was implemented in single-threaded C++. There were five main types of discrepancies between manually marked and automatically segmented cones: (1) contour displacements due to different definitions of where the cone photoreceptor boundary should be (Figs. 3A1, 3A2; white arrows), (2) undersegmentation caused by strong edge effects between dark and bright regions that mislead active contour propagation (Figs. 3B1, 3B2), (3) oversegmentation due to homogeneous regions with particularly weak cell boundaries, which fail to stop propagation (Figs. 3C1, 3C2), (4) incomplete image information due to cells being at or near image boundaries (Figs. 3D1, 3D2), and (5) ambiguous cell boundaries due to the presence of multiple edges that masquerade as boundaries (Figs. 3E1, 3E2). 
Figure 3
 
Cone segmentation results comparing automated segmentation to manual marking for subjects 2, 3, 6, 8, and 10, corresponding to each column. Note the variation in image quality. Green contours, automated segmentation; red contours, manual marking. Five example cones from five subjects (white arrows) are selected to illustrate the difference between segmentation and manual marking. (A1, A2) Disagreement of contour placement due to different assumptions of cell boundaries; (B1, B2) undersegmentation caused by the attraction of strong image edges in the cone dark region; (C1, C2) oversegmentation because of the large circular template; (D1, D2) false cell segmentation near the image boundaries; (E1, E2) ambiguous cell boundaries due to multiple image edges. Nevertheless, most cone photoreceptors are accurately segmented and their boundaries are in good agreement with manual marking. Scale bar: 10 μm.
Table 3
 
Evaluation of Segmentation Accuracy on 10 Subjects
Manual segmentation of cone photoreceptors was repeatable, with average ASD, RMSD, MSD, ADD, and RDD also within a small fraction of cone size when comparing data from the same grader repeated 8.5 months apart (Supplementary Fig. S1). These correspond to contour differences of approximately 1 pixel (ASD) and diameter differences of approximately 2 pixels (ADD). However, for a small subset of cones, there were some notable differences (Supplementary Figs. S1D2, S1E2; white arrows). Therefore, we compared our automated segmentation results to both manual segmentations and to the average of the two manual segmentations. The difference between CCACM and the average segmentation was close to the difference between the two manual segmentations, suggesting that the accuracy of the automated segmentation approaches the achievable accuracy of manual segmentation, but with the benefit of zero variation (since the algorithm will always return the same results for the same data). 
Normal Database of Cone Diameters Across Eccentricities
A total of 7441 cone photoreceptors were automatically segmented from 10 subjects across eccentricities ranging from 1.35 to 6.35 mm along the temporal direction. Erroneously segmented cones (defined as the five types of errors described in the validation section above) were manually excluded to ensure high data quality for the purposes of establishing a normal database (Fig. 4, blue contours). The remaining 5191 cones were selected for inclusion in the normal database. Segmented cones were grouped every 0.3 mm from 1.35 to 6.35 mm (18 bins total). Within each group, the mean and standard deviation of cone diameters and actual eccentricities were computed and plotted against published histology and in vivo data. The resulting automatically segmented cone diameters were in good agreement with existing histology and published data (Fig. 5), except for the results from Andrade da Costa,43 possibly owing to species-dependent variation (Cebus apella) (Table 4). There is very little human-based data on cone photoreceptor diameters (to our knowledge, the only such study is that of Scoles et al.,11 and none has measured a large number of cones). Our study was in good agreement with the human histology data from Scoles et al.11 and was consistent with the two other diameter measurements based on split detection images.11,12 Interestingly, there was a reduction in cone diameter at the eccentricity of 5.5 to 6.0 mm, consistent with results from Scoles et al.11 This may potentially be related to an increased presence of rods48 or a change in the density of RPE cells49 at this eccentricity. Although additional subjects are needed to confirm this finding, cone diameters in male subjects were slightly smaller at most eccentricities than those in females (Supplementary Fig. S2, P < 0.01), which could not be adequately explained by differences in axial length (male: 24.6 ± 1.5 mm, female: 23.8 ± 0.5 mm; P = 0.35) or age (male: 27.6 ± 7.0 years, female: 25.0 ± 3.3 years; P = 0.53). Two male subjects had higher myopia (Supplementary Table S1). The coefficient of variation was also plotted (Supplementary Fig. S3, top row). 
Figure 4
 
Example of cone diameter computation on subject 6 at eccentricities ranging from 1.60 to 6.56 mm. Top row: Black squares (solid and dotted) indicate regions of interest for diameter computation along the temporal direction. Five representative squares (solid lines) at the eccentricities of 1.65, 3.04, 3.90, 5.23, and 6.37 mm are selected for illustration. (A1–E1) AOSLO images corresponding to the five solid squares. (A2–E2) Corresponding cell segmentation results, where cone contours in green were used for diameter computation, and contours in blue were excluded. Five types of cone segmentations were excluded from the computation: cell misidentification (white arrow; A1, A2), weak cell boundary (B1, B2), image boundary (C1, C2), oversegmentation (D1, D2), and possible image artifacts (E1, E2). Scale bars: 100 μm (top) and 10 μm (bottom).
Figure 5
 
Comparison of cone diameters calculated from the circularly constrained active contour model in 10 subjects with histology data and results from the existing literature (Table 4). In our results, vertical bars denote one standard deviation of cone diameter, and horizontal bars one standard deviation of eccentricity.
Table 4
 
Subject Information for Cone Inner Segment Diameter Calculation in the Existing Histology Studies
Relationship Between Cone Size and Packing
There was a statistically significant correlation between cone diameter (averaged across the ROI) and both cone density and spacing (negative and positive, respectively; P < 0.001 for both; Fig. 6). This is surprising given the large intersubject variability that is observed in cone density, spacing, and diameter when viewed independently. This intersubject variability is most apparent at eccentricities of 4 to 6 mm (Supplementary Fig. S4). In the case of cone density versus cone diameter, the inverse quadratic fit was only slightly better than the inverse linear fit (R2 = 0.45 compared to R2 = 0.43), which is likely due to the presence of rods at the eccentricities plotted. 
Figure 6
 
Correlations between cone diameters with cone density and spacing measured in the same cells. Each dot represents data from one ROI, color-coded for the 10 subjects. Cone diameters represent the average diameters across the ROI.
Quantification of Cone Swelling in Patient Data
In all four ROIs, cones were significantly enlarged above reticular pseudodrusen lesions compared to neighboring cones from the same patient (P < 0.05 for each of the four lesions; Fig. 7, Table 5). On average, cones above lesions were 23.0% larger than their neighboring nonlesion cones. There was no difference between cone diameters measured in nonlesion areas and the expected normal cone diameter from our data (P = 0.41). A preliminary assessment of the relative variance of the cone diameters (based on calculation of the coefficient of variation in a sliding window across the image, as shown in Supplementary Fig. S3) showed that cone diameters were often more irregular than the expected values from the normal database. The size of the sliding window was matched to that used for the computation of the normal database (70 × 70 μm). We were not able to reliably quantify cone spacing and density for cones above lesions because the individual lesions were too small, and not all cones could be reliably identified owing to image quality issues. Computation of cone density is particularly sensitive to misidentification of even a single cone when the size of the region is small (e.g., lesion areas in Fig. 7D3). This fundamental limitation of cone spacing and density (i.e., that they must be computed over a contiguous array of cones) illustrates the potential advantage of cone diameter (which can be computed on a handful of noncontiguous cones). The differences in cone spacing and density between nonlesion cones and expected values from histology48 were not statistically significant (P = 0.06 and P = 0.16, respectively). 
Figure 7
 
Cone segmentation results on four retinal regions (columns) from a patient with late-onset retinal degeneration with individual reticular pseudodrusen lesions (black dashed ellipses). The numbers above the images denote the retinal eccentricity (temporal to the fovea). Row 1: Input images. Neighboring nonlesion ROIs are shown as dashed white squares. Cone photoreceptors appear to be enlarged in lesion areas. Row 2: Results from CCACM. Green contours are correct segmentations, while blue contours are segmentation results that were manually removed according to the criteria defined in the Results section. Row 3: Contours within lesion and nonlesion ROIs were manually corrected. Green contours are untouched from CCACM; red contours were manually added. Scale bar: 50 μm.
Table 5
 
Comparison of Cone Density and Spacing Between the Patient and Healthy Subjects at Four Different Eccentricities
Discussion
We provided the first demonstration of a fully automated region-based cone segmentation algorithm (CCACM) for split detection images of cone photoreceptors. CCACM can quickly and accurately segment cone photoreceptors on split detection AOSLO images through dynamically constructed circular shape priors and active contour propagation. We further demonstrated the utility of using CCACM to efficiently construct a normal database of cone photoreceptor diameters across a wide range of eccentricities, from 1.35 to 6.35 mm, and presented the largest database to date, consisting of 5191 cones from 10 subjects. Our results are consistent with previously reported values. Finally, application of this approach to a patient with late-onset retinal degeneration to assess cone photoreceptors above individual reticular pseudodrusen lesions46 demonstrates the clinical promise of cone diameter as a potential biomarker for disease. 
Cone density and spacing are the two most commonly used quantitative metrics for cone photoreceptors, but both suffer from the requirement that a sufficiently large, contiguous patch of cones must be completely and accurately identified. Cone density, in particular, is highly sensitive to misidentification of even a single cone for small ROIs. Although the ROIs above individual reticular pseudodrusen lesions are too small to be used to reliably quantify spacing and density, we illustrated that cone diameter can be measured even on a handful of noncontiguous cones, which is a major advantage for studying data from patients with poor image quality. Here, we showed that even on a lesion-by-lesion basis, we can find patches of cones that are significantly enlarged when compared to their immediate neighbors. In the future, we plan to investigate these photoreceptor-based changes in the context of the underlying RPE (Liu J, et al. IOVS 2017;58:ARVO E-Abstract 304).50 Automating cone diameter measurements could have important applications in the diagnosis and management of many diseases, and in the development and testing of new treatments, for conditions including glaucoma,51 retinitis pigmentosa,21 achromatopsia,52 rod monochromacy,10 cone-rod dystrophy,53 and macular telangiectasia type 2 (Scoles DH, et al. IOVS 2014;55:ARVO E-Abstract 5951). 
Interestingly, cone diameter was correlated to both cone spacing and cone density. It may be possible to reduce intersubject variability in a dataset of cone spacing or cone density by taking into consideration cone diameters. Generally, one would expect cells to expand to fill the space that is available surrounding them, but studying whether there are natural size constraints to cells relative to their packing could be an important step toward understanding how and why cones become enlarged in disease. In this patient, cones were enlarged by an average of 23.0% (and by as much as 30.5% in one of the lesions). In the future, incorporating rod photoreceptors and Müller cells into a model of size and packing will give a more complete understanding. 
Given the “3D” appearance of cone photoreceptors in split detection images, we found that it can be challenging to precisely define the location of cell boundaries (e.g., in the presence of “shadows”). We propose the following guidelines based on our experiences: an object is considered as a cone photoreceptor in a split detection image if (1) it has a “3D” appearance with dark and bright regions on both sides and (2) the cone boundary is defined as the locations with the largest intensity gradient magnitude. This second criterion is an important one, since the cone boundaries are not sharp edges. 
CCACM is a natural extension of our previous cone identification method,17 as that framework provides the detections of dark and bright regions that initialize CCACM. We expect that the integration of CCACM with cone identification algorithms can further improve robustness, particularly for the reduction of false positives. CCACM also has limitations, and there are areas for future work. First, many of the parameters used in CCACM were empirically found to be optimal for a 0.23-mm FOV. Further optimization of parameters throughout the algorithm may lead to additional improvements in performance. Second, incorporation of noncircular or other geometric templates may be necessary in cases when cells are significantly elongated. Third, we selected approximately 70% of the automatically segmented cones for inclusion in the normal database, comprising only those cones with the clearest boundaries. Most of the excluded cone photoreceptors were located near the fovea, where dense packing results in a higher likelihood of ambiguous boundaries (due to boundaries of neighboring cones interacting with each other). Nevertheless, the smaller size of cones and higher density near the fovea meant that a similar number of cones could still be selected for the normal database even with a higher exclusion rate. While we do not expect that exclusion of cones with weak boundaries will affect the accuracy of the normal database, in the future, further enhancement of segmentation accuracy for crowded cones could be achieved by training a cone boundary classifier through deep learning54,55 and applying the classifier to separate touching cells.56 Fourth, in addition to cone diameter, second-order metrics based on cone segmentation might lead to additional insights about diseases such as Best disease, in which cones have been reported to be noncircular.50 Finally, the CCACM approach will also be useful for other recently demonstrated nonconfocal AO images that contain pairs of bright and dark regions, such as ganglion cells imaged by using offset-aperture AO57 or translucent cells (e.g., horizontal cells) imaged in mice by using knife-edge AO.58 
In conclusion, CCACM provides a novel way to quickly and accurately perform cone segmentation across a wide range of eccentricities. This enabled us to efficiently construct the largest normal database to date of cone photoreceptor diameters, illustrating the potential for CCACM as an analysis tool for quantifying photoreceptor changes that occur owing to disease, which might not be captured by existing approaches. 
Acknowledgments
The authors thank Tao Liu for assistance with adaptive optics data; Catherine Cukras, Laryssa Huryn, Wadih Zein, Henry Wiley, Angel Garced, John Rowan, Patrick Lopez, Sharon Yin, Christina Appleman, Gloria Babilonia-Ayukawa, Mike Arango, and Denise Cunningham for assistance with clinical procedures; Robert Cooper, Pavan Tiruveedhula, and Austin Roorda for sharing cone photoreceptor analysis software; and Howard Metger for custom machining for adaptive optics instrumentation. 
Supported by the Intramural Research Program of the National Eye Institute (National Institutes of Health), the Alcon Research Institute, the Glaucoma Research Foundation Catalyst for a Cure Initiative, and National Institutes of Health Grant U01 EY025477. 
Custom codes that have not been previously described in the published literature used in this research will be made available for research usage upon reasonable request to the corresponding author. 
Disclosure: J. Liu, None; H. Jung, None; A. Dubra, Athena Vision (C), P; J. Tam, None 
Appendix
Dual Region Segmentation
Let I(x,y) : ℝ2 → ℝ be the intensity function of a split detection image. Dual-region segmentation can be considered as the search of image regions with local minimum-maximum pairs, corresponding to the dark and bright region pairs associated with each cone photoreceptor. If an image point Display Formula\(\left( {x,y} \right)\) is a local minimum in a dark region, then Display Formula\(I\left( {x,y} \right) \lt I\left( {x + \Delta x,y + \Delta y} \right)\), where Display Formula\(\left( {\Delta x,\Delta y} \right)\) denotes a displacement vector; if it has local maximum value inside a bright region, then Display Formula\(I\left( {x,y} \right) \gt I\left( {x + \Delta x,y + \Delta y} \right)\). In the vicinity of the extrema, we can use the Taylor series to get:  
\begin{equation}\tag{\it A\rm1}I\left( {x + \Delta x,y + \Delta y} \right) \approx I\left( {x,y} \right) + \left( {{I_x},{I_y}} \right){\left( {\Delta x,\Delta y} \right)^T} + {1 \over 2}\left( {\Delta x,\Delta y} \right)H\left( {x,y} \right){\left( {\Delta x,\Delta y} \right)^T}{\rm ,}\end{equation}
where Display Formula\(H\) is the Hessian matrix of Display Formula\(I\left( {x,y} \right)\) given by:  
\begin{equation}\tag{\it A\rm2}H\left( {x,y} \right) = \left( {\matrix{ {{I_{xx}}}&{{I_{xy}}} \cr {{I_{xy}}}&{{I_{yy}}} \cr } } \right){\rm .}\end{equation}
Note that the linear term in (Δx, Δy) vanishes because (x, y) are extrema and thus, the derivative of the intensity at these points is zero. Therefore, Equation A1 becomes:  
\begin{equation}\tag{\it A\rm3}I\left( {x + \Delta x,y + \Delta y} \right) - I\left( {x,y} \right) \approx {1 \over 2}\left( {\Delta x,\Delta y} \right)H\left( {x,y} \right){\left( {\Delta x,\Delta y} \right)^T}{\rm .}\end{equation}
Display Formula\(H\left( {x,y} \right) \gt 0\) and Display Formula\(H\left( {x,y} \right) \lt 0\) (positive definite and negative definite matrices, respectively) mean the corresponding point Display Formula\(\left( {x,y} \right)\) inside the dark and bright regions, respectively. Since Display Formula\(H\left( {x,y} \right)\) is a Display Formula\(2 \times 2\) symmetrical matrix in Equation 2, we have  
\begin{equation}\tag{\it A\rm4}H\left( {x,y} \right) \gt 0 \Rightarrow {\rm{det}}H\left( {x,y} \right) \gt 0 \ {\rm and}\ H\left( {x,y} \right) \lt 0 \Rightarrow {\rm{det}}H\left( {x,y} \right) \lt 0{\rm ,}\end{equation}
where \(\det H(x,y) = I_{xx}I_{yy} - I_{xy}^2\). Computing \(\det H(x,y)\) requires image derivatives, which is an ill-posed problem but can be made well posed via convolution with a Gaussian function \(G(x,y;t)\).59 One critical issue is the determination of \(t\), known as the scale selection problem.60–62 A multiscale image representation, \(L(x,y;t): {\mathbb R}^2 \times {\mathbb R}^+ \to {\mathbb R}\), is thus generated through convolution of \(I(x,y)\) with a set of Gaussian functions:  
\begin{equation}\tag{A5}L\left( x,y;t \right) = G\left( x,y;t \right)*I\left( x,y \right).\end{equation}
The scale parameter \(t\) ranges over \(\left[ 0.5, 7.7 \right]\), sampled geometrically as \(t_{k+1} = \gamma t_k\), with \(k\) denoting the scale level index. We selected \(\gamma = 1.2\) based on our earlier work.17 This leads to:  
\begin{equation}\tag{A6}\det H\left( x,y;t \right) = L_{xx}L_{yy} - L_{xy}^2.\end{equation}
 
Region growing is used to segment the dual regions; a neighboring point is included if it satisfies the following criterion:  
\begin{equation}\tag{A7}\hat t = {\rm{arg}}\mathop {\max }\limits_t \left| \det H\left( x,y;t \right) \right|\ {\rm and}\ \left| \det H\left( x,y;\hat t \right) \right| > {\varepsilon _H}.\end{equation}
Here, \(\varepsilon_H\) is a threshold, set to 100 as suggested by Mikolajczyk and Schmid.62 Note that the scale-normalized Hessian matrix is invariant to scale changes. If \(\det H(x,y;\hat t) > 0\), the current point belongs to a dark region; otherwise, it belongs to a bright region.  
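For concreteness, the following Python sketch implements the multiscale detection of Equations A5 through A7: it builds the Gaussian scale stack, keeps the determinant of the Hessian with the largest magnitude across scales at each pixel, and classifies pixels by its sign. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the use of SciPy's gaussian_filter, treating \(t\) as the Gaussian variance, the \(t^2\) scale normalization, and the finite-difference derivatives are all illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_dual_region_seeds(img, t_min=0.5, t_max=7.7, gamma=1.2, eps_h=100.0):
    """Classify pixels as dark/bright-region candidates (Equations A5-A7)."""
    img = np.asarray(img, dtype=float)
    best = np.zeros(img.shape)  # det H with largest magnitude across scales
    t = t_min
    while t <= t_max:
        L = gaussian_filter(img, sigma=np.sqrt(t))  # t taken as Gaussian variance
        Lxx = np.gradient(np.gradient(L, axis=1), axis=1)
        Lyy = np.gradient(np.gradient(L, axis=0), axis=0)
        Lxy = np.gradient(np.gradient(L, axis=1), axis=0)
        det_h = t**2 * (Lxx * Lyy - Lxy**2)  # scale-normalized det H (Eq. A6)
        update = np.abs(det_h) > np.abs(best)
        best[update] = det_h[update]
        t *= gamma  # geometric scale sampling, gamma = 1.2
    strong = np.abs(best) > eps_h  # threshold of Equation A7
    dark = strong & (best > 0)    # det H > 0: dark region
    bright = strong & (best < 0)  # otherwise: bright region
    return dark, bright
```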
Convex Hull Determination
Let \(C_D\) and \(C_B\) be the boundary contours of a connected pair of dark and bright regions, respectively. Using the Euclidean distance39 (denoted by ‖·‖) between a point \(p = (x,y)\) and the sets \(C_D\) and \(C_B\), we compute two distance functions:  
\begin{equation}\tag{A8}{D_D}\left( x,y \right) = \mathop {\min }\limits_{w \in {C_D}} \| p - w \| \quad {\rm and} \quad {D_B}\left( x,y \right) = \mathop {\min }\limits_{w \in {C_B}} \| p - w \|, \quad p = \left( x,y \right),\end{equation}
where \(p\) denotes points not on \(C_D\) or \(C_B\), respectively, and \(w\) denotes points on \(C_D\) or \(C_B\), respectively. 
The shortest distance \(d\) between \(C_D\) and \(C_B\) is calculated as  
\begin{equation}\tag{A9}d = {\rm{min}}\left\{ {\mathop {\min }\limits_{\left( {x,y} \right) \in {C_B}} {D_D}\left( {x,y} \right),\mathop {\min }\limits_{\left( {x,y} \right) \in {C_D}} {D_B}\left( {x,y} \right)} \right\}.\end{equation}
The gap is thus filled (Fig. 1E, green regions) by finding the set of image points \({\cal S}\) satisfying  
\begin{equation}\tag{A10}{\cal S} = \{ \left( {x,y} \right)|{D_D}\left( {x,y} \right) \le d{\rm{\ and\ }}{D_B}\left( {x,y} \right) \le d\}.\end{equation}
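A short Python sketch of this gap-filling step follows, using SciPy's exact Euclidean distance transform (an implementation of the algorithm of Maurer et al.39) on boolean region masks. The function name, and the use of region masks in place of explicit boundary contours, are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_gap(dark_mask, bright_mask):
    """Fill the gap between one dark/bright region pair (Equations A8-A10)."""
    # D_D, D_B: distance of every pixel to the nearest dark/bright pixel;
    # the transform measures distance to the nearest zero, hence the inversion.
    d_dark = distance_transform_edt(~dark_mask)
    d_bright = distance_transform_edt(~bright_mask)
    # Shortest distance d between the two regions (Equation A9).
    d = min(d_dark[bright_mask].min(), d_bright[dark_mask].min())
    # S: points within distance d of both regions (Equation A10).
    gap = (d_dark <= d) & (d_bright <= d)
    return dark_mask | bright_mask | gap
```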
Recovery process. The recovery of missing regions begins with seed point determination. Centered at the detection point of the bright region, a trapezoidal arc search area is set up as  
\begin{equation}\tag{A11}{\rm{\Omega }}\left( {x,y,{r_{min}},{r_{max}},\theta } \right) = \left\{ {\left( {x \pm r\cos\varphi ,y \pm r\sin\varphi } \right)|{r_{min}} \le r \le {r_{max}},{{180}^\circ } - \theta \le \varphi \le {{180}^\circ } + \theta } \right\}.\end{equation}
We use \(r_{min} = 5\), \(r_{max} = 15\) pixels, and \(\theta = 30^\circ\) in this work. We select the seed point within the search area that fulfills  
\begin{equation}\tag{A12}\left( {{x_s},{y_s}} \right) = {\rm{arg}}\mathop {\min }\limits_{\left( {x,y} \right) \in {\rm{\Omega }}} I\left( {x,y} \right)\ {\rm and}\ I\left( {{x_s},{y_s}} \right) < {\varepsilon _I},\end{equation}
where \(\varepsilon_I\) is the intensity value exceeding 80% of the samples in the histogram of the dark regions found in the previous steps.  
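The seed search can be sketched in a few lines of Python. For brevity, the sketch scans only one side of the \(\pm\) signs in Equation A11 (the arc opposite the bright-region detection); the function name and the nearest-pixel rounding are illustrative assumptions.

```python
import numpy as np

def recover_dark_seed(img, cx, cy, eps_i, r_min=5, r_max=15, theta_deg=30):
    """Search the trapezoidal arc for a dark seed point (Equations A11-A12)."""
    h, w = img.shape
    best_val, seed = np.inf, None
    for r in range(r_min, r_max + 1):
        for dphi in range(-theta_deg, theta_deg + 1):
            phi = np.deg2rad(180 + dphi)  # arc centered opposite the detection
            x = int(round(cx + r * np.cos(phi)))
            y = int(round(cy + r * np.sin(phi)))
            if 0 <= x < w and 0 <= y < h and img[y, x] < best_val:
                best_val, seed = img[y, x], (x, y)
    return seed if best_val < eps_i else None  # reject too-bright seeds (Eq. A12)
```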
Circularly Constrained Active Contour Model (CCACM)
If \(C(q): \left[ 0,1 \right] \to {\mathbb R}^2\) parameterizes a contour, then its Euclidean length is given by  
\begin{equation}\tag{A13}E\left( C \right) = \oint ds = \oint \left| {C^{\prime} \left( q \right)} \right|dq = \mathop \smallint \limits_0^1 \left| {C^{\prime} \left( q \right)} \right|dq.\end{equation}
 
The actual cone boundary contour, \(\hat C(q)\), is designed to stop near (1) the circular template boundary and (2) areas containing large image gradient magnitudes. The image forces are therefore composed of two corresponding speed terms, which reach a minimum where contour propagation should stop (i.e., the contour velocity should vanish when \(\hat C(q)\) is found). 
The influence of the circular templates (Fig. 1F, yellow circles) is introduced through \(T(x,y)\), defined as a normalized distance function (Equation A8) from the template boundary contour \(C_T(q)\), given by  
\begin{equation}\tag{A14}T\left( {x,y} \right) = {D_{normalized}}\left( {x,y} \right),\ {\rm where}\ {D_{normalized}}\left( {x,y} \right) \in \left[ {0,1} \right]{\rm .}\end{equation}
If an image point \((x,y)\) is close to \(C_T(q)\), then \(T(x,y)\) is close to 0; otherwise, it approaches 1. Therefore, \(T(x,y)\) can be used as a speed term that pulls \(\hat C(q)\) toward \(C_T(q)\).  
A second speed term, \(M(x,y)\), is motivated by the fact that \(\hat C(q)\) should settle on image points with high gradient magnitudes:  
\begin{equation}\tag{A15}M\left( {x,y} \right) = {1 \over {\left( {1 + {{\left| {\nabla I\left( {x,y} \right)} \right|}^2}} \right)}}\end{equation}
Note that the "1" in the denominator of Equation A15 is a tunable parameter that can be adjusted to modulate this speed term. Integrating Equations A14 and A15 into Equation A13, the search for the cone boundary contour \(\hat C(q)\) can be formulated as  
\begin{equation}\tag{A16}\hat C = {\rm{arg}}\mathop {\min }\limits_C E\left( C \right) = {\rm{arg}}\mathop {\min }\limits_C \mathop \smallint \limits_0^1 F\left( {C\left( q \right)} \right)\left| {C^{\prime} \left( q \right)} \right|dq, \quad F\left( {x,y} \right) = \alpha T\left( {x,y} \right) + \left( {1 - \alpha } \right)M\left( {x,y} \right),\left( {x,y} \right) \in C\left( q \right),\end{equation}
with \(\alpha = 0.7\) chosen to balance the influence of the circular template and the image gradients in this work. Equation A16 is similar to the geodesic active contour,63 except that the circular template term is included here. The Euler-Lagrange equation64 is used to minimize Equation A16,  
\begin{equation}\tag{A17}{{\partial C\left( {\tau ,q} \right)} \over {\partial \tau }} = \kappa F\vec {\cal N} - \left( {\nabla F \cdot \vec {\cal N}} \right)\vec {\cal N}{\rm ,}\end{equation}
where \(\kappa\) is the curvature of \(C\) and \(\vec {\cal N}\) is the unit inward normal. Equation A17 shows that minimizing Equation A16 amounts to propagating the active contour \(C\) over the time step \(\tau\). To maintain accuracy and numerical stability, an explicitly discretized contour must typically be rediscretized every few iterations, which is especially critical when the contour moves a large distance. To alleviate this issue, a level-set function \(\phi(x,y)\) is instead used to describe the active contour \(C\) implicitly, given by  
\begin{equation}\tag{A18}\phi \left( {x,y} \right) = \begin{cases} -D\left( {x,y} \right) & {\rm if}\ \left( {x,y} \right)\ {\rm is\ outside}\ C \\ \phantom{-}D\left( {x,y} \right) & {\rm otherwise} \end{cases},\end{equation}
where \(D(x,y)\) denotes the Euclidean distance of a point \((x,y)\) to \(C\) (Equation A8). The contour \(C\) is then implicitly represented as \(C = \{ \left( x,y \right)|\phi \left( x,y \right) = 0\} \). This implicit representation avoids explicit rediscretization of the contour, and Equation A17 is rewritten as  
\begin{equation}\tag{A19}{{\partial \phi } \over {\partial \tau }} = F\left| {\nabla \phi } \right|{\rm{div}}\left( {{{\nabla \phi } \over {\left| {\nabla \phi } \right|}}} \right) - \nabla F \cdot \nabla \phi {\rm ,}\end{equation}
where \({\rm{div}}\left( \cdot \right)\) denotes the divergence operator. The initial contour at \(\tau = 0\) is set to \(C_T(q)\), corresponding to the zero level set of \(\phi\) at \(\tau = 0\). Iterative evolution of the level-set function \(\phi(x,y)\) over \(\tau\) identifies the cone photoreceptor boundaries.  
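The following Python sketch evolves Equation A19 on the image grid: it builds the speed image \(F\) of Equation A16, initializes \(\phi\) as the signed distance of Equation A18 from the circular template, and iterates first-order finite-difference updates. The discretization, step size, and iteration count are illustrative assumptions; the paper does not specify these numerical details.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def speed_image(img, template_dist, alpha=0.7):
    """F = alpha*T + (1 - alpha)*M (Equations A14-A16)."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    M = 1.0 / (1.0 + gx**2 + gy**2)          # Equation A15
    # template_dist: Euclidean distance of each pixel to the template contour,
    # normalized here to [0, 1] as in Equation A14.
    T = template_dist / template_dist.max()
    return alpha * T + (1.0 - alpha) * M

def evolve_level_set(F, template_mask, dt=0.2, n_iter=200):
    """Iterate Equation A19 starting from the circular template contour."""
    # Signed distance: positive inside the template, negative outside (Eq. A18).
    phi = distance_transform_edt(template_mask) - distance_transform_edt(~template_mask)
    Fy, Fx = np.gradient(F)
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)
        mag = np.sqrt(gx**2 + gy**2) + 1e-8
        nx, ny = gx / mag, gy / mag
        # Curvature term: div(grad phi / |grad phi|).
        curvature = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
        phi = phi + dt * (F * mag * curvature - (Fx * gx + Fy * gy))  # Eq. A19
    return phi >= 0  # interior of the zero level set = segmented cone
```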
Active Contour Refinement
The explicit active contour model, also called the snake,24 is given by  
\begin{equation}\tag{A20}E\left( C \right) = \underbrace {\beta \int_0^1 {{{\left| {{C^{\prime}}\left( q \right)} \right|}^2}} dq + \zeta \int_0^1 {{{\left| {{C^{\prime\prime}}\left( q \right)} \right|}^2}} dq}_{{\rm{smoothness\ term}}} - \underbrace {\lambda \int_0^1 {{{\left| {\nabla I\left( {C\left( q \right)} \right)} \right|}^2}} dq}_{{\rm{Image\ term}}}{\rm .}\end{equation}
Here, we set \(\beta = 1\), \(\zeta = 1000\), and \(\lambda = 0.2\). This emphasizes the smoothness terms (especially the second one, which penalizes the contour's second derivatives and constrains it according to the thin-plate equation, effectively removing jagged edges). The image term still forces the contour to stop at image points with high gradients. The Euler-Lagrange equation64 is again used to minimize Equation A20:  
\begin{equation}\tag{A21}{{\partial C\left( {q,\tau } \right)} \over {\partial \tau }} = \beta C^{\prime\prime} \left( {q,\tau } \right) - \zeta C^{\prime\prime\prime\prime} \left( {q,\tau } \right) + \lambda \nabla \left( {{{\left| {\nabla I\left( {C\left( {q,\tau } \right)} \right)} \right|}^2}} \right)\end{equation}
Iteratively evolving \(C(q,\tau)\) over \(\tau\) yields the refined cone boundary contour.  
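A compact Python sketch of this refinement follows, taking explicit Euler steps on a closed contour of N points, with periodic (rolled) finite differences for \(C''\) and \(C''''\). The step size, iteration count, and nearest-pixel sampling of the image force are illustrative assumptions; the sign of the image force attracts the contour to strong edges, consistent with the energy in Equation A20.

```python
import numpy as np

def refine_contour(contour, img, beta=1.0, zeta=1000.0, lam=0.2,
                   dt=1e-5, n_iter=100):
    """Evolve a closed (N, 2) contour of (x, y) points by Equation A21."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    edge = gx**2 + gy**2        # |grad I|^2
    ey, ex = np.gradient(edge)  # grad(|grad I|^2)
    c = contour.astype(float)
    for _ in range(n_iter):
        c2 = np.roll(c, -1, 0) - 2 * c + np.roll(c, 1, 0)        # C''
        c4 = (np.roll(c, -2, 0) - 4 * np.roll(c, -1, 0) + 6 * c
              - 4 * np.roll(c, 1, 0) + np.roll(c, 2, 0))         # C''''
        xi = np.clip(np.round(c[:, 0]).astype(int), 0, img.shape[1] - 1)
        yi = np.clip(np.round(c[:, 1]).astype(int), 0, img.shape[0] - 1)
        force = lam * np.column_stack((ex[yi, xi], ey[yi, xi]))  # image term
        c = c + dt * (beta * c2 - zeta * c4 + force)             # Equation A21
    return c
```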
Cone Diameter Estimation
Given a cone boundary contour \(\hat C\) discretely represented by the points \(\left\{ {{p_0},{p_1}, \cdots ,{p_N}} \right\}\), where \({p_i} = \left( {{x_i},{y_i}} \right)\) and \(p_0 = p_N\) closes the contour, the polygon area enclosed by \(\hat C\) can be computed as41  
\begin{equation}\tag{A22}A = {1 \over 2}\mathop \sum \limits_{i = 0}^{N} {x_i}\left( {{y_{i + 1}} - {y_{i - 1}}} \right),\left( {{x_i},{y_i}} \right) \in \hat C.\end{equation}
Assuming the cone photoreceptor is circular in shape, the cone diameter \({\cal D}\) is calculated as  
\begin{equation}\tag{A23}{\cal D} = 2\sqrt {A/\pi } {\rm .}\end{equation}
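These two equations reduce to a few lines of Python; for a closed polygon, np.roll supplies the \(y_{i+1}\) and \(y_{i-1}\) neighbors, and the absolute value (an illustrative addition) makes the result independent of contour orientation.

```python
import numpy as np

def cone_diameter(contour):
    """Equivalent-circle diameter of a closed (N, 2) contour (Eqs. A22-A23)."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.sum(x * (np.roll(y, -1) - np.roll(y, 1))))  # Eq. A22
    return 2.0 * np.sqrt(area / np.pi)                              # Eq. A23
```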
 
References
1. Liang J, Williams DR, Miller DT. Supernormal vision and high-resolution retinal imaging through adaptive optics. J Opt Soc Am A Opt Image Sci Vis. 1997; 14: 2884–2892.
2. Roorda A, Duncan JL. Adaptive optics ophthalmoscopy. Annu Rev Vis Sci. 2015; 1: 19–50.
3. Jonnal RS, Kocaoglu OP, Zawadzki RJ, Liu Z, Miller DT, Werner JS. A review of adaptive optics optical coherence tomography: technical advances, scientific applications, and the future. Invest Ophthalmol Vis Sci. 2016; 57: OCT51–OCT68.
4. Talcott KE, Ratnam K, Sundquist SM, et al. Longitudinal study of cone photoreceptors during retinal degeneration and in response to ciliary neurotrophic factor treatment. Invest Ophthalmol Vis Sci. 2011; 52: 2219–2226.
5. Song H, Chui TYP, Zhong Z, Elsner AE, Burns SA. Variation of cone photoreceptor packing density with retinal eccentricity and age. Invest Ophthalmol Vis Sci. 2011; 52: 7376–7384.
6. Zayit-Soudry S, Sippl-Swezey N, Porco TC, et al. Repeatability of cone spacing measures in eyes with inherited retinal degenerations. Invest Ophthalmol Vis Sci. 2015; 56: 6179–6189.
7. Zhang T, Godara P, Blanco ER, et al. Variability in human cone topography assessed by adaptive optics scanning laser ophthalmoscopy. Am J Ophthalmol. 2015; 160: 290–300.e1.
8. Wells-Gray EM, Choi SS, Bries A, Doble N. Variation in rod and cone density from the fovea to the mid-periphery in healthy human retinas using adaptive optics scanning laser ophthalmoscopy. Eye (Lond). 2016; 30: 1135–1143.
9. Cooper RF, Wilk MA, Tarima S, Carroll J. Evaluating descriptive metrics of the human cone mosaic. Invest Ophthalmol Vis Sci. 2016; 57: 2992–3001.
10. Carroll J, Choi SS, Williams DR. In vivo imaging of the photoreceptor mosaic of a rod monochromat. Vis Res. 2008; 48: 2564–2568.
11. Scoles D, Sulai YN, Langlo CS, et al. In vivo imaging of human cone photoreceptor inner segments. Invest Ophthalmol Vis Sci. 2014; 55: 4244–4251.
12. Sun LW, Johnson RD, Langlo CS, et al. Assessing photoreceptor structure in retinitis pigmentosa and Usher syndrome. Invest Ophthalmol Vis Sci. 2016; 57: 2428–2442.
13. Li KY, Roorda A. Automated identification of cone photoreceptors in adaptive optics retinal images. J Opt Soc Am A Opt Image Sci Vis. 2007; 24: 1358–1363.
14. Chiu SJ, Lokhnygina Y, Dubis AM, et al. Automatic cone photoreceptor segmentation using graph theory and dynamic programming. Biomed Opt Express. 2013; 4: 924–937.
15. Bukowska DM, Chew AL, Huynh E, et al. Semi-automated identification of cones in the human retina using circle Hough transform. Biomed Opt Express. 2015; 6: 4676–4693.
16. Cooper RF, Langlo CS, Dubra A, Carroll J. Automatic detection of modal spacing (Yellott's ring) in adaptive optics scanning light ophthalmoscope images. Ophthalmic Physiol Opt. 2013; 33: 540–549.
17. Liu J, Jung H, Dubra A, Tam J. Automated photoreceptor cell identification on nonconfocal adaptive optics images using multiscale circular voting. Invest Ophthalmol Vis Sci. 2017; 58: 4477–4489.
18. Cunefare D, Cooper RF, Higgins B, et al. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images. Biomed Opt Express. 2016; 7: 2036–2050.
19. Bergeles C, Dubis AM, Davidson B, et al. Unsupervised identification of cone photoreceptors in non-confocal adaptive optics scanning light ophthalmoscope images. Biomed Opt Express. 2017; 8: 3081–3094.
20. Liu J, Cukras CA, Tam J. Quantitative analysis of photoreceptor swelling in late-onset retinal degeneration using adaptive optics. Invest Ophthalmol Vis Sci. 2016; 57: 3168–3168.
21. Murakami Y, Matsumoto H, Roh M, et al. Receptor interacting protein kinase mediates necrotic cone but not rod cell death in a mouse model of inherited degeneration. Proc Natl Acad Sci U S A. 2012; 109: 14598–14603.
22. Liu J, Dubra A, Tam J. A fully automatic framework for cell segmentation on non-confocal adaptive optics images. SPIE Medical Imaging. 2016; 9785: 97852J.
23. Xing F, Yang L. Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: a comprehensive review. IEEE Rev Biomed Eng. 2016; 9: 234–263.
24. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vision. 1988; 1: 321–331.
25. Sethian JA. Level Set Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge Monographs on Applied and Computational Mathematics (Book 3). Cambridge, UK: Cambridge University Press; 1996.
26. Nath SK, Palaniappan K, Bunyak F. Cell segmentation using coupled level sets and graph-vertex coloring. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2006 (Lecture Notes in Computer Science). Berlin, Heidelberg: Springer; 2006: 101–108.
27. Ali S, Madabhushi A. An integrated region-, boundary-, shape-based active contour for multiple object overlap resolution in histological imagery. IEEE Trans Med Imaging. 2012; 31: 1448–1460.
28. Bergeest J-P, Rohr K. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals. Med Image Anal. 2012; 16: 1436–1444.
29. Lu Z, Carneiro G, Bradley AP. An improved joint optimization of multiple level set functions for the segmentation of overlapping cervical cells. IEEE Trans Image Proc. 2015; 24: 1261–1272.
30. Song Y, Tan EL, Jiang X, et al. Accurate cervical cell segmentation from overlapping clumps in Pap smear images. IEEE Trans Med Imaging. 2017; 36: 288–300.
31. Liu J, Dubra A, Tam J. Computer-aided detection of human cone photoreceptor inner segments using multi-scale circular voting. SPIE Medical Imaging. 2016; 9785: 97851A.
32. Liu J, Jung H, Tam J. Accurate correspondence of cone photoreceptor neurons in the human eye using graph matching applied to longitudinal adaptive optics images. In: Medical Image Computing and Computer-Assisted Intervention − MICCAI 2017 (Lecture Notes in Computer Science). Cham, Switzerland: Springer; 2017: 153–161.
33. Dubra A, Sulai Y. Reflective afocal broadband adaptive optics scanning ophthalmoscope. Biomed Opt Express. 2011; 2: 1757–1768.
34. Rossi EA, Rangel-Fonseca P, Parkins K, et al. In vivo imaging of retinal pigment epithelium cells in age related macular degeneration. Biomed Opt Express. 2013; 4: 2527–2539.
35. Dubra A, Harvey Z. Registration of 2D images from fast scanning ophthalmic instruments. In: Fischer B, Dawant BM, Lorenz C, eds. Biomedical Image Registration (Lecture Notes in Computer Science). Berlin, Heidelberg: Springer; 2010: 60–71.
36. Laser Institute of America. ANSI Z136.1—Safe Use of Lasers (2014). Orlando, FL: Laser Institute of America; 2014.
37. Bennett AG, Rabbetts RB. Proposals for new reduced and schematic eyes. Ophthalmic Physiol Opt. 1989; 9: 228–230.
38. Packer O, Hendrickson AE, Curcio CA. Photoreceptor topography of the retina in the adult pigtail macaque (Macaca nemestrina). J Comp Neurol. 1989; 288: 165–183.
39. Maurer CR, Qi R, Raghavan V. A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions. IEEE Trans Pattern Anal Mach Intell. 2003; 25: 265–270.
40. Devroye L, Toussaint GT. A note on linear expected time algorithms for finding convex hulls. Computing. 1981; 26: 361–366.
41. Sunday D. Fast polygon area and Newell normal computation. J Graph Tools. 2002; 7: 9–13.
42. Heimann T, van Ginneken B, Styner MA, et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans Med Imaging. 2009; 28: 1251–1265.
43. Andrade da Costa BLS, Hokoç JN. Photoreceptor topography of the retina in the New World monkey Cebus apella. Vis Res. 2000; 40: 2395–2409.
44. Duncan JL, Zhang Y, Gandhi J, et al. High-resolution imaging with adaptive optics in patients with inherited retinal degeneration. Invest Ophthalmol Vis Sci. 2007; 48: 3283–3291.
45. Rodieck RW. The density recovery profile: a method for the analysis of points in the plane applicable to retinal studies. Vis Neurosci. 1991; 6: 95–111.
46. Cukras C, Flamendorf J, Wong WT, Ayyagari R, Cunningham D, Sieving PA. Longitudinal structural changes in late-onset retinal degeneration. Retina. 2016; 36: 2348–2356.
47. Meadway A, Wang X, Curcio CA, Zhang Y. Microstructure of subretinal drusenoid deposits revealed by adaptive optics imaging. Biomed Opt Express. 2014; 5: 713–727.
48. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human photoreceptor topography. J Comp Neurol. 1990; 292: 497–523.
49. Liu T, Jung H, Liu J, Droettboom M, Tam J. Noninvasive near infrared autofluorescence imaging of retinal pigment epithelial cells in the human retina using adaptive optics. Biomed Opt Express. 2017; 8: 4348–4360.
50. Tam J, Liu J, Dubra A, Fariss R. In vivo imaging of the human retinal pigment epithelial mosaic using adaptive optics enhanced indocyanine green ophthalmoscopy. Invest Ophthalmol Vis Sci. 2016; 57: 4376–4384.
51. Nork TM, Hoeve JNV, Poulsen GL, et al. Swelling and loss of photoreceptors in chronic human and experimental glaucomas. Arch Ophthalmol. 2000; 118: 235–245.
52. Langlo CS, Patterson EJ, Higgins BP, et al. Residual foveal cone structure in CNGB3-associated achromatopsia. Invest Ophthalmol Vis Sci. 2016; 57: 3984–3995.
53. Gregory-Evans K, Fariss RN, Possin DE, Gregory-Evans CY, Milam AH. Abnormal cone synapses in human cone-rod dystrophy. Ophthalmology. 1998; 105: 2306–2312.
54. Cunefare D, Fang L, Cooper RF, Dubra A, Carroll J, Farsiu S. Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks. Sci Rep. 2017; 7: 6620.
55. Davidson B, Kalitzeos A, Carroll J, et al. Automatic cone photoreceptor localisation in healthy and Stargardt afflicted retinas using deep learning. Sci Rep. 2018; 8: 7911.
56. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (Lecture Notes in Computer Science). Cham, Switzerland: Springer; 2015: 234–241.
57. Rossi EA, Granger CE, Sharma R, et al. Imaging individual neurons in the retinal ganglion cell layer of the living eye. Proc Natl Acad Sci U S A. 2017; 114: 586–591.
58. Guevara-Torres A, Williams DR, Schallek JB. Imaging translucent cell bodies in the living mouse retina without contrast agents. Biomed Opt Express. 2015; 6: 2106–2119.
59. Bertero M, Poggio TA, Torre V. Ill-posed problems in early vision. Proc IEEE. 1988; 76: 869–889.
60. Lindeberg T. Scale-Space Theory in Computer Vision. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1994.
61. Lindeberg T. Feature detection with automatic scale selection. Int J Comput Vision. 1998; 30: 79–116.
62. Mikolajczyk K, Schmid C. Scale & affine invariant interest point detectors. Int J Comput Vision. 2004; 60: 63–86.
63. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vision. 1997; 22: 61–79.
64. Fox C. An Introduction to the Calculus of Variations. New York: Dover Publications; 2010.
Figure 1
 
Overview of cone photoreceptor segmentation algorithm on split detection images (subject 2). (A) Input image. (B) Dual-intensity extreme region detections (yellow dots). (C) Dual-intensity extreme regions (red and blue regions). (D) Detection pair connections that represent candidate cone photoreceptors for segmentation (green lines). (E) Convex hull of each cone (black contours). (F) Circular templates computed from contours (yellow circles). (G) Cone boundaries (cyan contours) from CCACM. (H) Refined boundaries (green contours) via snake model. White arrows indicate a cone photoreceptor with single extreme region detection; black arrows, cone with more than two region detections. A recovery process finds missing opposing intensity regions to complete the process, allowing the proposed segmentation algorithm to handle cones with single-region detection while still tolerating multiple-region detections. Scale bar: 10 μm.
Figure 2
 
FOV (pixel sampling) optimization. Comparison of cone segmentation results on the images acquired at the same retinal region of subject 5 with different FOVs: (A) 0.23 mm (cone diameter ∼22 pixels); (B) 0.3 mm (∼16 pixels); (C1) 0.46 mm (∼11 pixels); (C2) 0.46 mm upsampled to 0.23 mm (∼22 pixels). For visualization purposes, all images are displayed at the same scale. CCACM segmentation results are shown in green contours, and manual markings in red. Since cone photoreceptors are composed of intensity patterns of half dark and half bright regions, the width of each region will contain even fewer pixels (approximately half the cone diameter). This can lead to pixel-sampling–induced errors in segmentation. The cone indicated by white arrows was oversegmented in (B) and (C1), but not (A), while the one indicated by blue arrows was significantly oversegmented in (C1), but not in (A) and (B). These oversegmentation issues can be addressed as shown in (C2) by upsampling the image to provide sufficient pixel numbers within each cone. Scale bar: 10 μm.
Figure 3
 
Cone segmentation results comparing automated segmentation to manual marking for subjects 2, 3, 6, 8, and 10, corresponding to each column. Note the variation in image quality. Green contours, automated segmentation; red contours, manual marking. Five example cones from five subjects (white arrows) are selected to illustrate the difference between segmentation and manual marking. (A1, A2) Disagreement of contour placement due to different assumptions of cell boundaries; (B1, B2) undersegmentation caused by the attraction of strong image edges in the cone dark region; (C1, C2) oversegmentation because of the large circular template; (D1, D2) false cell segmentation near the image boundaries; (E1, E2) ambiguous cell boundaries due to multiple image edges. Nevertheless, most cone photoreceptors are accurately segmented and their boundaries are in good agreement with manual marking. Scale bar: 10 μm.
Figure 4
 
Example of cone diameter computation on subject 6 at eccentricities ranging from 1.60 to 6.56 mm. Top row: Black squares (solid and dotted) indicate regions of interest for diameter computation along the temporal direction. Five representative squares (solid lines) at the eccentricities of 1.65, 3.04, 3.90, 5.23, and 6.37 mm are selected for illustration. (A1–E1) AOSLO images corresponding to the five solid squares. (A2–E2) Corresponding cell segmentation results, where cone contours in green were used for diameter computation, and contours in blue were excluded. Five types of cone segmentations were excluded from the computation: cell misidentification (white arrow; A1, A2), weak cell boundary (B1, B2), image boundary (C1, C2), oversegmentation (D1, D2), and possible image artifacts (E1, E2). Scale bars: 100 μm (top) and 10 μm (bottom).
Figure 5
 
Comparison of cone diameters calculated from the circularly constrained active contour model in 10 subjects with histology data and results from the existing literature (Table 4). In our results, vertical bars denote one standard deviation of cone diameter, and horizontal bars one standard deviation of eccentricity.
Figure 6
 
Correlations of cone diameter with cone density and cone spacing measured in the same cells. Each dot represents data from one ROI, color-coded for the 10 subjects. Cone diameters represent the average diameters across the ROI.
Figure 7
 
Cone segmentation results on four retinal regions (columns) from a patient with late-onset retinal degeneration with individual reticular pseudodrusen lesions (black dashed ellipses). The numbers above the images denote the retinal eccentricity (temporal to the fovea). Row 1: Input images. Neighboring nonlesion ROIs are shown in dashed white squares. Cone photoreceptors appear to be enlarged in lesion areas. Row 2: Results from CCACM. Green contours are correct segmentations, while blue contours are segmentation results that were manually removed according to the criteria defined in the Results section. Row 3: Contours within lesion and nonlesion ROIs were manually corrected. Green contours are untouched from CCACM; red contours were manually added. Scale bar: 50 μm.
Table 1
 
Five Metrics to Evaluate the Segmentation Accuracy Between Manually Marked and Automatically Segmented Cone Boundary Contours
Table 2
 
Evaluation of Segmentation Accuracy on the Same Retinal Regions With Different Square Fields of View on 10 Subjects
Table 3
 
Evaluation of Segmentation Accuracy on 10 Subjects
Table 4
 
Subject Information for Cone Inner Segment Diameter Calculation in the Existing Histology Studies
Table 5
 
Comparison of Cone Density and Spacing Between the Patient and Healthy Subjects at Four Different Eccentricities
Supplement 1