Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 6
Open Access  |  Multidisciplinary Ophthalmic Imaging
Choroidalyzer: An Open-Source, End-to-End Pipeline for Choroidal Analysis in Optical Coherence Tomography
Author Affiliations & Notes
  • Justin Engelmann
    School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
    Centre for Medical Informatics, University of Edinburgh, Edinburgh, United Kingdom
  • Jamie Burke
    School of Mathematics, University of Edinburgh, Edinburgh, United Kingdom
  • Charlene Hamid
    Clinical Research Facility and Imaging, University of Edinburgh, Edinburgh, United Kingdom
  • Megan Reid-Schachter
    Clinical Research Facility and Imaging, University of Edinburgh, Edinburgh, United Kingdom
  • Dan Pugh
    British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, United Kingdom
  • Neeraj Dhaun
    British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, United Kingdom
  • Diana Moukaddem
    Department of Vision Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Lyle Gray
    Department of Vision Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Niall Strang
    Department of Vision Sciences, Glasgow Caledonian University, Glasgow, United Kingdom
  • Paul McGraw
    School of Psychology, University of Nottingham, Nottingham, United Kingdom
  • Amos Storkey
    Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
  • Paul J. Steptoe
    Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, United Kingdom
  • Stuart King
    School of Mathematics, University of Edinburgh, Edinburgh, United Kingdom
  • Tom MacGillivray
    Clinical Research Facility and Imaging, University of Edinburgh, Edinburgh, United Kingdom
    Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
  • Miguel O. Bernabeu
    Centre for Medical Informatics, University of Edinburgh, Edinburgh, United Kingdom
    The Bayes Centre, University of Edinburgh, Edinburgh, United Kingdom
  • Ian J. C. MacCormick
    Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
    Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
  • Correspondence: Justin Engelmann, Nine BioQuarter, Plot 9 Little France Rd., Edinburgh EH16 4UX, UK; [email protected]
  • Jamie Burke, Nine BioQuarter, Plot 9 Little France Rd., Edinburgh EH16 4UX, UK; [email protected]
  • Footnotes
     JE and JB contributed equally as first authors.
Investigative Ophthalmology & Visual Science June 2024, Vol. 65, 6. https://doi.org/10.1167/iovs.65.6.6
Abstract

Purpose: To develop Choroidalyzer, an open-source, end-to-end pipeline for segmenting the choroid region, vessels, and fovea, and deriving choroidal thickness, area, and vascular index.

Methods: We used 5600 OCT B-scans (233 subjects, six systemic disease cohorts, three device types, two manufacturers). To generate region and vessel ground-truths, we used state-of-the-art automatic methods following manual correction of inaccurate segmentations, with foveal positions manually annotated. We trained a U-Net deep learning model to detect the region, vessels, and fovea to calculate choroid thickness, area, and vascular index in a fovea-centered region of interest. We analyzed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error [MAE]) in internal and external test sets. We compared Choroidalyzer to two manual graders on a small subset of external test images and examined cases of high error.

Results: Choroidalyzer took 0.299 seconds per image on a standard laptop and achieved excellent region (Dice: internal 0.9789, external 0.9749), very good vessel segmentation performance (Dice: internal 0.8817, external 0.8703), and excellent fovea location prediction (MAE: internal 3.9 pixels, external 3.4 pixels). For thickness, area, and vascular index, Pearson correlations were 0.9754, 0.9815, and 0.8285 (internal)/0.9831, 0.9779, 0.7948 (external), respectively (all P < 0.0001). Choroidalyzer's agreement with graders was comparable to the intergrader agreement across all metrics.

Conclusions: Choroidalyzer is an open-source, end-to-end pipeline that accurately segments the choroid and reliably extracts thickness, area, and vascular index. Choroidal vessel segmentation in particular is a difficult and subjective task, and fully automatic methods like Choroidalyzer could provide objectivity and standardization.

The retinal choroid is a densely vascularized tissue at the back of the eye, providing essential nutrients and support to the outer retinal pigment epithelium and photoreceptors.1 The choroid is emerging as a window into systemic vascular health, including that of the brain,2 kidney,3 and heart.4 The choroid is also affected by ophthalmic conditions like myopia.5 Thus, the choroid is a potential source of biomarkers for ocular and nonocular disease.6–9 This is driven by improvements in optical coherence tomography (OCT) imaging, especially enhanced depth imaging OCT (EDI-OCT).10 Previously, only the retinal layers were well captured, whereas the choroid, which sits below the hyper-reflective retinal pigment epithelium, was not imaged well and thus received little attention. Now, the choroid can be captured well and is a promising frontier for systemic health assessment,11 especially as OCT devices become commonplace even at high street optometrists. To compute choroidal metrics that could serve as potential vascular biomarkers, like choroidal thickness, area, or vascular index, the choroid region and vasculature must be identified and segmented accurately and reliably.
While choroidal region segmentation is relatively straightforward compared to vessel segmentation, as only a single shape needs to be identified per scan, accurate detection of the lower choroid boundary (choroid-sclera, C-S, junction) can be time-consuming and at times ambiguous due to poor contrast or image noise. While semiautomatic methods have been proposed,6,1220 these typically require training and expertise to use and do not remove human error and subjectivity. Fully automatic, deep learning–based approaches to region segmentation have been proposed and address both the time-intensive and the ambiguous nature of region segmentation, drastically improving both the ease and standardization of choroidal segmentation. Many of these methods are not openly available to the research community,2124 but recently DeepGPET, an open-source choroidal region segmentation method, was published that can be freely downloaded from GitHub.25 
Choroidal vessel segmentation is a far more complex and time-consuming task. The choroidal vessels are highly heterogeneous in terms of vessel size, shape, and edge contrast and are sometimes hard to discern due to poor contrast or noise, making manual segmentations prohibitively time-consuming and very subjective. Currently, local thresholding algorithms are commonplace for choroidal vessel segmentation,2628 and the current state-of-the-art is the Niblack algorithm.29,30 Niblack is a local thresholding technique that segments the vessels using a fixed-size sliding window and a standard deviation offset to determine a pixel-level threshold. However, there is evidence of wide intergrader disagreement between the two commonly used adaptations to Niblack's algorithm.31 Deep learning approaches have been proposed previously trained on manual annotations or Niblack's algorithm32,33 but are not openly available at the time of writing. 
Finally, in addition to region and vessel segmentation, there are two more necessary steps that are often overlooked, namely, fovea detection and computation of choroidal metrics. OCT B-scans are not necessarily perfectly centered, and the size of a pixel can differ not only between devices but also between scans. Thus, once region, vessels, and fovea are extracted, choroidal metrics should be computed in a fovea-centered region of interest,6 which must account for key details like the pixel scaling of the scan. Currently, each of these four steps is done by a different tool34,35 with ad hoc and nonstandardized approaches used especially for fovea detection.36 
We address these issues by proposing Choroidalyzer, an end-to-end pipeline for choroidal analysis. Choroidalyzer consists of a single deep learning model that simultaneously segments the choroidal region and vessels and detects the fovea location, combined with all the code needed to extract choroidal thickness, area, and vascular index in a fovea-centered region of interest. Figure 1 shows how Choroidalyzer improves on the current state-of-the-art by providing a comprehensive solution for all elements of choroidal analysis. To our knowledge, Choroidalyzer is the first open-source method for comprehensive, automatic analysis of the choroid from a raw OCT B-scan. Choroidalyzer is highly effective, can be run on a standard laptop in less than one-third of a second per image, does not require any specialist training in image processing, and is available on GitHub: https://github.com/justinengelmann/Choroidalyzer
Figure 1.
 
A comparison between Choroidalyzer and the existing state of choroidal analysis. To obtain choroidal metrics in a fovea-centered region of interest, researchers currently need to combine many different tools. Choroidalyzer unifies everything into an end-to-end pipeline that is very fast and convenient to use.
Methods
Study Population
Our data set contains 5600 OCT B-scans of 233 participants from six cohorts of healthy and diseased individuals, unrelated to ocular pathology: OCTANE,37 a longitudinal cohort study investigating choroidal microvascular changes in renal transplant recipients and healthy donors; Diurnal Variation,37 a subcohort of OCTANE of young individuals investigating the possible effects of diurnal variation on the relationship between the choroid and markers of renal function; Normative, a detailed OCT examination of one of the authors (JB) with informed consent; i-Test,37 a cohort of pregnant women evaluating whether the choroidal microvasculature reflects cardiovascular changes in both healthy and complicated pregnancies; Prevent Dementia, a longitudinal cohort tracking middle-aged individuals with varying risk of developing late-onset Alzheimer's dementia38; and GCU Topcon,39 an investigation into diurnal variation of the choroid in emmetropic and myopic individuals. All studies adhered to the Declaration of Helsinki and received relevant ethical approval, and informed consent from all subjects was obtained in all cases from the host institution. Table 1 describes the population statistics and image acquisition statistics for each cohort. 
Table 1.
 
Overview of Population Characteristics
Three OCT device types were used from two device manufacturers: the spectral domain OCT SPECTRALIS Standard Module OCT1 system and the spectral domain OCT SPECTRALIS portable FLEX Module OCT2 system (both Heidelberg Engineering, Heidelberg, Germany) and the swept source OCT DRI Triton Plus (Topcon, Tokyo, Japan). For the Heidelberg devices, active eye tracking with built-in automatic real time (ART) software was used with horizontal and vertical line scans capturing a 30° (9-mm) fovea-centered region of interest, with an ART of 100 (i.e., each final B-scan is the average of 100 B-scans). Posterior pole macular line scans covered a 30-by-25-degree rectangular region of interest using 31 consecutive scans, each with an ART of 50 (posterior pole scans in the Normative cohort were acquired with an ART of 9). All Heidelberg data were collected at a pixel resolution of 768 × 768 pixels, with a signal quality ⩾15. The Topcon device imaged the macular region using 12 fovea-centered radial scans, spaced 30° apart and covering a 30° (9-mm) region of interest. Each B-scan had a resolution of 992 × 1024 pixels, which was cropped horizontally by 32 pixels and resized to the resolution of the Heidelberg scans of 768 × 768. All Topcon data had an image quality score > 88 determined by the built-in TopQ software. 
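As an illustration, the Topcon preprocessing described above could be reproduced roughly as in the sketch below; the symmetric placement of the 32-pixel crop and the bilinear resampling are our assumptions, not details taken from the released pipeline.

```python
# Hypothetical sketch of the Topcon preprocessing step: crop 32 px
# horizontally (assumed symmetric), then resize to 768 x 768.
from PIL import Image

def preprocess_topcon(path: str) -> Image.Image:
    img = Image.open(path).convert("L")            # 992 (H) x 1024 (W) B-scan
    w, h = img.size                                # PIL reports (width, height)
    margin = (w - h) // 2                          # 32 px total -> 16 px per side
    img = img.crop((margin, 0, w - margin, h))     # 992 x 992 square
    return img.resize((768, 768), Image.BILINEAR)  # match Heidelberg resolution
```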
Five of the six cohorts were split into training (4144 B-scans, 122 subjects), validation (466 B-scans, 28 subjects), and internal test sets (756 B-scans, 37 subjects) containing approximately 75%, 10%, and 15% of the B-scans, respectively. We split the data on the subject level, such that no individual ended up in more than one set. The remaining cohort, OCTANE, was entirely held out as an external test set (168 B-scans, 46 individuals). Supplementary Table S1 gives a detailed overview of population and image characteristics for each of the four sets. 
Ground-Truth Labels
The fovea coordinate was defined as the horizontal (column) pixel index, which aligned with the deepest point of the foveal pit depression36 (i.e., where the central foveal pit was most illuminated, typically aligning with a ridge formed at the photoreceptor layer). The choroidal region was defined as the space posterior to the boundary delineating the retinal pigment epithelium layer and Bruch's membrane complex (RPE-choroid, RPE-C, junction) and superior to the boundary delineating the sclera from the posterior-most point of Haller's layer (C-S junction). Between the choroid and sclera lies the suprachoroidal space, which is rarely visible on OCT B-scans and we consider not to be part of the choroid itself. The choroidal space is made up of interstitial fluid, or stroma, seen as brightly illuminated strips in the OCT B-scans, with interspersed, irregular areas of darker intensity representing choroidal vasculature. This has been both empirically observed26,40 and widely accepted among the research community.29 The choriocapillaris, a dense network of choroidal capillaries, is seen as a small band below Bruch's membrane complex approximately 10 microns thick1 (roughly 3 pixels deep in OCT B-scans) and is assumed as part of the choroidal vasculature alongside larger vessels seen in Haller and Sattler's layers. 
For OCT B-scans centered at the fovea (i.e., horizontal, vertical, and radial scans), the foveal column location was detected manually. Those not centered at the fovea do not show the fovea. The ground-truths (GTs) for choroidal region segmentation were generated using DeepGPET25 with the default threshold of 0.5. In total, 897 scans were excluded from the data set (and removed from Table 1 and Supplementary Table S1) because of poor region segmentations—these were primarily Topcon B-scans that DeepGPET had not been trained on before. 
GTs for vessel segmentation were generated using a novel, multiscale quantization and clustering-based approach, called multiscale median cut quantization (MMCQ), which we found to produce superior results to standard application of Niblack in preliminary analysis on the training set. MMCQ segments the choroidal vasculature by performing patchwise local contrast enhancement at several scales using median cut clustering (quantization)41 and histogram equalization. The pixels of the subsequently enhanced choroidal space are then clustered globally using median cut clustering once more, classifying the pixels belonging to the clusters with the darkest intensities as vasculature. We provide a brief comparison between MMCQ and Niblack in Supplementary Section 4. In our experience, MMCQ tends to provide higher-fidelity vessel segmentations and to avoid oversegmentation compared to Niblack. The code for this algorithm is freely available at https://github.com/jaburke166/mmcq.
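To make the clustering step concrete, the following is a heavily simplified, hypothetical sketch of the global quantization step only (the full MMCQ implementation, including the multiscale patchwise enhancement, is in the linked repository): for grayscale intensities, median cut reduces to recursive splits of the intensity sample at its median, and the darkest resulting clusters are labeled as vessel.

```python
# Simplified, illustrative sketch (NOT the authors' implementation) of
# median-cut quantization on grayscale intensities, labeling the darkest
# clusters within the choroid mask as vasculature.
import numpy as np

def median_cut_levels(values: np.ndarray, depth: int = 3) -> np.ndarray:
    """Recursively split a 1-D intensity sample at its median, returning
    sorted cut points that define 2**depth intensity clusters."""
    cuts = []
    def split(v, d):
        if d == 0 or v.size < 2:
            return
        m = np.median(v)
        cuts.append(m)
        split(v[v <= m], d - 1)
        split(v[v > m], d - 1)
    split(values, depth)
    return np.sort(np.array(cuts))

def segment_vessels(enhanced: np.ndarray, choroid_mask: np.ndarray,
                    n_dark_clusters: int = 3) -> np.ndarray:
    """Assign each choroid pixel to a quantization cluster and flag the
    darkest clusters as vessel (cluster count and cutoff are assumptions)."""
    cuts = median_cut_levels(enhanced[choroid_mask], depth=3)  # 8 clusters
    labels = np.digitize(enhanced, cuts)                       # cluster per pixel
    return (labels < n_dark_clusters) & choroid_mask
```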
To improve the fidelity and robustness of our vessel segmentation GTs, we randomly varied the brightness and contrast of each OCT B-scan before applying MMCQ. We used five linearly spaced gamma levels to fix the mean brightness of each image between 0.2 and 0.5 and simultaneously altered the contrast using five linearly spaced factors between 0.5 and 3. A 3:2 majority vote for vessel label classification was used across all 25 variants. This improves robustness, as spurious over- and undersegmentations contingent on specific image statistics are averaged out.
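A minimal sketch of this voting scheme is shown below, assuming images scaled to [0, 1] and a segmentation function like the sketch above; the gamma heuristic for hitting a target mean brightness is an assumption, not the authors' exact procedure.

```python
# Sketch of the 25-variant majority vote: 5 brightness targets x 5 contrast
# factors, a pixel is vessel if at least 15/25 variants (3:2 majority) agree.
import numpy as np

def majority_vote_gt(img: np.ndarray, choroid_mask: np.ndarray,
                     segment_fn) -> np.ndarray:
    votes = np.zeros(img.shape, dtype=int)
    for target_mean in np.linspace(0.2, 0.5, 5):
        # Rough heuristic (assumption): pick gamma so the mean brightness of
        # img ** gamma approximately matches the target mean.
        gamma = np.log(target_mean) / np.log(np.clip(img.mean(), 1e-3, 0.999))
        bright = np.clip(img, 1e-8, 1.0) ** gamma
        for contrast in np.linspace(0.5, 3.0, 5):
            variant = np.clip((bright - 0.5) * contrast + 0.5, 0.0, 1.0)
            votes += segment_fn(variant, choroid_mask)
    return votes >= 15  # 3:2 majority over the 25 variants
```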
Choroidalyzer's Deep Learning Model
Choroidalyzer segments the choroid region and vessels and detects the fovea using a U-Net deep learning model42 with a depth of 7. This relatively high depth allows our model to better consider the global context. The first three blocks increase the internal channel dimension from 8 to 64, after which it is kept constant to reduce memory consumption and parameter count. Blocks consist of two convolutional layers, each followed by BatchNorm43 and ReLU activation. Our up-blocks use a 1 × 1 convolution to reduce the channel dimension followed by bilinear interpolation, which is more compute- and memory-efficient than standard transposed convolutions. We train our model for 40 epochs using the AdamW optimizer44 with a learning rate of 5 × 10−4 and weight decay of 10−8 to minimize binary cross-entropy, clamping the maximum gradient norm to 3 before each step. We use automatic mixed precision to speed up training dramatically while reducing memory consumption by almost half. Forward pass and loss computation are done in bfloat16, a half-precision data type optimized for machine learning.
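The following PyTorch sketch illustrates the building blocks and training configuration described above; it uses a toy stand-in model consistent with the text (block structure, AdamW settings, bfloat16 autocast, gradient clipping) and is not the released model definition.

```python
# Sketch of the described blocks and training step; the tiny Sequential
# model is a placeholder, NOT the full 7-level U-Net.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UpBlock(nn.Module):
    """1x1 convolution to cut channels, then bilinear upsampling
    (cheaper in compute and memory than a transposed convolution)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, out_ch, kernel_size=1)
    def forward(self, x):
        return nn.functional.interpolate(self.reduce(x), scale_factor=2,
                                         mode="bilinear", align_corners=False)

# Toy stand-in with 3 output channels (region, vessel, fovea).
model = nn.Sequential(ConvBlock(1, 8), nn.MaxPool2d(2), ConvBlock(8, 16),
                      UpBlock(16, 8), nn.Conv2d(8, 3, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=1e-8)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    optimizer.zero_grad(set_to_none=True)
    # Forward pass and loss in bfloat16 via automatic mixed precision
    # (use device_type="cuda" when training on a GPU).
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        loss = loss_fn(model(images), targets)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)
    optimizer.step()
    return loss.item()
```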
During training, we apply the following data augmentations in random order per sample: horizontal flip (P = 0.5), changing the brightness and contrast independently (factors ∼U(0.5, 1.5), P = 0.95), random rotation and shear (degrees ∼U(−25, 25) and ∼U(−15, 15), respectively, P = 1/3), and scaling the image (factor ∼U(0.8, 1.2), P = 1/3), where U(a, b) denotes a uniform distribution between a and b, and P the probability of the transform being applied. For peripapillary scans, which have a resolution of 1536 × 768, we take a 768 × 768 crop, using a random multiple of 192 as the offset per example and epoch.
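A hedged sketch of this augmentation policy using torchvision's functional API is given below; applying the geometric transforms jointly to image and targets is our assumption about how alignment is preserved, and the helper name is ours.

```python
# Illustrative augmentation sketch (hypothetical helper, not the released
# pipeline). Geometric transforms are applied identically to image and
# targets; photometric changes apply to the image only.
import random
import torch
import torchvision.transforms.functional as TF

def augment(img: torch.Tensor, tgt: torch.Tensor):
    ops = ["flip", "photometric", "affine", "scale"]
    random.shuffle(ops)  # random order per sample
    for op in ops:
        if op == "flip" and random.random() < 0.5:
            img, tgt = TF.hflip(img), TF.hflip(tgt)
        elif op == "photometric" and random.random() < 0.95:
            img = TF.adjust_brightness(img, random.uniform(0.5, 1.5))
            img = TF.adjust_contrast(img, random.uniform(0.5, 1.5))
        elif op == "affine" and random.random() < 1 / 3:
            angle, shear = random.uniform(-25, 25), random.uniform(-15, 15)
            img = TF.affine(img, angle=angle, translate=[0, 0], scale=1.0, shear=[shear])
            tgt = TF.affine(tgt, angle=angle, translate=[0, 0], scale=1.0, shear=[shear])
        elif op == "scale" and random.random() < 1 / 3:
            s = random.uniform(0.8, 1.2)
            img = TF.affine(img, angle=0.0, translate=[0, 0], scale=s, shear=[0.0])
            tgt = TF.affine(tgt, angle=0.0, translate=[0, 0], scale=s, shear=[0.0])
    return img, tgt
```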
The fovea is only a single point that would be difficult for a segmentation model to learn, as predicting close to 0 for all pixels would yield virtually the same loss as a perfect prediction. Thus, we create a target 51 pixels high and 19 pixels wide centered at the GT fovea location. The exact fovea location is set to 1, the whole column to 0.95, and adjacent columns to 0.95 − (d*0.1), where d is the column distance from the fovea. Finally, we employ one-sided label smoothing and set all other pixels to 0.01 instead of 0 to stabilize training. We extract fovea column predictions by applying a 21-width triangular filter to the column-wise sums of our model's predictions and taking the column with the highest value. 
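The target construction and column extraction could look roughly like the following sketch; the row-centering of the 51 × 19 window and the exact triangular filter weights are our assumptions.

```python
# Sketch of the fovea target heatmap and column extraction, assuming a
# (row, col) GT coordinate and a fovea-channel probability map `pred`.
import numpy as np

def fovea_target(h: int, w: int, fov_row: int, fov_col: int) -> np.ndarray:
    tgt = np.full((h, w), 0.01, dtype=np.float32)     # one-sided label smoothing
    rows = slice(max(fov_row - 25, 0), fov_row + 26)  # 51 px tall window
    for d in range(10):                               # 19 px wide: d = 0..9
        val = 0.95 - d * 0.1                          # decay with column distance
        for c in (fov_col - d, fov_col + d):
            if 0 <= c < w:
                tgt[rows, c] = max(val, 0.01)
    tgt[fov_row, fov_col] = 1.0                       # exact fovea location
    return tgt

def fovea_column(pred: np.ndarray) -> int:
    col_sums = pred.sum(axis=0)                       # column-wise sums
    tri = 1.0 - np.abs(np.arange(21) - 10) / 10.0     # 21-tap triangular filter
    smoothed = np.convolve(col_sums, tri, mode="same")
    return int(smoothed.argmax())                     # highest-scoring column
```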
Statistical Analysis
We evaluate agreement in segmentations using the area under the receiver operating characteristic curve (AUC) and Dice coefficient, applying a fixed threshold of 0.5 to binarize our model's predictions. For the fovea column location, we use mean absolute error (MAE) and median absolute error (AE). For derived choroid metrics, we evaluate agreement with Pearson and Spearman correlations and further report MAEs. 
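For concreteness, the two segmentation agreement metrics can be computed as in the short sketch below (Dice at the fixed 0.5 threshold; AUC via scikit-learn).

```python
# Minimal sketch of the segmentation agreement metrics.
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred_prob: np.ndarray, gt: np.ndarray, thr: float = 0.5) -> float:
    pred = pred_prob >= thr                  # binarize at fixed threshold
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + 1e-8)

def auc(pred_prob: np.ndarray, gt: np.ndarray) -> float:
    return roc_auc_score(gt.ravel().astype(int), pred_prob.ravel())
```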
All choroidal metrics were computed using a region of interest (ROI) centered at the foveal pit, measuring 3 mm temporally and nasally (the ROI for volume scans was centered at the middle column index of the image), corresponding to the standardized ROI according to the Early Treatment Diabetic Retinopathy Study (ETDRS) macular grid of 6000 × 6000 microns.45 As peripapillary scans do not allow for a fovea-centered region of interest, we only look at segmentation metrics and use a threshold of 0.25 for vessel predictions. Area was computed by counting the pixels within the ETDRS grid, while thickness was measured at three linearly spaced locations, spanning the ETDRS grid, as point-source micron distances between the RPE-C and C-S junctions, locally perpendicular to the RPE-C junction.
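As a small worked example of the area computation, pixel counts within the ROI can be converted to physical units using the per-scan pixel dimensions; the function and argument names below are placeholders.

```python
# Sketch of choroid area from a binary region mask within the ROI, assuming
# per-scan pixel width/height in microns are known from the scan metadata.
import numpy as np

def choroid_area_mm2(region_mask: np.ndarray, roi_mask: np.ndarray,
                     pix_w_um: float, pix_h_um: float) -> float:
    n_pixels = np.count_nonzero(region_mask & roi_mask)
    return n_pixels * pix_w_um * pix_h_um / 1e6  # um^2 -> mm^2
```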
Choroid vascular index is the ratio of vessel to nonvessel pixels in the choroid within the ETDRS grid. Our deep learning model outputs probabilities rather than discrete predictions, and these probabilities capture uncertainty. As this is desirable, we propose a “soft” vascular index that takes the ratio of predicted probabilities instead of discretized binary predictions. On the validation set, we found that this improves agreement.
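A minimal sketch of the soft vascular index, following the vessel-to-nonvessel definition given above (mask and argument names are ours):

```python
# Soft vascular index: ratio of predicted vessel probability mass to
# nonvessel mass over choroid pixels inside the fovea-centered ROI.
import numpy as np

def soft_vascular_index(vessel_prob: np.ndarray, choroid_mask: np.ndarray,
                        roi_mask: np.ndarray) -> float:
    p = vessel_prob[choroid_mask & roi_mask]
    return float(p.sum() / (1.0 - p).sum())
```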
To examine and characterize the behavior of our model, we analyzed cases of high error in detail. Concretely, for each of the three tasks (region and vessel segmentation, fovea detection), we selected the 15 examples from each test set where Choroidalyzer produced the highest errors. For redundant cases (i.e., adjacent, highly similar slices from a volume scan), only one was retained. For fovea detection, cases of low error were also discarded. This left 28 cases for region, 29 for vessel, and 25 for fovea. 
An adjudicating clinical ophthalmologist (IM) was provided with the original image, Choroidalyzer's prediction, and the GT while being masked to the identity of the methods. Images and labels were provided individually and as composites. For each example, the adjudicator was asked which label they preferred. They also rated each label qualitatively on a five-level ordinal scale (“very bad,” “bad,” “okay,” “good,” and “very good”) for region segmentation quality, as well as intravascular and interstitial vessel segmentation quality, the latter two to quantify any potential undersegmentation of vessels and oversegmentation of the interstitial space.
Finally, we selected a random subsample of 20 B-scans at the patient level from the external test set to be manually segmented by two graders, M1 and M2. M1 was a clinical ophthalmologist (IM) and M2 was a PhD student who has worked with choroidal OCT data for the last 4 years (JB). Manual graders segmented the region and choroidal vessels using ITK-Snap.46 The manual segmentations were compared to Choroidalyzer and to the current state-of-the-art, namely, DeepGPET for region segmentations25 and Niblack for vessel segmentation using a window size of 51 and standard deviation offset of −0.05, which mirrors previously published work.33 
Results
Performance on Internal and External Test Sets
Table 2 shows the performance of Choroidalyzer on the internal and external test sets. Our model achieves very good performance in terms of AUC and Dice for region and vessels on both sets. Metrics for region are higher than for vessels, which is expected as choroidal vessel segmentation is much more difficult and ambiguous than region segmentation, and thus the GTs are themselves imperfect. Performance was slightly higher for the internal test set than the external test set, which is expected, but only marginally so, indicating that our model generalizes well to new cohorts. For the peripapillary scans that only exist in the internal test set, our model achieved an AUC of 0.9996 (region)/0.9925 (vessel) and Dice of 0.9636 (region)/0.7155 (vessel). This is reasonable performance but lower than for other scans. 
Table 2.
 
Metrics for Choroidalyzer Against Ground-Truth Annotations From the Internal and External Test Sets
For fovea detection, the model had a MAE of 3.9 pixels (px) for the internal and 3.4 px for the external test set, with the median absolute error being 3 px for both. This is excellent performance, as an error of 3 px on a 768-px-wide image will not meaningfully change our region of interest or resulting metrics (see the Supplementary Materials for an analysis of the effects of fovea location on downstream metrics). For the derived choroid metrics, Choroidalyzer shows excellent agreement with the GTs on thickness and area, with Pearson and Spearman correlations of 0.9692 or greater for both internal and external test sets. For the vascular index, performance is somewhat lower, with correlations between 0.7948 and 0.8285. Although vascular index depends on both region and vessel segmentation, the other metrics indicate that the differences in vascular index are driven primarily by differences in vessel segmentation. Still, the observed correlations are high in absolute terms. Figure 2 shows correlation and Bland–Altman plots for the three derived metrics on both test sets, which likewise indicate generally very good agreement. Figure 3 shows some examples for each of the three imaging devices.
Figure 2.
 
Agreement in thickness, area and vascular index for (A) the internal and (B) the external test sets. Top row shows scatterplots with best regression fit and identity lines; bottom row shows Bland–Altman plots. Note that we chose to fit each plot to the data range, and thus the scale of the axes is not exactly the same between internal and external test sets, especially for vascular index. Best viewed electronically.
Figure 3.
 
Examples of Choroidalyzer being applied to scans from different imaging devices. Six fovea-centered OCT B-scans, two per imaging device type, from the internal test set showing region segmentations (left), vessel segmentations (middle), and fovea column location (right).
Comparison With Manual Segmentations
Table 3 shows the results from manual segmentations. For automated methods, we compare with each manual grader and then average the performance across both graders to make the results more concise. The comparisons with individual graders are reported in Supplementary Table S2. Interestingly, while vessel Dice for Choroidalyzer (0.7410 vs. M1 and 0.7927 vs. M2; mean 0.7669) is again much worse than region Dice and even worse than the vessel Dice on both test sets, it is very similar to the intergrader agreement of 0.7699. More generally, the intergrader agreements for all other metrics are similar to Choroidalyzer's agreement with the graders, with the notable exception of vascular index. Here, Choroidalyzer's MAE is better (0.0555 vs. M1 and 0.0506 vs. M2; mean 0.0531) than the intergrader agreement (0.0618), as is the Spearman correlation, but Pearson correlation and intraclass correlation are worse. Compared to the respective state-of-the-art (i.e., DeepGPET for region, Niblack for vessel segmentation), Choroidalyzer has better agreement with the graders for most of the metrics, although methods are generally comparable. 
Table 3.
 
Comparison Metrics for the 20 Images Assessed Manually and Algorithmically From the External Test Set
Table 4 shows the time per scan for the manual graders and automatic approaches. The manual graders on average needed more than 26 and 22 minutes (mean 24), with the vast majority of that time spent on the vessel segmentation. By contrast, the automatic methods on a standard laptop needed about a second per scan and no human time at all. Thus, to get through a data set of 100 scans, it would take manual graders about 40 hours of work, but with automated methods, it would be less than 2 minutes. With GPU acceleration, Choroidalyzer and DeepGPET could achieve throughputs of dozens or hundreds of scans per second even on consumer-grade hardware. Comparing the automated methods with each other, Choroidalyzer took 73% less time than DeepGPET and Niblack, while also detecting the fovea location. All three methods are very fast, but for very large data sets or deployment on edge devices, Choroidalyzer's efficiency is an additional advantage over existing automated methods.
Table 4.
 
Mean (Standard Deviation) Execution Time of the Four Different Approaches to Region and Vessel Segmentation for the 20 Images Assessed Manually and Algorithmically From the External Test Set
Detailed Error Analysis
Table 5 shows the results of manual inspection of scans where Choroidalyzer produced the highest error compared to the GT on the test sets. For region segmentation, Choroidalyzer was preferred in 8 cases, the GT in 5, and both methods were considered equally good in 15 cases. In terms of quality, Choroidalyzer was “very bad” in only one case compared with two for the GT and “very good” three times compared to none for the GT. For the vessels, Choroidalyzer was preferred in 13 cases, the GT in 4, and both were tied in 12 cases. Vessel segmentation is a harder task, with neither method achieving “very good.” However, the intravascular scores for Choroidalyzer are substantially better, with no “bad” or “very bad” (vs. 3 and 2, respectively, for the GT) and far more “good” (17 vs. 5), and the interstitial scores are similarly better. Finally, for the fovea, Choroidalyzer was preferred 23 of 25 times and the GT only twice, indicating that large fovea errors are almost exclusively due to mistakes in the manual GT labels.
Table 5.
 
Preference and Segmentation Scores From Masked Expert Adjudicator (IM) Comparing the Highest Region Segmentation, Vessel Segmentation, and Fovea Column Errors Between Choroidalyzer and the Ground-Truth Labels
Figure 4 shows the distributions of fovea errors for both test sets, along with examples from both sets. For very large residuals (10+ px), the GTs are wrong and Choroidalyzer correctly identifies the fovea location. For errors around 7 px, still twice the MAE, both methods are similar, with either sometimes being more correct. Further exploration revealed that the majority of incorrectly labeled ground-truths were Topcon OCT B-scans: the scans in each 12-scan radial stack are not all centered at the fovea, and initial manual annotation detected the fovea on only one scan to represent each stack. Despite this oversight, Choroidalyzer learned to detect the fovea robustly and accurately.
Figure 4.
 
Histogram of absolute errors for fovea column detection for the internal (left) and external (right) test sets. Examples for different levels of error are shown, with dotted lines indicating which part of the distribution they come from. In the examples, the teal line indicates the GT label, the dashed orange line the prediction.
Discussion
We developed Choroidalyzer, an end-to-end pipeline for choroidal analysis. Choroidalyzer shows excellent performance on the internal and external test sets. Detailed inspection of the cases where Choroidalyzer produced the highest errors revealed that these were primarily cases of imperfect GTs, and Choroidalyzer was generally preferred by a masked adjudicating ophthalmologist (IM), further indicating robustness and good performance. Its agreement with manual segmentations, which demand substantial time and attention from a human expert, is comparable to the intergrader agreement. This suggests that Choroidalyzer performs well compared to laborious manual segmentation and also highlights the subjectivity introduced by manual graders. Choroidalyzer not only produces results similar to those of a skilled manual grader but also does so fully automatically without introducing subjectivity, and thus increases standardization and reproducibility. If researchers use Choroidalyzer, their results are repeatable and would be much more comparable to other studies also using Choroidalyzer than if different manual graders were used in each case.
Additionally, Choroidalyzer saves a substantial amount of time per image over manual segmentation, freeing up researcher time and enabling large-scale analyses that otherwise would not be possible. Even compared to the current state-of-the-art for automated methods, DeepGPET and Niblack, Choroidalyzer can do the analysis in roughly a quarter of the time. More importantly, Choroidalyzer provides an end-to-end pipeline, which makes it easier to implement and use than having to combine multiple methods like Niblack and DeepGPET. Ease of use is often underappreciated in the literature but key in saving researchers time and allowing them to focus on the science. 
Choroidalyzer performed well against manual graders relative to the state-of-the-art methods, reaching or surpassing the levels of agreement even between the two manual graders, particularly for vascular index, a far more difficult metric to calculate accurately than area and thickness. The intergrader agreement between manual graders for these metrics indicates a potential lower bound on the effect sizes we might expect from these metrics. This has an important downstream impact on the statistical confidence of results from cohort studies, particularly when assessing the choroidal vasculature.28,31
It is often difficult to visualize the choroid due to imaging noise, poor eye tracking and patient fixation, or operator inexperience. Thus, in some cases, vessel boundaries can be hard to discern. This is why we proposed to use a soft version of the choroid vascular index, where the probabilities that Choroidalyzer outputs are used instead of thresholded, binarized segmentations. The probabilities capture uncertainty about the precise location of the vessel wall and thus are more robust than using a single, somewhat arbitrary threshold. Users could also tune the binarization threshold for their own images, if desired, which might help in instances of poor visibility of the choroidal vasculature. 
Segmentation performance for peripapillary scans was reasonable but much worse than for other scan types. This could be due to those scans being relatively rare in our data set and showing parts of the retina on the nasal side of the optic disc that are not captured in fovea-centered scans. More peripapillary training data would likely increase performance. In our opinion, at present, Choroidalyzer can be used for these scans but requires subsequent manual inspection and potential correction. Furthermore, adjusting the binarization threshold for the vessel predictions can improve results. 
Our model detected the fovea well, and the largest errors were cases where ground-truths were incorrectly labeled with the model correctly identifying the fovea location as confirmed by masked adjudication. Thus, the model performed even better than what the quantitative results suggest. In the present work, we have focused on identifying the fovea column, which is needed to define the fovea-centered region of interest. However, after selecting and evaluating our final model, we realized that in relatively rare cases related to poor image acquisition, the retina and choroid can be at a steep angle relative to the image axes. For those, it would be best to define the region of interest along the choroid axes rather than image axes, most easily done by drawing a center line from the fovea perpendicularly through the retina and choroid. Thus, it could be useful to also segment the retina and to determine both the row and column of the fovea. While not our initial objective, we did some preliminary analyses and found that we can derive the fovea row well with our current model (data not shown). We also analyzed the effect of defining the region of interest perpendicular to the choroid instead of aligned to the image axes, as shown in Supplementary Section 5. Contrary to our initial hypothesis, the difference in area and vascular index is very minor even for highly myopic eyes. However, for thickness, the choroid-aligned measurement tends to be higher, and there are a few cases of large disagreement. Furthermore, to understand the effect of fovea location error on downstream choroidal metrics, we simulated random per-sample deviations of ±6 px, twice the median AE, and found that they yielded virtually identical results (see Supplementary Fig. S1 and Fig. S2). 
Figure 5.
 
Three example Topcon OCT B-scans with successful region segmentations from Choroidalyzer (right) and failed segmentations from DeepGPET (middle).
The data set in the present work was substantially larger than the one used for DeepGPET and importantly contains both Heidelberg and Topcon scans. As a result, Choroidalyzer can segment even difficult Topcon scans where DeepGPET failed (Fig. 5). Choroidalyzer was trained on region and vessel GTs generated by fully and semiautomatic methods, respectively, which were then checked for errors and only manually improved where needed. Recent work argues that such approaches to generating GTs are preferable as they reduce subjectivity and thus bias and inconsistency.47 
Choroidalyzer also has limitations. Most importantly, there is no quality scoring component to reject B-scans that do not show the choroid in sufficient detail to allow for reasonable analysis. While modern OCT devices typically show the choroid in good detail, especially if EDI is used, this is not always the case. Most devices provide some quality indicators, but we have not investigated quality thresholds for specific devices, below which Choroidalyzer would not function. Furthermore, OCT quality indicators are typically focused on the retina, and although poor visualization of the retina might imply poor visualization of the choroid, the reverse is not necessarily the case. A quality scoring method specific to the choroid would be a useful addition to the field. Another limitation is that Choroidalyzer was trained only on cohorts relating to systemic health but not ocular disease or data acquired during routine clinical practice. 
Future work could improve the underlying deep learning model of Choroidalyzer (e.g., by training and evaluating it on data from more diverse sources). Data with ocular pathology (e.g., abnormally sized choroids due to myopia, age-related macular degeneration, or central serous chorioretinopathy) could be used to investigate whether Choroidalyzer is robust in those contexts and to train an improved version if needed. Moreover, automated quality scoring methods relating to the choroid would address a key need in choroidal analysis. Finally, Choroidalyzer could be extended to measure additional choroidal metrics, such as macular thickness and vessel density maps across a volume, or metrics relating to choroidal curvature.
Conclusions
Choroidal thickness, area, and especially vascular index are highly interesting metrics and potential biomarkers for both systemic and ocular health. However, calculating them used to be laborious and—when done manually—subjective. Choroidalyzer provides an efficient, end-to-end pipeline to alleviate these problems. We hope that by making Choroidalyzer openly accessible, we will enable researchers and clinicians to conveniently calculate these metrics and use them for their research, while improving reproducibility and standardization in the field. 
Acknowledgments
The authors thank the Edinburgh Imaging and Edinburgh Clinical Research Facility at the University of Edinburgh for support and all participants in the studies used in this article. Supported in part by the Alzheimer's Drug Discovery Foundation (project no. GDAPB-201808-2016196), NHS Lothian R&D, and British Heart Foundation Centre for Research Excellence Award III (RE/18/5/34216). Supported in part also by the Wellcome Leap In Utero scheme. The funding sources were not involved in designing, conducting, or submitting this work. 
M.O.B. gratefully acknowledges funding from: Fondation Leducq Transatlantic Network of Excellence (17 CVD 03); EPSRC grant no. EP/X025705/1; British Heart Foundation and The Alan Turing Institute Cardiovascular Data Science Award (C-10180357); Diabetes UK (20/0006221); Fight for Sight (5137/5138); the SCONe projects funded by Chief Scientist Office, Edinburgh & Lothians Health Foundation, Sight Scotland, the Royal College of Surgeons of Edinburgh, the RS Macdonald Charitable Trust, and Fight For Sight. 
Supported by UK Research and Innovation (grant EP/S02431X/1) as part of the Centre of Doctoral Training in Biomedical AI at the School of Informatics, University of Edinburgh (JE). Supported by the Medical Research Council (grant MR/N013166/1) as part of the Doctoral Training Programme in Precision Medicine at the Usher Institute, University of Edinburgh (JB). 
Disclosure: J. Engelmann, None; J. Burke, None; C. Hamid, None; M. Reid-Schachter, None; D. Pugh, None; N. Dhaun, None; D. Moukaddem, None; L. Gray, None; N. Strang, None; P. McGraw, None; A. Storkey, None; P.J. Steptoe, None; S. King, None; T. MacGillivray, None; M.O. Bernabeu, None; I.J.C. MacCormick, None 
References
Nickla DL, Wallman J. The multifunctional choroid. Prog Retin Eye Res. 2010; 29(2): 144–168. [CrossRef] [PubMed]
Robbins CB, Grewal DS, Thompson AC, et al. Choroidal structural analysis in Alzheimer disease, mild cognitive impairment, and cognitively healthy controls. Am J Ophthalmol. 2021; 223: 359–367. [CrossRef] [PubMed]
Balmforth C, van Bragt JJ, Ruijs T, et al. Chorioretinal thinning in chronic kidney disease links to inflammation and endothelial dysfunction. JCI Insight. 2016; 1(20): e89173, https://doi.org/10.1172/jci.insight.89173. [PubMed]
Yeung SC, You Y, Howe KL, Yan P. Choroidal thickness in patients with cardiovascular disease: a review. Surv Ophthalmol. 2020; 65(4): 473–486. [CrossRef] [PubMed]
Read SA, Fuss JA, Vincent SJ, Collins MJ, Alonso-Caneiro D. Choroidal changes in human myopia: insights from optical coherence tomography imaging. Clin Exp Optom. 2019; 102(3): 270–285. [CrossRef] [PubMed]
Burke J, Pugh D, Farrah T, et al. Evaluation of an automated choroid segmentation algorithm in a longitudinal kidney donor and recipient cohort. Transl Vis Sci Technol. 2023; 12(11): 19. [CrossRef] [PubMed]
Burke J, Dhaun N, Dhillon B, Wilson KJ, Beare NAV, MacCormick IJC. The retinal contribution to the kidney–brain axis in severe malaria. Trends Parasitol. 2023; 39(6): 410–411, https://doi.org/10.1016/j.pt.2023.03.002. [PubMed]
Shin YU, Lee SE, Kang MHO, Han S-W, Yi J-H, Cho H. Evaluation of changes in choroidal thickness and the choroidal vascularity index after hemodialysis in patients with end-stage renal disease by using swept-source optical coherence tomography. Medicine (Baltimore). 2019; 98(18).
Kundu A, Ma JP, Robbins CB, et al. Longitudinal analysis of retinal microvascular and choroidal imaging parameters in Parkinson's disease compared with controls. Ophthalmol Sci. 2023; 3(4): 100393, https://doi.org/10.1016/j.xops.2023.100393. [PubMed]
Spaide RF, Koizumi H, Pozonni MC. Enhanced depth imaging spectral-domain optical coherence tomography. Am J Ophthalmol. 2008; 146(4): 496–500. [CrossRef] [PubMed]
Tan K-A, Gupta P, Agarwal A, et al. State of science: choroidal thickness and systemic health. Surv Ophthalmol. 2016; 61(5): 566–581. [CrossRef] [PubMed]
Burke J, King S. Edge tracing using gaussian process regression. IEEE Trans Image Process. 2021; 31: 138–148. [CrossRef] [PubMed]
Eghtedar RA, Esmaeili M, Peyman A, Akhlaghi M, Rasta SH. An update on choroidal layer segmentation methods in optical coherence tomography images: a review. J Biomed Phys Eng. 2022; 12(1): 1. [PubMed]
Masood S, Sheng B, Li P, Shen R, Fang R, Wu Q. Automatic choroid layer segmentation using normalized graph cut. IET Image Proc. 2018; 12(1): 53–59. [CrossRef]
Salafian B, Kafieh R, Rashno A, Pourazizi M, Sadri S. Automatic segmentation of choroid layer in EDI OCT images using graph theory in neutrosophic space. arXiv preprint arXiv:1812.01989, 2018.
Kajić V, Esmaeelpour M, Považay B, Marshall D, Rosin PL, Drexler W. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model. Biomed Opt Express. 2012; 3(1): 86–103. [CrossRef] [PubMed]
Wang C, Wang YX, Li Y. Automatic choroidal layer segmentation using Markov random field and level set method. IEEE J Biomed Health Inf. 2017; 21(6): 1694–1702. [CrossRef]
Srinath N, Patil A, Kumar VK, Jana S, Chhablani J, Richhariya A. Automated detection of choroid boundary and vessels in optical coherence tomography images. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Chicago, IL, USA: IEEE; 2014: 166–169, doi: 10.1109/EMBC.2014.6943555.
George N, Jiji CV. Two stage contour evolution for automatic segmentation of choroid and cornea in OCT images. Biocybern Biomed Eng. 2019; 39(3): 686–696. [CrossRef]
Danesh H, Kafieh R, Rabbani H, Hajizadeh F. Segmentation of choroidal boundary in enhanced depth imaging octs using a multiresolution texture based modeling in graph cuts. Comput Math Methods Med. 2014; 2014: 9, https://doi.org/10.1155/2014/479268. [CrossRef]
Mazzaferri J, Beaton L, Hounye G, Sayah DN, Costantino S. Open-source algorithm for automatic choroid segmentation of OCT volume reconstructions. Sci Rep. 2017; 7(1): 42112. [CrossRef] [PubMed]
Kugelman J, Alonso-Caneiro D, Read SA, et al. Automatic choroidal segmentation in OCT images using supervised deep learning methods. Sci Rep. 2019; 9(1): 13298. [CrossRef] [PubMed]
Devalla SK, Renukanand PK, Sreedhar B-K, et al. Drunet: a dilated-residual U-net deep learning network to segment optic nerve head tissues in optical coherence tomography images. Biomed Opt Express. 2018; 9(7): 3244–3265. [CrossRef] [PubMed]
Chen H-J, Huang Y-L, Tse S-L, et al. Application of artificial intelligence and deep learning for choroid segmentation in myopia. Transl Vis Sci Technol. 2022; 11(2): 38–38. [CrossRef]
Burke J, Engelmann J, Hamid C, et al. An open-source deep learning algorithm for efficient and fully-automatic analysis of the choroid in optical coherence tomography. Transl Vis Sci Technol. 2023; 12(11): 27, https://doi.org/10.1167/tvst.12.11.27.
Branchini LA, Adhi M, Regatieri CV, et al. Analysis of choroidal morphologic features and vasculature in healthy eyes using spectral-domain optical coherence tomography. Ophthalmology. 2013; 120(9): 1901–1908. [CrossRef] [PubMed]
Sonoda S, Sakamoto T, Yamashita T, et al. Choroidal structure in normal eyes and after photodynamic therapy determined by binarization of optical coherence tomographic images. Invest Ophthalmol Vis Sci. 2014; 55(6): 3893–3899. [CrossRef] [PubMed]
Agrawal R, Gupta P, Tan K-A, et al. Choroidal vascularity index as a measure of vascular status of the choroid: measurements in healthy eyes from a population-based study. Sci Rep. 2016; 6(1): 21090. [CrossRef] [PubMed]
Agrawal R, Ding J, Sen P, et al. Exploring choroidal angioarchitecture in health and disease using choroidal vascularity index. Prog Retin Eye Res. 2020; 77: 100829. [CrossRef] [PubMed]
Betzler BK, Ding J, Wei X, et al. Choroidal vascularity index: a step towards software as a medical device. Br J Ophthalmol. 2022; 106(2): 149–155. [CrossRef] [PubMed]
Wei X, Sonoda S, Mishra C, et al. Comparison of choroidal vascularity markers on optical coherence tomography using two-image binarization techniques. Invest Ophthalmol Vis Sci. 2018; 59(3): 1206–1211. [CrossRef] [PubMed]
Liu X, Bi L, Xu Y, Feng D, Kim J, Xu X. Robust deep learning method for choroidal vessel segmentation on swept source optical coherence tomography images. Biomed Opt Express. 2019; 10(4): 1601–1612. [CrossRef] [PubMed]
Muller J, Alonso-Caneiro D, Read SA, Vincent SJ, Collins MJ. Application of deep learning methods for binarization of the choroid in optical coherence tomography images. Transl Vis Sci Technol. 2022; 11(2): 23. [CrossRef] [PubMed]
Zheng G, Jiang Y, Shi C, et al. Deep learning algorithms to segment and quantify the choroidal thickness and vasculature in swept-source optical coherence tomography images. J Innov Opt Health Sci. 2021; 14(1): 2140002. [CrossRef]
Khaing TT, Okamoto T, Ye C, et al. Choroidnet: a dense dilated U-net model for choroid layer and vessel segmentation in optical coherence tomography images. IEEE Access. 2021; 9: 150951–150965. [CrossRef]
Xuan M, Wang W, Shi D, et al. A deep learning–based fully automated program for choroidal structure analysis within the region of interest in myopic children. Transl Vis Sci Technol. 2023; 12(3): 22. [CrossRef] [PubMed]
Dhaun N . Optical coherence tomography and nephropathy: the Octane Study. 2014. https://clinicaltrials.gov/ct2/show/NCT02132741. Accessed May 31, 2023.
Ritchie CW, Ritchie K. The prevent study: a prospective cohort study to identify mid-life biomarkers of late-onset Alzheimer's disease. BMJ Open. 2012; 2(6): e001893. [CrossRef] [PubMed]
Moukaddem D, Strang N, Gray L, McGraw P, Scholes C. Comparison of diurnal variations in ocular biometrics and intraocular pressure between hyperopes and non-hyperopes. Invest Ophthalmol Vis Sci. 2022; 63(7): 1428–F0386.
Sohrab M, Wu K, Fawzi AA. A pilot study of morphometric analysis of choroidal vasculature in vivo, using en face optical coherence tomography. PLoS One. 2012; 7(11): e48631. [CrossRef] [PubMed]
Heckbert P . Color image quantization for frame buffer display. ACM SIGGRAPH Comput Graph. 1982; 16(3): 297–307. [CrossRef]
Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science. Springer; 2015; 9351: 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.
Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning. 2015; 37: 448–456, (ICML'15), JMLR.org.
Loshchilov I, Hutter F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Early Treatment Diabetic Retinopathy Study Research Group. Early treatment diabetic retinopathy study design and baseline patient characteristics: ETDRS Report Number 7. Ophthalmology. 1991; 98(5): 741–756. [CrossRef] [PubMed]
Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006; 31(3): 1116–1128. [CrossRef] [PubMed]
Maloca PM, Pfau M, Janeschitz-Kriegl L, et al. Human selection bias drives the linear nature of the more ground truth effect in explainable deep learning optical coherence tomography image segmentation. J Biophotonics. 2024; 17(2): e202300274, https://doi.org/10.1002/jbio.202300274. [PubMed]
Rahman W, Chen FK, Yeoh J, Patel P, Tufail A, Da Cruz L. Repeatability of manual subfoveal choroidal thickness measurements in healthy subjects using the technique of enhanced depth imaging optical coherence tomography. Invest Ophthalmol Vis Sci. 2011; 52(5): 2267–2271. [CrossRef] [PubMed]
Agrawal R, Wei X, Goud A, Vupparaboina KK, Jana S, Chhablani J. Influence of scanning area on choroidal vascularity index measurement using optical coherence tomography. Acta Ophthalmol (Copenh). 2017; 95(8): e770–e775. [CrossRef]