Open Access
Multidisciplinary Ophthalmic Imaging  |   February 2025
Retinal Arteriovenous Information Improves the Prediction Accuracy of Deep Learning–Based baPWV Index From Color Fundus Photographs
Author Affiliations & Notes
  • Michiyuki Saito
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Mizuho Mitamura
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Kanae Fukutsu
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Dong Zhenyu
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Ryo Ando
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Satoru Kase
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Satoshi Katsuta
    Teine Keijinkai Hospital, Sapporo, Japan
  • Susumu Ishida
    Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
  • Correspondence: Michiyuki Saito, Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N-15, W-7, Kita-ku, Sapporo 060-8638, Japan; [email protected]
Investigative Ophthalmology & Visual Science February 2025, Vol.66, 63. doi:https://doi.org/10.1167/iovs.66.2.63
Abstract

Purpose: To compare the prediction accuracy of brachial–ankle pulse wave velocity (baPWV) from color fundus photographs (CFPs) using different deep learning models.

Methods: This retrospective study analyzed the data of 696 participants whose baPWVs and CFPs were obtained during medical checkups. Arteriolar and venular probability maps, automatically calculated from the CFPs with our modified deep U-net, the Hokkaido University retinal vessel segmentation (HURVS) model, were applied as channel attention to provide retinal vessel location information for baPWV prediction. The baPWV prediction parameters consisted of predicted baPWVs from a single-input model using CFPs only and from a three-input model using CFPs plus arteriolar and venular probability maps. The single- and three-input models adopted a common depth-wise net and were separately pretrained and trained with fivefold cross-validation. These baPWV prediction parameters were corrected using multiple regression equations with age, sex, and systolic blood pressure and were defined as single- and three-input regression-predicted baPWVs. The main outcome measures were the correlation coefficients between the true baPWV and the baPWV prediction parameters.

Results: The correlation coefficient with true baPWVs was higher for the three-input predicted baPWVs (R = 0.538) than for the single-input predicted baPWVs (R = 0.527). After regression, the three-input, regression-predicted baPWVs (R = 0.704) had the highest prediction accuracy, followed by the single-input, regression-predicted baPWVs (R = 0.692).

Conclusions: The three-input model predicted the true baPWV with high accuracy. The improvement in prediction accuracy achieved by channel attention to the arteriolar and venular probability maps generated by the HURVS model confirms that arterioles and venules are relevant regions for baPWV prediction.

Color fundus photographs (CFPs) potentially contain a variety of information, including age, sex, and cardiovascular risk factors (e.g., blood pressure, smoking status, arteriosclerosis, body mass index).1–5 Although there are objective grading criteria for arteriovenous nicking and hypertensive retinopathy,6 in clinical practice the assessment of arteriosclerosis is currently entrusted to the subjective judgment of ophthalmologists. Thus, developing methods to identify arteriosclerosis automatically and objectively from fundus images is crucial for predicting serious cardiovascular events.
Brachial–ankle pulse wave velocity (baPWV) is one of the most common indicators of arterial stiffness in health checkups, together with the cardio–ankle vascular index and the ankle–brachial pressure index.7 The more advanced the arteriosclerosis, the faster the pulse wave becomes. The risk of cardiovascular events associated with a high PWV has been shown to be approximately three times higher than that associated with a normal PWV.8,9 Concerning PWV and CFPs, a study demonstrated that an increase in baPWV correlated with a decrease in the central retinal arteriolar equivalent.10 Owing to technical limitations at the time, segmentation of the entire retinal vasculature was not available, and that report estimated baPWV by focusing on specific regions of the retinal vessels. Recent developments in deep learning have enabled comprehensive assessment of the entire retinal vasculature captured in each CFP, and we developed a neural network model, Hokkaido University retinal vessel segmentation (HURVS), which automatically identifies retinal arterioles and venules separately and calculates the area of each vasculature.11 This model demonstrated high accuracy in retinal blood vessel segmentation (sensitivity, 0.778; specificity, 0.985; area under the receiver operating characteristic curve, 0.98; overall accuracy, 0.967 when validated on the Digital Retinal Images for Vessel Extraction [DRIVE] database).11 The sums of the pixels in the arteriolar and venular probability maps were defined as the total arteriolar area (AA) and total venular area (VA), respectively. Using the HURVS model, we found that AA had a stronger negative correlation with baPWV (R = −0.40, P < 0.001) than VA did (R = −0.36, P < 0.001), and the predicted baPWV estimated from multiple regression equations using AA, age, sex, and systolic blood pressure (SBP) correlated well with the true (actually measured) baPWV (R = 0.697, P < 0.001).12 Thus, baPWVs predicted from CFPs using our model may serve as an alternative biomarker for evaluating systemic arteriosclerosis.
Recently, the concept of end-to-end artificial intelligence (AI) has been explored. Conventional machine-learning systems require multiple stages of processing during data analysis, whereas end-to-end deep learning performs all processing in a single, large neural network with multiple layers and modules. For example, in automated driving, a non–end-to-end approach requires humans to solve multiple subtasks, such as object recognition, lane detection, path planning, and integrated steering control.13 In contrast, the end-to-end learning approach involves learning the steering control directly from images acquired from an in-vehicle camera and produces better results in an environment with sufficient computational power. If similar results could be obtained in the field of ophthalmology, conventional feature extraction based on past ophthalmologists’ knowledge would become unnecessary. 
By contrast, channel attention in deep learning refers to a focus mechanism within a neural network that uses the relationships between channels to concentrate on relevant features of the input image.14 Specifically, channel attention highlights important features and thereby improves the performance of the model on specific tasks. If the pathologically relevant parts of an image are known, the accuracy of deep learning can be improved by indicating those regions of interest to the AI as channel attention. This study aimed to compare the prediction accuracy of baPWV between one model that uses CFPs in an end-to-end process and another model that adds retinal arteriovenous position information as channel attention on top of the CFPs.
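As a point of reference only, the sketch below illustrates the general idea of channel attention with a squeeze-and-excitation-style module in TensorFlow/Keras; the layer sizes and reduction ratio are illustrative assumptions and do not represent the implementation used in this study.

import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(feature_maps: tf.Tensor, reduction: int = 8) -> tf.Tensor:
    """Reweight the channels of `feature_maps` by learned channel-wise importance
    (squeeze-and-excitation style). Layer sizes are illustrative assumptions."""
    channels = feature_maps.shape[-1]
    # Squeeze: summarize each channel with global average pooling.
    squeezed = layers.GlobalAveragePooling2D()(feature_maps)
    # Excite: learn inter-channel relationships and map them to weights in [0, 1].
    weights = layers.Dense(channels // reduction, activation="relu")(squeezed)
    weights = layers.Dense(channels, activation="sigmoid")(weights)
    weights = layers.Reshape((1, 1, channels))(weights)
    # Scale: emphasize the channels judged relevant and suppress the rest.
    return feature_maps * weights

In the present study, by comparison, the attention is supplied explicitly by stacking the arteriolar and venular probability maps as additional input channels, as described in the Methods.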
Methods
Study Subjects and HURVS Model for AA and VA
This retrospective study adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Hokkaido University Hospital (C-T2023-0179). Because this was a noninvasive, retrospective, observational study, consent was obtained on an opt-out basis, with patients given an opportunity to refuse participation via a website. CFPs (n = 696) were obtained from 372 individuals who underwent regular health checkups at the Keijinkai Maruyama Clinic, where baPWV was measured simultaneously. CFPs were obtained using an automatic fundus camera (AFC-330; NIDEK, Gamagori, Japan). Most of the participants in this study were identical to those previously reported by us.11,12 Our deep learning algorithm, based on the modified deep U-net HURVS model11 (see Supplementary Fig. S1 for the HURVS model), was used to perform retinal vessel segmentation, arteriovenous classification, and retinal arteriovenous area measurement on the CFPs. Arteriolar and venular probability maps were also automatically generated from the CFPs at the same time. The AA and VA were defined as the sums of the pixels in the respective probability maps.
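Conceptually, this area calculation reduces to summing the pixel values of each probability map. The minimal sketch below assumes the HURVS outputs are available as NumPy arrays with values in [0, 1]; the variable names are hypothetical.

import numpy as np

def vessel_area(probability_map: np.ndarray) -> float:
    """Total vessel area as the sum of the pixel probabilities (map scaled to [0, 1])."""
    return float(np.sum(probability_map))

# Hypothetical usage with HURVS-style outputs:
# aa = vessel_area(arteriolar_probability_map)  # total arteriolar area (AA)
# va = vessel_area(venular_probability_map)     # total venular area (VA)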
Depth-Wise Net for Single- and Three-Input Models
In the single-input model, only the CFPs were used as input, whereas in the three-input model the arteriolar and venular probability maps were added to the CFPs with their positional information retained. In the three-input model, the two probability maps were overlaid as additional image layers, so the positional relationship between the CFP and the probability maps was preserved and the maps supplied the depth-wise net with the locations of the retinal blood vessels in each CFP.
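The difference between the two inputs can be pictured as follows. This is a sketch under the assumption that the CFP is an RGB array and the probability maps are single-channel arrays of matching height and width; all names are hypothetical.

import numpy as np

def build_single_input(cfp_rgb: np.ndarray) -> np.ndarray:
    """Single-input model: the CFP alone (H x W x 3)."""
    return cfp_rgb

def build_three_input(cfp_rgb: np.ndarray,
                      arteriolar_map: np.ndarray,
                      venular_map: np.ndarray) -> np.ndarray:
    """Three-input model: the CFP plus the arteriolar and venular probability maps
    stacked as two extra channels (H x W x 5), so pixel positions stay aligned."""
    return np.concatenate(
        [cfp_rgb,
         arteriolar_map[..., np.newaxis],
         venular_map[..., np.newaxis]],
        axis=-1,
    )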
After the input layer, the single- and three-input models used a common neural network, the depth-wise net (Fig. 1A). This 22-layer deep learning model comprises 12 depth-wise convolutional layers, which retain location information along the channel (layer) direction of the image while reducing computational complexity, together with six convolutional layers and four dense layers. We used four residual networks to maintain the layer depth and six dropout layers to prevent overtraining.
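Because the full layer listing is given only in Figure 1A, the block below is merely a schematic of the kind of unit such a network combines (depth-wise convolutions, a residual skip connection, and dropout); every hyperparameter here is an assumption, not the published configuration.

import tensorflow as tf
from tensorflow.keras import layers

def depthwise_residual_block(x: tf.Tensor, filters: int, drop_rate: float = 0.2) -> tf.Tensor:
    """Schematic building block: depth-wise convolutions process each channel
    separately, preserving spatial information at low computational cost; a 1x1
    convolution then mixes channels; a residual (skip) connection maintains the
    layer depth; dropout limits overtraining. All hyperparameters are assumptions."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)
    y = layers.DepthwiseConv2D(3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters, 1, padding="same")(y)
    y = layers.Dropout(drop_rate)(y)
    return layers.Add()([shortcut, y])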
Figure 1.
 
(A) Depth-wise net for the single-input and three-input models. The depth-wise net is a neural network with 22 layers that uses six convolutional layers, 12 depth-wise convolutional layers, and four dense layers. The neural network uses five skip connections (orange arrows), and the number of channels doubles stepwise through 64, 128, 256, 512, 1024, and 2048. (B) Flow of fundus photographs used in the study. We used 9998 CFPs without baPWV information for pretraining and 696 CFPs with baPWV information for the main training and validation. In the pretraining, we assigned a provisional baPWV index generated by shallow learning from age, sex, and SBP to the 9998 CFPs without baPWV information. In the main training and validation, we performed fivefold cross-validation of the predicted baPWV with the 696 CFPs with actual baPWV information.
Training Parameters
We used an AI-Compliant Advanced Computer System at Hokkaido University. We implemented the neural network using TensorFlow 19.03 (Google, Mountain View, CA, USA). Training images were augmented randomly by flipping them horizontally and rotating them by 15°, with an epoch limit of 250 and early stopping after 15 epochs. To minimize overhead and make maximal use of the graphics processing unit (GPU) memory, we prioritized the size of the input images over the batch size. For the FUJITSU PRIMERGY CX2570 M5 Multi-Node Server (FUJITSU, Tokyo, Japan) with a Tesla V100 SXM2 (32-GB GPU accelerator card × 4; NVIDIA, Santa Clara, CA, USA), we chose an input size of 704 × 704 pixels and set the batch size to 64 samples. The mean squared error loss function and the Adam optimizer were used with the following parameters: initial learning rate = 0.00003, α = 0.001, β1 = 0.9, β2 = 0.999, and ε = 1E-8.
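In TensorFlow/Keras terms, these settings correspond roughly to the sketch below; the early-stopping patience of 15 epochs is our reading of "early stopping after 15 epochs," and the model and data arguments are placeholders.

import tensorflow as tf

def compile_and_train(model: tf.keras.Model, x_train, y_train, x_val, y_val):
    """Compile and train with the reported settings: Adam optimizer with an
    initial learning rate of 3e-5, beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-8
    (the separately reported alpha = 0.001 is not mapped onto a Keras argument
    here), mean squared error loss, 704 x 704 inputs, batch size 64, and up to
    250 epochs. The early-stopping patience of 15 epochs is an interpretation of
    'early stopping after 15 epochs'."""
    optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5,
                                         beta_1=0.9, beta_2=0.999, epsilon=1e-8)
    model.compile(optimizer=optimizer, loss="mse")
    early_stop = tf.keras.callbacks.EarlyStopping(patience=15, restore_best_weights=True)
    return model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=250, batch_size=64, callbacks=[early_stop])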
Training Methods and Verification
To train the neural network, we used 9998 CFPs from the medical checkup data without baPWV data for pretraining and 696 CFPs with baPWV data for the main training and validation (Fig. 1B). First, because there were no baPWV data in the pretraining group of 9998 CFPs, a provisional baPWV index obtained by shallow learning (a three-layer neural network) from AA, VA, age, sex, and blood pressure was used instead of the true baPWV. Subsequently, we used the 696 CFPs with baPWV data for the main training and validation and performed stratified fivefold cross-validation for the predicted baPWV. The main training was performed by fine-tuning the neural network at all layers after pretraining. We performed pretraining, main training, and validation separately for the single-input and three-input models. Linearly normalized baPWV values were used for all of the processes.
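The validation flow can be sketched as below. This is only a schematic: the study used stratified fivefold cross-validation, whereas the sketch uses a plain KFold split for brevity, and build_model is a hypothetical factory returning the pretrained depth-wise net.

import numpy as np
from sklearn.model_selection import KFold

def fivefold_predictions(inputs: np.ndarray, true_bapwv: np.ndarray, build_model):
    """Out-of-fold prediction loop: for each fold, a model pretrained on the
    9998 CFPs (returned by the hypothetical factory `build_model`) is fine-tuned
    on four folds and predicts the held-out fold, so every CFP receives one
    out-of-fold predicted baPWV. A plain KFold split is used here for brevity;
    the study used stratified folds."""
    predictions = np.zeros(len(true_bapwv), dtype=float)
    folds = KFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, val_idx in folds.split(inputs):
        model = build_model()
        model.fit(inputs[train_idx], true_bapwv[train_idx],
                  validation_data=(inputs[val_idx], true_bapwv[val_idx]))
        predictions[val_idx] = model.predict(inputs[val_idx]).ravel()
    return predictions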
Comparison of the Accuracy of the baPWV Prediction Parameters
Figure 2 shows the eight baPWV prediction parameters whose accuracy in predicting the true baPWV was compared in this study: the light green boxes represent (a) to (d), and the green boxes represent (e) to (h). The correlations between the true baPWV and the following baPWV prediction parameters were analyzed (Fig. 2, light green boxes): (a) normalized age; (b) predicted baPWV calculated from the single-input model using CFPs only (single-input predicted baPWV); (c) predicted baPWV from the three-input model using CFPs + arteriolar probability maps + venular probability maps (three-input predicted baPWV); and (d) normalized AA. Age was included as one of the baPWV prediction parameters for comparison because age is generally known to be correlated with baPWV. AA was chosen as an index because of the high accuracy of baPWV prediction shown in our previous study.12
Figure 2.
 
Comparison of the accuracy of the baPWV prediction parameters. Blue, gray, yellow, and black arrows indicate deep learning operations, mathematical transformations, the addition of information as explanatory variables, and correction with multiple regression equations, respectively.
In addition, the baPWV prediction parameters (b), (c), and (d) were corrected using multiple regression equations with age, sex, and SBP and were defined as (f) single-input regression-predicted baPWV, (g) three-input regression-predicted baPWV, and (h) regression AA, respectively. Normalized age was corrected using multiple regression equations with sex and SBP and was defined as (e) regression age (Fig. 2, green boxes).
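One plausible reading of this correction is a multiple linear regression of the true baPWV on the prediction parameter together with age, sex, and SBP, with the fitted values taken as the regression-corrected parameter. The sketch below uses Python and statsmodels, whereas the study performed its statistics in R 3.6.1.

import numpy as np
import statsmodels.api as sm

def regression_corrected(parameter, age, sex, sbp, true_bapwv):
    """Fit the true baPWV on a prediction parameter plus age, sex, and SBP by
    ordinary least squares and return the fitted values as the
    regression-corrected parameter (one plausible implementation; the study
    performed its statistics in R 3.6.1)."""
    X = sm.add_constant(np.column_stack([parameter, age, sex, sbp]))
    return sm.OLS(true_bapwv, X).fit().fittedvalues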
Statistical Analysis
R 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria) was used for statistical analysis. We used multiple regression analysis to calculate the correlation coefficients between the true baPWV and the baPWV prediction parameters. All eight baPWV prediction parameters and the true baPWV were linearly normalized within the range of 0 to 1. For the predicted baPWV results (b), (c), (f), and (g) shown in Figure 2, no statistically significant differences were observed in a generalized linear mixed model among the coefficients of variation of the fivefold cross-validation groups; the correlation coefficients were therefore regarded as a single group, and the results shown in Figures 3B, 3C, 3F, and 3G are represented by five colored dots arbitrarily allotted according to the fivefold cross-validation groups. P < 0.05 was considered statistically significant in all analyses.
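For reference, the linear normalization and the Pearson product–moment correlation reported in Figure 3 amount to the following computation (shown here as a Python sketch; the study itself used R).

import numpy as np
from scipy.stats import pearsonr

def min_max_normalize(values) -> np.ndarray:
    """Linearly rescale values into the range 0 to 1."""
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())

def correlation_with_true(true_bapwv, prediction_parameter):
    """Pearson product-moment correlation between the (normalized) true baPWV
    and a (normalized) prediction parameter, as plotted in Figure 3."""
    r, p_value = pearsonr(min_max_normalize(true_bapwv),
                          min_max_normalize(prediction_parameter))
    return r, p_value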
Figure 3.
 
Accuracy of baPWV prediction parameters. (A) Correlation between true baPWV and normalized age. (B) Correlation between true baPWV and the single-input predicted baPWV. (C) Correlation between true baPWV and the three-input predicted baPWV. (D) Correlation between true baPWV and normalized AA (reverse scale in this graph). (E) Correlation between true baPWV and regression age. (F) Correlation between true baPWV and the single-input regression-predicted baPWV. (G) Correlation between true baPWV and the three-input regression-predicted baPWV. (H) Correlation between true baPWV and regression AA. Pearson product–moment correlation was used for statistical analysis. All values are linearly normalized within the range of 0 to 1.
Results
Accuracy of the baPWV Prediction Parameters
As shown in Figure 3 and the Table, the correlation coefficients with the true baPWV were 0.572 for the normalized age (Fig. 3A), 0.527 for the single-input predicted baPWV (Fig. 3B), 0.538 for the three-input predicted baPWV (Fig. 3C), and −0.398 for normalized AA (Fig. 3D) (all P < 0.0001). AA in Figure 3D is represented in a reversed scale to make the trend of the plotted points the same as in Figures 3A to 3C and to facilitate comparison. The correlation coefficient was the highest for normalized age, followed by the three-input model, single-input model, and normalized AA in that order. 
Table.
 
Correlations Among True baPWV and baPWV Prediction Parameters
After correction by multiple regression equations, the correlation coefficients with the true baPWV were 0.682 for the regression age (Fig. 3E), 0.692 for the single-input regression-predicted baPWV (Fig. 3F), 0.704 for the three-input regression-predicted baPWV (Fig. 3G), and 0.697 for regression AA (Fig. 3H) (all P < 0.0001). After the regression correction, the three-input regression-predicted baPWV had the highest correlation coefficient, followed by regression AA, the single-input regression-predicted baPWV, and regression age in that order. 
Discussion
The main findings of this study are as follows: 
  • 1. All of the deep learning baPWV prediction parameters, (b), (c), (f), and (g), yielded higher prediction accuracy than that found in our previous study.12
  • 2. The correlation coefficients decreased in the following order: normalized age, three-input model, single-input model, and normalized AA.
  • 3. After further correction by multiple regression equations, the three-input model had the highest correlation coefficient, followed by AA and the single-input model, whereas regression age had the lowest correlation coefficient.
The baPWV prediction parameters shown in Figure 2 were categorized and interpreted as follows. First, the parameters designed by humans based on previous knowledge of their relevance to baPWV are (a) normalized age, (d) normalized AA, (e) regression age, and (h) regression AA. Second, the parameters that the AI learned on its own (i.e., via the end-to-end process) are (b) and (f), the single-input predicted baPWVs using CFPs only, with or without regression correction. Third, the parameters for which the AI learned while being given additional arteriovenous information are (c) and (g), the three-input predicted baPWVs with or without regression correction, both of which used CFPs plus arteriolar and venular probability maps supplied as channel attention by humans with appropriate knowledge.
In this study, the accuracy of the baPWV prediction parameters was higher for both the (b) single-input and (c) three-input models generated by the AI than for (d) AA designated by humans, suggesting that an index that arbitrarily cuts out a certain aspect based on past knowledge is not as robust as an AI-generated index. Numerical values derived from a series of past human observations are traditionally referred to as indexes, such as the body mass index and the cardiothoracic ratio. In this study, AA for parameters (d) and (h) and age for parameters (a) and (e) can be considered in the same category. Human-fixed indexes are useful but have the disadvantage of referring to only certain aspects. In contrast, AI excels at reducing the dimensionality of complex data and creates an index from the training data that captures far more detail. As expected, the AI-based analysis was more accurate than the human-selected indexes in our study.
Interestingly, the (f) single-input regression-predicted baPWV was overtaken by the (h) regression AA prepared by humans. This indicates that the single-input model had likely already picked up information related to age, sex, and SBP from the CFPs, such as blurring due to cataract, macular reflection changes, vessel narrowing, fundus coloration, and disc shape. Therefore, adding multiple regression analysis of age, sex, and SBP did little to improve its accuracy, and it was surpassed by the human-designated AA corrected by multiple regression. Moreover, AI has the drawback that confounding factors are difficult to eliminate; when trying to detect arteriosclerosis from CFPs, a major confounding factor is any feature strongly related to age, such as blurring of CFPs due to cataract. The more age-related modifications that are contained in CFPs, the less likely retinal vascular changes are to be detected. Paradoxically, the high predictive accuracy of (a) normalized age among the non-regressed parameters (a) to (d) confirms that age is strongly correlated with arteriosclerosis. However, predicting baPWV from age implies the same baPWV for every individual of a certain age, making it a clinically meaningless indicator, and its strong correlation makes age a major confounding factor when estimating pathological arteriosclerosis. The single-input model, in which the AI was self-taught only with the CFPs, which contain more confounding factors and direct less focus to the retinal vessels, therefore proved to be a less satisfactory model.
On comparing the three-input model with the single-input model, the (g) three-input regression-predicted baPWV had the best prediction accuracy, suggesting that providing the AI with appropriate attention improves its analytical accuracy. In this study, channel attention to the arteriolar and venular probability maps did not constitute entirely new information for the AI: the retinal arteriovenous information (i.e., the probability maps) was generated from the CFPs by another neural network, and that information was essentially embedded in the CFPs. In other words, the new instruction we gave to the AI was merely a flow of ideas directing it to focus on the retinal arterioles and venules. For the AI to find this flow from the CFPs alone, it would need a much larger neural network and considerable time to mine it. It is thus important to incorporate appropriate human knowledge into the design of AI-based neural networks, and appropriate human knowledge (i.e., an expert's channel attention that provides a flow of ideas) can help improve AI accuracy.
The limitations of this study include the fact that the AA and VA obtained from the CFPs may be affected by CFP alignment, magnification, isotropy, and media opacity. These aspects should be carefully analyzed in future studies to assess the need for AA and VA corrections based on refractive values, axial length, and image alignment. In the present study, the number of subjects with actual baPWV values was relatively small for deep learning; thus, a provisional baPWV index obtained by shallow learning from AA, VA, age, sex, and blood pressure was used for pretraining. Training with true baPWVs from a larger number of cases would yield higher accuracy.
In summary, the three-input model using arteriolar and venular probability maps as channel attention predicted baPWV with augmented accuracy. By incorporating the knowledge that retinal vessels are affected by arteriosclerosis into the neural network design, the three-input model improved the prediction accuracy of baPWV without consuming a large amount of time or computational resources. The excellent prediction accuracy achieved through channel attention based on the HURVS model, which can precisely identify retinal vessels in CFPs, confirmed that arterioles and venules are relevant regions for arteriosclerosis, and their use as channel attention successfully improved the prediction accuracy for baPWV.
Acknowledgments
Disclosure: M. Saito, None; M. Mitamura, None; K. Fukutsu, None; D. Zhenyu, None; R. Ando, None; S. Kase, None; S. Katsuta, None; S. Ishida, None 
References
1. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2(3):158–164.
2. Sharrett AR, Hubbard LD, Cooper LS, et al. Retinal arteriolar diameters and elevated blood pressure: the Atherosclerosis Risk in Communities Study. Am J Epidemiol. 1999;150(3):263–270.
3. Leung H, Wang JJ, Rochtchina E, Wong TY, Klein R, Mitchell P. Impact of current and past blood pressure on retinal arteriolar diameter in an older population. J Hypertens. 2004;22(8):1543–1549.
4. Chew SK, Xie J, Wang JJ. Retinal arteriolar diameter and the prevalence and incidence of hypertension: a systematic review and meta-analysis of their association. Curr Hypertens Rep. 2012;14(2):144–151.
5. Wang SB, Mitchell P, Liew G, et al. A spectrum of retinal vasculature measures and coronary artery disease. Atherosclerosis. 2018;268:215–224.
6. Walsh JB. Hypertensive retinopathy: description, classification, and prognosis. Ophthalmology. 1982;89:1127–1131.
7. Milan A, Zocaro G, Leone D, et al. Current assessment of pulse wave velocity: comprehensive review of validation studies. J Hypertens. 2019;37(8):1547–1557.
8. Vlachopoulos C, Aznaouridis K, Terentes-Printzios D, Ioakeimidis N, Stefanadis C. Prediction of cardiovascular events and all-cause mortality with brachial-ankle elasticity index: a systematic review and meta-analysis. Hypertension. 2012;60(2):556–562.
9. Ohkuma T, Ninomiya T, Tomiyama H, et al. Brachial-ankle pulse wave velocity and the risk prediction of cardiovascular disease: an individual participant data meta-analysis. Hypertension. 2017;69(6):1045–1052.
10. Lin F, Zhu P, Huang F, et al. Aortic stiffness is associated with the central retinal arteriolar equivalent and retinal vascular fractal dimension in a population along the southeastern coast of China. Hypertens Res. 2015;38(5):342–348.
11. Fukutsu K, Saito M, Noda K, et al. A deep learning architecture for vascular area measurement in fundus images. Ophthalmol Sci. 2021;1(1):100004.
12. Fukutsu K, Saito M, Noda K, et al. Relationship between brachial-ankle pulse wave velocity and fundus arteriolar area calculated using a deep-learning algorithm. Curr Eye Res. 2022;47(11):1534–1537.
13. Kim J, Canny J. Interpretable learning for self-driving cars by visualizing causal attention. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2017:2942–2950.
14. Wang Q, Wu B, Zhu P, Li P, Zuo W, Hu Q. ECA-Net: efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2020:11534–11542.