Minhaj Nur Alam, David Le, Jennifer I Lim, Xincheng Yao; Deep learning for artery-vein classification in OCT angiography. Invest. Ophthalmol. Vis. Sci. 2020;61(9):PB00135.
© ARVO (1962-2015); The Authors (2016-present)
This study demonstrates the feasibility of a deep fully convolutional neural network (FCNN) for artery-vein (AV) classification in optical coherence tomography angiography (OCTA).
We present here 'AV-NET', an FCNN based on the U-Net and DenseNet architectures, which takes en face OCT and OCTA as training data and outputs a corresponding OCTA-AV map. The en face OCT and OCTA are combined as a 2-channel input to the network (Fig. 1). The en face OCT is a near-infrared (NIR) image that provides vessel intensity profiles, while the OCTA contains blood flow strength and vessel geometry features. The encoder network was initialized by transfer learning, using weights pre-trained on the ImageNet dataset, for AV classification in OCTA. Our dataset comprised images from 50 patients (20 control and 30 diabetic retinopathy). The FCNN was trained with the Adam optimizer (learning rate 0.0001), a Dice loss function, and a minibatch size of 8. Additionally, regularization procedures, including data augmentation and cross-validation, were used to prevent overfitting. To evaluate the FCNN, 5-fold cross-validation was employed, with each fold following an 80/20 train/test split. Average accuracy and intersection-over-union (IOU), computed against manually labelled ground truths, were used as the evaluation metrics for automated AV classification.
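The Dice loss used for training can be sketched in a few lines. This is a minimal pure-Python illustration of the standard soft Dice formulation, not the authors' implementation (which would operate on framework tensors):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one binary segmentation channel.

    pred   -- flat list of predicted per-pixel probabilities in [0, 1]
    target -- flat list of ground-truth labels (0 or 1)
    eps    -- smoothing term to avoid division by zero on empty masks
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    # Dice coefficient is 2|A∩B| / (|A| + |B|); the loss is its complement.
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

A perfect prediction yields a loss near 0, and a fully disjoint prediction yields a loss near 1; for a two-class AV map the loss would typically be averaged over the artery and vein channels.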
The AV-NET architecture is illustrated in Fig. 1a. The AV classification algorithm using the AV-NET architecture achieved an average accuracy of 70.5% (73% and 68% for vein and artery, respectively). The IOU metric quantifies pixel-wise similarity between the AV map and the ground truth. The mean IOU between the AV map and the ground truth was 0.72. A sample pair of ground truth and AV map is shown in Fig. 1b. It was observed that the classifier specifically struggles to identify artery and vein at four-way cross sections, which may reflect overlap between arteries and veins. We anticipate that a larger training data set could potentially improve the IOU.
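The IOU metric reported above can be sketched as follows. The helper names are hypothetical and the masks are flat 0/1 lists for illustration; the study would have computed this over full-resolution AV maps:

```python
def iou(pred, target):
    """Intersection-over-union between two binary pixel masks (flat 0/1 lists)."""
    intersection = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    # Convention: two empty masks are a perfect match.
    return intersection / union if union else 1.0

def mean_iou(pred_masks, target_masks):
    """Average IOU over classes, e.g. the artery and vein masks of an AV map."""
    scores = [iou(p, t) for p, t in zip(pred_masks, target_masks)]
    return sum(scores) / len(scores)
```

For example, a predicted mask that overlaps the ground truth on 1 of 3 labelled pixels scores an IOU of 1/3, while identical masks score 1.0.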
The deep learning based FCNN shows promise for fully automated AV classification in OCTA. Further optimization of AV-NET and additional validation with independent data sets are required to improve the performance of automated AV classification.
This is a 2020 Imaging in the Eye Conference abstract.