Murat Hasanreisoglu, Muhammad Sohail Halim, Adithi Deborah Chakravarthy, Maria Soledad Ormaechea, Gunay Uludag, Muhammad Hassan, Huseyin Baran Ozdemir, Pinar Cakar Ozdal, Daniel Colombero, Marcelo N Rudzinski, Bernardo Ariel Schlaen, Yasir Jamal Sepah, Parvathi Chundi, Mahadevan Subramaniam, Quan Dong Nguyen; Ocular Toxoplasmosis Lesion Detection on Fundus Photograph using a Deep Learning Model. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1627.
To assess the utility of a deep learning algorithm in distinguishing ocular toxoplasmosis (OT) lesions from normal fundus photographs.
Fundus images of eyes with OT lesions were collected from 6 uveitis clinics (Argentina, Turkey, and the United States). All images were resized to 512×512 pixels, and contrast adjustment was performed using contrast-limited adaptive histogram equalization (CLAHE). Images were annotated to localize and label OT entities. For every OT entity, an image patch matching the size and scale of the object was extracted from the original image; healthy patches from the images were labelled accordingly. Patch-level classification was performed, followed by image-level classification. A sliding-window protocol was applied across every full-color image, and the outputs of the patch-level classifier were used to generate a probability heat map of regions containing OT entities in the fundus image. The heat map and patch features were then combined in a dual-input hybrid CNN model to detect OT fundus images. Results were evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity for toxoplasmosis classification.
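The sliding-window heat-map step described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes a fixed patch size and stride (the abstract describes patches scaled to each OT entity), and the patch classifier is a stand-in for the trained patch-level CNN.

```python
import numpy as np

def sliding_window_heatmap(image, patch_classifier, patch=64, stride=32):
    """Slide a fixed-size window across a 2-D image, score each patch
    with the classifier, and average overlapping scores per pixel."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=float)
    count = np.zeros((h, w), dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = patch_classifier(image[y:y + patch, x:x + patch])
            heat[y:y + patch, x:x + patch] += p
            count[y:y + patch, x:x + patch] += 1
    # Average the accumulated probabilities over overlapping windows
    return heat / np.maximum(count, 1)

# Example: a dummy classifier that scores a patch by its mean intensity
img = np.random.rand(512, 512)
hm = sliding_window_heatmap(img, lambda p: float(p.mean()))
```

In the abstract's pipeline, the resulting per-pixel probability map is one of the two inputs to the hybrid dual-input CNN, alongside the patch features.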
Color fundus images from 246 eyes of 215 patients with OT were utilized; patients with an uncertain diagnosis or non-OT lesions were excluded. Sixty-six (66) images from control subjects were also included. Table 1 outlines the results of the classification models using a 70/30 sampling ratio. The hybrid dual-input model achieved the highest AUC, accuracy, sensitivity, and specificity, at 0.949, 0.925, 0.919, and 0.930, respectively.
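The evaluation metrics reported above can be computed from per-image predicted probabilities and ground-truth labels. Below is a minimal NumPy sketch (AUC via the Mann-Whitney rank formulation; sensitivity and specificity at an assumed 0.5 threshold); the toy labels and scores are illustrative, not the study's data.

```python
import numpy as np

def auc_sens_spec(y_true, y_score, threshold=0.5):
    """Return ROC AUC, sensitivity, and specificity for binary labels."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # AUC = P(score of a positive > score of a negative), ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(pos) * len(neg))
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return auc, sensitivity, specificity

# Toy example with two OT (1) and two control (0) images
auc, sens, spec = auc_sens_spec([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])
```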
The proposed model demonstrates excellent performance in identifying OT entities, with high sensitivity and specificity on both patches and full-resolution images, making it a potentially useful tool to aid physicians in the diagnosis of ocular toxoplasmosis.
This is a 2020 ARVO Annual Meeting abstract.
Hybrid Dual Input CNN model
Evaluation metrics for patch-level, image-level, and hybrid dual-input classifications