Investigative Ophthalmology & Visual Science
July 2024
Volume 65, Issue 9
Open Access
ARVO Imaging in the Eye Conference Abstract  |   July 2024
Improved Feature Fusion and Representation for Detecting Keratoconus from Corneal Maps
Author Affiliations & Notes
  • Rossen Mihaylov Hazarbassanov
    Department of Ophthalmology and Visual Sciences, Universidade Federal de Sao Paulo Escola Paulista de Medicina, Sao Paulo, SP, Brazil
  • Laith Al-Zubaidi
    School of Mechanical, Medical and Process Engineering, Queensland University of Technology, Brisbane, Queensland, Australia
  • Ali Al-Timemy
    Biomedical Engineering Department, AL-Khwarizmi College of Engineering, University of Baghdad, Baghdad, Iraq
    Computing and Mathematics, University of Plymouth, Plymouth, United Kingdom
  • Carlos Arce
    Eye Clinic of Sousas, Campinas, Brazil
  • Pedro Henrique Fabres Franco
    Department of Ophthalmology and Visual Sciences, Universidade Federal de Sao Paulo Escola Paulista de Medicina, Sao Paulo, SP, Brazil
  • Luzia Alves Dos Santos
    Department of Ophthalmology and Visual Sciences, Universidade Federal de Sao Paulo Escola Paulista de Medicina, Sao Paulo, SP, Brazil
  • Zahraa Mosa
    College of Science, Al-Nahrain University, Baghdad, Iraq
  • Hazem M Abdelmotaal
    Ophthalmology, Assiut University Faculty of Medicine, Assiut, Egypt
  • Alexandru Lavric
    Computers, Electronics and Automation, Stefan cel Mare University of Suceava, Suceava, Romania
  • Hidenori Takahashi
    Ophthalmology, Jichi Ika Daigaku, Shimotsuke, Tochigi, Japan
  • Suphi Taneri
    University Eye-Clinic, Ruhr-Universitat Bochum, Bochum, Nordrhein-Westfalen, Germany
    Zentrum für Refraktive Chirurgie, Muenster, Germany
  • Wuqaas Munir
    Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States
  • Siamak Yousefi
    Ophthalmology, The University of Tennessee Health Science Center, Memphis, Tennessee, United States
    Genetics, Genomics, and Informatics, The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Footnotes
    Commercial Relationships   Rossen Hazarbassanov, None; Laith Al-Zubaidi, None; Ali Al-Timemy, None; Carlos Arce, Ziemer Ophthalmic Systems AG (C); Pedro Henrique Fabres Franco, None; Luzia Alves Dos Santos, None; Zahraa Mosa, None; Hazem Abdelmotaal, None; Alexandru Lavric, None; Hidenori Takahashi, None; Suphi Taneri, None; Wuqaas Munir, None; Siamak Yousefi, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2024, Vol.65, PB0024. doi:
      Rossen Mihaylov Hazarbassanov, Laith Al-Zubaidi, Ali Al-Timemy, Carlos Arce, Pedro Henrique Fabres Franco, Luzia Alves Dos Santos, Zahraa Mosa, Hazem M Abdelmotaal, Alexandru Lavric, Hidenori Takahashi, Suphi Taneri, Wuqaas Munir, Siamak Yousefi; Improved Feature Fusion and Representation for Detecting Keratoconus from Corneal Maps. Invest. Ophthalmol. Vis. Sci. 2024;65(9):PB0024.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : This study aims to develop and validate a hybrid model that integrates deep learning-based feature fusion for the detection of keratoconus (KC) from corneal maps collected with a dual Scheimpflug analyzer.

Methods : We collected 363 corneal maps from 121 eyes of 61 patients at Sousas Eye Clinics in São Paulo, Brazil, using the GALILEI G4 instrument (Ziemer Ophthalmic Systems AG). The corneal thickness (CT), anterior curvature (AntCurv), and posterior curvature (PostCurv) maps were used for further analysis. After preprocessing the maps and removing non-image objects, we trained two deep learning models, MobileNet and EfficientNet, on each individual map. Each trained model was then used to extract distinctive features from the images, and the features extracted by the two models, which contribute to effective feature representation, were fused by concatenation for each individual map. The fused features were then combined across maps and used to train a naive Bayes classifier (see Fig. 1). The model's performance was evaluated using accuracy and the area under the receiver operating characteristic (ROC) curve (AUC).
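The fusion pipeline described above (two CNN backbones per map, concatenation within and across the three map types, then a naive Bayes classifier) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the feature dimension, the random-projection stand-ins for MobileNet/EfficientNet, and the synthetic class-dependent maps are all assumptions; a real pipeline would feed the preprocessed map images through the penultimate layer of each pretrained network.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
N_EYES, FEAT_DIM = 121, 64  # per-backbone feature size is an assumption

def extract_features(maps, seed):
    """Stand-in for a pretrained CNN backbone (MobileNet or EfficientNet):
    returns one feature vector per map via a fixed random projection."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(maps.shape[1], FEAT_DIM))
    return np.tanh(maps @ W)

# Synthetic stand-ins for the three map types (CT, AntCurv, PostCurv),
# flattened to vectors; labels 0 = normal, 1 = keratoconus.
y = rng.integers(0, 2, size=N_EYES)
fused_per_map = []
for map_seed in range(3):  # one pass per map type
    maps = rng.normal(size=(N_EYES, 256)) + y[:, None]  # class-dependent signal
    f_mobile = extract_features(maps, seed=10 + map_seed)
    f_effnet = extract_features(maps, seed=20 + map_seed)
    # Fusion step 1: concatenate the two backbones' features per map.
    fused_per_map.append(np.concatenate([f_mobile, f_effnet], axis=1))

# Fusion step 2: combine fused features across the three maps,
# then train the naive Bayes classifier on the result.
X = np.concatenate(fused_per_map, axis=1)  # shape (121, 3 * 2 * FEAT_DIM)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print(f"accuracy={accuracy_score(y_te, clf.predict(X_te)):.2f}",
      f"AUC={roc_auc_score(y_te, proba):.2f}")
```

The concatenation keeps the two backbones' representations intact rather than averaging them, which matches the abstract's description of preserving each model's distinctive features before the classifier stage.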

Results : The dataset included 49 normal eyes and 72 eyes with KC, previously diagnosed by two corneal specialists. The proposed model achieved an accuracy of 96.7% and an AUC of 100% in detecting KC from the CT, AntCurv, and PostCurv maps. The results (Fig. 2) indicate that the naive Bayes classifier performed very well in KC detection.

Conclusions : We developed a hybrid machine learning model based on the MobileNet and EfficientNet architectures to detect KC from CT, AntCurv, and PostCurv maps. The findings suggest that machine learning models show promise in detecting KC, and NOR maps are critical for KC diagnosis. Despite being developed on a relatively small number of samples, the model was accurate and generalizable. This approach is particularly suitable for scenarios with small datasets and has the potential to enhance KC research and clinical practice.

This abstract was presented at the 2024 ARVO Imaging in the Eye Conference, held in Seattle, WA, May 4, 2024.

 

Figure 1: Schematic Representation of the Proposed Framework. This framework leverages deep learning feature fusion techniques, utilizing MobileNet and EfficientNet, and applies them to corneal thickness, as well as anterior and posterior curvature maps.

 

Figure 2 presents the outcomes derived from the proposed feature fusion method.
