Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract
Investigating the need for building pre-trained models using fundus images with Ultra Wide Field camera
Author Affiliations & Notes
  • Hajime Tanaka
    Junior Resident, Hiroshima University Hospital, Hiroshima, Hiroshima, Japan
  • Hitoshi Tabuchi
    Department of Technology and Design Thinking for Medicine, Hiroshima University Graduate School of Biomedical and Health Sciences, Hiroshima, Hiroshima, Japan
  • Footnotes
    Commercial Relationships: Hajime Tanaka, None; Hitoshi Tabuchi: Thinkout LTD, Code E (Employment); GLORY LTD., Code F (Financial Support); TOPCON CORPORATION, Code F (Financial Support); CRESCO LTD, Code F (Financial Support); OLBA Healthcare Holdings Ltd., Code F (Financial Support); Tomey Corporation, Code F (Financial Support); HOYA Corporation, Code F (Financial Support); Japanese Patents No. 6419055, 6695171, 7139548, 7339483, 7304508, and 7060854, Code P (Patent)
  • Footnotes
    Support: Funding from the Principal Investigator
Citation: Hajime Tanaka, Hitoshi Tabuchi; Investigating the need for building pre-trained models using fundus images with Ultra Wide Field camera. Invest. Ophthalmol. Vis. Sci. 2024;65(7):2373.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose: In recent years, machine learning has become increasingly prominent in fundus image analysis, but the need for large numbers of images is a barrier: image data must be collected for each individual task, which requires significant resources. A common approach in information science is to create a general-purpose model, called a "pre-trained model", and fine-tune it for each task. In this study, we created a pre-trained model using fundus images obtained with an Ultra Wide Field (UWF) camera and verified its generalization performance.

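For readers less familiar with this workflow, the following is a minimal sketch of the pre-train/fine-tune pattern described above, assuming PyTorch and a ResNet-50 backbone; these choices are illustrative and are not details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a general-purpose pre-trained backbone (here: ImageNet weights).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Replace the classification head for the downstream task
# (e.g., 6 fundus classes), then fine-tune the whole network on task data.
model.fc = nn.Linear(model.fc.in_features, 6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```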
Methods: A pre-trained model was created using approximately 10,000 fundus images taken with a UWF camera at Tsukazaki Hospital. The pre-training task was to rotate each image by a random angle in the range 0-360° and estimate that angle. Next, using the pre-trained model as the initial parameters, disease classification was performed on fundus images of approximately 4,000 patients with disease at the same hospital. The classification targets were Age-Related Macular Degeneration, Retinal Detachment, Glaucoma, Retinal Vessel Occlusion, Diabetic Retinopathy, and normal fundus. For comparison, we ran the same experiment both without pre-training and with a model pre-trained on ImageNet, a dataset commonly used for pre-training not only in the medical field but also in general settings.

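The abstract does not give implementation details for this pretext task; the sketch below shows one plausible formulation, with the regression target, ResNet-18 backbone, and angle normalization all assumed for illustration.

```python
import random
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision import models

# Backbone whose head predicts a single (normalized) rotation angle.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

def rotated_batch(images):
    """Rotate each image by a random angle in [0, 360) and keep the target."""
    angles = [random.uniform(0.0, 360.0) for _ in range(len(images))]
    rotated = torch.stack([TF.rotate(img, a) for img, a in zip(images, angles)])
    targets = torch.tensor(angles).unsqueeze(1) / 360.0  # scale to [0, 1)
    return rotated, targets

images = torch.randn(4, 3, 224, 224)   # placeholder batch of fundus images
x, t = rotated_batch(images)
loss = nn.MSELoss()(backbone(x), t)    # one pre-training step
loss.backward()
```

One caveat of regressing a circular quantity this way is the wraparound at 0°/360°; a common alternative is to predict (sin θ, cos θ) or to bin the angle into discrete classes, though the abstract does not say which formulation was used.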
Results: When the downstream model was trained from the pre-trained model built on the 10,000 UWF images, performance did not differ from training without pre-training; accuracy on the 6-class classification was 0.743. On the other hand, when the k-nearest neighbors algorithm (k-NN) was applied to features extracted from the pre-trained model, accuracy improved slightly, to 0.744. However, fine-tuning the more widely used ImageNet pre-trained model gave the highest accuracy, 0.747.

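The k-NN evaluation can be read as a probe of the frozen features; a self-contained sketch follows, in which the backbone, k value, and placeholder tensors are illustrative assumptions rather than details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the UWF pre-trained model; drop the head to get features.
backbone = models.resnet18(weights=None)
extractor = nn.Sequential(*list(backbone.children())[:-1])
extractor.eval()

@torch.no_grad()
def embed(images):
    # Globally pooled features, flattened to (N, 512) for scikit-learn.
    return extractor(images).flatten(1).numpy()

# Placeholder tensors; in practice these would be labeled UWF fundus images.
train_x, train_y = torch.randn(32, 3, 224, 224), torch.randint(0, 6, (32,))
test_x, test_y = torch.randn(8, 3, 224, 224), torch.randint(0, 6, (8,))

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embed(train_x), train_y.numpy())
print("k-NN accuracy:", knn.score(embed(test_x), test_y.numpy()))
```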
Conclusions: The angle estimation task on 10,000 UWF images did not yield a pre-trained model with high generalization performance that could be adapted to downstream tasks in fundus imaging. However, the k-NN experiments on features extracted from the same model suggest that it may nonetheless have learned some useful features from the UWF images. A pre-trained model with better performance might still be obtained by changing the learning conditions (model architecture or hyperparameters) or the learning content (pretext tasks other than angle estimation) in the pre-training process.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
