Multi-task deep learning for glaucoma detection from color fundus images

Lucas Pascal, Oscar J. Perdomo, Xavier Bost, Benoit Huet, Sebastian Otálora, Maria A. Zuluaga

Research output: Contribution to journal › Research Article › peer-review

16 Scopus citations


Glaucoma is an eye condition that leads to vision loss and blindness if not diagnosed in time. Diagnosis requires human experts to estimate, in a limited time, subtle changes in the shape of the optic disc from retinal fundus images. Deep learning methods have performed well in classifying and segmenting diseases in retinal fundus images, helping to analyze the growing volume of images. However, model training requires extensive annotations to achieve successful generalization, which is highly problematic given the cost of expert annotations. This work aims at designing and training a novel multi-task deep learning model that leverages the similarities of related eye-fundus tasks and measurements used in glaucoma diagnosis. The model learns different segmentation and classification tasks simultaneously, thus benefiting from their similarity. Evaluated on a retinal fundus glaucoma challenge dataset of 1200 retinal fundus images from different cameras and medical centers, the method obtained a [Formula: see text] AUC, compared to an [Formula: see text] AUC obtained by the same backbone network trained to detect glaucoma alone. Our approach outperforms other multi-task learning models, and its performance is on par with trained experts while using [Formula: see text] times fewer parameters than training each task separately. The data and the code for reproducing our results are publicly available.
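The parameter savings described above come from hard parameter sharing: one backbone feeds several task-specific heads, so only the heads are duplicated across tasks. The following is a minimal PyTorch sketch of that idea, not the authors' exact architecture; the layer sizes, head designs, and class counts are illustrative assumptions.

```python
# Hedged sketch of hard parameter sharing for multi-task fundus analysis.
# A single shared encoder (standing in for the paper's backbone) feeds both
# a segmentation head (e.g. optic disc/cup masks) and a classification head
# (e.g. glaucoma vs. non-glaucoma). All names and sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskFundusNet(nn.Module):
    def __init__(self, num_seg_classes=3, num_cls_classes=2):
        super().__init__()
        # Shared feature extractor: its parameters are learned once and
        # reused by every task, which is where the parameter savings come from.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-task heads: these are the only task-specific parameters.
        self.seg_head = nn.Conv2d(32, num_seg_classes, 1)
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_cls_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)          # computed once, shared by all heads
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskFundusNet()
seg_logits, cls_logits = model(torch.randn(1, 3, 64, 64))
print(seg_logits.shape, cls_logits.shape)
```

Training each task separately would require one full encoder per task; here the encoder's parameters are paid for once, and a joint loss (e.g. a weighted sum of segmentation and classification losses) updates the shared weights from both tasks.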

Original language: English (US)
Article number: 12361
Pages (from-to): 12361
Journal: Scientific Reports
Issue number: 1
State: Published - Jul 20 2022

All Science Journal Classification (ASJC) codes

  • General


