Abstract
Glaucoma is an eye condition that leads to vision loss and blindness if not diagnosed in time. Diagnosis requires human experts to estimate, in a limited time, subtle changes in the shape of the optic disc from retinal fundus images. Deep learning methods have performed well in classifying and segmenting diseases in retinal fundus images, helping to analyze the growing number of images. Training such models, however, requires extensive annotations to achieve successful generalization, which is highly problematic given the cost of expert annotations. This work designs and trains a novel multi-task deep learning model that leverages the similarities of related eye-fundus tasks and measurements used in glaucoma diagnosis. The model learns different segmentation and classification tasks simultaneously, thus benefiting from their similarity. Evaluated on a retinal fundus glaucoma challenge dataset of 1200 retinal fundus images from different cameras and medical centers, the method obtained an AUC of [Formula: see text], compared to [Formula: see text] for the same backbone network trained only to detect glaucoma. Our approach outperforms other multi-task learning models, and its performance is on par with trained experts while using [Formula: see text] times fewer parameters than training each task separately. The data and the code for reproducing our results are publicly available.
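The core idea described above is a single backbone whose features feed both segmentation and classification heads, so the tasks share parameters and supervisory signals. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; the class name `MultiTaskFundusNet`, the layer sizes, and the specific heads are assumptions for illustration and do not reproduce the paper's actual architecture.

```python
# Minimal sketch of a shared-backbone multi-task network for fundus images.
# Layer sizes and heads are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class MultiTaskFundusNet(nn.Module):
    def __init__(self, num_seg_classes: int = 3):
        super().__init__()
        # Shared encoder: features reused by every task head.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Segmentation head: upsamples back to input resolution
        # (e.g. background / optic disc / optic cup masks).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_seg_classes, 2, stride=2),
        )
        # Classification head: global pooling + linear layer
        # for a glaucoma vs. non-glaucoma score.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

# Joint training step: both task losses are summed so the shared
# encoder benefits from the two supervisory signals at once.
model = MultiTaskFundusNet()
images = torch.randn(2, 3, 256, 256)             # dummy fundus images
seg_target = torch.randint(0, 3, (2, 256, 256))  # dummy disc/cup masks
cls_target = torch.rand(2, 1)                    # dummy glaucoma labels
seg_logits, cls_logits = model(images)
loss = nn.CrossEntropyLoss()(seg_logits, seg_target) + \
       nn.BCEWithLogitsLoss()(cls_logits, cls_target)
loss.backward()
```

Because the heads share a single encoder, the total parameter count grows only marginally with each added task, which is the mechanism behind the parameter savings reported in the abstract.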
| Original language | English (US) |
| --- | --- |
| Article number | 12361 |
| Pages (from-to) | 12361 |
| Journal | Scientific Reports |
| Volume | 12 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jul 20 2022 |
All Science Journal Classification (ASJC) codes
- General