Fundus images of patients with diabetic retinopathy (DR) often display numerous lesions scattered across the retina. Current methods typically feed the entire image to the network, which is limiting because DR abnormalities are usually localized, and training convolutional neural networks (CNNs) on global images can be hampered by excessive background noise. It is therefore important to enhance the visibility of the important regions and to focus the recognition system on them to improve accuracy. This study investigates the task of classifying the severity of diabetic retinopathy in eye fundus images, applying appropriate preprocessing techniques to enhance image quality. We propose a novel two-branch attention-guided convolutional neural network (AG-CNN) with initial image preprocessing to address these issues. The AG-CNN first establishes attention over the entire image with a global branch and then adds a local branch to recover discriminative cues the global view may miss. We conduct extensive experiments on the APTOS 2019 DR dataset. Our baseline model, DenseNet-121, achieves average accuracy/AUC values of 0.9746/0.995. With the local branch added, the AG-CNN improves the average accuracy/AUC to 0.9848/0.998, a significant advance over the state of the art in this field.
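To make the two-branch design concrete, below is a minimal sketch of how such an AG-CNN could be wired up, assuming PyTorch and a DenseNet-121 backbone for both branches. The channel-mean attention map, the 0.7 cropping threshold, the 224-pixel crop size, and the linear fusion head are illustrative placeholders rather than details taken from the paper; in particular, the paper's gradient-weighted class activation mapping is replaced here by a simpler stand-in for the attention step.

# Hypothetical sketch of a two-branch attention-guided CNN (AG-CNN) for DR grading.
# Assumes a DenseNet-121 backbone; the attention/cropping details are illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models


class Branch(nn.Module):
    """DenseNet-121 feature extractor followed by a 5-class DR severity head."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features              # conv feature maps (B, 1024, H/32, W/32)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1024, num_classes)

    def forward(self, x):
        fmap = self.features(x)
        logits = self.classifier(self.pool(fmap).flatten(1))
        return fmap, logits


class AGCNN(nn.Module):
    """Global branch attends to the whole fundus image; its attention map then
    selects a discriminative crop that the local branch re-classifies."""
    def __init__(self, num_classes: int = 5, crop_size: int = 224, threshold: float = 0.7):
        super().__init__()
        self.global_branch = Branch(num_classes)
        self.local_branch = Branch(num_classes)
        self.fusion = nn.Linear(2 * num_classes, num_classes)
        self.crop_size = crop_size
        self.threshold = threshold

    def attention_crop(self, images, fmap):
        # Channel-wise mean of the global feature maps as a coarse attention map
        # (a simple stand-in for gradient-weighted class activation mapping).
        attn = nn.functional.interpolate(fmap.mean(dim=1, keepdim=True),
                                         size=images.shape[-2:],
                                         mode="bilinear", align_corners=False)
        crops = []
        for img, a in zip(images, attn):
            a = (a - a.min()) / (a.max() - a.min() + 1e-6)
            ys, xs = torch.where(a[0] >= self.threshold)
            if len(ys) == 0:                            # fall back to the full image
                crop = img
            else:                                       # bounding box of high-attention pixels
                crop = img[:, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            crops.append(nn.functional.interpolate(
                crop.unsqueeze(0), size=(self.crop_size, self.crop_size),
                mode="bilinear", align_corners=False))
        return torch.cat(crops, dim=0)

    def forward(self, images):
        g_fmap, g_logits = self.global_branch(images)
        local_images = self.attention_crop(images, g_fmap)
        _, l_logits = self.local_branch(local_images)
        fused = self.fusion(torch.cat([g_logits, l_logits], dim=1))
        return g_logits, l_logits, fused


if __name__ == "__main__":
    model = AGCNN()
    batch = torch.randn(2, 3, 224, 224)     # two preprocessed fundus images
    g, l, f = model(batch)
    print(g.shape, l.shape, f.shape)        # each: torch.Size([2, 5])

In a sketch like this, the global branch supplies both a whole-image prediction and the attention map, while the local branch refines the decision on the cropped lesion region; the two sets of logits are then fused for the final severity grade.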
Citation
Mohamed Abderaouf Moustari (2024-06-27). Two-stage deep learning classification for diabetic retinopathy using gradient weighted class activation mapping. Automatika: Journal for Control, Measurement, Electronics, Computing and Communications, Vol. 65, Issue 3, pp. 1284-1299. Taylor and Francis Ltd.