| Title |
|---|
| Explainable Deep Learning for Teleophthalmology: Visualizing Diagnostic Bias in Eyelid Tumor Screening |
| Authors |
|---|
| Randy Kindangen, Mutmainah Mahyuddin, Prasandhya Yusuf, Agus Sugiharto |
| Presenting |
|---|
| Randy Kindangen |
| PURPOSE: |
|---|
| Teleophthalmology offers a solution for early eyelid tumor screening in resource-limited settings. However, the "black box" nature of Deep Learning (DL) limits clinical trust. This study develops a DL model for classifying eyelid tumors in the Indonesian population and evaluates its decision-making transparency using Gradient-weighted Class Activation Mapping++ (Grad-CAM++). |
| METHODS: |
|---|
| A retrospective study utilized 696 clinical images of eyelid lesions (benign vs. malignant). The dataset was split into training (n=452), validation (n=128), and external evaluation (n=116) sets. A ResNet-50 architecture was trained to classify lesions. Grad-CAM++ was applied to visualize the model's focus across different convolutional layers (from Layer 3 to the final layer) to verify whether predictions aligned with pathological features. |
| RESULTS: |
|---|
| The model achieved an accuracy of 87.0% and an AUC-ROC of 0.92 on the external evaluation set (n=116). Sensitivity for malignancy was 86%. Grad-CAM++ visualization confirmed that in true-positive cases the model correctly focused on tumor morphology. However, in false-negative cases, layer-wise analysis showed a "distraction" phenomenon: initial layers focused on general features, but the final decision layers erroneously converged on the cornea and specular reflections rather than the tumor mass, leading to misclassification. |
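The "distraction" pattern above can be quantified rather than only eyeballed. The sketch below assumes a binary tumor mask is available (e.g. from annotation); the `lesion_focus` helper and the toy maps are hypothetical, but the metric, the fraction of heatmap mass falling on the lesion at each depth, captures the reported shift of attention off the tumor.

```python
# Hedged sketch: measure how much Grad-CAM++ attention mass sits on the
# lesion at each network depth. The mask source and toy maps are assumptions.
import numpy as np

def lesion_focus(cam: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Fraction of total heatmap mass that falls inside the tumor mask."""
    cam = cam / (cam.sum() + 1e-8)
    return float((cam * tumor_mask).sum())

# Toy false-negative case: attention drifts off the lesion between layers.
mask = np.zeros((7, 7)); mask[1:3, 1:3] = 1.0          # annotated lesion
cam_mid = np.zeros((7, 7)); cam_mid[1:3, 1:3] = 1.0    # mid layer: on lesion
cam_out = np.zeros((7, 7)); cam_out[4:6, 4:6] = 1.0    # final layer: on cornea
focus_drop = lesion_focus(cam_mid, mask) - lesion_focus(cam_out, mask)
```

A large positive `focus_drop` across cases would flag the corneal-glare failure mode systematically, rather than one image at a time.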
| CONCLUSIONS: |
|---|
| While the DL model demonstrates high diagnostic capability (AUC > 0.90), Grad-CAM++ exposed a specific "corneal bias" in error cases. This finding underscores that high accuracy metrics alone are insufficient for clinical deployment. Explainable AI tools are essential in teleophthalmology to identify non-clinical artifacts (such as corneal glare) that may mislead algorithms, ensuring that AI serves as a safe and interpretable support tool for ophthalmologists. |