Deep Learning-Based Bone Tumor Detection from X-ray Images: A DenseNet121 Study
Abstract
The identification of bone tumors from radiographic images is an important task in medical diagnosis, as early detection can support timely clinical decision-making and improve patient management. This study examines the use of the DenseNet121 deep learning architecture for automated bone tumor detection using X-ray images. The task is formulated as a binary classification problem, in which radiographs are categorized as either normal or tumor cases, with the aim of exploring the feasibility of deep learning as a supportive diagnostic tool.
The DenseNet121 model was evaluated on the BTXRD dataset, which contains labeled bone X-ray images representing both healthy and tumor-affected cases. Model performance was assessed using standard evaluation metrics, including accuracy and the area under the ROC curve (AUC), to provide a balanced and objective analysis of its classification behavior. These metrics offer insight into both overall accuracy and the model's discriminative ability across different thresholds.
The experimental results indicate moderate performance. The model achieved an accuracy of 75.73%, reflecting a reasonable ability to differentiate between normal and tumor images, although some misclassifications remain. The area under the curve (AUC) reached 83.87%, suggesting that the model maintains good class separability despite the moderate accuracy.
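The two reported metrics capture different aspects of performance: accuracy depends on a fixed decision threshold (typically 0.5), while AUC summarizes ranking quality across all thresholds, which is why the AUC can exceed the accuracy. A short scikit-learn sketch on hypothetical labels and scores (not the study's actual predictions) illustrates the distinction:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical ground-truth labels (1 = tumor) and model scores,
# for illustration only -- not data from the BTXRD evaluation
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_score = np.array([0.2, 0.6, 0.8, 0.4, 0.1, 0.9, 0.7, 0.3])

# Accuracy requires hard predictions at a chosen threshold
acc = accuracy_score(y_true, (y_score >= 0.5).astype(int))

# AUC is threshold-free: probability a tumor case outranks a normal case
auc = roc_auc_score(y_true, y_score)

print(acc)  # 0.75
print(auc)  # 0.9375
```

Here two borderline cases are misclassified at the 0.5 threshold, yet the scores still rank most tumor cases above normal ones, mirroring the pattern of moderate accuracy alongside a higher AUC.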
Overall, these findings demonstrate the potential of DenseNet121 for bone tumor detection from radiographs, while also highlighting the need for further improvements through larger datasets, enhanced feature learning, and more comprehensive validation before clinical application.