TRANSFORMERS IN ARABIC SHORT ANSWER GRADING: BRIDGING LINGUISTIC COMPLEXITY WITH DEEP LEARNING
| dc.Affiliation | October University for Modern Sciences and Arts (MSA) | |
| dc.contributor.author | Wael Hassan Gomaa | |
| dc.contributor.author | Mena Hany | |
| dc.contributor.author | Emad Nabil | |
| dc.contributor.author | Abdelrahman E. Nagib | |
| dc.contributor.author | Hala Abdel Hameed | |
| dc.date.accessioned | 2026-01-05T10:46:06Z | |
| dc.date.issued | 2025-11-30 | |
| dc.description | SJR 2024: 0.168 (Q4); H-Index: 42 | |
| dc.description.abstract | Automating the evaluation of Arabic short answers is a crucial step in advancing educational technology, as it enables rapid feedback, consistent scoring, and a significant reduction in educators’ workload. However, the structural richness and semantic complexity of Arabic—characterized by its extensive morphology, flexible word order, and diverse vocabulary—make reliable grading especially challenging. To address these difficulties, this study introduces a three-stage framework built upon fine-tuned transformer architectures. In the first stage, both the question and the learner’s response are encoded into dense semantic embeddings. The second stage applies comprehensive fine-tuning to a pre-trained transformer model, allowing it to capture task-specific nuances and better represent the intricate patterns of Arabic. In the final stage, a regression layer generates a numerical score, which is then compared against the human-assigned reference grade for evaluation. The proposed framework was rigorously tested on two benchmark datasets for Arabic short answer grading, AR-ASAG and Philosophy. Experimental results demonstrated strong performance, achieving Pearson correlation scores of 0.85 and 0.97, respectively, and outperforming previously reported state-of-the-art methods. These outcomes confirm the effectiveness of transformer-based models in handling the linguistic subtleties of Arabic while also demonstrating their scalability and adaptability across domains. Overall, the findings position fine-tuned transformers as a promising foundation for building accurate, efficient, and equitable automated grading systems in Arabic educational contexts. | |
| dc.description.uri | https://www.scimagojr.com/journalsearch.php?q=19700182903&tip=sid&clean=0 | |
| dc.identifier.issn | 1992-8645 | |
| dc.identifier.uri | https://repository.msa.edu.eg/handle/123456789/6623 | |
| dc.language.iso | en_US | |
| dc.publisher | Little Lion Scientific | |
| dc.relation.ispartofseries | Journal of Theoretical and Applied Information Technology; Volume 103, Issue 22, Pages 9590-9603 | |
| dc.subject | Arabic Short Answer Grading | |
| dc.subject | Deep Learning | |
| dc.subject | Model Fine-Tuning | |
| dc.subject | Natural Language Processing | |
| dc.subject | Transformers | |
| dc.title | TRANSFORMERS IN ARABIC SHORT ANSWER GRADING: BRIDGING LINGUISTIC COMPLEXITY WITH DEEP LEARNING | |
| dc.type | Article |
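The abstract describes a three-stage pipeline: encode the question and learner answer into dense embeddings, fine-tune a transformer on the grading task, and map the result to a numeric score through a regression layer, evaluated against human grades with Pearson correlation. The following is a minimal illustrative sketch of that shape, not the authors' code: a deterministic stand-in replaces the fine-tuned transformer encoder, and a closed-form ridge regression stands in for the learned regression head. All names, dimensions, and the toy data are assumptions for illustration only.

```python
# Illustrative sketch of the embed -> regress -> Pearson-evaluate pipeline
# described in the abstract (NOT the authors' implementation).
import hashlib
import numpy as np

def encode(text, dim=16):
    # Stand-in for a fine-tuned transformer encoder: a deterministic
    # pseudo-embedding seeded from the text (assumption, for illustration).
    seed = int(hashlib.sha256(text.encode("utf-8")).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(dim)

def pearson(a, b):
    # Pearson correlation, the evaluation metric reported in the abstract.
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stage 1: embed (question, answer) pairs into dense vectors.
pairs = [("q1", "a complete correct answer"),
         ("q1", "an unrelated answer"),
         ("q2", "a partially correct answer")]
grades = np.array([5.0, 1.0, 3.0])  # toy human reference grades
X = np.stack([np.concatenate([encode(q), encode(a)]) for q, a in pairs])

# Stages 2-3 collapsed: a ridge-regression head in place of gradient-based
# fine-tuning, mapping embeddings to a numeric score.
w = np.linalg.solve(X.T @ X + 0.1 * np.eye(X.shape[1]), X.T @ grades)
preds = X @ w

print("Pearson vs. human grades:", round(pearson(preds, grades), 3))
```

With only three toy pairs the head nearly interpolates, so the printed correlation is close to 1; the real framework instead reports 0.85 and 0.97 on the AR-ASAG and Philosophy benchmarks after full transformer fine-tuning.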
