Authors: Gomaa, Wael H.; Nagib, Abdelrahman E.; Saeed, Mostafa M.; Algarni, Abdulmohsen; Nabil, Emad
Date Accessioned: 2023-10-03
Date Available: 2023-10-03
Date Issued: 2023-06
DOI: https://doi.org/10.3390/bdcc7030122
URI: http://repository.msa.edu.eg/xmlui/handle/123456789/5736
Language: en
Type: Article
Title: Empowering Short Answer Grading: Integrating Transformer-Based Embeddings and BI-LSTM Network
Keywords: automatic scoring; short answer grading; transformers; deep learning; AI in education

Abstract: Automated scoring systems have been revolutionized by natural language processing, enabling the evaluation of students’ diverse answers across various academic disciplines. However, this presents a challenge, as students’ responses may vary significantly in length, structure, and content. To tackle this challenge, this research introduces a novel automated model for short answer grading. The proposed model uses pretrained transformer models, specifically T5, in conjunction with a BI-LSTM architecture, which is effective at processing sequential data by considering both past and future context. Several preprocessing techniques and hyperparameter settings were evaluated to identify the most efficient architecture. Experiments were conducted on a standard benchmark dataset, the North Texas Dataset, on which the model achieved a state-of-the-art correlation value of 92.5%. The proposed model’s accuracy has significant implications for education: it has the potential to save educators considerable time and effort while providing a reliable and fair evaluation for students, ultimately leading to improved learning outcomes.
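
The abstract describes feeding pretrained T5 embeddings into a BI-LSTM that reads the token sequence in both directions before producing a grade. The sketch below illustrates that general pipeline only; it is not the paper's implementation. The use of PyTorch and Hugging Face transformers, the "t5-base" checkpoint, the hidden sizes, the pooling of the final forward/backward states, and the single linear regression head are all assumptions made for illustration.

```python
# Minimal sketch of a T5-embedding + BI-LSTM grader (assumed design, not the paper's exact model).
import torch
import torch.nn as nn
from transformers import T5Tokenizer, T5EncoderModel


class T5BiLSTMGrader(nn.Module):
    def __init__(self, t5_name: str = "t5-base", lstm_hidden: int = 256):
        super().__init__()
        # Pretrained T5 encoder produces contextual token embeddings.
        self.encoder = T5EncoderModel.from_pretrained(t5_name)
        # BI-LSTM considers both past and future context of each token.
        self.bilstm = nn.LSTM(
            input_size=self.encoder.config.d_model,  # 768 for t5-base
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        # Hypothetical head: regress a single numeric grade.
        self.head = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Token-level T5 embeddings: (batch, seq_len, d_model)
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # h_n holds the final hidden states of the forward and backward passes.
        _, (h_n, _) = self.bilstm(hidden)
        pooled = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # (batch, 2 * lstm_hidden)
        return self.head(pooled).squeeze(-1)            # predicted score per answer


if __name__ == "__main__":
    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5BiLSTMGrader()
    # Hypothetical input format: reference answer and student answer in one string.
    batch = tokenizer(
        ["reference: A stack is LIFO. student: last in, first out structure"],
        return_tensors="pt", padding=True, truncation=True,
    )
    scores = model(batch["input_ids"], batch["attention_mask"])
    print(scores.shape)  # torch.Size([1])
```

In this sketch the predicted score would be trained with a regression loss (e.g., MSE) against human grades and evaluated by correlation with them, which is consistent with the correlation metric reported in the abstract; the exact training setup used by the authors is not specified here.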