Title: Real-time recognition of American sign language using long short-term memory neural network and hand detection
Authors: Abdulhamied, Reham Mohamed; Nasr, Mona M.; Abdulkader, Sarah N.
Date deposited: 2023-02-08
Date issued: 2023-01
Type: Article
Language: en-US
Keywords: Action detection; Hand gesture; LSTM model; MediaPipe; Sign language
DOI: https://doi.org/10.11591/ijeecs.v30.i1.pp545-556
Handle: http://repository.msa.edu.eg/xmlui/handle/123456789/5338

Abstract: Sign language recognition is very important for deaf and mute people because it offers them many facilities: it converts hand gestures into text or speech, and it helps them communicate and express mutual feelings. This paper's goal is to estimate sign language using action detection, predicting which action is being demonstrated at any given time without forcing the user to wear any external devices. User signs are captured with a webcam; for example, if the user signs "thank you", the entire set of frames for that action is used to determine which sign is being demonstrated. A long short-term memory (LSTM) model is used to produce a real-time sign language detection and prediction flow. Dropout layers were also applied for both the training and testing datasets to handle overfitting in deep learning models, which noticeably improved the final accuracy. After training and implementing the model, we achieved 99.35% accuracy, which allows deaf and mute people to communicate more easily with society.
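As a concrete illustration of the kind of pipeline the abstract describes, below is a minimal sketch of an LSTM classifier with dropout layers applied to sequences of MediaPipe keypoints. This is not the authors' exact architecture: the sequence length (30 frames), the feature size (1662, the flattened MediaPipe Holistic landmark vector), the dropout rate (0.2), the layer widths, and the number of signs (3) are all illustrative assumptions.

    # Minimal sketch, assuming a 30-frame window of flattened
    # MediaPipe Holistic keypoints (1662 values per frame) and
    # 3 target signs. All sizes are illustrative assumptions.
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, Dropout

    SEQ_LEN, N_FEATURES, N_SIGNS = 30, 1662, 3  # assumed values

    model = Sequential([
        LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
        Dropout(0.2),  # dropout between recurrent blocks to reduce overfitting
        LSTM(128, return_sequences=True),
        Dropout(0.2),
        LSTM(64),
        Dense(64, activation='relu'),
        Dropout(0.2),
        Dense(N_SIGNS, activation='softmax'),  # one probability per sign
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['categorical_accuracy'])

    # At inference time, webcam frames are converted to keypoint vectors
    # and buffered; once SEQ_LEN frames are collected, the model predicts
    # which sign the whole action represents.
    dummy_sequence = np.zeros((1, SEQ_LEN, N_FEATURES), dtype=np.float32)
    print(model.predict(dummy_sequence).shape)  # -> (1, N_SIGNS)

Classifying the whole buffered sequence at once, rather than frame by frame, is what lets the model treat a sign such as "thank you" as a single action spanning many frames.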