Browsing by Author "Laila Abdelhamid"
Now showing 1 - 2 of 2
Item: Fusing CNNs and attention mechanisms to improve real-time indoor Human Activity Recognition for classifying home-based physical rehabilitation exercises (Elsevier Ltd, 2025-01-01)
Moamen Zaher; Amr S. Ghoneim; Laila Abdelhamid; Ayman Atia

Physical rehabilitation plays a critical role in enhancing health outcomes globally. However, the shortage of physiotherapists, particularly in developing countries where the ratio is approximately ten physiotherapists per million people, poses a significant challenge to effective rehabilitation services. The existing literature on rehabilitation often falls short in data representation and in the employment of diverse modalities, limiting the potential for advanced therapeutic interventions. To address this gap, this study integrates Computer Vision and Human Activity Recognition (HAR) technologies to support home-based rehabilitation, exploring various modalities and proposing a framework for data representation. We introduce a novel framework that leverages both the Continuous Wavelet Transform (CWT) and Mel-Frequency Cepstral Coefficients (MFCC) for skeletal data representation. CWT is particularly valuable for capturing the time-frequency characteristics of the dynamic movements involved in rehabilitation exercises, enabling a comprehensive depiction of both temporal and spectral features. This dual capability is crucial for accurately modelling the complex and variable nature of rehabilitation exercises. In our analysis, we evaluate 20 CNN-based models and one Vision Transformer (ViT) model. Additionally, we propose 12 hybrid architectures that combine CNN-based models with ViT in bi-model and tri-model configurations. These models are rigorously tested on the UI-PRMD and KIMORE benchmark datasets using key evaluation metrics, including accuracy, precision, recall, and F1-score, with 5-fold cross-validation.
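The abstract above describes turning skeletal time series into CWT-based time-frequency images that CNNs can consume. The paper's exact pipeline is not given here; as a rough illustration only, assuming a single joint-angle channel and a Morlet mother wavelet (the function names and parameters below are hypothetical, not the authors'), a minimal scalogram sketch might look like:

```python
import numpy as np

def morlet(scale, length, w0=5.0):
    """Real part of a Morlet wavelet sampled at `length` points."""
    t = np.arange(-length // 2, length // 2) / scale
    return np.cos(w0 * t) * np.exp(-t**2 / 2)

def cwt_scalogram(signal, scales):
    """CWT magnitude: one row per scale, one column per time step."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        wav = morlet(s, min(10 * int(s), len(signal)))
        out[i] = np.abs(np.convolve(signal, wav, mode="same"))
    return out

# Toy joint-angle trajectory: a repetitive exercise-like oscillation plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)
angle = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(400)

scalogram = cwt_scalogram(angle, scales=np.arange(2, 32))
print(scalogram.shape)  # (30, 400) -> a 2-D "image" a CNN can ingest
```

Each row captures the signal's energy at one temporal scale, which is what lets a CNN see both slow posture drift and fast repetition dynamics in a single input.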
Our evaluation also considers real-time performance, model size, and efficiency on low-power devices, emphasising practical applicability. The proposed fused tri-model architectures outperform both single-model and bi-model configurations, demonstrating robust performance across both datasets and making the fused models the preferred choice for rehabilitation tasks. Our proposed hybrid model, DenMobVit, consistently surpasses state-of-the-art methods, achieving accuracy improvements of 2.9% and 1.97% on the UI-PRMD and KIMORE datasets, respectively. These findings highlight the effectiveness of our approach in advancing rehabilitation technologies and bridging the gap in physiotherapy services.

Item: Rehabilitation monitoring and assessment: a comparative analysis of feature engineering and machine learning algorithms on the UI-PRMD and KIMORE benchmark datasets (Taylor and Francis Ltd., 2025-02-04)
Moamen Zaher; Amr S. Ghoneim; Laila Abdelhamid; Ayman Atia

Rehabilitation is crucial for individuals recovering from injuries or illnesses. It combines medical knowledge, therapy, and technology to improve health and independence. However, a global shortage of physiotherapists makes it challenging to provide adequate rehabilitation services. Current rehabilitation research often lacks advanced computational techniques to automate exercise assessment, relying heavily on time-consuming and costly in-person sessions. This study uses computer vision and classical machine learning (ML) to monitor and evaluate physical rehabilitation exercises using skeletal data. It compares five feature extraction approaches, six feature ranking techniques, and thirteen ML algorithms to identify the most influential features for accurate exercise classification using the benchmark datasets UI-PRMD and KIMORE.
The performances of six feature-ranking algorithms (Chi-squared, ReliefF, Gini Decrease, FCBF, Information Gain, and Information Gain Ratio) were examined alongside ML algorithms such as SVMs, RFs, KNN, LDA, and LightGBM, amongst others. ReliefF with an Extra-Trees classifier demonstrated superior performance (classification accuracy of 99.94%) compared to state-of-the-art studies on UI-PRMD (a 4.4% improvement). However, FCBF alongside an Extra-Trees classifier demonstrated robust performance across diverse datasets, achieving 99.64% on UI-PRMD (the second-best result) and 81.85% on KIMORE (the highest accuracy reported compared to state-of-the-art studies). FCBF also attained robust results across the various classifiers, averaging 92.65%.
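The second abstract's core idea is filter-style feature ranking before classification. The authors' exact implementations are not reproduced here; as an illustrative sketch only, a simplified ReliefF (restricted to one nearest hit and one nearest miss per sampled instance, which is an assumption, not the paper's configuration) can be written with plain numpy:

```python
import numpy as np

def relieff_scores(X, y, n_iters=100, rng=None):
    """Simplified ReliefF: reward features that are similar between a
    sample and its nearest same-class hit, and dissimilar between the
    sample and its nearest other-class miss."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0) + 1e-12  # per-feature range
    for _ in range(n_iters):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                            # exclude the sample itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iters

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(42)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])

scores = relieff_scores(X, y)
print(scores)  # expect the discriminative feature 0 to score highest
```

In a full pipeline, the top-ranked features would then feed a classifier such as Extra-Trees, mirroring the ReliefF-plus-Extra-Trees combination the abstract reports as the strongest on UI-PRMD.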