Browsing by Author "Khaled Shaban"
Now showing 1 - 2 of 2
Item
CLASEG: advanced multiclassification and segmentation for differential diagnosis of oral lesions using deep learning
(Nature Research, 2025-06-02) Afnan Al-Ali; Ali Hamdi; Mohamed Elshrif; Keivin Isufaj; Khaled Shaban; Peter Chauvin; Sreenath Madathil; Ammar Daer; Faleh Tamimi; Raidan Ba-Hattab
Oral cancer has a high mortality rate, primarily due to delayed diagnoses, highlighting the need for early detection of oral lesions. This study presents a novel deep learning framework for multi-class classification-based segmentation, enabling accurate differential diagnosis of 14 common oral lesions (benign, pre-malignant, and malignant) across various mouth locations using photographic images. A dataset of 2,072 clinical images was used to train and validate the model. The proposed framework integrates EfficientNet-B3 for classification and a ResNet-101-based Mask R-CNN for segmentation, achieving a classification accuracy of 74.49% and a segmentation average precision (AP50) of 72.18. Gradient-weighted class activation mapping was applied to the model outputs to visualize the image regions most influential in the model's predictions. This significantly improves on the state of the art, where previous models achieved lower segmentation accuracy (AP50 < 50%). The framework not only classifies the lesion type but also delineates lesion boundaries with high precision, which is critical for early detection and differential diagnosis in clinical practice.

Item
LexiSem: A re-ranker balancing lexical and semantic quality for enhanced abstractive summarization
(Elsevier B.V., 2025-07-02) Eman Aloraini; Hozaifa Kassab; Ali Hamdi; Khaled Shaban
Sequence-to-sequence neural networks have recently achieved significant success in abstractive summarization, especially through fine-tuning large pre-trained language models on downstream datasets. However, these models frequently suffer from exposure bias, which can impair their performance. To address this, re-ranking systems have been introduced, but their potential remains underexplored despite some demonstrated performance gains. Most prior work relies on ROUGE scores and aligned candidate summaries for ranking, exposing a substantial gap between semantic similarity and lexical overlap metrics. In this study, we demonstrate that a second-stage model can be trained to re-rank a set of summary candidates, significantly enhancing performance. Our novel approach leverages a re-ranker that balances lexical and semantic quality. Additionally, we introduce a new strategy for defining negative samples in ranking models. Through experiments on the CNN/DailyMail, XSum, and Reddit TIFU datasets, we show that our method effectively estimates the semantic content of summaries without compromising lexical quality. In particular, our method sets a new performance benchmark on CNN/DailyMail (48.18 R1, 24.46 R2, 45.05 RL) and on Reddit TIFU (30.37 R1, 23.87 RL).
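
To make the two-stage design described in the CLASEG abstract concrete, the following is a minimal sketch, assuming PyTorch and torchvision: an EfficientNet-B3 classifier, a Mask R-CNN segmenter (torchvision ships a ResNet-50 FPN variant, used here as a stand-in for the paper's ResNet-101 backbone), and Grad-CAM for visual explanation. Only the 14-class count comes from the abstract; the weights, layer choices, and helper names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models
from torchvision.models.detection import maskrcnn_resnet50_fpn

NUM_CLASSES = 14  # benign, pre-malignant, and malignant lesion types (per the abstract)

# Stage 1: multi-class lesion classifier on an EfficientNet-B3 backbone.
clf = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.IMAGENET1K_V1)
clf.classifier[1] = torch.nn.Linear(clf.classifier[1].in_features, NUM_CLASSES)

# Stage 2: instance segmentation; torchvision's ResNet-50 FPN Mask R-CNN stands in
# for the ResNet-101 backbone named in the abstract.
seg = maskrcnn_resnet50_fpn(weights="DEFAULT")

def grad_cam(model, image, target_layer):
    """Plain Grad-CAM: weight the target layer's activations by the pooled
    gradients of the top class score, then ReLU and normalise to [0, 1]."""
    acts, grads = {}, {}

    def save(module, inputs, output):
        acts["a"] = output
        output.register_hook(lambda g: grads.update(g=g))  # gradient w.r.t. the activations

    handle = target_layer.register_forward_hook(save)
    logits = model(image.unsqueeze(0))
    logits[0, logits.argmax()].backward()
    handle.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)           # pooled gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))  # weighted activations
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage on a single (3, H, W) image tensor.
img = torch.rand(3, 300, 300)
clf.eval(); seg.eval()
heatmap = grad_cam(clf, img, clf.features[-1])        # explanation for the predicted class
with torch.no_grad():
    probs = F.softmax(clf(img.unsqueeze(0)), dim=1)   # 14-way class probabilities
    masks = seg([img])[0]["masks"]                    # per-instance lesion masks
```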
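
For the LexiSem item, the sketch below illustrates, under stated assumptions, one way a second-stage re-ranker can order candidate summaries by blending lexical overlap (ROUGE) with semantic similarity (sentence embeddings). The blend weight, the rouge_score and sentence-transformers libraries, and the helper names are illustrative choices, not the paper's method.

```python
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

lexical = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
semantic = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

def blended_score(candidate: str, reference: str, alpha: float = 0.5) -> float:
    """alpha * mean ROUGE F1 + (1 - alpha) * embedding cosine similarity."""
    rouge = lexical.score(reference, candidate)
    lex = sum(r.fmeasure for r in rouge.values()) / len(rouge)
    emb = semantic.encode([candidate, reference], convert_to_tensor=True)
    sem = util.cos_sim(emb[0], emb[1]).item()
    return alpha * lex + (1 - alpha) * sem

def rerank(candidates: list[str], reference: str) -> list[str]:
    """Order beam-search candidates by the blended score; the lowest-scoring
    candidates could serve as negative samples when training a re-ranker."""
    return sorted(candidates, key=lambda c: blended_score(c, reference), reverse=True)

candidates = [
    "the model ranks candidate summaries by lexical and semantic quality .",
    "candidate summaries are ranked by quality .",
]
print(rerank(candidates, "Candidate summaries are re-ranked by lexical and semantic quality."))
```

A blended target of this kind is one plausible way to reward semantic content without sacrificing lexical overlap; the actual LexiSem scoring and negative-sampling strategy are described in the paper itself.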