A Multi-Layered Large Language Model Framework for Disease Prediction
Date
2025
Authors
Mohamed, M.; Emad, R.; Hamdi, A.
Publisher
Springer International Publishing AG
Series Info
Lecture Notes in Networks and Systems; Volume 1416 LNNS, Pages 259–270
Scientific Journal Rankings
SJR 2024: 0.166 (Q4); H-Index: 48
Abstract
Social telehealth has transformed healthcare by allowing patients to share their symptoms and receive medical consultations remotely. Users frequently post symptoms on social media and online health platforms, creating a large repository of medical data that can be leveraged for disease classification and symptom severity assessment. Large language models (LLMs) such as LLAMA3, GPT-3.5 Turbo, and BERT can process complex medical text and thereby enhance disease classification. This study explores three preprocessing techniques for Arabic medical text: text summarization, text refinement, and Named Entity Recognition (NER). Evaluating CAMeL-BERT, AraBERT, and Asafaya-BERT fine-tuned with LoRA, the best performance was achieved by CAMeL-BERT on NER-augmented text (83% accuracy for disease-type classification and 69% for severity assessment). Non-fine-tuned models performed poorly (13–20% for type classification and 40–49% for severity assessment). Embedding LLMs in social telehealth platforms can therefore improve diagnostic accuracy and treatment outcomes.
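As a rough illustration of the fine-tuning setup the abstract describes, the sketch below attaches LoRA adapters to a CAMeL-BERT sequence-classification head using the Hugging Face transformers and peft libraries. The checkpoint name, LoRA hyperparameters, label count, and the NER-augmentation format in the comment are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch of LoRA fine-tuning for disease-type classification.
# Assumptions: checkpoint name, LoRA hyperparameters, number of labels.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

MODEL_NAME = "CAMeL-Lab/bert-base-arabic-camelbert-mix"  # assumed CAMeL-BERT variant
NUM_DISEASE_TYPES = 10                                   # hypothetical label count

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_DISEASE_TYPES
)

# Attach LoRA adapters to the BERT attention projections; only the adapters
# and the classification head are updated during training.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Example input: a symptom post, optionally augmented with entities extracted
# in the NER preprocessing step (the augmentation format here is assumed).
inputs = tokenizer("صداع شديد وحمى منذ يومين", return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # one score per disease type
```

The same setup would be repeated with a separate classification head (or label set) for the severity-assessment task.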
Citation
Mohamed, M., Emad, R., & Hamdi, A. (2025). A multi-layered large language model framework for disease prediction. In Lecture notes in networks and systems (pp. 259–270). https://doi.org/10.1007/978-981-96-6441-2_23
