News Presenter skills evaluation using multi-modality and machine learning

dc.Affiliation: October University for Modern Sciences and Arts (MSA)
dc.contributor.author: Emam, Ahmed Mohamed
dc.contributor.author: Elgarh, Mohamed AbdelAzim
dc.contributor.author: Fahmy, Amany
dc.contributor.author: Abdel Moniem, Amira
dc.contributor.author: Atia, Ayman
dc.date.accessioned: 2023-09-28T11:37:11Z
dc.date.available: 2023-09-28T11:37:11Z
dc.date.issued: 2023-07
dc.description.abstract: Assessing television presenters is a challenging yet essential task, as their evaluation requires considering numerous characteristics. A multi-modal approach is employed, drawing on data sources such as eye gaze, gestures, and facial expressions. Automating this process is crucial because presenter evaluation is exhaustive: assessors must judge the presenter on all of the aforementioned features. This paper proposes a system that assesses the presenter on four key features, namely posture, eye contact, facial expression, and voice. Each feature is assigned a weight, and the presenter receives a grade based on their performance on each feature. The present study focused on facial emotion, eye tracking, and physical posture. The presenter's elbow, shoulder, and nose joints were extracted and served as inputs to classifiers in three categories (machine learning algorithms, template-based algorithms, and deep learning algorithms) to classify the presenter's posture. For eye gaze, distance algorithms such as Euclidean distance and Manhattan distance were employed, while facial expression analysis was conducted using the DeepFace library. The proposed system achieved an accuracy of 92% with SVM among the machine learning algorithms, 75% with the dollarpy template-based recognizer, and 79% with a BiLSTM deep learning model. The dataset used in this study was collected from the Faculty of Mass Communication, MSA University.
dc.description.uri: https://08104euot-1103-y-https-ieeexplore-ieee-org.mplbci.ekb.eg/document/10217597/authors
dc.identifier.doi: 10.1109/IMSA58542.2023.10217597
dc.identifier.other: 10.1109/IMSA58542.2023.10217597
dc.identifier.uri: http://repository.msa.edu.eg/xmlui/handle/123456789/5730
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartofseries: 1st International Conference of Intelligent Methods, Systems and Applications, IMSA 2023; Pages 124 - 129, 2023
dc.subject: Eye gaze; facial emotions; posture
dc.title: News Presenter skills evaluation using multi-modality and machine learning
dc.type: Article
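The weighted grading scheme and the gaze-distance measures described in the abstract can be sketched as follows. This is a minimal illustration only: the weight values, function names, and the assumption that scores lie in [0, 1] are not taken from the paper, which does not publish its weights in the abstract.

```python
import math

# Illustrative weights for the four assessed features (assumed values;
# the paper assigns weights but does not state them in the abstract).
WEIGHTS = {"posture": 0.30, "eye_contact": 0.25,
           "facial_expression": 0.25, "voice": 0.20}

def euclidean(p, q):
    """Straight-line distance between two 2-D gaze points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    """City-block distance between two 2-D gaze points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def overall_grade(scores):
    """Weighted sum of per-feature scores, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
```

In this sketch, per-feature classifiers (SVM, dollarpy, BiLSTM, DeepFace in the paper) would each produce one entry of `scores`, and `overall_grade` combines them into the presenter's final mark.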

Files

Original bundle
Name: MSA avatar.jpg
Size: 49.74 KB
Format: Joint Photographic Experts Group/JPEG File Interchange Format (JFIF)

License bundle
Name: license.txt
Size: 51 B
Format: Item-specific license agreed upon to submission