
Author: Podder, Kanchon Kanti
Author: Ezeddin, Maymouna
Author: Chowdhury, Muhammad E.H.
Author: Sumon, Md Shaheenur Islam
Author: Tahir, Anas M.
Author: Ayari, Mohamed Arselene
Author: Dutta, Proma
Author: Khandakar, Amith
Author: Mahbub, Zaid Bin
Author: Kadir, Muhammad Abdul
Available date: 2024-04-22T09:34:30Z
Publication Date: 2023-08-14
Publication Name: Sensors
Identifier: http://dx.doi.org/10.3390/s23167156
Citation: Podder, K. K., Ezeddin, M., Chowdhury, M. E., Sumon, M. S. I., Tahir, A. M., Ayari, M. A., ... & Kadir, M. A. (2023). Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model. Sensors, 23(16), 7156.
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85168723966&origin=inward
URI: http://hdl.handle.net/10576/54045
Abstract: Every one of us has a unique manner of communicating to explore the world, and such communication helps to interpret life. Sign language is the primary language of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it becomes difficult for the signer to express themselves. A sign language recognition system can help a non-sign language user interpret a signer's signs. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) a raw dataset and (2) a face–hand region-based segmented dataset produced from the raw dataset. Moreover, an operational-layer-based multi-layer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2- and ResNet18-based CNN backbones and three SelfMLP variants were combined to construct six CNN-LSTM-SelfMLP models for performance comparison. The study evaluated the signer-independent mode to reflect real-time application circumstances. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face–hand region-based segmentation combined with the MobileNetV2-LSTM-SelfMLP model surpassed previous findings on Arabic Sign Language recognition by 10.970% in accuracy.
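The pipeline described in the abstract (a CNN backbone extracting per-frame features, an LSTM modelling the frame sequence, and an MLP-style head producing the sign class) can be sketched as below. This is a minimal, illustrative PyTorch sketch and not the authors' released code: the SelfMLP head is approximated by a plain multi-layer perceptron, and the layer sizes, class count, and clip length are assumptions.

# Illustrative sketch of a CNN-LSTM-MLP video classifier in PyTorch.
# The paper's "SelfMLP" head is approximated here by a standard MLP;
# all hyperparameters below are assumptions, not the published setup.
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmMlp(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        # MobileNetV2 backbone used as a per-frame feature extractor.
        backbone = models.mobilenet_v2(weights=None)
        self.features = backbone.features            # conv feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)          # -> (B*T, 1280, 1, 1)
        # LSTM aggregates the per-frame embeddings over time.
        self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden_size,
                            batch_first=True)
        # Stand-in MLP classification head (not the paper's SelfMLP).
        self.head = nn.Sequential(
            nn.Linear(hidden_size, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width) RGB frames
        b, t, c, h, w = clips.shape
        x = self.features(clips.reshape(b * t, c, h, w))
        x = self.pool(x).flatten(1).reshape(b, t, -1)  # (B, T, 1280)
        _, (h_n, _) = self.lstm(x)                     # final hidden state
        return self.head(h_n[-1])                      # (B, num_classes)

if __name__ == "__main__":
    model = CnnLstmMlp(num_classes=50)                 # class count is illustrative
    dummy = torch.randn(2, 16, 3, 224, 224)            # 2 clips of 16 frames
    print(model(dummy).shape)                          # torch.Size([2, 50])

In the same spirit, the face–hand region-based segmentation reported in the paper relies on MediaPipe landmarks to retain only the face and hand regions of each frame before the clips reach the model above.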
Language: en
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Subject: Arabic Sign Language; deep learning; dynamic sign language; MediaPipe; segmentation
Title: Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model
Type: Article
Issue Number: 16
Volume Number: 23
ESSN: 1424-8220
dc.accessType: Open Access

