Simple item record

Author: Podder, Kanchon Kanti
Author: Ezeddin, Maymouna
Author: Chowdhury, Muhammad E.H.
Author: Sumon, Md Shaheenur Islam
Author: Tahir, Anas M.
Author: Ayari, Mohamed Arselene
Author: Dutta, Proma
Author: Khandakar, Amith
Author: Mahbub, Zaid Bin
Author: Kadir, Muhammad Abdul
Date available: 2024-04-22T09:34:30Z
Publication date: 2023-08-14
Publication name: Sensors
Identifier: http://dx.doi.org/10.3390/s23167156
Citation: Podder, K. K., Ezeddin, M., Chowdhury, M. E., Sumon, M. S. I., Tahir, A. M., Ayari, M. A., ... & Kadir, M. A. (2023). Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model. Sensors, 23(16), 7156.
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85168723966&origin=inward
URI: http://hdl.handle.net/10576/54045
Abstract: Everyone has a unique manner of communicating with the world, and such communication helps to interpret life. Sign language is the primary language of communication for hearing- and speech-disabled people. When a sign language user interacts with a non-sign language user, it is difficult for the signer to make themselves understood, and a sign language recognition system can bridge this gap. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) a raw dataset and (2) a face–hand region-based segmented dataset produced from the raw dataset. Moreover, an operational layer-based multi-layer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2- and ResNet18-based CNN backbones and three SelfMLPs were used to construct six different CNN-LSTM-SelfMLP models for performance comparison. The study evaluated the signer-independent mode to reflect real-time application conditions. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face–hand region-based segmentation and the SelfMLP-infused MobileNetV2-LSTM-SelfMLP surpassed previous findings on Arabic Sign Language recognition by 10.970% in accuracy.
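As a rough illustration of the CNN-LSTM-SelfMLP pipeline the abstract describes, the sketch below (PyTorch) wires a MobileNetV2 backbone into an LSTM followed by a classifier head. This record does not define the SelfMLP operational layers, so a plain MLP head stands in for it here; the hidden sizes, clip length, and 50-class vocabulary are illustrative assumptions, not the paper's settings.

import torch
import torch.nn as nn
from torchvision import models


class CnnLstmMlp(nn.Module):
    """Minimal sketch: per-frame CNN features -> LSTM -> MLP classifier."""

    def __init__(self, num_classes: int, hidden_size: int = 512):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        self.cnn = backbone.features            # per-frame conv features (1280 ch)
        self.pool = nn.AdaptiveAvgPool2d(1)     # -> (B*T, 1280, 1, 1)
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True)
        self.head = nn.Sequential(              # stand-in for the paper's SelfMLP
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W) RGB frames of one sign video
        b, t, c, h, w = clip.shape
        feats = self.pool(self.cnn(clip.reshape(b * t, c, h, w)))
        feats = feats.flatten(1).reshape(b, t, -1)   # (B, T, 1280)
        _, (h_n, _) = self.lstm(feats)               # last hidden state per clip
        return self.head(h_n[-1])                    # (B, num_classes) logits

# Usage: classify a dummy 16-frame clip into a hypothetical 50-sign vocabulary.
model = CnnLstmMlp(num_classes=50)
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 50])

In the paper's setup, the same clip would first pass through MediaPipe-based face–hand segmentation before reaching the CNN; that preprocessing step is omitted here for brevity.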
Language: en
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Subject: Arabic Sign Language
deep learning
dynamic sign language
MediaPipe
segmentation
Title: Signer-Independent Arabic Sign Language Recognition System Using Deep Learning Model
Type: Article
Issue number: 16
Volume number: 23
ESSN: 1424-8220
Access type: Open Access

