CNN feature and classifier fusion on novel transformed image dataset for dysgraphia diagnosis in children
Abstract
Dysgraphia is a neurological disorder that hinders the acquisition of normal writing skills in children, resulting in poor writing abilities. Poor or underdeveloped writing skills can negatively affect a child's self-confidence and academic growth. This work proposes several machine learning methods, including transfer learning via fine-tuning, transfer learning via feature extraction, ensembles of deep convolutional neural network (CNN) models, and fusion of CNN features, to develop a preliminary dysgraphia diagnosis system based on handwriting images. An existing online dysgraphia dataset covering several writing tasks is converted into images. Transfer learning with a pre-trained DenseNet201 network is applied to develop four distinct CNN models, trained separately on word, pseudoword, difficult-word, and sentence images. Soft voting and hard voting strategies are employed to ensemble these CNN models. The pre-trained DenseNet201 network is also used to extract CNN features from each task-specific handwriting image set, and the extracted features are then fused in different combinations. Three machine learning algorithms, support vector machine (SVM), AdaBoost, and random forest, are employed to assess the performance of the individual and fused CNN features. Among the task-specific models, the SVM trained on word data achieved the highest accuracy of 91.7%. For ensemble learning, the soft voting ensemble of task-specific CNNs achieved an accuracy of 90.4%. The feature fusion approach substantially improved classification accuracy, with the SVM trained on features fused across the task-specific data achieving an accuracy of 97.3%, surpassing state-of-the-art methods by 16%.
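The feature-fusion pipeline summarized above can be sketched as follows, assuming a Keras DenseNet201 feature extractor and a scikit-learn SVM; the directory layout, image size, and classifier hyperparameters are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch (not the authors' code): DenseNet201 as a frozen feature extractor
# per task-specific image set, concatenation (fusion) of the features, SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

IMG_SIZE = (224, 224)  # assumed input resolution for DenseNet201

# Pre-trained DenseNet201 without its classification head; global average pooling
# yields one 1920-dimensional feature vector per image.
extractor = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", pooling="avg", input_shape=IMG_SIZE + (3,)
)

def extract_features(image_dir: str) -> tuple[np.ndarray, np.ndarray]:
    """Load a task-specific image folder (class subfolders) and return CNN features."""
    ds = tf.keras.utils.image_dataset_from_directory(
        image_dir, image_size=IMG_SIZE, batch_size=32, shuffle=False
    )
    labels = np.concatenate([y.numpy() for _, y in ds])
    feats = []
    for batch, _ in ds:
        batch = tf.keras.applications.densenet.preprocess_input(batch)
        feats.append(extractor.predict(batch, verbose=0))
    return np.vstack(feats), labels

# Hypothetical directories for the four writing tasks; each is assumed to contain
# "dysgraphic/" and "non_dysgraphic/" subfolders with one image per child.
tasks = ["words", "pseudowords", "difficult_words", "sentences"]
task_features = [extract_features(f"data/{t}") for t in tasks]

# Fuse task-specific features by concatenation (rows are assumed to be aligned
# across tasks, i.e. the same child ordering in every folder).
X = np.hstack([feats for feats, _ in task_features])
y = task_features[0][1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The same extracted features could be passed to AdaBoost or random forest classifiers in place of the SVM, and fusing only a subset of the task lists above reproduces the "different combinations" compared in the abstract.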