
Author: Mahmud, Sakib
Author: Hossain, Md Shafayet
Author: Chowdhury, Muhammad E. H.
Author: Reaz, Mamun Bin Ibne
Available date: 2023-04-17T06:57:44Z
Publication Date: 2022
Publication Name: Neural Computing and Applications
Resource: Scopus
URI: http://dx.doi.org/10.1007/s00521-022-08111-6
URI: http://hdl.handle.net/10576/41961
Abstract: Electroencephalogram (EEG) signals suffer substantially from motion artifacts when recorded in ambulatory settings using wearable sensors. Because the diagnosis of many neurological diseases relies heavily on clean EEG data, it is critical to remove motion artifacts from motion-corrupted EEG signals using reliable and robust algorithms. Although a few deep learning-based models have been proposed for removing ocular, muscle, and cardiac artifacts from EEG data, to the best of our knowledge no attempt has been made to remove motion artifacts from motion-corrupted EEG signals. In this paper, a novel 1D convolutional neural network (CNN) for signal reconstruction, called the multi-layer multi-resolution spatially pooled (MLMRS) network, is proposed for EEG motion artifact removal. The performance of the proposed model was compared with ten other 1D CNN models: FPN, LinkNet, UNet, UNet+, UNetPP, UNet3+, AttentionUNet, MultiResUNet, DenseInceptionUNet, and AttentionUNet++, in removing motion artifacts from motion-contaminated single-channel EEG signals. All eleven deep CNN models were trained and tested on a single-channel benchmark EEG dataset from PhysioNet containing 23 sets of motion-corrupted and reference ground-truth EEG signals, using the leave-one-out cross-validation method. The performance of the deep learning models is measured using three well-known performance metrics: mean absolute error (MAE)-based reconstruction error, the difference in signal-to-noise ratio (ΔSNR), and percentage reduction in motion artifacts (η). The proposed MLMRS-Net model showed the best denoising performance, producing average ΔSNR, η, and MAE values of 26.64 dB, 90.52%, and 0.056, respectively, over all 23 sets of EEG recordings. The results reported using the proposed model outperform all existing state-of-the-art techniques in terms of average η improvement.
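For illustration only, the following is a minimal Python sketch (not the authors' code) of how the three reported metrics could be computed for one recording. It assumes commonly used definitions of MAE, ΔSNR as the SNR gain after denoising, and η as the percentage drop in residual artifact power; the paper's exact formulas are not given in this record and may differ.

# Minimal sketch of the three evaluation metrics named in the abstract,
# using NumPy and commonly used definitions (assumed, not taken from the paper).
import numpy as np

def mae(reference, denoised):
    """Mean absolute error between ground-truth and reconstructed EEG."""
    return np.mean(np.abs(reference - denoised))

def delta_snr(reference, corrupted, denoised):
    """SNR improvement (dB): SNR after denoising minus SNR before."""
    snr_before = 10 * np.log10(np.sum(reference**2) / np.sum((corrupted - reference)**2))
    snr_after = 10 * np.log10(np.sum(reference**2) / np.sum((denoised - reference)**2))
    return snr_after - snr_before

def artifact_reduction(reference, corrupted, denoised):
    """Assumed definition of η: percentage drop in residual artifact power."""
    noise_before = np.sum((corrupted - reference)**2)
    noise_after = np.sum((denoised - reference)**2)
    return 100.0 * (1.0 - noise_after / noise_before)

# Toy usage with synthetic signals standing in for one EEG recording.
rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 20 * np.pi, 5000))        # clean ground-truth EEG
corrupted = reference + 0.8 * rng.standard_normal(5000)     # motion-corrupted channel
denoised = reference + 0.1 * rng.standard_normal(5000)      # model output (placeholder)

print(f"MAE  = {mae(reference, denoised):.4f}")
print(f"ΔSNR = {delta_snr(reference, corrupted, denoised):.2f} dB")
print(f"η    = {artifact_reduction(reference, corrupted, denoised):.2f} %")

In the paper these metrics are averaged over the 23 PhysioNet recordings under leave-one-out cross-validation, i.e., each model is trained on 22 recordings and evaluated on the held-out one, repeated 23 times.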
Sponsor: Open Access funding provided by the Qatar National Library. This work was made possible by Qatar National Research Fund (QNRF) NPRP12S-0227-190164 and International Research Collaboration Co-Fund (IRCC) Grant: IRCC-2021-001 and Universiti Kebangsaan Malaysia (UKM) under Grant GUP-2021-019 and DIP-2020-004. The statements made herein are solely the responsibility of the authors.
Language: en
Publisher: Springer Science and Business Media Deutschland GmbH
Subject: 1D convolutional neural networks (1D-CNN)
1D-segmentation
Deep learning
Electroencephalography (EEG)
Motion artifacts correction
Signal reconstruction
Signal to signal synthesis
Title: MLMRS-Net: Electroencephalography (EEG) motion artifacts removal using a multi-layer multi-resolution spatially pooled 1D signal reconstruction network
Type: Article
Access Type: Open Access

