Show simple item record

Author: Nweke H.F.
Author: Teh Y.W.
Author: Mujtaba G.
Author: Alo U.R.
Author: Al-garadi M.A.
Available date: 2020-04-01T06:59:41Z
Publication Date: 2019
Publication Name: Human-centric Computing and Information Sciences
Resource: Scopus
ISSN: 2192-1962
URI: http://dx.doi.org/10.1186/s13673-019-0194-5
URI: http://hdl.handle.net/10576/13661
Abstract: Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviors, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness and high-performance generalization, provide diversity, and tackle challenging issues that may be difficult to resolve with single-sensor values. The aim of this study is to propose an innovative multi-sensor fusion framework to improve human activity detection performance and reduce the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision tree, logistic regression and k-Nearest Neighbors were used to implement diverse, flexible and dynamic human activity detection systems. To provide compact feature vector representations, we studied a hybrid of a bio-inspired evolutionary search algorithm and a correlation-based feature selection method and evaluated their impact on the feature vectors extracted from each sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate. The results of our evaluation showed a 3% to 24% improvement in accuracy, recall, precision, F-measure and detection ability (AUC) compared to single sensors and feature-level fusion. The benefit of the proposed multi-sensor fusion is the ability to exploit the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests the promising potential of the hybrid feature selection approach and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems. © 2019, The Author(s).
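The multi-view stacking idea summarized in the abstract can be sketched in a few lines: each sensor view trains its own base classifier, the out-of-fold class probabilities from those classifiers form a meta-level feature vector, SMOTE rebalances the training portion, and a meta-classifier produces the final activity label. The sketch below, assuming scikit-learn and imbalanced-learn with synthetic accelerometer/gyroscope data, is illustrative only; the view names, feature dimensions, base/meta classifier choices and hyperparameters are assumptions, not the authors' exact configuration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
n = 600
# Hypothetical feature matrices for two sensor views (e.g. accelerometer, gyroscope).
X_acc = rng.normal(size=(n, 20))
X_gyro = rng.normal(size=(n, 15))
y = rng.integers(0, 4, size=n)  # four activity classes (placeholder labels)

# One base classifier per sensor view (decision tree and k-NN, as named in the abstract).
views = {"acc": (X_acc, DecisionTreeClassifier(max_depth=8)),
         "gyro": (X_gyro, KNeighborsClassifier(n_neighbors=5))}

# Out-of-fold class-probability predictions from each view become the
# meta-level feature vector (multi-view stacking / decision-level fusion).
meta_features = np.hstack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    for X, clf in views.values()
])

X_tr, X_te, y_tr, y_te = train_test_split(
    meta_features, y, test_size=0.3, random_state=0, stratify=y)

# SMOTE rebalances only the training split before fitting the meta-classifier.
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

meta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", meta.score(X_te, y_te))

Stacking predicted probabilities rather than concatenating raw features is what distinguishes this decision-level fusion from the feature-level fusion baseline mentioned in the abstract.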
Sponsor: This research is supported by University of Malaya BKP Special Grant (vote no. BKS006-2018).
Language: en
Publisher: Springer Berlin Heidelberg
Subject: Activity detection
Activity identification
Feature-level fusion
Multi-view stacking ensemble
Multiple classifier systems
Multiple sensor fusion
Wearable sensors
Title: Multi-sensor fusion based on multiple classifier systems for human activity identification
Type: Article
Issue Number: 1
Volume Number: 9

