Show simple item record

Author: Li, Ruilin
Author: Cui, Jian
Author: Gao, Ruobin
Author: Suganthan, P. N.
Author: Sourina, Olga
Author: Wang, Lipo
Author: Chen, Chun Hsien
Available date: 2023-02-15T08:04:01Z
Publication Date: 2022-01-01
Publication Name: Proceedings - 2022 International Conference on Cyberworlds, CW 2022
Identifier: http://dx.doi.org/10.1109/CW55638.2022.00049
Citation: Li, R., Cui, J., Gao, R., Suganthan, P. N., Sourina, O., Wang, L., & Chen, C. H. (2022, September). Situation Awareness Recognition Using EEG and Eye-Tracking data: a pilot study. In 2022 International Conference on Cyberworlds (CW) (pp. 209-212). IEEE.
ISBN: 9781665468145
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85142413363&origin=inward
URI: http://hdl.handle.net/10576/40063
Abstract: Since situation awareness (SA) plays an important role in many fields, measuring SA is a problem of wide interest. Using physiological signals to evaluate SA has become a popular research topic because such signals are non-intrusive and objective. However, previous studies mainly relied on a single physiological signal, such as the electroencephalogram (EEG) or eye tracking, and multi-modal SA recognition remains a research gap. This work therefore conducts a pilot study investigating SA recognition using two modalities: EEG and eye-tracking data. Specifically, an optimized Stroop test, designed to be more compatible with the definition of SA, was used to induce different SA states and collect physiological data. Furthermore, a random vector functional link-based stacking (RVFL-S) model was proposed to perform multi-modal SA recognition. Experimental results showed that combining EEG and eye-tracking data boosts SA recognition performance. Moreover, the proposed RVFL-S model effectively integrates the classification information from the two modalities: it outperformed the baseline methods, achieving 77.62% leave-one-subject-out (LOSO) average accuracy, an improvement of around 5% over baseline classifiers given only one modality. This pilot study demonstrates that using multiple modalities is a promising strategy for SA recognition.
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subjects: Electroencephalogram (EEG); Eye Tracking; Random Vector Functional Link (RVFL); Situation Awareness (SA); Stacking
Title: Situation Awareness Recognition Using EEG and Eye-Tracking data: a pilot study
Type: Conference Paper
Pagination: 209-212
Access Type: Abstract Only
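
The abstract names a random vector functional link (RVFL) network as the base learner of the proposed stacking model. As background only, here is a minimal sketch of a standard RVFL classifier (random hidden layer, direct input-output links, closed-form ridge-regression output weights); the hyperparameters and class design are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class RVFL:
    """Sketch of a random vector functional link classifier.

    Hidden weights are random and never trained; only the output
    weights are fit, in closed form, on the concatenation of the raw
    inputs (direct links) and the hidden activations.
    """

    def __init__(self, n_hidden=100, ridge=1e-2, seed=0):
        self.n_hidden = n_hidden      # size of the random hidden layer (assumed)
        self.ridge = ridge            # ridge-regression regularization (assumed)
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n, d = X.shape
        # Random, untrained hidden layer.
        self.Wh = self.rng.standard_normal((d, self.n_hidden))
        self.bh = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.Wh + self.bh)
        # Direct links: stack raw features next to hidden activations.
        D = np.hstack([X, H])
        Y = np.eye(int(y.max()) + 1)[y]          # one-hot targets
        # Closed-form ridge solution for the output weights.
        self.W = np.linalg.solve(
            D.T @ D + self.ridge * np.eye(D.shape[1]), D.T @ Y
        )
        return self

    def predict(self, X):
        H = np.tanh(X @ self.Wh + self.bh)
        return (np.hstack([X, H]) @ self.W).argmax(axis=1)
```

In a stacking arrangement such as the RVFL-S model described above, per-modality classifier outputs (e.g. from EEG and eye-tracking features) would be fed to a combiner of this kind; the exact stacking scheme used in the paper is not reproduced here.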


Files in this item

There are no files associated with this item.
