Show simple item record

Author: Zolanvari, M.
Author: Yang, Z.
Author: Khan, K.
Author: Jain, R.
Author: Meskin, Nader
Available date: 2022-04-14T08:45:38Z
Publication Date: 2021
Publication Name: IEEE Internet of Things Journal
Resource: Scopus
Identifier: http://dx.doi.org/10.1109/JIOT.2021.3122019
URI: http://hdl.handle.net/10576/29760
Abstract: Despite AI's significant growth, its "black box" nature creates challenges in generating adequate trust. Thus, it is seldom utilized as a standalone unit in high-risk IoT applications, such as critical industrial infrastructures, medical systems, and financial applications. Explainable AI (XAI) has emerged to help with this problem. However, designing appropriately fast and accurate XAI is still challenging, especially in numerical applications. Here, we propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST), which is model-agnostic, high-performing, and suitable for numerical applications. Simply put, TRUST XAI models the statistical behavior of the AI's outputs in an AI-based system. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information to rank these variables, pick only the most influential ones on the AI's outputs, and call them "representatives" of the classes. Then we use multi-modal Gaussian distributions to determine the likelihood of any new sample belonging to each class. We demonstrate the effectiveness of TRUST in a case study on cybersecurity of the industrial Internet of things (IIoT), a prominent application that deals with numerical data, using three different cybersecurity datasets. The results show that TRUST XAI provides explanations for new random samples with an average success rate of 98%. Compared with LIME, a popular XAI model, TRUST is shown to be superior in terms of performance, speed, and method of explainability. Finally, we also show how TRUST is explained to the user.
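The abstract sketches a concrete pipeline: factor analysis to obtain latent variables, mutual-information ranking to select class "representatives", and multi-modal Gaussian likelihoods for explaining the AI's outputs. The snippet below is a minimal, illustrative sketch of that pipeline using scikit-learn; the toy dataset, the random-forest stand-in for the opaque AI model, and the choices of 10 latent factors, 3 representatives, and 2 mixture components are assumptions made here for illustration, not the authors' implementation (see the DOI above for the paper itself).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import FactorAnalysis
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture

# Toy stand-ins for the opaque AI model and its numerical data (illustrative only).
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
ai_model = RandomForestClassifier(random_state=0).fit(X, y)
y_pred = ai_model.predict(X)  # TRUST models the AI's outputs, not the ground-truth labels

# 1. Factor analysis: transform the input features into latent variables.
fa = FactorAnalysis(n_components=10, random_state=0)
Z = fa.fit_transform(X)

# 2. Rank the latent variables by mutual information with the AI's outputs and
#    keep the top-k as class "representatives" (k = 3 chosen arbitrarily here).
mi = mutual_info_classif(Z, y_pred, random_state=0)
top_k = np.argsort(mi)[::-1][:3]
R = Z[:, top_k]

# 3. Model each class's representatives with a Gaussian mixture
#    (a rough stand-in for the paper's multi-modal Gaussian distributions).
class_models = {
    c: GaussianMixture(n_components=2, random_state=0).fit(R[y_pred == c])
    for c in np.unique(y_pred)
}

# 4. Explain a new sample: log-likelihood of it belonging to each class.
def class_log_likelihoods(x_new):
    z = fa.transform(x_new.reshape(1, -1))[:, top_k]
    return {c: m.score_samples(z)[0] for c, m in class_models.items()}

print(class_log_likelihoods(X[0]))
```

A new sample is then explained by comparing these per-class likelihoods, which is the kind of statistics-based explanation the abstract contrasts with LIME.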
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Artificial intelligence
Cybersecurity
Internet of things
LIME
Computational modelling
Explainable artificial intelligence (XAI)
Industrial Internet of things (IIoT)
Machine learning
Predictive models
Statistical modeling
Trustworthy artificial intelligence
Computation theory
Title: TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
Type: Article
Access Type: Abstract Only


Files in this item

There are no files associated with this item.
