Author: Ilahi, Inaam
Author: Usama, Muhammad
Author: Qadir, Junaid
Author: Janjua, Muhammad Umar
Author: Al-Fuqaha, Ala
Author: Hoang, Dinh Thai
Author: Niyato, Dusit
Available date: 2023-07-13T05:40:52Z
Publication Date: 2022
Publication Name: IEEE Transactions on Artificial Intelligence
Resource: Scopus
ISSN: 2691-4581
URI: http://dx.doi.org/10.1109/TAI.2021.3111139
URI: http://hdl.handle.net/10576/45579
Abstract: Deep reinforcement learning (DRL) has numerous applications in the real world, thanks to its ability to achieve high performance in a range of environments with little manual oversight. Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications (e.g., smart grids, traffic controls, and autonomous vehicles) unless its vulnerabilities are addressed and mitigated. To address this problem, we provide a comprehensive survey that discusses emerging attacks on DRL-based systems and the potential countermeasures to defend against these attacks. We first review the fundamental background on DRL and present emerging adversarial attacks on machine learning techniques. We then investigate the vulnerabilities that an adversary can exploit to attack DRL along with state-of-the-art countermeasures to prevent such attacks. Finally, we highlight open issues and research challenges for developing solutions to deal with attacks on DRL-based intelligent systems. © 2020 IEEE.
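
The following is a minimal, illustrative sketch (not taken from the surveyed article) of the kind of vulnerability the abstract refers to: an FGSM-style perturbation of an agent's observation that can change the action chosen by a trained DRL policy. The toy PyTorch policy network, dimensions, and epsilon value are assumptions for illustration only.

# Minimal sketch: FGSM-style observation attack on a toy DRL policy.
# The policy network here is a stand-in assumption, not the paper's model.
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4

# Stand-in for a trained DRL policy (e.g., the actor of an actor-critic agent).
policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))

def fgsm_observation_attack(obs: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Return an adversarially perturbed copy of `obs` within an L-inf ball."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    clean_action = logits.argmax(dim=-1)
    # Increase the loss of the currently preferred action, then step in the
    # direction of the sign of the gradient (Fast Gradient Sign Method).
    loss = nn.functional.cross_entropy(logits, clean_action)
    loss.backward()
    return (obs + epsilon * obs.grad.sign()).detach()

obs = torch.randn(1, obs_dim)
adv_obs = fgsm_observation_attack(obs)
print("clean action:", policy(obs).argmax(-1).item(),
      "| adversarial action:", policy(adv_obs).argmax(-1).item())
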
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Adversarial machine learning
cyber-security
deep reinforcement learning (DRL)
machine learning (ML)
robust machine learning
Title: Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning
Type: Article
Pagination: 90-109
Issue Number: 2
Volume Number: 3
Access Type: Abstract Only


Files in this item


There are no files associated with this item.

