Deep Reinforcement Learning for Autonomous Navigation on Duckietown Platform: Evaluation of Adversarial Robustness
Author | Hosseini, Abdullah |
Author | Houti, Saeid |
Author | Qadir, Junaid |
Available date | 2025-07-08T03:58:09Z |
Publication Date | 2023 |
Publication Name | 2023 International Symposium on Networks, Computers and Communications, ISNCC 2023 |
Resource | Scopus |
Identifier | http://dx.doi.org/10.1109/ISNCC58260.2023.10323905 |
ISBN | 979-835033559-0 |
Abstract | Self-driving cars have gained widespread attention in recent years due to their potential to revolutionize the transportation industry. However, their success critically depends on the ability of reinforcement learning (RL) algorithms to navigate complex environments safely. In this paper, we investigate the potential security risks associated with end-to-end deep RL (DRL) systems in autonomous driving environments that rely on visual input for vehicle control, using the open-source Duckietown platform for robotics and self-driving vehicles. We demonstrate that current DRL algorithms are inherently susceptible to attacks by designing a general state adversarial perturbation and a reward tampering approach. Our strategy involves evaluating how attacks can manipulate the agent's decision-making process and using this understanding to create a corrupted environment that can lead the agent towards low-performing policies. We introduce our state perturbation method, accompanied by empirical analysis and extensive evaluation, and then demonstrate a targeted attack using reward tampering that leads the agent to catastrophic situations. Our experiments show that our attacks are effective in poisoning the learning of the agent when using the gradient-based Proximal Policy Optimization algorithm within the Duckietown environment. The results of this study are of interest to researchers and practitioners working in the fields of autonomous driving, DRL, and computer security, and can help inform the development of safer and more reliable autonomous driving systems. |
Sponsor | The authors would like to express their gratitude for the support received from the Qatar University High Impact Internal Grant QUHI-CENG23/24-127.
Language | en |
Publisher | IEEE |
Subject | Automobile drivers; Autonomous vehicles; Control system synthesis; Decision making; Learning algorithms; Learning systems; Open systems; Perturbation techniques; Reinforcement learning; Security of data; Autonomous driving; Autonomous navigation; Complex environments; Driving environment; End to end; Reinforcement learning algorithms; Reinforcement learning systems; Reinforcement learnings; Security risks; Transportation industry; Deep learning
Type | Conference paper |
This item appears in the following Collection(s): Computer Science & Engineering [2482 items]
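
The abstract above describes two attack vectors against an end-to-end DRL driving agent: a state adversarial perturbation applied to the visual observations, and a reward tampering scheme that corrupts the training signal. The Python sketch below only illustrates the general shape of such attacks as Gymnasium-style environment wrappers; it is not the paper's implementation. The environment id, the noise-based perturbation, the epsilon bound, and the tampering probability are all illustrative assumptions.

import numpy as np
import gymnasium as gym


class StatePerturbationWrapper(gym.ObservationWrapper):
    # Adds a bounded perturbation to each observation before the agent sees it.
    # Random noise stands in for the adversarial perturbation here; a stronger
    # attack would compute the perturbation from the policy's gradients.
    def __init__(self, env, epsilon=0.05):
        super().__init__(env)
        self.epsilon = epsilon  # maximum absolute change per observation element

    def observation(self, obs):
        noise = np.random.uniform(-self.epsilon, self.epsilon, size=obs.shape)
        low, high = self.observation_space.low, self.observation_space.high
        return np.clip(obs + noise, low, high).astype(self.observation_space.dtype)


class RewardTamperingWrapper(gym.Wrapper):
    # Occasionally flips the sign of the reward, corrupting the learning
    # signal so that training is pushed toward low-performing policies.
    def __init__(self, env, tamper_prob=0.1):
        super().__init__(env)
        self.tamper_prob = tamper_prob

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if np.random.random() < self.tamper_prob:
            reward = -reward
        return obs, reward, terminated, truncated, info


if __name__ == "__main__":
    # CartPole-v1 is used only as a stand-in environment that ships with
    # Gymnasium; in the paper's setting the wrapped environment would be a
    # Duckietown simulator exposing camera images as observations.
    env = RewardTamperingWrapper(StatePerturbationWrapper(gym.make("CartPole-v1")))
    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    print(obs, reward)

During training, wrappers like these would sit between the simulator and the PPO learner (for example, an off-the-shelf implementation such as Stable-Baselines3's PPO, which accepts any Gymnasium environment), so the agent optimizes its policy against perturbed states and a tampered reward signal rather than the true environment.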