Artificial Intelligence Security in 5G Networks: Adversarial Examples for Estimating a Travel Time Task
Date
2020-09-01
Author
Qiu, Jing
Du, Lei
Chen, Yuanyuan
Tian, Zhihong
Du, Xiaojiang
Guizani, Mohsen
Abstract
With the rapid development of the Internet, the next-generation network (5G) has emerged. 5G can support a variety of new applications, such as the Internet of Things (IoT), virtual reality (VR), and the Internet of Vehicles. Most of these new applications depend on deep learning algorithms, which have made great advances in many areas of artificial intelligence (AI). However, researchers have found that AI algorithms based on deep learning pose numerous security problems. For example, deep learning is susceptible to a well-designed input sample formed by adding small perturbations to the original sample. Such a well-designed input with small perturbations, imperceptible to humans, is called an adversarial example. An adversarial example looks almost identical to a genuine example, yet it can render the deep learning model invalid. In this article, we generate adversarial examples for spatiotemporal data. Based on the travel time estimation (TTE) task, we use two methods, white-box and black-box attacks, to invalidate deep learning models. Experiment results show that the adversarial examples successfully attack the deep learning model, demonstrating that AI security is a major challenge for 5G.
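To make the white-box idea concrete, the following is a minimal sketch of an FGSM-style perturbation (gradient-sign attack) on a toy linear travel-time regressor. The model, weights `w`, features `x`, and the helper `fgsm_perturb` are all hypothetical illustrations, not the paper's actual TTE model or data: the point is only that a small, bounded change to the input, chosen along the loss gradient, enlarges the model's error.

```python
import numpy as np

def fgsm_perturb(x, w, t_true, eps):
    """Return an adversarial copy of x under an L-infinity budget eps.

    Toy white-box setting: the attacker knows the linear model
    t_hat = w . x and its squared-error loss, so it can compute the
    input gradient exactly (hypothetical illustration, not the paper's model).
    """
    t_hat = np.dot(w, x)
    # Gradient of 0.5 * (t_hat - t_true)^2 with respect to the input x.
    grad = (t_hat - t_true) * w
    # Step in the direction that increases the loss; each feature moves
    # by at most eps, so the perturbation stays small per coordinate.
    return x + eps * np.sign(grad)

w = np.array([0.5, 1.2, -0.3])   # hypothetical learned weights
x = np.array([10.0, 4.0, 2.0])   # hypothetical trip features
t_true = 9.0                     # true travel time for this trip

x_adv = fgsm_perturb(x, w, t_true, eps=0.1)
err_clean = abs(np.dot(w, x) - t_true)
err_adv = abs(np.dot(w, x_adv) - t_true)
# The per-feature change is at most 0.1, yet the prediction error grows.
```

A black-box attack would follow the same perturbation idea but estimate the gradient direction from model queries instead of computing it, since the attacker has no access to the model internals.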
Collections
- Computer Science & Engineering [2402 items]