Joint resource allocation and power control for D2D communication with deep reinforcement learning in MCC
Date
2021-04-01

Authors
Wang, Dan
Qin, Hao
Song, Bin
Xu, Ke
Du, Xiaojiang
Guizani, Mohsen
Abstract
Mission-critical communication (MCC) is one of the main goals of 5G, and it can leverage multiple device-to-device (D2D) connections to enhance reliability. In MCC, D2D users can reuse the non-orthogonal wireless resources of cellular users without going through a base station (BS). At the same time, D2D users generate co-channel interference to cellular users and hence degrade their quality-of-service (QoS). To comprehensively improve the user experience, we propose a novel approach that combines resource allocation and power control with Deep Reinforcement Learning (DRL). In this paper, multiple procedures are carefully designed to develop our proposal. First, a scenario with multiple D2D pairs and cellular users in a single cell is modeled; next, the resource allocation and power control issues are analyzed and the optimization goal is formulated; finally, a DRL-based spectrum allocation strategy is designed, which ensures that D2D users obtain sufficient resources to improve their QoS. Using the resource data that D2D users capture by interacting with their surroundings, the DRL method helps D2D users autonomously select an available channel and transmit power to maximize system capacity and spectrum efficiency while minimizing interference to cellular users. Experimental results show that our learning method significantly improves resource allocation and power control.
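The following is a minimal sketch, not the authors' implementation, of the idea summarized in the abstract: a D2D pair observes local channel conditions and uses a deep Q-network to pick a joint (channel, power-level) action, with a reward that trades its own rate against the interference it causes to the cellular user reusing that channel. The network sizes, numbers of channels and power levels, noise level, and reward weight are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Assumed action space: N_CHANNELS channels x N_POWER discrete power levels.
N_CHANNELS, N_POWER = 4, 3
POWER_LEVELS_MW = np.array([1.0, 10.0, 100.0])
NOISE_MW = 1e-3


class QNet(nn.Module):
    """Maps a local observation to Q-values over joint (channel, power) actions."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def reward(d2d_gain, cell_to_d2d_gain, d2d_to_cell_gain, cell_power_mw, action):
    """Illustrative reward: D2D rate on the chosen channel minus a penalty
    proportional to the interference caused to the cellular user (assumed weight)."""
    ch, p_idx = divmod(action, N_POWER)
    p = POWER_LEVELS_MW[p_idx]
    d2d_sinr = d2d_gain[ch] * p / (NOISE_MW + cell_to_d2d_gain[ch] * cell_power_mw)
    d2d_rate = np.log2(1.0 + d2d_sinr)
    interference_penalty = 0.1 * d2d_to_cell_gain[ch] * p  # assumed penalty weight
    return d2d_rate - interference_penalty


# One epsilon-greedy action-selection step for a single D2D pair.
obs_dim, n_actions = 3 * N_CHANNELS, N_CHANNELS * N_POWER
qnet = QNet(obs_dim, n_actions)

d2d_gain = np.random.rayleigh(size=N_CHANNELS)          # D2D tx -> D2D rx gains
cell_to_d2d_gain = np.random.rayleigh(size=N_CHANNELS)  # cellular tx -> D2D rx gains
d2d_to_cell_gain = np.random.rayleigh(size=N_CHANNELS)  # D2D tx -> BS gains
obs = torch.tensor(
    np.concatenate([d2d_gain, cell_to_d2d_gain, d2d_to_cell_gain]),
    dtype=torch.float32,
)

eps = 0.1
if np.random.rand() < eps:
    action = np.random.randint(n_actions)
else:
    action = int(qnet(obs).argmax())

print("chosen (channel, power index):", divmod(action, N_POWER))
print("reward:", reward(d2d_gain, cell_to_d2d_gain, d2d_to_cell_gain, 100.0, action))
```

In a full training loop, the rewards collected from such interactions would be used to update the Q-network (e.g., with experience replay and a target network), so that each D2D pair learns a channel and power policy without centralized coordination by the BS.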
Collections
- Computer Science & Engineering [2402 items]