Deep Reinforcement Learning for Efficient Uplink NOMA SWIPT Transmissions
Date
2021-01

Abstract
Non-orthogonal multiple access (NOMA) is a leading candidate among radio access strategies for next-generation cellular communications, owing to its enhanced performance compared with existing multiple access techniques such as orthogonal frequency division multiple access (OFDMA). The work in this thesis proposes a framework for an energy-efficient system geared towards the wireless exchange of large volumes of data collected from distributed Internet of Things (IoT) sensor nodes connected to an edge node acting as a cluster head (CH). The IoT nodes utilize an adaptive compression model as an extra degree of freedom to control the rate transmitted to the CH. The CH is an energy-constrained node that may be battery operated and is capable of radio frequency (RF) energy harvesting (EH) using simultaneous wireless information and power transfer (SWIPT). The proposed framework exploits deep reinforcement learning (DRL) mechanisms to achieve smart and efficient energy-constrained uplink NOMA transmissions in IoT applications requiring data compression. In particular, the DRL agent maximizes the harvested energy at the CH while enforcing the data compression ratio constraints at the transmitting nodes and satisfying the outage probability constraints at the CH. Data compression in this type of sensor network is vital for minimizing the power consumption of the sensors (transmitting nodes), which extends their service lifetime.
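To make the decision loop described above concrete, the following is a minimal, illustrative sketch only: a simplified tabular Q-learning stand-in for the DRL agent, where the state is a quantized CH battery level, the action is a candidate compression ratio at the transmitting nodes, and the reward models harvested energy penalized for outage. The state/action grids, channel and energy-harvesting behavior, and all parameter values are assumptions for illustration and are not taken from the thesis, which employs deep (neural network based) reinforcement learning rather than a lookup table.

```python
# Hypothetical toy example; not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 10                     # quantized CH battery levels (assumed grid)
ACTIONS = [0.25, 0.5, 0.75, 1.0]  # candidate compression ratios (assumed grid)
Q = np.zeros((N_STATES, len(ACTIONS)))

ALPHA, GAMMA, EPSILon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, ratio):
    """Toy environment: reward ~ harvested energy minus a penalty when the
    (crudely modeled) outage-probability constraint is violated."""
    harvested = rng.random() * (1.0 - 0.3 * ratio)   # assumption: less payload -> more EH time
    outage = rng.random() < 0.1 * ratio              # assumption: higher rate -> higher outage risk
    reward = harvested - (1.0 if outage else 0.0)
    next_state = min(N_STATES - 1, max(0, state + (1 if harvested > 0.5 else -1)))
    return next_state, reward

state = N_STATES // 2
for episode in range(5000):
    # epsilon-greedy selection over compression ratios
    a = rng.integers(len(ACTIONS)) if rng.random() < EPSILON else int(np.argmax(Q[state]))
    next_state, reward = step(state, ACTIONS[a])
    # standard Q-learning update
    Q[state, a] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, a])
    state = next_state

print("Greedy compression ratio per battery level:",
      [ACTIONS[int(np.argmax(Q[s]))] for s in range(N_STATES)])
```

In the thesis framework, the tabular Q-values would be replaced by a deep network approximating the action values, allowing the agent to handle continuous channel and battery states while trading off harvested energy against the compression and outage constraints.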
DOI/handle
http://hdl.handle.net/10576/17705