
Author: Saad, H.
Author: Mohamed, A.
Author: Elbatt, T.
Available date: 2022-04-21T08:58:35Z
Publication Date: 2012
Publication Name: IEEE Vehicular Technology Conference
Resource: Scopus
Identifier: http://dx.doi.org/10.1109/VTCFall.2012.6399230
URI: http://hdl.handle.net/10576/30171
Abstract: In this paper, we propose a distributed reinforcement learning (RL) technique called distributed power control using Q-learning (DPC-Q) to manage the interference caused by the femtocells on macro-users in the downlink. The DPC-Q leverages Q-learning to identify the sub-optimal pattern of power allocation, which strives to maximize femtocell capacity while guaranteeing the macrocell capacity level in an underlay cognitive setting. We propose two different approaches for the DPC-Q algorithm, namely independent and cooperative. In the former, femtocells learn independently from each other, while in the latter, femtocells share some information during learning in order to enhance their performance. Simulation results show that the independent approach is capable of mitigating the interference generated by the femtocells on macro-users. Moreover, the results show that cooperation enhances the performance of the femtocells in terms of fairness and aggregate femtocell capacity. © 2012 IEEE.
Language: en
Publisher: IEEE
Subject: Cognitive femtocell networks; Distributed power control; Distributed reinforcement learning; Femto-cells; Macro cells; Power allocations; Q-learning; Learning algorithms; Femtocell
Title: Distributed cooperative Q-learning for power allocation in cognitive femtocell networks
Type: Conference Paper
Access Type: Abstract Only
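
The abstract describes DPC-Q only at a high level; the paper's exact state, action, and reward definitions are not given here. Below is a minimal, self-contained sketch of the independent variant of distributed Q-learning for downlink power allocation, under assumed definitions: each femtocell agent selects a discrete transmit power level, the state is a binary indicator of whether the macro-user's capacity requirement is met, and the reward trades the femtocell's own capacity against a penalty for violating that requirement. The channel gains, power levels, hyperparameters, and the simple Q-table-averaging step used to mimic information sharing in the cooperative variant are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch of a DPC-Q-style distributed Q-learning loop.
# State, action, and reward definitions are assumptions for illustration;
# the paper's exact formulation is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

N_FEMTO = 4                                             # number of femtocell agents (assumed)
POWER_LEVELS = np.array([0.01, 0.05, 0.1, 0.5, 1.0])    # candidate Tx powers in W (assumed)
NOISE = 1e-3
MACRO_CAP_MIN = 1.0                                     # required macro-user capacity, bps/Hz (assumed)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1                       # Q-learning hyperparameters (assumed)

# Random static channel gains (assumed): g_macro[i] = gain from femto i to the macro-user,
# g_femto[i] = gain of femto i's own link. A real setup would use a path-loss model.
g_macro = rng.uniform(0.01, 0.1, N_FEMTO)
g_femto = rng.uniform(0.5, 1.0, N_FEMTO)
P_MACRO, G_MACRO_LINK = 1.0, 1.0                        # macro BS power and macro link gain (assumed)

# One Q-table per agent: 2 states (macro constraint violated / met) x |POWER_LEVELS| actions.
Q = np.zeros((N_FEMTO, 2, len(POWER_LEVELS)))

def macro_capacity(powers):
    """Shannon capacity of the macro-user given the femtocells' aggregate interference."""
    interference = np.sum(powers * g_macro)
    return np.log2(1.0 + P_MACRO * G_MACRO_LINK / (NOISE + interference))

def femto_capacity(i, powers):
    """Capacity of femtocell i, treating the other femtocells as interference."""
    interference = np.sum(powers * g_femto) - powers[i] * g_femto[i]
    return np.log2(1.0 + powers[i] * g_femto[i] / (NOISE + interference))

def step(state, cooperative=False):
    # Each agent picks a power level epsilon-greedily from its own Q-table.
    actions = np.empty(N_FEMTO, dtype=int)
    for i in range(N_FEMTO):
        if rng.random() < EPS:
            actions[i] = rng.integers(len(POWER_LEVELS))
        else:
            actions[i] = int(np.argmax(Q[i, state]))
    powers = POWER_LEVELS[actions]

    c_macro = macro_capacity(powers)
    next_state = int(c_macro >= MACRO_CAP_MIN)          # 1 = macro constraint satisfied
    for i in range(N_FEMTO):
        # Assumed reward: own capacity minus a penalty when the macro constraint is violated.
        reward = femto_capacity(i, powers) - (0.0 if next_state else 10.0)
        td_target = reward + GAMMA * np.max(Q[i, next_state])
        Q[i, state, actions[i]] += ALPHA * (td_target - Q[i, state, actions[i]])

    if cooperative:
        # Crude stand-in for the cooperative variant: agents average their Q-tables,
        # i.e. they "share some information during learning" (the actual scheme is in the paper).
        Q[:] = Q.mean(axis=0, keepdims=True)

    return next_state, c_macro

state = 0
for episode in range(2000):
    state, c_macro = step(state, cooperative=False)

print("final macro-user capacity: %.2f bps/Hz (target >= %.2f)" % (c_macro, MACRO_CAP_MIN))
```

Running the same loop with cooperative=True corresponds only loosely to the abstract's cooperative approach; the toy Q-table averaging above does not reproduce the paper's sharing mechanism or its reported fairness and aggregate-capacity results.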


Files in this item


There are no files associated with this item.

