
Author: Li, Kai
Author: Ni, Wei
Author: Tovar, Eduardo
Author: Guizani, Mohsen
Available date: 2022-10-31T20:12:59Z
Publication Date: 2021-06-15
Publication Name: IEEE Internet of Things Journal
Identifier: http://dx.doi.org/10.1109/JIOT.2020.3019186
Citation: Li, K., Ni, W., Tovar, E., & Guizani, M. (2020). Joint flight cruise control and data collection in UAV-aided Internet of Things: An onboard deep reinforcement learning approach. IEEE Internet of Things Journal, 8(12), 9787-9799.
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85104378672&origin=inward
URI: http://hdl.handle.net/10576/35665
Abstract: Employing unmanned aerial vehicles (UAVs) as aerial data collectors in Internet-of-Things (IoT) networks is a promising technology for large-scale environment sensing. A key challenge in UAV-aided data collection is that UAV maneuvering gives rise to buffer overflows at the IoT nodes and unsuccessful transmissions over lossy airborne channels. This article formulates the joint optimization of flight cruise control and data collection scheduling to minimize network data loss as a partially observable Markov decision process (POMDP), where the states of individual IoT nodes can be obscure to the UAV. The problem can be solved optimally by reinforcement learning, but it suffers from the curse of dimensionality and rapidly becomes intractable as the number of IoT nodes grows. In practice, a UAV-aided IoT network involves a large number of network states and actions in the POMDP, while up-to-date knowledge of the network states is not available at the UAV. We propose an onboard deep Q-network-based flight resource allocation scheme (DQN-FRAS) to optimize the UAV's online flight cruise control and data scheduling given outdated knowledge of the network states. Numerical results demonstrate that DQN-FRAS reduces packet loss by over 51% compared to existing non-learning heuristics.
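The abstract describes DQN-FRAS only at a high level; the sketch below is not the authors' implementation but a minimal, generic illustration of the technique named there: a deep Q-network that maps a partially observed network state (for instance, the UAV position plus possibly outdated node buffer levels) to a joint cruise-control and data-scheduling action, trained with experience replay against a periodically synced target network. All names (QNetwork, select_action, train_step), dimensions, rewards, and the toy rollout are hypothetical assumptions for illustration only.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical sizes: the observation could concatenate the UAV's position
# with the last-known (possibly outdated) buffer levels of the IoT nodes;
# the action jointly selects a cruise-control command and a scheduled node.
OBS_DIM = 16        # illustrative observation size
N_ACTIONS = 24      # illustrative |cruise commands| x |schedulable nodes|

class QNetwork(nn.Module):
    """Small fully connected Q-network: observation -> Q-value per action."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

q_net = QNetwork(OBS_DIM, N_ACTIONS)
target_net = QNetwork(OBS_DIM, N_ACTIONS)
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)       # experience replay buffer
GAMMA, EPS, BATCH = 0.99, 0.1, 64   # discount, exploration rate, batch size

def select_action(obs: torch.Tensor) -> int:
    """Epsilon-greedy choice over the joint (cruise, schedule) action space."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(obs.unsqueeze(0)).argmax(dim=1).item())

def train_step():
    """One DQN update from a random minibatch of stored transitions."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    obs, act, rew, nxt, done = map(torch.stack, zip(*batch))
    q_sa = q_net(obs).gather(1, act.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Target: reward (e.g., negative packet loss) plus discounted max Q.
        target = rew + GAMMA * target_net(nxt).max(dim=1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy rollout with random transitions, only to exercise the update loop;
# a real agent would interact with a UAV/IoT network simulator instead.
obs = torch.randn(OBS_DIM)
for step in range(200):
    a = select_action(obs)
    nxt, rew, done = torch.randn(OBS_DIM), torch.randn(()), torch.tensor(0.0)
    replay.append((obs, torch.tensor(float(a)), rew, nxt, done))
    train_step()
    obs = nxt
    if step % 50 == 0:
        target_net.load_state_dict(q_net.state_dict())  # periodic target sync
```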
Sponsor: This work was supported in part by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit under Grant UIDB/04234/2020; in part by the Operational Competitiveness and Internationalization Programme (COMPETE 2020) under the PT2020 Partnership Agreement, through the European Regional Development Fund; and in part by National Funds through FCT, within project ARNET under Grant POCI-01-0145-FEDER-029074.
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Communication decisions; deep reinforcement learning; flight cruise control; Internet of Things (IoT); unmanned aerial vehicles (UAVs)
Title: Joint Flight Cruise Control and Data Collection in UAV-Aided Internet of Things: An Onboard Deep Reinforcement Learning Approach
Type: Article
Pagination: 9787-9799
Issue Number: 12
Volume Number: 8
Access Type: Abstract Only

