RL-PDNN: Reinforcement Learning for Privacy-Aware Distributed Neural Networks in IoT Systems
Due to their high computational and memory demands, deep learning applications are mainly restricted to high-performance units, e.g., cloud and edge servers. In particular, in Internet of Things (IoT) systems, the data acquired by pervasive devices is sent to computing servers for classification. However, this approach is not always feasible because of limited bandwidth and privacy concerns, and it introduces uncertain latency due to unstable remote connectivity. To meet the resource and delay requirements of this paradigm, joint, real-time deep co-inference frameworks that exploit the synergy of IoT devices have been introduced. However, scheduling distributed, dynamic, and real-time Deep Neural Network (DNN) inference requests among resource-constrained devices has not been well explored in the literature. Additionally, distributing a DNN raises concerns about the privacy of sensitive data: various threats have been demonstrated, including white-box attacks in which malicious devices can accurately recover received inputs if the DNN model is fully exposed to participants. In this paper, we introduce a methodology that distributes DNN tasks onto the resource-constrained devices of an IoT system without revealing the model to participants. We formulate this approach as an optimization problem that establishes a trade-off between the latency of co-inference, the privacy of the data, and the limited resources of the devices. Then, because the problem is NP-hard, we cast our approach as a reinforcement learning design suited to real-time applications and highly dynamic systems, namely RL-PDNN. Our system outperforms existing static approaches and achieves results close to the optimal solution. © 2013 IEEE.
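To make the abstract's latency/privacy trade-off concrete, the following is a minimal sketch, not the paper's actual formulation: a toy tabular Q-learning scheduler places DNN partitions on IoT devices, where each device has a per-partition latency cost and a privacy penalty is charged whenever two consecutive partitions land on the same device (so no single participant observes too large a contiguous slice of the model). All costs, penalties, and function names here are illustrative assumptions.

```python
import random

# Hypothetical toy instance: 4 DNN partitions, 3 IoT devices.
LATENCY = [
    [1.0, 2.0, 3.0],  # partition 0's latency on devices 0..2
    [2.0, 1.0, 2.0],
    [3.0, 2.0, 1.0],
    [1.5, 1.5, 1.5],
]
PRIVACY_PENALTY = 4.0  # charged when adjacent partitions share a device
N_PART, N_DEV = len(LATENCY), len(LATENCY[0])

def episode_cost(assignment):
    """Total latency plus privacy penalties of a complete placement."""
    cost = sum(LATENCY[p][d] for p, d in enumerate(assignment))
    cost += PRIVACY_PENALTY * sum(
        1 for p in range(1, N_PART) if assignment[p] == assignment[p - 1])
    return cost

def train(episodes=3000, alpha=0.3, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning; state = (next partition, previous device)."""
    rng = random.Random(seed)
    Q = {}
    q = lambda s: Q.setdefault(s, [0.0] * N_DEV)
    for _ in range(episodes):
        prev = -1  # sentinel: no device chosen yet
        for p in range(N_PART):
            s = (p, prev)
            if rng.random() < eps:
                a = rng.randrange(N_DEV)  # explore
            else:
                a = max(range(N_DEV), key=lambda d: q(s)[d])  # exploit
            # Negative cost as reward: latency plus any privacy penalty.
            r = -LATENCY[p][a] - (PRIVACY_PENALTY if a == prev else 0.0)
            future = gamma * max(q((p + 1, a))) if p + 1 < N_PART else 0.0
            q(s)[a] += alpha * (r + future - q(s)[a])
            prev = a
    return Q

def greedy_assignment(Q):
    """Roll out the learned policy greedily to get a full placement."""
    assignment, prev = [], -1
    for p in range(N_PART):
        vals = Q.get((p, prev), [0.0] * N_DEV)
        prev = max(range(N_DEV), key=lambda d: vals[d])
        assignment.append(prev)
    return assignment
```

On this toy instance the learned policy tends to alternate devices, trading a small latency increase for avoiding the privacy penalty; the full RL-PDNN design additionally handles per-device resource constraints and dynamic, real-time request arrivals, which this sketch omits.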