
Author: Chen J.
Author: Li K.
Author: Bilal K.
Author: Zhou X.
Author: Li K.
Author: Yu P.S.
Available date: 2020-04-02T11:08:05Z
Publication Date: 2019
Publication Name: IEEE Transactions on Parallel and Distributed Systems
Resource: Scopus
ISSN: 1045-9219
URI: http://dx.doi.org/10.1109/TPDS.2018.2877359
URI: http://hdl.handle.net/10576/13787
Abstract: Benefiting from large-scale training datasets and complex training networks, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming: large volumes of training samples and many iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture for distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training for multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training for each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneity-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, where large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process of each CNN subnetwork on each computer, where the computation steps of the convolutional layer and the local weight training are parallelized based on task parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy.
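
The IDPA strategy described in the abstract allocates each training batch to compute nodes in proportion to their computing power. The following Python sketch illustrates that proportional-allocation idea only; the function name, node identifiers, and the scalar "power" metric are illustrative assumptions, not the paper's exact algorithm.

# Sketch: heterogeneity-aware proportional batch partitioning (IDPA-style).
# Assumed inputs: the number of samples in one batch and a dict mapping
# node -> relative computing power (e.g., measured throughput).
def partition_batch(num_samples, node_powers):
    """Split num_samples across nodes proportionally to their power."""
    total_power = sum(node_powers.values())
    shares = {node: int(num_samples * power / total_power)
              for node, power in node_powers.items()}
    # Integer truncation can leave a few samples unassigned; hand the
    # remainder to the fastest nodes first.
    remainder = num_samples - sum(shares.values())
    for node in sorted(node_powers, key=node_powers.get, reverse=True)[:remainder]:
        shares[node] += 1
    return shares

# Example: three heterogeneous nodes with a 4:2:1 power ratio.
print(partition_batch(1000, {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}))
# -> {'node-a': 572, 'node-b': 286, 'node-c': 142}

Allocating per batch, as the abstract describes, gives slower nodes proportionally less work so that all nodes finish each round at roughly the same time, which is what reduces synchronization waiting in the outer-layer parallelism.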
Sponsor: This research is partially funded by the National Key R&D Program of China (Grant No. 2016YFB0200201), the National Outstanding Youth Science Program of National Natural Science Foundation of China (Grant No. 61625202), the International Postdoctoral Exchange Fellowship Program (Grant No. 2018024), and the China Postdoctoral Science Foundation funded project (Grant No. 2018T110829). This work is also supported in part by NSF through grants IIS-1526499, IIS-1763325, CNS-1626432, and NSFC 61672313.
Language: en
Publisher: IEEE Computer Society
Subject: bi-layered parallel computing
Subject: big data
Subject: convolutional neural networks
Subject: deep learning
Subject: distributed computing
Title: A Bi-layered parallel training architecture for large-scale convolutional neural networks
Type: Article
Pagination: 965-976
Issue Number: 5
Volume Number: 30


Files in this item

There are no files associated with this item.
