Simple item record

Author: Zaman, Kh Shahriya
Author: Reaz, Mamun Bin Ibne
Author: Md Ali, Sawal Hamid
Author: Bakar, Ahmad Ashrif A
Author: Chowdhury, Muhammad Enamul Hoque
Available Date: 2023-04-17T06:57:41Z
Publication Date: 2022
Publication Name: IEEE Transactions on Neural Networks and Learning Systems
Source: Scopus
URI: http://dx.doi.org/10.1109/TNNLS.2021.3082304
URI: http://hdl.handle.net/10576/41939
Abstract: The staggering innovations and emergence of numerous deep learning (DL) applications have forced researchers to reconsider hardware architecture to accommodate fast and efficient application-specific computations. Applications, such as object detection, image recognition, speech translation, as well as music synthesis and image generation, can be performed with high accuracy at the expense of substantial computational resources using DL. Furthermore, the desire to adopt Industry 4.0 and smart technologies within the Internet of Things infrastructure has initiated several studies to enable on-chip DL capabilities for resource-constrained devices. Specialized DL processors reduce dependence on cloud servers, improve privacy, lessen latency, and mitigate bandwidth congestion. As we reach the limits of shrinking transistors, researchers are exploring various application-specific hardware architectures to meet the performance and efficiency requirements for DL tasks. Over the past few years, several software optimizations and hardware innovations have been proposed to efficiently perform these computations. In this article, we review several DL accelerators, as well as technologies with emerging devices, to highlight their architectural features in application-specific integrated circuit (IC) and field-programmable gate array (FPGA) platforms. Finally, the design considerations for DL hardware in portable applications have been discussed, along with some deductions about the future trends and potential research directions to innovate DL accelerator architectures further. By compiling this review, we expect to help aspiring researchers widen their knowledge in custom hardware architectures for DL. © 2012 IEEE.
Sponsor: This work was supported in part by the Research University Grant, Universiti Kebangsaan Malaysia, under Grant DPK-2021-001, Grant DIP-2020-004, and Grant MI-2020-002; and in part by the Qatar National Research Foundation (QNRF) under Grant NPRP12s-0227-190164.
Language: en
Publisher: Institute of Electrical and Electronics Engineers Inc.
Subject: Application-specific integrated circuit (ASIC)
deep learning (DL)
deep neural network (DNN)
energy-efficient architectures
field-programmable gate array (FPGA)
hardware accelerator
machine learning (ML)
neural network hardware
review
Title: Custom Hardware Architectures for Deep Learning on Portable Devices: A Review
Type: Article
Pages: 6068-6088
Issue Number: 11
Volume Number: 33
Access Type: Abstract Only