Simple item record

Author: Al-Ali, Abdulaziz
Author: Suganthan, Ponnuthurai N.
Author: Aly, Hussein
Author: Hamdy, Mohamed
Date Available: 2025-11-09T07:50:35Z
Publication Date: 2025-07-05
Publication Name: Pattern Recognition
Identifier: http://dx.doi.org/10.1016/j.patcog.2025.112035
Citation: Hamdy, M., Al-Ali, A., Suganthan, P. N., & Aly, H. (2026). Hybrid training of deep neural networks with multiple output layers for tabular data classification. Pattern Recognition, 170, 112035.
ISSN: 0031-3203
URI: https://www.sciencedirect.com/science/article/pii/S0031320325006958
URI: http://hdl.handle.net/10576/68415
Abstract: In the rapidly evolving landscape of deep learning, the ability to balance performance and computational efficiency is indispensable. Layer-wise training, which involves independently training each hidden layer of a neural network with its private output layer, offers a promising avenue by enabling the construction of a single network that can leverage an ensemble of output layers during prediction. This approach has been successfully employed in state-of-the-art models such as the ensemble deep multilayer perceptron (edMLP) and the ensemble deep random vector functional link (edRVFL), pushing the boundaries of their base models, the MLP trained by backpropagation (BP) and the RVFL trained using a closed-form solution (CFS). However, edRVFL often underperforms edMLP in accuracy, while edMLP incurs significantly higher computational cost. To address this trade-off, we introduce two novel hybrid training approaches that integrate BP and CFS. Extensive experiments on diverse classification datasets reveal that one of the proposed models, ensemble deep adaptive sampling (edAS), achieves statistically significant improvements in classification accuracy over state-of-the-art models, including edRVFL, edMLP, and the self-normalizing neural network (SNN), while being less computationally expensive. Furthermore, the second proposed model, MO-MLP, demonstrates statistically significant superiority over competing models while requiring less than one-third of the computation time needed by models that incorporate BP in a layer-wise manner. The source code for all proposed models is available on GitHub: https://github.com/M-Hamdy-M/ed-hybrids
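The abstract describes the mechanism at the heart of these models: each hidden layer is trained with its own private output layer, and prediction ensembles all output layers. As a rough illustration of the closed-form (CFS) side of that idea only, here is a minimal NumPy sketch assuming RVFL-style random hidden weights with a per-layer ridge-regression solve; it is not the authors' implementation (see the linked GitHub repository for that), and the function names and hyperparameters below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def one_hot(y, n_classes):
    # Encode integer labels as one-hot targets for the ridge solve.
    t = np.zeros((y.size, n_classes))
    t[np.arange(y.size), y] = 1.0
    return t

def fit_layerwise_cfs(X, y, n_classes, depth=3, width=64, lam=1e-2):
    # Hypothetical sketch, not the paper's code: stack random hidden
    # layers, giving each one a private output layer solved in closed
    # form (ridge regression), in the spirit of edRVFL-style training.
    T = one_hot(y, n_classes)
    H, layers = X, []
    for _ in range(depth):
        W = rng.standard_normal((H.shape[1], width)) / np.sqrt(H.shape[1])
        b = rng.standard_normal(width)
        H = np.tanh(H @ W + b)  # random (untrained) hidden features
        # Closed-form solution for this layer's private output weights:
        # beta = (H^T H + lam * I)^{-1} H^T T
        beta = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ T)
        layers.append((W, b, beta))
    return layers

def predict(layers, X):
    # Ensemble prediction: average the scores of all private output layers.
    H, scores = X, 0.0
    for W, b, beta in layers:
        H = np.tanh(H @ W + b)
        scores = scores + H @ beta
    return np.argmax(scores / len(layers), axis=1)

In the paper's hybrid variants, backpropagation is also involved in training; this sketch covers only the randomized, closed-form path, which is what makes such models cheap relative to layer-wise BP.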
Language: en
Publisher: Elsevier
Subject: Backpropagation
Closed-form solutions
Ensemble deep learning
Multiple output layers
Randomization-based neural networks
Layer-wise training
Random vector functional link
Title: Hybrid training of deep neural networks with multiple output layers for tabular data classification
Type: Article
Volume: 170
EISSN: 1873-5142
Access Type: Full Text


Files in this item


This item appears in the following collection(s)
