Simple item record

Author: Hu, Minghui
Author: Li, Ning
Author: Suganthan, Ponnuthurai Nagaratnam
Author: Wang, Junda
Available date: 2025-11-09T07:41:33Z
Publication date: 2025-06-19
Publication name: Pattern Recognition
Identifier: http://dx.doi.org/10.1016/j.patcog.2025.111886
Citation: Wang, J., Hu, M., Li, N., & Suganthan, P. N. (2026). Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers. Pattern Recognition, 170, 111886.
ISSN: 0031-3203
URI: https://www.sciencedirect.com/science/article/pii/S0031320325005461
URI: http://hdl.handle.net/10576/68414
Abstract: Forward regularization (-F) with unsupervised knowledge was advocated to replace canonical Ridge regularization (-R) in online linear learners, as it achieves a lower relative regret bound. However, we observe that -F cannot perform as expected in practice, and may even lose to -R on online tasks. We identify two main causes: (1) inappropriately intervened regularization, and (2) the non-i.i.d. nature and data distribution changes of online learning (OL), both of which result in an unstable posterior distribution and optima offset of the learner. To improve on these, we first introduce adjustable forward regularization (-kF), a more general -F with controllable knowledge intervention. We also derive -kF's incremental updates with a variable learning rate, and study its relative regret and bound in OL. Inspired by the regret analysis, to curb unstable penalties, we further propose a -kF-Bayes style with k synchronously self-adapted, which revises the intractable tuning of -kF by accounting for parametric posterior distribution changes in non-i.i.d. online data streams. Additionally, we integrate -kF and -kF-Bayes into a multi-layer ensemble deep random vector functional link (edRVFL) network and present two practical algorithms for batch learning that avoid past replay and catastrophic forgetting. In experiments on numerical simulation, tabular, and image datasets, -kF-Bayes surpassed traditional -R and -F, highlighting the efficacy of the ready-to-work -kF-Bayes and the great potential of edRVFL-kF-Bayes in OL and continual learning (CL) scenarios.
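The abstract contrasts Ridge regularization (-R), where each prediction uses only past observations, with forward regularization (-F), where the current unlabeled input also enters the regularized Gram matrix before the prediction is made. The following is a minimal illustrative sketch of that distinction for an online linear learner, assuming the Vovk/Azoury-Warmuth-style forward scheme from the online-learning literature; the function name and toy setup are hypothetical and not taken from the paper.

```python
import numpy as np

def online_linear_predictions(X, y, lam=1.0, forward=False):
    """Sequential predictions of a regularized online linear learner.

    forward=False (-R style): predict with the Gram matrix built from
    past inputs only, then update with (x_t, y_t).
    forward=True  (-F style): fold the current input x_t into the Gram
    matrix *before* predicting (the "unsupervised knowledge"), then
    update with the revealed label y_t.
    """
    n, d = X.shape
    A = lam * np.eye(d)        # regularized Gram matrix, lam * I + sum x_i x_i^T
    b = np.zeros(d)            # accumulated sum of y_i * x_i
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        if forward:
            A += np.outer(x, x)        # current input enters the penalty first
        w = np.linalg.solve(A, b)      # ridge solution on data seen so far
        preds[t] = w @ x
        if not forward:
            A += np.outer(x, x)        # Ridge updates the Gram matrix afterwards
        b += y[t] * x                  # label revealed after prediction
    return preds
```

The only difference between the two modes is whether the rank-one update `np.outer(x, x)` happens before or after the prediction; the paper's -kF interpolates between these two extremes with a controllable intervention strength k.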
Language: en
Publisher: Elsevier
Subject: Randomized neural network
Forward regression
Random vector functional link
Online learning
Continual learning
Multiple output layers
Title: Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers
Type: Article
Volume: 170
Open Access user license: http://creativecommons.org/licenses/by/4.0/
ESSN: 1873-5142
dc.accessType: Full Text

