Show simple item record

Author: Hu, Minghui
Author: Li, Ning
Author: Suganthan, Ponnuthurai Nagaratnam
Author: Wang, Junda
Available date: 2025-11-09T07:41:33Z
Publication Date: 2025-06-19
Publication Name: Pattern Recognition
Identifier: http://dx.doi.org/10.1016/j.patcog.2025.111886
Citation: Wang, J., Hu, M., Li, N., & Suganthan, P. N. (2026). Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers. Pattern Recognition, 170, 111886.
ISSN: 0031-3203
URI: https://www.sciencedirect.com/science/article/pii/S0031320325005461
URI: http://hdl.handle.net/10576/68414
Abstract: Forward regularization (-F) with unsupervised knowledge was advocated to replace canonical Ridge regularization (-R) in online linear learners, as it achieves a lower relative regret bound. However, we observe that -F cannot perform as expected in practice, and may even lose to -R on online tasks. We identify two main causes for this: (1) inappropriately intervened regularization, and (2) the non-i.i.d. nature of online learning (OL) and changes in the data distribution, both of which result in an unstable posterior distribution and an offset of the learner's optima. To address these issues, we first introduce adjustable forward regularization (-kF), a more general -F with controllable knowledge intervention. We also derive -kF's incremental updates with a variable learning rate, and study its relative regret bound in OL. Inspired by the regret analysis, to curb unstable penalties, we further propose a -kF-Bayes style in which k is synchronously self-adapted, replacing the intractable tuning of -kF by accounting for parametric posterior distribution changes in non-i.i.d. online data streams. Additionally, we integrate -kF and -kF-Bayes into a multi-layer ensemble deep random vector functional link network (edRVFL) and present two practical algorithms for batch learning that avoid past replay and catastrophic forgetting. In experiments on numerical simulation, tabular, and image datasets, -kF-Bayes surpassed traditional -R and -F, highlighting the efficacy of the ready-to-work -kF-Bayes and the great potential of edRVFL-kF-Bayes in OL and continual learning (CL) scenarios.
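The abstract contrasts Ridge regularization (-R), which penalizes using only past inputs, with forward regularization (-F), which lets the current input enter the penalty before the prediction is made. The following is a minimal illustrative sketch of that distinction for a plain online linear learner — not the paper's -kF or -kF-Bayes algorithms — with a hypothetical synthetic data stream and variable names chosen for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, lam = 3, 200, 1.0
w_true = rng.normal(size=d)
X = rng.normal(size=(T, d))
y = X @ w_true + 0.1 * rng.normal(size=T)   # noisy linear stream

def cumulative_sq_error(forward: bool) -> float:
    """Run one online pass; return the cumulative squared prediction error."""
    A = lam * np.eye(d)      # regularized Gram matrix (the penalty term)
    b = np.zeros(d)          # running sum of y_s * x_s
    total = 0.0
    for x, target in zip(X, y):
        if forward:          # -F: current input joins the penalty BEFORE predicting
            A += np.outer(x, x)
        pred = x @ np.linalg.solve(A, b)
        total += (pred - target) ** 2
        if not forward:      # -R: Gram matrix updated only AFTER predicting
            A += np.outer(x, x)
        b += target * x
    return float(total)

ridge_loss = cumulative_sq_error(forward=False)
forward_loss = cumulative_sq_error(forward=True)
```

The only difference between the two variants is whether the current input's outer product is added to the regularized Gram matrix before or after the prediction; the paper's -kF generalizes this by weighting that intervention with an adjustable k.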
Language: en
Publisher: Elsevier
Subject: Randomized neural network
Forward regularization
Random vector functional link
Online learning
Continual learning
Multiple output layers
Title: Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers
Type: Article
Volume Number: 170
Open Access User License: http://creativecommons.org/licenses/by/4.0/
ESSN: 1873-5142
dc.accessType: Full Text

