Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers
| Author | Hu, Minghui |
| Author | Li, Ning |
| Author | Suganthan, Ponnuthurai Nagaratnam |
| Author | Wang, Junda |
| Available date | 2025-11-09T07:41:33Z |
| Publication Date | 2025-06-19 |
| Publication Name | Pattern Recognition |
| Identifier | http://dx.doi.org/10.1016/j.patcog.2025.111886 |
| Citation | Wang, J., Hu, M., Li, N., & Suganthan, P. N. (2026). Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers. Pattern Recognition, 170, 111886. |
| ISSN | 0031-3203 |
| Abstract | Forward regularization (-F) with unsupervised knowledge was advocated to replace canonical Ridge regularization (-R) in online linear learners, as it achieved a lower relative regret boundary. However, we observe that -F cannot perform as expected in practice, even possibly losing to -R on online tasks. We identify two main causes for this: (1) inappropriate intervened regularization, and (2) the non-i.i.d. nature of and distribution changes in online learning (OL) data, both of which result in unstable posterior distributions and optima offset of the learner. To address these, we first introduce adjustable forward regularization (-kF), a more general -F with controllable knowledge intervention. We also derive -kF's incremental updates with a variable learning rate, and study its relative regret and boundary in OL. Inspired by the regret analysis, to curb unstable penalties, we further propose the -kF-Bayes style, in which k is synchronously self-adapted to avoid the intractable tuning of -kF by accounting for parametric posterior distribution changes in non-i.i.d. online data streams. Additionally, we integrate -kF and -kF-Bayes into a multi-layer ensemble deep random vector functional link (edRVFL) network and present two practical algorithms for batch learning that avoid past replay and catastrophic forgetting. In experiments on numerical simulation, tabular, and image datasets, -kF-Bayes surpassed traditional -R and -F, highlighting the efficacy of the ready-to-work -kF-Bayes and the great potential of edRVFL-kF-Bayes in OL and continual learning (CL) scenarios. |
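The abstract contrasts forward regularization (-kF, -kF-Bayes) with the canonical Ridge (-R) baseline in online linear learners. As an illustrative sketch only (not the paper's algorithm), the -R baseline can be realized as online ridge regression via recursive least squares, with the Sherman-Morrison identity giving the incremental update that avoids replaying past data:

```python
import numpy as np

# Illustrative sketch (not the paper's -kF method): online ridge regression
# via recursive least squares. This is the canonical -R baseline that the
# abstract's forward-regularized variants are compared against.

def make_online_ridge(d, lam=1.0):
    """Initialize state for an online ridge learner on d features."""
    P = np.eye(d) / lam      # P = (lam*I)^-1, running inverse of X^T X + lam*I
    w = np.zeros(d)          # current weight estimate
    return P, w

def ridge_step(P, w, x, y):
    """One incremental update with sample (x, y); O(d^2) per step, no replay."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)          # gain vector (Sherman-Morrison)
    w = w + k * (y - x @ w)          # correct by the prediction error
    P = P - np.outer(k, Px)          # rank-1 downdate of the running inverse
    return P, w

# Toy stream: the learner tracks a fixed linear target from one pass of data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
P, w = make_online_ridge(3, lam=1.0)
for _ in range(500):
    x = rng.normal(size=3)
    y = x @ w_true + 0.01 * rng.normal()
    P, w = ridge_step(P, w, x, y)
print(np.round(w, 2))
```

The -kF and -kF-Bayes schemes described in the abstract replace this fixed penalty with a knowledge-intervened one whose strength k adapts to posterior changes in the non-i.i.d. stream.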
| Language | en |
| Publisher | Elsevier |
| Subject | Randomized neural network; Forward regression; Random vector functional link; Online learning; Continual learning; Multiple output layers |
| Type | Article |
| Volume Number | 170 |
| Open Access user License | http://creativecommons.org/licenses/by/4.0/ |
| ESSN | 1873-5142 |
| Collection | Information Intelligence |