Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers
Date
2025-06-19

Abstract
Forward regularization (-F) with unsupervised knowledge has been advocated to replace canonical Ridge regularization (-R) in online linear learners, as it achieves a lower relative regret bound. However, we observe that -F cannot perform as expected in practice and may even lose to -R on online tasks. We identify two main causes: (1) inappropriately intervened regularization, and (2) the non-i.i.d. nature and distribution shifts of online learning (OL) data, both of which lead to an unstable posterior distribution and an offset of the learner's optimum. To address these issues, we first introduce adjustable forward regularization (-kF), a more general form of -F with controllable knowledge intervention. We derive incremental updates for -kF with a variable learning rate and analyze its relative regret and bound in OL. Inspired by the regret analysis, and to curb unstable penalties, we further propose a -kF-Bayes variant in which k is self-adapted synchronously, avoiding the intractable tuning of -kF by accounting for changes of the parametric posterior distribution in non-i.i.d. online data streams. Additionally, we integrate -kF and -kF-Bayes into a multi-layer ensemble deep random vector functional link network (edRVFL) and present two practical batch-learning algorithms that avoid past-data replay and catastrophic forgetting. In experiments on numerical simulations and on tabular and image datasets, -kF-Bayes surpassed traditional -R and -F, highlighting the efficacy of the ready-to-use -kF-Bayes and the strong potential of edRVFL-kF-Bayes in OL and continual learning (CL) scenarios.
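To make the -R versus -F contrast concrete, the sketch below shows a minimal online linear learner in Python, assuming -R predicts from the posterior fitted on past data only while a forward-style step also folds the current input into the regularized covariance before predicting (a Vovk-Azoury-Warmuth-style forward regression). This is an illustrative assumption, not the paper's -kF or -kF-Bayes update; the function and parameter names (online_ridge_vs_forward, lam) are hypothetical.

```python
import numpy as np

def online_ridge_vs_forward(X, y, lam=1.0):
    """Contrast canonical Ridge (-R) with a forward-style (-F) online update.

    Illustrative sketch only: -R solves with the Gram matrix of past inputs,
    while the forward variant also adds the current input x_t to the
    regularized covariance before predicting. The paper's -kF / -kF-Bayes
    updates with adaptive k are not reproduced here.
    """
    d = X.shape[1]
    A = lam * np.eye(d)          # regularized covariance (Gram) matrix
    b = np.zeros(d)              # accumulated x_t * y_t
    preds_r, preds_f = [], []
    for x_t, y_t in zip(X, y):
        w_r = np.linalg.solve(A, b)                        # -R: ignore x_t when predicting
        w_f = np.linalg.solve(A + np.outer(x_t, x_t), b)   # -F: fold x_t in first
        preds_r.append(w_r @ x_t)
        preds_f.append(w_f @ x_t)
        A += np.outer(x_t, x_t)                            # incremental (online) update
        b += y_t * x_t
    return np.array(preds_r), np.array(preds_f)
```

As the abstract notes, whether the forward step actually helps depends on how the intervention interacts with non-i.i.d. data streams, which motivates the adjustable intervention strength k in -kF and its Bayesian self-adaptation in -kF-Bayes.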
Collections
- Information Intelligence [105 items]

