
    Bayesian forward regularization replacing Ridge in online randomized neural network with multiple output layers

    View/Open
    1-s2.0-S0031320325005461-main.pdf (2.663Mb)
    Date
    2025-06-19
    Author
    Hu, Minghui
    Li, Ning
    Suganthan, Ponnuthurai Nagaratnam
    Wang, Junda
    Abstract
    Forward regularization (-F) with unsupervised knowledge was advocated to replace canonical Ridge regularization (-R) in online linear learners, as it achieves a lower relative regret bound. However, we observe that -F cannot perform as expected in practice, and may even lose to -R on online tasks. We identify two main causes: (1) inappropriately intervened regularization, and (2) the non-i.i.d. nature of, and distribution changes in, online learning (OL) data, both of which result in an unstable posterior distribution and optima offset of the learner. To address these issues, we first introduce adjustable forward regularization (-kF), a more general -F with controllable knowledge intervention. We also derive -kF's incremental updates with a variable learning rate, and study its relative regret and bound in OL. Inspired by the regret analysis, and to curb unstable penalties, we further propose a -kF-Bayes style in which k is synchronously self-adapted, replacing the intractable tuning of -kF by accounting for parametric posterior distribution changes in non-i.i.d. online data streams. Additionally, we integrate -kF and -kF-Bayes into a multi-layer ensemble deep random vector functional link network (edRVFL) and present two practical algorithms for batch learning that avoid past replay and catastrophic forgetting. In experiments on numerical simulation, tabular, and image datasets, -kF-Bayes surpassed traditional -R and -F, highlighting the efficacy of the ready-to-work -kF-Bayes and the great potential of edRVFL-kF-Bayes in OL and continual learning (CL) scenarios.
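    The distinction between -R and -F that the abstract builds on can be illustrated with a minimal sketch of online linear regression. In canonical online Ridge (-R), the learner predicts from past data only; in forward regularization (-F), the unlabeled current feature is folded into the Gram matrix before predicting (the "unsupervised knowledge" intervention). The toy data, dimensions, and penalty strength below are illustrative assumptions, not the paper's experimental setup, and the adjustable -kF and its Bayesian self-adaptation are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy streaming regression task: y = w_true . x + noise (illustrative only)
    d, T = 5, 500
    w_true = rng.normal(size=d)
    lam = 1.0  # regularization strength

    # Shared sufficient statistics over past rounds:
    # A = lam*I + sum_{s<t} x_s x_s^T,  b = sum_{s<t} y_s x_s
    A = lam * np.eye(d)
    b = np.zeros(d)

    err_R = err_F = 0.0
    for t in range(T):
        x = rng.normal(size=d)
        y = w_true @ x + 0.1 * rng.normal()

        # Ridge (-R): predict using past labeled data only
        w_R = np.linalg.solve(A, b)
        err_R += (w_R @ x - y) ** 2

        # Forward (-F): include the unlabeled current feature in the
        # Gram matrix before predicting
        w_F = np.linalg.solve(A + np.outer(x, x), b)
        err_F += (w_F @ x - y) ** 2

        # Reveal the label and update the sufficient statistics
        A += np.outer(x, x)
        b += y * x

    print(f"-R mean online squared error: {err_R / T:.4f}")
    print(f"-F mean online squared error: {err_F / T:.4f}")
    ```

    The adjustable -kF of the paper can be thought of as scaling this intervention, recovering -R at one extreme and -F at the other; the paper's contribution is choosing and self-adapting that scale under non-i.i.d. streams.
    
    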
    URI
    https://www.sciencedirect.com/science/article/pii/S0031320325005461
    DOI/handle
    http://dx.doi.org/10.1016/j.patcog.2025.111886
    http://hdl.handle.net/10576/68414
    Collections
    • Information Intelligence [105 items]



    Qatar University Digital Hub is a digital collection operated and maintained by the Qatar University Library and supported by the ITS department

