    Complementary Learning Subnetworks Towards Parameter-Efficient Class-Incremental Learning

    View/Open
    Complementary_Learning_Subnetworks_Towards_Parameter-Efficient_Class-Incremental_Learning.pdf (3.884 MB)
    Date
    2025
    Author
    Li, Depeng
    Zeng, Zhigang
    Dai, Wei
    Suganthan, Ponnuthurai Nagaratnam
    Abstract
    In the scenario of class-incremental learning (CIL), deep neural networks have to adapt their model parameters to non-stationary data distributions, e.g., the emergence of new classes over time. To mitigate the catastrophic forgetting phenomenon, typical CIL methods either cumulatively store exemplars of old classes for retraining model parameters from scratch or progressively expand the model size as new classes arrive; both strategies compromise practical value because they pay little attention to parameter efficiency. In this paper, we contribute a novel solution: effective control of the parameters of a well-trained model through the synergy of two complementary learning subnetworks. Specifically, we integrate one plastic feature extractor and one analytical feed-forward classifier into a unified framework amenable to streaming data. In each CIL session, the framework achieves non-overwritten parameter updates in a cost-effective manner, neither revisiting old task data nor extending previously learned networks; instead, it accommodates new tasks by attaching a tiny set of declarative parameters to the backbone, in which only one matrix per task or one vector per class is kept for knowledge retention. Experimental results on a variety of task sequences demonstrate that our method achieves competitive results against state-of-the-art CIL approaches, particularly in accuracy gain, knowledge transfer, training efficiency, and task-order robustness. Furthermore, we empirically investigate graceful forgetting of previously learned trivial tasks so that the non-growing backbone (i.e., a model with limited network capacity) suffices to train on more incoming tasks.
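    A minimal sketch may help make the abstract's analytic classifier concrete. The Python below is a hypothetical illustration, not the authors' released code: it assumes features from a frozen extractor and realizes the exemplar-free, closed-form update idea as a recursively updated ridge-regression head, where one d x d matrix carries the retained knowledge; the class name, the Woodbury/recursive-least-squares formulation, and the toy data are all illustrative assumptions.

        import numpy as np

        class AnalyticClassifier:
            """Hypothetical analytic feed-forward head for class-incremental learning.

            Sits on top of a frozen feature extractor and is updated in closed
            form once per CIL session: no old-task exemplars are stored, and a
            single d x d matrix R carries the retained knowledge.
            """

            def __init__(self, feat_dim, reg=1.0):
                self.R = np.eye(feat_dim) / reg   # inverse regularized feature autocorrelation
                self.W = np.zeros((feat_dim, 0))  # weights, grown column-wise as classes arrive

            def fit_task(self, feats, labels, n_new_classes):
                """One CIL session: feats is (n, d); labels are global class ids."""
                # Grow the weight matrix with zero columns for the new classes.
                self.W = np.hstack([self.W, np.zeros((self.W.shape[0], n_new_classes))])
                targets = np.eye(self.W.shape[1])[labels]  # one-hot over all classes so far
                # Woodbury-style recursive least-squares update of R: only the
                # current task's features are needed, so old data is never revisited.
                gain = np.linalg.inv(np.eye(len(feats)) + feats @ self.R @ feats.T)
                self.R -= self.R @ feats.T @ gain @ feats @ self.R
                # Closed-form weight correction: old-class weights are adjusted,
                # not overwritten by gradient retraining.
                self.W += self.R @ feats.T @ (targets - feats @ self.W)

            def predict(self, feats):
                return (feats @ self.W).argmax(axis=1)

        # Toy usage: two sequential tasks of two classes each over synthetic features.
        rng = np.random.default_rng(0)
        head = AnalyticClassifier(feat_dim=16)
        for task, classes in enumerate([(0, 1), (2, 3)]):
            protos = rng.normal(size=(2, 16))  # one feature prototype per new class
            labels = np.repeat(classes, 50)
            feats = protos[labels - 2 * task] + 0.1 * rng.normal(size=(100, 16))
            head.fit_task(feats, labels, n_new_classes=2)
        print((head.predict(feats) == labels).mean())  # accuracy on the latest task

    The property mirrored here is the one the abstract emphasizes: fit_task touches only the current session's data, old-class weights are corrected through R rather than retrained, and nothing stored grows with the task count except one weight column per class.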
    URI
    https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105000289812&origin=inward
    DOI/handle
    http://dx.doi.org/10.1109/TKDE.2025.3550809
    http://hdl.handle.net/10576/64812
    Collections
    • Information Intelligence [98 items]
