1 Lab of Artificial Intelligence for Education, East China Normal University, Shanghai, 200062, China
2 School of Computer Science and Technology, East China Normal University, Shanghai, 200062, China
3 Department of Stomatology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, China
Journal
Knowledge-Based Systems
Year
2026
Volume
Vol.341
Pages
115868
ISSN
0950-7051
Abstract
Dynamic expansion networks have emerged as a promising approach for incremental learning by introducing new feature extractors for each task while freezing previously learned ones to preserve acquired knowledge. However, existing methods often fail to effectively leverage information from earlier tasks when learning subsequent tasks, leading to blurred decision boundaries for earlier tasks and exacerbating catastrophic forgetting. To address this issue, we propose a novel orthogonal feature space constraint that encourages new-task features to lie in a subspace orthogonal to that of earlier tasks. This constraint helps preserve task boundaries and mitigate catastrophic forgetting. However, strictly enforcing orthogonality may undermine the plasticity needed to learn new tasks, potentially hindering performance. To strike a balance between stability and plasticity, we introduce a selective orthogonal constraint guided by a Task Relevance Matrix, which quantifies cross-class similarities between old and new classes, relaxes unnecessary restrictions, and enables more precise and adaptive orthogonal regularization. Additionally, we propose an intra-task orthogonality module that helps new tasks better differentiate their feature space from that of old tasks. Extensive experiments on CIFAR-100, ImageNet-100, and ImageNet demonstrate that our method effectively alleviates catastrophic forgetting while maintaining competitive performance on newly introduced tasks.
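The selective orthogonal constraint described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the thresholding scheme (applying the orthogonality penalty only to class pairs whose relevance falls below a threshold `tau`, so restrictions on highly related classes are relaxed), and the use of cosine similarity as the orthogonality measure are all assumptions made for illustration.

```python
import numpy as np

def selective_orthogonality_penalty(F_new, F_old, relevance, tau=0.5):
    """Illustrative sketch of a selective orthogonal feature constraint.

    F_new:     (n, d) features produced by the new task's extractor.
    F_old:     (m, d) features from the frozen old-task extractors.
    relevance: (m, n) Task Relevance Matrix of cross-class similarities
               (hypothetical form; the paper's exact construction may differ).
    tau:       relevance threshold below which orthogonality is enforced.
    """
    # Normalize rows so inner products become cosine similarities.
    F_new = F_new / np.linalg.norm(F_new, axis=1, keepdims=True)
    F_old = F_old / np.linalg.norm(F_old, axis=1, keepdims=True)

    # (m, n) cosine similarities between old and new features.
    sims = F_old @ F_new.T

    # Selectivity: penalize non-orthogonality only where old/new classes are
    # weakly related, relaxing the constraint for highly relevant pairs.
    mask = (relevance < tau).astype(float)

    # Mean squared similarity over the constrained pairs (0 when none apply).
    return np.sum(mask * sims ** 2) / max(mask.sum(), 1.0)
```

Under this reading, orthogonal old/new features incur zero penalty, aligned features incur a penalty proportional to their squared cosine similarity, and pairs the Task Relevance Matrix marks as strongly related are exempted, which preserves the plasticity needed to reuse shared structure.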