Abstract
In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting. KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks. Our analysis reveals that this issue originates from substantial representation shifts in the teacher network when dealing with out-of-distribution data. This causes large errors in the KD loss component, leading to performance degradation in CIL models. Inspired by recent test-time adaptation methods, we introduce Teacher Adaptation (TA), a method that concurrently updates the teacher and the main models during incremental training. Our method seamlessly integrates with KD-based CIL approaches and allows for consistent enhancement of their performance across multiple exemplar-free CIL benchmarks. The source code for our method is available at https://github.com/fszatkowski/cl-teacher-adaptation.
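As an illustration of the teacher-adaptation idea described in the abstract, below is a minimal PyTorch sketch in which the teacher is kept in training mode during distillation so that its normalization statistics track the new-task data, while gradients flow only into the main (student) model. The function names, temperature, and loss weighting (kd_loss, train_step, kd_weight) are illustrative assumptions and do not reproduce the authors' exact implementation; consult the linked repository for the actual method.

import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # Soft-target knowledge distillation loss (KL divergence on softened outputs).
    log_p = F.log_softmax(student_logits / temperature, dim=1)
    q = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * temperature ** 2


def train_step(student, teacher, images, labels, optimizer, kd_weight=1.0):
    # Teacher adaptation (sketch): the teacher stays in train mode, so the
    # forward pass on new-task data updates its normalization statistics,
    # while its weights receive no gradients.
    teacher.train()
    with torch.no_grad():
        teacher_logits = teacher(images)

    student.train()
    student_logits = student(images)

    # Distill only on the classes the teacher knows about (old-task outputs).
    old_logits = student_logits[:, : teacher_logits.size(1)]

    loss = F.cross_entropy(student_logits, labels)
    loss = loss + kd_weight * kd_loss(old_logits, teacher_logits)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, the only change relative to a standard exemplar-free KD baseline is that the teacher's statistics are allowed to follow the incoming data distribution; a fully frozen teacher would instead keep the statistics from the previous task.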
Details
- Category: Conference activity
- Type: publication in a peer-reviewed edited volume (including conference proceedings)
- Language: English
- Publication year: 2024
- Bibliographic description: Szatkowski F., Pyła M., Przewięźlikowski M., Cygert S., Twardowski B., Trzciński T.: Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning, 2024
- Sources of funding: IDEAS NCBR
- Verified by: Gdańsk University of Technology