Abstract
Class-incremental learning is becoming more popular, as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use mixture-of-experts techniques, where different models work together to solve a task. However, the experts are usually all trained at once on whole-task data, which makes them all prone to forgetting and increases the computational burden. To address this limitation, we introduce a novel approach named SEED. SEED selects only a single, optimal expert for the task at hand and uses data from this task to fine-tune only this expert. For this purpose, each expert represents each class with a Gaussian distribution, and the optimal expert is selected based on the similarity of those distributions. Consequently, SEED increases diversity and heterogeneity among the experts while maintaining the high stability of this ensemble method. Extensive experiments demonstrate that SEED achieves state-of-the-art performance in exemplar-free settings across various scenarios, showing the potential of expert diversification through data in continual learning.
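The selection step described in the abstract can be illustrated with a minimal sketch. The snippet below assumes diagonal-covariance Gaussians per class, a symmetrised KL divergence as the (dis)similarity measure, experts modelled as plain feature-extractor callables, and a rule that picks the expert in whose latent space the new classes' Gaussians overlap the least; all of these specifics are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit a diagonal Gaussian (mean, variance) per class in an expert's feature space."""
    gaussians = {}
    for c in np.unique(labels):
        x = features[labels == c]
        gaussians[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)  # small eps for stability
    return gaussians

def symmetric_kl(p, q):
    """Symmetrised KL divergence between two diagonal Gaussians."""
    def kl(a, b):
        (mu_a, var_a), (mu_b, var_b) = a, b
        return 0.5 * np.sum(np.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)
    return kl(p, q) + kl(q, p)

def select_expert(experts, task_data, task_labels):
    """Pick the expert whose Gaussian representations of the new task's classes
    are the most mutually separated; only that expert is then fine-tuned.
    Assumes the task contains at least two classes and that each expert is a
    callable mapping inputs to latent features (a hypothetical interface)."""
    best_idx, best_score = None, -np.inf
    for i, expert in enumerate(experts):
        feats = expert(task_data)  # latent features of the new task in this expert's space
        g = list(fit_class_gaussians(feats, task_labels).values())
        # mean pairwise divergence between the new classes in this latent space
        divs = [symmetric_kl(g[a], g[b]) for a in range(len(g)) for b in range(a + 1, len(g))]
        score = float(np.mean(divs))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Fine-tuning only the returned expert on the new task's data, while the remaining experts stay frozen, is what keeps the rest of the ensemble stable.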
Authors (6)
Cite as
Full text
- Publication version
- Submitted Version
- License
Keywords
Details
- Category:
- Conference activity
- Type:
- publication in a peer-reviewed edited volume (including conference proceedings)
- Language:
- English
- Year of publication:
- 2024
- Bibliographic description:
- Rypeść G., Cygert S., Khan V., Trzciński T., Zieliński B., Twardowski B.: Divide and not forget: Ensemble of selectively trained experts in Continual Learning, 2024.
- Funding sources:
- funding from outside PG
- Verification:
- Politechnika Gdańska
Publications you may be interested in
MagMax: Leveraging Model Merging for Seamless Continual Learning
- D. Marczak,
- B. Twardowski,
- T. Trzciński
- + 1 author
Revisiting Supervision for Continual Representation Learning
- D. Marczak,
- S. Cygert,
- T. Trzciński
- + 1 author
Category Adaptation Meets Projected Distillation in Generalized Continual Category Discovery
- G. Rypeść,
- D. Marczak,
- S. Cygert
- + 2 authors