Search results for: continual learning - Bridge of Knowledge


  • Divide and not forget: Ensemble of selectively trained experts in Continual Learning

    Publication
    • G. Rypeść
    • S. Cygert
    • V. Khan
    • T. Trzciński
    • B. Zieliński
    • B. Twardowski

    - Year 2024

    Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-experts technique, where different models work together to solve the task. However, the experts are usually trained all at once on the whole task data, which makes them all prone to forgetting and increases the computational burden. To address this limitation,...

    (A rough, illustrative code sketch of the selectively trained expert idea is given after the results list.)

    Full text to download in external service

  • Looking through the past: better knowledge retention for generative replay in continual learning

    Publication
    • V. Khan
    • S. Cygert
    • K. Deja
    • T. Trzciński
    • B. Twardowski

    - IEEE Access - Year 2024

    In this work, we improve generative replay in a continual-learning setting so that it performs well in challenging scenarios. Because of the growing complexity of continual learning tasks, it is becoming more popular to apply generative replay in the feature space instead of the image space. Nevertheless, such an approach does not come without limitations. In particular, we notice the degradation of the continually trained...

    (A rough, illustrative code sketch of feature-space replay is given after the results list.)

    Full text available to download

  • Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning

    Publication
    • F. Szatkowski
    • M. Pyła
    • M. Przewięźlikowski
    • S. Cygert
    • B. Twardowski
    • T. Trzciński

    - Year 2024

    In this work, we investigate exemplar-free class-incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting. KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks. Our analysis reveals that this issue originates from substantial representation shifts in the teacher...

    (A rough, illustrative distillation sketch with teacher adaptation is given after the results list.)

    Full text to download in external service
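For orientation only, the selective-expert idea behind the first result ("Divide and not forget") can be sketched roughly as below. The MLP expert architecture, the logit-averaging ensemble, and the way the expert index is chosen are assumptions made for this illustration, not details taken from the paper.

```python
# Minimal, illustrative sketch (not the authors' exact method): a pool of experts
# where only one expert is fine-tuned per incremental task, so the remaining
# experts stay frozen and are therefore protected from forgetting.
import torch
import torch.nn as nn

class ExpertPool(nn.Module):
    def __init__(self, num_experts: int, in_dim: int, num_classes: int):
        super().__init__()
        # Hypothetical expert architecture: a small MLP per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Inference: ensemble the experts by averaging their logits.
        return torch.stack([e(x) for e in self.experts]).mean(dim=0)

def train_one_task(pool: ExpertPool, task_loader, expert_idx: int, epochs: int = 1, lr: float = 1e-3):
    """Fine-tune a single selected expert on the current task; the others stay frozen."""
    for i, expert in enumerate(pool.experts):
        for p in expert.parameters():
            p.requires_grad = (i == expert_idx)
    opt = torch.optim.Adam(pool.experts[expert_idx].parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in task_loader:
            opt.zero_grad()
            loss = loss_fn(pool.experts[expert_idx](x), y)
            loss.backward()
            opt.step()
```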
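The feature-space generative replay idea behind the second result ("Looking through the past") could look roughly like the sketch below. Replacing the generative model with a per-class Gaussian over features, and the frozen-extractor/trainable-head split, are simplifications assumed here for brevity, not the authors' design.

```python
# Minimal, illustrative sketch of feature-space generative replay: old-task
# features are replayed from a simple per-class Gaussian "generator" instead of
# stored images, and mixed into each new-task batch.
import torch
import torch.nn as nn

class GaussianFeatureReplay:
    """Hypothetical generator: one Gaussian per old class, fit on extracted features."""
    def __init__(self):
        self.stats = {}  # class_id -> (mean, std)

    def fit(self, feats: torch.Tensor, labels: torch.Tensor):
        for c in labels.unique():
            f = feats[labels == c]
            self.stats[int(c)] = (f.mean(0), f.std(0) + 1e-6)

    def sample(self, n_per_class: int):
        xs, ys = [], []
        for c, (mu, std) in self.stats.items():
            xs.append(mu + std * torch.randn(n_per_class, mu.numel()))
            ys.append(torch.full((n_per_class,), c, dtype=torch.long))
        return torch.cat(xs), torch.cat(ys)

def train_head_with_replay(head, extractor, new_loader, replay, opt, n_replay: int = 16):
    """Train the classifier head on new-task features mixed with replayed old-class features.
    `opt` is assumed to optimize the head's parameters; the extractor stays frozen."""
    loss_fn = nn.CrossEntropyLoss()
    extractor.eval()
    for x, y in new_loader:
        with torch.no_grad():
            feats = extractor(x)                # replay happens in feature space, not image space
        old_f, old_y = replay.sample(n_replay)  # generated old-class features
        feats, y = torch.cat([feats, old_f]), torch.cat([y, old_y])
        opt.zero_grad()
        loss_fn(head(feats), y).backward()
        opt.step()
```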
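The teacher-adaptation idea behind the third result ("Adapt Your Teacher") can be sketched roughly as below. Modelling adaptation as letting the frozen teacher's BatchNorm statistics update on current-task data, and the specific loss weighting (alpha, temperature T), are assumptions for illustration rather than the paper's exact recipe.

```python
# Minimal, illustrative sketch of exemplar-free distillation with a teacher that
# adapts to current-task data: teacher weights stay fixed, but its normalization
# statistics are allowed to update during the forward pass.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_step(student, teacher, x, y, opt, alpha: float = 1.0, T: float = 2.0):
    teacher.train()                      # BatchNorm running stats adapt to current-task data
    for p in teacher.parameters():
        p.requires_grad = False          # the teacher's weights themselves stay fixed
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    ce = F.cross_entropy(s_logits, y)    # learn the current task
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = ce + alpha * kd               # distillation regularizes against forgetting
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: before starting a new task, snapshot the previous model as the teacher,
# e.g. teacher = copy.deepcopy(student), then call distillation_step per batch.
```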