Filters
total: 3145
-
Catalog
- Publications: 2633 results
- Journals: 205 results
- Conferences: 26 results
- Publishing Houses: 1 result
- People: 104 results
- Projects: 9 results
- e-Learning Courses: 79 results
- Events: 9 results
- Open Research Data: 79 results
Displaying 1000 best results
Search results for: continual learning
-
Revisiting Supervision for Continual Representation Learning
Publication: In the field of continual learning, models are designed to learn tasks one after the other. While most research has centered on supervised continual learning, there is a growing interest in unsupervised continual learning, which makes use of the vast amounts of unlabeled data. Recent studies have highlighted the strengths of unsupervised methods, particularly self-supervised learning, in providing robust representations. The improved...
-
MagMax: Leveraging Model Merging for Seamless Continual Learning
Publication: This paper introduces a continual learning approach named MagMax, which utilizes model merging to enable large pre-trained models to continuously learn from new data without forgetting previously acquired knowledge. Distinct from traditional continual learning methods that aim to reduce forgetting during task training, MagMax combines sequential fine-tuning with a maximum magnitude weight selection for effective knowledge integration...
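The maximum-magnitude weight selection mentioned in the abstract can be sketched in a few lines. This is an illustrative reading, not the paper's actual code: the function name and the flat-list representation of per-task weight vectors are assumptions.

```python
# Illustrative sketch of max-magnitude merging: for each parameter position,
# keep the value with the largest absolute magnitude across all task vectors.
def merge_max_magnitude(task_vectors):
    merged = []
    for params in zip(*task_vectors):
        merged.append(max(params, key=abs))
    return merged

tv_a = [0.5, -1.2, 0.1]
tv_b = [-0.9, 0.3, 0.4]
print(merge_max_magnitude([tv_a, tv_b]))  # -> [-0.9, -1.2, 0.4]
```

In a real setting the inputs would be task vectors (fine-tuned weights minus pre-trained weights) over millions of parameters, but the per-position selection rule is the same.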
-
Divide and not forget: Ensemble of selectively trained experts in Continual Learning
Publication: Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know. A trend in this area is to use a mixture-of-experts technique, where different models work together to solve the task. However, the experts are usually trained all at once using whole task data, which makes them all prone to forgetting and increases the computational burden. To address this limitation,...
-
CONTINUUM MECHANICS AND THERMODYNAMICS
Journals
-
Looking through the past: better knowledge retention for generative replay in continual learning
Publication: In this work, we improve generative replay in a continual learning setting to perform well on challenging scenarios. Because of the growing complexity of continual learning tasks, it is becoming more popular to apply the generative replay technique in the feature space instead of the image space. Nevertheless, such an approach does not come without limitations. In particular, we notice the degradation of the continually trained...
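As a rough illustration of feature-space replay (all names here are hypothetical, and the toy generator stands in for a trained feature generator), synthetic old-task features can be mixed into each new-task batch instead of storing old images:

```python
import random

# Sketch: mix real new-task features with features sampled from a generator
# fitted on previous tasks, so old knowledge is rehearsed without raw data.
def replay_batch(new_features, generator, replay_ratio=0.5):
    n_replay = int(len(new_features) * replay_ratio)
    replayed = [generator() for _ in range(n_replay)]
    batch = list(new_features) + replayed
    random.shuffle(batch)
    return batch

# Toy stand-in for a trained feature generator.
gen = lambda: [0.0, 0.0]
print(len(replay_batch([[1.0, 2.0]] * 4, gen)))  # 6 items: 4 real + 2 replayed
```

The abstract's point is that replaying in feature space is cheaper than replaying images, at the cost of the limitations the authors go on to analyze.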
-
Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
Publication: In this work, we investigate exemplar-free class-incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting. KD-based methods are successfully used in CIL, but they often struggle to regularize the model without access to exemplars of the training data from previous tasks. Our analysis reveals that this issue originates from substantial representation shifts in the teacher...
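The kind of KD regularizer the abstract refers to can be sketched as a temperature-softened KL term between teacher and student outputs. This is the textbook formulation, not the paper's specific teacher-adaptation method:

```python
import math

# Softmax with temperature scaling: higher T flattens the distribution.
def softmax(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# KL(teacher || student) on softened probabilities for one example;
# adding this to the task loss penalizes drifting from the old model.
def kd_loss(teacher_logits, student_logits, temperature=2.0):
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When student and teacher agree, the loss is zero; any divergence from the previous-task model is penalized, which is exactly what becomes hard to tune without exemplars.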
-
CONTINUUM: Lifelong Learning in Neurology
Journals
-
Acrodermatitis continua Hallopeau
Publication
-
Optics Continuum
Journals
-
Hybrid model of moving or rotating continua
Publication: The paper introduces a method of model reduction for systems that experience a Coriolis acceleration or gyroscopic effect component. In such cases the corresponding system equations are non-self-adjoint. A modal reduced model is first built for the system without the Coriolis or gyroscopic terms. These phenomena are then included by applying a lumping technique. Hence, the final reduced model is a hybrid one, obtained...