Adaptive online domain incremental continual learning
Abstract
Continual Learning (CL) problems pose significant challenges for Neural Networks (NNs). Online Domain Incremental Continual Learning (ODI-CL) refers to settings where the data distribution may change from one task to another. These changes can severely affect the learned model: it may focus too heavily on previous data and fail to properly learn and represent new concepts. Conversely, a model that constantly forgets previously learned knowledge may be deemed too unstable and unsuitable. This work proposes Online Domain Incremental Pool (ODIP), a novel method to cope with catastrophic forgetting. ODIP employs automatic concept drift detection and does not require task IDs during training. It maintains a pool of learners, freezing and storing the best one after training on each task. An additional Task Predictor (TP) is trained to select the most appropriate frozen NN from the pool for prediction. We compare ODIP against regularization methods and observe that it yields competitive predictive performance.
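The abstract outlines ODIP's core mechanism: a pool of candidate learners trained online, a drift detector that signals task boundaries in place of task IDs, freezing of the best learner at each boundary, and a Task Predictor that routes inputs to the appropriate frozen model at prediction time. Below is a minimal, illustrative Python sketch of that loop; all names here (`ODIPPool`, `learn_one`, `drift_detector.update`, the `score` attribute) are hypothetical placeholders under assumed interfaces, not the paper's actual implementation.

```python
import copy

class ODIPPool:
    """Illustrative sketch of the pool mechanism described in the abstract.

    All names are hypothetical; the paper's actual ODIP method (drift
    detector, task-predictor features, candidate scoring) may differ.
    """

    def __init__(self, candidate_factories, drift_detector, task_predictor):
        self.factories = candidate_factories  # callables that build fresh learners
        self.candidates = [make() for make in candidate_factories]
        self.frozen = []                      # one frozen learner per detected task
        self.drift_detector = drift_detector
        self.task_predictor = task_predictor

    def learn_one(self, x, y):
        # Train every active candidate online on the incoming example.
        for model in self.candidates:
            model.learn_one(x, y)
        # A detected concept drift is treated as a task boundary (no task
        # IDs required): freeze a copy of the best candidate, teach the
        # task predictor to route similar inputs to it, and (assumed here)
        # restart the candidate pool for the next task.
        if self.drift_detector.update(x, y):
            best = max(self.candidates, key=lambda m: m.score)  # score: running accuracy estimate
            self.frozen.append(copy.deepcopy(best))
            self.task_predictor.learn_one(x, task=len(self.frozen) - 1)
            self.candidates = [make() for make in self.factories]

    def predict_one(self, x):
        # At prediction time the task predictor selects the most appropriate
        # frozen network; fall back to the live pool before any task ends.
        if self.frozen:
            return self.frozen[self.task_predictor.predict_one(x)].predict_one(x)
        return max(self.candidates, key=lambda m: m.score).predict_one(x)
```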
Type
Conference Contribution
Date
2022
Publisher
Springer Nature
Rights
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG. This is the author's accepted version of a conference paper published in the Proceedings of the 31st International Conference on Artificial Neural Networks (ICANN 2022), Part I, LNCS 13529.