Adaptive Neural Networks for Online Domain Incremental Continual Learning
| dc.contributor.author | Gunasekara, Nuwan | en_NZ |
| dc.contributor.author | Gomes, Heitor | en_NZ |
| dc.contributor.author | Bifet, Albert | en_NZ |
| dc.contributor.author | Pfahringer, Bernhard | en_NZ |
| dc.contributor.editor | Poncelet, P | en_NZ |
| dc.contributor.editor | Ienco, D | en_NZ |
| dc.coverage.spatial | Montpellier, France | en_NZ |
| dc.date.accessioned | 2024-01-21T20:50:41Z | |
| dc.date.available | 2024-01-21T20:50:41Z | |
| dc.date.issued | 2022 | en_NZ |
| dc.description.abstract | Continual Learning (CL) poses a significant challenge to Neural Networks (NNs), where the data distribution changes from one task to another. In Online Domain Incremental Continual Learning (OD-ICL), this distribution change happens in the input space without affecting the label distribution. To adapt to such changes, the model being trained risks forgetting previously learned knowledge (stability); conversely, forcing the model to preserve past knowledge causes it to fail to learn new concepts (plasticity). We propose Online Domain Incremental Networks (ODIN), a novel method that alleviates catastrophic forgetting by automatically detecting the end of a task using concept drift detection. Consequently, ODIN does not require the specification of task ids. ODIN maintains a pool of NNs, each trained on a single task and then frozen against further updates. A Task Predictor (TP) is trained to select the most suitable NN from the frozen pool for prediction. We compare ODIN against popular regularization and replay methods: it outperforms the regularization methods and achieves predictive performance comparable to the replay methods. | en_NZ |
| dc.format.mimetype | application/pdf | |
| dc.identifier.doi | 10.1007/978-3-031-18840-4_7 | en_NZ |
| dc.identifier.eissn | 1611-3349 | en_NZ |
| dc.identifier.isbn | 9783031188398 | en_NZ |
| dc.identifier.issn | 0302-9743 | en_NZ |
| dc.identifier.uri | https://hdl.handle.net/10289/16372 | |
| dc.language.iso | en | |
| dc.publisher | Springer Nature | |
| dc.relation.isPartOf | Proceedings of the 25th International Conference on Discovery Science (DS 2022), LNAI 13601 | en_NZ |
| dc.rights | This is an author’s accepted version of a conference paper published in the Proceedings of the 25th International Conference on Discovery Science (DS 2022), LNAI 13601. © 2022 Springer. | |
| dc.source | DS 2022 | en_NZ |
| dc.subject | computer science | en_NZ |
| dc.subject | online domain incremental continual learning | en_NZ |
| dc.title | Adaptive Neural Networks for Online Domain Incremental Continual Learning | en_NZ |
| dc.type | Conference Contribution | |
| dspace.entity.type | Publication | |
| pubs.begin-page | 89 | |
| pubs.end-page | 103 | |
| pubs.finish-date | 2022-10-12 | en_NZ |
| pubs.publication-status | Published | en_NZ |
| pubs.start-date | 2022-10-10 | en_NZ |
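
The abstract above outlines the ODIN pipeline: a drift detector flags the end of a task, the network trained on that task is frozen into a pool, and a task predictor routes each input to the most suitable frozen network. The following is a minimal, self-contained Python sketch of that control flow only, under stated assumptions: the class and method names (`OdinPool`, `learn_one`, the `train_step`/`predict` model interface), the toy loss-based drift check, and the nearest-centroid task predictor are illustrative stand-ins, not the authors' implementation (the paper uses a proper concept drift detector and a trained Task Predictor).

```python
# Illustrative sketch of an ODIN-style learner. All names and the model
# interface (train_step/predict) are assumptions for this example, not
# the authors' actual code.
import copy


class SimpleDriftDetector:
    """Toy drift check: flags drift when the mean loss over the recent
    window exceeds the long-run mean loss by a fixed margin. Stands in
    for the concept drift detector used in the paper."""

    def __init__(self, window=100, margin=0.5):
        self.window, self.margin = window, margin
        self.history = []

    def update(self, loss):
        self.history.append(loss)
        if len(self.history) < 2 * self.window:
            return False
        recent = self.history[-self.window:]
        earlier = self.history[:-self.window]
        return (sum(recent) / len(recent)
                > sum(earlier) / len(earlier) + self.margin)


class OdinPool:
    def __init__(self, model_factory):
        self.model_factory = model_factory  # builds a fresh NN per task
        self.active = model_factory()       # model for the current task
        self.frozen = []                    # one frozen NN per past task
        self.centroids = []                 # per-task input means (toy task predictor)
        self.detector = SimpleDriftDetector()
        self._sum, self._n = None, 0

    def learn_one(self, x, y):
        # Assumed model interface: train_step(x, y) updates the NN and
        # returns the instance loss.
        loss = self.active.train_step(x, y)
        self._track_centroid(x)
        if self.detector.update(loss):      # drift => end of current task
            self.frozen.append(copy.deepcopy(self.active))
            self.centroids.append([s / self._n for s in self._sum])
            self.active = self.model_factory()
            self.detector = SimpleDriftDetector()
            self._sum, self._n = None, 0

    def _track_centroid(self, x):
        if self._sum is None:
            self._sum, self._n = list(x), 1
        else:
            self._sum = [s + xi for s, xi in zip(self._sum, x)]
            self._n += 1

    def predict_one(self, x):
        # Toy task predictor: route x to the frozen NN whose task
        # centroid is nearest; fall back to the active model while the
        # pool is still empty.
        if not self.frozen:
            return self.active.predict(x)
        dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                 for c in self.centroids]
        return self.frozen[dists.index(min(dists))].predict(x)
```

Any incremental model wrapper exposing `train_step(x, y) -> loss` and `predict(x)` could be plugged into this sketch; the essential ODIN ideas it mirrors are that task boundaries come from drift detection rather than task ids, and that past-task networks are frozen so new learning cannot overwrite them.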
Files
Original bundle
- Name: DS2022_ODIN_camera_ready.pdf
- Size: 709.33 KB
- Format: Adobe Portable Document Format
- Description: Accepted version
License bundle
- Name: Research Commons Deposit Agreement 2017.pdf
- Size: 188.11 KB
- Format: Adobe Portable Document Format