Better Self-training for Image Classification Through Self-supervision
| dc.contributor.author | Sahito, Attaullah | en_NZ |
| dc.contributor.author | Frank, Eibe | en_NZ |
| dc.contributor.author | Pfahringer, Bernhard | en_NZ |
| dc.contributor.editor | Long, G | en_NZ |
| dc.contributor.editor | Yu, X | en_NZ |
| dc.contributor.editor | Wang, S | en_NZ |
| dc.coverage.spatial | University of Technology Sydney (online) | en_NZ |
| dc.date.accessioned | 2024-01-12T02:33:58Z | |
| dc.date.available | 2024-01-12T02:33:58Z | |
| dc.date.issued | 2022-01-01 | en_NZ |
| dc.description.abstract | Self-training is a simple semi-supervised learning approach: unlabelled examples that attract high-confidence predictions are labelled with those predictions and added to the training set, and this process is repeated multiple times. Recently, self-supervision—learning without manual supervision by solving an automatically generated pretext task—has gained prominence in deep learning. This paper investigates three different ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch and ImageNet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time. | |
| dc.format.mimetype | application/pdf | |
| dc.identifier.doi | 10.1007/978-3-030-97546-3_52 | en_NZ |
| dc.identifier.eissn | 1611-3349 | en_NZ |
| dc.identifier.isbn | 978-3-030-97545-6 | en_NZ |
| dc.identifier.issn | 0302-9743 | en_NZ |
| dc.identifier.uri | https://hdl.handle.net/10289/16324 | |
| dc.language.iso | en | |
| dc.publisher | Springer International Publishing AG | en_NZ |
| dc.relation.isPartOf | AI 2021: Advances in Artificial Intelligence | en_NZ |
| dc.rights | © 2022 Springer. This is an author’s accepted version of a conference paper published in the Lecture Notes in Computer Science book series (LNAI, volume 13151). | |
| dc.source | 34th Australasian Joint Conference on Artificial Intelligence (AI) | en_NZ |
| dc.subject | Science & Technology | en_NZ |
| dc.subject | Technology | en_NZ |
| dc.subject | Computer Science, Artificial Intelligence | en_NZ |
| dc.subject | Computer Science | en_NZ |
| dc.subject | Self-supervised learning | en_NZ |
| dc.subject | Self-training | en_NZ |
| dc.subject | Rotational loss | en_NZ |
| dc.title | Better Self-training for Image Classification Through Self-supervision | en_NZ |
| dc.type | Conference Contribution | |
| dspace.entity.type | Publication | |
| pubs.begin-page | 645 | |
| pubs.end-page | 657 | |
| pubs.finish-date | 2022-02-04 | en_NZ |
| pubs.publication-status | Published | en_NZ |
| pubs.start-date | 2022-02-02 | en_NZ |
| pubs.volume | 13151 | en_NZ |
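The self-training loop described in the abstract—pseudo-label unlabelled examples that attract high-confidence predictions, add them to the training set, retrain, repeat—can be sketched with a toy example. This is a minimal illustration, not the paper's method: it uses a nearest-centroid classifier on synthetic 2-D data, with a softmax over negative squared distances as a stand-in for network confidence, and omits the paper's self-supervised rotation pretext and deep networks entirely. The data, classifier, and 0.95 threshold are all hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two well-separated Gaussian blobs in 2-D.
n_per = 200
X = np.vstack([rng.normal(-2, 1, (n_per, 2)), rng.normal(2, 1, (n_per, 2))])
y = np.array([0] * n_per + [1] * n_per)

# Only 5 labels per class; the rest play the role of the unlabelled pool.
labeled_idx = np.concatenate([np.arange(5), np.arange(n_per, n_per + 5)])
unlabeled_idx = np.setdiff1d(np.arange(2 * n_per), labeled_idx)


def fit_centroids(X, y):
    # Stand-in for "train the model": one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])


def predict_proba(centroids, X):
    # Softmax over negative squared distances, a crude proxy for the
    # prediction confidence a neural network would provide.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)


X_lab, y_lab = X[labeled_idx], y[labeled_idx]
pool = X[unlabeled_idx]
threshold = 0.95  # hypothetical confidence threshold

for _ in range(3):  # a few self-training iterations
    centroids = fit_centroids(X_lab, y_lab)
    proba = predict_proba(centroids, pool)
    conf = proba.max(axis=1)
    keep = conf >= threshold
    if not keep.any():
        break
    # Pseudo-label the confident examples and move them to the labelled set.
    X_lab = np.vstack([X_lab, pool[keep]])
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    pool = pool[~keep]

final_centroids = fit_centroids(X_lab, y_lab)
accuracy = (predict_proba(final_centroids, X).argmax(axis=1) == y).mean()
```

In the paper's setting the "retrain" step is where self-supervision enters: a rotation-prediction pretext loss can be optimised before or alongside the supervised loss, either only before the first iteration or in every iteration.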
Files
Original bundle
- Name:
- Self_supervised_Paper.pdf
- Size:
- 1.39 MB
- Format:
- Adobe Portable Document Format
- Description:
- Accepted version
License bundle
- Name:
- Research Commons Deposit Agreement 2017.pdf
- Size:
- 188.11 KB
- Format:
- Adobe Portable Document Format
- Description: