dc.contributor.author	Sahito, Attaullah	en_NZ
dc.contributor.author	Frank, Eibe	en_NZ
dc.contributor.author	Pfahringer, Bernhard	en_NZ
dc.contributor.editor	Gallagher, M.	en_NZ
dc.contributor.editor	Moustafa, N.	en_NZ
dc.contributor.editor	Lakshika, E.	en_NZ
dc.coverage.spatial	Canberra, Australia	en_NZ
dc.date.accessioned	2020-12-16T01:32:27Z
dc.date.available	2020-12-16T01:32:27Z
dc.date.issued	2020	en_NZ
dc.identifier.citation	Sahito A., Frank E., Pfahringer B. (2020) Transfer of Pretrained Model Weights Substantially Improves Semi-supervised Image Classification. In: Gallagher M., Moustafa N., Lakshika E. (eds) AI 2020: Advances in Artificial Intelligence. AI 2020. Lecture Notes in Computer Science, vol 12576. Springer, Cham. https://doi.org/10.1007/978-3-030-64984-5_34
dc.identifier.uri	https://hdl.handle.net/10289/14030
dc.description.abstract	Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when only a few labeled examples are available for training. Creating a large number of labeled examples requires considerable resources, time, and effort. If labeling new data is not feasible, so-called semi-supervised learning can achieve better generalisation than purely supervised learning by employing unlabeled instances as well as labeled ones. The work presented in this paper is motivated by the observation that transfer learning provides the opportunity to potentially further improve performance by exploiting models pretrained on a similar domain. More specifically, we explore the use of transfer learning when performing semi-supervised learning using self-learning. The main contribution is an empirical evaluation of transfer learning using different combinations of similarity metric learning methods and label propagation algorithms in semi-supervised learning. We find that transfer learning always substantially improves the model’s accuracy when few labeled examples are available, regardless of the type of loss used for training the neural network. This finding is obtained by performing extensive experiments on the SVHN, CIFAR-10, and Plant Village image classification datasets and applying pretrained weights from ImageNet for transfer learning.	en_NZ
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.publisher	Springer	en_NZ
dc.rights	© 2020 Springer Nature Switzerland AG. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-64984-5_34	en_NZ
dc.source	AI 2020	en_NZ
dc.subject	computer science	en_NZ
dc.subject	semi-supervised learning	en_NZ
dc.subject	transfer learning	en_NZ
dc.subject	self-learning	en_NZ
dc.subject	triplet loss	en_NZ
dc.subject	contrastive loss	en_NZ
dc.subject	arcface loss	en_NZ
dc.subject	Machine learning
dc.title	Transfer of pretrained model weights substantially improves semi-supervised image classification	en_NZ
dc.type	Conference Contribution
dc.identifier.doi	10.1007/978-3-030-64984-5_34	en_NZ
dc.relation.isPartOf	AI 2020: Advances in Artificial Intelligence. AI 2020. Lecture Notes in Computer Science	en_NZ
pubs.begin-page	434
pubs.elements-id	258449
pubs.end-page	444
pubs.finish-date	2020-11-30	en_NZ
pubs.place-of-publication	Cham, Switzerland	en_NZ
pubs.start-date	2020-11-29	en_NZ
pubs.volume	LNAI 12576	en_NZ
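
The abstract above describes semi-supervised learning via self-learning (self-training) on top of ImageNet-pretrained weights. For illustration only, the following is a minimal PyTorch sketch of that general idea: load an ImageNet-pretrained backbone, fit it to the few labeled examples, then pseudo-label confident unlabeled examples and retrain on them. The paper's actual pipeline also evaluates similarity-metric losses (triplet, contrastive, ArcFace) and label propagation, which are not reproduced here; the backbone choice, confidence threshold, and loader names below are assumptions, not the authors' settings.

```python
# Sketch of self-training with ImageNet-pretrained weights (transfer learning).
# Assumes torchvision >= 0.13; 'labeled_loader' and 'unlabeled_loader' are
# hypothetical DataLoaders the caller must provide.
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes, pretrained=True):
    # Transfer learning: start from ImageNet weights instead of random init.
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task head
    return model

def self_train(model, labeled_loader, unlabeled_loader, optimizer,
               epochs=10, threshold=0.95):
    ce = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        # 1) Supervised pass over the few available labeled examples.
        for x, y in labeled_loader:
            optimizer.zero_grad()
            ce(model(x), y).backward()
            optimizer.step()
        # 2) Self-learning pass: pseudo-label unlabeled examples and
        #    train only on predictions above the confidence threshold.
        for x, _ in unlabeled_loader:
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)
                conf, pseudo = probs.max(dim=1)
            mask = conf > threshold  # keep only confident pseudo-labels
            if mask.any():
                optimizer.zero_grad()
                ce(model(x[mask]), pseudo[mask]).backward()
                optimizer.step()
    return model
```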

