Title: Better Self-training for Image Classification Through Self-supervision
Authors: Sahito, Attaullah; Frank, Eibe; Pfahringer, Bernhard
Editors: Long, G.; Yu, X.; Wang, S.
Date issued: 2022-01-01
Date accessioned/available: 2024-01-12
Type: Conference Contribution
Format: application/pdf
Language: en
ISBN: 978-3-030-97545-6
ISSN: 0302-9743 (print); 1611-3349 (electronic)
DOI: 10.1007/978-3-030-97546-3_52
URI: https://hdl.handle.net/10289/16324
Rights: © 2022 Springer. This is an author's accepted version of a conference paper published in the Lecture Notes in Computer Science book series (LNAI, volume 13151).
Subjects: Science & Technology; Technology; Computer Science, Artificial Intelligence; Computer Science; Self-supervised learning; Self-training; Rotational loss

Abstract: Self-training is a simple semi-supervised learning approach: unlabelled examples that attract high-confidence predictions are labelled with those predictions and added to the training set, and the process is repeated multiple times. Recently, self-supervision (learning without manual supervision by solving an automatically generated pretext task) has gained prominence in deep learning. This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch and ImageNet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time.
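To make the approach in the abstract concrete, the sketch below shows self-training with a rotation-prediction pretext task (consistent with the "Rotational loss" subject keyword) applied only in the first iteration, the variant the abstract reports as most effective. This is a minimal illustration under assumptions, not the authors' implementation: the PyTorch calls are standard, but the function names, the 0.95 confidence threshold, the batch size, and the one-epoch-per-iteration schedule are all chosen here for brevity.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


def rotation_pretext_epoch(backbone, rot_head, unlabelled_loader, optimizer, device):
    """Self-supervision: predict which of four rotations (0/90/180/270
    degrees) was applied to each unlabelled image."""
    backbone.train()
    for batch in unlabelled_loader:
        x = batch[0].to(device)
        # One rotated copy of the batch per rotation; the rotation index
        # serves as the (free) pretext label.
        xs = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
        ys = torch.arange(4, device=device).repeat_interleave(x.size(0))
        loss = F.cross_entropy(rot_head(backbone(xs)), ys)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


def pseudo_label(backbone, clf_head, unlabelled_loader, device, threshold=0.95):
    """Return unlabelled examples whose highest predicted class probability
    exceeds the confidence threshold, labelled with that prediction."""
    backbone.eval()
    xs, ys = [], []
    with torch.no_grad():
        for batch in unlabelled_loader:
            x = batch[0].to(device)
            probs = F.softmax(clf_head(backbone(x)), dim=1)
            conf, pred = probs.max(dim=1)
            keep = conf >= threshold
            xs.append(x[keep].cpu())
            ys.append(pred[keep].cpu())
    return torch.cat(xs), torch.cat(ys)


def self_train(backbone, clf_head, rot_head, x_lab, y_lab, unlabelled_loader,
               optimizer, device, iterations=5, pretext_epochs=10):
    """Self-training loop with self-supervision in the first iteration only."""
    for it in range(iterations):
        if it == 0:  # pretext task runs once, before the first labelling round
            for _ in range(pretext_epochs):
                rotation_pretext_epoch(backbone, rot_head, unlabelled_loader,
                                       optimizer, device)
        # Supervised training on the current labelled set (one epoch here,
        # purely to keep the sketch short).
        loader = DataLoader(TensorDataset(x_lab, y_lab), batch_size=128,
                            shuffle=True)
        backbone.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(clf_head(backbone(x)), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Grow the labelled set with high-confidence pseudo-labels.
        new_x, new_y = pseudo_label(backbone, clf_head, unlabelled_loader, device)
        x_lab = torch.cat([x_lab, new_x])
        y_lab = torch.cat([y_lab, new_y])
    return backbone, clf_head
```

The other two variants from the abstract differ only in where the pretext loop runs: as pretraining only (run it once before `self_train` and drop it from the loop) or in every iteration (drop the `it == 0` guard). A full implementation would also remove pseudo-labelled examples from the unlabelled pool between iterations, which this sketch omits.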