
      A study of self-training variants for semi-supervised image classification

      Sahito, Attaullah
Files
thesis.pdf (5.886 MB)
      Permanent link to Research Commons version
      https://hdl.handle.net/10289/14678
      Abstract
Artificial neural networks achieve state-of-the-art performance when trained on a vast number of labelled examples, but they easily overfit when only a few labelled examples are available. The requirement for labels on all training examples is a strong limitation of standard supervised machine learning. It can be addressed with semi-supervised learning methods, which extend supervised learning by also using unlabelled examples. Self-training is the most basic and generic semi-supervised approach: a model is trained iteratively on the labelled examples together with pseudo-labelled examples obtained from previous iterations. This thesis investigates different variants of self-training that incorporate metric learning, transfer learning, and self-supervised learning.
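The following is a minimal sketch of generic self-training, not the code used in the thesis: a scikit-learn LogisticRegression stands in for the neural networks studied here, and the confidence threshold and iteration count are illustrative assumptions.

```python
# Generic self-training sketch (illustrative, not the thesis implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, n_iters=5, threshold=0.95):
    X, y = X_lab.copy(), y_lab.copy()
    remaining = X_unlab.copy()
    model = LogisticRegression(max_iter=1000)  # stand-in for a neural network
    for _ in range(n_iters):
        model.fit(X, y)                        # train on labelled + pseudo-labelled data
        if len(remaining) == 0:
            break
        probs = model.predict_proba(remaining)
        confident = probs.max(axis=1) >= threshold   # keep only confident predictions
        if not confident.any():
            break
        pseudo = model.classes_[probs[confident].argmax(axis=1)]
        X = np.vstack([X, remaining[confident]])     # add pseudo-labelled examples
        y = np.concatenate([y, pseudo])
        remaining = remaining[~confident]
    return model
```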

The first part of this thesis investigates how metric learning can be applied to self-training. This is done by using several metric learning losses to train feedforward neural networks. Experimental results show that triplet loss, a metric learning loss, can achieve better results than cross-entropy loss with simple neural networks.
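As an illustration of the metric learning loss mentioned above, the sketch below implements a plain triplet loss with Euclidean distances in NumPy; the margin of 1.0 is an illustrative assumption rather than the setting used in the thesis. PyTorch provides the same idea as torch.nn.TripletMarginLoss.

```python
# Illustrative triplet loss: pull the anchor towards a same-class (positive)
# embedding and push it away from a different-class (negative) embedding
# by at least `margin`.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # anchor-positive distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```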

To improve the performance of self-training, the second part of the thesis investigates applying large neural networks and pretraining on ImageNet at various image sizes with different loss functions. Experimental results show that pretraining always improves the predictive performance of the model, and that pretraining on smaller image sizes with cross-entropy loss provides the highest performance.
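A rough sketch of reusing ImageNet pretraining for a downstream task is shown below, assuming a PyTorch/torchvision workflow; ResNet-18, the 10-class head, and the optimiser settings are illustrative assumptions, and the thesis's comparison across image sizes and loss functions is not reproduced here.

```python
# Transfer learning sketch: start from an ImageNet-pretrained backbone.
import torch
import torchvision

# "weights=..." requires a recent torchvision; older releases use pretrained=True.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new head for the target task
# Fine-tune everything, or freeze the backbone and train only the new head.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```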

In the third part of this thesis, several self-training methods are developed using self-supervised learning. Geometric transformation-based self-supervised learning is applied to the unlabelled examples. The experimental results indicate that applying self-supervised learning in only the first iteration of self-training achieves better performance than using it in every iteration.
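A minimal sketch of a geometric-transformation pretext task (rotation prediction) on unlabelled images follows; the (N, H, W, C) array layout and the four rotation classes are illustrative assumptions rather than the exact setup used in the thesis.

```python
# Rotation-prediction pretext task: labels come from the transformation itself,
# so no human annotation is needed.
import numpy as np

def rotation_pretext(images):
    """Return (rotated images, rotation labels) built from unlabelled images.

    images: array of shape (N, H, W, C); labels 0..3 encode 0/90/180/270 degrees.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k=k, axes=(0, 1)))
            labels.append(k)
    return np.stack(rotated), np.array(labels)
```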
      Date
      2021
      Type
      Thesis
      Degree Name
      Doctor of Philosophy (PhD)
      Supervisors
      Pfahringer, Bernhard
      Frank, Eibe
      Publisher
      The University of Waikato
      Rights
      All items in Research Commons are provided for private study and research purposes and are protected by copyright with all rights reserved unless otherwise indicated.
      Collections
      • Higher Degree Theses [1747]