Research Commons

      Structure from randomness in halfspace learning with the zero-one loss

      Kabán, Ata; Durrant, Robert J.
Files: 11506-Article (PDF)-24906-1-10-20201109.pdf (published version, 450.0 KB)
DOI: 10.1613/jair.1.11506
Link: jair.org
      Citation
      Kabán, A., & Durrant, R. J. (2020). Structure from randomness in halfspace learning with the zero-one loss. Journal of Artificial Intelligence Research, 69, 733–764. https://doi.org/10.1613/jair.1.11506
      Permanent Research Commons link: https://hdl.handle.net/10289/13964
      Abstract
We prove risk bounds for halfspace learning when the data dimensionality is allowed to be larger than the sample size, using a notion of compressibility by random projection. In particular, we give upper bounds for the empirical risk minimizer learned efficiently from randomly projected data, as well as uniform upper bounds in the full high-dimensional space. Our main findings are the following: i) In both settings, the obtained bounds are able to discover and take advantage of benign geometric structure, which turns out to depend on the cosine similarities between the classifier and points of the input space, and they provide a new interpretation of margin distribution type arguments. ii) Furthermore, our bounds allow us to draw new connections between several existing successful classification algorithms, and we also demonstrate that our theory is predictive of empirically observed performance in numerical simulations and experiments. iii) Taken together, these results suggest that the study of compressive learning can improve our understanding of which benign structural traits, if they are possessed by the data generator, make it easier to learn an effective classifier from a sample.
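
The compressive setting described above can be made concrete with a small numerical sketch. The snippet below is illustrative only and is not the paper's construction: it assumes a synthetic data model (labels generated by a hidden halfspace), a Gaussian random projection with entries of variance 1/k, and a perceptron as a stand-in learner in the projected space (the paper's bounds concern the zero-one-loss empirical risk minimizer). All variable names and parameter choices are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: sample size n smaller than dimensionality d,
# matching the regime the abstract describes; k is the projected dimension.
n, d, k = 200, 1000, 20

# Synthetic data labelled by a hidden halfspace (an assumed data model).
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

# Gaussian random projection R: d -> k with N(0, 1/k) entries, one common
# choice of the kind of random projection the abstract refers to.
R = rng.standard_normal((d, k)) / np.sqrt(k)
X_proj = X @ R

def perceptron(X, y, epochs=100):
    # Mistake-driven perceptron updates; a simple stand-in for an
    # efficient learner operating in the k-dimensional projected space.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return w

def zero_one_risk(w, X, y):
    # Empirical zero-one loss: fraction of misclassified points.
    return np.mean(np.sign(X @ w) != y)

w_hat = perceptron(X_proj, y)
print(f"Empirical zero-one risk in the projected space: "
      f"{zero_one_risk(w_hat, X_proj, y):.3f}")

In this toy setting the labels come from a single halfspace, so the cosine similarities between the true classifier and the data points tend to be favourable, and a low-dimensional projection usually preserves most of the separation; this is an instance of the benign geometric structure that the paper's bounds are designed to detect.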
Date: 2020
Type: Journal Article
Publisher: AI Access Foundation
Rights: © 2020 AI Access Foundation.
      Collections
      • Computing and Mathematical Sciences Papers [1454]
