Research Commons

      Conditional density estimation with class probability estimators

      Frank, Eibe; Bouckaert, Remco R.
      Files
      frank_and_bouckaert_ACML09.pdf (295.8 KB)
      DOI
       10.1007/978-3-642-05224-8_7
      Citation
      Frank, E. & Bouckaert, R. R. (2009). Conditional density estimation with class probability estimators. In Z.-H. Zhou & T. Washio (Eds.), Proceedings of the First Asian Conference on Machine Learning, ACML 2009, Nanjing, China, November 2-4, 2009 (pp. 65-81). Berlin: Springer.
      Permanent Research Commons link: https://hdl.handle.net/10289/3701
      Abstract
      Many regression schemes deliver a point estimate only, but often it is useful or even essential to quantify the uncertainty inherent in a prediction. If a conditional density estimate is available, then prediction intervals can be derived from it. In this paper we compare three techniques for computing conditional density estimates using a class probability estimator, where this estimator is applied to the discretized target variable and used to derive instance weights for an underlying univariate density estimator; this yields a conditional density estimate. The three density estimators we compare are: a histogram estimator that has been used previously in this context, a normal density estimator, and a kernel estimator. In our experiments, the latter two deliver better performance, both in terms of cross-validated log-likelihood and in terms of quality of the resulting prediction intervals. The empirical coverage of the intervals is close to the desired confidence level in most cases. We also include results for point estimation, as well as a comparison to Gaussian process regression and nonparametric quantile estimation.
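      The following is a minimal sketch (not the authors' code) of the recipe described in the abstract: discretize the target variable, train a class probability estimator on the resulting classes, and turn the predicted class probabilities for a query instance into instance weights for a univariate density estimator. It assumes equal-frequency discretization, a scikit-learn random forest as the class probability estimator, a Silverman-style bandwidth, and per-bin weight sharing; all of these choices are illustrative assumptions rather than the paper's exact setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_conditional_density(X, y, n_bins=10, bandwidth=None, random_state=0):
    """Discretize y, fit a class probability estimator, and return a function
    mapping a query instance x and a grid of target values to p(y | x).
    This is a sketch under the assumptions stated above, not the paper's code."""
    y = np.asarray(y, dtype=float)

    # Equal-frequency discretization of the target into n_bins classes.
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_bins - 1)

    # Class probability estimator applied to the discretized target.
    clf = RandomForestClassifier(n_estimators=200, random_state=random_state)
    clf.fit(X, bins)

    # Silverman-style default bandwidth for the Gaussian kernel
    # (an assumption, not necessarily the paper's choice).
    if bandwidth is None:
        bandwidth = 1.06 * np.std(y) * len(y) ** (-1 / 5)

    classes = clf.classes_
    bin_counts = np.array([(bins == c).sum() for c in classes])

    def density(x, y_grid):
        y_grid = np.asarray(y_grid, dtype=float)
        # Predicted class probabilities for the query instance, one per bin.
        probs = clf.predict_proba(np.asarray(x).reshape(1, -1))[0]
        # Instance weights: each training point receives the predicted
        # probability of its bin, shared equally among the points in that
        # bin, so the weights sum to one.
        weights = np.zeros(len(y))
        for c, p, n in zip(classes, probs, bin_counts):
            if n > 0:
                weights[bins == c] = p / n
        # Weighted Gaussian kernel density estimate evaluated on y_grid.
        diffs = (y_grid[:, None] - y[None, :]) / bandwidth
        kernel = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2 * np.pi) * bandwidth)
        return kernel @ weights

    return density

      A prediction interval can then be obtained by numerically integrating the returned density over a grid of target values to form a cumulative distribution function and reading off the lower and upper quantiles for the desired confidence level.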
      Date
      2009
      Type
      Conference Contribution
      Publisher
      Springer
      Rights
      This is the author’s accepted version of an article published in the Proceedings of the First Asian Conference on Machine Learning, ACML 2009, Nanjing, China, November 2-4, 2009. © 2009 Springer.
      Collections
      • Computing and Mathematical Sciences Papers [1455]

      Usage

      Downloads, last 12 months: 71

      The University of Waikato - Te Whare Wānanga o Waikato