
Stochastic gradient trees

Abstract
We present an algorithm for learning decision trees using stochastic gradient information as the source of supervision. In contrast to previous approaches to gradient-based tree learning, our method operates in the incremental learning setting rather than the batch learning setting, and does not make use of soft splits or require the construction of a new tree for every update. We demonstrate how one can apply these decision trees to different problems by changing only the loss function, using classification, regression, and multi-instance learning as example applications. In the experimental evaluation, our method performs similarly to standard incremental classification trees, outperforms state-of-the-art incremental regression trees, and achieves performance comparable to batch multi-instance learning methods.
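The sketch below is a minimal, self-contained illustration (in Python, with all names hypothetical; it is not the authors' reference implementation) of the loss-function abstraction described in the abstract: the learner only ever consumes per-example gradient and Hessian values supplied by a pluggable loss object, so switching between regression and classification requires changing nothing but the loss. The tree itself is reduced here to a single leaf updated with a Newton-style step from the accumulated gradient statistics; the actual algorithm grows a single tree incrementally and decides splits from such statistics, which this toy does not attempt to reproduce.

```python
import math


class SquaredLoss:
    """Regression loss: l(y, f) = 0.5 * (f - y)^2."""

    def gradient(self, y, pred):
        return pred - y

    def hessian(self, y, pred):
        return 1.0


class LogisticLoss:
    """Binary classification loss for labels in {0, 1}; pred is a raw score."""

    def gradient(self, y, pred):
        p = 1.0 / (1.0 + math.exp(-pred))
        return p - y

    def hessian(self, y, pred):
        p = 1.0 / (1.0 + math.exp(-pred))
        return p * (1.0 - p)


class SingleLeafModel:
    """Toy stand-in for an incremental tree: one leaf whose value is set by a
    Newton step computed from accumulated gradient/Hessian sums."""

    def __init__(self, loss):
        self.loss = loss
        self.grad_sum = 0.0
        self.hess_sum = 1e-8  # small constant guards against division by zero
        self.value = 0.0

    def predict(self, x):
        # A real tree would route x to a leaf; this toy has a single leaf.
        return self.value

    def learn_one(self, x, y):
        # Derivatives are taken at a fixed base score of 0, so the Newton step
        # -sum(g)/sum(h) exactly minimises the second-order expansion there.
        self.grad_sum += self.loss.gradient(y, 0.0)
        self.hess_sum += self.loss.hessian(y, 0.0)
        self.value = -self.grad_sum / self.hess_sum


if __name__ == "__main__":
    # Same learner, two tasks: only the loss object changes.
    reg = SingleLeafModel(SquaredLoss())
    for target in [2.0, 2.5, 1.5, 2.0]:
        reg.learn_one(x=None, y=target)
    print("regression leaf value:", round(reg.value, 3))    # the sample mean, 2.0

    clf = SingleLeafModel(LogisticLoss())
    for label in [1, 1, 0, 1]:
        clf.learn_one(x=None, y=label)
    print("classification raw score:", round(clf.value, 3))  # positive => class 1
```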
Type
Conference Contribution
Citation
Gouk, H., Pfahringer, B., & Frank, E. (2019). Stochastic gradient trees. In W. S. Lee & T. Suzuki (Eds.), Proceedings of the 11th Asian Conference on Machine Learning (ACML 2019) (Proceedings of Machine Learning Research, Vol. 101, pp. 1094–1109). Nagoya, Japan: PMLR.
Date
2019
Publisher
PMLR
Rights
© 2019 H. Gouk, B. Pfahringer & E. Frank.