Stacked generalization: when does it work?
Citation
Ting, K.M. & Witten, I.H. (1997). Stacked generalization: when does it work? (Working paper 97/03). Hamilton, New Zealand: University of Waikato, Department of Computer Science.
Permanent Research Commons link: https://hdl.handle.net/10289/1066
Abstract
Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we resolve two crucial issues which have been considered to be a ‘black art’ in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input.
We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms, and also for combining models of the same type derived from a single learning algorithm in a multiple-data-batches scenario. We also compare the performance of stacked generalization with published results of arcing and bagging.
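To make the setup concrete, below is a minimal sketch of stacked generalization in Python using scikit-learn (not the software used in the paper; the choice of level-0 learners, the use of class probabilities as level-1 attributes, and logistic regression as the higher-level generalizer are illustrative assumptions, not the paper's exact configuration). Out-of-fold predictions from three level-0 models form the attributes for the level-1 model.

```python
# Sketch of stacked generalization (assumptions noted above).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three different types of level-0 learning algorithms (illustrative choices).
level0 = [DecisionTreeClassifier(random_state=0),
          GaussianNB(),
          KNeighborsClassifier()]

# Level-1 attributes: out-of-fold class probabilities, so the higher-level
# model never sees predictions made on the points it is trained on.
meta_train = np.hstack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")
    for m in level0
])

# Fit the higher-level (level-1) generalizer on these attributes.
meta_model = LogisticRegression(max_iter=1000).fit(meta_train, y_train)

# At prediction time, refit each level-0 model on the full training set and
# stack their probability outputs as input to the level-1 model.
meta_test = np.hstack([
    m.fit(X_train, y_train).predict_proba(X_test) for m in level0
])
print("stacked accuracy:", meta_model.score(meta_test, y_test))
```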
Date
1997-01
Type
Working Paper
Report No.
97/03
Publisher
Department of Computer Science, University of Waikato
Collections
- 1997 Working Papers [31]