Show simple item record  

dc.contributor.author	Ting, Kai Ming
dc.contributor.author	Witten, Ian H.
dc.date.accessioned	2008-10-20T03:09:07Z
dc.date.available	2008-10-20T03:09:07Z
dc.date.issued	1997-01
dc.identifier.citation	Ting, K.M. & Witten, I.H. (1997). Stacked generalization: when does it work? (Working paper 97/03). Hamilton, New Zealand: University of Waikato, Department of Computer Science.	en_US
dc.identifier.issn	1170-487X
dc.identifier.uri	https://hdl.handle.net/10289/1066
dc.description.abstract	Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we resolve two crucial issues which have been considered to be a ‘black art’ in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms, and also for combining models of the same type derived from a single learning algorithm in a multiple-data-batches scenario. We also compare the performance of stacked generalization with published results of arcing and bagging.	en_US
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.publisher	Department of Computer Science, University of Waikato	en_NZ
dc.relation.ispartofseries	Computer Science Working Papers
dc.subject	stacking	en_US
dc.subject	cross-validation	en_US
dc.title	Stacked generalization: when does it work?	en_US
dc.type	Working Paper	en_US
uow.relation.series	97/03
dc.relation.isPartOf	Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, IJCAI 97, Nagoya, Japan, August 23-29, 1997, 2 Volumes	en_NZ
pubs.begin-page	866	en_NZ
pubs.elements-id	54838
pubs.end-page	871	en_NZ
pubs.place-of-publication	Hamilton	en_NZ
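The abstract describes stacked generalization: level-0 models are trained on the data, and a level-1 generalizer is trained on their cross-validated predictions. Below is a minimal illustrative sketch of that general two-level scheme; the two toy level-0 learners and the least-squares level-1 model are stand-ins chosen for self-containment, not the specific generalizer and attributes the paper investigates.

```python
# Illustrative sketch of stacked generalization (Wolpert, 1992).
# The level-0 learners and level-1 generalizer here are simple stand-ins;
# the paper's specific recommendations are not reproduced.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: label = 1 when x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def make_nearest_centroid(Xtr, ytr):
    """Level-0 learner A: nearest-centroid classifier, score in [0, 1]."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    def predict_proba(Xte):
        d0 = np.linalg.norm(Xte - c0, axis=1)
        d1 = np.linalg.norm(Xte - c1, axis=1)
        return d0 / (d0 + d1 + 1e-12)  # closer to class-1 centroid -> near 1
    return predict_proba

def make_perceptron(Xtr, ytr, epochs=20):
    """Level-0 learner B: perceptron with a sigmoid-squashed score."""
    Xb = np.hstack([Xtr, np.ones((len(Xtr), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, ytr):
            pred = 1 if xi @ w > 0 else 0
            w += 0.1 * (yi - pred) * xi
    def predict_proba(Xte):
        Xte_b = np.hstack([Xte, np.ones((len(Xte), 1))])
        return 1.0 / (1.0 + np.exp(-(Xte_b @ w)))
    return predict_proba

learners = [make_nearest_centroid, make_perceptron]

# Level-1 training data: cross-validated level-0 predictions, so the level-1
# model never sees a prediction made on data its level-0 model was fit on.
folds = np.array_split(np.arange(len(X)), 5)
Z = np.zeros((len(X), len(learners)))
for test_idx in folds:
    train_mask = np.ones(len(X), dtype=bool)
    train_mask[test_idx] = False
    for j, make in enumerate(learners):
        model = make(X[train_mask], y[train_mask])
        Z[test_idx, j] = model(X[test_idx])

# Level-1 generalizer: least-squares linear model on level-0 outputs.
Zb = np.hstack([Z, np.ones((len(Z), 1))])
w1, *_ = np.linalg.lstsq(Zb, y, rcond=None)

# Final predictor: refit level-0 models on all data, combine via the
# level-1 weights.
final_models = [make(X, y) for make in learners]
def predict(Xte):
    Zte = np.column_stack([m(Xte) for m in final_models])
    Zte = np.hstack([Zte, np.ones((len(Zte), 1))])
    return (Zte @ w1 > 0.5).astype(int)

accuracy = (predict(X) == y).mean()
```

The key step is building the level-1 training matrix `Z` from out-of-fold predictions; training the level-1 model on resubstitution predictions would let it overfit to the level-0 models' training-set optimism.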

