Show simple item record  

dc.contributor.author: Ting, Kai Ming
dc.contributor.author: Low, Boon Toh
dc.date.accessioned: 2008-10-28T23:01:52Z
dc.date.available: 2008-10-28T23:01:52Z
dc.date.issued: 1996-10
dc.identifier.citation: Ting, K. M. & Low, B. T. (1996). Theory combination: an alternative to data combination. (Working paper 96/19). Hamilton, New Zealand: University of Waikato, Department of Computer Science.
dc.identifier.issn: 1170-487X
dc.identifier.uri: https://hdl.handle.net/10289/1172
dc.description.abstract: The approach of combining theories learned from multiple batches of data provides an alternative to the common practice of learning one theory from all the available data (i.e., the data combination approach). This paper empirically examines the base-line behaviour of the theory combination approach in classification tasks. We find that theory combination can lead to better performance even if the disjoint batches of data are drawn randomly from a larger sample, and relate the relative performance of the two approaches to the learning curve of the classifier used.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.relation.ispartofseries: Computer Science Working Papers
dc.subject: computer science
dc.subject: theory combination
dc.subject: data combination
dc.subject: empirical evaluation
dc.subject: learning curve
dc.subject: near-asymptotic performance
dc.title: Theory combination: an alternative to data combination
dc.type: Working Paper
uow.relation.series: 96/19
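The abstract contrasts two approaches: learning one theory from all available data (data combination) versus learning a theory per disjoint batch and combining their predictions (theory combination). A minimal toy sketch of that contrast, not the paper's actual experiments — the nearest-centroid classifier, the synthetic Gaussian data, and majority voting as the combination rule are all illustrative assumptions here:

```python
import random

# Toy nearest-centroid classifier (illustrative only, not the
# classifier used in the paper).
def centroid_model(data):
    """Return the per-class mean of 1-D feature values."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign x to the class whose centroid is nearest.
    return min(model, key=lambda y: abs(x - model[y]))

def majority_vote(models, x):
    # Theory combination rule assumed here: simple majority vote.
    votes = [predict(m, x) for m in models]
    return max(set(votes), key=votes.count)

random.seed(0)
# Synthetic sample: two overlapping 1-D Gaussian classes.
data = [(random.gauss(0, 1), 0) for _ in range(300)] + \
       [(random.gauss(2, 1), 1) for _ in range(300)]
random.shuffle(data)
train, test = data[:400], data[400:]

# Data combination: one theory learned from all the training data.
single = centroid_model(train)

# Theory combination: one theory per disjoint random batch,
# predictions combined by vote.
batches = [train[i::5] for i in range(5)]
committee = [centroid_model(b) for b in batches]

acc_single = sum(predict(single, x) == y for x, y in test) / len(test)
acc_vote = sum(majority_vote(committee, x) == y for x, y in test) / len(test)
print(f"data combination: {acc_single:.2f}, theory combination: {acc_vote:.2f}")
```

Both accuracies should land well above chance on this toy problem; which approach wins depends on where the classifier sits on its learning curve, which is the relationship the paper examines.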

