Beyond trees: Adopting MITI to learn rules and ensemble classifiers for multi-instance data
Abstract
MITI is a simple and elegant decision tree learner designed for multi-instance classification problems, where examples for learning consist of bags of instances. MITI grows a tree in a best-first manner by maintaining a priority queue containing the unexpanded nodes in the fringe of the tree. When the head node contains instances from positive examples only, it is made into a leaf, and any bag of data that is associated with this leaf is removed. In this paper we first revisit the basic algorithm and consider the effect of parameter settings on classification accuracy, using several benchmark datasets. We show that the chosen splitting criterion in particular can have a significant effect on accuracy. We identify a potential weakness of the algorithm, namely that subtrees can contain structure that has been created using data that is subsequently removed, and show that a simple modification turns the algorithm into a rule learner that avoids this problem. This rule learner produces more compact classifiers with comparable accuracy on the benchmark datasets we consider. Finally, we present randomized algorithm variants that enable us to generate ensemble classifiers. We show that these can yield substantially improved classification accuracy.
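The abstract above outlines the core of MITI's best-first tree growth: a priority queue over unexpanded fringe nodes, leaves created when a node holds instances from positive bags only, and removal of the bags covered by such a leaf. The sketch below is purely illustrative and is not taken from the paper or its actual implementation; it assumes numeric attributes, binary bag labels, and a toy priority and splitting criterion based on the proportion of instances from positive bags. Names such as grow_miti_tree, purity, and the midpoint-style threshold are hypothetical choices made for this example.

```python
# Minimal, illustrative sketch of MITI-style best-first tree growth.
# Assumptions (not from the paper): numeric attributes, binary bag labels,
# priority and split quality both measured by the fraction of instances
# that come from positive bags.

import heapq
import itertools


def grow_miti_tree(bags, max_depth=10):
    """bags: dict bag_id -> (label, list of feature vectors); label is 1 or 0."""
    # Flatten bags into instances tagged with their bag id and bag label.
    instances = [(bid, x, label)
                 for bid, (label, xs) in bags.items()
                 for x in xs]
    active = set(bags)           # bags not yet covered by a positive leaf
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    leaves = []

    def purity(insts):
        pos = sum(1 for _, _, y in insts if y == 1)
        return pos / len(insts) if insts else 0.0

    # Best-first: expand the fringe node with the highest proportion of
    # instances from positive bags first.
    heap = [(-purity(instances), next(counter), instances, 0)]
    while heap:
        neg_purity, _, insts, depth = heapq.heappop(heap)
        insts = [(b, x, y) for b, x, y in insts if b in active]  # drop removed bags
        if not insts:
            continue
        if -neg_purity == 1.0 or depth >= max_depth:
            # Node with positive instances only: make it a leaf and remove its
            # bags, so their remaining instances no longer influence later splits.
            leaves.append(insts)
            if -neg_purity == 1.0:
                for b, _, _ in insts:
                    active.discard(b)
            continue
        # Split on the attribute/threshold giving the purest child (toy criterion).
        best = None
        n_attr = len(insts[0][1])
        for a in range(n_attr):
            thr = sum(x[a] for _, x, _ in insts) / len(insts)  # midpoint-style threshold
            left = [i for i in insts if i[1][a] <= thr]
            right = [i for i in insts if i[1][a] > thr]
            if left and right:
                score = max(purity(left), purity(right))
                if best is None or score > best[0]:
                    best = (score, left, right)
        if best is None:
            leaves.append(insts)  # no useful split available
            continue
        _, left, right = best
        heapq.heappush(heap, (-purity(left), next(counter), left, depth + 1))
        heapq.heappush(heap, (-purity(right), next(counter), right, depth + 1))
    return leaves


if __name__ == "__main__":
    toy_bags = {
        "b1": (1, [[0.9, 0.1], [0.8, 0.3]]),
        "b2": (0, [[0.1, 0.9], [0.2, 0.7]]),
        "b3": (1, [[0.7, 0.2], [0.1, 0.8]]),
    }
    print(len(grow_miti_tree(toy_bags)), "leaves grown")
```

The two points the sketch tries to mirror are the priority queue over fringe nodes and the deactivation of bags once a purely positive leaf is created; the latter is the mechanism behind the weakness discussed in the abstract, since subtrees built earlier may rest on data that is later removed.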
Type
Conference Contribution
Citation
Bjerring, L., & Frank, E. (2011). Beyond trees: Adopting MITI to learn rules and ensemble classifiers for multi-instance data. In D. Wang & M. Reynolds (Eds.), AI 2011, LNAI 7106 (pp. 41-50). Berlin, Heidelberg: Springer-Verlag.
Date
2011
Publisher
Springer