
Evaluating the replicability of significance tests for comparing learning algorithms

Abstract
Empirical research in learning algorithms for classification tasks generally requires the use of significance tests. The quality of a test is typically judged on Type I error (how often the test indicates a difference when it should not) and Type II error (how often it indicates no difference when it should indicate one). In this paper we argue that the replicability of a test is also of importance. We say that a test has low replicability if its outcome strongly depends on the particular random partitioning of the data that is used to perform it. We present empirical measures of replicability and use them to compare the performance of several popular tests in a realistic setting involving standard learning algorithms and benchmark datasets. Based on our results we give recommendations on which test to use.
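The following is a minimal, illustrative sketch of the replicability question raised in the abstract, not the paper's own measure or experimental setup: it repeats a paired t-test on 10-fold cross-validated accuracies under different random partitionings and reports how often the test reaches the same decision. It assumes scikit-learn and SciPy are available; the dataset, the two learners, the number of repetitions, and the 0.05 threshold are arbitrary choices for illustration.

```python
# Illustrative sketch (not the paper's exact measure): check how often a
# paired t-test comparing two learners reaches the same decision across
# different random partitionings of the data.
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
decisions = []
for seed in range(10):  # 10 different random partitionings of the data
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    acc_a = cross_val_score(GaussianNB(), X, y, cv=cv)
    acc_b = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    # Paired t-test over the 10 per-fold accuracies (known to be liberal;
    # used here only to illustrate how the decision can flip between runs).
    _, p = stats.ttest_rel(acc_a, acc_b)
    decisions.append(p < 0.05)

# A crude replicability score: fraction of runs agreeing with the majority decision.
majority = max(set(decisions), key=decisions.count)
print("significant in", sum(decisions), "of", len(decisions), "runs;",
      "replicability =", decisions.count(majority) / len(decisions))
```

A score near 1.0 would mean the test's outcome is largely insensitive to the random partitioning; values closer to 0.5 would indicate the low replicability the paper warns about.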
Type
Conference Contribution
Citation
Bouckaert, R. R., & Frank, E. (2004). Evaluating the replicability of significance tests for comparing learning algorithms. In H. Dai, R. Srikant, & C. Zhang (Eds.), Proceedings of the 8th Pacific-Asia Conference, PAKDD 2004, Sydney, Australia, May 26-28, 2004 (pp. 3-12). Berlin: Springer.
Date
2004
Publisher
Springer