Estimating replicability of classifier learning experiments

Abstract
Replicability of machine learning experiments measures how likely it is that the outcome of one experiment is repeated when the experiment is performed with a different randomization of the data. In this paper, we present an efficient estimator of the replicability of an experiment. More precisely, the estimator is unbiased and has the lowest variance among all estimators formed by a linear combination of the outcomes of experiments on a given data set. We gathered empirical data comparing experiments that use different sampling schemes and hypothesis tests. Both factors are shown to affect the replicability of experiments. The data suggests that sign tests should not be used because of their low replicability. Rank sum tests perform better, but the combination of a sorted runs sampling scheme with a t-test gives the most desirable performance as judged by Type I error, Type II error, and replicability.
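
The abstract does not spell out the estimator itself, but its description (an unbiased estimator built as a linear combination of experiment outcomes) is consistent with a pairwise-agreement estimate of replicability: the fraction of pairs of runs that produce the same hypothesis-test outcome. The Python sketch below is an illustrative reconstruction under that assumption; the function name replicability_estimate and the binary outcome encoding are hypothetical, not taken from the paper.

    import itertools

    def replicability_estimate(outcomes):
        # Pairwise-agreement estimate of replicability: the fraction
        # of pairs of runs whose hypothesis-test outcomes agree.
        # `outcomes` holds one binary result per randomization of
        # the data, e.g. 1 = null hypothesis rejected, 0 = not.
        pairs = list(itertools.combinations(outcomes, 2))
        agreements = sum(1 for a, b in pairs if a == b)
        return agreements / len(pairs)

    # Example: 10 randomizations with 8 rejections and 2 acceptances
    # give an estimated replicability of 29/45, about 0.64.
    print(replicability_estimate([1] * 8 + [0] * 2))

Because this estimate averages agreement over all pairs of runs rather than a single split, it is unbiased for the probability that two independent randomizations of the data yield the same outcome.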
Type
Conference Contribution
Citation
Bouckaert, R. R. (2004). Estimating replicability of classifier learning experiments. In Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004.
Date
2004
Publisher
ACM