Permanent link to Research Commons version: https://hdl.handle.net/10289/16377
Inducing classifiers that make accurate predictions on future data is a driving force for research in inductive learning. However, it is also important that users can gain information from the models produced. Unfortunately, some of the most powerful inductive learning algorithms generate “black boxes”: models whose representation makes it virtually impossible to gain any insight into what has been learned. This paper presents a technique that can help the user understand why a classifier makes the predictions it does by providing a two-dimensional visualization of its class probability estimates. The technique requires the classifier to generate class probabilities, but most practical algorithms are able to do so (or can be modified to this end).
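To illustrate the idea of visualizing class probability estimates over two dimensions, the sketch below samples a simple probability estimator on a grid of two attribute values. This is a minimal illustration, not the paper's actual method: the k-nearest-neighbour estimator, the toy data, and the grid resolution are all assumptions made here for demonstration.

```python
# Minimal sketch (not the paper's method): sample a classifier's class
# probability estimates over a 2-D grid of two attribute values, producing
# values that could be rendered as pixel intensities in a visualization.
# The kNN estimator and toy data below are illustrative assumptions.

from collections import Counter

def knn_proba(train, point, k=3):
    """Estimate class probabilities at `point` from the k nearest
    training instances (squared Euclidean distance)."""
    neighbours = sorted(
        train,
        key=lambda t: (t[0] - point[0]) ** 2 + (t[1] - point[1]) ** 2,
    )[:k]
    counts = Counter(label for _, _, label in neighbours)
    classes = {label for _, _, label in train}
    return {c: counts.get(c, 0) / k for c in classes}

# Toy two-class training data: (x, y, class)
train = [(0.1, 0.2, 'a'), (0.2, 0.1, 'a'), (0.8, 0.9, 'b'), (0.9, 0.8, 'b')]

# Sample the estimator on a 5x5 grid; each cell holds P(class 'b' | x, y).
grid = [[knn_proba(train, (x / 4, y / 4))['b'] for x in range(5)]
        for y in range(5)]

for row in grid:
    print(' '.join(f'{p:.2f}' for p in row))
```

Rendering each grid cell as a colour or grey level turns the probability surface into an image, which is the kind of two-dimensional view of a classifier's estimates the abstract describes.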
This is an author’s accepted version of a conference paper published in the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases. © 2003 Springer.