teex: A toolbox for the evaluation of explanations
Abstract
We present teex, a Python toolbox for the evaluation of explanations. teex focuses on evaluating local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps, and provides a collection of evaluation metrics for each type. Real-world datasets and generators of synthetic data with ground-truth explanations are also included in the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.
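To illustrate the kind of comparison the abstract describes, the sketch below scores a model's feature-importance vector against a ground-truth vector with cosine similarity, one common similarity metric for importance vectors. This is a minimal standalone example using only the standard library; the function name and the example vectors are hypothetical and do not reflect teex's actual API (see teex.readthedocs.io for that).

```python
import math

def cosine_similarity(pred, truth):
    """Cosine similarity between a predicted and a ground-truth
    feature-importance vector; 1.0 means identical direction."""
    dot = sum(p * t for p, t in zip(pred, truth))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_t = math.sqrt(sum(t * t for t in truth))
    if norm_p == 0 or norm_t == 0:
        # Define similarity as 0 when either vector is all zeros
        return 0.0
    return dot / (norm_p * norm_t)

# Hypothetical importance scores for four features
predicted = [0.7, 0.1, 0.0, 0.2]
ground_truth = [0.6, 0.2, 0.0, 0.2]
score = cosine_similarity(predicted, ground_truth)
print(score)
```

A toolbox such as teex packages metrics like this (and others suited to saliency maps, rules, and word importances) behind a uniform interface, together with datasets that carry the ground-truth explanations needed to apply them.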
Type
Journal Article
Date
2023-10-28
Publisher
Elsevier BV
Rights
© 2023 The Author(s). This is an open-access article under the CC BY license.