teex: A toolbox for the evaluation of explanations
Files
Published version, 1.778 MB
Permanent link to Research Commons version
https://hdl.handle.net/10289/16042
Abstract
We present teex, a Python toolbox for the evaluation of explanations. teex focuses on the evaluation of local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps. A collection of evaluation metrics is provided for each type. Real-world datasets and generators of synthetic data with ground-truth explanations are also contained within the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.
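As an illustration of the kind of metric the abstract describes, the sketch below compares a real-valued saliency map against a binary ground-truth mask using a pixel-wise F1 score. This is a standalone NumPy sketch of the general technique, not teex's actual API; the function names and the 0.5 binarization threshold are illustrative assumptions.

```python
import numpy as np

def binarize(saliency, threshold=0.5):
    """Binarize a real-valued saliency map at the given threshold.

    Note: the threshold is an illustrative choice, not a teex default."""
    return (np.asarray(saliency, dtype=float) >= threshold).astype(int)

def f1_score_maps(predicted, ground_truth, threshold=0.5):
    """Pixel-wise F1 between a predicted saliency map and a binary
    ground-truth explanation mask (1 = relevant pixel)."""
    pred = binarize(predicted, threshold)
    gt = np.asarray(ground_truth, dtype=int)
    tp = int(np.sum((pred == 1) & (gt == 1)))  # relevant and predicted relevant
    fp = int(np.sum((pred == 1) & (gt == 0)))  # predicted relevant, actually not
    fn = int(np.sum((pred == 0) & (gt == 1)))  # relevant but missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: a 2x2 ground-truth mask vs. a predicted saliency map
# that finds one of the two relevant pixels (1 TP, 1 FN, 0 FP).
gt = np.array([[1, 0], [1, 0]])
pred = np.array([[0.9, 0.2], [0.4, 0.1]])
print(round(f1_score_maps(pred, gt), 4))  # → 0.6667
```

In practice a library like teex would bundle several such metrics per explanation type and handle batching and validation; the point here is only the core comparison of an explanation against its ground truth.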
Date
2023-10-28
Type
Publisher
Elsevier BV
Rights
© 2023 The Author(s). This is an open-access article under the CC BY license.