Show simple item record  

dc.contributor.author: Antoñanzas, Jesús M.
dc.contributor.author: Jia, Yunzhe
dc.contributor.author: Frank, Eibe
dc.contributor.author: Bifet, Albert
dc.contributor.author: Pfahringer, Bernhard
dc.date.accessioned: 2023-09-20T23:53:02Z
dc.date.available: 2023-09-20T23:53:02Z
dc.date.issued: 2023-10-28
dc.identifier.issn: 0925-2312
dc.identifier.uri: https://hdl.handle.net/10289/16042
dc.description.abstract: We present teex, a Python toolbox for the evaluation of explanations. teex focuses on the evaluation of local explanations of the predictions of machine learning models by comparing them to ground-truth explanations. It supports several types of explanations: feature importance vectors, saliency maps, decision rules, and word importance maps. A collection of evaluation metrics is provided for each type. Real-world datasets and generators of synthetic data with ground-truth explanations are also contained within the library. teex contributes to research on explainable AI by providing tested, streamlined, user-friendly tools to compute quality metrics for the evaluation of explanation methods. Source code and a basic overview can be found at github.com/chus-chus/teex, and tutorials and full API documentation are at teex.readthedocs.io.
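The abstract describes comparing a model explanation against a ground-truth explanation. As a minimal sketch of the kind of comparison such metrics perform on feature importance vectors (this is an illustrative cosine-similarity function, not teex's actual API; see teex.readthedocs.io for the real interface):

```python
# Illustrative sketch only: scoring a predicted feature importance
# vector against a ground-truth one, as explanation-quality metrics do.
# The function below is a hypothetical example, not part of teex.
import math

def cosine_similarity(ground_truth, predicted):
    """Return 1.0 when the two importance vectors point the same way,
    lower values as they diverge."""
    dot = sum(g * p for g, p in zip(ground_truth, predicted))
    norm_g = math.sqrt(sum(g * g for g in ground_truth))
    norm_p = math.sqrt(sum(p * p for p in predicted))
    return dot / (norm_g * norm_p)

# Ground-truth explanation vs. an explainer's output for one prediction
truth = [0.7, 0.2, 0.1, 0.0]
explanation = [0.6, 0.3, 0.1, 0.0]
print(round(cosine_similarity(truth, explanation), 3))  # → 0.983
```

A score near 1.0 indicates the explainer attributes importance to roughly the same features, in the same proportions, as the ground truth; teex provides tested metrics of this kind for each supported explanation type.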
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Elsevier BV
dc.relation.uri:
dc.rights: © 2023 The Author(s). This is an open-access article under the CC BY license.
dc.subject: computer science
dc.subject: explainable AI
dc.subject: explanation evaluation
dc.subject: Python
dc.title: teex: A toolbox for the evaluation of explanations
dc.type: Journal Article
dc.identifier.doi: 10.1016/j.neucom.2023.126642
dc.relation.isPartOf: Neurocomputing
pubs.begin-page: 126642
pubs.elements-id: 328003
pubs.end-page: 126642
pubs.publication-status: Accepted
pubs.volume: 555
uow.identifier.article-no: 126642

