Studying and exploiting the relationship between model accuracy and explanation quality

dc.contributor.author: Jia, Yunzhe
dc.contributor.author: Frank, Eibe
dc.contributor.author: Pfahringer, Bernhard
dc.contributor.author: Bifet, Albert
dc.contributor.author: Lim, Nick Jin Sean
dc.contributor.editor: Oliver, N.
dc.contributor.editor: Pérez-Cruz, F.
dc.contributor.editor: Kramer, Stefan
dc.contributor.editor: Read, Jesse
dc.contributor.editor: Lozano, J.A.
dc.coverage.spatial: Bilbao, Spain
dc.date.accessioned: 2021-09-16T00:41:02Z
dc.date.available: 2021-09-16T00:41:02Z
dc.date.issued: 2021
dc.description.abstract: Many explanation methods have been proposed to reveal insights into the internal workings of black-box models such as deep neural networks. Although these methods can generate explanations for individual predictions, little research has investigated the relationship between model accuracy and explanation quality, or how explanations can be used to improve model performance. In this paper, we evaluate explanations using a metric based on the area under the ROC curve (AUC), treating expert-provided image annotations as ground-truth explanations, and quantify the correlation between model accuracy and explanation quality when performing image classification with deep neural networks. The experiments are conducted on two image datasets: the CUB-200-2011 dataset and a Kahikatea dataset that we publish with this paper. For each dataset, we compare and evaluate seven different neural networks with four different explainers in terms of both accuracy and explanation quality. We also investigate how explanation quality evolves as loss metrics change through the training iterations of each model. The experiments suggest a strong correlation between model accuracy and explanation quality. Based on this observation, we demonstrate how explanations can be exploited to benefit the model selection process, even if simply maximising accuracy on test data is the primary goal.
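
The AUC-based evaluation described in the abstract can be sketched in a few lines: each pixel inside the expert annotation is treated as a positive label and the explainer's per-pixel saliency value as its score, so the AUC measures how often an annotated pixel outranks a background pixel. The sketch below is a minimal illustration under those assumptions; the function explanation_auc and the synthetic example are illustrative, not the paper's code.

import numpy as np
from sklearn.metrics import roc_auc_score

def explanation_auc(saliency_map, annotation_mask):
    # Flatten both maps so each pixel becomes one (label, score) pair.
    y_true = annotation_mask.ravel().astype(int)    # 1 = expert-annotated pixel
    y_score = saliency_map.ravel().astype(float)    # explainer importance score
    # AUC = probability that a random annotated pixel receives a higher
    # saliency score than a random background pixel.
    return roc_auc_score(y_true, y_score)

# Synthetic check: a saliency map concentrated on the annotated region
# scores near 1.0, while random saliency scores near 0.5.
rng = np.random.default_rng(0)
mask = np.zeros((32, 32), dtype=int)
mask[8:24, 8:24] = 1
focused = mask * 0.8 + rng.random((32, 32)) * 0.2
random_map = rng.random((32, 32))
print(f"focused saliency AUC: {explanation_auc(focused, mask):.3f}")
print(f"random saliency AUC:  {explanation_auc(random_map, mask):.3f}")
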
dc.format.mimetype: application/pdf
dc.identifier.citation: Jia, Y., Frank, E., Pfahringer, B., Bifet, A., & Lim, N. J. S. (2021). Studying and exploiting the relationship between model accuracy and explanation quality. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Proc 25th European Conference on Principles and Practice of Knowledge Discovery in Databases and 29th European Conference on Machine Learning. Research Track (ECML PKDD 2021) LNCS 12976 (pp. 699–714). Cham: Springer. https://doi.org/10.1007/978-3-030-86520-7_43
dc.identifier.doi: 10.1007/978-3-030-86520-7_43
dc.identifier.uri: https://hdl.handle.net/10289/14561
dc.language.iso: en
dc.publisher: Springer
dc.relation.isPartOf: Proc 25th European Conference on Principles and Practice of Knowledge Discovery in Databases and 29th European Conference on Machine Learning. Research Track (ECML PKDD 2021) LNCS 12976
dc.rights: © Springer Nature Switzerland AG 2021. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-86520-7_43
dc.source: ECML PKDD 2021
dc.subject: computer science
dc.subject: interpretability
dc.subject: explainability
dc.subject: explanation quality
dc.title: Studying and exploiting the relationship between model accuracy and explanation quality
dc.type: Conference Contribution
dspace.entity.type: Publication
pubs.begin-page: 699
pubs.end-page: 714
pubs.finish-date: 2021-09-17
pubs.place-of-publication: Cham
pubs.publication-status: Accepted
pubs.start-date: 2021-09-13

Files

Original bundle

Name: sub_624(1).pdf
Size: 2.65 MB
Format: Adobe Portable Document Format
Description: Accepted version

License bundle

Name: Research Commons Deposit Agreement 2017.pdf
Size: 188.11 KB
Format: Adobe Portable Document Format