Authors: Jia, Yunzhe; Frank, Eibe; Pfahringer, Bernhard; Bifet, Albert; Lim, Nick Jin Sean
Editors: Oliver, N.; Pérez-Cruz, F.; Kramer, Stefan; Read, Jesse; Lozano, J.A.
Date: 2021 (deposited 2021-09-16)
Citation: Jia, Y., Frank, E., Pfahringer, B., Bifet, A., & Lim, N. J. S. (2021). Studying and exploiting the relationship between model accuracy and explanation quality. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Proc 25th European Conference on Principles and Practice of Knowledge Discovery in Databases and 29th European Conference on Machine Learning. Research Track (ECML PKDD 2021), LNCS 12976 (pp. 699–714). Cham: Springer. https://doi.org/10.1007/978-3-030-86520-7_43
Handle: https://hdl.handle.net/10289/14561

Abstract: Many explanation methods have been proposed to reveal insights about the internal procedures of black-box models like deep neural networks. Although these methods can generate explanations for individual predictions, little research has investigated the relationship between model accuracy and explanation quality, or how explanations can be used to improve model performance. In this paper, we evaluate explanations using a metric based on the area under the ROC curve (AUC), treating expert-provided image annotations as ground-truth explanations, and quantify the correlation between model accuracy and explanation quality when performing image classification with deep neural networks. The experiments are conducted using two image datasets: the CUB-200-2011 dataset and a Kahikatea dataset that we publish with this paper. For each dataset, we compare and evaluate seven different neural networks with four different explainers in terms of both accuracy and explanation quality. We also investigate how explanation quality evolves as loss metrics change through the training iterations of each model. The experiments suggest a strong correlation between model accuracy and explanation quality. Based on this observation, we demonstrate how explanations can be exploited to benefit the model selection process, even if simply maximising accuracy on test data is the primary goal.

Format: application/pdf
Language: en
Rights: © Springer Nature Switzerland AG 2021. This is the author's accepted version. The final publication is available at Springer via https://doi.org/10.1007/978-3-030-86520-7_43
Keywords: computer science; interpretability; explainability; explanation quality
Title: Studying and exploiting the relationship between model accuracy and explanation quality
Type: Conference Contribution
DOI: 10.1007/978-3-030-86520-7_43
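The AUC metric described in the abstract treats each pixel's saliency score as a classifier score for membership in the expert-annotated region, so explanation quality reduces to ROC AUC over pixels. A minimal NumPy sketch of that idea, using the Mann-Whitney U formulation of AUC; the function name and the toy saliency/mask data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def explanation_auc(saliency, mask):
    """ROC AUC of pixel-level saliency scores against a binary ground-truth
    annotation mask, via the Mann-Whitney U statistic (equivalent to AUC)."""
    pos = saliency[mask == 1].ravel()  # scores on annotated (object) pixels
    neg = saliency[mask == 0].ravel()  # scores on background pixels
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy example: a 4x4 saliency map scored against a 4x4 annotation mask.
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                                 # annotated object region
rng = np.random.default_rng(0)
saliency = mask * 0.8 + rng.random((4, 4)) * 0.2   # explainer highlights the object
print(explanation_auc(saliency, mask))             # 1.0: every object pixel outranks every background pixel
```

A perfect score of 1.0 means the explainer ranks all annotated pixels above all background pixels; an uninformative saliency map scores near 0.5.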