
      Studying and exploiting the relationship between model accuracy and explanation quality

      Jia, Yunzhe; Frank, Eibe; Pfahringer, Bernhard; Bifet, Albert; Lim, Nick Jin Sean
      Files
      sub_624(1).pdf (Accepted version, 2.647 MB)
      DOI
       10.1007/978-3-030-86520-7_43
      Citation
      Jia, Y., Frank, E., Pfahringer, B., Bifet, A., & Lim, N. J. S. (2021). Studying and exploiting the relationship between model accuracy and explanation quality. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Proc. 25th European Conference on Principles and Practice of Knowledge Discovery in Databases and 29th European Conference on Machine Learning. Research Track (ECML PKDD 2021), LNCS 12976 (pp. 699–714). Cham: Springer. https://doi.org/10.1007/978-3-030-86520-7_43
      Permanent Research Commons link: https://hdl.handle.net/10289/14561
      Abstract
      Many explanation methods have been proposed to reveal insights about the internal procedures of black-box models like deep neural networks. Although these methods are able to generate explanations for individual predictions, little research has been conducted to investigate the relationship between model accuracy and explanation quality, or how to use explanations to improve model performance. In this paper, we evaluate explanations using a metric based on area under the ROC curve (AUC), treating expert-provided image annotations as ground-truth explanations, and quantify the correlation between model accuracy and explanation quality when performing image classification with deep neural networks. The experiments are conducted using two image datasets: the CUB-200-2011 dataset and a Kahikatea dataset that we publish with this paper. For each dataset, we compare and evaluate seven different neural networks with four different explainers in terms of both accuracy and explanation quality. We also investigate how explanation quality evolves as loss metrics change through the training iterations of each model. The experiments suggest a strong correlation between model accuracy and explanation quality. Based on this observation, we demonstrate how explanations can be exploited to benefit the model selection process, even if simply maximising accuracy on test data is the primary goal.
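      A minimal sketch of the AUC-based explanation-quality metric described above, assuming a real-valued per-pixel saliency map from an explainer and a binary expert annotation mask (the helper explanation_auc and the array shapes are illustrative, not the authors' implementation; scikit-learn's roc_auc_score supplies the AUC computation):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def explanation_auc(saliency, mask):
          # Pixels inside the expert annotation are the positives; the
          # explainer's per-pixel saliency scores provide the ranking.
          # A high AUC means attribution concentrates on the annotated region.
          y_true = (np.asarray(mask).ravel() > 0).astype(int)
          y_score = np.asarray(saliency, dtype=float).ravel()
          return roc_auc_score(y_true, y_score)

      # Sanity check: a random (uninformative) saliency map should score
      # near 0.5, the chance level for AUC.
      rng = np.random.default_rng(0)
      mask = np.zeros((64, 64))
      mask[16:48, 16:48] = 1.0  # hypothetical annotated region
      print(explanation_auc(rng.random((64, 64)), mask))

      Under this reading, ranking candidate models by the explanation AUC of their saliency maps is one way such a metric could feed into the model selection step the abstract mentions.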
      Date
      2021
      Type
      Conference Contribution
      Publisher
      Springer
      Rights
      © Springer Nature Switzerland AG 2021. This is the author's accepted version. The final publication is available at Springer via dx.doi.org/10.1007/978-3-030-86520-7_43
      Collections
      • Computing and Mathematical Sciences Papers [1455]
