Preprint / working paper

Sanity checks and improvements for patch visualisation in prototype-based image classification

Romain Xu-Darme (1, 2), Georges Quénot (2), Zakaria Chihani (1), Marie-Christine Rousset (3)

Abstract

In this work, we perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes: ProtoPNet and ProtoTree. Using two fine-grained datasets (CUB-200-2011 and Stanford Cars), we first show that such methods do not correctly identify the regions of interest inside the images, and therefore do not reflect the model behaviour. Secondly, using a deletion metric, we demonstrate quantitatively that saliency methods such as SmoothGrad or PRP provide more faithful image patches. We also propose a new relevance metric based on the object segmentation provided with some datasets (e.g. CUB-200-2011) and show that the imprecise patch visualisations generated by ProtoPNet and ProtoTree can create a false sense of bias that can be mitigated by the use of more faithful methods. Finally, we discuss the implications of our findings for other prototype-based models sharing the same visualisation method.
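The relevance metric sketched in the abstract scores a patch visualisation against the ground-truth object segmentation: intuitively, a faithful patch should land on the object (e.g. the bird), not on the background. A minimal illustrative sketch of such an overlap score is below; the function name, the boolean-mask representation, and the toy data are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def patch_relevance(patch_mask: np.ndarray, object_mask: np.ndarray) -> float:
    """Fraction of the highlighted patch that falls on the segmented object.

    patch_mask  : boolean array (H, W), True where the visualisation
                  highlights the image (e.g. the prototype patch).
    object_mask : boolean array (H, W), True on the object segmentation
                  (e.g. the bird masks shipped with CUB-200-2011).
    """
    highlighted = patch_mask.sum()
    if highlighted == 0:
        return 0.0
    return float((patch_mask & object_mask).sum() / highlighted)

# Toy example: 4x4 image, object occupies the left half,
# patch covers the top-left 2x2 corner (entirely on the object).
obj = np.zeros((4, 4), dtype=bool)
obj[:, :2] = True
patch = np.zeros((4, 4), dtype=bool)
patch[:2, :2] = True
print(patch_relevance(patch, obj))  # -> 1.0
```

A score near 1 means the visualised patch lies on the object; a low score on a patch that the model nonetheless relies on would suggest either a genuine background bias or, as the abstract argues, an imprecise visualisation.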
Main file
Sanity_checks_and_improvements_for_patch_visualisation_in_prototype_based_image_classification.pdf (7.71 MB)
Origin: Files produced by the author(s)

Dates and versions

cea-03943308, version 1 (17-01-2023)

Identifiers

  • HAL Id: cea-03943308, version 1

Cite

Romain Xu-Darme, Georges Quénot, Zakaria Chihani, Marie-Christine Rousset. Sanity checks and improvements for patch visualisation in prototype-based image classification. 2023. ⟨cea-03943308v1⟩