arXiv (Cornell University)
COMIX: Compositional Explanations using Prototypes
January 2025 • Sarath Sivaprasad, Dmitry Kangin, Plamen Angelov, Mario Fritz
Aligning machine representations with human understanding is key to improving the interpretability of machine learning (ML) models. When classifying a new image, humans often explain their decisions by decomposing the image into concepts and pointing to corresponding regions in familiar images. Current ML explanation techniques typically either trace decision-making processes to reference prototypes, generate attribution maps highlighting feature importance, or incorporate intermediate bottlenecks designed to align wi…
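As a rough illustration of the prototype-tracing idea the abstract mentions (not the paper's actual method), a generic prototype-based explainer compares patch-level feature vectors of a new image against a bank of learned prototype vectors and reports, for each patch, the closest prototype. Everything here is a hypothetical sketch: the array shapes, the function name `explain_with_prototypes`, and the use of cosine similarity are assumptions, not details from COMIX.

```python
import numpy as np

def explain_with_prototypes(patch_embeds, prototypes):
    """For each patch embedding, find its most similar prototype.

    patch_embeds: (P, D) array of patch features (hypothetical encoder output).
    prototypes:   (K, D) array of learned prototype vectors.
    Returns (best_proto_index, similarity) per patch.
    """
    # Normalize rows so dot products become cosine similarities.
    pe = patch_embeds / np.linalg.norm(patch_embeds, axis=1, keepdims=True)
    pr = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = pe @ pr.T                      # (P, K) cosine-similarity matrix
    best = sims.argmax(axis=1)            # closest prototype per patch
    return best, sims[np.arange(len(best)), best]

rng = np.random.default_rng(0)
patches = rng.normal(size=(6, 8))         # 6 patches, 8-dim features
protos = rng.normal(size=(3, 8))          # 3 prototype vectors
idx, score = explain_with_prototypes(patches, protos)
print(idx.shape, score.shape)             # (6,) (6,)
```

Pointing each patch to its nearest prototype is what lets such explanations reference "corresponding regions in familiar images": each prototype can be visualized by the training patch it was derived from.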