Eigen-CAM: Class Activation Map using Principal Components
· 2020
· Open Access
· DOI: https://doi.org/10.1109/ijcnn48605.2020.9206626
· OA: W3046768359
Deep neural networks are ubiquitous due to the ease of developing models and their influence on other domains. At the heart of this progress are convolutional neural networks (CNNs), which are capable of learning representations or features from a given set of data. Making sense of such complex models (i.e., millions of parameters and hundreds of layers) remains challenging for developers and end-users alike, partly due to the lack of tools or interfaces capable of providing interpretability and transparency. A growing body of literature, for example on class activation maps (CAMs), focuses on making sense of what a model learns from the data or why it behaves poorly on a given task. This paper builds on previous ideas to cope with the increasing demand for interpretable, robust, and transparent models. Our approach provides a simpler and more intuitive way of generating CAMs. The proposed Eigen-CAM computes and visualizes the principal components of the learned features/representations from the convolutional layers. Empirical studies compared Eigen-CAM with state-of-the-art methods (such as Grad-CAM, Grad-CAM++, and CNN-fixations) on benchmark tasks such as weakly-supervised object localization and localizing objects in the presence of adversarial noise. Eigen-CAM was found to be robust against classification errors made by the fully connected layers of a CNN; it does not rely on backpropagated gradients, class relevance scores, maximum activation locations, or any other form of feature weighting. In addition, it works with all CNN models without the need to modify layers or retrain. Empirical results show up to a 12% improvement over the best of the compared methods on weakly-supervised object localization.
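
The abstract describes Eigen-CAM as projecting the last convolutional layer's activations onto their first principal component. The sketch below is one plausible way to realize that idea in PyTorch; it is a minimal reconstruction under stated assumptions, not the authors' reference code. In particular, the choice of ResNet-50's `layer4` as the target layer, the mean-centering before the SVD, and the sign-flip heuristic are assumptions on my part.

```python
import numpy as np
import torch
from torchvision import models

# Minimal Eigen-CAM sketch (an illustrative reconstruction, not the authors'
# reference implementation). Core idea: project the last convolutional
# layer's feature maps onto their first principal component.

model = models.resnet50(weights="IMAGENET1K_V1").eval()

# Capture activations of an assumed target layer via a forward hook.
# For ResNet-50 the last convolutional block is `layer4`; other backbones
# would need a different choice.
features = {}

def hook(module, inputs, output):
    features["maps"] = output.detach()

model.layer4.register_forward_hook(hook)

def eigen_cam(image: torch.Tensor) -> np.ndarray:
    """image: (1, 3, H, W) normalized tensor; returns an (h, w) saliency map."""
    with torch.no_grad():
        model(image)                           # no gradients needed, per the paper
    maps = features["maps"][0]                 # (C, h, w) feature maps
    C, h, w = maps.shape
    A = maps.reshape(C, -1).T.cpu().numpy()    # (h*w, C): one row per spatial location
    A = A - A.mean(axis=0)                     # assumed: center columns before the SVD
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    cam = (A @ Vt[0]).reshape(h, w)            # projection onto 1st principal axis
    if cam.sum() < 0:                          # SVD sign is ambiguous; flip heuristically
        cam = -cam
    cam = np.maximum(cam, 0)                   # keep positive evidence only
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam
```

A typical use would be `cam = eigen_cam(preprocessed_image)`, followed by resizing `cam` to the input resolution and overlaying it on the image as a heatmap. Note that, consistent with the abstract, nothing here touches gradients, class scores, or the fully connected layers.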