Official implementation of [Relevance-CAM: Your Model Already Knows Where to Look], accepted to CVPR 2021.
We introduce a novel method that analyzes the intermediate layers of a model as well as its last convolutional layer.
The method consists of three phases:
- Forward propagation to obtain the activation map of a target layer of the model.
- Backpropagation to the target layer with the Layer-wise Relevance Propagation (LRP) rule to calculate the weighting component.
- Weighted summation of the activation map and the weighting component (a minimal sketch follows this list).
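Below is a minimal sketch of the weighted-summation step, assuming the target layer's activation maps and the LRP relevance propagated to that layer have already been obtained (the LRP propagation itself is implemented in this repository). The function name and tensor shapes are illustrative, not the repository's API:

```python
import torch
import torch.nn.functional as F

def relevance_cam(activations: torch.Tensor, relevance: torch.Tensor) -> torch.Tensor:
    """Combine a target layer's activations with its LRP relevance.

    activations: (C, H, W) feature maps of the target layer.
    relevance:   (C, H, W) relevance propagated to the same layer via LRP.
    Returns an (H, W) class activation map normalized to [0, 1].
    """
    # Weighting component: spatially pool each channel's relevance.
    weights = relevance.mean(dim=(1, 2))                               # (C,)

    # Weighted summation of the activation maps, followed by ReLU.
    cam = F.relu((weights[:, None, None] * activations).sum(dim=0))    # (H, W)

    # Normalize for visualization.
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)
```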
Example: running Multi_CAM.py saves multiple CAM results for the images in the picture folder to the results folder; a rough sketch of the overlay/saving step appears after the commands below.
python Multi_CAM.py --models resnet50 --target_layer layer4   # last layer of ResNet50
python Multi_CAM.py --models resnet50 --target_layer layer2   # intermediate layer of ResNet50
python Multi_CAM.py --models vgg16 --target_layer 43          # last layer of VGG16
python Multi_CAM.py --models vgg16 --target_layer 23          # intermediate layer of VGG16
python Multi_CAM.py --models vgg19 --target_layer 52          # last layer of VGG19
python Multi_CAM.py --models vgg19 --target_layer 39          # intermediate layer of VGG19
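For reference, overlaying a CAM on its source image and writing it to the results folder can be done roughly as follows; this is a generic illustration, and the colormap and blending actually used by Multi_CAM.py may differ:

```python
import cv2
import numpy as np

def save_overlay(image_bgr: np.ndarray, cam: np.ndarray, out_path: str) -> None:
    """Overlay a CAM (values in [0, 1]) on a BGR image and save it."""
    cam_resized = cv2.resize(cam, (image_bgr.shape[1], image_bgr.shape[0]))
    heatmap = cv2.applyColorMap(np.uint8(255 * cam_resized), cv2.COLORMAP_JET)
    overlay = cv2.addWeighted(image_bgr, 0.5, heatmap, 0.5, 0)
    cv2.imwrite(out_path, overlay)
```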
- Model: resnet50, vgg16, or vgg19, selected with --models.
- Target layer for Relevance-CAM: selected with --target_layer (e.g., layer2 of resnet50).
- Target class: to compute Relevance-CAM for the African elephant class, use --target_class 386 (386 is the ImageNet index for African elephant). By default, the class with the highest output score (the predicted class) is used. A combined example follows this list.
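The documented flags can be combined in one call; for example, to compute Relevance-CAM at the last block of ResNet50 for the African elephant class:

python Multi_CAM.py --models resnet50 --target_layer layer4 --target_class 386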
@InProceedings{Lee_2021_CVPR,
author = {Lee, Jeong Ryong and Kim, Sewon and Park, Inyong and Eo, Taejoon and Hwang, Dosik},
title = {Relevance-CAM: Your Model Already Knows Where To Look},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {14944-14953}
}