# Open-world Text-specified Object Counting

BMVC'2023 Best Poster Award Winner

Niki Amini-Naieni, Kiana Amini-Naieni, Tengda Han, & Andrew Zisserman

Official PyTorch implementation for CounTX. Details can be found in the paper. [Paper] [Project Page]
Update (July 8, 2024): Check out our new model, CountGD, that improves significantly on the performance of CounTX! [Paper] | [Project Page] | [Code] | [App to Quickly Try Out Model]
### Contents

- Preparation
- CounTX Train
- CounTX Inference
- Pre-trained Weights (FSC-147 & CARPK)
- Using New Dataset
- Additional Qualitative Examples
- Citation
- Acknowledgements
## Preparation

### 1. Download the Dataset

Our project uses the FSC-147 dataset. Please visit the following link to download it.

We also use the text descriptions in FSC-147-D, provided in this repository.

Note that the image names in FSC-147 identify the corresponding text descriptions in FSC-147-D.
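For example, here is a minimal Python sketch of this lookup (the exact JSON layout of FSC-147-D, including any key names, is an assumption; inspect the file to confirm):

```python
# Minimal sketch: look up the FSC-147-D description for an FSC-147 image name.
# The JSON layout (image-name keys) is an assumption; inspect FSC-147-D.json
# to confirm before relying on this.
import json

with open("FSC-147-D.json") as f:
    fsc147_d = json.load(f)

print(fsc147_d["2.jpg"])  # "2.jpg" is a hypothetical FSC-147 image file name
```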
### 2. Set Up the Anaconda Environment

The following commands will create a suitable Anaconda environment for running the CounTX training and inference procedures. To produce the results in the paper, we used Anaconda version 2022.10.
```
conda create --name countx-environ python=3.7
conda activate countx-environ
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install timm==0.3.2
pip install scipy
pip install imgaug
git clone git@github.com:niki-amini-naieni/CounTX.git
cd CounTX/open_clip
pip install .
```
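After installation, a quick sanity check (a minimal sketch; the expected version strings follow from the pip commands above) can confirm the environment:

```python
# Quick sanity check for the environment set up above.
import torch
import torchvision
import open_clip

print(torch.__version__)          # expected: 1.10.0+cu111
print(torchvision.__version__)    # expected: 0.11.0+cu111
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine
```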
- This repository uses `timm==0.3.2`, for which a fix is needed to work with PyTorch 1.8.1+. Apply the fix by replacing the file `timm/models/layers/helpers.py` in the timm codebase with the file `helpers.py` provided in this repository (see the sketch below for locating the installed file).
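To find where pip installed the file to replace, one option (a convenience sketch; you can also locate the file manually) is:

```python
# Locate the installed timm/models/layers/helpers.py so it can be replaced
# with the fixed helpers.py from this repository. find_spec is used instead
# of "import timm" because importing timm can fail before the fix is applied.
import importlib.util
import os

spec = importlib.util.find_spec("timm")
assert spec is not None, "timm is not installed in this environment"
print(os.path.join(os.path.dirname(spec.origin), "models", "layers", "helpers.py"))
```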
## CounTX Train

To train the model, run the following command after activating the Anaconda environment set up in step 2 of Preparation. Make sure to change the directory and file names to the ones you set up in step 1 of Preparation.

```
nohup python train.py --output_dir "./results" --img_dir "/scratch/local/hdd/nikian/images_384_VarV2" --gt_dir "/scratch/local/hdd/nikian/gt_density_map_adaptive_384_VarV2" --class_file "/scratch/local/hdd/nikian/ImageClasses_FSC147.txt" --FSC147_anno_file "/scratch/local/hdd/nikian/annotation_FSC147_384.json" --FSC147_D_anno_file "./FSC-147-D.json" --data_split_file "/scratch/local/hdd/nikian/Train_Test_Val_FSC_147.json" >>./training.log 2>&1 &
```

The command runs in the background via nohup; training progress is appended to ./training.log.
## CounTX Inference

To test a model, run the following commands after activating the Anaconda environment set up in step 2 of Preparation. Make sure to change the directory and file names to the ones you set up in step 1 of Preparation, and make sure that the model file name refers to the model you want to test. By default, models trained in CounTX Train are saved as ./results/checkpoint-1000.pth.
For the validation set:
python test.py --data_split "val" --output_dir "./test" --resume "./results/checkpoint-1000.pth" --img_dir "/scratch/local/hdd/nikian/images_384_VarV2" --FSC147_anno_file "/scratch/local/hdd/nikian/annotation_FSC147_384.json" --FSC147_D_anno_file "./FSC-147-D.json" --data_split_file "/scratch/local/hdd/nikian/Train_Test_Val_FSC_147.json"
For the test set:
python test.py --data_split "test" --output_dir "./test" --resume "./results/checkpoint-1000.pth" --img_dir "/scratch/local/hdd/nikian/images_384_VarV2" --FSC147_anno_file "/scratch/local/hdd/nikian/annotation_FSC147_384.json" --FSC147_D_anno_file "./FSC-147-D.json" --data_split_file "/scratch/local/hdd/nikian/Train_Test_Val_FSC_147.json"
## Pre-trained Weights (FSC-147 & CARPK)

### FSC-147

The model weights used in the paper can be downloaded from the Google Drive link (1.3 GB). To reproduce the results in the paper, run the following commands after activating the Anaconda environment set up in step 2 of Preparation. Make sure to change the directory and file names to the ones you set up in step 1 of Preparation, and make sure that the model file name refers to the model that you downloaded.
For the validation set:
python test_reproduce_paper.py --data_split "val" --output_dir "./test" --resume "paper-model.pth" --img_dir "/scratch/local/hdd/nikian/images_384_VarV2" --FSC147_anno_file "/scratch/local/hdd/nikian/annotation_FSC147_384.json" --FSC147_D_anno_file "./FSC-147-D.json" --data_split_file "/scratch/local/hdd/nikian/Train_Test_Val_FSC_147.json"
For the test set:
python test_reproduce_paper.py --data_split "test" --output_dir "./test" --resume "paper-model.pth" --img_dir "/scratch/local/hdd/nikian/images_384_VarV2" --FSC147_anno_file "/scratch/local/hdd/nikian/annotation_FSC147_384.json" --FSC147_D_anno_file "./FSC-147-D.json" --data_split_file "/scratch/local/hdd/nikian/Train_Test_Val_FSC_147.json"
### CARPK

The model weights used in the paper can be downloaded from the Google Drive link (1.3 GB). To reproduce the results in the paper, run the following command after activating the Anaconda environment set up in step 2 of Preparation and installing hub as described here. Make sure that the model file name refers to the model that you downloaded.
```
python test_carpk.py --resume "carpk.pth"
```
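For reference, loading CARPK with hub typically looks like the sketch below (the dataset path is an assumption; confirm it against the hub instructions linked above):

```python
# Minimal sketch of loading the CARPK test split with hub (Activeloop).
# The dataset path "hub://activeloop/carpk-test" is an assumption; confirm it
# against the hub instructions linked above before use.
import hub

ds = hub.load("hub://activeloop/carpk-test")
print(ds)
```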
## Additional Qualitative Examples

Additional qualitative examples for CounTX not included in the main paper are provided here.
## Citation

```
@InProceedings{AminiNaieni23,
  author = "Amini-Naieni, N. and Amini-Naieni, K. and Han, T. and Zisserman, A.",
  title = "Open-world Text-specified Object Counting",
  booktitle = "British Machine Vision Conference",
  year = "2023",
}
```
## Acknowledgements

This repository is based on the CounTR repository and uses code from the OpenCLIP repository. If you have any questions about our code implementation, please contact us at niki.amini-naieni@eng.ox.ac.uk.