This is example code for the track "Open-Set 3D Object Retrieval using Multi-Modal Representation" in SHREC22. The complete OS-MN40 dataset is adopted as input. The dataset can be downloaded as follows:
More details about the dataset and the track can be found here.
We implement the baseline by combining multi-modal backbones, as follows:
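As a rough illustration of this idea (not the exact baseline implementation), the hypothetical module below fuses features from two modality backbones, an image encoder and a point-cloud encoder, into a single retrieval embedding by concatenation and a linear projection. All names and dimensions here are placeholders.

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Hypothetical sketch: fuse per-modality features into one retrieval embedding."""
    def __init__(self, img_backbone, pc_backbone, feat_dim=256, n_classes=8):
        super().__init__()
        self.img_backbone = img_backbone   # e.g. a multi-view image encoder -> (B, feat_dim)
        self.pc_backbone = pc_backbone     # e.g. a point-cloud encoder -> (B, feat_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, img, pc):
        # Concatenate per-modality features and project to a shared embedding.
        fused = self.fusion(torch.cat([self.img_backbone(img), self.pc_backbone(pc)], dim=1))
        return fused, self.classifier(fused)   # retrieval feature + class logits
```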
This example code is developed with Python 3.8.12 and PyTorch 1.8.1+cu102. You can install the required packages as follows:
pip install -r requirements.txt
conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 -c pytorch
By default, the datasets are placed under the "data" folder in the root directory. For each run, this code creates a new folder (named after the current time) under the "cache/ckpts" folder to store the checkpoint files.
├── cache
│   └── ckpts
│       ├── OS-MN40_2022-01-12-20-57-46
│       │   ├── cdist.txt
│       │   ├── ckpt.meta
│       │   └── ckpt.pth
│       └── OS-MN40_2022-01-15-13-58-50
│           ├── cdist.txt
│           ├── ckpt.meta
│           └── ckpt.pth
└── data
    ├── OS-MN40/
    └── OS-MN40-Miss/
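For reference, a run-folder name such as OS-MN40_2022-01-12-20-57-46 can be produced with a small timestamp helper along the following lines; this is only a sketch, and the actual logic in train.py may differ.

```python
import os
import time

# Hypothetical helper reproducing the naming scheme above,
# e.g. cache/ckpts/OS-MN40_2022-01-12-20-57-46.
def make_ckpt_dir(root="cache/ckpts", prefix="OS-MN40"):
    run_dir = os.path.join(root, f"{prefix}_{time.strftime('%Y-%m-%d-%H-%M-%S')}")
    os.makedirs(run_dir, exist_ok=True)
    return run_dir
```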
You can also place the datasets anywhere you want. Don't forget to change the related paths in line 19 of "train.py" and line 19 of "get_mat.py".
Run "train.py". By default, 80% data in the train folder is used for training and the rest is used for validation.
python train.py
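A minimal sketch of such an 80/20 split is shown below, assuming a standard PyTorch dataset; the split logic actually used in train.py may differ (for example in seeding or stratification).

```python
import random
from torch.utils.data import Subset

# Hypothetical 80/20 split: shuffle indices once with a fixed seed,
# then wrap the two halves as Subset datasets.
def split_train_val(dataset, train_ratio=0.8, seed=2022):
    indices = list(range(len(dataset)))
    random.Random(seed).shuffle(indices)
    n_train = int(train_ratio * len(indices))
    return Subset(dataset, indices[:n_train]), Subset(dataset, indices[n_train:])
```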
Modify the data_root and ckpt_path in lines 17-18 of "get_mat.py". Then run:
python get_mat.py
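For orientation, those two variables might be set like this; the values are placeholders only and should point at your own dataset and checkpoint locations.

```python
# get_mat.py, lines 17-18 (example values only)
data_root = "data/OS-MN40"
ckpt_path = "cache/ckpts/OS-MN40_2022-01-12-20-57-46/ckpt.pth"
```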
The generated cdist.txt can be found in the same folder as the specified checkpoint.
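As a rough sketch (not the exact logic in get_mat.py), the distance matrix could be computed from query and target features and dumped to cdist.txt along these lines; the file format expected by the track is defined by the organizers.

```python
import numpy as np
import torch

# Hypothetical sketch: pairwise distances between query and target features,
# written out as a plain-text matrix next to the checkpoint.
def save_cdist(query_feats: torch.Tensor, target_feats: torch.Tensor, out_path: str):
    dist = torch.cdist(query_feats, target_feats)   # (n_query, n_target)
    np.savetxt(out_path, dist.cpu().numpy())
```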
You can submit the cdist.txt file with your team key on the track website. Submissions with an invalid team key will not appear on the leaderboard, except for "Test Team". The online evaluation uses mAP, NN, NDCG@100, and ANMRR. The computation details of those scores can be found in "utils.py". The definitions of those scores follow the book View-Based 3-D Object Retrieval.
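For orientation only, mAP over a query-target distance matrix can be computed along the lines below; the official scores come from the implementations in utils.py.

```python
import numpy as np

# Hypothetical mAP sketch: for each query, rank targets by ascending distance
# and average the precision at every relevant (same-label) position.
def mean_average_precision(dist, query_labels, target_labels):
    query_labels = np.asarray(query_labels)
    target_labels = np.asarray(target_labels)
    aps = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        rel = (target_labels[order] == query_labels[i]).astype(float)
        if rel.sum() == 0:
            continue
        precision = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((precision * rel).sum() / rel.sum())
    return float(np.mean(aps))
```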