Create a directory to store reid datasets under this repo:

```bash
cd AGRL.pytorch/
mkdir data/
```

If you want to store datasets in another directory, specify `--root path_to_your/data` when running the training code. Please follow the instructions below to prepare each dataset; after that, you can simply pass `-d the_dataset` when running the training code.

Note that image datasets cannot be used with the video reid scripts (and vice versa), otherwise an error will occur.
MARS [8]:
- Create a directory named `mars/` under `data/`.
- Download the dataset to `data/mars/` from http://www.liangzheng.com.cn/Project/project_mars.html.
- Extract `bbox_train.zip` and `bbox_test.zip`.
- Download the split information from https://github.com/liangzheng06/MARS-evaluation/tree/master/info and put `info/` in `data/mars` (we want to follow the standard split in [8]). The data structure should look like:

```
mars/
    bbox_test/
    bbox_train/
    info/
```

- Use `-d mars` when running the training code.
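Before launching training, it can help to sanity-check the layout. Below is a minimal, hypothetical sketch (the helper `check_mars_layout` is not part of this repo) that verifies the expected subdirectories exist under `data/mars`:

```python
import os

# Subdirectories the MARS instructions above expect under data/mars.
EXPECTED = ["bbox_train", "bbox_test", "info"]

def check_mars_layout(root="data"):
    """Hypothetical helper: raise early if MARS is not prepared."""
    mars_dir = os.path.join(root, "mars")
    missing = [d for d in EXPECTED
               if not os.path.isdir(os.path.join(mars_dir, d))]
    if missing:
        raise RuntimeError("MARS is not prepared, missing: {}".format(missing))
    return mars_dir

if __name__ == "__main__":
    print(check_mars_layout())
```

Running this once before `train.py` surfaces path mistakes (e.g. forgetting `info/`) as a clear error instead of a failure deep inside data loading.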
iLIDS-VID [11]:
- The code supports automatic download and formatting. Simply use `-d ilidsvid` when running the training code. The data structure should look like:

```
ilids-vid/
    i-LIDS-VID/
    train-test people splits/
    splits.json
```
PRID [12]:
- Under
data/
, domkdir prid2011
to create a directory. - Download dataset from https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/PRID11/ and extract it under
data/prid2011
. - Download the split created by iLIDS-VID from here, and put it in
data/prid2011/
. We follow [11] and use 178 persons whose sequences are more than a threshold so that results on this dataset can be fairly compared with other approaches. The data structure would look like:
prid2011/
splits_prid2011.json
prid_2011/
multi_shot/
single_shot/
readme.txt
- Use
-d prid2011
when running the training code.
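For reference, here is a small sketch of reading one train/test split from `splits_prid2011.json`. It assumes (not verified against this repo) the format commonly used by reid codebases: a JSON list of dicts, each holding `train` and `test` lists of person identities.

```python
import json

def load_split(path, split_id=0):
    """Sketch: return (train, test) identity lists for one split.

    Assumes the JSON file is a list of {"train": [...], "test": [...]}
    dicts, one entry per split; this format is an assumption.
    """
    with open(path) as f:
        splits = json.load(f)
    split = splits[split_id]
    return split["train"], split["test"]
```

The 10 repeated splits typical for PRID/iLIDS-VID evaluation would then be selected via `split_id`.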
DukeMTMC-VideoReID [16, 23]:
- Use `-d dukemtmcvidreid` directly.
- If you want to download the dataset manually, get `DukeMTMC-VideoReID.zip` from https://github.com/Yu-Wu/DukeMTMC-VideoReID and unzip it to `data/dukemtmc-vidreid`. Ultimately, you need to have:

```
dukemtmc-vidreid/
    DukeMTMC-VideoReID/
        train/ # essential
        query/ # essential
        gallery/ # essential
        ... (and license files)
```
These datasets are handled in `dataset_loader.py`, where we have two main classes that subclass `torch.utils.data.Dataset`:
- ImageDataset: processes image-based person reid datasets.
- VideoDataset: processes video-based person reid datasets.

These two classes are used with `torch.utils.data.DataLoader` to provide batched data. A data loader built on `VideoDataset` outputs batches of shape `(batch, sequence, channel, height, width)`.
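To make that output shape concrete, here is a small sketch of how per-tracklet frame stacks get collated into a 5-D batch, using NumPy arrays to stand in for torch tensors (the sizes are illustrative, not the repo's defaults):

```python
import numpy as np

def make_sample(seq_len=4, c=3, h=256, w=128):
    """One tracklet sample: (sequence, channel, height, width),
    mirroring what a video dataset returns per item."""
    return np.zeros((seq_len, c, h, w), dtype=np.float32)

# Default collation stacks samples along a new leading batch axis,
# yielding (batch, sequence, channel, height, width).
batch = np.stack([make_sample() for _ in range(8)])
print(batch.shape)  # (8, 4, 3, 256, 128)
```

A model consuming such a batch typically flattens the first two axes to run a CNN over all frames, then folds the sequence axis back for temporal aggregation.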