We support Python 3 (the recommended version is Python 3.9). To install the dependencies, run:

```shell
pip install -r requirements.txt
```
In our method, all the configurations are contained in the file `config/Mixed_data-10-8-wMaskWarp-aug.yaml`.
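If you want to inspect or tweak settings before training, the YAML file can be loaded programmatically. A minimal sketch, assuming PyYAML is installed; the keys shown here are illustrative placeholders, not the actual contents of the config file:

```python
import yaml

# Placeholder keys for illustration only; the real options live in
# config/Mixed_data-10-8-wMaskWarp-aug.yaml
text = """
train_params:
  num_epochs: 100
  batch_size: 8
"""

cfg = yaml.safe_load(text)
cfg["train_params"]["batch_size"] = 4  # example in-memory override
print(cfg["train_params"]["batch_size"])  # → 4
```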
- Download the test dataset `megc2022-synthesis`.
- Download `shape_predictor_68_face_landmarks.dat` and put it in the `dataset` folder.
- Put the three training sets and the one test set in the `dataset` folder. The file tree is shown as follows:
```
.
├── CASMEII
│   ├── CASME2-coding-20190701.xlsx
│   └── CASME2_RAW_selected
├── copy_.py
├── crop.py
├── megc2022-synthesis
│   ├── source_samples
│   └── target_template_face
├── SAMM
│   ├── SAMM
│   └── SAMM_Micro_FACS_Codes_v2.xlsx
├── shape_predictor_68_face_landmarks.dat
└── SMIC
    └── SMIC_all_raw
```
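Before running the preprocessing scripts, it can help to sanity-check that everything landed in the right place. A small sketch (the helper `missing_entries` is our own naming, not part of the repo; the paths are taken from the tree above):

```python
from pathlib import Path

# Entries taken from the file tree above.
REQUIRED = [
    "CASMEII/CASME2_RAW_selected",
    "megc2022-synthesis/source_samples",
    "SAMM/SAMM",
    "SMIC/SMIC_all_raw",
    "shape_predictor_68_face_landmarks.dat",
]

def missing_entries(root="dataset"):
    """Return the required entries that are absent under `root`."""
    root = Path(root)
    return [entry for entry in REQUIRED if not (root / entry).exists()]

print(missing_entries())  # an empty list means the layout is complete
```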
- Run the following commands:

```shell
cd dataset
python crop.py
python copy_.py
mv Mixed_dataset_test.csv ./Mixed_dataset
cd ..
```
The root of the preprocessed dataset is `./dataset/Mixed_dataset`.
- Download `train_mask.tar.gz` and unzip it, then put it in `./dataset/Mixed_dataset/train_mask`.
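The unpacking step can also be scripted with the standard library. A sketch (`extract_masks` and its default arguments are our own naming, not part of the repo):

```python
import tarfile
from pathlib import Path

def extract_masks(archive="train_mask.tar.gz", dest="./dataset/Mixed_dataset"):
    # Unpack the archive into the preprocessed dataset root, so the masks
    # end up under ./dataset/Mixed_dataset/train_mask
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=dest)
```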
To train a model on a specific dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python run.py \
  --config config/Mixed_data-10-8-wMaskWarp-aug.yaml \
  --device_ids 0,1,2,3
```
A log folder named after the timestamp will be created. Checkpoints, loss values, and reconstruction results will be saved to this folder.
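The timestamped log folder can be reproduced along these lines (a sketch; the exact format string that `run.py` uses is an assumption):

```python
import os
from datetime import datetime

def make_log_dir(root="log"):
    # One folder per training run, e.g. log/2022-05-01_12.30.45
    stamp = datetime.now().strftime("%Y-%m-%d_%H.%M.%S")
    path = os.path.join(root, stamp)
    os.makedirs(path, exist_ok=True)
    return path
```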
To generate results with a trained checkpoint, run:

```shell
CUDA_VISIBLE_DEVICES=0 python demo.py \
  --config config/Mixed_data-10-8-wMaskWarp-aug.yaml \
  --checkpoint 'path to the checkpoint' \
  --result_video './ckpt/relative' \
  --mode 'relative'
```
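In FOMM-style models, `relative` mode transfers the driving video's *motion* (keypoint offsets relative to its first frame) onto the source face, rather than copying the driving keypoints outright. A NumPy sketch of the idea (function names are ours for illustration; the actual implementation lives in `demo.py`):

```python
import numpy as np

def absolute_transfer(kp_driving, kp_source):
    # 'absolute' mode: driving keypoints are used as-is
    return kp_driving

def relative_transfer(kp_driving, kp_driving_initial, kp_source):
    # 'relative' mode: add the driving motion (offset from the first
    # driving frame) to the source keypoints
    return kp_source + (kp_driving - kp_driving_initial)
```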
Our provided model can be downloaded here. The final results are in the folder `./ckpt/relative`.
The main code is based on FOMM, MRAA, and TPS. Thanks for these excellent works!