This repository is the official implementation of Designing An Illumination-Aware Network for Deep Image Relighting. [Paper] [Demos]
Designing An Illumination-Aware Network for Deep Image Relighting
Zuo-Liang Zhu, Zhen Li, Rui-Xun Zhang, Chun-Le Guo, Ming-Ming Cheng
IEEE Transactions on Image Processing, 2022
- VIDIT dataset [Paper] [Download]
- Multi-Illumination dataset [Paper] [Download]
- DPR dataset [Paper] [Download]
- Place the one2one training data into the folders `./data/one2one/train/depth`, `./data/one2one/train/input`, and `./data/one2one/train/target`.
- Place the any2any training data into the folders `./data/any2any/train/depth` (all `.npy` files) and `./data/any2any/train/input` (all RGB images).
- Place the one2one validation data into the folders `./data/validation/train/depth`, `./data/validation/train/input`, and `./data/validation/train/target`.
- Run `gen_train_data.sh` to obtain the full training and validation data.
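As a convenience, the folder layout described above can be created in one step (a minimal sketch; the paths are taken directly from the list, and it should be run from the repository root before copying the data in):

```shell
# Create the expected data layout (paths taken from the steps above)
mkdir -p ./data/one2one/train/depth ./data/one2one/train/input ./data/one2one/train/target
mkdir -p ./data/any2any/train/depth ./data/any2any/train/input
mkdir -p ./data/validation/train/depth ./data/validation/train/input ./data/validation/train/target
```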
- Create the environment with `conda env create -f environment.yml`.
- Download the pretrained model on the DPR dataset from the link and place it into the folder `pretrained`.
- Run `python test.py -opt options/videodemo_opt.yml`.
- Image results will be saved in the folder `results`.
- You can further use `ffmpeg` to generate demo videos: `ffmpeg -f image2 -i [path_to_results] -vcodec libx264 -r 10 demo.mp4`.
`python train.py -opt [training config]`
| Dataset | Guidance | Config |
| --- | --- | --- |
| VIDIT | depth, normal, lpe* | `options/train_opt4b.yml` |
| Multi-Illumination | ❌ | `options/train_adobe_opt.yml` |
| DPR | normal, lpe | `options/trainany_opt4b.yml` |
| DPR | ❌ | `options/trainany_opt4b_woaux.yml` |
\* `lpe` denotes our proposed linear positional encoding.
`python test.py -opt [testing config]`
| Dataset | Guidance | Config | Pretrained |
| --- | --- | --- | --- |
| VIDIT | depth, normal, lpe | `options/valid_opt.yml` | `pretrained/VIDITOne2One.pth` |
| Multi-Illumination | ❌ | `options/valid_adobe_opt.yml` | `pretrained/MutliIllumination.pth` |
| DPR | normal, lpe | `options/vaild_any_opt.yml` | `pretrained/PortraitWithNormal.pth` |
| DPR | ❌ | `options/vaild_any_opt.yml` | `pretrained/PortraitWithoutNormal.pth` |
You can download all pretrained models from Google Drive or BaiduNetDisk (pwd: 5qtp).
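After downloading, a quick sanity check can confirm that the weights are where the testing configs expect them (a minimal sketch; the file names are taken from the table above):

```shell
# Report any expected checkpoint that is missing from ./pretrained
for f in VIDITOne2One.pth MutliIllumination.pth PortraitWithNormal.pth PortraitWithoutNormal.pth; do
  [ -f "pretrained/$f" ] || echo "missing: pretrained/$f"
done
```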
@article{zhu2022ian,
  author  = {Zhu, Zuo-Liang and Li, Zhen and Zhang, Rui-Xun and Guo, Chun-Le and Cheng, Ming-Ming},
  title   = {Designing An Illumination-Aware Network for Deep Image Relighting},
  journal = {IEEE Transactions on Image Processing},
  year    = {2022},
  doi     = {10.1109/TIP.2022.3195366}
}
- This repository is maintained by Zuo-Liang Zhu (nkuzhuzl [AT] gmail.com) and Zhen Li (zhenli1031 [AT] gmail.com).
- Our code is based on the well-known restoration toolbox BasicSR.
The code is released under the Creative Commons Attribution-NonCommercial 4.0 International license for non-commercial use only. Any commercial use requires formal permission in advance.
- AIM 2020: Scene Relighting and Illumination Estimation Challenge [Webpage] [Paper]
- NTIRE 2021 Depth Guided Image Relighting Challenge [Webpage] [Paper]
- Deep Single Portrait Image Relighting [Github] [Paper] [Supp]
- Multi-modal Bifurcated Network for Depth Guided Image Relighting [Github] [Paper]
- Physically Inspired Dense Fusion Networks for Relighting [Paper]
- LPIPS [Github] [Paper]