Download the DIV2K dataset and organize the data as follows:
```
tf_estimator_barebone/data/DIV2K/
├── DIV2K_train_HR
├── DIV2K_train_LR_bicubic
│   ├── X2
│   ├── X3
│   └── X4
├── DIV2K_valid_HR
└── DIV2K_valid_LR_bicubic
    ├── X2
    ├── X3
    └── X4
```
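For orientation, here is a minimal sketch of how HR/LR pairs under this layout could be read with `tf.data`. It is not the repository's actual input pipeline: the `DATA_DIR`, `SCALE`, and `LR_PATCH` constants, the `0001.png` / `0001x2.png` file-name pairing, and the random-patch cropping are illustrative assumptions (TensorFlow 2.x API).

```python
import os
import glob
import tensorflow as tf

# Illustrative constants; adjust to your setup.
DATA_DIR = "tf_estimator_barebone/data/DIV2K"
SCALE = 2          # X2 / X3 / X4
LR_PATCH = 48      # LR patch size; the HR patch is LR_PATCH * SCALE

def list_pairs(split):
    """Pair HR files with their bicubic LR counterparts (assumes 0001.png <-> 0001x2.png naming)."""
    hr_files = sorted(glob.glob(os.path.join(DATA_DIR, f"DIV2K_{split}_HR", "*.png")))
    lr_dir = os.path.join(DATA_DIR, f"DIV2K_{split}_LR_bicubic", f"X{SCALE}")
    lr_files = [os.path.join(lr_dir,
                             os.path.basename(f).replace(".png", f"x{SCALE}.png"))
                for f in hr_files]
    return lr_files, hr_files

def load_pair(lr_path, hr_path):
    """Decode one LR/HR pair and take aligned random patches."""
    lr = tf.image.decode_png(tf.io.read_file(lr_path), channels=3)
    hr = tf.image.decode_png(tf.io.read_file(hr_path), channels=3)
    lr_shape = tf.shape(lr)
    # Pick a random offset in LR coordinates, then crop the matching HR region.
    y = tf.random.uniform([], 0, lr_shape[0] - LR_PATCH + 1, dtype=tf.int32)
    x = tf.random.uniform([], 0, lr_shape[1] - LR_PATCH + 1, dtype=tf.int32)
    lr_patch = lr[y:y + LR_PATCH, x:x + LR_PATCH]
    hr_patch = hr[y * SCALE:(y + LR_PATCH) * SCALE,
                  x * SCALE:(x + LR_PATCH) * SCALE]
    return tf.cast(lr_patch, tf.float32), tf.cast(hr_patch, tf.float32)

def train_dataset(batch_size=16):
    lr_files, hr_files = list_pairs("train")
    ds = tf.data.Dataset.from_tensor_slices((lr_files, hr_files))
    ds = ds.shuffle(len(lr_files)).map(load_pair, num_parallel_calls=4)
    return ds.batch(batch_size).prefetch(1)
```

Validation pairs can be listed the same way with `split="valid"`.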
Install the dependencies:

```bash
conda install tensorflow-gpu pillow
```
```bash
python -m datasets.div2k --model-dir MODEL_DIR --input-dir INPUT_DIR --output-dir OUTPUT_DIR
```
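What this command produces depends on the repository, but assuming it writes super-resolved PNG outputs for the images in `INPUT_DIR` into `OUTPUT_DIR`, a rough sanity check against the HR ground truth might look like the sketch below. The PSNR numbers in the table that follows use the repository's own evaluation protocol, which may crop borders or evaluate a different color space, so values computed this way will not match exactly.

```python
import os
import glob
import numpy as np
from PIL import Image

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized uint8 RGB images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Illustrative paths; sr_dir is whatever was passed as OUTPUT_DIR above.
sr_dir = "OUTPUT_DIR"
hr_dir = "tf_estimator_barebone/data/DIV2K/DIV2K_valid_HR"

scores = []
for sr_path in sorted(glob.glob(os.path.join(sr_dir, "*.png"))):
    # Assumes output files keep the DIV2K numbering, e.g. 0801.png ... 0900.png.
    name = os.path.basename(sr_path)[:4] + ".png"
    sr = np.asarray(Image.open(sr_path).convert("RGB"))
    hr = np.asarray(Image.open(os.path.join(hr_dir, name)).convert("RGB"))
    h, w = min(sr.shape[0], hr.shape[0]), min(sr.shape[1], hr.shape[1])
    scores.append(psnr(sr[:h, :w], hr[:h, :w]))

print("mean PSNR over %d images: %.2f dB" % (len(scores), float(np.mean(scores))))
```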
Compare with WDSR (PyTorch-based)
Networks | Parameters | DIV2K (val) PSNR | Pre-trained models | Training command
---|---|---|---|---
EDSR [1] Baseline | 1,191,324 | 34.63 | Download | `python trainer.py --dataset div2k --model edsr --job-dir ./div2k_edsr`
WDSR [2] Baseline | 1,190,100 | 34.78 | Download | `python trainer.py --dataset div2k --model wdsr --job-dir ./div2k_wdsr`
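For context on the two rows above: EDSR stacks conventional residual blocks, while WDSR widens the features before the ReLU inside each block, so that at a similar parameter count more channels pass through the activation [2]. Below is a simplified Keras sketch of the two block types, not the repository's implementation; the filter widths and expansion factor are illustrative, and the weight normalization used in [2] is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers

def edsr_block(x, filters=64):
    """Plain EDSR-style residual block: conv -> ReLU -> conv, plus identity skip."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.Add()([skip, x])

def wdsr_block(x, filters=32, expansion=4):
    """WDSR-style wide-activation block: expand channels before the ReLU,
    then project back down, keeping the identity path narrow."""
    skip = x
    x = layers.Conv2D(filters * expansion, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.Add()([skip, x])

# Toy usage: both blocks map (H, W, C) -> (H, W, C).
inp = layers.Input(shape=(None, None, 32))
out = wdsr_block(edsr_block(inp, filters=32), filters=32)
model = tf.keras.Model(inp, out)
model.summary()
```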
[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution in conjunction with CVPR 2017. [PDF] [arXiv] [Slide]
[2] Jiahui Yu, Yuchen Fan, Jianchao Yang, Ning Xu, Zhaowen Wang, Xinchao Wang, Thomas Huang, "Wide Activation for Efficient and Accurate Image Super-Resolution", arXiv preprint arXiv:1808.08718. [arXiv] [Code]