- This is a simple PyTorch implementation of deep reinforcement learning (DRL; specifically PPO) for image denoising via residual recovery.
- A detailed description can be found in our paper R3L: Connecting Deep Reinforcement Learning to Recurrent Neural Networks for Image Denoising via Residual Recovery (accepted at ICIP 2021).
- Although this project targets a specific task, the framework is designed to be as simple as possible so that it can be applied to other tasks trained in a "Batch Environment" (Batch * Channel * Height * Width) by slightly modifying the corresponding network and environment.
- Existing PPO implementations usually focus on environments whose states have shape (Height * Width), which leaves a gap for computer vision applications where (Channel * Height * Width) states are needed.
- This implementation aims to provide an easy-to-modify PPO framework for CV tasks.
- The PPO variant used here is PPO-clip; a minimal sketch of the clipped objective is shown below.
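For orientation, here is a minimal sketch of the PPO-clip surrogate loss, assuming flattened per-action log-probabilities and advantages; the function name and the 0.2 clipping value are illustrative and not necessarily what PPO_batch.py uses.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Probability ratio pi_new / pi_old, computed in log space for stability.
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    # Clipping removes the incentive to push the ratio outside [1 - eps, 1 + eps].
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Elementwise minimum gives a pessimistic bound; negate to minimize.
    return -torch.min(unclipped, clipped).mean()
```

Taking the elementwise minimum of the clipped and unclipped terms yields a pessimistic lower bound on the surrogate objective, which is what makes several gradient steps on the same batch of rollouts safe.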
- Customize the environment by implementing task-specific reset() and step() methods in environment.py (a skeleton sketch follows this list).
- Customize the data file paths in PPO_batch.py.
- Customize data augmentation in Load_batch.py (an illustrative augmentation example also follows this list).
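As referenced above, a minimal sketch of a batch environment is given below; aside from reset() and step(), the class name, constructor argument, and placeholder reward are hypothetical, assuming states shaped (Batch, Channel, Height, Width).

```python
import torch

class BatchEnvironment:
    """Hypothetical skeleton of the batch environment in environment.py."""

    def __init__(self, max_steps=5):
        self.max_steps = max_steps  # illustrative episode length
        self.state = None
        self.t = 0

    def reset(self, noisy_batch):
        # Task-specific: start each episode from a fresh noisy batch (B, C, H, W).
        self.state = noisy_batch.clone()
        self.t = 0
        return self.state

    def step(self, action):
        # Task-specific: for residual recovery, the action refines the current
        # estimate. A real implementation would compute a reward here (e.g.,
        # the decrease in distortion); this is only a placeholder.
        self.state = self.state + action
        self.t += 1
        done = self.t >= self.max_steps
        reward = torch.zeros(self.state.size(0))  # placeholder, one value per image
        return self.state, reward, done
```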
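Likewise, a small illustrative example of augmenting (Batch, Channel, Height, Width) tensors; the function name and the particular transforms are assumptions, not the actual contents of Load_batch.py.

```python
import torch

def augment_batch(batch):
    # Random horizontal flip (width axis of a (B, C, H, W) tensor).
    if torch.rand(1).item() < 0.5:
        batch = torch.flip(batch, dims=[3])
    # Random vertical flip (height axis).
    if torch.rand(1).item() < 0.5:
        batch = torch.flip(batch, dims=[2])
    # Random 90-degree rotation in the spatial plane (assumes square patches).
    if torch.rand(1).item() < 0.5:
        batch = torch.rot90(batch, k=1, dims=[2, 3])
    return batch
```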
- PyTorch >= 1.6
- OpenCV
If you use this code, please cite our publication:
Rongkai Zhang, Jiang Zhu, Zhiyuan Zha, Justin Dauwels, Bihan Wen, "R3L: Connecting Deep Reinforcement Learning to Recurrent Neural Networks for Image Denoising via Residual Recovery," ICIP 2021.
Bibtex:
@inproceedings{zhang2021r3l,
  title={R3L: Connecting Deep Reinforcement Learning to Recurrent Neural Networks for Image Denoising via Residual Recovery},
  author={Zhang, Rongkai and Zhu, Jiang and Zha, Zhiyuan and Dauwels, Justin and Wen, Bihan},
  booktitle={2021 IEEE International Conference on Image Processing (ICIP)},
  pages={1624--1628},
  year={2021},
  organization={IEEE}
}