As the name suggests, DE-SRFREN is a video restoration pipeline that pulls from various cutting-edge technologies and merges them into one processing pipeline, for videos, to rule them all. It borrows AI techniques from multiple contributors; these techniques are listed on our releases page. If you like our project, please give us a star, and don't forget to star the other projects used by the video restoration pipeline 🤠
NOTE: only one video can be processed at a time.
Setting up the environment
# Make sure you have git installed
git clone https://github.com/cliffordkleinsr/DE-SRFREN.git
cd DE-SRFREN
# Make sure you have python installed -.-"
# Install basicsr
pip install basicsr
# Install facexlib
# We use face detection and face restoration helper in the facexlib package
pip install facexlib #parsing path net and resnet faces
pip install realesrgan
pip install gfpgan
pip install -r requirements.txt
python setup.py develop
As a side note, make sure you have PyTorch compiled with CUDA binaries installed; otherwise inference will fall back to the CPU and be significantly slower.
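A quick sanity check, assuming PyTorch is already installed, to confirm the CUDA binaries are visible:
# Verify PyTorch can see the GPU
import torch
print(torch.__version__)           # installed PyTorch build
print(torch.cuda.is_available())   # True means GPU inference is available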
- Basic argument structure:
-i or --input, your input video directory
-fo or --frame_output, your frame output directory
-vo or --video_output, your video output directory
-h or --help, for help with arguments (see the example below)
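For example, to print the full list of arguments:
python inference.py -h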
- For quick inference with the GFPGAN variant use:
python inference.py -i inputs/videos -fo merged_sequence -vo results --face_enhance --bg_upsampler None #for faster inference
#or
python inference.py -i inputs/videos -fo merged_sequence -vo results --face_enhance #to super resolve your image after restoration
- For quick inference with the VQFR variant use:
NOTE: only usable with v0.0.1
python inference.py -i inputs/videos -fo merged_sequence -vo results --vqfr_enhance -v 2.0 -s 2 -f 0.1 --bg_upsampler None #for faster inference
#or
python inference.py -i inputs/videos -fo merged_sequence -vo results --vqfr_enhance -v 2.0 -s 2 -f 0.1 #to super resolve your image after restoration
Please note that VQFR has its own caveats.
- To use the super-resolution pipeline run:
python inference.py -i inputs/videos -fo merged_sequence -vo results
- To encode an image sequence to an H.264 MP4, run:
python test.py #results will be in the results folder
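If you are curious what this step does under the hood, here is a minimal sketch of writing a frame folder out as an H.264 MP4 with imageio; the file names, frame rate, and .png extension are assumptions for illustration, not the actual contents of test.py.
# Hypothetical sketch only -- not the real test.py
import glob
import imageio

frames = sorted(glob.glob("merged_sequence/*.png"))  # assumed frame location and extension
writer = imageio.get_writer("results/processed.mp4", fps=30, codec="libx264")  # fps is a placeholder
for path in frames:
    writer.append_data(imageio.imread(path))  # append each frame to the H.264 stream
writer.close()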
Original | Processed
---|---
sher10s.1.mp4 | processed.mp4
- Take each video frame and turn it into an image (see the sketch after this list)
- Super-resolve the image
- Restore the faces in each frame
- Merge the frames back into an H.264 MP4
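As an illustration of the first step above, here is a minimal sketch of splitting a video into frame images with imageio; the input file name is a placeholder and this is not the pipeline's actual code.
# Hypothetical sketch of frame extraction
import os
import imageio

reader = imageio.get_reader("inputs/videos/input.mp4")  # placeholder input video
os.makedirs("merged_sequence", exist_ok=True)
for idx, frame in enumerate(reader):
    # each frame comes back as an RGB numpy array; save it for per-frame enhancement
    imageio.imwrite(f"merged_sequence/frame_{idx:06d}.png", frame)
reader.close()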
- Speed up inference *(main focus)*
- Frame generation (24-60 FPS)
- More support for different video formats
- Colorize black-and-white images
- Lossless decoding and encoding
- Sound restoration
@InProceedings{clifford2023desrfren,
author = {Clifford Njoroge},
title = {DE-SRFREN: Video Restoration Processing Pipeline},
year = {2023}
}
Real-ESRGAN
@InProceedings{wang2021realesrgan,
author = {Xintao Wang and Liangbin Xie and Chao Dong and Ying Shan},
title = {Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data},
booktitle = {International Conference on Computer Vision Workshops (ICCVW)},
year = {2021}
}
VQFR
@inproceedings{gu2022vqfr,
title={VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder},
author={Gu, Yuchao and Wang, Xintao and Xie, Liangbin and Dong, Chao and Li, Gen and Shan, Ying and Cheng, Ming-Ming},
year={2022},
booktitle={ECCV}
}
GFPGAN
@InProceedings{wang2021gfpgan,
author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2021}
}
IMAGEIO