😈BAD-RFs: Bundle-Adjusted Radiance Fields from Degraded Images with Continuous-time Motion Models
This repo contains an accelerated reimplementation of our CVPR paper BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields, based on the nerfstudio framework.
In the future, we will continue to explore bundle-adjusted radiance fields and add more accelerated implementations to this repo, such as a reimplementation of our ICLR paper USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields.
Deblurring & novel-view synthesis results on Deblur-NeRF's real-world motion-blurred data:
(Result videos: ball.mp4, decoration.mp4, girl.mp4, heron.mp4)
Left: BAD-NeRFacto deblurred novel-view renderings; Right: input images.
You may check out the original nerfstudio repo for prerequisites and dependencies.
Our codebase is currently built on top of the latest version of nerfstudio (v1.0.2), so if you have an older version of nerfstudio installed, please git clone the main branch and install the latest version.
TL;DR: You can install nerfstudio with:

```bash
# (Optional) create a fresh conda env
conda create --name nerfstudio -y python=3.10
conda activate nerfstudio

# install dependencies
pip install --upgrade pip setuptools
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

# install nerfstudio!
git clone https://github.com/nerfstudio-project/nerfstudio
cd nerfstudio
pip install -e .
```
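If the installation succeeded, the nerfstudio CLI should now be on your `PATH`; a quick sanity check:

```bash
# should print nerfstudio's CLI help, including the list of available methods
ns-train --help
```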
In addition, we use pypose to implement the pose interpolation. You can install it with:

```bash
pip install pypose
```
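For reference, below is a minimal sketch of the kind of continuous-time pose interpolation pypose enables: interpolating between the exposure's start and end poses through the Lie algebra se(3). The poses and time grid here are random illustrations, not the repo's actual trajectory model:

```python
import torch
import pypose as pp

# Two camera poses bracketing the exposure (random here, for illustration)
T_start = pp.randn_SE3()
T_end = pp.randn_SE3()

# Relative motion between them, expressed in the Lie algebra se(3)
xi = (T_start.Inv() @ T_end).Log()

# Sample virtual camera poses at normalized times t in [0, 1]
for t in torch.linspace(0, 1, 5):
    T_t = T_start @ pp.Exp(pp.se3(t * xi.tensor()))
    print(t.item(), T_t)
```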
Then you can clone and install this repo as a python package with:

```bash
git clone https://github.com/WU-CVGL/Bad-RFs
cd Bad-RFs
pip install -e .
```
As described in the previous BAD-NeRF paper, we re-rendered Deblur-NeRF's synthetic dataset with 51 interpolations per blurry image (a sketch of this blur formation model is given below). Additionally, in the previous BAD-NeRF paper we ran COLMAP directly on the blurry images alone, with neither ground-truth camera intrinsics nor sharp novel-view images. We found this quite challenging for COLMAP - it may fail to reconstruct the scene, forcing us to re-run it several times. To this end, we provide a new set of data, named `bad-nerf-gtK-colmap-nvs`, where we ran COLMAP with ground-truth camera intrinsics over both the blurry and the sharp novel-view images.
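For intuition, the re-rendering follows the physical blur formation model used in BAD-NeRF: a blurry image is approximately the average, in linear radiance, of the virtual sharp images captured along the camera trajectory during the exposure. Below is a minimal sketch of that model; the tensor shapes and the gamma value are illustrative assumptions, not the dataset's exact rendering pipeline:

```python
import torch

def synthesize_blurry_image(virtual_frames: torch.Tensor, gamma: float = 2.2) -> torch.Tensor:
    """Average n virtual sharp frames of shape (n, H, W, 3) captured along the
    in-exposure trajectory. Inputs are assumed gamma-encoded in [0, 1];
    the averaging happens in linear radiance (gamma = 2.2 is an assumption)."""
    linear = virtual_frames.clamp(0.0, 1.0) ** gamma  # decode to linear radiance
    blurry_linear = linear.mean(dim=0)                # integrate over the exposure time
    return blurry_linear ** (1.0 / gamma)             # re-encode for display

# e.g. 51 virtual frames per blurry image, as in the re-rendered dataset
frames = torch.rand(51, 4, 6, 3)
print(synthesize_blurry_image(frames).shape)  # torch.Size([4, 6, 3])
```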
You can directly download the `real_camera_motion_blur` folder from Deblur-NeRF.
- Use the `ns-process-data` tool from Nerfstudio to process deblur-nerf training images. For example, if the dataset from BAD-NeRF is in `llff_data`, execute:

  ```bash
  ns-process-data images \
      --data llff_data/blurtanabata/images \
      --output-dir data/my_data/blurtanabata
  ```

- The folder `data/my_data/blurtanabata` is ready.
Note: Although nerfstudio does not model the NDC scene contraction for LLFF data, we found that `scale_factor = 0.25` works well on LLFF datasets. If your data is captured in an LLFF fashion (i.e. forward-facing), instead of object-centric like Mip-NeRF 360, you can pass the `scale_factor = 0.25` parameter to the nerfstudio dataparser (which is already set as the default in our `DeblurNerfDataParser`), e.g.:

```bash
ns-train bad-nerfacto --data data/my_data/my_seq --vis viewer+tensorboard nerfstudio-data --scale_factor 0.25
```
For the Deblur-NeRF synthetic dataset, train with:

```bash
ns-train bad-nerfacto \
    --data data/bad-nerf-gtK-colmap-nvs/blurtanabata \
    --vis viewer+tensorboard \
    deblur-nerf-data
```

For the Deblur-NeRF real dataset, train with:

```bash
ns-train bad-nerfacto \
    --data data/real_camera_motion_blur/blurdecoration \
    --vis viewer+tensorboard \
    deblur-nerf-data
```
For custom data processed with `ns-process-data`, train with:

```bash
ns-train bad-nerfacto \
    --data data/my_data/blurtanabata \
    --vis viewer+tensorboard \
    nerfstudio-data --eval_mode "all"
```
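Since `--vis viewer+tensorboard` also writes TensorBoard event files, you can monitor the training curves with TensorBoard (assuming nerfstudio's default `outputs` directory):

```bash
tensorboard --logdir outputs
```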
After training, you can render a novel-view video along a trajectory interpolated from the training poses with:

```bash
ns-render interpolate \
    --load-config outputs/tanabata/bad-nerfacto/<your_experiment_date_time>/config.yml \
    --pose-source train \
    --frame-rate 30 \
    --interpolation-steps 10 \
    --output-path renders/<your_filename>.mp4
```
Note 1: You can add the `--render-nearest-camera True` option to compare with the blurry inputs, but it will slow down the rendering process significantly.

Note 2: The working directory when executing this command must be the parent of `outputs`, i.e. the same directory used during training.

Note 3: You can find more information about this command in the nerfstudio docs.
Open this repo with your IDE, create a run configuration, and set the Python script to execute to `<nerfstudio_path>/nerfstudio/scripts/train.py`, with the parameters above.
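For reference, such a run configuration is equivalent to launching the training script directly from the command line (this mirrors the custom-data example above; `<nerfstudio_path>` is your local nerfstudio checkout):

```bash
# same as `ns-train`, but invoked via the script path, so an IDE debugger can attach
python <nerfstudio_path>/nerfstudio/scripts/train.py bad-nerfacto \
    --data data/my_data/blurtanabata \
    --vis viewer+tensorboard \
    nerfstudio-data --eval_mode "all"
```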
For the bad-nerfacto rows, the two values in each cell are the results at 5k and 30k training steps, respectively (cf. the Train Time column):

| Model | Dataset | PSNR↑ | SSIM↑ | LPIPS↓ | Train Time (steps@time) |
|---|---|---|---|---|---|
| BAD-NeRF (paper) | Cozy2room | 32.15 | 0.9170 | 0.0547 | 200k@11h |
| bad-nerfacto | Cozy2room | 29.74 / 31.59 | 0.8983 / 0.9403 | 0.0910 / 0.0406 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | Factory | 32.08 | 0.9105 | 0.1218 | 200k@11h |
| bad-nerfacto | Factory | 31.00 / 32.97 | 0.9008 / 0.9381 | 0.1358 / 0.0929 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | Pool | 33.36 | 0.8912 | 0.0802 | 200k@11h |
| bad-nerfacto | Pool | 31.64 / 33.62 | 0.8554 / 0.9079 | 0.1250 / 0.0584 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | Tanabata | 27.88 | 0.8642 | 0.1179 | 200k@11h |
| bad-nerfacto | Tanabata | 26.88 / 29.32 | 0.8524 / 0.9133 | 0.1450 / 0.0895 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | Trolley | 29.25 | 0.8892 | 0.0833 | 200k@11h |
| bad-nerfacto | Trolley | 27.45 / 31.00 | 0.8675 / 0.9371 | 0.1222 / 0.0445 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | ArchViz-low | 31.27 | 0.9005 | 0.1503 | 200k@11h |
| bad-nerfacto | ArchViz-low | 26.70 / 27.03 | 0.8893 / 0.9046 | 0.1672 / 0.1267 | 5k@200s / 30k@18min |
| BAD-NeRF (paper) | ArchViz-high | 28.07 | 0.8234 | 0.2460 | 200k@11h |
| bad-nerfacto | ArchViz-high | 26.22 / 27.32 | 0.8649 / 0.8894 | 0.2504 / 0.2061 | 5k@200s / 30k@18min |
Tested with an AMD Ryzen 7950X CPU and an NVIDIA RTX 4090 GPU on Manjaro Linux, with CUDA 12.1 and PyTorch 2.0.1. Training speed may vary with different configurations.
If you find this useful, please consider citing:
```bibtex
@misc{zhao2023badnerfs,
  title  = {{Bad-RFs: Bundle-adjusted Radiance Fields from Degraded Images with Continuous-time Motion Models}},
  author = {Zhao, Lingzhe and Wang, Peng and Liu, Peidong},
  year   = {2023},
  note   = {{https://github.com/WU-CVGL/Bad-RFs}}
}

@InProceedings{wang2023badnerf,
  title     = {{BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields}},
  author    = {Wang, Peng and Zhao, Lingzhe and Ma, Ruijie and Liu, Peidong},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {4170--4179}
}
```
- Kudos to the Nerfstudio team for their amazing framework:
```bibtex
@inproceedings{nerfstudio,
  title     = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
  author    = {Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi, Brent
               and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin,
               Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa, Angjoo},
  year      = {2023},
  booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
  series    = {SIGGRAPH '23}
}
```