Begin by setting up the dependencies. You can create a conda environment with `conda env create -f environment.yml`. Then update the root path in the local configuration file and remove its `.example` suffix. Install torchsearchsorted by following the instructions in its README.
Our framework includes three learned components: a decoder model and a feature-plane super-resolution model, both shared across all 3D scenes, and an individual set of feature planes per 3D scene. You can experiment with our code at different levels by following the directions for any of the three stages below (directions marked with * should only be performed if starting from the stage in which they appear):
Train all three components from scratch:
- Download our training scenes dataset.
- Download the desired (synthetic) test scene from the NeRF dataset and put all scenes in a dataset folder.
- Update the configuration file: add the desired test scene name(s) to the training list, update the scene name(s) in the evaluation list, and set the paths to the scenes dataset folder and to the location for storing the new models.
- Run `python train_nerf.py --config config/TrainModels.yml`
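The steps above assume all scenes sit side by side in one dataset folder. Here is a minimal sketch of that layout; the per-scene contents follow the NeRF synthetic-dataset convention (`transforms_{train,val,test}.json` plus `train`/`val`/`test` image folders), while the dataset root path and scene names are placeholders:

```python
from pathlib import Path

# Sketch: lay out a dataset folder holding several scenes side by side.
# The root path and scene names are placeholders; per-scene contents follow
# the NeRF synthetic convention.
dataset_root = Path("dataset")  # placeholder path
for scene in ["lego", "chair"]:  # example NeRF synthetic scene names
    scene_dir = dataset_root / scene
    for split in ["train", "val", "test"]:
        (scene_dir / split).mkdir(parents=True, exist_ok=True)   # image folder
        (scene_dir / f"transforms_{split}.json").touch()         # camera poses

print(sorted(p.name for p in (dataset_root / "lego").iterdir()))
```

Point the dataset-folder path in the configuration file at `dataset_root`, and list the scene folder names in the training/evaluation lists.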
Use the pre-trained decoder and plane super-resolution models while learning feature planes corresponding to a new 3D scene:
- Download our pre-trained models file and unzip it.
- *Download our training scenes dataset.
- *Download the desired (synthetic) test scene from the NeRF dataset and put all scenes in a dataset folder.
- Learn the feature planes representation for a new test scene:
- Update the configuration file: add the desired test scene name(s) to the training list, update the scene name(s) in the evaluation list, and set the paths to the scenes dataset folder, to the pre-trained models folder, and to the location for storing the new scene's feature planes.
- Run `python train_nerf.py --config config/Feature_Planes_Only.yml`
- Jointly refine all three modules:
- Update the configuration file with the desired scene name (training and evaluation), as well as the paths to the scenes dataset folder, the pre-trained models folder (decoder and SR), the learned scene feature planes (from the previous step), and the location for storing the refined models.
- Run `python train_nerf.py --config config/RefineOnTestScene.yml`
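Both stages above amount to editing the same few configuration fields, so the update can be scripted. All key names in this sketch (`train_scenes`, `eval_scenes`, `dataset_dir`, `models_dir`) are illustrative placeholders, not the real schema of the shipped `config/*.yml` files — check those files for the actual field names:

```python
from pathlib import Path

# Sketch: patch a flat YAML config before launching train_nerf.py.
# Every key name here is a placeholder, not the shipped schema.
config_path = Path("my_config.yml")
config_path.write_text(
    "train_scenes: []\n"
    "eval_scenes: []\n"
    "dataset_dir: /path/to/dataset\n"
    "models_dir: /path/to/models\n"
)

def set_field(path: Path, key: str, value: str) -> None:
    """Replace the value of a top-level 'key: value' line in a flat YAML file."""
    lines = path.read_text().splitlines()
    lines = [f"{key}: {value}" if ln.split(":")[0] == key else ln for ln in lines]
    path.write_text("\n".join(lines) + "\n")

set_field(config_path, "train_scenes", "[lego]")
set_field(config_path, "eval_scenes", "[lego]")
set_field(config_path, "dataset_dir", "datasets/nerf_synthetic")
print(config_path.read_text())
```

A nested YAML config would need a proper parser (e.g. PyYAML) instead of this line-based replacement; the sketch only illustrates which fields change between runs.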
Use the pre-trained decoder and SR models, coupled with the learned feature-plane representation:
- *Download one of our pre-trained models and unzip it, then download the corresponding (synthetic or real world) scene from the NeRF dataset.
- Run `python train_nerf.py --load-checkpoint <path to pre-trained models folder> --eval video --results_path <path to save output images and video>`
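When rendering several pre-trained scenes in a row, the evaluation call can be scripted. The folder layout below (`pretrained/<scene>`, `results/<scene>`) and the scene names are assumptions for illustration, not something the repository mandates:

```python
import subprocess

def eval_command(checkpoint_dir: str, results_dir: str) -> list:
    """Build the evaluation invocation for a single pre-trained scene."""
    return [
        "python", "train_nerf.py",
        "--load-checkpoint", checkpoint_dir,  # pre-trained models folder
        "--eval", "video",
        "--results_path", results_dir,        # output images and video
    ]

for scene in ["lego", "fern"]:  # hypothetical downloaded scenes
    cmd = eval_command(f"pretrained/{scene}", f"results/{scene}")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually render
```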
Optionally, to resume training in either of the first two stages, pass the `--load-checkpoint` argument followed by the path to the saved model folder, and omit the `--config` argument.
Feel free to open a GitHub issue if you run into any problems. Pull requests adding features are welcome too.
This code is available under the MIT License. The code was forked from the nerf-pytorch repository.