Project page | Paper | Data
We present a novel ray-based continuous 3D shape representation, called RayDF. Our method renders an 800 x 800 depth image 1000x faster than coordinate-based methods.
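The speedup comes from the ray-based formulation: depth for an entire image takes one batched network query, one ray per pixel, rather than many point samples per ray. A minimal sketch of this rendering pattern, where `ray_distance_net` is a hypothetical stand-in for the trained network (the real model, ray parameterization, and camera handling live in this repo):

```python
import numpy as np

def ray_distance_net(rays):
    """Hypothetical stand-in for a trained ray-surface distance network:
    maps each ray (origin, direction) to its first surface-hit distance.
    The 'scene' here is just a plane at z = 2, purely for illustration."""
    origins, dirs = rays[:, :3], rays[:, 3:]
    return (2.0 - origins[:, 2]) / dirs[:, 2]

H = W = 800
u, v = np.meshgrid(np.arange(W), np.arange(H))
# Toy pinhole camera at the origin looking down +z (focal length 800 px).
dirs = np.stack([(u - W / 2) / 800.0,
                 (v - H / 2) / 800.0,
                 np.ones_like(u, dtype=np.float64)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
origins = np.zeros_like(dirs)
rays = np.concatenate([origins, dirs], axis=-1).reshape(-1, 6)

# One batched query per pixel yields the whole depth image.
depth = ray_distance_net(rays).reshape(H, W)
```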
Create a conda environment with Miniconda:
conda create -n raydf python=3.8 -y
conda activate raydf
Install all dependencies by running:
# install PyTorch
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# install other dependencies
pip install -r requirements.txt
In this paper, we conduct experiments on the following three datasets:
- Blender [2.03GB] [Google Drive][Baidu Netdisk]: We use 8 objects from the realistic synthetic 360-degree Blender dataset.
- DM-SR [1.71GB] [Google Drive][Baidu Netdisk]: We use 8 synthetic indoor scenes from the DM-SR dataset.
- ScanNet [4.34GB] [Google Drive][Baidu Netdisk]: We use 6 scenes (scene0004_00, scene0005_00, scene0009_00, scene0010_00, scene0030_00, scene0031_00) from the ScanNet dataset.
The pre-processed data can be automatically downloaded by running the following script:
# download all datasets
sh datasets/download.sh
# download one of the datasets
sh datasets/download.sh blender
sh datasets/download.sh dmsr
sh datasets/download.sh scannet
To train a dual-ray visibility classifier for different scenes, specify --scene:
CUDA_VISIBLE_DEVICES=0 python run_cls.py --config configs/blender_cls.txt --scene lego
CUDA_VISIBLE_DEVICES=0 python run_cls.py --config configs/dmsr_cls.txt --scene bathroom
CUDA_VISIBLE_DEVICES=0 python run_cls.py --config configs/scannet_cls.txt --scene scene0004_00
After the classifier finishes training, modify ckpt_path_cls in the config file and then train the ray-surface distance network:
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/dmsr.txt --scene bathroom
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/scannet.txt --scene scene0004_00
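For reference, the ckpt_path_cls edit is a single line in the scene's config file. The key/value syntax below follows the existing configs; the path is a placeholder, so point it at wherever your classifier run actually saved its checkpoint:

```
# e.g., in configs/blender.txt (illustrative path)
ckpt_path_cls = logs/blender_cls/lego/checkpoint.tar
```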
To train a ray-surface distance network with the radiance branch, specify --rgb_layer:
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --rgb_layer 2
Alternatively, we provide a script for easy sequential training of the classifier and ray-surface distance network:
sh run.sh <gpu_id> <dataset_name> <scene_name>
# e.g., sh run.sh 0 blender chair
To evaluate the dual-ray visibility classifier:
CUDA_VISIBLE_DEVICES=0 python run_cls.py --config configs/blender_cls.txt --scene lego --eval_only
To evaluate the ray-surface distance network:
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --eval_only
# remove outliers
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --eval_only --denoise
# compute surface normals
CUDA_VISIBLE_DEVICES=0 python run_mv.py --config configs/blender.txt --scene lego --eval_only --grad_normal
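The --grad_normal flag derives surface normals from the distance field's gradients. As a generic geometric sanity check (not this repo's implementation), normals can also be recovered from any rendered depth map by back-projecting pixels and crossing the spatial derivatives of the point map; the intrinsics below are hypothetical:

```python
import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    """Estimate per-pixel camera-space surface normals from a depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel to a 3D point via the pinhole model.
    pts = np.stack([(u - cx) / fx * depth,
                    (v - cy) / fy * depth,
                    depth], axis=-1)
    # Finite-difference derivatives of the point map along image axes.
    dp_du = np.gradient(pts, axis=1)
    dp_dv = np.gradient(pts, axis=0)
    # The normal is perpendicular to both tangent directions.
    n = np.cross(dp_du, dp_dv)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
```

For a fronto-parallel plane (constant depth), this returns normals pointing straight down the camera's z-axis, as expected.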
Checkpoints for all three datasets are available for download from Google Drive or Baidu Netdisk.
If you find our work useful in your research, please consider citing:
Licensed under the CC BY-NC-SA 4.0 license; see LICENSE.