License: CC BY-NC-SA 4.0

# OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos (ICML 2024)

Ziyang Song, Jinxi Li, Bo Yang

## Overview

We propose the first framework to represent dynamic 3D scenes in infinitely many ways from a monocular RGB video.

Our method enables infinite sampling of different 3D scenes, all of which match the input monocular video in the observed views:
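The ambiguity behind "infinitely many" valid scenes can be seen in a minimal sketch (not from the paper's code): under a pinhole camera, rescaling the scene about the camera center leaves every projected pixel unchanged, so a single view cannot pin down a unique 3D geometry. Global rescaling shown here is only the simplest instance of the ambiguity the paper exploits; the focal length and point distribution are arbitrary choices for illustration.

```python
import numpy as np

def project(points, f=500.0):
    """Pinhole projection of Nx3 points (camera at origin, +z forward)."""
    return f * points[:, :2] / points[:, 2:3]

rng = np.random.default_rng(0)
points = rng.uniform(1.0, 5.0, size=(100, 3))  # points in front of the camera

uv = project(points)
uv_scaled = project(2.7 * points)  # the same scene at a different global scale

# Scaling by any s > 0 cancels in f * (s*x) / (s*z), so both scenes
# produce identical pixels and are equally consistent with the image.
print(np.allclose(uv, uv_scaled))  # prints True
```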

## 1. Environment

Please first install a GPU-enabled PyTorch build that fits your machine. We have tested with PyTorch 1.13.0.

Then please refer to the official guide to install PyTorch3D. We have tested with PyTorch3D 0.7.5.

Install the other dependencies:

```shell
pip install -r requirements.txt
```
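Putting the steps above together, a full setup might look like the following sketch. The conda environment name, Python version, and CUDA wheel index are illustrative assumptions, not taken from this repository; pick the variants that match your machine.

```shell
# Illustrative environment setup; names and CUDA version are assumptions.
conda create -n osn python=3.9 -y
conda activate osn

# Tested versions from this README: PyTorch 1.13.0, PyTorch3D 0.7.5.
pip install torch==1.13.0 --index-url https://download.pytorch.org/whl/cu117

# One way to install PyTorch3D; see its official guide for alternatives.
pip install "git+https://github.com/facebookresearch/pytorch3d.git@v0.7.5"

# Remaining dependencies of this repository
pip install -r requirements.txt
```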

## 2. Data preparation

Our processed datasets can be downloaded from Google Drive.

If you want to work on your own dataset, please refer to the data preparation guide.

## 3. Pre-trained models

You can download all our pre-trained models from Google Drive.

## 4. Train

```shell
python train.py config/indoor/chessboard.yaml --use_wandb
```

Specify `--use_wandb` to log the training with WandB.

## 5. Test

### Sample valid scales

```shell
python sample.py config/indoor/chessboard.yaml --checkpoint ${CHECKPOINT}
```

`${CHECKPOINT}` is the checkpoint iteration to be loaded, e.g., `30000`.

### Render

```shell
python test.py config/indoor/chessboard.yaml --checkpoint ${CHECKPOINT} --n_sample_scale_test 1000 --scale_id ${SCALE_ID} --render_test
```

Specify `--render_test` to render testing views; otherwise, training views are rendered.

### Evaluate

```shell
python evaluate.py --dataset_path ${DATASET_PATH} --render_path ${RENDER_PATH} --split test --eval_depth --eval_segm --mask
```

Specify `--eval_depth` to evaluate depth, `--eval_segm` to evaluate segmentation, and `--mask` to apply the co-visibility mask as in DyCheck.
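Tying the testing steps together, one might sample scales once and then render and evaluate several sampled scenes in a loop. The checkpoint iteration, scale ids, and the dataset/render paths below are illustrative placeholders, not paths defined by this repository:

```shell
CONFIG=config/indoor/chessboard.yaml
CHECKPOINT=30000   # example iteration from this README

# 1. Sample valid scales for the trained model
python sample.py ${CONFIG} --checkpoint ${CHECKPOINT}

# 2. Render and evaluate a few sampled scenes on the test split.
#    Paths below are placeholders; substitute your own locations.
for SCALE_ID in 0 1 2; do
    python test.py ${CONFIG} --checkpoint ${CHECKPOINT} \
        --n_sample_scale_test 1000 --scale_id ${SCALE_ID} --render_test
    python evaluate.py --dataset_path ${DATASET_PATH} --render_path ${RENDER_PATH} \
        --split test --eval_depth --eval_segm --mask
done
```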

## Citation

If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{song2024,
  title={{OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos}},
  author={Song, Ziyang and Li, Jinxi and Yang, Bo},
  booktitle={ICML},
  year={2024}
}
```