
SfMLearner

This codebase (in progress) implements the system described in the paper:

Unsupervised Learning of Depth and Ego-Motion from Video

Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe

In CVPR 2017 (Oral).

See the project webpage for more details. Please contact Tinghui Zhou (tinghuiz@berkeley.edu) if you have any questions.

Prerequisites

This codebase was developed and tested with TensorFlow 1.0, CUDA 8.0, and Ubuntu 16.04.
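As a quick sanity check (not part of the original instructions), you can confirm your TensorFlow version and CUDA/GPU visibility before running anything; the expected outputs noted in the comments are assumptions based on the versions above:

```python
# Illustrative environment check: confirm the TensorFlow version and that a
# CUDA-enabled GPU is visible to TensorFlow.
import tensorflow as tf

print(tf.__version__)                # expected: 1.0.x
print(tf.test.is_built_with_cuda())  # True if this build links against CUDA
print(tf.test.gpu_device_name())     # e.g. '/gpu:0' when a GPU is available
```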

Running the single-view depth demo

We provide demo code for running our single-view depth prediction model. First, download the pre-trained model by running:

bash ./models/download_model.sh

Then you can use the provided IPython notebook demo.ipynb to run the demo.
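The sketch below outlines the inference pattern the notebook follows: build the depth graph, restore the downloaded checkpoint, and predict a depth map for a single image. The `SfMLearner` class with its `setup_inference`/`inference` helpers, the checkpoint filename, the sample image path, and the input resolution are assumptions here; check demo.ipynb for the exact names and values used in this repo.

```python
# Hedged sketch of single-view depth inference (TF 1.x style); not the
# official demo -- see demo.ipynb for the authoritative version.
import numpy as np
import PIL.Image as pil
import tensorflow as tf

from SfMLearner import SfMLearner  # assumed module/class name in this repo

img_height, img_width = 128, 416   # assumed network input resolution
ckpt_file = 'models/model-190532'  # placeholder checkpoint path

# Load a test image and resize it to the network resolution.
img = pil.open('misc/sample.png')  # placeholder sample image path
img = np.array(img.resize((img_width, img_height), pil.ANTIALIAS))

# Build the depth-prediction graph in inference mode.
sfm = SfMLearner()
sfm.setup_inference(img_height, img_width, mode='depth')

# Restore the pre-trained weights and run the network on one image.
saver = tf.train.Saver([var for var in tf.model_variables()])
with tf.Session() as sess:
    saver.restore(sess, ckpt_file)
    pred = sfm.inference(img[None, :, :, :], sess, mode='depth')
    # pred holds the predicted depth map; the notebook shows how to visualize it.
```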

TODO List (after NIPS deadline)

  • Full training code for Cityscapes and KITTI.
  • Evaluation code for the KITTI experiments.

Disclaimer

This is the authors' implementation of the system described in the paper and not an official Google product.