
Geometry-based Robust Camera Pose Refinement with 3D Gaussian Splatting (Final Report)

This project was done as a course project for 16-822: Geometry-based Methods for Vision at CMU.

This repository contains an unofficial implementation of the GSLoc paper (arXiv). It includes modified versions of external modules and custom scripts for estimating camera poses using 3D Gaussian Splatting.

Method

[Figure: method overview]


Installation

1. Clone the Repository

git clone https://github.com/SravanChittupalli/3DGS-Pose-Refinement.git
cd 3DGS-Pose-Refinement

2. Set Up the Conda Environment

  1. Ensure you have Conda installed. If not, download and install it from Miniconda or Anaconda.

  2. Create the environment using the environment.yml file provided in this repository:

    conda env create -f environment.yml
  3. Activate the environment:

    conda activate gsloc-env
  4. (Optional) If you want to check the installed packages:

    conda list
  5. Additional installation steps:

    cd public_scaffold_gs
    pip install submodules/diff-gaussian-rasterization

    cd submodules
    git clone https://github.com/ingra14m/diff-gaussian-rasterization-extentions.git --recursive
    cd diff-gaussian-rasterization-extentions
    git checkout filter-depth
    pip install -e .
    cd ../..

    pip install submodules/simple-knn


Datasets Setup

The marepo method has been evaluated on multiple published datasets.

We provide scripts in the datasets folder to automatically download and extract the data in a format that can be readily used by the marepo scripts.
The format is the same as that used by the DSAC* codebase; see here for details.

Important: make sure you have checked the license terms of each dataset before using it.

7-Scenes:

You can use the datasets/setup_7scenes.py script to download the data. To download and prepare the dataset:

cd datasets
# Downloads the data to datasets/7scenes_{chess, fire, ...}
./setup_7scenes.py
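After the script finishes, you can sanity-check that each scene folder follows the expected DSAC*-style layout. The sketch below assumes the usual train/test splits with rgb, poses, and calibration subfolders; adjust if your extracted layout differs:

```python
from pathlib import Path

def check_scene_layout(scene_dir):
    """Return the expected sub-directories missing from a DSAC*-style scene folder."""
    expected = [
        Path(split) / sub
        for split in ("train", "test")
        for sub in ("rgb", "poses", "calibration")
    ]
    root = Path(scene_dir)
    return [str(p) for p in expected if not (root / p).is_dir()]

# Example: check one downloaded scene (path assumed from the comment above).
missing = check_scene_layout("datasets/7scenes_chess")
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("Scene layout looks complete.")
```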

3. Download and Use Checkpoints

Checkpoints should be downloaded into the directory public_mast3r/checkpoints.

Checkpoints

Model name | Training resolutions | Head | Encoder | Decoder
MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric | 512x384, 512x336, 512x288, 512x256, 512x160 | CatMLP+DPT | ViT-L | ViT-B

You can check the hyperparameters we used to train these models in the section: Our Hyperparameters.
Make sure to check the licenses of the datasets we used.

To download a specific model, for example MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth, run the following:

mkdir -p public_mast3r/checkpoints/
wget https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth -P public_mast3r/checkpoints/

For these checkpoints, make sure to agree to the license of all the training datasets we used, in addition to CC-BY-NC-SA 4.0.
The MapFree dataset license in particular is very restrictive. For more information, check CHECKPOINTS_NOTICE.


Pre-trained Models

We also provide the following pre-trained models:

Model (linked) | Description
ACE Heads:
wayspots_pretrain | Pre-trained ACE heads, Wayspots
pretrain | Pre-trained ACE heads, 7-Scenes & 12-Scenes
marepo models:
paper_model | marepo paper models

To run inference with marepo on a test scene, the following components are required:

  1. ACE Encoder:
    The ACE encoder (ace_encoder_pretrained.pt) is pre-trained from the ACE paper and should already be present in the repository by default; place it at public_marepo/.

  2. ACE Heads:

    • The ACE heads should be placed in either public_marepo/logs/wayspots_pretrain/ or public_marepo/logs/pretrain/.
    • We use the pre-trained ACE heads for scene-specific coordinate prediction.
  3. marepo Pose Regression Models:

    • The marepo pose regression models should be placed in logs/paper_model/.
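Since these paths are hard-coded expectations, a small helper can verify the setup before running inference. This is a sketch based on the paths listed above (the exact file and folder names are assumptions; adjust to your checkout):

```python
from pathlib import Path

# Paths as described in this README; adjust if your layout differs.
REQUIRED_FILES = [
    "public_marepo/ace_encoder_pretrained.pt",  # ACE encoder
    "public_mast3r/checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth",
]
REQUIRED_DIRS = [
    "public_marepo/logs/pretrain",  # ACE heads (7-Scenes & 12-Scenes)
    "logs/paper_model",             # marepo pose regression models
]

def missing_components(root="."):
    """Return the required files and directories not found under root."""
    root = Path(root)
    missing = [f for f in REQUIRED_FILES if not (root / f).is_file()]
    missing += [d for d in REQUIRED_DIRS if not (root / d).is_dir()]
    return missing

for item in missing_components():
    print("Missing:", item)
```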

Pre-trained Scaffold GS Models

All scene models trained from SfM ground-truth poses can be downloaded from the link.
Unzip outputs.zip and place the folder in the public_scaffold_gs folder.

If you want to train the Scaffold-GS models yourself, we provide the COLMAP models scaled to match the scale of the ground-truth poses given by the authors of 7-Scenes: Link

4. Run the Code

Run your scripts or modules as needed within the activated environment.

gsloc.py runs on the selected 7-Scenes scenes, performs pose refinement on marepo's initial pose estimates, and generates metrics. Metrics are stored under output_metrics_v2.

python gsloc.py

Make sure all files whose paths are hard-coded in gsloc.py have been downloaded.
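gsloc.py reports rotation and translation errors against the ground-truth poses. For reference, these standard relocalization metrics can be computed as follows (a generic sketch using numpy, not necessarily the exact implementation in gsloc.py):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Rotation error (degrees) and translation error (same units as t)."""
    # The angle of the relative rotation R_gt^T @ R_est.
    cos_angle = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)  # guard against numerical drift
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))
    return rot_err_deg, trans_err

# Identical poses give zero rotation and translation error.
print(pose_errors(np.eye(3), np.zeros(3), np.eye(3), np.zeros(3)))
```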

You may encounter import errors. One likely source is public_scaffold_gs/gaussian_renderer/__init__.py; edit line 12 to use your absolute library path.
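If the failing import is the one above, the fix amounts to pointing that line at your own checkout. An illustrative example of such a line (the placeholder path is hypothetical; substitute your actual absolute path):

```python
import sys

# Illustrative fix for public_scaffold_gs/gaussian_renderer/__init__.py:
# make the bundled submodules importable by prepending their absolute path.
sys.path.insert(0, "/absolute/path/to/3DGS-Pose-Refinement/public_scaffold_gs")
```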

Visualizations

[Figure: visualization results]

Acknowledgments

This project includes three external modules that were modified for this implementation.

The modules were downloaded and adapted for the purposes of this project; their original .git directories were removed so they could be integrated into this repository seamlessly.

