
HOV-SG


This repository is the official implementation of the paper:

Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation

Abdelrhman Werby*, Chenguang Huang*, Martin Büchner*, Abhinav Valada, and Wolfram Burgard.
*Equal contribution.

arXiv preprint arXiv:2403.17846, 2024
(Accepted for Robotics: Science and Systems (RSS), Delft, Netherlands, 2024.)

HOV-SG constructs accurate, open-vocabulary 3D scene graphs of large-scale, multi-story environments and enables robots to navigate them effectively using language instructions.

πŸ— Setup

  1. Clone and set up the HOV-SG repository
git clone https://github.com/hovsg/HOV-SG.git
cd HOV-SG

# set up the virtual environment; install habitat-sim separately afterwards to avoid dependency errors
conda env create -f environment.yaml
conda activate hovsg
conda install habitat-sim -c conda-forge -c aihabitat

# set up the HOV-SG python package
pip install -e .

Open CLIP

HOV-SG uses the Open CLIP model to extract features from RGB-D frames. To download the Open CLIP model checkpoint CLIP-ViT-H-14-laion2B-s32B-b79K, refer to Open CLIP or run:

mkdir checkpoints
wget https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin?download=true -O checkpoints/temp_open_clip_pytorch_model.bin && mv checkpoints/temp_open_clip_pytorch_model.bin checkpoints/laion2b_s32b_b79k.bin

Another option is to use the OVSeg fine-tuned Open CLIP model, which is available here:

pip install gdown
gdown --fuzzy https://drive.google.com/file/d/17C9ACGcN7Rk4UT4pYD_7hn3ytTa3pFb5/view -O checkpoints/ovseg_clip.pth
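
To sanity-check that a downloaded checkpoint loads, you can instantiate it with open_clip. This is a minimal sketch, assuming the laion2b checkpoint from above (the ViT-H-14 architecture name matches that checkpoint):

import open_clip
import torch

# quick check that the checkpoint deserializes and encodes text
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="checkpoints/laion2b_s32b_b79k.bin"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")
with torch.no_grad():
    feats = model.encode_text(tokenizer(["a chair"]))
print(feats.shape)  # expected: torch.Size([1, 1024]) for ViT-H-14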

SAM

HOV-SG uses SAM to generate class-agnostic masks for the RGB-D frames. To download the SAM ViT-H model checkpoint (sam_vit_h_4b8939), execute the following:

wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth -O checkpoints/sam_vit_h_4b8939.pth
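
For reference, class-agnostic masks of the kind HOV-SG consumes can be produced with the segment-anything package roughly as follows (a sketch; the frame path is a placeholder):

import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="checkpoints/sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# placeholder frame path; SAM expects an RGB uint8 array
image = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts with 'segmentation', 'area', 'bbox', ...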

πŸ–ΌοΈ Prepare dataset

Habitat Matterport 3D Semantics

HOV-SG takes posed RGB-D sequences as input. In order to represent hierarchical multi-story scenes, we make use of the Habitat Matterport 3D Semantics dataset (HM3DSem). We provide pose files (data/hm3dsem_poses/) and a script to generate RGB-D sequences using the habitat-sim simulator (see the command below); a minimal sketch of the underlying rendering loop follows the directory listing.

  • Download the Habitat Matterport 3D Semantics dataset.

  • To generate RGB-D sequences, run the following command:

    python data/habitat/gen_hm3dsem_from_poses.py --dataset_dir <hm3dsem_dir> --save_dir data/hm3dsem_walks/
    Make sure that the hm3dsem_dir has the following structure:
    ├── hm3dsem_dir
    │   ├── hm3d_annotated_basis.scene_dataset_config.json # this file is necessary
    │   ├── val
    │   │   └── 00824-Dd4bFSTQ8gi
    │   │         ├── Dd4bFSTQ8gi.basis.glb
    │   │         ├── Dd4bFSTQ8gi.basis.navmesh
    │   │         ├── Dd4bFSTQ8gi.glb
    │   │         ├── Dd4bFSTQ8gi.semantic.glb
    │   │         └── Dd4bFSTQ8gi.semantic.txt
    ...
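
For orientation, the rendering loop behind the script looks roughly like the following. This is a minimal sketch, not the exact implementation; the scene path and pose values are placeholders, and the habitat-sim API names assume a recent habitat-sim release:

import habitat_sim
import numpy as np

# minimal habitat-sim setup for posed RGB-D rendering (paths are placeholders)
backend = habitat_sim.SimulatorConfiguration()
backend.scene_id = "val/00824-Dd4bFSTQ8gi/Dd4bFSTQ8gi.basis.glb"
backend.scene_dataset_config_file = "hm3d_annotated_basis.scene_dataset_config.json"

rgb = habitat_sim.CameraSensorSpec()
rgb.uuid, rgb.sensor_type = "rgb", habitat_sim.SensorType.COLOR
depth = habitat_sim.CameraSensorSpec()
depth.uuid, depth.sensor_type = "depth", habitat_sim.SensorType.DEPTH

agent_cfg = habitat_sim.agent.AgentConfiguration(sensor_specifications=[rgb, depth])
sim = habitat_sim.Simulator(habitat_sim.Configuration(backend, [agent_cfg]))

# replay one recorded pose and grab the observations
state = sim.get_agent(0).get_state()
state.position = np.array([0.0, 0.0, 0.0])  # placeholder pose
sim.get_agent(0).set_state(state)
obs = sim.get_sensor_observations()  # dict with "rgb" and "depth" arrays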
    

We only used the following scenes from the Habitat Matterport 3D Semantics dataset:

  1. 00824-Dd4bFSTQ8gi
  2. 00829-QaLdnwvtxbs
  3. 00843-DYehNKdT76V
  4. 00847-bCPU9suPUw9
  5. 00849-a8BtkwhxdRV
  6. 00861-GLAQ4DNUx5U
  7. 00862-LT9Jq6dN3Ea
  8. 00873-bxsVRursffK
  9. 00877-4ok3usBNeis
  10. 00890-6s7QHgap2fW

To evaluate semantic segmentation capabilities, we used ScanNet and Replica.

ScanNet

To get an RGB-D sequence for ScanNet, download the ScanNet dataset from the official website. The dataset contains RGB-D frames compressed as .sens files. To extract the frames, use the SensReader/python tool provided with ScanNet. We used the following scenes from the ScanNet dataset:

  1. scene0011_00
  2. scene0050_00
  3. scene0231_00
  4. scene0378_00
  5. scene0518_00

Replica

To get an RGB-D sequence for Replica, do not use the original Replica dataset; instead, download the scanned RGB-D trajectories provided by NICE-SLAM, which were rendered from the mesh models of the original Replica dataset:

wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip -O data/Replica.zip && unzip data/Replica.zip -d data/Replica_RGBD && rm data/Replica.zip 

To evaluate against the ground-truth semantic labels, you also need to download the original Replica dataset, as it contains the ground-truth semantic labels as .ply files:

git clone https://github.com/facebookresearch/Replica-Dataset.git data/Replica-Dataset
chmod +x data/Replica-Dataset/download.sh && data/Replica-Dataset/download.sh data/Replica_original
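
The semantic .ply meshes store per-face object IDs. A minimal sketch of reading them with the plyfile package follows; the exact mesh path below is an assumption about the download layout:

from plyfile import PlyData

# hypothetical path inside the original Replica download
ply = PlyData.read("data/Replica_original/room_0/habitat/mesh_semantic.ply")
object_ids = ply["face"].data["object_id"]  # one semantic instance ID per face
print(object_ids.min(), object_ids.max())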

We only used the following scenes from the Replica dataset:

  1. office0
  2. office1
  3. office2
  4. office3
  5. office4
  6. room0
  7. room1
  8. room2

📂 Dataset file structure

The data folder should have the following structure:

├── hm3dsem_walks
│   ├── val
│   │   ├── 00824-Dd4bFSTQ8gi
│   │   │   ├── depth
│   │   │   │   ├── Dd4bFSTQ8gi-000000.png
│   │   │   │   ├── ...
│   │   │   ├── rgb
│   │   │   │   ├── Dd4bFSTQ8gi-000000.png
│   │   │   │   ├── ...
│   │   │   ├── semantic
│   │   │   │   ├── Dd4bFSTQ8gi-000000.png
│   │   │   │   ├── ...
│   │   │   ├── pose
│   │   │   │   ├── Dd4bFSTQ8gi-000000.txt
│   │   │   │   ├── ...
│   │   ├── 00829-QaLdnwvtxbs
│   │   ├── ..
├── Replica
│   ├── office0
│   │   ├── results
│   │   │   ├── depth0000.png
│   │   │   ├── ...
│   │   │   ├── rgb0000.png
│   │   │   ├── ...
│   │   ├── traj.txt
│   ├── office1
│   ├── ...
├── ScanNet
│   ├── scans
│   │   ├── scene0011_00
│   │   │   ├── color
│   │   │   │   ├── 0.jpg
│   │   │   │   ├── ...
│   │   │   ├── depth
│   │   │   │   ├── 0.png
│   │   │   │   ├── ...
│   │   │   ├── poses
│   │   │   │   ├── 0.txt
│   │   │   │   ├── ...
│   │   │   ├── intrinsics
│   │   │   │   ├── intrinsics_color.txt
│   │   │   │   ├── intrinsics_depth.txt
│   │   ├── ..

🚀 Run

Create Scene Graphs (only for Habitat Matterport 3D Semantics):

python application/create_graph.py main.dataset=hm3dsem main.dataset_path=hm3dsem_walks/val/00824-Dd4bFSTQ8gi/ main.save_path=data/scene_graphs/00824-Dd4bFSTQ8gi
This will generate a scene graph for the specified RGB-D sequence and save it. The following files are generated:
├── graph
│   ├── floors
│   │   ├── 0.json
│   │   ├── 0.ply
│   │   ├── 1.json
│   │   ├── ...
│   ├── rooms
│   │   ├── 0_0.json
│   │   ├── 0_0.ply
│   │   ├── 0_1.json
│   │   ├── ...
│   ├── objects
│   │   ├── 0_0_0.json
│   │   ├── 0_0_0.ply
│   │   ├── 0_0_1.json
│   │   ├── ...
│   ├── nav_graph
├── tmp
├── full_feats.pt
├── mask_feats.pt
├── full_pcd.ply
├── masked_pcd.ply

The graph folder contains the generated scene graph hierarchy: in each file name, the first number denotes the floor, the second the room, and the third the object. The tmp folder holds intermediate results produced during graph construction. full_feats.pt and mask_feats.pt contain the features extracted from the RGB-D frames using the Open CLIP and SAM models: the former holds per-point features, the latter per-object-mask features. full_pcd.ply and masked_pcd.ply contain the point cloud of the full scene and the instance masks of all objects, respectively.
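
As an illustration of how these artifacts can be consumed, the sketch below ranks the stored object-mask features against a text query with Open CLIP. This is a sketch, not the repository's query code, and it assumes mask_feats.pt holds a [num_masks, feat_dim] tensor:

import torch
import open_clip

# load per-mask CLIP features produced by create_graph.py (assumed shape [N, D])
mask_feats = torch.load("data/scene_graphs/00824-Dd4bFSTQ8gi/mask_feats.pt")

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="checkpoints/laion2b_s32b_b79k.bin"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")
with torch.no_grad():
    text = model.encode_text(tokenizer(["a chair"]))

# cosine similarity between the query and every object mask
sims = torch.nn.functional.cosine_similarity(mask_feats, text, dim=-1)
print("best-matching mask:", sims.argmax().item())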

Visualize Scene Graphs

python application/visualize_graph.py graph_path=data/scene_graphs/hm3dsem/00824-Dd4bFSTQ8gi/graph


Interactive visualization of Scene Graphs with Queries

Setup OpenAI

In order to test graph queries with HOV-SG, you need to set up an OpenAI API account with the following steps:

  1. Sign up for an OpenAI account, log in, and attach at least one payment method.
  2. Create an OpenAI API key and copy it.
  3. Open your ~/.bashrc file, add a new line export OPENAI_KEY=<your copied key>, save the file, and source it with source ~/.bashrc. Alternatively, run export OPENAI_KEY=<your copied key> in the terminal where you want to run the query code.
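
A quick way to confirm the key is visible to Python before launching the query script (a trivial sketch):

import os

# the query code expects the key in the OPENAI_KEY environment variable
key = os.environ.get("OPENAI_KEY")
print("OPENAI_KEY is set" if key else "OPENAI_KEY is missing")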

Evaluate query against pre-built hierarchical scene graph

python application/visualize_query_graph.py main.graph_path=data/scene_graphs/hm3dsem/00824-Dd4bFSTQ8gi/graph

After launching the code, you will be prompted for a hierarchical query, e.g., chair in the living room on floor 0. The top 5 matching target objects are then visualized, along with the rooms they lie in.

Extract feature map for Semantic Segmentation (only for ScanNet and Replica)

python application/semantic_segmentation.py main.dataset=replica main.dataset_path=Replica/office0 main.save_path=data/sem_seg/office0

Evaluate Semantic Segmentation (only for ScanNet and Replica)

python application/eval/evaluate_sem_seg.py dataset=replica scene_name=office0 feature_map_path=data/sem_seg/office0
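
Semantic segmentation quality is typically reported as mean IoU over classes. For reference, a minimal sketch of that metric (not the repository's evaluation code):

import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))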

Evaluate Scene Graphs (WIP)

python application/eval/evaluate_graph.py main.graph_path=data/scene_graphs/00824-Dd4bFSTQ8gi

📔 Abstract

Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features. While these maps allow for the prediction of point-wise saliency maps when queried for a certain language concept, large-scale environments and abstract queries beyond the object level still pose a considerable hurdle, ultimately limiting language-grounded robotic navigation. In this work, we present HOV-SG, a hierarchical open-vocabulary 3D scene graph mapping approach for language-grounded indoor robot navigation. Leveraging open-vocabulary vision foundation models, we first obtain state-of-the-art open-vocabulary segment-level maps in 3D and subsequently construct a 3D scene graph hierarchy consisting of floor, room, and object concepts, each enriched with open-vocabulary features. Our approach is able to represent multi-story buildings and allows robotic traversal of those using a cross-floor Voronoi graph. HOV-SG is evaluated on three distinct datasets and surpasses previous baselines in open-vocabulary semantic accuracy on the object, room, and floor level while producing a 75% reduction in representation size compared to dense open-vocabulary maps. In order to prove the efficacy and generalization capabilities of HOV-SG, we showcase successful long-horizon language-conditioned robot navigation within real-world multi-story environments.

If you find our work useful, please consider citing our paper:

@article{werby23hovsg,
  author  = {Abdelrhman Werby and Chenguang Huang and Martin Büchner and Abhinav Valada and Wolfram Burgard},
  title   = {Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation},
  journal = {Robotics: Science and Systems},
  year    = {2024},
}

👩‍⚖️ License

For academic usage, the code is released under the MIT license. For any commercial purpose, please contact the authors.

🙏 Acknowledgment

This work was funded by the German Research Foundation (DFG) Emmy Noether Program grant number 468878300, the BrainLinks-BrainTools Center of the University of Freiburg, and an academic grant from NVIDIA.
