Hyeongjin Nam*1, Daniel Sungho Jung*1, Gyeongsik Moon2, Kyoung Mu Lee1
1Seoul National University, 2Codec Avatars Lab, Meta
(*Equal contribution)
CONTHO jointly reconstructs 3D human and object by exploiting human-object contact as a key signal for accurate reconstruction. To this end, we integrate "3D human-object reconstruction" and "human-object contact estimation", two tasks that have previously been studied separately, into one unified framework.
- We recommend using an Anaconda virtual environment. Install Python >= 3.7.0 and PyTorch >= 1.10.1. Our latest CONTHO model is tested on Python 3.9.13, PyTorch 1.10.1, and CUDA 10.2.
- Setup the environment
# Initialize conda environment
conda create -n contho python=3.9
conda activate contho
# Install PyTorch
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=10.2 -c pytorch
# Install all remaining packages
pip install -r requirements.txt
- Prepare the base_data from either Google Drive or OneDrive, and place it at ${ROOT}/data/base_data.
- Download the pre-trained checkpoint from either Google Drive or OneDrive.
- Lastly, please run
python main/demo.py --gpu 0 --checkpoint {CKPT_PATH}
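To confirm that your environment matches the tested configuration before running the demo, a minimal check like the one below may help (illustrative only, not part of the CONTHO codebase):
# Verify PyTorch / CUDA setup (illustrative check, not a CONTHO script)
import torch
print("PyTorch version:", torch.__version__)        # tested with 1.10.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))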
You need to follow the directory structure of the data as shown below.
${ROOT}
|-- data
| |-- base_data
| | |-- annotations
| | |-- backbone_models
| | |-- human_models
| | |-- object_models
| |-- BEHAVE
| | |-- dataset.py
| | |-- sequences
| | | |-- Date01_Sub01_backpack_back
| | | |-- Date01_Sub01_backpack_hand
| | | |-- ...
| | | |-- Date07_Sub08_yogamat
| |-- InterCap
| | |-- dataset.py
| | |-- sequences
| | | |-- 01
| | | |-- 02
| | | |-- ...
| | | |-- 10
- Download the Date01~Date07 sequences from the BEHAVE dataset to ${ROOT}/data/BEHAVE/sequences.
(Option 1) Directly download the BEHAVE dataset from their download page.
(Option 2) Run the script below.
scripts/download_behave.sh
- Download RGBD_Images.zip and Res.zip from the InterCap dataset to ${ROOT}/data/InterCap/sequences.
(Option 1) Directly download the InterCap dataset from their download page.
(Option 2) Run the script below.
scripts/download_intercap.sh
- Download base_data from either Google Drive or OneDrive.
- (Optional) Download the released checkpoints for the BEHAVE (Google Drive | OneDrive) and InterCap (Google Drive | OneDrive) datasets.
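Once the downloads above are in place, you can sanity-check the data layout before training. The snippet below is a minimal sketch that only assumes the directory tree shown above; adjust ROOT to your own path:
# Verify the expected data layout (illustrative check, not a CONTHO script)
from pathlib import Path

ROOT = Path(".")  # set this to your ${ROOT}
expected_dirs = [
    "data/base_data/annotations",
    "data/base_data/backbone_models",
    "data/base_data/human_models",
    "data/base_data/object_models",
    "data/BEHAVE/sequences",
    "data/InterCap/sequences",
]
for rel in expected_dirs:
    status = "ok" if (ROOT / rel).is_dir() else "MISSING"
    print(f"{status:7s} {rel}")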
To train CONTHO on the BEHAVE or InterCap dataset, please run
python main/train.py --gpu 0 --dataset {DATASET}
To evaluate CONTHO on the BEHAVE or InterCap dataset, please run
python main/test.py --gpu 0 --dataset {DATASET} --checkpoint {CKPT_PATH}
Here, we report the performance of CONTHO.
CONTHO is a fast and accurate 3D human and object reconstruction framework!
- RuntimeError: Subtraction, the - operator, with a bool tensor is not supported. If you are trying to invert a mask, use the ~ or logical_not() operator instead: Please check the reference.
- bash: scripts/download_behave.sh: Permission denied: Please check the reference.
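For context on the first error above: recent PyTorch versions do not allow inverting a boolean mask with subtraction, so expressions such as 1 - mask need to be replaced with ~mask or torch.logical_not(mask). A minimal illustration (not CONTHO code):
import torch

mask = torch.tensor([True, False, True])
# inverted = 1 - mask                # raises the RuntimeError above on recent PyTorch versions
inverted = ~mask                     # tensor([False, True, False])
inverted = torch.logical_not(mask)   # equivalent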
We thank:
- Hand4Whole for 3D human mesh reconstruction.
- CHORE for training and testing on BEHAVE.
- InterCap for download script of the dataset.
- DECO for in-the-wild experiment setup.
@inproceedings{nam2024contho,
title = {Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer},
author = {Nam, Hyeongjin and Jung, Daniel Sungho and Moon, Gyeongsik and Lee, Kyoung Mu},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2024}
}