Jose Luis Ponton*,1, Haoran Yun*,1, Andreas Aristidou2,3, Carlos Andujar1, Nuria Pelechano1
1 Universitat Politècnica de Catalunya (UPC), Spain
2 University of Cyprus, Cyprus
3 CYENS Centre of Excellence, Cyprus
* Jose Luis Ponton and Haoran Yun are joint first authors.
This repository contains the implementation of the method presented in the paper *SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data*, published in ACM Transactions on Graphics.
Get an overview of the paper by visiting the project website or watching the video!
Download the paper here!
The project is structured into two primary directories: `SparsePoserUnity`, which contains the Unity project, and `python`, which contains the PyTorch implementation of the network. The network runs in a Python environment and communicates with Unity over a TCP connection to deliver the results.
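For orientation, the sketch below shows one generic way such a TCP exchange could look. The real port, message format, and even which side acts as the server are defined in `src/unity.py` and the `PythonCommunication` Unity component, so everything concrete here (port number, length-prefixed packets, the server role) is a hypothetical stand-in, not the actual protocol.

```python
# Hypothetical sketch of a Unity <-> Python TCP loop, for orientation only.
# The real message format, port, and roles are defined in src/unity.py and
# the PythonCommunication Unity component.
import socket
import struct

HOST, PORT = "127.0.0.1", 2222  # placeholder address and port

def serve_poses(solve_pose):
    """Accept one connection and answer each tracker packet with a pose (bytes in, bytes out)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()
        with conn:
            while True:
                header = conn.recv(4)                      # payload size, assumed little-endian uint32
                if not header:
                    break
                size = struct.unpack("<I", header)[0]
                tracker_data = conn.recv(size)             # sparse tracker data from Unity
                # (a real implementation would loop until exactly `size` bytes are read)
                pose = solve_pose(tracker_data)            # run the network on the received data
                conn.sendall(struct.pack("<I", len(pose)) + pose)  # reply with the full-body pose
```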
- Clone the repository onto your local system.
- Navigate to the `python` directory: `cd SparsePoser/python/`.
- Create a virtual environment: `python -m venv env` (tested on Python 3.9).
- Activate the created virtual environment.
- Install the necessary packages from the requirements file: `pip install -r requirements.txt`.
- Download and install PyTorch.
- Download the motion dataset and decompress it.
At this stage, your `python/` directory should be organized as follows:
```
└───python
    ├───data
    │   └───xsens
    │       ├───eval
    │       └───train
    ├───env
    ├───models
    │   ├───model_dancedb
    │   └───model_xsens
    └───src
```
- Use the following command to evaluate BVH files: `python src/eval.py models/model_xsens/ data/xsens/eval/S02_A04.bvh ik`.

  Feel free to synthesize motion from any other `.bvh` file in the `data/xsens/` directory. Replace `ik` with `generator` to synthesize motion using only the generator network. (A small sketch for batch-evaluating every file follows this list.)

- The output will be stored in `data/eval_S02_A04.bvh`.
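If you want to evaluate every sequence in the evaluation folder at once, the documented `eval.py` call can be scripted. The sketch below is only a convenience wrapper around the exact command shown above; the file layout and the output-naming comment follow this README and should be adapted if your setup differs.

```python
# Convenience sketch (not part of the release): batch-evaluate every .bvh file
# in data/xsens/eval/ by invoking the documented eval.py command.
# Run from SparsePoser/python/ with the virtual environment activated.
import pathlib
import subprocess
import sys

EVAL_DIR = pathlib.Path("data/xsens/eval")
MODEL_DIR = "models/model_xsens/"
MODE = "ik"  # or "generator" to use only the generator network

for bvh in sorted(EVAL_DIR.glob("*.bvh")):
    print(f"Evaluating {bvh.name} ...")
    subprocess.run([sys.executable, "src/eval.py", MODEL_DIR, str(bvh), MODE], check=True)
    # Per this README, the result is written to data/eval_<sequence name>.bvh
```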
## Installation Process
- Download and install Unity 2021.2.13f1. (Note: other versions may work but have not been tested.)
- Launch the Unity Hub application. Select `Open` and navigate to `SparsePoser/SparsePoserUnity/`.

  Upon opening, Unity may issue a warning about compilation errors in the project. Please select `Ignore` to proceed. Unity should handle the dependencies automatically, with the exception of SteamVR.

- Import SteamVR from the Asset Store using the following link: SteamVR Plugin.

  Unity should configure the project for VR use and install OpenVR during the import of SteamVR. Any previously encountered errors should be resolved at this stage, and the console should be clear.

- [Only VR] Within Unity, navigate to `Edit/Project Settings/XR Plug-in Management` and select `OpenVR Loader` from the `Plug-in Providers` list. Then navigate to `Edit/Project Settings/XR Plug-in Management/OpenVR` and change the `Stereo Rendering Mode` from `Single-Pass Instanced` to `Multi Pass`.
## Running the Simulator
1. Within the `Project Window`, navigate to `Assets/Scenes/SampleSceneSimulator`.
2. Pick a skeleton (a `.txt` file) from `Assets/Data/` and reference it in the `T Pose BVH` field of the `PythonCommunication` component in the Unity scene.

   We provide different skeleton configurations based on real human dimensions. For usability, the skeleton files are named as follows: `<gender>_<height>_<hips_height>.txt`.

3. Open a Terminal or Windows PowerShell, go to `SparsePoser/python/`, activate the virtual environment, and execute the following command: `python src/unity.py /models/model_xsens/ ../SparsePoserUnity/Assets/Data/male_180_94.txt ik`. Replace `male_180_94.txt` with the skeleton selected in step 2.

   This command facilitates communication between Unity and Python. Note: repeatedly playing and stopping Unity may disrupt it and result in an error in both Unity and Python. If this occurs, re-execute the Terminal command.

4. Press `Play` in Unity. Manipulate the GameObjects (`Root`, `LFoot`, `RFoot`, `Head`, `LHand`, and `RHand`) located within the `Trackers` GameObject to adjust the end-effectors.

   The initial position and rotation of the trackers are used during calibration, so refrain from modifying them.
## Running the Virtual Reality Demo
1. In the `Project Window`, select the scene `Assets/Scenes/SampleSceneVR`.
2. Choose a skeleton (a `.txt` file) from `Assets/Data/` and reference it in the `T Pose BVH` field of the `PythonCommunication` component in the Unity scene.

   We provide different skeleton configurations based on real human dimensions. For usability, the skeleton files are named as follows: `<gender>_<height>_<hips_height>.txt`.

   Warning: the system may fail when the user's dimensions differ considerably from those of the skeleton. Please choose the skeleton closest to the participant (a possible way to automate this choice is sketched after this list).

3. Open a Terminal or Windows PowerShell, go to `SparsePoser/python/`, activate the virtual environment, and execute the following command: `python src/unity.py /models/model_xsens/ ../SparsePoserUnity/Assets/Data/male_180_94.txt ik`. Replace `male_180_94.txt` with the skeleton chosen in step 2.

   This command facilitates communication between Unity and Python. Note: repeatedly playing and stopping Unity may disrupt it and result in an error in both Unity and Python. If this occurs, re-execute the Terminal command.

4. Start SteamVR and connect one Head-Mounted Display (HMD), two HTC VIVE hand-held controllers, and three HTC VIVE Trackers 3.0. (Note: other versions might work but have not been tested.)
5. Press `Play` in Unity.
6. The first time the application is played, a SteamVR pop-up will appear asking to generate input actions; accept it. Similarly, a TextMeshPro pop-up will appear; select `Import TMP Essentials`.
7. Within the VR environment, locate the mirror. Stand in a T-Pose and press the `Trigger` button on either handheld controller.
8. A yellow skeleton will appear. Position your feet and body within this skeleton and face forward; aligning the hands is not necessary. Press the `Trigger` button on either handheld controller once you are in position.

   If the yellow skeleton does not animate within a few seconds, consider restarting Unity or the Terminal running Python.
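Since the skeleton files encode the subject's dimensions in their names, choosing the closest one can also be automated. The following is only an illustrative sketch based on the `<gender>_<height>_<hips_height>.txt` convention described above; the folder path and the idea of matching on total height are assumptions, not part of the released tooling.

```python
# Illustrative sketch (not part of the release): pick the skeleton file in
# Assets/Data/ whose encoded height is closest to the participant's height,
# relying only on the <gender>_<height>_<hips_height>.txt naming convention.
import pathlib

DATA_DIR = pathlib.Path("../SparsePoserUnity/Assets/Data")  # assumed location, as used in the unity.py command

def closest_skeleton(height_cm: float, gender: str) -> pathlib.Path:
    candidates = []
    for f in DATA_DIR.glob(f"{gender}_*.txt"):
        try:
            _, height, _hips = f.stem.split("_")
            candidates.append((abs(float(height) - height_cm), f))
        except ValueError:
            continue  # skip files that do not follow the naming convention
    if not candidates:
        raise FileNotFoundError(f"No skeleton files for gender '{gender}' in {DATA_DIR}")
    return min(candidates, key=lambda c: c[0])[1]

# Example: a 178 cm male participant would likely map to male_180_94.txt
print(closest_skeleton(178, "male"))
```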
- [Optional] If needed, modify the hyperparameters in the `src/train.py` parameter dictionary.
- Start the training process with the following command: `python src/train.py data/xsens/ train_test all`.

  The syntax for the command is: `python src/train.py <train_db> <name> <generator|ik|all>` (a scripted example follows this list).

- The training output will be stored in the directory `models/model_<name>_<train_db>`.
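For reference, the documented command line can also be driven from a small script, for example to train the generator and IK networks as separate runs. The sketch below only calls the syntax shown above; the run name is a placeholder, and whether the `ik` stage can reuse a generator trained under the same name is an assumption you should verify in `src/train.py`.

```python
# Hedged sketch: drive the documented training CLI
#   python src/train.py <train_db> <name> <generator|ik|all>
# from a script; run from SparsePoser/python/. The run name is a placeholder,
# and training the two parts separately (instead of "all") is an assumption.
import subprocess
import sys

TRAIN_DB = "data/xsens/"    # <train_db>: training dataset directory
RUN_NAME = "my_experiment"  # <name>: identifier for this training run

for stage in ("generator", "ik"):
    subprocess.run([sys.executable, "src/train.py", TRAIN_DB, RUN_NAME, stage], check=True)

# Per this README, the results are stored in models/model_<name>_<train_db>.
```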
Data used for training can be downloaded from here. It contains over 1GB of high-quality motion capture data recorded with an Xsens Awinda system.
If you find this code useful for your research, please cite our paper:
```bibtex
@article{ponton2023sparseposer,
  author = {Ponton, Jose Luis and Yun, Haoran and Aristidou, Andreas and Andujar, Carlos and Pelechano, Nuria},
  title = {SparsePoser: Real-Time Full-Body Motion Reconstruction from Sparse Data},
  year = {2023},
  issue_date = {February 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {43},
  number = {1},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3625264},
  doi = {10.1145/3625264},
  journal = {ACM Trans. Graph.},
  month = {oct},
  articleno = {5},
  numpages = {14}
}
```
- The code in this repository is released under the MIT license. Please see the LICENSE file for further details.
- The model weights and the data in this repository are licensed under the CC BY-SA 4.0 license, as found in the LICENSE_data file.