AutonoBot-Lab/BestMan_Pybullet

GitHub license · Ubuntu 22.04 · Python 3.8 · pre-commit · Code style: black · Imports: isort

Welcome to the official repository of BestMan, a mobile manipulator simulator (with a wheeled base and an arm) built on PyBullet.
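
For orientation, here is a minimal, generic PyBullet loop of the kind BestMan builds on. It uses only the plain pybullet API with a stand-in robot model, so treat it as a sketch rather than BestMan's own interface.

# A minimal, generic PyBullet simulation loop (not the BestMan API itself).
import pybullet as p
import pybullet_data

p.connect(p.GUI)                                   # open the PyBullet GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")                           # ground plane
robot = p.loadURDF("r2d2.urdf", [0, 0, 0.5])       # stand-in robot model
for _ in range(240):                               # ~1 second at the default 240 Hz
    p.stepSimulation()
p.disconnect()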

💻 Installation

  • Clone the repository and update the submodules
git clone https://github.com/AutonoBot-Lab/BestMan_Pybullet.git
cd BestMan_Pybullet
git submodule init
git submodule update

☘️ Conda

First install Anaconda or Miniconda on a Linux system, then perform the following steps:

  • Run the following script to add the project to the Python module search path
cd Install
chmod 777 pythonpath.sh
bash pythonpath.sh
source ~/.bashrc
  • Install ffmpeg to enable video recording
sudo apt update && sudo apt install ffmpeg
  • Configure the related libraries and symlinks to support OpenGL rendering (skip this step if they already exist)
sudo apt update && sudo apt install -y libgl1-mesa-glx libglib2.0-0
sudo mkdir /usr/lib/dri
sudo ln -s /lib/x86_64-linux-gnu/dri/swrast_dri.so /usr/lib/dri/swrast_dri.so
  • Install gcc/g++ 9 (skip this step if they are already installed)
sudo apt install -y build-essential gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 9
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 9
sudo update-alternatives --config gcc  # choose gcc-9
sudo update-alternatives --config g++  # choose g++-9

# Make sure the gcc and g++ versions are consistent (do not install gcc in the conda environment, to prevent problems caused by inconsistent versions)
gcc -v
g++ -v
  • Configure mamba to speed up building the conda environment (optional; skip this step if the mamba installation is slow or fails)
conda install mamba -n base -c conda-forge
  • Create the basic conda environment
conda env create -f basic_env.yaml   # or: mamba env create -f basic_env.yaml
conda activate BestMan
  • Install PyTorch
conda env update -f cuda116.yaml     # or: mamba env update -f cuda116.yaml
  • Install lang-segment-anything
pip install -U git+https://github.com/luca-medeiros/lang-segment-anything.git
  • Install AnyGrasp

Note: if you are running on a laptop, you need to run export MAX_JOBS=2 in the terminal before pip install, due to this issue.

# Install MinkowskiEngine
conda install pytorch=1.13.1 -c pytorch --force-reinstall
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --global-option="--blas_include_dirs=${CONDA_PREFIX}/include" --global-option="--blas=openblas"

# Install graspnetAPI
pip install graspnetAPI

# Install pointnet2
cd third_party/pointnet2
python setup.py install

# Force reinstall to ensure version
pip install --force-reinstall opencv-python==4.1.2.30 numpy==1.23.5
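
After the AnyGrasp dependencies are installed, a quick import check like the one below can confirm that the pinned versions resolve. This is only a sketch; the module names are assumed from the packages installed above.

# Sanity check (sketch): confirm the key dependencies import and report versions.
import torch
import MinkowskiEngine as ME
import cv2
import numpy as np
import graspnetAPI           # module name assumed from the graspnetAPI package

print("torch:", torch.__version__)
print("MinkowskiEngine:", ME.__version__)
print("opencv-python:", cv2.__version__)   # expected 4.1.2
print("numpy:", np.__version__)            # expected 1.23.5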

☘️ Docker

Windows
  • Pull the Docker image from tencentyun
docker pull ccr.ccs.tencentyun.com/4090/bestman:v1
  • Create the Docker container
docker run -it --gpus all --name BestMan ccr.ccs.tencentyun.com/4090/bestman:v1
  • Install VcXsrv Windows X Server, start it, and keep it running in the background.

  • Run echo $DISPLAY inside the container and make sure the result is host.docker.internal:0 so that the GUI can be displayed on the host machine; if not, run:

export DISPLAY=host.docker.internal:0
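
To verify that the forwarded display actually reaches VcXsrv, a short check like the following (a sketch; any PyBullet GUI call would do) should pop up a window on the Windows host:

# Display check (sketch): a PyBullet GUI window should appear on the host via VcXsrv.
import pybullet as p

cid = p.connect(p.GUI)       # returns a non-negative client id on success
print("GUI connected" if cid >= 0 else "GUI connection failed")
p.disconnect()
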
Linux
  • TBD

🔎 Project Structure & API References


👨‍💻 Basic Demos

First, enter the Examples directory:

cd Examples

Below are some examples and their renderings in Blender.

☘️ Navigation

python navigation_basic.py

navigation_basic.mp4


☘️ Manipulation

  • Open Fridge
python open_fridge.py

open_fridge.mp4


  • Open microwave
python open_microwave.py

open_microwave.mp4


  • Grasp a bowl on the table using the vacuum gripper
python grasp_bowl_on_table_vacuum_gripper.py

grasp_bowl_on_table_sucker.mp4


  • Grasp Lego on the table using the gripper
python grasp_lego_on_table_gripper.py

grasp_lego_on_table_gripper.mp4


  • Move bowl from drawer to table
python move_bowl_from_drawer_to_table.py

move_bowl_from_drawer_to_table.mp4


Blender render

Open microwave demo with Blender rendering:

open_microwave.mp4


We have improved pybullet-blender-recorder to import the PyBullet scene into Blender for better rendering; a minimal usage sketch of the upstream recorder follows the steps below.

If you want to enable pybullet-blender-recorder, please:

  1. Set blender: True in Config/default.yaml

  2. After running the demo, a pkl file will be generated and saved in the Examples/record directory

  3. Install the pyBulletSimImporter.py plugin (under the Visualization/blender-render directory) in Blender (tested on Blender 3.6.5), and enable the plugin

  4. Import the pkl files into Blender
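
For reference, the snippet below sketches how the upstream pybullet-blender-recorder typically produces such a pkl: register each body, capture one keyframe per simulation step, and save the result. The class and method names are taken from the upstream project and are an assumption here; with blender: True set, BestMan's improved recorder handles this for you, so the demos do not require any of this code.

# Rough sketch of the upstream pybullet-blender-recorder workflow (names assumed
# from the upstream project; BestMan's improved recorder is driven by the
# blender: True flag instead, so this is for reference only).
import pybullet as p
from pyBulletSimRecorder import PyBulletRecorder   # upstream module name (assumption)

p.connect(p.DIRECT)
robot = p.loadURDF("path/to/robot.urdf")           # placeholder URDF path

recorder = PyBulletRecorder()
recorder.register_object(robot, "path/to/robot.urdf")   # track this body

for _ in range(240):
    p.stepSimulation()
    recorder.add_keyframe()                        # capture the pose at this step

recorder.save("Examples/record/demo.pkl")          # pkl later imported into Blender
p.disconnect()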

🤝 Reference

If you find this work useful, please consider citing:

@inproceedings{ding2023task,
  title={Task and motion planning with large language models for object rearrangement},
  author={Ding, Yan and Zhang, Xiaohan and Paxton, Chris and Zhang, Shiqi},
  booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={2086--2092},
  year={2023},
  organization={IEEE}
}

@article{ding2023integrating,
  title={Integrating action knowledge and LLMs for task planning and situation handling in open worlds},
  author={Ding, Yan and Zhang, Xiaohan and Amiri, Saeid and Cao, Nieqing and Yang, Hao and Kaminski, Andy and Esselink, Chad and Zhang, Shiqi},
  journal={Autonomous Robots},
  volume={47},
  number={8},
  pages={981--997},
  year={2023},
  publisher={Springer}
}