Welcome to the official repository of BestMan, a mobile manipulator simulator (a wheeled base with an arm) built on PyBullet.
- Clone the repository and update the submodules
git clone https://github.com/AutonoBot-Lab/BestMan_Pybullet.git
cd BestMan_Pybullet
git submodule init
git submodule update
First install Anaconda or Miniconda on a Linux system, then perform the following steps:
- Run the following script to add the project to the Python search path
cd Install
chmod 777 pythonpath.sh
bash pythonpath.sh
source ~/.bashrc
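To confirm that `pythonpath.sh` took effect in a fresh shell, a small stdlib-only check can be used; the clone location passed in at the bottom is an assumption, adjust it to wherever you cloned the repository:

```python
import os
import sys

def on_python_path(project_root: str) -> bool:
    """Return True if project_root appears on sys.path or in PYTHONPATH."""
    root = os.path.abspath(project_root)
    candidates = sys.path + os.environ.get("PYTHONPATH", "").split(os.pathsep)
    return any(os.path.abspath(p) == root for p in candidates if p)

# Assumed clone location; change to your actual checkout directory.
print(on_python_path(os.path.expanduser("~/BestMan_Pybullet")))
```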
- Configure the libraries and links required for OpenGL rendering (skip this step if they are already present)
sudo apt update && sudo apt install -y libgl1-mesa-glx libglib2.0-0
sudo mkdir /usr/lib/dri
sudo ln -s /lib/x86_64-linux-gnu/dri/swrast_dri.so /usr/lib/dri/swrast_dri.so
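A quick way to confirm the swrast symlink created above is in place; the path argument simply mirrors the commands, nothing here is BestMan-specific:

```python
import os

def swrast_ready(dri_dir: str = "/usr/lib/dri") -> bool:
    """Check whether the software-rasterizer driver is visible at the expected path."""
    return os.path.exists(os.path.join(dri_dir, "swrast_dri.so"))

print("swrast driver found:", swrast_ready())
```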
- Install gcc/g++ 9 (skip this step if already installed)
sudo apt install -y build-essential gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 9
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 9
sudo update-alternatives --config gcc # choose gcc-9
sudo update-alternatives --config g++ # choose g++-9
# Make sure the gcc and g++ versions match (do not install gcc inside the conda environment, to avoid problems caused by mismatched versions)
gcc -v
g++ -v
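The consistency check above can also be scripted. This stdlib sketch compares the major versions reported by `gcc -dumpversion` and `g++ -dumpversion`, and degrades gracefully when a compiler is not installed:

```python
import shutil
import subprocess
from typing import Optional

def parse_major(version: str) -> int:
    """Extract the major component from a version string, e.g. '9.4.0' -> 9."""
    return int(version.strip().split(".")[0])

def compiler_major(cmd: str) -> Optional[int]:
    """Return the major version of cmd, or None if it is not on PATH."""
    if shutil.which(cmd) is None:
        return None
    out = subprocess.run([cmd, "-dumpversion"], capture_output=True, text=True)
    return parse_major(out.stdout)

gcc, gxx = compiler_major("gcc"), compiler_major("g++")
if gcc is not None and gxx is not None and gcc != gxx:
    print(f"Warning: gcc {gcc} and g++ {gxx} differ; native extension builds may fail.")
```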
- Configure mamba to speed up building the conda environment (optional; skip this step if the mamba installation is slow or fails)
conda install mamba -n base -c conda-forge
- Create basic conda environment
conda env create -f basic_environment.yaml  # substitute mamba for conda if installed
conda activate BestMan
# Install torch
conda env update -f cuda116.yaml  # substitute mamba for conda if installed
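To verify that the torch install from `cuda116.yaml` actually sees a GPU, a small guarded check can be run; it returns False instead of raising when torch is absent:

```python
from importlib.util import find_spec

def cuda_ready() -> bool:
    """True only if torch imports and sees a CUDA device; False if torch is absent."""
    if find_spec("torch") is None:
        return False
    import torch  # deferred import so the check degrades gracefully
    return torch.cuda.is_available()

print("CUDA available:", cuda_ready())
```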
# Install lang-segment-anything
pip install -U git+https://github.com/luca-medeiros/lang-segment-anything.git
# Install MinkowskiEngine
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --global-option="--blas_include_dirs=${CONDA_PREFIX}/include" --global-option="--blas=openblas"
# Install graspnetAPI
pip install graspnetAPI
# Install pointnet2
cd third_party/pointnet2
python setup.py install
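After the steps above, a quick stdlib-only sanity check can confirm the grasping stack is importable. The module names below are assumptions about each package's import name (e.g. `lang_sam` for lang-segment-anything):

```python
from importlib.util import find_spec

def check_grasp_stack(modules=("torch", "lang_sam", "MinkowskiEngine",
                               "graspnetAPI", "pointnet2")) -> dict:
    """Map each expected module name to whether Python can currently find it."""
    return {m: find_spec(m) is not None for m in modules}

for name, ok in check_grasp_stack().items():
    print(f"{name}: {'ok' if ok else 'missing'}")
```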
- AnyGrasp License
You need to obtain an AnyGrasp license and checkpoint before you can use it.
- Pull the Docker image from Tencent Cloud
docker pull ccr.ccs.tencentyun.com/4090/bestman:v1
- Create a Docker container
docker run -it --gpus all --name BestMan ccr.ccs.tencentyun.com/4090/bestman:v1
- Install VcXsrv Windows X Server, then start it and keep it running in the background.
- Execute the following inside the container:
echo $DISPLAY
Make sure the result is host.docker.internal:0 so that windows can be displayed on the host machine; if it is not, run:
export DISPLAY=host.docker.internal:0
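The DISPLAY check above can also be sketched in Python; `EXPECTED` mirrors the value this README requires, and the function returns a corrected copy of the environment rather than mutating it:

```python
import os

EXPECTED = "host.docker.internal:0"

def fix_display(env: dict) -> dict:
    """Return a copy of env with DISPLAY pointed at the VcXsrv host."""
    fixed = dict(env)
    if fixed.get("DISPLAY") != EXPECTED:
        fixed["DISPLAY"] = EXPECTED
    return fixed

print(fix_display(dict(os.environ)).get("DISPLAY"))
```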
- TBD
We have supplemented and improved the pybullet-blender-recorder code base so that PyBullet scenes can be imported into Blender for rendering, which improves the visual quality. Simple scenes and tasks import in under 2 minutes; complex scenes and tasks import within about half an hour.
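Assuming the recorder keeps the upstream pybullet-blender-recorder API (a `PyBulletRecorder` with `register_object` / `add_keyframe` / `save`, imported from `pyBulletSimRecorder`), a minimal recording loop looks like this sketch; it returns False instead of failing when the packages are missing:

```python
from importlib.util import find_spec

def record_demo(steps: int = 240, out: str = "demo.pkl") -> bool:
    """Record a short PyBullet run into a .pkl the Blender add-on can import."""
    if find_spec("pybullet") is None or find_spec("pyBulletSimRecorder") is None:
        return False  # degrade gracefully when dependencies are absent
    import os
    import pybullet as p
    import pybullet_data
    from pyBulletSimRecorder import PyBulletRecorder

    p.connect(p.DIRECT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.8)
    p.loadURDF("plane.urdf")
    cube = p.loadURDF("cube_small.urdf", [0, 0, 0.5])

    recorder = PyBulletRecorder()
    recorder.register_object(cube, os.path.join(pybullet_data.getDataPath(),
                                                "cube_small.urdf"))
    for _ in range(steps):
        p.stepSimulation()
        recorder.add_keyframe()  # one Blender keyframe per simulation step
    recorder.save(out)           # import this .pkl with the Blender add-on
    p.disconnect()
    return True
```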
First, enter the Examples directory:
cd Examples
Below are some examples and their rendering in Blender
Navigation
python navigation_basic.py
navigation_basic.mp4
Manipulation
- Open Fridge
python open_fridge.py
open_fridge.mp4
https://github.com/user-attachments/assets/ed07b856-74ce-4299-9ba5-9de012b9eef5
- Open microwave
python open_microwave.py
open_microwave.mp4
https://github.com/user-attachments/assets/77530f8d-30fb-471c-8e6d-40f8dddfd56a
- Grasp a bowl on the table using the sucker
python grasp_bowl_on_table_sucker.py
grasp_bowl_on_table_sucker.mp4
- Grasp a Lego brick on the table using the gripper
python grasp_lego_on_table_gripper.py
grasp_lego_on_table_gripper.mp4
- Move bowl from drawer to table
python move_bowl_from_drawer_to_table.py
move_bowl_from_drawer_to_table.mp4
If you find this work useful, please consider citing:
@inproceedings{ding2023task,
title={Task and motion planning with large language models for object rearrangement},
author={Ding, Yan and Zhang, Xiaohan and Paxton, Chris and Zhang, Shiqi},
booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={2086--2092},
year={2023},
organization={IEEE}
}
@article{ding2023integrating,
title={Integrating action knowledge and LLMs for task planning and situation handling in open worlds},
author={Ding, Yan and Zhang, Xiaohan and Amiri, Saeid and Cao, Nieqing and Yang, Hao and Kaminski, Andy and Esselink, Chad and Zhang, Shiqi},
journal={Autonomous Robots},
volume={47},
number={8},
pages={981--997},
year={2023},
publisher={Springer}
}