KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills

Weiji Xie* 1,2, Jinrui Han* 1,2, Jiakun Zheng* 1,3, Huanyu Li1,4, Xinzhe Liu1,5, Jiyuan Shi1, Weinan Zhang2, Chenjia Bai†1, Xuelong Li1
* Equal Contribution  † Corresponding Author
1Institute of Artificial Intelligence (TeleAI), China Telecom   2Shanghai Jiao Tong University   3East China University of Science and Technology   4Harbin Institute of Technology   5ShanghaiTech University

arXiv: 2506.12851

Demo

(Demo video)

News

  • [2025-06] We release the code and paper for PBHC.

Contents

  • About
  • Usage
  • Folder Structure
  • Citation
  • License
  • Acknowledgements
  • Contact

About

(Framework overview)

This is the official implementation of the paper KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills.

Our paper introduces a physics-based control framework that enables humanoid robots to learn and reproduce challenging motions through multi-stage motion processing and adaptive policy training.

This repository includes:

  • Motion processing pipeline
    • Collect human motion from various sources (video, LAFAN, AMASS, etc.) and convert it to a unified SMPL format (motion_source/)
    • Filter, correct, and retarget human motion to the robot (smpl_retarget/)
    • Visualize and analyze the processed motions (smpl_vis/, robot_motion_process/)
  • RL-based motion imitation framework (humanoidverse/)
    • Train the policy in IsaacGym
    • Deploy trained policies in MuJoCo for sim2sim verification (a minimal rollout sketch follows this list). The framework is designed for easy extension: custom policies and real-world deployment modules can be plugged in with minimal effort
  • Example data (example/)
    • Sample motion data in our experiments (example/motion_data/, you can visualize the motion data with tools in robot_motion_process/)
    • A pretrained policy checkpoint (example/pretrained_hors_stance_pose/)
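
As a concrete picture of the sim2sim step, below is a minimal rollout sketch. It is not the repository's deploy script: the model path, checkpoint name, observation layout, and PD gains are all assumptions for illustration.

# Minimal sim2sim rollout sketch (illustrative; not the repo's deploy script).
# Assumptions: a MuJoCo XML for the robot, a TorchScript policy that maps
# (qpos, qvel) observations to joint position targets, and torque actuators.
import mujoco
import numpy as np
import torch

model = mujoco.MjModel.from_xml_path("g1.xml")   # hypothetical model path
data = mujoco.MjData(model)
policy = torch.jit.load("policy.pt")             # hypothetical checkpoint name

kp, kd = 100.0, 2.0                              # illustrative PD gains
for _ in range(1000):
    obs = np.concatenate([data.qpos, data.qvel]).astype(np.float32)
    with torch.no_grad():
        target_q = policy(torch.from_numpy(obs)).numpy()
    # Skip the 7-DoF free joint (3 pos + 4 quat) to address actuated joints.
    tau = kp * (target_q - data.qpos[7:]) - kd * data.qvel[6:]
    data.ctrl[:] = tau
    mujoco.mj_step(model, data)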

Usage

  • Refer to INSTALL.md for environment setup and installation instructions.

  • Each module folder (e.g., humanoidverse, smpl_retarget) contains a dedicated README.md explaining its purpose and usage.

  • To make your robot perform a new motion:

    • Collect the motion data from the source and process it into the SMPL format (motion_source/).
    • Retarget the motion data to the robot (smpl_retarget/, choosing the Mink or PHC pipeline as you prefer).
    • Visualize the processed motion to check whether its quality is satisfactory (smpl_vis/, robot_motion_process/); a quick numeric sanity check is sketched after this list.
    • Train a policy for the processed motion in IsaacGym (humanoidverse/).
    • Deploy the policy in MuJoCo or on a real robot (humanoidverse/).
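
Before committing GPU hours to training, it can help to check a retargeted motion numerically as well as visually. A minimal sketch, assuming the clip is stored as a (T, num_joints) array of joint angles at a known frame rate; the file name and format here are hypothetical, and the repository's own analysis tools live in robot_motion_process/:

# Quick numeric sanity check of a retargeted motion (illustrative only;
# the actual file format under example/motion_data/ may differ).
import numpy as np

fps = 30.0                     # assumed frame rate of the clip
q = np.load("motion.npy")      # hypothetical: (T, num_joints) angles in rad
dq = np.diff(q, axis=0) * fps  # finite-difference joint velocities

vel_limit = 20.0               # rad/s, an illustrative actuator bound
bad = np.where(np.abs(dq).max(axis=1) > vel_limit)[0]
print(f"{len(bad)} frames exceed {vel_limit} rad/s "
      f"(peak {np.abs(dq).max():.1f} rad/s)")

Frames flagged here usually point at retargeting glitches or source-capture jitter that can destabilize imitation training.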

Folder Structure

  • description: description files for the SMPL model and the G1 robot.
  • motion_source: docs for obtaining SMPL-format data.
  • smpl_retarget: tools for retargeting SMPL motion to the G1 robot.
  • smpl_vis: tools for visualizing SMPL-format data.
  • robot_motion_process: tools for processing robot-format motion, including visualization, interpolation, and trajectory analysis (an interpolation sketch follows this list).
  • humanoidverse: RL policy training and deployment.
  • example: example motion data and a pretrained checkpoint for using PBHC.
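
To give a flavor of what robot_motion_process does, here is a minimal resampling sketch: linear interpolation for joint angles plus SLERP for the floating-base orientation. The function name, array layout, and joint count are assumptions for illustration, not the repository's API.

# Frame-rate resampling sketch: linear interpolation for joint angles and
# SLERP for the base orientation (names and formats are hypothetical).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def resample(t_src, joints, base_quat_xyzw, t_dst):
    # Per-joint linear interpolation of angles.
    joints_out = np.stack(
        [np.interp(t_dst, t_src, joints[:, j]) for j in range(joints.shape[1])],
        axis=1,
    )
    # Spherical interpolation of the base orientation.
    slerp = Slerp(t_src, Rotation.from_quat(base_quat_xyzw))
    return joints_out, slerp(t_dst).as_quat()

# Example: upsample a 30 fps clip to a 50 Hz control rate.
T = 90
t30 = np.arange(T) / 30.0
t50 = np.arange(int(t30[-1] * 50) + 1) / 50.0
joints = np.zeros((T, 23))              # e.g., 23 actuated joints
quats = Rotation.identity(T).as_quat()
j50, q50 = resample(t30, joints, quats, t50)

SLERP is used rather than linear interpolation of quaternion components because the latter drifts off the unit sphere and distorts orientations between keyframes.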

Citation

If you find our work helpful, please cite:

@article{xie2025kungfubot,
  title={KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills},
  author={Xie, Weiji and Han, Jinrui and Zheng, Jiakun and Li, Huanyu and Liu, Xinzhe and Shi, Jiyuan and Zhang, Weinan and Bai, Chenjia and Li, Xuelong},
  journal={arXiv preprint arXiv:2506.12851},
  year={2025}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

  • ASAP: We build our RL codebase on the ASAP library.
  • RSL_RL: We use the rsl_rl library for the PPO implementation.
  • Unitree: We use the Unitree G1 as our testbed robot.
  • MaskedMimic: We use the retargeting pipeline from MaskedMimic, which is based on Mink.
  • PHC: We incorporate the retargeting pipeline from PHC into our implementation.
  • GVHMR: We use GVHMR to extract motions from videos.

Contact

Feel free to open an issue or discussion if you encounter any problems or have questions about this project.

For collaborations, feedback, or further inquiries, please reach out to the corresponding author.

We welcome contributions and are happy to support the community in building upon this work!
