Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning.


VLA-RL: Towards Masterful and General Robotic Manipulation with Scalable Reinforcement Learning

blog

🌟 Highlights

  • 🎯 General Manipulation: Improving OpenVLA-7B with outcome-based multi-task reinforcement learning.

  • ⚡️ Cutting-edge Architecture: Built with Ray+vLLM+LoRA+FSDP, our codebase delivers both scalability and flexibility.

  • 📝 Clean Implementation: Following cleanrl's philosophy, we provide a single-file implementation for easy reading and modification.

  • 🚧 Active Development: Work in progress; let's build it together.
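To make the "outcome-based" highlight concrete: instead of dense per-step rewards, each episode receives a single binary reward based on task success. The sketch below is purely illustrative (the environment, names, and policy are hypothetical, not taken from this repo), but it shows the reward structure the training loop optimizes.

```python
# Illustrative sketch only: outcome-based RL assigns one sparse, binary reward
# (task success or failure) to a whole episode, with no reward shaping.
# ToyEnv, run_episode, and the policy below are hypothetical examples.

class ToyEnv:
    """A toy 1-D reaching task: the agent must end at position 3."""
    def __init__(self, horizon=8):
        self.horizon = horizon

    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos

    def step(self, action):  # action in {-1, 0, +1}
        self.pos += action
        self.t += 1
        done = self.t >= self.horizon
        return self.pos, done, {"success": done and self.pos == 3}

def run_episode(policy, env):
    """Roll out one episode; return the trajectory and its outcome reward."""
    obs, done = env.reset(), False
    trajectory = []
    while not done:
        action = policy(obs)
        obs, done, info = env.step(action)
        trajectory.append((obs, action))
    # Outcome-based reward: 1.0 on success, 0.0 otherwise.
    return trajectory, 1.0 if info["success"] else 0.0

# A trivial policy that steps toward position 3, then holds still.
policy = lambda obs: 1 if obs < 3 else 0
traj, reward = run_episode(policy, ToyEnv())
print(reward)  # → 1.0, since the policy reaches the goal within the horizon
```

In the actual codebase this binary outcome signal drives the policy-gradient update across multiple manipulation tasks at once, which is what makes the setup "multi-task".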

📝 TODO

  • Support SERL-style Real-world RL
  • Support More Environments (e.g., Roboverse)
  • Support More VLAs (e.g., MiniVLA)

🛠️ Installation

See INSTALL.md for installation instructions.

See ERROR_CATCH.md for troubleshooting common errors.

🚀 Quick Start

Before launching distributed training, edit the script to set the appropriate dataset and model paths.

📈 Training

# bash scripts/train_rl_vllm_ray_fsdp.sh <gpus> <task_ids>
# e.g., 
bash scripts/train_rl_vllm_ray_fsdp.sh 0,1 0,1,2,3,4,5,6,7,8,9

🧪 Evaluation

# parallel evaluation with vectorized environment
bash scripts/eval_vllm_ray.sh 0,1

🏷️ License

This repository is released under the Apache-2.0 license.

🙏 Acknowledgement

Our code is built upon open-instruct, OpenRLHF, verl, and openvla. We thank these authors for open-sourcing their code and for their great contributions to the community.

🥰 Citation

If you find this repository helpful, please consider citing:

@misc{lu2025vlarl,
  title={VLA-RL: Towards Masterful and General Robotic Manipulation with Scalable Reinforcement Learning},
  author={Guanxing Lu and Chubin Zhang and Haonan Jiang and Yuheng Zhou and Zifeng Gao and Yansong Tang and Ziwei Wang},
  year={2025},
  howpublished={\url{https://congruous-farmhouse-8db.notion.site/VLA-RL-Towards-Masterful-and-General-Robotic-Manipulation-with-Scalable-Reinforcement-Learning-1953a2cd706280ecaad4e93a5bd2b8e3?pvs=4}},
  note={Notion Blog}
}
