Stars
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems
Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Google Robot, WidowX+Bridge) (CoRL 2024)
TuulAI RobotBuilder: a robotics course to go from zero to hero in AI-driven robots
🕳 bore is a simple CLI tool for making tunnels to localhost
A cache for AI agents to learn and replay complex behaviors.
Open 3D Engine (O3DE) is an Apache 2.0-licensed multi-platform 3D engine that enables developers and content creators to build AAA games, cinema-quality 3D worlds, and high-fidelity simulations wit…
A Best-of-list of Robot Simulators, re-generated weekly on Wednesdays
moojink / openvla-oft
Forked from openvla/openvla. Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
Konda - The simplest way to use Conda environments on Google Colab.
A computer algebra system for research in combinatorial game theory
[CVPR 2025] Open-source, End-to-end, Vision-Language-Action model for GUI Agent & Computer Use.
Seamlessly integrate state-of-the-art transformer models into robotics stacks
A comprehensive list of excellent research papers, models, datasets, and other resources on Vision-Language-Action (VLA) models in robotics.
A version 1.1 of the Alexander Koch low cost robot arm with some small changes.
Fast and simple implementation of RL algorithms, designed to run fully on GPU.
Modular reinforcement learning library (on PyTorch and JAX) with support for NVIDIA Isaac Gym, Omniverse Isaac Gym and Isaac Lab
AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World
Stanford-ILIAD / openvla-mini
Forked from openvla/openvla. OpenVLA: An open-source vision-language-action model for robotic manipulation.
xLAM: A Family of Large Action Models to Empower AI Agent Systems
A curated list of papers for generalist agents
A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and Autonomous Driving, including papers, codes, and related webs…
A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
Crowdsourcing better titles and thumbnails on YouTube
Awesome things towards foundation agents: papers, repos, blogs, and more.
openvla / openvla
Forked from TRI-ML/prismatic-vlms. OpenVLA: An open-source vision-language-action model for robotic manipulation.
Embedded property graph database built for speed. Vector search and full-text search built in. Implements Cypher.