Stars
A Game Demo Powered by ChatGPT Agents
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
[ICLR 2025] LAPA: Latent Action Pretraining from Videos
Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
Official Algorithm Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping
NVIDIA Isaac GR00T N1 is the world's first open foundation model for generalized humanoid robot reasoning and skills.
🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025.
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
[RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions
GEFormer is a deep learning-based genome-wide prediction model for genotype-environment interactions, designed to predict maize phenotypes from genotype and environment jointly.
The official project website of "Omni-Dimensional Dynamic Convolution" (ODConv for short, spotlight in ICLR 2022).
XAI: Tree-Based Interpretable Machine Learning Models for GxE Prediction
Automated Machine Learning for Environmental Data-Driven Genome Prediction
Continuous Thought Machines, because thought takes time and reasoning is a process.
Siamese Neural Networks for Regression: Similarity-Based Pairing and Uncertainty Quantification
A high-throughput and memory-efficient inference and serving engine for LLMs
An open-source, code-first Python toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
An open-source library for GPU-accelerated robot learning and sim-to-real transfer.
Body Transformer: Leveraging Robot Embodiment for Policy Learning
Pretraining infrastructure for multi-hybrid AI model architectures
Repository for StripedHyena, a state-of-the-art beyond-Transformer architecture
Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation)
Inference and numerics for multi-hybrid AI model architectures
Genome modeling and design across all domains of life
The entmax mapping and its loss, a family of sparse softmax alternatives (usage sketch at the end of this list).
Quantized Attention achieves speedups of 2-5x over FlashAttention and 3-11x over xformers, without losing end-to-end metrics across language, image, and video models.
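Following up on the entmax entry above: a minimal usage sketch, assuming the `entmax` PyPI package and PyTorch. The function names (`sparsemax`, `entmax15`, `entmax_bisect`) come from that library; the example logits are made up.

```python
# Minimal sketch (not from the repo README): compare softmax with the
# sparse alternatives provided by the entmax package.
# Assumes: pip install entmax torch
import torch
from entmax import sparsemax, entmax15, entmax_bisect

logits = torch.tensor([[2.0, 1.0, 0.1, -1.0]])  # toy scores

print(torch.softmax(logits, dim=-1))             # dense: every entry > 0
print(sparsemax(logits, dim=-1))                 # alpha = 2: exact zeros appear
print(entmax15(logits, dim=-1))                  # alpha = 1.5: between softmax and sparsemax
print(entmax_bisect(logits, alpha=1.3, dim=-1))  # generic alpha, solved by bisection
```

All three mappings return valid probability distributions, but unlike softmax they can assign exactly zero probability to low-scoring entries, which is what makes them useful as drop-in sparse attention or output layers.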