- Tsinghua University
- Beijing, China
- https://zhengkw18.github.io/
Stars
Official codebase for "Direct Discriminative Optimization" (ICML 2025 Spotlight)
An official implementation of Flow-GRPO: Training Flow Matching Models via Online RL
MAGI-1: Autoregressive Video Generation at Scale
verl: Volcano Engine Reinforcement Learning for LLMs
An open-source RL system from ByteDance Seed and Tsinghua AIR
[Three Years of Interviews, Five Years of Mock Exams] An interview guide for AIGC algorithm engineers, covering interview and written-test experience and practical knowledge across the AI industry: AIGC, traditional deep learning, autonomous driving, machine learning, computer vision, natural language processing, reinforcement learning, embodied AI, the metaverse, AGI, and more.
This repository includes the official implementation of our paper "Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation"
A FlashAttention tutorial written in Python, Triton, CUDA, and CUTLASS
Official Repo for Open-Reasoner-Zero
Official codebase for "Diffusion Bridge Implicit Models" (ICLR 2025) and "Consistency Diffusion Bridge Models" (NeurIPS 2024)
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
[CVPR 2025 Oral] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis
A generative world for general-purpose robotics & embodied AI learning.
[ICLR 2025] EdgeRunner: Auto-regressive Auto-encoder for Efficient Mesh Generation
[ICLR 2025] Official Implementation of Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis
Official implementation for "Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model" (NeurIPS 2024)
Quantized Attention achieves speedups of 2-3x over FlashAttention and 3-5x over xformers, without losing end-to-end metrics across language, image, and video models.
Official Codebase for "Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control" (NeurIPS 2024)
[NeurIPS 2024] MeshXL: Neural Coordinate Field for Generative 3D Foundation Models, a 3D foundation model for mesh generation
A beautiful, simple, clean, and responsive Jekyll theme for academics
PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838
Fast and memory-efficient exact attention