- Stanford, CA
- http://keawang.github.io
Stars
- Foundational model for human-like, expressive TTS
- Tracks PRs opened and merged by the top SWE coding agents from OpenAI, GitHub, and others. Updates every 3 hours.
- MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
- The official repo for "Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting", ACL 2025.
- SALMONN family: a suite of advanced multi-modal LLMs
- A web-based 3D CAD application for online model design and editing
- PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
- Ring attention implementation with flash attention
- 🔮 A lightweight comments widget built on GitHub issues
- 🪐 Markdown with superpowers — from ideas to presentations, articles and books.
- Code for Cicero, an AI agent that plays the game of Diplomacy with open-domain natural language negotiation.
- A reverse engineering of Linear's sync engine, endorsed by its co-founder & CTO.
- Video-based AI memory library. Store millions of text chunks in MP4 files with lightning-fast semantic search. No database needed.
- Experimental playground for benchmarking language model (LM) architectures, layers, and tricks on smaller datasets. Designed for flexible experimentation and exploration.
- A simplified PyTorch implementation of the Vision Transformer (ViT)
- FlashMLA: efficient MLA decoding kernels
- Speech To Speech: an effort toward an open-source, modular GPT-4o
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers"
- Official repository for work on micro-budget training of large-scale diffusion models.
- Accelerated first-order parallel associative scan
- Unofficial PyTorch implementation of Titans, SOTA memory for transformers
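Two of the starred repos above are Vision Transformer reimplementations, and both start from the same step: cutting an image into fixed-size patches that become the transformer's token sequence. A minimal NumPy sketch of that patchify step (the function name, shapes, and 16-pixel patch size follow the ViT paper's convention; nothing here is taken from either repo's actual code):

```python
import numpy as np

def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into flattened non-overlapping patches.

    Returns an (N, patch*patch*C) array: the token sequence a ViT
    linearly embeds. Assumes H and W are divisible by `patch`.
    """
    h, w, c = image.shape
    # Reshape into a (rows, patch, cols, patch, C) grid, then move the
    # two patch axes together and flatten each patch into one vector.
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return patches

img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patchify(img)
print(tokens.shape)  # (196, 768): a 14x14 grid of 16*16*3-value patches
```

For a standard 224x224 RGB input this yields 196 tokens of dimension 768, matching the sequence length and patch dimension quoted in the ViT paper.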