Stars
Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less VRAM.
Mixture-of-Agents Framework Implementation at Distributed Edge Devices with Theoretical Guarantee of Finite Average Latency
tmwilliamlin168 / gitignore
Forked from github/gitignore
A collection of useful .gitignore templates