Stars
Open-Sora: Democratizing Efficient Video Production for All
🤗 A more elegant way to subscribe to WeChat Official Accounts, supporting self-hosted deployment and RSS feed generation for Official Accounts (based on WeRead)
Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.
A starter project template for Wechaty that works out-of-the-box
Conversational RPA SDK for Chatbot Makers. Join our Discord: https://discord.gg/7q8NBZbQzt
🌐 Make websites accessible for AI agents. Automate tasks online with ease.
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU)
OpenAI Whisper API-style local server, running on FastAPI
Fully open reproduction of DeepSeek-R1
⚡ TabPFN: Foundation Model for Tabular Data ⚡
Official repo for paper "Structured 3D Latents for Scalable and Versatile 3D Generation" (CVPR'25 Spotlight).
MarS: a Financial Market Simulation Engine Powered by a Generative Foundation Model
A generative world for general-purpose robotics & embodied AI learning.
A geometry-shader-based, globally CUDA-sorted, high-performance 3D Gaussian Splatting rasterizer. Achieves a 5-10x rendering speedup over the vanilla diff-gaussian-rasterization.
[SIGGRAPH Asia 2023 (Technical Communications)] EasyVolcap: Accelerating Neural Volumetric Video Research
LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
A high-performance runtime framework for modern robotics.
🚀 Reverse-engineered API for the KIMI AI long-context LLM (specialty: long-text reading and summarization). Supports high-speed streaming output, agent conversations, web search, the Explore version, the K1 reasoning model, long-document interpretation, image parsing, and multi-turn conversations. Zero-configuration deployment, multi-token support, automatic cleanup of conversation traces. For testing only; for commercial use, please go to the official open platform.
g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains
RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer.
Real-time face swap and one-click video deepfake with only a single image
Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models