- TransformerLab. Previously Tulip.com, Well.ca
- http://transformerlab.ai
- @aliasaria
Stars
🦀⚙️ Sudoless performance monitoring for Apple Silicon processors. CPU / GPU / RAM usage, power consumption & temperature 🌡️
SD.Next: All-in-one for AI generative image
Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets
Public Documentation for Transformer Lab
Source code for Mozilla.ai's Lumigator platform
aliasaria / FastChat
Forked from lm-sys/FastChat. An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
SQL databases in Python, designed for simplicity, compatibility, and robustness.
FastAPI framework, high performance, easy to learn, fast to code, ready for production
DeepSeek Coder: Let the Code Write Itself
Janus-Series: Unified Multimodal Understanding and Generation Models
Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 & Gemma 3 LLMs 2x faster with 70% less memory! 🦥
MLX Omni Server is a local inference server powered by Apple's MLX framework, specifically designed for Apple Silicon (M-series) chips. It implements OpenAI-compatible API endpoints, enabling seaml…
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
An open-source RAG (retrieval-augmented generation) tool for chatting with your documents.
headless terminal - wrap any binary with a terminal interface for easy programmatic access.
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
Enchanted is an iOS and macOS app for chatting with private, self-hosted language models such as Llama2, Mistral, or Vicuna using Ollama.
CLI tool to quantize GGUF, GPTQ, AWQ, HQQ, and EXL2 models
Accelerate and optimize performance with streamlined training and serving options in JAX.