Stars
An extremely fast Python type checker and language server, written in Rust.
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
On-device AI for PyTorch across mobile, embedded, and edge devices
Inference server benchmarking tool
A curated list of materials on AI efficiency
A Zig language server supporting Zig developers with features like autocomplete and goto definition
Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
Lightpanda: the headless browser designed for AI and automation
vLLM’s reference system for K8s-native, cluster-wide deployment with community-driven performance optimization
Task-driven LLM multi-agent framework that gives you the building blocks to create anything you wish
Incredibly fast JavaScript runtime, bundler, test runner, and package manager – all in one
Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight
👻 Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.
The financial transactions database designed for mission-critical safety and performance.
A practical LLM guide: from the fundamentals to deploying advanced LLM and RAG apps to AWS using LLMOps best practices
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Finite field training and inference for Neural Networks
Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild
Your Next Store: Modern Commerce with Next.js and Stripe as the backend.
A collection of projects and libraries to help implement FHIR-based products and solutions.