Stars
An open source, self-hosted implementation of the Tailscale control server
Spring Cloud Alibaba provides a one-stop solution for distributed application development with Alibaba middleware.
A Go implementation of the Model Context Protocol (MCP), enabling seamless integration between LLM applications and external data sources and tools.
FUSE-based file system backed by Amazon S3
🍒 Cherry Studio is a desktop client that supports multiple LLM providers.
Container runtimes on macOS (and Linux) with minimal setup
A Datacenter Scale Distributed Inference Serving Framework
Fast container image distribution plugin with lazy pulling
Zstandard - Fast real-time compression algorithm
Examples demonstrating available options to program multiple GPUs in a single node or a cluster
A cutting-edge desktop intelligent chat application powered by the Dify API, with enterprise-grade AI conversation capabilities. Notable features include theme customization, knowledge base management, and multi-scenario use. It has now received a major upgrade adding support for OpenAI-format output, which means it can integrate seamlessly with any AI model on the market that follows the OpenAI format, whether from a well-known provider such as Azure OpenAI, offering reliable and scalable cloud solutions, or one with advanced natural language…
rsync in Go! Implements both client and server, which can send or receive files (upload and download, all directions supported)
DeepEP: an efficient expert-parallel communication library
All Dify Plugins listed in Dify Marketplace, plus illustrated plugin examples.
Fast HTTP package for Go. Tuned for high performance. Zero memory allocations in hot paths. Up to 10x faster than net/http
Deploy langgenius/dify, an LLM-based app, on Kubernetes with a Helm chart.
LeaderWorkerSet: An API for deploying a group of pods as a unit of replication
Linux virtual machines, with a focus on running containers
A CLI tool that helps manage training jobs on the SageMaker HyperPod clusters orchestrated by Amazon EKS
SGLang is a fast serving framework for large language models and vision language models.
A programming framework for agentic AI 🤖 PyPi: autogen-agentchat Discord: https://aka.ms/autogen-discord Office Hour: https://aka.ms/autogen-officehour
A cross-platform Markdown note-taking application dedicated to using AI to bridge recording and writing, organizing fragmented knowledge into a readable note.
ARK: Survival Evolved via Kubernetes (and then some)
A high-throughput and memory-efficient inference and serving engine for LLMs
Model Context Protocol tool support for LangChain