High-quality Chinese speech synthesis and voice cloning, built on SparkTTS, OrpheusTTS, and other models.
A local and uncensored AI entity.
Run Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other state-of-the-art language models locally with scorching-fast performance. Inferno provides an intuitive CLI and an OpenAI/Ollama-compatible API, putting the inferno of AI innovation directly in your hands.
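Because the API is OpenAI/Ollama-compatible, a standard OpenAI client can point at it. Below is a minimal sketch; the base URL, port, and model name are placeholder assumptions, not Inferno's documented defaults.

```python
# Minimal sketch: querying a local OpenAI-compatible endpoint with the openai client.
# The base URL and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local endpoint
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama-3.3-70b-instruct",       # assumed model identifier
    messages=[{"role": "user", "content": "Summarize what llama.cpp does."}],
)
print(response.choices[0].message.content)
```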
A simple terminal-based LLM interface, with an optional GUI
Dolphin 3.0 🐬: Versatile AI for coding, math, and more
SmolLM2 🤗: A family of lightweight language models that perform diverse tasks on-device
This is a Discord Bot designed to help noobs in my Discord server chat about Dying Light modding.
A simple repository demonstrating LlamaCPP yielding structured output
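For reference, one common way to get structured output from llama-cpp-python is its JSON response format. The sketch below is illustrative and may differ from the mechanism that repository actually demonstrates; the model path is a placeholder.

```python
# Minimal sketch: constrained JSON output via llama-cpp-python's chat completion API.
from llama_cpp import Llama

llm = Llama(model_path="models/model.gguf", n_ctx=2048)  # placeholder path

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You reply only with JSON."},
        {"role": "user", "content": "Give me a city with fields name and country."},
    ],
    response_format={"type": "json_object"},  # constrains output to valid JSON
    temperature=0.2,
)
print(result["choices"][0]["message"]["content"])
```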
Qwen2.5-Coder: A family of LLMs that excels at coding, debugging, and more
Gemma 3: Google's multimodal, multilingual, long-context LLM.
Setting up a local inference environment with llama.cpp and PyTorch, with CUDA support, using Hugging Face transformers and outlines for structured generation.
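As a quick sanity check for such a setup, the sketch below verifies that PyTorch sees the GPU and runs a Hugging Face model on CUDA; the model name is a placeholder.

```python
# Minimal sketch: confirm CUDA is visible and run a Hub model on the GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"

model_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("def fizzbuzz(n):", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```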
This repository demonstrates how to use outlines and llama-cpp-python for structured JSON generation with streaming output, integrating llama.cpp for local model inference and outlines for schema-based text generation.
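A minimal sketch of that pattern follows, assuming the outlines 0.x interface (models.llamacpp, generate.json, and the generator's stream method); the GGUF repo and filename are placeholders.

```python
# Minimal sketch: schema-constrained JSON with outlines over llama-cpp-python,
# streamed chunk by chunk. Repo and filename below are placeholders.
from pydantic import BaseModel
from outlines import models, generate

class City(BaseModel):
    name: str
    country: str

# Loads a GGUF model through llama-cpp-python (outlines 0.x API assumed).
model = models.llamacpp(
    "Qwen/Qwen2.5-0.5B-Instruct-GGUF", "qwen2.5-0.5b-instruct-q4_k_m.gguf"
)

generator = generate.json(model, City)          # constrain output to the schema
for chunk in generator.stream("Name one European capital as JSON."):
    print(chunk, end="", flush=True)            # tokens arrive incrementally
```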
FastAPI semantic search + custom entity detection platform.
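A minimal sketch of what such an endpoint could look like, using sentence-transformers embeddings and cosine similarity; the model name, corpus, and route are illustrative assumptions rather than the platform's actual design.

```python
# Minimal sketch: a FastAPI semantic-search endpoint over an in-memory corpus.
from fastapi import FastAPI
from sentence_transformers import SentenceTransformer, util

app = FastAPI()
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

DOCS = [
    "llama.cpp runs GGUF models on CPU and GPU.",
    "FastAPI is a high-performance Python web framework.",
    "Outlines constrains LLM output to a schema.",
]
DOC_EMB = encoder.encode(DOCS, convert_to_tensor=True)

@app.get("/search")
def search(q: str, top_k: int = 3):
    query_emb = encoder.encode(q, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, DOC_EMB, top_k=top_k)[0]
    return [{"text": DOCS[h["corpus_id"]], "score": float(h["score"])} for h in hits]
```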