AI Runner

AI Runner Screenshot

Demo video: openvoice_realtime_conversation.mp4

βœ‰οΈ Get notified when the packaged version releases


🐞 Report Bug

✨ Request Feature

πŸ›‘οΈ Report Vulnerability

πŸ›‘οΈ Wiki

✨ Key Features
πŸ—£οΈ Real-time conversations
- Three speech engines: espeak, SpeechT5, OpenVoice
- Auto language detection (OpenVoice)
- Real-time voice-chat with LLMs
🤖 Customizable AI Agents
- Custom agent names, moods, personalities
- Retrieval-Augmented Generation (RAG)
- Create AI personalities and moods
📚 Enhanced Knowledge Retrieval
- RAG for documents/websites
- Use local data to enrich chat
πŸ–ΌοΈ Image Generation & Manipulation
- Text-to-Image (Stable Diffusion 1.5, SDXL, Turbo)
- Drawing tools & ControlNet
- LoRA & Embeddings
- Inpainting, outpainting, filters
🌍 Multi-lingual Capabilities
- Partial multi-lingual TTS/STT/interface
- English & Japanese GUI
🔒 Privacy and Security
- Runs locally, no external API (default)
- Customizable LLM guardrails & image safety
- Disables HuggingFace telemetry
- Restricts network access
⚡ Performance & Utility
- Fast generation (~2s on RTX 2080s)
- Docker-based setup & GPU acceleration
- Theming (Light/Dark/System)
- NSFW toggles
- Extension API
- Python library & API support

🌍 Language Support

| Language | TTS | LLM | STT | GUI |
|----------|-----|-----|-----|-----|
| English  | ✅  | ✅  | ✅  | ✅  |
| Japanese | ✅  | ✅  | ❌  | ✅  |
| Spanish  | ✅  | ✅  | ❌  | ❌  |
| French   | ✅  | ✅  | ❌  | ❌  |
| Chinese  | ✅  | ✅  | ❌  | ❌  |
| Korean   | ✅  | ✅  | ❌  | ❌  |

💾 Installation Quick Start

βš™οΈ System Requirements

| Specification | Minimum | Recommended |
|---------------|---------|-------------|
| OS | Ubuntu 22.04, Windows 10 | Ubuntu 22.04 (Wayland) |
| CPU | Ryzen 2700K or Intel Core i7-8700K | Ryzen 5800X or Intel Core i7-11700K |
| Memory | 16 GB RAM | 32 GB RAM |
| GPU | NVIDIA RTX 3060 or better | NVIDIA RTX 4090 or better |
| Network | Broadband (used to download models) | Broadband (used to download models) |
| Storage | 22 GB (with models), 6 GB (without models) | 100 GB or higher |

🔧 Installation Steps

  1. Install system requirements
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl git nvidia-cuda-toolkit pipewire libportaudio2 libxcb-cursor0 gnupg gpg-agent pinentry-curses espeak xclip cmake qt6-qpa-plugins qt6-wayland qt6-gtk-platformtheme mecab libmecab-dev mecab-ipadic-utf8 libxslt-dev
    sudo apt install espeak
    sudo apt install espeak-ng-espeak
  2. Create airunner directory
    sudo mkdir ~/.local/share/airunner
    sudo chown $USER:$USER ~/.local/share/airunner
  3. Install AI Runner (Python 3.13+ is required; pyenv and venv are recommended, see the wiki and the example after these steps for more info)
    pip install "typing-extensions==4.13.2"
    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
    pip install airunner[all_dev]
  4. Run AI Runner
    airunner
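
If you follow the pyenv/venv recommendation in step 3, a minimal virtual environment setup might look like this before running the pip commands (the interpreter and paths are illustrative, not prescribed by AI Runner):

    python3 -m venv ~/.venvs/airunner        # use a Python 3.13+ interpreter, e.g. one installed via pyenv
    source ~/.venvs/airunner/bin/activate
    pip install --upgrade pip
    # then run the pip install commands from step 3 inside this environment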

For more options, including Docker, see the Installation Wiki.


Basic Usage

  • Run AI Runner: airunner
  • Run the downloader: airunner-setup
  • Build templates: airunner-build-ui

🤖 Models

These are the sizes of the optional models that power AI Runner.

| Modality | Model | Size |
|----------|-------|------|
| Text-to-Speech | OpenVoice (Voice) | 4.0 GB |
| Text-to-Speech | Speech T5 (Voice) | 654.4 MB |
| Speech-to-Text | Whisper Tiny | 155.4 MB |
| Text Generation | Ministral 8b (default) | 4.0 GB |
| Text Generation | Ollama (various models) | 1.5 GB - 20 GB |
| Text Generation | OpenRouter (various models) | 1.5 GB - 20 GB |
| Text Generation | Huggingface (various models) | 1.5 GB - 20 GB |
| Text Generation | Ministral instruct 8b (4bit) | 5.8 GB |
| Image Generation | Controlnet (SD 1.5) | 10.6 GB |
| Image Generation | Controlnet (SDXL) | 320.2 MB |
| Image Generation | Safety Checker + Feature Extractor | 3.2 GB |
| Image Generation | SD 1.5 | 1.6 MB |
| Image Generation | SDXL 1.0 | 6.45 MB |

Stack

AI Runner uses the following stack:

  • SQLite: For local data storage
  • Alembic: For database migrations
  • SQLAlchemy: For ORM
  • Pydantic: For data validation
  • http.server: Basic local server for static files
  • PySide6: For the GUI
  • A variety of other libraries for TTS, STT, LLMs, and image generation
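
As a rough illustration of how these pieces typically fit together (a generic sketch only, not AI Runner's actual schema or module layout): SQLAlchemy models are persisted to the SQLite file, Alembic manages migrations for those tables, and Pydantic validates data before it reaches the ORM.

    # Generic sketch only: table and field names here are hypothetical, not AI Runner's schema.
    from pydantic import BaseModel
    from sqlalchemy import String, create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session

    class Base(DeclarativeBase):
        pass

    class Conversation(Base):            # hypothetical ORM model
        __tablename__ = "conversations"
        id: Mapped[int] = mapped_column(primary_key=True)
        title: Mapped[str] = mapped_column(String(255))

    class ConversationIn(BaseModel):     # Pydantic validates input before it touches the ORM
        title: str

    engine = create_engine("sqlite:///example.db")   # SQLite file; Alembic would manage its migrations
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Conversation(title=ConversationIn(title="Hello").title))
        session.commit()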

✨ LLM Vendors

  • Default local model: Ministral 8b instruct 4bit
  • Ollama: A variety of local models to choose from (requires the Ollama CLI; see the example after this list)
  • OpenRouter: Remote server-side LLMs (requires an API key)
  • Huggingface: Coming soon
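
To use the Ollama vendor, the Ollama CLI must be installed and have at least one model pulled locally. The model name below is only an example; substitute whichever local model you intend to use:

    ollama pull llama3.1
    ollama list   # confirm the model is available locally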

🎨 Art Models

By default, AI Runner installs essential TTS/STT and minimal LLM components, but AI art models must be supplied by the user.

Organize them under your local AI Runner data directory:

~/.local/share/airunner
├── art
│   └── models
│       ├── SD 1.5
│       │   ├── controlnet
│       │   ├── embeddings
│       │   ├── inpaint
│       │   ├── lora
│       │   └── txt2img
│       ├── Flux (not supported yet)
│       ├── SDXL 1.0
│       │   ├── controlnet
│       │   ├── embeddings
│       │   ├── inpaint
│       │   ├── lora
│       │   └── txt2img
│       └── SDXL Turbo
│           ├── controlnet
│           ├── embeddings
│           ├── inpaint
│           ├── lora
│           └── txt2img

Optional third-party services

  • OpenStreetMap: Map API
  • OpenMeteo: Weather API

Chatbot Mood and Conversation Summary System

  • The chatbot's mood and conversation summary system is always enabled by default. The bot's mood and emoji are shown with each bot message.
  • When the LLM is updating the bot's mood or summarizing the conversation, a loading spinner and status message are shown in the chat prompt widget. The indicator disappears as soon as a new message arrives.
  • This system is automatic and requires no user configuration.
  • For more details, see the LLM Chat Prompt Widget README.
  • The mood and summary engines are now fully integrated into the agent runtime. When the agent updates mood or summarizes the conversation, it emits a signal to the UI with a customizable loading message. The chat prompt widget displays this message as a loading indicator.
  • See src/airunner/handlers/llm/agent/agents/base.py for integration details and src/airunner/api/chatbot_services.py for the API function.

πŸ” Aggregated Search Tool

AI Runner includes an Aggregated Search Tool for querying multiple online services from a unified interface. This tool is available as a NodeGraphQt node, an LLM agent tool, and as a Python API.

Supported Search Services:

  • DuckDuckGo (no API key required)
  • Wikipedia (no API key required)
  • arXiv (no API key required)
  • Google Custom Search (requires GOOGLE_API_KEY and GOOGLE_CSE_ID)
  • Bing Web Search (requires BING_SUBSCRIPTION_KEY)
  • NewsAPI (requires NEWSAPI_KEY)
  • StackExchange (optional STACKEXCHANGE_KEY for higher quota)
  • GitHub Repositories (optional GITHUB_TOKEN for higher rate limits)
  • OpenLibrary (no API key required)

API Key Setup:

  • Set the required API keys as environment variables before running AI Runner. Only services with valid keys will be queried.
  • Example:
    export GOOGLE_API_KEY=your_google_api_key
    export GOOGLE_CSE_ID=your_google_cse_id
    export BING_SUBSCRIPTION_KEY=your_bing_key
    export NEWSAPI_KEY=your_newsapi_key
    export STACKEXCHANGE_KEY=your_stackexchange_key
    export GITHUB_TOKEN=your_github_token

Usage:

  • Use the Aggregated Search node in NodeGraphQt for visual workflows.
  • Call the tool from LLM agents or Python code (the call is asynchronous; see the runnable sketch after this list):
    from airunner.tools.search_tool import AggregatedSearchTool
    results = await AggregatedSearchTool.aggregated_search("python", category="web")
  • See src/airunner/tools/README.md for more details.
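
Because aggregated_search is a coroutine, it has to be awaited inside an event loop. A minimal standalone sketch, reusing the import and call shown above (the structure of the returned results is not documented here, so it is simply printed):

    import asyncio

    from airunner.tools.search_tool import AggregatedSearchTool

    async def main():
        # Only services with valid API keys configured will actually be queried.
        results = await AggregatedSearchTool.aggregated_search("python", category="web")
        print(results)

    if __name__ == "__main__":
        asyncio.run(main())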

Note:

  • DuckDuckGo, Wikipedia, arXiv, and OpenLibrary do not require API keys and can be used out-of-the-box.
  • For best results and full service coverage, configure all relevant API keys.

Contributing

We welcome pull requests for new features, bug fixes, or documentation improvements. You can also build and share extensions to expand AI Runner's functionality. For details, see the Extensions Wiki.

Take a look at the Contributing document and the Development wiki page for detailed instructions.

🧪 Testing & Test Organization

AI Runner uses pytest for all automated testing. Test coverage is a priority, especially for utility modules.

Test Directory Structure

  • Headless-safe tests:
    • Located in src/airunner/utils/tests/
    • Can be run in any environment (including CI, headless servers, and developer machines)
    • Run with:
      pytest src/airunner/utils/tests/
  • Display-required (Qt/Xvfb) tests:
    • Located in src/airunner/utils/tests/xvfb_required/
    • Require a real Qt display environment (cannot be run headlessly or with pytest-qt)
    • Typical for low-level Qt worker/signal/slot logic
    • Run with:
      xvfb-run -a pytest src/airunner/utils/tests/xvfb_required/
      # Or for a single file:
      xvfb-run -a pytest src/airunner/utils/tests/xvfb_required/test_background_worker.py
    • See the README in xvfb_required/ for details.

CI/CD

  • By default, only headless-safe tests are run in CI.
  • Display-required tests are intended for manual or special-case runs (e.g., when working on Qt threading or background worker code).
  • (Optional) You may automate this split in CI by adding a separate job/step for xvfb tests.

General Testing Guidelines

  • All new utility code must be accompanied by tests.
  • Use pytest, pytest-qt (for GUI), and unittest.mock for mocking dependencies.
  • For more details on writing and organizing tests, see the project coding guidelines and the src/airunner/utils/tests/ folder.

Development & Testing

  • Follow the copilot-instructions.md for all development, testing, and contribution guidelines.
  • Always use the airunner command in the terminal to run the application.
  • Always run tests in the terminal (not in the workspace test runner).
  • Use pytest and pytest-cov for running tests and checking coverage.
  • UI changes must be made in .ui files and rebuilt with airunner-build-ui.

Documentation

  • See the Wiki for architecture, usage, and advanced topics.

Module Documentation

For additional details, see the Wiki.

About

Offline inference engine for art, real-time voice conversations, LLM powered chatbots and automated workflows
