Project NOVA (Networked Orchestration of Virtual Agents) is a comprehensive, self-hosted AI assistant ecosystem that leverages n8n workflows, LLMs, and 25+ specialized Model Context Protocol (MCP) servers to create a powerful, extensible agent system capable of controlling both digital environments and physical devices in your home.
- Centralized AI Assistant Hub: Route requests to 25+ specialized agents through an intelligent router
- Conversation-Aware Intelligence: Maintains context across interactions with advanced conversation history processing
- OpenWebUI Integration: Modern chat interface with conversation persistence and enhanced user experience
- Domain-Specific Capabilities: Knowledge management, development tools, media production, home automation, and more
- Self-Hosted & Privacy-Focused: Run everything locally with open-source components
- Containerized Architecture: Easy deployment with the provided Dockerfiles and docker-compose.yml files
- SSE Transport Integration: Enhanced MCP server communication using supergateway
- Native n8n Integration: Uses n8n's built-in MCP Client Tool node - no community nodes required
- Extensible Framework: Add new specialized agents with provided templates and comprehensive documentation
Project NOVA uses a sophisticated hub-and-spoke architecture with conversation intelligence:
- OpenWebUI Frontend: Modern chat interface with conversation history and session management
- n8n Workflow Engine: Processes requests and orchestrates agent communication
- Intelligent Router Agent: Analyzes requests and conversation context to route to appropriate specialized agents
- 25+ Specialized Agents: Domain-specific MCP-powered agents for various tasks
- MCP Server Ecosystem: Containerized services providing tool capabilities to agents
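The hub-and-spoke idea can be pictured with a deliberately simplified sketch. Note that NOVA's real router is an LLM agent inside n8n that reasons over the request and conversation context; the keyword map below only illustrates the routing concept, and the agent names and keywords are made up for the example:

```python
# Toy illustration of hub-and-spoke routing. Project NOVA's actual router is
# an LLM agent in n8n, not a keyword matcher -- this only mirrors the idea.
AGENT_KEYWORDS = {
    "home-assistant-agent": ["light", "thermostat", "playlist"],
    "gitea-agent": ["repo", "pull request", "commit"],
    "youtube-agent": ["youtube", "transcript"],
}

def route(request: str) -> str:
    """Return the first agent whose keywords match, else a generic fallback."""
    text = request.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return agent
    return "general-assistant"

print(route("Turn off the living room lights"))  # home-assistant-agent
print(route("Any open pull requests?"))          # gitea-agent
```

The real router also weighs conversation history, so a vague follow-up like "any with nova in the name?" can stay with the agent that handled the previous turn.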
Project NOVA includes a comprehensive reference guide to help you understand and use all available agents effectively:
- Agent Index: Complete overview of all agents with capabilities and links
- Router Agent Quick Reference: How the routing system works and best practices
- Sub-Agents by Category:
- Knowledge Management Agents: Documentation, notes, and information management tools
- Development & Repository Agents: Code, repository, and system management tools
- Media & Creative Agents: Audio, video, and creative production tools
- AI & Automation Agents: Workflow, browser automation, and AI extension tools
The reference guide provides detailed information about each agent's capabilities, tools, example queries, and troubleshooting tips.
Project NOVA includes over 25 specialized agents across multiple domains:
- TriliumNext Notes Agent - Controls TriliumNext Notes knowledge base application
- Blinko Agent - Manages notes in the Blinko note-taking service (uses custom MCP implementation)
- BookStack Agent - Interfaces with BookStack documentation and wiki platform
- Memos Agent - Works with lightweight Memos note-taking application
- Outline Agent - Manages Outline knowledge base for team documentation
- SiYuan Agent - Controls SiYuan Note personal knowledge management system
- Karakeep Agent - Organizes digital content collections and bookmarks
- Paperless Agent - Manages Paperless-NGX document management system
- OnlyOffice Agent - Manages ONLYOFFICE DocSpace for document collaboration
- CLI Server Agent - Provides secure command-line execution capabilities
- Forgejo Agent - Manages Forgejo Git repositories and issues
- Gitea Agent - Controls Gitea Git service for repository management
- System Search Agent - Finds files and folders across file systems
- Ableton Copilot - Assists with music production in Ableton Live
- OBS Agent - Controls OBS Studio for streaming and recording
- Reaper Agent - Controls REAPER digital audio workstation
- Reaper QA Agent - Analyzes and answers questions about REAPER projects
- YouTube Agent - Transcribes YouTube videos for summarization and content analysis
- Flowise Agent - Connects to Flowise chatflows and AI assistants
- Langfuse Agent - Accesses managed prompts from Langfuse
- Puppeteer Agent - Provides browser automation for web scraping and testing
- RAGFlow Agent - Retrieval-augmented generation with source citations
- Fetch Agent - Retrieves web content from URLs with advanced options
- Home Assistant Agent - Controls smart home devices through Home Assistant
- Prometheus Agent - Queries and analyzes metrics from Prometheus monitoring
- Home Automation: "Turn off the living room lights and start playing my evening playlist"
- Knowledge Management: "Find my notes about the project meeting from last Tuesday"
- Creative Production: "Help me set up a new Ableton Live project with a drum rack"
- Development Assistance: "Check my Gitea repositories for any open pull requests"
- System Management: "Monitor the CPU usage on my server for the last 24 hours"
- Content Analysis: "Get the transcript from this YouTube video and summarize the key points"
Before setting up Project NOVA, ensure you have:
- OpenWebUI Instance: For the modern chat interface with conversation management
- Note: For text-to-speech features (voice input), OpenWebUI requires SSL/HTTPS (even a self-signed certificate works)
- n8n Instance: A running n8n installation (v1.88.0 or later) with chat trigger capabilities
- Critical: Version 1.88.0+ is required because it introduced the native MCP Client Tool node
- No Community Nodes Required: All agents use n8n's native MCP Client Tool node
- Docker Host: Environment for running containerized MCP servers
- LLM Access: Either:
- Cloud API access (OpenAI, Claude, Gemini, etc.)
- Local Ollama instance with models that support tool use/function calling
- Applications to Control:
- The actual applications you want NOVA to control (most agents are designed for self-hosted applications)
- Examples: Home Assistant, Gitea, Reaper, OBS Studio, TriliumNext, etc.
- Each agent requires its corresponding application to be accessible
```shell
git clone https://github.com/dujonwalker/project-nova.git
cd project-nova
```
Option A: OpenWebUI + n8n (Recommended)
This setup provides the best user experience with conversation history, session management, and a modern interface.
1. Set up the OpenWebUI inlet filter:
   - In your OpenWebUI instance, navigate to Workspace → Functions
   - Click "Create New Function"
   - Copy and paste the code from `openwebui-function/n8n_inlet_filter.py` into the function editor
   - Important: Update the `n8n_url` configuration variable at the top of the filter with your complete n8n webhook URL (including server IP/hostname and webhook UUID)
   - Save the filter
2. Create a new model using the filter:
   - Navigate to Workspace → Models
   - Click "Create New Model"
   - Configure your base model (e.g., your preferred LLM)
   - In the Filters section, select the n8n inlet filter you just created
   - Save the model
   - The filter will now automatically forward messages to n8n and manage conversation history
3. Configure n8n workflows:
   - Import the router agent workflow (`n8n-workflows/router_agent.json`)
   - Set up a webhook trigger in the router agent (configure its URL in the OpenWebUI filter)
   - Import specialized agent workflows as needed
Option B: n8n Only Setup
1. Import workflows in your n8n instance:
   - Bulk import via the n8n CLI (easiest):
     ```shell
     # Navigate to your n8n installation directory
     n8n import:workflow --input=path/to/project-nova/n8n-workflows/*
     ```
   - Or import via the n8n web interface:
     - Navigate to your n8n dashboard
     - Click on "Workflows" in the sidebar
     - Use the "Import from file" option for each workflow
     - Start with the router agent first, then the specialized agents
2. Configure after import:
   - Update the OpenWebUI filter: Copy the webhook URL from your imported router agent and paste it into the OpenWebUI inlet filter configuration
   - Configure API keys: Add your actual API keys for the services you want to use (Blinko, Home Assistant, etc.)
   - Update server IPs: Replace the `YOUR_SERVER_IP` placeholders with your actual server addresses
- Use the Dockerfiles and docker-compose.yml files provided in `mcp-server-dockerfiles` to build and run your servers, or follow the included instructions to set up MCP servers directly on your host
- Configure each MCP server for the agents you plan to use
- LLM Provider: Update credentials in each workflow for your LLM provider (OpenAI, Claude, Gemini, etc.)
- Application APIs: Give NOVA the keys to your apps (Paperless-NGX API key, Gitea API key, Home Assistant token, etc.)
- MCP Server Connections: Configure SSE endpoints or server connections in each workflow
With OpenWebUI Setup:
- Navigate to your OpenWebUI instance
- Start a conversation - messages are automatically processed through n8n
- The inlet filter handles conversation history and routing automatically
With n8n Only Setup:
- Navigate to the router agent workflow in n8n
- Disconnect the webhook trigger node and connect the chat trigger node instead (the chat trigger is already present but disconnected by default)
- Use the chat trigger node to start a conversation with NOVA
- The router will analyze your requests and direct them to appropriate specialized agents
The OpenWebUI integration uses an inlet filter (`openwebui-function/n8n_inlet_filter.py`) that:
- Processes all incoming messages through n8n before they reach the LLM
- Manages conversation history by extracting and forwarding previous messages
- Prevents duplicate calls with intelligent deduplication (30-second window)
- Handles session management with unique chat IDs
- Returns n8n responses directly to the user through system message injection
Key configuration options in the filter:
- `n8n_url`: Your n8n webhook endpoint URL
- `n8n_bearer_token`: Authentication token for n8n
- `timeout`: Request timeout for n8n calls (default: 15 seconds)
- `enabled`: Toggle to enable/disable n8n processing
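To make the deduplication behavior concrete, here is a minimal sketch of the dedup check the filter performs before forwarding a message to n8n. The class and method names are assumptions for illustration; the real implementation (including the actual HTTP call to the webhook) lives in `openwebui-function/n8n_inlet_filter.py`:

```python
import time

# Illustrative sketch of the filter's 30-second dedup window. Names are
# hypothetical -- see openwebui-function/n8n_inlet_filter.py for the real code.
class N8nInletFilterSketch:
    def __init__(self, n8n_url: str, dedup_window: float = 30.0):
        self.n8n_url = n8n_url              # your n8n webhook endpoint
        self.dedup_window = dedup_window    # seconds to suppress repeats
        self._seen = {}                     # (chat_id, message) -> timestamp

    def is_duplicate(self, chat_id: str, message: str) -> bool:
        """True if this exact message arrived within the dedup window."""
        now = time.monotonic()
        key = (chat_id, message)
        last = self._seen.get(key)
        self._seen[key] = now
        return last is not None and (now - last) < self.dedup_window

f = N8nInletFilterSketch("http://YOUR_SERVER_IP:5678/webhook/...")
print(f.is_duplicate("chat-1", "hello"))  # False: first time seen
print(f.is_duplicate("chat-1", "hello"))  # True: repeat inside the window
```

Only messages that pass this check are forwarded to n8n, which is what prevents OpenWebUI retries from triggering the router twice.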
Project NOVA now includes sophisticated conversation context handling:
Router Agent Features:
- Processes conversation history to understand context
- Provides intelligent follow-up routing (e.g., "any repos with X in the name?" continues with the same repository agent)
- Maintains session continuity across interactions
Sub-Agent Features:
- All specialized agents receive conversation history when available
- Agents can reference previous interactions for better context
- Consistent conversation experience across agent handoffs
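As a rough illustration of why history forwarding enables follow-up routing, consider the kind of payload the filter might pass along. The field names below are an assumption, not the actual schema (check `n8n_inlet_filter.py` for the real shape):

```python
# Hypothetical shape of a forwarded conversation payload -- field names are
# illustrative only, not the schema used by the actual inlet filter.
payload = {
    "chat_id": "open-webui-session-42",             # stable per-conversation id
    "message": "any repos with nova in the name?",  # the new user message
    "history": [                                    # prior turns, oldest first
        {"role": "user", "content": "List my Gitea repositories"},
        {"role": "assistant", "content": "You have 3 repositories: ..."},
    ],
}

# With the history available, the router can see the previous turn was about
# repositories and keep the follow-up with the same repository agent.
follow_up_about_repos = any(
    "repositor" in turn["content"].lower() for turn in payload["history"]
)
print(follow_up_about_repos)  # True
```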
- Use the `prompt-templates/generate-agent.md` template to create a new agent system prompt
- Create a Dockerfile using `prompt-templates/generate-container.md` as a guide
- Add an n8n workflow configuration using the existing workflows as templates
- Update the router agent to include your new specialized agent
- Add documentation to the reference guide
```
project-nova/
├── README.md                            # This comprehensive README file
├── agents/                              # System prompts for all agents (reference)
├── mcp-server-dockerfiles/              # Dockerfiles and docker-compose.yml files for MCP servers
│   └── [server-name]-mcp/               # Each MCP server has its own directory
│       ├── Dockerfile                   # Container definition
│       ├── docker-compose.yml           # Deployment configuration
│       ├── README.md                    # Server-specific setup instructions
│       └── start.sh                     # Entry point script (when applicable)
├── n8n-workflows/                       # n8n workflow export files (.json) - THE ACTUAL WORKING SYSTEM
├── openwebui-function/                  # OpenWebUI integration components
│   └── n8n_inlet_filter.py              # OpenWebUI inlet filter for n8n integration
├── prompt-templates/                    # Templates for creating new components
│   ├── agent-input-examples/            # Example inputs for each agent (.json)
│   ├── generate-agent.md                # Template for creating new agents
│   ├── generate-container.md            # Guide for containerizing MCP servers
│   └── generate-routing-agent.md        # Router agent generation template
└── reference-guide/                     # Comprehensive documentation
    ├── agents-index.md                  # Complete list of all agents
    ├── router-agent-quick-reference.md  # Router usage guide
    └── sub-agents-by-category/          # Detailed agent documentation by domain
        ├── knowledge-agents.md
        ├── devops-agents.md
        ├── media-agents.md
        └── automation-agents.md
```
- Recommended minimum specs for basic setup: 4GB RAM, 2 CPU cores
- For OpenWebUI + full agent ecosystem: 16GB RAM, 8 CPU cores recommended
- Consider using local LLM inference (Ollama) for reduced API costs and latency
- MCP server resource usage varies by agent - monitor container usage and scale as needed
- Enhanced OpenWebUI integration features:
- File upload support through agents
- Additional specialized agents for more domains
- n8n for the incredible workflow automation platform
- OpenWebUI for the excellent frontend interface with conversation management
- Model Context Protocol for standardizing AI tool interaction
- Supergateway for enabling conversion from STDIO to SSE transport
- nerding-io for their pioneering work on n8n-nodes-mcp that inspired this project
- All the open-source projects that make the specialized agents possible
- Project NOVA: Detailed Write-up - Comprehensive blog post with additional insights and implementation details
- For questions or support, please open an issue in this repository
If you're looking to expand your Project NOVA with additional agents, here are some valuable resources for finding MCP servers:
- My Curated List of Working MCP Servers - A regularly updated collection of GitHub-hosted MCP servers I've personally tested and verified working with Project NOVA. Note that some servers hosted on other platforms (like gitea-mcp) won't appear in this GitHub-specific list.
- MCP.so - A comprehensive directory of MCP libraries
- MCPServers.org - Collection of available MCP servers
- MCP Servers GitHub - Official MCP servers repository
- MCP Directory - Searchable directory of MCP servers
- Smithery AI - MCP server development platform
- Glama.ai MCP Servers - Curated list of MCP servers
- MCPHub.ai - Hub for MCP server discovery
- Awesome MCP Servers - Curated list of awesome MCP servers
These resources can help you discover new MCP servers to integrate into your own Project NOVA implementation.